Compare commits


1 commit

Author: Cameron Smith
SHA1: 93b14e6c19
Message: patch for config update problem (spack #26169)
Date: 2022-10-18 13:27:59 -04:00
10397 changed files with 161631 additions and 311405 deletions


@@ -1,4 +0,0 @@
{
"image": "ghcr.io/spack/ubuntu20.04-runner-amd64-gcc-11.4:2023.08.01",
"postCreateCommand": "./.devcontainer/postCreateCommand.sh"
}


@@ -1,20 +0,0 @@
#!/bin/bash
# Load spack environment at terminal startup
cat <<EOF >> /root/.bashrc
. /workspaces/spack/share/spack/setup-env.sh
EOF
# Load spack environment in this script
. /workspaces/spack/share/spack/setup-env.sh
# Ensure generic targets for maximum matching with buildcaches
spack config --scope site add "packages:all:require:[target=x86_64_v3]"
spack config --scope site add "concretizer:targets:granularity:generic"
# Find compiler and install gcc-runtime
spack compiler find --scope site
# Setup buildcaches
spack mirror add --scope site develop https://binaries.spack.io/develop
spack buildcache keys --install --trust


@@ -1,5 +1,3 @@
 # .git-blame-ignore-revs
-# Formatted entire codebase with black 23
-603569e321013a1a63a637813c94c2834d0a0023
-# Formatted entire codebase with black 22
+# Formatted entire codebase with black
 f52f6e99dbf1131886a80112b8c79dfc414afb7c

.gitattributes vendored

@@ -1,4 +1,3 @@
 *.py diff=python
 *.lp linguist-language=Prolog
 lib/spack/external/* linguist-vendored
-*.bat text eol=crlf


@@ -9,7 +9,7 @@ body:
Thanks for taking the time to report this build failure. To proceed with the report please:
1. Title the issue `Installation issue: <name-of-the-package>`.
2. Provide the information required below.
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
- type: textarea
id: reproduce
@@ -29,9 +29,7 @@ body:
description: |
Please post the error message from spack inside the `<details>` tag below:
value: |
-<details><summary>Error message</summary>
-<pre>
+<details><summary>Error message</summary><pre>
...
</pre></details>
validations:
@@ -55,7 +53,7 @@ body:
Please upload the following files:
* **`spack-build-out.txt`**
* **`spack-build-env.txt`**
They should be present in the stage directory of the failing build. Also upload any `config.log` or similar file if one exists.
- type: markdown
attributes:


@@ -1,4 +1,4 @@
name: "\U0001F38A Feature request"
description: Suggest adding a feature that is not yet in Spack
labels: [feature]
body:
@@ -29,11 +29,13 @@ body:
attributes:
label: General information
options:
+- label: I have run `spack --version` and reported the version of Spack
+required: true
- label: I have searched the issues of this repo and believe this is not a duplicate
required: true
- type: markdown
attributes:
value: |
If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on [our Slack](https://slack.spack.io/) first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack!


@@ -21,9 +21,7 @@ body:
description: |
Please post the error message from spack inside the `<details>` tag below:
value: |
-<details><summary>Error message</summary>
-<pre>
+<details><summary>Error message</summary><pre>
...
</pre></details>
validations:


@@ -5,13 +5,3 @@ updates:
directory: "/"
schedule:
interval: "daily"
# Requirements to build documentation
- package-ecosystem: "pip"
directory: "/lib/spack/docs"
schedule:
interval: "daily"
# Requirements to run style checks
- package-ecosystem: "pip"
directory: "/.github/workflows/style"
schedule:
interval: "daily"


@@ -1,6 +0,0 @@
<!--
Remember that `spackbot` can help with your PR in multiple ways:
- `@spackbot help` shows all the commands that are currently available
- `@spackbot fix style` tries to push a commit to fix style issues in this PR
- `@spackbot re-run pipeline` runs the pipelines again, if you have write access to the repository
-->


@@ -17,53 +17,28 @@ concurrency:
jobs:
# Run audits on all the packages in the built-in repository
package-audits:
-runs-on: ${{ matrix.system.os }}
+runs-on: ubuntu-latest
-strategy:
-matrix:
-system:
-- { os: windows-latest, shell: 'powershell Invoke-Expression -Command "./share/spack/qa/windows_test_setup.ps1"; {0}' }
-- { os: ubuntu-latest, shell: bash }
-- { os: macos-latest, shell: bash }
-defaults:
-run:
-shell: ${{ matrix.system.shell }}
steps:
-- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
+- uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # @v2
-- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
+- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # @v2
with:
python-version: ${{inputs.python_version}}
- name: Install Python packages
run: |
-pip install --upgrade pip setuptools pytest coverage[toml]
+pip install --upgrade pip six setuptools pytest codecov 'coverage[toml]<=6.2'
-- name: Setup for Windows
-if: runner.os == 'Windows'
-run: |
-python -m pip install --upgrade pywin32
- name: Package audits (with coverage)
-if: ${{ inputs.with_coverage == 'true' && runner.os != 'Windows' }}
+if: ${{ inputs.with_coverage == 'true' }}
run: |
. share/spack/setup-env.sh
coverage run $(which spack) audit packages
-coverage run $(which spack) -d audit externals
coverage combine
coverage xml
- name: Package audits (without coverage)
-if: ${{ inputs.with_coverage == 'false' && runner.os != 'Windows' }}
+if: ${{ inputs.with_coverage == 'false' }}
run: |
. share/spack/setup-env.sh
-spack -d audit packages
+$(which spack) audit packages
-spack -d audit externals
-- name: Package audits (without coverage)
-if: ${{ runner.os == 'Windows' }}
-run: |
-. share/spack/setup-env.sh
-spack -d audit packages
-./share/spack/qa/validate_last_exit.ps1
-spack -d audit externals
-./share/spack/qa/validate_last_exit.ps1
-- uses: codecov/codecov-action@125fc84a9a348dbcf27191600683ec096ec9021c
+- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70 # @v2.1.0
if: ${{ inputs.with_coverage == 'true' }}
with:
-flags: unittests,audits
+flags: unittests,linux,audits
-token: ${{ secrets.CODECOV_TOKEN }}
-verbose: true


@@ -1,8 +1,7 @@
#!/bin/bash
-set -e
+set -ex
source share/spack/setup-env.sh
-$PYTHON bin/spack bootstrap disable github-actions-v0.4
-$PYTHON bin/spack bootstrap disable spack-install
-$PYTHON bin/spack $SPACK_FLAGS solve zlib
+$PYTHON bin/spack bootstrap untrust spack-install
+$PYTHON bin/spack -d solve zlib
tree $BOOTSTRAP/store
exit 0


@@ -13,22 +13,116 @@ concurrency:
cancel-in-progress: true
jobs:
-distros-clingo-sources:
+fedora-clingo-sources:
runs-on: ubuntu-latest
-container: ${{ matrix.image }}
+container: "fedora:latest"
-strategy:
-matrix:
-image: ["fedora:latest", "opensuse/leap:latest"]
steps:
-- name: Setup Fedora
-if: ${{ matrix.image == 'fedora:latest' }}
+- name: Install dependencies
run: |
dnf install -y \
-bzip2 curl file gcc-c++ gcc gcc-gfortran git gzip \
+bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch unzip which xz python3 python3-devel tree \
cmake bison bison-devel libstdc++-static
-- name: Setup OpenSUSE
-if: ${{ matrix.image == 'opensuse/leap:latest' }}
+- name: Checkout
+uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
with:
fetch-depth: 0
- name: Setup non-root user
run: |
# See [1] below
git config --global --add safe.directory /__w/spack/spack
useradd spack-test && mkdir -p ~spack-test
chown -R spack-test . ~spack-test
- name: Setup repo
shell: runuser -u spack-test -- bash {0}
run: |
git --version
. .github/workflows/setup_git.sh
- name: Bootstrap clingo
shell: runuser -u spack-test -- bash {0}
run: |
source share/spack/setup-env.sh
spack bootstrap untrust github-actions-v0.2
spack external find cmake bison
spack -d solve zlib
tree ~/.spack/bootstrap/store/
ubuntu-clingo-sources:
runs-on: ubuntu-latest
container: "ubuntu:latest"
steps:
- name: Install dependencies
env:
DEBIAN_FRONTEND: noninteractive
run: |
apt-get update -y && apt-get upgrade -y
apt-get install -y \
bzip2 curl file g++ gcc gfortran git gnupg2 gzip \
make patch unzip xz-utils python3 python3-dev tree \
cmake bison
- name: Checkout
uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
with:
fetch-depth: 0
- name: Setup non-root user
run: |
# See [1] below
git config --global --add safe.directory /__w/spack/spack
useradd spack-test && mkdir -p ~spack-test
chown -R spack-test . ~spack-test
- name: Setup repo
shell: runuser -u spack-test -- bash {0}
run: |
git --version
. .github/workflows/setup_git.sh
- name: Bootstrap clingo
shell: runuser -u spack-test -- bash {0}
run: |
source share/spack/setup-env.sh
spack bootstrap untrust github-actions-v0.2
spack external find cmake bison
spack -d solve zlib
tree ~/.spack/bootstrap/store/
ubuntu-clingo-binaries-and-patchelf:
runs-on: ubuntu-latest
container: "ubuntu:latest"
steps:
- name: Install dependencies
env:
DEBIAN_FRONTEND: noninteractive
run: |
apt-get update -y && apt-get upgrade -y
apt-get install -y \
bzip2 curl file g++ gcc gfortran git gnupg2 gzip \
make patch unzip xz-utils python3 python3-dev tree
- name: Checkout
uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
with:
fetch-depth: 0
- name: Setup non-root user
run: |
# See [1] below
git config --global --add safe.directory /__w/spack/spack
useradd spack-test && mkdir -p ~spack-test
chown -R spack-test . ~spack-test
- name: Setup repo
shell: runuser -u spack-test -- bash {0}
run: |
git --version
. .github/workflows/setup_git.sh
- name: Bootstrap clingo
shell: runuser -u spack-test -- bash {0}
run: |
source share/spack/setup-env.sh
spack -d solve zlib
tree ~/.spack/bootstrap/store/
opensuse-clingo-sources:
runs-on: ubuntu-latest
container: "opensuse/leap:latest"
steps:
- name: Install dependencies
run: |
# Harden CI by applying the workaround described here: https://www.suse.com/support/kb/doc/?id=000019505
zypper update -y || zypper update -y
@@ -37,114 +131,90 @@ jobs:
make patch unzip which xz python3 python3-devel tree \
cmake bison
- name: Checkout
-uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
+uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
with:
fetch-depth: 0
- name: Setup repo
run: |
# See [1] below
git config --global --add safe.directory /__w/spack/spack
git --version
. .github/workflows/setup_git.sh
- name: Bootstrap clingo
run: |
source share/spack/setup-env.sh
-spack bootstrap disable github-actions-v0.5
-spack bootstrap disable github-actions-v0.4
+spack bootstrap untrust github-actions-v0.2
spack external find cmake bison
spack -d solve zlib
tree ~/.spack/bootstrap/store/
-clingo-sources:
+macos-clingo-sources:
-runs-on: ${{ matrix.runner }}
+runs-on: macos-latest
-strategy:
-matrix:
-runner: ['macos-13', 'macos-14', "ubuntu-latest"]
steps:
-- name: Setup macOS
-if: ${{ matrix.runner != 'ubuntu-latest' }}
+- name: Install dependencies
run: |
-brew install cmake bison tree
+brew install cmake bison@2.7 tree
- name: Checkout
-uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
+uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
with:
fetch-depth: 0
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: "3.12"
- name: Bootstrap clingo
run: |
source share/spack/setup-env.sh
-spack bootstrap disable github-actions-v0.5
-spack bootstrap disable github-actions-v0.4
+export PATH=/usr/local/opt/bison@2.7/bin:$PATH
+spack bootstrap untrust github-actions-v0.2
spack external find --not-buildable cmake bison
spack -d solve zlib
tree ~/.spack/bootstrap/store/
-gnupg-sources:
+macos-clingo-binaries:
-runs-on: ${{ matrix.runner }}
+runs-on: ${{ matrix.macos-version }}
strategy:
matrix:
-runner: [ 'macos-13', 'macos-14', "ubuntu-latest" ]
+macos-version: ['macos-11', 'macos-12']
steps:
-- name: Setup macOS
-if: ${{ matrix.runner != 'ubuntu-latest' }}
+- name: Install dependencies
run: |
brew install tree
# Remove GnuPG since we want to bootstrap it
sudo rm -rf /usr/local/bin/gpg
- name: Setup Ubuntu
if: ${{ matrix.runner == 'ubuntu-latest' }}
run: |
sudo rm -rf $(which gpg) $(which gpg2) $(which patchelf)
- name: Checkout
-uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
+uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
with:
fetch-depth: 0
- name: Bootstrap GnuPG
run: |
source share/spack/setup-env.sh
spack solve zlib
spack bootstrap disable github-actions-v0.5
spack bootstrap disable github-actions-v0.4
spack -d gpg list
tree ~/.spack/bootstrap/store/
from-binaries:
runs-on: ${{ matrix.runner }}
strategy:
matrix:
runner: ['macos-13', 'macos-14', "ubuntu-latest"]
steps:
- name: Setup macOS
if: ${{ matrix.runner != 'ubuntu-latest' }}
run: |
brew install tree
# Remove GnuPG since we want to bootstrap it
sudo rm -rf /usr/local/bin/gpg
- name: Setup Ubuntu
if: ${{ matrix.runner == 'ubuntu-latest' }}
run: |
sudo rm -rf $(which gpg) $(which gpg2) $(which patchelf)
- name: Checkout
uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
with:
fetch-depth: 0
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: |
3.8
3.9
3.10
3.11
3.12
- name: Set bootstrap sources
run: |
source share/spack/setup-env.sh
spack bootstrap disable github-actions-v0.4
spack bootstrap disable spack-install
- name: Bootstrap clingo
run: |
-set -e
+set -ex
-for ver in '3.8' '3.9' '3.10' '3.11' '3.12' ; do
+for ver in '3.6' '3.7' '3.8' '3.9' '3.10' ; do
not_found=1
ver_dir="$(find $RUNNER_TOOL_CACHE/Python -wholename "*/${ver}.*/*/bin" | grep . || true)"
echo "Testing $ver_dir"
if [[ -d "$ver_dir" ]] ; then
if $ver_dir/python --version ; then
export PYTHON="$ver_dir/python"
not_found=0
old_path="$PATH"
export PATH="$ver_dir:$PATH"
./bin/spack-tmpconfig -b ./.github/workflows/bootstrap-test.sh
export PATH="$old_path"
fi
fi
# NOTE: test all pythons that exist, not all do on 12
done
ubuntu-clingo-binaries:
runs-on: ubuntu-20.04
steps:
- name: Checkout
uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
with:
fetch-depth: 0
- name: Setup repo
run: |
git --version
. .github/workflows/setup_git.sh
- name: Bootstrap clingo
run: |
set -ex
for ver in '2.7' '3.6' '3.7' '3.8' '3.9' '3.10' ; do
not_found=1
ver_dir="$(find $RUNNER_TOOL_CACHE/Python -wholename "*/${ver}.*/*/bin" | grep . || true)"
echo "Testing $ver_dir"
if [[ -d "$ver_dir" ]] ; then
+echo "Testing $ver_dir"
if $ver_dir/python --version ; then
export PYTHON="$ver_dir/python"
not_found=0
@@ -159,9 +229,118 @@ jobs:
exit 1
fi
done
ubuntu-gnupg-binaries:
runs-on: ubuntu-latest
container: "ubuntu:latest"
steps:
- name: Install dependencies
env:
DEBIAN_FRONTEND: noninteractive
run: |
apt-get update -y && apt-get upgrade -y
apt-get install -y \
bzip2 curl file g++ gcc patchelf gfortran git gzip \
make patch unzip xz-utils python3 python3-dev tree
- name: Checkout
uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
with:
fetch-depth: 0
- name: Setup non-root user
run: |
# See [1] below
git config --global --add safe.directory /__w/spack/spack
useradd spack-test && mkdir -p ~spack-test
chown -R spack-test . ~spack-test
- name: Setup repo
shell: runuser -u spack-test -- bash {0}
run: |
git --version
. .github/workflows/setup_git.sh
- name: Bootstrap GnuPG
+shell: runuser -u spack-test -- bash {0}
run: |
source share/spack/setup-env.sh
+spack bootstrap untrust spack-install
spack -d gpg list
tree ~/.spack/bootstrap/store/
ubuntu-gnupg-sources:
runs-on: ubuntu-latest
container: "ubuntu:latest"
steps:
- name: Install dependencies
env:
DEBIAN_FRONTEND: noninteractive
run: |
apt-get update -y && apt-get upgrade -y
apt-get install -y \
bzip2 curl file g++ gcc patchelf gfortran git gzip \
make patch unzip xz-utils python3 python3-dev tree \
gawk
- name: Checkout
uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
with:
fetch-depth: 0
- name: Setup non-root user
run: |
# See [1] below
git config --global --add safe.directory /__w/spack/spack
useradd spack-test && mkdir -p ~spack-test
chown -R spack-test . ~spack-test
- name: Setup repo
shell: runuser -u spack-test -- bash {0}
run: |
git --version
. .github/workflows/setup_git.sh
- name: Bootstrap GnuPG
shell: runuser -u spack-test -- bash {0}
run: |
source share/spack/setup-env.sh
spack solve zlib
spack bootstrap untrust github-actions-v0.2
spack -d gpg list
tree ~/.spack/bootstrap/store/
macos-gnupg-binaries:
runs-on: macos-latest
steps:
- name: Install dependencies
run: |
brew install tree
# Remove GnuPG since we want to bootstrap it
sudo rm -rf /usr/local/bin/gpg
- name: Checkout
uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
- name: Bootstrap GnuPG
run: |
source share/spack/setup-env.sh
spack bootstrap untrust spack-install
spack -d gpg list
tree ~/.spack/bootstrap/store/
macos-gnupg-sources:
runs-on: macos-latest
steps:
- name: Install dependencies
run: |
brew install gawk tree
# Remove GnuPG since we want to bootstrap it
sudo rm -rf /usr/local/bin/gpg
- name: Checkout
uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
- name: Bootstrap GnuPG
run: |
source share/spack/setup-env.sh
spack solve zlib
spack bootstrap untrust github-actions-v0.2
spack -d gpg list
tree ~/.spack/bootstrap/store/
# [1] Distros that have patched git to resolve CVE-2022-24765 (e.g. Ubuntu patching v2.25.1)
# introduce breaking behavior, so we have to set `safe.directory` in gitconfig ourselves.
# See:
# - https://github.blog/2022-04-12-git-security-vulnerability-announced/
# - https://github.com/actions/checkout/issues/760
# - http://changelogs.ubuntu.com/changelogs/pool/main/g/git/git_2.25.1-1ubuntu3.3/changelog
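The `safe.directory` workaround described in the note above can be exercised in isolation. The sketch below is a hypothetical stand-alone demo (it uses throwaway temp directories and a temporary `HOME`, so no real git configuration is touched; the `/__w/spack/spack` path from the workflows is only mimicked by a temp repo):

```shell
# Minimal sketch of the CVE-2022-24765 safe.directory workaround.
# Everything happens in throwaway directories; nothing persistent is modified.
set -e
repo=$(mktemp -d)          # stand-in for /__w/spack/spack
git init -q "$repo"
export HOME=$(mktemp -d)   # scope the "global" config to a temp HOME
git config --global --add safe.directory "$repo"
# git now treats $repo as safe even when it is owned by a different user
git config --global --get-all safe.directory | grep -q "$repo" && echo "safe"
```

Running the snippet prints `safe`, confirming the path was whitelisted; in the workflows above the same `git config --global --add safe.directory` call is what lets the `spack-test` non-root user operate on the checkout.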


@@ -13,7 +13,7 @@ on:
paths:
- '.github/workflows/build-containers.yml'
- 'share/spack/docker/*'
-- 'share/spack/templates/container/*'
+- 'share/templates/container/*'
- 'lib/spack/spack/container/*'
# Let's also build & tag Spack containers on releases.
release:
@@ -38,40 +38,32 @@ jobs:
# Meaning of the various items in the matrix list
# 0: Container name (e.g. ubuntu-bionic)
# 1: Platforms to build for
-# 2: Base image (e.g. ubuntu:22.04)
+# 2: Base image (e.g. ubuntu:18.04)
dockerfile: [[amazon-linux, 'linux/amd64,linux/arm64', 'amazonlinux:2'],
[centos7, 'linux/amd64,linux/arm64,linux/ppc64le', 'centos:7'],
[centos-stream, 'linux/amd64,linux/arm64,linux/ppc64le', 'centos:stream'],
[leap15, 'linux/amd64,linux/arm64,linux/ppc64le', 'opensuse/leap:15'],
+[ubuntu-bionic, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:18.04'],
[ubuntu-focal, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:20.04'],
-[ubuntu-jammy, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:22.04'],
+[ubuntu-jammy, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:22.04']]
-[ubuntu-noble, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:24.04'],
-[almalinux8, 'linux/amd64,linux/arm64,linux/ppc64le', 'almalinux:8'],
-[almalinux9, 'linux/amd64,linux/arm64,linux/ppc64le', 'almalinux:9'],
-[rockylinux8, 'linux/amd64,linux/arm64', 'rockylinux:8'],
-[rockylinux9, 'linux/amd64,linux/arm64', 'rockylinux:9'],
-[fedora39, 'linux/amd64,linux/arm64,linux/ppc64le', 'fedora:39'],
-[fedora40, 'linux/amd64,linux/arm64,linux/ppc64le', 'fedora:40']]
name: Build ${{ matrix.dockerfile[0] }}
if: github.repository == 'spack/spack'
steps:
- name: Checkout
-uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
+uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # @v2
-- uses: docker/metadata-action@8e5442c4ef9f78752691e2d8f8d19755c6f78e81
-id: docker_meta
-with:
-images: |
-ghcr.io/${{ github.repository_owner }}/${{ matrix.dockerfile[0] }}
-${{ github.repository_owner }}/${{ matrix.dockerfile[0] }}
-tags: |
-type=schedule,pattern=nightly
-type=schedule,pattern=develop
-type=semver,pattern={{version}}
-type=semver,pattern={{major}}.{{minor}}
-type=semver,pattern={{major}}
-type=ref,event=branch
-type=ref,event=pr
+- name: Set Container Tag Normal (Nightly)
+run: |
+container="${{ matrix.dockerfile[0] }}:latest"
+echo "container=${container}" >> $GITHUB_ENV
+echo "versioned=${container}" >> $GITHUB_ENV
+# On a new release create a container with the same tag as the release.
+- name: Set Container Tag on Release
+if: github.event_name == 'release'
+run: |
+versioned="${{matrix.dockerfile[0]}}:${GITHUB_REF##*/}"
+echo "versioned=${versioned}" >> $GITHUB_ENV
- name: Generate the Dockerfile
env:
@@ -88,19 +80,19 @@ jobs:
fi
- name: Upload Dockerfile
-uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808
+uses: actions/upload-artifact@3cea5372237819ed00197afe530f5a7ea3e805c8
with:
-name: dockerfiles_${{ matrix.dockerfile[0] }}
+name: dockerfiles
path: dockerfiles
- name: Set up QEMU
-uses: docker/setup-qemu-action@68827325e0b33c7199eb31dd4e31fbe9023e06e3
+uses: docker/setup-qemu-action@8b122486cedac8393e77aa9734c3528886e4a1a8 # @v1
- name: Set up Docker Buildx
-uses: docker/setup-buildx-action@d70bba72b1f3fd22344832f00baa16ece964efeb
+uses: docker/setup-buildx-action@c74574e6c82eeedc46366be1b0d287eff9085eb6 # @v1
- name: Log in to GitHub Container Registry
-uses: docker/login-action@e92390c5fb421da1463c202d546fed0ec5c39f20
+uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # @v1
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -108,27 +100,21 @@ jobs:
- name: Log in to DockerHub
if: github.event_name != 'pull_request'
-uses: docker/login-action@e92390c5fb421da1463c202d546fed0ec5c39f20
+uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # @v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build & Deploy ${{ matrix.dockerfile[0] }}
-uses: docker/build-push-action@2cdde995de11925a030ce8070c3d77a52ffcf1c0
+uses: docker/build-push-action@c84f38281176d4c9cdb1626ffafcd6b3911b5d94 # @v2
with:
context: dockerfiles/${{ matrix.dockerfile[0] }}
platforms: ${{ matrix.dockerfile[1] }}
push: ${{ github.event_name != 'pull_request' }}
-tags: ${{ steps.docker_meta.outputs.tags }}
-labels: ${{ steps.docker_meta.outputs.labels }}
+cache-from: type=gha
+cache-to: type=gha,mode=max
+tags: |
+spack/${{ env.container }}
+spack/${{ env.versioned }}
+ghcr.io/spack/${{ env.container }}
+ghcr.io/spack/${{ env.versioned }}
-merge-dockerfiles:
-runs-on: ubuntu-latest
-needs: deploy-images
-steps:
-- name: Merge Artifacts
-uses: actions/upload-artifact/merge@65462800fd760344b1a7b4382951275a0abb4808
-with:
-name: dockerfiles
-pattern: dockerfiles_*
-delete-merged: true


@@ -18,9 +18,14 @@ jobs:
prechecks:
needs: [ changes ]
uses: ./.github/workflows/valid-style.yml
-secrets: inherit
with:
with_coverage: ${{ needs.changes.outputs.core }}
+audit-ancient-python:
+uses: ./.github/workflows/audit.yaml
+needs: [ changes ]
+with:
+with_coverage: ${{ needs.changes.outputs.core }}
+python_version: 2.7
all-prechecks:
needs: [ prechecks ]
runs-on: ubuntu-latest
@@ -36,12 +41,12 @@ jobs:
core: ${{ steps.filter.outputs.core }}
packages: ${{ steps.filter.outputs.packages }}
steps:
-- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
+- uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # @v2
if: ${{ github.event_name == 'push' }}
with:
fetch-depth: 0
# For pull requests it's not necessary to checkout the code
-- uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36
+- uses: dorny/paths-filter@b2feaf19c27470162a626bd6fa8438ae5b263721
id: filter
with:
# See https://github.com/dorny/paths-filter/issues/56 for the syntax used below
@@ -71,19 +76,16 @@ jobs:
if: ${{ github.repository == 'spack/spack' && needs.changes.outputs.bootstrap == 'true' }}
needs: [ prechecks, changes ]
uses: ./.github/workflows/bootstrap.yml
-secrets: inherit
unit-tests:
if: ${{ github.repository == 'spack/spack' && needs.changes.outputs.core == 'true' }}
needs: [ prechecks, changes ]
uses: ./.github/workflows/unit_tests.yaml
-secrets: inherit
windows:
if: ${{ github.repository == 'spack/spack' && needs.changes.outputs.core == 'true' }}
needs: [ prechecks ]
uses: ./.github/workflows/windows_python.yml
-secrets: inherit
all:
-needs: [ windows, unit-tests, bootstrap ]
+needs: [ windows, unit-tests, bootstrap, audit-ancient-python ]
runs-on: ubuntu-latest
steps:
- name: Success


@@ -1,31 +0,0 @@
name: Windows Paraview Nightly
on:
schedule:
- cron: '0 2 * * *' # Run at 2 am
defaults:
run:
shell:
powershell Invoke-Expression -Command "./share/spack/qa/windows_test_setup.ps1"; {0}
jobs:
build-paraview-deps:
runs-on: windows-latest
steps:
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
with:
fetch-depth: 0
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip six pywin32 setuptools coverage
- name: Build Test
run: |
spack compiler find
spack external find cmake ninja win-sdk win-wdk wgl msmpi
spack -d install -y --cdash-upload-url https://cdash.spack.io/submit.php?project=Spack+on+Windows --cdash-track Nightly --only dependencies paraview
exit 0


@@ -1,4 +1,6 @@
-# (c) 2022 Lawrence Livermore National Laboratory
+# (c) 2021 Lawrence Livermore National Laboratory
+Set-Location spack
git config --global user.email "spack@example.com"
git config --global user.name "Test User"


@@ -1,7 +0,0 @@
black==24.4.2
clingo==5.7.1
flake8==7.0.0
isort==5.13.2
mypy==1.8.0
types-six==1.16.21.20240513
vermin==1.6.0


@@ -11,50 +11,36 @@ concurrency:
jobs:
# Run unit tests with different configurations on linux
ubuntu:
-runs-on: ${{ matrix.os }}
+runs-on: ubuntu-latest
strategy:
matrix:
-os: [ubuntu-latest]
-python-version: ['3.7', '3.8', '3.9', '3.10', '3.11', '3.12']
+python-version: ['2.7', '3.6', '3.7', '3.8', '3.9', '3.10']
concretizer: ['clingo']
on_develop:
- ${{ github.ref == 'refs/heads/develop' }}
include:
-- python-version: '3.11'
-os: ubuntu-latest
+- python-version: 2.7
concretizer: original
on_develop: ${{ github.ref == 'refs/heads/develop' }}
-- python-version: '3.6'
-os: ubuntu-20.04
-concretizer: clingo
+- python-version: '3.10'
+concretizer: original
on_develop: ${{ github.ref == 'refs/heads/develop' }}
exclude:
- python-version: '3.7'
-os: ubuntu-latest
concretizer: 'clingo'
on_develop: false
- python-version: '3.8'
-os: ubuntu-latest
concretizer: 'clingo'
on_develop: false
- python-version: '3.9'
-os: ubuntu-latest
-concretizer: 'clingo'
-on_develop: false
-- python-version: '3.10'
-os: ubuntu-latest
-concretizer: 'clingo'
-on_develop: false
-- python-version: '3.11'
-os: ubuntu-latest
concretizer: 'clingo'
on_develop: false
steps:
-- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
+- uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # @v2
with:
fetch-depth: 0
-- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
+- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # @v2
with:
python-version: ${{ matrix.python-version }}
- name: Install System packages
@@ -63,11 +49,19 @@ jobs:
        # Needed for unit tests
        sudo apt-get -y install \
            coreutils cvs gfortran graphviz gnupg2 mercurial ninja-build \
-            cmake bison libbison-dev kcov patchelf
+            cmake bison libbison-dev kcov
    - name: Install Python packages
      run: |
-        pip install --upgrade pip setuptools pytest pytest-xdist pytest-cov
-        pip install --upgrade flake8 "isort>=4.3.5" "mypy>=0.900" "click" "black"
+        pip install --upgrade pip six setuptools pytest codecov[toml] pytest-cov pytest-xdist
+        # ensure style checks are not skipped in unit tests for python >= 3.6
+        # note that true/false (i.e., 1/0) are opposite in conditions in python and bash
+        if python -c 'import sys; sys.exit(not sys.version_info >= (3, 6))'; then
+            pip install --upgrade flake8 "isort>=4.3.5" "mypy>=0.900" "click==8.0.4" "black<=21.12b0"
+        fi
+    - name: Pin pathlib for Python 2.7
+      if: ${{ matrix.python-version == 2.7 }}
+      run: |
+        pip install -U pathlib2==2.3.6
    - name: Setup git configuration
      run: |
        # Need this for the git tests to succeed.
@@ -79,8 +73,7 @@ jobs:
        SPACK_PYTHON: python
      run: |
        . share/spack/setup-env.sh
-        spack bootstrap disable spack-install
-        spack bootstrap now
+        spack bootstrap untrust spack-install
        spack -v solve zlib
    - name: Run unit tests
      env:
@@ -88,24 +81,24 @@ jobs:
        SPACK_TEST_SOLVER: ${{ matrix.concretizer }}
        SPACK_TEST_PARALLEL: 2
        COVERAGE: true
-        UNIT_TEST_COVERAGE: ${{ matrix.python-version == '3.11' }}
+        UNIT_TEST_COVERAGE: ${{ (matrix.concretizer == 'original' && matrix.python-version == '2.7') || (matrix.python-version == '3.10') }}
      run: |
        share/spack/qa/run-unit-tests
-    - uses: codecov/codecov-action@125fc84a9a348dbcf27191600683ec096ec9021c
+        coverage combine -a
+        coverage xml
+    - uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
      with:
        flags: unittests,linux,${{ matrix.concretizer }}
-        token: ${{ secrets.CODECOV_TOKEN }}
-        verbose: true
  # Test shell integration
  shell:
    runs-on: ubuntu-latest
    steps:
-    - uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
+    - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8  # @v2
      with:
        fetch-depth: 0
-    - uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
+    - uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984  # @v2
      with:
-        python-version: '3.11'
+        python-version: '3.10'
    - name: Install System packages
      run: |
        sudo apt-get -y update
@@ -113,7 +106,7 @@ jobs:
        sudo apt-get install -y coreutils kcov csh zsh tcsh fish dash bash
    - name: Install Python packages
      run: |
-        pip install --upgrade pip setuptools pytest coverage[toml] pytest-xdist
+        pip install --upgrade pip six setuptools pytest codecov coverage[toml]==6.2 pytest-xdist
    - name: Setup git configuration
      run: |
        # Need this for the git tests to succeed.
@@ -124,11 +117,9 @@ jobs:
        COVERAGE: true
      run: |
        share/spack/qa/run-shell-tests
-    - uses: codecov/codecov-action@125fc84a9a348dbcf27191600683ec096ec9021c
+    - uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
      with:
        flags: shelltests,linux
-        token: ${{ secrets.CODECOV_TOKEN }}
-        verbose: true
  # Test RHEL8 UBI with platform Python. This job is run
  # only on PRs modifying core Spack
@@ -141,11 +132,10 @@ jobs:
        dnf install -y \
            bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
            make patch tcl unzip which xz
-    - uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
+    - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8  # @v2
    - name: Setup repo and non-root user
      run: |
        git --version
-        git config --global --add safe.directory /__w/spack/spack
        git fetch --unshallow
        . .github/workflows/setup_git.sh
        useradd spack-test
@@ -154,26 +144,28 @@ jobs:
      shell: runuser -u spack-test -- bash {0}
      run: |
        source share/spack/setup-env.sh
-        spack -d bootstrap now --dev
+        spack -d solve zlib
        spack unit-test -k 'not cvs and not svn and not hg' -x --verbose
  # Test for the clingo based solver (using clingo-cffi)
  clingo-cffi:
    runs-on: ubuntu-latest
    steps:
-    - uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
+    - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8  # @v2
      with:
        fetch-depth: 0
-    - uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
+    - uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984  # @v2
      with:
-        python-version: '3.11'
+        python-version: '3.10'
    - name: Install System packages
      run: |
        sudo apt-get -y update
-        sudo apt-get -y install coreutils cvs gfortran graphviz gnupg2 mercurial ninja-build kcov
+        # Needed for unit tests
+        sudo apt-get -y install \
+            coreutils cvs gfortran graphviz gnupg2 mercurial ninja-build \
+            patchelf kcov
    - name: Install Python packages
      run: |
-        pip install --upgrade pip setuptools pytest coverage[toml] pytest-cov clingo pytest-xdist
-        pip install --upgrade flake8 "isort>=4.3.5" "mypy>=0.900" "click" "black"
+        pip install --upgrade pip six setuptools pytest codecov coverage[toml] pytest-cov clingo pytest-xdist
    - name: Setup git configuration
      run: |
        # Need this for the git tests to succeed.
@@ -185,29 +177,28 @@ jobs:
        SPACK_TEST_SOLVER: clingo
      run: |
        share/spack/qa/run-unit-tests
-    - uses: codecov/codecov-action@125fc84a9a348dbcf27191600683ec096ec9021c
+        coverage combine -a
+        coverage xml
+    - uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70  # @v2.1.0
      with:
        flags: unittests,linux,clingo
-        token: ${{ secrets.CODECOV_TOKEN }}
-        verbose: true
  # Run unit tests on MacOS
  macos:
-    runs-on: ${{ matrix.os }}
+    runs-on: macos-latest
    strategy:
      matrix:
-        os: [macos-13, macos-14]
-        python-version: ["3.11"]
+        python-version: [3.8]
    steps:
-    - uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
+    - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8  # @v2
      with:
        fetch-depth: 0
-    - uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
+    - uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984  # @v2
      with:
        python-version: ${{ matrix.python-version }}
    - name: Install Python packages
      run: |
-        pip install --upgrade pip setuptools
-        pip install --upgrade pytest coverage[toml] pytest-xdist pytest-cov
+        pip install --upgrade pip six setuptools
+        pip install --upgrade pytest codecov coverage[toml] pytest-xdist pytest-cov
    - name: Setup Homebrew packages
      run: |
        brew install dash fish gcc gnupg2 kcov
@@ -219,12 +210,15 @@ jobs:
        git --version
        . .github/workflows/setup_git.sh
        . share/spack/setup-env.sh
-        $(which spack) bootstrap disable spack-install
+        $(which spack) bootstrap untrust spack-install
        $(which spack) solve zlib
        common_args=(--dist loadfile --tx '4*popen//python=./bin/spack-tmpconfig python -u ./bin/spack python' -x)
-        $(which spack) unit-test --verbose --cov --cov-config=pyproject.toml --cov-report=xml:coverage.xml "${common_args[@]}"
-    - uses: codecov/codecov-action@125fc84a9a348dbcf27191600683ec096ec9021c
+        $(which spack) unit-test --cov --cov-config=pyproject.toml "${common_args[@]}"
+        coverage combine -a
+        coverage xml
+        # Delete the symlink going from ./lib/spack/docs/_spack_root back to
+        # the initial directory, since it causes ELOOP errors with codecov/actions@2
+        rm lib/spack/docs/_spack_root
+    - uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
      with:
        flags: unittests,macos
-        token: ${{ secrets.CODECOV_TOKEN }}
-        verbose: true

View File

@@ -18,34 +18,33 @@ jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
-    - uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
-    - uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
+    - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8  # @v2
+    - uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984  # @v2
      with:
-        python-version: '3.11'
+        python-version: '3.10'
        cache: 'pip'
    - name: Install Python Packages
      run: |
-        pip install --upgrade pip setuptools
-        pip install -r .github/workflows/style/requirements.txt
+        pip install --upgrade pip
+        pip install --upgrade vermin
    - name: vermin (Spack's Core)
-      run: vermin --backport importlib --backport argparse --violations --backport typing -t=3.6- -vvv lib/spack/spack/ lib/spack/llnl/ bin/
+      run: vermin --backport argparse --violations --backport typing -t=2.7- -t=3.6- -vvv lib/spack/spack/ lib/spack/llnl/ bin/
    - name: vermin (Repositories)
-      run: vermin --backport importlib --backport argparse --violations --backport typing -t=3.6- -vvv var/spack/repos
+      run: vermin --backport argparse --violations --backport typing -t=2.7- -t=3.6- -vvv var/spack/repos
  # Run style checks on the files that have been changed
  style:
    runs-on: ubuntu-latest
    steps:
-    - uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
+    - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8  # @v2
      with:
        fetch-depth: 0
-    - uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
+    - uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984  # @v2
      with:
-        python-version: '3.11'
+        python-version: '3.10'
        cache: 'pip'
    - name: Install Python packages
      run: |
-        pip install --upgrade pip setuptools
-        pip install -r .github/workflows/style/requirements.txt
+        python3 -m pip install --upgrade pip six setuptools types-six click==8.0.2 'black==21.12b0' mypy isort clingo flake8
    - name: Setup git configuration
      run: |
        # Need this for the git tests to succeed.
@@ -56,34 +55,6 @@ jobs:
        share/spack/qa/run-style-tests
  audit:
    uses: ./.github/workflows/audit.yaml
-    secrets: inherit
    with:
      with_coverage: ${{ inputs.with_coverage }}
-      python_version: '3.11'
+      python_version: '3.10'
# Check that spack can bootstrap the development environment on Python 3.6 - RHEL8
bootstrap-dev-rhel8:
runs-on: ubuntu-latest
container: registry.access.redhat.com/ubi8/ubi
steps:
- name: Install dependencies
run: |
dnf install -y \
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch tcl unzip which xz
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
- name: Setup repo and non-root user
run: |
git --version
git config --global --add safe.directory /__w/spack/spack
git fetch --unshallow
. .github/workflows/setup_git.sh
useradd spack-test
chown -R spack-test .
- name: Bootstrap Spack development environment
shell: runuser -u spack-test -- bash {0}
run: |
source share/spack/setup-env.sh
spack debug report
spack -d bootstrap now --dev
spack style -t black
spack unit-test -V

View File

@@ -10,74 +10,151 @@ concurrency:
defaults:
  run:
    shell:
-      powershell Invoke-Expression -Command "./share/spack/qa/windows_test_setup.ps1"; {0}
+      powershell Invoke-Expression -Command ".\share\spack\qa\windows_test_setup.ps1"; {0}
jobs:
  unit-tests:
    runs-on: windows-latest
    steps:
-    - uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
+    - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
      with:
        fetch-depth: 0
-    - uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
+    - uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984
      with:
        python-version: 3.9
    - name: Install Python packages
      run: |
-        python -m pip install --upgrade pip pywin32 setuptools pytest-cov clingo
+        python -m pip install --upgrade pip six pywin32 setuptools codecov pytest-cov
    - name: Create local develop
      run: |
-        ./.github/workflows/setup_git.ps1
+        .\spack\.github\workflows\setup_git.ps1
    - name: Unit Test
      run: |
-        spack unit-test -x --verbose --cov --cov-config=pyproject.toml --ignore=lib/spack/spack/test/cmd
-        ./share/spack/qa/validate_last_exit.ps1
+        echo F|xcopy .\spack\share\spack\qa\configuration\windows_config.yaml $env:USERPROFILE\.spack\windows\config.yaml
+        cd spack
+        dir
+        (Get-Item '.\lib\spack\docs\_spack_root').Delete()
+        spack unit-test --verbose --cov --cov-config=pyproject.toml --ignore=lib/spack/spack/test/cmd
        coverage combine -a
        coverage xml
-    - uses: codecov/codecov-action@125fc84a9a348dbcf27191600683ec096ec9021c
+    - uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
      with:
        flags: unittests,windows
-        token: ${{ secrets.CODECOV_TOKEN }}
-        verbose: true
  unit-tests-cmd:
    runs-on: windows-latest
    steps:
-    - uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
+    - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
      with:
        fetch-depth: 0
-    - uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
+    - uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984
      with:
        python-version: 3.9
    - name: Install Python packages
      run: |
-        python -m pip install --upgrade pip pywin32 setuptools coverage pytest-cov clingo
+        python -m pip install --upgrade pip six pywin32 setuptools codecov coverage pytest-cov
    - name: Create local develop
      run: |
-        ./.github/workflows/setup_git.ps1
+        .\spack\.github\workflows\setup_git.ps1
    - name: Command Unit Test
      run: |
-        spack unit-test -x --verbose --cov --cov-config=pyproject.toml lib/spack/spack/test/cmd
-        ./share/spack/qa/validate_last_exit.ps1
+        echo F|xcopy .\spack\share\spack\qa\configuration\windows_config.yaml $env:USERPROFILE\.spack\windows\config.yaml
+        cd spack
+        (Get-Item '.\lib\spack\docs\_spack_root').Delete()
+        spack unit-test --verbose --cov --cov-config=pyproject.toml lib/spack/spack/test/cmd
        coverage combine -a
        coverage xml
-    - uses: codecov/codecov-action@125fc84a9a348dbcf27191600683ec096ec9021c
+    - uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
      with:
        flags: unittests,windows
-        token: ${{ secrets.CODECOV_TOKEN }}
-        verbose: true
  build-abseil:
    runs-on: windows-latest
    steps:
-    - uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
+    - uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
      with:
        fetch-depth: 0
-    - uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
+    - uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984
      with:
        python-version: 3.9
    - name: Install Python packages
      run: |
-        python -m pip install --upgrade pip pywin32 setuptools coverage
+        python -m pip install --upgrade pip six pywin32 setuptools codecov coverage
    - name: Build Test
      run: |
        spack compiler find
-        spack -d external find cmake ninja
-        spack -d install abseil-cpp
+        echo F|xcopy .\spack\share\spack\qa\configuration\windows_config.yaml $env:USERPROFILE\.spack\windows\config.yaml
+        spack external find cmake
+        spack external find ninja
+        spack install abseil-cpp
make-installer:
runs-on: windows-latest
steps:
- name: Disable Windows Symlinks
run: |
git config --global core.symlinks false
shell:
powershell
- uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8
with:
fetch-depth: 0
- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984
with:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip six pywin32 setuptools
- name: Add Light and Candle to Path
run: |
$env:WIX >> $GITHUB_PATH
- name: Run Installer
run: |
.\spack\share\spack\qa\setup_spack.ps1
spack make-installer -s spack -g SILENT pkg
echo "installer_root=$((pwd).Path)" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf8 -Append
env:
ProgressPreference: SilentlyContinue
- uses: actions/upload-artifact@3cea5372237819ed00197afe530f5a7ea3e805c8
with:
name: Windows Spack Installer Bundle
path: ${{ env.installer_root }}\pkg\Spack.exe
- uses: actions/upload-artifact@3cea5372237819ed00197afe530f5a7ea3e805c8
with:
name: Windows Spack Installer
path: ${{ env.installer_root}}\pkg\Spack.msi
execute-installer:
needs: make-installer
runs-on: windows-latest
defaults:
run:
shell: pwsh
steps:
- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984
with:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip six pywin32 setuptools
- name: Setup installer directory
run: |
mkdir -p spack_installer
echo "spack_installer=$((pwd).Path)\spack_installer" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf8 -Append
- uses: actions/download-artifact@v3
with:
name: Windows Spack Installer Bundle
path: ${{ env.spack_installer }}
- name: Execute Bundled Installer
run: |
$proc = Start-Process ${{ env.spack_installer }}\spack.exe "/install /quiet" -Passthru
$handle = $proc.Handle # cache proc.Handle
$proc.WaitForExit();
$LASTEXITCODE
env:
ProgressPreference: SilentlyContinue
- uses: actions/download-artifact@v3
with:
name: Windows Spack Installer
path: ${{ env.spack_installer }}
- name: Execute MSI
run: |
$proc = Start-Process ${{ env.spack_installer }}\spack.msi "/quiet" -Passthru
$handle = $proc.Handle # cache proc.Handle
$proc.WaitForExit();
$LASTEXITCODE

View File

@@ -1,16 +1,10 @@
version: 2
-build:
-  os: "ubuntu-22.04"
-  apt_packages:
-  - graphviz
-  tools:
-    python: "3.11"
sphinx:
  configuration: lib/spack/docs/conf.py
  fail_on_warning: true
python:
+  version: 3.7
  install:
  - requirements: lib/spack/docs/requirements.txt

View File

@@ -1,907 +1,16 @@
# v0.21.2 (2024-03-01)
## Bugfixes
- Containerize: accommodate nested or pre-existing spack-env paths (#41558)
- Fix setup-env script, when going back and forth between instances (#40924)
- Fix using fully-qualified namespaces from root specs (#41957)
- Fix a bug when a required provider is requested for multiple virtuals (#42088)
- OCI buildcaches:
- only push in parallel when forking (#42143)
- use pickleable errors (#42160)
- Fix using sticky variants in externals (#42253)
- Fix a rare issue with conditional requirements and multi-valued variants (#42566)
## Package updates
- rust: add v1.75, rework a few variants (#41161,#41903)
- py-transformers: add v4.35.2 (#41266)
- mgard: fix OpenMP on AppleClang (#42933)
# v0.21.1 (2024-01-11)
## New features
- Add support for reading buildcaches created by Spack v0.22 (#41773)
## Bugfixes
- spack graph: fix coloring with environments (#41240)
- spack info: sort variants in --variants-by-name (#41389)
- Spec.format: error on old style format strings (#41934)
- ASP-based solver:
- fix infinite recursion when computing concretization errors (#41061)
- don't error for type mismatch on preferences (#41138)
- don't emit spurious debug output (#41218)
- Improve the error message for deprecated preferences (#41075)
- Fix MSVC preview version breaking clingo build on Windows (#41185)
- Fix multi-word aliases (#41126)
- Add a warning for unconfigured compiler (#41213)
- environment: fix an issue with deconcretization/reconcretization of specs (#41294)
- buildcache: don't error if a patch is missing, when installing from binaries (#41986)
- Multiple improvements to unit-tests (#41215,#41369,#41495,#41359,#41361,#41345,#41342,#41308,#41226)
## Package updates
- root: add a webgui patch to address security issue (#41404)
- BerkeleyGW: update source urls (#38218)
# v0.21.0 (2023-11-11)
`v0.21.0` is a major feature release.
## Features in this release
1. **Better error messages with condition chaining**
In v0.18, we added better error messages that could tell you what problem happened,
but they couldn't tell you *why* it happened. `0.21` adds *condition chaining* to the
solver, and Spack can now trace back through the conditions that led to an error and
   build a tree of potential causes and where they came from. For example:
$ spack solve hdf5 ^cmake@3.0.1
==> Error: concretization failed for the following reasons:
1. Cannot satisfy 'cmake@3.0.1'
2. Cannot satisfy 'cmake@3.0.1'
required because hdf5 ^cmake@3.0.1 requested from CLI
3. Cannot satisfy 'cmake@3.18:' and 'cmake@3.0.1
required because hdf5 ^cmake@3.0.1 requested from CLI
required because hdf5 depends on cmake@3.18: when @1.13:
required because hdf5 ^cmake@3.0.1 requested from CLI
4. Cannot satisfy 'cmake@3.12:' and 'cmake@3.0.1
required because hdf5 depends on cmake@3.12:
required because hdf5 ^cmake@3.0.1 requested from CLI
required because hdf5 ^cmake@3.0.1 requested from CLI
```
More details in #40173.
2. **OCI build caches**
You can now use an arbitrary [OCI](https://opencontainers.org) registry as a build
cache:
```console
$ spack mirror add my_registry oci://user/image # Dockerhub
$ spack mirror add my_registry oci://ghcr.io/haampie/spack-test # GHCR
$ spack mirror set --push --oci-username ... --oci-password ... my_registry # set login creds
$ spack buildcache push my_registry [specs...]
```
And you can optionally add a base image to get *runnable* images:
```console
$ spack buildcache push --base-image ubuntu:23.04 my_registry python
Pushed ... as [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
$ docker run --rm -it [image]:python-3.11.2-65txfcpqbmpawclvtasuog4yzmxwaoia.spack
```
This creates a container image from the Spack installations on the host system,
without the need to run `spack install` from a `Dockerfile` or `sif` file. It also
addresses the inconvenience of losing binaries of dependencies when `RUN spack
install` fails inside `docker build`.
Further, the container image layers and build cache tarballs are the same files. This
means that `spack install` and `docker pull` use the exact same underlying binaries.
If you previously used `spack install` inside of `docker build`, this feature helps
   you save storage by a factor of two.
More details in #38358.
3. **Multiple versions of build dependencies**
Increasingly, complex package builds require multiple versions of some build
dependencies. For example, Python packages frequently require very specific versions
of `setuptools`, `cython`, and sometimes different physics packages require different
versions of Python to build. The concretizer enforced that every solve was *unified*,
i.e., that there only be one version of every package. The concretizer now supports
"duplicate" nodes for *build dependencies*, but enforces unification through
transitive link and run dependencies. This will allow it to better resolve complex
dependency graphs in ecosystems like Python, and it also gets us very close to
modeling compilers as proper dependencies.
This change required a major overhaul of the concretizer, as well as a number of
performance optimizations. See #38447, #39621.
4. **Cherry-picking virtual dependencies**
You can now select only a subset of virtual dependencies from a spec that may provide
more. For example, if you want `mpich` to be your `mpi` provider, you can be explicit
by writing:
```
hdf5 ^[virtuals=mpi] mpich
```
Or, if you want to use, e.g., `intel-parallel-studio` for `blas` along with an external
`lapack` like `openblas`, you could write:
```
strumpack ^[virtuals=mpi] intel-parallel-studio+mkl ^[virtuals=lapack] openblas
```
The `virtuals=mpi` is an edge attribute, and dependency edges in Spack graphs now
track which virtuals they satisfied. More details in #17229 and #35322.
Note for packaging: in Spack 0.21 `spec.satisfies("^virtual")` is true if and only if
the package specifies `depends_on("virtual")`. This is different from Spack 0.20,
where depending on a provider implied depending on the virtual provided. See #41002
for an example where `^mkl` was being used to test for several `mkl` providers in a
package that did not depend on `mkl`.
5. **License directive**
Spack packages can now have license metadata, with the new `license()` directive:
```python
license("Apache-2.0")
```
Licenses use [SPDX identifiers](https://spdx.org/licenses), and you can use SPDX
expressions to combine them:
```python
license("Apache-2.0 OR MIT")
```
Like other directives in Spack, it's conditional, so you can handle complex cases like
Spack itself:
```python
license("LGPL-2.1", when="@:0.11")
license("Apache-2.0 OR MIT", when="@0.12:")
```
More details in #39346, #40598.
6. **`spack deconcretize` command**
We are getting close to having a `spack update` command for environments, but we're
not quite there yet. This is the next best thing. `spack deconcretize` gives you
control over what you want to update in an already concrete environment. If you have
an environment built with, say, `meson`, and you want to update your `meson` version,
you can run:
```console
spack deconcretize meson
```
and have everything that depends on `meson` rebuilt the next time you run `spack
concretize`. In a future Spack version, we'll handle all of this in a single command,
but for now you can use this to drop bits of your lockfile and resolve your
dependencies again. More in #38803.
7. **UI Improvements**
The venerable `spack info` command was looking shabby compared to the rest of Spack's
UI, so we reworked it to have a bit more flair. `spack info` now makes much better
use of terminal space and shows variants, their values, and their descriptions much
more clearly. Conditional variants are grouped separately so you can more easily
understand how packages are structured. More in #40998.
`spack checksum` now allows you to filter versions from your editor, or by version
range. It also notifies you about potential download URL changes. See #40403.
8. **Environments can include definitions**
   Spack did not previously support using `include:` with the
[definitions](https://spack.readthedocs.io/en/latest/environments.html#spec-list-references)
section of an environment, but now it does. You can use this to curate lists of specs
and more easily reuse them across environments. See #33960.
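
   As a minimal sketch (file and list names here are illustrative, not taken
   from the release notes), a shared definitions file could be included by an
   environment like so:

   ```yaml
   # definitions.yaml -- a curated, reusable list of specs
   definitions:
   - my_compilers: ['gcc@12', 'clang@16']
   - my_packages: [hdf5+mpi, zlib-ng]
   ```

   ```yaml
   # spack.yaml -- an environment that pulls those lists in
   spack:
     include:
     - definitions.yaml
     specs:
     - matrix:
       - [$my_packages]
       - [$%my_compilers]
   ```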
9. **Aliases**
You can now add aliases to Spack commands in `config.yaml`, e.g. this might enshrine
your favorite args to `spack find` as `spack f`:
```yaml
config:
aliases:
f: find -lv
```
See #17229.
10. **Improved autoloading of modules**
Spack 0.20 was the first release to enable autoloading of direct dependencies in
module files.
The downside of this was that `module avail` and `module load` tab completion would
show users too many modules to choose from, and many users disabled generating
modules for dependencies through `exclude_implicits: true`. Further, it was
necessary to keep hashes in module names to avoid file name clashes.
In this release, you can start using `hide_implicits: true` instead, which exposes
only explicitly installed packages to the user, while still autoloading
dependencies. On top of that, you can safely use `hash_length: 0`, as this config
now only applies to the modules exposed to the user -- you don't have to worry about
file name clashes for hidden dependencies.
Note: for `tcl` this feature requires Modules 4.7 or higher
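
   As a sketch, the combination described above might look like this in
   `modules.yaml` (assuming the default module set and `tcl` modules; check the
   Spack modules documentation for the exact schema):

   ```yaml
   modules:
     default:
       tcl:
         hash_length: 0        # safe now: applies only to user-visible modules
         hide_implicits: true  # hide modules for dependencies...
         all:
           autoload: direct    # ...while still autoloading direct dependencies
   ```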
11. **Updated container labeling**
Nightly Docker images from the `develop` branch will now be tagged as `:develop` and
`:nightly`. The `:latest` tag is no longer associated with `:develop`, but with the
latest stable release. Releases will be tagged with `:{major}`, `:{major}.{minor}`
and `:{major}.{minor}.{patch}`. `ubuntu:18.04` has also been removed from the list of
generated Docker images, as it is no longer supported. See #40593.
## Other new commands and directives
* `spack env activate` without arguments now loads a `default` environment that you do
not have to create (#40756).
* `spack find -H` / `--hashes`: a new shortcut for piping `spack find` output to
other commands (#38663)
* Add `spack checksum --verify`, fix `--add` (#38458)
* New `default_args` context manager factors out common args for directives (#39964)
* `spack compiler find --[no]-mixed-toolchain` lets you easily mix `clang` and
`gfortran` on Linux (#40902)
## Performance improvements
* `spack external find` execution is now much faster (#39843)
* `spack location -i` now much faster on success (#40898)
* Drop redundant rpaths post install (#38976)
* ASP-based solver: avoid cycles in clingo using hidden directive (#40720)
* Fix multiple quadratic complexity issues in environments (#38771)
## Other new features of note
* archspec: update to v0.2.2, support for Sapphire Rapids, Power10, Neoverse V2 (#40917)
* Propagate variants across nodes that don't have that variant (#38512)
* Implement fish completion (#29549)
* Can now distinguish between source/binary mirror; don't ping mirror.spack.io as much (#34523)
* Improve status reporting on install (add [n/total] display) (#37903)
## Windows
This release has the best Windows support of any Spack release yet, with numerous
improvements and much larger swaths of tests passing:
* MSVC and SDK improvements (#37711, #37930, #38500, #39823, #39180)
* Windows external finding: update default paths; treat .bat as executable on Windows (#39850)
* Windows decompression: fix removal of intermediate file (#38958)
* Windows: executable/path handling (#37762)
* Windows build systems: use ninja and enable tests (#33589)
* Windows testing (#36970, #36972, #36973, #36840, #36977, #36792, #36834, #34696, #36971)
* Windows PowerShell support (#39118, #37951)
* Windows symlinking and libraries (#39933, #38599, #34701, #38578)
## Notable refactors
* User-specified flags take precedence over others in Spack compiler wrappers (#37376)
* Improve setup of build, run, and test environments (#35737, #40916)
* `make` is no longer a required system dependency of Spack (#40380)
* Support Python 3.12 (#40404, #40155, #40153)
* docs: Replace package list with packages.spack.io (#40251)
* Drop Python 2 constructs in Spack (#38720, #38718, #38703)
## Binary cache and stack updates
* e4s arm stack: duplicate and target neoverse v1 (#40369)
* Add macOS ML CI stacks (#36586)
* E4S Cray CI Stack (#37837)
* e4s cray: expand spec list (#38947)
* e4s cray sles ci: expand spec list (#39081)
## Removals, deprecations, and syntax changes
* ASP: targets, compilers and providers soft-preferences are only global (#31261)
* Parser: fix ambiguity with whitespace in version ranges (#40344)
* Module file generation is disabled by default; you'll need to enable it to use it (#37258)
* Remove deprecated "extra_instructions" option for containers (#40365)
* Stand-alone test feature deprecation postponed to v0.22 (#40600)
* buildcache push: make `--allow-root` the default and deprecate the option (#38878)
## Notable Bugfixes
* Bugfix: propagation of multivalued variants (#39833)
* Allow `/` in git versions (#39398)
* Fetch & patch: actually acquire stage lock, and many more issues (#38903)
* Environment/depfile: better escaping of targets with Git versions (#37560)
* Prevent "spack external find" to error out on wrong permissions (#38755)
* lmod: allow core compiler to be specified with a version range (#37789)
## Spack community stats
* 7,469 total packages, 303 new since `v0.20.0`
* 150 new Python packages
* 34 new R packages
* 353 people contributed to this release
* 336 committers to packages
* 65 committers to core
# v0.20.3 (2023-10-31)
## Bugfixes
- Fix a bug where `spack mirror set-url` would drop configured connection info (reverts #34210)
- Fix a minor issue with package hash computation for Python 3.12 (#40328)
# v0.20.2 (2023-10-03)
## Features in this release
Spack now supports Python 3.12 (#40155)
## Bugfixes
- Improve escaping in Tcl module files (#38375)
- Make repo cache work on repositories with zero mtime (#39214)
- Ignore errors for newer, incompatible buildcache version (#40279)
- Print an error when git is required, but missing (#40254)
- Ensure missing build dependencies get installed when using `spack install --overwrite` (#40252)
- Fix an issue where Spack freezes when the build process unexpectedly exits (#39015)
- Fix a bug where installation failures cause an unrelated `NameError` to be thrown (#39017)
- Fix an issue where Spack package versions would be incorrectly derived from git tags (#39414)
- Fix a bug triggered when file locking fails internally (#39188)
- Prevent "spack external find" to error out when a directory cannot be accessed (#38755)
- Fix multiple performance regressions in environments (#38771)
- Add more ignored modules to `pyproject.toml` for `mypy` (#38769)
# v0.20.1 (2023-07-10)
## Spack Bugfixes
- Specs removed from an environment were not actually removed if `--force` was not given (#37877)
- Speed-up module file generation (#37739)
- Hotfix for a few recipes that treat CMake as a link dependency (#35816)
- Fix re-running stand-alone test a second time, which was getting a trailing spurious failure (#37840)
- Fixed reading JSON manifest on Cray, reporting non-concrete specs (#37909)
- Fixed a few bugs when generating Dockerfiles from Spack (#37766,#37769)
- Fixed a few long-standing bugs when generating module files (#36678,#38347,#38465,#38455)
- Fixed issues with building Python extensions using an external Python (#38186)
- Fixed compiler removal from command line (#38057)
- Show external status as [e] (#33792)
- Backported `archspec` fixes (#37793)
- Improved a few error messages (#37791)
# v0.20.0 (2023-05-21)
`v0.20.0` is a major feature release.
## Features in this release
1. **`requires()` directive and enhanced package requirements**
We've added some more enhancements to requirements in Spack (#36286).
There is a new `requires()` directive for packages. `requires()` is the opposite of
`conflicts()`. You can use it to impose constraints on this package when certain
conditions are met:
```python
   requires(
       "%apple-clang",
       when="platform=darwin",
       msg="This package builds only with clang on macOS"
   )
```
More on this in [the docs](
https://spack.rtfd.io/en/latest/packaging_guide.html#conflicts-and-requirements).
You can also now add a `when:` clause to `requires:` in your `packages.yaml`
configuration or in an environment:
```yaml
   packages:
     openmpi:
       require:
       - any_of: ["%gcc"]
         when: "@:4.1.4"
         message: "Only OpenMPI 4.1.5 and up can build with fancy compilers"
```
More details can be found [here](
https://spack.readthedocs.io/en/latest/build_settings.html#package-requirements)
2. **Exact versions**
Spack did not previously have a way to distinguish a version if it was a prefix of
some other version. For example, `@3.2` would match `3.2`, `3.2.1`, `3.2.2`, etc. You
can now match *exactly* `3.2` with `@=3.2`. This is useful, for example, if you need
to patch *only* the `3.2` version of a package. The new syntax is described in [the docs](
https://spack.readthedocs.io/en/latest/basic_usage.html#version-specifier).
Generally, when writing packages, you should prefer to use ranges like `@3.2` over
the specific versions, as this allows the concretizer more leeway when selecting
versions of dependencies. More details and recommendations are in the [packaging guide](
https://spack.readthedocs.io/en/latest/packaging_guide.html#ranges-versus-specific-versions).
See #36273 for full details on the version refactor.
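
   For example, with a hypothetical package name:

   ```
   pkg@3.2    # a range: matches 3.2, 3.2.1, 3.2.2, ...
   pkg@=3.2   # exact: matches only version 3.2
   ```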
3. **New testing interface**
Writing package tests is now much simpler with a new [test interface](
https://spack.readthedocs.io/en/latest/packaging_guide.html#stand-alone-tests).
Writing a test is now as easy as adding a method that starts with `test_`:
```python
   class MyPackage(Package):
       ...

       def test_always_fails(self):
           """use assert to always fail"""
           assert False

       def test_example(self):
           """run installed example"""
           example = which(self.prefix.bin.example)
           example()
```
You can use Python's native `assert` statement to implement your checks -- no more
need to fiddle with `run_test` or other test framework methods. Spack will
   introspect the class and run `test_*` methods when you run `spack test`.
4. **More stable concretization**
* Now, `spack concretize` will *only* concretize the new portions of the environment
and will not change existing parts of an environment unless you specify `--force`.
This has always been true for `unify:false`, but not for `unify:true` and
`unify:when_possible` environments. Now it is true for all of them (#37438, #37681).
* The concretizer has a new `--reuse-deps` argument that *only* reuses dependencies.
That is, it will always treat the *roots* of your environment as it would with
`--fresh`. This allows you to upgrade just the roots of your environment while
keeping everything else stable (#30990).
5. **Weekly develop snapshot releases**
Since last year, we have maintained a buildcache of `develop` at
https://binaries.spack.io/develop, but the cache can grow to contain so many builds
   as to be unwieldy. When we get a stable `develop` build, we snapshot the release and
   add a corresponding tag to the Spack repository. So, you can use a stack from a
   specific day. There are now tags in the Spack repository like:
* `develop-2023-05-14`
* `develop-2023-05-18`
that correspond to build caches like:
* https://binaries.spack.io/develop-2023-05-14/e4s
* https://binaries.spack.io/develop-2023-05-18/e4s
We plan to store these snapshot releases weekly.
6. **Specs in buildcaches can be referenced by hash.**
* Previously, you could run `spack buildcache list` and see the hashes in
buildcaches, but referring to them by hash would fail.
* You can now run commands like `spack spec` and `spack install` and refer to
buildcache hashes directly, e.g. `spack install /abc123` (#35042)
7. **New package and buildcache index websites**
Our public websites for searching packages have been completely revamped and updated.
You can check them out here:
* *Package Index*: https://packages.spack.io
* *Buildcache Index*: https://cache.spack.io
Both are searchable and more interactive than before. Currently major releases are
shown; UI for browsing `develop` snapshots is coming soon.
8. **Default CMake and Meson build types are now Release**
Spack has historically defaulted to building with optimization and debugging, but
packages like `llvm` can be enormous with debug turned on. Our default build type for
all Spack packages is now `Release` (#36679, #37436). This has a number of benefits:
* much smaller binaries;
* higher default optimization level; and
* defining `NDEBUG` disables assertions, which may lead to further speedups.
You can still get the old behavior back through requirements and package preferences.
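
   For example, a package preference like the following (an illustrative sketch;
   `build_type` applies to CMake-based packages) restores a debug-friendly build:

   ```yaml
   packages:
     all:
       variants: build_type=RelWithDebInfo
   ```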
## Other new commands and directives
* `spack checksum` can automatically add new versions to package (#24532)
* new command: `spack pkg grep` to easily search package files (#34388)
* New `maintainers` directive (#35083)
* Add `spack buildcache push` (alias to `buildcache create`) (#34861)
* Allow using `-j` to control the parallelism of concretization (#37608)
* Add `--exclude` option to 'spack external find' (#35013)
## Other new features of note
* editing: add higher-precedence `SPACK_EDITOR` environment variable
* Many YAML formatting improvements from updating `ruamel.yaml` to the latest version
  supporting Python 3.6 (#31091, #24885, #37008).
* Requirements and preferences should not define (non-git) versions (#37687, #37747)
* Environments now store spack version/commit in `spack.lock` (#32801)
* User can specify the name of the `packages` subdirectory in repositories (#36643)
* Add container images supporting RHEL alternatives (#36713)
* make version(...) kwargs explicit (#36998)
## Notable refactors
* buildcache create: reproducible tarballs (#35623)
* Bootstrap most of Spack dependencies using environments (#34029)
* Split `satisfies(..., strict=True/False)` into two functions (#35681)
* spack install: simplify behavior when inside environments (#35206)
## Binary cache and stack updates
* Major simplification of CI boilerplate in stacks (#34272, #36045)
* Many improvements to our CI pipeline's reliability
## Removals, Deprecations, and disablements
* Module file generation is disabled by default; you'll need to enable it to use it (#37258)
* Support for Python 2 was deprecated in `v0.19.0` and has been removed. `v0.20.0` only
supports Python 3.6 and higher.
* Deprecated target names are no longer recognized by Spack. Use generic names instead:
* `graviton` is now `cortex_a72`
* `graviton2` is now `neoverse_n1`
* `graviton3` is now `neoverse_v1`
* `blacklist` and `whitelist` in module configuration were deprecated in `v0.19.0` and are
removed in this release. Use `exclude` and `include` instead.
* The `ignore=` parameter of the `extends()` directive has been removed. It was not used by
any builtin packages and is no longer needed to avoid conflicts in environment views (#35588).
* Support for the old YAML buildcache format has been removed. It was deprecated in `v0.19.0` (#34347).
* `spack find --bootstrap` has been removed. It was deprecated in `v0.19.0`. Use `spack
--bootstrap find` instead (#33964).
* `spack bootstrap trust` and `spack bootstrap untrust` are now removed, having been
deprecated in `v0.19.0`. Use `spack bootstrap enable` and `spack bootstrap disable`.
* The `--mirror-name`, `--mirror-url`, and `--directory` options to buildcache and
mirror commands were deprecated in `v0.19.0` and have now been removed. They have been
replaced by positional arguments (#37457).
* Deprecate `env:` as top level environment key (#37424)
* deprecate buildcache create --rel, buildcache install --allow-root (#37285)
* Support for very old perl-like spec format strings (e.g., `$_$@$%@+$+$=`) has been
  removed (#37425). This was deprecated in `v0.15` (#10556).
## Notable Bugfixes
* bugfix: don't fetch package metadata for unknown concrete specs (#36990)
* Improve package source code context display on error (#37655)
* Relax environment manifest filename requirements and lockfile identification criteria (#37413)
* `installer.py`: drop build edges of installed packages by default (#36707)
* Bugfix: package requirements with git commits (#35057, #36347)
* Package requirements: allow single specs in requirement lists (#36258)
* conditional variant values: allow boolean (#33939)
* spack uninstall: follow run/link edges on --dependents (#34058)
## Spack community stats
* 7,179 total packages, 499 new since `v0.19.0`
* 329 new Python packages
* 31 new R packages
* 336 people contributed to this release
* 317 committers to packages
* 62 committers to core
# v0.19.1 (2023-02-07)
### Spack Bugfixes
* `buildcache create`: make "file exists" less verbose (#35019)
* `spack mirror create`: don't change paths to urls (#34992)
* Improve error message for requirements (#33988)
* uninstall: fix accidental cubic complexity (#34005)
* scons: fix signature for `install_args` (#34481)
* Fix `combine_phase_logs` text encoding issues (#34657)
* Use a module-like object to propagate changes in the MRO, when setting build env (#34059)
* PackageBase should not define builder legacy attributes (#33942)
* Forward lookup of the "run_tests" attribute (#34531)
* Bugfix for timers (#33917, #33900)
* Fix path handling in prefix inspections (#35318)
* Fix libtool filter for Fujitsu compilers (#34916)
* Bug fix for duplicate rpath errors on macOS when creating build caches (#34375)
* FileCache: delete the new cache file on exception (#34623)
* Propagate exceptions from Spack python console (#34547)
* Tests: Fix a bug/typo in a `config_values.py` fixture (#33886)
* Various CI fixes (#33953, #34560, #34828)
* Docs: remove monitors and analyzers, typos (#34358, #33926)
* bump release version for tutorial command (#33859)
# v0.19.0 (2022-11-11)
`v0.19.0` is a major feature release.
## Major features in this release
1. **Package requirements**
Spack's traditional [package preferences](
https://spack.readthedocs.io/en/latest/build_settings.html#package-preferences)
   are soft, but we've added hard requirements to `packages.yaml` and `spack.yaml`
(#32528, #32369). Package requirements use the same syntax as specs:
```yaml
   packages:
     libfabric:
       require: "@1.13.2"
     mpich:
       require:
       - one_of: ["+cuda", "+rocm"]
```
More details in [the docs](
https://spack.readthedocs.io/en/latest/build_settings.html#package-requirements).
2. **Environment UI Improvements**
* Fewer surprising modifications to `spack.yaml` (#33711):
* `spack install` in an environment will no longer add to the `specs:` list; you'll
need to either use `spack add <spec>` or `spack install --add <spec>`.
* Similarly, `spack uninstall` will not remove from your environment's `specs:`
list; you'll need to use `spack remove` or `spack uninstall --remove`.
This will make it easier to manage an environment, as there is clear separation
between the stack to be installed (`spack.yaml`/`spack.lock`) and which parts of
it should be installed (`spack install` / `spack uninstall`).
* `concretizer:unify:true` is now the default mode for new environments (#31787)
We see more users creating `unify:true` environments now. Users who need
`unify:false` can add it to their environment to get the old behavior. This will
concretize every spec in the environment independently.
* Include environment configuration from URLs (#29026, [docs](
https://spack.readthedocs.io/en/latest/environments.html#included-configurations))
You can now include configuration in your environment directly from a URL:
```yaml
     spack:
       include:
       - https://github.com/path/to/raw/config/compilers.yaml
```
3. **Multiple Build Systems**
An increasing number of packages in the ecosystem need the ability to support
multiple build systems (#30738, [docs](
https://spack.readthedocs.io/en/latest/packaging_guide.html#multiple-build-systems)),
either across versions, across platforms, or within the same version of the software.
This has been hard to support through multiple inheritance, as methods from different
build system superclasses would conflict. `package.py` files can now define separate
builder classes with installation logic for different build systems, e.g.:
```python
   class ArpackNg(CMakePackage, AutotoolsPackage):
       build_system(
           conditional("cmake", when="@0.64:"),
           conditional("autotools", when="@:0.63"),
           default="cmake",
       )


   class CMakeBuilder(spack.build_systems.cmake.CMakeBuilder):
       def cmake_args(self):
           pass


   class AutotoolsBuilder(spack.build_systems.autotools.AutotoolsBuilder):
       def configure_args(self):
           pass
```
4. **Compiler and variant propagation**
Currently, compiler flags and variants are inconsistent: compiler flags set for a
package are inherited by its dependencies, while variants are not. We should have
these be consistent by allowing for inheritance to be enabled or disabled for both
variants and compiler flags.
Example syntax:
- `package ++variant`:
enabled variant that will be propagated to dependencies
- `package +variant`:
enabled variant that will NOT be propagated to dependencies
- `package ~~variant`:
disabled variant that will be propagated to dependencies
- `package ~variant`:
disabled variant that will NOT be propagated to dependencies
- `package cflags==-g`:
`cflags` will be propagated to dependencies
- `package cflags=-g`:
`cflags` will NOT be propagated to dependencies
   Syntax for non-boolean variants is similar to compiler flags. More in the docs for
[variants](
https://spack.readthedocs.io/en/latest/basic_usage.html#variants) and [compiler flags](
https://spack.readthedocs.io/en/latest/basic_usage.html#compiler-flags).
5. **Enhancements to git version specifiers**
* `v0.18.0` added the ability to use git commits as versions. You can now use the
`git.` prefix to specify git tags or branches as versions. All of these are valid git
versions in `v0.19` (#31200):
```console
foo@abcdef1234abcdef1234abcdef1234abcdef1234 # raw commit
foo@git.abcdef1234abcdef1234abcdef1234abcdef1234 # commit with git prefix
foo@git.develop # the develop branch
foo@git.0.19 # use the 0.19 tag
```
* `v0.19` also gives you more control over how Spack interprets git versions, in case
Spack cannot detect the version from the git repository. You can suffix a git
version with `=<version>` to force Spack to concretize it as a particular version
(#30998, #31914, #32257):
```console
# use mybranch, but treat it as version 3.2 for version comparison
foo@git.mybranch=3.2
# use the given commit, but treat it as develop for version comparison
foo@git.abcdef1234abcdef1234abcdef1234abcdef1234=develop
```
More in [the docs](
https://spack.readthedocs.io/en/latest/basic_usage.html#version-specifier)
6. **Changes to Cray EX Support**
Cray machines have historically had their own "platform" within Spack, because we
needed to go through the module system to leverage compilers and MPI installations on
these machines. The Cray EX programming environment now provides standalone `craycc`
executables and proper `mpicc` wrappers, so Spack can treat EX machines like Linux
with extra packages (#29392).
We expect this to greatly reduce bugs, as external packages and compilers can now be
used by prefix instead of through modules. We will also no longer be subject to
reproducibility issues when modules change from Cray PE release to release and from
   site to site. This also simplifies dealing with the underlying Linux OS on Cray
systems, as Spack will properly model the machine's OS as either SuSE or RHEL.
7. **Improvements to tests and testing in CI**
* `spack ci generate --tests` will generate a `.gitlab-ci.yml` file that not only does
builds but also runs tests for built packages (#27877). Public GitHub pipelines now
also run tests in CI.
* `spack test run --explicit` will only run tests for packages that are explicitly
installed, instead of all packages.
8. **Experimental binding link model**
You can add a new option to `config.yaml` to make Spack embed absolute paths to
needed shared libraries in ELF executables and shared libraries on Linux (#31948, [docs](
https://spack.readthedocs.io/en/latest/config_yaml.html#shared-linking-bind)):
```yaml
   config:
     shared_linking:
       type: rpath
       bind: true
```
   This can improve launch time at scale for parallel applications, and it can make
   installations less susceptible to environment variables like `LD_LIBRARY_PATH`,
   especially when dealing with external libraries that use `RUNPATH`. You can think of
   this as a faster, higher-precedence version of `RPATH`.
## Other new features of note
* `spack spec` prints dependencies more legibly. Dependencies in the output now appear
at the *earliest* level of indentation possible (#33406)
* You can override `package.py` attributes like `url`, directly in `packages.yaml`
(#33275, [docs](
https://spack.readthedocs.io/en/latest/build_settings.html#assigning-package-attributes))
* There are a number of new architecture-related format strings you can use in Spack
configuration files to specify paths (#29810, [docs](
https://spack.readthedocs.io/en/latest/configuration.html#config-file-variables))
* Spack now supports bootstrapping Clingo on Windows (#33400)
* There is now support for an `RPATH`-like library model on Windows (#31930)
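
The `packages.yaml` attribute override mentioned above can look roughly like this (the mirror URL is hypothetical):

```yaml
packages:
  openmpi:
    package_attributes:
      url: https://mirror.example.com/openmpi
```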
## Performance Improvements
* Major performance improvements for installation from binary caches (#27610, #33628,
#33636, #33608, #33590, #33496)
* Test suite can now be parallelized using `xdist` (used in GitHub Actions) (#32361)
* Reduce lock contention for parallel builds in environments (#31643)
## New binary caches and stacks
* We now build nearly all of E4S with `oneapi` in our buildcache (#31781, #31804,
  #31803, #31840, #31991, #32117, #32107, #32239)
* Added 3 new machine learning-centric stacks to binary cache: `x86_64_v3`, CUDA, ROCm
(#31592, #33463)
## Removals and Deprecations
* Support for Python 3.5 is dropped (#31908). Only Python 2.7 and 3.6+ are officially
supported.
* This is the last Spack release that will support Python 2 (#32615). Spack `v0.19`
will emit a deprecation warning if you run it with Python 2, and Python 2 support will
soon be removed from the `develop` branch.
* `LD_LIBRARY_PATH` is no longer set by default by `spack load` or module loads.
Setting `LD_LIBRARY_PATH` in Spack environments/modules can cause binaries from
outside of Spack to crash, and Spack's own builds use `RPATH` and do not need
`LD_LIBRARY_PATH` set in order to run. If you still want the old behavior, you
can run these commands to configure Spack to set `LD_LIBRARY_PATH`:
```console
spack config add modules:prefix_inspections:lib64:[LD_LIBRARY_PATH]
spack config add modules:prefix_inspections:lib:[LD_LIBRARY_PATH]
```
* The `spack:concretization:[together|separately]` has been removed after being
deprecated in `v0.18`. Use `concretizer:unify:[true|false]`.
* `config:module_roots` is no longer supported after being deprecated in `v0.18`. Use
configuration in module sets instead (#28659, [docs](
https://spack.readthedocs.io/en/latest/module_file_support.html)).
* `spack activate` and `spack deactivate` are no longer supported, having been
deprecated in `v0.18`. Use an environment with a view instead of
activating/deactivating ([docs](
https://spack.readthedocs.io/en/latest/environments.html#configuration-in-spack-yaml)).
* The old YAML format for buildcaches is now deprecated (#33707). If you are using an
old buildcache with YAML metadata you will need to regenerate it with JSON metadata.
* `spack bootstrap trust` and `spack bootstrap untrust` are deprecated in favor of
`spack bootstrap enable` and `spack bootstrap disable` and will be removed in `v0.20`.
(#33600)
* The `graviton2` architecture has been renamed to `neoverse_n1`, and `graviton3`
is now `neoverse_v1`. Buildcaches using the old architecture names will need to be rebuilt.
* The terms `blacklist` and `whitelist` have been replaced with `include` and `exclude`
in all configuration files (#31569). You can use `spack config update` to
automatically fix your configuration files.
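
  As an illustrative sketch of the rename in module configuration:

  ```yaml
  modules:
    default:
      tcl:
        include: ["gcc", "llvm"]  # formerly `whitelist`
        exclude: ["%gcc@9"]       # formerly `blacklist`
  ```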
## Notable Bugfixes
* Permission setting on installation now handles effective uid properly (#19980)
* `buildable:true` for an MPI implementation now overrides `buildable:false` for `mpi` (#18269)
* Improved error messages when attempting to use an unconfigured compiler (#32084)
* Do not punish explicitly requested compiler mismatches in the solver (#30074)
* `spack stage`: add missing --fresh and --reuse (#31626)
* Fixes for adding build system executables like `cmake` to package scope (#31739)
* Bugfix for binary relocation with aliased strings produced by newer `binutils` (#32253)
## Spack community stats
* 6,751 total packages, 335 new since `v0.18.0`
* 141 new Python packages
* 89 new R packages
* 303 people contributed to this release
* 287 committers to packages
* 57 committers to core
# v0.18.1 (2022-07-19)

### Spack Bugfixes
* Fix several bugs related to bootstrapping (#30834,#31042,#31180)
* Fix a regression that was causing spec hashes to differ between
  Python 2 and Python 3 (#31092)
* Fixed compiler flags for oneAPI and DPC++ (#30856)
* Fixed several issues related to concretization (#31142,#31153,#31170,#31226)
* Improved support for Cray manifest file and `spack external find` (#31144,#31201,#31173,#31186)
* Assign a version to openSUSE Tumbleweed according to the GLIBC version
  in the system (#19895)
* Improved Dockerfile generation for `spack containerize` (#29741,#31321)
* Fixed a few bugs related to concurrent execution of commands (#31509,#31493,#31477)

### Package updates
* WarpX: add v22.06, fixed libs property (#30866,#31102)

View File

```diff
@@ -27,57 +27,12 @@
 # And here's the CITATION.cff format:
 #
 cff-version: 1.2.0
-type: software
 message: "If you are referencing Spack in a publication, please cite the paper below."
-title: "The Spack Package Manager: Bringing Order to HPC Software Chaos"
-abstract: >-
-  Large HPC centers spend considerable time supporting software for thousands of users, but the
-  complexity of HPC software is quickly outpacing the capabilities of existing software management
-  tools. Scientific applications require specific versions of compilers, MPI, and other dependency
-  libraries, so using a single, standard software stack is infeasible. However, managing many
-  configurations is difficult because the configuration space is combinatorial in size. We
-  introduce Spack, a tool used at Lawrence Livermore National Laboratory to manage this complexity.
-  Spack provides a novel, recursive specification syntax to invoke parametric builds of packages
-  and dependencies. It allows any number of builds to coexist on the same system, and it ensures
-  that installed packages can find their dependencies, regardless of the environment. We show
-  through real-world use cases that Spack supports diverse and demanding applications, bringing
-  order to HPC software chaos.
 preferred-citation:
-  title: "The Spack Package Manager: Bringing Order to HPC Software Chaos"
   type: conference-paper
-  url: "https://tgamblin.github.io/pubs/spack-sc15.pdf"
+  doi: "10.1145/2807591.2807623"
+  url: "https://github.com/spack/spack"
   authors:
-  - family-names: "Gamblin"
-    given-names: "Todd"
-  - family-names: "LeGendre"
-    given-names: "Matthew"
-  - family-names: "Collette"
-    given-names: "Michael R."
-  - family-names: "Lee"
-    given-names: "Gregory L."
-  - family-names: "Moody"
-    given-names: "Adam"
-  - family-names: "de Supinski"
-    given-names: "Bronis R."
-  - family-names: "Futral"
-    given-names: "Scott"
-  conference:
-    name: "Supercomputing 2015 (SC15)"
-    city: "Austin"
-    region: "Texas"
-    country: "US"
-    date-start: 2015-11-15
-    date-end: 2015-11-20
-    month: 11
-    year: 2015
-identifiers:
-  - description: "The concept DOI of the work."
-    type: doi
-    value: 10.1145/2807591.2807623
-  - description: "The DOE Document Release Number of the work"
-    type: other
-    value: "LLNL-CONF-669890"
-authors:
 - family-names: "Gamblin"
   given-names: "Todd"
 - family-names: "LeGendre"
@@ -92,3 +47,12 @@ authors:
   given-names: "Bronis R."
 - family-names: "Futral"
   given-names: "Scott"
+title: "The Spack Package Manager: Bringing Order to HPC Software Chaos"
+conference:
+  name: "Supercomputing 2015 (SC15)"
+  city: "Austin"
+  region: "Texas"
+  country: "USA"
+  month: November 15-20
+  year: 2015
+notes: LLNL-CONF-669890
```

View File

```diff
@@ -1,6 +1,6 @@
 MIT License
 
-Copyright (c) 2013-2024 LLNS, LLC and other Spack Project Developers.
+Copyright (c) 2013-2022 LLNS, LLC and other Spack Project Developers.
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
```

View File

@@ -1,34 +1,12 @@
<div align="left">
<h2>
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://cdn.rawgit.com/spack/spack/develop/share/spack/logo/spack-logo-white-text.svg" width="250">
<source media="(prefers-color-scheme: light)" srcset="https://cdn.rawgit.com/spack/spack/develop/share/spack/logo/spack-logo-text.svg" width="250">
<img alt="Spack" src="https://cdn.rawgit.com/spack/spack/develop/share/spack/logo/spack-logo-text.svg" width="250">
</picture>
# <img src="https://cdn.rawgit.com/spack/spack/develop/share/spack/logo/spack-logo.svg" width="64" valign="middle" alt="Spack"/> Spack
[![Unit Tests](https://github.com/spack/spack/workflows/linux%20tests/badge.svg)](https://github.com/spack/spack/actions)
[![Bootstrapping](https://github.com/spack/spack/actions/workflows/bootstrap.yml/badge.svg)](https://github.com/spack/spack/actions/workflows/bootstrap.yml)
[![codecov](https://codecov.io/gh/spack/spack/branch/develop/graph/badge.svg)](https://codecov.io/gh/spack/spack)
[![Containers](https://github.com/spack/spack/actions/workflows/build-containers.yml/badge.svg)](https://github.com/spack/spack/actions/workflows/build-containers.yml)
[![Read the Docs](https://readthedocs.org/projects/spack/badge/?version=latest)](https://spack.readthedocs.io)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![Slack](https://slack.spack.io/badge.svg)](https://slack.spack.io)
<br>
<br clear="all">
<a href="https://github.com/spack/spack/actions/workflows/ci.yml"><img src="https://github.com/spack/spack/workflows/ci/badge.svg" alt="CI Status"></a>
<a href="https://github.com/spack/spack/actions/workflows/bootstrapping.yml"><img src="https://github.com/spack/spack/actions/workflows/bootstrap.yml/badge.svg" alt="Bootstrap Status"></a>
<a href="https://github.com/spack/spack/actions/workflows/build-containers.yml"><img src="https://github.com/spack/spack/actions/workflows/build-containers.yml/badge.svg" alt="Containers Status"></a>
<a href="https://spack.readthedocs.io"><img src="https://readthedocs.org/projects/spack/badge/?version=latest" alt="Documentation Status"></a>
<a href="https://codecov.io/gh/spack/spack"><img src="https://codecov.io/gh/spack/spack/branch/develop/graph/badge.svg" alt="Code coverage"/></a>
<a href="https://slack.spack.io"><img src="https://slack.spack.io/badge.svg" alt="Slack"/></a>
<a href="https://matrix.to/#/#spack-space:matrix.org"><img src="https://img.shields.io/matrix/spack-space%3Amatrix.org?label=matrix" alt="Matrix"/></a>
</h2>
**[Getting Started] &nbsp;&nbsp; [Config] &nbsp;&nbsp; [Community] &nbsp;&nbsp; [Contributing] &nbsp;&nbsp; [Packaging Guide]**
[Getting Started]: https://spack.readthedocs.io/en/latest/getting_started.html
[Config]: https://spack.readthedocs.io/en/latest/configuration.html
[Community]: #community
[Contributing]: https://spack.readthedocs.io/en/latest/contribution_guide.html
[Packaging Guide]: https://spack.readthedocs.io/en/latest/packaging_guide.html
</div>
Spack is a multi-platform package manager that builds and installs
multiple versions and configurations of software. It works on Linux,
@@ -84,14 +62,10 @@ Resources:
* **Slack workspace**: [spackpm.slack.com](https://spackpm.slack.com).
  To get an invitation, visit [slack.spack.io](https://slack.spack.io).
* **Matrix space**: [#spack-space:matrix.org](https://matrix.to/#/#spack-space:matrix.org):
  [bridged](https://github.com/matrix-org/matrix-appservice-slack#matrix-appservice-slack) to Slack.
* [**Github Discussions**](https://github.com/spack/spack/discussions):
  for Q&A and discussions. Note the pinned discussions for announcements.
* **X**: [@spackpm](https://twitter.com/spackpm). Be sure to
* [**Github Discussions**](https://github.com/spack/spack/discussions): not just for discussions, also Q&A.
* **Mailing list**: [groups.google.com/d/forum/spack](https://groups.google.com/d/forum/spack)
* **Twitter**: [@spackpm](https://twitter.com/spackpm). Be sure to
  `@mention` us!
* **Mailing list**: [groups.google.com/d/forum/spack](https://groups.google.com/d/forum/spack):
  only for announcements. Please use other venues for discussions.

Contributing
------------------------

View File

@@ -2,26 +2,24 @@
## Supported Versions

We provide security updates for `develop` and for the last two
stable (`0.x`) release series of Spack. Security updates will be
made available as patch (`0.x.1`, `0.x.2`, etc.) releases.
We provide security updates for the following releases.

For more on Spack's release structure, see
[`README.md`](https://github.com/spack/spack#releases).

| Version | Supported          |
| ------- | ------------------ |
| develop | :white_check_mark: |
| 0.17.x  | :white_check_mark: |
| 0.16.x  | :white_check_mark: |

## Reporting a Vulnerability

You can report a vulnerability using GitHub's private reporting
feature:
1. Go to [github.com/spack/spack/security](https://github.com/spack/spack/security).
2. Click "Report a vulnerability" in the upper right corner of that page.
3. Fill out the form and submit your draft security advisory.
More details are available in
[GitHub's docs](https://docs.github.com/en/code-security/security-advisories/guidance-on-reporting-and-writing/privately-reporting-a-security-vulnerability).
You can expect to hear back about security issues within two days.
If your security issue is accepted, we will do our best to release
a fix within a week. If fixing the issue will take longer than
this, we will discuss timeline options with you.
To report a vulnerability or other security
issue, email maintainers@spack.io.
You can expect to hear back within two days.
If your security issue is accepted, we will do
our best to release a fix within a week. If
fixing the issue will take longer than this,
we will discuss timeline options with you.

View File

@@ -1,4 +1,4 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -10,7 +10,6 @@ def getpywin():
    try:
        import win32con  # noqa: F401
    except ImportError:
        print("pyWin32 not installed but is required...\nInstalling via pip:")
        subprocess.check_call([sys.executable, "-m", "pip", "-q", "install", "--upgrade", "pip"])
        subprocess.check_call([sys.executable, "-m", "pip", "-q", "install", "pywin32"])
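The hunk above shows haspywin.py's try/except-then-pip fallback for pywin32. The same pattern can be written generically; a minimal sketch (the `ensure_module` helper is illustrative, not part of Spack):

```python
import importlib
import subprocess
import sys


def ensure_module(mod_name, pip_name=None):
    """Return module `mod_name`, pip-installing it on first ImportError.

    Generic sketch of the fallback haspywin.py uses for pywin32.
    """
    try:
        return importlib.import_module(mod_name)
    except ImportError:
        print("%s not installed but is required...\nInstalling via pip:" % mod_name)
        subprocess.check_call(
            [sys.executable, "-m", "pip", "-q", "install", pip_name or mod_name]
        )
        return importlib.import_module(mod_name)
```

Calling `ensure_module("win32con", "pywin32")` would reproduce the behavior shown in the diff.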

View File

@@ -1,6 +1,6 @@
#!/bin/sh
#
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# sbang project developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,7 +1,7 @@
#!/bin/sh
# -*- python -*-
#
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -25,15 +25,19 @@ exit 1
# Line above is a shell no-op, and ends a python multi-line comment.
# The code above runs this file with our preferred python interpreter.
from __future__ import print_function
import os
import os.path
import sys
min_python3 = (3, 6)
if sys.version_info[:2] < min_python3:
min_python3 = (3, 5)
if sys.version_info[:2] < (2, 7) or (
    sys.version_info[:2] >= (3, 0) and sys.version_info[:2] < min_python3
):
    v_info = sys.version_info[:3]
    msg = "Spack requires Python %d.%d or higher " % min_python3
    msg = "Spack requires Python 2.7 or %d.%d or higher " % min_python3
    msg += "You are running spack with Python %d.%d.%d." % v_info
    sys.exit(msg)
@@ -45,8 +49,52 @@ spack_prefix = os.path.dirname(os.path.dirname(spack_file))
spack_lib_path = os.path.join(spack_prefix, "lib", "spack")
sys.path.insert(0, spack_lib_path)
from spack_installable.main import main  # noqa: E402
# Add external libs
spack_external_libs = os.path.join(spack_lib_path, "external")
if sys.version_info[:2] <= (2, 7):
    sys.path.insert(0, os.path.join(spack_external_libs, "py2"))
sys.path.insert(0, spack_external_libs)
# Here we delete ruamel.yaml in case it has been already imported from site
# (see #9206 for a broader description of the issue).
#
# Briefly: ruamel.yaml produces a .pth file when installed with pip that
# makes the site installed package the preferred one, even though sys.path
# is modified to point to another version of ruamel.yaml.
if "ruamel.yaml" in sys.modules:
    del sys.modules["ruamel.yaml"]
if "ruamel" in sys.modules:
    del sys.modules["ruamel"]
# The following code is here to avoid failures when updating
# the develop version, due to spurious argparse.pyc files remaining
# in the libs/spack/external directory, see:
# https://github.com/spack/spack/pull/25376
# TODO: Remove in v0.18.0 or later
try:
    import argparse
except ImportError:
    argparse_pyc = os.path.join(spack_external_libs, "argparse.pyc")
    if not os.path.exists(argparse_pyc):
        raise
    try:
        os.remove(argparse_pyc)
        import argparse  # noqa: F401
    except Exception:
        msg = (
            "The file\n\n\t{0}\n\nis corrupted and cannot be deleted by Spack. "
            "Either delete it manually or ask some administrator to "
            "delete it for you."
        )
        print(msg.format(argparse_pyc))
        sys.exit(1)
import spack.main  # noqa: E402
# Once we've set up the system path, run the spack main method
if __name__ == "__main__":
    sys.exit(main())
    sys.exit(spack.main.main())
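The `sys.modules` purge shown above for ruamel.yaml works because Python caches imports; dropping the cache entry makes the next `import` statement re-resolve the module through the (possibly modified) `sys.path`. A minimal sketch using `json` as a stand-in for ruamel.yaml:

```python
import sys

import json  # first import: binds whatever copy wins on the current sys.path

cached = sys.modules["json"]

# Dropping the cache entry forces the next `import` statement to resolve the
# module again, so a vendored copy placed earlier on sys.path would now win.
del sys.modules["json"]

import json  # re-resolved from sys.path

assert "json" in sys.modules
```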

View File

@@ -1,6 +1,6 @@
#!/bin/sh
#
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -72,7 +72,6 @@ config:
root: $TMP_DIR/install
misc_cache: $$user_cache_path/cache
source_cache: $$user_cache_path/source
environments_root: $TMP_DIR/envs
EOF
cat >"$SPACK_USER_CONFIG_PATH/bootstrap.yaml" <<EOF
bootstrap:

View File

@@ -1,4 +1,4 @@
:: Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
:: Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
:: Spack Project Developers. See the top-level COPYRIGHT file for details.
::
:: SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -14,7 +14,7 @@
::
@echo off
set spack="%SPACK_ROOT%"\bin\spack
set spack=%SPACK_ROOT%\bin\spack
::#######################################################################
:: This is a wrapper around the spack command that forwards calls to
@@ -50,48 +50,25 @@ setlocal enabledelayedexpansion
:: flags will always start with '-', e.g. --help or -V
:: subcommands will never start with '-'
:: everything after the subcommand is an arg
:process_cl_args
rem Set first cl argument (denoted by %1) to be processed
set t=%1
rem shift moves all cl positional arguments left by one
rem meaning %2 is now %1, this allows us to iterate over each
rem argument
shift
rem assign next "first" cl argument to cl_args, will be null when
rem there are now further arguments to process
set cl_args=%1
if "!t:~0,1!" == "-" (
    if defined _sp_subcommand (
        rem We already have a subcommand, processing args now
        if not defined _sp_args (
            set "_sp_args=!t!"
        ) else (
            set "_sp_args=!_sp_args! !t!"
        )
    ) else (
        if not defined _sp_flags (
            set "_sp_flags=!t!"
        ) else (
            set "_sp_flags=!_sp_flags! !t!"
        )
    )
) else if not defined _sp_subcommand (
    set "_sp_subcommand=!t!"
) else (
    if not defined _sp_args (
        set "_sp_args=!t!"
    ) else (
        set "_sp_args=!_sp_args! !t!"
    )
)
rem if this is not null, we have more tokens to process
rem start above process again with remaining unprocessed cl args
if defined cl_args goto :process_cl_args
for %%x in (%*) do (
    set t="%%~x"
    if "!t:~0,1!" == "-" (
        if defined _sp_subcommand (
            :: We already have a subcommand, processing args now
            set "_sp_args=!_sp_args! !t!"
        ) else (
            set "_sp_flags=!_sp_flags! !t!"
            shift
        )
    ) else if not defined _sp_subcommand (
        set "_sp_subcommand=!t!"
        shift
    ) else (
        set "_sp_args=!_sp_args! !t!"
        shift
    )
)
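Both versions of the loop above classify tokens the same way; a compact sketch of that logic in Python (the function name is mine, not Spack's):

```python
def split_spack_args(argv):
    """Split a spack command line the way the wrapper's loop does:
    '-' tokens seen before the subcommand are global flags, the first
    bare token is the subcommand, and everything after the subcommand
    (including '-' tokens) belongs to the subcommand's arguments."""
    flags, subcommand, args = [], None, []
    for tok in argv:
        if tok.startswith("-") and subcommand is None:
            flags.append(tok)
        elif subcommand is None:
            subcommand = tok
        else:
            args.append(tok)
    return flags, subcommand, args
```

For example, `split_spack_args(["-d", "install", "--verbose", "zlib"])` yields `(["-d"], "install", ["--verbose", "zlib"])`.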
:: --help, -h and -V flags don't require further output parsing.
:: If we encounter, execute and exit
if defined _sp_flags (
@@ -106,24 +83,24 @@ if defined _sp_flags (
exit /B 0
)
)
if not defined _sp_subcommand (
    if not defined _sp_args (
        if not defined _sp_flags (
            python "%spack%" --help
            exit /B 0
        )
    )
)
:: pass parsed variables outside of local scope. Need to do
:: this because delayedexpansion can only be set by setlocal
endlocal & (
    set "_sp_flags=%_sp_flags%"
    set "_sp_args=%_sp_args%"
    set "_sp_subcommand=%_sp_subcommand%"
)
echo %_sp_flags%>flags
echo %_sp_args%>args
echo %_sp_subcommand%>subcmd
endlocal
set /p _sp_subcommand=<subcmd
set /p _sp_flags=<flags
set /p _sp_args=<args
set str_subcommand=%_sp_subcommand:"='%
set str_flags=%_sp_flags:"='%
set str_args=%_sp_args:"='%
if "%str_subcommand%"=="ECHO is off." (set "_sp_subcommand=")
if "%str_flags%"=="ECHO is off." (set "_sp_flags=")
if "%str_args%"=="ECHO is off." (set "_sp_args=")
del subcmd
del flags
del args
:: Filter out some commands. For any others, just run the command.
if "%_sp_subcommand%" == "cd" (
@@ -166,9 +143,7 @@ goto :end_switch
:: If no args or args contain --bat or -h/--help: just execute.
if NOT defined _sp_args (
goto :default_case
)
if NOT "%_sp_args%"=="%_sp_args:--help=%" (
)else if NOT "%_sp_args%"=="%_sp_args:--help=%" (
goto :default_case
) else if NOT "%_sp_args%"=="%_sp_args: -h=%" (
goto :default_case
@@ -176,11 +151,11 @@ if NOT "%_sp_args%"=="%_sp_args:--help=%" (
goto :default_case
) else if NOT "%_sp_args%"=="%_sp_args:deactivate=%" (
for /f "tokens=* USEBACKQ" %%I in (
`call python %spack% %_sp_flags% env deactivate --bat %_sp_args:deactivate=%`
`call python "%spack%" %_sp_flags% env deactivate --bat %_sp_args:deactivate=%`
) do %%I
) else if NOT "%_sp_args%"=="%_sp_args:activate=%" (
for /f "tokens=* USEBACKQ" %%I in (
`python %spack% %_sp_flags% env activate --bat %_sp_args:activate=%`
`call python "%spack%" %_sp_flags% env activate --bat %_sp_args:activate=%`
) do %%I
) else (
goto :default_case
@@ -192,7 +167,7 @@ goto :end_switch
if defined _sp_args (
if NOT "%_sp_args%"=="%_sp_args:--help=%" (
goto :default_case
) else if NOT "%_sp_args%"=="%_sp_args:-h=%" (
) else if NOT "%_sp_args%"=="%_sp_args: -h=%" (
goto :default_case
) else if NOT "%_sp_args%"=="%_sp_args:--bat=%" (
goto :default_case
@@ -201,7 +176,7 @@ if defined _sp_args (
for /f "tokens=* USEBACKQ" %%I in (
`python "%spack%" %_sp_flags% %_sp_subcommand% --bat %_sp_args%`) do %%I
)
goto :end_switch
:case_unload :case_unload
@@ -239,10 +214,10 @@ for %%Z in ("%_pa_new_path%") do if EXIST %%~sZ\NUL (
exit /b 0
:: set module system roots
:_sp_multi_pathadd
for %%I in (%~2) do (
    for %%Z in (%_sp_compatible_sys_types%) do (
        :pathadd "%~1" "%%I\%%Z"
    )
)
exit /B %ERRORLEVEL%

View File

@@ -1,148 +0,0 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
# #######################################################################
function Compare-CommonArgs {
$CMDArgs = $args[0]
# These arguments take precedence and call for no further parsing of arguments
# invoke actual Spack entrypoint with that context and exit after
"--help", "-h", "--version", "-V" | ForEach-Object {
$arg_opt = $_
if(($CMDArgs) -and ([bool]($CMDArgs.Where({$_ -eq $arg_opt})))) {
return $true
}
}
return $false
}
function Read-SpackArgs {
$SpackCMD_params = @()
$SpackSubCommand = $NULL
$SpackSubCommandArgs = @()
$args_ = $args[0]
$args_ | ForEach-Object {
if (!$SpackSubCommand) {
if($_.SubString(0,1) -eq "-")
{
$SpackCMD_params += $_
}
else{
$SpackSubCommand = $_
}
}
else{
$SpackSubCommandArgs += $_
}
}
return $SpackCMD_params, $SpackSubCommand, $SpackSubCommandArgs
}
function Set-SpackEnv {
# This method is responsible
# for processing the return from $(spack <command>)
# which are returned as System.Object[]'s containing
# a list of env commands
# Invoke-Expression can only handle one command at a time
# so we iterate over the list to invoke the env modification
# expressions one at a time
foreach($envop in $args[0]){
Invoke-Expression $envop
}
}
function Invoke-SpackCD {
if (Compare-CommonArgs $SpackSubCommandArgs) {
python "$Env:SPACK_ROOT/bin/spack" cd -h
}
else {
$LOC = $(python "$Env:SPACK_ROOT/bin/spack" location $SpackSubCommandArgs)
if (($NULL -ne $LOC)){
if ( Test-Path -Path $LOC){
Set-Location $LOC
}
else{
exit 1
}
}
else {
exit 1
}
}
}
function Invoke-SpackEnv {
if (Compare-CommonArgs $SpackSubCommandArgs[0]) {
python "$Env:SPACK_ROOT/bin/spack" env -h
}
else {
$SubCommandSubCommand = $SpackSubCommandArgs[0]
$SubCommandSubCommandArgs = $SpackSubCommandArgs[1..$SpackSubCommandArgs.Count]
switch ($SubCommandSubCommand) {
"activate" {
if (Compare-CommonArgs $SubCommandSubCommandArgs) {
python "$Env:SPACK_ROOT/bin/spack" env activate $SubCommandSubCommandArgs
}
elseif ([bool]($SubCommandSubCommandArgs.Where({$_ -eq "--pwsh"}))) {
python "$Env:SPACK_ROOT/bin/spack" env activate $SubCommandSubCommandArgs
}
elseif (!$SubCommandSubCommandArgs) {
python "$Env:SPACK_ROOT/bin/spack" env activate $SubCommandSubCommandArgs
}
else {
$SpackEnv = $(python "$Env:SPACK_ROOT/bin/spack" $SpackCMD_params env activate "--pwsh" $SubCommandSubCommandArgs)
Set-SpackEnv $SpackEnv
}
}
"deactivate" {
if ([bool]($SubCommandSubCommandArgs.Where({$_ -eq "--pwsh"}))) {
python "$Env:SPACK_ROOT/bin/spack" env deactivate $SubCommandSubCommandArgs
}
elseif($SubCommandSubCommandArgs) {
python "$Env:SPACK_ROOT/bin/spack" env deactivate -h
}
else {
$SpackEnv = $(python "$Env:SPACK_ROOT/bin/spack" $SpackCMD_params env deactivate "--pwsh")
Set-SpackEnv $SpackEnv
}
}
default {python "$Env:SPACK_ROOT/bin/spack" $SpackCMD_params $SpackSubCommand $SpackSubCommandArgs}
}
}
}
function Invoke-SpackLoad {
if (Compare-CommonArgs $SpackSubCommandArgs) {
python "$Env:SPACK_ROOT/bin/spack" $SpackCMD_params $SpackSubCommand $SpackSubCommandArgs
}
elseif ([bool]($SpackSubCommandArgs.Where({($_ -eq "--pwsh") -or ($_ -eq "--list")}))) {
python "$Env:SPACK_ROOT/bin/spack" $SpackCMD_params $SpackSubCommand $SpackSubCommandArgs
}
else {
$SpackEnv = $(python "$Env:SPACK_ROOT/bin/spack" $SpackCMD_params $SpackSubCommand "--pwsh" $SpackSubCommandArgs)
Set-SpackEnv $SpackEnv
}
}
$SpackCMD_params, $SpackSubCommand, $SpackSubCommandArgs = Read-SpackArgs $args
if (Compare-CommonArgs $SpackCMD_params) {
python "$Env:SPACK_ROOT/bin/spack" $SpackCMD_params $SpackSubCommand $SpackSubCommandArgs
exit $LASTEXITCODE
}
# Process Spack commands with special conditions
# all other commands are piped directly to Spack
switch($SpackSubCommand)
{
"cd" {Invoke-SpackCD}
"env" {Invoke-SpackEnv}
"load" {Invoke-SpackLoad}
"unload" {Invoke-SpackLoad}
default {python "$Env:SPACK_ROOT/bin/spack" $SpackCMD_params $SpackSubCommand $SpackSubCommandArgs}
}
exit $LASTEXITCODE
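`Set-SpackEnv` above applies one environment-modifying expression at a time. An analogous sketch in Python, assuming the operations arrive as simple `NAME=value` strings (an illustrative simplification of what `spack env activate --pwsh` actually emits):

```python
import os


def apply_env_ops(ops):
    """Apply a list of 'NAME=value' edits to the process environment,
    one at a time, mirroring how Set-SpackEnv invokes each expression."""
    for op in ops:
        name, _, value = op.partition("=")
        os.environ[name] = value


apply_env_ops(["SPACK_DEMO_VAR=1", "SPACK_DEMO_PATH=/tmp/demo"])
```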

View File

@@ -52,6 +52,7 @@ if defined py_path (
if defined py_exe (
    "%py_exe%" "%SPACK_ROOT%\bin\haspywin.py"
    "%py_exe%" "%SPACK_ROOT%\bin\spack" external find python >NUL
)
set "EDITOR=notepad"

View File

@@ -1,4 +1,4 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -9,15 +9,16 @@ bootstrap:
# may not be able to bootstrap all the software that Spack needs,
# depending on its type.
sources:
- name: 'github-actions-v0.5'
  metadata: $spack/share/spack/bootstrap/github-actions-v0.5
- name: 'github-actions-v0.4'
  metadata: $spack/share/spack/bootstrap/github-actions-v0.4
- name: 'github-actions-v0.3'
  metadata: $spack/share/spack/bootstrap/github-actions-v0.3
- name: 'github-actions-v0.2'
  metadata: $spack/share/spack/bootstrap/github-actions-v0.2
- name: 'github-actions-v0.1'
  metadata: $spack/share/spack/bootstrap/github-actions-v0.1
- name: 'spack-install'
  metadata: $spack/share/spack/bootstrap/spack-install
trusted:
  # By default we trust bootstrapping from sources and from binaries
  # produced on Github via the workflow
  github-actions-v0.5: true
  github-actions-v0.4: true
  github-actions-v0.3: true
  spack-install: true

View File

@@ -13,18 +13,16 @@ concretizer:
# Whether to consider installed packages or packages from buildcaches when
# concretizing specs. If `true`, we'll try to use as many installs/binaries
# as possible, rather than building. If `false`, we'll always give you a fresh
# concretization. If `dependencies`, we'll only reuse dependencies but
# give you a fresh concretization for your root specs.
# concretization.
reuse: true
# Options that tune which targets are considered for concretization. The
# concretization process is very sensitive to the number targets, and the time
# needed to reach a solution increases noticeably with the number of targets
# considered.
targets:
  # Determine whether we want to target specific or generic
  # microarchitectures. Valid values are: "microarchitectures" or "generic".
  # An example of "microarchitectures" would be "skylake" or "bulldozer",
  # while an example of "generic" would be "aarch64" or "x86_64_v4".
  # Determine whether we want to target specific or generic microarchitectures.
  # An example of the first kind might be for instance "skylake" or "bulldozer",
  # while generic microarchitectures are for instance "aarch64" or "x86_64_v4".
  granularity: microarchitectures
  # If "false" allow targets that are incompatible with the current host (for
  # instance concretize with target "icelake" while running on "haswell").
@@ -35,15 +33,4 @@ concretizer:
# environments can always be activated. When "false" perform concretization separately
# on each root spec, allowing different versions and variants of the same package in
# an environment.
unify: true
unify: false
# Option to deal with possible duplicate nodes (i.e. different nodes from the same package) in the DAG.
duplicates:
  # "none": allows a single node for any package in the DAG.
  # "minimal": allows the duplication of 'build-tools' nodes only (e.g. py-setuptools, cmake etc.)
  # "full" (experimental): allows separation of the entire build-tool stack (e.g. the entire "cmake" subDAG)
  strategy: minimal
# Option to specify compatibility between operating systems for reuse of compilers and packages
# Specified as a key: [list] where the key is the os that is being targeted, and the list contains the OS's
# it can reuse. Note this is a directional compatibility so mutual compatibility between two OS's
# requires two entries i.e. os_compatible: {sonoma: [monterey], monterey: [sonoma]}
os_compatible: {}
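Taken together, the keys in this hunk would appear in a site's `concretizer.yaml` roughly as follows (illustrative values, not the shipped defaults; `host_compatible` is the key the host-compatibility comment refers to):

```yaml
concretizer:
  reuse: true
  targets:
    granularity: microarchitectures
    host_compatible: true
  unify: true
  duplicates:
    strategy: minimal
  os_compatible:
    sonoma: [monterey]
    monterey: [sonoma]
```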

View File

@@ -19,7 +19,7 @@ config:
install_tree: install_tree:
root: $spack/opt/spack root: $spack/opt/spack
projections: projections:
all: "{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}" all: "${ARCHITECTURE}/${COMPILERNAME}-${COMPILERVER}/${PACKAGE}-${VERSION}-${HASH}"
# install_tree can include an optional padded length (int or boolean) # install_tree can include an optional padded length (int or boolean)
# default is False (do not pad) # default is False (do not pad)
# if padded_length is True, Spack will pad as close to the system max path # if padded_length is True, Spack will pad as close to the system max path
@@ -54,11 +54,6 @@ config:
# are that it precludes its use as a system package and its ability to be # are that it precludes its use as a system package and its ability to be
# pip installable. # pip installable.
# #
# In Spack environment files, chaining onto existing system Spack
# installations, the $env variable can be used to download, cache and build
# into user-writable paths that are relative to the currently active
# environment.
#
# In any case, if the username is not already in the path, Spack will append # In any case, if the username is not already in the path, Spack will append
# the value of `$user` in an attempt to avoid potential conflicts between # the value of `$user` in an attempt to avoid potential conflicts between
# users in shared temporary spaces. # users in shared temporary spaces.
@@ -81,10 +76,6 @@ config:
source_cache: $spack/var/spack/cache source_cache: $spack/var/spack/cache
## Directory where spack managed environments are created and stored
# environments_root: $spack/var/spack/environments
# Cache directory for miscellaneous files, like the package index. # Cache directory for miscellaneous files, like the package index.
# This can be purged with `spack clean --misc-cache` # This can be purged with `spack clean --misc-cache`
misc_cache: $user_cache_path/cache misc_cache: $user_cache_path/cache
@@ -101,12 +92,6 @@ config:
verify_ssl: true verify_ssl: true
# This is where custom certs for proxy/firewall are stored.
# It can be a path or environment variable. To match ssl env configuration
# the default is the environment variable SSL_CERT_FILE
ssl_certs: $SSL_CERT_FILE
# Suppress gpg warnings from binary package verification # Suppress gpg warnings from binary package verification
# Only suppresses warnings, gpg failure will still fail the install # Only suppresses warnings, gpg failure will still fail the install
# Potential rationale to set True: users have already explicitly trusted the # Potential rationale to set True: users have already explicitly trusted the
@@ -191,7 +176,7 @@ config:
# when Spack needs to manage its own package metadata and all operations are # when Spack needs to manage its own package metadata and all operations are
# expected to complete within the default time limit. The timeout should # expected to complete within the default time limit. The timeout should
# therefore generally be left untouched. # therefore generally be left untouched.
db_lock_timeout: 60 db_lock_timeout: 3
# How long to wait when attempting to modify a package (e.g. to install it). # How long to wait when attempting to modify a package (e.g. to install it).
@@ -202,44 +187,17 @@ config:
package_lock_timeout: null package_lock_timeout: null
# Control how shared libraries are located at runtime on Linux. See the # Control whether Spack embeds RPATH or RUNPATH attributes in ELF binaries.
# the Spack documentation for details. # Has no effect on macOS. DO NOT MIX these within the same install tree.
shared_linking: # See the Spack documentation for details.
# Spack automatically embeds runtime search paths in ELF binaries for their shared_linking: 'rpath'
# dependencies. Their type can either be "rpath" or "runpath". For glibc, rpath is
# inherited and has precedence over LD_LIBRARY_PATH; runpath is not inherited
# and of lower precedence. DO NOT MIX these within the same install tree.
type: rpath
# (Experimental) Embed absolute paths of dependent libraries directly in ELF
# binaries to avoid runtime search. This can improve startup time of
# executables with many dependencies, in particular on slow filesystems.
bind: false
# Set to 'false' to allow installation on filesystems that doesn't allow setgid bit # Set to 'false' to allow installation on filesystems that doesn't allow setgid bit
# manipulation by unprivileged user (e.g. AFS) # manipulation by unprivileged user (e.g. AFS)
allow_sgid: true allow_sgid: true
# Whether to show status information during building and installing packages. # Whether to set the terminal title to display status information during
# This gives information about Spack's current progress as well as the current # building and installing packages. This gives information about Spack's
# and total number of packages. Information is shown both in the terminal # current progress as well as the current and total number of packages.
# title and inline. terminal_title: false
install_status: true
# Number of seconds a buildcache's index.json is cached locally before probing
# for updates, within a single Spack invocation. Defaults to 10 minutes.
binary_index_ttl: 600
flags:
# Whether to keep -Werror flags active in package builds.
keep_werror: 'none'
# A mapping of aliases that can be used to define new commands. For instance,
# `sp: spec -I` will define a new command `sp` that will execute `spec` with
# the `-I` argument. Aliases cannot override existing commands.
aliases:
concretise: concretize
containerise: containerize
rm: remove
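Per the comment above, an alias maps a new command name to an existing command plus arguments; a user-scope sketch using the example the comment itself gives:

```yaml
config:
  aliases:
    # `spack sp zlib` now executes `spack spec -I zlib`
    sp: spec -I
```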


@@ -1,19 +0,0 @@
# -------------------------------------------------------------------------
# This file controls default concretization preferences for Spack.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
# $SPACK_ROOT/etc/spack/packages.yaml
#
# Per-user settings (overrides default and site settings):
# ~/.spack/packages.yaml
# -------------------------------------------------------------------------
packages:
all:
providers:
iconv: [glibc, musl, libiconv]


@@ -19,23 +19,12 @@ packages:
- apple-clang - apple-clang
- clang - clang
- gcc - gcc
- intel
providers: providers:
elf: [libelf] elf: [libelf]
fuse: [macfuse] fuse: [macfuse]
gl: [apple-gl]
glu: [apple-glu]
unwind: [apple-libunwind] unwind: [apple-libunwind]
uuid: [apple-libuuid] uuid: [apple-libuuid]
apple-gl:
buildable: false
externals:
- spec: apple-gl@4.1.0
prefix: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
apple-glu:
buildable: false
externals:
- spec: apple-glu@1.3.0
prefix: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
apple-libunwind: apple-libunwind:
buildable: false buildable: false
externals: externals:
@@ -49,4 +38,4 @@ packages:
# Apple bundles libuuid in libsystem_c version 1353.100.2, # Apple bundles libuuid in libsystem_c version 1353.100.2,
# although the version number used here isn't critical # although the version number used here isn't critical
- spec: apple-libuuid@1353.100.2 - spec: apple-libuuid@1353.100.2
prefix: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk prefix: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk
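Only the `prefix` changes in this hunk, moving from the Xcode SDK to the Command Line Tools SDK; the full entry on the new side reads:

```yaml
packages:
  apple-libuuid:
    buildable: false
    externals:
    # Apple bundles libuuid in libsystem_c; the exact version isn't critical.
    - spec: apple-libuuid@1353.100.2
      prefix: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
```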


@@ -1,19 +0,0 @@
# -------------------------------------------------------------------------
# This file controls default concretization preferences for Spack.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
# $SPACK_ROOT/etc/spack/packages.yaml
#
# Per-user settings (overrides default and site settings):
# ~/.spack/packages.yaml
# -------------------------------------------------------------------------
packages:
all:
providers:
iconv: [glibc, musl, libiconv]


@@ -1,4 +1,2 @@
mirrors: mirrors:
spack-public: spack-public: https://mirror.spack.io
binary: false
url: https://mirror.spack.io
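The mirror entry changes from a bare URL string to a mapping; both forms shown cleanly for comparison:

```yaml
# old form
mirrors:
  spack-public: https://mirror.spack.io

# new form
mirrors:
  spack-public:
    binary: false
    url: https://mirror.spack.io
```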


@@ -40,12 +40,13 @@ modules:
roots: roots:
tcl: $spack/share/spack/modules tcl: $spack/share/spack/modules
lmod: $spack/share/spack/lmod lmod: $spack/share/spack/lmod
# What type of modules to use ("tcl" and/or "lmod") # What type of modules to use
enable: [] enable:
- tcl
tcl: tcl:
all: all:
autoload: direct autoload: none
# Default configurations if lmod is enabled # Default configurations if lmod is enabled
lmod: lmod:
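Since the new default enables no module systems (`enable: []`), a site that still wants tcl modules has to opt back in; a user-scope sketch (the `default` module-set key is an assumption based on newer Spack layouts):

```yaml
modules:
  default:
    enable: [tcl]
    tcl:
      all:
        autoload: direct   # the new default shown in the hunk above
```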


@@ -15,36 +15,31 @@
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
packages: packages:
all: all:
compiler: [gcc, clang, oneapi, xl, nag, fj, aocc] compiler: [gcc, intel, pgi, clang, xl, nag, fj, aocc]
providers: providers:
awk: [gawk] awk: [gawk]
armci: [armcimpi]
blas: [openblas, amdblis] blas: [openblas, amdblis]
D: [ldc] D: [ldc]
daal: [intel-oneapi-daal] daal: [intel-daal]
elf: [elfutils] elf: [elfutils]
fftw-api: [fftw, amdfftw] fftw-api: [fftw, amdfftw]
flame: [libflame, amdlibflame] flame: [libflame, amdlibflame]
fortran-rt: [gcc-runtime, intel-oneapi-runtime]
fuse: [libfuse] fuse: [libfuse]
gl: [glx, osmesa] gl: [glx, osmesa]
glu: [mesa-glu, openglu] glu: [mesa-glu, openglu]
golang: [go, gcc] golang: [gcc]
go-or-gccgo-bootstrap: [go-bootstrap, gcc]
iconv: [libiconv] iconv: [libiconv]
ipp: [intel-oneapi-ipp] ipp: [intel-ipp]
java: [openjdk, jdk, ibm-java] java: [openjdk, jdk, ibm-java]
jpeg: [libjpeg-turbo, libjpeg] jpeg: [libjpeg-turbo, libjpeg]
lapack: [openblas, amdlibflame] lapack: [openblas, amdlibflame]
libc: [glibc, musl] libglx: [mesa+glx, mesa18+glx]
libgfortran: [ gcc-runtime ]
libglx: [mesa+glx]
libifcore: [ intel-oneapi-runtime ]
libllvm: [llvm] libllvm: [llvm]
libosmesa: [mesa+osmesa, mesa18+osmesa]
lua-lang: [lua, lua-luajit-openresty, lua-luajit] lua-lang: [lua, lua-luajit-openresty, lua-luajit]
luajit: [lua-luajit-openresty, lua-luajit] luajit: [lua-luajit-openresty, lua-luajit]
mariadb-client: [mariadb-c-client, mariadb] mariadb-client: [mariadb-c-client, mariadb]
mkl: [intel-oneapi-mkl] mkl: [intel-mkl]
mpe: [mpe2] mpe: [mpe2]
mpi: [openmpi, mpich] mpi: [openmpi, mpich]
mysql-client: [mysql, mariadb-c-client] mysql-client: [mysql, mariadb-c-client]
@@ -53,7 +48,6 @@ packages:
pbs: [openpbs, torque] pbs: [openpbs, torque]
pil: [py-pillow] pil: [py-pillow]
pkgconfig: [pkgconf, pkg-config] pkgconfig: [pkgconf, pkg-config]
qmake: [qt-base, qt]
rpc: [libtirpc] rpc: [libtirpc]
scalapack: [netlib-scalapack, amdscalapack] scalapack: [netlib-scalapack, amdscalapack]
sycl: [hipsycl] sycl: [hipsycl]
@@ -64,7 +58,6 @@ packages:
xxd: [xxd-standalone, vim] xxd: [xxd-standalone, vim]
yacc: [bison, byacc] yacc: [bison, byacc]
ziglang: [zig] ziglang: [zig]
zlib-api: [zlib-ng+compat, zlib]
permissions: permissions:
read: world read: world
write: user write: user


@@ -1,6 +1,5 @@
config: config:
locks: false locks: false
concretizer: clingo concretizer: original
build_stage:: build_stage::
- '$spack/.staging' - '$spack/.staging'
stage_name: '{name}-{version}-{hash:7}'


@@ -1,22 +0,0 @@
# -------------------------------------------------------------------------
# This file controls default concretization preferences for Spack.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
# $SPACK_ROOT/etc/spack/packages.yaml
#
# Per-user settings (overrides default and site settings):
# ~/.spack/packages.yaml
# -------------------------------------------------------------------------
packages:
all:
compiler:
- msvc
providers:
mpi: [msmpi]
gl: [wgl]


@@ -1,7 +1,7 @@
package_list.html
command_index.rst command_index.rst
spack*.rst spack*.rst
llnl*.rst llnl*.rst
_build _build
.spack-env .spack-env
spack.lock spack.lock
_spack_root


@@ -1,16 +0,0 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
# The name of the Pygments (syntax highlighting) style to use.
# We use our own extension of the default style with a few modifications
from pygments.styles.default import DefaultStyle
from pygments.token import Generic
class SpackStyle(DefaultStyle):
styles = DefaultStyle.styles.copy()
background_color = "#f4f4f8"
styles[Generic.Output] = "#355"
styles[Generic.Prompt] = "bold #346ec9"

1 lib/spack/docs/_spack_root Symbolic link

@@ -0,0 +1 @@
../../..


@@ -1,12 +0,0 @@
{% extends "!layout.html" %}
{%- block extrahead %}
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-S0PQ7WV75K"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'G-S0PQ7WV75K');
</script>
{% endblock %}

162 lib/spack/docs/analyze.rst Normal file

@@ -0,0 +1,162 @@
.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _analyze:
=======
Analyze
=======
The analyze command is a front-end to various tools that let us analyze
package installations. Each analyzer is a module for a different kind
of analysis that can be done on a package installation, including (but not
limited to) binary, log, or text analysis. Thus, the analyze command group
allows you to take an existing package install, choose an analyzer,
and extract some output for the package using it.
-----------------
Analyzer Metadata
-----------------
For all analyzers, we write to an ``analyzers`` folder in ``~/.spack``, or the
value that you specify in your spack config at ``config:analyzers_dir``.
For example, here we see the results of running an analysis on zlib:
.. code-block:: console
$ tree ~/.spack/analyzers/
└── linux-ubuntu20.04-skylake
└── gcc-9.3.0
└── zlib-1.2.11-sl7m27mzkbejtkrajigj3a3m37ygv4u2
├── environment_variables
│   └── spack-analyzer-environment-variables.json
├── install_files
│   └── spack-analyzer-install-files.json
└── libabigail
└── spack-analyzer-libabigail-libz.so.1.2.11.xml
This means that you can always find analyzer output in this folder, and it
is organized with the same logic as the package install it was run for.
If you want to customize this top level folder, simply provide the ``--path``
argument to ``spack analyze run``. The nested organization will be maintained
within your custom root.
-----------------
Listing Analyzers
-----------------
If you aren't familiar with Spack's analyzers, you can quickly list those that
are available:
.. code-block:: console
$ spack analyze list-analyzers
install_files : install file listing read from install_manifest.json
environment_variables : environment variables parsed from spack-build-env.txt
config_args : config args loaded from spack-configure-args.txt
libabigail : Application Binary Interface (ABI) features for objects
In the above, the first three are fairly simple - parsing metadata files from
a package install directory to save
-------------------
Analyzing a Package
-------------------
The analyze command, akin to install, will accept a package spec to perform
an analysis for. The package must be installed. Let's walk through an example
with zlib. We first ask to analyze it. However, since we have more than one
install, we are asked to disambiguate:
.. code-block:: console
$ spack analyze run zlib
==> Error: zlib matches multiple packages.
Matching packages:
fz2bs56 zlib@1.2.11%gcc@7.5.0 arch=linux-ubuntu18.04-skylake
sl7m27m zlib@1.2.11%gcc@9.3.0 arch=linux-ubuntu20.04-skylake
Use a more specific spec.
We can then specify the spec version that we want to analyze:
.. code-block:: console
$ spack analyze run zlib/fz2bs56
If you don't provide any specific analyzer names, by default all analyzers
(shown in the ``list-analyzers`` subcommand list) will be run. If an analyzer does not
have any result, it will be skipped. For example, here is a result running for
zlib:
.. code-block:: console
$ ls ~/.spack/analyzers/linux-ubuntu20.04-skylake/gcc-9.3.0/zlib-1.2.11-sl7m27mzkbejtkrajigj3a3m37ygv4u2/
spack-analyzer-environment-variables.json
spack-analyzer-install-files.json
spack-analyzer-libabigail-libz.so.1.2.11.xml
If you want to run a specific analyzer, ask for it with ``--analyzer``. Here we run
spack analyze on libabigail (already installed) *using* libabigail:
.. code-block:: console
$ spack analyze run --analyzer abigail libabigail
.. _analyze_monitoring:
----------------------
Monitoring An Analysis
----------------------
For any kind of analysis, you can
use a `spack monitor <https://github.com/spack/spack-monitor>`_ "Spackmon"
as a server to upload the same run metadata to. You can
follow the instructions in the `spack monitor documentation <https://spack-monitor.readthedocs.org>`_
to first create a server along with a username and token for yourself.
You can then use this guide to interact with the server.
You should first export our spack monitor token and username to the environment:
.. code-block:: console
$ export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438
$ export SPACKMON_USER=spacky
By default, the host for your server is expected to be at ``http://127.0.0.1``
with a prefix of ``ms1``, and if this is the case, you can simply add the
``--monitor`` flag to the install command:
.. code-block:: console
$ spack analyze run --monitor wget
If you need to customize the host or the prefix, you can do that as well:
.. code-block:: console
$ spack analyze run --monitor --monitor-prefix monitor --monitor-host https://monitor-service.io wget
If your server doesn't have authentication, you can skip it:
.. code-block:: console
$ spack analyze run --monitor --monitor-disable-auth wget
Regardless of your choice, when you run analyze on an installed package (whether
it was installed with ``--monitor`` or not), you'll see the results generated as they were
before, and a message that the monitor server was pinged:
.. code-block:: console
$ spack analyze --monitor wget
...
==> Sending result for wget bin/wget to monitor.


@@ -1,4 +1,4 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details. Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -45,8 +45,7 @@ Listing available packages
To install software with Spack, you need to know what software is To install software with Spack, you need to know what software is
available. You can see a list of available package names at the available. You can see a list of available package names at the
`packages.spack.io <https://packages.spack.io>`_ website, or :ref:`package-list` webpage, or using the ``spack list`` command.
using the ``spack list`` command.
.. _cmd-spack-list: .. _cmd-spack-list:
@@ -61,7 +60,7 @@ can install:
:ellipsis: 10 :ellipsis: 10
There are thousands of them, so we've truncated the output above, but you There are thousands of them, so we've truncated the output above, but you
can find a `full list here <https://packages.spack.io>`_. can find a :ref:`full list here <package-list>`.
Packages are listed by name in alphabetical order. Packages are listed by name in alphabetical order.
A pattern to match with no wildcards, ``*`` or ``?``, A pattern to match with no wildcards, ``*`` or ``?``,
will be treated as though it started and ended with will be treated as though it started and ended with
@@ -86,7 +85,7 @@ All packages whose names or descriptions contain documentation:
To get more information on a particular package from `spack list`, use To get more information on a particular package from `spack list`, use
`spack info`. Just supply the name of a package: `spack info`. Just supply the name of a package:
.. command-output:: spack info --all mpich .. command-output:: spack info mpich
Most of the information is self-explanatory. The *safe versions* are Most of the information is self-explanatory. The *safe versions* are
versions that Spack knows the checksum for, and it will use the versions that Spack knows the checksum for, and it will use the
@@ -865,7 +864,7 @@ There are several different ways to use Spack packages once you have
installed them. As you've seen, spack packages are installed into long installed them. As you've seen, spack packages are installed into long
paths with hashes, and you need a way to get them into your path. The paths with hashes, and you need a way to get them into your path. The
easiest way is to use :ref:`spack load <cmd-spack-load>`, which is easiest way is to use :ref:`spack load <cmd-spack-load>`, which is
described in this section. described in the next section.
Some more advanced ways to use Spack packages include: Some more advanced ways to use Spack packages include:
@@ -943,7 +942,7 @@ first ``libelf`` above, you would run:
$ spack load /qmm4kso $ spack load /qmm4kso
To see which packages that you have loaded to your environment you would To see which packages that you have loaded to your enviornment you would
use ``spack find --loaded``. use ``spack find --loaded``.
.. code-block:: console .. code-block:: console
@@ -959,86 +958,7 @@ use ``spack find --loaded``.
You can also use ``spack load --list`` to get the same output, but it You can also use ``spack load --list`` to get the same output, but it
does not have the full set of query options that ``spack find`` offers. does not have the full set of query options that ``spack find`` offers.
We'll learn more about Spack's spec syntax in :ref:`a later section <sec-specs>`. We'll learn more about Spack's spec syntax in the next section.
.. _extensions:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Python packages and virtual environments
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Spack can install a large number of Python packages. Their names are
typically prefixed with ``py-``. Installing and using them is no
different from any other package:
.. code-block:: console
$ spack install py-numpy
$ spack load py-numpy
$ python3
>>> import numpy
The ``spack load`` command sets the ``PATH`` variable so that the right Python
executable is used, and makes sure that ``numpy`` and its dependencies can be
located in the ``PYTHONPATH``.
Spack is different from other Python package managers in that it installs
every package into its *own* prefix. This is in contrast to ``pip``, which
installs all packages into the same prefix, be it in a virtual environment
or not.
For many users, **virtual environments** are more convenient than repeated
``spack load`` commands, particularly when working with multiple Python
packages. Fortunately Spack supports environments itself, which together
with a view are no different from Python virtual environments.
The recommended way of working with Python extensions such as ``py-numpy``
is through :ref:`Environments <environments>`. The following example creates
a Spack environment with ``numpy`` in the current working directory. It also
puts a filesystem view in ``./view``, which is a more traditional combined
prefix for all packages in the environment.
.. code-block:: console
$ spack env create --with-view view --dir .
$ spack -e . add py-numpy
$ spack -e . concretize
$ spack -e . install
Now you can activate the environment and start using the packages:
.. code-block:: console
$ spack env activate .
$ python3
>>> import numpy
The environment view is also a virtual environment, which is useful if you are
sharing the environment with others who are unfamiliar with Spack. They can
either use the Python executable directly:
.. code-block:: console
$ ./view/bin/python3
>>> import numpy
or use the activation script:
.. code-block:: console
$ source ./view/bin/activate
$ python3
>>> import numpy
In general, there should not be much difference between ``spack env activate``
and using the virtual environment. The main advantage of ``spack env activate``
is that it knows about more packages than just Python packages, and it may set
additional runtime variables that are not covered by the virtual environment
activation script.
See :ref:`environments` for a more in-depth description of Spack
environments and customizations to views.
.. _sec-specs: .. _sec-specs:
@@ -1078,15 +998,11 @@ More formally, a spec consists of the following pieces:
* ``%`` Optional compiler specifier, with an optional compiler version * ``%`` Optional compiler specifier, with an optional compiler version
(``gcc`` or ``gcc@4.7.3``) (``gcc`` or ``gcc@4.7.3``)
* ``+`` or ``-`` or ``~`` Optional variant specifiers (``+debug``, * ``+`` or ``-`` or ``~`` Optional variant specifiers (``+debug``,
``-qt``, or ``~qt``) for boolean variants. Use ``++`` or ``--`` or ``-qt``, or ``~qt``) for boolean variants
``~~`` to propagate variants through the dependencies (``++debug``,
``--qt``, or ``~~qt``).
* ``name=<value>`` Optional variant specifiers that are not restricted to * ``name=<value>`` Optional variant specifiers that are not restricted to
boolean variants. Use ``name==<value>`` to propagate variant through the boolean variants
dependencies.
* ``name=<value>`` Optional compiler flag specifiers. Valid flag names are * ``name=<value>`` Optional compiler flag specifiers. Valid flag names are
``cflags``, ``cxxflags``, ``fflags``, ``cppflags``, ``ldflags``, and ``ldlibs``. ``cflags``, ``cxxflags``, ``fflags``, ``cppflags``, ``ldflags``, and ``ldlibs``.
Use ``name==<value>`` to propagate compiler flags through the dependencies.
* ``target=<value> os=<value>`` Optional architecture specifier * ``target=<value> os=<value>`` Optional architecture specifier
(``target=haswell os=CNL10``) (``target=haswell os=CNL10``)
* ``^`` Dependency specs (``^callpath@1.1``) * ``^`` Dependency specs (``^callpath@1.1``)
@@ -1183,75 +1099,54 @@ Below are more details about the specifiers that you can add to specs.
Version specifier Version specifier
^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^
A version specifier ``pkg@<specifier>`` comes after a package name A version specifier comes somewhere after a package name and starts
and starts with ``@``. It can be something abstract that matches with ``@``. It can be a single version, e.g. ``@1.0``, ``@3``, or
multiple known versions, or a specific version. During concretization, ``@1.2a7``. Or, it can be a range of versions, such as ``@1.0:1.5``
Spack will pick the optimal version within the spec's constraints (all versions between ``1.0`` and ``1.5``, inclusive). Version ranges
according to policies set for the particular Spack installation. can be open, e.g. ``:3`` means any version up to and including ``3``.
This would include ``3.4`` and ``3.4.2``. ``4.2:`` means any version
above and including ``4.2``. Finally, a version specifier can be a
set of arbitrary versions, such as ``@1.0,1.5,1.7`` (``1.0``, ``1.5``,
or ``1.7``). When you supply such a specifier to ``spack install``,
it constrains the set of versions that Spack will install.
The version specifier can be *a specific version*, such as ``@=1.0.0`` or For packages with a ``git`` attribute, ``git`` references
``@=1.2a7``. Or, it can be *a range of versions*, such as ``@1.0:1.5``. may be specified instead of a numerical version i.e. branches, tags
Version ranges are inclusive, so this example includes both ``1.0`` and commits. Spack will stage and build based off the ``git``
and any ``1.5.x`` version. Version ranges can be unbounded, e.g. ``@:3``
means any version up to and including ``3``. This would include ``3.4``
and ``3.4.2``. Similarly, ``@4.2:`` means any version above and including
``4.2``. As a short-hand, ``@3`` is equivalent to the range ``@3:3`` and
includes any version with major version ``3``.
Versions are ordered lexicograpically by its components. For more details
on the order, see :ref:`the packaging guide <version-comparison>`.
Notice that you can distinguish between the specific version ``@=3.2`` and
the range ``@3.2``. This is useful for packages that follow a versioning
scheme that omits the zero patch version number: ``3.2``, ``3.2.1``,
``3.2.2``, etc. In general it is preferable to use the range syntax
``@3.2``, since ranges also match versions with one-off suffixes, such as
``3.2-custom``.
A version specifier can also be a list of ranges and specific versions,
separated by commas. For example, ``@1.0:1.5,=1.7.1`` matches any version
in the range ``1.0:1.5`` and the specific version ``1.7.1``.
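The rules above, condensed into a few concrete spec strings (``pkg`` is a placeholder package name):

```sh
pkg@=1.2a7          # exactly version 1.2a7
pkg@3               # short-hand for the range @3:3 (any version 3.x)
pkg@1.0:1.5         # inclusive range: 1.0 up to any 1.5.x
pkg@:3              # anything up to and including 3 (e.g. 3.4.2)
pkg@1.0:1.5,=1.7.1  # the range plus the specific version 1.7.1
```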
^^^^^^^^^^^^
Git versions
^^^^^^^^^^^^
For packages with a ``git`` attribute, ``git`` references
may be specified instead of a numerical version i.e. branches, tags
and commits. Spack will stage and build based off the ``git``
reference provided. Acceptable syntaxes for this are: reference provided. Acceptable syntaxes for this are:
.. code-block:: sh .. code-block:: sh
# commit hashes
foo@abcdef1234abcdef1234abcdef1234abcdef1234 # 40 character hashes are automatically treated as git commits
foo@git.abcdef1234abcdef1234abcdef1234abcdef1234
# branches and tags # branches and tags
foo@git.develop # use the develop branch foo@git.develop # use the develop branch
foo@git.0.19 # use the 0.19 tag foo@git.0.19 # use the 0.19 tag
# commit hashes
foo@abcdef1234abcdef1234abcdef1234abcdef1234 # 40 character hashes are automatically treated as git commits
foo@git.abcdef1234abcdef1234abcdef1234abcdef1234
Spack versions from git reference either have an associated version supplied by the user,
or infer a relationship to known versions from the structure of the git repository. If an
associated version is supplied by the user, Spack treats the git version as equivalent to that
version for all version comparisons in the package logic (e.g. ``depends_on('foo', when='@1.5')``).
Spack always needs to associate a Spack version with the git reference, The associated version can be assigned with ``[git ref]=[version]`` syntax, with the caveat that the specified version is known to Spack from either the package definition, or in the configuration preferences (i.e. ``packages.yaml``).
which is used for version comparison. This Spack version is heuristically
taken from the closest valid git tag among ancestors of the git ref.
Once a Spack version is associated with a git ref, it always printed with
the git ref. For example, if the commit ``@git.abcdefg`` is tagged
``0.19``, then the spec will be shown as ``@git.abcdefg=0.19``.
If the git ref is not exactly a tag, then the distance to the nearest tag
is also part of the resolved version. ``@git.abcdefg=0.19.git.8`` means
that the commit is 8 commits away from the ``0.19`` tag.
In cases where Spack cannot resolve a sensible version from a git ref,
users can specify the Spack version to use for the git ref. This is done
by appending ``=`` and the Spack version to the git ref. For example:
.. code-block:: sh .. code-block:: sh
foo@git.my_ref=3.2 # use the my_ref tag or branch, but treat it as version 3.2 for version comparisons foo@git.my_ref=3.2 # use the my_ref tag or branch, but treat it as version 3.2 for version comparisons
foo@git.abcdef1234abcdef1234abcdef1234abcdef1234=develop # use the given commit, but treat it as develop for version comparisons foo@git.abcdef1234abcdef1234abcdef1234abcdef1234=develop # use the given commit, but treat it as develop for version comparisons
If an associated version is not supplied then the tags in the git repo are used to determine
the most recent previous version known to Spack. Details about how versions are compared
and how Spack determines if one version is less than another are discussed in the developer guide.
If the version spec is not provided, then Spack will choose one
according to policies set for the particular spack installation. If
the spec is ambiguous, i.e. it could match multiple versions, Spack
will choose a version within the spec's constraints according to
policies set for the particular Spack installation.
Details about how versions are compared and how Spack determines if Details about how versions are compared and how Spack determines if
one version is less than another are discussed in the developer guide. one version is less than another are discussed in the developer guide.
@@ -1331,23 +1226,6 @@ variants using the backwards compatibility syntax and uses only ``~``
for disabled boolean variants. The ``-`` and spaces on the command for disabled boolean variants. The ``-`` and spaces on the command
line are provided for convenience and legibility. line are provided for convenience and legibility.
Spack allows variants to propagate their value to the package's
dependency by using ``++``, ``--``, and ``~~`` for boolean variants.
For example, for a ``debug`` variant:
.. code-block:: sh
mpileaks ++debug # enabled debug will be propagated to dependencies
mpileaks +debug # only mpileaks will have debug enabled
To propagate the value of non-boolean variants Spack uses ``name==value``.
For example, for the ``stackstart`` variant:
.. code-block:: sh
mpileaks stackstart==4 # variant will be propagated to dependencies
mpileaks stackstart=4 # only mpileaks will have this variant value
^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^
Compiler Flags Compiler Flags
^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^
@@ -1355,15 +1233,10 @@ Compiler Flags
Compiler flags are specified using the same syntax as non-boolean variants, Compiler flags are specified using the same syntax as non-boolean variants,
but fulfill a different purpose. While the function of a variant is set by but fulfill a different purpose. While the function of a variant is set by
the package, compiler flags are used by the compiler wrappers to inject the package, compiler flags are used by the compiler wrappers to inject
flags into the compile line of the build. Additionally, compiler flags can flags into the compile line of the build. Additionally, compiler flags are
be inherited by dependencies by using ``==``. inherited by dependencies. ``spack install libdwarf cppflags="-g"`` will
``spack install libdwarf cppflags=="-g"`` will install both libdwarf and install both libdwarf and libelf with the ``-g`` flag injected into their
libelf with the ``-g`` flag injected into their compile line. compile line.
.. note::
versions of spack prior to 0.19.0 will propagate compiler flags using
the ``=`` syntax.
Notice that the value of the compiler flags must be quoted if it Notice that the value of the compiler flags must be quoted if it
contains any spaces. Any of ``cppflags=-O3``, ``cppflags="-O3"``, contains any spaces. Any of ``cppflags=-O3``, ``cppflags="-O3"``,
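Mirroring the variant-propagation syntax, ``==`` on a compiler flag propagates it to dependencies while ``=`` applies it only to the named package; a sketch using the libdwarf example from the text:

```sh
spack install libdwarf cppflags=="-g"  # -g injected into libdwarf and libelf
spack install libdwarf cppflags="-g"   # only libdwarf gets -g
```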
@@ -1565,7 +1438,7 @@ built.
You can see what virtual packages a particular package provides by You can see what virtual packages a particular package provides by
getting info on it: getting info on it:
.. command-output:: spack info --virtuals mpich .. command-output:: spack info mpich
Spack is unique in that its virtual packages can be versioned, just Spack is unique in that its virtual packages can be versioned, just
like regular packages. A particular version of a package may provide like regular packages. A particular version of a package may provide
@@ -1612,30 +1485,6 @@ any MPI implementation will do. If another package depends on
error. Likewise, if you try to plug in some package that doesn't error. Likewise, if you try to plug in some package that doesn't
provide MPI, Spack will raise an error. provide MPI, Spack will raise an error.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Explicit binding of virtual dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There are packages that provide more than just one virtual dependency. When interacting with them, users
might want to utilize just a subset of what they could provide, and use other providers for virtuals they
need.
It is possible to be more explicit and tell Spack which dependency should provide which virtual, using a
special syntax:
.. code-block:: console
$ spack spec strumpack ^[virtuals=mpi] intel-parallel-studio+mkl ^[virtuals=lapack] openblas
Concretizing the spec above produces the following DAG:
.. figure:: images/strumpack_virtuals.svg
:scale: 60 %
:align: center
where ``intel-parallel-studio`` *could* provide ``mpi``, ``lapack``, and ``blas`` but is used only for the former. The ``lapack``
and ``blas`` dependencies are satisfied by ``openblas``.
^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^
Specifying Specs by Hash Specifying Specs by Hash
^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^
@@ -1784,6 +1633,247 @@ check only local packages (as opposed to those used transparently from
``upstream`` spack instances) and the ``-j,--json`` option to output ``upstream`` spack instances) and the ``-j,--json`` option to output
machine-readable json data for any errors. machine-readable json data for any errors.
.. _extensions:
---------------------------
Extensions & Python support
---------------------------
Spack's installation model assumes that each package will live in its
own install prefix. However, certain packages are typically installed
*within* the directory hierarchy of other packages. For example,
`Python <https://www.python.org>`_ packages are typically installed in the
``$prefix/lib/python-2.7/site-packages`` directory.
Spack has support for this type of installation as well. In Spack,
a package that can live inside the prefix of another package is called
an *extension*. Suppose you have Python installed like so:
.. code-block:: console
$ spack find python
==> 1 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
python@2.7.8
.. _cmd-spack-extensions:
^^^^^^^^^^^^^^^^^^^^
``spack extensions``
^^^^^^^^^^^^^^^^^^^^
You can find extensions for your Python installation like this:
.. code-block:: console
$ spack extensions python
==> python@2.7.8%gcc@4.4.7 arch=linux-debian7-x86_64-703c7a96
==> 36 extensions:
geos py-ipython py-pexpect py-pyside py-sip
py-basemap py-libxml2 py-pil py-pytz py-six
py-biopython py-mako py-pmw py-rpy2 py-sympy
py-cython py-matplotlib py-pychecker py-scientificpython py-virtualenv
py-dateutil py-mpi4py py-pygments py-scikit-learn
py-epydoc py-mx py-pylint py-scipy
py-gnuplot py-nose py-pyparsing py-setuptools
py-h5py py-numpy py-pyqt py-shiboken
==> 12 installed:
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
py-dateutil@2.4.0 py-nose@1.3.4 py-pyside@1.2.2
py-dateutil@2.4.0 py-numpy@1.9.1 py-pytz@2014.10
py-ipython@2.3.1 py-pygments@2.0.1 py-setuptools@11.3.1
py-matplotlib@1.4.2 py-pyparsing@2.0.3 py-six@1.9.0
==> None activated.
The extensions are a subset of what's returned by ``spack list``, and
they are packages like any other. They are installed into their own
prefixes, and you can see this with ``spack find --paths``:
.. code-block:: console
$ spack find --paths py-numpy
==> 1 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
py-numpy@1.9.1 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/py-numpy@1.9.1-66733244
However, even though this package is installed, you cannot use it
directly when you run ``python``:
.. code-block:: console
$ spack load python
$ python
Python 2.7.8 (default, Feb 17 2015, 01:35:25)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named numpy
>>>
^^^^^^^^^^^^^^^^
Using Extensions
^^^^^^^^^^^^^^^^
There are four ways to get ``numpy`` working in Python. The first is
to use :ref:`shell-support`. You can simply ``load`` the extension,
and it will be added to the ``PYTHONPATH`` in your current shell:
.. code-block:: console
$ spack load python
$ spack load py-numpy
Now ``import numpy`` will succeed for as long as you keep your current
session open.
The loaded packages can be checked using ``spack find --loaded``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Loading Extensions via Modules
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Instead of using Spack's environment modification capabilities through
the ``spack load`` command, you can load numpy through your
environment modules (using ``environment-modules`` or ``lmod``). This
will also add the extension to the ``PYTHONPATH`` in your current
shell.
.. code-block:: console
$ module load <name of numpy module>
If you do not know the name of the specific numpy module you wish to
load, you can use the ``spack module tcl|lmod loads`` command to get
the name of the module from the Spack spec.
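For example, assuming ``py-numpy`` is installed (the module name in the output below is
illustrative; yours will reflect your own compiler, architecture, and hash):

.. code-block:: console

   $ spack module tcl loads py-numpy
   # py-numpy@1.9.1%gcc@4.4.7 arch=linux-debian7-x86_64
   module load py-numpy-1.9.1-gcc-4.4.7-66733244

The emitted ``module load`` lines can be pasted into a shell or sourced from a script.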
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Activating Extensions in a View
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Another way to use extensions is to create a view, which merges the
python installation along with the extensions into a single prefix.
See :ref:`configuring_environment_views` for a more in-depth description
of views.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Activating Extensions Globally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
As an alternative to creating a merged prefix with Python and its extensions,
and prior to support for views, Spack has provided a means to install the
extension into the Spack installation prefix for the extendee. This has
historically been useful, since extendable packages typically search their own
installation path for add-ons by default.
Global activations are performed with the ``spack activate`` command:
.. _cmd-spack-activate:
^^^^^^^^^^^^^^^^^^
``spack activate``
^^^^^^^^^^^^^^^^^^
.. code-block:: console
$ spack activate py-numpy
==> Activated extension py-setuptools@11.3.1%gcc@4.4.7 arch=linux-debian7-x86_64-3c74eb69 for python@2.7.8%gcc@4.4.7.
==> Activated extension py-nose@1.3.4%gcc@4.4.7 arch=linux-debian7-x86_64-5f70f816 for python@2.7.8%gcc@4.4.7.
==> Activated extension py-numpy@1.9.1%gcc@4.4.7 arch=linux-debian7-x86_64-66733244 for python@2.7.8%gcc@4.4.7.
Several things have happened here. The user requested that
``py-numpy`` be activated in the ``python`` installation it was built
with. Spack knows that ``py-numpy`` depends on ``py-nose`` and
``py-setuptools``, so it activated those packages first. Finally,
once all dependencies were activated in the ``python`` installation,
``py-numpy`` was activated as well.
If we run ``spack extensions`` again, we now see the three new
packages listed as activated:
.. code-block:: console
$ spack extensions python
==> python@2.7.8%gcc@4.4.7 arch=linux-debian7-x86_64-703c7a96
==> 36 extensions:
geos py-ipython py-pexpect py-pyside py-sip
py-basemap py-libxml2 py-pil py-pytz py-six
py-biopython py-mako py-pmw py-rpy2 py-sympy
py-cython py-matplotlib py-pychecker py-scientificpython py-virtualenv
py-dateutil py-mpi4py py-pygments py-scikit-learn
py-epydoc py-mx py-pylint py-scipy
py-gnuplot py-nose py-pyparsing py-setuptools
py-h5py py-numpy py-pyqt py-shiboken
==> 12 installed:
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
py-dateutil@2.4.0 py-nose@1.3.4 py-pyside@1.2.2
py-dateutil@2.4.0 py-numpy@1.9.1 py-pytz@2014.10
py-ipython@2.3.1 py-pygments@2.0.1 py-setuptools@11.3.1
py-matplotlib@1.4.2 py-pyparsing@2.0.3 py-six@1.9.0
==> 3 currently activated:
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
py-nose@1.3.4 py-numpy@1.9.1 py-setuptools@11.3.1
Now, when a user runs python, ``numpy`` will be available for import
*without* the user having to explicitly load it. ``python@2.7.8`` now
acts like a system Python installation with ``numpy`` installed inside
of it.
Spack accomplishes this by symbolically linking the *entire* prefix of
the ``py-numpy`` package into the prefix of the ``python`` package. To the
python interpreter, it looks like ``numpy`` is installed in the
``site-packages`` directory.
The only limitation of global activation is that you can only have a *single*
version of an extension activated at a time. This is because multiple
versions of the same extension would conflict if symbolically linked
into the same prefix. Users who want a different version of a package
can still get it by using environment modules or views, but they will have to
explicitly load their preferred version.
^^^^^^^^^^^^^^^^^^^^^^^^^^
``spack activate --force``
^^^^^^^^^^^^^^^^^^^^^^^^^^
If, for some reason, you want to activate a package *without* its
dependencies, you can use ``spack activate --force``:
.. code-block:: console
$ spack activate --force py-numpy
==> Activated extension py-numpy@1.9.1%gcc@4.4.7 arch=linux-debian7-x86_64-66733244 for python@2.7.8%gcc@4.4.7.
.. _cmd-spack-deactivate:
^^^^^^^^^^^^^^^^^^^^
``spack deactivate``
^^^^^^^^^^^^^^^^^^^^
We've seen how activating an extension can be used to set up a default
version of a Python module. Obviously, you may want to change that at
some point. ``spack deactivate`` is the command for this. There are
several variants:
* ``spack deactivate <extension>`` will deactivate a single
extension. If another activated extension depends on this one,
Spack will warn you and exit with an error.
* ``spack deactivate --force <extension>`` deactivates an extension
regardless of packages that depend on it.
* ``spack deactivate --all <extension>`` deactivates an extension and
all of its dependencies. Use ``--force`` to disregard dependents.
* ``spack deactivate --all <extendee>`` deactivates *all* activated
extensions of a package. For example, to deactivate *all* python
extensions, use:
.. code-block:: console
$ spack deactivate --all python
----------------------- -----------------------
Filesystem requirements Filesystem requirements
----------------------- -----------------------
View File
@@ -1,4 +1,4 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details. Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -13,47 +13,49 @@ Some sites may encourage users to set up their own test environments
before carrying out central installations, or some users may prefer to set before carrying out central installations, or some users may prefer to set
up these environments on their own motivation. To reduce the load of up these environments on their own motivation. To reduce the load of
recompiling otherwise identical package specs in different installations, recompiling otherwise identical package specs in different installations,
installed packages can be put into build cache tarballs, pushed to installed packages can be put into build cache tarballs, uploaded to
your Spack mirror and then downloaded and installed by others. your Spack mirror and then downloaded and installed by others.
Whenever a mirror provides prebuilt packages, Spack will take these packages
into account during concretization and installation, making ``spack install``
significantly faster.
--------------------------
Creating build cache files
--------------------------
.. note:: A compressed tarball of an installed package is created. Tarballs are created
for all of its link and run dependency packages as well. Compressed tarballs are
We use the terms "build cache" and "mirror" often interchangeably. Mirrors signed with gpg and signature and tarball and put in a ``.spack`` file. Optionally,
are used during installation both for sources and prebuilt packages. Build the rpaths (and ids and deps on macOS) can be changed to paths relative to
caches refer to mirrors that provide prebuilt packages. the Spack install tree before the tarball is created.
----------------------
Creating a build cache
----------------------
Build caches are created via: Build caches are created via:
.. code-block:: console .. code-block:: console
$ spack buildcache push <path/url/mirror name> <spec> $ spack buildcache create <spec>
This command takes the locally installed spec and its dependencies, and
creates tarballs of their install prefixes. It also generates metadata files,
signed with GPG. These tarballs and metadata files are then pushed to the
provided binary cache, which can be a local directory or a remote URL.
Here is an example where a build cache is created in a local directory named If you wanted to create a build cache in a local directory, you would provide
"spack-cache", to which we push the "ninja" spec: the ``-d`` argument to target that directory, again also specifying the spec.
Here is an example creating a local directory, "spack-cache" and creating
build cache files for the "ninja" spec:
.. code-block:: console .. code-block:: console
$ spack buildcache push ./spack-cache ninja $ mkdir -p ./spack-cache
==> Pushing binary packages to file:///home/spackuser/spack/spack-cache/build_cache $ spack buildcache create -d ./spack-cache ninja
==> Buildcache files will be output to file:///home/spackuser/spack/spack-cache/build_cache
gpgconf: socketdir is '/run/user/1000/gnupg'
gpg: using "E6DF6A8BD43208E4D6F392F23777740B7DBD643D" as default secret key for signing
Note that ``ninja`` must be installed locally for this to work. Note that the targeted spec must already be installed. Once you have a build cache,
you can add it as a mirror, discussed next.
Once you have a build cache, you can add it as a mirror, discussed next. .. warning::
Spack improved the format used for binary caches in v0.18. The entire v0.18 series
will be able to verify and install binary caches both in the new and in the old format.
Support for using the old format is expected to end in v0.19, so we advise users to
recreate relevant buildcaches using Spack v0.18 or higher.
--------------------------------------- ---------------------------------------
Finding or installing build cache files Finding or installing build cache files
@@ -64,10 +66,10 @@ with:
.. code-block:: console .. code-block:: console
$ spack mirror add <name> <url or path> $ spack mirror add <name> <url>
Both web URLs and local paths on the filesystem can be specified. In the previous Note that the url can be a web url _or_ a local filesystem location. In the previous
example, you might add the directory "spack-cache" and call it ``mymirror``: example, you might add the directory "spack-cache" and call it ``mymirror``:
@@ -92,7 +94,7 @@ this new build cache as follows:
.. code-block:: console .. code-block:: console
$ spack buildcache update-index ./spack-cache $ spack buildcache update-index -d spack-cache/
Now you can use list: Now you can use list:
@@ -103,38 +105,46 @@ Now you can use list:
-- linux-ubuntu20.04-skylake / gcc@9.3.0 ------------------------ -- linux-ubuntu20.04-skylake / gcc@9.3.0 ------------------------
ninja@1.10.2 ninja@1.10.2
With ``mymirror`` configured and an index available, Spack will automatically
use it during concretization and installation. That means that you can expect Great! So now let's say you have a different spack installation, or perhaps just
``spack install ninja`` to fetch prebuilt packages from the mirror. Let's a different environment for the same one, and you want to install a package from
verify by re-installing ninja: that build cache. Let's first uninstall the actual library "ninja" to see if we can
re-install it from the cache.
.. code-block:: console .. code-block:: console
$ spack uninstall ninja $ spack uninstall ninja
$ spack install ninja
==> Installing ninja-1.11.1-yxferyhmrjkosgta5ei6b4lqf6bxbscz
==> Fetching file:///home/spackuser/spack/spack-cache/build_cache/linux-ubuntu20.04-skylake-gcc-9.3.0-ninja-1.10.2-yxferyhmrjkosgta5ei6b4lqf6bxbscz.spec.json.sig
gpg: Signature made Do 12 Jan 2023 16:01:04 CET
gpg: using RSA key 61B82B2B2350E171BD17A1744E3A689061D57BF6
gpg: Good signature from "example (GPG created for Spack) <example@example.com>" [ultimate]
==> Fetching file:///home/spackuser/spack/spack-cache/build_cache/linux-ubuntu20.04-skylake/gcc-9.3.0/ninja-1.10.2/linux-ubuntu20.04-skylake-gcc-9.3.0-ninja-1.10.2-yxferyhmrjkosgta5ei6b4lqf6bxbscz.spack
==> Extracting ninja-1.10.2-yxferyhmrjkosgta5ei6b4lqf6bxbscz from binary cache
==> ninja: Successfully installed ninja-1.11.1-yxferyhmrjkosgta5ei6b4lqf6bxbscz
Search: 0.00s. Fetch: 0.17s. Install: 0.12s. Total: 0.29s
[+] /home/harmen/spack/opt/spack/linux-ubuntu20.04-skylake/gcc-9.3.0/ninja-1.11.1-yxferyhmrjkosgta5ei6b4lqf6bxbscz
It worked! You've just completed a full example of creating a build cache with And now reinstall from the buildcache
a spec of interest, adding it as a mirror, updating its index, listing the contents,
and finally, installing from it.
By default Spack falls back to building from sources when the mirror is not available
or when the package is simply not already available. To force Spack to only install
prebuilt packages, you can use
.. code-block:: console .. code-block:: console
$ spack install --use-buildcache only <package> $ spack buildcache install ninja
==> buildcache spec(s) matching ninja
==> Fetching file:///home/spackuser/spack/spack-cache/build_cache/linux-ubuntu20.04-skylake/gcc-9.3.0/ninja-1.10.2/linux-ubuntu20.04-skylake-gcc-9.3.0-ninja-1.10.2-i4e5luour7jxdpc3bkiykd4imke3mkym.spack
####################################################################################################################################### 100.0%
==> Installing buildcache for spec ninja@1.10.2%gcc@9.3.0 arch=linux-ubuntu20.04-skylake
gpgconf: socketdir is '/run/user/1000/gnupg'
gpg: Signature made Tue 23 Mar 2021 10:16:29 PM MDT
gpg: using RSA key E6DF6A8BD43208E4D6F392F23777740B7DBD643D
gpg: Good signature from "spackuser (GPG created for Spack) <spackuser@noreply.users.github.com>" [ultimate]
It worked! You've just completed a full example of creating a build cache with
a spec of interest, adding it as a mirror, updating its index, listing the contents,
and finally, installing from it.
Note that the above command is intended to install a particular package to a
build cache you have created, and not to install a package from a build cache.
For the latter, once a mirror is added, by default when you do ``spack install`` the ``--use-cache``
flag is set, and you will install a package from a build cache if it is available.
If you want to always use the cache, you can do:
.. code-block:: console
$ spack install --cache-only <package>
For example, to combine all of the commands above to add the E4S build cache For example, to combine all of the commands above to add the E4S build cache
and then install from it exclusively, you would do: and then install from it exclusively, you would do:
@@ -143,7 +153,7 @@ and then install from it exclusively, you would do:
$ spack mirror add E4S https://cache.e4s.io $ spack mirror add E4S https://cache.e4s.io
$ spack buildcache keys --install --trust $ spack buildcache keys --install --trust
$ spack install --use-buildcache only <package> $ spack install --cache-only <package>
We use ``--install`` and ``--trust`` to say that we are installing keys to our We use ``--install`` and ``--trust`` to say that we are installing keys to our
keyring, and trusting all downloaded keys. keyring, and trusting all downloaded keys.
@@ -153,181 +163,18 @@ keyring, and trusting all downloaded keys.
List of popular build caches List of popular build caches
^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* `Extreme-scale Scientific Software Stack (E4S) <https://e4s-project.github.io/>`_: `build cache <https://oaciss.uoregon.edu/e4s/inventory.html>`_ * `Extreme-scale Scientific Software Stack (E4S) <https://e4s-project.github.io/>`_: `build cache <https://oaciss.uoregon.edu/e4s/inventory.html>`_
-------------------
Build cache signing
-------------------
By default, Spack will add a cryptographic signature to each package pushed to
a build cache, and verifies the signature when installing from a build cache.
Keys for signing can be managed with the :ref:`spack gpg <cmd-spack-gpg>` command,
as well as ``spack buildcache keys`` as mentioned above.
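For instance, a signing key can be created once per site with ``spack gpg create`` (the
name and email below are placeholders):

.. code-block:: console

   $ spack gpg create "Example User" "user@example.com"

Keys created this way are used automatically when pushing signed packages.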
You can disable signing when pushing with ``spack buildcache push --unsigned``,
and disable verification when installing from any build cache with
``spack install --no-check-signature``.
Alternatively, signing and verification can be enabled or disabled on a per build cache
basis:
.. code-block:: console
$ spack mirror add --signed <name> <url> # enable signing and verification
$ spack mirror add --unsigned <name> <url> # disable signing and verification
$ spack mirror set --signed <name> # enable signing and verification for an existing mirror
$ spack mirror set --unsigned <name> # disable signing and verification for an existing mirror
Or you can directly edit the ``mirrors.yaml`` configuration file:
.. code-block:: yaml
mirrors:
<name>:
url: <url>
signed: false # disable signing and verification
See also :ref:`mirrors`.
---------- ----------
Relocation Relocation
---------- ----------
When using buildcaches across different machines, it is likely that the install Initial build and later installation do not necessarily happen at the same
root will be different from the one used to build the binaries. location. Spack provides a relocation capability and corrects for RPATHs and
non-relocatable scripts. However, many packages compile paths into binary
To address this issue, Spack automatically relocates all paths encoded in binaries artifacts directly. In such cases, the build instructions of this package would
and scripts to their new location upon install. need to be adjusted for better re-locatability.
Note that there are some cases where this is not possible: if binaries are built in
a relatively short path, and then installed to a longer path, there may not be enough
space in the binary to encode the new path. In this case, Spack will fail to install
the package from the build cache, and a source build is required.
To reduce the likelihood of this happening, it is highly recommended to add padding to
the install root during the build, as specified in the :ref:`config <config-yaml>`
section of the configuration:
.. code-block:: yaml
config:
install_tree:
root: /opt/spack
padded_length: 128
.. _binary_caches_oci:
---------------------------------
Automatic push to a build cache
---------------------------------
Sometimes it is convenient to push packages to a build cache as soon as they are installed. Spack can do this by setting the autopush flag when adding a mirror:
.. code-block:: console
$ spack mirror add --autopush <name> <url or path>
Or the autopush flag can be set for an existing mirror:
.. code-block:: console
$ spack mirror set --autopush <name> # enable automatic push for an existing mirror
$ spack mirror set --no-autopush <name> # disable automatic push for an existing mirror
Then, after installing a package, it is automatically pushed to all mirrors with ``autopush: true``. The command
.. code-block:: console
$ spack install <package>
will have the same effect as
.. code-block:: console
$ spack install <package>
$ spack buildcache push <cache> <package> # for all caches with autopush: true
.. note::
Packages are automatically pushed to a build cache only if they are built from source.
-----------------------------------------
OCI / Docker V2 registries as build cache
-----------------------------------------
Spack can also use OCI or Docker V2 registries such as Dockerhub, Quay.io,
Github Packages, GitLab Container Registry, JFrog Artifactory, and others
as build caches. This is a convenient way to share binaries using public
infrastructure, or to cache Spack built binaries in Github Actions and
GitLab CI.
To get started, configure an OCI mirror using ``oci://`` as the scheme,
and optionally specify a username and password (or personal access token):
.. code-block:: console
$ spack mirror add --oci-username username --oci-password password my_registry oci://example.com/my_image
Spack follows the naming conventions of Docker, with Dockerhub as the default
registry. To use Dockerhub, you can omit the registry domain:
.. code-block:: console
$ spack mirror add --oci-username username --oci-password password my_registry oci://username/my_image
From here, you can use the mirror as any other build cache:
.. code-block:: console
$ spack buildcache push my_registry <specs...> # push to the registry
$ spack install <specs...> # install from the registry
A unique feature of buildcaches on top of OCI registries is that it's incredibly
easy to get a runnable container image with the binaries installed. This
is a great way to make applications available to users without requiring them to
install Spack -- all you need is Docker, Podman or any other OCI-compatible container
runtime.
To produce container images, all you need to do is add the ``--base-image`` flag
when pushing to the build cache:
.. code-block:: console
$ spack buildcache push --base-image ubuntu:20.04 my_registry ninja
Pushed to example.com/my_image:ninja-1.11.1-yxferyhmrjkosgta5ei6b4lqf6bxbscz.spack
$ docker run -it example.com/my_image:ninja-1.11.1-yxferyhmrjkosgta5ei6b4lqf6bxbscz.spack
root@e4c2b6f6b3f4:/# ninja --version
1.11.1
If ``--base-image`` is not specified, distroless images are produced. In practice,
you won't be able to run these as containers, since they don't come with libc and
other system dependencies. However, they are still compatible with tools like
``skopeo``, ``podman``, and ``docker`` for pulling and pushing.
.. note::
The docker ``overlayfs2`` storage driver is limited to 128 layers, above which a
``max depth exceeded`` error may be produced when pulling the image. There
are `alternative drivers <https://docs.docker.com/storage/storagedriver/>`_.
------------------------------------
Spack build cache for GitHub Actions
------------------------------------
To significantly speed up Spack in GitHub Actions, binaries can be cached in
GitHub Packages. This service is an OCI registry that can be linked to a GitHub
repository.
Spack offers a public build cache for GitHub Actions with a set of common packages,
which lets you get started quickly. See the following resources for more information:
* `spack/setup-spack <https://github.com/spack/setup-spack>`_ for setting up Spack in GitHub
Actions
* `spack/github-actions-buildcache <https://github.com/spack/github-actions-buildcache>`_ for
more details on the public build cache
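As a rough sketch (the action version and inputs may differ; consult the
``spack/setup-spack`` README for the authoritative usage), a workflow step could look like:

.. code-block:: yaml

   steps:
     - uses: actions/checkout@v4
     - uses: spack/setup-spack@v2
     - run: spack install zlib

With the public build cache configured, installs like the one above are served from
prebuilt binaries where available.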
.. _cmd-spack-buildcache: .. _cmd-spack-buildcache:
@@ -336,7 +183,7 @@ which lets you get started quickly. See the following resources for more informa
-------------------- --------------------
^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^
``spack buildcache push`` ``spack buildcache create``
^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^
Create tarball of installed Spack package and all dependencies. Create tarball of installed Spack package and all dependencies.
View File
@@ -1,4 +1,4 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other .. Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details. Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -15,13 +15,15 @@ is an entire command dedicated to the management of every aspect of bootstrappin
.. command-output:: spack bootstrap --help .. command-output:: spack bootstrap --help
Spack is configured to bootstrap its dependencies lazily by default; i.e. the first time they are needed and The first thing to know to understand bootstrapping in Spack is that each of
can't be found. You can readily check if any prerequisite for using Spack is missing by running: Spack's dependencies is bootstrapped lazily; i.e. the first time it is needed and
can't be found. You can readily check if any prerequisite for using Spack
is missing by running:
.. code-block:: console .. code-block:: console
% spack bootstrap status % spack bootstrap status
Spack v0.19.0 - python@3.8 Spack v0.17.1 - python@3.8
[FAIL] Core Functionalities [FAIL] Core Functionalities
[B] MISSING "clingo": required to concretize specs [B] MISSING "clingo": required to concretize specs
@@ -32,14 +34,9 @@ can't be found. You can readily check if any prerequisite for using Spack is mis
Spack will take care of bootstrapping any missing dependency marked as [B]. Dependencies marked as [-] are instead required to be found on the system. Spack will take care of bootstrapping any missing dependency marked as [B]. Dependencies marked as [-] are instead required to be found on the system.
% echo $?
1
In the case of the output shown above Spack detected that both ``clingo`` and ``gnupg`` In the case of the output shown above Spack detected that both ``clingo`` and ``gnupg``
are missing and it's giving detailed information on why they are needed and whether are missing and it's giving detailed information on why they are needed and whether
they can be bootstrapped. The return code of this command summarizes the results: if any they can be bootstrapped. Running a command that concretizes a spec, like:
dependencies are missing, the return code is ``1``; otherwise ``0``. Running a command that
concretizes a spec, like:
.. code-block:: console .. code-block:: console
@@ -49,22 +46,7 @@ concretizes a spec, like:
==> Installing "clingo-bootstrap@spack%apple-clang@12.0.0~docs~ipo+python build_type=Release arch=darwin-catalina-x86_64" from a buildcache ==> Installing "clingo-bootstrap@spack%apple-clang@12.0.0~docs~ipo+python build_type=Release arch=darwin-catalina-x86_64" from a buildcache
[ ... ] [ ... ]
automatically triggers the bootstrapping of clingo from pre-built binaries as expected. triggers the bootstrapping of clingo from pre-built binaries as expected.
Users can also bootstrap all the dependencies needed by Spack in a single command, which
might be useful to set up containers or other similar environments:
.. code-block:: console
$ spack bootstrap now
==> Bootstrapping clingo from pre-built binaries
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.3/build_cache/linux-centos7-x86_64-gcc-10.2.1-clingo-bootstrap-spack-shqedxgvjnhiwdcdrvjhbd73jaevv7wt.spec.json
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.3/build_cache/linux-centos7-x86_64/gcc-10.2.1/clingo-bootstrap-spack/linux-centos7-x86_64-gcc-10.2.1-clingo-bootstrap-spack-shqedxgvjnhiwdcdrvjhbd73jaevv7wt.spack
==> Installing "clingo-bootstrap@spack%gcc@10.2.1~docs~ipo+python+static_libstdcpp build_type=Release arch=linux-centos7-x86_64" from a buildcache
==> Bootstrapping patchelf from pre-built binaries
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.3/build_cache/linux-centos7-x86_64-gcc-10.2.1-patchelf-0.15.0-htk62k7efo2z22kh6kmhaselru7bfkuc.spec.json
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.3/build_cache/linux-centos7-x86_64/gcc-10.2.1/patchelf-0.15.0/linux-centos7-x86_64-gcc-10.2.1-patchelf-0.15.0-htk62k7efo2z22kh6kmhaselru7bfkuc.spack
==> Installing "patchelf@0.15.0%gcc@10.2.1 ldflags="-static-libstdc++ -static-libgcc" arch=linux-centos7-x86_64" from a buildcache
----------------------- -----------------------
The Bootstrapping store The Bootstrapping store
@@ -87,7 +69,7 @@ You can check what is installed in the bootstrapping store at any time using:
.. code-block:: console .. code-block:: console
% spack -b find % spack find -b
==> Showing internal bootstrap store at "/Users/spack/.spack/bootstrap/store" ==> Showing internal bootstrap store at "/Users/spack/.spack/bootstrap/store"
==> 11 installed packages ==> 11 installed packages
-- darwin-catalina-x86_64 / apple-clang@12.0.0 ------------------ -- darwin-catalina-x86_64 / apple-clang@12.0.0 ------------------
@@ -101,7 +83,7 @@ In case it is needed you can remove all the software in the current bootstrappin
% spack clean -b % spack clean -b
==> Removing bootstrapped software and configuration in "/Users/spack/.spack/bootstrap" ==> Removing bootstrapped software and configuration in "/Users/spack/.spack/bootstrap"
% spack -b find % spack find -b
==> Showing internal bootstrap store at "/Users/spack/.spack/bootstrap/store" ==> Showing internal bootstrap store at "/Users/spack/.spack/bootstrap/store"
==> 0 installed packages ==> 0 installed packages
@@ -125,19 +107,19 @@ If need be, you can disable bootstrapping altogether by running:
in which case it's your responsibility to ensure Spack runs in an
environment where all its prerequisites are installed. You can
also configure Spack to skip certain bootstrapping methods by disabling
them specifically:

.. code-block:: console

% spack bootstrap disable github-actions
==> "github-actions" is now disabled and will not be used for bootstrapping

tells Spack to skip trying to bootstrap from binaries. To add the "github-actions" method back you can:

.. code-block:: console

% spack bootstrap enable github-actions

There is also an option to reset the bootstrapping configuration to Spack's defaults:
@@ -175,4 +157,4 @@ bootstrapping.
This command needs to be run on a machine with internet access and the resulting folder
has to be moved over to the air-gapped system. Once the local sources are added using the
commands suggested at the prompt, they can be used to bootstrap Spack.
@@ -1,117 +1,278 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _build-settings:
================================
Package Settings (packages.yaml)
================================
Spack allows you to customize how your software is built through the
``packages.yaml`` file. Using it, you can make Spack prefer particular
implementations of virtual dependencies (e.g., MPI or BLAS/LAPACK),
or you can make it prefer to build with particular compilers. You can
also tell Spack to use *external* software installations already
present on your system.
At a high level, the ``packages.yaml`` file is structured like this:
.. code-block:: yaml
packages:
package1:
# settings for package1
package2:
# settings for package2
# ...
all:
# settings that apply to all packages.
So you can either set build preferences specifically for *one* package,
or you can specify that certain settings should apply to *all* packages.
The types of settings you can customize are described in detail below.
Spack's build defaults are in the default
``etc/spack/defaults/packages.yaml`` file. You can override them in
``~/.spack/packages.yaml`` or ``etc/spack/packages.yaml``. For more
details on how this works, see :ref:`configuration-scopes`.
.. _sec-external-packages:
-----------------
External Packages
-----------------
Spack can be configured to use externally-installed
packages rather than building its own packages. This may be desirable
if machines ship with system packages, such as a customized MPI
that should be used instead of Spack building its own MPI.
External packages are configured through the ``packages.yaml`` file.
Here's an example of an external configuration:
.. code-block:: yaml
packages:
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
This example lists three installations of OpenMPI, one built with GCC,
one built with GCC and debug information, and another built with Intel.
If Spack is asked to build a package that uses one of these MPIs as a
dependency, it will use the pre-installed OpenMPI in
the given directory. Note that the specified path is the top-level
install prefix, not the ``bin`` subdirectory.
``packages.yaml`` can also be used to specify modules to load instead
of the installation prefixes. The following example says that module
``CMake/3.7.2`` provides cmake version 3.7.2.
.. code-block:: yaml
cmake:
externals:
- spec: cmake@3.7.2
modules:
- CMake/3.7.2
Each ``packages.yaml`` begins with a ``packages:`` attribute, followed
by a list of package names. To specify externals, add an ``externals:``
attribute under the package name, which lists externals.
Each external should specify a ``spec:`` string that should be as
well-defined as reasonably possible. If a
package lacks a spec component, such as missing a compiler or
package version, then Spack will guess the missing component based
on its most-favored packages, and it may guess incorrectly.
Each package version and compiler listed in an external should
have entries in Spack's packages and compiler configuration, even
though the package and compiler may not ever be built.
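As an illustrative sketch (the installation paths below are hypothetical), the ``intel@10.1`` compiler used by the external above would have a matching entry in the compilers configuration:

.. code-block:: yaml

   compilers:
   - compiler:
       spec: intel@10.1
       operating_system: debian7
       modules: []
       paths:
         cc: /opt/intel/bin/icc
         cxx: /opt/intel/bin/icpc
         f77: /opt/intel/bin/ifort
         fc: /opt/intel/bin/ifort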
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Prevent packages from being built from sources
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Adding an external spec in ``packages.yaml`` allows Spack to use an external location,
but it does not prevent Spack from building packages from sources. In the above example,
Spack might choose for many valid reasons to start building and linking with the
latest version of OpenMPI rather than continue using the pre-installed OpenMPI versions.
To prevent this, the ``packages.yaml`` configuration also allows packages
to be flagged as non-buildable. The previous example could be modified to
be:
.. code-block:: yaml
packages:
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
buildable: False
The addition of the ``buildable`` flag tells Spack that it should never build
its own version of OpenMPI from sources, and it will instead always rely on a pre-built
OpenMPI.
.. note::
If ``concretizer:reuse`` is on (see :ref:`concretizer-options` for more information on that flag)
pre-built specs include specs already available from a local store, an upstream store, a registered
buildcache or specs marked as externals in ``packages.yaml``. If ``concretizer:reuse`` is off, only
external specs in ``packages.yaml`` are included in the list of pre-built specs.
If an external module is specified as not buildable, then Spack will load the
external module into the build environment which can be used for linking.
The ``buildable`` flag does not need to be paired with external packages.
It could also be used alone to forbid packages that may be
buggy or otherwise undesirable.
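For instance, a site that never wants Spack to build its own ``openssl`` could forbid building it outright, even without listing any externals (the package choice here is illustrative):

.. code-block:: yaml

   packages:
     openssl:
       buildable: False

With this in place, any spec needing ``openssl`` can only be satisfied by an external or an already-installed copy; otherwise concretization fails.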
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Non-buildable virtual packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Virtual packages in Spack can also be specified as not buildable, and
external implementations can be provided. In the example above,
OpenMPI is configured as not buildable, but Spack will often prefer
other MPI implementations over the externally available OpenMPI. Spack
can be configured with every MPI provider not buildable individually,
but more conveniently:
.. code-block:: yaml
packages:
mpi:
buildable: False
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
Spack can then use any of the listed external implementations of MPI
to satisfy a dependency, and will choose depending on the compiler and
architecture.
In cases where the concretizer is configured to reuse specs, and other ``mpi`` providers
(available via stores or buildcaches) are not wanted, Spack can be configured to require
specs matching only the available externals:
.. code-block:: yaml
packages:
mpi:
buildable: False
require:
- one_of: [
"openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64",
"openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug",
"openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
]
openmpi:
externals:
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.4.3
- spec: "openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug"
prefix: /opt/openmpi-1.4.3-debug
- spec: "openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64"
prefix: /opt/openmpi-1.6.5-intel
This configuration prevents any spec that uses MPI and originates from stores or buildcaches from being reused,
unless it matches the requirements under ``packages:mpi:require``. For more information on requirements see
:ref:`package-requirements`.
.. _cmd-spack-external-find:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Automatically Find External Packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can run the :ref:`spack external find <spack-external-find>` command
to search for system-provided packages and add them to ``packages.yaml``.
After running this command your ``packages.yaml`` may include new entries:
.. code-block:: yaml
packages:
cmake:
externals:
- spec: cmake@3.17.2
prefix: /usr
This is generally useful for detecting a small set of commonly-used packages;
for now it is mostly limited to finding build-only dependencies.
Specific limitations include:
* Packages are not discoverable by default: For a package to be
discoverable with ``spack external find``, it needs to add special
logic. See :ref:`here <make-package-findable>` for more details.
* The logic does not search through module files, it can only detect
packages with executables defined in ``PATH``; you can help Spack locate
externals which use module files by loading any associated modules for
packages that you want Spack to know about before running
``spack external find``.
* Spack does not overwrite existing entries in the package configuration:
If there is an external defined for a spec at any configuration scope,
then Spack will not add a new external entry (``spack config blame packages``
can help locate all external entries).
.. _concretizer-options:

==========================================
Concretization Settings (concretizer.yaml)
==========================================

The ``concretizer.yaml`` configuration file allows you to customize aspects of the
algorithm used to select the dependencies you install. The default configuration
is the following:
.. literalinclude:: _spack_root/etc/spack/defaults/concretizer.yaml
   :language: yaml
--------------------------------
Reuse already installed packages
--------------------------------

The ``reuse`` attribute controls how aggressively Spack reuses binary packages during concretization. The
attribute can either be a single value, or an object for more complex configurations.

In the former case ("single value") it allows Spack to:
1. Reuse installed packages and buildcaches for all the specs to be concretized, when ``true``
2. Reuse installed packages and buildcaches only for the dependencies of the root specs, when ``dependencies``
3. Disregard reusing installed packages and buildcaches, when ``false``
In case a finer control over which specs are reused is needed, then the value of this attribute can be
an object, with the following keys:
1. ``roots``: if ``true`` root specs are reused, if ``false`` only dependencies of root specs are reused
2. ``from``: list of sources from which reused specs are taken
Each source in ``from`` is itself an object:
.. list-table:: Attributes for a source of reusable specs
   :header-rows: 1
* - Attribute name
- Description
* - type (mandatory, string)
- Can be ``local``, ``buildcache``, or ``external``
* - include (optional, list of specs)
- If present, reusable specs must match at least one of the constraints in the list
* - exclude (optional, list of specs)
- If present, reusable specs must not match any of the constraints in the list.
For instance, the following configuration:
.. code-block:: yaml
concretizer:
reuse:
roots: true
from:
- type: local
include:
- "%gcc"
- "%clang"
tells the concretizer to reuse all specs compiled with either ``gcc`` or ``clang``, that are installed
in the local store. Any spec from remote buildcaches is disregarded.
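The ``exclude`` attribute works the same way in reverse. For example, a sketch that reuses roots and dependencies from a buildcache while rejecting anything compiled with ``clang`` (the filter is illustrative):

.. code-block:: yaml

   concretizer:
     reuse:
       roots: true
       from:
       - type: buildcache
         exclude:
         - "%clang"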
To reduce the boilerplate in configuration files, default values for the ``include`` and
``exclude`` options can be pushed up one level:
.. code-block:: yaml
concretizer:
reuse:
roots: true
include:
- "%gcc"
from:
- type: local
- type: buildcache
- type: local
include:
- "foo %oneapi"
In the example above we reuse all specs compiled with ``gcc`` from the local store
and remote buildcaches, and we also reuse ``foo %oneapi``. Note that the last source of
specs overrides the default ``include`` attribute.

For one-off concretizations, there are command line arguments for each of the simple "single value"
configurations. This means a user can run:
.. code-block:: console

% spack install --reuse <spec>

to enable reuse for a single installation, or:

.. code-block:: console

% spack install --fresh <spec>

to do a fresh install if ``reuse`` is enabled by default.
``reuse: true`` is the default.
.. seealso::

   FAQ: :ref:`Why does Spack pick particular versions and variants? <faq-concretizer-precedence>`

------------------------------------------
Selection of the target microarchitectures
------------------------------------------
The options under the ``targets`` attribute control which targets are considered during a solve.
Currently the options in this section are only configurable from the ``concretizer.yaml`` file
and there are no corresponding command line arguments to enable them for a single solve.

The ``granularity`` option can take two possible values: ``microarchitectures`` and ``generic``.
@@ -141,28 +302,232 @@ microarchitectures considered during the solve are constrained to be compatible
host Spack is currently running on. For instance, if this option is set to ``true``, a
user cannot concretize for ``target=icelake`` while running on a Haswell node.
---------------
Duplicate nodes
---------------

The ``duplicates`` attribute controls whether the DAG can contain multiple configurations of
the same package. This is mainly relevant for build dependencies, which may have their version
pinned by some nodes, and thus be required at different versions by different nodes in the same
DAG.

The ``strategy`` option controls how the solver deals with duplicates. If the value is ``none``,
then a single configuration per package is allowed in the DAG. This means, for instance, that only
a single ``cmake`` or a single ``py-setuptools`` version is allowed. The result would be a slightly
faster concretization, at the expense of making a few specs unsolvable.

If the value is ``minimal`` Spack will allow packages tagged as ``build-tools`` to have duplicates.
This allows one, for instance, to concretize specs whose nodes require different, and incompatible, ranges
of some build tool. For instance, in the figure below the latest ``py-shapely`` requires a newer ``py-setuptools``,
while ``py-numpy`` still needs an older version:

.. figure:: images/shapely_duplicates.svg
:scale: 70 %
:align: center

Up to Spack v0.20 ``duplicates:strategy:none`` was the default (and only) behavior. From Spack v0.21 the
default behavior is ``duplicates:strategy:minimal``.

.. _package-preferences:

-------------------
Package Preferences
-------------------

Spack can be configured to prefer certain compilers, package
versions, dependencies, and variants during concretization.
The preferred configuration can be controlled via the
``~/.spack/packages.yaml`` file for user configurations, or the
``etc/spack/packages.yaml`` site configuration.

Here's an example ``packages.yaml`` file that sets preferred packages:

.. code-block:: yaml

packages:
opencv:
compiler: [gcc@4.9]
variants: +debug
gperftools:
version: [2.2, 2.4, 2.3]
all:
compiler: [gcc@4.4.7, 'gcc@4.6:', intel, clang, pgi]
target: [sandybridge]
providers:
mpi: [mvapich2, mpich, openmpi]
At a high level, this example is specifying how packages should be
concretized. The opencv package should prefer using GCC 4.9 and
be built with debug options. The gperftools package should prefer version
2.2 over 2.4. Every package on the system should prefer mvapich2 for
its MPI and GCC 4.4.7 (except for opencv, which overrides this by preferring GCC 4.9).
These options are used to fill in implicit defaults. Any of them can be overwritten
on the command line if explicitly requested.
Each ``packages.yaml`` file begins with the string ``packages:`` and
package names are specified on the next level. The special string ``all``
applies settings to *all* packages. Underneath each package name is one
or more components: ``compiler``, ``variants``, ``version``,
``providers``, and ``target``. Each component has an ordered list of
spec ``constraints``, with earlier entries in the list being preferred
over later entries.
Sometimes a package installation may have constraints that forbid
the first concretization rule, in which case Spack will use the first
legal concretization rule. Going back to the example, if a user
requests gperftools 2.3 or later, then Spack will install version 2.4
as the 2.4 version of gperftools is preferred over 2.3.
An explicit concretization rule in the preferred section will always
take preference over unlisted concretizations. In the above example,
xlc isn't listed in the compiler list. Every listed compiler from
gcc to pgi will thus be preferred over the xlc compiler.
The syntax for the ``provider`` section differs slightly from other
concretization rules. A provider lists a value that packages may
``depend_on`` (e.g, MPI) and a list of rules for fulfilling that
dependency.
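For example, the ``providers`` rules from the earlier example can be read as: when a package depends on the virtual ``mpi``, prefer ``mvapich2``, then ``mpich``, then ``openmpi``. A standalone sketch (the ``blas`` line is an additional illustration):

.. code-block:: yaml

   packages:
     all:
       providers:
         mpi: [mvapich2, mpich, openmpi]
         blas: [openblas]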
.. _package-requirements:
--------------------
Package Requirements
--------------------
You can use the configuration to force the concretizer to choose
specific properties for packages when building them. Like preferences,
these are only applied when the package is required by some other
request (e.g. if the package is needed as a dependency of a
request to ``spack install``).
An example of where this is useful is if you have a package that
is normally built as a dependency but only under certain circumstances
(e.g. only when a variant on a dependent is active): you can make
sure that it always builds the way you want it to; this distinguishes
package configuration requirements from constraints that you add to
``spack install`` or to environments (in those cases, the associated
packages are always built).
The following is an example of how to enforce package properties in
``packages.yaml``:
.. code-block:: yaml
packages:
libfabric:
require: "@1.13.2"
openmpi:
require:
- any_of: ["~cuda", "%gcc"]
mpich:
require:
- one_of: ["+cuda", "+rocm"]
Requirements are expressed using Spec syntax (the same as what is provided
to ``spack install``). In the simplest case, you can specify attributes
that you always want the package to have by providing a single spec to
``require``; in the above example, ``libfabric`` will always build
with version 1.13.2.
You can provide a more-relaxed constraint and allow the concretizer to
choose between a set of options using ``any_of`` or ``one_of``:
* ``any_of`` is a list of specs. One of those specs must be satisfied
and it is also allowed for the concretized spec to match more than one.
In the above example, that means you could build ``openmpi+cuda%gcc``,
``openmpi~cuda%clang`` or ``openmpi~cuda%gcc`` (in the last case,
note that both specs in the ``any_of`` for ``openmpi`` are
satisfied).
* ``one_of`` is also a list of specs, and the final concretized spec
must match exactly one of them. In the above example, that means
you could build ``mpich+cuda`` or ``mpich+rocm`` but not
``mpich+cuda+rocm`` (note the current package definition for
``mpich`` already includes a conflict, so this is redundant but
still demonstrates the concept).
.. note::
For ``any_of`` and ``one_of``, the order of specs indicates a
preference: items that appear earlier in the list are preferred
(note that these preferences can be ignored in favor of others).
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting default requirements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can also set default requirements for all packages under ``all``
like this:
.. code-block:: yaml
packages:
all:
require: '%clang'
which means every spec will be required to use ``clang`` as a compiler.
Note that in this case ``all`` represents a *default set of requirements* -
if there are specific package requirements, then the default requirements
under ``all`` are disregarded. For example, with a configuration like this:
.. code-block:: yaml
packages:
all:
require: '%clang'
cmake:
require: '%gcc'
Spack requires ``cmake`` to use ``gcc`` and all other nodes (including cmake dependencies)
to use ``clang``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting requirements on virtual specs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A requirement on a virtual spec applies whenever that virtual is present in the DAG. This
can be useful for fixing which virtual provider you want to use:
.. code-block:: yaml
packages:
mpi:
require: 'mvapich2 %gcc'
With the configuration above the only allowed ``mpi`` provider is ``mvapich2 %gcc``.
Requirements on the virtual spec and on the specific provider are both applied, if present. For
instance with a configuration like:
.. code-block:: yaml
packages:
mpi:
require: 'mvapich2 %gcc'
mvapich2:
require: '~cuda'
you will use ``mvapich2~cuda %gcc`` as an ``mpi`` provider.
.. _package_permissions:
-------------------
Package Permissions
-------------------
Spack can be configured to assign permissions to the files installed
by a package.
In the ``packages.yaml`` file under ``permissions``, the attributes
``read``, ``write``, and ``group`` control the package
permissions. These attributes can be set per-package, or for all
packages under ``all``. If permissions are set under ``all`` and for a
specific package, the package-specific settings take precedence.
The ``read`` and ``write`` attributes take one of ``user``, ``group``,
and ``world``.
.. code-block:: yaml
packages:
all:
permissions:
write: group
group: spack
my_app:
permissions:
read: group
group: my_team
The permissions settings describe the broadest level of access to
installations of the specified packages. The execute permissions of
the file are set to the same level as read permissions for those files
that are executable. The default setting for ``read`` is ``world``,
and for ``write`` is ``user``. In the example above, installations of
``my_app`` will be installed with user and group permissions but no
world permissions, and owned by the group ``my_team``. All other
packages will be installed with user and group write privileges, and
world read privileges. Those packages will be owned by the group
``spack``.
The ``group`` attribute assigns a Unix-style group to a package. All
files installed by the package will be owned by the assigned group,
and the sticky group bit will be set on the install prefix and all
directories inside the install prefix. This will ensure that even
manually placed files within the install prefix are owned by the
assigned group. If no group is assigned, Spack will allow the OS
default behavior to apply.
@@ -1,4 +1,4 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -65,6 +65,7 @@ on these ideas for each distinct build system that Spack supports:
build_systems/custompackage
build_systems/inteloneapipackage
build_systems/intelpackage
build_systems/multiplepackage
build_systems/rocmpackage
build_systems/sourceforgepackage
@@ -1,13 +1,13 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)

.. _autotoolspackage:

---------
Autotools
---------

Autotools is a GNU build system that provides a build-script generator.
By running the platform-independent ``./configure`` script that comes
@@ -17,7 +17,7 @@ with the package, you can generate a platform-dependent Makefile.
Phases
^^^^^^

The ``AutotoolsBuilder`` and ``AutotoolsPackage`` base classes come with the following phases:

#. ``autoreconf`` - generate the configure script
#. ``configure`` - generate the Makefiles
@@ -127,9 +127,9 @@ check out a commit from the ``master`` branch, you would want to add:
.. code-block:: python

depends_on("autoconf", type="build", when="@master")
depends_on("automake", type="build", when="@master")
depends_on("libtool", type="build", when="@master")

It is typically redundant to list the ``m4`` macro processor package as a
dependency, since ``autoconf`` already depends on it.
@@ -145,16 +145,7 @@ example, the ``bash`` shell is used to run the ``autogen.sh`` script.
.. code-block:: python

def autoreconf(self, spec, prefix):
    which("bash")("autogen.sh")
If the ``package.py`` has build instructions in a separate
:ref:`builder class <multiple_build_systems>`, the signature for a phase changes slightly:
.. code-block:: python
class AutotoolsBuilder(AutotoolsBuilder):
    def autoreconf(self, pkg, spec, prefix):
        which("bash")("autogen.sh")
"""""""""""""""""""""""""""""""""""""""
patching configure or Makefile.in files
@@ -195,9 +186,9 @@ To opt out of this feature, use the following setting:
To enable it conditionally on different architectures, define a property and
make the package depend on ``gnuconfig`` as a build dependency:

.. code-block:: python

depends_on("gnuconfig", when="@1.0:")
@property
def patch_config_files(self):
@@ -239,7 +230,7 @@ version, this can be done like so:
@property
def force_autoreconf(self):
    return self.version == Version("1.2.3")
^^^^^^^^^^^^^^^^^^^^^^^
Finding configure flags
@@ -287,22 +278,13 @@ function like so:
def configure_args(self):
    args = []

    if self.spec.satisfies("+mpi"):
        args.append("--enable-mpi")
    else:
        args.append("--disable-mpi")

    return args
Alternatively, you can use the :ref:`enable_or_disable <autotools_enable_or_disable>` helper:
.. code-block:: python
def configure_args(self):
return [self.enable_or_disable("mpi")]
Note that we are explicitly disabling MPI support if it is not
requested. This is important, as many Autotools packages will enable
options by default if the dependencies are found, and disable them
@@ -313,11 +295,9 @@ and `here <https://wiki.gentoo.org/wiki/Project:Quality_Assurance/Automagic_depe
for a rationale as to why these so-called "automagic" dependencies
are a problem.
.. note::

   By default, Autotools installs packages to ``/usr``. We don't want this,
   so Spack automatically adds ``--prefix=/path/to/installation/prefix``
   to your list of ``configure_args``. You don't need to add this yourself.
^^^^^^^^^^^^^^^^
Helper functions
@@ -328,8 +308,6 @@ You may have noticed that most of the Autotools flags are of the form
``--without-baz``. Since these flags are so common, Spack provides a
couple of helper functions to make your life easier.
.. _autotools_enable_or_disable:
"""""""""""""""""
enable_or_disable
"""""""""""""""""
@@ -341,11 +319,11 @@ typically used to enable or disable some feature within the package.
.. code-block:: python .. code-block:: python
variant( variant(
"memchecker", 'memchecker',
default=False, default=False,
description="Memchecker support for debugging [degrades performance]" description='Memchecker support for debugging [degrades performance]'
) )
config_args.extend(self.enable_or_disable("memchecker")) config_args.extend(self.enable_or_disable('memchecker'))
In this example, specifying the variant ``+memchecker`` will generate In this example, specifying the variant ``+memchecker`` will generate
the following configuration options: the following configuration options:
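The flag generation for a boolean variant can be sketched outside of Spack in plain Python. This is an illustration of the documented semantics only, not Spack's implementation; the real helper reads the variant value from ``self.spec``:

```python
def enable_or_disable(name, active):
    """Minimal sketch of the boolean case: one --enable-*/--disable-* flag.

    Illustrative only; Spack's helper looks the variant up on the
    package's spec rather than taking a boolean argument.
    """
    return "--{0}-{1}".format("enable" if active else "disable", name)


# +memchecker -> --enable-memchecker, ~memchecker -> --disable-memchecker
print(enable_or_disable("memchecker", True))
print(enable_or_disable("memchecker", False))
```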
@@ -365,15 +343,15 @@ the ``with_or_without`` method.
.. code-block:: python .. code-block:: python
variant( variant(
"schedulers", 'schedulers',
values=disjoint_sets( values=disjoint_sets(
("auto",), ("alps", "lsf", "tm", "slurm", "sge", "loadleveler") ('auto',), ('alps', 'lsf', 'tm', 'slurm', 'sge', 'loadleveler')
).with_non_feature_values("auto", "none"), ).with_non_feature_values('auto', 'none'),
description="List of schedulers for which support is enabled; " description="List of schedulers for which support is enabled; "
"'auto' lets openmpi determine", "'auto' lets openmpi determine",
) )
if not spec.satisfies("schedulers=auto"): if 'schedulers=auto' not in spec:
config_args.extend(self.with_or_without("schedulers")) config_args.extend(self.with_or_without('schedulers'))
In this example, specifying the variant ``schedulers=slurm,sge`` will In this example, specifying the variant ``schedulers=slurm,sge`` will
generate the following configuration options: generate the following configuration options:
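For a multi-valued variant, the helper emits one flag per possible feature value. A standalone sketch of that logic (illustrative only; the real method derives both lists from the variant definition on the spec):

```python
def with_or_without(name, possible_values, active_values):
    """Sketch of the multi-valued case: --with-<v> for each selected
    value and --without-<v> for the rest. Illustrative only."""
    return [
        "--{0}-{1}".format("with" if v in active_values else "without", v)
        for v in possible_values
    ]


flags = with_or_without(
    "schedulers",
    ("alps", "lsf", "tm", "slurm", "sge", "loadleveler"),
    ("slurm", "sge"),
)
print(flags)
```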
@@ -398,16 +376,16 @@ generated, using the ``activation_value`` argument to
.. code-block:: python .. code-block:: python
variant( variant(
"fabrics", 'fabrics',
values=disjoint_sets( values=disjoint_sets(
("auto",), ("psm", "psm2", "verbs", "mxm", "ucx", "libfabric") ('auto',), ('psm', 'psm2', 'verbs', 'mxm', 'ucx', 'libfabric')
).with_non_feature_values("auto", "none"), ).with_non_feature_values('auto', 'none'),
description="List of fabrics that are enabled; " description="List of fabrics that are enabled; "
"'auto' lets openmpi determine", "'auto' lets openmpi determine",
) )
if not spec.satisfies("fabrics=auto"): if 'fabrics=auto' not in spec:
config_args.extend(self.with_or_without("fabrics", config_args.extend(self.with_or_without('fabrics',
activation_value="prefix")) activation_value='prefix'))
``activation_value`` accepts a callable that generates the configure ``activation_value`` accepts a callable that generates the configure
parameter value given the variant value; but the special value parameter value given the variant value; but the special value
@@ -431,16 +409,16 @@ When Spack variants and configure flags do not correspond one-to-one, the
.. code-block:: python .. code-block:: python
variant("debug_tools", default=False) variant('debug_tools', default=False)
config_args += self.enable_or_disable("debug-tools", variant="debug_tools") config_args += self.enable_or_disable('debug-tools', variant='debug_tools')
Or when one variant controls multiple flags: Or when one variant controls multiple flags:
.. code-block:: python .. code-block:: python
variant("debug_tools", default=False) variant('debug_tools', default=False)
config_args += self.with_or_without("memchecker", variant="debug_tools") config_args += self.with_or_without('memchecker', variant='debug_tools')
config_args += self.with_or_without("profiler", variant="debug_tools") config_args += self.with_or_without('profiler', variant='debug_tools')
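The effect of routing a single variant through two helpers can be sketched as follows (a toy stand-in for the helper, with hypothetical values; not Spack's implementation):

```python
def with_or_without(option, active):
    """Sketch: one boolean value driving a single --with/--without flag."""
    return "--{0}-{1}".format("with" if active else "without", option)


debug_tools = True  # corresponds to building with +debug_tools
flags = [
    with_or_without("memchecker", debug_tools),
    with_or_without("profiler", debug_tools),
]
print(flags)
```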
"""""""""""""""""""" """"""""""""""""""""
@@ -454,8 +432,8 @@ For example:
.. code-block:: python .. code-block:: python
variant("profiler", when="@2.0:") variant('profiler', when='@2.0:')
config_args += self.with_or_without("profiler") config_args += self.with_or_without('profiler')
will neither add ``--with-profiler`` nor ``--without-profiler`` when the version is will neither add ``--with-profiler`` nor ``--without-profiler`` when the version is
below ``2.0``. below ``2.0``.
@@ -474,10 +452,10 @@ the variant values require atypical behavior.
def with_or_without_verbs(self, activated): def with_or_without_verbs(self, activated):
# Up through version 1.6, this option was named --with-openib. # Up through version 1.6, this option was named --with-openib.
# In version 1.7, it was renamed to be --with-verbs. # In version 1.7, it was renamed to be --with-verbs.
opt = "verbs" if self.spec.satisfies("@1.7:") else "openib" opt = 'verbs' if self.spec.satisfies('@1.7:') else 'openib'
if not activated: if not activated:
return f"--without-{opt}" return '--without-{0}'.format(opt)
return f"--with-{opt}={self.spec['rdma-core'].prefix}" return '--with-{0}={1}'.format(opt, self.spec['rdma-core'].prefix)
Defining ``with_or_without_verbs`` overrides the behavior of a Defining ``with_or_without_verbs`` overrides the behavior of a
``fabrics=verbs`` variant, changing the configure-time option to ``fabrics=verbs`` variant, changing the configure-time option to
@@ -501,7 +479,7 @@ do this like so:
.. code-block:: python .. code-block:: python
configure_directory = "src" configure_directory = 'src'
^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^
Building out of source Building out of source
@@ -513,7 +491,7 @@ This can be done using the ``build_directory`` variable:
.. code-block:: python .. code-block:: python
build_directory = "spack-build" build_directory = 'spack-build'
By default, Spack will build the package in the same directory that By default, Spack will build the package in the same directory that
contains the ``configure`` script contains the ``configure`` script
@@ -536,8 +514,8 @@ library or build the documentation, you can add these like so:
.. code-block:: python .. code-block:: python
build_targets = ["all", "docs"] build_targets = ['all', 'docs']
install_targets = ["install", "docs"] install_targets = ['install', 'docs']
^^^^^^^ ^^^^^^^
Testing Testing

View File

@@ -1,40 +1,17 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details. Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _bundlepackage: .. _bundlepackage:
------ -------------
Bundle BundlePackage
------ -------------
``BundlePackage`` represents a set of packages that are expected to work ``BundlePackage`` represents a set of packages that are expected to work well
well together, such as a collection of commonly used software libraries. together, such as a collection of commonly used software libraries. The
The associated software is specified as dependencies. associated software is specified as bundle dependencies.
If it makes sense, variants, conflicts, and requirements can be added to
the package. :ref:`Variants <variants>` ensure that common build options
are consistent across the packages supporting them. :ref:`Conflicts
and requirements <packaging_conflicts>` prevent attempts to build with known
bugs or limitations.
For example, if ``MyBundlePackage`` is known to only build on ``linux``,
it could use the ``require`` directive as follows:
.. code-block:: python
require("platform=linux", msg="MyBundlePackage only builds on linux")
Spack has a number of built-in bundle packages, such as:
* `AmdAocl <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/amd-aocl/package.py>`_
* `EcpProxyApps <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ecp-proxy-apps/package.py>`_
* `Libc <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libc/package.py>`_
* `Xsdk <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xsdk/package.py>`_
where ``Xsdk`` also inherits from ``CudaPackage`` and ``RocmPackage`` and
``Libc`` is a virtual bundle package for the C standard library.
^^^^^^^^ ^^^^^^^^

View File

@@ -1,13 +1,13 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other .. Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details. Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _cachedcmakepackage: .. _cachedcmakepackage:
----------- ------------------
CachedCMake CachedCMakePackage
----------- ------------------
The CachedCMakePackage base class is used for CMake-based workflows The CachedCMakePackage base class is used for CMake-based workflows
that create a CMake cache file prior to running ``cmake``. This is that create a CMake cache file prior to running ``cmake``. This is
@@ -87,7 +87,7 @@ A typical usage of these methods may look something like this:
.. code-block:: python .. code-block:: python
def initconfig_mpi_entries(self): def initconfig_mpi_entries(self)
# Get existing MPI configurations # Get existing MPI configurations
entries = super(self, Foo).initconfig_mpi_entries() entries = super(self, Foo).initconfig_mpi_entries()
@@ -95,25 +95,25 @@ A typical usage of these methods may look something like this:
# This spec has an MPI variant, and we need to enable MPI when it is on. # This spec has an MPI variant, and we need to enable MPI when it is on.
# This hypothetical package controls MPI with the ``FOO_MPI`` option to # This hypothetical package controls MPI with the ``FOO_MPI`` option to
# cmake. # cmake.
if self.spec.satisfies("+mpi"): if '+mpi' in self.spec:
entries.append(cmake_cache_option("FOO_MPI", True, "enable mpi")) entries.append(cmake_cache_option('FOO_MPI', True, "enable mpi"))
else: else:
entries.append(cmake_cache_option("FOO_MPI", False, "disable mpi")) entries.append(cmake_cache_option('FOO_MPI', False, "disable mpi"))
def initconfig_package_entries(self): def initconfig_package_entries(self):
# Package specific options # Package specific options
entries = [] entries = []
entries.append("#Entries for build options") entries.append('#Entries for build options')
bar_on = self.spec.satisfies("+bar") bar_on = '+bar' in self.spec
entries.append(cmake_cache_option("FOO_BAR", bar_on, "toggle bar")) entries.append(cmake_cache_option('FOO_BAR', bar_on, 'toggle bar'))
entries.append("#Entries for dependencies") entries.append('#Entries for dependencies')
if self.spec["blas"].name == "baz": # baz is our blas provider if self.spec['blas'].name == 'baz': # baz is our blas provider
entries.append(cmake_cache_string("FOO_BLAS", "baz", "Use baz")) entries.append(cmake_cache_string('FOO_BLAS', 'baz', 'Use baz'))
entries.append(cmake_cache_path("BAZ_PREFIX", self.spec["baz"].prefix)) entries.append(cmake_cache_path('BAZ_PREFIX', self.spec['baz'].prefix))
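The ``cmake_cache_*`` helpers render ``set(... CACHE ...)`` lines for the generated cache file. A sketch of the kind of output they produce (the exact formatting is owned by ``CachedCMakePackage``; treat this as illustrative):

```python
def cmake_cache_option(name, value, comment=""):
    """Sketch: render a boolean CMake cache entry."""
    return 'set({0} {1} CACHE BOOL "{2}")'.format(
        name, "ON" if value else "OFF", comment
    )


def cmake_cache_string(name, value, comment=""):
    """Sketch: render a string CMake cache entry."""
    return 'set({0} "{1}" CACHE STRING "{2}")'.format(name, value, comment)


def cmake_cache_path(name, value, comment=""):
    """Sketch: render a path CMake cache entry."""
    return 'set({0} "{1}" CACHE PATH "{2}")'.format(name, value, comment)


print(cmake_cache_option("FOO_MPI", True, "enable mpi"))
print(cmake_cache_string("FOO_BLAS", "baz", "Use baz"))
```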
^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^
External documentation External documentation

View File

@@ -1,13 +1,13 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details. Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _cmakepackage: .. _cmakepackage:
----- ------------
CMake CMakePackage
----- ------------
Like Autotools, CMake is a widely-used build-script generator. Designed Like Autotools, CMake is a widely-used build-script generator. Designed
by Kitware, CMake is the most popular build system for new C, C++, and by Kitware, CMake is the most popular build system for new C, C++, and
@@ -21,7 +21,7 @@ whereas Autotools is Unix-only.
Phases Phases
^^^^^^ ^^^^^^
The ``CMakeBuilder`` and ``CMakePackage`` base classes come with the following phases: The ``CMakePackage`` base class comes with the following phases:
#. ``cmake`` - generate the Makefile #. ``cmake`` - generate the Makefile
#. ``build`` - build the package #. ``build`` - build the package
@@ -82,7 +82,7 @@ class already contains:
.. code-block:: python .. code-block:: python
depends_on("cmake", type="build") depends_on('cmake', type='build')
If you need to specify a particular version requirement, you can If you need to specify a particular version requirement, you can
@@ -90,7 +90,7 @@ override this in your package:
.. code-block:: python .. code-block:: python
depends_on("cmake@2.8.12:", type="build") depends_on('cmake@2.8.12:', type='build')
^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^
@@ -130,17 +130,17 @@ Adding flags to cmake
To add additional flags to the ``cmake`` call, simply override the To add additional flags to the ``cmake`` call, simply override the
``cmake_args`` function. The following example defines values for the flags ``cmake_args`` function. The following example defines values for the flags
``WHATEVER``, ``ENABLE_BROKEN_FEATURE``, ``DETECT_HDF5``, and ``THREADS`` with ``WHATEVER``, ``ENABLE_BROKEN_FEATURE``, ``DETECT_HDF5``, and ``THREADS`` with
and without the :meth:`~spack.build_systems.cmake.CMakeBuilder.define` and and without the :meth:`~spack.build_systems.cmake.CMakePackage.define` and
:meth:`~spack.build_systems.cmake.CMakeBuilder.define_from_variant` helper functions: :meth:`~spack.build_systems.cmake.CMakePackage.define_from_variant` helper functions:
.. code-block:: python .. code-block:: python
def cmake_args(self): def cmake_args(self):
args = [ args = [
"-DWHATEVER:STRING=somevalue", '-DWHATEVER:STRING=somevalue',
self.define("ENABLE_BROKEN_FEATURE", False), self.define('ENABLE_BROKEN_FEATURE', False),
self.define_from_variant("DETECT_HDF5", "hdf5"), self.define_from_variant('DETECT_HDF5', 'hdf5'),
self.define_from_variant("THREADS"), # True if +threads self.define_from_variant('THREADS'), # True if +threads
] ]
return args return args
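The ``define`` helper turns Python values into ``-D`` command-line arguments. The conversion can be sketched in standalone form (close to, but not guaranteed identical to, Spack's implementation):

```python
def define(cmake_var, value):
    """Sketch of define(): booleans become ON/OFF with a BOOL type,
    lists are joined with ';', everything else is stringified."""
    if isinstance(value, bool):
        kind, value = "BOOL", "ON" if value else "OFF"
    else:
        kind = "STRING"
        if isinstance(value, (list, tuple)):
            value = ";".join(str(v) for v in value)
        else:
            value = str(value)
    return "-D{0}:{1}={2}".format(cmake_var, kind, value)


print(define("ENABLE_BROKEN_FEATURE", False))  # -DENABLE_BROKEN_FEATURE:BOOL=OFF
print(define("WHATEVER", "somevalue"))         # -DWHATEVER:STRING=somevalue
```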
@@ -151,10 +151,10 @@ and CMake simply ignores the empty command line argument. For example the follow
.. code-block:: python .. code-block:: python
variant("example", default=True, when="@2.0:") variant('example', default=True, when='@2.0:')
def cmake_args(self): def cmake_args(self):
return [self.define_from_variant("EXAMPLE", "example")] return [self.define_from_variant('EXAMPLE', 'example')]
will generate ``'cmake' '-DEXAMPLE=ON' ...`` when ``@2.0: +example`` is met, but will will generate ``'cmake' '-DEXAMPLE=ON' ...`` when ``@2.0: +example`` is met, but will
result in ``'cmake' '' ...`` when the spec version is below ``2.0``. result in ``'cmake' '' ...`` when the spec version is below ``2.0``.
@@ -193,9 +193,9 @@ a variant to control this:
.. code-block:: python .. code-block:: python
variant("build_type", default="RelWithDebInfo", variant('build_type', default='RelWithDebInfo',
description="CMake build type", description='CMake build type',
values=("Debug", "Release", "RelWithDebInfo", "MinSizeRel")) values=('Debug', 'Release', 'RelWithDebInfo', 'MinSizeRel'))
However, not every CMake package accepts all four of these options. However, not every CMake package accepts all four of these options.
Grep the ``CMakeLists.txt`` file to see if the default values are Grep the ``CMakeLists.txt`` file to see if the default values are
@@ -205,9 +205,9 @@ package overrides the default variant with:
.. code-block:: python .. code-block:: python
variant("build_type", default="DebugRelease", variant('build_type', default='DebugRelease',
description="The build type to build", description='The build type to build',
values=("Debug", "Release", "DebugRelease")) values=('Debug', 'Release', 'DebugRelease'))
For more information on ``CMAKE_BUILD_TYPE``, see: For more information on ``CMAKE_BUILD_TYPE``, see:
https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html
@@ -250,7 +250,7 @@ generator is Ninja. To switch to the Ninja generator, simply add:
.. code-block:: python .. code-block:: python
generator("ninja") generator = 'Ninja'
``CMakePackage`` defaults to "Unix Makefiles". If you switch to the ``CMakePackage`` defaults to "Unix Makefiles". If you switch to the
@@ -258,7 +258,7 @@ Ninja generator, make sure to add:
.. code-block:: python .. code-block:: python
depends_on("ninja", type="build") depends_on('ninja', type='build')
to the package as well. Aside from that, you shouldn't need to do to the package as well. Aside from that, you shouldn't need to do
anything else. Spack will automatically detect that you are using anything else. Spack will automatically detect that you are using
@@ -288,7 +288,7 @@ like so:
.. code-block:: python .. code-block:: python
root_cmakelists_dir = "src" root_cmakelists_dir = 'src'
Note that this path is relative to the root of the extracted tarball, Note that this path is relative to the root of the extracted tarball,
@@ -304,7 +304,7 @@ different sub-directory, simply override ``build_directory`` like so:
.. code-block:: python .. code-block:: python
build_directory = "my-build" build_directory = 'my-build'
^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^
Build and install targets Build and install targets
@@ -324,8 +324,8 @@ library or build the documentation, you can add these like so:
.. code-block:: python .. code-block:: python
build_targets = ["all", "docs"] build_targets = ['all', 'docs']
install_targets = ["install", "docs"] install_targets = ['install', 'docs']
^^^^^^^ ^^^^^^^
Testing Testing

View File

@@ -1,13 +1,13 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details. Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _cudapackage: .. _cudapackage:
---- -----------
Cuda CudaPackage
---- -----------
Different from other packages, ``CudaPackage`` does not represent a build system. Different from other packages, ``CudaPackage`` does not represent a build system.
Instead its goal is to simplify and unify usage of ``CUDA`` in other packages by providing a `mixin-class <https://en.wikipedia.org/wiki/Mixin>`_. Instead its goal is to simplify and unify usage of ``CUDA`` in other packages by providing a `mixin-class <https://en.wikipedia.org/wiki/Mixin>`_.
@@ -28,14 +28,11 @@ This package provides the following variants:
* **cuda_arch** * **cuda_arch**
This variant supports the optional specification of one or multiple architectures. This variant supports the optional specification of the architecture.
Valid values are maintained in the ``cuda_arch_values`` property and Valid values are maintained in the ``cuda_arch_values`` property and
are the numeric character equivalent of the compute capability version are the numeric character equivalent of the compute capability version
(e.g., '10' for version 1.0). Each provided value affects associated (e.g., '10' for version 1.0). Each provided value affects associated
``CUDA`` dependencies and compiler conflicts. ``CUDA`` dependencies and compiler conflicts.
The variant builds both PTX code for the *virtual* architecture
(e.g. ``compute_10``) and binary code for the *real* architecture (e.g. ``sm_10``).
GPUs and their compute capability versions are listed at GPUs and their compute capability versions are listed at
https://developer.nvidia.com/cuda-gpus . https://developer.nvidia.com/cuda-gpus .
@@ -54,8 +51,8 @@ to terminate such build attempts with a suitable message:
.. code-block:: python .. code-block:: python
conflicts("cuda_arch=none", when="+cuda", conflicts('cuda_arch=none', when='+cuda',
msg="CUDA architecture is required") msg='CUDA architecture is required')
Similarly, if your software does not support all versions of the property, Similarly, if your software does not support all versions of the property,
you could add ``conflicts`` to your package for those versions. For example, you could add ``conflicts`` to your package for those versions. For example,
@@ -66,13 +63,13 @@ custom message should a user attempt such a build:
.. code-block:: python .. code-block:: python
unsupported_cuda_archs = [ unsupported_cuda_archs = [
"10", "11", "12", "13", '10', '11', '12', '13',
"20", "21", '20', '21',
"30", "32", "35", "37" '30', '32', '35', '37'
] ]
for value in unsupported_cuda_archs: for value in unsupported_cuda_archs:
conflicts(f"cuda_arch={value}", when="+cuda", conflicts('cuda_arch={0}'.format(value), when='+cuda',
msg=f"CUDA architecture {value} is not supported") msg='CUDA architecture {0} is not supported'.format(value))
^^^^^^^ ^^^^^^^
Methods Methods
@@ -83,7 +80,7 @@ standard CUDA compiler flags.
**cuda_flags** **cuda_flags**
This built-in static method returns a list of command line flags This built-in static method returns a list of command line flags
for the chosen ``cuda_arch`` value(s). The flags are intended to for the chosen ``cuda_arch`` value(s). The flags are intended to
be passed to the CUDA compiler driver (i.e., ``nvcc``). be passed to the CUDA compiler driver (i.e., ``nvcc``).
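For illustration, the shape of such flags can be sketched as follows. The exact option spelling is owned by the mixin; this sketch is an assumption based on common ``nvcc`` code-generation usage:

```python
def cuda_flags(arch_list):
    """Sketch: one code-generation clause per requested compute
    capability. Flag spelling is illustrative, not Spack's exact output."""
    return [
        "-gencode=arch=compute_{0},code=sm_{0}".format(arch)
        for arch in arch_list
    ]


print(cuda_flags(["70", "80"]))
```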
@@ -107,16 +104,16 @@ class of your package. For example, you can add it to your
spec = self.spec spec = self.spec
args = [] args = []
... ...
if spec.satisfies("+cuda"): if '+cuda' in spec:
# Set up the cuda macros needed by the build # Set up the cuda macros needed by the build
args.append("-DWITH_CUDA=ON") args.append('-DWITH_CUDA=ON')
cuda_arch_list = spec.variants["cuda_arch"].value cuda_arch_list = spec.variants['cuda_arch'].value
cuda_arch = cuda_arch_list[0] cuda_arch = cuda_arch_list[0]
if cuda_arch != "none": if cuda_arch != 'none':
args.append(f"-DCUDA_FLAGS=-arch=sm_{cuda_arch}") args.append('-DCUDA_FLAGS=-arch=sm_{0}'.format(cuda_arch))
else: else:
# Ensure build with cuda is disabled # Ensure build with cuda is disabled
args.append("-DWITH_CUDA=OFF") args.append('-DWITH_CUDA=OFF')
... ...
return args return args
@@ -125,7 +122,7 @@ You will need to customize options as needed for your build.
This example also illustrates how to check for the ``cuda`` variant using This example also illustrates how to check for the ``cuda`` variant using
``self.spec`` and how to retrieve the ``cuda_arch`` variant's value, which ``self.spec`` and how to retrieve the ``cuda_arch`` variant's value, which
is a list, using ``self.spec.variants["cuda_arch"].value``. is a list, using ``self.spec.variants['cuda_arch'].value``.
With over 70 packages using ``CudaPackage`` as of January 2021, there are With over 70 packages using ``CudaPackage`` as of January 2021, there are
lots of examples to choose from to get more ideas for using this package. lots of examples to choose from to get more ideas for using this package.

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details. Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -57,13 +57,13 @@ If you look at the ``perl`` package, you'll see:
.. code-block:: python .. code-block:: python
phases = ["configure", "build", "install"] phases = ['configure', 'build', 'install']
Similarly, ``cmake`` defines: Similarly, ``cmake`` defines:
.. code-block:: python .. code-block:: python
phases = ["bootstrap", "build", "install"] phases = ['bootstrap', 'build', 'install']
If we look at the ``cmake`` example, this tells Spack's ``PackageBase`` If we look at the ``cmake`` example, this tells Spack's ``PackageBase``
class to run the ``bootstrap``, ``build``, and ``install`` functions class to run the ``bootstrap``, ``build``, and ``install`` functions
@@ -78,7 +78,7 @@ If we look at ``perl``, we see that it defines a ``configure`` method:
.. code-block:: python .. code-block:: python
def configure(self, spec, prefix): def configure(self, spec, prefix):
configure = Executable("./Configure") configure = Executable('./Configure')
configure(*self.configure_args()) configure(*self.configure_args())
There is also a corresponding ``configure_args`` function that handles There is also a corresponding ``configure_args`` function that handles
@@ -92,7 +92,7 @@ phases are pretty simple:
make() make()
def install(self, spec, prefix): def install(self, spec, prefix):
make("install") make('install')
The ``cmake`` package looks very similar, but with a ``bootstrap`` The ``cmake`` package looks very similar, but with a ``bootstrap``
function instead of ``configure``: function instead of ``configure``:
@@ -100,14 +100,14 @@ function instead of ``configure``:
.. code-block:: python .. code-block:: python
def bootstrap(self, spec, prefix): def bootstrap(self, spec, prefix):
bootstrap = Executable("./bootstrap") bootstrap = Executable('./bootstrap')
bootstrap(*self.bootstrap_args()) bootstrap(*self.bootstrap_args())
def build(self, spec, prefix): def build(self, spec, prefix):
make() make()
def install(self, spec, prefix): def install(self, spec, prefix):
make("install") make('install')
Again, there is a ``bootstrap_args`` function that determines the Again, there is a ``bootstrap_args`` function that determines the
correct bootstrap flags to use. correct bootstrap flags to use.
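The phase mechanism itself can be modeled in a few lines of plain Python. This is a toy model of the dispatch idea, not Spack's actual ``PackageBase``:

```python
class ToyPackage:
    """Toy model of phase dispatch: call each named phase method in order."""

    phases = []

    def do_install(self):
        for phase in self.phases:
            getattr(self, phase)()


class ToyCMake(ToyPackage):
    phases = ["bootstrap", "build", "install"]

    def __init__(self):
        self.log = []

    def bootstrap(self):
        self.log.append("bootstrap")

    def build(self):
        self.log.append("build")

    def install(self):
        self.log.append("install")


pkg = ToyCMake()
pkg.do_install()
print(pkg.log)  # ['bootstrap', 'build', 'install']
```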
@@ -128,16 +128,16 @@ before or after a particular phase. For example, in ``perl``, we see:
.. code-block:: python .. code-block:: python
@run_after("install") @run_after('install')
def install_cpanm(self): def install_cpanm(self):
spec = self.spec spec = self.spec
if spec.satisfies("+cpanm"): if '+cpanm' in spec:
with working_dir(join_path("cpanm", "cpanm")): with working_dir(join_path('cpanm', 'cpanm')):
perl = spec["perl"].command perl = spec['perl'].command
perl("Makefile.PL") perl('Makefile.PL')
make() make()
make("install") make('install')
This extra step automatically installs ``cpanm`` in addition to the This extra step automatically installs ``cpanm`` in addition to the
base Perl installation. base Perl installation.
@@ -174,10 +174,10 @@ In the ``perl`` package, we can see:
.. code-block:: python .. code-block:: python
@run_after("build") @run_after('build')
@on_package_attributes(run_tests=True) @on_package_attributes(run_tests=True)
def test(self): def test(self):
make("test") make('test')
As you can guess, this runs ``make test`` *after* building the package, As you can guess, this runs ``make test`` *after* building the package,
if and only if testing is requested. Again, this is not specific to if and only if testing is requested. Again, this is not specific to
@@ -189,7 +189,7 @@ custom build systems, it can be added to existing build systems as well.
.. code-block:: python .. code-block:: python
@run_after("install") @run_after('install')
@on_package_attributes(run_tests=True) @on_package_attributes(run_tests=True)
works as expected. However, if you reverse the ordering: works as expected. However, if you reverse the ordering:
@@ -197,7 +197,7 @@ custom build systems, it can be added to existing build systems as well.
.. code-block:: python .. code-block:: python
@on_package_attributes(run_tests=True) @on_package_attributes(run_tests=True)
@run_after("install") @run_after('install')
the tests will always be run regardless of whether or not the tests will always be run regardless of whether or not
``--test=root`` is requested. See https://github.com/spack/spack/issues/3833 ``--test=root`` is requested. See https://github.com/spack/spack/issues/3833

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details. Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -6,9 +6,9 @@
.. _inteloneapipackage: .. _inteloneapipackage:
=========== ====================
IntelOneapi IntelOneapiPackage
=========== ====================
.. contents:: .. contents::
@@ -25,18 +25,18 @@ use Spack to build packages with the tools.
The Spack Python class ``IntelOneapiPackage`` is a base class that is The Spack Python class ``IntelOneapiPackage`` is a base class that is
used by ``IntelOneapiCompilers``, ``IntelOneapiMkl``, used by ``IntelOneapiCompilers``, ``IntelOneapiMkl``,
``IntelOneapiTbb`` and other classes to implement the oneAPI ``IntelOneapiTbb`` and other classes to implement the oneAPI
packages. Search for ``oneAPI`` at `packages.spack.io <https://packages.spack.io>`_ for the full packages. See the :ref:`package-list` for the full list of available
list of available oneAPI packages, or use:: oneAPI packages or use::
spack list -d oneAPI spack list -d oneAPI
For more information on a specific package, do:: For more information on a specific package, do::
spack info --all <package-name> spack info <package-name>
Intel no longer releases new versions of Parallel Studio, which can be Intel no longer releases new versions of Parallel Studio, which can be
used in Spack via the :ref:`intelpackage`. All of its components can used in Spack via the :ref:`intelpackage`. All of its components can
now be found in oneAPI. now be found in oneAPI.
Examples Examples
======== ========
@@ -53,24 +53,18 @@ Install the oneAPI compilers::
Add the compilers to your ``compilers.yaml`` so spack can use them:: Add the compilers to your ``compilers.yaml`` so spack can use them::
spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/bin spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/linux/bin/intel64
spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/linux/bin
Verify that the compilers are available:: Verify that the compilers are available::
spack compiler list spack compiler list
Note that 2024 and later releases do not include ``icc``. Before 2024,
the package layout was different::
spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/linux/bin/intel64
spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/linux/bin
The ``intel-oneapi-compilers`` package includes two families of The ``intel-oneapi-compilers`` package includes two families of
compilers: compilers:
* ``intel``: ``icc``, ``icpc``, ``ifort``. Intel's *classic* * ``intel``: ``icc``, ``icpc``, ``ifort``. Intel's *classic*
compilers. 2024 and later releases contain ``ifort``, but not compilers.
``icc`` and ``icpc``.
* ``oneapi``: ``icx``, ``icpx``, ``ifx``. Intel's new generation of * ``oneapi``: ``icx``, ``icpx``, ``ifx``. Intel's new generation of
compilers based on LLVM. compilers based on LLVM.
@@ -82,55 +76,6 @@ To build with ``icx``, do ::
spack install patchelf%oneapi spack install patchelf%oneapi
Using oneAPI Spack environment
-------------------------------
In this example, we build lammps with ``icx`` using the Spack environment for oneAPI packages created by Intel. The
compilers are installed with Spack as in the example above.
Install the oneAPI compilers::
spack install intel-oneapi-compilers
Add the compilers to your ``compilers.yaml`` so Spack can use them::
spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/bin
Verify that the compilers are available::
spack compiler list
Clone `spack-configs <https://github.com/spack/spack-configs>`_ repo and activate Intel oneAPI CPU environment::
git clone https://github.com/spack/spack-configs
spack env activate spack-configs/INTEL/CPU
spack concretize -f
`Intel oneAPI CPU environment <https://github.com/spack/spack-configs/blob/main/INTEL/CPU/spack.yaml>`_ contains applications tested and validated by Intel; this list is constantly being extended. It currently supports:
- `Devito <https://www.devitoproject.org/>`_
- `GROMACS <https://www.gromacs.org/>`_
- `HPCG <https://www.hpcg-benchmark.org/>`_
- `HPL <https://netlib.org/benchmark/hpl/>`_
- `LAMMPS <https://www.lammps.org/#gsc.tab=0>`_
- `OpenFOAM <https://www.openfoam.com/>`_
- `Quantum Espresso <https://www.quantum-espresso.org/>`_
- `STREAM <https://www.cs.virginia.edu/stream/>`_
- `WRF <https://github.com/wrf-model/WRF>`_
To build lammps with the oneAPI compiler from this environment, just run::
spack install lammps
Compiled binaries can be found using::
spack cd -i lammps
You can do the same for all other applications from this environment.
Using oneAPI MPI to Satisfy a Virtual Dependence
------------------------------------------------
Compilers
---------

To use the compilers, add some information about the installation to
``compilers.yaml``. For most users, it is sufficient to do::
spack compiler add /opt/intel/oneapi/compiler/latest/bin
Adapt the paths above if you did not install the tools in the default
location. After adding the compilers, using them is the same
Another option is to manually add the configuration to
``compilers.yaml`` as described in :ref:`Compiler configuration
<compiler-config>`.
Before 2024, the directory structure was different::
spack compiler add /opt/intel/oneapi/compiler/latest/linux/bin/intel64
spack compiler add /opt/intel/oneapi/compiler/latest/linux/bin
Libraries
---------
Using oneAPI Tools Installed by Spack
=====================================
Spack can be a convenient way to install and configure compilers and
libraries, even if you do not intend to build a Spack package. If you
want to build a Makefile project using Spack-installed oneAPI compilers,
then use spack to configure your environment::
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)

.. _intelpackage:

-----
Intel
-----

.. contents::
and optimizers do require a paid license. In Spack, they are packaged as:

TODO: Confirm and possibly change(!) the scope of MPI components (runtime
vs. devel) in current (and previous?) *cluster/professional/composer*
editions, i.e., presence in downloads, possibly subject to license
coverage(!); see `discussion in PR #4300
<https://github.com/spack/spack/pull/4300#issuecomment-305582898>`_. [NB:
An "mpi" subdirectory is not indicative of the full MPI SDK being present
(i.e., ``mpicc``, ..., and header files). The directory may just as well
See section
:ref:`Configuration Scopes <configuration-scopes>`
for an explanation about the different files
and section
:ref:`Build customization <packages-config>`
for specifics and examples for ``packages.yaml`` files.

.. If your system administrator did not provide modules for pre-installed Intel
   tools, you could do well to ask for them, because installing multiple copies
   of the Intel tools, as is wont to happen once Spack is in the picture, is
   bound to stretch disk space and patience thin. If you *are* the system
   administrator and are still new to modules, then perhaps it's best to follow
   the `next section <Installing Intel tools within Spack_>`_ and install the tools
follow `the next section <intel-install-libs_>`_ instead.

* If you specified a custom variant (for example ``+vtune``) you may want to add this as your
  preferred variant in the packages configuration for the ``intel-parallel-studio`` package
  as described in :ref:`package-preferences`. Otherwise you will have to specify
  the variant every time ``intel-parallel-studio`` is being used as ``mkl``, ``fftw`` or ``mpi``
  implementation to avoid pulling in a different variant.

* To set the Intel compilers for default use in Spack, instead of the usual ``%gcc``,
a *virtual* ``mkl`` package is declared in Spack.

.. code-block:: python

   # Examples for absolute and conditional dependencies:
   depends_on("mkl")
   depends_on("mkl", when="+mkl")
   depends_on("mkl", when="fftw=mkl")

The ``MKLROOT`` environment variable (part of the documented API) will be set
during all stages of client package installation, and is available to both
a *virtual* ``mkl`` package is declared in Spack.

.. code-block:: python

   def configure_args(self):
       args = []
       ...
       args.append("--with-blas=%s" % self.spec["blas"].libs.ld_flags)
       args.append("--with-lapack=%s" % self.spec["lapack"].libs.ld_flags)
       ...

.. tip::
a *virtual* ``mkl`` package is declared in Spack.

.. code-block:: python

   self.spec["blas"].headers.include_flags

and to generate linker options (``-L<dir> -llibname ...``), use the same as above,

.. code-block:: python

   self.spec["blas"].libs.ld_flags

See
:ref:`MakefilePackage <makefilepackage>`
.. _luapackage:

---
Lua
---

The ``Lua`` build system is a helper for the common case of Lua packages that provide
a rockspec file. This is not meant to take a rock archive, but to build
a source archive or repository that provides a rockspec, which should cover
most Lua packages. In the case a Lua package builds by Make rather than
luarocks, prefer ``MakefilePackage``.

Phases
^^^^^^

The ``LuaBuilder`` and ``LuaPackage`` base classes come with the following phases:
#. ``unpack`` - if using a rock, unpacks the rock and moves into the source directory #. ``unpack`` - if using a rock, unpacks the rock and moves into the source directory
#. ``preprocess`` - adjust sources or rockspec to fix build #. ``preprocess`` - adjust sources or rockspec to fix build
override the ``luarocks_args`` method like so:

.. code-block:: python

   def luarocks_args(self):
       return ["flag1", "flag2"]

One common use of this is to override warnings or flags for newer compilers, as in:
.. _makefilepackage:

--------
Makefile
--------

The most primitive build system a package can use is a plain Makefile.
Makefiles are simple to write for small projects, but they usually
variables.

Phases
^^^^^^

The ``MakefileBuilder`` and ``MakefilePackage`` base classes come with 3 phases:
#. ``edit`` - edit the Makefile #. ``edit`` - edit the Makefile
#. ``build`` - build the project #. ``build`` - build the project
using GNU Make, you should add a dependency on ``gmake``:

.. code-block:: python

   depends_on("gmake", type="build")

^^^^^^^^^^^^^^^^^^^^^^^^^^
command-line. However, Makefiles that use ``?=`` for assignment honor
environment variables. Since Spack already sets ``CC``, ``CXX``, ``F77``,
and ``FC``, you won't need to worry about setting these variables. If
there are any other variables you need to set, you can do this in the
``setup_build_environment`` method:

.. code-block:: python

   def setup_build_environment(self, env):
       env.set("PREFIX", self.prefix)
       env.set("BLASLIB", self.spec["blas"].libs.ld_flags)

`cbench <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cbench/package.py>`_
you can do this like so:

.. code-block:: python

   build_targets = ["CC=cc"]

If you do need access to the spec, you can create a property like so:
.. code-block:: python

   @property
   def build_targets(self):
       spec = self.spec
       return [
           "CC=cc",
           f"BLASLIB={spec['blas'].libs.ld_flags}",
       ]
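To picture how such a list takes effect, here is a rough sketch of turning ``build_targets`` entries into a ``make`` invocation. This is illustrative only: ``make_command`` is a made-up helper, and Spack assembles the real command line internally.

```python
import shlex

# Illustrative only: build_targets entries become extra arguments to make.
def make_command(targets, jobs=4):
    return ["make", f"-j{jobs}", *targets]

cmd = make_command(["CC=cc", "BLASLIB=-L/opt/blas -lblas"])
print(shlex.join(cmd))  # make -j4 CC=cc 'BLASLIB=-L/opt/blas -lblas'
```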
Edit Makefile
^^^^^^^^^^^^^
Some Makefiles are just plain stubborn and will ignore command-line
variables. The only way to ensure that these packages build correctly
is to directly edit the Makefile. Spack provides a ``FileFilter`` class
and a ``filter`` method to help with this. For example:
.. code-block:: python

   def edit(self, spec, prefix):
       makefile = FileFilter("Makefile")

       makefile.filter(r"^\s*CC\s*=.*", f"CC = {spack_cc}")
       makefile.filter(r"^\s*CXX\s*=.*", f"CXX = {spack_cxx}")
       makefile.filter(r"^\s*F77\s*=.*", f"F77 = {spack_f77}")
       makefile.filter(r"^\s*FC\s*=.*", f"FC = {spack_fc}")
`stream <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/stream/package.py>`_
well for storing variables:

.. code-block:: python

   def edit(self, spec, prefix):
       config = {
           "CC": "cc",
           "MAKE": "make",
       }

       if spec.satisfies("+blas"):
           config["BLAS_LIBS"] = spec["blas"].libs.joined()

       with open("make.inc", "w") as inc:
           for key in config:
               inc.write(f"{key} = {config[key]}\n")
`elk <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/elk/package.py>`_
them in a list:

.. code-block:: python

   def edit(self, spec, prefix):
       config = [
           f"INSTALL_DIR = {prefix}",
           "INCLUDE_DIR = $(INSTALL_DIR)/include",
           "LIBRARY_DIR = $(INSTALL_DIR)/lib",
       ]

       with open("make.inc", "w") as inc:
           for var in config:
               inc.write(f"{var}\n")
`hpl <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hpl/package.py>`_
can tell Spack where to locate it like so:

.. code-block:: python

   build_directory = "src"

^^^^^^^^^^^^^^^^^^^
install the package:

.. code-block:: python

   def install(self, spec, prefix):
       mkdir(prefix.bin)
       install("foo", prefix.bin)
       install_tree("lib", prefix.lib)

^^^^^^^^^^^^^^^^^^^^^^
.. _mavenpackage:

-----
Maven
-----

Apache Maven is a general-purpose build system that does not rely
on Makefiles to build software. It is designed for building and
managing any Java-based project.

Phases
^^^^^^

The ``MavenBuilder`` and ``MavenPackage`` base classes come with the following phases:
#. ``build`` - compile code and package into a JAR file #. ``build`` - compile code and package into a JAR file
#. ``install`` - copy to installation prefix #. ``install`` - copy to installation prefix
class automatically adds the following dependencies:

.. code-block:: python

   depends_on("java", type=("build", "run"))
   depends_on("maven", type="build")
In the ``pom.xml`` file, you may see sections like:
should add:

.. code-block:: python

   depends_on("java@7:", type="build")
   depends_on("maven@3.5.4:", type="build")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
the build phase. For example:

.. code-block:: python

   def build_args(self):
       return [
           "-Pdist,native",
           "-Dtar",
           "-Dmaven.javadoc.skip=true"
       ]
.. _mesonpackage:

-----
Meson
-----

Much like Autotools and CMake, Meson is a build system. But it is
meant to be both fast and as user friendly as possible. GNOME's goal
is to port modules to use the Meson build system.

Phases
^^^^^^

The ``MesonBuilder`` and ``MesonPackage`` base classes come with the following phases:
#. ``meson`` - generate ninja files #. ``meson`` - generate ninja files
#. ``build`` - build the project #. ``build`` - build the project
the ``MesonPackage`` base class already contains:

.. code-block:: python

   depends_on("meson", type="build")
   depends_on("ninja", type="build")

If you need to specify a particular version requirement, you can
override this in your package:

.. code-block:: python

   depends_on("meson@0.43.0:", type="build")
   depends_on("ninja", type="build")

^^^^^^^^^^^^^^^^^^^
override the ``meson_args`` method like so:

.. code-block:: python

   def meson_args(self):
       return ["--warnlevel=3"]

This method can be used to pass flags as well as variables.
.. _multiplepackage:
----------------------
Multiple Build Systems
----------------------
Quite frequently, a package will change build systems from one version to the
next. For example, a small project that once used a single Makefile to build
may now require Autotools to handle the increased number of files that need to
be compiled. Or, a package that once used Autotools may switch to CMake for
Windows support. In this case, it becomes a bit more challenging to write a
single build recipe for this package in Spack.
There are several ways that this can be handled in Spack:
#. Subclass the new build system, and override phases as needed (preferred)
#. Subclass ``Package`` and implement ``install`` as needed
#. Create separate ``*-cmake``, ``*-autotools``, etc. packages for each build system
#. Rename the old package to ``*-legacy`` and create a new package
#. Move the old package to a ``legacy`` repository and create a new package
#. Drop older versions that only support the older build system
Of these options, 1 is preferred, and will be demonstrated in this
documentation. Options 3-5 have issues with concretization, so shouldn't be
used. Options 4-5 also don't support more than two build systems. Option 6 only
works if the old versions are no longer needed. Option 1 is preferred over 2
because it makes it easier to drop the old build system entirely.
The exact syntax of the package depends on which build systems you need to
support. Below are a couple of common examples.
^^^^^^^^^^^^^^^^^^^^^
Makefile -> Autotools
^^^^^^^^^^^^^^^^^^^^^
Let's say we have the following package:
.. code-block:: python

   class Foo(MakefilePackage):
       version("1.2.0", sha256="...")

       def edit(self, spec, prefix):
           filter_file("CC=", "CC=" + spack_cc, "Makefile")

       def install(self, spec, prefix):
           install_tree(".", prefix)
The package subclasses from :ref:`makefilepackage`, which has three phases:
#. ``edit`` (does nothing by default)
#. ``build`` (runs ``make`` by default)
#. ``install`` (runs ``make install`` by default)
In this case, the ``install`` phase needed to be overridden because the
Makefile did not have an install target. We also modify the Makefile to use
Spack's compiler wrappers. The default ``build`` phase is not changed.
Starting with version 1.3.0, we want to use Autotools to build instead.
:ref:`autotoolspackage` has four phases:
#. ``autoreconf`` (does nothing if a configure script already exists)
#. ``configure`` (runs ``./configure --prefix=...`` by default)
#. ``build`` (runs ``make`` by default)
#. ``install`` (runs ``make install`` by default)
If the only version we need to support is 1.3.0, the package would look as
simple as:
.. code-block:: python

   class Foo(AutotoolsPackage):
       version("1.3.0", sha256="...")

       def configure_args(self):
           return ["--enable-shared"]
In this case, we use the default methods for each phase and only override
``configure_args`` to specify additional flags to pass to ``./configure``.
If we wanted to write a single package that supports both versions 1.2.0 and
1.3.0, it would look something like:
.. code-block:: python

   class Foo(AutotoolsPackage):
       version("1.3.0", sha256="...")
       version("1.2.0", sha256="...", deprecated=True)

       def configure_args(self):
           return ["--enable-shared"]

       # Remove the following once version 1.2.0 is dropped
       @when("@:1.2")
       def patch(self):
           filter_file("CC=", "CC=" + spack_cc, "Makefile")

       @when("@:1.2")
       def autoreconf(self, spec, prefix):
           pass

       @when("@:1.2")
       def configure(self, spec, prefix):
           pass

       @when("@:1.2")
       def install(self, spec, prefix):
           install_tree(".", prefix)
There are a few interesting things to note here:
* We added ``deprecated=True`` to version 1.2.0. This signifies that version
1.2.0 is deprecated and shouldn't be used. However, if a user still relies
on version 1.2.0, it's still there and builds just fine.
* We moved the contents of the ``edit`` phase to the ``patch`` function. Since
``AutotoolsPackage`` doesn't have an ``edit`` phase, the only way for this
step to be executed is to move it to the ``patch`` function, which always
gets run.
* The ``autoreconf`` and ``configure`` phases become no-ops. Since the old
Makefile-based build system doesn't use these, we ignore these phases when
building ``foo@1.2.0``.
* The ``@when`` decorator is used to override these phases only for older
versions. The default methods are used for ``foo@1.3:``.
Once a new Spack release comes out, version 1.2.0 and everything below the
comment can be safely deleted. The result is the same as if we had written a
package for version 1.3.0 from scratch.
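The version-conditional dispatch that ``@when`` provides can be pictured in plain Python. The sketch below is a simplified illustration of the idea, not Spack's actual implementation, and every name in it is made up:

```python
# Simplified sketch of version-conditional method dispatch, in the
# spirit of Spack's @when decorator.  NOT Spack's real implementation.

class Version(tuple):
    @classmethod
    def parse(cls, s):
        return cls(int(p) for p in s.split("."))

def make_when():
    table = []  # (predicate, function) pairs

    def when(constraint):
        # Only supports constraints like "@:1.2" (version <= 1.2).
        bound = Version.parse(constraint.removeprefix("@:"))

        def register(fn):
            table.append((lambda v: v[: len(bound)] <= bound, fn))
            return fn

        return register

    def dispatch(version, default):
        # Pick the first registered override whose predicate matches,
        # falling back to the default method otherwise.
        for predicate, fn in table:
            if predicate(Version.parse(version)):
                return fn
        return default

    return when, dispatch

when, dispatch = make_when()

def install_default():
    return "make install"

@when("@:1.2")
def install_legacy():
    return "copy tree"

# Version 1.2.0 gets the override; 1.3.0 falls back to the default.
print(dispatch("1.2.0", install_default)())  # copy tree
print(dispatch("1.3.0", install_default)())  # make install
```

The real ``@when`` is far more general (it matches arbitrary spec constraints), but the selection principle is the same: older versions hit the override, newer versions use the default method.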
^^^^^^^^^^^^^^^^^^
Autotools -> CMake
^^^^^^^^^^^^^^^^^^
Let's say we have the following package:
.. code-block:: python

   class Bar(AutotoolsPackage):
       version("1.2.0", sha256="...")

       def configure_args(self):
           return ["--enable-shared"]
The package subclasses from :ref:`autotoolspackage`, which has four phases:
#. ``autoreconf`` (does nothing if a configure script already exists)
#. ``configure`` (runs ``./configure --prefix=...`` by default)
#. ``build`` (runs ``make`` by default)
#. ``install`` (runs ``make install`` by default)
In this case, we use the default methods for each phase and only override
``configure_args`` to specify additional flags to pass to ``./configure``.
Starting with version 1.3.0, we want to use CMake to build instead.
:ref:`cmakepackage` has three phases:
#. ``cmake`` (runs ``cmake ...`` by default)
#. ``build`` (runs ``make`` by default)
#. ``install`` (runs ``make install`` by default)
If the only version we need to support is 1.3.0, the package would look as
simple as:
.. code-block:: python

   class Bar(CMakePackage):
       version("1.3.0", sha256="...")

       def cmake_args(self):
           return [self.define("BUILD_SHARED_LIBS", True)]
In this case, we use the default methods for each phase and only override
``cmake_args`` to specify additional flags to pass to ``cmake``.
If we wanted to write a single package that supports both versions 1.2.0 and
1.3.0, it would look something like:
.. code-block:: python

   class Bar(CMakePackage):
       version("1.3.0", sha256="...")
       version("1.2.0", sha256="...", deprecated=True)

       def cmake_args(self):
           return [self.define("BUILD_SHARED_LIBS", True)]

       # Remove the following once version 1.2.0 is dropped
       def configure_args(self):
           return ["--enable-shared"]

       @when("@:1.2")
       def cmake(self, spec, prefix):
           configure("--prefix=" + prefix, *self.configure_args())
There are a few interesting things to note here:
* We added ``deprecated=True`` to version 1.2.0. This signifies that version
1.2.0 is deprecated and shouldn't be used. However, if a user still relies
on version 1.2.0, it's still there and builds just fine.
* Since CMake and Autotools are so similar, we only need to override the
``cmake`` phase, we can use the default ``build`` and ``install`` phases.
* We override ``cmake`` to run ``./configure`` for older versions.
``configure_args`` remains the same.
* The ``@when`` decorator is used to override these phases only for older
versions. The default methods are used for ``bar@1.3:``.
Once a new Spack release comes out, version 1.2.0 and everything below the
comment can be safely deleted. The result is the same as if we had written a
package for version 1.3.0 from scratch.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Multiple build systems for the same version
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
During the transition from one build system to another, developers often
support multiple build systems at the same time. Spack can only use a single
build system for a single version. To decide which build system to use for a
particular version, take the following things into account:
1. If the developers explicitly state that one build system is preferred over
another, use that one.
2. If one build system is considered "experimental" while another is considered
"stable", use the stable build system.
3. Otherwise, use the newer build system.
The developer preference for which build system to use can change over time as
a newer build system becomes stable/recommended.
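These rules are simple enough to state as code. The following is only an illustration of the decision process; ``choose_build_system`` is a hypothetical helper, not part of Spack:

```python
# Hypothetical helper encoding the three rules above.
def choose_build_system(systems, preferred=None, experimental=()):
    """Pick one build system; `systems` is ordered oldest -> newest."""
    # Rule 1: an explicit developer preference wins.
    if preferred in systems:
        return preferred
    # Rule 2: prefer a stable system over experimental ones.
    stable = [s for s in systems if s not in experimental]
    # Rule 3: otherwise, take the newest candidate.
    candidates = stable or list(systems)
    return candidates[-1]

print(choose_build_system(["autotools", "cmake"]))                          # cmake
print(choose_build_system(["autotools", "cmake"], experimental=["cmake"]))  # autotools
print(choose_build_system(["autotools", "cmake"], preferred="autotools"))   # autotools
```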
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Dropping support for old build systems
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When older versions of a package don't support a newer build system, it can be
tempting to simply delete them from a package. This significantly reduces
package complexity and makes the build recipe much easier to maintain. However,
other packages or Spack users may rely on these older versions. The recommended
approach is to first support both build systems (as demonstrated above),
:ref:`deprecate <deprecate>` versions that rely on the old build system, and
remove those versions and any phases that needed to be overridden in the next
Spack release.
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Three or more build systems
^^^^^^^^^^^^^^^^^^^^^^^^^^^
In rare cases, a package may change build systems multiple times. For example,
a package may start with Makefiles, then switch to Autotools, then switch to
CMake. The same logic used above can be extended to any number of build systems.
For example:
.. code-block:: python

   class Baz(CMakePackage):
       version("1.4.0", sha256="...")  # CMake
       version("1.3.0", sha256="...")  # Autotools
       version("1.2.0", sha256="...")  # Makefile

       def cmake_args(self):
           return [self.define("BUILD_SHARED_LIBS", True)]

       # Remove the following once version 1.3.0 is dropped
       def configure_args(self):
           return ["--enable-shared"]

       @when("@1.3")
       def cmake(self, spec, prefix):
           configure("--prefix=" + prefix, *self.configure_args())

       # Remove the following once version 1.2.0 is dropped
       @when("@:1.2")
       def patch(self):
           filter_file("CC=", "CC=" + spack_cc, "Makefile")

       @when("@:1.2")
       def cmake(self, spec, prefix):
           pass

       @when("@:1.2")
       def install(self, spec, prefix):
           install_tree(".", prefix)
^^^^^^^^^^^^^^^^^^^
Additional examples
^^^^^^^^^^^^^^^^^^^
When writing new packages, it often helps to see examples of existing packages.
Here is an incomplete list of existing Spack packages that have changed build
systems before:
================ ===================== ================
Package          Previous Build System New Build System
================ ===================== ================
amber            custom                CMake
arpack-ng        Autotools             CMake
atk              Autotools             Meson
blast            None                  Autotools
dyninst          Autotools             CMake
evtgen           Autotools             CMake
fish             Autotools             CMake
gdk-pixbuf       Autotools             Meson
glib             Autotools             Meson
glog             Autotools             CMake
gmt              Autotools             CMake
gtkplus          Autotools             Meson
hpl              Makefile              Autotools
interproscan     Perl                  Maven
jasper           Autotools             CMake
kahip            SCons                 CMake
kokkos           Makefile              CMake
kokkos-kernels   Makefile              CMake
leveldb          Makefile              CMake
libdrm           Autotools             Meson
libjpeg-turbo    Autotools             CMake
mesa             Autotools             Meson
metis            None                  CMake
mpifileutils     Autotools             CMake
muparser         Autotools             CMake
mxnet            Makefile              CMake
nest             Autotools             CMake
neuron           Autotools             CMake
nsimd            CMake                 nsconfig
opennurbs        Makefile              CMake
optional-lite    None                  CMake
plasma           Makefile              CMake
preseq           Makefile              Autotools
protobuf         Autotools             CMake
py-pygobject     Autotools             Python
singularity      Autotools             Makefile
span-lite        None                  CMake
ssht             Makefile              CMake
string-view-lite None                  CMake
superlu          Makefile              CMake
superlu-dist     Makefile              CMake
uncrustify       Autotools             CMake
================ ===================== ================
Packages that support multiple build systems can be a bit confusing to write.
Don't hesitate to open an issue or draft pull request and ask for advice from
other Spack developers!
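The version ranges in examples like the one above hinge on knowing which build system each release actually ships. A quick way to check an unpacked release tarball is to probe it for build files; this is plain Python unrelated to Spack's internals, and the precedence order is just an assumption mirroring the example above:

```python
import os
import tempfile

def detect_build_system(source_dir):
    """Guess a source tree's build system, preferring CMake, then
    Autotools, then plain Makefiles."""
    for marker, name in (
        ("CMakeLists.txt", "cmake"),
        ("configure", "autotools"),
        ("Makefile", "makefile"),
    ):
        if os.path.exists(os.path.join(source_dir, marker)):
            return name
    raise ValueError(f"no known build system in {source_dir}")

# Demo on a throwaway directory that mimics an unpacked release.
with tempfile.TemporaryDirectory() as src:
    open(os.path.join(src, "Makefile"), "w").close()
    print(detect_build_system(src))  # -> makefile
    open(os.path.join(src, "CMakeLists.txt"), "w").close()
    print(detect_build_system(src))  # -> cmake
```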
View File
@@ -1,13 +1,13 @@
-.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
 Spack Project Developers. See the top-level COPYRIGHT file for details.
 SPDX-License-Identifier: (Apache-2.0 OR MIT)
 .. _octavepackage:
------
-Octave
------
+-------------
+OctavePackage
+-------------
 Octave has its own build system for installing packages.
@@ -15,7 +15,7 @@ Octave has its own build system for installing packages.
 Phases
 ^^^^^^
-The ``OctaveBuilder`` and ``OctavePackage`` base classes have a single phase:
+The ``OctavePackage`` base class has a single phase:
 #. ``install`` - install the package
View File
@@ -1,13 +1,13 @@
-.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
 Spack Project Developers. See the top-level COPYRIGHT file for details.
 SPDX-License-Identifier: (Apache-2.0 OR MIT)
 .. _perlpackage:
-----
-Perl
-----
+-----------
+PerlPackage
+-----------
 Much like Octave, Perl has its own language-specific
 build system.
@@ -16,7 +16,7 @@ build system.
 Phases
 ^^^^^^
-The ``PerlBuilder`` and ``PerlPackage`` base classes come with 3 phases that can be overridden:
+The ``PerlPackage`` base class comes with 3 phases that can be overridden:
 #. ``configure`` - configure the package
 #. ``build`` - build the package
@@ -118,7 +118,7 @@ so ``PerlPackage`` contains:
 .. code-block:: python
-extends("perl")
+extends('perl')
 If your package requires a specific version of Perl, you should
@@ -132,14 +132,14 @@ properly. If your package uses ``Makefile.PL`` to build, add:
 .. code-block:: python
-depends_on("perl-extutils-makemaker", type="build")
+depends_on('perl-extutils-makemaker', type='build')
 If your package uses ``Build.PL`` to build, add:
 .. code-block:: python
-depends_on("perl-module-build", type="build")
+depends_on('perl-module-build', type='build')
 ^^^^^^^^^^^^^^^^^
@@ -165,80 +165,14 @@ arguments to ``Makefile.PL`` or ``Build.PL`` by overriding
 .. code-block:: python
 def configure_args(self):
-expat = self.spec["expat"].prefix
+expat = self.spec['expat'].prefix
 return [
-"EXPATLIBPATH={0}".format(expat.lib),
-"EXPATINCPATH={0}".format(expat.include),
+'EXPATLIBPATH={0}'.format(expat.lib),
+'EXPATINCPATH={0}'.format(expat.include),
 ]
-^^^^^^^
-Testing
-^^^^^^^
-``PerlPackage`` provides a simple stand-alone test of the successfully
-installed package to confirm that installed perl module(s) can be used.
-These tests can be performed any time after the installation using
-``spack -v test run``. (For more information on the command, see
-:ref:`cmd-spack-test-run`.)
-The base class automatically detects perl modules based on the presence
-of ``*.pm`` files under the package's library directory. For example,
-the files under ``perl-bignum``'s perl library are:
-.. code-block:: console
-$ find . -name "*.pm"
-./bigfloat.pm
-./bigrat.pm
-./Math/BigFloat/Trace.pm
-./Math/BigInt/Trace.pm
-./Math/BigRat/Trace.pm
-./bigint.pm
-./bignum.pm
-which results in the package having the ``use_modules`` property containing:
-.. code-block:: python
-use_modules = [
-"bigfloat",
-"bigrat",
-"Math::BigFloat::Trace",
-"Math::BigInt::Trace",
-"Math::BigRat::Trace",
-"bigint",
-"bignum",
-]
-.. note::
-This list can often be used to catch missing dependencies.
-If the list is somehow wrong, you can provide the names of the modules
-yourself by overriding ``use_modules`` like so:
-.. code-block:: python
-use_modules = ["bigfloat", "bigrat", "bigint", "bignum"]
-If you only want a subset of the automatically detected modules to be
-tested, you could instead define the ``skip_modules`` property on the
-package. So, instead of overriding ``use_modules`` as shown above, you
-could define the following:
-.. code-block:: python
-skip_modules = [
-"Math::BigFloat::Trace",
-"Math::BigInt::Trace",
-"Math::BigRat::Trace",
-]
-for the same use tests.
 ^^^^^^^^^^^^^^^^^^^^^
 Alternatives to Spack
 ^^^^^^^^^^^^^^^^^^^^^
View File
@@ -1,13 +1,13 @@
-.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
 Spack Project Developers. See the top-level COPYRIGHT file for details.
 SPDX-License-Identifier: (Apache-2.0 OR MIT)
 .. _pythonpackage:
------
-Python
------
+-------------
+PythonPackage
+-------------
 Python packages and modules have their own special build system. This
 documentation covers everything you'll need to know in order to write
@@ -152,16 +152,16 @@ set. Once set, ``pypi`` will be used to define the ``homepage``,
 .. code-block:: python
-homepage = "https://pypi.org/project/setuptools/"
-url = "https://pypi.org/packages/source/s/setuptools/setuptools-49.2.0.zip"
-list_url = "https://pypi.org/simple/setuptools/"
+homepage = 'https://pypi.org/project/setuptools/'
+url = 'https://pypi.org/packages/source/s/setuptools/setuptools-49.2.0.zip'
+list_url = 'https://pypi.org/simple/setuptools/'
 is equivalent to:
 .. code-block:: python
-pypi = "setuptools/setuptools-49.2.0.zip"
+pypi = 'setuptools/setuptools-49.2.0.zip'
 If a package has a different homepage listed on PyPI, you can
@@ -208,7 +208,7 @@ dependencies to your package:
 .. code-block:: python
-depends_on("py-setuptools@42:", type="build")
+depends_on('py-setuptools@42:', type='build')
 Note that ``py-wheel`` is already listed as a build dependency in the
@@ -232,7 +232,7 @@ Look for dependencies under the following keys:
 * ``dependencies`` under ``[project]``
 These packages are required for building and installation. You can
-add them with ``type=("build", "run")``.
+add them with ``type=('build', 'run')``.
 * ``[project.optional-dependencies]``
@@ -279,12 +279,12 @@ distutils library, and has almost the exact same API. In addition to
 * ``setup_requires``
 These packages are usually only needed at build-time, so you can
-add them with ``type="build"``.
+add them with ``type='build'``.
 * ``install_requires``
 These packages are required for building and installation. You can
-add them with ``type=("build", "run")``.
+add them with ``type=('build', 'run')``.
 * ``extras_require``
@@ -296,7 +296,7 @@ distutils library, and has almost the exact same API. In addition to
 These are packages that are required to run the unit tests for the
 package. These dependencies can be specified using the
-``type="test"`` dependency type. However, the PyPI tarballs rarely
+``type='test'`` dependency type. However, the PyPI tarballs rarely
 contain unit tests, so there is usually no reason to add these.
 See https://setuptools.pypa.io/en/latest/userguide/dependency_management.html
@@ -321,7 +321,7 @@ older versions of flit may use the following keys:
 * ``requires`` under ``[tool.flit.metadata]``
 These packages are required for building and installation. You can
-add them with ``type=("build", "run")``.
+add them with ``type=('build', 'run')``.
 * ``[tool.flit.metadata.requires-extra]``
@@ -366,7 +366,7 @@ If the ``pyproject.toml`` lists ``mesonpy`` as the ``build-backend``,
 it uses the meson build system. Meson uses the default
 ``pyproject.toml`` keys to list dependencies.
-See https://meson-python.readthedocs.io/en/latest/tutorials/introduction.html
+See https://meson-python.readthedocs.io/en/latest/usage/start.html
 for more information.
 """
@@ -434,12 +434,12 @@ the BLAS/LAPACK library you want pkg-config to search for:
 .. code-block:: python
-depends_on("py-pip@22.1:", type="build")
+depends_on('py-pip@22.1:', type='build')
 def config_settings(self, spec, prefix):
 return {
-"blas": spec["blas"].libs.names[0],
-"lapack": spec["lapack"].libs.names[0],
+'blas': spec['blas'].libs.names[0],
+'lapack': spec['lapack'].libs.names[0],
 }
@@ -463,10 +463,10 @@ has an optional dependency on ``libyaml`` that can be enabled like so:
 def global_options(self, spec, prefix):
 options = []
-if spec.satisfies("+libyaml"):
-options.append("--with-libyaml")
+if '+libyaml' in spec:
+options.append('--with-libyaml')
 else:
-options.append("--without-libyaml")
+options.append('--without-libyaml')
 return options
@@ -492,10 +492,10 @@ allows you to specify the directories to search for ``libyaml``:
 def install_options(self, spec, prefix):
 options = []
-if spec.satisfies("+libyaml"):
+if '+libyaml' in spec:
 options.extend([
-spec["libyaml"].libs.search_flags,
-spec["libyaml"].headers.include_flags,
+spec['libyaml'].libs.search_flags,
+spec['libyaml'].headers.include_flags,
 ])
 return options
@@ -556,7 +556,7 @@ detected are wrong, you can provide the names yourself by overriding
 .. code-block:: python
-import_modules = ["six"]
+import_modules = ['six']
 Sometimes the list of module names to import depends on how the
@@ -571,9 +571,9 @@ This can be expressed like so:
 @property
 def import_modules(self):
-modules = ["yaml"]
-if self.spec.satisfies("+libyaml"):
-modules.append("yaml.cyaml")
+modules = ['yaml']
+if '+libyaml' in self.spec:
+modules.append('yaml.cyaml')
 return modules
@@ -582,18 +582,18 @@ libraries. Make sure not to add modules/packages containing the word
 "test", as these likely won't end up in the installation directory,
 or may require test dependencies like pytest to be installed.
-Instead of defining the ``import_modules`` explicitly, only the subset
+Instead of defining the ``import_modules`` explicity, only the subset
 of module names to be skipped can be defined by using ``skip_modules``.
 If a defined module has submodules, they are skipped as well, e.g.,
 in case the ``plotting`` modules should be excluded from the
-automatically detected ``import_modules`` ``["nilearn", "nilearn.surface",
-"nilearn.plotting", "nilearn.plotting.data"]`` set:
+automatically detected ``import_modules`` ``['nilearn', 'nilearn.surface',
+'nilearn.plotting', 'nilearn.plotting.data']`` set:
 .. code-block:: python
-skip_modules = ["nilearn.plotting"]
+skip_modules = ['nilearn.plotting']
-This will set ``import_modules`` to ``["nilearn", "nilearn.surface"]``
+This will set ``import_modules`` to ``['nilearn', 'nilearn.surface']``
 Import tests can be run during the installation using ``spack install
 --test=root`` or at any time after the installation using
@@ -612,11 +612,11 @@ after the ``install`` phase:
 .. code-block:: python
-@run_after("install")
+@run_after('install')
 @on_package_attributes(run_tests=True)
 def install_test(self):
-with working_dir("spack-test", create=True):
-python("-c", "import numpy; numpy.test('full', verbose=2)")
+with working_dir('spack-test', create=True):
+python('-c', 'import numpy; numpy.test("full", verbose=2)')
 when testing is enabled during the installation (i.e., ``spack install
@@ -638,7 +638,7 @@ provides Python bindings in a ``python`` directory, you can use:
 .. code-block:: python
-build_directory = "python"
+build_directory = 'python'
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -718,45 +718,24 @@ command-line tool, or C/C++/Fortran program with optional Python
 modules? The former should be prepended with ``py-``, while the
 latter should not.
-""""""""""""""""""""""""""""""
-``extends`` vs. ``depends_on``
-""""""""""""""""""""""""""""""
+""""""""""""""""""""""
+extends vs. depends_on
+""""""""""""""""""""""
+This is very similar to the naming dilemma above, with a slight twist.
 As mentioned in the :ref:`Packaging Guide <packaging_extensions>`,
-``extends`` and ``depends_on`` are very similar, but ``extends`` ensures
-that the extension and extendee share the same prefix in views.
-This allows the user to import a Python module without
+``extends`` and ``depends_on`` are very similar, but ``extends`` adds
+the ability to *activate* the package. Activation involves symlinking
+everything in the installation prefix of the package to the installation
+prefix of Python. This allows the user to import a Python module without
 having to add that module to ``PYTHONPATH``.
-Additionally, ``extends("python")`` adds a dependency on the package
-``python-venv``. This improves isolation from the system, whether
-it's during the build or at runtime: user and system site packages
-cannot accidentally be used by any package that ``extends("python")``.
-As a rule of thumb: if a package does not install any Python modules
-of its own, and merely puts a Python script in the ``bin`` directory,
-then there is no need for ``extends``. If the package installs modules
-in the ``site-packages`` directory, it requires ``extends``.
+When deciding between ``extends`` and ``depends_on``, the best rule of
+thumb is to check the installation prefix. If Python libraries are
+installed to ``<prefix>/lib/pythonX.Y/site-packages``, then you
+should use ``extends``. If Python libraries are installed elsewhere
+or the only files that get installed reside in ``<prefix>/bin``, then
+don't use ``extends``, as symlinking the package wouldn't be useful.
-"""""""""""""""""""""""""""""""""""""
-Executing ``python`` during the build
-"""""""""""""""""""""""""""""""""""""
-Whenever you need to execute a Python command or pass the path of the
-Python interpreter to the build system, it is best to use the global
-variable ``python`` directly. For example:
-.. code-block:: python
-@run_before("install")
-def recythonize(self):
-python("setup.py", "clean")  # use the `python` global
-As mentioned in the previous section, ``extends("python")`` adds an
-automatic dependency on ``python-venv``, which is a virtual environment
-that guarantees build isolation. The ``python`` global always refers to
-the correct Python interpreter, whether the package uses ``extends("python")``
-or ``depends_on("python")``.
 ^^^^^^^^^^^^^^^^^^^^^
 Alternatives to Spack
View File
@@ -1,13 +1,13 @@
-.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
 Spack Project Developers. See the top-level COPYRIGHT file for details.
 SPDX-License-Identifier: (Apache-2.0 OR MIT)
 .. _qmakepackage:
------
-QMake
------
+------------
+QMakePackage
+------------
 Much like Autotools and CMake, QMake is a build-script generator
 designed by the developers of Qt. In its simplest form, Spack's
@@ -29,7 +29,7 @@ variables or edit ``*.pro`` files to get things working properly.
 Phases
 ^^^^^^
-The ``QMakeBuilder`` and ``QMakePackage`` base classes come with the following phases:
+The ``QMakePackage`` base class comes with the following phases:
 #. ``qmake`` - generate Makefiles
 #. ``build`` - build the project
@@ -83,7 +83,7 @@ base class already contains:
 .. code-block:: python
-depends_on("qt", type="build")
+depends_on('qt', type='build')
 If you want to specify a particular version requirement, or need to
@@ -91,7 +91,7 @@ link to the ``qt`` libraries, you can override this in your package:
 .. code-block:: python
-depends_on("qt@5.6.0:")
+depends_on('qt@5.6.0:')
 ^^^^^^^^^^^^^^^^^^^^^^^^^^
 Passing arguments to qmake
@@ -103,7 +103,7 @@ override the ``qmake_args`` method like so:
 .. code-block:: python
 def qmake_args(self):
-return ["-recursive"]
+return ['-recursive']
 This method can be used to pass flags as well as variables.
@@ -118,7 +118,7 @@ sub-directory by adding the following to the package:
 .. code-block:: python
-build_directory = "src"
+build_directory = 'src'
 ^^^^^^^^^^^^^^^^^^^^^^
View File
@@ -1,13 +1,13 @@
-.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
+.. Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
 Spack Project Developers. See the top-level COPYRIGHT file for details.
 SPDX-License-Identifier: (Apache-2.0 OR MIT)
 .. _racketpackage:
------
-Racket
------
+-------------
+RacketPackage
+-------------
 Much like Python, Racket packages and modules have their own special build system.
 To learn more about the specifics of Racket package system, please refer to the
@@ -17,7 +17,7 @@ To learn more about the specifics of Racket package system, please refer to the
 Phases
 ^^^^^^
-The ``RacketBuilder`` and ``RacketPackage`` base classes provides an ``install`` phase that
+The ``RacketPackage`` base class provides an ``install`` phase that
 can be overridden, corresponding to the use of:
 .. code-block:: console
View File
@@ -1,13 +1,13 @@
-.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
 Spack Project Developers. See the top-level COPYRIGHT file for details.
 SPDX-License-Identifier: (Apache-2.0 OR MIT)
 .. _rocmpackage:
-----
-ROCm
-----
+-----------
+ROCmPackage
+-----------
 The ``ROCmPackage`` is not a build system but a helper package. Like ``CudaPackage``,
 it provides standard variants, dependencies, and conflicts to facilitate building
@@ -25,7 +25,7 @@ This package provides the following variants:
 * **rocm**
 This variant is used to enable/disable building with ``rocm``.
 The default is disabled (or ``False``).
 * **amdgpu_target**
@@ -81,27 +81,28 @@ class of your package. For example, you can add it to your
 class MyRocmPackage(CMakePackage, ROCmPackage):
 ...
 # Ensure +rocm and amdgpu_targets are passed to dependencies
-depends_on("mydeppackage", when="+rocm")
+depends_on('mydeppackage', when='+rocm')
 for val in ROCmPackage.amdgpu_targets:
-depends_on(f"mydeppackage amdgpu_target={val}",
-when=f"amdgpu_target={val}")
+depends_on('mydeppackage amdgpu_target={0}'.format(val),
+when='amdgpu_target={0}'.format(val))
 ...
 def cmake_args(self):
 spec = self.spec
 args = []
 ...
-if spec.satisfies("+rocm"):
+if '+rocm' in spec:
 # Set up the hip macros needed by the build
 args.extend([
-"-DENABLE_HIP=ON",
-f"-DHIP_ROOT_DIR={spec['hip'].prefix}"])
+'-DENABLE_HIP=ON',
+'-DHIP_ROOT_DIR={0}'.format(spec['hip'].prefix)])
-rocm_archs = spec.variants["amdgpu_target"].value
-if "none" not in rocm_archs:
-args.append(f"-DHIP_HIPCC_FLAGS=--amdgpu-target={','.join(rocm_archs)}")
+rocm_archs = spec.variants['amdgpu_target'].value
+if 'none' not in rocm_archs:
+args.append('-DHIP_HIPCC_FLAGS=--amdgpu-target={0}'
+.format(",".join(rocm_archs)))
 else:
 # Ensure build with hip is disabled
-args.append("-DENABLE_HIP=OFF")
+args.append('-DENABLE_HIP=OFF')
 ...
 return args
 ...
@@ -113,7 +114,7 @@ build.
 This example also illustrates how to check for the ``rocm`` variant using
 ``self.spec`` and how to retrieve the ``amdgpu_target`` variant's value
-using ``self.spec.variants["amdgpu_target"].value``.
+using ``self.spec.variants['amdgpu_target'].value``.
 All five packages using ``ROCmPackage`` as of January 2021 also use the
 :ref:`CudaPackage <cudapackage>`. So it is worth looking at those packages
View File
@@ -1,13 +1,13 @@
-.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
 Spack Project Developers. See the top-level COPYRIGHT file for details.
 SPDX-License-Identifier: (Apache-2.0 OR MIT)
 .. _rpackage:
---
-R
---
+--------
+RPackage
+--------
 Like Python, R has its own built-in build system.
@@ -19,7 +19,7 @@ new Spack packages for.
 Phases
 ^^^^^^
-The ``RBuilder`` and ``RPackage`` base classes have a single phase:
+The ``RPackage`` base class has a single phase:
 #. ``install`` - install the package
@@ -163,28 +163,28 @@ attributes that can be used to set ``homepage``, ``url``, ``list_url``, and
 .. code-block:: python
-cran = "caret"
+cran = 'caret'
 is equivalent to:
 .. code-block:: python
-homepage = "https://cloud.r-project.org/package=caret"
-url = "https://cloud.r-project.org/src/contrib/caret_6.0-86.tar.gz"
-list_url = "https://cloud.r-project.org/src/contrib/Archive/caret"
+homepage = 'https://cloud.r-project.org/package=caret'
+url = 'https://cloud.r-project.org/src/contrib/caret_6.0-86.tar.gz'
+list_url = 'https://cloud.r-project.org/src/contrib/Archive/caret'
 Likewise, the following ``bioc`` attribute:
 .. code-block:: python
-bioc = "BiocVersion"
+bioc = 'BiocVersion'
 is equivalent to:
 .. code-block:: python
-homepage = "https://bioconductor.org/packages/BiocVersion/"
-git = "https://git.bioconductor.org/packages/BiocVersion"
+homepage = 'https://bioconductor.org/packages/BiocVersion/'
+git = 'https://git.bioconductor.org/packages/BiocVersion'
 ^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -193,14 +193,14 @@ Build system dependencies
 As an extension of the R ecosystem, your package will obviously depend
 on R to build and run. Normally, we would use ``depends_on`` to express
-this, but for R packages, we use ``extends``. This implies a special
-dependency on R, which is used to set environment variables such as
-``R_LIBS`` uniformly. Since every R package needs this, the ``RPackage``
-base class contains:
+this, but for R packages, we use ``extends``. ``extends`` is similar to
+``depends_on``, but adds an additional feature: the ability to "activate"
+the package by symlinking it to the R installation directory. Since
+every R package needs this, the ``RPackage`` base class contains:
 .. code-block:: python
-extends("r")
+extends('r')
 Take a close look at the homepage for ``caret``. If you look at the
@@ -209,7 +209,7 @@ You should add this to your package like so:
 .. code-block:: python
-depends_on("r@3.2.0:", type=("build", "run"))
+depends_on('r@3.2.0:', type=('build', 'run'))
 ^^^^^^^^^^^^^^
@@ -227,7 +227,7 @@ and list all of their dependencies in the following sections:
 * LinkingTo
 As far as Spack is concerned, all 3 of these dependency types
-correspond to ``type=("build", "run")``, so you don't have to worry
+correspond to ``type=('build', 'run')``, so you don't have to worry
 about the details. If you are curious what they mean,
 https://github.com/spack/spack/issues/2951 has a pretty good summary:
@@ -330,7 +330,7 @@ the dependency:
 .. code-block:: python
-depends_on("r-lattice@0.20:", type=("build", "run"))
+depends_on('r-lattice@0.20:', type=('build', 'run'))
 ^^^^^^^^^^^^^^^^^^
@@ -361,20 +361,20 @@ like so:
 .. code-block:: python
 def configure_args(self):
-mpi_name = self.spec["mpi"].name
+mpi_name = self.spec['mpi'].name
 # The type of MPI. Supported values are:
 # OPENMPI, LAM, MPICH, MPICH2, or CRAY
-if mpi_name == "openmpi":
-Rmpi_type = "OPENMPI"
-elif mpi_name == "mpich":
-Rmpi_type = "MPICH2"
+if mpi_name == 'openmpi':
+Rmpi_type = 'OPENMPI'
+elif mpi_name == 'mpich':
+Rmpi_type = 'MPICH2'
 else:
-raise InstallError("Unsupported MPI type")
+raise InstallError('Unsupported MPI type')
 return [
-"--with-Rmpi-type={0}".format(Rmpi_type),
-"--with-mpi={0}".format(spec["mpi"].prefix),
+'--with-Rmpi-type={0}'.format(Rmpi_type),
+'--with-mpi={0}'.format(spec['mpi'].prefix),
 ]
View File
@@ -1,13 +1,13 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details. Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _rubypackage: .. _rubypackage:
---- -----------
Ruby RubyPackage
---- -----------
Like Perl, Python, and R, Ruby has its own build system for
installing Ruby gems.
@@ -16,7 +16,7 @@ installing Ruby gems.
Phases
^^^^^^

-The ``RubyBuilder`` and ``RubyPackage`` base classes provide the following phases that
+The ``RubyPackage`` base class provides the following phases that
can be overridden:

#. ``build`` - build everything needed to install
@@ -84,8 +84,8 @@ The ``*.gemspec`` file may contain something like:

.. code-block:: ruby

-   summary = "An implementation of the AsciiDoc text processor and publishing toolchain"
-   description = "A fast, open source text processor and publishing toolchain for converting AsciiDoc content to HTML 5, DocBook 5, and other formats."
+   summary = 'An implementation of the AsciiDoc text processor and publishing toolchain'
+   description = 'A fast, open source text processor and publishing toolchain for converting AsciiDoc content to HTML 5, DocBook 5, and other formats.'

Either of these can be used for the description of the Spack package.
@@ -98,7 +98,7 @@ The ``*.gemspec`` file may contain something like:

.. code-block:: ruby

-   homepage = "https://asciidoctor.org"
+   homepage = 'https://asciidoctor.org'

This should be used as the official homepage of the Spack package.
@@ -112,21 +112,21 @@ the base class contains:

.. code-block:: python

-   extends("ruby")
+   extends('ruby')

The ``*.gemspec`` file may contain something like:

.. code-block:: ruby

-   required_ruby_version = ">= 2.3.0"
+   required_ruby_version = '>= 2.3.0'

This can be added to the Spack package using:

.. code-block:: python

-   depends_on("ruby@2.3.0:", type=("build", "run"))
+   depends_on('ruby@2.3.0:', type=('build', 'run'))
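Both sides of this hunk express the same translation: the gemspec's ``required_ruby_version`` constraint becomes an open-ended Spack version range. A standalone sketch of that translation (the ``gem_requirement_to_spec`` helper is hypothetical, not part of Spack's API):

```python
def gem_requirement_to_spec(requirement):
    """Translate a gemspec constraint like '>= 2.3.0' into a Spack range."""
    operator, version = requirement.split()
    if operator == ">=":
        return f"ruby@{version}:"  # open-ended range, as in the hunk above
    raise ValueError(f"unhandled operator: {operator}")

print(gem_requirement_to_spec(">= 2.3.0"))  # ruby@2.3.0:
```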
^^^^^^^^^^^^^^^^^

View File

@@ -1,13 +1,13 @@
-.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)

.. _sconspackage:

------
-SCons
------
+------------
+SConsPackage
+------------

SCons is a general-purpose build system that does not rely on
Makefiles to build software. SCons is written in Python, and handles
@@ -42,7 +42,7 @@ As previously mentioned, SCons allows developers to add subcommands like

   $ scons install

-To facilitate this, the ``SConsBuilder`` and ``SconsPackage`` base classes provide the
+To facilitate this, the ``SConsPackage`` base class provides the
following phases:

#. ``build`` - build the package
@@ -57,7 +57,7 @@ overridden like so:

.. code-block:: python

   def test(self):
-       scons("check")
+       scons('check')

^^^^^^^^^^^^^^^
@@ -88,7 +88,7 @@ base class already contains:

.. code-block:: python

-   depends_on("scons", type="build")
+   depends_on('scons', type='build')

If you want to specify a particular version requirement, you can override
@@ -96,7 +96,7 @@ this in your package:

.. code-block:: python

-   depends_on("scons@2.3.0:", type="build")
+   depends_on('scons@2.3.0:', type='build')

^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -238,14 +238,14 @@ the package build phase. This is done by overriding ``build_args`` like so:

   def build_args(self, spec, prefix):
       args = [
-           f"PREFIX={prefix}",
-           f"ZLIB={spec['zlib'].prefix}",
+           'PREFIX={0}'.format(prefix),
+           'ZLIB={0}'.format(spec['zlib'].prefix),
       ]

-       if spec.satisfies("+debug"):
-           args.append("DEBUG=yes")
+       if '+debug' in spec:
+           args.append('DEBUG=yes')
       else:
-           args.append("DEBUG=no")
+           args.append('DEBUG=no')

       return args
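Independent of the string-formatting style changed in this hunk, ``build_args`` assembles the same list of SCons command-line variables. A standalone sketch of that logic, with plain values standing in for the spec and prefix objects:

```python
def build_args(prefix, zlib_prefix, debug_enabled):
    """Assemble SCons command-line variables as in the hunk above."""
    args = [
        f"PREFIX={prefix}",
        f"ZLIB={zlib_prefix}",
    ]
    # the +debug variant toggles a single SCons variable
    args.append("DEBUG=yes" if debug_enabled else "DEBUG=no")
    return args

print(build_args("/opt/pkg", "/opt/zlib", False))
# ['PREFIX=/opt/pkg', 'ZLIB=/opt/zlib', 'DEBUG=no']
```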
@@ -275,8 +275,8 @@ environment variables. For example, cantera has the following option:

   * env_vars: [ string ]
       Environment variables to propagate through to SCons. Either the
       string "all" or a comma separated list of variable names, e.g.
-       "LD_LIBRARY_PATH,HOME".
-   - default: "LD_LIBRARY_PATH,PYTHONPATH"
+       'LD_LIBRARY_PATH,HOME'.
+   - default: 'LD_LIBRARY_PATH,PYTHONPATH'

In the case of cantera, using ``env_vars=all`` allows us to use

View File

@@ -1,13 +1,13 @@
-.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)

.. _sippackage:

----
-SIP
----
+----------
+SIPPackage
+----------

SIP is a tool that makes it very easy to create Python bindings for C and C++
libraries. It was originally developed to create PyQt, the Python bindings for
@@ -22,7 +22,7 @@ provides support functions to the automatically generated code.

Phases
^^^^^^

-The ``SIPBuilder`` and ``SIPPackage`` base classes come with the following phases:
+The ``SIPPackage`` base class comes with the following phases:

#. ``configure`` - configure the package
#. ``build`` - build the package
@@ -32,7 +32,7 @@ By default, these phases run:

.. code-block:: console

-   $ sip-build --verbose --target-dir ...
+   $ python configure.py --bindir ... --destdir ...
   $ make
   $ make install
@@ -41,30 +41,30 @@ By default, these phases run:

Important files
^^^^^^^^^^^^^^^

-Each SIP package comes with a custom configuration file written in Python.
-For newer packages, this is called ``project.py``, while in older packages,
-it may be called ``configure.py``. This script contains instructions to build
-the project.
+Each SIP package comes with a custom ``configure.py`` build script,
+written in Python. This script contains instructions to build the project.

^^^^^^^^^^^^^^^^^^^^^^^^^
Build system dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^

-``SIPPackage`` requires several dependencies. Python and SIP are needed at build-time
-to run the aforementioned configure script. Python is also needed at run-time to
-actually use the installed Python library. And as we are building Python bindings
-for C/C++ libraries, Python is also needed as a link dependency. All of these
-dependencies are automatically added via the base class.
+``SIPPackage`` requires several dependencies. Python is needed to run
+the ``configure.py`` build script, and to run the resulting Python
+libraries. Qt is needed to provide the ``qmake`` command. SIP is also
+needed to build the package. All of these dependencies are automatically
+added via the base class

.. code-block:: python

-   extends("python", type=("build", "link", "run"))
-   depends_on("py-sip", type="build")
+   extends('python')
+   depends_on('qt', type='build')
+   depends_on('py-sip', type='build')

-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Passing arguments to ``sip-build``
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Passing arguments to ``configure.py``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Each phase comes with a ``<phase_args>`` function that can be used to pass
arguments to that particular phase. For example, if you need to pass
@@ -72,11 +72,11 @@ arguments to the configure phase, you can use:

.. code-block:: python

-   def configure_args(self):
-       return ["--no-python-dbus"]
+   def configure_args(self, spec, prefix):
+       return ['--no-python-dbus']

-A list of valid options can be found by running ``sip-build --help``.
+A list of valid options can be found by running ``python configure.py --help``.
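On both sides of this change, ``configure_args`` feeds extra flags into the configure command line. A standalone sketch of how such phase arguments extend the base invocation (the command names follow the two variants above; the helper itself is hypothetical):

```python
def configure_command(base_cmd, phase_args):
    """Append package-supplied phase arguments to the configure invocation."""
    return base_cmd + phase_args

# newer (sip-build) and older (configure.py) styles, same mechanism
new_style = configure_command(["sip-build", "--verbose"], ["--no-python-dbus"])
old_style = configure_command(["python", "configure.py"], ["--no-python-dbus"])
print(new_style)  # ['sip-build', '--verbose', '--no-python-dbus']
```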
^^^^^^^
Testing
@@ -124,7 +124,7 @@ are wrong, you can provide the names yourself by overriding

.. code-block:: python

-   import_modules = ["PyQt5"]
+   import_modules = ['PyQt5']

These tests often catch missing dependencies and non-RPATHed

View File

@@ -1,19 +1,19 @@
-.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)

.. _sourceforgepackage:

------------
-Sourceforge
------------
+------------------
+SourceforgePackage
+------------------

``SourceforgePackage`` is a
`mixin-class <https://en.wikipedia.org/wiki/Mixin>`_. It automatically
sets the URL based on a list of Sourceforge mirrors listed in
`sourceforge_mirror_path`, which defaults to a half dozen known mirrors.
Refer to the package source
(`<https://github.com/spack/spack/blob/develop/lib/spack/spack/build_systems/sourceforge.py>`__) for the current list of mirrors used by Spack.
@@ -29,7 +29,7 @@ This package provides a method for populating mirror URLs.

It is decorated with `property` so its results are treated as
a package attribute.

Refer to
`<https://spack.readthedocs.io/en/latest/packaging_guide.html#mirrors-of-the-main-url>`__
for information on how Spack uses the `urls` attribute during
fetching.

View File

@@ -1,13 +1,13 @@
-.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)

.. _wafpackage:

----
-Waf
----
+----------
+WafPackage
+----------

Like SCons, Waf is a general-purpose build system that does not rely
on Makefiles to build software.
@@ -16,7 +16,7 @@ on Makefiles to build software.

Phases
^^^^^^

-The ``WafBuilder`` and ``WafPackage`` base classes come with the following phases:
+The ``WafPackage`` base class comes with the following phases:

#. ``configure`` - configure the project
#. ``build`` - build the project
@@ -58,13 +58,15 @@ Testing

``WafPackage`` also provides ``test`` and ``installtest`` methods,
which are run after the ``build`` and ``install`` phases, respectively.
By default, these phases do nothing, but you can override them to
-run package-specific unit tests.
+run package-specific unit tests. For example, the
+`py-py2cairo <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-py2cairo/package.py>`_
+package uses:

.. code-block:: python

   def installtest(self):
-       with working_dir("test"):
-           pytest = which("py.test")
+       with working_dir('test'):
+           pytest = which('py.test')
           pytest()
@@ -93,7 +95,7 @@ the following dependency automatically:

.. code-block:: python

-   depends_on("python@2.5:", type="build")
+   depends_on('python@2.5:', type='build')

Waf only supports Python 2.5 and up.
@@ -113,7 +115,7 @@ phase, you can use:

       args = []

       if self.run_tests:
-           args.append("--test")
+           args.append('--test')

       return args
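The final hunk shows ``--test`` being appended only when tests were requested. A standalone sketch of that conditional, with a plain boolean standing in for ``self.run_tests``:

```python
def build_args(run_tests):
    """Pass --test to waf only when the user requested tests."""
    args = []
    if run_tests:
        args.append("--test")
    return args

print(build_args(True))   # ['--test']
print(build_args(False))  # []
```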

View File

@@ -1,4 +1,4 @@
-.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,4 @@
-# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
+# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -32,11 +32,14 @@
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
link_name = os.path.abspath("_spack_root")
if not os.path.exists(link_name):
    os.symlink(os.path.abspath("../../.."), link_name, target_is_directory=True)
sys.path.insert(0, os.path.abspath("_spack_root/lib/spack/external"))
-sys.path.insert(0, os.path.abspath("_spack_root/lib/spack/external/_vendoring"))
+sys.path.insert(0, os.path.abspath("_spack_root/lib/spack/external/pytest-fallback"))
+if sys.version_info[0] < 3:
+    sys.path.insert(0, os.path.abspath("_spack_root/lib/spack/external/yaml/lib"))
+else:
+    sys.path.insert(0, os.path.abspath("_spack_root/lib/spack/external/yaml/lib3"))
sys.path.append(os.path.abspath("_spack_root/lib/spack/"))

# Add the Spack bin directory to the path so that we can use its output in docs.
@@ -48,6 +51,9 @@
os.environ["COLIFY_SIZE"] = "25x120"
os.environ["COLUMNS"] = "120"

+# Generate full package list if needed
+subprocess.call(["spack", "list", "--format=html", "--update=package_list.html"])
+
# Generate a command index if an update is needed
subprocess.call(
    [
@@ -71,22 +77,13 @@
    "--force",  # Overwrite existing files
    "--no-toc",  # Don't create a table of contents file
    "--output-dir=.",  # Directory to place all output
-    "--module-first",  # emit module docs before submodule docs
]
-sphinx_apidoc(
-    apidoc_args
-    + [
-        "_spack_root/lib/spack/spack",
-        "_spack_root/lib/spack/spack/test/*.py",
-        "_spack_root/lib/spack/spack/test/cmd/*.py",
-    ]
-)
+sphinx_apidoc(apidoc_args + ["_spack_root/lib/spack/spack"])
sphinx_apidoc(apidoc_args + ["_spack_root/lib/spack/llnl"])

# Enable todo items
todo_include_todos = True

#
# Disable duplicate cross-reference warnings.
#
@@ -94,7 +91,9 @@ class PatchedPythonDomain(PythonDomain):
    def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode):
        if "refspecific" in node:
            del node["refspecific"]
-        return super().resolve_xref(env, fromdocname, builder, typ, target, node, contnode)
+        return super(PatchedPythonDomain, self).resolve_xref(
+            env, fromdocname, builder, typ, target, node, contnode
+        )

#
@@ -144,6 +143,7 @@ def setup(sphinx):
# Get nice vector graphics
graphviz_output_format = "svg"

# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]
@@ -157,8 +157,8 @@ def setup(sphinx):
master_doc = "index"

# General information about the project.
-project = "Spack"
-copyright = "2013-2023, Lawrence Livermore National Laboratory."
+project = u"Spack"
+copyright = u"2013-2021, Lawrence Livermore National Laboratory."

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
@@ -199,24 +199,13 @@ def setup(sphinx):
    ("py:class", "contextlib.contextmanager"),
    ("py:class", "module"),
    ("py:class", "_io.BufferedReader"),
-    ("py:class", "_io.BytesIO"),
    ("py:class", "unittest.case.TestCase"),
    ("py:class", "_frozen_importlib_external.SourceFileLoader"),
    ("py:class", "clingo.Control"),
    ("py:class", "six.moves.urllib.parse.ParseResult"),
-    ("py:class", "TextIO"),
-    ("py:class", "hashlib._Hash"),
    # Spack classes that are private and we don't want to expose
    ("py:class", "spack.provider_index._IndexBase"),
    ("py:class", "spack.repo._PrependFileLoader"),
-    ("py:class", "spack.build_systems._checks.BaseBuilder"),
-    # Spack classes that intersphinx is unable to resolve
-    ("py:class", "spack.version.StandardVersion"),
-    ("py:class", "spack.spec.DependencySpec"),
-    ("py:class", "spack.spec.InstallStatus"),
-    ("py:class", "spack.spec.SpecfileReaderBase"),
-    ("py:class", "spack.install_test.Pb"),
-    ("py:class", "spack.filesystem_view.SimpleFilesystemView"),
]

# The reST default role (used for this markup: `text`) to use for all documents.
@@ -232,8 +221,30 @@ def setup(sphinx):
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False

-sys.path.append("./_pygments")
-pygments_style = "style.SpackStyle"
+# The name of the Pygments (syntax highlighting) style to use.
+# We use our own extension of the default style with a few modifications
+from pygments.style import Style
+from pygments.styles.default import DefaultStyle
+from pygments.token import Comment, Generic, Text
+
+class SpackStyle(DefaultStyle):
+    styles = DefaultStyle.styles.copy()
+    background_color = "#f4f4f8"
+    styles[Generic.Output] = "#355"
+    styles[Generic.Prompt] = "bold #346ec9"
+
+import pkg_resources
+
+dist = pkg_resources.Distribution(__file__)
+sys.path.append(".")  # make 'conf' module findable
+ep = pkg_resources.EntryPoint.parse("spack = conf:SpackStyle", dist=dist)
+dist._ep_map = {"pygments.styles": {"plugin1": ep}}
+pkg_resources.working_set.add(dist)
+
+pygments_style = "spack"

# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
@@ -318,20 +329,23 @@ def setup(sphinx):
# Output file base name for HTML help builder.
htmlhelp_basename = "Spackdoc"

# -- Options for LaTeX output --------------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
-    # 'papersize': 'letterpaper',
+    #'papersize': 'letterpaper',
    # The font size ('10pt', '11pt' or '12pt').
-    # 'pointsize': '10pt',
+    #'pointsize': '10pt',
    # Additional stuff for the LaTeX preamble.
-    # 'preamble': '',
+    #'preamble': '',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
-latex_documents = [("index", "Spack.tex", "Spack Documentation", "Todd Gamblin", "manual")]
+latex_documents = [
+    ("index", "Spack.tex", u"Spack Documentation", u"Todd Gamblin", "manual"),
+]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
@@ -358,7 +372,7 @@ def setup(sphinx):
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
-man_pages = [("index", "spack", "Spack Documentation", ["Todd Gamblin"], 1)]
+man_pages = [("index", "spack", u"Spack Documentation", [u"Todd Gamblin"], 1)]

# If true, show URL addresses after external links.
# man_show_urls = False
@@ -373,12 +387,12 @@ def setup(sphinx):
    (
        "index",
        "Spack",
-        "Spack Documentation",
-        "Todd Gamblin",
+        u"Spack Documentation",
+        u"Todd Gamblin",
        "Spack",
        "One line description of project.",
        "Miscellaneous",
-    )
+    ),
]

# Documents to append as an appendix to all manuals.
@@ -394,4 +408,6 @@ def setup(sphinx):
# -- Extension configuration -------------------------------------------------

# sphinx.ext.intersphinx
-intersphinx_mapping = {"python": ("https://docs.python.org/3", None)}
+intersphinx_mapping = {
+    "python": ("https://docs.python.org/3", None),
+}

View File

@@ -1,4 +1,4 @@
-.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -145,25 +145,6 @@ hosts when making ``ssl`` connections. Set to ``false`` to disable, and
tools like ``curl`` will use their ``--insecure`` options. Disabling
this can expose you to attacks. Use at your own risk.

---------------------
-``ssl_certs``
---------------------
-
-Path to custom certificates for SSL verification. The value can be a
-filesystem path, or an environment variable that expands to an absolute file path.
-The default value is set to the environment variable ``SSL_CERT_FILE``
-to use the same syntax used by many other applications that automatically
-detect custom certificates.
-When ``url_fetch_method:curl`` the ``config:ssl_certs`` should resolve to
-a single file. Spack will then set the environment variable ``CURL_CA_BUNDLE``
-in the subprocess calling ``curl``.
-If ``url_fetch_method:urllib`` then files and directories are supported, i.e.
-``config:ssl_certs:$SSL_CERT_FILE`` or ``config:ssl_certs:$SSL_CERT_DIR``
-will work.
-In all cases the expanded path must be absolute for Spack to use the certificates.
-Certificates relative to an environment can be created by prepending the path variable
-with the Spack configuration variable ``$env``.

--------------------
``checksum``
--------------------
@@ -241,11 +222,11 @@ and location. (See the *Configuration settings* section of ``man
ccache`` to learn more about the default settings and how to change
them). Please note that we currently disable ccache's ``hash_dir``
feature to avoid an issue with the stage directory (see
-https://github.com/spack/spack/pull/3761#issuecomment-294352232).
+https://github.com/LLNL/spack/pull/3761#issuecomment-294352232).

------------------------
-``shared_linking:type``
------------------------
+------------------
+``shared_linking``
+------------------

Control whether Spack embeds ``RPATH`` or ``RUNPATH`` attributes in ELF binaries
so that they can find their dependencies. Has no effect on macOS.
@@ -264,76 +245,15 @@ the loading object.

DO NOT MIX the two options within the same install tree.
------------------------
-``shared_linking:bind``
------------------------
-
-This is an *experimental option* that controls whether Spack embeds absolute paths
-to needed shared libraries in ELF executables and shared libraries on Linux. Setting
-this option to ``true`` has two advantages:
-
-1. **Improved startup time**: when running an executable, the dynamic loader does not
-   have to perform a search for needed libraries, they are loaded directly.
-2. **Reliability**: libraries loaded at runtime are those that were linked to. This
-   minimizes the risk of accidentally picking up system libraries.
-
-In the current implementation, Spack sets the soname (shared object name) of
-libraries to their install path upon installation. This has two implications:
-
-1. binding does not apply to libraries installed *before* the option was enabled;
-2. toggling the option off does *not* prevent binding of libraries installed when
-   the option was still enabled.
-
-It is also worth noting that:
-
-1. Applications relying on ``dlopen(3)`` will continue to work, even when they open
-   a library by name. This is because ``RPATH``\s are retained in binaries also
-   when ``bind`` is enabled.
-2. ``LD_PRELOAD`` continues to work for the typical use case of overriding
-   symbols, such as preloading a library with a more efficient ``malloc``.
-   However, the preloaded library will be loaded *additionally to*, instead of
-   *in place of* another library with the same name --- this can be problematic
-   in very rare cases where libraries rely on a particular ``init`` or ``fini``
-   order.
-
-.. note::
-
-   In some cases packages provide *stub libraries* that only contain an interface
-   for linking, but lack an implementation for runtime. An example of this is
-   ``libcuda.so``, provided by the CUDA toolkit; it can be used to link against,
-   but the library needed at runtime is the one installed with the CUDA driver.
-   To avoid binding those libraries, they can be marked as non-bindable using
-   a property in the package:
-
-   .. code-block:: python
-
-      class Example(Package):
-          non_bindable_shared_objects = ["libinterface.so"]

----------------------
-``install_status``
+``terminal_title``
----------------------

-When set to ``true``, Spack will show information about its current progress
-as well as the current and total package numbers. Progress is shown both
-in the terminal title and inline. Setting it to ``false`` will not show any
-progress information.
+By setting this option to ``true``, Spack will update the terminal's title to
+provide information about its current progress as well as the current and
+total package numbers.

To work properly, this requires your terminal to reset its title after
Spack has finished its work, otherwise Spack's status information will
remain in the terminal's title indefinitely. Most terminals should already
be set up this way and clear Spack's status information.

------------
-``aliases``
------------
-
-Aliases can be used to define new Spack commands. They can be either shortcuts
-for longer commands or include specific arguments for convenience. For instance,
-if users want to use ``spack install``'s ``-v`` argument all the time, they can
-create a new alias called ``inst`` that will always call ``install -v``:
-
-.. code-block:: yaml
-
-   aliases:
-     inst: install -v

View File

@@ -1,4 +1,4 @@
-.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
+.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -17,12 +17,11 @@ case you want to skip directly to specific docs:

* :ref:`config.yaml <config-yaml>`
* :ref:`mirrors.yaml <mirrors>`
* :ref:`modules.yaml <modules>`
-* :ref:`packages.yaml <packages-config>`
+* :ref:`packages.yaml <build-settings>`
* :ref:`repos.yaml <repositories>`

-You can also add any of these as inline configuration in the YAML
-manifest file (``spack.yaml``) describing an :ref:`environment
-<environment-configuration>`.
+You can also add any of these as inline configuration in ``spack.yaml``
+in an :ref:`environment <environment-configuration>`.

-----------
YAML Format
@@ -73,12 +72,9 @@ are six configuration scopes. From lowest to highest:
   Spack instance per project) or for site-wide settings on a multi-user
   machine (e.g., for a common Spack instance).

-#. **plugin**: Read from a Python project's entry points. Settings here affect
-   all instances of Spack running with the same Python installation. This scope takes higher precedence than site, system, and default scopes.
-
#. **user**: Stored in the home directory: ``~/.spack/``. These settings
   affect all instances of Spack and take higher precedence than site,
-   system, plugin, or defaults scopes.
+   system, or defaults scopes.

#. **custom**: Stored in a custom directory specified by ``--config-scope``.
   If multiple scopes are listed on the command line, they are ordered
@@ -199,45 +195,6 @@ with MPICH. You can create different configuration scopes for use with

      mpi: [mpich]
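The scopes listed above merge from lowest to highest precedence: a higher scope overrides scalar settings and, by default, its list entries are prepended to lower-scope lists (which is how a user's ``mpi: [mpich]`` wins over a default ``mpi: [openmpi]``). A simplified standalone model of that merge, not Spack's actual implementation:

```python
def merge_scopes(lower, higher):
    """Merge one config scope onto another: the higher scope wins for
    scalars, and its list entries are prepended to lower-scope lists."""
    merged = dict(lower)
    for key, value in higher.items():
        if isinstance(value, list) and isinstance(merged.get(key), list):
            merged[key] = value + merged[key]  # higher scope prepends
        else:
            merged[key] = value  # higher scope overrides scalars
    return merged

site = {"mpi": ["openmpi"], "build_jobs": 8}  # illustrative values only
user = {"mpi": ["mpich"], "build_jobs": 4}
print(merge_scopes(site, user))
# {'mpi': ['mpich', 'openmpi'], 'build_jobs': 4}
```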
.. _plugin-scopes:
^^^^^^^^^^^^^
Plugin scopes
^^^^^^^^^^^^^
.. note::
Python version >= 3.8 is required to enable plugin configuration.
Spack can be made aware of configuration scopes that are installed as part of a python package. To do so, register a function that returns the scope's path to the ``"spack.config"`` entry point. Consider the Python package ``my_package`` that includes Spack configurations:
.. code-block:: console
my-package/
├── src
│   ├── my_package
│   │   ├── __init__.py
│   │   └── spack/
│   │   │   └── config.yaml
└── pyproject.toml
adding the following to ``my_package``'s ``pyproject.toml`` will make ``my_package``'s ``spack/`` configurations visible to Spack when ``my_package`` is installed:
.. code-block:: toml
[project.entry_points."spack.config"]
my_package = "my_package:get_config_path"
The function ``my_package.get_config_path`` in ``my_package/__init__.py`` might look like
.. code-block:: python
import importlib.resources
def get_config_path():
dirname = importlib.resources.files("my_package").joinpath("spack")
if dirname.exists():
return str(dirname)
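Entry points registered this way can be discovered with the standard library. The helper below is a hypothetical illustration of how ``spack.config`` entry points could be enumerated; it is not Spack's actual code:

```python
import sys
from importlib.metadata import entry_points

def discover_config_scopes():
    """Load every function registered under the 'spack.config' entry
    point group and collect the paths they return (sketch only)."""
    if sys.version_info >= (3, 10):
        eps = entry_points(group="spack.config")
    else:  # dict-based API on Python 3.8/3.9
        eps = entry_points().get("spack.config", [])
    paths = []
    for ep in eps:
        get_path = ep.load()   # e.g. my_package.get_config_path
        path = get_path()
        if path:
            paths.append(path)
    return paths

# With no such plugins installed, no extra scopes are found:
print(discover_config_scopes())  # []
```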
.. _platform-scopes:
------------------------
@@ -270,9 +227,6 @@ You can get the name to use for ``<platform>`` by running ``spack arch
--platform``. The system config scope has a ``<platform>`` section for
sites at which ``/etc`` is mounted on multiple heterogeneous machines.
.. _config-scope-precedence:
----------------
Scope Precedence
----------------
@@ -285,13 +239,6 @@ lower-precedence settings. Completely ignoring higher-level configuration
options is supported with the ``::`` notation for keys (see
:ref:`config-overrides` below).
There are also special notations for string concatenation and precedence override:
* ``+:`` will force *prepending* strings or lists. For lists, this is the default behavior.
* ``-:`` works similarly, but for *appending* values.
:ref:`config-prepend-append`
^^^^^^^^^^^
Simple keys
^^^^^^^^^^^
@@ -332,47 +279,6 @@ command:
- ~/.spack/stage
.. _config-prepend-append:
^^^^^^^^^^^^^^^^^^^^
String Concatenation
^^^^^^^^^^^^^^^^^^^^
Above, the user ``config.yaml`` *completely* overrides specific settings in the
default ``config.yaml``. Sometimes, it is useful to add a suffix/prefix
to a path or name. To do this, you can use the ``-:`` notation for *append*
string concatenation at the end of a key in a configuration file. For example:
.. code-block:: yaml
:emphasize-lines: 1
:caption: ~/.spack/config.yaml
config:
install_tree-: /my/custom/suffix/
Spack will then append to the lower-precedence configuration under the
``install_tree-:`` section:
.. code-block:: console
$ spack config get config
config:
install_tree: /some/other/directory/my/custom/suffix
build_stage:
- $tempdir/$user/spack-stage
- ~/.spack/stage
Similarly, ``+:`` can be used to *prepend* to a path or name:
.. code-block:: yaml
:emphasize-lines: 1
:caption: ~/.spack/config.yaml
config:
install_tree+: /my/custom/suffix/
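The append/prepend behavior described above can be modeled in plain Python. This is an illustrative model of the ``+:``/``-:`` key notations, not Spack's implementation:

```python
# Illustrative model of the ``+:`` / ``-:`` notations: a key ending in '+'
# prepends the new value to the lower-precedence one, a key ending in '-'
# appends it, and a plain key overrides. Not Spack's actual code.

def apply_setting(config, key, value):
    if key.endswith("+"):              # '+:' in YAML -> prepend
        base = key[:-1]
        config[base] = value + config.get(base, "")
    elif key.endswith("-"):            # '-:' in YAML -> append
        base = key[:-1]
        config[base] = config.get(base, "") + value
    else:                              # plain key -> override
        config[key] = value
    return config

cfg = {"install_tree": "/some/other/directory"}
apply_setting(cfg, "install_tree-", "/my/custom/suffix")
print(cfg["install_tree"])  # /some/other/directory/my/custom/suffix
```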
.. _config-overrides:
^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -488,7 +394,7 @@ are indicated at the start of the path with ``~`` or ``~user``.
Spack-specific variables
^^^^^^^^^^^^^^^^^^^^^^^^
Spack understands over a dozen special variables. These are:
* ``$env``: name of the currently active :ref:`environment <environments>`
* ``$spack``: path to the prefix of this Spack installation
@@ -499,19 +405,6 @@ Spack understands over a dozen special variables. These are:
* ``$user``: name of the current user
* ``$user_cache_path``: user cache directory (``~/.spack`` unless
:ref:`overridden <local-config-overrides>`)
* ``$architecture``: the architecture triple of the current host, as
detected by Spack.
* ``$arch``: alias for ``$architecture``.
* ``$platform``: the platform of the current host, as detected by Spack.
* ``$operating_system``: the operating system of the current host, as
detected by the ``distro`` python module.
* ``$os``: alias for ``$operating_system``.
* ``$target``: the ISA target for the current host, as detected by
ArchSpec. E.g. ``skylake`` or ``neoverse-n1``.
* ``$target_family``. The target family for the current host, as
detected by ArchSpec. E.g. ``x86_64`` or ``aarch64``.
* ``$date``: the current date in the format YYYY-MM-DD
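As an analogy for how these variables expand, Python's ``string.Template`` implements the same ``$var`` / ``${var}`` syntax. This is an illustration only, not Spack's actual expansion code:

```python
from string import Template

# ``$user`` and ``${user}`` are equivalent, and braces separate a
# variable from adjacent characters -- the same rules described above.
# An analogy via string.Template, not Spack's real substitution code.

values = {"user": "alice", "spack": "/opt/spack"}

print(Template("$spack/var/cache").substitute(values))  # /opt/spack/var/cache
print(Template("${user}-stage").substitute(values))     # alice-stage
```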
Note that, as with shell variables, you can write these as ``$varname``
or with braces to distinguish the variable from surrounding characters:
@@ -656,7 +549,7 @@ down the problem:
You can see above that the ``build_jobs`` and ``debug`` settings are
built in and are not overridden by a configuration file. The
``verify_ssl`` setting comes from the ``--insecure`` option on the
command line. ``dirty`` and ``install_tree`` come from the custom
scopes ``./my-scope`` and ``./my-scope-2``, and all other configuration
options come from the default configuration files that ship with Spack.
@@ -1,4 +1,4 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -9,96 +9,24 @@
Container Images
================
Spack :ref:`environments` can easily be turned into container images. This page
outlines two ways in which this can be done:
1. By installing the environment on the host system, and copying the installations
into the container image. This approach does not require any tools like Docker
or Singularity to be installed.
2. By generating a Docker or Singularity recipe that can be used to build the
container image. In this approach, Spack builds the software inside the
container runtime, not on the host system.
The first approach is easiest if you already have an installed environment,
the second approach gives more control over the container image.
---------------------------
From existing installations
---------------------------
If you already have a Spack environment installed on your system, you can
share the binaries as an OCI compatible container image. To get started you
just have to configure an OCI registry and run ``spack buildcache push``.
.. code-block:: console
# Create and install an environment in the current directory
spack env create -d .
spack -e . add pkg-a pkg-b
spack -e . install
# Configure the registry
spack -e . mirror add --oci-username ... --oci-password ... container-registry oci://example.com/name/image
# Push the image
spack -e . buildcache push --update-index --base-image ubuntu:22.04 --tag my_env container-registry
The resulting container image can then be run as follows:
.. code-block:: console
$ docker run -it example.com/name/image:my_env
The image generated by Spack consists of the specified base image with each package from the
environment as a separate layer on top. The image is minimal by construction, it only contains the
environment roots and its runtime dependencies.
.. note::
When using registries like GHCR and Docker Hub, the ``--oci-password`` flag is not
the password for your account, but a personal access token you need to generate separately.
The specified ``--base-image`` should have a libc that is compatible with the host system.
For example if your host system is Ubuntu 20.04, you can use ``ubuntu:20.04``, ``ubuntu:22.04``
or newer: the libc in the container image must be at least the version of the host system,
assuming ABI compatibility. It is also perfectly fine to use a completely different
Linux distribution as long as the libc is compatible.
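The compatibility rule above amounts to a simple version comparison. The helper below is a hypothetical illustration of that rule, not something Spack provides:

```python
# Hypothetical helper illustrating the rule above: the glibc in the base
# image must be at least as new as the host's for the copied binaries
# to run, assuming ABI compatibility.

def libc_compatible(host_libc: str, image_libc: str) -> bool:
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return as_tuple(image_libc) >= as_tuple(host_libc)

print(libc_compatible("2.31", "2.35"))  # True: e.g. ubuntu:22.04 image on a 20.04 host
print(libc_compatible("2.31", "2.17"))  # False: the image's libc is too old
```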
For convenience, Spack also turns the OCI registry into a :ref:`build cache <binary_caches_oci>`,
so that future ``spack install`` of the environment will simply pull the binaries from the
registry instead of doing source builds. The flag ``--update-index`` is needed to make Spack
take the build cache into account when concretizing.
.. note::
When generating container images in CI, the approach above is recommended when CI jobs
already run in a sandboxed environment. You can simply use ``spack`` directly
in the CI job and push the resulting image to a registry. Subsequent CI jobs should
run faster because Spack can install from the same registry instead of rebuilding from
sources.
---------------------------------------------
Generating recipes for Docker and Singularity
---------------------------------------------
Apart from copying existing installations into container images, Spack can also
generate recipes for container images. This is useful if you want to run Spack
itself in a sandboxed environment instead of on the host system.
Since recipes need a little bit more boilerplate than
.. code-block:: docker
COPY spack.yaml /environment
RUN spack -e /environment install
Spack provides a command to generate customizable recipes for container images. Customizations
include minimizing the size of the image, installing packages in the base image using the system
package manager, and setting up a proper entrypoint to run the image.
~~~~~~~~~~~~~~~~~~~~
A Quick Introduction
~~~~~~~~~~~~~~~~~~~~
Consider having a Spack environment like the following:
@@ -109,8 +37,8 @@ Consider having a Spack environment like the following:
- gromacs+mpi
- mpich
Producing a ``Dockerfile`` from it is as simple as changing directories to
where the ``spack.yaml`` file is stored and running the following command:
.. code-block:: console
@@ -176,9 +104,9 @@ configuration are discussed in details in the sections below.
.. _container_spack_images:
~~~~~~~~~~~~~~~~~~~~~~~~~~
Spack Images on Docker Hub
~~~~~~~~~~~~~~~~~~~~~~~~~~
Docker images with Spack preinstalled and ready to be used are
built when a release is tagged, or nightly on ``develop``. The images
@@ -194,15 +122,15 @@ The OS that are currently supported are summarized in the table below:
* - Operating System
- Base Image
- Spack Image
* - Ubuntu 20.04
- ``ubuntu:20.04``
- ``spack/ubuntu-focal``
* - Ubuntu 22.04
- ``ubuntu:22.04``
- ``spack/ubuntu-jammy``
* - Ubuntu 24.04
- ``ubuntu:24.04``
- ``spack/ubuntu-noble``
* - CentOS 7
- ``centos:7``
- ``spack/centos7``
@@ -215,26 +143,6 @@ The OS that are currently supported are summarized in the table below:
* - Amazon Linux 2
- ``amazonlinux:2``
- ``spack/amazon-linux``
* - AlmaLinux 8
- ``almalinux:8``
- ``spack/almalinux8``
* - AlmaLinux 9
- ``almalinux:9``
- ``spack/almalinux9``
* - Rocky Linux 8
- ``rockylinux:8``
- ``spack/rockylinux8``
* - Rocky Linux 9
- ``rockylinux:9``
- ``spack/rockylinux9``
* - Fedora Linux 39
- ``fedora:39``
- ``spack/fedora39``
* - Fedora Linux 40
- ``fedora:40``
- ``spack/fedora40``
All the images are tagged with the corresponding release of Spack:
@@ -248,9 +156,9 @@ by Spack use them as default base images for their ``build`` stage,
even though handles to use custom base images provided by users are
available to accommodate complex use cases.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Configuring the Container Recipe
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Any Spack Environment can be used for the automatic generation of container
recipes. Sensible defaults are provided for things like the base image or the
@@ -284,25 +192,31 @@ under the ``container`` attribute of environments:
final:
- libgomp
# Labels for the image
labels:
app: "gromacs"
mpi: "mpich"
A detailed description of the options available can be found in the :ref:`container_config_options` section.
~~~~~~~~~~~~~~~~~~~
Setting Base Images
~~~~~~~~~~~~~~~~~~~
The ``images`` subsection is used to select both the image where
Spack builds the software and the image where the built software
is installed. This attribute can be set in different ways and
which one to use depends on the use case at hand.
"""""""""""""""""""""""""""""""""""""""" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use Official Spack Images From Dockerhub Use Official Spack Images From Dockerhub
"""""""""""""""""""""""""""""""""""""""" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To generate a recipe that uses an official Docker image from the To generate a recipe that uses an official Docker image from the
Spack organization to build the software and the corresponding official OS image Spack organization to build the software and the corresponding official OS image
@@ -507,9 +421,9 @@ responsibility to ensure that:
Therefore we don't recommend its use in cases that can be otherwise
covered by the simplified mode shown first.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Singularity Definition Files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In addition to producing recipes in ``Dockerfile`` format Spack can produce
Singularity Definition Files by just changing the value of the ``format``
@@ -530,132 +444,11 @@ attribute:
The minimum version of Singularity required to build a SIF (Singularity Image Format)
image from the recipes generated by Spack is ``3.5.3``.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Extending the Jinja2 Templates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Dockerfile and the Singularity definition file that Spack can generate are based on
a few Jinja2 templates that are rendered according to the environment being containerized.
Even though Spack allows a great deal of customization by just setting appropriate values for
the configuration options, sometimes that is not enough.
In those cases, a user can directly extend the template that Spack uses to render the image
to e.g. set additional environment variables or perform specific operations either before or
after a given stage of the build. Let's consider as an example the following structure:
.. code-block:: console
$ tree /opt/environment
/opt/environment
├── data
│ └── data.csv
├── spack.yaml
└── templates
└── container
└── CustomDockerfile
containing both the custom template extension and the environment manifest file. To use a custom
template, the environment must register the directory containing it, and declare its use under the
``container`` configuration:
.. code-block:: yaml
:emphasize-lines: 7-8,12
spack:
specs:
- hdf5~mpi
concretizer:
unify: true
config:
template_dirs:
- /opt/environment/templates
container:
format: docker
depfile: true
template: container/CustomDockerfile
The template extension can override two blocks, named ``build_stage`` and ``final_stage``, similarly to
the example below:
.. code-block::
:emphasize-lines: 3,8
{% extends "container/Dockerfile" %}
{% block build_stage %}
RUN echo "Start building"
{{ super() }}
{% endblock %}
{% block final_stage %}
{{ super() }}
COPY data /share/myapp/data
{% endblock %}
The Dockerfile is generated by running:
.. code-block:: console
$ spack -e /opt/environment containerize
Note that the environment must be active for spack to read the template.
The recipe that gets generated contains the two extra instructions that we added in our template extension:
.. code-block:: Dockerfile
:emphasize-lines: 4,43
# Build stage with Spack pre-installed and ready to be used
FROM spack/ubuntu-jammy:latest as builder
RUN echo "Start building"
# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&& (echo "spack:" \
&& echo " specs:" \
&& echo " - hdf5~mpi" \
&& echo " concretizer:" \
&& echo " unify: true" \
&& echo " config:" \
&& echo " template_dirs:" \
&& echo " - /tmp/environment/templates" \
&& echo " install_tree: /opt/software" \
&& echo " view: /opt/view") > /opt/spack-environment/spack.yaml
# Install the software, remove unnecessary deps
RUN cd /opt/spack-environment && spack env activate . && spack concretize && spack env depfile -o Makefile && make -j $(nproc) && spack gc -y
# Strip all the binaries
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
xargs file -i | \
grep 'charset=binary' | \
grep 'x-executable\|x-archive\|x-sharedlib' | \
awk -F: '{print $1}' | xargs strip -s
# Modifications to the environment that are necessary to run
RUN cd /opt/spack-environment && \
spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
# Bare OS image to run the installed executables
FROM ubuntu:22.04
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/._view /opt/._view
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
COPY data /share/myapp/data
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l", "-c", "$*", "--" ]
CMD [ "/bin/bash" ]
.. _container_config_options:
~~~~~~~~~~~~~~~~~~~~~~~
Configuration Reference
~~~~~~~~~~~~~~~~~~~~~~~
The tables below describe all the configuration options that are currently supported
to customize the generation of container recipes:
@@ -671,10 +464,6 @@ to customize the generation of container recipes:
- The format of the recipe
- ``docker`` or ``singularity``
- Yes
* - ``depfile``
- Whether to use a depfile for installation, or not
- True or False (default)
- No
* - ``images:os``
- Operating system used as a base for the image
- See :ref:`containers-supported-os`
@@ -709,7 +498,7 @@ to customize the generation of container recipes:
- No
* - ``os_packages:command``
- Tool used to manage system packages
- ``apt``, ``yum``, ``dnf``, ``dnf_epel``, ``zypper``, ``apk``, ``yum_amazon``
- Only with custom base images
* - ``os_packages:update``
- Whether or not to update the list of available packages
@@ -723,6 +512,14 @@ to customize the generation of container recipes:
- System packages needed at run-time
- Valid packages for the current OS
- No
* - ``labels``
- Labels to tag the image
- Pairs of key-value strings
@@ -752,13 +549,13 @@ to customize the generation of container recipes:
- Description string
- No
~~~~~~~~~~~~~~
Best Practices
~~~~~~~~~~~~~~
"""
MPI
"""
Due to the dependency on Fortran for OpenMPI, which is the spack default
implementation, consider adding ``gfortran`` to the ``apt-get install`` list.
@@ -769,9 +566,9 @@ For execution on HPC clusters, it can be helpful to import the docker
image into Singularity in order to start a program with an *external*
MPI. Otherwise, also add ``openssh-server`` to the ``apt-get install`` list.
"""" ^^^^
CUDA CUDA
"""" ^^^^
Starting from CUDA 9.0, Nvidia provides minimal CUDA images based on Starting from CUDA 9.0, Nvidia provides minimal CUDA images based on
Ubuntu. Please see `their instructions <https://hub.docker.com/r/nvidia/cuda/>`_. Ubuntu. Please see `their instructions <https://hub.docker.com/r/nvidia/cuda/>`_.
Avoid double-installing CUDA by adding, e.g. Avoid double-installing CUDA by adding, e.g.
@@ -790,9 +587,9 @@ to your ``spack.yaml``.
Users will either need ``nvidia-docker`` or e.g. Singularity to *execute*
device kernels.
""""""""""""""""""""""""" ^^^^^^^^^^^^^^^^^^^^^^^^^
Docker on Windows and OSX Docker on Windows and OSX
""""""""""""""""""""""""" ^^^^^^^^^^^^^^^^^^^^^^^^^
On Mac OS and Windows, docker runs on a hypervisor that is not allocated much On Mac OS and Windows, docker runs on a hypervisor that is not allocated much
memory by default, and some spack packages may fail to build due to lack of memory by default, and some spack packages may fail to build due to lack of
@@ -1,4 +1,4 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -118,7 +118,7 @@ make another change, test that change, etc. We use `pytest
<http://pytest.org/>`_ as our tests framework, and these types of
arguments are just passed to the ``pytest`` command underneath. See `the
pytest docs
<https://doc.pytest.org/en/latest/how-to/usage.html#specifying-which-tests-to-run>`_
for more details on test selection syntax.
``spack unit-test`` has a few special options that can help you
@@ -147,7 +147,7 @@ you want to know about. For example, to see just the tests in
You can also combine any of these options with a ``pytest`` keyword
search. See the `pytest usage docs
<https://doc.pytest.org/en/latest/how-to/usage.html#specifying-which-tests-to-run>`_
for more details on test selection syntax. For example, to see the names of all tests that have "spec"
or "concretize" somewhere in their names:
@@ -253,6 +253,27 @@ to update them.
multiple runs of ``spack style`` just to re-compute line numbers and
makes it much easier to fix errors directly off of the CI output.
^^^^^^^^^^^^^^^^^^^
Documentation Tests
@@ -288,9 +309,13 @@ All of these can be installed with Spack, e.g.
.. code-block:: console
$ spack load py-sphinx py-sphinx-rtd-theme py-sphinxcontrib-programoutput
so that all of the dependencies are added to PYTHONPATH. If you see an error message
like:
.. code-block:: console
@@ -310,11 +335,53 @@ Once all of the dependencies are installed, you can try building the documentati
$ make clean
$ make
If you see any warning or error messages, you will have to correct those before your PR
is accepted. If you are editing the documentation, you should be running the
documentation tests to make sure there are no errors. Documentation changes can result
in some obfuscated warning messages. If you don't understand what they mean, feel free
to ask when you submit your PR.
--------
Coverage
@@ -1,4 +1,4 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -149,9 +149,11 @@ grouped by functionality.
Package-related modules
^^^^^^^^^^^^^^^^^^^^^^^
:mod:`spack.package_base`
Contains the :class:`~spack.package_base.PackageBase` class, which
is the superclass for all packages in Spack.
:mod:`spack.util.naming`
Contains functions for mapping between Spack package names,
@@ -175,11 +177,14 @@ Spec-related modules
^^^^^^^^^^^^^^^^^^^^
:mod:`spack.spec`
Contains :class:`~spack.spec.Spec`. Also implements most of the logic for concretization
of specs.
:mod:`spack.parser`
Contains :class:`~spack.parser.SpecParser` and functions related to parsing specs.
:mod:`spack.concretize` :mod:`spack.concretize`
Contains :class:`~spack.concretize.Concretizer` implementation, Contains :class:`~spack.concretize.Concretizer` implementation,
@@ -232,7 +237,7 @@ Spack Subcommands
Unit tests Unit tests
^^^^^^^^^^ ^^^^^^^^^^
``spack.test`` :mod:`spack.test`
Implements Spack's test suite. Add a module and put its name in Implements Spack's test suite. Add a module and put its name in
the test suite in ``__init__.py`` to add more unit tests. the test suite in ``__init__.py`` to add more unit tests.
@@ -357,23 +362,91 @@ If there is a hook that you would like and is missing, you can propose to add a
``pre_install(spec)`` ``pre_install(spec)``
""""""""""""""""""""" """""""""""""""""""""
A ``pre_install`` hook is run within the install subprocess, directly before the install starts. A ``pre_install`` hook is run within an install subprocess, directly before
It expects a single argument of a spec. the install starts. It expects a single argument of a spec, and is run in
a multiprocessing subprocess. Note that if you see ``pre_install`` functions associated with packages these are not hooks
as we have defined them here, but rather callback functions associated with
a package install.
""""""""""""""""""""""""""""""""""""" """"""""""""""""""""""
``post_install(spec, explicit=None)`` ``post_install(spec)``
""""""""""""""""""""""""""""""""""""" """"""""""""""""""""""
A ``post_install`` hook is run within the install subprocess, directly after the install finishes, A ``post_install`` hook is run within an install subprocess, directly after
but before the build stage is removed and the spec is registered in the database. It expects two the install finishes, but before the build stage is removed. If you
arguments: spec and an optional boolean indicating whether this spec is being installed explicitly. write one of these hooks, you should expect it to accept a spec as the only
argument. This is run in a multiprocessing subprocess. This ``post_install`` is
also seen in packages, but in this context not related to the hooks described
here.
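As a rough sketch (hook names and signatures follow the text above; the bodies are purely illustrative, not Spack's real implementation), a pair of install hooks might look like:

```python
# Illustrative sketch of install hooks as described above; the bodies are
# hypothetical -- real hooks live in modules under lib/spack/spack/hooks/.

def pre_install(spec):
    # runs in the install subprocess, just before the build starts
    print(f"about to build {spec}")

def post_install(spec, explicit=None):
    # runs in the install subprocess after the install finishes, before the
    # build stage is removed; `explicit` indicates whether the user asked
    # for this spec directly rather than as a dependency
    kind = "explicit" if explicit else "dependency"
    return f"installed {spec} ({kind})"
```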
""""""""""""""""""""""""""""""""""""""""""""""""""""
``pre_uninstall(spec)`` and ``post_uninstall(spec)``
""""""""""""""""""""""""""""""""""""""""""""""""""""
These hooks are currently used for cleaning up module files after uninstall. """"""""""""""""""""""""""
``on_install_start(spec)``
""""""""""""""""""""""""""
This hook is run at the beginning of ``lib/spack/spack/installer.py``,
in the install function of a ``PackageInstaller``,
and importantly is not part of a build process, but before it. This is when
we have just newly grabbed the task, and are preparing to install. If you
write a hook of this type, you should provide the spec to it.
.. code-block:: python
def on_install_start(spec):
"""On start of an install, we want to...
"""
print('on_install_start')
""""""""""""""""""""""""""""
``on_install_success(spec)``
""""""""""""""""""""""""""""
This hook is run on a successful install, and is also run inside the build
process, akin to ``post_install``. The main difference is that this hook
is run outside of the context of the stage directory, meaning after the
build stage has been removed and the user is alerted that the install was
successful. If you need to write a hook that is run on success of a particular
phase, you should use ``on_phase_success``.
""""""""""""""""""""""""""""
``on_install_failure(spec)``
""""""""""""""""""""""""""""
This hook is run given an install failure that happens outside of the build
subprocess, but somewhere in ``installer.py`` when something else goes wrong.
If you need to write a hook that is relevant to a failure within a build
process, you would want to instead use ``on_phase_failure``.
"""""""""""""""""""""""""""
``on_install_cancel(spec)``
"""""""""""""""""""""""""""
The same, but triggered if a spec install is cancelled for any reason.
"""""""""""""""""""""""""""""""""""""""""""""""
``on_phase_success(pkg, phase_name, log_file)``
"""""""""""""""""""""""""""""""""""""""""""""""
This hook is run within the install subprocess, and specifically when a phase
successfully finishes. Since we are interested in the package, the name of
the phase, and any output from it, we require:
- **pkg**: the package variable, which also has the attached spec at ``pkg.spec``
- **phase_name**: the name of the phase that was successful (e.g., configure)
- **log_file**: the path to the file with output, in case you need to inspect or otherwise interact with it.
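A minimal sketch of such a hook, with the three arguments listed above (the body is illustrative only; in practice ``pkg`` would be a Spack package object):

```python
# Hypothetical on_phase_success hook; the argument names mirror the list
# above, and the body just summarizes the event for illustration.

def on_phase_success(pkg, phase_name, log_file):
    # pkg would normally be a package object with pkg.spec attached
    return f"{pkg}: phase '{phase_name}' succeeded (log: {log_file})"
```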
"""""""""""""""""""""""""""""""""""""""""""""
``on_phase_error(pkg, phase_name, log_file)``
"""""""""""""""""""""""""""""""""""""""""""""
In the case of an error during a phase, we might want to trigger some event
with a hook, and this is the purpose of this particular hook. Akin to
``on_phase_success`` we require the same variables - the package that failed,
the name of the phase, and the log file where we might find errors.
^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^
@@ -404,7 +477,7 @@ use my new hook as follows:
.. code-block:: python .. code-block:: python
def post_log_write(message, level): def post_log_write(message, level):
"""Do something custom with the message and level every time we write """Do something custom with the messsage and level every time we write
to the log to the log
""" """
print('running post_log_write!') print('running post_log_write!')
@@ -552,11 +625,11 @@ With either interpreter you can run a single command:
.. code-block:: console .. code-block:: console
$ spack python -c 'from spack.spec import Spec; Spec("python").concretized()' $ spack python -c 'import distro; distro.linux_distribution()'
... ('Ubuntu', '18.04', 'Bionic Beaver')
$ spack python -i ipython -c 'from spack.spec import Spec; Spec("python").concretized()' $ spack python -i ipython -c 'import distro; distro.linux_distribution()'
Out[1]: ... Out[1]: ('Ubuntu', '18.04', 'Bionic Beaver')
or a file: or a file:
@@ -1071,9 +1144,9 @@ Announcing a release
We announce releases in all of the major Spack communication channels. We announce releases in all of the major Spack communication channels.
Publishing the release takes care of GitHub. The remaining channels are Publishing the release takes care of GitHub. The remaining channels are
X, Slack, and the mailing list. Here are the steps: Twitter, Slack, and the mailing list. Here are the steps:
#. Announce the release on X. #. Announce the release on Twitter.
* Compose the tweet on the ``@spackpm`` account per the * Compose the tweet on the ``@spackpm`` account per the
``spack-twitter`` slack channel. ``spack-twitter`` slack channel.
@@ -1,4 +1,4 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details. Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -58,9 +58,9 @@ Using Environments
Here we follow a typical use case of creating, concretizing, Here we follow a typical use case of creating, concretizing,
installing and loading an environment. installing and loading an environment.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Creating a managed Environment Creating a named Environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
An environment is created by: An environment is created by:
@@ -72,8 +72,7 @@ Spack then creates the directory ``var/spack/environments/myenv``.
.. note:: .. note::
All managed environments by default are stored in the ``var/spack/environments`` folder. All named environments are stored in the ``var/spack/environments`` folder.
This location can be changed by setting the ``environments_root`` variable in ``config.yaml``.
In the ``var/spack/environments/myenv`` directory, Spack creates the In the ``var/spack/environments/myenv`` directory, Spack creates the
file ``spack.yaml`` and the hidden directory ``.spack-env``. file ``spack.yaml`` and the hidden directory ``.spack-env``.
@@ -94,9 +93,9 @@ an Environment, the ``.spack-env`` directory also contains:
* ``logs/``: A directory containing the build logs for the packages * ``logs/``: A directory containing the build logs for the packages
in this Environment. in this Environment.
Spack Environments can also be created from either a manifest file Spack Environments can also be created from either a ``spack.yaml``
(usually but not necessarily named, ``spack.yaml``) or a lockfile. manifest or a ``spack.lock`` lockfile. To create an Environment from a
To create an Environment from a manifest: ``spack.yaml`` manifest:
.. code-block:: console .. code-block:: console
@@ -142,17 +141,6 @@ user's prompt to begin with the environment name in brackets.
$ spack env activate -p myenv $ spack env activate -p myenv
[myenv] $ ... [myenv] $ ...
The ``activate`` command can also be used to create a new environment if it does not already
exist.
.. code-block:: console
$ spack env activate --create -p myenv
# ...
# [creates if myenv does not exist yet]
# ...
[myenv] $ ...
To deactivate an environment, use the command: To deactivate an environment, use the command:
.. code-block:: console .. code-block:: console
@@ -172,36 +160,21 @@ environment will remove the view from the user environment.
Anonymous Environments Anonymous Environments
^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^
Apart from managed environments, Spack also supports anonymous environments. Any directory can be treated as an environment if it contains a file
``spack.yaml``. To load an anonymous environment, use:
Anonymous environments can be placed in any directory of choice.
.. note::
When uninstalling packages, Spack asks the user to confirm the removal of packages
that are still used in a managed environment. This is not the case for anonymous
environments.
To create an anonymous environment, use one of the following commands:
.. code-block:: console .. code-block:: console
$ spack env create --dir my_env $ spack env activate -d /path/to/directory
$ spack env create ./my_env
As a shorthand, you can also create an anonymous environment upon activation if it does not Anonymous specs can be created in place using the command:
already exist:
.. code-block:: console .. code-block:: console
$ spack env activate --create ./my_env $ spack env create -d .
For convenience, Spack can also place an anonymous environment in a temporary directory for you:
.. code-block:: console
$ spack env activate --temp
In this case Spack simply creates a spack.yaml file in the requested
directory.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Environment Sensitive Commands Environment Sensitive Commands
@@ -260,8 +233,8 @@ packages will be listed as roots of the Environment.
All of the Spack commands that act on the list of installed specs are All of the Spack commands that act on the list of installed specs are
Environment-sensitive in this way, including ``install``, Environment-sensitive in this way, including ``install``,
``uninstall``, ``find``, ``extensions``, and more. In the ``uninstall``, ``activate``, ``deactivate``, ``find``, ``extensions``,
:ref:`environment-configuration` section we will discuss and more. In the :ref:`environment-configuration` section we will discuss
Environment-sensitive commands further. Environment-sensitive commands further.
^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^
@@ -373,7 +346,7 @@ the Environment and then install the concretized specs.
(see :ref:`build-jobs`). To speed up environment builds further, independent (see :ref:`build-jobs`). To speed up environment builds further, independent
packages can be installed in parallel by launching more Spack instances. For packages can be installed in parallel by launching more Spack instances. For
example, the following will build at most four packages in parallel using example, the following will build at most four packages in parallel using
three background jobs: three background jobs:
.. code-block:: console .. code-block:: console
@@ -421,29 +394,12 @@ version (and other constraints) passed as the spec argument to the
For packages with ``git`` attributes, git branches, tags, and commits can For packages with ``git`` attributes, git branches, tags, and commits can
also be used as valid concrete versions (see :ref:`version-specifier`). also be used as valid concrete versions (see :ref:`version-specifier`).
This means that for a package ``foo``, ``spack develop foo@git.main`` will clone This means that for a package ``foo``, ``spack develop foo@git.main`` will clone
the ``main`` branch of the package, and ``spack install`` will install from the ``main`` branch of the package, and ``spack install`` will install from
that git clone if ``foo`` is in the environment. that git clone if ``foo`` is in the environment.
Further development on ``foo`` can be tested by reinstalling the environment, Further development on ``foo`` can be tested by reinstalling the environment,
and eventually committed and pushed to the upstream git repo. and eventually committed and pushed to the upstream git repo.
If the package being developed supports out-of-source builds then users can use the
``--build_directory`` flag to control the location and name of the build directory.
This is a shortcut to set the ``package_attributes:build_directory`` in the
``packages`` configuration (see :ref:`assigning-package-attributes`).
The supplied location will become the build-directory for that package in all future builds.
.. warning::
Potential pitfalls of setting the build directory
Spack does not check packages for out-of-source build compatibility, and
so the onus of making sure the package supports out-of-source builds is on
the user.
For example, most ``autotools`` and ``makefile`` packages do not support out-of-source builds
while all ``CMake`` packages do.
Understanding these nuances is on the software developers, and we strongly encourage
developers to only redirect the build directory if they understand their package's
build system.
^^^^^^^ ^^^^^^^
Loading Loading
^^^^^^^ ^^^^^^^
@@ -460,125 +416,6 @@ Sourcing that file in Bash will make the environment available to the
user; and can be included in ``.bashrc`` files, etc. The ``loads`` user; and can be included in ``.bashrc`` files, etc. The ``loads``
file may also be copied out of the environment, renamed, etc. file may also be copied out of the environment, renamed, etc.
.. _environment_include_concrete:
------------------------------
Included Concrete Environments
------------------------------
Spack can create an environment based on information from already
established environments. You can think of it as a combination of existing
environments: Spack gathers information from the existing environments'
``spack.lock`` files and uses it during the creation of the included concrete
environment. When an included concrete environment is created, it will generate
a ``spack.lock`` file for the newly created environment.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Creating included environments
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To create a combined concrete environment, you must have at least one existing
concrete environment. You will use the command ``spack env create`` with the
argument ``--include-concrete`` followed by the name or path of the environment
you'd like to include. Here is an example of how to create a combined environment
from the command line.
.. code-block:: console
$ spack env create myenv
$ spack -e myenv add python
$ spack -e myenv concretize
$ spack env create --include-concrete myenv included_env
You can also include an environment directly in the ``spack.yaml`` file. It
involves adding the ``include_concrete`` heading in the yaml followed by the
absolute path to the independent environments.
.. code-block:: yaml
spack:
specs: []
concretizer:
unify: true
include_concrete:
- /absolute/path/to/environment1
- /absolute/path/to/environment2
Once the ``spack.yaml`` has been updated you must concretize the environment to
get the concrete specs from the included environments.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Updating an included environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If changes were made to the base environment and you want that reflected in the
included environment you will need to reconcretize both the base environment and the
included environment for the change to be implemented. For example:
.. code-block:: console
$ spack env create myenv
$ spack -e myenv add python
$ spack -e myenv concretize
$ spack env create --include-concrete myenv included_env
$ spack -e myenv find
==> In environment myenv
==> Root specs
python
==> 0 installed packages
$ spack -e included_env find
==> In environment included_env
==> No root specs
==> Included specs
python
==> 0 installed packages
Here we see that ``included_env`` has access to the python package through
the ``myenv`` environment. But if we were to add another spec to ``myenv``,
``included_env`` will not be able to access the new information.
.. code-block:: console
$ spack -e myenv add perl
$ spack -e myenv concretize
$ spack -e myenv find
==> In environment myenv
==> Root specs
perl python
==> 0 installed packages
$ spack -e included_env find
==> In environment included_env
==> No root specs
==> Included specs
python
==> 0 installed packages
It isn't until you run the ``spack concretize`` command that the combined
environment will get the updated information from the reconcretized base environment.
.. code-block:: console
$ spack -e included_env concretize
$ spack -e included_env find
==> In environment included_env
==> No root specs
==> Included specs
perl python
==> 0 installed packages
.. _environment-configuration: .. _environment-configuration:
------------------------ ------------------------
@@ -619,11 +456,11 @@ a ``packages.yaml`` file) could contain:
.. code-block:: yaml .. code-block:: yaml
spack: spack:
# ... ...
packages: packages:
all: all:
compiler: [intel] compiler: [intel]
# ... ...
This configuration sets the default compiler for all packages to This configuration sets the default compiler for all packages to
``intel``. ``intel``.
@@ -682,49 +519,8 @@ available from the yaml file.
^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^
Spec concretization Spec concretization
^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^
An environment can be concretized in three different modes and the behavior active under An environment can be concretized in three different modes and the behavior active under any environment
any environment is determined by the ``concretizer:unify`` configuration option. is determined by the ``concretizer:unify`` property. By default specs are concretized *separately*, one after the other:
The *default* mode is to unify all specs:
.. code-block:: yaml
spack:
specs:
- hdf5+mpi
- zlib@1.2.8
concretizer:
unify: true
This means that any package in the environment corresponds to a single concrete spec. In
the above example, when ``hdf5`` depends down the line of ``zlib``, it is required to
take ``zlib@1.2.8`` instead of a newer version. This mode of concretization is
particularly useful when environment views are used: if every package occurs in
only one flavor, it is usually possible to merge all install directories into a view.
A downside of unified concretization is that it can be overly strict. For example, a
concretization error would happen when both ``hdf5+mpi`` and ``hdf5~mpi`` are specified
in an environment.
The second mode is to *unify when possible*: this makes concretization of root specs
more independent. Instead of requiring reuse of dependencies across different root
specs, reuse is only maximized:
.. code-block:: yaml
spack:
specs:
- hdf5~mpi
- hdf5+mpi
- zlib@1.2.8
concretizer:
unify: when_possible
This means that both ``hdf5`` installations will use ``zlib@1.2.8`` as a dependency even
if newer versions of that library are available.
The third mode of operation is to concretize root specs entirely independently by
disabling unified concretization:
.. code-block:: yaml .. code-block:: yaml
@@ -736,11 +532,45 @@ disabling unified concretization:
concretizer: concretizer:
unify: false unify: false
In this example ``hdf5`` is concretized separately, and does not consider ``zlib@1.2.8`` This mode of operation permits to deploy a full software stack where multiple configurations of the same package
as a constraint or preference. Instead, it will take the latest possible version. need to be installed alongside each other using the best possible selection of transitive dependencies. The downside
is that redundancy of installations is disregarded completely, and thus environments might be more bloated than
strictly needed. In the example above, for instance, if a version of ``zlib`` newer than ``1.2.8`` is known to Spack,
then it will be used for both ``hdf5`` installations.
The last two concretization options are typically useful for system administrators and If redundancy of the environment is a concern, Spack provides a way to install it *together where possible*,
user support groups providing a large software stack for their HPC center. i.e. trying to maximize reuse of dependencies across different specs:
.. code-block:: yaml
spack:
specs:
- hdf5~mpi
- hdf5+mpi
- zlib@1.2.8
concretizer:
unify: when_possible
Also in this case Spack allows having multiple configurations of the same package, but privileges the reuse of
specs over other factors. Going back to our example, this means that both ``hdf5`` installations will use
``zlib@1.2.8`` as a dependency even if newer versions of that library are available.
Central installations done at HPC centers by system administrators or user support groups are a common case
that fits either of these two modes.
Environments can also be configured to concretize all the root specs *together*, in a self-consistent way, to
ensure that each package in the environment comes with a single configuration:
.. code-block:: yaml
spack:
specs:
- hdf5+mpi
- zlib@1.2.8
concretizer:
unify: true
This mode of operation is usually what is required by software developers that want to deploy their development
environment and have a single view of it in the filesystem.
.. note:: .. note::
@@ -751,11 +581,10 @@ user support groups providing a large software stack for their HPC center.
.. admonition:: Re-concretization of user specs .. admonition:: Re-concretization of user specs
The ``spack concretize`` command without additional arguments will *not* change any When concretizing specs *together* or *together where possible* the entire set of specs will be
previously concretized specs. This may prevent it from finding a solution when using re-concretized after any addition of new user specs, to ensure that
``unify: true``, and it may prevent it from finding a minimal solution when using the environment remains consistent / minimal. When instead the specs are concretized
``unify: when_possible``. You can force Spack to ignore the existing concrete environment separately only the new specs will be re-concretized after any addition.
with ``spack concretize -f``.
^^^^^^^^^^^^^ ^^^^^^^^^^^^^
Spec Matrices Spec Matrices
@@ -930,7 +759,6 @@ For example, the following environment has three root packages:
This allows for a much-needed reduction in redundancy between packages This allows for a much-needed reduction in redundancy between packages
and constraints. and constraints.
---------------- ----------------
Filesystem Views Filesystem Views
---------------- ----------------
@@ -970,7 +798,7 @@ directories.
.. code-block:: yaml .. code-block:: yaml
spack: spack:
# ... ...
view: view:
mpis: mpis:
root: /path/to/view root: /path/to/view
@@ -1014,7 +842,7 @@ automatically named ``default``, so that
.. code-block:: yaml .. code-block:: yaml
spack: spack:
# ... ...
view: True view: True
is equivalent to is equivalent to
@@ -1022,7 +850,7 @@ is equivalent to
.. code-block:: yaml .. code-block:: yaml
spack: spack:
# ... ...
view: view:
default: default:
root: .spack-env/view root: .spack-env/view
@@ -1032,7 +860,7 @@ and
.. code-block:: yaml .. code-block:: yaml
spack: spack:
# ... ...
view: /path/to/view view: /path/to/view
is equivalent to is equivalent to
@@ -1040,7 +868,7 @@ is equivalent to
.. code-block:: yaml .. code-block:: yaml
spack: spack:
# ... ...
view: view:
default: default:
root: /path/to/view root: /path/to/view
@@ -1079,20 +907,9 @@ function, as shown in the example below:
.. code-block:: yaml .. code-block:: yaml
projections: projections:
zlib: "{name}-{version}" zlib: {name}-{version}
^mpi: "{name}-{version}/{^mpi.name}-{^mpi.version}-{compiler.name}-{compiler.version}" ^mpi: {name}-{version}/{^mpi.name}-{^mpi.version}-{compiler.name}-{compiler.version}
all: "{name}-{version}/{compiler.name}-{compiler.version}" all: {name}-{version}/{compiler.name}-{compiler.version}
Projections also permit environment and spack configuration variable
expansions as shown below:
.. code-block:: yaml
projections:
all: "{name}-{version}/{compiler.name}-{compiler.version}/$date/$SYSTEM_ENV_VARIABLE"
where ``$date`` is the spack configuration variable that will expand with the ``YYYY-MM-DD``
format and ``$SYSTEM_ENV_VARIABLE`` is an environment variable defined in the shell.
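The projection templates above behave like format strings over spec attributes. As a rough model only (this is not Spack's actual implementation, and the attribute names are taken from the examples above), the expansion can be sketched as:

```python
# Rough model of how a projection template expands into an install path.
# The placeholder names ("name", "version", "compiler.name", ...) match
# the examples above; the expansion logic is illustrative only.
def expand_projection(template, spec_attrs):
    # spec_attrs maps placeholder names to their values for a concrete spec
    out = template
    for key, value in spec_attrs.items():
        out = out.replace("{" + key + "}", str(value))
    return out

path = expand_projection(
    "{name}-{version}/{compiler.name}-{compiler.version}",
    {"name": "zlib", "version": "1.2.13",
     "compiler.name": "gcc", "compiler.version": "12.2.0"},
)
```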
The entries in the projections configuration file must all be either The entries in the projections configuration file must all be either
specs or the keyword ``all``. For each spec, the projection used will specs or the keyword ``all``. For each spec, the projection used will
@@ -1164,7 +981,7 @@ other targets to depend on the environment installation.
A typical workflow is as follows: A typical workflow is as follows:
.. code-block:: console .. code:: console
spack env create -d . spack env create -d .
spack -e . add perl spack -e . add perl
@@ -1215,7 +1032,7 @@ gets installed and is available for use in the ``env`` target.
$(SPACK) -e . concretize -f $(SPACK) -e . concretize -f
env.mk: spack.lock env.mk: spack.lock
$(SPACK) -e . env depfile -o $@ --make-prefix spack $(SPACK) -e . env depfile -o $@ --make-target-prefix spack
env: spack/env env: spack/env
$(info Environment installed!) $(info Environment installed!)
@@ -1238,79 +1055,27 @@ the include is conditional.
.. note:: .. note::
When including generated ``Makefile``\s, it is important to use When including generated ``Makefile``\s, it is important to use
the ``--make-prefix`` flag and use the non-phony target the ``--make-target-prefix`` flag and use the non-phony target
``<prefix>/env`` as prerequisite, instead of the phony target ``<target-prefix>/env`` as prerequisite, instead of the phony target
``<prefix>/all``. ``<target-prefix>/all``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Building a subset of the environment Building a subset of the environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The generated ``Makefile``\s contain install targets for each spec, identified The generated ``Makefile``\s contain install targets for each spec. Given the hash
by ``<name>-<version>-<hash>``. This allows you to install only a subset of the of a particular spec, you can use the ``.install/<hash>`` target to install the
packages in the environment. When packages are unique in the environment, it's spec with its dependencies. There is also ``.install-deps/<hash>`` to *only* install
enough to know the name and let tab-completion fill out the version and hash.
The following phony targets are available: ``install/<spec>`` to install the
spec with its dependencies, and ``install-deps/<spec>`` to *only* install
its dependencies. This can be useful when certain flags should only apply to its dependencies. This can be useful when certain flags should only apply to
dependencies. Below we show a use case where a spec is installed with verbose dependencies. Below we show a use case where a spec is installed with verbose
output (``spack install --verbose``) while its dependencies are installed silently: output (``spack install --verbose``) while its dependencies are installed silently:
.. code-block:: console .. code:: console
$ spack env depfile -o Makefile $ spack env depfile -o Makefile --make-target-prefix my_env
# Install dependencies in parallel, only show a log on error. # Install dependencies in parallel, only show a log on error.
$ make -j16 install-deps/python-3.11.0-<hash> SPACK_INSTALL_FLAGS=--show-log-on-error $ make -j16 my_env/.install-deps/<hash> SPACK_INSTALL_FLAGS=--show-log-on-error
# Install the root spec with verbose output. # Install the root spec with verbose output.
$ make -j16 install/python-3.11.0-<hash> SPACK_INSTALL_FLAGS=--verbose $ make -j16 my_env/.install/<hash> SPACK_INSTALL_FLAGS=--verbose
^^^^^^^^^^^^^^^^^^^^^^^^^
Adding post-install hooks
^^^^^^^^^^^^^^^^^^^^^^^^^
Another advanced use-case of generated ``Makefile``\s is running a post-install
command for each package. These "hooks" could be anything from printing a
post-install message to running tests or pushing just-built binaries to a buildcache.
This can be accomplished through the generated ``[<prefix>/]SPACK_PACKAGE_IDS``
variable. Assuming we have an active and concrete environment, we generate the
associated ``Makefile`` with a prefix ``example``:
.. code-block:: console
$ spack env depfile -o env.mk --make-prefix example
And we now include it in a different ``Makefile``, in which we create a target
``example/push/%`` with ``%`` referring to a package identifier. This target
depends on the particular package installation. In this target we automatically
have the target-specific ``HASH`` and ``SPEC`` variables at our disposal. They
are respectively the spec hash (excluding leading ``/``), and a human-readable spec.
Finally, we have an entrypoint target ``push`` that will update the buildcache
index once every package is pushed. Note how this target uses the generated
``example/SPACK_PACKAGE_IDS`` variable to define its prerequisites.
.. code:: Makefile
SPACK ?= spack
BUILDCACHE_DIR = $(CURDIR)/tarballs
.PHONY: all
all: push
include env.mk
example/push/%: example/install/%
@mkdir -p $(dir $@)
$(info About to push $(SPEC) to a buildcache)
$(SPACK) -e . buildcache push --allow-root --only=package $(BUILDCACHE_DIR) /$(HASH)
@touch $@
push: $(addprefix example/push/,$(example/SPACK_PACKAGE_IDS))
$(info Updating the buildcache index)
$(SPACK) -e . buildcache update-index $(BUILDCACHE_DIR)
$(info Done!)
@touch $@
@@ -1,4 +1,4 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other .. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details. Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT) SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -9,42 +9,46 @@
Custom Extensions
=================

*Spack extensions* allow you to extend Spack capabilities by deploying your
own custom commands or logic in an arbitrary location on your filesystem.
This might be extremely useful e.g. to develop and maintain a command whose purpose is
too specific to be considered for reintegration into the mainline or to
evolve a command through its early stages before starting a discussion to merge
it upstream.

From Spack's point of view an extension is any path in your filesystem which
respects the following naming and layout for files:
.. code-block:: console

   spack-scripting/ # The top level directory must match the format 'spack-{extension_name}'
   ├── pytest.ini   # Optional file if the extension ships its own tests
   ├── scripting    # Folder that may contain modules that are needed for the extension commands
   │   ├── cmd      # Folder containing extension commands
   │   │   └── filter.py    # A new command that will be available
   │   └── functions.py     # Module with internal details
   ├── tests        # Tests for this extension
   │   ├── conftest.py
   │   └── test_filter.py
   └── templates    # Templates that may be needed by the extension
In the example above, the extension is named *scripting*. It adds an additional command
(``spack filter``) and unit tests to verify its behavior.

The extension can import any core Spack module in its implementation. When loaded by
the ``spack`` command, the extension itself is imported as a Python package in the
``spack.extensions`` namespace. In the example above, since the extension is named
"scripting", the corresponding Python module is ``spack.extensions.scripting``.
The code for this example extension can be obtained by cloning the corresponding git repository:

.. code-block:: console

   $ git -C /tmp clone https://github.com/spack/spack-scripting.git
---------------------------------
Configure Spack to Use Extensions
@@ -57,7 +61,7 @@ paths to ``config.yaml``. In the case of our example this means ensuring that:
config:
  extensions:
  - /tmp/spack-scripting

is part of your configuration file. Once this is set up, any command that the extension provides
will be available from the command line:
@@ -82,68 +86,37 @@ will be available from the command line:
--implicit select specs that are not installed or were installed implicitly
--output OUTPUT where to dump the result

The corresponding unit tests can be run giving the appropriate options to ``spack unit-test``:

.. code-block:: console

   $ spack unit-test --extension=scripting
   ========================================== test session starts ===========================================
   platform linux -- Python 3.11.5, pytest-7.4.3, pluggy-1.3.0
   rootdir: /home/culpo/github/spack-scripting
   configfile: pytest.ini
   testpaths: tests
   plugins: xdist-3.5.0
   collected 5 items

   tests/test_filter.py .....                                                                          [100%]

   ========================================== slowest 30 durations ==========================================
   2.31s setup    tests/test_filter.py::test_filtering_specs[kwargs0-specs0-expected0]
   0.57s call     tests/test_filter.py::test_filtering_specs[kwargs2-specs2-expected2]
   0.56s call     tests/test_filter.py::test_filtering_specs[kwargs4-specs4-expected4]
   0.54s call     tests/test_filter.py::test_filtering_specs[kwargs3-specs3-expected3]
   0.54s call     tests/test_filter.py::test_filtering_specs[kwargs1-specs1-expected1]
   0.48s call     tests/test_filter.py::test_filtering_specs[kwargs0-specs0-expected0]
   0.01s setup    tests/test_filter.py::test_filtering_specs[kwargs4-specs4-expected4]
   0.01s setup    tests/test_filter.py::test_filtering_specs[kwargs2-specs2-expected2]
   0.01s setup    tests/test_filter.py::test_filtering_specs[kwargs1-specs1-expected1]
   0.01s setup    tests/test_filter.py::test_filtering_specs[kwargs3-specs3-expected3]

   (5 durations < 0.005s hidden.  Use -vv to show these durations.)
   =========================================== 5 passed in 5.06s ============================================

---------------------------------------
Registering Extensions via Entry Points
---------------------------------------
.. note::

   Python version >= 3.8 is required to register extensions via entry points.
Spack can be made aware of extensions that are installed as part of a python package. To do so, register a function that returns the extension path, or paths, to the ``"spack.extensions"`` entry point. Consider the Python package ``my_package`` that includes a Spack extension:
.. code-block:: console

   my-package/
   ├── src
   │   ├── my_package
   │   │   └── __init__.py
   │   └── spack-scripting/ # the spack extensions
   └── pyproject.toml
adding the following to ``my_package``'s ``pyproject.toml`` will make the ``spack-scripting`` extension visible to Spack when ``my_package`` is installed:
.. code-block:: toml
[project.entry-points."spack.extensions"]
my_package = "my_package:get_extension_path"
The function ``my_package.get_extension_path`` in ``my_package/__init__.py`` might look like

.. code-block:: python

   import importlib.resources


   def get_extension_path():
       dirname = importlib.resources.files("my_package").joinpath("spack-scripting")
       if dirname.exists():
           return str(dirname)
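The same ``importlib.resources`` pattern can be exercised with any installed package. The generalized helper below is a hypothetical variant (not Spack API) that resolves a directory shipped inside a package and returns its path, or ``None`` when the directory is absent:

```python
import importlib.resources


def get_extension_path(package, subdir):
    # Resolve a directory shipped inside an installed Python package;
    # return its filesystem path, or None if the package lacks it.
    dirname = importlib.resources.files(package).joinpath(subdir)
    if dirname.exists():
        return str(dirname)
    return None
```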
@@ -1,4 +1,4 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -98,42 +98,40 @@ For example, this command:
.. code-block:: console

   $ spack create https://ftp.osuosl.org/pub/blfs/conglomeration/libelf/libelf-0.8.13.tar.gz

creates a simple python file:
.. code-block:: python

   from spack.package import *


   class Libelf(AutotoolsPackage):
       """FIXME: Put a proper description of your package here."""

       # FIXME: Add a proper url for your package's homepage here.
       homepage = "https://www.example.com"
       url = "https://ftp.osuosl.org/pub/blfs/conglomeration/libelf/libelf-0.8.13.tar.gz"

       # FIXME: Add a list of GitHub accounts to
       # notify when the package is updated.
       # maintainers("github_user1", "github_user2")

       version("0.8.13", sha256="591a9b4ec81c1f2042a97aa60564e0cb79d041c52faa7416acb38bc95bd2c76d")

       # FIXME: Add dependencies if required.
       # depends_on("foo")

       def configure_args(self):
           # FIXME: Add arguments other than --prefix
           # FIXME: If not needed delete this function
           args = []
           return args
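In a finished package, ``configure_args`` typically translates variant settings on ``self.spec`` into flags appended after ``--prefix``. A self-contained toy version of that mapping (the variant names here are invented for illustration and take a plain dict rather than a Spack spec):

```python
def configure_args(variants):
    # Toy stand-in for a package's configure_args(): translate variant
    # settings into configure flags. Defaults mirror a common pattern:
    # shared libraries on, native language support on.
    args = []
    args.append("--enable-shared" if variants.get("shared", True) else "--disable-shared")
    if not variants.get("nls", True):
        args.append("--disable-nls")
    return args
```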
It doesn't take much python coding to get from there to a working
package:

.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/libelf/package.py
   :lines: 5-

Spack also provides wrapper functions around common commands like
``configure``, ``make``, and ``cmake`` to make writing packages
@@ -1,77 +0,0 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
==========================
Frequently Asked Questions
==========================
This page contains answers to frequently asked questions about Spack.
If you have questions that are not answered here, feel free to ask on
`Slack <https://slack.spack.io>`_ or `GitHub Discussions
<https://github.com/spack/spack/discussions>`_. If you've learned the
answer to a question that you think should be here, please consider
contributing to this page.
.. _faq-concretizer-precedence:
-----------------------------------------------------
Why does Spack pick particular versions and variants?
-----------------------------------------------------
This question comes up in a variety of forms:
1. Why does Spack seem to ignore my package preferences from ``packages.yaml`` config?
2. Why does Spack toggle a variant instead of using the default from the ``package.py`` file?
The short answer is that Spack always picks an optimal configuration
based on a complex set of criteria\ [#f1]_. These criteria are more nuanced
than always choosing the latest versions or default variants.
.. note::
As a rule of thumb: requirements + constraints > reuse > preferences > defaults.
The following set of criteria (from lowest to highest precedence) explain
common cases where concretization output may seem surprising at first.
1. :ref:`Package preferences <package-preferences>` configured in ``packages.yaml``
override variant defaults from ``package.py`` files, and influence the optimal
ordering of versions. Preferences are specified as follows:
.. code-block:: yaml

   packages:
     foo:
       version: [1.0, 1.1]
       variants: ~mpi
2. :ref:`Reuse concretization <concretizer-options>` configured in ``concretizer.yaml``
overrides preferences, since it's typically faster to reuse an existing spec than to
build a preferred one from sources. When build caches are enabled, specs may be reused
from a remote location too. Reuse concretization is configured as follows:
.. code-block:: yaml

   concretizer:
     reuse: dependencies  # other options are 'true' and 'false'
3. :ref:`Package requirements <package-requirements>` configured in ``packages.yaml``,
and constraints from the command line as well as ``package.py`` files override all
of the above. Requirements are specified as follows:
.. code-block:: yaml

   packages:
     foo:
       require:
       - "@1.2: +mpi"
Requirements and constraints restrict the set of possible solutions, while reuse
behavior and preferences influence what an optimal solution looks like.
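As a toy model only (Spack's real solver optimizes over many criteria at once), the precedence chain above can be sketched as a small selection function; the names and inputs are invented for illustration:

```python
def concretize_version(available, requirement=None, reuse=(), preferences=()):
    # Toy model of the precedence rules: requirements/constraints prune
    # the candidate set, reusing an installed version beats preferences,
    # and preferences beat the default "newest available" ordering.
    candidates = [v for v in available if requirement is None or requirement(v)]
    for v in candidates:
        if v in reuse:
            return v                # reuse wins over preferences
    for v in preferences:
        if v in candidates:
            return v                # preferences win over the default
    return max(candidates)          # default: newest admissible version
```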
.. rubric:: Footnotes
.. [#f1] The exact list of criteria can be retrieved with the ``spack solve`` command
@@ -1,4 +1,4 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -21,9 +21,8 @@ be present on the machine where Spack is run:
:header-rows: 1

These requirements can be easily installed on most modern Linux systems;
on macOS, the Command Line Tools package is required, and a full XCode suite
may be necessary for some packages such as Qt and apple-gl. Spack is designed
to run on HPC platforms like Cray. Not all packages should be expected
to work on all platforms.

A build matrix showing which packages are working on which systems is shown below.
@@ -41,9 +40,12 @@ A build matrix showing which packages are working on which systems is shown belo
.. code-block:: console

   dnf install epel-release
   dnf group install "Development Tools"
   dnf install curl findutils gcc-gfortran gnupg2 hostname iproute redhat-lsb-core python3 python3-pip python3-setuptools unzip python3-boto3

.. tab-item:: macOS Brew
@@ -122,41 +124,88 @@ Spack provides two ways of bootstrapping ``clingo``: from pre-built binaries
(default), or from sources. The fastest way to get started is to bootstrap from
pre-built binaries.

The first time you concretize a spec, Spack will bootstrap automatically:

.. code-block:: console

   $ spack spec zlib
==> Bootstrapping clingo from pre-built binaries
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.4/build_cache/linux-centos7-x86_64-gcc-10.2.1-clingo-bootstrap-spack-ba5ijauisd3uuixtmactc36vps7yfsrl.spec.json
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.4/build_cache/linux-centos7-x86_64/gcc-10.2.1/clingo-bootstrap-spack/linux-centos7-x86_64-gcc-10.2.1-clingo-bootstrap-spack-ba5ijauisd3uuixtmactc36vps7yfsrl.spack
==> Installing "clingo-bootstrap@spack%gcc@10.2.1~docs~ipo+python+static_libstdcpp build_type=Release arch=linux-centos7-x86_64" from a buildcache
==> Bootstrapping patchelf from pre-built binaries
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.4/build_cache/linux-centos7-x86_64-gcc-10.2.1-patchelf-0.16.1-p72zyan5wrzuabtmzq7isa5mzyh6ahdp.spec.json
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.4/build_cache/linux-centos7-x86_64/gcc-10.2.1/patchelf-0.16.1/linux-centos7-x86_64-gcc-10.2.1-patchelf-0.16.1-p72zyan5wrzuabtmzq7isa5mzyh6ahdp.spack
==> Installing "patchelf@0.16.1%gcc@10.2.1 ldflags="-static-libstdc++ -static-libgcc" build_system=autotools arch=linux-centos7-x86_64" from a buildcache
Input spec
--------------------------------
zlib

Concretized
--------------------------------
zlib@1.2.13%gcc@9.4.0+optimize+pic+shared build_system=makefile arch=linux-ubuntu20.04-icelake

If for security concerns you cannot bootstrap ``clingo`` from pre-built
binaries, you have to disable fetching the binaries we generated with Github Actions.

.. code-block:: console

   $ spack bootstrap disable github-actions-v0.4
   ==> "github-actions-v0.4" is now disabled and will not be used for bootstrapping
   $ spack bootstrap disable github-actions-v0.3
   ==> "github-actions-v0.3" is now disabled and will not be used for bootstrapping
You can verify that the new settings are effective with:

.. command-output:: spack bootstrap list
.. note::
@@ -186,7 +235,9 @@ under the ``${HOME}/.spack`` directory. The software installed there can be quer
.. code-block:: console

   $ spack -b find
   ==> Showing internal bootstrap store at "/home/spack/.spack/bootstrap/store"
   ==> 3 installed packages
   -- linux-ubuntu18.04-x86_64 / gcc@10.1.0 ------------------------
   clingo-bootstrap@spack  python@3.6.9  re2c@1.2.1
@@ -195,7 +246,7 @@ In case it's needed the bootstrap store can also be cleaned with:
.. code-block:: console

   $ spack clean -b
   ==> Removing bootstrapped software and configuration in "/home/spack/.spack/bootstrap"
^^^^^^^^^^^^^^^^^^
Check Installation
@@ -250,10 +301,9 @@ Compiler configuration
Spack has the ability to build packages with multiple compilers and
compiler versions. Compilers can be made available to Spack by
specifying them manually in ``compilers.yaml`` or ``packages.yaml``,
or automatically by running ``spack compiler find``, but for
convenience Spack will automatically detect compilers the first time
it needs them.

.. _cmd-spack-compilers:
@@ -318,7 +368,7 @@ installed, but you know that new compilers have been added to your
.. code-block:: console

   $ module load gcc/4.9.0
   $ spack compiler find
   ==> Added 1 new compiler to ~/.spack/linux/compilers.yaml
       gcc@4.9.0
@@ -366,8 +416,7 @@ Manual compiler configuration
If auto-detection fails, you can manually configure a compiler by
editing your ``~/.spack/<platform>/compilers.yaml`` file. You can do this by running
``spack config edit compilers``, which will open the file in
:ref:`your favorite editor <controlling-the-editor>`.

Each compiler configuration in the file looks like this:
@@ -458,54 +507,6 @@ specification. The operations available to modify the environment are ``set``, `
prepend_path: # Similar for append|remove_path
  LD_LIBRARY_PATH: /ld/paths/added/by/setvars/sh
.. note::
Spack is in the process of moving compilers from a separate
attribute to be handled like all other packages. As part of this
process, the ``compilers.yaml`` section will eventually be replaced
by configuration in the ``packages.yaml`` section. This new
configuration is now available, although it is not yet the default
behavior.
Compilers can also be configured as external packages in the
``packages.yaml`` config file. Any external package for a compiler
(e.g. ``gcc`` or ``llvm``) will be treated as a configured compiler
assuming the paths to the compiler executables are determinable from
the prefix.
If the paths to the compiler executable are not determinable from the
prefix, you can add them to the ``extra_attributes`` field. Similarly,
all other fields from the compilers config can be added to the
``extra_attributes`` field for an external representing a compiler.
Note that the format for the ``paths`` field in the
``extra_attributes`` section is different than in the ``compilers``
config. For compilers configured as external packages, the section is
named ``compilers`` and the dictionary maps language names (``c``,
``cxx``, ``fortran``) to paths, rather than using the names ``cc``,
``fc``, and ``f77``.
.. code-block:: yaml

   packages:
     gcc:
       external:
       - spec: gcc@12.2.0 arch=linux-rhel8-skylake
         prefix: /usr
         extra_attributes:
           environment:
             set:
               GCC_ROOT: /usr
     llvm:
       external:
       - spec: llvm+clang@15.0.0 arch=linux-rhel8-skylake
         prefix: /usr
         extra_attributes:
           compilers:
             c: /usr/bin/clang-with-suffix
             cxx: /usr/bin/clang++-with-extra-info
             fortran: /usr/bin/gfortran
           extra_rpaths:
           - /usr/lib/llvm/
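The renaming between the two schemas can be confusing. The helper below is illustrative only (not part of Spack) and shows how the language-keyed ``compilers`` mapping used for external packages lines up with the legacy ``paths`` names from ``compilers.yaml``:

```python
def external_compilers_to_paths(compilers):
    # Map the language-keyed names used under extra_attributes
    # (c/cxx/fortran) onto the legacy compilers.yaml names (cc/cxx/fc).
    lang_to_legacy = {"c": "cc", "cxx": "cxx", "fortran": "fc"}
    paths = {"cc": None, "cxx": None, "fc": None, "f77": None}
    for lang, path in compilers.items():
        paths[lang_to_legacy[lang]] = path
    return paths
```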
^^^^^^^^^^^^^^^^^^^^^^^
Build Your Own Compiler
@@ -672,7 +673,7 @@ Fortran.
compilers:
- compiler:
    # ...
    paths:
      cc: /usr/bin/clang
      cxx: /usr/bin/clang++
@@ -1553,7 +1554,7 @@ Spack On Windows
Windows support for Spack is currently under development. While this work is still in an early stage,
it is currently possible to set up Spack and perform a few operations on Windows. This section will guide
you through the steps needed to install Spack and start running it on a fresh Windows machine.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Step 1: Install prerequisites
@@ -1563,7 +1564,7 @@ To use Spack on Windows, you will need the following packages:
Required:

* Microsoft Visual Studio
* Python
* Git

Optional:
@@ -1578,8 +1579,6 @@ Microsoft Visual Studio
""""""""""""""""""""""" """""""""""""""""""""""
Microsoft Visual Studio provides the only Windows C/C++ compiler that is currently supported by Spack. Microsoft Visual Studio provides the only Windows C/C++ compiler that is currently supported by Spack.
Spack additionally requires that the Windows SDK (including WGL) to be installed as part of your
visual studio installation as it is required to build many packages from source.
We require several specific components to be included in the Visual Studio installation. We require several specific components to be included in the Visual Studio installation.
One is the C/C++ toolset, which can be selected as "Desktop development with C++" or "C++ build tools," One is the C/C++ toolset, which can be selected as "Desktop development with C++" or "C++ build tools,"
@@ -1587,7 +1586,6 @@ depending on installation type (Professional, Build Tools, etc.) The other requ
"C++ CMake tools for Windows," which can be selected from among the optional packages. "C++ CMake tools for Windows," which can be selected from among the optional packages.
This provides CMake and Ninja for use during Spack configuration. This provides CMake and Ninja for use during Spack configuration.
If you already have Visual Studio installed, you can make sure these components are installed by If you already have Visual Studio installed, you can make sure these components are installed by
rerunning the installer. Next to your installation, select "Modify" and look at the rerunning the installer. Next to your installation, select "Modify" and look at the
"Installation details" pane on the right. "Installation details" pane on the right.
@@ -1597,8 +1595,8 @@ Intel Fortran
""""""""""""" """""""""""""
For Fortran-based packages on Windows, we strongly recommend Intel's oneAPI Fortran compilers. For Fortran-based packages on Windows, we strongly recommend Intel's oneAPI Fortran compilers.
The suite is free to download from Intel's website, located at The suite is free to download from Intel's website, located at
https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/fortran-compiler.html. https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/fortran-compiler.html#gs.70t5tw.
The executable of choice for Spack will be Intel's Beta Compiler, ifx, which supports the classic The executable of choice for Spack will be Intel's Beta Compiler, ifx, which supports the classic
compiler's (ifort's) frontend and runtime libraries by using LLVM. compiler's (ifort's) frontend and runtime libraries by using LLVM.
@@ -1647,8 +1645,8 @@ in a Windows CMD prompt.
.. note::
   If you chose to install Spack into a directory on Windows that is set up to require Administrative
   Privileges, Spack will require elevated privileges to run.
   Administrative Privileges can be denoted either by default, such as
   ``C:\Program Files``, or by administrator-applied restrictions
   on a directory that Spack installs files to, such as ``C:\Users``
@@ -1744,21 +1742,33 @@ Spack console via:
spack install cpuinfo

If, in the previous step, you did not have CMake or Ninja installed, running the command above should bootstrap both packages.
""""""""""""""""""""""""""" """""""""""""""""""""""""""
Windows Compatible Packages Windows Compatible Packages
""""""""""""""""""""""""""" """""""""""""""""""""""""""
Not all Spack packages currently have Windows support. Some are inherently incompatible with the
platform, and others simply have yet to be ported. To view the current set of packages with Windows
support, the list command should be used via ``spack list -t windows``. If there's a package you'd like
to install on Windows but it is not in that list, feel free to reach out to request the port or contribute
the port yourself.
.. note::
   This is by no means a comprehensive list; some packages may have ports that were not tagged,
   while others may just work out of the box on Windows and have not been tagged as such.
^^^^^^^^^^^^^^
For developers
@@ -1770,4 +1780,3 @@ Instructions for creating the installer are at
https://github.com/spack/spack/blob/develop/lib/spack/spack/cmd/installer/README.md

Alternatively a pre-built copy of the Windows installer is available as an artifact of Spack's Windows CI,
available at each run of the CI on develop or any PR.
@@ -1,138 +0,0 @@
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
==========================
Using External GPU Support
==========================
Many packages come with a ``+cuda`` or ``+rocm`` variant. With no added
configuration Spack will download and install the needed components.
It may be preferable to use existing system support: the following sections
help with using a system installation of GPU libraries.
-----------------------------------
Using an External ROCm Installation
-----------------------------------
Spack breaks down ROCm into many separate component packages. The following
is an example ``packages.yaml`` that organizes a consistent set of ROCm
components for use by dependent packages:
.. code-block:: yaml
   packages:
     all:
       compiler: [rocmcc@=5.3.0]
       variants: amdgpu_target=gfx90a
     hip:
       buildable: false
       externals:
       - spec: hip@5.3.0
         prefix: /opt/rocm-5.3.0/hip
     hsa-rocr-dev:
       buildable: false
       externals:
       - spec: hsa-rocr-dev@5.3.0
         prefix: /opt/rocm-5.3.0/
     llvm-amdgpu:
       buildable: false
       externals:
       - spec: llvm-amdgpu@5.3.0
         prefix: /opt/rocm-5.3.0/llvm/
     comgr:
       buildable: false
       externals:
       - spec: comgr@5.3.0
         prefix: /opt/rocm-5.3.0/
     hipsparse:
       buildable: false
       externals:
       - spec: hipsparse@5.3.0
         prefix: /opt/rocm-5.3.0/
     hipblas:
       buildable: false
       externals:
       - spec: hipblas@5.3.0
         prefix: /opt/rocm-5.3.0/
     rocblas:
       buildable: false
       externals:
       - spec: rocblas@5.3.0
         prefix: /opt/rocm-5.3.0/
     rocprim:
       buildable: false
       externals:
       - spec: rocprim@5.3.0
         prefix: /opt/rocm-5.3.0/rocprim/
This is used in combination with the following compiler definition:
.. code-block:: yaml
   compilers:
   - compiler:
       spec: rocmcc@=5.3.0
       paths:
         cc: /opt/rocm-5.3.0/bin/amdclang
         cxx: /opt/rocm-5.3.0/bin/amdclang++
         f77: null
         fc: /opt/rocm-5.3.0/bin/amdflang
       operating_system: rhel8
       target: x86_64
This includes the following considerations:
- Each of the listed externals specifies ``buildable: false`` to force Spack
to use only the externals we defined.
- ``spack external find`` can automatically locate some of the ``hip``/``rocm``
packages, but not all of them, and furthermore not in a manner that
guarantees a complementary set if multiple ROCm installations are available.
- The ``prefix`` is the same for several components, but note that others
require listing one of the subdirectories as a prefix.
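
Additional ROCm components follow the same pattern. As an illustrative sketch (this component and
prefix are not part of the original configuration, and the prefix may differ on your system), a
dependent package needing ``rocrand`` could be covered by adding:

.. code-block:: yaml

   packages:
     rocrand:
       buildable: false
       externals:
       - spec: rocrand@5.3.0
         prefix: /opt/rocm-5.3.0/

Keeping every component pinned to the same ROCm release (here 5.3.0) avoids mixing libraries from
different installations.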
-----------------------------------
Using an External CUDA Installation
-----------------------------------
CUDA is split into fewer components and is simpler to specify:
.. code-block:: yaml
   packages:
     all:
       variants:
       - cuda_arch=70
     cuda:
       buildable: false
       externals:
       - spec: cuda@11.0.2
         prefix: /opt/cuda/cuda-11.0.2/
where ``/opt/cuda/cuda-11.0.2/lib/`` contains ``libcudart.so``.
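
If more than one CUDA toolkit is installed, each can be listed as a separate external under the same
package entry, and Spack will pick the one that satisfies the concretized spec. A sketch, assuming a
second hypothetical installation at ``/opt/cuda/cuda-11.8.0/``:

.. code-block:: yaml

   packages:
     cuda:
       buildable: false
       externals:
       - spec: cuda@11.0.2
         prefix: /opt/cuda/cuda-11.0.2/
       - spec: cuda@11.8.0
         prefix: /opt/cuda/cuda-11.8.0/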
-----------------------------------
Using an External OpenGL API
-----------------------------------
Depending on whether we have a graphics card or not, we may choose to use OSMesa or GLX to implement the OpenGL API.
If a graphics card is unavailable, OSMesa is recommended and can typically be built with Spack.
However, if we prefer to utilize the system GLX tailored to our graphics card, we need to declare it as an external. Here's how to do it:
.. code-block:: yaml
   packages:
     libglx:
       require: [opengl]
     opengl:
       buildable: false
       externals:
       - prefix: /usr/
         spec: opengl@4.6
Note that the prefix has to be the root of both the libraries and the headers; here that is ``/usr``, not the path to the ``lib`` directory.
To determine which ``opengl`` version is available, run ``cd /usr/include/GL && grep -Ri gl_version``.

Binary files not shown (five deleted images: 658 KiB, 449 KiB, 128 KiB, 126 KiB, 35 KiB).
File diff suppressed because it is too large.

Some files were not shown because too many files have changed in this diff.