Compare commits


8 Commits

Author           SHA1        Message                                 Date
Gregory Becker   26e38d9b26  flake                                   2021-02-19 01:21:54 -08:00
Gregory Becker   f279ce2e7f  update completion                       2021-02-19 01:14:58 -08:00
Gregory Becker   3ac497dfec  add capability to pull subcomponents    2021-02-19 01:10:40 -08:00
Gregory Becker   9810572411  run git from proper directory           2021-02-19 00:48:35 -08:00
Gregory Becker   d4895435f1  update bash completions                 2021-02-19 00:37:22 -08:00
Gregory Becker   ae7ca04997  flake                                   2021-02-19 00:36:28 -08:00
Gregory Becker   29562596c3  fixup imports                           2021-02-19 00:36:28 -08:00
Gregory Becker   892dd4d97f  add spack checkout command              2021-02-19 00:36:28 -08:00
4280 changed files with 28269 additions and 108962 deletions


@@ -14,21 +14,3 @@ ignore:
- share/spack/qa/.*
comment: off
# Inline codecov annotations make the code hard to read, and they add
# annotations in files that seemingly have nothing to do with the PR.
github_checks:
annotations: false
# Attempt to fix "Missing base commit" messages in the codecov UI.
# Because we do not run full tests on package PRs, package PRs' merge
# commits on `develop` don't have coverage info. It appears that
# codecov will give you an error if the pseudo-base's coverage data
# doesn't all apply properly to the real PR base.
#
# See here for docs:
# https://docs.codecov.com/docs/comparing-commits#pseudo-comparison
# See here for another potential solution:
# https://community.codecov.com/t/2480/15
codecov:
allow_coverage_offsets: true

.coveragerc (new file)

@@ -0,0 +1,38 @@
# -*- conf -*-
# .coveragerc to control coverage.py
[run]
parallel = True
concurrency = multiprocessing
branch = True
source =
    bin
    lib
omit =
    lib/spack/spack/test/*
    lib/spack/docs/*
    lib/spack/external/*
    share/spack/qa/*

[report]
# Regexes for lines to exclude from consideration
exclude_lines =
    # Have to re-enable the standard pragma
    pragma: no cover

    # Don't complain about missing debug-only code:
    def __repr__
    if self\.debug

    # Don't complain if tests don't hit defensive assertion code:
    raise AssertionError
    raise NotImplementedError

    # Don't complain if non-runnable code isn't run:
    if 0:
    if False:
    if __name__ == .__main__.:

ignore_errors = True

[html]
directory = htmlcov
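
Because `parallel = True` and `concurrency = multiprocessing` make coverage.py write one data file per process, a report is only meaningful after a combine step; the workflows further down run exactly that (`coverage combine` then `coverage xml`). A minimal sketch of the flow, with `pytest` standing in as an assumed test runner (Spack's own runner is `share/spack/qa/run-unit-tests`):

```python
import subprocess

# Each test process writes its own .coverage.<hostname>.<pid> file
# (an effect of `parallel = True` in the [run] section above).
subprocess.check_call(["coverage", "run", "-m", "pytest"])

# Merge the per-process data files into a single .coverage, then
# emit the coverage.xml that gets uploaded to codecov.
subprocess.check_call(["coverage", "combine"])
subprocess.check_call(["coverage", "xml"])
```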


@@ -8,4 +8,4 @@ share/spack/dotkit/*
share/spack/lmod/*
share/spack/modules/*
lib/spack/spack/test/*
var/spack/cache/*

.github/ISSUE_TEMPLATE/bug_report.md (new file)

@@ -0,0 +1,42 @@
---
name: "\U0001F41E Bug report"
about: Report a bug in the core of Spack (command not working as expected, etc.)
labels: "bug,triage"
---
<!-- Explain, in a clear and concise way, the command you ran and the result you were trying to achieve.
Example: "I ran `spack find` to list all the installed packages and ..." -->
### Steps to reproduce the issue
```console
$ spack <command1> <spec>
$ spack <command2> <spec>
...
```
### Error Message
<!-- If Spack reported an error, provide the error message. If it did not report an error but the output appears incorrect, provide the incorrect output. If there was no error message and no output but the result is incorrect, describe how it does not match what you expect. -->
```console
$ spack --debug --stacktrace <command>
```
### Information on your system
<!-- Please include the output of `spack debug report` -->
<!-- If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well. -->
### Additional information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [ ] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [ ] I have searched the issues of this repo and believe this is not a duplicate
- [ ] I have run the failing commands in debug mode and reported the output
<!-- We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack! -->


@@ -1,58 +0,0 @@
name: "\U0001F41E Bug report"
description: Report a bug in the core of Spack (command not working as expected, etc.)
labels: [bug, triage]
body:
- type: textarea
id: reproduce
attributes:
label: Steps to reproduce
description: |
Explain, in a clear and concise way, the command you ran and the result you were trying to achieve.
Example: "I ran `spack find` to list all the installed packages and ..."
placeholder: |
```console
$ spack <command1> <spec>
$ spack <command2> <spec>
...
```
validations:
required: true
- type: textarea
id: error
attributes:
label: Error message
description: |
If Spack reported an error, provide the error message. If it did not report an error but the output appears incorrect, provide the incorrect output. If there was no error message and no output but the result is incorrect, describe how it does not match what you expect.
placeholder: |
```console
$ spack --debug --stacktrace <command>
```
- type: textarea
id: information
attributes:
label: Information on your system
description: Please include the output of `spack debug report`
validations:
required: true
- type: markdown
attributes:
value: |
If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well.
- type: checkboxes
id: checks
attributes:
label: General information
options:
- label: I have run `spack debug report` and reported the version of Spack/Python/Platform
required: true
- label: I have searched the issues of this repo and believe this is not a duplicate
required: true
- label: I have run the failing commands in debug mode and reported the output
required: true
- type: markdown
attributes:
value: |
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on [our Slack](https://slack.spack.io/) first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack!

.github/ISSUE_TEMPLATE/build_error.md (new file)

@@ -0,0 +1,43 @@
---
name: "\U0001F4A5 Build error"
about: Some package in Spack didn't build correctly
title: "Installation issue: "
labels: "build-error"
---
<!-- Thanks for taking the time to report this build failure. To proceed with the report please:
1. Title the issue "Installation issue: <name-of-the-package>".
2. Provide the information required below.
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively! -->
### Steps to reproduce the issue
<!-- Fill in the exact spec you are trying to build and the relevant part of the error message -->
```console
$ spack install <spec>
...
```
### Information on your system
<!-- Please include the output of `spack debug report` -->
<!-- If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well. -->
### Additional information
<!-- Please upload the following files. They should be present in the stage directory of the failing build. Also upload any config.log or similar file if one exists. -->
* [spack-build-out.txt]()
* [spack-build-env.txt]()
<!-- Some packages have maintainers who have volunteered to debug build failures. Run `spack maintainers <name-of-the-package>` and @mention them here if they exist. -->
### General information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [ ] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [ ] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers
- [ ] I have uploaded the build log and environment files
- [ ] I have searched the issues of this repo and believe this is not a duplicate


@@ -1,64 +0,0 @@
name: "\U0001F4A5 Build error"
description: Some package in Spack didn't build correctly
title: "Installation issue: "
labels: [build-error]
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to report this build failure. To proceed with the report please:
1. Title the issue `Installation issue: <name-of-the-package>`.
2. Provide the information required below.
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
- type: textarea
id: reproduce
attributes:
label: Steps to reproduce the issue
description: |
Fill in the exact spec you are trying to build and the relevant part of the error message
placeholder: |
```console
$ spack install <spec>
...
```
validations:
required: true
- type: textarea
id: information
attributes:
label: Information on your system
description: Please include the output of `spack debug report`
validations:
required: true
- type: markdown
attributes:
value: |
If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well.
- type: textarea
id: additional_information
attributes:
label: Additional information
description: |
Please upload the following files:
* **`spack-build-out.txt`**
* **`spack-build-env.txt`**
They should be present in the stage directory of the failing build. Also upload any `config.log` or similar file if one exists.
- type: markdown
attributes:
value: |
Some packages have maintainers who have volunteered to debug build failures. Run `spack maintainers <name-of-the-package>` and **@mention** them here if they exist.
- type: checkboxes
id: checks
attributes:
label: General information
options:
- label: I have run `spack debug report` and reported the version of Spack/Python/Platform
required: true
- label: I have run `spack maintainers <name-of-the-package>` and **@mentioned** any maintainers
required: true
- label: I have uploaded the build log and environment files
required: true
- label: I have searched the issues of this repo and believe this is not a duplicate
required: true


@@ -1 +0,0 @@
blank_issues_enabled: true


@@ -0,0 +1,33 @@
---
name: "\U0001F38A Feature request"
about: Suggest adding a feature that is not yet in Spack
labels: feature
---
<!--*Please add a concise summary of your suggestion here.*-->
### Rationale
<!--*Is your feature request related to a problem? Please describe it!*-->
### Description
<!--*Describe the solution you'd like and the alternatives you have considered.*-->
### Additional information
<!--*Add any other context about the feature request here.*-->
### General information
- [ ] I have run `spack --version` and reported the version of Spack
- [ ] I have searched the issues of this repo and believe this is not a duplicate
<!--If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack!
-->


@@ -1,41 +0,0 @@
name: "\U0001F38A Feature request"
description: Suggest adding a feature that is not yet in Spack
labels: [feature]
body:
- type: textarea
id: summary
attributes:
label: Summary
description: Please add a concise summary of your suggestion here.
validations:
required: true
- type: textarea
id: rationale
attributes:
label: Rationale
description: Is your feature request related to a problem? Please describe it!
- type: textarea
id: description
attributes:
label: Description
description: Describe the solution you'd like and the alternatives you have considered.
- type: textarea
id: additional_information
attributes:
label: Additional information
description: Add any other context about the feature request here.
- type: checkboxes
id: checks
attributes:
label: General information
options:
- label: I have run `spack --version` and reported the version of Spack
required: true
- label: I have searched the issues of this repo and believe this is not a duplicate
required: true
- type: markdown
attributes:
value: |
If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on [our Slack](https://slack.spack.io/) first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack!


@@ -0,0 +1,6 @@
FROM python:3.7-alpine
RUN pip install pygithub
ADD entrypoint.py /entrypoint.py
ENTRYPOINT ["/entrypoint.py"]


@@ -0,0 +1,85 @@
#!/usr/bin/env python
#
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

"""Maintainer review action.

This action checks which packages have changed in a PR, and adds their
maintainers to the pull request for review.
"""
import json
import os
import re
import subprocess

from github import Github


def spack(*args):
    """Run the spack executable with arguments, and return the output split.

    This does just enough to run `spack pkg` and `spack maintainers`, the
    two commands used by this action.
    """
    github_workspace = os.environ['GITHUB_WORKSPACE']
    spack = os.path.join(github_workspace, 'bin', 'spack')
    output = subprocess.check_output([spack] + list(args))
    # split on runs of whitespace and drop empty strings
    split = re.split(r'\s+', output.decode('utf-8').strip())
    return [s for s in split if s]


def main():
    # get these first so that we'll fail early
    token = os.environ['GITHUB_TOKEN']
    event_path = os.environ['GITHUB_EVENT_PATH']

    with open(event_path) as file:
        data = json.load(file)

    # make sure it's a pull_request event
    assert 'pull_request' in data

    # only request reviews on open, edit, or reopen
    action = data['action']
    if action not in ('opened', 'edited', 'reopened'):
        return

    # get data from the event payload
    pr_data = data['pull_request']
    base_branch_name = pr_data['base']['ref']
    full_repo_name = pr_data['base']['repo']['full_name']
    pr_number = pr_data['number']
    # requested_reviewers in the payload are user objects; keep the logins
    requested_reviewers = set(
        user['login'] for user in pr_data['requested_reviewers'])
    author = pr_data['user']['login']

    # get a list of packages that this PR modified
    changed_pkgs = spack(
        'pkg', 'changed', '--type', 'ac', '%s...' % base_branch_name)

    # get maintainers for all modified packages
    maintainers = set()
    for pkg in changed_pkgs:
        pkg_maintainers = set(spack('maintainers', pkg))
        maintainers |= pkg_maintainers

    # remove any maintainers who are already on the PR, and the author,
    # as you can't review your own PR
    maintainers -= requested_reviewers
    maintainers -= set([author])

    if not maintainers:
        return

    # request reviews from each maintainer
    gh = Github(token)
    repo = gh.get_repo(full_repo_name)
    pr = repo.get_pull(pr_number)
    pr.create_review_request(list(maintainers))


if __name__ == "__main__":
    main()
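
To see what the script consumes, here is a hedged local-run sketch that fakes the webhook payload read from `GITHUB_EVENT_PATH`; every value in it is hypothetical, and the final call is left commented out because it talks to the GitHub API:

```python
import json
import os
import tempfile

# Hypothetical pull_request event payload with just the fields main() reads.
event = {
    "action": "opened",
    "pull_request": {
        "base": {"ref": "develop", "repo": {"full_name": "spack/spack"}},
        "number": 12345,                        # hypothetical PR number
        "requested_reviewers": [],
        "user": {"login": "some-contributor"},  # hypothetical author
    },
}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(event, f)

os.environ["GITHUB_EVENT_PATH"] = f.name
os.environ["GITHUB_TOKEN"] = "<token>"             # hypothetical token
os.environ["GITHUB_WORKSPACE"] = "/path/to/spack"  # hypothetical checkout

# import entrypoint; entrypoint.main()  # would request the reviews
```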


@@ -1,291 +0,0 @@
name: Bootstrapping
on:
pull_request:
branches:
- develop
- releases/**
paths-ignore:
# Don't run if we only modified packages in the
# built-in repository or documentation
- 'var/spack/repos/builtin/**'
- '!var/spack/repos/builtin/packages/clingo-bootstrap/**'
- '!var/spack/repos/builtin/packages/python/**'
- '!var/spack/repos/builtin/packages/re2c/**'
- 'lib/spack/docs/**'
schedule:
# nightly at 2:16 AM
- cron: '16 2 * * *'
jobs:
fedora-clingo-sources:
runs-on: ubuntu-latest
container: "fedora:latest"
steps:
- name: Install dependencies
run: |
dnf install -y \
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch unzip which xz python3 python3-devel tree \
cmake bison bison-devel libstdc++-static
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b
- name: Setup non-root user
run: |
# See [1] below
git config --global --add safe.directory /__w/spack/spack
useradd spack-test && mkdir -p ~spack-test
chown -R spack-test . ~spack-test
- name: Setup repo
shell: runuser -u spack-test -- bash {0}
run: |
git --version
git fetch --unshallow
. .github/workflows/setup_git.sh
- name: Bootstrap clingo
shell: runuser -u spack-test -- bash {0}
run: |
source share/spack/setup-env.sh
spack bootstrap untrust github-actions
spack external find cmake bison
spack -d solve zlib
tree ~/.spack/bootstrap/store/
ubuntu-clingo-sources:
runs-on: ubuntu-latest
container: "ubuntu:latest"
steps:
- name: Install dependencies
env:
DEBIAN_FRONTEND: noninteractive
run: |
apt-get update -y && apt-get upgrade -y
apt-get install -y \
bzip2 curl file g++ gcc gfortran git gnupg2 gzip \
make patch unzip xz-utils python3 python3-dev tree \
cmake bison
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b
- name: Setup non-root user
run: |
# See [1] below
git config --global --add safe.directory /__w/spack/spack
useradd spack-test && mkdir -p ~spack-test
chown -R spack-test . ~spack-test
- name: Setup repo
shell: runuser -u spack-test -- bash {0}
run: |
git --version
git fetch --unshallow
. .github/workflows/setup_git.sh
- name: Bootstrap clingo
shell: runuser -u spack-test -- bash {0}
run: |
source share/spack/setup-env.sh
spack bootstrap untrust github-actions
spack external find cmake bison
spack -d solve zlib
tree ~/.spack/bootstrap/store/
opensuse-clingo-sources:
runs-on: ubuntu-latest
container: "opensuse/leap:latest"
steps:
- name: Install dependencies
run: |
zypper update -y
zypper install -y \
bzip2 curl file gcc-c++ gcc gcc-fortran tar git gpg2 gzip \
make patch unzip which xz python3 python3-devel tree \
cmake bison
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b
- name: Setup repo
run: |
# See [1] below
git config --global --add safe.directory /__w/spack/spack
git --version
git fetch --unshallow
. .github/workflows/setup_git.sh
- name: Bootstrap clingo
run: |
source share/spack/setup-env.sh
spack bootstrap untrust github-actions
spack external find cmake bison
spack -d solve zlib
tree ~/.spack/bootstrap/store/
macos-clingo-sources:
runs-on: macos-latest
steps:
- name: Install dependencies
run: |
brew install cmake bison@2.7 tree
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b
- name: Bootstrap clingo
run: |
source share/spack/setup-env.sh
export PATH=/usr/local/opt/bison@2.7/bin:$PATH
spack bootstrap untrust github-actions
spack external find --not-buildable cmake bison
spack -d solve zlib
tree ~/.spack/bootstrap/store/
macos-clingo-binaries:
runs-on: macos-latest
strategy:
matrix:
python-version: ['3.5', '3.6', '3.7', '3.8', '3.9']
steps:
- name: Install dependencies
run: |
brew install tree
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b
- uses: actions/setup-python@98f2ad02fd48d057ee3b4d4f66525b231c3e52b6
with:
python-version: ${{ matrix.python-version }}
- name: Bootstrap clingo
run: |
source share/spack/setup-env.sh
spack bootstrap untrust spack-install
spack -d solve zlib
tree ~/.spack/bootstrap/store/
ubuntu-clingo-binaries:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ['2.7', '3.5', '3.6', '3.7', '3.8', '3.9']
steps:
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b
- uses: actions/setup-python@98f2ad02fd48d057ee3b4d4f66525b231c3e52b6
with:
python-version: ${{ matrix.python-version }}
- name: Setup repo
run: |
git --version
git fetch --unshallow
. .github/workflows/setup_git.sh
- name: Bootstrap clingo
run: |
source share/spack/setup-env.sh
spack bootstrap untrust spack-install
spack -d solve zlib
tree ~/.spack/bootstrap/store/
ubuntu-gnupg-binaries:
runs-on: ubuntu-latest
container: "ubuntu:latest"
steps:
- name: Install dependencies
env:
DEBIAN_FRONTEND: noninteractive
run: |
apt-get update -y && apt-get upgrade -y
apt-get install -y \
bzip2 curl file g++ gcc patchelf gfortran git gzip \
make patch unzip xz-utils python3 python3-dev tree
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b
- name: Setup non-root user
run: |
# See [1] below
git config --global --add safe.directory /__w/spack/spack
useradd spack-test && mkdir -p ~spack-test
chown -R spack-test . ~spack-test
- name: Setup repo
shell: runuser -u spack-test -- bash {0}
run: |
git --version
git fetch --unshallow
. .github/workflows/setup_git.sh
- name: Bootstrap GnuPG
shell: runuser -u spack-test -- bash {0}
run: |
source share/spack/setup-env.sh
spack bootstrap untrust spack-install
spack -d gpg list
tree ~/.spack/bootstrap/store/
ubuntu-gnupg-sources:
runs-on: ubuntu-latest
container: "ubuntu:latest"
steps:
- name: Install dependencies
env:
DEBIAN_FRONTEND: noninteractive
run: |
apt-get update -y && apt-get upgrade -y
apt-get install -y \
bzip2 curl file g++ gcc patchelf gfortran git gzip \
make patch unzip xz-utils python3 python3-dev tree \
gawk
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b
- name: Setup non-root user
run: |
# See [1] below
git config --global --add safe.directory /__w/spack/spack
useradd spack-test && mkdir -p ~spack-test
chown -R spack-test . ~spack-test
- name: Setup repo
shell: runuser -u spack-test -- bash {0}
run: |
git --version
git fetch --unshallow
. .github/workflows/setup_git.sh
- name: Bootstrap GnuPG
shell: runuser -u spack-test -- bash {0}
run: |
source share/spack/setup-env.sh
spack solve zlib
spack bootstrap untrust github-actions
spack -d gpg list
tree ~/.spack/bootstrap/store/
macos-gnupg-binaries:
runs-on: macos-latest
steps:
- name: Install dependencies
run: |
brew install tree
# Remove GnuPG since we want to bootstrap it
sudo rm -rf /usr/local/bin/gpg
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b
- name: Bootstrap GnuPG
run: |
source share/spack/setup-env.sh
spack bootstrap untrust spack-install
spack -d gpg list
tree ~/.spack/bootstrap/store/
macos-gnupg-sources:
runs-on: macos-latest
steps:
- name: Install dependencies
run: |
brew install gawk tree
# Remove GnuPG since we want to bootstrap it
sudo rm -rf /usr/local/bin/gpg
- name: Checkout
uses: actions/checkout@2541b1294d2704b0964813337f33b291d3f8596b
- name: Bootstrap GnuPG
run: |
source share/spack/setup-env.sh
spack solve zlib
spack bootstrap untrust github-actions
spack -d gpg list
tree ~/.spack/bootstrap/store/
# [1] Distros that have patched git to resolve CVE-2022-24765 (e.g. Ubuntu patching v2.25.1)
# introduce breaking behavior, so we have to set `safe.directory` in gitconfig ourselves.
# See:
# - https://github.blog/2022-04-12-git-security-vulnerability-announced/
# - https://github.com/actions/checkout/issues/760
# - http://changelogs.ubuntu.com/changelogs/pool/main/g/git/git_2.25.1-1ubuntu3.3/changelog


@@ -1,91 +0,0 @@
name: Containers
on:
# This Workflow can be triggered manually
workflow_dispatch:
# Build new Spack develop containers nightly.
schedule:
- cron: '34 0 * * *'
# Run on pull requests that modify this file
pull_request:
branches:
- develop
paths:
- '.github/workflows/build-containers.yml'
# Let's also build & tag Spack containers on releases.
release:
types: [published]
jobs:
deploy-images:
runs-on: ubuntu-latest
permissions:
packages: write
strategy:
# Even if one container fails to build we still want the others
# to continue their builds.
fail-fast: false
# A matrix of Dockerfile paths, associated tags, and which architectures
# they support.
matrix:
dockerfile: [[amazon-linux, amazonlinux-2.dockerfile, 'linux/amd64,linux/arm64'],
[centos7, centos-7.dockerfile, 'linux/amd64,linux/arm64,linux/ppc64le'],
[leap15, leap-15.dockerfile, 'linux/amd64,linux/arm64,linux/ppc64le'],
[ubuntu-xenial, ubuntu-1604.dockerfile, 'linux/amd64,linux/arm64,linux/ppc64le'],
[ubuntu-bionic, ubuntu-1804.dockerfile, 'linux/amd64,linux/arm64,linux/ppc64le']]
name: Build ${{ matrix.dockerfile[0] }}
steps:
- name: Checkout
uses: actions/checkout@ec3a7ce113134d7a93b817d10a8272cb61118579 # @v2
- name: Set Container Tag Normal (Nightly)
run: |
container="${{ matrix.dockerfile[0] }}:latest"
echo "container=${container}" >> $GITHUB_ENV
echo "versioned=${container}" >> $GITHUB_ENV
# On a new release create a container with the same tag as the release.
- name: Set Container Tag on Release
if: github.event_name == 'release'
run: |
versioned="${{matrix.dockerfile[0]}}:${GITHUB_REF##*/}"
echo "versioned=${versioned}" >> $GITHUB_ENV
- name: Check ${{ matrix.dockerfile[1] }} Exists
run: |
printf "Preparing to build ${{ env.container }} from ${{ matrix.dockerfile[1] }}"
if [ ! -f "share/spack/docker/${{ matrix.dockerfile[1]}}" ]; then
printf "Dockerfile ${{ matrix.dockerfile[0]}} does not exist"
exit 1;
fi
- name: Set up QEMU
uses: docker/setup-qemu-action@27d0a4f181a40b142cce983c5393082c365d1480 # @v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@94ab11c41e45d028884a99163086648e898eed25 # @v1
- name: Log in to GitHub Container Registry
uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9 # @v1
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Log in to DockerHub
uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9 # @v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build & Deploy ${{ matrix.dockerfile[1] }}
uses: docker/build-push-action@a66e35b9cbcf4ad0ea91ffcaf7bbad63ad9e0229 # @v2
with:
file: share/spack/docker/${{matrix.dockerfile[1]}}
platforms: ${{ matrix.dockerfile[2] }}
push: ${{ github.event_name != 'pull_request' }}
tags: |
spack/${{ env.container }}
spack/${{ env.versioned }}
ghcr.io/spack/${{ env.container }}
ghcr.io/spack/${{ env.versioned }}


@@ -0,0 +1,65 @@
name: linux builds

on:
  push:
    branches:
      - develop
      - releases/**
  pull_request:
    branches:
      - develop
      - releases/**
    paths-ignore:
      # Don't run if we only modified packages in the built-in repository
      - 'var/spack/repos/builtin/**'
      - '!var/spack/repos/builtin/packages/lz4/**'
      - '!var/spack/repos/builtin/packages/mpich/**'
      - '!var/spack/repos/builtin/packages/tut/**'
      - '!var/spack/repos/builtin/packages/py-setuptools/**'
      - '!var/spack/repos/builtin/packages/openjpeg/**'
      - '!var/spack/repos/builtin/packages/r-rcpp/**'
      - '!var/spack/repos/builtin/packages/ruby-rake/**'
      # Don't run if we only modified documentation
      - 'lib/spack/docs/**'

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        package:
          - lz4            # MakefilePackage
          - mpich~fortran  # AutotoolsPackage
          - tut            # WafPackage
          - py-setuptools  # PythonPackage
          - openjpeg       # CMakePackage
          - r-rcpp         # RPackage
          - ruby-rake      # RubyPackage
    steps:
      - uses: actions/checkout@v2
      - uses: actions/cache@v2.1.4
        with:
          path: ~/.ccache
          key: ccache-build-${{ matrix.package }}
          restore-keys: |
            ccache-build-${{ matrix.package }}
      - uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Install System Packages
        run: |
          sudo apt-get update
          sudo apt-get -yqq install ccache gfortran perl perl-base r-base r-base-core r-base-dev ruby findutils openssl libssl-dev libpciaccess-dev
          R --version
          perl --version
          ruby --version
      - name: Copy Configuration
        run: |
          ccache -M 300M && ccache -z
          # Set up external deps for build tests, b/c they take too long to compile
          cp share/spack/qa/configuration/*.yaml etc/spack/
      - name: Run the build test
        run: |
          . share/spack/setup-env.sh
          SPEC=${{ matrix.package }} share/spack/qa/run-build-tests
          ccache -s

.github/workflows/linux_unit_tests.yaml (new file)

@@ -0,0 +1,167 @@
name: linux tests

on:
  push:
    branches:
      - develop
      - releases/**
  pull_request:
    branches:
      - develop
      - releases/**

jobs:
  unittests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: [2.7, 3.5, 3.6, 3.7, 3.8, 3.9]
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install System packages
        run: |
          sudo apt-get -y update
          # Needed for unit tests
          sudo apt-get install -y coreutils gfortran graphviz gnupg2 mercurial
          sudo apt-get install -y ninja-build patchelf
          # Needed for kcov
          sudo apt-get -y install cmake binutils-dev libcurl4-openssl-dev
          sudo apt-get -y install zlib1g-dev libdw-dev libiberty-dev
      - name: Install Python packages
        run: |
          pip install --upgrade pip six setuptools codecov coverage
      - name: Setup git configuration
        run: |
          # Need this for the git tests to succeed.
          git --version
          . .github/workflows/setup_git.sh
      - name: Install kcov for bash script coverage
        env:
          KCOV_VERSION: 34
        run: |
          KCOV_ROOT=$(mktemp -d)
          wget --output-document=${KCOV_ROOT}/${KCOV_VERSION}.tar.gz https://github.com/SimonKagstrom/kcov/archive/v${KCOV_VERSION}.tar.gz
          tar -C ${KCOV_ROOT} -xzvf ${KCOV_ROOT}/${KCOV_VERSION}.tar.gz
          mkdir -p ${KCOV_ROOT}/build
          cd ${KCOV_ROOT}/build && cmake -Wno-dev ${KCOV_ROOT}/kcov-${KCOV_VERSION} && cd -
          make -C ${KCOV_ROOT}/build && sudo make -C ${KCOV_ROOT}/build install
      - name: Run unit tests
        env:
          COVERAGE: true
        run: |
          share/spack/qa/run-unit-tests
          coverage combine
          coverage xml
      - uses: codecov/codecov-action@v1
        with:
          flags: unittests,linux
  shell:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Install System packages
        run: |
          sudo apt-get -y update
          # Needed for shell tests
          sudo apt-get install -y coreutils csh zsh tcsh fish dash bash
          # Needed for kcov
          sudo apt-get -y install cmake binutils-dev libcurl4-openssl-dev
          sudo apt-get -y install zlib1g-dev libdw-dev libiberty-dev
      - name: Install Python packages
        run: |
          pip install --upgrade pip six setuptools codecov coverage
      - name: Setup git configuration
        run: |
          # Need this for the git tests to succeed.
          git --version
          . .github/workflows/setup_git.sh
      - name: Install kcov for bash script coverage
        env:
          KCOV_VERSION: 38
        run: |
          KCOV_ROOT=$(mktemp -d)
          wget --output-document=${KCOV_ROOT}/${KCOV_VERSION}.tar.gz https://github.com/SimonKagstrom/kcov/archive/v${KCOV_VERSION}.tar.gz
          tar -C ${KCOV_ROOT} -xzvf ${KCOV_ROOT}/${KCOV_VERSION}.tar.gz
          mkdir -p ${KCOV_ROOT}/build
          cd ${KCOV_ROOT}/build && cmake -Wno-dev ${KCOV_ROOT}/kcov-${KCOV_VERSION} && cd -
          make -C ${KCOV_ROOT}/build && sudo make -C ${KCOV_ROOT}/build install
      - name: Run shell tests
        env:
          COVERAGE: true
        run: |
          share/spack/qa/run-shell-tests
      - uses: codecov/codecov-action@v1
        with:
          flags: shelltests,linux
  centos6:
    # Test for Python2.6 run on Centos 6
    runs-on: ubuntu-latest
    container: spack/github-actions:centos6
    steps:
      - name: Run unit tests
        env:
          HOME: /home/spack-test
        run: |
          whoami && echo $HOME && cd $HOME
          git clone https://github.com/spack/spack.git && cd spack
          git fetch origin ${{ github.ref }}:test-branch
          git checkout test-branch
          share/spack/qa/run-unit-tests
  rhel8-platform-python:
    runs-on: ubuntu-latest
    container: registry.access.redhat.com/ubi8/ubi
    steps:
      - name: Install dependencies
        run: |
          dnf install -y \
              bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
              make patch tcl unzip which xz
      - uses: actions/checkout@v2
      - name: Setup repo and non-root user
        run: |
          git --version
          git fetch --unshallow
          . .github/workflows/setup_git.sh
          useradd spack-test
          chown -R spack-test .
      - name: Run unit tests
        env:
          SPACK_PYTHON: /usr/libexec/platform-python
        shell: runuser -u spack-test -- bash {0}
        run: |
          source share/spack/setup-env.sh
          spack unit-test -k 'not svn and not hg' -x --verbose
  clingo:
    # Test for the clingo based solver
    runs-on: ubuntu-latest
    container: spack/github-actions:clingo
    steps:
      - name: Run unit tests
        run: |
          whoami && echo PWD=$PWD && echo HOME=$HOME && echo SPACK_TEST_SOLVER=$SPACK_TEST_SOLVER
          which clingo && clingo --version
          git clone https://github.com/spack/spack.git && cd spack
          git fetch origin ${{ github.ref }}:test-branch
          git checkout test-branch
          . share/spack/setup-env.sh
          spack compiler find
          spack solve mpileaks%gcc
          coverage run $(which spack) unit-test -v
          coverage combine
          coverage xml
      - uses: codecov/codecov-action@v1
        with:
          flags: unittests,linux,clingo


@@ -24,8 +24,8 @@ jobs:
     name: gcc with clang
     runs-on: macos-latest
     steps:
-    - uses: actions/checkout@ec3a7ce113134d7a93b817d10a8272cb61118579 # @v2
-    - uses: actions/setup-python@dc73133d4da04e56a135ae2246682783cc7c7cb6 # @v2
+    - uses: actions/checkout@v2
+    - uses: actions/setup-python@v2
       with:
         python-version: 3.9
     - name: spack install
@@ -39,8 +39,8 @@ jobs:
     runs-on: macos-latest
     timeout-minutes: 700
     steps:
-    - uses: actions/checkout@ec3a7ce113134d7a93b817d10a8272cb61118579 # @v2
-    - uses: actions/setup-python@dc73133d4da04e56a135ae2246682783cc7c7cb6 # @v2
+    - uses: actions/checkout@v2
+    - uses: actions/setup-python@v2
       with:
         python-version: 3.9
     - name: spack install
@@ -52,8 +52,8 @@ jobs:
     name: scipy, mpl, pd
     runs-on: macos-latest
     steps:
-    - uses: actions/checkout@ec3a7ce113134d7a93b817d10a8272cb61118579 # @v2
-    - uses: actions/setup-python@dc73133d4da04e56a135ae2246682783cc7c7cb6 # @v2
+    - uses: actions/checkout@v2
+    - uses: actions/setup-python@v2
       with:
         python-version: 3.9
     - name: spack install
@@ -62,3 +62,17 @@ jobs:
         spack install -v --fail-fast py-scipy %apple-clang
         spack install -v --fail-fast py-matplotlib %apple-clang
         spack install -v --fail-fast py-pandas %apple-clang
+  install_mpi4py_clang:
+    name: mpi4py, petsc4py
+    runs-on: macos-latest
+    steps:
+    - uses: actions/checkout@v2
+    - uses: actions/setup-python@v2
+      with:
+        python-version: 3.9
+    - name: spack install
+      run: |
+        . .github/workflows/install_spack.sh
+        spack install -v --fail-fast py-mpi4py %apple-clang
+        spack install -v --fail-fast py-petsc4py %apple-clang

.github/workflows/macos_unit_tests.yaml (new file)

@@ -0,0 +1,44 @@
name: macos tests

on:
  push:
    branches:
      - develop
      - releases/**
  pull_request:
    branches:
      - develop
      - releases/**

jobs:
  build:
    runs-on: macos-latest
    strategy:
      matrix:
        python-version: [3.8]
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install Python packages
        run: |
          pip install --upgrade pip six setuptools
          pip install --upgrade codecov coverage
          pip install --upgrade flake8 pep8-naming mypy
      - name: Setup Homebrew packages
        run: |
          brew install dash fish gcc gnupg2 kcov
      - name: Run unit tests
        run: |
          git --version
          . .github/workflows/setup_git.sh
          . share/spack/setup-env.sh
          coverage run $(which spack) unit-test
          coverage combine
          coverage xml
      - uses: codecov/codecov-action@v1
        with:
          file: ./coverage.xml
          flags: unittests,macos


@@ -1,8 +1,9 @@
-#!/bin/bash -e
+#!/usr/bin/env sh
 git config --global user.email "spack@example.com"
 git config --global user.name "Test User"

-# create a local pr base branch
-if [[ -n $GITHUB_BASE_REF ]]; then
-  git fetch origin "${GITHUB_BASE_REF}:${GITHUB_BASE_REF}"
+# With fetch-depth: 0 we have a remote develop
+# but not a local branch. Don't do this on develop
+if [ "$(git branch --show-current)" != "develop" ]
+then
+  git branch develop origin/develop
 fi

.github/workflows/style_and_docs.yaml (new file)

@@ -0,0 +1,65 @@
name: style and docs

on:
  push:
    branches:
      - develop
      - releases/**
  pull_request:
    branches:
      - develop
      - releases/**

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Install Python Packages
        run: |
          pip install --upgrade pip
          pip install --upgrade vermin
      - name: Minimum Version (Spack's Core)
        run: vermin --backport argparse --backport typing -t=2.6- -t=3.5- -v lib/spack/spack/ lib/spack/llnl/ bin/
      - name: Minimum Version (Repositories)
        run: vermin --backport argparse --backport typing -t=2.6- -t=3.5- -v var/spack/repos
  style:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Install Python packages
        run: |
          pip install --upgrade pip six setuptools flake8 mypy>=0.800 black
      - name: Setup git configuration
        run: |
          # Need this for the git tests to succeed.
          git --version
          . .github/workflows/setup_git.sh
      - name: Run style tests
        run: |
          share/spack/qa/run-style-tests
  documentation:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Install System packages
        run: |
          sudo apt-get -y update
          sudo apt-get install -y coreutils ninja-build graphviz
      - name: Install Python packages
        run: |
          pip install --upgrade pip six setuptools
          pip install --upgrade -r lib/spack/docs/requirements.txt
      - name: Build documentation
        run: |
          share/spack/qa/run-doc-tests


@@ -1,349 +0,0 @@
name: linux tests
on:
push:
branches:
- develop
- releases/**
pull_request:
branches:
- develop
- releases/**
jobs:
# Validate that the code can be run on all the Python versions
# supported by Spack
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@ec3a7ce113134d7a93b817d10a8272cb61118579 # @v2
- uses: actions/setup-python@dc73133d4da04e56a135ae2246682783cc7c7cb6 # @v2
with:
python-version: 3.9
- name: Install Python Packages
run: |
pip install --upgrade pip
pip install --upgrade vermin
- name: vermin (Spack's Core)
run: vermin --backport argparse --violations --backport typing -t=2.6- -t=3.5- -vvv lib/spack/spack/ lib/spack/llnl/ bin/
- name: vermin (Repositories)
run: vermin --backport argparse --violations --backport typing -t=2.6- -t=3.5- -vvv var/spack/repos
# Run style checks on the files that have been changed
style:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@ec3a7ce113134d7a93b817d10a8272cb61118579 # @v2
with:
fetch-depth: 0
- uses: actions/setup-python@dc73133d4da04e56a135ae2246682783cc7c7cb6 # @v2
with:
python-version: 3.9
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools types-six
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
git --version
. .github/workflows/setup_git.sh
- name: Run style tests
run: |
share/spack/qa/run-style-tests
# Check which files have been updated by the PR
changes:
runs-on: ubuntu-latest
# Set job outputs to values from filter step
outputs:
core: ${{ steps.filter.outputs.core }}
packages: ${{ steps.filter.outputs.packages }}
with_coverage: ${{ steps.coverage.outputs.with_coverage }}
steps:
- uses: actions/checkout@ec3a7ce113134d7a93b817d10a8272cb61118579 # @v2
if: ${{ github.event_name == 'push' }}
with:
fetch-depth: 0
# For pull requests it's not necessary to checkout the code
- uses: dorny/paths-filter@b2feaf19c27470162a626bd6fa8438ae5b263721
id: filter
with:
# See https://github.com/dorny/paths-filter/issues/56 for the syntax used below
filters: |
core:
- './!(var/**)/**'
packages:
- 'var/**'
# Some links for easier reference:
#
# "github" context: https://docs.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions#github-context
# job outputs: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idoutputs
# setting environment variables from earlier steps: https://docs.github.com/en/actions/reference/workflow-commands-for-github-actions#setting-an-environment-variable
#
- id: coverage
# Run the subsequent jobs with coverage if core has been modified,
# regardless of whether this is a pull request or a push to a branch
run: |
echo Core changes: ${{ steps.filter.outputs.core }}
echo Event name: ${{ github.event_name }}
if [ "${{ steps.filter.outputs.core }}" == "true" ]
then
echo "::set-output name=with_coverage::true"
else
echo "::set-output name=with_coverage::false"
fi
# Run unit tests with different configurations on linux
unittests:
needs: [ validate, style, changes ]
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [2.7, 3.5, 3.6, 3.7, 3.8, 3.9]
concretizer: ['original', 'clingo']
steps:
- uses: actions/checkout@ec3a7ce113134d7a93b817d10a8272cb61118579 # @v2
with:
fetch-depth: 0
- uses: actions/setup-python@dc73133d4da04e56a135ae2246682783cc7c7cb6 # @v2
with:
python-version: ${{ matrix.python-version }}
- name: Install System packages
run: |
sudo apt-get -y update
# Needed for unit tests
sudo apt-get -y install \
coreutils cvs gfortran graphviz gnupg2 mercurial ninja-build \
patchelf cmake bison libbison-dev kcov
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools codecov coverage[toml]
# ensure style checks are not skipped in unit tests for python >= 3.6
# note that true/false (i.e., 1/0) are opposite in conditions in python and bash
if python -c 'import sys; sys.exit(not sys.version_info >= (3, 6))'; then
pip install --upgrade flake8 isort>=4.3.5 mypy>=0.900 black
fi
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
git --version
. .github/workflows/setup_git.sh
- name: Bootstrap clingo
if: ${{ matrix.concretizer == 'clingo' }}
env:
SPACK_PYTHON: python
run: |
. share/spack/setup-env.sh
spack bootstrap untrust spack-install
spack -v solve zlib
- name: Run unit tests (full suite with coverage)
if: ${{ needs.changes.outputs.with_coverage == 'true' }}
env:
SPACK_PYTHON: python
COVERAGE: true
SPACK_TEST_SOLVER: ${{ matrix.concretizer }}
run: |
share/spack/qa/run-unit-tests
coverage combine
coverage xml
- name: Run unit tests (reduced suite without coverage)
if: ${{ needs.changes.outputs.with_coverage == 'false' }}
env:
SPACK_PYTHON: python
ONLY_PACKAGES: true
SPACK_TEST_SOLVER: ${{ matrix.concretizer }}
run: |
share/spack/qa/run-unit-tests
- uses: codecov/codecov-action@f32b3a3741e1053eb607407145bc9619351dc93b # @v2.1.0
if: ${{ needs.changes.outputs.with_coverage == 'true' }}
with:
flags: unittests,linux,${{ matrix.concretizer }}
# Test shell integration
shell:
needs: [ validate, style, changes ]
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@ec3a7ce113134d7a93b817d10a8272cb61118579 # @v2
with:
fetch-depth: 0
- uses: actions/setup-python@dc73133d4da04e56a135ae2246682783cc7c7cb6 # @v2
with:
python-version: 3.9
- name: Install System packages
run: |
sudo apt-get -y update
# Needed for shell tests
sudo apt-get install -y coreutils kcov csh zsh tcsh fish dash bash
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools codecov coverage[toml]
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
git --version
. .github/workflows/setup_git.sh
- name: Run shell tests (without coverage)
if: ${{ needs.changes.outputs.with_coverage == 'false' }}
run: |
share/spack/qa/run-shell-tests
- name: Run shell tests (with coverage)
if: ${{ needs.changes.outputs.with_coverage == 'true' }}
env:
COVERAGE: true
run: |
share/spack/qa/run-shell-tests
- uses: codecov/codecov-action@f32b3a3741e1053eb607407145bc9619351dc93b # @v2.1.0
if: ${{ needs.changes.outputs.with_coverage == 'true' }}
with:
flags: shelltests,linux
# Test for Python2.6 run on Centos 6
centos6:
needs: [ validate, style, changes ]
runs-on: ubuntu-latest
container: spack/github-actions:centos6
steps:
- name: Run unit tests (full test-suite)
# The CentOS 6 container doesn't run with coverage, but
# under the same conditions it runs the full test suite
if: ${{ needs.changes.outputs.with_coverage == 'true' }}
env:
HOME: /home/spack-test
SPACK_TEST_SOLVER: original
run: |
whoami && echo $HOME && cd $HOME
git clone "${{ github.server_url }}/${{ github.repository }}.git" && cd spack
git fetch origin "${{ github.ref }}:test-branch"
git checkout test-branch
. .github/workflows/setup_git.sh
bin/spack unit-test -x
- name: Run unit tests (only package tests)
if: ${{ needs.changes.outputs.with_coverage == 'false' }}
env:
HOME: /home/spack-test
ONLY_PACKAGES: true
SPACK_TEST_SOLVER: original
run: |
whoami && echo $HOME && cd $HOME
git clone "${{ github.server_url }}/${{ github.repository }}.git" && cd spack
git fetch origin "${{ github.ref }}:test-branch"
git checkout test-branch
. .github/workflows/setup_git.sh
bin/spack unit-test -x -m "not maybeslow" -k "package_sanity"
# Test RHEL8 UBI with platform Python. This job is run
# only on PRs modifying core Spack
rhel8-platform-python:
needs: [ validate, style, changes ]
runs-on: ubuntu-latest
if: ${{ needs.changes.outputs.with_coverage == 'true' }}
container: registry.access.redhat.com/ubi8/ubi
steps:
- name: Install dependencies
run: |
dnf install -y \
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch tcl unzip which xz
- uses: actions/checkout@ec3a7ce113134d7a93b817d10a8272cb61118579 # @v2
- name: Setup repo and non-root user
run: |
git --version
git fetch --unshallow
. .github/workflows/setup_git.sh
useradd spack-test
chown -R spack-test .
- name: Run unit tests
shell: runuser -u spack-test -- bash {0}
run: |
source share/spack/setup-env.sh
spack -d solve zlib
spack unit-test -k 'not cvs and not svn and not hg' -x --verbose
# Test for the clingo based solver (using clingo-cffi)
clingo-cffi:
needs: [ validate, style, changes ]
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@ec3a7ce113134d7a93b817d10a8272cb61118579 # @v2
with:
fetch-depth: 0
- uses: actions/setup-python@dc73133d4da04e56a135ae2246682783cc7c7cb6 # @v2
with:
python-version: 3.9
- name: Install System packages
run: |
sudo apt-get -y update
# Needed for unit tests
sudo apt-get -y install \
coreutils cvs gfortran graphviz gnupg2 mercurial ninja-build \
patchelf kcov
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools codecov coverage[toml] clingo
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
git --version
. .github/workflows/setup_git.sh
- name: Run unit tests (full suite with coverage)
if: ${{ needs.changes.outputs.with_coverage == 'true' }}
env:
COVERAGE: true
SPACK_TEST_SOLVER: clingo
run: |
share/spack/qa/run-unit-tests
coverage combine
coverage xml
- name: Run unit tests (reduced suite without coverage)
if: ${{ needs.changes.outputs.with_coverage == 'false' }}
env:
ONLY_PACKAGES: true
SPACK_TEST_SOLVER: clingo
run: |
share/spack/qa/run-unit-tests
- uses: codecov/codecov-action@f32b3a3741e1053eb607407145bc9619351dc93b # @v2.1.0
if: ${{ needs.changes.outputs.with_coverage == 'true' }}
with:
flags: unittests,linux,clingo
# Run unit tests on MacOS
build:
needs: [ validate, style, changes ]
runs-on: macos-latest
strategy:
matrix:
python-version: [3.8]
steps:
- uses: actions/checkout@ec3a7ce113134d7a93b817d10a8272cb61118579 # @v2
with:
fetch-depth: 0
- uses: actions/setup-python@dc73133d4da04e56a135ae2246682783cc7c7cb6 # @v2
with:
python-version: ${{ matrix.python-version }}
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools
pip install --upgrade codecov coverage[toml]
- name: Setup Homebrew packages
run: |
brew install dash fish gcc gnupg2 kcov
- name: Run unit tests
env:
SPACK_TEST_SOLVER: clingo
run: |
git --version
. .github/workflows/setup_git.sh
. share/spack/setup-env.sh
$(which spack) bootstrap untrust spack-install
$(which spack) solve zlib
if [ "${{ needs.changes.outputs.with_coverage }}" == "true" ]
then
coverage run $(which spack) unit-test -x
coverage combine
coverage xml
# Delete the symlink going from ./lib/spack/docs/_spack_root back to
# the initial directory, since it causes ELOOP errors with codecov/actions@2
rm lib/spack/docs/_spack_root
else
echo "ONLY PACKAGE RECIPES CHANGED [skipping coverage]"
$(which spack) unit-test -x -m "not maybeslow" -k "package_sanity"
fi
- uses: codecov/codecov-action@f32b3a3741e1053eb607407145bc9619351dc93b # @v2.1.0
if: ${{ needs.changes.outputs.with_coverage == 'true' }}
with:
files: ./coverage.xml
flags: unittests,macos

.gitignore

@@ -136,7 +136,6 @@ venv/
 ENV/
 env.bak/
 venv.bak/
-!/lib/spack/env

 # Spyder project settings
 .spyderproject

@@ -210,9 +209,6 @@ tramp
 /eshell/history
 /eshell/lastdir

-# zsh byte-compiled files
-*.zwc

 # elpa packages
 /elpa/


@@ -3,8 +3,7 @@ Adam Moody <moody20@llnl.gov> Adam T. Moody
 Alfredo Gimenez <gimenez1@llnl.gov> Alfredo Gimenez <alfredo.gimenez@gmail.com>
 Alfredo Gimenez <gimenez1@llnl.gov> Alfredo Adolfo Gimenez <alfredo.gimenez@gmail.com>
 Andrew Williams <williamsa89@cardiff.ac.uk> Andrew Williams <andrew@alshain.org.uk>
-Axel Huebl <axelhuebl@lbl.gov> Axel Huebl <a.huebl@hzdr.de>
-Axel Huebl <axelhuebl@lbl.gov> Axel Huebl <axel.huebl@plasma.ninja>
+Axel Huebl <a.huebl@hzdr.de> Axel Huebl <axel.huebl@plasma.ninja>
 Ben Boeckel <ben.boeckel@kitware.com> Ben Boeckel <mathstuf@gmail.com>
 Ben Boeckel <ben.boeckel@kitware.com> Ben Boeckel <mathstuf@users.noreply.github.com>
 Benedikt Hegner <hegner@cern.ch> Benedikt Hegner <benedikt.hegner@cern.ch>

.mypy.ini (new file)

@@ -0,0 +1,35 @@
[mypy]
python_version = 3.7
files=lib/spack/llnl/**/*.py,lib/spack/spack/**/*.py
mypy_path=bin,lib/spack,lib/spack/external,var/spack/repos/builtin

# This and a generated import file allows supporting packages
namespace_packages=True

# To avoid re-factoring all the externals, ignore errors and missing imports
# globally, then turn back on in spack and spack submodules
ignore_errors=True
ignore_missing_imports=True

[mypy-spack.*]
ignore_errors=False
ignore_missing_imports=False

[mypy-packages.*]
ignore_errors=False
ignore_missing_imports=False

[mypy-llnl.*]
ignore_errors=False
ignore_missing_imports=False

[mypy-spack.test.packages]
ignore_errors=True

# ignore errors in fake import path for packages
[mypy-spack.pkg.*]
ignore_errors=True
ignore_missing_imports=True

# jinja has syntax in it that requires python3 and causes a parse error
# skip importing it
[mypy-jinja2]
follow_imports=skip
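
A small illustration of what this scoping means in practice (the file and function below are hypothetical, not from the diff): errors in Spack's own modules are reported, while vendored externals and the fake `spack.pkg` import path stay quiet.

```python
# Hypothetical lib/spack/spack/example.py -- fully checked, because
# [mypy-spack.*] flips ignore_errors back to False:
def version_tuple(version: str) -> tuple:
    return tuple(int(p) for p in version.split("."))

parts = version_tuple(3.7)  # mypy: Argument 1 has incompatible type "float"

# The same mistake under lib/spack/external/ (or spack.pkg.*) would pass
# silently, since those scopes keep ignore_errors=True.
```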


@@ -2,7 +2,6 @@ version: 2
 sphinx:
   configuration: lib/spack/docs/conf.py
-  fail_on_warning: true

 python:
   version: 3.7


@@ -1,294 +1,3 @@
# v0.17.3 (2022-07-14)
### Spack bugfixes
* Fix missing chgrp on symlinks in package installations (#30743)
* Allow having non-existing upstreams (#30744, #30746)
* Fix `spack stage` with custom paths (#30448)
* Fix failing call for `spack buildcache save-specfile` (#30637)
* Fix globbing in compiler wrapper (#30699)
# v0.17.2 (2022-04-13)
### Spack bugfixes
* Fix --reuse with upstreams set in an environment (#29680)
* config add: fix parsing of validator error to infer type from oneOf (#29475)
* Fix spack -C command_line_scope used in conjunction with other flags (#28418)
* Use Spec.constrain to construct spec lists for stacks (#28783)
* Fix bug occurring when searching for inherited patches in packages (#29574)
* Fixed a few bugs when manipulating symlinks (#28318, #29515, #29636)
* Fixed a few minor bugs affecting command prompt, terminal title and argument completion (#28279, #28278, #28939, #29405, #29070, #29402)
* Fixed a few bugs affecting the spack ci command (#29518, #29419)
* Fix handling of Intel compiler environment (#29439)
* Fix a few edge cases when reindexing the DB (#28764)
* Remove "Known issues" from documentation (#29664)
* Other miscellaneous bugfixes (0b72e070583fc5bcd016f5adc8a84c99f2b7805f, #28403, #29261)
# v0.17.1 (2021-12-23)
### Spack Bugfixes
* Allow locks to work under high contention (#27846)
* Improve error messages from clingo (#27707, #27970)
* Respect package permissions for sbang (#25764)
* Fix --enable-locks behavior (#24675)
* Fix log-format reporter ignoring install errors (#25961)
* Fix overloaded argparse keys (#27379)
* Allow style commands to run with targets other than "develop" (#27472)
* Log lock messages to debug level, instead of verbose level (#27408)
* Handle invalid unicode while logging (#21447)
* spack audit: fix API calls to variants (#27713)
* Provide meaningful message for empty environment installs (#28031)
* Added opensuse leap containers to spack containerize (#27837)
* Revert "patches: make re-applied patches idempotent" (#27625)
* MANPATH can use system defaults (#21682)
* Add "setdefault" subcommand to `spack module tcl` (#14686)
* Regenerate views when specs already installed (#28113)
### Package bugfixes
* Fix external package detection for OpenMPI (#27255)
* Update the UPC++ package to 2021.9.0 (#26996)
* Added py-vermin v1.3.2 (#28072)
# v0.17.0 (2021-11-05)
`v0.17.0` is a major feature release.
## Major features in this release
1. **New concretizer is now default**
The new concretizer introduced as an experimental feature in `v0.16.0`
is now the default (#25502). The new concretizer is based on the
[clingo](https://github.com/potassco/clingo) logic programming system,
and it enables us to do much higher quality and faster dependency solving.
The old concretizer is still available via the `concretizer: original`
setting, but it is deprecated and will be removed in `v0.18.0`.
2. **Binary Bootstrapping**
To make it easier to use the new concretizer and binary packages,
Spack now bootstraps `clingo` and `GnuPG` from public binaries. If it
is not able to bootstrap them from binaries, it installs them from
source code. With these changes, you should still be able to clone Spack
and start using it almost immediately. (#21446, #22354, #22489, #22606,
#22720, #22720, #23677, #23946, #24003, #25138, #25607, #25964, #26029,
#26399, #26599).
3. **Reuse existing packages (experimental)**
The most wanted feature from our
[2020 user survey](https://spack.io/spack-user-survey-2020/) and
the most wanted Spack feature of all time (#25310). `spack install`,
`spack spec`, and `spack concretize` now have a `--reuse` option, which
causes Spack to minimize the number of rebuilds it does. The `--reuse`
option will try to find existing installations and binary packages locally
and in registered mirrors, and will prefer to use them over building new
versions. This will allow users to build from source *far* less than in
prior versions of Spack. This feature will continue to be improved, with
configuration options and better CLI expected in `v0.17.1`. It will become
the *default* concretization mode in `v0.18.0`.
4. **Better error messages**
We have improved the error messages generated by the new concretizer by
using *unsatisfiable cores*. Spack will now print a summary of the types
of constraints that were violated to make a spec unsatisfiable (#26719).
5. **Conditional variants**
Variants can now have a `when="<spec>"` clause, allowing them to be
conditional based on the version or other attributes of a package (#24858).
6. **Git commit versions**
In an environment and on the command-line, you can now provide a full,
40-character git commit as a version for any package with a top-level
`git` URL. e.g., `spack install hdf5@45bb27f58240a8da7ebb4efc821a1a964d7712a8`.
Spack will compare the commit to tags in the git repository to understand
what versions it is ahead of or behind.
7. **Override local config and cache directories**
You can now set `SPACK_DISABLE_LOCAL_CONFIG` to disable the `~/.spack` and
`/etc/spack` configuration scopes. `SPACK_USER_CACHE_PATH` allows you to
move caches out of `~/.spack`, as well (#27022, #26735). This addresses
common problems where users could not isolate CI environments from local
configuration.
8. **Improvements to Spack Containerize**
For added reproducibility, you can now pin the Spack version used by
`spack containerize` (#21910). The container build will only build
with the Spack version pinned at build recipe creation instead of the
latest Spack version.
9. **New commands for dealing with tags**
The `spack tags` command allows you to list tags on packages (#26136), and you
can list tests and filter tags with `spack test list` (#26842).
## Other new features of note
* Copy and relocate environment views as stand-alone installations (#24832)
* `spack diff` command can diff two installed specs (#22283, #25169)
* `spack -c <config>` can set one-off config parameters on CLI (#22251)
* `spack load --list` is an alias for `spack find --loaded` (#27184)
* `spack gpg` can export private key with `--secret` (#22557)
* `spack style` automatically bootstraps dependencies (#24819)
* `spack style --fix` automatically invokes `isort` (#24071)
* build dependencies can be installed from build caches with `--include-build-deps` (#19955)
* `spack audit` command for checking package constraints (#23053)
* Spack can now fetch from `CVS` repositories (yep, really) (#23212)
* `spack monitor` lets you upload analysis about installations to a
[spack monitor server](https://github.com/spack/spack-monitor) (#23804, #24321,
  #23777, #25928)
* `spack python --path` shows which `python` Spack is using (#22006)
* `spack env activate --temp` can create temporary environments (#25388); see the session sketched after this list
* `--preferred` and `--latest` options for `spack checksum` (#25830)
* `cc` is now pure posix and runs on Alpine (#26259)
* `SPACK_PYTHON` environment variable sets which `python` spack uses (#21222)
* `SPACK_SKIP_MODULES` lets you source `setup-env.sh` faster if you don't need modules (#24545)
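A few of the conveniences above, sketched in one illustrative session:
```console
$ spack python --path        # which python interpreter Spack runs under
$ spack env activate --temp  # throwaway environment in a temp directory
$ spack load --list          # what is loaded; alias for `spack find --loaded`
```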
## Major internal refactors
* `spec.yaml` files are now `spec.json`, yielding a large speed improvement (#22845)
* Splicing allows Spack specs to store mixed build provenance (#20262)
* More extensive hooks API for installations (#21930)
* New internal API for getting the active environment (#25439)
## Performance Improvements
* Parallelize separate concretization in environments; previously a 55 min E4S solve
  now takes 2.5 min (#26264)
* Drastically improve YamlFilesystemView file removal performance via batching (#24355)
* Speed up spec comparison (#21618)
* Speed up environment activation (#25633)
## Archspec improvements
* support for new generic `x86_64_v2`, `x86_64_v3`, `x86_64_v4` targets
(see [archspec#31](https://github.com/archspec/archspec-json/pull/31))
* `spack arch --generic` lets you get the best generic architecture for
  your node (#27061); see the example after this list
* added support for aocc (#20124), `arm` compiler on `graviton2` (#24904)
  and on `a64fx` (#24524).
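For instance (output depends on the host):
```console
$ spack arch            # full platform-os-target triplet
$ spack arch --generic  # best generic target, e.g. x86_64_v3
```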
## Infrastructure, buildcaches, and services
* Add support for GCS Bucket Mirrors (#26382)
* Add `spackbot` to help package maintainers with notifications. See
[spack.github.io/spackbot](https://spack.github.io/spackbot/)
* Reproducible pipeline builds with `spack ci rebuild` (#22887)
* Removed redundant concretizations from GitLab pipeline generation (#26622)
* Spack CI no longer generates jobs for unbuilt specs (#20435)
* Every pull request pipeline has its own buildcache (#25529)
* `--no-add` installs only specified specs and only if already present in… (#22657)
* Add environment-aware `spack buildcache sync` command (#25470)
* Binary cache installation speedups and improvements (#19690, #20768)
## Deprecations and Removals
* `spack setup` was deprecated in v0.16.0, and has now been removed.
Use `spack develop` and `spack dev-build`.
* Remove unused `--dependencies` flag from `spack load` (#25731)
* Remove stubs for `spack module [refresh|find|rm|loads]`, all of which
were deprecated in 2018.
## Notable Bugfixes
* Deactivate previous env before activating new one (#25409)
* Many fixes to error codes from `spack install` (#21319, #27012, #25314)
* config add: infer type based on JSON schema validation errors (#27035)
* `spack config edit` now works even if `spack.yaml` is broken (#24689)
## Packages
* Allow non-empty version ranges like `1.1.0:1.1` (#26402)
* Remove `.99`'s from many version ranges (#26422)
* Python: use platform-specific site packages dir (#25998)
* `CachedCMakePackage` for using *.cmake initial config files (#19316)
* `lua-lang` allows swapping `lua` and `luajit` (#22492)
* Better support for `ld.gold` and `ld.lld` (#25626)
* build times are now stored as metadata in `$prefix/.spack` (#21179)
* post-install tests can be reused in smoke tests (#20298)
* Packages can use `pypi` attribute to infer `homepage`/`url`/`list_url` (#17587)
* Use gnuconfig package for `config.guess` file replacement (#26035)
* patches: make re-applied patches idempotent (#26784)
## Spack community stats
* 5969 total packages, 920 new since `v0.16.0`
* 358 new Python packages, 175 new R packages
* 513 people contributed to this release
* 490 committers to packages
* 105 committers to core
* Lots of GPU updates:
* ~77 CUDA-related commits
* ~66 AMD-related updates
* ~27 OneAPI-related commits
* 30 commits from AMD toolchain support
* `spack test` usage in packages is increasing
* 1669 packages with tests (mostly generic python tests)
* 93 packages with their own tests
# v0.16.3 (2021-09-21)
* clang/llvm: fix version detection (#19978)
* Fix use of quotes in Python build system (#22279)
* Cray: fix extracting paths from module files (#23472)
* Use AWS CloudFront for source mirror (#23978)
* Ensure all roots of an installed environment are marked explicit in db (#24277)
* Fix fetching for Python 3.8 and 3.9 (#24686)
* locks: only open lockfiles once instead of for every lock held (#24794)
* Remove the EOL centos:6 docker image
# v0.16.2 (2021-05-22)
* Major performance improvement for `spack load` and other commands. (#23661)
* `spack fetch` is now environment-aware. (#19166)
* Numerous fixes for the new, `clingo`-based concretizer. (#23016, #23307,
#23090, #22896, #22534, #20644, #20537, #21148)
* Support for automatically bootstrapping `clingo` from source. (#20652, #20657,
#21364, #21446, #21913, #22354, #22444, #22460, #22489, #22610, #22631)
* Python 3.10 support: `collections.abc` (#20441)
* Fix import issues by using `__import__` instead of Spack package importer.
(#23288, #23290)
* Bugfixes and `--source-dir` argument for `spack location`. (#22755, #22348,
#22321)
* Better support for externals in shared prefixes. (#22653)
* `spack build-env` now prefers specs defined in the active environment.
(#21642)
* Remove erroneous warnings about quotes in `from_sourcing_files`. (#22767)
* Fix clearing cache of `InternalConfigScope`. (#22609)
* Bugfix for the error raised when activating an already-active package. (#22587)
* Make `SingleFileScope` able to repopulate the cache after clearing it.
(#22559)
* Channelflow: Fix the package. (#22483)
* More descriptive error message for bugs in `package.py` (#21811)
* Use package-supplied `autogen.sh`. (#20319)
* Respect `-k/verify-ssl-false` in `_existing_url` method. (#21864)
# v0.16.1 (2021-02-22)
This minor release includes a new feature and associated fixes:
* intel-oneapi support through new packages (#20411, #20686, #20693, #20717,
#20732, #20808, #21377, #21448)
This release also contains bug fixes/enhancements for:
* HIP/ROCm support (#19715, #20095)
* concretization (#19988, #20020, #20082, #20086, #20099, #20102, #20128,
#20182, #20193, #20194, #20196, #20203, #20247, #20259, #20307, #20362,
#20383, #20423, #20473, #20506, #20507, #20604, #20638, #20649, #20677,
#20680, #20790)
* environment install reporting fix (#20004)
* avoid import in ABI compatibility info (#20236)
* restore ability of dev-build to skip patches (#20351)
* spack find -d spec grouping (#20028)
* spack smoke test support (#19987, #20298)
* macOS fixes (#20038, #21662)
* abstract spec comparisons (#20341)
* continuous integration (#17563)
* performance improvements for binary relocation (#19690, #20768)
* additional sanity checks for variants in builtin packages (#20373)
* do not pollute auto-generated configuration files with empty lists or
dicts (#20526)
plus assorted documentation (#20021, #20174) and package bug fixes/enhancements
(#19617, #19933, #19986, #20006, #20097, #20198, #20794, #20906, #21411).
# v0.16.0 (2020-11-18)
`v0.16.0` is a major feature release.
View File
@@ -1,12 +1,12 @@
 # <img src="https://cdn.rawgit.com/spack/spack/develop/share/spack/logo/spack-logo.svg" width="64" valign="middle" alt="Spack"/> Spack
-[![Unit Tests](https://github.com/spack/spack/workflows/linux%20tests/badge.svg)](https://github.com/spack/spack/actions)
-[![Bootstrapping](https://github.com/spack/spack/actions/workflows/bootstrap.yml/badge.svg)](https://github.com/spack/spack/actions/workflows/bootstrap.yml)
-[![Linux Builds](https://github.com/spack/spack/workflows/linux%20builds/badge.svg)](https://github.com/spack/spack/actions)
+[![MacOS Tests](https://github.com/spack/spack/workflows/macos%20tests/badge.svg)](https://github.com/spack/spack/actions)
+[![Linux Tests](https://github.com/spack/spack/workflows/linux%20tests/badge.svg)](https://github.com/spack/spack/actions)
 [![macOS Builds (nightly)](https://github.com/spack/spack/workflows/macOS%20builds%20nightly/badge.svg?branch=develop)](https://github.com/spack/spack/actions?query=workflow%3A%22macOS+builds+nightly%22)
 [![codecov](https://codecov.io/gh/spack/spack/branch/develop/graph/badge.svg)](https://codecov.io/gh/spack/spack)
-[![Containers](https://github.com/spack/spack/actions/workflows/build-containers.yml/badge.svg)](https://github.com/spack/spack/actions/workflows/build-containers.yml)
 [![Read the Docs](https://readthedocs.org/projects/spack/badge/?version=latest)](https://spack.readthedocs.io)
-[![Slack](https://slack.spack.io/badge.svg)](https://slack.spack.io)
+[![Slack](https://spackpm.herokuapp.com/badge.svg)](https://spackpm.herokuapp.com)

 Spack is a multi-platform package manager that builds and installs
 multiple versions and configurations of software. It works on Linux,
@@ -27,7 +27,7 @@ for examples and highlights.
 To install spack and your first package, make sure you have Python.
 Then:

-    $ git clone -c feature.manyFiles=true https://github.com/spack/spack.git
+    $ git clone https://github.com/spack/spack.git
     $ cd spack/bin
     $ ./spack install zlib
@@ -37,8 +37,6 @@ Documentation
 [**Full documentation**](https://spack.readthedocs.io/) is available, or
 run `spack help` or `spack help --all`.

-For a cheat sheet on Spack syntax, run `spack help --spec`.

 Tutorial
 ----------------
@@ -61,7 +59,7 @@ packages to bugfixes, documentation, or even new core features.
 Resources:

 * **Slack workspace**: [spackpm.slack.com](https://spackpm.slack.com).
-  To get an invitation, visit [slack.spack.io](https://slack.spack.io).
+  To get an invitation, [**click here**](https://spackpm.herokuapp.com).
 * **Mailing list**: [groups.google.com/d/forum/spack](https://groups.google.com/d/forum/spack)
 * **Twitter**: [@spackpm](https://twitter.com/spackpm). Be sure to
   `@mention` us!
View File
@@ -1,24 +0,0 @@
# Security Policy
## Supported Versions
We provide security updates for the following releases.
For more on Spack's release structure, see
[`README.md`](https://github.com/spack/spack#releases).
| Version | Supported |
| ------- | ------------------ |
| develop | :white_check_mark: |
| 0.16.x | :white_check_mark: |
## Reporting a Vulnerability
To report a vulnerability or other security
issue, email maintainers@spack.io.
You can expect to hear back within two days.
If your security issue is accepted, we will do
our best to release a fix within a week. If
fixing the issue will take longer than this,
we will discuss timeline options with you.
View File
@@ -1,6 +1,6 @@
 #!/bin/sh
 #
-# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
+# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
 # sbang project developers. See the top-level COPYRIGHT file for details.
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
View File
@@ -11,8 +11,7 @@
 # See https://stackoverflow.com/a/47886254
 """:"
 # prefer SPACK_PYTHON environment variable, python3, python, then python2
-SPACK_PREFERRED_PYTHONS="python3 python python2 /usr/libexec/platform-python"
-for cmd in "${SPACK_PYTHON:-}" ${SPACK_PREFERRED_PYTHONS}; do
+for cmd in "${SPACK_PYTHON:-}" python3 python python2; do
     if command -v > /dev/null "$cmd"; then
         export SPACK_PYTHON="$(command -v "$cmd")"
         exec "${SPACK_PYTHON}" "$0" "$@"
@@ -28,18 +27,12 @@ exit 1
 from __future__ import print_function

 import os
-import os.path
 import sys

-min_python3 = (3, 5)
-
-if sys.version_info[:2] < (2, 6) or (
-    sys.version_info[:2] >= (3, 0) and sys.version_info[:2] < min_python3
-):
+if sys.version_info[:2] < (2, 6):
     v_info = sys.version_info[:3]
-    msg = "Spack requires Python 2.6, 2.7 or %d.%d or higher " % min_python3
-    msg += "You are running spack with Python %d.%d.%d." % v_info
-    sys.exit(msg)
+    sys.exit("Spack requires Python 2.6 or higher."
+             "This is Python %d.%d.%d." % v_info)

 # Find spack's location and its prefix.
 spack_file = os.path.realpath(os.path.expanduser(__file__))
@@ -53,9 +46,9 @@ sys.path.insert(0, spack_lib_path)
 spack_external_libs = os.path.join(spack_lib_path, "external")
 if sys.version_info[:2] <= (2, 7):
-    sys.path.insert(0, os.path.join(spack_external_libs, "py2"))
+    sys.path.insert(0, os.path.join(spack_external_libs, 'py2'))
 if sys.version_info[:2] == (2, 6):
-    sys.path.insert(0, os.path.join(spack_external_libs, "py26"))
+    sys.path.insert(0, os.path.join(spack_external_libs, 'py26'))
 sys.path.insert(0, spack_external_libs)
@@ -65,33 +58,11 @@ sys.path.insert(0, spack_external_libs)
 # Briefly: ruamel.yaml produces a .pth file when installed with pip that
 # makes the site installed package the preferred one, even though sys.path
 # is modified to point to another version of ruamel.yaml.
-if "ruamel.yaml" in sys.modules:
-    del sys.modules["ruamel.yaml"]
-if "ruamel" in sys.modules:
-    del sys.modules["ruamel"]
-
-# The following code is here to avoid failures when updating
-# the develop version, due to spurious argparse.pyc files remaining
-# in the libs/spack/external directory, see:
-# https://github.com/spack/spack/pull/25376
-# TODO: Remove in v0.18.0 or later
-try:
-    import argparse
-except ImportError:
-    argparse_pyc = os.path.join(spack_external_libs, 'argparse.pyc')
-    if not os.path.exists(argparse_pyc):
-        raise
-    try:
-        os.remove(argparse_pyc)
-        import argparse  # noqa
-    except Exception:
-        msg = ('The file\n\n\t{0}\n\nis corrupted and cannot be deleted by Spack. '
-               'Either delete it manually or ask some administrator to '
-               'delete it for you.')
-        print(msg.format(argparse_pyc))
-        sys.exit(1)
+if 'ruamel.yaml' in sys.modules:
+    del sys.modules['ruamel.yaml']
+if 'ruamel' in sys.modules:
+    del sys.modules['ruamel']

 import spack.main  # noqa
View File
@@ -1,32 +0,0 @@
bootstrap:
# If set to false Spack will not bootstrap missing software,
# but will instead raise an error.
enable: true
# Root directory for bootstrapping work. The software bootstrapped
# by Spack is installed in a "store" subfolder of this root directory
root: $user_cache_path/bootstrap
# Methods that can be used to bootstrap software. Each method may or
# may not be able to bootstrap all of the software that Spack needs,
# depending on its type.
sources:
- name: 'github-actions'
type: buildcache
description: |
Buildcache generated from a public workflow using Github Actions.
The sha256 checksum of binaries is checked before installation.
info:
url: https://mirror.spack.io/bootstrap/github-actions/v0.1
homepage: https://github.com/alalazo/spack-bootstrap-mirrors
releases: https://github.com/alalazo/spack-bootstrap-mirrors/releases
# This method is just Spack bootstrapping the software it needs from sources.
# It has been added here so that users can selectively disable bootstrapping
# from sources by "untrusting" it.
- name: spack-install
type: install
description: |
Specs built from sources by Spack. May take a long time.
trusted:
# By default we trust bootstrapping from sources and from binaries
# produced on Github via the workflow
github-actions: true
spack-install: true
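As a sketch, bootstrapping could also be disabled without editing this file,
assuming `spack config add` accepts the `bootstrap` section like any other
configuration section:
```console
$ spack config add 'bootstrap:enable:false'
```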
View File
@@ -33,6 +33,13 @@ config:
   template_dirs:
     - $spack/share/spack/templates

+  # Locations where different types of modules should be installed.
+  module_roots:
+    tcl: $spack/share/spack/modules
+    lmod: $spack/share/spack/lmod

   # Temporary locations Spack can try to use for builds.
   #
   # Recommended options are given below.
@@ -42,8 +49,8 @@ config:
   # (i.e., ``$TMP` or ``$TMPDIR``).
   #
   # Another option that prevents conflicts and potential permission issues is
-  # to specify `$user_cache_path/stage`, which ensures each user builds in their
-  # home directory.
+  # to specify `~/.spack/stage`, which ensures each user builds in their home
+  # directory.
   #
   # A more traditional path uses the value of `$spack/var/spack/stage`, which
   # builds directly inside Spack's instance without staging them in a
@@ -60,13 +67,13 @@ config:
   # identifies Spack staging to avoid accidentally wiping out non-Spack work.
   build_stage:
     - $tempdir/$user/spack-stage
-    - $user_cache_path/stage
+    - ~/.spack/stage
   # - $spack/var/spack/stage

   # Directory in which to run tests and store test results.
   # Tests will be stored in directories named by date/time and package
   # name/hash.
-  test_stage: $user_cache_path/test
+  test_stage: ~/.spack/test

   # Cache directory for already downloaded source tarballs and archived
   # repositories. This can be purged with `spack clean --downloads`.
@@ -75,7 +82,7 @@ config:
   # Cache directory for miscellaneous files, like the package index.
   # This can be purged with `spack clean --misc-cache`
-  misc_cache: $user_cache_path/cache
+  misc_cache: ~/.spack/cache

   # Timeout in seconds used for downloading sources etc. This only applies
@@ -134,18 +141,12 @@ config:
   # enabling locks.
   locks: true

-  # The default url fetch method to use.
-  # If set to 'curl', Spack will require curl on the user's system
-  # If set to 'urllib', Spack will use python built-in libs to fetch
-  url_fetch_method: urllib
-
-  # The maximum number of jobs to use for the build system (e.g. `make`), when
-  # the -j flag is not given on the command line. Defaults to 16 when not set.
-  # Note that the maximum number of jobs is limited by the number of cores
-  # available, taking thread affinity into account when supported. For instance:
-  # - With `build_jobs: 16` and 4 cores available `spack install` will run `make -j4`
-  # - With `build_jobs: 16` and 32 cores available `spack install` will run `make -j16`
-  # - With `build_jobs: 2` and 4 cores available `spack install -j6` will run `make -j6`
+  # The maximum number of jobs to use when running `make` in parallel,
+  # always limited by the number of cores available. For instance:
+  # - If set to 16 on a 4 cores machine `spack install` will run `make -j4`
+  # - If set to 16 on a 18 cores machine `spack install` will run `make -j16`
+  # If not set, Spack will use all available cores up to 16.
   # build_jobs: 16
@@ -160,10 +161,11 @@ config:
   # sufficiently for many specs.
   #
   # 'clingo': Uses a logic solver under the hood to solve DAGs with full
-  #           backtracking and optimization for user preferences. Spack will
-  #           try to bootstrap the logic solver, if not already available.
+  #           backtracking and optimization for user preferences.
   #
-  concretizer: clingo
+  # 'clingo' currently requires the clingo ASP solver to be installed and
+  # built with python bindings. 'original' is built in.
+  concretizer: original

   # How long to wait to lock the Spack installation database. This lock is used
@@ -190,8 +192,3 @@ config:
   # Set to 'false' to allow installation on filesystems that doesn't allow setgid bit
   # manipulation by unprivileged user (e.g. AFS)
   allow_sgid: true

-  # Whether to set the terminal title to display status information during
-  # building and installing packages. This gives information about Spack's
-  # current progress as well as the current and total number of packages.
-  terminal_title: false
View File
@@ -1,21 +0,0 @@
# -------------------------------------------------------------------------
# This is the default configuration for Spack's module file generation.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
# $SPACK_ROOT/etc/spack/modules.yaml
#
# Per-user settings (overrides default and site settings):
# ~/.spack/modules.yaml
# -------------------------------------------------------------------------
modules:
prefix_inspections:
lib:
- LD_LIBRARY_PATH
lib64:
- LD_LIBRARY_PATH
View File
@@ -21,10 +21,12 @@ packages:
     - gcc
     - intel
     providers:
-      elf: [libelf]
-      fuse: [macfuse]
-      unwind: [apple-libunwind]
-      uuid: [apple-libuuid]
+      elf:
+        - libelf
+      unwind:
+        - apple-libunwind
+      uuid:
+        - apple-libuuid
   apple-libunwind:
     buildable: false
     externals:
View File
@@ -1,2 +1,2 @@
 mirrors:
-  spack-public: https://mirror.spack.io
+  spack-public: https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/
View File
@@ -14,7 +14,8 @@
 #   ~/.spack/modules.yaml
 # -------------------------------------------------------------------------
 modules:
-  # Paths to check when creating modules for all module sets
+  enable:
+    - tcl
   prefix_inspections:
     bin:
       - PATH
@@ -24,6 +25,16 @@ modules:
       - MANPATH
     share/aclocal:
       - ACLOCAL_PATH
+    lib:
+      - LIBRARY_PATH
+    lib64:
+      - LIBRARY_PATH
+    include:
+      - C_INCLUDE_PATH
+      - CPLUS_INCLUDE_PATH
+      # The INCLUDE env variable specifies paths to look for
+      # .mod file for Intel Fortran compilers
+      - INCLUDE
     lib/pkgconfig:
       - PKG_CONFIG_PATH
     lib64/pkgconfig:
@@ -33,20 +44,6 @@ modules:
     '':
       - CMAKE_PREFIX_PATH

-  # These are configurations for the module set named "default"
-  default:
-    # These values are defaulted in the code. They are not defaulted here so
-    # that we can enable backwards compatibility with the old syntax more
-    # easily (old value is in the config yaml, config:module_roots)
-    # Where to install modules
-    # roots:
-    #  tcl: $spack/share/spack/modules
-    #  lmod: $spack/share/spack/lmod
-    # What type of modules to use
-    enable:
-      - tcl
-
-    # Default configurations if lmod is enabled
   lmod:
     hierarchy:
       - mpi
View File
@@ -17,45 +17,39 @@ packages:
   all:
     compiler: [gcc, intel, pgi, clang, xl, nag, fj, aocc]
     providers:
-      D: [ldc]
       awk: [gawk]
       blas: [openblas, amdblis]
+      D: [ldc]
       daal: [intel-daal]
       elf: [elfutils]
       fftw-api: [fftw, amdfftw]
-      flame: [libflame, amdlibflame]
-      fuse: [libfuse]
       gl: [mesa+opengl, mesa18, opengl]
-      glu: [mesa-glu, openglu]
       glx: [mesa+glx, mesa18+glx, opengl]
+      glu: [mesa-glu, openglu]
       golang: [gcc]
       iconv: [libiconv]
       ipp: [intel-ipp]
       java: [openjdk, jdk, ibm-java]
       jpeg: [libjpeg-turbo, libjpeg]
       lapack: [openblas, amdlibflame]
-      lua-lang: [lua, lua-luajit]
       mariadb-client: [mariadb-c-client, mariadb]
       mkl: [intel-mkl]
       mpe: [mpe2]
       mpi: [openmpi, mpich]
       mysql-client: [mysql, mariadb-c-client]
       opencl: [pocl]
-      onedal: [intel-oneapi-dal]
       osmesa: [mesa+osmesa, mesa18+osmesa]
-      pbs: [openpbs, torque]
       pil: [py-pillow]
       pkgconfig: [pkgconf, pkg-config]
       rpc: [libtirpc]
       scalapack: [netlib-scalapack, amdscalapack]
       sycl: [hipsycl]
-      szip: [libaec, libszip]
+      szip: [libszip, libaec]
       tbb: [intel-tbb]
       unwind: [libunwind]
-      uuid: [util-linux-uuid, libuuid]
-      xxd: [xxd-standalone, vim]
       yacc: [bison, byacc]
-      ziglang: [zig]
+      flame: [libflame, amdlibflame]
+      uuid: [util-linux-uuid, libuuid]
     permissions:
       read: world
       write: user
View File
@@ -2,7 +2,7 @@
 #
 # You can set these variables from the command line.
-SPHINXOPTS    = -W --keep-going
+SPHINXOPTS    = -W
 SPHINXBUILD   = sphinx-build
 PAPER         =
 BUILDDIR      = _build
View File
@@ -1,162 +0,0 @@
.. Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _analyze:
=======
Analyze
=======
The analyze command is a front-end to various tools that let us analyze
package installations. Each analyzer is a module for a different kind
of analysis that can be done on a package installation, including (but not
limited to) binary, log, or text analysis. Thus, the analyze command group
allows you to take an existing package install, choose an analyzer,
and extract some output for the package using it.
-----------------
Analyzer Metadata
-----------------
For all analyzers, we write to an ``analyzers`` folder in ``~/.spack``, or the
value that you specify in your spack config at ``config:analyzers_dir``.
For example, here we see the results of running an analysis on zlib:
.. code-block:: console
$ tree ~/.spack/analyzers/
└── linux-ubuntu20.04-skylake
└── gcc-9.3.0
└── zlib-1.2.11-sl7m27mzkbejtkrajigj3a3m37ygv4u2
├── environment_variables
│   └── spack-analyzer-environment-variables.json
├── install_files
│   └── spack-analyzer-install-files.json
└── libabigail
└── spack-analyzer-libabigail-libz.so.1.2.11.xml
This means that you can always find analyzer output in this folder, and it
is organized with the same logic as the package install it was run for.
If you want to customize this top level folder, simply provide the ``--path``
argument to ``spack analyze run``. The nested organization will be maintained
within your custom root.
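For example, to write results under a custom root (the path is illustrative):

.. code-block:: console

    $ spack analyze run --path /tmp/my-analyzers zlib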
-----------------
Listing Analyzers
-----------------
If you aren't familiar with Spack's analyzers, you can quickly list those that
are available:
.. code-block:: console
$ spack analyze list-analyzers
install_files : install file listing read from install_manifest.json
environment_variables : environment variables parsed from spack-build-env.txt
config_args : config args loaded from spack-configure-args.txt
abigail : Application Binary Interface (ABI) features for objects
In the above, the first three are fairly simple - parsing metadata files from
a package install directory to save
-------------------
Analyzing a Package
-------------------
The analyze command, akin to install, will accept a package spec to perform
an analysis for. The package must be installed. Let's walk through an example
with zlib. We first ask to analyze it. However, since we have more than one
install, we are asked to disambiguate:
.. code-block:: console
$ spack analyze run zlib
==> Error: zlib matches multiple packages.
Matching packages:
fz2bs56 zlib@1.2.11%gcc@7.5.0 arch=linux-ubuntu18.04-skylake
sl7m27m zlib@1.2.11%gcc@9.3.0 arch=linux-ubuntu20.04-skylake
Use a more specific spec.
We can then specify the spec version that we want to analyze:
.. code-block:: console
$ spack analyze run zlib/fz2bs56
If you don't provide any specific analyzer names, by default all analyzers
(shown in the ``list-analyzers`` subcommand list) will be run. If an analyzer does not
have any result, it will be skipped. For example, here is a result running for
zlib:
.. code-block:: console
$ ls ~/.spack/analyzers/linux-ubuntu20.04-skylake/gcc-9.3.0/zlib-1.2.11-sl7m27mzkbejtkrajigj3a3m37ygv4u2/
spack-analyzer-environment-variables.json
spack-analyzer-install-files.json
spack-analyzer-libabigail-libz.so.1.2.11.xml
If you want to run a specific analyzer, ask for it with ``--analyzer``. Here we run
spack analyze on libabigail (already installed) *using* the abigail analyzer:
.. code-block:: console
$ spack analyze run --analyzer abigail libabigail
.. _analyze_monitoring:
----------------------
Monitoring An Analysis
----------------------
For any kind of analysis, you can
use a `spack monitor <https://github.com/spack/spack-monitor>`_ "Spackmon"
as a server to upload the same run metadata to. You can
follow the instructions in the `spack monitor documentation <https://spack-monitor.readthedocs.org>`_
to first create a server along with a username and token for yourself.
You can then use this guide to interact with the server.
You should first export our spack monitor token and username to the environment:
.. code-block:: console
$ export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438
$ export SPACKMON_USER=spacky
By default, the host for your server is expected to be at ``http://127.0.0.1``
with a prefix of ``ms1``, and if this is the case, you can simply add the
``--monitor`` flag to the install command:
.. code-block:: console
$ spack analyze run --monitor wget
If you need to customize the host or the prefix, you can do that as well:
.. code-block:: console
$ spack analyze run --monitor --monitor-prefix monitor --monitor-host https://monitor-service.io wget
If your server doesn't have authentication, you can skip it:
.. code-block:: console
$ spack analyze run --monitor --monitor-disable-auth wget
Regardless of your choice, when you run analyze on an installed package (whether
it was installed with ``--monitor`` or not), you'll see the results generating as they did
before, and a message that the monitor server was pinged:
.. code-block:: console
$ spack analyze run --monitor wget
...
==> Sending result for wget bin/wget to monitor.
View File
@@ -27,17 +27,11 @@ It is recommended that the following be put in your ``.bashrc`` file:
 If you do not see colorized output when using ``less -R`` it is because color
 is being disabled in the piped output. In this case, tell spack to force
-colorized output with a flag
+colorized output.

 .. code-block:: console

-   $ spack --color always find | less -R
+   $ spack --color always | less -R

-or an environment variable
-
-.. code-block:: console
-
-   $ SPACK_COLOR=always spack find | less -R

--------------------------
Listing available packages
@@ -188,34 +182,6 @@ configuration a **spec**. In the commands above, ``mpileaks`` and
``mpileaks@3.0.4`` are both valid *specs*. We'll talk more about how
you can use them to customize an installation in :ref:`sec-specs`.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Reusing installed dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. warning::
The ``--reuse`` option described here is experimental, and it will
likely be replaced with a different option and configuration settings
in the next Spack release.
By default, when you run ``spack install``, Spack tries to build a new
version of the package you asked for, along with updated versions of
its dependencies. This gets you the latest versions and configurations,
but it can result in unwanted rebuilds if you update Spack frequently.
If you want Spack to try hard to reuse existing installations as dependencies,
you can add the ``--reuse`` option:
.. code-block:: console
$ spack install --reuse mpich
This will not do anything if ``mpich`` is already installed. If ``mpich``
is not installed, but dependencies like ``hwloc`` and ``libfabric`` are,
the ``mpich`` will be built with the installed versions, if possible.
You can use the :ref:`spack spec -I <cmd-spack-spec>` command to see what
will be reused and what will be built before you install.
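For example, a quick sketch of that check:

.. code-block:: console

    $ spack spec -I mpich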
.. _cmd-spack-uninstall:

^^^^^^^^^^^^^^^^^^^
@@ -723,136 +689,6 @@ structured the way you want:
   }
^^^^^^^^^^^^^^
``spack diff``
^^^^^^^^^^^^^^
It's often the case that you have two versions of a spec that you need to
disambiguate. Let's say that we've installed two variants of zlib, one with
and one without the optimize variant:
.. code-block:: console
$ spack install zlib
$ spack install zlib -optimize
When we do ``spack find`` we see the two versions.
.. code-block:: console
$ spack find zlib
==> 2 installed packages
-- linux-ubuntu20.04-skylake / gcc@9.3.0 ------------------------
zlib@1.2.11 zlib@1.2.11
Let's now say that we want to uninstall zlib. We run the command, and hit a problem
real quickly since we have two!
.. code-block:: console
$ spack uninstall zlib
==> Error: zlib matches multiple packages:
-- linux-ubuntu20.04-skylake / gcc@9.3.0 ------------------------
efzjziy zlib@1.2.11 sl7m27m zlib@1.2.11
==> Error: You can either:
a) use a more specific spec, or
b) specify the spec by its hash (e.g. `spack uninstall /hash`), or
c) use `spack uninstall --all` to uninstall ALL matching specs.
Oh no! We can see from the above that we have two different versions of zlib installed,
and the only difference between the two is the hash. This is a good use case for
``spack diff``, which can easily show us the "diff" or set difference
between properties for two packages. Let's try it out.
Since the only difference we see in the ``spack find`` view is the hash, let's use
``spack diff`` to look for more detail. We will provide the two hashes:
.. code-block:: console
$ spack diff /efzjziy /sl7m27m
==> Warning: This interface is subject to change.
--- zlib@1.2.11efzjziyc3dmb5h5u5azsthgbgog5mj7g
+++ zlib@1.2.11sl7m27mzkbejtkrajigj3a3m37ygv4u2
@@ variant_value @@
- zlib optimize False
+ zlib optimize True
The output is colored, and written in the style of a git diff. This means that you
can copy and paste it into a GitHub markdown as a code block with language "diff"
and it will render nicely! Here is an example:
.. code-block:: md
```diff
--- zlib@1.2.11/efzjziyc3dmb5h5u5azsthgbgog5mj7g
+++ zlib@1.2.11/sl7m27mzkbejtkrajigj3a3m37ygv4u2
@@ variant_value @@
- zlib optimize False
+ zlib optimize True
```
Awesome! Now let's read the diff. It tells us that our first zlib was built with ``~optimize``
(``False``) and the second was built with ``+optimize`` (``True``). You can't see it in the docs
here, but the output above is also colored based on the content being an addition (+) or
subtraction (-).
This is a small example, but you will be able to see differences for any attributes on the
installation spec. Running ``spack diff A B`` means we'll see which spec attributes are on
``B`` but not on ``A`` (green) and which are on ``A`` but not on ``B`` (red). Here is another
example with an additional difference type, ``version``:
.. code-block:: console
$ spack diff python@2.7.8 python@3.8.11
==> Warning: This interface is subject to change.
--- python@2.7.8/tsxdi6gl4lihp25qrm4d6nys3nypufbf
+++ python@3.8.11/yjtseru4nbpllbaxb46q7wfkyxbuvzxx
@@ variant_value @@
- python patches a8c52415a8b03c0e5f28b5d52ae498f7a7e602007db2b9554df28cd5685839b8
+ python patches 0d98e93189bc278fbc37a50ed7f183bd8aaf249a8e1670a465f0db6bb4f8cf87
@@ version @@
- openssl 1.0.2u
+ openssl 1.1.1k
- python 2.7.8
+ python 3.8.11
Let's say that we were only interested in one kind of attribute above, ``version``.
We can ask the command to only output this attribute. To do this, you'd add
the ``--attribute`` for attribute parameter, which defaults to all. Here is how you
would filter to show just versions:
.. code-block:: console
$ spack diff --attribute version python@2.7.8 python@3.8.11
==> Warning: This interface is subject to change.
--- python@2.7.8/tsxdi6gl4lihp25qrm4d6nys3nypufbf
+++ python@3.8.11/yjtseru4nbpllbaxb46q7wfkyxbuvzxx
@@ version @@
- openssl 1.0.2u
+ openssl 1.1.1k
- python 2.7.8
+ python 3.8.11
And you can add as many attributes as you'd like with multiple `--attribute` arguments
(for lots of attributes, you can use ``-a`` for short). Finally, if you want to view the
data as json (and possibly pipe into an output file) just add ``--json``:
.. code-block:: console
$ spack diff --json python@2.7.8 python@3.8.11
This data will be much longer because along with the differences for ``A`` vs. ``B`` and
``B`` vs. ``A``, the JSON output also shows the intersection.
------------------------
Using installed packages
------------------------
@@ -896,9 +732,8 @@ your path:
 These commands will add appropriate directories to your ``PATH``,
 ``MANPATH``, ``CPATH``, and ``LD_LIBRARY_PATH`` according to the
 :ref:`prefix inspections <customize-env-modifications>` defined in your
-modules configuration.
-When you no longer want to use a package, you can type unload or
-unuse similarly:
+modules configuration. When you no longer want to use a package, you
+can type unload or unuse similarly:

 .. code-block:: console
@@ -939,22 +774,6 @@ first ``libelf`` above, you would run:
   $ spack load /qmm4kso
To see which packages you have loaded into your environment, you would
use ``spack find --loaded``.
.. code-block:: console
$ spack find --loaded
==> 2 installed packages
-- linux-debian7 / gcc@4.4.7 ------------------------------------
libelf@0.8.13
-- linux-debian7 / intel@15.0.0 ---------------------------------
libelf@0.8.13
You can also use ``spack load --list`` to get the same output, but it
does not have the full set of query options that ``spack find`` offers.
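For example:

.. code-block:: console

    $ spack load --list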
We'll learn more about Spack's spec syntax in the next section.
@@ -1144,7 +963,7 @@ Variants are named options associated with a particular package. They are
 optional, as each package must provide default values for each variant it
 makes available. Variants can be specified using
 a flexible parameter syntax ``name=<value>``. For example,
-``spack install mercury debug=True`` will install mercury built with debug
+``spack install libelf debug=True`` will install libelf built with debug
 flags. The names of particular variants available for a package depend on
 what was provided by the package author. ``spack info <package>`` will
 provide information on what build variants are available.
@@ -1152,11 +971,11 @@ provide information on what build variants are available.
 For compatibility with earlier versions, variants which happen to be
 boolean in nature can be specified by a syntax that represents turning
 options on and off. For example, in the previous spec we could have
-supplied ``mercury +debug`` with the same effect of enabling the debug
+supplied ``libelf +debug`` with the same effect of enabling the debug
 compile time option for the libelf package.

 Depending on the package a variant may have any default value. For
-``mercury`` here, ``debug`` is ``False`` by default, and we turned it on
+``libelf`` here, ``debug`` is ``False`` by default, and we turned it on
 with ``debug=True`` or ``+debug``. If a variant is ``True`` by default
 you can turn it off by either adding ``-name`` or ``~name`` to the spec.
@@ -1694,7 +1513,6 @@ and it will be added to the ``PYTHONPATH`` in your current shell:
 Now ``import numpy`` will succeed for as long as you keep your current
 session open.

-The loaded packages can be checked using ``spack find --loaded``

 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 Loading Extensions via Modules
@@ -1906,39 +1724,6 @@ This issue typically manifests with the error below:
A nicer error message is TBD in future versions of Spack.
---------------
Troubleshooting
---------------
The ``spack audit`` command:
.. command-output:: spack audit -h
can be used to detect a number of configuration issues. This command detects
configuration settings which might not be strictly wrong but are not likely
to be useful outside of special cases.
It can also be used to detect dependency issues with packages - for example
cases where a package constrains a dependency with a variant that doesn't
exist (in this case Spack could report the problem ahead of time but
automatically performing the check would slow down most runs of Spack).
A detailed list of the checks currently implemented for each subcommand can be
printed with:
.. command-output:: spack -v audit list
Depending on the use case, users might run the appropriate subcommands to obtain
diagnostics. Issues, if found, are reported to stdout:
.. code-block:: console
% spack audit packages lammps
PKG-DIRECTIVES: 1 issue found
1. lammps: wrong variant in "conflicts" directive
the variant 'adios' does not exist
in /home/spack/spack/var/spack/repos/builtin/packages/lammps/package.py
------------
Getting Help
View File
@@ -31,25 +31,9 @@ Build caches are created via:
 .. code-block:: console

-   $ spack buildcache create <spec>
+   $ spack buildcache create spec
If you wanted to create a build cache in a local directory, you would provide
the ``-d`` argument to target that directory, again also specifying the spec.
Here is an example creating a local directory, "spack-cache" and creating
build cache files for the "ninja" spec:
.. code-block:: console
$ mkdir -p ./spack-cache
$ spack buildcache create -d ./spack-cache ninja
==> Buildcache files will be output to file:///home/spackuser/spack/spack-cache/build_cache
gpgconf: socketdir is '/run/user/1000/gnupg'
gpg: using "E6DF6A8BD43208E4D6F392F23777740B7DBD643D" as default secret key for signing
Note that the targeted spec must already be installed. Once you have a build cache,
you can add it as a mirror, discussed next.
---------------------------------------
Finding or installing build cache files
---------------------------------------
@@ -61,96 +45,17 @@ with:
   $ spack mirror add <name> <url>
Build caches are found via:
Note that the url can be a web url _or_ a local filesystem location. In the previous
example, you might add the directory "spack-cache" and call it ``mymirror``:
.. code-block:: console
$ spack mirror add mymirror ./spack-cache
You can see that the mirror is added with ``spack mirror list`` as follows:
.. code-block:: console
$ spack mirror list
mymirror file:///home/spackuser/spack/spack-cache
spack-public https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/
At this point, you've created a buildcache, but spack hasn't indexed it, so if
you run ``spack buildcache list`` you won't see any results. You need to index
this new build cache as follows:
.. code-block:: console
$ spack buildcache update-index -d spack-cache/
Now you can use list:
.. code-block:: console

   $ spack buildcache list
==> 1 cached build.
-- linux-ubuntu20.04-skylake / gcc@9.3.0 ------------------------
ninja@1.10.2
Build caches are installed via:
Great! So now let's say you have a different spack installation, or perhaps just
a different environment for the same one, and you want to install a package from
that build cache. Let's first uninstall the actual library "ninja" to see if we can
re-install it from the cache.
 .. code-block:: console

-   $ spack uninstall ninja
+   $ spack buildcache install
And now reinstall from the buildcache
.. code-block:: console
$ spack buildcache install ninja
==> buildcache spec(s) matching ninja
==> Fetching file:///home/spackuser/spack/spack-cache/build_cache/linux-ubuntu20.04-skylake/gcc-9.3.0/ninja-1.10.2/linux-ubuntu20.04-skylake-gcc-9.3.0-ninja-1.10.2-i4e5luour7jxdpc3bkiykd4imke3mkym.spack
####################################################################################################################################### 100.0%
==> Installing buildcache for spec ninja@1.10.2%gcc@9.3.0 arch=linux-ubuntu20.04-skylake
gpgconf: socketdir is '/run/user/1000/gnupg'
gpg: Signature made Tue 23 Mar 2021 10:16:29 PM MDT
gpg: using RSA key E6DF6A8BD43208E4D6F392F23777740B7DBD643D
gpg: Good signature from "spackuser (GPG created for Spack) <spackuser@noreply.users.github.com>" [ultimate]
It worked! You've just completed a full example of creating a build cache with
a spec of interest, adding it as a mirror, updating its index, listing the contents,
and finally, installing from it.
Note that the above command is intended to install a particular package to a
build cache you have created, and not to install a package from a build cache.
For the latter, once a mirror is added, by default when you do ``spack install`` the ``--use-cache``
flag is set, and you will install a package from a build cache if it is available.
If you want to always use the cache, you can do:
.. code-block:: console
$ spack install --cache-only <package>
For example, to combine all of the commands above to add the E4S build cache
and then install from it exclusively, you would do:
.. code-block:: console
$ spack mirror add E4S https://cache.e4s.io
$ spack buildcache keys --install --trust
$ spack install --cache-only <package>
We use ``--install`` and ``--trust`` to say that we are installing keys to our
keyring, and trusting all downloaded keys.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
List of popular build caches
View File
@@ -59,11 +59,9 @@ on these ideas for each distinct build system that Spack supports:
    build_systems/bundlepackage
    build_systems/cudapackage
-   build_systems/inteloneapipackage
    build_systems/intelpackage
    build_systems/rocmpackage
    build_systems/custompackage
-   build_systems/multiplepackage

 For reference, the :py:mod:`Build System API docs <spack.build_systems>`
 provide a list of build systems and methods/attributes that can be
View File
@@ -112,44 +112,20 @@ phase runs:
 .. code-block:: console

-   $ autoreconf --install --verbose --force -I <aclocal-prefix>/share/aclocal
-
-In case you need to add more arguments, override ``autoreconf_extra_args``
-in your ``package.py`` on class scope like this:
-
-.. code-block:: python
-
-   autoreconf_extra_args = ["-Im4"]
+   $ libtoolize
+   $ aclocal
+   $ autoreconf --install --verbose --force

 All you need to do is add a few Autotools dependencies to the package.
 Most stable releases will come with a ``configure`` script, but if you
-check out a commit from the ``master`` branch, you would want to add:
+check out a commit from the ``develop`` branch, you would want to add:

 .. code-block:: python

-   depends_on('autoconf', type='build', when='@master')
-   depends_on('automake', type='build', when='@master')
-   depends_on('libtool',  type='build', when='@master')
-
-It is typically redundant to list the ``m4`` macro processor package as a
-dependency, since ``autoconf`` already depends on it.
+   depends_on('autoconf', type='build', when='@develop')
+   depends_on('automake', type='build', when='@develop')
+   depends_on('libtool',  type='build', when='@develop')
+   depends_on('m4',       type='build', when='@develop')
"""""""""""""""""""""""""""""""
Using a custom autoreconf phase
"""""""""""""""""""""""""""""""
In some cases, it might be needed to replace the default implementation
of the autoreconf phase with one running a script interpreter. In this
example, the ``bash`` shell is used to run the ``autogen.sh`` script.
.. code-block:: python
def autoreconf(self, spec, prefix):
which('bash')('autogen.sh')
"""""""""""""""""""""""""""""""""""""""
patching configure or Makefile.in files
"""""""""""""""""""""""""""""""""""""""
In some cases, developers might need to distribute a patch that modifies
one of the files used to generate ``configure`` or ``Makefile.in``.
@@ -159,57 +135,6 @@ create a new patch that directly modifies ``configure``. That way,
Spack can use the secondary patch and additional build system
dependencies aren't necessary.
""""""""""""""""""""""""""""
Old Autotools helper scripts
""""""""""""""""""""""""""""
Autotools based tarballs come with helper scripts such as ``config.sub`` and
``config.guess``. It is the responsibility of the developers to keep these files
up to date so that they run on every platform, but for very old software
releases this is impossible. In these cases Spack can help to replace these
files with newer ones, without having to add the heavy dependency on
``automake``.
Automatic helper script replacement is currently enabled by default on
``ppc64le`` and ``aarch64``, as these are the known cases where old scripts fail.
On these targets, ``AutotoolsPackage`` adds a build dependency on ``gnuconfig``,
which is a very light-weight package with newer versions of the helper files.
Spack then tries to run all the helper scripts it can find in the release, and
replaces them on failure with the helper scripts from ``gnuconfig``.
To opt out of this feature, use the following setting:
.. code-block:: python
patch_config_files = False
To enable it conditionally on different architectures, define a property and
make the package depend on ``gnuconfig`` as a build dependency:
.. code-block:: python
depends_on('gnuconfig', when='@1.0:')
@property
def patch_config_files(self):
return self.spec.satisfies("@1.0:")
.. note::
On some exotic architectures it is necessary to use system provided
``config.sub`` and ``config.guess`` files. In this case, the most
transparent solution is to mark the ``gnuconfig`` package as external and
non-buildable, with a prefix set to the directory containing the files:
.. code-block:: yaml
gnuconfig:
buildable: false
externals:
- spec: gnuconfig@master
          prefix: /usr/share/configure_files/

""""""""""""""""
force_autoreconf
""""""""""""""""
@@ -230,7 +155,7 @@ version, this can be done like so:
     @property
     def force_autoreconf(self):
-        return self.version == Version('1.2.3')
+        return self.version == Version('1.2.3'):
^^^^^^^^^^^^^^^^^^^^^^^
Finding configure flags
@@ -399,29 +324,8 @@ options:
--with-libfabric=</path/to/libfabric> --with-libfabric=</path/to/libfabric>
"""""""""""""""""""""""
The ``variant`` keyword
"""""""""""""""""""""""
When Spack variants and configure flags do not correspond one-to-one, the
``variant`` keyword can be passed to ``with_or_without`` and
``enable_or_disable``. For example:
.. code-block:: python
variant('debug_tools', default=False)
config_args += self.enable_or_disable('debug-tools', variant='debug_tools')
Or when one variant controls multiple flags:
.. code-block:: python
variant('debug_tools', default=False)
config_args += self.with_or_without('memchecker', variant='debug_tools')
   config_args += self.with_or_without('profiler', variant='debug_tools')

""""""""""""""""""""
-Activation overrides
+activation overrides
""""""""""""""""""""

 Finally, the behavior of either ``with_or_without`` or
View File
@@ -130,8 +130,8 @@ Adding flags to cmake
 To add additional flags to the ``cmake`` call, simply override the
 ``cmake_args`` function. The following example defines values for the flags
 ``WHATEVER``, ``ENABLE_BROKEN_FEATURE``, ``DETECT_HDF5``, and ``THREADS`` with
-and without the :meth:`~spack.build_systems.cmake.CMakePackage.define` and
-:meth:`~spack.build_systems.cmake.CMakePackage.define_from_variant` helper functions:
+and without the :py:meth:`~.CMakePackage.define` and
+:py:meth:`~.CMakePackage.define_from_variant` helper functions:

 .. code-block:: python
View File
@@ -10,7 +10,7 @@ CudaPackage
----------- -----------
Different from other packages, ``CudaPackage`` does not represent a build system. Different from other packages, ``CudaPackage`` does not represent a build system.
Instead its goal is to simplify and unify usage of ``CUDA`` in other packages by providing a `mixin-class <https://en.wikipedia.org/wiki/Mixin>`_. Instead its goal is to simplify and unify usage of ``CUDA`` in other packages by providing a ` mixin-class <https://en.wikipedia.org/wiki/Mixin>`__.
You can find source for the package at You can find source for the package at
`<https://github.com/spack/spack/blob/develop/lib/spack/spack/build_systems/cuda.py>`__. `<https://github.com/spack/spack/blob/develop/lib/spack/spack/build_systems/cuda.py>`__.
View File
@@ -9,7 +9,7 @@
Custom Build Systems Custom Build Systems
-------------------- --------------------
While the built-in build systems should meet your needs for the While the build systems listed above should meet your needs for the
vast majority of packages, some packages provide custom build scripts. vast majority of packages, some packages provide custom build scripts.
This guide is intended for the following use cases: This guide is intended for the following use cases:
@@ -31,7 +31,7 @@ installation. Both of these packages require custom build systems.
Base class Base class
^^^^^^^^^^ ^^^^^^^^^^
If your package does not belong to any of the built-in build If your package does not belong to any of the aforementioned build
systems that Spack already supports, you should inherit from the systems that Spack already supports, you should inherit from the
``Package`` base class. ``Package`` is a simple base class with a ``Package`` base class. ``Package`` is a simple base class with a
single phase: ``install``. If your package is simple, you may be able single phase: ``install``. If your package is simple, you may be able
@@ -168,8 +168,7 @@ if and only if this flag is set, we would use the following line:
Testing Testing
^^^^^^^ ^^^^^^^
Let's put everything together and add unit tests to be optionally run Let's put everything together and add unit tests to our package.
during the installation of our package.
In the ``perl`` package, we can see: In the ``perl`` package, we can see:
.. code-block:: python .. code-block:: python
@@ -183,6 +182,12 @@ As you can guess, this runs ``make test`` *after* building the package,
if and only if testing is requested. Again, this is not specific to if and only if testing is requested. Again, this is not specific to
custom build systems, it can be added to existing build systems as well. custom build systems, it can be added to existing build systems as well.
Ideally, every package in Spack will have some sort of test to ensure
that it was built correctly. It is up to the package authors to make
sure this happens. If you are adding a package for some software and
the developers list commands to test the installation, please add these
tests to your ``package.py``.
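A minimal sketch of such an optional check (``run_after`` and
``on_package_attributes`` are real Spack helpers; the body here is
illustrative rather than copied from ``perl``):

.. code-block:: python

   @run_after('build')
   @on_package_attributes(run_tests=True)
   def check_build(self):
       # Only runs when the user requested tests, e.g. with --test=root
       make('test')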
.. warning:: .. warning::
The order of decorators matters. The following ordering: The order of decorators matters. The following ordering:
@@ -202,12 +207,3 @@ custom build systems, it can be added to existing build systems as well.
the tests will always be run regardless of whether or not the tests will always be run regardless of whether or not
``--test=root`` is requested. See https://github.com/spack/spack/issues/3833 ``--test=root`` is requested. See https://github.com/spack/spack/issues/3833
for more information for more information
Ideally, every package in Spack will have some sort of test to ensure
that it was built correctly. It is up to the package authors to make
sure this happens. If you are adding a package for some software and
the developers list commands to test the installation, please add these
tests to your ``package.py``.
For more information on other forms of package testing, refer to
:ref:`Checking an installation <checking_an_installation>`.
View File
@@ -1,155 +0,0 @@
.. Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _inteloneapipackage:
====================
IntelOneapiPackage
====================
.. contents::
oneAPI packages in Spack
========================
Spack can install and use the Intel oneAPI products. You may either
use spack to install the oneAPI tools or use the `Intel
installers`_. After installation, you may use the tools directly, or
use Spack to build packages with the tools.
The Spack Python class ``IntelOneapiPackage`` is a base class that is
used by ``IntelOneapiCompilers``, ``IntelOneapiMkl``,
``IntelOneapiTbb`` and other classes to implement the oneAPI
packages. See the :ref:`package-list` for the full list of available
oneAPI packages or use::
spack list -d oneAPI
For more information on a specific package, do::
spack info <package-name>
Intel no longer releases new versions of Parallel Studio, which can be
used in Spack via the :ref:`intelpackage`. All of its components can
now be found in oneAPI.
Examples
========
Building a Package With icx
---------------------------
In this example, we build patchelf with ``icc`` and ``icx``. The
compilers are installed with spack.
Install the oneAPI compilers::
spack install intel-oneapi-compilers
Add the compilers to your ``compilers.yaml`` so spack can use them::
spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/linux/bin/intel64
spack compiler add `spack location -i intel-oneapi-compilers`/compiler/latest/linux/bin
Verify that the compilers are available::
spack compiler list
The ``intel-oneapi-compilers`` package includes 2 families of
compilers:
* ``intel``: ``icc``, ``icpc``, ``ifort``. Intel's *classic*
compilers.
* ``oneapi``: ``icx``, ``icpx``, ``ifx``. Intel's new generation of
compilers based on LLVM.
To build the ``patchelf`` Spack package with ``icc``, do::
spack install patchelf%intel
To build with ``icx``, do ::
spack install patchelf%oneapi
Using oneAPI MPI to Satisfy a Virtual Dependence
------------------------------------------------------
The ``hdf5`` package works with any compatible MPI implementation. To
build ``hdf5`` with Intel oneAPI MPI do::
spack install hdf5 +mpi ^intel-oneapi-mpi
Using an Externally Installed oneAPI
====================================
Spack can also use oneAPI tools that are manually installed with
`Intel Installers`_. The procedures for configuring Spack to use
external compilers and libraries are different.
Compilers
---------
To use the compilers, add some information about the installation to
``compilers.yaml``. For most users, it is sufficient to do::
spack compiler add /opt/intel/oneapi/compiler/latest/linux/bin/intel64
spack compiler add /opt/intel/oneapi/compiler/latest/linux/bin
Adapt the paths above if you did not install the tools in the default
location. After adding the compilers, using them is the same
as if you had installed the ``intel-oneapi-compilers`` package.
Another option is to manually add the configuration to
``compilers.yaml`` as described in :ref:`Compiler configuration
<compiler-config>`.
Libraries
---------
If you want Spack to use MKL that you have installed without Spack in
the default location, then add the following to
``~/.spack/packages.yaml``, adjusting the version as appropriate::
intel-oneapi-mkl:
externals:
- spec: intel-oneapi-mkl@2021.1.1
prefix: /opt/intel/oneapi/
Using oneAPI Tools Installed by Spack
=====================================
Spack can be a convenient way to install and configure compilers and
libraries, even if you do not intend to build a Spack package. If you
want to build a Makefile project using Spack-installed oneAPI compilers,
then use spack to configure your environment::
spack load intel-oneapi-compilers
And then you can build with::
CXX=icpx make
You can also use Spack-installed libraries. For example::
spack load intel-oneapi-mkl
Will update your environment CPATH, LIBRARY_PATH, and other
environment variables for building an application with MKL.
More information
================
This section describes basic use of oneAPI, especially if it has
changed compared to Parallel Studio. See :ref:`intelpackage` for more
information on :ref:`intel-virtual-packages`,
:ref:`intel-unrelated-packages`,
:ref:`intel-integrating-external-libraries`, and
:ref:`using-mkl-tips`.
.. _`Intel installers`: https://software.intel.com/content/www/us/en/develop/documentation/installation-guide-for-intel-oneapi-toolkits-linux/top.html
View File
@@ -137,7 +137,6 @@ If you need to save disk space or installation time, you could install the
``intel`` compilers-only subset (0.6 GB) and just the library packages you ``intel`` compilers-only subset (0.6 GB) and just the library packages you
need, for example ``intel-mpi`` (0.5 GB) and ``intel-mkl`` (2.5 GB). need, for example ``intel-mpi`` (0.5 GB) and ``intel-mkl`` (2.5 GB).
.. _intel-unrelated-packages:
"""""""""""""""""""" """"""""""""""""""""
Unrelated packages Unrelated packages
@@ -359,8 +358,6 @@ affected by an advanced third method:
Next, visit section `Selecting Intel Compilers`_ to learn how to tell Next, visit section `Selecting Intel Compilers`_ to learn how to tell
Spack to use the newly configured compilers. Spack to use the newly configured compilers.
.. _intel-integrating-external-libraries:
"""""""""""""""""""""""""""""""""" """"""""""""""""""""""""""""""""""
Integrating external libraries Integrating external libraries
"""""""""""""""""""""""""""""""""" """"""""""""""""""""""""""""""""""
@@ -561,29 +558,43 @@ follow `the next section <intel-install-libs_>`_ instead.
modules: [] modules: []
spec: intel@18.0.3 spec: intel@18.0.3
paths: paths:
cc: /usr/bin/true cc: stub
cxx: /usr/bin/true cxx: stub
f77: /usr/bin/true f77: stub
fc: /usr/bin/true fc: stub
Replace ``18.0.3`` with the version that you determined in the preceding Replace ``18.0.3`` with the version that you determined in the preceding
step. The exact contents under ``paths:`` do not matter yet, but the paths must exist. step. The contents under ``paths:`` do not matter yet.
This temporary stub is required such that the ``intel-parallel-studio`` package You are right to ask: "Why on earth is that necessary?" [fn8]_.
can be installed for the ``intel`` compiler (which the package itself is going The answer lies in Spack striving for strict compiler consistency.
to provide after the installation) rather than an arbitrary system compiler. Consider what happens without such a pre-declared compiler stub:
The paths given in ``cc``, ``cxx``, ``f77``, ``fc`` must exist, but will Say, you ask Spack to install a particular version
never be used to build anything during the installation of ``intel-parallel-studio``. ``intel-parallel-studio@edition.V``. Spack will apply an unrelated compiler
spec to concretize and install your request, resulting in
``intel-parallel-studio@edition.V %X``. That compiler ``%X`` is not going to
be the version that this new package itself provides. Rather, it would
typically be ``%gcc@...`` in a default Spack installation or possibly indeed
``%intel@...``, but at a version that precedes ``V``.
The reason for this stub is that ``intel-parallel-studio`` also provides the The problem comes to the fore as soon as you try to use any virtual ``mkl``
``mpi`` and ``mkl`` packages and when concretizing a spec, Spack ensures or ``mpi`` packages that you would expect to now be provided by
strong consistency of the used compiler across all dependencies: [fn8]_. ``intel-parallel-studio@edition.V``. Spack will indeed see those virtual
Installing a package ``foo +mkl %intel`` will make Spack look for a package packages, but only as being tied to the compiler that the package
``mkl %intel``, which can be provided by ``intel-parallel-studio+mkl %intel``, ``intel-parallel-studio@edition.V`` was concretized with *at installation*.
but not by ``intel-parallel-studio+mkl %gcc``. If you were to install a client package with the new compilers now available
to you, you would naturally run ``spack install foo +mkl %intel@V``, yet
Spack will either complain about ``mkl%intel@V`` being missing (because it
only knows about ``mkl%X``) or it will go and attempt to install *another
instance* of ``intel-parallel-studio@edition.V %intel@V`` so as to match the
compiler spec ``%intel@V`` that you gave for your client package ``foo``.
This will be unexpected and will quickly get annoying because each
reinstallation takes up time and extra disk space.
Failure to do so may result in additional installations of ``mkl``, ``intel-mpi`` or To escape this trap, put the compiler stub declaration shown here in place,
even ``intel-parallel-studio`` as dependencies for other packages. then use that pre-declared compiler spec to install the actual package, as
shown next. This approach works because during installation only the
package's own self-sufficient installer will be used, not any compiler.
.. _`verify-compiler-anticipated`: .. _`verify-compiler-anticipated`:
@@ -634,25 +645,11 @@ follow `the next section <intel-install-libs_>`_ instead.
want to use the ``intel64`` variant. The ``icpc`` and ``ifort`` compilers want to use the ``intel64`` variant. The ``icpc`` and ``ifort`` compilers
will be located in the same directory as ``icc``. will be located in the same directory as ``icc``.
* Make sure to specify ``modules: ['intel-parallel-studio-cluster2018.3-intel-18.0.3-HASH']`` * Use the ``modules:`` and/or ``cflags:`` tokens to specify a suitable accompanying
(with ``HASH`` being the short hash as displayed when running
``spack find -l intel-parallel-studio@cluster.2018.3`` and the versions adapted accordingly)
to ensure that the correct and complete environment for the Intel compilers gets
loaded when running them. With modern versions of the Intel compiler you may otherwise see
issues about missing libraries. Please also note that the module name must exactly match
the name as returned by ``module avail`` (and shown in the example above).
* Use the ``modules:`` and/or ``cflags:`` tokens to further specify a suitable accompanying
``gcc`` version to help pacify picky client packages that ask for C++ ``gcc`` version to help pacify picky client packages that ask for C++
standards more recent than supported by your system-provided ``gcc`` and its standards more recent than supported by your system-provided ``gcc`` and its
``libstdc++.so``. ``libstdc++.so``.
* If you specified a custom variant (for example ``+vtune``) you may want to add this as your
preferred variant in the packages configuration for the ``intel-parallel-studio`` package
as described in :ref:`concretization-preferences`. Otherwise you will have to specify
the variant every time ``intel-parallel-studio`` is being used as ``mkl``, ``fftw`` or ``mpi``
implementation to avoid pulling in a different variant.
* To set the Intel compilers for default use in Spack, instead of the usual ``%gcc``, * To set the Intel compilers for default use in Spack, instead of the usual ``%gcc``,
follow section `Selecting Intel compilers`_. follow section `Selecting Intel compilers`_.
@@ -837,7 +834,6 @@ for example:
compiler: [ intel@18, intel@17, gcc@4.4.7, gcc@4.9.3, gcc@7.3.0, ] compiler: [ intel@18, intel@17, gcc@4.4.7, gcc@4.9.3, gcc@7.3.0, ]
.. _intel-virtual-packages:
"""""""""""""""""""""""""""""""""""""""""""""""" """"""""""""""""""""""""""""""""""""""""""""""""
Selecting libraries to satisfy virtual packages Selecting libraries to satisfy virtual packages
@@ -911,7 +907,6 @@ With the proper installation as detailed above, no special steps should be
required when a client package specifically (and thus deliberately) requests an required when a client package specifically (and thus deliberately) requests an
Intel package as dependency, this being one of the target use cases for Spack. Intel package as dependency, this being one of the target use cases for Spack.
.. _using-mkl-tips:
""""""""""""""""""""""""""""""""""""""""""""""" """""""""""""""""""""""""""""""""""""""""""""""
Tips for configuring client packages to use MKL Tips for configuring client packages to use MKL
View File
@@ -147,10 +147,8 @@ and a ``filter_file`` method to help with this. For example:
def edit(self, spec, prefix): def edit(self, spec, prefix):
makefile = FileFilter('Makefile') makefile = FileFilter('Makefile')
makefile.filter(r'^\s*CC\s*=.*', 'CC = ' + spack_cc) makefile.filter('CC = gcc', 'CC = cc')
makefile.filter(r'^\s*CXX\s*=.*', 'CXX = ' + spack_cxx) makefile.filter('CXX = g++', 'CC = c++')
makefile.filter(r'^\s*F77\s*=.*', 'F77 = ' + spack_f77)
makefile.filter(r'^\s*FC\s*=.*', 'FC = ' + spack_fc)
`stream <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/stream/package.py>`_ `stream <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/stream/package.py>`_
View File
@@ -121,15 +121,11 @@ override the ``meson_args`` method like so:
.. code-block:: python .. code-block:: python
def meson_args(self): def meson_args(self):
return ['--warnlevel=3'] return ['--default-library=both']
This method can be used to pass flags as well as variables. This method can be used to pass flags as well as variables.
Note that the ``MesonPackage`` base class already defines variants for
``buildtype``, ``default_library`` and ``strip``, which are mapped to default
Meson arguments, meaning that you don't have to specify these.
^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^
External documentation External documentation
^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^
View File
@@ -1,350 +0,0 @@
.. Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _multiplepackage:
----------------------
Multiple Build Systems
----------------------
Quite frequently, a package will change build systems from one version to the
next. For example, a small project that once used a single Makefile to build
may now require Autotools to handle the increased number of files that need to
be compiled. Or, a package that once used Autotools may switch to CMake for
Windows support. In this case, it becomes a bit more challenging to write a
single build recipe for this package in Spack.
There are several ways that this can be handled in Spack:
#. Subclass the new build system, and override phases as needed (preferred)
#. Subclass ``Package`` and implement ``install`` as needed
#. Create separate ``*-cmake``, ``*-autotools``, etc. packages for each build system
#. Rename the old package to ``*-legacy`` and create a new package
#. Move the old package to a ``legacy`` repository and create a new package
#. Drop older versions that only support the older build system
Of these options, 1 is preferred, and will be demonstrated in this
documentation. Options 3-5 have issues with concretization, so shouldn't be
used. Options 4-5 also don't support more than two build systems. Option 6 only
works if the old versions are no longer needed. Option 1 is preferred over 2
because it makes it easier to drop the old build system entirely.
The exact syntax of the package depends on which build systems you need to
support. Below are a couple of common examples.
^^^^^^^^^^^^^^^^^^^^^
Makefile -> Autotools
^^^^^^^^^^^^^^^^^^^^^
Let's say we have the following package:
.. code-block:: python
class Foo(MakefilePackage):
version("1.2.0", sha256="...")
def edit(self, spec, prefix):
filter_file("CC=", "CC=" + spack_cc, "Makefile")
def install(self, spec, prefix):
install_tree(".", prefix)
The package subclasses from :ref:`makefilepackage`, which has three phases:
#. ``edit`` (does nothing by default)
#. ``build`` (runs ``make`` by default)
#. ``install`` (runs ``make install`` by default)
In this case, the ``install`` phase needed to be overridden because the
Makefile did not have an install target. We also modify the Makefile to use
Spack's compiler wrappers. The default ``build`` phase is not changed.
Starting with version 1.3.0, we want to use Autotools to build instead.
:ref:`autotoolspackage` has four phases:
#. ``autoreconf`` (does not run if a configure script already exists)
#. ``configure`` (runs ``./configure --prefix=...`` by default)
#. ``build`` (runs ``make`` by default)
#. ``install`` (runs ``make install`` by default)
If the only version we need to support is 1.3.0, the package would look as
simple as:
.. code-block:: python
class Foo(AutotoolsPackage):
version("1.3.0", sha256="...")
def configure_args(self):
return ["--enable-shared"]
In this case, we use the default methods for each phase and only override
``configure_args`` to specify additional flags to pass to ``./configure``.
If we wanted to write a single package that supports both versions 1.2.0 and
1.3.0, it would look something like:
.. code-block:: python
class Foo(AutotoolsPackage):
version("1.3.0", sha256="...")
version("1.2.0", sha256="...", deprecated=True)
def configure_args(self):
return ["--enable-shared"]
# Remove the following once version 1.2.0 is dropped
@when("@:1.2")
def patch(self):
filter_file("CC=", "CC=" + spack_cc, "Makefile")
@when("@:1.2")
def autoreconf(self, spec, prefix):
pass
@when("@:1.2")
def configure(self, spec, prefix):
pass
@when("@:1.2")
def install(self, spec, prefix):
install_tree(".", prefix)
There are a few interesting things to note here:
* We added ``deprecated=True`` to version 1.2.0. This signifies that version
1.2.0 is deprecated and shouldn't be used. However, if a user still relies
on version 1.2.0, it's still there and builds just fine.
* We moved the contents of the ``edit`` phase to the ``patch`` function. Since
``AutotoolsPackage`` doesn't have an ``edit`` phase, the only way for this
step to be executed is to move it to the ``patch`` function, which always
gets run.
* The ``autoreconf`` and ``configure`` phases become no-ops. Since the old
Makefile-based build system doesn't use these, we ignore these phases when
building ``foo@1.2.0``.
* The ``@when`` decorator is used to override these phases only for older
versions. The default methods are used for ``foo@1.3:``.
Once a new Spack release comes out, version 1.2.0 and everything below the
comment can be safely deleted. The result is the same as if we had written a
package for version 1.3.0 from scratch.
^^^^^^^^^^^^^^^^^^
Autotools -> CMake
^^^^^^^^^^^^^^^^^^
Let's say we have the following package:
.. code-block:: python
class Bar(AutotoolsPackage):
version("1.2.0", sha256="...")
def configure_args(self):
return ["--enable-shared"]
The package subclasses from :ref:`autotoolspackage`, which has four phases:
#. ``autoreconf`` (does not run if a configure script already exists)
#. ``configure`` (runs ``./configure --prefix=...`` by default)
#. ``build`` (runs ``make`` by default)
#. ``install`` (runs ``make install`` by default)
In this case, we use the default methods for each phase and only override
``configure_args`` to specify additional flags to pass to ``./configure``.
Starting with version 1.3.0, we want to use CMake to build instead.
:ref:`cmakepackage` has three phases:
#. ``cmake`` (runs ``cmake ...`` by default)
#. ``build`` (runs ``make`` by default)
#. ``install`` (runs ``make install`` by default)
If the only version we need to support is 1.3.0, the package would look as
simple as:
.. code-block:: python
class Bar(CMakePackage):
version("1.3.0", sha256="...")
def cmake_args(self):
return [self.define("BUILD_SHARED_LIBS", True)]
In this case, we use the default methods for each phase and only override
``cmake_args`` to specify additional flags to pass to ``cmake``.
If we wanted to write a single package that supports both versions 1.2.0 and
1.3.0, it would look something like:
.. code-block:: python
class Bar(CMakePackage):
version("1.3.0", sha256="...")
version("1.2.0", sha256="...", deprecated=True)
def cmake_args(self):
return [self.define("BUILD_SHARED_LIBS", True)]
# Remove the following once version 1.2.0 is dropped
def configure_args(self):
return ["--enable-shared"]
@when("@:1.2")
def cmake(self, spec, prefix):
configure("--prefix=" + prefix, *self.configure_args())
There are a few interesting things to note here:
* We added ``deprecated=True`` to version 1.2.0. This signifies that version
1.2.0 is deprecated and shouldn't be used. However, if a user still relies
on version 1.2.0, it's still there and builds just fine.
* Since CMake and Autotools are so similar, we only need to override the
``cmake`` phase, we can use the default ``build`` and ``install`` phases.
* We override ``cmake`` to run ``./configure`` for older versions.
``configure_args`` remains the same.
* The ``@when`` decorator is used to override these phases only for older
versions. The default methods are used for ``bar@1.3:``.
Once a new Spack release comes out, version 1.2.0 and everything below the
comment can be safely deleted. The result is the same as if we had written a
package for version 1.3.0 from scratch.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Multiple build systems for the same version
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
During the transition from one build system to another, developers often
support multiple build systems at the same time. Spack can only use a single
build system for a single version. To decide which build system to use for a
particular version, take the following things into account:
1. If the developers explicitly state that one build system is preferred over
another, use that one.
2. If one build system is considered "experimental" while another is considered
"stable", use the stable build system.
3. Otherwise, use the newer build system.
The developer preference for which build system to use can change over time as
a newer build system becomes stable/recommended.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Dropping support for old build systems
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When older versions of a package don't support a newer build system, it can be
tempting to simply delete them from a package. This significantly reduces
package complexity and makes the build recipe much easier to maintain. However,
other packages or Spack users may rely on these older versions. The recommended
approach is to first support both build systems (as demonstrated above),
:ref:`deprecate <deprecate>` versions that rely on the old build system, and
remove those versions and any phases that needed to be overridden in the next
Spack release.
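For instance, a minimal sketch of the deprecation step (version numbers and
hashes are placeholders, following the examples earlier in this section):

.. code-block:: python

   class Foo(CMakePackage):
       version("1.3.0", sha256="...")
       # Still installable, but flagged for removal in a future release
       version("1.2.0", sha256="...", deprecated=True)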
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Three or more build systems
^^^^^^^^^^^^^^^^^^^^^^^^^^^
In rare cases, a package may change build systems multiple times. For example,
a package may start with Makefiles, then switch to Autotools, then switch to
CMake. The same logic used above can be extended to any number of build systems.
For example:
.. code-block:: python
class Baz(CMakePackage):
version("1.4.0", sha256="...") # CMake
version("1.3.0", sha256="...") # Autotools
version("1.2.0", sha256="...") # Makefile
def cmake_args(self):
return [self.define("BUILD_SHARED_LIBS", True)]
# Remove the following once version 1.3.0 is dropped
def configure_args(self):
return ["--enable-shared"]
@when("@1.3")
def cmake(self, spec, prefix):
configure("--prefix=" + prefix, *self.configure_args())
# Remove the following once version 1.2.0 is dropped
@when("@:1.2")
def patch(self):
filter_file("CC=", "CC=" + spack_cc, "Makefile")
@when("@:1.2")
def cmake(self, spec, prefix):
pass
@when("@:1.2")
def install(self, spec, prefix):
install_tree(".", prefix)
^^^^^^^^^^^^^^^^^^^
Additional examples
^^^^^^^^^^^^^^^^^^^
When writing new packages, it often helps to see examples of existing packages.
Here is an incomplete list of existing Spack packages that have changed build
systems before:
================ ===================== ================
Package Previous Build System New Build System
================ ===================== ================
amber custom CMake
arpack-ng Autotools CMake
atk Autotools Meson
blast None Autotools
dyninst Autotools CMake
evtgen Autotools CMake
fish Autotools CMake
gdk-pixbuf Autotools Meson
glib Autotools Meson
glog Autotools CMake
gmt Autotools CMake
gtkplus Autotools Meson
hpl Makefile Autotools
interproscan Perl Maven
jasper Autotools CMake
kahip SCons CMake
kokkos Makefile CMake
kokkos-kernels Makefile CMake
leveldb Makefile CMake
libdrm Autotools Meson
libjpeg-turbo Autotools CMake
mesa Autotools Meson
metis None CMake
mpifileutils Autotools CMake
muparser Autotools CMake
mxnet Makefile CMake
nest Autotools CMake
neuron Autotools CMake
nsimd CMake nsconfig
opennurbs Makefile CMake
optional-lite None CMake
plasma Makefile CMake
preseq Makefile Autotools
protobuf Autotools CMake
py-pygobject Autotools Python
singularity Autotools Makefile
span-lite None CMake
ssht Makefile CMake
string-view-lite None CMake
superlu Makefile CMake
superlu-dist Makefile CMake
uncrustify Autotools CMake
================ ===================== ================
Packages that support multiple build systems can be a bit confusing to write.
Don't hesitate to open an issue or draft pull request and ask for advice from
other Spack developers!
View File
@@ -336,7 +336,7 @@ This would be translated to:
.. code-block:: python .. code-block:: python
extends('python') extends('python')
depends_on('python@3.5:3', type=('build', 'run')) depends_on('python@3.5:3.999', type=('build', 'run'))
Many ``setup.py`` or ``setup.cfg`` files also contain information like:: Many ``setup.py`` or ``setup.cfg`` files also contain information like::
@@ -568,7 +568,7 @@ check the ``METADATA`` file for lines like::
Lines that use ``Requires-Dist`` are similar to ``install_requires``. Lines that use ``Requires-Dist`` are similar to ``install_requires``.
Lines that use ``Provides-Extra`` are similar to ``extra_requires``, Lines that use ``Provides-Extra`` are similar to ``extra_requires``,
and you can add a variant for those dependencies. The ``~=1.11.0`` and you can add a variant for those dependencies. The ``~=1.11.0``
syntax is equivalent to ``1.11.0:1.11``. syntax is equivalent to ``1.11.0:1.11.999``.
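As a hedged sketch (the package names and variant are placeholders, not taken
from any particular recipe), such metadata lines might translate into:

.. code-block:: python

   variant('postgres', default=False, description='Enable the "postgres" extra')

   # Requires-Dist: django (~=1.11.0)  ->  range syntax 1.11.0:1.11
   depends_on('py-django@1.11.0:1.11', type=('build', 'run'))
   # Provides-Extra: postgres
   depends_on('py-psycopg2', when='+postgres', type=('build', 'run'))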
"""""""""" """"""""""
setuptools setuptools
@@ -627,8 +627,7 @@ adds:
Testing Testing
^^^^^^^ ^^^^^^^
``PythonPackage`` provides a couple of options for testing packages ``PythonPackage`` provides a couple of options for testing packages.
both during and after the installation process.
"""""""""""" """"""""""""
Import tests Import tests
@@ -697,20 +696,16 @@ libraries. Make sure not to add modules/packages containing the word
"test", as these likely won't end up in the installation directory, "test", as these likely won't end up in the installation directory,
or may require test dependencies like pytest to be installed. or may require test dependencies like pytest to be installed.
Import tests can be run during the installation using ``spack install These tests can be triggered by running ``spack install --test=root``
--test=root`` or at any time after the installation using or by running ``spack test run`` after the installation has finished.
``spack test run``.
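A minimal sketch of declaring import tests in a recipe (the module names are
hypothetical):

.. code-block:: python

   # Modules to try importing after installation; exercised by
   # ``spack install --test=root`` or ``spack test run``
   import_modules = ['mypackage', 'mypackage.core']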
"""""""""" """"""""""
Unit tests Unit tests
"""""""""" """"""""""
The package may have its own unit or regression tests. Spack can The package you want to install may come with additional unit tests.
run these tests during the installation by adding phase-appropriate You can add additional build-time or install-time tests by adding
test methods. additional testing functions. For example, ``py-numpy`` adds:
For example, ``py-numpy`` adds the following as a check to run
after the ``install`` phase:
.. code-block:: python .. code-block:: python
@@ -721,13 +716,7 @@ after the ``install`` phase:
python('-c', 'import numpy; numpy.test("full", verbose=2)') python('-c', 'import numpy; numpy.test("full", verbose=2)')
when testing is enabled during the installation (i.e., ``spack install These tests can be triggered by running ``spack install --test=root``.
--test=root``).
.. note::
Additional information is available on :ref:`install phase tests
<install_phase-tests>`.
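A minimal sketch of how such a check can be attached to the ``install``
phase (the decorator pattern is assumed here, as it is not shown in the
excerpt above):

.. code-block:: python

   @run_after('install')
   @on_package_attributes(run_tests=True)
   def install_test(self):
       python('-c', 'import numpy; numpy.test("full", verbose=2)')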
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setup file in a sub-directory Setup file in a sub-directory
View File
@@ -79,14 +79,12 @@ Description
The first thing you'll need to add to your new package is a description. The first thing you'll need to add to your new package is a description.
The top of the homepage for ``caret`` lists the following description: The top of the homepage for ``caret`` lists the following description:
Classification and Regression Training caret: Classification and Regression Training
Misc functions for training and plotting classification and regression models. Misc functions for training and plotting classification and regression models.
The first line is a short description (title) and the second line is a long You can either use the short description (first line), long description
description. In this case the description is only one line but often the (second line), or both depending on what you feel is most appropriate.
description is several lines. Spack makes use of both short and long
descriptions and convention is to use both when creating an R package.
^^^^^^^^ ^^^^^^^^
Homepage Homepage
@@ -126,67 +124,6 @@ If you only specify the URL for the latest release, your package will
no longer be able to fetch that version as soon as a new release comes no longer be able to fetch that version as soon as a new release comes
out. To get around this, add the archive directory as a ``list_url``. out. To get around this, add the archive directory as a ``list_url``.
^^^^^^^^^^^^^^^^^^^^^
Bioconductor packages
^^^^^^^^^^^^^^^^^^^^^
Bioconductor packages are set up in a similar way to CRAN packages, but there
are some very important distinctions. Bioconductor packages can be found at:
https://bioconductor.org/. Bioconductor packages are R packages and so follow
the same packaging scheme as CRAN packages. What is different is that
Bioconductor itself is versioned and released. This scheme, using the
Bioconductor package installer, allows further specification of the minimum
version of R as well as further restrictions on the dependencies between
packages than what is possible with the native R packaging system. Spack can
not replicate these extra features and thus Bioconductor packages in Spack need
to be managed as a group during updates in order to maintain package
consistency with Bioconductor itself.
Another key difference is that, while previous versions of packages are
available, they are not available from a site that can be programmatically set,
thus a ``list_url`` attribute can not be used. However, each package is also
available in a git repository, with branches corresponding to each Bioconductor
release. Thus, it is always possible to retrieve the version of any package
corresponding to a Bioconductor release simply by fetching the branch that
corresponds to the Bioconductor release of the package repository. For this
reason, spack Bioconductor R packages use the git repository, with the commit
of the respective branch used in the ``version()`` attribute of the package.
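A hypothetical sketch of what this looks like in a recipe (the commit hash is
a placeholder for the tip of the corresponding Bioconductor release branch):

.. code-block:: python

   git = 'https://git.bioconductor.org/packages/BiocVersion'
   # Commit taken from the branch for the matching Bioconductor release
   version('3.12.0', commit='...')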
^^^^^^^^^^^^^^^^^^^^^^^^
cran and bioc attributes
^^^^^^^^^^^^^^^^^^^^^^^^
Much like the ``pypi`` attribute for python packages, due to the fact that R
packages are obtained from specific repositories, it is possible to set up shortcut
attributes that can be used to set ``homepage``, ``url``, ``list_url``, and
``git``. For example, the following ``cran`` attribute:
.. code-block:: python
cran = 'caret'
is equivalent to:
.. code-block:: python
homepage = 'https://cloud.r-project.org/package=caret'
url = 'https://cloud.r-project.org/src/contrib/caret_6.0-86.tar.gz'
list_url = 'https://cloud.r-project.org/src/contrib/Archive/caret'
Likewise, the following ``bioc`` attribute:
.. code-block:: python
bioc = 'BiocVersion'
is equivalent to:
.. code-block:: python
homepage = 'https://bioconductor.org/packages/BiocVersion/'
git = 'https://git.bioconductor.org/packages/BiocVersion'
^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^
Build system dependencies Build system dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -219,7 +156,7 @@ R dependencies
R packages are often small and follow the classic Unix philosophy R packages are often small and follow the classic Unix philosophy
of doing one thing well. They are modular and usually depend on of doing one thing well. They are modular and usually depend on
several other packages. You may find a single package with over a several other packages. You may find a single package with over a
hundred dependencies. Luckily, R packages are well-documented hundred dependencies. Luckily, CRAN packages are well-documented
and list all of their dependencies in the following sections: and list all of their dependencies in the following sections:
* Depends * Depends

View File
# All configuration values have a default; values that are commented out # All configuration values have a default; values that are commented out
# serve to show the default. # serve to show the default.
import sys
import os import os
import re import re
import subprocess import subprocess
import sys
from glob import glob from glob import glob
from sphinx.ext.apidoc import main as sphinx_apidoc from sphinx.ext.apidoc import main as sphinx_apidoc
@@ -82,8 +82,6 @@
# Disable duplicate cross-reference warnings. # Disable duplicate cross-reference warnings.
# #
from sphinx.domains.python import PythonDomain from sphinx.domains.python import PythonDomain
class PatchedPythonDomain(PythonDomain): class PatchedPythonDomain(PythonDomain):
def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode): def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode):
if 'refspecific' in node: if 'refspecific' in node:
@@ -97,19 +95,15 @@ def setup(sphinx):
# -- General configuration ----------------------------------------------------- # -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here. # If your documentation needs a minimal Sphinx version, state it here.
needs_sphinx = '3.4' needs_sphinx = '1.8'
# Add any Sphinx extension module names here, as strings. They can be extensions # Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones. # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [ extensions = ['sphinx.ext.autodoc',
'sphinx.ext.autodoc',
'sphinx.ext.graphviz', 'sphinx.ext.graphviz',
'sphinx.ext.intersphinx',
'sphinx.ext.napoleon', 'sphinx.ext.napoleon',
'sphinx.ext.todo', 'sphinx.ext.todo',
'sphinx.ext.viewcode', 'sphinxcontrib.programoutput']
'sphinxcontrib.programoutput',
]
# Set default graphviz options # Set default graphviz options
graphviz_dot_args = [ graphviz_dot_args = [
@@ -142,7 +136,6 @@ def setup(sphinx):
# #
# The short X.Y version. # The short X.Y version.
import spack import spack
version = '.'.join(str(s) for s in spack.spack_version_info[:2]) version = '.'.join(str(s) for s in spack.spack_version_info[:2])
# The full version, including alpha/beta/rc tags. # The full version, including alpha/beta/rc tags.
release = spack.spack_version release = spack.spack_version
@@ -168,19 +161,6 @@ def setup(sphinx):
# directories to ignore when looking for source files. # directories to ignore when looking for source files.
exclude_patterns = ['_build', '_spack_root', '.spack-env'] exclude_patterns = ['_build', '_spack_root', '.spack-env']
nitpicky = True
nitpick_ignore = [
# Python classes that intersphinx is unable to resolve
('py:class', 'argparse.HelpFormatter'),
('py:class', 'contextlib.contextmanager'),
('py:class', 'module'),
('py:class', '_io.BufferedReader'),
('py:class', 'unittest.case.TestCase'),
('py:class', '_frozen_importlib_external.SourceFileLoader'),
# Spack classes that are private and we don't want to expose
('py:class', 'spack.provider_index._IndexBase'),
]
# The reST default role (used for this markup: `text`) to use for all documents. # The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None #default_role = None
@@ -199,8 +179,7 @@ def setup(sphinx):
# We use our own extension of the default style with a few modifications # We use our own extension of the default style with a few modifications
from pygments.style import Style from pygments.style import Style
from pygments.styles.default import DefaultStyle from pygments.styles.default import DefaultStyle
from pygments.token import Comment, Generic, Text from pygments.token import Generic, Comment, Text
class SpackStyle(DefaultStyle): class SpackStyle(DefaultStyle):
styles = DefaultStyle.styles.copy() styles = DefaultStyle.styles.copy()
@@ -209,7 +188,6 @@ class SpackStyle(DefaultStyle):
styles[Generic.Prompt] = "bold #346ec9" styles[Generic.Prompt] = "bold #346ec9"
import pkg_resources import pkg_resources
dist = pkg_resources.Distribution(__file__) dist = pkg_resources.Distribution(__file__)
sys.path.append('.') # make 'conf' module findable sys.path.append('.') # make 'conf' module findable
ep = pkg_resources.EntryPoint.parse('spack = conf:SpackStyle', dist=dist) ep = pkg_resources.EntryPoint.parse('spack = conf:SpackStyle', dist=dist)
@@ -375,11 +353,3 @@ class SpackStyle(DefaultStyle):
# How to display URL addresses: 'footnote', 'no', or 'inline'. # How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote' #texinfo_show_urls = 'footnote'
# -- Extension configuration -------------------------------------------------
# sphinx.ext.intersphinx
intersphinx_mapping = {
"python": ("https://docs.python.org/3", None),
}
View File
@@ -202,23 +202,21 @@ of builds.
Unless overridden in a package or on the command line, Spack builds all Unless overridden in a package or on the command line, Spack builds all
packages in parallel. The default parallelism is equal to the number of packages in parallel. The default parallelism is equal to the number of
cores available to the process, up to 16 (the default of ``build_jobs``). cores on your machine, up to 16. Parallelism cannot exceed the number of
For a build system that uses Makefiles, this ``spack install`` runs: cores available on the host. For a build system that uses Makefiles, this
means running:
- ``make -j<build_jobs>``, when ``build_jobs`` is less than the number of - ``make -j<build_jobs>``, when ``build_jobs`` is less than the number of
cores available cores on the machine
- ``make -j<ncores>``, when ``build_jobs`` is greater or equal to the - ``make -j<ncores>``, when ``build_jobs`` is greater or equal to the
number of cores available number of cores on the machine
If you work on a shared login node or have a strict ulimit, it may be If you work on a shared login node or have a strict ulimit, it may be
necessary to set the default to a lower value. By setting ``build_jobs`` necessary to set the default to a lower value. By setting ``build_jobs``
to 4, for example, commands like ``spack install`` will run ``make -j4`` to 4, for example, commands like ``spack install`` will run ``make -j4``
instead of hogging every core. To build all software in serial, instead of hogging every core.
set ``build_jobs`` to 1.
Note that specifying the number of jobs on the command line always takes To build all software in serial, set ``build_jobs`` to 1.
priority, so that ``spack install -j<n>`` always runs `make -j<n>`, even
when that exceeds the number of cores available.
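As a rough sketch of the default rule described above (an illustration, not
Spack's actual implementation):

.. code-block:: python

   import multiprocessing

   build_jobs = 16  # the default ceiling from the ``build_jobs`` setting
   # Default parallelism: number of available cores, capped by build_jobs
   jobs = min(build_jobs, multiprocessing.cpu_count())
   # ``spack install`` then effectively runs: make -j<jobs>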
-------------------- --------------------
``ccache`` ``ccache``
@@ -259,16 +257,3 @@ and ld.so will ONLY search for dependencies in the ``RUNPATH`` of
the loading object. the loading object.
DO NOT MIX the two options within the same install tree. DO NOT MIX the two options within the same install tree.
----------------------
``terminal_title``
----------------------
By setting this option to ``true``, Spack will update the terminal's title to
provide information about its current progress as well as the current and
total package numbers.
To work properly, this requires your terminal to reset its title after
Spack has finished its work, otherwise Spack's status information will
remain in the terminal's title indefinitely. Most terminals should already
be set up this way and clear Spack's status information.
View File
@@ -78,13 +78,6 @@ are six configuration scopes. From lowest to highest:
If multiple scopes are listed on the command line, they are ordered If multiple scopes are listed on the command line, they are ordered
from lowest to highest precedence. from lowest to highest precedence.
#. **environment**: When using Spack :ref:`environments`, Spack reads
additional configuration from the environment file. See
:ref:`environment-configuration` for further details on these
scopes. Environment scopes can be referenced from the command line
as ``env:name`` (to reference environment ``foo``, use
``env:foo``).
#. **command line**: Build settings specified on the command line take #. **command line**: Build settings specified on the command line take
precedence over all other scopes. precedence over all other scopes.
@@ -199,11 +192,10 @@ with MPICH. You can create different configuration scopes for use with
Platform-specific Scopes Platform-specific Scopes
------------------------ ------------------------
For each scope above (excluding environment scopes), there can also be For each scope above, there can also be platform-specific settings.
platform-specific settings. For example, on most platforms, GCC is For example, on most platforms, GCC is the preferred compiler.
the preferred compiler. However, on macOS (darwin), Clang often works However, on macOS (darwin), Clang often works for more packages,
for more packages, and is set as the default compiler. This and is set as the default compiler. This configuration is set in
configuration is set in
``$(prefix)/etc/spack/defaults/darwin/packages.yaml``. It will take ``$(prefix)/etc/spack/defaults/darwin/packages.yaml``. It will take
precedence over settings in the ``defaults`` scope, but can still be precedence over settings in the ``defaults`` scope, but can still be
overridden by settings in ``system``, ``system/darwin``, ``site``, overridden by settings in ``system``, ``system/darwin``, ``site``,
@@ -402,15 +394,12 @@ Spack-specific variables
Spack understands several special variables. These are: Spack understands several special variables. These are:
* ``$env``: name of the currently active :ref:`environment <environments>`
* ``$spack``: path to the prefix of this Spack installation * ``$spack``: path to the prefix of this Spack installation
* ``$tempdir``: default system temporary directory (as specified in * ``$tempdir``: default system temporary directory (as specified in
Python's `tempfile.tempdir Python's `tempfile.tempdir
<https://docs.python.org/2/library/tempfile.html#tempfile.tempdir>`_ <https://docs.python.org/2/library/tempfile.html#tempfile.tempdir>`_
variable. variable.
* ``$user``: name of the current user * ``$user``: name of the current user
* ``$user_cache_path``: user cache directory (``~/.spack`` unless
:ref:`overridden <local-config-overrides>`)
Note that, as with shell variables, you can write these as ``$varname`` Note that, as with shell variables, you can write these as ``$varname``
or with braces to distinguish the variable from surrounding characters: or with braces to distinguish the variable from surrounding characters:
@@ -565,39 +554,3 @@ built in and are not overridden by a configuration file. The
command line. ``dirty`` and ``install_tree`` come from the custom command line. ``dirty`` and ``install_tree`` come from the custom
scopes ``./my-scope`` and ``./my-scope-2``, and all other configuration scopes ``./my-scope`` and ``./my-scope-2``, and all other configuration
options come from the default configuration files that ship with Spack. options come from the default configuration files that ship with Spack.
.. _local-config-overrides:
------------------------------
Overriding Local Configuration
------------------------------
Spack's ``system`` and ``user`` scopes provide ways for administrators and users to set
global defaults for all Spack instances, but for use cases where one wants a clean Spack
installation, these scopes can be undesirable. For example, users may want to opt out of
global system configuration, or they may want to ignore their own home directory
settings when running in a continuous integration environment.
Spack also, by default, keeps various caches and user data in ``~/.spack``, but
users may want to override these locations.
Spack provides three environment variables that allow you to override or opt out of
configuration locations:
* ``SPACK_USER_CONFIG_PATH``: Override the path to use for the
``user`` scope (``~/.spack`` by default).
* ``SPACK_SYSTEM_CONFIG_PATH``: Override the path to use for the
``system`` scope (``/etc/spack`` by default).
* ``SPACK_DISABLE_LOCAL_CONFIG``: set this environment variable to completely disable
**both** the system and user configuration directories. Spack will only consider its
own defaults and ``site`` configuration locations.
And one that allows you to move the default cache location:
* ``SPACK_USER_CACHE_PATH``: Override the default path to use for user data
(misc_cache, tests, reports, etc.)
With these settings, if you want to isolate Spack in a CI environment, you can do this::
export SPACK_DISABLE_LOCAL_CONFIG=true
export SPACK_USER_CACHE_PATH=/tmp/spack
View File
@@ -126,12 +126,12 @@ are currently supported are summarized in the table below:
* - Ubuntu 18.04 * - Ubuntu 18.04
- ``ubuntu:18.04`` - ``ubuntu:18.04``
- ``spack/ubuntu-bionic`` - ``spack/ubuntu-bionic``
* - CentOS 6
- ``centos:6``
- ``spack/centos6``
* - CentOS 7 * - CentOS 7
- ``centos:7`` - ``centos:7``
- ``spack/centos7`` - ``spack/centos7``
* - openSUSE Leap
- ``opensuse/leap``
- ``spack/leap15``
All the images are tagged with the corresponding release of Spack: All the images are tagged with the corresponding release of Spack:
@@ -200,7 +200,7 @@ Setting Base Images
The ``images`` subsection is used to select both the image where The ``images`` subsection is used to select both the image where
Spack builds the software and the image where the built software Spack builds the software and the image where the built software
is installed. This attribute can be set in different ways and is installed. This attribute can be set in two different ways and
which one to use depends on the use case at hand. which one to use depends on the use case at hand.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -227,7 +227,7 @@ following ``spack.yaml``:
container: container:
images: images:
os: centos:7 os: centos/7
spack: 0.15.4 spack: 0.15.4
uses ``spack/centos7:0.15.4`` and ``centos:7`` for the stages where the uses ``spack/centos7:0.15.4`` and ``centos:7`` for the stages where the
@@ -260,54 +260,10 @@ software is respectively built and installed:
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"] ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]
This is the simplest available method of selecting base images, and we advise This method of selecting base images is the simplest of the two, and we advise
to use it whenever possible. There are cases though where using Spack official to use it whenever possible. There are cases though where using Spack official
images is not enough to fit production needs. In these situations users can images is not enough to fit production needs. In these situations users can manually
extend the recipe to start with the bootstrapping of Spack at a certain pinned select which base image to start from in the recipe, as we'll see next.
version or manually select which base image to start from in the recipe,
as we'll see next.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use a Bootstrap Stage for Spack
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In some cases users may want to pin the commit sha that is used for Spack, to ensure later
reproducibility, or start from a fork of the official Spack repository to try a bugfix or
a feature in the early stage of development. This is possible by being just a little more
verbose when specifying information about Spack in the ``spack.yaml`` file:
.. code-block:: yaml
images:
os: amazonlinux:2
spack:
# URL of the Spack repository to be used in the container image
url: <to-use-a-fork>
# Either a commit sha, a branch name or a tag
ref: <sha/tag/branch>
# If true turn a branch name or a tag into the corresponding commit
# sha at the time of recipe generation
resolve_sha: <true/false>
``url`` specifies the URL from which to clone Spack and defaults to https://github.com/spack/spack.
The ``ref`` attribute can be either a commit sha, a branch name or a tag. The default value in
this case is to use the ``develop`` branch, but it may change in the future to point to the latest stable
release. Finally, ``resolve_sha`` transforms branch names or tags into the corresponding commit
shas at the time of recipe generation, to allow for a greater reproducibility of the results
at a later time.
The list of operating systems that can be used to bootstrap Spack can be
obtained with:
.. command-output:: spack containerize --list-os
.. note::
The ``resolve_sha`` option uses ``git rev-parse`` under the hood and thus it requires
to checkout the corresponding Spack repository in a temporary folder before generating
the recipe. Recipe generation may take longer when this option is set to true because
of this additional step.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use Custom Images Provided by Users Use Custom Images Provided by Users
@@ -459,18 +415,6 @@ to customize the generation of container recipes:
- Version of Spack used in the ``build`` stage - Version of Spack used in the ``build`` stage
- Valid tags for ``base:image`` - Valid tags for ``base:image``
- Yes, if using constrained selection of base images - Yes, if using constrained selection of base images
* - ``images:spack:url``
- Repository from which Spack is cloned
- Any fork of Spack
- No
* - ``images:spack:ref``
- Reference for the checkout of Spack
- Either a commit sha, a branch name or a tag
- No
* - ``images:spack:resolve_sha``
- Resolve branches and tags in ``spack.yaml`` to commits in the generated recipe
- True or False (default: False)
- No
* - ``images:build`` * - ``images:build``
- Image to be used in the ``build`` stage - Image to be used in the ``build`` stage
- Any valid container image - Any valid container image

View File

@@ -338,6 +338,15 @@ Once all of the dependencies are installed, you can try building the documentati
If you see any warning or error messages, you will have to correct those before If you see any warning or error messages, you will have to correct those before
your PR is accepted. your PR is accepted.
.. note::
There is also a ``run-doc-tests`` script in ``share/spack/qa``. The only
difference between running this script and running ``make`` by hand is that
the script will exit immediately if it encounters an error or warning. This
is necessary for CI. If you made a lot of documentation changes, it is
much quicker to run ``make`` by hand so that you can see all of the warnings
at once.
If you are editing the documentation, you should obviously be running the If you are editing the documentation, you should obviously be running the
documentation tests. But even if you are simply adding a new package, your documentation tests. But even if you are simply adding a new package, your
changes could cause the documentation tests to fail: changes could cause the documentation tests to fail:
File diff suppressed because it is too large
View File
@@ -248,9 +248,9 @@ Users can add abstract specs to an Environment using the ``spack add``
command. The most important component of an Environment is a list of command. The most important component of an Environment is a list of
abstract specs. abstract specs.
Adding a spec adds to the manifest (the ``spack.yaml`` file), which is Adding a spec adds to the manifest (the ``spack.yaml`` file) and to
used to define the roots of the Environment, but does not affect the the roots of the Environment, but does not affect the concrete specs
concrete specs in the lockfile, nor does it install the spec. in the lockfile, nor does it install the spec.
The ``spack add`` command is environment aware. It adds to the The ``spack add`` command is environment aware. It adds to the
currently active environment. All environment aware commands can also currently active environment. All environment aware commands can also
@@ -356,18 +356,6 @@ command also stores a Spack repo containing the ``package.py`` file
used at install time for each package in the ``repos/`` directory in used at install time for each package in the ``repos/`` directory in
the Environment. the Environment.
The ``--no-add`` option can be used in a concrete environment to tell
spack to install specs already present in the environment but not to
add any new root specs to the environment. For root specs provided
to ``spack install`` on the command line, ``--no-add`` is the default,
while for dependency specs it is optional. In other
words, if there is an unambiguous match in the active concrete environment
for a root spec provided to ``spack install`` on the command line, spack
does not require you to specify the ``--no-add`` option to prevent the spec
from being added again. At the same time, a spec that already exists in the
environment, but only as a dependency, will be added to the environment as a
root spec without the ``--no-add`` option.
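A sketch of this behavior (the spec names are hypothetical):

.. code-block:: console

   # hdf5 is already a root in the active concrete environment, so it is
   # installed without being added again; --no-add is implied for it
   $ spack install hdf5

   # zlib exists in the environment only as a dependency; pass --no-add
   # to install it without promoting it to a root spec
   $ spack install --no-add zlib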
^^^^^^^ ^^^^^^^
Loading Loading
^^^^^^^ ^^^^^^^
@@ -411,12 +399,6 @@ There are two ways to include configuration information in a Spack Environment:
#. Included in the ``spack.yaml`` file from another file. #. Included in the ``spack.yaml`` file from another file.
Many Spack commands also affect configuration information in files
automatically. Those commands take a ``--scope`` argument, and the
environment can be specified by ``env:NAME`` (to affect environment
``foo``, set ``--scope env:foo``). These commands will automatically
manipulate configuration inline in the ``spack.yaml`` file.
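For example, to direct a command's configuration changes at an environment
named ``foo`` (the command here is just an illustration):

.. code-block:: console

   $ spack compiler find --scope env:foo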
""""""""""""""""""""" """""""""""""""""""""
Inline configurations Inline configurations
""""""""""""""""""""" """""""""""""""""""""
@@ -459,8 +441,8 @@ Environments can include files with either relative or absolute
paths. Inline configurations take precedence over included paths. Inline configurations take precedence over included
configurations, so you don't have to change shared configuration files configurations, so you don't have to change shared configuration files
to make small changes to an individual Environment. Included configs to make small changes to an individual Environment. Included configs
listed earlier will have higher precedence, as the included configs are listed later will have higher precedence, as the included configs are
applied in reverse order. applied in order.
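A sketch of an ``include`` list (paths hypothetical); settings in the first
file take precedence over those in the second:

.. code-block:: yaml

   spack:
     include:
     - /path/to/site-overrides.yaml  # listed earlier: higher precedence
     - /path/to/shared-defaults.yaml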
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Manually Editing the Specs List Manually Editing the Specs List
@@ -723,8 +705,6 @@ Spack Environment managed views are updated every time the environment
is written out to the lock file ``spack.lock``, so the concrete is written out to the lock file ``spack.lock``, so the concrete
environment and the view are always compatible. environment and the view are always compatible.
.. _configuring_environment_views:
""""""""""""""""""""""""""""" """""""""""""""""""""""""""""
Configuring environment views Configuring environment views
""""""""""""""""""""""""""""" """""""""""""""""""""""""""""
@@ -732,17 +712,13 @@ Configuring environment views
The Spack Environment manifest file has a top-level keyword The Spack Environment manifest file has a top-level keyword
``view``. Each entry under that heading is a view descriptor, headed ``view``. Each entry under that heading is a view descriptor, headed
by a name. The view descriptor contains the root of the view, and by a name. The view descriptor contains the root of the view, and
optionally the projections for the view, ``select`` and optionally the projections for the view, and ``select`` and
``exclude`` lists for the view and link information via ``link`` and ``exclude`` lists for the view. For example, in the following manifest
``link_type``. For example, in the following manifest
file snippet we define a view named ``mpis``, rooted at file snippet we define a view named ``mpis``, rooted at
``/path/to/view`` in which all projections use the package name, ``/path/to/view`` in which all projections use the package name,
version, and compiler name to determine the path for a given version, and compiler name to determine the path for a given
package. This view selects all packages that depend on MPI, and package. This view selects all packages that depend on MPI, and
excludes those built with the PGI compiler at version 18.5. excludes those built with the PGI compiler at version 18.5.
All the dependencies of each root spec in the environment will be linked
in the view due to the command ``link: all`` and the files in the view will
be symlinks to the spack install directories.
.. code-block:: yaml .. code-block:: yaml
@@ -755,16 +731,11 @@ be symlinks to the spack install directories.
exclude: ['%pgi@18.5'] exclude: ['%pgi@18.5']
projections: projections:
all: {name}/{version}-{compiler.name} all: {name}/{version}-{compiler.name}
link: all
link_type: symlink
For more information on using view projections, see the section on For more information on using view projections, see the section on
:ref:`adding_projections_to_views`. The default for the ``select`` and :ref:`adding_projections_to_views`. The default for the ``select`` and
``exclude`` values is to select everything and exclude nothing. The ``exclude`` values is to select everything and exclude nothing. The
default projection is the default view projection (``{}``). The ``link`` default projection is the default view projection (``{}``).
defaults to ``all`` but can also be ``roots`` when only the root specs
in the environment are desired in the view. The ``link_type`` defaults
to ``symlink`` but can also take the value of ``hardlink`` or ``copy``.
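Putting these options together, the full descriptor for the ``mpis`` view
sketched above might read (root path hypothetical):

.. code-block:: yaml

   spack:
     # ...
     view:
       mpis:
         root: /path/to/view
         select: ['^mpi']
         exclude: ['%pgi@18.5']
         projections:
           all: '{name}/{version}-{compiler.name}'
         link: all
         link_type: symlink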
Any number of views may be defined under the ``view`` heading in a Any number of views may be defined under the ``view`` heading in a
Spack Environment. Spack Environment.
@@ -9,16 +9,21 @@
Getting Started Getting Started
=============== ===============
-------------------- -------------
System Prerequisites Prerequisites
-------------------- -------------
Spack has the following minimum system requirements, which are assumed to Spack has the following minimum requirements, which must be installed
be present on the machine where Spack is run: before Spack is run:
.. csv-table:: System prerequisites for Spack #. Python 2 (2.6 or 2.7) or 3 (3.5 - 3.9) to run Spack
:file: tables/system_prerequisites.csv #. A C/C++ compiler for building
:header-rows: 1 #. The ``make`` executable for building
#. The ``tar``, ``gzip``, ``bzip2``, ``xz`` and optionally ``zstd``
executables for extracting source code
#. The ``patch`` command to apply patches
#. The ``git`` and ``curl`` commands for fetching
#. If using the ``gpg`` subcommand, ``gnupg2`` is required
These requirements can be easily installed on most modern Linux systems; These requirements can be easily installed on most modern Linux systems;
on macOS, Xcode is required. Spack is designed to run on HPC on macOS, Xcode is required. Spack is designed to run on HPC
@@ -35,7 +40,7 @@ Getting Spack is easy. You can clone it from the `github repository
.. code-block:: console .. code-block:: console
$ git clone -c feature.manyFiles=true https://github.com/spack/spack.git $ git clone https://github.com/spack/spack.git
This will create a directory called ``spack``. This will create a directory called ``spack``.
@@ -65,13 +70,7 @@ Sourcing these files will put the ``spack`` command in your ``PATH``, set
up your ``MODULEPATH`` to use Spack's packages, and add other useful up your ``MODULEPATH`` to use Spack's packages, and add other useful
shell integration for :ref:`certain commands <packaging-shell-support>`, shell integration for :ref:`certain commands <packaging-shell-support>`,
:ref:`environments <environments>`, and :ref:`modules <modules>`. For :ref:`environments <environments>`, and :ref:`modules <modules>`. For
``bash`` and ``zsh``, it also sets up tab completion. ``bash``, it also sets up tab completion.
In order to know which directory to add to your ``MODULEPATH``, these scripts
query the ``spack`` command. On shared filesystems, this can be a bit slow,
especially if you log in frequently. If you don't use modules, or want to set
``MODULEPATH`` manually instead, you can set the ``SPACK_SKIP_MODULES``
environment variable to skip this step and speed up sourcing the file.
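A minimal sketch; the variable only needs to be set before sourcing the
setup script:

.. code-block:: console

   $ export SPACK_SKIP_MODULES=1
   $ . spack/share/spack/setup-env.sh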
If you do not want to use Spack's shell support, you can always just run If you do not want to use Spack's shell support, you can always just run
the ``spack`` command directly from ``spack/bin/spack``. the ``spack`` command directly from ``spack/bin/spack``.
@@ -84,140 +83,6 @@ sourcing time, ensuring future invocations of the ``spack`` command will
continue to use the same consistent python version regardless of changes in continue to use the same consistent python version regardless of changes in
the environment. the environment.
^^^^^^^^^^^^^^^^^^^^
Bootstrapping clingo
^^^^^^^^^^^^^^^^^^^^
Spack uses ``clingo`` under the hood to resolve optimal versions and variants of
dependencies when installing a package. Since ``clingo`` itself is a binary,
Spack has to install it on initial use, which is called bootstrapping.
Spack provides two ways of bootstrapping ``clingo``: from pre-built binaries
(default), or from sources. The fastest way to get started is to bootstrap from
pre-built binaries.
.. note::
When bootstrapping from pre-built binaries, Spack currently requires
``patchelf`` on Linux and ``otool`` on macOS. If ``patchelf`` is not in the
``PATH``, Spack will build it from sources, and a C++ compiler is required.
The first time you concretize a spec, Spack will bootstrap in the background:
.. code-block:: console
$ time spack spec zlib
Input spec
--------------------------------
zlib
Concretized
--------------------------------
zlib@1.2.11%gcc@7.5.0+optimize+pic+shared arch=linux-ubuntu18.04-zen
real 0m20.023s
user 0m18.351s
sys 0m0.784s
After this command you'll see that ``clingo`` has been installed for Spack's own use:
.. code-block:: console
$ spack find -b
==> Showing internal bootstrap store at "/root/.spack/bootstrap/store"
==> 3 installed packages
-- linux-rhel5-x86_64 / gcc@9.3.0 -------------------------------
clingo-bootstrap@spack python@3.6
-- linux-ubuntu18.04-zen / gcc@7.5.0 ----------------------------
patchelf@0.13
Subsequent calls to the concretizer will then be much faster:
.. code-block:: console
$ time spack spec zlib
[ ... ]
real 0m0.490s
user 0m0.431s
sys 0m0.041s
If for security concerns you cannot bootstrap ``clingo`` from pre-built
binaries, you have to mark this bootstrapping method as untrusted. This makes
Spack fall back to bootstrapping from sources:
.. code-block:: console
$ spack bootstrap untrust github-actions
==> "github-actions" is now untrusted and will not be used for bootstrapping
You can verify that the new settings are effective with:
.. code-block:: console
$ spack bootstrap list
Name: github-actions UNTRUSTED
Type: buildcache
Info:
url: https://mirror.spack.io/bootstrap/github-actions/v0.1
homepage: https://github.com/alalazo/spack-bootstrap-mirrors
releases: https://github.com/alalazo/spack-bootstrap-mirrors/releases
Description:
Buildcache generated from a public workflow using Github Actions.
The sha256 checksum of binaries is checked before installation.
Name: spack-install TRUSTED
Type: install
Description:
Specs built from sources by Spack. May take a long time.
.. note::
When bootstrapping from sources, Spack requires a full install of Python
including header files (e.g. ``python3-dev`` on Debian), and a compiler
with support for C++14 (GCC on Linux, Apple Clang on macOS) and static C++
standard libraries on Linux.
Spack will build the required software on the first request to concretize a spec:
.. code-block:: console
$ spack spec zlib
[+] /usr (external bison-3.0.4-wu5pgjchxzemk5ya2l3ddqug2d7jv6eb)
[+] /usr (external cmake-3.19.4-a4kmcfzxxy45mzku4ipmj5kdiiz5a57b)
[+] /usr (external python-3.6.9-x4fou4iqqlh5ydwddx3pvfcwznfrqztv)
==> Installing re2c-1.2.1-e3x6nxtk3ahgd63ykgy44mpuva6jhtdt
[ ... ]
zlib@1.2.11%gcc@10.1.0+optimize+pic+shared arch=linux-ubuntu18.04-broadwell
"""""""""""""""""""
The Bootstrap Store
"""""""""""""""""""
All the tools Spack needs for its own functioning are installed in a separate store, which lives
under the ``${HOME}/.spack`` directory. The software installed there can be queried with:
.. code-block:: console
$ spack find --bootstrap
==> Showing internal bootstrap store at "/home/spack/.spack/bootstrap/store"
==> 3 installed packages
-- linux-ubuntu18.04-x86_64 / gcc@10.1.0 ------------------------
clingo-bootstrap@spack python@3.6.9 re2c@1.2.1
In case it's needed, the bootstrap store can also be cleaned with:
.. code-block:: console
$ spack clean -b
==> Removing software in "/home/spack/.spack/bootstrap/store"
^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^
Check Installation Check Installation
@@ -449,34 +314,6 @@ then inject those flags into the compiler command. Compiler flags
entered from the command line will be discussed in more detail in the entered from the command line will be discussed in more detail in the
following section. following section.
Some compilers also require additional environment configuration.
Examples include Intel's oneAPI and AMD's AOCC compiler suites,
which have custom scripts for loading environment variables and setting paths.
These variables should be specified in the ``environment`` section of the compiler
specification. The operations available to modify the environment are ``set``, ``unset``,
``prepend_path``, ``append_path``, and ``remove_path``. For example:
.. code-block:: yaml

   compilers:
   - compiler:
       modules: []
       operating_system: centos6
       paths:
         cc: /opt/intel/oneapi/compiler/latest/linux/bin/icx
         cxx: /opt/intel/oneapi/compiler/latest/linux/bin/icpx
         f77: /opt/intel/oneapi/compiler/latest/linux/bin/ifx
         fc: /opt/intel/oneapi/compiler/latest/linux/bin/ifx
       spec: oneapi@latest
       environment:
         set:
           MKL_ROOT: "/path/to/mkl/root"
         unset: # A list of environment variables to unset
         - CC
         prepend_path: # Similar for append|remove_path
           LD_LIBRARY_PATH: /ld/paths/added/by/setvars/sh
^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^
Build Your Own Compiler Build Your Own Compiler
^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^
@@ -631,9 +468,8 @@ Fortran.
#. Run ``spack compiler find`` to locate Clang. #. Run ``spack compiler find`` to locate Clang.
#. There are different ways to get ``gfortran`` on macOS. For example, you can #. There are different ways to get ``gfortran`` on macOS. For example, you can
install GCC with Spack (``spack install gcc``), with Homebrew (``brew install install GCC with Spack (``spack install gcc``) or with Homebrew
gcc``), or from a `DMG installer (``brew install gcc``).
<https://github.com/fxcoudert/gfortran-for-macOS/releases>`_.
#. The only thing left to do is to edit ``~/.spack/darwin/compilers.yaml`` to provide #. The only thing left to do is to edit ``~/.spack/darwin/compilers.yaml`` to provide
the path to ``gfortran``: the path to ``gfortran``:
@@ -654,8 +490,7 @@ Fortran.
If you used Spack to install GCC, you can get the installation prefix by If you used Spack to install GCC, you can get the installation prefix by
``spack location -i gcc`` (this will only work if you have a single version ``spack location -i gcc`` (this will only work if you have a single version
of GCC installed). Whereas for Homebrew, GCC is installed in of GCC installed). Whereas for Homebrew, GCC is installed in
``/usr/local/Cellar/gcc/x.y.z``. With the DMG installer, the correct path ``/usr/local/Cellar/gcc/x.y.z``.
will be ``/usr/local/gfortran``.
^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^
Compiler Verification Compiler Verification
@@ -889,7 +724,7 @@ an OpenMPI installed in /opt/local, one would use:
buildable: False buildable: False
In general, Spack is easier to use and more reliable if it builds all of In general, Spack is easier to use and more reliable if it builds all of
its own dependencies. However, there are several packages for which one its own dependencies. However, there are two packages for which one
commonly needs to use system versions: commonly needs to use system versions:
^^^ ^^^
@@ -1237,33 +1072,6 @@ Secret keys may also be later exported using the
<https://www.digitalocean.com/community/tutorials/how-to-setup-additional-entropy-for-cloud-servers-using-haveged>`_ <https://www.digitalocean.com/community/tutorials/how-to-setup-additional-entropy-for-cloud-servers-using-haveged>`_
provides a good overview of sources of randomness. provides a good overview of sources of randomness.
Here is an example of creating a key. Note that we provide a name for the key first
(which we can use to reference the key later) and an email address:
.. code-block:: console
$ spack gpg create dinosaur dinosaur@thedinosaurthings.com
If you want to export the key as you create it:
.. code-block:: console
$ spack gpg create --export key.pub dinosaur dinosaur@thedinosaurthings.com
Or the private key:
.. code-block:: console
$ spack gpg create --export-secret key.priv dinosaur dinosaur@thedinosaurthings.com
You can include both ``--export`` and ``--export-secret``, each with
an output file of choice, to export both.
^^^^^^^^^^^^ ^^^^^^^^^^^^
Listing keys Listing keys
^^^^^^^^^^^^ ^^^^^^^^^^^^
@@ -1272,22 +1080,7 @@ In order to list the keys available in the keyring, the
``spack gpg list`` command will list trusted keys with the ``--trusted`` flag ``spack gpg list`` command will list trusted keys with the ``--trusted`` flag
and keys available for signing using ``--signing``. If you would like to and keys available for signing using ``--signing``. If you would like to
remove keys from your keyring, ``spack gpg untrust <keyid>``. Key IDs can be remove keys from your keyring, ``spack gpg untrust <keyid>``. Key IDs can be
email addresses, names, or (best) fingerprints. Here is an example of listing email addresses, names, or (best) fingerprints.
the key that we just created:
.. code-block:: console
$ spack gpg list
gpgconf: socketdir is '/run/user/1000/gnupg'
/home/spackuser/spack/opt/spack/gpg/pubring.kbx
----------------------------------------------------------
pub rsa4096 2021-03-25 [SC]
60D2685DAB647AD4DB54125961E09BB6F2A0ADCB
uid [ultimate] dinosaur (GPG created for Spack) <dinosaur@thedinosaurthings.com>
Note that the name "dinosaur" can be seen under the uid, which is the unique
id. We might need this reference if we want to export or otherwise reference the key.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Signing and Verifying Packages Signing and Verifying Packages
@@ -1302,38 +1095,6 @@ may also be used to create a signed file which contains the contents, but it
is not recommended. Signed packages may be verified by using is not recommended. Signed packages may be verified by using
``spack gpg verify <file>``. ``spack gpg verify <file>``.
^^^^^^^^^^^^^^
Exporting Keys
^^^^^^^^^^^^^^
You likely might want to export a public key, and that looks like this. Let's
use the previous example and ask spack to export the key with uid "dinosaur."
We will provide an output location (typically a ``*.pub`` file) and the name of
the key.
.. code-block:: console
$ spack gpg export dinosaur.pub dinosaur
You can then look at the created file, ``dinosaur.pub``, to see the exported key.
If you want to include the private key, then just add ``--secret``:
.. code-block:: console
$ spack gpg export --secret dinosaur.priv dinosaur
This will write the private key to the file ``dinosaur.priv``.
.. warning::
You should be very careful about exporting private keys. You likely would
only want to do this in the context of moving your spack installation to
a different server, and wanting to preserve keys for a buildcache. If you
are unsure about exporting, you can ask your local system administrator,
or ask for help in an issue or on the Spack Slack.
.. _cray-support: .. _cray-support:
------------- -------------
@@ -39,7 +39,7 @@ package:
.. code-block:: console .. code-block:: console
$ git clone -c feature.manyFiles=true https://github.com/spack/spack.git $ git clone https://github.com/spack/spack.git
$ cd spack/bin $ cd spack/bin
$ ./spack install libelf $ ./spack install libelf
@@ -56,6 +56,7 @@ or refer to the full manual below.
basic_usage basic_usage
workflows workflows
Tutorial: Spack 101 <https://spack-tutorial.readthedocs.io> Tutorial: Spack 101 <https://spack-tutorial.readthedocs.io>
known_issues
.. toctree:: .. toctree::
:maxdepth: 2 :maxdepth: 2
@@ -66,7 +67,6 @@ or refer to the full manual below.
build_settings build_settings
environments environments
containers containers
monitoring
mirrors mirrors
module_file_support module_file_support
repositories repositories
@@ -77,12 +77,6 @@ or refer to the full manual below.
extensions extensions
pipelines pipelines
.. toctree::
:maxdepth: 2
:caption: Research
analyze
.. toctree:: .. toctree::
:maxdepth: 2 :maxdepth: 2
:caption: Contributing :caption: Contributing
@@ -0,0 +1,77 @@
.. Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
============
Known Issues
============
This is a list of known bugs in Spack. It provides ways of getting around these
problems if you encounter them.
---------------------------------------------------
Variants are not properly forwarded to dependencies
---------------------------------------------------
**Status:** Expected to be fixed by Spack's new concretizer
Sometimes, a variant of a package can also affect how its dependencies are
built. For example, in order to build MPI support for a package, it may
require that its dependencies are also built with MPI support. In the
``package.py``, this looks like:
.. code-block:: python
depends_on('hdf5~mpi', when='~mpi')
depends_on('hdf5+mpi', when='+mpi')
Spack handles this situation properly for *immediate* dependencies, and
builds ``hdf5`` with the same variant you used for the package that
depends on it. However, for *indirect* dependencies (dependencies of
dependencies), Spack does not backtrack up the DAG far enough to handle
this. Users commonly run into this situation when trying to build R with
X11 support:
.. code-block:: console
$ spack install r+X
...
==> Error: Invalid spec: 'cairo@1.14.8%gcc@6.2.1+X arch=linux-fedora25-x86_64 ^bzip2@1.0.6%gcc@6.2.1+shared arch=linux-fedora25-x86_64 ^font-util@1.3.1%gcc@6.2.1 arch=linux-fedora25-x86_64 ^fontconfig@2.12.1%gcc@6.2.1 arch=linux-fedora25-x86_64 ^freetype@2.7.1%gcc@6.2.1 arch=linux-fedora25-x86_64 ^gettext@0.19.8.1%gcc@6.2.1+bzip2+curses+git~libunistring+libxml2+tar+xz arch=linux-fedora25-x86_64 ^glib@2.53.1%gcc@6.2.1~libmount arch=linux-fedora25-x86_64 ^inputproto@2.3.2%gcc@6.2.1 arch=linux-fedora25-x86_64 ^kbproto@1.0.7%gcc@6.2.1 arch=linux-fedora25-x86_64 ^libffi@3.2.1%gcc@6.2.1 arch=linux-fedora25-x86_64 ^libpng@1.6.29%gcc@6.2.1 arch=linux-fedora25-x86_64 ^libpthread-stubs@0.4%gcc@6.2.1 arch=linux-fedora25-x86_64 ^libx11@1.6.5%gcc@6.2.1 arch=linux-fedora25-x86_64 ^libxau@1.0.8%gcc@6.2.1 arch=linux-fedora25-x86_64 ^libxcb@1.12%gcc@6.2.1 arch=linux-fedora25-x86_64 ^libxdmcp@1.1.2%gcc@6.2.1 arch=linux-fedora25-x86_64 ^libxext@1.3.3%gcc@6.2.1 arch=linux-fedora25-x86_64 ^libxml2@2.9.4%gcc@6.2.1~python arch=linux-fedora25-x86_64 ^libxrender@0.9.10%gcc@6.2.1 arch=linux-fedora25-x86_64 ^ncurses@6.0%gcc@6.2.1~symlinks arch=linux-fedora25-x86_64 ^openssl@1.0.2k%gcc@6.2.1 arch=linux-fedora25-x86_64 ^pcre@8.40%gcc@6.2.1+utf arch=linux-fedora25-x86_64 ^pixman@0.34.0%gcc@6.2.1 arch=linux-fedora25-x86_64 ^pkg-config@0.29.2%gcc@6.2.1+internal_glib arch=linux-fedora25-x86_64 ^python@2.7.13%gcc@6.2.1+shared~tk~ucs4 arch=linux-fedora25-x86_64 ^readline@7.0%gcc@6.2.1 arch=linux-fedora25-x86_64 ^renderproto@0.11.1%gcc@6.2.1 arch=linux-fedora25-x86_64 ^sqlite@3.18.0%gcc@6.2.1 arch=linux-fedora25-x86_64 ^tar^util-macros@1.19.1%gcc@6.2.1 arch=linux-fedora25-x86_64 ^xcb-proto@1.12%gcc@6.2.1 arch=linux-fedora25-x86_64 ^xextproto@7.3.0%gcc@6.2.1 arch=linux-fedora25-x86_64 ^xproto@7.0.31%gcc@6.2.1 arch=linux-fedora25-x86_64 ^xtrans@1.3.5%gcc@6.2.1 arch=linux-fedora25-x86_64 ^xz@5.2.3%gcc@6.2.1 arch=linux-fedora25-x86_64 ^zlib@1.2.11%gcc@6.2.1+pic+shared arch=linux-fedora25-x86_64'.
Package cairo requires variant ~X, but spec asked for +X
A workaround is to explicitly activate the variants of dependencies as well:
.. code-block:: console
$ spack install r+X ^cairo+X ^pango+X
See https://github.com/spack/spack/issues/267 and
https://github.com/spack/spack/issues/2546 for further details.
-----------------------------------------------
depends_on cannot handle recursive dependencies
-----------------------------------------------
**Status:** Not yet a work in progress
Although ``depends_on`` can handle any aspect of Spack's spec syntax,
it currently cannot handle recursive dependencies. If the ``^`` sigil
appears in a ``depends_on`` statement, the concretizer will hang.
For example, something like:
.. code-block:: python
depends_on('mfem+cuda ^hypre+cuda', when='+cuda')
should be rewritten as:
.. code-block:: python
depends_on('mfem+cuda', when='+cuda')
depends_on('hypre+cuda', when='+cuda')
See https://github.com/spack/spack/issues/17660 and
https://github.com/spack/spack/issues/11160 for more details.
@@ -159,27 +159,6 @@ can supply a file with specs in it, one per line:
This is useful if there is a specific suite of software managed by This is useful if there is a specific suite of software managed by
your site. your site.
^^^^^^^^^^^^^^^^^^
Mirror environment
^^^^^^^^^^^^^^^^^^
To create a mirror of all packages required by a concrete environment, activate the environment and call ``spack mirror create -a``.
This is especially useful to create a mirror of an environment concretized on another machine.
.. code-block:: console
[remote] $ spack env create myenv
[remote] $ spack env activate myenv
[remote] $ spack add ...
[remote] $ spack concretize
$ sftp remote:/spack/var/environment/myenv/spack.lock
$ spack env create myenv spack.lock
$ spack env activate myenv
$ spack mirror create -a
.. _cmd-spack-mirror-add: .. _cmd-spack-mirror-add:
-------------------- --------------------
@@ -71,24 +71,9 @@ Module file customization
------------------------- -------------------------
Module files are generated by post-install hooks after the successful Module files are generated by post-install hooks after the successful
installation of a package. installation of a package. The table below summarizes the essential
information associated with the different file formats
.. note:: that can be generated by Spack:
Spack only generates modulefiles when a package is installed. If
you attempt to install a package and it is already installed, Spack
will not regenerate modulefiles for the package. This may lead to
inconsistent modulefiles if the Spack module configuration has
changed since the package was installed, either by editing a file
or changing scopes or environments.
Later in this section there is a subsection on :ref:`regenerating
modules <cmd-spack-module-refresh>` that will allow you to bring
your modules to a consistent state.
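One plausible way to bring modules back to a consistent state is to
regenerate them, e.g. for ``tcl`` modules:

.. code-block:: console

   $ spack module tcl refresh --delete-tree -y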
The table below summarizes the essential information associated with
the different file formats that can be generated by Spack:
+-----------------------------+--------------------+-------------------------------+----------------------------------------------+----------------------+ +-----------------------------+--------------------+-------------------------------+----------------------------------------------+----------------------+
| | **Hook name** | **Default root directory** | **Default template file** | **Compatible tools** | | | **Hook name** | **Default root directory** | **Default template file** | **Compatible tools** |
@@ -145,8 +130,9 @@ list of environment modifications.
to the corresponding environment variables: to the corresponding environment variables:
================== ================================= ================== =================================
LIBRARY_PATH ``self.prefix/rlib/R/lib``
LD_LIBRARY_PATH ``self.prefix/rlib/R/lib`` LD_LIBRARY_PATH ``self.prefix/rlib/R/lib``
PKG_CONFIG_PATH ``self.prefix/rlib/pkgconfig`` CPATH ``self.prefix/rlib/R/include``
================== ================================= ================== =================================
with the following snippet: with the following snippet:
@@ -178,58 +164,6 @@ the installation folder of each package for the presence of a set of subdirector
(``bin``, ``man``, ``share/man``, etc.). If any is found its full path is prepended (``bin``, ``man``, ``share/man``, etc.). If any is found its full path is prepended
to the environment variables listed below the folder name. to the environment variables listed below the folder name.
Spack modules can be configured for multiple module sets. The default
module set is named ``default``. All Spack commands which operate on
modules default to apply the ``default`` module set, but can be
applied to any module set in the configuration. Settings applied at
the root of the configuration (e.g. ``modules:enable`` rather than
``modules:default:enable``) are applied to the default module set for
backwards compatibility.
"""""""""""""""""""""""""
Changing the modules root
"""""""""""""""""""""""""
As shown in the table above, the default module root for ``lmod`` is
``$spack/share/spack/lmod`` and the default root for ``tcl`` is
``$spack/share/spack/modules``. This can be overridden for any module
set by changing the ``roots`` key of the configuration.
.. code-block:: yaml

   modules:
     default:
       roots:
         tcl: /path/to/install/tcl/modules
     my_custom_lmod_modules:
       roots:
         lmod: /path/to/install/custom/lmod/modules
     ...
This configuration will create two module sets. The default module set
will install its ``tcl`` modules to ``/path/to/install/tcl/modules``
(and still install its lmod modules, if any, to the default
location). The set ``my_custom_lmod_modules`` will install its lmod
modules to ``/path/to/install/custom/lmod/modules`` (and still install
its tcl modules, if any, to the default location).
By default, an architecture-specific directory is added to the root
directory. A module set may override that behavior by setting the
``arch_folder`` config value to ``False``.
.. code-block:: yaml

   modules:
     default:
       roots:
         tcl: /path/to/install/tcl/modules
       arch_folder: false
Obviously, having multiple module sets install modules to the default
location could be confusing to users of your modules. In the next
section, we will discuss enabling and disabling module types (module
file generators) for each module set.
"""""""""""""""""""" """"""""""""""""""""
Activate other hooks Activate other hooks
"""""""""""""""""""" """"""""""""""""""""
@@ -245,7 +179,6 @@ to the generator being customized:
.. code-block:: yaml .. code-block:: yaml
modules: modules:
default:
enable: enable:
- tcl - tcl
- lmod - lmod
@@ -461,52 +394,16 @@ that are already in the LMod hierarchy.
For hierarchies that are deeper than three layers ``lmod spider`` may have some issues. For hierarchies that are deeper than three layers ``lmod spider`` may have some issues.
See `this discussion on the LMod project <https://github.com/TACC/Lmod/issues/114>`_. See `this discussion on the LMod project <https://github.com/TACC/Lmod/issues/114>`_.
""""""""""""""""""""""
Select default modules
""""""""""""""""""""""
By default, when multiple modules of the same name share a directory,
the highest version number will be the default module. This behavior
of the ``module`` command can be overridden with a symlink named
``default`` to the desired default module. If you wish to configure
default modules with Spack, add a ``defaults`` key to your modules
configuration:
.. code-block:: yaml

   modules:
     my-module-set:
       tcl:
         defaults:
         - gcc@10.2.1
         - hdf5@1.2.10+mpi+hl%gcc
These defaults may be arbitrarily specific. For any package that
satisfies a default, Spack will generate the module file in the
appropriate path, and will generate a default symlink to the module
file as well.
.. warning::
If Spack is configured to generate multiple default packages in the
same directory, the last modulefile to be generated will be the
default module.
.. _customize-env-modifications: .. _customize-env-modifications:
""""""""""""""""""""""""""""""""""" """""""""""""""""""""""""""""""""""
Customize environment modifications Customize environment modifications
""""""""""""""""""""""""""""""""""" """""""""""""""""""""""""""""""""""
You can control which prefixes in a Spack package are added to You can control which prefixes in a Spack package are added to environment
environment variables with the ``prefix_inspections`` section; this variables with the ``prefix_inspections`` section; this section maps relative
section maps relative prefixes to the list of environment variables prefixes to the list of environment variables which should be updated with
which should be updated with those prefixes. those prefixes.
The ``prefix_inspections`` configuration is different from other
settings in that a ``prefix_inspections`` configuration at the
``modules`` level of the configuration file applies to all module
sets. This allows users to make general overrides to the default
inspections and customize them per-module-set.
.. code-block:: yaml .. code-block:: yaml
@@ -519,66 +416,10 @@ inspections and customize them per-module-set.
'': '':
- CMAKE_PREFIX_PATH - CMAKE_PREFIX_PATH
Prefix inspections are only applied if the relative path inside the In this case, for a Spack package ``foo`` installed to ``/spack/prefix/foo``,
installation prefix exists. In this case, for a Spack package ``foo`` the generated module file for ``foo`` would update ``PATH`` to contain
installed to ``/spack/prefix/foo``, if ``foo`` installs executables to
``bin`` but no libraries in ``lib``, the generated module file for
``foo`` would update ``PATH`` to contain ``/spack/prefix/foo/bin`` and
``CMAKE_PREFIX_PATH`` to contain ``/spack/prefix/foo``, but would not
update ``LIBRARY_PATH``.
There is a special case for prefix inspections relative to environment
views. If all of the following conditions hold for a module set
configuration:
#. The configuration is for an :ref:`environment <environments>` and
will never be applied outside the environment,
#. The environment in question is configured to use a :ref:`view
<filesystem-views>`,
#. The :ref:`environment view is configured
<configuring_environment_views>` with a projection that ensures
every package is linked to a unique directory,
then the module set may be configured to create modules relative to
the environment view. This is specified by the ``use_view``
configuration option in the module set. If ``True``, the module set is
constructed relative to the default view of the
environment. Otherwise, the value must be the name of the environment
view relative to which to construct modules, or ``False-ish`` to
disable the feature explicitly (the default is ``False``).
If the ``use_view`` value is set in the config, then the prefix
inspections for the package are done relative to the package's path in
the view.
.. code-block:: yaml

   spack:
     modules:
       view_relative_modules:
         use_view: my_view
         prefix_inspections:
           bin:
           - PATH
     view:
       my_view:
         projections:
           root: /path/to/my/view
           all: '{name}-{hash}'
The ``spack`` key is relevant to :ref:`environment <environments>`
configuration, and the view key is discussed in detail in the section
on :ref:`Configuring environment views
<configuring_environment_views>`. With this configuration the
generated module for package ``foo`` would set ``PATH`` to include
``/path/to/my/view/foo-<hash>/bin`` instead of
``/spack/prefix/foo/bin``. ``/spack/prefix/foo/bin``.
The ``use_view`` option is useful when deploying a large software
stack to users who are likely to inspect the modules to find full
paths to software, when it is desirable to present the users with a
simpler set of paths than those generated by the Spack install tree.
"""""""""""""""""""""""""""""""""""" """"""""""""""""""""""""""""""""""""
Filter out environment modifications Filter out environment modifications
"""""""""""""""""""""""""""""""""""" """"""""""""""""""""""""""""""""""""
@@ -1,265 +0,0 @@
.. Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _monitoring:
==========
Monitoring
==========
You can use a `spack monitor <https://github.com/spack/spack-monitor>`_ "Spackmon"
server to store a database of your packages, builds, and associated metadata
for provenance, research, or some other kind of development. You should
follow the instructions in the `spack monitor documentation <https://spack-monitor.readthedocs.org>`_
to first create a server along with a username and token for yourself.
You can then use this guide to interact with the server.
-------------------
Analysis Monitoring
-------------------
To read about how to monitor an analysis (meaning you want to send analysis results
to a server) see :ref:`analyze_monitoring`.
---------------------
Monitoring An Install
---------------------
Since an install is typically when you build packages, we logically want
to tell spack to monitor during this step. Let's start with an example
where we want to monitor the install of hdf5. Unless you have disabled authentication
for the server, we first want to export our spack monitor token and username to the environment:
.. code-block:: console
$ export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438
$ export SPACKMON_USER=spacky
By default, the host for your server is expected to be at ``http://127.0.0.1``
with a prefix of ``ms1``, and if this is the case, you can simply add the
``--monitor`` flag to the install command:
.. code-block:: console
$ spack install --monitor hdf5
If you need to customize the host or the prefix, you can do that as well:
.. code-block:: console
$ spack install --monitor --monitor-prefix monitor --monitor-host https://monitor-service.io hdf5
As a precaution, the spack client exits early if you have not provided
authentication credentials. For example, if you run the command above without
exporting your username or token, you'll see:
.. code-block:: console
==> Error: You are required to export SPACKMON_TOKEN and SPACKMON_USER
This extra check is to ensure that we don't start any builds,
and then discover that you forgot to export your token. However, if
your monitoring server has authentication disabled, you can tell the client
to skip this step:
.. code-block:: console
$ spack install --monitor --monitor-disable-auth hdf5
If the service is not running, you'll cleanly exit early - the install will
not continue if you've asked it to monitor and there is no service.
For example, here is what you'll see if the monitoring service is not running:
.. code-block:: console
[Errno 111] Connection refused
If you want to continue builds (and stop monitoring) you can set the ``--monitor-keep-going``
flag.
.. code-block:: console
$ spack install --monitor --monitor-keep-going hdf5
This could mean that if a request fails, you only have partial or no data
added to your monitoring database. This setting will not be applied to the
first request to check if the server is running, but to subsequent requests.
If you don't have a monitor server running and you want to build, simply
don't provide the ``--monitor`` flag! Finally, if you want to provide one or
more tags to your build, you can do:
.. code-block:: console
# Add one tag, "pizza"
$ spack install --monitor --monitor-tags pizza hdf5
# Add two tags, "pizza" and "pasta"
$ spack install --monitor --monitor-tags pizza,pasta hdf5
----------------------------
Monitoring with Containerize
----------------------------
The same argument group is available to add to a containerize command.
^^^^^^
Docker
^^^^^^
To add monitoring to a Docker container recipe generation using the defaults,
and assuming a monitor server running on localhost, you would
start with a spack.yaml in your present working directory:
.. code-block:: yaml

   spack:
     specs:
     - samtools
And then do:
.. code-block:: console
# preview first
spack containerize --monitor
# and then write to a Dockerfile
spack containerize --monitor > Dockerfile
The install command will be edited to include commands for enabling monitoring.
However, getting secrets into the container for your monitor server is something
that should be done carefully. Specifically you should:
- Never try to define secrets as ENV, ARG, or using ``--build-arg``
- Do not try to get the secret into the container via a "temporary" file that you remove (it in fact will still exist in a layer)
Instead, it's recommended to use buildkit `as explained here <https://pythonspeed.com/articles/docker-build-secrets/>`_.
You'll need to again export environment variables for your spack monitor server:
.. code-block:: console
$ export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438
$ export SPACKMON_USER=spacky
And then use buildkit along with your build and identifying the name of the secret:
.. code-block:: console
$ DOCKER_BUILDKIT=1 docker build --secret id=st,env=SPACKMON_TOKEN --secret id=su,env=SPACKMON_USER -t spack/container .
The secrets are expected to come from your environment, and then will be temporarily mounted and available
at ``/run/secrets/<name>``. If you forget to supply them (and authentication is required) the build
will fail. If you need to build on your host (and interact with a spack monitor at localhost) you'll
need to tell Docker to use the host network:
.. code-block:: console
$ DOCKER_BUILDKIT=1 docker build --network="host" --secret id=st,env=SPACKMON_TOKEN --secret id=su,env=SPACKMON_USER -t spack/container .
^^^^^^^^^^^
Singularity
^^^^^^^^^^^
To add monitoring to a Singularity container build, the spack.yaml needs to
be modified slightly to specify wanting a different format:
.. code-block:: yaml

   spack:
     specs:
     - samtools
     container:
       format: singularity
Again, generate the recipe:
.. code-block:: console
# preview first
$ spack containerize --monitor
# then write to a Singularity recipe
$ spack containerize --monitor > Singularity
Singularity doesn't have a direct way to define secrets at build time, so we have
to add a file manually, source the secrets from it, and then remove it.
Since Singularity doesn't have layers like Docker, deleting a file will truly
remove it from the container and history. So let's say we have this file,
``secrets.sh``:
.. code-block:: console
# secrets.sh
export SPACKMON_USER=spack
export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438
We would then generate the Singularity recipe, and add a files section,
a source of that file at the start of ``%post``, and **importantly**
a removal of the file at the end of that same section.
.. code-block::

   Bootstrap: docker
   From: spack/ubuntu-bionic:latest
   Stage: build

   %files
       secrets.sh /opt/secrets.sh

   %post
       . /opt/secrets.sh
       # spack install commands are here
       ...
       # Don't forget to remove here!
       rm /opt/secrets.sh
You can then build the container as you normally would.
.. code-block:: console
$ sudo singularity build container.sif Singularity
------------------
Monitoring Offline
------------------
In the case that you want to save monitor results to your filesystem
and then upload them later (perhaps you are in an environment where you don't
have credentials or it isn't safe to use them) you can use the ``--monitor-save-local``
flag.
.. code-block:: console
$ spack install --monitor --monitor-save-local hdf5
This will save results in a subfolder, "monitor", in your designated spack
reports folder, which defaults to ``$HOME/.spack/reports/monitor``. When
you are ready to upload them to a spack monitor server:
.. code-block:: console
$ spack monitor upload ~/.spack/reports/monitor
You can choose the root directory of results as shown above, or a specific
subdirectory. The command accepts other arguments to specify configuration
for the monitor.
File diff suppressed because it is too large
@@ -30,70 +30,27 @@ at least one `runner <https://docs.gitlab.com/runner/>`_. Then the basic steps
for setting up a build pipeline are as follows: for setting up a build pipeline are as follows:
#. Create a repository on your gitlab instance #. Create a repository on your gitlab instance
#. Add a ``spack.yaml`` at the root containing your pipeline environment #. Add a ``spack.yaml`` at the root containing your pipeline environment (see
below for details)
#. Add a ``.gitlab-ci.yml`` at the root containing two jobs (one to generate #. Add a ``.gitlab-ci.yml`` at the root containing two jobs (one to generate
the pipeline dynamically, and one to run the generated jobs). the pipeline dynamically, and one to run the generated jobs), similar to
#. Push a commit containing the ``spack.yaml`` and ``.gitlab-ci.yml`` mentioned above this one:
to the gitlab repository
See the :ref:`functional_example` section for a minimal working example. See also
the :ref:`custom_Workflow` section for a link to an example of a custom workflow
based on spack pipelines.
While it is possible to set up pipelines on gitlab.com, as illustrated above, the
builds there are limited to 60 minutes and generic hardware. It is also possible to
`hook up <https://about.gitlab.com/blog/2018/04/24/getting-started-gitlab-ci-gcp>`_
Gitlab to Google Kubernetes Engine (`GKE <https://cloud.google.com/kubernetes-engine/>`_)
or Amazon Elastic Kubernetes Service (`EKS <https://aws.amazon.com/eks>`_), though those
topics are outside the scope of this document.
Spack's pipelines are now making use of the
`trigger <https://docs.gitlab.com/ee/ci/yaml/#trigger>`_ syntax to run
dynamically generated
`child pipelines <https://docs.gitlab.com/ee/ci/pipelines/parent_child_pipelines.html>`_.
Note that the use of dynamic child pipelines requires running Gitlab version
``>= 12.9``.
.. _functional_example:
------------------
Functional Example
------------------
The simplest fully functional standalone example of a working pipeline can be
examined live at this example `project <https://gitlab.com/scott.wittenburg/spack-pipeline-demo>`_
on gitlab.com.
Here's the ``.gitlab-ci.yml`` file from that example that builds and runs the
pipeline:
.. code-block:: yaml .. code-block:: yaml
stages: [generate, build] stages: [generate, build]
variables:
SPACK_REPO: https://github.com/scottwittenburg/spack.git
SPACK_REF: pipelines-reproducible-builds
generate-pipeline: generate-pipeline:
stage: generate stage: generate
tags: tags:
- docker - <custom-tag>
image:
name: ghcr.io/scottwittenburg/ecpe4s-ubuntu18.04-runner-x86_64:2020-09-01
entrypoint: [""]
before_script:
- git clone ${SPACK_REPO}
- pushd spack && git checkout ${SPACK_REF} && popd
- . "./spack/share/spack/setup-env.sh"
script: script:
- spack env activate --without-view . - spack env activate --without-view .
- spack -d ci generate - spack ci generate
--artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
--output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml" --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml"
artifacts: artifacts:
paths: paths:
- "${CI_PROJECT_DIR}/jobs_scratch_dir" - "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml"
build-jobs: build-jobs:
stage: build stage: build
@@ -103,95 +60,49 @@ pipeline:
job: generate-pipeline job: generate-pipeline
strategy: depend strategy: depend
The key thing to note above is that there are two jobs: The first job to run,
``generate-pipeline``, runs the ``spack ci generate`` command to generate a
dynamic child pipeline and write it to a yaml file, which is then picked up
by the second job, ``build-jobs``, and used to trigger the downstream pipeline.
And here's the spack environment built by the pipeline represented as a #. Add any secrets required by the CI process to environment variables using the
``spack.yaml`` file: CI web ui
#. Push a commit containing the ``spack.yaml`` and ``.gitlab-ci.yml`` mentioned above
to the gitlab repository
.. code-block:: yaml The ``<custom-tag>``, above, is used to pick one of your configured runners to
run the pipeline generation phase (this is implemented in the ``spack ci generate``
command, which assumes the runner has an appropriate version of spack installed
and configured for use). Of course, there are many ways to customize the process.
You can configure CDash reporting on the progress of your builds, set up S3 buckets
to mirror binaries built by the pipeline, clone a custom spack repository/ref for
use by the pipeline, and more.
spack: While it is possible to set up pipelines on gitlab.com, the builds there are
view: false limited to 60 minutes and generic hardware. It is also possible to
concretization: separately `hook up <https://about.gitlab.com/blog/2018/04/24/getting-started-gitlab-ci-gcp>`_
Gitlab to Google Kubernetes Engine (`GKE <https://cloud.google.com/kubernetes-engine/>`_)
or Amazon Elastic Kubernetes Service (`EKS <https://aws.amazon.com/eks>`_), though those
topics are outside the scope of this document.
definitions: Spack's pipelines are now making use of the
- pkgs: `trigger <https://docs.gitlab.com/12.9/ee/ci/yaml/README.html#trigger>`_ syntax to run
- zlib dynamically generated
- bzip2 `child pipelines <https://docs.gitlab.com/12.9/ee/ci/parent_child_pipelines.html>`_.
- arch: Note that the use of dynamic child pipelines requires running Gitlab version
- '%gcc@7.5.0 arch=linux-ubuntu18.04-x86_64' ``>= 12.9``.
specs:
- matrix:
- - $pkgs
- - $arch
mirrors: { "mirror": "s3://spack-public/mirror" }
gitlab-ci:
before_script:
- git clone ${SPACK_REPO}
- pushd spack && git checkout ${SPACK_CHECKOUT_VERSION} && popd
- . "./spack/share/spack/setup-env.sh"
script:
- pushd ${SPACK_CONCRETE_ENV_DIR} && spack env activate --without-view . && popd
- spack -d ci rebuild
mappings:
- match: ["os=ubuntu18.04"]
runner-attributes:
image:
name: ghcr.io/scottwittenburg/ecpe4s-ubuntu18.04-runner-x86_64:2020-09-01
entrypoint: [""]
tags:
- docker
enable-artifacts-buildcache: True
rebuild-index: False
The elements of this file important to spack ci pipelines are described in more
detail below, but there are a couple of things to note about the above working
example:
Normally ``enable-artifacts-buildcache`` is not recommended in production as it
results in large binary artifacts getting transferred back and forth between
gitlab and the runners. But in this example on gitlab.com where there is no
shared, persistent file system, and where no secrets are stored for giving
permission to write to an S3 bucket, ``enable-artifacts-buildcache`` is the only
way to propagate binaries from jobs to their dependents.
Also, it is usually a good idea to let the pipeline generate a final "rebuild the
buildcache index" job, so that subsequent pipeline generation can quickly determine
which specs are up to date and which need to be rebuilt (it's a good idea for other
reasons as well, but those are out of scope for this discussion). In this case we
have disabled it (using ``rebuild-index: False``) because the index would only be
generated in the artifacts mirror anyway, and consequently would not be available
during subsequent pipeline runs.
.. note::
With the addition of reproducible builds (#22887) a previously working
pipeline will require some changes:
* In the build jobs (``runner-attributes``), the environment location changed.
This will typically show as a ``KeyError`` in the failing job. Be sure to
point to ``${SPACK_CONCRETE_ENV_DIR}``.
* When using ``include`` in your environment, be sure to make the included
files available in the build jobs. This means adding those files to the
artifact directory. Those files will also be missing in the reproducibility
artifact.
* Because the location of the environment changed, included files with
relative paths may have to be adapted to work both in the project context
(generation job) and in the concrete env dir context (build job).
----------------------------------- -----------------------------------
Spack commands supporting pipelines Spack commands supporting pipelines
----------------------------------- -----------------------------------
Spack provides a ``ci`` command with a few sub-commands supporting spack Spack provides a command ``ci`` with two sub-commands: ``spack ci generate`` generates
ci pipelines. These commands are covered in more detail in this section. a pipeline (a .gitlab-ci.yml file) from a spack environment, and ``spack ci rebuild``
checks a spec against a remote mirror and possibly rebuilds it from source and updates
the binary mirror with the latest built package. Both ``spack ci ...`` commands must
be run from within the same environment, as each one makes use of the environment for
different purposes. Additionally, some options to the commands (or conditions present
in the spack environment file) may require particular environment variables to be
set in order to function properly. Examples of these are typically secrets
needed for pipeline operation that should not be visible in a spack environment
file. These environment variables are described in more detail
:ref:`ci_environment_variables`.
.. _cmd-spack-ci: .. _cmd-spack-ci:
@@ -210,17 +121,6 @@ pipeline jobs.
Concretizes the specs in the active environment, stages them (as described in Concretizes the specs in the active environment, stages them (as described in
:ref:`staging_algorithm`), and writes the resulting ``.gitlab-ci.yml`` to disk. :ref:`staging_algorithm`), and writes the resulting ``.gitlab-ci.yml`` to disk.
During concretization of the environment, ``spack ci generate`` also writes a
``spack.lock`` file which is then provided to generated child jobs and made
available in all generated job artifacts to aid in reproducing failed builds
in a local environment. This means there are two artifacts that need to be
exported in your pipeline generation job (defined in your ``.gitlab-ci.yml``).
The first is the output yaml file of ``spack ci generate``, and the other is
the directory containing the concrete environment files. In the
:ref:`functional_example` section, we only mentioned one path in the
``artifacts`` ``paths`` list because we used ``--artifacts-root`` as the
top level directory containing both the generated pipeline yaml and the
concrete environment.
Using ``--prune-dag`` or ``--no-prune-dag`` configures whether or not jobs are Using ``--prune-dag`` or ``--no-prune-dag`` configures whether or not jobs are
generated for specs that are already up to date on the mirror. If enabling generated for specs that are already up to date on the mirror. If enabling
@@ -228,16 +128,6 @@ DAG pruning using ``--prune-dag``, more information may be required in your
``spack.yaml`` file, see the :ref:`noop_jobs` section below regarding ``spack.yaml`` file, see the :ref:`noop_jobs` section below regarding
``service-job-attributes``. ``service-job-attributes``.
The optional ``--check-index-only`` argument can be used to speed up pipeline
generation by telling spack to consider only remote buildcache indices when
checking the remote mirror to determine if each spec in the DAG is up to date
or not. The default behavior is for spack to fetch the index and check it,
but if the spec is not found in the index, to also perform a direct check for
the spec on the mirror. If the remote buildcache index is out of date, which
can easily happen if it is not updated frequently, this behavior ensures that
spack has a way to know for certain about the status of any concrete spec on
the remote mirror, but can slow down pipeline generation significantly.
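For example, combining the options described in this section (paths follow
the functional example above):

.. code-block:: console

   $ spack ci generate --check-index-only \
       --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir" \
       --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml"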
The ``--optimize`` argument is experimental and runs the generated pipeline The ``--optimize`` argument is experimental and runs the generated pipeline
document through a series of optimization passes designed to reduce the size document through a series of optimization passes designed to reduce the size
of the generated file. of the generated file.
@@ -253,64 +143,19 @@ The optional ``--output-file`` argument should be an absolute path (including
file name) to the generated pipeline, and if not given, the default is file name) to the generated pipeline, and if not given, the default is
``./.gitlab-ci.yml``. ``./.gitlab-ci.yml``.
While optional, the ``--artifacts-root`` argument is used to determine where
the concretized environment directory should be located. This directory will
be created by ``spack ci generate`` and will contain the ``spack.yaml`` and
generated ``spack.lock`` which are then passed to all child jobs as an
artifact. This directory will also be the root directory for all artifacts
generated by jobs in the pipeline.
.. _cmd-spack-ci-rebuild: .. _cmd-spack-ci-rebuild:
^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^
``spack ci rebuild`` ``spack ci rebuild``
^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^
The purpose of ``spack ci rebuild`` is straightforward: take its assigned This sub-command is responsible for ensuring a single spec from the release
spec job, check whether the target mirror already has a binary for that spec, environment is up to date on the remote mirror configured in the environment,
and if not, build the spec from source and push the binary to the mirror. To and as such, corresponds to a single job in the ``.gitlab-ci.yml`` file.
accomplish this in a reproducible way, the sub-command prepares a ``spack install``
command line to build a single spec in the DAG, saves that command in a
shell script, ``install.sh``, in the current working directory, and then runs
it to install the spec. The shell script is also exported as an artifact to
aid in reproducing the build outside of the CI environment.
If it was necessary to install the spec from source, ``spack ci rebuild`` will Rather than taking command-line arguments, this sub-command expects information
also subsequently create a binary package for the spec and try to push it to the to be communicated via environment variables, which will typically come via the
mirror. ``.gitlab-ci.yml`` job as ``variables``.
The ``spack ci rebuild`` sub-command mainly expects its "input" to come either
from environment variables or from the ``gitlab-ci`` section of the ``spack.yaml``
environment file. There are two main sources of the environment variables, some
are written into ``.gitlab-ci.yml`` by ``spack ci generate``, and some are
provided by the GitLab CI runtime.
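Conceptually (this is a sketch, not the verbatim generated job), each child
job boils down to something like:

.. code-block:: bash

   # Variables such as SPACK_CONCRETE_ENV_DIR are written into the
   # generated .gitlab-ci.yml by `spack ci generate`.
   spack env activate --without-view "${SPACK_CONCRETE_ENV_DIR}"
   # Prepares ./install.sh, runs it, and pushes the resulting binary.
   spack ci rebuild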
.. _cmd-spack-ci-rebuild-index:
^^^^^^^^^^^^^^^^^^^^^^^^^^
``spack ci rebuild-index``
^^^^^^^^^^^^^^^^^^^^^^^^^^
This is a convenience command to rebuild the buildcache index associated with
the mirror in the active, gitlab-enabled environment (specifying the mirror
url or name is not required).
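For example, from within the active environment:

.. code-block:: bash

   # The mirror is taken from the environment's configuration.
   spack env activate .
   spack ci rebuild-index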
.. _cmd-spack-ci-reproduce-build:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``spack ci reproduce-build``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Given the url to a gitlab pipeline rebuild job, downloads and unzips the
artifacts into a local directory (which can be specified with the optional
``--working-dir`` argument), then finds the target job in the generated
pipeline to extract details about how it was run. Assuming the job used a
docker image, the command prints a ``docker run`` command line and some basic
instructions on how to reproduce the build locally.
Note that jobs failing in the pipeline will print messages giving the
arguments you can pass to ``spack ci reproduce-build`` in order to reproduce
a particular build locally.
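For example (the job url and working directory below are placeholders):

.. code-block:: bash

   spack ci reproduce-build \
       "https://gitlab.example.com/mygroup/myproject/-/jobs/12345" \
       --working-dir /tmp/repro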
------------------------------------ ------------------------------------
A pipeline-enabled spack environment A pipeline-enabled spack environment
@@ -395,13 +240,6 @@ takes a boolean and determines whether the pipeline uses artifacts to store and
pass along the buildcaches from one stage to the next (the default if you don't pass along the buildcaches from one stage to the next (the default if you don't
provide this option is ``False``). provide this option is ``False``).
The optional ``broken-specs-url`` key tells Spack to check against a list of
specs that are known to be currently broken in ``develop``. If any such specs
are found, the ``spack ci generate`` command will fail with an error message
informing the user what broken specs were encountered. This allows the pipeline
to fail early and avoid wasting compute resources attempting to build packages
that will not succeed.
The optional ``cdash`` section provides information that will be used by the The optional ``cdash`` section provides information that will be used by the
``spack ci generate`` command (invoked by ``spack ci start``) for reporting ``spack ci generate`` command (invoked by ``spack ci start``) for reporting
to CDash. All the jobs generated from this environment will belong to a to CDash. All the jobs generated from this environment will belong to a
@@ -519,9 +357,8 @@ scheduled on that runner. This allows users to do any custom preparation or
cleanup tasks that fit their particular workflow, as well as completely cleanup tasks that fit their particular workflow, as well as completely
customize the rebuilding of a spec if they so choose. Spack will not generate customize the rebuilding of a spec if they so choose. Spack will not generate
a ``before_script`` or ``after_script`` for jobs, but if you do not provide a ``before_script`` or ``after_script`` for jobs, but if you do not provide
a custom ``script``, spack will generate one for you that assumes the concrete a custom ``script``, spack will generate one for you that assumes your
environment directory is located within your ``--artifacts_root`` (or if not ``spack.yaml`` is at the root of the repository, activates that environment for
provided, within your ``$CI_PROJECT_DIR``), activates that environment for
you, and invokes ``spack ci rebuild``. you, and invokes ``spack ci rebuild``.
.. _staging_algorithm: .. _staging_algorithm:
@@ -646,15 +483,14 @@ Using a custom spack in your pipeline
If your runners will not have a version of spack ready to invoke, or if for some If your runners will not have a version of spack ready to invoke, or if for some
other reason you want to use a custom version of spack to run your pipelines, other reason you want to use a custom version of spack to run your pipelines,
this section provides an example of how you could take advantage of this section provides an example of how you could take advantage of
user-provided pipeline scripts to accomplish this fairly simply. First, consider user-provided pipeline scripts to accomplish this fairly simply. First, you
specifying the source and version of spack you want to use with variables, either could use the GitLab user interface to create CI environment variables
written directly into your ``.gitlab-ci.yml``, or provided by CI variables defined containing the url and branch or tag you want to use (calling them, for
in the gitlab UI or from some upstream pipeline. Let's say you choose the variable example, ``SPACK_REPO`` and ``SPACK_REF``), then refer to those in a custom shell
names ``SPACK_REPO`` and ``SPACK_REF`` to refer to the particular fork of spack script invoked both from your pipeline generation job, as well as in your rebuild
and branch you want for running your pipeline. You can then refer to those in a
custom shell script invoked both from your pipeline generation job and your rebuild
jobs. Here's the ``generate-pipeline`` job from the top of this document, jobs. Here's the ``generate-pipeline`` job from the top of this document,
updated to clone and source a custom spack: updated to invoke a custom shell script that will clone and source a custom
spack:
.. code-block:: yaml .. code-block:: yaml
@@ -662,24 +498,34 @@ updated to clone and source a custom spack:
tags: tags:
- <some-other-tag> - <some-other-tag>
before_script: before_script:
- git clone ${SPACK_REPO} - ./cloneSpack.sh
- pushd spack && git checkout ${SPACK_REF} && popd
- . "./spack/share/spack/setup-env.sh"
script: script:
- spack env activate --without-view . - spack env activate --without-view .
- spack ci generate --check-index-only - spack ci generate
--artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
--output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml" --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml"
after_script: after_script:
- rm -rf ./spack - rm -rf ./spack
artifacts: artifacts:
paths: paths:
- "${CI_PROJECT_DIR}/jobs_scratch_dir" - "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml"
That takes care of getting the desired version of spack when your pipeline is And the ``cloneSpack.sh`` script could contain:
generated by ``spack ci generate``. You also want your generated rebuild jobs
(all of them) to clone that version of spack, so next you would update your .. code-block:: bash
``spack.yaml`` from above as follows:
#!/bin/bash
git clone ${SPACK_REPO}
pushd ./spack
git checkout ${SPACK_REF}
popd
. "./spack/share/spack/setup-env.sh"
spack --version
Finally, you would also want your generated rebuild jobs to clone that version
of spack, so you would update your ``spack.yaml`` from above as follows:
.. code-block:: yaml .. code-block:: yaml
@@ -694,21 +540,21 @@ generated by ``spack ci generate``. You also want your generated rebuild jobs
- spack-kube - spack-kube
image: spack/ubuntu-bionic image: spack/ubuntu-bionic
before_script: before_script:
- git clone ${SPACK_REPO} - ./cloneSpack.sh
- pushd spack && git checkout ${SPACK_REF} && popd
- . "./spack/share/spack/setup-env.sh"
script: script:
- spack env activate --without-view ${SPACK_CONCRETE_ENV_DIR} - spack env activate --without-view .
- spack -d ci rebuild - spack -d ci rebuild
after_script: after_script:
- rm -rf ./spack - rm -rf ./spack
Now all of the generated rebuild jobs will use the same shell script to clone Now all of the generated rebuild jobs will use the same shell script to clone
spack before running their actual workload. spack before running their actual workload. Note in the above example the
provision of a custom ``script`` section. The reason for this is to run
``spack ci rebuild`` in debug mode to get more information when builds fail.
Now imagine you have long pipelines with many specs to be built, and you Now imagine you have long pipelines with many specs to be built, and you
are pointing to a spack repository and branch that has a tendency to change are pointing to a spack repository and branch that has a tendency to change
frequently, such as the main repo and its ``develop`` branch. If each child frequently, such as the main repo and it's ``develop`` branch. If each child
job checks out the ``develop`` branch, that could result in some jobs running job checks out the ``develop`` branch, that could result in some jobs running
with one SHA of spack, while later jobs run with another. To help avoid this with one SHA of spack, while later jobs run with another. To help avoid this
issue, the pipeline generation process saves global variables called issue, the pipeline generation process saves global variables called
@@ -718,32 +564,13 @@ simply contains the human-readable value produced by ``spack -V`` at pipeline
generation time, the ``SPACK_CHECKOUT_VERSION`` variable can be used in a generation time, the ``SPACK_CHECKOUT_VERSION`` variable can be used in a
``git checkout`` command to make sure all child jobs checkout the same version ``git checkout`` command to make sure all child jobs checkout the same version
of spack used to generate the pipeline. To take advantage of this, you could of spack used to generate the pipeline. To take advantage of this, you could
simply replace ``git checkout ${SPACK_REF}`` in the example ``spack.yaml`` simply replace ``git checkout ${SPACK_REF}`` in the example ``cloneSpack.sh``
above with ``git checkout ${SPACK_CHECKOUT_VERSION}``. script above with ``git checkout ${SPACK_CHECKOUT_VERSION}``.
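Concretely, that one-line change would be:

.. code-block:: bash

   # Pin every child job to the exact spack commit that generated the
   # pipeline, rather than to a moving branch.
   git checkout ${SPACK_CHECKOUT_VERSION}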
On the other hand, if you're pointing to a spack repository and branch under your On the other hand, if you're pointing to a spack repository and branch under your
control, there may be no benefit in using the captured ``SPACK_CHECKOUT_VERSION``, control, there may be no benefit in using the captured ``SPACK_CHECKOUT_VERSION``,
and you can instead just clone using the variables you define (``SPACK_REPO`` and you can instead just clone using the project CI variables you set (in the
and ``SPACK_REF`` in the example above). earlier example these were ``SPACK_REPO`` and ``SPACK_REF``).
.. _custom_workflow:
---------------
Custom Workflow
---------------
There are many ways to take advantage of spack CI pipelines to achieve custom
workflows for building packages or other resources. One example of a custom
pipelines workflow is the spack tutorial container
`repo <https://github.com/spack/spack-tutorial-container>`_. This project uses
GitHub (for source control), GitLab (for automated spack ci pipelines), and
DockerHub automated builds to build Docker images (complete with a fully populated
binary mirror) used by instructors and participants of a spack tutorial.
Take a look at the repo to see how it is accomplished using spack CI pipelines,
and see the following markdown files at the root of the repository for
descriptions and documentation describing the workflow: ``DESCRIPTION.md``,
``DOCKERHUB_SETUP.md``, ``GITLAB_SETUP.md``, and ``UPDATING.md``.
.. _ci_environment_variables: .. _ci_environment_variables:
@@ -760,33 +587,28 @@ environment variables used by the pipeline infrastructure are described here.
AWS_ACCESS_KEY_ID AWS_ACCESS_KEY_ID
^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^
Optional. Only needed when binary mirror is an S3 bucket. Needed when binary mirror is an S3 bucket.
^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^
AWS_SECRET_ACCESS_KEY AWS_SECRET_ACCESS_KEY
^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^
Optional. Only needed when binary mirror is an S3 bucket. Needed when binary mirror is an S3 bucket.
^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^
S3_ENDPOINT_URL S3_ENDPOINT_URL
^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^
Optional. Only needed when binary mirror is an S3 bucket that is *not* on AWS. Needed when binary mirror is an S3 bucket that is *not* on AWS.
^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^
CDASH_AUTH_TOKEN CDASH_AUTH_TOKEN
^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^
Optional. Only needed in order to report build groups to CDash. Needed in order to report build groups to CDash.
^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^
SPACK_SIGNING_KEY SPACK_SIGNING_KEY
^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^
Optional. Only needed if you want ``spack ci rebuild`` to trust the key you Needed to sign/verify binary packages from the remote binary mirror.
store in this variable, in which case, it will subsequently be used to sign and
verify binary packages (when installing or creating buildcaches). You could
also have already trusted a key spack knows about, or if no key is present anywhere,
spack will install specs using ``--no-check-signature`` and create buildcaches
using ``-u`` (for unsigned binaries).
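In the unsigned fallback case, the resulting behavior is roughly equivalent
to the following (the ``zlib`` spec and mirror path are illustrative
placeholders, and the ``-d`` form of specifying a mirror directory is an
assumption here):

.. code-block:: bash

   # Install without verifying signatures, then push an unsigned binary.
   spack install --no-check-signature zlib
   spack buildcache create -u -d /path/to/mirror zlib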
View File
@@ -335,7 +335,7 @@ merged YAML from all configuration files, use ``spack config get repos``:
- ~/myrepo - ~/myrepo
- $spack/var/spack/repos/builtin - $spack/var/spack/repos/builtin
Note that, unlike ``spack repo list``, this does not include the Note that, unlike ``spack repo list``, this does not include the
namespace, which is read from each repo's ``repo.yaml``. namespace, which is read from each repo's ``repo.yaml``.
^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^
View File
@@ -1,10 +1,7 @@
# These dependencies should be installed using pip in order # These dependencies should be installed using pip in order
# to build the documentation. # to build the documentation.
sphinx>=3.4,!=4.1.2 sphinx
sphinxcontrib-programoutput sphinxcontrib-programoutput
sphinx-rtd-theme sphinx-rtd-theme
python-levenshtein python-levenshtein
# Restrict to docutils <0.17 to workaround a list rendering issue in sphinx.
# https://stackoverflow.com/questions/67542699
docutils <0.17
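# For example, from the root of a spack checkout (path assumed):
#   pip install -r lib/spack/docs/requirements.txt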
View File
@@ -8,21 +8,12 @@
# these commands in this directory to install Sphinx and its plugins, # these commands in this directory to install Sphinx and its plugins,
# then build the docs: # then build the docs:
# #
# spack env activate .
# spack install # spack install
# spack env activate .
# make # make
# #
spack: spack:
specs: specs:
# Sphinx - py-sphinx
- "py-sphinx@3.4:4.1.1,4.1.3:"
- py-sphinxcontrib-programoutput - py-sphinxcontrib-programoutput
- py-docutils@:0.16
- py-sphinx-rtd-theme - py-sphinx-rtd-theme
# VCS
- git
- mercurial
- subversion
# Plotting
- graphviz
concretization: together
View File
@@ -1,18 +0,0 @@
Name, Supported Versions, Notes, Requirement Reason
Python, 2.6/2.7/3.5-3.9, , Interpreter for Spack
C/C++ Compilers, , , Building software
make, , , Build software
patch, , , Build software
bash, , , Compiler wrappers
tar, , , Extract/create archives
gzip, , , Compress/Decompress archives
unzip, , , Compress/Decompress archives
bzip, , , Compress/Decompress archives
xz, , , Compress/Decompress archives
zstd, , Optional, Compress/Decompress archives
file, , , Create/Use Buildcaches
gnupg2, , , Sign/Verify Buildcaches
git, , , Manage Software Repositories
svn, , Optional, Manage Software Repositories
hg, , Optional, Manage Software Repositories
Python header files, , Optional (e.g. ``python3-dev`` on Debian), Bootstrapping from sources
View File
@@ -387,7 +387,7 @@ some nice features:
Spack-built compiler can be given to an IDE without requiring the Spack-built compiler can be given to an IDE without requiring the
IDE to load that compiler's module. IDE to load that compiler's module.
Unfortunately, Spack's RPATH support does not work in every case. For example: Unfortunately, Spack's RPATH support does not work in all case. For example:
#. Software comes in many forms --- not just compiled ELF binaries, #. Software comes in many forms --- not just compiled ELF binaries,
but also as interpreted code in Python, R, JVM bytecode, etc. but also as interpreted code in Python, R, JVM bytecode, etc.
@@ -543,8 +543,7 @@ specified from the command line using the ``--projection-file`` option
to the ``spack view`` command. to the ``spack view`` command.
The projections configuration file is a mapping of partial specs to The projections configuration file is a mapping of partial specs to
spec format strings, defined by the :meth:`~spack.spec.Spec.format` spec format strings, as shown in the example below.
function, as shown in the example below.
.. code-block:: yaml .. code-block:: yaml
563 lib/spack/env/cc vendored
View File
@@ -1,5 +1,4 @@
#!/bin/sh -f #!/bin/bash
# shellcheck disable=SC2034 # evals in this script fool shellcheck
# #
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other # Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details. # Spack Project Developers. See the top-level COPYRIGHT file for details.
@@ -21,22 +20,10 @@
# -Wl,-rpath arguments for dependency /lib directories. # -Wl,-rpath arguments for dependency /lib directories.
# #
# Reset IFS to the default: whitespace-separated lists. When we use
# other separators, we set and reset it.
unset IFS
# Separator for lists whose names end with `_list`.
# We pick the alarm bell character, which is highly unlikely to
# conflict with anything. This is a literal bell character (which
# we have to use since POSIX sh does not convert escape sequences
# like '\a' outside of the format argument of `printf`).
# NOTE: Depending on your editor this may look empty, but it is not.
readonly lsep=''
# This is an array of environment variables that need to be set before # This is an array of environment variables that need to be set before
# the script runs. They are set by routines in spack.build_environment # the script runs. They are set by routines in spack.build_environment
# as part of the package installation process. # as part of the package installation process.
readonly params="\ parameters=(
SPACK_ENV_PATH SPACK_ENV_PATH
SPACK_DEBUG_LOG_DIR SPACK_DEBUG_LOG_DIR
SPACK_DEBUG_LOG_ID SPACK_DEBUG_LOG_ID
@@ -45,17 +32,13 @@ SPACK_CC_RPATH_ARG
SPACK_CXX_RPATH_ARG SPACK_CXX_RPATH_ARG
SPACK_F77_RPATH_ARG SPACK_F77_RPATH_ARG
SPACK_FC_RPATH_ARG SPACK_FC_RPATH_ARG
SPACK_TARGET_ARGS
SPACK_DTAGS_TO_ADD
SPACK_DTAGS_TO_STRIP
SPACK_LINKER_ARG SPACK_LINKER_ARG
SPACK_SHORT_SPEC SPACK_SHORT_SPEC
SPACK_SYSTEM_DIRS" SPACK_SYSTEM_DIRS
)
# Optional parameters that aren't required to be set
# Boolean (true/false/custom) if we want to add debug flags
# SPACK_ADD_DEBUG_FLAGS
# If a custom flag is requested, it will be defined
# SPACK_DEBUG_FLAGS
# The compiler input variables are checked for sanity later: # The compiler input variables are checked for sanity later:
# SPACK_CC, SPACK_CXX, SPACK_F77, SPACK_FC # SPACK_CC, SPACK_CXX, SPACK_F77, SPACK_FC
@@ -67,159 +50,43 @@ SPACK_SYSTEM_DIRS"
# Test command is used to unit test the compiler script. # Test command is used to unit test the compiler script.
# SPACK_TEST_COMMAND # SPACK_TEST_COMMAND
# die MESSAGE # die()
# Print a message and exit with error code 1. # Prints a message and exits with error 1.
die() { function die {
echo "[spack cc] ERROR: $*" echo "$@"
exit 1 exit 1
} }
# empty VARNAME # read input parameters into proper bash arrays.
# Return whether the variable VARNAME is unset or set to the empty string. # SYSTEM_DIRS is delimited by :
empty() { IFS=':' read -ra SPACK_SYSTEM_DIRS <<< "${SPACK_SYSTEM_DIRS}"
eval "test -z \"\${$1}\""
}
# setsep LISTNAME # SPACK_<LANG>FLAGS and SPACK_LDLIBS are split by ' '
# Set the global variable 'sep' to the separator for a list with name LISTNAME. IFS=' ' read -ra SPACK_FFLAGS <<< "$SPACK_FFLAGS"
# There are three types of lists: IFS=' ' read -ra SPACK_CPPFLAGS <<< "$SPACK_CPPFLAGS"
# 1. regular lists end with _list and are separated by $lsep IFS=' ' read -ra SPACK_CFLAGS <<< "$SPACK_CFLAGS"
# 2. directory lists end with _dirs/_DIRS/PATH(S) and are separated by ':' IFS=' ' read -ra SPACK_CXXFLAGS <<< "$SPACK_CXXFLAGS"
# 3. any other list is assumed to be separated by spaces: " " IFS=' ' read -ra SPACK_LDFLAGS <<< "$SPACK_LDFLAGS"
setsep() { IFS=' ' read -ra SPACK_LDLIBS <<< "$SPACK_LDLIBS"
case "$1" in
*_dirs|*_DIRS|*PATH|*PATHS)
sep=':'
;;
*_list)
sep="$lsep"
;;
*)
sep=" "
;;
esac
}
# prepend LISTNAME ELEMENT [SEP]
#
# Prepend ELEMENT to the list stored in the variable LISTNAME,
# assuming the list is separated by SEP.
# Handles empty lists and single-element lists.
prepend() {
varname="$1"
elt="$2"
if empty "$varname"; then
eval "$varname=\"\${elt}\""
else
# Get the appropriate separator for the list we're appending to.
setsep "$varname"
eval "$varname=\"\${elt}${sep}\${$varname}\""
fi
}
# append LISTNAME ELEMENT [SEP]
#
# Append ELEMENT to the list stored in the variable LISTNAME,
# assuming the list is separated by SEP.
# Handles empty lists and single-element lists.
append() {
varname="$1"
elt="$2"
if empty "$varname"; then
eval "$varname=\"\${elt}\""
else
# Get the appropriate separator for the list we're appending to.
setsep "$varname"
eval "$varname=\"\${$varname}${sep}\${elt}\""
fi
}
# extend LISTNAME1 LISTNAME2 [PREFIX]
#
# Append the elements stored in the variable LISTNAME2
# to the list stored in LISTNAME1.
# If PREFIX is provided, prepend it to each element.
extend() {
# Figure out the appropriate IFS for the list we're reading.
setsep "$2"
if [ "$sep" != " " ]; then
IFS="$sep"
fi
eval "for elt in \${$2}; do append $1 \"$3\${elt}\"; done"
unset IFS
}
# preextend LISTNAME1 LISTNAME2 [PREFIX]
#
# Prepend the elements stored in the list at LISTNAME2
# to the list at LISTNAME1, preserving order.
# If PREFIX is provided, prepend it to each element.
preextend() {
# Figure out the appropriate IFS for the list we're reading.
setsep "$2"
if [ "$sep" != " " ]; then
IFS="$sep"
fi
# first, reverse the list to prepend
_reversed_list=""
eval "for elt in \${$2}; do prepend _reversed_list \"$3\${elt}\"; done"
# prepend reversed list to preextend in order
IFS="${lsep}"
for elt in $_reversed_list; do prepend "$1" "$3${elt}"; done
unset IFS
}
# system_dir PATH
# test whether a path is a system directory # test whether a path is a system directory
system_dir() { function system_dir {
IFS=':' # SPACK_SYSTEM_DIRS is colon-separated
path="$1" path="$1"
for sd in $SPACK_SYSTEM_DIRS; do for sd in "${SPACK_SYSTEM_DIRS[@]}"; do
if [ "${path}" = "${sd}" ] || [ "${path}" = "${sd}/" ]; then if [ "${path}" == "${sd}" ] || [ "${path}" == "${sd}/" ]; then
# success if path starts with a system prefix # success if path starts with a system prefix
unset IFS
return 0 return 0
fi fi
done done
unset IFS
return 1 # fail if path starts no system prefix return 1 # fail if path starts no system prefix
} }
# Fail with a clear message if the input contains any bell characters. for param in "${parameters[@]}"; do
if eval "[ \"\${*#*${lsep}}\" != \"\$*\" ]"; then if [[ -z ${!param+x} ]]; then
die "Compiler command line contains our separator ('${lsep}'). Cannot parse."
fi
# ensure required variables are set
for param in $params; do
if eval "test -z \"\${${param}:-}\""; then
die "Spack compiler must be run from Spack! Input '$param' is missing." die "Spack compiler must be run from Spack! Input '$param' is missing."
fi fi
done done
# Check if optional parameters are defined
# If we aren't asking for debug flags, don't add them
if [ -z "${SPACK_ADD_DEBUG_FLAGS:-}" ]; then
SPACK_ADD_DEBUG_FLAGS="false"
fi
# SPACK_ADD_DEBUG_FLAGS must be true/false/custom
is_valid="false"
for param in "true" "false" "custom"; do
if [ "$param" = "$SPACK_ADD_DEBUG_FLAGS" ]; then
is_valid="true"
fi
done
# Exit with error if we are given an incorrect value
if [ "$is_valid" = "false" ]; then
die "SPACK_ADD_DEBUG_FLAGS, if defined, must be one of 'true', 'false', or 'custom'."
fi
# Figure out the type of compiler, the language, and the mode so that # Figure out the type of compiler, the language, and the mode so that
# the compiler script knows what to do. # the compiler script knows what to do.
# #
@@ -234,42 +101,37 @@ fi
# ld link # ld link
# ccld compile & link # ccld compile & link
command="${0##*/}" command=$(basename "$0")
comp="CC" comp="CC"
case "$command" in case "$command" in
cpp) cpp)
mode=cpp mode=cpp
debug_flags="-g"
;; ;;
cc|c89|c99|gcc|clang|armclang|icc|icx|pgcc|nvc|xlc|xlc_r|fcc) cc|c89|c99|gcc|clang|armclang|icc|icx|pgcc|nvc|xlc|xlc_r|fcc)
command="$SPACK_CC" command="$SPACK_CC"
language="C" language="C"
comp="CC" comp="CC"
lang_flags=C lang_flags=C
debug_flags="-g"
;; ;;
c++|CC|g++|clang++|armclang++|icpc|icpx|pgc++|nvc++|xlc++|xlc++_r|FCC) c++|CC|g++|clang++|armclang++|icpc|icpx|pgc++|nvc++|xlc++|xlc++_r|FCC)
command="$SPACK_CXX" command="$SPACK_CXX"
language="C++" language="C++"
comp="CXX" comp="CXX"
lang_flags=CXX lang_flags=CXX
debug_flags="-g"
;; ;;
ftn|f90|fc|f95|gfortran|flang|armflang|ifort|ifx|pgfortran|nvfortran|xlf90|xlf90_r|nagfor|frt) ftn|f90|fc|f95|gfortran|flang|armflang|ifort|ifx|pgfortran|nvfortran|xlf90|xlf90_r|nagfor|frt)
command="$SPACK_FC" command="$SPACK_FC"
language="Fortran 90" language="Fortran 90"
comp="FC" comp="FC"
lang_flags=F lang_flags=F
debug_flags="-g"
;; ;;
f77|xlf|xlf_r|pgf77) f77|xlf|xlf_r|pgf77)
command="$SPACK_F77" command="$SPACK_F77"
language="Fortran 77" language="Fortran 77"
comp="F77" comp="F77"
lang_flags=F lang_flags=F
debug_flags="-g"
;; ;;
ld|ld.gold|ld.lld) ld)
mode=ld mode=ld
;; ;;
*) *)
@@ -280,7 +142,7 @@ esac
# If any of the arguments below are present, then the mode is vcheck. # If any of the arguments below are present, then the mode is vcheck.
# In vcheck mode, nothing is added in terms of extra search paths or # In vcheck mode, nothing is added in terms of extra search paths or
# libraries. # libraries.
if [ -z "$mode" ] || [ "$mode" = ld ]; then if [[ -z $mode ]] || [[ $mode == ld ]]; then
for arg in "$@"; do for arg in "$@"; do
case $arg in case $arg in
-v|-V|--version|-dumpversion) -v|-V|--version|-dumpversion)
@@ -292,16 +154,16 @@ if [ -z "$mode" ] || [ "$mode" = ld ]; then
fi fi
# Finish setting up the mode. # Finish setting up the mode.
if [ -z "$mode" ]; then if [[ -z $mode ]]; then
mode=ccld mode=ccld
for arg in "$@"; do for arg in "$@"; do
if [ "$arg" = "-E" ]; then if [[ $arg == -E ]]; then
mode=cpp mode=cpp
break break
elif [ "$arg" = "-S" ]; then elif [[ $arg == -S ]]; then
mode=as mode=as
break break
elif [ "$arg" = "-c" ]; then elif [[ $arg == -c ]]; then
mode=cc mode=cc
break break
fi fi
@@ -328,46 +190,42 @@ dtags_to_strip="${SPACK_DTAGS_TO_STRIP}"
linker_arg="${SPACK_LINKER_ARG}" linker_arg="${SPACK_LINKER_ARG}"
# Set up rpath variable according to language. # Set up rpath variable according to language.
rpath="ERROR: RPATH ARG WAS NOT SET" eval rpath=\$SPACK_${comp}_RPATH_ARG
eval "rpath=\${SPACK_${comp}_RPATH_ARG:?${rpath}}"
# Dump the mode and exit if the command is dump-mode. # Dump the mode and exit if the command is dump-mode.
if [ "$SPACK_TEST_COMMAND" = "dump-mode" ]; then if [[ $SPACK_TEST_COMMAND == dump-mode ]]; then
echo "$mode" echo "$mode"
exit exit
fi fi
# If, say, SPACK_CC is set but SPACK_FC is not, we want to know. Compilers do not # Check that at least one of the real commands was actually selected,
# *have* to set up Fortran executables, so we need to tell the user when a build is # otherwise we don't know what to execute.
# about to attempt to use them unsuccessfully. if [[ -z $command ]]; then
if [ -z "$command" ]; then die "ERROR: Compiler '$SPACK_COMPILER_SPEC' does not support compiling $language programs."
die "Compiler '$SPACK_COMPILER_SPEC' does not have a $language compiler configured."
fi fi
# #
# Filter '.' and Spack environment directories out of PATH so that # Filter '.' and Spack environment directories out of PATH so that
# this script doesn't just call itself # this script doesn't just call itself
# #
new_dirs="" IFS=':' read -ra env_path <<< "$PATH"
IFS=':' IFS=':' read -ra spack_env_dirs <<< "$SPACK_ENV_PATH"
for dir in $PATH; do spack_env_dirs+=("" ".")
export PATH=""
for dir in "${env_path[@]}"; do
addpath=true addpath=true
for spack_env_dir in $SPACK_ENV_PATH; do for env_dir in "${spack_env_dirs[@]}"; do
case "${dir%%/}" in if [[ "$dir" == "$env_dir" ]]; then
"$spack_env_dir"|'.'|'')
addpath=false addpath=false
break break
;;
esac
done
if [ $addpath = true ]; then
append new_dirs "$dir"
fi fi
done done
unset IFS if $addpath; then
export PATH="$new_dirs" export PATH="${PATH:+$PATH:}$dir"
fi
done
if [ "$mode" = vcheck ]; then if [[ $mode == vcheck ]]; then
exec "${command}" "$@" exec "${command}" "$@"
fi fi
@@ -375,21 +233,17 @@ fi
# It doesn't work with -rpath. # It doesn't work with -rpath.
# This variable controls whether they are added. # This variable controls whether they are added.
add_rpaths=true add_rpaths=true
if [ "$mode" = ld ] || [ "$mode" = ccld ]; then if [[ ($mode == ld || $mode == ccld) && "$SPACK_SHORT_SPEC" =~ "darwin" ]];
if [ "${SPACK_SHORT_SPEC#*darwin}" != "${SPACK_SHORT_SPEC}" ]; then then
for arg in "$@"; do for arg in "$@"; do
if [ "$arg" = "-r" ]; then if [[ ($arg == -r && $mode == ld) ||
if [ "$mode" = ld ] || [ "$mode" = ccld ]; then ($arg == -r && $mode == ccld) ||
add_rpaths=false ($arg == -Wl,-r && $mode == ccld) ]]; then
break
fi
elif [ "$arg" = "-Wl,-r" ] && [ "$mode" = ccld ]; then
add_rpaths=false add_rpaths=false
break break
fi fi
done done
fi fi
fi
# Save original command for debug logging # Save original command for debug logging
input_command="$*" input_command="$*"
@@ -411,147 +265,114 @@ input_command="$*"
# The libs variable is initialized here for completeness, and it is also # The libs variable is initialized here for completeness, and it is also
# used later to inject flags supplied via `ldlibs` on the command # used later to inject flags supplied via `ldlibs` on the command
# line. These come into the wrappers via SPACK_LDLIBS. # line. These come into the wrappers via SPACK_LDLIBS.
#
includes=()
libdirs=()
rpaths=()
system_includes=()
system_libdirs=()
system_rpaths=()
libs=()
other_args=()
isystem_system_includes=()
isystem_includes=()
# The loop below breaks up the command line into these lists of components. while [ -n "$1" ]; do
# The lists are all bell-separated to be as flexible as possible, as their
# contents may come from the command line, from ' '-separated lists,
# ':'-separated lists, etc.
include_dirs_list=""
lib_dirs_list=""
rpath_dirs_list=""
system_include_dirs_list=""
system_lib_dirs_list=""
system_rpath_dirs_list=""
isystem_system_include_dirs_list=""
isystem_include_dirs_list=""
libs_list=""
other_args_list=""
while [ $# -ne 0 ]; do
# an RPATH to be added after the case statement. # an RPATH to be added after the case statement.
rp="" rp=""
# Multiple consecutive spaces in the command line can
# result in blank arguments
if [ -z "$1" ]; then
shift
continue
fi
case "$1" in case "$1" in
-isystem*) -isystem*)
arg="${1#-isystem}" arg="${1#-isystem}"
isystem_was_used=true isystem_was_used=true
if [ -z "$arg" ]; then shift; arg="$1"; fi if [ -z "$arg" ]; then shift; arg="$1"; fi
if system_dir "$arg"; then if system_dir "$arg"; then
append isystem_system_include_dirs_list "$arg" isystem_system_includes+=("$arg")
else else
append isystem_include_dirs_list "$arg" isystem_includes+=("$arg")
fi fi
;; ;;
-I*) -I*)
arg="${1#-I}" arg="${1#-I}"
if [ -z "$arg" ]; then shift; arg="$1"; fi if [ -z "$arg" ]; then shift; arg="$1"; fi
if system_dir "$arg"; then if system_dir "$arg"; then
append system_include_dirs_list "$arg" system_includes+=("$arg")
else else
append include_dirs_list "$arg" includes+=("$arg")
fi fi
;; ;;
-L*) -L*)
arg="${1#-L}" arg="${1#-L}"
if [ -z "$arg" ]; then shift; arg="$1"; fi if [ -z "$arg" ]; then shift; arg="$1"; fi
if system_dir "$arg"; then if system_dir "$arg"; then
append system_lib_dirs_list "$arg" system_libdirs+=("$arg")
else else
append lib_dirs_list "$arg" libdirs+=("$arg")
fi fi
;; ;;
-l*) -l*)
# -loopopt=0 is generated erroneously in autoconf <= 2.69,
# and passed by ifx to the linker, which confuses it with a
# library. Filter it out.
# TODO: generalize filtering of args with an env var, so that
# TODO: we do not have to special case this here.
if { [ "$mode" = "ccld" ] || [ $mode = "ld" ]; } \
&& [ "$1" != "${1#-loopopt}" ]; then
shift
continue
fi
arg="${1#-l}" arg="${1#-l}"
if [ -z "$arg" ]; then shift; arg="$1"; fi if [ -z "$arg" ]; then shift; arg="$1"; fi
append other_args_list "-l$arg" other_args+=("-l$arg")
;; ;;
-Wl,*) -Wl,*)
arg="${1#-Wl,}" arg="${1#-Wl,}"
if [ -z "$arg" ]; then shift; arg="$1"; fi if [ -z "$arg" ]; then shift; arg="$1"; fi
case "$arg" in if [[ "$arg" = -rpath=* ]]; then
-rpath=*) rp="${arg#-rpath=}" ;; rp="${arg#-rpath=}"
--rpath=*) rp="${arg#--rpath=}" ;; elif [[ "$arg" = --rpath=* ]]; then
-rpath,*) rp="${arg#-rpath,}" ;; rp="${arg#--rpath=}"
--rpath,*) rp="${arg#--rpath,}" ;; elif [[ "$arg" = -rpath,* ]]; then
-rpath|--rpath) rp="${arg#-rpath,}"
elif [[ "$arg" = --rpath,* ]]; then
rp="${arg#--rpath,}"
elif [[ "$arg" =~ ^-?-rpath$ ]]; then
shift; arg="$1" shift; arg="$1"
case "$arg" in if [[ "$arg" != -Wl,* ]]; then
-Wl,*)
rp="${arg#-Wl,}"
;;
*)
die "-Wl,-rpath was not followed by -Wl,*" die "-Wl,-rpath was not followed by -Wl,*"
;; fi
esac rp="${arg#-Wl,}"
;; elif [[ "$arg" = "$dtags_to_strip" ]] ; then
"$dtags_to_strip")
: # We want to remove explicitly this flag : # We want to remove explicitly this flag
;; else
*) other_args+=("-Wl,$arg")
append other_args_list "-Wl,$arg" fi
;;
esac
;; ;;
-Xlinker,*) -Xlinker,*)
arg="${1#-Xlinker,}" arg="${1#-Xlinker,}"
if [ -z "$arg" ]; then shift; arg="$1"; fi if [ -z "$arg" ]; then shift; arg="$1"; fi
if [[ "$arg" = -rpath=* ]]; then
case "$arg" in rp="${arg#-rpath=}"
-rpath=*) rp="${arg#-rpath=}" ;; elif [[ "$arg" = --rpath=* ]]; then
--rpath=*) rp="${arg#--rpath=}" ;; rp="${arg#--rpath=}"
-rpath|--rpath) elif [[ "$arg" = -rpath ]] || [[ "$arg" = --rpath ]]; then
shift; arg="$1" shift; arg="$1"
case "$arg" in if [[ "$arg" != -Xlinker,* ]]; then
-Xlinker,*)
rp="${arg#-Xlinker,}"
;;
*)
die "-Xlinker,-rpath was not followed by -Xlinker,*" die "-Xlinker,-rpath was not followed by -Xlinker,*"
;; fi
esac rp="${arg#-Xlinker,}"
;; else
*) other_args+=("-Xlinker,$arg")
append other_args_list "-Xlinker,$arg" fi
;;
esac
;; ;;
-Xlinker) -Xlinker)
if [ "$2" = "-rpath" ]; then if [[ "$2" == "-rpath" ]]; then
if [ "$3" != "-Xlinker" ]; then if [[ "$3" != "-Xlinker" ]]; then
die "-Xlinker,-rpath was not followed by -Xlinker,*" die "-Xlinker,-rpath was not followed by -Xlinker,*"
fi fi
shift 3; shift 3;
rp="$1" rp="$1"
elif [ "$2" = "$dtags_to_strip" ]; then elif [[ "$2" = "$dtags_to_strip" ]] ; then
shift # We want to remove explicitly this flag shift # We want to remove explicitly this flag
else else
append other_args_list "$1" other_args+=("$1")
fi fi
;; ;;
*) *)
if [ "$1" = "$dtags_to_strip" ]; then if [[ "$1" = "$dtags_to_strip" ]] ; then
: # We want to remove explicitly this flag : # We want to remove explicitly this flag
else else
append other_args_list "$1" other_args+=("$1")
fi fi
;; ;;
esac esac
@@ -559,9 +380,9 @@ while [ $# -ne 0 ]; do
# test rpaths against system directories in one place. # test rpaths against system directories in one place.
if [ -n "$rp" ]; then if [ -n "$rp" ]; then
if system_dir "$rp"; then if system_dir "$rp"; then
append system_rpath_dirs_list "$rp" system_rpaths+=("$rp")
else else
append rpath_dirs_list "$rp" rpaths+=("$rp")
fi fi
fi fi
shift shift
@@ -574,24 +395,14 @@ done
# See the gmake manual on implicit rules for details: # See the gmake manual on implicit rules for details:
# https://www.gnu.org/software/make/manual/html_node/Implicit-Variables.html # https://www.gnu.org/software/make/manual/html_node/Implicit-Variables.html
# #
flags_list="" flags=()
# Add debug flags
if [ "${SPACK_ADD_DEBUG_FLAGS}" = "true" ]; then
extend flags_list debug_flags
# If a custom flag is requested, derive from environment
elif [ "$SPACK_ADD_DEBUG_FLAGS" = "custom" ]; then
extend flags_list SPACK_DEBUG_FLAGS
fi
# Fortran flags come before CPPFLAGS # Fortran flags come before CPPFLAGS
case "$mode" in case "$mode" in
cc|ccld) cc|ccld)
case $lang_flags in case $lang_flags in
F) F)
extend flags_list SPACK_FFLAGS flags=("${flags[@]}" "${SPACK_FFLAGS[@]}") ;;
;;
esac esac
;; ;;
esac esac
@@ -599,8 +410,7 @@ esac
# C preprocessor flags come before any C/CXX flags # C preprocessor flags come before any C/CXX flags
case "$mode" in case "$mode" in
cpp|as|cc|ccld) cpp|as|cc|ccld)
extend flags_list SPACK_CPPFLAGS flags=("${flags[@]}" "${SPACK_CPPFLAGS[@]}") ;;
;;
esac esac
@@ -609,67 +419,67 @@ case "$mode" in
cc|ccld) cc|ccld)
case $lang_flags in case $lang_flags in
C) C)
extend flags_list SPACK_CFLAGS flags=("${flags[@]}" "${SPACK_CFLAGS[@]}") ;;
;;
CXX) CXX)
extend flags_list SPACK_CXXFLAGS flags=("${flags[@]}" "${SPACK_CXXFLAGS[@]}") ;;
;;
esac esac
flags=(${SPACK_TARGET_ARGS[@]} "${flags[@]}")
# prepend target args
preextend flags_list SPACK_TARGET_ARGS
;; ;;
esac esac
# Linker flags # Linker flags
case "$mode" in case "$mode" in
ld|ccld) ld|ccld)
extend flags_list SPACK_LDFLAGS flags=("${flags[@]}" "${SPACK_LDFLAGS[@]}") ;;
;;
esac esac
# On macOS insert headerpad_max_install_names linker flag # On macOS insert headerpad_max_install_names linker flag
if [ "$mode" = ld ] || [ "$mode" = ccld ]; then if [[ ($mode == ld || $mode == ccld) && "$SPACK_SHORT_SPEC" =~ "darwin" ]];
if [ "${SPACK_SHORT_SPEC#*darwin}" != "${SPACK_SHORT_SPEC}" ]; then then
case "$mode" in case "$mode" in
ld) ld)
append flags_list "-headerpad_max_install_names" ;; flags=("${flags[@]}" -headerpad_max_install_names) ;;
ccld) ccld)
append flags_list "-Wl,-headerpad_max_install_names" ;; flags=("${flags[@]}" "-Wl,-headerpad_max_install_names") ;;
esac esac
fi fi
fi
if [ "$mode" = ccld ] || [ "$mode" = ld ]; then IFS=':' read -ra rpath_dirs <<< "$SPACK_RPATH_DIRS"
if [ "$add_rpaths" != "false" ]; then if [[ $mode == ccld || $mode == ld ]]; then
if [[ "$add_rpaths" != "false" ]] ; then
# Append RPATH directories. Note that in the case of the # Append RPATH directories. Note that in the case of the
# top-level package these directories may not exist yet. For dependencies # top-level package these directories may not exist yet. For dependencies
# it is assumed that paths have already been confirmed. # it is assumed that paths have already been confirmed.
extend rpath_dirs_list SPACK_RPATH_DIRS rpaths=("${rpaths[@]}" "${rpath_dirs[@]}")
fi
fi fi
if [ "$mode" = ccld ] || [ "$mode" = ld ]; then fi
extend lib_dirs_list SPACK_LINK_DIRS
IFS=':' read -ra link_dirs <<< "$SPACK_LINK_DIRS"
if [[ $mode == ccld || $mode == ld ]]; then
libdirs=("${libdirs[@]}" "${link_dirs[@]}")
fi fi
# add RPATHs if we're in any linking mode # add RPATHs if we're in any linking mode
case "$mode" in case "$mode" in
ld|ccld) ld|ccld)
# Set extra RPATHs # Set extra RPATHs
extend lib_dirs_list SPACK_COMPILER_EXTRA_RPATHS IFS=':' read -ra extra_rpaths <<< "$SPACK_COMPILER_EXTRA_RPATHS"
if [ "$add_rpaths" != "false" ]; then libdirs+=("${extra_rpaths[@]}")
extend rpath_dirs_list SPACK_COMPILER_EXTRA_RPATHS if [[ "$add_rpaths" != "false" ]] ; then
rpaths+=("${extra_rpaths[@]}")
fi fi
# Set implicit RPATHs # Set implicit RPATHs
if [ "$add_rpaths" != "false" ]; then IFS=':' read -ra implicit_rpaths <<< "$SPACK_COMPILER_IMPLICIT_RPATHS"
extend rpath_dirs_list SPACK_COMPILER_IMPLICIT_RPATHS if [[ "$add_rpaths" != "false" ]] ; then
rpaths+=("${implicit_rpaths[@]}")
fi fi
# Add SPACK_LDLIBS to args # Add SPACK_LDLIBS to args
for lib in $SPACK_LDLIBS; do for lib in "${SPACK_LDLIBS[@]}"; do
append libs_list "${lib#-l}" libs+=("${lib#-l}")
done done
;; ;;
esac esac
@@ -677,62 +487,63 @@ esac
# #
# Finally, reassemble the command line. # Finally, reassemble the command line.
# #
args_list="$flags_list"
# Includes and system includes first
args=()
# flags assembled earlier
args+=("${flags[@]}")
# Insert include directories just prior to any system include directories # Insert include directories just prior to any system include directories
# NOTE: adding ${lsep} to the prefix here turns every added element into two
extend args_list include_dirs_list "-I"
extend args_list isystem_include_dirs_list "-isystem${lsep}"
case "$mode" in for dir in "${includes[@]}"; do args+=("-I$dir"); done
cpp|cc|as|ccld) for dir in "${isystem_includes[@]}"; do args+=("-isystem" "$dir"); done
if [ "$isystem_was_used" = "true" ]; then
extend args_list SPACK_INCLUDE_DIRS "-isystem${lsep}" IFS=':' read -ra spack_include_dirs <<< "$SPACK_INCLUDE_DIRS"
if [[ $mode == cpp || $mode == cc || $mode == as || $mode == ccld ]]; then
if [[ "$isystem_was_used" == "true" ]] ; then
for dir in "${spack_include_dirs[@]}"; do args+=("-isystem" "$dir"); done
else else
extend args_list SPACK_INCLUDE_DIRS "-I" for dir in "${spack_include_dirs[@]}"; do args+=("-I$dir"); done
fi
fi fi
;;
esac
extend args_list system_include_dirs_list -I for dir in "${system_includes[@]}"; do args+=("-I$dir"); done
extend args_list isystem_system_include_dirs_list "-isystem${lsep}" for dir in "${isystem_system_includes[@]}"; do args+=("-isystem" "$dir"); done
# Library search paths # Library search paths
extend args_list lib_dirs_list "-L" for dir in "${libdirs[@]}"; do args+=("-L$dir"); done
extend args_list system_lib_dirs_list "-L" for dir in "${system_libdirs[@]}"; do args+=("-L$dir"); done
# RPATHs arguments # RPATHs arguments
case "$mode" in case "$mode" in
ccld) ccld)
if [ -n "$dtags_to_add" ] ; then if [ -n "$dtags_to_add" ] ; then args+=("$linker_arg$dtags_to_add") ; fi
append args_list "$linker_arg$dtags_to_add" for dir in "${rpaths[@]}"; do args+=("$rpath$dir"); done
fi for dir in "${system_rpaths[@]}"; do args+=("$rpath$dir"); done
extend args_list rpath_dirs_list "$rpath"
extend args_list system_rpath_dirs_list "$rpath"
;; ;;
ld) ld)
if [ -n "$dtags_to_add" ] ; then if [ -n "$dtags_to_add" ] ; then args+=("$dtags_to_add") ; fi
append args_list "$dtags_to_add" for dir in "${rpaths[@]}"; do args+=("-rpath" "$dir"); done
fi for dir in "${system_rpaths[@]}"; do args+=("-rpath" "$dir"); done
extend args_list rpath_dirs_list "-rpath${lsep}"
extend args_list system_rpath_dirs_list "-rpath${lsep}"
;; ;;
esac esac
# Other arguments from the input command # Other arguments from the input command
extend args_list other_args_list args+=("${other_args[@]}")
# Inject SPACK_LDLIBS, if supplied # Inject SPACK_LDLIBS, if supplied
extend args_list libs_list "-l" for lib in "${libs[@]}"; do
args+=("-l$lib");
done
full_command_list="$command" full_command=("$command" "${args[@]}")
extend full_command_list args_list
# prepend the ccache binary if we're using ccache # prepend the ccache binary if we're using ccache
if [ -n "$SPACK_CCACHE_BINARY" ]; then if [ -n "$SPACK_CCACHE_BINARY" ]; then
case "$lang_flags" in case "$lang_flags" in
C|CXX) # ccache only supports C languages C|CXX) # ccache only supports C languages
prepend full_command_list "${SPACK_CCACHE_BINARY}" full_command=("${SPACK_CCACHE_BINARY}" "${full_command[@]}")
# workaround for stage being a temp folder # workaround for stage being a temp folder
# see #3761#issuecomment-294352232 # see #3761#issuecomment-294352232
export CCACHE_NOHASHDIR=yes export CCACHE_NOHASHDIR=yes
@@ -741,38 +552,22 @@ if [ -n "$SPACK_CCACHE_BINARY" ]; then
fi fi
# dump the full command if the caller supplies SPACK_TEST_COMMAND=dump-args # dump the full command if the caller supplies SPACK_TEST_COMMAND=dump-args
if [ -n "${SPACK_TEST_COMMAND=}" ]; then if [[ $SPACK_TEST_COMMAND == dump-args ]]; then
case "$SPACK_TEST_COMMAND" in IFS="
dump-args) " && echo "${full_command[*]}"
IFS="$lsep"
for arg in $full_command_list; do
echo "$arg"
done
unset IFS
exit exit
;; elif [[ -n $SPACK_TEST_COMMAND ]]; then
dump-env-*) die "ERROR: Unknown test command"
var=${SPACK_TEST_COMMAND#dump-env-}
eval "printf '%s\n' \"\$0: \$var: \$$var\""
;;
*)
die "Unknown test command: '$SPACK_TEST_COMMAND'"
;;
esac
fi fi
# #
# Write the input and output commands to debug logs if it's asked for. # Write the input and output commands to debug logs if it's asked for.
# #
if [ "$SPACK_DEBUG" = TRUE ]; then if [[ $SPACK_DEBUG == TRUE ]]; then
input_log="$SPACK_DEBUG_LOG_DIR/spack-cc-$SPACK_DEBUG_LOG_ID.in.log" input_log="$SPACK_DEBUG_LOG_DIR/spack-cc-$SPACK_DEBUG_LOG_ID.in.log"
output_log="$SPACK_DEBUG_LOG_DIR/spack-cc-$SPACK_DEBUG_LOG_ID.out.log" output_log="$SPACK_DEBUG_LOG_DIR/spack-cc-$SPACK_DEBUG_LOG_ID.out.log"
echo "[$mode] $command $input_command" >> "$input_log" echo "[$mode] $command $input_command" >> "$input_log"
IFS="$lsep" echo "[$mode] ${full_command[*]}" >> "$output_log"
echo "[$mode] "$full_command_list >> "$output_log"
unset IFS
fi fi
# Execute the full command, preserving spaces with IFS set exec "${full_command[@]}"
# to the alarm bell separator.
IFS="$lsep"; exec $full_command_list
View File
@@ -1 +0,0 @@
cc
View File
@@ -1 +0,0 @@
cc
View File
@@ -11,7 +11,7 @@
* Homepage: https://pypi.python.org/pypi/archspec * Homepage: https://pypi.python.org/pypi/archspec
* Usage: Labeling, comparison and detection of microarchitectures * Usage: Labeling, comparison and detection of microarchitectures
* Version: 0.1.2 (commit 85757b6666422fca86aa882a769bf78b0f992f54) * Version: 0.1.2 (commit 068b0ebd641211971acf10f39aa876703a34bae4)
argparse argparse
-------- --------
@@ -88,8 +88,6 @@
* Usage: Needed by pytest. Library with cross-python path, * Usage: Needed by pytest. Library with cross-python path,
ini-parsing, io, code, and log facilities. ini-parsing, io, code, and log facilities.
* Version: 1.4.34 (last version supporting Python 2.6) * Version: 1.4.34 (last version supporting Python 2.6)
* Note: This package has been modified:
* https://github.com/pytest-dev/py/pull/186 was backported
pytest pytest
------ ------
View File
@@ -49,19 +49,6 @@ $ tox
congratulations :) congratulations :)
``` ```
## Citing Archspec
If you are referencing `archspec` in a publication, please cite the following
paper:
* Massimiliano Culpo, Gregory Becker, Carlos Eduardo Arango Gutierrez, Kenneth
Hoste, and Todd Gamblin.
[**`archspec`: A library for detecting, labeling, and reasoning about
microarchitectures**](https://tgamblin.github.io/pubs/archspec-canopie-hpc-2020.pdf).
In *2nd International Workshop on Containers and New Orchestration Paradigms
for Isolated Environments in HPC (CANOPIE-HPC'20)*, Online Event, November
12, 2020.
## License ## License
Archspec is distributed under the terms of both the MIT license and the Archspec is distributed under the terms of both the MIT license and the
View File
@@ -99,7 +99,6 @@ def sysctl_info_dict():
def sysctl(*args): def sysctl(*args):
return _check_output(["sysctl"] + list(args), env=child_environment).strip() return _check_output(["sysctl"] + list(args), env=child_environment).strip()
if platform.machine() == "x86_64":
flags = ( flags = (
sysctl("-n", "machdep.cpu.features").lower() sysctl("-n", "machdep.cpu.features").lower()
+ " " + " "
@@ -111,17 +110,6 @@ def sysctl(*args):
"model": sysctl("-n", "machdep.cpu.model"), "model": sysctl("-n", "machdep.cpu.model"),
"model name": sysctl("-n", "machdep.cpu.brand_string"), "model name": sysctl("-n", "machdep.cpu.brand_string"),
} }
else:
model = (
"m1" if "Apple" in sysctl("-n", "machdep.cpu.brand_string") else "unknown"
)
info = {
"vendor_id": "Apple",
"flags": [],
"model": model,
"CPU implementer": "Apple",
"model name": sysctl("-n", "machdep.cpu.brand_string"),
}
return info return info
@@ -185,11 +173,6 @@ def compatible_microarchitectures(info):
info (dict): dictionary containing information on the host cpu info (dict): dictionary containing information on the host cpu
""" """
architecture_family = platform.machine() architecture_family = platform.machine()
# On Apple M1 platform.machine() returns "arm64" instead of "aarch64"
# so we should normalize the name here
if architecture_family == "arm64":
architecture_family = "aarch64"
# If a tester is not registered, be conservative and assume no known # If a tester is not registered, be conservative and assume no known
# target is compatible with the host # target is compatible with the host
tester = COMPATIBILITY_CHECKS.get(architecture_family, lambda x, y: False) tester = COMPATIBILITY_CHECKS.get(architecture_family, lambda x, y: False)
@@ -206,26 +189,11 @@ def host():
# Get a list of possible candidates for this micro-architecture # Get a list of possible candidates for this micro-architecture
candidates = compatible_microarchitectures(info) candidates = compatible_microarchitectures(info)
# Sorting criteria for candidates
def sorting_fn(item):
return len(item.ancestors), len(item.features)
# Get the best generic micro-architecture
generic_candidates = [c for c in candidates if c.vendor == "generic"]
best_generic = max(generic_candidates, key=sorting_fn)
# Filter the candidates to be descendant of the best generic candidate.
# This is to avoid that the lack of a niche feature that can be disabled
# from e.g. BIOS prevents detection of a reasonably performant architecture
candidates = [c for c in candidates if c > best_generic]
# If we don't have candidates, return the best generic micro-architecture
if not candidates:
return best_generic
# Reverse sort of the depth for the inheritance tree among only targets we # Reverse sort of the depth for the inheritance tree among only targets we
# can use. This gets the newest target we satisfy. # can use. This gets the newest target we satisfy.
return max(candidates, key=sorting_fn) return sorted(
candidates, key=lambda t: (len(t.ancestors), len(t.features)), reverse=True
)[0]
def compatibility_check(architecture_family): def compatibility_check(architecture_family):
@@ -260,13 +228,7 @@ def compatibility_check_for_power(info, target):
"""Compatibility check for PPC64 and PPC64LE architectures.""" """Compatibility check for PPC64 and PPC64LE architectures."""
basename = platform.machine() basename = platform.machine()
generation_match = re.search(r"POWER(\d+)", info.get("cpu", "")) generation_match = re.search(r"POWER(\d+)", info.get("cpu", ""))
try:
generation = int(generation_match.group(1)) generation = int(generation_match.group(1))
except AttributeError:
# There might be no match under emulated environments. For instance
# emulating a ppc64le with QEMU and Docker still reports the host
# /proc/cpuinfo and not a Power
generation = 0
# We can use a target if it descends from our machine type and our # We can use a target if it descends from our machine type and our
# generation (9 for POWER9, etc) is at least its generation. # generation (9 for POWER9, etc) is at least its generation.
@@ -306,22 +268,3 @@ def compatibility_check_for_aarch64(info, target):
and (target.vendor == vendor or target.vendor == "generic") and (target.vendor == vendor or target.vendor == "generic")
and target.features.issubset(features) and target.features.issubset(features)
) )
@compatibility_check(architecture_family="riscv64")
def compatibility_check_for_riscv64(info, target):
"""Compatibility check for riscv64 architectures."""
basename = "riscv64"
uarch = info.get("uarch")
# sifive unmatched board
if uarch == "sifive,u74-mc":
uarch = "u74mc"
# catch-all for unknown uarchs
else:
uarch = "riscv64"
arch_root = TARGETS[basename]
return (target == arch_root or arch_root in target.ancestors) and (
target == uarch or target.vendor == "generic"
)
View File
@@ -173,12 +173,6 @@ def family(self):
return roots.pop() return roots.pop()
@property
def generic(self):
"""Returns the best generic architecture that is compatible with self"""
generics = [x for x in [self] + self.ancestors if x.vendor == "generic"]
return max(generics, key=lambda x: len(x.ancestors))
def to_dict(self, return_list_of_items=False): def to_dict(self, return_list_of_items=False):
"""Returns a dictionary representation of this object. """Returns a dictionary representation of this object.
View File
@@ -91,166 +91,6 @@
] ]
} }
}, },
"x86_64_v2": {
"from": ["x86_64"],
"vendor": "generic",
"features": [
"cx16",
"lahf_lm",
"mmx",
"sse",
"sse2",
"ssse3",
"sse4_1",
"sse4_2",
"popcnt"
],
"compilers": {
"gcc": [
{
"versions": "11.1:",
"name": "x86-64-v2",
"flags": "-march={name} -mtune=generic"
},
{
"versions": "4.6:11.0",
"name": "x86-64",
"flags": "-march={name} -mtune=generic -mcx16 -msahf -mpopcnt -msse3 -msse4.1 -msse4.2 -mssse3"
}
],
"clang": [
{
"versions": "12.0:",
"name": "x86-64-v2",
"flags": "-march={name} -mtune=generic"
},
{
"versions": "3.9:11.1",
"name": "x86-64",
"flags": "-march={name} -mtune=generic -mcx16 -msahf -mpopcnt -msse3 -msse4.1 -msse4.2 -mssse3"
}
]
}
},
"x86_64_v3": {
"from": ["x86_64_v2"],
"vendor": "generic",
"features": [
"cx16",
"lahf_lm",
"mmx",
"sse",
"sse2",
"ssse3",
"sse4_1",
"sse4_2",
"popcnt",
"avx",
"avx2",
"bmi1",
"bmi2",
"f16c",
"fma",
"abm",
"movbe",
"xsave"
],
"compilers": {
"gcc": [
{
"versions": "11.1:",
"name": "x86-64-v3",
"flags": "-march={name} -mtune=generic"
},
{
"versions": "4.8:11.0",
"name": "x86-64",
"flags": "-march={name} -mtune=generic -mcx16 -msahf -mpopcnt -msse3 -msse4.1 -msse4.2 -mssse3 -mavx -mavx2 -mbmi -mbmi2 -mf16c -mfma -mlzcnt -mmovbe -mxsave"
}
],
"clang": [
{
"versions": "12.0:",
"name": "x86-64-v3",
"flags": "-march={name} -mtune=generic"
},
{
"versions": "3.9:11.1",
"name": "x86-64",
"flags": "-march={name} -mtune=generic -mcx16 -msahf -mpopcnt -msse3 -msse4.1 -msse4.2 -mssse3 -mavx -mavx2 -mbmi -mbmi2 -mf16c -mfma -mlzcnt -mmovbe -mxsave"
}
],
"apple-clang": [
{
"versions": "8.0:",
"name": "x86-64",
"flags": "-march={name} -mtune=generic -mcx16 -msahf -mpopcnt -msse3 -msse4.1 -msse4.2 -mssse3 -mavx -mavx2 -mbmi -mbmi2 -mf16c -mfma -mlzcnt -mmovbe -mxsave"
}
]
}
},
"x86_64_v4": {
"from": ["x86_64_v3"],
"vendor": "generic",
"features": [
"cx16",
"lahf_lm",
"mmx",
"sse",
"sse2",
"ssse3",
"sse4_1",
"sse4_2",
"popcnt",
"avx",
"avx2",
"bmi1",
"bmi2",
"f16c",
"fma",
"abm",
"movbe",
"xsave",
"avx512f",
"avx512bw",
"avx512cd",
"avx512dq",
"avx512vl"
],
"compilers": {
"gcc": [
{
"versions": "11.1:",
"name": "x86-64-v4",
"flags": "-march={name} -mtune=generic"
},
{
"versions": "6.0:11.0",
"name": "x86-64",
"flags": "-march={name} -mtune=generic -mcx16 -msahf -mpopcnt -msse3 -msse4.1 -msse4.2 -mssse3 -mavx -mavx2 -mbmi -mbmi2 -mf16c -mfma -mlzcnt -mmovbe -mxsave -mavx512f -mavx512bw -mavx512cd -mavx512dq -mavx512vl"
}
],
"clang": [
{
"versions": "12.0:",
"name": "x86-64-v4",
"flags": "-march={name} -mtune=generic"
},
{
"versions": "3.9:11.1",
"name": "x86-64",
"flags": "-march={name} -mtune=generic -mcx16 -msahf -mpopcnt -msse3 -msse4.1 -msse4.2 -mssse3 -mavx -mavx2 -mbmi -mbmi2 -mf16c -mfma -mlzcnt -mmovbe -mxsave -mavx512f -mavx512bw -mavx512cd -mavx512dq -mavx512vl"
}
],
"apple-clang": [
{
"versions": "8.0:",
"name": "x86-64",
"flags": "-march={name} -mtune=generic -mcx16 -msahf -mpopcnt -msse3 -msse4.1 -msse4.2 -mssse3 -mavx -mavx2 -mbmi -mbmi2 -mf16c -mfma -mlzcnt -mmovbe -mxsave -mavx512f -mavx512bw -mavx512cd -mavx512dq -mavx512vl"
}
]
}
},
"nocona": { "nocona": {
"from": ["x86_64"], "from": ["x86_64"],
"vendor": "GenuineIntel", "vendor": "GenuineIntel",
@@ -337,7 +177,7 @@
} }
}, },
"nehalem": { "nehalem": {
"from": ["core2", "x86_64_v2"], "from": ["core2"],
"vendor": "GenuineIntel", "vendor": "GenuineIntel",
"features": [ "features": [
"mmx", "mmx",
@@ -554,7 +394,7 @@
} }
}, },
"haswell": { "haswell": {
"from": ["ivybridge", "x86_64_v3"], "from": ["ivybridge"],
"vendor": "GenuineIntel", "vendor": "GenuineIntel",
"features": [ "features": [
"mmx", "mmx",
@@ -802,7 +642,7 @@
} }
}, },
"skylake_avx512": { "skylake_avx512": {
"from": ["skylake", "x86_64_v4"], "from": ["skylake"],
"vendor": "GenuineIntel", "vendor": "GenuineIntel",
"features": [ "features": [
"mmx", "mmx",
@@ -1001,7 +841,7 @@
], ],
"intel": [ "intel": [
{ {
"versions": "19.0.1:", "versions": "19.0:",
"flags": "-march={name} -mtune={name}" "flags": "-march={name} -mtune={name}"
} }
] ]
@@ -1146,7 +986,7 @@
} }
}, },
"bulldozer": { "bulldozer": {
"from": ["x86_64_v2"], "from": ["x86_64"],
"vendor": "AuthenticAMD", "vendor": "AuthenticAMD",
"features": [ "features": [
"mmx", "mmx",
@@ -1305,7 +1145,7 @@
} }
}, },
"excavator": { "excavator": {
"from": ["steamroller", "x86_64_v3"], "from": ["steamroller"],
"vendor": "AuthenticAMD", "vendor": "AuthenticAMD",
"features": [ "features": [
"mmx", "mmx",
@@ -1364,7 +1204,7 @@
} }
}, },
"zen": { "zen": {
"from": ["x86_64_v3"], "from": ["x86_64"],
"vendor": "AuthenticAMD", "vendor": "AuthenticAMD",
"features": [ "features": [
"bmi1", "bmi1",
@@ -1488,64 +1328,6 @@
] ]
} }
}, },
"zen3": {
"from": ["zen2"],
"vendor": "AuthenticAMD",
"features": [
"bmi1",
"bmi2",
"f16c",
"fma",
"fsgsbase",
"avx",
"avx2",
"rdseed",
"clzero",
"aes",
"pclmulqdq",
"cx16",
"movbe",
"mmx",
"sse",
"sse2",
"sse4a",
"ssse3",
"sse4_1",
"sse4_2",
"abm",
"xsavec",
"xsaveopt",
"clflushopt",
"popcnt",
"clwb",
"vaes",
"vpclmulqdq",
"pku"
],
"compilers": {
"gcc": [
{
"versions": "10.3:",
"name": "znver3",
"flags": "-march={name} -mtune={name}"
}
],
"clang": [
{
"versions": "12.0:",
"name": "znver3",
"flags": "-march={name} -mtune={name}"
}
],
"aocc": [
{
"versions": "3.0:",
"name": "znver3",
"flags": "-march={name} -mtune={name}"
}
]
}
},
"ppc64": { "ppc64": {
"from": [], "from": [],
"vendor": "generic", "vendor": "generic",
@@ -1719,18 +1501,6 @@
"versions": ":", "versions": ":",
"flags": "-march=armv8-a -mtune=generic" "flags": "-march=armv8-a -mtune=generic"
} }
],
"apple-clang": [
{
"versions": ":",
"flags": "-march=armv8-a -mtune=generic"
}
],
"arm": [
{
"versions": ":",
"flags": "-march=armv8-a -mtune=generic"
}
] ]
} }
}, },
@@ -1834,12 +1604,6 @@
"versions": "5:", "versions": "5:",
"flags": "-march=armv8.2-a+crc+crypto+fp16+sve" "flags": "-march=armv8.2-a+crc+crypto+fp16+sve"
} }
],
"arm": [
{
"versions": "20:",
"flags": "-march=armv8.2-a+crc+crypto+fp16+sve"
}
] ]
} }
}, },
@@ -1942,37 +1706,6 @@
"versions": "5:", "versions": "5:",
"flags" : "-march=armv8.2-a+fp16+rcpc+dotprod+crypto" "flags" : "-march=armv8.2-a+fp16+rcpc+dotprod+crypto"
} }
],
"arm" : [
{
"versions": "20:",
"flags" : "-march=armv8.2-a+fp16+rcpc+dotprod+crypto"
}
]
}
},
"m1": {
"from": ["aarch64"],
"vendor": "Apple",
"features": [],
"compilers": {
"gcc": [
{
"versions": "8.0:",
"flags" : "-march=armv8.4-a -mtune=generic"
}
],
"clang" : [
{
"versions": "9.0:",
"flags" : "-march=armv8.4-a"
}
],
"apple-clang": [
{
"versions": "11.0:",
"flags" : "-march=armv8.4-a"
}
] ]
} }
}, },
@@ -2017,44 +1750,6 @@
"features": [], "features": [],
"compilers": { "compilers": {
} }
},
"riscv64": {
"from": [],
"vendor": "generic",
"features": [],
"compilers": {
"gcc": [
{
"versions": "7.1:",
"flags" : "-march=rv64gc"
}
],
"clang": [
{
"versions": "9.0:",
"flags" : "-march=rv64gc"
}
]
}
},
"u74mc": {
"from": ["riscv64"],
"vendor": "SiFive",
"features": [],
"compilers": {
"gcc": [
{
"versions": "10.2:",
"flags" : "-march=rv64gc -mtune=sifive-7-series"
}
],
"clang" : [
{
"versions": "12.0:",
"flags" : "-march=rv64gc -mtune=sifive-7-series"
}
]
}
} }
}, },
"feature_aliases": { "feature_aliases": {

View File

@@ -77,18 +77,52 @@
from six import StringIO from six import StringIO
from six import string_types from six import string_types
class prefilter(object):
"""Make regular expressions faster with a simple prefiltering predicate.
Some regular expressions seem to be much more costly than others. In
most cases, we can evaluate a simple precondition, e.g.::
lambda x: "error" in x
to avoid evaluating expensive regexes on all lines in a file. This
can reduce parse time for large files by orders of magnitude when
evaluating lots of expressions.
A ``prefilter`` object is designed to act like a regex, but
``search`` and ``match`` check the precondition before bothering to
evaluate the regular expression.
Note that ``match`` and ``search`` just return ``True`` and ``False``
at the moment. Make them return a ``MatchObject`` or ``None`` if it
becomes necessary.
"""
def __init__(self, precondition, *patterns):
self.patterns = [re.compile(p) for p in patterns]
self.pre = precondition
self.pattern = "\n ".join(
('MERGED:',) + patterns)
def search(self, text):
return self.pre(text) and any(p.search(text) for p in self.patterns)
def match(self, text):
return self.pre(text) and any(p.match(text) for p in self.patterns)
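
A brief usage sketch of the `prefilter` class defined above: the cheap substring predicate runs first, so the bundled regexes are only evaluated on lines that could plausibly match (the sample patterns and lines are illustrative):

```python
# The precondition short-circuits before any regex runs, which is the
# whole point of the class above.
err = prefilter(
    lambda x: "error" in x,
    r"([^:]+): error[ \t]*[0-9]+[ \t]*:",
    r"([^:]+): (Error:|error|undefined reference|multiply defined)",
)

lines = [
    "compiling foo.c ... ok",      # fails the precondition: no regex runs
    "foo.c: error 42: bad thing",  # precondition passes, first regex matches
]
print([bool(err.search(line)) for line in lines])  # -> [False, True]
```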
_error_matches = [ _error_matches = [
"^FAIL: ", prefilter(
"^FATAL: ", lambda x: any(s in x for s in (
"^failed ", 'Error:', 'error', 'undefined reference', 'multiply defined')),
"FAILED", "([^:]+): error[ \\t]*[0-9]+[ \\t]*:",
"Failed test", "([^:]+): (Error:|error|undefined reference|multiply defined)",
"([^ :]+) ?: (error|fatal error|catastrophic error)",
"([^:]+)\\(([^\\)]+)\\) ?: (error|fatal error|catastrophic error)"),
"^FAILED",
"^[Bb]us [Ee]rror", "^[Bb]us [Ee]rror",
"^[Ss]egmentation [Vv]iolation", "^[Ss]egmentation [Vv]iolation",
"^[Ss]egmentation [Ff]ault", "^[Ss]egmentation [Ff]ault",
":.*[Pp]ermission [Dd]enied", ":.*[Pp]ermission [Dd]enied",
"[^ :]:[0-9]+: [^ \\t]",
"[^:]: error[ \\t]*[0-9]+[ \\t]*:",
"^Error ([0-9]+):", "^Error ([0-9]+):",
"^Fatal", "^Fatal",
"^[Ee]rror: ", "^[Ee]rror: ",
@@ -98,9 +132,6 @@
"^cc[^C]*CC: ERROR File = ([^,]+), Line = ([0-9]+)", "^cc[^C]*CC: ERROR File = ([^,]+), Line = ([0-9]+)",
"^ld([^:])*:([ \\t])*ERROR([^:])*:", "^ld([^:])*:([ \\t])*ERROR([^:])*:",
"^ild:([ \\t])*\\(undefined symbol\\)", "^ild:([ \\t])*\\(undefined symbol\\)",
"[^ :] : (error|fatal error|catastrophic error)",
"[^:]: (Error:|error|undefined reference|multiply defined)",
"[^:]\\([^\\)]+\\) ?: (error|fatal error|catastrophic error)",
"^fatal error C[0-9]+:", "^fatal error C[0-9]+:",
": syntax error ", ": syntax error ",
"^collect2: ld returned 1 exit status", "^collect2: ld returned 1 exit status",
@@ -109,7 +140,7 @@
"^Unresolved:", "^Unresolved:",
"Undefined symbol", "Undefined symbol",
"^Undefined[ \\t]+first referenced", "^Undefined[ \\t]+first referenced",
"^CMake Error", "^CMake Error.*:",
":[ \\t]cannot find", ":[ \\t]cannot find",
":[ \\t]can't find", ":[ \\t]can't find",
": \\*\\*\\* No rule to make target [`'].*\\'. Stop", ": \\*\\*\\* No rule to make target [`'].*\\'. Stop",
@@ -123,7 +154,6 @@
"ld: 0706-006 Cannot find or open library file: -l ", "ld: 0706-006 Cannot find or open library file: -l ",
"ild: \\(argument error\\) can't find library argument ::", "ild: \\(argument error\\) can't find library argument ::",
"^could not be found and will not be loaded.", "^could not be found and will not be loaded.",
"^WARNING: '.*' is missing on your system",
"s:616 string too big", "s:616 string too big",
"make: Fatal error: ", "make: Fatal error: ",
"ld: 0711-993 Error occurred while writing to the output file:", "ld: 0711-993 Error occurred while writing to the output file:",
@@ -145,40 +175,44 @@
"instantiated from ", "instantiated from ",
"candidates are:", "candidates are:",
": warning", ": warning",
": WARNING",
": \\(Warning\\)", ": \\(Warning\\)",
": note", ": note",
" ok",
"Note:", "Note:",
"makefile:", "makefile:",
"Makefile:", "Makefile:",
":[ \\t]+Where:", ":[ \\t]+Where:",
"[^ :]:[0-9]+: Warning", "([^ :]+):([0-9]+): Warning",
"------ Build started: .* ------", "------ Build started: .* ------",
] ]
#: Regexes to match file/line numbers in error/warning messages #: Regexes to match file/line numbers in error/warning messages
_warning_matches = [ _warning_matches = [
"[^ :]:[0-9]+: warning:", prefilter(
"[^ :]:[0-9]+: note:", lambda x: 'warning' in x,
"^cc[^C]*CC: WARNING File = ([^,]+), Line = ([0-9]+)", "([^ :]+):([0-9]+): warning:",
"^ld([^:])*:([ \\t])*WARNING([^:])*:", "([^:]+): warning ([0-9]+):",
"[^:]: warning [0-9]+:", "([^:]+): warning[ \\t]*[0-9]+[ \\t]*:",
"^\"[^\"]+\", line [0-9]+: [Ww](arning|arnung)", "([^ :]+) : warning",
"[^:]: warning[ \\t]*[0-9]+[ \\t]*:", "([^:]+): warning"),
prefilter(
lambda x: 'note:' in x,
"^([^ :]+):([0-9]+): note:"),
prefilter(
lambda x: any(s in x for s in ('Warning', 'Warnung')),
"^(Warning|Warnung) ([0-9]+):", "^(Warning|Warnung) ([0-9]+):",
"^(Warning|Warnung)[ :]", "^(Warning|Warnung)[ :]",
"WARNING: ",
"[^ :] : warning",
"[^:]: warning",
"\", line [0-9]+\\.[0-9]+: [0-9]+-[0-9]+ \\([WI]\\)",
"^cxx: Warning:", "^cxx: Warning:",
"([^ :]+):([0-9]+): (Warning|Warnung)",
"^CMake Warning.*:"),
"file: .* has no symbols", "file: .* has no symbols",
"[^ :]:[0-9]+: (Warning|Warnung)", "^cc[^C]*CC: WARNING File = ([^,]+), Line = ([0-9]+)",
"^ld([^:])*:([ \\t])*WARNING([^:])*:",
"^\"[^\"]+\", line [0-9]+: [Ww](arning|arnung)",
"WARNING: ",
"\", line [0-9]+\\.[0-9]+: [0-9]+-[0-9]+ \\([WI]\\)",
"\\([0-9]*\\): remark #[0-9]*", "\\([0-9]*\\): remark #[0-9]*",
"\".*\", line [0-9]+: remark\\([0-9]*\\):", "\".*\", line [0-9]+: remark\\([0-9]*\\):",
"cc-[0-9]* CC: REMARK File = .*, Line = [0-9]*", "cc-[0-9]* CC: REMARK File = .*, Line = [0-9]*",
"^CMake Warning",
"^\\[WARNING\\]", "^\\[WARNING\\]",
] ]
@@ -309,7 +343,8 @@ def _profile_match(matches, exceptions, line, match_times, exc_times):
def _parse(lines, offset, profile): def _parse(lines, offset, profile):
def compile(regex_array): def compile(regex_array):
return [re.compile(regex) for regex in regex_array] return [regex if isinstance(regex, prefilter) else re.compile(regex)
for regex in regex_array]
error_matches = compile(_error_matches) error_matches = compile(_error_matches)
error_exceptions = compile(_error_exceptions) error_exceptions = compile(_error_exceptions)

View File

@@ -10,7 +10,7 @@
from py._path.common import iswin32, fspath from py._path.common import iswin32, fspath
from stat import S_ISLNK, S_ISDIR, S_ISREG from stat import S_ISLNK, S_ISDIR, S_ISREG
from os.path import abspath, normpath, isabs, exists, isdir, isfile, islink, dirname from os.path import abspath, normcase, normpath, isabs, exists, isdir, isfile, islink, dirname
if sys.version_info > (3,0): if sys.version_info > (3,0):
def map_as_list(func, iter): def map_as_list(func, iter):
@@ -801,10 +801,10 @@ def make_numbered_dir(cls, prefix='session-', rootdir=None, keep=3,
if rootdir is None: if rootdir is None:
rootdir = cls.get_temproot() rootdir = cls.get_temproot()
nprefix = prefix.lower() nprefix = normcase(prefix)
def parse_num(path): def parse_num(path):
""" parse the number out of a path (if it matches the prefix) """ """ parse the number out of a path (if it matches the prefix) """
nbasename = path.basename.lower() nbasename = normcase(path.basename)
if nbasename.startswith(nprefix): if nbasename.startswith(nprefix):
try: try:
return int(nbasename[len(nprefix):]) return int(nbasename[len(nprefix):])

View File

@@ -1,4 +1,4 @@
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other # Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details. # Spack Project Developers. See the top-level COPYRIGHT file for details.
# #
# SPDX-License-Identifier: (Apache-2.0 OR MIT) # SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -5,9 +5,9 @@
from __future__ import print_function from __future__ import print_function
import re
import argparse import argparse
import errno import errno
import re
import sys import sys
from six import StringIO from six import StringIO
@@ -326,7 +326,7 @@ def end_function(self, prog=None):
"""Returns the syntax needed to end a function definition. """Returns the syntax needed to end a function definition.
Parameters: Parameters:
prog (str or None): the command name prog (str, optional): the command name
Returns: Returns:
str: the function definition ending str: the function definition ending

View File

@@ -4,9 +4,9 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT) # SPDX-License-Identifier: (Apache-2.0 OR MIT)
import collections import collections
import errno import errno
import hashlib
import glob import glob
import grp import grp
import hashlib
import itertools import itertools
import numbers import numbers
import os import os
@@ -19,12 +19,11 @@
from contextlib import contextmanager from contextlib import contextmanager
import six import six
from llnl.util import tty from llnl.util import tty
from llnl.util.lang import dedupe, memoized from llnl.util.lang import dedupe, memoized
from spack.util.executable import Executable from spack.util.executable import Executable
if sys.version_info >= (3, 3): if sys.version_info >= (3, 3):
from collections.abc import Sequence # novm from collections.abc import Sequence # novm
else: else:
@@ -141,7 +140,7 @@ def filter_file(regex, repl, *filenames, **kwargs):
file. file.
""" """
string = kwargs.get('string', False) string = kwargs.get('string', False)
backup = kwargs.get('backup', False) backup = kwargs.get('backup', True)
ignore_absent = kwargs.get('ignore_absent', False) ignore_absent = kwargs.get('ignore_absent', False)
stop_at = kwargs.get('stop_at', None) stop_at = kwargs.get('stop_at', None)
@@ -302,16 +301,13 @@ def group_ids(uid=None):
return [g.gr_gid for g in grp.getgrall() if user in g.gr_mem] return [g.gr_gid for g in grp.getgrall() if user in g.gr_mem]
def chgrp(path, group, follow_symlinks=True): def chgrp(path, group):
"""Implement the bash chgrp function on a single path""" """Implement the bash chgrp function on a single path"""
if isinstance(group, six.string_types): if isinstance(group, six.string_types):
gid = grp.getgrnam(group).gr_gid gid = grp.getgrnam(group).gr_gid
else: else:
gid = group gid = group
if follow_symlinks:
os.chown(path, -1, gid) os.chown(path, -1, gid)
else:
os.lchown(path, -1, gid)
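
A hedged example of the `follow_symlinks` distinction introduced above. The paths and group name are placeholders, and running this requires suitable permissions:

```python
import os

# '/tmp/demo_target', '/tmp/demo_link' and group 'staff' are placeholders.
os.symlink('/tmp/demo_target', '/tmp/demo_link')

chgrp('/tmp/demo_link', 'staff')                         # follows: chowns the target
chgrp('/tmp/demo_link', 'staff', follow_symlinks=False)  # chowns the link itself
```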
def chmod_x(entry, perms): def chmod_x(entry, perms):
@@ -447,7 +443,7 @@ def copy_tree(src, dest, symlinks=True, ignore=None, _permissions=False):
src (str): the directory to copy src (str): the directory to copy
dest (str): the destination directory dest (str): the destination directory
symlinks (bool): whether or not to preserve symlinks symlinks (bool): whether or not to preserve symlinks
ignore (typing.Callable): function indicating which files to ignore ignore (function): function indicating which files to ignore
_permissions (bool): for internal use only _permissions (bool): for internal use only
Raises: Raises:
@@ -521,7 +517,7 @@ def install_tree(src, dest, symlinks=True, ignore=None):
src (str): the directory to install src (str): the directory to install
dest (str): the destination directory dest (str): the destination directory
symlinks (bool): whether or not to preserve symlinks symlinks (bool): whether or not to preserve symlinks
ignore (typing.Callable): function indicating which files to ignore ignore (function): function indicating which files to ignore
Raises: Raises:
IOError: if *src* does not match any files or directories IOError: if *src* does not match any files or directories
@@ -560,12 +556,12 @@ def mkdirp(*paths, **kwargs):
paths (str): paths to create with mkdirp paths (str): paths to create with mkdirp
Keyword Arguments: Keyword Arguments:
mode (permission bits or None): optional permissions to set mode (permission bits or None, optional): optional permissions to set
on the created directory -- use OS default if not provided on the created directory -- use OS default if not provided
group (group name or None): optional group for permissions of group (group name or None, optional): optional group for permissions of
final created directory -- use OS default if not provided. Only final created directory -- use OS default if not provided. Only
used if world write permissions are not set used if world write permissions are not set
default_perms (str or None): one of 'parents' or 'args'. The default permissions default_perms ('parents' or 'args', optional): The default permissions
that are set for directories that are not themselves an argument that are set for directories that are not themselves an argument
for mkdirp. 'parents' means intermediate directories get the for mkdirp. 'parents' means intermediate directories get the
permissions of their direct parent directory, 'args' means permissions of their direct parent directory, 'args' means
@@ -659,12 +655,6 @@ def working_dir(dirname, **kwargs):
os.chdir(orig_dir) os.chdir(orig_dir)
class CouldNotRestoreDirectoryBackup(RuntimeError):
def __init__(self, inner_exception, outer_exception):
self.inner_exception = inner_exception
self.outer_exception = outer_exception
@contextmanager @contextmanager
def replace_directory_transaction(directory_name, tmp_root=None): def replace_directory_transaction(directory_name, tmp_root=None):
"""Moves a directory to a temporary space. If the operations executed """Moves a directory to a temporary space. If the operations executed
@@ -692,17 +682,16 @@ def replace_directory_transaction(directory_name, tmp_root=None):
assert os.path.isabs(tmp_root) assert os.path.isabs(tmp_root)
tmp_dir = tempfile.mkdtemp(dir=tmp_root) tmp_dir = tempfile.mkdtemp(dir=tmp_root)
tty.debug('Temporary directory created [{0}]'.format(tmp_dir)) tty.debug('TEMPORARY DIRECTORY CREATED [{0}]'.format(tmp_dir))
shutil.move(src=directory_name, dst=tmp_dir) shutil.move(src=directory_name, dst=tmp_dir)
tty.debug('Directory moved [src={0}, dest={1}]'.format(directory_name, tmp_dir)) tty.debug('DIRECTORY MOVED [src={0}, dest={1}]'.format(
directory_name, tmp_dir
))
try: try:
yield tmp_dir yield tmp_dir
except (Exception, KeyboardInterrupt, SystemExit) as inner_exception: except (Exception, KeyboardInterrupt, SystemExit) as e:
# Try to recover the original directory, if this fails, raise a
# composite exception.
try:
# Delete what was there, before copying back the original content # Delete what was there, before copying back the original content
if os.path.exists(directory_name): if os.path.exists(directory_name):
shutil.rmtree(directory_name) shutil.rmtree(directory_name)
@@ -710,15 +699,15 @@ def replace_directory_transaction(directory_name, tmp_root=None):
src=os.path.join(tmp_dir, directory_basename), src=os.path.join(tmp_dir, directory_basename),
dst=os.path.dirname(directory_name) dst=os.path.dirname(directory_name)
) )
except Exception as outer_exception: tty.debug('DIRECTORY RECOVERED [{0}]'.format(directory_name))
raise CouldNotRestoreDirectoryBackup(inner_exception, outer_exception)
tty.debug('Directory recovered [{0}]'.format(directory_name)) msg = 'the transactional move of "{0}" failed.'
raise msg += '\n ' + str(e)
raise RuntimeError(msg.format(directory_name))
else: else:
# Otherwise delete the temporary directory # Otherwise delete the temporary directory
shutil.rmtree(tmp_dir, ignore_errors=True) shutil.rmtree(tmp_dir)
tty.debug('Temporary directory deleted [{0}]'.format(tmp_dir)) tty.debug('TEMPORARY DIRECTORY DELETED [{0}]'.format(tmp_dir))
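
A usage sketch of `replace_directory_transaction` (directory names hypothetical): if the body raises, the original directory is restored from the backup; on success the backup is discarded:

```python
import os

os.makedirs('pkg/prefix')
try:
    with replace_directory_transaction('pkg/prefix') as backup:
        # 'pkg/prefix' now lives under `backup`; attempt a rebuild here
        os.makedirs('pkg/prefix')
        raise RuntimeError('simulated failed install')
except RuntimeError:
    pass  # the original 'pkg/prefix' was moved back from the backup
```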
def hash_directory(directory, ignore=[]): def hash_directory(directory, ignore=[]):
@@ -876,7 +865,7 @@ def traverse_tree(source_root, dest_root, rel_path='', **kwargs):
Keyword Arguments: Keyword Arguments:
order (str): Whether to do pre- or post-order traversal. Accepted order (str): Whether to do pre- or post-order traversal. Accepted
values are 'pre' and 'post' values are 'pre' and 'post'
ignore (typing.Callable): function indicating which files to ignore ignore (function): function indicating which files to ignore
follow_nonexisting (bool): Whether to descend into directories in follow_nonexisting (bool): Whether to descend into directories in
``src`` that do not exit in ``dest``. Default is True ``src`` that do not exit in ``dest``. Default is True
follow_links (bool): Whether to descend into symlinks in ``src`` follow_links (bool): Whether to descend into symlinks in ``src``
@@ -1112,23 +1101,23 @@ def find(root, files, recursive=True):
Accepts any glob characters accepted by fnmatch: Accepts any glob characters accepted by fnmatch:
========== ==================================== ======= ====================================
Pattern Meaning Pattern Meaning
========== ==================================== ======= ====================================
``*`` matches everything * matches everything
``?`` matches any single character ? matches any single character
``[seq]`` matches any character in ``seq`` [seq] matches any character in ``seq``
``[!seq]`` matches any character not in ``seq`` [!seq] matches any character not in ``seq``
========== ==================================== ======= ====================================
Parameters: Parameters:
root (str): The root directory to start searching from root (str): The root directory to start searching from
files (str or Sequence): Library name(s) to search for files (str or Sequence): Library name(s) to search for
recursive (bool): if False search only root folder, recurse (bool, optional): if False search only root folder,
if True descends top-down from the root. Defaults to True. if True descends top-down from the root. Defaults to True.
Returns: Returns:
list: The files that have been found list of strings: The files that have been found
""" """
if isinstance(files, six.string_types): if isinstance(files, six.string_types):
files = [files] files = [files]
@@ -1210,7 +1199,7 @@ def directories(self):
['/dir1', '/dir2'] ['/dir1', '/dir2']
Returns: Returns:
list: A list of directories list of strings: A list of directories
""" """
return list(dedupe( return list(dedupe(
os.path.dirname(x) for x in self.files if os.path.dirname(x) os.path.dirname(x) for x in self.files if os.path.dirname(x)
@@ -1228,7 +1217,7 @@ def basenames(self):
['a.h', 'b.h'] ['a.h', 'b.h']
Returns: Returns:
list: A list of base-names list of strings: A list of base-names
""" """
return list(dedupe(os.path.basename(x) for x in self.files)) return list(dedupe(os.path.basename(x) for x in self.files))
@@ -1315,7 +1304,7 @@ def headers(self):
"""Stable de-duplication of the headers. """Stable de-duplication of the headers.
Returns: Returns:
list: A list of header files list of strings: A list of header files
""" """
return self.files return self.files
@@ -1328,7 +1317,7 @@ def names(self):
['a', 'b'] ['a', 'b']
Returns: Returns:
list: A list of files without extensions list of strings: A list of files without extensions
""" """
names = [] names = []
@@ -1419,9 +1408,9 @@ def find_headers(headers, root, recursive=False):
======= ==================================== ======= ====================================
Parameters: Parameters:
headers (str or list): Header name(s) to search for headers (str or list of str): Header name(s) to search for
root (str): The root directory to start searching from root (str): The root directory to start searching from
recursive (bool): if False search only root folder, recursive (bool, optional): if False search only root folder,
if True descends top-down from the root. Defaults to False. if True descends top-down from the root. Defaults to False.
Returns: Returns:
@@ -1457,7 +1446,7 @@ def find_all_headers(root):
in the directory passed as argument. in the directory passed as argument.
Args: Args:
root (str): directory where to look recursively for header files root (path): directory where to look recursively for header files
Returns: Returns:
List of all headers found in ``root`` and subdirectories. List of all headers found in ``root`` and subdirectories.
@@ -1477,7 +1466,7 @@ def libraries(self):
"""Stable de-duplication of library files. """Stable de-duplication of library files.
Returns: Returns:
list: A list of library files list of strings: A list of library files
""" """
return self.files return self.files
@@ -1490,7 +1479,7 @@ def names(self):
['a', 'b'] ['a', 'b']
Returns: Returns:
list: A list of library names list of strings: A list of library names
""" """
names = [] names = []
@@ -1575,8 +1564,8 @@ def find_system_libraries(libraries, shared=True):
======= ==================================== ======= ====================================
Parameters: Parameters:
libraries (str or list): Library name(s) to search for libraries (str or list of str): Library name(s) to search for
shared (bool): if True searches for shared libraries, shared (bool, optional): if True searches for shared libraries,
otherwise for static. Defaults to True. otherwise for static. Defaults to True.
Returns: Returns:
@@ -1626,11 +1615,11 @@ def find_libraries(libraries, root, shared=True, recursive=False):
======= ==================================== ======= ====================================
Parameters: Parameters:
libraries (str or list): Library name(s) to search for libraries (str or list of str): Library name(s) to search for
root (str): The root directory to start searching from root (str): The root directory to start searching from
shared (bool): if True searches for shared libraries, shared (bool, optional): if True searches for shared libraries,
otherwise for static. Defaults to True. otherwise for static. Defaults to True.
recursive (bool): if False search only root folder, recursive (bool, optional): if False search only root folder,
if True descends top-down from the root. Defaults to False. if True descends top-down from the root. Defaults to False.
Returns: Returns:
@@ -1858,18 +1847,3 @@ def keep_modification_time(*filenames):
for f, mtime in mtimes.items(): for f, mtime in mtimes.items():
if os.path.exists(f): if os.path.exists(f):
os.utime(f, (os.path.getatime(f), mtime)) os.utime(f, (os.path.getatime(f), mtime))
@contextmanager
def temporary_dir(*args, **kwargs):
"""Create a temporary directory and cd's into it. Delete the directory
on exit.
Takes the same arguments as tempfile.mkdtemp()
"""
tmp_dir = tempfile.mkdtemp(*args, **kwargs)
try:
with working_dir(tmp_dir):
yield tmp_dir
finally:
remove_directory_contents(tmp_dir)

View File

@@ -5,20 +5,15 @@
from __future__ import division from __future__ import division
import functools import multiprocessing
import inspect
import os import os
import re import re
import sys import functools
import inspect
from datetime import datetime, timedelta from datetime import datetime, timedelta
from six import string_types from six import string_types
import sys
if sys.version_info < (3, 0):
from itertools import izip_longest # novm
zip_longest = izip_longest
else:
from itertools import zip_longest # novm
if sys.version_info >= (3, 3): if sys.version_info >= (3, 3):
from collections.abc import Hashable, MutableMapping # novm from collections.abc import Hashable, MutableMapping # novm
@@ -30,6 +25,23 @@
ignore_modules = [r'^\.#', '~$'] ignore_modules = [r'^\.#', '~$']
# On macOS, Python 3.8 multiprocessing now defaults to the 'spawn' start
# method. Spack cannot currently handle this, so force the process to start
# using the 'fork' start method.
#
# TODO: This solution is not ideal, as the 'fork' start method can lead to
# crashes of the subprocess. Figure out how to make 'spawn' work.
#
# See:
# * https://github.com/spack/spack/pull/18124
# * https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods # noqa: E501
# * https://bugs.python.org/issue33725
if sys.version_info >= (3,): # novm
fork_context = multiprocessing.get_context('fork')
else:
fork_context = multiprocessing
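
A minimal, self-contained illustration of the pattern above: child processes are created through `fork_context`, so the start method is 'fork' even where 'spawn' is the platform default. Note that 'fork' is unavailable on Windows:

```python
import multiprocessing
import os
import sys

# Same selection logic as the snippet above.
fork_context = (multiprocessing.get_context('fork')
                if sys.version_info >= (3,) else multiprocessing)

def work(queue):
    queue.put(os.getpid())

if __name__ == '__main__':
    q = fork_context.Queue()
    p = fork_context.Process(target=work, args=(q,))
    p.start()
    p.join()
    print(q.get())  # pid of the forked child
```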
def index_by(objects, *funcs): def index_by(objects, *funcs):
"""Create a hierarchy of dictionaries by splitting the supplied """Create a hierarchy of dictionaries by splitting the supplied
set of objects on unique values of the supplied functions. set of objects on unique values of the supplied functions.
@@ -215,31 +227,6 @@ def list_modules(directory, **kwargs):
yield re.sub('.py$', '', name) yield re.sub('.py$', '', name)
def decorator_with_or_without_args(decorator):
"""Allows a decorator to be used with or without arguments, e.g.::
# Calls the decorator function some args
@decorator(with, arguments, and=kwargs)
or::
# Calls the decorator function with zero arguments
@decorator
"""
# See https://stackoverflow.com/questions/653368 for more on this
@functools.wraps(decorator)
def new_dec(*args, **kwargs):
if len(args) == 1 and len(kwargs) == 0 and callable(args[0]):
# actual decorated function
return decorator(args[0])
else:
# decorator arguments
return lambda realf: decorator(realf, *args, **kwargs)
return new_dec
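
A usage sketch for `decorator_with_or_without_args` (the `tagged` decorator is hypothetical): the same decorator works both bare and with keyword arguments:

```python
@decorator_with_or_without_args
def tagged(func, label='default'):
    func.label = label
    return func

@tagged
def a():
    pass

@tagged(label='special')
def b():
    pass

print(a.label, b.label)  # -> default special
```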
def key_ordering(cls): def key_ordering(cls):
"""Decorates a class with extra methods that implement rich comparison """Decorates a class with extra methods that implement rich comparison
operations and ``__hash__``. The decorator assumes that the class operations and ``__hash__``. The decorator assumes that the class
@@ -281,197 +268,7 @@ def setter(name, value):
return cls return cls
#: sentinel for testing that iterators are done in lazy_lexicographic_ordering @key_ordering
done = object()
def tuplify(seq):
"""Helper for lazy_lexicographic_ordering()."""
return tuple((tuplify(x) if callable(x) else x) for x in seq())
def lazy_eq(lseq, rseq):
"""Equality comparison for two lazily generated sequences.
See ``lazy_lexicographic_ordering``.
"""
liter = lseq() # call generators
riter = rseq()
# zip_longest is implemented in native code, so use it for speed.
# use zip_longest instead of zip because it allows us to tell
# which iterator was longer.
for left, right in zip_longest(liter, riter, fillvalue=done):
if (left is done) or (right is done):
return False
# recursively enumerate any generators, otherwise compare
equal = lazy_eq(left, right) if callable(left) else left == right
if not equal:
return False
return True
def lazy_lt(lseq, rseq):
"""Less-than comparison for two lazily generated sequences.
See ``lazy_lexicographic_ordering``.
"""
liter = lseq()
riter = rseq()
for left, right in zip_longest(liter, riter, fillvalue=done):
if (left is done) or (right is done):
return left is done # left was shorter than right
sequence = callable(left)
equal = lazy_eq(left, right) if sequence else left == right
if equal:
continue
if sequence:
return lazy_lt(left, right)
if left is None:
return True
if right is None:
return False
return left < right
return False # if equal, return False
@decorator_with_or_without_args
def lazy_lexicographic_ordering(cls, set_hash=True):
"""Decorates a class with extra methods that implement rich comparison.
This is a lazy version of the tuple comparison used frequently to
implement comparison in Python. Given some objects with fields, you
might use tuple keys to implement comparison, e.g.::
class Widget:
def _cmp_key(self):
return (
self.a,
self.b,
(self.c, self.d),
self.e
)
def __eq__(self, other):
return self._cmp_key() == other._cmp_key()
def __lt__(self):
return self._cmp_key() < other._cmp_key()
# etc.
Python would compare ``Widgets`` lexicographically based on their
tuples. The issue there for simple comparators is that we have to
build the tuples *and* we have to generate all the values in them up
front. When implementing comparisons for large data structures, this
can be costly.
Lazy lexicographic comparison maps the tuple comparison shown above
to generator functions. Instead of comparing based on pre-constructed
tuple keys, users of this decorator can compare using elements from a
generator. So, you'd write::
@lazy_lexicographic_ordering
class Widget:
def _cmp_iter(self):
yield a
yield b
def cd_fun():
yield c
yield d
yield cd_fun
yield e
# operators are added by decorator
There are no tuples preconstructed, and the generator does not have
to complete. Instead of tuples, we simply make functions that lazily
yield what would've been in the tuple. The
``@lazy_lexicographic_ordering`` decorator handles the details of
implementing comparison operators, and the ``Widget`` implementor
only has to worry about writing ``_cmp_iter``, and making sure the
elements in it are also comparable.
Some things to note:
* If a class already has ``__eq__``, ``__ne__``, ``__lt__``,
``__le__``, ``__gt__``, ``__ge__``, or ``__hash__`` defined, this
decorator will overwrite them.
* If ``set_hash`` is ``False``, this will not overwrite
``__hash__``.
* This class uses Python 2 None-comparison semantics. If you yield
None and it is compared to a non-None type, None will always be
less than the other object.
Raises:
TypeError: If the class does not have a ``_cmp_iter`` method
"""
if not has_method(cls, "_cmp_iter"):
raise TypeError("'%s' doesn't define _cmp_iter()." % cls.__name__)
# comparison operators are implemented in terms of lazy_eq and lazy_lt
def eq(self, other):
if self is other:
return True
return (other is not None) and lazy_eq(self._cmp_iter, other._cmp_iter)
def lt(self, other):
if self is other:
return False
return (other is not None) and lazy_lt(self._cmp_iter, other._cmp_iter)
def ne(self, other):
if self is other:
return False
return (other is None) or not lazy_eq(self._cmp_iter, other._cmp_iter)
def gt(self, other):
if self is other:
return False
return (other is None) or lazy_lt(other._cmp_iter, self._cmp_iter)
def le(self, other):
if self is other:
return True
return (other is not None) and not lazy_lt(other._cmp_iter,
self._cmp_iter)
def ge(self, other):
if self is other:
return True
return (other is None) or not lazy_lt(self._cmp_iter, other._cmp_iter)
def h(self):
return hash(tuplify(self._cmp_iter))
def add_func_to_class(name, func):
"""Add a function to a class with a particular name."""
func.__name__ = name
setattr(cls, name, func)
add_func_to_class("__eq__", eq)
add_func_to_class("__ne__", ne)
add_func_to_class("__lt__", lt)
add_func_to_class("__le__", le)
add_func_to_class("__gt__", gt)
add_func_to_class("__ge__", ge)
if set_hash:
add_func_to_class("__hash__", h)
return cls
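
A small demonstration of the decorator above (the `Version` class is hypothetical): comparisons consume `_cmp_iter` lazily, element by element:

```python
@lazy_lexicographic_ordering
class Version:
    def __init__(self, parts):
        self.parts = parts

    def _cmp_iter(self):
        for part in self.parts:
            yield part

assert Version([1, 2]) < Version([1, 3])
assert Version([1, 2]) == Version([1, 2])
assert hash(Version([1, 2])) == hash(Version([1, 2]))
```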
@lazy_lexicographic_ordering
class HashableMap(MutableMapping): class HashableMap(MutableMapping):
"""This is a hashable, comparable dictionary. Hash is performed on """This is a hashable, comparable dictionary. Hash is performed on
a tuple of the values in the dictionary.""" a tuple of the values in the dictionary."""
@@ -494,9 +291,8 @@ def __len__(self):
def __delitem__(self, key): def __delitem__(self, key):
del self.dict[key] del self.dict[key]
def _cmp_iter(self): def _cmp_key(self):
for _, v in sorted(self.items()): return tuple(sorted(self.values()))
yield v
def copy(self): def copy(self):
"""Type-agnostic clone method. Preserves subclass type.""" """Type-agnostic clone method. Preserves subclass type."""
@@ -596,8 +392,8 @@ def pretty_date(time, now=None):
"""Convert a datetime or timestamp to a pretty, relative date. """Convert a datetime or timestamp to a pretty, relative date.
Args: Args:
time (datetime.datetime or int): date to print prettily time (datetime or int): date to print prettily
now (datetime.datetime): datetime for 'now', i.e. the date the pretty date now (datetime): datetime for 'now', i.e. the date the pretty date
is relative to (default is datetime.now()) is relative to (default is datetime.now())
Returns: Returns:
@@ -671,7 +467,7 @@ def pretty_string_to_date(date_str, now=None):
or be a *pretty date* (like ``yesterday`` or ``two months ago``) or be a *pretty date* (like ``yesterday`` or ``two months ago``)
Returns: Returns:
(datetime.datetime): datetime object corresponding to ``date_str`` (datetime): datetime object corresponding to ``date_str``
""" """
pattern = {} pattern = {}
@@ -828,9 +624,6 @@ def __repr__(self):
def load_module_from_file(module_name, module_path): def load_module_from_file(module_name, module_path):
"""Loads a python module from the path of the corresponding file. """Loads a python module from the path of the corresponding file.
If the module is already in ``sys.modules`` it will be returned as
is and not reloaded.
Args: Args:
module_name (str): namespace where the python module will be loaded, module_name (str): namespace where the python module will be loaded,
e.g. ``foo.bar`` e.g. ``foo.bar``
@@ -843,28 +636,12 @@ def load_module_from_file(module_name, module_path):
ImportError: when the module can't be loaded ImportError: when the module can't be loaded
FileNotFoundError: when module_path doesn't exist FileNotFoundError: when module_path doesn't exist
""" """
if module_name in sys.modules:
return sys.modules[module_name]
# This recipe is adapted from https://stackoverflow.com/a/67692/771663
if sys.version_info[0] == 3 and sys.version_info[1] >= 5: if sys.version_info[0] == 3 and sys.version_info[1] >= 5:
import importlib.util import importlib.util
spec = importlib.util.spec_from_file_location( # novm spec = importlib.util.spec_from_file_location( # novm
module_name, module_path) module_name, module_path)
module = importlib.util.module_from_spec(spec) # novm module = importlib.util.module_from_spec(spec) # novm
# The module object needs to exist in sys.modules before the
# loader executes the module code.
#
# See https://docs.python.org/3/reference/import.html#loading
sys.modules[spec.name] = module
try:
spec.loader.exec_module(module) spec.loader.exec_module(module)
except BaseException:
try:
del sys.modules[spec.name]
except KeyError:
pass
raise
elif sys.version_info[0] == 3 and sys.version_info[1] < 5: elif sys.version_info[0] == 3 and sys.version_info[1] < 5:
import importlib.machinery import importlib.machinery
loader = importlib.machinery.SourceFileLoader( # novm loader = importlib.machinery.SourceFileLoader( # novm
@@ -915,19 +692,3 @@ class Devnull(object):
""" """
def write(self, *_): def write(self, *_):
pass pass
def elide_list(line_list, max_num=10):
"""Takes a long list and limits it to a smaller number of elements,
replacing intervening elements with '...'. For example::
elide_list([1,2,3,4,5,6], 4)
gives::
[1, 2, 3, '...', 6]
"""
if len(line_list) > max_num:
return line_list[:max_num - 1] + ['...'] + line_list[-1:]
else:
return line_list

View File

@@ -7,12 +7,12 @@
from __future__ import print_function from __future__ import print_function
import filecmp
import os import os
import shutil import shutil
import filecmp
from llnl.util.filesystem import traverse_tree, mkdirp, touch
import llnl.util.tty as tty import llnl.util.tty as tty
from llnl.util.filesystem import mkdirp, touch, traverse_tree
__all__ = ['LinkTree'] __all__ = ['LinkTree']

View File

@@ -3,31 +3,20 @@
# #
# SPDX-License-Identifier: (Apache-2.0 OR MIT) # SPDX-License-Identifier: (Apache-2.0 OR MIT)
import errno
import fcntl
import os import os
import socket import fcntl
import errno
import time import time
import socket
from datetime import datetime from datetime import datetime
from typing import Dict, Tuple # novm
import llnl.util.tty as tty import llnl.util.tty as tty
import spack.util.string import spack.util.string
__all__ = [
'Lock', __all__ = ['Lock', 'LockTransaction', 'WriteTransaction', 'ReadTransaction',
'LockDowngradeError', 'LockError', 'LockTimeoutError',
'LockUpgradeError', 'LockPermissionError', 'LockROFileError', 'CantCreateLockError']
'LockTransaction',
'WriteTransaction',
'ReadTransaction',
'LockError',
'LockTimeoutError',
'LockPermissionError',
'LockROFileError',
'CantCreateLockError'
]
#: Mapping of supported locks to description #: Mapping of supported locks to description
lock_type = {fcntl.LOCK_SH: 'read', fcntl.LOCK_EX: 'write'} lock_type = {fcntl.LOCK_SH: 'read', fcntl.LOCK_EX: 'write'}
@@ -37,126 +26,6 @@
true_fn = lambda: True true_fn = lambda: True
class OpenFile(object):
"""Record for keeping track of open lockfiles (with reference counting).
There's really only one ``OpenFile`` per inode, per process, but we record the
filehandle here as it's the thing we end up using in python code. You can get
the file descriptor from the file handle if needed -- or we could make this track
file descriptors as well in the future.
"""
def __init__(self, fh):
self.fh = fh
self.refs = 0
class OpenFileTracker(object):
"""Track open lockfiles, to minimize number of open file descriptors.
The ``fcntl`` locks that Spack uses are associated with an inode and a process.
This is convenient, because if a process exits, it releases its locks.
Unfortunately, this also means that if you close a file, *all* locks associated
with that file's inode are released, regardless of whether the process has any
other open file descriptors on it.
Because of this, we need to track open lock files so that we only close them when
a process no longer needs them. We do this by tracking each lockfile by its
inode and process id. This has several nice properties:
1. Tracking by pid ensures that, if we fork, we don't inadvertently track the parent
process's lockfiles. ``fcntl`` locks are not inherited across forks, so we'll
just track new lockfiles in the child.
2. Tracking by inode ensures that references are counted per inode, and that we don't
inadvertently close a file whose inode still has open locks.
3. Tracking by both pid and inode ensures that we only open lockfiles the minimum
number of times necessary for the locks we have.
Note: as mentioned elsewhere, these locks aren't thread safe -- they're designed to
work in Python and assume the GIL.
"""
def __init__(self):
"""Create a new ``OpenFileTracker``."""
self._descriptors = {} # type: Dict[Tuple[int, int], OpenFile]
def get_fh(self, path):
"""Get a filehandle for a lockfile.
This routine will open writable files for read/write even if you're asking
for a shared (read-only) lock. This is so that we can upgrade to an exclusive
(write) lock later if requested.
Arguments:
path (str): path to lock file we want a filehandle for
"""
# Open writable files as 'r+' so we can upgrade to write later
os_mode, fh_mode = (os.O_RDWR | os.O_CREAT), 'r+'
pid = os.getpid()
open_file = None # OpenFile object, if there is one
stat = None # stat result for the lockfile, if it exists
try:
# see whether we've seen this inode/pid before
stat = os.stat(path)
key = (stat.st_ino, pid)
open_file = self._descriptors.get(key)
except OSError as e:
if e.errno != errno.ENOENT: # only handle file not found
raise
# path does not exist -- fail if we won't be able to create it
parent = os.path.dirname(path) or '.'
if not os.access(parent, os.W_OK):
raise CantCreateLockError(path)
# if there was no already open file, we'll need to open one
if not open_file:
if stat and not os.access(path, os.W_OK):
# we know path exists but not if it's writable. If it's read-only,
# only open the file for reading (and fail if we're trying to get
# an exclusive (write) lock on it)
os_mode, fh_mode = os.O_RDONLY, 'r'
fd = os.open(path, os_mode)
fh = os.fdopen(fd, fh_mode)
open_file = OpenFile(fh)
# if we just created the file, we'll need to get its inode here
if not stat:
inode = os.fstat(fd).st_ino
key = (inode, pid)
self._descriptors[key] = open_file
open_file.refs += 1
return open_file.fh
def release_fh(self, path):
"""Release a filehandle, only closing it if there are no more references."""
try:
inode = os.stat(path).st_ino
except OSError as e:
if e.errno != errno.ENOENT: # only handle file not found
raise
inode = None # this will not be in self._descriptors
key = (inode, os.getpid())
open_file = self._descriptors.get(key)
assert open_file, "Attempted to close non-existing lock path: %s" % path
open_file.refs -= 1
if not open_file.refs:
del self._descriptors[key]
open_file.fh.close()
#: Open file descriptors for locks in this process. Used to prevent one process
#: from opening the same file many times for different byte range locks
file_tracker = OpenFileTracker()
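
A usage sketch of the tracker above (the lockfile path is hypothetical): repeated requests for the same inode in one process share a single handle, which is only closed when the last reference is released:

```python
fh1 = file_tracker.get_fh('/tmp/demo.lock')
fh2 = file_tracker.get_fh('/tmp/demo.lock')
assert fh1 is fh2                           # same inode + pid -> shared handle

file_tracker.release_fh('/tmp/demo.lock')   # refcount 2 -> 1: still open
file_tracker.release_fh('/tmp/demo.lock')   # refcount 1 -> 0: handle closed
```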
def _attempts_str(wait_time, nattempts): def _attempts_str(wait_time, nattempts):
# Don't print anything if we succeeded on the first try # Don't print anything if we succeeded on the first try
if nattempts <= 1: if nattempts <= 1:
@@ -177,8 +46,7 @@ class Lock(object):
Note that this is for managing contention over resources *between* Note that this is for managing contention over resources *between*
processes and not for managing contention between threads in a process: the processes and not for managing contention between threads in a process: the
functions of this object are not thread-safe. A process also must not functions of this object are not thread-safe. A process also must not
maintain multiple locks on the same file (or, more specifically, on maintain multiple locks on the same file.
overlapping byte ranges in the same file).
""" """
def __init__(self, path, start=0, length=0, default_timeout=None, def __init__(self, path, start=0, length=0, default_timeout=None,
@@ -283,10 +151,25 @@ def _lock(self, op, timeout=None):
# Create file and parent directories if they don't exist. # Create file and parent directories if they don't exist.
if self._file is None: if self._file is None:
self._ensure_parent_directory() parent = self._ensure_parent_directory()
self._file = file_tracker.get_fh(self.path)
if op == fcntl.LOCK_EX and self._file.mode == 'r': # Open writable files as 'r+' so we can upgrade to write later
os_mode, fd_mode = (os.O_RDWR | os.O_CREAT), 'r+'
if os.path.exists(self.path):
if not os.access(self.path, os.W_OK):
if op == fcntl.LOCK_SH:
# can still lock read-only files if we open 'r'
os_mode, fd_mode = os.O_RDONLY, 'r'
else:
raise LockROFileError(self.path)
elif not os.access(parent, os.W_OK):
raise CantCreateLockError(self.path)
fd = os.open(self.path, os_mode)
self._file = os.fdopen(fd, fd_mode)
elif op == fcntl.LOCK_EX and self._file.mode == 'r':
# Attempt to upgrade to write lock w/a read-only file. # Attempt to upgrade to write lock w/a read-only file.
# If the file were writable, we'd have opened it 'r+' # If the file were writable, we'd have opened it 'r+'
raise LockROFileError(self.path) raise LockROFileError(self.path)
@@ -381,7 +264,7 @@ def _write_log_debug_data(self):
self.old_host = self.host self.old_host = self.host
self.pid = os.getpid() self.pid = os.getpid()
self.host = socket.gethostname() self.host = socket.getfqdn()
# write pid, host to disk to sync over FS # write pid, host to disk to sync over FS
self._file.seek(0) self._file.seek(0)
@@ -399,8 +282,7 @@ def _unlock(self):
""" """
fcntl.lockf(self._file, fcntl.LOCK_UN, fcntl.lockf(self._file, fcntl.LOCK_UN,
self._length, self._start, os.SEEK_SET) self._length, self._start, os.SEEK_SET)
self._file.close()
file_tracker.release_fh(self.path)
self._file = None self._file = None
self._reads = 0 self._reads = 0
self._writes = 0 self._writes = 0
@@ -519,7 +401,7 @@ def release_read(self, release_fn=None):
"""Releases a read lock. """Releases a read lock.
Arguments: Arguments:
release_fn (typing.Callable): function to call *before* the last recursive release_fn (callable): function to call *before* the last recursive
lock (read or write) is released. lock (read or write) is released.
If the last recursive lock will be released, then this will call If the last recursive lock will be released, then this will call
@@ -555,7 +437,7 @@ def release_write(self, release_fn=None):
"""Releases a write lock. """Releases a write lock.
Arguments: Arguments:
release_fn (typing.Callable): function to call before the last recursive release_fn (callable): function to call before the last recursive
write is released. write is released.
If the last recursive *write* lock will be released, then this If the last recursive *write* lock will be released, then this
@@ -651,10 +533,10 @@ class LockTransaction(object):
Arguments: Arguments:
lock (Lock): underlying lock for this transaction to be acquired on lock (Lock): underlying lock for this transaction to be acquired on
enter and released on exit enter and released on exit
acquire (typing.Callable or contextlib.contextmanager): function to be called acquire (callable or contextmanager): function to be called after lock
after lock is acquired, or contextmanager to enter after acquire and leave is acquired, or contextmanager to enter after acquire and leave
before release. before release.
release (typing.Callable): function to be called before release. If release (callable): function to be called before release. If
``acquire`` is a contextmanager, this will be called *after* ``acquire`` is a contextmanager, this will be called *after*
exiting the nested context and before the lock is released. exiting the nested context and before the lock is released.
timeout (float): number of seconds to set for the timeout when timeout (float): number of seconds to set for the timeout when

View File

@@ -5,7 +5,6 @@
from __future__ import unicode_literals from __future__ import unicode_literals
import contextlib
import fcntl import fcntl
import os import os
import struct import struct
@@ -13,13 +12,12 @@
import termios import termios
import textwrap import textwrap
import traceback import traceback
from datetime import datetime
import six import six
from datetime import datetime
from six import StringIO from six import StringIO
from six.moves import input from six.moves import input
from llnl.util.tty.color import cescape, clen, cprint, cwrite from llnl.util.tty.color import cprint, cwrite, cescape, clen
# Globals # Globals
_debug = 0 _debug = 0
@@ -29,7 +27,6 @@
_msg_enabled = True _msg_enabled = True
_warn_enabled = True _warn_enabled = True
_error_enabled = True _error_enabled = True
_output_filter = lambda s: s
indent = " " indent = " "
@@ -92,18 +89,6 @@ def error_enabled():
return _error_enabled return _error_enabled
@contextlib.contextmanager
def output_filter(filter_fn):
"""Context manager that applies a filter to all output."""
global _output_filter
saved_filter = _output_filter
try:
_output_filter = filter_fn
yield
finally:
_output_filter = saved_filter
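
A usage sketch of the `output_filter` context manager added above (the replacement is illustrative): the filter applies to `tty` output inside the block and is restored on exit:

```python
import llnl.util.tty as tty

with output_filter(lambda s: s.replace('/home/user', '~')):
    tty.msg('installing to /home/user/spack')   # prints '==> installing to ~/spack'
tty.msg('unfiltered again')
```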
class SuppressOutput: class SuppressOutput:
"""Class for disabling output in a scope using 'with' keyword""" """Class for disabling output in a scope using 'with' keyword"""
@@ -180,23 +165,13 @@ def msg(message, *args, **kwargs):
if _stacktrace: if _stacktrace:
st_text = process_stacktrace(2) st_text = process_stacktrace(2)
if newline: if newline:
cprint( cprint("@*b{%s==>} %s%s" % (
"@*b{%s==>} %s%s" % ( st_text, get_timestamp(), cescape(message)))
st_text,
get_timestamp(),
cescape(_output_filter(message))
)
)
else: else:
cwrite( cwrite("@*b{%s==>} %s%s" % (
"@*b{%s==>} %s%s" % ( st_text, get_timestamp(), cescape(message)))
st_text,
get_timestamp(),
cescape(_output_filter(message))
)
)
for arg in args: for arg in args:
print(indent + _output_filter(six.text_type(arg))) print(indent + six.text_type(arg))
def info(message, *args, **kwargs): def info(message, *args, **kwargs):
@@ -212,29 +187,18 @@ def info(message, *args, **kwargs):
st_text = "" st_text = ""
if _stacktrace: if _stacktrace:
st_text = process_stacktrace(st_countback) st_text = process_stacktrace(st_countback)
cprint( cprint("@%s{%s==>} %s%s" % (
"@%s{%s==>} %s%s" % ( format, st_text, get_timestamp(), cescape(six.text_type(message))
format, ), stream=stream)
st_text,
get_timestamp(),
cescape(_output_filter(six.text_type(message)))
),
stream=stream
)
for arg in args: for arg in args:
if wrap: if wrap:
lines = textwrap.wrap( lines = textwrap.wrap(
_output_filter(six.text_type(arg)), six.text_type(arg), initial_indent=indent,
initial_indent=indent, subsequent_indent=indent, break_long_words=break_long_words)
subsequent_indent=indent,
break_long_words=break_long_words
)
for line in lines: for line in lines:
stream.write(line + '\n') stream.write(line + '\n')
else: else:
stream.write( stream.write(indent + six.text_type(arg) + '\n')
indent + _output_filter(six.text_type(arg)) + '\n'
)
def verbose(message, *args, **kwargs): def verbose(message, *args, **kwargs):

View File

@@ -10,11 +10,10 @@
import os import os
import sys import sys
from six import StringIO, text_type from six import StringIO, text_type
from llnl.util.tty import terminal_size from llnl.util.tty import terminal_size
from llnl.util.tty.color import cextra, clen from llnl.util.tty.color import clen, cextra
class ColumnConfig: class ColumnConfig:
@@ -109,17 +108,19 @@ def colify(elts, **options):
using ``str()``. using ``str()``.
Keyword Arguments: Keyword Arguments:
output (typing.IO): A file object to write to. Default is ``sys.stdout`` output (stream): A file object to write to. Default is ``sys.stdout``
indent (int): Optionally indent all columns by some number of spaces indent (int): Optionally indent all columns by some number of spaces
padding (int): Spaces between columns. Default is 2 padding (int): Spaces between columns. Default is 2
width (int): Width of the output. Default is 80 if tty not detected width (int): Width of the output. Default is 80 if tty not detected
cols (int): Force number of columns. Default is to size to terminal, or cols (int): Force number of columns. Default is to size to
single-column if no tty terminal, or single-column if no tty
tty (bool): Whether to attempt to write to a tty. Default is to autodetect a tty (bool): Whether to attempt to write to a tty. Default is to
tty. Set to False to force single-column output autodetect a tty. Set to False to force single-column
method (str): Method to use to fit columns. Options are variable or uniform. output
Variable-width columns are tighter, uniform columns are all the same width method (str): Method to use to fit columns. Options are variable or
and fit less data on the screen uniform. Variable-width columns are tighter, uniform
columns are all the same width and fit less data on
the screen
""" """
# Get keyword arguments or set defaults # Get keyword arguments or set defaults
cols = options.pop("cols", 0) cols = options.pop("cols", 0)
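
A brief usage sketch of `colify` with a few of the keyword arguments documented above:

```python
import sys

# Lay out twelve strings in four columns, indented by two spaces.
colify([str(n) for n in range(12)], indent=2, cols=4, output=sys.stdout)
```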

View File

@@ -60,9 +60,9 @@
To output an @, use '@@'. To output a } inside braces, use '}}'. To output an @, use '@@'. To output a } inside braces, use '}}'.
""" """
from __future__ import unicode_literals from __future__ import unicode_literals
import re import re
import sys import sys
from contextlib import contextmanager from contextlib import contextmanager
import six import six

View File

@@ -13,14 +13,15 @@
import os import os
import re import re
import select import select
import signal
import sys import sys
import traceback import traceback
import signal
from contextlib import contextmanager from contextlib import contextmanager
from types import ModuleType # novm from six import string_types
from typing import Optional # novm from six import StringIO
from six import StringIO, string_types from typing import Optional # novm
from types import ModuleType # novm
import llnl.util.tty as tty import llnl.util.tty as tty
@@ -33,7 +34,7 @@
# Use this to strip escape sequences # Use this to strip escape sequences
_escape = re.compile(r'\x1b[^m]*m|\x1b\[?1034h|\x1b\][0-9]+;[^\x07]*\x07') _escape = re.compile(r'\x1b[^m]*m|\x1b\[?1034h')
# control characters for enabling/disabling echo # control characters for enabling/disabling echo
# #
@@ -320,10 +321,7 @@ def __init__(self, file_like):
def unwrap(self): def unwrap(self):
if self.open: if self.open:
if self.file_like: if self.file_like:
if sys.version_info < (3,):
self.file = open(self.file_like, 'w') self.file = open(self.file_like, 'w')
else:
self.file = open(self.file_like, 'w', encoding='utf-8') # novm
else: else:
self.file = StringIO() self.file = StringIO()
return self.file return self.file
@@ -436,7 +434,7 @@ class log_output(object):
""" """
def __init__(self, file_like=None, echo=False, debug=0, buffer=False, def __init__(self, file_like=None, echo=False, debug=0, buffer=False,
env=None, filter_fn=None): env=None):
"""Create a new output log context manager. """Create a new output log context manager.
Args: Args:
@@ -446,8 +444,6 @@ def __init__(self, file_like=None, echo=False, debug=0, buffer=False,
debug (int): positive to enable tty debug mode during logging debug (int): positive to enable tty debug mode during logging
buffer (bool): pass buffer=True to skip unbuffering output; note buffer (bool): pass buffer=True to skip unbuffering output; note
this doesn't set up any *new* buffering this doesn't set up any *new* buffering
filter_fn (callable, optional): Callable[str] -> str to filter each
line of output
log_output can take either a file object or a filename. If a log_output can take either a file object or a filename. If a
filename is passed, the file will be opened and closed entirely filename is passed, the file will be opened and closed entirely
@@ -467,7 +463,6 @@ def __init__(self, file_like=None, echo=False, debug=0, buffer=False,
         self.debug = debug
         self.buffer = buffer
         self.env = env  # the environment to use for _writer_daemon
-        self.filter_fn = filter_fn
 
         self._active = False  # used to prevent re-entry
@@ -533,22 +528,20 @@ def __enter__(self):
         # Sets a daemon that writes to file what it reads from a pipe
         try:
             # need to pass this b/c multiprocessing closes stdin in child.
-            input_multiprocess_fd = None
             try:
-                if sys.stdin.isatty():
-                    input_multiprocess_fd = MultiProcessFd(
-                        os.dup(sys.stdin.fileno())
-                    )
+                input_multiprocess_fd = MultiProcessFd(
+                    os.dup(sys.stdin.fileno())
+                )
             except BaseException:
                 # just don't forward input if this fails
-                pass
+                input_multiprocess_fd = None
 
             with replace_environment(self.env):
                 self.process = multiprocessing.Process(
                     target=_writer_daemon,
                     args=(
                         input_multiprocess_fd, read_multiprocess_fd, write_fd,
-                        self.echo, self.log_file, child_pipe, self.filter_fn
+                        self.echo, self.log_file, child_pipe
                     )
                 )
                 self.process.daemon = True  # must set before start()
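
Outside the diff, the wiring above reduces to a familiar multiprocessing pattern: duplicate the descriptors the child should keep, mark the process as a daemon before starting it, and hand it the pipe ends. A toy sketch, assuming a fork start method (Spack's MultiProcessFd wrapper exists precisely because raw descriptors do not survive other start methods):

    import multiprocessing
    import os
    import sys

    def echo_daemon(read_fd, write_fd):
        # stand-in for _writer_daemon: copy everything from the pipe to stdout
        os.close(write_fd)  # close inherited write end so EOF can arrive
        with os.fdopen(read_fd, 'r') as in_pipe:
            for line in in_pipe:
                sys.stdout.write(line)

    read_fd, write_fd = os.pipe()
    proc = multiprocessing.Process(target=echo_daemon, args=(read_fd, write_fd))
    proc.daemon = True  # must be set before start(), as the hunk above notes
    proc.start()
    os.close(read_fd)   # parent keeps only the write end

    with os.fdopen(write_fd, 'w') as out:
        out.write('hello from the parent\n')
    proc.join()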
@@ -672,7 +665,7 @@ def force_echo(self):
 def _writer_daemon(stdin_multiprocess_fd, read_multiprocess_fd, write_fd, echo,
-                   log_file_wrapper, control_pipe, filter_fn):
+                   log_file_wrapper, control_pipe):
     """Daemon used by ``log_output`` to write to a log file and to ``stdout``.
 
     The daemon receives output from the parent process and writes it both
@@ -717,7 +710,6 @@ def _writer_daemon(stdin_multiprocess_fd, read_multiprocess_fd, write_fd, echo,
         log_file_wrapper (FileWrapper): file to log all output
         control_pipe (Pipe): multiprocessing pipe on which to send control
             information to the parent
-        filter_fn (callable, optional): function to filter each line of output
     """
 
     # If this process was forked, then it will inherit file descriptors from
@@ -730,11 +722,7 @@ def _writer_daemon(stdin_multiprocess_fd, read_multiprocess_fd, write_fd, echo,
     # Use line buffering (3rd param = 1) since Python 3 has a bug
     # that prevents unbuffered text I/O.
-    if sys.version_info < (3,):
-        in_pipe = os.fdopen(read_multiprocess_fd.fd, 'r', 1)
-    else:
-        # Python 3.x before 3.7 does not open with UTF-8 encoding by default
-        in_pipe = os.fdopen(read_multiprocess_fd.fd, 'r', 1, encoding='utf-8')
+    in_pipe = os.fdopen(read_multiprocess_fd.fd, 'r', 1)
 
     if stdin_multiprocess_fd:
         stdin = os.fdopen(stdin_multiprocess_fd.fd)
@@ -776,48 +764,29 @@ def _writer_daemon(stdin_multiprocess_fd, read_multiprocess_fd, write_fd, echo,
                     raise
 
             if in_pipe in rlist:
-                line_count = 0
-                try:
-                    while line_count < 100:
-                        # Handle output from the calling process.
-                        try:
-                            line = _retry(in_pipe.readline)()
-                        except UnicodeDecodeError:
-                            # installs like --test=root gpgme produce non-UTF8 logs
-                            line = '<line lost: output was not encoded as UTF-8>\n'
-
-                        if not line:
-                            return
-                        line_count += 1
-
-                        # find control characters and strip them.
-                        clean_line, num_controls = control.subn('', line)
-
-                        # Echo to stdout if requested or forced.
-                        if echo or force_echo:
-                            output_line = clean_line
-                            if filter_fn:
-                                output_line = filter_fn(clean_line)
-                            sys.stdout.write(output_line)
-
-                        # Stripped output to log file.
-                        log_file.write(_strip(clean_line))
-
-                        if num_controls > 0:
-                            controls = control.findall(line)
-                            if xon in controls:
-                                force_echo = True
-                            if xoff in controls:
-                                force_echo = False
-
-                        if not _input_available(in_pipe):
-                            break
-                finally:
-                    if line_count > 0:
-                        if echo or force_echo:
-                            sys.stdout.flush()
-                        log_file.flush()
+                # Handle output from the calling process.
+                line = _retry(in_pipe.readline)()
+                if not line:
+                    break
+
+                # find control characters and strip them.
+                controls = control.findall(line)
+                line = control.sub('', line)
+
+                # Echo to stdout if requested or forced.
+                if echo or force_echo:
+                    sys.stdout.write(line)
+                    sys.stdout.flush()
+
+                # Stripped output to log file.
+                log_file.write(_strip(line))
+                log_file.flush()
+
+                if xon in controls:
+                    force_echo = True
+                if xoff in controls:
+                    force_echo = False
 
         except BaseException:
             tty.error("Exception occurred in writer daemon!")
             traceback.print_exc()
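
Both sides of this hunk implement the same xon/xoff echo protocol with slightly different regex idioms: subn with a count on the base side, findall plus sub on the head side. A standalone sketch — the marker values here are assumptions, the real ones are defined near the top of log.py:

    import re

    xon, xoff = '\x11', '\x13'  # assumed marker values
    control = re.compile('(%s|%s)' % (xon, xoff))

    line = xon + 'configure: creating Makefile\n'

    clean_line, num_controls = control.subn('', line)  # base-side idiom
    controls = control.findall(line)                   # head-side idiom

    force_echo = False
    if xon in controls:
        force_echo = True   # echo lines until an xoff marker arrives
    if xoff in controls:
        force_echo = False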
@@ -868,7 +837,3 @@ def wrapped(*args, **kwargs):
                 continue
             raise
     return wrapped
-
-
-def _input_available(f):
-    return f in select.select([f], [], [], 0)[0]
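
Taken together, the log.py changes in this compare drop the daemon's line batching, UTF-8 handling, and filter_fn hook. For context, a sketch of how the context manager is driven — note that the filter_fn keyword exists only on the base side of this diff:

    from llnl.util.tty.log import log_output

    def redact(line):
        return line.replace('/home/user', '~')  # illustrative filter

    # base side: each echoed line passes through filter_fn
    with log_output('spack-build.log', echo=True, filter_fn=redact):
        print('writing /home/user/stage ...')  # echoed as 'writing ~/stage ...'

    # head side: the same call, without the filter_fn keyword
    with log_output('spack-build.log', echo=True):
        print('this goes to the log and, with echo=True, to stdout')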

View File

@@ -14,10 +14,10 @@
 """
 from __future__ import print_function
 
-import multiprocessing
 import os
-import re
 import signal
+import multiprocessing
+import re
 import sys
 import termios
 import time

View File

@@ -3,8 +3,9 @@
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
 
 #: major, minor, patch version for Spack, in a tuple
-spack_version_info = (0, 17, 3)
+spack_version_info = (0, 16, 0)
 
 #: String containing Spack version joined with .'s
 spack_version = '.'.join(str(v) for v in spack_version_info)

View File

@@ -8,9 +8,9 @@
 from llnl.util.lang import memoized
 
 import spack.spec
-from spack.compilers.clang import Clang
 from spack.spec import CompilerSpec
 from spack.util.executable import Executable, ProcessError
+from spack.compilers.clang import Clang
 
 
 class ABI(object):

View File

@@ -1,42 +0,0 @@
-# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
-# Spack Project Developers. See the top-level COPYRIGHT file for details.
-#
-# SPDX-License-Identifier: (Apache-2.0 OR MIT)
-
-"""This package contains code for creating analyzers to extract Application
-Binary Interface (ABI) information, along with simple analyses that just load
-existing metadata.
-"""
-from __future__ import absolute_import
-
-import llnl.util.tty as tty
-
-import spack.paths
-import spack.util.classes
-
-mod_path = spack.paths.analyzers_path
-analyzers = spack.util.classes.list_classes("spack.analyzers", mod_path)
-
-
-# The base analyzer does not have a name, and cannot do dict comprehension
-analyzer_types = {}
-for a in analyzers:
-    if not hasattr(a, "name"):
-        continue
-    analyzer_types[a.name] = a
-
-
-def list_all():
-    """A helper function to list all analyzers and their descriptions
-    """
-    for name, analyzer in analyzer_types.items():
-        print("%-25s: %-35s" % (name, analyzer.description))
-
-
-def get_analyzer(name):
-    """Courtesy function to retrieve an analyzer, and exit on error if it
-    does not exist.
-    """
-    if name in analyzer_types:
-        return analyzer_types[name]
-    tty.die("Analyzer %s does not exist" % name)

View File

@@ -1,116 +0,0 @@
-# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
-# Spack Project Developers. See the top-level COPYRIGHT file for details.
-#
-# SPDX-License-Identifier: (Apache-2.0 OR MIT)
-
-"""An analyzer base provides basic functions to run the analysis, save results,
-and (optionally) interact with a Spack Monitor
-"""
-
-import os
-
-import llnl.util.tty as tty
-
-import spack.config
-import spack.hooks
-import spack.monitor
-import spack.util.path
-
-
-def get_analyzer_dir(spec, analyzer_dir=None):
-    """
-    Given a spec, return the directory to save analyzer results.
-
-    We create the directory if it does not exist. We also check that the
-    spec has an associated package. An analyzer cannot be run if the spec isn't
-    associated with a package. If the user provides a custom analyzer_dir,
-    we use it over checking the config and the default at ~/.spack/analyzers
-    """
-    # An analyzer cannot be run if the spec isn't associated with a package
-    if not hasattr(spec, "package") or not spec.package:
-        tty.die("A spec can only be analyzed with an associated package.")
-
-    # The top level directory is in the user home, or a custom location
-    if not analyzer_dir:
-        analyzer_dir = spack.util.path.canonicalize_path(
-            spack.config.get('config:analyzers_dir', '~/.spack/analyzers'))
-
-    # We follow the same convention as the spec install (this could be better)
-    package_prefix = os.sep.join(spec.package.prefix.split('/')[-3:])
-    meta_dir = os.path.join(analyzer_dir, package_prefix)
-    return meta_dir
-
-
-class AnalyzerBase(object):
-
-    def __init__(self, spec, dirname=None):
-        """
-        Verify that the analyzer has correct metadata.
-
-        An Analyzer is intended to run on one spec install, so the spec
-        with its associated package is required on init. The child analyzer
-        class should define an init function that super's the init here, and
-        also check that the analyzer has all dependencies that it
-        needs. If an analyzer subclass does not have dependencies, it does not
-        need to define an init. An Analyzer should not be allowed to proceed
-        if one or more dependencies are missing. The dirname, if defined,
-        is an optional directory name to save to (instead of the default meta
-        spack directory).
-        """
-        self.spec = spec
-        self.dirname = dirname
-        self.meta_dir = os.path.dirname(spec.package.install_log_path)
-
-        for required in ["name", "outfile", "description"]:
-            if not hasattr(self, required):
-                tty.die("Please add a %s attribute on the analyzer." % required)
-
-    def run(self):
-        """
-        Given a spec with an installed package, run the analyzer on it.
-        """
-        raise NotImplementedError
-
-    @property
-    def output_dir(self):
-        """
-        The full path to the output directory.
-
-        This includes the nested analyzer directory structure. This function
-        does not create anything.
-        """
-        if not hasattr(self, "_output_dir"):
-            output_dir = get_analyzer_dir(self.spec, self.dirname)
-            self._output_dir = os.path.join(output_dir, self.name)
-
-        return self._output_dir
-
-    def save_result(self, result, overwrite=False):
-        """
-        Save a result to the associated spack monitor, if defined.
-
-        This function is on the level of the analyzer because it might be
-        the case that the result is large (appropriate for a single request)
-        or that the data is organized differently (e.g., more than one
-        request per result). If an analyzer subclass needs to over-write
-        this function with a custom save, that is appropriate to do (see abi).
-        """
-        # We maintain the structure in json with the analyzer as key so
-        # that in the future, we could upload to a monitor server
-        if result[self.name]:
-
-            outfile = os.path.join(self.output_dir, self.outfile)
-
-            # Only try to create the results directory if we have a result
-            if not os.path.exists(self._output_dir):
-                os.makedirs(self._output_dir)
-
-            # Don't overwrite an existing result if overwrite is False
-            if os.path.exists(outfile) and not overwrite:
-                tty.info("%s exists and overwrite is False, skipping." % outfile)
-            else:
-                tty.info("Writing result to %s" % outfile)
-                spack.monitor.write_json(result[self.name], outfile)
-
-        # This hook runs after a save result
-        spack.hooks.on_analyzer_save(self.spec.package, result)
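
The contract this base class enforces is small: a subclass must carry name, outfile, and description attributes and implement run() returning a dict keyed by name. A minimal hypothetical subclass following that contract (not part of this diff):

    import os

    from spack.analyzers.analyzer_base import AnalyzerBase


    class InstallFiles(AnalyzerBase):
        """Hypothetical analyzer, for illustration only."""

        name = "install_files"
        outfile = "spack-analyzer-install-files.json"
        description = "list of files in the spec's install prefix"

        def run(self):
            files = []
            for root, _, names in os.walk(self.spec.package.prefix):
                files.extend(os.path.join(root, n) for n in names)
            # results are keyed by the analyzer name, as save_result() expects
            return {self.name: files}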

View File

@@ -1,33 +0,0 @@
-# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
-# Spack Project Developers. See the top-level COPYRIGHT file for details.
-#
-# SPDX-License-Identifier: (Apache-2.0 OR MIT)
-
-"""A configargs analyzer is a class of analyzer that typically just uploads
-already existing metadata about config args from a package spec install
-directory."""
-
-import os
-
-import spack.monitor
-
-from .analyzer_base import AnalyzerBase
-
-
-class ConfigArgs(AnalyzerBase):
-
-    name = "config_args"
-    outfile = "spack-analyzer-config-args.json"
-    description = "config args loaded from spack-configure-args.txt"
-
-    def run(self):
-        """
-        Load the configure-args.txt and save in json.
-
-        The run function will find the spack-config-args.txt file in the
-        package install directory, and read it into a json structure that has
-        the name of the analyzer as the key.
-        """
-        config_file = os.path.join(self.meta_dir, "spack-configure-args.txt")
-        return {self.name: spack.monitor.read_file(config_file)}
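
End to end, the analyzer pieces above compose like this — a hedged sketch, since it assumes an installed spec object obtained elsewhere (e.g. via spack.spec.Spec(...).concretized()):

    from spack.analyzers.config_args import ConfigArgs

    analyzer = ConfigArgs(spec)   # spec: an installed Spec with a package
    result = analyzer.run()       # {"config_args": <parsed file contents>}
    analyzer.save_result(result)  # writes spack-analyzer-config-args.json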

Some files were not shown because too many files have changed in this diff Show More