Compare commits


21 Commits

Author SHA1 Message Date
Todd Gamblin
42962f2409 Merge branch 'releases/v0.12' 2018-11-13 11:05:17 -06:00
Gregory Becker
8554e933d2 Make Spack relocate text files in build caches with relative binaries 2018-11-13 11:04:47 -06:00
Ubuntu
03a53dca5f Revert "Binary caching: remove symlinks, copy files instead (#9747)"
This reverts commit 058cf81312.
2018-11-13 11:04:47 -06:00
Todd Gamblin
041aa143db Merge branch 'releases/v0.11.2' 2018-02-07 12:46:55 -05:00
becker33
e905f8cf83 Add NameError to exceptions caught from configure_args in module generation (#7173) 2018-02-02 13:35:51 -08:00
Adam J. Stewart
41e6eb130c Fix gfortran 7 detection (#7017) 2018-01-28 16:31:34 -08:00
Massimiliano Culpo
6fcbc26f88 travis: removed /usr/local/include/c++ before installing gcc on OSX (#6515) (#7027)
"brew install gcc" fails for travis build because of an existing
/usr/local/include/c++. This commit removes the offending file
as suggested by brew.
2018-01-28 10:48:10 -08:00
Todd Gamblin
880e319cf6 Pull R list_urls from upstream. 2018-01-19 13:28:26 -08:00
Massimiliano Culpo
1cc9241030 Added flags to unit tests + OSX build done once per day (#6988)
* Adding flags to codecov reports

* OSX builds are triggered once a day
2018-01-19 11:59:43 -08:00
Todd Gamblin
9835f5077b Bump version to 0.11.1 2018-01-19 09:39:39 -08:00
becker33
4fb3b30d3e Fix type issues with setting flag handlers (#6960)
The flag_handlers method was being set as a bound method, but when
reset in the package.py file it was being set as an unbound method
(all python2 issues). This gets the underlying function information,
which is the same in either case.

The bug was uncovered for parmetis in #6858. This is a partial fix.
Included are changes to the parmetis package.py file to make use of
flag_handlers.
2018-01-19 09:39:38 -08:00
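
For illustration, a minimal sketch of the Python 2 pitfall this commit describes (hypothetical class and names, not the actual Spack code): accessing a method through an instance yields a bound method and through the class an unbound method, but both expose the same underlying function via `__func__`:

```python
# Hypothetical sketch of the bound/unbound method issue described above.
class Package(object):
    def flag_handler(self, name, flags):
        return flags

pkg = Package()
bound = pkg.flag_handler          # bound method (instance access)
unbound = Package.flag_handler    # unbound method in Python 2

# The underlying function is identical either way; falling back to the
# object itself keeps this working on Python 3, where class access
# already returns a plain function.
func = getattr(bound, '__func__', bound)
assert func is getattr(unbound, '__func__', unbound)
```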
becker33
e0826804c2 Compiler flag handlers (#6415)
This adds the ability for packages to apply compiler flags in one of
three ways: by injecting them into the compiler wrapper calls (the
default in this PR and previously the only automated choice);
exporting environment variable definitions for variables with
corresponding names (e.g. CPPFLAGS=...); providing them as arguments
to the build system (e.g. configure).

When applying compiler flags using build system arguments, a package
must implement the 'flags_to_build_system_args' function. This is
provided for CMake and autotools packages, so packages which
subclass those build systems need only update their flag
handler method to specify which compiler flags should be passed as
arguments to the build system.

Convenience methods are provided to specify that all flags be applied
in one of the 3 available ways, so a custom implementation is only
required if more than one method of applying compiler flags is
needed.

This also removes redundant build system definitions from tutorial
examples
2018-01-19 09:32:24 -08:00
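
As a rough illustration of the mechanism this PR describes (the package and flag choices are hypothetical, and the exact handler signature is an assumption based on the PR text above), a package might route some flags to the build system and leave the rest to the default wrapper injection:

```python
# Sketch of a package-level flag handler. Assumption: the handler returns a
# (wrapper_flags, environment_flags, build_system_flags) triple, routing
# each flag set to one of the three application methods described above.
from spack import *  # the wildcard import package.py files rely on


class Parmetis(CMakePackage):
    # Hypothetical excerpt; the real package body is omitted.

    def flag_handler(self, name, flags):
        if name in ('cflags', 'cxxflags'):
            # Hand these to CMake via flags_to_build_system_args.
            return (None, None, flags)
        # Default: inject remaining flags through the compiler wrappers.
        return (flags, None, None)
```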
Todd Gamblin
c2a10a2aa2 Merge branch 'releases/v0.11.0' 2018-01-17 14:14:45 -08:00
Todd Gamblin
ba6c39310b Fix logo link in README.md to point to the develop branch. (#6969) 2018-01-17 09:07:40 -08:00
Todd Gamblin
974d166c8a Final changes for v0.11.0 (#6318) 2018-01-16 22:25:34 -08:00
scheibelp
7a0a907b5c elf relocation fix: cherry-picked from develop branch (#6889)
* Revert "Quick fix for relocation issues."

This reverts commit 57608a6dc4.

* Buildcache: relocate fixes (#6512)

* Updated function which checks if a binary file needs relocation.
  Previously this was incorrectly identifying ELF binaries as symbolic
  links (so they were being excluded from relocation). Added test to
  check that ELF binaries are not considered symlinks.

* relocate_text was not replacing paths in text files. Added test to
  check that text files are relocated properly (i.e. paths in the file
  are converted to the new prefix).

* Exclude backup files created by filter_file when installing from
  binary cache.

* Update write_buildinfo_file method signature to distinguish between
  the spec prefix and the working directory for the binary cache
  package.
2018-01-16 21:33:28 -08:00
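
To make the relocate_text fix above concrete, here is an illustrative sketch (not Spack's actual implementation) of rewriting an old install prefix to a new one in text files while leaving binaries alone:

```python
# Minimal sketch of text-file relocation as described in the bullets above.
def relocate_text(filenames, old_prefix, new_prefix):
    for path in filenames:
        with open(path, 'rb') as f:
            data = f.read()
        if b'\0' in data:
            # Crude binary check: skip anything that is not plain text.
            continue
        with open(path, 'wb') as f:
            f.write(data.replace(old_prefix.encode('utf-8'),
                                 new_prefix.encode('utf-8')))
```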
Todd Gamblin
57608a6dc4 Quick fix for relocation issues. 2017-11-13 10:34:28 +00:00
Todd Gamblin
52a9e5d2a3 Merge branch 'releases/v0.10.0' 2017-01-17 01:36:03 -08:00
Todd Gamblin
4f8167b7ed Don't assume spack is in the path when building docs. 2016-08-15 10:57:15 -07:00
Todd Gamblin
164da8eed1 Version bump to 0.9.1
- Bugfixes for spack find
- 0.9.1 can read specs from current develop.
2016-05-18 08:30:13 -07:00
alalazo
6e1257ed2d fixes #967 2016-05-18 08:28:02 -07:00
5104 changed files with 43315 additions and 157499 deletions


@@ -4,13 +4,33 @@ coverage:
range: 60...90
status:
project:
default:
threshold: 0.3%
default: true
llnl:
threshold: 0.5
paths:
- lib/spack/llnl
commands:
threshold: 0.5
paths:
- lib/spack/spack/cmd
build_systems:
threshold: 0.5
paths:
- lib/spack/spack/build_systems
modules:
threshold: 0.5
paths:
- lib/spack/spack/modules
core:
threshold: 0.5
paths:
- "!lib/spack/llnl"
- "!lib/spack/spack/cmd"
ignore:
- lib/spack/spack/test/.*
- lib/spack/env/.*
- lib/spack/docs/.*
- lib/spack/external/.*
- share/spack/qa/.*
comment: off


@@ -7,9 +7,9 @@ branch = True
source = lib
omit =
lib/spack/spack/test/*
lib/spack/env/*
lib/spack/docs/*
lib/spack/external/*
share/spack/qa/*
[report]
# Regexes for lines to exclude from consideration


@@ -1,11 +1,6 @@
.git/*
opt/spack/*
/etc/spack/*
!/etc/spack/defaults
share/spack/dotkit/*
share/spack/lmod/*
share/spack/modules/*
lib/spack/spack/test/*
.git
opt/spack
share/spack/docker/Dockerfile
share/spack/docker/build-image.sh
share/spack/docker/run-image.sh
share/spack/docker/push-image.sh

.flake8

@@ -1,32 +1,24 @@
# -*- conf -*-
# flake8 settings for Spack core files.
#
# These exceptions are for Spack core files. We're slightly more lenient
# These exceptions ar for Spack core files. We're slightly more lenient
# with packages. See .flake8_packages for that.
#
# E1: Indentation
# Let people line things up nicely:
# - E129: visually indented line with same indent as next logical line
#
# E2: Whitespace
# - E221: multiple spaces before operator
# - E241: multiple spaces after ','
# - E272: multiple spaces before keyword
#
# E7: Statement
# - E731: do not assign a lambda expression, use a def
#
# W5: Line break warning
# - W503: line break before binary operator
# - W504: line break after binary operator
# Let people use terse Python features:
# - E731: lambda expressions
#
# These are required to get the package.py files to test clean:
# - F999: syntax error in doctest
#
# N8: PEP8-naming
# - N801: class names should use CapWords convention
# - N813: camelcase imported as lowercase
# - N814: camelcase imported as constant
# Exempt to allow decorator classes to be lowercase, but follow otherwise:
# - N801: CapWords for class names.
#
[flake8]
ignore = E129,E221,E241,E272,E731,W503,W504,F999,N801,N813,N814
ignore = E129,E221,E241,E272,E731,F999,N801,W503,W504
max-line-length = 79


@@ -7,18 +7,16 @@
# wildcard import and dependencies can set globals for their
# dependents. So we add exceptions for checks related to undefined names.
#
# Note that we also add *per-line* exemptions for certain patterns in the
# Note that we also add *per-line* exemptions for certain patters in the
# `spack flake8` command. This is where F403 for `from spack import *`
# is added (because we *only* allow that wildcard).
#
# See .flake8 for regular exceptions.
#
# F4: Import
# Redefinition exceptions:
# - F405: `name` may be undefined, or undefined from star imports: `module`
#
# F8: Name
# - F821: undefined name `name`
# - F821: undefined name `name` (needed for cmake, configure, etc.)
#
[flake8]
ignore = E129,E221,E241,E272,E731,W503,W504,F405,F821,F999,N801,N813,N814
ignore = E129,E221,E241,E272,E731,F999,F405,F821,W503,W504
max-line-length = 79

.gitattributes

@@ -1 +0,0 @@
*.py diff=python


@@ -1,7 +1,7 @@
---
name: Bug report
about: Report a bug in the core of Spack (command not working as expected, etc.)
labels: bug
---
@@ -24,10 +24,10 @@ If Spack reported an error, provide the error message. If it did not report an e
but the output appears incorrect, provide the incorrect output. If there was no error
message and no output but the result is incorrect, describe how it does not match
what you expect. To provide more information you might re-run the commands with
the additional -d/--stacktrace flags:
the additional -sd flags:
```console
$ spack -d --stacktrace <command1> <spec>
$ spack -d --stacktrace <command2> <spec>
$ spack -sd <command1> <spec>
$ spack -sd <command2> <spec>
...
```
that activate the full debug output.
@@ -46,4 +46,4 @@ We encourage you to try, as much as possible, to reduce your problem to the mini
If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack!
Other than that, thanks for taking the time to contribute to Spack!


@@ -1,7 +1,7 @@
---
name: Build error
about: Some package in Spack didn't build correctly
labels: "build-error"
---
*Thanks for taking the time to report this build failure. To proceed with the


@@ -1,7 +1,6 @@
---
name: Feature request
about: Suggest adding a feature that is not yet in Spack
labels: feature
---


@@ -1,6 +0,0 @@
FROM python:3.7-alpine
RUN pip install pygithub
ADD entrypoint.py /entrypoint.py
ENTRYPOINT ["/entrypoint.py"]


@@ -1,85 +0,0 @@
#!/usr/bin/env python
#
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Maintainer review action.
This action checks which packages have changed in a PR, and adds their
maintainers to the pull request for review.
"""
import json
import os
import re
import subprocess
from github import Github
def spack(*args):
"""Run the spack executable with arguments, and return the output split.
This does just enough to run `spack pkg` and `spack maintainers`, the
two commands used by this action.
"""
github_workspace = os.environ['GITHUB_WORKSPACE']
spack = os.path.join(github_workspace, 'bin', 'spack')
output = subprocess.check_output([spack] + list(args))
split = re.split(r'\s+', output.decode('utf-8').strip())
return [s for s in split if s]
def main():
# get these first so that we'll fail early
token = os.environ['GITHUB_TOKEN']
event_path = os.environ['GITHUB_EVENT_PATH']
with open(event_path) as file:
data = json.load(file)
# make sure it's a pull_request event
assert 'pull_request' in data
# only request reviews on open, edit, or reopen
action = data['action']
if action not in ('opened', 'edited', 'reopened'):
return
# get data from the event payload
pr_data = data['pull_request']
base_branch_name = pr_data['base']['ref']
full_repo_name = pr_data['base']['repo']['full_name']
pr_number = pr_data['number']
requested_reviewers = pr_data['requested_reviewers']
author = pr_data['user']['login']
# get a list of packages that this PR modified
changed_pkgs = spack(
'pkg', 'changed', '--type', 'ac', '%s...' % base_branch_name)
# get maintainers for all modified packages
maintainers = set()
for pkg in changed_pkgs:
pkg_maintainers = set(spack('maintainers', pkg))
maintainers |= pkg_maintainers
# remove any maintainers who are already on the PR, and the author,
# as you can't review your own PR
maintainers -= set(requested_reviewers)
maintainers -= set([author])
if not maintainers:
return
# request reviews from each maintainer
gh = Github(token)
repo = gh.get_repo(full_repo_name)
pr = repo.get_pull(pr_number)
pr.create_review_request(list(maintainers))
if __name__ == "__main__":
main()


@@ -1,57 +0,0 @@
name: linux builds
on:
push:
branches:
- master
- develop
pull_request:
branches:
- master
- develop
paths-ignore:
# Don't run if we only modified packages in the built-in repository
- 'var/spack/repos/builtin/**'
- '!var/spack/repos/builtin/packages/lz4/**'
- '!var/spack/repos/builtin/packages/mpich/**'
- '!var/spack/repos/builtin/packages/tut/**'
- '!var/spack/repos/builtin/packages/py-setuptools/**'
- '!var/spack/repos/builtin/packages/openjpeg/**'
- '!var/spack/repos/builtin/packages/r-rcpp/**'
jobs:
build:
runs-on: ubuntu-latest
strategy:
max-parallel: 4
matrix:
package: [lz4, mpich, tut, py-setuptools, openjpeg, r-rcpp]
steps:
- uses: actions/checkout@v1
- name: Cache ccache's store
uses: actions/cache@v1
with:
path: ~/.ccache
key: ccache-build-${{ matrix.package }}
restore-keys: |
ccache-build-${{ matrix.package }}
- name: Setup Python
uses: actions/setup-python@v1
with:
python-version: 3.8
- name: Install System Packages
run: |
sudo apt-get -yqq install ccache gfortran perl perl-base r-base r-base-core r-base-dev findutils openssl libssl-dev libpciaccess-dev
R --version
perl --version
- name: Copy Configuration
run: |
ccache -M 300M && ccache -z
# Set up external deps for build tests, b/c they take too long to compile
cp share/spack/qa/configuration/*.yaml etc/spack/
- name: Run the build test
run: |
. share/spack/setup-env.sh
SPEC=${{ matrix.package }} share/spack/qa/run-build-tests
ccache -s


@@ -1,30 +0,0 @@
name: python version check
on:
push:
branches:
- master
- develop
pull_request:
branches:
- master
- develop
jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v1
- name: Setup Python
uses: actions/setup-python@v1
with:
python-version: 3.7
- name: Install Python Packages
run: |
pip install --upgrade pip
pip install --upgrade vermin
- name: Minimum Version (Spack's Core)
run: vermin --backport argparse -t=2.6- -t=3.5- -v lib/spack/spack/ lib/spack/llnl/ bin/
- name: Minimum Version (Repositories)
run: vermin --backport argparse -t=2.6- -t=3.5- -v var/spack/repos

.gitignore

@@ -20,10 +20,9 @@
*.swp
/htmlcov
.coverage
\#*
#*
.#*
lib/spack/spack/test/.cache
/bin/spackc
*.in.log
*.out.log
*.orig


@@ -1,9 +0,0 @@
version: 2
sphinx:
configuration: lib/spack/docs/conf.py
python:
version: 3.7
install:
- requirements: lib/spack/docs/requirements.txt


@@ -12,120 +12,182 @@ branches:
# Build matrix
#=============================================================================
dist: xenial
# Adding the keyword dist to permit an `allow_failures` section
# under `matrix.include`. More information here:
#
# https://docs.travis-ci.com/user/customizing-the-build/#Rows-that-are-Allowed-to-Fail
dist: trusty
jobs:
fast_finish: true
include:
- stage: 'style checks'
python: '3.8'
python: '2.7'
sudo: required
os: linux
language: python
env: TEST_SUITE=flake8
# Shell integration with module files
- python: '3.8'
- stage: 'flake8 + documentation'
python: '2.7'
os: linux
language: python
env: [ TEST_SUITE=bootstrap ]
- stage: 'unit tests + documentation'
env: TEST_SUITE=doc
- stage: 'unit tests'
python: '2.6'
dist: trusty
sudo: required
os: linux
language: python
env: [ TEST_SUITE=unit, COVERAGE=true ]
env: TEST_SUITE=unit
- python: '2.7'
sudo: required
os: linux
language: python
env: [ TEST_SUITE=unit, COVERAGE=true ]
env: TEST_SUITE=unit
- python: '3.4'
sudo: required
os: linux
language: python
env: TEST_SUITE=unit
- python: '3.5'
sudo: required
os: linux
language: python
env: TEST_SUITE=unit
- python: '3.6'
sudo: required
os: linux
language: python
env: TEST_SUITE=unit
- python: '3.7'
sudo: required
os: linux
dist: xenial
language: python
env: TEST_SUITE=unit
- python: '3.8'
os: linux
language: python
env: [ TEST_SUITE=unit, COVERAGE=true ]
- python: '3.8'
- python: '3.6'
sudo: required
os: linux
language: python
env: TEST_SUITE=doc
- os: osx
language: generic
env: [ TEST_SUITE=unit, PYTHON_VERSION=2.7, COVERAGE=true ]
if: type != pull_request
env: [ TEST_SUITE=unit, PYTHON_VERSION=2.7 ]
# mpich (AutotoolsPackage)
- stage: 'build tests'
python: '2.7'
os: linux
language: python
env: [ TEST_SUITE=build, 'SPEC=mpich' ]
# astyle (MakefilePackage)
- python: '2.7'
os: linux
language: python
env: [ TEST_SUITE=build, 'SPEC=astyle' ]
# tut (WafPackage)
- python: '2.7'
os: linux
language: python
env: [ TEST_SUITE=build, 'SPEC=tut' ]
# py-setuptools (PythonPackage)
- python: '2.7'
os: linux
language: python
env: [ TEST_SUITE=build, 'SPEC=py-setuptools' ]
# perl-dbi (PerlPackage)
# - python: '2.7'
# os: linux
# language: python
# env: [ TEST_SUITE=build, 'SPEC=perl-dbi' ]
# openjpeg (CMakePackage + external cmake)
- python: '2.7'
os: linux
language: python
env: [ TEST_SUITE=build, 'SPEC=openjpeg' ]
# r-rcpp (RPackage + external R)
- python: '2.7'
os: linux
language: python
env: [ TEST_SUITE=build, 'SPEC=r-rcpp' ]
# mpich (AutotoolsPackage)
- python: '3.6'
os: linux
language: python
env: [ TEST_SUITE=build, 'SPEC=mpich' ]
- stage: 'docker build'
sudo: required
os: linux
language: generic
env: TEST_SUITE=docker
allow_failures:
- dist: xenial
- env: TEST_SUITE=docker
# temporary Python 2.6 exception while we figure out why Travis is hanging
- python: '2.6'
stages:
- 'style checks'
- 'unit tests + documentation'
- 'build tests'
- name: 'docker build'
if: type = push AND branch IN (develop, master)
stages:
- 'flake8 + documentation'
- 'unit tests'
- 'build tests'
- name: 'unit tests - osx'
if: type IN (cron)
#=============================================================================
# Environment
#=============================================================================
# Use new Travis infrastructure (Docker can't sudo yet)
sudo: false
# Docs need graphviz to build
addons:
# for Linux builds, we use APT
apt:
packages:
- ccache
- cmake
- gfortran
- mercurial
- graphviz
- gnupg2
- kcov
- mercurial
- cmake
- ninja-build
- perl
- perl-base
- realpath
- r-base
- r-base-core
- r-base-dev
- zsh
# for Mac builds, we use Homebrew
homebrew:
packages:
- python@2
- gcc
- gnupg2
- ccache
- dash
- kcov
update: true
- perl
- perl-base
# ~/.ccache needs to be cached directly as Travis is not taking care of it
# (possibly because we use 'language: python' and not 'language: c')
cache:
pip: true
ccache: true
directories:
- ~/.ccache
- ~/.mirror
# Work around Travis's lack of support for Python on OSX
before_install:
- if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then
brew update;
export HOMEBREW_NO_AUTO_UPDATE=1;
rm /usr/local/include/c++ ;
brew ls --versions python@2 > /dev/null || brew install python@2;
brew ls --versions gcc > /dev/null || brew install gcc;
brew ls --versions gnupg2 > /dev/null || brew install gnupg2;
pip2 install --upgrade pip;
pip2 install virtualenv;
virtualenv venv;
source venv/bin/activate;
fi
- ccache -M 2G && ccache -z
# Install various dependencies
install:
- pip install --upgrade pip
- pip install --upgrade six
- pip install --upgrade setuptools
- pip install --upgrade codecov coverage==4.5.4
- pip install --upgrade codecov
- pip install --upgrade flake8
- pip install --upgrade pep8-naming
- if [[ "$TEST_SUITE" == "doc" ]]; then
@@ -138,25 +200,43 @@ before_script:
- git config --global user.name "Test User"
# Need this to be able to compute the list of changed files
- git fetch origin ${TRAVIS_BRANCH}:${TRAVIS_BRANCH}
- git fetch origin develop:develop
# Set up external deps for build tests, b/c they take too long to compile
- if [[ "$TEST_SUITE" == "build" ]]; then cp
share/spack/qa/configuration/packages.yaml etc/spack/packages.yaml;
fi
#=============================================================================
# Building
#=============================================================================
services:
- docker
script:
- share/spack/qa/run-$TEST_SUITE-tests
after_success:
- ccache -s
- case "$TEST_SUITE" in
unit)
if [[ "$COVERAGE" == "true" ]]; then
codecov --env PYTHON_VERSION
--required
--flags "${TEST_SUITE}${TRAVIS_OS_NAME}";
- if [[ "$TEST_SUITE" == "docker build" ]]; then
login_attempted=0; login_success=0;
for config in share/spack/docker/config/* ; do
source "$config" ;
./share/spack/docker/build-image.sh;
if [ "$TRAVIS_EVENT_TYPE" != "pull_request" ]; then
if [ "$login_attempted" '=' '0' ]; then
if echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin; then
login_success=1;
fi;
login_attempted=1;
fi;
if [ "$login_success" '=' '1' ]; then
./share/spack/docker/push-image.sh;
fi
fi
;;
esac
done;
fi
- if [[ "$TEST_SUITE" == "unit" || "$TEST_SUITE" == "build" ]]; then
codecov --env PYTHON_VERSION
--required --flags "${TEST_SUITE}${TRAVIS_OS_NAME}";
fi
#=============================================================================
# Notifications


@@ -1,451 +0,0 @@
# v0.14.2 (2020-04-15)
This is a minor release on the `0.14` series. It includes performance
improvements and bug fixes:
* Improvements to how `spack install` handles foreground/background (#15723)
* Major performance improvements for reading the package DB (#14693, #15777)
* No longer check for the old `index.yaml` database file (#15298)
* Properly activate environments with '-h' in the name (#15429)
* External packages have correct `.prefix` in environments/views (#15475)
* Improvements to computing env modifications from sourcing files (#15791)
* Bugfix on Cray machines when getting `TERM` env variable (#15630)
* Avoid adding spurious `LMOD` env vars to Intel modules (#15778)
* Don't output [+] for mock installs run during tests (#15609)
# v0.14.1 (2020-03-20)
This is a bugfix release on top of `v0.14.0`. Specific fixes include:
* several bugfixes for parallel installation (#15339, #15341, #15220, #15197)
* `spack load` now works with packages that have been renamed (#14348)
* bugfix for `suite-sparse` installation (#15326)
* deduplicate identical suffixes added to module names (#14920)
* fix issues with `configure_args` during module refresh (#11084)
* increased test coverage and test fixes (#15237, #15354, #15346)
* remove some unused code (#15431)
# v0.14.0 (2020-02-23)
`v0.14.0` is a major feature release, with 3 highlighted features:
1. **Distributed builds.** Multiple Spack instances will now coordinate
properly with each other through locks. This works on a single node
(where you've called `spack` several times) or across multiple nodes
with a shared filesystem. For example, with SLURM, you could build
`trilinos` and its dependencies on 2 24-core nodes, with 3 Spack
instances per node and 8 build jobs per instance, with `srun -N 2 -n 6
spack install -j 8 trilinos`. This requires a filesystem with locking
enabled, but not MPI or any other library for parallelism.
2. **Build pipelines.** You can also build in parallel through Gitlab
CI. Simply create a Spack environment and push it to Gitlab to build
on Gitlab runners. Pipeline support is now integrated into a single
`spack ci` command, so setting it up is easier than ever. See the
[Pipelines section](https://spack.readthedocs.io/en/v0.14.0/pipelines.html)
in the docs.
3. **Container builds.** The new `spack containerize` command allows you
to create a Docker or Singularity recipe from any Spack environment.
There are options to customize the build if you need them. See the
[Container Images section](https://spack.readthedocs.io/en/latest/containers.html)
in the docs.
In addition, there are several other new commands, many bugfixes and
improvements, and `spack load` no longer requires modules, so you can use
it the same way on your laptop or on your supercomputer.
Spack grew by over 300 packages since our last release in November 2019,
and the project grew to over 500 contributors. Thanks to all of you for
making yet another great release possible. Detailed notes below.
## Major new core features
* Distributed builds: spack instances coordinate and build in parallel (#13100)
* New `spack ci` command to manage CI pipelines (#12854)
* Generate container recipes from environments: `spack containerize` (#14202)
* `spack load` now works without using modules (#14062, #14628)
* Garbage collect old/unused installations with `spack gc` (#13534)
* Configuration files all set environment modifications the same way (#14372,
[docs](https://spack.readthedocs.io/en/v0.14.0/configuration.html#environment-modifications))
* `spack commands --format=bash` auto-generates completion (#14393, #14607)
* Packages can specify alternate fetch URLs in case one fails (#13881)
## Improvements
* Improved locking for concurrency with environments (#14676, #14621, #14692)
* `spack test` sends args to `pytest`, supports better listing (#14319)
* Better support for aarch64 and cascadelake microarch (#13825, #13780, #13820)
* Archspec is now a separate library (see https://github.com/archspec/archspec)
* Many improvements to the `spack buildcache` command (#14237, #14346,
#14466, #14467, #14639, #14642, #14659, #14696, #14698, #14714, #14732,
#14929, #15003, #15086, #15134)
## Selected Bugfixes
* Compilers now require an exact match on version (#8735, #14730, #14752)
* Bugfix for patches that specified specific versions (#13989)
* `spack find -p` now works in environments (#10019, #13972)
* Dependency queries work correctly in `spack find` (#14757)
* Bugfixes for locking upstream Spack instances chains (#13364)
* Fixes for PowerPC clang optimization flags (#14196)
* Fix for issue with compilers and specific microarchitectures (#13733, #14798)
## New commands and options
* `spack ci` (#12854)
* `spack containerize` (#14202)
* `spack gc` (#13534)
* `spack load` accepts `--only package`, `--only dependencies` (#14062, #14628)
* `spack commands --format=bash` (#14393)
* `spack commands --update-completion` (#14607)
* `spack install --with-cache` has new option: `--no-check-signature` (#11107)
* `spack test` now has `--list`, `--list-long`, and `--list-names` (#14319)
* `spack install --help-cdash` moves CDash help out of the main help (#13704)
## Deprecations
* `spack release-jobs` has been rolled into `spack ci`
* `spack bootstrap` will be removed in a future version, as it is no longer
needed to set up modules (see `spack load` improvements above)
## Documentation
* New section on building container images with Spack (see
[docs](https://spack.readthedocs.io/en/latest/containers.html))
* New section on using `spack ci` command to build pipelines (see
[docs](https://spack.readthedocs.io/en/latest/pipelines.html))
* Document how to add conditional dependencies (#14694)
* Document how to use Spack to replace Homebrew/Conda (#13083, see
[docs](https://spack.readthedocs.io/en/latest/workflows.html#using-spack-to-replace-homebrew-conda))
## Important package changes
* 3,908 total packages (345 added since 0.13.0)
* Added first cut at a TensorFlow package (#13112)
* We now build R without "recommended" packages, manage them w/Spack (#12015)
* Elpa and OpenBLAS now leverage microarchitecture support (#13655, #14380)
* Fix `octave` compiler wrapper usage (#14726)
* Enforce that packages in `builtin` aren't missing dependencies (#13949)
# v0.13.4 (2020-02-07)
This release contains several bugfixes:
* bugfixes for invoking python in various environments (#14349, #14496, #14569)
* brought tab completion up to date (#14392)
* bugfix for removing extensions from views in order (#12961)
* bugfix for nondeterministic hashing for specs with externals (#14390)
# v0.13.3 (2019-12-23)
This release contains more major performance improvements for Spack
environments, as well as bugfixes for mirrors and a `python` issue with
RHEL8.
* mirror bugfixes: symlinks, duplicate patches, and exception handling (#13789)
* don't try to fetch `BundlePackages` (#13908)
* avoid re-fetching patches already added to a mirror (#13908)
* allow repeated invocations of `spack mirror create` on the same dir (#13908)
* bugfix for RHEL8 when `python` is unavailable (#14252)
* improve concretization performance in environments (#14190)
* improve installation performance in environments (#14263)
# v0.13.2 (2019-12-04)
This release contains major performance improvements for Spack environments, as
well as some bugfixes and minor changes.
* allow missing modules if they are blacklisted (#13540)
* speed up environment activation (#13557)
* mirror path works for unknown versions (#13626)
* environments: don't try to modify run-env if a spec is not installed (#13589)
* use semicolons instead of newlines in module/python command (#13904)
* verify.py: os.path.exists exception handling (#13656)
* Document use of the maintainers field (#13479)
* bugfix with config caching (#13755)
* hwloc: added 'master' version pointing at the HEAD of the master branch (#13734)
* config option to allow gpg warning suppression (#13744)
* fix for relative symlinks when relocating binary packages (#13727)
* allow binary relocation of strings in relative binaries (#13724)
# v0.13.1 (2019-11-05)
This is a bugfix release on top of `v0.13.0`. Specific fixes include:
* `spack find` now displays variants and other spec constraints
* bugfix: uninstall should find concrete specs by DAG hash (#13598)
* environments: make shell modifications partially unconditional (#13523)
* binary distribution: relocate text files properly in relative binaries (#13578)
* bugfix: fetch prefers to fetch local mirrors over remote resources (#13545)
* environments: only write when necessary (#13546)
* bugfix: spack.util.url.join() now handles absolute paths correctly (#13488)
* sbang: use utf-8 for encoding when patching (#13490)
* Specs with quoted flags containing spaces are parsed correctly (#13521)
* targets: print a warning message before downgrading (#13513)
* Travis CI: Test Python 3.8 (#13347)
* Documentation: Database.query methods share docstrings (#13515)
* cuda: fix conflict statements for x86-64 targets (#13472)
* cpu: fix clang flags for generic x86_64 (#13491)
* syaml_int type should use int.__repr__ rather than str.__repr__ (#13487)
* elpa: prefer 2016.05.004 until sse/avx/avx2 issues are resolved (#13530)
* trilinos: temporarily constrain netcdf@:4.7.1 (#13526)
# v0.13.0 (2019-10-25)
`v0.13.0` is our biggest Spack release yet, with *many* new major features.
From facility deployment to improved environments, microarchitecture
support, and auto-generated build farms, this release has features for all of
our users.
Spack grew by over 700 packages in the past year, and the project now has
over 450 contributors. Thanks to all of you for making this release possible.
## Major new core features
- Chaining: use dependencies from external "upstream" Spack instances
- Environments now behave more like virtualenv/conda
- Each env has a *view*: a directory with all packages symlinked in
- Activating an environment sets `PATH`, `LD_LIBRARY_PATH`, `CPATH`,
`CMAKE_PREFIX_PATH`, `PKG_CONFIG_PATH`, etc. to point to this view.
- Spack detects and builds specifically for your microarchitecture
- named, understandable targets like `skylake`, `broadwell`, `power9`, `zen2`
- Spack knows which compilers can build for which architectures
- Packages can easily query support for features like `avx512` and `sse3`
- You can pick a target with, e.g. `spack install foo target=icelake`
- Spack stacks: combinatorial environments for facility deployment
- Environments can now build cartesian products of specs (with `matrix:`)
- Conditional syntax support to exclude certain builds from the stack
- Projections: ability to build easily navigable symlink trees environments
- Support no-source packages (BundlePackage) to aggregate related packages
- Extensions: users can write custom commands that live outside of Spack repo
- Support ARM and Fujitsu compilers
## CI/build farm support
- `spack release-jobs` can detect `package.py` changes and generate
`.gitlab-ci.yml` to create binaries for an environment or stack
in parallel (initial support -- will change in future release).
- Results of build pipelines can be uploaded to a CDash server.
- Spack can now upload/fetch from package mirrors in Amazon S3
## New commands/options
- `spack mirror create --all` downloads *all* package sources/resources/patches
- `spack dev-build` runs phases of the install pipeline on the working directory
- `spack deprecate` permanently symlinks an old, unwanted package to a new one
- `spack verify` checks that packages' files match what was originally installed
- `spack find --json` prints `JSON` that is easy to parse with, e.g. `jq`
- `spack find --format FORMAT` allows you to flexibly print package metadata
- `spack spec --json` prints JSON version of `spec.yaml`
## Selected improvements
- Auto-build requested compilers if they do not exist
- Spack automatically adds `RPATHs` needed to make executables find compiler
runtime libraries (e.g., path to newer `libstdc++` in `icpc` or `g++`)
- setup-env.sh is now compatible with Bash, Dash, and Zsh
- Spack now caps build jobs at min(16, ncores) by default
- `spack compiler find` now also throttles number of spawned processes
- Spack now writes stage directories directly to `$TMPDIR` instead of
symlinking stages within `$spack/var/spack/cache`.
- Improved and more powerful `spec` format strings
- You can pass a `spec.yaml` file anywhere in the CLI you can type a spec.
- Many improvements to binary caching
- Gradually supporting new features from Environment Modules v4
- `spack edit` respects `VISUAL` environment variable
- Simplified package syntax for specifying build/run environment modifications
- Numerous improvements to support for environments across Spack commands
- Concretization improvements
## Documentation
- Multi-lingual documentation (Started a Japanese translation)
- Tutorial now has its own site at spack-tutorial.readthedocs.io
- This enables us to keep multiple versions of the tutorial around
## Deprecations
- Spack no longer supports dotkit (LLNL's homegrown, now deprecated module tool)
- `spack build`, `spack configure`, `spack diy` deprecated in favor of
`spack dev-build` and `spack install`
## Important package changes
- 3,563 total packages (718 added since 0.12.1)
- Spack now defaults to Python 3 (previously preferred 2.7 by default)
- Much improved ARM support thanks to Fugaku (RIKEN) and SNL teams
- Support new special versions: master, trunk, and head (in addition to develop)
- Better finding logic for libraries and headers
# v0.12.1 (2018-11-13)
This is a minor bugfix release, with a minor fix in the tutorial and a `flake8` fix.
Bugfixes
* Add `r` back to regex strings in binary distribution
* Fix gcc install version in the tutorial
# v0.12.0 (2018-11-13)
## Major new features
- Spack environments
- `spack.yaml` and `spack.lock` files for tracking dependencies
- Custom configurations via command line
- Better support for linking Python packages into view directories
- Packages have more control over compiler flags via flag handlers
- Better support for module file generation
- Better support for Intel compilers, Intel MPI, etc.
- Many performance improvements, improved startup time
## License
- As of this release, all of Spack is permissively licensed under Apache-2.0 or MIT, at the user's option.
- Consents from over 300 contributors were obtained to make this relicense possible.
- Previous versions were distributed under the LGPL license, version 2.1.
## New packages
Over 2,900 packages (800 added since last year)
Spack would not be possible without our community. Thanks to all of our
[contributors](https://github.com/spack/spack/graphs/contributors) for the
new features and packages in this release!
# v0.11.2 (2018-02-07)
This release contains the following fixes:
* Fixes for `gfortran` 7 compiler detection (#7017)
* Fixes for exceptions thrown during module generation (#7173)
# v0.11.1 (2018-01-19)
This release contains bugfixes for compiler flag handling. There were issues in `v0.11.0` that caused some packages to be built without proper optimization.
Fixes:
* Issue #6999: FFTW installed with Spack 0.11.0 gets built without optimisations
Includes:
* PR #6415: Fixes for flag handling behavior
* PR #6960: Fix type issues with setting flag handlers
* 880e319: Upstream fixes to `list_url` in various R packages
# v0.11.0 (2018-01-17)
Spack v0.11.0 contains many improvements since v0.10.0.
Below is a summary of the major features, broken down by category.
## New packages
- Spack now has 2,178 packages (from 1,114 in v0.10.0)
- Many more Python packages (356) and R packages (471)
- 48 Exascale Proxy Apps (try `spack list -t proxy-app`)
## Core features for users
- Relocatable binary packages (`spack buildcache`, #4854)
- Spack now fully supports Python 3 (#3395)
- Packages can be tagged and searched by tags (#4786)
- Custom module file templates using Jinja (#3183)
- `spack bootstrap` command now sets up a basic module environment (#3057)
- Simplified and better organized help output (#3033)
- Improved, less redundant `spack install` output (#5714, #5950)
- Reworked `spack dependents` and `spack dependencies` commands (#4478)
## Major new features for packagers
- Multi-valued variants (#2386)
- New `conflicts()` directive (#3125)
- New dependency type: `test` dependencies (#5132)
- Packages can require their own patches on dependencies (#5476)
- `depends_on(..., patches=<patch list>)`
- Build interface for passing linker information through Specs (#1875)
- Major packages that use blas/lapack now use this interface
- Flag handlers allow packages more control over compiler flags (#6415)
- Package subclasses support many more build systems:
- autotools, perl, qmake, scons, cmake, makefile, python, R, WAF
- package-level support for installing Intel HPC products (#4300)
- `spack blame` command shows contributors to packages (#5522)
- `spack create` now guesses many more build systems (#2707)
- Better URL parsing to guess package version URLs (#2972)
- Much improved `PythonPackage` support (#3367)
## Core
- Much faster concretization (#5716, #5783)
- Improved output redirection (redirecting build output works properly #5084)
- Numerous improvements to internal structure and APIs
## Tutorials & Documentation
- Many updates to documentation
- [New tutorial material from SC17](https://spack.readthedocs.io/en/latest/tutorial.html)
- configuration
- build systems
- build interface
- working with module generation
- Documentation on docker workflows and best practices
## Selected improvements and bug fixes
- No longer build Python eggs -- installations are plain directories (#3587)
- Improved filtering of system paths from build PATHs and RPATHs (#2083, #3910)
- Git submodules are properly handled on fetch (#3956)
- Can now set default number of parallel build jobs in `config.yaml`
- Improvements to `setup-env.csh` (#4044)
- Better default compiler discovery on Mac OS X (#3427)
- clang will automatically mix with gfortran
- Improved compiler detection on Cray machines (#3075)
- Better support for IBM XL compilers
- Better tab completion
- Resume gracefully after prematurely terminated partial installs (#4331)
- Better mesa support (#5170)
Spack would not be possible without our community. Thanks to all of our
[contributors](https://github.com/spack/spack/graphs/contributors) for the
new features and packages in this release!
# v0.10.0 (2017-01-17)
This is Spack `v0.10.0`. With this release, we will start to push Spack
releases more regularly. This is the last Spack release without
automated package testing. With the next release, we will begin to run
package tests in addition to unit tests.
Spack has grown rapidly from 422 to
[1,114 packages](https://spack.readthedocs.io/en/v0.10.0/package_list.html),
thanks to the hard work of over 100 contributors. Below is a condensed
version of all the changes since `v0.9.1`.
### Packages
- Grew from 422 to 1,114 packages
- Includes major updates like X11, Qt
- Expanded HPC, R, and Python ecosystems
### Core
- Major speed improvements for spack find and concretization
- Completely reworked architecture support
- Platforms can have front-end and back-end OS/target combinations
- Much better support for Cray and BG/Q cross-compiled environments
- Downloads are now cached locally
- Support installations in deeply nested directories: patch long shebangs using `sbang`
### Basic usage
- Easier global configuration via config.yaml
- customize install, stage, and cache locations
- Hierarchical configuration scopes: default, site, user
- Platform-specific scopes allow better per-platform defaults
- Ability to set `cflags`, `cxxflags`, `fflags` on the command line
- YAML-configurable support for both Lmod and tcl modules in mainline
- `spack install` supports --dirty option for emergencies
### For developers
- Support multiple dependency types: `build`, `link`, and `run`
- Added `Package` base classes for custom build systems
- `AutotoolsPackage`, `CMakePackage`, `PythonPackage`, etc.
- `spack create` now guesses many more build systems
- Development environment integration with `spack setup`
- New interface to pass linking information via `spec` objects
- Currently used for `BLAS`/`LAPACK`/`SCALAPACK` libraries
- Polymorphic virtual dependency attributes: `spec['blas'].blas_libs`
### Testing & Documentation
- Unit tests run continuously on Travis CI for Mac and Linux
- Switched from `nose` to `pytest` for unit tests.
- Unit tests take 1 minute now instead of 8
- Massively expanded documentation
- Docs are now hosted on [spack.readthedocs.io](http://spack.readthedocs.io)


@@ -1,4 +1,4 @@
# Spack Community Code of Conduct
# Contributor Covenant Code of Conduct
## Our Pledge
@@ -30,7 +30,7 @@ Project maintainers have the right and responsibility to remove, edit, or reject
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the Spack project or its community. Examples of representing the project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of the project may be further defined and clarified by Spack maintainers.
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
## Enforcement


@@ -68,6 +68,10 @@ PackageName: py
PackageHomePage: https://pypi.python.org/pypi/py
PackageLicenseDeclared: MIT
PackageName: pyqver
PackageHomePage: https://github.com/ghewgill/pyqver
PackageLicenseDeclared: BSD-3-Clause
PackageName: pytest
PackageHomePage: https://pypi.python.org/pypi/pytest
PackageLicenseDeclared: MIT
@@ -79,11 +83,3 @@ PackageLicenseDeclared: MIT
PackageName: six
PackageHomePage: https://pypi.python.org/pypi/six
PackageLicenseDeclared: MIT
PackageName: macholib
PackageHomePage: https://macholib.readthedocs.io/en/latest/index.html
PackageLicenseDeclared: MIT
PackageName: altgraph
PackageHomePage: https://altgraph.readthedocs.io/en/latest/index.html
PackageLicenseDeclared: MIT


@@ -1,4 +1,4 @@
Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
Permission is hereby granted, free of charge, to any person obtaining a


@@ -1,7 +1,6 @@
# <img src="https://cdn.rawgit.com/spack/spack/develop/share/spack/logo/spack-logo.svg" width="64" valign="middle" alt="Spack"/> Spack
[![Build Status](https://travis-ci.org/spack/spack.svg?branch=develop)](https://travis-ci.org/spack/spack)
[![Linux Builds](https://github.com/spack/spack/workflows/linux%20builds/badge.svg)](https://github.com/spack/spack/actions)
[![codecov](https://codecov.io/gh/spack/spack/branch/develop/graph/badge.svg)](https://codecov.io/gh/spack/spack)
[![Read the Docs](https://readthedocs.org/projects/spack/badge/?version=latest)](https://spack.readthedocs.io)
[![Slack](https://spackpm.herokuapp.com/badge.svg)](https://spackpm.herokuapp.com)
@@ -27,43 +26,55 @@ Then:
$ git clone https://github.com/spack/spack.git
$ cd spack/bin
$ ./spack install zlib
$ ./spack install libelf
Documentation
----------------
[**Full documentation**](http://spack.readthedocs.io/) is available, or
run `spack help` or `spack help --all`.
[**Full documentation**](http://spack.readthedocs.io/) for Spack is
the first place to look.
Tutorial
----------------
Try the
[**Spack Tutorial**](http://spack.readthedocs.io/en/latest/tutorial.html),
to learn how to use spack, write packages, or deploy packages for users
at your site.
We maintain a
[**hands-on tutorial**](http://spack.readthedocs.io/en/latest/tutorial.html).
It covers basic to advanced usage, packaging, developer features, and large HPC
deployments. You can do all of the exercises on your own laptop using a
Docker container.
See also:
* [Technical paper](http://www.computer.org/csdl/proceedings/sc/2015/3723/00/2807623.pdf) and
[slides](https://tgamblin.github.io/files/Gamblin-Spack-SC15-Talk.pdf) on Spack's design and implementation.
* [Short presentation](https://tgamblin.github.io/files/Gamblin-Spack-Lightning-Talk-BOF-SC15.pdf) from the *Getting Scientific Software Installed* BOF session at Supercomputing 2015.
Feel free to use these materials to teach users at your organization
about Spack.
Community
Get Involved!
------------------------
Spack is an open source project. Questions, discussion, and
contributions are welcome. Contributions can be anything from new
packages to bugfixes, documentation, or even new core features.
packages to bugfixes, or even new core features.
Resources:
### Mailing list
* **Slack workspace**: [spackpm.slack.com](https://spackpm.slack.com).
To get an invitation, [**click here**](https://spackpm.herokuapp.com).
* **Mailing list**: [groups.google.com/d/forum/spack](https://groups.google.com/d/forum/spack)
* **Twitter**: [@spackpm](https://twitter.com/spackpm). Be sure to
`@mention` us!
If you are interested in contributing to spack, join the mailing list.
We're using Google Groups for this:
* [Spack Google Group](https://groups.google.com/d/forum/spack)
### Slack channel
Spack has a Slack channel where you can chat about all things Spack:
* [Spack on Slack](https://spackpm.slack.com)
[Sign up here](https://spackpm.herokuapp.com) to get an invitation mailed
to you.
### Twitter
You can follow [@spackpm](https://twitter.com/spackpm) on Twitter for
updates. Also, feel free to `@mention` us in questions or comments
about your own experience with Spack.
### Contributions
Contributing
------------------------
Contributing to Spack is relatively easy. Just send us a
[pull request](https://help.github.com/articles/using-pull-requests/).
When you send your request, make ``develop`` the destination branch on the
@@ -81,13 +92,6 @@ branching model. The ``develop`` branch contains the latest
contributions, and ``master`` is always tagged and points to the latest
stable release.
Code of Conduct
------------------------
Please note that Spack has a
[**Code of Conduct**](.github/CODE_OF_CONDUCT.md). By participating in
the Spack community, you agree to abide by its rules.
Authors
----------------
Many thanks go to Spack's [contributors](https://github.com/spack/spack/graphs/contributors).
@@ -118,6 +122,6 @@ See [LICENSE-MIT](https://github.com/spack/spack/blob/develop/LICENSE-MIT),
[COPYRIGHT](https://github.com/spack/spack/blob/develop/COPYRIGHT), and
[NOTICE](https://github.com/spack/spack/blob/develop/NOTICE) for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
`SPDX-License-Identifier: (Apache-2.0 OR MIT)`
LLNL-CODE-647188
``LLNL-CODE-647188``


@@ -1,6 +1,6 @@
#!/bin/bash
#
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -103,10 +103,10 @@ interpreter_f="${interpreter_v[0]}"
# Invoke any interpreter found, or raise an error if none was found.
if [[ -n "$interpreter_f" ]]; then
if [[ "${interpreter_f##*/}" = "perl"* ]]; then
exec $interpreter -x "$@"
if [[ "${interpreter_f##*/}" = "perl" ]]; then
exec $interpreter_v -x "$@"
else
exec $interpreter "$@"
exec $interpreter_v "$@"
fi
else
echo "error: sbang found no interpreter in $script"


@@ -1,26 +1,10 @@
#!/bin/sh
# -*- python -*-
#!/usr/bin/env python
#
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
# This file is bilingual. The following shell code finds our preferred python.
# Following line is a shell no-op, and starts a multi-line Python comment.
# See https://stackoverflow.com/a/47886254
""":"
# prefer python3, then python, then python2
for cmd in python3 python python2; do
command -v > /dev/null $cmd && exec $cmd $0 "$@"
done
echo "==> Error: spack could not find a python interpreter!" >&2
exit 1
":"""
# Line above is a shell no-op, and ends a python multi-line comment.
# The code above runs this file with our preferred python interpreter.
from __future__ import print_function
import os
@@ -51,7 +35,7 @@ sys.path.insert(0, spack_external_libs)
# (see #9206 for a broader description of the issue).
#
# Briefly: ruamel.yaml produces a .pth file when installed with pip that
# makes the site installed package the preferred one, even though sys.path
# makes the site installed package the preferred one, even tough sys.path
# is modified to point to another version of ruamel.yaml.
if 'ruamel.yaml' in sys.modules:
del sys.modules['ruamel.yaml']


@@ -1,6 +1,6 @@
#!/bin/sh
#
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)


@@ -16,7 +16,7 @@
config:
# This is the path to the root of the Spack install tree.
# You can use $spack here to refer to the root of the spack instance.
install_tree: ~/.spack/opt/spack
install_tree: $spack/opt/spack
# Locations where templates should be found
@@ -30,44 +30,31 @@ config:
# Locations where different types of modules should be installed.
module_roots:
tcl: ~/.spack/share/spack/modules
lmod: ~/.spack/share/spack/lmod
tcl: $spack/share/spack/modules
lmod: $spack/share/spack/lmod
dotkit: $spack/share/spack/dotkit
# Temporary locations Spack can try to use for builds.
#
# Recommended options are given below.
# Spack will use the first one it finds that exists and is writable.
# You can use $tempdir to refer to the system default temp directory
# (as returned by tempfile.gettempdir()).
#
# Builds can be faster in temporary directories on some (e.g., HPC) systems.
# Specifying `$tempdir` will ensure use of the default temporary directory
# (i.e., ``$TMP`` or ``$TMPDIR``).
# A value of $spack/var/spack/stage indicates that Spack should run
# builds directly inside its install directory without staging them in
# temporary space.
#
# Another option that prevents conflicts and potential permission issues is
# to specify `~/.spack/stage`, which ensures each user builds in their home
# directory.
#
# A more traditional path uses the value of `$spack/var/spack/stage`, which
# builds directly inside Spack's instance without staging them in a
# temporary space. Problems with specifying a path inside a Spack instance
# are that it precludes its use as a system package and its ability to be
# pip installable.
#
# In any case, if the username is not already in the path, Spack will append
# the value of `$user` in an attempt to avoid potential conflicts between
# users in shared temporary spaces.
#
# The build stage can be purged with `spack clean --stage` and
# `spack clean -a`, so it is important that the specified directory uniquely
# identifies Spack staging to avoid accidentally wiping out non-Spack work.
# The build stage can be purged with `spack clean --stage`.
build_stage:
- $tempdir/$user/spack-stage
- ~/.spack/stage
# - $spack/var/spack/stage
- $tempdir
- /nfs/tmp2/$user
- $spack/var/spack/stage
# Cache directory for already downloaded source tarballs and archived
# repositories. This can be purged with `spack clean --downloads`.
source_cache: ~/.spack/var/spack/cache
source_cache: $spack/var/spack/cache
# Cache directory for miscellaneous files, like the package index.
@@ -80,20 +67,6 @@ config:
verify_ssl: true
# Suppress gpg warnings from binary package verification
# Only suppresses warnings, gpg failure will still fail the install
# Potential rationale to set True: users have already explicitly trusted the
# gpg key they are using, and may not want to see repeated warnings that it
# is self-signed or something of the sort.
suppress_gpg_warnings: false
# If set to true, Spack will attempt to build any compiler on the spec
# that is not already available. If set to False, Spack will only use
# compilers already configured in compilers.yaml
install_missing_compilers: False
# If set to true, Spack will always check checksums after downloading
# archives. If false, Spack skips the checksum step.
checksum: true
@@ -121,12 +94,10 @@ config:
locks: true
# The maximum number of jobs to use when running `make` in parallel,
# always limited by the number of cores available. For instance:
# - If set to 16 on a 4 cores machine `spack install` will run `make -j4`
# - If set to 16 on a 18 cores machine `spack install` will run `make -j16`
# If not set, Spack will use all available cores up to 16.
# build_jobs: 16
# The default number of jobs to use when running `make` in parallel.
# If set to 4, for example, `spack install` will run `make -j4`.
# If not set, all available cores are used by default.
# build_jobs: 4
# If set to true, Spack will use ccache to cache C compiles.
@@ -137,7 +108,7 @@ config:
# when Spack needs to manage its own package metadata and all operations are
# expected to complete within the default time limit. The timeout should
# therefore generally be left untouched.
db_lock_timeout: 3
db_lock_timeout: 120
# How long to wait when attempting to modify a package (e.g. to install it).
@@ -146,8 +117,3 @@ config:
# anticipates that a significant delay indicates that the lock attempt will
# never succeed.
package_lock_timeout: null
# Control whether Spack embeds RPATH or RUNPATH attributes in ELF binaries.
# Has no effect on macOS. DO NOT MIX these within the same install tree.
# See the Spack documentation for details.
shared_linking: 'rpath'


@@ -16,6 +16,7 @@
modules:
enable:
- tcl
- dotkit
prefix_inspections:
bin:
- PATH
@@ -35,8 +36,6 @@ modules:
- PKG_CONFIG_PATH
lib64/pkgconfig:
- PKG_CONFIG_PATH
share/pkgconfig:
- PKG_CONFIG_PATH
'':
- CMAKE_PREFIX_PATH


@@ -15,7 +15,7 @@
# -------------------------------------------------------------------------
packages:
all:
compiler: [gcc, intel, pgi, clang, xl, nag, fj]
compiler: [gcc, intel, pgi, clang, xl, nag]
providers:
D: [ldc]
awk: [gawk]
@@ -23,28 +23,24 @@ packages:
daal: [intel-daal]
elf: [elfutils]
fftw-api: [fftw]
gl: [mesa+opengl, opengl]
glx: [mesa+glx, opengl]
gl: [mesa, opengl]
glu: [mesa-glu, openglu]
golang: [gcc]
ipp: [intel-ipp]
java: [openjdk, jdk, ibm-java]
java: [jdk]
jpeg: [libjpeg-turbo, libjpeg]
lapack: [openblas]
mariadb-client: [mariadb-c-client, mariadb]
mkl: [intel-mkl]
mpe: [mpe2]
mpi: [openmpi, mpich]
mysql-client: [mysql, mariadb-c-client]
opencl: [pocl]
openfoam: [openfoam-com, openfoam-org, foam-extend]
pil: [py-pillow]
pkgconfig: [pkgconf, pkg-config]
scalapack: [netlib-scalapack]
sycl: [hipsycl]
szip: [libszip, libaec]
tbb: [intel-tbb]
unwind: [libunwind]
sycl: [hipsycl]
permissions:
read: world
write: user


@@ -1,7 +0,0 @@
upstreams:
global:
install_tree: $spack/opt/spack
modules:
tcl: $spack/share/spack/modules
lmod: $spack/share/spack/lmod
dotkit: $spack/share/spack/dotkit


@@ -3,5 +3,3 @@ command_index.rst
spack*.rst
llnl*.rst
_build
.spack-env
spack.lock


@@ -2,7 +2,7 @@
#
# You can set these variables from the command line.
SPHINXOPTS = -W
SPHINXOPTS = -E
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build


@@ -1 +0,0 @@
../../..


@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -39,15 +39,12 @@ available. You can see a list of available package names at the
``spack list``
^^^^^^^^^^^^^^
The ``spack list`` command prints out a list of all of the packages Spack
can install:
The ``spack list`` command prints out a list of all of the packages
Spack can install:
.. command-output:: spack list
:ellipsis: 10
There are thousands of them, so we've truncated the output above, but you
can find a :ref:`full list here <package-list>`.
Packages are listed by name in alphabetical order.
The packages are listed by name in alphabetical order.
A pattern to match with no wildcards, ``*`` or ``?``,
will be treated as though it started and ended with
``*``, so ``util`` is equivalent to ``*util*``. All patterns will be treated
@@ -232,50 +229,6 @@ remove dependent packages *before* removing their dependencies or use the
.. _nondownloadable:
^^^^^^^^^^^^^^^^^^
Garbage collection
^^^^^^^^^^^^^^^^^^
When Spack builds software from sources, it often installs tools that are needed
just to build or test other software. These are not necessary at runtime.
To support cases where removing these tools can be a benefit, Spack provides
the ``spack gc`` ("garbage collector") command, which will uninstall all unneeded packages:
.. code-block:: console
$ spack find
==> 24 installed packages
-- linux-ubuntu18.04-broadwell / gcc@9.0.1 ----------------------
autoconf@2.69 findutils@4.6.0 libiconv@1.16 libszip@2.1.1 m4@1.4.18 openjpeg@2.3.1 pkgconf@1.6.3 util-macros@1.19.1
automake@1.16.1 gdbm@1.18.1 libpciaccess@0.13.5 libtool@2.4.6 mpich@3.3.2 openssl@1.1.1d readline@8.0 xz@5.2.4
cmake@3.16.1 hdf5@1.10.5 libsigsegv@2.12 libxml2@2.9.9 ncurses@6.1 perl@5.30.0 texinfo@6.5 zlib@1.2.11
$ spack gc
==> The following packages will be uninstalled:
-- linux-ubuntu18.04-broadwell / gcc@9.0.1 ----------------------
vn47edz autoconf@2.69 6m3f2qn findutils@4.6.0 ubl6bgk libtool@2.4.6 pksawhz openssl@1.1.1d urdw22a readline@8.0
ki6nfw5 automake@1.16.1 fklde6b gdbm@1.18.1 b6pswuo m4@1.4.18 k3s2csy perl@5.30.0 lp5ya3t texinfo@6.5
ylvgsov cmake@3.16.1 5omotir libsigsegv@2.12 leuzbbh ncurses@6.1 5vmfbrq pkgconf@1.6.3 5bmv4tg util-macros@1.19.1
==> Do you want to proceed? [y/N] y
[ ... ]
$ spack find
==> 9 installed packages
-- linux-ubuntu18.04-broadwell / gcc@9.0.1 ----------------------
hdf5@1.10.5 libiconv@1.16 libpciaccess@0.13.5 libszip@2.1.1 libxml2@2.9.9 mpich@3.3.2 openjpeg@2.3.1 xz@5.2.4 zlib@1.2.11
In the example above, Spack went through all the packages in the database
and removed everything that is not either:
1. A package installed upon explicit request of the user
2. A ``link`` or ``run`` dependency, even transitive, of one of the packages in point 1.
You can check :ref:`cmd-spack-find-metadata` to see how to query for explicitly installed packages
or :ref:`dependency-types` for a more thorough treatment of dependency types.
^^^^^^^^^^^^^^^^^^^^^^^^^
Non-Downloadable Tarballs
^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -321,86 +274,6 @@ the tarballs in question to it (see :ref:`mirrors`):
$ spack install galahad
-----------------------------
Deprecating insecure packages
-----------------------------
``spack deprecate`` allows for the removal of insecure packages with
minimal impact to their dependents.
.. warning::
The ``spack deprecate`` command is designed for use only in
extraordinary circumstances. This is a VERY big hammer to be used
with care.
The ``spack deprecate`` command will remove one package and replace it
with another by replacing the deprecated package's prefix with a link
to the deprecator package's prefix.
.. warning::
The ``spack deprecate`` command makes no promises about binary
compatibility. It is up to the user to ensure the deprecator is
suitable for the deprecated package.
Spack tracks concrete deprecated specs and ensures that no future packages
concretize to a deprecated spec.
The first spec given to the ``spack deprecate`` command is the package
to deprecate. It is an abstract spec that must describe a single
installed package. The second spec argument is the deprecator
spec. By default it must be an abstract spec that describes a single
installed package, but with the ``-i/--install-deprecator`` it can be
any abstract spec that Spack will install and then use as the
deprecator. The ``-I/--no-install-deprecator`` option will ensure
the default behavior.
By default, ``spack deprecate`` will deprecate all dependencies of the
deprecated spec, replacing each by the dependency of the same name in
the deprecator spec. The ``-d/--dependencies`` option will ensure the
default, while the ``-D/--no-dependencies`` option will deprecate only
the root of the deprecated spec in favor of the root of the deprecator
spec.
``spack deprecate`` can use symbolic links or hard links. The default
behavior is symbolic links, but the ``-l/--link-type`` flag can take
options ``hard`` or ``soft``.
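For illustration, here is a hedged sketch of typical invocations; the specs
and versions shown are hypothetical, not taken from any real install tree:
.. code-block:: console
# Replace an installed spec with another installed spec:
$ spack deprecate openssl@1.0.2k openssl@1.0.2u
# Install the deprecator first, and use hard links instead of symlinks:
$ spack deprecate -i -l hard openssl@1.0.2k openssl@1.0.2u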
-----------------------
Verifying installations
-----------------------
The ``spack verify`` command can be used to verify the validity of
Spack-installed packages any time after installation.
At installation time, Spack creates a manifest of every file in the
installation prefix. For links, Spack tracks the mode, ownership, and
destination. For directories, Spack tracks the mode, and
ownership. For files, Spack tracks the mode, ownership, modification
time, hash, and size. The ``spack verify`` command will check, for every
file in each package, whether any of those attributes have changed. It
will also check for files that have been added to or deleted from the
installation prefix. Spack can either check all installed packages
using the ``-a,--all`` option or accept specs listed on the command
line to verify.
The ``spack verify`` command can also verify for individual files that
they haven't been altered since installation time. If the given file
is not in a Spack installation prefix, Spack will report that it is
not owned by any package. To check individual files instead of specs,
use the ``-f,--files`` option.
Spack installation manifests are part of the tarball signed by Spack
for binary package distribution. When installed from a binary package,
Spack uses the packaged installation manifest instead of creating one
at install time.
The ``spack verify`` command also accepts the ``-l,--local`` option to
check only local packages (as opposed to those used transparently from
``upstream`` spack instances) and the ``-j,--json`` option to output
machine-readable JSON data for any errors.
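For example (the spec and file path below are illustrative), the command can
be pointed at everything, at particular specs, or at individual files:
.. code-block:: console
# Check every installed package:
$ spack verify -a
# Check a single spec, local installs only, with JSON error output:
$ spack verify -l -j libelf
# Check an individual file instead of a spec:
$ spack verify -f /path/to/install/prefix/lib/libelf.so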
-------------------------
Seeing installed packages
-------------------------
@@ -458,19 +331,11 @@ Packages are divided into groups according to their architecture and
compiler. Within each group, Spack tries to keep the view simple, and
only shows the version of installed packages.
.. _cmd-spack-find-metadata:
""""""""""""""""""""""""""""""""
Viewing more metadata
""""""""""""""""""""""""""""""""
``spack find`` can filter the package list based on the package name,
spec, or a number of properties of their installation status. For
example, missing dependencies of a spec can be shown with
``--missing``, deprecated packages can be included with
``--deprecated``, packages which were explicitly installed with
``spack install <package>`` can be singled out with ``--explicit`` and
those which have been pulled in only as dependencies with
``spack find`` can filter the package list based on the package name, spec, or
a number of properties of their installation status. For example, missing
dependencies of a spec can be shown with ``--missing``, packages which were
explicitly installed with ``spack install <package>`` can be singled out with
``--explicit`` and those which have been pulled in only as dependencies with
``--implicit``.
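A few hedged examples of these filters in action (output omitted; the spec
name is illustrative):
.. code-block:: console
# Only packages installed by explicit request:
$ spack find --explicit
# Only packages pulled in as dependencies:
$ spack find --implicit
# Missing dependencies of a given spec:
$ spack find --missing mpich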
In some cases, there may be different configurations of the *same*
@@ -523,8 +388,8 @@ use ``spack find --paths``:
callpath@1.0.2 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/callpath@1.0.2-5dce4318
...
You can restrict your search to a particular package by supplying its
name:
And, finally, you can restrict your search to a particular package
by supplying its name:
.. code-block:: console
@@ -534,10 +399,6 @@ name:
libelf@0.8.12 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/libelf@0.8.12
libelf@0.8.13 ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/libelf@0.8.13
""""""""""""""""""""""""""""""""
Spec queries
""""""""""""""""""""""""""""""""
``spack find`` actually does a lot more than this. You can use
*specs* to query for specific configurations and builds of each
package. If you want to find only libelf versions greater than version
@@ -565,109 +426,6 @@ with the 'debug' compile-time option enabled.
The full spec syntax is discussed in detail in :ref:`sec-specs`.
""""""""""""""""""""""""""""""""
Machine-readable output
""""""""""""""""""""""""""""""""
If you only want to see very specific things about installed packages,
Spack has some options for you. ``spack find --format`` can be used to
output only specific fields:
.. code-block:: console
$ spack find --format "{name}-{version}-{hash}"
autoconf-2.69-icynozk7ti6h4ezzgonqe6jgw5f3ulx4
automake-1.16.1-o5v3tc77kesgonxjbmeqlwfmb5qzj7zy
bzip2-1.0.6-syohzw57v2jfag5du2x4bowziw3m5p67
bzip2-1.0.8-zjny4jwfyvzbx6vii3uuekoxmtu6eyuj
cmake-3.15.1-7cf6onn52gywnddbmgp7qkil4hdoxpcb
...
or:
.. code-block:: console
$ spack find --format "{hash:7}"
icynozk
o5v3tc7
syohzw5
zjny4jw
7cf6onn
...
This uses the same syntax as described in documentation for
:meth:`~spack.spec.Spec.format` -- you can use any of the options there.
This is useful for passing metadata about packages to other command-line
tools.
Alternately, if you want something even more machine readable, you can
output each spec as JSON records using ``spack find --json``. This will
output metadata on specs and all dependencies as json:
.. code-block:: console
$ spack find --json sqlite@3.28.0
[
{
"name": "sqlite",
"hash": "3ws7bsihwbn44ghf6ep4s6h4y2o6eznv",
"version": "3.28.0",
"arch": {
"platform": "darwin",
"platform_os": "mojave",
"target": "x86_64"
},
"compiler": {
"name": "clang",
"version": "10.0.0-apple"
},
"namespace": "builtin",
"parameters": {
"fts": true,
"functions": false,
"cflags": [],
"cppflags": [],
"cxxflags": [],
"fflags": [],
"ldflags": [],
"ldlibs": []
},
"dependencies": {
"readline": {
"hash": "722dzmgymxyxd6ovjvh4742kcetkqtfs",
"type": [
"build",
"link"
]
}
}
},
...
]
You can use this with tools like `jq <https://stedolan.github.io/jq/>`_ to quickly create JSON records
structured the way you want:
.. code-block:: console
$ spack find --json sqlite@3.28.0 | jq -C '.[] | { name, version, hash }'
{
"name": "sqlite",
"version": "3.28.0",
"hash": "3ws7bsihwbn44ghf6ep4s6h4y2o6eznv"
}
{
"name": "readline",
"version": "7.0",
"hash": "722dzmgymxyxd6ovjvh4742kcetkqtfs"
}
{
"name": "ncurses",
"version": "6.1",
"hash": "zvaa4lhlhilypw5quj3akyd3apbq5gap"
}
.. _sec-specs:
--------------------
@@ -915,10 +673,10 @@ compile line.
Notice that the value of the compiler flags must be quoted if it
contains any spaces. Any of ``cppflags=-O3``, ``cppflags="-O3"``,
``cppflags='-O3'``, and ``cppflags="-O3 -fPIC"`` are acceptable, but
``cppflags=-O3 -fPIC`` is not. Additionally, if the value of the
``cppflags=-O3 -fPIC`` is not. Additionally, if they value of the
compiler flags is not the last thing on the line, it must be followed
by a space. The command ``spack install libelf cppflags="-O3"%intel``
will be interpreted as an attempt to set ``cppflags="-O3%intel"``.
will be interpreted as an attempt to set `cppflags="-O3%intel"``.
The six compiler flags are injected in the order of implicit make commands
in GNU Autotools. If all flags are set, the order is
@@ -929,13 +687,11 @@ in GNU Autotools. If all flags are set, the order is
Compiler environment variables and additional RPATHs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Sometimes compilers require setting special environment variables to
operate correctly. Spack handles these cases by allowing custom environment
modifications in the ``environment`` attribute of the compiler configuration
section. See also the :ref:`configuration_environment_variables` section
of the configuration files docs for more information.
It is also possible to specify additional ``RPATHs`` that the
In the exceptional case a compiler requires setting special environment
variables, like an explicit library load path, these can be set in an
extra section in the compiler configuration (the supported environment
modification commands are: ``set``, ``unset``, ``append-path``, and
``prepend-path``). The user can also specify additional ``RPATHs`` that the
compiler will add to all executables generated by that compiler. This is
useful for forcing certain compilers to RPATH their own runtime libraries, so
that executables will run without the need to set ``LD_LIBRARY_PATH``.
@@ -952,130 +708,44 @@ that executables will run without the need to set ``LD_LIBRARY_PATH``.
fc: /opt/gcc/bin/gfortran
environment:
unset:
- BAD_VARIABLE
BAD_VARIABLE: # The colon is required but the value must be empty
set:
GOOD_VARIABLE_NUM: 1
GOOD_VARIABLE_STR: good
prepend_path:
prepend-path:
PATH: /path/to/binutils
append_path:
append-path:
LD_LIBRARY_PATH: /opt/gcc/lib
extra_rpaths:
- /path/to/some/compiler/runtime/directory
- /path/to/some/other/compiler/runtime/directory
.. note::
The section ``environment`` is interpreted as an ordered dictionary, which
means two things. First, environment modifications are applied in the order
they are specified in the configuration file. Second, you cannot express
environment modifications that require mixing different commands, i.e. you
cannot ``set`` one variable, then ``prepend-path`` to another one, and then
again ``set`` a third one.
^^^^^^^^^^^^^^^^^^^^^^^
Architecture specifiers
^^^^^^^^^^^^^^^^^^^^^^^
Each node in the dependency graph of a spec has an architecture attribute.
This attribute is a triplet of platform, operating system and processor.
You can specify the elements either separately, by using
the reserved keywords ``platform``, ``os`` and ``target``:
.. code-block:: console
$ spack install libelf platform=linux
$ spack install libelf os=ubuntu18.04
$ spack install libelf target=broadwell
or together by using the reserved keyword ``arch``:
The architecture can be specified by using the reserved
words ``target`` and/or ``os`` (``target=x86-64 os=debian7``). You can also
use the triplet form of platform, operating system and processor.
.. code-block:: console
$ spack install libelf arch=cray-CNL10-haswell
Normally users don't have to bother specifying the architecture if they
are installing software for their current host, as in that case the
values will be detected automatically. If you need fine-grained control
over which packages use which targets (or over *all* packages' default
target), see :ref:`concretization-preferences`.
.. admonition:: Cray machines
The situation is a little bit different for Cray machines, and a detailed
explanation of how the architecture can be set on them can be found at :ref:`cray-support`.
.. _support-for-microarchitectures:
"""""""""""""""""""""""""""""""""""""""
Support for specific microarchitectures
"""""""""""""""""""""""""""""""""""""""
Spack knows how to detect and optimize for many specific microarchitectures
(including recent Intel, AMD and IBM chips) and encodes this information in
the ``target`` portion of the architecture specification. A complete list of
the microarchitectures known to Spack can be obtained in the following way:
.. command-output:: spack arch --known-targets
When a spec is installed, Spack matches the compiler being used with the
microarchitecture being targeted to inject appropriate optimization flags
at compile time. Giving a command such as the following:
.. code-block:: console
$ spack install zlib%gcc@9.0.1 target=icelake
will produce compilation lines similar to:
.. code-block:: console
$ /usr/bin/gcc-9 -march=icelake-client -mtune=icelake-client -c ztest10532.c
$ /usr/bin/gcc-9 -march=icelake-client -mtune=icelake-client -c -fPIC -O2 ztest10532.
...
where the flags ``-march=icelake-client -mtune=icelake-client`` are injected
by Spack based on the requested target and compiler.
If Spack knows that the requested compiler can't optimize for the current target
or can't build binaries for that target at all, it will exit with a meaningful error message:
.. code-block:: console
$ spack install zlib%gcc@5.5.0 target=icelake
==> Error: cannot produce optimized binary for micro-architecture "icelake" with gcc@5.5.0 [supported compiler versions are 8:]
When instead an old compiler is selected on a recent enough microarchitecture but there is
no explicit ``target`` specification, Spack will optimize for the best match it can find instead
of failing:
.. code-block:: console
$ spack arch
linux-ubuntu18.04-broadwell
$ spack spec zlib%gcc@4.8
Input spec
--------------------------------
zlib%gcc@4.8
Concretized
--------------------------------
zlib@1.2.11%gcc@4.8+optimize+pic+shared arch=linux-ubuntu18.04-haswell
$ spack spec zlib%gcc@9.0.1
Input spec
--------------------------------
zlib%gcc@9.0.1
Concretized
--------------------------------
zlib@1.2.11%gcc@9.0.1+optimize+pic+shared arch=linux-ubuntu18.04-broadwell
In the snippet above, for instance, the microarchitecture was demoted to ``haswell`` when
compiling with ``gcc@4.8`` since support to optimize for ``broadwell`` starts from ``gcc@4.9:``.
Finally if Spack has no information to match compiler and target, it will
proceed with the installation but avoid injecting any microarchitecture
specific flags.
.. warning::
Currently Spack doesn't print any warning to the user if it has no information
on which optimization flags should be used for a given compiler. This behavior
might change in the future.
Users on non-Cray systems won't have to worry about specifying the architecture.
Spack will autodetect what kind of operating system is on your machine as well
as the processor. For more information on how the architecture can be
used on Cray machines, see :ref:`cray-support`
.. _sec-virtual-dependencies:
@@ -1310,41 +980,33 @@ directly when you run ``python``:
Using Extensions
^^^^^^^^^^^^^^^^
There are four ways to get ``numpy`` working in Python. The first is
to use :ref:`shell-support`. You can simply ``load`` the extension,
and it will be added to the ``PYTHONPATH`` in your current shell:
There are three ways to get ``numpy`` working in Python. The first is
to use :ref:`shell-support`. You can simply ``use`` or ``load`` the
module for the extension, and it will be added to the ``PYTHONPATH``
in your current shell.
For tcl modules:
.. code-block:: console
$ spack load python
$ spack load py-numpy
Now ``import numpy`` will succeed for as long as you keep your current
session open.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Loading Extensions via Modules
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Instead of using Spack's environment modification capabilities through
the ``spack load`` command, you can load numpy through your
environment modules (using ``environment-modules`` or ``lmod``). This
will also add the extension to the ``PYTHONPATH`` in your current
shell.
or, for dotkit:
.. code-block:: console
$ module load <name of numpy module>
$ spack use python
$ spack use py-numpy
If you do not know the name of the specific numpy module you wish to
load, you can use the ``spack module tcl|lmod loads`` command to get
the name of the module from the Spack spec.
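For instance (module names vary by site; the one shown below is invented):
.. code-block:: console
# Ask Spack which module corresponds to the spec:
$ spack module tcl loads py-numpy
# Then load the module name it prints, e.g.:
$ module load py-numpy-1.13.1-gcc-7.2.0-abcdefg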
Now ``import numpy`` will succeed for as long as you keep your current
session open.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Activating Extensions in a View
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Another way to use extensions is to create a view, which merges the
The second way to use extensions is to create a view, which merges the
python installation along with the extensions into a single prefix.
See :ref:`filesystem-views` for a more in-depth description of views and
:ref:`cmd-spack-view` for usage of the ``spack view`` command.
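As a brief sketch (the view path here is hypothetical), the view is created
with ``spack view`` and the merged prefix is then used directly:
.. code-block:: console
# Merge python and py-numpy into a single prefix of symlinks:
$ spack view symlink ./python-view python py-numpy
# The view's interpreter now sees the extension:
$ ./python-view/bin/python -c "import numpy"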

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -58,26 +58,16 @@ directory. Here's an example of an external configuration:
packages:
openmpi:
paths:
openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64: /opt/openmpi-1.4.3
openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug: /opt/openmpi-1.4.3-debug
openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64: /opt/openmpi-1.6.5-intel
openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7: /opt/openmpi-1.4.3
openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7+debug: /opt/openmpi-1.4.3-debug
openmpi@1.6.5%intel@10.1 arch=linux-x86_64-debian7: /opt/openmpi-1.6.5-intel
This example lists three installations of OpenMPI, one built with GCC,
one built with GCC and debug information, and another built with Intel.
If Spack is asked to build a package that uses one of these MPIs as a
dependency, it will use the pre-installed OpenMPI in
the given directory. Note that the specified path is the top-level
install prefix, not the ``bin`` subdirectory.
``packages.yaml`` can also be used to specify modules to load instead
of the installation prefixes. The following example says that module
``CMake/3.7.2`` provides cmake version 3.7.2.
.. code-block:: yaml
cmake:
modules:
cmake@3.7.2: CMake/3.7.2
the given directory. ``packages.yaml`` can also be used to specify modules
to load instead of the installation prefixes.
Each ``packages.yaml`` begins with a ``packages:`` token, followed
by a list of package names. To specify externals, add a ``paths`` or ``modules``
@@ -107,9 +97,9 @@ be:
packages:
openmpi:
paths:
openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64: /opt/openmpi-1.4.3
openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug: /opt/openmpi-1.4.3-debug
openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64: /opt/openmpi-1.6.5-intel
openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7: /opt/openmpi-1.4.3
openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7+debug: /opt/openmpi-1.4.3-debug
openmpi@1.6.5%intel@10.1 arch=linux-x86_64-debian7: /opt/openmpi-1.6.5-intel
buildable: False
The addition of the ``buildable`` flag tells Spack that it should never build
@@ -148,8 +138,7 @@ Here's an example ``packages.yaml`` file that sets preferred packages:
gperftools:
version: [2.2, 2.4, 2.3]
all:
compiler: [gcc@4.4.7, 'gcc@4.6:', intel, clang, pgi]
target: [sandybridge]
compiler: [gcc@4.4.7, gcc@4.6:, intel, clang, pgi]
providers:
mpi: [mvapich2, mpich, openmpi]
@@ -163,11 +152,11 @@ on the command line if explicitly requested.
Each ``packages.yaml`` file begins with the string ``packages:`` and
package names are specified on the next level. The special string ``all``
applies settings to *all* packages. Underneath each package name is one
or more components: ``compiler``, ``variants``, ``version``,
``providers``, and ``target``. Each component has an ordered list of
spec ``constraints``, with earlier entries in the list being preferred
over later entries.
applies settings to each package. Underneath each package name is
one or more components: ``compiler``, ``variants``, ``version``,
or ``providers``. Each component has an ordered list of spec
``constraints``, with earlier entries in the list being preferred over
later entries.
Sometimes a package installation may have constraints that forbid
the first concretization rule, in which case Spack will use the first

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -40,7 +40,6 @@ on these ideas for each distinct build system that Spack supports:
build_systems/cmakepackage
build_systems/mesonpackage
build_systems/qmakepackage
build_systems/sippackage
.. toctree::
:maxdepth: 1
@@ -56,7 +55,6 @@ on these ideas for each distinct build system that Spack supports:
:maxdepth: 1
:caption: Other
build_systems/bundlepackage
build_systems/cudapackage
build_systems/intelpackage
build_systems/custompackage

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,52 +0,0 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _bundlepackage:
-------------
BundlePackage
-------------
``BundlePackage`` represents a set of packages that are expected to work well
together, such as a collection of commonly used software libraries. The
associated software is specified as bundle dependencies.
^^^^^^^^
Creation
^^^^^^^^
Be sure to specify the ``bundle`` template if you are using ``spack create``
to generate a package from the template. For example, use the following
command to create a bundle package whose class name will be ``Mybundle``:
.. code-block:: console
$ spack create --template bundle --name mybundle
^^^^^^
Phases
^^^^^^
The ``BundlePackage`` base class does not provide any phases by default
since the bundle does not represent a build system.
^^^
URL
^^^
The ``url`` property does not have meaning since there is no package-specific
code to fetch.
^^^^^^^
Version
^^^^^^^
At least one ``version`` must be specified in order for the package to
build.

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -120,7 +120,7 @@ version numbers seen with most other Spack packages. For example, we have:
...
Preferred version:
professional.2018.3 http:...
Safe versions:
professional.2018.3 http:...
...
@@ -728,7 +728,7 @@ For packages that contain a compiler, follow `the previous section
.. code-block:: console
$ spack install intel-mpi@2018.3.199
$ spack install intel-mpi@2018.3.199
$ spack install intel-mpi@2018.3.199 %intel@18
4. To prepare the new packages for use with client packages,
@@ -802,7 +802,7 @@ by one of the following means:
Configure the order of compilers in the appropriate ``packages.yaml`` file,
under either an ``all:`` or client-package-specific entry, in a
``compiler:`` list. Consult the Spack documentation for
`Configuring Package Preferences <https://spack-tutorial.readthedocs.io/en/latest/tutorial_configuration.html#configuring-package-preferences>`_
:ref:`Configuring Package Preferences <configs-tutorial-package-prefs>`
and
:ref:`Concretization Preferences <concretization-preferences>`.
@@ -851,7 +851,7 @@ client packages, edit the ``packages.yaml`` file. Customize, either in the
the virtual packages and whose values are the Spack specs that satisfy the
virtual package, in order of decreasing preference. To learn more about the
``providers:`` settings, see the Spack tutorial for
`Configuring Package Preferences <https://spack-tutorial.readthedocs.io/en/latest/tutorial_configuration.html#configuring-package-preferences>`_
:ref:`Configuring Package Preferences <configs-tutorial-package-prefs>`
and the section
:ref:`Concretization Preferences <concretization-preferences>`.
@@ -972,7 +972,7 @@ a *virtual* ``mkl`` package is declared in Spack.
.. code-block:: python
self.spec['blas'].headers.include_flags
and to generate linker options (``-L<dir> -llibname ...``), use the same as above,
.. code-block:: python

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -54,28 +54,6 @@ Packages that use the Meson build system can be identified by the
presence of a ``meson.build`` file. This file declares things
like build instructions and dependencies.
One thing to look for is the ``meson_version`` key that gets passed
to the ``project`` function:
.. code-block:: none
:emphasize-lines: 10
project('gtk+', 'c',
version: '3.94.0',
default_options: [
'buildtype=debugoptimized',
'warning_level=1',
# We only need c99, but glib needs GNU-specific features
# https://github.com/mesonbuild/meson/issues/2289
'c_std=gnu99',
],
meson_version: '>= 0.43.0',
license: 'LGPLv2.1+')
This means that Meson 0.43.0 is the earliest release that will work.
You should specify this in a ``depends_on`` statement.
^^^^^^^^^^^^^^^^^^^^^^^^^
Build system dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -89,28 +67,6 @@ the ``MesonPackage`` base class already contains:
depends_on('meson', type='build')
depends_on('ninja', type='build')
If you need to specify a particular version requirement, you can
override this in your package:
.. code-block:: python
depends_on('meson@0.43.0:', type='build')
depends_on('ninja', type='build')
^^^^^^^^^^^^^^^^^^^
Finding meson flags
^^^^^^^^^^^^^^^^^^^
To get a list of valid flags that can be passed to ``meson``, run the
following command in the directory that contains ``meson.build``:
.. code-block:: console
$ meson setup --help
^^^^^^^^^^^^^^^^^^^^^^^^^^
Passing arguments to meson
^^^^^^^^^^^^^^^^^^^^^^^^^^

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -42,11 +42,7 @@ If it isn't on CRAN, try Bioconductor, another common R repository.
For the purposes of this tutorial, we will be walking through
`r-caret <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-caret/package.py>`_
as an example. If you search for "CRAN caret", you will quickly find what
you are looking for at https://cran.r-project.org/package=caret.
https://cran.r-project.org is the main CRAN website. However, CRAN also
has a https://cloud.r-project.org site that automatically redirects to
`mirrors around the world <https://cloud.r-project.org/mirrors.html>`_.
For stability and performance reasons, we will use https://cloud.r-project.org/package=caret.
you are looking for at https://cran.r-project.org/web/packages/caret/index.html.
If you search for "Package source", you will find the download URL for
the latest release. Use this URL with ``spack create`` to create a new
package.
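For example, assuming a hypothetical tarball URL (copy the real "Package
source" link from CRAN instead):
.. code-block:: console
# URL and version are illustrative only:
$ spack create https://cran.r-project.org/src/contrib/caret_6.0-86.tar.gz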
@@ -97,8 +93,8 @@ If you look at the bottom of the page, you'll see:
Please use the canonical form https://CRAN.R-project.org/package=caret to link to this page.
Please uphold the wishes of the CRAN admins and use
https://cloud.r-project.org/package=caret as the homepage instead of
https://cloud.r-project.org/web/packages/caret/index.html. The latter may
https://CRAN.R-project.org/package=caret as the homepage instead of
https://cran.r-project.org/web/packages/caret/index.html. The latter may
change without notice.
^^^
@@ -113,12 +109,12 @@ List URL
^^^^^^^^
CRAN maintains a single webpage containing the latest release of every
single package: https://cloud.r-project.org/src/contrib/
single package: https://cran.r-project.org/src/contrib/
Of course, as soon as a new release comes out, the version you were using
in your package is no longer available at that URL. It is moved to an
archive directory. If you search for "Old sources", you will find:
https://cloud.r-project.org/src/contrib/Archive/caret
https://cran.r-project.org/src/contrib/Archive/caret
If you only specify the URL for the latest release, your package will
no longer be able to fetch that version as soon as a new release comes
@@ -142,12 +138,12 @@ every R package needs this, the ``RPackage`` base class contains:
Take a close look at the homepage for ``caret``. If you look at the
"Depends" section, you'll notice that ``caret`` depends on "R (≥ 3.2.0)".
"Depends" section, you'll notice that ``caret`` depends on "R (≥ 2.10)".
You should add this to your package like so:
.. code-block:: python
depends_on('r@3.2.0:', type=('build', 'run'))
depends_on('r@2.10:', type=('build', 'run'))
^^^^^^^^^^^^^^
@@ -166,7 +162,7 @@ and list all of their dependencies in the following sections:
As far as Spack is concerned, all 3 of these dependency types
correspond to ``type=('build', 'run')``, so you don't have to worry
about the details. If you are curious what they mean,
about them. If you are curious what they mean,
https://github.com/spack/spack/issues/2951 has a pretty good summary:
``Depends`` is required and will cause those R packages to be *attached*,
@@ -197,14 +193,6 @@ R packages already have enough dependencies as it is, and adding
optional dependencies can really slow down the concretization
process. They can also introduce circular dependencies.
A fifth rarely used section is:
* Enhances
This means that the package can be used as an optional dependency
for another package. Again, these packages should **NOT** be listed
as dependencies.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Core, recommended, and non-core packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -220,8 +208,8 @@ If you look at the ``caret`` homepage, you'll notice a few dependencies
that don't have a link to the package, like ``methods``, ``stats``, and
``utils``. These packages are part of the core R distribution and are
tied to the R version installed. You can basically consider these to be
"R itself". These are so essential to R that it would not make sense for
them to be updated via CRAN. If you did, you would basically get a different
"R itself". These are so essential to R so it would not make sense that
they could be updated via CRAN. If so, you would basically get a different
version of R. Thus, they're updated when R is updated.
You can find a list of these core libraries at:
@@ -277,7 +265,7 @@ Non-R dependencies
Some packages depend on non-R libraries for linking. Check out the
`r-stringi <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-stringi/package.py>`_
package for an example: https://cloud.r-project.org/package=stringi.
package for an example: https://CRAN.R-project.org/package=stringi.
If you search for the text "SystemRequirements", you will see:
ICU4C (>= 52, optional)
@@ -356,11 +344,3 @@ External documentation
For more information on installing R packages, see:
https://stat.ethz.ch/R-manual/R-devel/library/utils/html/INSTALL.html
For more information on writing R packages, see:
https://cloud.r-project.org/doc/manuals/r-release/R-exts.html
In particular,
https://cloud.r-project.org/doc/manuals/r-release/R-exts.html#Package-Dependencies
has a great explanation of the difference between Depends, Imports,
and LinkingTo.

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,141 +0,0 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _sippackage:
----------
SIPPackage
----------
SIP is a tool that makes it very easy to create Python bindings for C and C++
libraries. It was originally developed to create PyQt, the Python bindings for
the Qt toolkit, but can be used to create bindings for any C or C++ library.
SIP comprises a code generator and a Python module. The code generator
processes a set of specification files and generates C or C++ code which is
then compiled to create the bindings extension module. The SIP Python module
provides support functions to the automatically generated code.
^^^^^^
Phases
^^^^^^
The ``SIPPackage`` base class comes with the following phases:
#. ``configure`` - configure the package
#. ``build`` - build the package
#. ``install`` - install the package
By default, these phases run:
.. code-block:: console
$ python configure.py --bindir ... --destdir ...
$ make
$ make install
^^^^^^^^^^^^^^^
Important files
^^^^^^^^^^^^^^^
Each SIP package comes with a custom ``configure.py`` build script,
written in Python. This script contains instructions to build the project.
^^^^^^^^^^^^^^^^^^^^^^^^^
Build system dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^
``SIPPackage`` requires several dependencies. Python is needed to run
the ``configure.py`` build script, and to run the resulting Python
libraries. Qt is needed to provide the ``qmake`` command. SIP is also
needed to build the package. SIP is an unusual dependency in that it
must be installed in the same installation directory as the package,
so instead of a ``depends_on``, we use a ``resource``. All of these
dependencies are automatically added via the base class:
.. code-block:: python
extends('python')
depends_on('qt', type='build')
resource(name='sip',
url='https://www.riverbankcomputing.com/static/Downloads/sip/4.19.18/sip-4.19.18.tar.gz',
sha256='c0bd863800ed9b15dcad477c4017cdb73fa805c25908b0240564add74d697e1e',
destination='.')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Passing arguments to ``configure.py``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Each phase comes with a ``<phase_args>`` function that can be used to pass
arguments to that particular phase. For example, if you need to pass
arguments to the configure phase, you can use:
.. code-block:: python
def configure_args(self, spec, prefix):
return ['--no-python-dbus']
A list of valid options can be found by running ``python configure.py --help``.
^^^^^^^
Testing
^^^^^^^
Just because a package successfully built does not mean that it built
correctly. The most reliable test of whether or not the package was
correctly installed is to attempt to import all of the modules that
get installed. To get a list of modules, run the following command
in the site-packages directory:
.. code-block:: console
$ python
>>> import setuptools
>>> setuptools.find_packages()
['PyQt5']
Large, complex packages like ``PyQt5`` will return a long list of
packages, while other packages may return an empty list. These packages
only install a single ``foo.py`` file. In Python packaging lingo,
a "package" is a directory containing files like:
.. code-block:: none
foo/__init__.py
foo/bar.py
foo/baz.py
whereas a "module" is a single Python file. Since ``find_packages``
only returns packages, you'll have to determine the correct module
names yourself. You can now add these packages and modules to the
package like so:
.. code-block:: python
import_modules = ['PyQt5']
When you run ``spack install --test=root py-pyqt5``, Spack will attempt
to import the ``PyQt5`` module after installation.
These tests most often catch missing dependencies and non-RPATHed
libraries.
^^^^^^^^^^^^^^^^^^^^^^
External documentation
^^^^^^^^^^^^^^^^^^^^^^
For more information on the SIP build system, see:
* https://www.riverbankcomputing.com/software/sip/intro
* https://www.riverbankcomputing.com/static/Docs/sip/
* https://wiki.python.org/moin/SIP

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,96 +0,0 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _chain:
============================
Chaining Spack Installations
============================
You can point your Spack installation to another installation to use any
packages that are installed there. To register the other Spack instance,
you can add it as an entry to ``upstreams.yaml``:
.. code-block:: yaml
upstreams:
spack-instance-1:
install_tree: /path/to/other/spack/opt/spack
spack-instance-2:
install_tree: /path/to/another/spack/opt/spack
``install_tree`` must point to the ``opt/spack`` directory inside of the
Spack base directory.
Once the upstream Spack instance has been added, ``spack find`` will
automatically check the upstream instance when querying installed packages,
and new package installations for the local Spack install will use any
dependencies that are installed in the upstream instance.
This other instance of Spack has no knowledge of the local Spack instance
and may not have the same permissions or ownership as the local Spack instance.
This has the following consequences:
#. Upstream Spack instances are not locked. Therefore it is up to users to
make sure that the local instance is not using an upstream instance when it
is being modified.
#. Users should not uninstall packages from the upstream instance. Since the
upstream instance doesn't know about the local instance, it cannot prevent
the uninstallation of packages which the local instance depends on.
Other details about upstream installations:
#. If a package is installed both locally and upstream, the local installation
will always be used as a dependency. This can occur if the local Spack
installs a package which is not present in the upstream, but later on the
upstream Spack instance also installs that package.
#. If an upstream Spack instance registers and installs an external package,
the local Spack instance will treat this the same as a Spack-installed
package. This feature will only work if the upstream Spack instance
includes the upstream functionality (i.e. if its commit is after March
27, 2019).
---------------------------------------
Using Multiple Upstream Spack Instances
---------------------------------------
A single Spack instance can use multiple upstream Spack installations. Spack
will search upstream instances in the order you list them in your
configuration. If your installation refers to instances X and Y, in that order,
then instance X must list Y as an upstream in its own ``upstreams.yaml``.
-----------------------------------
Using Modules for Upstream Packages
-----------------------------------
The local Spack instance does not generate modules for packages which are
installed upstream. The local Spack instance can be configured to use the
modules generated by the upstream Spack instance.
There are two requirements to use the modules created by an upstream Spack
instance: firstly the upstream instance must do a ``spack module tcl refresh``,
which generates an index file that maps installed packages to their modules;
secondly, the local Spack instance must add a ``modules`` entry to the
configuration:
.. code-block:: yaml
upstreams:
spack-instance-1:
install_tree: /path/to/other/spack/opt/spack
modules:
tcl: /path/to/other/spack/share/spack/modules
Each time new packages are installed in the upstream Spack instance, the
upstream Spack maintainer should run ``spack module tcl refresh`` (or the
corresponding command for the type of module they intend to use).
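As a sketch of that workflow, the refresh is run in the upstream instance
after each batch of installs:
.. code-block:: console
# In the upstream Spack instance, after installing new packages:
$ spack module tcl refresh
This regenerates the module files and the index that maps installed packages
to modules, which the local instance reads.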
.. note::
Spack can generate modules that :ref:`automatically load
<autoloading-dependencies>` the modules of dependency packages. Spack cannot
currently do this for modules in upstream packages.

View File

@@ -1,4 +1,4 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -20,45 +20,67 @@
import sys
import os
import re
import shutil
import subprocess
from glob import glob
from sphinx.ext.apidoc import main as sphinx_apidoc
# Since Sphinx 1.7, sphinx.apidoc has been moved to sphinx.ext.apidoc
# sphinx.apidoc is deprecated and will be removed in Sphinx 2.0
try:
from sphinx.ext.apidoc import main as sphinx_apidoc
except ImportError:
from sphinx.apidoc import main as sphinx_apidoc
# -- Spack customizations -----------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('_spack_root/lib/spack/external'))
sys.path.insert(0, os.path.abspath('../external'))
if sys.version_info[0] < 3:
sys.path.insert(
0, os.path.abspath('_spack_root/lib/spack/external/yaml/lib'))
sys.path.insert(0, os.path.abspath('../external/yaml/lib'))
else:
sys.path.insert(
0, os.path.abspath('_spack_root/lib/spack/external/yaml/lib3'))
sys.path.append(os.path.abspath('_spack_root/lib/spack/'))
sys.path.insert(0, os.path.abspath('../external/yaml/lib3'))
sys.path.append(os.path.abspath('..'))
# Add the Spack bin directory to the path so that we can use its output in docs.
os.environ['SPACK_ROOT'] = os.path.abspath('_spack_root')
os.environ['PATH'] += "%s%s" % (os.pathsep, os.path.abspath('_spack_root/bin'))
spack_root = '../../..'
os.environ['SPACK_ROOT'] = spack_root
os.environ['PATH'] += '%s%s/bin' % (os.pathsep, spack_root)
# Set an environment variable so that colify will print output like it would to
# a terminal.
os.environ['COLIFY_SIZE'] = '25x120'
#
# Generate package list using spack command
#
with open('package_list.html', 'w') as plist_file:
subprocess.Popen(
[spack_root + '/bin/spack', 'list', '--format=html'],
stdout=plist_file)
#
# Find all the `cmd-spack-*` references and add them to a command index
#
import spack
import spack.cmd
command_names = spack.cmd.all_commands()
documented_commands = set()
for filename in glob('*rst'):
with open(filename) as f:
for line in f:
match = re.match('.. _cmd-(spack-.*):', line)
if match:
documented_commands.add(match.group(1).strip())
os.environ['COLUMNS'] = '120'
# Generate full package list if needed
subprocess.call([
'spack', 'list', '--format=html', '--update=package_list.html'])
# Generate a command index if an update is needed
subprocess.call([
'spack', 'commands',
'--format=rst',
'--header=command_index.in',
'--update=command_index.rst'] + glob('*rst'))
shutil.copy('command_index.in', 'command_index.rst')
with open('command_index.rst', 'a') as index:
subprocess.Popen(
[spack_root + '/bin/spack', 'commands', '--format=rst'] + list(
documented_commands),
stdout=index)
#
# Run sphinx-apidoc
@@ -68,12 +90,13 @@
# Without this, the API Docs will never actually update
#
apidoc_args = [
'--force', # Older versions of Sphinx ignore the first argument
'--force', # Overwrite existing files
'--no-toc', # Don't create a table of contents file
'--output-dir=.', # Directory to place all output
]
sphinx_apidoc(apidoc_args + ['_spack_root/lib/spack/spack'])
sphinx_apidoc(apidoc_args + ['_spack_root/lib/spack/llnl'])
sphinx_apidoc(apidoc_args + ['../spack'])
sphinx_apidoc(apidoc_args + ['../llnl'])
# Enable todo items
todo_include_todos = True
@@ -90,12 +113,12 @@ def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode):
env, fromdocname, builder, typ, target, node, contnode)
def setup(sphinx):
sphinx.add_domain(PatchedPythonDomain, override=True)
sphinx.override_domain(PatchedPythonDomain)
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
needs_sphinx = '1.8'
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
@@ -128,14 +151,13 @@ def setup(sphinx):
# General information about the project.
project = u'Spack'
copyright = u'2013-2019, Lawrence Livermore National Laboratory.'
copyright = u'2013-2018, Lawrence Livermore National Laboratory.'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
import spack
version = '.'.join(str(s) for s in spack.spack_version_info[:2])
# The full version, including alpha/beta/rc tags.
release = spack.spack_version
@@ -144,13 +166,6 @@ def setup(sphinx):
# for a list of supported languages.
#language = None
# Places to look for .po/.mo files for doc translations
#locale_dirs = []
# Sphinx gettext settings
gettext_compact = True
gettext_uuid = False
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
@@ -159,7 +174,7 @@ def setup(sphinx):
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build', '_spack_root', '.spack-env']
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
@@ -176,25 +191,7 @@ def setup(sphinx):
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
# We use our own extension of the default style with a few modifications
from pygments.style import Style
from pygments.styles.default import DefaultStyle
from pygments.token import Generic, Comment, Text
class SpackStyle(DefaultStyle):
styles = DefaultStyle.styles.copy()
background_color = "#f4f4f8"
styles[Generic.Output] = "#355"
styles[Generic.Prompt] = "bold #346ec9"
import pkg_resources
dist = pkg_resources.Distribution(__file__)
sys.path.append('.') # make 'conf' module findable
ep = pkg_resources.EntryPoint.parse('spack = conf:SpackStyle', dist=dist)
dist._ep_map = {'pygments.styles': {'plugin1': ep}}
pkg_resources.working_set.add(dist)
pygments_style = 'spack'
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
@@ -223,12 +220,12 @@ class SpackStyle(DefaultStyle):
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
html_logo = '_spack_root/share/spack/logo/spack-logo-white-text.svg'
html_logo = '../../../share/spack/logo/spack-logo-white-text.svg'
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
html_favicon = '_spack_root/share/spack/logo/favicon.ico'
html_favicon = '../../../share/spack/logo/favicon.ico'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -13,7 +13,7 @@ Spack's basic configuration options are set in ``config.yaml``. You can
see the default settings by looking at
``etc/spack/defaults/config.yaml``:
.. literalinclude:: _spack_root/etc/spack/defaults/config.yaml
.. literalinclude:: ../../../etc/spack/defaults/config.yaml
:language: yaml
These settings can be overridden in ``etc/spack/config.yaml`` or
@@ -30,34 +30,25 @@ Default is ``$spack/opt/spack``.
``install_hash_length`` and ``install_path_scheme``
---------------------------------------------------
The default Spack installation path can be very long and can create problems
for scripts with hardcoded shebangs. Additionally, when using the Intel
compiler, and if there is also a long list of dependencies, the compiler may
segfault. If you see the following:
The default Spack installation path can be very long and can create
problems for scripts with hardcoded shebangs. There are two parameters
to help with that. Firstly, the ``install_hash_length`` parameter can
set the length of the hash in the installation path from 1 to 32. The
default path uses the full 32 characters.
.. code-block:: console
: internal error: ** The compiler has encountered an unexpected problem.
** Segmentation violation signal raised. **
Access violation or stack overflow. Please contact Intel Support for assistance.
it may be because variables containing dependency specs may be too long. There
are two parameters to help with long path names. Firstly, the
``install_hash_length`` parameter can set the length of the hash in the
installation path from 1 to 32. The default path uses the full 32 characters.
Secondly, it is also possible to modify the entire installation
scheme. By default Spack uses
``{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}``
Secondly, it is
also possible to modify the entire installation scheme. By default
Spack uses
``${ARCHITECTURE}/${COMPILERNAME}-${COMPILERVER}/${PACKAGE}-${VERSION}-${HASH}``
where the tokens that are available for use in this directive are the
same as those understood by the :meth:`~spack.spec.Spec.format`
method. Using this parameter it is possible to use a different package
layout or reduce the depth of the installation paths. For example
same as those understood by the ``Spec.format`` method. Using this parameter it
is possible to use a different package layout or reduce the depth of
the installation paths. For example
.. code-block:: yaml
config:
install_path_scheme: '{name}/{version}/{hash:7}'
install_path_scheme: '${PACKAGE}/${VERSION}/${HASH:7}'
would install packages into sub-directories using only the package
name, version and a hash length of 7 characters.
@@ -84,6 +75,7 @@ the location for each type of module. e.g.:
module_roots:
tcl: $spack/share/spack/modules
lmod: $spack/share/spack/lmod
dotkit: $spack/share/spack/dotkit
See :ref:`modules` for details.
@@ -93,46 +85,40 @@ See :ref:`modules` for details.
Spack is designed to run out of a user home directory, and on many
systems the home directory is a (slow) network file system. On most systems,
building in a temporary file system is faster. Usually, there is also more
space available in the temporary location than in the home directory. If the
username is not already in the path, Spack will append the value of ``$user`` to
the selected ``build_stage`` path.
.. warning:: We highly recommend specifying ``build_stage`` paths that
distinguish between staging and other activities to ensure
``spack clean`` does not inadvertently remove unrelated files.
Spack prepends ``spack-stage-`` to temporary staging directory names to
reduce this risk. Using a combination of ``spack`` and/or ``stage`` in
each specified path, as shown in the default settings and documented
examples, will add another layer of protection.
building in a temporary file system results in faster builds than building
in the home directory. Usually, there is also more space available in
the temporary location than in the home directory. So, Spack tries to
create build stages in temporary space.
By default, Spack's ``build_stage`` is configured like this:
.. code-block:: yaml
build_stage:
- $tempdir/$user/spack-stage
- ~/.spack/stage
- $tempdir
- /nfs/tmp2/$user
- $spack/var/spack/stage
This can be an ordered list of paths that Spack should search when trying to
This is an ordered list of paths that Spack should search when trying to
find a temporary directory for the build stage. The list is searched in
order, and Spack will use the first directory to which it has write access.
Specifying `~/.spack/stage` first will ensure each user builds in their home
directory. The historic Spack stage path `$spack/var/spack/stage` will build
directly inside the Spack instance. See :ref:`config-file-variables` for more
on ``$tempdir`` and ``$spack``.
See :ref:`config-file-variables` for more on ``$tempdir`` and ``$spack``.
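For example, a site that wants all builds staged on a scratch filesystem
might use a sketch like this (the path is hypothetical):

.. code-block:: yaml

   config:
     build_stage:
       - /scratch/$user/spack-stage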
When Spack builds a package, it creates a temporary directory within the
``build_stage``. After the package is successfully installed, Spack deletes
the temporary directory it used to build. Unsuccessful builds are not
deleted, but you can manually purge them with :ref:`spack clean --stage
``build_stage``, and it creates a symbolic link to that directory in
``$spack/var/spack/stage``. This is used to track the stage.
After a package is successfully installed, Spack deletes the temporary
directory it used to build. Unsuccessful builds are not deleted, but you
can manually purge them with :ref:`spack clean --stage
<cmd-spack-clean>`.
.. note::
The build will fail if there is no writable directory in the ``build_stage``
list; user- and site-specific settings are searched first.
The last item in the list is ``$spack/var/spack/stage``. If this is the
only writable directory in the ``build_stage`` list, Spack will build
*directly* in ``$spack/var/spack/stage`` and will not link to temporary
space.
--------------------
``source_cache``
@@ -194,23 +180,16 @@ set ``dirty`` to ``true`` to skip the cleaning step and make all builds
"dirty" by default. Be aware that this will reduce the reproducibility
of builds.
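A minimal sketch enabling dirty builds in ``config.yaml``:

.. code-block:: yaml

   config:
     dirty: true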
.. _build-jobs:
--------------
``build_jobs``
--------------
Unless overridden in a package or on the command line, Spack builds all
packages in parallel. The default parallelism is equal to the number of
cores on your machine, up to 16. Parallelism cannot exceed the number of
cores available on the host. For a build system that uses Makefiles, this
means running:
- ``make -j<build_jobs>``, when ``build_jobs`` is less than the number of
cores on the machine
- ``make -j<ncores>``, when ``build_jobs`` is greater or equal to the
number of cores on the machine
packages in parallel. For a build system that uses Makefiles, this means
running ``make -j<build_jobs>``, where ``build_jobs`` is the number of
threads to use.
The default parallelism is equal to the number of cores on your machine.
If you work on a shared login node or have a strict ulimit, it may be
necessary to set the default to a lower value. By setting ``build_jobs``
to 4, for example, commands like ``spack install`` will run ``make -j4``
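For example, a sketch limiting parallelism in ``config.yaml``:

.. code-block:: yaml

   config:
     build_jobs: 4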
@@ -236,24 +215,3 @@ ccache`` to learn more about the default settings and how to change
them). Please note that we currently disable ccache's ``hash_dir``
feature to avoid an issue with the stage directory (see
https://github.com/LLNL/spack/pull/3761#issuecomment-294352232).
------------------
``shared_linking``
------------------
Control whether Spack embeds ``RPATH`` or ``RUNPATH`` attributes in ELF binaries
so that they can find their dependencies. Has no effect on macOS.
Two options are allowed:
1. ``rpath`` uses ``RPATH`` and forces the ``--disable-new-dtags`` flag to be passed to the linker
2. ``runpath`` uses ``RUNPATH`` and forces the ``--enable-new-dtags`` flag to be passed to the linker
``RPATH`` search paths have higher precedence than ``LD_LIBRARY_PATH``
and ld.so will search for libraries in transitive ``RPATHs`` of
parent objects.
``RUNPATH`` search paths have lower precedence than ``LD_LIBRARY_PATH``,
and ld.so will ONLY search for dependencies in the ``RUNPATH`` of
the loading object.
DO NOT MIX the two options within the same install tree.
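A minimal sketch selecting one of the two values in ``config.yaml``:

.. code-block:: yaml

   config:
     shared_linking: rpath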

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -36,8 +36,8 @@ Here is an example ``config.yaml`` file:
module_roots:
lmod: $spack/share/spack/lmod
build_stage:
- $tempdir/$user/spack-stage
- ~/.spack/stage
- $tempdir
- /nfs/tmp2/$user
Each Spack configuration file is nested under a top-level section
corresponding to its name. So, ``config.yaml`` starts with ``config:``,
@@ -244,8 +244,8 @@ your configurations look like this:
module_roots:
lmod: $spack/share/spack/lmod
build_stage:
- $tempdir/$user/spack-stage
- ~/.spack/stage
- $tempdir
- /nfs/tmp2/$user
.. code-block:: yaml
@@ -269,8 +269,8 @@ command:
module_roots:
lmod: $spack/share/spack/lmod
build_stage:
- $tempdir/$user/spack-stage
- ~/.spack/stage
- $tempdir
- /nfs/tmp2/$user
.. _config-overrides:
@@ -312,8 +312,8 @@ Let's revisit the ``config.yaml`` example one more time. The
:caption: $(prefix)/etc/spack/defaults/config.yaml
build_stage:
- $tempdir/$user/spack-stage
- ~/.spack/stage
- $tempdir
- /nfs/tmp2/$user
Suppose the user configuration adds its *own* list of ``build_stage``
@@ -323,7 +323,7 @@ paths:
:caption: ~/.spack/config.yaml
build_stage:
- /lustre-scratch/$user/spack
- /lustre-scratch/$user
- ~/mystage
@@ -341,10 +341,10 @@ get config`` shows the result:
module_roots:
lmod: $spack/share/spack/lmod
build_stage:
- /lustre-scratch/$user/spack
- /lustre-scratch/$user
- ~/mystage
- $tempdir/$user/spack-stage
- ~/.spack/stage
- $tempdir
- /nfs/tmp2/$user
As in :ref:`config-overrides`, the higher-precedence scope can
@@ -356,7 +356,7 @@ user config looked like this:
:caption: ~/.spack/config.yaml
build_stage::
- /lustre-scratch/$user/spack
- /lustre-scratch/$user
- ~/mystage
@@ -371,7 +371,7 @@ The merged configuration would look like this:
module_roots:
lmod: $spack/share/spack/lmod
build_stage:
- /lustre-scratch/$user/spack
- /lustre-scratch/$user
- ~/mystage
@@ -427,33 +427,6 @@ home directory, and ``~user`` will expand to a specified user's home
directory. The ``~`` must appear at the beginning of the path, or Spack
will not expand it.
.. _configuration_environment_variables:
-------------------------
Environment Modifications
-------------------------
Spack allows you to prescribe custom environment modifications in a few places
within its configuration files. Wherever these modifications are allowed,
they are specified as a dictionary, as in the following example:
.. code-block:: yaml
environment:
set:
LICENSE_FILE: '/path/to/license'
unset:
- CPATH
- LIBRARY_PATH
append_path:
PATH: '/new/bin/dir'
The permitted actions are ``set``, ``unset``, ``append_path``,
``prepend_path`` and finally ``remove_path``. They all require a dictionary
mapping variable names to the values used for the modification.
The only exception is ``unset``, which requires just a list of variable names.
No particular order is guaranteed for the execution of these modifications.
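For instance, a sketch using the two actions not shown above (the paths are
hypothetical):

.. code-block:: yaml

   environment:
     prepend_path:
       LD_LIBRARY_PATH: '/new/lib/dir'
     remove_path:
       PATH: '/old/bin/dir'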
----------------------------
Seeing Spack's Configuration
----------------------------
@@ -486,13 +459,14 @@ account all scopes. For example, to see the fully merged
install_tree: $spack/opt/spack
template_dirs:
- $spack/templates
directory_layout: {architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}
directory_layout: ${ARCHITECTURE}/${COMPILERNAME}-${COMPILERVER}/${PACKAGE}-${VERSION}-${HASH}
module_roots:
tcl: $spack/share/spack/modules
lmod: $spack/share/spack/lmod
dotkit: $spack/share/spack/dotkit
build_stage:
- $tempdir/$user/spack-stage
- ~/.spack/stage
- $tempdir
- /nfs/tmp2/$user
- $spack/var/spack/stage
source_cache: $spack/var/spack/cache
misc_cache: ~/.spack/cache
@@ -536,13 +510,14 @@ down the problem:
./my-scope/config.yaml:2 install_tree: /path/to/some/tree
/home/myuser/spack/etc/spack/defaults/config.yaml:23 template_dirs:
/home/myuser/spack/etc/spack/defaults/config.yaml:24 - $spack/templates
/home/myuser/spack/etc/spack/defaults/config.yaml:28 directory_layout: {architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}
/home/myuser/spack/etc/spack/defaults/config.yaml:28 directory_layout: ${ARCHITECTURE}/${COMPILERNAME}-${COMPILERVER}/${PACKAGE}-${VERSION}-${HASH}
/home/myuser/spack/etc/spack/defaults/config.yaml:32 module_roots:
/home/myuser/spack/etc/spack/defaults/config.yaml:33 tcl: $spack/share/spack/modules
/home/myuser/spack/etc/spack/defaults/config.yaml:34 lmod: $spack/share/spack/lmod
/home/myuser/spack/etc/spack/defaults/config.yaml:35 dotkit: $spack/share/spack/dotkit
/home/myuser/spack/etc/spack/defaults/config.yaml:49 build_stage:
/home/myuser/spack/etc/spack/defaults/config.yaml:50 - $tempdir/$user/spack-stage
/home/myuser/spack/etc/spack/defaults/config.yaml:51 - ~/.spack/stage
/home/myuser/spack/etc/spack/defaults/config.yaml:50 - $tempdir
/home/myuser/spack/etc/spack/defaults/config.yaml:51 - /nfs/tmp2/$user
/home/myuser/spack/etc/spack/defaults/config.yaml:52 - $spack/var/spack/stage
/home/myuser/spack/etc/spack/defaults/config.yaml:57 source_cache: $spack/var/spack/cache
/home/myuser/spack/etc/spack/defaults/config.yaml:62 misc_cache: ~/.spack/cache

View File

@@ -1,307 +0,0 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _containers:
================
Container Images
================
Spack can be an ideal tool to set up images for containers since all the
features discussed in :ref:`environments` can greatly help to manage
the installation of software during the image build process. Nonetheless,
building a production image from scratch still requires a lot of
boilerplate to:
- Get Spack working within the image, possibly running as root
- Minimize the physical size of the software installed
- Properly update the system software in the base image
To relieve users of these tedious tasks, Spack provides a command
to automatically generate recipes for container images based on
Environments:
.. code-block:: console
$ ls
spack.yaml
$ spack containerize
# Build stage with Spack pre-installed and ready to be used
FROM spack/centos7:latest as builder
# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&& (echo "spack:" \
&& echo " specs:" \
&& echo " - gromacs+mpi" \
&& echo " - mpich" \
&& echo " concretization: together" \
&& echo " config:" \
&& echo " install_tree: /opt/software" \
&& echo " view: /opt/view") > /opt/spack-environment/spack.yaml
# Install the software, remove unnecessary deps
RUN cd /opt/spack-environment && spack install && spack gc -y
# Strip all the binaries
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
xargs file -i | \
grep 'charset=binary' | \
grep 'x-executable\|x-archive\|x-sharedlib' | \
awk -F: '{print $1}' | xargs strip -s
# Modifications to the environment that are necessary to run
RUN cd /opt/spack-environment && \
spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
# Bare OS image to run the installed executables
FROM centos:7
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
RUN yum update -y && yum install -y epel-release && yum update -y \
&& yum install -y libgomp \
&& rm -rf /var/cache/yum && yum clean all
RUN echo 'export PS1="\[$(tput bold)\]\[$(tput setaf 1)\][gromacs]\[$(tput setaf 2)\]\u\[$(tput sgr0)\]:\w $ \[$(tput sgr0)\]"' >> ~/.bashrc
LABEL "app"="gromacs"
LABEL "mpi"="mpich"
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]
The bits that make this automation possible are discussed in detail
below. All the images generated in this way will be based on
multi-stage builds with:
- A fat ``build`` stage containing common build tools and Spack itself
- A minimal ``final`` stage containing only the software requested by the user
-----------------
Spack Base Images
-----------------
Docker images with Spack preinstalled and ready to be used are
built on `Docker Hub <https://hub.docker.com/u/spack>`_
at every push to ``develop`` or to a release branch. The operating systems
that are currently supported are summarized in the table below:
.. _containers-supported-os:
.. list-table:: Supported operating systems
:header-rows: 1
* - Operating System
- Base Image
- Spack Image
* - Ubuntu 16.04
- ``ubuntu:16.04``
- ``spack/ubuntu-xenial``
* - Ubuntu 18.04
- ``ubuntu:18.04``
- ``spack/ubuntu-bionic``
* - CentOS 6
- ``centos:6``
- ``spack/centos6``
* - CentOS 7
- ``centos:7``
- ``spack/centos7``
All the images are tagged with the corresponding release of Spack:
.. image:: dockerhub_spack.png
with the exception of the ``latest`` tag that points to the HEAD
of the ``develop`` branch. These images are available for anyone
to use and take care of all the repetitive tasks that are necessary
to set up Spack within a container. All the container recipes generated
automatically by Spack use them as base images for their ``build`` stage.
-------------------------
Environment Configuration
-------------------------
Any Spack Environment can be used for the automatic generation of container
recipes. Sensible defaults are provided for things like the base image or the
version of Spack used in the image. If finer tuning is needed, it can be
achieved by adding the relevant metadata under the ``container`` attribute
of environments:
.. code-block:: yaml
spack:
specs:
- gromacs+mpi
- mpich
container:
# Select the format of the recipe e.g. docker,
# singularity or anything else that is currently supported
format: docker
# Select from a valid list of images
base:
image: "centos:7"
spack: develop
# Whether or not to strip binaries
strip: true
# Additional system packages that are needed at runtime
os_packages:
- libgomp
# Extra instructions
extra_instructions:
final: |
RUN echo 'export PS1="\[$(tput bold)\]\[$(tput setaf 1)\][gromacs]\[$(tput setaf 2)\]\u\[$(tput sgr0)\]:\w $ \[$(tput sgr0)\]"' >> ~/.bashrc
# Labels for the image
labels:
app: "gromacs"
mpi: "mpich"
The tables below describe the configuration options that are currently supported:
.. list-table:: General configuration options for the ``container`` section of ``spack.yaml``
:header-rows: 1
* - Option Name
- Description
- Allowed Values
- Required
* - ``format``
- The format of the recipe
- ``docker`` or ``singularity``
- Yes
* - ``base:image``
- Base image for ``final`` stage
- See :ref:`containers-supported-os`
- Yes
* - ``base:spack``
- Version of Spack
- Valid tags for ``base:image``
- Yes
* - ``strip``
- Whether to strip binaries
- ``true`` (default) or ``false``
- No
* - ``os_packages``
- System packages to be installed
- Valid packages for the ``final`` OS
- No
* - ``extra_instructions:build``
- Extra instructions (e.g. `RUN`, `COPY`, etc.) at the end of the ``build`` stage
- Anything understood by the current ``format``
- No
* - ``extra_instructions:final``
- Extra instructions (e.g. `RUN`, `COPY`, etc.) at the end of the ``final`` stage
- Anything understood by the current ``format``
- No
* - ``labels``
- Labels to tag the image
- Pairs of key-value strings
- No
.. list-table:: Configuration options specific to Singularity
:header-rows: 1
* - Option Name
- Description
- Allowed Values
- Required
* - ``singularity:runscript``
- Content of ``%runscript``
- Any valid script
- No
* - ``singularity:startscript``
- Content of ``%startscript``
- Any valid script
- No
* - ``singularity:test``
- Content of ``%test``
- Any valid script
- No
* - ``singularity:help``
- Description of the image
- Description string
- No
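As a hedged sketch, these Singularity-specific options could be combined
with the general ones like this (the script contents and entrypoint are
hypothetical):

.. code-block:: yaml

   spack:
     specs:
     - gromacs+mpi
     container:
       format: singularity
       singularity:
         runscript: |
           # hypothetical entrypoint for the image
           exec /opt/view/bin/gmx_mpi "$@"
         help: |
           GROMACS image generated with Spack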
Once the Environment is properly configured a recipe for a container
image can be printed to standard output by issuing the following
command from the directory where the ``spack.yaml`` resides:
.. code-block:: console
$ spack containerize
The example ``spack.yaml`` above would produce for instance the
following ``Dockerfile``:
.. code-block:: docker
# Build stage with Spack pre-installed and ready to be used
FROM spack/centos7:latest as builder
# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&& (echo "spack:" \
&& echo " specs:" \
&& echo " - gromacs+mpi" \
&& echo " - mpich" \
&& echo " concretization: together" \
&& echo " config:" \
&& echo " install_tree: /opt/software" \
&& echo " view: /opt/view") > /opt/spack-environment/spack.yaml
# Install the software, remove unnecessary deps
RUN cd /opt/spack-environment && spack install && spack gc -y
# Strip all the binaries
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
xargs file -i | \
grep 'charset=binary' | \
grep 'x-executable\|x-archive\|x-sharedlib' | \
awk -F: '{print $1}' | xargs strip -s
# Modifications to the environment that are necessary to run
RUN cd /opt/spack-environment && \
spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
# Bare OS image to run the installed executables
FROM centos:7
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
RUN yum update -y && yum install -y epel-release && yum update -y \
&& yum install -y libgomp \
&& rm -rf /var/cache/yum && yum clean all
RUN echo 'export PS1="\[$(tput bold)\]\[$(tput setaf 1)\][gromacs]\[$(tput setaf 2)\]\u\[$(tput sgr0)\]:\w $ \[$(tput sgr0)\]"' >> ~/.bashrc
LABEL "app"="gromacs"
LABEL "mpi"="mpich"
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]
.. note::
Spack can also produce Singularity definition files to build the image. The
minimum version of Singularity required to build a SIF (Singularity Image Format)
from them is ``3.5.3``.
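In practice one might redirect the generated recipe to a file and build it
directly; a sketch (the image tag is hypothetical):

.. code-block:: console

   $ spack containerize > Dockerfile
   $ docker build -t gromacs-mpich .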

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -64,8 +64,6 @@ If you take a look in ``$SPACK_ROOT/.travis.yml``, you'll notice that we test
against Python 2.6, 2.7, and 3.4-3.7 on both macOS and Linux. We currently
perform 3 types of tests:
.. _cmd-spack-test:
^^^^^^^^^^
Unit Tests
^^^^^^^^^^
@@ -88,83 +86,40 @@ To run *all* of the unit tests, use:
$ spack test
These tests may take several minutes to complete. If you know you are
only modifying a single Spack feature, you can run subsets of tests at a
time. For example, this would run all the tests in
``lib/spack/spack/test/architecture.py``:
These tests may take several minutes to complete. If you know you are only
modifying a single Spack feature, you can run a single unit test at a time:
.. code-block:: console
$ spack test architecture.py
$ spack test architecture
And this would run the ``test_platform`` test from that file:
.. code-block:: console
$ spack test architecture.py::test_platform
This allows you to develop iteratively: make a change, test that change,
make another change, test that change, etc. We use `pytest
<http://pytest.org/>`_ as our test framework, and these types of
arguments are just passed to the ``pytest`` command underneath. See `the
pytest docs
<http://doc.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests>`_
for more details on test selection syntax.
``spack test`` has a few special options that can help you understand
what tests are available. To get a list of all available unit test
files, run:
This allows you to develop iteratively: make a change, test that change, make
another change, test that change, etc. To get a list of all available unit
tests, run:
.. command-output:: spack test --list
:ellipsis: 5
To see a more detailed list of available unit tests, use ``spack test
--list-long``:
A more detailed list of available unit tests can be found by running
``spack test --long-list``.
.. command-output:: spack test --list-long
:ellipsis: 10
And to see the fully qualified names of all tests, use ``--list-names``:
.. command-output:: spack test --list-names
:ellipsis: 5
You can combine these with ``pytest`` arguments to restrict which tests
you want to know about. For example, to see just the tests in
``architecture.py``:
.. command-output:: spack test --list-long architecture.py
You can also combine any of these options with a ``pytest`` keyword
search. For example, to see the names of all tests that have "spec"
or "concretize" somewhere in their names:
.. command-output:: spack test --list-names -k "spec and concretize"
By default, ``pytest`` captures the output of all unit tests, and it will
print any captured output for failed tests. Sometimes it's helpful to see
your output interactively, while the tests run (e.g., if you add print
statements to a unit test). To see the output *live*, use the ``-s``
argument to ``pytest``:
By default, ``pytest`` captures the output of all unit tests. If you add print
statements to a unit test and want to see the output, simply run:
.. code-block:: console
$ spack test -s architecture.py::test_platform
$ spack test -s -k architecture
Unit tests are crucial to making sure bugs aren't introduced into
Spack. If you are modifying core Spack libraries or adding new
functionality, please add new unit tests for your feature, and consider
strengthening existing tests. You will likely be asked to do this if you
submit a pull request to the Spack project on GitHub. Check out the
`pytest docs <http://pytest.org/>`_ and feel free to ask for guidance on
how to write tests!
Unit tests are crucial to making sure bugs aren't introduced into Spack. If you
are modifying core Spack libraries or adding new functionality, please consider
adding new unit tests or strengthening existing tests.
.. note::
You may notice the ``share/spack/qa/run-unit-tests`` script in the
repository. This script is designed for Travis CI. It runs the unit
tests and reports coverage statistics back to Codecov. If you want to
run the unit tests yourself, we suggest you use ``spack test``.
There is also a ``run-unit-tests`` script in ``share/spack/qa`` that
runs the unit tests. Afterwards, it reports back to Codecov with the
percentage of Spack that is covered by unit tests. This script is
designed for Travis CI. If you want to run the unit tests yourself, we
suggest you use ``spack test``.
^^^^^^^^^^^^
Flake8 Tests
@@ -268,7 +223,8 @@ documentation. In order to prevent things like broken links and missing imports,
we added documentation tests that build the documentation and fail if there
are any warning or error messages.
Building the documentation requires several dependencies:
Building the documentation requires several dependencies, all of which can be
installed with Spack:
* sphinx
* sphinxcontrib-programoutput
@@ -278,18 +234,11 @@ Building the documentation requires several dependencies:
* mercurial
* subversion
All of these can be installed with Spack, e.g.
.. code-block:: console
$ spack install py-sphinx py-sphinxcontrib-programoutput py-sphinx-rtd-theme graphviz git mercurial subversion
.. warning::
Sphinx has `several required dependencies <https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-sphinx/package.py>`_.
If you're using a ``python`` from Spack and you installed
``py-sphinx`` and friends, you need to make them available to your
``python``. The easiest way to do this is to run:
If you installed ``py-sphinx`` with Spack, make sure to add all of these
dependencies to your ``PYTHONPATH``. The easiest way to do this is to run:
.. code-block:: console
@@ -297,10 +246,8 @@ All of these can be installed with Spack, e.g.
$ spack activate py-sphinx-rtd-theme
$ spack activate py-sphinxcontrib-programoutput
so that all of the dependencies are symlinked into that Python's
tree. Alternatively, you could arrange for their library
directories to be added to PYTHONPATH. If you see an error message
like:
so that all of the dependencies are symlinked to a central location.
If you see an error message like:
.. code-block:: console

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -363,12 +363,12 @@ Developer commands
``spack doc``
^^^^^^^^^^^^^
.. _cmd-spack-test:
^^^^^^^^^^^^^^
``spack test``
^^^^^^^^^^^^^^
See the :ref:`contributor guide section <cmd-spack-test>` on ``spack test``.
.. _cmd-spack-python:
^^^^^^^^^^^^^^^^
@@ -488,7 +488,7 @@ supply ``--profile`` to Spack on the command line, before any subcommands.
``spack --profile`` output looks like this:
.. command-output:: spack --profile graph hdf5
.. command-output:: spack --profile graph dyninst
:ellipsis: 25
The bottom of the output shows the top most time consuming functions,

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

Binary file not shown.


View File

@@ -1,826 +0,0 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _environments:
============
Environments
============
An environment is used to group together a set of specs for the
purpose of building, rebuilding and deploying in a coherent fashion.
Environments provide a number of advantages over the *à la carte*
approach of building and loading individual Spack modules:
#. Environments separate the steps of (a) choosing what to
install, (b) concretizing, and (c) installing. This allows
Environments to remain stable and repeatable, even if Spack packages
are upgraded: specs are only re-concretized when the user
explicitly asks for it. It is even possible to reliably
transport environments between different computers running
different versions of Spack!
#. Environments allow several specs to be built at once; a more robust
solution than ad-hoc scripts making multiple calls to ``spack
install``.
#. An Environment that is built as a whole can be loaded as a whole
into the user environment. An Environment can be built to maintain
a filesystem view of its packages, and the environment can load
that view into the user environment at activation time. Spack can
also generate a script to load all modules related to an
environment.
Other packaging systems also provide environments that are similar in
some ways to Spack environments; for example, `Conda environments
<https://conda.io/docs/user-guide/tasks/manage-environments.html>`_ or
`Python Virtual Environments
<https://docs.python.org/3/tutorial/venv.html>`_. Spack environments
provide some distinctive features:
#. A spec installed "in" an environment is no different from the same
spec installed anywhere else in Spack. Environments are assembled
simply by collecting together a set of specs.
#. Spack Environments may contain more than one spec of the same
package.
Spack uses a "manifest and lock" model similar to `Bundler gemfiles
<https://bundler.io/man/gemfile.5.html>`_ and other package
managers. The user input file is named ``spack.yaml`` and the lock
file is named ``spack.lock``
.. _environments-using:
------------------
Using Environments
------------------
Here we follow a typical use case of creating, concretizing,
installing and loading an environment.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Creating a named Environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
An environment is created by:
.. code-block:: console
$ spack env create myenv
Spack then creates the directory ``var/spack/environments/myenv``.
.. note::
All named environments are stored in the ``var/spack/environments`` folder.
In the ``var/spack/environments/myenv`` directory, Spack creates the
file ``spack.yaml`` and the hidden directory ``.spack-env``.
Spack stores metadata in the ``.spack-env`` directory. User
interaction will occur through the ``spack.yaml`` file and the Spack
commands that affect it. When the environment is concretized, Spack
will create a file ``spack.lock`` with the concrete information for
the environment.
In addition to being the default location for the view associated with
an Environment, the ``.spack-env`` directory also contains:
* ``repo/``: A repo consisting of the Spack packages used in this
environment. This allows the environment to build the same, in
theory, even on different versions of Spack with different
packages!
* ``logs/``: A directory containing the build logs for the packages
in this Environment.
Spack Environments can also be created from either a ``spack.yaml``
manifest or a ``spack.lock`` lockfile. To create an Environment from a
``spack.yaml`` manifest:
.. code-block:: console
$ spack env create myenv spack.yaml
To create an Environment from a ``spack.lock`` lockfile:
.. code-block:: console
$ spack env create myenv spack.lock
Either of these commands can also take a full path to the
initialization file.
A Spack Environment created from a ``spack.yaml`` manifest is
guaranteed to have the same root specs as the original Environment,
but may concretize differently. A Spack Environment created from a
``spack.lock`` lockfile is guaranteed to have the same concrete specs
as the original Environment. Either may obviously then differ as the
user modifies it.
^^^^^^^^^^^^^^^^^^^^^^^^^
Activating an Environment
^^^^^^^^^^^^^^^^^^^^^^^^^
To activate an environment, use the following command:
.. code-block:: console
$ spack env activate myenv
By default, the ``spack env activate`` command will load the view associated
with the Environment into the user environment. The ``-v,
--with-view`` argument ensures this behavior, and the ``-V,
--without-view`` argument activates the environment without changing
the user environment variables.
The ``-p`` option to the ``spack env activate`` command modifies the
user's prompt to begin with the environment name in brackets.
.. code-block:: console
$ spack env activate -p myenv
[myenv] $ ...
To deactivate an environment, use the command:
.. code-block:: console
$ spack env deactivate
or the shortcut alias
.. code-block:: console
$ despacktivate
If the environment was activated with its view, deactivating the
environment will remove the view from the user environment.
^^^^^^^^^^^^^^^^^^^^^^
Anonymous Environments
^^^^^^^^^^^^^^^^^^^^^^
Any directory can be treated as an environment if it contains a file
``spack.yaml``. To load an anonymous environment, use:
.. code-block:: console
$ spack env activate -d /path/to/directory
Spack commands that are environment sensitive will also act on the
environment any time the current working directory contains a
``spack.yaml`` file. Changing working directory to a directory
containing a ``spack.yaml`` file is equivalent to the command:
.. code-block:: console
$ spack env activate -d /path/to/dir --without-view
Anonymous specs can be created in place using the command:
.. code-block:: console
$ spack env create -d .
In this case Spack simply creates a ``spack.yaml`` file in the requested
directory.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Environment Sensitive Commands
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Spack commands are environment sensitive. For example, the ``find``
command shows only the specs in the active Environment if an
Environment has been activated. Similarly, the ``install`` and
``uninstall`` commands act on the active environment.
.. code-block:: console
$ spack find
==> 0 installed packages
$ spack install zlib@1.2.11
==> Installing zlib
==> Searching for binary cache of zlib
==> Warning: No Spack mirrors are currently configured
==> No binary for zlib found: installing from source
==> Fetching http://zlib.net/fossils/zlib-1.2.11.tar.gz
######################################################################## 100.0%
==> Staging archive: /spack/var/spack/stage/zlib-1.2.11-3r4cfkmx3wwfqeof4bc244yduu2mz4ur/zlib-1.2.11.tar.gz
==> Created stage in /spack/var/spack/stage/zlib-1.2.11-3r4cfkmx3wwfqeof4bc244yduu2mz4ur
==> No patches needed for zlib
==> Building zlib [Package]
==> Executing phase: 'install'
==> Successfully installed zlib
Fetch: 0.36s. Build: 11.58s. Total: 11.93s.
[+] /spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/zlib-1.2.11-3r4cfkmx3wwfqeof4bc244yduu2mz4ur
$ spack env activate myenv
$ spack find
==> In environment myenv
==> No root specs
==> 0 installed packages
$ spack install zlib@1.2.8
==> Installing zlib
==> Searching for binary cache of zlib
==> Warning: No Spack mirrors are currently configured
==> No binary for zlib found: installing from source
==> Fetching http://zlib.net/fossils/zlib-1.2.8.tar.gz
######################################################################## 100.0%
==> Staging archive: /spack/var/spack/stage/zlib-1.2.8-y2t6kq3s23l52yzhcyhbpovswajzi7f7/zlib-1.2.8.tar.gz
==> Created stage in /spack/var/spack/stage/zlib-1.2.8-y2t6kq3s23l52yzhcyhbpovswajzi7f7
==> No patches needed for zlib
==> Building zlib [Package]
==> Executing phase: 'install'
==> Successfully installed zlib
Fetch: 0.26s. Build: 2.08s. Total: 2.35s.
[+] /spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/zlib-1.2.8-y2t6kq3s23l52yzhcyhbpovswajzi7f7
$ spack find
==> In environment myenv
==> Root specs
zlib@1.2.8
==> 1 installed package
-- linux-rhel7-x86_64 / gcc@4.9.3 -------------------------------
zlib@1.2.8
$ despacktivate
$ spack find
==> 2 installed packages
-- linux-rhel7-x86_64 / gcc@4.9.3 -------------------------------
zlib@1.2.8 zlib@1.2.11
Note that when we installed the abstract spec ``zlib@1.2.8``, it was
presented as a root of the Environment. All explicitly installed
packages will be listed as roots of the Environment.
All of the Spack commands that act on the list of installed specs are
Environment-sensitive in this way, including ``install``,
``uninstall``, ``activate``, ``deactivate``, ``find``, ``extensions``,
and more. In the :ref:`environment-configuration` section we will discuss
Environment-sensitive commands further.
^^^^^^^^^^^^^^^^^^^^^
Adding Abstract Specs
^^^^^^^^^^^^^^^^^^^^^
An abstract spec is the user-specified spec before Spack has applied
any defaults or dependency information.
Users can add abstract specs to an Environment using the ``spack add``
command. The most important component of an Environment is a list of
abstract specs.
Adding a spec adds to the manifest (the ``spack.yaml`` file) and to
the roots of the Environment, but does not affect the concrete specs
in the lockfile, nor does it install the spec.
The ``spack add`` command is environment aware. It adds to the
currently active environment. All environment aware commands can also
be called using the ``spack -E`` flag to specify the environment.
.. code-block:: console
$ spack activate myenv
$ spack add mpileaks
or
.. code-block:: console
$ spack -E myenv add python
.. _environments_concretization:
^^^^^^^^^^^^
Concretizing
^^^^^^^^^^^^
Once some user specs have been added to an environment, they can be
concretized. *By default specs are concretized separately*, one after
the other. This mode of operation makes it possible to deploy a full
software stack where multiple configurations of the same package
need to be installed alongside each other. Central installations done
at HPC centers by system administrators or user support groups
are a common case that fits this behavior.
Environments *can also be configured to concretize all
the root specs in a self-consistent way* to ensure that
each package in the environment comes with a single configuration. This
mode of operation is usually what is required by software developers who
want to deploy their development environment.
Regardless of which mode of operation has been chosen, the following
command will ensure all the root specs are concretized according to the
constraints that are prescribed in the configuration:
.. code-block:: console
[myenv]$ spack concretize
In the case of specs that are not concretized together, the command
above will concretize only the specs that were added and not yet
concretized. Forcing a re-concretization of all the specs can be done
instead with this command:
.. code-block:: console
[myenv]$ spack concretize -f
When the ``-f`` flag is not used to reconcretize all specs, Spack
guarantees that already concretized specs are unchanged in the
environment.
The ``concretize`` command does not install any packages. For packages
that have already been installed outside of the environment, the
process of adding the spec and concretizing is identical to installing
the spec assuming it concretizes to the exact spec that was installed
outside of the environment.
The ``spack find`` command can show concretized specs separately from
installed specs using the ``-c`` (``--concretized``) flag.
.. code-block:: console
[myenv]$ spack add zlib
[myenv]$ spack concretize
[myenv]$ spack find -c
==> In environment myenv
==> Root specs
zlib
==> Concretized roots
-- linux-rhel7-x86_64 / gcc@4.9.3 -------------------------------
zlib@1.2.11
==> 0 installed packages
^^^^^^^^^^^^^^^^^^^^^^^^^
Installing an Environment
^^^^^^^^^^^^^^^^^^^^^^^^^
In addition to installing individual specs into an Environment, one
can install the entire Environment at once using the command
.. code-block:: console
[myenv]$ spack install
If the Environment has been concretized, Spack will install the
concretized specs. Otherwise, ``spack install`` will first concretize
the Environment and then install the concretized specs.
As it installs, ``spack install`` creates symbolic links in the
``logs/`` directory in the Environment, allowing for easy inspection
of build logs related to that environment. The ``spack install``
command also stores a Spack repo containing the ``package.py`` file
used at install time for each package in the ``repos/`` directory in
the Environment.
^^^^^^^
Loading
^^^^^^^
Once an environment has been installed, the following creates a load
script for it:
.. code-block:: console
$ spack env loads -r
This creates a file called ``loads`` in the environment directory.
Sourcing that file in Bash will make the environment available to the
user; and can be included in ``.bashrc`` files, etc. The ``loads``
file may also be copied out of the environment, renamed, etc.
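For instance, assuming the named environment ``myenv`` from above, the file
could be sourced like this:

.. code-block:: console

   $ source $SPACK_ROOT/var/spack/environments/myenv/loads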
----------
spack.yaml
----------
Spack environments can be customized at finer granularity by editing
the ``spack.yaml`` manifest file directly.
.. _environment-configuration:
^^^^^^^^^^^^^^^^^^^^^^^^
Configuring Environments
^^^^^^^^^^^^^^^^^^^^^^^^
A variety of Spack behaviors are changed through Spack configuration
files, covered in more detail in the :ref:`configuration`
section.
Spack Environments provide an additional level of configuration scope
between the custom scope and the user scope discussed in the
configuration documentation.
There are two ways to include configuration information in a Spack Environment:
#. Inline in the ``spack.yaml`` file
#. Included in the ``spack.yaml`` file from another file.
"""""""""""""""""""""
Inline configurations
"""""""""""""""""""""
Inline Environment-scope configuration is done using the same yaml
format as standard Spack configuration scopes, covered in the
:ref:`configuration` section. Each section is contained under a
top-level yaml object with its name. For example, a ``spack.yaml``
manifest file containing some package preference configuration (as in
a ``packages.yaml`` file) could contain:
.. code-block:: yaml
spack:
...
packages:
all:
compiler: [intel]
...
This configuration sets the default compiler for all packages to
``intel``.
"""""""""""""""""""""""
Included configurations
"""""""""""""""""""""""
Spack environments allow an ``include`` heading in their yaml
schema. This heading pulls in external configuration files and applies
them to the Environment.
.. code-block:: yaml
spack:
include:
- relative/path/to/config.yaml
- /absolute/path/to/packages.yaml
Environments can include files with either relative or absolute
paths. Inline configurations take precedence over included
configurations, so you don't have to change shared configuration files
to make small changes to an individual Environment. Included configs
listed later will have higher precedence, as the included configs are
applied in order.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Manually Editing the Specs List
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The list of abstract/root specs in the Environment is maintained in
the ``spack.yaml`` manifest under the heading ``specs``.
.. code-block:: yaml
spack:
specs:
- ncview
- netcdf
- nco
- py-sphinx
Appending to this list in the yaml is identical to using the ``spack
add`` command from the command line. However, there is more power
available from the yaml file.
"""""""""""""""""""
Spec concretization
"""""""""""""""""""
Specs can be concretized separately or together, as already
explained in :ref:`environments_concretization`. The behavior active
under any environment is determined by the ``concretization`` property:
.. code-block:: yaml
spack:
specs:
- ncview
- netcdf
- nco
- py-sphinx
concretization: together
which can currently take one of the two allowed values: ``together`` or ``separately``
(the default).
.. admonition:: Re-concretization of user specs
When concretizing specs together the entire set of specs will be
re-concretized after any addition of new user specs, to ensure that
the environment remains consistent. When instead the specs are concretized
separately only the new specs will be re-concretized after any addition.
"""""""""""""
Spec Matrices
"""""""""""""
Entries in the ``specs`` list can be individual abstract specs or a
spec matrix.
A spec matrix is a yaml object containing multiple lists of specs, and
evaluates to the cross-product of those specs. Spec matrices also
contain an ``exclude`` directive, which eliminates certain
combinations from the evaluated result.
The following two Environment manifests are identical:
.. code-block:: yaml
spack:
specs:
- zlib %gcc@7.1.0
- zlib %gcc@4.9.3
- libelf %gcc@7.1.0
- libelf %gcc@4.9.3
- libdwarf %gcc@7.1.0
- cmake
spack:
specs:
- matrix:
- [zlib, libelf, libdwarf]
- ['%gcc@7.1.0', '%gcc@4.9.3']
exclude:
- libdwarf%gcc@4.9.3
- cmake
Spec matrices can be used to install swaths of software across various
toolchains.
The concretization logic for spec matrices differs slightly from the
rest of Spack. If a variant or dependency constraint from a matrix is
invalid, Spack will reject the constraint and try again without
it. For example, the following two Environment manifests will produce
the same specs:
.. code-block:: yaml
spack:
specs:
- matrix:
- [zlib, libelf, hdf5+mpi]
- [^mvapich2@2.2, ^openmpi@3.1.0]
spack:
specs:
- zlib
- libelf
- hdf5+mpi ^mvapich2@2.2
- hdf5+mpi ^openmpi@3.1.0
This allows one to create toolchains out of combinations of
constraints and apply them somewhat indiscriminately to packages,
without regard for the applicability of the constraint.
""""""""""""""""""""
Spec List References
""""""""""""""""""""
The last type of possible entry in the specs list is a reference.
The Spack Environment manifest yaml schema contains an additional
heading ``definitions``. Under definitions is an array of yaml
objects. Each object has one or two fields. The one required field is
a name, and the optional field is a ``when`` clause.
The named field is a spec list. The spec list uses the same syntax as
the ``specs`` entry. Each entry in the spec list can be a spec, a spec
matrix, or a reference to an earlier named list. References are
specified using the ``$`` sigil, and are "splatted" into place
(i.e. the elements of the referent are at the same level as the
elements listed separately). As an example, the following two manifest
files are identical.
.. code-block:: yaml
spack:
definitions:
- first: [libelf, libdwarf]
- compilers: ['%gcc', '%intel']
- second:
- $first
- matrix:
- [zlib]
- [$compilers]
specs:
- $second
- cmake
spack:
specs:
- libelf
- libdwarf
- zlib%gcc
- zlib%intel
- cmake
.. note::
Named spec lists in the definitions section may only refer
to a named list defined above itself. Order matters.
In short files like the example, it may be easier to simply list the
included specs. However, for more complicated examples involving many
packages across many toolchains, separately factored lists make
Environments substantially more manageable.
Additionally, the ``-l`` option to the ``spack add`` command allows
one to add to named lists in the definitions section of the manifest
file directly from the command line.
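For example, assuming the manifest above, a sketch adding a (hypothetical)
spec to the named list ``first`` from the command line:

.. code-block:: console

   [myenv]$ spack add -l first libcap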
The ``when`` directive can be used to conditionally add specs to a
named list. The ``when`` directive takes a string of Python code
referring to a restricted set of variables, and evaluates to a
boolean. The specs listed are appended to the named list if the
``when`` string evaluates to ``True``. In the following snippet, the
named list ``compilers`` is ``['%gcc', '%clang', '%intel']`` on
``x86_64`` systems and ``['%gcc', '%clang']`` on all other systems.
.. code-block:: yaml
spack:
definitions:
- compilers: ['%gcc', '%clang']
- when: target == 'x86_64'
compilers: ['%intel']
.. note::
Any definitions of the same named list whose ``when`` clauses evaluate
to true (or that have no ``when`` clause) will be appended together
The valid variables for a ``when`` clause are:
#. ``platform``. The platform string of the default Spack
architecture on the system.
#. ``os``. The os string of the default Spack architecture on
the system.
#. ``target``. The target string of the default Spack
architecture on the system.
#. ``architecture`` or ``arch``. The full string of the
default Spack architecture on the system.
#. ``re``. The standard regex module in Python.
#. ``env``. The user environment (usually ``os.environ`` in Python).
#. ``hostname``. The hostname of the system (if ``hostname`` is an
executable in the user's PATH).
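Another hedged sketch, using the ``os`` variable (the values are
illustrative):

.. code-block:: yaml

   spack:
     definitions:
       - packages: [libelf, libdwarf]
       - when: os == 'centos7'
         packages: [zlib]
     specs:
       - $packages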
^^^^^^^^^^^^^^^^^^^^^^^^^
Environment-managed Views
^^^^^^^^^^^^^^^^^^^^^^^^^
Spack Environments can define filesystem views of their software,
which are maintained as packages and can be installed and uninstalled from
the Environment. Filesystem views provide an access point for packages
from the filesystem for users who want to access those packages
directly. For more information on filesystem views, see the section
:ref:`filesystem-views`.
Spack Environment managed views are updated every time the environment
is written out to the lock file ``spack.lock``, so the concrete
environment and the view are always compatible.
"""""""""""""""""""""""""""""
Configuring environment views
"""""""""""""""""""""""""""""
The Spack Environment manifest file has a top-level keyword
``view``. Each entry under that heading is a view descriptor, headed
by a name. The view descriptor contains the root of the view, and
optionally the projections for the view, and ``select`` and
``exclude`` lists for the view. For example, in the following manifest
file snippet we define a view named ``mpis``, rooted at
``/path/to/view`` in which all projections use the package name,
version, and compiler name to determine the path for a given
package. This view selects all packages that depend on MPI, and
excludes those built with the PGI compiler at version 18.5.
.. code-block:: yaml
spack:
...
view:
mpis:
root: /path/to/view
select: [^mpi]
exclude: ['%pgi@18.5']
projections:
all: {name}/{version}-{compiler.name}
For more information on using view projections, see the section on
:ref:`adding_projections_to_views`. The default for the ``select`` and
``exclude`` values is to select everything and exclude nothing. The
default projection is the default view projection (``{}``).
Any number of views may be defined under the ``view`` heading in a
Spack Environment.
There are two shorthands for environments with a single view. If the
environment at ``/path/to/env`` has a single view, with a root at
``/path/to/env/.spack-env/view``, with default selection and exclusion
and the default projection, we can put ``view: True`` in the
environment manifest. Similarly, if the environment has a view with a
different root, but default selection, exclusion, and projections, the
manifest can say ``view: /path/to/view``. These views are
automatically named ``default``, so that
.. code-block:: yaml
spack:
...
view: True
is equivalent to
.. code-block:: yaml
spack:
...
view:
default:
root: .spack-env/view
and
.. code-block:: yaml
spack:
...
view: /path/to/view
is equivalent to
.. code-block:: yaml
spack:
...
view:
default:
root: /path/to/view
By default, Spack environments are configured with ``view: True`` in
the manifest. Environments can be configured without views using
``view: False``. For backwards compatibility reasons, environments
with no ``view`` key are treated the same as ``view: True``.
From the command line, the ``spack env create`` command takes an
argument ``--with-view [PATH]`` that sets the path for a single, default
view. If no path is specified, the default path is used (``view:
True``). The argument ``--without-view`` can be used to create an
environment without any view configured.
The ``spack env view`` command can be used to change the managed views
of an Environment. The subcommand ``spack env view enable`` will add a
view named ``default`` to an environment. It takes an optional
argument to specify the path for the new default view. The subcommand
``spack env view disable`` will remove the view named ``default`` from
an environment if one exists. The subcommand ``spack env view
regenerate`` will regenerate the views for the environment. This will
apply any updates in the environment configuration that have not yet
been applied.
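An illustrative sequence of these subcommands (the view path is
hypothetical):

.. code-block:: console

   $ spack env view enable /path/to/view
   $ spack env view regenerate
   $ spack env view disable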
""""""""""""""""""""""""""""
Activating environment views
""""""""""""""""""""""""""""
The ``spack env activate`` command will put the default view for the
environment into the user's path, in addition to activating the
environment for Spack commands. The arguments ``-v,--with-view`` and
``-V,--without-view`` can be used to tune this behavior. The default
behavior is to activate with the environment view if there is one.
The environment variables affected by the ``spack env activate``
command and the paths that are used to update them are in the
following table.
=================== =========
Variable Paths
=================== =========
PATH bin
MANPATH man, share/man
ACLOCAL_PATH share/aclocal
LD_LIBRARY_PATH lib, lib64
LIBRARY_PATH lib, lib64
CPATH include
PKG_CONFIG_PATH lib/pkgconfig, lib64/pkgconfig, share/pkgconfig
CMAKE_PREFIX_PATH .
=================== =========
Each of these paths is appended to the view root, and added to the
relevant variable if the path exists. For this reason, it is not
recommended to use non-default projections with the default view of an
environment.
The ``spack env deactivate`` command will remove the default view of
the environment from the user's path.

View File

@@ -1,161 +0,0 @@
spack:
definitions:
- compiler-pkgs:
- 'llvm+clang@6.0.1 os=centos7'
- 'gcc@6.5.0 os=centos7'
- 'llvm+clang@6.0.1 os=ubuntu18.04'
- 'gcc@6.5.0 os=ubuntu18.04'
- pkgs:
- readline@7.0
# - xsdk@0.4.0
- compilers:
- '%gcc@5.5.0'
- '%gcc@6.5.0'
- '%gcc@7.3.0'
- '%clang@6.0.0'
- '%clang@6.0.1'
- oses:
- os=ubuntu18.04
- os=centos7
specs:
- matrix:
- [$pkgs]
- [$compilers]
- [$oses]
exclude:
- '%gcc@7.3.0 os=centos7'
- '%gcc@5.5.0 os=ubuntu18.04'
mirrors:
cloud_gitlab: https://mirror.spack.io
compilers:
# The .gitlab-ci.yml for this project picks a Docker container which does
# not have any compilers pre-built and ready to use, so we need to fake the
# existence of those here.
- compiler:
operating_system: centos7
modules: []
paths:
cc: /not/used
cxx: /not/used
f77: /not/used
fc: /not/used
spec: gcc@5.5.0
target: x86_64
- compiler:
operating_system: centos7
modules: []
paths:
cc: /not/used
cxx: /not/used
f77: /not/used
fc: /not/used
spec: gcc@6.5.0
target: x86_64
- compiler:
operating_system: centos7
modules: []
paths:
cc: /not/used
cxx: /not/used
f77: /not/used
fc: /not/used
spec: clang@6.0.0
target: x86_64
- compiler:
operating_system: centos7
modules: []
paths:
cc: /not/used
cxx: /not/used
f77: /not/used
fc: /not/used
spec: clang@6.0.1
target: x86_64
- compiler:
operating_system: ubuntu18.04
modules: []
paths:
cc: /not/used
cxx: /not/used
f77: /not/used
fc: /not/used
spec: clang@6.0.0
target: x86_64
- compiler:
operating_system: ubuntu18.04
modules: []
paths:
cc: /not/used
cxx: /not/used
f77: /not/used
fc: /not/used
spec: clang@6.0.1
target: x86_64
- compiler:
operating_system: ubuntu18.04
modules: []
paths:
cc: /not/used
cxx: /not/used
f77: /not/used
fc: /not/used
spec: gcc@6.5.0
target: x86_64
- compiler:
operating_system: ubuntu18.04
modules: []
paths:
cc: /not/used
cxx: /not/used
f77: /not/used
fc: /not/used
spec: gcc@7.3.0
target: x86_64
gitlab-ci:
bootstrap:
- name: compiler-pkgs
compiler-agnostic: true
mappings:
- # spack-cloud-ubuntu
match:
# these are specs, if *any* match the spec under consideration, this
# 'mapping' will be used to generate the CI job
- os=ubuntu18.04
runner-attributes:
# 'tags' and 'image' go directly onto the job, 'variables' will
# be added to what we already necessarily create for the job as
# a part of the CI workflow
tags:
- spack-k8s
image:
name: scottwittenburg/spack_builder_ubuntu_18.04
entrypoint: [""]
- # spack-cloud-centos
match:
# these are specs, if *any* match the spec under consideration, this
# 'mapping' will be used to generate the CI job
- 'os=centos7'
runner-attributes:
tags:
- spack-k8s
image:
name: scottwittenburg/spack_builder_centos_7
entrypoint: [""]
cdash:
build-group: Release Testing
url: http://cdash
project: Spack Testing
site: Spack Docker-Compose Workflow
repos: []
upstreams: {}
modules:
enable: []
packages: {}
config: {}

View File

@@ -1,122 +0,0 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. extensions:
=================
Custom Extensions
=================
*Spack extensions* permit you to extend Spack capabilities by deploying your
own custom commands or logic in an arbitrary location on your filesystem.
This might be extremely useful, e.g., to develop and maintain a command whose purpose is
too specific to be considered for reintegration into the mainline, or to
evolve a command through its early stages before starting a discussion to merge
it upstream.
From Spack's point of view an extension is any path in your filesystem which
respects a prescribed naming and layout for files:
.. code-block:: console
spack-scripting/       # The top level directory must match the format 'spack-{extension_name}'
├── pytest.ini         # Optional file if the extension ships its own tests
├── scripting          # Folder that may contain modules that are needed for the extension commands
│   └── cmd            # Folder containing extension commands
│       └── filter.py  # A new command that will be available
├── tests              # Tests for this extension
│   ├── conftest.py
│   └── test_filter.py
└── templates          # Templates that may be needed by the extension
In the example above the extension named *scripting* adds an additional command (``filter``)
and unit tests to verify its behavior. The code for this example can be
obtained by cloning the corresponding git repository:
.. TODO: write an ad-hoc "hello world" extension and make it part of the spack organization
.. code-block:: console
$ pwd
/home/user
$ mkdir tmp && cd tmp
$ git clone https://github.com/alalazo/spack-scripting.git
Cloning into 'spack-scripting'...
remote: Counting objects: 11, done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 11 (delta 0), reused 11 (delta 0), pack-reused 0
Receiving objects: 100% (11/11), done.
As you can see by inspecting the sources, Python modules that are part of the extension
can import any core Spack module.
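
To give a concrete feel for the layout, a command module such as
``scripting/cmd/filter.py`` follows the shape sketched below. The real
implementation differs; only the structure is the point here: a ``description``
string, a ``setup_parser`` function, and a function named after the command.

.. code-block:: python

   # scripting/cmd/filter.py -- illustrative sketch only
   description = "filter specs based on their properties"
   section = "scripting"
   level = "long"


   def setup_parser(subparser):
       # declare the options the command accepts
       subparser.add_argument('--installed', action='store_true',
                              help='select installed specs')


   def filter(parser, args):
       # the function named after the command implements its logic
       print('filter called with --installed={0}'.format(args.installed))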
---------------------------------
Configure Spack to Use Extensions
---------------------------------
To make your current Spack instance aware of extensions you should add their root
paths to ``config.yaml``. In the case of our example this means ensuring that:
.. code-block:: yaml
config:
  extensions:
  - /home/user/tmp/spack-scripting
is part of your configuration file. Once this is set up, any command that the extension provides
will be available from the command line:
.. code-block:: console
$ spack filter --help
usage: spack filter [-h] [--installed | --not-installed]
[--explicit | --implicit] [--output OUTPUT]
...
filter specs based on their properties
positional arguments:
specs specs to be filtered
optional arguments:
-h, --help show this help message and exit
--installed select installed specs
--not-installed select specs that are not yet installed
--explicit select specs that were installed explicitly
--implicit select specs that are not installed or were installed implicitly
--output OUTPUT where to dump the result
The corresponding unit tests can be run by giving the appropriate options to ``spack test``:
.. code-block:: console
$ spack test --extension=scripting
============================================================== test session starts ===============================================================
platform linux2 -- Python 2.7.15rc1, pytest-3.2.5, py-1.4.34, pluggy-0.4.0
rootdir: /home/mculpo/tmp/spack-scripting, inifile: pytest.ini
collected 5 items
tests/test_filter.py ...XX
============================================================ short test summary info =============================================================
XPASS tests/test_filter.py::test_filtering_specs[flags3-specs3-expected3]
XPASS tests/test_filter.py::test_filtering_specs[flags4-specs4-expected4]
=========================================================== slowest 20 test durations ============================================================
3.74s setup tests/test_filter.py::test_filtering_specs[flags0-specs0-expected0]
0.17s call tests/test_filter.py::test_filtering_specs[flags3-specs3-expected3]
0.16s call tests/test_filter.py::test_filtering_specs[flags2-specs2-expected2]
0.15s call tests/test_filter.py::test_filtering_specs[flags1-specs1-expected1]
0.13s call tests/test_filter.py::test_filtering_specs[flags4-specs4-expected4]
0.08s call tests/test_filter.py::test_filtering_specs[flags0-specs0-expected0]
0.04s teardown tests/test_filter.py::test_filtering_specs[flags4-specs4-expected4]
0.00s setup tests/test_filter.py::test_filtering_specs[flags4-specs4-expected4]
0.00s setup tests/test_filter.py::test_filtering_specs[flags3-specs3-expected3]
0.00s setup tests/test_filter.py::test_filtering_specs[flags1-specs1-expected1]
0.00s setup tests/test_filter.py::test_filtering_specs[flags2-specs2-expected2]
0.00s teardown tests/test_filter.py::test_filtering_specs[flags2-specs2-expected2]
0.00s teardown tests/test_filter.py::test_filtering_specs[flags1-specs1-expected1]
0.00s teardown tests/test_filter.py::test_filtering_specs[flags0-specs0-expected0]
0.00s teardown tests/test_filter.py::test_filtering_specs[flags3-specs3-expected3]
====================================================== 3 passed, 2 xpassed in 4.51 seconds =======================================================


@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -60,14 +60,14 @@ Customize dependencies
----------------------
Spack allows *dependencies* of a particular installation to be
customized extensively. Suppose that ``hdf5`` depends
on ``openmpi`` and indirectly on ``hwloc``. Using ``^``, users can add custom
customized extensively. Suppose that ``mpileaks`` depends indirectly
on ``libelf`` and ``libdwarf``. Using ``^``, users can add custom
configurations for the dependencies:
.. code-block:: console
# Install hdf5 and link it with specific versions of openmpi and hwloc
$ spack install hdf5@1.10.1 %gcc@4.7.3 +debug ^openmpi+cuda fabrics=auto ^hwloc+gl
# Install mpileaks and link it with specific versions of libelf and libdwarf
$ spack install mpileaks@1.1.2 %gcc@4.7.3 +debug ^libelf@0.8.12 ^libdwarf@20130729+debug
------------------------
Non-destructive installs
@@ -130,7 +130,7 @@ creates a simple python file:
It doesn't take much python coding to get from there to a working
package:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/libelf/package.py
.. literalinclude:: ../../../var/spack/repos/builtin/packages/libelf/package.py
:lines: 6-
Spack also provides wrapper functions around common commands like


@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -16,11 +16,10 @@ Prerequisites
Spack has the following minimum requirements, which must be installed
before Spack is run:
#. Python 2 (2.6 or 2.7) or 3 (3.5 - 3.8) to run Spack
#. A C/C++ compiler for building
#. The ``make`` executable for building
#. The ``git`` and ``curl`` commands for fetching
#. If using the ``gpg`` subcommand, ``gnupg2`` is required
1. Python 2 (2.6 or 2.7) or 3 (3.4 - 3.7)
2. A C/C++ compiler
3. The ``git`` and ``curl`` commands.
4. If using the ``gpg`` subcommand, ``gnupg2`` is required.
These requirements can be easily installed on most modern Linux systems;
on macOS, Xcode is required. Spack is designed to run on HPC
@@ -71,7 +70,7 @@ This automatically adds Spack to your ``PATH`` and allows the ``spack``
command to be used to execute spack :ref:`commands <shell-support>` and
:ref:`useful packaging commands <packaging-shell-support>`.
If :ref:`environment-modules <InstallEnvironmentModules>` is
If :ref:`environment-modules or dotkit <InstallEnvironmentModules>` is
installed and available, the ``spack`` command can also load and unload
:ref:`modules <modules>`.
@@ -97,7 +96,7 @@ Check Installation
With Spack installed, you should be able to run some basic Spack
commands. For example:
.. command-output:: spack spec netcdf-c
.. command-output:: spack spec netcdf
^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -190,7 +189,7 @@ where the compiler is installed. For example:
.. code-block:: console
$ spack compiler find /usr/local/tools/ic-13.0.079
==> Added 1 new compiler to ~/.spack/linux/compilers.yaml
==> Added 1 new compiler to ~/.spack/compilers.yaml
intel@13.0.079
Or you can run ``spack compiler find`` with no arguments to force
@@ -202,7 +201,7 @@ installed, but you know that new compilers have been added to your
$ module load gcc-4.9.0
$ spack compiler find
==> Added 1 new compiler to ~/.spack/linux/compilers.yaml
==> Added 1 new compiler to ~/.spack/compilers.yaml
gcc@4.9.0
This loads the environment module for gcc-4.9.0 to add it to
@@ -247,7 +246,7 @@ Manual compiler configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If auto-detection fails, you can manually configure a compiler by
editing your ``~/.spack/<platform>/compilers.yaml`` file. You can do this by running
editing your ``~/.spack/compilers.yaml`` file. You can do this by running
``spack config edit compilers``, which will open the file in your ``$EDITOR``.
Each compiler configuration in the file looks like this:
@@ -263,7 +262,7 @@ Each compiler configuration in the file looks like this:
cxx: /usr/local/bin/icpc-15.0.024-beta
f77: /usr/local/bin/ifort-15.0.024-beta
fc: /usr/local/bin/ifort-15.0.024-beta
spec: intel@15.0.0
spec: intel@15.0.0:
For compilers that do not support Fortran (like ``clang``), put
``None`` for ``f77`` and ``fc``:
@@ -469,21 +468,18 @@ Fortran.
install GCC with Spack (``spack install gcc``) or with Homebrew
(``brew install gcc``).
#. The only thing left to do is to edit ``~/.spack/darwin/compilers.yaml`` to provide
#. The only thing left to do is to edit ``~/.spack/compilers.yaml`` to provide
the path to ``gfortran``:
.. code-block:: yaml
compilers:
- compiler:
    ...
    paths:
      cc: /usr/bin/clang
      cxx: /usr/bin/clang++
      f77: /path/to/bin/gfortran
      fc: /path/to/bin/gfortran
    spec: clang@11.0.0-apple
darwin-x86_64:
  clang@7.3.0-apple:
    cc: /usr/bin/clang
    cxx: /usr/bin/clang++
    f77: /path/to/bin/gfortran
    fc: /path/to/bin/gfortran
If you used Spack to install GCC, you can get the installation prefix by
``spack location -i gcc`` (this will only work if you have a single version
@@ -594,12 +590,11 @@ flags to the ``icc`` command:
operating_system: centos7
paths:
cc: /opt/intel-15.0.24/bin/icc-15.0.24-beta
cflags: -gcc-name ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/gcc
cxx: /opt/intel-15.0.24/bin/icpc-15.0.24-beta
cxxflags: -gxx-name ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/g++
f77: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
fc: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
flags:
cflags: -gcc-name ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/gcc
cxxflags: -gxx-name ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/g++
fflags: -gcc-name ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/gcc
spec: intel@15.0.24.4.9.3


@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -55,7 +55,7 @@ or refer to the full manual below.
getting_started
basic_usage
workflows
Tutorial: Spack 101 <https://spack-tutorial.readthedocs.io>
tutorial
known_issues
.. toctree::
@@ -65,17 +65,12 @@ or refer to the full manual below.
configuration
config_yaml
build_settings
environments
containers
mirrors
module_file_support
repositories
binary_caches
command_index
package_list
chain
extensions
pipelines
.. toctree::
:maxdepth: 2
@@ -86,11 +81,6 @@ or refer to the full manual below.
build_systems
developer_guide
docker_for_developers
.. toctree::
:maxdepth: 2
:caption: API Docs
Spack API Docs <spack>
LLNL API Docs <llnl>


@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)


@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)


@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -13,8 +13,8 @@ The use of module systems to manage user environment in a controlled way
is a common practice at HPC centers that is often embraced also by individual
programmers on their development machines. To support this common practice
Spack integrates with `Environment Modules
<http://modules.sourceforge.net/>`_ and `LMod
<http://lmod.readthedocs.io/en/latest/>`_ by
<http://modules.sourceforge.net/>`_ , `LMod
<http://lmod.readthedocs.io/en/latest/>`_ and `Dotkit <https://computing.llnl.gov/?set=jobs&page=dotkit>`_ by
providing post-install hooks that generate module files and commands to manipulate them.
.. note::
@@ -67,7 +67,7 @@ to load the ``cmake`` module:
$ module load cmake-3.7.2-gcc-6.3.0-fowuuby
Neither of these is particularly pretty, easy to remember, or
easy to type. Luckily, Spack has its own interface for using modules.
easy to type. Luckily, Spack has its own interface for using modules and dotkits.
^^^^^^^^^^^^^
Shell support
@@ -108,10 +108,20 @@ that the startup time may be slightly increased because of that.
^^^^^^^^^^^^^^^^^^^^^^^
Once you have shell support enabled you can use the same spec syntax
you're used to and you can use the same shortened names you use
everywhere else in Spack.
you're used to:
For example this will add the ``mpich`` package built with ``gcc`` to your path:
=========================   ==========================
Modules                     Dotkit
=========================   ==========================
``spack load <spec>``       ``spack use <spec>``
``spack unload <spec>``     ``spack unuse <spec>``
=========================   ==========================
And you can use the same shortened names you use everywhere else in
Spack.
For example, if you are using dotkit, this will add the ``mpich``
package built with ``gcc`` to your path:
.. code-block:: console
@@ -119,39 +129,45 @@ For example this will add the ``mpich`` package built with ``gcc`` to your path:
# ... wait for install ...
$ spack load mpich %gcc@4.4.7
$ spack use mpich %gcc@4.4.7 # dotkit
Prepending: mpich@3.0.4%gcc@4.4.7 (ok)
$ which mpicc
~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/mpich@3.0.4/bin/mpicc
Or, similarly if you are using modules, you could type:
.. code-block:: console
$ spack load mpich %gcc@4.4.7 # modules
These commands will add appropriate directories to your ``PATH``,
``MANPATH``, ``CPATH``, and ``LD_LIBRARY_PATH``. When you no longer
want to use a package, you can type unload or unuse similarly:
.. code-block:: console
$ spack unload mpich %gcc@4.4.7
$ spack unload mpich %gcc@4.4.7 # modules
$ spack unuse mpich %gcc@4.4.7 # dotkit
.. note::
The ``load`` and ``unload`` subcommands are only available if you
have enabled Spack's shell support. These commands DO NOT use the
underlying Spack-generated module files.
These ``use``, ``unuse``, ``load``, and ``unload`` subcommands are
only available if you have enabled Spack's shell support *and* you
have dotkit or modules installed on your machine.
^^^^^^^^^^^^^^^
Ambiguous specs
^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^
Ambiguous module names
^^^^^^^^^^^^^^^^^^^^^^
If a spec used with load/unload is ambiguous (i.e. more than one
installed package matches it), then Spack will warn you:
If a spec used with load/unload or use/unuse is ambiguous (i.e. more
than one installed package matches it), then Spack will warn you:
.. code-block:: console
$ spack load libelf
==> Error: libelf matches multiple packages.
Matching packages:
libelf@0.8.13%gcc@4.4.7 arch=linux-debian7-x86_64
libelf@0.8.13%intel@15.0.0 arch=linux-debian7-x86_64
Use a more specific spec
==> Error: Multiple matches for spec libelf. Choose one:
libelf@0.8.13%gcc@4.4.7 arch=linux-debian7-x86_64
libelf@0.8.13%intel@15.0.0 arch=linux-debian7-x86_64
You can either type the ``spack load`` command again with a fully
qualified argument, or you can add just enough extra constraints to
@@ -173,15 +189,8 @@ To identify just the one built with the Intel compiler.
``spack module tcl loads``
^^^^^^^^^^^^^^^^^^^^^^^^^^
In some cases, it is desirable to use a Spack-generated module, rather
than relying on Spack's built-in user-environment modification
capabilities. To translate a spec into a module name, use ``spack
module tcl loads`` or ``spack module lmod loads`` depending on the
module system desired.
To load not just a module, but also all the modules it depends on, use
the ``--dependencies`` option. This is not required for most modules
In some cases, it is desirable to load not just a module, but also all
the modules it depends on. This is not required for most modules
because Spack builds binaries with RPATH support. However, not all
packages use RPATH to find their dependencies: this can be true in
particular for Python extensions, which are currently *not* built with
@@ -206,8 +215,8 @@ Module Commands for Shell Scripts
Although Spack is flexible, the ``module`` command is much faster.
This could become an issue when emitting a series of ``spack load``
commands inside a shell script. By adding the ``--dependencies`` flag,
``spack module tcl loads`` may also be used to generate code that can be
commands inside a shell script. By adding the ``--shell`` flag,
``spack module tcl find`` may also be used to generate code that can be
cut-and-pasted into a shell script. For example:
.. code-block:: console
@@ -283,6 +292,8 @@ that can be generated by Spack:
+-----------------------------+--------------------+-------------------------------+----------------------------------------------+----------------------+
| | **Hook name** | **Default root directory** | **Default template file** | **Compatible tools** |
+=============================+====================+===============================+==============================================+======================+
| **Dotkit** | ``dotkit`` | share/spack/dotkit | share/spack/templates/modules/modulefile.dk | DotKit |
+-----------------------------+--------------------+-------------------------------+----------------------------------------------+----------------------+
| **TCL - Non-Hierarchical** | ``tcl`` | share/spack/modules | share/spack/templates/modules/modulefile.tcl | Env. Modules/LMod |
+-----------------------------+--------------------+-------------------------------+----------------------------------------------+----------------------+
| **Lua - Hierarchical** | ``lmod`` | share/spack/lmod | share/spack/templates/modules/modulefile.lua | LMod |
@@ -312,7 +323,8 @@ content of the module files generated by Spack. The first one:
.. code-block:: python
def setup_run_environment(self, env):
def setup_environment(self, spack_env, run_env):
"""Set up the compile and runtime environments for a package."""
pass
can alter the content of the module file associated with the same package where it is overridden.
@@ -320,15 +332,16 @@ The second method:
.. code-block:: python
def setup_dependent_run_environment(self, env, dependent_spec):
def setup_dependent_environment(self, spack_env, run_env, dependent_spec):
"""Set up the environment of packages that depend on this one"""
pass
can instead inject run-time environment modifications in the module files of packages
that depend on it. In both cases you need to fill ``run_env`` with the desired
list of environment modifications.
.. admonition:: The ``r`` package and callback APIs
.. note::
The ``r`` package and callback APIs
An example in which it is crucial to override both methods
is given by the ``r`` package. This package installs libraries and headers
in non-standard locations and it is possible to prepend the appropriate directory
@@ -342,15 +355,15 @@ list of environment modifications.
with the following snippet:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/r/package.py
:pyobject: R.setup_run_environment
.. literalinclude:: ../../../var/spack/repos/builtin/packages/r/package.py
:pyobject: R.setup_environment
The ``r`` package also knows which environment variable should be modified
to make language extensions provided by other packages available, and modifies
it appropriately in the override of the second method:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/r/package.py
:pyobject: R.setup_dependent_run_environment
.. literalinclude:: ../../../var/spack/repos/builtin/packages/r/package.py
:pyobject: R.setup_dependent_environment
.. _modules-yaml:
@@ -361,10 +374,10 @@ Write a configuration file
The configuration files that control module generation behavior
are named ``modules.yaml``. The default configuration:
.. literalinclude:: _spack_root/etc/spack/defaults/modules.yaml
.. literalinclude:: ../../../etc/spack/defaults/modules.yaml
:language: yaml
activates the hooks to generate ``tcl`` module files and inspects
activates the hooks to generate ``tcl`` and ``dotkit`` module files and inspects
the installation folder of each package for the presence of a set of subdirectories
(``bin``, ``man``, ``share/man``, etc.). If any is found its full path is prepended
to the environment variables listed below the folder name.
@@ -386,9 +399,12 @@ to the generator being customized:
modules:
  enable:
    - tcl
    - dotkit
    - lmod
  tcl:
    # contains environment modules specific customizations
  dotkit:
    # contains dotkit specific customizations
  lmod:
    # contains lmod specific customizations
@@ -519,17 +535,17 @@ most likely via the ``+blas`` variant specification.
modules:
tcl:
naming_scheme: '{name}/{version}-{compiler.name}-{compiler.version}'
naming_scheme: '${PACKAGE}/${VERSION}-${COMPILERNAME}-${COMPILERVER}'
all:
conflict:
- '{name}'
- '${PACKAGE}'
- 'intel/14.0.1'
will create module files that will conflict with ``intel/14.0.1`` and with the
base directory of the same module, effectively preventing the possibility to
load two or more versions of the same software at the same time. The tokens
that are available for use in this directive are the same understood by
the :meth:`~spack.spec.Spec.format` method.
the ``Spec.format`` method.
.. note::
@@ -574,17 +590,15 @@ do so by using the environment blacklist:
.. code-block:: yaml
modules:
  tcl:
  dotkit:
    all:
      filter:
        # Exclude changes to any of these variables
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
The configuration above will generate module files that will not contain
modifications to either ``CPATH`` or ``LIBRARY_PATH``.
.. _autoloading-dependencies:
The configuration above will generate dotkit module files that will not contain
modifications to either ``CPATH`` or ``LIBRARY_PATH`` and environment module
files that instead will contain these modifications.
"""""""""""""""""""""
Autoload dependencies
@@ -604,21 +618,7 @@ activated using ``spack activate``:
The configuration file above will produce module files that will
load their direct dependencies if the package installed depends on ``python``.
The allowed values for the ``autoload`` statement are either ``none``,
``direct`` or ``all``. The default is ``none``.
.. tip::
Building external software
Setting ``autoload`` to ``direct`` for all packages can be useful
when building software outside of a Spack installation that depends on
artifacts in that installation. E.g. (adjust ``lmod`` vs ``tcl``
as appropriate):
.. code-block:: yaml
modules:
  lmod:
    all:
      autoload: 'direct'
``direct`` or ``all``.
.. note::
TCL prerequisites


@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)


@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -136,10 +136,6 @@ generates a boilerplate template for your package, and opens up the new
homepage = "http://www.example.com"
url = "https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2"
# FIXME: Add a list of GitHub accounts to
# notify when the package is updated.
# maintainers = ['github_user1', 'github_user2']
version('6.1.2', '8ddbb26dc3bd4e2302984debba1406a5')
version('6.1.1', '4c175f86e11eb32d8bf9872ca3a8e11d')
version('6.1.0', '86ee6e54ebfc4a90b643a65e402c4048')
@@ -188,17 +184,6 @@ The rest of the tasks you need to do are as follows:
The ``homepage`` is displayed when users run ``spack info`` so
that they can learn more about your package.
#. Add a comma-separated list of maintainers.
The ``maintainers`` field is a list of GitHub accounts of people
who want to be notified any time the package is modified. When a
pull request is submitted that updates the package, these people
will be requested to review the PR. This is useful for developers
who maintain a Spack package for their own software, as well as
users who rely on a piece of software and want to ensure that the
package doesn't break. It also gives users a list of people to
contact for help when someone reports a build error with the package.
#. Add ``depends_on()`` calls for the package's dependencies.
``depends_on`` tells Spack that other packages need to be built
@@ -425,8 +410,6 @@ For tarball downloads, Spack can currently support checksums using the
MD5, SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512 algorithms. It
determines the algorithm to use based on the hash length.
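
Since each of those algorithms produces a hex digest of a distinct length, the
length alone identifies the algorithm. The idea, in rough sketch form (this is
not Spack's actual code), is:

.. code-block:: python

   import hashlib

   # hex-digest length -> algorithm name (lengths are standard)
   _hash_by_length = {32: 'md5', 40: 'sha1', 56: 'sha224',
                      64: 'sha256', 96: 'sha384', 128: 'sha512'}

   def hasher_for(digest):
       """Pick a hashlib object matching a checksum's length."""
       return hashlib.new(_hash_by_length[len(digest)])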
.. _versions-and-fetching:
---------------------
Versions and fetching
---------------------
@@ -477,7 +460,7 @@ https://www.open-mpi.org/software/ompi/v2.1/downloads/openmpi-2.1.1.tar.bz2
In order to handle this, you can define a ``url_for_version()`` function
like so:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/openmpi/package.py
.. literalinclude:: ../../../var/spack/repos/builtin/packages/openmpi/package.py
:pyobject: Openmpi.url_for_version
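
The referenced function is included from the OpenMPI package file; in essence
it boils down to a sketch like this (simplified; the real body also varies the
archive extension by version):

.. code-block:: python

   def url_for_version(self, version):
       # releases live under a vMAJOR.MINOR directory, e.g. v2.1/openmpi-2.1.1.tar.bz2
       url = 'https://www.open-mpi.org/software/ompi/v{0}/downloads/openmpi-{1}.tar.bz2'
       return url.format(version.up_to(2), version)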
With the use of this ``url_for_version()``, Spack knows to download OpenMPI ``2.1.1``
@@ -553,43 +536,14 @@ version. This is useful for packages that have an easy to extrapolate URL, but
keep changing their URL format every few releases. With this method, you only
need to specify the ``url`` when the URL changes.
"""""""""""""""""""""""
Mirrors of the main URL
"""""""""""""""""""""""
Spack supports listing mirrors of the main URL in a package by defining
the ``urls`` attribute:
.. code-block:: python
class Foo(Package):
    urls = [
        'http://example.com/foo-1.0.tar.gz',
        'http://mirror.com/foo-1.0.tar.gz'
    ]
instead of just a single ``url``. This attribute is a list of possible URLs that
will be tried in order when fetching packages. Notice that either one of ``url``
or ``urls`` can be present in a package, but not both at the same time.
A well-known case of packages that can be fetched from multiple mirrors is that
of GNU. For that, Spack goes a step further and defines a mixin class that
takes care of all of the plumbing and requires packagers to just define a proper
``gnu_mirror_path`` attribute:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/autoconf/package.py
:lines: 9-18
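
Since the included lines are not reproduced in this view, the gist of the
``autoconf`` recipe is a sketch like the following (the ``GNUMirrorPackage``
mixin provides the URL plumbing; the path shown is illustrative):

.. code-block:: python

   class Autoconf(AutotoolsPackage, GNUMirrorPackage):
       """Autoconf -- system configuration part of autotools."""

       homepage = 'https://www.gnu.org/software/autoconf/'
       # path relative to any GNU mirror; full URLs are derived from it
       gnu_mirror_path = 'autoconf/autoconf-2.69.tar.gz'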
^^^^^^^^^^^^^^^^^^^^^^^^
Skipping the expand step
^^^^^^^^^^^^^^^^^^^^^^^^
Spack normally expands archives (e.g. ``*.tar.gz`` and ``*.zip``) automatically
into a standard stage source directory (``self.stage.source_path``) after
downloading them. If you want to skip this step (e.g., for self-extracting
executables and other custom archive types), you can add ``expand=False`` to a
``version`` directive.
after downloading them. If you want to skip this step (e.g., for
self-extracting executables and other custom archive types), you can add
``expand=False`` to a ``version`` directive.
.. code-block:: python
@@ -619,11 +573,7 @@ Download caching
Spack maintains a cache (described :ref:`here <caching>`) which saves files
retrieved during package installations to avoid re-downloading in the case that
a package is installed with a different specification (but the same version) or
reinstalled on account of a change in the hashing scheme. It may (rarely) be
necessary to avoid caching for a particular version by adding ``no_cache=True``
as an option to the ``version()`` directive. Example situations would be a
"snapshot"-like Version Control System (VCS) tag, a VCS branch such as
``v6-16-00-patches``, or a URL specifying a regularly updated snapshot tarball.
reinstalled on account of a change in the hashing scheme.
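
For example, a package following a moving VCS branch might declare (package
name, URL, and branch here are hypothetical):

.. code-block:: python

   class Example(Package):
       git = 'https://github.com/example/project.git'   # hypothetical URL

       # the branch advances over time, so a cached download could go stale
       version('6-16-00-patches', branch='v6-16-00-patches', no_cache=True)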
^^^^^^^^^^^^^^^^^^
Version comparison
@@ -640,15 +590,13 @@ with `RPM <https://bugzilla.redhat.com/show_bug.cgi?id=50977>`_.
Spack versions may also be arbitrary non-numeric strings; any string
here will suffice; for example, ``@develop``, ``@master``, ``@local``.
Versions are compared as follows. First, a version string is split into
multiple fields based on delimiters such as ``.``, ``-`` etc. Then
matching fields are compared using the rules below:
The following rules determine the sort order of numeric
vs. non-numeric versions:
#. The following develop-like strings are greater (newer) than all
numbers and are ordered as ``develop > master > head > trunk``.
#. The non-numeric version ``@develop`` is considered greatest (newest).
#. Numbers are all less than the chosen develop-like strings above,
and are sorted numerically.
#. Numeric versions are all less than ``@develop`` version, and are
sorted numerically.
#. All other non-numeric versions are less than numeric versions, and
are sorted alphabetically.
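
As a concrete illustration of these rules (a sketch assuming
``spack.version.Version`` is importable from a Spack checkout):

.. code-block:: python

   from spack.version import Version

   assert Version('develop') > Version('999.9')   # develop-like strings beat any number
   assert Version('1.10') > Version('1.2')        # numeric fields compare numerically
   assert Version('1.2') > Version('local')       # other non-numeric strings sort below numbers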
@@ -662,7 +610,7 @@ The logic behind this sort order is two-fold:
#. The most-recent development version of a package will usually be
newer than any released numeric versions. This allows the
``@develop`` version to satisfy dependencies like ``depends_on(abc,
``develop`` version to satisfy dependencies like ``depends_on(abc,
when="@x.y.z:")``
^^^^^^^^^^^^^^^^^
@@ -864,8 +812,7 @@ For some packages, source code is provided in a Version Control System
(VCS) repository rather than in a tarball. Spack can fetch packages
from VCS repositories. Currently, Spack supports fetching with `Git
<git-fetch_>`_, `Mercurial (hg) <hg-fetch_>`_, `Subversion (svn)
<svn-fetch_>`_, and `Go <go-fetch_>`_. In all cases, the destination
is the standard stage source path.
<svn-fetch_>`_, and `Go <go-fetch_>`_.
To fetch a package from a source repository, Spack needs to know which
VCS to use and where to download from. Much like with ``url``, package
@@ -929,19 +876,9 @@ Git fetching supports the following parameters to ``version``:
* ``tag``: Name of a tag to fetch.
* ``commit``: SHA hash (or prefix) of a commit to fetch.
* ``submodules``: Also fetch submodules recursively when checking out this repository.
* ``submodules_delete``: A list of submodules to forcibly delete from the repository
after fetching. Useful if a version in the repository has submodules that
have disappeared/are no longer accessible.
* ``get_full_repo``: Ensure the full git history is checked out with all remote
branch information. Normally (``get_full_repo=False``, the default), the git
option ``--depth 1`` will be used if the version of git and the specified
transport protocol support it, and ``--single-branch`` will be used if the
version of git supports it.
Only one of ``tag``, ``branch``, or ``commit`` can be used at a time.
The destination directory for the clone is the standard stage source path.
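
Putting these parameters together, a package might pin its versions like this
(URL, tag, and commit hash are hypothetical):

.. code-block:: python

   class Example(Package):
       git = 'https://github.com/example/example.git'   # hypothetical URL

       version('develop', branch='develop')      # follow a branch
       version('1.2.0', tag='v1.2.0')            # pin to a tag
       version('snapshot', commit='abc1234')     # pin to a commit (SHA prefix)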
Default branch
To fetch a repository's default branch:
@@ -1042,7 +979,6 @@ Mercurial
Fetching with Mercurial works much like `Git <git-fetch>`_, but you
use the ``hg`` parameter.
The destination directory is still the standard stage source path.
Default branch
Add the ``hg`` attribute with no ``revision`` passed to ``version``:
@@ -1081,7 +1017,6 @@ Subversion
^^^^^^^^^^
To fetch with subversion, use the ``svn`` and ``revision`` parameters.
The destination directory will be the standard stage source path.
Fetching the head
Simply add an ``svn`` parameter to the package:
@@ -1126,9 +1061,7 @@ Go
Go isn't a VCS, it is a programming language with a builtin command,
`go get <https://golang.org/cmd/go/#hdr-Download_and_install_packages_and_dependencies>`_,
that fetches packages and their dependencies automatically.
The destination directory will be the standard stage source path.
This strategy can clone a Git repository, or download from another source location.
It can clone a Git repository, or download from another source location.
For example:
.. code-block:: python
@@ -1144,174 +1077,6 @@ Go cannot be used to fetch a particular commit or branch; it always
downloads the head of the repository. This download method is untrusted
and is not recommended. Use another fetch strategy whenever possible.
--------
Variants
--------
Many software packages can be configured to enable optional
features, which often come at the expense of additional dependencies or
longer build-times. To remain flexible and support a wide variety of use
cases, Spack lets packages expose to the end user the choice of which
features to activate in a package at the time it is installed.
The mechanism to be employed is the :py:func:`spack.directives.variant` directive.
^^^^^^^^^^^^^^^^
Boolean variants
^^^^^^^^^^^^^^^^
In their simplest form variants are boolean options specified at the package
level:
.. code-block:: python
class Hdf5(AutotoolsPackage):
    ...
    variant(
        'shared', default=True, description='Builds a shared version of the library'
    )
with a default value and a description of their meaning / use in the package.
*Variants can be tested in any context where a spec constraint is expected.*
In the example above the ``shared`` variant is tied to the build of shared dynamic
libraries. To pass the right option at configure time we can branch depending on
its value:
.. code-block:: python
def configure_args(self):
    ...
    if '+shared' in self.spec:
        extra_args.append('--enable-shared')
    else:
        extra_args.append('--disable-shared')
        extra_args.append('--enable-static-exec')
As explained in :ref:`basic-variants` the constraint ``+shared`` means
that the boolean variant is set to ``True``, while ``~shared`` means it is set
to ``False``.
Another common example is the optional activation of an extra dependency
which requires to use the variant in the ``when`` argument of
:py:func:`spack.directives.depends_on`:
.. code-block:: python
class Hdf5(AutotoolsPackage):
    ...
    variant('szip', default=False, description='Enable szip support')
    depends_on('szip', when='+szip')
as shown in the snippet above where ``szip`` is modeled to be an optional
dependency of ``hdf5``.
^^^^^^^^^^^^^^^^^^^^^
Multi-valued variants
^^^^^^^^^^^^^^^^^^^^^
If need be, Spack can go beyond Boolean variants and permit an arbitrary
number of allowed values. This might be useful when modeling
options that are tightly related to each other.
The values in this case are passed to the :py:func:`spack.directives.variant`
directive as a tuple:
.. code-block:: python
class Blis(Package):
    ...
    variant(
        'threads', default='none', description='Multithreading support',
        values=('pthreads', 'openmp', 'none'), multi=False
    )
In the example above the argument ``multi`` is set to ``False`` to indicate
that only one among all the variant values can be active at any time. This
constraint is enforced by the parser and an error is emitted if a user
specifies two or more values at the same time:
.. code-block:: console
$ spack spec blis threads=openmp,pthreads
Input spec
--------------------------------
blis threads=openmp,pthreads
Concretized
--------------------------------
==> Error: multiple values are not allowed for variant "threads"
Note also that *Python's* ``None`` *is not allowed as a default value*,
so it cannot be used to denote that no feature was selected.
Packagers should instead pick another value, like ``'none'``, and handle it explicitly
within the package recipe if need be:
.. code-block:: python
if self.spec.variants['threads'].value == 'none':
    options.append('--no-threads')
In cases where multiple values can be selected at the same time ``multi`` should
be set to ``True``:
.. code-block:: python
class Gcc(AutotoolsPackage):
    ...
    variant(
        'languages', default='c,c++,fortran',
        values=('ada', 'brig', 'c', 'c++', 'fortran',
                'go', 'java', 'jit', 'lto', 'objc', 'obj-c++'),
        multi=True,
        description='Compilers and runtime libraries to build'
    )
Within a package recipe a multi-valued variant is tested using a ``key=value`` syntax:
.. code-block:: python
if 'languages=jit' in spec:
    options.append('--enable-host-shared')
"""""""""""""""""""""""""""""""""""""""""""
Complex validation logic for variant values
"""""""""""""""""""""""""""""""""""""""""""
To cover complex use cases, the :py:func:`spack.directives.variant` directive
could accept as the ``values`` argument a full-fledged object which has
``default`` and other arguments of the directive embedded as attributes.
An example, already implemented in Spack's core, is :py:class:`spack.variant.DisjointSetsOfValues`.
This class is used to implement a few convenience functions, like
:py:func:`spack.variant.any_combination_of`:
.. code-block:: python
class Adios(AutotoolsPackage):
    ...
    variant(
        'staging',
        values=any_combination_of('flexpath', 'dataspaces'),
        description='Enable dataspaces and/or flexpath staging transports'
    )
that allows any combination of the specified values, and also allows the
user to specify ``'none'`` (as a string) to choose none of them.
The objects returned by these functions can be modified at will by chaining
method calls to change the default value, customize the error message, or
perform other similar operations:
.. code-block:: python
class Mvapich2(AutotoolsPackage):
    ...
    variant(
        'process_managers',
        description='List of the process managers to activate',
        values=disjoint_sets(
            ('auto',), ('slurm',), ('hydra', 'gforker', 'remshell')
        ).prohibit_empty_set().with_error(
            "'slurm' or 'auto' cannot be activated along with "
            "other process managers"
        ).with_default('auto').with_non_feature_values('auto'),
    )
------------------------------------
Resources (expanding extra tarballs)
------------------------------------
@@ -1510,8 +1275,8 @@ that the same package with different patches applied will have different
hash identifiers. To ensure that the hashing scheme is consistent, you
must use a ``sha256`` checksum for the patch. Patches will be fetched
from their URLs, checked, and applied to your source code. You can use
the GNU utils ``sha256sum`` or the macOS ``shasum -a 256`` commands to
generate a checksum for a patch file.
the ``spack sha256`` command to generate a checksum for a patch file or
URL.
Spack can also handle compressed patches. If you use these, Spack needs
a little more help. Specifically, it needs *two* checksums: the
@@ -1630,7 +1395,7 @@ handles ``RPATH``:
.. _pyside-patch:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/py-pyside/package.py
.. literalinclude:: ../../../var/spack/repos/builtin/packages/py-pyside/package.py
:pyobject: PyPyside.patch
:linenos:
@@ -1657,73 +1422,6 @@ with all packages in Spack, a patched dependency library can coexist with
other versions of that library. See the `section on depends_on
<dependency_dependency_patching_>`_ for more details.
.. _patch_inspecting_patches:
^^^^^^^^^^^^^^^^^^^
Inspecting patches
^^^^^^^^^^^^^^^^^^^
If you want to better understand the patches that Spack applies to your
packages, you can do that using ``spack spec``, ``spack find``, and other
query commands. Let's look at ``m4``. If you run ``spack spec m4``, you
can see the patches that would be applied to ``m4``::
$ spack spec m4
Input spec
--------------------------------
m4
Concretized
--------------------------------
m4@1.4.18%clang@9.0.0-apple patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=darwin-highsierra-x86_64
^libsigsegv@2.11%clang@9.0.0-apple arch=darwin-highsierra-x86_64
You can also see patches that have been applied to installed packages
with ``spack find -v``::
$ spack find -v m4
==> 1 installed package
-- darwin-highsierra-x86_64 / clang@9.0.0-apple -----------------
m4@1.4.18 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv
.. _cmd-spack-resource:
In both cases above, you can see that the patches' sha256 hashes are
stored on the spec as a variant. As mentioned above, this means that you
can have multiple, differently-patched versions of a package installed at
once.
You can look up a patch by its sha256 hash (or a short version of it)
using the ``spack resource show`` command::
$ spack resource show 3877ab54
3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00
path: /home/spackuser/src/spack/var/spack/repos/builtin/packages/m4/gnulib-pgi.patch
applies to: builtin.m4
``spack resource show`` looks up downloadable resources from package
files by hash and prints out information about them. Above, we see that
the ``3877ab54`` patch applies to the ``m4`` package. The output also
tells us where to find the patch.
Things get more interesting if you want to know about dependency
patches. For example, when ``dealii`` is built with ``boost@1.68.0``, it
has to patch boost to work correctly. If you didn't know this, you might
wonder where the extra boost patches are coming from::
$ spack spec dealii ^boost@1.68.0 ^hdf5+fortran | grep '\^boost'
^boost@1.68.0
^boost@1.68.0%clang@9.0.0-apple+atomic+chrono~clanglibcpp cxxstd=default +date_time~debug+exception+filesystem+graph~icu+iostreams+locale+log+math~mpi+multithreaded~numpy patches=2ab6c72d03dec6a4ae20220a9dfd5c8c572c5294252155b85c6874d97c323199,b37164268f34f7133cbc9a4066ae98fda08adf51e1172223f6a969909216870f ~pic+program_options~python+random+regex+serialization+shared+signals~singlethreaded+system~taggedlayout+test+thread+timer~versionedlayout+wave arch=darwin-highsierra-x86_64
$ spack resource show b37164268
b37164268f34f7133cbc9a4066ae98fda08adf51e1172223f6a969909216870f
path: /home/spackuser/src/spack/var/spack/repos/builtin/packages/dealii/boost_1.68.0.patch
applies to: builtin.boost
patched by: builtin.dealii
Here you can see that the patch is applied to ``boost`` by ``dealii``,
and that it lives in ``dealii``'s directory in Spack's ``builtin``
package repository.
.. _handling_rpaths:
---------------
@@ -1778,11 +1476,12 @@ RPATHs in Spack are handled in one of three ways:
Parallel builds
---------------
By default, Spack will invoke ``make()``, or any other similar tool,
with a ``-j <njobs>`` argument, so that builds run in parallel.
The parallelism is determined by the value of the ``build_jobs`` entry
in ``config.yaml`` (see :ref:`here <build-jobs>` for more details on
how this value is computed).
By default, Spack will invoke ``make()`` with a ``-j <njobs>``
argument, so that builds run in parallel. It figures out how many
jobs to run by determining how many cores are on the host machine.
Specifically, it uses the number of CPUs reported by Python's
`multiprocessing.cpu_count()
<http://docs.python.org/library/multiprocessing.html#multiprocessing.cpu_count>`_.
If a package does not build properly in parallel, you can override
this setting by adding ``parallel = False`` to your package. For
@@ -1953,8 +1652,6 @@ issues with 1.64.0, 1.65.0, and 1.66.0, you can say:
depends_on('boost@1.59.0:1.63,1.65.1,1.67.0:')
.. _dependency-types:
^^^^^^^^^^^^^^^^
Dependency types
^^^^^^^^^^^^^^^^
@@ -1992,28 +1689,6 @@ inject the dependency's ``prefix/lib`` directory, but the package needs to
be in ``PATH`` and ``PYTHONPATH`` during the build process and later when
a user wants to run the package.
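
For instance, a build-and-run tool of that kind can be declared as follows
(the ``python`` dependency is just an illustration):

.. code-block:: python

   # needed to run a code generator at build time and scripts at run time,
   # but nothing links against it
   depends_on('python', type=('build', 'run'))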
^^^^^^^^^^^^^^^^^^^^^^^^
Conditional dependencies
^^^^^^^^^^^^^^^^^^^^^^^^
You may have a package that only requires a dependency under certain
conditions. For example, you may have a package that has optional MPI support:
MPI is only a dependency when you want to enable MPI support for the
package. In that case, you could say something like:
.. code-block:: python
variant('mpi', default=False)
depends_on('mpi', when='+mpi')
``when`` can include constraints on the variant, version, compiler, etc. and
the :mod:`syntax<spack.spec>` is the same as for Specs written on the command
line.
If a dependency/feature of a package isn't typically used, you can save time
by making it conditional (since Spack will not build the dependency unless it
is required for the Spec).
.. _dependency_dependency_patching:
^^^^^^^^^^^^^^^^^^^
@@ -2102,58 +1777,55 @@ appear in the package file (or in this case, in the list).
.. _setup-dependent-environment:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Influence how dependents are built or run
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``setup_dependent_environment()``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Spack provides a mechanism for dependencies to influence the
environment of their dependents by overriding the
:meth:`setup_dependent_run_environment <spack.package.PackageBase.setup_dependent_run_environment>`
or the
:meth:`setup_dependent_build_environment <spack.package.PackageBase.setup_dependent_build_environment>`
methods.
The Qt package, for instance, uses this call:
Spack provides a mechanism for dependencies to provide variables that
can be used in their dependents' build. Any package can declare a
``setup_dependent_environment()`` function, and this function will be
called before the ``install()`` method of any dependent packages.
This allows dependencies to set up environment variables and other
properties to be used by dependents.
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/qt/package.py
:pyobject: Qt.setup_dependent_build_environment
The function declaration should look like this:
.. literalinclude:: ../../../var/spack/repos/builtin/packages/qt/package.py
:pyobject: Qt.setup_dependent_environment
:linenos:
to set the ``QTDIR`` environment variable so that packages
that depend on a particular Qt installation will find it.
Another good example of how a dependency can influence
the build environment of dependents is the Python package:
Here, the Qt package sets the ``QTDIR`` environment variable so that
packages that depend on a particular Qt installation will find it.
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/python/package.py
:pyobject: Python.setup_dependent_build_environment
The arguments to this function are:
* **spack_env**: List of environment modifications to be applied when
the dependent package is built within Spack.
* **run_env**: List of environment modifications to be applied when
the dependent package is run outside of Spack. These are added to the
resulting module file.
* **dependent_spec**: The spec of the dependent package about to be
built. This allows the extendee (self) to query the dependent's state.
Note that *this* package's spec is available as ``self.spec``.
A good example of using these is in the Python package:
.. literalinclude:: ../../../var/spack/repos/builtin/packages/python/package.py
:pyobject: Python.setup_dependent_environment
:linenos:
The method above ensures that any package that depends on Python
will have the ``PYTHONPATH``, ``PYTHONHOME`` and ``PATH`` environment
variables set appropriately before starting the installation. To make things
even simpler, the ``python setup.py`` command is also inserted into the module
scope of dependents by overriding a third method called
:meth:`setup_dependent_package <spack.package.PackageBase.setup_dependent_package>`:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/python/package.py
:pyobject: Python.setup_dependent_package
:linenos:
This allows most python packages to have a very simple install procedure,
like the following:
The first thing that happens here is that the ``python`` command is
inserted into module scope of the dependent. This allows most python
packages to have a very simple install method, like this:
.. code-block:: python
def install(self, spec, prefix):
    setup_py('install', '--prefix={0}'.format(prefix))
Finally, the Python package also takes care of the modifications to ``PYTHONPATH``
that allow dependencies to run correctly:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/python/package.py
:pyobject: Python.setup_dependent_run_environment
:linenos:
python('setup.py', 'install', '--prefix={0}'.format(prefix))
Python's ``setup_dependent_environment`` method also sets up some
other variables, creates a directory, and sets up the ``PYTHONPATH``
so that dependent packages can find their dependencies at build time.
.. _packaging_conflicts:
@@ -2300,7 +1972,7 @@ same way that Python does.
Let's look at Python's activate function:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/python/package.py
.. literalinclude:: ../../../var/spack/repos/builtin/packages/python/package.py
:pyobject: Python.activate
:linenos:
@@ -2312,7 +1984,7 @@ Python's setuptools.
Deactivate behaves similarly to activate, but it unlinks files:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/python/package.py
.. literalinclude:: ../../../var/spack/repos/builtin/packages/python/package.py
:pyobject: Python.deactivate
:linenos:
@@ -2754,7 +2426,7 @@ docs at :py:mod:`~.spack.build_systems`, or using the ``spack info`` command:
Typically, phases have default implementations that fit most of the common cases:
.. literalinclude:: _spack_root/lib/spack/spack/build_systems/autotools.py
.. literalinclude:: ../../../lib/spack/spack/build_systems/autotools.py
:pyobject: AutotoolsPackage.configure
:linenos:
@@ -2762,7 +2434,7 @@ It is thus just sufficient for a packager to override a few
build system specific helper methods or attributes to provide, for instance,
configure arguments:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/m4/package.py
.. literalinclude:: ../../../var/spack/repos/builtin/packages/m4/package.py
:pyobject: M4.configure_args
:linenos:
@@ -2937,7 +2609,7 @@ Shell command functions
Recall the install method from ``libelf``:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/libelf/package.py
.. literalinclude:: ../../../var/spack/repos/builtin/packages/libelf/package.py
:pyobject: Libelf.install
:linenos:
@@ -3020,7 +2692,8 @@ as arguments.
Here are the definitions of the three built-in flag handlers:
.. code-block:: python
def build_system_flags(self, name, flags):
return (None, None, flags)
def inject_flags(pkg, name, flags):
return (flags, None, None)
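
For orientation, all three built-in handlers follow the same pattern: each
returns a triple ``(injected_flags, env_flags, build_system_flags)``. A
sketch, using the newer module-level signature shown above:

.. code-block:: python

   def inject_flags(pkg, name, flags):
       # default: pass the flags through the compiler wrappers
       return (flags, None, None)

   def env_flags(pkg, name, flags):
       # export the flags as environment variables (e.g. CFLAGS=...)
       return (None, flags, None)

   def build_system_flags(pkg, name, flags):
       # hand the flags to the build system (e.g. as configure arguments)
       return (None, None, flags)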
@@ -3288,137 +2961,6 @@ the two functions is that ``satisfies()`` tests whether spec
constraints overlap at all, while ``in`` tests whether a spec or any
of its dependencies satisfy the provided spec.
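
For example (the configure flags are illustrative):

.. code-block:: python

   # satisfies() checks constraints against this spec itself...
   if spec.satisfies('%gcc@4.7:'):
       args.append('--enable-feature-x')     # illustrative

   # ...while `in` also searches the spec's dependencies
   if 'openblas' in spec:
       args.append('--with-blas=openblas')   # illustrative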
^^^^^^^^^^^^^^^^^^^^^^^
Architecture specifiers
^^^^^^^^^^^^^^^^^^^^^^^
As mentioned in :ref:`support-for-microarchitectures` each node in a concretized spec
object has an architecture attribute which is a triplet of ``platform``, ``os`` and ``target``.
Each of these three items can be queried to make decisions when configuring, building, or
installing a package.
""""""""""""""""""""""""""""""""""""""""""""""
Querying the platform and the operating system
""""""""""""""""""""""""""""""""""""""""""""""
Sometimes the actions to be taken to install a package might differ depending on the
platform we are installing for. If that is the case we can use conditionals:
.. code-block:: python
if spec.platform == 'darwin':
    # Actions that are specific to Darwin
    args.append('--darwin-specific-flag')
and branch based on the current spec platform. If we need to make a package directive
conditional on the platform we can instead employ the usual spec syntax and pass the
corresponding constraint to the appropriate argument of that directive:
.. code-block:: python
class Libnl(AutotoolsPackage):
    conflicts('platform=darwin', msg='libnl requires FreeBSD or Linux')
Similar considerations are also valid for the ``os`` part of a spec's architecture.
For instance:
.. code-block:: python
class Glib(AutotoolsPackage):
    patch('old-kernels.patch', when='os=centos6')
will apply the patch only when the operating system is CentOS 6.
.. note::
Even though experienced Python programmers might recognize that there are other ways
to retrieve information on the platform:
.. code-block:: python
if sys.platform == 'darwin':
    # Actions that are specific to Darwin
    args.append('--darwin-specific-flag')
querying the spec architecture's platform should be considered the preferred approach. The key difference
is that a query on ``sys.platform``, or anything similar, is always bound to the host on which the
interpreter running Spack is located, and as such it won't work correctly in environments where
cross-compilation is required.
"""""""""""""""""""""""""""""""""""""
Querying the target microarchitecture
"""""""""""""""""""""""""""""""""""""
The third item of the architecture tuple is the ``target`` which abstracts the information on the
CPU microarchitecture. A list of all the targets known to Spack can be obtained via the
command line:
.. command-output:: spack arch --known-targets
Within directives each of the names above can be used to match a particular target:
.. code-block:: python
class Julia(Package):
    # This patch is only applied on icelake microarchitectures
    patch("icelake.patch", when="target=icelake")
It's also possible to select all the architectures belonging to the same family
using an open range:
.. code-block:: python
class Julia(Package):
    # This patch is applied on all x86_64 microarchitectures.
    # The trailing colon denotes an open range of targets.
    patch("generic_x86_64.patch", when="target=x86_64:")
in a way that resembles what was shown in :ref:`versions-and-fetching` for versions.
Where ``target`` objects really shine though is when they are used in methods
called at configure, build or install time. In that case we can test targets
for supported features, for instance:
.. code-block:: python
if 'avx512' in spec.target:
    args.append('--with-avx512')
The snippet above will append the ``--with-avx512`` item to a list of arguments only if the corresponding
feature is supported by the current target. Sometimes we need to take different actions based
on the architecture family and not on the specific microarchitecture. In those cases
we can check the ``family`` attribute:
.. code-block:: python
if spec.target.family == 'ppc64le':
    args.append('--enable-power')
Possible values for the ``family`` attribute are displayed by ``spack arch --known-targets``
under the "Generic architectures (families)" header.
Finally it's possible to perform actions based on whether the current microarchitecture
is compatible with a known one:
.. code-block:: python
if spec.target > 'haswell':
    args.append('--needs-at-least-haswell')
The snippet above will add an item to a list of configure options only if the current
architecture is a superset of ``haswell`` or, said otherwise, only if the current
architecture is a later microarchitecture still compatible with ``haswell``.
.. admonition:: Using Spack on unknown microarchitectures
If Spack is used on an unknown microarchitecture it will try to perform a best match
of the features it detects and will select the closest microarchitecture it has
information for. In case nothing matches, it will create on the fly a new generic
architecture. This is done so that users can still use Spack for their work.
The software built probably won't be as optimized as it could be but, just
as you need a newer compiler to build for newer architectures, you may need
newer versions of Spack for new architectures to be labeled correctly.
^^^^^^^^^^^^^^^^^^^^^^
Accessing Dependencies
^^^^^^^^^^^^^^^^^^^^^^
@@ -3727,7 +3269,7 @@ the one passed to install, only the MPI implementations all set some
additional properties on it to help you out. E.g., in mvapich2, you'll
find this:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/mvapich2/package.py
.. literalinclude:: ../../../var/spack/repos/builtin/packages/mvapich2/package.py
:pyobject: Mvapich2.setup_dependent_package
That code allows the mvapich2 package to associate an ``mpicc`` property
@@ -4062,6 +3604,7 @@ variant names are:
Name    Default  Description
======= ======== ========================
shared  True     Build shared libraries
static  True     Build static libraries
mpi     True     Use MPI
python  False    Build Python extension
======= ======== ========================
@@ -4069,12 +3612,6 @@ variant names are:
If specified in this table, the corresponding default should be used
when declaring a variant.
The semantics of the ``shared`` variant are important. When a package is
built ``~shared``, the package guarantees that no shared libraries are
built. When a package is built ``+shared``, the package guarantees that
shared libraries are built, but it makes no guarantee about whether
static libraries are built.
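For example, a package honoring this convention might declare the variant and
translate it into a build-system argument as in the following sketch (the
package name and CMake option here are hypothetical):

.. code-block:: python

   from spack import *

   class Example(CMakePackage):
       """Hypothetical package illustrating the conventional shared variant."""

       variant('shared', default=True, description='Build shared libraries')

       def cmake_args(self):
           # +shared guarantees shared libraries are built; ~shared
           # guarantees that none are.
           return ['-DBUILD_SHARED_LIBS=%s'
                   % ('ON' if '+shared' in self.spec else 'OFF')]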
^^^^^^^^^^^^^
Version Lists
^^^^^^^^^^^^^
@@ -4121,8 +3658,7 @@ The first step of ``spack install``. Takes a spec and determines the
correct download URL to use for the requested package version, then
downloads the archive, checks it against an MD5 checksum, and stores
it in a staging directory if the check was successful. The staging
directory will be located under the first writable directory in the
``build_stage`` configuration setting.
directory will be located under ``$SPACK_HOME/var/spack``.
When run after the archive has already been downloaded, ``spack
fetch`` is idempotent and will not download the archive again.
@@ -4245,36 +3781,32 @@ Spack provides the ``spack graph`` command for graphing dependencies.
The command by default generates an ASCII rendering of a spec's
dependency graph. For example:
.. command-output:: spack graph hdf5
.. command-output:: spack graph mpileaks
At the top is the root package in the DAG, with dependency edges emerging
from it. On a color terminal, the edges are colored by which dependency
they lead to.
.. command-output:: spack graph --deptype=link hdf5
.. command-output:: spack graph --deptype=all mpileaks
The ``deptype`` argument tells Spack what types of dependencies to graph.
By default it includes link and run dependencies but not build
dependencies. Supplying ``--deptype=link`` will show only link
dependencies. The default is ``--deptype=all``, which is equivalent to
``--deptype=build,link,run,test``. Options for ``deptype`` include:
dependencies. Supplying ``--deptype=all`` will show the build
dependencies as well. This is equivalent to
``--deptype=build,link,run``. Options for ``deptype`` include:
* Any combination of ``build``, ``link``, ``run``, and ``test`` separated
by commas.
* ``all`` for all types of dependencies.
* Any combination of ``build``, ``link``, and ``run`` separated by
commas.
* ``all`` or ``alldeps`` for all types of dependencies.
You can also use ``spack graph`` to generate graphs in the widely used
`Dot <http://www.graphviz.org/doc/info/lang.html>`_ format. For example:
`Dot <http://www.graphviz.org/doc/info/lang.html>`_ format. For
example:
.. command-output:: spack graph --dot hdf5
.. command-output:: spack graph --dot mpileaks
This graph can be provided as input to other graphing tools, such as
those in `Graphviz <http://www.graphviz.org>`_. If you have graphviz
installed, you can write straight to PDF like this:
.. code-block:: console
$ spack graph --dot hdf5 | dot -Tpdf > hdf5.pdf
those in `Graphviz <http://www.graphviz.org>`_.
.. _packaging-shell-support:
@@ -4454,7 +3986,7 @@ translate variant flags into CMake definitions. For example:
.. code-block:: python
def cmake_args(self):
def configure_args(self):
spec = self.spec
return [
'-DUSE_EVERYTRACE=%s' % ('YES' if '+everytrace' in spec else 'NO'),
@@ -4501,8 +4033,3 @@ Autotools-based packages would be easy (and should be done by a
developer who actively uses Autotools). Packages that use
non-standard build systems can gain ``setup`` functionality by
subclassing ``StagedPackage`` directly.
.. Emacs local variables
Local Variables:
fill-column: 79
End:

View File

@@ -1,439 +0,0 @@
.. Copyright 2013-2019 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _pipelines:
=========
Pipelines
=========
Spack provides commands that support generating and running automated build
pipelines designed for Gitlab CI. At the highest level it works like this:
provide a spack environment describing the set of packages you care about,
and include within that environment file a description of how those packages
should be mapped to Gitlab runners. Spack can then generate a ``.gitlab-ci.yml``
file containing job descriptions for all your packages that can be run by a
properly configured Gitlab CI instance. When run, the generated pipeline will
build and deploy binaries, and it can optionally report to a CDash instance
regarding the health of the builds as they evolve over time.
------------------------------
Getting started with pipelines
------------------------------
It is fairly straightforward to get started with automated build pipelines. At
a minimum, you'll need to set up a Gitlab instance (more about Gitlab CI
`here <https://about.gitlab.com/product/continuous-integration/>`_) and configure
at least one `runner <https://docs.gitlab.com/runner/>`_. Then the basic steps
for setting up a build pipeline are as follows:
#. Create a repository on your gitlab instance
#. Add a ``spack.yaml`` at the root containing your pipeline environment (see
below for details)
#. Add a ``.gitlab-ci.yml`` at the root containing a single job, similar to
this one:
.. code-block:: yaml
pipeline-job:
tags:
- <custom-tag>
...
script:
- spack ci start
#. Add any secrets required by the CI process to environment variables using the
CI web ui
#. Push a commit containing the ``spack.yaml`` and ``.gitlab-ci.yml`` mentioned above
to the gitlab repository
The ``<custom-tag>``, above, is used to pick one of your configured runners,
while the use of the ``spack ci start`` command implies that the runner has an
appropriate version of spack installed and configured for use. Of course, there
are myriad ways to customize the process. You can configure CDash reporting
on the progress of your builds, set up S3 buckets to mirror binaries built by
the pipeline, clone a custom spack repository/ref for use by the pipeline, and
more.
While it is possible to set up pipelines on gitlab.com, the builds there are
limited to 60 minutes and generic hardware. It is also possible to
`hook up <https://about.gitlab.com/blog/2018/04/24/getting-started-gitlab-ci-gcp>`_
Gitlab to Google Kubernetes Engine (`GKE <https://cloud.google.com/kubernetes-engine/>`_)
or Amazon Elastic Kubernetes Service (`EKS <https://aws.amazon.com/eks>`_), though those
topics are outside the scope of this document.
-----------------------------------
Spack commands supporting pipelines
-----------------------------------
Spack provides a command ``ci`` with sub-commands for doing various things related
to automated build pipelines. All of the ``spack ci ...`` commands must be run
from within an environment, as each one makes use of the environment for different
purposes. Additionally, some options to the commands (or conditions present in
the spack environment file) may require particular environment variables to be
set in order to function properly. Examples of these are typically secrets
needed for pipeline operation that should not be visible in a spack environment
file. These environment variables are described in more detail in
:ref:`ci_environment_variables`.
.. _cmd_spack_ci:
^^^^^^^^^^^^^^^^^^
``spack ci``
^^^^^^^^^^^^^^^^^^
Super-command for functionality related to generating pipelines and executing
pipeline jobs.
.. _cmd_spack_ci_start:
^^^^^^^^^^^^^^^^^^
``spack ci start``
^^^^^^^^^^^^^^^^^^
Currently this command is a short-cut to first run ``spack ci generate``, followed
by ``spack ci pushyaml``.
.. _cmd_spack_ci_generate:
^^^^^^^^^^^^^^^^^^^^^
``spack ci generate``
^^^^^^^^^^^^^^^^^^^^^
Concretizes the specs in the active environment, stages them (as described in
:ref:`staging_algorithm`), and writes the resulting ``.gitlab-ci.yml`` to disk.
.. _cmd_spack_ci_pushyaml:
^^^^^^^^^^^^^^^^^^^^^
``spack ci pushyaml``
^^^^^^^^^^^^^^^^^^^^^
Generates a commit containing the generated ``.gitlab-ci.yml`` and pushes it to a
``DOWNSTREAM_CI_REPO``, which is frequently the same repository. The branch
created has the same name as the current branch being tested, but has ``multi-ci-``
prepended to the branch name. Once Gitlab CI has full support for dynamically
defined workloads, this command will be deprecated.
.. _cmd_spack_ci_rebuild:
^^^^^^^^^^^^^^^^^^^^
``spack ci rebuild``
^^^^^^^^^^^^^^^^^^^^
This sub-command is responsible for ensuring a single spec from the release
environment is up to date on the remote mirror configured in the environment,
and as such, corresponds to a single job in the ``.gitlab-ci.yml`` file.
------------------------------------
A pipeline-enabled spack environment
------------------------------------
Here's an example of a spack environment file that has been enhanced with
sections describing a build pipeline:
.. code-block:: yaml
spack:
definitions:
- pkgs:
- readline@7.0
- compilers:
- '%gcc@5.5.0'
- oses:
- os=ubuntu18.04
- os=centos7
specs:
- matrix:
- [$pkgs]
- [$compilers]
- [$oses]
mirrors:
cloud_gitlab: https://mirror.spack.io
gitlab-ci:
mappings:
- match:
- os=ubuntu18.04
runner-attributes:
tags:
- spack-k8s
image: spack/spack_builder_ubuntu_18.04
- match:
- os=centos7
runner-attributes:
tags:
- spack-k8s
image: spack/spack_builder_centos_7
cdash:
build-group: Release Testing
url: https://cdash.spack.io
project: Spack
site: Spack AWS Gitlab Instance
Hopefully, the ``definitions``, ``specs``, ``mirrors``, etc. sections are already
familiar, as they are part of spack :ref:`environments`. So let's take a more
in-depth look at some of the pipeline-related sections in that environment file
that might not be as familiar.
The ``gitlab-ci`` section is used to configure how the pipeline workload should be
generated, mainly how the jobs for building specs should be assigned to the
configured runners on your instance. Each entry within the list of ``mappings``
corresponds to a known gitlab runner, where the ``match`` section is used
in assigning a release spec to one of the runners, and the ``runner-attributes``
section is used to configure the spec/job for that particular runner.
There are other pipeline options you can configure within the ``gitlab-ci`` section
as well. The ``bootstrap`` section allows you to specify lists of specs from
your ``definitions`` that should be staged ahead of the environment's ``specs`` (this
section is described in more detail below). The ``enable-artifacts-buildcache`` key
takes a boolean and determines whether the pipeline uses artifacts to store and
pass along the buildcaches from one stage to the next (the default if you don't
provide this option is ``False``). The ``enable-debug-messages`` key takes a boolean
and allows you to choose whether the pipeline build jobs are run as ``spack -d ci rebuild``
or just ``spack ci rebuild`` (the default is not to enable debug messages). The
``final-stage-rebuild-index`` section controls whether an extra job is added to the
end of your pipeline (in a stage by itself) which will regenerate the mirror's
buildcache index. Under normal operation, each pipeline job that rebuilds a package
will re-generate the mirror's buildcache index after the buildcache entry for that
job has been created and pushed to the mirror. Since jobs in the same stage can run in
parallel, there is the possibility that at the end of some stage, the index may not
reflect all the binaries in the buildcache. Adding the ``final-stage-rebuild-index``
section ensures that at the end of the pipeline, the index will be in sync with the
binaries on the mirror. If the mirror lives in an S3 bucket, this job will need to
run on a machine with the Python ``boto3`` module installed, and consequently the
``final-stage-rebuild-index`` needs to specify a list of ``tags`` to pick a runner
satisfying that condition. It can also take an ``image`` key so Docker executor type
runners can pick the right image for the index regeneration job.
The optional ``cdash`` section provides information that will be used by the
``spack ci generate`` command (invoked by ``spack ci start``) for reporting
to CDash. All the jobs generated from this environment will belong to a
"build group" within CDash that can be tracked over time. As the release
progresses, this build group may have jobs added or removed. The url, project,
and site are used to specify the CDash instance to which build results should
be reported.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Assignment of specs to runners
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``mappings`` section corresponds to a list of runners, and during assignment
of specs to runners, the list is traversed in order, looking for matches; the
first runner that matches a release spec is assigned to build that spec. The
``match`` section within each runner mapping section is a list of specs, and
if any of those specs match the release spec (the ``spec.satisfies()`` method
is used), then that runner is considered a match.
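Conceptually, the assignment behaves like the following sketch (illustrative
only, not Spack's actual implementation):

.. code-block:: python

   def assign_runner(release_spec, mappings):
       # Return the runner-attributes of the first mapping containing a
       # spec that the release spec satisfies; None means no match.
       for mapping in mappings:
           if any(release_spec.satisfies(s) for s in mapping['match']):
               return mapping['runner-attributes']
       return None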
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Configuration of specs/jobs for a runner
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Once a runner has been chosen to build a release spec, the ``runner-attributes``
section provides information determining details of the job in the context of
the runner. The ``runner-attributes`` section must have a ``tags`` key, which
is a list containing at least one tag used to select the runner from among the
runners known to the gitlab instance. For Docker executor type runners, the
``image`` key is used to specify the Docker image used to build the release spec
(and could also appear as a dictionary with a ``name`` specifying the image name,
as well as an ``entrypoint`` to override whatever the default for that image is).
For other types of runners the ``variables`` key will be useful to pass any
information on to the runner that it needs to do its work (e.g. scheduler
parameters, etc.).
.. _staging_algorithm:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Summary of ``.gitlab-ci.yml`` generation algorithm
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
All specs yielded by the matrix (or all the specs in the environment) have their
dependencies computed, and the entire resulting set of specs are staged together
before being run through the ``gitlab-ci/mappings`` entries, where each staged
spec is assigned a runner. "Staging" is the name we have given to the process
of figuring out in what order the specs should be built, taking into consideration
Gitlab CI rules about jobs/stages. In the staging process the goal is to maximize
the number of jobs in any stage of the pipeline, while ensuring that the jobs in
any stage only depend on jobs in previous stages (since those jobs are guaranteed
to have completed already). As a runner is determined for a job, the information
in the ``runner-attributes`` is used to populate various parts of the job
description that will be used by Gitlab CI. Once all the jobs have been assigned
a runner, the ``.gitlab-ci.yml`` is written to disk.
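The staging idea itself can be sketched in a few lines of Python (again, an
illustration rather than Spack's actual code):

.. code-block:: python

   def stage_specs(deps):
       """deps maps each spec name to the set of names it depends on."""
       stages, done = [], set()
       remaining = dict(deps)
       while remaining:
           # Everything whose dependencies are already built can be
           # grouped into the current stage, maximizing jobs per stage.
           stage = {s for s, d in remaining.items() if d <= done}
           if not stage:
               raise ValueError('dependency cycle detected')
           stages.append(sorted(stage))
           done |= stage
           for s in stage:
               del remaining[s]
       return stages

   # readline depends on ncurses, which depends on pkgconf:
   stage_specs({'pkgconf': set(), 'ncurses': {'pkgconf'},
                'readline': {'ncurses'}})
   # -> [['pkgconf'], ['ncurses'], ['readline']]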
The short example provided above would result in the ``readline``, ``ncurses``,
and ``pkgconf`` packages getting staged and built on the runner chosen by the
``spack-k8s`` tag. In this example, we assume the runner is a Docker executor
type runner, and thus certain jobs will be run in the ``centos7`` container,
and others in the ``ubuntu-18.04`` container. The resulting ``.gitlab-ci.yml``
will contain six jobs in three stages. Once the jobs have been generated, the
presence of a ``SPACK_CDASH_AUTH_TOKEN`` environment variable during the
``spack ci generate`` command would result in all of the jobs being put in a
build group on CDash called "Release Testing" (that group will be created if
it didn't already exist).
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Optional compiler bootstrapping
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Spack pipelines also have support for bootstrapping compilers on systems that
may not already have the desired compilers installed. The idea here is that
you can specify a list of things to bootstrap in your ``definitions``, and
spack will guarantee those will be installed in a phase of the pipeline before
your release specs, so that you can rely on those packages being available in
the binary mirror when you need them later on in the pipeline. At the moment
the only viable use-case for bootstrapping is to install compilers.
Here's an example of what bootstrapping some compilers might look like:
.. code-block:: yaml
spack:
definitions:
- compiler-pkgs:
- 'llvm+clang@6.0.1 os=centos7'
- 'gcc@6.5.0 os=centos7'
- 'llvm+clang@6.0.1 os=ubuntu18.04'
- 'gcc@6.5.0 os=ubuntu18.04'
- pkgs:
- readline@7.0
- compilers:
- '%gcc@5.5.0'
- '%gcc@6.5.0'
- '%gcc@7.3.0'
- '%clang@6.0.0'
- '%clang@6.0.1'
- oses:
- os=ubuntu18.04
- os=centos7
specs:
- matrix:
- [$pkgs]
- [$compilers]
- [$oses]
exclude:
- '%gcc@7.3.0 os=centos7'
- '%gcc@5.5.0 os=ubuntu18.04'
gitlab-ci:
bootstrap:
- name: compiler-pkgs
compiler-agnostic: true
mappings:
# mappings similar to the example higher up in this description
...
In the example above, we have added a list to the ``definitions`` called
``compiler-pkgs`` (you can add any number of these), which lists compiler packages
we want to be staged ahead of the full matrix of release specs (which consists
only of readline in our example). Then within the ``gitlab-ci`` section, we
have added a ``bootstrap`` section, which can contain a list of items, each
referring to a list in the ``definitions`` section. These items can either
be a dictionary or a string. If you supply a dictionary, it must have a ``name``
key whose value matches one of the lists in ``definitions``, and it can have a
``compiler-agnostic`` key whose value is a boolean. If you supply a string,
then it needs to match one of the lists provided in ``definitions``. You can
think of the bootstrap list as an ordered list of pipeline "phases" that will
be staged before your actual release specs. While this introduces an extra
bottleneck in the pipeline (all jobs in all stages of one phase must
complete before any jobs in the next phase can begin), it also means you are
guaranteed your bootstrapped compilers will be available when you need them.
The ``compiler-agnostic`` key can be provided with each item in the
bootstrap list. It tells the ``spack ci generate`` command that any jobs staged
from that particular list should have the compiler removed from the spec, so
that any compiler available on the runner where the job is run can be used to
build the package.
When including a bootstrapping phase as in the example above, the result is that
the bootstrapped compiler packages will be pushed to the binary mirror (and the
local artifacts mirror) before the actual release specs are built. In this case,
the jobs corresponding to subsequent release specs are configured to
``install_missing_compilers``, so that if spack is asked to install a package
with a compiler it doesn't know about, it can be quickly installed from the
binary mirror first.
Since bootstrapping compilers is optional, those items can be left out of the
environment/stack file, and in that case no bootstrapping will be done (only the
specs will be staged for building) and the runners will be expected to already
have all needed compilers installed and configured for spack to use.
-------------------------------------
Using a custom spack in your pipeline
-------------------------------------
If your runners will not have a version of spack ready to invoke, or if for some
other reason you want to use a custom version of spack to run your pipelines,
this can be accomplished fairly simply. First, create CI environment variables
containing the url and branch/tag you want to clone (calling them, for example,
``SPACK_REPO`` and ``SPACK_REF``), use them to clone spack in your pre-ci
``before_script``, and finally pass those same values along to the workload
generation process via the ``--spack-repo`` and ``--spack-ref`` command-line arguments. Here's
an example:
.. code-block:: yaml
pipeline-job:
tags:
- <some-other-tag>
before_script:
- git clone ${SPACK_REPO} --branch ${SPACK_REF}
- . ./spack/share/spack/setup-env.sh
script:
- spack ci start --spack-repo ${SPACK_REPO} --spack-ref ${SPACK_REF} <...args>
after_script:
- rm -rf ./spack
If the ``spack ci start`` command receives those extra command line arguments,
then it adds similar ``before_script`` and ``after_script`` sections for each of
the ``spack ci rebuild`` jobs it generates (cloning and sourcing a custom
spack in the ``before_script`` and removing it again in the ``after_script``).
This gives you control over the version of spack used when the rebuild jobs
are actually run on the gitlab runner.
.. _ci_environment_variables:
--------------------------------------------------
Environment variables affecting pipeline operation
--------------------------------------------------
Certain secrets and some other information should be provided to the pipeline
infrastructure via environment variables, usually for reasons of security, but
in some cases to support other pipeline use cases such as PR testing. The
environment variables used by the pipeline infrastructure are described here.
^^^^^^^^^^^^^^^^^
AWS_ACCESS_KEY_ID
^^^^^^^^^^^^^^^^^
Needed when binary mirror is an S3 bucket.
^^^^^^^^^^^^^^^^^^^^^
AWS_SECRET_ACCESS_KEY
^^^^^^^^^^^^^^^^^^^^^
Needed when binary mirror is an S3 bucket.
^^^^^^^^^^^^^^^
S3_ENDPOINT_URL
^^^^^^^^^^^^^^^
Needed when binary mirror is an S3 bucket that is *not* on AWS.
^^^^^^^^^^^^^^^^^
CDASH_AUTH_TOKEN
^^^^^^^^^^^^^^^^^
Needed in order to report build groups to CDash.
^^^^^^^^^^^^^^^^^
SPACK_SIGNING_KEY
^^^^^^^^^^^^^^^^^
Needed to sign/verify binary packages from the remote binary mirror.
^^^^^^^^^^^^^^^^^^
DOWNSTREAM_CI_REPO
^^^^^^^^^^^^^^^^^^
Needed until Gitlab CI supports dynamic job generation. Can contain connection
credentials, and could be the same repository or a different one.

View File

@@ -1,4 +1,4 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,7 +1,6 @@
# These dependencies should be installed using pip in order
# to build the documentation.
sphinx
sphinx==1.7.0
sphinxcontrib-programoutput
sphinx-rtd-theme
python-levenshtein

View File

@@ -1,19 +0,0 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
#
# These are requirements for building the documentation. You can run
# these commands in this directory to install Sphinx and its plugins,
# then build the docs:
#
# spack install
# spack env activate .
# make
#
spack:
specs:
- py-sphinx
- py-sphinxcontrib-programoutput
- py-sphinx-rtd-theme

View File

@@ -0,0 +1,60 @@
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _spack-101:
=============================
Tutorial: Spack 101
=============================
This is a full-day introduction to Spack with lectures and live demos. It
was presented as a tutorial at `Supercomputing 2018
<http://sc18.supercomputing.org>`_. You can use these materials to teach
a course on Spack at your own site, or you can just skip ahead and read
the live demo scripts to see how Spack is used in practice.
.. _sc16-slides:
.. rubric:: Slides
.. figure:: tutorial/sc16-tutorial-slide-preview.png
:target: http://spack.io/slides/Spack-SC18-Tutorial.pdf
:height: 72px
:align: left
:alt: Slide Preview
`Download Slides <http://spack.io/slides/Spack-SC18-Tutorial.pdf>`_.
**Full citation:** Todd Gamblin, Gregory Becker, Massimiliano Culpo, Matt
Legendre, Mario Melara, Peter Scheibel, and Adam Stewart.
`Managing HPC Software Complexity with Spack
<https://sc18.supercomputing.org/presentation/?id=tut165&sess=sess252>`_.
Tutorial presented at Supercomputing 2018. November 12, 2018, Dallas, TX, USA.
.. _sc16-live-demos:
.. rubric:: Live Demos
These scripts will take you step-by-step through basic Spack tasks. They
correspond to sections in the slides above.
1. :ref:`basics-tutorial`
2. :ref:`configs-tutorial`
3. :ref:`packaging-tutorial`
4. :ref:`environments-tutorial`
5. :ref:`modules-tutorial`
6. :ref:`build-systems-tutorial`
7. :ref:`advanced-packaging-tutorial`
Full contents:
.. toctree::
tutorial_basics
tutorial_configuration
tutorial_packaging
tutorial_environments
tutorial_modules
tutorial_buildsystems
tutorial_advanced_packaging

View File

@@ -0,0 +1,39 @@
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
#
# This is a template package file for Spack. We've put "FIXME"
# next to all the things you'll want to change. Once you've handled
# them, you can save this file and test your package like this:
#
# spack install mpileaks
#
# You can edit this file again by typing:
#
# spack edit mpileaks
#
# See the Spack documentation for more information on packaging.
# If you submit this package back to Spack as a pull request,
# please first remove this boilerplate and all FIXME comments.
#
from spack import *
class Mpileaks(Package):
"""FIXME: Put a proper description of your package here."""
# FIXME: Add a proper url for your package's homepage here.
homepage = "http://www.example.com"
url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz"
version('1.0', '8838c574b39202a57d7c2d68692718aa')
# FIXME: Add dependencies if required.
# depends_on('foo')
def install(self, spec, prefix):
# FIXME: Unknown build system
make()
make('install')

View File

@@ -0,0 +1,23 @@
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Mpileaks(Package):
"""Tool to detect and report MPI objects like MPI_Requests and
MPI_Datatypes."""
homepage = "https://github.com/hpc/mpileaks"
url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz" # NOQA
version('1.0', '8838c574b39202a57d7c2d68692718aa')
# FIXME: Add dependencies if required.
# depends_on('foo')
def install(self, spec, prefix):
# FIXME: Unknown build system
make()
make('install')

View File

@@ -0,0 +1,25 @@
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Mpileaks(Package):
"""Tool to detect and report MPI objects like MPI_Requests and
MPI_Datatypes."""
homepage = "https://github.com/hpc/mpileaks"
url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz"
version('1.0', '8838c574b39202a57d7c2d68692718aa')
depends_on('mpi')
depends_on('adept-utils')
depends_on('callpath')
def install(self, spec, prefix):
# FIXME: Unknown build system
make()
make('install')

View File

@@ -0,0 +1,25 @@
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Mpileaks(Package):
"""Tool to detect and report MPI objects like MPI_Requests and
MPI_Datatypes."""
homepage = "https://github.com/hpc/mpileaks"
url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz"
version('1.0', '8838c574b39202a57d7c2d68692718aa')
depends_on('mpi')
depends_on('adept-utils')
depends_on('callpath')
def install(self, spec, prefix):
configure()
make()
make('install')

View File

@@ -0,0 +1,27 @@
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Mpileaks(Package):
"""Tool to detect and report MPI objects like MPI_Requests and
MPI_Datatypes."""
homepage = "https://github.com/hpc/mpileaks"
url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz"
version('1.0', '8838c574b39202a57d7c2d68692718aa')
depends_on('mpi')
depends_on('adept-utils')
depends_on('callpath')
def install(self, spec, prefix):
configure('--with-adept-utils=%s' % self.spec['adept-utils'].prefix,
'--with-callpath=%s' % self.spec['callpath'].prefix,
'--prefix=%s' % self.spec.prefix)
make()
make('install')

View File

@@ -0,0 +1,34 @@
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Mpileaks(Package):
"""Tool to detect and report MPI objects like MPI_Requests and
MPI_Datatypes."""
homepage = "https://github.com/hpc/mpileaks"
url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz"
version('1.0', '8838c574b39202a57d7c2d68692718aa')
variant('stackstart', values=int, default=0, description='Specify the number of stack frames to truncate.')
depends_on('mpi')
depends_on('adept-utils')
depends_on('callpath')
def install(self, spec, prefix):
stackstart = int(self.spec.variants['stackstart'].value)
confargs = ['--with-adept-utils=%s' % self.spec['adept-utils'].prefix,
'--with-callpath=%s' % self.spec['callpath'].prefix,
'--prefix=%s' % self.spec.prefix]
if stackstart:
confargs.extend(['--with-stack-start-c=%s' % stackstart,
'--with-stack-start-fortran=%s' % stackstart])
configure(*confargs)
make()
make('install')

View File

@@ -0,0 +1,27 @@
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Mpileaks(AutotoolsPackage):
"""Tool to detect and report leaked MPI objects like MPI_Requests and
MPI_Datatypes."""
homepage = "https://github.com/hpc/mpileaks"
url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz"
version('1.0', '8838c574b39202a57d7c2d68692718aa')
depends_on("mpi")
depends_on("adept-utils")
depends_on("callpath")
def install(self, spec, prefix):
configure("--prefix=" + prefix,
"--with-adept-utils=" + spec['adept-utils'].prefix,
"--with-callpath=" + spec['callpath'].prefix)
make()
make("install")

View File

@@ -0,0 +1,32 @@
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Mpileaks(AutotoolsPackage):
"""Tool to detect and report leaked MPI objects like MPI_Requests and
MPI_Datatypes."""
homepage = "https://github.com/hpc/mpileaks"
url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz"
version('1.0', '8838c574b39202a57d7c2d68692718aa')
variant("stackstart", values=int, default=0,
description="Specify the number of stack frames to truncate")
depends_on("mpi")
depends_on("adept-utils")
depends_on("callpath")
def configure_args(self):
stackstart = int(self.spec.variants['stackstart'].value)
args = ["--with-adept-utils=" + spec['adept-utils'].prefix,
"--with-callpath=" + spec['callpath'].prefix]
if stackstart:
args.extend(['--with-stack-start-c=%s' % stackstart,
'--with-stack-start-fortran=%s' % stackstart])
return args

View File

@@ -0,0 +1,41 @@
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
#
# This is a template package file for Spack. We've put "FIXME"
# next to all the things you'll want to change. Once you've handled
# them, you can save this file and test your package like this:
#
# spack install callpath
#
# You can edit this file again by typing:
#
# spack edit callpath
#
# See the Spack documentation for more information on packaging.
# If you submit this package back to Spack as a pull request,
# please first remove this boilerplate and all FIXME comments.
#
from spack import *
class Callpath(CMakePackage):
"""FIXME: Put a proper description of your package here."""
# FIXME: Add a proper url for your package's homepage here.
homepage = "http://www.example.com"
url = "https://github.com/llnl/callpath/archive/v1.0.1.tar.gz"
version('1.0.3', 'c89089b3f1c1ba47b09b8508a574294a')
# FIXME: Add dependencies if required.
# depends_on('foo')
def cmake_args(self):
# FIXME: Add arguments other than
# FIXME: CMAKE_INSTALL_PREFIX and CMAKE_BUILD_TYPE
# FIXME: If not needed delete this function
args = []
return args

View File

@@ -0,0 +1,23 @@
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Callpath(CMakePackage):
"""Library for representing callpaths consistently in
distributed-memory performance tools."""
homepage = "https://github.com/llnl/callpath"
url = "https://github.com/llnl/callpath/archive/v1.0.3.tar.gz"
version('1.0.3', 'c89089b3f1c1ba47b09b8508a574294a')
depends_on("elf", type="link")
depends_on("libdwarf")
depends_on("dyninst")
depends_on("adept-utils")
depends_on("mpi")
depends_on("cmake@2.8:", type="build")

View File

@@ -0,0 +1,33 @@
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Callpath(CMakePackage):
"""Library for representing callpaths consistently in
distributed-memory performance tools."""
homepage = "https://github.com/llnl/callpath"
url = "https://github.com/llnl/callpath/archive/v1.0.3.tar.gz"
version('1.0.3', 'c89089b3f1c1ba47b09b8508a574294a')
depends_on("elf", type="link")
depends_on("libdwarf")
depends_on("dyninst")
depends_on("adept-utils")
depends_on("mpi")
depends_on("cmake@2.8:", type="build")
def cmake_args(self):
args = ["-DCALLPATH_WALKER=dyninst"]
if self.spec.satisfies("^dyninst@9.3.0:"):
            std_flag = self.compiler.cxx11_flag
            args.append("-DCMAKE_CXX_FLAGS='{0} -fpermissive'".format(
                std_flag))
return args

View File

@@ -0,0 +1,26 @@
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Bowtie(MakefilePackage):
"""FIXME: Put a proper description of your package here."""
# FIXME: Add a proper url for your package's homepage here.
homepage = "http://www.example.com"
url = "https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip"
version('1.2.1.1', 'ec06265730c5f587cd58bcfef6697ddf')
# FIXME: Add dependencies if required.
# depends_on('foo')
def edit(self, spec, prefix):
# FIXME: Edit the Makefile if necessary
# FIXME: If not needed delete this function
# makefile = FileFilter('Makefile')
# makefile.filter('CC = .*', 'CC = cc')
return

View File

@@ -0,0 +1,27 @@
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Bowtie(MakefilePackage):
"""Bowtie is an ultrafast, memory efficient short read aligner
for short DNA sequences (reads) from next-gen sequencers."""
homepage = "https://sourceforge.net/projects/bowtie-bio/"
url = "https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip"
version('1.2.1.1', 'ec06265730c5f587cd58bcfef6697ddf')
variant("tbb", default=False, description="Use Intel thread building block")
depends_on("tbb", when="+tbb")
def edit(self, spec, prefix):
# FIXME: Edit the Makefile if necessary
# FIXME: If not needed delete this function
# makefile = FileFilter('Makefile')
# makefile.filter('CC = .*', 'CC = cc')
return

View File

@@ -0,0 +1,25 @@
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Bowtie(MakefilePackage):
"""Bowtie is an ultrafast, memory efficient short read aligner
for short DNA sequences (reads) from next-gen sequencers."""
homepage = "https://sourceforge.net/projects/bowtie-bio/"
url = "https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip"
version('1.2.1.1', 'ec06265730c5f587cd58bcfef6697ddf')
variant("tbb", default=False, description="Use Intel thread building block")
depends_on("tbb", when="+tbb")
def edit(self, spec, prefix):
makefile = FileFilter("Makefile")
        makefile.filter('CC = .*', 'CC = ' + env['CC'])
makefile.filter('CXX = .*', 'CXX = ' + env['CXX'])

View File

@@ -0,0 +1,36 @@
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Bowtie(MakefilePackage):
"""Bowtie is an ultrafast, memory efficient short read aligner
for short DNA sequences (reads) from next-gen sequencers."""
homepage = "https://sourceforge.net/projects/bowtie-bio/"
url = "https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip"
version('1.2.1.1', 'ec06265730c5f587cd58bcfef6697ddf')
variant("tbb", default=False, description="Use Intel thread building block")
depends_on("tbb", when="+tbb")
def edit(self, spec, prefix):
makefile = FileFilter("Makefile")
        makefile.filter('CC = .*', 'CC = ' + env['CC'])
makefile.filter('CXX = .*', 'CXX = ' + env['CXX'])
@property
def build_targets(self):
if "+tbb" in spec:
return []
else:
return ["NO_TBB=1"]
@property
def install_targets(self):
return ['prefix={0}'.format(self.prefix), 'install']

View File

@@ -0,0 +1,41 @@
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
#
# This is a template package file for Spack. We've put "FIXME"
# next to all the things you'll want to change. Once you've handled
# them, you can save this file and test your package like this:
#
# spack install py-pandas
#
# You can edit this file again by typing:
#
# spack edit py-pandas
#
# See the Spack documentation for more information on packaging.
# If you submit this package back to Spack as a pull request,
# please first remove this boilerplate and all FIXME comments.
#
from spack import *
class PyPandas(PythonPackage):
"""FIXME: Put a proper description of your package here."""
# FIXME: Add a proper url for your package's homepage here.
homepage = "http://www.example.com"
url = "https://pypi.io/packages/source/p/pandas/pandas-0.19.0.tar.gz"
version('0.19.0', 'bc9bb7188e510b5d44fbdd249698a2c3')
# FIXME: Add dependencies if required.
# depends_on('py-setuptools', type='build')
# depends_on('py-foo', type=('build', 'run'))
def build_args(self, spec, prefix):
# FIXME: Add arguments other than --prefix
# FIXME: If not needed delete this function
args = []
return args

View File

@@ -0,0 +1,32 @@
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class PyPandas(PythonPackage):
"""pandas is a Python package providing fast, flexible, and expressive
data structures designed to make working with relational or
labeled data both easy and intuitive. It aims to be the
fundamental high-level building block for doing practical, real
world data analysis in Python. Additionally, it has the broader
goal of becoming the most powerful and flexible open source data
analysis / manipulation tool available in any language.
"""
homepage = "http://pandas.pydata.org/"
url = "https://pypi.io/packages/source/p/pandas/pandas-0.19.0.tar.gz"
version('0.19.0', 'bc9bb7188e510b5d44fbdd249698a2c3')
version('0.18.0', 'f143762cd7a59815e348adf4308d2cf6')
version('0.16.1', 'fac4f25748f9610a3e00e765474bdea8')
version('0.16.0', 'bfe311f05dc0c351f8955fbd1e296e73')
depends_on('py-dateutil', type=('build', 'run'))
depends_on('py-numpy', type=('build', 'run'))
depends_on('py-setuptools', type='build')
depends_on('py-cython', type='build')
depends_on('py-pytz', type=('build', 'run'))
depends_on('py-numexpr', type=('build', 'run'))
depends_on('py-bottleneck', type=('build', 'run'))

Binary image file added (70 KiB), not shown.

View File

@@ -0,0 +1,505 @@
.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _advanced-packaging-tutorial:
============================
Advanced Topics in Packaging
============================
Spack tries to automatically configure packages with information from
dependencies such that all you need to do is to list the dependencies
(i.e., with the ``depends_on`` directive) and the build system (for example
by deriving from :code:`CMakePackage`).
However, there are many special cases. Often you need to retrieve details
about dependencies to set package-specific configuration options, or to
define package-specific environment variables used by the package's build
system. This tutorial covers how to retrieve build information from
dependencies, and how you can automatically provide important information to
dependents in your package.
----------------------
Setup for the tutorial
----------------------
.. note::
If you are not using the tutorial docker image, it is recommended that you
do this section of the tutorial in a fresh clone of Spack.
The tutorial uses custom package definitions with missing sections that
will be filled in during the tutorial. These package definitions are stored
in a separate package repository, which can be enabled with:
.. code-block:: console
$ spack repo add --scope=site var/spack/repos/tutorial
This section of the tutorial may also require a newer version of gcc, which
you can add with:
.. code-block:: console
$ spack install gcc@7.2.0
$ spack compiler add --scope=site path/to/spack-installed-gcc/bin
If you are using the tutorial docker image, all dependency packages
will have been installed. Otherwise, to install these packages you can use
the following commands:
.. code-block:: console
$ spack install openblas
$ spack install netlib-lapack
$ spack install mpich
Now, you are ready to set your preferred ``EDITOR`` and continue with
the rest of the tutorial.
.. note::
Several of these packages depend on an MPI implementation. You can use
OpenMPI if you install it from scratch, but this is slow (>10 min.).
A binary cache of MPICH may be provided, in which case you can force
the package to use it and install quickly. All tutorial examples with
packages that depend on MPICH include the spec syntax for building with it.
.. _adv_pkg_tutorial_start:
---------------------------------------
Modifying a package's build environment
---------------------------------------
Spack sets up several environment variables like ``PATH`` by default to aid in
building a package, but many packages make use of environment variables which
convey specific information about their dependencies (e.g., ``MPICC``).
This section covers how to update your Spack packages so that package-specific
environment variables are defined at build-time.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Set environment variables in dependent packages at build-time
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Dependencies can set environment variables that are required when their
dependents build. For example, when a package depends on a python extension
like ``py-numpy``, Spack's ``python`` package will add it to ``PYTHONPATH``
so it is available at build time; this is required because the default setup
that spack does is not sufficient for python to import modules.
To provide environment setup for a dependent, a package can implement the
:py:func:`setup_dependent_environment <spack.package.PackageBase.setup_dependent_environment>`
function. This function takes as a parameter a :py:class:`EnvironmentModifications <spack.util.environment.EnvironmentModifications>`
object which includes convenience methods to update the environment. For
example, an MPI implementation can set ``MPICC`` for packages that depend on it:
.. code-block:: python
def setup_dependent_environment(self, spack_env, run_env, dependent_spec):
spack_env.set('MPICC', join_path(self.prefix.bin, 'mpicc'))
In this case packages that depend on ``mpi`` will have ``MPICC`` defined in
their environment when they build. This section is focused on modifying the
build-time environment represented by ``spack_env``, but it's worth noting that
modifications to ``run_env`` are included in Spack's automatically-generated
module files.
We can practice by editing the ``mpich`` package to set the ``MPICC``
environment variable in the build-time environment of dependent packages.
.. code-block:: console
root@advanced-packaging-tutorial:/# spack edit mpich
Once you're finished, the method should look like this:
.. code-block:: python
def setup_dependent_environment(self, spack_env, run_env, dependent_spec):
spack_env.set('MPICC', join_path(self.prefix.bin, 'mpicc'))
spack_env.set('MPICXX', join_path(self.prefix.bin, 'mpic++'))
spack_env.set('MPIF77', join_path(self.prefix.bin, 'mpif77'))
spack_env.set('MPIF90', join_path(self.prefix.bin, 'mpif90'))
spack_env.set('MPICH_CC', spack_cc)
spack_env.set('MPICH_CXX', spack_cxx)
spack_env.set('MPICH_F77', spack_f77)
spack_env.set('MPICH_F90', spack_fc)
spack_env.set('MPICH_FC', spack_fc)
At this point we can, for instance, install ``netlib-scalapack`` with
``mpich``:
.. code-block:: console
root@advanced-packaging-tutorial:/# spack install netlib-scalapack ^mpich
...
==> Created stage in /usr/local/var/spack/stage/netlib-scalapack-2.0.2-km7tsbgoyyywonyejkjoojskhc5knz3z
==> No patches needed for netlib-scalapack
==> Building netlib-scalapack [CMakePackage]
==> Executing phase: 'cmake'
==> Executing phase: 'build'
==> Executing phase: 'install'
==> Successfully installed netlib-scalapack
Fetch: 0.01s. Build: 3m 59.86s. Total: 3m 59.87s.
[+] /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netlib-scalapack-2.0.2-km7tsbgoyyywonyejkjoojskhc5knz3z
and double check the environment logs to verify that every variable was
set to the correct value.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Set environment variables in your own package
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Packages can modify their own build-time environment by implementing the
:py:func:`setup_environment <spack.package.PackageBase.setup_environment>` function.
For ``qt`` this looks like:
.. code-block:: python
def setup_environment(self, spack_env, run_env):
spack_env.set('MAKEFLAGS', '-j{0}'.format(make_jobs))
run_env.set('QTDIR', self.prefix)
When ``qt`` builds, ``MAKEFLAGS`` will be defined in the environment.
To contrast with ``qt``'s :py:func:`setup_dependent_environment <spack.package.PackageBase.setup_dependent_environment>`
function:
.. code-block:: python
def setup_dependent_environment(self, spack_env, run_env, dependent_spec):
spack_env.set('QTDIR', self.prefix)
Let's see how it works by completing the ``elpa`` package:
.. code-block:: console
root@advanced-packaging-tutorial:/# spack edit elpa
In the end your method should look like:
.. code-block:: python
def setup_environment(self, spack_env, run_env):
spec = self.spec
spack_env.set('CC', spec['mpi'].mpicc)
spack_env.set('FC', spec['mpi'].mpifc)
spack_env.set('CXX', spec['mpi'].mpicxx)
spack_env.set('SCALAPACK_LDFLAGS', spec['scalapack'].libs.joined())
spack_env.append_flags('LDFLAGS', spec['lapack'].libs.search_flags)
spack_env.append_flags('LIBS', spec['lapack'].libs.link_flags)
At this point it's possible to proceed with the installation of ``elpa ^mpich``.
------------------------------
Retrieving library information
------------------------------
Although Spack attempts to help packages locate their dependency libraries
automatically (e.g. by setting ``PKG_CONFIG_PATH`` and ``CMAKE_PREFIX_PATH``),
a package may have unique configuration options that are required to locate
libraries. When a package needs information about dependency libraries, the
general approach in Spack is to query the dependencies for the locations of
their libraries and set configuration options accordingly. By default most
Spack packages know how to automatically locate their libraries. This section
covers how to retrieve library information from dependencies and how to locate
libraries when the default logic doesn't work.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Accessing dependency libraries
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you need to access the libraries of a dependency, you can do so
via the ``libs`` property of the spec, for example in the ``arpack-ng``
package:
.. code-block:: python
def install(self, spec, prefix):
lapack_libs = spec['lapack'].libs.joined(';')
blas_libs = spec['blas'].libs.joined(';')
cmake(*[
'-DLAPACK_LIBRARIES={0}'.format(lapack_libs),
'-DBLAS_LIBRARIES={0}'.format(blas_libs)
], '.')
Note that ``arpack-ng`` is querying virtual dependencies, which Spack
automatically resolves to the installed implementation (e.g. ``openblas``
for ``blas``).
We've started work on a package for ``armadillo``. You should open it,
read through the comment that starts with ``# TUTORIAL:`` and complete
the ``cmake_args`` section:
.. code-block:: console
root@advanced-packaging-tutorial:/# spack edit armadillo
If you followed the instructions in the package, when you are finished your
``cmake_args`` method should look like:
.. code-block:: python
def cmake_args(self):
spec = self.spec
return [
# ARPACK support
'-DARPACK_LIBRARY={0}'.format(spec['arpack-ng'].libs.joined(";")),
# BLAS support
'-DBLAS_LIBRARY={0}'.format(spec['blas'].libs.joined(";")),
# LAPACK support
'-DLAPACK_LIBRARY={0}'.format(spec['lapack'].libs.joined(";")),
# SuperLU support
'-DSuperLU_INCLUDE_DIR={0}'.format(spec['superlu'].prefix.include),
'-DSuperLU_LIBRARY={0}'.format(spec['superlu'].libs.joined(";")),
# HDF5 support
'-DDETECT_HDF5={0}'.format('ON' if '+hdf5' in spec else 'OFF')
]
As you can see, getting the list of libraries that your dependencies provide
is as easy as accessing their ``libs`` attribute. Furthermore, the interface
remains the same whether you are querying regular or virtual dependencies.
At this point you can complete the installation of ``armadillo`` using ``openblas``
as a LAPACK provider (``armadillo ^openblas ^mpich``):
.. code-block:: console
root@advanced-packaging-tutorial:/# spack install armadillo ^openblas ^mpich
==> pkg-config is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/pkg-config-0.29.2-ae2hwm7q57byfbxtymts55xppqwk7ecj
...
==> superlu is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/superlu-5.2.1-q2mbtw2wo4kpzis2e2n227ip2fquxrno
==> Installing armadillo
==> Using cached archive: /usr/local/var/spack/cache/armadillo/armadillo-8.100.1.tar.xz
==> Staging archive: /usr/local/var/spack/stage/armadillo-8.100.1-n2eojtazxbku6g4l5izucwwgnpwz77r4/armadillo-8.100.1.tar.xz
==> Created stage in /usr/local/var/spack/stage/armadillo-8.100.1-n2eojtazxbku6g4l5izucwwgnpwz77r4
==> Applied patch undef_linux.patch
==> Building armadillo [CMakePackage]
==> Executing phase: 'cmake'
==> Executing phase: 'build'
==> Executing phase: 'install'
==> Successfully installed armadillo
Fetch: 0.01s. Build: 3.96s. Total: 3.98s.
[+] /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/armadillo-8.100.1-n2eojtazxbku6g4l5izucwwgnpwz77r4
Hopefully the installation went fine and the code we added expanded to the right list
of semicolon-separated libraries (you are encouraged to open ``armadillo``'s
build logs to double check).
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Providing libraries to dependents
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Spack provides a default implementation for ``libs`` which often works
out of the box. A user can write a package definition without having to
implement a ``libs`` property and dependents can retrieve its libraries
as shown in the above section. However, the default implementation assumes that
libraries follow the naming scheme ``lib<package name>.so`` (or e.g.
``lib<package name>.a`` for static libraries). Packages which don't
follow this naming scheme must implement this function themselves, e.g.
``opencv``:
.. code-block:: python
@property
def libs(self):
shared = "+shared" in self.spec
return find_libraries(
"libopencv_*", root=self.prefix, shared=shared, recurse=True
)
This issue is common for packages which implement an interface (i.e.
virtual package providers in Spack). If we try to build another version of
``armadillo`` tied to ``netlib-lapack`` (``armadillo ^netlib-lapack ^mpich``)
we'll notice that this time the installation won't complete:
.. code-block:: console
root@advanced-packaging-tutorial:/# spack install armadillo ^netlib-lapack ^mpich
==> pkg-config is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/pkg-config-0.29.2-ae2hwm7q57byfbxtymts55xppqwk7ecj
...
==> openmpi is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.0.0-yo5qkfvumpmgmvlbalqcadu46j5bd52f
==> Installing arpack-ng
==> Using cached archive: /usr/local/var/spack/cache/arpack-ng/arpack-ng-3.5.0.tar.gz
==> Already staged arpack-ng-3.5.0-bloz7cqirpdxj33pg7uj32zs5likz2un in /usr/local/var/spack/stage/arpack-ng-3.5.0-bloz7cqirpdxj33pg7uj32zs5likz2un
==> No patches needed for arpack-ng
==> Building arpack-ng [Package]
==> Executing phase: 'install'
==> Error: RuntimeError: Unable to recursively locate netlib-lapack libraries in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netlib-lapack-3.6.1-jjfe23wgt7nkjnp2adeklhseg3ftpx6z
RuntimeError: RuntimeError: Unable to recursively locate netlib-lapack libraries in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netlib-lapack-3.6.1-jjfe23wgt7nkjnp2adeklhseg3ftpx6z
/usr/local/var/spack/repos/builtin/packages/arpack-ng/package.py:105, in install:
5 options.append('-DCMAKE_INSTALL_NAME_DIR:PATH=%s/lib' % prefix)
6
7 # Make sure we use Spack's blas/lapack:
>> 8 lapack_libs = spec['lapack'].libs.joined(';')
9 blas_libs = spec['blas'].libs.joined(';')
10
11 options.extend([
See build log for details:
/usr/local/var/spack/stage/arpack-ng-3.5.0-bloz7cqirpdxj33pg7uj32zs5likz2un/arpack-ng-3.5.0/spack-build.out
Unlike ``openblas`` which provides a library named ``libopenblas.so``,
``netlib-lapack`` provides ``liblapack.so``, so it needs to implement
customized library search logic. Let's edit it:
.. code-block:: console
root@advanced-packaging-tutorial:/# spack edit netlib-lapack
and follow the instructions in the ``# TUTORIAL:`` comment as before.
What we need to implement is:
.. code-block:: python
@property
def lapack_libs(self):
shared = True if '+shared' in self.spec else False
return find_libraries(
'liblapack', root=self.prefix, shared=shared, recursive=True
)
i.e., a property that returns the correct list of libraries for the LAPACK interface.
We use the name ``lapack_libs`` rather than ``libs`` because
``netlib-lapack`` can also provide ``blas``, and when it does it is provided
as a separate library file. Using this name ensures that when
dependents ask for ``lapack`` libraries, ``netlib-lapack`` will retrieve only
the libraries associated with the ``lapack`` interface. Now we can finally
install ``armadillo ^netlib-lapack ^mpich``:
.. code-block:: console
root@advanced-packaging-tutorial:/# spack install armadillo ^netlib-lapack ^mpich
...
==> Building armadillo [CMakePackage]
==> Executing phase: 'cmake'
==> Executing phase: 'build'
==> Executing phase: 'install'
==> Successfully installed armadillo
Fetch: 0.01s. Build: 3.75s. Total: 3.76s.
[+] /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/armadillo-8.100.1-sxmpu5an4dshnhickh6ykchyfda7jpyn
Since each implementation of a virtual package is responsible for locating the
libraries associated with the interfaces it provides, dependents do not need
to include special-case logic for different implementations and for example
need only ask for :code:`spec['blas'].libs`.
----------------------
Other Packaging Topics
----------------------
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Attach attributes to other packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Build tools usually also provide a set of executables that can be used
when another package is being installed. Spack gives you the opportunity
to monkey-patch dependent modules and attach attributes to them. This
helps make the packaging experience as similar as possible to what would
have been a manual installation of the same package.
An example here is the ``automake`` package, which overrides
:py:func:`setup_dependent_package <spack.package.PackageBase.setup_dependent_package>`:
.. code-block:: python
def setup_dependent_package(self, module, dependent_spec):
    # Automake is very likely to be a build dependency,
    # so we add the tools it provides to the dependent module
    executables = ['aclocal', 'automake']
    for name in executables:
        setattr(module, name, self._make_executable(name))
so that every other package that depends on it can directly use ``aclocal``
and ``automake`` with the usual function call syntax of :py:class:`Executable <spack.util.executable.Executable>`:
.. code-block:: python
aclocal('--force')
^^^^^^^^^^^^^^^^^^^^^^^
Extra query parameters
^^^^^^^^^^^^^^^^^^^^^^^
An advanced feature of the Spec's build-interface protocol is the support
for extra parameters after the subscript key. In fact, any of the keys used in the query
can be followed by a comma-separated list of extra parameters which can be
inspected by the package receiving the request to fine-tune a response.
Let's look at an example and try to install ``netcdf ^mpich``:
.. code-block:: console
root@advanced-packaging-tutorial:/# spack install netcdf ^mpich
==> libsigsegv is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libsigsegv-2.11-fypapcprssrj3nstp6njprskeyynsgaz
==> m4 is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/m4-1.4.18-r5envx3kqctwwflhd4qax4ahqtt6x43a
...
==> Error: AttributeError: 'list' object has no attribute 'search_flags'
AttributeError: AttributeError: 'list' object has no attribute 'search_flags'
/usr/local/var/spack/repos/builtin/packages/netcdf/package.py:207, in configure_args:
50 # used instead.
51 hdf5_hl = self.spec['hdf5:hl']
52 CPPFLAGS.append(hdf5_hl.headers.cpp_flags)
>> 53 LDFLAGS.append(hdf5_hl.libs.search_flags)
54
55 if '+parallel-netcdf' in self.spec:
56 config_args.append('--enable-pnetcdf')
See build log for details:
/usr/local/var/spack/stage/netcdf-4.4.1.1-gk2xxhbqijnrdwicawawcll4t3c7dvoj/netcdf-4.4.1.1/spack-build.out
We can see from the error that ``netcdf`` needs to know how to link the *high-level interface*
of ``hdf5``, and thus passes the extra parameter ``hl`` after the request to retrieve it.
Clearly the implementation in the ``hdf5`` package is not complete, and we need to fix it:
.. code-block:: console
root@advanced-packaging-tutorial:/# spack edit hdf5
If you followed the instructions correctly, the code added to the
``libs`` property should be similar to:
.. code-block:: python
:emphasize-lines: 1
query_parameters = self.spec.last_query.extra_parameters
key = tuple(sorted(query_parameters))
libraries = query2libraries[key]
shared = '+shared' in self.spec
return find_libraries(
    libraries, root=self.prefix, shared=shared, recursive=True
)
where we highlighted the line retrieving the extra parameters.
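For context, ``query2libraries`` maps each sorted tuple of extra parameters
to the library names to search for. A minimal sketch of such a dictionary,
with illustrative entries rather than the package's actual table, might be:

.. code-block:: python

   query2libraries = {
       # Default query: only the core hdf5 library
       tuple(): ['libhdf5'],
       # Query with the 'hl' parameter: high-level interface first
       ('hl',): ['libhdf5_hl', 'libhdf5'],
   }

With the extra parameters handled, we can successfully complete the
installation of ``netcdf ^mpich``: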
.. code-block:: console
root@advanced-packaging-tutorial:/# spack install netcdf ^mpich
==> libsigsegv is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libsigsegv-2.11-fypapcprssrj3nstp6njprskeyynsgaz
==> m4 is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/m4-1.4.18-r5envx3kqctwwflhd4qax4ahqtt6x43a
...
==> Installing netcdf
==> Using cached archive: /usr/local/var/spack/cache/netcdf/netcdf-4.4.1.1.tar.gz
==> Already staged netcdf-4.4.1.1-gk2xxhbqijnrdwicawawcll4t3c7dvoj in /usr/local/var/spack/stage/netcdf-4.4.1.1-gk2xxhbqijnrdwicawawcll4t3c7dvoj
==> Already patched netcdf
==> Building netcdf [AutotoolsPackage]
==> Executing phase: 'autoreconf'
==> Executing phase: 'configure'
==> Executing phase: 'build'
==> Executing phase: 'install'
==> Successfully installed netcdf
Fetch: 0.01s. Build: 24.61s. Total: 24.62s.
[+] /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netcdf-4.4.1.1-gk2xxhbqijnrdwicawawcll4t3c7dvoj

.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _build-systems-tutorial:
==============================
Spack Package Build Systems
==============================
After writing a couple of package files, you may begin to notice a
pattern emerge for some packages. For example, you may find yourself writing
an :code:`install()` method that invokes :code:`configure`, :code:`cmake`,
:code:`make`, and :code:`make install`. You may also find yourself writing
:code:`"prefix=" + prefix` as an argument to :code:`configure` or :code:`cmake`.
Rather than having you repeat these lines for every package, Spack has
classes that take care of these patterns. In addition, these package files
allow for finer-grained control of the underlying build systems.
In this section, we will describe each build system and give examples of
how it can be manipulated to install a package.
-----------------------
Package Class Hierarchy
-----------------------
.. graphviz::
digraph G {
node [
shape = "record"
]
edge [
arrowhead = "empty"
]
PackageBase -> Package [dir=back]
PackageBase -> MakefilePackage [dir=back]
PackageBase -> AutotoolsPackage [dir=back]
PackageBase -> CMakePackage [dir=back]
PackageBase -> PythonPackage [dir=back]
}
The above diagram gives a high-level view of the class hierarchy and how the
classes relate. Each subclass inherits from the :code:`PackageBase`
superclass. The bulk of the work is done in this superclass, which includes
fetching, extracting to a staging directory, and installing. Each subclass
then adds additional build-system-specific functionality. In the following
sections, we will go over examples of how to utilize each subclass and see
how powerful these abstractions are when packaging.
-----------------
Package
-----------------
We've already seen examples of a :code:`Package` class in our walkthrough for writing
package files, so we won't spend much time with them here. Briefly,
the :code:`Package` class allows for arbitrary control over the build process, whereas
subclasses rely on certain patterns (e.g. :code:`configure`, :code:`make`,
:code:`make install`) to be useful. :code:`Package` classes are particularly useful
for packages that are built in a non-conventional way, since the packager
can utilize some of Spack's helper functions to customize the building and
installing of a package.
-------------------
Autotools
-------------------
As we have seen earlier, packages using :code:`Autotools` use the :code:`configure`,
:code:`make`, and :code:`make install` commands to execute the build and
install process. In a plain :code:`Package` class, the typical build incantation
consists of the following:
.. code-block:: python
def install(self, spec, prefix):
    configure("--prefix=" + prefix)
    make()
    make("install")
You'll see that this looks similar to what we wrote in our packaging tutorial.
The :code:`AutotoolsPackage` subclass aims to simplify writing package files and provides
convenience methods to manipulate each of the different phases for an
:code:`Autotools` build system.
:code:`Autotools` packages consist of four phases:
1. :code:`autoreconf()`
2. :code:`configure()`
3. :code:`build()`
4. :code:`install()`
Each of these phases has sensible defaults. Let's take a quick look at some
of the internals of the :code:`AutotoolsPackage` class:
.. code-block:: console
$ spack edit --build-system autotools
This will open the :code:`AutotoolsPackage` file in your text editor.
.. note::
The examples showing code for these classes are abridged to avoid
overly long listings. We only show what is relevant to the packager.
.. literalinclude:: ../../../lib/spack/spack/build_systems/autotools.py
:language: python
:emphasize-lines: 33,36,54
:lines: 30-76,240-248
:linenos:
Important to note are the highlighted lines. These properties allow the
packager to set which build targets and install targets they want for their
package. If, for example, we wanted to add :code:`foo` as a build target,
we could append it to the :code:`build_targets` property:
.. code-block:: python
build_targets = ["foo"]
which is similar to invoking make directly in a :code:`Package`:
.. code-block:: python
make("foo")
This is useful if we have packages that ignore environment variables and need
a command-line argument.
Another thing to take note of is the :code:`configure()` method.
Here we see that the :code:`prefix` argument is already included, since it is a
common pattern amongst packages using :code:`Autotools`. We then only have to
override :code:`configure_args()`, which returns its output to
:code:`configure()`; :code:`configure()` then appends the common arguments.
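For instance, a hypothetical package with an MPI variant might implement
:code:`configure_args()` like this (a sketch with illustrative variant and
flag names, not taken from a real package):

.. code-block:: python

   def configure_args(self):
       # Only package-specific arguments are needed here;
       # AutotoolsPackage adds --prefix for us.
       args = []
       if '+mpi' in self.spec:
           args.append('--enable-mpi')
       else:
           args.append('--disable-mpi')
       return args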
Packagers also have the option to run :code:`autoreconf`, in case a package
needs to update the build system and generate a new :code:`configure` script,
though for the most part this is unnecessary.
Let's look at the :code:`mpileaks` package.py file that we worked on earlier:
.. code-block:: console
$ spack edit mpileaks
Notice that mpileaks is a :code:`Package` class but uses the :code:`Autotools`
build system. Although this package file is acceptable, let's turn it into an
:code:`AutotoolsPackage` class and simplify it further.
.. literalinclude:: tutorial/examples/Autotools/0.package.py
:language: python
:emphasize-lines: 9
:linenos:
We first inherit from the :code:`AutotoolsPackage` class.
Although we could keep the :code:`install()` method, most of it can be handled
by the :code:`AutotoolsPackage` base class. In fact, the only thing that needs
to be overridden is :code:`configure_args()`.
.. literalinclude:: tutorial/examples/Autotools/1.package.py
:language: python
:emphasize-lines: 25,26,27,28,29,30,31,32
:linenos:
Since Spack takes care of setting the prefix for us, we can exclude it as
an argument to :code:`configure`. Our package looks simpler, and the packager
does not need to worry about whether :code:`configure`
and :code:`make` have been invoked properly.
This version of the :code:`mpileaks` package installs the same as the previous,
but the :code:`AutotoolsPackage` class lets us do it with a cleaner looking
package file.
-----------------
Makefile
-----------------
Packages that utilize :code:`Make` or a :code:`Makefile` usually require you
to edit a :code:`Makefile` to set up platform and compiler specific variables.
These packages are handled by the :code:`Makefile` subclass which provides
convenience methods to help write these types of packages.
A :code:`MakefilePackage` class has three phases that can be overridden. These include:
1. :code:`edit()`
2. :code:`build()`
3. :code:`install()`
Packagers then have the ability to control how a :code:`Makefile` is edited, and
what targets to include for the build phase or install phase.
Let's also take a look inside the :code:`MakefilePackage` class:
.. code-block:: console
$ spack edit --build-system makefile
Take note of the following:
.. literalinclude:: ../../../lib/spack/spack/build_systems/makefile.py
:language: python
:lines: 14,43-61,70-88
:emphasize-lines: 21,27,34
:linenos:
Similar to :code:`Autotools`, the :code:`MakefilePackage` class has properties
that can be set by the packager. We can also override the
highlighted methods.
Let's try to recreate the Bowtie_ package:
.. _Bowtie: http://bowtie-bio.sourceforge.net/index.shtml
.. code-block:: console
$ spack create -f https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip
==> This looks like a URL for bowtie
==> Found 1 version of bowtie:
1.2.1.1 https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip
==> How many would you like to checksum? (default is 1, q to abort) 1
==> Downloading...
==> Fetching https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip
######################################################################## 100.0%
==> Checksummed 1 version of bowtie
==> This package looks like it uses the makefile build system
==> Created template for bowtie package
==> Created package file: /Users/mamelara/spack/var/spack/repos/builtin/packages/bowtie/package.py
Once the fetching is completed, Spack will open up your text editor in the
usual fashion and create a template of a :code:`MakefilePackage` package.py.
.. literalinclude:: tutorial/examples/Makefile/0.package.py
:language: python
:linenos:
Spack was successfully able to detect that :code:`Bowtie` uses :code:`Make`.
Let's add in the rest of our details for our package:
.. literalinclude:: tutorial/examples/Makefile/1.package.py
:language: python
:emphasize-lines: 10,11,13,14,18,20
:linenos:
As we mentioned earlier, most packages using a :code:`Makefile` have hard-coded
variables that must be edited. These variables are fine if you happen not to
care about which compilers are used, but Spack is designed to work with
any compiler. The :code:`MakefilePackage` subclass makes it easy to edit
these :code:`Makefiles` by having an :code:`edit()` method that
can be overridden.
Let's take a look at the default :code:`Makefile` that :code:`Bowtie` provides.
If we look inside, we see that :code:`CC` and :code:`CXX` are hard-coded to the GNU
compilers:
.. code-block:: console
$ spack stage bowtie
.. note::
As usual make sure you have shell support activated with spack:
:code:`source /path/to/spack_root/spack/share/spack/setup-env.sh`
.. code-block:: console
$ spack cd -s bowtie
$ cd bowtie-1.2
$ vim Makefile
.. code-block:: make
CPP = g++ -w
CXX = $(CPP)
CC = gcc
LIBS = $(LDFLAGS) -lz
HEADERS = $(wildcard *.h)
To fix this, we need to use the :code:`edit()` method to write our custom
:code:`Makefile`.
.. literalinclude:: tutorial/examples/Makefile/2.package.py
:language: python
:emphasize-lines: 23,24,25
:linenos:
Here we use a :code:`FileFilter` object to edit our :code:`Makefile`. It takes
a regular expression and replaces the matched :code:`CC` and :code:`CXX`
assignments with whatever Spack sets the :code:`CC` and :code:`CXX` environment
variables to. This allows us to build :code:`Bowtie` with whichever compiler
we specify through Spack's :code:`spec` syntax.
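A minimal sketch of such an :code:`edit()` method, assuming the :code:`Makefile`
assigns :code:`CPP` and :code:`CC` as shown above (the exact regular
expressions here are illustrative):

.. code-block:: python

   import os

   def edit(self, spec, prefix):
       makefile = FileFilter('Makefile')
       # Point the hard-coded compiler variables at the compilers
       # Spack exports in the build environment.
       makefile.filter(r'^CPP\s*=.*', 'CPP = ' + os.environ['CXX'])
       makefile.filter(r'^CC\s*=.*', 'CC = ' + os.environ['CC'])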
Let's change the build and install phases of our package:
.. literalinclude:: tutorial/examples/Makefile/3.package.py
:language: python
:emphasize-lines: 28,29,30,31,32,35,36
:linenos:
Here we demonstrate another strategy that we can use to manipulate our package:
we can provide command-line arguments to :code:`make()`. Since :code:`Bowtie`
can use :code:`tbb`, we can either add :code:`NO_TBB=1` as an argument to prevent
:code:`tbb` support, or just invoke :code:`make` with no arguments.
:code:`Bowtie` also requires the install target to be given a path to
the install directory. We can do this by providing :code:`prefix=` as a
command-line argument to :code:`make()`.
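A sketch of what these overridden phases might look like (the argument names
follow the description above; this mirrors, but may not exactly match, the
tutorial's example file):

.. code-block:: python

   def build(self, spec, prefix):
       # Disable TBB support via a command-line variable
       make('NO_TBB=1')

   def install(self, spec, prefix):
       # Tell the install target where to put the binaries
       make('prefix={0}'.format(prefix), 'install')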
Let's look at a couple of other examples and go through them:
.. code-block:: console
$ spack edit esmf
Some packages allow environment variables to be set and will honor them.
Variables assigned with :code:`?=` in a :code:`Makefile`
can be overridden using environment variables. In our :code:`esmf` example we
set two environment variables in our :code:`edit()` method:
.. code-block:: python
def edit(self, spec, prefix):
    for var in os.environ:
        if var.startswith('ESMF_'):
            os.environ.pop(var)

    # More code ...

    if self.compiler.name == 'gcc':
        os.environ['ESMF_COMPILER'] = 'gfortran'
    elif self.compiler.name == 'intel':
        os.environ['ESMF_COMPILER'] = 'intel'
    elif self.compiler.name == 'clang':
        os.environ['ESMF_COMPILER'] = 'gfortranclang'
    elif self.compiler.name == 'nag':
        os.environ['ESMF_COMPILER'] = 'nag'
    elif self.compiler.name == 'pgi':
        os.environ['ESMF_COMPILER'] = 'pgi'
    else:
        msg = "The compiler you are building with, "
        msg += "'{0}', is not supported by ESMF."
        raise InstallError(msg.format(self.compiler.name))
As you may have noticed, we didn't really write anything to the :code:`Makefile`
but rather we set environment variables that will override variables set in
the :code:`Makefile`.
Some packages include a configuration file that sets certain compiler variables,
platform specific variables, and the location of dependencies or libraries.
If the file is simple and only requires a couple of changes, we can overwrite
those entries with our :code:`FileFilter` object. If the configuration involves
complex changes, we can write a new configuration file from scratch.
Let's look at an example of this in the :code:`elk` package:
.. code-block:: console
$ spack edit elk
.. code-block:: python
def edit(self, spec, prefix):
    # Dictionary of configuration options
    config = {
        'MAKE': 'make',
        'AR': 'ar'
    }

    # Compiler-specific flags
    flags = ''
    if self.compiler.name == 'intel':
        flags = '-O3 -ip -unroll -no-prec-div'
    elif self.compiler.name == 'gcc':
        flags = '-O3 -ffast-math -funroll-loops'
    elif self.compiler.name == 'pgi':
        flags = '-O3 -lpthread'
    elif self.compiler.name == 'g95':
        flags = '-O3 -fno-second-underscore'
    elif self.compiler.name == 'nag':
        flags = '-O4 -kind=byte -dusty -dcfuns'
    elif self.compiler.name == 'xl':
        flags = '-O3'
    config['F90_OPTS'] = flags
    config['F77_OPTS'] = flags

    # BLAS/LAPACK support
    # Note: BLAS/LAPACK must be compiled with OpenMP support
    # if the +openmp variant is chosen
    blas = 'blas.a'
    lapack = 'lapack.a'
    if '+blas' in spec:
        blas = spec['blas'].libs.joined()
    if '+lapack' in spec:
        lapack = spec['lapack'].libs.joined()
    # lapack must come before blas
    config['LIB_LPK'] = ' '.join([lapack, blas])

    # FFT support
    if '+fft' in spec:
        config['LIB_FFT'] = join_path(spec['fftw'].prefix.lib,
                                      'libfftw3.so')
        config['SRC_FFT'] = 'zfftifc_fftw.f90'
    else:
        config['LIB_FFT'] = 'fftlib.a'
        config['SRC_FFT'] = 'zfftifc.f90'

    # MPI support
    if '+mpi' in spec:
        config['F90'] = spec['mpi'].mpifc
        config['F77'] = spec['mpi'].mpif77
    else:
        config['F90'] = spack_fc
        config['F77'] = spack_f77
        config['SRC_MPI'] = 'mpi_stub.f90'

    # OpenMP support
    if '+openmp' in spec:
        config['F90_OPTS'] += ' ' + self.compiler.openmp_flag
        config['F77_OPTS'] += ' ' + self.compiler.openmp_flag
    else:
        config['SRC_OMP'] = 'omp_stub.f90'

    # Libxc support
    if '+libxc' in spec:
        config['LIB_libxc'] = ' '.join([
            join_path(spec['libxc'].prefix.lib, 'libxcf90.so'),
            join_path(spec['libxc'].prefix.lib, 'libxc.so')
        ])
        config['SRC_libxc'] = ' '.join([
            'libxc_funcs.f90',
            'libxc.f90',
            'libxcifc.f90'
        ])
    else:
        config['SRC_libxc'] = 'libxcifc_stub.f90'

    # Write configuration options to include file
    with open('make.inc', 'w') as inc:
        for key in config:
            inc.write('{0} = {1}\n'.format(key, config[key]))
:code:`config` is just a dictionary that we can add key-value pairs to. By the
end of the :code:`edit()` method we write the contents of our dictionary to
:code:`make.inc`.
---------------
CMake
---------------
CMake_ is another common build system that has been gaining popularity. It works
in a similar manner to :code:`Autotools` but with differences in variable names,
the number of configuration options available, and the handling of shared libraries.
Typical build incantations look like this:
.. _CMake: https://cmake.org
.. code-block:: python
def install(self, spec, prefix):
    cmake('..', '-DCMAKE_INSTALL_PREFIX:PATH=' + prefix)
    make()
    make('install')
As you can see from the example above, it's very similar to invoking
:code:`configure` and :code:`make` in an :code:`Autotools` build system. However,
the variable names and options differ. Most options in CMake are prefixed
with a :code:`'-D'` flag to indicate a configuration setting.
In the :code:`CMakePackage` class we can override the following phases:
1. :code:`cmake()`
2. :code:`build()`
3. :code:`install()`
The :code:`CMakePackage` class also provides sensible defaults so we only need to
override :code:`cmake_args()`.
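For example, a hypothetical package might translate one of its variants into
a CMake option (the variant and option names here are illustrative, not from
a real package):

.. code-block:: python

   def cmake_args(self):
       args = []
       # Translate the package's '+shared' variant into a CMake option
       if '+shared' in self.spec:
           args.append('-DBUILD_SHARED_LIBS:BOOL=ON')
       else:
           args.append('-DBUILD_SHARED_LIBS:BOOL=OFF')
       return args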
Let's look at these defaults in the :code:`CMakePackage` class in the :code:`_std_args()` method:
.. code-block:: console
$ spack edit --build-system cmake
.. literalinclude:: ../../../lib/spack/spack/build_systems/cmake.py
:language: python
:lines: 102-147
:emphasize-lines: 10,18,24,36,37,38,44
:linenos:
Some :code:`CMake` packages use different generators. Spack is able to support
Unix-Makefile_ generators as well as Ninja_ generators.
.. _Unix-Makefile: https://cmake.org/cmake/help/v3.4/generator/Unix%20Makefiles.html
.. _Ninja: https://cmake.org/cmake/help/v3.4/generator/Ninja.html
If no generator is specified, Spack will default to :code:`Unix Makefiles`.
Next, we set up the build type. In :code:`CMake`, you can specify the build type
that you want. Options include:
1. :code:`empty`
2. :code:`Debug`
3. :code:`Release`
4. :code:`RelWithDebInfo`
5. :code:`MinSizeRel`
With these options you can specify whether you want a plain debug build, an
optimized release build, or a release build with debug information.
Release executables tend to be more optimized than Debug ones. In Spack,
the default is :code:`RelWithDebInfo` unless otherwise specified through a variant.
Spack then automatically sets up the :code:`-DCMAKE_INSTALL_PREFIX` path,
appends the build type (:code:`RelWithDebInfo` default), and then specifies a verbose
:code:`Makefile`.
Next we add the :code:`rpaths` to :code:`-DCMAKE_INSTALL_RPATH:STRING`.
Finally we add to :code:`-DCMAKE_PREFIX_PATH:STRING` the locations of all our
dependencies so that :code:`CMake` can find them.
In the end our :code:`cmake` line will look like this (example is :code:`xrootd`):
.. code-block:: console
$ cmake $HOME/spack/var/spack/stage/xrootd-4.6.0-4ydm74kbrp4xmcgda5upn33co5pwddyk/xrootd-4.6.0 -G Unix Makefiles -DCMAKE_INSTALL_PREFIX:PATH=$HOME/spack/opt/spack/darwin-sierra-x86_64/clang-9.0.0-apple/xrootd-4.6.0-4ydm74kbrp4xmcgda5upn33co5pwddyk -DCMAKE_BUILD_TYPE:STRING=RelWithDebInfo -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON -DCMAKE_FIND_FRAMEWORK:STRING=LAST -DCMAKE_INSTALL_RPATH_USE_LINK_PATH:BOOL=FALSE -DCMAKE_INSTALL_RPATH:STRING=$HOME/spack/opt/spack/darwin-sierra-x86_64/clang-9.0.0-apple/xrootd-4.6.0-4ydm74kbrp4xmcgda5upn33co5pwddyk/lib:$HOME/spack/opt/spack/darwin-sierra-x86_64/clang-9.0.0-apple/xrootd-4.6.0-4ydm74kbrp4xmcgda5upn33co5pwddyk/lib64 -DCMAKE_PREFIX_PATH:STRING=$HOME/spack/opt/spack/darwin-sierra-x86_64/clang-9.0.0-apple/cmake-3.9.4-hally3vnbzydiwl3skxcxcbzsscaasx5
We can now see how the :code:`CMakePackage` class takes care of a lot of the
boilerplate that would otherwise have to be typed in.
Let's try to recreate callpath_:
.. _callpath: https://github.com/LLNL/callpath.git
.. code-block:: console
$ spack create -f https://github.com/llnl/callpath/archive/v1.0.3.tar.gz
==> This looks like a URL for callpath
==> Found 4 versions of callpath:
1.0.3 https://github.com/LLNL/callpath/archive/v1.0.3.tar.gz
1.0.2 https://github.com/LLNL/callpath/archive/v1.0.2.tar.gz
1.0.1 https://github.com/LLNL/callpath/archive/v1.0.1.tar.gz
1.0 https://github.com/LLNL/callpath/archive/v1.0.tar.gz
==> How many would you like to checksum? (default is 1, q to abort) 1
==> Downloading...
==> Fetching https://github.com/LLNL/callpath/archive/v1.0.3.tar.gz
######################################################################## 100.0%
==> Checksummed 1 version of callpath
==> This package looks like it uses the cmake build system
==> Created template for callpath package
==> Created package file: /Users/mamelara/spack/var/spack/repos/builtin/packages/callpath/package.py
which then produces the following template:
.. literalinclude:: tutorial/examples/Cmake/0.package.py
:language: python
:linenos:
Again we fill in the details:
.. literalinclude:: tutorial/examples/Cmake/1.package.py
:language: python
:linenos:
:emphasize-lines: 9,13,14,18,19,20,21,22,23
As mentioned earlier, Spack will use sensible defaults to prevent repeated code
and to make writing :code:`CMake` package files simpler.
In callpath, we want to add an option for :code:`CALLPATH_WALKER` as well as
some compiler flags. We add them like so:
.. literalinclude:: tutorial/examples/Cmake/2.package.py
:language: python
:linenos:
:emphasize-lines: 26,30,31
Now we can control our build options using :code:`cmake_args()`. If the defaults
are sufficient for the package, we can leave this method out.
:code:`CMakePackage` classes allow for control of other features in the
build system. For example, you can specify the path to the "out of source"
build directory and also point to the root of the :code:`CMakeLists.txt` file if it
is placed in a non-standard location.
A good example of a package that has its :code:`CMakeLists.txt` file located at a
different location is found in :code:`spades`.
.. code-block:: console
$ spack edit spades
.. code-block:: python
root_cmakelists_dir = "src"
Here, :code:`root_cmakelists_dir` tells Spack where to find
:code:`CMakeLists.txt`; in this example, it is one directory level down, in
the :code:`src` directory.
Some :code:`CMake` packages also require the :code:`install` phase to be
overridden. For example, let's take a look at :code:`sniffles`.
.. code-block:: console
$ spack edit sniffles
Because the build process doesn't actually install anything, we have to
install the targets manually, so we override the :code:`install()` method to do it for us:
.. code-block:: python
# the build process doesn't actually install anything, do it by hand
def install(self, spec, prefix):
    mkdir(prefix.bin)
    src = "bin/sniffles-core-{0}".format(spec.version.dotted)
    binaries = ['sniffles', 'sniffles-debug']
    for b in binaries:
        install(join_path(src, b), join_path(prefix.bin, b))
--------------
PythonPackage
--------------
Python extensions and modules are built from source differently than most
applications. Python uses a :code:`setup.py` script to install Python modules.
The script consists of a call to :code:`setup()`, which provides Distutils
with the information required to build the module. If you're familiar with
pip or easy_install, :code:`setup.py` does the same thing.
These modules are usually installed using the following line:
.. code-block:: console
$ python setup.py install
There is also a list of commands and phases that you can call. To see the full
list, you can run:
.. code-block:: console
$ python setup.py --help-commands
Standard commands:
build build everything needed to install
build_py "build" pure Python modules (copy to build directory)
build_ext build C/C++ extensions (compile/link to build directory)
build_clib build C/C++ libraries used by Python extensions
build_scripts "build" scripts (copy and fixup #! line)
clean (no description available)
install install everything from build directory
install_lib install all Python modules (extensions and pure Python)
install_headers install C/C++ header files
install_scripts install scripts (Python or otherwise)
install_data install data files
sdist create a source distribution (tarball, zip file, etc.)
register register the distribution with the Python package index
bdist create a built (binary) distribution
bdist_dumb create a "dumb" built distribution
bdist_rpm create an RPM distribution
bdist_wininst create an executable installer for MS Windows
upload upload binary package to PyPI
check perform some checks on the package
We can write package files for Python packages using the :code:`Package` class,
but the class brings with it a lot of methods that are useless for Python packages.
Instead, Spack has a :code:`PythonPackage` subclass that lets packagers
of Python modules invoke :code:`setup.py` and use :code:`Distutils`,
which is much more familiar to a typical Python user.
To see the defaults that Spack provides for each of these methods, let's take
a look at the :code:`PythonPackage` class:
.. code-block:: console
$ spack edit --build-system python
We see the following:
.. literalinclude:: ../../../lib/spack/spack/build_systems/python.py
:language: python
:lines: 19,146-357
:linenos:
Each of these methods has sensible defaults, and each can be overridden.
We will write a package file for Pandas_:
.. _pandas: https://pandas.pydata.org
.. code-block:: console
$ spack create -f https://pypi.io/packages/source/p/pandas/pandas-0.19.0.tar.gz
==> This looks like a URL for pandas
==> Warning: Spack was unable to fetch url list due to a certificate verification problem. You can try running spack -k, which will not check SSL certificates. Use this at your own risk.
==> Found 1 version of pandas:
0.19.0 https://pypi.io/packages/source/p/pandas/pandas-0.19.0.tar.gz
==> How many would you like to checksum? (default is 1, q to abort) 1
==> Downloading...
==> Fetching https://pypi.io/packages/source/p/pandas/pandas-0.19.0.tar.gz
######################################################################## 100.0%
==> Checksummed 1 version of pandas
==> This package looks like it uses the python build system
==> Changing package name from pandas to py-pandas
==> Created template for py-pandas package
==> Created package file: /Users/mamelara/spack/var/spack/repos/builtin/packages/py-pandas/package.py
And we are left with the following template:
.. literalinclude:: tutorial/examples/PyPackage/0.package.py
:language: python
:linenos:
As you can see, this is no different from any other package template that we
have written. We have the choice of providing build options or using the
sensible defaults. Luckily for us, there is no need to provide build arguments.
Next we need to find the dependencies of a package. Dependencies are usually
listed in :code:`setup.py`. You can find the dependencies by searching for
the :code:`install_requires` keyword in that file. Here it is for :code:`Pandas`:
.. code-block:: python
# ... code
if sys.version_info[0] >= 3:
    setuptools_kwargs = {
        'zip_safe': False,
        'install_requires': ['python-dateutil >= 2',
                             'pytz >= 2011k',
                             'numpy >= %s' % min_numpy_ver],
        'setup_requires': ['numpy >= %s' % min_numpy_ver],
    }
    if not _have_setuptools:
        sys.exit("need setuptools/distribute for Py3k"
                 "\n$ pip install distribute")
# ... more code
You can find a more comprehensive list at the Pandas documentation_.
.. _documentation: https://pandas.pydata.org/pandas-docs/stable/install.html
By reading the documentation and :code:`setup.py`, we found that :code:`Pandas`
depends on :code:`python-dateutil`, :code:`pytz`, :code:`numpy`,
:code:`numexpr`, and :code:`bottleneck`.
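A condensed sketch of how those dependencies might be declared in the package
file (Spack prefixes Python packages with ``py-``; the dependency types shown
are the usual choice, and the exact declarations live in the completed script
below):

.. code-block:: python

   # Build-time tool
   depends_on('py-setuptools', type='build')

   # Run-time dependencies found in setup.py and the documentation
   depends_on('py-dateutil',   type=('build', 'run'))
   depends_on('py-pytz',       type=('build', 'run'))
   depends_on('py-numpy',      type=('build', 'run'))
   depends_on('py-numexpr',    type=('build', 'run'))
   depends_on('py-bottleneck', type=('build', 'run'))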
Here is the completed :code:`Pandas` script:
.. literalinclude:: tutorial/examples/PyPackage/1.package.py
:language: python
:linenos:
It is quite important to declare all the dependencies of a Python package.
Spack can "activate" Python packages to prevent the user from having to
load each dependency module explicitly. If a dependency is missed, Spack will
be unable to properly activate the package. To
learn more about extensions, go to :ref:`cmd-spack-extensions`.
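For instance, activation is a single command (shown here for illustration):

.. code-block:: console

   $ spack activate py-pandas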
From this example, you can see that building Python modules is made easy
through the :code:`PythonPackage` class.
-------------------
Other Build Systems
-------------------
Although we won't get in depth with any of the other build systems that Spack
supports, it is worth mentioning that Spack does provide subclasses
for the following build systems:
1. :code:`IntelPackage`
2. :code:`SconsPackage`
3. :code:`WafPackage`
4. :code:`RPackage`
5. :code:`PerlPackage`
6. :code:`QMakePackage`
Each of these classes has its own abstractions to help with writing
package files. For whatever doesn't fit nicely into the other build systems,
you can use the :code:`Package` class.
Hopefully by now you can see how we aim to make packaging simple and
robust through these classes. If you want to learn more about these build
systems, check out :ref:`installation_procedure` in the Packaging Guide.

.. Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _configs-tutorial:
======================
Configuration Tutorial
======================
This tutorial will guide you through various configuration options
that allow you to customize Spack's behavior with respect to
software installation. We will first cover the configuration file
hierarchy. Then, we will cover configuration options for compilers,
focusing on how they can be used to extend Spack's compiler auto-detection.
Next, we will cover the packages configuration file, focusing on
how it can be used to override default build options as well as
specify external package installations to use. Finally, we will
briefly touch on the config configuration file, which manages more
high-level Spack configuration options.
For all of these features we will demonstrate how we build up a full
configuration file. For some we will then demonstrate how the
configuration affects the install command, and for others we will use
the ``spack spec`` command to demonstrate how the configuration
changes have affected Spack's concretization algorithm. The provided
output is all from a server running Ubuntu version 16.04.
.. _configs-tutorial-scopes:
--------------------
Configuration Scopes
--------------------
Depending on your use case, you may want to provide configuration
settings common to everyone on your team, or you may want to set
default behaviors specific to a single user account. Spack provides
six configuration *scopes* to handle this customization. These scopes,
in order of decreasing priority, are:
============ ===================================================
Scope Directory
============ ===================================================
Command-line N/A
Custom Custom directory, specified with ``--config-scope``
User ``~/.spack/``
Site ``$SPACK_ROOT/etc/spack/``
System ``/etc/spack/``
Defaults ``$SPACK_ROOT/etc/spack/defaults/``
============ ===================================================
Spack's default configuration settings reside in
``$SPACK_ROOT/etc/spack/defaults``. These are useful for reference,
but should never be directly edited. To override these settings,
create new configuration files in any of the higher-priority
configuration scopes.
A particular cluster may have multiple Spack installations associated
with different projects. To provide settings common to all Spack
installations, put your configuration files in ``/etc/spack``.
To provide settings specific to a particular Spack installation,
you can use the ``$SPACK_ROOT/etc/spack`` directory.
For settings specific to a particular user, you will want to add
configuration files to the ``~/.spack`` directory. When Spack first
checked for compilers on your system, you may have noticed that it
placed your compiler configuration in this directory.
Configuration settings can also be placed in a custom location,
which is then specified on the command line via ``--config-scope``.
An example use case is managing two sets of configurations, one for
development and another for production preferences.
Settings specified on the command line have precedence over all
other configuration scopes.
^^^^^^^^^^^^^^^^^^^^^^^^
Platform-specific Scopes
^^^^^^^^^^^^^^^^^^^^^^^^
Some facilities manage multiple platforms from a single shared
file system. In order to handle this, each of the configuration
scopes listed above has two *sub-scopes*: platform-specific and
platform-independent. For example, compiler settings can be stored
in ``compilers.yaml`` configuration files in the following locations:
#. ``~/.spack/<platform>/compilers.yaml``
#. ``~/.spack/compilers.yaml``
#. ``$SPACK_ROOT/etc/spack/<platform>/compilers.yaml``
#. ``$SPACK_ROOT/etc/spack/compilers.yaml``
#. ``/etc/spack/<platform>/compilers.yaml``
#. ``/etc/spack/compilers.yaml``
#. ``$SPACK_ROOT/etc/defaults/<platform>/compilers.yaml``
#. ``$SPACK_ROOT/etc/defaults/compilers.yaml``
These files are listed in decreasing order of precedence, so files in
``~/.spack/<platform>`` will override settings in ``~/.spack``.
-----------
YAML Format
-----------
Spack configurations are YAML dictionaries. Every configuration file
begins with a top-level dictionary that tells Spack which
configuration set it modifies. When Spack checks its configuration,
the configuration scopes are updated as dictionaries in increasing
order of precedence, allowing higher precedence files to override
lower. YAML dictionaries use a colon ":" to specify key-value
pairs. Spack extends YAML syntax slightly to allow a double-colon
"::" to specify a key-value pair. When a double-colon is used to
specify a key-value pair, instead of adding that section Spack
replaces what was in that section with the new value. For example, a
compilers configuration file in the user scope written as follows:
.. code-block:: yaml
compilers::
- compiler:
    environment: {}
    extra_rpaths: []
    flags: {}
    modules: []
    operating_system: ubuntu16.04
    paths:
      cc: /usr/bin/gcc
      cxx: /usr/bin/g++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    spec: gcc@5.4.0
    target: x86_64
ensures that no other compilers are used, as the user configuration
scope is the last scope searched and the ``compilers::`` line replaces
all information from previous configuration files. If the same
configuration file had a single colon instead of the double colon, it
would add the GCC version 5.4.0 compiler to whatever other compilers
were listed in other configuration files.
.. _configs-tutorial-compilers:
----------------------
Compiler Configuration
----------------------
For most tasks, we can use Spack with the compilers auto-detected the
first time Spack runs on a system. As discussed in the basic
installation tutorial, we can also tell Spack where compilers are
located using the ``spack compiler add`` command. However, in some
circumstances we want even more fine-grained control over the
compilers available. This section will teach you how to exercise that
control using the compilers configuration file.
We will start by opening the compilers configuration file:
.. code-block:: console
$ spack config edit compilers
.. code-block:: yaml
compilers:
- compiler:
    environment: {}
    extra_rpaths: []
    flags: {}
    modules: []
    operating_system: ubuntu16.04
    paths:
      cc: /usr/bin/clang-3.7
      cxx: /usr/bin/clang++-3.7
      f77: null
      fc: null
    spec: clang@3.7.1-2ubuntu2
    target: x86_64
- compiler:
    environment: {}
    extra_rpaths: []
    flags: {}
    modules: []
    operating_system: ubuntu16.04
    paths:
      cc: /usr/bin/clang
      cxx: /usr/bin/clang++
      f77: null
      fc: null
    spec: clang@3.8.0-2ubuntu4
    target: x86_64
- compiler:
    environment: {}
    extra_rpaths: []
    flags: {}
    modules: []
    operating_system: ubuntu16.04
    paths:
      cc: /usr/bin/gcc-4.7
      cxx: /usr/bin/g++-4.7
      f77: /usr/bin/gfortran-4.7
      fc: /usr/bin/gfortran-4.7
    spec: gcc@4.7
    target: x86_64
- compiler:
    environment: {}
    extra_rpaths: []
    flags: {}
    modules: []
    operating_system: ubuntu16.04
    paths:
      cc: /usr/bin/gcc
      cxx: /usr/bin/g++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    spec: gcc@5.4.0
    target: x86_64
This specifies two versions of the GCC compiler and two versions of the
Clang compiler with no Flang compiler. Now suppose we have a code that
we want to compile with the Clang compiler for C/C++ code, but with
gfortran for Fortran components. We can do this by adding another entry
to the ``compilers.yaml`` file.
.. code-block:: yaml
- compiler:
    environment: {}
    extra_rpaths: []
    flags: {}
    modules: []
    operating_system: ubuntu16.04
    paths:
      cc: /usr/bin/clang
      cxx: /usr/bin/clang++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    spec: clang@3.8.0-gfortran
    target: x86_64
Let's talk about the sections of this compiler entry that we've changed.
The biggest change we've made is to the ``paths`` section. This lists
the paths to the compilers to use for each language/specification.
In this case, we point to the clang compiler for C/C++ and the gfortran
compiler for both specifications of Fortran. We've also changed the
``spec`` entry for this compiler. The ``spec`` entry is effectively the
name of the compiler for Spack. It consists of a name and a version
number, separated by the ``@`` sigil. The name must be one of the supported
compiler names in Spack (gcc, intel, pgi, xl, xl_r, clang, nag, cce, arm).
The version number can be an arbitrary string of alphanumeric characters,
as well as ``-``, ``.``, and ``_``. The ``target`` and ``operating_system``
sections we leave unchanged. These sections specify when Spack can use
different compilers, and are primarily useful for configuration files that
will be used across multiple systems.
We can verify that our new compiler works by invoking it now:
.. code-block:: console
$ spack install --no-cache zlib %clang@3.8.0-gfortran
...
This new compiler also works on Fortran codes:
.. code-block:: console
$ spack install --no-cache cfitsio %clang@3.8.0-gfortran -bzip2
...
^^^^^^^^^^^^^^
Compiler Flags
^^^^^^^^^^^^^^
Some compilers may require specific compiler flags to work properly in
a particular computing environment. Spack provides configuration
options for setting compiler flags every time a specific compiler is
invoked. These flags become part of the package spec and therefore of
the build provenance. As on the command line, the flags are set
through the implicit build variables ``cflags``, ``cxxflags``, ``cppflags``,
``fflags``, ``ldflags``, and ``ldlibs``.
Let's open our compilers configuration file again and add a compiler flag.
.. code-block:: yaml
- compiler:
    environment: {}
    extra_rpaths: []
    flags:
      cppflags: -g
    modules: []
    operating_system: ubuntu16.04
    paths:
      cc: /usr/bin/clang
      cxx: /usr/bin/clang++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    spec: clang@3.8.0-gfortran
    target: x86_64
We can test this out using the ``spack spec`` command to show how the
spec is concretized.
.. code-block:: console
$ spack spec cfitsio %clang@3.8.0-gfortran
Input spec
--------------------------------
cfitsio%clang@3.8.0-gfortran
Normalized
--------------------------------
cfitsio%clang@3.8.0-gfortran
Concretized
--------------------------------
cfitsio@3.410%clang@3.8.0-gfortran cppflags="-g" +bzip2+shared arch=linux-ubuntu16.04-x86_64
^bzip2@1.0.6%clang@3.8.0-gfortran cppflags="-g" +shared arch=linux-ubuntu16.04-x86_64
We can see that ``cppflags="-g"`` has been added to every node in the DAG.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Advanced Compiler Configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There are three fields of the compiler configuration entry that we
have not yet talked about.
The ``modules`` field of the compiler is used primarily on Cray systems,
but can be useful on any system that has compilers that are only
useful when a particular module is loaded. Any modules in the
``modules`` field of the compiler configuration will be loaded as part
of the build environment for packages using that compiler.
The ``extra_rpaths`` field of the compiler configuration is used for
compilers that do not rpath all of their dependencies by
default. Since compilers are often installed externally to Spack,
Spack is unable to manage compiler dependencies and enforce
rpath usage. This can lead to packages failing to find link dependencies
imposed by the compiler. For compilers that impose link
dependencies on the resulting executables that are not rpath'ed into
the executable automatically, the ``extra_rpaths`` field of the compiler
configuration tells Spack which dependencies to rpath into every
executable created by that compiler. The executables will then be able
to find the link dependencies imposed by the compiler. As an example,
this field can be set by
.. code-block:: yaml
- compiler:
    ...
    extra_rpaths:
    - /apps/intel/ComposerXE2017/compilers_and_libraries_2017.5.239/linux/compiler/lib/intel64_lin
    ...
The ``environment`` field of the compiler configuration is used for
compilers that require environment variables to be set during build
time. For example, if your Intel compiler suite requires the
``INTEL_LICENSE_FILE`` environment variable to point to the proper
license server, you can set this in ``compilers.yaml`` as follows:
.. code-block:: yaml
- compiler:
    environment:
      set:
        INTEL_LICENSE_FILE: 1713@license4
    ...
In addition to ``set``, ``environment`` also supports ``unset``,
``prepend-path``, and ``append-path``.
.. _configs-tutorial-package-prefs:
-------------------------------
Configuring Package Preferences
-------------------------------
Package preferences in Spack are managed through the ``packages.yaml``
configuration file. First, we will look at the default
``packages.yaml`` file.
.. code-block:: console
$ spack config --scope defaults edit packages
.. literalinclude:: ../../../etc/spack/defaults/packages.yaml
:language: yaml
This sets the default preferences for compilers and for providers of
virtual packages. To illustrate how this works, suppose we want to
change the preferences to prefer the Clang compiler and to prefer
MPICH over OpenMPI. Currently, we prefer GCC and OpenMPI.
.. code-block:: console
$ spack spec hdf5
Input spec
--------------------------------
hdf5
Concretized
--------------------------------
hdf5@1.10.4%gcc@5.4.0~cxx~debug~fortran~hl+mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64
^openmpi@3.1.3%gcc@5.4.0~cuda+cxx_exceptions fabrics= ~java~legacylaunchers~memchecker~pmi schedulers= ~sqlite3~thread_multiple+vt arch=linux-ubuntu16.04-x86_64
^hwloc@1.11.9%gcc@5.4.0~cairo~cuda+libxml2+pci+shared arch=linux-ubuntu16.04-x86_64
^libpciaccess@0.13.5%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64
^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
^util-macros@1.19.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
^libxml2@2.9.8%gcc@5.4.0~python arch=linux-ubuntu16.04-x86_64
^xz@5.2.4%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64
^numactl@2.0.11%gcc@5.4.0 patches=592f30f7f5f757dfc239ad0ffd39a9a048487ad803c26b419e0f96b8cda08c1a arch=linux-ubuntu16.04-x86_64
^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64
^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64
^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
Now we will open the packages configuration file and update our
preferences.
.. code-block:: console
$ spack config edit packages
.. code-block:: yaml
packages:
  all:
    compiler: [clang, gcc, intel, pgi, xl, nag]
    providers:
      mpi: [mpich, openmpi]
Because of the configuration scoping we discussed earlier, this
overrides the default settings just for these two items.
.. code-block:: console
$ spack spec hdf5
Input spec
--------------------------------
hdf5
Concretized
--------------------------------
hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl+mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64
^mpich@3.2.1%clang@3.8.0-2ubuntu4 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64
^findutils@4.6.0%clang@3.8.0-2ubuntu4 patches=84b916c0bf8c51b7e7b28417692f0ad3e7030d1f3c248ba77c42ede5c1c5d11e,bd9e4e5cc280f9753ae14956c4e4aa17fe7a210f55dd6c84aa60b12d106d47a2 arch=linux-ubuntu16.04-x86_64
^autoconf@2.69%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
^m4@1.4.18%clang@3.8.0-2ubuntu4 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64
^libsigsegv@2.11%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
^perl@5.26.2%clang@3.8.0-2ubuntu4+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64
^gdbm@1.14.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
^readline@7.0%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
^ncurses@6.1%clang@3.8.0-2ubuntu4~symlinks~termlib arch=linux-ubuntu16.04-x86_64
^pkgconf@1.4.2%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
^automake@1.16.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
^libtool@2.4.6%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
^texinfo@6.5%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
^zlib@1.2.11%clang@3.8.0-2ubuntu4+optimize+pic+shared arch=linux-ubuntu16.04-x86_64
^^^^^^^^^^^^^^^^^^^
Variant Preferences
^^^^^^^^^^^^^^^^^^^
The packages configuration file can also set variant preferences for
package variants. For example, let's change our preferences to build all
packages without shared libraries. We will accomplish this by turning
off the ``shared`` variant on all packages that have one.
.. code-block:: yaml
packages:
  all:
    compiler: [clang, gcc, intel, pgi, xl, nag]
    providers:
      mpi: [mpich, openmpi]
    variants: ~shared
We can check the effect of this change with ``spack spec hdf5`` again.
.. code-block:: console
$ spack spec hdf5
Input spec
--------------------------------
hdf5
Concretized
--------------------------------
hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl+mpi+pic~shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64
^mpich@3.2.1%clang@3.8.0-2ubuntu4 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64
^findutils@4.6.0%clang@3.8.0-2ubuntu4 patches=84b916c0bf8c51b7e7b28417692f0ad3e7030d1f3c248ba77c42ede5c1c5d11e,bd9e4e5cc280f9753ae14956c4e4aa17fe7a210f55dd6c84aa60b12d106d47a2 arch=linux-ubuntu16.04-x86_64
^autoconf@2.69%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
^m4@1.4.18%clang@3.8.0-2ubuntu4 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64
^libsigsegv@2.11%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
^perl@5.26.2%clang@3.8.0-2ubuntu4+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac ~shared+threads arch=linux-ubuntu16.04-x86_64
^gdbm@1.14.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
^readline@7.0%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
^ncurses@6.1%clang@3.8.0-2ubuntu4~symlinks~termlib arch=linux-ubuntu16.04-x86_64
^pkgconf@1.4.2%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
^automake@1.16.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
^libtool@2.4.6%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
^texinfo@6.5%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
^zlib@1.2.11%clang@3.8.0-2ubuntu4+optimize+pic~shared arch=linux-ubuntu16.04-x86_64
So far we have only made global changes to the package preferences. As
we've seen throughout this tutorial, hdf5 builds with MPI enabled by
default in Spack. If we were working on a project that would routinely
need serial hdf5, that might get annoying quickly, having to type
``hdf5~mpi`` all the time. Instead, we'll update our preferences for
hdf5.
.. code-block:: yaml
packages:
  all:
    compiler: [clang, gcc, intel, pgi, xl, nag]
    providers:
      mpi: [mpich, openmpi]
    variants: ~shared
  hdf5:
    variants: ~mpi
Now hdf5 will concretize without an MPI dependency by default.
.. code-block:: console
$ spack spec hdf5
Input spec
--------------------------------
hdf5
Concretized
--------------------------------
hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl~mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64
^zlib@1.2.11%clang@3.8.0-2ubuntu4+optimize+pic~shared arch=linux-ubuntu16.04-x86_64
In general, every attribute that we can set for all packages can also be
set separately for an individual package.
^^^^^^^^^^^^^^^^^
External Packages
^^^^^^^^^^^^^^^^^
The packages configuration file also controls when Spack will build
against an externally installed package. On these systems we have a
pre-installed zlib.
.. code-block:: yaml
packages:
  all:
    compiler: [clang, gcc, intel, pgi, xl, nag]
    providers:
      mpi: [mpich, openmpi]
    variants: ~shared
  hdf5:
    variants: ~mpi
  zlib:
    paths:
      zlib@1.2.8%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64: /usr
Here, we've told Spack that zlib 1.2.8 is installed on our system.
We've also told it the installation prefix where zlib can be found.
We don't know exactly which variants it was built with, but that's
okay.
.. code-block:: console
$ spack spec hdf5
Input spec
--------------------------------
hdf5
Concretized
--------------------------------
hdf5@1.10.4%gcc@5.4.0~cxx~debug~fortran~hl~mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64
^zlib@1.2.8%gcc@5.4.0+optimize+pic~shared arch=linux-ubuntu16.04-x86_64
You'll notice that Spack is now using the external zlib installation,
but the compiler used to build zlib is now overriding our compiler
preference of clang. If we explicitly specify clang:
.. code-block:: console
$ spack spec hdf5 %clang
Input spec
--------------------------------
hdf5%clang
Concretized
--------------------------------
hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl~mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64
^zlib@1.2.11%clang@3.8.0-2ubuntu4+optimize+pic~shared arch=linux-ubuntu16.04-x86_64
Spack concretizes to both hdf5 and zlib being built with clang.
This has a side-effect of rebuilding zlib. If we want to force
Spack to use the system zlib, we have two choices. We can either
specify it on the command line, or we can tell Spack that it's
not allowed to build its own zlib. We'll go with the latter.
.. code-block:: yaml
packages:
  all:
    compiler: [clang, gcc, intel, pgi, xl, nag]
    providers:
      mpi: [mpich, openmpi]
    variants: ~shared
  hdf5:
    variants: ~mpi
  zlib:
    paths:
      zlib@1.2.8%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64: /usr
    buildable: False
Now Spack will be forced to choose the external zlib.
.. code-block:: console
$ spack spec hdf5 %clang
Input spec
--------------------------------
hdf5%clang
Concretized
--------------------------------
hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl~mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64
^zlib@1.2.8%gcc@5.4.0+optimize+pic~shared arch=linux-ubuntu16.04-x86_64
This gets slightly more complicated with virtual dependencies. Suppose
we don't want to build our own MPI, but we now want a parallel version
of hdf5. Fortunately, we have mpich installed on these systems.
.. code-block:: yaml
packages:
  all:
    compiler: [clang, gcc, intel, pgi, xl, nag]
    providers:
      mpi: [mpich, openmpi]
    variants: ~shared
  hdf5:
    variants: ~mpi
  zlib:
    paths:
      zlib@1.2.8%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64: /usr
    buildable: False
  mpich:
    paths:
      mpich@3.2%gcc@5.4.0 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64: /usr
    buildable: False
If we concretize ``hdf5+mpi`` with this configuration file, we will just
build with an alternate MPI implementation.
.. code-block:: console

   $ spack spec hdf5 %clang +mpi
   Input spec
   --------------------------------
   hdf5%clang+mpi

   Concretized
   --------------------------------
   hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl+mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64
       ^openmpi@3.1.3%clang@3.8.0-2ubuntu4~cuda+cxx_exceptions fabrics= ~java~legacylaunchers~memchecker~pmi schedulers= ~sqlite3~thread_multiple+vt arch=linux-ubuntu16.04-x86_64
           ^hwloc@1.11.9%clang@3.8.0-2ubuntu4~cairo~cuda+libxml2+pci~shared arch=linux-ubuntu16.04-x86_64
               ^libpciaccess@0.13.5%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
                   ^libtool@2.4.6%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
                       ^m4@1.4.18%clang@3.8.0-2ubuntu4 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64
                           ^libsigsegv@2.11%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
                   ^pkgconf@1.4.2%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
                   ^util-macros@1.19.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
               ^libxml2@2.9.8%clang@3.8.0-2ubuntu4~python arch=linux-ubuntu16.04-x86_64
                   ^xz@5.2.4%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
                   ^zlib@1.2.8%gcc@5.4.0+optimize+pic~shared arch=linux-ubuntu16.04-x86_64
               ^numactl@2.0.11%clang@3.8.0-2ubuntu4 patches=592f30f7f5f757dfc239ad0ffd39a9a048487ad803c26b419e0f96b8cda08c1a arch=linux-ubuntu16.04-x86_64
                   ^autoconf@2.69%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
                       ^perl@5.26.2%clang@3.8.0-2ubuntu4+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac ~shared+threads arch=linux-ubuntu16.04-x86_64
                           ^gdbm@1.14.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
                               ^readline@7.0%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
                                   ^ncurses@6.1%clang@3.8.0-2ubuntu4~symlinks~termlib arch=linux-ubuntu16.04-x86_64
                   ^automake@1.16.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64
We have only expressed a preference for mpich over other MPI
implementations; Spack will happily build with any implementation we
haven't forbidden it from building. We could resolve this by requesting
``hdf5%clang+mpi^mpich`` explicitly every time, or we can configure
Spack not to use any other MPI implementation. Since we're focused on
configuration here and the former gets tedious (see the sketch below),
we'll modify our ``packages.yaml`` file again.
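For reference, the explicit request would look something like this
(just a sketch of the command; we won't take this route):

.. code-block:: console

   $ spack spec hdf5 %clang +mpi ^mpich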
While we're at it, we can configure hdf5 to build with MPI by default
again.
.. code-block:: yaml

   packages:
     all:
       compiler: [clang, gcc, intel, pgi, xl, nag]
       providers:
         mpi: [mpich, openmpi]
       variants: ~shared
     zlib:
       paths:
         zlib@1.2.8%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64: /usr
       buildable: False
     mpich:
       paths:
         mpich@3.2%gcc@5.4.0 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64: /usr
       buildable: False
     openmpi:
       buildable: False
     mvapich2:
       buildable: False
     intel-mpi:
       buildable: False
     intel-parallel-studio:
       buildable: False
     spectrum-mpi:
       buildable: False
     mpilander:
       buildable: False
     charm:
       buildable: False
     charmpp:
       buildable: False
Now that we have configured Spack not to build any of the possible
MPI providers, we can try again.
.. code-block:: console

   $ spack spec hdf5 %clang
   Input spec
   --------------------------------
   hdf5%clang

   Concretized
   --------------------------------
   hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl+mpi+pic~shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64
       ^mpich@3.2%gcc@5.4.0 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64
       ^zlib@1.2.8%gcc@5.4.0+optimize+pic~shared arch=linux-ubuntu16.04-x86_64
By configuring most of our package preferences in ``packages.yaml``,
we can cut down on the amount of work we need to do when specifying
a spec on the command line. In addition to compiler and variant
preferences, we can specify version preferences as well. Anything
that you can specify on the command line can be specified in
``packages.yaml`` with the exact same spec syntax.
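For example, a hypothetical entry preferring a particular hdf5
version might look like this:

.. code-block:: yaml

   packages:
     hdf5:
       version: [1.10.4]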
^^^^^^^^^^^^^^^^^^^^^^^^
Installation Permissions
^^^^^^^^^^^^^^^^^^^^^^^^
The ``packages.yaml`` file also controls the default permissions
to use when installing a package. You'll notice that by default,
the installation prefix will be world readable but only user writable.
Let's say we need to install ``converge``, a licensed software package.
Since a specific research group, ``fluid_dynamics``, pays for this
license, we want to ensure that only members of this group can access
the software. We can do this like so:
.. code-block:: yaml

   packages:
     converge:
       permissions:
         read: group
         group: fluid_dynamics
Now, only members of the ``fluid_dynamics`` group can use any
``converge`` installations.
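Write permissions are controlled the same way. As a sketch, a
hypothetical site-wide default letting a ``spack_users`` group modify
every installation might look like:

.. code-block:: yaml

   packages:
     all:
       permissions:
         write: group
         group: spack_users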
.. warning::

   Make sure to delete or move the ``packages.yaml`` you have been
   editing up to this point. Otherwise, it will change the hashes
   of your packages, leading to differences in the output of later
   tutorial sections.
-----------------
High-level Config
-----------------
In addition to compiler and package settings, Spack allows customization
of several high-level settings. These settings are stored in the generic
``config.yaml`` configuration file. You can see the default settings by
running:
.. code-block:: console

   $ spack config --scope defaults edit config
.. literalinclude:: ../../../etc/spack/defaults/config.yaml
   :language: yaml
As you can see, many of the directories Spack uses can be customized.
For example, you can tell Spack to install packages to a prefix
outside of the ``$SPACK_ROOT`` hierarchy. Module files can be
written to a central location if you are using multiple Spack
instances. If you have a fast scratch file system, you can run builds
from this file system with the following ``config.yaml``:
.. code-block:: yaml

   config:
     build_stage:
       - /scratch/$user
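Likewise, installing outside of the ``$SPACK_ROOT`` hierarchy or
centralizing module files comes down to overriding the corresponding
directories. A sketch, assuming hypothetical ``/apps`` paths:

.. code-block:: yaml

   config:
     install_tree: /apps/spack/opt
     module_roots:
       tcl: /apps/modules/tcl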
On systems with compilers that absolutely *require* environment variables
like ``LD_LIBRARY_PATH``, it is possible to prevent Spack from cleaning
the build environment with the ``dirty`` setting:
.. code-block:: yaml

   config:
     dirty: true
However, this is strongly discouraged, as it can pull unwanted libraries
into the build.
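If you only need a dirty environment for a single build, the
``--dirty`` flag to ``spack install`` enables the same behavior for
just that command, without a permanent configuration change:

.. code-block:: console

   $ spack install --dirty hdf5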
One last setting that may be of interest to many users is the ability
to customize the parallelism of Spack builds. By default, Spack
installs all packages in parallel with the number of jobs equal to the
number of cores on the node. For example, on a node with 16 cores,
this will look like:
.. code-block:: console

   $ spack install --no-cache --verbose zlib
   ==> Installing zlib
   ==> Using cached archive: /home/user/spack/var/spack/cache/zlib/zlib-1.2.11.tar.gz
   ==> Staging archive: /home/user/spack/var/spack/stage/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb/zlib-1.2.11.tar.gz
   ==> Created stage in /home/user/spack/var/spack/stage/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb
   ==> No patches needed for zlib
   ==> Building zlib [Package]
   ==> Executing phase: 'install'
   ==> './configure' '--prefix=/home/user/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb'
   ...
   ==> 'make' '-j16'
   ...
   ==> 'make' '-j16' 'install'
   ...
   ==> Successfully installed zlib
     Fetch: 0.00s.  Build: 1.03s.  Total: 1.03s.
   [+] /home/user/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb
As you can see, we are building with all 16 cores on the node. If you are
on a shared login node, this can slow down the system for other users. If
you have a strict ulimit or restriction on the number of available licenses,
you may not be able to build at all with this many cores. On nodes with 64+
cores, you may not see a significant speedup of the build anyway. To limit
the number of cores our build uses, set ``build_jobs`` like so:
.. code-block:: yaml

   config:
     build_jobs: 4
If we uninstall and reinstall zlib, we see that it now uses only 4 cores:
.. code-block:: console

   $ spack install --no-cache --verbose zlib
   ==> Installing zlib
   ==> Using cached archive: /home/user/spack/var/spack/cache/zlib/zlib-1.2.11.tar.gz
   ==> Staging archive: /home/user/spack/var/spack/stage/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb/zlib-1.2.11.tar.gz
   ==> Created stage in /home/user/spack/var/spack/stage/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb
   ==> No patches needed for zlib
   ==> Building zlib [Package]
   ==> Executing phase: 'install'
   ==> './configure' '--prefix=/home/user/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb'
   ...
   ==> 'make' '-j4'
   ...
   ==> 'make' '-j4' 'install'
   ...
   ==> Successfully installed zlib
     Fetch: 0.00s.  Build: 1.03s.  Total: 1.03s.
   [+] /home/user/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb
Obviously, if you want to build everything in serial for whatever reason,
you would set ``build_jobs`` to 1.
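You can also override the configured value for a single command with
the ``-j`` / ``--jobs`` flag:

.. code-block:: console

   $ spack install -j 1 zlib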
--------
Examples
--------
For examples of how other sites configure Spack, see
https://github.com/spack/spack-configs. If you use Spack at your site
and want to share your config files, feel free to submit a pull request!
