Compare commits


25 Commits

Author SHA1 Message Date
Todd Gamblin
764cafc1ce update CHANGELOG.md for 0.15.4 2020-08-13 00:41:59 -07:00
Todd Gamblin
091b45c3c7 bump version number for 0.15.4 2020-08-13 00:33:31 -07:00
Massimiliano Culpo
1707448fde Move Python 2.6 unit tests to Github Actions (#17279)
* Run Python2.6 unit tests on Github Actions
* Skip url tests on Python 2.6 to reduce waiting times
* Skip foreground background tests on Python 2.6 to reduce waiting times
* Removed references to Travis in the documentation
* Deleted install_patchelf.sh (can be installed from repo on CentOS 6)
2020-08-13 00:33:31 -07:00
Patrick Gartung
4d25481473 Buildcache: bindist test without invoking spack compiler wrappers. (#15687)
* Buildcache:
   * Try mocking an install of quux, corge and garply using prebuilt binaries
   * Put patchelf install after ccache restore
   * Add script to install patchelf from source so it can be used on Ubuntu:Trusty, which does not have a patchelf package. The script will skip building on macOS
   * Remove mirror at end of bindist test
   * Add patchelf to Ubuntu build env
   * Revert mock patchelf package to allow other tests to run.
   * Remove depends_on('patchelf', type='build'), relying instead on a test fixture to ensure patchelf is available.

* Call g++ command to build libraries directly during test build

* Flake8

* Install patchelf in the before_install stage using apt, unless on Trusty, where it is built from source.

* Add some symbolic links between packages

* Flake8

* Flake8

* Update mock packages to write their own source files

* Create the stage because spec search does not create it any longer

* updates after change of list command arguments

* cleanup after merge

* flake8
2020-08-13 00:33:31 -07:00
Massimiliano Culpo
ecbfa5e448 Use "fetch-depth: 0" to retrieve all history from remote 2020-08-13 00:33:31 -07:00
Massimiliano Culpo
c00773521e Simplified YAML files for Github Actions workflows
Updated actions where needed
2020-08-13 00:33:31 -07:00
Massimiliano Culpo
a4b0239635 Group tests with similar duration together
Style and documentation tests take just a few minutes
to run. Since in Github actions one can't restart a single
job but needs to restart an entire workflow, here we group
tests with similar duration together.
2020-08-13 00:33:31 -07:00
Todd Gamblin
303882834a docs: document releases and branches in Spack
- [x] Remove references to `master` branch
- [x] Document how release branches are structured
- [x] Document how to make a major release
- [x] Document how to make a point release
- [x] Document how to do work in our release projects
2020-08-13 00:33:31 -07:00
Todd Gamblin
5b63ec8652 Remove references to master from CI
- [x] remove master from github actions
- [x] remove master from .travis.yml
- [x] make `develop` the default branch for `spack ci`
2020-08-13 00:30:51 -07:00
Massimiliano Culpo
fc94dde3fc Moved flake8, shell and documentation tests to Github Action (#17328)
* Move flake8 tests on Github Actions

* Move shell test to Github Actions

* Moved documentation build to Github Action

* Don't run coverage on Python 2.6

Since we get connection errors consistently on Travis
when trying to upload coverage results for Python 2.6,
avoid computing coverage entirely to speed-up tests.
2020-08-13 00:30:51 -07:00
Robert Blake
c064088cf3 Bugfix for #17999: use cudart instead of cuda. (#18000)
This is needed because libcuda is used by the driver,
whereas libcudart is used by the runtime. CMake searches
for cudart instead of cuda.

On LLNL LC systems, libcuda is only found in compat and
stubs directories, meaning that the lookup of libraries
fails.
2020-08-12 23:58:10 -07:00
Todd Gamblin
c05fa25057 bugfix: fix spack -V with releases/latest and shallow clones (#17884)
`spack -V` stopped working when we added the `releases/latest` tag to
track the most recent release. It started just reporting the version,
even on a `develop` checkout. We need to tell it to *only* search for
tags that start with `v`, so that it will ignore `releases/latest`.

`spack -V` also would print out unwanted git error output on a shallow
clone.

- [x] add `--match 'v*'` to `git describe` arguments
- [x] route error output to `os.devnull`
2020-08-12 23:58:10 -07:00
Patrick Gartung
8e2f41fe18 Buildcache create: change NoOverwriteException back to a warning as in v0.14 (#17832)
* Change buildcache create `NoOverwriteException` back to a warning.
2020-08-12 23:58:10 -07:00
Axel Huebl
50f76f6131 Hotfix: move CUDAHOSTCXX (#17826)
* Hotfix: move CUDAHOSTCXX

Set only in dependent packages.

* dependent compiler
2020-08-12 23:58:10 -07:00
Todd Gamblin
5f8ab69396 bugfix: fix spack buildcache list --allarch
`spack buildcache list` was trying to construct an `Arch` object and
compare it to `arch_for_spec(<spec>)` for each spec in the buildcache.
`Arch` objects are only intended to be constructed for the machine they
describe. The `ArchSpec` object (part of the `Spec`) is the descriptor
that lets us talk about architectures anywhere.

- [x] Modify `spack buildcache list` and `spack buildcache install` to
      filter with `Spec` matching instead of using `Arch`.
2020-08-12 23:58:10 -07:00
Todd Gamblin
aff0e8b592 architecture: make it easier to get a Spec for the default arch
- [x] Make it easier to get a `Spec` with a proper `ArchSpec` from an
      `Arch` object via new `Arch.to_spec()` method.

- [x] Pull `spack.architecture.default_arch()` out of
      `spack.architecture.sys_type()` so we can get an `Arch` instead of
      a string.
2020-08-12 23:58:10 -07:00
eugeneswalker
7302dd834f allow GNUPGHOME to come from SPACK_GNUPGHOME in env, if set (#17139) 2020-08-12 22:57:57 -07:00
Todd Gamblin
0f25462ea6 update CHANGELOG.md for 0.15.3 2020-07-28 02:11:06 -07:00
Todd Gamblin
ae4bbbd241 bump version number for 0.15.3 2020-07-28 02:05:26 -07:00
Greg Becker
24bd9e3039 bugfix: allow relative view paths (#17721)
Relative paths in views have been broken since #17608 or earlier.

- [x] Fix by passing base path of the environment into the `ViewDescriptor`.
      Relative paths are calculated from this path.
2020-07-27 23:48:59 -07:00
Todd Gamblin
0efb8ef412 tutorial: Add boto3 installation to setup script 2020-07-27 15:55:44 -07:00
Patrick Gartung
69775fcc07 Relocation of sbang needs to be done when the spack prefix changes even if the install tree has not changed. (#17455) 2020-07-27 11:38:48 -07:00
Patrick Gartung
ce772420dd Relocate rpaths for all binaries, then do text bin replacement if the rpaths still exist after running patchelf/otool (#17418) 2020-07-27 11:28:50 -07:00
Greg Becker
9cc01dc574 add tutorial setup script to share/spack (#17705)
* add tutorial setup script to share/spack

* Add check for Ubuntu 18, fix xvda check, fix apt-get errors
  - now works on t2.micro, t2.small, and m instances
  - apt-get needs retries around it to work

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2020-07-27 01:18:16 -07:00
Todd Gamblin
8d8cf6201b bugfix: don't redundantly print ChildErrors (#17709)
A bug was introduced in #13100 where ChildErrors would be redundantly
printed when raised during a build. We should eventually revisit error
handling in builds and figure out what the right separation of
responsibilities is for distributed builds, but for now just skip
printing.

- [x] SpackErrors were designed to be printed by the forked process, not
      by the parent, so check if they've already been printed.
- [x] update tests
2020-07-26 22:43:10 -07:00
41 changed files with 1802 additions and 334 deletions


@@ -3,13 +3,12 @@ name: linux builds
on:
push:
branches:
- master
- develop
- releases/**
pull_request:
branches:
- master
- develop
- releases/**
paths-ignore:
# Don't run if we only modified packages in the built-in repository
- 'var/spack/repos/builtin/**'
@@ -24,23 +23,19 @@ on:
jobs:
build:
runs-on: ubuntu-latest
strategy:
max-parallel: 4
matrix:
package: [lz4, mpich, tut, py-setuptools, openjpeg, r-rcpp]
steps:
- uses: actions/checkout@v2
- name: Cache ccache's store
uses: actions/cache@v1
- uses: actions/cache@v2
with:
path: ~/.ccache
key: ccache-build-${{ matrix.package }}
restore-keys: |
ccache-build-${{ matrix.package }}
- name: Setup Python
uses: actions/setup-python@v1
- uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install System Packages


@@ -3,13 +3,12 @@ name: linux tests
on:
push:
branches:
- master
- develop
- releases/**
pull_request:
branches:
- master
- develop
- releases/**
jobs:
unittests:
runs-on: ubuntu-latest
@@ -19,8 +18,9 @@ jobs:
steps:
- uses: actions/checkout@v2
- name: Setup Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
fetch-depth: 0
- uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install System packages
@@ -36,9 +36,7 @@ jobs:
run: |
# Need this for the git tests to succeed.
git --version
git config --global user.email "spack@example.com"
git config --global user.name "Test User"
git fetch -u origin develop:develop
. .github/workflows/setup_git.sh
- name: Install kcov for bash script coverage
env:
KCOV_VERSION: 34
@@ -56,7 +54,61 @@ jobs:
share/spack/qa/run-unit-tests
coverage combine
coverage xml
- name: Upload to codecov.io
uses: codecov/codecov-action@v1
- uses: codecov/codecov-action@v1
with:
flags: unittests,linux
shell:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install System packages
run: |
sudo apt-get -y update
sudo apt-get install -y coreutils gfortran gnupg2 mercurial ninja-build patchelf zsh fish
# Needed for kcov
sudo apt-get -y install cmake binutils-dev libcurl4-openssl-dev zlib1g-dev libdw-dev libiberty-dev
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools codecov coverage
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
git --version
. .github/workflows/setup_git.sh
- name: Install kcov for bash script coverage
env:
KCOV_VERSION: 38
run: |
KCOV_ROOT=$(mktemp -d)
wget --output-document=${KCOV_ROOT}/${KCOV_VERSION}.tar.gz https://github.com/SimonKagstrom/kcov/archive/v${KCOV_VERSION}.tar.gz
tar -C ${KCOV_ROOT} -xzvf ${KCOV_ROOT}/${KCOV_VERSION}.tar.gz
mkdir -p ${KCOV_ROOT}/build
cd ${KCOV_ROOT}/build && cmake -Wno-dev ${KCOV_ROOT}/kcov-${KCOV_VERSION} && cd -
make -C ${KCOV_ROOT}/build && sudo make -C ${KCOV_ROOT}/build install
- name: Run shell tests
env:
COVERAGE: true
run: |
share/spack/qa/run-shell-tests
- uses: codecov/codecov-action@v1
with:
flags: shelltests,linux
centos6:
# Test for Python2.6 run on Centos 6
runs-on: ubuntu-latest
container: spack/github-actions:centos6
steps:
- name: Run unit tests
env:
HOME: /home/spack-test
run: |
whoami && echo $HOME && cd $HOME
git clone https://github.com/spack/spack.git && cd spack
git fetch origin ${{ github.ref }}:test-branch
git checkout test-branch
share/spack/qa/run-unit-tests


@@ -3,27 +3,22 @@ name: macos tests
on:
push:
branches:
- master
- develop
- releases/**
pull_request:
branches:
- master
- develop
- releases/**
jobs:
build:
runs-on: macos-latest
strategy:
matrix:
python-version: [3.7]
steps:
- uses: actions/checkout@v2
- name: Setup Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
fetch-depth: 0
- uses: actions/setup-python@v2
with:
python-version: 3.7
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools
@@ -37,13 +32,12 @@ jobs:
- name: Run unit tests
run: |
git --version
git fetch -u origin develop:develop
. .github/workflows/setup_git.sh
. share/spack/setup-env.sh
coverage run $(which spack) test
coverage combine
coverage xml
- name: Upload to codecov.io
uses: codecov/codecov-action@v1
- uses: codecov/codecov-action@v1
with:
file: ./coverage.xml
flags: unittests,macos


@@ -1,31 +0,0 @@
name: python version check
on:
push:
branches:
- master
- develop
- releases/**
pull_request:
branches:
- master
- develop
jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v1
with:
python-version: 3.7
- name: Install Python Packages
run: |
pip install --upgrade pip
pip install --upgrade vermin
- name: Minimum Version (Spack's Core)
run: vermin --backport argparse -t=2.6- -t=3.5- -v lib/spack/spack/ lib/spack/llnl/ bin/
- name: Minimum Version (Repositories)
run: vermin --backport argparse -t=2.6- -t=3.5- -v var/spack/repos

.github/workflows/setup_git.sh (new executable file, +9 lines)

@@ -0,0 +1,9 @@
#!/usr/bin/env sh
git config --global user.email "spack@example.com"
git config --global user.name "Test User"
# With fetch-depth: 0 we have a remote develop
# but not a local branch. Don't do this on develop
if [ "$(git branch --show-current)" != "develop" ]
then
git branch develop origin/develop
fi

.github/workflows/style_and_docs.yaml (new file, +65 lines)

@@ -0,0 +1,65 @@
name: style and docs
on:
push:
branches:
- develop
- releases/**
pull_request:
branches:
- develop
- releases/**
jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: 3.7
- name: Install Python Packages
run: |
pip install --upgrade pip
pip install --upgrade vermin
- name: Minimum Version (Spack's Core)
run: vermin --backport argparse -t=2.6- -t=3.5- -v lib/spack/spack/ lib/spack/llnl/ bin/
- name: Minimum Version (Repositories)
run: vermin --backport argparse -t=2.6- -t=3.5- -v var/spack/repos
flake8:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools flake8
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
git --version
. .github/workflows/setup_git.sh
- name: Run flake8 tests
run: |
share/spack/qa/run-flake8-tests
documentation:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: 3.8
- name: Install System packages
run: |
sudo apt-get -y update
sudo apt-get install -y coreutils ninja-build graphviz
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools
pip install --upgrade -r lib/spack/docs/requirements.txt
- name: Build documentation
run: |
share/spack/qa/run-doc-tests


@@ -1,152 +0,0 @@
#=============================================================================
# Project settings
#=============================================================================
# Only build master and develop on push; do not build every branch.
branches:
only:
- master
- develop
- /^releases\/.*$/
#=============================================================================
# Build matrix
#=============================================================================
dist: bionic
jobs:
fast_finish: true
include:
- stage: 'style checks'
python: '3.8'
os: linux
language: python
env: TEST_SUITE=flake8
- stage: 'unit tests + documentation'
python: '2.6'
dist: trusty
os: linux
language: python
addons:
apt:
# Everything but patchelf, that is not available for trusty
packages:
- ccache
- gfortran
- graphviz
- gnupg2
- kcov
- mercurial
- ninja-build
- realpath
- zsh
- fish
env: [ TEST_SUITE=unit, COVERAGE=true ]
- python: '3.8'
os: linux
language: python
env: [ TEST_SUITE=shell, COVERAGE=true, KCOV_VERSION=38 ]
- python: '3.8'
os: linux
language: python
env: TEST_SUITE=doc
stages:
- 'style checks'
- 'unit tests + documentation'
#=============================================================================
# Environment
#=============================================================================
# Docs need graphviz to build
addons:
# for Linux builds, we use APT
apt:
packages:
- ccache
- coreutils
- gfortran
- graphviz
- gnupg2
- mercurial
- ninja-build
- patchelf
- zsh
- fish
update: true
# ~/.ccache needs to be cached directly as Travis is not taking care of it
# (possibly because we use 'language: python' and not 'language: c')
cache:
pip: true
ccache: true
directories:
- ~/.ccache
before_install:
- ccache -M 2G && ccache -z
# Install kcov manually, since it's not packaged for bionic beaver
- if [[ "$KCOV_VERSION" ]]; then
sudo apt-get -y install cmake binutils-dev libcurl4-openssl-dev zlib1g-dev libdw-dev libiberty-dev;
KCOV_ROOT=$(mktemp -d);
wget --output-document=${KCOV_ROOT}/${KCOV_VERSION}.tar.gz https://github.com/SimonKagstrom/kcov/archive/v${KCOV_VERSION}.tar.gz;
tar -C ${KCOV_ROOT} -xzvf ${KCOV_ROOT}/${KCOV_VERSION}.tar.gz;
mkdir -p ${KCOV_ROOT}/build;
cd ${KCOV_ROOT}/build && cmake -Wno-dev ${KCOV_ROOT}/kcov-${KCOV_VERSION} && cd - ;
make -C ${KCOV_ROOT}/build && sudo make -C ${KCOV_ROOT}/build install;
fi
# Install various dependencies
install:
- pip install --upgrade pip
- pip install --upgrade six
- pip install --upgrade setuptools
- pip install --upgrade codecov coverage==4.5.4
- pip install --upgrade flake8
- pip install --upgrade pep8-naming
- if [[ "$TEST_SUITE" == "doc" ]]; then
pip install --upgrade -r lib/spack/docs/requirements.txt;
fi
before_script:
# Need this for the git tests to succeed.
- git config --global user.email "spack@example.com"
- git config --global user.name "Test User"
# Need this to be able to compute the list of changed files
- git fetch origin ${TRAVIS_BRANCH}:${TRAVIS_BRANCH}
#=============================================================================
# Building
#=============================================================================
script:
- share/spack/qa/run-$TEST_SUITE-tests
after_success:
- ccache -s
- case "$TEST_SUITE" in
unit)
if [[ "$COVERAGE" == "true" ]]; then
codecov --env PYTHON_VERSION
--required
--flags "${TEST_SUITE}${TRAVIS_OS_NAME}";
fi
;;
shell)
codecov --env PYTHON_VERSION
--required
--flags "${TEST_SUITE}${TRAVIS_OS_NAME}";
esac
#=============================================================================
# Notifications
#=============================================================================
notifications:
email:
recipients:
- tgamblin@llnl.gov
- massimiliano.culpo@gmail.com
on_success: change
on_failure: always


@@ -1,3 +1,32 @@
# v0.15.4 (2020-08-12)
This release contains one feature addition:
* Users can set `SPACK_GNUPGHOME` to override Spack's GPG path (#17139)
Several bugfixes for CUDA, binary packaging, and `spack -V`:
* CUDA package's `.libs` method searches for `libcudart` instead of `libcuda` (#18000)
* Don't set `CUDAHOSTCXX` in environments that contain CUDA (#17826)
* `buildcache create`: `NoOverwriteException` is a warning, not an error (#17832)
* Fix `spack buildcache list --allarch` (#17884)
* `spack -V` works with `releases/latest` tag and shallow clones (#17884)
And fixes for GitHub Actions and tests to ensure that CI passes on the
release branch (#15687, #17279, #17328, #17377, #17732).
# v0.15.3 (2020-07-28)
This release contains the following bugfixes:
* Fix handling of relative view paths (#17721)
* Fixes for binary relocation (#17418, #17455)
* Fix redundant printing of error messages in build environment (#17709)
It also adds a support script for Spack tutorials:
* Add a tutorial setup script to share/spack (#17705, #17722)
# v0.15.2 (2020-07-23)
This minor release includes two new features:


@@ -4,7 +4,6 @@
[![Linux Tests](https://github.com/spack/spack/workflows/linux%20tests/badge.svg)](https://github.com/spack/spack/actions)
[![Linux Builds](https://github.com/spack/spack/workflows/linux%20builds/badge.svg)](https://github.com/spack/spack/actions)
[![macOS Builds (nightly)](https://github.com/spack/spack/workflows/macOS%20builds%20nightly/badge.svg?branch=develop)](https://github.com/spack/spack/actions?query=workflow%3A%22macOS+builds+nightly%22)
[![Build Status](https://travis-ci.com/spack/spack.svg?branch=develop)](https://travis-ci.com/spack/spack)
[![codecov](https://codecov.io/gh/spack/spack/branch/develop/graph/badge.svg)](https://codecov.io/gh/spack/spack)
[![Read the Docs](https://readthedocs.org/projects/spack/badge/?version=latest)](https://spack.readthedocs.io)
[![Slack](https://spackpm.herokuapp.com/badge.svg)](https://spackpm.herokuapp.com)
@@ -74,15 +73,31 @@ When you send your request, make ``develop`` the destination branch on the
Your PR must pass Spack's unit tests and documentation tests, and must be
[PEP 8](https://www.python.org/dev/peps/pep-0008/) compliant. We enforce
these guidelines with [Travis CI](https://travis-ci.org/spack/spack). To
run these tests locally, and for helpful tips on git, see our
these guidelines with our CI process. To run these tests locally, and for
helpful tips on git, see our
[Contribution Guide](http://spack.readthedocs.io/en/latest/contribution_guide.html).
Spack uses a rough approximation of the
[Git Flow](http://nvie.com/posts/a-successful-git-branching-model/)
branching model. The ``develop`` branch contains the latest
contributions, and ``master`` is always tagged and points to the latest
stable release.
Spack's `develop` branch has the latest contributions. Pull requests
should target `develop`, and users who want the latest package versions,
features, etc. can use `develop`.
Releases
--------
For multi-user site deployments or other use cases that need very stable
software installations, we recommend using Spack's
[stable releases](https://github.com/spack/spack/releases).
Each Spack release series also has a corresponding branch, e.g.
`releases/v0.14` has `0.14.x` versions of Spack, and `releases/v0.13` has
`0.13.x` versions. We backport important bug fixes to these branches but
we do not advance the package versions or make other changes that would
change the way Spack concretizes dependencies within a release branch.
So, you can base your Spack deployment on a release branch and `git pull`
to get fixes, without the package churn that comes with `develop`.
See the [docs on releases](https://spack.readthedocs.io/en/latest/developer_guide.html#releases)
for more details.
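A minimal sketch of that deployment workflow (release branch name illustrative):

    $ git clone -b releases/v0.14 https://github.com/spack/spack.git
    $ cd spack
    $ git pull   # later: pick up backported fixes without package churn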
Code of Conduct
------------------------


@@ -27,17 +27,28 @@ correspond to one feature/bugfix/extension/etc. One can create PRs with
changes relevant to different ideas, however reviewing such PRs becomes tedious
and error prone. If possible, try to follow the **one-PR-one-package/feature** rule.
Spack uses a rough approximation of the `Git Flow <http://nvie.com/posts/a-successful-git-branching-model/>`_
branching model. The develop branch contains the latest contributions, and
master is always tagged and points to the latest stable release. Therefore, when
you send your request, make ``develop`` the destination branch on the
`Spack repository <https://github.com/spack/spack>`_.
--------
Branches
--------
Spack's ``develop`` branch has the latest contributions. Nearly all pull
requests should start from ``develop`` and target ``develop``.
There is a branch for each major release series. Release branches
originate from ``develop`` and have tags for each point release in the
series. For example, ``releases/v0.14`` has tags for ``0.14.0``,
``0.14.1``, ``0.14.2``, etc. versions of Spack. We backport important bug
fixes to these branches, but we do not advance the package versions or
make other changes that would change the way Spack concretizes
dependencies. Currently, the maintainers manage these branches by
cherry-picking from ``develop``. See :ref:`releases` for more
information.
----------------------
Continuous Integration
----------------------
Spack uses `Travis CI <https://travis-ci.org/spack/spack>`_ for Continuous Integration
Spack uses `Github Actions <https://docs.github.com/en/actions>`_ for Continuous Integration
testing. This means that every time you submit a pull request, a series of tests will
be run to make sure you didn't accidentally introduce any bugs into Spack. **Your PR
will not be accepted until it passes all of these tests.** While you can certainly wait
@@ -46,22 +57,21 @@ locally to speed up the review process.
.. note::
Oftentimes, Travis will fail for reasons other than a problem with your PR.
Oftentimes, CI will fail for reasons other than a problem with your PR.
For example, apt-get, pip, or homebrew will fail to download one of the
dependencies for the test suite, or a transient bug will cause the unit tests
to timeout. If Travis fails, click the "Details" link and click on the test(s)
to timeout. If any job fails, click the "Details" link and click on the test(s)
that is failing. If it doesn't look like it is failing for reasons related to
your PR, you have two options. If you have write permissions for the Spack
repository, you should see a "Restart job" button on the right-hand side. If
repository, you should see a "Restart workflow" button on the right-hand side. If
not, you can close and reopen your PR to rerun all of the tests. If the same
test keeps failing, there may be a problem with your PR. If you notice that
every recent PR is failing with the same error message, it may be that Travis
is down or one of Spack's dependencies put out a new release that is causing
problems. If this is the case, please file an issue.
every recent PR is failing with the same error message, it may be that an issue
occurred with the CI infrastructure or one of Spack's dependencies put out a
new release that is causing problems. If this is the case, please file an issue.
If you take a look in ``$SPACK_ROOT/.travis.yml``, you'll notice that we test
against Python 2.6, 2.7, and 3.4-3.7 on both macOS and Linux. We currently
We currently test against Python 2.6, 2.7, and 3.5-3.7 on both macOS and Linux and
perform 3 types of tests:
.. _cmd-spack-test:
@@ -95,7 +105,7 @@ time. For example, this would run all the tests in
.. code-block:: console
$ spack test architecture.py
$ spack test lib/spack/spack/test/architecture.py
And this would run the ``test_platform`` test from that file:
@@ -105,7 +115,7 @@ And this would run the ``test_platform`` test from that file:
This allows you to develop iteratively: make a change, test that change,
make another change, test that change, etc. We use `pytest
<http://pytest.org/>`_ as our tests fromework, and these types of
<http://pytest.org/>`_ as our tests framework, and these types of
arguments are just passed to the ``pytest`` command underneath. See `the
pytest docs
<http://doc.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests>`_
@@ -133,7 +143,7 @@ You can combine these with ``pytest`` arguments to restrict which tests
you want to know about. For example, to see just the tests in
``architecture.py``:
.. command-output:: spack test --list-long architecture.py
.. command-output:: spack test --list-long lib/spack/spack/test/architecture.py
You can also combine any of these options with a ``pytest`` keyword
search. For example, to see the names of all tests that have "spec"
@@ -149,7 +159,7 @@ argument to ``pytest``:
.. code-block:: console
$ spack test -s architecture.py::test_platform
$ spack test -s lib/spack/spack/test/architecture.py::test_platform
Unit tests are crucial to making sure bugs aren't introduced into
Spack. If you are modifying core Spack libraries or adding new
@@ -162,7 +172,7 @@ how to write tests!
.. note::
You may notice the ``share/spack/qa/run-unit-tests`` script in the
repository. This script is designed for Travis CI. It runs the unit
repository. This script is designed for CI. It runs the unit
tests and reports coverage statistics back to Codecov. If you want to
run the unit tests yourself, we suggest you use ``spack test``.
@@ -235,7 +245,7 @@ to update them.
Try fixing flake8 errors in reverse order. This eliminates the need for
multiple runs of ``spack flake8`` just to re-compute line numbers and
makes it much easier to fix errors directly off of the Travis output.
makes it much easier to fix errors directly off of the CI output.
.. warning::
@@ -327,7 +337,7 @@ your PR is accepted.
There is also a ``run-doc-tests`` script in ``share/spack/qa``. The only
difference between running this script and running ``make`` by hand is that
the script will exit immediately if it encounters an error or warning. This
is necessary for Travis CI. If you made a lot of documentation changes, it is
is necessary for CI. If you made a lot of documentation changes, it is
much quicker to run ``make`` by hand so that you can see all of the warnings
at once.
@@ -391,7 +401,7 @@ and allow you to see coverage line-by-line when viewing the Spack repository.
If you are new to Spack, a great way to get started is to write unit tests to
increase coverage!
Unlike with Travis, Codecov tests are not required to pass in order for your
Unlike with CI on GitHub Actions, Codecov tests are not required to pass in order for your
PR to be merged. If you modify core Spack libraries, we would greatly
appreciate unit tests that cover these changed lines. Otherwise, we have no
way of knowing whether or not your changes introduce a bug. If you make


@@ -495,3 +495,370 @@ The bottom of the output shows the top most time consuming functions,
slowest on top. The profiling support is from Python's built-in tool,
`cProfile
<https://docs.python.org/2/library/profile.html#module-cProfile>`_.
.. _releases:
--------
Releases
--------
This section documents Spack's release process. It is intended for
project maintainers, as the tasks described here require maintainer
privileges on the Spack repository. For others, we hope this section at
least provides some insight into how the Spack project works.
.. _release-branches:
^^^^^^^^^^^^^^^^
Release branches
^^^^^^^^^^^^^^^^
There are currently two types of Spack releases: :ref:`major releases
<major-releases>` (``0.13.0``, ``0.14.0``, etc.) and :ref:`point releases
<point-releases>` (``0.13.1``, ``0.13.2``, ``0.13.3``, etc.). Here is a
diagram of how Spack release branches work::
o branch: develop (latest version)
|
o merge v0.14.1 into develop
|\
| o branch: releases/v0.14, tag: v0.14.1
o | merge v0.14.0 into develop
|\|
| o tag: v0.14.0
|/
o merge v0.13.2 into develop
|\
| o branch: releases/v0.13, tag: v0.13.2
o | merge v0.13.1 into develop
|\|
| o tag: v0.13.1
o | merge v0.13.0 into develop
|\|
| o tag: v0.13.0
o |
| o
|/
o
The ``develop`` branch has the latest contributions, and nearly all pull
requests target ``develop``.
Each Spack release series also has a corresponding branch, e.g.
``releases/v0.14`` has ``0.14.x`` versions of Spack, and
``releases/v0.13`` has ``0.13.x`` versions. A major release is the first
tagged version on a release branch. Point releases are back-ported from
develop onto release branches. This is typically done by cherry-picking
bugfix commits off of ``develop``.
To avoid version churn for users of a release series, minor releases
should **not** make changes that would change the concretization of
packages. They should generally only contain fixes to the Spack core.
Both major and point releases are tagged. After each release, we merge
the release branch back into ``develop`` so that the version bump and any
other release-specific changes are visible in the mainline (see
:ref:`merging-releases-to-develop`).
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Scheduling work for releases
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We schedule work for releases by creating `GitHub projects
<https://github.com/spack/spack/projects>`_. At any time, there may be
several open release projects. For example, here are two releases (from
some past version of the page linked above):
.. image:: images/projects.png
Here, there's one release in progress for ``0.15.1`` and another for
``0.16.0``. Each of these releases has a project board containing issues
and pull requests. GitHub shows a status bar with completed work in
green, work in progress in purple, and work not started yet in gray, so
it's fairly easy to see progress.
Spack's project boards are not firm commitments, and we move work between
releases frequently. If we need to make a release and some tasks are not
yet done, we will simply move them to the next point or major release, rather
than delaying the release to complete them.
For more on using GitHub project boards, see `GitHub's documentation
<https://docs.github.com/en/github/managing-your-work-on-github/about-project-boards>`_.
.. _major-releases:
^^^^^^^^^^^^^^^^^^^^^
Making Major Releases
^^^^^^^^^^^^^^^^^^^^^
Assuming you've already created a project board and completed the work
for a major release, the steps to make the release are as follows:
#. Create two new project boards:
* One for the next major release
* One for the next point release
#. Move any tasks that aren't done yet to one of the new project boards.
Small bugfixes should go to the next point release. Major features,
refactors, and changes that could affect concretization should go in
the next major release.
#. Create a branch for the release, based on ``develop``:
.. code-block:: console
$ git checkout -b releases/v0.15 develop
For a version ``vX.Y.Z``, the branch's name should be
``releases/vX.Y``. That is, you should create a ``releases/vX.Y``
branch if you are preparing the ``X.Y.0`` release.
#. Bump the version in ``lib/spack/spack/__init__.py``. See `this example from 0.13.0
<https://github.com/spack/spack/commit/8eeb64096c98b8a43d1c587f13ece743c864fba9>`_
#. Update the release version lists in these files to include the new version:
* ``lib/spack/spack/schema/container.py``
* ``lib/spack/spack/container/images.json``
**TODO**: We should get rid of this step in some future release.
#. Update ``CHANGELOG.md`` with major highlights in bullet form. Use
proper markdown formatting, like `this example from 0.15.0
<https://github.com/spack/spack/commit/d4bf70d9882fcfe88507e9cb444331d7dd7ba71c>`_.
#. Push the release branch to GitHub (see the sketch after this list).
#. Make sure CI passes on the release branch, including:
* Regular unit tests
* Build tests
* The E4S pipeline at `gitlab.spack.io <https://gitlab.spack.io>`_
If CI is not passing, submit pull requests to ``develop`` as normal
and keep rebasing the release branch on ``develop`` until CI passes.
#. Follow the steps in :ref:`publishing-releases`.
#. Follow the steps in :ref:`merging-releases-to-develop`.
#. Follow the steps in :ref:`announcing-releases`.
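For the push step above, a minimal sketch (version number illustrative):

.. code-block:: console

   $ git push --set-upstream origin releases/v0.15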
.. _point-releases:
^^^^^^^^^^^^^^^^^^^^^
Making Point Releases
^^^^^^^^^^^^^^^^^^^^^
This assumes you've already created a project board for a point release
and completed the work to be done for the release. To make a point
release:
#. Create one new project board for the next point release.
#. Move any cards that aren't done yet to the next project board.
#. Check out the release branch (it should already exist). For the
``X.Y.Z`` release, the release branch is called ``releases/vX.Y``. For
``v0.15.1``, you would check out ``releases/v0.15``:
.. code-block:: console
$ git checkout releases/v0.15
#. Cherry-pick each pull request in the ``Done`` column of the release
project onto the release branch.
This is **usually** fairly simple since we squash the commits from the
vast majority of pull requests, which means there is only one commit
per pull request to cherry-pick. For example, `this pull request
<https://github.com/spack/spack/pull/15777>`_ has three commits, but
they were squashed into a single commit on merge. You can see the
commit that was created here:
.. image:: images/pr-commit.png
You can easily cherry-pick it like this (assuming you already have the
release branch checked out):
.. code-block:: console
$ git cherry-pick 7e46da7
For pull requests that were rebased, you'll need to cherry-pick each
rebased commit individually. There have not been any rebased PRs like
this in recent point releases.
.. warning::
It is important to cherry-pick commits in the order they happened,
otherwise you can get conflicts while cherry-picking (see the sketch
after this list). When cherry-picking onto a point release, look at the
merge date, **not** the number of the pull request or the date it was
opened.
Sometimes you may **still** get merge conflicts even if you have
cherry-picked all the commits in order. This generally means there
is some other intervening pull request that the one you're trying
to pick depends on. In these cases, you'll need to make a judgment
call:
1. If the dependency is small, you might just cherry-pick it, too.
If you do this, add it to the release board.
2. If it is large, then you may decide that this fix is not worth
including in a point release, in which case you should remove it
from the release project.
3. You can always decide to manually back-port the fix to the release
branch if neither of the above options makes sense, but this can
require a lot of work. It's seldom the right choice.
#. Bump the version in ``lib/spack/spack/__init__.py``. See `this example from 0.14.1
<https://github.com/spack/spack/commit/ff0abb9838121522321df2a054d18e54b566b44a>`_.
#. Update the release version lists in these files to include the new version:
* ``lib/spack/spack/schema/container.py``
* ``lib/spack/spack/container/images.json``
**TODO**: We should get rid of this step in some future release.
#. Update ``CHANGELOG.md`` with a list of bugfixes. This is typically just a
summary of the commits you cherry-picked onto the release branch. See
`the changelog from 0.14.1
<https://github.com/spack/spack/commit/ff0abb9838121522321df2a054d18e54b566b44a>`_.
#. Push the release branch to GitHub.
#. Make sure CI passes on the release branch, including:
* Regular unit tests
* Build tests
* The E4S pipeline at `gitlab.spack.io <https://gitlab.spack.io>`_
If CI does not pass, you'll need to figure out why, and make changes
to the release branch until it does. You can make more commits, modify
or remove cherry-picked commits, or cherry-pick **more** from
``develop`` to make this happen.
#. Follow the steps in :ref:`publishing-releases`.
#. Follow the steps in :ref:`merging-releases-to-develop`.
#. Follow the steps in :ref:`announcing-releases`.
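As noted in the warning above, a minimal sketch of cherry-picking in merge
order (the first hash is the squashed commit shown earlier; the second is
hypothetical):

.. code-block:: console

   $ git checkout releases/v0.15
   $ git cherry-pick 7e46da7   # earliest-merged PR first
   $ git cherry-pick 1a2b3c4   # hypothetical: the next PR, by merge date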
.. _publishing-releases:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Publishing a release on GitHub
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#. Go to `github.com/spack/spack/releases
<https://github.com/spack/spack/releases>`_ and click ``Draft a new
release``. Set the following:
* ``Tag version`` should start with ``v`` and contain *all three*
parts of the version, e.g. ``v0.15.1``. This is the name of the tag
that will be created.
* ``Target`` should be the ``releases/vX.Y`` branch (e.g., ``releases/v0.15``).
* ``Release title`` should be ``vX.Y.Z`` (to match the tag, e.g., ``v0.15.1``).
* For the text, paste the latest release markdown from your ``CHANGELOG.md``.
You can save the draft and keep coming back to this as you prepare the release.
#. When you are done, click ``Publish release``.
#. Immediately after publishing, go back to
`github.com/spack/spack/releases
<https://github.com/spack/spack/releases>`_ and download the
auto-generated ``.tar.gz`` file for the release. It's the ``Source
code (tar.gz)`` link.
#. Click ``Edit`` on the release you just did and attach the downloaded
release tarball as a binary (see the sketch after this list). This does
two things:
#. Makes sure that the hash of our releases doesn't change over time.
GitHub sometimes annoyingly changes the way they generate
tarballs, and then hashes can change if you rely on the
auto-generated tarball links.
#. Gets us download counts on releases visible through the GitHub
API. GitHub tracks downloads of artifacts, but *not* the source
links. See the `releases
page <https://api.github.com/repos/spack/spack/releases>`_ and search
for ``download_count`` to see this.
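A minimal sketch of recording the tarball's hash before attaching it
(filename illustrative):

.. code-block:: console

   $ sha256sum spack-0.15.4.tar.gz   # record this; attaching the tarball keeps it stable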
.. _merging-releases-to-develop:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Merging back into ``develop``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Once each release is complete, make sure that it is merged back into
``develop`` with a merge commit:
.. code-block:: console
$ git checkout develop
$ git merge --no-ff releases/v0.15
$ git push
We merge back to ``develop`` because it:
* updates the version and ``CHANGELOG.md`` on ``develop``.
* ensures that your release tag is reachable from the head of
``develop``
We *must* use a real merge commit (via the ``--no-ff`` option) because it
ensures that the release tag is reachable from the tip of ``develop``.
This is necessary for ``spack -V`` to work properly -- it uses ``git
describe --tags`` to find the last reachable tag in the repository and
reports how far we are from it. For example:
.. code-block:: console
$ spack -V
0.14.2-1486-b80d5e74e5
This says that we are at commit ``b80d5e74e5``, which is 1,486 commits
ahead of the ``0.14.2`` release.
We put this step last in the process because it's best to do it only once
the release is complete and tagged. If you do it before you've tagged the
release and later decide you want to tag some later commit, you'll need
to merge again.
.. _announcing-releases:
^^^^^^^^^^^^^^^^^^^^
Announcing a release
^^^^^^^^^^^^^^^^^^^^
We announce releases in all of the major Spack communication channels.
Publishing the release takes care of GitHub. The remaining channels are
Twitter, Slack, and the mailing list. Here are the steps:
#. Make a tweet to announce the release. It should link to the release's
page on GitHub. You can base it on `this example tweet
<https://twitter.com/spackpm/status/1231761858182307840>`_.
#. Ping ``@channel`` in ``#general`` on Slack (`spackpm.slack.com
<https://spackpm.slack.com>`_) with a link to the tweet. The tweet
will be shown inline so that you do not have to retype your release
announcement.
#. Email the Spack mailing list to let them know about the release. As
with the tweet, you likely want to link to the release's page on
GitHub. It's also helpful to include some information directly in the
email. You can base yours on this `example email
<https://groups.google.com/forum/#!topic/spack/WT4CT9i_X4s>`_.
Once you've announced the release, congratulations, you're done! You've
finished making the release!


@@ -818,7 +818,7 @@ Git
Some Spack packages use ``git`` to download, which might not work on
some computers. For example, the following error was
encountered on a Macintosh during ``spack install julia-master``:
encountered on a Macintosh during ``spack install julia@master``:
.. code-block:: console

Two binary image files added (44 KiB and 68 KiB; previews not shown): the
release-documentation screenshots ``images/projects.png`` and
``images/pr-commit.png`` referenced above.


@@ -5,7 +5,7 @@
#: major, minor, patch version for Spack, in a tuple
spack_version_info = (0, 15, 2)
spack_version_info = (0, 15, 4)
#: String containing Spack version joined with .'s
spack_version = '.'.join(str(v) for v in spack_version_info)


@@ -436,6 +436,12 @@ def to_dict(self):
('target', self.target.to_dict_or_value())])
return syaml_dict([('arch', d)])
def to_spec(self):
"""Convert this Arch to an anonymous Spec with architecture defined."""
spec = spack.spec.Spec()
spec.architecture = spack.spec.ArchSpec(str(self))
return spec
@staticmethod
def from_dict(d):
spec = spack.spec.ArchSpec.from_dict(d)
@@ -518,6 +524,14 @@ def platform():
@memoized
def default_arch():
"""Default ``Arch`` object for this machine.
See ``sys_type()``.
"""
return Arch(platform(), 'default_os', 'default_target')
def sys_type():
"""Print out the "default" platform-os-target tuple for this machine.
@@ -530,8 +544,7 @@ def sys_type():
architectures.
"""
arch = Arch(platform(), 'default_os', 'default_target')
return str(arch)
return str(default_arch())
@memoized


@@ -36,7 +36,6 @@
from spack.spec import Spec
from spack.stage import Stage
from spack.util.gpg import Gpg
import spack.architecture as architecture
_build_cache_relative_path = 'build_cache'
@@ -498,6 +497,7 @@ def download_tarball(spec):
# stage the tarball into standard place
stage = Stage(url, name="build_cache", keep=True)
stage.create()
try:
stage.fetch()
return stage.save_filename
@@ -602,15 +602,11 @@ def is_backup_file(file):
if not is_backup_file(text_name):
text_names.append(text_name)
# If we are installing back to the same location don't replace anything
# If we are not installing back to the same install tree do the relocation
if old_layout_root != new_layout_root:
paths_to_relocate = [old_spack_prefix, old_layout_root]
paths_to_relocate.extend(prefix_to_hash.keys())
files_to_relocate = list(filter(
lambda pathname: not relocate.file_is_relocatable(
pathname, paths_to_relocate=paths_to_relocate),
map(lambda filename: os.path.join(workdir, filename),
buildinfo['relocate_binaries'])))
files_to_relocate = [os.path.join(workdir, filename)
for filename in buildinfo.get('relocate_binaries')
]
# If the buildcache was not created with relativized rpaths
# do the relocation of path in binaries
if (spec.architecture.platform == 'darwin' or
@@ -646,6 +642,13 @@ def is_backup_file(file):
new_spack_prefix,
prefix_to_prefix)
paths_to_relocate = [old_prefix, old_layout_root]
paths_to_relocate.extend(prefix_to_hash.keys())
files_to_relocate = list(filter(
lambda pathname: not relocate.file_is_relocatable(
pathname, paths_to_relocate=paths_to_relocate),
map(lambda filename: os.path.join(workdir, filename),
buildinfo['relocate_binaries'])))
# relocate the install prefixes in binary files including dependencies
relocate.relocate_text_bin(files_to_relocate,
old_prefix, new_prefix,
@@ -653,6 +656,17 @@ def is_backup_file(file):
new_spack_prefix,
prefix_to_prefix)
# If we are installing back to the same location
# relocate the sbang location if the spack directory changed
else:
if old_spack_prefix != new_spack_prefix:
relocate.relocate_text(text_names,
old_layout_root, new_layout_root,
old_prefix, new_prefix,
old_spack_prefix,
new_spack_prefix,
prefix_to_prefix)
def extract_tarball(spec, filename, allow_root=False, unsigned=False,
force=False):
@@ -841,13 +855,11 @@ def get_spec(spec=None, force=False):
return try_download_specs(urls=urls, force=force)
def get_specs(allarch=False):
def get_specs():
"""
Get spec.yaml's for build caches available on mirror
"""
global _cached_specs
arch = architecture.Arch(architecture.platform(),
'default_os', 'default_target')
if not spack.mirror.MirrorCollection():
tty.debug("No Spack mirrors are currently configured")
@@ -867,8 +879,7 @@ def get_specs(allarch=False):
index_url, 'application/json')
index_object = codecs.getreader('utf-8')(file_stream).read()
except (URLError, web_util.SpackWebError) as url_err:
tty.error('Failed to read index {0}'.format(index_url))
tty.debug(url_err)
tty.debug('Failed to read index {0}'.format(index_url), url_err, 1)
# Continue on to the next mirror
continue
@@ -885,9 +896,7 @@ def get_specs(allarch=False):
spec_list = db.query_local(installed=False)
for indexed_spec in spec_list:
spec_arch = architecture.arch_for_spec(indexed_spec.architecture)
if (allarch is True or spec_arch == arch):
_cached_specs.add(indexed_spec)
_cached_specs.add(indexed_spec)
return _cached_specs


@@ -493,7 +493,7 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file,
after_script = None
if custom_spack_repo:
if not custom_spack_ref:
custom_spack_ref = 'master'
custom_spack_ref = 'develop'
before_script = [
('git clone "{0}"'.format(custom_spack_repo)),
'pushd ./spack && git checkout "{0}" && popd'.format(


@@ -8,6 +8,7 @@
import sys
import llnl.util.tty as tty
import spack.architecture
import spack.binary_distribution as bindist
import spack.cmd
import spack.cmd.common.arguments as arguments
@@ -25,6 +26,7 @@
from spack.error import SpecError
from spack.spec import Spec, save_dependency_spec_yamls
from spack.util.string import plural
from spack.cmd import display_specs
@@ -288,8 +290,12 @@ def match_downloaded_specs(pkgs, allow_multiple_matches=False, force=False,
# List of specs that match expressions given via command line
specs_from_cli = []
has_errors = False
allarch = other_arch
specs = bindist.get_specs(allarch)
specs = bindist.get_specs()
if not other_arch:
arch = spack.architecture.default_arch().to_spec()
specs = [s for s in specs if s.satisfies(arch)]
for pkg in pkgs:
matches = []
tty.msg("buildcache spec(s) matching %s \n" % pkg)
@@ -393,9 +399,12 @@ def _createtarball(env, spec_yaml=None, packages=None, add_spec=True,
for spec in specs:
tty.debug('creating binary cache file for package %s ' % spec.format())
bindist.build_tarball(spec, outdir, force, make_relative,
unsigned, allow_root, signing_key,
rebuild_index)
try:
bindist.build_tarball(spec, outdir, force, make_relative,
unsigned, allow_root, signing_key,
rebuild_index)
except bindist.NoOverwriteException as e:
tty.warn(e)
def createtarball(args):
@@ -488,10 +497,20 @@ def install_tarball(spec, args):
def listspecs(args):
"""list binary packages available from mirrors"""
specs = bindist.get_specs(args.allarch)
specs = bindist.get_specs()
if not args.allarch:
arch = spack.architecture.default_arch().to_spec()
specs = [s for s in specs if s.satisfies(arch)]
if args.specs:
constraints = set(args.specs)
specs = [s for s in specs if any(s.satisfies(c) for c in constraints)]
if sys.stdout.isatty():
builds = len(specs)
tty.msg("%s." % plural(builds, 'cached build'))
if not builds and not args.allarch:
tty.msg("You can query all available architectures with:",
"spack buildcache list --allarch")
display_specs(specs, args, all_headers=True)
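A hedged usage sketch of the resulting behavior (output omitted):

    $ spack buildcache list             # only specs matching the default architecture
    $ spack buildcache list --allarch   # every spec in the mirror index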


@@ -14,7 +14,9 @@
"0.15": "0.15",
"0.15.0": "0.15.0",
"0.15.1": "0.15.1",
"0.15.2": "0.15.2"
"0.15.2": "0.15.2",
"0.15.3": "0.15.3",
"0.15.4": "0.15.4"
}
},
"ubuntu:16.04": {
@@ -32,7 +34,9 @@
"0.15": "0.15",
"0.15.0": "0.15.0",
"0.15.1": "0.15.1",
"0.15.2": "0.15.2"
"0.15.2": "0.15.2",
"0.15.3": "0.15.3",
"0.15.4": "0.15.4"
}
},
"centos:7": {
@@ -50,7 +54,9 @@
"0.15": "0.15",
"0.15.0": "0.15.0",
"0.15.1": "0.15.1",
"0.15.2": "0.15.2"
"0.15.2": "0.15.2",
"0.15.3": "0.15.3",
"0.15.4": "0.15.4"
}
},
"centos:6": {
@@ -68,7 +74,9 @@
"0.15": "0.15",
"0.15.0": "0.15.0",
"0.15.1": "0.15.1",
"0.15.2": "0.15.2"
"0.15.2": "0.15.2",
"0.15.3": "0.15.3",
"0.15.4": "0.15.4"
}
}
}


@@ -463,8 +463,9 @@ def _eval_conditional(string):
class ViewDescriptor(object):
def __init__(self, root, projections={}, select=[], exclude=[],
def __init__(self, base_path, root, projections={}, select=[], exclude=[],
link=default_view_link):
self.base = base_path
self.root = root
self.projections = projections
self.select = select
@@ -494,15 +495,19 @@ def to_dict(self):
return ret
@staticmethod
def from_dict(d):
return ViewDescriptor(d['root'],
def from_dict(base_path, d):
return ViewDescriptor(base_path,
d['root'],
d.get('projections', {}),
d.get('select', []),
d.get('exclude', []),
d.get('link', default_view_link))
def view(self):
return YamlFilesystemView(self.root, spack.store.layout,
root = self.root
if not os.path.isabs(root):
root = os.path.normpath(os.path.join(self.base, self.root))
return YamlFilesystemView(root, spack.store.layout,
ignore_conflicts=True,
projections=self.projections)
@@ -548,8 +553,11 @@ def regenerate(self, all_specs, roots):
# that cannot be resolved or have repos that have been removed
# we always regenerate the view from scratch. We must first make
# sure the root directory exists for the very first time though.
fs.mkdirp(self.root)
with fs.replace_directory_transaction(self.root):
root = self.root
if not os.path.isabs(root):
root = os.path.normpath(os.path.join(self.base, self.root))
fs.mkdirp(root)
with fs.replace_directory_transaction(root):
view = self.view()
view.clean()
@@ -609,9 +617,11 @@ def __init__(self, path, init_file=None, with_view=None):
self.views = {}
elif with_view is True:
self.views = {
default_view_name: ViewDescriptor(self.view_path_default)}
default_view_name: ViewDescriptor(self.path,
self.view_path_default)}
elif isinstance(with_view, six.string_types):
self.views = {default_view_name: ViewDescriptor(with_view)}
self.views = {default_view_name: ViewDescriptor(self.path,
with_view)}
# If with_view is None, then defer to the view settings determined by
# the manifest file
@@ -682,11 +692,14 @@ def _read_manifest(self, f, raw_yaml=None):
# enable_view can be boolean, string, or None
if enable_view is True or enable_view is None:
self.views = {
default_view_name: ViewDescriptor(self.view_path_default)}
default_view_name: ViewDescriptor(self.path,
self.view_path_default)}
elif isinstance(enable_view, six.string_types):
self.views = {default_view_name: ViewDescriptor(enable_view)}
self.views = {default_view_name: ViewDescriptor(self.path,
enable_view)}
elif enable_view:
self.views = dict((name, ViewDescriptor.from_dict(values))
path = self.path
self.views = dict((name, ViewDescriptor.from_dict(path, values))
for name, values in enable_view.items())
else:
self.views = {}
@@ -1120,7 +1133,7 @@ def update_default_view(self, viewpath):
if name in self.views:
self.default_view.root = viewpath
else:
self.views[name] = ViewDescriptor(viewpath)
self.views[name] = ViewDescriptor(self.path, viewpath)
else:
self.views.pop(name, None)
@@ -1531,9 +1544,10 @@ def write(self, regenerate_views=True):
default_name = default_view_name
if self.views and len(self.views) == 1 and default_name in self.views:
path = self.default_view.root
if self.default_view == ViewDescriptor(self.view_path_default):
if self.default_view == ViewDescriptor(self.path,
self.view_path_default):
view = True
elif self.default_view == ViewDescriptor(path):
elif self.default_view == ViewDescriptor(self.path, path):
view = path
else:
view = dict((name, view.to_dict())
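A hedged sketch of what this fix enables (environment name and paths
hypothetical): a relative view path in ``spack.yaml`` now resolves against
the environment's own directory rather than the current working directory:

    $ cat myenv/spack.yaml
    spack:
      specs: [zlib]
      view: ./view            # resolved as myenv/view
    $ spack -e myenv install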


@@ -1568,9 +1568,14 @@ def install(self, **kwargs):
except (Exception, SystemExit) as exc:
# Best effort installs suppress the exception and mark the
# package as a failure UNLESS this is the explicit package.
err = 'Failed to install {0} due to {1}: {2}'
tty.error(err.format(pkg.name, exc.__class__.__name__,
str(exc)))
if (not isinstance(exc, spack.error.SpackError) or
not exc.printed):
# SpackErrors can be printed by the build process or at
# lower levels -- skip printing if already printed.
# TODO: sort out this and SpackError.print_context()
err = 'Failed to install {0} due to {1}: {2}'
tty.error(
err.format(pkg.name, exc.__class__.__name__, str(exc)))
self._update_failed(task, True, exc)


@@ -128,8 +128,8 @@ def get_version():
git = exe.which("git")
if git:
with fs.working_dir(spack.paths.prefix):
desc = git(
"describe", "--tags", output=str, fail_on_error=False)
desc = git("describe", "--tags", "--match", "v*",
output=str, error=os.devnull, fail_on_error=False)
if git.returncode == 0:
match = re.match(r"v([^-]+)-([^-]+)-g([a-f\d]+)", desc)


@@ -804,15 +804,17 @@ def relocate_text(
where they should be relocated
"""
# TODO: reduce the number of arguments (8 seems too much)
sbang_regex = r'#!/bin/bash {0}/bin/sbang'.format(orig_spack)
new_sbang = r'#!/bin/bash {0}/bin/sbang'.format(new_spack)
orig_sbang = '#!/bin/bash {0}/bin/sbang'.format(orig_spack)
new_sbang = '#!/bin/bash {0}/bin/sbang'.format(new_spack)
for file in files:
_replace_prefix_text(file, orig_install_prefix, new_install_prefix)
for orig_dep_prefix, new_dep_prefix in new_prefixes.items():
_replace_prefix_text(file, orig_dep_prefix, new_dep_prefix)
_replace_prefix_text(file, orig_layout_root, new_layout_root)
_replace_prefix_text(file, sbang_regex, new_sbang)
# relocate the sbang location only if the spack directory changed
if orig_spack != new_spack:
_replace_prefix_text(file, orig_sbang, new_sbang)
def relocate_text_bin(


@@ -33,6 +33,7 @@
'develop',
'0.14', '0.14.0', '0.14.1', '0.14.2',
'0.15', '0.15.0', '0.15.1', '0.15.2',
'0.15.3', '0.15.4',
]
}
},


@@ -0,0 +1,471 @@
# Copyright 2013-2019 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""
This test checks creating and installing buildcaches
"""
import os
import py
import pytest
import argparse
import platform
import spack.repo
import spack.store
import spack.binary_distribution as bindist
import spack.cmd.buildcache as buildcache
import spack.cmd.install as install
import spack.cmd.uninstall as uninstall
import spack.cmd.mirror as mirror
from spack.spec import Spec
from spack.directory_layout import YamlDirectoryLayout
def_install_path_scheme = '${ARCHITECTURE}/${COMPILERNAME}-${COMPILERVER}/${PACKAGE}-${VERSION}-${HASH}' # noqa: E501
ndef_install_path_scheme = '${PACKAGE}/${VERSION}/${ARCHITECTURE}-${COMPILERNAME}-${COMPILERVER}-${HASH}' # noqa: E501
mirror_path_def = None
mirror_path_rel = None
@pytest.fixture(scope='function')
def cache_directory(tmpdir):
old_cache_path = spack.caches.fetch_cache
tmpdir.ensure('fetch_cache', dir=True)
fsc = spack.fetch_strategy.FsCache(str(tmpdir.join('fetch_cache')))
spack.config.caches = fsc
yield spack.config.caches
tmpdir.join('fetch_cache').remove()
spack.config.caches = old_cache_path
@pytest.fixture(scope='session')
def session_mirror_def(tmpdir_factory):
dir = tmpdir_factory.mktemp('mirror')
global mirror_path_def
mirror_path_def = dir
dir.ensure('build_cache', dir=True)
yield dir
dir.join('build_cache').remove()
@pytest.fixture(scope='function')
def mirror_directory_def(session_mirror_def):
yield str(session_mirror_def)
@pytest.fixture(scope='session')
def session_mirror_rel(tmpdir_factory):
dir = tmpdir_factory.mktemp('mirror')
global mirror_path_rel
mirror_path_rel = dir
dir.ensure('build_cache', dir=True)
yield dir
dir.join('build_cache').remove()
@pytest.fixture(scope='function')
def mirror_directory_rel(session_mirror_rel):
yield str(session_mirror_rel)
@pytest.fixture(scope='session')
def config_directory(tmpdir_factory):
tmpdir = tmpdir_factory.mktemp('test_configs')
# restore some sane defaults for packages and config
config_path = py.path.local(spack.paths.etc_path)
modules_yaml = config_path.join('spack', 'defaults', 'modules.yaml')
os_modules_yaml = config_path.join('spack', 'defaults', '%s' %
platform.system().lower(),
'modules.yaml')
packages_yaml = config_path.join('spack', 'defaults', 'packages.yaml')
config_yaml = config_path.join('spack', 'defaults', 'config.yaml')
repos_yaml = config_path.join('spack', 'defaults', 'repos.yaml')
tmpdir.ensure('site', dir=True)
tmpdir.ensure('user', dir=True)
tmpdir.ensure('site/%s' % platform.system().lower(), dir=True)
modules_yaml.copy(tmpdir.join('site', 'modules.yaml'))
os_modules_yaml.copy(tmpdir.join('site/%s' % platform.system().lower(),
'modules.yaml'))
packages_yaml.copy(tmpdir.join('site', 'packages.yaml'))
config_yaml.copy(tmpdir.join('site', 'config.yaml'))
repos_yaml.copy(tmpdir.join('site', 'repos.yaml'))
yield tmpdir
tmpdir.remove()
@pytest.fixture(scope='function')
def default_config(tmpdir_factory, config_directory, monkeypatch):
mutable_dir = tmpdir_factory.mktemp('mutable_config').join('tmp')
config_directory.copy(mutable_dir)
cfg = spack.config.Configuration(
*[spack.config.ConfigScope(name, str(mutable_dir))
for name in ['site/%s' % platform.system().lower(),
'site', 'user']])
monkeypatch.setattr(spack.config, 'config', cfg)
# This is essential, otherwise the cache will create weird side effects
# that will compromise subsequent tests if compilers.yaml is modified
monkeypatch.setattr(spack.compilers, '_cache_config_file', [])
njobs = spack.config.get('config:build_jobs')
if not njobs:
spack.config.set('config:build_jobs', 4, scope='user')
extensions = spack.config.get('config:template_dirs')
if not extensions:
spack.config.set('config:template_dirs',
[os.path.join(spack.paths.share_path, 'templates')],
scope='user')
mutable_dir.ensure('build_stage', dir=True)
build_stage = spack.config.get('config:build_stage')
if not build_stage:
spack.config.set('config:build_stage',
[str(mutable_dir.join('build_stage'))], scope='user')
timeout = spack.config.get('config:connect_timeout')
if not timeout:
spack.config.set('config:connect_timeout', 10, scope='user')
yield spack.config.config
mutable_dir.remove()
@pytest.fixture(scope='function')
def install_dir_default_layout(tmpdir):
"""Hooks a fake install directory with a default layout"""
real_store = spack.store.store
real_layout = spack.store.layout
spack.store.store = spack.store.Store(str(tmpdir.join('opt')))
spack.store.layout = YamlDirectoryLayout(str(tmpdir.join('opt')),
path_scheme=def_install_path_scheme) # noqa: E501
yield spack.store
spack.store.store = real_store
spack.store.layout = real_layout
@pytest.fixture(scope='function')
def install_dir_non_default_layout(tmpdir):
"""Hooks a fake install directory with a non-default layout"""
real_store = spack.store.store
real_layout = spack.store.layout
spack.store.store = spack.store.Store(str(tmpdir.join('opt')))
spack.store.layout = YamlDirectoryLayout(str(tmpdir.join('opt')),
path_scheme=ndef_install_path_scheme) # noqa: E501
yield spack.store
spack.store.store = real_store
spack.store.layout = real_layout
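Both install-dir fixtures follow the same save/patch/yield/restore shape. The pattern in isolation, demonstrated on a stand-in module global rather than spack.store:

import types
import pytest

fake_module = types.SimpleNamespace(store='real-store')  # stand-in global

@pytest.fixture(scope='function')
def patched_store(tmpdir):
    saved = fake_module.store                    # save the real object
    fake_module.store = 'store-at-%s' % tmpdir   # install the test double
    yield fake_module.store
    fake_module.store = saved                    # restore on teardown

def test_store_is_patched(patched_store):
    assert fake_module.store.startswith('store-at-')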
@pytest.mark.requires_executables(
'/usr/bin/gcc', 'patchelf', 'strings', 'file')
@pytest.mark.disable_clean_stage_check
@pytest.mark.maybeslow
@pytest.mark.usefixtures('default_config', 'cache_directory',
'install_dir_default_layout')
def test_default_rpaths_create_install_default_layout(tmpdir,
mirror_directory_def,
install_mockery):
"""
Test the creation and installation of buildcaches with default rpaths
into the default directory layout scheme.
"""
gspec = Spec('garply')
gspec.concretize()
cspec = Spec('corge')
cspec.concretize()
# Install patchelf, needed for relocation in the Linux test environment
iparser = argparse.ArgumentParser()
install.setup_parser(iparser)
# Install some packages with dependent packages
iargs = iparser.parse_args(['--no-cache', cspec.name])
install.install(iparser, iargs)
global mirror_path_def
mirror_path_def = mirror_directory_def
mparser = argparse.ArgumentParser()
mirror.setup_parser(mparser)
margs = mparser.parse_args(
['add', '--scope', 'site', 'test-mirror-def', 'file://%s' % mirror_path_def])
mirror.mirror(mparser, margs)
margs = mparser.parse_args(['list'])
mirror.mirror(mparser, margs)
# setup argument parser
parser = argparse.ArgumentParser()
buildcache.setup_parser(parser)
# Set default buildcache args
create_args = ['create', '-a', '-u', '-d', str(mirror_path_def),
cspec.name]
install_args = ['install', '-a', '-u', cspec.name]
# Create a buildcache
args = parser.parse_args(create_args)
buildcache.buildcache(parser, args)
# Test force overwrite create buildcache
create_args.insert(create_args.index('-a'), '-f')
args = parser.parse_args(create_args)
buildcache.buildcache(parser, args)
# create mirror index
args = parser.parse_args(['update-index', '-d', 'file://%s' % str(mirror_path_def)])
buildcache.buildcache(parser, args)
# list the buildcaches in the mirror
args = parser.parse_args(['list', '-a', '-l', '-v'])
buildcache.buildcache(parser, args)
# Uninstall the package and deps
uparser = argparse.ArgumentParser()
uninstall.setup_parser(uparser)
uargs = uparser.parse_args(['-y', '--dependents', gspec.name])
uninstall.uninstall(uparser, uargs)
# test install
args = parser.parse_args(install_args)
buildcache.buildcache(parser, args)
# This gives a warning that the spec is already installed
buildcache.buildcache(parser, args)
# test overwrite install
install_args.insert(install_args.index('-a'), '-f')
args = parser.parse_args(install_args)
buildcache.buildcache(parser, args)
args = parser.parse_args(['keys', '-f'])
buildcache.buildcache(parser, args)
args = parser.parse_args(['list'])
buildcache.buildcache(parser, args)
args = parser.parse_args(['list', '-a'])
buildcache.buildcache(parser, args)
args = parser.parse_args(['list', '-l', '-v'])
buildcache.buildcache(parser, args)
bindist._cached_specs = set()
spack.stage.purge()
margs = mparser.parse_args(
['rm', '--scope', 'site', 'test-mirror-def'])
mirror.mirror(mparser, margs)
@pytest.mark.requires_executables(
'/usr/bin/gcc', 'patchelf', 'strings', 'file')
@pytest.mark.disable_clean_stage_check
@pytest.mark.maybeslow
@pytest.mark.nomockstage
@pytest.mark.usefixtures('default_config', 'cache_directory',
'install_dir_non_default_layout')
def test_default_rpaths_install_nondefault_layout(tmpdir,
install_mockery):
"""
Test the creation and installation of buildcaches with default rpaths
into the non-default directory layout scheme.
"""
gspec = Spec('garply')
gspec.concretize()
cspec = Spec('corge')
cspec.concretize()
global mirror_path_def
mparser = argparse.ArgumentParser()
mirror.setup_parser(mparser)
margs = mparser.parse_args(
['add', '--scope', 'site', 'test-mirror-def', 'file://%s' % mirror_path_def])
mirror.mirror(mparser, margs)
# setup argument parser
parser = argparse.ArgumentParser()
buildcache.setup_parser(parser)
# Set default buildcache args
install_args = ['install', '-a', '-u', cspec.name]
# Install some packages with dependent packages
# test install in non-default install path scheme
args = parser.parse_args(install_args)
buildcache.buildcache(parser, args)
# test force install in non-default install path scheme
install_args.insert(install_args.index('-a'), '-f')
args = parser.parse_args(install_args)
buildcache.buildcache(parser, args)
bindist._cached_specs = set()
spack.stage.purge()
margs = mparser.parse_args(
['rm', '--scope', 'site', 'test-mirror-def'])
mirror.mirror(mparser, margs)
@pytest.mark.requires_executables(
'/usr/bin/gcc', 'patchelf', 'strings', 'file')
@pytest.mark.disable_clean_stage_check
@pytest.mark.maybeslow
@pytest.mark.nomockstage
@pytest.mark.usefixtures('default_config', 'cache_directory',
'install_dir_default_layout')
def test_relative_rpaths_create_default_layout(tmpdir,
mirror_directory_rel,
install_mockery):
"""
Test the creation and installation of buildcaches with relative
rpaths into the default directory layout scheme.
"""
gspec = Spec('garply')
gspec.concretize()
cspec = Spec('corge')
cspec.concretize()
global mirror_path_rel
mirror_path_rel = mirror_directory_rel
# Install patchelf, needed for relocation in the Linux test environment
iparser = argparse.ArgumentParser()
install.setup_parser(iparser)
# Install some packages with dependent packages
iargs = iparser.parse_args(['--no-cache', cspec.name])
install.install(iparser, iargs)
# setup argument parser
parser = argparse.ArgumentParser()
buildcache.setup_parser(parser)
# set default buildcache args
create_args = ['create', '-a', '-u', '-r', '-d',
str(mirror_path_rel),
cspec.name]
# create build cache with relativized rpaths
args = parser.parse_args(create_args)
buildcache.buildcache(parser, args)
# create mirror index
args = parser.parse_args(['update-index', '-d', 'file://%s' % str(mirror_path_rel)])
buildcache.buildcache(parser, args)
# Uninstall the package and deps
uparser = argparse.ArgumentParser()
uninstall.setup_parser(uparser)
uargs = uparser.parse_args(['-y', '--dependents', gspec.name])
uninstall.uninstall(uparser, uargs)
bindist._cached_specs = set()
spack.stage.purge()
@pytest.mark.requires_executables(
'/usr/bin/gcc', 'patchelf', 'strings', 'file')
@pytest.mark.disable_clean_stage_check
@pytest.mark.maybeslow
@pytest.mark.nomockstage
@pytest.mark.usefixtures('default_config', 'cache_directory',
'install_dir_default_layout')
def test_relative_rpaths_install_default_layout(tmpdir,
install_mockery):
"""
Test the creation and installation of buildcaches with relative
rpaths into the default directory layout scheme.
"""
gspec = Spec('garply')
gspec.concretize()
cspec = Spec('corge')
cspec.concretize()
global mirror_path_rel
mparser = argparse.ArgumentParser()
mirror.setup_parser(mparser)
margs = mparser.parse_args(
['add', '--scope', 'site', 'test-mirror-rel', 'file://%s' % mirror_path_rel])
mirror.mirror(mparser, margs)
# Install patchelf, needed for relocation in the Linux test environment
iparser = argparse.ArgumentParser()
install.setup_parser(iparser)
# setup argument parser
parser = argparse.ArgumentParser()
buildcache.setup_parser(parser)
# set default buildcache args
install_args = ['install', '-a', '-u',
cspec.name]
# install buildcache created with relativized rpaths
args = parser.parse_args(install_args)
buildcache.buildcache(parser, args)
# This gives a warning that the spec is already installed
buildcache.buildcache(parser, args)
# Uninstall the package and deps
uparser = argparse.ArgumentParser()
uninstall.setup_parser(uparser)
uargs = uparser.parse_args(['-y', '--dependents', gspec.name])
uninstall.uninstall(uparser, uargs)
# install build cache
buildcache.buildcache(parser, args)
# test overwrite install
install_args.insert(install_args.index('-a'), '-f')
args = parser.parse_args(install_args)
buildcache.buildcache(parser, args)
bindist._cached_specs = set()
spack.stage.purge()
margs = mparser.parse_args(
['rm', '--scope', 'site', 'test-mirror-rel'])
mirror.mirror(mparser, margs)
@pytest.mark.requires_executables(
'/usr/bin/gcc', 'patchelf', 'strings', 'file')
@pytest.mark.disable_clean_stage_check
@pytest.mark.maybeslow
@pytest.mark.nomockstage
@pytest.mark.usefixtures('default_config', 'cache_directory',
'install_dir_non_default_layout')
def test_relative_rpaths_install_nondefault(tmpdir,
install_mockery):
"""
Test the installation of buildcaches with relativized rpaths
into the non-default directory layout scheme.
"""
gspec = Spec('garply')
gspec.concretize()
cspec = Spec('corge')
cspec.concretize()
global mirror_path_rel
mparser = argparse.ArgumentParser()
mirror.setup_parser(mparser)
margs = mparser.parse_args(
['add', '--scope', 'site', 'test-mirror-rel', 'file://%s' % mirror_path_rel])
mirror.mirror(mparser, margs)
# Install patchelf, needed for relocation in the Linux test environment
iparser = argparse.ArgumentParser()
install.setup_parser(iparser)
# setup argument parser
parser = argparse.ArgumentParser()
buildcache.setup_parser(parser)
# Set default buildcache args
install_args = ['install', '-a', '-u', cspec.name]
# test install in non-default install path scheme and relative path
args = parser.parse_args(install_args)
buildcache.buildcache(parser, args)
bindist._cached_specs = set()
spack.stage.purge()
margs = mparser.parse_args(
['rm', '--scope', 'site', 'test-mirror-rel'])
mirror.mirror(mparser, margs)
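All four tests above gate on the requires_executables marker; Spack wires this up in its own conftest, but the general mechanism can be sketched as a collection hook (an illustration, not Spack's exact implementation):

# conftest.py sketch: skip tests whose required executables are missing.
import shutil
import pytest

def pytest_collection_modifyitems(config, items):
    for item in items:
        marker = item.get_closest_marker('requires_executables')
        if marker is None:
            continue
        # shutil.which also accepts absolute paths like /usr/bin/gcc.
        missing = [exe for exe in marker.args if shutil.which(exe) is None]
        if missing:
            item.add_marker(pytest.mark.skip(
                reason='missing executables: %s' % ', '.join(missing)))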

View File

@@ -12,6 +12,7 @@
import spack.main
import spack.binary_distribution
import spack.environment as ev
import spack.spec
from spack.spec import Spec
buildcache = spack.main.SpackCommand('buildcache')
@@ -24,7 +25,22 @@
def mock_get_specs(database, monkeypatch):
specs = database.query_local()
monkeypatch.setattr(
spack.binary_distribution, 'get_specs', lambda x: specs
spack.binary_distribution, 'get_specs', lambda: specs
)
@pytest.fixture()
def mock_get_specs_multiarch(database, monkeypatch):
specs = [spec.copy() for spec in database.query_local()]
# make one spec that is NOT the test architecture
for spec in specs:
if spec.name == "mpileaks":
spec.architecture = spack.spec.ArchSpec('linux-rhel7-x86_64')
break
monkeypatch.setattr(
spack.binary_distribution, 'get_specs', lambda: specs
)
@@ -37,10 +53,6 @@ def test_buildcache_preview_just_runs(database):
buildcache('preview', 'mpileaks')
@pytest.mark.skipif(
platform.system().lower() != 'linux',
reason='implementation for MacOS still missing'
)
@pytest.mark.db
@pytest.mark.regression('13757')
def test_buildcache_list_duplicates(mock_get_specs, capsys):
@@ -50,6 +62,20 @@ def test_buildcache_list_duplicates(mock_get_specs, capsys):
assert output.count('mpileaks') == 3
@pytest.mark.db
@pytest.mark.regression('17827')
def test_buildcache_list_allarch(database, mock_get_specs_multiarch, capsys):
with capsys.disabled():
output = buildcache('list', '--allarch')
assert output.count('mpileaks') == 3
with capsys.disabled():
output = buildcache('list')
assert output.count('mpileaks') == 2
def tests_buildcache_create(
install_mockery, mock_fetch, monkeypatch, tmpdir):
""""Ensure that buildcache create creates output files"""

View File

@@ -180,10 +180,12 @@ def test_show_log_on_error(mock_packages, mock_archive, mock_fetch,
assert install.error.pkg.name == 'build-error'
assert 'Full build log:' in out
# Message shows up for ProcessError (1), ChildError (1), and output (1)
print(out)
# Message shows up for ProcessError (1) and output (1)
errors = [line for line in out.split('\n')
if 'configure: error: cannot run C compiled programs' in line]
assert len(errors) == 3
assert len(errors) == 2
def test_install_overwrite(

View File

@@ -2,8 +2,9 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import re
import sys
import pytest
import spack.repo
@@ -12,6 +13,7 @@
from spack.cmd.url import name_parsed_correctly, version_parsed_correctly
from spack.cmd.url import url_summary
url = SpackCommand('url')
@@ -70,6 +72,11 @@ def test_url_with_no_version_fails():
@pytest.mark.network
@pytest.mark.skipif(
sys.version_info < (2, 7),
reason="Python 2.6 tests are run in a container, where "
"networking is super slow"
)
def test_url_list():
out = url('list')
total_urls = len(out.split('\n'))
@@ -100,6 +107,11 @@ def test_url_list():
@pytest.mark.network
@pytest.mark.skipif(
sys.version_info < (2, 7),
reason="Python 2.6 tests are run in a container, where "
"networking is super slow"
)
def test_url_summary():
"""Test the URL summary command."""
# test url_summary, the internal function that does the work
@@ -126,6 +138,11 @@ def test_url_summary():
assert out_correct_versions == correct_versions
@pytest.mark.skipif(
sys.version_info < (2, 7),
reason="Python 2.6 tests are run in a container, where "
"networking is super slow"
)
def test_url_stats(capfd):
with capfd.disabled():
output = url('stats')

View File

@@ -1143,8 +1143,6 @@ def read():
assert vals['read'] == 1
@pytest.mark.skipif('macos' in os.environ.get('GITHUB_WORKFLOW', ''),
reason="Skip failing test for GA on MacOS")
def test_lock_debug_output(lock_path):
host = socket.getfqdn()

View File

@@ -389,6 +389,10 @@ def mock_shell_v_v_no_termios(proc, ctl, **kwargs):
(mock_shell_v_v, nullcontext),
(mock_shell_v_v_no_termios, no_termios),
])
@pytest.mark.skipif(
sys.version_info < (2, 7),
reason="Python 2.6 tests are run in a container, where this fails often"
)
def test_foreground_background_output(
test_fn, capfd, termios_on_or_off, tmpdir):
"""Tests hitting 'v' toggles output, and that force_echo works."""

View File

@@ -13,7 +13,7 @@
_gnupg_version_re = r"^gpg \(GnuPG\) (.*)$"
GNUPGHOME = spack.paths.gpg_path
GNUPGHOME = os.getenv('SPACK_GNUPGHOME', spack.paths.gpg_path)
def parse_keys_output(output):
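The one-line change above is the standard environment-override pattern: the env var wins, and the packaged default is the fallback. In isolation (the default path below is made up):

import os

DEFAULT_GPG_PATH = '/opt/spack/opt/spack/gpg'   # hypothetical default
GNUPGHOME = os.getenv('SPACK_GNUPGHOME', DEFAULT_GPG_PATH)
# e.g. export SPACK_GNUPGHOME=~/.spack-gnupg keeps keys outside the tree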

View File

@@ -18,7 +18,7 @@
ORIGINAL_PATH="$PATH"
. "$(dirname $0)/setup.sh"
check_dependencies $coverage git hg svn
check_dependencies $coverage kcov git hg svn
# Clean the environment by removing Spack from the path and getting rid of
# the spack shell function

View File

@@ -37,11 +37,7 @@ bin/spack -h
bin/spack help -a
# Profile and print top 20 lines for a simple call to spack spec
if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then
spack -p --lines 20 spec openmpi
else
spack -p --lines 20 spec mpileaks%gcc ^elfutils@0.170
fi
spack -p --lines 20 spec mpileaks%gcc ^elfutils@0.170
#-----------------------------------------------------------
# Run unit tests with code coverage

View File

@@ -26,14 +26,11 @@ if [[ "$COVERAGE" == "true" ]]; then
coverage=coverage
coverage_run="coverage run"
# bash coverage depends on some other factors -- there are issues with
# kcov for Python 2.6, unit tests, and build tests.
if [[ $TRAVIS_PYTHON_VERSION != 2.6 ]]; then
mkdir -p coverage
cc_script="$SPACK_ROOT/lib/spack/env/cc"
bashcov=$(realpath ${QA_DIR}/bashcov)
sed -i~ "s@#\!/bin/bash@#\!${bashcov}@" "$cc_script"
fi
# bash coverage depends on some other factors
mkdir -p coverage
cc_script="$SPACK_ROOT/lib/spack/env/cc"
bashcov=$(realpath ${QA_DIR}/bashcov)
sed -i~ "s@#\!/bin/bash@#\!${bashcov}@" "$cc_script"
fi
#
@@ -74,6 +71,9 @@ check_dependencies() {
spack_package=mercurial
pip_package=mercurial
;;
kcov)
spack_package=kcov
;;
svn)
spack_package=subversion
;;

share/spack/setup-tutorial-env.sh Executable file
View File

@@ -0,0 +1,129 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
###############################################################################
#
# This file is part of Spack and sets up the environment for the Spack
# tutorial. It is intended to be run on Ubuntu 18.04, in an Ubuntu 18.04
# container, or in an AWS Cloud9 environment.
#
# Components:
# 1. apt installs for packages used in the tutorial
# these include compilers and externals used by the tutorial and
# basic spack requirements like python and curl
# 2. spack configuration files
# these set the default configuration for Spack to use x86_64 and suppress
# certain gpg warnings. The gpg warnings are not relevant for the tutorial
# and the default x86_64 architecture allows us to run the same tutorial on
# any x86_64 architecture without needing new binary packages.
# 3. AWS Cloud9 configuration to expand available storage
# when we run on AWS Cloud9 we have to expand the storage from 10G to 30G
# because we install too much software for a default Cloud9 instance
###############################################################################
####
# Ensure we're on Ubuntu 18.04
####
if [ -f /etc/os-release ]; then
. /etc/os-release
fi
if [ x"$UBUNTU_CODENAME" != "xbionic" ]; then
echo "The tutorial setup script must be run on Ubuntu 18.04."
return 1 &>/dev/null || exit 1 # works if sourced or run
fi
####
# Install packages needed for tutorial
####
# compilers, basic system components, externals
# There are retries around these because apt fails frequently on new instances,
# due to unattended updates running in the background and taking the lock.
until sudo apt-get update -y; do
echo "==> apt-get update failed. retrying..."
sleep 5
done
until sudo apt-get install -y --no-install-recommends \
autoconf make python3 python3-pip \
build-essential ca-certificates curl git gnupg2 iproute2 emacs \
file openssh-server tcl unzip vim wget \
clang g++ g++-6 gcc gcc-6 gfortran gfortran-6 \
zlib1g zlib1g-dev mpich; do
echo "==> apt-get install failed. retrying..."
sleep 5
done
####
# Upgrade boto3 python package on AWS systems
####
pip3 install --upgrade boto3
####
# Spack configuration settings for tutorial
####
# create spack system config
sudo mkdir -p /etc/spack
# set default arch to x86_64
sudo tee /etc/spack/packages.yaml << EOF > /dev/null
packages:
all:
target: [x86_64]
EOF
# suppress gpg warnings
sudo tee /etc/spack/config.yaml << EOF > /dev/null
config:
suppress_gpg_warnings: true
EOF
####
# AWS set volume size to at least 30G
####
# Hardcode the specified size to 30G
SIZE=30
# Get the ID of the environment host Amazon EC2 instance.
INSTANCEID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
# Get the ID of the Amazon EBS volume associated with the instance.
VOLUMEID=$(aws ec2 describe-instances \
--instance-id $INSTANCEID \
--query "Reservations[0].Instances[0].BlockDeviceMappings[0].Ebs.VolumeId" \
--output text)
# Resize the EBS volume.
aws ec2 modify-volume --volume-id $VOLUMEID --size $SIZE
# Wait for the resize to finish.
while [ \
"$(aws ec2 describe-volumes-modifications \
--volume-id $VOLUMEID \
--filters Name=modification-state,Values="optimizing","completed" \
--query "length(VolumesModifications)"\
--output text)" != "1" ]; do
sleep 1
done
if [ -e /dev/xvda1 ]
then
# Rewrite the partition table so that the partition takes up all the space that it can.
sudo growpart /dev/xvda 1
# Expand the size of the file system.
sudo resize2fs /dev/xvda1
else
# Rewrite the partition table so that the partition takes up all the space that it can.
sudo growpart /dev/nvme0n1 1
# Expand the size of the file system.
sudo resize2fs /dev/nvme0n1p1
fi
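The volume-resize block above shells out to the aws CLI; the same steps can be written with the boto3 package the script pip-installs. A sketch, assuming the instance's IAM role permits ec2:ModifyVolume and ec2:DescribeVolumesModifications:

# Sketch: the same resize flow with boto3 instead of the aws CLI.
import time
from urllib.request import urlopen
import boto3

SIZE = 30
instance_id = urlopen(
    'http://169.254.169.254/latest/meta-data/instance-id').read().decode()
ec2 = boto3.client('ec2')
reservation = ec2.describe_instances(InstanceIds=[instance_id])
volume_id = (reservation['Reservations'][0]['Instances'][0]
             ['BlockDeviceMappings'][0]['Ebs']['VolumeId'])
ec2.modify_volume(VolumeId=volume_id, Size=SIZE)
# Poll until the modification reaches optimizing/completed.
while True:
    mods = ec2.describe_volumes_modifications(
        VolumeIds=[volume_id],
        Filters=[{'Name': 'modification-state',
                  'Values': ['optimizing', 'completed']}])
    if len(mods['VolumesModifications']) == 1:
        break
    time.sleep(1)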

View File

@@ -0,0 +1,155 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
import os
class Corge(Package):
"""A toy package to test dependencies"""
homepage = "https://www.example.com"
url = "https://github.com/gartung/corge/archive/v3.0.0.tar.gz"
version('3.0.0',
sha256='5058861c3b887511387c725971984cec665a8307d660158915a04d7786fed6bc')
depends_on('quux')
def install(self, spec, prefix):
corge_cc = '''#include <iostream>
#include <stdexcept>
#include "corge.h"
#include "corge_version.h"
#include "quux/quux.h"
const int Corge::version_major = corge_version_major;
const int Corge::version_minor = corge_version_minor;
Corge::Corge()
{
}
int
Corge::get_version() const
{
return 10 * version_major + version_minor;
}
int
Corge::corgegate() const
{
int corge_version = get_version();
std::cout << "Corge::corgegate version " << corge_version
<< " invoked" << std::endl;
std::cout << "Corge config directory = %s" <<std::endl;
Quux quux;
int quux_version = quux.quuxify();
if(quux_version != corge_version) {
throw std::runtime_error(
"Corge found an incompatible version of Garply.");
}
return corge_version;
}
'''
corge_h = '''#ifndef CORGE_H_
class Corge
{
private:
static const int version_major;
static const int version_minor;
public:
Corge();
int get_version() const;
int corgegate() const;
};
#endif // CORGE_H_
'''
corge_version_h = '''
const int corge_version_major = %s;
const int corge_version_minor = %s;
'''
corgegator_cc = '''
#include <iostream>
#include "corge.h"
int
main(int argc, char* argv[])
{
std::cout << "corgerator called with ";
if (argc == 0) {
std::cout << "no command-line arguments" << std::endl;
} else {
std::cout << "command-line arguments:";
for (int i = 0; i < argc; ++i) {
std::cout << " \"" << argv[i] << "\"";
}
std::cout << std::endl;
}
std::cout << "corgegating.."<<std::endl;
Corge corge;
corge.corgegate();
std::cout << "done."<<std::endl;
return 0;
}
'''
mkdirp(prefix.lib64)
mkdirp('%s/corge' % prefix.include)
mkdirp('%s/corge' % self.stage.source_path)
with open('%s/corge_version.h' % self.stage.source_path, 'w') as f:
f.write(corge_version_h % (self.version[0], self.version[1:]))
with open('%s/corge/corge.cc' % self.stage.source_path, 'w') as f:
f.write(corge_cc % prefix.config)
with open('%s/corge/corge.h' % self.stage.source_path, 'w') as f:
f.write(corge_h)
with open('%s/corge/corgegator.cc' % self.stage.source_path, 'w') as f:
f.write(corgegator_cc)
gpp = which('/usr/bin/g++')
gpp('-Dcorge_EXPORTS',
'-I%s' % self.stage.source_path,
'-I%s' % spec['quux'].prefix.include,
'-I%s' % spec['garply'].prefix.include,
'-O2', '-g', '-DNDEBUG', '-fPIC',
'-o', 'corge.cc.o',
'-c', 'corge/corge.cc')
gpp('-Dcorge_EXPORTS',
'-I%s' % self.stage.source_path,
'-I%s' % spec['quux'].prefix.include,
'-I%s' % spec['garply'].prefix.include,
'-O2', '-g', '-DNDEBUG', '-fPIC',
'-o', 'corgegator.cc.o',
'-c', 'corge/corgegator.cc')
gpp('-fPIC', '-O2', '-g', '-DNDEBUG', '-shared',
'-Wl,-soname,libcorge.so', '-o', 'libcorge.so', 'corge.cc.o',
'-Wl,-rpath,%s:%s::::' %
(spec['quux'].prefix.lib64, spec['garply'].prefix.lib64),
'%s/libquux.so' % spec['quux'].prefix.lib64,
'%s/libgarply.so' % spec['garply'].prefix.lib64)
gpp('-O2', '-g', '-DNDEBUG', '-rdynamic',
'corgegator.cc.o', '-o', 'corgegator',
'-Wl,-rpath,%s:%s:%s:::' % (prefix.lib64,
spec['quux'].prefix.lib64,
spec['garply'].prefix.lib64),
'libcorge.so',
'%s/libquux.so' % spec['quux'].prefix.lib64,
'%s/libgarply.so' % spec['garply'].prefix.lib64)
copy('corgegator', '%s/corgegator' % prefix.lib64)
copy('libcorge.so', '%s/libcorge.so' % prefix.lib64)
copy('%s/corge/corge.h' % self.stage.source_path,
'%s/corge/corge.h' % prefix.include)
mkdirp(prefix.bin)
copy('corge_version.h', '%s/corge_version.h' % prefix.bin)
os.symlink('%s/corgegator' % prefix.lib64,
'%s/corgegator' % prefix.bin)
os.symlink('%s/quuxifier' % spec['quux'].prefix.lib64,
'%s/quuxifier' % prefix.bin)
os.symlink('%s/garplinator' % spec['garply'].prefix.lib64,
'%s/garplinator' % prefix.bin)

View File

@@ -0,0 +1,112 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
import os
class Garply(Package):
"""Toy package for testing dependencies"""
homepage = "https://www.example.com"
url = "https://github.com/gartung/garply/archive/v3.0.0.tar.gz"
version('3.0.0',
sha256='534ac8ba7a6fed7e8bbb543bd43ca04999e65337445a531bd296939f5ac2f33d')
def install(self, spec, prefix):
garply_h = '''#ifndef GARPLY_H_
class Garply
{
private:
static const int version_major;
static const int version_minor;
public:
Garply();
int get_version() const;
int garplinate() const;
};
#endif // GARPLY_H_
'''
garply_cc = '''#include "garply.h"
#include "garply_version.h"
#include <iostream>
const int Garply::version_major = garply_version_major;
const int Garply::version_minor = garply_version_minor;
Garply::Garply() {}
int
Garply::get_version() const
{
return 10 * version_major + version_minor;
}
int
Garply::garplinate() const
{
std::cout << "Garply::garplinate version " << get_version()
<< " invoked" << std::endl;
std::cout << "Garply config dir = %s" << std::endl;
return get_version();
}
'''
garplinator_cc = '''#include "garply.h"
#include <iostream>
int
main()
{
Garply garply;
garply.garplinate();
return 0;
}
'''
garply_version_h = '''const int garply_version_major = %s;
const int garply_version_minor = %s;
'''
mkdirp(prefix.lib64)
mkdirp('%s/garply' % prefix.include)
mkdirp('%s/garply' % self.stage.source_path)
with open('%s/garply_version.h' % self.stage.source_path, 'w') as f:
f.write(garply_version_h % (self.version[0], self.version[1:]))
with open('%s/garply/garply.h' % self.stage.source_path, 'w') as f:
f.write(garply_h)
with open('%s/garply/garply.cc' % self.stage.source_path, 'w') as f:
f.write(garply_cc % prefix.config)
with open('%s/garply/garplinator.cc' %
self.stage.source_path, 'w') as f:
f.write(garplinator_cc)
gpp = which('/usr/bin/g++')
gpp('-Dgarply_EXPORTS',
'-I%s' % self.stage.source_path,
'-O2', '-g', '-DNDEBUG', '-fPIC',
'-o', 'garply.cc.o',
'-c', '%s/garply/garply.cc' % self.stage.source_path)
gpp('-Dgarply_EXPORTS',
'-I%s' % self.stage.source_path,
'-O2', '-g', '-DNDEBUG', '-fPIC',
'-o', 'garplinator.cc.o',
'-c', '%s/garply/garplinator.cc' % self.stage.source_path)
gpp('-fPIC', '-O2', '-g', '-DNDEBUG', '-shared',
'-Wl,-soname,libgarply.so', '-o', 'libgarply.so', 'garply.cc.o')
gpp('-O2', '-g', '-DNDEBUG', '-rdynamic',
'garplinator.cc.o', '-o', 'garplinator',
'-Wl,-rpath,%s' % prefix.lib64,
'libgarply.so')
copy('libgarply.so', '%s/libgarply.so' % prefix.lib64)
copy('garplinator', '%s/garplinator' % prefix.lib64)
copy('%s/garply/garply.h' % self.stage.source_path,
'%s/garply/garply.h' % prefix.include)
mkdirp(prefix.bin)
copy('garply_version.h', '%s/garply_version.h' % prefix.bin)
os.symlink('%s/garplinator' % prefix.lib64,
'%s/garplinator' % prefix.bin)

View File

@@ -7,16 +7,17 @@
class Patchelf(AutotoolsPackage):
"""
PatchELF is a small utility to modify the
dynamic linker and RPATH of ELF executables.
"""
"""PatchELF is a small utility to modify the dynamic linker and RPATH of
ELF executables."""
homepage = "https://nixos.org/patchelf.html"
url = "http://nixos.org/releases/patchelf/patchelf-0.8/patchelf-0.8.tar.gz"
list_url = "http://nixos.org/releases/patchelf/"
url = "https://nixos.org/releases/patchelf/patchelf-0.10/patchelf-0.10.tar.gz"
list_url = "https://nixos.org/releases/patchelf/"
list_depth = 1
version('0.9', '3c265508526760f233620f35d79c79fc')
version('0.8', '407b229e6a681ffb0e2cdd5915cb2d01')
version('0.10', sha256='b2deabce05c34ce98558c0efb965f209de592197b2c88e930298d740ead09019')
version('0.9', sha256='f2aa40a6148cb3b0ca807a1bf836b081793e55ec9e5540a5356d800132be7e0a')
version('0.8', sha256='14af06a2da688d577d64ff8dac065bb8903bbffbe01d30c62df7af9bf4ce72fe')
def install(self, spec, prefix):
install_tree(self.stage.source_path, prefix)

View File

@@ -0,0 +1,132 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
import os
class Quux(Package):
"""Toy package for testing dependencies"""
homepage = "https://www.example.com"
url = "https://github.com/gartung/quux/archive/v3.0.0.tar.gz"
version('3.0.0',
sha256='b91bc96fb746495786bddac2c527039177499f2f76d3fa9dcf0b393859e68484')
depends_on('garply')
def install(self, spec, prefix):
quux_cc = '''#include "quux.h"
#include "garply/garply.h"
#include "quux_version.h"
#include <iostream>
#include <stdexcept>
const int Quux::version_major = quux_version_major;
const int Quux::version_minor = quux_version_minor;
Quux::Quux() {}
int
Quux::get_version() const
{
return 10 * version_major + version_minor;
}
int
Quux::quuxify() const
{
int quux_version = get_version();
std::cout << "Quux::quuxify version " << quux_version
<< " invoked" <<std::endl;
std::cout << "Quux config directory is %s" <<std::endl;
Garply garply;
int garply_version = garply.garplinate();
if (garply_version != quux_version) {
throw std::runtime_error(
"Quux found an incompatible version of Garply.");
}
return quux_version;
}
'''
quux_h = '''#ifndef QUUX_H_
class Quux
{
private:
static const int version_major;
static const int version_minor;
public:
Quux();
int get_version() const;
int quuxify() const;
};
#endif // QUUX_H_
'''
quuxifier_cc = '''
#include "quux.h"
#include <iostream>
int
main()
{
Quux quux;
quux.quuxify();
return 0;
}
'''
quux_version_h = '''const int quux_version_major = %s;
const int quux_version_minor = %s;
'''
mkdirp(prefix.lib64)
mkdirp('%s/quux' % prefix.include)
mkdirp('%s/quux' % self.stage.source_path)
with open('%s/quux_version.h' % self.stage.source_path, 'w') as f:
f.write(quux_version_h % (self.version[0], self.version[1:]))
with open('%s/quux/quux.cc' % self.stage.source_path, 'w') as f:
f.write(quux_cc % (prefix.config))
with open('%s/quux/quux.h' % self.stage.source_path, 'w') as f:
f.write(quux_h)
with open('%s/quux/quuxifier.cc' % self.stage.source_path, 'w') as f:
f.write(quuxifier_cc)
gpp = which('/usr/bin/g++')
gpp('-Dquux_EXPORTS',
'-I%s' % self.stage.source_path,
'-I%s' % spec['garply'].prefix.include,
'-O2', '-g', '-DNDEBUG', '-fPIC',
'-o', 'quux.cc.o',
'-c', 'quux/quux.cc')
gpp('-Dquux_EXPORTS',
'-I%s' % self.stage.source_path,
'-I%s' % spec['garply'].prefix.include,
'-O2', '-g', '-DNDEBUG', '-fPIC',
'-o', 'quuxifier.cc.o',
'-c', 'quux/quuxifier.cc')
gpp('-fPIC', '-O2', '-g', '-DNDEBUG', '-shared',
'-Wl,-soname,libquux.so', '-o', 'libquux.so', 'quux.cc.o',
'-Wl,-rpath,%s:%s::::' % (prefix.lib64,
spec['garply'].prefix.lib64),
'%s/libgarply.so' % spec['garply'].prefix.lib64)
gpp('-O2', '-g', '-DNDEBUG', '-rdynamic',
'quuxifier.cc.o', '-o', 'quuxifier',
'-Wl,-rpath,%s:%s::::' % (prefix.lib64,
spec['garply'].prefix.lib64),
'libquux.so',
'%s/libgarply.so' % spec['garply'].prefix.lib64)
copy('libquux.so', '%s/libquux.so' % prefix.lib64)
copy('quuxifier', '%s/quuxifier' % prefix.lib64)
copy('%s/quux/quux.h' % self.stage.source_path,
'%s/quux/quux.h' % prefix.include)
mkdirp(prefix.bin)
copy('quux_version.h', '%s/quux_version.h' % prefix.bin)
os.symlink('%s/quuxifier' % prefix.lib64, '%s/quuxifier' % prefix.bin)
os.symlink('%s/garplinator' % spec['garply'].prefix.lib64,
'%s/garplinator' % prefix.bin)

View File

@@ -77,15 +77,16 @@ class Cuda(Package):
depends_on('libxml2', when='@10.1.243:')
def setup_build_environment(self, env):
env.set('CUDAHOSTCXX', self.compiler.cxx)
if self.spec.satisfies('@10.1.243:'):
libxml2_home = self.spec['libxml2'].prefix
env.set('LIBXML2HOME', libxml2_home)
env.append_path('LD_LIBRARY_PATH', libxml2_home.lib)
def setup_dependent_build_environment(self, env, dependent_spec):
env.set('CUDAHOSTCXX', dependent_spec.package.compiler.cxx)
def setup_run_environment(self, env):
env.set('CUDA_HOME', self.prefix)
env.set('CUDAHOSTCXX', self.compiler.cxx)
def install(self, spec, prefix):
if os.path.exists('/tmp/cuda-installer.log'):
@@ -128,7 +129,7 @@ def install(self, spec, prefix):
@property
def libs(self):
libs = find_libraries('libcuda', root=self.prefix, shared=True,
libs = find_libraries('libcudart', root=self.prefix, shared=True,
recursive=True)
filtered_libs = []
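For reference, the corrected property now searches for the runtime library that CMake expects rather than the driver library. A hedged sketch of what the call resolves to (find_libraries is Spack's helper from llnl.util.filesystem; the prefix path is made up):

from llnl.util.filesystem import find_libraries

# Searches the prefix recursively for shared libcudart, e.g. returning
# /opt/cuda-10.1/lib64/libcudart.so on a typical Linux install.
libs = find_libraries('libcudart', root='/opt/cuda-10.1',
                      shared=True, recursive=True)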