Compare commits


101 Commits

Author SHA1 Message Date
Gregory Becker
afe1fd89b9 WIP -- wait for 18205 to continue 2020-10-21 18:37:21 -07:00
Tamara Dahlgren
1452020f22 Added raja smoke tests; updated build dir for cmake-based package tests 2020-10-14 19:35:45 -07:00
Tamara Dahlgren
9afff8eb60 Resolved eleven unit test failures (#18979) 2020-10-06 10:54:29 -07:00
Tamara Dahlgren
c6cd52f616 Remove unused test_compiler in intel.py (#18950) 2020-09-25 14:04:26 -07:00
Tamara Dahlgren
8bbfbc741d Added __init__.py to address test collection on the tty.py test (#18903) 2020-09-25 14:04:24 -07:00
Tamara Dahlgren
3f31fffe65 Resolved all basic flake8 errors 2020-09-25 14:04:23 -07:00
Tamara Dahlgren
fa023354c6 Restore test subcommand list limited to the first line though (#18723) 2020-09-23 12:44:27 -07:00
Tamara Dahlgren
02bd3d55a0 Bugfix: correct test find stage directory; fix flake8 errors (#18704) 2020-09-23 12:44:22 -07:00
Tamara Dahlgren
e58d4f8cb7 Fix test subcommand help/description (#18721) 2020-09-23 12:44:19 -07:00
Tamara Dahlgren
37a77e0d12 Preliminary binutils install tests (#18645) 2020-09-23 12:44:17 -07:00
Tamara Dahlgren
f10864b96e openmpi: Remove unneeded references to test part status values (#18644) 2020-09-23 12:44:14 -07:00
Greg Becker
19e226259c Features/spack test refactor cmds (#18518)
* no clean -t option, use 'spack test remove'

* refactor commands to make better use of TestSuite objects
2020-09-23 12:44:12 -07:00
Tamara Dahlgren
102b91203a Rename and make escaped text file utility readily available to packages (#18339) 2020-09-23 12:44:09 -07:00
Tamara Dahlgren
04ca718051 Updated hdf smoke test (#18337) 2020-09-23 12:44:06 -07:00
Tamara Dahlgren
61abc75bc6 Add remaining bugfixes and consistency changes from #18210 (#18334) 2020-09-23 12:44:03 -07:00
Greg Becker
86eececc5c Features/spack test refactor (#18277)
* refactor test code into a TestSuite object and install_test module

* update mpi tests

* refactor test suites to use content hash for name and record reproducibility info

* update unit tests and fix bugs

* Fix tests using data dir for new format
Use new `self.test_stage` object to access current data dir

Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
2020-09-23 12:44:00 -07:00
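A minimal sketch of the content-hash naming this commit describes, assuming the suite name is derived from the specs it contains; the exact fields Spack hashes are not stated here, and `suite_content_hash` is a hypothetical helper:

    import hashlib

    def suite_content_hash(spec_strings):
        # Sort and join the spec strings so the same set of specs always
        # produces the same suite name, enabling reproducibility lookups.
        blob = ';'.join(sorted(spec_strings)).encode('utf-8')
        return hashlib.sha1(blob).hexdigest()[:7]

    print(suite_content_hash(['mpileaks@1.0', 'zlib@1.2.11']))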
Tamara Dahlgren
7551b66cd9 Smoke tests: handle package syntax errors (#17919) 2020-09-23 12:43:58 -07:00
Tamara Dahlgren
c9a00562c4 Preliminary HDF smoke tests (#18076) 2020-09-23 12:43:55 -07:00
Gregory Becker
dafda1ab1a update tests to use status=0 over status=None 2020-09-23 12:43:53 -07:00
Greg Becker
d6b9871169 Features/compiler tests (#17353)
* fix setup of run environment for tests

* remove unnecessary 'None' option from run_tests status arg

* allow package files for virtuals

* run tests for all virtuals provided by each package

* add tests for mpi

* add compiler tests for virtual packages

* run compiler tests automatically like virtuals

* use working_dir instead of os.chdir

* Move knowledge of virtual-ness from spec to repo

* refactor test/cmd/clean

* update cmd/pkg tests for correctness
2020-09-23 12:43:50 -07:00
Tamara Dahlgren
b6d1704729 Smoke tests: Preliminary berkeley-db tests (#17899) 2020-09-23 12:43:46 -07:00
Tamara Dahlgren
f49fa74bf7 Smoke test: Add test of a sequence of hdf5 commands (#17686) 2020-09-23 12:43:44 -07:00
Tamara Dahlgren
0b8bc43fb0 smoke test: preliminary sqlite tests 2020-09-23 12:43:40 -07:00
Tamara Dahlgren
70eb1960fb smoke test: Ensure expected test results and options are lists 2020-09-23 12:43:37 -07:00
Tamara Dahlgren
a7c109c3aa Smoke tests: cmake version checks (#17359)
* Smoke tests: cmake version checks

* Simplified cmake install checks: dict-to-list

Co-authored-by: Greg Becker <becker33@llnl.gov>
2020-09-23 12:43:34 -07:00
Tamara Dahlgren
da62f89f4a Smoke tests: hdf5 version checks and check_install (#17360) 2020-09-23 12:43:31 -07:00
Tamara Dahlgren
d9f0170024 Smoke tests: switched warn to debug plus bugfix (#17576) 2020-09-23 12:43:28 -07:00
Tamara Dahlgren
171ebd8189 Features/spack test emacs (#17363)
* Smoke tests: emacs version checks
2020-09-23 12:43:24 -07:00
Tamara Dahlgren
6d986b4478 Smoke tests: Preliminary Umpire install tests (#17178)
* Preliminary install tests for the Umpire package
2020-09-23 12:43:20 -07:00
Tamara Dahlgren
03569dee8d Add install tests for libsigsegv (#17064) 2020-09-23 12:43:18 -07:00
Tamara Dahlgren
e2ddd7846c bugfix: fix cache_extra_test_sources' file copy; add unit tests (#17057) 2020-09-23 12:43:16 -07:00
Gregory Becker
32693fa573 fixup bugs after rebase 2020-09-23 12:43:14 -07:00
Gregory Becker
e221ba6ba7 update macos test for new unit-test command 2020-09-23 12:43:12 -07:00
Gregory Becker
4146c3d135 flake 2020-09-23 12:43:10 -07:00
Gregory Becker
f5b165a76f flake 2020-09-23 12:43:08 -07:00
Tamara Dahlgren
3508362dde smoke tests: grab and run build examples (openmpi) (#16365)
* Snapshot smoke tests that grab and run examples

* Resolved openmpi example test issues for 2.0.0-4.0.3

* Use spec.satisfies; copy extra packages after install (vs. prior to install tests)

* Added smoke tests for selected openmpi installed binaries

* Use which() to determine if install exe exists

* Switched onus for installer test source grab from installer to package

* Resolved (local) flake8 issues with package.py

* Use runner.name; use string format for *run_test* messages

* Renamed copy_src_to_install to cache_extra_test_source and added comments

* Metadata path cleanup: added metadata_dir property to, and its use in, package

* Support list of source paths to cache for install testing (with unit test)

* Added test subdir to install_test_root; changed skip_file to lambda
2020-09-23 12:43:05 -07:00
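A hedged sketch of the source-caching flow these commits describe: a package copies build-tree paths into its install-test area after install, so install tests can grab and run the examples later. The package itself is hypothetical; `cache_extra_test_sources` taking a list of paths follows the commit text above:

    from spack import *

    class Mypkg(AutotoolsPackage):
        """Hypothetical package illustrating extra test-source caching."""
        homepage = "https://example.com"
        url = "https://example.com/mypkg-1.0.tar.gz"
        version('1.0', sha256='0' * 64)  # placeholder checksum

        @run_after('install')
        def cache_test_sources(self):
            # Copy these build-tree paths into the install's test cache
            # so `spack test` can use them after the build tree is gone.
            self.cache_extra_test_sources(['examples', 'tests/data'])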
Tamara Dahlgren
fb145df4f4 bugfix: Resolve perl install test bug (#16501) 2020-09-23 12:43:02 -07:00
Tamara Dahlgren
fd46f67d63 smoke tests: Refined openmpi version checks (#16337)
* Refined openmpi version checks to pass for 2.1.0 through 4.0.3

* Allow skipping install tests with exe not in bin dir and revised openmpi version tests
2020-09-23 12:43:00 -07:00
Greg Becker
edf3e91a12 Add --fail-first and --fail-fast options to spack test run (#16277)
`spack test run --fail-first` exits after the first failed package.

`spack test run --fail-fast` stops each package test after the first
failure.
2020-09-23 12:42:58 -07:00
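The two modes differ in where the loop stops; a minimal illustrative sketch of the control flow (not Spack's implementation):

    def run_suite(packages, fail_first=False, fail_fast=False):
        # packages: iterable of (name, [test callables]) pairs
        for name, tests in packages:
            package_failed = False
            for test in tests:
                try:
                    test()
                except Exception:
                    package_failed = True
                    if fail_fast:
                        break          # skip this package's remaining tests
            if package_failed and fail_first:
                return                 # exit after the first failed package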
Gregory Becker
bca630a22a make output comparisons regex 2020-09-23 12:42:56 -07:00
Gregory Becker
5ff9ba320d remove debug log parser from ctest 2020-09-23 12:42:54 -07:00
Gregory Becker
50640d4924 update from Error to FAILED 2020-09-23 12:42:51 -07:00
Gregory Becker
f5cfcadfc5 change test headings from {name}-{hash} to {name}-{version}-{hash} 2020-09-23 12:42:49 -07:00
Gregory Becker
a5c534b86d update bash completion 2020-09-23 12:42:46 -07:00
Gregory Becker
99364b9c3f refactor 2020-09-23 12:42:43 -07:00
Gregory Becker
749ab2e79d Make Spack tests record their errors and continue
previously, tests would fail on the first error
now, we wrap them in a TestFailure object that records all failures
2020-09-23 12:42:40 -07:00
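A hedged sketch of the record-and-continue pattern described here; the real TestFailure object's fields are an assumption:

    class TestFailure(Exception):
        """Aggregates individual test failures instead of raising the first."""
        def __init__(self, failures):
            self.failures = failures  # list of (exception, message) pairs
            super(TestFailure, self).__init__('%d tests failed' % len(failures))

    def run_all(tests):
        failures = []
        for test in tests:
            try:
                test()
            except BaseException as exc:
                failures.append((exc, str(exc)))  # record and keep going
        if failures:
            raise TestFailure(failures)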
Greg Becker
1b3e1897ca Features/spack test subcommands (#16054)
* spack test: subcommands for asynchronous tests

* commands are `run`, `list`, `status`, `results`, `remove`.
2020-09-23 12:42:36 -07:00
Tamara Dahlgren
3976b2a083 tests: Preliminary libsigsegv smoke tests (updated) (#15981)
* tests: Preliminary libsigsegv smoke tests (updated)

* Cleaned up and added doc to libsigsegv smoke test
2020-09-23 12:42:33 -07:00
Tamara Dahlgren
54603cb91f tests: Update openmpi smoke tests to new run_test api (#15982)
* tests: Update openmpi smoke tests to new run_test api

* Removed version check try-except tracking per discussion

* Changed openmpi orted command status values to list
2020-09-23 12:42:29 -07:00
Tamara Dahlgren
aa630b8d71 install tests: added support for multiple test command status values (#15979)
* install tests: added support for multiple test command status values
2020-09-23 12:42:25 -07:00
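With list-valued statuses, a package test can accept more than one exit code. A sketch of such a call, following the run_test usage in these commits; the surrounding package class is omitted and the exact keyword names are assumptions:

    def test(self):
        # Accept either exit code from `orted --version`; any status in
        # the list counts as a pass (per the commit above).
        self.run_test('orted', options=['--version'], status=[0, 1],
                      purpose='check that orted runs')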
Gregory Becker
77acf8ddc2 spack unit-test: fix pytest help command 2020-09-23 12:42:22 -07:00
Gregory Becker
dfb02e6d45 test runner: add options to check installation dir and print purpose 2020-09-23 12:42:18 -07:00
Gregory Becker
cf4a0cbc01 python: use self.command to get exe name in test 2020-09-23 12:42:16 -07:00
Gregory Becker
b0eb02a86f cmd/test.py: fix typo in spdx license header 2020-09-23 12:42:14 -07:00
Gregory Becker
73f76bc1b5 update bash completions 2020-09-23 12:42:12 -07:00
Gregory Becker
6f39d8011e spack test: factor out common args 2020-09-23 12:42:10 -07:00
Gregory Becker
97dc74c727 python: fix tests, remove intentional debug failures 2020-09-23 12:42:08 -07:00
Gregory Becker
d53eefa69f fix docs 2020-09-23 12:42:05 -07:00
Gregory Becker
bae57f2ae8 spack test: update existing docs for moved unit-test cmd 2020-09-23 12:41:23 -07:00
Gregory Becker
ba58ae9118 simplify error handling using language features 2020-09-23 12:36:23 -07:00
Gregory Becker
fdb8a59bae fix get_package_context check whether in a package file 2020-09-23 12:36:22 -07:00
Gregory Becker
d92f52ae02 fix handling of asserts for python3 2020-09-23 12:36:22 -07:00
Gregory Becker
3229bf04f5 fix 'belt and suspenders' for config values 2020-09-23 12:36:21 -07:00
Gregory Becker
ccf519daa5 update travis 2020-09-23 12:36:20 -07:00
Gregory Becker
c5ae92bf3f flake 2020-09-23 12:36:19 -07:00
Gregory Becker
f83280cb58 standardize names for configure_test, build_test, install_test 2020-09-23 12:36:18 -07:00
Gregory Becker
6e80de652c unbreak zlib 2020-09-23 12:36:17 -07:00
Gregory Becker
0dc212e67d tests and bugfixes 2020-09-23 12:36:16 -07:00
Gregory Becker
3ce2efe32a update bash completions 2020-09-23 12:36:14 -07:00
Gregory Becker
76ce5d90ec fixup unit-test from develop 2020-09-23 12:36:14 -07:00
Gregory Becker
e5a9a376bf fix cmd/clean tests 2020-09-23 12:36:13 -07:00
Gregory Becker
d6a497540d fixup reporter work 2020-09-23 12:36:12 -07:00
Gregory Becker
b996d65a96 bugfix 2020-09-23 12:36:11 -07:00
Gregory Becker
991a2aae37 test name message 2020-09-23 12:36:10 -07:00
Tamara Dahlgren
8ba45e358b Initial OpenMPI smoke tests: version checks 2020-09-23 12:36:09 -07:00
Gregory Becker
28e76be185 spack clean: option to clean test stage (-t) 2020-09-23 12:36:09 -07:00
Gregory Becker
70e91cc1e0 spack test: add dirty/clean flags to command 2020-09-23 12:36:08 -07:00
Gregory Becker
b52113aca9 move test dir to config option 2020-09-23 12:36:07 -07:00
Gregory Becker
ce06e24a2e refactor run_test to Package level 2020-09-23 12:36:06 -07:00
Gregory Becker
dd0fbe670c continue testing after error 2020-09-23 12:36:05 -07:00
Tamara Dahlgren
6ad70b5f5d Preliminary libxml2 tests (#15092)
* Initial libxml2 tests (using executables)

* Expanded libxml2 tests using installed bins

* Refactored/generalized _run_tests
2020-09-23 12:36:04 -07:00
wspear
dadf4d1ed9 Fixed import string (#15094) 2020-09-23 12:36:03 -07:00
Gregory Becker
64bac977f1 add spack test-env command, refactor to combine with build-env 2020-09-23 12:36:02 -07:00
Gregory Becker
2f1d26fa87 allow tests to require compiler 2020-09-23 12:36:02 -07:00
Gregory Becker
cf713c5320 Modify existing test methods to naming scheme <phase_name>_test
Existing test methods run via callbacks at install time when run with `spack install --run-tests`
These methods are tied into the package build system, and cannot be run arbitrarily
New naming scheme for these tests based on the build system phase after which they should be run
The method name `test` is now reserved for methods run via the `spack test` command
2020-09-23 12:36:01 -07:00
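A hedged sketch of the renaming scheme: the install-time check takes a phase-based name, while `test` is reserved for the new command. The package is illustrative; the decorator pattern matches the perl diff shown later in this compare:

    from spack import *

    class Mypkg(AutotoolsPackage):
        """Hypothetical package showing the phase-based test naming."""
        homepage = "https://example.com"
        url = "https://example.com/mypkg-1.0.tar.gz"
        version('1.0', sha256='0' * 64)  # placeholder checksum

        @run_after('build')
        @on_package_attributes(run_tests=True)
        def build_test(self):
            # Runs at install time, only with `spack install --run-tests`.
            make('check')

        def test(self):
            # Reserved name: run on the installed package by `spack test`.
            self.run_test('mypkg', options=['--version'], expected=['1.0'])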
Tamara Dahlgren
035e7b3743 tests: Added preliminary smoke test for perl (#14592)
* Added install test for perl, including use statements
2020-09-23 12:35:59 -07:00
Tamara Dahlgren
473457f2ba tests: Preliminary m4 smoke tests (#14553)
* Preliminary m4 smoke tests
2020-09-23 12:35:59 -07:00
Tamara Dahlgren
490bca73d1 Change variable name to 'standard' file to avoid confusion with function (#14589) 2020-09-23 12:35:58 -07:00
Tamara Dahlgren
59e885bd4f tests: Preliminary patchelf smoke tests (#14551)
* Initial patchelf smoke tests
2020-09-23 12:35:57 -07:00
Gregory Becker
966fc427a9 copy test data into './data' in test environment 2020-09-23 12:35:56 -07:00
Gregory Becker
8a34511789 improved error printing 2020-09-23 12:35:55 -07:00
Gregory Becker
8f255f9e6a fix reporter call for install command 2020-09-23 12:35:54 -07:00
Gregory Becker
4d282ad4d9 Changes in cmd/test.py in develop mirrored to cmd/unit-test.py 2020-09-23 12:35:53 -07:00
Gregory Becker
7216451ba7 tests occur in temporary directory, can be kept for debugging 2020-09-23 12:35:52 -07:00
Gregory Becker
e614cdf007 improve error catching/handling/re-raising 2020-09-23 12:35:51 -07:00
Gregory Becker
bc486a961c make test fail 2020-09-23 12:35:50 -07:00
Gregory Becker
a13eab94ce improve logging and add junit basics 2020-09-23 12:35:49 -07:00
Gregory Becker
6574c6779b python3 syntax for re-raising an error with the old traceback 2020-09-23 12:35:48 -07:00
Gregory Becker
d2cfbf177d make cdash test reporter work for testing 2020-09-23 12:35:46 -07:00
Gregory Becker
bfb97e4d57 add reporting format options to spack test 2020-09-23 12:35:14 -07:00
Gregory Becker
4151224ef2 WIP infrastructure for Spack test command to test existing installations 2020-09-23 12:22:26 -07:00
1331 changed files with 9414 additions and 47773 deletions

View File

@@ -4,9 +4,7 @@
parallel = True
concurrency = multiprocessing
branch = True
source =
bin
lib
source = lib
omit =
lib/spack/spack/test/*
lib/spack/docs/*

View File

@@ -29,7 +29,7 @@ jobs:
matrix:
package:
- lz4 # MakefilePackage
- mpich~fortran # AutotoolsPackage
- mpich # AutotoolsPackage
- tut # WafPackage
- py-setuptools # PythonPackage
- openjpeg # CMakePackage
@@ -45,7 +45,7 @@ jobs:
ccache-build-${{ matrix.package }}
- uses: actions/setup-python@v2
with:
python-version: 3.9
python-version: 3.8
- name: Install System Packages
run: |
sudo apt-get update

View File

@@ -14,8 +14,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [2.7, 3.5, 3.6, 3.7, 3.8, 3.9]
concretizer: ['original', 'clingo']
python-version: [2.7, 3.5, 3.6, 3.7, 3.8]
steps:
- uses: actions/checkout@v2
@@ -27,12 +26,9 @@ jobs:
- name: Install System packages
run: |
sudo apt-get -y update
# Needed for unit tests
sudo apt-get install -y coreutils gfortran graphviz gnupg2 mercurial
sudo apt-get install -y ninja-build patchelf
sudo apt-get install -y coreutils gfortran graphviz gnupg2 mercurial ninja-build patchelf
# Needed for kcov
sudo apt-get -y install cmake binutils-dev libcurl4-openssl-dev
sudo apt-get -y install zlib1g-dev libdw-dev libiberty-dev
sudo apt-get -y install cmake binutils-dev libcurl4-openssl-dev zlib1g-dev libdw-dev libiberty-dev
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools codecov coverage
@@ -51,23 +47,16 @@ jobs:
mkdir -p ${KCOV_ROOT}/build
cd ${KCOV_ROOT}/build && cmake -Wno-dev ${KCOV_ROOT}/kcov-${KCOV_VERSION} && cd -
make -C ${KCOV_ROOT}/build && sudo make -C ${KCOV_ROOT}/build install
- name: Bootstrap clingo from sources
if: ${{ matrix.concretizer == 'clingo' }}
run: |
. share/spack/setup-env.sh
spack external find --not-buildable cmake bison
spack -v solve zlib
- name: Run unit tests
env:
COVERAGE: true
SPACK_TEST_SOLVER: ${{ matrix.concretizer }}
run: |
share/spack/qa/run-unit-tests
coverage combine
coverage xml
- uses: codecov/codecov-action@v1
with:
flags: unittests,linux,${{ matrix.concretizer }}
flags: unittests,linux
shell:
runs-on: ubuntu-latest
steps:
@@ -76,15 +65,13 @@ jobs:
fetch-depth: 0
- uses: actions/setup-python@v2
with:
python-version: 3.9
python-version: 3.8
- name: Install System packages
run: |
sudo apt-get -y update
# Needed for shell tests
sudo apt-get install -y coreutils csh zsh tcsh fish dash bash
sudo apt-get install -y coreutils gfortran gnupg2 mercurial ninja-build patchelf zsh fish
# Needed for kcov
sudo apt-get -y install cmake binutils-dev libcurl4-openssl-dev
sudo apt-get -y install zlib1g-dev libdw-dev libiberty-dev
sudo apt-get -y install cmake binutils-dev libcurl4-openssl-dev zlib1g-dev libdw-dev libiberty-dev
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools codecov coverage
@@ -111,7 +98,6 @@ jobs:
- uses: codecov/codecov-action@v1
with:
flags: shelltests,linux
centos6:
# Test for Python2.6 run on Centos 6
runs-on: ubuntu-latest
@@ -126,25 +112,3 @@ jobs:
git fetch origin ${{ github.ref }}:test-branch
git checkout test-branch
share/spack/qa/run-unit-tests
clingo-cffi:
# Test for the clingo based solver (using clingo-cffi)
runs-on: ubuntu-latest
container: spack/github-actions:clingo-cffi
steps:
- name: Run unit tests
run: |
whoami && echo PWD=$PWD && echo HOME=$HOME && echo SPACK_TEST_SOLVER=$SPACK_TEST_SOLVER
python3 -c "import clingo; print(hasattr(clingo.Symbol, '_rep'), clingo.__version__)"
git clone https://github.com/spack/spack.git && cd spack
git fetch origin ${{ github.ref }}:test-branch
git checkout test-branch
. share/spack/setup-env.sh
spack compiler find
spack solve mpileaks%gcc
coverage run $(which spack) unit-test -v
coverage combine
coverage xml
- uses: codecov/codecov-action@v1
with:
flags: unittests,linux,clingo

View File

@@ -27,7 +27,7 @@ jobs:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: 3.9
python-version: 3.8
- name: spack install
run: |
. .github/workflows/install_spack.sh
@@ -42,11 +42,12 @@ jobs:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: 3.9
python-version: 3.8
- name: spack install
run: |
. .github/workflows/install_spack.sh
spack install -v --fail-fast py-jupyterlab %apple-clang
spack config add packages:opengl:paths:opengl@4.1:/usr/X11R6
spack install -v --fail-fast py-jupyter %apple-clang
install_scipy_clang:
name: scipy, mpl, pd
@@ -55,7 +56,7 @@ jobs:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: 3.9
python-version: 3.8
- name: spack install
run: |
. .github/workflows/install_spack.sh
@@ -70,7 +71,7 @@ jobs:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: 3.9
python-version: 3.8
- name: spack install
run: |
. .github/workflows/install_spack.sh

View File

@@ -12,16 +12,13 @@ on:
jobs:
build:
runs-on: macos-latest
strategy:
matrix:
python-version: [3.8]
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
python-version: 3.8
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools
@@ -29,7 +26,7 @@ jobs:
pip install --upgrade flake8 pep8-naming
- name: Setup Homebrew packages
run: |
brew install dash fish gcc gnupg2 kcov
brew install gcc gnupg2 dash kcov
- name: Run unit tests
run: |
git --version

View File

@@ -16,7 +16,7 @@ jobs:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: 3.9
python-version: 3.8
- name: Install Python Packages
run: |
pip install --upgrade pip
@@ -33,7 +33,7 @@ jobs:
fetch-depth: 0
- uses: actions/setup-python@v2
with:
python-version: 3.9
python-version: 3.8
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools flake8
@@ -51,7 +51,7 @@ jobs:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: 3.9
python-version: 3.8
- name: Install System packages
run: |
sudo apt-get -y update

View File

@@ -1,169 +1,3 @@
# v0.16.2 (2021-05-22)
* Major performance improvement for `spack load` and other commands. (#23661)
* `spack fetch` is now environment-aware. (#19166)
* Numerous fixes for the new, `clingo`-based concretizer. (#23016, #23307,
#23090, #22896, #22534, #20644, #20537, #21148)
* Support for automatically bootstrapping `clingo` from source. (#20652, #20657,
#21364, #21446, #21913, #22354, #22444, #22460, #22489, #22610, #22631)
* Python 3.10 support: `collections.abc` (#20441)
* Fix import issues by using `__import__` instead of Spack package importer.
(#23288, #23290)
* Bugfixes and `--source-dir` argument for `spack location`. (#22755, #22348,
#22321)
* Better support for externals in shared prefixes. (#22653)
* `spack build-env` now prefers specs defined in the active environment.
(#21642)
* Remove erroneous warnings about quotes in `from_sourcing_files`. (#22767)
* Fix clearing cache of `InternalConfigScope`. (#22609)
* Bugfix for active when pkg is already active error. (#22587)
* Make `SingleFileScope` able to repopulate the cache after clearing it.
(#22559)
* Channelflow: Fix the package. (#22483)
* More descriptive error message for bugs in `package.py` (#21811)
* Use package-supplied `autogen.sh`. (#20319)
* Respect `-k/verify-ssl-false` in `_existing_url` method. (#21864)
# v0.16.1 (2021-02-22)
This minor release includes a new feature and associated fixes:
* intel-oneapi support through new packages (#20411, #20686, #20693, #20717,
#20732, #20808, #21377, #21448)
This release also contains bug fixes/enhancements for:
* HIP/ROCm support (#19715, #20095)
* concretization (#19988, #20020, #20082, #20086, #20099, #20102, #20128,
#20182, #20193, #20194, #20196, #20203, #20247, #20259, #20307, #20362,
#20383, #20423, #20473, #20506, #20507, #20604, #20638, #20649, #20677,
#20680, #20790)
* environment install reporting fix (#20004)
* avoid import in ABI compatibility info (#20236)
* restore ability of dev-build to skip patches (#20351)
* spack find -d spec grouping (#20028)
* spack smoke test support (#19987, #20298)
* macOS fixes (#20038, #21662)
* abstract spec comparisons (#20341)
* continuous integration (#17563)
* performance improvements for binary relocation (#19690, #20768)
* additional sanity checks for variants in builtin packages (#20373)
* do not pollute auto-generated configuration files with empty lists or
dicts (#20526)
plus assorted documentation (#20021, #20174) and package bug fixes/enhancements
(#19617, #19933, #19986, #20006, #20097, #20198, #20794, #20906, #21411).
# v0.16.0 (2020-11-18)
`v0.16.0` is a major feature release.
## Major features in this release
1. **New concretizer (experimental)** Our new backtracking concretizer is
now in Spack as an experimental feature. You will need to install
`clingo@master+python` and set `concretizer: clingo` in `config.yaml`
to use it. The original concretizer is not exhaustive and is not
guaranteed to find a solution if one exists. We encourage you to use
the new concretizer and to report any bugs you find with it. We
anticipate making the new concretizer the default and including all
required dependencies for it in Spack `v0.17`. For more details, see
#19501.
2. **spack test (experimental)** Users can add `test()` methods to their
packages to run smoke tests on installations with the new `spack test`
command (the old `spack test` is now `spack unit-test`). `spack test`
is environment-aware, so you can `spack install` an environment and
`spack test run` smoke tests on all of its packages. Historical test
logs can be perused with `spack test results`. Generic smoke tests for
MPI implementations, C, C++, and Fortran compilers are included, as well
as specific smoke tests for 18 packages. This is marked experimental
because the test API (`self.run_test()`) is likely to change, but we
encourage users to upstream tests, and we will maintain and refactor any
that are added to mainline packages (#15702).
3. **spack develop** New `spack develop` command allows you to develop
several packages at once within a Spack environment. Running
`spack develop foo@v1` and `spack develop bar@v2` will check
out specific versions of `foo` and `bar` into subdirectories, which you
can then build incrementally with `spack install` (#15256).
4. **More parallelism** Spack previously only got parallelism from
individual specs, installing the dependencies of a _single_ spec in
parallel. Entire environments can now be installed in parallel, greatly
accelerating builds of large environments (#18131).
5. **Customizable base images for spack containerize**
`spack containerize` previously only output a `Dockerfile` based
on `ubuntu`. You may now specify any base image of your choosing (#15028).
6. **more external finding** `spack external find` was added in `v0.15`,
but only `cmake` had support. `spack external find` can now find
`bison`, `cuda`, `findutils`, `flex`, `git`, `lustre`, `m4`, `mpich`,
`mvapich2`, `ncurses`, `openmpi`, `perl`, `spectrum-mpi`, `tar`, and
`texinfo` on your system and add them automatically to
`packages.yaml`.
7. **Support aocc, nvhpc, and oneapi compilers** We are aggressively
pursuing support for the newest vendor compilers, especially those for
the U.S. exascale and pre-exascale systems. Compiler classes and
auto-detection for `aocc`, `nvhpc`, `oneapi` are now in Spack (#19345,
#19294, #19330).
## Additional new features of note
* New `spack mark` command can be used to designate packages as explicitly
installed, so that `spack gc` will not garbage-collect them (#16662).
* `install_tree` can be customized with Spack's projection format (#18341)
* `sbang` now lives in the `install_tree` so that all users can access it (#11598)
* `csh` and `tcsh` users no longer need to set `SPACK_ROOT` before
sourcing `setup-env.csh` (#18225)
* Spec syntax now supports `variant=*` syntax for finding any package
that has a particular variant (#19381).
* Spack respects `SPACK_GNUPGHOME` variable for custom GPG directories (#17139)
* Spack now recognizes Graviton chips
## Major refactors
* Use spawn instead of fork on Python >= 3.8 on macOS (#18205)
* Use indexes for public build caches (#19101, #19117, #19132, #19141, #19209)
* `sbang` is an external package now (https://github.com/spack/sbang, #19582)
* `archspec` is an external package now (https://github.com/archspec/archspec, #19600)
## Deprecations and Removals
* `spack bootstrap` was deprecated in v0.14.0, and has now been removed.
* `spack setup` is deprecated as of v0.16.0.
* What was `spack test` is now called `spack unit-test`. `spack test` is
now the smoke testing feature in (2) above.
## Bugfixes
Some of the most notable bugfixes in this release include:
* Better warning messages for deprecated syntax in `packages.yaml` (#18013)
* `buildcache list --allarch` now works properly (#17827)
* Many fixes and tests for buildcaches and binary relocation (#15687,
#17455, #17418, #17455, #15687, #18110)
## Package Improvements
Spack now has 5050 total packages, 720 of which were added since `v0.15`.
* ROCm packages (`hip`, `aomp`, more) added by AMD (#19957, #19832, others)
* Many improvements for ARM support
* `llvm-flang`, `flang`, and `f18` removed, as `llvm` has real `flang`
support since Flang was merged to LLVM mainline
* Emerging support for `spack external find` and `spack test` in packages.
## Infrastructure
* Major infrastructure improvements to pipelines on `gitlab.spack.io`
* Support for testing PRs from forks (#19248) is being enabled for all
forks to enable rolling, up-to-date binary builds on `develop`
# v0.15.4 (2020-08-12)
This release contains one feature addition:
@@ -772,4 +606,4 @@ version of all the changes since `v0.9.1`.
- Switched from `nose` to `pytest` for unit tests.
- Unit tests take 1 minute now instead of 8
- Massively expanded documentation
- Docs are now hosted on [spack.readthedocs.io](https://spack.readthedocs.io)
- Docs are now hosted on [spack.readthedocs.io](http://spack.readthedocs.io)

View File

@@ -28,11 +28,9 @@ text in the license header:
External Packages
-------------------
Spack bundles most external dependencies in lib/spack/external. It also
includes the sbang tool directly in bin/sbang. These packages are covered
by various permissive licenses. A summary listing follows. See the
license included with each package for full details.
Spack bundles its external dependencies in lib/spack/external. These
packages are covered by various permissive licenses. A summary listing
follows. See the license included with each package for full details.
PackageName: argparse
PackageHomePage: https://pypi.python.org/pypi/argparse
@@ -78,10 +76,6 @@ PackageName: ruamel.yaml
PackageHomePage: https://yaml.readthedocs.io/
PackageLicenseDeclared: MIT
PackageName: sbang
PackageHomePage: https://github.com/spack/sbang
PackageLicenseDeclared: Apache-2.0 OR MIT
PackageName: six
PackageHomePage: https://pypi.python.org/pypi/six
PackageLicenseDeclared: MIT

View File

@@ -21,7 +21,7 @@ builds of the same package. With Spack, you can build your software
*all* the ways you want to.
See the
[Feature Overview](https://spack.readthedocs.io/en/latest/features.html)
[Feature Overview](http://spack.readthedocs.io/en/latest/features.html)
for examples and highlights.
To install spack and your first package, make sure you have Python.
@@ -34,14 +34,14 @@ Then:
Documentation
----------------
[**Full documentation**](https://spack.readthedocs.io/) is available, or
[**Full documentation**](http://spack.readthedocs.io/) is available, or
run `spack help` or `spack help --all`.
Tutorial
----------------
We maintain a
[**hands-on tutorial**](https://spack.readthedocs.io/en/latest/tutorial.html).
[**hands-on tutorial**](http://spack.readthedocs.io/en/latest/tutorial.html).
It covers basic to advanced usage, packaging, developer features, and large HPC
deployments. You can do all of the exercises on your own laptop using a
Docker container.
@@ -75,7 +75,7 @@ Your PR must pass Spack's unit tests and documentation tests, and must be
[PEP 8](https://www.python.org/dev/peps/pep-0008/) compliant. We enforce
these guidelines with our CI process. To run these tests locally, and for
helpful tips on git, see our
[Contribution Guide](https://spack.readthedocs.io/en/latest/contribution_guide.html).
[Contribution Guide](http://spack.readthedocs.io/en/latest/contribution_guide.html).
Spack's `develop` branch has the latest contributions. Pull requests
should target `develop`, and users who want the latest package versions,
@@ -120,7 +120,7 @@ If you are referencing Spack in a publication, please cite the following paper:
* Todd Gamblin, Matthew P. LeGendre, Michael R. Collette, Gregory L. Lee,
Adam Moody, Bronis R. de Supinski, and W. Scott Futral.
[**The Spack Package Manager: Bringing Order to HPC Software Chaos**](https://www.computer.org/csdl/proceedings/sc/2015/3723/00/2807623.pdf).
[**The Spack Package Manager: Bringing Order to HPC Software Chaos**](http://www.computer.org/csdl/proceedings/sc/2015/3723/00/2807623.pdf).
In *Supercomputing 2015 (SC15)*, Austin, Texas, November 15-20 2015. LLNL-CONF-669890.
License

bin/sbang
View File

@@ -1,103 +1,114 @@
#!/bin/sh
#!/bin/bash
#
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# sbang project developers. See the top-level COPYRIGHT file for details.
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
#
# `sbang`: Run scripts with long shebang lines.
#
# Many operating systems limit the length and number of possible
# arguments in shebang lines, making it hard to use interpreters that are
# deep in the directory hierarchy or require special arguments.
# Many operating systems limit the length of shebang lines, making it
# hard to use interpreters that are deep in the directory hierarchy.
# `sbang` can run such scripts, either as a shebang interpreter, or
# directly on the command line.
#
# To use, put the long shebang on the second line of your script, and
# make sbang the interpreter, like this:
# Usage
# -----------------------------
# Suppose you have a script, long-shebang.sh, like this:
#
# #!/bin/sh /path/to/sbang
# #!/long/path/to/real/interpreter with arguments
# 1 #!/very/long/path/to/some/interpreter
# 2
# 3 echo "success!"
#
# `sbang` will run the real interpreter with the script as its argument.
# Invoking this script will result in an error on some OS's. On
# Linux, you get this:
#
# See https://github.com/spack/sbang for more details.
# $ ./long-shebang.sh
# -bash: ./long: /very/long/path/to/some/interp: bad interpreter:
# No such file or directory
#
# On Mac OS X, the system simply assumes the interpreter is the shell
# and tries to run with it, which is likely not what you want.
#
#
# `sbang` on the command line
# -----------------------------
# You can use `sbang` in two ways. The first is to use it directly,
# from the command line, like this:
#
# $ sbang ./long-shebang.sh
# success!
#
#
# `sbang` as the interpreter
# -----------------------------
# You can also use `sbang` *as* the interpreter for your script. Put
# `#!/bin/bash /path/to/sbang` on line 1, and move the original
# shebang to line 2 of the script:
#
# 1 #!/bin/bash /path/to/sbang
# 2 #!/long/path/to/real/interpreter with arguments
# 3
# 4 echo "success!"
#
# $ ./long-shebang.sh
# success!
#
# On Linux, you could shorten line 1 to `#!/path/to/sbang`, but other
# operating systems like Mac OS X require the interpreter to be a
# binary, so it's best to use `sbang` as a `bash` argument.
# Obviously, for this to work, `sbang` needs to have a short enough
# path that *it* will run without hitting OS limits.
#
# For Lua scripts, the second line can't start with #!, as # is not
# the comment character in lua (even though lua ignores #! on the
# *first* line of a script). So, instrument a lua script like this,
# using -- instead of # on the second line:
#
# 1 #!/bin/bash /path/to/sbang
# 2 --!/long/path/to/lua with arguments
# 3
# 4 print "success!"
#
# How it works
# -----------------------------
# `sbang` is a very simple bash script. It looks at the first two
# lines of a script argument and runs the last line starting with
# `#!`, with the script as an argument. It also forwards arguments.
#
# Generic error handling
die() {
echo "$@" 1>&2;
exit 1
}
# set SBANG_DEBUG to make the script print what would normally be executed.
exec="exec"
if [ -n "${SBANG_DEBUG}" ]; then
exec="echo "
fi
# First argument is the script we want to actually run.
script="$1"
# ensure that the script actually exists
if [ -z "$script" ]; then
die "error: sbang requires exactly one argument"
elif [ ! -f "$script" ]; then
die "$script: no such file or directory"
fi
# Search the first two lines of script for interpreters.
lines=0
while read -r line && [ $lines -ne 2 ]; do
if [ "${line#\#!}" != "$line" ]; then
shebang_line="${line#\#!}"
elif [ "${line#//!}" != "$line" ]; then # // comments
shebang_line="${line#//!}"
elif [ "${line#--!}" != "$line" ]; then # -- lua comments
shebang_line="${line#--!}"
elif [ "${line#<?php\ }" != "$line" ]; then # php comments
shebang_line="${line#<?php\ \#!}"
shebang_line="${shebang_line%\ ?>}"
while read line && ((lines < 2)) ; do
if [[ "$line" = '#!'* ]]; then
interpreter="${line#\#!}"
elif [[ "$line" = '//!'*node* ]]; then
interpreter="${line#//!}"
elif [[ "$line" = '--!'*lua* ]]; then
interpreter="${line#--!}"
fi
lines=$((lines+1))
done < "$script"
# this is needed for scripts with sbang parameter
# like ones in intltool
# #!/<spack-long-path>/perl -w
# this is the interpreter line with all the parameters as a vector
interpreter_v=(${interpreter})
# this is the single interpreter path
interpreter_f="${interpreter_v[0]}"
# error if we did not find any interpreter
if [ -z "$shebang_line" ]; then
die "error: sbang found no interpreter in $script"
fi
# parse out the interpreter and first argument
IFS=' ' read -r interpreter arg1 rest <<EOF
$shebang_line
EOF
# Determine if the interpreter is a particular program, accounting for the
# '#!/usr/bin/env PROGRAM' convention. So:
#
# interpreter_is perl
#
# will be true for '#!/usr/bin/perl' and '#!/usr/bin/env perl'
interpreter_is() {
if [ "${interpreter##*/}" = "$1" ]; then
return 0
elif [ "$interpreter" = "/usr/bin/env" ] && [ "$arg1" = "$1" ]; then
return 0
# Invoke any interpreter found, or raise an error if none was found.
if [[ -n "$interpreter_f" ]]; then
if [[ "${interpreter_f##*/}" = "perl"* ]]; then
exec $interpreter -x "$@"
else
return 1
exec $interpreter "$@"
fi
}
if interpreter_is "sbang"; then
die "error: refusing to re-execute sbang to avoid infinite loop."
fi
# Finally invoke the real shebang line
# ruby and perl need -x to ignore the first line of input (the sbang line)
#
if interpreter_is perl || interpreter_is ruby; then
# shellcheck disable=SC2086
$exec $shebang_line -x "$@"
else
# shellcheck disable=SC2086
$exec $shebang_line "$@"
echo "error: sbang found no interpreter in $script"
exit 1
fi

View File

@@ -59,8 +59,6 @@ if 'ruamel.yaml' in sys.modules:
if 'ruamel' in sys.modules:
del sys.modules['ruamel']
import spack.main # noqa
# Once we've set up the system path, run the spack main method
if __name__ == "__main__":
sys.exit(spack.main.main())
import spack.main # noqa
sys.exit(spack.main.main())

View File

@@ -16,17 +16,7 @@
config:
# This is the path to the root of the Spack install tree.
# You can use $spack here to refer to the root of the spack instance.
install_tree:
root: $spack/opt/spack
projections:
all: "${ARCHITECTURE}/${COMPILERNAME}-${COMPILERVER}/${PACKAGE}-${VERSION}-${HASH}"
# install_tree can include an optional padded length (int or boolean)
# default is False (do not pad)
# if padded_length is True, Spack will pad as close to the system max path
# length as possible
# if padded_length is an integer, Spack will pad to that many characters,
# assuming it is higher than the length of the install_tree root.
# padded_length: 128
install_tree: $spack/opt/spack
# Locations where templates should be found
@@ -34,6 +24,10 @@ config:
- $spack/share/spack/templates
# Default directory layout
install_path_scheme: "${ARCHITECTURE}/${COMPILERNAME}-${COMPILERVER}/${PACKAGE}-${VERSION}-${HASH}"
# Locations where different types of modules should be installed.
module_roots:
tcl: $spack/share/spack/modules
@@ -149,20 +143,6 @@ config:
ccache: false
# The concretization algorithm to use in Spack. Options are:
#
# 'original': Spack's original greedy, fixed-point concretizer. This
# algorithm can make decisions too early and will not backtrack
# sufficiently for many specs.
#
# 'clingo': Uses a logic solver under the hood to solve DAGs with full
# backtracking and optimization for user preferences.
#
# 'clingo' currently requires the clingo ASP solver to be installed and
# built with python bindings. 'original' is built in.
concretizer: original
# How long to wait to lock the Spack installation database. This lock is used
# when Spack needs to manage its own package metadata and all operations are
# expected to complete within the default time limit. The timeout should
@@ -177,13 +157,11 @@ config:
# never succeed.
package_lock_timeout: null
# Control whether Spack embeds RPATH or RUNPATH attributes in ELF binaries.
# Has no effect on macOS. DO NOT MIX these within the same install tree.
# See the Spack documentation for details.
shared_linking: 'rpath'
# Set to 'false' to allow installation on filesystems that don't allow setgid bit
# manipulation by unprivileged user (e.g. AFS)
allow_sgid: true

View File

@@ -15,40 +15,38 @@
# -------------------------------------------------------------------------
packages:
all:
compiler: [gcc, intel, pgi, clang, xl, nag, fj, aocc]
compiler: [gcc, intel, pgi, clang, xl, nag, fj]
providers:
D: [ldc]
awk: [gawk]
blas: [openblas, amdblis]
blas: [openblas]
daal: [intel-daal]
elf: [elfutils]
fftw-api: [fftw, amdfftw]
gl: [mesa+opengl, mesa18+opengl, opengl]
glx: [mesa+glx, mesa18+glx, opengl]
fftw-api: [fftw]
gl: [mesa+opengl, opengl]
glx: [mesa+glx, opengl]
glu: [mesa-glu, openglu]
golang: [gcc]
iconv: [libiconv]
ipp: [intel-ipp]
java: [openjdk, jdk, ibm-java]
jpeg: [libjpeg-turbo, libjpeg]
lapack: [openblas, amdlibflame]
lapack: [openblas]
mariadb-client: [mariadb-c-client, mariadb]
mkl: [intel-mkl]
mpe: [mpe2]
mpi: [openmpi, mpich]
mysql-client: [mysql, mariadb-c-client]
opencl: [pocl]
osmesa: [mesa+osmesa, mesa18+osmesa]
pil: [py-pillow]
pil: [py-pillow-simd]
pkgconfig: [pkgconf, pkg-config]
rpc: [libtirpc]
scalapack: [netlib-scalapack, amdscalapack]
scalapack: [netlib-scalapack]
sycl: [hipsycl]
szip: [libszip, libaec]
tbb: [intel-tbb]
unwind: [libunwind]
yacc: [bison, byacc]
flame: [libflame, amdlibflame]
sycl: [hipsycl]
permissions:
read: world
write: user

View File

@@ -1,10 +1,10 @@
<html>
<head>
<meta http-equiv="refresh" content="0; url=https://spack.readthedocs.io/" />
<meta http-equiv="refresh" content="0; url=http://spack.readthedocs.io/" />
</head>
<body>
<p>
This page has moved to <a href="https://spack.readthedocs.io/">https://spack.readthedocs.io/</a>
This page has moved to <a href="http://spack.readthedocs.io/">http://spack.readthedocs.io/</a>
</p>
</body>
</html>

View File

@@ -31,7 +31,7 @@ colorized output.
.. code-block:: console
$ spack --color always | less -R
$ spack --color always | less -R
--------------------------
Listing available packages
@@ -132,28 +132,32 @@ If ``mpileaks`` depends on other packages, Spack will install the
dependencies first. It then fetches the ``mpileaks`` tarball, expands
it, verifies that it was downloaded without errors, builds it, and
installs it in its own directory under ``$SPACK_ROOT/opt``. You'll see
a number of messages from Spack, a lot of build output, and a message
that the package is installed. Add one or more debug options (``-d``)
to get increasingly detailed output.
a number of messages from spack, a lot of build output, and a message
that the packages is installed:
.. code-block:: console
$ spack install mpileaks
... dependency build output ...
==> Installing mpileaks-1.0-ph7pbnhl334wuhogmugriohcwempqry2
==> No binary for mpileaks-1.0-ph7pbnhl334wuhogmugriohcwempqry2 found: installing from source
==> mpileaks: Executing phase: 'autoreconf'
==> mpileaks: Executing phase: 'configure'
==> mpileaks: Executing phase: 'build'
==> mpileaks: Executing phase: 'install'
[+] ~/spack/opt/linux-rhel7-broadwell/gcc-8.1.0/mpileaks-1.0-ph7pbnhl334wuhogmugriohcwempqry2
==> Installing mpileaks
==> mpich is already installed in ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/mpich@3.0.4.
==> callpath is already installed in ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/callpath@1.0.2-5dce4318.
==> adept-utils is already installed in ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/adept-utils@1.0-5adef8da.
==> Trying to fetch from https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz
######################################################################## 100.0%
==> Staging archive: ~/spack/var/spack/stage/mpileaks@1.0%gcc@4.4.7 arch=linux-debian7-x86_64-59f6ad23/mpileaks-1.0.tar.gz
==> Created stage in ~/spack/var/spack/stage/mpileaks@1.0%gcc@4.4.7 arch=linux-debian7-x86_64-59f6ad23.
==> No patches needed for mpileaks.
==> Building mpileaks.
... build output ...
==> Successfully installed mpileaks.
Fetch: 2.16s. Build: 9.82s. Total: 11.98s.
[+] ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/mpileaks@1.0-59f6ad23
The last line, with the ``[+]``, indicates where the package is
installed.
Add the debug option -- ``spack install -d mpileaks`` -- to get additional
output.
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Building a specific version
^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -280,102 +284,6 @@ and removed everything that is not either:
You can check :ref:`cmd-spack-find-metadata` to see how to query for explicitly installed packages
or :ref:`dependency-types` for a more thorough treatment of dependency types.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Marking packages explicit or implicit
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By default, Spack will mark packages a user installs as explicitly installed,
while all of its dependencies will be marked as implicitly installed. Packages
can be marked manually as explicitly or implicitly installed by using
``spack mark``. This can be used in combination with ``spack gc`` to clean up
packages that are no longer required.
.. code-block:: console
$ spack install m4
==> 29005: Installing libsigsegv
[...]
==> 29005: Installing m4
[...]
$ spack install m4 ^libsigsegv@2.11
==> 39798: Installing libsigsegv
[...]
==> 39798: Installing m4
[...]
$ spack find -d
==> 4 installed packages
-- linux-fedora32-haswell / gcc@10.1.1 --------------------------
libsigsegv@2.11
libsigsegv@2.12
m4@1.4.18
libsigsegv@2.12
m4@1.4.18
libsigsegv@2.11
$ spack gc
==> There are no unused specs. Spack's store is clean.
$ spack mark -i m4 ^libsigsegv@2.11
==> m4@1.4.18 : marking the package implicit
$ spack gc
==> The following packages will be uninstalled:
-- linux-fedora32-haswell / gcc@10.1.1 --------------------------
5fj7p2o libsigsegv@2.11 c6ensc6 m4@1.4.18
==> Do you want to proceed? [y/N]
In the example above, we ended up with two versions of ``m4`` since they depend
on different versions of ``libsigsegv``. ``spack gc`` will not remove any of
the packages since both versions of ``m4`` have been installed explicitly
and both versions of ``libsigsegv`` are required by the ``m4`` packages.
``spack mark`` can also be used to implement upgrade workflows. The following
example demonstrates how the ``spack mark`` and ``spack gc`` can be used to
only keep the current version of a package installed.
When updating Spack via ``git pull``, new versions for either ``libsigsegv``
or ``m4`` might be introduced. This will cause Spack to install duplicates.
Since we only want to keep one version, we mark everything as implicitly
installed before updating Spack. If there is no new version for either of the
packages, ``spack install`` will simply mark them as explicitly installed and
``spack gc`` will not remove them.
.. code-block:: console
$ spack install m4
==> 62843: Installing libsigsegv
[...]
==> 62843: Installing m4
[...]
$ spack mark -i -a
==> m4@1.4.18 : marking the package implicit
$ git pull
[...]
$ spack install m4
[...]
==> m4@1.4.18 : marking the package explicit
[...]
$ spack gc
==> There are no unused specs. Spack's store is clean.
When using this workflow for installations that contain more packages, care
has to be taken to either only mark selected packages or issue ``spack install``
for all packages that should be kept.
You can check :ref:`cmd-spack-find-metadata` to see how to query for explicitly
or implicitly installed packages.
^^^^^^^^^^^^^^^^^^^^^^^^^
Non-Downloadable Tarballs
^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -421,6 +329,85 @@ the tarballs in question to it (see :ref:`mirrors`):
$ spack install galahad
-----------------------------
Deprecating insecure packages
-----------------------------
``spack deprecate`` allows for the removal of insecure packages with
minimal impact to their dependents.
.. warning::
The ``spack deprecate`` command is designed for use only in
extraordinary circumstances. This is a VERY big hammer to be used
with care.
The ``spack deprecate`` command will remove one package and replace it
with another by replacing the deprecated package's prefix with a link
to the deprecator package's prefix.
.. warning::
The ``spack deprecate`` command makes no promises about binary
compatibility. It is up to the user to ensure the deprecator is
suitable for the deprecated package.
Spack tracks concrete deprecated specs and ensures that no future packages
concretize to a deprecated spec.
The first spec given to the ``spack deprecate`` command is the package
to deprecate. It is an abstract spec that must describe a single
installed package. The second spec argument is the deprecator
spec. By default it must be an abstract spec that describes a single
installed package, but with the ``-i/--install-deprecator`` it can be
any abstract spec that Spack will install and then use as the
deprecator. The ``-I/--no-install-deprecator`` option will ensure
the default behavior.
By default, ``spack deprecate`` will deprecate all dependencies of the
deprecated spec, replacing each by the dependency of the same name in
the deprecator spec. The ``-d/--dependencies`` option will ensure the
default, while the ``-D/--no-dependencies`` option will deprecate only
the root of the deprecate spec in favor of the root of the deprecator
spec.
``spack deprecate`` can use symbolic links or hard links. The default
behavior is symbolic links, but the ``-l/--link-type`` flag can take
options ``hard`` or ``soft``.
-----------------------
Verifying installations
-----------------------
The ``spack verify`` command can be used to verify the validity of
Spack-installed packages any time after installation.
At installation time, Spack creates a manifest of every file in the
installation prefix. For links, Spack tracks the mode, ownership, and
destination. For directories, Spack tracks the mode, and
ownership. For files, Spack tracks the mode, ownership, modification
time, hash, and size. The Spack verify command will check, for every
file in each package, whether any of those attributes have changed. It
will also check for newly added files or deleted files from the
installation prefix. Spack can either check all installed packages
using the `-a,--all` or accept specs listed on the command line to
verify.
The ``spack verify`` command can also verify for individual files that
they haven't been altered since installation time. If the given file
is not in a Spack installation prefix, Spack will report that it is
not owned by any package. To check individual files instead of specs,
use the ``-f,--files`` option.
Spack installation manifests are part of the tarball signed by Spack
for binary package distribution. When installed from a binary package,
Spack uses the packaged installation manifest instead of creating one
at install time.
The ``spack verify`` command also accepts the ``-l,--local`` option to
check only local packages (as opposed to those used transparently from
``upstream`` spack instances) and the ``-j,--json`` option to output
machine-readable json data for any errors.
-------------------------
Seeing installed packages
@@ -689,95 +676,6 @@ structured the way you want:
"hash": "zvaa4lhlhilypw5quj3akyd3apbq5gap"
}
------------------------
Using installed packages
------------------------
There are several different ways to use Spack packages once you have
installed them. As you've seen, spack packages are installed into long
paths with hashes, and you need a way to get them into your path. The
easiest way is to use :ref:`spack load <cmd-spack-load>`, which is
described in the next section.
Some more advanced ways to use Spack packages include:
* :ref:`environments <environments>`, which you can use to bundle a
number of related packages to "activate" all at once, and
* :ref:`environment modules <modules>`, which are commonly used on
supercomputing clusters. Spack generates module files for every
installation automatically, and you can customize how this is done.
.. _cmd-spack-load:
^^^^^^^^^^^^^^^^^^^^^^^
``spack load / unload``
^^^^^^^^^^^^^^^^^^^^^^^
If you have :ref:`shell support <shell-support>` enabled you can use the
``spack load`` command to quickly get a package on your ``PATH``.
For example this will add the ``mpich`` package built with ``gcc`` to
your path:
.. code-block:: console
$ spack install mpich %gcc@4.4.7
# ... wait for install ...
$ spack load mpich %gcc@4.4.7
$ which mpicc
~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/mpich@3.0.4/bin/mpicc
These commands will add appropriate directories to your ``PATH``,
``MANPATH``, ``CPATH``, and ``LD_LIBRARY_PATH`` according to the
:ref:`prefix inspections <customize-env-modifications>` defined in your
modules configuration. When you no longer want to use a package, you
can type unload or unuse similarly:
.. code-block:: console
$ spack unload mpich %gcc@4.4.7
"""""""""""""""
Ambiguous specs
"""""""""""""""
If a spec used with load/unload is ambiguous (i.e. more than one
installed package matches it), then Spack will warn you:
.. code-block:: console
$ spack load libelf
==> Error: libelf matches multiple packages.
Matching packages:
qmm4kso libelf@0.8.13%gcc@4.4.7 arch=linux-debian7-x86_64
cd2u6jt libelf@0.8.13%intel@15.0.0 arch=linux-debian7-x86_64
Use a more specific spec
You can either type the ``spack load`` command again with a fully
qualified argument, or you can add just enough extra constraints to
identify one package. For example, above, the key differentiator is
that one ``libelf`` is built with the Intel compiler, while the other
used ``gcc``. You could therefore just type:
.. code-block:: console
$ spack load libelf %intel
To identify just the one built with the Intel compiler. If you want to be
*very* specific, you can load it by its hash. For example, to load the
first ``libelf`` above, you would run:
.. code-block:: console
$ spack load /qmm4kso
We'll learn more about Spack's spec syntax in the next section.
.. _sec-specs:
--------------------
@@ -1336,88 +1234,6 @@ add a version specifier to the spec:
Notice that the package versions that provide insufficient MPI
versions are now filtered out.
-----------------------------
Deprecating insecure packages
-----------------------------
``spack deprecate`` allows for the removal of insecure packages with
minimal impact to their dependents.
.. warning::
The ``spack deprecate`` command is designed for use only in
extraordinary circumstances. This is a VERY big hammer to be used
with care.
The ``spack deprecate`` command will remove one package and replace it
with another by replacing the deprecated package's prefix with a link
to the deprecator package's prefix.
.. warning::
The ``spack deprecate`` command makes no promises about binary
compatibility. It is up to the user to ensure the deprecator is
suitable for the deprecated package.
Spack tracks concrete deprecated specs and ensures that no future packages
concretize to a deprecated spec.
The first spec given to the ``spack deprecate`` command is the package
to deprecate. It is an abstract spec that must describe a single
installed package. The second spec argument is the deprecator
spec. By default it must be an abstract spec that describes a single
installed package, but with the ``-i/--install-deprecator`` it can be
any abstract spec that Spack will install and then use as the
deprecator. The ``-I/--no-install-deprecator`` option will ensure
the default behavior.
By default, ``spack deprecate`` will deprecate all dependencies of the
deprecated spec, replacing each by the dependency of the same name in
the deprecator spec. The ``-d/--dependencies`` option will ensure the
default, while the ``-D/--no-dependencies`` option will deprecate only
the root of the deprecate spec in favor of the root of the deprecator
spec.
``spack deprecate`` can use symbolic links or hard links. The default
behavior is symbolic links, but the ``-l/--link-type`` flag can take
options ``hard`` or ``soft``.
-----------------------
Verifying installations
-----------------------
The ``spack verify`` command can be used to verify the validity of
Spack-installed packages any time after installation.
At installation time, Spack creates a manifest of every file in the
installation prefix. For links, Spack tracks the mode, ownership, and
destination. For directories, Spack tracks the mode, and
ownership. For files, Spack tracks the mode, ownership, modification
time, hash, and size. The Spack verify command will check, for every
file in each package, whether any of those attributes have changed. It
will also check for newly added files or deleted files from the
installation prefix. Spack can either check all installed packages
using the `-a,--all` or accept specs listed on the command line to
verify.
The ``spack verify`` command can also verify for individual files that
they haven't been altered since installation time. If the given file
is not in a Spack installation prefix, Spack will report that it is
not owned by any package. To check individual files instead of specs,
use the ``-f,--files`` option.
Spack installation manifests are part of the tarball signed by Spack
for binary package distribution. When installed from a binary package,
Spack uses the packaged installation manifest instead of creating one
at install time.
The ``spack verify`` command also accepts the ``-l,--local`` option to
check only local packages (as opposed to those used transparently from
``upstream`` spack instances) and the ``-j,--json`` option to output
machine-readable json data for any errors.
.. _extensions:
---------------------------

View File

@@ -175,7 +175,7 @@ In the ``perl`` package, we can see:
@run_after('build')
@on_package_attributes(run_tests=True)
def test(self):
def build_test(self):
make('test')
As you can guess, this runs ``make test`` *after* building the package,

View File

@@ -76,24 +76,6 @@ should add:
depends_on('maven@3.5.4:', type='build')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Passing arguments to the build phase
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The default build and install phases should be sufficient to install
most packages. However, you may want to pass additional flags to
the build phase. For example:
.. code-block:: python
def build_args(self):
return [
'-Pdist,native',
'-Dtar',
'-Dmaven.javadoc.skip=true'
]
^^^^^^^^^^^^^^^^^^^^^^
External documentation
^^^^^^^^^^^^^^^^^^^^^^

View File

@@ -324,21 +324,21 @@ mentions that Python 3 is required, this can be specified as:
.. code-block:: python
depends_on('python@3:', type=('build', 'run'))
If Python 2 is required, this would look like:
.. code-block:: python
depends_on('python@:2', type=('build', 'run'))
If Python 2.7 is the only version that works, you can use:
.. code-block:: python
depends_on('python@2.7:2.8', type=('build', 'run'))
The documentation may not always specify supported Python versions.

View File

@@ -56,7 +56,7 @@ overridden like so:
.. code-block:: python
def test(self):
def build_test(self):
scons('check')

View File

@@ -9,48 +9,28 @@
Container Images
================
Spack :ref:`environments` are a great tool to create container images, but
preparing one that is suitable for production requires some more boilerplate
than just:
Spack can be an ideal tool to set up images for containers since all the
features discussed in :ref:`environments` can greatly help to manage
the installation of software during the image build process. Nonetheless,
building a production image from scratch still requires a lot of
boilerplate to:
.. code-block:: docker
- Get Spack working within the image, possibly running as root
- Minimize the physical size of the software installed
- Properly update the system software in the base image
COPY spack.yaml /environment
RUN spack -e /environment install
Additional actions may be needed to minimize the size of the
container, or to update the system software that is installed in the base
image, or to set up a proper entrypoint to run the image. These tasks are
usually both necessary and repetitive, so Spack comes with a command
to generate recipes for container images starting from a ``spack.yaml``.
--------------------
A Quick Introduction
--------------------
Consider having a Spack environment like the following:
.. code-block:: yaml
spack:
specs:
- gromacs+mpi
- mpich
Producing a ``Dockerfile`` from it is as simple as moving to the directory
where the ``spack.yaml`` file is stored and giving the following command:
To relieve users of these tedious tasks, Spack provides a command
to automatically generate recipes for container images based on
Environments:
.. code-block:: console
$ spack containerize > Dockerfile
The ``Dockerfile`` that gets created uses multi-stage builds and
other techniques to minimize the size of the final image:
.. code-block:: docker
$ ls
spack.yaml
$ spack containerize
# Build stage with Spack pre-installed and ready to be used
FROM spack/ubuntu-bionic:latest as builder
FROM spack/centos7:latest as builder
# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
@@ -65,7 +45,7 @@ other techniques to minimize the size of the final image:
&& echo " view: /opt/view") > /opt/spack-environment/spack.yaml
# Install the software, remove unnecessary deps
RUN cd /opt/spack-environment && spack env activate . && spack install --fail-fast && spack gc -y
RUN cd /opt/spack-environment && spack env activate . && spack install && spack gc -y
# Strip all the binaries
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
@@ -78,34 +58,38 @@ other techniques to minimize the size of the final image:
RUN cd /opt/spack-environment && \
spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
# Bare OS image to run the installed executables
FROM ubuntu:18.04
FROM centos:7
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
RUN yum update -y && yum install -y epel-release && yum update -y \
&& yum install -y libgomp \
&& rm -rf /var/cache/yum && yum clean all
RUN echo 'export PS1="\[$(tput bold)\]\[$(tput setaf 1)\][gromacs]\[$(tput setaf 2)\]\u\[$(tput sgr0)\]:\w $ "' >> ~/.bashrc
LABEL "app"="gromacs"
LABEL "mpi"="mpich"
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]
The image itself can then be built and run in the usual way, with any of the
tools suitable for the task. For instance, if we decided to use ``docker``:
.. code-block:: bash
The bits that make this automation possible are discussed in detail
below. All the images generated in this way will be based on
multi-stage builds with:
$ spack containerize > Dockerfile
$ docker build -t myimage .
[ ... ]
$ docker run -it myimage
- A fat ``build`` stage containing common build tools and Spack itself
- A minimal ``final`` stage containing only the software requested by the user
The various components involved in the generation of the recipe and their
configuration are discussed in detail in the sections below.
.. _container_spack_images:
--------------------------
Spack Images on Docker Hub
--------------------------
-----------------
Spack Base Images
-----------------
Docker images with Spack preinstalled and ready to be used are
built on `Docker Hub <https://hub.docker.com/u/spack>`_
@@ -140,20 +124,19 @@ All the images are tagged with the corresponding release of Spack:
with the exception of the ``latest`` tag that points to the HEAD
of the ``develop`` branch. These images are available for anyone
to use and take care of all the repetitive tasks that are necessary
to set up Spack within a container. The container recipes generated
by Spack use them as default base images for their ``build`` stage,
although users can also supply custom base images to accommodate
complex use cases.
to set up Spack within a container. All the container recipes generated
automatically by Spack use them as base images for their ``build`` stage.
---------------------------------
Creating Images From Environments
---------------------------------
-------------------------
Environment Configuration
-------------------------
Any Spack Environment can be used for the automatic generation of container
recipes. Sensible defaults are provided for things like the base image or the
version of Spack used in the image.
If finer tuning is needed, it can be obtained by adding the relevant metadata
under the ``container`` attribute of environments:
version of Spack used in the image. If finer tuning is needed, it can be
obtained by adding the relevant metadata under the ``container`` attribute
of environments:
.. code-block:: yaml
@@ -167,10 +150,9 @@ under the ``container`` attribute of environments:
# singularity or anything else that is currently supported
format: docker
# Sets the base images for the stages where Spack builds the
# software or where the software gets installed after being built.
images:
os: "centos:7"
# Select from a valid list of images
base:
image: "centos:7"
spack: develop
# Whether or not to strip binaries
@@ -178,8 +160,7 @@ under the ``container`` attribute of environments:
# Additional system packages that are needed at runtime
os_packages:
final:
- libgomp
- libgomp
# Extra instructions
extra_instructions:
@@ -191,210 +172,7 @@ under the ``container`` attribute of environments:
app: "gromacs"
mpi: "mpich"
A detailed description of the options available can be found in the
:ref:`container_config_options` section.
-------------------
Setting Base Images
-------------------
The ``images`` subsection is used to select both the image where
Spack builds the software and the image where the built software
is installed. This attribute can be set in two different ways and
which one to use depends on the use case at hand.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use Official Spack Images From Dockerhub
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To generate a recipe that uses an official Docker image from the
Spack organization to build the software and the corresponding official OS image
to install the built software, all the user has to do is specify:
1. An operating system under ``images:os``
2. A Spack version under ``images:spack``
Any combination of these two values that can be mapped to one of the images
discussed in :ref:`container_spack_images` is allowed. For instance, the
following ``spack.yaml``:
.. code-block:: yaml
spack:
specs:
- gromacs+mpi
- mpich
container:
images:
os: "centos:7"
spack: 0.15.4
uses ``spack/centos7:0.15.4`` and ``centos:7`` for the stages where the
software is respectively built and installed:
.. code-block:: docker
# Build stage with Spack pre-installed and ready to be used
FROM spack/centos7:0.15.4 as builder
# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&& (echo "spack:" \
&& echo " specs:" \
&& echo " - gromacs+mpi" \
&& echo " - mpich" \
&& echo " concretization: together" \
&& echo " config:" \
&& echo " install_tree: /opt/software" \
&& echo " view: /opt/view") > /opt/spack-environment/spack.yaml
[ ... ]
# Bare OS image to run the installed executables
FROM centos:7
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]
This method of selecting base images is the simpler of the two, and we
advise using it whenever possible. There are cases, though, where the
official Spack images do not fit production needs. In these situations
users can manually select which base image to start from in the recipe,
as we'll see next.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use Custom Images Provided by Users
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Consider, as an example, building a production-grade image for a CUDA
application. The best strategy would probably be to build on top of
images provided by the vendor and regard CUDA as an external package.
Spack doesn't currently provide an official image with CUDA configured
this way, but users can build it on their own and then configure the
environment to explicitly pull it. This requires users to:
1. Specify the image used to build the software under ``images:build``
2. Specify the image used to install the built software under ``images:final``
A ``spack.yaml`` like the following:
.. code-block:: yaml
spack:
specs:
- gromacs@2019.4+cuda build_type=Release
- mpich
- fftw precision=float
packages:
cuda:
buildable: False
externals:
- spec: cuda%gcc
prefix: /usr/local/cuda
container:
images:
build: custom/cuda-10.1-ubuntu18.04:latest
final: nvidia/cuda:10.1-base-ubuntu18.04
produces, for instance, the following ``Dockerfile``:
.. code-block:: docker
# Build stage with Spack pre-installed and ready to be used
FROM custom/cuda-10.1-ubuntu18.04:latest as builder
# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&& (echo "spack:" \
&& echo " specs:" \
&& echo " - gromacs@2019.4+cuda build_type=Release" \
&& echo " - mpich" \
&& echo " - fftw precision=float" \
&& echo " packages:" \
&& echo " cuda:" \
&& echo " buildable: false" \
&& echo " externals:" \
&& echo " - spec: cuda%gcc" \
&& echo " prefix: /usr/local/cuda" \
&& echo " concretization: together" \
&& echo " config:" \
&& echo " install_tree: /opt/software" \
&& echo " view: /opt/view") > /opt/spack-environment/spack.yaml
# Install the software, remove unnecessary deps
RUN cd /opt/spack-environment && spack env activate . && spack install --fail-fast && spack gc -y
# Strip all the binaries
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
xargs file -i | \
grep 'charset=binary' | \
grep 'x-executable\|x-archive\|x-sharedlib' | \
awk -F: '{print $1}' | xargs strip -s
# Modifications to the environment that are necessary to run
RUN cd /opt/spack-environment && \
spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
# Bare OS image to run the installed executables
FROM nvidia/cuda:10.1-base-ubuntu18.04
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]
where the base images for both stages are completely custom.
This second mode of selecting base images is more flexible than just
choosing an operating system and a Spack version, but it is also more
demanding. Users may need to generate their base images themselves, and
it is also their responsibility to ensure that:
1. Spack is available in the ``build`` stage and set up correctly to install the required software
2. The artifacts produced in the ``build`` stage can be executed in the ``final`` stage
Therefore we don't recommend its use in cases that can otherwise be
covered by the simplified mode shown first.
----------------------------
Singularity Definition Files
----------------------------
In addition to producing recipes in ``Dockerfile`` format, Spack can produce
Singularity Definition Files by just changing the value of the ``format``
attribute:
.. code-block:: console
$ cat spack.yaml
spack:
specs:
- hdf5~mpi
container:
format: singularity
$ spack containerize > hdf5.def
$ sudo singularity build hdf5.sif hdf5.def
The minimum version of Singularity required to build a SIF (Singularity Image Format)
image from the recipes generated by Spack is ``3.5.3``.
.. _container_config_options:
-----------------------
Configuration Reference
-----------------------
The tables below describe all the configuration options that are currently supported
to customize the generation of container recipes:
The tables below describe the configuration options that are currently supported:
.. list-table:: General configuration options for the ``container`` section of ``spack.yaml``
:header-rows: 1
@@ -407,41 +185,21 @@ to customize the generation of container recipes:
- The format of the recipe
- ``docker`` or ``singularity``
- Yes
* - ``images:os``
- Operating system used as a base for the image
* - ``base:image``
- Base image for ``final`` stage
- See :ref:`containers-supported-os`
- Yes, if using constrained selection of base images
* - ``images:spack``
- Version of Spack used in the ``build`` stage
- Yes
* - ``base:spack``
- Version of Spack
- Valid tags for ``base:image``
- Yes, if using constrained selection of base images
* - ``images:build``
- Image to be used in the ``build`` stage
- Any valid container image
- Yes, if using custom selection of base images
* - ``images:final``
- Image to be used in the ``final`` stage
- Any valid container image
- Yes, if using custom selection of base images
- Yes
* - ``strip``
- Whether to strip binaries
- ``true`` (default) or ``false``
- No
* - ``os_packages:command``
- Tool used to manage system packages
- ``apt``, ``yum``
- Only with custom base images
* - ``os_packages:update``
- Whether or not to update the list of available packages
- True or False (default: True)
- No
* - ``os_packages:build``
- System packages needed at build-time
- Valid packages for the current OS
- No
* - ``os_packages:final``
- System packages needed at run-time
- Valid packages for the current OS
* - ``os_packages``
- System packages to be installed
- Valid packages for the ``final`` OS
- No
* - ``extra_instructions:build``
- Extra instructions (e.g. `RUN`, `COPY`, etc.) at the end of the ``build`` stage
@@ -480,56 +238,70 @@ to customize the generation of container recipes:
- Description string
- No
--------------
Best Practices
--------------
Once the Environment is properly configured a recipe for a container
image can be printed to standard output by issuing the following
command from the directory where the ``spack.yaml`` resides:
^^^
MPI
^^^
OpenMPI, which is the default MPI implementation in Spack, depends on
Fortran, so consider adding ``gfortran`` to the ``apt-get install`` list.
.. code-block:: console
Recent versions of OpenMPI will require you to pass ``--allow-run-as-root``
to your ``mpirun`` calls if started as root user inside Docker.
$ spack containerize
For execution on HPC clusters, it can be helpful to import the docker
image into Singularity in order to start a program with an *external*
MPI. Otherwise, also add ``openssh-server`` to the ``apt-get install`` list.
The example ``spack.yaml`` above would produce for instance the
following ``Dockerfile``:
^^^^
CUDA
^^^^
Starting from CUDA 9.0, Nvidia provides minimal CUDA images based on
Ubuntu. Please see `their instructions <https://hub.docker.com/r/nvidia/cuda/>`_.
Avoid double-installing CUDA by adding, e.g.
.. code-block:: docker
.. code-block:: yaml
# Build stage with Spack pre-installed and ready to be used
FROM spack/centos7:latest as builder
packages:
cuda:
externals:
- spec: "cuda@9.0.176%gcc@5.4.0 arch=linux-ubuntu16-x86_64"
prefix: /usr/local/cuda
buildable: False
# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&& (echo "spack:" \
&& echo " specs:" \
&& echo " - gromacs+mpi" \
&& echo " - mpich" \
&& echo " concretization: together" \
&& echo " config:" \
&& echo " install_tree: /opt/software" \
&& echo " view: /opt/view") > /opt/spack-environment/spack.yaml
to your ``spack.yaml``.
# Install the software, remove unnecessary deps
RUN cd /opt/spack-environment && spack env activate . && spack install && spack gc -y
Users will either need ``nvidia-docker`` or e.g. Singularity to *execute*
device kernels.
# Strip all the binaries
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
xargs file -i | \
grep 'charset=binary' | \
grep 'x-executable\|x-archive\|x-sharedlib' | \
awk -F: '{print $1}' | xargs strip -s
^^^^^^^^^^^^^^^^^^^^^^^^^
Docker on Windows and OSX
^^^^^^^^^^^^^^^^^^^^^^^^^
# Modifications to the environment that are necessary to run
RUN cd /opt/spack-environment && \
spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
On Mac OS and Windows, docker runs on a hypervisor that is not allocated much
memory by default, and some spack packages may fail to build due to lack of
memory. To work around this issue, consider configuring your docker installation
to use more of your host memory. In some cases, you can also ease the memory
pressure on parallel builds by limiting the parallelism in your ``config.yaml``.
.. code-block:: yaml
# Bare OS image to run the installed executables
FROM centos:7
config:
build_jobs: 2
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
RUN yum update -y && yum install -y epel-release && yum update -y \
&& yum install -y libgomp \
&& rm -rf /var/cache/yum && yum clean all
RUN echo 'export PS1="\[$(tput bold)\]\[$(tput setaf 1)\][gromacs]\[$(tput setaf 2)\]\u\[$(tput sgr0)\]:\w $ "' >> ~/.bashrc
LABEL "app"="gromacs"
LABEL "mpi"="mpich"
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]
.. note::
Spack can also produce Singularity definition files to build the image. The
minimum version of Singularity required to build a SIF (Singularity Image Format)
from them is ``3.5.3``.

View File

@@ -621,6 +621,13 @@ for a major release, the steps to make the release are as follows:
#. Bump the version in ``lib/spack/spack/__init__.py``. See `this example from 0.13.0
<https://github.com/spack/spack/commit/8eeb64096c98b8a43d1c587f13ece743c864fba9>`_
#. Update the release version lists in these files to include the new version:
* ``lib/spack/spack/schema/container.py``
* ``lib/spack/spack/container/images.json``
.. TODO: We should get rid of this step in some future release.
#. Update ``CHANGELOG.md`` with major highlights in bullet form. Use
proper markdown formatting, like `this example from 0.15.0
<https://github.com/spack/spack/commit/d4bf70d9882fcfe88507e9cb444331d7dd7ba71c>`_.
@@ -715,6 +722,13 @@ release:
#. Bump the version in ``lib/spack/spack/__init__.py``. See `this example from 0.14.1
<https://github.com/spack/spack/commit/ff0abb9838121522321df2a054d18e54b566b44a>`_.
#. Update the release version lists in these files to include the new version:
* ``lib/spack/spack/schema/container.py``
* ``lib/spack/spack/container/images.json``
**TODO**: We should get rid of this step in some future release.
#. Update ``CHANGELOG.md`` with a list of bugfixes. This is typically just a
summary of the commits you cherry-picked onto the release branch. See
`the changelog from 0.14.1

View File

@@ -0,0 +1,41 @@
.. Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _docker_for_developers:
=====================
Docker for Developers
=====================
This guide is intended for people who want to use our prepared docker
environments to work on developing Spack or on Spack packages. It is
meant to serve as the companion documentation for the :ref:`packaging-guide`.
--------
Overview
--------
To get started, all you need is the latest version of ``docker``.
.. code-block:: console
$ cd share/spack/docker
$ source config/ubuntu.bash
$ ./run-image.sh
This command should drop you into an interactive shell where you can run
spack within an isolated docker container running Ubuntu. The copy of
spack being used is tied to the working copy of your cloned git repo, so
any changes you make, whether to packages or to spack itself, are
immediately reflected in the running docker container.
To work within a container running a different linux distro, source one of the
other environment files under ``config``.
.. code-block:: console
$ source config/fedora.bash
$ ./run-image.sh

View File

@@ -191,24 +191,44 @@ Environment has been activated. Similarly, the ``install`` and
==> 0 installed packages
$ spack install zlib@1.2.11
==> Installing zlib-1.2.11-q6cqrdto4iktfg6qyqcc5u4vmfmwb7iv
==> No binary for zlib-1.2.11-q6cqrdto4iktfg6qyqcc5u4vmfmwb7iv found: installing from source
==> zlib: Executing phase: 'install'
[+] ~/spack/opt/spack/linux-rhel7-broadwell/gcc-8.1.0/zlib-1.2.11-q6cqrdto4iktfg6qyqcc5u4vmfmwb7iv
==> Installing zlib
==> Searching for binary cache of zlib
==> Warning: No Spack mirrors are currently configured
==> No binary for zlib found: installing from source
==> Fetching http://zlib.net/fossils/zlib-1.2.11.tar.gz
######################################################################## 100.0%
==> Staging archive: /spack/var/spack/stage/zlib-1.2.11-3r4cfkmx3wwfqeof4bc244yduu2mz4ur/zlib-1.2.11.tar.gz
==> Created stage in /spack/var/spack/stage/zlib-1.2.11-3r4cfkmx3wwfqeof4bc244yduu2mz4ur
==> No patches needed for zlib
==> Building zlib [Package]
==> Executing phase: 'install'
==> Successfully installed zlib
Fetch: 0.36s. Build: 11.58s. Total: 11.93s.
[+] /spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/zlib-1.2.11-3r4cfkmx3wwfqeof4bc244yduu2mz4ur
$ spack env activate myenv
$ spack find
==> In environment myenv
==> No root specs
==> 0 installed packages
$ spack install zlib@1.2.8
==> Installing zlib-1.2.8-yfc7epf57nsfn2gn4notccaiyxha6z7x
==> No binary for zlib-1.2.8-yfc7epf57nsfn2gn4notccaiyxha6z7x found: installing from source
==> zlib: Executing phase: 'install'
[+] ~/spack/opt/spack/linux-rhel7-broadwell/gcc-8.1.0/zlib-1.2.8-yfc7epf57nsfn2gn4notccaiyxha6z7x
==> Updating view at ~/spack/var/spack/environments/myenv/.spack-env/view
==> Installing zlib
==> Searching for binary cache of zlib
==> Warning: No Spack mirrors are currently configured
==> No binary for zlib found: installing from source
==> Fetching http://zlib.net/fossils/zlib-1.2.8.tar.gz
######################################################################## 100.0%
==> Staging archive: /spack/var/spack/stage/zlib-1.2.8-y2t6kq3s23l52yzhcyhbpovswajzi7f7/zlib-1.2.8.tar.gz
==> Created stage in /spack/var/spack/stage/zlib-1.2.8-y2t6kq3s23l52yzhcyhbpovswajzi7f7
==> No patches needed for zlib
==> Building zlib [Package]
==> Executing phase: 'install'
==> Successfully installed zlib
Fetch: 0.26s. Build: 2.08s. Total: 2.35s.
[+] /spack/opt/spack/linux-rhel7-x86_64/gcc-4.9.3/zlib-1.2.8-y2t6kq3s23l52yzhcyhbpovswajzi7f7
$ spack find
==> In environment myenv
@@ -216,17 +236,15 @@ Environment has been activated. Similarly, the ``install`` and
zlib@1.2.8
==> 1 installed package
-- linux-rhel7-broadwell / gcc@8.1.0 ----------------------------
-- linux-rhel7-x86_64 / gcc@4.9.3 -------------------------------
zlib@1.2.8
$ despacktivate
$ spack find
==> 2 installed packages
-- linux-rhel7-broadwell / gcc@8.1.0 ----------------------------
-- linux-rhel7-x86_64 / gcc@4.9.3 -------------------------------
zlib@1.2.8 zlib@1.2.11
Note that when we installed the abstract spec ``zlib@1.2.8``, it was
presented as a root of the Environment. All explicitly installed
packages will be listed as roots of the Environment.
@@ -331,9 +349,6 @@ installed specs using the ``-c`` (``--concretized``) flag.
==> 0 installed packages
.. _installing-environment:
^^^^^^^^^^^^^^^^^^^^^^^^^
Installing an Environment
^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -815,10 +830,8 @@ environment for Spack commands. The arguments ``-v,--with-view`` and
behavior is to activate with the environment view if there is one.
The environment variables affected by the ``spack env activate``
command and the paths that are used to update them are determined by
the :ref:`prefix inspections <customize-env-modifications>` defined in
your modules configuration; the defaults are summarized in the following
table.
command and the paths that are used to update them are in the
following table.
=================== =========
Variable Paths

View File

@@ -16,7 +16,7 @@ Prerequisites
Spack has the following minimum requirements, which must be installed
before Spack is run:
#. Python 2 (2.6 or 2.7) or 3 (3.5 - 3.9) to run Spack
#. Python 2 (2.6 or 2.7) or 3 (3.5 - 3.8) to run Spack
#. A C/C++ compiler for building
#. The ``make`` executable for building
#. The ``tar``, ``gzip``, ``bzip2``, ``xz`` and optionally ``zstd``
@@ -26,8 +26,8 @@ before Spack is run:
#. If using the ``gpg`` subcommand, ``gnupg2`` is required
These requirements can be easily installed on most modern Linux systems;
on macOS, XCode is required. Spack is designed to run on HPC
platforms like Cray. Not all packages should be expected
on Macintosh, XCode is required. Spack is designed to run on HPC
platforms like Cray and BlueGene/Q. Not all packages should be expected
to work on all platforms. A build matrix showing which packages are
working on which systems is planned but not yet available.
@@ -44,50 +44,50 @@ Getting Spack is easy. You can clone it from the `github repository
This will create a directory called ``spack``.
.. _shell-support:
^^^^^^^^^^^^^^^^^^^^^^^^
Add Spack to the Shell
^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^
Shell support
^^^^^^^^^^^^^
We'll assume that the full path to your downloaded Spack directory is
in the ``SPACK_ROOT`` environment variable. Add ``$SPACK_ROOT/bin``
to your path and you're ready to go:
Once you have cloned Spack, we recommend sourcing the appropriate script
for your shell:
.. code-block:: console
# For bash/zsh users
$ export SPACK_ROOT=/path/to/spack
$ export PATH=$SPACK_ROOT/bin:$PATH
# For tsch/csh users
$ setenv SPACK_ROOT /path/to/spack
$ setenv PATH $SPACK_ROOT/bin:$PATH
# For fish users
$ set -x SPACK_ROOT /path/to/spack
$ set -U fish_user_paths /path/to/spack $fish_user_paths
.. code-block:: console
# For bash/zsh/sh
$ . spack/share/spack/setup-env.sh
$ spack install libelf
# For tcsh/csh
$ source spack/share/spack/setup-env.csh
For a richer experience, use Spack's shell support:
# For fish
$ . spack/share/spack/setup-env.fish
.. code-block:: console
That's it! You're ready to use Spack.
# Note you must set SPACK_ROOT
Sourcing these files will put the ``spack`` command in your ``PATH``, set
up your ``MODULEPATH`` to use Spack's packages, and add other useful
shell integration for :ref:`certain commands <packaging-shell-support>`,
:ref:`environments <environments>`, and :ref:`modules <modules>`. For
``bash``, it also sets up tab completion.
# For bash/zsh users
$ . $SPACK_ROOT/share/spack/setup-env.sh
If you do not want to use Spack's shell support, you can always just run
the ``spack`` command directly from ``spack/bin/spack``.
# For tcsh/csh users
$ source $SPACK_ROOT/share/spack/setup-env.csh
# For fish users
$ source $SPACK_ROOT/share/spack/setup-env.fish
^^^^^^^^^^^^^^^^^^
Check Installation
^^^^^^^^^^^^^^^^^^
With Spack installed, you should be able to run some basic Spack
commands. For example:
.. command-output:: spack spec netcdf-c
In theory, Spack doesn't need any additional installation; just
download and run! But in real life, additional steps are usually
required before Spack can work in a practical sense. Read on...
This automatically adds Spack to your ``PATH`` and allows the ``spack``
command to be used to execute spack :ref:`commands <shell-support>` and
:ref:`useful packaging commands <packaging-shell-support>`.
^^^^^^^^^^^^^^^^^
Clean Environment
@@ -103,52 +103,16 @@ environment*, especially for ``PATH``. Only software that comes with
the system, or that you know you wish to use with Spack, should be
included. This procedure will avoid many strange build errors.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Optional: Bootstrapping clingo
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Spack supports using clingo as an external solver to compute which software
needs to be installed. If you have a default compiler supporting C++14, Spack
can automatically bootstrap this tool from sources the first time it is
needed:
^^^^^^^^^^^^^^^^^^
Check Installation
^^^^^^^^^^^^^^^^^^
.. code-block:: console
With Spack installed, you should be able to run some basic Spack
commands. For example:
$ spack solve zlib
[+] /usr (external bison-3.0.4-wu5pgjchxzemk5ya2l3ddqug2d7jv6eb)
[+] /usr (external cmake-3.19.4-a4kmcfzxxy45mzku4ipmj5kdiiz5a57b)
[+] /usr (external python-3.6.9-x4fou4iqqlh5ydwddx3pvfcwznfrqztv)
==> Installing re2c-1.2.1-e3x6nxtk3ahgd63ykgy44mpuva6jhtdt
[ ... ]
==> Optimization: [0, 0, 0, 0, 0, 1, 0, 0, 0]
zlib@1.2.11%gcc@10.1.0+optimize+pic+shared arch=linux-ubuntu18.04-broadwell
.. command-output:: spack spec netcdf-c
If you want to speed-up bootstrapping, you may try to search for ``cmake`` and ``bison``
on your system:
.. code-block:: console
$ spack external find cmake bison
==> The following specs have been detected on this system and added to /home/spack/.spack/packages.yaml
bison@3.0.4 cmake@3.19.4
All the tools Spack needs for its own functioning are installed in a separate store, which lives
under the ``${HOME}/.spack`` directory. The software installed there can be queried with:
.. code-block:: console
$ spack find --bootstrap
==> Showing internal bootstrap store at "/home/spack/.spack/bootstrap/store"
==> 3 installed packages
-- linux-ubuntu18.04-x86_64 / gcc@10.1.0 ------------------------
clingo-bootstrap@spack python@3.6.9 re2c@1.2.1
If needed, the bootstrap store can also be cleaned with:
.. code-block:: console
$ spack clean -b
==> Removing software in "/home/spack/.spack/bootstrap/store"
^^^^^^^^^^^^^^^^^^^^^^^^^^
Optional: Alternate Prefix
@@ -168,6 +132,15 @@ copy of spack installs packages into its own ``$PREFIX/opt``
directory.
^^^^^^^^^^
Next Steps
^^^^^^^^^^
In theory, Spack doesn't need any additional installation; just
download and run! But in real life, additional steps are usually
required before Spack can work in a practical sense. Read on...
.. _compiler-config:
----------------------

View File

@@ -85,6 +85,7 @@ or refer to the full manual below.
packaging_guide
build_systems
developer_guide
docker_for_developers
.. toctree::
:maxdepth: 2

View File

@@ -10,16 +10,14 @@ Modules
=======
The use of module systems to manage user environment in a controlled way
is a common practice at HPC centers that is often embraced also by
individual programmers on their development machines. To support this
common practice Spack integrates with `Environment Modules
<http://modules.sourceforge.net/>`_ and `LMod
<http://lmod.readthedocs.io/en/latest/>`_ by providing post-install hooks
that generate module files and commands to manipulate them.
is a common practice at HPC centers that is often embraced also by individual
programmers on their development machines. To support this common practice
Spack integrates with `Environment Modules
<http://modules.sourceforge.net/>`_ and `LMod
<http://lmod.readthedocs.io/en/latest/>`_ by
providing post-install hooks that generate module files and commands to manipulate them.
Modules are one of several ways you can use Spack packages. For other
options that may fit your use case better, you should also look at
:ref:`spack load <spack-load>` and :ref:`environments <environments>`.
.. _shell-support:
----------------------------
Using module files via Spack
@@ -62,9 +60,206 @@ to load the ``cmake`` module:
$ module load cmake-3.7.2-gcc-6.3.0-fowuuby
Neither of these is particularly pretty, easy to remember, or easy to
type. Luckily, Spack offers many facilities for customizing the module
scheme used at your site.
Neither of these is particularly pretty, easy to remember, or
easy to type. Luckily, Spack has its own interface for using modules.
^^^^^^^^^^^^^
Shell support
^^^^^^^^^^^^^
To enable additional Spack commands for loading and unloading module files,
and to add the correct path to ``MODULEPATH``, you need to source the appropriate
setup file in the ``$SPACK_ROOT/share/spack`` directory. This will activate shell
support for the commands that need it. For ``bash``, ``ksh`` or ``zsh`` users:
.. code-block:: console
$ . ${SPACK_ROOT}/share/spack/setup-env.sh
For ``csh`` and ``tcsh`` instead:
.. code-block:: console
$ set SPACK_ROOT ...
$ source $SPACK_ROOT/share/spack/setup-env.csh
Note that in the latter case it is necessary to explicitly set ``SPACK_ROOT``
before sourcing the setup file (you will get a meaningful error message
if you don't).
If you want to have Spack's shell support available on the command line at
any login you can put this source line in one of the files that are sourced
at startup (like ``.profile``, ``.bashrc`` or ``.cshrc``). Be aware though
that the startup time may be slightly increased because of that.
.. _cmd-spack-load:
^^^^^^^^^^^^^^^^^^^^^^^
``spack load / unload``
^^^^^^^^^^^^^^^^^^^^^^^
Once you have shell support enabled you can use the same spec syntax
you're used to and you can use the same shortened names you use
everywhere else in Spack.
For example this will add the ``mpich`` package built with ``gcc`` to your path:
.. code-block:: console
$ spack install mpich %gcc@4.4.7
# ... wait for install ...
$ spack load mpich %gcc@4.4.7
$ which mpicc
~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/mpich@3.0.4/bin/mpicc
These commands will add appropriate directories to your ``PATH``,
``MANPATH``, ``CPATH``, and ``LD_LIBRARY_PATH``. When you no longer
want to use a package, you can unload it similarly:
.. code-block:: console
$ spack unload mpich %gcc@4.4.7
.. note::
The ``load`` and ``unload`` subcommands are only available if you
have enabled Spack's shell support. These commands DO NOT use the
underlying Spack-generated module files.
^^^^^^^^^^^^^^^
Ambiguous specs
^^^^^^^^^^^^^^^
If a spec used with load/unload is ambiguous (i.e. more than one
installed package matches it), then Spack will warn you:
.. code-block:: console
$ spack load libelf
==> Error: libelf matches multiple packages.
Matching packages:
libelf@0.8.13%gcc@4.4.7 arch=linux-debian7-x86_64
libelf@0.8.13%intel@15.0.0 arch=linux-debian7-x86_64
Use a more specific spec
You can either type the ``spack load`` command again with a fully
qualified argument, or you can add just enough extra constraints to
identify one package. For example, above, the key differentiator is
that one ``libelf`` is built with the Intel compiler, while the other
used ``gcc``. You could therefore just type:
.. code-block:: console
$ spack load libelf %intel
To identify just the one built with the Intel compiler.
.. _cmd-spack-module-loads:
^^^^^^^^^^^^^^^^^^^^^^^^^^
``spack module tcl loads``
^^^^^^^^^^^^^^^^^^^^^^^^^^
In some cases, it is desirable to use a Spack-generated module, rather
than relying on Spack's built-in user-environment modification
capabilities. To translate a spec into a module name, use ``spack
module tcl loads`` or ``spack module lmod loads`` depending on the
module system desired.
To load not just a module, but also all the modules it depends on, use
the ``--dependencies`` option. This is not required for most modules
because Spack builds binaries with RPATH support. However, not all
packages use RPATH to find their dependencies: this can be true in
particular for Python extensions, which are currently *not* built with
RPATH.
Scripts to load modules recursively may be made with the command:
.. code-block:: console
$ spack module tcl loads --dependencies <spec>
An equivalent alternative using `process substitution <http://tldp.org/LDP/abs/html/process-sub.html>`_ is:
.. code-block :: console
$ source <( spack module tcl loads --dependencies <spec> )
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Module Commands for Shell Scripts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Although Spack is flexible, the ``module`` command is much faster.
This could become an issue when emitting a series of ``spack load``
commands inside a shell script. By adding the ``--dependencies`` flag,
``spack module tcl loads`` may also be used to generate code that can be
cut-and-pasted into a shell script. For example:
.. code-block:: console
$ spack module tcl loads --dependencies py-numpy git
# bzip2@1.0.6%gcc@4.9.3=linux-x86_64
module load bzip2-1.0.6-gcc-4.9.3-ktnrhkrmbbtlvnagfatrarzjojmkvzsx
# ncurses@6.0%gcc@4.9.3=linux-x86_64
module load ncurses-6.0-gcc-4.9.3-kaazyneh3bjkfnalunchyqtygoe2mncv
# zlib@1.2.8%gcc@4.9.3=linux-x86_64
module load zlib-1.2.8-gcc-4.9.3-v3ufwaahjnviyvgjcelo36nywx2ufj7z
# sqlite@3.8.5%gcc@4.9.3=linux-x86_64
module load sqlite-3.8.5-gcc-4.9.3-a3eediswgd5f3rmto7g3szoew5nhehbr
# readline@6.3%gcc@4.9.3=linux-x86_64
module load readline-6.3-gcc-4.9.3-se6r3lsycrwxyhreg4lqirp6xixxejh3
# python@3.5.1%gcc@4.9.3=linux-x86_64
module load python-3.5.1-gcc-4.9.3-5q5rsrtjld4u6jiicuvtnx52m7tfhegi
# py-setuptools@20.5%gcc@4.9.3=linux-x86_64
module load py-setuptools-20.5-gcc-4.9.3-4qr2suj6p6glepnedmwhl4f62x64wxw2
# py-nose@1.3.7%gcc@4.9.3=linux-x86_64
module load py-nose-1.3.7-gcc-4.9.3-pwhtjw2dvdvfzjwuuztkzr7b4l6zepli
# openblas@0.2.17%gcc@4.9.3+shared=linux-x86_64
module load openblas-0.2.17-gcc-4.9.3-pw6rmlom7apfsnjtzfttyayzc7nx5e7y
# py-numpy@1.11.0%gcc@4.9.3+blas+lapack=linux-x86_64
module load py-numpy-1.11.0-gcc-4.9.3-mulodttw5pcyjufva4htsktwty4qd52r
# curl@7.47.1%gcc@4.9.3=linux-x86_64
module load curl-7.47.1-gcc-4.9.3-ohz3fwsepm3b462p5lnaquv7op7naqbi
# autoconf@2.69%gcc@4.9.3=linux-x86_64
module load autoconf-2.69-gcc-4.9.3-bkibjqhgqm5e3o423ogfv2y3o6h2uoq4
# cmake@3.5.0%gcc@4.9.3~doc+ncurses+openssl~qt=linux-x86_64
module load cmake-3.5.0-gcc-4.9.3-x7xnsklmgwla3ubfgzppamtbqk5rwn7t
# expat@2.1.0%gcc@4.9.3=linux-x86_64
module load expat-2.1.0-gcc-4.9.3-6pkz2ucnk2e62imwakejjvbv6egncppd
# git@2.8.0-rc2%gcc@4.9.3+curl+expat=linux-x86_64
module load git-2.8.0-rc2-gcc-4.9.3-3bib4hqtnv5xjjoq5ugt3inblt4xrgkd
The script may be further edited by removing unnecessary modules.
^^^^^^^^^^^^^^^
Module Prefixes
^^^^^^^^^^^^^^^
On some systems, modules are automatically prefixed with a certain
string; ``spack module tcl loads`` needs to know about that prefix when it
issues ``module load`` commands. Add the ``--prefix`` option to your
``spack module tcl loads`` commands if this is necessary.
For example, consider the following on one system:
.. code-block:: console
$ module avail
linux-SuSE11-x86_64/antlr-2.7.7-gcc-5.3.0-bdpl46y
$ spack module tcl loads antlr # WRONG!
# antlr@2.7.7%gcc@5.3.0~csharp+cxx~java~python arch=linux-SuSE11-x86_64
module load antlr-2.7.7-gcc-5.3.0-bdpl46y
$ spack module tcl loads --prefix linux-SuSE11-x86_64/ antlr
# antlr@2.7.7%gcc@5.3.0~csharp+cxx~java~python arch=linux-SuSE11-x86_64
module load linux-SuSE11-x86_64/antlr-2.7.7-gcc-5.3.0-bdpl46y
-------------------------
Module file customization
@@ -394,32 +589,6 @@ that are already in the LMod hierarchy.
For hierarchies that are deeper than three layers, ``lmod spider`` may have some issues.
See `this discussion on the LMod project <https://github.com/TACC/Lmod/issues/114>`_.
.. _customize-env-modifications:
"""""""""""""""""""""""""""""""""""
Customize environment modifications
"""""""""""""""""""""""""""""""""""
You can control which prefixes in a Spack package are added to environment
variables with the ``prefix_inspections`` section; this section maps relative
prefixes to the list of environment variables which should be updated with
those prefixes.
.. code-block:: yaml
modules:
prefix_inspections:
bin:
- PATH
lib:
- LIBRARY_PATH
'':
- CMAKE_PREFIX_PATH
In this case, for a Spack package ``foo`` installed to ``/spack/prefix/foo``,
the generated module file for ``foo`` would update ``PATH`` to contain
``/spack/prefix/foo/bin``.
""""""""""""""""""""""""""""""""""""
Filter out environment modifications
""""""""""""""""""""""""""""""""""""
@@ -528,135 +697,3 @@ subcommand is ``rm``:
that already exist will ask for a confirmation by default. If the
command is used in a script, it is possible to pass the ``-y``
argument, which will skip this safety measure.
.. _modules-in-shell-scripts:
------------------------------------
Using Spack modules in shell scripts
------------------------------------
To enable additional Spack commands for loading and unloading
module files, and to add the correct path to ``MODULEPATH``, you need to
source the appropriate setup file. Assuming Spack is installed in
``$SPACK_ROOT``, run the appropriate command for your shell:
.. code-block:: console
# For bash/zsh/sh
$ . $SPACK_ROOT/share/spack/setup-env.sh
# For tcsh/csh
$ source $SPACK_ROOT/share/spack/setup-env.csh
# For fish
$ . $SPACK_ROOT/share/spack/setup-env.fish
If you want to have Spack's shell support available on the command line
at any login you can put this source line in one of the files that are
sourced at startup (like ``.profile``, ``.bashrc`` or ``.cshrc``). Be
aware that the shell startup time may increase slightly as a result.
.. _cmd-spack-module-loads:
^^^^^^^^^^^^^^^^^^^^^^^^^^
``spack module tcl loads``
^^^^^^^^^^^^^^^^^^^^^^^^^^
In some cases, it is desirable to use a Spack-generated module, rather
than relying on Spack's built-in user-environment modification
capabilities. To translate a spec into a module name, use ``spack
module tcl loads`` or ``spack module lmod loads`` depending on the
module system desired.
To load not just a module, but also all the modules it depends on, use
the ``--dependencies`` option. This is not required for most modules
because Spack builds binaries with RPATH support. However, not all
packages use RPATH to find their dependencies: this can be true in
particular for Python extensions, which are currently *not* built with
RPATH.
Scripts to load modules recursively may be made with the command:
.. code-block:: console
$ spack module tcl loads --dependencies <spec>
An equivalent alternative using `process substitution <http://tldp.org/LDP/abs/html/process-sub.html>`_ is:
.. code-block:: console
$ source <( spack module tcl loads --dependencies <spec> )
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Module Commands for Shell Scripts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Although Spack is flexible, the ``module`` command is much faster.
This could become an issue when emitting a series of ``spack load``
commands inside a shell script. By adding the ``--dependencies`` flag,
``spack module tcl loads`` may also be used to generate code that can be
cut-and-pasted into a shell script. For example:
.. code-block:: console
$ spack module tcl loads --dependencies py-numpy git
# bzip2@1.0.6%gcc@4.9.3=linux-x86_64
module load bzip2-1.0.6-gcc-4.9.3-ktnrhkrmbbtlvnagfatrarzjojmkvzsx
# ncurses@6.0%gcc@4.9.3=linux-x86_64
module load ncurses-6.0-gcc-4.9.3-kaazyneh3bjkfnalunchyqtygoe2mncv
# zlib@1.2.8%gcc@4.9.3=linux-x86_64
module load zlib-1.2.8-gcc-4.9.3-v3ufwaahjnviyvgjcelo36nywx2ufj7z
# sqlite@3.8.5%gcc@4.9.3=linux-x86_64
module load sqlite-3.8.5-gcc-4.9.3-a3eediswgd5f3rmto7g3szoew5nhehbr
# readline@6.3%gcc@4.9.3=linux-x86_64
module load readline-6.3-gcc-4.9.3-se6r3lsycrwxyhreg4lqirp6xixxejh3
# python@3.5.1%gcc@4.9.3=linux-x86_64
module load python-3.5.1-gcc-4.9.3-5q5rsrtjld4u6jiicuvtnx52m7tfhegi
# py-setuptools@20.5%gcc@4.9.3=linux-x86_64
module load py-setuptools-20.5-gcc-4.9.3-4qr2suj6p6glepnedmwhl4f62x64wxw2
# py-nose@1.3.7%gcc@4.9.3=linux-x86_64
module load py-nose-1.3.7-gcc-4.9.3-pwhtjw2dvdvfzjwuuztkzr7b4l6zepli
# openblas@0.2.17%gcc@4.9.3+shared=linux-x86_64
module load openblas-0.2.17-gcc-4.9.3-pw6rmlom7apfsnjtzfttyayzc7nx5e7y
# py-numpy@1.11.0%gcc@4.9.3+blas+lapack=linux-x86_64
module load py-numpy-1.11.0-gcc-4.9.3-mulodttw5pcyjufva4htsktwty4qd52r
# curl@7.47.1%gcc@4.9.3=linux-x86_64
module load curl-7.47.1-gcc-4.9.3-ohz3fwsepm3b462p5lnaquv7op7naqbi
# autoconf@2.69%gcc@4.9.3=linux-x86_64
module load autoconf-2.69-gcc-4.9.3-bkibjqhgqm5e3o423ogfv2y3o6h2uoq4
# cmake@3.5.0%gcc@4.9.3~doc+ncurses+openssl~qt=linux-x86_64
module load cmake-3.5.0-gcc-4.9.3-x7xnsklmgwla3ubfgzppamtbqk5rwn7t
# expat@2.1.0%gcc@4.9.3=linux-x86_64
module load expat-2.1.0-gcc-4.9.3-6pkz2ucnk2e62imwakejjvbv6egncppd
# git@2.8.0-rc2%gcc@4.9.3+curl+expat=linux-x86_64
module load git-2.8.0-rc2-gcc-4.9.3-3bib4hqtnv5xjjoq5ugt3inblt4xrgkd
The script may be further edited by removing unnecessary modules.
^^^^^^^^^^^^^^^
Module Prefixes
^^^^^^^^^^^^^^^
On some systems, modules are automatically prefixed with a certain
string; ``spack module tcl loads`` needs to know about that prefix when it
issues ``module load`` commands. Add the ``--prefix`` option to your
``spack module tcl loads`` commands if this is necessary.
For example, consider the following on one system:
.. code-block:: console
$ module avail
linux-SuSE11-x86_64/antlr-2.7.7-gcc-5.3.0-bdpl46y
$ spack module tcl loads antlr # WRONG!
# antlr@2.7.7%gcc@5.3.0~csharp+cxx~java~python arch=linux-SuSE11-x86_64
module load antlr-2.7.7-gcc-5.3.0-bdpl46y
$ spack module tcl loads --prefix linux-SuSE11-x86_64/ antlr
# antlr@2.7.7%gcc@5.3.0~csharp+cxx~java~python arch=linux-SuSE11-x86_64
module load linux-SuSE11-x86_64/antlr-2.7.7-gcc-5.3.0-bdpl46y

View File

@@ -10,8 +10,8 @@ Package List
============
This is a list of things you can install using Spack. It is
automatically generated based on the packages in this Spack
version.
automatically generated based on the packages in the latest Spack
release.
.. raw:: html
:file: package_list.html

View File

@@ -1778,18 +1778,8 @@ RPATHs in Spack are handled in one of three ways:
Parallel builds
---------------
Spack supports parallel builds on an individual package and at the
installation level. Package-level parallelism is established by the
``--jobs`` option and its configuration and package recipe equivalents.
Installation-level parallelism is driven by the DAG(s) of the requested
package or packages.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Package-level build parallelism
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By default, Spack will invoke ``make()``, or any other similar tool,
with a ``-j <njobs>`` argument, so those builds run in parallel.
with a ``-j <njobs>`` argument, so that builds run in parallel.
The parallelism is determined by the value of the ``build_jobs`` entry
in ``config.yaml`` (see :ref:`here <build-jobs>` for more details on
how this value is computed).
@@ -1837,43 +1827,6 @@ you set ``parallel`` to ``False`` at the package level, then each call
to ``make()`` will be sequential by default, but packagers can call
``make(parallel=True)`` to override it.
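As a sketch of how the two levels interact (the ``Foo`` package is
hypothetical):

.. code-block:: python

   class Foo(Package):
       # Every make() call in this package is sequential by default
       parallel = False

       def install(self, spec, prefix):
           make()                           # runs with no -j flag
           make('install', parallel=True)   # explicitly opt back in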
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Install-level build parallelism
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Spack supports the concurrent installation of packages within a Spack
instance across multiple processes using file system locks. This
parallelism is separate from the package-level parallelism achieved
through build systems' use of the ``-j <njobs>`` option. With
install-level parallelism, processes coordinate the installation of the
dependencies of specs provided on the command line and as part of an
environment build, with only **one process** being allowed to install a
given package at a time.
Refer to :ref:`Dependencies` for more information on dependencies and
:ref:`installing-environment` for how to install an environment.
Concurrent processes may be any combination of interactive sessions and
batch jobs, which means a ``spack install`` can be running in a terminal
window while a batch job is running ``spack install`` on the same or
overlapping dependencies, without any process trying to re-do the work
of another.
For example, if you are using SLURM, you could launch an installation
of ``mpich`` using the following command:
.. code-block:: console
$ srun -N 2 -n 8 spack install -j 4 mpich@3.3.2
This will create eight concurrent four-job installations on two
different nodes.
.. note::
The effective parallelism will be based on the maximum number of
packages that can be installed at the same time, which will be limited
by the number of packages with no (remaining) uninstalled dependencies.
.. _dependencies:
------------
@@ -3163,7 +3116,7 @@ differ from package to package. In order to make the ``install()`` method
independent of the choice of ``Blas`` implementation, each package which
provides it implements ``@property def blas_libs(self):`` to return an object
of
`LibraryList <https://spack.readthedocs.io/en/latest/llnl.util.html#llnl.util.filesystem.LibraryList>`_
`LibraryList <http://spack.readthedocs.io/en/latest/llnl.util.html#llnl.util.filesystem.LibraryList>`_
type which simplifies usage of a set of libraries.
The same applies to packages which provide ``Lapack`` and ``ScaLapack``.
Package developers are requested to use this interface. Common usage cases are:
@@ -3198,7 +3151,7 @@ Package developers are requested to use this interface. Common usage cases are:
For more information, see documentation of
`LibraryList <https://spack.readthedocs.io/en/latest/llnl.util.html#llnl.util.filesystem.LibraryList>`_
`LibraryList <http://spack.readthedocs.io/en/latest/llnl.util.html#llnl.util.filesystem.LibraryList>`_
class.
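As a brief, hedged illustration of the pattern (the configure flag is
hypothetical and the build system details will vary from package to
package):

.. code-block:: python

   def install(self, spec, prefix):
       blas = spec['blas'].libs   # a LibraryList for whichever Blas was chosen
       configure('--prefix={0}'.format(prefix),
                 '--with-blas={0}'.format(blas.ld_flags))
       make()
       make('install')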
@@ -3948,118 +3901,6 @@ using the ``run_before`` decorator.
.. _file-manipulation:
^^^^^^^^^^^^^
Install Tests
^^^^^^^^^^^^^
.. warning::
The API for adding and running install tests is not yet considered
stable and may change drastically in future releases. Packages with
upstreamed tests will be refactored to match changes to the API.
While build-tests are integrated with the build system, install tests
may be added to Spack packages to be run independently of the install
method.
Install tests may be added by defining a ``test`` method with the following signature:
.. code-block:: python
def test(self):
These tests will be run in an environment set up to provide access to
this package and all of its dependencies, including ``test``-type
dependencies. Inside the ``test`` method, standard python ``assert``
statements and other error reporting mechanisms can be used. Spack
will report any errors as a test failure.
Inside the test method, individual tests can be run separately (and
continue transparently after a test failure) using the ``run_test``
method. The signature for the ``run_test`` method is:
.. code-block:: python
def run_test(self, exe, options=[], expected=[], status=0, installed=False,
purpose='', skip_missing=False, work_dir=None):
This method will operate in ``work_dir`` if one is specified. It will
search for an executable in the ``PATH`` variable named ``exe``, and
if ``installed=True`` it will fail if that executable does not come
from the prefix of the package being tested. If the executable is not
found, it will fail the test unless ``skip_missing`` is set to
``True``. The executable will be run with the options specified, and
the return code will be checked against the ``status`` argument, which
can be an integer or list of integers. Spack will also check that
every string in ``expected`` is a regex matching part of the output of
the executable. The ``purpose`` argument is recorded in the test log
for debugging purposes.
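For instance, a hypothetical package might exercise an installed
executable like this (the executable name and expected output are
illustrative):

.. code-block:: python

   def test(self):
       # Verify the installed binary reports the expected version string
       self.run_test('example', options=['--version'],
                     expected=[r'example version \d+\.\d+'],
                     status=0, installed=True,
                     purpose='check example --version output')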
""""""""""""""""""""""""""""""""""""""
Install tests that require compilation
""""""""""""""""""""""""""""""""""""""
Some tests may require access to the compiler with which the package
was built, especially to test library-only packages. To ensure the
compiler is configured as part of the test environment, set the
attribute ``tests_require_compiler = True`` on the package. The
compiler will be available through the canonical environment variables
(``CC``, ``CXX``, ``FC``, ``F77``) in the test environment.
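A minimal sketch of a package opting in (the class is hypothetical):

.. code-block:: python

   class MyLib(Package):
       # Make CC, CXX, FC and F77 point at the compiler used for the
       # build when the test environment is set up
       tests_require_compiler = True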
""""""""""""""""""""""""""""""""""""""""""""""""
Install tests that require build-time components
""""""""""""""""""""""""""""""""""""""""""""""""
Some packages cannot be easily tested without components from the
build-time test suite. For those packages, the
``cache_extra_test_sources`` method can be used.
.. code-block:: python
@run_after('install')
def cache_test_sources(self):
srcs = ['./tests/foo.c', './tests/bar.c']
self.cache_extra_test_sources(srcs)
This method will copy the listed files into the metadata directory
of the package at the end of the install phase of the build. They will
be available to the test method in the directory
``self._extra_tests_path``.
While source files are generally recommended, for many packages
binaries may also technically be cached in this way for later testing.
"""""""""""""""""""""
Running install tests
"""""""""""""""""""""
Install tests can be run using the ``spack test run`` command. The
``spack test run`` command will create a ``test suite`` out of the
specs provided to it, or if no specs are provided it will test all
specs in the active environment, or all specs installed in Spack if no
environment is active. Test suites can be named using the ``--alias``
option; test suites not aliased will use the content hash of their
specs as their name.
Packages with install tests can be queried using the ``spack test list``
command, which outputs all installed packages with defined ``test``
methods.
Test suites can be found using the ``spack test find`` command. It
will list all test suites that have been run and have not been removed
using the ``spack test remove`` command. The ``spack test remove``
command will remove tests to declutter the test stage. The ``spack
test results`` command will show results for completed test suites.
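A hypothetical session tying these commands together (the package name
and alias are illustrative):

.. code-block:: console

   $ spack test list
   $ spack test run --alias mytests libelf
   $ spack test results mytests
   $ spack test find
   $ spack test remove mytests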
The test stage is the working directory for all install tests run with
Spack. By default, Spack uses ``~/.spack/test`` as the test stage. The
test stage can be set in the high-level config:
.. code-block:: yaml
config:
test_stage: /path/to/stage
---------------------------
File manipulation functions
---------------------------

View File

@@ -1034,6 +1034,171 @@ The main points that are implemented below:
- make -j 2
- make test
.. _workflow_create_docker_image:
-----------------------------------
Using Spack to Create Docker Images
-----------------------------------
Spack can be the ideal tool to set up images for Docker (and Singularity).
An example ``Dockerfile`` is given below; it downloads the latest spack
version.
The following functionality is prepared:
#. Base image: the example starts from a minimal ubuntu.
#. Pre-install the spack dependencies.
Package installs are followed by a clean-up of the system package index,
to avoid outdated information and to save space.
#. Install spack in ``/usr/local``.
Add ``setup-env.sh`` to profile scripts, so commands in *login* shells
can use the whole spack functionality, including modules.
#. Install an example package (``tar``).
As with system package managers above, ``spack install`` commands should be
concatenated with a ``&& spack clean -a`` in order to keep image sizes small.
#. Add a startup hook to an *interactive login shell* so spack modules will be
usable.
In order to build and run the image, execute:
.. code-block:: bash
docker build -t spack .
docker run -it spack
.. code-block:: docker

   FROM       ubuntu:16.04
   MAINTAINER Your Name <someone@example.com>

   # general environment for docker
   ENV        DEBIAN_FRONTEND=noninteractive \
              SPACK_ROOT=/usr/local

   # install minimal spack dependencies
   RUN        apt-get update \
              && apt-get install -y --no-install-recommends \
                 autoconf \
                 build-essential \
                 ca-certificates \
                 coreutils \
                 curl \
                 environment-modules \
                 git \
                 python \
                 unzip \
                 vim \
              && rm -rf /var/lib/apt/lists/*

   # load spack environment on login
   RUN        echo "source $SPACK_ROOT/share/spack/setup-env.sh" \
              > /etc/profile.d/spack.sh

   # spack settings
   # note: if you wish to change default settings, add files alongside
   #       the Dockerfile with your desired settings. Then uncomment this line
   #COPY      packages.yaml modules.yaml $SPACK_ROOT/etc/spack/

   # install spack
   RUN        curl -s -L https://api.github.com/repos/spack/spack/tarball \
              | tar xzC $SPACK_ROOT --strip 1
   # note: at this point one could also run ``spack bootstrap`` to avoid
   #       parts of the long apt-get install list above

   # install software
   RUN        spack install tar \
              && spack clean -a

   # need the executables from a package already during image build?
   #RUN       /bin/bash -l -c ' \
   #          spack load tar \
   #          && which tar'

   # image run hook: the -l will make sure /etc/profile environments are loaded
   CMD        /bin/bash -l
^^^^^^^^^^^^^^
Best Practices
^^^^^^^^^^^^^^
"""
MPI
"""
Because OpenMPI, Spack's default MPI implementation, depends on Fortran,
consider adding ``gfortran`` to the ``apt-get install`` list.
Recent versions of OpenMPI will require you to pass ``--allow-run-as-root``
to your ``mpirun`` calls if they are started as the root user inside Docker.
For execution on HPC clusters, it can be helpful to import the docker
image into Singularity in order to start a program with an *external*
MPI. Otherwise, also add ``openssh-server`` to the ``apt-get install`` list.
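For instance, an MPI program run as root inside the container would be
started as follows (``./my_mpi_app`` is a placeholder for your
executable):

.. code-block:: console

   $ mpirun --allow-run-as-root -np 2 ./my_mpi_app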
""""
CUDA
""""
Starting from CUDA 9.0, Nvidia provides minimal CUDA images based on
Ubuntu.
Please see `their instructions <https://hub.docker.com/r/nvidia/cuda/>`_.
Avoid double-installing CUDA by adding, e.g.
.. code-block:: yaml

   packages:
     cuda:
       externals:
       - spec: "cuda@9.0.176%gcc@5.4.0 arch=linux-ubuntu16-x86_64"
         prefix: /usr/local/cuda
       buildable: False
to your ``packages.yaml``.
Then ``COPY`` that file into the image, as in the example above.
Users will need either ``nvidia-docker`` or, e.g., Singularity to
*execute* device kernels.
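For example, uncommenting and adapting the ``COPY`` line from the
example ``Dockerfile`` above would make these settings available to
Spack inside the image:

.. code-block:: docker

   COPY packages.yaml $SPACK_ROOT/etc/spack/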
"""""""""""
Singularity
"""""""""""
Importing the image created above into
`Singularity <http://singularity.lbl.gov/>`_ and running it works like
a charm.
Just use the `docker bootstrapping mechanism <http://singularity.lbl.gov/quickstart#bootstrap-recipes>`_:
.. code-block:: none

   Bootstrap: docker
   From: registry/user/image:tag

   %runscript
   exec /bin/bash -l
""""""""""""""""""""""
Docker for Development
""""""""""""""""""""""
For examples of how we use docker in development, see
:ref:`docker_for_developers`.
"""""""""""""""""""""""""
Docker on Windows and OSX
"""""""""""""""""""""""""
On Mac OS and Windows, Docker runs on a hypervisor that is not allocated
much memory by default, and some Spack packages may fail to build due to
a lack of memory. To work around this issue, consider configuring your
Docker installation to use more of your host's memory. In some cases, you
can also ease the memory pressure on parallel builds by limiting the
parallelism in your ``config.yaml``:
.. code-block:: yaml

   config:
     build_jobs: 2
------------------
Upstream Bug Fixes
------------------


@@ -1 +0,0 @@
../cc


@@ -1 +0,0 @@
../cpp


@@ -1 +0,0 @@
../fc

lib/spack/env/cc vendored

@@ -22,7 +22,7 @@
# This is an array of environment variables that need to be set before
# the script runs. They are set by routines in spack.build_environment
# as part of the package installation process.
# as part of spack.package.Package.do_install().
parameters=(
SPACK_ENV_PATH
SPACK_DEBUG_LOG_DIR
@@ -107,25 +107,25 @@ case "$command" in
cpp)
mode=cpp
;;
cc|c89|c99|gcc|clang|armclang|icc|icx|pgcc|nvc|xlc|xlc_r|fcc)
cc|c89|c99|gcc|clang|armclang|icc|pgcc|xlc|xlc_r|fcc)
command="$SPACK_CC"
language="C"
comp="CC"
lang_flags=C
;;
c++|CC|g++|clang++|armclang++|icpc|icpx|pgc++|nvc++|xlc++|xlc++_r|FCC)
c++|CC|g++|clang++|armclang++|icpc|pgc++|xlc++|xlc++_r|FCC)
command="$SPACK_CXX"
language="C++"
comp="CXX"
lang_flags=CXX
;;
ftn|f90|fc|f95|gfortran|flang|armflang|ifort|ifx|pgfortran|nvfortran|xlf90|xlf90_r|nagfor|frt)
ftn|f90|fc|f95|gfortran|flang|armflang|ifort|pgfortran|xlf90|xlf90_r|nagfor|frt)
command="$SPACK_FC"
language="Fortran 90"
comp="FC"
lang_flags=F
;;
f77|xlf|xlf_r|pgf77|frt|flang)
f77|xlf|xlf_r|pgf77|frt)
command="$SPACK_F77"
language="Fortran 77"
comp="F77"


@@ -1 +0,0 @@
../cc


@@ -1 +0,0 @@
../cc


@@ -1 +0,0 @@
../cc


@@ -1 +0,0 @@
../cc


@@ -1 +0,0 @@
../cc


@@ -1 +0,0 @@
../cc


@@ -6,13 +6,6 @@
"""This module contains the following external, potentially separately
licensed, packages that are included in Spack:
archspec
--------
* Homepage: https://pypi.python.org/pypi/archspec
* Usage: Labeling, comparison and detection of microarchitectures
* Version: 0.1.2 (commit 2846749dc5b12ae2b30ff1d3f0270a4a5954710d)
argparse
--------


@@ -5,12 +5,9 @@
import _pytest._code
import py
try:
from collections.abc import Sequence
from collections import Sequence
except ImportError:
try:
from collections import Sequence
except ImportError:
Sequence = list
Sequence = list
u = py.builtin._totext


@@ -10,12 +10,9 @@
import _pytest._code
import py
try:
from collections.abc import MutableMapping as MappingMixin
from collections import MutableMapping as MappingMixin
except ImportError:
try:
from collections import MutableMapping as MappingMixin
except ImportError:
from UserDict import DictMixin as MappingMixin
from UserDict import DictMixin as MappingMixin
from _pytest.config import directory_arg, UsageError, hookimpl
from _pytest.outcomes import exit


@@ -398,10 +398,7 @@ def approx(expected, rel=None, abs=None, nan_ok=False):
__ https://docs.python.org/3/reference/datamodel.html#object.__ge__
"""
if sys.version_info >= (3, 3):
from collections.abc import Mapping, Sequence
else:
from collections import Mapping, Sequence
from collections import Mapping, Sequence
from _pytest.compat import STRING_TYPES as String
# Delegate the comparison to a class that knows how to deal with the type


@@ -1,22 +0,0 @@
Intellectual Property Notice
------------------------------
Archspec is licensed under the Apache License, Version 2.0 (LICENSE-APACHE
or http://www.apache.org/licenses/LICENSE-2.0) or the MIT license,
(LICENSE-MIT or http://opensource.org/licenses/MIT), at your option.
Copyrights and patents in the Archspec project are retained by contributors.
No copyright assignment is required to contribute to Archspec.
SPDX usage
------------
Individual files contain SPDX tags instead of the full license text.
This enables machine processing of license information based on the SPDX
License Identifiers that are available here: https://spdx.org/licenses/
Files that are dual-licensed as Apache-2.0 OR MIT contain the following
text in the license header:
SPDX-License-Identifier: (Apache-2.0 OR MIT)


@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,20 +0,0 @@
Copyright 2019-2020 Lawrence Livermore National Security, LLC and other
Archspec Project Developers. See the top-level COPYRIGHT file for details.
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.


@@ -1,68 +0,0 @@
[![](https://github.com/archspec/archspec/workflows/Unit%20tests/badge.svg)](https://github.com/archspec/archspec/actions)
[![codecov](https://codecov.io/gh/archspec/archspec/branch/master/graph/badge.svg)](https://codecov.io/gh/archspec/archspec)
[![Documentation Status](https://readthedocs.org/projects/archspec/badge/?version=latest)](https://archspec.readthedocs.io/en/latest/?badge=latest)
# Archspec (Python bindings)
Archspec aims at providing a standard set of human-understandable labels for
various aspects of a system architecture like CPU, network fabrics, etc. and
APIs to detect, query and compare them.
This project grew out of [Spack](https://spack.io/) and is currently under
active development. At present it supports APIs to detect and model
compatibility relationships among different CPU microarchitectures.
## Getting started with development
The `archspec` Python package needs [poetry](https://python-poetry.org/) to
be installed from VCS sources. The preferred method to install it is via
its custom installer outside of any virtual environment:
```console
$ curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python
```
You can refer to [Poetry's documentation](https://python-poetry.org/docs/#installation)
for further details or for other methods to install this tool. You'll also need `tox`
to run unit test:
```console
$ pip install --user tox
```
Finally you'll need to clone the repository:
```console
$ git clone --recursive https://github.com/archspec/archspec.git
```
### Running unit tests
Once you have your environment ready you can run `archspec` unit tests
using ``tox`` from the root of the repository:
```console
$ tox
[ ... ]
py27: commands succeeded
py35: commands succeeded
py36: commands succeeded
py37: commands succeeded
py38: commands succeeded
pylint: commands succeeded
flake8: commands succeeded
black: commands succeeded
congratulations :)
```
## License
Archspec is distributed under the terms of both the MIT license and the
Apache License (Version 2.0). Users may choose either license, at their
option.
All new contributions must be made under both the MIT and Apache-2.0
licenses.
See [LICENSE-MIT](https://github.com/archspec/archspec/blob/master/LICENSE-MIT),
[LICENSE-APACHE](https://github.com/archspec/archspec/blob/master/LICENSE-APACHE),
[COPYRIGHT](https://github.com/archspec/archspec/blob/master/COPYRIGHT), and
[NOTICE](https://github.com/archspec/archspec/blob/master/NOTICE) for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
LLNL-CODE-811653


@@ -1,2 +0,0 @@
"""Init file to avoid namespace packages"""
__version__ = "0.1.1"


@@ -1,24 +0,0 @@
# Copyright 2019-2020 Lawrence Livermore National Security, LLC and other
# Archspec Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""
archspec command line interface
"""
import click
import archspec
import archspec.cpu
@click.group(name="archspec")
@click.version_option(version=archspec.__version__)
def main():
"""archspec command line interface"""
@main.command()
def cpu():
"""archspec command line interface for CPU"""
click.echo(archspec.cpu.host())


@@ -1,20 +0,0 @@
# Copyright 2019-2020 Lawrence Livermore National Security, LLC and other
# Archspec Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""The "cpu" package permits to query and compare different
CPU microarchitectures.
"""
from .microarchitecture import Microarchitecture, UnsupportedMicroarchitecture
from .microarchitecture import TARGETS, generic_microarchitecture
from .microarchitecture import version_components
from .detect import host
__all__ = [
"Microarchitecture",
"UnsupportedMicroarchitecture",
"TARGETS",
"generic_microarchitecture",
"host",
"version_components",
]


@@ -1,88 +0,0 @@
# Copyright 2019-2020 Lawrence Livermore National Security, LLC and other
# Archspec Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Aliases for microarchitecture features."""
# pylint: disable=useless-object-inheritance
from .schema import TARGETS_JSON, LazyDictionary
_FEATURE_ALIAS_PREDICATE = {}
class FeatureAliasTest(object):
"""A test that must be passed for a feature alias to succeed.
Args:
rules (dict): dictionary of rules to be met. Each key must be a
valid alias predicate
"""
# pylint: disable=too-few-public-methods
def __init__(self, rules):
self.rules = rules
self.predicates = []
for name, args in rules.items():
self.predicates.append(_FEATURE_ALIAS_PREDICATE[name](args))
def __call__(self, microarchitecture):
return all(feature_test(microarchitecture) for feature_test in self.predicates)
def _feature_aliases():
"""Returns the dictionary of all defined feature aliases."""
json_data = TARGETS_JSON["feature_aliases"]
aliases = {}
for alias, rules in json_data.items():
aliases[alias] = FeatureAliasTest(rules)
return aliases
FEATURE_ALIASES = LazyDictionary(_feature_aliases)
def alias_predicate(func):
"""Decorator to register a predicate that can be used to evaluate
feature aliases.
"""
name = func.__name__
# Check we didn't register anything else with the same name
if name in _FEATURE_ALIAS_PREDICATE:
msg = 'the alias predicate "{0}" already exists'.format(name)
raise KeyError(msg)
_FEATURE_ALIAS_PREDICATE[name] = func
return func
@alias_predicate
def reason(_):
"""This predicate returns always True and it's there to allow writing
a documentation string in the JSON file to explain why an alias is needed.
"""
return lambda x: True
@alias_predicate
def any_of(list_of_features):
"""Returns a predicate that is True if any of the feature in the
list is in the microarchitecture being tested, False otherwise.
"""
def _impl(microarchitecture):
return any(x in microarchitecture for x in list_of_features)
return _impl
@alias_predicate
def families(list_of_families):
"""Returns a predicate that is True if the architecture family of
the microarchitecture being tested is in the list, False otherwise.
"""
def _impl(microarchitecture):
return str(microarchitecture.family) in list_of_families
return _impl


@@ -1,70 +0,0 @@
# Copyright 2019-2020 Lawrence Livermore National Security, LLC and other
# Archspec Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Global objects with the content of the microarchitecture
JSON file and its schema
"""
import json
import os.path
try:
from collections.abc import MutableMapping # novm
except ImportError:
from collections import MutableMapping
class LazyDictionary(MutableMapping):
"""Lazy dictionary that gets constructed on first access to any object key
Args:
factory (callable): factory function to construct the dictionary
"""
def __init__(self, factory, *args, **kwargs):
self.factory = factory
self.args = args
self.kwargs = kwargs
self._data = None
@property
def data(self):
"""Returns the lazily constructed dictionary"""
if self._data is None:
self._data = self.factory(*self.args, **self.kwargs)
return self._data
def __getitem__(self, key):
return self.data[key]
def __setitem__(self, key, value):
self.data[key] = value
def __delitem__(self, key):
del self.data[key]
def __iter__(self):
return iter(self.data)
def __len__(self):
return len(self.data)
def _load_json_file(json_file):
json_dir = os.path.join(os.path.dirname(__file__), "..", "json", "cpu")
json_dir = os.path.abspath(json_dir)
def _factory():
filename = os.path.join(json_dir, json_file)
with open(filename, "r") as file:
return json.load(file)
return _factory
#: In memory representation of the data in microarchitectures.json,
#: loaded on first access
TARGETS_JSON = LazyDictionary(_load_json_file("microarchitectures.json"))
#: JSON schema for microarchitectures.json, loaded on first access
SCHEMA = LazyDictionary(_load_json_file("microarchitectures_schema.json"))


@@ -1,22 +0,0 @@
Intellectual Property Notice
------------------------------
Archspec is licensed under the Apache License, Version 2.0 (LICENSE-APACHE
or http://www.apache.org/licenses/LICENSE-2.0) or the MIT license,
(LICENSE-MIT or http://opensource.org/licenses/MIT), at your option.
Copyrights and patents in the Archspec project are retained by contributors.
No copyright assignment is required to contribute to Archspec.
SPDX usage
------------
Individual files contain SPDX tags instead of the full license text.
This enables machine processing of license information based on the SPDX
License Identifiers that are available here: https://spdx.org/licenses/
Files that are dual-licensed as Apache-2.0 OR MIT contain the following
text in the license header:
SPDX-License-Identifier: (Apache-2.0 OR MIT)


@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,20 +0,0 @@
Copyright 2019-2020 Lawrence Livermore National Security, LLC and other
Archspec Project Developers. See the top-level COPYRIGHT file for details.
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.


@@ -1,21 +0,0 @@
This work was produced under the auspices of the U.S. Department of
Energy by Lawrence Livermore National Laboratory under Contract
DE-AC52-07NA27344.
This work was prepared as an account of work sponsored by an agency of
the United States Government. Neither the United States Government nor
Lawrence Livermore National Security, LLC, nor any of their employees
makes any warranty, expressed or implied, or assumes any legal liability
or responsibility for the accuracy, completeness, or usefulness of any
information, apparatus, product, or process disclosed, or represents that
its use would not infringe privately owned rights.
Reference herein to any specific commercial product, process, or service
by trade name, trademark, manufacturer, or otherwise does not necessarily
constitute or imply its endorsement, recommendation, or favoring by the
United States Government or Lawrence Livermore National Security, LLC.
The views and opinions of authors expressed herein do not necessarily
state or reflect those of the United States Government or Lawrence
Livermore National Security, LLC, and shall not be used for advertising
or product endorsement purposes.


@@ -1,36 +0,0 @@
[![](https://github.com/archspec/archspec-json/workflows/JSON%20Validation/badge.svg)](https://github.com/archspec/archspec-json/actions)
# Archspec-json
The [archspec-json](https://github.com/archspec/archspec-json) repository is part of the
[Archspec](https://github.com/archspec) project. It contains data on various architectural
aspects of a platform stored in JSON format and is meant to be used as a base to develop
language specific APIs.
Currently the repository contains the following JSON files:
```console
.
├── COPYRIGHT
└── cpu
   ├── microarchitectures.json # Contains information on CPU microarchitectures
   └── microarchitectures_schema.json # Schema for the file above
```
## License
Archspec is distributed under the terms of both the MIT license and the
Apache License (Version 2.0). Users may choose either license, at their
option.
All new contributions must be made under both the MIT and Apache-2.0
licenses.
See [LICENSE-MIT](https://github.com/archspec/archspec-json/blob/master/LICENSE-MIT),
[LICENSE-APACHE](https://github.com/archspec/archspec-json/blob/master/LICENSE-APACHE),
[COPYRIGHT](https://github.com/archspec/archspec-json/blob/master/COPYRIGHT), and
[NOTICE](https://github.com/archspec/archspec-json/blob/master/NOTICE) for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
LLNL-CODE-811653


@@ -1,110 +0,0 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Schema for microarchitecture definitions and feature aliases",
"type": "object",
"additionalProperties": false,
"properties": {
"microarchitectures": {
"type": "object",
"patternProperties": {
"([\\w]*)": {
"type": "object",
"properties": {
"from": {
"$comment": "More than one parent",
"type": "array",
"items": {
"type": "string"
}
},
"vendor": {
"type": "string"
},
"features": {
"type": "array",
"items": {
"type": "string"
}
},
"compilers": {
"type": "object",
"patternProperties": {
"([\\w]*)": {
"$comment": "Permit multiple entries since compilers change options across versions",
"type": "array",
"items": {
"type": "object",
"properties": {
"versions": {
"type": "string"
},
"name": {
"type": "string"
},
"flags": {
"type": "string"
}
},
"required": [
"versions",
"flags"
]
}
}
}
}
},
"required": [
"from",
"vendor",
"features"
]
}
}
},
"feature_aliases": {
"type": "object",
"patternProperties": {
"([\\w]*)": {
"type": "object",
"properties": {
"reason": {
"$comment": "Comment containing the reason why an alias is there",
"type": "string"
},
"any_of": {
"$comment": "The alias is true if any of the items is a feature of the target",
"type": "array",
"items": {
"type": "string"
}
},
"families": {
"$comment": "The alias is true if the family of the target is in this list",
"type": "array",
"items": {
"type": "string"
}
}
},
"additionalProperties": false
}
}
},
"conversions": {
"type": "object",
"properties": {
"description": {
"type": "string"
},
"arm_vendors": {
"type": "object"
},
"darwin_flags": {
"type": "object"
}
},
"additionalProperties": false
}
}
}


@@ -315,14 +315,10 @@ def __repr__(self):
# register the context as mapping if possible
try:
from collections.abc import Mapping
from collections import Mapping
Mapping.register(Context)
except ImportError:
try:
from collections import Mapping
Mapping.register(Context)
except ImportError:
pass
pass
class BlockReference(object):


@@ -14,7 +14,7 @@
"""
import types
import operator
import sys
from collections import Mapping
from jinja2.environment import Environment
from jinja2.exceptions import SecurityError
from jinja2._compat import string_types, PY2
@@ -23,11 +23,6 @@
from markupsafe import EscapeFormatter
from string import Formatter
if sys.version_info >= (3, 3):
from collections.abc import Mapping
else:
from collections import Mapping
#: maximum number of items a range may produce
MAX_RANGE = 100000
@@ -84,10 +79,7 @@
pass
#: register Python 2.6 abstract base classes
if sys.version_info >= (3, 3):
from collections.abc import MutableSet, MutableMapping, MutableSequence
else:
from collections import MutableSet, MutableMapping, MutableSequence
from collections import MutableSet, MutableMapping, MutableSequence
_mutable_set_types += (MutableSet,)
_mutable_mapping_types += (MutableMapping,)
_mutable_sequence_types += (MutableSequence,)


@@ -10,16 +10,11 @@
"""
import operator
import re
import sys
from collections import Mapping
from jinja2.runtime import Undefined
from jinja2._compat import text_type, string_types, integer_types
import decimal
if sys.version_info >= (3, 3):
from collections.abc import Mapping
else:
from collections import Mapping
number_re = re.compile(r'^-?\d+(\.\d+)?$')
regex_type = type(number_re)


@@ -482,14 +482,10 @@ def __reversed__(self):
# register the LRU cache as mutable mapping if possible
try:
from collections.abc import MutableMapping
from collections import MutableMapping
MutableMapping.register(LRUCache)
except ImportError:
try:
from collections import MutableMapping
MutableMapping.register(LRUCache)
except ImportError:
pass
pass
def select_autoescape(enabled_extensions=('html', 'htm', 'xml'),


@@ -10,15 +10,10 @@
"""
import re
import string
import sys
from collections import Mapping
from markupsafe._compat import text_type, string_types, int_types, \
unichr, iteritems, PY2
if sys.version_info >= (3, 3):
from collections.abc import Mapping
else:
from collections import Mapping
__version__ = "1.0"
__all__ = ['Markup', 'soft_unicode', 'escape', 'escape_silent']


@@ -9,12 +9,7 @@
a separate base
"""
import sys
if sys.version_info >= (3, 3):
from collections.abc import MutableSet
else:
from collections import MutableSet
from collections import MutableSet
__all__ = ["CommentedSeq", "CommentedMap", "CommentedOrderedMap",
"CommentedSet", 'comment_attrib', 'merge_attrib']


@@ -12,12 +12,9 @@
from ruamel.ordereddict import ordereddict
except:
try:
from collections.abc import OrderedDict
from collections import OrderedDict
except ImportError:
try:
from collections import OrderedDict
except ImportError:
from ordereddict import OrderedDict
from ordereddict import OrderedDict
# to get the right name import ... as ordereddict doesn't do that
class ordereddict(OrderedDict):


@@ -3,6 +3,7 @@
from __future__ import absolute_import
from __future__ import print_function
import collections
import datetime
import base64
import binascii
@@ -25,12 +26,6 @@
from ruamel.yaml.scalarstring import * # NOQA
if sys.version_info >= (3, 3):
from collections.abc import Hashable
else:
from collections import Hashable
__all__ = ['BaseConstructor', 'SafeConstructor', 'Constructor',
'ConstructorError', 'RoundTripConstructor']
@@ -168,7 +163,7 @@ def construct_mapping(self, node, deep=False):
# keys can be list -> deep
key = self.construct_object(key_node, deep=True)
# lists are not hashable, but tuples are
if not isinstance(key, Hashable):
if not isinstance(key, collections.Hashable):
if isinstance(key, list):
key = tuple(key)
if PY2:
@@ -180,7 +175,7 @@ def construct_mapping(self, node, deep=False):
"found unacceptable key (%s)" %
exc, key_node.start_mark)
else:
if not isinstance(key, Hashable):
if not isinstance(key, collections.Hashable):
raise ConstructorError(
"while constructing a mapping", node.start_mark,
"found unhashable key", key_node.start_mark)
@@ -964,7 +959,7 @@ def construct_mapping(self, node, maptyp, deep=False):
# keys can be list -> deep
key = self.construct_object(key_node, deep=True)
# lists are not hashable, but tuples are
if not isinstance(key, Hashable):
if not isinstance(key, collections.Hashable):
if isinstance(key, list):
key = tuple(key)
if PY2:
@@ -976,7 +971,7 @@ def construct_mapping(self, node, maptyp, deep=False):
"found unacceptable key (%s)" %
exc, key_node.start_mark)
else:
if not isinstance(key, Hashable):
if not isinstance(key, collections.Hashable):
raise ConstructorError(
"while constructing a mapping", node.start_mark,
"found unhashable key", key_node.start_mark)
@@ -1008,7 +1003,7 @@ def construct_setting(self, node, typ, deep=False):
# keys can be list -> deep
key = self.construct_object(key_node, deep=True)
# lists are not hashable, but tuples are
if not isinstance(key, Hashable):
if not isinstance(key, collections.Hashable):
if isinstance(key, list):
key = tuple(key)
if PY2:
@@ -1020,7 +1015,7 @@ def construct_setting(self, node, typ, deep=False):
"found unacceptable key (%s)" %
exc, key_node.start_mark)
else:
if not isinstance(key, Hashable):
if not isinstance(key, collections.Hashable):
raise ConstructorError(
"while constructing a mapping", node.start_mark,
"found unhashable key", key_node.start_mark)


@@ -0,0 +1,18 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from .microarchitecture import Microarchitecture, UnsupportedMicroarchitecture
from .microarchitecture import targets, generic_microarchitecture
from .microarchitecture import version_components
from .detect import host
__all__ = [
'Microarchitecture',
'UnsupportedMicroarchitecture',
'targets',
'generic_microarchitecture',
'host',
'version_components'
]


@@ -0,0 +1,102 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
#: Known predicates that can be used to construct feature aliases
from .schema import targets_json, LazyDictionary, properties
_feature_alias_predicate = {}
class FeatureAliasTest(object):
"""A test that must be passed for a feature alias to succeed.
Args:
rules (dict): dictionary of rules to be met. Each key must be a
valid alias predicate
"""
def __init__(self, rules):
self.rules = rules
self.predicates = []
for name, args in rules.items():
self.predicates.append(_feature_alias_predicate[name](args))
def __call__(self, microarchitecture):
return all(
feature_test(microarchitecture) for feature_test in self.predicates
)
def _feature_aliases():
"""Returns the dictionary of all defined feature aliases."""
json_data = targets_json['feature_aliases']
aliases = {}
for alias, rules in json_data.items():
aliases[alias] = FeatureAliasTest(rules)
return aliases
feature_aliases = LazyDictionary(_feature_aliases)
def alias_predicate(predicate_schema):
"""Decorator to register a predicate that can be used to define
feature aliases.
Args:
predicate_schema (dict): schema to be enforced in
microarchitectures.json for the predicate
"""
def decorator(func):
name = func.__name__
# Check we didn't register anything else with the same name
if name in _feature_alias_predicate:
msg = 'the alias predicate "{0}" already exists'.format(name)
raise KeyError(msg)
# Update the overall schema
alias_schema = properties['feature_aliases']['patternProperties']
alias_schema[r'([\w]*)']['properties'].update(
{name: predicate_schema}
)
# Register the predicate
_feature_alias_predicate[name] = func
return func
return decorator
@alias_predicate(predicate_schema={'type': 'string'})
def reason(motivation_for_the_alias):
"""This predicate returns always True and it's there to allow writing
a documentation string in the JSON file to explain why an alias is needed.
"""
return lambda x: True
@alias_predicate(predicate_schema={
'type': 'array',
'items': {'type': 'string'}
})
def any_of(list_of_features):
"""Returns a predicate that is True if any of the feature in the
list is in the microarchitecture being tested, False otherwise.
"""
def _impl(microarchitecture):
return any(x in microarchitecture for x in list_of_features)
return _impl
@alias_predicate(predicate_schema={
'type': 'array',
'items': {'type': 'string'}
})
def families(list_of_families):
"""Returns a predicate that is True if the architecture family of
the microarchitecture being tested is in the list, False otherwise.
"""
def _impl(microarchitecture):
return str(microarchitecture.family) in list_of_families
return _impl


@@ -1,8 +1,7 @@
# Copyright 2019-2020 Lawrence Livermore National Security, LLC and other
# Archspec Project Developers. See the top-level COPYRIGHT file for details.
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Detection of CPU microarchitectures"""
import collections
import functools
import os
@@ -13,16 +12,16 @@
import six
from .microarchitecture import generic_microarchitecture, TARGETS
from .schema import TARGETS_JSON
from .microarchitecture import generic_microarchitecture, targets
from .schema import targets_json
#: Mapping from operating systems to chain of commands
#: to obtain a dictionary of raw info on the current cpu
INFO_FACTORY = collections.defaultdict(list)
info_factory = collections.defaultdict(list)
#: Mapping from micro-architecture families (x86_64, ppc64le, etc.) to
#: functions checking the compatibility of the host with a given target
COMPATIBILITY_CHECKS = {}
compatibility_checks = {}
def info_dict(operating_system):
@@ -33,9 +32,8 @@ def info_dict(operating_system):
operating_system (str or tuple): operating system for which the marked
function is a viable factory of raw info dictionaries.
"""
def decorator(factory):
INFO_FACTORY[operating_system].append(factory)
info_factory[operating_system].append(factory)
@functools.wraps(factory)
def _impl():
@@ -43,10 +41,10 @@ def _impl():
# Check that info contains a few mandatory fields
msg = 'field "{0}" is missing from raw info dictionary'
assert "vendor_id" in info, msg.format("vendor_id")
assert "flags" in info, msg.format("flags")
assert "model" in info, msg.format("model")
assert "model_name" in info, msg.format("model_name")
assert 'vendor_id' in info, msg.format('vendor_id')
assert 'flags' in info, msg.format('flags')
assert 'model' in info, msg.format('model')
assert 'model_name' in info, msg.format('model_name')
return info
@@ -55,15 +53,15 @@ def _impl():
return decorator
@info_dict(operating_system="Linux")
@info_dict(operating_system='Linux')
def proc_cpuinfo():
"""Returns a raw info dictionary by parsing the first entry of
``/proc/cpuinfo``
"""
info = {}
with open("/proc/cpuinfo") as file:
with open('/proc/cpuinfo') as file:
for line in file:
key, separator, value = line.partition(":")
key, separator, value = line.partition(':')
# If there's no separator and info was already populated
# according to what's written here:
@@ -72,43 +70,44 @@ def proc_cpuinfo():
#
# we are on a blank line separating two cpus. Exit early as
# we want to read just the first entry in /proc/cpuinfo
if separator != ":" and info:
if separator != ':' and info:
break
info[key.strip()] = value.strip()
return info
def _check_output(args, env):
output = subprocess.Popen(args, stdout=subprocess.PIPE, env=env).communicate()[0]
return six.text_type(output.decode("utf-8"))
def check_output(args, env):
output = subprocess.Popen(
args, stdout=subprocess.PIPE, env=env
).communicate()[0]
return six.text_type(output.decode('utf-8'))
@info_dict(operating_system="Darwin")
@info_dict(operating_system='Darwin')
def sysctl_info_dict():
"""Returns a raw info dictionary parsing the output of sysctl."""
# Make sure that /sbin and /usr/sbin are in PATH as sysctl is
# usually found there
child_environment = dict(os.environ.items())
search_paths = child_environment.get("PATH", "").split(os.pathsep)
for additional_path in ("/sbin", "/usr/sbin"):
search_paths = child_environment.get('PATH', '').split(os.pathsep)
for additional_path in ('/sbin', '/usr/sbin'):
if additional_path not in search_paths:
search_paths.append(additional_path)
child_environment["PATH"] = os.pathsep.join(search_paths)
child_environment['PATH'] = os.pathsep.join(search_paths)
def sysctl(*args):
return _check_output(["sysctl"] + list(args), env=child_environment).strip()
return check_output(
['sysctl'] + list(args), env=child_environment
).strip()
flags = (
sysctl("-n", "machdep.cpu.features").lower()
+ " "
+ sysctl("-n", "machdep.cpu.leaf7_features").lower()
)
flags = (sysctl('-n', 'machdep.cpu.features').lower() + ' '
+ sysctl('-n', 'machdep.cpu.leaf7_features').lower())
info = {
"vendor_id": sysctl("-n", "machdep.cpu.vendor"),
"flags": flags,
"model": sysctl("-n", "machdep.cpu.model"),
"model name": sysctl("-n", "machdep.cpu.brand_string"),
'vendor_id': sysctl('-n', 'machdep.cpu.vendor'),
'flags': flags,
'model': sysctl('-n', 'machdep.cpu.model'),
'model name': sysctl('-n', 'machdep.cpu.brand_string')
}
return info
@@ -118,16 +117,16 @@ def adjust_raw_flags(info):
slightly different representations.
"""
# Flags detected on Darwin turned to their linux counterpart
flags = info.get("flags", [])
d2l = TARGETS_JSON["conversions"]["darwin_flags"]
flags = info.get('flags', [])
d2l = targets_json['conversions']['darwin_flags']
for darwin_flag, linux_flag in d2l.items():
if darwin_flag in flags:
info["flags"] += " " + linux_flag
info['flags'] += ' ' + linux_flag
def adjust_raw_vendor(info):
"""Adjust the vendor field to make it human readable"""
if "CPU implementer" not in info:
if 'CPU implementer' not in info:
return
# Mapping numeric codes to vendor (ARM). This list is a merge from
@@ -137,10 +136,10 @@ def adjust_raw_vendor(info):
# https://developer.arm.com/docs/ddi0487/latest/arm-architecture-reference-manual-armv8-for-armv8-a-architecture-profile
# https://github.com/gcc-mirror/gcc/blob/master/gcc/config/aarch64/aarch64-cores.def
# https://patchwork.kernel.org/patch/10524949/
arm_vendors = TARGETS_JSON["conversions"]["arm_vendors"]
arm_code = info["CPU implementer"]
arm_vendors = targets_json['conversions']['arm_vendors']
arm_code = info['CPU implementer']
if arm_code in arm_vendors:
info["CPU implementer"] = arm_vendors[arm_code]
info['CPU implementer'] = arm_vendors[arm_code]
def raw_info_dictionary():
@@ -149,13 +148,12 @@ def raw_info_dictionary():
This function calls all the viable factories one after the other until
there's one that is able to produce the requested information.
"""
# pylint: disable=broad-except
info = {}
for factory in INFO_FACTORY[platform.system()]:
for factory in info_factory[platform.system()]:
try:
info = factory()
except Exception as exc:
warnings.warn(str(exc))
except Exception as e:
warnings.warn(str(e))
if info:
adjust_raw_flags(info)
@@ -175,10 +173,9 @@ def compatible_microarchitectures(info):
architecture_family = platform.machine()
# If a tester is not registered, be conservative and assume no known
# target is compatible with the host
tester = COMPATIBILITY_CHECKS.get(architecture_family, lambda x, y: False)
return [x for x in TARGETS.values() if tester(info, x)] or [
generic_microarchitecture(architecture_family)
]
tester = compatibility_checks.get(architecture_family, lambda x, y: False)
return [x for x in targets.values() if tester(info, x)] or \
[generic_microarchitecture(architecture_family)]
def host():
@@ -191,9 +188,7 @@ def host():
# Reverse sort of the depth for the inheritance tree among only targets we
# can use. This gets the newest target we satisfy.
return sorted(
candidates, key=lambda t: (len(t.ancestors), len(t.features)), reverse=True
)[0]
return sorted(candidates, key=lambda t: len(t.ancestors), reverse=True)[0]
def compatibility_check(architecture_family):
@@ -212,59 +207,50 @@ def compatibility_check(architecture_family):
architecture_family = (architecture_family,)
def decorator(func):
# pylint: disable=fixme
# TODO: on removal of Python 2.6 support this can be re-written as
# TODO: an update + a dict comprehension
for arch_family in architecture_family:
COMPATIBILITY_CHECKS[arch_family] = func
compatibility_checks[arch_family] = func
return func
return decorator
@compatibility_check(architecture_family=("ppc64le", "ppc64"))
@compatibility_check(architecture_family=('ppc64le', 'ppc64'))
def compatibility_check_for_power(info, target):
"""Compatibility check for PPC64 and PPC64LE architectures."""
basename = platform.machine()
generation_match = re.search(r"POWER(\d+)", info.get("cpu", ""))
generation_match = re.search(r'POWER(\d+)', info.get('cpu', ''))
generation = int(generation_match.group(1))
# We can use a target if it descends from our machine type and our
# generation (9 for POWER9, etc) is at least its generation.
arch_root = TARGETS[basename]
return (
target == arch_root or arch_root in target.ancestors
) and target.generation <= generation
arch_root = targets[basename]
return (target == arch_root or arch_root in target.ancestors) \
and target.generation <= generation
@compatibility_check(architecture_family="x86_64")
@compatibility_check(architecture_family='x86_64')
def compatibility_check_for_x86_64(info, target):
"""Compatibility check for x86_64 architectures."""
basename = "x86_64"
vendor = info.get("vendor_id", "generic")
features = set(info.get("flags", "").split())
basename = 'x86_64'
vendor = info.get('vendor_id', 'generic')
features = set(info.get('flags', '').split())
# We can use a target if it descends from our machine type, is from our
# vendor, and we have all of its features
arch_root = TARGETS[basename]
return (
(target == arch_root or arch_root in target.ancestors)
and (target.vendor == vendor or target.vendor == "generic")
arch_root = targets[basename]
return (target == arch_root or arch_root in target.ancestors) \
and (target.vendor == vendor or target.vendor == 'generic') \
and target.features.issubset(features)
)
@compatibility_check(architecture_family="aarch64")
@compatibility_check(architecture_family='aarch64')
def compatibility_check_for_aarch64(info, target):
"""Compatibility check for AARCH64 architectures."""
basename = "aarch64"
features = set(info.get("Features", "").split())
vendor = info.get("CPU implementer", "generic")
basename = 'aarch64'
features = set(info.get('Features', '').split())
vendor = info.get('CPU implementer', 'generic')
arch_root = TARGETS[basename]
return (
(target == arch_root or arch_root in target.ancestors)
and (target.vendor == vendor or target.vendor == "generic")
arch_root = targets[basename]
return (target == arch_root or arch_root in target.ancestors) \
and (target.vendor == vendor or target.vendor == 'generic') \
and target.features.issubset(features)
)


@@ -1,81 +1,81 @@
# Copyright 2019-2020 Lawrence Livermore National Security, LLC and other
# Archspec Project Developers. See the top-level COPYRIGHT file for details.
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Types and functions to manage information
on CPU microarchitectures.
"""
# pylint: disable=useless-object-inheritance
import functools
import platform
import re
import warnings
try:
from collections.abc import Sequence # novm
except ImportError:
from collections import Sequence
import six
import archspec
import archspec.cpu.alias
import archspec.cpu.schema
from .alias import FEATURE_ALIASES
import llnl.util
import llnl.util.cpu.alias
import llnl.util.cpu.schema
from .schema import LazyDictionary
from .alias import feature_aliases
def coerce_target_names(func):
"""Decorator that automatically converts a known target name to a proper
Microarchitecture object.
"""
@functools.wraps(func)
def _impl(self, other):
if isinstance(other, six.string_types):
if other not in TARGETS:
if other not in targets:
msg = '"{0}" is not a valid target name'
raise ValueError(msg.format(other))
other = TARGETS[other]
other = targets[other]
return func(self, other)
return _impl
class Microarchitecture(object):
"""Represents a specific CPU micro-architecture.
Args:
name (str): name of the micro-architecture (e.g. skylake).
parents (list): list of parents micro-architectures, if any.
Parenthood is considered by cpu features and not
chronologically. As such each micro-architecture is
compatible with its ancestors. For example "skylake",
which has "broadwell" as a parent, supports running binaries
optimized for "broadwell".
vendor (str): vendor of the micro-architecture
features (list of str): supported CPU flags. Note that the semantic
of the flags in this field might vary among architectures, if
at all present. For instance x86_64 processors will list all
the flags supported by a given CPU while Arm processors will
list instead only the flags that have been added on top of the
base model for the current micro-architecture.
compilers (dict): compiler support to generate tuned code for this
micro-architecture. This dictionary has as keys names of
supported compilers, while values are list of dictionaries
with fields:
* name: name of the micro-architecture according to the
compiler. This is the name passed to the ``-march`` option
or similar. Not needed if the name is the same as that
passed in as argument above.
* versions: versions that support this micro-architecture.
generation (int): generation of the micro-architecture, if
relevant.
"""
# pylint: disable=too-many-arguments
#: Aliases for micro-architecture's features
feature_aliases = FEATURE_ALIASES
feature_aliases = feature_aliases
def __init__(self, name, parents, vendor, features, compilers, generation=0):
def __init__(
self, name, parents, vendor, features, compilers, generation=0
):
"""Represents a specific CPU micro-architecture.
Args:
name (str): name of the micro-architecture (e.g. skylake).
parents (list): list of parents micro-architectures, if any.
Parenthood is considered by cpu features and not
chronologically. As such each micro-architecture is
compatible with its ancestors. For example "skylake",
which has "broadwell" as a parent, supports running binaries
optimized for "broadwell".
vendor (str): vendor of the micro-architecture
features (list of str): supported CPU flags. Note that the semantic
of the flags in this field might vary among architectures, if
at all present. For instance x86_64 processors will list all
the flags supported by a given CPU while Arm processors will
list instead only the flags that have been added on top of the
base model for the current micro-architecture.
compilers (dict): compiler support to generate tuned code for this
micro-architecture. This dictionary has as keys names of
supported compilers, while values are list of dictionaries
with fields:
* name: name of the micro-architecture according to the
compiler. This is the name passed to the ``-march`` option
or similar. Not needed if the name is the same as that
passed in as argument above.
* versions: versions that support this micro-architecture.
generation (int): generation of the micro-architecture, if
relevant.
"""
self.name = name
self.parents = parents
self.vendor = vendor
@@ -85,7 +85,6 @@ def __init__(self, name, parents, vendor, features, compilers, generation=0):
@property
def ancestors(self):
"""All the ancestors of this microarchitecture."""
value = self.parents[:]
for parent in self.parents:
value.extend(a for a in parent.ancestors if a not in value)
@@ -102,14 +101,12 @@ def __eq__(self, other):
if not isinstance(other, Microarchitecture):
return NotImplemented
return (
self.name == other.name
and self.vendor == other.vendor
and self.features == other.features
and self.ancestors == other.ancestors
and self.compilers == other.compilers
and self.generation == other.generation
)
return (self.name == other.name and
self.vendor == other.vendor and
self.features == other.features and
self.ancestors == other.ancestors and
self.compilers == other.compilers and
self.generation == other.generation)
@coerce_target_names
def __ne__(self, other):
@@ -139,10 +136,8 @@ def __ge__(self, other):
def __repr__(self):
cls_name = self.__class__.__name__
fmt = (
cls_name + "({0.name!r}, {0.parents!r}, {0.vendor!r}, "
"{0.features!r}, {0.compilers!r}, {0.generation!r})"
)
fmt = cls_name + '({0.name!r}, {0.parents!r}, {0.vendor!r}, ' \
'{0.features!r}, {0.compilers!r}, {0.generation!r})'
return fmt.format(self)
def __str__(self):
@@ -151,7 +146,7 @@ def __str__(self):
def __contains__(self, feature):
# Feature must be of a string type, so be defensive about that
if not isinstance(feature, six.string_types):
msg = "only objects of string types are accepted [got {0}]"
msg = 'only objects of string types are accepted [got {0}]'
raise TypeError(msg.format(str(type(feature))))
# Here we look first in the raw features, and fall-back to
@@ -160,7 +155,9 @@ def __contains__(self, feature):
return True
# Check if the alias is defined, if not it will return False
match_alias = Microarchitecture.feature_aliases.get(feature, lambda x: False)
match_alias = Microarchitecture.feature_aliases.get(
feature, lambda x: False
)
return match_alias(self)
@property
@@ -168,7 +165,7 @@ def family(self):
"""Returns the architecture family a given target belongs to"""
roots = [x for x in [self] + self.ancestors if not x.ancestors]
msg = "a target is expected to belong to just one architecture family"
msg += "[found {0}]".format(", ".join(str(x) for x in roots))
msg += "[found {0}]".format(', '.join(str(x) for x in roots))
assert len(roots) == 1, msg
return roots.pop()
@@ -181,11 +178,13 @@ def to_dict(self, return_list_of_items=False):
items instead of the dictionary
"""
list_of_items = [
("name", str(self.name)),
("vendor", str(self.vendor)),
("features", sorted(str(x) for x in self.features)),
("generation", self.generation),
("parents", [str(x) for x in self.parents]),
('name', str(self.name)),
('vendor', str(self.vendor)),
('features', sorted(
str(x) for x in self.features
)),
('generation', self.generation),
('parents', [str(x) for x in self.parents])
]
if return_list_of_items:
return list_of_items
@@ -205,18 +204,19 @@ def optimization_flags(self, compiler, version):
compiler (str): name of the compiler to be used
version (str): version of the compiler to be used
"""
# If we don't have information on compiler at all return an empty string
# If we don't have information on compiler at all
# return an empty string
if compiler not in self.family.compilers:
return ""
return ''
# If we have information but it stops before this
# microarchitecture, fall back to the best known target
if compiler not in self.compilers:
best_target = [x for x in self.ancestors if compiler in x.compilers][0]
msg = (
"'{0}' compiler is known to optimize up to the '{1}'"
" microarchitecture in the '{2}' architecture family"
)
best_target = [
x for x in self.ancestors if compiler in x.compilers
][0]
msg = ("'{0}' compiler is known to optimize up to the '{1}'"
" microarchitecture in the '{2}' architecture family")
msg = msg.format(compiler, best_target, best_target.family)
raise UnsupportedMicroarchitecture(msg)
@@ -224,17 +224,20 @@ def optimization_flags(self, compiler, version):
# version being used
compiler_info = self.compilers[compiler]
def satisfies_constraint(entry, version):
min_version, max_version = entry["versions"].split(":")
# Normalize the entries to have a uniform treatment in the code below
if not isinstance(compiler_info, Sequence):
compiler_info = [compiler_info]
# Check version suffixes
def satisfies_constraint(entry, version):
min_version, max_version = entry['versions'].split(':')
# Extract numeric part of the version
min_version, _ = version_components(min_version)
max_version, _ = version_components(max_version)
version, _ = version_components(version)
# Assume compiler versions fit into semver
def tuplify(ver):
return tuple(int(y) for y in ver.split("."))
tuplify = lambda x: tuple(int(y) for y in x.split('.'))
version = tuplify(version)
if min_version:
@@ -251,29 +254,23 @@ def tuplify(ver):
for compiler_entry in compiler_info:
if satisfies_constraint(compiler_entry, version):
flags_fmt = compiler_entry["flags"]
flags_fmt = compiler_entry['flags']
# If there's no field name, use the name of the
# micro-architecture
compiler_entry.setdefault("name", self.name)
compiler_entry.setdefault('name', self.name)
# Check if we need to emit a warning
warning_message = compiler_entry.get("warnings", None)
warning_message = compiler_entry.get('warnings', None)
if warning_message:
warnings.warn(warning_message)
flags = flags_fmt.format(**compiler_entry)
return flags
msg = (
"cannot produce optimized binary for micro-architecture '{0}'"
" with {1}@{2} [supported compiler versions are {3}]"
)
msg = msg.format(
self.name,
compiler,
version,
", ".join([x["versions"] for x in compiler_info]),
)
msg = ("cannot produce optimized binary for micro-architecture '{0}'"
" with {1}@{2} [supported compiler versions are {3}]")
msg = msg.format(self.name, compiler, version,
', '.join([x['versions'] for x in compiler_info]))
raise UnsupportedMicroarchitecture(msg)
@@ -284,7 +281,7 @@ def generic_microarchitecture(name):
name (str): name of the micro-architecture
"""
return Microarchitecture(
name, parents=[], vendor="generic", features=[], compilers={}
name, parents=[], vendor='generic', features=[], compilers={}
)
@@ -292,15 +289,15 @@ def version_components(version):
"""Decomposes the version passed as input in version number and
suffix and returns them.
If the version number or the suffix are not present, an empty
If the version number of the suffix are not present, an empty
string is returned.
Args:
version (str): version to be decomposed into its components
"""
match = re.match(r"([\d.]*)(-?)(.*)", str(version))
match = re.match(r'([\d.]*)(-?)(.*)', str(version))
if not match:
return "", ""
return '', ''
version_number = match.group(1)
suffix = match.group(3)
@@ -312,7 +309,7 @@ def _known_microarchitectures():
"""Returns a dictionary of the known micro-architectures. If the
current host platform is unknown adds it too as a generic target.
"""
# pylint: disable=fixme
# TODO: Simplify this logic using object_pairs_hook to OrderedDict
# TODO: when we stop supporting python2.6
@@ -329,40 +326,44 @@ def fill_target_from_dict(name, data, targets):
values = data[name]
# Get direct parents of target
parent_names = values["from"]
for parent in parent_names:
parent_names = values['from']
if isinstance(parent_names, six.string_types):
parent_names = [parent_names]
if parent_names is None:
parent_names = []
for p in parent_names:
# Recursively fill parents so they exist before we add them
if parent in targets:
if p in targets:
continue
fill_target_from_dict(parent, data, targets)
parents = [targets.get(parent) for parent in parent_names]
fill_target_from_dict(p, data, targets)
parents = [targets.get(p) for p in parent_names]
vendor = values["vendor"]
features = set(values["features"])
compilers = values.get("compilers", {})
generation = values.get("generation", 0)
vendor = values['vendor']
features = set(values['features'])
compilers = values.get('compilers', {})
generation = values.get('generation', 0)
targets[name] = Microarchitecture(
name, parents, vendor, features, compilers, generation
)
known_targets = {}
data = archspec.cpu.schema.TARGETS_JSON["microarchitectures"]
targets = {}
data = llnl.util.cpu.schema.targets_json['microarchitectures']
for name in data:
if name in known_targets:
if name in targets:
# name was already brought in as ancestor to a target
continue
fill_target_from_dict(name, data, known_targets)
fill_target_from_dict(name, data, targets)
# Add the host platform if not present
host_platform = platform.machine()
known_targets.setdefault(host_platform, generic_microarchitecture(host_platform))
targets.setdefault(host_platform, generic_microarchitecture(host_platform))
return known_targets
return targets
#: Dictionary of known micro-architectures
TARGETS = LazyDictionary(_known_microarchitectures)
targets = LazyDictionary(_known_microarchitectures)
class UnsupportedMicroarchitecture(ValueError):
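Taken together, the module exposes a small query API. A usage sketch against the llnl.util.cpu side of this compare (target names come from microarchitectures.json; the printed values are illustrative):

import llnl.util.cpu as cpu

host = cpu.host()                      # best matching Microarchitecture
print(host.name, host.family)          # e.g. skylake x86_64
print('avx2' in host)                  # feature query, aliases included

bdw = cpu.targets['broadwell']
print(bdw == 'broadwell')              # comparisons coerce known target names
print(bdw.optimization_flags('gcc', '9.2.0'))  # e.g. -march=broadwell ...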


@@ -0,0 +1,147 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import json
import os.path
try:
from collections.abc import MutableMapping # novm
except ImportError:
from collections import MutableMapping
compilers_schema = {
'type': 'object',
'properties': {
'versions': {'type': 'string'},
'name': {'type': 'string'},
'flags': {'type': 'string'}
},
'required': ['versions', 'flags']
}
properties = {
'microarchitectures': {
'type': 'object',
'patternProperties': {
r'([\w]*)': {
'type': 'object',
'properties': {
'from': {
'anyOf': [
# More than one parent
{'type': 'array', 'items': {'type': 'string'}},
# Exactly one parent
{'type': 'string'},
# No parent
{'type': 'null'}
]
},
'vendor': {
'type': 'string'
},
'features': {
'type': 'array',
'items': {'type': 'string'}
},
'compilers': {
'type': 'object',
'patternProperties': {
r'([\w]*)': {
'anyOf': [
compilers_schema,
{
'type': 'array',
'items': compilers_schema
}
]
}
}
}
},
'required': ['from', 'vendor', 'features']
}
}
},
'feature_aliases': {
'type': 'object',
'patternProperties': {
r'([\w]*)': {
'type': 'object',
'properties': {},
'additionalProperties': False
}
},
},
'conversions': {
'type': 'object',
'properties': {
'description': {
'type': 'string'
},
'arm_vendors': {
'type': 'object',
},
'darwin_flags': {
'type': 'object'
}
},
'additionalProperties': False
}
}
schema = {
'$schema': 'http://json-schema.org/schema#',
'title': 'Schema for microarchitecture definitions and feature aliases',
'type': 'object',
'additionalProperties': False,
'properties': properties,
}
class LazyDictionary(MutableMapping):
"""Lazy dictionary that gets constructed on first access to any object key
Args:
factory (callable): factory function to construct the dictionary
"""
def __init__(self, factory, *args, **kwargs):
self.factory = factory
self.args = args
self.kwargs = kwargs
self._data = None
@property
def data(self):
if self._data is None:
self._data = self.factory(*self.args, **self.kwargs)
return self._data
def __getitem__(self, key):
return self.data[key]
def __setitem__(self, key, value):
self.data[key] = value
def __delitem__(self, key):
del self.data[key]
def __iter__(self):
return iter(self.data)
def __len__(self):
return len(self.data)
def _load_targets_json():
"""Loads ``microarchitectures.json`` in memory."""
directory_name = os.path.dirname(os.path.abspath(__file__))
filename = os.path.join(directory_name, 'microarchitectures.json')
with open(filename, 'r') as f:
return json.load(f)
#: In memory representation of the data in microarchitectures.json,
#: loaded on first access
targets_json = LazyDictionary(_load_targets_json)
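LazyDictionary defers the factory call until the first key access, so importing the module never pays for reading and parsing the JSON file. A behavior sketch:

calls = []

def expensive_factory():
    calls.append(1)                    # record that the factory actually ran
    return {'microarchitectures': {}}

lazy = LazyDictionary(expensive_factory)
assert calls == []                     # nothing loaded at construction time
_ = lazy['microarchitectures']         # first access triggers the factory
assert calls == [1]
_ = len(lazy)                          # later accesses reuse the cached data
assert calls == [1]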


@@ -23,12 +23,6 @@
from llnl.util.lang import dedupe, memoized
from spack.util.executable import Executable
if sys.version_info >= (3, 3):
from collections.abc import Sequence # novm
else:
from collections import Sequence
__all__ = [
'FileFilter',
'FileList',
@@ -55,7 +49,6 @@
'install_tree',
'is_exe',
'join_path',
'last_modification_time_recursive',
'mkdirp',
'partition_path',
'prefixes',
@@ -930,15 +923,6 @@ def set_executable(path):
os.chmod(path, mode)
def last_modification_time_recursive(path):
path = os.path.abspath(path)
times = [os.stat(path).st_mtime]
times.extend(os.stat(os.path.join(root, name)).st_mtime
for root, dirs, files in os.walk(path)
for name in dirs + files)
return max(times)
def remove_empty_directories(root):
"""Ascend up from the leaves accessible from `root` and remove empty
directories.
@@ -1111,7 +1095,7 @@ def find(root, files, recursive=True):
Parameters:
root (str): The root directory to start searching from
files (str or Sequence): Library name(s) to search for
files (str or collections.Sequence): Library name(s) to search for
recursive (bool, optional): if False search only root folder,
if True descends top-down from the root. Defaults to True.
@@ -1174,7 +1158,7 @@ def _find_non_recursive(root, search_files):
# Utilities for libraries and headers
class FileList(Sequence):
class FileList(collections.Sequence):
"""Sequence of absolute paths to files.
Provides a few convenience methods to manipulate file paths.
@@ -1417,7 +1401,7 @@ def find_headers(headers, root, recursive=False):
"""
if isinstance(headers, six.string_types):
headers = [headers]
elif not isinstance(headers, Sequence):
elif not isinstance(headers, collections.Sequence):
message = '{0} expects a string or sequence of strings as the '
message += 'first argument [got {1} instead]'
message = message.format(find_headers.__name__, type(headers))
@@ -1572,7 +1556,7 @@ def find_system_libraries(libraries, shared=True):
"""
if isinstance(libraries, six.string_types):
libraries = [libraries]
elif not isinstance(libraries, Sequence):
elif not isinstance(libraries, collections.Sequence):
message = '{0} expects a string or sequence of strings as the '
message += 'first argument [got {1} instead]'
message = message.format(find_system_libraries.__name__,
@@ -1626,7 +1610,7 @@ def find_libraries(libraries, root, shared=True, recursive=False):
"""
if isinstance(libraries, six.string_types):
libraries = [libraries]
elif not isinstance(libraries, Sequence):
elif not isinstance(libraries, collections.Sequence):
message = '{0} expects a string or sequence of strings as the '
message += 'first argument [got {1} instead]'
message = message.format(find_libraries.__name__, type(libraries))
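The hunks above swap direct collections.Sequence access for a guarded import; the pattern, shown standalone (the is_listing helper is only an illustration):

import sys

if sys.version_info >= (3, 3):
    from collections.abc import Sequence  # novm
else:
    from collections import Sequence      # removed entirely in Python 3.10

def is_listing(value):
    """True for list-like arguments but not for bare strings."""
    return isinstance(value, Sequence) and not isinstance(value, str)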


@@ -9,18 +9,13 @@
import os
import re
import functools
import collections
import inspect
from datetime import datetime, timedelta
from six import string_types
import sys
if sys.version_info >= (3, 3):
from collections.abc import Hashable, MutableMapping # novm
else:
from collections import Hashable, MutableMapping
# Ignore emacs backups when listing modules
ignore_modules = [r'^\.#', '~$']
@@ -194,7 +189,7 @@ def memoized(func):
@functools.wraps(func)
def _memoized_function(*args):
if not isinstance(args, Hashable):
if not isinstance(args, collections.Hashable):
# Not hashable, so just call the function.
return func(*args)
@@ -269,7 +264,7 @@ def setter(name, value):
@key_ordering
class HashableMap(MutableMapping):
class HashableMap(collections.MutableMapping):
"""This is a hashable, comparable dictionary. Hash is performed on
a tuple of the values in the dictionary."""
@@ -573,12 +568,6 @@ def instance(self):
return self._instance
def __getattr__(self, name):
# When unpickling Singleton objects, the 'instance' attribute may be
# requested but not yet set. The final 'getattr' line here requires
# 'instance'/'_instance' to be defined or it will enter an infinite
# loop, so protect against that here.
if name in ['_instance', 'instance']:
raise AttributeError()
return getattr(self.instance, name)
def __getitem__(self, name):
@@ -607,8 +596,6 @@ def __init__(self, ref_function):
self.ref_function = ref_function
def __getattr__(self, name):
if name == 'ref_function':
raise AttributeError()
return getattr(self.ref_function(), name)
def __getitem__(self, name):
@@ -676,19 +663,3 @@ def uniq(sequence):
uniq_list.append(element)
last = element
return uniq_list
def star(func):
"""Unpacks arguments for use with Multiprocessing mapping functions"""
def _wrapper(args):
return func(*args)
return _wrapper
class Devnull(object):
"""Null stream with less overhead than ``os.devnull``.
See https://stackoverflow.com/a/2929954.
"""
def write(self, *_):
pass
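The __getattr__ guards removed above exist to keep unpickling from recursing: pickle rebuilds a Singleton without calling __init__, so '_instance' may be missing when the first attribute lookup arrives. A condensed sketch of the pattern (mirroring the guard in the hunk, not the full Spack class):

class Singleton(object):
    def __init__(self, factory):
        self.factory = factory
        self._instance = None

    @property
    def instance(self):
        if self._instance is None:
            self._instance = self.factory()
        return self._instance

    def __getattr__(self, name):
        # Without this guard, a lookup of '_instance' before it exists would
        # re-enter __getattr__ via self.instance and never terminate.
        if name in ('_instance', 'instance'):
            raise AttributeError()
        return getattr(self.instance, name)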


@@ -21,6 +21,7 @@
from six import StringIO
import llnl.util.tty as tty
from llnl.util.lang import fork_context
try:
import termios
@@ -236,8 +237,6 @@ def __exit__(self, exc_type, exception, traceback):
"""If termios was available, restore old settings."""
if self.old_cfg:
self._restore_default_terminal_settings()
if sys.version_info >= (3,):
atexit.unregister(self._restore_default_terminal_settings)
# restore SIGSTP and SIGCONT handlers
if self.old_handlers:
@@ -289,109 +288,6 @@ def _file_descriptors_work(*streams):
return False
class FileWrapper(object):
"""Represents a file. Can be an open stream, a path to a file (not opened
yet), or neither. When unwrapped, it returns an open file (or file-like)
object.
"""
def __init__(self, file_like):
# This records whether the file-like object returned by "unwrap" is
# purely in-memory. In that case a subprocess will need to explicitly
# transmit the contents to the parent.
self.write_in_parent = False
self.file_like = file_like
if isinstance(file_like, string_types):
self.open = True
elif _file_descriptors_work(file_like):
self.open = False
else:
self.file_like = None
self.open = True
self.write_in_parent = True
self.file = None
def unwrap(self):
if self.open:
if self.file_like:
self.file = open(self.file_like, 'w')
else:
self.file = StringIO()
return self.file
else:
# We were handed an already-open file object. In this case we also
# will not actually close the object when requested to.
return self.file_like
def close(self):
if self.file:
self.file.close()
class MultiProcessFd(object):
"""Return an object which stores a file descriptor and can be passed as an
argument to a function run with ``multiprocessing.Process``, such that
the file descriptor is available in the subprocess."""
def __init__(self, fd):
self._connection = None
self._fd = None
if sys.version_info >= (3, 8):
self._connection = multiprocessing.connection.Connection(fd)
else:
self._fd = fd
@property
def fd(self):
if self._connection:
return self._connection._handle
else:
return self._fd
def close(self):
if self._connection:
self._connection.close()
else:
os.close(self._fd)
def close_connection_and_file(multiprocess_fd, file):
# MultiprocessFd is intended to transmit a FD
# to a child process, this FD is then opened to a Python File object
# (using fdopen). In >= 3.8, MultiprocessFd encapsulates a
# multiprocessing.connection.Connection; Connection closes the FD
# when it is deleted, and prints a warning about duplicate closure if
# it is not explicitly closed. In < 3.8, MultiprocessFd encapsulates a
# simple FD; closing the FD here appears to conflict with
# closure of the File object (in < 3.8 that is). Therefore this needs
# to choose whether to close the File or the Connection.
if sys.version_info >= (3, 8):
multiprocess_fd.close()
else:
file.close()
@contextmanager
def replace_environment(env):
"""Replace the current environment (`os.environ`) with `env`.
If `env` is empty (or None), this unsets all current environment
variables.
"""
env = env or {}
old_env = os.environ.copy()
try:
os.environ.clear()
for name, val in env.items():
os.environ[name] = val
yield
finally:
os.environ.clear()
for name, val in old_env.items():
os.environ[name] = val
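replace_environment swaps os.environ wholesale and restores it even when the body raises; a quick behavior sketch (the variable names are only for the demo, and the import assumes the side of the diff where the helper lives in llnl.util.tty.log):

import os
from llnl.util.tty.log import replace_environment

os.environ['DEMO_VAR'] = 'outer'
with replace_environment({'ONLY_VAR': '1'}):
    assert os.environ.get('ONLY_VAR') == '1'
    assert 'DEMO_VAR' not in os.environ    # old environment fully cleared
assert os.environ['DEMO_VAR'] == 'outer'   # restored on exit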
class log_output(object):
"""Context manager that logs its output to a file.
@@ -428,8 +324,8 @@ class log_output(object):
work within test frameworks like nose and pytest.
"""
def __init__(self, file_like=None, echo=False, debug=0, buffer=False,
env=None):
def __init__(self, file_like=None, output=None, error=None,
echo=False, debug=0, buffer=False):
"""Create a new output log context manager.
Args:
@@ -454,14 +350,15 @@ def __init__(self, file_like=None, echo=False, debug=0, buffer=False,
"""
self.file_like = file_like
self.output = output or sys.stdout
self.error = error or sys.stderr
self.echo = echo
self.debug = debug
self.buffer = buffer
self.env = env # the environment to use for _writer_daemon
self._active = False # used to prevent re-entry
def __call__(self, file_like=None, echo=None, debug=None, buffer=None):
def __call__(self, file_like=None, output=None, error=None,
echo=None, debug=None, buffer=None):
"""This behaves the same as init. It allows a logger to be reused.
Arguments are the same as for ``__init__()``. Args here take
@@ -482,6 +379,10 @@ def __call__(self, file_like=None, echo=None, debug=None, buffer=None):
"""
if file_like is not None:
self.file_like = file_like
if output is not None:
self.output = output
if error is not None:
self.error = error
if echo is not None:
self.echo = echo
if debug is not None:
@@ -499,7 +400,18 @@ def __enter__(self):
"file argument must be set by either __init__ or __call__")
# set up a stream for the daemon to write to
self.log_file = FileWrapper(self.file_like)
self.close_log_in_parent = True
self.write_log_in_parent = False
if isinstance(self.file_like, string_types):
self.log_file = open(self.file_like, 'w')
elif _file_descriptors_work(self.file_like):
self.log_file = self.file_like
self.close_log_in_parent = False
else:
self.log_file = StringIO()
self.write_log_in_parent = True
# record parent color settings before redirecting. We do this
# because color output depends on whether the *original* stdout
@@ -514,8 +426,6 @@ def __enter__(self):
# OS-level pipe for redirecting output to logger
read_fd, write_fd = os.pipe()
read_multiprocess_fd = MultiProcessFd(read_fd)
# Multiprocessing pipe for communication back from the daemon
# Currently only used to save echo value between uses
self.parent_pipe, child_pipe = multiprocessing.Pipe()
@@ -524,68 +434,75 @@ def __enter__(self):
try:
# need to pass this b/c multiprocessing closes stdin in child.
try:
input_multiprocess_fd = MultiProcessFd(
os.dup(sys.stdin.fileno())
)
input_stream = os.fdopen(os.dup(sys.stdin.fileno()))
except BaseException:
# just don't forward input if this fails
input_multiprocess_fd = None
input_stream = None # just don't forward input if this fails
with replace_environment(self.env):
self.process = multiprocessing.Process(
target=_writer_daemon,
args=(
input_multiprocess_fd, read_multiprocess_fd, write_fd,
self.echo, self.log_file, child_pipe
)
self.process = fork_context.Process(
target=_writer_daemon,
args=(
input_stream, read_fd, write_fd, self.echo, self.output,
self.log_file, child_pipe
)
self.process.daemon = True # must set before start()
self.process.start()
)
self.process.daemon = True # must set before start()
self.process.start()
os.close(read_fd) # close in the parent process
finally:
if input_multiprocess_fd:
input_multiprocess_fd.close()
read_multiprocess_fd.close()
if input_stream:
input_stream.close()
# Flush immediately before redirecting so that anything buffered
# goes to the original stream
sys.stdout.flush()
sys.stderr.flush()
self.output.flush()
self.error.flush()
# sys.stdout.flush()
# sys.stderr.flush()
# Now do the actual output redirection.
self.use_fds = _file_descriptors_work(sys.stdout, sys.stderr)
self.use_fds = _file_descriptors_work(self.output, self.error)
if self.use_fds:
# We try first to use OS-level file descriptors, as this
# redirects output for subprocesses and system calls.
# Save old stdout and stderr file descriptors
self._saved_stdout = os.dup(sys.stdout.fileno())
self._saved_stderr = os.dup(sys.stderr.fileno())
self._saved_output = os.dup(self.output.fileno())
self._saved_error = os.dup(self.error.fileno())
# self._saved_stdout = os.dup(sys.stdout.fileno())
# self._saved_stderr = os.dup(sys.stderr.fileno())
# redirect to the pipe we created above
os.dup2(write_fd, sys.stdout.fileno())
os.dup2(write_fd, sys.stderr.fileno())
os.dup2(write_fd, self.output.fileno())
os.dup2(write_fd, self.error.fileno())
# os.dup2(write_fd, sys.stdout.fileno())
# os.dup2(write_fd, sys.stderr.fileno())
os.close(write_fd)
else:
# Handle I/O the Python way. This won't redirect lower-level
# output, but it's the best we can do, and the caller
# shouldn't expect any better, since *they* have apparently
# redirected I/O the Python way.
# Save old stdout and stderr file objects
self._saved_stdout = sys.stdout
self._saved_stderr = sys.stderr
self._saved_output = self.output
self._saved_error = self.error
# self._saved_stdout = sys.stdout
# self._saved_stderr = sys.stderr
# create a file object for the pipe; redirect to it.
pipe_fd_out = os.fdopen(write_fd, 'w')
sys.stdout = pipe_fd_out
sys.stderr = pipe_fd_out
self.output = pipe_fd_out
self.error = pipe_fd_out
# sys.stdout = pipe_fd_out
# sys.stderr = pipe_fd_out
# Unbuffer stdout and stderr at the Python level
if not self.buffer:
sys.stdout = Unbuffered(sys.stdout)
sys.stderr = Unbuffered(sys.stderr)
self.output = Unbuffered(self.output)
self.error = Unbuffered(self.error)
# sys.stdout = Unbuffered(sys.stdout)
# sys.stderr = Unbuffered(sys.stderr)
# Force color and debug settings now that we have redirected.
tty.color.set_color_when(forced_color)
@@ -600,37 +517,43 @@ def __enter__(self):
def __exit__(self, exc_type, exc_val, exc_tb):
# Flush any buffered output to the logger daemon.
sys.stdout.flush()
sys.stderr.flush()
self.output.flush()
self.error.flush()
# sys.stdout.flush()
# sys.stderr.flush()
# restore previous output settings, either the low-level way or
# the python way
if self.use_fds:
os.dup2(self._saved_stdout, sys.stdout.fileno())
os.close(self._saved_stdout)
os.dup2(self._saved_output, self.output.fileno())
os.close(self._saved_output)
os.dup2(self._saved_stderr, sys.stderr.fileno())
os.close(self._saved_stderr)
os.dup2(self._saved_error, self.error.fileno())
os.close(self._saved_error)
# os.dup2(self._saved_stdout, sys.stdout.fileno())
# os.close(self._saved_stdout)
# os.dup2(self._saved_stderr, sys.stderr.fileno())
# os.close(self._saved_stderr)
else:
sys.stdout = self._saved_stdout
sys.stderr = self._saved_stderr
self.output = self._saved_output
self.error = self._saved_error
# sys.stdout = self._saved_stdout
# sys.stderr = self._saved_stderr
# print log contents in parent if needed.
if self.log_file.write_in_parent:
if self.write_log_in_parent:
string = self.parent_pipe.recv()
self.file_like.write(string)
# recover and store echo settings from the child before it dies
try:
self.echo = self.parent_pipe.recv()
except EOFError:
# This may occur if some exception prematurely terminates the
# _writer_daemon. An exception will have already been generated.
pass
if self.close_log_in_parent:
self.log_file.close()
# now that the write pipe is closed (in this __exit__, when we restore
# stdout with dup2), the logger daemon process loop will terminate. We
# wait for that here.
# recover and store echo settings from the child before it dies
self.echo = self.parent_pipe.recv()
# join the daemon process. The daemon will quit automatically
# when the write pipe is closed; we just wait for it here.
self.process.join()
# restore old color and debug settings
@@ -650,17 +573,17 @@ def force_echo(self):
# output. We use these control characters rather than, say, a
# separate pipe, because they're in-band and assured to appear
# exactly before and after the text we want to echo.
sys.stdout.write(xon)
sys.stdout.flush()
self.output.write(xon)
self.output.flush()
try:
yield
finally:
sys.stdout.write(xoff)
sys.stdout.flush()
self.output.write(xoff)
self.output.flush()
def _writer_daemon(stdin_multiprocess_fd, read_multiprocess_fd, write_fd, echo,
log_file_wrapper, control_pipe):
def _writer_daemon(stdin, read_fd, write_fd, echo, echo_stream, log_file,
control_pipe):
"""Daemon used by ``log_output`` to write to a log file and to ``stdout``.
The daemon receives output from the parent process and writes it both
@@ -697,39 +620,27 @@ def _writer_daemon(stdin_multiprocess_fd, read_multiprocess_fd, write_fd, echo,
``StringIO`` in the parent. This is mainly for testing.
Arguments:
stdin_multiprocess_fd (int): input from the terminal
read_multiprocess_fd (int): pipe for reading from parent's redirected
stdout
stdin (stream): input from the terminal
read_fd (int): pipe for reading from parent's redirected stdout
write_fd (int): parent's end of the pipe will write to (will be
immediately closed by the writer daemon)
echo (bool): initial echo setting -- controlled by user and
preserved across multiple writer daemons
log_file_wrapper (FileWrapper): file to log all output
echo_stream (stream): output to echo to when echoing
log_file (file-like): file to log all output
control_pipe (Pipe): multiprocessing pipe on which to send control
information to the parent
"""
# If this process was forked, then it will inherit file descriptors from
# the parent process. This process depends on closing all instances of
# write_fd to terminate the reading loop, so we close the file descriptor
# here. Forking is the process spawning method everywhere except Mac OS
# for Python >= 3.8 and on Windows
if sys.version_info < (3, 8) or sys.platform != 'darwin':
os.close(write_fd)
# Use line buffering (3rd param = 1) since Python 3 has a bug
# that prevents unbuffered text I/O.
in_pipe = os.fdopen(read_multiprocess_fd.fd, 'r', 1)
if stdin_multiprocess_fd:
stdin = os.fdopen(stdin_multiprocess_fd.fd)
else:
stdin = None
in_pipe = os.fdopen(read_fd, 'r', 1)
os.close(write_fd)
# list of streams to select from
istreams = [in_pipe, stdin] if stdin else [in_pipe]
force_echo = False # parent can force echo for certain output
log_file = log_file_wrapper.unwrap()
try:
with keyboard_input(stdin) as kb:
while True:
@@ -770,8 +681,8 @@ def _writer_daemon(stdin_multiprocess_fd, read_multiprocess_fd, write_fd, echo,
# Echo to stdout if requested or forced.
if echo or force_echo:
sys.stdout.write(line)
sys.stdout.flush()
echo_stream.write(line)
echo_stream.flush()
# Stripped output to log file.
log_file.write(_strip(line))
@@ -790,13 +701,10 @@ def _writer_daemon(stdin_multiprocess_fd, read_multiprocess_fd, write_fd, echo,
# send written data back to parent if we used a StringIO
if isinstance(log_file, StringIO):
control_pipe.send(log_file.getvalue())
log_file_wrapper.close()
close_connection_and_file(read_multiprocess_fd, in_pipe)
if stdin_multiprocess_fd:
close_connection_and_file(stdin_multiprocess_fd, stdin)
log_file.close()
# send echo value back to the parent so it can be preserved.
control_pipe.send(echo)
# send echo value back to the parent so it can be preserved.
control_pipe.send(echo)
def _retry(function):
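Whichever side of this WIP diff is active, the public entry point is the log_output context manager; a usage sketch (the log path is illustrative):

import llnl.util.tty.log as log

with log.log_output('/tmp/build.log', echo=False):
    print('captured in /tmp/build.log only')

with log.log_output('/tmp/build.log', echo=True) as logger:
    print('captured and echoed to the terminal')
    with logger.force_echo():
        print('echoed even if echo had been False')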


@@ -24,6 +24,7 @@
import traceback
import llnl.util.tty.log as log
from llnl.util.lang import fork_context
from spack.util.executable import which
@@ -233,7 +234,7 @@ def start(self, **kwargs):
``minion_function``.
"""
self.proc = multiprocessing.Process(
self.proc = fork_context.Process(
target=PseudoShell._set_up_and_run_controller_function,
args=(self.controller_function, self.minion_function,
self.controller_timeout, self.sleep_time),


@@ -5,7 +5,7 @@
#: major, minor, patch version for Spack, in a tuple
spack_version_info = (0, 16, 2)
spack_version_info = (0, 15, 4)
#: String containing Spack version joined with .'s
spack_version = '.'.join(str(v) for v in spack_version_info)


@@ -8,6 +8,7 @@
from llnl.util.lang import memoized
import spack.spec
from spack.build_environment import dso_suffix
from spack.spec import CompilerSpec
from spack.util.executable import Executable, ProcessError
from spack.compilers.clang import Clang
@@ -29,7 +30,6 @@ def architecture_compatible(self, target, constraint):
def _gcc_get_libstdcxx_version(self, version):
"""Returns gcc ABI compatibility info by getting the library version of
a compiler's libstdc++ or libgcc_s"""
from spack.build_environment import dso_suffix
spec = CompilerSpec("gcc", version)
compilers = spack.compilers.compilers_for_spec(spec)
if not compilers:


@@ -56,20 +56,17 @@
attributes front_os and back_os. The operating system as described earlier,
will be responsible for compiler detection.
"""
import contextlib
import functools
import inspect
import warnings
import archspec.cpu
import six
import llnl.util.cpu as cpu
import llnl.util.tty as tty
from llnl.util.lang import memoized, list_modules, key_ordering
import spack.compiler
import spack.compilers
import spack.config
import spack.paths
import spack.error as serr
import spack.util.executable
@@ -112,9 +109,9 @@ def __init__(self, name, module_name=None):
current target. This is typically used on machines
like Cray (e.g. craype-compiler)
"""
if not isinstance(name, archspec.cpu.Microarchitecture):
name = archspec.cpu.TARGETS.get(
name, archspec.cpu.generic_microarchitecture(name)
if not isinstance(name, cpu.Microarchitecture):
name = cpu.targets.get(
name, cpu.generic_microarchitecture(name)
)
self.microarchitecture = name
self.module_name = module_name
@@ -210,9 +207,7 @@ def optimization_flags(self, compiler):
# has an unexpected suffix. If so, treat it as a compiler with a
# custom spec.
compiler_version = compiler.version
version_number, suffix = archspec.cpu.version_components(
compiler.version
)
version_number, suffix = cpu.version_components(compiler.version)
if not version_number or suffix not in ('', 'apple'):
# Try to deduce the underlying version of the compiler, regardless
# of its name in compilers.yaml. Depending on where this function
@@ -245,7 +240,7 @@ class Platform(object):
front_end = None
back_end = None
default = None # The default back end target.
default = None # The default back end target. On cray ivybridge
front_os = None
back_os = None
@@ -494,7 +489,7 @@ def arch_for_spec(arch_spec):
@memoized
def _all_platforms():
def all_platforms():
classes = []
mod_path = spack.paths.platform_path
parent_module = "spack.platforms"
@@ -515,7 +510,7 @@ def _all_platforms():
@memoized
def _platform():
def platform():
"""Detects the platform for this machine.
Gather a list of all available subclasses of platforms.
@@ -524,7 +519,7 @@ def _platform():
a file path (/opt/cray...)
"""
# Try to create a Platform object using the config file FIRST
platform_list = _all_platforms()
platform_list = all_platforms()
platform_list.sort(key=lambda a: a.priority)
for platform_cls in platform_list:
@@ -532,19 +527,6 @@ def _platform():
return platform_cls()
#: The "real" platform of the host running Spack. This should not be changed
#: by any method and is here as a convenient way to refer to the host platform.
real_platform = _platform
#: The current platform used by Spack. May be swapped by the use_platform
#: context manager.
platform = _platform
#: The list of all platform classes. May be swapped by the use_platform
#: context manager.
all_platforms = _all_platforms
@memoized
def default_arch():
"""Default ``Arch`` object for this machine.
@@ -573,45 +555,9 @@ def sys_type():
def compatible_sys_types():
"""Returns a list of all the systypes compatible with the current host."""
compatible_archs = []
current_host = archspec.cpu.host()
current_host = cpu.host()
compatible_targets = [current_host] + current_host.ancestors
for target in compatible_targets:
arch = Arch(platform(), 'default_os', target)
compatible_archs.append(str(arch))
return compatible_archs
class _PickleableCallable(object):
"""Class used to pickle a callable that may substitute either
_platform or _all_platforms. Lambda or nested functions are
not pickleable.
"""
def __init__(self, return_value):
self.return_value = return_value
def __call__(self):
return self.return_value
@contextlib.contextmanager
def use_platform(new_platform):
global platform, all_platforms
msg = '"{0}" must be an instance of Platform'
assert isinstance(new_platform, Platform), msg.format(new_platform)
original_platform_fn, original_all_platforms_fn = platform, all_platforms
platform = _PickleableCallable(new_platform)
all_platforms = _PickleableCallable([type(new_platform)])
# Clear configuration and compiler caches
spack.config.config.clear_caches()
spack.compilers._cache_config_files = []
yield new_platform
platform, all_platforms = original_platform_fn, original_all_platforms_fn
# Clear configuration and compiler caches
spack.config.config.clear_caches()
spack.compilers._cache_config_files = []
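On the side of the compare that still has it, use_platform lets tests impersonate another platform while keeping config and compiler caches coherent; a sketch (the built-in mock Test platform is the usual choice in Spack's test suite):

import spack.architecture
from spack.platforms.test import Test

with spack.architecture.use_platform(Test()):
    assert str(spack.architecture.platform()) == 'test'
# on exit the memoized host-platform function and the caches are restored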

File diff suppressed because it is too large.


@@ -1,252 +0,0 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import contextlib
import os
import sys
try:
import sysconfig # novm
except ImportError:
# Not supported on Python 2.6
pass
import archspec.cpu
import llnl.util.filesystem as fs
import llnl.util.tty as tty
import spack.architecture
import spack.config
import spack.paths
import spack.repo
import spack.spec
import spack.store
import spack.user_environment as uenv
import spack.util.executable
from spack.util.environment import EnvironmentModifications
def spec_for_current_python():
"""For bootstrapping purposes we are just interested in the Python
minor version (all patches are ABI compatible with the same minor)
and on whether ucs4 support has been enabled for Python 2.7
See:
https://www.python.org/dev/peps/pep-0513/
https://stackoverflow.com/a/35801395/771663
"""
version_str = '.'.join(str(x) for x in sys.version_info[:2])
variant_str = ''
if sys.version_info[0] == 2 and sys.version_info[1] == 7:
unicode_size = sysconfig.get_config_var('Py_UNICODE_SIZE')
variant_str = '+ucs4' if unicode_size == 4 else '~ucs4'
spec_fmt = 'python@{0} {1}'
return spec_fmt.format(version_str, variant_str)
@contextlib.contextmanager
def spack_python_interpreter():
"""Override the current configuration to set the interpreter under
which Spack is currently running as the only Python external spec
available.
"""
python_prefix = os.path.dirname(os.path.dirname(sys.executable))
external_python = spec_for_current_python()
entry = {
'buildable': False,
'externals': [
{'prefix': python_prefix, 'spec': str(external_python)}
]
}
with spack.config.override('packages:python::', entry):
yield
def make_module_available(module, spec=None, install=False):
"""Ensure module is importable"""
# If we already can import it, that's great
try:
__import__(module)
return
except ImportError:
pass
# If it's already installed, use it
# Search by spec
spec = spack.spec.Spec(spec or module)
# We have to run as part of this python
# We can constrain by a shortened version in place of a version range
# because this spec is only used for querying or as a placeholder to be
# replaced by an external that already has a concrete version. This syntax
# is not sufficient when concretizing without an external, as it will
# concretize to python@X.Y instead of python@X.Y.Z
python_requirement = '^' + spec_for_current_python()
spec.constrain(python_requirement)
installed_specs = spack.store.db.query(spec, installed=True)
for ispec in installed_specs:
# TODO: make sure run-environment is appropriate
module_path = os.path.join(ispec.prefix,
ispec['python'].package.site_packages_dir)
module_path_64 = module_path.replace('/lib/', '/lib64/')
try:
sys.path.append(module_path)
sys.path.append(module_path_64)
__import__(module)
return
except ImportError:
tty.warn("Spec %s did not provide module %s" % (ispec, module))
sys.path = sys.path[:-2]
def _raise_error(module_name, module_spec):
error_msg = 'cannot import module "{0}"'.format(module_name)
if module_spec:
error_msg += ' from spec "{0}"'.format(module_spec)
raise ImportError(error_msg)
if not install:
_raise_error(module, spec)
with spack_python_interpreter():
# We will install for ourselves, using this python if needed
# Concretize the spec
spec.concretize()
spec.package.do_install()
module_path = os.path.join(spec.prefix,
spec['python'].package.site_packages_dir)
module_path_64 = module_path.replace('/lib/', '/lib64/')
try:
sys.path.append(module_path)
sys.path.append(module_path_64)
__import__(module)
return
except ImportError:
sys.path = sys.path[:-2]
_raise_error(module, spec)
def get_executable(exe, spec=None, install=False):
"""Find an executable named exe, either in PATH or in Spack
Args:
exe (str): needed executable name
spec (Spec or str): spec to search for exe in (default exe)
install (bool): install spec if not available
When ``install`` is True, Spack will use the python used to run Spack as an
external. The ``install`` option should only be used with packages that
install quickly (when using external python) or are guaranteed by Spack
organization to be in a binary mirror (clingo)."""
# Search the system first
runner = spack.util.executable.which(exe)
if runner:
return runner
# Check whether it's already installed
spec = spack.spec.Spec(spec or exe)
installed_specs = spack.store.db.query(spec, installed=True)
for ispec in installed_specs:
# filter out directories of the same name as the executable
exe_path = [exe_p for exe_p in fs.find(ispec.prefix, exe)
if fs.is_exe(exe_p)]
if exe_path:
ret = spack.util.executable.Executable(exe_path[0])
envmod = EnvironmentModifications()
for dep in ispec.traverse(root=True, order='post'):
envmod.extend(uenv.environment_modifications_for_spec(dep))
ret.add_default_envmod(envmod)
return ret
else:
tty.warn('Exe %s not found in prefix %s' % (exe, ispec.prefix))
def _raise_error(executable, exe_spec):
error_msg = 'cannot find the executable "{0}"'.format(executable)
if exe_spec:
error_msg += ' from spec "{0}"'.format(exe_spec)
raise RuntimeError(error_msg)
# If we're not allowed to install this for ourselves, we can't find it
if not install:
_raise_error(exe, spec)
with spack_python_interpreter():
# We will install for ourselves, using this python if needed
# Concretize the spec
spec.concretize()
spec.package.do_install()
# filter out directories of the same name as the executable
exe_path = [exe_p for exe_p in fs.find(spec.prefix, exe)
if fs.is_exe(exe_p)]
if exe_path:
ret = spack.util.executable.Executable(exe_path[0])
envmod = EnvironmentModifications()
for dep in spec.traverse(root=True, order='post'):
envmod.extend(uenv.environment_modifications_for_spec(dep))
ret.add_default_envmod(envmod)
return ret
_raise_error(exe, spec)
def _bootstrap_config_scopes():
tty.debug('[BOOTSTRAP CONFIG SCOPE] name=_builtin')
config_scopes = [
spack.config.InternalConfigScope(
'_builtin', spack.config.config_defaults
)
]
for name, path in spack.config.configuration_paths:
platform = spack.architecture.platform().name
platform_scope = spack.config.ConfigScope(
'/'.join([name, platform]), os.path.join(path, platform)
)
generic_scope = spack.config.ConfigScope(name, path)
config_scopes.extend([generic_scope, platform_scope])
msg = '[BOOTSTRAP CONFIG SCOPE] name={0}, path={1}'
tty.debug(msg.format(generic_scope.name, generic_scope.path))
tty.debug(msg.format(platform_scope.name, platform_scope.path))
return config_scopes
@contextlib.contextmanager
def ensure_bootstrap_configuration():
with spack.architecture.use_platform(spack.architecture.real_platform()):
with spack.repo.use_repositories(spack.paths.packages_path):
with spack.store.use_store(spack.paths.user_bootstrap_store):
# Default configuration scopes excluding command line
# and builtin but accounting for platform specific scopes
config_scopes = _bootstrap_config_scopes()
with spack.config.use_configuration(*config_scopes):
with spack_python_interpreter():
yield
def clingo_root_spec():
# Construct the root spec that will be used to bootstrap clingo
spec_str = 'clingo-bootstrap@spack+python'
# Add a proper compiler hint to the root spec. We use GCC for
# everything but MacOS.
if str(spack.architecture.platform()) == 'darwin':
spec_str += ' %apple-clang'
else:
spec_str += ' %gcc'
# Add hint to use frontend operating system on Cray
if str(spack.architecture.platform()) == 'cray':
spec_str += ' os=fe'
# Add the generic target
generic_target = archspec.cpu.host().family
spec_str += ' target={0}'.format(str(generic_target))
tty.debug('[BOOTSTRAP ROOT SPEC] clingo: {0}'.format(spec_str))
return spack.spec.Spec(spec_str)
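A usage sketch for the bootstrap helpers above (the 'cmake' spec is only an example; install=True may trigger a real installation using the running Python as an external):

from spack.bootstrap import get_executable

# Finds cmake on PATH or in the Spack store, installing it if allowed.
cmake = get_executable('cmake', spec='cmake', install=True)
output = cmake('--version', output=str)   # Executable objects are callable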


@@ -32,7 +32,6 @@
Skimming this module is a nice way to get acquainted with the types of
calls you can make from within the install() function.
"""
import inspect
import re
import multiprocessing
import os
@@ -45,8 +44,7 @@
import llnl.util.tty as tty
from llnl.util.tty.color import cescape, colorize
from llnl.util.filesystem import mkdirp, install, install_tree
from llnl.util.lang import dedupe
from llnl.util.tty.log import MultiProcessFd
from llnl.util.lang import dedupe, fork_context
import spack.build_systems.cmake
import spack.build_systems.meson
@@ -54,11 +52,9 @@
import spack.main
import spack.paths
import spack.package
import spack.repo
import spack.schema.environment
import spack.store
import spack.install_test
import spack.subprocess_context
import spack.architecture as arch
import spack.util.path
from spack.util.string import plural
@@ -302,19 +298,6 @@ def set_compiler_environment_variables(pkg, env):
return env
def _place_externals_last(spec_container):
"""
For a (possibly unordered) container of specs, return an ordered list
where all external specs are at the end of the list. External packages
may be installed in merged prefixes with other packages, and so
they should be deprioritized for any search order (i.e. in PATH, or
for a set of -L entries in a compiler invocation).
"""
first = list(x for x in spec_container if not x.external)
second = list(x for x in spec_container if x.external)
return first + second
def set_build_environment_variables(pkg, env, dirty):
"""Ensure a clean install environment when we build packages.
@@ -332,29 +315,6 @@ def set_build_environment_variables(pkg, env, dirty):
link_deps = set(pkg.spec.traverse(root=False, deptype=('link')))
build_link_deps = build_deps | link_deps
rpath_deps = get_rpath_deps(pkg)
# This includes all build dependencies and any other dependencies that
# should be added to PATH (e.g. supporting executables run by build
# dependencies)
build_and_supporting_deps = set()
for build_dep in build_deps:
build_and_supporting_deps.update(build_dep.traverse(deptype='run'))
# Establish an arbitrary but fixed ordering of specs so that resulting
# environment variable values are stable
def _order(specs):
return sorted(specs, key=lambda x: x.name)
# External packages may be installed in a prefix which contains many other
# package installs. To avoid having those installations override
# Spack-installed packages, they are placed at the end of search paths.
# System prefixes are removed entirely later on since they are already
# searched.
build_deps = _place_externals_last(_order(build_deps))
link_deps = _place_externals_last(_order(link_deps))
build_link_deps = _place_externals_last(_order(build_link_deps))
rpath_deps = _place_externals_last(_order(rpath_deps))
build_and_supporting_deps = _place_externals_last(
_order(build_and_supporting_deps))
link_dirs = []
include_dirs = []
@@ -401,10 +361,21 @@ def _order(specs):
env.set(SPACK_INCLUDE_DIRS, ':'.join(include_dirs))
env.set(SPACK_RPATH_DIRS, ':'.join(rpath_dirs))
build_and_supporting_prefixes = filter_system_paths(
x.prefix for x in build_and_supporting_deps)
build_link_prefixes = filter_system_paths(
x.prefix for x in build_link_deps)
build_prefixes = [dep.prefix for dep in build_deps]
build_link_prefixes = [dep.prefix for dep in build_link_deps]
# add run-time dependencies of direct build-time dependencies:
for build_dep in build_deps:
for run_dep in build_dep.traverse(deptype='run'):
build_prefixes.append(run_dep.prefix)
# Filter out system paths: ['/', '/usr', '/usr/local']
# These paths can be introduced into the build when an external package
# is added as a dependency. The problem with these paths is that they often
# contain hundreds of other packages installed in the same directory.
# If these paths come first, they can overshadow Spack installations.
build_prefixes = filter_system_paths(build_prefixes)
build_link_prefixes = filter_system_paths(build_link_prefixes)
# Add dependencies to CMAKE_PREFIX_PATH
env.set_path('CMAKE_PREFIX_PATH', build_link_prefixes)
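# Sketch of the filtering described above (assumes filter_system_paths
# simply drops the system prefixes '/', '/usr' and '/usr/local'):
_prefixes = ['/usr', '/opt/spack/linux-gcc-9/zlib-1.2.11']
# only the Spack-owned prefix survives the filter
assert filter_system_paths(_prefixes) == ['/opt/spack/linux-gcc-9/zlib-1.2.11']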
@@ -419,10 +390,7 @@ def _order(specs):
env.set('SPACK_COMPILER_EXTRA_RPATHS', extra_rpaths)
# Add bin directories from dependencies to the PATH for the build.
# These directories are added to the beginning of the search path, and in
# the order given by 'build_and_supporting_prefixes' (the iteration order
# is reversed because each entry is prepended)
for prefix in reversed(build_and_supporting_prefixes):
for prefix in build_prefixes:
for dirname in ['bin', 'bin64']:
bin_dir = os.path.join(prefix, dirname)
if os.path.isdir(bin_dir):
@@ -440,8 +408,7 @@ def _order(specs):
# directory. Add that to the path too.
env_paths = []
compiler_specific = os.path.join(
spack.paths.build_env_path,
os.path.dirname(pkg.compiler.link_paths['cc']))
spack.paths.build_env_path, pkg.compiler.name)
for item in [spack.paths.build_env_path, compiler_specific]:
env_paths.append(item)
ci = os.path.join(item, 'case-insensitive')
@@ -467,7 +434,7 @@ def _order(specs):
env.set(SPACK_CCACHE_BINARY, ccache)
# Add any pkgconfig directories to PKG_CONFIG_PATH
for prefix in reversed(build_link_prefixes):
for prefix in build_link_prefixes:
for directory in ('lib', 'lib64', 'share'):
pcdir = os.path.join(prefix, directory, 'pkgconfig')
if os.path.isdir(pcdir):
@@ -749,6 +716,7 @@ def setup_package(pkg, dirty, context='build'):
"""Execute all environment setup routines."""
env = EnvironmentModifications()
# clean environment
if not dirty:
clean_environment()
@@ -779,9 +747,6 @@ def setup_package(pkg, dirty, context='build'):
elif context == 'test':
import spack.user_environment as uenv # avoid circular import
env.extend(uenv.environment_modifications_for_spec(pkg.spec))
env.extend(
modifications_from_dependencies(pkg.spec, context=context)
)
set_module_variables_for_package(pkg)
env.prepend_path('PATH', '.')
@@ -846,8 +811,7 @@ def modifications_from_dependencies(spec, context):
}
deptype, method = deptype_and_method[context]
root = context == 'test'
for dspec in spec.traverse(order='post', root=root, deptype=deptype):
for dspec in spec.traverse(order='post', root=False, deptype=deptype):
dpkg = dspec.package
set_module_variables_for_package(dpkg)
# Allow dependencies to modify the module
@@ -857,90 +821,28 @@ def modifications_from_dependencies(spec, context):
return env
def _setup_pkg_and_run(serialized_pkg, function, kwargs, child_pipe,
input_multiprocess_fd):
context = kwargs.get('context', 'build')
try:
# We are in the child process. Python sets sys.stdin to
# open(os.devnull) to prevent our process and its parent from
# simultaneously reading from the original stdin. But, we assume
# that the parent process is not going to read from it till we
# are done with the child, so we undo Python's precaution.
if input_multiprocess_fd is not None:
sys.stdin = os.fdopen(input_multiprocess_fd.fd)
pkg = serialized_pkg.restore()
if not kwargs.get('fake', False):
kwargs['unmodified_env'] = os.environ.copy()
setup_package(pkg, dirty=kwargs.get('dirty', False),
context=context)
return_value = function(pkg, kwargs)
child_pipe.send(return_value)
except StopPhase as e:
# Do not create a full ChildError from this, it's not an error
# it's a control statement.
child_pipe.send(e)
except BaseException:
# catch ANYTHING that goes wrong in the child process
exc_type, exc, tb = sys.exc_info()
# Need to unwind the traceback in the child because traceback
# objects can't be sent to the parent.
tb_string = traceback.format_exc()
# build up some context from the offending package so we can
# show that, too.
package_context = get_package_context(tb)
logfile = None
if context == 'build':
try:
if hasattr(pkg, 'log_path'):
logfile = pkg.log_path
except NameError:
# 'pkg' is not defined yet
pass
elif context == 'test':
logfile = os.path.join(
pkg.test_suite.stage,
spack.install_test.TestSuite.test_log_name(pkg.spec))
# make a pickleable exception to send to parent.
msg = "%s: %s" % (exc_type.__name__, str(exc))
ce = ChildError(msg,
exc_type.__module__,
exc_type.__name__,
tb_string, logfile, context, package_context)
child_pipe.send(ce)
finally:
child_pipe.close()
if input_multiprocess_fd is not None:
input_multiprocess_fd.close()
def start_build_process(pkg, function, kwargs):
"""Create a child process to do part of a spack build.
def fork(pkg, function, dirty, fake, context='build', **kwargs):
"""Fork a child process to do part of a spack build.
Args:
pkg (PackageBase): package whose environment we should set up the
child process for.
forked process for.
function (callable): argless function to run in the child
process.
dirty (bool): If True, do NOT clean the environment before
building.
fake (bool): If True, skip package setup because it's not a real build
context (string): If 'build', setup build environment. If 'test', setup
test environment.
Usage::
def child_fun():
# do stuff
build_env.start_build_process(pkg, child_fun)
build_env.fork(pkg, child_fun)
The child process is run with the build environment set up by
Forked processes are run with the build environment set up by
spack.build_environment. This allows package authors to have full
control over the environment, etc. without affecting other builds
that might be executed in the same spack call.
@@ -948,36 +850,74 @@ def child_fun():
If something goes wrong, the child process catches the error and
passes it to the parent wrapped in a ChildError. The parent is
expected to handle (or re-raise) the ChildError.
This uses `multiprocessing.Process` to create the child process. The
mechanism used to create the process differs on different operating
systems and for different versions of Python. In some cases "fork"
is used (i.e. the "fork" system call) and in other cases it starts
entirely new Python interpreter process (in the docs this is referred
to as the "spawn" start method). Breaking it down by OS:
- Linux always uses fork.
- Mac OS uses fork before Python 3.8 and "spawn" for 3.8 and after.
- Windows always uses the "spawn" start method.
For more information on `multiprocessing` child process creation
mechanisms, see https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
"""
def child_process(child_pipe, input_stream):
# We are in the child process. Python sets sys.stdin to
# open(os.devnull) to prevent our process and its parent from
# simultaneously reading from the original stdin. But, we assume
# that the parent process is not going to read from it till we
# are done with the child, so we undo Python's precaution.
if input_stream is not None:
sys.stdin = input_stream
try:
if not fake:
setup_package(pkg, dirty=dirty, context=context)
return_value = function()
child_pipe.send(return_value)
except StopPhase as e:
# Do not create a full ChildError from this, it's not an error
# it's a control statement.
child_pipe.send(e)
except BaseException:
# catch ANYTHING that goes wrong in the child process
exc_type, exc, tb = sys.exc_info()
# Need to unwind the traceback in the child because traceback
# objects can't be sent to the parent.
tb_string = traceback.format_exc()
# build up some context from the offending package so we can
# show that, too.
if exc_type is not spack.install_test.TestFailure:
package_context = get_package_context(traceback.extract_tb(tb))
else:
package_context = []
build_log = None
if context == 'build' and hasattr(pkg, 'log_path'):
build_log = pkg.log_path
test_log = None
if context == 'test':
test_log = os.path.join(
pkg.test_suite.stage,
spack.install_test.TestSuite.test_log_name(pkg.spec))
# make a pickleable exception to send to parent.
msg = "%s: %s" % (exc_type.__name__, str(exc))
ce = ChildError(msg,
exc_type.__module__,
exc_type.__name__,
tb_string, package_context,
build_log, test_log)
child_pipe.send(ce)
finally:
child_pipe.close()
parent_pipe, child_pipe = multiprocessing.Pipe()
input_multiprocess_fd = None
serialized_pkg = spack.subprocess_context.PackageInstallContext(pkg)
input_stream = None
try:
# Forward sys.stdin when appropriate, to allow toggling verbosity
if sys.stdin.isatty() and hasattr(sys.stdin, 'fileno'):
input_fd = os.dup(sys.stdin.fileno())
input_multiprocess_fd = MultiProcessFd(input_fd)
input_stream = os.fdopen(os.dup(sys.stdin.fileno()))
p = multiprocessing.Process(
target=_setup_pkg_and_run,
args=(serialized_pkg, function, kwargs, child_pipe,
input_multiprocess_fd))
p = fork_context.Process(
target=child_process, args=(child_pipe, input_stream))
p.start()
except InstallError as e:
@@ -986,8 +926,8 @@ def child_fun():
finally:
# Close the input stream in the parent process
if input_multiprocess_fd is not None:
input_multiprocess_fd.close()
if input_stream is not None:
input_stream.close()
child_result = parent_pipe.recv()
p.join()
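# Standalone sketch of the start-method behavior documented above;
# multiprocessing.get_context is standard-library API.
import multiprocessing

def _greet():
    print('hello from the child')

if __name__ == '__main__':
    # 'spawn' starts a fresh interpreter (Windows always; macOS on
    # Python >= 3.8), while Linux defaults to 'fork'
    ctx = multiprocessing.get_context('spawn')
    proc = ctx.Process(target=_greet)
    proc.start()
    proc.join()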
@@ -1016,9 +956,8 @@ def get_package_context(traceback, context=3):
"""Return some context for an error message when the build fails.
Args:
traceback (traceback): A traceback from some exception raised during
install
traceback (list of tuples): output from traceback.extract_tb() or
traceback.extract_stack()
context (int): Lines of context to show before and after the line
where the error happened
@@ -1027,51 +966,44 @@ def get_package_context(traceback, context=3):
from there.
"""
def make_stack(tb, stack=None):
"""Tracebacks come out of the system in caller -> callee order. Return
an array in callee -> caller order so we can traverse it."""
if stack is None:
stack = []
if tb is not None:
make_stack(tb.tb_next, stack)
stack.append(tb)
return stack
stack = make_stack(traceback)
for tb in stack:
frame = tb.tb_frame
if 'self' in frame.f_locals:
# Find the first proper subclass of PackageBase.
obj = frame.f_locals['self']
if isinstance(obj, spack.package.PackageBase):
for filename, lineno, function, text in reversed(traceback):
if 'package.py' in filename or 'spack/build_systems' in filename:
if function not in ('run_test', '_run_test_helper'):
# We are in a package and not one of the listed methods
# We exclude these methods because we expect errors in them to
# be the result of user tests failing, and we show the tests
# instead.
break
# Package files have a line added at import time, so we adjust the lineno
# when we are getting context from a package file instead of a base class
adjust = 1 if spack.paths.is_package_file(filename) else 0
lineno = lineno - adjust
# We found obj, the Package implementation we care about.
# Point out the location in the install method where we failed.
lines = [
'{0}:{1:d}, in {2}:'.format(
inspect.getfile(frame.f_code),
frame.f_lineno - 1, # subtract 1 because f_lineno is 0-indexed
frame.f_code.co_name
filename,
lineno,
function
)
]
# Build a message showing context in the install method.
sourcelines, start = inspect.getsourcelines(frame)
# Calculate lineno of the error relative to the start of the function.
# Subtract 1 because f_lineno is 0-indexed.
fun_lineno = frame.f_lineno - start - 1
start_ctx = max(0, fun_lineno - context)
sourcelines = sourcelines[start_ctx:fun_lineno + context + 1]
# Adjust for import mangling of package files.
with open(filename, 'r') as f:
sourcelines = f.readlines()
start = max(0, lineno - context - 1)
sourcelines = sourcelines[start:lineno + context + 1]
for i, line in enumerate(sourcelines):
is_error = start_ctx + i == fun_lineno
i = i + adjust # adjusting for import munging again
is_error = start + i == lineno
mark = '>> ' if is_error else ' '
# Add start to get lineno relative to start of file, not function.
marked = ' {0}{1:-6d}{2}'.format(
mark, start + start_ctx + i, line.rstrip())
mark, start + i, line.rstrip())
if is_error:
marked = colorize('@R{%s}' % cescape(marked))
lines.append(marked)
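# Hedged sketch of the new calling convention: callers now pass the output
# of traceback.extract_tb() instead of a raw traceback object.
import sys
import traceback

try:
    raise RuntimeError('build failed')
except RuntimeError:
    _, _, _tb = sys.exc_info()
    # each entry unpacks as (filename, lineno, function, text)
    context_lines = get_package_context(traceback.extract_tb(_tb))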
@@ -1125,31 +1057,32 @@ class ChildError(InstallError):
# context instead of Python context.
build_errors = [('spack.util.executable', 'ProcessError')]
def __init__(self, msg, module, classname, traceback_string, log_name,
log_type, context):
def __init__(self, msg, module, classname, traceback_string, context,
build_log, test_log):
super(ChildError, self).__init__(msg)
self.module = module
self.name = classname
self.traceback = traceback_string
self.log_name = log_name
self.log_type = log_type
self.context = context
self.build_log = build_log
self.test_log = test_log
@property
def long_message(self):
out = StringIO()
out.write(self._long_message if self._long_message else '')
have_log = self.log_name and os.path.exists(self.log_name)
if (self.module, self.name) in ChildError.build_errors:
# The error happened in some external executed process. Show
# the log with errors or warnings highlighted.
if have_log:
write_log_summary(out, self.log_type, self.log_name)
if self.build_log and os.path.exists(self.build_log):
write_log_summary(out, 'build', self.build_log)
if self.test_log and os.path.exists(self.test_log):
write_log_summary(out, 'test', self.test_log)
else:
# The error happened in the Python code, so try to show
# some context from the Package itself.
if self.context:
out.write('\n')
@@ -1159,14 +1092,18 @@ def long_message(self):
if out.getvalue():
out.write('\n')
if have_log:
out.write('See {0} log for details:\n'.format(self.log_type))
out.write(' {0}\n'.format(self.log_name))
if self.build_log and os.path.exists(self.build_log):
out.write('See build log for details:\n')
out.write(' %s\n' % self.build_log)
if self.test_log and os.path.exists(self.test_log):
out.write('See test log for details:\n')
out.write(' %s\n' % self.test_log)
return out.getvalue()
def __str__(self):
return self.message
return self.message + self.long_message + self.traceback
def __reduce__(self):
"""__reduce__ is used to serialize (pickle) ChildErrors.
@@ -1179,14 +1116,16 @@ def __reduce__(self):
self.module,
self.name,
self.traceback,
self.log_name,
self.log_type,
self.context)
self.context,
self.build_log,
self.test_log)
def _make_child_error(msg, module, name, traceback, log, log_type, context):
def _make_child_error(msg, module, name, traceback, context,
build_log, test_log):
"""Used by __reduce__ in ChildError to reconstruct pickled errors."""
return ChildError(msg, module, name, traceback, log, log_type, context)
return ChildError(msg, module, name, traceback, context,
build_log, test_log)
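# Sketch: because __reduce__ hands back _make_child_error plus the argument
# tuple, a ChildError survives the pickling used to ship it from the child
# process to the parent (argument values below are placeholders).
import pickle

_err = ChildError('RuntimeError: build failed', 'builtins', 'RuntimeError',
                  'Traceback (most recent call last): ...', [],
                  '/tmp/spack-build-out.txt', None)
_clone = pickle.loads(pickle.dumps(_err))
assert _clone.build_log == _err.build_log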
class StopPhase(spack.error.SpackError):

View File

@@ -2,15 +2,20 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import inspect
import itertools
import fileinput
import os
import os.path
import shutil
import stat
import sys
import re
from subprocess import PIPE
from subprocess import check_call
import llnl.util.tty as tty
import llnl.util.filesystem as fs
from llnl.util.filesystem import working_dir, force_remove
from spack.package import PackageBase, run_after, run_before
from spack.util.executable import Executable
@@ -77,33 +82,18 @@ class AutotoolsPackage(PackageBase):
#: Options to be passed to autoreconf when using the default implementation
autoreconf_extra_args = []
#: If False deletes all the .la files in the prefix folder
#: after the installation. If True instead it installs them.
install_libtool_archives = False
@property
def _removed_la_files_log(self):
"""File containing the list of remove libtool archives"""
build_dir = self.build_directory
if not os.path.isabs(self.build_directory):
build_dir = os.path.join(self.stage.path, build_dir)
return os.path.join(build_dir, 'removed_la_files.txt')
@property
def archive_files(self):
"""Files to archive for packages based on autotools"""
files = [os.path.join(self.build_directory, 'config.log')]
if not self.install_libtool_archives:
files.append(self._removed_la_files_log)
return files
return [os.path.join(self.build_directory, 'config.log')]
@run_after('autoreconf')
def _do_patch_config_files(self):
"""Some packages ship with older config.guess/config.sub files and
need to have these updated when installed on a newer architecture.
In particular, config.guess fails for PPC64LE for versions prior
to a 2013-06-10 build date (automake 1.13.4) and for ARM (aarch64).
"""
to a 2013-06-10 build date (automake 1.13.4) and for ARM (aarch64)."""
if not self.patch_config_files or (
not self.spec.satisfies('target=ppc64le:') and
not self.spec.satisfies('target=aarch64:')
@@ -120,60 +110,70 @@ def _do_patch_config_files(self):
else:
config_arch = 'local'
def runs_ok(script_abs_path):
# Construct the list of arguments for the call
additional_args = {
'config.sub': [config_arch]
}
script_name = os.path.basename(script_abs_path)
args = [script_abs_path] + additional_args.get(script_name, [])
my_config_files = {'guess': None, 'sub': None}
config_files = {'guess': None, 'sub': None}
config_args = {'guess': [], 'sub': [config_arch]}
try:
check_call(args, stdout=PIPE, stderr=PIPE)
except Exception as e:
tty.debug(e)
return False
for config_name in config_files.keys():
config_file = 'config.{0}'.format(config_name)
if os.path.exists(config_file):
# First search the top-level source directory
my_config_files[config_name] = os.path.abspath(config_file)
else:
# Then search in all sub directories recursively.
# We would like to use AC_CONFIG_AUX_DIR, but not all packages
# ship with their configure.in or configure.ac.
config_path = next((os.path.abspath(os.path.join(r, f))
for r, ds, fs in os.walk('.') for f in fs
if f == config_file), None)
my_config_files[config_name] = config_path
return True
if my_config_files[config_name] is not None:
try:
config_path = my_config_files[config_name]
check_call([config_path] + config_args[config_name],
stdout=PIPE, stderr=PIPE)
# The package's config file already runs OK, so just use it
continue
except Exception as e:
tty.debug(e)
else:
continue
# Compute the list of files that needs to be patched
search_dir = self.stage.path
to_be_patched = fs.find(
search_dir, files=['config.sub', 'config.guess'], recursive=True
)
to_be_patched = [f for f in to_be_patched if not runs_ok(f)]
# Look for a spack-installed automake package
if 'automake' in self.spec:
automake_dir = 'automake-' + str(self.spec['automake'].version)
automake_path = os.path.join(self.spec['automake'].prefix,
'share', automake_dir)
path = os.path.join(automake_path, config_file)
if os.path.exists(path):
config_files[config_name] = path
# Look for the system's config.guess
if (config_files[config_name] is None and
os.path.exists('/usr/share')):
automake_dir = [s for s in os.listdir('/usr/share') if
"automake" in s]
if automake_dir:
automake_path = os.path.join('/usr/share', automake_dir[0])
path = os.path.join(automake_path, config_file)
if os.path.exists(path):
config_files[config_name] = path
if config_files[config_name] is not None:
try:
config_path = config_files[config_name]
my_config_path = my_config_files[config_name]
# If there are no files to be patched, return early
if not to_be_patched:
return
check_call([config_path] + config_args[config_name],
stdout=PIPE, stderr=PIPE)
# Directories where to search for files to be copied
# over the failing ones
good_file_dirs = ['/usr/share']
if 'automake' in self.spec:
good_file_dirs.insert(0, self.spec['automake'].prefix)
m = os.stat(my_config_path).st_mode & 0o777 | stat.S_IWUSR
os.chmod(my_config_path, m)
shutil.copyfile(config_path, my_config_path)
continue
except Exception as e:
tty.debug(e)
# List of files to be found in the directories above
to_be_found = list(set(os.path.basename(f) for f in to_be_patched))
substitutes = {}
for directory in good_file_dirs:
candidates = fs.find(directory, files=to_be_found, recursive=True)
candidates = [f for f in candidates if runs_ok(f)]
for name, good_files in itertools.groupby(
candidates, key=os.path.basename
):
substitutes[name] = next(good_files)
to_be_found.remove(name)
# Check that we found everything we needed
if to_be_found:
msg = 'Failed to find suitable substitutes for {0}'
raise RuntimeError(msg.format(', '.join(to_be_found)))
# Copy the good files over the bad ones
for abs_path in to_be_patched:
name = os.path.basename(abs_path)
fs.copy(substitutes[name], abs_path)
raise RuntimeError('Failed to find suitable ' + config_file)
@run_before('configure')
def _set_autotools_environment_variables(self):
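# Outside the class, the runs_ok check above reduces to roughly this sketch
# (config.sub needs an architecture argument, config.guess takes none):
from subprocess import PIPE, check_call

def _script_runs_ok(script, args=()):
    try:
        check_call([script] + list(args), stdout=PIPE, stderr=PIPE)
        return True
    except Exception:
        return False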
@@ -196,30 +196,20 @@ def _do_patch_libtool(self):
detect the compiler (and patch_libtool is set), patch in the correct
flags for the Arm, Clang/Flang, and Fujitsu compilers."""
# Exit early if we are required not to patch libtool
if not self.patch_libtool:
return
for libtool_path in fs.find(
self.build_directory, 'libtool', recursive=True):
self._patch_libtool(libtool_path)
def _patch_libtool(self, libtool_path):
if self.spec.satisfies('%arm')\
or self.spec.satisfies('%clang')\
or self.spec.satisfies('%fj'):
fs.filter_file('wl=""\n', 'wl="-Wl,"\n', libtool_path)
fs.filter_file('pic_flag=""\n',
'pic_flag="{0}"\n'
.format(self.compiler.cc_pic_flag),
libtool_path)
if self.spec.satisfies('%fj'):
fs.filter_file('-nostdlib', '', libtool_path)
rehead = r'/\S*/'
objfile = ['fjhpctag.o', 'fjcrt0.o', 'fjlang08.o', 'fjomp.o',
'crti.o', 'crtbeginS.o', 'crtendS.o']
for o in objfile:
fs.filter_file(rehead + o, '', libtool_path)
libtool = os.path.join(self.build_directory, "libtool")
if self.patch_libtool and os.path.exists(libtool):
if self.spec.satisfies('%arm') or self.spec.satisfies('%clang') \
or self.spec.satisfies('%fj'):
for line in fileinput.input(libtool, inplace=True):
# Replace missing flags with those for Arm/Clang
if line == 'wl=""\n':
line = 'wl="-Wl,"\n'
if line == 'pic_flag=""\n':
line = 'pic_flag="{0}"\n'\
.format(self.compiler.cc_pic_flag)
if self.spec.satisfies('%fj') and 'fjhpctag.o' in line:
line = re.sub(r'/\S*/fjhpctag.o', '', line)
sys.stdout.write(line)
@property
def configure_directory(self):
@@ -268,19 +258,14 @@ def autoreconf(self, spec, prefix):
# This line is what is needed most of the time
# --install, --verbose, --force
autoreconf_args = ['-ivf']
autoreconf_args += self.autoreconf_search_path_args
for dep in spec.dependencies(deptype='build'):
if os.path.exists(dep.prefix.share.aclocal):
autoreconf_args.extend([
'-I', dep.prefix.share.aclocal
])
autoreconf_args += self.autoreconf_extra_args
m.autoreconf(*autoreconf_args)
@property
def autoreconf_search_path_args(self):
"""Arguments to autoreconf to modify the search paths"""
search_path_args = []
for dep in self.spec.dependencies(deptype='build'):
if os.path.exists(dep.prefix.share.aclocal):
search_path_args.extend(['-I', dep.prefix.share.aclocal])
return search_path_args
@run_after('autoreconf')
def set_configure_or_die(self):
"""Checks the presence of a ``configure`` file after the
@@ -536,19 +521,3 @@ def installcheck(self):
# Check that self.prefix is there after installation
run_after('install')(PackageBase.sanity_check_prefix)
@run_after('install')
def remove_libtool_archives(self):
"""Remove all .la files in prefix sub-folders if the package sets
``install_libtool_archives`` to be False.
"""
# If .la files are to be installed there's nothing to do
if self.install_libtool_archives:
return
# Remove the files and create a log of what was removed
libtool_files = fs.find(str(self.prefix), '*.la', recursive=True)
with fs.safe_remove(*libtool_files):
fs.mkdirp(os.path.dirname(self._removed_la_files_log))
with open(self._removed_la_files_log, mode='w') as f:
f.write('\n'.join(libtool_files))

View File

@@ -12,7 +12,7 @@
import spack.build_environment
from llnl.util.filesystem import working_dir
from spack.util.environment import filter_system_paths
from spack.directives import depends_on, variant, conflicts
from spack.directives import depends_on, variant
from spack.package import PackageBase, InstallError, run_after
# Regex to extract the primary generator from the CMake generator
@@ -94,13 +94,6 @@ class CMakePackage(PackageBase):
description='CMake build type',
values=('Debug', 'Release', 'RelWithDebInfo', 'MinSizeRel'))
# https://cmake.org/cmake/help/latest/variable/CMAKE_INTERPROCEDURAL_OPTIMIZATION.html
variant('ipo', default=False,
description='CMake interprocedural optimization')
# CMAKE_INTERPROCEDURAL_OPTIMIZATION only exists for CMake >= 3.9
conflicts('+ipo', when='^cmake@:3.8',
msg='+ipo is not supported by CMake < 3.9')
depends_on('cmake', type='build')
@property
@@ -154,11 +147,6 @@ def _std_args(pkg):
except KeyError:
build_type = 'RelWithDebInfo'
try:
ipo = pkg.spec.variants['ipo'].value
except KeyError:
ipo = False
define = CMakePackage.define
args = [
'-G', generator,
@@ -166,10 +154,6 @@ def _std_args(pkg):
define('CMAKE_BUILD_TYPE', build_type),
]
# CMAKE_INTERPROCEDURAL_OPTIMIZATION only exists for CMake >= 3.9
if pkg.spec.satisfies('^cmake@3.9:'):
args.append(define('CMAKE_INTERPROCEDURAL_OPTIMIZATION', ipo))
if primary_generator == 'Unix Makefiles':
args.append(define('CMAKE_VERBOSE_MAKEFILE', True))
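# For reference, a hedged sketch of what the removed lines emit: with the
# ipo variant enabled and CMake >= 3.9, define() expands to a typed flag,
# roughly '-DCMAKE_INTERPROCEDURAL_OPTIMIZATION:BOOL=ON'.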

View File

@@ -13,21 +13,21 @@ class CudaPackage(PackageBase):
"""Auxiliary class which contains CUDA variant, dependencies and conflicts
and is meant to unify and facilitate its usage.
Maintainers: ax3l, Rombur
Maintainers: ax3l, svenevs
"""
# https://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html#gpu-feature-list
# https://developer.nvidia.com/cuda-gpus
# https://en.wikipedia.org/wiki/CUDA#GPUs_supported
cuda_arch_values = (
cuda_arch_values = [
'10', '11', '12', '13',
'20', '21',
'30', '32', '35', '37',
'50', '52', '53',
'60', '61', '62',
'70', '72', '75',
'80', '86'
)
'80',
]
# FIXME: keep cuda and cuda_arch separate to make usage easier until
# Spack has depends_on(cuda, when='cuda_arch!=None') or alike
@@ -77,7 +77,6 @@ def cuda_flags(arch_list):
depends_on('cuda@10.0:', when='cuda_arch=75')
depends_on('cuda@11.0:', when='cuda_arch=80')
depends_on('cuda@11.1:', when='cuda_arch=86')
# There are at least three cases to be aware of for compiler conflicts
# 1. Linux x86_64
@@ -94,9 +93,7 @@ def cuda_flags(arch_list):
conflicts('%gcc@7:', when='+cuda ^cuda@:9.1' + arch_platform)
conflicts('%gcc@8:', when='+cuda ^cuda@:10.0.130' + arch_platform)
conflicts('%gcc@9:', when='+cuda ^cuda@:10.2.89' + arch_platform)
conflicts('%gcc@:4', when='+cuda ^cuda@11.0.2:' + arch_platform)
conflicts('%gcc@10:', when='+cuda ^cuda@:11.0.2' + arch_platform)
conflicts('%gcc@11:', when='+cuda ^cuda@:11.1.0' + arch_platform)
conflicts('%gcc@:4,10:', when='+cuda ^cuda@:11.0.2' + arch_platform)
conflicts('%pgi@:14.8', when='+cuda ^cuda@:7.0.27' + arch_platform)
conflicts('%pgi@:15.3,15.5:', when='+cuda ^cuda@7.5' + arch_platform)
conflicts('%pgi@:16.2,16.0:16.3', when='+cuda ^cuda@8' + arch_platform)
@@ -104,8 +101,7 @@ def cuda_flags(arch_list):
conflicts('%pgi@:16,19:', when='+cuda ^cuda@9.2.88:10' + arch_platform)
conflicts('%pgi@:17,20:',
when='+cuda ^cuda@10.1.105:10.2.89' + arch_platform)
conflicts('%pgi@:17,21:',
when='+cuda ^cuda@11.0.2:11.1.0' + arch_platform)
conflicts('%pgi@:17,20.2:', when='+cuda ^cuda@11.0.2' + arch_platform)
conflicts('%clang@:3.4', when='+cuda ^cuda@:7.5' + arch_platform)
conflicts('%clang@:3.7,4:',
when='+cuda ^cuda@8.0:9.0' + arch_platform)
@@ -117,9 +113,7 @@ def cuda_flags(arch_list):
conflicts('%clang@:3.7,8.1:',
when='+cuda ^cuda@10.1.105:10.1.243' + arch_platform)
conflicts('%clang@:3.2,9:', when='+cuda ^cuda@10.2.89' + arch_platform)
conflicts('%clang@:5', when='+cuda ^cuda@11.0.2:' + arch_platform)
conflicts('%clang@10:', when='+cuda ^cuda@:11.0.2' + arch_platform)
conflicts('%clang@11:', when='+cuda ^cuda@:11.1.0' + arch_platform)
conflicts('%clang@:5,10:', when='+cuda ^cuda@11.0.2' + arch_platform)
# x86_64 vs. ppc64le differ according to NVidia docs
# Linux ppc64le compiler conflicts from Table from the docs below:
@@ -135,9 +129,7 @@ def cuda_flags(arch_list):
conflicts('%gcc@8:', when='+cuda ^cuda@:10.0.130' + arch_platform)
conflicts('%gcc@9:', when='+cuda ^cuda@:10.1.243' + arch_platform)
# officially, CUDA 11.0.2 only supports the system GCC 8.3 on ppc64le
conflicts('%gcc@:4', when='+cuda ^cuda@11.0.2:' + arch_platform)
conflicts('%gcc@10:', when='+cuda ^cuda@:11.0.2' + arch_platform)
conflicts('%gcc@11:', when='+cuda ^cuda@:11.1.0' + arch_platform)
conflicts('%gcc@:4,10:', when='+cuda ^cuda@:11.0.2' + arch_platform)
conflicts('%pgi', when='+cuda ^cuda@:8' + arch_platform)
conflicts('%pgi@:16', when='+cuda ^cuda@:9.1.185' + arch_platform)
conflicts('%pgi@:17', when='+cuda ^cuda@:10' + arch_platform)
@@ -147,9 +139,7 @@ def cuda_flags(arch_list):
conflicts('%clang@7:', when='+cuda ^cuda@10.0.130' + arch_platform)
conflicts('%clang@7.1:', when='+cuda ^cuda@:10.1.105' + arch_platform)
conflicts('%clang@8.1:', when='+cuda ^cuda@:10.2.89' + arch_platform)
conflicts('%clang@:5', when='+cuda ^cuda@11.0.2:' + arch_platform)
conflicts('%clang@10:', when='+cuda ^cuda@:11.0.2' + arch_platform)
conflicts('%clang@11:', when='+cuda ^cuda@:11.1.0' + arch_platform)
conflicts('%clang@:5,10.0:', when='+cuda ^cuda@11.0.2' + arch_platform)
# Intel is mostly relevant for x86_64 Linux, even though it also
# exists for Mac OS X. No information prior to CUDA 3.2 or Intel 11.1
@@ -164,12 +154,12 @@ def cuda_flags(arch_list):
conflicts('%intel@18.0:', when='+cuda ^cuda@:9.9')
conflicts('%intel@19.0:', when='+cuda ^cuda@:10.0')
conflicts('%intel@19.1:', when='+cuda ^cuda@:10.1')
conflicts('%intel@19.2:', when='+cuda ^cuda@:11.1.0')
conflicts('%intel@19.2:', when='+cuda ^cuda@:11.0.2')
# XL is mostly relevant for ppc64le Linux
conflicts('%xl@:12,14:', when='+cuda ^cuda@:9.1')
conflicts('%xl@:12,14:15,17:', when='+cuda ^cuda@9.2')
conflicts('%xl@:12,17:', when='+cuda ^cuda@:11.1.0')
conflicts('%xl@:12,17:', when='+cuda ^cuda@:11.0.2')
# Mac OS X
# platform = ' platform=darwin'
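# Sketch of how a package consumes this mixin (hypothetical package name;
# cuda_flags is the static helper defined above, and the usual spack
# package preamble is assumed for CMakePackage):
class _MySolver(CMakePackage, CudaPackage):
    def cmake_args(self):
        args = []
        if '+cuda' in self.spec:
            arch_list = self.spec.variants['cuda_arch'].value
            # e.g. ('70', '80') -> per-arch nvcc code-generation flags
            args.append('-DCMAKE_CUDA_FLAGS=' +
                        ' '.join(self.cuda_flags(arch_list)))
        return args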

View File

@@ -35,19 +35,15 @@ def build_directory(self):
"""The directory containing the ``pom.xml`` file."""
return self.stage.source_path
def build_args(self):
"""List of args to pass to build phase."""
return []
def build(self, spec, prefix):
"""Compile code and package into a JAR file."""
with working_dir(self.build_directory):
mvn = which('mvn')
if self.run_tests:
mvn('verify', *self.build_args())
mvn('verify')
else:
mvn('package', '-DskipTests', *self.build_args())
mvn('package', '-DskipTests')
def install(self, spec, prefix):
"""Copy to installation prefix."""

View File

@@ -1,80 +0,0 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Common utilities for managing intel oneapi packages.
"""
from os.path import dirname, isdir
from spack.package import Package
from spack.util.executable import Executable
from llnl.util.filesystem import find_headers, find_libraries
class IntelOneApiPackage(Package):
"""Base class for Intel oneAPI packages."""
homepage = 'https://software.intel.com/oneapi'
phases = ['install']
def component_info(self,
dir_name,
components,
releases,
url_name):
self._dir_name = dir_name
self._components = components
self._releases = releases
self._url_name = url_name
def url_for_version(self, version):
release = self._release(version)
return 'https://registrationcenter-download.intel.com/akdlm/irc_nas/%s/%s' % (
release['irc_id'], self._oneapi_file(version, release))
def install(self, spec, prefix):
bash = Executable('bash')
# Installer writes files in ~/intel; set HOME so they go to the prefix
bash.add_default_env('HOME', prefix)
version = spec.versions.lowest()
release = self._release(version)
bash('./%s' % self._oneapi_file(version, release),
'-s', '-a', '-s', '--action', 'install',
'--eula', 'accept',
'--components',
self._components,
'--install-dir', prefix)
#
# Helper functions
#
def _release(self, version):
return self._releases[str(version)]
def _oneapi_file(self, version, release):
return 'l_%s_p_%s.%s_offline.sh' % (
self._url_name, version, release['build'])
class IntelOneApiLibraryPackage(IntelOneApiPackage):
"""Base class for Intel oneAPI library packages."""
@property
def headers(self):
include_path = '%s/%s/latest/include' % (
self.prefix, self._dir_name)
return find_headers('*', include_path, recursive=True)
@property
def libs(self):
lib_path = '%s/%s/latest/lib/intel64' % (self.prefix, self._dir_name)
lib_path = lib_path if isdir(lib_path) else dirname(lib_path)
return find_libraries('*', root=lib_path, shared=True, recursive=True)
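# Hedged sketch of a concrete package feeding component_info (directory
# name, component id and release metadata are placeholders, not real values):
class _IntelOneapiExample(IntelOneApiLibraryPackage):
    releases = {'2021.1.1': {'irc_id': '17402', 'build': '52'}}

    def __init__(self, spec):
        super(_IntelOneapiExample, self).__init__(spec)
        self.component_info(dir_name='example',
                            components='intel.oneapi.lin.example.devel',
                            releases=self.releases,
                            url_name='example')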

View File

@@ -2,6 +2,8 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import inspect
import os
import shutil
@@ -233,28 +235,7 @@ def install_args(self, spec, prefix):
if ('py-setuptools' == spec.name or # this is setuptools, or
'py-setuptools' in spec._dependencies and # it's an immediate dep
'build' in spec._dependencies['py-setuptools'].deptypes):
args += ['--single-version-externally-managed']
# Get all relative paths since we set the root to `prefix`
# We query the python interpreter these paths will be used with, for the
# lib and include directories. This ensures we use `lib`/`lib64` as
# expected by python.
python = spec['python'].package.command
command_start = 'print(distutils.sysconfig.'
commands = ';'.join([
'import distutils.sysconfig',
command_start + 'get_python_lib(plat_specific=False, prefix=""))',
command_start + 'get_python_lib(plat_specific=True, prefix=""))',
command_start + 'get_python_inc(plat_specific=True, prefix=""))'])
pure_site_packages_dir, plat_site_packages_dir, inc_dir = python(
'-c', commands, output=str, error=str).strip().split('\n')
args += ['--root=%s' % prefix,
'--install-purelib=%s' % pure_site_packages_dir,
'--install-platlib=%s' % plat_site_packages_dir,
'--install-scripts=bin',
'--install-data=""',
'--install-headers=%s' % inc_dir
]
args += ['--single-version-externally-managed', '--root=/']
return args
@@ -437,7 +418,6 @@ def view_file_conflicts(self, view, merge_map):
def add_files_to_view(self, view, merge_map):
bin_dir = self.spec.prefix.bin
python_prefix = self.extendee_spec.prefix
python_is_external = self.extendee_spec.external
global_view = same_path(python_prefix, view.get_projection_for_spec(
self.spec
))
@@ -448,8 +428,7 @@ def add_files_to_view(self, view, merge_map):
view.link(src, dst)
elif not os.path.islink(src):
shutil.copy2(src, dst)
is_script = 'script' in get_filetype(src)
if is_script and not python_is_external:
if 'script' in get_filetype(src):
filter_file(
python_prefix, os.path.abspath(
view.get_projection_for_spec(self.spec)), dst

View File

@@ -1,133 +0,0 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
# Troubleshooting advice for +rocm builds:
#
# 1. When building with clang, go to your compilers.yaml,
# add an entry for the amd version of clang, as below.
# This will ensure that your entire package is compiled/linked
# with the same compiler version. If you use a different version of
# clang which is linked against a different version of the gcc library,
# you will get errors along the lines of:
# undefined reference to
# `std::__throw_out_of_range_fmt(char const*, ...)@@GLIBCXX_3.4.20'
# which is indicative of a mismatch in standard library versions.
#
# in compilers.yaml
# - compiler:
# spec: clang@amd
# paths:
# cc: /opt/rocm/llvm/bin/clang
# cxx: /opt/rocm/llvm/bin/clang++
# f77:
# fc:
# flags: {}
# operating_system: rhel7
# target: x86_64
# modules: []
# environment: {}
# extra_rpaths: []
#
#
# 2. hip and its dependencies are currently NOT picked up by spack
# automatically, and should therefore be added to packages.yaml by hand:
#
# in packages.yaml:
# hip:
# externals:
# - spec: hip@3.8.20371-d1886b0b
# prefix: /opt/rocm/hip
# extra_attributes:
# compilers:
# c: /opt/rocm/llvm/bin/clang++
# c++: /opt/rocm/llvm/bin/clang++
# hip: /opt/rocm/hip/bin/hipcc
# buildable: false
# hsa-rocr-dev:
# externals:
# - spec: hsa-rocr-dev
# prefix: /opt/rocm
# extra_attributes:
# compilers:
# c: /opt/rocm/llvm/bin/clang++
# cxx: /opt/rocm/llvm/bin/clang++
# buildable: false
# llvm-amdgpu:
# externals:
# - spec: llvm-amdgpu
# prefix: /opt/rocm/llvm
# extra_attributes:
# compilers:
# c: /opt/rocm/llvm/bin/clang++
# cxx: /opt/rocm/llvm/bin/clang++
# buildable: false
#
# 3. In part 2, DO NOT list the path to hsa as /opt/rocm/hsa! You want spack
# to find hsa in /opt/rocm/include/hsa/hsa.h. The directory
# /opt/rocm/hsa also has an hsa.h file, but it won't be found because spack
# does not like its directory structure.
#
from spack.package import PackageBase
from spack.directives import depends_on, variant, conflicts
import spack.variant
class ROCmPackage(PackageBase):
"""Auxiliary class which contains ROCm variant, dependencies and conflicts
and is meant to unify and facilitate its usage. Closely mimics CudaPackage.
Maintainers: dtaller
"""
# https://llvm.org/docs/AMDGPUUsage.html
# Possible architectures
amdgpu_targets = (
'gfx701', 'gfx801', 'gfx802', 'gfx803',
'gfx900', 'gfx906', 'gfx908', 'gfx1010',
'gfx1011', 'gfx1012'
)
variant('rocm', default=False, description='Enable ROCm support')
# possible amd gpu targets for rocm builds
variant('amdgpu_target',
description='AMD GPU architecture',
values=spack.variant.any_combination_of(*amdgpu_targets))
depends_on('llvm-amdgpu', when='+rocm')
depends_on('hsa-rocr-dev', when='+rocm')
depends_on('hip', when='+rocm')
# need amd gpu type for rocm builds
conflicts('amdgpu_target=none', when='+rocm')
# Make sure amdgpu_targets cannot be used without +rocm
for value in amdgpu_targets:
conflicts('~rocm', when='amdgpu_target=' + value)
# https://github.com/ROCm-Developer-Tools/HIP/blob/master/bin/hipcc
# It seems that hip-clang does not (yet?) accept this flag, in which case
# we will still need to set the HCC_AMDGPU_TARGET environment flag in the
# hip package file. But I will leave this here for future development.
@staticmethod
def hip_flags(amdgpu_target):
archs = ",".join(amdgpu_target)
return '--amdgpu-target={0}'.format(archs)
# HIP version vs Architecture
# TODO: add a bunch of lines like:
# depends_on('hip@:6.0', when='amdgpu_target=gfx701')
# to indicate minimum version for each architecture.
# Compiler conflicts
# TODO: add conflicts statements along the lines of
# arch_platform = ' target=x86_64: platform=linux'
# conflicts('%gcc@5:', when='+cuda ^cuda@:7.5' + arch_platform)
# conflicts('platform=darwin', when='+cuda ^cuda@11.0.2:')
# for hip-related limitations.
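# Sketch of a dependent package using the variant and helper above
# (hypothetical package; assumes the usual spack package preamble):
class _MyKernelLib(CMakePackage, ROCmPackage):
    def cmake_args(self):
        args = []
        if '+rocm' in self.spec:
            targets = self.spec.variants['amdgpu_target'].value
            # e.g. ('gfx906', 'gfx908') -> '--amdgpu-target=gfx906,gfx908'
            args.append('-DCMAKE_CXX_FLAGS=' + self.hip_flags(targets))
        return args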

View File

@@ -33,16 +33,12 @@
from spack.spec import Spec
import spack.util.spack_yaml as syaml
import spack.util.web as web_util
import spack.util.gpg as gpg_util
import spack.util.url as url_util
JOB_RETRY_CONDITIONS = [
'always',
]
SPACK_PR_MIRRORS_ROOT_URL = 's3://spack-pr-mirrors'
spack_gpg = spack.main.SpackCommand('gpg')
spack_compiler = spack.main.SpackCommand('compiler')
@@ -533,12 +529,6 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file,
os.environ.get('SPACK_IS_PR_PIPELINE', '').lower() == 'true'
)
spack_pr_branch = os.environ.get('SPACK_PR_BRANCH', None)
pr_mirror_url = None
if spack_pr_branch:
pr_mirror_url = url_util.join(SPACK_PR_MIRRORS_ROOT_URL,
spack_pr_branch)
ci_mirrors = yaml_root['mirrors']
mirror_urls = [url for url in ci_mirrors.values()]
@@ -597,7 +587,6 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file,
max_length_needs = 0
max_needs_job = ''
before_script, after_script = None, None
for phase in phases:
phase_name = phase['name']
strip_compilers = phase['strip-compilers']
@@ -702,19 +691,12 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file,
# bootstrap spec lists, then we will add more dependencies to
# the job (that compiler and maybe its dependencies as well).
if is_main_phase(phase_name):
spec_arch_family = (release_spec.architecture
.target
.microarchitecture
.family)
compiler_pkg_spec = compilers.pkg_spec_for_compiler(
release_spec.compiler)
for bs in bootstrap_specs:
bs_arch = bs['spec'].architecture
bs_arch_family = (bs_arch.target
.microarchitecture
.family)
if (bs['spec'].satisfies(compiler_pkg_spec) and
bs_arch_family == spec_arch_family):
bs_arch == release_spec.architecture):
# We found the bootstrap compiler this release spec
# should be built with, so for DAG scheduling
# purposes, we will at least add the compiler spec
@@ -734,15 +716,6 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file,
str(bs_arch),
build_group,
enable_artifacts_buildcache))
else:
debug_msg = ''.join([
'Considered compiler {0} for spec ',
'{1}, but rejected it either because it was ',
'not the compiler required by the spec, or ',
'because the target arch families of the ',
'spec and the compiler did not match'
]).format(bs['spec'], release_spec)
tty.debug(debug_msg)
if enable_cdash_reporting:
cdash_build_name = get_cdash_build_name(
@@ -833,10 +806,9 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file,
final_stage = 'stage-rebuild-index'
final_job = {
'stage': final_stage,
'script': 'spack buildcache update-index --keys -d {0}'.format(
'script': 'spack buildcache update-index -d {0}'.format(
mirror_urls[0]),
'tags': final_job_config['tags'],
'when': 'always'
'tags': final_job_config['tags']
}
if 'image' in final_job_config:
final_job['image'] = final_job_config['image']
@@ -869,9 +841,6 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file,
'SPACK_CHECKOUT_VERSION': version_to_clone,
}
if pr_mirror_url:
output_object['variables']['SPACK_PR_MIRROR_URL'] = pr_mirror_url
sorted_output = {}
for output_key, output_value in sorted(output_object.items()):
sorted_output[output_key] = output_value
@@ -934,14 +903,6 @@ def import_signing_key(base64_signing_key):
tty.debug(signing_keys_output)
def can_sign_binaries():
return len(gpg_util.signing_keys()) == 1
def can_verify_binaries():
return len(gpg_util.public_keys()) >= 1
def configure_compilers(compiler_action, scope=None):
if compiler_action == 'INSTALL_MISSING':
tty.debug('Make sure bootstrapped compiler will be installed')
@@ -1117,15 +1078,12 @@ def read_cdashid_from_mirror(spec, mirror_url):
return int(contents)
def push_mirror_contents(env, spec, yaml_path, mirror_url, build_id,
sign_binaries):
def push_mirror_contents(env, spec, yaml_path, mirror_url, build_id):
if mirror_url:
unsigned = not sign_binaries
tty.debug('Creating buildcache ({0})'.format(
'unsigned' if unsigned else 'signed'))
tty.debug('Creating buildcache')
buildcache._createtarball(env, spec_yaml=yaml_path, add_deps=False,
output_location=mirror_url, force=True,
allow_root=True, unsigned=unsigned)
allow_root=True)
if build_id:
tty.debug('Writing cdashid ({0}) to remote mirror: {1}'.format(
build_id, mirror_url))

View File

@@ -181,19 +181,6 @@ def parse_specs(args, **kwargs):
raise spack.error.SpackError(msg)
def matching_spec_from_env(spec):
"""
Returns a concrete spec, matching what is available in the environment.
If no matching spec is found in the environment (or if no environment is
active), this will return the given spec but concretized.
"""
env = spack.environment.get_env({}, cmd_name)
if env:
return env.matching_spec(spec) or spec.concretized()
else:
return spec.concretized()
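# Hedged usage sketch for the helper removed above:
import spack.spec

_spec = spack.spec.Spec('zlib@1.2.11')
# returns the active environment's concrete match when one exists,
# otherwise concretizes the spec directly
_concrete = matching_spec_from_env(_spec)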
def elide_list(line_list, max_num=10):
"""Takes a long list and limits it to a smaller number of elements,
replacing intervening elements with '...'. For example::
@@ -351,7 +338,6 @@ def display_specs(specs, args=None, **kwargs):
decorators (dict): dictionary mapping specs to decorators
header_callback (function): called at start of arch/compiler groups
all_headers (bool): show headers even when arch/compiler aren't defined
output (stream): A file object to write to. Default is ``sys.stdout``
"""
def get_arg(name, default=None):
@@ -372,7 +358,6 @@ def get_arg(name, default=None):
variants = get_arg('variants', False)
groups = get_arg('groups', True)
all_headers = get_arg('all_headers', False)
output = get_arg('output', sys.stdout)
decorator = get_arg('decorator', None)
if decorator is None:
@@ -421,39 +406,31 @@ def format_list(specs):
# unless any of these are set, we can just colify and be done.
if not any((deps, paths)):
colify((f[0] for f in formatted), indent=indent, output=output)
return ''
colify((f[0] for f in formatted), indent=indent)
return
# otherwise, we'll print specs one by one
max_width = max(len(f[0]) for f in formatted)
path_fmt = "%%-%ds%%s" % (max_width + 2)
out = ''
# getting lots of prefixes requires DB lookups. Ensure
# all spec.prefix calls are in one transaction.
with spack.store.db.read_transaction():
for string, spec in formatted:
if not string:
# print newline from above
out += '\n'
print() # print newline from above
continue
if paths:
out += path_fmt % (string, spec.prefix) + '\n'
print(path_fmt % (string, spec.prefix))
else:
out += string + '\n'
print(string)
return out
out = ''
if groups:
for specs in iter_groups(specs, indent, all_headers):
output.write(format_list(specs))
format_list(specs)
else:
out = format_list(sorted(specs))
output.write(out)
output.flush()
format_list(sorted(specs))
def spack_is_git_repo():

View File

@@ -7,7 +7,7 @@
import collections
import archspec.cpu
import llnl.util.cpu
import llnl.util.tty.colify as colify
import llnl.util.tty.color as color
import spack.architecture as architecture
@@ -73,7 +73,7 @@ def display_target_group(header, target_group):
def arch(parser, args):
if args.known_targets:
display_targets(archspec.cpu.TARGETS)
display_targets(llnl.util.cpu.targets)
return
if args.frontend:

View File

@@ -231,9 +231,6 @@ def setup_parser(subparser):
'update-index', help=buildcache_update_index.__doc__)
update_index.add_argument(
'-d', '--mirror-url', default=None, help='Destination mirror url')
update_index.add_argument(
'-k', '--keys', default=False, action='store_true',
help='If provided, key index will be updated as well as package index')
update_index.set_defaults(func=buildcache_update_index)
@@ -295,7 +292,7 @@ def match_downloaded_specs(pkgs, allow_multiple_matches=False, force=False,
specs_from_cli = []
has_errors = False
specs = bindist.update_cache_and_get_specs()
specs = bindist.get_specs()
if not other_arch:
arch = spack.architecture.default_arch().to_spec()
specs = [s for s in specs if s.satisfies(arch)]
@@ -476,8 +473,7 @@ def installtarball(args):
tty.die("build cache file installation requires" +
" at least one package spec argument")
pkgs = set(args.specs)
matches = match_downloaded_specs(pkgs, args.multiple, args.force,
args.otherarch)
matches = match_downloaded_specs(pkgs, args.multiple, args.otherarch)
for match in matches:
install_tarball(match, args)
@@ -509,7 +505,7 @@ def install_tarball(spec, args):
def listspecs(args):
"""list binary packages available from mirrors"""
specs = bindist.update_cache_and_get_specs()
specs = bindist.get_specs()
if not args.allarch:
arch = spack.architecture.default_arch().to_spec()
specs = [s for s in specs if s.satisfies(arch)]
@@ -781,13 +777,6 @@ def buildcache_update_index(args):
bindist.generate_package_index(
url_util.join(outdir, bindist.build_cache_relative_path()))
if args.keys:
keys_url = url_util.join(outdir,
bindist.build_cache_relative_path(),
bindist.build_cache_keys_relative_path())
bindist.generate_key_index(keys_url)
def buildcache(parser, args):
if args.func:

View File

@@ -3,7 +3,8 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.cmd.common
from spack.cmd.common import print_module_placeholder_help
import spack.cmd.location
description = "cd to spack directories in the shell"
@@ -19,8 +20,4 @@ def setup_parser(subparser):
def cd(parser, args):
spec = " ".join(args.spec) if args.spec else "SPEC"
spack.cmd.common.shell_init_instructions(
"spack cd",
"cd `spack location --install-dir %s`" % spec
)
print_module_placeholder_help()

View File

@@ -138,7 +138,6 @@ def ci_rebuild(args):
cdash_build_name = get_env_var('SPACK_CDASH_BUILD_NAME')
related_builds = get_env_var('SPACK_RELATED_BUILDS_CDASH')
pr_env_var = get_env_var('SPACK_IS_PR_PIPELINE')
pr_mirror_url = get_env_var('SPACK_PR_MIRROR_URL')
gitlab_ci = None
if 'gitlab-ci' in yaml_root:
@@ -181,6 +180,8 @@ def ci_rebuild(args):
tty.debug('job_spec_pkg_name = {0}'.format(job_spec_pkg_name))
tty.debug('compiler_action = {0}'.format(compiler_action))
spack_cmd = exe.which('spack')
cdash_report_dir = os.path.join(ci_artifact_dir, 'cdash_report')
temp_dir = os.path.join(ci_artifact_dir, 'jobs_scratch_dir')
job_log_dir = os.path.join(temp_dir, 'logs')
@@ -234,17 +235,20 @@ def ci_rebuild(args):
for next_entry in directory_list:
tty.debug(' {0}'.format(next_entry))
# Make a copy of the environment file, so we can overwrite the changed
# version in between the two invocations of "spack install"
env_src_path = env.manifest_path
env_dirname = os.path.dirname(env_src_path)
env_filename = os.path.basename(env_src_path)
env_copyname = '{0}_BACKUP'.format(env_filename)
env_dst_path = os.path.join(env_dirname, env_copyname)
shutil.copyfile(env_src_path, env_dst_path)
tty.debug('job concrete spec path: {0}'.format(job_spec_yaml_path))
if signing_key:
spack_ci.import_signing_key(signing_key)
can_sign = spack_ci.can_sign_binaries()
sign_binaries = can_sign and spack_is_pr_pipeline is False
can_verify = spack_ci.can_verify_binaries()
verify_binaries = can_verify and spack_is_pr_pipeline is False
spack_ci.configure_compilers(compiler_action)
spec_map = spack_ci.get_concrete_specs(
@@ -269,76 +273,27 @@ def ci_rebuild(args):
with open(root_spec_yaml_path, 'w') as fd:
fd.write(spec_map['root'].to_yaml(hash=ht.build_hash))
# TODO: Refactor the spack install command so it's easier to use from
# python modules. Currently we use "exe.which('spack')" to make it
# easier to install packages from here, but it introduces some
# problems, e.g. if we want the spack command to have access to the
# mirrors we're configuring, then we have to use the "spack" command
# to add the mirrors too, which in turn means that any code here *not*
# using the spack command does *not* have access to the mirrors.
spack_cmd = exe.which('spack')
mirrors_to_check = {
'ci_remote_mirror': remote_mirror_url,
}
def add_mirror(mirror_name, mirror_url):
m_args = ['mirror', 'add', mirror_name, mirror_url]
tty.debug('Adding mirror: spack {0}'.format(m_args))
mirror_add_output = spack_cmd(*m_args)
# Workaround: Adding the mirrors above, using "spack_cmd" makes
# sure they're available later when we use "spack_cmd" to install
# the package. But then we also need to add them to this dict
# below, so they're available in this process (we end up having to
# pass them to "bindist.get_mirrors_for_spec()")
mirrors_to_check[mirror_name] = mirror_url
tty.debug('spack mirror add output: {0}'.format(mirror_add_output))
# Configure mirrors
if pr_mirror_url:
add_mirror('ci_pr_mirror', pr_mirror_url)
if enable_artifacts_mirror:
add_mirror('ci_artifact_mirror', artifact_mirror_url)
tty.debug('listing spack mirrors:')
spack_cmd('mirror', 'list')
spack_cmd('config', 'blame', 'mirrors')
# Checks all mirrors for a built spec with a matching full hash
matches = bindist.get_mirrors_for_spec(
job_spec, force=False, full_hash_match=True,
mirrors_to_check=mirrors_to_check)
if matches:
# Got a full hash match on at least one configured mirror. All
# matches represent the fully up-to-date spec, so should all be
# equivalent. If artifacts mirror is enabled, we just pick one
# of the matches and download the buildcache files from there to
# the artifacts, so they're available to be used by dependent
# jobs in subsequent stages.
tty.debug('No need to rebuild {0}'.format(job_spec_pkg_name))
if bindist.needs_rebuild(job_spec, remote_mirror_url, True):
# Binary on remote mirror is not up to date, we need to rebuild
# it.
#
# FIXME: ensure mirror precedence causes this local mirror to
# be chosen ahead of the remote one when installing deps
if enable_artifacts_mirror:
matching_mirror = matches[0]['mirror_url']
tty.debug('Getting {0} buildcache from {1}'.format(
job_spec_pkg_name, matching_mirror))
tty.debug('Downloading to {0}'.format(build_cache_dir))
buildcache.download_buildcache_files(
job_spec, build_cache_dir, True, matching_mirror)
else:
# No full hash match anywhere means we need to rebuild spec
mirror_add_output = spack_cmd(
'mirror', 'add', 'local_mirror', artifact_mirror_url)
tty.debug('spack mirror add:')
tty.debug(mirror_add_output)
# Build up common install arguments
install_args = [
'-d', '-v', '-k', 'install',
'--keep-stage',
'--require-full-hash-match',
]
mirror_list_output = spack_cmd('mirror', 'list')
tty.debug('listing spack mirrors:')
tty.debug(mirror_list_output)
if not verify_binaries:
install_args.append('--no-check-signature')
# 2) build up install arguments
install_args = ['-d', '-v', '-k', 'install', '--keep-stage']
# Add arguments to create + register a new build on CDash (if
# enabled)
# 3) create/register a new build on CDash (if enabled)
cdash_args = []
if enable_cdash:
tty.debug('Registering build with CDash')
(cdash_build_id,
@@ -349,63 +304,82 @@ def add_mirror(mirror_name, mirror_url):
cdash_upload_url = '{0}/submit.php?project={1}'.format(
cdash_base_url, cdash_project_enc)
install_args.extend([
cdash_args = [
'--cdash-upload-url', cdash_upload_url,
'--cdash-build', cdash_build_name,
'--cdash-site', cdash_site,
'--cdash-buildstamp', cdash_build_stamp,
])
]
install_args.append(job_spec_yaml_path)
spec_cli_arg = [job_spec_yaml_path]
tty.debug('Installing {0} from source'.format(job_spec.name))
tty.debug('Installing package')
try:
tty.debug('spack install arguments: {0}'.format(
install_args))
spack_cmd(*install_args)
finally:
spack_ci.copy_stage_logs_to_artifacts(job_spec, job_log_dir)
# Two-pass install is intended to avoid spack trying to
# install from buildcache even though the locally computed
# full hash is different than the one stored in the spec.yaml
# file on the remote mirror.
first_pass_args = install_args + [
'--cache-only',
'--only',
'dependencies',
]
first_pass_args.extend(spec_cli_arg)
tty.debug('First pass install arguments: {0}'.format(
first_pass_args))
spack_cmd(*first_pass_args)
# Create buildcache on remote mirror, either on pr-specific
# mirror or on mirror defined in spack environment
if spack_is_pr_pipeline:
buildcache_mirror_url = pr_mirror_url
else:
buildcache_mirror_url = remote_mirror_url
# Overwrite the changed environment file so it doesn't break
# the next install invocation.
tty.debug('Copying {0} to {1}'.format(
env_dst_path, env_src_path))
shutil.copyfile(env_dst_path, env_src_path)
try:
spack_ci.push_mirror_contents(
env, job_spec, job_spec_yaml_path, buildcache_mirror_url,
cdash_build_id, sign_binaries)
second_pass_args = install_args + [
'--no-cache',
'--only',
'package',
]
second_pass_args.extend(cdash_args)
second_pass_args.extend(spec_cli_arg)
tty.debug('Second pass install arguments: {0}'.format(
second_pass_args))
spack_cmd(*second_pass_args)
except Exception as inst:
# If the mirror we're pushing to is on S3 and there's some
# permissions problem, for example, we can't just target
# that exception type here, since users of the
# `spack ci rebuild' may not need or want any dependency
# on boto3. So we use the first non-boto exception type
# in the hierarchy:
# boto3.exceptions.S3UploadFailedError
# boto3.exceptions.Boto3Error
# Exception
# BaseException
# object
err_msg = 'Error msg: {0}'.format(inst)
if 'Access Denied' in err_msg:
tty.msg('Permission problem writing to mirror')
tty.msg(err_msg)
tty.error('Caught exception during install:')
tty.error(inst)
# Create another copy of that buildcache on "local artifact
# mirror" (only done if artifacts buildcache is enabled)
spack_ci.copy_stage_logs_to_artifacts(job_spec, job_log_dir)
# 4) create buildcache on remote mirror, but not if this is
# running to test a spack PR
if not spack_is_pr_pipeline:
spack_ci.push_mirror_contents(
env, job_spec, job_spec_yaml_path, remote_mirror_url,
cdash_build_id)
# 5) create another copy of that buildcache on "local artifact
# mirror" (only done if cash reporting is enabled)
spack_ci.push_mirror_contents(env, job_spec, job_spec_yaml_path,
artifact_mirror_url, cdash_build_id,
sign_binaries)
artifact_mirror_url, cdash_build_id)
# Relate this build to its dependencies on CDash (if enabled)
# 6) relate this build to its dependencies on CDash (if enabled)
if enable_cdash:
spack_ci.relate_cdash_builds(
spec_map, cdash_base_url, cdash_build_id, cdash_project,
artifact_mirror_url or pr_mirror_url or remote_mirror_url)
artifact_mirror_url or remote_mirror_url)
else:
# There is nothing to do here unless "local artifact mirror" is
# enabled, in which case we need to download the buildcache to
# the local artifacts directory to be used by dependent jobs in
# subsequent stages
tty.debug('No need to rebuild {0}'.format(job_spec_pkg_name))
if enable_artifacts_mirror:
tty.debug('Getting {0} buildcache'.format(job_spec_pkg_name))
tty.debug('Downloading to {0}'.format(build_cache_dir))
buildcache.download_buildcache_files(
job_spec, build_cache_dir, True, remote_mirror_url)
def ci(parser, args):


@@ -10,12 +10,11 @@
import llnl.util.tty as tty
import spack.caches
import spack.config
import spack.cmd.test
import spack.cmd.common.arguments as arguments
import spack.main
import spack.repo
import spack.stage
import spack.config
from spack.paths import lib_path, var_path
@@ -27,7 +26,7 @@
class AllClean(argparse.Action):
"""Activates flags -s -d -f -m and -p simultaneously"""
def __call__(self, parser, namespace, values, option_string=None):
parser.parse_args(['-sdfmpb'], namespace=namespace)
parser.parse_args(['-sdfmp'], namespace=namespace)
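The AllClean trick is a custom argparse.Action that re-feeds a combined short-option string to the same parser, flipping several boolean flags at once. A self-contained toy version of the pattern:

import argparse

class AllFlags(argparse.Action):
    """Toy AllClean: '-a' re-parses '-sd' to set both boolean flags."""
    def __call__(self, parser, namespace, values, option_string=None):
        parser.parse_args(['-sd'], namespace=namespace)

parser = argparse.ArgumentParser()
parser.add_argument('-s', action='store_true')
parser.add_argument('-d', action='store_true')
parser.add_argument('-a', action=AllFlags, nargs=0)
assert parser.parse_args(['-a']).s  # '-a' implied '-s' (and '-d')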
def setup_parser(subparser):
@@ -47,10 +46,7 @@ def setup_parser(subparser):
'-p', '--python-cache', action='store_true',
help="remove .pyc, .pyo files and __pycache__ folders")
subparser.add_argument(
'-b', '--bootstrap', action='store_true',
help="remove software needed to bootstrap Spack")
subparser.add_argument(
'-a', '--all', action=AllClean, help="equivalent to -sdfmpb", nargs=0
'-a', '--all', action=AllClean, help="equivalent to -sdfmp", nargs=0
)
arguments.add_common_arguments(subparser, ['specs'])
@@ -58,7 +54,7 @@ def setup_parser(subparser):
def clean(parser, args):
# If nothing was set, activate the default
if not any([args.specs, args.stage, args.downloads, args.failures,
args.misc_cache, args.python_cache, args.bootstrap]):
args.misc_cache, args.python_cache]):
args.stage = True
# Then do the cleaning falling through the cases
@@ -100,10 +96,3 @@ def clean(parser, args):
dname = os.path.join(root, d)
tty.debug('Removing {0}'.format(dname))
shutil.rmtree(dname)
if args.bootstrap:
msg = 'Removing software in "{0}"'
tty.msg(msg.format(spack.paths.user_bootstrap_store))
with spack.store.use_store(spack.paths.user_bootstrap_store):
uninstall = spack.main.SpackCommand('uninstall')
uninstall('-a', '-y')


@@ -3,51 +3,35 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import llnl.util.tty as tty
import llnl.util.tty.color as color
import spack.paths
from llnl.util import tty
def shell_init_instructions(cmd, equivalent):
"""Print out instructions for users to initialize shell support.
shell_init_instructions = [
"To initialize spack's shell commands:",
"",
" # for bash and zsh",
" . %s/setup-env.sh" % spack.paths.share_path,
"",
" # for csh and tcsh",
" setenv SPACK_ROOT %s" % spack.paths.prefix,
" source %s/setup-env.csh" % spack.paths.share_path, ""
]
Arguments:
cmd (str): the command the user tried to run that requires
shell support in order to work
equivalent (str): a command they can run instead, without
enabling shell support
def print_module_placeholder_help():
"""
For use by commands to tell the user how to activate shell support.
"""
shell_specific = "{sh_arg}" in equivalent
msg = [
"`%s` requires spack's shell support." % cmd,
"",
"To set up shell support, run the command below for your shell.",
"",
color.colorize("@*c{For bash/zsh/sh:}"),
" . %s/setup-env.sh" % spack.paths.share_path,
"",
color.colorize("@*c{For csh/tcsh:}"),
" source %s/setup-env.csh" % spack.paths.share_path,
"",
color.colorize("@*c{For fish:}"),
" source %s/setup-env.fish" % spack.paths.share_path,
"",
"Or, if you do not want to use shell support, run " + (
"one of these" if shell_specific else "this") + " instead:",
"",
"This command requires spack's shell integration.", ""
] + shell_init_instructions + [
"This exposes a 'spack' shell function, which you can use like",
" $ spack load package-foo", "",
"Running the Spack executable directly (for example, invoking",
"./bin/spack) will bypass the shell function and print this",
"placeholder message, even if you have sourced one of the above",
"shell integration scripts."
]
if shell_specific:
msg += [
equivalent.format(sh_arg="--sh ") + " # bash/zsh/sh",
equivalent.format(sh_arg="--csh ") + " # csh/tcsh",
equivalent.format(sh_arg="--fish") + " # fish",
]
else:
msg += [" " + equivalent]
msg += ['']
tty.error(*msg)
tty.msg(*msg)
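Hypothetical call sites illustrating the new {sh_arg} convention (neither call appears in this diff): when the equivalent command needs a shell-specific flag, the template is expanded once per shell; otherwise the single equivalent is printed as-is.

# Shell-specific: prints '--sh ', '--csh ', and '--fish' variants.
shell_init_instructions('spack load', 'spack load {sh_arg}<spec>')
# Shell-agnostic: prints the one equivalent command unchanged.
shell_init_instructions('spack unload', 'spack unload <spec>')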


@@ -53,13 +53,11 @@ def emulate_env_utility(cmd_name, context, args):
spec = args.spec[0]
cmd = args.spec[1:]
specs = spack.cmd.parse_specs(spec, concretize=False)
specs = spack.cmd.parse_specs(spec, concretize=True)
if len(specs) > 1:
tty.die("spack %s only takes one spec." % cmd_name)
spec = specs[0]
spec = spack.cmd.matching_spec_from_env(spec)
build_environment.setup_package(spec.package, args.dirty, context)
if args.dump:
@@ -78,7 +76,7 @@ def emulate_env_utility(cmd_name, context, args):
os.execvp(cmd[0], cmd)
elif not bool(args.pickle or args.dump):
# If no command or dump/pickle option then act like the "env" command
# If no command or dump/pickle option act like the "env" command
# and print out env vars.
for key, val in os.environ.items():
print("%s=%s" % (key, val))


@@ -12,6 +12,7 @@
import spack.cmd
import spack.cmd.common.arguments as arguments
import spack.repo
from spack.stage import DIYStage
description = "developer build: build from code in current working directory"
section = "build"
@@ -39,13 +40,6 @@ def setup_parser(subparser):
subparser.add_argument(
'--drop-in', type=str, dest='shell', default=None,
help="drop into a build environment in a new shell, e.g. bash, zsh")
subparser.add_argument(
'--test', default=None,
choices=['root', 'all'],
help="""If 'root' is chosen, run package tests during
installation for top-level packages (but skip tests for dependencies).
if 'all' is chosen, run package tests during installation for all
packages. If neither are chosen, don't run tests for any packages.""")
arguments.add_common_arguments(subparser, ['spec'])
stop_group = subparser.add_mutually_exclusive_group()
@@ -78,14 +72,6 @@ def dev_build(self, args):
"spack dev-build spec must have a single, concrete version. "
"Did you forget a package version number?")
source_path = args.source_path
if source_path is None:
source_path = os.getcwd()
source_path = os.path.abspath(source_path)
# Forces the build to run out of the source directory.
spec.constrain('dev_path=%s' % source_path)
spec.concretize()
package = spack.repo.get(spec)
@@ -94,25 +80,26 @@ def dev_build(self, args):
tty.msg("Uninstall or try adding a version suffix for this dev build.")
sys.exit(1)
source_path = args.source_path
if source_path is None:
source_path = os.getcwd()
source_path = os.path.abspath(source_path)
# Forces the build to run out of the current directory.
package.stage = DIYStage(source_path)
# disable checksumming if requested
if args.no_checksum:
spack.config.set('config:checksum', False, scope='command_line')
tests = False
if args.test == 'all':
tests = True
elif args.test == 'root':
tests = [spec.name for spec in specs]
package.do_install(
tests=tests,
make_jobs=args.jobs,
keep_prefix=args.keep_prefix,
install_deps=not args.ignore_deps,
verbose=not args.quiet,
keep_stage=True, # don't remove source dir for dev build.
dirty=args.dirty,
stop_before=args.before,
skip_patch=args.skip_patch,
stop_at=args.until)
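The tests value handed to do_install follows a bool-or-list convention: True runs tests for every package, while a list restricts testing to the named specs. A hypothetical consumer-side check (a sketch, not Spack's actual installer code):

def should_run_tests(pkg_name, tests):
    """Sketch of the bool-or-list convention behind do_install(tests=...)."""
    if tests is True:
        return True                # --test=all: test every package
    return bool(tests) and pkg_name in tests  # --test=root: named roots only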
# drop into the build environment of the package?


@@ -1,102 +0,0 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import shutil
import llnl.util.tty as tty
import spack.cmd
import spack.cmd.common.arguments as arguments
import spack.environment as ev
from spack.error import SpackError
description = "add a spec to an environment's dev-build information"
section = "environments"
level = "long"
def setup_parser(subparser):
subparser.add_argument(
'-p', '--path', help='Source location of package')
clone_group = subparser.add_mutually_exclusive_group()
clone_group.add_argument(
'--no-clone', action='store_false', dest='clone', default=None,
help='Do not clone. The package already exists at the source path')
clone_group.add_argument(
'--clone', action='store_true', dest='clone', default=None,
help='Clone the package even if the path already exists')
subparser.add_argument(
'-f', '--force',
help='Remove any files or directories that block cloning source code')
arguments.add_common_arguments(subparser, ['spec'])
def develop(parser, args):
env = ev.get_env(args, 'develop', required=True)
if not args.spec:
if args.clone is False:
raise SpackError("No spec provided to spack develop command")
# download all dev specs
for name, entry in env.dev_specs.items():
path = entry.get('path', name)
abspath = path if os.path.isabs(path) else os.path.join(
env.path, path)
if os.path.exists(abspath):
msg = "Skipping developer download of %s" % entry['spec']
msg += " because its path already exists."
tty.msg(msg)
continue
stage = spack.spec.Spec(entry['spec']).package.stage
stage.steal_source(abspath)
if not env.dev_specs:
tty.warn("No develop specs to download")
return
specs = spack.cmd.parse_specs(args.spec)
if len(specs) > 1:
raise SpackError("spack develop requires at most one named spec")
spec = specs[0]
if not spec.versions.concrete:
raise SpackError("Packages to develop must have a concrete version")
# default path is relative path to spec.name
path = args.path or spec.name
# get absolute path to check
abspath = path
if not os.path.isabs(abspath):
abspath = os.path.join(env.path, path)
# clone default: only if the path doesn't exist
clone = args.clone
if clone is None:
clone = not os.path.exists(abspath)
if not clone and not os.path.exists(abspath):
raise SpackError("Provided path %s does not exist" % abspath)
if clone and os.path.exists(abspath):
if args.force:
shutil.rmtree(abspath)
else:
msg = "Path %s already exists and cannot be cloned to." % abspath
msg += " Use `spack develop -f` to overwrite."
raise SpackError(msg)
with env.write_transaction():
changed = env.develop(spec, path, clone)
if changed:
env.write()
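The clone handling above reduces to a small policy over the requested clone flag, path existence, and --force. A standalone sketch with hypothetical names:

import os
import shutil

def decide_clone(requested, abspath, force):
    """Sketch of the `spack develop` clone policy (names hypothetical)."""
    # Default: clone only when the path does not already exist.
    clone = requested if requested is not None else not os.path.exists(abspath)
    if not clone and not os.path.exists(abspath):
        raise ValueError('Provided path %s does not exist' % abspath)
    if clone and os.path.exists(abspath):
        if not force:
            raise ValueError('Path %s already exists and cannot be '
                             'cloned to; use --force to overwrite' % abspath)
        shutil.rmtree(abspath)
    return clone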

Some files were not shown because too many files have changed in this diff.