Compare commits

164 Commits

Author SHA1 Message Date
Carson Woods  c7ffdf5367  Merge branch 'develop' into releases/v0.13.4  2020-02-12 11:58:23 -05:00
Carson Woods  3f954b8157  Merge branch 'features/shared' into releases/v0.13.4  2020-02-12 11:32:06 -05:00
Carson Woods  cdcd3dcedd  Merge branch 'develop' into features/shared  2020-01-27 15:28:28 -05:00
Carson Woods  7c1083916a  Fix bash completion script  2020-01-24 11:36:26 -05:00
Carson Woods  c07bbe1a25  Fix flake8 error  2020-01-24 11:00:08 -05:00
Carson Woods  85032c6224  Resolve merge conflicts with develop  2020-01-24 10:41:44 -05:00
Carson Woods  7b7898a69c  Merge branch 'develop' into features/shared  2020-01-21 18:55:21 -05:00
Carson Woods  84c5d76eae  Merge branch 'develop' into features/shared  2020-01-18 13:57:50 -08:00
Carson Woods  bcd47f0bd6  Merge branch 'develop' into features/shared  2020-01-17 14:32:47 -08:00
Carson Woods  cb6a959cdb  Merge branch 'develop' into features/shared  2020-01-15 14:41:14 -05:00
Carson Woods  32cd12bff7  Merge branch 'develop' into features/shared  2020-01-10 16:19:37 -08:00
Carson Woods  7021965159  Fix merge conflicts and repair broken unit test.  2020-01-09 20:12:39 -08:00
Carson Woods  5c5743ca33  Merge branch 'develop' into features/shared to support Spack 0.13.3  2019-12-26 21:00:09 -06:00
Carson Woods  034a7662ac  Merge branch 'develop' into features/shared  2019-11-21 12:52:24 -07:00
Carson Woods  e6b6ac5898  Fixed error message to use proper --upstream rather than -u  2019-11-21 12:30:15 -07:00
Carson Woods  35037bf088  Merge branch 'develop' into features/shared  2019-11-17 16:37:47 -07:00
Carson Woods  d14c245411  Merge branch 'develop' into features/shared  2019-11-10 22:05:20 -05:00
Carson Woods  6e2ad01f20  Fix flake8 formatting  2019-11-06 13:25:15 -05:00
Carson Woods  ef9b5a8f74  Fix unit test failing  2019-11-06 13:24:10 -05:00
Carson Woods  4921ed29d5  Fix a few broken unit tests  2019-11-06 09:56:22 -05:00
Carson Woods  f4c720e902  Ensure feature supports Spack version 0.13.0-0.13.1  2019-11-05 16:38:18 -05:00
Carson Woods  0a71b1d5ac  Merge branch 'develop' into features/shared  2019-10-31 21:29:33 -04:00
Carson Woods  3593a7be6a  Better comment the purpose of new unit tests  2019-09-20 19:05:56 -04:00
Carson Woods  e4d2cf4441  Fix flake8 error to avoid failing CI testing  2019-09-20 15:29:46 -04:00
Carson Woods  911e51bd89  Merge branch 'develop' into features/shared  2019-09-20 15:28:44 -04:00
              (Done to resolve merge conflicts that had arisen since work on this feature completed.)
Carson Woods  6ec8aea6f7  Rebase branch 'features/shared' of github.com:carsonwoods/spack against develop  2019-08-07 18:57:48 -06:00
Carson Woods  5b11f7aa4c  Fix bug where environments would ignore global path  2019-08-07 18:32:28 -06:00
Carson Woods  97e46981b9  Remove old doc from doc index  2019-08-07 18:32:28 -06:00
Carson Woods  873ac5e890  Remove old documentation for shared mode  2019-08-07 18:32:28 -06:00
Carson Woods  4d7dae5716  Remove old share command from tests  2019-08-07 18:32:28 -06:00
Carson Woods  b19f0fafcc  Remove outdate share command  2019-08-07 18:32:28 -06:00
Carson Woods  11b1bdd119  Pep8 Compliance Fix  2019-08-07 18:32:28 -06:00
Carson Woods  f749821dc2  Pep8 Compliance  2019-08-07 18:32:28 -06:00
Carson Woods  5abb20dcab  Rename test  2019-08-07 18:32:28 -06:00
Carson Woods  0c233bdd0f  Add test for validating upstream database initialization  2019-08-07 18:32:28 -06:00
Carson Woods  0f171c7ded  Replace space with = in command parameter  2019-08-07 18:32:28 -06:00
Carson Woods  b4c7520dd8  Flake8 Test Compliance  2019-08-07 18:32:28 -06:00
Carson Woods  9ab7d8f01d  Add config parameter for active upstream to set install location for modules  2019-08-07 18:32:28 -06:00
Carson Woods  a7ad344c2a  Add upstreams.yaml testing file so calls to upstreams['global] doesn't cause tests to fail  2019-08-07 18:32:28 -06:00
Carson Woods  deb2d3745c  Fix .spack-db/index.json not being created in global upstream if previously uninitialized  2019-08-07 18:32:28 -06:00
Carson Woods  ff96ec430b  Can now specify upstream of anyname through -u/--upstream flag  2019-08-07 18:32:28 -06:00
Carson Woods  d4a959736a  Flake8 Compliance Changes  2019-08-07 18:32:28 -06:00
Carson Woods  5ba51a0be0  --global option now works for both install and uninstall  2019-08-07 18:32:28 -06:00
Carson Woods  27e1140df7  Reset active directory after each global install  2019-08-07 18:32:28 -06:00
Carson Woods  7ab6af8a3b  Add scope to setting active tree to ensure that it is set at user level  2019-08-07 18:32:28 -06:00
Carson Woods  0e6e93eaac  Fix unit test config.yaml  2019-08-07 18:32:28 -06:00
Carson Woods  38f8bdd2bb  Home expansion was removed because it was no longer being used  2019-08-07 18:32:27 -06:00
Carson Woods  8e45a3fc2f  Fix flake8 compliance  2019-08-07 18:32:27 -06:00
Carson Woods  c22af99b04  Fix how upstream db paths are canonicalized  2019-08-07 18:32:27 -06:00
Carson Woods  fc3a909fbc  Set staging location to ~/.spack/var  2019-08-07 18:32:27 -06:00
Carson Woods  9665754eae  Fix default install tree  2019-08-07 18:32:27 -06:00
Carson Woods  0f9f9f3a85  Revise default var path  2019-08-07 18:32:27 -06:00
Carson Woods  777a5682a6  Fix default install location  2019-08-07 18:32:27 -06:00
Carson Woods  8994b4aab6  Fix flake8 compliance  2019-08-07 18:32:27 -06:00
Carson Woods  98ec366470  Set root of store object to active tree  2019-08-07 18:32:27 -06:00
Carson Woods  c61f4d7c82  Add logic to set the active install tree  2019-08-07 18:32:27 -06:00
Carson Woods  811b304230  Remove old code  2019-08-07 18:32:27 -06:00
Carson Woods  8f0c9ad409  Change name of global parameter to install_global  2019-08-07 18:32:27 -06:00
Carson Woods  6a423a5d8a  Typo fix  2019-08-07 18:32:27 -06:00
Carson Woods  23c37063bd  Add default global upstream of /opt/spack  2019-08-07 18:32:27 -06:00
Carson Woods  478f3a5a99  Fix whitespace issue  2019-08-07 18:32:27 -06:00
Carson Woods  02afb30990  Remove unit testing for shared spack mode  2019-08-07 18:32:27 -06:00
Carson Woods  06e3f15e47  Remove old shared spack code  2019-08-07 18:32:27 -06:00
Carson Woods  f13ce3540d  Add dest name of install_global to --global parameter  2019-08-07 18:32:27 -06:00
Carson Woods  7ae34087e3  Set remove old shared spack code  2019-08-07 18:32:27 -06:00
Carson Woods  f0fea97e88  Set source_cache to user's ~/.spack directory  2019-08-07 18:32:27 -06:00
Carson Woods  54893197ed  Set staging location to be based out of users .spack directory  2019-08-07 18:32:27 -06:00
Carson Woods  80da1d50d1  Make var_path point to ~/.spack/var/spack  2019-08-07 18:32:27 -06:00
Carson Woods  944c5d75cd  Add --global flag to install cmd to install to globally accessible location  2019-08-07 18:32:27 -06:00
Carson Woods  9ef4bc9d50  Add macro for expanding home directory  2019-08-07 18:32:27 -06:00
Carson Woods  a2af432833  Temporarily disable module file location overrride while feature is being implemented  2019-08-07 18:32:27 -06:00
Carson Woods  aefed311af  Change modulefiles install location  2019-08-07 18:32:27 -06:00
Carson Woods  6ffacddcf4  Change default install tree to user's ~/.spack directory  2019-08-07 18:32:27 -06:00
Carson Woods  e17824f82f  Remove shared mode set self as upstream  2019-08-07 18:32:27 -06:00
Carson Woods  57ca47f035  Remove testing for shared mode  2019-08-07 18:32:27 -06:00
Carson Woods  4532a56b4e  Remove shared disable from unit testing  2019-08-07 18:32:27 -06:00
Carson Woods  86e69a48a2  Fix flake8 error  2019-08-07 18:32:27 -06:00
Carson Woods  2508295d81  Fix error caused by SPACK_PATH environment variable not existing  2019-08-07 18:32:27 -06:00
Carson Woods  1a041c051a  Fix flake8 error  2019-08-07 18:32:27 -06:00
Carson Woods  2262ca2e67  Add test for install in shared mode  2019-08-07 18:32:27 -06:00
Carson Woods  2269771a91  Fix typo  2019-08-07 18:32:27 -06:00
Carson Woods  7f32574dd8  Fix shared cmd test file  2019-08-07 18:32:27 -06:00
Carson Woods  d15ac30f62  Add shared to toctree  2019-08-07 18:32:27 -06:00
Carson Woods  1f41347ab8  Share feature Unit testing  2019-08-07 18:32:27 -06:00
Carson Woods  1f4f01103b  Add command interface for share feature  2019-08-07 18:32:27 -06:00
Carson Woods  8f46fcb512  When running tests, disable shared mode because it will break other tests. Custom tests must be written  2019-08-07 18:32:27 -06:00
Carson Woods  2d3b973ebc  When shared mode is active store installed packages in SPACK_PATH  2019-08-07 18:32:27 -06:00
Carson Woods  7e62e0f27f  When shared mode is active set stage path to SPACK_PATH  2019-08-07 18:32:27 -06:00
Carson Woods  ea0db4c0f9  Prevent packages from being installed upstream  2019-08-07 18:32:27 -06:00
Carson Woods  0afc68e60b  Change module root path when shared mode is active  2019-08-07 18:32:27 -06:00
Carson Woods  8ad25d5013  Uninstall from SPACK_PATH when shared mode is active  2019-08-07 18:32:27 -06:00
Carson Woods  e90db68321  Install to SPACK_PATH when shared mode is active  2019-08-07 18:32:27 -06:00
Carson Woods  9e96b89f02  Add documentation for spack share command  2019-08-07 18:32:27 -06:00
Carson Woods  b4dae1b7fd  When shared mode is active, spack treats the normal install directory as an upstream  2019-08-07 18:32:27 -06:00
Carson Woods  9e9adf1d2f  When shared mode is active, set cache location to SPACK_PATH  2019-08-07 18:32:27 -06:00
Carson Woods  de9255247a  Fix bug where environments would ignore global path  2019-08-06 17:49:17 -06:00
Carson Woods  de5d3e3229  Remove old doc from doc index  2019-07-26 08:54:12 -06:00
Carson Woods  e621aafc77  Remove old documentation for shared mode  2019-07-25 16:40:00 -06:00
Carson Woods  c53427c98d  Remove old share command from tests  2019-07-25 14:22:43 -06:00
Carson Woods  7a75148d1b  Remove outdate share command  2019-07-25 13:32:44 -06:00
Carson Woods  4210520c9d  Pep8 Compliance Fix  2019-07-25 13:32:44 -06:00
Carson Woods  4f3fb50ae7  Pep8 Compliance  2019-07-25 13:32:44 -06:00
Carson Woods  7660659107  Rename test  2019-07-25 13:32:44 -06:00
Carson Woods  fcca2a518b  Add test for validating upstream database initialization  2019-07-25 13:32:44 -06:00
Carson Woods  23e1cd7775  Replace space with = in command parameter  2019-07-25 13:32:44 -06:00
Carson Woods  58e794e95a  Flake8 Test Compliance  2019-07-25 13:32:44 -06:00
Carson Woods  7ed59ed835  Add config parameter for active upstream to set install location for modules  2019-07-25 13:32:43 -06:00
Carson Woods  512726ae5b  Add upstreams.yaml testing file so calls to upstreams['global] doesn't cause tests to fail  2019-07-25 13:32:43 -06:00
Carson Woods  20851a6e6c  Fix .spack-db/index.json not being created in global upstream if previously uninitialized  2019-07-25 13:32:43 -06:00
Carson Woods  92bbbb9659  Can now specify upstream of anyname through -u/--upstream flag  2019-07-25 13:32:43 -06:00
Carson Woods  5f2f2bfb84  Flake8 Compliance Changes  2019-07-25 13:32:43 -06:00
Carson Woods  9b63f72d6b  --global option now works for both install and uninstall  2019-07-25 13:32:43 -06:00
Carson Woods  4c60f01bae  Reset active directory after each global install  2019-07-25 13:32:43 -06:00
Carson Woods  cd08308463  Add scope to setting active tree to ensure that it is set at user level  2019-07-25 13:32:43 -06:00
Carson Woods  fe69997043  Fix unit test config.yaml  2019-07-25 13:32:43 -06:00
Carson Woods  1584a6e3c6  Home expansion was removed because it was no longer being used  2019-07-25 13:32:43 -06:00
Carson Woods  c393880852  Fix flake8 compliance  2019-07-25 13:32:43 -06:00
Carson Woods  bbe9e6bf54  Fix how upstream db paths are canonicalized  2019-07-25 13:32:43 -06:00
Carson Woods  d7a00b71d4  Set staging location to ~/.spack/var  2019-07-25 13:32:43 -06:00
Carson Woods  6775d2546a  Fix default install tree  2019-07-25 13:32:43 -06:00
Carson Woods  8a154333f2  Revise default var path  2019-07-25 13:32:43 -06:00
Carson Woods  5e637a04fd  Fix default install location  2019-07-25 13:32:43 -06:00
Carson Woods  0213869439  Fix flake8 compliance  2019-07-25 13:32:43 -06:00
Carson Woods  22e9a9792a  Set root of store object to active tree  2019-07-25 13:32:43 -06:00
Carson Woods  4f23da9d26  Add logic to set the active install tree  2019-07-25 13:32:43 -06:00
Carson Woods  f9430e2fd4  Remove old code  2019-07-25 13:32:43 -06:00
Carson Woods  a2f86d5d18  Change name of global parameter to install_global  2019-07-25 13:32:43 -06:00
Carson Woods  0efab6637c  Typo fix  2019-07-25 13:32:43 -06:00
Carson Woods  2b11694b94  Add default global upstream of /opt/spack  2019-07-25 13:32:43 -06:00
Carson Woods  088798a727  Fix whitespace issue  2019-07-25 13:32:43 -06:00
Carson Woods  bddbb1c22e  Remove unit testing for shared spack mode  2019-07-25 13:32:42 -06:00
Carson Woods  92f447cf1c  Remove old shared spack code  2019-07-25 13:32:42 -06:00
Carson Woods  96f266c3e3  Add dest name of install_global to --global parameter  2019-07-25 13:32:42 -06:00
Carson Woods  d5093c20c5  Set remove old shared spack code  2019-07-25 13:32:42 -06:00
Carson Woods  2064241c37  Set source_cache to user's ~/.spack directory  2019-07-25 13:32:42 -06:00
Carson Woods  721742b764  Set staging location to be based out of users .spack directory  2019-07-25 13:32:42 -06:00
Carson Woods  c45bf153d8  Make var_path point to ~/.spack/var/spack  2019-07-25 13:32:42 -06:00
Carson Woods  b98e5e66e7  Add --global flag to install cmd to install to globally accessible location  2019-07-25 13:32:42 -06:00
Carson Woods  3d18bf345f  Add macro for expanding home directory  2019-07-25 13:32:42 -06:00
Carson Woods  f8e9cf4081  Temporarily disable module file location overrride while feature is being implemented  2019-07-25 13:32:42 -06:00
Carson Woods  98e0f8b89b  Change modulefiles install location  2019-07-25 13:32:42 -06:00
Carson Woods  263275b7ea  Change default install tree to user's ~/.spack directory  2019-07-25 13:32:42 -06:00
Carson Woods  3e13002d7f  Remove shared mode set self as upstream  2019-07-25 13:32:42 -06:00
Carson Woods  654e5cc924  Remove testing for shared mode  2019-07-25 13:32:42 -06:00
Carson Woods  04a72c1834  Remove shared disable from unit testing  2019-07-25 13:32:42 -06:00
Carson Woods  53cf6eb194  Fix flake8 error  2019-07-25 13:32:42 -06:00
Carson Woods  5a7f186176  Fix error caused by SPACK_PATH environment variable not existing  2019-07-25 13:32:42 -06:00
Carson Woods  987adfa9c9  Fix flake8 error  2019-07-25 13:32:42 -06:00
Carson Woods  e476bb1400  Add test for install in shared mode  2019-07-25 13:32:42 -06:00
Carson Woods  dc12233610  Fix typo  2019-07-25 13:32:42 -06:00
Carson Woods  29d21a0a5d  Fix shared cmd test file  2019-07-25 13:32:42 -06:00
Carson Woods  762f505da5  Add shared to toctree  2019-07-25 13:32:42 -06:00
Carson Woods  8e1c326174  Share feature Unit testing  2019-07-25 13:32:42 -06:00
Carson Woods  0bac5d527d  Add command interface for share feature  2019-07-25 13:32:42 -06:00
Carson Woods  79256eeb5c  When running tests, disable shared mode because it will break other tests. Custom tests must be written  2019-07-25 13:32:42 -06:00
Carson Woods  de760942f2  When shared mode is active store installed packages in SPACK_PATH  2019-07-25 13:32:41 -06:00
Carson Woods  860641bfab  When shared mode is active set stage path to SPACK_PATH  2019-07-25 13:32:41 -06:00
Carson Woods  673e55f14d  Prevent packages from being installed upstream  2019-07-25 13:32:41 -06:00
Carson Woods  54777a4f3e  Change module root path when shared mode is active  2019-07-25 13:32:41 -06:00
Carson Woods  db36e66592  Uninstall from SPACK_PATH when shared mode is active  2019-07-25 13:32:41 -06:00
Carson Woods  0d36e94407  Install to SPACK_PATH when shared mode is active  2019-07-25 13:32:41 -06:00
Carson Woods  92c3b5b8b2  Add documentation for spack share command  2019-07-25 13:32:41 -06:00
Carson Woods  71220a3656  When shared mode is active, spack treats the normal install directory as an upstream  2019-07-25 13:32:41 -06:00
Carson Woods  09bd29d816  When shared mode is active, set cache location to SPACK_PATH  2019-07-25 13:32:41 -06:00
369 changed files with 1761 additions and 17176 deletions

View File

@@ -1,100 +1,3 @@
# v0.14.0 (2020-02-23)
`v0.14.0` is a major feature release, with 3 highlighted features:
1. **Distributed builds.** Multiple Spack instances will now coordinate
properly with each other through locks. This works on a single node
(where you've called `spack` several times) or across multiple nodes
with a shared filesystem. For example, with SLURM, you could build
`trilinos` and its dependencies on 2 24-core nodes, with 3 Spack
instances per node and 8 build jobs per instance, with `srun -N 2 -n 6
spack install -j 8 trilinos`. This requires a filesystem with locking
enabled, but not MPI or any other library for parallelism.
2. **Build pipelines.** You can also build in parallel through Gitlab
CI. Simply create a Spack environment and push it to Gitlab to build
on Gitlab runners. Pipeline support is now integrated into a single
`spack ci` command, so setting it up is easier than ever. See the
[Pipelines section](https://spack.readthedocs.io/en/v0.14.0/pipelines.html)
in the docs.
3. **Container builds.** The new `spack containerize` command allows you
to create a Docker or Singularity recipe from any Spack environment.
There are options to customize the build if you need them. See the
[Container Images section](https://spack.readthedocs.io/en/latest/containers.html)
in the docs.
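For instance, with an environment's `spack.yaml` in the current directory, the
generated recipe can be redirected straight to a file (a usage sketch based on
the docs linked above):

    $ spack containerize > Dockerfile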
In addition, there are several other new commands, many bugfixes and
improvements, and `spack load` no longer requires modules, so you can use
it the same way on your laptop or on your supercomputer.
Spack grew by over 300 packages since our last release in November 2019,
and the project grew to over 500 contributors. Thanks to all of you for
making yet another great release possible. Detailed notes below.
## Major new core features
* Distributed builds: spack instances coordinate and build in parallel (#13100)
* New `spack ci` command to manage CI pipelines (#12854)
* Generate container recipes from environments: `spack containerize` (#14202)
* `spack load` now works without using modules (#14062, #14628)
* Garbage collect old/unused installations with `spack gc` (#13534)
* Configuration files all set environment modifications the same way (#14372,
[docs](https://spack.readthedocs.io/en/v0.14.0/configuration.html#environment-modifications))
* `spack commands --format=bash` auto-generates completion (#14393, #14607)
* Packages can specify alternate fetch URLs in case one fails (#13881)
## Improvements
* Improved locking for concurrency with environments (#14676, #14621, #14692)
* `spack test` sends args to `pytest`, supports better listing (#14319)
* Better support for aarch64 and cascadelake microarch (#13825, #13780, #13820)
* Archspec is now a separate library (see https://github.com/archspec/archspec)
* Many improvements to the `spack buildcache` command (#14237, #14346,
#14466, #14467, #14639, #14642, #14659, #14696, #14698, #14714, #14732,
#14929, #15003, #15086, #15134)
## Selected Bugfixes
* Compilers now require an exact match on version (#8735, #14730, #14752)
* Bugfix for patches that specified specific versions (#13989)
* `spack find -p` now works in environments (#10019, #13972)
* Dependency queries work correctly in `spack find` (#14757)
* Bugfixes for locking upstream Spack instances chains (#13364)
* Fixes for PowerPC clang optimization flags (#14196)
* Fix for issue with compilers and specific microarchitectures (#13733, #14798)
## New commands and options
* `spack ci` (#12854)
* `spack containerize` (#14202)
* `spack gc` (#13534)
* `spack load` accepts `--only package`, `--only dependencies` (#14062, #14628)
* `spack commands --format=bash` (#14393)
* `spack commands --update-completion` (#14607)
* `spack install --with-cache` has new option: `--no-check-signature` (#11107)
* `spack test` now has `--list`, `--list-long`, and `--list-names` (#14319)
* `spack install --help-cdash` moves CDash help out of the main help (#13704)
## Deprecations
* `spack release-jobs` has been rolled into `spack ci`
* `spack bootstrap` will be removed in a future version, as it is no longer
needed to set up modules (see `spack load` improvements above)
## Documentation
* New section on building container images with Spack (see
[docs](https://spack.readthedocs.io/en/latest/containers.html))
* New section on using `spack ci` command to build pipelines (see
[docs](https://spack.readthedocs.io/en/latest/pipelines.html))
* Document how to add conditional dependencies (#14694)
* Document how to use Spack to replace Homebrew/Conda (#13083, see
[docs](https://spack.readthedocs.io/en/latest/workflows.html#using-spack-to-replace-homebrew-conda))
## Important package changes
* 3,908 total packages (345 added since 0.13.0)
* Added first cut at a TensorFlow package (#13112)
* We now build R without "recommended" packages, manage them w/Spack (#12015)
* Elpa and OpenBLAS now leverage microarchitecture support (#13655, #14380)
* Fix `octave` compiler wrapper usage (#14726)
* Enforce that packages in `builtin` aren't missing dependencies (#13949)
# v0.13.4 (2020-02-07)
This release contains several bugfixes:

View File

@@ -16,7 +16,7 @@
config:
# This is the path to the root of the Spack install tree.
# You can use $spack here to refer to the root of the spack instance.
install_tree: $spack/opt/spack
install_tree: ~/.spack/opt/spack
# Locations where templates should be found
@@ -30,8 +30,8 @@ config:
# Locations where different types of modules should be installed.
module_roots:
tcl: $spack/share/spack/modules
lmod: $spack/share/spack/lmod
tcl: ~/.spack/share/spack/modules
lmod: ~/.spack/share/spack/lmod
# Temporary locations Spack can try to use for builds.
@@ -67,7 +67,7 @@ config:
# Cache directory for already downloaded source tarballs and archived
# repositories. This can be purged with `spack clean --downloads`.
source_cache: $spack/var/spack/cache
source_cache: ~/.spack/var/spack/cache
# Cache directory for miscellaneous files, like the package index.
@@ -137,7 +137,7 @@ config:
# when Spack needs to manage its own package metadata and all operations are
# expected to complete within the default time limit. The timeout should
# therefore generally be left untouched.
db_lock_timeout: 3
db_lock_timeout: 120
# How long to wait when attempting to modify a package (e.g. to install it).

View File

@@ -43,7 +43,6 @@ packages:
szip: [libszip, libaec]
tbb: [intel-tbb]
unwind: [libunwind]
sycl: [hipsycl]
permissions:
read: world
write: user

View File

@@ -0,0 +1,7 @@
upstreams:
global:
install_tree: $spack/opt/spack
modules:
tcl: $spack/share/spack/modules
lmod: $spack/share/spack/lmod
dotkit: $spack/share/spack/dotkit
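This new `upstreams.yaml` is what the `global` install target resolves against.
A simplified sketch of that lookup, mirroring the logic in the
`spack/cmd/install.py` hunk later in this diff (the `spack.config` and
`spack.util.path` calls are the ones shown there, not a new API):

    import spack.config
    import spack.util.path

    # Look up the named upstream's install tree ('$spack/opt/spack' above)
    # and expand the $spack placeholder to an absolute path.
    global_root = spack.config.get('upstreams')['global']['install_tree']
    global_root = spack.util.path.canonicalize_path(global_root)

    # Record it as the active tree at user scope, as `spack install --global` does.
    spack.config.set('config:active_tree', global_root, scope='user')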

View File

@@ -58,9 +58,9 @@ directory. Here's an example of an external configuration:
packages:
openmpi:
paths:
openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64: /opt/openmpi-1.4.3
openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug: /opt/openmpi-1.4.3-debug
openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64: /opt/openmpi-1.6.5-intel
openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7: /opt/openmpi-1.4.3
openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7+debug: /opt/openmpi-1.4.3-debug
openmpi@1.6.5%intel@10.1 arch=linux-x86_64-debian7: /opt/openmpi-1.6.5-intel
This example lists three installations of OpenMPI, one built with GCC,
one built with GCC and debug information, and another built with Intel.
@@ -107,9 +107,9 @@ be:
packages:
openmpi:
paths:
openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64: /opt/openmpi-1.4.3
openmpi@1.4.3%gcc@4.4.7 arch=linux-debian7-x86_64+debug: /opt/openmpi-1.4.3-debug
openmpi@1.6.5%intel@10.1 arch=linux-debian7-x86_64: /opt/openmpi-1.6.5-intel
openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7: /opt/openmpi-1.4.3
openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7+debug: /opt/openmpi-1.4.3-debug
openmpi@1.6.5%intel@10.1 arch=linux-x86_64-debian7: /opt/openmpi-1.6.5-intel
buildable: False
The addition of the ``buildable`` flag tells Spack that it should never build

View File

@@ -4454,7 +4454,7 @@ translate variant flags into CMake definitions. For example:
.. code-block:: python
def cmake_args(self):
def configure_args(self):
spec = self.spec
return [
'-DUSE_EVERYTRACE=%s' % ('YES' if '+everytrace' in spec else 'NO'),

View File

@@ -456,7 +456,7 @@ def copy_tree(src, dest, symlinks=True, ignore=None, _permissions=False):
if os.path.isdir(s):
mkdirp(d)
else:
shutil.copy2(s, d)
shutil.copyfile(s, d)
if _permissions:
set_install_permissions(d)
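For context on the `shutil.copy2`/`shutil.copyfile` difference above: `copy2`
copies permission bits and timestamps along with file contents, while
`copyfile` copies contents only. A minimal, self-contained illustration (file
names are hypothetical):

    import os
    import shutil

    # Create a source file with distinctive permissions.
    with open('src.txt', 'w') as f:
        f.write('payload')
    os.chmod('src.txt', 0o750)

    shutil.copyfile('src.txt', 'contents_only.txt')  # contents only
    shutil.copy2('src.txt', 'with_metadata.txt')     # contents + mode/mtimes

    print(oct(os.stat('contents_only.txt').st_mode & 0o777))  # default from umask
    print(oct(os.stat('with_metadata.txt').st_mode & 0o777))  # 0o750, copied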

View File

@@ -8,32 +8,14 @@
import errno
import time
import socket
from datetime import datetime
import llnl.util.tty as tty
import spack.util.string
__all__ = ['Lock', 'LockTransaction', 'WriteTransaction', 'ReadTransaction',
'LockError', 'LockTimeoutError',
'LockPermissionError', 'LockROFileError', 'CantCreateLockError']
#: Mapping of supported locks to description
lock_type = {fcntl.LOCK_SH: 'read', fcntl.LOCK_EX: 'write'}
#: A useful replacement for functions that should return True when not provided
#: for example.
true_fn = lambda: True
def _attempts_str(wait_time, nattempts):
# Don't print anything if we succeeded on the first try
if nattempts <= 1:
return ''
attempts = spack.util.string.plural(nattempts, 'attempt')
return ' after {0:0.2f}s and {1}'.format(wait_time, attempts)
class Lock(object):
"""This is an implementation of a filesystem lock using Python's lockf.
@@ -49,8 +31,8 @@ class Lock(object):
maintain multiple locks on the same file.
"""
def __init__(self, path, start=0, length=0, default_timeout=None,
debug=False, desc=''):
def __init__(self, path, start=0, length=0, debug=False,
default_timeout=None):
"""Construct a new lock on the file at ``path``.
By default, the lock applies to the whole file. Optionally,
@@ -61,16 +43,6 @@ def __init__(self, path, start=0, length=0, default_timeout=None,
not currently expose the ``whence`` parameter -- ``whence`` is
always ``os.SEEK_SET`` and ``start`` is always evaluated from the
beginning of the file.
Args:
path (str): path to the lock
start (int): optional byte offset at which the lock starts
length (int): optional number of bytes to lock
default_timeout (int): number of seconds to wait for lock attempts,
where None means to wait indefinitely
debug (bool): debug mode specific to locking
desc (str): optional debug message lock description, which is
helpful for distinguishing between different Spack locks.
"""
self.path = path
self._file = None
@@ -84,9 +56,6 @@ def __init__(self, path, start=0, length=0, default_timeout=None,
# enable debug mode
self.debug = debug
# optional debug description
self.desc = ' ({0})'.format(desc) if desc else ''
# If the user doesn't set a default timeout, or if they choose
# None, 0, etc. then lock attempts will not time out (unless the
# user sets a timeout for each attempt)
@@ -120,20 +89,6 @@ def _poll_interval_generator(_wait_times=None):
num_requests += 1
yield wait_time
def __repr__(self):
"""Formal representation of the lock."""
rep = '{0}('.format(self.__class__.__name__)
for attr, value in self.__dict__.items():
rep += '{0}={1}, '.format(attr, value.__repr__())
return '{0})'.format(rep.strip(', '))
def __str__(self):
"""Readable string (with key fields) of the lock."""
location = '{0}[{1}:{2}]'.format(self.path, self._start, self._length)
timeout = 'timeout={0}'.format(self.default_timeout)
activity = '#reads={0}, #writes={1}'.format(self._reads, self._writes)
return '({0}, {1}, {2})'.format(location, timeout, activity)
def _lock(self, op, timeout=None):
"""This takes a lock using POSIX locks (``fcntl.lockf``).
@@ -144,9 +99,8 @@ def _lock(self, op, timeout=None):
successfully acquired, the total wait time and the number of attempts
is returned.
"""
assert op in lock_type
assert op in (fcntl.LOCK_SH, fcntl.LOCK_EX)
self._log_acquiring('{0} LOCK'.format(lock_type[op].upper()))
timeout = timeout or self.default_timeout
# Create file and parent directories if they don't exist.
@@ -174,9 +128,6 @@ def _lock(self, op, timeout=None):
# If the file were writable, we'd have opened it 'r+'
raise LockROFileError(self.path)
tty.debug("{0} locking [{1}:{2}]: timeout {3} sec"
.format(lock_type[op], self._start, self._length, timeout))
poll_intervals = iter(Lock._poll_interval_generator())
start_time = time.time()
num_attempts = 0
@@ -188,21 +139,17 @@ def _lock(self, op, timeout=None):
time.sleep(next(poll_intervals))
# TBD: Is an extra attempt after timeout needed/appropriate?
num_attempts += 1
if self._poll_lock(op):
total_wait_time = time.time() - start_time
return total_wait_time, num_attempts
raise LockTimeoutError("Timed out waiting for a {0} lock."
.format(lock_type[op]))
raise LockTimeoutError("Timed out waiting for lock.")
def _poll_lock(self, op):
"""Attempt to acquire the lock in a non-blocking manner. Return whether
the locking attempt succeeds
"""
assert op in lock_type
try:
# Try to get the lock (will raise if not available.)
fcntl.lockf(self._file, op | fcntl.LOCK_NB,
@@ -212,9 +159,6 @@ def _poll_lock(self, op):
if self.debug:
# All locks read the owner PID and host
self._read_debug_data()
tty.debug('{0} locked {1} [{2}:{3}] (owner={4})'
.format(lock_type[op], self.path,
self._start, self._length, self.pid))
# Exclusive locks write their PID/host
if op == fcntl.LOCK_EX:
@@ -223,12 +167,12 @@ def _poll_lock(self, op):
return True
except IOError as e:
# EAGAIN and EACCES == locked by another process (so try again)
if e.errno not in (errno.EAGAIN, errno.EACCES):
if e.errno in (errno.EAGAIN, errno.EACCES):
# EAGAIN and EACCES == locked by another process
pass
else:
raise
return False
def _ensure_parent_directory(self):
parent = os.path.dirname(self.path)
@@ -283,8 +227,6 @@ def _unlock(self):
self._length, self._start, os.SEEK_SET)
self._file.close()
self._file = None
self._reads = 0
self._writes = 0
def acquire_read(self, timeout=None):
"""Acquires a recursive, shared lock for reading.
@@ -300,14 +242,15 @@ def acquire_read(self, timeout=None):
timeout = timeout or self.default_timeout
if self._reads == 0 and self._writes == 0:
self._debug(
'READ LOCK: {0.path}[{0._start}:{0._length}] [Acquiring]'
.format(self))
# can raise LockError.
wait_time, nattempts = self._lock(fcntl.LOCK_SH, timeout=timeout)
self._acquired_debug('READ LOCK', wait_time, nattempts)
self._reads += 1
# Log if acquired, which includes counts when verbose
self._log_acquired('READ LOCK', wait_time, nattempts)
return True
else:
# Increment the read count for nested lock tracking
self._reads += 1
return False
@@ -325,11 +268,13 @@ def acquire_write(self, timeout=None):
timeout = timeout or self.default_timeout
if self._writes == 0:
self._debug(
'WRITE LOCK: {0.path}[{0._start}:{0._length}] [Acquiring]'
.format(self))
# can raise LockError.
wait_time, nattempts = self._lock(fcntl.LOCK_EX, timeout=timeout)
self._acquired_debug('WRITE LOCK', wait_time, nattempts)
self._writes += 1
# Log if acquired, which includes counts when verbose
self._log_acquired('WRITE LOCK', wait_time, nattempts)
# return True only if we weren't nested in a read lock.
# TODO: we may need to return two values: whether we got
@@ -337,65 +282,9 @@ def acquire_write(self, timeout=None):
# write lock for the first time. Now it returns the latter.
return self._reads == 0
else:
# Increment the write count for nested lock tracking
self._writes += 1
return False
def is_write_locked(self):
"""Check if the file is write locked
Return:
(bool): ``True`` if the path is write locked, otherwise, ``False``
"""
try:
self.acquire_read()
# If we have a read lock then no other process has a write lock.
self.release_read()
except LockTimeoutError:
# Another process is holding a write lock on the file
return True
return False
def downgrade_write_to_read(self, timeout=None):
"""
Downgrade from an exclusive write lock to a shared read.
Raises:
LockDowngradeError: if this is an attempt at a nested transaction
"""
timeout = timeout or self.default_timeout
if self._writes == 1 and self._reads == 0:
self._log_downgrading()
# can raise LockError.
wait_time, nattempts = self._lock(fcntl.LOCK_SH, timeout=timeout)
self._reads = 1
self._writes = 0
self._log_downgraded(wait_time, nattempts)
else:
raise LockDowngradeError(self.path)
def upgrade_read_to_write(self, timeout=None):
"""
Attempts to upgrade from a shared read lock to an exclusive write.
Raises:
LockUpgradeError: if this is an attempt at a nested transaction
"""
timeout = timeout or self.default_timeout
if self._reads == 1 and self._writes == 0:
self._log_upgrading()
# can raise LockError.
wait_time, nattempts = self._lock(fcntl.LOCK_EX, timeout=timeout)
self._reads = 0
self._writes = 1
self._log_upgraded(wait_time, nattempts)
else:
raise LockUpgradeError(self.path)
def release_read(self, release_fn=None):
"""Releases a read lock.
@@ -416,17 +305,17 @@ def release_read(self, release_fn=None):
"""
assert self._reads > 0
locktype = 'READ LOCK'
if self._reads == 1 and self._writes == 0:
self._log_releasing(locktype)
self._debug(
'READ LOCK: {0.path}[{0._start}:{0._length}] [Released]'
.format(self))
# we need to call release_fn before releasing the lock
release_fn = release_fn or true_fn
result = release_fn()
result = True
if release_fn is not None:
result = release_fn()
self._unlock() # can raise LockError.
self._reads = 0
self._log_released(locktype)
self._reads -= 1
return result
else:
self._reads -= 1
@@ -450,91 +339,45 @@ def release_write(self, release_fn=None):
"""
assert self._writes > 0
release_fn = release_fn or true_fn
locktype = 'WRITE LOCK'
if self._writes == 1 and self._reads == 0:
self._log_releasing(locktype)
self._debug(
'WRITE LOCK: {0.path}[{0._start}:{0._length}] [Released]'
.format(self))
# we need to call release_fn before releasing the lock
result = release_fn()
result = True
if release_fn is not None:
result = release_fn()
self._unlock() # can raise LockError.
self._writes = 0
self._log_released(locktype)
self._writes -= 1
return result
else:
self._writes -= 1
# when the last *write* is released, we call release_fn here
# instead of immediately before releasing the lock.
if self._writes == 0:
return release_fn()
return release_fn() if release_fn is not None else True
else:
return False
def _debug(self, *args):
tty.debug(*args)
def _get_counts_desc(self):
return '(reads {0}, writes {1})'.format(self._reads, self._writes) \
if tty.is_verbose() else ''
def _log_acquired(self, locktype, wait_time, nattempts):
attempts_part = _attempts_str(wait_time, nattempts)
now = datetime.now()
desc = 'Acquired at %s' % now.strftime("%H:%M:%S.%f")
self._debug(self._status_msg(locktype, '{0}{1}'.
format(desc, attempts_part)))
def _log_acquiring(self, locktype):
self._debug2(self._status_msg(locktype, 'Acquiring'))
def _log_downgraded(self, wait_time, nattempts):
attempts_part = _attempts_str(wait_time, nattempts)
now = datetime.now()
desc = 'Downgraded at %s' % now.strftime("%H:%M:%S.%f")
self._debug(self._status_msg('READ LOCK', '{0}{1}'
.format(desc, attempts_part)))
def _log_downgrading(self):
self._debug2(self._status_msg('WRITE LOCK', 'Downgrading'))
def _log_released(self, locktype):
now = datetime.now()
desc = 'Released at %s' % now.strftime("%H:%M:%S.%f")
self._debug(self._status_msg(locktype, desc))
def _log_releasing(self, locktype):
self._debug2(self._status_msg(locktype, 'Releasing'))
def _log_upgraded(self, wait_time, nattempts):
attempts_part = _attempts_str(wait_time, nattempts)
now = datetime.now()
desc = 'Upgraded at %s' % now.strftime("%H:%M:%S.%f")
self._debug(self._status_msg('WRITE LOCK', '{0}{1}'.
format(desc, attempts_part)))
def _log_upgrading(self):
self._debug2(self._status_msg('READ LOCK', 'Upgrading'))
def _status_msg(self, locktype, status):
status_desc = '[{0}] {1}'.format(status, self._get_counts_desc())
return '{0}{1.desc}: {1.path}[{1._start}:{1._length}] {2}'.format(
locktype, self, status_desc)
def _debug2(self, *args):
# TODO: Easy place to make a single, temporary change to the
# TODO: debug level associated with the more detailed messages.
# TODO:
# TODO: Someday it would be great if we could switch this to
# TODO: another level, perhaps _between_ debug and verbose, or
# TODO: some other form of filtering so the first level of
# TODO: debugging doesn't have to generate these messages. Using
# TODO: verbose here did not work as expected because tests like
# TODO: test_spec_json will write the verbose messages to the
# TODO: output that is used to check test correctness.
tty.debug(*args)
def _acquired_debug(self, lock_type, wait_time, nattempts):
attempts_format = 'attempt' if nattempts == 1 else 'attempts'
if nattempts > 1:
acquired_attempts_format = ' after {0:0.2f}s and {1:d} {2}'.format(
wait_time, nattempts, attempts_format)
else:
# Don't print anything if we succeeded immediately
acquired_attempts_format = ''
self._debug(
'{0}: {1.path}[{1._start}:{1._length}] [Acquired{2}]'
.format(lock_type, self, acquired_attempts_format))
class LockTransaction(object):
@@ -619,28 +462,10 @@ class LockError(Exception):
"""Raised for any errors related to locks."""
class LockDowngradeError(LockError):
"""Raised when unable to downgrade from a write to a read lock."""
def __init__(self, path):
msg = "Cannot downgrade lock from write to read on file: %s" % path
super(LockDowngradeError, self).__init__(msg)
class LockLimitError(LockError):
"""Raised when exceed maximum attempts to acquire a lock."""
class LockTimeoutError(LockError):
"""Raised when an attempt to acquire a lock times out."""
class LockUpgradeError(LockError):
"""Raised when unable to upgrade from a read to a write lock."""
def __init__(self, path):
msg = "Cannot upgrade lock from read to write on file: %s" % path
super(LockUpgradeError, self).__init__(msg)
class LockPermissionError(LockError):
"""Raised when there are permission issues with a lock."""

View File

@@ -135,9 +135,7 @@ def process_stacktrace(countback):
def get_timestamp(force=False):
"""Get a string timestamp"""
if _debug or _timestamp or force:
# Note inclusion of the PID is useful for parallel builds.
return '[{0}, {1}] '.format(
datetime.now().strftime("%Y-%m-%d-%H:%M:%S.%f"), os.getpid())
return datetime.now().strftime("[%Y-%m-%d-%H:%M:%S.%f] ")
else:
return ''
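The two timestamp variants above differ only in whether the PID is embedded;
including it is what makes interleaved logs from parallel builds attributable
to a process. A standalone illustration of the PID-bearing format:

    import os
    from datetime import datetime

    # Timestamp in the '[time, pid] ' style shown above.
    stamp = '[{0}, {1}] '.format(
        datetime.now().strftime('%Y-%m-%d-%H:%M:%S.%f'), os.getpid())
    print(stamp + 'example log line')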

View File

@@ -5,7 +5,7 @@
#: major, minor, patch version for Spack, in a tuple
spack_version_info = (0, 14, 0)
spack_version_info = (0, 13, 4)
#: String containing Spack version joined with .'s
spack_version = '.'.join(str(v) for v in spack_version_info)

View File

@@ -18,7 +18,7 @@
from six.moves.urllib.error import URLError
import llnl.util.tty as tty
from llnl.util.filesystem import mkdirp
from llnl.util.filesystem import mkdirp, install_tree
import spack.cmd
import spack.config as config
@@ -308,7 +308,7 @@ def build_tarball(spec, outdir, force=False, rel=False, unsigned=False,
tmpdir = tempfile.mkdtemp()
cache_prefix = build_cache_prefix(tmpdir)
tarfile_name = tarball_name(spec, '.tar.bz2')
tarfile_name = tarball_name(spec, '.tar.gz')
tarfile_dir = os.path.join(cache_prefix, tarball_directory_name(spec))
tarfile_path = os.path.join(tarfile_dir, tarfile_name)
spackfile_path = os.path.join(
@@ -342,18 +342,8 @@ def build_tarball(spec, outdir, force=False, rel=False, unsigned=False,
raise NoOverwriteException(url_util.format(remote_specfile_path))
# make a copy of the install directory to work with
workdir = os.path.join(tmpdir, os.path.basename(spec.prefix))
# install_tree copies hardlinks
# create a temporary tarfile from prefix and extract it to workdir
# tarfile preserves hardlinks
temp_tarfile_name = tarball_name(spec, '.tar')
temp_tarfile_path = os.path.join(tarfile_dir, temp_tarfile_name)
with closing(tarfile.open(temp_tarfile_path, 'w')) as tar:
tar.add(name='%s' % spec.prefix,
arcname='.')
with closing(tarfile.open(temp_tarfile_path, 'r')) as tar:
tar.extractall(workdir)
os.remove(temp_tarfile_path)
workdir = os.path.join(tempfile.mkdtemp(), os.path.basename(spec.prefix))
install_tree(spec.prefix, workdir, symlinks=True)
# create info for later relocation and create tar
write_buildinfo_file(spec.prefix, workdir, rel=rel)
@@ -378,7 +368,7 @@ def build_tarball(spec, outdir, force=False, rel=False, unsigned=False,
tty.die(e)
# create compressed tarball of the install prefix
with closing(tarfile.open(tarfile_path, 'w:bz2')) as tar:
with closing(tarfile.open(tarfile_path, 'w:gz')) as tar:
tar.add(name='%s' % workdir,
arcname='%s' % os.path.basename(spec.prefix))
# remove copy of install directory
@@ -417,8 +407,8 @@ def build_tarball(spec, outdir, force=False, rel=False, unsigned=False,
sign_tarball(key, force, specfile_path)
# put tarball, spec and signature files in .spack archive
with closing(tarfile.open(spackfile_path, 'w')) as tar:
tar.add(name=tarfile_path, arcname='%s' % tarfile_name)
tar.add(name=specfile_path, arcname='%s' % specfile_name)
tar.add(name='%s' % tarfile_path, arcname='%s' % tarfile_name)
tar.add(name='%s' % specfile_path, arcname='%s' % specfile_name)
if not unsigned:
tar.add(name='%s.asc' % specfile_path,
arcname='%s.asc' % specfile_name)
@@ -589,17 +579,13 @@ def extract_tarball(spec, filename, allow_root=False, unsigned=False,
stagepath = os.path.dirname(filename)
spackfile_name = tarball_name(spec, '.spack')
spackfile_path = os.path.join(stagepath, spackfile_name)
tarfile_name = tarball_name(spec, '.tar.bz2')
tarfile_name = tarball_name(spec, '.tar.gz')
tarfile_path = os.path.join(tmpdir, tarfile_name)
specfile_name = tarball_name(spec, '.spec.yaml')
specfile_path = os.path.join(tmpdir, specfile_name)
with closing(tarfile.open(spackfile_path, 'r')) as tar:
tar.extractall(tmpdir)
# older buildcache tarfiles use gzip compression
if not os.path.exists(tarfile_path):
tarfile_name = tarball_name(spec, '.tar.gz')
tarfile_path = os.path.join(tmpdir, tarfile_name)
if not unsigned:
if os.path.exists('%s.asc' % specfile_path):
try:
@@ -652,17 +638,7 @@ def extract_tarball(spec, filename, allow_root=False, unsigned=False,
# so the pathname should be the same now that the directory layout
# is confirmed
workdir = os.path.join(tmpdir, os.path.basename(spec.prefix))
# install_tree copies hardlinks
# create a temporary tarfile from prefix and extract it to workdir
# tarfile preserves hardlinks
temp_tarfile_name = tarball_name(spec, '.tar')
temp_tarfile_path = os.path.join(tmpdir, temp_tarfile_name)
with closing(tarfile.open(temp_tarfile_path, 'w')) as tar:
tar.add(name='%s' % workdir,
arcname='.')
with closing(tarfile.open(temp_tarfile_path, 'r')) as tar:
tar.extractall(spec.prefix)
os.remove(temp_tarfile_path)
install_tree(workdir, spec.prefix, symlinks=True)
# cleanup
os.remove(tarfile_path)
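The "tarfile preserves hardlinks" comment above is the crux of this hunk:
round-tripping a prefix through a tar archive keeps hardlinked files as a
single inode, where a plain file-by-file copy can silently duplicate them. A
small self-contained sketch of that round trip (paths are hypothetical):

    import os
    import tarfile

    os.makedirs('prefix', exist_ok=True)
    with open('prefix/a', 'w') as f:
        f.write('data')
    os.link('prefix/a', 'prefix/b')  # 'b' is a hardlink to 'a'

    # Pack the prefix, then unpack it somewhere else.
    with tarfile.open('prefix.tar', 'w') as tar:
        tar.add('prefix', arcname='.')
    os.makedirs('workdir', exist_ok=True)
    with tarfile.open('prefix.tar', 'r') as tar:
        tar.extractall('workdir')

    # Both names still resolve to a single inode after extraction.
    print(os.stat('workdir/a').st_ino == os.stat('workdir/b').st_ino)  # True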

View File

@@ -42,6 +42,7 @@ def _fetch_cache():
building the same package different ways or multiple times.
"""
path = spack.config.get('config:source_cache')
if not path:
path = os.path.join(spack.paths.var_path, "cache")
path = spack.util.path.canonicalize_path(path)

View File

@@ -142,7 +142,7 @@ def compiler_info(args):
for flag, flag_value in iteritems(c.flags):
print("\t\t%s = %s" % (flag, flag_value))
if len(c.environment) != 0:
if len(c.environment.get('set', {})) != 0:
if len(c.environment['set']) != 0:
print("\tenvironment:")
print("\t set:")
for key, value in iteritems(c.environment['set']):

View File

@@ -4,7 +4,6 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import argparse
import sys
import llnl.util.tty as tty
from llnl.util.tty.colify import colify
@@ -22,8 +21,6 @@
def setup_parser(subparser):
subparser.epilog = 'If called without argument returns ' \
'the list of all valid extendable packages'
arguments.add_common_arguments(subparser, ['long', 'very_long'])
subparser.add_argument('-d', '--deps', action='store_true',
help='output dependencies along with found specs')
@@ -45,19 +42,7 @@ def setup_parser(subparser):
def extensions(parser, args):
if not args.spec:
# If called without arguments, list all the extendable packages
isatty = sys.stdout.isatty()
if isatty:
tty.info('Extendable packages:')
extendable_pkgs = []
for name in spack.repo.all_package_names():
pkg = spack.repo.get(name)
if pkg.extendable:
extendable_pkgs.append(name)
colify(extendable_pkgs, indent=4)
return
tty.die("extensions requires a package spec.")
# Checks
spec = cmd.parse_specs(args.spec)

View File

@@ -40,6 +40,8 @@ def update_kwargs_from_args(args, kwargs):
'fake': args.fake,
'dirty': args.dirty,
'use_cache': args.use_cache,
'install_global': args.install_global,
'upstream': args.upstream,
'cache_only': args.cache_only,
'explicit': True, # Always true for install command
'stop_at': args.until,
@@ -47,7 +49,7 @@ def update_kwargs_from_args(args, kwargs):
})
kwargs.update({
'install_deps': ('dependencies' in args.things_to_install),
'install_dependencies': ('dependencies' in args.things_to_install),
'install_package': ('package' in args.things_to_install)
})
@@ -123,6 +125,14 @@ def setup_parser(subparser):
'-f', '--file', action='append', default=[],
dest='specfiles', metavar='SPEC_YAML_FILE',
help="install from file. Read specs to install from .yaml files")
subparser.add_argument(
'--upstream', action='store', default=None,
dest='upstream', metavar='UPSTREAM_NAME',
help='specify which upstream spack to install to')
subparser.add_argument(
'-g', '--global', action='store_true', default=False,
dest='install_global',
help='install package to globally accessible location')
cd_group = subparser.add_mutually_exclusive_group()
arguments.add_common_arguments(cd_group, ['clean', 'dirty'])
@@ -216,7 +226,10 @@ def default_log_file(spec):
"""
fmt = 'test-{x.name}-{x.version}-{hash}.xml'
basename = fmt.format(x=spec, hash=spec.dag_hash())
dirname = fs.os.path.join(spack.paths.var_path, 'junit-report')
dirname = fs.os.path.join(spack.paths.user_config_path,
'var/spack',
'junit-report')
fs.mkdirp(dirname)
return fs.os.path.join(dirname, basename)
@@ -227,6 +240,7 @@ def install_spec(cli_args, kwargs, abstract_spec, spec):
try:
# handle active environment, if any
env = ev.get_env(cli_args, 'install')
if env:
with env.write_transaction():
concrete = env.concretize_and_add(
@@ -237,6 +251,10 @@ def install_spec(cli_args, kwargs, abstract_spec, spec):
env.regenerate_views()
else:
spec.package.do_install(**kwargs)
spack.config.set('config:active_tree', '~/.spack/opt/spack',
scope='user')
spack.config.set('config:active_upstream', None,
scope='user')
except spack.build_environment.InstallError as e:
if cli_args.show_log_on_error:
@@ -251,6 +269,30 @@ def install_spec(cli_args, kwargs, abstract_spec, spec):
def install(parser, args, **kwargs):
# Install Package to Global Upstream for multi-user use
if args.install_global:
spack.config.set('config:active_upstream', 'global',
scope='user')
global_root = spack.config.get('upstreams')
global_root = global_root['global']['install_tree']
global_root = spack.util.path.canonicalize_path(global_root)
spack.config.set('config:active_tree', global_root,
scope='user')
elif args.upstream:
if args.upstream not in spack.config.get('upstreams'):
tty.die("specified upstream does not exist")
spack.config.set('config:active_upstream', args.upstream,
scope='user')
root = spack.config.get('upstreams')
root = root[args.upstream]['install_tree']
root = spack.util.path.canonicalize_path(root)
spack.config.set('config:active_tree', root, scope='user')
else:
spack.config.set('config:active_upstream', None,
scope='user')
spack.config.set('config:active_tree',
spack.config.get('config:install_tree'),
scope='user')
if args.help_cdash:
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
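Taken together with the `--upstream` and `-g/--global` flags defined above,
the install command gains two upstream-aware entry points. A usage sketch
(assuming an upstream named `shared` has been added to `upstreams.yaml`;
`hdf5` is just an example spec):

    $ spack install --global hdf5            # install into the 'global' upstream tree
    $ spack install --upstream=shared hdf5   # install into a named upstream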

View File

@@ -8,9 +8,6 @@
import code
import argparse
import platform
import runpy
import llnl.util.tty as tty
import spack
@@ -22,23 +19,12 @@
def setup_parser(subparser):
subparser.add_argument(
'-c', dest='python_command', help='command to execute')
subparser.add_argument(
'-m', dest='module', action='store',
help='run library module as a script')
subparser.add_argument(
'python_args', nargs=argparse.REMAINDER,
help="file to run plus arguments")
def python(parser, args, unknown_args):
if args.module:
sys.argv = ['spack-python'] + unknown_args + args.python_args
runpy.run_module(args.module, run_name="__main__", alter_sys=True)
return
if unknown_args:
tty.die("Unknown arguments:", " ".join(unknown_args))
def python(parser, args):
# Fake a main python shell by setting __name__ to __main__.
console = code.InteractiveConsole({'__name__': '__main__',
'spack': spack})
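The `-m` handling shown above mirrors `python -m`: it rewrites `sys.argv` and
hands the module to `runpy`. A standalone illustration using a stdlib module
(module name and arguments are just examples):

    import runpy
    import sys

    # Equivalent of `python -m site --user-base` with arguments forwarded.
    sys.argv = ['spack-python', '--user-base']
    runpy.run_module('site', run_name='__main__', alter_sys=True)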

View File

@@ -5,6 +5,8 @@
from __future__ import print_function
import argparse
import copy
import sys
import itertools
@@ -15,6 +17,7 @@
import spack.cmd.common.arguments as arguments
import spack.repo
import spack.store
import spack.spec
from spack.database import InstallStatuses
from llnl.util import tty
@@ -54,8 +57,24 @@ def setup_parser(subparser):
"If used in an environment, all packages in the environment "
"will be uninstalled.")
subparser.add_argument(
'packages',
nargs=argparse.REMAINDER,
help="specs of packages to uninstall")
def find_matching_specs(env, specs, allow_multiple_matches=False, force=False):
subparser.add_argument(
'-u', '--upstream', action='store', default=None,
dest='upstream', metavar='UPSTREAM_NAME',
help='specify which upstream spack to uninstall from')
subparser.add_argument(
'-g', '--global', action='store_true',
dest='global_uninstall',
help='uninstall packages installed to global upstream')
def find_matching_specs(env, specs, allow_multiple_matches=False, force=False,
upstream=None, global_uninstall=False):
"""Returns a list of specs matching the not necessarily
concretized specs given from cli
@@ -67,6 +86,35 @@ def find_matching_specs(env, specs, allow_multiple_matches=False, force=False):
Return:
list of specs
"""
if global_uninstall:
spack.config.set('config:active_upstream', 'global',
scope='user')
global_root = spack.config.get('upstreams')
global_root = global_root['global']['install_tree']
global_root = spack.util.path.canonicalize_path(global_root)
spack.config.set('config:active_tree', global_root,
scope='user')
elif upstream:
if upstream not in spack.config.get('upstreams'):
tty.die("specified upstream does not exist")
spack.config.set('config:active_upstream', upstream,
scope='user')
root = spack.config.get('upstreams')
root = root[upstream]['install_tree']
root = spack.util.path.canonicalize_path(root)
spack.config.set('config:active_tree', root, scope='user')
else:
spack.config.set('config:active_upstream', None,
scope='user')
for spec in specs:
if isinstance(spec, spack.spec.Spec):
spec_name = str(spec)
spec_copy = (copy.deepcopy(spec))
spec_copy.concretize()
if spec_copy.package.installed_upstream:
tty.warn("{0} is installed upstream".format(spec_name))
tty.die("Use 'spack uninstall [--upstream upstream_name]'")
# constrain uninstall resolution to current environment if one is active
hashes = env.all_hashes() if env else None
@@ -224,11 +272,25 @@ def do_uninstall(env, specs, force):
for item in ready:
item.do_uninstall(force=force)
# write any changes made to the active environment
if env:
env.write()
spack.config.set('config:active_tree',
'~/.spack/opt/spack',
scope='user')
spack.config.set('config:active_upstream', None,
scope='user')
def get_uninstall_list(args, specs, env):
# Gets the list of installed specs that match the ones given via cli
# args.all takes care of the case where '-a' is given in the cli
uninstall_list = find_matching_specs(env, specs, args.all, args.force)
uninstall_list = find_matching_specs(env, specs, args.all, args.force,
upstream=args.upstream,
global_uninstall=args.global_uninstall
)
# Takes care of '-R'
active_dpts, inactive_dpts = installed_dependents(uninstall_list, env)
@@ -305,7 +367,7 @@ def uninstall_specs(args, specs):
anything_to_do = set(uninstall_list).union(set(remove_list))
if not anything_to_do:
tty.warn('There are no package to uninstall.')
tty.warn('There are no packages to uninstall.')
return
if not args.yes_to_all:

View File

@@ -413,14 +413,6 @@ def get_compilers(config, cspec=None, arch_spec=None):
assert arch_spec is None
if arch_spec and target and (target != family and target != 'any'):
# If the family of the target is the family we are seeking,
# there's an error in the underlying configuration
if llnl.util.cpu.targets[target].family == family:
msg = ('the "target" field in compilers.yaml accepts only '
'target families [replace "{0}" with "{1}"'
' in "{2}" specification]')
msg = msg.format(str(target), family, items.get('spec', '??'))
raise ValueError(msg)
continue
compilers.append(_compiler_from_config_entry(items))

View File

@@ -362,16 +362,7 @@ def concretize_compiler(self, spec):
# compiler_for_spec Should think whether this can be more
# efficient
def _proper_compiler_style(cspec, aspec):
compilers = spack.compilers.compilers_for_spec(
cspec, arch_spec=aspec
)
# If the spec passed as argument is concrete we want to check
# the versions match exactly
if (cspec.concrete and compilers and
cspec.version not in [c.version for c in compilers]):
return []
return compilers
return spack.compilers.compilers_for_spec(cspec, arch_spec=aspec)
if spec.compiler and spec.compiler.concrete:
if (self.check_for_compiler_existence and not
@@ -412,9 +403,7 @@ def _proper_compiler_style(cspec, aspec):
return True
else:
# No compiler with a satisfactory spec was found
raise UnavailableCompilerVersionError(
other_compiler, spec.architecture
)
raise UnavailableCompilerVersionError(other_compiler)
else:
# We have no hints to go by, grab any compiler
compiler_list = spack.compilers.all_compiler_specs()

View File

@@ -37,7 +37,6 @@
import spack.store
import spack.repo
import spack.spec
import spack.util.lock as lk
import spack.util.spack_yaml as syaml
import spack.util.spack_json as sjson
from spack.filesystem_view import YamlFilesystemView
@@ -45,9 +44,7 @@
from spack.directory_layout import DirectoryLayoutError
from spack.error import SpackError
from spack.version import Version
# TODO: Provide an API automatically retrying a build after detecting and
# TODO: clearing a failure.
from spack.util.lock import Lock, WriteTransaction, ReadTransaction, LockError
# DB goes in this directory underneath the root
_db_dirname = '.spack-db'
@@ -68,20 +65,9 @@
(Version('0.9.3'), Version('5')),
]
# Default timeout for spack database locks in seconds or None (no timeout).
# A balance needs to be struck between quick turnaround for parallel installs
# (to avoid excess delays) and waiting long enough when the system is busy
# (to ensure the database is updated).
# Timeout for spack database locks in seconds
_db_lock_timeout = 120
# Default timeout for spack package locks in seconds or None (no timeout).
# A balance needs to be struck between quick turnaround for parallel installs
# (to avoid excess delays when performing a parallel installation) and waiting
# long enough for the next possible spec to install (to avoid excessive
# checking of the last high priority package) or holding on to a lock (to
# ensure a failed install is properly tracked).
_pkg_lock_timeout = None
# Types of dependencies tracked by the database
_tracked_deps = ('link', 'run')
@@ -269,9 +255,6 @@ class Database(object):
"""Per-process lock objects for each install prefix."""
_prefix_locks = {}
"""Per-process failure (lock) objects for each install prefix."""
_prefix_failures = {}
def __init__(self, root, db_dir=None, upstream_dbs=None,
is_upstream=False):
"""Create a Database for Spack installations under ``root``.
@@ -296,7 +279,6 @@ def __init__(self, root, db_dir=None, upstream_dbs=None,
"""
self.root = root
if db_dir is None:
# If the db_dir is not provided, default to within the db root.
self._db_dir = os.path.join(self.root, _db_dirname)
@@ -312,29 +294,42 @@ def __init__(self, root, db_dir=None, upstream_dbs=None,
# This is for other classes to use to lock prefix directories.
self.prefix_lock_path = os.path.join(self._db_dir, 'prefix_lock')
# Ensure a persistent location for dealing with parallel installation
# failures (e.g., across near-concurrent processes).
self._failure_dir = os.path.join(self._db_dir, 'failures')
# Support special locks for handling parallel installation failures
# of a spec.
self.prefix_fail_path = os.path.join(self._db_dir, 'prefix_failures')
# Create needed directories and files
if not os.path.exists(self._db_dir):
mkdirp(self._db_dir)
if not os.path.exists(self._failure_dir):
mkdirp(self._failure_dir)
self.is_upstream = is_upstream
# Create .spack-db/index.json for global upstream if it doesn't exist
global_install_tree = spack.config.get(
'upstreams')['global']['install_tree']
global_install_tree = global_install_tree.replace(
'$spack', spack.paths.prefix)
if self.is_upstream:
if global_install_tree in self._db_dir:
if not os.path.isfile(self._index_path):
f = open(self._index_path, "w+")
database = {
'database': {
'installs': {},
'version': str(_db_version)
}
}
try:
sjson.dump(database, f)
except YAMLError as e:
raise syaml.SpackYAMLError(
"error writing YAML database:", str(e))
self.lock = ForbiddenLock()
else:
self.lock = Lock(self._lock_path)
# initialize rest of state.
self.db_lock_timeout = (
spack.config.get('config:db_lock_timeout') or _db_lock_timeout)
self.package_lock_timeout = (
spack.config.get('config:package_lock_timeout') or
_pkg_lock_timeout)
spack.config.get('config:package_lock_timeout') or None)
tty.debug('DATABASE LOCK TIMEOUT: {0}s'.format(
str(self.db_lock_timeout)))
timeout_format_str = ('{0}s'.format(str(self.package_lock_timeout))
@@ -345,9 +340,8 @@ def __init__(self, root, db_dir=None, upstream_dbs=None,
if self.is_upstream:
self.lock = ForbiddenLock()
else:
self.lock = lk.Lock(self._lock_path,
default_timeout=self.db_lock_timeout,
desc='database')
self.lock = Lock(self._lock_path,
default_timeout=self.db_lock_timeout)
self._data = {}
self.upstream_dbs = list(upstream_dbs) if upstream_dbs else []
@@ -362,136 +356,14 @@ def __init__(self, root, db_dir=None, upstream_dbs=None,
def write_transaction(self):
"""Get a write lock context manager for use in a `with` block."""
return lk.WriteTransaction(
return WriteTransaction(
self.lock, acquire=self._read, release=self._write)
def read_transaction(self):
"""Get a read lock context manager for use in a `with` block."""
return lk.ReadTransaction(self.lock, acquire=self._read)
return ReadTransaction(self.lock, acquire=self._read)
def _failed_spec_path(self, spec):
"""Return the path to the spec's failure file, which may not exist."""
if not spec.concrete:
raise ValueError('Concrete spec required for failure path for {0}'
.format(spec.name))
return os.path.join(self._failure_dir,
'{0}-{1}'.format(spec.name, spec.full_hash()))
def clear_failure(self, spec, force=False):
"""
Remove any persistent and cached failure tracking for the spec.
see `mark_failed()`.
Args:
spec (Spec): the spec whose failure indicators are being removed
force (bool): True if the failure information should be cleared
when a prefix failure lock exists for the file or False if
the failure should not be cleared (e.g., it may be
associated with a concurrent build)
"""
failure_locked = self.prefix_failure_locked(spec)
if failure_locked and not force:
tty.msg('Retaining failure marking for {0} due to lock'
.format(spec.name))
return
if failure_locked:
tty.warn('Removing failure marking despite lock for {0}'
.format(spec.name))
lock = self._prefix_failures.pop(spec.prefix, None)
if lock:
lock.release_write()
if self.prefix_failure_marked(spec):
try:
path = self._failed_spec_path(spec)
tty.debug('Removing failure marking for {0}'.format(spec.name))
os.remove(path)
except OSError as err:
tty.warn('Unable to remove failure marking for {0} ({1}): {2}'
.format(spec.name, path, str(err)))
def mark_failed(self, spec):
"""
Mark a spec as failing to install.
Prefix failure marking takes the form of a byte range lock on the nth
byte of a file for coordinating between concurrent parallel build
processes and a persistent file, named with the full hash and
containing the spec, in a subdirectory of the database to enable
persistence across overlapping but separate related build processes.
The failure lock file, ``spack.store.db.prefix_failures``, lives
alongside the install DB. ``n`` is the sys.maxsize-bit prefix of the
associated DAG hash to make the likelihood of collision very low with
no cleanup required.
"""
# Dump the spec to the failure file for (manual) debugging purposes
path = self._failed_spec_path(spec)
with open(path, 'w') as f:
spec.to_json(f)
# Also ensure a failure lock is taken to prevent cleanup removal
# of failure status information during a concurrent parallel build.
err = 'Unable to mark {0.name} as failed.'
prefix = spec.prefix
if prefix not in self._prefix_failures:
mark = lk.Lock(
self.prefix_fail_path,
start=spec.dag_hash_bit_prefix(bit_length(sys.maxsize)),
length=1,
default_timeout=self.package_lock_timeout, desc=spec.name)
try:
mark.acquire_write()
except lk.LockTimeoutError:
# Unlikely that another process failed to install at the same
# time but log it anyway.
tty.debug('PID {0} failed to mark install failure for {1}'
.format(os.getpid(), spec.name))
tty.warn(err.format(spec))
# Whether we or another process marked it as a failure, track it
# as such locally.
self._prefix_failures[prefix] = mark
return self._prefix_failures[prefix]
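# A minimal standalone sketch of the bit-prefix idea described in the
# docstring above; hash_bit_prefix is a hypothetical helper, not Spack's API.
#
#   import sys
#
#   def hash_bit_prefix(hex_hash, bits=sys.maxsize.bit_length()):
#       """Integer value of the leading `bits` bits of a hex-encoded hash."""
#       return int(hex_hash, 16) >> (len(hex_hash) * 4 - bits)
#
# Two specs collide only if the leading 63 bits of their DAG hashes match,
# which makes the shared failure lock file safe without cleanup.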
def prefix_failed(self, spec):
"""Return True if the prefix (installation) is marked as failed."""
# The failure was detected in this process.
if spec.prefix in self._prefix_failures:
return True
# The failure was detected by a concurrent process (e.g., an srun),
# which is expected to be holding a write lock if that is the case.
if self.prefix_failure_locked(spec):
return True
# Determine if the spec may have been marked as failed by a separate
# spack build process running concurrently.
return self.prefix_failure_marked(spec)
def prefix_failure_locked(self, spec):
"""Return True if a process has a failure lock on the spec."""
check = lk.Lock(
self.prefix_fail_path,
start=spec.dag_hash_bit_prefix(bit_length(sys.maxsize)),
length=1,
default_timeout=self.package_lock_timeout, desc=spec.name)
return check.is_write_locked()
def prefix_failure_marked(self, spec):
"""Determine if the spec has a persistent failure marking."""
return os.path.exists(self._failed_spec_path(spec))
def prefix_lock(self, spec, timeout=None):
def prefix_lock(self, spec):
"""Get a lock on a particular spec's installation directory.
NOTE: The installation directory **does not** need to exist.
@@ -506,16 +378,13 @@ def prefix_lock(self, spec, timeout=None):
readers-writer lock semantics with just a single lockfile, so no
cleanup required.
"""
timeout = timeout or self.package_lock_timeout
prefix = spec.prefix
if prefix not in self._prefix_locks:
self._prefix_locks[prefix] = lk.Lock(
self._prefix_locks[prefix] = Lock(
self.prefix_lock_path,
start=spec.dag_hash_bit_prefix(bit_length(sys.maxsize)),
length=1,
default_timeout=timeout, desc=spec.name)
elif timeout != self._prefix_locks[prefix].default_timeout:
self._prefix_locks[prefix].default_timeout = timeout
default_timeout=self.package_lock_timeout)
return self._prefix_locks[prefix]
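# Hedged illustration of the single-lockfile, byte-range scheme; Spack's
# Lock class wraps this idea, and fcntl is shown only for intuition:
#
#   import fcntl
#   with open(self.prefix_lock_path, 'r+') as f:
#       fcntl.lockf(f, fcntl.LOCK_EX, 1, offset)  # lock 1 byte at `offset`
#       ...                                       # critical section
#       fcntl.lockf(f, fcntl.LOCK_UN, 1, offset)
#
# Each spec hashes to its own offset, so one file covers every prefix with
# readers-writer semantics and no lockfiles to clean up.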
@@ -526,7 +395,7 @@ def prefix_read_lock(self, spec):
try:
yield self
except lk.LockError:
except LockError:
# This addresses the case where a nested lock attempt fails inside
# of this context manager
raise
@@ -543,7 +412,7 @@ def prefix_write_lock(self, spec):
try:
yield self
except lk.LockError:
except LockError:
# This addresses the case where a nested lock attempt fails inside
# of this context manager
raise
@@ -779,7 +648,7 @@ def _read_suppress_error():
self._error = e
self._data = {}
transaction = lk.WriteTransaction(
transaction = WriteTransaction(
self.lock, acquire=_read_suppress_error, release=self._write
)
@@ -965,7 +834,7 @@ def _read(self):
self._db_dir, os.R_OK | os.W_OK):
# if we can write, then read AND write a JSON file.
self._read_from_file(self._old_yaml_index_path, format='yaml')
with lk.WriteTransaction(self.lock):
with WriteTransaction(self.lock):
self._write(None, None, None)
else:
# Check for a YAML file if we can't find JSON.
@@ -978,7 +847,7 @@ def _read(self):
" databases cannot generate an index file")
# The file doesn't exist, try to traverse the directory.
# reindex() takes its own write lock, so no lock here.
with lk.WriteTransaction(self.lock):
with WriteTransaction(self.lock):
self._write(None, None, None)
self.reindex(spack.store.layout)
@@ -1142,6 +1011,9 @@ def _remove(self, spec):
rec.installed = False
return rec.spec
if self.is_upstream:
return rec.spec
del self._data[key]
for dep in rec.spec.dependencies(_tracked_deps):
self._decrement_ref_count(dep)

File diff suppressed because it is too large

View File

@@ -550,11 +550,9 @@ def __call__(self, *argv, **kwargs):
tty.debug(e)
self.error = e
if fail_on_error:
self._log_command_output(out)
raise
if fail_on_error and self.returncode not in (None, 0):
self._log_command_output(out)
raise SpackCommandError(
"Command exited with code %d: %s(%s)" % (
self.returncode, self.command_name,
@@ -562,13 +560,6 @@ def __call__(self, *argv, **kwargs):
return out.getvalue()
def _log_command_output(self, out):
if tty.is_verbose():
fmt = self.command_name + ': {0}'
for ln in out.getvalue().split('\n'):
if len(ln) > 0:
tty.verbose(fmt.format(ln.replace('==> ', '')))
def _profile_wrapper(command, parser, args, unknown_args):
import cProfile
@@ -643,6 +634,7 @@ def shell_set(var, value):
other_spack_instances = spack.config.get(
'upstreams') or {}
for install_properties in other_spack_instances.values():
upstream_module_roots = install_properties.get('modules', {})
upstream_module_roots = dict(

View File

@@ -214,6 +214,7 @@ def root_path(name):
Returns:
root folder for module file installation
"""
# Root folders where the various module files should be written
roots = spack.config.get('config:module_roots', {})
path = roots.get(name, os.path.join(spack.paths.share_path, name))
@@ -281,6 +282,7 @@ def read_module_indices():
module_type_to_index = {}
module_type_to_root = install_properties.get('modules', {})
for module_type, root in module_type_to_root.items():
root = spack.util.path.canonicalize_path(root)
module_type_to_index[module_type] = read_module_index(root)
module_indices.append(module_type_to_index)

View File

@@ -14,6 +14,7 @@
import contextlib
import copy
import functools
import glob
import hashlib
import inspect
import os
@@ -47,16 +48,21 @@
import spack.util.environment
import spack.util.web
import spack.multimethod
import spack.binary_distribution as binary_distribution
from llnl.util.filesystem import mkdirp, touch, working_dir
from llnl.util.filesystem import mkdirp, touch, chgrp
from llnl.util.filesystem import working_dir, install_tree, install
from llnl.util.lang import memoized
from llnl.util.link_tree import LinkTree
from llnl.util.tty.log import log_output
from llnl.util.tty.color import colorize
from spack.filesystem_view import YamlFilesystemView
from spack.installer import \
install_args_docstring, PackageInstaller, InstallError
from spack.util.executable import which
from spack.stage import stage_prefix, Stage, ResourceStage, StageComposite
from spack.util.environment import dump_environment
from spack.util.package_hash import package_hash
from spack.version import Version
from spack.package_prefs import get_package_dir_permissions, get_package_group
"""Allowed URL schemes for spack packages."""
_ALLOWED_URL_SCHEMES = ["http", "https", "ftp", "file", "git"]
@@ -424,18 +430,10 @@ class PackageBase(with_metaclass(PackageMeta, PackageViewMixin, object)):
# These are default values for instance variables.
#
#: A list or set of build time test functions to be called when tests
#: are executed or 'None' if there are no such test functions.
build_time_test_callbacks = None
#: Most Spack packages are used to install source or binary code while
#: those that do not can be used to install a set of other Spack packages.
has_code = True
#: A list or set of install time test functions to be called when tests
#: are executed or 'None' if there are no such test functions.
install_time_test_callbacks = None
#: By default we build in parallel. Subclasses can override this.
parallel = True
@@ -1285,6 +1283,41 @@ def content_hash(self, content=None):
hashlib.sha256(bytes().join(
sorted(hash_content))).digest()).lower()
def do_fake_install(self):
"""Make a fake install directory containing fake executables,
headers, and libraries."""
command = self.name
header = self.name
library = self.name
# Avoid double 'lib' for packages whose names already start with lib
if not self.name.startswith('lib'):
library = 'lib' + library
dso_suffix = '.dylib' if sys.platform == 'darwin' else '.so'
chmod = which('chmod')
# Install fake command
mkdirp(self.prefix.bin)
touch(os.path.join(self.prefix.bin, command))
chmod('+x', os.path.join(self.prefix.bin, command))
# Install fake header file
mkdirp(self.prefix.include)
touch(os.path.join(self.prefix.include, header + '.h'))
# Install fake shared and static libraries
mkdirp(self.prefix.lib)
for suffix in [dso_suffix, '.a']:
touch(os.path.join(self.prefix.lib, library + suffix))
# Install fake man page
mkdirp(self.prefix.man.man1)
packages_dir = spack.store.layout.build_packages_path(self.spec)
dump_packages(self.spec, packages_dir)
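# Illustrative result: for a package named 'zlib' on Linux, the fake prefix
# would contain roughly:
#
#   bin/zlib          executable stub
#   include/zlib.h    empty header
#   lib/libzlib.so    empty shared library
#   lib/libzlib.a     empty static library
#   man/man1/         empty man directory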
def _has_make_target(self, target):
"""Checks to see if 'target' is a valid target in a Makefile.
@@ -1428,17 +1461,382 @@ def _stage_and_write_lock(self):
with spack.store.db.prefix_write_lock(self.spec):
yield
def _process_external_package(self, explicit):
"""Helper function to process external packages.
Runs post install hooks and registers the package in the DB.
Args:
explicit (bool): if the package was requested explicitly by
the user, False if it was pulled in as a dependency of an
explicit package.
"""
if self.spec.external_module:
message = '{s.name}@{s.version} : has external module in {module}'
tty.msg(message.format(s=self, module=self.spec.external_module))
message = '{s.name}@{s.version} : is actually installed in {path}'
tty.msg(message.format(s=self, path=self.spec.external_path))
else:
message = '{s.name}@{s.version} : externally installed in {path}'
tty.msg(message.format(s=self, path=self.spec.external_path))
try:
# Check if the package was already registered in the DB
# If this is the case, then just exit
rec = spack.store.db.get_record(self.spec)
message = '{s.name}@{s.version} : already registered in DB'
tty.msg(message.format(s=self))
# Update the value of rec.explicit if it is necessary
self._update_explicit_entry_in_db(rec, explicit)
except KeyError:
# If not register it and generate the module file
# For external packages we just need to run
# post-install hooks to generate module files
message = '{s.name}@{s.version} : generating module file'
tty.msg(message.format(s=self))
spack.hooks.post_install(self.spec)
# Add to the DB
message = '{s.name}@{s.version} : registering into DB'
tty.msg(message.format(s=self))
spack.store.db.add(self.spec, None, explicit=explicit)
def _update_explicit_entry_in_db(self, rec, explicit):
if explicit and not rec.explicit:
with spack.store.db.write_transaction():
rec = spack.store.db.get_record(self.spec)
rec.explicit = True
message = '{s.name}@{s.version} : marking the package explicit'
tty.msg(message.format(s=self))
def try_install_from_binary_cache(self, explicit, unsigned=False):
tty.msg('Searching for binary cache of %s' % self.name)
specs = binary_distribution.get_spec(spec=self.spec,
force=False)
binary_spec = spack.spec.Spec.from_dict(self.spec.to_dict())
binary_spec._mark_concrete()
if binary_spec not in specs:
return False
tarball = binary_distribution.download_tarball(binary_spec)
# see #10063 : install from source if tarball doesn't exist
if tarball is None:
tty.msg('%s exists in binary cache but with different hash' %
self.name)
return False
tty.msg('Installing %s from binary cache' % self.name)
binary_distribution.extract_tarball(
binary_spec, tarball, allow_root=False,
unsigned=unsigned, force=False)
self.installed_from_binary_cache = True
spack.store.db.add(
self.spec, spack.store.layout, explicit=explicit)
return True
def bootstrap_compiler(self, **kwargs):
"""Called by do_install to setup ensure Spack has the right compiler.
Checks Spack's compiler configuration for a compiler that
matches the package spec. If none are configured, installs and
adds to the compiler configuration the compiler matching the
CompilerSpec object."""
compilers = spack.compilers.compilers_for_spec(
self.spec.compiler,
arch_spec=self.spec.architecture
)
if not compilers:
dep = spack.compilers.pkg_spec_for_compiler(self.spec.compiler)
dep.architecture = self.spec.architecture
# concrete CompilerSpec has less info than concrete Spec
# concretize as Spec to add that information
dep.concretize()
dep.package.do_install(**kwargs)
spack.compilers.add_compilers_to_config(
spack.compilers.find_compilers([dep.prefix])
)
def do_install(self, **kwargs):
"""Called by commands to install a package and or its dependencies.
"""Called by commands to install a package and its dependencies.
Package implementations should override install() to describe
their build process.
Args:"""
builder = PackageInstaller(self)
builder.install(**kwargs)
Args:
keep_prefix (bool): Keep install prefix on failure. By default,
destroys it.
keep_stage (bool): By default, stage is destroyed only if there
are no exceptions during build. Set to True to keep the stage
even with exceptions.
install_source (bool): By default, source is not installed, but
for debugging it might be useful to keep it around.
install_deps (bool): Install dependencies before installing this
package
skip_patch (bool): Skip patch stage of build if True.
verbose (bool): Display verbose build output (by default,
suppresses it)
fake (bool): Don't really build; install fake stub files instead.
explicit (bool): True if package was explicitly installed, False
if package was implicitly installed (as a dependency).
tests (bool or list or set): False to run no tests, True to test
all packages, or a list of package names to run tests for some
dirty (bool): Don't clean the build environment before installing.
restage (bool): Force spack to restage the package source.
force (bool): Install again, even if already installed.
use_cache (bool): Install from binary package, if available.
cache_only (bool): Fail if binary package unavailable.
stop_at (InstallPhase): last installation phase to be executed
(or None)
"""
if not self.spec.concrete:
raise ValueError("Can only install concrete packages: %s."
% self.spec.name)
do_install.__doc__ += install_args_docstring
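# Hedged usage sketch of this keyword interface (package name and options
# are illustrative):
#
#   spec = spack.spec.Spec('zlib')
#   spec.concretize()
#   spec.package.do_install(verbose=True, keep_stage=False, tests=False)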
keep_prefix = kwargs.get('keep_prefix', False)
keep_stage = kwargs.get('keep_stage', False)
install_source = kwargs.get('install_source', False)
install_deps = kwargs.get('install_deps', True)
skip_patch = kwargs.get('skip_patch', False)
verbose = kwargs.get('verbose', False)
fake = kwargs.get('fake', False)
explicit = kwargs.get('explicit', False)
tests = kwargs.get('tests', False)
dirty = kwargs.get('dirty', False)
restage = kwargs.get('restage', False)
# install_self defaults True and is popped so that dependencies are
# always installed regardless of whether the root was installed
install_self = kwargs.pop('install_package', True)
# explicit defaults False so that dependents are implicit regardless
# of whether their dependents are implicitly or explicitly installed.
# Spack ensures root packages of install commands are always marked to
# install explicit
explicit = kwargs.pop('explicit', False)
# For external packages the workflow is simplified, and basically
# consists in module file generation and registration in the DB
if self.spec.external:
return self._process_external_package(explicit)
if self.installed_upstream:
tty.msg("{0.name} is installed in an upstream Spack instance"
" at {0.prefix}".format(self))
# Note this skips all post-install hooks. In the case of modules
# this is considered correct because we want to retrieve the
# module from the upstream Spack instance.
return
partial = self.check_for_unfinished_installation(keep_prefix, restage)
# Ensure package is not already installed
layout = spack.store.layout
with spack.store.db.prefix_read_lock(self.spec):
if partial:
tty.msg(
"Continuing from partial install of %s" % self.name)
elif layout.check_installed(self.spec):
msg = '{0.name} is already installed in {0.prefix}'
tty.msg(msg.format(self))
rec = spack.store.db.get_record(self.spec)
# In case the stage directory has already been created,
# this ensures it's removed after we checked that the spec
# is installed
if keep_stage is False:
self.stage.destroy()
return self._update_explicit_entry_in_db(rec, explicit)
self._do_install_pop_kwargs(kwargs)
# First, install dependencies recursively.
if install_deps:
tty.debug('Installing {0} dependencies'.format(self.name))
dep_kwargs = kwargs.copy()
dep_kwargs['explicit'] = False
dep_kwargs['install_deps'] = False
for dep in self.spec.traverse(order='post', root=False):
if spack.config.get('config:install_missing_compilers', False):
Package._install_bootstrap_compiler(dep.package, **kwargs)
dep.package.do_install(**dep_kwargs)
# Then install the compiler if it is not already installed.
if install_deps:
Package._install_bootstrap_compiler(self, **kwargs)
if not install_self:
return
# Then, install the package proper
tty.msg(colorize('@*{Installing} @*g{%s}' % self.name))
if kwargs.get('use_cache', True):
if self.try_install_from_binary_cache(
explicit, unsigned=kwargs.get('unsigned', False)):
tty.msg('Successfully installed %s from binary cache'
% self.name)
print_pkg(self.prefix)
spack.hooks.post_install(self.spec)
return
elif kwargs.get('cache_only', False):
tty.die('No binary for %s found and cache-only specified'
% self.name)
tty.msg('No binary for %s found: installing from source'
% self.name)
# Set run_tests flag before starting build
self.run_tests = (tests is True or
tests and self.name in tests)
# Then install the package itself.
def build_process():
"""This implements the process forked for each build.
Has its own process and python module space set up by
build_environment.fork().
This function's return value is returned to the parent process.
"""
start_time = time.time()
if not fake:
if not skip_patch:
self.do_patch()
else:
self.do_stage()
tty.msg(
'Building {0} [{1}]'.format(self.name, self.build_system_class)
)
# get verbosity from do_install() parameter or saved value
echo = verbose
if PackageBase._verbose is not None:
echo = PackageBase._verbose
self.stage.keep = keep_stage
with self._stage_and_write_lock():
# Run the pre-install hook in the child process after
# the directory is created.
spack.hooks.pre_install(self.spec)
if fake:
self.do_fake_install()
else:
source_path = self.stage.source_path
if install_source and os.path.isdir(source_path):
src_target = os.path.join(
self.spec.prefix, 'share', self.name, 'src')
tty.msg('Copying source to {0}'.format(src_target))
install_tree(self.stage.source_path, src_target)
# Do the real install in the source directory.
with working_dir(self.stage.source_path):
# Save the build environment in a file before building.
dump_environment(self.env_path)
# cache debug settings
debug_enabled = tty.is_debug()
# Spawn a daemon that reads from a pipe and redirects
# everything to log_path
with log_output(self.log_path, echo, True) as logger:
for phase_name, phase_attr in zip(
self.phases, self._InstallPhase_phases):
with logger.force_echo():
inner_debug = tty.is_debug()
tty.set_debug(debug_enabled)
tty.msg(
"Executing phase: '%s'" % phase_name)
tty.set_debug(inner_debug)
# Redirect stdout and stderr to daemon pipe
phase = getattr(self, phase_attr)
phase(self.spec, self.prefix)
echo = logger.echo
self.log()
# Run post install hooks before build stage is removed.
spack.hooks.post_install(self.spec)
# Stop timer.
self._total_time = time.time() - start_time
build_time = self._total_time - self._fetch_time
tty.msg("Successfully installed %s" % self.name,
"Fetch: %s. Build: %s. Total: %s." %
(_hms(self._fetch_time), _hms(build_time),
_hms(self._total_time)))
print_pkg(self.prefix)
# preserve verbosity across runs
return echo
# hook that allow tests to inspect this Package before installation
# see unit_test_check() docs.
if not self.unit_test_check():
return
try:
# Create the install prefix and fork the build process.
if not os.path.exists(self.prefix):
spack.store.layout.create_install_directory(self.spec)
else:
# Set the proper group for the prefix
group = get_package_group(self.spec)
if group:
chgrp(self.prefix, group)
# Set the proper permissions.
# This has to be done after group because changing groups blows
# away the sticky group bit on the directory
mode = os.stat(self.prefix).st_mode
perms = get_package_dir_permissions(self.spec)
if mode != perms:
os.chmod(self.prefix, perms)
# Ensure the metadata path exists as well
mkdirp(spack.store.layout.metadata_path(self.spec), mode=perms)
# Fork a child to do the actual installation.
# Preserve verbosity settings across installs.
PackageBase._verbose = spack.build_environment.fork(
self, build_process, dirty=dirty, fake=fake)
# If we installed then we should keep the prefix
keep_prefix = self.last_phase is None or keep_prefix
# note: PARENT of the build process adds the new package to
# the database, so that we don't need to re-read from file.
spack.store.db.add(
self.spec, spack.store.layout, explicit=explicit
)
except spack.directory_layout.InstallDirectoryAlreadyExistsError:
# Abort install if install directory exists.
# But do NOT remove it (you'd be overwriting someone else's stuff)
tty.warn("Keeping existing install prefix in place.")
raise
except StopIteration as e:
# A StopIteration exception means that do_install
# was asked to stop early from clients
tty.msg(e.message)
tty.msg(
'Package stage directory : {0}'.format(self.stage.source_path)
)
finally:
# Remove the install prefix if anything went wrong during install.
if not keep_prefix:
self.remove_prefix()
# The subprocess *may* have removed the build stage. Mark it
# not created so that the next time self.stage is invoked, we
# check the filesystem for it.
self.stage.created = False
@staticmethod
def _install_bootstrap_compiler(pkg, **install_kwargs):
tty.debug('Bootstrapping {0} compiler for {1}'.format(
pkg.spec.compiler, pkg.name
))
comp_kwargs = install_kwargs.copy()
comp_kwargs['explicit'] = False
comp_kwargs['install_deps'] = True
pkg.bootstrap_compiler(**comp_kwargs)
def unit_test_check(self):
"""Hook for unit tests to assert things about package internals.
@@ -1457,6 +1855,125 @@ def unit_test_check(self):
"""
return True
def check_for_unfinished_installation(
self, keep_prefix=False, restage=False):
"""Check for leftover files from partially-completed prior install to
prepare for a new install attempt.
Options control whether these files are reused (vs. destroyed).
Args:
keep_prefix (bool): True if the installation prefix needs to be
kept, False otherwise
restage (bool): False if the stage has to be kept, True otherwise
Returns:
True if the prefix exists but the install is not complete, False
otherwise.
"""
if self.spec.external:
raise ExternalPackageError("Attempted to repair external spec %s" %
self.spec.name)
with spack.store.db.prefix_write_lock(self.spec):
try:
record = spack.store.db.get_record(self.spec)
installed_in_db = record.installed if record else False
except KeyError:
installed_in_db = False
partial = False
if not installed_in_db and os.path.isdir(self.prefix):
if not keep_prefix:
self.remove_prefix()
else:
partial = True
if restage and self.stage.managed_by_spack:
self.stage.destroy()
return partial
def _do_install_pop_kwargs(self, kwargs):
"""Pops kwargs from do_install before starting the installation
Args:
kwargs:
'stop_at': last installation phase to be executed (or None)
"""
self.last_phase = kwargs.pop('stop_at', None)
if self.last_phase is not None and self.last_phase not in self.phases:
tty.die('\'{0}\' is not an allowed phase for package {1}'
.format(self.last_phase, self.name))
def log(self):
"""Copy provenance into the install directory on success."""
packages_dir = spack.store.layout.build_packages_path(self.spec)
# Remove first if we're overwriting another build
# (can happen with spack setup)
try:
# log and env install paths are inside this
shutil.rmtree(packages_dir)
except Exception as e:
# FIXME : this potentially catches too many things...
tty.debug(e)
# Archive the whole stdout + stderr for the package
install(self.log_path, self.install_log_path)
# Archive the environment used for the build
install(self.env_path, self.install_env_path)
# Finally, archive files that are specific to each package
with working_dir(self.stage.path):
errors = StringIO()
target_dir = os.path.join(
spack.store.layout.metadata_path(self.spec),
'archived-files')
for glob_expr in self.archive_files:
# Check that we are trying to copy things that are
# in the stage tree (not arbitrary files)
abs_expr = os.path.realpath(glob_expr)
if os.path.realpath(self.stage.path) not in abs_expr:
errors.write(
'[OUTSIDE SOURCE PATH]: {0}\n'.format(glob_expr)
)
continue
# Now that we are sure that the path is within the correct
# folder, make it relative and check for matches
if os.path.isabs(glob_expr):
glob_expr = os.path.relpath(
glob_expr, self.stage.path
)
files = glob.glob(glob_expr)
for f in files:
try:
target = os.path.join(target_dir, f)
# We must ensure that the directory exists before
# copying a file in
mkdirp(os.path.dirname(target))
install(f, target)
except Exception as e:
tty.debug(e)
# Here try to be conservative, and avoid discarding
# the whole install procedure because copying a
# single file failed
errors.write('[FAILED TO ARCHIVE]: {0}'.format(f))
if errors.getvalue():
error_file = os.path.join(target_dir, 'errors.txt')
mkdirp(target_dir)
with open(error_file, 'w') as err:
err.write(errors.getvalue())
tty.warn('Errors occurred when archiving files.\n\t'
'See: {0}'.format(error_file))
dump_packages(self.spec, packages_dir)
def sanity_check_prefix(self):
"""This function checks whether install succeeded."""
@@ -2022,6 +2539,8 @@ def rpath_args(self):
"""
return " ".join("-Wl,-rpath,%s" % p for p in self.rpath)
build_time_test_callbacks = None
@on_package_attributes(run_tests=True)
def _run_default_build_time_test_callbacks(self):
"""Tries to call all the methods that are listed in the attribute
@@ -2041,6 +2560,8 @@ def _run_default_build_time_test_callbacks(self):
msg = 'RUN-TESTS: method not implemented [{0}]'
tty.warn(msg.format(name))
install_time_test_callbacks = None
@on_package_attributes(run_tests=True)
def _run_default_install_time_test_callbacks(self):
"""Tries to call all the methods that are listed in the attribute
@@ -2131,6 +2652,54 @@ def flatten_dependencies(spec, flat_dir):
dep_files.merge(flat_dir + '/' + name)
def dump_packages(spec, path):
"""Dump all package information for a spec and its dependencies.
This creates a package repository within path for every
namespace in the spec DAG, and fills the repos wtih package
files and patch files for every node in the DAG.
"""
mkdirp(path)
# Copy in package.py files from any dependencies.
# Note that we copy them in as they are in the *install* directory
# NOT as they are in the repository, because we want a snapshot of
# how *this* particular build was done.
for node in spec.traverse(deptype=all):
if node is not spec:
# Locate the dependency package in the install tree and find
# its provenance information.
source = spack.store.layout.build_packages_path(node)
source_repo_root = os.path.join(source, node.namespace)
# There's no provenance installed for the source package. Skip it.
# User can always get something current from the builtin repo.
if not os.path.isdir(source_repo_root):
continue
# Create a source repo and get the pkg directory out of it.
try:
source_repo = spack.repo.Repo(source_repo_root)
source_pkg_dir = source_repo.dirname_for_package_name(
node.name)
except spack.repo.RepoError:
tty.warn("Warning: Couldn't copy in provenance for %s" %
node.name)
# Create a destination repository
dest_repo_root = os.path.join(path, node.namespace)
if not os.path.exists(dest_repo_root):
spack.repo.create_repo(dest_repo_root)
repo = spack.repo.Repo(dest_repo_root)
# Get the location of the package in the dest repo.
dest_pkg_dir = repo.dirname_for_package_name(node.name)
if node is not spec:
install_tree(source_pkg_dir, dest_pkg_dir)
else:
spack.repo.path.dump_provenance(node, dest_pkg_dir)
def possible_dependencies(*pkg_or_spec, **kwargs):
"""Get the possible dependencies of a number of packages.
@@ -2160,6 +2729,28 @@ def possible_dependencies(*pkg_or_spec, **kwargs):
return visited
def print_pkg(message):
"""Outputs a message with a package icon."""
from llnl.util.tty.color import cwrite
cwrite('@*g{[+]} ')
print(message)
def _hms(seconds):
"""Convert time in seconds to hours, minutes, seconds."""
m, s = divmod(seconds, 60)
h, m = divmod(m, 60)
parts = []
if h:
parts.append("%dh" % h)
if m:
parts.append("%dm" % m)
if s:
parts.append("%.2fs" % s)
return ' '.join(parts)
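# Worked examples: _hms(3725.5) -> '1h 2m 5.50s'; zero-valued fields are
# dropped, so _hms(42) -> '42.00s'.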
class FetchError(spack.error.SpackError):
"""Raised when something goes wrong during fetch."""
@@ -2167,6 +2758,17 @@ def __init__(self, message, long_msg=None):
super(FetchError, self).__init__(message, long_msg)
class InstallError(spack.error.SpackError):
"""Raised when something goes wrong during install or uninstall."""
def __init__(self, message, long_msg=None):
super(InstallError, self).__init__(message, long_msg)
class ExternalPackageError(InstallError):
"""Raised by install() when a package is only for external use."""
class PackageStillNeededError(InstallError):
"""Raised when package is still needed by another on uninstall."""
def __init__(self, spec, dependents):

View File

@@ -16,6 +16,9 @@
#: This file lives in $prefix/lib/spack/spack/__file__
prefix = ancestor(__file__, 4)
#: User configuration location
user_config_path = os.path.expanduser('~/.spack')
#: synonym for prefix
spack_root = prefix
@@ -38,6 +41,8 @@
test_path = os.path.join(module_path, "test")
hooks_path = os.path.join(module_path, "hooks")
var_path = os.path.join(prefix, "var", "spack")
user_var_path = os.path.join(user_config_path, "var", "spack")
stage_path = os.path.join(user_var_path, "stage")
repos_path = os.path.join(var_path, "repos")
share_path = os.path.join(prefix, "share", "spack")
@@ -45,9 +50,6 @@
packages_path = os.path.join(repos_path, "builtin")
mock_packages_path = os.path.join(repos_path, "builtin.mock")
#: User configuration location
user_config_path = os.path.expanduser('~/.spack')
opt_path = os.path.join(prefix, "opt")
etc_path = os.path.join(prefix, "etc")

View File

@@ -50,10 +50,7 @@
from spack.package import \
install_dependency_symlinks, flatten_dependencies, \
DependencyConflictError
from spack.installer import \
ExternalPackageError, InstallError, InstallLockError, UpstreamPackageError
DependencyConflictError, InstallError, ExternalPackageError
from spack.variant import any_combination_of, auto_or_any_combination_of
from spack.variant import disjoint_sets

View File

@@ -684,6 +684,13 @@ def file_is_relocatable(file, paths_to_relocate=None):
strings = Executable('strings')
# if we're relocating patchelf itself, use it
if file[-13:] == "/bin/patchelf":
patchelf = Executable(file)
else:
patchelf = Executable(get_patchelf())
# Remove the RPATHS from the strings in the executable
set_of_strings = set(strings(file, output=str).split())
@@ -693,8 +700,8 @@ def file_is_relocatable(file, paths_to_relocate=None):
if platform.system().lower() == 'linux':
if m_subtype == 'x-executable' or m_subtype == 'x-sharedlib':
rpaths = ':'.join(get_existing_elf_rpaths(file))
set_of_strings.discard(rpaths)
rpaths = patchelf('--print-rpath', file, output=str).strip()
set_of_strings.discard(rpaths.strip())
if platform.system().lower() == 'darwin':
if m_subtype == 'x-mach-binary':
rpaths, deps, idpath = macho_get_paths(file)
@@ -749,5 +756,4 @@ def mime_type(file):
tty.debug('[MIME_TYPE] {0} -> {1}'.format(file, output.strip()))
if '/' not in output:
output += '/'
split_by_slash = output.strip().split('/')
return (split_by_slash[0], "/".join(split_by_slash[1:]))
return tuple(output.strip().split('/'))
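# Worked example of the behavior change: for 'text/x-python' both versions
# return ('text', 'x-python'); for a subtype containing a slash, the old
# code returned ('a', 'b/c') while tuple(...split('/')) yields ('a', 'b', 'c').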

View File

@@ -44,8 +44,7 @@ def fetch_package_log(pkg):
class InfoCollector(object):
"""Decorates PackageInstaller._install_task, which is called by
PackageBase.do_install for each spec, to collect information
"""Decorates PackageBase.do_install to collect information
on the installation of certain specs.
When exiting the context this change will be rolled back.
@@ -58,8 +57,8 @@ class InfoCollector(object):
specs (list of Spec): specs whose install information will
be recorded
"""
#: Backup of PackageInstaller._install_task
_backup__install_task = spack.package.PackageInstaller._install_task
#: Backup of PackageBase.do_install
_backup_do_install = spack.package.PackageBase.do_install
def __init__(self, specs):
#: Specs that will be installed
@@ -109,16 +108,15 @@ def __enter__(self):
}
spec['packages'].append(package)
def gather_info(_install_task):
"""Decorates PackageInstaller._install_task to gather useful
information on PackageBase.do_install for a CI report.
def gather_info(do_install):
"""Decorates do_install to gather useful information for
a CI report.
It's defined here to capture the environment and build
this context as the installations proceed.
"""
@functools.wraps(_install_task)
def wrapper(installer, task, *args, **kwargs):
pkg = task.pkg
@functools.wraps(do_install)
def wrapper(pkg, *args, **kwargs):
# We accounted before for what is already installed
installed_on_entry = pkg.installed
@@ -136,7 +134,7 @@ def wrapper(installer, task, *args, **kwargs):
value = None
try:
value = _install_task(installer, task, *args, **kwargs)
value = do_install(pkg, *args, **kwargs)
package['result'] = 'success'
package['stdout'] = fetch_package_log(pkg)
package['installed_from_binary_cache'] = \
@@ -184,15 +182,14 @@ def wrapper(installer, task, *args, **kwargs):
return wrapper
spack.package.PackageInstaller._install_task = gather_info(
spack.package.PackageInstaller._install_task
spack.package.PackageBase.do_install = gather_info(
spack.package.PackageBase.do_install
)
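# A minimal standalone sketch of this wrap-and-restore pattern; `record` and
# the print are hypothetical stand-ins for the report bookkeeping:
#
#   import functools
#
#   def gather_info(do_install):
#       @functools.wraps(do_install)
#       def wrapper(pkg, *args, **kwargs):
#           record = {'name': pkg.name, 'result': 'failure'}
#           try:
#               value = do_install(pkg, *args, **kwargs)
#               record['result'] = 'success'
#               return value
#           finally:
#               print(record)  # stand-in for appending to the report
#       return wrapper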
def __exit__(self, exc_type, exc_val, exc_tb):
# Restore the original method in PackageInstaller
spack.package.PackageInstaller._install_task = \
InfoCollector._backup__install_task
# Restore the original method in PackageBase
spack.package.PackageBase.do_install = InfoCollector._backup_do_install
for spec in self.specs:
spec['npackages'] = len(spec['packages'])
@@ -211,9 +208,9 @@ class collect_info(object):
"""Collects information to build a report while installing
and dumps it on exit.
If the format name is not ``None``, this context manager decorates
PackageInstaller._install_task when entering the context for a
PackageBase.do_install operation and unrolls the change when exiting.
If the format name is not ``None``, this context manager
decorates PackageBase.do_install when entering the context
and unrolls the change when exiting.
Within the context, only the specs that are passed to it
on initialization will be recorded for the report. Data from
@@ -258,14 +255,14 @@ def concretization_report(self, msg):
def __enter__(self):
if self.format_name:
# Start the collector and patch PackageInstaller._install_task
# Start the collector and patch PackageBase.do_install
self.collector = InfoCollector(self.specs)
self.collector.__enter__()
def __exit__(self, exc_type, exc_val, exc_tb):
if self.format_name:
# Close the collector and restore the
# original PackageInstaller._install_task
# original PackageBase.do_install
self.collector.__exit__(exc_type, exc_val, exc_tb)
report_data = {'specs': self.collector.specs}

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,282 +0,0 @@
%=============================================================================
% Generate
%=============================================================================
%-----------------------------------------------------------------------------
% Version semantics
%-----------------------------------------------------------------------------
% versions are declared w/priority -- declared with priority implies declared
version_declared(P, V) :- version_declared(P, V, _).
% If something is a package, it has only one version and that must be a
% possible version.
1 { version(P, V) : version_possible(P, V) } 1 :- node(P).
% If a version is declared but conflicted, it's not possible.
version_possible(P, V) :- version_declared(P, V), not version_conflict(P, V).
version_weight(P, V, N) :- version(P, V), version_declared(P, V, N).
#defined version_conflict/2.
%-----------------------------------------------------------------------------
% Dependency semantics
%-----------------------------------------------------------------------------
% Dependencies of any type imply that one package "depends on" another
depends_on(P, D) :- depends_on(P, D, _).
% declared dependencies are real if they're not virtual
depends_on(P, D, T) :- declared_dependency(P, D, T), not virtual(D), node(P).
% if you declare a dependency on a virtual, you depend on one of its providers
1 { depends_on(P, Q, T) : provides_virtual(Q, V) } 1
:- declared_dependency(P, V, T), virtual(V), node(P).
% if a virtual was required by some root spec, one provider is in the DAG
1 { node(P) : provides_virtual(P, V) } 1 :- virtual_node(V).
% for any virtual, there can be at most one provider in the DAG
provider(P, V) :- node(P), provides_virtual(P, V).
0 { provider(P, V) : node(P) } 1 :- virtual(V).
% give dependents the virtuals they want
provider_weight(D, N)
:- virtual(V), depends_on(P, D), provider(D, V),
pkg_provider_preference(P, V, D, N).
provider_weight(D, N)
:- virtual(V), depends_on(P, D), provider(D, V),
not pkg_provider_preference(P, V, D, _),
default_provider_preference(V, D, N).
% if there's no preference for something, it costs 100 to discourage its
% use with minimization
provider_weight(D, 100)
:- virtual(V), depends_on(P, D), provider(D, V),
not pkg_provider_preference(P, V, D, _),
not default_provider_preference(V, D, _).
% all nodes must be reachable from some root
needed(D) :- root(D), node(D).
needed(D) :- root(P), depends_on(P, D).
needed(D) :- needed(P), depends_on(P, D), node(P).
:- node(P), not needed(P).
% real dependencies imply new nodes.
node(D) :- node(P), depends_on(P, D).
% do not warn if generated program contains none of these.
#defined depends_on/3.
#defined declared_dependency/3.
#defined virtual/1.
#defined virtual_node/1.
#defined provides_virtual/2.
#defined pkg_provider_preference/4.
#defined default_provider_preference/3.
#defined root/1.
%-----------------------------------------------------------------------------
% Variant semantics
%-----------------------------------------------------------------------------
% one variant value for single-valued variants.
1 { variant_value(P, V, X) : variant_possible_value(P, V, X) } 1
:- node(P), variant(P, V), variant_single_value(P, V).
% at least one variant value for multi-valued variants.
1 { variant_value(P, V, X) : variant_possible_value(P, V, X) }
:- node(P), variant(P, V), not variant_single_value(P, V).
% if a variant is set to anything, it is considered 'set'.
variant_set(P, V) :- variant_set(P, V, _).
% variant_set is an explicitly set variant value. If it's not 'set',
% we revert to the default value. If it is set, we force the set value
variant_value(P, V, X) :- node(P), variant(P, V), variant_set(P, V, X).
% prefer default values.
variant_not_default(P, V, X, 1)
:- variant_value(P, V, X),
not variant_default_value(P, V, X),
node(P).
variant_not_default(P, V, X, 0)
:- variant_value(P, V, X),
variant_default_value(P, V, X),
node(P).
% suppress warnings about this atom being unset. It's only set if some
% spec or some package sets it, and without this, clingo will give
% warnings like 'info: atom does not occur in any rule head'.
#defined variant/2.
#defined variant_set/3.
#defined variant_single_value/2.
#defined variant_default_value/3.
#defined variant_possible_value/3.
%-----------------------------------------------------------------------------
% Platform/OS semantics
%-----------------------------------------------------------------------------
% one platform, os per node
% TODO: convert these to use optimization, like targets.
1 { node_platform(P, A) : node_platform(P, A) } 1 :- node(P).
1 { node_os(P, A) : node_os(P, A) } 1 :- node(P).
% arch fields for pkg P are set if set to anything
node_platform_set(P) :- node_platform_set(P, _).
node_os_set(P) :- node_os_set(P, _).
% if no platform/os is set, fall back to the defaults
node_platform(P, A)
:- node(P), not node_platform_set(P), node_platform_default(A).
node_os(P, A) :- node(P), not node_os_set(P), node_os_default(A).
% setting os/platform on a node is a hard constraint
node_platform(P, A) :- node(P), node_platform_set(P, A).
node_os(P, A) :- node(P), node_os_set(P, A).
% avoid info warnings (see variants)
#defined node_platform_set/2.
#defined node_os_set/2.
%-----------------------------------------------------------------------------
% Target semantics
%-----------------------------------------------------------------------------
% one target per node -- optimization will pick the "best" one
1 { node_target(P, T) : target(T) } 1 :- node(P).
% can't use targets on node if the compiler for the node doesn't support them
:- node_target(P, T), not compiler_supports_target(C, V, T),
node_compiler(P, C), node_compiler_version(P, C, V).
% if a target is set explicitly, respect it
node_target(P, T) :- node(P), node_target_set(P, T).
% each node has the weight of its assigned target
node_target_weight(P, N) :- node(P), node_target(P, T), target_weight(T, N).
#defined node_target_set/2.
%-----------------------------------------------------------------------------
% Compiler semantics
%-----------------------------------------------------------------------------
% one compiler per node
1 { node_compiler(P, C) : compiler(C) } 1 :- node(P).
1 { node_compiler_version(P, C, V) : compiler_version(C, V) } 1 :- node(P).
1 { compiler_weight(P, N) : compiler_weight(P, N) } 1 :- node(P).
% dependencies imply we should try to match hard compiler constraints
% todo: look at what to do about intersecting constraints here. we'd
% ideally go with the "lowest" pref in the DAG
node_compiler_match_pref(P, C) :- node_compiler_hard(P, C).
node_compiler_match_pref(D, C)
:- depends_on(P, D), node_compiler_match_pref(P, C),
not node_compiler_hard(D, _).
compiler_match(P, 1) :- node_compiler(P, C), node_compiler_match_pref(P, C).
node_compiler_version_match_pref(P, C, V)
:- node_compiler_version_hard(P, C, V).
node_compiler_version_match_pref(D, C, V)
:- depends_on(P, D), node_compiler_version_match_pref(P, C, V),
not node_compiler_version_hard(D, C, _).
compiler_version_match(P, 1)
:- node_compiler_version(P, C, V),
node_compiler_version_match_pref(P, C, V).
#defined node_compiler_hard/2.
#defined node_compiler_version_hard/3.
% compilers weighted by preference according to packages.yaml
compiler_weight(P, N)
:- node_compiler(P, C), node_compiler_version(P, C, V),
node_compiler_preference(P, C, V, N).
compiler_weight(P, N)
:- node_compiler(P, C), node_compiler_version(P, C, V),
not node_compiler_preference(P, C, _, _),
default_compiler_preference(C, V, N).
compiler_weight(P, 100)
:- node_compiler(P, C), node_compiler_version(P, C, V),
not node_compiler_preference(P, C, _, _),
not default_compiler_preference(C, _, _).
#defined node_compiler_preference/4.
#defined default_compiler_preference/3.
%-----------------------------------------------------------------------------
% Compiler flags
%-----------------------------------------------------------------------------
% propagate flags when compilers match
inherit_flags(P, D)
:- depends_on(P, D), node_compiler(P, C), node_compiler(D, C),
compiler(C), flag_type(T).
node_flag_inherited(D, T, F) :- node_flag_set(P, T, F), inherit_flags(P, D).
node_flag_inherited(D, T, F)
:- node_flag_inherited(P, T, F), inherit_flags(P, D).
% node with flags set to anything is "set"
node_flag_set(P) :- node_flag_set(P, _, _).
% remember where flags came from
node_flag_source(P, P) :- node_flag_set(P).
node_flag_source(D, Q) :- node_flag_source(P, Q), inherit_flags(P, D).
% compiler flags from compilers.yaml are put on nodes if compiler matches
node_flag(P, T, F),
node_flag_compiler_default(P)
:- not node_flag_set(P), compiler_version_flag(C, V, T, F),
node_compiler(P, C), node_compiler_version(P, C, V),
flag_type(T), compiler(C), compiler_version(C, V).
% if a flag is set to something or inherited, it's included
node_flag(P, T, F) :- node_flag_set(P, T, F).
node_flag(P, T, F) :- node_flag_inherited(P, T, F).
% if no node flags are set for a type, there are no flags.
no_flags(P, T) :- not node_flag(P, T, _), node(P), flag_type(T).
#defined compiler_version_flag/4.
#defined node_flag/3.
#defined node_flag_set/3.
%-----------------------------------------------------------------------------
% How to optimize the spec (high to low priority)
%-----------------------------------------------------------------------------
% weight root preferences higher
%
% TODO: how best to deal with this issue? It's not clear how best to
% weight all the constraints. Without this root preference, `spack solve
% hdf5` will pick mpich instead of openmpi, even if openmpi is the
% preferred provider, because openmpi has a version constraint on hwloc.
% It ends up choosing between settling for an old version of hwloc, or
% picking the second-best provider. This workaround weights root
% preferences higher so that hdf5's prefs are more important, but it's
% not clear this is a general solution. It would be nice to weight by
% distance to root, but that seems to slow down the solve a lot.
%
% One option is to make preferences hard constraints. Or maybe we need
% to look more closely at where a constraint came from and factor that
% into our weights. e.g., a non-default variant resulting from a version
% constraint counts like a version constraint. Needs more thought later.
%
root(D, 2) :- root(D), node(D).
root(D, 1) :- not root(D), node(D).
% prefer default variants
#minimize { N*R@10,P,V,X : variant_not_default(P, V, X, N), root(P, R) }.
% pick most preferred virtual providers
#minimize{ N*R@9,D : provider_weight(D, N), root(P, R) }.
% prefer more recent versions.
#minimize{ N@8,P,V : version_weight(P, V, N) }.
% compiler preferences
#maximize{ N@7,P : compiler_match(P, N) }.
#minimize{ N@6,P : compiler_weight(P, N) }.
% fastest target for node
% TODO: if these are slightly different by compiler (e.g., skylake is
% best, gcc supports skylake and broadwell, clang's best is haswell)
% things seem to get really slow.
#minimize{ N@5,P : node_target_weight(P, N) }.

View File

@@ -154,6 +154,7 @@ def get_stage_root():
if _stage_root is None:
candidates = spack.config.get('config:build_stage')
if isinstance(candidates, string_types):
candidates = [candidates]
@@ -307,9 +308,8 @@ def __init__(
lock_id = prefix_bits(sha1, bit_length(sys.maxsize))
stage_lock_path = os.path.join(get_stage_root(), '.lock')
tty.debug("Creating stage lock {0}".format(self.name))
Stage.stage_locks[self.name] = spack.util.lock.Lock(
stage_lock_path, lock_id, 1, desc=self.name)
stage_lock_path, lock_id, 1)
self._lock = Stage.stage_locks[self.name]

View File

@@ -34,7 +34,7 @@
import spack.directory_layout
#: default installation root, relative to the Spack install path
default_root = os.path.join(spack.paths.opt_path, 'spack')
default_root = os.path.join(spack.paths.user_config_path, 'opt/spack')
class Store(object):
@@ -70,9 +70,10 @@ def reindex(self):
def _store():
"""Get the singleton store instance."""
root = spack.config.get('config:install_tree', default_root)
root = spack.util.path.canonicalize_path(root)
root = spack.config.get('config:active_tree', default_root)
# Canonicalize Path for Root regardless of origin
root = spack.util.path.canonicalize_path(root)
return Store(root,
spack.config.get('config:install_path_scheme'),
spack.config.get('config:install_hash_length'))
@@ -88,11 +89,19 @@ def _store():
def retrieve_upstream_dbs():
other_spack_instances = spack.config.get('upstreams', {})
global_fallback = {'global': {'install_tree': '$spack/opt/spack',
'modules':
{'tcl': '$spack/share/spack/modules',
'lmod': '$spack/share/spack/lmod',
'dotkit': '$spack/share/spack/dotkit'}}}
other_spack_instances = spack.config.get('upstreams',
global_fallback)
install_roots = []
for install_properties in other_spack_instances.values():
install_roots.append(install_properties['install_tree'])
install_roots.append(spack.util.path.canonicalize_path(
install_properties['install_tree']))
return _construct_upstream_dbs_from_install_roots(install_roots)
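# Hedged example of the upstreams.yaml that the global_fallback dict above
# encodes ('$spack' expands to the Spack prefix during canonicalization):
#
#   upstreams:
#     global:
#       install_tree: $spack/opt/spack
#       modules:
#         tcl: $spack/share/spack/modules
#         lmod: $spack/share/spack/lmod
#         dotkit: $spack/share/spack/dotkit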

View File

@@ -1,66 +0,0 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import pytest
import spack.installer as inst
import spack.repo
import spack.spec
def test_build_task_errors(install_mockery):
with pytest.raises(ValueError, match='must be a package'):
inst.BuildTask('abc', False, 0, 0, 0, [])
pkg = spack.repo.get('trivial-install-test-package')
with pytest.raises(ValueError, match='must have a concrete spec'):
inst.BuildTask(pkg, False, 0, 0, 0, [])
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
with pytest.raises(inst.InstallError, match='Cannot create a build task'):
inst.BuildTask(spec.package, False, 0, 0, inst.STATUS_REMOVED, [])
def test_build_task_basics(install_mockery):
spec = spack.spec.Spec('dependent-install')
spec.concretize()
assert spec.concrete
# Ensure key properties match expectations
task = inst.BuildTask(spec.package, False, 0, 0, inst.STATUS_ADDED, [])
assert task.priority == len(task.uninstalled_deps)
assert task.key == (task.priority, task.sequence)
# Ensure flagging installed works as expected
assert len(task.uninstalled_deps) > 0
assert task.dependencies == task.uninstalled_deps
task.flag_installed(task.dependencies)
assert len(task.uninstalled_deps) == 0
assert task.priority == 0
def test_build_task_strings(install_mockery):
"""Tests of build_task repr and str for coverage purposes."""
# Using a package with one dependency
spec = spack.spec.Spec('dependent-install')
spec.concretize()
assert spec.concrete
# Ensure key properties match expectations
task = inst.BuildTask(spec.package, False, 0, 0, inst.STATUS_ADDED, [])
# Cover __repr__
irep = task.__repr__()
assert irep.startswith(task.__class__.__name__)
assert "status='queued'" in irep # == STATUS_ADDED
assert "sequence=" in irep
# Cover __str__
istr = str(task)
assert "status=queued" in istr # == STATUS_ADDED
assert "#dependencies=1" in istr
assert "priority=" in istr

View File

@@ -13,7 +13,6 @@
import spack.paths as spack_paths
import spack.spec as spec
import spack.util.web as web_util
import spack.util.gpg
@pytest.fixture
@@ -42,15 +41,6 @@ def test_urlencode_string():
assert(s_enc == 'Spack+Test+Project')
def has_gpg():
try:
gpg = spack.util.gpg.Gpg.gpg()
except spack.util.gpg.SpackGPGError:
gpg = None
return bool(gpg)
@pytest.mark.skipif(not has_gpg(), reason='This test requires gpg')
def test_import_signing_key(mock_gnupghome):
signing_key_dir = spack_paths.mock_gpg_keys_path
signing_key_path = os.path.join(signing_key_dir, 'package-signing-key')

View File

@@ -21,7 +21,6 @@
from spack.test.conftest import MockPackage, MockPackageMultiRepo
import spack.util.executable as exe
import spack.util.spack_yaml as syaml
import spack.util.gpg
ci_cmd = SpackCommand('ci')
@@ -33,14 +32,6 @@
git = exe.which('git', required=True)
def has_gpg():
try:
gpg = spack.util.gpg.Gpg.gpg()
except spack.util.gpg.SpackGPGError:
gpg = None
return bool(gpg)
@pytest.fixture()
def env_deactivate():
yield
@@ -503,7 +494,6 @@ def test_ci_pushyaml(tmpdir):
@pytest.mark.disable_clean_stage_check
@pytest.mark.skipif(not has_gpg(), reason='This test requires gpg')
def test_push_mirror_contents(tmpdir, mutable_mock_env_path, env_deactivate,
install_mockery, mock_packages, mock_fetch,
mock_stage, mock_gnupghome):

View File

@@ -117,7 +117,7 @@ def test_uninstall_deprecated(mock_packages, mock_archive, mock_fetch,
non_deprecated = spack.store.db.query()
uninstall('-y', 'libelf@0.8.10')
uninstall('-y', '-g', 'libelf@0.8.10')
assert spack.store.db.query() == spack.store.db.query(installed=any)
assert spack.store.db.query() == non_deprecated

View File

@@ -169,11 +169,9 @@ def test_env_install_same_spec_twice(install_mockery, mock_fetch, capfd):
e = ev.read('test')
with capfd.disabled():
with e:
# The first installation outputs the package prefix
install('cmake-client')
# The second installation attempt will also update the view
out = install('cmake-client')
assert 'Updating view at' in out
assert 'is already installed in' in out
def test_remove_after_concretize():

View File

@@ -69,11 +69,6 @@ def check_output(ni, na):
check_output(1, 1)
def test_extensions_no_arguments(mock_packages):
out = extensions()
assert 'python' in out
def test_extensions_raises_if_not_extendable(mock_packages):
with pytest.raises(SpackCommandError):
extensions("flake8")

View File

@@ -52,16 +52,8 @@ def test_no_gpg_in_path(tmpdir, mock_gnupghome, monkeypatch):
spack.util.gpg.Gpg.gpg()
def has_gpg():
try:
gpg = spack.util.gpg.Gpg.gpg()
except spack.util.gpg.SpackGPGError:
gpg = None
return bool(gpg)
@pytest.mark.maybeslow
@pytest.mark.skipif(not has_gpg(),
@pytest.mark.skipif(not spack.util.gpg.Gpg.gpg(),
reason='These tests require gnupg2')
def test_gpg(tmpdir, mock_gnupghome):
# Verify a file with an empty keyring.

View File

@@ -54,6 +54,46 @@ def test_install_package_and_dependency(
assert 'errors="0"' in content
def test_global_install_package_and_dependency(
tmpdir, mock_packages, mock_archive, mock_fetch, config,
install_mockery):
with tmpdir.as_cwd():
install('--global',
'--log-format=junit',
'--log-file=test.xml',
'libdwarf')
files = tmpdir.listdir()
filename = tmpdir.join('test.xml')
assert filename in files
content = filename.open().read()
assert 'tests="2"' in content
assert 'failures="0"' in content
assert 'errors="0"' in content
def test_upstream_install_package_and_dependency(
tmpdir, mock_packages, mock_archive, mock_fetch, config,
install_mockery):
with tmpdir.as_cwd():
install('--upstream=global',
'--log-format=junit',
'--log-file=test.xml',
'libdwarf')
files = tmpdir.listdir()
filename = tmpdir.join('test.xml')
assert filename in files
content = filename.open().read()
assert 'tests="2"' in content
assert 'failures="0"' in content
assert 'errors="0"' in content
@pytest.mark.disable_clean_stage_check
def test_install_runtests_notests(monkeypatch, mock_packages, install_mockery):
def check(pkg):
@@ -139,7 +179,8 @@ def test_install_output_on_build_error(mock_packages, mock_archive, mock_fetch,
# capfd interferes with Spack's capturing
with capfd.disabled():
out = install('build-error', fail_on_error=False)
assert 'ProcessError' in out
assert isinstance(install.error, spack.build_environment.ChildError)
assert install.error.name == 'ProcessError'
assert 'configure: error: in /path/to/some/file:' in out
assert 'configure: error: cannot run C compiled programs.' in out
@@ -176,10 +217,9 @@ def test_show_log_on_error(mock_packages, mock_archive, mock_fetch,
assert install.error.pkg.name == 'build-error'
assert 'Full build log:' in out
# Message shows up for ProcessError (1), ChildError (1), and output (1)
errors = [line for line in out.split('\n')
if 'configure: error: cannot run C compiled programs' in line]
assert len(errors) == 3
assert len(errors) == 2
def test_install_overwrite(
@@ -373,12 +413,8 @@ def just_throw(*args, **kwargs):
exc_type = getattr(builtins, exc_typename)
raise exc_type(msg)
monkeypatch.setattr(spack.installer.PackageInstaller, '_install_task',
just_throw)
monkeypatch.setattr(spack.package.PackageBase, 'do_install', just_throw)
# TODO: Why does junit output capture appear to swallow the exception
# TODO: as evidenced by the two failing packages getting tagged as
# TODO: installed?
with tmpdir.as_cwd():
install('--log-format=junit', '--log-file=test.xml', 'libdwarf')
@@ -388,14 +424,14 @@ def just_throw(*args, **kwargs):
content = filename.open().read()
# Count failures and errors correctly: libdwarf _and_ libelf
assert 'tests="2"' in content
# Count failures and errors correctly
assert 'tests="1"' in content
assert 'failures="0"' in content
assert 'errors="2"' in content
assert 'errors="1"' in content
# We want to have both stdout and stderr
assert '<system-out>' in content
assert 'error message="{0}"'.format(msg) in content
assert msg in content
@pytest.mark.usefixtures('noop_install', 'config')
@@ -482,8 +518,9 @@ def test_cdash_upload_build_error(tmpdir, mock_fetch, install_mockery,
@pytest.mark.disable_clean_stage_check
def test_cdash_upload_clean_build(tmpdir, mock_fetch, install_mockery, capfd):
# capfd interferes with Spack's capturing of e.g., Build.xml output
def test_cdash_upload_clean_build(tmpdir, mock_fetch, install_mockery,
capfd):
# capfd interferes with Spack's capturing
with capfd.disabled():
with tmpdir.as_cwd():
install(
@@ -501,7 +538,7 @@ def test_cdash_upload_clean_build(tmpdir, mock_fetch, install_mockery, capfd):
@pytest.mark.disable_clean_stage_check
def test_cdash_upload_extra_params(tmpdir, mock_fetch, install_mockery, capfd):
# capfd interferes with Spack's capture of e.g., Build.xml output
# capfd interferes with Spack's capturing
with capfd.disabled():
with tmpdir.as_cwd():
install(
@@ -523,7 +560,7 @@ def test_cdash_upload_extra_params(tmpdir, mock_fetch, install_mockery, capfd):
@pytest.mark.disable_clean_stage_check
def test_cdash_buildstamp_param(tmpdir, mock_fetch, install_mockery, capfd):
# capfd interferes with Spack's capture of e.g., Build.xml output
# capfd interferes with Spack's capturing
with capfd.disabled():
with tmpdir.as_cwd():
cdash_track = 'some_mocked_track'
@@ -572,6 +609,7 @@ def test_cdash_install_from_spec_yaml(tmpdir, mock_fetch, install_mockery,
report_file = report_dir.join('a_Configure.xml')
assert report_file in report_dir.listdir()
content = report_file.open().read()
import re
install_command_regex = re.compile(
r'<ConfigureCommand>(.+)</ConfigureCommand>',
re.MULTILINE | re.DOTALL)
@@ -601,7 +639,6 @@ def test_build_warning_output(tmpdir, mock_fetch, install_mockery, capfd):
msg = ''
try:
install('build-warnings')
assert False, "no exception was raised!"
except spack.build_environment.ChildError as e:
msg = e.long_message
@@ -610,16 +647,12 @@ def test_build_warning_output(tmpdir, mock_fetch, install_mockery, capfd):
def test_cache_only_fails(tmpdir, mock_fetch, install_mockery, capfd):
msg = ''
with capfd.disabled():
try:
install('--cache-only', 'libdwarf')
except spack.installer.InstallError as e:
msg = str(e)
# libelf from cache failed to install, which automatically removed the
# libdwarf build task and flagged the package as failed to install.
assert 'Installation of libdwarf failed' in msg
assert False
except spack.main.SpackCommandError:
pass
def test_install_only_dependencies(tmpdir, mock_fetch, install_mockery):

View File
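The two tests added above exercise the new --global and --upstream=<name> flags on spack install; earlier diffs in this compare add the matching -g/--upstream flags to uninstall. A minimal sketch of driving them the same way the tests do, via SpackCommand; the shared-store semantics are assumed from the tests rather than from documentation:

import spack.main

install = spack.main.SpackCommand('install')
uninstall = spack.main.SpackCommand('uninstall')

# Install into the upstream named 'global' (see the upstreams.yaml
# defaults added later in this diff).
install('--upstream=global', 'libdwarf')

# --global appears to be shorthand for the same shared store.
install('--global', 'libdwarf')

# Uninstall accepts the same targeting flags.
uninstall('-y', '--upstream=global', 'libdwarf')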

@@ -3,8 +3,6 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import pytest
import spack
from spack.main import SpackCommand
@@ -14,17 +12,3 @@
def test_python():
out = python('-c', 'import spack; print(spack.spack_version)')
assert out.strip() == spack.spack_version
def test_python_with_module():
# pytest rewrites a lot of modules, which interferes with runpy, so
# it's hard to test this. Trying to import a module like sys, that
# has no code associated with it, raises an error reliably in python
# 2 and 3, which indicates we successfully ran runpy.run_module.
with pytest.raises(ImportError, match="No code object"):
python('-m', 'sys')
def test_python_raises():
out = python('--foobar', fail_on_error=False)
assert "Error: Unknown arguments" in out

View File
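test_python_with_module above leans on a CPython quirk: built-in modules such as sys carry no code object, so runpy.run_module raises ImportError, which is what proves the command actually reached runpy. The behaviour in isolation, with no Spack involved:

import runpy

try:
    runpy.run_module('sys')
except ImportError as e:
    # CPython reports something like "No code object available for sys".
    assert 'No code object' in str(e)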

@@ -80,6 +80,41 @@ def test_force_uninstall_spec_with_ref_count_not_zero(
@pytest.mark.db
@pytest.mark.usefixtures('mutable_database')
def test_global_recursive_uninstall():
"""Test recursive uninstall from global upstream"""
uninstall('-g', '-y', '-a', '--dependents', 'callpath')
all_specs = spack.store.layout.all_specs()
assert len(all_specs) == 8
# query specs with multiple configurations
mpileaks_specs = [s for s in all_specs if s.satisfies('mpileaks')]
callpath_specs = [s for s in all_specs if s.satisfies('callpath')]
mpi_specs = [s for s in all_specs if s.satisfies('mpi')]
assert len(mpileaks_specs) == 0
assert len(callpath_specs) == 0
assert len(mpi_specs) == 3
@pytest.mark.db
@pytest.mark.usefixtures('mutable_database')
def test_upstream_recursive_uninstall():
"""Test recursive uninstall from specified upstream"""
uninstall('--upstream=global', '-y', '-a', '--dependents', 'callpath')
all_specs = spack.store.layout.all_specs()
assert len(all_specs) == 8
# query specs with multiple configurations
mpileaks_specs = [s for s in all_specs if s.satisfies('mpileaks')]
callpath_specs = [s for s in all_specs if s.satisfies('callpath')]
mpi_specs = [s for s in all_specs if s.satisfies('mpi')]
assert len(mpileaks_specs) == 0
assert len(callpath_specs) == 0
assert len(mpi_specs) == 3
def test_force_uninstall_and_reinstall_by_hash(mutable_database):
"""Test forced uninstall and reinstall of old specs."""
# this is the spec to be removed
@@ -102,12 +137,12 @@ def validate_callpath_spec(installed):
specs = spack.store.db.get_by_hash(dag_hash[:7], installed=any)
assert len(specs) == 1 and specs[0] == callpath_spec
specs = spack.store.db.get_by_hash(dag_hash, installed=not installed)
assert specs is None
# specs = spack.store.db.get_by_hash(dag_hash, installed=not installed)
# assert specs is None
specs = spack.store.db.get_by_hash(dag_hash[:7],
installed=not installed)
assert specs is None
# specs = spack.store.db.get_by_hash(dag_hash[:7],
# installed=not installed)
# assert specs is None
mpileaks_spec = spack.store.db.query_one('mpileaks ^mpich')
assert callpath_spec in mpileaks_spec

View File

@@ -485,28 +485,3 @@ def test_fj_version_detection(version_str, expected_version):
def test_detecting_mixed_toolchains(compiler_spec, expected_result, config):
compiler = spack.compilers.compilers_for_spec(compiler_spec).pop()
assert spack.compilers.is_mixed_toolchain(compiler) is expected_result
@pytest.mark.regression('14798,13733')
def test_raising_if_compiler_target_is_over_specific(config):
# Compiler entry with an overly specific target
compilers = [{'compiler': {
'spec': 'gcc@9.0.1',
'paths': {
'cc': '/usr/bin/gcc-9',
'cxx': '/usr/bin/g++-9',
'f77': '/usr/bin/gfortran-9',
'fc': '/usr/bin/gfortran-9'
},
'flags': {},
'operating_system': 'ubuntu18.04',
'target': 'haswell',
'modules': [],
'environment': {},
'extra_rpaths': []
}}]
arch_spec = spack.spec.ArchSpec(('linux', 'ubuntu18.04', 'haswell'))
with spack.config.override('compilers', compilers):
cfg = spack.compilers.get_compiler_config()
with pytest.raises(ValueError):
spack.compilers.get_compilers(cfg, 'gcc@9.0.1', arch_spec)

View File

@@ -620,16 +620,3 @@ def test_adjusting_default_target_based_on_compiler(
with spack.concretize.disable_compiler_existence_check():
s = Spec(spec).concretized()
assert str(s.architecture.target) == str(expected)
@pytest.mark.regression('8735,14730')
def test_compiler_version_matches_any_entry_in_compilers_yaml(self):
# Ensure that a concrete compiler with different compiler version
# doesn't match (here it's 4.5 vs. 4.5.0)
with pytest.raises(spack.concretize.UnavailableCompilerVersionError):
s = Spec('mpileaks %gcc@4.5')
s.concretize()
# An abstract compiler with a version list could resolve to 4.5.0
s = Spec('mpileaks %gcc@4.5:')
s.concretize()
assert str(s.compiler.version) == '4.5.0'

View File

@@ -28,7 +28,6 @@
import spack.database
import spack.directory_layout
import spack.environment as ev
import spack.package
import spack.package_prefs
import spack.paths
import spack.platforms.test
@@ -39,6 +38,7 @@
from spack.util.pattern import Bunch
from spack.dependency import Dependency
from spack.package import PackageBase
from spack.fetch_strategy import FetchStrategyComposite, URLFetchStrategy
from spack.fetch_strategy import FetchError
from spack.spec import Spec
@@ -329,18 +329,8 @@ def mock_repo_path():
yield spack.repo.RepoPath(spack.paths.mock_packages_path)
@pytest.fixture
def mock_pkg_install(monkeypatch):
def _pkg_install_fn(pkg, spec, prefix):
# sanity_check_prefix requires something in the install directory
mkdirp(prefix.bin)
monkeypatch.setattr(spack.package.PackageBase, 'install', _pkg_install_fn,
raising=False)
@pytest.fixture(scope='function')
def mock_packages(mock_repo_path, mock_pkg_install):
def mock_packages(mock_repo_path):
"""Use the 'builtin.mock' repository instead of 'builtin'"""
with use_repo(mock_repo_path):
yield mock_repo_path
@@ -609,10 +599,10 @@ def mock_fetch(mock_archive):
def fake_fn(self):
return fetcher
orig_fn = spack.package.PackageBase.fetcher
spack.package.PackageBase.fetcher = fake_fn
orig_fn = PackageBase.fetcher
PackageBase.fetcher = fake_fn
yield
spack.package.PackageBase.fetcher = orig_fn
PackageBase.fetcher = orig_fn
class MockLayout(object):

View File

@@ -1,5 +1,5 @@
config:
install_tree: $spack/opt/spack
install_tree: ~/.spack/opt/spack
template_dirs:
- $spack/share/spack/templates
- $spack/lib/spack/spack/test/data/templates
@@ -7,7 +7,7 @@ config:
build_stage:
- $tempdir/$user/spack-stage
- ~/.spack/stage
source_cache: $spack/var/spack/cache
source_cache: ~/.spack/var/spack/cache
misc_cache: ~/.spack/cache
verify_ssl: true
checksum: true

View File
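The defaults above move the install tree and source cache from the Spack prefix ($spack) into the user's home directory. A small sketch of how the two path styles expand, using the same canonicalize_path helper the store tests in this diff call on '$spack/opt/spack'; the printed values depend on the local environment:

import spack.util.path

print(spack.util.path.canonicalize_path('~/.spack/opt/spack'))
print(spack.util.path.canonicalize_path('$spack/opt/spack'))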

@@ -0,0 +1,7 @@
upstreams:
global:
install_tree: $spack/opt/spack
modules:
tcl: $spack/share/spack/modules
lmod: $spack/share/spack/lmod
dotkit: $spack/share/spack/dotkit

View File
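This new defaults file declares a single upstream named global, with an install tree and per-module-system roots under the Spack prefix. A sketch of reading the section back, assuming the nested YAML round-trips through spack.config.get as plain dicts:

import spack.config

upstreams = spack.config.get('upstreams', {})
global_tree = upstreams['global']['install_tree']   # $spack/opt/spack
tcl_root = upstreams['global']['modules']['tcl']    # $spack/share/spack/modules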

@@ -0,0 +1,7 @@
upstreams:
global:
install_tree: $spack/opt/spack
modules:
tcl: $spack/share/spack/modules
lmod: $spack/share/spack/lmod
dotkit: $spack/share/spack/dotkit

View File

@@ -13,8 +13,8 @@
import os
import pytest
import json
import shutil
import llnl.util.lock as lk
from llnl.util.tty.colify import colify
import spack.repo
@@ -39,6 +39,19 @@ def test_store(tmpdir):
spack.store.store = real_store
@pytest.fixture()
def test_global_db_initialization():
global_store = spack.store.store
global_db_path = '$spack/opt/spack'
global_db_path = spack.util.path.canonicalize_path(global_db_path)
shutil.rmtree(os.path.join(global_db_path, '.spack-db'))
global_store = spack.store.Store(str(global_db_path))
yield
spack.store.store = global_store
@pytest.fixture()
def upstream_and_downstream_db(tmpdir_factory, gen_mock_layout):
mock_db_root = str(tmpdir_factory.mktemp('mock_db_root'))
@@ -750,118 +763,3 @@ def test_query_spec_with_non_conditional_virtual_dependency(database):
# dependency that are not conditional on variants
results = spack.store.db.query_local('mpileaks ^mpich')
assert len(results) == 1
def test_failed_spec_path_error(database):
"""Ensure spec not concrete check is covered."""
s = spack.spec.Spec('a')
with pytest.raises(ValueError, match='Concrete spec required'):
spack.store.db._failed_spec_path(s)
@pytest.mark.db
def test_clear_failure_keep(mutable_database, monkeypatch, capfd):
"""Add test coverage for clear_failure operation when to be retained."""
def _is(db, spec):
return True
# Pretend the spec has been failure locked
monkeypatch.setattr(spack.database.Database, 'prefix_failure_locked', _is)
s = spack.spec.Spec('a')
spack.store.db.clear_failure(s)
out = capfd.readouterr()[0]
assert 'Retaining failure marking' in out
@pytest.mark.db
def test_clear_failure_forced(mutable_database, monkeypatch, capfd):
"""Add test coverage for clear_failure operation when force."""
def _is(db, spec):
return True
# Pretend the spec has been failure locked
monkeypatch.setattr(spack.database.Database, 'prefix_failure_locked', _is)
# Ensure OSError is raised when trying to remove the non-existent marking
monkeypatch.setattr(spack.database.Database, 'prefix_failure_marked', _is)
s = spack.spec.Spec('a').concretized()
spack.store.db.clear_failure(s, force=True)
out = capfd.readouterr()[1]
assert 'Removing failure marking despite lock' in out
assert 'Unable to remove failure marking' in out
@pytest.mark.db
def test_mark_failed(mutable_database, monkeypatch, tmpdir, capsys):
"""Add coverage to mark_failed."""
def _raise_exc(lock):
raise lk.LockTimeoutError('Mock acquire_write failure')
# Ensure attempt to acquire write lock on the mark raises the exception
monkeypatch.setattr(lk.Lock, 'acquire_write', _raise_exc)
with tmpdir.as_cwd():
s = spack.spec.Spec('a').concretized()
spack.store.db.mark_failed(s)
out = str(capsys.readouterr()[1])
assert 'Unable to mark a as failed' in out
# Clean up the failure mark to ensure it does not interfere with other
# tests using the same spec.
del spack.store.db._prefix_failures[s.prefix]
@pytest.mark.db
def test_prefix_failed(mutable_database, monkeypatch):
"""Add coverage to prefix_failed operation."""
def _is(db, spec):
return True
s = spack.spec.Spec('a').concretized()
# Confirm the spec is not already marked as failed
assert not spack.store.db.prefix_failed(s)
# Check that a failure entry is sufficient
spack.store.db._prefix_failures[s.prefix] = None
assert spack.store.db.prefix_failed(s)
# Remove the entry and check again
del spack.store.db._prefix_failures[s.prefix]
assert not spack.store.db.prefix_failed(s)
# Now pretend that the prefix failure is locked
monkeypatch.setattr(spack.database.Database, 'prefix_failure_locked', _is)
assert spack.store.db.prefix_failed(s)
def test_prefix_read_lock_error(mutable_database, monkeypatch):
"""Cover the prefix read lock exception."""
def _raise(db, spec):
raise lk.LockError('Mock lock error')
s = spack.spec.Spec('a').concretized()
# Ensure subsequent lock operations fail
monkeypatch.setattr(lk.Lock, 'acquire_read', _raise)
with pytest.raises(Exception):
with spack.store.db.prefix_read_lock(s):
assert False
def test_prefix_write_lock_error(mutable_database, monkeypatch):
"""Cover the prefix write lock exception."""
def _raise(db, spec):
raise lk.LockError('Mock lock error')
s = spack.spec.Spec('a').concretized()
# Ensure subsequent lock operations fail
monkeypatch.setattr(lk.Lock, 'acquire_write', _raise)
with pytest.raises(Exception):
with spack.store.db.prefix_write_lock(s):
assert False

View File

@@ -100,9 +100,6 @@ def test_partial_install_delete_prefix_and_stage(install_mockery, mock_fetch):
rm_prefix_checker = RemovePrefixChecker(instance_rm_prefix)
spack.package.Package.remove_prefix = rm_prefix_checker.remove_prefix
# must clear failure markings for the package before re-installing it
spack.store.db.clear_failure(spec, True)
pkg.succeed = True
pkg.stage = MockStage(pkg.stage)
@@ -267,9 +264,6 @@ def test_partial_install_keep_prefix(install_mockery, mock_fetch):
pkg.do_install(keep_prefix=True)
assert os.path.exists(pkg.prefix)
# must clear failure markings for the package before re-installing it
spack.store.db.clear_failure(spec, True)
pkg.succeed = True # make the build succeed
pkg.stage = MockStage(pkg.stage)
pkg.do_install(keep_prefix=True)
@@ -306,13 +300,12 @@ def test_store(install_mockery, mock_fetch):
@pytest.mark.disable_clean_stage_check
def test_failing_build(install_mockery, mock_fetch, capfd):
def test_failing_build(install_mockery, mock_fetch):
spec = Spec('failing-build').concretized()
pkg = spec.package
with pytest.raises(spack.build_environment.ChildError):
pkg.do_install()
assert 'InstallError: Expected Failure' in capfd.readouterr()[0]
class MockInstallError(spack.error.SpackError):
@@ -439,7 +432,7 @@ def test_pkg_install_log(install_mockery):
# Attempt installing log without the build log file
with pytest.raises(IOError, match="No such file or directory"):
spack.installer.log(spec.package)
spec.package.log()
# Set up mock build files and try again
log_path = spec.package.log_path
@@ -452,7 +445,7 @@ def test_pkg_install_log(install_mockery):
install_path = os.path.dirname(spec.package.install_log_path)
mkdirp(install_path)
spack.installer.log(spec.package)
spec.package.log()
assert os.path.exists(spec.package.install_log_path)
assert os.path.exists(spec.package.install_env_path)
@@ -476,14 +469,3 @@ def test_unconcretized_install(install_mockery, mock_fetch, mock_packages):
with pytest.raises(ValueError, match="only patch concrete packages"):
spec.package.do_patch()
def test_install_error():
try:
msg = 'test install error'
long_msg = 'this is the long version of test install error'
raise InstallError(msg, long_msg=long_msg)
except Exception as exc:
assert exc.__class__.__name__ == 'InstallError'
assert exc.message == msg
assert exc.long_message == long_msg

View File

@@ -1,579 +0,0 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import pytest
import llnl.util.tty as tty
import spack.binary_distribution
import spack.compilers
import spack.directory_layout as dl
import spack.installer as inst
import spack.util.lock as lk
import spack.repo
import spack.spec
def _noop(*args, **kwargs):
"""Generic monkeypatch no-op routine."""
pass
def _none(*args, **kwargs):
"""Generic monkeypatch function that always returns None."""
return None
def _true(*args, **kwargs):
"""Generic monkeypatch function that always returns True."""
return True
def create_build_task(pkg):
"""
Create a build task for the given (concretized) package
Args:
pkg (PackageBase): concretized package associated with the task
Return:
(BuildTask) A basic package build task
"""
return inst.BuildTask(pkg, False, 0, 0, inst.STATUS_ADDED, [])
def create_installer(spec_name):
"""
Create an installer for the named spec
Args:
spec_name (str): Name of the explicit install spec
Return:
spec (Spec): concretized spec
installer (PackageInstaller): the associated package installer
"""
spec = spack.spec.Spec(spec_name)
spec.concretize()
assert spec.concrete
return spec, inst.PackageInstaller(spec.package)
@pytest.mark.parametrize('sec,result', [
(86400, "24h"),
(3600, "1h"),
(60, "1m"),
(1.802, "1.80s"),
(3723.456, "1h 2m 3.46s")])
def test_hms(sec, result):
assert inst._hms(sec) == result
def test_install_msg():
name = 'some-package'
pid = 123456
expected = "{0}: Installing {1}".format(pid, name)
assert inst.install_msg(name, pid) == expected
def test_install_from_cache_errors(install_mockery, capsys):
"""Test to ensure cover _install_from_cache errors."""
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
# Check with cache-only
with pytest.raises(SystemExit):
inst._install_from_cache(spec.package, True, True, False)
captured = str(capsys.readouterr())
assert 'No binary' in captured
assert 'found when cache-only specified' in captured
assert not spec.package.installed_from_binary_cache
# Check when we don't expect to install only from binary cache
assert not inst._install_from_cache(spec.package, False, True, False)
assert not spec.package.installed_from_binary_cache
def test_install_from_cache_ok(install_mockery, monkeypatch):
"""Test to ensure cover _install_from_cache to the return."""
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
monkeypatch.setattr(inst, '_try_install_from_binary_cache', _true)
monkeypatch.setattr(spack.hooks, 'post_install', _noop)
assert inst._install_from_cache(spec.package, True, True, False)
def test_process_external_package_module(install_mockery, monkeypatch, capfd):
"""Test to simply cover the external module message path."""
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
# Ensure we take the external module path WITHOUT any changes to the database
monkeypatch.setattr(spack.database.Database, 'get_record', _none)
spec.external_path = '/actual/external/path/not/checked'
spec.external_module = 'unchecked_module'
inst._process_external_package(spec.package, False)
out = capfd.readouterr()[0]
assert 'has external module in {0}'.format(spec.external_module) in out
assert 'is actually installed in {0}'.format(spec.external_path) in out
def test_process_binary_cache_tarball_none(install_mockery, monkeypatch,
capfd):
"""Tests to cover _process_binary_cache_tarball when no tarball."""
monkeypatch.setattr(spack.binary_distribution, 'download_tarball', _none)
pkg = spack.repo.get('trivial-install-test-package')
assert not inst._process_binary_cache_tarball(pkg, None, False, False)
assert 'exists in binary cache but' in capfd.readouterr()[0]
def test_process_binary_cache_tarball_tar(install_mockery, monkeypatch, capfd):
"""Tests to cover _process_binary_cache_tarball with a tar file."""
def _spec(spec):
return spec
# Skip binary distribution functionality; assume it is tested elsewhere
monkeypatch.setattr(spack.binary_distribution, 'download_tarball', _spec)
monkeypatch.setattr(spack.binary_distribution, 'extract_tarball', _noop)
# Skip database updates
monkeypatch.setattr(spack.database.Database, 'add', _noop)
spec = spack.spec.Spec('a').concretized()
assert inst._process_binary_cache_tarball(spec.package, spec, False, False)
assert 'Installing a from binary cache' in capfd.readouterr()[0]
def test_installer_init_errors(install_mockery):
"""Test to ensure cover installer constructor errors."""
with pytest.raises(ValueError, match='must be a package'):
inst.PackageInstaller('abc')
pkg = spack.repo.get('trivial-install-test-package')
with pytest.raises(ValueError, match='Can only install concrete'):
inst.PackageInstaller(pkg)
def test_installer_strings(install_mockery):
"""Tests of installer repr and str for coverage purposes."""
spec, installer = create_installer('trivial-install-test-package')
# Cover __repr__
irep = installer.__repr__()
assert irep.startswith(installer.__class__.__name__)
assert "installed=" in irep
assert "failed=" in irep
# Cover __str__
istr = str(installer)
assert "#tasks=0" in istr
assert "installed (0)" in istr
assert "failed (0)" in istr
def test_installer_last_phase_error(install_mockery, capsys):
"""Test to cover last phase error."""
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
with pytest.raises(SystemExit):
installer = inst.PackageInstaller(spec.package)
installer.install(stop_at='badphase')
captured = capsys.readouterr()
assert 'is not an allowed phase' in str(captured)
def test_installer_ensure_ready_errors(install_mockery):
"""Test to cover _ensure_ready errors."""
spec, installer = create_installer('trivial-install-test-package')
fmt = r'cannot be installed locally.*{0}'
# Force an external package error
path, module = spec.external_path, spec.external_module
spec.external_path = '/actual/external/path/not/checked'
spec.external_module = 'unchecked_module'
msg = fmt.format('is external')
with pytest.raises(inst.ExternalPackageError, match=msg):
installer._ensure_install_ready(spec.package)
# Force an upstream package error
spec.external_path, spec.external_module = path, module
spec.package._installed_upstream = True
msg = fmt.format('is upstream')
with pytest.raises(inst.UpstreamPackageError, match=msg):
installer._ensure_install_ready(spec.package)
# Force an install lock error, which should occur naturally since
# we are calling an internal method prior to any lock-related setup
spec.package._installed_upstream = False
assert len(installer.locks) == 0
with pytest.raises(inst.InstallLockError, match=fmt.format('not locked')):
installer._ensure_install_ready(spec.package)
def test_ensure_locked_have(install_mockery, tmpdir):
"""Test to cover _ensure_locked when already have lock."""
spec, installer = create_installer('trivial-install-test-package')
with tmpdir.as_cwd():
lock = lk.Lock('./test', default_timeout=1e-9, desc='test')
lock_type = 'read'
tpl = (lock_type, lock)
installer.locks[installer.pkg_id] = tpl
assert installer._ensure_locked(lock_type, spec.package) == tpl
def test_package_id(install_mockery):
"""Test to cover package_id functionality."""
pkg = spack.repo.get('trivial-install-test-package')
with pytest.raises(ValueError, match='spec is not concretized'):
inst.package_id(pkg)
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
pkg = spec.package
assert pkg.name in inst.package_id(pkg)
def test_fake_install(install_mockery):
"""Test to cover fake install basics."""
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
pkg = spec.package
inst._do_fake_install(pkg)
assert os.path.isdir(pkg.prefix.lib)
def test_packages_needed_to_bootstrap_compiler(install_mockery, monkeypatch):
"""Test to cover most of _packages_needed_to_boostrap_compiler."""
# TODO: More work is needed to go beyond the dependency check
def _no_compilers(pkg, arch_spec):
return []
# Test the path where no compiler packages are returned
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
packages = inst._packages_needed_to_bootstrap_compiler(spec.package)
assert not packages
# Test up to the dependency check
monkeypatch.setattr(spack.compilers, 'compilers_for_spec', _no_compilers)
with pytest.raises(spack.repo.UnknownPackageError, match='not found'):
inst._packages_needed_to_bootstrap_compiler(spec.package)
def test_dump_packages_deps(install_mockery, tmpdir):
"""Test to add coverage to dump_packages."""
spec = spack.spec.Spec('simple-inheritance').concretized()
with tmpdir.as_cwd():
inst.dump_packages(spec, '.')
def test_add_bootstrap_compilers(install_mockery, monkeypatch):
"""Test to cover _add_bootstrap_compilers."""
def _pkgs(pkg):
spec = spack.spec.Spec('mpi').concretized()
return [(spec.package, True)]
spec, installer = create_installer('trivial-install-test-package')
monkeypatch.setattr(inst, '_packages_needed_to_bootstrap_compiler', _pkgs)
installer._add_bootstrap_compilers(spec.package)
ids = list(installer.build_tasks)
assert len(ids) == 1
task = installer.build_tasks[ids[0]]
assert task.compiler
def test_prepare_for_install_on_installed(install_mockery, monkeypatch):
"""Test of _prepare_for_install's early return for installed task path."""
spec, installer = create_installer('dependent-install')
task = create_build_task(spec.package)
installer.installed.add(task.pkg_id)
monkeypatch.setattr(inst.PackageInstaller, '_ensure_install_ready', _noop)
installer._prepare_for_install(task, True, True, False)
def test_installer_init_queue(install_mockery):
"""Test of installer queue functions."""
with spack.config.override('config:install_missing_compilers', True):
spec, installer = create_installer('dependent-install')
installer._init_queue(True, True)
ids = list(installer.build_tasks)
assert len(ids) == 2
assert 'dependency-install' in ids
assert 'dependent-install' in ids
def test_install_task_use_cache(install_mockery, monkeypatch):
"""Test _install_task to cover use_cache path."""
spec, installer = create_installer('trivial-install-test-package')
task = create_build_task(spec.package)
monkeypatch.setattr(inst, '_install_from_cache', _true)
installer._install_task(task)
assert spec.package.name in installer.installed
def test_release_lock_write_n_exception(install_mockery, tmpdir, capsys):
"""Test _release_lock for supposed write lock with exception."""
spec, installer = create_installer('trivial-install-test-package')
pkg_id = 'test'
with tmpdir.as_cwd():
lock = lk.Lock('./test', default_timeout=1e-9, desc='test')
installer.locks[pkg_id] = ('write', lock)
assert lock._writes == 0
installer._release_lock(pkg_id)
out = str(capsys.readouterr()[1])
msg = 'exception when releasing write lock for {0}'.format(pkg_id)
assert msg in out
def test_requeue_task(install_mockery, capfd):
"""Test to ensure cover _requeue_task."""
spec, installer = create_installer('a')
task = create_build_task(spec.package)
installer._requeue_task(task)
ids = list(installer.build_tasks)
assert len(ids) == 1
qtask = installer.build_tasks[ids[0]]
assert qtask.status == inst.STATUS_INSTALLING
out = capfd.readouterr()[0]
assert 'Installing a in progress by another process' in out
def test_cleanup_all_tasks(install_mockery, monkeypatch):
"""Test to ensure cover _cleanup_all_tasks."""
def _mktask(pkg):
return create_build_task(pkg)
def _rmtask(installer, pkg_id):
raise RuntimeError('Raise an exception to test except path')
spec, installer = create_installer('a')
# Cover task removal happy path
installer.build_tasks['a'] = _mktask(spec.package)
installer._cleanup_all_tasks()
assert len(installer.build_tasks) == 0
# Cover task removal exception path
installer.build_tasks['a'] = _mktask(spec.package)
monkeypatch.setattr(inst.PackageInstaller, '_remove_task', _rmtask)
installer._cleanup_all_tasks()
assert len(installer.build_tasks) == 1
def test_cleanup_failed(install_mockery, tmpdir, monkeypatch, capsys):
"""Test to increase coverage of _cleanup_failed."""
msg = 'Fake release_write exception'
def _raise_except(lock):
raise RuntimeError(msg)
spec, installer = create_installer('trivial-install-test-package')
monkeypatch.setattr(lk.Lock, 'release_write', _raise_except)
pkg_id = 'test'
with tmpdir.as_cwd():
lock = lk.Lock('./test', default_timeout=1e-9, desc='test')
installer.failed[pkg_id] = lock
installer._cleanup_failed(pkg_id)
out = str(capsys.readouterr()[1])
assert 'exception when removing failure mark' in out
assert msg in out
def test_update_failed_no_mark(install_mockery):
"""Test of _update_failed sans mark and dependent build tasks."""
spec, installer = create_installer('dependent-install')
task = create_build_task(spec.package)
installer._update_failed(task)
assert installer.failed['dependent-install'] is None
def test_install_uninstalled_deps(install_mockery, monkeypatch, capsys):
"""Test install with uninstalled dependencies."""
spec, installer = create_installer('dependent-install')
# Skip the actual installation and any status updates
monkeypatch.setattr(inst.PackageInstaller, '_install_task', _noop)
monkeypatch.setattr(inst.PackageInstaller, '_update_installed', _noop)
monkeypatch.setattr(inst.PackageInstaller, '_update_failed', _noop)
msg = 'Cannot proceed with dependent-install'
with pytest.raises(spack.installer.InstallError, match=msg):
installer.install()
out = str(capsys.readouterr())
assert 'Detected uninstalled dependencies for' in out
def test_install_failed(install_mockery, monkeypatch, capsys):
"""Test install with failed install."""
spec, installer = create_installer('b')
# Make sure the package is identified as failed
monkeypatch.setattr(spack.database.Database, 'prefix_failed', _true)
# Skip the actual installation though it should never get there
monkeypatch.setattr(inst.PackageInstaller, '_install_task', _noop)
msg = 'Installation of b failed'
with pytest.raises(spack.installer.InstallError, match=msg):
installer.install()
out = str(capsys.readouterr())
assert 'Warning: b failed to install' in out
def test_install_lock_failures(install_mockery, monkeypatch, capfd):
"""Cover basic install lock failure handling in a single pass."""
def _requeued(installer, task):
tty.msg('requeued {0}' .format(task.pkg.spec.name))
def _not_locked(installer, lock_type, pkg):
tty.msg('{0} locked {1}' .format(lock_type, pkg.spec.name))
return lock_type, None
spec, installer = create_installer('b')
# Ensure we never acquire a lock
monkeypatch.setattr(inst.PackageInstaller, '_ensure_locked', _not_locked)
# Ensure we don't continually requeue the task
monkeypatch.setattr(inst.PackageInstaller, '_requeue_task', _requeued)
# Skip the actual installation; we should never reach it
monkeypatch.setattr(inst.PackageInstaller, '_install_task', _noop)
installer.install()
out = capfd.readouterr()[0]
expected = ['write locked', 'read locked', 'requeued']
for exp, ln in zip(expected, out.split('\n')):
assert exp in ln
def test_install_lock_installed_requeue(install_mockery, monkeypatch, capfd):
"""Cover basic install handling for installed package."""
def _install(installer, task, **kwargs):
tty.msg('{0} installing'.format(task.pkg.spec.name))
def _not_locked(installer, lock_type, pkg):
tty.msg('{0} locked {1}' .format(lock_type, pkg.spec.name))
return lock_type, None
def _prep(installer, task, keep_prefix, keep_stage, restage):
installer.installed.add('b')
tty.msg('{0} is installed' .format(task.pkg.spec.name))
# also do not allow the package to be locked again
monkeypatch.setattr(inst.PackageInstaller, '_ensure_locked',
_not_locked)
def _requeued(installer, task):
tty.msg('requeued {0}' .format(task.pkg.spec.name))
# Skip the actual installation; we should never reach it
monkeypatch.setattr(inst.PackageInstaller, '_install_task', _install)
# Flag the package as installed
monkeypatch.setattr(inst.PackageInstaller, '_prepare_for_install', _prep)
# Ensure we don't continually requeue the task
monkeypatch.setattr(inst.PackageInstaller, '_requeue_task', _requeued)
spec, installer = create_installer('b')
installer.install()
assert 'b' not in installer.installed
out = capfd.readouterr()[0]
expected = ['is installed', 'read locked', 'requeued']
for exp, ln in zip(expected, out.split('\n')):
assert exp in ln
def test_install_read_locked_requeue(install_mockery, monkeypatch, capfd):
"""Cover basic read lock handling for uninstalled package with requeue."""
orig_fn = inst.PackageInstaller._ensure_locked
def _install(installer, task, **kwargs):
tty.msg('{0} installing'.format(task.pkg.spec.name))
def _read(installer, lock_type, pkg):
tty.msg('{0}->read locked {1}' .format(lock_type, pkg.spec.name))
return orig_fn(installer, 'read', pkg)
def _prep(installer, task, keep_prefix, keep_stage, restage):
tty.msg('preparing {0}' .format(task.pkg.spec.name))
assert task.pkg.spec.name not in installer.installed
def _requeued(installer, task):
tty.msg('requeued {0}' .format(task.pkg.spec.name))
# Force a read lock
monkeypatch.setattr(inst.PackageInstaller, '_ensure_locked', _read)
# Skip the actual installation; we should never reach it
monkeypatch.setattr(inst.PackageInstaller, '_install_task', _install)
# Flag the package as installed
monkeypatch.setattr(inst.PackageInstaller, '_prepare_for_install', _prep)
# Ensure we don't continually requeue the task
monkeypatch.setattr(inst.PackageInstaller, '_requeue_task', _requeued)
spec, installer = create_installer('b')
installer.install()
assert 'b' not in installer.installed
out = capfd.readouterr()[0]
expected = ['write->read locked', 'preparing', 'requeued']
for exp, ln in zip(expected, out.split('\n')):
assert exp in ln
def test_install_dir_exists(install_mockery, monkeypatch, capfd):
"""Cover capture of install directory exists error."""
err = 'Mock directory exists error'
def _install(installer, task, **kwargs):
raise dl.InstallDirectoryAlreadyExistsError(err)
# Skip the actual installation; we should never reach it
monkeypatch.setattr(inst.PackageInstaller, '_install_task', _install)
spec, installer = create_installer('b')
with pytest.raises(dl.InstallDirectoryAlreadyExistsError, match=err):
installer.install()
assert 'b' in installer.installed

View File

@@ -1240,57 +1240,3 @@ def test_lock_in_current_directory(tmpdir):
pass
with lk.WriteTransaction(lock):
pass
def test_attempts_str():
assert lk._attempts_str(0, 0) == ''
assert lk._attempts_str(0.12, 1) == ''
assert lk._attempts_str(12.345, 2) == ' after 12.35s and 2 attempts'
def test_lock_str():
lock = lk.Lock('lockfile')
lockstr = str(lock)
assert 'lockfile[0:0]' in lockstr
assert 'timeout=None' in lockstr
assert '#reads=0, #writes=0' in lockstr
def test_downgrade_write_okay(tmpdir):
"""Test the lock write-to-read downgrade operation."""
with tmpdir.as_cwd():
lock = lk.Lock('lockfile')
lock.acquire_write()
lock.downgrade_write_to_read()
assert lock._reads == 1
assert lock._writes == 0
def test_downgrade_write_fails(tmpdir):
"""Test failing the lock write-to-read downgrade operation."""
with tmpdir.as_cwd():
lock = lk.Lock('lockfile')
lock.acquire_read()
msg = 'Cannot downgrade lock from write to read on file: lockfile'
with pytest.raises(lk.LockDowngradeError, match=msg):
lock.downgrade_write_to_read()
def test_upgrade_read_okay(tmpdir):
"""Test the lock read-to-write upgrade operation."""
with tmpdir.as_cwd():
lock = lk.Lock('lockfile')
lock.acquire_read()
lock.upgrade_read_to_write()
assert lock._reads == 0
assert lock._writes == 1
def test_upgrade_read_fails(tmpdir):
"""Test failing the lock read-to-write upgrade operation."""
with tmpdir.as_cwd():
lock = lk.Lock('lockfile')
lock.acquire_write()
msg = 'Cannot upgrade lock from read to write on file: lockfile'
with pytest.raises(lk.LockUpgradeError, match=msg):
lock.upgrade_read_to_write()

View File
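The lock tests shown above document the upgrade/downgrade protocol on llnl.util.lock.Lock: a write lock can be downgraded to read and a read lock upgraded to write, while the reverse operations raise. A minimal sketch of the happy paths, asserting on the same private counters the tests use:

import llnl.util.lock as lk

lock = lk.Lock('lockfile')
lock.acquire_write()
lock.downgrade_write_to_read()
assert lock._reads == 1 and lock._writes == 0

lock.upgrade_read_to_write()
assert lock._reads == 0 and lock._writes == 1
lock.release_write()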

@@ -98,10 +98,6 @@ def test_all_same_but_archive_hash(self):
assert spec1.package.content_hash(content=content1) != \
spec2.package.content_hash(content=content2)
def test_parse_dynamic_function_call(self):
spec = Spec("hash-test4").concretized()
spec.package.content_hash()
# Below tests target direct imports of spack packages from the
# spack.pkg namespace
def test_import_package(self):

View File

@@ -31,14 +31,6 @@
from spack.relocate import modify_macho_object, macho_get_paths
def has_gpg():
try:
gpg = spack.util.gpg.Gpg.gpg()
except spack.util.gpg.SpackGPGError:
gpg = None
return bool(gpg)
def fake_fetchify(url, pkg):
"""Fake the URL for a package so it downloads from a file."""
fetcher = FetchStrategyComposite()
@@ -46,7 +38,6 @@ def fake_fetchify(url, pkg):
pkg.fetcher = fetcher
@pytest.mark.skipif(not has_gpg(), reason='This test requires gpg')
@pytest.mark.usefixtures('install_mockery', 'mock_gnupghome')
def test_buildcache(mock_archive, tmpdir):
# tweak patchelf to only do a download
@@ -116,6 +107,11 @@ def test_buildcache(mock_archive, tmpdir):
buildcache.buildcache(parser, args)
files = os.listdir(spec.prefix)
assert 'link_to_dummy.txt' in files
assert 'dummy.txt' in files
assert os.path.realpath(
os.path.join(spec.prefix, 'link_to_dummy.txt')
) == os.path.realpath(os.path.join(spec.prefix, 'dummy.txt'))
# create build cache with relative path and signing
args = parser.parse_args(
@@ -133,6 +129,13 @@ def test_buildcache(mock_archive, tmpdir):
args = parser.parse_args(['install', '-f', str(pkghash)])
buildcache.buildcache(parser, args)
files = os.listdir(spec.prefix)
assert 'link_to_dummy.txt' in files
assert 'dummy.txt' in files
assert os.path.realpath(
os.path.join(spec.prefix, 'link_to_dummy.txt')
) == os.path.realpath(os.path.join(spec.prefix, 'dummy.txt'))
else:
# create build cache without signing
args = parser.parse_args(
@@ -149,6 +152,10 @@ def test_buildcache(mock_archive, tmpdir):
files = os.listdir(spec.prefix)
assert 'link_to_dummy.txt' in files
assert 'dummy.txt' in files
assert os.path.realpath(
os.path.join(spec.prefix, 'link_to_dummy.txt')
) == os.path.realpath(os.path.join(spec.prefix, 'dummy.txt'))
# test overwrite install without verification
args = parser.parse_args(['install', '-f', '-u', str(pkghash)])
buildcache.buildcache(parser, args)
@@ -235,7 +242,7 @@ def test_relocate_links(tmpdir):
old_src = os.path.join(old_dir, filename)
os.symlink(old_src, filename)
filenames = [filename]
new_dir = '/opt/rh/devtoolset'
new_dir = '/opt/rh/devtoolset/'
relocate_links(filenames, old_dir, new_dir)
assert os.path.realpath(filename) == os.path.join(new_dir, filename)

View File

@@ -16,7 +16,7 @@
from spack.parse import Token
from spack.spec import Spec
from spack.spec import SpecParseError, RedundantSpecError
from spack.spec import AmbiguousHashError, InvalidHashError, NoSuchHashError
from spack.spec import AmbiguousHashError, InvalidHashError
from spack.spec import DuplicateArchitectureError
from spack.spec import DuplicateDependencyError, DuplicateCompilerSpecError
from spack.spec import SpecFilenameError, NoSuchSpecFileError
@@ -363,9 +363,9 @@ def test_nonexistent_hash(self, database):
hashes = [s._hash for s in specs]
assert no_such_hash not in [h[:len(no_such_hash)] for h in hashes]
self._check_raises(NoSuchHashError, [
'/' + no_such_hash,
'mpileaks /' + no_such_hash])
# self._check_raises(NoSuchHashError, [
# '/' + no_such_hash,
# 'mpileaks /' + no_such_hash])
@pytest.mark.db
def test_redundant_spec(self, database):

View File

@@ -41,23 +41,8 @@ def __init__(self, spec):
self.spec = spec
def is_directive(self, node):
"""Check to determine if the node is a valid directive
Directives are assumed to be represented in the AST as a named function
call expression. This means that they will NOT be represented by a
named function call within a function call expression (e.g., as
callbacks are sometimes represented).
Args:
node (AST): the AST node being checked
Returns:
(bool): ``True`` if the node represents a known directive,
``False`` otherwise
"""
return (isinstance(node, ast.Expr) and
node.value and isinstance(node.value, ast.Call) and
isinstance(node.value.func, ast.Name) and
node.value.func.id in spack.directives.__all__)
def is_spack_attr(self, node):

View File
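The is_directive docstring and body above describe the AST shape a directive takes: a bare, named call at statement level. A small illustration of a node that matches, assuming depends_on appears in spack.directives.__all__:

import ast

node = ast.parse("depends_on('mpich')").body[0]

# The pattern is_directive checks: an Expr wrapping a Call to a Name.
assert isinstance(node, ast.Expr)
assert isinstance(node.value, ast.Call)
assert isinstance(node.value.func, ast.Name)
assert node.value.func.id == 'depends_on'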

@@ -226,7 +226,7 @@ _config_sections() {
_extensions() {
if [[ -z "${SPACK_EXTENSIONS:-}" ]]
then
SPACK_EXTENSIONS="$(spack extensions)"
SPACK_EXTENSIONS="aspell go-bootstrap go icedtea jdk kim-api lua matlab mofem-cephas octave openjdk perl python r ruby rust tcl yorick"
fi
SPACK_COMPREPLY="$SPACK_EXTENSIONS"
}

View File

@@ -212,8 +212,7 @@ _spack_determine_shell() {
# If procfs is present this seems a more reliable
# way to detect the current shell
_sp_exe=$(readlink /proc/$$/exe)
# Shell may contain number, like zsh5 instead of zsh
basename ${_sp_exe} | tr -d '0123456789'
basename ${_sp_exe}
elif [ -n "${BASH:-}" ]; then
echo bash
elif [ -n "${ZSH_NAME:-}" ]; then

View File

@@ -226,7 +226,7 @@ _config_sections() {
_extensions() {
if [[ -z "${SPACK_EXTENSIONS:-}" ]]
then
SPACK_EXTENSIONS="$(spack extensions)"
SPACK_EXTENSIONS="aspell go-bootstrap go icedtea jdk kim-api lua matlab mofem-cephas octave openjdk perl python r ruby rust tcl yorick"
fi
SPACK_COMPREPLY="$SPACK_EXTENSIONS"
}
@@ -945,7 +945,7 @@ _spack_info() {
_spack_install() {
if $list_options
then
SPACK_COMPREPLY="-h --help --only -u --until -j --jobs --overwrite --keep-prefix --keep-stage --dont-restage --use-cache --no-cache --cache-only --no-check-signature --show-log-on-error --source -n --no-checksum -v --verbose --fake --only-concrete -f --file --clean --dirty --test --run-tests --log-format --log-file --help-cdash --cdash-upload-url --cdash-build --cdash-site --cdash-track --cdash-buildstamp -y --yes-to-all"
SPACK_COMPREPLY="-h --help --only -u --until -j --jobs --overwrite --keep-prefix --keep-stage --dont-restage --use-cache --no-cache --cache-only --no-check-signature --show-log-on-error --source -n --no-checksum -v --verbose --fake --only-concrete -f --file --upstream -g --global --clean --dirty --test --run-tests --log-format --log-file --help-cdash --cdash-upload-url --cdash-build --cdash-site --cdash-track --cdash-buildstamp -y --yes-to-all"
else
_all_packages
fi
@@ -1272,7 +1272,7 @@ _spack_pydoc() {
_spack_python() {
if $list_options
then
SPACK_COMPREPLY="-h --help -c -m"
SPACK_COMPREPLY="-h --help -c"
else
SPACK_COMPREPLY=""
fi
@@ -1419,7 +1419,7 @@ _spack_test() {
_spack_uninstall() {
if $list_options
then
SPACK_COMPREPLY="-h --help -f --force -R --dependents -y --yes-to-all -a --all"
SPACK_COMPREPLY="-h --help -f --force -R --dependents -y --yes-to-all -a --all -u --upstream -g --global"
else
_installed_packages
fi

View File

@@ -49,6 +49,4 @@ def build(self, spec, prefix):
pass
def install(self, spec, prefix):
# sanity_check_prefix requires something in the install directory
# Test requires overriding the one provided by `AutotoolsPackage`
mkdirp(prefix.bin)
pass

View File
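The long run of mock-package diffs that follows adds a trivial install method to each test package. A representative sketch of the pattern, assuming Spack's Package base class and the mkdirp helper from llnl.util.filesystem; mkdirp(prefix.bin) is only needed where sanity_check_prefix must find something in the install directory:

from llnl.util.filesystem import mkdirp
from spack.package import Package


class MockPkg(Package):
    homepage = "http://www.example.com"
    url = "http://www.example.com/mock-1.0.tar.gz"

    version('1.0', '0123456789abcdef0123456789abcdef')

    def install(self, spec, prefix):
        # sanity_check_prefix requires something in the install directory
        mkdirp(prefix.bin)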

@@ -13,3 +13,6 @@ class B(Package):
url = "http://www.example.com/b-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
def install(self, spec, prefix):
pass

View File

@@ -59,3 +59,6 @@ class Boost(Package):
description="Build the Boost Graph library")
variant('taggedlayout', default=False,
description="Augment library names with build options")
def install(self, spec, prefix):
pass

View File

@@ -13,3 +13,6 @@ class C(Package):
url = "http://www.example.com/c-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
def install(self, spec, prefix):
pass

View File

@@ -17,3 +17,6 @@ class ConflictingDependent(Package):
version('1.0', '0123456789abcdef0123456789abcdef')
depends_on('dependency-install@:1.0')
def install(self, spec, prefix):
pass

View File

@@ -25,3 +25,6 @@ class DepDiamondPatchMid1(Package):
# single patch file in repo
depends_on('patch', patches='mid1.patch')
def install(self, spec, prefix):
pass

View File

@@ -28,3 +28,6 @@ class DepDiamondPatchMid2(Package):
patch('http://example.com/urlpatch.patch',
sha256='mid21234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234'), # noqa: E501
])
def install(self, spec, prefix):
pass

View File

@@ -27,3 +27,6 @@ class DepDiamondPatchTop(Package):
depends_on('patch', patches='top.patch')
depends_on('dep-diamond-patch-mid1')
depends_on('dep-diamond-patch-mid2')
def install(self, spec, prefix):
pass

View File

@@ -13,3 +13,6 @@ class DevelopTest(Package):
version('develop', git='https://github.com/dummy/repo.git')
version('0.2.15', 'b1190f3d3471685f17cfd1ec1d252ac9')
def install(self, spec, prefix):
pass

View File

@@ -13,3 +13,6 @@ class DevelopTest2(Package):
version('0.2.15.develop', git='https://github.com/dummy/repo.git')
version('0.2.15', 'b1190f3d3471685f17cfd1ec1d252ac9')
def install(self, spec, prefix):
pass

View File

@@ -13,3 +13,6 @@ class DirectMpich(Package):
version('1.0', 'foobarbaz')
depends_on('mpich')
def install(self, spec, prefix):
pass

View File

@@ -12,3 +12,6 @@ class DtDiamondBottom(Package):
url = "http://www.example.com/dt-diamond-bottom-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
def install(self, spec, prefix):
pass

View File

@@ -14,3 +14,6 @@ class DtDiamondLeft(Package):
version('1.0', '0123456789abcdef0123456789abcdef')
depends_on('dt-diamond-bottom', type='build')
def install(self, spec, prefix):
pass

View File

@@ -14,3 +14,6 @@ class DtDiamondRight(Package):
version('1.0', '0123456789abcdef0123456789abcdef')
depends_on('dt-diamond-bottom', type=('build', 'link', 'run'))
def install(self, spec, prefix):
pass

View File

@@ -15,3 +15,6 @@ class DtDiamond(Package):
depends_on('dt-diamond-left')
depends_on('dt-diamond-right')
def install(self, spec, prefix):
pass

View File

@@ -18,3 +18,6 @@ class Dtbuild1(Package):
depends_on('dtbuild2', type='build')
depends_on('dtlink2')
depends_on('dtrun2', type='run')
def install(self, spec, prefix):
pass

View File

@@ -13,3 +13,6 @@ class Dtbuild2(Package):
url = "http://www.example.com/dtbuild2-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
def install(self, spec, prefix):
pass

View File

@@ -13,3 +13,6 @@ class Dtbuild3(Package):
url = "http://www.example.com/dtbuild3-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
def install(self, spec, prefix):
pass

View File

@@ -15,3 +15,6 @@ class Dtlink1(Package):
version('1.0', '0123456789abcdef0123456789abcdef')
depends_on('dtlink3')
def install(self, spec, prefix):
pass

View File

@@ -13,3 +13,6 @@ class Dtlink2(Package):
url = "http://www.example.com/dtlink2-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
def install(self, spec, prefix):
pass

View File

@@ -16,3 +16,6 @@ class Dtlink3(Package):
depends_on('dtbuild2', type='build')
depends_on('dtlink4')
def install(self, spec, prefix):
pass

View File

@@ -13,3 +13,6 @@ class Dtlink4(Package):
url = "http://www.example.com/dtlink4-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
def install(self, spec, prefix):
pass

View File

@@ -13,3 +13,6 @@ class Dtlink5(Package):
url = "http://www.example.com/dtlink5-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
def install(self, spec, prefix):
pass

View File

@@ -16,3 +16,6 @@ class Dtrun1(Package):
depends_on('dtlink5')
depends_on('dtrun3', type='run')
def install(self, spec, prefix):
pass

View File

@@ -13,3 +13,6 @@ class Dtrun2(Package):
url = "http://www.example.com/dtrun2-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
def install(self, spec, prefix):
pass

View File

@@ -15,3 +15,6 @@ class Dtrun3(Package):
version('1.0', '0123456789abcdef0123456789abcdef')
depends_on('dtbuild3', type='build')
def install(self, spec, prefix):
pass

View File

@@ -17,3 +17,6 @@ class Dttop(Package):
depends_on('dtbuild1', type='build')
depends_on('dtlink1')
depends_on('dtrun1', type='run')
def install(self, spec, prefix):
pass

View File

@@ -15,3 +15,6 @@ class Dtuse(Package):
version('1.0', '0123456789abcdef0123456789abcdef')
depends_on('dttop')
def install(self, spec, prefix):
pass

View File

@@ -13,3 +13,6 @@ class E(Package):
url = "http://www.example.com/e-1.0.tar.gz"
version('1.0', '0123456789abcdef0123456789abcdef')
def install(self, spec, prefix):
pass

View File

@@ -13,3 +13,6 @@ class Externalmodule(Package):
version('1.0', '1234567890abcdef1234567890abcdef')
depends_on('externalprereq')
def install(self, spec, prefix):
pass

View File

@@ -11,3 +11,6 @@ class Externalprereq(Package):
url = "http://somewhere.com/prereq-1.0.tar.gz"
version('1.4', 'f1234567890abcdef1234567890abcde')
def install(self, spec, prefix):
pass

View File

@@ -14,3 +14,6 @@ class Externaltool(Package):
version('0.9', '1234567890abcdef1234567890abcdef')
depends_on('externalprereq')
def install(self, spec, prefix):
pass

View File

@@ -16,3 +16,6 @@ class Externalvirtual(Package):
version('2.2', '4567890abcdef1234567890abcdef123')
provides('stuff', when='@1.0:')
def install(self, spec, prefix):
pass

View File

@@ -11,3 +11,6 @@ class Fake(Package):
url = "http://www.fake-spack-example.org/downloads/fake-1.0.tar.gz"
version('1.0', 'foobarbaz')
def install(self, spec, prefix):
pass

View File

@@ -58,11 +58,7 @@ def install(self, spec, prefix):
if 'really-long-if-statement' != 'that-goes-over-the-line-length-limit-and-requires-noqa': # noqa
pass
# sanity_check_prefix requires something in the install directory
mkdirp(prefix.bin)
# '@when' decorated functions are exempt from redefinition errors
@when('@2.0')
def install(self, spec, prefix):
# sanity_check_prefix requires something in the install directory
mkdirp(prefix.bin)
pass

View File

@@ -15,3 +15,6 @@ class GitSvnTopLevel(Package):
svn = 'https://example.com/some/svn/repo'
version('2.0')
def install(self, spec, prefix):
pass

View File

@@ -11,3 +11,6 @@ class GitTest(Package):
homepage = "http://www.git-fetch-example.com"
version('git', git='to-be-filled-in-by-test')
def install(self, spec, prefix):
pass

View File

@@ -12,3 +12,6 @@ class GitTopLevel(Package):
git = 'https://example.com/some/git/repo'
version('1.0')
def install(self, spec, prefix):
pass

View File

@@ -16,3 +16,6 @@ class GitUrlSvnTopLevel(Package):
svn = 'https://example.com/some/svn/repo'
version('2.0')
def install(self, spec, prefix):
pass

View File

@@ -38,3 +38,6 @@ class GitUrlTopLevel(Package):
version('1.2', sha512='abc12', branch='releases/v1.2')
version('1.1', md5='abc11', tag='v1.1')
version('1.0', 'abc11', tag='abc123')
def install(self, spec, prefix):
pass

Some files were not shown because too many files have changed in this diff.