Compare commits


659 Commits

Author SHA1 Message Date
kwryankrattiger
4583161224 Revert "Revert "Revert "gitlab: Add shared PR mirror to places pipelines look for binaries. (#33746)" (#34087)" (#34153)"
This reverts commit efa1dba9e4.
2022-12-02 12:13:44 -06:00
Todd Gamblin
87562042df concretizer: use only attr() for Spec attributes (#31202)
All Spec attributes are now represented as `attr(attribute_name, ... args ...)`, e.g.
`attr(node, "hdf5")` instead of `node("hdf5")`, as we *have* to maintain the `attr()`
form anyway, and it simplifies the encoding to just maintain one form of the Spec
information.
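
A minimal illustration of what the single form means for fact generation (a hypothetical helper, not `asp.py`'s actual code): every Spec attribute is emitted as one `attr(...)` fact, instead of as both `node("hdf5")` and `attr("node", "hdf5")`.

```python
# Hypothetical fact emitter, for illustration only -- not asp.py's API.
def fact(name, *args):
    quoted = ", ".join('"{}"'.format(arg) for arg in args)
    return 'attr("{}", {}).'.format(name, quoted)

print(fact("node", "hdf5"))               # attr("node", "hdf5").
print(fact("version", "hdf5", "1.12.2"))  # attr("version", "hdf5", "1.12.2").
```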

Background
----------

In #20644, we unified the way conditionals are done in the concretizer, but this
introduced a nasty aspect to the encoding: we have to maintain everything we want in
general conditions in two forms: `predicate(...)` and `attr("predicate", ...)`. For
example, here's the start of the table of spec attributes we had to maintain:

```prolog
node(Package)                      :- attr("node", Package).
virtual_node(Virtual)              :- attr("virtual_node", Virtual).
hash(Package, Hash)                :- attr("hash", Package, Hash).
version(Package, Version)          :- attr("version", Package, Version).
...
```

```prolog
attr("node", Package)              :- node(Package).
attr("virtual_node", Virtual)      :- virtual_node(Virtual).
attr("hash", Package, Hash)        :- hash(Package, Hash).
attr("version", Package, Version)  :- version(Package, Version).
...
```

This adds cognitive load to understanding how the concretizer works, as you have to
understand the equivalence between the two forms of spec attributes. It also makes the
general condition logic in #20644 hard to explain, and it's easy to forget to add a new
equivalence to this list when adding new spec attributes (at least two people have been
bitten by this).

Solution
--------

- [x] remove the equivalence list from `concretize.lp`
- [x] simplify `spec_clauses()`, `condition()`, and other functions in `asp.py` that need
      to deal with `Spec` attributes.
- [x] Convert all old-form spec attributes in `concretize.lp` to the `attr()` form
- [x] Simplify `display.lp`, where we also had to maintain a list of spec attributes. Now
      we only need to show `attr/2`, `attr/3`, and `attr/4`.
- [x] Simplify model extraction logic in `asp.py`.

Performance
-----------

This seems to result in a smaller grounded problem (as there are no longer duplicated
`attr("foo", ...)` / `foo(...)` predicates in the program), but it also adds a slight
performance overhead vs. develop. Ultimately, simplifying the encoding will be a win,
particularly for improving error messages.

Notes
-----

This will simplify future node refactors in `concretize.lp` (e.g., not identifying nodes
by package name, which we need for separate build dependencies).

I'm still not entirely used to reading `attr()` notation, but I think it's ultimately
clearer than what we did before. We need more uniform naming, and it's now clear what is
part of a solution. We should probably continue making the encoding of `concretize.lp`
simpler and more self-explanatory. It may make sense to rename `attr` to something like
`node_attr` and to simplify the names of node attributes. It also might make sense to do
something similar for other types of predicates in `concretize.lp`.
2022-12-02 18:56:18 +01:00
Manuela Kuhn
10d10b612a py-keyrings-alt: add 4.2.0 (#34262)
* py-keyrings-alt: add 4.2.0

* Add missing py-jaraco-classes dependency
2022-12-02 09:08:53 -07:00
iarspider
69dd742dc9 Add checksum for py-hatchling 1.8.1 (#34260) 2022-12-02 09:29:38 -06:00
Tamara Dahlgren
18efd817b1 Bugfix: Fetch should not force use of curl to check url existence (#34225)
* Bugfix: Fetch should not force use of curl to check url existence

* Switch type hints from comments to actual hints
2022-12-02 04:50:23 -07:00
Manuela Kuhn
65a5369d6a py-flask: add 2.2.2 and fix dependencies for py-werkzeug and py-markupsafe (#32849)
* py-flask: add 2.2.2, py-werkzeug: add 2.2.2, py-markupsafe: add 2.1.1

* Remove py-dataclasses dependency
2022-12-01 22:56:18 -07:00
iarspider
f66ec00fa9 Herwig3: make njet, vbfnlo dependencies optional... (#33941)
* Herwig3: make njet, vbfnlo dependencies optional...
  also drop openloops dependency when building on PowerPC
* Update package.py
2022-12-01 17:19:46 -08:00
Manuela Kuhn
f63fb2f521 py-twine: add 4.0.1, py-readme-renderer: add 37.3 (#34203)
* py-twine: add 4.0.1

* Remove py-setuptools as run dependency
2022-12-01 12:29:10 -06:00
Annop Wongwathanarat
dfa00f5a8d acfl: add post-installation check by running examples (#34172) 2022-12-01 09:47:59 -07:00
Alec Scott
6602780657 Add py-python-lsp-server and dependencies (#34149)
* Add py-python-lsp-server and dependencies

* Update var/spack/repos/builtin/packages/py-python-lsp-server/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Relax version range constraints on py-python-lsp-jsonrpc and add missing dep

* Add runtime dependency flag to setuptools dependencies

* Remove unused python@3.6: dependency and move setuptools-scm to build dep only

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2022-12-01 09:34:49 -07:00
iarspider
8420c610fa gnuplot: make readline optional (#34179)
* gnuplot: make readline optional
* Update package.py

Co-authored-by: iarspider <iarspider@users.noreply.github.com>
2022-12-01 09:11:14 -07:00
Greg Becker
b139cab687 conditional variant values: allow boolean (#33939) 2022-12-01 08:25:57 +01:00
Adam J. Stewart
99fcc57607 py-scipy: hardcode to use blis.pc (#34171) 2022-11-30 12:29:20 -08:00
Larry Knox
5a394d37b7 Add HDF5 version 1.13.3. (#34165)
* Add HDF5 version 1.13.3.
* Remove maintainers no longer with The HDFGroup.
* Fix indentation.
2022-11-30 11:23:38 -08:00
Loïc Pottier
472074cb7c redis-plus-plus: newer version and added TLS support (#34197)
Signed-off-by: Loïc Pottier <pottier1@llnl.gov>
2022-11-30 11:18:47 -08:00
Adam J. Stewart
45c8d7f457 py-segmentation-models-pytorch: add v0.3.1 (#34214) 2022-11-30 10:58:27 -08:00
Edward Hartnett
dce1f01f1a new w3emc version (#34219)
* updated version of w3emc package
* fixed sha
2022-11-30 10:54:45 -08:00
Sam Grayson
03cc83bc67 Add py-yt 4.x versions (#30418)
* Add py-yt 4.x versions

* Fix spelling

* Add yt dependencies

* Refine cython dependency

* Tweak depends_on for py-yt 4.x

* Apply suggestions from code review

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Fix comments from code review

* Fix formatting

* Fix stuff

* Fix constraints

* Update py-yt to 4.1.2

* Updated packages

* Fix py-tomli checksum

* Remove `expand` from `py-tomli/package.py`

* Respond to Adam's comments

* Update checksums

* Update checksums

* Respond to comments

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2022-11-30 12:03:26 -06:00
eugeneswalker
f452741e3d e4s ci: use 2022-12-01 runner images (#34212) 2022-11-30 09:52:30 -08:00
eugeneswalker
99b68e646d e4s ci: hpx: set max_cpu_count=512 (#33977) 2022-11-30 09:16:57 -08:00
iarspider
f78c8265f4 Add checksum for py-kiwisolver 1.4.4 (#34121) 2022-11-30 09:18:18 -06:00
iarspider
5d3efbba14 Fix recipe for py-onnx-runtime (#34130)
* Fix recipe

* Update package.py

* Update recipe following review

* Update var/spack/repos/builtin/packages/py-onnx-runtime/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* remove unused imports

* Update package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2022-11-30 09:17:22 -06:00
Sajid Ali
7423f52cd3 PMIx: enable python bindings (#34107) 2022-11-30 15:55:24 +01:00
Massimiliano Culpo
43d93f7773 Deduplicate code to propagate module changes across MRO (#34157) 2022-11-30 11:10:42 +01:00
Harmen Stoppels
f8dec3e87f Single pass text replacement (#34180) 2022-11-30 10:21:51 +01:00
Manuela Kuhn
ef06b9db5b py-bidscoin, py-multiecho: add new packages (#34168) 2022-11-29 17:37:50 -07:00
Todd Gamblin
c64c9649be debug: move "nonexistent config path" message to much higher verbosity level (#34201)
We currently report that searched config paths don't exist at debug level 1, which
clutters the output quite a bit:

```console
> spack -d solve --fresh --show asp hdf5 > hdf5.lp
==> [2022-11-29-14:18:21.035133] Skipping nonexistent config path /Users/gamblin2/src/spack/etc/spack/darwin/concretizer.yaml
==> [2022-11-29-14:18:21.035151] Skipping nonexistent config path /Users/gamblin2/.spack/concretizer.yaml
==> [2022-11-29-14:18:21.035169] Skipping nonexistent config path /Users/gamblin2/.spack/darwin/concretizer.yaml
==> [2022-11-29-14:18:21.035238] Reading config from file /Users/gamblin2/src/spack/etc/spack/defaults/repos.yaml
==> [2022-11-29-14:18:21.035996] Skipping nonexistent config path /Users/gamblin2/src/spack/etc/spack/defaults/darwin/repos.yaml
==> [2022-11-29-14:18:21.036021] Skipping nonexistent config path /etc/spack/repos.yaml
==> [2022-11-29-14:18:21.036039] Skipping nonexistent config path /etc/spack/darwin/repos.yaml
==> [2022-11-29-14:18:21.036057] Skipping nonexistent config path /Users/gamblin2/src/spack/etc/spack/repos.yaml
==> [2022-11-29-14:18:21.036072] Skipping nonexistent config path /Users/gamblin2/src/spack/etc/spack/darwin/repos.yaml
==> [2022-11-29-14:18:21.036088] Skipping nonexistent config path /Users/gamblin2/.spack/repos.yaml
==> [2022-11-29-14:18:21.036105] Skipping nonexistent config path /Users/gamblin2/.spack/darwin/repos.yaml
==> [2022-11-29-14:18:21.071828] Reading config from file /Users/gamblin2/src/spack/etc/spack/defaults/config.yaml
==> [2022-11-29-14:18:21.081628] Skipping nonexistent config path /Users/gamblin2/src/spack/etc/spack/defaults/darwin/config.yaml
==> [2022-11-29-14:18:21.081669] Skipping nonexistent config path /etc/spack/config.yaml
==> [2022-11-29-14:18:21.081692] Skipping nonexistent config path /etc/spack/darwin/config.yaml
==> [2022-11-29-14:18:21.081712] Skipping nonexistent config path /Users/gamblin2/src/spack/etc/spack/config.yaml
==> [2022-11-29-14:18:21.081731] Skipping nonexistent config path /Users/gamblin2/src/spack/etc/spack/darwin/config.yaml
==> [2022-11-29-14:18:21.081748] Skipping nonexistent config path /Users/gamblin2/.spack/config.yaml
==> [2022-11-29-14:18:21.081764] Skipping nonexistent config path /Users/gamblin2/.spack/darwin/config.yaml
==> [2022-11-29-14:18:21.134909] Reading config from file /Users/gamblin2/src/spack/etc/spack/defaults/packages.yaml
==> [2022-11-29-14:18:21.148695] Reading config from file /Users/gamblin2/src/spack/etc/spack/defaults/darwin/packages.yaml
==> [2022-11-29-14:18:21.152555] Skipping nonexistent config path /etc/spack/packages.yaml
==> [2022-11-29-14:18:21.152582] Skipping nonexistent config path /etc/spack/darwin/packages.yaml
==> [2022-11-29-14:18:21.152601] Skipping nonexistent config path /Users/gamblin2/src/spack/etc/spack/packages.yaml
==> [2022-11-29-14:18:21.152620] Skipping nonexistent config path /Users/gamblin2/src/spack/etc/spack/darwin/packages.yaml
==> [2022-11-29-14:18:21.152637] Skipping nonexistent config path /Users/gamblin2/.spack/packages.yaml
==> [2022-11-29-14:18:21.152654] Skipping nonexistent config path /Users/gamblin2/.spack/darwin/packages.yaml
==> [2022-11-29-14:18:21.853915] Skipping nonexistent config path /Users/gamblin2/src/spack/etc/spack/defaults/compilers.yaml
==> [2022-11-29-14:18:21.853962] Skipping nonexistent config path /Users/gamblin2/src/spack/etc/spack/defaults/darwin/compilers.yaml
==> [2022-11-29-14:18:21.853987] Skipping nonexistent config path /etc/spack/compilers.yaml
==> [2022-11-29-14:18:21.854007] Skipping nonexistent config path /etc/spack/darwin/compilers.yaml
==> [2022-11-29-14:18:21.854025] Skipping nonexistent config path /Users/gamblin2/src/spack/etc/spack/compilers.yaml
==> [2022-11-29-14:18:21.854043] Skipping nonexistent config path /Users/gamblin2/src/spack/etc/spack/darwin/compilers.yaml
==> [2022-11-29-14:18:21.854060] Skipping nonexistent config path /Users/gamblin2/.spack/compilers.yaml
==> [2022-11-29-14:18:21.854093] Reading config from file /Users/gamblin2/.spack/darwin/compilers.yaml
```

It is very rare that I want to know this much information about config search, so I've
moved this to level 3. Now at level 1, we can see much more clearly what configs were
actually found:

```console
> spack -d solve --fresh --show asp hdf5 > hdf5.lp
==> [2022-11-29-14:19:04.035457] Imported solve from built-in commands
==> [2022-11-29-14:19:04.035818] Imported solve from built-in commands
==> [2022-11-29-14:19:04.037626] Reading config from file /Users/gamblin2/src/spack/etc/spack/defaults/concretizer.yaml
==> [2022-11-29-14:19:04.040033] Reading config from file /Users/gamblin2/src/spack/etc/spack/defaults/repos.yaml
==> [2022-11-29-14:19:04.080852] Reading config from file /Users/gamblin2/src/spack/etc/spack/defaults/config.yaml
==> [2022-11-29-14:19:04.133241] Reading config from file /Users/gamblin2/src/spack/etc/spack/defaults/packages.yaml
==> [2022-11-29-14:19:04.147175] Reading config from file /Users/gamblin2/src/spack/etc/spack/defaults/darwin/packages.yaml
==> [2022-11-29-14:19:05.157896] Reading config from file /Users/gamblin2/.spack/darwin/compilers.yaml
```

You can still get the old messages with `spack -ddd` (to activate debug level 3).
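
The mapping from `-d` flags to levels, sketched with hypothetical names (illustrative only; Spack's actual tty interface may differ):

```python
# Each -d on the command line bumps the active verbosity by one.
DEBUG_LEVEL = 1  # spack -d; -dd would give 2, -ddd would give 3

def debug(message, level=1):
    if level <= DEBUG_LEVEL:
        print("==> " + message)

debug("Reading config from file .../defaults/config.yaml")                # shown at -d
debug("Skipping nonexistent config path /etc/spack/repos.yaml", level=3)  # only at -ddd
```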
2022-11-29 16:17:13 -08:00
Paul R. C. Kent
2c6b52f137 add 15.0.5, 15.0.6 (#34194) 2022-11-29 15:26:13 -08:00
kwryankrattiger
33422acef0 CI: Update Data and Vis SDK Stack (#34009)
* CI: Update Data and Vis SDK Stack

* Update image to match target deployments (E4S)
* Enable all packages
* Test supported variants of ParaView and VisIt

* Sensei: Update Python hint for newer cmake

* Sensei: add Python3 hint
2022-11-29 14:49:55 -07:00
Stephen Sachs
428f635142 icc@2021.6.0 does not support gcc@12 headers (#34191)
Error message:
```
/shared/spack/opt/spack/linux-amzn2-x86_64_v3/gcc-7.3.1/gcc-12.2.0-4tairupdxg2tg2yhvjdlbs7xbd7wudl3/bin/../include/c++/12.2.0/bits/random.h(104): error: expected a declaration
{ __extension__ using type = unsigned __int128; };
^
```

Co-authored-by: Stephen Sachs <stesachs@amazon.com>
2022-11-29 14:36:48 -05:00
kwryankrattiger
c6c74e98ff Dav sdk catalyst (#34010)
* SDK: Add Catalyst 1 and 2 support to the SDK

* LibCatalyst: Remove unused python3 variant from package
2022-11-29 11:28:32 -06:00
Valentin Volkl
d9b438ec76 evtgen: add v02.02.00 (#34187)
* evtgen: add v02.02.00
* format
2022-11-29 09:54:14 -07:00
Cristian Le
c6ee30497c Fix libxc cflag (#34000)
Using standard c99 should not be specific to intel compilers.
2022-11-29 13:45:28 +01:00
Loïc Pottier
1270ae1526 hiredis: updated package definition to use CMake (#33949) 2022-11-29 13:44:50 +01:00
psakievich
d15fead30c Add maintainer to Exawind stack and Trilinos (#34174)
* Add maintainer to Nalu-Wind and Trilinos

* Add to trilinos

* Exawind too

* amr-wind too
2022-11-28 17:23:17 -08:00
Valentin Volkl
23aaaf2d28 genfit: add v02-00-01 (#34159) 2022-11-28 17:09:47 -08:00
Hans Fangohr
56f9c76394 fix typo in path for sanity check (#34117)
- typo breaks install
2022-11-28 16:49:57 -07:00
Sam Grayson
49cda811fc Add new version of snakemake (#34041)
* Add new version of snakemake

* Add myself as a maintainer

* py-retry -> py-reretry

* Added snakemake variants for storage systems

* Updated comments

* Responded to Adam's comments

* Fixed spack style

* Add build/run dependency types
2022-11-28 15:10:10 -07:00
Benjamin Meyers
a97312535a New package: py-statmorph (#34158)
* New package py-statmorph w/ dependencies. Add py-astropy@5.1

* [@spackbot] updating style on behalf of meyersbs

* [py-statmorph,py-astropy,py-pyerfa] minor fixes
2022-11-28 15:46:10 -06:00
Benjamin Meyers
a0180ef741 New package: py-stui (#34156)
* New package py-stui

* [py-stui] add maintainer

* [@spackbot] updating style on behalf of meyersbs

* [py-stui] fix deps
2022-11-28 15:45:40 -06:00
iarspider
d640a573a8 Add checksum for py-rsa 4.9 (#34115)
* Add checksum for py-rsa 4.9

* Update package.py

* Update var/spack/repos/builtin/packages/py-rsa/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2022-11-28 14:10:21 -07:00
Cameron Smith
b3679406d0 omegah: new scorec version, fix cuda flags (#34169) 2022-11-28 13:03:27 -08:00
eugeneswalker
587488882a e4s ci: add hdf5-vol-async; remove expired comments (#34110) 2022-11-28 19:35:42 +00:00
Satish Balay
a17844a367 petsc, py-petsc4py: add 3.18.2 (#34161) 2022-11-28 10:51:48 -08:00
Manuela Kuhn
093a37750c py-bidskit: new package and dcm2niix: add 1.0.20220720 (#34162)
* py-bidskit: new package and dcm2niix: add 1.0.20220720

* Remove list_url
2022-11-28 11:05:49 -07:00
iarspider
173cc7e973 Add checksum for py-traitlets 5.3.0 (#34127) 2022-11-28 11:05:35 -07:00
Greg Becker
451e3ff50b warn about removal of deprecated format strings (#34101)
* warn about removal of deprecated format strings

Co-authored-by: becker33 <becker33@users.noreply.github.com>
2022-11-28 10:03:49 -08:00
Manuela Kuhn
523c4c2b63 py-dcm2bids: add new package (#34163) 2022-11-28 10:46:04 -07:00
Thomas-Ulrich
35e5a916bc easi: update package, rework impalajit (#34032) 2022-11-28 15:54:21 +01:00
Annop Wongwathanarat
1374577659 acfl: provides blas, lapack, and fftw-api@3 (#34154) 2022-11-28 14:25:32 +01:00
Hector Martinez-Seara
4c017403db texinfo: add v7.0 (#34150) 2022-11-28 06:22:13 -07:00
Massimiliano Culpo
fdfda72371 Use a module-like object to propagate changes in the MRO, when setting build env (#34059)
This fixes an issue introduced in #32340, which changed the semantics of the "module"
object passed to the "setup_dependent_package" callback.
2022-11-28 14:18:26 +01:00
Harmen Stoppels
efa1dba9e4 Revert "Revert "gitlab: Add shared PR mirror to places pipelines look for binaries. (#33746)" (#34087)" (#34153)
This reverts commit 63e4406514.
2022-11-28 06:06:03 -07:00
Annop Wongwathanarat
2a7ae2a700 armpl-gcc: add post-installation check by running examples (#34086) 2022-11-28 13:55:51 +01:00
Adam J. Stewart
a1b4e1bccd Add type hints to Prefix class (#34135) 2022-11-28 13:49:57 +01:00
Alec Scott
066ec31604 Add restic v0.14.0 (#34148) 2022-11-28 08:39:37 +01:00
Alec Scott
bb1888dbd4 direnv: add v2.32.2 (#34147) 2022-11-27 22:09:53 +01:00
Michael Kuhn
bc17b6cefb rust: add 1.65.0 (#34124) 2022-11-27 16:03:46 +01:00
Umashankar Sivakumar
46a0cd8e55 SingularityCE: Add conmon+squashfs as dependencies (#33891) 2022-11-27 13:05:49 +01:00
fpruvost
b2ceb23165 pastix: add new version 6.2.2 (#34066) 2022-11-27 00:33:05 +01:00
Harmen Stoppels
2fad966139 neovim: fix deptypes (#34060) 2022-11-27 00:23:24 +01:00
Luke Diorio-Toth
0b01c8c950 py-fastpath: new package (#34142)
* py-fastpath: new package

* Update var/spack/repos/builtin/packages/py-fastpath/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2022-11-26 15:57:45 -07:00
Eric Berquist
613d0b7e8e emacs: add variant treesitter for Emacs 29+ (#34134) 2022-11-26 23:45:00 +01:00
Wouter Deconinck
21c29ee375 prmon: Add missing depends_on py-numpy, py-pandas when +plot (#34123) 2022-11-26 23:43:25 +01:00
Luke Diorio-Toth
e236339e5a aragorn: add newer versions and URL (#34140) 2022-11-26 23:34:59 +01:00
Luke Diorio-Toth
7a03525c35 minced: add v0.4.2 (#34141) 2022-11-26 23:30:56 +01:00
Hans Fangohr
17ca86a309 Octopus: branch for Octopus development is now "main" (#34128)
Historically, development of the Octopus code was done on the "develop" branch
on https://gitlab.com/octopus-code/octopus but now development takes place on
"main" (since Q3 2022).

The suggestion in this PR is to keep the Spack label
`octopus@develop`, as this indicates the development branch on git better
than `octopus@main` would, but of course to use the `main` branch (there is
no choice here - the `develop` branch is not touched anymore). Sticking to
`octopus@develop` as the version label also keeps backwards compatibility.
2022-11-26 15:29:49 -07:00
marcost2
ce71a38703 nvtop: Add 2.0.3, 2.0.4, 3.0.0 and 3.0.1 (#34145)
* And add the option to compile support for Intel GPUs
2022-11-26 21:04:48 +01:00
Satish Balay
12c23f2724 Revert "url_exists related improvements (#34095)" (#34144)
This reverts commit d06fd26c9a.

The problem is that Bitbucket's API forwards download requests to an S3 bucket using a temporary URL. This URL includes a signature for the request, which embeds the HTTP verb. That means only GET requests are allowed, and HEAD requests would fail verification, leading to 403 errors. The same is observed when using `curl -LI ...`.
2022-11-26 17:56:36 +00:00
Carlos Bederián
b8ae0fbbf4 amdblis: symlink libblis-mt to libblis (#32819) 2022-11-26 00:33:29 +01:00
Sebastian Ehlert
6b5c86e0be toml-f: add 0.2.4 and 0.3.1 (#34025) 2022-11-25 16:29:58 -07:00
Victoria Cherkas
1ed1b49c9b metkit, fdb: Add latest versions (#33289) 2022-11-25 23:54:51 +01:00
petertea
4265d5e111 Update TotalView versions and website (#33418)
Co-authored-by: Peter Thompson <thompson81@llnl.gov>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2022-11-25 23:42:56 +01:00
snehring
8c0fb91d4e ffmpeg: adding version 5.1.2 (#33758)
* ffmpeg: add version 5.1.2 and switch to conditional variants

Also: py-torchvision: restrict ffmpeg dependency
2022-11-25 23:37:17 +01:00
Thomas Madlener
567532b9e5 lcio: Add new version and restrictions on c++ standard (#33997) 2022-11-25 23:08:23 +01:00
Adam J. Stewart
47d59e571e py-pandas: add v1.5.2 (#34091) 2022-11-25 23:03:16 +01:00
Adam J. Stewart
93ff19c9b7 py-pytorch-lightning: add v1.8.3 (#34096) 2022-11-25 22:59:39 +01:00
Harmen Stoppels
2167cbf72c Track locks by (dev, ino); close file handlers between tests (#34122) 2022-11-25 10:57:33 +01:00
Sergey Kosukhin
7a5e527cab zlib: fix shared libraries when '%nvhpc' (#34039) 2022-11-24 20:10:18 +01:00
iarspider
a25868594c Add checksum for py-uncertainties 3.1.7 (#34116) 2022-11-24 12:11:58 -06:00
Brent Huisman
dd5263694b Arbor: Yank v0.5 (#34094)
v0.5 does not build due to a change in setting `arch` introduced in v0.5.2; compatibility with the old scheme was not kept in `arbor/package.py`. Since v0.5.2 is compatible with `arbor/package.py`, and is API compatible with v0.5, any users relying on v0.5 can rely on v0.5.2 instead.
2022-11-24 11:19:32 +01:00
Thomas-Ulrich
5fca1c9aff hipsycl: add v0.9.3 (#34052) 2022-11-23 23:14:01 -07:00
Tim Haines
1d7393c281 dyninst: add checksums for all supported versions (#34051) 2022-11-24 03:45:56 +01:00
Jen Herting
f0bc551718 New package: py-kt-legacy (#34104)
* first build of keras-tuner with dataset kt-legacy

* [py-kt-legacy] fixed homepage

* [py-kt-legacy] depends on setuptools

* [py-kt-legacy] fixed import

* [@spackbot] updating style on behalf of qwertos

Co-authored-by: Sid Pendelberry <sid@rit.edu>
Co-authored-by: qwertos <qwertos@users.noreply.github.com>
2022-11-23 19:26:21 -07:00
Adam J. Stewart
46b9a09843 py-cartopy: older versions don't support newer matplotlib (#34109) 2022-11-24 02:44:20 +01:00
Stephen Sachs
c0898565b9 openfoam: Fix openfoam@2012_220610 %intel (add #include <array>) (#34088) 2022-11-24 02:34:40 +01:00
Benjamin Meyers
3018e7f63d Add py-urwid@2.1.2 (#34103)
* Add py-urwid@2.1.2

* [@spackbot] updating style on behalf of meyersbs
2022-11-23 19:25:02 -06:00
Satish Balay
dfa1a42420 petsc, slepc: enable parallel builds (#34024) 2022-11-24 02:16:20 +01:00
Adam J. Stewart
2c8ab85e6a gnuplot: fix build with Apple Clang (#34092) 2022-11-24 01:33:33 +01:00
Nicolas Cornu
b2505aed5c HighFive: bump to 2.6.2 (#34090) 2022-11-24 00:20:36 +01:00
Valentin Volkl
7847d4332e docs: update info on XCode requirements (#34097) 2022-11-24 00:20:09 +01:00
Massimiliano Culpo
70bcbba5eb ecflow: polish recipe (#34043) 2022-11-23 21:38:37 +01:00
Tom Scogland
0182603609 Control Werror by converting to Wno-error (#30882)
Using `-Werror` is good practice for development and testing, but causes us a great
deal of heartburn supporting multiple compiler versions, especially as newer compiler
versions add warnings for released packages.  This PR adds support for suppressing
`-Werror` through spack's compiler wrappers.  There are currently three modes for
the `flags:keep_werror` setting:

* `none`: (default) cancel all `-Werror`, `-Werror=*` and `-Werror-*` flags by
  converting them to `-Wno-error[=]*` flags
* `specific`: preserve explicitly selected warnings as errors, such as
  `-Werror=format-truncation`, but reverse the blanket `-Werror`
* `all`: keeps all `-Werror` flags
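
A minimal sketch of the flag rewriting these modes imply (illustrative, not the compiler wrappers' actual code):

```python
def filter_werror(flags, keep_werror="none"):
    """Rewrite -Werror flags according to the keep_werror setting."""
    if keep_werror == "all":
        return list(flags)
    result = []
    for flag in flags:
        if flag == "-Werror":
            result.append("-Wno-error")  # reverse the blanket -Werror
        elif flag.startswith(("-Werror=", "-Werror-")):
            if keep_werror == "specific":
                result.append(flag)      # keep explicitly selected errors
            else:
                result.append("-Wno-error=" + flag[len("-Werror="):])
        else:
            result.append(flag)
    return result

print(filter_werror(["-O2", "-Werror", "-Werror=format-truncation"]))
# ['-O2', '-Wno-error', '-Wno-error=format-truncation']
```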

These can be set globally in config.yaml, through the config command-line flags, or
overridden by a particular package (some packages use Werror as a proxy for determining
support for other compiler features).  We chose to use this approach because:

1. removing `-Werror` flags entirely broke *many* build systems, especially autoconf
   based ones, because of things like checking `-Werror=feature` and assuming that,
   if that did not error, other flags related to that feature would also work
2. Attempting to preserve `-Werror` in some phases but not others caused similar issues
3. The per-package setting came about because some packages, even with all these
   protections, still use `-Werror` unsafely. Currently there are roughly 3 such packages
   known.
2022-11-23 12:29:17 -08:00
Jen Herting
bf1b846f26 [py-antlr4-python3-runtime] Added versions 4.9.3 and 4.10 (#34102)
* Working updates to py-antlr4-python3-runtime and py-omegaconf

* [py-antlr4-python3-runtime] added version 4.9.3

* [@spackbot] updating style on behalf of qwertos

Co-authored-by: Benjamin Meyers <bsmits@rit.edu>
Co-authored-by: qwertos <qwertos@users.noreply.github.com>
2022-11-23 12:30:11 -07:00
Harmen Stoppels
d06fd26c9a url_exists related improvements (#34095)
For reasons beyond me Python thinks it's a great idea to upgrade HEAD
requests to GET requests when following redirects. So, this PR adds a
better `HTTPRedirectHandler`, and also moves some ad-hoc logic around
for dealing with disabling SSL certs verification.

Also, I'm stumped by the fact that Spack's `url_exists` does not use
HEAD requests at all, so in certain cases Spack awkwardly downloads
something first to see if it can download it, and then downloads it
again because it knows it can download it. So, this PR ensures that both
urllib and botocore use HEAD requests.
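
A sketch of the urllib side of the fix, using only standard-library calls (not Spack's exact code):

```python
import urllib.request

class HeadPreservingRedirectHandler(urllib.request.HTTPRedirectHandler):
    """Keep HEAD requests as HEAD when following redirects."""

    def redirect_request(self, req, fp, code, msg, headers, newurl):
        new_req = super().redirect_request(req, fp, code, msg, headers, newurl)
        if new_req is not None and req.get_method() == "HEAD":
            new_req.method = "HEAD"  # re-assert the original verb
        return new_req

opener = urllib.request.build_opener(HeadPreservingRedirectHandler())
# opener.open(urllib.request.Request(url, method="HEAD")) now stays a
# HEAD request across 301/302 redirects.
```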

Finally, it also removes some things that were there to support currently
unsupported Python versions.

Notice that the HTTP spec [section 10.3.2](https://datatracker.ietf.org/doc/html/rfc2616.html#section-10.3.2) just talks about how to deal
with POST requests on redirect (whether to follow or not):

>   If the 301 status code is received in response to a request other
>   than GET or HEAD, the user agent MUST NOT automatically redirect the
>   request unless it can be confirmed by the user, since this might
>   change the conditions under which the request was issued.

>   Note: When automatically redirecting a POST request after
>   receiving a 301 status code, some existing HTTP/1.0 user agents
>   will erroneously change it into a GET request.

Python has a comment about this; they chose to go with the "erroneous change".
But they then mess up the HEAD request while following the redirect, probably
because they were too busy discussing how to deal with POST.

See https://github.com/python/cpython/pull/99731
2022-11-23 19:26:24 +00:00
kwryankrattiger
5d2c9636ff E4S: Conservatively add ecp-data-vis-sdk (#33621)
* E4S: Conservatively add ecp-data-vis-sdk

* Remove ascent from CUDA SDK stack to stop hanging on Dray

* Adios2: Newer FindPython uses Python_EXECUTABLE
2022-11-23 11:01:30 -08:00
Harmen Stoppels
63e4406514 Revert "gitlab: Add shared PR mirror to places pipelines look for binaries. (#33746)" (#34087)
This reverts commit 5c4137baf1.
2022-11-23 10:41:52 -08:00
Jen Herting
d56380fc07 New package: py-imagecodecs (#34098)
* [libjpeg-turbo] Added version 2.1.3

* [imagecodecs] Added jpeg dependency, commented out conflicting libraries

* [WIP]

* [py-imagecodecs] modifying setup.py to work with spack install locations

* [py-imagecodecs] Removed comments and unneeded dependencies

* [py-imagecodecs] removed some comments and fixed up some flake8 complaints

* [py-imagecodecs] flake8

* [py-imagecodecs] fixed import

* [@spackbot] updating style on behalf of qwertos

Co-authored-by: James A Zilberman <jazrc@rit.edu>
Co-authored-by: qwertos <qwertos@users.noreply.github.com>
2022-11-23 10:39:45 -07:00
Adam J. Stewart
f89cc96b0c libelf: fix build on macOS arm64 (#34036) 2022-11-23 09:27:44 -07:00
iarspider
cf952d41d8 Add checksum for py-cffi 1.15.1 (#34081) 2022-11-23 08:52:21 -07:00
iarspider
5f737c5a71 Add checksum for py-parsimonious 0.10.0 (#34079) 2022-11-23 08:59:30 -06:00
Bernhard Kaindl
a845b1f984 openloops: add check for added Fortran compiler (#34014) 2022-11-23 07:59:13 -07:00
Henning Glawe
b8d059e8f4 berkeleygw: use mpi variant for scalapack (#33948)
The package.py assumed "+mpi" in many places, without checking for the variant.
This problem went undetected, as a hard dependency on scalapack pulled an mpi
implementation into the dependency chain (this is also fixed).

Also, the +mpi variant is used to select between serial and parallel mode:

It has to enable MPI and ScaLAPACK together: they are inter-dependent, and the
compile fails because each checks for the other when only one is enabled.
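
In Spack's package DSL the pattern looks roughly like this (simplified sketch; not the full berkeleygw `package.py`):

```python
from spack.package import *

class Berkeleygw(MakefilePackage):
    # MPI and ScaLAPACK only work together, so gate both on one variant.
    variant("mpi", default=True, description="Build the parallel (MPI + ScaLAPACK) version")
    depends_on("mpi", when="+mpi")
    depends_on("scalapack", when="+mpi")
```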

Co-authored-by: Bernhard Kaindl <contact@bernhard.kaindl.dev>
2022-11-23 07:42:03 -07:00
Cory Bloor
1006c77374 rocm: add minimum versions for amdgpu_targets (#34030) 2022-11-23 14:23:38 +01:00
Alec Scott
38d4fd7711 Add conflicts statements to flux-core to limit builds to linux based platforms (#34068) 2022-11-23 06:08:46 -07:00
Harmen Stoppels
643ce586de libxcrypt: add v4.33 (#34069) 2022-11-23 05:36:47 -07:00
Adam J. Stewart
5b3b0130f2 Build System docs: consistent headers (#34047) 2022-11-23 13:35:55 +01:00
Harmen Stoppels
55c77d659e make/ninja: use the right number of jobs (#34057) 2022-11-23 12:35:15 +01:00
Alexander Knieps
fe1c105161 capnproto: update to v0.10.2 (#34063)
Co-authored-by: Alexander Knieps <a.knieps@fz-juelich.de>
2022-11-23 12:32:10 +01:00
Bernhard Kaindl
09f2b6f5f5 boost: At least with older Xcode, boost can't build with lzma (#34075)
Reference: https://lists.boost.org/Archives/boost/2019/11/247380.php
As reported at the end of #33998 and at the link above, liblzma on older Xcode on
MacOSX 10 misses _lzma_cputhreads, so boost can't use liblzma there.
2022-11-23 03:36:22 -07:00
Matthieu Dorier
73fe21ba41 [mochi-margo] fixed dependency to Argobots (#34082) 2022-11-23 03:36:05 -07:00
Jen Herting
81fb87cedf New package: py-rasterstats (#34070)
* Fixed dependencies for rasterstats

* Fixed flake8 errors

* Fix flake8 error

* Cleans up package desc., adds build dependency on setuptools.

* Fixes flake8 error

Co-authored-by: Bailey Brown <bobits@rit.edu>
2022-11-23 02:55:14 -07:00
Tim Haines
7de39c44b1 dyninst: add v12.2.1 (#34050) 2022-11-23 10:13:00 +01:00
Keita Iwabuchi
c902e27e52 Metall package: add v0.22, v0.23, and v0.23.1 (#34073) 2022-11-23 09:57:32 +01:00
Takahiro Ueda
65b991a4c5 form: new version 4.3.0 (#34078) 2022-11-23 09:51:55 +01:00
Mosè Giordano
65520311a6 ccache: add new versions (#34067) 2022-11-23 01:42:44 -07:00
Umar Arshad
def79731d0 span-lite: Add new versions (#34072) 2022-11-23 02:21:58 +01:00
Michael Kuhn
0fd3c9f451 cmd/checksum: allow adding new versions to package (#24532)
This adds super-lazy maintainer mode to `spack checksum`: Instead of
only printing the new checksums to the terminal, `-a` and
`--add-to-package` will add the new checksums to the `package.py` file
and open it in the editor afterwards for final checks.
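
Conceptually, the splice works like this (a hypothetical helper, not the real implementation):

```python
import re

def add_versions(package_py_text, new_version_lines):
    """Insert new version() directives above the first existing one."""
    lines = package_py_text.splitlines(keepends=True)
    for i, line in enumerate(lines):
        if re.match(r"\s*version\(", line):
            indent = line[: len(line) - len(line.lstrip())]
            new = [indent + v + "\n" for v in new_version_lines]
            return "".join(lines[:i] + new + lines[i:])
    return package_py_text

sample = 'class Zlib(Package):\n    version("1.2.12", sha256="...")\n'
print(add_versions(sample, ['version("1.2.13", sha256="<new sha>")']))
```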
2022-11-22 16:30:49 -08:00
Adam J. Stewart
c5883fffd7 Python: drop EOL versions (#33898)
This PR removes [end of life](https://endoflife.date/python) versions of Python from Spack. Specifically, this includes all versions of Python older than 3.7.

See https://github.com/spack/spack/discussions/31824 for rationale. Deprecated in #32615. And #28003.

For anyone using software that relies on Python 2, you have a few options:

* Upgrade the software to support Python 3. The `3to2` tool may get you most of the way there, although more complex libraries may need manual tweaking.
* Add Python 2 as an [external package](https://spack.readthedocs.io/en/latest/build_settings.html#external-packages). Many Python libraries do not support Python 2, but you may be able to add older versions that did once upon a time.
* Use Spack 0.19. Spack 0.19 is the last release to officially support Python 3.6 and older.
* Create and maintain your own [custom repository](https://spack.readthedocs.io/en/latest/repositories.html). Basically, you would need a package for Python 2 and any other Python 2-specific libraries you need.
2022-11-22 15:02:30 -08:00
Harmen Stoppels
4bf964e6b3 spack uninstall: use topo order (#34053) 2022-11-22 07:22:07 -07:00
Bernhard Kaindl
bcc0fda4e2 berkeleygw: fix build (no change to attribute spec.compiler_flags) (#34019) 2022-11-22 07:13:56 -07:00
iarspider
69987fd323 cpu-features: Fix appending -DBUILD_SHARED_LIBS (#34055)
Also:
* Use the release tarball for v0.7.0 to fix spack warning

Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
2022-11-22 06:10:34 -07:00
Dominic Hofer
9a16234ed4 eckit: add v1.20.2, v1.16.3 (#33200)
Co-authored-by: Bernhard Kaindl <contact@bernhard.kaindl.dev>
2022-11-22 05:45:17 -07:00
Massimiliano Culpo
bd198312c9 Revert "Warn about removal of deprecated format strings (#33829)" (#34056)
This reverts commit 7f9af8d4a0.
2022-11-22 12:35:36 +01:00
Greg Becker
7f9af8d4a0 Warn about removal of deprecated format strings (#33829)
Co-authored-by: becker33 <becker33@users.noreply.github.com>
2022-11-22 10:56:57 +01:00
John W. Parent
793a7bc6a9 Windows: add registry query and SDK/WDK packages (#33021)
* Add a WindowsRegistryView class, which can query for existing
  package installations on Windows. This is particularly important
  because some Windows packages (including those added here)
  do not allow two simultaneous installs, and this can be
  queried in order to provide a clear error message (a sketch of
  such a query follows this list).
* Consolidate external path detection logic for Windows into
  WindowsKitExternalPaths and WindowsCompilerExternalPaths objects.
* Add external-only packages win-sdk and wgl
* Add win-wdk (including external detection) which depends on
  win-sdk
* Replace prior msmpi implementation with a source-based install
  (depends on win-wdk). This install can control the install
  destination (unlike the binary installation).
* Update MSVC compiler to choose vcvars based on win-sdk dependency
* Provide "msbuild" module-level variable to packages during build
* When creating symlinks on Windows, need to explicitly specify when
  a symlink target is a directory
* executables_in_path no-longer defaults to using PATH (this is
  now expected to be taken care of by the caller)
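
A minimal sketch of the kind of registry query the first item describes, using the standard-library `winreg` module (Windows-only; `WindowsRegistryView` itself is Spack-internal, and the subkey below is hypothetical):

```python
import winreg  # standard library, available only on Windows

def display_name(subkey):
    """Look up an installed product's DisplayName in the uninstall hive."""
    path = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall" + "\\" + subkey
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
        value, _type = winreg.QueryValueEx(key, "DisplayName")
        return value

# A missing key raises FileNotFoundError -- exactly the signal an
# external-detection routine needs to report "not installed".
```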
2022-11-22 00:27:42 -08:00
genric
376afd631c py-kubernetes: add version 25.3.0 (#33915) 2022-11-22 05:46:41 +01:00
Harmen Stoppels
e287c6ac4b gnuconfig: bump with 2022-09-17 (#34035) 2022-11-22 05:31:29 +01:00
Sergey Kosukhin
e864744b60 py-fprettify: new version 0.3.7 (#34040) 2022-11-21 21:26:31 -06:00
Andrew W Elble
5b3af53b10 qiskit: updates (#33877) 2022-11-21 21:10:10 -06:00
Harmen Stoppels
44c22a54c9 Spec traversal: add option for topological ordering (#33817)
Spec traversals can now specify a topological ordering. A topologically-
ordered traversal with input specs X1, X2... will

* include all of X1, X2... and their children
* be ordered such that a given node is guaranteed to appear before any
  of its children in the traversal
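
A reverse post-order DFS gives exactly this ordering; sketched here on plain dicts rather than Spack's Spec API:

```python
def topo_order(roots, children):
    """Return nodes so that every node precedes all of its children."""
    seen, post = set(), []

    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for child in children.get(node, ()):
            visit(child)
        post.append(node)  # post-order: children are already emitted

    for root in roots:
        visit(root)
    return list(reversed(post))  # reversed post-order: parents first

deps = {"x1": ["a", "b"], "a": ["c"], "b": ["c"], "c": []}
print(topo_order(["x1"], deps))  # ['x1', 'b', 'a', 'c']
```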

Other notes:

* Input specs can be children of other input specs (this is useful if
  a user specifies a set of specs to uninstall: some of those specs
  might be children of others)
* `direction="parents"` will produce a reversed topological order
  (children always come before parents).
* `cover="edges"` will generate a list of edges L such that (a) input
  edges will always appear before output edges and (b) if you create
  a list with the destination of each edge in L the result is
  topologically ordered
2022-11-21 18:33:35 -08:00
Chris Green
f97f37550a libjpeg-turbo: make build_system settings comprehensive (#34046) 2022-11-22 03:15:32 +01:00
Massimiliano Culpo
0e4ee3d352 Speed-up a few unit-tests (#34044)
* test_suite.py: speed up slow test by using mock packages

* Don't resolve the sha during unit-tests

* Skip long-running test that fails, instead of executing it
2022-11-21 23:50:55 +01:00
Brian Van Essen
05fc800db9 Fixed the rdma-core package to find its external library (#33798) 2022-11-21 15:45:47 -07:00
Dom Heinzeller
2387c116ad ecflow: add v5.8.3, update with changes from JCSDA-EMC fork (#34038) 2022-11-21 11:01:52 -07:00
Scott Wittenburg
6411cbd803 ci: restore ability to reproduce gitlab job failures (#33953) 2022-11-21 10:39:03 -06:00
Harmen Stoppels
8ea366b33f uninstall: fix accidental cubic complexity (#34005)
* uninstall: fix accidental cubic complexity

Currently spack uninstall runs in worst case cubic time complexity
thanks to traversal during traversal during traversal while collecting
the specs to be uninstalled.

Also brings down the number of error messages printed to something
linear in the number of matching specs instead of quadratic.
2022-11-21 16:44:48 +01:00
Drew Whitehouse
9a2fbf373c openvdb: update to v10.0.0 (#33835) 2022-11-21 06:38:28 +01:00
Mosè Giordano
9e1fef8813 texinfo: require also makeinfo executable (#33370)
* texinfo: require also `makeinfo` executable
* texinfo: add versions 6.6, 6.7, 6.8
* texinfo: add `info` and `makeinfo` sanity checks
2022-11-21 05:20:11 +01:00
Alec Scott
f8a6e3ad90 hugo: add v0.106.0 (#34023) 2022-11-21 04:22:32 +01:00
Adam J. Stewart
0706919b09 Python: specify tcl/tk version requirements (#34027) 2022-11-20 19:13:47 -06:00
Wouter Deconinck
b9b93ce272 qt6: new packages (#29555)
* qt6: initial commit of several basic qt6 packages

* Qt6: fix style issues

* [qt6] fix style issues, trailing spaces

* [qt6] rename to qt-* ecosystem; remove imports

* [qt6] rename dependencies; change version strings

* [qt6] list_urls

* [qt6] homepage links

* [qt6] missing closing quotes failed style check

* qt-declarative: use private _versions

Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>

* qt-quick3d, qt-quicktimeline, qt-shadertools: use private _versions

* qt-base: rework feature defines and use run_tests

* qt: new version 6.2.4

* flake8 whitespace before comma

* qt-base: variant opengl when +gui

Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>

* qt6: rebase and apply new black style

* qt6: apply style isort fixes

* qt6: new version 6.3.0 and 6.3.1

* qt6: add 6.3.0 and 6.3.1 to versions list

* qt6: multi-argument join_path

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* qt-base: fix isort

* qt-shadertools: no cmake_args needed

* qt-declarative: imports up front

* qt-quick3d: fix import

* qt-declarative: remove useless cmake_args

* qt-shadertools: imports and join_path fixes

* qt-quick3d: join_path fixes

* qt-declarative: join_path fixes

* Update features based on gui usage

* Update dependencies, cmake args, mac support

* Update features based on linux

* More updates

* qt-base: fix style

* qt-base: archive_files join_path

* qt-base: new version 6.3.2

* qt-{declarative,quick3d,quicktimeline,shadertools}@6.3.2

* qt-base: require libxcb@1.13: and use system xcb_xinput when on linux

Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2022-11-20 20:03:52 -05:00
Pedro Ciambra
87cb9760ce mold: new package for the mold linker (#34017) 2022-11-21 01:32:47 +01:00
Adam J. Stewart
d472e28bfe py-numpy: add v1.23.5 (#34026) 2022-11-21 00:40:37 +01:00
Bernhard Kaindl
dbc81549db krb5: Add new versions 1.19.4 and 1.20.1 (#34021)
Co-authored-by: Bernhard Kaindl <contact@bernhard.kaindl.dev>
2022-11-21 00:39:13 +01:00
Sam Grayson
dc00c4fdae Update dask and related packages (#33925)
* Update dask and related packages

* Update package dependency specs

* Run spack style

* Add new version of locket

* Respond to comments

* Added constraints

* Add version constraints for py-dask+distributed

* Run spack style

* Update var/spack/repos/builtin/packages/py-dask/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Deprecated dask versions

* Deprecated more dask and distributed

* spack style --fix

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2022-11-20 12:43:32 -06:00
Bernhard Kaindl
f1b9da16c8 bash: Update 5.1 to 5.1.16 (#34022) 2022-11-20 18:36:24 +01:00
Adam J. Stewart
93ce943301 py-torch: add note about MPS variant (#34018) 2022-11-19 18:10:06 -07:00
Jen Herting
632b36ab5d New package: py-ahpy (#34008)
* first build of ahpy

* updated to limit python to >4

* added from spack.package import * to >4

Co-authored-by: Sid Pendelberry <sid@rit.edu>
2022-11-19 12:10:14 -07:00
Paul Romano
94c76c5823 OpenMC: add v0.13.2 (#33903)
* openmc: add v0.13.2

* Fix style formatting

* Update Python version dependency

* Update numpy version dependency
2022-11-19 12:22:58 -06:00
Massimiliano Culpo
45b4cedb7e spack find: remove deprecated "--bootstrap" option (#34015) 2022-11-19 16:09:34 +01:00
Erik Schnetter
6d0a8f78b2 libxcrypt: Disable -Werror (#34013) 2022-11-19 06:29:50 -07:00
Chris Green
409cf185ce package_base.py: Fix #34006: test msg needs to be a string (#34007) 2022-11-19 13:02:51 +01:00
iarspider
602984460d Boost: enable lzma and zstd iostreams (#33998) 2022-11-19 12:07:52 +01:00
Olivier Cessenat
62b1d52a1e graphviz: remove 1. cyclic dep when +pangocairo and 2. error with poppler+glib (#32120)
* graphviz: remove the cyclic dep on svg when +pangocairo; fix the failure with poppler+glib
2022-11-19 11:53:09 +01:00
Jim Edwards
790bd175e0 parallelio: update package to use mpi-serial, add extra module info (#33153) 2022-11-19 03:50:01 -07:00
Adam J. Stewart
2f057d729d py-scipy: add v1.9 (#31810) 2022-11-19 11:16:01 +01:00
Jerome Soumagne
a124185090 mercury: add version 2.2.0 (#31966)
add psm, psm2 and hwloc variants

Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
2022-11-19 11:12:52 +01:00
Michael Kuhn
c5235bbe86 sqlite: add 3.40.0 (#33975) 2022-11-18 18:42:03 -07:00
Chris Green
e715901cb2 PackageBase should not define builder legacy attributes (#33942)
* Add a regression test for 33928

* PackageBase should not set `(build|install)_time_test_callbacks`

* Fix audits by preserving the current semantic

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2022-11-18 22:22:51 +01:00
Robert Blake
1db914f567 Updating faiss with new versions (#33983)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2022-11-18 21:43:41 +01:00
Robert Blake
688dae7058 Updating hiredis with new software versions (#33982) 2022-11-18 21:43:20 +01:00
Richard Berger
ddb460ec8d charliecloud: new version 0.30 (#34004) 2022-11-18 11:29:33 -08:00
snehring
703e5fe44a trnascan-se: adding missing build dep (#33978) 2022-11-18 11:22:21 -08:00
Bernhard Kaindl
c601bdf7bf gcc: Ensure matching assembler/binutils on RHEL8 (#33994)
gcc@10: requires newer binutils than RHEL7/8's for guaranteed operation. Therefore, on RHEL7/8, reject ~binutils: you need to add +binutils to be sure the binutils are recent enough.

See this discussion with the OpenBLAS devs for reference:
https://github.com/xianyi/OpenBLAS/issues/3805#issuecomment-1319878852
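
In the package DSL this kind of rule is expressed with a `conflicts` directive, roughly like so (simplified sketch; the exact spec strings in gcc's real `package.py` may differ):

```python
from spack.package import *

class Gcc(AutotoolsPackage):
    conflicts(
        "~binutils",
        when="@10: os=rhel8",
        msg="gcc@10: on RHEL8 needs +binutils for a new-enough assembler",
    )
```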

Co-authored-by: Bernhard Kaindl <contact@bernhard.kaindl.dev>
2022-11-18 10:55:52 -08:00
Satish Balay
778dddc523 pflotran: add "rxn" variant (#33995) 2022-11-18 10:34:27 -08:00
Robert Underwood
acc19ad34f LibPressio support for MGARD (#33999)
Co-authored-by: Robert Underwood <runderwood@anl.gov>
2022-11-18 10:26:50 -08:00
Adam J. Stewart
f4826e1b33 py-mypy: add new versions; add new py-types packages (#34002)
* py-mypy: add new versions
* Add new packages
2022-11-18 10:23:55 -08:00
snehring
05ff7e657c py-cutadapt: adding version 4.1 (#33959)
* py-dnaio: adding version 0.9.1
py-cutadapt: adding version 4.1

* py-cutadapt: remove old python versions

* py-dnaio: remove old python versions

* py-cutadapt: add cython dep
2022-11-18 10:33:57 -07:00
Massimiliano Culpo
839a14c0ba Improve error message for requirements (#33988)
refers #33985
2022-11-18 15:49:46 +01:00
Bernhard Kaindl
9aafbec121 openblas: Fix build on ARM Neoverse with gcc@:9 (no sve2+bf16) (#33968)
Also improve the InstallError message when +fortran but no FC was added.

Co-authored-by: Bernhard Kaindl <contact@bernhard.kaindl.dev>
2022-11-18 02:54:04 -07:00
iarspider
20071e0c04 Update libjpeg-turbo using new multibuildsystem approach (#33971)
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
2022-11-18 01:41:53 -07:00
Adam J. Stewart
51bb2f23a3 py-pytorch-lightning: add v1.8.2 (#33984) 2022-11-17 23:13:55 -07:00
Ross Miller
2060d51bd0 libxml2: make older versions depend on python@:3.9 (#33952)
Add a dependency on python versions less than 3.10 in order to work
around a bug in libxml2's configure script that fails to parse python
version strings with more than one character for the minor version.

The bug is present in v2.10.1, but has been fixed in 2.10.2.
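
The failure mode is the classic one where a version string is sliced instead of parsed (illustrative; the real bug lives in libxml2's configure script):

```python
import sys

print(sys.version[:3])  # "3.9" on Python 3.9, but "3.1" on Python 3.10!

# Reading the structured version info avoids the problem:
print("{}.{}".format(*sys.version_info[:2]))  # "3.10" on Python 3.10
```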

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
2022-11-18 05:16:11 +01:00
Harmen Stoppels
e5af0ccc09 libuv-julia: use static libs for julia (#33980) 2022-11-18 03:09:07 +01:00
Jen Herting
284859e742 [lerc] added version 4.0.0 (#33974)
* [lerc] added version 4.0.0

* [@spackbot] updating style on behalf of qwertos

Co-authored-by: qwertos <qwertos@users.noreply.github.com>
2022-11-17 19:50:58 -06:00
Cory Quammen
37e77f7a15 ParaView: add ParaView-5.11.0 new release (#33972) 2022-11-17 18:06:08 -07:00
Harmen Stoppels
d2432e1ba4 julia: 1.8.3 (#33976) 2022-11-17 16:18:32 -07:00
David
5809ba0e3f ompss-2: new package (#33844) 2022-11-17 15:03:32 -08:00
Robert Underwood
95e294b2e8 fixes for ndzip and sperr (#33882)
* fixes for ndzip
* fix commits in spack
* new and fixed sperr release

Co-authored-by: Robert Underwood <runderwood@anl.gov>
2022-11-17 14:25:48 -08:00
Glenn Johnson
cdaac58488 Spack Bioconductor package updates (#33852)
* add version 1.46.0 to bioconductor package r-a4
* add version 1.46.0 to bioconductor package r-a4base
* add version 1.46.0 to bioconductor package r-a4classif
* add version 1.46.0 to bioconductor package r-a4core
* add version 1.46.0 to bioconductor package r-a4preproc
* add version 1.46.0 to bioconductor package r-a4reporting
* add version 1.52.0 to bioconductor package r-absseq
* add version 1.28.0 to bioconductor package r-acde
* add version 1.76.0 to bioconductor package r-acgh
* add version 2.54.0 to bioconductor package r-acme
* add version 1.68.0 to bioconductor package r-adsplit
* add version 1.70.0 to bioconductor package r-affxparser
* add version 1.76.0 to bioconductor package r-affy
* add version 1.74.0 to bioconductor package r-affycomp
* add version 1.58.0 to bioconductor package r-affycompatible
* add version 1.56.0 to bioconductor package r-affycontam
* add version 1.70.0 to bioconductor package r-affycoretools
* add version 1.46.0 to bioconductor package r-affydata
* add version 1.50.0 to bioconductor package r-affyilm
* add version 1.68.0 to bioconductor package r-affyio
* add version 1.74.0 to bioconductor package r-affyplm
* add version 1.44.0 to bioconductor package r-affyrnadegradation
* add version 1.46.0 to bioconductor package r-agdex
* add version 3.30.0 to bioconductor package r-agilp
* add version 2.48.0 to bioconductor package r-agimicrorna
* add version 1.30.0 to bioconductor package r-aims
* add version 1.30.0 to bioconductor package r-aldex2
* add version 1.36.0 to bioconductor package r-allelicimbalance
* add version 1.24.0 to bioconductor package r-alpine
* add version 2.60.0 to bioconductor package r-altcdfenvs
* add version 2.22.0 to bioconductor package r-anaquin
* add version 1.26.0 to bioconductor package r-aneufinder
* add version 1.26.0 to bioconductor package r-aneufinderdata
* add version 1.70.0 to bioconductor package r-annaffy
* add version 1.76.0 to bioconductor package r-annotate
* add version 1.60.0 to bioconductor package r-annotationdbi
* add version 1.22.0 to bioconductor package r-annotationfilter
* add version 1.40.0 to bioconductor package r-annotationforge
* add version 3.6.0 to bioconductor package r-annotationhub
* add version 3.28.0 to bioconductor package r-aroma-light
* add version 1.30.0 to bioconductor package r-bamsignals
* add version 2.14.0 to bioconductor package r-beachmat
* add version 2.58.0 to bioconductor package r-biobase
* add version 2.6.0 to bioconductor package r-biocfilecache
* add version 0.44.0 to bioconductor package r-biocgenerics
* add version 1.8.0 to bioconductor package r-biocio
* add version 1.16.0 to bioconductor package r-biocneighbors
* add version 1.32.1 to bioconductor package r-biocparallel
* add version 1.14.0 to bioconductor package r-biocsingular
* add version 2.26.0 to bioconductor package r-biocstyle
* add version 3.16.0 to bioconductor package r-biocversion
* add version 2.54.0 to bioconductor package r-biomart
* add version 1.26.0 to bioconductor package r-biomformat
* add version 2.66.0 to bioconductor package r-biostrings
* add version 1.46.0 to bioconductor package r-biovizbase
* add version 1.8.0 to bioconductor package r-bluster
* add version 1.66.1 to bioconductor package r-bsgenome
* add version 1.34.0 to bioconductor package r-bsseq
* add version 1.40.0 to bioconductor package r-bumphunter
* add version 2.64.0 to bioconductor package r-category
* add version 2.28.0 to bioconductor package r-champ
* add version 2.30.0 to bioconductor package r-champdata
* add version 1.48.0 to bioconductor package r-chipseq
* add version 4.6.0 to bioconductor package r-clusterprofiler
* add version 1.34.0 to bioconductor package r-cner
* add version 1.30.0 to bioconductor package r-codex
* add version 2.14.0 to bioconductor package r-complexheatmap
* add version 1.72.0 to bioconductor package r-ctc
* add version 2.26.0 to bioconductor package r-decipher
* add version 0.24.0 to bioconductor package r-delayedarray
* add version 1.20.0 to bioconductor package r-delayedmatrixstats
* add version 1.38.0 to bioconductor package r-deseq2
* add version 1.44.0 to bioconductor package r-dexseq
* add version 1.40.0 to bioconductor package r-dirichletmultinomial
* add version 2.12.0 to bioconductor package r-dmrcate
* add version 1.72.0 to bioconductor package r-dnacopy
* add version 3.24.1 to bioconductor package r-dose
* add version 2.46.0 to bioconductor package r-dss
* add version 3.40.0 to bioconductor package r-edger
* add version 1.18.0 to bioconductor package r-enrichplot
* add version 2.22.0 to bioconductor package r-ensembldb
* add version 1.44.0 to bioconductor package r-exomecopy
* add version 2.6.0 to bioconductor package r-experimenthub
* add version 1.24.0 to bioconductor package r-fgsea
* add version 2.70.0 to bioconductor package r-gcrma
* add version 1.34.0 to bioconductor package r-gdsfmt
* add version 1.80.0 to bioconductor package r-genefilter
* add version 1.34.0 to bioconductor package r-genelendatabase
* add version 1.70.0 to bioconductor package r-genemeta
* add version 1.76.0 to bioconductor package r-geneplotter
* add version 1.20.0 to bioconductor package r-genie3
* add version 1.34.3 to bioconductor package r-genomeinfodb
* update r-genomeinfodbdata
* add version 1.34.0 to bioconductor package r-genomicalignments
* add version 1.50.2 to bioconductor package r-genomicfeatures
* add version 1.50.1 to bioconductor package r-genomicranges
* add version 2.66.0 to bioconductor package r-geoquery
* add version 1.46.0 to bioconductor package r-ggbio
* add version 3.6.2 to bioconductor package r-ggtree
* add version 2.8.0 to bioconductor package r-glimma
* add version 1.10.0 to bioconductor package r-glmgampoi
* add version 5.52.0 to bioconductor package r-globaltest
* update r-go-db
* add version 1.18.0 to bioconductor package r-gofuncr
* add version 2.24.0 to bioconductor package r-gosemsim
* add version 1.50.0 to bioconductor package r-goseq
* add version 2.64.0 to bioconductor package r-gostats
* add version 1.76.0 to bioconductor package r-graph
* add version 1.60.0 to bioconductor package r-gseabase
* add version 1.30.0 to bioconductor package r-gtrellis
* add version 1.42.0 to bioconductor package r-gviz
* add version 1.26.0 to bioconductor package r-hdf5array
* add version 1.70.0 to bioconductor package r-hypergraph
* add version 1.34.0 to bioconductor package r-illumina450probevariants-db
* add version 0.40.0 to bioconductor package r-illuminaio
* add version 1.72.0 to bioconductor package r-impute
* add version 1.36.0 to bioconductor package r-interactivedisplaybase
* add version 2.32.0 to bioconductor package r-iranges
* add version 1.58.0 to bioconductor package r-kegggraph
* add version 1.38.0 to bioconductor package r-keggrest
* add version 3.54.0 to bioconductor package r-limma
* add version 2.50.0 to bioconductor package r-lumi
* add version 1.74.0 to bioconductor package r-makecdfenv
* add version 1.76.0 to bioconductor package r-marray
* add version 1.10.0 to bioconductor package r-matrixgenerics
* add version 1.6.0 to bioconductor package r-metapod
* add version 2.44.0 to bioconductor package r-methylumi
* add version 1.44.0 to bioconductor package r-minfi
* add version 1.32.0 to bioconductor package r-missmethyl
* add version 1.78.0 to bioconductor package r-mlinterfaces
* add version 1.10.0 to bioconductor package r-mscoreutils
* add version 2.24.0 to bioconductor package r-msnbase
* add version 2.54.0 to bioconductor package r-multtest
* add version 1.36.0 to bioconductor package r-mzid
* add version 2.32.0 to bioconductor package r-mzr
* add version 1.60.0 to bioconductor package r-oligoclasses
* update r-org-hs-eg-db
* add version 1.40.0 to bioconductor package r-organismdbi
* add version 1.38.0 to bioconductor package r-pathview
* add version 1.90.0 to bioconductor package r-pcamethods
* update r-pfam-db
* add version 1.42.0 to bioconductor package r-phyloseq
* add version 1.60.0 to bioconductor package r-preprocesscore
* add version 1.30.0 to bioconductor package r-protgenerics
* add version 1.32.0 to bioconductor package r-quantro
* add version 2.30.0 to bioconductor package r-qvalue
* add version 1.74.0 to bioconductor package r-rbgl
* add version 2.38.0 to bioconductor package r-reportingtools
* add version 2.42.0 to bioconductor package r-rgraphviz
* add version 2.42.0 to bioconductor package r-rhdf5
* add version 1.10.0 to bioconductor package r-rhdf5filters
* add version 1.20.0 to bioconductor package r-rhdf5lib
* add version 2.0.0 to bioconductor package r-rhtslib
* add version 1.74.0 to bioconductor package r-roc
* add version 1.26.0 to bioconductor package r-rots
* add version 2.14.0 to bioconductor package r-rsamtools
* add version 1.58.0 to bioconductor package r-rtracklayer
* add version 0.36.0 to bioconductor package r-s4vectors
* add version 1.6.0 to bioconductor package r-scaledmatrix
* add version 1.26.0 to bioconductor package r-scater
* add version 1.12.0 to bioconductor package r-scdblfinder
* add version 1.26.0 to bioconductor package r-scran
* add version 1.8.0 to bioconductor package r-scuttle
* add version 1.64.0 to bioconductor package r-seqlogo
* add version 1.56.0 to bioconductor package r-shortread
* add version 1.72.0 to bioconductor package r-siggenes
* add version 1.20.0 to bioconductor package r-singlecellexperiment
* add version 1.32.0 to bioconductor package r-snprelate
* add version 1.48.0 to bioconductor package r-snpstats
* add version 2.34.0 to bioconductor package r-somaticsignatures
* add version 1.10.0 to bioconductor package r-sparsematrixstats
* add version 1.38.0 to bioconductor package r-spem
* add version 1.36.0 to bioconductor package r-sseq
* add version 1.28.0 to bioconductor package r-summarizedexperiment
* add version 3.46.0 to bioconductor package r-sva
* add version 1.36.0 to bioconductor package r-tfbstools
* add version 1.20.0 to bioconductor package r-tmixclust
* add version 2.50.0 to bioconductor package r-topgo
* add version 1.22.0 to bioconductor package r-treeio
* add version 1.26.0 to bioconductor package r-tximport
* add version 1.26.0 to bioconductor package r-tximportdata
* add version 1.44.0 to bioconductor package r-variantannotation
* add version 3.66.0 to bioconductor package r-vsn
* add version 2.4.0 to bioconductor package r-watermelon
* add version 2.44.0 to bioconductor package r-xde
* add version 1.56.0 to bioconductor package r-xmapbridge
* add version 0.38.0 to bioconductor package r-xvector
* add version 1.24.0 to bioconductor package r-yapsa
* add version 1.24.0 to bioconductor package r-yarn
* add version 1.44.0 to bioconductor package r-zlibbioc
* make version resource consistent for r-bsgenome-hsapiens-ucsc-hg19
* make version resource consistent for r-go-db
* make version resource consistent for r-kegg-db
* make version resource consistent for r-org-hs-eg-db
* make version resource consistent for r-pfam-db
* new package: r-ggrastr
* Patches not needed for new version
* new package: r-hdo-db
* new package: r-ggnewscale
* new package: r-gson
* Actually depends on ggplot2@3.4.0:
* Fix formatting of r-hdo-db
* Fix dependency version specifiers
* Clean up duplicate dependency references
2022-11-17 14:04:45 -08:00
Erik Heeren
13389f7eb8 glib: fix URLs (#33919) 2022-11-17 13:58:07 -07:00
Adam J. Stewart
4964633614 py-tensorflow: add patch releases, remove v0.X (#33963) 2022-11-17 14:29:48 -06:00
Jared Popelar
381bedf369 Hdf5 package: build on Windows (#31141)
* Enable hdf5 build (including +mpi) on Windows
* This includes updates to hdf5 dependencies openssl (minor edit) and
  bzip2 (more-extensive edits)
* Add binary-based installation of msmpi (this is currently the only
  supported MPI implementation in Spack for Windows). Note that this
  does not install to the Spack-specified prefix. This implementation
  will be replaced with a source-based implementation

Co-authored-by: John Parent <john.parent@kitware.com>
2022-11-17 10:40:53 -08:00
John W. Parent
6811651a0f Update CMake version to 3.25.0 (#33957)
CMake 3.25.0 has been officially released; update the package version to reflect this.

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
2022-11-17 10:58:38 -07:00
Chris Green
22aada0e20 Waf build system: fix typo in legacy_attributes (#33958)
Fix erroneous duplication of `build_time_test_callbacks` in
`legacy_attributes`: one of the duplicates should be
`install_time_test_callbacks`
2022-11-17 16:54:46 +01:00
Massimiliano Culpo
4a71020cd2 Python: do not set PYTHONHOME during build (#33956)
Setting PYTHONHOME is rarely needed (since each interpreter has
various ways of setting it automatically) and very often it is
difficult to get right manually.

For instance, the change done to set PYTHONHOME to
sysconfig["base_prefix"] broke bootstrapping dev dependencies
of Spack for me, when working inside a virtual environment in Linux.
2022-11-17 15:41:50 +01:00
Adam J. Stewart
294e6f80a0 py-rasterio: add v1.3.4 (#33961) 2022-11-17 15:23:26 +01:00
Harmen Stoppels
cc2d0eade6 docs: fix typo in multiple build systems (#33965) 2022-11-17 15:20:10 +01:00
Harmen Stoppels
f00e411287 relocate.py: small refactor for file_is_relocatable (#33967) 2022-11-17 13:21:27 +01:00
Massimiliano Culpo
da0a6280ac Remove deprecated subcommands from "spack bootstrap" (#33964)
These commands are slated for removal in v0.20
2022-11-17 12:42:57 +01:00
Ashwin Kumar
6ee6844473 Octopus: Separate serial and MPI dependencies (#33836)
* make mpi a variant instead of a hard dependency
* separate serial and MPI dependencies
* set configure args depending on the serial or mpi variant (see the sketch below)
* reformat with black
2022-11-17 04:34:03 -07:00
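A minimal sketch of the variant-instead-of-dependency pattern this commit describes, in Spack's package DSL (the configure flag names and recipe details here are assumptions, not the actual Octopus package; it only runs inside a Spack repo):

```python
from spack.package import *


class Octopus(AutotoolsPackage):
    """Sketch: mpi as a variant rather than an unconditional dependency."""

    variant("mpi", default=True, description="Build with MPI support")
    # The MPI dependency is now conditional on the variant:
    depends_on("mpi", when="+mpi")

    def configure_args(self):
        # Configure flags follow the variant (flag names are assumptions).
        if self.spec.satisfies("+mpi"):
            return ["--enable-mpi"]
        return ["--disable-mpi"]
```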
Adam J. Stewart
69822b0d82 py-torchmetrics: add v0.10.3 (#33962) 2022-11-17 11:18:57 +01:00
Michael Kuhn
93eecae0c3 Add sgid notice when running on AFS (#30247) 2022-11-17 09:17:41 +01:00
Massimiliano Culpo
61f5d85525 Python: fix bug detection, trying to access self (#33955)
Typo introduced in #33847
2022-11-17 08:38:49 +01:00
Ben Boeckel
a90e86de75 zlib, openssl: return to http URLs (#33324)
In #3113, `https` was removed to ensure that `curl` can be bootstrapped
without SSL being present. This was lost in #25672, which aimed to use
`https` where possible.

Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
2022-11-16 22:25:55 -07:00
Stephen Sachs
7247a493ab [xz] icc does not support attribute __symver__ (#33839)
xz has added the attribute `__symver__` for compilers that identify as GCC>=10.0 via
__GNUC__. Intel's `icc` sets __GNUC__ but currently does not support this
attribute:
https://community.intel.com/t5/Intel-C-Compiler/symver-not-supported/m-p/1429028/emcs_t/S2h8ZW1haWx8dG9waWNfc3Vic2NyaXB0aW9ufExBQVRWMjIyUFFZTlZTfDE0MjkwMjh8U1VCU0NSSVBUSU9OU3xoSw#M40459
2022-11-16 20:06:41 -07:00
Sergey Kosukhin
cd8ec60ae9 serialbox: update patch to handle UTF-8 conversion errors (#33317) 2022-11-16 20:06:28 -07:00
Manuela Kuhn
fe597dfb0c py-jupyter-server: add 1.21.0 (#33310) 2022-11-16 20:02:20 -07:00
Brian Vanderwende
c721aab006 lib/spack/spack/store.py: Fix #28170 for padding relocation (#33122) 2022-11-17 03:56:00 +01:00
Manuela Kuhn
6a08e9ed08 py-jupyterlab: add 3.4.8 (#33308) 2022-11-16 19:53:59 -07:00
Christian Kniep
7637efb363 blast-plus: update to 2.13.0 and add cpio as build dependency (#33187) 2022-11-17 03:42:25 +01:00
Sebastian Grabowski
b31f1b0353 ninja: Set min. required version of re2c dependency, fixes #33127 (#33128)
Reviewed-by: haampie, bernhardkaindl
2022-11-17 03:28:06 +01:00
Jonathon Anderson
497682260f gettext: On ppc64le, for older versions, use system cdefs.h (#33411)
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
2022-11-17 02:55:02 +01:00
Massimiliano Culpo
e47beceb8a Clean unit-test workflow file (#33945)
Delete statements related to Python 2.7, and avoid installing
patchelf since now we can bootstrap it.
2022-11-16 23:28:49 +01:00
chenwany
067976f4b8 updated v2.11.8 (#33944) 2022-11-16 13:04:15 -08:00
Greg Becker
1263b5c444 initial implementation of slingshot detection (#33793) 2022-11-16 13:01:37 -08:00
psakievich
90f0a8eacc Upstreams: add canonicalize path (#33946) 2022-11-16 14:37:32 -06:00
John W. Parent
61a7420c94 Windows bootstrapping: remove unneeded call to add dll to PATH (#33622)
#32942 fixed bootstrapping on Windows by having the core Spack
code explicitly add the Clingo package bin/ directory as a
DLL path.

Since then, #33400 has been merged, which ensures that the Python
module installed by the Spack `clingo` package can find the DLLs
in bin/.

Note that this only works for Spack instances which have been
bootstrapped after #33400: for installations bootstrapped before
then, you will need to run `spack clean -b` (this would only
be needed for Spack instances running on Windows).
2022-11-16 12:04:57 -08:00
Stephen Sachs
39a1f1462b SIP build system: fix "python not defined in builder" (#33906)
Co-authored-by: Stephen Sachs <stesachs@amazon.com>
2022-11-16 13:36:25 -06:00
Wouter Deconinck
6de5d8e68c qt: new version 5.15.6 and 5.15.7 (#33933) 2022-11-16 11:35:01 -07:00
Robert Underwood
b0f2523350 add sz3 and mdz smoke test for inclusion in e4s (#33864)
* add sz3 and mdz smoke test

Co-authored-by: Robert Underwood <runderwood@anl.gov>
2022-11-16 10:05:43 -08:00
snehring
bc8cc39871 py-isal: use external libisal (#33932)
* py-isal: adding some missing build deps

* py-isal: use external libisal
2022-11-16 10:58:15 -07:00
Pat Riehecky
b36a8f4f2e New Package: rarpd (#28686) 2022-11-16 08:29:58 -07:00
Harmen Stoppels
0a952f8b7b docs updates for spack env depfile (#33937) 2022-11-16 15:47:31 +01:00
Matthias Wolf
26a0384171 neovim: fix build dependencies (#33935)
Building neovim@0.8.0 complains (for me) about Lua's lpeg and mpack
packages not being available at build time. Removing the link-only
setting in the dependencies for these two packages fixes the build for
me.
2022-11-16 07:30:08 -07:00
Harmen Stoppels
d18cccf7c5 spack env depfile in Gitlab CI should use install-deps/pkg-version-hash target (#33936) 2022-11-16 14:02:56 +00:00
Kevin Broch
fbe6b4b486 Change code suggestions to output black formatter compliant code (#33931) 2022-11-16 12:08:20 +01:00
Nathalie Furmento
5fe08a5647 starpu: add v1.3.10 (#33934) 2022-11-16 12:04:31 +01:00
Vanessasaurus
ac2fc4f271 Automated deployment to update package flux-core 2022-11-09 (#33780)
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
2022-11-16 11:58:51 +01:00
Adam J. Stewart
93430496e2 Getting Started: Python 2 is no longer supported (#33927) 2022-11-16 08:36:49 +01:00
Michael Kuhn
901b31a7aa docs: fix typo (#33926) 2022-11-15 16:06:12 -08:00
Garrett Morrison
0cec2d3110 zfp: update for version 1.0.0 (#33452)
* zfp: updates and many fixes for version 1.0.0

Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
2022-11-15 16:21:57 -07:00
Satish Balay
40e4884e8b xsdk: add version @0.8.0 (#33794)
- enable sundials+magma+ginkgo
- enable dealii+sundials
- disable dealii~cgal due to build errors
- enable petsc+rocm
- enable slepc+cuda+rocm
- enable strumpack+cuda+slate
- enable slate build with non-gcc compilers
- enable pumi+shared
- enable mfem+shared
- enable ginkgo+mpi
- add hiop
- add exago
  - use exago~ipopt due to mumps~mpi conflict with mpi.h
  - add raja variant [used by exago/hiop]
    ~raja builds exago in pflow-only mode, i.e. exago~hiop~ipopt~python~cuda ^hiop~cuda [default on macOS]

Co-authored-by: Cody Balos <balos1@llnl.gov>
2022-11-15 16:52:58 -06:00
Jean-Luc Fattebert
f18425a51f Update BML versions list in package.py (#33920)
* Update BML versions list in package.py
* Update package.py
2022-11-15 14:08:10 -08:00
Massimiliano Culpo
472893c5c4 Run Python 3.6 unit tests on ubuntu-20.04 (#33918) 2022-11-15 22:33:11 +01:00
SXS Bot
289bbf74f6 spectre: add v2022.11.15 (#33921)
Co-authored-by: sxs-bot <sxs-bot@users.noreply.github.com>
2022-11-15 13:05:02 -08:00
Harmen Stoppels
a0182c069f Show time per phase (#33874) 2022-11-15 21:10:07 +01:00
Erik Schnetter
4ecb6ecaff meson: add new version 0.64.0 (#33880) 2022-11-15 12:42:11 -07:00
Saqib Khan
d5193f73d8 New Package: Prime95/Mprime (#33895)
* New Package: Prime95/Mprime
* Fix trailing whitespaces in prime95
* Fix checksum for prime95

Signed-off-by: saqibkh <saqibkhan@utexas.edu>
2022-11-15 11:39:02 -08:00
Brent Huisman
1aab5bb9f2 Add v0.8 to Arbor (#33916) 2022-11-15 12:57:29 -06:00
Erik Schnetter
8dda4ff60b nsimd: Update Python requirements (#33879)
We need Python 3.0:3.9
2022-11-15 10:05:14 -08:00
iarspider
0811f81a09 thepeg: make rivet dependency optional... (#33912)
* thepeg: make rivet dependency optional...
* add "libs" variant, move compiler flags to flag_handler

Co-authored-by: iarspider <iarspider@users.noreply.github.com>
2022-11-15 09:36:15 -08:00
Harmen Stoppels
af74680405 depfile: improve tab completion (#33773)
This PR allows you to do:

```
spack env create -d .
spack -e . add python
spack -e . concretize
spack -e . env depfile -o Makefile

make in<tab>              # -> install
make install-<tab>        # -> install-deps/
make install-deps/py<tab> # -> install-deps/python-x.y.z-hash
make install/zl<tab>      # -> install/zlib-x.y.z-hash

make SP<tab>              # -> make SPACK
make SPACK_<tab>          # -> make SPACK_INSTALL_FLAGS=
```
2022-11-15 18:03:17 +01:00
Harmen Stoppels
d1715c5fdf Fixup: start the timer before the phase (#33917) 2022-11-15 16:52:43 +01:00
Harmen Stoppels
b245f1ece1 Fix incorrect timer (#33900)
Revamp the timer so we always have a designated begin and end.

Fix a bug where the phase timer was stopped before the phase started,
resulting in incorrect timing reports in timers.json.
2022-11-15 16:33:47 +01:00
Harmen Stoppels
e10c47c53d glib: add missing libelf dep (#33894) 2022-11-15 07:18:10 -07:00
Harmen Stoppels
0697d20fd4 openssh: add libxcrypt (#33892) 2022-11-15 06:58:12 -07:00
Brian Van Essen
fd4f905ce5 External find now searches all dynamic linker paths (#33800)
Add spack.ld_so_conf.host_dynamic_linker_search_paths

Retrieve the current host runtime search paths for shared libraries;
for GNU and musl Linux we try to retrieve the dynamic linker from the
current Python interpreter and then find the corresponding config file
(e.g. ld.so.conf or ld-musl-<arch>.path). Similar can be done for
BSD and others, but this is not implemented yet. The default paths
are always returned. We don't check if the listed directories exist.

Use this in spack external find for libraries.

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2022-11-15 14:48:15 +01:00
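A self-contained sketch of the config-file parsing described above, assuming a glibc-style ld.so.conf with `include` directives (the helper names are illustrative; the real logic lives in `spack.ld_so_conf`):

```python
import glob
import os

DEFAULT_PATHS = ["/lib", "/lib64", "/usr/lib", "/usr/lib64"]


def parse_ld_so_conf(path="/etc/ld.so.conf"):
    """Collect search paths from an ld.so.conf-style file,
    following glibc ``include`` directives recursively."""
    paths = []
    try:
        with open(path) as f:
            lines = f.readlines()
    except OSError:
        return paths  # missing config file: nothing to add
    for line in lines:
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        if line.startswith("include"):
            pattern = line[len("include"):].strip()
            if not os.path.isabs(pattern):
                pattern = os.path.join(os.path.dirname(path), pattern)
            for included in sorted(glob.glob(pattern)):
                paths.extend(parse_ld_so_conf(included))
        else:
            paths.append(line)
    return paths


def host_dynamic_linker_search_paths():
    # Default paths are always returned, per the commit message;
    # we don't check whether the listed directories exist.
    seen, result = set(), []
    for p in parse_ld_so_conf() + DEFAULT_PATHS:
        if p not in seen:
            seen.add(p)
            result.append(p)
    return result
```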
Harmen Stoppels
d36c7b20d2 python: missing libxcrypt dep (#33847)
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2022-11-15 14:36:16 +01:00
Harmen Stoppels
2948248d7a Remove exit(0) (#33896)
These calls cause --backtrace to report backtraces even with exit code 0.
2022-11-15 14:35:44 +01:00
Jonathon Anderson
850c54c3b1 Revert "fix perl libxcrypt.so dep (#33846)" (#33909)
This reverts commit bf1b2a828c, as libxcrypt's configure script requires Perl, leading to a circular dependency.
2022-11-15 14:04:31 +01:00
Harmen Stoppels
90fb16033e gitlab: report load in generate job (#33888) 2022-11-15 13:21:21 +01:00
Matthieu Dorier
13a68d547d mochi-margo: add v0.11 (#33910) 2022-11-15 13:19:18 +01:00
Massimiliano Culpo
857ae5a74b fixup 2022-11-15 12:42:48 +01:00
Massimiliano Culpo
b3124bff7c Stop using six in Spack (#33905)
Since we dropped support for Python 2.7, there's no need
to use `six` anymore. We still need to vendor it until
we update our vendored dependencies.
2022-11-15 10:07:54 +01:00
Scott Wittenburg
5c4137baf1 gitlab: Add shared PR mirror to places pipelines look for binaries. (#33746)
While binaries built for PRs that get merged must still be rebuilt
in develop pipelines, they can be used by other PRs that find they
would otherwise need to rebuild them.  Now that spackbot is
managing copying PR binaries from merged PRs into a shared location,
keeping it pruned to a reasonable size, and making sure the indices
are up to date, spack can use these mirrors as a potential source
of binaries.
2022-11-14 19:37:23 -07:00
Stephen Sachs
a9dcd4c01e [py-pmw-patched] needs setuptools to build (#33902)
Error message:
```
ModuleNotFoundError: No module named 'setuptools'
```

Co-authored-by: Stephen Sachs <stesachs@amazon.com>
2022-11-14 18:11:23 -06:00
Harmen Stoppels
2cd7322b11 swig: needs zlib (#33890) 2022-11-14 13:22:12 -07:00
Terry Cojean
f9e9ecd0c1 Ginkgo 1.5.0 version, new MPI variant, related fixes (#33838)
* Ginkgo 1.5.0 release, new MPI variant

* Fix ROCTHRUST/ROCPRIM issues

* Fix deal.II issue with Ginkgo 1.5.0

* Also fix hipRAND+rocRAND RPATH settings

* Turn off CCACHE for spack builds.

Co-authored-by: Veselin Dobrev <dobrev@llnl.gov>
2022-11-14 12:33:11 -06:00
Veselin Dobrev
d756034161 Alquimia: tweak for building xsdk v0.8.0 with ROCm/HIP (#33881) 2022-11-14 12:15:50 -06:00
psakievich
2460c4fc28 Add $date option to the list of config variables (#33875)
I'm finding I often want the date in my paths and it would be nice if spack had a config variable for this.

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2022-11-14 10:13:30 -08:00
Cory Bloor
6e39efbb9a rocm: add all GFX9, GFX10 and GFX11 amdgpu_targets (#33871)
This change adds all documented AMDGPU processors from GFX9 through GFX11 and sorts the list.
2022-11-14 10:11:22 -08:00
Satish Balay
277e35c3b0 superlu-dist: add version 8.1.2 (#33868) 2022-11-14 10:09:55 -08:00
Harmen Stoppels
bf1b2a828c fix perl libxcrypt.so dep (#33846)
Detected using https://github.com/spack/spack/pull/28109
2022-11-14 18:56:43 +01:00
Dom Heinzeller
2913f8b42b hdf4: fix build on Apple M1 (#33740) 2022-11-14 17:49:34 +01:00
Harmen Stoppels
57f4c922e9 util-linux: fix deps (#33893) 2022-11-14 17:26:21 +01:00
Todd Gamblin
8d82fecce9 Update CHANGELOG.md for v0.19.0
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2022-11-14 08:22:29 -06:00
Todd Gamblin
96126cbf17 Update SECURITY.md for v0.19 2022-11-14 08:22:29 -06:00
Todd Gamblin
6ecb57e91f Bump version to v0.20.0.dev0 2022-11-14 08:22:29 -06:00
Harmen Stoppels
a75af62fe3 Get rid of context for exceptions outside PackageBase (#33887) 2022-11-14 14:11:22 +01:00
Massimiliano Culpo
e4e02dbeae Fix a bug/typo in a config_values.py fixture (#33886) 2022-11-14 05:26:14 -07:00
Massimiliano Culpo
3efa4ee26f Remove support for running with Python 2.7 (#33063)
* Remove CI jobs related to Python 2.7

* Remove Python 2.7 specific code from Spack core

* Remove externals for Python 2 only

* Remove llnl.util.compat
2022-11-14 13:11:28 +01:00
Harmen Stoppels
f4c3d98064 libxcrypt: 4.4.31 (#33885) 2022-11-14 04:30:04 -07:00
Harmen Stoppels
a4cec82841 Speed up traverse unit tests (#33840) 2022-11-14 09:34:14 +01:00
Veselin Dobrev
3812edd0db [plasma] add support for building with 'cray-libsci' (#33869) 2022-11-13 21:47:24 -06:00
Harmen Stoppels
3ea9c8529a tau: checksum (#33873) 2022-11-13 13:06:08 -07:00
Adam J. Stewart
ed28797f83 GDAL: add v3.6.0 (#33856)
* GDAL: add v3.6.0

* Explicitly control BASISU

* More reasonable variant defaults
2022-11-13 13:56:11 -06:00
Adam J. Stewart
eadb6ae774 py-pytorch-lightning: add v1.8.1 (#33854) 2022-11-13 11:45:23 -08:00
Veselin Dobrev
a5d35c3077 [sundials] fix cmake argument generation for '+magma' (#33858)
[dealii] force cmake to accept Scalapack settings from Spack
2022-11-13 09:50:57 -06:00
Todd Gamblin
3d811617e6 hotfix: ensure that schema is compatible with tutorial VM config
We added a hotfix to releases/v0.19 with a feature flag, but the flag
is incompatible with the config schema on `develop`.

- [x] Ensure schema is compatible on develop even though config option is unused.
2022-11-13 09:14:00 -06:00
Massimiliano Culpo
03224e52d2 Speed-up bootstrap and architecture unit tests (#33865)
* Speed-up bootstrap mirror unit test

The unit test doesn't need to concretize, since it checks
only metadata for the mirror.

* architecture.py: use "default_mock_concretization" for slow test
2022-11-13 13:09:22 +01:00
Greg Becker
4ebe57cd64 update tutorial command to newest release branch (#33867) 2022-11-12 13:29:38 -08:00
Greg Becker
343cd04a54 use spack.version_info as source of version truth for spack tutorial command (#33860)
* Use spack.spack_version_info as source of truth

Co-authored-by: Todd Gamblin <gamblin2@llnl.gov>
2022-11-12 11:42:59 -08:00
Morten Kristensen
ed45385b7b py-vermin: add latest version 1.5.1 (#33861) 2022-11-12 10:31:23 -06:00
Cameron Rutherford
8a3b596042 Update ExaGO for 1.5.0 release. (#33853)
* Update ExaGO for 1.5.0 release
2022-11-11 21:58:56 -06:00
Satish Balay
1792327874 magma: add version 2.7.0 (#33814)
* magma: add version 2.7.0

Co-authored-by: Stan Tomov <tomov@icl.utk.edu>
2022-11-11 21:58:02 -06:00
Veselin Dobrev
d0dedda9a9 [mfem] add a patch to fix some issues with building with ROCm (#33810) 2022-11-11 14:42:42 -08:00
Harmen Stoppels
368dde437a libxcrypt: 4.4.30 (#33845)
* libxcrypt: 4.4.30

* libxcrypt.so.2 by default

* add libs prop, since libxcrypt provides libcrypt
2022-11-11 15:22:17 -06:00
Massimiliano Culpo
022a2d2eaf Speed-up unit tests by caching default mock concretization (#33755) 2022-11-11 21:32:40 +01:00
kwryankrattiger
5f8511311c ParaView: Add variant for VisItBridge (#33783)
* ParaView: Add variant for VisItBridge

* ParaView: Add kwryankrattiger as package maintainer
2022-11-11 13:09:43 -07:00
Greg Becker
2d2c591633 ensure view projections for extensions always point to extendee (#33848) 2022-11-11 10:59:23 -07:00
Satish Balay
d49c992b23 xsdk: update maintainers (#33822) 2022-11-11 08:30:28 -06:00
Satish Balay
f1392bbd49 trilinos: add version 13.4.1 (#33533)
And update dependency on superlu-dist (now works with @8)
2022-11-11 09:28:47 -05:00
Harmen Stoppels
c14dc2f56a docs: updates related to extensions (#33837) 2022-11-11 13:24:17 +01:00
Harmen Stoppels
0f54a63dfd remove activate/deactivate support in favor of environments (#29317)
Environments and environment views have taken over the role of `spack activate/deactivate`, and we should deprecate these commands for several reasons:

- Global activation is a really poor idea:
   - Install prefixes should be immutable, since they can have multiple, unrelated dependents; see below
   - Added complexity elsewhere: verification of installations, tarballs for build caches, creation of environment views of packages with unrelated extensions "globally activated"... by removing the feature, it gets easier for people to contribute, and we'd end up with fewer bugs due to edge cases.
- Environments accomplish the same thing for non-global "activation", i.e. `spack view`, but better.

Also we write in the docs:

```
However, Spack global activations have two potential drawbacks:

#. Activated packages that involve compiled C extensions may still
   need their dependencies to be loaded manually.  For example,
   ``spack load openblas`` might be required to make ``py-numpy``
   work.

#. Global activations "break" a core feature of Spack, which is that
   multiple versions of a package can co-exist side-by-side.  For example,
   suppose you wish to run a Python package in two different
   environments but the same basic Python --- one with
   ``py-numpy@1.7`` and one with ``py-numpy@1.8``.  Spack extensions
   will not support this potential debugging use case.
```

Now that environments are established and views can take over the role of activation
non-destructively, we can remove global activation/deactivation.
2022-11-11 00:50:07 -08:00
Greg Becker
f11778bb02 remove deprecated top-level module config (#33828)
* remove deprecated top-level module config per deprecation in 0.18
2022-11-10 23:35:33 -08:00
Brian Van Essen
3437926cde Fixed a bug where the external HIP library is found in a nested directory, even on newer releases of ROCm. (#33772) 2022-11-10 23:32:55 -08:00
Pranith
d25375da55 emacs: Add version 28.2 (#33255) 2022-11-11 04:07:11 +01:00
Yang Zongze
0b302034df global: add --with-ncurses with prefix to configure (#33136) 2022-11-11 03:40:03 +01:00
Seth R. Johnson
b9f69a8dfa go-bootstrap: improve error message for Monterey (#31800)
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
2022-11-11 02:22:37 +01:00
Loïc Pottier
c3e9aeeed0 redis-plus-plus: added initial support (#33803)
* redis-plus-plus: added initial support
* redis-plus-plus: use cmake arg provided by Spack
* redis-plus-plus: oops, fix tiny typo

Signed-off-by: Loïc Pottier <pottier1@llnl.gov>
Signed-off-by: Loïc Pottier <lpottier@arnor>
Co-authored-by: Loïc Pottier <lpottier@arnor>
2022-11-10 17:19:45 -08:00
Greg Becker
277234c044 remove deprecated concretization environment key (#33774) 2022-11-11 00:11:52 +00:00
Harmen Stoppels
0077a25639 Use bfs order in spack find --deps tree (#33782)
* Use bfs order in spack find --deps tree
* fix printing tests
2022-11-10 15:39:07 -08:00
Massimiliano Culpo
6a3e20023e Delete directory with vestigial configuration file (#33825) 2022-11-11 00:10:29 +01:00
MatthewLieber
f92987b11f adding sha for 7.0 release (#33823)
Co-authored-by: Matthew Lieber <lieber.31@osu.edu>
2022-11-10 14:24:05 -08:00
Satish Balay
61f198e8af xsdk: deprecate 0.6.0, and remove older deprecated releases (#33815)
xsdk-examples: deprecate @0.2.0, remove @0.1.0 in sync with older xsdk releases
2022-11-10 14:16:17 -08:00
Glenn Johnson
4d90d663a3 Cran updates (#33819)
* add version 1.7-20 to r-ade4
* add version 1.1-13 to r-adephylo
* add version 0.3-20 to r-adespatial
* add version 1.2-0 to r-afex
* add version 0.8-19 to r-amap
* add version 0.1.8 to r-aplot
* add version 2.0.8 to r-biasedurn
* add version 2.4-4 to r-bio3d
* add version 1.30.19 to r-biocmanager
* add version 1.2-9 to r-brobdingnag
* add version 0.4.1 to r-bslib
* add version 3.7.3 to r-callr
* add version 3.1-1 to r-car
* add version 0.3-62 to r-clue
* add version 1.2.0 to r-colourpicker
* add version 1.8.1 to r-commonmark
* add version 0.4.3 to r-cpp11
* add version 1.14.4 to r-data-table
* add version 1.34 to r-desolve
* add version 2.4.5 to r-devtools
* add version 0.6.30 to r-digest
* add version 0.26 to r-dt
* add version 1.7-12 to r-e1071
* add version 1.8.2 to r-emmeans
* add version 1.1.16 to r-exomedepth
* add version 0.1-8 to r-expint
* add version 0.4.0 to r-fontawesome
* add version 1.5-2 to r-fracdiff
* add version 1.10.0 to r-future-apply
* add version 3.0.1 to r-ggmap
* add version 3.4.0 to r-ggplot2
* add version 2.1.0 to r-ggraph
* add version 0.6.4 to r-ggsignif
* add version 0.6-7 to r-gmp
* add version 2022.10-2 to r-gparotation
* add version 0.8.3 to r-graphlayouts
* add version 1.8.8 to r-grbase
* add version 2.1-0 to r-gstat
* add version 0.18.6 to r-insight
* add version 1.3.1 to r-irkernel
* add version 1.8.3 to r-jsonlite
* add version 1.7.0 to r-lava
* add version 1.1-31 to r-lme4
* add version 5.6.17 to r-lpsolve
* add version 5.5.2.0-17.9 to r-lpsolveapi
* add version 1.2.9 to r-mapproj
* add version 3.4.1 to r-maps
* add version 1.1-5 to r-maptools
* add version 1.3 to r-markdown
* add version 6.0.0 to r-mclust
* add version 4.2-2 to r-memuse
* add version 1.8-41 to r-mgcv
* add version 1.2.5 to r-minqa
* add version 0.3.7 to r-nanotime
* add version 2.4.1.1 to r-nfactors
* add version 3.1-160 to r-nlme
* add version 2.7-1 to r-nmof
* add version 0.60-16 to r-np
* add version 2.0.4 to r-openssl
* add version 4.2.5.1 to r-openxlsx
* add version 0.3-8 to r-pbdzmq
* add version 2.0-3 to r-pcapp
* add version 0.7.0 to r-philentropy
* add version 2.0.3 to r-pkgcache
* add version 1.3.1 to r-pkgload
* add version 1.10-4 to r-polyclip
* add version 3.8.0 to r-processx
* add version 1.7.2 to r-ps
* add version 2.12.1 to r-r-utils
* add version 1.2.4 to r-ragg
* add version 3.7 to r-rainbow
* add version 0.0.20 to r-rcppannoy
* add version 0.3.3.9.3 to r-rcppeigen
* add version 0.3.12 to r-rcppgsl
* add version 1.0.2 to r-recipes
* add version 1.1.10 to r-rmutil
* add version 0.10.24 to r-rmysql
* add version 2.4.8 to r-rnexml
* add version 4.1.19 to r-rpart
* add version 1.7-2 to r-rrcov
* add version 0.8.28 to r-rsconnect
* add version 0.25 to r-servr
* add version 1.7.3 to r-shiny
* add version 3.0-0 to r-spatstat-data
* add version 3.0-3 to r-spatstat-geom
* add version 3.0-1 to r-spatstat-random
* add version 3.0-0 to r-spatstat-sparse
* add version 3.0-1 to r-spatstat-utils
* add version 1.8.0 to r-styler
* add version 3.4.1 to r-sys
* add version 1.5-2 to r-tclust
* add version 0.0.9 to r-tfmpvalue
* add version 1.2.0 to r-tidyselect
* add version 0.10-52 to r-tseries
* add version 4.2.2 to r-v8
* add version 0.5.0 to r-vctrs
* add version 2.6-4 to r-vegan
* add version 0.7.0 to r-wk
* add version 0.34 to r-xfun
* add version 1.0.6 to r-xlconnect
* add version 3.99-0.12 to r-xml
* add version 0.12.2 to r-xts
* add version 1.0-33 to r-yaimpute
* add version 2.3.6 to r-yaml
* add version 2.2.2 to r-zip
* add version 2.2-7 to r-deoptim
* add version 4.3.1 to r-ergm
* add version 0.18 to r-evaluate
* add version 1.29.0 to r-future
* add version 0.0.8 to r-ggfun
* add version 0.9.2 to r-ggrepel
* add version 1.9.0 to r-lubridate
* add version 4.10.1 to r-plotly
* add version 0.2.12 to r-rcppcctz
* add version 1.2 to r-rook
* add version 1.6-1 to r-segmented
* add version 4.2.1 to r-seurat
* add version 4.1.3 to r-seuratobject
* add version 1.0-9 to r-sf
* add version 1.5-1 to r-sp
* add version 1.8.1 to r-styler
* new package: r-timechange
* new package: r-stars
* new package: r-sftime
* new package: r-spatstat-explore

Co-authored-by: glennpj <glennpj@users.noreply.github.com>
2022-11-10 12:06:13 -07:00
Rémi Lacroix
7a7e9eb04f pandoc: add version 2.19.2 (#33811) 2022-11-10 10:47:28 -08:00
Chris White
3ea4b53bf6 add utilities and examples variants to conduit (#33804) 2022-11-10 11:05:24 -07:00
Olivier Cessenat
ad0d908d8d octave: new version 7.3.0 (#33788) 2022-11-10 09:42:00 -08:00
Harmen Stoppels
9a793fe01b gcc: drop target bootstrap flags for aarch64 (#33813)
See https://github.com/spack/spack/issues/31184

GCC bootstrap logic adds `-mcpu` for libatomic (iirc), which conflicts
with the `-march` flag we provide.
2022-11-10 17:42:45 +01:00
Dave Love
6dd3c78924 elpa: Fix build on ppc64le (#33639) 2022-11-10 14:51:17 +01:00
Axel Huebl
5b080d63fb pybind11: v2.10.1 (#33645)
Add the latest release of pybind11:
  https://github.com/pybind/pybind11/releases/tag/v2.10.1
2022-11-10 05:01:57 -07:00
Adam J. Stewart
ea8e3c27a4 py-einops: add v0.6.0 (#33799)
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
2022-11-10 11:54:14 +01:00
iarspider
30ffd6d33e dcap: add variant ~plugins: Disables the build of plugins (#33524) 2022-11-10 11:20:32 +01:00
iarspider
c1aec72f60 cpu-features: Add variant to enable BUILD_SHARED_LIBS=True (#33809)
* Allow building shared libraries for cpu-features
2022-11-10 11:14:52 +01:00
Massimiliano Culpo
cfd0dc6d89 spack location: fix attribute lookup after multiple build systems (#33791)
fixes #33785
2022-11-10 01:53:59 -07:00
Satish Balay
60b3d32072 py-mpi4py: add version 3.1.4 (#33805) 2022-11-09 23:33:56 -07:00
Glenn Johnson
5142ebdd57 udunits: Add libs property to recipe to find libudunits2 (#33764) 2022-11-10 01:43:20 +01:00
Harmen Stoppels
6b782e6d7e ucx: fix int overflow: use ssize_t (#33784) 2022-11-10 01:18:22 +01:00
Saqib Khan
168bced888 New Package: stressapptest (#33736)
Signed-off-by: saqibkh <saqibkhan@utexas.edu>
2022-11-10 01:11:46 +01:00
Saqib Khan
489de38890 New Package: y-cruncher (#33754)
y-cruncher is a program that can compute Pi and
other constants to trillions of digits.

Signed-off-by: saqibkh <saqibkhan@utexas.edu>
2022-11-10 00:50:54 +01:00
Robert Underwood
2a20520cc8 updates and fixes for libpressio (#33789)
* updates and fixes for libpressio

* differentiate between standalone and build tests

* add e4s tags

Co-authored-by: Robert Underwood <runderwood@anl.gov>
2022-11-09 16:22:58 -07:00
Jean Luca Bez
ae6213b193 New package: py-darshan (#33430)
* include py-darshan

* include requested changes

* fix required versions

* fix style

* fix style

* Update package.py

* Update var/spack/repos/builtin/packages/py-darshan/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2022-11-09 16:44:45 -06:00
Chris White
bb1cd430c0 always use the cxx compiler as a host compiler (#33771) 2022-11-09 13:22:43 -08:00
Filippo Spiga
36877abd02 Adding libpfm 4.12.0 (#33779) 2022-11-09 12:54:50 -08:00
snehring
62db008e42 sentieon-genomics: adding version 202112.06 (#33786) 2022-11-09 12:52:57 -08:00
James Willenbring
b10d75b1c6 Update package.py (#33787)
Proposing to add myself as a maintainer of the Trilinos Spack package after a related conversation with @kuberry.
2022-11-09 12:40:33 -08:00
kwryankrattiger
078767946c Boost: Change comment to conflict for MPI/Python (#33767)
Boost 1.64.0 has build errors when building the python and MPI modules. This was previously just a comment in the package.py, which allowed broken specs to concretize. The comments are now expressed as conflicts to prevent this.
2022-11-09 09:05:27 -06:00
snehring
9ca7165ef0 postgresql: fix weird spack message (#33770) 2022-11-09 15:25:22 +01:00
Harmen Stoppels
d1d668a9d5 Revert "fix racy sbang (#33549)" (#33778)
This reverts commit 4d28a64661.
2022-11-09 09:31:27 +00:00
Greg Becker
284c3a3fd8 ensure external PythonPackages have python deps (#33777)
Currently, external `PythonPackage`s cause install failures because the logic in `PythonPackage` assumes that it can ask for `spec["python"]`.  Because we chop off externals' dependencies, an external Python extension may not have a `python` dependency.

This PR resolves the issue by guaranteeing that a `python` node is present in one of two ways:
1. If there is already a `python` node in the DAG, we wire the external up to it.
2. If there is no existing `python` node, we wire up a synthetic external `python` node, and we assume that it has the same prefix as the external.

The assumption in (2) isn't always valid, but it's better than leaving the user with a non-working `PythonPackage`.

The logic here is specific to `python`, but other types of extensions could take advantage of it.  Packages need only define `update_external_dependencies(self)`, and this method will be called on externals after concretization.  This likely needs to be fleshed out in the future so that any added nodes are included in concretization, but for now we only bolt on dependencies post-concretization.

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2022-11-09 08:25:30 +00:00
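A toy model of the two wiring cases described above (deliberately not Spack's real classes, whose APIs differ):

```python
class Spec:
    """Toy stand-in for a concrete spec node."""

    def __init__(self, name, prefix=None):
        self.name, self.prefix, self.deps = name, prefix, {}


def update_external_dependencies(ext, dag):
    """Ensure an external Python extension has a python dependency."""
    if "python" in dag:
        # Case 1: a python node already exists in the DAG; wire up to it.
        ext.deps["python"] = dag["python"]
    else:
        # Case 2: synthesize an external python node, assuming it shares
        # the external extension's prefix (the assumption noted above).
        dag["python"] = Spec("python", prefix=ext.prefix)
        ext.deps["python"] = dag["python"]
```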
Massimiliano Culpo
ec89c47aee Account for patchelf binaries when creating local bootstrap mirror (#33776)
This was overlooked when we added binary patchelf buildcaches
2022-11-08 23:05:03 -08:00
Veselin Dobrev
49114ffff7 MFEM: more updates for v4.5 (#33603)
* [mfem] updates related to building with cuda

* [hypre] tweak to support building with external ROCm/HIP

* [mfem] more tweaks related to building with +rocm

* [mfem] temporary (?) workaround for issue #33684

* [mfem] fix style

* [mfem] fix +shared+miniapps install
2022-11-08 22:56:58 -08:00
iarspider
05fd39477e Add checksum for py-protobuf 4.21.7, protobuf 21.7; remove protobuf and py-protobuf 2.x (#32977)
* Add checksum for py-protobuf 4.21.7, protobuf 21.7

* Update var/spack/repos/builtin/packages/protobuf/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update package.py

* Update var/spack/repos/builtin/packages/protobuf/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update var/spack/repos/builtin/packages/protobuf/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update package.py

* Update package.py

* Delete protoc2.5.0_aarch64.patch

* Update package.py

* Restore but deprecate py-protobuf 3.0.0a/b; deprecate py-tensorflow 0.x

* Fix audit

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2022-11-08 16:10:29 -07:00
Mikael Simberg
e53a19a08d Patch fmt for hipcc/dpcpp (#33733)
* Patch fmt for hipcc/dpcpp
* Add msimberg as fmt maintainer
2022-11-08 12:14:14 -08:00
Xavier Delaruelle
a8470a7efe environment-modules: add version 5.2.0 (#33762) 2022-11-08 13:10:22 -07:00
Wouter Deconinck
0f26d4402e hepmc3: new version 3.2.5 (#33748)
Changelog at https://gitlab.cern.ch/hepmc/HepMC3/-/tags/3.2.5

Maintainer: @vvolkl
2022-11-08 11:54:28 -08:00
Harmen Stoppels
4d28a64661 fix racy sbang (#33549)
Spack currently creates a temporary sbang that is moved "atomically" in place,
but this temporary causes races when multiple processes start installing sbang.

Let's just stick to an idempotent approach. Notice that we only re-install sbang
if Spack updates it (since we do a file compare), and sbang was only touched
18 times in the past 6 years, whereas we hit the sbang tempfile issue
frequently with parallel install on a fresh spack instance in CI.

Also fixes a bug where permissions weren't updated if config changed but
the latest version of the sbang file was already installed.
2022-11-08 11:05:05 -08:00
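A minimal sketch of the idempotent pattern described above (a hypothetical helper, not Spack's actual code): rewrite the file only when its contents differ, and fix permissions unconditionally, which also covers the permissions-only bug mentioned last:

```python
import os


def install_idempotently(path, contents: bytes, mode=0o755):
    """Write contents to path only if they differ; always fix the mode."""
    current = None
    try:
        with open(path, "rb") as f:
            current = f.read()
    except OSError:
        pass  # file missing or unreadable: (re)write it below
    if current != contents:
        with open(path, "wb") as f:
            f.write(contents)
    # Permissions are updated even when the contents already match.
    os.chmod(path, mode)
```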
eugeneswalker
4a5e68816b hypre +rocm: needs explicit rocprim dep (#33745) 2022-11-08 10:50:02 -08:00
Greg Becker
97fe7ad32b use pwd for usernames on unix (#19980) 2022-11-08 10:36:10 -08:00
Harmen Stoppels
052bf6b9df python: 3.11.0 (#33507) 2022-11-08 09:38:45 -08:00
Stephen Sachs
80f5939a94 Install from source if binary cache checksum validation fails (#31696)
* Fix https://github.com/spack/spack/issues/31640

Some packages in the binary cache fail checksum validation. Instead of having to
go back and manually install all failed packages with `--no-cache` option,
requeue those failed packages for installation from source

```shell
$ spack install py-pip
==> Installing py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
==> Fetching https://binaries.spack.io/releases/v0.18/build_cache/linux-amzn2-graviton2-gcc-7.3.1-py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7.spec.json.sig
gpg: Signature made Wed 20 Jul 2022 12:13:43 PM UTC using RSA key ID 3DB0C723
gpg: Good signature from "Spack Project Official Binaries <maintainers@spack.io>"
==> Fetching https://binaries.spack.io/releases/v0.18/build_cache/linux-amzn2-graviton2/gcc-7.3.1/py-pip-21.3.1/linux-amzn2-graviton2-gcc-7.3.1-py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7.spack
==> Extracting py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7 from binary cache
==> Error: Failed to install py-pip due to NoChecksumException: Requeue for manual installation.
==> Installing py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
==> Using cached archive: /shared/spack/var/spack/cache/_source-cache/archive/de/deaf32dcd9ab821e359cd8330786bcd077604b5c5730c0b096eda46f95c24a2d
==> No patches needed for py-pip
==> py-pip: Executing phase: 'install'
==> py-pip: Successfully installed py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
  Fetch: 0.01s.  Build: 2.81s.  Total: 2.82s.
[+] /shared/spack/opt/spack/linux-amzn2-graviton2/gcc-7.3.1/py-pip-21.3.1-s2cx4gqrqkdqhashlinqyzkrvuwkl3x7
```

* Cleanup style

* better wording

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>

* Update lib/spack/spack/installer.py

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>

* changes quotes for style checks

* Update lib/spack/spack/installer.py

Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>

* Addressing @kwryankrattiger comment to use local 'use_cache`

Co-authored-by: Stephen Sachs <stesachs@amazon.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
2022-11-08 09:19:55 -08:00
Dave Love
bca8b52a8d cosma: Add shared option (#33751) 2022-11-08 18:06:58 +01:00
Annop Wongwathanarat
e4c2d1afc6 gromacs: enable linking with armpl-gcc FFT (#33750) 2022-11-08 08:44:03 -07:00
Massimiliano Culpo
89976af732 scons: fix Scons builder after multi build-system refactoring (#33753) 2022-11-08 16:03:15 +01:00
Massimiliano Culpo
a079722b1c r: fix order of execution for Makeconf filtering (#33752)
fixes #33747
2022-11-08 14:58:33 +01:00
Harmen Stoppels
f332ac6d21 More jobs in Gitlab CI (#33688)
Use at most 32 jobs when available.
2022-11-08 12:53:11 +01:00
Massimiliano Culpo
e4218595de Rework unit test to avoid tripping into concretization slowdown (#33749) 2022-11-08 10:56:24 +01:00
Greg Becker
c9561c5a0e intel oneapi classic bootstrapping (#31285)
The `intel` compiler at versions > 20 is provided by the `intel-oneapi-compilers-classic`
package (a thin wrapper around the `intel-oneapi-compilers` package), and the `oneapi`
compiler is provided by the `intel-oneapi-compilers` package. 

Prior to this work, neither of these compilers could be bootstrapped by Spack as part of
an install with `install_missing_compilers: True`.

Changes made to make these two packages bootstrappable:

1. The `intel-oneapi-compilers-classic` package includes a bin directory and symlinks
   to the compiler executables, not just logical pointers in Spack.
2. Spack can look for bootstrapped compilers in directories other than `$prefix/bin`,
   defined on a per-package basis
3. `intel-oneapi-compilers` specifies a non-default search directory for the
   compiler executables.
4. The `spack.compilers` module now can make more advanced associations between
   packages and compilers, not just simple name translations
5. Spack support for lmod hierarchies accounts for differences between package
   names and the associated compiler names for `intel-oneapi-compilers/oneapi`,
   `intel-oneapi-compilers-classic/intel@20:`, `llvm+clang/clang`, and
   `llvm-amdgpu/rocmcc`.

- [x] full end-to-end testing
- [x] add unit tests
2022-11-07 21:50:16 -08:00
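As a rough illustration of change (1) above, populating a bin/ directory with symlinks to the classic compiler executables might look like this (paths and executable names are assumptions, not the package's actual code):

```python
import os


def symlink_classic_compilers(prefix, oneapi_bin):
    """Populate <prefix>/bin with symlinks to the classic compilers."""
    bindir = os.path.join(prefix, "bin")
    os.makedirs(bindir, exist_ok=True)
    for exe in ("icc", "icpc", "ifort"):  # assumed executable names
        link = os.path.join(bindir, exe)
        if not os.path.lexists(link):
            os.symlink(os.path.join(oneapi_bin, exe), link)
```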
Sergey Kosukhin
f099a68e65 mpich: patch @3.4 to fix checking whether the datatype is contiguous (#33328)
Co-authored-by: Thomas Jahns <jahns@dkrz.de>
2022-11-07 22:02:21 -07:00
Adam J. Stewart
54abc7fb7e PyTorch: add v1.13.0 (#33596)
* PyTorch: add v1.13.0
* py-torchaudio: add v0.13.0
* py-torchaudio: add all versions
* py-torchvision: jpeg required for all backends
* py-torchtext: add v0.14.0
* py-torchtext: fix build
* py-torchaudio: fix build
* py-torchtext: update version tag
* Use Spack-built sox
* Explicitly disable sox build
* https -> http
2022-11-07 19:42:35 -08:00
Cédric Chevalier
23eb2dc9d6 Add zstd support for elfutils (#33695)
* Add zstd support for elfutils
Not defining `+zstd` implies the `--without-zstd` flag to configure.
This avoids automatic library detection and thus makes the build depend
only on Spack-installed dependencies.
* Use autotools helper "with_or_without"
* Revert use of with_or_without
Using `with_or_without()` with `variant` keyword does not seem to work.
2022-11-07 20:30:23 -07:00
Peter Scheibel
1a3415619e "spack uninstall": don't modify env (#33711)
"spack install foo" no longer adds package "foo" to the environment
(i.e. to the list of root specs) by default: you must specify "--add".
Likewise "spack uninstall foo" no longer removes package "foo" from
the environment: you must specify --remove. Generally this means
that install/uninstall commands will no longer modify the users list
of root specs (which many users found problematic: they had to
deactivate an environment if they wanted to uninstall a spec without
changing their spack.yaml description).

In more detail: if you have environments e1 and e2, and specs [P, Q, R]
such that P depends on R, Q depends on R, [P, R] are in e1, and [Q, R]
are in e2:

* `spack uninstall --dependents --remove r` in e1: removes R from e1
  (but does not uninstall it) and uninstalls (and removes) P
* `spack uninstall -f --dependents r` in e1: will uninstall P, Q, and
   R (i.e. e2 will have dependent specs uninstalled as a side effect)
* `spack uninstall -f --dependents --remove r` in e1: this uninstalls
   P, Q, and R, and removes [P, R] from e1
* `spack uninstall -f --remove r` in e1: uninstalls R (so it is
  "missing" in both environments) and removes R from e1 (note that e1
  would still install R as a dependency of P, but it would no longer
  be listed as a root spec)
* `spack uninstall --dependents r` in e1: will fail because e2 needs R

Individual unit tests were created for each of these scenarios.
2022-11-08 03:24:51 +00:00
Jordan Galby
84a3d32aa3 Fix missing "*.spack*" files in views (#30980)
All files/dirs containing ".spack" anywhere in their name were ignored when
generating a spack view.

For example, this happened with the 'r' package.
2022-11-08 02:58:19 +00:00
akhursev
69d4637671 2022.3.1 oneAPI release promotion (#33742) 2022-11-07 18:14:23 -07:00
Harmen Stoppels
3693622edf reorder packages.yaml: requirements first, then preferences (#33741)
* reorder packages.yaml: requirements first, then preferences
* expand preferences vs reuse vs requirements
2022-11-07 16:16:11 -08:00
Scott Wittenburg
b3b675157c gitlab ci: Add "script_failure" as a reason for retrying service jobs (#33420)
A network error when cloning the repo for CI somehow gets
categorized by GitLab as a script failure. To make sure we retry
jobs that failed for that reason or a similar one, include
"script_failure" as one of the reasons for retrying service jobs
(which include "no specs to rebuild" jobs, update buildcache
index jobs, and temp storage cleanup jobs).
2022-11-07 16:11:04 -07:00
Tom Scogland
6241cdb27b encode development requirements in pyproject.toml (#32616)
Add a `project` block to the toml config along with development and CI
dependencies and a minimal `build-system` block, doing basically
nothing, so that spack can be bootstrapped to a full development
environment with:

```shell
$ hatch -e dev shell
```

or for a minimal environment without hatch:

```shell
$ python3 -m venv venv
$ source venv/bin/activate
$ python3 -m pip install --upgrade pip
$ python3 -m pip install -e '.[dev]'
```

This means we can re-use the requirements list throughout the workflow
yaml files and otherwise maintain this list in *one place* rather than
several disparate ones.  We may be stuck with a couple more temporarily
to continue supporting python2.7, but aside from that it's fewer places
to get out of sync, and we gain a couple of new bootstrap options.

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2022-11-07 15:00:22 -08:00
dependabot[bot]
28d669cb39 build(deps): bump docker/setup-buildx-action from 2.2.0 to 2.2.1 (#33399)
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 2.2.0 to 2.2.1.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](c74574e6c8...8c0edbc76e)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-11-07 20:49:42 +00:00
dependabot[bot]
c0170a675b build(deps): bump actions/upload-artifact from 3.1.0 to 3.1.1 (#33471)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3.1.0 to 3.1.1.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](3cea537223...83fd05a356)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-11-07 21:39:38 +01:00
Scott Wittenburg
28a77c2821 binary_distribution: Speed up buildcache update-index (#32796)
This change uses the aws cli, if available, to retrieve spec files
from the mirror to a local temp directory, then parallelizes the
reading of those files from disk using multiprocessing.ThreadPool.

If the aws cli is not available, then a ThreadPool is used to fetch
and read the spec files from the mirror.

Using the aws cli results in a ~16x speedup when recreating the binary
mirror index, while just parallelizing the fetching and reading
results in a ~3x speedup.
2022-11-07 21:31:14 +01:00
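The thread-pool fallback path might look roughly like the following sketch (illustrative only; the URL handling and helper names are made up):

```python
from multiprocessing.pool import ThreadPool
from urllib.request import urlopen


def fetch_spec_file(url):
    # Fetch a single spec file from the mirror and return its bytes.
    with urlopen(url) as response:
        return response.read()


def read_all_spec_files(urls, workers=16):
    # Fetch and read spec files concurrently; this is I/O-bound work,
    # so threads help even under the GIL.
    with ThreadPool(workers) as pool:
        return pool.map(fetch_spec_file, urls)
```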
Sergey Kosukhin
01a5788517 eckit: fix underlinking (#33739) 2022-11-07 12:54:36 -07:00
Harmen Stoppels
cc84ab1e92 Remove known issues (#33738) 2022-11-07 19:41:16 +00:00
dependabot[bot]
e0e20e3e79 Bump docker/build-push-action from 3.1.1 to 3.2.0 (#33271)
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 3.1.1 to 3.2.0.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](c84f382811...c56af95754)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-11-07 20:40:06 +01:00
dependabot[bot]
9d08feb63e Bump dorny/paths-filter from 2.10.2 to 2.11.1 (#33270)
Bumps [dorny/paths-filter](https://github.com/dorny/paths-filter) from 2.10.2 to 2.11.1.
- [Release notes](https://github.com/dorny/paths-filter/releases)
- [Changelog](https://github.com/dorny/paths-filter/blob/master/CHANGELOG.md)
- [Commits](b2feaf19c2...4512585405)

---
updated-dependencies:
- dependency-name: dorny/paths-filter
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-11-07 20:38:32 +01:00
dependabot[bot]
8be6378688 Bump docker/setup-qemu-action from 2.0.0 to 2.1.0 (#33269)
Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 2.0.0 to 2.1.0.
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](8b122486ce...e81a89b173)

---
updated-dependencies:
- dependency-name: docker/setup-qemu-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-11-07 20:13:09 +01:00
Greg Becker
ec05543054 Bugfix: Compiler bootstrapping for compilers that are independently present in env (#32228)
The compiler bootstrapping logic currently does not add a task when the compiler package is already in the install task queue. This causes failures when the compiler package is added without the additional metadata telling the task to update the compilers list.

Solution: requeue compilers for bootstrapping when needed, to update `task.compiler` metadata.
2022-11-07 09:38:51 -08:00
Greg Becker
a30b60f9a6 Apply dev specs for dependencies of roots (#30909)
Currently, develop specs that are not roots and are not explicitly listed dependencies 
of the roots are not applied.

- [x] ensure dev specs are applied.

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2022-11-07 09:37:03 -08:00
Adam J. Stewart
8fb8381b6f Rust: don't apply constraints to nightly/beta versions (#33723) 2022-11-07 09:21:46 -08:00
Laura Bellentani
1dcb5d1fa7 quantum-espresso: improve concretization for intel libraries (#33312) 2022-11-07 16:47:07 +01:00
Yang Zongze
96b8240ea6 singularity: add new versions (#33462) 2022-11-07 16:44:09 +01:00
Harmen Stoppels
47df88404a Simplify repeated _add_dependency calls for same package (#33732) 2022-11-07 16:33:18 +01:00
Veselin Dobrev
476e647c94 GLVis: new versions: v4.1, v4.2 (#33728) 2022-11-07 16:31:59 +01:00
Sajid Ali
c3851704a2 openblas confuses flang/flang-new, so do not set TIME with ~fortran (#33163)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: eugeneswalker <38933153+eugeneswalker@users.noreply.github.com>
2022-11-07 16:20:03 +01:00
Axel Huebl
1eb35d0378 Doc: lsb-release (#32479)
Without the `lsb-release` tool installed, Spack cannot identify the
Ubuntu/Debian version.
2022-11-07 16:13:17 +01:00
Sergey Kosukhin
74c3fbdf87 netcdf-c: add variant optimize (#33642) 2022-11-07 15:49:55 +01:00
Michael Kuhn
492525fda5 socat: new package (#33713) 2022-11-07 15:45:50 +01:00
Adam J. Stewart
9dcd4fac15 py-modin: add new package (#33724)
Co-authored-by: adamjstewart <adamjstewart@users.noreply.github.com>
2022-11-07 15:43:37 +01:00
Harmen Stoppels
2ab974f530 concretizer:unify:true as a default (#31787)
`spack env create` enables a view by default (in a weird hidden
directory, but well...). This is asking for trouble with the other
default of `concretizer:unify:false`, since having different flavors of
the same spec in an environment leads to collision errors when
generating the view.

A change of defaults would improve user experience:

However, `unify:true` makes the most sense: any time the issue is
brought up in Slack, the user ends up changing the concretization
config, because having different flavors of the same spec was never
the intention, and install times are decreased.

Further, we improve the docs and drop the duplicate root spec limitation.
2022-11-07 15:38:24 +01:00
Massimiliano Culpo
e045dabb3a archspec: update version, translate renamed uarchs (#33556)
* Update archspec version

* Add a translation table from old names
2022-11-07 04:50:38 -08:00
Tim Haines
f8e4ad5209 elfutils: add version 0.188 (#33715) 2022-11-07 12:23:10 +01:00
Greg Becker
4b84cd8af5 bugfix for matrices with dependencies by hash (#22991)
Dependencies specified by hash are unique in Spack in that the abstract
specs are created with internal structure. In this case, the constraint
generation for spec matrices fails due to flattening the structure.

It turns out that the dep_difference method for Spec.constrain does not
need to operate on transitive deps to ensure correctness. Removing transitive
deps from this method resolves the bug.

- [x] Includes regression test
2022-11-06 16:49:35 -08:00
Greg Becker
fce7bf179f solver setup: extract virtual dependencies from reusable specs (#32434)
* extract virtual dependencies from reusable specs
* bugfix to avoid establishing new node for virtual
2022-11-06 16:47:07 -08:00
Greg Becker
f3db624b86 package preferences: allow specs to be configured buildable when their virtuals are not (#18269)
* respect spec buildable that overrides virtual buildable
2022-11-06 16:45:38 -08:00
Greg Becker
22c2f3fe89 improve error message for dependency on nonexistant compiler (#32084) 2022-11-06 16:44:11 -08:00
Greg Becker
52cc798948 solver: do not punish explicitly requested compiler mismatches (#30074) 2022-11-06 16:40:00 -08:00
Michael Kuhn
258edf7dac MesonPackage: disable automatic download and install of dependencies (#33717)
Without this, Meson will use its Wraps to automatically download and
install dependencies. We want to manage dependencies explicitly,
so we disable this functionality.
2022-11-06 19:34:43 +00:00
Greg Becker
d4b45605c8 allow multiple compatible deps from CLI (#21262)
Currently, Spack can fail for a valid spec if the spec is constructed from overlapping, but not conflicting, concrete specs via the hash.

For example, if abcdef and ghijkl are the hashes of specs that both depend on zlib/mnopqr, then foo ^/abcdef ^/ghijkl will fail to construct a spec, with the error message "Cannot depend on zlib... twice".

This PR changes this behavior to check whether the specs are compatible before failing.

With this PR, foo ^/abcdef ^/ghijkl will concretize.

As a side-effect, so will foo ^zlib ^zlib and other specs that are redundant on their dependencies.
2022-11-06 11:30:37 -08:00
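A toy model of the changed behavior (not Spack's actual `_add_dependency`; the compatibility check is reduced here to equality, whereas the real check constrains overlapping specs):

```python
def add_dependency(deps, name, spec):
    """Add spec under name, tolerating compatible repeats."""
    existing = deps.get(name)
    if existing is None:
        deps[name] = spec
    elif existing == spec:
        # Compatible (e.g. the same concrete hash): reuse the node
        # instead of raising "Cannot depend on ... twice".
        pass
    else:
        raise ValueError("Cannot depend on incompatible %s twice" % name)
```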
Morten Kristensen
8b4b26fcbd py-vermin: add latest version 1.5.0 (#33727) 2022-11-06 10:32:17 -08:00
John W. Parent
f07f75a47b CMake: add versions 3.24.3, 3.23.4, and 3.23.5 (#33700) 2022-11-06 14:21:42 +01:00
Glenn Johnson
f286a7fa9a Add version 4.2.2 to R (#33726) 2022-11-06 12:40:14 +01:00
Erik Schnetter
fffc4c4846 z3: New version 4.11.2 (#33725) 2022-11-06 11:05:57 +01:00
Greg Becker
27e1d28c0b canonicalize_path: add arch information to substitutions (#29810)
Co-authored-by: becker33 <becker33@users.noreply.github.com>
2022-11-06 10:11:59 +01:00
Emilio J. Padrón González
e550f48b17 ADD version 0.19.0 in py-gym recipe (#33701)
* ADD version 0.19.0 in py-gym recipe

* Fix py-gym download url and dependencies for v0.19.0

* Fix stupid error in previous commit: no change in py-cloudpickle dep

* Yes, I should've paid more attention! O:)

I think now it is right, thanks!
2022-11-05 16:28:58 -07:00
Adam J. Stewart
0f32f7d0e9 py-transformers: add v4.24.0 (#33716)
* py-transformers: add v4.24.0

* Internet access still required
2022-11-05 12:38:49 -05:00
Massimiliano Culpo
5558940ce6 Add support for Python 3.11 (#33505)
Argparse started raising ArgumentError exceptions
when the same parser is added twice. Therefore, we
perform the addition only if the parser is not there
already

Port match syntax to our unparser
2022-11-05 15:59:12 +01:00
Erik Schnetter
c9fcb8aadc openssh: New version 9.1p1 (#33668)
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
2022-11-05 15:34:59 +01:00
Tom Scogland
3346c0918b Fix relocation to avoid overwriting merged constant strings (#32253)
Compilers and linker optimize string constants for space by aliasing
them when one is a suffix of another. For gcc / binutils this happens
already at -O1, due to -fmerge-constants. This means that we have
to take care during relocation to always preserve a certain length
of the suffix of those prefixes that are C-strings. 

In this commit we pick length 7 as a safe suffix length, assuming the
suffix is typically the 7 characters from the hash (i.e. random), so
it's unlikely to alias with any string constant used in the sources.

In general we now pad shortened strings from the left with leading
dir separators, but in the case of C-strings that are much shorter
and don't share a common suffix (due to projections), we do allow
shrinking the C-string, appending a null, and retaining the old part
of the prefix.

Also when rewiring, we ensure that the new hash preserves the last
7 bytes of the old hash.

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2022-11-05 11:09:35 +01:00
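A toy illustration of the left-padding rule described above: when the new prefix is shorter than the old one, pad it with directory separators so the total length, and therefore the trailing bytes of any suffix-aliased C-string, is preserved:

```python
def replace_prefix_padded(data: bytes, old: bytes, new: bytes) -> bytes:
    """Replace old with new, left-padding new with '/' so lengths match."""
    assert len(new) <= len(old), "binary strings cannot grow"
    padded = b"/" * (len(old) - len(new)) + new
    return data.replace(old, padded)


print(replace_prefix_padded(b"/long/old/prefix/lib/libz.so\x00",
                            b"/long/old/prefix", b"/opt/new"))
# b'/////////opt/new/lib/libz.so\x00'  (same total length as before)
```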
Peter Scheibel
71d480515b packages.yaml: set url/git (#33275)
A user may want to set some attributes on a package without actually modifying the package (e.g. if they want to git pull updates to the package without conflicts). This PR adds a per-package configuration section called "set", which is a dictionary of attribute names to desired values. For example:

packages:
  openblas:
    package_attributes:
      submodules: true
      git: "https://github.com/myfork/openblas"

in this case, the package will always retrieve git submodules, and will use an alternate location for the git repo.

While git, url, and submodules are the attributes for which we envision the most usage, this allows any attribute to be overridden, and the acceptable values are any value parseable from yaml.
2022-11-05 08:44:50 +00:00
Massimiliano Culpo
c0ed5612ab unparser: fix bug in unit test assertion (#33722) 2022-11-05 09:00:54 +01:00
Adam J. Stewart
d79cba1a77 py-matplotlib: add v3.6.2 (#33683) 2022-11-04 22:38:04 -06:00
wspear
34724cae87 Updated tau 2.32 hash (#33718) 2022-11-04 20:07:33 -07:00
iarspider
9becc82dfc Fix formatting in packaging guide (#33714) 2022-11-05 02:00:16 +00:00
Erik Schnetter
d7cb790f88 openssl: New version 1.1.1s (#33664)
This is a security update.
2022-11-04 18:18:05 -06:00
Adam J. Stewart
7c0b3f6374 py-pytorch-lightning: add conflicts for py-torch~distributed (#33710) 2022-11-05 00:00:23 +01:00
Greg Becker
912d544afe demote warning for no source id to debug message (#33657)
* demote warning for no source id to debug message
2022-11-04 21:52:58 +00:00
Greg Becker
53fbaa5dcd Cray support: use linux platform for newer craype versions (#29392)
Newer versions of the CrayPE for EX systems have standalone compiler executables for CCE and compiler wrappers for Cray MPICH. With those, we can treat the cray systems as part of the linux platform rather than having a separate cray platform.

This PR:
- [x] Changes cray platform detection to ignore EX systems with Craype version 21.10 or later
- [x] Changes the cce compiler to be detectable via paths
- [x] Changes the spack compiler wrapper to understand the executable names for the standalone cce compiler (`craycc`, `crayCC`, `crayftn`).
2022-11-04 14:52:11 -07:00
Greg Becker
a88c74dc17 delegate to cray modules for target args on cray (#17857) 2022-11-04 14:50:05 -07:00
Sinan
fb5ff901c0 package/py-pykml: add new py3 compatible version (#33631)
* package/py-pykml: add new py3 compatible version

* fix bugs

Co-authored-by: sbulut <sbulut@3vgeomatics.com>
2022-11-04 14:34:50 -07:00
Matthieu Boileau
76ec64859a Fix typo in docs (#33662) 2022-11-04 20:19:34 +01:00
kent-cheung-arm
e85b308212 arm-forge: add 22.1 and 22.1.1. (#33584) 2022-11-04 20:18:20 +01:00
Massimiliano Culpo
a4f3fe2ac7 Deprecate old YAML format for buildcaches (#33707) 2022-11-04 12:06:42 -07:00
Alberto Sartori
b6da8635ec add justbuild package (#33689) 2022-11-04 11:59:27 -07:00
Tamara Dahlgren
d338ac0634 Updates to stand-alone test documentation (#33703) 2022-11-04 18:55:38 +00:00
Dom Heinzeller
33e5e77225 Python package: fix .libs on macOS with external Python (#33410)
For some instances of externally-provided Python (e.g. Homebrew),
the LDLIBRARY/LIBRARY config variables don't actually refer to
libraries and should therefore be excluded from ".libs".
2022-11-04 11:52:27 -07:00
Moritz Kern
13565df027 add elephant version v0.11.2 (#33663) 2022-11-04 12:34:16 -06:00
Stephen McDowell
6f8e242db8 ECP-SDK: fixup +hdf5 +cuda constraints (#33676)
Only enable the hdf5-vfd-gds package if it can compile.

- hdf5-vfd-gds needs cuda@11.7.1+ to be able to `find_library` for cuFile.
- Only enable hdf5-vfd-gds in the sdk if cuda@11.7.1+ is available.
  If an earlier version of cuda is being used, do not depend on the
  hdf5-vfd-gds package at all.
2022-11-04 13:26:58 -05:00
Greg Becker
17eaf34902 fix requires test for aarch64 (#33656) 2022-11-04 18:12:55 +00:00
Vicente Bolea
ac9172fdbc paraview: static cuda is not supported (#33246) 2022-11-04 18:52:28 +01:00
Erik Schnetter
1a77f3e2e0 gnutls: add v3.7.8 (#33708) 2022-11-04 18:06:11 +01:00
Erik Schnetter
7d4373f526 libressl: New package (#33709) 2022-11-04 18:05:23 +01:00
Greg Becker
2caec6bd27 Bugfix: glvis new builder interface (#33704)
* take two

* Add missing import statement

* Group dependencies together

* Extract libtiff arguments

* Extract libpng arguments

* Push preamble variable into png_args and tiff_args

* Extract setting args associated with the screenshot variant

* Inlined a few variables

* Modify only build targets and install targets

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2022-11-04 08:42:06 -07:00
Ashwin Kumar
bde4d1e38c Octopus: refactor AutotoolsPackage (#33526) 2022-11-04 14:59:45 +01:00
snehring
49d7aa21fd orca: add v5.0.3 (#33649) 2022-11-04 12:26:59 +01:00
Harmen Stoppels
1d4919924d Add in-place RPATH replacement (#27610)
Whenever the rpath string actually _grows_, it falls back to patchelf,
when it stays the same length or gets shorter, we update it in-place,
padded with null bytes.

This PR only deals with absolute -> absolute rpath replacement. We don't
use `_build_tarball(relative=True)` in our CI. If `relative` then it falls
back to the old replacement code.

With this PR, relocation time goes down significantly, likely because patchelf
does some odd things with mmap, causing lots of overhead. Example:

- `binutils`: 700MB installed, goes from `1.91s` to `0.57s`, or `3.4x` faster.
   Relocation time: 27% -> 10% of total install time
- `llvm`: 6.8GB installed, goes from `28.56s` to `5.38`, or `5.3x` faster.
   Relocation time: 44% -> 13% of total install time

The bottleneck is now decompression.

Note: I'm somewhat confused about the "relative rpath" code paths. Right
now this PR only deals with absolute -> absolute replacement. As far as
I understand, if you embrace relative rpaths when uploading to the
buildcache, the whole point is you _don't_ want to patch rpaths on
install? So it seems fine to not expand `$ORIGIN` again imho.
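
A hedged sketch of the in-place case (illustrative only; assumes the new rpath fits in
the old NUL-terminated slot, otherwise the caller falls back to patchelf):

```python
def replace_rpath_in_place(data: bytearray, old: bytes, new: bytes) -> bool:
    """Overwrite `old` with `new`, padding with NUL bytes to keep the size."""
    if len(new) > len(old):
        return False  # the rpath grew: fall back to patchelf
    start = data.find(old + b"\x00")
    if start == -1:
        return False
    data[start : start + len(old) + 1] = new + b"\x00" * (len(old) - len(new) + 1)
    return True
```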
2022-11-04 02:30:53 -07:00
Jordan Galby
b1f896e6c7 Fix non-parallel make under depfile jobserver (#32862)
When a package asks for non-parallel make, we need to force `make -j1` because just doing `make` will run in parallel under jobserver (e.g. `spack env depfile`).

We now always add `-j1` when asked for a non-parallel execution (even if there is no jobserver).

And each `MakeExecutable` can now ask for jobserver support or not. For example: the default `ninja` does not support jobserver so spack applies the default `-j`, but `ninja@kitware` or `ninja-fortran` does, so spack doesn't add `-j`.

Tips: you can run `SPACK_INSTALL_FLAGS=-j1 make -f spack-env-depfile.make -j8` to avoid massive job-spawning because of build tools that don't support jobserver (ninja).
2022-11-04 02:05:22 -07:00
Satish Balay
34969b7072 petsc: fix configure option to use double-hyphen (#33685) 2022-11-03 20:22:09 -06:00
Lucas
72b11c1883 Adding gegelati library package (#33686)
* testing ssh key
* test
* LR : Creating the packge to install the gegelati app
* LR : Gegelati, a TPG C++ library added and fully tested
* LR : adjusting for fork
* LR: taking out the boilerplates
* LR: taking out the rest
2022-11-03 18:34:00 -07:00
wspear
b3e41736e6 Add version 2.32 to tau package (#33702) 2022-11-03 18:24:23 -07:00
Todd Gamblin
9c5c327a6f propagation: don't count propagated variant values as non-default (#33687)
We try to avoid non-default variant values in the concretizer, but this doesn't make
sense for variants forced to take some non-default value by variant propagation.
Counting this as a penalty effectively biases the concretizer toward specs with small
dependency graphs -- something we try very hard to avoid elsewhere because it can lead
to very strange decisions.

Example: with the penalty, `spack spec hdf5` will choose the default `openmpi` as its
`mpi` provider, but `spack spec hdf5 ~~shared` will choose `mpich`, because `mpich`'s
smaller DAG means fewer non-default variant values to set. That's not a good reason to
prefer a non-default virtual provider.

To fix this, if the user explicitly requests a non-default value to be propagated, there
shouldn't be a penalty. Variant values set on the CLI already don't count as default; we
just need to extend that to propagated values.
2022-11-03 18:26:03 -06:00
Greg Becker
79fcc0848f Update glvis for new builder interface (#33699) 2022-11-03 17:10:39 -07:00
Harmen Stoppels
b52be75978 Experimental binding of shared ELF libraries (#31948)
Adds another post install hook that loops over the install prefix, looking for shared libraries type of ELF files, and sets the soname to their own absolute paths.

The idea being, whenever somebody links against those libraries, the linker copies the soname (which is the absolute path to the library) as a "needed" library, so that at runtime the dynamic loader realizes the needed library is a path which should be loaded directly without searching.

As a result:

1. rpaths are not used for the fixed/static list of needed libraries in the dynamic section (only for _actually_ dynamically loaded libraries through `dlopen`), which largely solves the issue that Spack's rpaths are a heuristic (`<prefix>/lib` and `<prefix>/lib64` might not be where libraries really are...)
2. improved startup times (no library search required)
2022-11-03 17:34:00 -06:00
Adrien Cotte
7fc49c42ee Add new versions to py-clustershell (#33694) 2022-11-03 17:11:28 -05:00
snehring
5b39059472 tassel: adding version 5.2.86 (#33697) 2022-11-03 15:02:36 -07:00
snehring
e8cc1a60ea spades: adding version 3.15.5 (#33698) 2022-11-03 14:44:31 -07:00
eugeneswalker
d2d01ea488 flux-core: allow ncurses >= 6.2 (#33599) 2022-11-03 14:44:02 -07:00
Zack Galbreath
ccc716f617 Limit the number of parallel jobs launched by Tensile (#33692) 2022-11-03 15:26:03 -06:00
Scott Wittenburg
b55509ffa8 gitlab: Prune untouched specs less aggressively (#33669)
Untouched spec pruning was added to reduce the number of specs
developers see getting rebuilt in their PR pipelines that they
don't understand.  Because the state of the develop mirror lags
quite far behind the tip of the develop branch, PRs often find
they need to rebuild things untouched by their PR.

Untouched spec pruning was previously implemented by finding all
specs in the environment with names of packages touched by the PR,
traversing the DAGs of those specs in both directions, and adding
all dependencies as well as dependents to a list of concrete specs
that should not be considered for pruning.

We found that this heuristic results in too many pruned specs, and
that dependents of touched specs must have all their dependencies
added to the list of specs that should not be considered for pruning.
2022-11-03 13:33:52 -06:00
Miroslav Stoyanov
08bee718a2 new cmake requirement (#33679) 2022-11-03 11:28:08 -07:00
Nicolas Cornu
da020d1bb8 Bump HighFive to v2.5.0 (#33691)
* Bump HighFive to v2.5.0
* Adding myself as maintainers
* fix format with black
2022-11-03 11:34:15 -06:00
Cyrus Harrison
ca93c8b57a fides: remove unneeded variants (#32521) 2022-11-03 11:33:59 -06:00
Greg Sjaardema
16acd25053 SEACAS: further refactor (#33673)
* SEACAS: Update package.py to handle new SEACAS project name

The base project name for the SEACAS project has changed from
"SEACASProj" to "SEACAS" as of @2022-10-14, so the package
needed to be updated to use the new project name when needed.

The refactor also changes several:
    "-DSome_CMAKE_Option:BOOL=ON"
to
   define("Some_CMAKE_Option", True)

* SEACAS: Additional refactorings

* Replaced all cmake "-Dsomething=other" lines with either `define`
or `define_from_variant` functions.

Consolidated the application (fortran, legacy, all) enabling lines
into loops over the code names. This makes it easier to see the categorization of
applications and also to add/move/remove an application.

Reordered some lines; general cleanup and restructuring.

* Address flake8 issues

* Remove trailing whitespace

* Reformat using black
2022-11-03 11:42:25 -04:00
snehring
9925f3b779 isescan: add version 1.7.2.3 (#33675) 2022-11-03 09:29:59 -06:00
Mikael Simberg
68a5fe84a7 Add pika 0.10.0 (#33659)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2022-11-03 12:28:38 +01:00
Harmen Stoppels
243dfe91e9 Use spack.traverse.traverse_nodes where useful (#33677) 2022-11-03 11:34:24 +01:00
Jose E. Roman
30da20c1bc New patch release SLEPc 3.18.1 (#33661) 2022-11-03 03:58:16 -06:00
Massimiliano Culpo
0d82688903 Update metadata for bootstrapping (#33665) 2022-11-03 09:05:03 +00:00
Michael Kuhn
707e56dea8 glib: add 2.74.1 (#33650) 2022-11-03 02:50:20 -06:00
Sinan
2264b75ca0 add new package: py-pylatex (#33573)
* add new package: py-pylatex

* fix bugs

* add extras indicated in setup.py

* Update var/spack/repos/builtin/packages/py-pylatex/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update var/spack/repos/builtin/packages/py-pylatex/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* improvements

* remove git merge related lines

* tidy

* Update var/spack/repos/builtin/packages/py-pylatex/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* remove variant

* [@spackbot] updating style on behalf of Sinan81

Co-authored-by: sbulut <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Sinan81 <Sinan81@users.noreply.github.com>
2022-11-02 22:38:17 -06:00
Greg Becker
c716c6ca95 Bugfix for spec objects modified by flag handlers (#33682)
This issue was introduced in #29761:

```
==> Installing ncurses-6.3-22hz6q6cvo3ep2uhrs3erpp2kogxncbn
==> No binary for ncurses-6.3-22hz6q6cvo3ep2uhrs3erpp2kogxncbn found: installing from source
==> Using cached archive: /spack/var/spack/cache/_source-cache/archive/97/97fc51ac2b085d4cde31ef4d2c3122c21abc217e9090a43a30fc5ec21684e059.tar.gz
==> No patches needed for ncurses
==> ncurses: Executing phase: 'autoreconf'
==> ncurses: Executing phase: 'configure'
==> ncurses: Executing phase: 'build'
==> ncurses: Executing phase: 'install'
==> Error: AttributeError: 'str' object has no attribute 'propagate'

The 'ncurses' package cannot find an attribute while trying to build from sources. This might be due to a change in Spack's package format to support multiple build-systems for a single package. You can fix this by updating the build recipe, and you can also report the issue as a bug. More information at https://spack.readthedocs.io/en/latest/packaging_guide.html#installation-procedure

/spack/lib/spack/spack/build_environment.py:1075, in _setup_pkg_and_run:
       1072        tb_string = traceback.format_exc()
       1073
       1074        # build up some context from the offending package so we can
  >>   1075        # show that, too.
       1076        package_context = get_package_context(tb)
       1077
       1078        logfile = None
```

It turns out this was caused by a bug that had been around much longer, in which the flags were passed by reference to the flag_handler, and the flag_handler was modifying the spec object, not just the flags given to the build system. The scope of this bug was limited by the forking model in Spack, which is how it went under the radar for so long.

PR includes regression test.
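
For context, a hedged sketch of a flag handler that avoids the pitfall (hypothetical
package excerpt; `(injected, env, build_system)` is the standard return triple):

```python
from spack.package import *

class Example(Package):
    def flag_handler(self, name, flags):
        flags = list(flags)  # copy: never mutate the spec's own flag lists
        if name == "cflags":
            flags.append("-fPIC")
        return (flags, None, None)
```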
2022-11-02 21:09:43 -07:00
Harmen Stoppels
b652fe72d7 remove deptype_query remnants and fix incorrect deptypes kwarg (#33670)
* remove deptype_query remnants
* deptypes -> deptype

These arguments haven't existed since 2017, but `traverse` now fails on unknown **kwargs, so they have finally popped up.
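
A small usage sketch of the surviving kwarg (assumes a working Spack session; `zlib` is
just an example package):

```python
from spack.spec import Spec

spec = Spec("zlib").concretized()
# `deptype` (singular) is the supported keyword; unknown kwargs now raise.
link_run = [s.name for s in spec.traverse(deptype=("link", "run"))]
```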
2022-11-02 14:24:59 -07:00
Robert Pavel
4d5f2e3a37 Initial Version of Hypar Spackage (#33647)
Initial spackage for Hypar proxy app
2022-11-02 14:10:19 -07:00
Martin Diehl
4c535a2037 update: damask3.0.0-alpha7 (#33634)
* damask3.0.0-alpha7
* [@spackbot] updating style on behalf of MarDiehl

Co-authored-by: MarDiehl <MarDiehl@users.noreply.github.com>
2022-11-02 12:23:19 -07:00
Weiqun Zhang
2a516aadb1 amrex: add v22.11 (#33671) 2022-11-02 11:59:50 -07:00
Carson Woods
0661c1f531 Package: add new package py-fitter (#33652)
* Add new python package

* Fix isort style issues

* Add additional dependencies

* Add additional dependencies

* Remove comments

* Add additional explicit dependencies
2022-11-02 12:42:10 -06:00
Sinan
4b549560f9 package_qgis_fix_pythonpath (#33655)
* package_qgis_fix_pythonpath

* check if bindings enabled also

Co-authored-by: sbulut <sbulut@3vgeomatics.com>
2022-11-02 13:40:05 -05:00
Greg Sjaardema
d4ea74bf80 SEACAS: Update package.py to handle new SEACAS project name (#33646)
The base project name for the SEACAS project has changed from
"SEACASProj" to "SEACAS" as of @2022-10-14, so the package
needed to be updated to use the new project name when needed.

The refactor also changes several:
    "-DSome_CMAKE_Option:BOOL=ON"
to
   define("Some_CMAKE_Option", True)
2022-11-02 11:50:15 -05:00
Paul R. C. Kent
cff94f8e71 llvm: add 15.0.3, 15.0.4 (#33651)
* v1503

* v1504
2022-11-02 10:46:08 -06:00
Gregory Becker
aa4f478ab8 propagation: improve performance
This updates the propagation logic used in `concretize.lp` to avoid rules with `path()`
in the body and instead base propagation around `depends_on()`.
2022-11-02 09:43:57 -07:00
Kayla Butler
bc209c470d flags/variants: Add ++/~~/== syntax for propagation to dependencies
Currently, compiler flags and variants are inconsistent: compiler flags set for a
package are inherited by its dependencies, while variants are not. We should have these
be consistent by allowing for inheritance to be enabled or disabled for both variants
and compiler flags.

- [x] Make new (spec language) operators
- [x] Apply operators to variants and compiler flags
- [x] Conflicts currently result in an unsatisfiable spec
      (i.e., you can't propagate two conflicting values)

What I propose is using two of the currently used sigils to signal that the variant
or compiler flag will be inherited:

Example syntax:
- `package ++variant`
      enabled variant that will be propagated to dependencies
- `package +variant`
      enabled variant that will NOT be propagated to dependencies
- `package ~~variant`
      disabled variant that will be propagated to dependencies
- `package ~variant`
      disabled variant that will NOT be propagated to dependencies
- `package cflags==True`
      `cflags` will be propagated to dependencies
- `package cflags=True`
      `cflags` will NOT be propagated to dependencies

Syntax for string-valued variants is similar to compiler flags.
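
A hedged sketch of the new syntax in use (assumes this PR's parser; the package and
flag values are illustrative):

```python
from spack.spec import Spec

s = Spec("hdf5 ++shared")       # +shared on hdf5, propagated to dependencies
t = Spec("hdf5 ~~shared")       # ~shared, propagated to dependencies
u = Spec('hdf5 cflags=="-O2"')  # cflags propagated to dependencies
```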
2022-11-02 09:43:57 -07:00
Filippo Spiga
ae99829af4 armpl-gcc: Pull RHEL8 package when OS is Rocky8 (#33641) 2022-11-02 16:09:28 +01:00
Todd Gamblin
a4d978be59 tests: fix group membership check in sbang tests. (#33658)
Fixes an issue on the RHEL8 UBI container where this test would fail because `gr_mem`
was empty for every entry in the `grp` DB.

You have to check *both* the `pwd` database (which has primary groups) and `grp` (which
has other groups) to do this correctly.

- [x] update `llnl.util.filesystem.group_ids()` to do this
- [x] use it in the `sbang` test
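
A minimal sketch of the corrected lookup (not the exact
`llnl.util.filesystem.group_ids()` implementation):

```python
import grp
import pwd

def group_ids(uid):
    """Collect a user's group ids from both the pwd and grp databases."""
    user = pwd.getpwuid(uid)
    gids = {user.pw_gid}  # the primary group lives in the pwd database
    # supplementary groups live in grp; gr_mem may be empty (e.g. RHEL8 UBI)
    gids.update(g.gr_gid for g in grp.getgrall() if user.pw_name in g.gr_mem)
    return sorted(gids)
```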
2022-11-02 11:00:16 +01:00
Harmen Stoppels
4bad9f9b13 Consolidate DAG traversal in traverse.py, support DFS/BFS (#33406)
This PR introduces breadth-first traversal, and moves depth-first traversal
logic out of Spec's member functions, into `traverse.py`.

It introduces a high-level API with three main methods:

```python
spack.traverse.traverse_edges(specs, kwargs...)
spack.traverse.traverse_nodes(specs, kwargs...)
spack.traverse.traverse_tree(specs, kwargs...)
```

with the usual `root`, `order`, `cover`, `direction`, `deptype`, `depth`, `key`,
`visited` kwargs for the first two.

What's new is that `order="breadth"` is added for breadth-first traversal.

The lower level API is not exported, but is certainly useful for advanced use
cases. The lower level API includes visitor classes for direction reversal and
edge pruning, which can be used to create more advanced traversal methods,
especially useful when the `deptype` is not constant but depends on the node
or depth. 

---

There are a couple of nice use cases for breadth-first traversal:

- Sometimes roots have to be handled differently (e.g. follow build edges of
  roots but not of deps). BFS ensures that root nodes are always discovered at
  depth 0, instead of at any depth > 1 as a dep of another root.
- When printing a tree, it would be nice to reduce indent levels so it fits in the 
  terminal, and ensure that e.g. `zlib` is not printed at indent level 10 as a 
  dependency of a build dep of a build dep -- rather, if it's a direct dep of my
  package, I want to see it at depth 1. This basically requires one breadth-first
  traversal to construct a tree, which can then be printed with depth-first traversal.
- In environments in general, it's sometimes inconvenient to have a double
  loop: first over the roots then over each root's deps, and maintain your own
  `visited` set outside. With BFS, you can simply init the queue with the
  environment root specs and it Just Works. [Example here](3ec7304699/lib/spack/spack/environment/environment.py (L1815-L1816))
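
A usage sketch of the new API (assumes a working Spack session and that `depth=True`
yields `(depth, spec)` pairs, as with the existing depth-first traversal):

```python
import spack.traverse as traverse
from spack.spec import Spec

root = Spec("hdf5").concretized()
# Breadth-first: the root is always discovered at depth 0.
for depth, node in traverse.traverse_nodes([root], order="breadth", depth=True):
    print("  " * depth + node.name)
```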
2022-11-01 23:04:47 -07:00
eugeneswalker
353e31e72a rempi %oneapi: -Wno-error=implicit-function-declaration (#33654) 2022-11-01 17:53:53 -06:00
Vicente Bolea
328addd43d ParaView: add v5.11.0-RC2 (#33486)
Co-authored-by: eugeneswalker <38933153+eugeneswalker@users.noreply.github.com>
2022-11-01 15:49:57 -07:00
Greg Becker
f696f02a46 Unit tests: make unit tests work for aarch64 machines (#33625)
Currently, many tests hardcode to older versions of gcc for comparisons of
concretization among compiler versions. Those versions are too old to concretize for
`aarch64`-family targets, which leads to failing tests on `aarch64`.

This PR fixes those tests by updating the compiler versions used for testing.

Currently, many tests hardcode the expected architecture result in concretization to the
`x86_64` family of architectures.

This PR generalizes the tests that can be generalized, to cover multiple architecture
families. For those that test specific relationships among `x86_64`-family targets, it
ensures that concretization uses the `x86_64`-family targets in those cases.

Currently, many tests rely on the fact that `AutotoolsPackage` imposes no dependencies
on the inheriting package. That is not true on `aarch64`-family architectures.

This PR ensures that the dependency on `gnuconfig` that `AutotoolsPackage` pulls in on
`aarch64` is ignored when testing for the appropriate relationships among dependencies.

Additionally, 5 tests currently prompt the user for input when `gpg` is available in the
user's path. This PR fixes that issue. And 7 tests fail currently when the user has a
yubikey available. This PR fixes the incorrect gpg argument causing those issues.
2022-11-01 16:25:55 -06:00
Greg Sjaardema
e0265745bc Update command option for example (#33321)
The `spack info <package>` command does not show the `Virtual Packages:` output unless the `--virtuals` option is passed. Before this change, the information the command is supposed to illustrate was not shown in the example, which was confusing.
2022-11-01 15:29:50 -06:00
Massimiliano Culpo
75360bdc21 Allow target requirements in packages.yaml (#32528)
This PR solves the issue reported in #32471 specifically for targets and operating systems, 
by not adding a default platform to anonymous specs.
2022-11-01 22:11:49 +01:00
Harmen Stoppels
230e96fbb8 Add elf parsing utility function (#33628)
Introduces `spack.util.elf.parse_elf(file_handle)`
2022-11-01 19:42:06 +00:00
Michael Kuhn
6b3ea94630 perl: add 5.36.0 (#33336) 2022-11-01 12:13:53 -07:00
Michael Kuhn
6dd2bb258c xz: add 5.2.7 (#33338) 2022-11-01 20:02:11 +01:00
Cody Balos
1abcc8caf7 sundials: add v6.4.1, new ginkgo and kokkos variants, plus some fixes (#33644)
* sundials: add v6.4.1, new ginkgo and kokkos variants, plus some fixes
* add missing kokkos/kokkos-kernels variant
* add v6.4.0
2022-11-01 12:45:53 -06:00
Massimiliano Culpo
23aef6bb94 Let pytest-cov create the xml directly (#33619)
`coverage` sometimes failed to combine, even if there were multiple reports.
2022-11-01 19:04:45 +01:00
liuyangzhuan
973b43b1c1 superlu-dist: fix rocm variant for the master branch (#33624)
* superlu-dist: fix rocm variant for the master branch
* simplify superlu-dist
* add mpi include to HIP_HIPCC_FLAGS

Co-authored-by: liuyangzhuan <liuyangzhuan@users.noreply.github.com>
2022-11-01 10:28:01 -07:00
Mikael Simberg
d925ba9bc6 Add tracy 0.9 (#33638)
* Add tracy 0.9
* Add conflict for pika and tracy@0.9:
2022-11-01 09:22:34 -07:00
Adam J. Stewart
1780f3ab3c py-pytorch-lightning: add v1.8.0 (#33643) 2022-11-01 09:13:12 -07:00
Qian Jianhua
d6f25becdb py-ilmbase: add dependency to py-numpy (#33633)
* py-ilmbase: add dependency to py-numpy

* fix style
2022-11-01 10:36:50 -05:00
Sinan
f2c84efed2 package/py-simplekml_add_new_versions (#33632)
Co-authored-by: sbulut <sbulut@3vgeomatics.com>
2022-11-01 10:30:02 -05:00
Harmen Stoppels
156dd5848e Relocate links using prefix to prefix map (#33636)
Previously symlinks were not relocated when they pointed across packages
2022-11-01 16:00:51 +01:00
Mark W. Krentel
cd40d02214 hpctoolkit: adjust rocm dependency types (#33627)
Drop the link dependency type for the rocm packages.  We don't
actually link, and that adds rpaths that conflict with the app.
2022-10-31 21:38:04 -07:00
Adam J. Stewart
3187d4e7b1 py-torchmetrics: add v0.10.2 (#33630) 2022-10-31 20:01:23 -07:00
kwryankrattiger
6b86a8562f ParaView: ParaView needs to set the HDF5 API (#33617)
When building ParaView with an HDF5 newer than 1.10, it needs to select the 1.10 API using flags.
2022-10-31 20:58:02 -06:00
Adam J. Stewart
df6cdcf6c7 GDAL: add v3.5.3 (#33623) 2022-10-31 19:53:52 -06:00
Luke Diorio-Toth
02097b1856 new package: polypolish (#33601)
* added polypolish package, comfirmed builds
* added bwa dep
2022-10-31 15:07:40 -07:00
downloadico
a83456dd7b julia: don't look for the openlibm libraries when unneeded (#33626)
* julia: don't look for the openlibm libraries when unneeded
Cause spack to *not* check for the existence of the openlibm libraries (by adding it to the pkgs list) when ~openlibm is specified.
* [@spackbot] updating style on behalf of downloadico

Co-authored-by: downloadico <downloadico@users.noreply.github.com>
2022-10-31 16:05:49 -06:00
SoniaScard
269304a7ac ophidia-server: new package at v1.7 (#33581)
Co-authored-by: SoniaScard <SoniaScard@users.noreply.github.com>
2022-10-31 14:20:56 -07:00
Massimiliano Culpo
ace8c74e20 Revert "gitlab: when_possible -> false (#33443)" (#33552)
This reverts commit b1559cc831.
2022-10-31 13:31:57 -07:00
Michael Kuhn
fc2d5d2311 Fix pkgconfig dependencies (#33614)
Packages should depend on the virtual provider, pkgconfig, not on its implementations, pkg-config or pkgconf.
2022-10-31 11:44:07 -07:00
Mark Olesen
ef3cd6d6ca openfoam: update mechanism for creating spack-specific wmake rules (#33615)
- the updated OpenFOAM wmake rules now allow multiple locations for
  compiler flags:
  * wmake/General/common/c++Opt  [central]
  * wmake/linux64Gcc/c++Opt      [traditional]
- match both '=' and ':=' make rule lines

Co-authored-by: Mark Olesen <Mark.Olesen@esi-group.com>
2022-10-31 09:43:58 -07:00
John W. Parent
222cef9b7c Windows: fix library loading and enable Clingo bootstrapping (#33400)
Changes to improve locating shared libraries on Windows, which in
turn enables the use of Clingo. This PR attempts to establish a
proper distinction between linking on Windows vs. Linux/Mac: on
Windows, linking is always done with .lib files (never .dll files).
This somewhat complicates the model since the Spec.lib method could
return libraries that were used for both linking and loading, but
since these are not always the same on Windows, it was decided to
treat Spec.libs as being for link-time libraries. Additional functions
are added to help dependents locate run-time libraries.

* Clingo is now the default concretizer on Windows
* Clingo is now the concretizer used for unit tests on Windows
* Fix a permissions issue that can occur while moving Git files during
  fetching/staging
* Packages can now implement "win_add_library_dependent" to register
  files/directories that include libraries that would need to link
  to dependency dlls
* Packages can now implement "win_add_rpath" to register the locations
  of dlls that dependents would want to load
* "Spec.libs" on Windows is updated to return link-time libraries
  (i.e. .lib files, rather than .dll files)
* PackageBase.rpath on Windows is now updated to return the most-likely
  locations where .dlls will be found (which is generally in the bin/
  directory)
2022-10-31 09:36:52 -07:00
Erik
214890c026 Enable Cuda for AMReX smoke test. (#28576)
* Enable Cuda for AMReX smoke test.

* style fix

* more style fixes

* change /... to join_path

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2022-10-31 09:33:55 -07:00
Erik Schnetter
30d84a2716 rsync: New version 3.2.7 (#33618) 2022-10-31 09:26:45 -07:00
psakievich
6990cf76a0 Nalu-Wind: Allow for standard versions of trilinos (#33620)
* Nalu-Wind: Allow for standard versions of trilinos

This will allow us to utilize custom numeric versions for trilinos in `spack-manager` while we continue to develop `nalu-wind`.
Pinging @eugeneswalker @jrood-nrel @tasmith4

* Update var/spack/repos/builtin/packages/nalu-wind/package.py
2022-10-31 08:37:38 -07:00
Harmen Stoppels
7db38433e2 gmake: add a patch so dirs are not executed (#33578) 2022-10-31 08:17:40 -06:00
Satish Balay
a93b3643c6 exago: query and use MPI compilers from spack (#33598)
* exago: query and use MPI compilers from spack

* exago: requires explicit location of mpi.h for nvcc
2022-10-31 07:03:26 -07:00
Harmen Stoppels
a4930c74cb Make --backtrace show non-SpackError backtraces (#33540) 2022-10-31 12:49:19 +00:00
Harmen Stoppels
1ab888cdc1 remove sequential filter in binary relo (#33608)
Currently there's a slow sequential step in binary relocation where all
strings of a binary are collected, with rpaths removed, and then
filtered for the old install root.

This is completely unnecessary, and also incorrect, since we replace
more than just the old install root in the prefix to prefix mapping. And
in fact the prefix to prefix mapping is parallel, and a single pass. So
even as an optimization, this filter makes no sense anymore.

Therefore, we remove it.
2022-10-31 10:41:59 +01:00
Harmen Stoppels
616d5a89d4 _replace_prefix_bin performance improvements (#33590)
- single pass over the binary data matching all prefixes
- collect offsets and replacement strings
- do in-place updates with `fseek` / `fwrite` (see the sketch below), since
  typically our replacements touch O(few bytes) while the file is O(many megabytes)
- be nice: leave the file untouched if some string can't be
  replaced
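
The in-place update step might look roughly like this (a sketch; offsets and
equal-length replacements are assumed to be precomputed):

```python
from typing import List, Tuple

def apply_patches(path: str, patches: List[Tuple[int, bytes]]) -> None:
    """Overwrite a few small byte ranges instead of rewriting the whole file."""
    with open(path, "r+b") as f:
        for offset, replacement in patches:
            f.seek(offset)
            f.write(replacement)
```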
2022-10-31 10:08:16 +01:00
Mosè Giordano
fb4be98f26 utf8proc: add v2.8.0 (#33611) 2022-10-31 00:47:20 +01:00
Diego Alvarez
20fafe6a46 openjdk: add 11.0.16.1+1, 11.0.17+8, 17.0.4.1+1, 17.0.5+8 (#33355) 2022-10-30 13:25:41 -06:00
Mosè Giordano
7d5446740c libgit2: add v1.4.4 and v1.5.0 (#33604) 2022-10-30 12:13:47 +01:00
eugeneswalker
999c460b1e e4s ci: add mfem +rocm (#31604) 2022-10-29 23:32:46 +00:00
Veselin Dobrev
82470f880a New MFEM version: 4.5 (#33480)
* New MFEM version: 4.5

Add new MFEM variants: ginkgo, hiop

* mfem: small tweaks

* mfem: tweak testing script

* mfem: try to resolve issue #30483

* mfem: fix style

* mfem: tweak for Spack-built hipsparse

Co-authored-by: eugeneswalker <38933153+eugeneswalker@users.noreply.github.com>
2022-10-29 13:22:25 -07:00
eugeneswalker
e5274de7ec unifyfs %oneapi: -Wno-error=deprecated-non-prototype,unused-function (#33602) 2022-10-29 13:19:27 -07:00
eugeneswalker
23aada1d24 gettext: constrain nvhpc patch to @:0.20.0 (#33489) 2022-10-29 21:41:10 +02:00
Massimiliano Culpo
7e645f54c5 Deprecate spack bootstrap trust/untrust (#33600)
* Deprecate spack bootstrap trust/untrust
* Update CI
* Update tests
2022-10-29 12:24:26 -07:00
Sergey Kosukhin
29f1e8395c mpich: fix rpath flags in mpif90 when building with oneapi (#33319)
Co-authored-by: eugeneswalker <38933153+eugeneswalker@users.noreply.github.com>
2022-10-29 12:17:43 -06:00
Jonas Thies
83b49070e6 phist: add/update conflicts with cray-libsci and/or python@3.11: for … (#33587)
* phist: add/update conflicts with cray-libsci and/or python@3.11: for versions <1.11.2

* phist: style fix
2022-10-29 10:12:26 -07:00
eugeneswalker
ef8c15c5ef czmq %oneapi: -Wno-error: gnu-null-pointer-arithmetic, strict-prototypes (#33586) 2022-10-29 06:14:15 -06:00
eugeneswalker
c10d525956 json-c %oneapi: add -Wno-error=implicit-function-declaration (#33585) 2022-10-29 06:13:54 -06:00
Satish Balay
6edb20c7a6 mumps: update URLs, and add versions 5.5.0, 5.5.1 (#33597) 2022-10-29 11:34:45 +02:00
Axel Huebl
b06c5a43d9 WarpX 22.10 & PICMI-Standard (#33594)
Update `warpx` & `py-warpx` to the latest release, `22.10`.
Update `py-picmistandard` accordingly.
2022-10-29 11:27:46 +02:00
Adam J. Stewart
06bac4a487 libtiff: CMake support, internal codecs variants (#33591) 2022-10-29 06:36:14 +02:00
Brian Van Essen
6c5a7fefd6 Fixed a bad variant when statement for including cuDNN. (#33595) 2022-10-28 13:21:50 -06:00
Luke Diorio-Toth
04f87da36b new packages (py-auditwheel, py-medaka, py-parasail, py-progressbar33, py-pyspoa) and updates to others (py-scikit-build, py-ont-fast5-api) (#33541)
* Added py-medaka and dependencies

* fixed py-parasail build error

* medaka still doesn't have correct linked libdeflate

* fixed pyspoa deps

* added htslib.patch, confirmed builds and runs

* fixed style

* Update var/spack/repos/builtin/packages/py-auditwheel/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* made requested changes

* added targets for pyspoa dep

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2022-10-28 14:05:15 -05:00
Satish Balay
f74a6a4503 netlib-scalapack: fix build error with oneapi compilers (#33539)
/home/xsdk/spack.x/lib/spack/env/oneapi/icx -DAdd_ -Dscalapack_EXPORTS -I/opt/intel/oneapi/mpi/2021.7.0/include -O3 -DNDEBUG -fPIC -MD -MT CMakeFiles/scalapack.dir/BLACS/SRC/dgamx2d_.c.o -MF CMakeFiles/scalapack.dir/BLACS/SRC/dgamx2d_.c.o.d -o CMakeFiles/scalapack.dir/BLACS/SRC/dgamx2d_.c.o -c /home/xsdk/spack.x/spack-stage/spack-stage-netlib-scalapack-2.2.0-uj3jepiowz5is4hmdmjrzjltetgdr3lx/spack-src/BLACS/SRC/dgamx2d_.c
/home/xsdk/spack.x/spack-stage/spack-stage-netlib-scalapack-2.2.0-uj3jepiowz5is4hmdmjrzjltetgdr3lx/spack-src/BLACS/SRC/igsum2d_.c:154:7: error: call to undeclared function 'BI_imvcopy'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
      BI_imvcopy(Mpval(m), Mpval(n), A, tlda, bp->Buff);
      ^
2022-10-28 10:13:51 -06:00
Lucas Nesi
b89f6226f8 starpu: add papi and blocking variants (#33580) 2022-10-28 16:43:45 +02:00
Vasileios Karakasis
c772b662c9 ReFrame: add versions up to v3.12.0 (#33446)
Also adds 4.0 dev versions
2022-10-28 16:38:29 +02:00
Harmen Stoppels
fa0432c521 gitlab ci: patched make 4.3.0 (#33583) 2022-10-28 14:38:57 +02:00
Harmen Stoppels
21e826f256 patchelf: add v0.16.1 (#33579) 2022-10-28 11:15:06 +02:00
eugeneswalker
0896bf928b openblas %oneapi: extend application of --Wno-error=implicit-function-declaration (#33576) 2022-10-28 02:05:59 -06:00
Chris White
d4a0f588e9 CachedCMakePackage: Add back initconfig as a defined phase (#33575)
Also: add type annotation to indicate that "phases" is always a
tuple of strings.
2022-10-27 23:04:18 -07:00
Adam J. Stewart
b81b54a74c py-scikit-learn: add v1.1.3 (#33534) 2022-10-27 19:49:43 -06:00
Diego Alvarez
3ad0952956 nextflow: add 22.10.1 (#33574)
* Add nextflow 22.10.1
* Add trailing comma (style)
2022-10-27 16:41:49 -06:00
Adam J. Stewart
bd51751a8c GDAL: multi-build system support (#33566) 2022-10-27 16:10:12 -06:00
rfbgo
67585fe13e Add vasp variant to control shmem compile options (#33531)
Currently the vasp package always enables the use of shmem to reduce algorithm memory usage (see
https://www.vasp.at/wiki/index.php/Precompiler_options). This is great, but on some systems it gives compile errors due to the interoperability of C and Fortran. This PR makes that shmem flag optional, but retains the
existing default-on behavior.
2022-10-27 14:59:06 -07:00
Robert Cohn
3e966f2547 support pkgconfig for mkl (#33382) 2022-10-27 15:01:59 -06:00
SoniaScard
600948558d ophidia-analytics-framework: new package at v1.7 (#33567)
* ophidia-analytics-framework: new package at v1.7
* Fix code style in ophidia-analytics-framework

Co-authored-by: SoniaScard <SoniaScard@users.noreply.github.com>
Co-authored-by: Donatello Elia <eldoo@users.noreply.github.com>
2022-10-27 13:42:58 -07:00
Adam J. Stewart
a8c0a662a1 py-torchdata: add v0.5.0 (#33568) 2022-10-27 15:02:46 -05:00
Brian Van Essen
6408b51def Support ROCm backing in DiHydrogen (#33563)
* Added support for building the DiHydrogen package and LBANN extensions
to DiHydrogen with ROCm libraries.

Fixed a bug on Cray systems where CMake didn't try hard enough to find
an MPI-compatible compiler wrapper. Made it search harder.

Added support for the roctracer package when using ROCm libraries.

* Fixed how ROCm support is defined for pre-v0.3 versions.
2022-10-27 21:19:56 +02:00
Stephen McDowell
4be67facdc ECP-SDK: enable hdf5-vfd-gds when +cuda (#33300)
- hdf5-vfd-gds:
    - Add new version 1.0.2 compatible with hdf5@1.13.
    - CMake is a build dependency.
    - Set `HDF5_PLUGIN_PATH` in the runtime environment, this plugin
      is loaded dynamically.
- SDK:
    - The VFD GDS driver only has utility when CUDA is enabled.
    - Require hdf5-vfd-gds@1.0.2+ (1.0.1 and earlier do not compile).
2022-10-27 13:51:19 -05:00
Carson Woods
13636a2094 Add new versions to wi4mpi (#33569) 2022-10-27 11:32:20 -07:00
Massimiliano Culpo
ec50906943 Update macOS Python version in CI to 3.10 (#33560) 2022-10-27 20:25:26 +02:00
Ben Boeckel
ea1719d986 Paraview catalyst support (#33369)
* paraview: add support for Catalyst 1 APIs

* paraview: add support for libcatalyst impl support
2022-10-27 12:18:16 -06:00
Miroslav Stoyanov
6f4d69cf8d new version and rocm fix (#33536) 2022-10-27 10:18:44 -07:00
Luke Diorio-Toth
57226a870b py-ncbi-genome-download: new package (#33511)
* py-ncbi-genome-download: new package

* fixed style
2022-10-27 11:51:36 -05:00
Satish Balay
34d55af55d phist: add v1.11.2 (#33561) 2022-10-27 11:44:51 -05:00
Brian Van Essen
5f99d3dfaa spdlog: allow using vendored fmt library (#33379) 2022-10-27 11:13:07 -05:00
Lucas Nesi
86337f042e chameleon: correct chameleon+simgrid build (#33559)
* chameleon: correct chameleon+simgrid build
* chameleon: remove unused declarations
2022-10-27 08:59:06 -07:00
SoniaScard
2960d8ac0a ophidia-io-server: new package at v1.7 (#33436)
Co-authored-by: SoniaScard <SoniaScard@users.noreply.github.com>
2022-10-27 17:52:49 +02:00
psakievich
b9bee70a97 curl: stop auto generating file named str (#33532)
`determine_variants` in the `curl` package autogenerates an empty file named `str` every time it is called.
2022-10-27 17:33:51 +02:00
ryandanehy
5a939d9c94 exago, hiop: propagate build type (#32750) 2022-10-27 17:14:09 +02:00
Satish Balay
6cb4a00280 petsc+kokkos: pass in cuda_arch, rocm_arch to kokkos (#33530)
Also remove dependency on kokkos+wrapper
2022-10-27 17:02:36 +02:00
Lucas Nesi
ecdfe02355 starpu: correct fxt dependency variant when simgrid (#33558) 2022-10-27 09:02:09 -06:00
Massimiliano Culpo
b19549ce9d suite-sparse: add versions up to v5.13.0 (#33550) 2022-10-27 17:01:50 +02:00
G-Ragghianti
590c4e35ca Package slate: added requirements for cuda_arch (#33554) 2022-10-27 08:54:16 -06:00
Lucas Nesi
41d53b85f8 simgrid: add v3.32 (#33557) 2022-10-27 08:46:54 -06:00
Mosè Giordano
07d9c254a2 intel-mpi: add cpio as build dependency (#33555) 2022-10-27 08:26:06 -06:00
Massimiliano Culpo
9a51d42cec oce: rework recipe to prefer old intel-tbb (#33553) 2022-10-27 15:34:47 +02:00
Satish Balay
883b7cfa29 hiop: add version 0.7.1 (#33543) 2022-10-27 15:00:15 +02:00
Massimiliano Culpo
605411a9fb LuaPackage: add missing attribute (#33551)
fixes #33544
2022-10-27 06:17:54 -06:00
Harmen Stoppels
df1d233573 Don't fail over cpuinfo (#33546) 2022-10-27 11:14:09 +02:00
Marco De La Pierre
a82a9cf3c1 tower-agent and tower-cli: update versions (#33545) 2022-10-27 11:10:22 +02:00
Massimiliano Culpo
f89be5d7e4 Fix bootstrapping from sources in CI (#33538)
Since #32262 we have not been testing bootstrapping from sources, because
the mirrors used in tests were not updated.
2022-10-27 07:40:10 +02:00
iarspider
9d7c688d3c cppunit: add static/shared variant, add version 1.15_20220904 (#33522) 2022-10-26 17:04:29 -07:00
Wouter Deconinck
8fc3e49e00 opencascade: new version 7.6.3 (#33518)
* opencascade: new version 7.6.3
* opencascade: correct hash for 7.6.3

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2022-10-26 17:41:56 -06:00
iarspider
6594f49758 CLHEP: Add checksum for clhep 2.4.5.4, 2.4.6.0; cleanup recipe (#33521)
* CLHEP: patching of CMake policy not needed for new-ish versions
* Add checksum for clhep 2.4.5.4, 2.4.6.0; cleanup recipe
2022-10-26 16:41:40 -07:00
Ben Boeckel
8b202769f4 libcatalyst: add 2.0.0-rc3 release (#33322) 2022-10-26 14:45:55 -06:00
百地 希留耶
4ff8a6a9b7 Windows: fix bootstrap and package install failure (#32942)
* Add patches for building clingo with MSVC
* Help python find clingo DLL
* If an executable is located in "C:\Program Files", Executable was
  running into issues with the extra space. This quotes the exe
  to ensure that it is treated as a single value.

Signed-off-by: Kiruya Momochi <65301509+KiruyaMomochi@users.noreply.github.com>
2022-10-26 13:45:35 -07:00
Satish Balay
5d0ae001a1 slepc: fix for slepc+cuda ^petsc+kokkos+cuda ^kokkos+cuda+wrapper (#33529)
kokkos wrappers modify mpicxx - breaking slepc build.
2022-10-26 13:04:22 -07:00
Miroslav Stoyanov
bfc23f4560 new version (#33537) 2022-10-26 12:38:20 -07:00
Harmen Stoppels
34f9394732 gitlab ci: show build machine info (#33523) 2022-10-26 20:31:16 +02:00
Massimiliano Culpo
30c9ff50dd Allow for packages with multiple build-systems (#30738)
This commit extends the DSL that can be used in packages
to allow declaring that a package uses different build-systems
under different conditions.

It requires each spec to have a `build_system` single valued
variant. The variant can be used in many context to query, manipulate
or select the build system associated with a concrete spec.

The knowledge to build a package has been moved out of the
PackageBase hierarchy, into a new Builder hierarchy. Customization
of the default behavior for a given builder can be obtained by
coding a new derived builder in package.py.

The "run_after" and "run_before" decorators are now applied to
methods on the builder. They can also incorporate a "when="
argument to specify that a method is run only when certain
conditions apply.

For packages that do not define their own builder, forwarding logic
is added between the builder and package (methods not found in one
will be retrieved from the other); this PR is expected to be fully
backwards compatible with unmodified packages that use a single
build system.
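
A hedged sketch of what the extended DSL looks like in a package.py (hypothetical
package; the version constraint is illustrative):

```python
from spack.package import *

class Example(CMakePackage, AutotoolsPackage):
    """Hypothetical package that can build with either CMake or Autotools."""

    build_system("cmake", "autotools", default="cmake")

    with when("build_system=cmake"):
        depends_on("cmake@3.18:", type="build")
```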
2022-10-26 20:17:32 +02:00
Adam J. Stewart
83ee500108 py-torchmetrics: add v0.10.1 (#33535) 2022-10-26 12:14:08 -06:00
Satish Balay
b497581ce7 pflotran: fix build errors with gfortran@10: (#33527)
>> 38    Error: Rank mismatch between actual argument at (1) and actual argument at (2) (scalar and rank-1)
2022-10-26 11:33:52 -06:00
Satish Balay
117a82117d petsc, py-petsc4py: add 3.18.1 (#33525) 2022-10-26 09:40:32 -07:00
MatthewLieber
bb64d09ccd Updating package file for osu-micro-benchmarks for the 6.2 release (#33512)
* Updating package file for osu-micro-benchmarks for the 6.2 release
* updating sha hash for 6.2 tarball

Co-authored-by: natshineman <shineman.5@osu.edu>
2022-10-26 09:34:37 -07:00
Matthieu Dorier
8ddb5c3299 [mochi-margo] added version 0.10 (#33519) 2022-10-26 09:20:33 -07:00
Dom Heinzeller
3e37ad9aee Add netcdf-c 4.9.0 and netcdf-fortran 4.6.0 (supersedes #31953) (#33514)
* Add netcdf-c 4.9.0 and netcdf-fortran 4.6.0

With v4.9.0 netcdf-c introduces zstandard compression option which is added as a variant.

* Fix when= in dependency

* Turn on variant zstd by default

Co-authored-by: kgerheiser <kgerheiser@icloud.com>
2022-10-26 09:18:17 -07:00
Satish Balay
b3794761ab alquimia, pflotran, plasma, py-mpi4py, strumpack - add in new versions (#33447)
* alquimia, pflotran, plasma, py-mpi4py, strumpack - add in new versions

* Fix hip CI failure

Co-authored-by: eugeneswalker <eugenesunsetwalker@gmail.com>
2022-10-26 09:09:45 -07:00
Harmen Stoppels
6413862f84 fix use of non-existing kwarg (#33520) 2022-10-26 09:03:40 -07:00
Harmen Stoppels
b92bdf8c72 Relocation regex single pass (#33496)
Instead of looping over multiple regexes and the entire text file
contents, create a giant regex with all literal prefixes and do a single
pass over files to detect prefixes. Not only is a single pass faster,
it's also likely that the regex is compiled better, given that most
prefixes share a common ... prefix.
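
Schematically (a sketch, not the actual implementation; the prefixes are examples):

```python
import re

prefixes = [b"/spack/opt/old-root", b"/spack/opt/other-old-root"]
# One alternation of escaped literals: a single compiled scan of each file
# instead of one pass per prefix.
pattern = re.compile(b"|".join(re.escape(p) for p in prefixes))

def detect_prefixes(data: bytes) -> set:
    return {m.group(0) for m in pattern.finditer(data)}
```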
2022-10-26 10:41:31 +02:00
Harmen Stoppels
a2520e80c0 gitlab ci: install binary deps faster (#33248)
* Fast Gitlab CI job setup, and better legibility

* Use a non-broken, recent GNU Make
2022-10-26 09:19:24 +02:00
Harmen Stoppels
d039744a5b dfs traversal: simplify edges in reverse mode (#33481)
In the dfs code, flip edges so that `parent` means `from` and
`spec` means `to` in the direction of traversal. This makes it slightly
easier to write generic/composable code. For example when using visitors
where one visitor reverses direction, and another only cares about
accepting particular edges or not depending on whether the target node
is seen before, it would be good if the second visitor didn't have to
know whether the order was changed or not.
2022-10-25 22:55:05 -07:00
Mark W. Krentel
57c1d6c410 elfutils: add version 0.187 (#33419)
* elfutils: add version 0.187
* Move conflict for debuginfod to variant when clause.
* Add myself as maintainer.
2022-10-25 19:06:29 -07:00
MichaelLaufer
53a76761d0 Scotch: 'link_error_lib' variant - Link error handling library to libscotch/libptscotch (#33297) 2022-10-25 19:04:04 -07:00
Luke Diorio-Toth
7b053fde89 New packages: mlst, any2fasta, perl-carp, perl-class-method-modifiers, perl-role-tiny, perl-moo, perl-sub-quote (#33274)
* added mlst and deps
* added install() to any2fasta
* added script dependencies, builds OK
* fixed style
* Update var/spack/repos/builtin/packages/any2fasta/package.py

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2022-10-25 16:08:56 -07:00
Harmen Stoppels
b538acb2a9 binary_distribution: compress level 9 -> 6 (#33513)
Use the same compression level as `gzip` (6) instead of what Python uses
(9).

The LLVM tarball takes 4m instead of 12m to create, and is <1% larger.
That's not worth the wait...
2022-10-25 22:15:16 +00:00
Rémi Lacroix
8aa9758024 udunits: Update download URL (#32390)
* udunits: Update download URL
* udunits: Deprecate older versions

Unidata now only provides the latest version of each X.Y branch. Older 2.2 versions have been deprecated accordingly but are still available in the build cache.

Co-authored-by: RemiLacroix-IDRIS <RemiLacroix-IDRIS@users.noreply.github.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2022-10-25 15:06:52 -05:00
Dom Heinzeller
10aa6bdfc1 Update wgrib2 from JCSDA/NOAA-EMC fork (#32857)
* Update wgrib2 from JCSDA/NOAA-EMC fork
* var/spack/repos/builtin/packages/wgrib2/package.py: fix typo in comment, add conflict for variants netcdf3, netcdf4
* wget hdf5/netcdf4 internal dependencies for wgrib2
* Black-format var/spack/repos/builtin/packages/wgrib2/package.py
* More format changes in var/spack/repos/builtin/packages/wgrib2/package.py
2022-10-25 12:52:01 -07:00
Harmen Stoppels
649e2d3e28 depfile: resurrect lost touch (#33504) 2022-10-25 12:48:24 -07:00
Tamara Dahlgren
512f8d14d2 feature: Add -x|--explicit option to 'spack test run' (#32910) 2022-10-25 12:32:55 -07:00
Jonathon Anderson
9d5151ba25 BinaryCacheIndex: track update failures with cooldown (#33509)
#32137 added an option to update() a BinaryCacheIndex with a
cooldown: repeated attempts within this cooldown would not
actually retry. However, the cooldown was not properly
tracked for failures (which is common when the mirror
does not store any binaries and therefore has no index.json).

This commit ensures that update(..., with_cooldown=True) will
also skip the update even if a failure has occurred within the
cooldown period.
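
Conceptually, the fix amounts to time-stamping every attempt, successful or failed (a
sketch with invented names):

```python
import time

class UpdateCooldown:
    def __init__(self, seconds: float):
        self.seconds = seconds
        self.last_attempt = 0.0  # epoch seconds of the last try

    def should_update(self) -> bool:
        return time.time() - self.last_attempt >= self.seconds

    def record_attempt(self) -> None:
        """Call after every attempt, including failures."""
        self.last_attempt = time.time()
```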
2022-10-25 11:36:12 -07:00
Rémi Lacroix
d2e75045b8 Gmsh: Fix CGNS support for version up to 4.7.1 (#33508)
Gmsh started supporting the "scoping" option of CGNS in version 4.8.0.
2022-10-25 10:40:08 -07:00
Harmen Stoppels
09e0bd55c2 spec.py: prefer transitive link and direct build/run/test deps (#33498)
Due to reuse concretization, we may generate DAGs with two occurrences
of the same package corresponding to distinct specs. This happens when
build dependencies are reused, since their dependencies are ignored in
concretization.

This caused a regression, for example: `spec['openssl']` would take the
'openssl' of the build dep `cmake`, instead of the direct `openssl`
dependency, simply because the edge to `cmake` was traversed first and
we do depth first traversal.

One solution that was discussed is to limit `spec[name]` to just direct
deps, or direct deps + transitive link deps, but this is too breaking.
Instead, this PR simply prioritizes transitive link and direct
build/run/test deps, and then falls back to a full DAG traversal. So,
it's just about order of iteration.
2022-10-25 16:47:50 +02:00
Massimiliano Culpo
00ae74f40e Update Spack Dockerfiles (#33500)
* Use spack bootstrap now in containers

* Fix wrong path glob expression
2022-10-25 11:46:47 +00:00
Massimiliano Culpo
62526c1085 Make CI on Windows fail fast (#33502) 2022-10-25 13:04:25 +02:00
Massimiliano Culpo
4b237349a3 Remove recursive symbolic link in lib/spack/docs from git repository (#33483)
Delete code removing the symlink during CI
2022-10-25 12:27:13 +02:00
Harmen Stoppels
d361378553 Improve legibility of Gitlab CI (#33482)
Use --backtrace in ci instead of --debug to reduce verbosity
and don't show log on error, since log is already printed
2022-10-25 12:21:34 +02:00
Massimiliano Culpo
329adc1d22 CI: speed-up tests by dropping coverage on Python 2.7 (#33497) 2022-10-25 11:41:29 +02:00
Brian Spilner
272767c67f cdo: add new release 2.1.0 (#33303) 2022-10-25 08:49:17 +02:00
Miroslav Stoyanov
8aa09fd1c0 fix problems with missing rocm dependencies (#33381) 2022-10-24 19:34:04 -07:00
Zach Jibben
8e78a91ebd Add py-myst-parser & update py-mdit-py-plugins and py-sphinxcontrib-mermaid (#33427)
* Update py-sphinxcontrib-mermaid

* Add py-myst-parser

* Fix py-mdit-py-plugins and py-myst-parser dependencies

* Add py-exhale package

* Update var/spack/repos/builtin/packages/py-mdit-py-plugins/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update var/spack/repos/builtin/packages/py-myst-parser/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update var/spack/repos/builtin/packages/py-myst-parser/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update var/spack/repos/builtin/packages/py-myst-parser/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update py-exhale and py-myst-parser dependencies

* Add @svenevs as py-exhale maintainer

* Update var/spack/repos/builtin/packages/py-mdit-py-plugins/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2022-10-24 20:38:23 -05:00
eugeneswalker
baa21d664b e4s ci: use an appropriate name for cdash build group name (#33494) 2022-10-24 16:15:01 -07:00
iarspider
e7512bcb7b Add filename to text_to_relocate only if it needs to be relocated (#31074)
Scan the text files for relocatable prefixes *before* creating a tarball,
to reduce the amount of work to be done during install from binary
cache. 

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2022-10-24 23:32:46 +02:00
Jon Rood
20492fa48e cppcheck: add version 2.9 (#33491) 2022-10-24 13:37:58 -07:00
Ryan Mulhall
2eb0660a79 update fms package for v2022.04 (#33484)
Co-authored-by: rem1776 <Ryan.Mulhall@noaa.gov>
2022-10-24 13:15:19 -07:00
Danny McClanahan
0b971a1aef redact line numbers from grouped exception message (#33485) 2022-10-24 17:08:18 +00:00
eugeneswalker
a51bd80a5e e4s ci: add chai +rocm (#32506) 2022-10-24 18:16:56 +02:00
eugeneswalker
dda2ff4653 chai +rocm: use hipcc as CMAKE_CXX_COMPILER (#33479) 2022-10-24 09:08:16 -07:00
Luke Diorio-Toth
560a9eec92 py-drep and ANIcalculator: new packages (#33467)
* py-drep: new package

* fixed file extension

* added darwin conflict

* py-checkm-genome and py-pysam: bumped version and updated deps (#10)

added checkm and pysam deps

* added dep documentation and fixed style

* changed checkm and pysam back to dev version for upstreaming

* added url and perl run dep

* fixed style
2022-10-24 09:01:28 -05:00
Harmen Stoppels
d67b12eb79 locks: improved errors (#33477)
Instead of showing

```
==> Error: Timed out waiting for a write lock.
```

show

```
==> Error: Timed out waiting for a write lock after 1.200ms and 4 attempts on file: /some/file
```

s.t. we actually get to see where acquiring a lock failed even when not
running in debug mode.

And use pretty time units everywhere, so we don't get 1.45e-9 seconds
but 1.450ns etc.
2022-10-24 11:54:49 +02:00
Harmen Stoppels
7d99fbcafd backtraces with --backtrace (#33478)
* backtraces without --debug

Currently `--debug` is too verbose and not-`--debug` gives too little
context about where exceptions are coming from.

So, instead, it'd be nice to have `spack --backtrace` and
`SPACK_BACKTRACE=1` as methods to get something in between: no verbose
debug messages, but always a full backtrace.

This is useful for CI, where we don't want to drown in debug messages
when installing deps, but we do want to get details where something goes
wrong if it goes wrong.

* completion
2022-10-23 18:12:38 -07:00
Michael Kuhn
6c32c6fbdb py-gcovr: add 5.2 (#33476) 2022-10-23 08:49:57 -05:00
Michael Kuhn
f8a899f904 ca-certificates-mozilla: add 2022-10-11 (#33331) 2022-10-23 13:44:36 +02:00
HELICS-bot
6e0d06c104 helics: Add version 3.3.1 (#33475) 2022-10-23 05:05:54 -06:00
Michael Kuhn
efa6acae86 sqlite: add 3.39.4 (#33339) 2022-10-23 04:45:46 -06:00
Michael Kuhn
e00208c566 gettext: add 0.21.1 (#33333) 2022-10-23 01:33:54 -06:00
Michael Kuhn
d4a7ea8eb0 util-linux, util-linux-uuid: add 2.38.1 (#33342) 2022-10-23 00:49:53 -06:00
Michael Kuhn
3256688f20 python: add 3.10.8, 3.9.15, 3.8.15, 3.7.15 (#33340)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2022-10-22 21:57:57 -06:00
Michael Kuhn
0bc596fc27 zlib: add 1.2.13 (#33337) 2022-10-22 19:53:51 -06:00
eugeneswalker
afc33518e6 openblas@0.3.21: fix misdetection of gfortran on cray (#33444) 2022-10-22 18:53:52 -06:00
eugeneswalker
8e2696172b netlib-scalapack %cce: add -hnopattern to fflags (#33422) 2022-10-22 09:10:28 -06:00
Massimiliano Culpo
dc110db65d Don't install xdist in CI on Python 2.7 (#33474) 2022-10-22 15:53:42 +02:00
Wouter Deconinck
5e2f258767 acts: new versions (#32969)
* acts: new versions

In the 20.x release line, these are the changes: https://github.com/acts-project/acts/compare/v20.0.0...v20.2.0
- `option(ACTS_SETUP_ACTSVG "Build ActSVG display plugin" OFF)` introduced in v20.1.0
- `option(ACTS_USE_SYSTEM_ACTSVG "Use the ActSVG system library" OFF)` introduced in v20.1.0
- `option(ACTS_BUILD_PLUGIN_ACTSVG "Build SVG display plugin" OFF)` introduced in v20.1.0
- `option(ACTS_USE_EXAMPLES_TBB "Use Threading Building Blocks library in examples" ON)` introduced in v20.1.0
- `option(ACTS_EXATRKX_ENABLE_ONNX "Build the Onnx backend for the exatrkx plugin" OFF)` introduced in v20.2.0
- `option(ACTS_EXATRKX_ENABLE_TORCH "Build the torchscript backend for the exatrkx plugin" ON)` introduced in v20.2.0

In the 19.x release line, these are the changes: https://github.com/acts-project/acts/compare/v19.7.0...v19.9.0
- `option(ACTS_USE_EXAMPLES_TBB "Use Threading Building Blocks library in examples" ON)` introduced in v19.8.0

The new build options have not been implemented in this commit but will be implemented next.

* acts: new variant svg

* actsvg: new package

* actsvg: style fixes

* acts: new versions 20.3.0 and 19.10.0

* actsvg: depends_on boost googletest

* actsvg: new version 0.4.26 (and style fix)

Includes fix to build issue when +examples, https://github.com/acts-project/actsvg/pull/23

* acts: new variant tbb when +examples @19.8:19 @20.1:

* acts: set ACTS_USE_EXAMPLES_TBB

* acts: no need for ACTS_SETUP_ACTSVG

* acts: move tbb variant to examples block

* acts: ACTS_USE_SYSTEM_ACTSDD4HEP removed in 20.3

* acts: use new ACTS_USE_SYSTEM_LIBS

* acts-dd4hep: new version 1.0.1, maintainer handle fixed

* acts: simplify variant tbb condition
2022-10-22 13:56:22 +02:00
Zack Galbreath
6d3869a7d3 Remove x86_64_v4 target from AHUG and ISC stacks (#33464) 2022-10-22 08:53:07 +00:00
Harmen Stoppels
669bbe1e66 stop building binaries for the 1% (#33463) 2022-10-22 02:01:50 -06:00
Andrey Prokopenko
cff76fb23f arborx: add new release 1.3 (#33402)
* arborx: add new release 1.3

* [@spackbot] updating style on behalf of aprokop
2022-10-21 21:36:19 -07:00
Chris White
773de54cd9 honor global spack flags (#33470) 2022-10-21 16:50:01 -06:00
eugeneswalker
3fd097f1d5 raja@0.14.0 +rocm: add -std=c++14 to HIP_HIPCC_FLAGS (#33456) 2022-10-21 14:58:58 -07:00
Alex Richert
9268b14f96 Update maintainers for NOAA/EMC-maintained libraries (#33469)
* Update maintainers for NOAA/EMC-maintained libraries
* Fix line lengths
* Fix line length for gptl
2022-10-21 14:58:24 -07:00
Jon Rood
ffbace0fbd openfast: Fix package file (#33454)
* Fix openfast package file.

* Fix openfast package file.

* Fix typo.

* [@spackbot] updating style on behalf of jrood-nrel

Co-authored-by: jrood-nrel <jrood-nrel@users.noreply.github.com>
2022-10-21 13:50:05 -07:00
Luke Diorio-Toth
6fdb8b2682 py-instrain: added required + optional dependency (#33465)
* added py-instrain dependencies

* fixed style

* removed coverm dep until I have a working coverm package

* added dep documentation
2022-10-21 13:54:23 -06:00
Luke Diorio-Toth
b1836a7c50 updated python version requirements (#33466)
* updated python version requirements

* updated sha256

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2022-10-21 13:50:19 -06:00
Luke Diorio-Toth
3c37bfb6d9 py-checkm-genome and py-pysam: bumped version and updated deps (#33449)
* py-checkm-genome and py-pysam: bumped version and updated deps

* updated setuptools dep type

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2022-10-21 14:38:18 -05:00
Scott Wittenburg
27921c38ce gitlab: Retry protected publish jobs in certain cases (#32496)
When we lose a running pod (possibly loss of spot instance) or encounter
some other infrastructure-related failure of this job, we need to retry
it.  This retries the job the maximum number of times in those cases.
2022-10-21 10:35:20 -06:00
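
The actual change configures GitLab's built-in retry settings, but the underlying policy — retry only failures that look infrastructure-related — can be sketched like this (`InfrastructureError` and `job` are hypothetical stand-ins):

```python
import time

class InfrastructureError(Exception):
    """A failure caused by the environment (e.g. a lost pod), not the build."""

def run_with_retries(job, max_retries=2):
    """Retry a job, but only for infrastructure-related failures."""
    for attempt in range(max_retries + 1):
        try:
            return job()
        except InfrastructureError:
            if attempt == max_retries:
                raise
            time.sleep(2 ** attempt)  # brief backoff before the next attempt
```
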
Harmen Stoppels
f9112a6244 Relocation should take hardlinks into account (#33460)
Currently `relocate_text` and `relocate_text_bin` are unsafe in the
sense that they run in parallel, and lead to races when modifying
different items pointing to the same inode.

This leads to the issue observed in #33453.

This PR:

1. Renames those functions to `unsafe_*` so people are aware
2. Adds logic to deal with hardlinks in current binary packages
3. Adds logic to deal with hardlinks when creating new binary tarballs,
   so the install side doesn't have to de-dupe hardlinks.
4. Adds a test for 3

The assumption is that all our relocation logic preserves inodes. That
is, we should never copy a file, modify it, and then move it back. I
quickly verified, and it seems like this is true for (binary) text
relocation, as well as rpath patching in patchelf (even when the file
grows) and mach-o fixes.
2022-10-21 18:30:26 +02:00
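
A sketch of the de-duplication idea (not the PR's code): group candidate files by inode so each underlying file is rewritten exactly once, avoiding races between parallel workers:

```python
import os
from collections import defaultdict

def one_path_per_inode(paths):
    """Return one representative path per (device, inode) pair.

    Hardlinks share an inode, so modifying one link in place modifies
    them all; relocating each inode once avoids duplicate, racy writes.
    """
    groups = defaultdict(list)
    for path in paths:
        st = os.lstat(path)
        groups[(st.st_dev, st.st_ino)].append(path)
    return [group[0] for group in groups.values()]
```
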
eugeneswalker
7c3d93465c axom@0.7.0: require cmake@3.21: (#33450)
* axom@0.7.0: require cmake@3.21:

* Update var/spack/repos/builtin/packages/axom/package.py

Co-authored-by: Chris White <white238@llnl.gov>

Co-authored-by: Chris White <white238@llnl.gov>
2022-10-21 08:15:31 -07:00
Harmen Stoppels
428a8f72a0 git: new versions and deprecations (#33408) 2022-10-21 15:02:39 +02:00
Harmen Stoppels
b1559cc831 gitlab: when_possible -> false (#33443)
`reuse` and `when_possible` concretization broke the invariant that
`spec[pkg_name]` has unique keys. This invariant is relied on in tons of
places, such as when setting up the build environment.

When using `when_possible` concretization, one may end up with two or
more `perl`s or `python`s among the transitive deps of a spec, because
concretization does not consider build-only deps of reusable specs.

Until the code base is fixed not to rely on this broken property of
`__getitem__`, we should disable reuse in CI.
2022-10-21 14:42:06 +02:00
Satish Balay
93db654b8d intel-tbb: add in versions 2021.7.0, 2021.6.0 (#33445)
2021.7.0 fixes build on linux-ubuntu20.04-skylake / oneapi@2022.2.0
2022-10-21 06:34:18 -04:00
Massimiliano Culpo
abf3a696bd Remove "spack buildcache copy" in v0.19.0 (#33437) 2022-10-21 12:17:53 +02:00
eugeneswalker
1e4732d5fe e4s ci: add raja +rocm (#32505) 2022-10-20 14:26:18 -07:00
Luke Diorio-Toth
9621b2bb3b replaced package shortbred with py-shortbred (#33404)
* fixed version numbers to python 2 and old biopython

* changed shortbred package to PyPI, removed python 2 version

* added package description

* re-added shortbred package with deprecated flag

* fixed style and removed unnecessary python dep (it can't build with python 2 anyway)

* removed whitespace and re-added the python2.7.9+ dep

* fixed style
2022-10-20 15:44:57 -05:00
Mikael Simberg
ee721f9c91 Add Boost 1.80.0 (#32879)
* Add Boost 1.80.0
* Add conflict for Boost 1.80.0 and dealii
* Add conflict for Boost 1.80 and %oneapi
2022-10-20 13:32:25 -07:00
eugeneswalker
e981ab9b65 hiop: add v0.7.0 (#33441)
* hiop: add v0.7.0

* Update var/spack/repos/builtin/packages/hiop/package.py

Co-authored-by: Cameron Rutherford <cameron.rutherford@me.com>

Co-authored-by: Cameron Rutherford <cameron.rutherford@me.com>
2022-10-20 20:21:43 +00:00
Cody Balos
e60e743843 kokkos and kokkos-kernels: add new versions (#33301)
* kokkos: add version 3.7.00
* kokkos-kernels: add versions 3.6.01 and 3.7.00
* add correct kokkos dependence
2022-10-20 11:50:59 -07:00
Satish Balay
70e369086c butterflypack: add version 2.2.2 and openmp variant (#33416)
- add conflict with gcc < 7
- fails with macOS sed, so add (gnu) sed as a build dependency
2022-10-20 11:20:45 -07:00
Jon Rood
cd015b8498 Update OpenFAST package file (#33438)
* Update OpenFAST package file.
* Add comment.
2022-10-20 11:04:55 -07:00
SoniaScard
69e66f57a9 ophidia-primitives: new package at v1.7 (#33434)
* ophidia-primitives: new package at v1.7
* ophidia-primitives: Add maintainers
* ophidia-primitives: Fix style

Co-authored-by: SoniaScard <SoniaScard@users.noreply.github.com>
2022-10-20 10:56:53 -07:00
Adam J. Stewart
0b873be13c py-rasterio: add v1.3.3 (#33428) 2022-10-20 10:12:59 -07:00
Rémi Lacroix
b3e04e8cd2 spdlog: Add version 1.10.0 (#33431) 2022-10-20 10:12:02 -07:00
Scott Wittenburg
9a1957c881 gitlab: Do not use root_spec['pkg_name'] anymore (#33403)
* gitlab: Do not use root_spec['pkg_name'] anymore

For a long time it was fine to index a concrete root spec with the name
of a dependency in order to access the concrete dependency spec.  Since
pipelines started using `--use-buildcache dependencies:only,package:never`
though, it has exposed a scheduling issue in how pipelines are
generated.  If a concrete root spec depends on two different hashes of
`openssl` for example, indexing that root with just the package name
is ambiguous, so we should no longer depend on that approach when
scheduling jobs.

* env: make sure exactly one spec in env matches hash
2022-10-20 10:33:18 -06:00
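
A hedged sketch of the unambiguous lookup, assuming Spack's `Spec.traverse()` and `Spec.dag_hash()` interfaces (`find_dep_by_hash` is a hypothetical helper, not the PR's code):

```python
def find_dep_by_hash(root, dag_hash):
    """Find exactly one spec in a concrete DAG by its hash.

    Indexing by name (root["openssl"]) is ambiguous when two hashes of
    the same package appear among the transitive dependencies.
    """
    matches = [s for s in root.traverse() if s.dag_hash() == dag_hash]
    if len(matches) != 1:
        raise ValueError("expected exactly 1 match, got %d" % len(matches))
    return matches[0]
```
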
G-Ragghianti
af5d6295d5 Change scalapack to test-only dependency (#33433) 2022-10-20 08:42:52 -07:00
eugeneswalker
3e1db75372 arpack-ng %cce: add -hnopattern to fflags (#33424) 2022-10-20 08:09:35 -07:00
Luke Diorio-Toth
70d2556f4b py-bakta: new package (#33417) 2022-10-20 08:22:59 -05:00
Massimiliano Culpo
89cf5004db FIX CI after git update (#33429)
Add `protocol.file.allow always` to git configuration in CI
2022-10-20 09:46:04 +02:00
eugeneswalker
8aac0d09d4 e4s ci: add umpire +rocm (#32504) 2022-10-19 17:05:45 -06:00
Richard Berger
0934c4d602 singularity-eos: new version 1.6.2 (#33415) 2022-10-19 14:06:44 -07:00
Harmen Stoppels
e1344067fd depfile: buildcache support (#33315)
When installing some/all specs from a buildcache, build edges are pruned
from those specs. This can result in a much smaller effective DAG. Until
now, `spack env depfile` would always generate a full DAG.

This PR adds the `spack env depfile --use-buildcache` flag that was
introduced for `spack install` before. This way, not only can we drop
build edges, we can also automatically set the right buildcache-related
flags on the specific specs that are going to be installed.

This way we get parallel installs of binary deps without redundancy,
which is useful for Gitlab CI.
2022-10-19 13:57:06 -07:00
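
A toy sketch of the pruning idea, with a made-up edge representation (`edges` maps a node to `(child, deptypes)` pairs): nodes reachable only through build-only edges drop out of the effective DAG:

```python
def prune_build_edges(root, edges):
    """Collect nodes reachable from root without following build-only edges."""
    keep, stack = {root}, [root]
    while stack:
        node = stack.pop()
        for child, deptypes in edges.get(node, []):
            if deptypes == {"build"}:  # pure build edge: pruned for binary installs
                continue
            if child not in keep:
                keep.add(child)
                stack.append(child)
    return keep
```

For example, with `edges = {"app": [("cmake", {"build"}), ("zlib", {"build", "link"})]}`, only `app` and `zlib` survive, since `cmake` is reachable only through a pure build edge.
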
Jon Rood
ae7999d7a1 Simplify TIOGA package (#33396)
* Update TIOGA package.

* Add comment.

* Remove cuda variant and MPI_ROOT.

* Style.
2022-10-19 13:33:01 -07:00
Adam J. Stewart
4b0832d3bc py-pandas: add v1.5.1 (#33412) 2022-10-19 14:18:00 -06:00
Sergey Kosukhin
ef872cc64b mpich: enable building when @3.4~cuda (#33325) 2022-10-19 13:04:32 -07:00
eugeneswalker
1f0751defe patch std::filesystem check as done in llnl/umpire pr#784 (#33250) 2022-10-19 14:14:46 -05:00
eugeneswalker
fa30f74e0c umpire +rocm: use hipcc as CMAKE_CXX_COMPILER (#33377) 2022-10-19 12:53:58 -06:00
eugeneswalker
6b45e2fef1 raja +rocm: use hipcc as CMAKE_CXX_COMPILER (#33375) 2022-10-19 13:43:51 -05:00
Luke Diorio-Toth
5cce66be75 pilercr: new package (#33251)
* new package
* fixed style
* actually building now
2022-10-19 11:41:39 -07:00
eugeneswalker
9f89926980 axom: python only reliably available when +python, +devtools (#33414) 2022-10-19 11:41:52 -06:00
Massimiliano Culpo
7ad7fde09c Add a command to bootstrap Spack right now (#33407) 2022-10-19 19:25:20 +02:00
Stephen Sachs
25cbb34579 Relocate "run" type dependencies too (#33191)
When downloading from binary cache not only replace RPATHs to dependencies, but
also text references to dependencies.

Example:
`autoconf@2.69` contains a text reference to the executable of its dependency
`perl`:

```
$ grep perl-5 /shared/spack/opt/spack/linux-amzn2-x86_64_v3/gcc-7.3.1/autoconf-2.69-q3lo/bin/autoreconf
eval 'case $# in 0) exec /shared/spack/opt/spack/linux-amzn2-x86_64_v3/gcc-7.3.1/perl-5.34.1-yphg/bin/perl -S "$0";; *) exec /shared/spack/opt/spack/linux-amzn2-x86_64_v3/gcc-7.3.1/perl-5.34.1-yphg/bin/perl -S "$0" "$@";; esac'
```

These references need to be replaced, or any package using `autoreconf` will fail
because it cannot find the installed `perl`.

Co-authored-by: Stephen Sachs <stesachs@amazon.com>
2022-10-19 17:37:07 +02:00
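
A simplified sketch of the in-place replacement (fine for text files like the `autoreconf` script above; relocating *binaries* additionally requires padding replaced C strings to equal length):

```python
def replace_prefix_in_place(path, old, new):
    """Rewrite references to a dependency prefix without copying the file.

    Opening with "rb+" and writing back keeps the same inode, so any
    hardlinks to this file stay intact.
    """
    with open(path, "rb+") as f:
        data = f.read()
        if old in data:
            f.seek(0)
            f.write(data.replace(old, new))
            f.truncate()

# Hypothetical usage with made-up prefixes:
# replace_prefix_in_place("bin/autoreconf", b"/old/perl-5.34.1", b"/new/perl-5.34.1")
```
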
Jonathon Anderson
a423dc646a Update the binary index before attempting direct fetches (#32137)
"spack install" will not update the binary index if given a concrete
spec, which causes it to fall back to direct fetches when a simple
index update would have helped. For S3 buckets in particular, this
significantly and needlessly slows down the install process.

This commit alters the logic so that the binary index is updated
whenever a by-hash lookup fails. The lookup is attempted again with
the updated index before falling back to direct fetches. To avoid
updating too frequently (potentially once for each spec being
installed), BinaryCacheIndex.update now includes a "cooldown"
option, and when this option is enabled it will not update more
than once in a cooldown window (set in config.yaml).

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2022-10-19 09:36:27 -06:00
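
A minimal sketch of such a cooldown gate (`CooldownGate` is a hypothetical name; the real option lives on `BinaryCacheIndex.update`, with the window set in `config.yaml`):

```python
import time

class CooldownGate:
    """Permit an action at most once per cooldown window."""

    def __init__(self, cooldown_seconds):
        self.cooldown = cooldown_seconds
        self.last = None

    def ready(self):
        now = time.monotonic()
        if self.last is None or now - self.last >= self.cooldown:
            self.last = now
            return True
        return False  # still within the window: skip the update
```
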
Tamara Dahlgren
3ec7304699 spack checksum: warn if version is deprecated (#32438)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2022-10-18 22:51:38 +00:00
Robert Cohn
7bb4b58b8b intel-oneapi-compilers: do not pass -Wno-unused-command-line-argument to icc + refactor (#33389) 2022-10-18 16:17:50 -06:00
Tamara Dahlgren
13356f3bfa Docs: Spack info option updates (#33376) 2022-10-18 21:11:21 +02:00
Harmen Stoppels
c6c5e56ec1 Reusable --use-buildcache with better validation (#33388)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2022-10-18 18:52:28 +00:00
Bernhard Kaindl
1ae32ff62c go,gcc: Support external go compilers for Go bootstrap (#27769)
For ARM64, fallback to gccgo. (go-bootstrap@1.4 can't support ARM64)
2022-10-18 10:18:49 -07:00
Howard Pritchard
d95f14084e papi: fix for Intel OneAPI compiler (#33225)
Without this patch one hits this error when trying to compile papi with Intel OneAPI:

icx: error: Note that use of '-g' without any optimization-level option will turn off most compiler optimizations similar to use of '-O0' [-Werror,-Wdebug-disables-optimization]

Signed-off-by: Howard Pritchard <howardp@lanl.gov>

Signed-off-by: Howard Pritchard <howardp@lanl.gov>
2022-10-18 11:18:04 -06:00
1757 changed files with 29718 additions and 24668 deletions

View File

@@ -25,7 +25,7 @@ jobs:
python-version: ${{inputs.python_version}}
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools pytest codecov 'coverage[toml]<=6.2'
pip install --upgrade pip six setuptools pytest codecov coverage[toml]
- name: Package audits (with coverage)
if: ${{ inputs.with_coverage == 'true' }}
run: |

View File

@@ -1,7 +1,7 @@
#!/bin/bash
set -ex
source share/spack/setup-env.sh
$PYTHON bin/spack bootstrap untrust spack-install
$PYTHON bin/spack bootstrap disable spack-install
$PYTHON bin/spack -d solve zlib
tree $BOOTSTRAP/store
exit 0

View File

@@ -42,7 +42,8 @@ jobs:
shell: runuser -u spack-test -- bash {0}
run: |
source share/spack/setup-env.sh
spack bootstrap untrust github-actions-v0.2
spack bootstrap disable github-actions-v0.4
spack bootstrap disable github-actions-v0.3
spack external find cmake bison
spack -d solve zlib
tree ~/.spack/bootstrap/store/
@@ -79,7 +80,8 @@ jobs:
shell: runuser -u spack-test -- bash {0}
run: |
source share/spack/setup-env.sh
spack bootstrap untrust github-actions-v0.2
spack bootstrap disable github-actions-v0.4
spack bootstrap disable github-actions-v0.3
spack external find cmake bison
spack -d solve zlib
tree ~/.spack/bootstrap/store/
@@ -143,7 +145,8 @@ jobs:
- name: Bootstrap clingo
run: |
source share/spack/setup-env.sh
spack bootstrap untrust github-actions-v0.2
spack bootstrap disable github-actions-v0.4
spack bootstrap disable github-actions-v0.3
spack external find cmake bison
spack -d solve zlib
tree ~/.spack/bootstrap/store/
@@ -160,7 +163,8 @@ jobs:
run: |
source share/spack/setup-env.sh
export PATH=/usr/local/opt/bison@2.7/bin:$PATH
spack bootstrap untrust github-actions-v0.2
spack bootstrap disable github-actions-v0.4
spack bootstrap disable github-actions-v0.3
spack external find --not-buildable cmake bison
spack -d solve zlib
tree ~/.spack/bootstrap/store/
@@ -210,7 +214,7 @@ jobs:
- name: Bootstrap clingo
run: |
set -ex
for ver in '2.7' '3.6' '3.7' '3.8' '3.9' '3.10' ; do
for ver in '3.6' '3.7' '3.8' '3.9' '3.10' ; do
not_found=1
ver_dir="$(find $RUNNER_TOOL_CACHE/Python -wholename "*/${ver}.*/*/bin" | grep . || true)"
echo "Testing $ver_dir"
@@ -261,7 +265,7 @@ jobs:
shell: runuser -u spack-test -- bash {0}
run: |
source share/spack/setup-env.sh
spack bootstrap untrust spack-install
spack bootstrap disable spack-install
spack -d gpg list
tree ~/.spack/bootstrap/store/
@@ -298,7 +302,8 @@ jobs:
run: |
source share/spack/setup-env.sh
spack solve zlib
spack bootstrap untrust github-actions-v0.2
spack bootstrap disable github-actions-v0.4
spack bootstrap disable github-actions-v0.3
spack -d gpg list
tree ~/.spack/bootstrap/store/
@@ -315,7 +320,7 @@ jobs:
- name: Bootstrap GnuPG
run: |
source share/spack/setup-env.sh
spack bootstrap untrust spack-install
spack bootstrap disable spack-install
spack -d gpg list
tree ~/.spack/bootstrap/store/
@@ -333,7 +338,8 @@ jobs:
run: |
source share/spack/setup-env.sh
spack solve zlib
spack bootstrap untrust github-actions-v0.2
spack bootstrap disable github-actions-v0.4
spack bootstrap disable github-actions-v0.3
spack -d gpg list
tree ~/.spack/bootstrap/store/

View File

@@ -13,7 +13,7 @@ on:
paths:
- '.github/workflows/build-containers.yml'
- 'share/spack/docker/*'
- 'share/templates/container/*'
- 'share/spack/templates/container/*'
- 'lib/spack/spack/container/*'
# Let's also build & tag Spack containers on releases.
release:
@@ -80,16 +80,16 @@ jobs:
fi
- name: Upload Dockerfile
uses: actions/upload-artifact@3cea5372237819ed00197afe530f5a7ea3e805c8
uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
with:
name: dockerfiles
path: dockerfiles
- name: Set up QEMU
uses: docker/setup-qemu-action@8b122486cedac8393e77aa9734c3528886e4a1a8 # @v1
uses: docker/setup-qemu-action@e81a89b1732b9c48d79cd809d8d81d79c4647a18 # @v1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@c74574e6c82eeedc46366be1b0d287eff9085eb6 # @v1
uses: docker/setup-buildx-action@8c0edbc76e98fa90f69d9a2c020dcb50019dc325 # @v1
- name: Log in to GitHub Container Registry
uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a # @v1
@@ -106,7 +106,7 @@ jobs:
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build & Deploy ${{ matrix.dockerfile[0] }}
uses: docker/build-push-action@c84f38281176d4c9cdb1626ffafcd6b3911b5d94 # @v2
uses: docker/build-push-action@c56af957549030174b10d6867f20e78cfd7debc5 # @v2
with:
context: dockerfiles/${{ matrix.dockerfile[0] }}
platforms: ${{ matrix.dockerfile[1] }}

View File

@@ -20,12 +20,6 @@ jobs:
uses: ./.github/workflows/valid-style.yml
with:
with_coverage: ${{ needs.changes.outputs.core }}
audit-ancient-python:
uses: ./.github/workflows/audit.yaml
needs: [ changes ]
with:
with_coverage: ${{ needs.changes.outputs.core }}
python_version: 2.7
all-prechecks:
needs: [ prechecks ]
runs-on: ubuntu-latest
@@ -46,7 +40,7 @@ jobs:
with:
fetch-depth: 0
# For pull requests it's not necessary to checkout the code
- uses: dorny/paths-filter@b2feaf19c27470162a626bd6fa8438ae5b263721
- uses: dorny/paths-filter@4512585405083f25c027a35db413c2b3b9006d50
id: filter
with:
# See https://github.com/dorny/paths-filter/issues/56 for the syntax used below
@@ -85,7 +79,7 @@ jobs:
needs: [ prechecks ]
uses: ./.github/workflows/windows_python.yml
all:
needs: [ windows, unit-tests, bootstrap, audit-ancient-python ]
needs: [ windows, unit-tests, bootstrap ]
runs-on: ubuntu-latest
steps:
- name: Success

View File

@@ -6,6 +6,10 @@ git config --global user.email "spack@example.com"
git config --global user.name "Test User"
git config --global core.longpaths true
# See https://github.com/git/git/security/advisories/GHSA-3wp6-j8xr-qw85 (CVE-2022-39253)
# This is needed to let some fixture in our unit-test suite run
git config --global protocol.file.allow always
if ($(git branch --show-current) -ne "develop")
{
git branch develop origin/develop

View File

@@ -2,6 +2,10 @@
git config --global user.email "spack@example.com"
git config --global user.name "Test User"
# See https://github.com/git/git/security/advisories/GHSA-3wp6-j8xr-qw85 (CVE-2022-39253)
# This is needed to let some fixture in our unit-test suite run
git config --global protocol.file.allow always
# create a local pr base branch
if [[ -n $GITHUB_BASE_REF ]]; then
git fetch origin "${GITHUB_BASE_REF}:${GITHUB_BASE_REF}"

View File

@@ -11,28 +11,38 @@ concurrency:
jobs:
# Run unit tests with different configurations on linux
ubuntu:
runs-on: ubuntu-latest
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ['2.7', '3.6', '3.7', '3.8', '3.9', '3.10']
os: [ubuntu-latest]
python-version: ['3.7', '3.8', '3.9', '3.10', '3.11']
concretizer: ['clingo']
on_develop:
- ${{ github.ref == 'refs/heads/develop' }}
include:
- python-version: 2.7
- python-version: '3.11'
os: ubuntu-latest
concretizer: original
on_develop: ${{ github.ref == 'refs/heads/develop' }}
- python-version: '3.10'
concretizer: original
- python-version: '3.6'
os: ubuntu-20.04
concretizer: clingo
on_develop: ${{ github.ref == 'refs/heads/develop' }}
exclude:
- python-version: '3.7'
os: ubuntu-latest
concretizer: 'clingo'
on_develop: false
- python-version: '3.8'
os: ubuntu-latest
concretizer: 'clingo'
on_develop: false
- python-version: '3.9'
os: ubuntu-latest
concretizer: 'clingo'
on_develop: false
- python-version: '3.10'
os: ubuntu-latest
concretizer: 'clingo'
on_develop: false
@@ -49,19 +59,11 @@ jobs:
# Needed for unit tests
sudo apt-get -y install \
coreutils cvs gfortran graphviz gnupg2 mercurial ninja-build \
patchelf cmake bison libbison-dev kcov
cmake bison libbison-dev kcov
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools pytest codecov[toml] pytest-cov pytest-xdist
# ensure style checks are not skipped in unit tests for python >= 3.6
# note that true/false (i.e., 1/0) are opposite in conditions in python and bash
if python -c 'import sys; sys.exit(not sys.version_info >= (3, 6))'; then
pip install --upgrade flake8 "isort>=4.3.5" "mypy>=0.900" "click==8.0.4" "black<=21.12b0"
fi
- name: Pin pathlib for Python 2.7
if: ${{ matrix.python-version == 2.7 }}
run: |
pip install -U pathlib2==2.3.6
pip install --upgrade pip six setuptools pytest codecov[toml] pytest-xdist pytest-cov
pip install --upgrade flake8 "isort>=4.3.5" "mypy>=0.900" "click" "black"
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
@@ -73,7 +75,8 @@ jobs:
SPACK_PYTHON: python
run: |
. share/spack/setup-env.sh
spack bootstrap untrust spack-install
spack bootstrap disable spack-install
spack bootstrap now
spack -v solve zlib
- name: Run unit tests
env:
@@ -81,11 +84,9 @@ jobs:
SPACK_TEST_SOLVER: ${{ matrix.concretizer }}
SPACK_TEST_PARALLEL: 2
COVERAGE: true
UNIT_TEST_COVERAGE: ${{ (matrix.concretizer == 'original' && matrix.python-version == '2.7') || (matrix.python-version == '3.10') }}
UNIT_TEST_COVERAGE: ${{ matrix.python-version == '3.11' }}
run: |
share/spack/qa/run-unit-tests
coverage combine -a
coverage xml
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
with:
flags: unittests,linux,${{ matrix.concretizer }}
@@ -98,7 +99,7 @@ jobs:
fetch-depth: 0
- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # @v2
with:
python-version: '3.10'
python-version: '3.11'
- name: Install System packages
run: |
sudo apt-get -y update
@@ -106,7 +107,7 @@ jobs:
sudo apt-get install -y coreutils kcov csh zsh tcsh fish dash bash
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools pytest codecov coverage[toml]==6.2 pytest-xdist
pip install --upgrade pip six setuptools pytest codecov coverage[toml] pytest-xdist
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
@@ -155,14 +156,11 @@ jobs:
fetch-depth: 0
- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # @v2
with:
python-version: '3.10'
python-version: '3.11'
- name: Install System packages
run: |
sudo apt-get -y update
# Needed for unit tests
sudo apt-get -y install \
coreutils cvs gfortran graphviz gnupg2 mercurial ninja-build \
patchelf kcov
sudo apt-get -y install coreutils cvs gfortran graphviz gnupg2 mercurial ninja-build kcov
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools pytest codecov coverage[toml] pytest-cov clingo pytest-xdist
@@ -177,8 +175,6 @@ jobs:
SPACK_TEST_SOLVER: clingo
run: |
share/spack/qa/run-unit-tests
coverage combine -a
coverage xml
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70 # @v2.1.0
with:
flags: unittests,linux,clingo
@@ -187,7 +183,7 @@ jobs:
runs-on: macos-latest
strategy:
matrix:
python-version: [3.8]
python-version: ["3.10"]
steps:
- uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # @v2
with:
@@ -210,15 +206,10 @@ jobs:
git --version
. .github/workflows/setup_git.sh
. share/spack/setup-env.sh
$(which spack) bootstrap untrust spack-install
$(which spack) bootstrap disable spack-install
$(which spack) solve zlib
common_args=(--dist loadfile --tx '4*popen//python=./bin/spack-tmpconfig python -u ./bin/spack python' -x)
$(which spack) unit-test --cov --cov-config=pyproject.toml "${common_args[@]}"
coverage combine -a
coverage xml
# Delete the symlink going from ./lib/spack/docs/_spack_root back to
# the initial directory, since it causes ELOOP errors with codecov/actions@2
rm lib/spack/docs/_spack_root
$(which spack) unit-test --cov --cov-config=pyproject.toml --cov-report=xml:coverage.xml "${common_args[@]}"
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
with:
flags: unittests,macos

View File

@@ -21,16 +21,16 @@ jobs:
- uses: actions/checkout@93ea575cb5d8a053eaa0ac8fa3b40d7e05a33cc8 # @v2
- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # @v2
with:
python-version: '3.10'
python-version: '3.11'
cache: 'pip'
- name: Install Python Packages
run: |
pip install --upgrade pip
pip install --upgrade vermin
- name: vermin (Spack's Core)
run: vermin --backport argparse --violations --backport typing -t=2.7- -t=3.6- -vvv lib/spack/spack/ lib/spack/llnl/ bin/
run: vermin --backport importlib --backport argparse --violations --backport typing -t=3.6- -vvv lib/spack/spack/ lib/spack/llnl/ bin/
- name: vermin (Repositories)
run: vermin --backport argparse --violations --backport typing -t=2.7- -t=3.6- -vvv var/spack/repos
run: vermin --backport importlib --backport argparse --violations --backport typing -t=3.6- -vvv var/spack/repos
# Run style checks on the files that have been changed
style:
runs-on: ubuntu-latest
@@ -40,11 +40,11 @@ jobs:
fetch-depth: 0
- uses: actions/setup-python@13ae5bb136fac2878aff31522b9efb785519f984 # @v2
with:
python-version: '3.10'
python-version: '3.11'
cache: 'pip'
- name: Install Python packages
run: |
python3 -m pip install --upgrade pip six setuptools types-six click==8.0.2 'black==21.12b0' mypy isort clingo flake8
python3 -m pip install --upgrade pip six setuptools types-six black mypy isort clingo flake8
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
@@ -57,4 +57,4 @@ jobs:
uses: ./.github/workflows/audit.yaml
with:
with_coverage: ${{ inputs.with_coverage }}
python_version: '3.10'
python_version: '3.11'

View File

@@ -23,7 +23,7 @@ jobs:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip six pywin32 setuptools codecov pytest-cov
python -m pip install --upgrade pip six pywin32 setuptools codecov pytest-cov clingo
- name: Create local develop
run: |
.\spack\.github\workflows\setup_git.ps1
@@ -32,8 +32,7 @@ jobs:
echo F|xcopy .\spack\share\spack\qa\configuration\windows_config.yaml $env:USERPROFILE\.spack\windows\config.yaml
cd spack
dir
(Get-Item '.\lib\spack\docs\_spack_root').Delete()
spack unit-test --verbose --cov --cov-config=pyproject.toml --ignore=lib/spack/spack/test/cmd
spack unit-test -x --verbose --cov --cov-config=pyproject.toml --ignore=lib/spack/spack/test/cmd
coverage combine -a
coverage xml
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
@@ -50,7 +49,7 @@ jobs:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip six pywin32 setuptools codecov coverage pytest-cov
python -m pip install --upgrade pip six pywin32 setuptools codecov coverage pytest-cov clingo
- name: Create local develop
run: |
.\spack\.github\workflows\setup_git.ps1
@@ -58,8 +57,7 @@ jobs:
run: |
echo F|xcopy .\spack\share\spack\qa\configuration\windows_config.yaml $env:USERPROFILE\.spack\windows\config.yaml
cd spack
(Get-Item '.\lib\spack\docs\_spack_root').Delete()
spack unit-test --verbose --cov --cov-config=pyproject.toml lib/spack/spack/test/cmd
spack unit-test -x --verbose --cov --cov-config=pyproject.toml lib/spack/spack/test/cmd
coverage combine -a
coverage xml
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
@@ -83,7 +81,7 @@ jobs:
echo F|xcopy .\spack\share\spack\qa\configuration\windows_config.yaml $env:USERPROFILE\.spack\windows\config.yaml
spack external find cmake
spack external find ninja
spack install abseil-cpp
spack -d install abseil-cpp
make-installer:
runs-on: windows-latest
steps:
@@ -111,11 +109,11 @@ jobs:
echo "installer_root=$((pwd).Path)" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf8 -Append
env:
ProgressPreference: SilentlyContinue
- uses: actions/upload-artifact@3cea5372237819ed00197afe530f5a7ea3e805c8
- uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
with:
name: Windows Spack Installer Bundle
path: ${{ env.installer_root }}\pkg\Spack.exe
- uses: actions/upload-artifact@3cea5372237819ed00197afe530f5a7ea3e805c8
- uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
with:
name: Windows Spack Installer
path: ${{ env.installer_root}}\pkg\Spack.msi

View File

@@ -1,16 +1,284 @@
# v0.19.0 (2022-11-11)
`v0.19.0` is a major feature release.
## Major features in this release
1. **Package requirements**
Spack's traditional [package preferences](
https://spack.readthedocs.io/en/latest/build_settings.html#package-preferences)
are soft, but we've added hard requirements to `packages.yaml` and `spack.yaml`
(#32528, #32369). Package requirements use the same syntax as specs:
```yaml
packages:
libfabric:
require: "@1.13.2"
mpich:
require:
- one_of: ["+cuda", "+rocm"]
```
More details in [the docs](
https://spack.readthedocs.io/en/latest/build_settings.html#package-requirements).
2. **Environment UI Improvements**
* Fewer surprising modifications to `spack.yaml` (#33711):
* `spack install` in an environment will no longer add to the `specs:` list; you'll
need to either use `spack add <spec>` or `spack install --add <spec>`.
* Similarly, `spack uninstall` will not remove from your environment's `specs:`
list; you'll need to use `spack remove` or `spack uninstall --remove`.
This will make it easier to manage an environment, as there is clear separation
between the stack to be installed (`spack.yaml`/`spack.lock`) and which parts of
it should be installed (`spack install` / `spack uninstall`).
* `concretizer:unify:true` is now the default mode for new environments (#31787)
We see more users creating `unify:true` environments now. Users who need
`unify:false` can add it to their environment to get the old behavior. This will
concretize every spec in the environment independently.
3. **Include environment configuration from URLs** (#29026, [docs](
https://spack.readthedocs.io/en/latest/environments.html#included-configurations))
You can now include configuration in your environment directly from a URL:
```yaml
spack:
include:
- https://github.com/path/to/raw/config/compilers.yaml
```
4. **Multiple Build Systems**
An increasing number of packages in the ecosystem need the ability to support
multiple build systems (#30738, [docs](
https://spack.readthedocs.io/en/latest/packaging_guide.html#multiple-build-systems)),
either across versions, across platforms, or within the same version of the software.
This has been hard to support through multiple inheritance, as methods from different
build system superclasses would conflict. `package.py` files can now define separate
builder classes with installation logic for different build systems, e.g.:
```python
class ArpackNg(CMakePackage, AutotoolsPackage):
build_system(
conditional("cmake", when="@0.64:"),
conditional("autotools", when="@:0.63"),
default="cmake",
)
class CMakeBuilder(spack.build_systems.cmake.CMakeBuilder):
def cmake_args(self):
pass
class AutotoolsBuilder(spack.build_systems.autotools.AutotoolsBuilder):
def configure_args(self):
pass
```
5. **Compiler and variant propagation**
Currently, compiler flags and variants are inconsistent: compiler flags set for a
package are inherited by its dependencies, while variants are not. We should have
these be consistent by allowing for inheritance to be enabled or disabled for both
variants and compiler flags.
Example syntax:
- `package ++variant`:
enabled variant that will be propagated to dependencies
- `package +variant`:
enabled variant that will NOT be propagated to dependencies
- `package ~~variant`:
disabled variant that will be propagated to dependencies
- `package ~variant`:
disabled variant that will NOT be propagated to dependencies
- `package cflags==-g`:
`cflags` will be propagated to dependencies
- `package cflags=-g`:
`cflags` will NOT be propagated to dependencies
Syntax for non-boolean variants is similar to compiler flags. More in the docs for
[variants](
https://spack.readthedocs.io/en/latest/basic_usage.html#variants) and [compiler flags](
https://spack.readthedocs.io/en/latest/basic_usage.html#compiler-flags).
6. **Enhancements to git version specifiers**
* `v0.18.0` added the ability to use git commits as versions. You can now use the
`git.` prefix to specify git tags or branches as versions. All of these are valid git
versions in `v0.19` (#31200):
```console
foo@abcdef1234abcdef1234abcdef1234abcdef1234 # raw commit
foo@git.abcdef1234abcdef1234abcdef1234abcdef1234 # commit with git prefix
foo@git.develop # the develop branch
foo@git.0.19 # use the 0.19 tag
```
* `v0.19` also gives you more control over how Spack interprets git versions, in case
Spack cannot detect the version from the git repository. You can suffix a git
version with `=<version>` to force Spack to concretize it as a particular version
(#30998, #31914, #32257):
```console
# use mybranch, but treat it as version 3.2 for version comparison
foo@git.mybranch=3.2
# use the given commit, but treat it as develop for version comparison
foo@git.abcdef1234abcdef1234abcdef1234abcdef1234=develop
```
More in [the docs](
https://spack.readthedocs.io/en/latest/basic_usage.html#version-specifier)
7. **Changes to Cray EX Support**
Cray machines have historically had their own "platform" within Spack, because we
needed to go through the module system to leverage compilers and MPI installations on
these machines. The Cray EX programming environment now provides standalone `craycc`
executables and proper `mpicc` wrappers, so Spack can treat EX machines like Linux
with extra packages (#29392).
We expect this to greatly reduce bugs, as external packages and compilers can now be
used by prefix instead of through modules. We will also no longer be subject to
reproducibility issues when modules change from Cray PE release to release and from
site to site. This also simplifies dealing with the underlying Linux OS on cray
systems, as Spack will properly model the machine's OS as either SuSE or RHEL.
8. **Improvements to tests and testing in CI**
* `spack ci generate --tests` will generate a `.gitlab-ci.yml` file that not only does
builds but also runs tests for built packages (#27877). Public GitHub pipelines now
also run tests in CI.
* `spack test run --explicit` will only run tests for packages that are explicitly
installed, instead of all packages.
9. **Experimental binding link model**
You can add a new option to `config.yaml` to make Spack embed absolute paths to
needed shared libraries in ELF executables and shared libraries on Linux (#31948, [docs](
https://spack.readthedocs.io/en/latest/config_yaml.html#shared-linking-bind)):
```yaml
config:
shared_linking:
type: rpath
bind: true
```
This can improve launch time at scale for parallel applications, and it can make
installations less susceptible to environment variables like `LD_LIBRARY_PATH`,
especially when dealing with external libraries that use `RUNPATH`. You can think of
this as a faster, even higher-precedence version of `RPATH`.
## Other new features of note
* `spack spec` prints dependencies more legibly. Dependencies in the output now appear
at the *earliest* level of indentation possible (#33406)
* You can override `package.py` attributes like `url`, directly in `packages.yaml`
(#33275, [docs](
https://spack.readthedocs.io/en/latest/build_settings.html#assigning-package-attributes))
* There are a number of new architecture-related format strings you can use in Spack
configuration files to specify paths (#29810, [docs](
https://spack.readthedocs.io/en/latest/configuration.html#config-file-variables))
* Spack now supports bootstrapping Clingo on Windows (#33400)
* There is now support for an `RPATH`-like library model on Windows (#31930)
## Performance Improvements
* Major performance improvements for installation from binary caches (#27610, #33628,
#33636, #33608, #33590, #33496)
* Test suite can now be parallelized using `xdist` (used in GitHub Actions) (#32361)
* Reduce lock contention for parallel builds in environments (#31643)
## New binary caches and stacks
* We now build nearly all of E4S with `oneapi` in our buildcache (#31781, #31804,
#31804, #31803, #31840, #31991, #32117, #32107, #32239)
* Added 3 new machine learning-centric stacks to binary cache: `x86_64_v3`, CUDA, ROCm
(#31592, #33463)
## Removals and Deprecations
* Support for Python 3.5 is dropped (#31908). Only Python 2.7 and 3.6+ are officially
supported.
* This is the last Spack release that will support Python 2 (#32615). Spack `v0.19`
will emit a deprecation warning if you run it with Python 2, and Python 2 support will
soon be removed from the `develop` branch.
* `LD_LIBRARY_PATH` is no longer set by default by `spack load` or module loads.
Setting `LD_LIBRARY_PATH` in Spack environments/modules can cause binaries from
outside of Spack to crash, and Spack's own builds use `RPATH` and do not need
`LD_LIBRARY_PATH` set in order to run. If you still want the old behavior, you
can run these commands to configure Spack to set `LD_LIBRARY_PATH`:
```console
spack config add modules:prefix_inspections:lib64:[LD_LIBRARY_PATH]
spack config add modules:prefix_inspections:lib:[LD_LIBRARY_PATH]
```
* The `spack:concretization:[together|separately]` option has been removed after being
deprecated in `v0.18`. Use `concretizer:unify:[true|false]`.
* `config:module_roots` is no longer supported after being deprecated in `v0.18`. Use
configuration in module sets instead (#28659, [docs](
https://spack.readthedocs.io/en/latest/module_file_support.html)).
* `spack activate` and `spack deactivate` are no longer supported, having been
deprecated in `v0.18`. Use an environment with a view instead of
activating/deactivating ([docs](
https://spack.readthedocs.io/en/latest/environments.html#configuration-in-spack-yaml)).
* The old YAML format for buildcaches is now deprecated (#33707). If you are using an
old buildcache with YAML metadata you will need to regenerate it with JSON metadata.
* `spack bootstrap trust` and `spack bootstrap untrust` are deprecated in favor of
`spack bootstrap enable` and `spack bootstrap disable` and will be removed in `v0.20`.
(#33600)
* The `graviton2` architecture has been renamed to `neoverse_n1`, and `graviton3`
is now `neoverse_v1`. Buildcaches using the old architecture names will need to be rebuilt.
* The terms `blacklist` and `whitelist` have been replaced with `include` and `exclude`
in all configuration files (#31569). You can use `spack config update` to
automatically fix your configuration files.
## Notable Bugfixes
* Permission setting on installation now handles effective uid properly (#19980)
* `buildable:true` for an MPI implementation now overrides `buildable:false` for `mpi` (#18269)
* Improved error messages when attempting to use an unconfigured compiler (#32084)
* Do not punish explicitly requested compiler mismatches in the solver (#30074)
* `spack stage`: add missing --fresh and --reuse (#31626)
* Fixes for adding build system executables like `cmake` to package scope (#31739)
* Bugfix for binary relocation with aliased strings produced by newer `binutils` (#32253)
## Spack community stats
* 6,751 total packages, 335 new since `v0.18.0`
* 141 new Python packages
* 89 new R packages
* 303 people contributed to this release
* 287 committers to packages
* 57 committers to core
# v0.18.1 (2022-07-19)
### Spack Bugfixes
* Fix several bugs related to bootstrapping (#30834,#31042,#31180)
* Fix a regression that was causing spec hashes to differ between Python 2 and Python 3 (#31092)
* Fixed compiler flags for oneAPI and DPC++ (#30856)
* Fixed several issues related to concretization (#31142,#31153,#31170,#31226)
* Improved support for Cray manifest file and `spack external find` (#31144,#31201,#31173,#31186)
* Assign a version to openSUSE Tumbleweed according to the GLIBC version in the system (#19895)
* Improved Dockerfile generation for `spack containerize` (#29741,#31321)
* Fixed a few bugs related to concurrent execution of commands (#31509,#31493,#31477)
### Package updates
* WarpX: add v22.06, fixed libs property (#30866,#31102)

View File

@@ -10,8 +10,8 @@ For more on Spack's release structure, see
| Version | Supported |
| ------- | ------------------ |
| develop | :white_check_mark: |
| 0.17.x | :white_check_mark: |
| 0.16.x | :white_check_mark: |
| 0.19.x | :white_check_mark: |
| 0.18.x | :white_check_mark: |
## Reporting a Vulnerability

View File

@@ -31,13 +31,11 @@ import os
import os.path
import sys
min_python3 = (3, 5)
min_python3 = (3, 6)
if sys.version_info[:2] < (2, 7) or (
sys.version_info[:2] >= (3, 0) and sys.version_info[:2] < min_python3
):
if sys.version_info[:2] < min_python3:
v_info = sys.version_info[:3]
msg = "Spack requires Python 2.7 or %d.%d or higher " % min_python3
msg = "Spack requires Python %d.%d or higher " % min_python3
msg += "You are running spack with Python %d.%d.%d." % v_info
sys.exit(msg)
@@ -49,52 +47,8 @@ spack_prefix = os.path.dirname(os.path.dirname(spack_file))
spack_lib_path = os.path.join(spack_prefix, "lib", "spack")
sys.path.insert(0, spack_lib_path)
# Add external libs
spack_external_libs = os.path.join(spack_lib_path, "external")
if sys.version_info[:2] <= (2, 7):
sys.path.insert(0, os.path.join(spack_external_libs, "py2"))
sys.path.insert(0, spack_external_libs)
# Here we delete ruamel.yaml in case it has been already imported from site
# (see #9206 for a broader description of the issue).
#
# Briefly: ruamel.yaml produces a .pth file when installed with pip that
# makes the site installed package the preferred one, even though sys.path
# is modified to point to another version of ruamel.yaml.
if "ruamel.yaml" in sys.modules:
del sys.modules["ruamel.yaml"]
if "ruamel" in sys.modules:
del sys.modules["ruamel"]
# The following code is here to avoid failures when updating
# the develop version, due to spurious argparse.pyc files remaining
# in the libs/spack/external directory, see:
# https://github.com/spack/spack/pull/25376
# TODO: Remove in v0.18.0 or later
try:
import argparse
except ImportError:
argparse_pyc = os.path.join(spack_external_libs, "argparse.pyc")
if not os.path.exists(argparse_pyc):
raise
try:
os.remove(argparse_pyc)
import argparse # noqa: F401
except Exception:
msg = (
"The file\n\n\t{0}\n\nis corrupted and cannot be deleted by Spack. "
"Either delete it manually or ask some administrator to "
"delete it for you."
)
print(msg.format(argparse_pyc))
sys.exit(1)
import spack.main # noqa: E402
from spack_installable.main import main # noqa: E402
# Once we've set up the system path, run the spack main method
if __name__ == "__main__":
sys.exit(spack.main.main())
sys.exit(main())

View File

@@ -9,16 +9,15 @@ bootstrap:
# may not be able to bootstrap all the software that Spack needs,
# depending on its type.
sources:
- name: 'github-actions-v0.4'
metadata: $spack/share/spack/bootstrap/github-actions-v0.4
- name: 'github-actions-v0.3'
metadata: $spack/share/spack/bootstrap/github-actions-v0.3
- name: 'github-actions-v0.2'
metadata: $spack/share/spack/bootstrap/github-actions-v0.2
- name: 'github-actions-v0.1'
metadata: $spack/share/spack/bootstrap/github-actions-v0.1
- name: 'spack-install'
metadata: $spack/share/spack/bootstrap/spack-install
trusted:
# By default we trust bootstrapping from sources and from binaries
# produced on Github via the workflow
github-actions-v0.4: true
github-actions-v0.3: true
spack-install: true

View File

@@ -33,4 +33,4 @@ concretizer:
# environments can always be activated. When "false" perform concretization separately
# on each root spec, allowing different versions and variants of the same package in
# an environment.
unify: false
unify: true

View File

@@ -19,7 +19,7 @@ config:
install_tree:
root: $spack/opt/spack
projections:
all: "${ARCHITECTURE}/${COMPILERNAME}-${COMPILERVER}/${PACKAGE}-${VERSION}-${HASH}"
all: "{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}"
# install_tree can include an optional padded length (int or boolean)
# default is False (do not pad)
# if padded_length is True, Spack will pad as close to the system max path
@@ -187,10 +187,20 @@ config:
package_lock_timeout: null
# Control whether Spack embeds RPATH or RUNPATH attributes in ELF binaries.
# Has no effect on macOS. DO NOT MIX these within the same install tree.
# See the Spack documentation for details.
shared_linking: 'rpath'
# Control how shared libraries are located at runtime on Linux. See the
# the Spack documentation for details.
shared_linking:
# Spack automatically embeds runtime search paths in ELF binaries for their
# dependencies. Their type can either be "rpath" or "runpath". For glibc, rpath is
# inherited and has precedence over LD_LIBRARY_PATH; runpath is not inherited
# and of lower precedence. DO NOT MIX these within the same install tree.
type: rpath
# (Experimental) Embed absolute paths of dependent libraries directly in ELF
# binaries to avoid runtime search. This can improve startup time of
# executables with many dependencies, in particular on slow filesystems.
bind: false
# Set to 'false' to allow installation on filesystems that doesn't allow setgid bit
@@ -201,3 +211,11 @@ config:
# building and installing packages. This gives information about Spack's
# current progress as well as the current and total number of packages.
terminal_title: false
# Number of seconds a buildcache's index.json is cached locally before probing
# for updates, within a single Spack invocation. Defaults to 10 minutes.
binary_index_ttl: 600
flags:
# Whether to keep -Werror flags active in package builds.
keep_werror: 'none'

View File

@@ -27,7 +27,8 @@ packages:
fuse: [libfuse]
gl: [glx, osmesa]
glu: [mesa-glu, openglu]
golang: [gcc]
golang: [go, gcc]
go-external-or-gccgo-bootstrap: [go-bootstrap, gcc]
iconv: [libiconv]
ipp: [intel-ipp]
java: [openjdk, jdk, ibm-java]

View File

@@ -1,5 +1,5 @@
config:
locks: false
concretizer: original
concretizer: clingo
build_stage::
- '$spack/.staging'

View File

@@ -1 +0,0 @@
../../..

View File

@@ -85,7 +85,7 @@ All packages whose names or descriptions contain documentation:
To get more information on a particular package from `spack list`, use
`spack info`. Just supply the name of a package:
.. command-output:: spack info mpich
.. command-output:: spack info --all mpich
Most of the information is self-explanatory. The *safe versions* are
versions that Spack knows the checksum for, and it will use the
@@ -998,11 +998,15 @@ More formally, a spec consists of the following pieces:
* ``%`` Optional compiler specifier, with an optional compiler version
(``gcc`` or ``gcc@4.7.3``)
* ``+`` or ``-`` or ``~`` Optional variant specifiers (``+debug``,
``-qt``, or ``~qt``) for boolean variants
``-qt``, or ``~qt``) for boolean variants. Use ``++`` or ``--`` or
``~~`` to propagate variants through the dependencies (``++debug``,
``--qt``, or ``~~qt``).
* ``name=<value>`` Optional variant specifiers that are not restricted to
boolean variants
boolean variants. Use ``name==<value>`` to propagate variant through the
dependencies.
* ``name=<value>`` Optional compiler flag specifiers. Valid flag names are
``cflags``, ``cxxflags``, ``fflags``, ``cppflags``, ``ldflags``, and ``ldlibs``.
Use ``name==<value>`` to propagate compiler flags through the dependencies.
* ``target=<value> os=<value>`` Optional architecture specifier
(``target=haswell os=CNL10``)
* ``^`` Dependency specs (``^callpath@1.1``)
@@ -1110,21 +1114,21 @@ set of arbitrary versions, such as ``@1.0,1.5,1.7`` (``1.0``, ``1.5``,
or ``1.7``). When you supply such a specifier to ``spack install``,
it constrains the set of versions that Spack will install.
For packages with a ``git`` attribute, ``git`` references
may be specified instead of a numerical version i.e. branches, tags
and commits. Spack will stage and build based off the ``git``
For packages with a ``git`` attribute, ``git`` references
may be specified instead of a numerical version i.e. branches, tags
and commits. Spack will stage and build based off the ``git``
reference provided. Acceptable syntaxes for this are:
.. code-block:: sh
# branches and tags
foo@git.develop # use the develop branch
foo@git.0.19 # use the 0.19 tag
# commit hashes
foo@abcdef1234abcdef1234abcdef1234abcdef1234 # 40 character hashes are automatically treated as git commits
foo@git.abcdef1234abcdef1234abcdef1234abcdef1234
Spack versions from git reference either have an associated version supplied by the user,
or infer a relationship to known versions from the structure of the git repository. If an
associated version is supplied by the user, Spack treats the git version as equivalent to that
@@ -1226,6 +1230,23 @@ variants using the backwards compatibility syntax and uses only ``~``
for disabled boolean variants. The ``-`` and spaces on the command
line are provided for convenience and legibility.
Spack allows variants to propagate their value to the package's
dependency by using ``++``, ``--``, and ``~~`` for boolean variants.
For example, for a ``debug`` variant:
.. code-block:: sh
mpileaks ++debug # enabled debug will be propagated to dependencies
mpileaks +debug # only mpileaks will have debug enabled
To propagate the value of non-boolean variants Spack uses ``name==value``.
For example, for the ``stackstart`` variant:
.. code-block:: sh
mpileaks stackstart==4 # variant will be propagated to dependencies
mpileaks stackstart=4 # only mpileaks will have this variant value
^^^^^^^^^^^^^^
Compiler Flags
^^^^^^^^^^^^^^
@@ -1233,10 +1254,15 @@ Compiler Flags
Compiler flags are specified using the same syntax as non-boolean variants,
but fulfill a different purpose. While the function of a variant is set by
the package, compiler flags are used by the compiler wrappers to inject
flags into the compile line of the build. Additionally, compiler flags are
inherited by dependencies. ``spack install libdwarf cppflags="-g"`` will
install both libdwarf and libelf with the ``-g`` flag injected into their
compile line.
flags into the compile line of the build. Additionally, compiler flags can
be inherited by dependencies by using ``==``.
``spack install libdwarf cppflags=="-g"`` will install both libdwarf and
libelf with the ``-g`` flag injected into their compile line.
.. note::
versions of spack prior to 0.19.0 will propagate compiler flags using
the ``=`` syntax.
Notice that the value of the compiler flags must be quoted if it
contains any spaces. Any of ``cppflags=-O3``, ``cppflags="-O3"``,
@@ -1438,7 +1464,7 @@ built.
You can see what virtual packages a particular package provides by
getting info on it:
.. command-output:: spack info mpich
.. command-output:: spack info --virtuals mpich
Spack is unique in that its virtual packages can be versioned, just
like regular packages. A particular version of a package may provide
@@ -1646,9 +1672,13 @@ own install prefix. However, certain packages are typically installed
`Python <https://www.python.org>`_ packages are typically installed in the
``$prefix/lib/python-2.7/site-packages`` directory.
Spack has support for this type of installation as well. In Spack,
a package that can live inside the prefix of another package is called
an *extension*. Suppose you have Python installed like so:
In Spack, installation prefixes are immutable, so this type of installation
is not directly supported. However, it is possible to create views that
allow you to merge install prefixes of multiple packages into a single new prefix.
Views are a convenient way to get a more traditional filesystem structure.
Using *extensions*, you can ensure that Python packages always share the
same prefix in the view as Python itself. Suppose you have
Python installed like so:
.. code-block:: console
@@ -1686,8 +1716,6 @@ You can find extensions for your Python installation like this:
py-ipython@2.3.1 py-pygments@2.0.1 py-setuptools@11.3.1
py-matplotlib@1.4.2 py-pyparsing@2.0.3 py-six@1.9.0
==> None activated.
The extensions are a subset of what's returned by ``spack list``, and
they are packages like any other. They are installed into their own
prefixes, and you can see this with ``spack find --paths``:
@@ -1715,32 +1743,72 @@ directly when you run ``python``:
ImportError: No module named numpy
>>>
^^^^^^^^^^^^^^^^
Using Extensions
^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using Extensions in Environments
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There are four ways to get ``numpy`` working in Python. The first is
to use :ref:`shell-support`. You can simply ``load`` the extension,
and it will be added to the ``PYTHONPATH`` in your current shell:
The recommended way of working with extensions such as ``py-numpy``
above is through :ref:`Environments <environments>`. For example,
the following creates an environment in the current working directory
with a filesystem view in the ``./view`` directory:
.. code-block:: console
$ spack load python
$ spack load py-numpy
$ spack env create --with-view view --dir .
$ spack -e . add py-numpy
$ spack -e . concretize
$ spack -e . install
We recommend environments for two reasons. Firstly, environments
can be activated (requires :ref:`shell-support`):
.. code-block:: console
$ spack env activate .
which sets all the right environment variables such as ``PATH`` and
``PYTHONPATH``. This ensures that
.. code-block:: console
$ python
>>> import numpy
works. Secondly, even without shell support, the view ensures
that Python can locate its extensions:
.. code-block:: console
$ ./view/bin/python
>>> import numpy
See :ref:`environments` for a more in-depth description of Spack
environments and customizations to views.
^^^^^^^^^^^^^^^^^^^^
Using ``spack load``
^^^^^^^^^^^^^^^^^^^^
A more traditional way of using Spack and extensions is ``spack load``
(requires :ref:`shell-support`). This will add the extension to ``PYTHONPATH``
in your current shell, and Python itself will be available in the ``PATH``:
.. code-block:: console
$ spack load py-numpy
$ python
>>> import numpy
Now ``import numpy`` will succeed for as long as you keep your current
session open.
The loaded packages can be checked using ``spack find --loaded``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Loading Extensions via Modules
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Instead of using Spack's environment modification capabilities through
the ``spack load`` command, you can load numpy through your
environment modules (using ``environment-modules`` or ``lmod``). This
will also add the extension to the ``PYTHONPATH`` in your current
shell.
Apart from ``spack env activate`` and ``spack load``, you can load numpy
through your environment modules (using ``environment-modules`` or
``lmod``). This will also add the extension to the ``PYTHONPATH`` in
your current shell.
.. code-block:: console
@@ -1750,130 +1818,6 @@ If you do not know the name of the specific numpy module you wish to
load, you can use the ``spack module tcl|lmod loads`` command to get
the name of the module from the Spack spec.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Activating Extensions in a View
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Another way to use extensions is to create a view, which merges the
python installation along with the extensions into a single prefix.
See :ref:`configuring_environment_views` for a more in-depth description
of views.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Activating Extensions Globally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
As an alternative to creating a merged prefix with Python and its extensions,
and prior to support for views, Spack has provided a means to install the
extension into the Spack installation prefix for the extendee. This has
typically been useful since extendable packages typically search their own
installation path for addons by default.
Global activations are performed with the ``spack activate`` command:
.. _cmd-spack-activate:
^^^^^^^^^^^^^^^^^^
``spack activate``
^^^^^^^^^^^^^^^^^^
.. code-block:: console
$ spack activate py-numpy
==> Activated extension py-setuptools@11.3.1%gcc@4.4.7 arch=linux-debian7-x86_64-3c74eb69 for python@2.7.8%gcc@4.4.7.
==> Activated extension py-nose@1.3.4%gcc@4.4.7 arch=linux-debian7-x86_64-5f70f816 for python@2.7.8%gcc@4.4.7.
==> Activated extension py-numpy@1.9.1%gcc@4.4.7 arch=linux-debian7-x86_64-66733244 for python@2.7.8%gcc@4.4.7.
Several things have happened here. The user requested that
``py-numpy`` be activated in the ``python`` installation it was built
with. Spack knows that ``py-numpy`` depends on ``py-nose`` and
``py-setuptools``, so it activated those packages first. Finally,
once all dependencies were activated in the ``python`` installation,
``py-numpy`` was activated as well.
If we run ``spack extensions`` again, we now see the three new
packages listed as activated:
.. code-block:: console
$ spack extensions python
==> python@2.7.8%gcc@4.4.7 arch=linux-debian7-x86_64-703c7a96
==> 36 extensions:
geos py-ipython py-pexpect py-pyside py-sip
py-basemap py-libxml2 py-pil py-pytz py-six
py-biopython py-mako py-pmw py-rpy2 py-sympy
py-cython py-matplotlib py-pychecker py-scientificpython py-virtualenv
py-dateutil py-mpi4py py-pygments py-scikit-learn
py-epydoc py-mx py-pylint py-scipy
py-gnuplot py-nose py-pyparsing py-setuptools
py-h5py py-numpy py-pyqt py-shiboken
==> 12 installed:
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
py-dateutil@2.4.0 py-nose@1.3.4 py-pyside@1.2.2
py-dateutil@2.4.0 py-numpy@1.9.1 py-pytz@2014.10
py-ipython@2.3.1 py-pygments@2.0.1 py-setuptools@11.3.1
py-matplotlib@1.4.2 py-pyparsing@2.0.3 py-six@1.9.0
==> 3 currently activated:
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
py-nose@1.3.4 py-numpy@1.9.1 py-setuptools@11.3.1
Now, when a user runs python, ``numpy`` will be available for import
*without* the user having to explicitly load it. ``python@2.7.8`` now
acts like a system Python installation with ``numpy`` installed inside
of it.
Spack accomplishes this by symbolically linking the *entire* prefix of
the ``py-numpy`` package into the prefix of the ``python`` package. To the
python interpreter, it looks like ``numpy`` is installed in the
``site-packages`` directory.
The only limitation of global activation is that you can only have a *single*
version of an extension activated at a time. This is because multiple
versions of the same extension would conflict if symbolically linked
into the same prefix. Users who want a different version of a package
can still get it by using environment modules or views, but they will have to
explicitly load their preferred version.
^^^^^^^^^^^^^^^^^^^^^^^^^^
``spack activate --force``
^^^^^^^^^^^^^^^^^^^^^^^^^^
If, for some reason, you want to activate a package *without* its
dependencies, you can use ``spack activate --force``:
.. code-block:: console
$ spack activate --force py-numpy
==> Activated extension py-numpy@1.9.1%gcc@4.4.7 arch=linux-debian7-x86_64-66733244 for python@2.7.8%gcc@4.4.7.
.. _cmd-spack-deactivate:
^^^^^^^^^^^^^^^^^^^^
``spack deactivate``
^^^^^^^^^^^^^^^^^^^^
We've seen how activating an extension can be used to set up a default
version of a Python module. Obviously, you may want to change that at
some point. ``spack deactivate`` is the command for this. There are
several variants:
* ``spack deactivate <extension>`` will deactivate a single
extension. If another activated extension depends on this one,
Spack will warn you and exit with an error.
* ``spack deactivate --force <extension>`` deactivates an extension
regardless of packages that depend on it.
* ``spack deactivate --all <extension>`` deactivates an extension and
all of its dependencies. Use ``--force`` to disregard dependents.
* ``spack deactivate --all <extendee>`` deactivates *all* activated
extensions of a package. For example, to deactivate *all* python
extensions, use:
.. code-block:: console
$ spack deactivate --all python
-----------------------
Filesystem requirements
-----------------------

View File

@@ -15,15 +15,13 @@ is an entire command dedicated to the management of every aspect of bootstrappin
.. command-output:: spack bootstrap --help
The first thing to know to understand bootstrapping in Spack is that each of
Spack's dependencies is bootstrapped lazily; i.e. the first time it is needed and
can't be found. You can readily check if any prerequisite for using Spack
is missing by running:
Spack is configured to bootstrap its dependencies lazily by default; i.e. the first time they are needed and
can't be found. You can readily check if any prerequisite for using Spack is missing by running:
.. code-block:: console
% spack bootstrap status
Spack v0.17.1 - python@3.8
Spack v0.19.0 - python@3.8
[FAIL] Core Functionalities
[B] MISSING "clingo": required to concretize specs
@@ -48,6 +46,21 @@ they can be bootstrapped. Running a command that concretize a spec, like:
triggers the bootstrapping of clingo from pre-built binaries as expected.
Users can also bootstrap all the dependencies needed by Spack in a single command, which
might be useful to set up containers or other similar environments:
.. code-block:: console
$ spack bootstrap now
==> Bootstrapping clingo from pre-built binaries
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.3/build_cache/linux-centos7-x86_64-gcc-10.2.1-clingo-bootstrap-spack-shqedxgvjnhiwdcdrvjhbd73jaevv7wt.spec.json
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.3/build_cache/linux-centos7-x86_64/gcc-10.2.1/clingo-bootstrap-spack/linux-centos7-x86_64-gcc-10.2.1-clingo-bootstrap-spack-shqedxgvjnhiwdcdrvjhbd73jaevv7wt.spack
==> Installing "clingo-bootstrap@spack%gcc@10.2.1~docs~ipo+python+static_libstdcpp build_type=Release arch=linux-centos7-x86_64" from a buildcache
==> Bootstrapping patchelf from pre-built binaries
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.3/build_cache/linux-centos7-x86_64-gcc-10.2.1-patchelf-0.15.0-htk62k7efo2z22kh6kmhaselru7bfkuc.spec.json
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.3/build_cache/linux-centos7-x86_64/gcc-10.2.1/patchelf-0.15.0/linux-centos7-x86_64-gcc-10.2.1-patchelf-0.15.0-htk62k7efo2z22kh6kmhaselru7bfkuc.spack
==> Installing "patchelf@0.15.0%gcc@10.2.1 ldflags="-static-libstdc++ -static-libgcc" arch=linux-centos7-x86_64" from a buildcache
-----------------------
The Bootstrapping store
-----------------------
@@ -107,19 +120,19 @@ If need be, you can disable bootstrapping altogether by running:
in which case it's your responsibility to ensure Spack runs in an
environment where all its prerequisites are installed. You can
also configure Spack to skip certain bootstrapping methods by *untrusting*
them. For instance:
also configure Spack to skip certain bootstrapping methods by disabling
them specifically:
.. code-block:: console
% spack bootstrap untrust github-actions
==> "github-actions" is now untrusted and will not be used for bootstrapping
% spack bootstrap disable github-actions
==> "github-actions" is now disabled and will not be used for bootstrapping
tells Spack to skip trying to bootstrap from binaries. To add the "github-actions" method back you can:
.. code-block:: console
% spack bootstrap trust github-actions
% spack bootstrap enable github-actions
There is also an option to reset the bootstrapping configuration to Spack's defaults:

View File

@@ -302,88 +302,31 @@ microarchitectures considered during the solve are constrained to be compatible
host Spack is currently running on. For instance, if this option is set to ``true``, a
user cannot concretize for ``target=icelake`` while running on a Haswell node.
.. _package-preferences:
-------------------
Package Preferences
-------------------
Spack can be configured to prefer certain compilers, package
versions, dependencies, and variants during concretization.
The preferred configuration can be controlled via the
``~/.spack/packages.yaml`` file for user configurations, or the
``etc/spack/packages.yaml`` site configuration.
Here's an example ``packages.yaml`` file that sets preferred packages:
.. code-block:: yaml
packages:
opencv:
compiler: [gcc@4.9]
variants: +debug
gperftools:
version: [2.2, 2.4, 2.3]
all:
compiler: [gcc@4.4.7, 'gcc@4.6:', intel, clang, pgi]
target: [sandybridge]
providers:
mpi: [mvapich2, mpich, openmpi]
At a high level, this example is specifying how packages should be
concretized. The opencv package should prefer using GCC 4.9 and
be built with debug options. The gperftools package should prefer version
2.2 over 2.4. Every package on the system should prefer mvapich2 for
its MPI and GCC 4.4.7 (except for opencv, which overrides this by preferring GCC 4.9).
These options are used to fill in implicit defaults. Any of them can be overwritten
on the command line if explicitly requested.
Each ``packages.yaml`` file begins with the string ``packages:`` and
package names are specified on the next level. The special string ``all``
applies settings to *all* packages. Underneath each package name is one
or more components: ``compiler``, ``variants``, ``version``,
``providers``, and ``target``. Each component has an ordered list of
spec ``constraints``, with earlier entries in the list being preferred
over later entries.
Sometimes a package installation may have constraints that forbid
the first concretization rule, in which case Spack will use the first
legal concretization rule. Going back to the example, if a user
requests gperftools 2.3 or later, then Spack will install version 2.4
as the 2.4 version of gperftools is preferred over 2.3.
An explicit concretization rule in the preferred section will always
take preference over unlisted concretizations. In the above example,
xlc isn't listed in the compiler list. Every listed compiler from
gcc to pgi will thus be preferred over the xlc compiler.
The syntax for the ``provider`` section differs slightly from other
concretization rules. A provider lists a value that packages may
``depends_on`` (e.g., MPI) and a list of rules for fulfilling that
dependency.
.. _package-requirements:
--------------------
Package Requirements
--------------------
You can use the configuration to force the concretizer to choose
specific properties for packages when building them. Like preferences,
these are only applied when the package is required by some other
request (e.g. if the package is needed as a dependency of a
request to ``spack install``).
Spack can be configured to always use certain compilers, package
versions, and variants during concretization through package
requirements.
An example of where this is useful is if you have a package that
is normally built as a dependency but only under certain circumstances
(e.g. only when a variant on a dependent is active): you can make
sure that it always builds the way you want it to; this distinguishes
package configuration requirements from constraints that you add to
``spack install`` or to environments (in those cases, the associated
packages are always built).
Package requirements are useful when you find yourself repeatedly
specifying the same constraints on the command line, and wish that
Spack respects these constraints whether you mention them explicitly
or not. Another use case is specifying constraints that should apply
to all root specs in an environment, without having to repeat the
constraint everywhere.
The following is an example of how to enforce package properties in
``packages.yaml``:
Apart from that, requirements config is more flexible than constraints
on the command line, because it can specify constraints on packages
*when they occur* as a dependency. In contrast, on the command line it
is not possible to specify constraints on dependencies while also keeping
those dependencies optional.
The package requirements configuration is specified in ``packages.yaml``
keyed by package name:
.. code-block:: yaml
@@ -452,15 +395,15 @@ under ``all`` are disregarded. For example, with a configuration like this:
cmake:
require: '%gcc'
Spack requires ``cmake`` to use ``gcc`` and all other nodes (including cmake dependencies)
to use ``clang``.
Spack requires ``cmake`` to use ``gcc`` and all other nodes (including ``cmake``
dependencies) to use ``clang``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting requirements on virtual specs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A requirement on a virtual spec applies whenever that virtual is present in the DAG. This
can be useful for fixing which virtual provider you want to use:
A requirement on a virtual spec applies whenever that virtual is present in the DAG.
This can be useful for fixing which virtual provider you want to use:
.. code-block:: yaml
@@ -470,8 +413,8 @@ can be useful for fixing which virtual provider you want to use:
With the configuration above the only allowed ``mpi`` provider is ``mvapich2 %gcc``.
Requirements on the virtual spec and on the specific provider are both applied, if present. For
instance with a configuration like:
Requirements on the virtual spec and on the specific provider are both applied, if
present. For instance with a configuration like:
.. code-block:: yaml
@@ -483,6 +426,66 @@ instance with a configuration like:
you will use ``mvapich2~cuda %gcc`` as an ``mpi`` provider.
.. _package-preferences:
-------------------
Package Preferences
-------------------
In some cases package requirements can be too strong, and package
preferences are the better option. Package preferences do not impose
constraints on packages for particular versions or variants values,
they rather only set defaults -- the concretizer is free to change
them if it must due to other constraints. Also note that package
preferences are of lower priority than reuse of already installed
packages.
Here's an example ``packages.yaml`` file that sets preferred packages:
.. code-block:: yaml
packages:
opencv:
compiler: [gcc@4.9]
variants: +debug
gperftools:
version: [2.2, 2.4, 2.3]
all:
compiler: [gcc@4.4.7, 'gcc@4.6:', intel, clang, pgi]
target: [sandybridge]
providers:
mpi: [mvapich2, mpich, openmpi]
At a high level, this example specifies how packages should preferably be
concretized. The opencv package should prefer using GCC 4.9 and
be built with debug options. The gperftools package should prefer version
2.2 over 2.4. Every package on the system should prefer mvapich2 for
its MPI and GCC 4.4.7 (except for opencv, which overrides this by preferring GCC 4.9).
These options are used to fill in implicit defaults. Any of them can be overwritten
on the command line if explicitly requested.
Package preferences accept the following keys or components under
the specific package (or ``all``) section: ``compiler``, ``variants``,
``version``, ``providers``, and ``target``. Each component has an
ordered list of spec ``constraints``, with earlier entries in the
list being preferred over later entries.
Sometimes a package installation may have constraints that forbid
the first concretization rule, in which case Spack will use the first
legal concretization rule. Going back to the example, if a user
requests gperftools 2.3 or later, then Spack will install version 2.4
as the 2.4 version of gperftools is preferred over 2.3.
An explicit concretization rule in the preferred section will always
take preference over unlisted concretizations. In the above example,
xlc isn't listed in the compiler list. Every listed compiler from
gcc to pgi will thus be preferred over the xlc compiler.
The syntax for the ``provider`` section differs slightly from other
concretization rules. A provider lists a value that packages may
``depends_on`` (e.g., MPI) and a list of rules for fulfilling that
dependency.
.. _package_permissions:
-------------------
@@ -531,3 +534,25 @@ directories inside the install prefix. This will ensure that even
manually placed files within the install prefix are owned by the
assigned group. If no group is assigned, Spack will allow the OS
default behavior to go as expected.
----------------------------
Assigning Package Attributes
----------------------------
You can assign class-level attributes in the configuration:
.. code-block:: yaml
packages:
mpileaks:
# Override existing attributes
url: http://www.somewhereelse.com/mpileaks-1.0.tar.gz
# ... or add new ones
x: 1
Attributes set this way will be accessible to any method executed
in the package.py file (e.g. the ``install()`` method). Values for these
attributes may be any value that YAML can parse.
These can only be applied to specific packages, not "all" or
virtual packages.
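As a minimal sketch of how this plays out (the ``Mpileaks`` class body and the
use made of ``x`` below are illustrative assumptions, not code from the real
package), any method in ``package.py`` can read the assigned attributes through
``self``:

.. code-block:: python

   class Mpileaks(Package):
       def install(self, spec, prefix):
           # "url" was overridden in packages.yaml, so the tarball is
           # fetched from somewhereelse.com rather than the default URL
           print(self.url)

           # "x" is the new attribute from packages.yaml; it behaves like
           # any other class-level attribute (illustrative use only)
           if self.x == 1:
               make()
               make("install")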

View File

@@ -65,7 +65,6 @@ on these ideas for each distinct build system that Spack supports:
build_systems/custompackage
build_systems/inteloneapipackage
build_systems/intelpackage
build_systems/multiplepackage
build_systems/rocmpackage
build_systems/sourceforgepackage

View File

@@ -5,9 +5,9 @@
.. _autotoolspackage:
----------------
AutotoolsPackage
----------------
---------
Autotools
---------
Autotools is a GNU build system that provides a build-script generator.
By running the platform-independent ``./configure`` script that comes
@@ -17,7 +17,7 @@ with the package, you can generate a platform-dependent Makefile.
Phases
^^^^^^
The ``AutotoolsPackage`` base class comes with the following phases:
The ``AutotoolsBuilder`` and ``AutotoolsPackage`` base classes come with the following phases:
#. ``autoreconf`` - generate the configure script
#. ``configure`` - generate the Makefiles

View File

@@ -5,9 +5,9 @@
.. _bundlepackage:
-------------
BundlePackage
-------------
------
Bundle
------
``BundlePackage`` represents a set of packages that are expected to work well
together, such as a collection of commonly used software libraries. The

View File

@@ -5,9 +5,9 @@
.. _cachedcmakepackage:
------------------
CachedCMakePackage
------------------
-----------
CachedCMake
-----------
The CachedCMakePackage base class is used for CMake-based workflows
that create a CMake cache file prior to running ``cmake``. This is

View File

@@ -5,9 +5,9 @@
.. _cmakepackage:
------------
CMakePackage
------------
-----
CMake
-----
Like Autotools, CMake is a widely-used build-script generator. Designed
by Kitware, CMake is the most popular build system for new C, C++, and
@@ -21,7 +21,7 @@ whereas Autotools is Unix-only.
Phases
^^^^^^
The ``CMakePackage`` base class comes with the following phases:
The ``CMakeBuilder`` and ``CMakePackage`` base classes come with the following phases:
#. ``cmake`` - generate the Makefile
#. ``build`` - build the package
@@ -130,8 +130,8 @@ Adding flags to cmake
To add additional flags to the ``cmake`` call, simply override the
``cmake_args`` function. The following example defines values for the flags
``WHATEVER``, ``ENABLE_BROKEN_FEATURE``, ``DETECT_HDF5``, and ``THREADS`` with
and without the :meth:`~spack.build_systems.cmake.CMakePackage.define` and
:meth:`~spack.build_systems.cmake.CMakePackage.define_from_variant` helper functions:
and without the :meth:`~spack.build_systems.cmake.CMakeBuilder.define` and
:meth:`~spack.build_systems.cmake.CMakeBuilder.define_from_variant` helper functions:
.. code-block:: python
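
   def cmake_args(self):
       # a sketch of the example described above; the concrete values are
       # illustrative assumptions, not taken from a real package
       return [
           self.define("WHATEVER", "something"),             # -DWHATEVER:STRING=something
           self.define("ENABLE_BROKEN_FEATURE", False),      # -DENABLE_BROKEN_FEATURE:BOOL=OFF
           self.define_from_variant("DETECT_HDF5", "hdf5"),  # ON/OFF follows the "hdf5" variant
           self.define_from_variant("THREADS"),              # variant name inferred as "threads"
       ]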

View File

@@ -5,9 +5,9 @@
.. _cudapackage:
-----------
CudaPackage
-----------
----
Cuda
----
Different from other packages, ``CudaPackage`` does not represent a build system.
Instead its goal is to simplify and unify usage of ``CUDA`` in other packages by providing a `mixin-class <https://en.wikipedia.org/wiki/Mixin>`_.
@@ -80,7 +80,7 @@ standard CUDA compiler flags.
**cuda_flags**
This built-in static method returns a list of command line flags
for the chosen ``cuda_arch`` value(s). The flags are intended to
be passed to the CUDA compiler driver (i.e., ``nvcc``).
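For instance, a package using the mixin might forward these flags to CMake
roughly as follows (a sketch under assumptions: the ``MyGpuCode`` class and the
choice of ``CMAKE_CUDA_FLAGS`` are made up for illustration):

.. code-block:: python

   class MyGpuCode(CMakePackage, CudaPackage):
       """Hypothetical package using the CudaPackage mixin."""

       def cmake_args(self):
           args = []
           if self.spec.satisfies("+cuda"):
               # cuda_arch is a multi-valued variant provided by CudaPackage
               arch_list = self.spec.variants["cuda_arch"].value
               # cuda_flags() maps e.g. ("70",) to flags such as
               # "--generate-code arch=compute_70,code=sm_70"
               args.append(self.define("CMAKE_CUDA_FLAGS",
                                       " ".join(self.cuda_flags(arch_list))))
           return args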

View File

@@ -6,9 +6,9 @@
.. _inteloneapipackage:
====================
IntelOneapiPackage
====================
===========
IntelOneapi
===========
.. contents::
@@ -32,11 +32,11 @@ oneAPI packages or use::
For more information on a specific package, do::
spack info <package-name>
spack info --all <package-name>
Intel no longer releases new versions of Parallel Studio, which can be
used in Spack via the :ref:`intelpackage`. All of its components can
now be found in oneAPI.
Examples
========

View File

@@ -5,9 +5,9 @@
.. _intelpackage:
------------
IntelPackage
------------
-----
Intel
-----
.. contents::

View File

@@ -5,11 +5,11 @@
.. _luapackage:
------------
LuaPackage
------------
---
Lua
---
LuaPackage is a helper for the common case of Lua packages that provide
The ``Lua`` build-system is a helper for the common case of Lua packages that provide
a rockspec file. This is not meant to take a rock archive, but to build
a source archive or repository that provides a rockspec, which should cover
most Lua packages. In the case that a Lua package builds with Make rather than
@@ -19,7 +19,7 @@ luarocks, prefer MakefilePackage.
Phases
^^^^^^
The ``LuaPackage`` base class comes with the following phases:
The ``LuaBuilder`` and ``LuaPackage`` base classes come with the following phases:
#. ``unpack`` - if using a rock, unpacks the rock and moves into the source directory
#. ``preprocess`` - adjust sources or rockspec to fix build

View File

@@ -5,9 +5,9 @@
.. _makefilepackage:
---------------
MakefilePackage
---------------
--------
Makefile
--------
The most primitive build system a package can use is a plain Makefile.
Makefiles are simple to write for small projects, but they usually
@@ -18,7 +18,7 @@ variables.
Phases
^^^^^^
The ``MakefilePackage`` base class comes with 3 phases:
The ``MakefileBuilder`` and ``MakefilePackage`` base classes come with 3 phases:
#. ``edit`` - edit the Makefile
#. ``build`` - build the project

View File

@@ -5,9 +5,9 @@
.. _mavenpackage:
------------
MavenPackage
------------
-----
Maven
-----
Apache Maven is a general-purpose build system that does not rely
on Makefiles to build software. It is designed for building and
managing any Java-based project.
Phases
^^^^^^
The ``MavenPackage`` base class comes with the following phases:
The ``MavenBuilder`` and ``MavenPackage`` base classes come with the following phases:
#. ``build`` - compile code and package into a JAR file
#. ``install`` - copy to installation prefix

View File

@@ -5,9 +5,9 @@
.. _mesonpackage:
------------
MesonPackage
------------
-----
Meson
-----
Much like Autotools and CMake, Meson is a build system. But it is
meant to be both fast and as user friendly as possible. GNOME's goal
@@ -17,7 +17,7 @@ is to port modules to use the Meson build system.
Phases
^^^^^^
The ``MesonPackage`` base class comes with the following phases:
The ``MesonBuilder`` and ``MesonPackage`` base classes come with the following phases:
#. ``meson`` - generate ninja files
#. ``build`` - build the project

View File

@@ -1,350 +0,0 @@
.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _multiplepackage:
----------------------
Multiple Build Systems
----------------------
Quite frequently, a package will change build systems from one version to the
next. For example, a small project that once used a single Makefile to build
may now require Autotools to handle the increased number of files that need to
be compiled. Or, a package that once used Autotools may switch to CMake for
Windows support. In this case, it becomes a bit more challenging to write a
single build recipe for this package in Spack.
There are several ways that this can be handled in Spack:
#. Subclass the new build system, and override phases as needed (preferred)
#. Subclass ``Package`` and implement ``install`` as needed
#. Create separate ``*-cmake``, ``*-autotools``, etc. packages for each build system
#. Rename the old package to ``*-legacy`` and create a new package
#. Move the old package to a ``legacy`` repository and create a new package
#. Drop older versions that only support the older build system
Of these options, 1 is preferred, and will be demonstrated in this
documentation. Options 3-5 have issues with concretization, so shouldn't be
used. Options 4-5 also don't support more than two build systems. Option 6 only
works if the old versions are no longer needed. Option 1 is preferred over 2
because it makes it easier to drop the old build system entirely.
The exact syntax of the package depends on which build systems you need to
support. Below are a couple of common examples.
^^^^^^^^^^^^^^^^^^^^^
Makefile -> Autotools
^^^^^^^^^^^^^^^^^^^^^
Let's say we have the following package:
.. code-block:: python
class Foo(MakefilePackage):
version("1.2.0", sha256="...")
def edit(self, spec, prefix):
filter_file("CC=", "CC=" + spack_cc, "Makefile")
def install(self, spec, prefix):
install_tree(".", prefix)
The package subclasses from :ref:`makefilepackage`, which has three phases:
#. ``edit`` (does nothing by default)
#. ``build`` (runs ``make`` by default)
#. ``install`` (runs ``make install`` by default)
In this case, the ``install`` phase needed to be overridden because the
Makefile did not have an install target. We also modify the Makefile to use
Spack's compiler wrappers. The default ``build`` phase is not changed.
Starting with version 1.3.0, we want to use Autotools to build instead.
:ref:`autotoolspackage` has four phases:
#. ``autoreconf`` (does nothing if a configure script already exists)
#. ``configure`` (runs ``./configure --prefix=...`` by default)
#. ``build`` (runs ``make`` by default)
#. ``install`` (runs ``make install`` by default)
If the only version we need to support is 1.3.0, the package would look as
simple as:
.. code-block:: python
class Foo(AutotoolsPackage):
version("1.3.0", sha256="...")
def configure_args(self):
return ["--enable-shared"]
In this case, we use the default methods for each phase and only override
``configure_args`` to specify additional flags to pass to ``./configure``.
If we wanted to write a single package that supports both versions 1.2.0 and
1.3.0, it would look something like:
.. code-block:: python
class Foo(AutotoolsPackage):
version("1.3.0", sha256="...")
version("1.2.0", sha256="...", deprecated=True)
def configure_args(self):
return ["--enable-shared"]
# Remove the following once version 1.2.0 is dropped
@when("@:1.2")
def patch(self):
filter_file("CC=", "CC=" + spack_cc, "Makefile")
@when("@:1.2")
def autoreconf(self, spec, prefix):
pass
@when("@:1.2")
def configure(self, spec, prefix):
pass
@when("@:1.2")
def install(self, spec, prefix):
install_tree(".", prefix)
There are a few interesting things to note here:
* We added ``deprecated=True`` to version 1.2.0. This signifies that version
1.2.0 is deprecated and shouldn't be used. However, if a user still relies
on version 1.2.0, it's still there and builds just fine.
* We moved the contents of the ``edit`` phase to the ``patch`` function. Since
``AutotoolsPackage`` doesn't have an ``edit`` phase, the only way for this
step to be executed is to move it to the ``patch`` function, which always
gets run.
* The ``autoreconf`` and ``configure`` phases become no-ops. Since the old
Makefile-based build system doesn't use these, we ignore these phases when
building ``foo@1.2.0``.
* The ``@when`` decorator is used to override these phases only for older
versions. The default methods are used for ``foo@1.3:``.
Once a new Spack release comes out, version 1.2.0 and everything below the
comment can be safely deleted. The result is the same as if we had written a
package for version 1.3.0 from scratch.
^^^^^^^^^^^^^^^^^^
Autotools -> CMake
^^^^^^^^^^^^^^^^^^
Let's say we have the following package:
.. code-block:: python
class Bar(AutotoolsPackage):
version("1.2.0", sha256="...")
def configure_args(self):
return ["--enable-shared"]
The package subclasses from :ref:`autotoolspackage`, which has four phases:
#. ``autoreconf`` (does nothing if a configure script already exists)
#. ``configure`` (runs ``./configure --prefix=...`` by default)
#. ``build`` (runs ``make`` by default)
#. ``install`` (runs ``make install`` by default)
In this case, we use the default methods for each phase and only override
``configure_args`` to specify additional flags to pass to ``./configure``.
Starting with version 1.3.0, we want to use CMake to build instead.
:ref:`cmakepackage` has three phases:
#. ``cmake`` (runs ``cmake ...`` by default)
#. ``build`` (runs ``make`` by default)
#. ``install`` (runs ``make install`` by default)
If the only version we need to support is 1.3.0, the package would look as
simple as:
.. code-block:: python
class Bar(CMakePackage):
version("1.3.0", sha256="...")
def cmake_args(self):
return [self.define("BUILD_SHARED_LIBS", True)]
In this case, we use the default methods for each phase and only override
``cmake_args`` to specify additional flags to pass to ``cmake``.
If we wanted to write a single package that supports both versions 1.2.0 and
1.3.0, it would look something like:
.. code-block:: python
class Bar(CMakePackage):
version("1.3.0", sha256="...")
version("1.2.0", sha256="...", deprecated=True)
def cmake_args(self):
return [self.define("BUILD_SHARED_LIBS", True)]
# Remove the following once version 1.2.0 is dropped
def configure_args(self):
return ["--enable-shared"]
@when("@:1.2")
def cmake(self, spec, prefix):
configure("--prefix=" + prefix, *self.configure_args())
There are a few interesting things to note here:
* We added ``deprecated=True`` to version 1.2.0. This signifies that version
1.2.0 is deprecated and shouldn't be used. However, if a user still relies
on version 1.2.0, it's still there and builds just fine.
* Since CMake and Autotools are so similar, we only need to override the
``cmake`` phase; we can use the default ``build`` and ``install`` phases.
* We override ``cmake`` to run ``./configure`` for older versions.
``configure_args`` remains the same.
* The ``@when`` decorator is used to override these phases only for older
versions. The default methods are used for ``bar@1.3:``.
Once a new Spack release comes out, version 1.2.0 and everything below the
comment can be safely deleted. The result is the same as if we had written a
package for version 1.3.0 from scratch.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Multiple build systems for the same version
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
During the transition from one build system to another, developers often
support multiple build systems at the same time. Spack can only use a single
build system for a single version. To decide which build system to use for a
particular version, take the following things into account:
1. If the developers explicitly state that one build system is preferred over
another, use that one.
2. If one build system is considered "experimental" while another is considered
"stable", use the stable build system.
3. Otherwise, use the newer build system.
The developer preference for which build system to use can change over time as
a newer build system becomes stable/recommended.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Dropping support for old build systems
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When older versions of a package don't support a newer build system, it can be
tempting to simply delete them from a package. This significantly reduces
package complexity and makes the build recipe much easier to maintain. However,
other packages or Spack users may rely on these older versions. The recommended
approach is to first support both build systems (as demonstrated above),
:ref:`deprecate <deprecate>` versions that rely on the old build system, and
remove those versions and any phases that needed to be overridden in the next
Spack release.
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Three or more build systems
^^^^^^^^^^^^^^^^^^^^^^^^^^^
In rare cases, a package may change build systems multiple times. For example,
a package may start with Makefiles, then switch to Autotools, then switch to
CMake. The same logic used above can be extended to any number of build systems.
For example:
.. code-block:: python
class Baz(CMakePackage):
version("1.4.0", sha256="...") # CMake
version("1.3.0", sha256="...") # Autotools
version("1.2.0", sha256="...") # Makefile
def cmake_args(self):
return [self.define("BUILD_SHARED_LIBS", True)]
# Remove the following once version 1.3.0 is dropped
def configure_args(self):
return ["--enable-shared"]
@when("@1.3")
def cmake(self, spec, prefix):
configure("--prefix=" + prefix, *self.configure_args())
# Remove the following once version 1.2.0 is dropped
@when("@:1.2")
def patch(self):
filter_file("CC=", "CC=" + spack_cc, "Makefile")
@when("@:1.2")
def cmake(self, spec, prefix):
pass
@when("@:1.2")
def install(self, spec, prefix):
install_tree(".", prefix)
^^^^^^^^^^^^^^^^^^^
Additional examples
^^^^^^^^^^^^^^^^^^^
When writing new packages, it often helps to see examples of existing packages.
Here is an incomplete list of existing Spack packages that have changed build
systems before:
================ ===================== ================
Package Previous Build System New Build System
================ ===================== ================
amber custom CMake
arpack-ng Autotools CMake
atk Autotools Meson
blast None Autotools
dyninst Autotools CMake
evtgen Autotools CMake
fish Autotools CMake
gdk-pixbuf Autotools Meson
glib Autotools Meson
glog Autotools CMake
gmt Autotools CMake
gtkplus Autotools Meson
hpl Makefile Autotools
interproscan Perl Maven
jasper Autotools CMake
kahip SCons CMake
kokkos Makefile CMake
kokkos-kernels Makefile CMake
leveldb Makefile CMake
libdrm Autotools Meson
libjpeg-turbo Autotools CMake
mesa Autotools Meson
metis None CMake
mpifileutils Autotools CMake
muparser Autotools CMake
mxnet Makefile CMake
nest Autotools CMake
neuron Autotools CMake
nsimd CMake nsconfig
opennurbs Makefile CMake
optional-lite None CMake
plasma Makefile CMake
preseq Makefile Autotools
protobuf Autotools CMake
py-pygobject Autotools Python
singularity Autotools Makefile
span-lite None CMake
ssht Makefile CMake
string-view-lite None CMake
superlu Makefile CMake
superlu-dist Makefile CMake
uncrustify Autotools CMake
================ ===================== ================
Packages that support multiple build systems can be a bit confusing to write.
Don't hesitate to open an issue or draft pull request and ask for advice from
other Spack developers!

View File

@@ -5,9 +5,9 @@
.. _octavepackage:
-------------
OctavePackage
-------------
------
Octave
------
Octave has its own build system for installing packages.
@@ -15,7 +15,7 @@ Octave has its own build system for installing packages.
Phases
^^^^^^
The ``OctavePackage`` base class has a single phase:
The ``OctaveBuilder`` and ``OctavePackage`` base classes have a single phase:
#. ``install`` - install the package

View File

@@ -5,9 +5,9 @@
.. _perlpackage:
-----------
PerlPackage
-----------
----
Perl
----
Much like Octave, Perl has its own language-specific
build system.
@@ -16,7 +16,7 @@ build system.
Phases
^^^^^^
The ``PerlPackage`` base class comes with 3 phases that can be overridden:
The ``PerlBuilder`` and ``PerlPackage`` base classes come with 3 phases that can be overridden:
#. ``configure`` - configure the package
#. ``build`` - build the package

View File

@@ -5,9 +5,9 @@
.. _pythonpackage:
-------------
PythonPackage
-------------
------
Python
------
Python packages and modules have their own special build system. This
documentation covers everything you'll need to know in order to write
@@ -724,10 +724,9 @@ extends vs. depends_on
This is very similar to the naming dilemma above, with a slight twist.
As mentioned in the :ref:`Packaging Guide <packaging_extensions>`,
``extends`` and ``depends_on`` are very similar, but ``extends`` adds
the ability to *activate* the package. Activation involves symlinking
everything in the installation prefix of the package to the installation
prefix of Python. This allows the user to import a Python module without
``extends`` and ``depends_on`` are very similar, but ``extends`` ensures
that the extension and extendee share the same prefix in views.
This allows the user to import a Python module without
having to add that module to ``PYTHONPATH``.
When deciding between ``extends`` and ``depends_on``, the best rule of
@@ -735,7 +734,7 @@ thumb is to check the installation prefix. If Python libraries are
installed to ``<prefix>/lib/pythonX.Y/site-packages``, then you
should use ``extends``. If Python libraries are installed elsewhere
or the only files that get installed reside in ``<prefix>/bin``, then
don't use ``extends``, as symlinking the package wouldn't be useful.
don't use ``extends``.
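As a short illustration, a hypothetical extension could be declared as below
(``py-mytool`` and its dependency are made-up names, and ``PythonPackage``
already implies ``extends("python")``; it is spelled out here only for clarity):

.. code-block:: python

   class PyMytool(PythonPackage):
       """Hypothetical package installing into site-packages."""

       # extends() ensures the package shares python's prefix in views,
       # so `import mytool` works without touching PYTHONPATH
       extends("python")

       # a plain dependency, expressed with depends_on; here it is only
       # needed at build time
       depends_on("py-setuptools", type="build")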
^^^^^^^^^^^^^^^^^^^^^
Alternatives to Spack

View File

@@ -5,9 +5,9 @@
.. _qmakepackage:
------------
QMakePackage
------------
-----
QMake
-----
Much like Autotools and CMake, QMake is a build-script generator
designed by the developers of Qt. In its simplest form, Spack's
@@ -29,7 +29,7 @@ variables or edit ``*.pro`` files to get things working properly.
Phases
^^^^^^
The ``QMakePackage`` base class comes with the following phases:
The ``QMakeBuilder`` and ``QMakePackage`` base classes come with the following phases:
#. ``qmake`` - generate Makefiles
#. ``build`` - build the project

View File

@@ -5,9 +5,9 @@
.. _racketpackage:
-------------
RacketPackage
-------------
------
Racket
------
Much like Python, Racket packages and modules have their own special build system.
To learn more about the specifics of the Racket package system, please refer to the
@@ -17,7 +17,7 @@ To learn more about the specifics of Racket package system, please refer to the
Phases
^^^^^^
The ``RacketPackage`` base class provides an ``install`` phase that
The ``RacketBuilder`` and ``RacketPackage`` base classes provide an ``install`` phase that
can be overridden, corresponding to the use of:
.. code-block:: console

View File

@@ -5,9 +5,9 @@
.. _rocmpackage:
-----------
ROCmPackage
-----------
----
ROCm
----
The ``ROCmPackage`` is not a build system but a helper package. Like ``CudaPackage``,
it provides standard variants, dependencies, and conflicts to facilitate building
@@ -25,7 +25,7 @@ This package provides the following variants:
* **rocm**
This variant is used to enable/disable building with ``rocm``.
The default is disabled (or ``False``).
* **amdgpu_target**

View File

@@ -5,9 +5,9 @@
.. _rpackage:
--------
RPackage
--------
--
R
--
Like Python, R has its own built-in build system.
@@ -19,7 +19,7 @@ new Spack packages for.
Phases
^^^^^^
The ``RPackage`` base class has a single phase:
The ``RBuilder`` and ``RPackage`` base classes have a single phase:
#. ``install`` - install the package
@@ -193,10 +193,10 @@ Build system dependencies
As an extension of the R ecosystem, your package will obviously depend
on R to build and run. Normally, we would use ``depends_on`` to express
this, but for R packages, we use ``extends``. ``extends`` is similar to
``depends_on``, but adds an additional feature: the ability to "activate"
the package by symlinking it to the R installation directory. Since
every R package needs this, the ``RPackage`` base class contains:
this, but for R packages, we use ``extends``. This implies a special
dependency on R, which is used to set environment variables such as
``R_LIBS`` uniformly. Since every R package needs this, the ``RPackage``
base class contains:
.. code-block:: python
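
   # the line below is what the RPackage base class is expected to declare:
   # every R package extends R itself
   extends("r")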

View File

@@ -5,9 +5,9 @@
.. _rubypackage:
-----------
RubyPackage
-----------
----
Ruby
----
Like Perl, Python, and R, Ruby has its own build system for
installing Ruby gems.
@@ -16,7 +16,7 @@ installing Ruby gems.
Phases
^^^^^^
The ``RubyPackage`` base class provides the following phases that
The ``RubyBuilder`` and ``RubyPackage`` base classes provide the following phases that
can be overridden:
#. ``build`` - build everything needed to install

View File

@@ -5,9 +5,9 @@
.. _sconspackage:
------------
SConsPackage
------------
-----
SCons
-----
SCons is a general-purpose build system that does not rely on
Makefiles to build software. SCons is written in Python, and handles
@@ -42,7 +42,7 @@ As previously mentioned, SCons allows developers to add subcommands like
$ scons install
To facilitate this, the ``SConsPackage`` base class provides the
To facilitate this, the ``SConsBuilder`` and ``SConsPackage`` base classes provide the
following phases:
#. ``build`` - build the package

View File

@@ -5,9 +5,9 @@
.. _sippackage:
----------
SIPPackage
----------
---
SIP
---
SIP is a tool that makes it very easy to create Python bindings for C and C++
libraries. It was originally developed to create PyQt, the Python bindings for
@@ -22,7 +22,7 @@ provides support functions to the automatically generated code.
Phases
^^^^^^
The ``SIPPackage`` base class comes with the following phases:
The ``SIPBuilder`` and ``SIPPackage`` base classes come with the following phases:
#. ``configure`` - configure the package
#. ``build`` - build the package

View File

@@ -5,15 +5,15 @@
.. _sourceforgepackage:
------------------
SourceforgePackage
------------------
-----------
Sourceforge
-----------
``SourceforgePackage`` is a
`mixin-class <https://en.wikipedia.org/wiki/Mixin>`_. It automatically
sets the URL based on a list of Sourceforge mirrors listed in
`sourceforge_mirror_path`, which defaults to a half dozen known mirrors.
Refer to the package source
(`<https://github.com/spack/spack/blob/develop/lib/spack/spack/build_systems/sourceforge.py>`__) for the current list of mirrors used by Spack.
@@ -29,7 +29,7 @@ This package provides a method for populating mirror URLs.
It is decorated with `property` so its results are treated as
a package attribute.
Refer to
`<https://spack.readthedocs.io/en/latest/packaging_guide.html#mirrors-of-the-main-url>`__
for information on how Spack uses the `urls` attribute during
fetching.

View File

@@ -5,9 +5,9 @@
.. _wafpackage:
----------
WafPackage
----------
---
Waf
---
Like SCons, Waf is a general-purpose build system that does not rely
on Makefiles to build software.
@@ -16,7 +16,7 @@ on Makefiles to build software.
Phases
^^^^^^
The ``WafPackage`` base class comes with the following phases:
The ``WafBuilder`` and ``WafPackage`` base classes come with the following phases:
#. ``configure`` - configure the project
#. ``build`` - build the project

View File

@@ -32,14 +32,11 @@
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
link_name = os.path.abspath("_spack_root")
if not os.path.exists(link_name):
os.symlink(os.path.abspath("../../.."), link_name, target_is_directory=True)
sys.path.insert(0, os.path.abspath("_spack_root/lib/spack/external"))
sys.path.insert(0, os.path.abspath("_spack_root/lib/spack/external/pytest-fallback"))
if sys.version_info[0] < 3:
sys.path.insert(0, os.path.abspath("_spack_root/lib/spack/external/yaml/lib"))
else:
sys.path.insert(0, os.path.abspath("_spack_root/lib/spack/external/yaml/lib3"))
sys.path.append(os.path.abspath("_spack_root/lib/spack/"))
# Add the Spack bin directory to the path so that we can use its output in docs.
@@ -157,8 +154,8 @@ def setup(sphinx):
master_doc = "index"
# General information about the project.
project = u"Spack"
copyright = u"2013-2021, Lawrence Livermore National Laboratory."
project = "Spack"
copyright = "2013-2021, Lawrence Livermore National Laboratory."
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
@@ -206,6 +203,9 @@ def setup(sphinx):
# Spack classes that are private and we don't want to expose
("py:class", "spack.provider_index._IndexBase"),
("py:class", "spack.repo._PrependFileLoader"),
("py:class", "spack.build_systems._checks.BaseBuilder"),
# Spack classes that intersphinx is unable to resolve
("py:class", "spack.version.VersionBase"),
]
# The reST default role (used for this markup: `text`) to use for all documents.
@@ -344,7 +344,7 @@ class SpackStyle(DefaultStyle):
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
("index", "Spack.tex", u"Spack Documentation", u"Todd Gamblin", "manual"),
("index", "Spack.tex", "Spack Documentation", "Todd Gamblin", "manual"),
]
# The name of an image file (relative to this directory) to place at the top of
@@ -372,7 +372,7 @@ class SpackStyle(DefaultStyle):
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [("index", "spack", u"Spack Documentation", [u"Todd Gamblin"], 1)]
man_pages = [("index", "spack", "Spack Documentation", ["Todd Gamblin"], 1)]
# If true, show URL addresses after external links.
# man_show_urls = False
@@ -387,8 +387,8 @@ class SpackStyle(DefaultStyle):
(
"index",
"Spack",
u"Spack Documentation",
u"Todd Gamblin",
"Spack Documentation",
"Todd Gamblin",
"Spack",
"One line description of project.",
"Miscellaneous",

View File

@@ -224,9 +224,9 @@ them). Please note that we currently disable ccache's ``hash_dir``
feature to avoid an issue with the stage directory (see
https://github.com/LLNL/spack/pull/3761#issuecomment-294352232).
------------------
``shared_linking``
------------------
-----------------------
``shared_linking:type``
-----------------------
Control whether Spack embeds ``RPATH`` or ``RUNPATH`` attributes in ELF binaries
so that they can find their dependencies. Has no effect on macOS.
@@ -245,6 +245,52 @@ the loading object.
DO NOT MIX the two options within the same install tree.
-----------------------
``shared_linking:bind``
-----------------------
This is an *experimental option* that controls whether Spack embeds absolute paths
to needed shared libraries in ELF executables and shared libraries on Linux. Setting
this option to ``true`` has two advantages:
1. **Improved startup time**: when running an executable, the dynamic loader does not
have to perform a search for needed libraries, they are loaded directly.
2. **Reliability**: libraries loaded at runtime are those that were linked to. This
minimizes the risk of accidentally picking up system libraries.
In the current implementation, Spack sets the soname (shared object name) of
libraries to their install path upon installation. This has two implications:
1. binding does not apply to libraries installed *before* the option was enabled;
2. toggling the option off does *not* prevent binding of libraries installed when
the option was still enabled.
It is also worth noting that:
1. Applications relying on ``dlopen(3)`` will continue to work, even when they open
a library by name. This is because ``RPATH``\s are retained in binaries also
when ``bind`` is enabled.
2. ``LD_PRELOAD`` continues to work for the typical use case of overriding
symbols, such as preloading a library with a more efficient ``malloc``.
However, the preloaded library will be loaded *additionally to*, instead of
*in place of* another library with the same name --- this can be problematic
in very rare cases where libraries rely on a particular ``init`` or ``fini``
order.
.. note::
In some cases packages provide *stub libraries* that only contain an interface
for linking, but lack an implementation for runtime. An example of this is
``libcuda.so``, provided by the CUDA toolkit; it can be used to link against,
but the library needed at runtime is the one installed with the CUDA driver.
To avoid binding those libraries, they can be marked as non-bindable using
a property in the package:
.. code-block:: python
class Example(Package):
non_bindable_shared_objects = ["libinterface.so"]
----------------------
``terminal_title``
----------------------

View File

@@ -405,6 +405,19 @@ Spack understands several special variables. These are:
* ``$user``: name of the current user
* ``$user_cache_path``: user cache directory (``~/.spack`` unless
:ref:`overridden <local-config-overrides>`)
* ``$architecture``: the architecture triple of the current host, as
detected by Spack.
* ``$arch``: alias for ``$architecture``.
* ``$platform``: the platform of the current host, as detected by Spack.
* ``$operating_system``: the operating system of the current host, as
detected by the ``distro`` python module.
* ``$os``: alias for ``$operating_system``.
* ``$target``: the ISA target for the current host, as detected by
ArchSpec. E.g. ``skylake`` or ``neoverse-n1``.
* ``$target_family``: the target family for the current host, as
detected by ArchSpec. E.g. ``x86_64`` or ``aarch64``.
* ``$date``: the current date in the format YYYY-MM-DD
Note that, as with shell variables, you can write these as ``$varname``
or with braces to distinguish the variable from surrounding characters:
@@ -549,7 +562,7 @@ down the problem:
You can see above that the ``build_jobs`` and ``debug`` settings are
built in and are not overridden by a configuration file. The
``verify_ssl`` setting comes from the ``--insceure`` option on the
``verify_ssl`` setting comes from the ``--insecure`` option on the
command line. ``dirty`` and ``install_tree`` come from the custom
scopes ``./my-scope`` and ``./my-scope-2``, and all other configuration
options come from the default configuration files that ship with Spack.

View File

@@ -253,27 +253,6 @@ to update them.
multiple runs of ``spack style`` just to re-compute line numbers and
makes it much easier to fix errors directly off of the CI output.
.. warning::
Flake8 and ``pep8-naming`` require a number of dependencies in order
to run. If you installed ``py-flake8`` and ``py-pep8-naming``, the
easiest way to ensure the right packages are on your ``PYTHONPATH`` is
to run::
spack activate py-flake8
spack activate pep8-naming
so that all of the dependencies are symlinked to a central
location. If you see an error message like:
.. code-block:: console
Traceback (most recent call last):
File: "/usr/bin/flake8", line 5, in <module>
from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
that means Flake8 couldn't find setuptools in your ``PYTHONPATH``.
^^^^^^^^^^^^^^^^^^^
Documentation Tests
@@ -309,13 +288,9 @@ All of these can be installed with Spack, e.g.
.. code-block:: console
$ spack activate py-sphinx
$ spack activate py-sphinx-rtd-theme
$ spack activate py-sphinxcontrib-programoutput
$ spack load py-sphinx py-sphinx-rtd-theme py-sphinxcontrib-programoutput
so that all of the dependencies are symlinked into that Python's
tree. Alternatively, you could arrange for their library
directories to be added to PYTHONPATH. If you see an error message
so that all of the dependencies are added to PYTHONPATH. If you see an error message
like:
.. code-block:: console

View File

@@ -149,11 +149,9 @@ grouped by functionality.
Package-related modules
^^^^^^^^^^^^^^^^^^^^^^^
:mod:`spack.package`
Contains the :class:`~spack.package_base.Package` class, which
is the superclass for all packages in Spack. Methods on ``Package``
implement all phases of the :ref:`package lifecycle
<package-lifecycle>` and manage the build process.
:mod:`spack.package_base`
Contains the :class:`~spack.package_base.PackageBase` class, which
is the superclass for all packages in Spack.
:mod:`spack.util.naming`
Contains functions for mapping between Spack package names,

View File

@@ -233,8 +233,8 @@ packages will be listed as roots of the Environment.
All of the Spack commands that act on the list of installed specs are
Environment-sensitive in this way, including ``install``,
``uninstall``, ``activate``, ``deactivate``, ``find``, ``extensions``,
and more. In the :ref:`environment-configuration` section we will discuss
``uninstall``, ``find``, ``extensions``, and more. In the
:ref:`environment-configuration` section we will discuss
Environment-sensitive commands further.
^^^^^^^^^^^^^^^^^^^^^
@@ -519,27 +519,33 @@ available from the yaml file.
^^^^^^^^^^^^^^^^^^^
Spec concretization
^^^^^^^^^^^^^^^^^^^
An environment can be concretized in three different modes and the behavior active under any environment
is determined by the ``concretizer:unify`` property. By default specs are concretized *separately*, one after the other:
An environment can be concretized in three different modes and the behavior active under
any environment is determined by the ``concretizer:unify`` configuration option.
The *default* mode is to unify all specs:
.. code-block:: yaml
spack:
specs:
- hdf5~mpi
- hdf5+mpi
- zlib@1.2.8
concretizer:
unify: false
unify: true
This mode of operation permits to deploy a full software stack where multiple configurations of the same package
need to be installed alongside each other using the best possible selection of transitive dependencies. The downside
is that redundancy of installations is disregarded completely, and thus environments might be more bloated than
strictly needed. In the example above, for instance, if a version of ``zlib`` newer than ``1.2.8`` is known to Spack,
then it will be used for both ``hdf5`` installations.
This means that any package in the environment corresponds to a single concrete spec. In
the above example, when ``hdf5`` transitively depends on ``zlib``, it is required to
take ``zlib@1.2.8`` instead of a newer version. This mode of concretization is
particularly useful when environment views are used: if every package occurs in
only one flavor, it is usually possible to merge all install directories into a view.
If redundancy of the environment is a concern, Spack provides a way to install it *together where possible*,
i.e. trying to maximize reuse of dependencies across different specs:
A downside of unified concretization is that it can be overly strict. For example, a
concretization error would happen when both ``hdf5+mpi`` and ``hdf5~mpi`` are specified
in an environment.
The second mode is to *unify when possible*: this makes concretization of root specs
more independent. Instead of requiring reuse of dependencies across different root
specs, reuse is only maximized:
.. code-block:: yaml
@@ -551,26 +557,27 @@ i.e. trying to maximize reuse of dependencies across different specs:
concretizer:
unify: when_possible
Also in this case Spack allows having multiple configurations of the same package, but privileges the reuse of
specs over other factors. Going back to our example, this means that both ``hdf5`` installations will use
``zlib@1.2.8`` as a dependency even if newer versions of that library are available.
Central installations done at HPC centers by system administrators or user support groups are a common case
that fits either of these two modes.
This means that both ``hdf5`` installations will use ``zlib@1.2.8`` as a dependency even
if newer versions of that library are available.
Environments can also be configured to concretize all the root specs *together*, in a self-consistent way, to
ensure that each package in the environment comes with a single configuration:
The third mode of operation is to concretize root specs entirely independently by
disabling unified concretization:
.. code-block:: yaml
spack:
specs:
- hdf5~mpi
- hdf5+mpi
- zlib@1.2.8
concretizer:
unify: true
unify: false
This mode of operation is usually what is required by software developers that want to deploy their development
environment and have a single view of it in the filesystem.
In this example ``hdf5`` is concretized separately, and does not consider ``zlib@1.2.8``
as a constraint or preference. Instead, it will take the latest possible version.
The last two concretization options are typically useful for system administrators and
user support groups providing a large software stack for their HPC center.
.. note::
@@ -581,10 +588,10 @@ environment and have a single view of it in the filesystem.
.. admonition:: Re-concretization of user specs
When concretizing specs *together* or *together where possible* the entire set of specs will be
When using *unified* concretization (when possible), the entire set of specs will be
re-concretized after any addition of new user specs, to ensure that
the environment remains consistent / minimal. When instead the specs are concretized
separately only the new specs will be re-concretized after any addition.
the environment remains consistent / minimal. When instead unified concretization is
disabled, only the new specs will be concretized after any addition.
^^^^^^^^^^^^^
Spec Matrices
@@ -1063,19 +1070,23 @@ the include is conditional.
Building a subset of the environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The generated ``Makefile``\s contain install targets for each spec. Given the hash
of a particular spec, you can use the ``.install/<hash>`` target to install the
spec with its dependencies. There is also ``.install-deps/<hash>`` to *only* install
The generated ``Makefile``\s contain install targets for each spec, identified
by ``<name>-<version>-<hash>``. This allows you to install only a subset of the
packages in the environment. When packages are unique in the environment, it's
enough to know the name and let tab-completion fill out the version and hash.
The following phony targets are available: ``install/<spec>`` to install the
spec with its dependencies, and ``install-deps/<spec>`` to *only* install
its dependencies. This can be useful when certain flags should only apply to
dependencies. Below we show a use case where a spec is installed with verbose
output (``spack install --verbose``) while its dependencies are installed silently:
.. code:: console
$ spack env depfile -o Makefile --make-target-prefix my_env
$ spack env depfile -o Makefile
# Install dependencies in parallel, only show a log on error.
$ make -j16 my_env/.install-deps/<hash> SPACK_INSTALL_FLAGS=--show-log-on-error
$ make -j16 install-deps/python-3.11.0-<hash> SPACK_INSTALL_FLAGS=--show-log-on-error
# Install the root spec with verbose output.
$ make -j16 my_env/.install/<hash> SPACK_INSTALL_FLAGS=--verbose
$ make -j16 install/python-3.11.0-<hash> SPACK_INSTALL_FLAGS=--verbose


@@ -98,40 +98,42 @@ For example, this command:
.. code-block:: console
$ spack create https://ftp.osuosl.org/pub/blfs/conglomeration/libelf/libelf-0.8.13.tar.gz
creates a simple Python file:
.. code-block:: python

   from spack.package import *


   class Libelf(AutotoolsPackage):
       """FIXME: Put a proper description of your package here."""

       # FIXME: Add a proper url for your package's homepage here.
       homepage = "https://www.example.com"
       url = "https://ftp.osuosl.org/pub/blfs/conglomeration/libelf/libelf-0.8.13.tar.gz"

       # FIXME: Add a list of GitHub accounts to
       # notify when the package is updated.
       # maintainers = ["github_user1", "github_user2"]

       version("0.8.13", sha256="591a9b4ec81c1f2042a97aa60564e0cb79d041c52faa7416acb38bc95bd2c76d")

       # FIXME: Add dependencies if required.
       # depends_on("foo")

       def configure_args(self):
           # FIXME: Add arguments other than --prefix
           # FIXME: If not needed delete this function
           args = []
           return args
It doesn't take much Python coding to get from there to a working
package:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/libelf/package.py
:lines: 5-
Spack also provides wrapper functions around common commands like
``configure``, ``make``, and ``cmake`` to make writing packages


@@ -21,8 +21,9 @@ be present on the machine where Spack is run:
:header-rows: 1
These requirements can be easily installed on most modern Linux systems;
on macOS, the Command Line Tools package is required, and a full XCode suite
may be necessary for some packages such as Qt and apple-gl. Spack is designed
to run on HPC platforms like Cray. Not all packages should be expected
to work on all platforms.
A build matrix showing which packages are working on which systems is shown below.
@@ -44,7 +45,7 @@ A build matrix showing which packages are working on which systems is shown belo
yum install -y epel-release
yum update -y
yum --enablerepo epel groupinstall -y "Development Tools"
yum --enablerepo epel install -y curl findutils gcc-c++ gcc gcc-gfortran git gnupg2 hostname iproute redhat-lsb-core make patch python3 python3-pip python3-setuptools unzip
python3 -m pip install boto3
.. tab-item:: macOS Brew
@@ -124,88 +125,41 @@ Spack provides two ways of bootstrapping ``clingo``: from pre-built binaries
(default), or from sources. The fastest way to get started is to bootstrap from
pre-built binaries.
.. note::
When bootstrapping from pre-built binaries, Spack currently requires
``patchelf`` on Linux and ``otool`` on macOS. If ``patchelf`` is not in the
``PATH``, Spack will build it from sources, and a C++ compiler is required.
The first time you concretize a spec, Spack will bootstrap automatically:
.. code-block:: console
$ spack spec zlib
==> Bootstrapping clingo from pre-built binaries
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.4/build_cache/linux-centos7-x86_64-gcc-10.2.1-clingo-bootstrap-spack-ba5ijauisd3uuixtmactc36vps7yfsrl.spec.json
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.4/build_cache/linux-centos7-x86_64/gcc-10.2.1/clingo-bootstrap-spack/linux-centos7-x86_64-gcc-10.2.1-clingo-bootstrap-spack-ba5ijauisd3uuixtmactc36vps7yfsrl.spack
==> Installing "clingo-bootstrap@spack%gcc@10.2.1~docs~ipo+python+static_libstdcpp build_type=Release arch=linux-centos7-x86_64" from a buildcache
==> Bootstrapping patchelf from pre-built binaries
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.4/build_cache/linux-centos7-x86_64-gcc-10.2.1-patchelf-0.16.1-p72zyan5wrzuabtmzq7isa5mzyh6ahdp.spec.json
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.4/build_cache/linux-centos7-x86_64/gcc-10.2.1/patchelf-0.16.1/linux-centos7-x86_64-gcc-10.2.1-patchelf-0.16.1-p72zyan5wrzuabtmzq7isa5mzyh6ahdp.spack
==> Installing "patchelf@0.16.1%gcc@10.2.1 ldflags="-static-libstdc++ -static-libgcc" build_system=autotools arch=linux-centos7-x86_64" from a buildcache
Input spec
--------------------------------
zlib
Concretized
--------------------------------
zlib@1.2.13%gcc@9.4.0+optimize+pic+shared build_system=makefile arch=linux-ubuntu20.04-icelake
If for security concerns you cannot bootstrap ``clingo`` from pre-built
binaries, you have to disable fetching the binaries we generated with Github Actions.
.. code-block:: console
$ spack bootstrap disable github-actions-v0.4
==> "github-actions-v0.4" is now disabled and will not be used for bootstrapping
$ spack bootstrap disable github-actions-v0.3
==> "github-actions-v0.3" is now disabled and will not be used for bootstrapping
You can verify that the new settings are effective with:
.. command-output:: spack bootstrap list
.. note::
@@ -235,9 +189,7 @@ under the ``${HOME}/.spack`` directory. The software installed there can be quer
.. code-block:: console
$ spack -b find
-- linux-ubuntu18.04-x86_64 / gcc@10.1.0 ------------------------
clingo-bootstrap@spack python@3.6.9 re2c@1.2.1
@@ -246,7 +198,7 @@ In case it's needed the bootstrap store can also be cleaned with:
.. code-block:: console
$ spack clean -b
==> Removing bootstrapped software and configuration in "/home/spack/.spack/bootstrap"
^^^^^^^^^^^^^^^^^^
Check Installation
@@ -1753,9 +1705,11 @@ dependencies or incompatible build tools like autoconf. Here are several
packages known to work on Windows:
* abseil-cpp
* bzip2
* clingo
* cpuinfo
* cmake
* hdf5
* glm
* nasm
* netlib-lapack (requires Intel Fortran)

(Five new binary image files added (658 KiB, 449 KiB, 128 KiB, 126 KiB, and 35 KiB); diffs not shown. One further file diff suppressed because it is too large.)

@@ -56,7 +56,6 @@ or refer to the full manual below.
basic_usage
Tutorial: Spack 101 <https://spack-tutorial.readthedocs.io>
replace_conda_homebrew
known_issues
.. toctree::
:maxdepth: 2


@@ -1,40 +0,0 @@
.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
============
Known Issues
============
This is a list of known issues in Spack. It provides ways of getting around these
problems if you encounter them.
------------------------------------------------
Spack does not seem to respect ``packages.yaml``
------------------------------------------------
.. note::
This issue is **resolved** as of v0.19.0.dev0 commit
`8281a0c5feabfc4fe180846d6fe95cfe53420bc5`, through the introduction of package
requirements. See :ref:`package-requirements`.
A common problem in Spack v0.18.0 up to v0.19.0.dev0 is that package, compiler and target
preferences specified in ``packages.yaml`` do not seem to be respected. Spack picks the
"wrong" compilers and their versions, package versions and variants, and
micro-architectures.
This is however not a bug. In order to reduce the number of builds of the same
packages, the concretizer values reuse of installed packages higher than preferences
set in ``packages.yaml``. Note that ``packages.yaml`` specifies only preferences, not
hard constraints.
There are multiple workarounds:
1. Disable reuse during concretization: ``spack install --fresh <spec>`` when installing
from the command line, or ``spack concretize --fresh --force`` when using
environments.
2. Turn preferences into constraints by moving them to the input spec. For example,
use ``spack spec zlib%gcc@12`` when you want to force GCC 12 even if ``zlib`` was
already installed with GCC 10.


@@ -34,24 +34,155 @@ ubiquitous in the scientific software community. Second, it's a modern
language and has many powerful features to help make package writing
easy.
.. _installation_procedure:
--------------------------------------
Overview of the installation procedure
--------------------------------------
Whenever Spack installs software, it goes through a series of predefined steps:
.. image:: images/installation_pipeline.png
:scale: 60 %
:align: center
All these steps are influenced by the metadata in each ``package.py`` and
by the current Spack configuration.
Since build systems are different from one another, the execution of the
last block in the figure is further expanded in a build system specific way.
For ``CMake``, for instance, the expanded steps are:
.. image:: images/builder_phases.png
:align: center
:scale: 60 %
The predefined steps for each build system are called "phases".
In general, the name and order in which the phases will be executed can be
obtained by either reading the API docs at :py:mod:`~.spack.build_systems`, or
using the ``spack info`` command:
.. code-block:: console
:emphasize-lines: 13,14
$ spack info --phases m4
AutotoolsPackage: m4
Homepage: https://www.gnu.org/software/m4/m4.html
Safe versions:
1.4.17 ftp://ftp.gnu.org/gnu/m4/m4-1.4.17.tar.gz
Variants:
Name Default Description
sigsegv on Build the libsigsegv dependency
Installation Phases:
autoreconf configure build install
Build Dependencies:
libsigsegv
...
An extensive list of available build systems and phases is provided in :ref:`installation_process`.
------------------------
Writing a package recipe
------------------------
Since v0.19, Spack supports two ways of writing a package recipe. The most commonly used is to encode both the metadata
(directives, etc.) and the build behavior in a single class, as shown in the following example:
.. code-block:: python
   class Openjpeg(CMakePackage):
       """OpenJPEG is an open-source JPEG 2000 codec written in C language"""

       homepage = "https://github.com/uclouvain/openjpeg"
       url = "https://github.com/uclouvain/openjpeg/archive/v2.3.1.tar.gz"

       version("2.4.0", sha256="8702ba68b442657f11aaeb2b338443ca8d5fb95b0d845757968a7be31ef7f16d")

       variant("codec", default=False, description="Build the CODEC executables")

       depends_on("libpng", when="+codec")

       def url_for_version(self, version):
           if version >= Version("2.1.1"):
               return super(Openjpeg, self).url_for_version(version)
           url_fmt = "https://github.com/uclouvain/openjpeg/archive/version.{0}.tar.gz"
           return url_fmt.format(version)

       def cmake_args(self):
           args = [
               self.define_from_variant("BUILD_CODEC", "codec"),
               self.define("BUILD_MJ2", False),
               self.define("BUILD_THIRDPARTY", False),
           ]
           return args
A package encoded with a single class is backward compatible with versions of Spack
lower than v0.19, and so are custom repositories containing only recipes of this kind.
The downside is that *this format doesn't allow packagers to use more than one build system in a single recipe*.
To do that, we have to resort to the second way Spack has of writing packages, which involves writing a
builder class explicitly. Using the same example as above, this reads:
.. code-block:: python
   class Openjpeg(CMakePackage):
       """OpenJPEG is an open-source JPEG 2000 codec written in C language"""

       homepage = "https://github.com/uclouvain/openjpeg"
       url = "https://github.com/uclouvain/openjpeg/archive/v2.3.1.tar.gz"

       version("2.4.0", sha256="8702ba68b442657f11aaeb2b338443ca8d5fb95b0d845757968a7be31ef7f16d")

       variant("codec", default=False, description="Build the CODEC executables")

       depends_on("libpng", when="+codec")

       def url_for_version(self, version):
           if version >= Version("2.1.1"):
               return super(Openjpeg, self).url_for_version(version)
           url_fmt = "https://github.com/uclouvain/openjpeg/archive/version.{0}.tar.gz"
           return url_fmt.format(version)


   class CMakeBuilder(spack.build_systems.cmake.CMakeBuilder):
       def cmake_args(self):
           args = [
               self.define_from_variant("BUILD_CODEC", "codec"),
               self.define("BUILD_MJ2", False),
               self.define("BUILD_THIRDPARTY", False),
           ]
           return args
This way of writing packages allows extending the recipe to support multiple build systems
(see :ref:`multiple_build_systems` for more details). The downside is that recipes of this kind
are only understood by Spack v0.19 and later. More information on the internal architecture of
Spack can be found at :ref:`package_class_structure`.
.. note::
If a builder is implemented in ``package.py``, all build-specific methods must be moved
to the builder. This means that if you have a package like
.. code-block:: python
   class Foo(CMakePackage):
       def cmake_args(self):
           ...
and you add a builder to the ``package.py``, you must move ``cmake_args`` to the builder.
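   Continuing the hypothetical ``Foo`` example above, the migrated recipe would read along these lines:

   .. code-block:: python

      class Foo(CMakePackage):
          # metadata (versions, variants, dependencies) stays on the package
          ...


      class CMakeBuilder(spack.build_systems.cmake.CMakeBuilder):
          def cmake_args(self):
              # build-specific logic now lives on the builder
              ...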
.. _cmd-spack-create:
---------------------
Creating new packages
---------------------
To help create a new package, Spack provides a command that generates a ``package.py``
file in an existing repository, with a boilerplate package template. Here's an example:
.. code-block:: console
@@ -87,23 +218,6 @@ You do not *have* to download all of the versions up front. You can
always choose to download just one tarball initially, and run
:ref:`cmd-spack-checksum` later if you need more versions.
==> This package looks like it uses the autotools build system
==> Created template for gmp package
==> Created package file: /Users/Adam/spack/var/spack/repos/builtin/packages/gmp/package.py
Spack automatically creates a directory in the appropriate repository,
generates a boilerplate template for your package, and opens up the new
``package.py`` in your favorite ``$EDITOR``:
@@ -111,6 +225,14 @@ generates a boilerplate template for your package, and opens up the new
.. code-block:: python
:linenos:
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
# ----------------------------------------------------------------------------
# If you submit this package back to Spack as a pull request,
# please first remove this boilerplate and all FIXME comments.
#
# This is a template package file for Spack. We've put "FIXME"
# next to all the things you'll want to change. Once you've handled
@@ -123,9 +245,8 @@ generates a boilerplate template for your package, and opens up the new
# spack edit gmp
#
# See the Spack documentation for more information on packaging.
# ----------------------------------------------------------------------------
import spack.build_systems.autotools
from spack.package import *
@@ -133,19 +254,17 @@ generates a boilerplate template for your package, and opens up the new
"""FIXME: Put a proper description of your package here."""
# FIXME: Add a proper url for your package's homepage here.
homepage = "http://www.example.com"
url = "https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2"
homepage = "https://www.example.com"
url = "https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2"
# FIXME: Add a list of GitHub accounts to
# notify when the package is updated.
# maintainers = ["github_user1", "github_user2"]
version("6.2.1", sha256="eae9326beb4158c386e39a356818031bd28f3124cf915f8c5b1dc4c7a36b4d7c")
# FIXME: Add dependencies if required.
# depends_on("foo")
def configure_args(self):
# FIXME: Add arguments other than --prefix
@@ -154,15 +273,16 @@ generates a boilerplate template for your package, and opens up the new
return args
The tedious stuff (creating the class, checksumming archives) has been
done for you. Spack correctly detected that ``gmp`` uses the ``autotools``
build system, so it created a new ``Gmp`` package that subclasses the
``AutotoolsPackage`` base class.
The default installation procedure for a package subclassing the ``AutotoolsPackage``
is to go through the typical process of:
.. code-block:: bash
./configure --prefix=/path/to/installation/directory
make
make check
make install
@@ -209,12 +329,14 @@ The rest of the tasks you need to do are as follows:
Your new package may require specific flags during ``configure``.
These can be added via ``configure_args``. Specifics will differ
depending on the package and its build system.
:ref:`installation_process` is
covered in detail later.
^^^^^^^^^^^^^^^^^^^^^^^^^
Non-downloadable software
^^^^^^^^^^^^^^^^^^^^^^^^^
If your software cannot be downloaded from a URL, you can still create a boilerplate
``package.py`` by telling ``spack create`` what name you want to use:
.. code-block:: console
@@ -223,40 +345,23 @@ cannot be downloaded from a URL? You can still create a boilerplate
This will create a simple ``intel`` package with an ``install()``
method that you can craft to install your package.
Likewise, you can force the build system to be used with ``--template`` and,
in case it's needed, you can overwrite a package already in the repository
with ``--force``:
.. code-block:: console
$ spack create --name gmp https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2
$ spack create --force --template autotools https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2
.. note::
If you are creating a package that uses the Autotools build system
but does not come with a ``configure`` script, you'll need to add an
``autoreconf`` method to your package that explains how to generate
the ``configure`` script. You may also need the following dependencies:
.. code-block:: python
depends_on('autoconf', type='build')
depends_on('automake', type='build')
depends_on('libtool', type='build')
depends_on('m4', type='build')
A complete list of available build system templates can be found by running
``spack create --help``.
.. _cmd-spack-edit:
-------------------------
Editing existing packages
-------------------------
One of the easiest ways to learn how to write packages is to look at
existing ones. You can edit a package file by name with the ``spack
@@ -266,10 +371,15 @@ edit`` command:
$ spack edit gmp
If you used ``spack create`` to create a package, you can get back to
it later with ``spack edit``. For instance, the ``gmp`` package actually
lives in:
.. code-block:: console
$ spack location -p gmp
${SPACK_ROOT}/var/spack/repos/builtin/packages/gmp/package.py
but ``spack edit`` provides a much simpler shortcut and saves you the
trouble of typing the full path.
@@ -2422,7 +2532,7 @@ Spack provides a mechanism for dependencies to influence the
environment of their dependents by overriding the
:meth:`setup_dependent_run_environment <spack.package_base.PackageBase.setup_dependent_run_environment>`
or the
:meth:`setup_dependent_build_environment <spack.builder.Builder.setup_dependent_build_environment>`
methods.
The Qt package, for instance, uses this call:
@@ -2524,9 +2634,12 @@ extendable package:
extends('python')
...
This accomplishes a few things. Firstly, the Python package can set special
variables such as ``PYTHONPATH`` for all extensions when the run or build
environment is set up. Secondly, filesystem views can ensure that extensions
are put in the same prefix as their extendee. This ensures that Python in
a view can always locate its Python packages, even without environment
variables set.
A package can only extend one other package at a time. To support packages
that may extend one of a list of other packages, Spack supports multiple
@@ -2574,9 +2687,8 @@ variant(s) are selected. This may be accomplished with conditional
...
Sometimes, certain files in one package will conflict with those in
another, which means they cannot both be used in a view at the
same time. In this case, you can tell Spack to ignore those files:
.. code-block:: python
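   # The body of this example is elided in this hunk; a sketch consistent
   # with the surrounding text, assuming the ``ignore`` keyword argument
   # of ``extends()``, would be:
   extends('python', ignore=r'bin/.*')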
@@ -2588,7 +2700,7 @@ when it does the activation:
...
The code above will prevent everything in the ``$prefix/bin/`` directory
from being linked in a view.
.. note::
@@ -2612,67 +2724,6 @@ extensions; as a consequence python extension packages (those inheriting from
``PythonPackage``) likewise override ``add_files_to_view`` in order to rewrite
shebang lines which point to the Python interpreter.
^^^^^^^^^^^^^^^^^^^^^^^^^
Activation & deactivation
^^^^^^^^^^^^^^^^^^^^^^^^^
Adding an extension to a view is referred to as an activation. If the view is
maintained in the Spack installation prefix of the extendee this is called a
global activation. Activations may involve updating some centralized state
that is maintained by the extendee package, so there can be additional work
for adding extensions compared with non-extension packages.
Spack's ``Package`` class has default ``activate`` and ``deactivate``
implementations that handle symbolically linking extensions' prefixes
into a specified view. Extendable packages can override these methods
to add custom activate/deactivate logic of their own. For example,
the ``activate`` and ``deactivate`` methods in the Python class handle
symbolic linking of extensions, but they also handle details surrounding
Python's ``.pth`` files, and other aspects of Python packaging.
Spack's extensions mechanism is designed to be extensible, so that
other packages (like Ruby, R, Perl, etc.) can provide their own
custom extension management logic, as they may not handle modules the
same way that Python does.
Let's look at Python's activate function:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/python/package.py
:pyobject: Python.activate
:linenos:
This function is called on the *extendee* (Python). It first calls
``activate`` in the superclass, which handles symlinking the
extension package's prefix into the specified view. It then does
some special handling of the ``easy-install.pth`` file, part of
Python's setuptools.
Deactivate behaves similarly to activate, but it unlinks files:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/python/package.py
:pyobject: Python.deactivate
:linenos:
Both of these methods call some custom functions in the Python
package. See the source for Spack's Python package for details.
^^^^^^^^^^^^^^^^^^^^
Activation arguments
^^^^^^^^^^^^^^^^^^^^
You may have noticed that the ``activate`` function defined above
takes keyword arguments. These are the keyword arguments from
``extends()``, and they are passed to both activate and deactivate.
This capability allows an extension to customize its own activation by
passing arguments to the extendee. Extendees can likewise implement
custom ``activate()`` and ``deactivate()`` functions to suit their
needs.
The only keyword argument supported by default is the ``ignore``
argument, which can take a regex, list of regexes, or a predicate to
determine which files *not* to symlink during activation.
.. _virtual-dependencies:
--------------------
@@ -3280,67 +3331,91 @@ the Python extensions provided by them: once for ``+python`` and once
for ``~python``. Other than using a little extra disk space, that
solution has no serious problems.
.. _installation_process:
--------------------------------
Overriding build system defaults
--------------------------------
.. note::
   If you code a single class in ``package.py``, all the functions shown in the table below
   can be implemented with the same signature on the ``*Package`` instead of the corresponding builder.

Most of the time the default implementation of methods or attributes in build system base classes
is what a packager needs, and only a few entities need to be overridden. Typically we just
need to override methods like ``configure_args``:
.. code-block:: python
   def configure_args(self):
       args = ["--enable-cxx"] + self.enable_or_disable("libs")
       if "libs=static" in self.spec:
           args.append("--with-pic")
       return args
The actual set of entities available for overriding in ``package.py`` depends on
the build system. The build systems currently supported by Spack are:
+----------------------------------------------------------+-----------------------------------+
| **API docs**                                             | **Description**                   |
+==========================================================+===================================+
| :class:`~spack.build_systems.generic`                    | Generic build system without any  |
|                                                          | base implementation               |
+----------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.makefile`                   | Specialized build system for      |
|                                                          | software built invoking           |
|                                                          | hand-written Makefiles            |
+----------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.autotools`                  | Specialized build system for      |
|                                                          | software built using              |
|                                                          | GNU Autotools                     |
+----------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.cmake`                      | Specialized build system for      |
|                                                          | software built using CMake        |
+----------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.maven`                      | Specialized build system for      |
|                                                          | software built using Maven        |
+----------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.meson`                      | Specialized build system for      |
|                                                          | software built using Meson        |
+----------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.nmake`                      | Specialized build system for      |
|                                                          | software built using NMake        |
+----------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.qmake`                      | Specialized build system for      |
|                                                          | software built using QMake        |
+----------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.scons`                      | Specialized build system for      |
|                                                          | software built using SCons        |
+----------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.waf`                        | Specialized build system for      |
|                                                          | software built using Waf          |
+----------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.r`                          | Specialized build system for      |
|                                                          | R extensions                      |
+----------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.octave`                     | Specialized build system for      |
|                                                          | Octave packages                   |
+----------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.python`                     | Specialized build system for      |
|                                                          | Python extensions                 |
+----------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.perl`                       | Specialized build system for      |
|                                                          | Perl extensions                   |
+----------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.ruby`                       | Specialized build system for      |
|                                                          | Ruby extensions                   |
+----------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.intel`                      | Specialized build system for      |
|                                                          | licensed Intel software           |
+----------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.oneapi`                     | Specialized build system for      |
|                                                          | Intel oneAPI software             |
+----------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.aspell_dict`                | Specialized build system for      |
|                                                          | Aspell dictionaries               |
+----------------------------------------------------------+-----------------------------------+
@@ -3353,52 +3428,17 @@ The classes that are currently provided by Spack are:
For example, a Python extension installed with CMake would ``extends('python')`` and
subclass from :class:`~spack.build_systems.cmake.CMakePackage`.
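A minimal sketch of such a recipe (the package name and metadata here are invented for illustration):

.. code-block:: python

   class PyFoo(CMakePackage):
       """Hypothetical Python extension built with CMake."""

       extends("python")
       depends_on("python", type=("build", "link", "run"))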
^^^^^^^^^^^^^^^^^^^^^^^^^^
Overriding builder methods
^^^^^^^^^^^^^^^^^^^^^^^^^^
Build-system "phases" have default implementations that fit most of the common cases:
.. literalinclude:: _spack_root/lib/spack/spack/build_systems/autotools.py
:pyobject: AutotoolsBuilder.configure
:linenos:
It is usually sufficient for a packager to override a few
build system specific helper methods or attributes to provide, for instance,
configure arguments:
@@ -3406,31 +3446,31 @@ configure arguments:
:pyobject: M4.configure_args
:linenos:
.. note::
   Each specific build system has a list of attributes and methods that can be overridden to
   fine-tune the installation of a package without overriding an entire phase. For more
   information on them, see the API docs of the :py:mod:`~.spack.build_systems` module.
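For example, a ``MakefilePackage`` recipe can tweak the targets passed to ``make`` through
attributes, without overriding any phase (the target names below are illustrative):

.. code-block:: python

   class Mypackage(MakefilePackage):
       # attributes like these fine-tune the build and install phases
       # without overriding them entirely
       build_targets = ["all"]
       install_targets = ["install"]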
^^^^^^^^^^^^^^^^^^^^^^^^^^
Overriding an entire phase
^^^^^^^^^^^^^^^^^^^^^^^^^^
Sometimes it is necessary to override an entire phase. If the ``package.py`` contains
a single class recipe (see :ref:`package_class_structure`), the signature for a
phase is:
.. code-block:: python
   class Openjpeg(CMakePackage):
       def install(self, spec, prefix):
           ...
regardless of the build system. The arguments for the phase are:
``self``
This is the package object, which extends ``CMakePackage``.
For API docs on Package objects, see
:py:class:`Package <spack.package_base.PackageBase>`.
``spec``
This is the concrete spec object created by Spack from an
@@ -3445,12 +3485,111 @@ for the install phase is:
The arguments ``spec`` and ``prefix`` are passed only for convenience, as they always
correspond to ``self.spec`` and ``self.spec.prefix`` respectively.
As mentioned in :ref:`install-environment`, you will usually not need to refer
to dependencies explicitly in your package file, as the compiler wrappers take care of most of
the heavy lifting here. There will be times, though, when you need to refer to
the install locations of dependencies, or when you need to do something different
depending on the version, compiler, dependencies, etc. that your package is
built with. These parameters give you access to this type of information.
If the ``package.py`` encodes builders explicitly, the signature for a phase changes slightly:
.. code-block:: python
   class CMakeBuilder(spack.build_systems.cmake.CMakeBuilder):
       def install(self, pkg, spec, prefix):
           ...
In this case the package is passed as the second argument, and ``self`` is the builder instance.
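A sketch of what such a builder-style phase override could look like (the installed artifact
``foo`` is hypothetical):

.. code-block:: python

   class CMakeBuilder(spack.build_systems.cmake.CMakeBuilder):
       def install(self, pkg, spec, prefix):
           # ``pkg`` (also reachable as ``self.pkg``) gives access to the
           # package object and its metadata; ``spec`` and ``prefix`` are
           # the same convenience arguments as in the single-class form
           mkdirp(prefix.bin)
           install("foo", prefix.bin)  # hypothetical build artifact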
.. _multiple_build_systems:
^^^^^^^^^^^^^^^^^^^^^^
Multiple build systems
^^^^^^^^^^^^^^^^^^^^^^
There are cases where a piece of software actively supports two build systems, or changes build systems
as it evolves, or needs different build systems on different platforms. Spack can deal with
these cases natively, if a recipe is written using builders explicitly.
For instance, software that supports two build systems unconditionally should derive from
both ``*Package`` base classes, and declare the possible use of multiple build systems using
a directive:
.. code-block:: python
   class ArpackNg(CMakePackage, AutotoolsPackage):

       build_system("cmake", "autotools", default="cmake")
In this case the software can be built with both ``autotools`` and ``cmake``. Since the package
supports multiple build systems, it is necessary to declare which one is the default. The ``package.py``
will likely contain some overriding of default builder methods:
.. code-block:: python
   class CMakeBuilder(spack.build_systems.cmake.CMakeBuilder):
       def cmake_args(self):
           pass


   class AutotoolsBuilder(spack.build_systems.autotools.AutotoolsBuilder):
       def configure_args(self):
           pass
In more complex cases it might happen that the build system changes according to certain conditions,
for instance across versions. That can be expressed with conditional variant values:
.. code-block:: python
   class ArpackNg(CMakePackage, AutotoolsPackage):

       build_system(
           conditional("cmake", when="@0.64:"),
           conditional("autotools", when="@:0.63"),
           default="cmake",
       )
In the example above, the directive imposes a change from ``Autotools`` to ``CMake`` going
from ``v0.63`` to ``v0.64``.
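The same mechanism can express a platform-dependent choice of build system; a hypothetical sketch:

.. code-block:: python

   class Example(CMakePackage, AutotoolsPackage):
       # hypothetical: use CMake on Windows and Autotools on Linux
       build_system(
           conditional("cmake", when="platform=windows"),
           conditional("autotools", when="platform=linux"),
           default="autotools",
       )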
^^^^^^^^^^^^^^^^^^
Mixin base classes
^^^^^^^^^^^^^^^^^^
Besides build systems, there are other cases where common metadata and behavior can be extracted
and reused by many packages. For instance, packages that depend on ``Cuda`` or ``Rocm`` share
common dependencies and constraints. To factor these attributes into a single place, Spack provides
a few mixin classes in the ``spack.build_systems`` module:
+---------------------------------------------------------------+-----------------------------------+
| **API docs**                                                  | **Description**                   |
+===============================================================+===================================+
| :class:`~spack.build_systems.cuda.CudaPackage`                | A helper class for packages that  |
|                                                               | use CUDA                          |
+---------------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.rocm.ROCmPackage`                | A helper class for packages that  |
|                                                               | use ROCm                          |
+---------------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.gnu.GNUMirrorPackage`            | A helper class for GNU packages   |
+---------------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.python.PythonExtension`          | A helper class for Python         |
|                                                               | extensions                        |
+---------------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.sourceforge.SourceforgePackage`  | A helper class for packages       |
|                                                               | from sourceforge.org              |
+---------------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.sourceware.SourcewarePackage`    | A helper class for packages       |
|                                                               | from sourceware.org               |
+---------------------------------------------------------------+-----------------------------------+
| :class:`~spack.build_systems.xorg.XorgPackage`                | A helper class for x.org          |
|                                                               | packages                          |
+---------------------------------------------------------------+-----------------------------------+
These classes should be used by adding them to the inheritance tree of the package that needs them,
for instance:
.. code-block:: python
   class Cp2k(MakefilePackage, CudaPackage):
       """CP2K is a quantum chemistry and solid state physics software package
       that can perform atomistic simulations of solid state, liquid, molecular,
       periodic, material, crystal, and biological systems
       """
In the example above ``Cp2k`` inherits all the conflicts and variants that ``CudaPackage`` defines.
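For instance, a recipe can key its build on the inherited ``cuda`` variant; a hypothetical
sketch (the ``CUDA_ARCH`` option name is invented):

.. code-block:: python

   class MyApp(CMakePackage, CudaPackage):
       def cmake_args(self):
           args = []
           if self.spec.satisfies("+cuda"):
               # ``cuda`` and ``cuda_arch`` are variants inherited from CudaPackage
               cuda_arch = self.spec.variants["cuda_arch"].value
               args.append(self.define("CUDA_ARCH", cuda_arch))
           return args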
.. _install-environment:
@@ -4208,16 +4347,9 @@ In addition to invoking the right compiler, the compiler wrappers add
flags to the compile line so that dependencies can be easily found.
These flags are added for each dependency, if they exist:
* Compile-time library search paths: ``-L$dep_prefix/lib``, ``-L$dep_prefix/lib64``
* Runtime library search paths (RPATHs): ``$rpath_flag$dep_prefix/lib``, ``$rpath_flag$dep_prefix/lib64``
* Include search paths: ``-I$dep_prefix/include``
An example of this would be the ``libdwarf`` build, which has one
dependency: ``libelf``. Every call to ``cc`` in the ``libdwarf``
@@ -5062,6 +5194,16 @@ where each argument has the following meaning:
will run.
Each call starts with the working directory set to the spec's test stage
directory (i.e., ``self.test_suite.test_dir_for_spec(self.spec)``).
.. warning::
Use of the package spec's installation directory for building and running
tests is **strongly** discouraged. Doing so has caused permission errors
for shared spack instances *and* for facilities that install the software
in read-only file systems or directories.
"""""""""""""""""""""""""""""""""""""""""
Accessing package- and test-related files
@@ -5069,10 +5211,10 @@ Accessing package- and test-related files
You may need to access files from one or more locations when writing
stand-alone tests. This can happen if the software's repository does not
include test source files or includes files but has no way to build the
executables using the installed headers and libraries. In these cases,
you may need to reference the files relative to one or more root
directories. The properties containing package- (or spec-) and test-related
directory paths are provided in the table below.
.. list-table:: Directory-to-property mapping
@@ -5081,19 +5223,22 @@ directory paths are provided in the table below.
   * - Root Directory
     - Package Property
     - Example(s)
   * - Package (Spec) Installation
     - ``self.prefix``
     - ``self.prefix.include``, ``self.prefix.lib``
   * - Dependency Installation
     - ``self.spec['<dependency-package>'].prefix``
     - ``self.spec['trilinos'].prefix.include``
   * - Test Suite Stage
     - ``self.test_suite.stage``
     - ``join_path(self.test_suite.stage, 'results.txt')``
   * - Spec's Test Stage
     - ``self.test_suite.test_dir_for_spec``
     - ``self.test_suite.test_dir_for_spec(self.spec)``
   * - Current Spec's Build-time Files
     - ``self.test_suite.current_test_cache_dir``
     - ``join_path(self.test_suite.current_test_cache_dir, 'examples', 'foo.c')``
   * - Current Spec's Custom Test Files
     - ``self.test_suite.current_test_data_dir``
     - ``join_path(self.test_suite.current_test_data_dir, 'hello.f90')``
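As an illustration, a stand-alone test method might combine these properties like so
(a sketch; the file names are taken from the examples above):

.. code-block:: python

   def test(self):
       # an input file shipped in the package's custom test data directory
       data_file = join_path(self.test_suite.current_test_data_dir, "hello.f90")
       # a source file cached from the build stage at install time
       cached_src = join_path(self.test_suite.current_test_cache_dir, "examples", "foo.c")
       # ... compile and run them with self.run_test(...) as appropriate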
@@ -6099,3 +6244,82 @@ might write:
DWARF_PREFIX = $(spack location --install-dir libdwarf)
CXXFLAGS += -I$DWARF_PREFIX/include
CXXFLAGS += -L$DWARF_PREFIX/lib
.. _package_class_structure:
--------------------------
Package class architecture
--------------------------
.. note::
This section aims to provide a high-level view of how the package class architecture evolved
in Spack, and some insight into the current design.
Packages in Spack were originally designed to support only a single build system. The overall
class structure for a package looked like:
.. image:: images/original_package_architecture.png
:scale: 60 %
:align: center
In this architecture the base class ``AutotoolsPackage`` was responsible for both the metadata
related to the ``autotools`` build system (e.g. dependencies or variants common to all packages
using it), and for encoding the default installation procedure.
In reality, a non-negligible number of packages either change their build system as the project
evolves, or use different build systems on different platforms. An architecture based on a single class
requires hacks or other workarounds to deal with these cases.
To support a model more adherent to reality, Spack v0.19 changed its internal design by extracting
the attributes and methods related to building a software into a separate hierarchy:
.. image:: images/builder_package_architecture.png
:scale: 60 %
:align: center
In this new format each ``package.py`` contains one ``*Package`` class that gathers all the metadata,
and one or more ``*Builder`` classes that encode the installation procedure. A specific builder object
is created just before the software is built, at a time when Spack knows which build system needs
to be used for the current installation; the builder receives a ``package`` object during initialization.
^^^^^^^^^^^^^^^^^^^^^^^^
``build_system`` variant
^^^^^^^^^^^^^^^^^^^^^^^^
To allow imposing conditions based on the build system, each package must have a ``build_system`` variant,
which is usually inherited from base classes. This variant allows for writing metadata that is conditional
on the build system:
.. code-block:: python
   with when("build_system=cmake"):
       depends_on("cmake", type="build")
and also for selecting a specific build system from a spec literal, like in the following command:
.. code-block:: console
$ spack install arpack-ng build_system=autotools
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Compatibility with single-class format
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Internally, Spack always uses builders to perform operations related to the installation of a specific
piece of software. The builders are created in the ``spack.builder.create`` function:
.. literalinclude:: _spack_root/lib/spack/spack/builder.py
:pyobject: create
To achieve backward compatibility with the single-class format, this function creates a special
"adapter builder" if no custom builder is detected in the recipe:
.. image:: images/adapter.png
:scale: 60 %
:align: center
Overall, the role of the adapter is to route access to attributes or methods first through the ``*Package``
hierarchy, and then back to the base class builder. This is schematically shown in the diagram above, where
the adapter's role is to "emulate" a method resolution order like the one represented by the red arrows.


@@ -1,5 +1,5 @@
Name, Supported Versions, Notes, Requirement Reason
Python, 3.6--3.11, , Interpreter for Spack
C/C++ Compilers, , , Building software
make, , , Build software
patch, , , Build software
@@ -11,6 +11,7 @@ bzip2, , , Compress/Decompress archives
xz, , , Compress/Decompress archives
zstd, , Optional, Compress/Decompress archives
file, , , Create/Use Buildcaches
lsb-release, , , Linux: identify operating system version
gnupg2, , , Sign/Verify Buildcaches
git, , , Manage Software Repositories
svn, , Optional, Manage Software Repositories

lib/spack/env/cc vendored

@@ -241,28 +241,28 @@ case "$command" in
mode=cpp
debug_flags="-g"
;;
cc|c89|c99|gcc|clang|armclang|icc|icx|pgcc|nvc|xlc|xlc_r|fcc|amdclang|cl.exe|craycc)
command="$SPACK_CC"
language="C"
comp="CC"
lang_flags=C
debug_flags="-g"
;;
c++|CC|g++|clang++|armclang++|icpc|icpx|dpcpp|pgc++|nvc++|xlc++|xlc++_r|FCC|amdclang++|crayCC)
command="$SPACK_CXX"
language="C++"
comp="CXX"
lang_flags=CXX
debug_flags="-g"
;;
ftn|f90|fc|f95|gfortran|flang|armflang|ifort|ifx|pgfortran|nvfortran|xlf90|xlf90_r|nagfor|frt|amdflang|crayftn)
command="$SPACK_FC"
language="Fortran 90"
comp="FC"
lang_flags=F
debug_flags="-g"
;;
f77|xlf|xlf_r|pgf77)
command="$SPACK_F77"
language="Fortran 77"
comp="F77"
@@ -440,6 +440,47 @@ while [ $# -ne 0 ]; do
continue
fi
    if [ -n "${SPACK_COMPILER_FLAGS_KEEP}" ] ; then
        # NOTE: the eval is required to allow `|` alternatives inside the variable
        eval "\
            case \"\$1\" in
                $SPACK_COMPILER_FLAGS_KEEP)
                    append other_args_list \"\$1\"
                    shift
                    continue
                    ;;
            esac
        "
    fi

    # the replace list is a space-separated list of pipe-separated pairs,
    # the first in each pair is the original prefix to be matched, the
    # second is the replacement prefix
    if [ -n "${SPACK_COMPILER_FLAGS_REPLACE}" ] ; then
        for rep in ${SPACK_COMPILER_FLAGS_REPLACE} ; do
            before=${rep%|*}
            after=${rep#*|}
            eval "\
                stripped=\"\${1##$before}\"
            "
            if [ "$stripped" = "$1" ] ; then
                continue
            fi

            replaced="$after$stripped"

            # it matched, remove it
            shift

            if [ -z "$replaced" ] ; then
                # completely removed, continue OUTER loop
                continue 2
            fi

            # re-build argument list with replacement
            set -- "$replaced" "$@"
        done
    fi
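    # For illustration (a hypothetical setting, not taken from this diff):
    # SPACK_COMPILER_FLAGS_REPLACE="-Werror*|" would match any argument
    # starting with -Werror and replace it with the empty string, dropping
    # the flag from the command line entirely.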
case "$1" in
-isystem*)
arg="${1#-isystem}"


@@ -0,0 +1 @@
../../cc

lib/spack/env/cce/craycc vendored Symbolic link

@@ -0,0 +1 @@
../cc

lib/spack/env/cce/crayftn vendored Symbolic link

@@ -0,0 +1 @@
../cc


@@ -18,7 +18,7 @@
* Homepage: https://pypi.python.org/pypi/archspec
* Usage: Labeling, comparison and detection of microarchitectures
* Version: 0.2.0 (commit 77640e572725ad97f18e63a04857155752ace045)
argparse
--------


@@ -132,9 +132,15 @@ def sysctl(*args):
"model name": sysctl("-n", "machdep.cpu.brand_string"),
}
else:
    model = "unknown"
    model_str = sysctl("-n", "machdep.cpu.brand_string").lower()
    if "m2" in model_str:
        model = "m2"
    elif "m1" in model_str:
        model = "m1"
    elif "apple" in model_str:
        model = "m1"
info = {
"vendor_id": "Apple",
"flags": [],
@@ -322,14 +328,26 @@ def compatibility_check_for_aarch64(info, target):
    features = set(info.get("Features", "").split())
    vendor = info.get("CPU implementer", "generic")

    # At the moment it's not clear how to detect compatibility with
    # a specific version of the architecture
    if target.vendor == "generic" and target.name != "aarch64":
        return False

    arch_root = TARGETS[basename]
    arch_root_and_vendor = arch_root == target.family and target.vendor in (
        vendor,
        "generic",
    )

    # On macOS it seems impossible to get all the CPU features
    # with syctl info, but for ARM we can get the exact model
    if platform.system() == "Darwin":
        model_key = info.get("model", basename)
        model = TARGETS[model_key]
        return arch_root_and_vendor and (target == model or target in model.ancestors)

    return arch_root_and_vendor and target.features.issubset(features)
@compatibility_check(architecture_family="riscv64")
def compatibility_check_for_riscv64(info, target):


@@ -85,7 +85,7 @@
"intel": [
{
"versions": ":",
"name": "x86-64",
"name": "pentium4",
"flags": "-march={name} -mtune=generic"
}
],
@@ -2093,8 +2093,163 @@
]
}
},
"thunderx2": {
"armv8.1a": {
"from": ["aarch64"],
"vendor": "generic",
"features": [],
"compilers": {
"gcc": [
{
"versions": "5:",
"flags": "-march=armv8.1-a -mtune=generic"
}
],
"clang": [
{
"versions": ":",
"flags": "-march=armv8.1-a -mtune=generic"
}
],
"apple-clang": [
{
"versions": ":",
"flags": "-march=armv8.1-a -mtune=generic"
}
],
"arm": [
{
"versions": ":",
"flags": "-march=armv8.1-a -mtune=generic"
}
]
}
},
"armv8.2a": {
"from": ["armv8.1a"],
"vendor": "generic",
"features": [],
"compilers": {
"gcc": [
{
"versions": "6:",
"flags": "-march=armv8.2-a -mtune=generic"
}
],
"clang": [
{
"versions": ":",
"flags": "-march=armv8.2-a -mtune=generic"
}
],
"apple-clang": [
{
"versions": ":",
"flags": "-march=armv8.2-a -mtune=generic"
}
],
"arm": [
{
"versions": ":",
"flags": "-march=armv8.2-a -mtune=generic"
}
]
}
},
"armv8.3a": {
"from": ["armv8.2a"],
"vendor": "generic",
"features": [],
"compilers": {
"gcc": [
{
"versions": "6:",
"flags": "-march=armv8.3-a -mtune=generic"
}
],
"clang": [
{
"versions": "6:",
"flags": "-march=armv8.3-a -mtune=generic"
}
],
"apple-clang": [
{
"versions": ":",
"flags": "-march=armv8.3-a -mtune=generic"
}
],
"arm": [
{
"versions": ":",
"flags": "-march=armv8.3-a -mtune=generic"
}
]
}
},
"armv8.4a": {
"from": ["armv8.3a"],
"vendor": "generic",
"features": [],
"compilers": {
"gcc": [
{
"versions": "8:",
"flags": "-march=armv8.4-a -mtune=generic"
}
],
"clang": [
{
"versions": "8:",
"flags": "-march=armv8.4-a -mtune=generic"
}
],
"apple-clang": [
{
"versions": ":",
"flags": "-march=armv8.4-a -mtune=generic"
}
],
"arm": [
{
"versions": ":",
"flags": "-march=armv8.4-a -mtune=generic"
}
]
}
},
"armv8.5a": {
"from": ["armv8.4a"],
"vendor": "generic",
"features": [],
"compilers": {
"gcc": [
{
"versions": "9:",
"flags": "-march=armv8.5-a -mtune=generic"
}
],
"clang": [
{
"versions": "11:",
"flags": "-march=armv8.5-a -mtune=generic"
}
],
"apple-clang": [
{
"versions": ":",
"flags": "-march=armv8.5-a -mtune=generic"
}
],
"arm": [
{
"versions": ":",
"flags": "-march=armv8.5-a -mtune=generic"
}
]
}
},
"thunderx2": {
"from": ["armv8.1a"],
"vendor": "Cavium",
"features": [
"fp",
@@ -2141,7 +2296,7 @@
}
},
"a64fx": {
"from": ["aarch64"],
"from": ["armv8.2a"],
"vendor": "Fujitsu",
"features": [
"fp",
@@ -2209,7 +2364,7 @@
]
}
},
"graviton": {
"cortex_a72": {
"from": ["aarch64"],
"vendor": "ARM",
"features": [
@@ -2235,19 +2390,19 @@
},
{
"versions": "6:",
"flags" : "-march=armv8-a+crc+crypto -mtune=cortex-a72"
"flags" : "-mcpu=cortex-a72"
}
],
"clang" : [
{
"versions": "3.9:",
"flags" : "-march=armv8-a+crc+crypto"
"flags" : "-mcpu=cortex-a72"
}
]
}
},
"graviton2": {
"from": ["graviton"],
"neoverse_n1": {
"from": ["cortex_a72", "armv8.2a"],
"vendor": "ARM",
"features": [
"fp",
@@ -2296,7 +2451,7 @@
},
{
"versions": "9.0:",
"flags" : "-march=armv8.2-a+fp16+rcpc+dotprod+crypto -mtune=neoverse-n1"
"flags" : "-mcpu=neoverse-n1"
}
],
"clang" : [
@@ -2307,6 +2462,10 @@
{
"versions": "5:",
"flags" : "-march=armv8.2-a+fp16+rcpc+dotprod+crypto"
},
{
"versions": "10:",
"flags" : "-mcpu=neoverse-n1"
}
],
"arm" : [
@@ -2317,11 +2476,11 @@
]
}
},
"graviton3": {
"from": ["graviton2"],
"neoverse_v1": {
"from": ["neoverse_n1", "armv8.4a"],
"vendor": "ARM",
"features": [
"fp",
"fp",
"asimd",
"evtstrm",
"aes",
@@ -2384,11 +2543,11 @@
},
{
"versions": "9.0:9.9",
"flags" : "-march=armv8.4-a+crypto+rcpc+sha3+sm4+sve+rng+nodotprod -mtune=neoverse-v1"
"flags" : "-mcpu=neoverse-v1"
},
{
"versions": "10.0:",
"flags" : "-march=armv8.4-a+crypto+rcpc+sha3+sm4+sve+rng+ssbs+i8mm+bf16+nodotprod -mtune=neoverse-v1"
"flags" : "-mcpu=neoverse-v1"
}
],
@@ -2404,6 +2563,10 @@
{
"versions": "11:",
"flags" : "-march=armv8.4-a+sve+ssbs+fp16+bf16+crypto+i8mm+rng"
},
{
"versions": "12:",
"flags" : "-mcpu=neoverse-v1"
}
],
"arm" : [
@@ -2419,7 +2582,7 @@
}
},
"m1": {
"from": ["aarch64"],
"from": ["armv8.4a"],
"vendor": "Apple",
"features": [
"fp",
@@ -2484,6 +2647,76 @@
]
}
},
"m2": {
"from": ["m1", "armv8.5a"],
"vendor": "Apple",
"features": [
"fp",
"asimd",
"evtstrm",
"aes",
"pmull",
"sha1",
"sha2",
"crc32",
"atomics",
"fphp",
"asimdhp",
"cpuid",
"asimdrdm",
"jscvt",
"fcma",
"lrcpc",
"dcpop",
"sha3",
"asimddp",
"sha512",
"asimdfhm",
"dit",
"uscat",
"ilrcpc",
"flagm",
"ssbs",
"sb",
"paca",
"pacg",
"dcpodp",
"flagm2",
"frint",
"ecv",
"bf16",
"i8mm",
"bti"
],
"compilers": {
"gcc": [
{
"versions": "8.0:",
"flags" : "-march=armv8.5-a -mtune=generic"
}
],
"clang" : [
{
"versions": "9.0:12.0",
"flags" : "-march=armv8.5-a"
},
{
"versions": "13.0:",
"flags" : "-mcpu=apple-m1"
}
],
"apple-clang": [
{
"versions": "11.0:12.5",
"flags" : "-march=armv8.5-a"
},
{
"versions": "13.0:",
"flags" : "-mcpu=vortex"
}
]
}
},
"arm": {
"from": [],
"vendor": "generic",

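The net effect of the microarchitectures.json changes above: generic armv8.1a through armv8.5a targets are introduced, vendor parts (thunderx2, a64fx, cortex_a72, neoverse_n1, neoverse_v1, m1, m2) now derive from the architecture level they actually implement instead of bare aarch64, and long `-march=...+ext` flag strings give way to `-mcpu=<core>` where compiler support allows. One illustrative (not normative) way to inspect the new ancestry chain with the vendored archspec 0.2.0:

```python
import archspec.cpu

m2 = archspec.cpu.TARGETS["m2"]
print(m2.family)                       # aarch64
print([a.name for a in m2.ancestors])  # walks through m1, armv8.5a, ... down to aarch64
```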
View File

@@ -71,13 +71,12 @@
import re
import math
import multiprocessing
import io
import sys
import threading
import time
from contextlib import contextmanager
from six import StringIO
from six import string_types
_error_matches = [
"^FAIL: ",
@@ -246,7 +245,7 @@ def __getitem__(self, line_no):
def __str__(self):
"""Returns event lines and context."""
out = StringIO()
out = io.StringIO()
for i in range(self.start, self.end):
if i == self.line_no:
out.write(' >> %-6d%s' % (i, self[i]))
@@ -386,7 +385,7 @@ def parse(self, stream, context=6, jobs=None):
(tuple): two lists containing ``BuildError`` and
``BuildWarning`` objects.
"""
if isinstance(stream, string_types):
if isinstance(stream, str):
with open(stream) as f:
return self.parse(f, context, jobs)
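This hunk is one of many in this changeset that drop the `six` compatibility layer now that Python 2 support is gone. The replacements are mechanical; stated once here as their standard-library equivalents:

```python
import io

# six.StringIO     -> io.StringIO
buf = io.StringIO()
buf.write("build log line\n")

# six.string_types -> str
assert isinstance("spack.log", str)
```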

File diff suppressed because it is too large

View File

@@ -1,289 +0,0 @@
A. HISTORY OF THE SOFTWARE
==========================
Python was created in the early 1990s by Guido van Rossum at Stichting
Mathematisch Centrum (CWI, see http://www.cwi.nl) in the Netherlands
as a successor of a language called ABC. Guido remains Python's
principal author, although it includes many contributions from others.
In 1995, Guido continued his work on Python at the Corporation for
National Research Initiatives (CNRI, see http://www.cnri.reston.va.us)
in Reston, Virginia where he released several versions of the
software.
In May 2000, Guido and the Python core development team moved to
BeOpen.com to form the BeOpen PythonLabs team. In October of the same
year, the PythonLabs team moved to Digital Creations (now Zope
Corporation, see http://www.zope.com). In 2001, the Python Software
Foundation (PSF, see http://www.python.org/psf/) was formed, a
non-profit organization created specifically to own Python-related
Intellectual Property. Zope Corporation is a sponsoring member of
the PSF.
All Python releases are Open Source (see http://www.opensource.org for
the Open Source Definition). Historically, most, but not all, Python
releases have also been GPL-compatible; the table below summarizes
the various releases.
Release    Derived from    Year    Owner    GPL-compatible? (1)
0.9.0 thru 1.2 1991-1995 CWI yes
1.3 thru 1.5.2 1.2 1995-1999 CNRI yes
1.6 1.5.2 2000 CNRI no
2.0 1.6 2000 BeOpen.com no
1.6.1 1.6 2001 CNRI yes (2)
2.1 2.0+1.6.1 2001 PSF no
2.0.1 2.0+1.6.1 2001 PSF yes
2.1.1 2.1+2.0.1 2001 PSF yes
2.2 2.1.1 2001 PSF yes
2.1.2 2.1.1 2002 PSF yes
2.1.3 2.1.2 2002 PSF yes
2.2.1 2.2 2002 PSF yes
2.2.2 2.2.1 2002 PSF yes
2.2.3 2.2.2 2003 PSF yes
2.3 2.2.2 2002-2003 PSF yes
2.3.1 2.3 2002-2003 PSF yes
2.3.2 2.3.1 2002-2003 PSF yes
2.3.3 2.3.2 2002-2003 PSF yes
2.3.4 2.3.3 2004 PSF yes
2.3.5 2.3.4 2005 PSF yes
2.4 2.3 2004 PSF yes
2.4.1 2.4 2005 PSF yes
2.4.2 2.4.1 2005 PSF yes
2.4.3 2.4.2 2006 PSF yes
2.4.4 2.4.3 2006 PSF yes
2.5 2.4 2006 PSF yes
2.5.1 2.5 2007 PSF yes
2.5.2 2.5.1 2008 PSF yes
2.5.3 2.5.2 2008 PSF yes
2.6 2.5 2008 PSF yes
2.6.1 2.6 2008 PSF yes
2.6.2 2.6.1 2009 PSF yes
2.6.3 2.6.2 2009 PSF yes
2.6.4 2.6.3 2009 PSF yes
2.6.5 2.6.4 2010 PSF yes
3.0 2.6 2008 PSF yes
3.0.1 3.0 2009 PSF yes
3.1 3.0.1 2009 PSF yes
3.1.1 3.1 2009 PSF yes
3.1.2 3.1.1 2010 PSF yes
3.1.3 3.1.2 2010 PSF yes
3.1.4 3.1.3 2011 PSF yes
3.2 3.1 2011 PSF yes
3.2.1 3.2 2011 PSF yes
3.2.2 3.2.1 2011 PSF yes
3.2.3 3.2.2 2012 PSF yes
Footnotes:
(1) GPL-compatible doesn't mean that we're distributing Python under
the GPL. All Python licenses, unlike the GPL, let you distribute
a modified version without making your changes open source. The
GPL-compatible licenses make it possible to combine Python with
other software that is released under the GPL; the others don't.
(2) According to Richard Stallman, 1.6.1 is not GPL-compatible,
because its license has a choice of law clause. According to
CNRI, however, Stallman's lawyer has told CNRI's lawyer that 1.6.1
is "not incompatible" with the GPL.
Thanks to the many outside volunteers who have worked under Guido's
direction to make these releases possible.
B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING PYTHON
===============================================================
PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
--------------------------------------------
1. This LICENSE AGREEMENT is between the Python Software Foundation
("PSF"), and the Individual or Organization ("Licensee") accessing and
otherwise using this software ("Python") in source or binary form and
its associated documentation.
2. Subject to the terms and conditions of this License Agreement, PSF hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
analyze, test, perform and/or display publicly, prepare derivative works,
distribute, and otherwise use Python alone or in any derivative version,
provided, however, that PSF's License Agreement and PSF's notice of copyright,
i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
2011, 2012 Python Software Foundation; All Rights Reserved" are retained in Python
alone or in any derivative version prepared by Licensee.
3. In the event Licensee prepares a derivative work that is based on
or incorporates Python or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python.
4. PSF is making Python available to Licensee on an "AS IS"
basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.
5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between PSF and
Licensee. This License Agreement does not grant permission to use PSF
trademarks or trade name in a trademark sense to endorse or promote
products or services of Licensee, or any third party.
8. By copying, installing or otherwise using Python, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.
BEOPEN.COM LICENSE AGREEMENT FOR PYTHON 2.0
-------------------------------------------
BEOPEN PYTHON OPEN SOURCE LICENSE AGREEMENT VERSION 1
1. This LICENSE AGREEMENT is between BeOpen.com ("BeOpen"), having an
office at 160 Saratoga Avenue, Santa Clara, CA 95051, and the
Individual or Organization ("Licensee") accessing and otherwise using
this software in source or binary form and its associated
documentation ("the Software").
2. Subject to the terms and conditions of this BeOpen Python License
Agreement, BeOpen hereby grants Licensee a non-exclusive,
royalty-free, world-wide license to reproduce, analyze, test, perform
and/or display publicly, prepare derivative works, distribute, and
otherwise use the Software alone or in any derivative version,
provided, however, that the BeOpen Python License is retained in the
Software, alone or in any derivative version prepared by Licensee.
3. BeOpen is making the Software available to Licensee on an "AS IS"
basis. BEOPEN MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, BEOPEN MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE SOFTWARE WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.
4. BEOPEN SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
AS A RESULT OF USING, MODIFYING OR DISTRIBUTING THE SOFTWARE, OR ANY
DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
5. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
6. This License Agreement shall be governed by and interpreted in all
respects by the law of the State of California, excluding conflict of
law provisions. Nothing in this License Agreement shall be deemed to
create any relationship of agency, partnership, or joint venture
between BeOpen and Licensee. This License Agreement does not grant
permission to use BeOpen trademarks or trade names in a trademark
sense to endorse or promote products or services of Licensee, or any
third party. As an exception, the "BeOpen Python" logos available at
http://www.pythonlabs.com/logos.html may be used according to the
permissions granted on that web page.
7. By copying, installing or otherwise using the software, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.
CNRI LICENSE AGREEMENT FOR PYTHON 1.6.1
---------------------------------------
1. This LICENSE AGREEMENT is between the Corporation for National
Research Initiatives, having an office at 1895 Preston White Drive,
Reston, VA 20191 ("CNRI"), and the Individual or Organization
("Licensee") accessing and otherwise using Python 1.6.1 software in
source or binary form and its associated documentation.
2. Subject to the terms and conditions of this License Agreement, CNRI
hereby grants Licensee a nonexclusive, royalty-free, world-wide
license to reproduce, analyze, test, perform and/or display publicly,
prepare derivative works, distribute, and otherwise use Python 1.6.1
alone or in any derivative version, provided, however, that CNRI's
License Agreement and CNRI's notice of copyright, i.e., "Copyright (c)
1995-2001 Corporation for National Research Initiatives; All Rights
Reserved" are retained in Python 1.6.1 alone or in any derivative
version prepared by Licensee. Alternately, in lieu of CNRI's License
Agreement, Licensee may substitute the following text (omitting the
quotes): "Python 1.6.1 is made available subject to the terms and
conditions in CNRI's License Agreement. This Agreement together with
Python 1.6.1 may be located on the Internet using the following
unique, persistent identifier (known as a handle): 1895.22/1013. This
Agreement may also be obtained from a proxy server on the Internet
using the following URL: http://hdl.handle.net/1895.22/1013".
3. In the event Licensee prepares a derivative work that is based on
or incorporates Python 1.6.1 or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python 1.6.1.
4. CNRI is making Python 1.6.1 available to Licensee on an "AS IS"
basis. CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6.1 WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.
5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
1.6.1 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 1.6.1,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
7. This License Agreement shall be governed by the federal
intellectual property law of the United States, including without
limitation the federal copyright law, and, to the extent such
U.S. federal law does not apply, by the law of the Commonwealth of
Virginia, excluding Virginia's conflict of law provisions.
Notwithstanding the foregoing, with regard to derivative works based
on Python 1.6.1 that incorporate non-separable material that was
previously distributed under the GNU General Public License (GPL), the
law of the Commonwealth of Virginia shall govern this License
Agreement only as to issues arising under or with respect to
Paragraphs 4, 5, and 7 of this License Agreement. Nothing in this
License Agreement shall be deemed to create any relationship of
agency, partnership, or joint venture between CNRI and Licensee. This
License Agreement does not grant permission to use CNRI trademarks or
trade name in a trademark sense to endorse or promote products or
services of Licensee, or any third party.
8. By clicking on the "ACCEPT" button where indicated, or by copying,
installing or otherwise using Python 1.6.1, Licensee agrees to be
bound by the terms and conditions of this License Agreement.
ACCEPT
CWI LICENSE AGREEMENT FOR PYTHON 0.9.0 THROUGH 1.2
--------------------------------------------------
Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam,
The Netherlands. All rights reserved.
Permission to use, copy, modify, and distribute this software and its
documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that
both that copyright notice and this permission notice appear in
supporting documentation, and that the name of Stichting Mathematisch
Centrum or CWI not be used in advertising or publicity pertaining to
distribution of the software without specific, written prior
permission.
STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO
THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE
FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

View File

@@ -1 +0,0 @@
from .functools32 import *

View File

@@ -1,158 +0,0 @@
"""Drop-in replacement for the thread module.
Meant to be used as a brain-dead substitute so that threaded code does
not need to be rewritten for when the thread module is not present.
Suggested usage is::
try:
try:
import _thread # Python >= 3
except:
import thread as _thread # Python < 3
except ImportError:
import _dummy_thread as _thread
"""
# Exports only things specified by thread documentation;
# skipping obsolete synonyms allocate(), start_new(), exit_thread().
__all__ = ['error', 'start_new_thread', 'exit', 'get_ident', 'allocate_lock',
'interrupt_main', 'LockType']
# A dummy value
TIMEOUT_MAX = 2**31
# NOTE: this module can be imported early in the extension building process,
# and so top level imports of other modules should be avoided. Instead, all
# imports are done when needed on a function-by-function basis. Since threads
# are disabled, the import lock should not be an issue anyway (??).
class error(Exception):
"""Dummy implementation of _thread.error."""
def __init__(self, *args):
self.args = args
def start_new_thread(function, args, kwargs={}):
"""Dummy implementation of _thread.start_new_thread().
Compatibility is maintained by making sure that ``args`` is a
tuple and ``kwargs`` is a dictionary. If an exception is raised
and it is SystemExit (which can be done by _thread.exit()) it is
caught and nothing is done; all other exceptions are printed out
by using traceback.print_exc().
If the executed function calls interrupt_main the KeyboardInterrupt will be
raised when the function returns.
"""
if type(args) != type(tuple()):
raise TypeError("2nd arg must be a tuple")
if type(kwargs) != type(dict()):
raise TypeError("3rd arg must be a dict")
global _main
_main = False
try:
function(*args, **kwargs)
except SystemExit:
pass
except:
import traceback
traceback.print_exc()
_main = True
global _interrupt
if _interrupt:
_interrupt = False
raise KeyboardInterrupt
def exit():
"""Dummy implementation of _thread.exit()."""
raise SystemExit
def get_ident():
"""Dummy implementation of _thread.get_ident().
Since this module should only be used when _threadmodule is not
available, it is safe to assume that the current process is the
only thread. Thus a constant can be safely returned.
"""
return -1
def allocate_lock():
"""Dummy implementation of _thread.allocate_lock()."""
return LockType()
def stack_size(size=None):
"""Dummy implementation of _thread.stack_size()."""
if size is not None:
raise error("setting thread stack size not supported")
return 0
class LockType(object):
"""Class implementing dummy implementation of _thread.LockType.
Compatibility is maintained by maintaining self.locked_status
which is a boolean that stores the state of the lock. Pickling of
the lock, though, should not be done since if the _thread module is
then used with an unpickled ``lock()`` from here problems could
occur from this class not having atomic methods.
"""
def __init__(self):
self.locked_status = False
def acquire(self, waitflag=None, timeout=-1):
"""Dummy implementation of acquire().
For blocking calls, self.locked_status is automatically set to
True and returned appropriately based on value of
``waitflag``. If it is non-blocking, then the value is
actually checked and not set if it is already acquired. This
is all done so that threading.Condition's assert statements
aren't triggered and throw a little fit.
"""
if waitflag is None or waitflag:
self.locked_status = True
return True
else:
if not self.locked_status:
self.locked_status = True
return True
else:
if timeout > 0:
import time
time.sleep(timeout)
return False
__enter__ = acquire
def __exit__(self, typ, val, tb):
self.release()
def release(self):
"""Release the dummy lock."""
# XXX Perhaps shouldn't actually bother to test? Could lead
# to problems for complex, threaded code.
if not self.locked_status:
raise error
self.locked_status = False
return True
def locked(self):
return self.locked_status
# Used to signal that interrupt_main was called in a "thread"
_interrupt = False
# True when not executing in a "thread"
_main = True
def interrupt_main():
"""Set _interrupt flag to True to have start_new_thread raise
KeyboardInterrupt upon exiting."""
if _main:
raise KeyboardInterrupt
else:
global _interrupt
_interrupt = True
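The vendored `_dummy_thread32` shim existed for interpreters built without threading support; on the Python 3 versions this changeset targets, `_thread` is always importable, so the fallback can go. For reference, the real module covers everything the dummy stubbed out:

```python
import _thread

lock = _thread.allocate_lock()
with lock:                       # the lock supports the context-manager protocol
    ident = _thread.get_ident()  # a real per-thread identifier, not the dummy -1
```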

View File

@@ -1,423 +0,0 @@
"""functools.py - Tools for working with functions and callable objects
"""
# Python module wrapper for _functools C module
# to allow utilities written in Python to be added
# to the functools module.
# Written by Nick Coghlan <ncoghlan at gmail.com>
# and Raymond Hettinger <python at rcn.com>
# Copyright (C) 2006-2010 Python Software Foundation.
# See C source code for _functools credits/copyright
__all__ = ['update_wrapper', 'wraps', 'WRAPPER_ASSIGNMENTS', 'WRAPPER_UPDATES',
'total_ordering', 'cmp_to_key', 'lru_cache', 'reduce', 'partial']
from _functools import partial, reduce
from collections import MutableMapping, namedtuple
from .reprlib32 import recursive_repr as _recursive_repr
from weakref import proxy as _proxy
import sys as _sys
try:
from thread import allocate_lock as Lock
except ImportError:
from ._dummy_thread32 import allocate_lock as Lock
################################################################################
### OrderedDict
################################################################################
class _Link(object):
__slots__ = 'prev', 'next', 'key', '__weakref__'
class OrderedDict(dict):
'Dictionary that remembers insertion order'
# An inherited dict maps keys to values.
# The inherited dict provides __getitem__, __len__, __contains__, and get.
# The remaining methods are order-aware.
# Big-O running times for all methods are the same as regular dictionaries.
# The internal self.__map dict maps keys to links in a doubly linked list.
# The circular doubly linked list starts and ends with a sentinel element.
# The sentinel element never gets deleted (this simplifies the algorithm).
# The sentinel is in self.__hardroot with a weakref proxy in self.__root.
# The prev links are weakref proxies (to prevent circular references).
# Individual links are kept alive by the hard reference in self.__map.
# Those hard references disappear when a key is deleted from an OrderedDict.
def __init__(self, *args, **kwds):
'''Initialize an ordered dictionary. The signature is the same as
regular dictionaries, but keyword arguments are not recommended because
their insertion order is arbitrary.
'''
if len(args) > 1:
raise TypeError('expected at most 1 arguments, got %d' % len(args))
try:
self.__root
except AttributeError:
self.__hardroot = _Link()
self.__root = root = _proxy(self.__hardroot)
root.prev = root.next = root
self.__map = {}
self.__update(*args, **kwds)
def __setitem__(self, key, value,
dict_setitem=dict.__setitem__, proxy=_proxy, Link=_Link):
'od.__setitem__(i, y) <==> od[i]=y'
# Setting a new item creates a new link at the end of the linked list,
# and the inherited dictionary is updated with the new key/value pair.
if key not in self:
self.__map[key] = link = Link()
root = self.__root
last = root.prev
link.prev, link.next, link.key = last, root, key
last.next = link
root.prev = proxy(link)
dict_setitem(self, key, value)
def __delitem__(self, key, dict_delitem=dict.__delitem__):
'od.__delitem__(y) <==> del od[y]'
# Deleting an existing item uses self.__map to find the link which gets
# removed by updating the links in the predecessor and successor nodes.
dict_delitem(self, key)
link = self.__map.pop(key)
link_prev = link.prev
link_next = link.next
link_prev.next = link_next
link_next.prev = link_prev
def __iter__(self):
'od.__iter__() <==> iter(od)'
# Traverse the linked list in order.
root = self.__root
curr = root.next
while curr is not root:
yield curr.key
curr = curr.next
def __reversed__(self):
'od.__reversed__() <==> reversed(od)'
# Traverse the linked list in reverse order.
root = self.__root
curr = root.prev
while curr is not root:
yield curr.key
curr = curr.prev
def clear(self):
'od.clear() -> None. Remove all items from od.'
root = self.__root
root.prev = root.next = root
self.__map.clear()
dict.clear(self)
def popitem(self, last=True):
'''od.popitem() -> (k, v), return and remove a (key, value) pair.
Pairs are returned in LIFO order if last is true or FIFO order if false.
'''
if not self:
raise KeyError('dictionary is empty')
root = self.__root
if last:
link = root.prev
link_prev = link.prev
link_prev.next = root
root.prev = link_prev
else:
link = root.next
link_next = link.next
root.next = link_next
link_next.prev = root
key = link.key
del self.__map[key]
value = dict.pop(self, key)
return key, value
def move_to_end(self, key, last=True):
'''Move an existing element to the end (or beginning if last==False).
Raises KeyError if the element does not exist.
When last=True, acts like a fast version of self[key]=self.pop(key).
'''
link = self.__map[key]
link_prev = link.prev
link_next = link.next
link_prev.next = link_next
link_next.prev = link_prev
root = self.__root
if last:
last = root.prev
link.prev = last
link.next = root
last.next = root.prev = link
else:
first = root.next
link.prev = root
link.next = first
root.next = first.prev = link
def __sizeof__(self):
sizeof = _sys.getsizeof
n = len(self) + 1 # number of links including root
size = sizeof(self.__dict__) # instance dictionary
size += sizeof(self.__map) * 2 # internal dict and inherited dict
size += sizeof(self.__hardroot) * n # link objects
size += sizeof(self.__root) * n # proxy objects
return size
update = __update = MutableMapping.update
keys = MutableMapping.keys
values = MutableMapping.values
items = MutableMapping.items
__ne__ = MutableMapping.__ne__
__marker = object()
def pop(self, key, default=__marker):
'''od.pop(k[,d]) -> v, remove specified key and return the corresponding
value. If key is not found, d is returned if given, otherwise KeyError
is raised.
'''
if key in self:
result = self[key]
del self[key]
return result
if default is self.__marker:
raise KeyError(key)
return default
def setdefault(self, key, default=None):
'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od'
if key in self:
return self[key]
self[key] = default
return default
@_recursive_repr()
def __repr__(self):
'od.__repr__() <==> repr(od)'
if not self:
return '%s()' % (self.__class__.__name__,)
return '%s(%r)' % (self.__class__.__name__, list(self.items()))
def __reduce__(self):
'Return state information for pickling'
items = [[k, self[k]] for k in self]
inst_dict = vars(self).copy()
for k in vars(OrderedDict()):
inst_dict.pop(k, None)
if inst_dict:
return (self.__class__, (items,), inst_dict)
return self.__class__, (items,)
def copy(self):
'od.copy() -> a shallow copy of od'
return self.__class__(self)
@classmethod
def fromkeys(cls, iterable, value=None):
'''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S.
If not specified, the value defaults to None.
'''
self = cls()
for key in iterable:
self[key] = value
return self
def __eq__(self, other):
'''od.__eq__(y) <==> od==y. Comparison to another OD is order-sensitive
while comparison to a regular mapping is order-insensitive.
'''
if isinstance(other, OrderedDict):
return len(self)==len(other) and \
all(p==q for p, q in zip(self.items(), other.items()))
return dict.__eq__(self, other)
# update_wrapper() and wraps() are tools to help write
# wrapper functions that can handle naive introspection
WRAPPER_ASSIGNMENTS = ('__module__', '__name__', '__doc__')
WRAPPER_UPDATES = ('__dict__',)
def update_wrapper(wrapper,
wrapped,
assigned = WRAPPER_ASSIGNMENTS,
updated = WRAPPER_UPDATES):
"""Update a wrapper function to look like the wrapped function
wrapper is the function to be updated
wrapped is the original function
assigned is a tuple naming the attributes assigned directly
from the wrapped function to the wrapper function (defaults to
functools.WRAPPER_ASSIGNMENTS)
updated is a tuple naming the attributes of the wrapper that
are updated with the corresponding attribute from the wrapped
function (defaults to functools.WRAPPER_UPDATES)
"""
wrapper.__wrapped__ = wrapped
for attr in assigned:
try:
value = getattr(wrapped, attr)
except AttributeError:
pass
else:
setattr(wrapper, attr, value)
for attr in updated:
getattr(wrapper, attr).update(getattr(wrapped, attr, {}))
# Return the wrapper so this can be used as a decorator via partial()
return wrapper
def wraps(wrapped,
assigned = WRAPPER_ASSIGNMENTS,
updated = WRAPPER_UPDATES):
"""Decorator factory to apply update_wrapper() to a wrapper function
Returns a decorator that invokes update_wrapper() with the decorated
function as the wrapper argument and the arguments to wraps() as the
remaining arguments. Default arguments are as for update_wrapper().
This is a convenience function to simplify applying partial() to
update_wrapper().
"""
return partial(update_wrapper, wrapped=wrapped,
assigned=assigned, updated=updated)
def total_ordering(cls):
"""Class decorator that fills in missing ordering methods"""
convert = {
'__lt__': [('__gt__', lambda self, other: not (self < other or self == other)),
('__le__', lambda self, other: self < other or self == other),
('__ge__', lambda self, other: not self < other)],
'__le__': [('__ge__', lambda self, other: not self <= other or self == other),
('__lt__', lambda self, other: self <= other and not self == other),
('__gt__', lambda self, other: not self <= other)],
'__gt__': [('__lt__', lambda self, other: not (self > other or self == other)),
('__ge__', lambda self, other: self > other or self == other),
('__le__', lambda self, other: not self > other)],
'__ge__': [('__le__', lambda self, other: (not self >= other) or self == other),
('__gt__', lambda self, other: self >= other and not self == other),
('__lt__', lambda self, other: not self >= other)]
}
roots = set(dir(cls)) & set(convert)
if not roots:
raise ValueError('must define at least one ordering operation: < > <= >=')
root = max(roots) # prefer __lt__ to __le__ to __gt__ to __ge__
for opname, opfunc in convert[root]:
if opname not in roots:
opfunc.__name__ = opname
opfunc.__doc__ = getattr(int, opname).__doc__
setattr(cls, opname, opfunc)
return cls
def cmp_to_key(mycmp):
"""Convert a cmp= function into a key= function"""
class K(object):
__slots__ = ['obj']
def __init__(self, obj):
self.obj = obj
def __lt__(self, other):
return mycmp(self.obj, other.obj) < 0
def __gt__(self, other):
return mycmp(self.obj, other.obj) > 0
def __eq__(self, other):
return mycmp(self.obj, other.obj) == 0
def __le__(self, other):
return mycmp(self.obj, other.obj) <= 0
def __ge__(self, other):
return mycmp(self.obj, other.obj) >= 0
def __ne__(self, other):
return mycmp(self.obj, other.obj) != 0
__hash__ = None
return K
_CacheInfo = namedtuple("CacheInfo", "hits misses maxsize currsize")
def lru_cache(maxsize=100):
"""Least-recently-used cache decorator.
If *maxsize* is set to None, the LRU features are disabled and the cache
can grow without bound.
Arguments to the cached function must be hashable.
View the cache statistics named tuple (hits, misses, maxsize, currsize) with
f.cache_info(). Clear the cache and statistics with f.cache_clear().
Access the underlying function with f.__wrapped__.
See: http://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used
"""
# Users should only access the lru_cache through its public API:
# cache_info, cache_clear, and f.__wrapped__
# The internals of the lru_cache are encapsulated for thread safety and
# to allow the implementation to change (including a possible C version).
def decorating_function(user_function,
tuple=tuple, sorted=sorted, len=len, KeyError=KeyError):
hits, misses = [0], [0]
kwd_mark = (object(),) # separates positional and keyword args
lock = Lock() # needed because OrderedDict isn't threadsafe
if maxsize is None:
cache = dict() # simple cache without ordering or size limit
@wraps(user_function)
def wrapper(*args, **kwds):
key = args
if kwds:
key += kwd_mark + tuple(sorted(kwds.items()))
try:
result = cache[key]
hits[0] += 1
return result
except KeyError:
pass
result = user_function(*args, **kwds)
cache[key] = result
misses[0] += 1
return result
else:
cache = OrderedDict() # ordered least recent to most recent
cache_popitem = cache.popitem
cache_renew = cache.move_to_end
@wraps(user_function)
def wrapper(*args, **kwds):
key = args
if kwds:
key += kwd_mark + tuple(sorted(kwds.items()))
with lock:
try:
result = cache[key]
cache_renew(key) # record recent use of this key
hits[0] += 1
return result
except KeyError:
pass
result = user_function(*args, **kwds)
with lock:
cache[key] = result # record recent use of this key
misses[0] += 1
if len(cache) > maxsize:
cache_popitem(0) # purge least recently used cache entry
return result
def cache_info():
"""Report cache statistics"""
with lock:
return _CacheInfo(hits[0], misses[0], maxsize, len(cache))
def cache_clear():
"""Clear the cache and cache statistics"""
with lock:
cache.clear()
hits[0] = misses[0] = 0
wrapper.cache_info = cache_info
wrapper.cache_clear = cache_clear
return wrapper
return decorating_function
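The deleted `functools32` backport reimplemented `OrderedDict` and `lru_cache` for Python 2; both now come from the standard library with the same public API the backport copied (`cache_info()`, `cache_clear()`, `__wrapped__`):

```python
import functools

@functools.lru_cache(maxsize=100)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(30)
print(fib.cache_info())  # CacheInfo(hits=28, misses=31, maxsize=100, currsize=31)
```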

View File

@@ -1,157 +0,0 @@
"""Redo the builtin repr() (representation) but with limits on most sizes."""
__all__ = ["Repr", "repr", "recursive_repr"]
import __builtin__ as builtins
from itertools import islice
try:
from thread import get_ident
except ImportError:
from _dummy_thread32 import get_ident
def recursive_repr(fillvalue='...'):
'Decorator to make a repr function return fillvalue for a recursive call'
def decorating_function(user_function):
repr_running = set()
def wrapper(self):
key = id(self), get_ident()
if key in repr_running:
return fillvalue
repr_running.add(key)
try:
result = user_function(self)
finally:
repr_running.discard(key)
return result
# Can't use functools.wraps() here because of bootstrap issues
wrapper.__module__ = getattr(user_function, '__module__')
wrapper.__doc__ = getattr(user_function, '__doc__')
wrapper.__name__ = getattr(user_function, '__name__')
wrapper.__annotations__ = getattr(user_function, '__annotations__', {})
return wrapper
return decorating_function
class Repr:
def __init__(self):
self.maxlevel = 6
self.maxtuple = 6
self.maxlist = 6
self.maxarray = 5
self.maxdict = 4
self.maxset = 6
self.maxfrozenset = 6
self.maxdeque = 6
self.maxstring = 30
self.maxlong = 40
self.maxother = 30
def repr(self, x):
return self.repr1(x, self.maxlevel)
def repr1(self, x, level):
typename = type(x).__name__
if ' ' in typename:
parts = typename.split()
typename = '_'.join(parts)
if hasattr(self, 'repr_' + typename):
return getattr(self, 'repr_' + typename)(x, level)
else:
return self.repr_instance(x, level)
def _repr_iterable(self, x, level, left, right, maxiter, trail=''):
n = len(x)
if level <= 0 and n:
s = '...'
else:
newlevel = level - 1
repr1 = self.repr1
pieces = [repr1(elem, newlevel) for elem in islice(x, maxiter)]
if n > maxiter: pieces.append('...')
s = ', '.join(pieces)
if n == 1 and trail: right = trail + right
return '%s%s%s' % (left, s, right)
def repr_tuple(self, x, level):
return self._repr_iterable(x, level, '(', ')', self.maxtuple, ',')
def repr_list(self, x, level):
return self._repr_iterable(x, level, '[', ']', self.maxlist)
def repr_array(self, x, level):
header = "array('%s', [" % x.typecode
return self._repr_iterable(x, level, header, '])', self.maxarray)
def repr_set(self, x, level):
x = _possibly_sorted(x)
return self._repr_iterable(x, level, 'set([', '])', self.maxset)
def repr_frozenset(self, x, level):
x = _possibly_sorted(x)
return self._repr_iterable(x, level, 'frozenset([', '])',
self.maxfrozenset)
def repr_deque(self, x, level):
return self._repr_iterable(x, level, 'deque([', '])', self.maxdeque)
def repr_dict(self, x, level):
n = len(x)
if n == 0: return '{}'
if level <= 0: return '{...}'
newlevel = level - 1
repr1 = self.repr1
pieces = []
for key in islice(_possibly_sorted(x), self.maxdict):
keyrepr = repr1(key, newlevel)
valrepr = repr1(x[key], newlevel)
pieces.append('%s: %s' % (keyrepr, valrepr))
if n > self.maxdict: pieces.append('...')
s = ', '.join(pieces)
return '{%s}' % (s,)
def repr_str(self, x, level):
s = builtins.repr(x[:self.maxstring])
if len(s) > self.maxstring:
i = max(0, (self.maxstring-3)//2)
j = max(0, self.maxstring-3-i)
s = builtins.repr(x[:i] + x[len(x)-j:])
s = s[:i] + '...' + s[len(s)-j:]
return s
def repr_int(self, x, level):
s = builtins.repr(x) # XXX Hope this isn't too slow...
if len(s) > self.maxlong:
i = max(0, (self.maxlong-3)//2)
j = max(0, self.maxlong-3-i)
s = s[:i] + '...' + s[len(s)-j:]
return s
def repr_instance(self, x, level):
try:
s = builtins.repr(x)
# Bugs in x.__repr__() can cause arbitrary
# exceptions -- then make up something
except Exception:
return '<%s instance at %x>' % (x.__class__.__name__, id(x))
if len(s) > self.maxother:
i = max(0, (self.maxother-3)//2)
j = max(0, self.maxother-3-i)
s = s[:i] + '...' + s[len(s)-j:]
return s
def _possibly_sorted(x):
# Since not all sequences of items can be sorted and comparison
# functions may raise arbitrary exceptions, return an unsorted
# sequence in that case.
try:
return sorted(x)
except Exception:
return list(x)
aRepr = Repr()
repr = aRepr.repr
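Likewise, `reprlib32` was a copy of Python 3's `reprlib`; the standard-library module produces the same size-limited output:

```python
import reprlib

print(reprlib.repr(list(range(1000))))  # '[0, 1, 2, 3, 4, 5, ...]'
```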

View File

@@ -1,103 +0,0 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""
This is a fake set of symbols to allow spack to import typing in python
versions where we do not support type checking (<3)
"""
from collections import defaultdict
# (1) Unparameterized types.
Annotated = object
Any = object
AnyStr = object
ByteString = object
Counter = object
Final = object
Hashable = object
NoReturn = object
Sized = object
SupportsAbs = object
SupportsBytes = object
SupportsComplex = object
SupportsFloat = object
SupportsIndex = object
SupportsInt = object
SupportsRound = object
# (2) Parameterized types.
AbstractSet = defaultdict(lambda: object)
AsyncContextManager = defaultdict(lambda: object)
AsyncGenerator = defaultdict(lambda: object)
AsyncIterable = defaultdict(lambda: object)
AsyncIterator = defaultdict(lambda: object)
Awaitable = defaultdict(lambda: object)
Callable = defaultdict(lambda: object)
ChainMap = defaultdict(lambda: object)
ClassVar = defaultdict(lambda: object)
Collection = defaultdict(lambda: object)
Container = defaultdict(lambda: object)
ContextManager = defaultdict(lambda: object)
Coroutine = defaultdict(lambda: object)
DefaultDict = defaultdict(lambda: object)
Deque = defaultdict(lambda: object)
Dict = defaultdict(lambda: object)
ForwardRef = defaultdict(lambda: object)
FrozenSet = defaultdict(lambda: object)
Generator = defaultdict(lambda: object)
Generic = defaultdict(lambda: object)
ItemsView = defaultdict(lambda: object)
Iterable = defaultdict(lambda: object)
Iterator = defaultdict(lambda: object)
KeysView = defaultdict(lambda: object)
List = defaultdict(lambda: object)
Literal = defaultdict(lambda: object)
Mapping = defaultdict(lambda: object)
MappingView = defaultdict(lambda: object)
MutableMapping = defaultdict(lambda: object)
MutableSequence = defaultdict(lambda: object)
MutableSet = defaultdict(lambda: object)
NamedTuple = defaultdict(lambda: object)
Optional = defaultdict(lambda: object)
OrderedDict = defaultdict(lambda: object)
Reversible = defaultdict(lambda: object)
Sequence = defaultdict(lambda: object)
Set = defaultdict(lambda: object)
Tuple = defaultdict(lambda: object)
Type = defaultdict(lambda: object)
TypedDict = defaultdict(lambda: object)
Union = defaultdict(lambda: object)
ValuesView = defaultdict(lambda: object)
# (3) Type variable declarations.
TypeVar = lambda *args, **kwargs: None
# (4) Functions.
cast = lambda _type, x: x
get_args = None
get_origin = None
get_type_hints = None
no_type_check = None
no_type_check_decorator = None
## typing_extensions
# We get a ModuleNotFoundError when attempting to import anything from typing_extensions
# if we separate this into a separate typing_extensions.py file for some reason.
# (1) Unparameterized types.
IntVar = object
Literal = object
NewType = object
Text = object
# (2) Parameterized types.
Protocol = defaultdict(lambda: object)
# (3) Macro for avoiding evaluation except during type checking.
TYPE_CHECKING = False
# (4) Decorators.
final = lambda x: x
overload = lambda x: x
runtime_checkable = lambda x: x
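The deleted fake `typing` module kept annotations importable on interpreters without type checking. Its one non-obvious trick is worth recording: the parameterized stand-ins are defaultdicts, so a subscription like `Dict[str, int]` is just a dictionary lookup that always succeeds:

```python
from collections import defaultdict

Dict = defaultdict(lambda: object)
print(Dict[str, int])  # lookup with key (str, int); yields <class 'object'>
```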

View File

@@ -7,11 +7,10 @@
import argparse
import errno
import io
import re
import sys
from six import StringIO
class Command(object):
"""Parsed representation of a command from argparse.
@@ -181,7 +180,7 @@ def __init__(self, prog, out=None, aliases=False, rst_levels=_rst_levels):
self.rst_levels = rst_levels
def format(self, cmd):
string = StringIO()
string = io.StringIO()
string.write(self.begin_command(cmd.prog))
if cmd.description:

View File

@@ -1,39 +0,0 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
# isort: off
import sys
if sys.version_info < (3,):
from itertools import ifilter as filter
from itertools import imap as map
from itertools import izip as zip
from itertools import izip_longest as zip_longest # novm
from urllib import urlencode as urlencode
from urllib import urlopen as urlopen
else:
filter = filter
map = map
zip = zip
from itertools import zip_longest as zip_longest # novm # noqa: F401
from urllib.parse import urlencode as urlencode # novm # noqa: F401
from urllib.request import urlopen as urlopen # novm # noqa: F401
if sys.version_info >= (3, 3):
from collections.abc import Hashable as Hashable # novm
from collections.abc import Iterable as Iterable # novm
from collections.abc import Mapping as Mapping # novm
from collections.abc import MutableMapping as MutableMapping # novm
from collections.abc import MutableSequence as MutableSequence # novm
from collections.abc import MutableSet as MutableSet # novm
from collections.abc import Sequence as Sequence # novm
else:
from collections import Hashable as Hashable # noqa: F401
from collections import Iterable as Iterable # noqa: F401
from collections import Mapping as Mapping # noqa: F401
from collections import MutableMapping as MutableMapping # noqa: F401
from collections import MutableSequence as MutableSequence # noqa: F401
from collections import MutableSet as MutableSet # noqa: F401
from collections import Sequence as Sequence # noqa: F401
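With `llnl.util.compat` deleted, call sites import the Python 3 names directly, as the following filesystem.py hunks show. The full mapping the shim used to provide:

```python
import collections.abc                  # Hashable, Iterable, Mapping, Sequence, ...
from itertools import zip_longest
from urllib.parse import urlencode
from urllib.request import urlopen
```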

View File

@@ -3,6 +3,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import collections
import collections.abc
import errno
import glob
import hashlib
@@ -17,10 +18,7 @@
from contextlib import contextmanager
from sys import platform as _platform
import six
from llnl.util import tty
from llnl.util.compat import Sequence
from llnl.util.lang import dedupe, memoized
from llnl.util.symlink import islink, symlink
@@ -290,9 +288,7 @@ def groupid_to_group(x):
shutil.copy(filename, tmp_filename)
try:
extra_kwargs = {}
if sys.version_info > (3, 0):
extra_kwargs = {"errors": "surrogateescape"}
extra_kwargs = {"errors": "surrogateescape"}
# Open as a text file and filter until the end of the file is
# reached or we found a marker in the line if it was specified
@@ -505,8 +501,15 @@ def group_ids(uid=None):
if uid is None:
uid = getuid()
user = pwd.getpwuid(uid).pw_name
return [g.gr_gid for g in grp.getgrall() if user in g.gr_mem]
pwd_entry = pwd.getpwuid(uid)
user = pwd_entry.pw_name
# user's primary group id may not be listed in grp (i.e. /etc/group)
# you have to check pwd for that, so start the list with that
gids = [pwd_entry.pw_gid]
return sorted(set(gids + [g.gr_gid for g in grp.getgrall() if user in g.gr_mem]))
@system_path_filter(arg_slice=slice(1))
@@ -515,7 +518,7 @@ def chgrp(path, group, follow_symlinks=True):
if is_windows:
raise OSError("Function 'chgrp' is not supported on Windows")
if isinstance(group, six.string_types):
if isinstance(group, str):
gid = grp.getgrnam(group).gr_gid
else:
gid = group
@@ -1012,7 +1015,7 @@ def open_if_filename(str_or_file, mode="r"):
If it's a file object, just yields the file object.
"""
if isinstance(str_or_file, six.string_types):
if isinstance(str_or_file, str):
with open(str_or_file, mode) as f:
yield f
else:
@@ -1083,7 +1086,11 @@ def temp_cwd():
with working_dir(tmp_dir):
yield tmp_dir
finally:
shutil.rmtree(tmp_dir)
kwargs = {}
if is_windows:
kwargs["ignore_errors"] = False
kwargs["onerror"] = readonly_file_handler(ignore_errors=True)
shutil.rmtree(tmp_dir, **kwargs)
@contextmanager
@@ -1298,46 +1305,34 @@ def visit_directory_tree(root, visitor, rel_path="", depth=0):
depth (str): current depth from the root
"""
dir = os.path.join(root, rel_path)
if sys.version_info >= (3, 5, 0):
dir_entries = sorted(os.scandir(dir), key=lambda d: d.name) # novermin
else:
dir_entries = os.listdir(dir)
dir_entries.sort()
dir_entries = sorted(os.scandir(dir), key=lambda d: d.name)
for f in dir_entries:
if sys.version_info >= (3, 5, 0):
rel_child = os.path.join(rel_path, f.name)
islink = f.is_symlink()
# On Windows, symlinks to directories are distinct from
# symlinks to files, and it is possible to create a
# broken symlink to a directory (e.g. using os.symlink
# without `target_is_directory=True`), invoking `isdir`
# on a symlink on Windows that is broken in this manner
# will result in an error. In this case we can work around
# the issue by reading the target and resolving the
# directory ourselves
try:
isdir = f.is_dir()
except OSError as e:
if is_windows and hasattr(e, "winerror") and e.winerror == 5 and islink:
# if path is a symlink, determine destination and
# evaluate file vs directory
link_target = resolve_link_target_relative_to_the_link(f)
# link_target might be relative but
# resolve_link_target_relative_to_the_link
# will ensure that if so, that it is relative
# to the CWD and therefore
# makes sense
isdir = os.path.isdir(link_target)
else:
raise e
else:
rel_child = os.path.join(rel_path, f)
lexists, islink, isdir = lexists_islink_isdir(os.path.join(dir, f))
if not lexists:
continue
rel_child = os.path.join(rel_path, f.name)
islink = f.is_symlink()
# On Windows, symlinks to directories are distinct from
# symlinks to files, and it is possible to create a
# broken symlink to a directory (e.g. using os.symlink
# without `target_is_directory=True`), invoking `isdir`
# on a symlink on Windows that is broken in this manner
# will result in an error. In this case we can work around
# the issue by reading the target and resolving the
# directory ourselves
try:
isdir = f.is_dir()
except OSError as e:
if is_windows and hasattr(e, "winerror") and e.winerror == 5 and islink:
# if path is a symlink, determine destination and
# evaluate file vs directory
link_target = resolve_link_target_relative_to_the_link(f)
# link_target might be relative but
# resolve_link_target_relative_to_the_link
# will ensure that if so, that it is relative
# to the CWD and therefore
# makes sense
isdir = os.path.isdir(link_target)
else:
raise e
if not isdir and not islink:
# handle non-symlink files
@@ -1598,14 +1593,14 @@ def find(root, files, recursive=True):
Parameters:
root (str): The root directory to start searching from
files (str or Sequence): Library name(s) to search for
files (str or collections.abc.Sequence): Library name(s) to search for
recursive (bool): if False search only root folder,
if True descends top-down from the root. Defaults to True.
Returns:
list: The files that have been found
"""
if isinstance(files, six.string_types):
if isinstance(files, str):
files = [files]
if recursive:
@@ -1662,14 +1657,14 @@ def _find_non_recursive(root, search_files):
# Utilities for libraries and headers
class FileList(Sequence):
class FileList(collections.abc.Sequence):
"""Sequence of absolute paths to files.
Provides a few convenience methods to manipulate file paths.
"""
def __init__(self, files):
if isinstance(files, six.string_types):
if isinstance(files, str):
files = [files]
self.files = list(dedupe(files))
@@ -1765,7 +1760,7 @@ def directories(self):
def directories(self, value):
value = value or []
# Accept a single directory as input
if isinstance(value, six.string_types):
if isinstance(value, str):
value = [value]
self._directories = [path_to_os_path(os.path.normpath(x))[0] for x in value]
@@ -1901,9 +1896,9 @@ def find_headers(headers, root, recursive=False):
Returns:
HeaderList: The headers that have been found
"""
if isinstance(headers, six.string_types):
if isinstance(headers, str):
headers = [headers]
elif not isinstance(headers, Sequence):
elif not isinstance(headers, collections.abc.Sequence):
message = "{0} expects a string or sequence of strings as the "
message += "first argument [got {1} instead]"
message = message.format(find_headers.__name__, type(headers))
@@ -2067,9 +2062,9 @@ def find_system_libraries(libraries, shared=True):
Returns:
LibraryList: The libraries that have been found
"""
if isinstance(libraries, six.string_types):
if isinstance(libraries, str):
libraries = [libraries]
elif not isinstance(libraries, Sequence):
elif not isinstance(libraries, collections.abc.Sequence):
message = "{0} expects a string or sequence of strings as the "
message += "first argument [got {1} instead]"
message = message.format(find_system_libraries.__name__, type(libraries))
@@ -2095,7 +2090,7 @@ def find_system_libraries(libraries, shared=True):
return libraries_found
def find_libraries(libraries, root, shared=True, recursive=False):
def find_libraries(libraries, root, shared=True, recursive=False, runtime=True):
"""Returns an iterable of full paths to libraries found in a root dir.
Accepts any glob characters accepted by fnmatch:
@@ -2116,13 +2111,17 @@ def find_libraries(libraries, root, shared=True, recursive=False):
otherwise for static. Defaults to True.
recursive (bool): if False search only root folder,
if True descends top-down from the root. Defaults to False.
runtime (bool): Windows only option, no-op elsewhere. If true,
search for runtime shared libs (.DLL), otherwise, search
for .Lib files. If shared is false, this has no meaning.
Defaults to True.
Returns:
LibraryList: The libraries that have been found
"""
if isinstance(libraries, six.string_types):
if isinstance(libraries, str):
libraries = [libraries]
elif not isinstance(libraries, Sequence):
elif not isinstance(libraries, collections.abc.Sequence):
message = "{0} expects a string or sequence of strings as the "
message += "first argument [got {1} instead]"
message = message.format(find_libraries.__name__, type(libraries))
@@ -2130,7 +2129,9 @@ def find_libraries(libraries, root, shared=True, recursive=False):
if is_windows:
static_ext = "lib"
shared_ext = "dll"
# For linking (runtime=False) you need the .lib files regardless of
# whether you are doing a shared or static link
shared_ext = "dll" if runtime else "lib"
else:
# Used on both Linux and macOS
static_ext = "a"
@@ -2174,13 +2175,13 @@ def find_libraries(libraries, root, shared=True, recursive=False):
return LibraryList(found_libs)
def find_all_shared_libraries(root, recursive=False):
def find_all_shared_libraries(root, recursive=False, runtime=True):
"""Convenience function that returns the list of all shared libraries found
in the directory passed as argument.
See documentation for `llnl.util.filesystem.find_libraries` for more information
"""
return find_libraries("*", root=root, shared=True, recursive=recursive)
return find_libraries("*", root=root, shared=True, recursive=recursive, runtime=runtime)
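A hedged usage sketch of the new `runtime` keyword (`prefix` and the library name below are placeholders, not part of this diff): on Windows, `shared=True` with `runtime=True` matches the `.dll` files needed when loading, while `runtime=False` matches the `.lib` import libraries needed when linking; on other platforms the flag is a no-op:

```python
from llnl.util.filesystem import find_libraries

dlls    = find_libraries("zlib*", root=prefix, shared=True, recursive=True)
implibs = find_libraries("zlib*", root=prefix, shared=True, recursive=True, runtime=False)
```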
def find_all_static_libraries(root, recursive=False):
@@ -2226,48 +2227,36 @@ def __init__(self, package, link_install_prefix=True):
self.pkg = package
self._addl_rpaths = set()
self.link_install_prefix = link_install_prefix
self._internal_links = set()
self._additional_library_dependents = set()
@property
def link_dest(self):
def library_dependents(self):
"""
Set of directories where package binaries/libraries are located.
"""
if hasattr(self.pkg, "libs") and self.pkg.libs:
pkg_libs = set(self.pkg.libs.directories)
else:
pkg_libs = set((self.pkg.prefix.lib, self.pkg.prefix.lib64))
return set([self.pkg.prefix.bin]) | self._additional_library_dependents
return pkg_libs | set([self.pkg.prefix.bin]) | self.internal_links
@property
def internal_links(self):
def add_library_dependent(self, *dest):
"""
linking that would need to be established within the package itself. Useful for links
against extension modules/build time executables/internal linkage
"""
return self._internal_links
Add paths to directories or libraries/binaries to set of
common paths that need to link against other libraries
def add_internal_links(self, *dest):
"""
Incorporate additional paths into the rpath (sym)linking scheme.
Paths provided to this method are linked against by a package's libraries
and libraries found at these paths are linked against a package's binaries.
(i.e. /site-packages -> /bin and /bin -> /site-packages)
Specified paths should be outside of a package's lib, lib64, and bin
Specified paths should fall outside of a package's common
link paths, i.e. the bin
directories.
"""
self._internal_links = self._internal_links | set(*dest)
for pth in dest:
if os.path.isfile(pth):
self._additional_library_dependents.add(os.path.dirname(pth))
else:
self._additional_library_dependents.add(pth)
@property
def link_targets(self):
def rpaths(self):
"""
Set of libraries this package needs to link against during runtime
These libraries will each be symlinked into the package's lib and binary dir
"""
dependent_libs = []
for path in self.pkg.rpath:
dependent_libs.extend(list(find_all_shared_libraries(path, recursive=True)))
@@ -2275,18 +2264,43 @@ def link_targets(self):
dependent_libs.extend(list(find_all_shared_libraries(extra_path, recursive=True)))
return set(dependent_libs)
def include_additional_link_paths(self, *paths):
def add_rpath(self, *paths):
"""
Add libraries found at the root of provided paths to runtime linking
These are libraries found outside of the typical scope of rpath linking
that require manual inclusion in a runtime linking scheme
that require manual inclusion in a runtime linking scheme.
These links are unidirectional, and are only
intended to bring outside dependencies into this package
Args:
*paths (str): arbitrary number of paths to be added to runtime linking
"""
self._addl_rpaths = self._addl_rpaths | set(paths)
def _link(self, path, dest):
file_name = os.path.basename(path)
dest_file = os.path.join(dest, file_name)
if os.path.exists(dest):
try:
symlink(path, dest_file)
# For py2 compatibility, we have to catch the specific Windows error code
# associated with trying to create a file that already exists (winerror 183)
except OSError as e:
if e.winerror == 183:
# We have either already symlinked or we are encountering a naming clash
# either way, we don't want to overwrite existing libraries
already_linked = islink(dest_file)
tty.debug(
"Linking library %s to %s failed, " % (path, dest_file) + "already linked."
if already_linked
else "library with name %s already exists at location %s."
% (file_name, dest)
)
pass
else:
raise e
def establish_link(self):
"""
(sym)link packages to runtime dependencies based on RPath configuration for
@@ -2298,29 +2312,8 @@ def establish_link(self):
# for each binary install dir in self.pkg (i.e. pkg.prefix.bin, pkg.prefix.lib)
# install a symlink to each dependent library
for library, lib_dir in itertools.product(self.link_targets, self.link_dest):
if not path_contains_subdirectory(library, lib_dir):
file_name = os.path.basename(library)
dest_file = os.path.join(lib_dir, file_name)
if os.path.exists(lib_dir):
try:
symlink(library, dest_file)
# For py2 compatibility, we have to catch the specific Windows error code
# associated with trying to create a file that already exists (winerror 183)
except OSError as e:
if e.winerror == 183:
# We have either already symlinked or we are encountering a naming clash
# either way, we don't want to overwrite existing libraries
already_linked = islink(dest_file)
tty.debug(
"Linking library %s to %s failed, " % (library, dest_file)
+ "already linked."
if already_linked
else "library with name %s already exists." % file_name
)
pass
else:
raise e
for library, lib_dir in itertools.product(self.rpaths, self.library_dependents):
self._link(library, lib_dir)
@system_path_filter


@@ -5,9 +5,11 @@
from __future__ import division
import collections.abc
import contextlib
import functools
import inspect
import itertools
import os
import re
import sys
@@ -15,11 +17,6 @@
from datetime import datetime, timedelta
from typing import Any, Callable, Iterable, List, Tuple
import six
from six import string_types
from llnl.util.compat import MutableMapping, MutableSequence, zip_longest
# Ignore emacs backups when listing modules
ignore_modules = [r"^\.#", "~$"]
@@ -200,14 +197,9 @@ def _memoized_function(*args, **kwargs):
return ret
except TypeError as e:
# TypeError is raised when indexing into a dict if the key is unhashable.
raise six.raise_from(
UnhashableArguments(
"args + kwargs '{}' was not hashable for function '{}'".format(
key, func.__name__
),
),
e,
)
raise UnhashableArguments(
"args + kwargs '{}' was not hashable for function '{}'".format(key, func.__name__),
) from e
return _memoized_function
@@ -312,7 +304,7 @@ def lazy_eq(lseq, rseq):
# zip_longest is implemented in native code, so use it for speed.
# use zip_longest instead of zip because it allows us to tell
# which iterator was longer.
for left, right in zip_longest(liter, riter, fillvalue=done):
for left, right in itertools.zip_longest(liter, riter, fillvalue=done):
if (left is done) or (right is done):
return False
@@ -332,7 +324,7 @@ def lazy_lt(lseq, rseq):
liter = lseq()
riter = rseq()
for left, right in zip_longest(liter, riter, fillvalue=done):
for left, right in itertools.zip_longest(liter, riter, fillvalue=done):
if (left is done) or (right is done):
return left is done # left was shorter than right
@@ -482,7 +474,7 @@ def add_func_to_class(name, func):
@lazy_lexicographic_ordering
class HashableMap(MutableMapping):
class HashableMap(collections.abc.MutableMapping):
"""This is a hashable, comparable dictionary. Hash is performed on
a tuple of the values in the dictionary."""
@@ -574,7 +566,7 @@ def match_predicate(*args):
def match(string):
for arg in args:
if isinstance(arg, string_types):
if isinstance(arg, str):
if re.search(arg, string):
return True
elif isinstance(arg, list) or isinstance(arg, tuple):
@@ -749,6 +741,26 @@ def _n_xxx_ago(x):
raise ValueError(msg)
def pretty_seconds(seconds):
"""Seconds to string with appropriate units
Arguments:
seconds (float): Number of seconds
Returns:
str: Time string with units
"""
if seconds >= 1:
value, unit = seconds, "s"
elif seconds >= 1e-3:
value, unit = seconds * 1e3, "ms"
elif seconds >= 1e-6:
value, unit = seconds * 1e6, "us"
else:
value, unit = seconds * 1e9, "ns"
return "%.3f%s" % (value, unit)
class RequiredAttributeError(ValueError):
def __init__(self, message):
super(RequiredAttributeError, self).__init__(message)
@@ -867,32 +879,28 @@ def load_module_from_file(module_name, module_path):
ImportError: when the module can't be loaded
FileNotFoundError: when module_path doesn't exist
"""
import importlib.util
if module_name in sys.modules:
return sys.modules[module_name]
# This recipe is adapted from https://stackoverflow.com/a/67692/771663
if sys.version_info[0] == 3 and sys.version_info[1] >= 5:
import importlib.util
spec = importlib.util.spec_from_file_location(module_name, module_path) # novm
module = importlib.util.module_from_spec(spec) # novm
# The module object needs to exist in sys.modules before the
# loader executes the module code.
#
# See https://docs.python.org/3/reference/import.html#loading
sys.modules[spec.name] = module
spec = importlib.util.spec_from_file_location(module_name, module_path) # novm
module = importlib.util.module_from_spec(spec) # novm
# The module object needs to exist in sys.modules before the
# loader executes the module code.
#
# See https://docs.python.org/3/reference/import.html#loading
sys.modules[spec.name] = module
try:
spec.loader.exec_module(module)
except BaseException:
try:
spec.loader.exec_module(module)
except BaseException:
try:
del sys.modules[spec.name]
except KeyError:
pass
raise
elif sys.version_info[0] == 2:
import imp
module = imp.load_source(module_name, module_path)
del sys.modules[spec.name]
except KeyError:
pass
raise
return module
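A hypothetical call site for the simplified loader above; the module name and file path are illustrative only:

```python
# Hypothetical usage of load_module_from_file(); name and path are made up.
zlib_pkg = load_module_from_file(
    "spack.pkg.builtin.zlib",                  # registered in sys.modules
    "/path/to/repo/packages/zlib/package.py",  # hypothetical package file
)
```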
@@ -1002,7 +1010,15 @@ def stable_partition(
return true_items, false_items
class TypedMutableSequence(MutableSequence):
def ensure_last(lst, *elements):
"""Performs a stable partition of lst, ensuring that ``elements``
occur at the end of ``lst`` in specified order. Mutates ``lst``.
Raises ``ValueError`` if any ``elements`` are not already in ``lst``."""
for elt in elements:
lst.append(lst.pop(lst.index(elt)))
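A small worked example of `ensure_last`, assuming the definition above:

```python
deps = ["python", "zlib", "ninja", "cmake"]
ensure_last(deps, "python", "ninja")   # mutates deps in place
assert deps == ["zlib", "cmake", "python", "ninja"]
```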
class TypedMutableSequence(collections.abc.MutableSequence):
"""Base class that behaves like a list, just with a different type.
Client code can inherit from this base class:


@@ -9,9 +9,9 @@
import sys
import time
from datetime import datetime
from typing import Dict, Tuple # novm
import llnl.util.tty as tty
from llnl.util.lang import pretty_seconds
import spack.util.string
@@ -80,7 +80,7 @@ class OpenFileTracker(object):
def __init__(self):
"""Create a new ``OpenFileTracker``."""
self._descriptors = {} # type: Dict[Tuple[int, int], OpenFile]
self._descriptors = {}
def get_fh(self, path):
"""Get a filehandle for a lockfile.
@@ -102,7 +102,7 @@ def get_fh(self, path):
try:
# see whether we've seen this inode/pid before
stat = os.stat(path)
key = (stat.st_ino, pid)
key = (stat.st_dev, stat.st_ino, pid)
open_file = self._descriptors.get(key)
except OSError as e:
@@ -128,32 +128,32 @@ def get_fh(self, path):
# if we just created the file, we'll need to get its inode here
if not stat:
inode = os.fstat(fd).st_ino
key = (inode, pid)
stat = os.fstat(fd)
key = (stat.st_dev, stat.st_ino, pid)
self._descriptors[key] = open_file
open_file.refs += 1
return open_file.fh
def release_fh(self, path):
"""Release a filehandle, only closing it if there are no more references."""
try:
inode = os.stat(path).st_ino
except OSError as e:
if e.errno != errno.ENOENT: # only handle file not found
raise
inode = None # this will not be in self._descriptors
key = (inode, os.getpid())
def release_by_stat(self, stat):
key = (stat.st_dev, stat.st_ino, os.getpid())
open_file = self._descriptors.get(key)
assert open_file, "Attempted to close non-existing lock path: %s" % path
assert open_file, "Attempted to close non-existing inode: %s" % stat.st_inode
open_file.refs -= 1
if not open_file.refs:
del self._descriptors[key]
open_file.fh.close()
def release_by_fh(self, fh):
self.release_by_stat(os.fstat(fh.fileno()))
def purge(self):
for key in list(self._descriptors.keys()):
self._descriptors[key].fh.close()
del self._descriptors[key]
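Keying on `(st_dev, st_ino, pid)` rather than inode alone matters because inode numbers are only unique within a single filesystem. A toy illustration with hypothetical paths; `os.path.samestat` applies the same device/inode comparison the tracker key relies on:

```python
import os

a = os.stat("/locks/a.lock")              # hypothetical path
b = os.stat("/locks/hardlink-to-a.lock")  # hypothetical hard link to it
assert os.path.samestat(a, b)             # same (st_dev, st_ino): one descriptor

c = os.stat("/mnt/other-fs/b.lock")       # another filesystem: st_ino alone
assert not os.path.samestat(a, c)         # could collide, st_dev disambiguates
```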
#: Open file descriptors for locks in this process. Used to prevent one process
#: from opening the same file many times for different byte range locks
@@ -166,7 +166,7 @@ def _attempts_str(wait_time, nattempts):
return ""
attempts = spack.util.string.plural(nattempts, "attempt")
return " after {0:0.2f}s and {1}".format(wait_time, attempts)
return " after {} and {}".format(pretty_seconds(wait_time), attempts)
class LockType(object):
@@ -318,8 +318,8 @@ def _lock(self, op, timeout=None):
raise LockROFileError(self.path)
self._log_debug(
"{0} locking [{1}:{2}]: timeout {3} sec".format(
op_str.lower(), self._start, self._length, timeout
"{} locking [{}:{}]: timeout {}".format(
op_str.lower(), self._start, self._length, pretty_seconds(timeout or 0)
)
)
@@ -340,7 +340,8 @@ def _lock(self, op, timeout=None):
total_wait_time = time.time() - start_time
return total_wait_time, num_attempts
raise LockTimeoutError("Timed out waiting for a {0} lock.".format(op_str.lower()))
total_wait_time = time.time() - start_time
raise LockTimeoutError(op_str.lower(), self.path, total_wait_time, num_attempts)
def _poll_lock(self, op):
"""Attempt to acquire the lock in a non-blocking manner. Return whether
@@ -430,8 +431,7 @@ def _unlock(self):
"""
fcntl.lockf(self._file, fcntl.LOCK_UN, self._length, self._start, os.SEEK_SET)
file_tracker.release_fh(self.path)
file_tracker.release_by_fh(self._file)
self._file = None
self._reads = 0
self._writes = 0
@@ -780,6 +780,18 @@ class LockLimitError(LockError):
class LockTimeoutError(LockError):
"""Raised when an attempt to acquire a lock times out."""
def __init__(self, lock_type, path, time, attempts):
fmt = "Timed out waiting for a {} lock after {}.\n Made {} {} on file: {}"
super(LockTimeoutError, self).__init__(
fmt.format(
lock_type,
pretty_seconds(time),
attempts,
"attempt" if attempts == 1 else "attempts",
path,
)
)
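Given the format string above, a timed-out lock would render roughly as follows (values are hypothetical):

```python
err = LockTimeoutError("read", "/tmp/spack.lock", 30.0, 3)
print(err)
# Timed out waiting for a read lock after 30.000s.
#     Made 3 attempts on file: /tmp/spack.lock
```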
class LockUpgradeError(LockError):
"""Raised when unable to upgrade from a read to a write lock."""


@@ -24,7 +24,7 @@ def symlink(real_path, link_path):
On Windows, use junctions if os.symlink fails.
"""
if not is_windows or _win32_can_symlink():
os.symlink(real_path, link_path)
os.symlink(real_path, link_path, target_is_directory=os.path.isdir(real_path))
else:
try:
# Try to use junctions


@@ -6,6 +6,7 @@
from __future__ import unicode_literals
import contextlib
import io
import os
import struct
import sys
@@ -14,10 +15,6 @@
from datetime import datetime
from sys import platform as _platform
import six
from six import StringIO
from six.moves import input
if _platform != "win32":
import fcntl
import termios
@@ -183,7 +180,7 @@ def msg(message, *args, **kwargs):
else:
cwrite("@*b{%s==>} %s%s" % (st_text, get_timestamp(), cescape(_output_filter(message))))
for arg in args:
print(indent + _output_filter(six.text_type(arg)))
print(indent + _output_filter(str(arg)))
def info(message, *args, **kwargs):
@@ -201,13 +198,13 @@ def info(message, *args, **kwargs):
st_text = process_stacktrace(st_countback)
cprint(
"@%s{%s==>} %s%s"
% (format, st_text, get_timestamp(), cescape(_output_filter(six.text_type(message)))),
% (format, st_text, get_timestamp(), cescape(_output_filter(str(message)))),
stream=stream,
)
for arg in args:
if wrap:
lines = textwrap.wrap(
_output_filter(six.text_type(arg)),
_output_filter(str(arg)),
initial_indent=indent,
subsequent_indent=indent,
break_long_words=break_long_words,
@@ -215,7 +212,7 @@ def info(message, *args, **kwargs):
for line in lines:
stream.write(line + "\n")
else:
stream.write(indent + _output_filter(six.text_type(arg)) + "\n")
stream.write(indent + _output_filter(str(arg)) + "\n")
def verbose(message, *args, **kwargs):
@@ -238,7 +235,7 @@ def error(message, *args, **kwargs):
kwargs.setdefault("format", "*r")
kwargs.setdefault("stream", sys.stderr)
info("Error: " + six.text_type(message), *args, **kwargs)
info("Error: " + str(message), *args, **kwargs)
def warn(message, *args, **kwargs):
@@ -247,7 +244,7 @@ def warn(message, *args, **kwargs):
kwargs.setdefault("format", "*Y")
kwargs.setdefault("stream", sys.stderr)
info("Warning: " + six.text_type(message), *args, **kwargs)
info("Warning: " + str(message), *args, **kwargs)
def die(message, *args, **kwargs):
@@ -271,7 +268,7 @@ def get_number(prompt, **kwargs):
while number is None:
msg(prompt, newline=False)
ans = input()
if ans == six.text_type(abort):
if ans == str(abort):
return None
if ans:
@@ -336,11 +333,11 @@ def hline(label=None, **kwargs):
cols -= 2
cols = min(max_width, cols)
label = six.text_type(label)
label = str(label)
prefix = char * 2 + " "
suffix = " " + (cols - len(prefix) - clen(label)) * char
out = StringIO()
out = io.StringIO()
out.write(prefix)
out.write(label)
out.write(suffix)
@@ -372,10 +369,5 @@ def ioctl_gwinsz(fd):
return int(rc[0]), int(rc[1])
else:
if sys.version_info[0] < 3:
raise RuntimeError(
"Terminal size not obtainable on Windows with a\
Python version older than 3"
)
rc = (os.environ.get("LINES", 25), os.environ.get("COLUMNS", 80))
return int(rc[0]), int(rc[1])


@@ -8,11 +8,10 @@
"""
from __future__ import division, unicode_literals
import io
import os
import sys
from six import StringIO, text_type
from llnl.util.tty import terminal_size
from llnl.util.tty.color import cextra, clen
@@ -134,7 +133,7 @@ def colify(elts, **options):
)
# elts needs to be an array of strings so we can count the elements
elts = [text_type(elt) for elt in elts]
elts = [str(elt) for elt in elts]
if not elts:
return (0, ())
@@ -232,7 +231,7 @@ def transpose():
def colified(elts, **options):
"""Invokes the ``colify()`` function but returns the result as a string
instead of writing it to an output stream."""
sio = StringIO()
sio = io.StringIO()
options["output"] = sio
colify(elts, **options)
return sio.getvalue()


@@ -65,8 +65,6 @@
import sys
from contextlib import contextmanager
import six
class ColorParseError(Exception):
"""Raised when a color format fails to parse."""
@@ -259,7 +257,7 @@ def cescape(string):
Returns:
(str): the string with color codes escaped
"""
string = six.text_type(string)
string = str(string)
string = string.replace("@", "@@")
string = string.replace("}", "}}")
return string


@@ -24,8 +24,6 @@
from types import ModuleType # novm
from typing import Optional # novm
from six import StringIO, string_types
import llnl.util.tty as tty
termios = None # type: Optional[ModuleType]
@@ -241,8 +239,7 @@ def __exit__(self, exc_type, exception, traceback):
"""If termios was available, restore old settings."""
if self.old_cfg:
self._restore_default_terminal_settings()
if sys.version_info >= (3,):
atexit.unregister(self._restore_default_terminal_settings)
atexit.unregister(self._restore_default_terminal_settings)
# restore SIGSTP and SIGCONT handlers
if self.old_handlers:
@@ -309,7 +306,7 @@ def __init__(self, file_like):
self.file_like = file_like
if isinstance(file_like, string_types):
if isinstance(file_like, str):
self.open = True
elif _file_descriptors_work(file_like):
self.open = False
@@ -323,12 +320,9 @@ def __init__(self, file_like):
def unwrap(self):
if self.open:
if self.file_like:
if sys.version_info < (3,):
self.file = open(self.file_like, "w")
else:
self.file = open(self.file_like, "w", encoding="utf-8") # novm
self.file = open(self.file_like, "w", encoding="utf-8")
else:
self.file = StringIO()
self.file = io.StringIO()
return self.file
else:
# We were handed an already-open file object. In this case we also
@@ -699,13 +693,10 @@ def __init__(self, sys_attr):
self.sys_attr = sys_attr
self.saved_stream = None
if sys.platform.startswith("win32"):
if sys.version_info < (3, 5):
libc = ctypes.CDLL(ctypes.util.find_library("c"))
if hasattr(sys, "gettotalrefcount"): # debug build
libc = ctypes.CDLL("ucrtbased")
else:
if hasattr(sys, "gettotalrefcount"): # debug build
libc = ctypes.CDLL("ucrtbased")
else:
libc = ctypes.CDLL("api-ms-win-crt-stdio-l1-1-0")
libc = ctypes.CDLL("api-ms-win-crt-stdio-l1-1-0")
kernel32 = ctypes.WinDLL("kernel32")
@@ -794,7 +785,7 @@ def __enter__(self):
raise RuntimeError("file argument must be set by __init__ ")
# Open both write and reading on logfile
if type(self.logfile) == StringIO:
if type(self.logfile) == io.StringIO:
self._ioflag = True
# cannot have two streams on tempfile, so we must make our own
sys.stdout = self.logfile
@@ -927,13 +918,10 @@ def _writer_daemon(
if sys.version_info < (3, 8) or sys.platform != "darwin":
os.close(write_fd)
# Use line buffering (3rd param = 1) since Python 3 has a bug
# 1. Use line buffering (3rd param = 1) since Python 3 has a bug
# that prevents unbuffered text I/O.
if sys.version_info < (3,):
in_pipe = os.fdopen(read_multiprocess_fd.fd, "r", 1)
else:
# Python 3.x before 3.7 does not open with UTF-8 encoding by default
in_pipe = os.fdopen(read_multiprocess_fd.fd, "r", 1, encoding="utf-8")
# 2. Python 3.x before 3.7 does not open with UTF-8 encoding by default
in_pipe = os.fdopen(read_multiprocess_fd.fd, "r", 1, encoding="utf-8")
if stdin_multiprocess_fd:
stdin = os.fdopen(stdin_multiprocess_fd.fd)
@@ -1023,7 +1011,7 @@ def _writer_daemon(
finally:
# send written data back to parent if we used a StringIO
if isinstance(log_file, StringIO):
if isinstance(log_file, io.StringIO):
control_pipe.send(log_file.getvalue())
log_file_wrapper.close()
close_connection_and_file(read_multiprocess_fd, in_pipe)


@@ -3,11 +3,20 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
#: (major, minor, micro, dev release) tuple
spack_version_info = (0, 19, 0, "dev0")
#: PEP440 canonical <major>.<minor>.<micro>.<devN> string
spack_version = ".".join(str(s) for s in spack_version_info)
__version__ = "0.20.0.dev0"
spack_version = __version__
def __try_int(v):
try:
return int(v)
except ValueError:
return v
#: (major, minor, micro, dev release) tuple
spack_version_info = tuple([__try_int(v) for v in __version__.split(".")])
__all__ = ["spack_version_info", "spack_version"]
__version__ = spack_version
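A condensed sketch of the tuple derivation above; the helper is renamed here because double-underscore names are module-private:

```python
def _try_int(v):                           # mirrors __try_int above
    try:
        return int(v)
    except ValueError:
        return v

info = tuple(_try_int(v) for v in "0.20.0.dev0".split("."))
assert info == (0, 20, 0, "dev0")          # numeric parts become ints
```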


@@ -37,15 +37,14 @@ def _search_duplicate_compilers(error_cls):
"""
import ast
import collections
import collections.abc
import inspect
import itertools
import pickle
import re
from six.moves.urllib.request import urlopen
from urllib.request import urlopen
import llnl.util.lang
from llnl.util.compat import Sequence
import spack.config
import spack.patch
@@ -81,7 +80,7 @@ def __hash__(self):
return hash(value)
class AuditClass(Sequence):
class AuditClass(collections.abc.Sequence):
def __init__(self, group, tag, description, kwargs):
"""Return an object that acts as a decorator to register functions
associated with a specific class of sanity checks.
@@ -288,7 +287,7 @@ def _check_build_test_callbacks(pkgs, error_cls):
errors = []
for pkg_name in pkgs:
pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
test_callbacks = pkg_cls.build_time_test_callbacks
test_callbacks = getattr(pkg_cls, "build_time_test_callbacks", None)
if test_callbacks and "test" in test_callbacks:
msg = '{0} package contains "test" method in ' "build_time_test_callbacks"
@@ -503,6 +502,33 @@ def invalid_sha256_digest(fetcher):
return errors
@package_properties
def _ensure_env_methods_are_ported_to_builders(pkgs, error_cls):
"""Ensure that methods modifying the build environment are ported to builder classes."""
errors = []
for pkg_name in pkgs:
pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
buildsystem_variant, _ = pkg_cls.variants["build_system"]
buildsystem_names = [getattr(x, "value", x) for x in buildsystem_variant.values]
builder_cls_names = [spack.builder.BUILDER_CLS[x].__name__ for x in buildsystem_names]
module = pkg_cls.module
has_builders_in_package_py = any(
getattr(module, name, False) for name in builder_cls_names
)
if not has_builders_in_package_py:
continue
for method_name in ("setup_build_environment", "setup_dependent_build_environment"):
if hasattr(pkg_cls, method_name):
msg = (
"Package '{}' need to move the '{}' method from the package class to the"
" appropriate builder class".format(pkg_name, method_name)
)
errors.append(error_cls(msg, []))
return errors
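A hypothetical package shape that would trip this audit, and the move it asks for (class and variable names here are illustrative, not from the Spack repos):

```python
# Flagged: a builder class exists in package.py, so env-setup hooks on the
# package class are considered misplaced.
class Mypkg(CMakePackage):
    def setup_build_environment(self, env):
        env.set("MYPKG_FLAG", "1")

# Suggested shape: the hook lives on the builder class instead.
class CMakeBuilder(spack.build_systems.cmake.CMakeBuilder):
    def setup_build_environment(self, env):
        env.set("MYPKG_FLAG", "1")
```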
@package_https_directives
def _linting_package_file(pkgs, error_cls):
"""Check for correctness of links"""
@@ -660,7 +686,13 @@ def _ensure_variant_defaults_are_parsable(pkgs, error_cls):
errors.append(error_cls(error_msg.format(variant_name, pkg_name), []))
continue
vspec = variant.make_default()
try:
vspec = variant.make_default()
except spack.variant.MultipleValuesInExclusiveVariantError:
error_msg = "Cannot create a default value for the variant '{}' in package '{}'"
errors.append(error_cls(error_msg.format(variant_name, pkg_name), []))
continue
try:
variant.validate_or_raise(vspec, pkg_cls=pkg_cls)
except spack.variant.InvalidVariantValueError:


@@ -7,22 +7,24 @@
import collections
import hashlib
import json
import multiprocessing.pool
import os
import shutil
import sys
import tarfile
import tempfile
import time
import traceback
import warnings
from contextlib import closing
from urllib.error import HTTPError, URLError
import ruamel.yaml as yaml
from six.moves.urllib.error import HTTPError, URLError
import llnl.util.filesystem as fsys
import llnl.util.lang
import llnl.util.tty as tty
from llnl.util.filesystem import mkdirp
from llnl.util.filesystem import BaseDirectoryVisitor, mkdirp, visit_directory_tree
import spack.cmd
import spack.config as config
@@ -41,8 +43,10 @@
import spack.util.url as url_util
import spack.util.web as web_util
from spack.caches import misc_cache_location
from spack.relocate import utf8_paths_to_single_binary_regex
from spack.spec import Spec
from spack.stage import Stage
from spack.util.executable import which
_build_cache_relative_path = "build_cache"
_build_cache_keys_relative_path = "_pgp"
@@ -70,6 +74,10 @@ def __init__(self, errors):
super(FetchCacheError, self).__init__(self.message)
class ListMirrorSpecsError(spack.error.SpackError):
"""Raised when unable to retrieve list of specs from the mirror"""
class BinaryCacheIndex(object):
"""
The BinaryCacheIndex tracks what specs are available on (usually remote)
@@ -105,6 +113,10 @@ def __init__(self, cache_root):
# cache (_mirrors_for_spec)
self._specs_already_associated = set()
# mapping from mirror urls to the time.time() of the last index fetch and a bool indicating
# whether the fetch succeeded or not.
self._last_fetch_times = {}
# _mirrors_for_spec is a dictionary mapping DAG hashes to lists of
# entries indicating mirrors where that concrete spec can be found.
# Each entry is a dictionary consisting of:
@@ -137,6 +149,7 @@ def clear(self):
self._index_file_cache = None
self._local_index_cache = None
self._specs_already_associated = set()
self._last_fetch_times = {}
self._mirrors_for_spec = {}
def _write_local_index_cache(self):
@@ -242,7 +255,6 @@ def find_built_spec(self, spec, mirrors_to_check=None):
}
]
"""
self.regenerate_spec_cache()
return self.find_by_hash(spec.dag_hash(), mirrors_to_check=mirrors_to_check)
def find_by_hash(self, find_hash, mirrors_to_check=None):
@@ -253,6 +265,9 @@ def find_by_hash(self, find_hash, mirrors_to_check=None):
mirrors_to_check: Optional mapping containing mirrors to check. If
None, just assumes all configured mirrors.
"""
if find_hash not in self._mirrors_for_spec:
# Not found in the cached index, pull the latest from the server.
self.update(with_cooldown=True)
if find_hash not in self._mirrors_for_spec:
return None
results = self._mirrors_for_spec[find_hash]
@@ -283,7 +298,7 @@ def update_spec(self, spec, found_list):
"spec": new_entry["spec"],
}
def update(self):
def update(self, with_cooldown=False):
"""Make sure local cache of buildcache index files is up to date.
If the same mirrors are configured as the last time this was called
and none of the remote buildcache indices have changed, calling this
@@ -325,24 +340,41 @@ def update(self):
fetch_errors = []
all_methods_failed = True
ttl = spack.config.get("config:binary_index_ttl", 600)
now = time.time()
for cached_mirror_url in self._local_index_cache:
cache_entry = self._local_index_cache[cached_mirror_url]
cached_index_hash = cache_entry["index_hash"]
cached_index_path = cache_entry["index_path"]
if cached_mirror_url in configured_mirror_urls:
# May need to fetch the index and update the local caches
try:
needs_regen = self._fetch_and_cache_index(
cached_mirror_url, expect_hash=cached_index_hash
)
all_methods_failed = False
except FetchCacheError as fetch_error:
needs_regen = False
fetch_errors.extend(fetch_error.errors)
# The need to regenerate implies a need to clear as well.
spec_cache_clear_needed |= needs_regen
spec_cache_regenerate_needed |= needs_regen
# Only do a fetch if the last fetch was longer than TTL ago
if (
with_cooldown
and ttl > 0
and cached_mirror_url in self._last_fetch_times
and now - self._last_fetch_times[cached_mirror_url][0] < ttl
):
# We're in the cooldown period, don't try to fetch again
# If the fetch succeeded last time, consider this update a success, otherwise
# re-report the error here
if self._last_fetch_times[cached_mirror_url][1]:
all_methods_failed = False
else:
# May need to fetch the index and update the local caches
try:
needs_regen = self._fetch_and_cache_index(
cached_mirror_url, expect_hash=cached_index_hash
)
self._last_fetch_times[cached_mirror_url] = (now, True)
all_methods_failed = False
except FetchCacheError as fetch_error:
needs_regen = False
fetch_errors.extend(fetch_error.errors)
self._last_fetch_times[cached_mirror_url] = (now, False)
# The need to regenerate implies a need to clear as well.
spec_cache_clear_needed |= needs_regen
spec_cache_regenerate_needed |= needs_regen
else:
# No longer have this mirror, cached index should be removed
items_to_remove.append(
@@ -351,6 +383,8 @@ def update(self):
"cache_key": os.path.join(self._index_cache_root, cached_index_path),
}
)
if cached_mirror_url in self._last_fetch_times:
del self._last_fetch_times[cached_mirror_url]
spec_cache_clear_needed = True
spec_cache_regenerate_needed = True
@@ -369,10 +403,12 @@ def update(self):
# Need to fetch the index and update the local caches
try:
needs_regen = self._fetch_and_cache_index(mirror_url)
self._last_fetch_times[mirror_url] = (now, True)
all_methods_failed = False
except FetchCacheError as fetch_error:
fetch_errors.extend(fetch_error.errors)
needs_regen = False
self._last_fetch_times[mirror_url] = (now, False)
# Generally speaking, a new mirror wouldn't imply the need to
# clear the spec cache, so leave it as is.
if needs_regen:
@@ -619,6 +655,57 @@ def read_buildinfo_file(prefix):
return buildinfo
class BuildManifestVisitor(BaseDirectoryVisitor):
"""Visitor that collects a list of files and symlinks
that can be checked for need of relocation. It knows how
to dedupe hardlinks and deal with symlinks to files and
directories."""
def __init__(self):
# Save unique identifiers of files to avoid
# relocating hardlink files for each path.
self.visited = set()
# Lists of files we will check
self.files = []
self.symlinks = []
def seen_before(self, root, rel_path):
stat_result = os.lstat(os.path.join(root, rel_path))
identifier = (stat_result.st_dev, stat_result.st_ino)
if identifier in self.visited:
return True
else:
self.visited.add(identifier)
return False
def visit_file(self, root, rel_path, depth):
if self.seen_before(root, rel_path):
return
self.files.append(rel_path)
def visit_symlinked_file(self, root, rel_path, depth):
# Note: symlinks *can* be hardlinked, but it is unclear if
# symlinks can be relinked in-place (preserving inode).
# Therefore, we do *not* de-dupe hardlinked symlinks.
self.symlinks.append(rel_path)
def before_visit_dir(self, root, rel_path, depth):
return os.path.basename(rel_path) not in (".spack", "man")
def before_visit_symlinked_dir(self, root, rel_path, depth):
# Treat symlinked directories simply as symlinks.
self.visit_symlinked_file(root, rel_path, depth)
# Never recurse into symlinked directories.
return False
def file_matches(path, regex):
with open(path, "rb") as f:
contents = f.read()
return bool(regex.search(contents))
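A minimal sketch of the "one giant regex" idea used just below; the real helper is `spack.relocate.utf8_paths_to_single_binary_regex`, inlined here in simplified form with hypothetical prefixes:

```python
import re

prefixes = ["/opt/spack/zlib-1.2.12", "/opt/spack/cmake-3.24.2"]
regex = re.compile(b"|".join(re.escape(p.encode("utf-8")) for p in prefixes))

# file_matches() above then needs only a single scan per file:
assert regex.search(b"RPATH=/opt/spack/zlib-1.2.12/lib")
assert not regex.search(b"no store paths here")
```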
def get_buildfile_manifest(spec):
"""
Return a data structure with information about a build, including
@@ -634,57 +721,61 @@ def get_buildfile_manifest(spec):
"link_to_relocate": [],
"other": [],
"binary_to_relocate_fullpath": [],
"hardlinks_deduped": True,
}
exclude_list = (".spack", "man")
# Guard against filesystem footguns of hardlinks and symlinks by using
# a visitor to retrieve a list of files and symlinks, so we don't have
# to worry about hardlinks of symlinked dirs and what not.
visitor = BuildManifestVisitor()
root = spec.prefix
visit_directory_tree(root, visitor)
# Do this during tarball creation to save time when the tarball is unpacked.
# Used by make_package_relative to determine binaries to change.
for root, dirs, files in os.walk(spec.prefix, topdown=True):
dirs[:] = [d for d in dirs if d not in exclude_list]
# Collect a list of prefixes for this package and its dependencies; Spack will
# look for them to decide whether a text file needs to be relocated or not
prefixes = [d.prefix for d in spec.traverse(root=True, deptype="all") if not d.external]
prefixes.append(spack.hooks.sbang.sbang_install_path())
prefixes.append(str(spack.store.layout.root))
# Directories may need to be relocated too.
for directory in dirs:
dir_path_name = os.path.join(root, directory)
rel_path_name = os.path.relpath(dir_path_name, spec.prefix)
if os.path.islink(dir_path_name):
link = os.readlink(dir_path_name)
if os.path.isabs(link) and link.startswith(spack.store.layout.root):
data["link_to_relocate"].append(rel_path_name)
# Create a giant regex that matches all prefixes
regex = utf8_paths_to_single_binary_regex(prefixes)
for filename in files:
path_name = os.path.join(root, filename)
m_type, m_subtype = fsys.mime_type(path_name)
rel_path_name = os.path.relpath(path_name, spec.prefix)
added = False
# Symlinks.
if os.path.islink(path_name):
link = os.readlink(path_name)
if os.path.isabs(link):
# Relocate absolute links into the spack tree
if link.startswith(spack.store.layout.root):
data["link_to_relocate"].append(rel_path_name)
added = True
# Obvious bugs:
# 1. relative links are not relocated.
# 2. paths are used as strings.
for rel_path in visitor.symlinks:
abs_path = os.path.join(root, rel_path)
link = os.readlink(abs_path)
if os.path.isabs(link) and link.startswith(spack.store.layout.root):
data["link_to_relocate"].append(rel_path)
if relocate.needs_binary_relocation(m_type, m_subtype):
if (
(
m_subtype in ("x-executable", "x-sharedlib", "x-pie-executable")
and sys.platform != "darwin"
)
or (m_subtype in ("x-mach-binary") and sys.platform == "darwin")
or (not filename.endswith(".o"))
):
data["binary_to_relocate"].append(rel_path_name)
data["binary_to_relocate_fullpath"].append(path_name)
added = True
# Non-symlinks.
for rel_path in visitor.files:
abs_path = os.path.join(root, rel_path)
m_type, m_subtype = fsys.mime_type(abs_path)
if relocate.needs_text_relocation(m_type, m_subtype):
data["text_to_relocate"].append(rel_path_name)
added = True
if relocate.needs_binary_relocation(m_type, m_subtype):
# Why is this branch not part of needs_binary_relocation? :(
if (
(
m_subtype in ("x-executable", "x-sharedlib", "x-pie-executable")
and sys.platform != "darwin"
)
or (m_subtype in ("x-mach-binary") and sys.platform == "darwin")
or (not rel_path.endswith(".o"))
):
data["binary_to_relocate"].append(rel_path)
data["binary_to_relocate_fullpath"].append(abs_path)
continue
elif relocate.needs_text_relocation(m_type, m_subtype) and file_matches(abs_path, regex):
data["text_to_relocate"].append(rel_path)
continue
data["other"].append(abs_path)
if not added:
data["other"].append(path_name)
return data
@@ -698,7 +789,7 @@ def write_buildinfo_file(spec, workdir, rel=False):
prefix_to_hash = dict()
prefix_to_hash[str(spec.package.prefix)] = spec.dag_hash()
deps = spack.build_environment.get_rpath_deps(spec.package)
for d in deps:
for d in deps + spec.dependencies(deptype="run"):
prefix_to_hash[str(d.prefix)] = d.dag_hash()
# Create buildinfo data and write it to disk
@@ -711,6 +802,7 @@ def write_buildinfo_file(spec, workdir, rel=False):
buildinfo["relocate_textfiles"] = manifest["text_to_relocate"]
buildinfo["relocate_binaries"] = manifest["binary_to_relocate"]
buildinfo["relocate_links"] = manifest["link_to_relocate"]
buildinfo["hardlinks_deduped"] = manifest["hardlinks_deduped"]
buildinfo["prefix_to_hash"] = prefix_to_hash
filename = buildinfo_file_name(workdir)
with open(filename, "w") as outfile:
@@ -795,37 +887,52 @@ def sign_specfile(key, force, specfile_path):
spack.util.gpg.sign(key, specfile_path, signed_specfile_path, clearsign=True)
def _fetch_spec_from_mirror(spec_url):
s = None
tty.debug("fetching {0}".format(spec_url))
_, _, spec_file = web_util.read_from_url(spec_url)
spec_file_contents = codecs.getreader("utf-8")(spec_file).read()
# Need full spec.json name or this gets confused with index.json.
if spec_url.endswith(".json.sig"):
specfile_json = Spec.extract_json_from_clearsig(spec_file_contents)
s = Spec.from_dict(specfile_json)
elif spec_url.endswith(".json"):
s = Spec.from_json(spec_file_contents)
elif spec_url.endswith(".yaml"):
s = Spec.from_yaml(spec_file_contents)
return s
def _read_specs_and_push_index(file_list, read_method, cache_prefix, db, temp_dir, concurrency):
"""Read all the specs listed in the provided list, using thread given thread parallelism,
generate the index, and push it to the mirror.
Args:
file_list (list(str)): List of urls or file paths pointing at spec files to read
read_method: A function taking a single argument, either a url or a file path,
and which reads the spec file at that location, and returns the spec.
cache_prefix (str): prefix of the build cache on s3 where index should be pushed.
db: A spack database used for adding specs and then writing the index.
temp_dir (str): Location to write index.json and hash for pushing
concurrency (int): Number of parallel processes to use when fetching
def _read_specs_and_push_index(file_list, cache_prefix, db, db_root_dir):
for file_path in file_list:
try:
s = _fetch_spec_from_mirror(url_util.join(cache_prefix, file_path))
except (URLError, web_util.SpackWebError) as url_err:
tty.error("Error reading specfile: {0}".format(file_path))
tty.error(url_err)
Return:
None
"""
if s:
db.add(s, None)
db.mark(s, "in_buildcache", True)
def _fetch_spec_from_mirror(spec_url):
spec_file_contents = read_method(spec_url)
if spec_file_contents:
# Need full spec.json name or this gets confused with index.json.
if spec_url.endswith(".json.sig"):
specfile_json = Spec.extract_json_from_clearsig(spec_file_contents)
return Spec.from_dict(specfile_json)
if spec_url.endswith(".json"):
return Spec.from_json(spec_file_contents)
if spec_url.endswith(".yaml"):
return Spec.from_yaml(spec_file_contents)
tp = multiprocessing.pool.ThreadPool(processes=concurrency)
try:
fetched_specs = tp.map(
llnl.util.lang.star(_fetch_spec_from_mirror), [(f,) for f in file_list]
)
finally:
tp.terminate()
tp.join()
for fetched_spec in fetched_specs:
db.add(fetched_spec, None)
db.mark(fetched_spec, "in_buildcache", True)
# Now generate the index, compute its hash, and push the two files to
# the mirror.
index_json_path = os.path.join(db_root_dir, "index.json")
index_json_path = os.path.join(temp_dir, "index.json")
with open(index_json_path, "w") as f:
db._write_to_file(f)
@@ -835,7 +942,7 @@ def _read_specs_and_push_index(file_list, cache_prefix, db, db_root_dir):
index_hash = compute_hash(index_string)
# Write the hash out to a local file
index_hash_path = os.path.join(db_root_dir, "index.json.hash")
index_hash_path = os.path.join(temp_dir, "index.json.hash")
with open(index_hash_path, "w") as f:
f.write(index_hash)
@@ -856,33 +963,152 @@ def _read_specs_and_push_index(file_list, cache_prefix, db, db_root_dir):
)
def generate_package_index(cache_prefix):
"""Create the build cache index page.
def _specs_from_cache_aws_cli(cache_prefix):
"""Use aws cli to sync all the specs into a local temporary directory.
Creates (or replaces) the "index.json" page at the location given in
cache_prefix. This page contains a link for each binary package (.yaml or
.json) under cache_prefix.
Args:
cache_prefix (str): prefix of the build cache on s3
Return:
List of the local file paths and a function that can read each one from the file system.
"""
read_fn = None
file_list = None
aws = which("aws")
def file_read_method(file_path):
with open(file_path) as fd:
return fd.read()
tmpspecsdir = tempfile.mkdtemp()
sync_command_args = [
"s3",
"sync",
"--exclude",
"*",
"--include",
"*.spec.json.sig",
"--include",
"*.spec.json",
"--include",
"*.spec.yaml",
cache_prefix,
tmpspecsdir,
]
try:
file_list = (
entry
tty.debug(
"Using aws s3 sync to download specs from {0} to {1}".format(cache_prefix, tmpspecsdir)
)
aws(*sync_command_args, output=os.devnull, error=os.devnull)
file_list = fsys.find(tmpspecsdir, ["*.spec.json.sig", "*.spec.json", "*.spec.yaml"])
read_fn = file_read_method
except Exception:
tty.warn("Failed to use aws s3 sync to retrieve specs, falling back to parallel fetch")
shutil.rmtree(tmpspecsdir)
return file_list, read_fn
def _specs_from_cache_fallback(cache_prefix):
"""Use spack.util.web module to get a list of all the specs at the remote url.
Args:
cache_prefix (str): Base url of mirror (location of spec files)
Return:
The list of complete spec file urls and a function that can read each one from its
remote location (also using the spack.util.web module).
"""
read_fn = None
file_list = None
def url_read_method(url):
contents = None
try:
_, _, spec_file = web_util.read_from_url(url)
contents = codecs.getreader("utf-8")(spec_file).read()
except (URLError, web_util.SpackWebError) as url_err:
tty.error("Error reading specfile: {0}".format(url))
tty.error(url_err)
return contents
try:
file_list = [
url_util.join(cache_prefix, entry)
for entry in web_util.list_url(cache_prefix)
if entry.endswith(".yaml")
or entry.endswith("spec.json")
or entry.endswith("spec.json.sig")
)
]
read_fn = url_read_method
except KeyError as inst:
msg = "No packages at {0}: {1}".format(cache_prefix, inst)
tty.warn(msg)
return
except Exception as err:
# If we got some kind of S3 (access denied or other connection
# error), the first non boto-specific class in the exception
# hierarchy is Exception. Just print a warning and return
msg = "Encountered problem listing packages at {0}: {1}".format(cache_prefix, err)
tty.warn(msg)
return file_list, read_fn
def _spec_files_from_cache(cache_prefix):
"""Get a list of all the spec files in the mirror and a function to
read them.
Args:
cache_prefix (str): Base url of mirror (location of spec files)
Return:
A tuple where the first item is a list of absolute file paths or
urls pointing to the specs that should be read from the mirror,
and the second item is a function taking a url or file path and
returning the spec read from that location.
"""
callbacks = []
if cache_prefix.startswith("s3"):
callbacks.append(_specs_from_cache_aws_cli)
callbacks.append(_specs_from_cache_fallback)
for specs_from_cache_fn in callbacks:
file_list, read_fn = specs_from_cache_fn(cache_prefix)
if file_list:
return file_list, read_fn
raise ListMirrorSpecsError("Failed to get list of specs from {0}".format(cache_prefix))
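Usage of the helper above follows a simple contract: a list of spec locations plus a reader for them, with the aws-cli fast path tried first for S3 mirrors (the bucket URL here is hypothetical):

```python
file_list, read_fn = _spec_files_from_cache("s3://my-bucket/build_cache")
first_spec_text = read_fn(file_list[0])   # local file path or url, per the source
```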
def generate_package_index(cache_prefix, concurrency=32):
"""Create or replace the build cache index on the given mirror. The
buildcache index contains an entry for each binary package under the
cache_prefix.
Args:
cache_prefix(str): Base url of binary mirror.
concurrency (int): The desired threading concurrency to use when
fetching the spec files from the mirror.
Return:
None
"""
try:
file_list, read_fn = _spec_files_from_cache(cache_prefix)
except ListMirrorSpecsError as err:
tty.error("Unabled to generate package index, {0}".format(err))
return
if any(x.endswith(".yaml") for x in file_list):
msg = (
"The mirror in '{}' contains specs in the deprecated YAML format.\n\n\tSupport for "
"this format will be removed in v0.20, please regenerate the build cache with a "
"recent Spack\n"
).format(cache_prefix)
warnings.warn(msg)
tty.debug("Retrieving spec descriptor files from {0} to build index".format(cache_prefix))
tmpdir = tempfile.mkdtemp()
@@ -895,7 +1121,7 @@ def generate_package_index(cache_prefix):
)
try:
_read_specs_and_push_index(file_list, cache_prefix, db, db_root_dir)
_read_specs_and_push_index(file_list, read_fn, cache_prefix, db, db_root_dir, concurrency)
except Exception as err:
msg = "Encountered problem pushing package index to {0}: {1}".format(cache_prefix, err)
tty.warn(msg)
@@ -1071,7 +1297,11 @@ def _build_tarball(
tty.die(e)
# create gzip compressed tarball of the install prefix
with closing(tarfile.open(tarfile_path, "w:gz")) as tar:
# On an AMD Ryzen 3700X with an SSD, we measured the following compression speeds:
# compresslevel=6 gzip default: llvm takes 4mins, roughly 2.1GB
# compresslevel=9 python default: llvm takes 12mins, roughly 2.1GB
# So we follow gzip.
with closing(tarfile.open(tarfile_path, "w:gz", compresslevel=6)) as tar:
tar.add(name="%s" % workdir, arcname="%s" % os.path.basename(spec.prefix))
# remove copy of install directory
shutil.rmtree(workdir)
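A minimal standalone sketch of that choice, with hypothetical paths; `tarfile.open` accepts `compresslevel` for gzip streams:

```python
import tarfile

# Per the measurement above, gzip's default level 6 is roughly 3x faster
# than Python's default level 9 for about the same output size.
with tarfile.open("/tmp/pkg.tar.gz", "w:gz", compresslevel=6) as tar:
    tar.add(name="/tmp/workdir", arcname="zlib-1.2.12")   # hypothetical
```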
@@ -1346,6 +1576,13 @@ def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
# the remaining mirrors, looking for one we can use.
tarball_stage = try_fetch(spackfile_url)
if tarball_stage:
if ext == "yaml":
msg = (
"Reading {} from mirror.\n\n\tThe YAML format for buildcaches is "
"deprecated and will be removed in v0.20\n"
).format(spackfile_url)
warnings.warn(msg)
return {
"tarball_stage": tarball_stage,
"specfile_stage": local_specfile_stage,
@@ -1397,7 +1634,7 @@ def make_package_relative(workdir, spec, allow_root):
if "elf" in platform.binary_formats:
relocate.make_elf_binaries_relative(cur_path_names, orig_path_names, old_layout_root)
relocate.raise_if_not_relocatable(cur_path_names, allow_root)
allow_root or relocate.ensure_binaries_are_relocatable(cur_path_names)
orig_path_names = list()
cur_path_names = list()
for linkname in buildinfo.get("relocate_links", []):
@@ -1415,7 +1652,39 @@ def check_package_relocatable(workdir, spec, allow_root):
cur_path_names = list()
for filename in buildinfo["relocate_binaries"]:
cur_path_names.append(os.path.join(workdir, filename))
relocate.raise_if_not_relocatable(cur_path_names, allow_root)
allow_root or relocate.ensure_binaries_are_relocatable(cur_path_names)
def dedupe_hardlinks_if_necessary(root, buildinfo):
"""Updates a buildinfo dict for old archives that did
not dedupe hardlinks. De-duping hardlinks is necessary
when relocating files in parallel and in-place. This
means we must preserve inodes when relocating."""
# New archives don't need this.
if buildinfo.get("hardlinks_deduped", False):
return
# Clearly we can assume that an inode is either in the
# textfile or binary group, but let's just stick to
# a single set of visited nodes.
visited = set()
# Note: we do *not* dedupe hardlinked symlinks, since
# it seems difficult or even impossible to relink
# symlinks while preserving inode.
for key in ("relocate_textfiles", "relocate_binaries"):
if key not in buildinfo:
continue
new_list = []
for rel_path in buildinfo[key]:
stat_result = os.lstat(os.path.join(root, rel_path))
identifier = (stat_result.st_dev, stat_result.st_ino)
if identifier in visited:
continue
visited.add(identifier)
new_list.append(rel_path)
buildinfo[key] = new_list
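The loop above keeps one representative path per `(st_dev, st_ino)` pair; a condensed, standalone version of the same idea:

```python
import os

def dedupe_paths_by_inode(root, rel_paths):
    # keep the first path seen for each (device, inode) identity
    visited, unique = set(), []
    for rel_path in rel_paths:
        st = os.lstat(os.path.join(root, rel_path))
        ident = (st.st_dev, st.st_ino)
        if ident not in visited:
            visited.add(ident)
            unique.append(rel_path)
    return unique
```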
def relocate_package(spec, allow_root):
@@ -1451,7 +1720,7 @@ def relocate_package(spec, allow_root):
hash_to_prefix = dict()
hash_to_prefix[spec.format("{hash}")] = str(spec.package.prefix)
new_deps = spack.build_environment.get_rpath_deps(spec.package)
for d in new_deps:
for d in new_deps + spec.dependencies(deptype="run"):
hash_to_prefix[d.format("{hash}")] = str(d.prefix)
# Spurious replacements (e.g. sbang) will cause issues with binaries
# For example, the new sbang can be longer than the old one.
@@ -1463,13 +1732,19 @@ def relocate_package(spec, allow_root):
install_path = spack.hooks.sbang.sbang_install_path()
prefix_to_prefix_text[old_sbang_install_path] = install_path
# First match specific prefix paths. Possibly the *local* install prefix
# of some dependency is in an upstream, so we cannot assume the original
# spack store root can be mapped uniformly to the new spack store root.
for orig_prefix, hash in prefix_to_hash.items():
prefix_to_prefix_text[orig_prefix] = hash_to_prefix.get(hash, None)
prefix_to_prefix_bin[orig_prefix] = hash_to_prefix.get(hash, None)
# Only then add the generic fallback of install prefix -> install prefix.
prefix_to_prefix_text[old_prefix] = new_prefix
prefix_to_prefix_bin[old_prefix] = new_prefix
prefix_to_prefix_text[old_layout_root] = new_layout_root
prefix_to_prefix_bin[old_layout_root] = new_layout_root
for orig_prefix, hash in prefix_to_hash.items():
prefix_to_prefix_text[orig_prefix] = hash_to_prefix.get(hash, None)
prefix_to_prefix_bin[orig_prefix] = hash_to_prefix.get(hash, None)
# This is vestigial code for the *old* location of sbang. Previously,
# sbang was a bash script, and it lived in the spack prefix. It is
# now a POSIX script that lives in the install prefix. Old packages
@@ -1480,6 +1755,9 @@ def relocate_package(spec, allow_root):
tty.debug("Relocating package from", "%s to %s." % (old_layout_root, new_layout_root))
# Old archives may have repeated hardlinks.
dedupe_hardlinks_if_necessary(workdir, buildinfo)
def is_backup_file(file):
return file.endswith("~")
@@ -1509,7 +1787,11 @@ def is_backup_file(file):
old_prefix,
new_prefix,
)
if "elf" in platform.binary_formats:
elif "elf" in platform.binary_formats and not rel:
# The new ELF dynamic section relocation logic only handles absolute to
# absolute relocation.
relocate.new_relocate_elf_binaries(files_to_relocate, prefix_to_prefix_bin)
elif "elf" in platform.binary_formats and rel:
relocate.relocate_elf_binaries(
files_to_relocate,
old_layout_root,
@@ -1519,35 +1801,23 @@ def is_backup_file(file):
old_prefix,
new_prefix,
)
# Relocate links to the new install prefix
links = [link for link in buildinfo.get("relocate_links", [])]
relocate.relocate_links(links, old_layout_root, old_prefix, new_prefix)
# Relocate links to the new install prefix
links = [os.path.join(workdir, f) for f in buildinfo.get("relocate_links", [])]
relocate.relocate_links(links, prefix_to_prefix_bin)
# For all buildcaches
# relocate the install prefixes in text files including dependencies
relocate.relocate_text(text_names, prefix_to_prefix_text)
relocate.unsafe_relocate_text(text_names, prefix_to_prefix_text)
paths_to_relocate = [old_prefix, old_layout_root]
paths_to_relocate.extend(prefix_to_hash.keys())
files_to_relocate = list(
filter(
lambda pathname: not relocate.file_is_relocatable(
pathname, paths_to_relocate=paths_to_relocate
),
map(
lambda filename: os.path.join(workdir, filename),
buildinfo["relocate_binaries"],
),
)
)
# relocate the install prefixes in binary files including dependencies
relocate.relocate_text_bin(files_to_relocate, prefix_to_prefix_bin)
relocate.unsafe_relocate_text_bin(files_to_relocate, prefix_to_prefix_bin)
# If we are installing back to the same location
# relocate the sbang location if the spack directory changed
else:
if old_spack_prefix != new_spack_prefix:
relocate.relocate_text(text_names, prefix_to_prefix_text)
relocate.unsafe_relocate_text(text_names, prefix_to_prefix_text)
def _extract_inner_tarball(spec, filename, extract_to, unsigned, remote_checksum):
@@ -1878,8 +2148,8 @@ def get_mirrors_for_spec(spec=None, mirrors_to_check=None, index_only=False):
results = binary_index.find_built_spec(spec, mirrors_to_check=mirrors_to_check)
# Maybe we just didn't have the latest information from the mirror, so
# try to fetch directly, unless we are only considering the indices.
# The index may be out-of-date. If we aren't only considering indices, try
# to fetch directly since we know where the file should be.
if not results and not index_only:
results = try_direct_fetch(spec, mirrors=mirrors_to_check)
# We found a spec by the direct fetch approach, we might as well


@@ -17,8 +17,6 @@
import sysconfig
import uuid
import six
import archspec.cpu
import llnl.util.filesystem as fs
@@ -78,7 +76,7 @@ def _try_import_from_store(module, query_spec, query_info=None):
command found and the concrete spec providing it
"""
# If it is a string assume it's one of the root specs by this module
if isinstance(query_spec, six.string_types):
if isinstance(query_spec, str):
# We have to run as part of this python interpreter
query_spec += " ^" + spec_for_current_python()
@@ -91,6 +89,7 @@ def _try_import_from_store(module, query_spec, query_info=None):
os.path.join(candidate_spec.prefix, pkg.platlib),
] # type: list[str]
path_before = list(sys.path)
# NOTE: try module_paths first and last, last allows an existing version in path
# to be picked up and used, possibly depending on something in the store, first
# allows the bootstrap version to work when an incompatible version is in
@@ -468,21 +467,14 @@ def source_is_enabled_or_raise(conf):
def spec_for_current_python():
"""For bootstrapping purposes we are just interested in the Python
minor version (all patches are ABI compatible with the same minor)
and on whether ucs4 support has been enabled for Python 2.7
minor version (all patches are ABI compatible with the same minor).
See:
https://www.python.org/dev/peps/pep-0513/
https://stackoverflow.com/a/35801395/771663
"""
version_str = ".".join(str(x) for x in sys.version_info[:2])
variant_str = ""
if sys.version_info[0] == 2 and sys.version_info[1] == 7:
unicode_size = sysconfig.get_config_var("Py_UNICODE_SIZE")
variant_str = "+ucs4" if unicode_size == 4 else "~ucs4"
spec_fmt = "python@{0} {1}"
return spec_fmt.format(version_str, variant_str)
return "python@{0}".format(version_str)
@contextlib.contextmanager
@@ -667,6 +659,11 @@ def _add_externals_if_missing():
_REF_COUNT = 0
def is_bootstrapping():
global _REF_COUNT
return _REF_COUNT > 0
@contextlib.contextmanager
def ensure_bootstrap_configuration():
# The context manager is reference counted to ensure we don't swap multiple
@@ -860,9 +857,7 @@ def ensure_mypy_in_path_or_raise():
def black_root_spec():
# black v21 is the last version to support Python 2.7.
# Upgrade when we no longer support Python 2.7
return _root_spec("py-black@:21")
return _root_spec("py-black")
def ensure_black_in_path_or_raise():
@@ -919,7 +914,7 @@ def _missing(name, purpose, system_only=True):
def _required_system_executable(exes, msg):
"""Search for an executable is the system path only."""
if isinstance(exes, six.string_types):
if isinstance(exes, str):
exes = (exes,)
if spack.util.executable.which_string(*exes):
return True, None
@@ -937,7 +932,7 @@ def _required_python_module(module, query_spec, msg):
def _required_executable(exes, query_spec, msg):
"""Search for an executable in the system path or in the bootstrap store."""
if isinstance(exes, six.string_types):
if isinstance(exes, str):
exes = (exes,)
if spack.util.executable.which_string(*exes) or _executables_in_store(exes, query_spec):
return True, None


@@ -33,6 +33,7 @@
calls you can make from within the install() function.
"""
import inspect
import io
import multiprocessing
import os
import re
@@ -41,8 +42,6 @@
import traceback
import types
from six import StringIO
import llnl.util.tty as tty
from llnl.util.filesystem import install, install_tree, mkdirp
from llnl.util.lang import dedupe
@@ -52,6 +51,7 @@
import spack.build_systems.cmake
import spack.build_systems.meson
import spack.builder
import spack.config
import spack.install_test
import spack.main
@@ -120,18 +120,18 @@
stat_suffix = "lib" if sys.platform == "win32" else "a"
def should_set_parallel_jobs(jobserver_support=False):
"""Returns true in general, except when:
- The env variable SPACK_NO_PARALLEL_MAKE=1 is set
- jobserver_support is enabled, and a jobserver was found.
"""
if (
jobserver_support
and "MAKEFLAGS" in os.environ
and "--jobserver" in os.environ["MAKEFLAGS"]
):
return False
return not env_flag(SPACK_NO_PARALLEL_MAKE)
def jobserver_enabled():
"""Returns true if a posix jobserver (make) is detected."""
return "MAKEFLAGS" in os.environ and "--jobserver" in os.environ["MAKEFLAGS"]
def get_effective_jobs(jobs, parallel=True, supports_jobserver=False):
"""Return the number of jobs, or None if supports_jobserver and a jobserver is detected."""
if not parallel or jobs <= 1 or env_flag(SPACK_NO_PARALLEL_MAKE):
return 1
if supports_jobserver and jobserver_enabled():
return None
return jobs
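A condensed, standalone stand-in for `get_effective_jobs` (minus the `SPACK_NO_PARALLEL_MAKE` handling) showing the jobserver behavior; the `MAKEFLAGS` value is hypothetical:

```python
import os

def effective_jobs(jobs, parallel=True, supports_jobserver=False):
    if not parallel or jobs <= 1:
        return 1
    if supports_jobserver and "--jobserver" in os.environ.get("MAKEFLAGS", ""):
        return None          # omit -j, let make's jobserver coordinate
    return jobs

os.environ["MAKEFLAGS"] = "-j8 --jobserver-auth=3,4"
assert effective_jobs(16, supports_jobserver=True) is None
assert effective_jobs(16, supports_jobserver=False) == 16
```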
class MakeExecutable(Executable):
@@ -146,26 +146,33 @@ class MakeExecutable(Executable):
"""
def __init__(self, name, jobs, **kwargs):
supports_jobserver = kwargs.pop("supports_jobserver", True)
super(MakeExecutable, self).__init__(name, **kwargs)
self.supports_jobserver = supports_jobserver
self.jobs = jobs
def __call__(self, *args, **kwargs):
"""parallel, and jobs_env from kwargs are swallowed and used here;
remaining arguments are passed through to the superclass.
"""
# TODO: figure out how to check if we are using a jobserver-supporting ninja,
# the two split ninja packages make this very difficult right now
parallel = should_set_parallel_jobs(jobserver_support=True) and kwargs.pop(
"parallel", self.jobs > 1
)
parallel = kwargs.pop("parallel", True)
jobs_env = kwargs.pop("jobs_env", None)
jobs_env_supports_jobserver = kwargs.pop("jobs_env_supports_jobserver", False)
if parallel:
args = ("-j{0}".format(self.jobs),) + args
jobs_env = kwargs.pop("jobs_env", None)
if jobs_env:
# Caller wants us to set an environment variable to
# control the parallelism.
kwargs["extra_env"] = {jobs_env: str(self.jobs)}
jobs = get_effective_jobs(
self.jobs, parallel=parallel, supports_jobserver=self.supports_jobserver
)
if jobs is not None:
args = ("-j{0}".format(jobs),) + args
if jobs_env:
# Caller wants us to set an environment variable to
# control the parallelism.
jobs_env_jobs = get_effective_jobs(
self.jobs, parallel=parallel, supports_jobserver=jobs_env_supports_jobserver
)
if jobs_env_jobs is not None:
kwargs["extra_env"] = {jobs_env: str(jobs_env_jobs)}
return super(MakeExecutable, self).__call__(*args, **kwargs)
@@ -277,6 +284,23 @@ def clean_environment():
return env
def _add_werror_handling(keep_werror, env):
keep_flags = set()
# list of (flag, replacement) pairs
replace_flags = [] # type: List[Tuple[str,str]]
if keep_werror == "all":
keep_flags.add("-Werror*")
else:
if keep_werror == "specific":
keep_flags.add("-Werror-*")
keep_flags.add("-Werror=*")
# This extra case is to handle -Werror-implicit-function-declaration
replace_flags.append(("-Werror-", "-Wno-error="))
replace_flags.append(("-Werror", "-Wno-error"))
env.set("SPACK_COMPILER_FLAGS_KEEP", "|".join(keep_flags))
env.set("SPACK_COMPILER_FLAGS_REPLACE", " ".join(["|".join(item) for item in replace_flags]))
def set_compiler_environment_variables(pkg, env):
assert pkg.spec.concrete
compiler = pkg.compiler
@@ -316,15 +340,26 @@ def set_compiler_environment_variables(pkg, env):
env.set("SPACK_LINKER_ARG", compiler.linker_arg)
# Check whether we want to force RPATH or RUNPATH
if spack.config.get("config:shared_linking") == "rpath":
if spack.config.get("config:shared_linking:type") == "rpath":
env.set("SPACK_DTAGS_TO_STRIP", compiler.enable_new_dtags)
env.set("SPACK_DTAGS_TO_ADD", compiler.disable_new_dtags)
else:
env.set("SPACK_DTAGS_TO_STRIP", compiler.disable_new_dtags)
env.set("SPACK_DTAGS_TO_ADD", compiler.enable_new_dtags)
if pkg.keep_werror is not None:
keep_werror = pkg.keep_werror
else:
keep_werror = spack.config.get("config:flags:keep_werror")
_add_werror_handling(keep_werror, env)
# Set the target parameters that the compiler will add
isa_arg = spec.architecture.target.optimization_flags(compiler)
# Don't set on cray platform because the targeting module handles this
if spec.satisfies("platform=cray"):
isa_arg = ""
else:
isa_arg = spec.architecture.target.optimization_flags(compiler)
env.set("SPACK_TARGET_ARGS", isa_arg)
# Trap spack-tracked compiler flags as appropriate.
@@ -341,11 +376,9 @@ def set_compiler_environment_variables(pkg, env):
if isinstance(pkg.flag_handler, types.FunctionType):
handler = pkg.flag_handler
else:
if sys.version_info >= (3, 0):
handler = pkg.flag_handler.__func__
else:
handler = pkg.flag_handler.im_func
injf, envf, bsf = handler(pkg, flag, spec.compiler_flags[flag])
handler = pkg.flag_handler.__func__
injf, envf, bsf = handler(pkg, flag, spec.compiler_flags[flag][:])
inject_flags[flag] = injf or []
env_flags[flag] = envf or []
build_system_flags[flag] = bsf or []
@@ -530,14 +563,18 @@ def determine_number_of_jobs(
return min(max_cpus, config_default)
def _set_variables_for_single_module(pkg, module):
"""Helper function to set module variables for single module."""
def set_module_variables_for_package(pkg):
"""Populate the Python module of a package with some useful global names.
This makes things easier for package writers.
"""
# Put a marker on this module so that it won't execute the body of this
# function again, since it is not needed
marker = "_set_run_already_called"
if getattr(module, marker, False):
if getattr(pkg.module, marker, False):
return
module = ModuleChangePropagator(pkg)
jobs = determine_number_of_jobs(parallel=pkg.parallel)
m = module
@@ -546,7 +583,7 @@ def _set_variables_for_single_module(pkg, module):
# TODO: make these build deps that can be installed if not found.
m.make = MakeExecutable("make", jobs)
m.gmake = MakeExecutable("gmake", jobs)
m.ninja = MakeExecutable("ninja", jobs)
m.ninja = MakeExecutable("ninja", jobs, supports_jobserver=False)
# easy shortcut to os.environ
m.env = os.environ
@@ -557,10 +594,11 @@ def _set_variables_for_single_module(pkg, module):
if sys.platform == "win32":
m.nmake = Executable("nmake")
m.msbuild = Executable("msbuild")
# Standard CMake arguments
m.std_cmake_args = spack.build_systems.cmake.CMakePackage._std_args(pkg)
m.std_meson_args = spack.build_systems.meson.MesonPackage._std_args(pkg)
m.std_pip_args = spack.build_systems.python.PythonPackage._std_args(pkg)
m.std_cmake_args = spack.build_systems.cmake.CMakeBuilder.std_args(pkg)
m.std_meson_args = spack.build_systems.meson.MesonBuilder.std_args(pkg)
m.std_pip_args = spack.build_systems.python.PythonPipBuilder.std_args(pkg)
# Put spack compiler paths in module scope.
link_dir = spack.paths.build_env_path
@@ -604,20 +642,7 @@ def static_to_shared_library(static_lib, shared_lib=None, **kwargs):
# Put a marker on this module so that it won't execute the body of this
# function again, since it is not needed
setattr(m, marker, True)
def set_module_variables_for_package(pkg):
"""Populate the module scope of install() with some useful functions.
This makes things easier for package writers.
"""
# If a user makes their own package repo, e.g.
# spack.pkg.mystuff.libelf.Libelf, and they inherit from an existing class
# like spack.pkg.original.libelf.Libelf, then set the module variables
# for both classes so the parent class can still use them if it gets
# called. parent_class_modules includes pkg.module.
modules = parent_class_modules(pkg.__class__)
for mod in modules:
_set_variables_for_single_module(pkg, mod)
module.propagate_changes_to_mro()
def _static_to_shared_library(arch, compiler, static_lib, shared_lib=None, **kwargs):
@@ -727,57 +752,6 @@ def get_rpaths(pkg):
return list(dedupe(filter_system_paths(rpaths)))
def get_std_cmake_args(pkg):
"""List of standard arguments used if a package is a CMakePackage.
Args:
pkg (spack.package_base.PackageBase): package under consideration
Returns:
list: standard arguments that would be used if this
package were a CMakePackage instance
"""
return spack.build_systems.cmake.CMakePackage._std_args(pkg)
def get_std_meson_args(pkg):
"""List of standard arguments used if a package is a MesonPackage.
Args:
pkg (spack.package_base.PackageBase): package under consideration
Returns:
list: standard arguments that would be used if this
package were a MesonPackage instance
"""
def parent_class_modules(cls):
"""
Get list of superclass modules that descend from spack.package_base.PackageBase
Includes cls.__module__
"""
if not issubclass(cls, spack.package_base.PackageBase) or issubclass(
spack.package_base.PackageBase, cls
):
return []
result = []
module = sys.modules.get(cls.__module__)
if module:
result = [module]
for c in cls.__bases__:
result.extend(parent_class_modules(c))
return result
def load_external_modules(pkg):
"""Traverse a package's spec DAG and load any external modules.
@@ -819,7 +793,8 @@ def setup_package(pkg, dirty, context="build"):
platform.setup_platform_environment(pkg, env_mods)
if context == "build":
pkg.setup_build_environment(env_mods)
builder = spack.builder.create(pkg)
builder.setup_build_environment(env_mods)
if (not dirty) and (not env_mods.is_unset("CPATH")):
tty.debug(
@@ -997,25 +972,13 @@ def add_modifications_for_dep(dep):
if set_package_py_globals:
set_module_variables_for_package(dpkg)
# Allow dependencies to modify the module
# Get list of modules that may need updating
modules = []
for cls in inspect.getmro(type(spec.package)):
module = cls.module
if module == spack.package_base:
break
modules.append(module)
# Execute changes as if on a single module
# copy dict to ensure prior changes are available
changes = spack.util.pattern.Bunch()
dpkg.setup_dependent_package(changes, spec)
for module in modules:
module.__dict__.update(changes.__dict__)
current_module = ModuleChangePropagator(spec.package)
dpkg.setup_dependent_package(current_module, spec)
current_module.propagate_changes_to_mro()
if context == "build":
dpkg.setup_dependent_build_environment(env, spec)
builder = spack.builder.create(dpkg)
builder.setup_dependent_build_environment(env, spec)
else:
dpkg.setup_dependent_run_environment(env, spec)
@@ -1117,8 +1080,20 @@ def _setup_pkg_and_run(
pkg.test_suite.stage, spack.install_test.TestSuite.test_log_name(pkg.spec)
)
error_msg = str(exc)
if isinstance(exc, (spack.multimethod.NoSuchMethodError, AttributeError)):
error_msg = (
"The '{}' package cannot find an attribute while trying to build "
"from sources. This might be due to a change in Spack's package format "
"to support multiple build-systems for a single package. You can fix this "
"by updating the build recipe, and you can also report the issue as a bug. "
"More information at https://spack.readthedocs.io/en/latest/packaging_guide.html#installation-procedure"
).format(pkg.name)
error_msg = colorize("@*R{{{}}}".format(error_msg))
error_msg = "{}\n\n{}".format(str(exc), error_msg)
# make a pickleable exception to send to parent.
msg = "%s: %s" % (exc_type.__name__, str(exc))
msg = "%s: %s" % (exc_type.__name__, error_msg)
ce = ChildError(
msg,
@@ -1277,6 +1252,8 @@ def make_stack(tb, stack=None):
obj = frame.f_locals["self"]
if isinstance(obj, spack.package_base.PackageBase):
break
else:
return None
# We found obj, the Package implementation we care about.
# Point out the location in the install method where we failed.
@@ -1358,7 +1335,7 @@ def __init__(self, msg, module, classname, traceback_string, log_name, log_type,
@property
def long_message(self):
out = StringIO()
out = io.StringIO()
out.write(self._long_message if self._long_message else "")
have_log = self.log_name and os.path.exists(self.log_name)
@@ -1443,3 +1420,51 @@ def write_log_summary(out, log_type, log, last=None):
# If no errors are found but warnings are, display warnings
out.write("\n%s found in %s log:\n" % (plural(nwar, "warning"), log_type))
out.write(make_log_context(warnings))
class ModuleChangePropagator:
"""Wrapper class to accept changes to a package.py Python module, and propagate them in the
MRO of the package.
It is mainly used as a substitute of the ``package.py`` module, when calling the
"setup_dependent_package" function during build environment setup.
"""
_PROTECTED_NAMES = ("package", "current_module", "modules_in_mro", "_set_attributes")
def __init__(self, package):
self._set_self_attributes("package", package)
self._set_self_attributes("current_module", package.module)
#: Modules for the classes in the MRO up to PackageBase
modules_in_mro = []
for cls in inspect.getmro(type(package)):
module = cls.module
if module == self.current_module:
continue
if module == spack.package_base:
break
modules_in_mro.append(module)
self._set_self_attributes("modules_in_mro", modules_in_mro)
self._set_self_attributes("_set_attributes", {})
def _set_self_attributes(self, key, value):
super().__setattr__(key, value)
def __getattr__(self, item):
return getattr(self.current_module, item)
def __setattr__(self, key, value):
if key in ModuleChangePropagator._PROTECTED_NAMES:
msg = f'Cannot set attribute "{key}" in ModuleChangePropagator'
raise AttributeError(msg)
setattr(self.current_module, key, value)
self._set_attributes[key] = value
def propagate_changes_to_mro(self):
for module_in_mro in self.modules_in_mro:
module_in_mro.__dict__.update(self._set_attributes)
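To make the propagation pattern concrete, a minimal standalone model using synthetic modules in place of the package's MRO modules (everything here is illustrative, not Spack API):

```python
import types

# Two throwaway modules standing in for a package module and a base
# class module further up the MRO.
base = types.ModuleType("base")
derived = types.ModuleType("derived")

class Propagator:
    """Record attribute writes on a current module, then replay them."""

    def __init__(self, current, others):
        object.__setattr__(self, "current", current)
        object.__setattr__(self, "others", others)
        object.__setattr__(self, "changes", {})

    def __setattr__(self, key, value):
        setattr(self.current, key, value)
        self.changes[key] = value

    def propagate(self):
        for mod in self.others:
            mod.__dict__.update(self.changes)

proxy = Propagator(derived, [base])
proxy.make = "fake-make-callable"
proxy.propagate()
assert base.make == derived.make == "fake-make-callable"
```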


@@ -0,0 +1,122 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import llnl.util.lang
import spack.builder
import spack.installer
import spack.relocate
import spack.store
def sanity_check_prefix(builder):
"""Check that specific directories and files are created after installation.
The files to be checked are in the ``sanity_check_is_file`` attribute of the
package object, while the directories are in the ``sanity_check_is_dir``.
Args:
builder (spack.builder.Builder): builder that installed the package
"""
pkg = builder.pkg
def check_paths(path_list, filetype, predicate):
if isinstance(path_list, str):
path_list = [path_list]
for path in path_list:
abs_path = os.path.join(pkg.prefix, path)
if not predicate(abs_path):
msg = "Install failed for {0}. No such {1} in prefix: {2}"
msg = msg.format(pkg.name, filetype, path)
raise spack.installer.InstallError(msg)
check_paths(pkg.sanity_check_is_file, "file", os.path.isfile)
check_paths(pkg.sanity_check_is_dir, "directory", os.path.isdir)
ignore_file = llnl.util.lang.match_predicate(spack.store.layout.hidden_file_regexes)
if all(map(ignore_file, os.listdir(pkg.prefix))):
msg = "Install failed for {0}. Nothing was installed!"
raise spack.installer.InstallError(msg.format(pkg.name))
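A hedged recipe fragment showing the two attributes this check reads; the package name and paths are hypothetical, and the usual recipe preamble is assumed:

```python
from spack.package import *  # usual recipe preamble

class Libfoo(AutotoolsPackage):  # hypothetical package
    """Illustration only: paths are checked relative to the prefix."""
    homepage = "https://example.com"
    url = "https://example.com/foo-1.0.tar.gz"
    version("1.0", sha256="0" * 64)

    sanity_check_is_file = ["include/foo.h"]
    sanity_check_is_dir = ["lib", "share/man"]
```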
def apply_macos_rpath_fixups(builder):
"""On Darwin, make installed libraries more easily relocatable.
Some build systems (hand-rolled, autotools, makefiles) can set their own
rpaths that are duplicated by Spack's compiler wrapper. This fixup
interrogates, and post-processes if necessary, all libraries installed
by the code.
It should be added as a @run_after to packaging systems (or individual
packages) that do not install relocatable libraries by default.
Args:
builder (spack.builder.Builder): builder that installed the package
"""
spack.relocate.fixup_macos_rpaths(builder.spec)
def ensure_build_dependencies_or_raise(spec, dependencies, error_msg):
"""Ensure that some build dependencies are present in the concrete spec.
If not, raise a RuntimeError with a helpful error message.
Args:
spec (spack.spec.Spec): concrete spec to be checked.
dependencies (list of spack.spec.Spec): list of abstract specs to be satisfied
error_msg (str): brief error message to be prepended to a longer description
Raises:
RuntimeError: when the required build dependencies are not found
"""
assert spec.concrete, "Can ensure build dependencies only on concrete specs"
build_deps = [d.name for d in spec.dependencies(deptype="build")]
missing_deps = [x for x in dependencies if x not in build_deps]
if not missing_deps:
return
# Raise an exception on missing deps.
msg = (
"{0}: missing dependencies: {1}.\n\nPlease add "
"the following lines to the package:\n\n".format(error_msg, ", ".join(missing_deps))
)
for dep in missing_deps:
msg += ' depends_on("{0}", type="build", when="@{1} {2}")\n'.format(
dep, spec.version, "build_system=autotools"
)
msg += '\nUpdate the version (when="@{0}") as needed.'.format(spec.version)
raise RuntimeError(msg)
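The raised message suggests recipe lines of the following shape; the version and build system below are hypothetical placeholders:

```python
# Suggested fix emitted by the error above (values are examples only):
depends_on("gnuconfig", type="build", when="@1.2.3 build_system=autotools")
```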
def execute_build_time_tests(builder):
"""Execute the build-time tests prescribed by builder.
Args:
builder (Builder): builder prescribing the test callbacks. The name of the callbacks is
stored as a list of strings in the ``build_time_test_callbacks`` attribute.
"""
builder.pkg.run_test_callbacks(builder, builder.build_time_test_callbacks, "build")
def execute_install_time_tests(builder):
"""Execute the install-time tests prescribed by builder.
Args:
builder (Builder): builder prescribing the test callbacks. The name of the callbacks is
stored as a list of strings in the ``install_time_test_callbacks`` attribute.
"""
builder.pkg.run_test_callbacks(builder, builder.install_time_test_callbacks, "install")
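Both helpers resolve the callbacks by name on the builder or package. A hedged fragment of how a recipe or builder typically opts in (the callback names are the conventional ones, shown here as an assumption):

```python
# Hypothetical fragment: callbacks are named as strings and executed
# after the corresponding phase when --test is requested.
build_time_test_callbacks = ["check"]
install_time_test_callbacks = ["installcheck"]
```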
class BaseBuilder(spack.builder.Builder):
"""Base class for builders to register common checks"""
# Check that self.prefix is there after installation
spack.builder.run_after("install")(sanity_check_prefix)


@@ -2,18 +2,36 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import llnl.util.filesystem as fs
# Why doesn't this work for me?
# from spack import *
from llnl.util.filesystem import filter_file
import spack.directives
import spack.package_base
import spack.util.executable
from spack.build_systems.autotools import AutotoolsPackage
from spack.directives import extends
from spack.package_base import ExtensionError
from spack.util.executable import which
from .autotools import AutotoolsBuilder, AutotoolsPackage
class AspellBuilder(AutotoolsBuilder):
"""The Aspell builder is close enough to an autotools builder to allow
specializing the builder class, so to use variables that are specific
to the Aspell extensions.
"""
def configure(self, pkg, spec, prefix):
aspell = spec["aspell"].prefix.bin.aspell
prezip = spec["aspell"].prefix.bin.prezip
destdir = prefix
sh = spack.util.executable.which("sh")
sh(
"./configure",
"--vars",
"ASPELL={0}".format(aspell),
"PREZIP={0}".format(prezip),
"DESTDIR={0}".format(destdir),
)
#
# Aspell dictionaries install their bits into their prefix.lib
# and when activated they'll get symlinked into the appropriate aspell's
# dict dir (see aspell's {de,}activate methods).
@@ -23,12 +41,17 @@
class AspellDictPackage(AutotoolsPackage):
"""Specialized class for building aspell dictionairies."""
extends("aspell")
spack.directives.extends("aspell", when="build_system=autotools")
#: Override the default autotools builder
AutotoolsBuilder = AspellBuilder
def view_destination(self, view):
aspell_spec = self.spec["aspell"]
if view.get_projection_for_spec(aspell_spec) != aspell_spec.prefix:
raise ExtensionError("aspell does not support non-global extensions")
raise spack.package_base.ExtensionError(
"aspell does not support non-global extensions"
)
aspell = aspell_spec.command
return aspell("dump", "config", "dict-dir", output=str).strip()
@@ -36,19 +59,5 @@ def view_source(self):
return self.prefix.lib
def patch(self):
filter_file(r"^dictdir=.*$", "dictdir=/lib", "configure")
filter_file(r"^datadir=.*$", "datadir=/lib", "configure")
def configure(self, spec, prefix):
aspell = spec["aspell"].prefix.bin.aspell
prezip = spec["aspell"].prefix.bin.prezip
destdir = prefix
sh = which("sh")
sh(
"./configure",
"--vars",
"ASPELL={0}".format(aspell),
"PREZIP={0}".format(prezip),
"DESTDIR={0}".format(destdir),
)
fs.filter_file(r"^dictdir=.*$", "dictdir=/lib", "configure")
fs.filter_file(r"^datadir=.*$", "datadir=/lib", "configure")


@@ -6,87 +6,140 @@
import os
import os.path
import stat
from subprocess import PIPE, check_call
import subprocess
from typing import List # novm
import llnl.util.filesystem as fs
import llnl.util.tty as tty
from llnl.util.filesystem import force_remove, working_dir
from spack.build_environment import InstallError
from spack.directives import conflicts, depends_on
import spack.build_environment
import spack.builder
import spack.package_base
from spack.directives import build_system, conflicts, depends_on
from spack.multimethod import when
from spack.operating_systems.mac_os import macos_version
from spack.package_base import PackageBase, run_after, run_before
from spack.util.executable import Executable
from spack.version import Version
from ._checks import (
BaseBuilder,
apply_macos_rpath_fixups,
ensure_build_dependencies_or_raise,
execute_build_time_tests,
execute_install_time_tests,
)
class AutotoolsPackage(PackageBase):
"""Specialized class for packages built using GNU Autotools.
This class provides four phases that can be overridden:
class AutotoolsPackage(spack.package_base.PackageBase):
"""Specialized class for packages built using GNU Autotools."""
1. :py:meth:`~.AutotoolsPackage.autoreconf`
2. :py:meth:`~.AutotoolsPackage.configure`
3. :py:meth:`~.AutotoolsPackage.build`
4. :py:meth:`~.AutotoolsPackage.install`
#: This attribute is used in UI queries that need to know the build
#: system base class
build_system_class = "AutotoolsPackage"
#: Legacy buildsystem attribute used to deserialize and install old specs
legacy_buildsystem = "autotools"
build_system("autotools")
with when("build_system=autotools"):
depends_on("gnuconfig", type="build", when="target=ppc64le:")
depends_on("gnuconfig", type="build", when="target=aarch64:")
depends_on("gnuconfig", type="build", when="target=riscv64:")
conflicts("platform=windows")
def flags_to_build_system_args(self, flags):
"""Produces a list of all command line arguments to pass specified
compiler flags to configure."""
# Has to be dynamic attribute due to caching.
setattr(self, "configure_flag_args", [])
for flag, values in flags.items():
if values:
values_str = "{0}={1}".format(flag.upper(), " ".join(values))
self.configure_flag_args.append(values_str)
# Spack's fflags are meant for both F77 and FC, therefore we
# additionally set FCFLAGS if required.
values = flags.get("fflags", None)
if values:
values_str = "FCFLAGS={0}".format(" ".join(values))
self.configure_flag_args.append(values_str)
# Legacy methods (used by too many packages to change them,
# need to forward to the builder)
def enable_or_disable(self, *args, **kwargs):
return self.builder.enable_or_disable(*args, **kwargs)
def with_or_without(self, *args, **kwargs):
return self.builder.with_or_without(*args, **kwargs)
@spack.builder.builder("autotools")
class AutotoolsBuilder(BaseBuilder):
"""The autotools builder encodes the default way of installing software built
with autotools. It has four phases that can be overridden, if need be:
1. :py:meth:`~.AutotoolsBuilder.autoreconf`
2. :py:meth:`~.AutotoolsBuilder.configure`
3. :py:meth:`~.AutotoolsBuilder.build`
4. :py:meth:`~.AutotoolsBuilder.install`
They all have sensible defaults and for many packages the only thing necessary
is to override the helper method
:meth:`~spack.build_systems.autotools.AutotoolsBuilder.configure_args`.
They all have sensible defaults and for many packages the only thing
necessary will be to override the helper method
:meth:`~spack.build_systems.autotools.AutotoolsPackage.configure_args`.
For a finer tuning you may also override:
+-----------------------------------------------+--------------------+
| **Method** | **Purpose** |
+===============================================+====================+
| :py:attr:`~.AutotoolsPackage.build_targets` | Specify ``make`` |
| :py:attr:`~.AutotoolsBuilder.build_targets` | Specify ``make`` |
| | targets for the |
| | build phase |
+-----------------------------------------------+--------------------+
| :py:attr:`~.AutotoolsPackage.install_targets` | Specify ``make`` |
| :py:attr:`~.AutotoolsBuilder.install_targets` | Specify ``make`` |
| | targets for the |
| | install phase |
+-----------------------------------------------+--------------------+
| :py:meth:`~.AutotoolsPackage.check` | Run build time |
| :py:meth:`~.AutotoolsBuilder.check` | Run build time |
| | tests if required |
+-----------------------------------------------+--------------------+
"""
#: Phases of a GNU Autotools package
phases = ["autoreconf", "configure", "build", "install"]
#: This attribute is used in UI queries that need to know the build
#: system base class
build_system_class = "AutotoolsPackage"
phases = ("autoreconf", "configure", "build", "install")
@property
def patch_config_files(self):
"""
Whether or not to update old ``config.guess`` and ``config.sub`` files
distributed with the tarball. This currently only applies to
``ppc64le:``, ``aarch64:``, and ``riscv64`` target architectures. The
substitutes are taken from the ``gnuconfig`` package, which is
automatically added as a build dependency for these architectures. In
case system versions of these config files are required, the
``gnuconfig`` package can be marked external with a prefix pointing to
the directory containing the system ``config.guess`` and ``config.sub``
files.
"""
return (
self.spec.satisfies("target=ppc64le:")
or self.spec.satisfies("target=aarch64:")
or self.spec.satisfies("target=riscv64:")
)
#: Names associated with package methods in the old build-system format
legacy_methods = (
"configure_args",
"check",
"installcheck",
)
#: Whether or not to update ``libtool``
#: (currently only for Arm/Clang/Fujitsu/NVHPC compilers)
#: Names associated with package attributes in the old build-system format
legacy_attributes = (
"archive_files",
"patch_libtool",
"build_targets",
"install_targets",
"build_time_test_callbacks",
"install_time_test_callbacks",
"force_autoreconf",
"autoreconf_extra_args",
"install_libtool_archives",
"patch_config_files",
"configure_directory",
"configure_abs_path",
"build_directory",
"autoreconf_search_path_args",
)
#: Whether to update ``libtool`` (e.g. for Arm/Clang/Fujitsu/NVHPC compilers)
patch_libtool = True
#: Targets for ``make`` during the :py:meth:`~.AutotoolsPackage.build`
#: phase
#: Targets for ``make`` during the :py:meth:`~.AutotoolsBuilder.build` phase
build_targets = [] # type: List[str]
#: Targets for ``make`` during the :py:meth:`~.AutotoolsPackage.install`
#: phase
#: Targets for ``make`` during the :py:meth:`~.AutotoolsBuilder.install` phase
install_targets = ["install"]
#: Callback names for build-time test
@@ -97,24 +150,40 @@ def patch_config_files(self):
#: Set to true to force the autoreconf step even if configure is present
force_autoreconf = False
#: Options to be passed to autoreconf when using the default implementation
autoreconf_extra_args = [] # type: List[str]
#: If False deletes all the .la files in the prefix folder
#: after the installation. If True instead it installs them.
#: If False deletes all the .la files in the prefix folder after the installation.
#: If True instead it installs them.
install_libtool_archives = False
depends_on("gnuconfig", type="build", when="target=ppc64le:")
depends_on("gnuconfig", type="build", when="target=aarch64:")
depends_on("gnuconfig", type="build", when="target=riscv64:")
conflicts("platform=windows")
@property
def patch_config_files(self):
"""Whether to update old ``config.guess`` and ``config.sub`` files
distributed with the tarball.
This currently only applies to ``ppc64le:``, ``aarch64:``, and
``riscv64`` target architectures.
The substitutes are taken from the ``gnuconfig`` package, which is
automatically added as a build dependency for these architectures. In case
system versions of these config files are required, the ``gnuconfig`` package
can be marked external, with a prefix pointing to the directory containing the
system ``config.guess`` and ``config.sub`` files.
"""
return (
self.pkg.spec.satisfies("target=ppc64le:")
or self.pkg.spec.satisfies("target=aarch64:")
or self.pkg.spec.satisfies("target=riscv64:")
)
@property
def _removed_la_files_log(self):
"""File containing the list of remove libtool archives"""
"""File containing the list of removed libtool archives"""
build_dir = self.build_directory
if not os.path.isabs(self.build_directory):
build_dir = os.path.join(self.stage.path, build_dir)
build_dir = os.path.join(self.pkg.stage.path, build_dir)
return os.path.join(build_dir, "removed_la_files.txt")
@property
@@ -125,13 +194,13 @@ def archive_files(self):
files.append(self._removed_la_files_log)
return files
@run_after("autoreconf")
@spack.builder.run_after("autoreconf")
def _do_patch_config_files(self):
"""Some packages ship with older config.guess/config.sub files and
need to have these updated when installed on a newer architecture.
In particular, config.guess fails for PPC64LE for versions prior
to a 2013-06-10 build date (automake 1.13.4) and for ARM (aarch64) and
RISC-V (riscv64).
"""Some packages ship with older config.guess/config.sub files and need to
have these updated when installed on a newer architecture.
In particular, config.guess fails for PPC64LE for versions prior to a
2013-06-10 build date (automake 1.13.4) and for AArch64 and RISC-V.
"""
if not self.patch_config_files:
return
@@ -139,11 +208,11 @@ def _do_patch_config_files(self):
# TODO: Expand this to select the 'config.sub'-compatible architecture
# for each platform (e.g. 'config.sub' doesn't accept 'power9le', but
# does accept 'ppc64le').
if self.spec.satisfies("target=ppc64le:"):
if self.pkg.spec.satisfies("target=ppc64le:"):
config_arch = "ppc64le"
elif self.spec.satisfies("target=aarch64:"):
elif self.pkg.spec.satisfies("target=aarch64:"):
config_arch = "aarch64"
elif self.spec.satisfies("target=riscv64:"):
elif self.pkg.spec.satisfies("target=riscv64:"):
config_arch = "riscv64"
else:
config_arch = "local"
@@ -155,7 +224,7 @@ def runs_ok(script_abs_path):
args = [script_abs_path] + additional_args.get(script_name, [])
try:
check_call(args, stdout=PIPE, stderr=PIPE)
subprocess.check_call(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
except Exception as e:
tty.debug(e)
return False
@@ -163,7 +232,7 @@ def runs_ok(script_abs_path):
return True
# Get the list of files that needs to be patched
to_be_patched = fs.find(self.stage.path, files=["config.sub", "config.guess"])
to_be_patched = fs.find(self.pkg.stage.path, files=["config.sub", "config.guess"])
to_be_patched = [f for f in to_be_patched if not runs_ok(f)]
# If there are no files to be patched, return early
@@ -171,22 +240,21 @@ def runs_ok(script_abs_path):
return
# Otherwise, require `gnuconfig` to be a build dependency
self._require_build_deps(
pkgs=["gnuconfig"], spec=self.spec, err="Cannot patch config files"
ensure_build_dependencies_or_raise(
spec=self.pkg.spec, dependencies=["gnuconfig"], error_msg="Cannot patch config files"
)
# Get the config files we need to patch (config.sub / config.guess).
to_be_found = list(set(os.path.basename(f) for f in to_be_patched))
gnuconfig = self.spec["gnuconfig"]
gnuconfig = self.pkg.spec["gnuconfig"]
gnuconfig_dir = gnuconfig.prefix
# An external gnuconfig may not have a prefix.
if gnuconfig_dir is None:
raise InstallError(
"Spack could not find substitutes for GNU config "
"files because no prefix is available for the "
"`gnuconfig` package. Make sure you set a prefix "
"path instead of modules for external `gnuconfig`."
raise spack.build_environment.InstallError(
"Spack could not find substitutes for GNU config files because no "
"prefix is available for the `gnuconfig` package. Make sure you set a "
"prefix path instead of modules for external `gnuconfig`."
)
candidates = fs.find(gnuconfig_dir, files=to_be_found, recursive=False)
@@ -203,7 +271,7 @@ def runs_ok(script_abs_path):
msg += (
" or the `gnuconfig` package prefix is misconfigured as" " an external package"
)
raise InstallError(msg)
raise spack.build_environment.InstallError(msg)
# Filter working substitutes
candidates = [f for f in candidates if runs_ok(f)]
@@ -228,7 +296,9 @@ def runs_ok(script_abs_path):
and set the prefix to the directory containing the `config.guess` and
`config.sub` files.
"""
raise InstallError(msg.format(", ".join(to_be_found), self.name))
raise spack.build_environment.InstallError(
msg.format(", ".join(to_be_found), self.name)
)
# Copy the good files over the bad ones
for abs_path in to_be_patched:
@@ -238,7 +308,7 @@ def runs_ok(script_abs_path):
fs.copy(substitutes[name], abs_path)
os.chmod(abs_path, mode)
@run_before("configure")
@spack.builder.run_before("configure")
def _patch_usr_bin_file(self):
"""On NixOS file is not available in /usr/bin/file. Patch configure
scripts to use file from path."""
@@ -250,7 +320,7 @@ def _patch_usr_bin_file(self):
with fs.keep_modification_time(*x.filenames):
x.filter(regex="/usr/bin/file", repl="file", string=True)
@run_before("configure")
@spack.builder.run_before("configure")
def _set_autotools_environment_variables(self):
"""Many autotools builds use a version of mknod.m4 that fails when
running as root unless FORCE_UNSAFE_CONFIGURE is set to 1.
@@ -261,11 +331,10 @@ def _set_autotools_environment_variables(self):
Without it, configure just fails halfway through, but it can
still run things *before* this check. Forcing this just removes a
nuisance -- this is not circumventing any real protection.
"""
os.environ["FORCE_UNSAFE_CONFIGURE"] = "1"
@run_before("configure")
@spack.builder.run_before("configure")
def _do_patch_libtool_configure(self):
"""Patch bugs that propagate from libtool macros into "configure" and
further into "libtool". Note that patches that can be fixed by patching
@@ -293,7 +362,7 @@ def _do_patch_libtool_configure(self):
# Support Libtool 2.4.2 and older:
x.filter(regex=r'^(\s*test \$p = "-R")(; then\s*)$', repl=r'\1 || test x-l = x"$p"\2')
@run_after("configure")
@spack.builder.run_after("configure")
def _do_patch_libtool(self):
"""If configure generates a "libtool" script that does not correctly
detect the compiler (and patch_libtool is set), patch in the correct
@@ -328,31 +397,33 @@ def _do_patch_libtool(self):
markers[tag] = "LIBTOOL TAG CONFIG: {0}".format(tag.upper())
# Replace empty linker flag prefixes:
if self.compiler.name == "nag":
if self.pkg.compiler.name == "nag":
# Nag is mixed with gcc and g++, which are recognized correctly.
# Therefore, we change only Fortran values:
for tag in ["fc", "f77"]:
marker = markers[tag]
x.filter(
regex='^wl=""$',
repl='wl="{0}"'.format(self.compiler.linker_arg),
repl='wl="{0}"'.format(self.pkg.compiler.linker_arg),
start_at="# ### BEGIN {0}".format(marker),
stop_at="# ### END {0}".format(marker),
)
else:
x.filter(regex='^wl=""$', repl='wl="{0}"'.format(self.compiler.linker_arg))
x.filter(regex='^wl=""$', repl='wl="{0}"'.format(self.pkg.compiler.linker_arg))
# Replace empty PIC flag values:
for cc, marker in markers.items():
x.filter(
regex='^pic_flag=""$',
repl='pic_flag="{0}"'.format(getattr(self.compiler, "{0}_pic_flag".format(cc))),
repl='pic_flag="{0}"'.format(
getattr(self.pkg.compiler, "{0}_pic_flag".format(cc))
),
start_at="# ### BEGIN {0}".format(marker),
stop_at="# ### END {0}".format(marker),
)
# Other compiler-specific patches:
if self.compiler.name == "fj":
if self.pkg.compiler.name == "fj":
x.filter(regex="-nostdlib", repl="", string=True)
rehead = r"/\S*/"
for o in [
@@ -365,12 +436,12 @@ def _do_patch_libtool(self):
"crtendS.o",
]:
x.filter(regex=(rehead + o), repl="", string=True)
elif self.compiler.name == "dpcpp":
elif self.pkg.compiler.name == "dpcpp":
# Hack to filter out spurious predep_objects when building with Intel dpcpp
# (see https://github.com/spack/spack/issues/32863):
x.filter(regex=r"^(predep_objects=.*)/tmp/conftest-[0-9A-Fa-f]+\.o", repl=r"\1")
x.filter(regex=r"^(predep_objects=.*)/tmp/a-[0-9A-Fa-f]+\.o", repl=r"\1")
elif self.compiler.name == "nag":
elif self.pkg.compiler.name == "nag":
for tag in ["fc", "f77"]:
marker = markers[tag]
start_at = "# ### BEGIN {0}".format(marker)
@@ -446,11 +517,8 @@ def _do_patch_libtool(self):
@property
def configure_directory(self):
"""Returns the directory where 'configure' resides.
:return: directory where to find configure
"""
return self.stage.source_path
"""Return the directory where 'configure' resides."""
return self.pkg.stage.source_path
@property
def configure_abs_path(self):
@@ -463,34 +531,12 @@ def build_directory(self):
"""Override to provide another place to build the package"""
return self.configure_directory
@run_before("autoreconf")
@spack.builder.run_before("autoreconf")
def delete_configure_to_force_update(self):
if self.force_autoreconf:
force_remove(self.configure_abs_path)
fs.force_remove(self.configure_abs_path)
def _require_build_deps(self, pkgs, spec, err):
"""Require `pkgs` to be direct build dependencies of `spec`. Raises a
RuntimeError with a helpful error messages when any dep is missing."""
build_deps = [d.name for d in spec.dependencies(deptype="build")]
missing_deps = [x for x in pkgs if x not in build_deps]
if not missing_deps:
return
# Raise an exception on missing deps.
msg = (
"{0}: missing dependencies: {1}.\n\nPlease add "
"the following lines to the package:\n\n".format(err, ", ".join(missing_deps))
)
for dep in missing_deps:
msg += " depends_on('{0}', type='build', when='@{1}')\n".format(dep, spec.version)
msg += "\nUpdate the version (when='@{0}') as needed.".format(spec.version)
raise RuntimeError(msg)
def autoreconf(self, spec, prefix):
def autoreconf(self, pkg, spec, prefix):
"""Not needed usually, configure should be already there"""
# If configure exists nothing needs to be done
@@ -498,8 +544,10 @@ def autoreconf(self, spec, prefix):
return
# Else try to regenerate it, which requires a few build dependencies
self._require_build_deps(
pkgs=["autoconf", "automake", "libtool"], spec=spec, err="Cannot generate configure"
ensure_build_dependencies_or_raise(
spec=spec,
dependencies=["autoconf", "automake", "libtool"],
error_msg="Cannot generate configure",
)
tty.msg("Configure script not found: trying to generate it")
@@ -507,8 +555,8 @@ def autoreconf(self, spec, prefix):
tty.warn("* If the default procedure fails, consider implementing *")
tty.warn("* a custom AUTORECONF phase in the package *")
tty.warn("*********************************************************")
with working_dir(self.configure_directory):
m = inspect.getmodule(self)
with fs.working_dir(self.configure_directory):
m = inspect.getmodule(self.pkg)
# This line is what is needed most of the time
# --install, --verbose, --force
autoreconf_args = ["-ivf"]
@@ -524,98 +572,66 @@ def autoreconf_search_path_args(self):
spack dependencies."""
return _autoreconf_search_path_args(self.spec)
@run_after("autoreconf")
@spack.builder.run_after("autoreconf")
def set_configure_or_die(self):
"""Checks the presence of a ``configure`` file after the
autoreconf phase. If it is found sets a module attribute
appropriately, otherwise raises an error.
"""Ensure the presence of a "configure" script, or raise. If the "configure"
is found, a module level attribute is set.
:raises RuntimeError: if a configure script is not found in
:py:meth:`~AutotoolsPackage.configure_directory`
Raises:
RuntimeError: if the "configure" script is not found
"""
# Check if a configure script is there. If not raise a RuntimeError.
# Check if the "configure" script is there. If not raise a RuntimeError.
if not os.path.exists(self.configure_abs_path):
msg = "configure script not found in {0}"
raise RuntimeError(msg.format(self.configure_directory))
# Monkey-patch the configure script in the corresponding module
inspect.getmodule(self).configure = Executable(self.configure_abs_path)
inspect.getmodule(self.pkg).configure = Executable(self.configure_abs_path)
def configure_args(self):
"""Produces a list containing all the arguments that must be passed to
configure, except ``--prefix`` which will be pre-pended to the list.
:return: list of arguments for configure
"""Return the list of all the arguments that must be passed to configure,
except ``--prefix`` which will be pre-pended to the list.
"""
return []
def flags_to_build_system_args(self, flags):
"""Produces a list of all command line arguments to pass specified
compiler flags to configure."""
# Has to be dynamic attribute due to caching.
setattr(self, "configure_flag_args", [])
for flag, values in flags.items():
if values:
values_str = "{0}={1}".format(flag.upper(), " ".join(values))
self.configure_flag_args.append(values_str)
# Spack's fflags are meant for both F77 and FC, therefore we
# additionally set FCFLAGS if required.
values = flags.get("fflags", None)
if values:
values_str = "FCFLAGS={0}".format(" ".join(values))
self.configure_flag_args.append(values_str)
def configure(self, spec, prefix):
"""Runs configure with the arguments specified in
:meth:`~spack.build_systems.autotools.AutotoolsPackage.configure_args`
and an appropriately set prefix.
def configure(self, pkg, spec, prefix):
"""Run "configure", with the arguments specified by the builder and an
appropriately set prefix.
"""
options = getattr(self, "configure_flag_args", [])
options = getattr(self.pkg, "configure_flag_args", [])
options += ["--prefix={0}".format(prefix)]
options += self.configure_args()
with working_dir(self.build_directory, create=True):
inspect.getmodule(self).configure(*options)
with fs.working_dir(self.build_directory, create=True):
inspect.getmodule(self.pkg).configure(*options)
def setup_build_environment(self, env):
if self.spec.platform == "darwin" and macos_version() >= Version("11"):
# Many configure files rely on matching '10.*' for macOS version
# detection and fail to add flags if it shows as version 11.
env.set("MACOSX_DEPLOYMENT_TARGET", "10.16")
def build(self, spec, prefix):
"""Makes the build targets specified by
:py:attr:``~.AutotoolsPackage.build_targets``
"""
def build(self, pkg, spec, prefix):
"""Run "make" on the build targets specified by the builder."""
# See https://autotools.io/automake/silent.html
params = ["V=1"]
params += self.build_targets
with working_dir(self.build_directory):
inspect.getmodule(self).make(*params)
with fs.working_dir(self.build_directory):
inspect.getmodule(self.pkg).make(*params)
def install(self, spec, prefix):
"""Makes the install targets specified by
:py:attr:``~.AutotoolsPackage.install_targets``
"""
with working_dir(self.build_directory):
inspect.getmodule(self).make(*self.install_targets)
def install(self, pkg, spec, prefix):
"""Run "make" on the install targets specified by the builder."""
with fs.working_dir(self.build_directory):
inspect.getmodule(self.pkg).make(*self.install_targets)
run_after("build")(PackageBase._run_default_build_time_test_callbacks)
spack.builder.run_after("build")(execute_build_time_tests)
def check(self):
"""Searches the Makefile for targets ``test`` and ``check``
and runs them if found.
"""
with working_dir(self.build_directory):
self._if_make_target_execute("test")
self._if_make_target_execute("check")
"""Run "make" on the ``test`` and ``check`` targets, if found."""
with fs.working_dir(self.build_directory):
self.pkg._if_make_target_execute("test")
self.pkg._if_make_target_execute("check")
def _activate_or_not(
self, name, activation_word, deactivation_word, activation_value=None, variant=None
):
"""This function contains the current implementation details of
:meth:`~spack.build_systems.autotools.AutotoolsPackage.with_or_without` and
:meth:`~spack.build_systems.autotools.AutotoolsPackage.enable_or_disable`.
"""This function contain the current implementation details of
:meth:`~spack.build_systems.autotools.AutotoolsBuilder.with_or_without` and
:meth:`~spack.build_systems.autotools.AutotoolsBuilder.enable_or_disable`.
Args:
name (str): name of the option that is being activated or not
@@ -671,7 +687,7 @@ def _activate_or_not(
Raises:
KeyError: if name is not among known variants
"""
spec = self.spec
spec = self.pkg.spec
args = []
if activation_value == "prefix":
@@ -681,16 +697,16 @@ def _activate_or_not(
# Defensively look that the name passed as argument is among
# variants
if variant not in self.variants:
if variant not in self.pkg.variants:
msg = '"{0}" is not a variant of "{1}"'
raise KeyError(msg.format(variant, self.name))
raise KeyError(msg.format(variant, self.pkg.name))
if variant not in spec.variants:
return []
# Create a list of pairs. Each pair includes a configuration
# option and whether or not that option is activated
variant_desc, _ = self.variants[variant]
variant_desc, _ = self.pkg.variants[variant]
if set(variant_desc.values) == set((True, False)):
# BoolValuedVariant carry information about a single option.
# Nonetheless, for uniformity of treatment we'll package them
@@ -718,14 +734,18 @@ def _activate_or_not(
override_name = "{0}_or_{1}_{2}".format(
activation_word, deactivation_word, option_value
)
line_generator = getattr(self, override_name, None)
line_generator = getattr(self, override_name, None) or getattr(
self.pkg, override_name, None
)
# If not available use a sensible default
if line_generator is None:
def _default_generator(is_activated):
if is_activated:
line = "--{0}-{1}".format(activation_word, option_value)
if activation_value is not None and activation_value(option_value):
if activation_value is not None and activation_value(
option_value
): # NOQA=ignore=E501
line += "={0}".format(activation_value(option_value))
return line
return "--{0}-{1}".format(deactivation_word, option_value)
@@ -764,7 +784,7 @@ def with_or_without(self, name, activation_value=None, variant=None):
def enable_or_disable(self, name, activation_value=None, variant=None):
"""Same as
:meth:`~spack.build_systems.autotools.AutotoolsPackage.with_or_without`
:meth:`~spack.build_systems.autotools.AutotoolsBuilder.with_or_without`
but substitute ``with`` with ``enable`` and ``without`` with ``disable``.
Args:
@@ -781,19 +801,14 @@ def enable_or_disable(self, name, activation_value=None, variant=None):
"""
return self._activate_or_not(name, "enable", "disable", activation_value, variant)
run_after("install")(PackageBase._run_default_install_time_test_callbacks)
spack.builder.run_after("install")(execute_install_time_tests)
def installcheck(self):
"""Searches the Makefile for an ``installcheck`` target
and runs it if found.
"""
with working_dir(self.build_directory):
self._if_make_target_execute("installcheck")
"""Run "make" on the ``installcheck`` target, if found."""
with fs.working_dir(self.build_directory):
self.pkg._if_make_target_execute("installcheck")
# Check that self.prefix is there after installation
run_after("install")(PackageBase.sanity_check_prefix)
@run_after("install")
@spack.builder.run_after("install")
def remove_libtool_archives(self):
"""Remove all .la files in prefix sub-folders if the package sets
``install_libtool_archives`` to be False.
@@ -803,14 +818,20 @@ def remove_libtool_archives(self):
return
# Remove the files and create a log of what was removed
libtool_files = fs.find(str(self.prefix), "*.la", recursive=True)
libtool_files = fs.find(str(self.pkg.prefix), "*.la", recursive=True)
with fs.safe_remove(*libtool_files):
fs.mkdirp(os.path.dirname(self._removed_la_files_log))
with open(self._removed_la_files_log, mode="w") as f:
f.write("\n".join(libtool_files))
def setup_build_environment(self, env):
if self.spec.platform == "darwin" and macos_version() >= Version("11"):
# Many configure files rely on matching '10.*' for macOS version
# detection and fail to add flags if it shows as version 11.
env.set("MACOSX_DEPLOYMENT_TARGET", "10.16")
# On macOS, force rpaths for shared library IDs and remove duplicate rpaths
run_after("install")(PackageBase.apply_macos_rpath_fixups)
spack.builder.run_after("install", when="platform=darwin")(apply_macos_rpath_fixups)
def _autoreconf_search_path_args(spec):


@@ -0,0 +1,31 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.builder
import spack.directives
import spack.package_base
class BundlePackage(spack.package_base.PackageBase):
"""General purpose bundle, or no-code, package class."""
#: This attribute is used in UI queries that require to know which
#: build-system class we are using
build_system_class = "BundlePackage"
#: Legacy buildsystem attribute used to deserialize and install old specs
legacy_buildsystem = "bundle"
#: Bundle packages do not have associated source or binary code.
has_code = False
spack.directives.build_system("bundle")
@spack.builder.builder("bundle")
class BundleBuilder(spack.builder.Builder):
phases = ("install",)
def install(self, pkg, spec, prefix):
pass
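Since the install phase is a no-op, a bundle recipe is just a version plus dependencies. A hedged fragment (package name and dependencies are hypothetical):

```python
from spack.package import *  # usual recipe preamble

class MyToolchain(BundlePackage):  # hypothetical meta-package
    """Aggregates dependencies; nothing is fetched or built."""
    version("2022.12")
    depends_on("cmake", type="build")
    depends_on("zlib")
```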


@@ -3,12 +3,14 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
from typing import Tuple
import llnl.util.filesystem as fs
import llnl.util.tty as tty
from llnl.util.filesystem import install, mkdirp
from spack.build_systems.cmake import CMakePackage
from spack.package_base import run_after
import spack.builder
from .cmake import CMakeBuilder, CMakePackage
def cmake_cache_path(name, value, comment=""):
@@ -28,44 +30,50 @@ def cmake_cache_option(name, boolean_value, comment=""):
return 'set({0} {1} CACHE BOOL "{2}")\n'.format(name, value, comment)
class CachedCMakePackage(CMakePackage):
"""Specialized class for packages built using CMake initial cache.
class CachedCMakeBuilder(CMakeBuilder):
This feature of CMake allows packages to increase reproducibility,
especially between Spack- and manual builds. It also allows packages to
sidestep certain parsing bugs in extremely long ``cmake`` commands, and to
avoid system limits on the length of the command line."""
#: Phases of a Cached CMake package
#: Note: the initconfig phase is used for developer builds as a final phase to stop on
phases = ("initconfig", "cmake", "build", "install") # type: Tuple[str, ...]
phases = ["initconfig", "cmake", "build", "install"]
#: Names associated with package methods in the old build-system format
legacy_methods = CMakeBuilder.legacy_methods + (
"initconfig_compiler_entries",
"initconfig_mpi_entries",
"initconfig_hardware_entries",
"std_initconfig_entries",
"initconfig_package_entries",
) # type: Tuple[str, ...]
#: Names associated with package attributes in the old build-system format
legacy_attributes = CMakeBuilder.legacy_attributes + (
"cache_name",
"cache_path",
) # type: Tuple[str, ...]
@property
def cache_name(self):
return "{0}-{1}-{2}@{3}.cmake".format(
self.name,
self.spec.architecture,
self.spec.compiler.name,
self.spec.compiler.version,
self.pkg.name,
self.pkg.spec.architecture,
self.pkg.spec.compiler.name,
self.pkg.spec.compiler.version,
)
@property
def cache_path(self):
return os.path.join(self.stage.source_path, self.cache_name)
def flag_handler(self, name, flags):
if name in ("cflags", "cxxflags", "cppflags", "fflags"):
return (None, None, None) # handled in the cmake cache
return (flags, None, None)
return os.path.join(self.pkg.stage.source_path, self.cache_name)
def initconfig_compiler_entries(self):
# This will tell cmake to use the Spack compiler wrappers when run
# through Spack, but use the underlying compiler when run outside of
# Spack
spec = self.spec
spec = self.pkg.spec
# Fortran compiler is optional
if "FC" in os.environ:
spack_fc_entry = cmake_cache_path("CMAKE_Fortran_COMPILER", os.environ["FC"])
system_fc_entry = cmake_cache_path("CMAKE_Fortran_COMPILER", self.compiler.fc)
system_fc_entry = cmake_cache_path("CMAKE_Fortran_COMPILER", self.pkg.compiler.fc)
else:
spack_fc_entry = "# No Fortran compiler defined in spec"
system_fc_entry = "# No Fortran compiler defined in spec"
@@ -81,8 +89,8 @@ def initconfig_compiler_entries(self):
" " + cmake_cache_path("CMAKE_CXX_COMPILER", os.environ["CXX"]),
" " + spack_fc_entry,
"else()\n",
" " + cmake_cache_path("CMAKE_C_COMPILER", self.compiler.cc),
" " + cmake_cache_path("CMAKE_CXX_COMPILER", self.compiler.cxx),
" " + cmake_cache_path("CMAKE_C_COMPILER", self.pkg.compiler.cc),
" " + cmake_cache_path("CMAKE_CXX_COMPILER", self.pkg.compiler.cxx),
" " + system_fc_entry,
"endif()\n",
]
@@ -126,7 +134,7 @@ def initconfig_compiler_entries(self):
return entries
def initconfig_mpi_entries(self):
spec = self.spec
spec = self.pkg.spec
if not spec.satisfies("^mpi"):
return []
@@ -160,13 +168,13 @@ def initconfig_mpi_entries(self):
mpiexec = os.path.join(spec["mpi"].prefix.bin, "mpiexec")
if not os.path.exists(mpiexec):
msg = "Unable to determine MPIEXEC, %s tests may fail" % self.name
msg = "Unable to determine MPIEXEC, %s tests may fail" % self.pkg.name
entries.append("# {0}\n".format(msg))
tty.warn(msg)
else:
# starting with cmake 3.10, FindMPI expects MPIEXEC_EXECUTABLE
# vs the older versions which expect MPIEXEC
if self.spec["cmake"].satisfies("@3.10:"):
if self.pkg.spec["cmake"].satisfies("@3.10:"):
entries.append(cmake_cache_path("MPIEXEC_EXECUTABLE", mpiexec))
else:
entries.append(cmake_cache_path("MPIEXEC", mpiexec))
@@ -180,7 +188,7 @@ def initconfig_mpi_entries(self):
return entries
def initconfig_hardware_entries(self):
spec = self.spec
spec = self.pkg.spec
entries = [
"#------------------{0}".format("-" * 60),
@@ -197,13 +205,7 @@ def initconfig_hardware_entries(self):
entries.append(cmake_cache_path("CUDA_TOOLKIT_ROOT_DIR", cudatoolkitdir))
cudacompiler = "${CUDA_TOOLKIT_ROOT_DIR}/bin/nvcc"
entries.append(cmake_cache_path("CMAKE_CUDA_COMPILER", cudacompiler))
if spec.satisfies("^mpi"):
entries.append(cmake_cache_path("CMAKE_CUDA_HOST_COMPILER", "${MPI_CXX_COMPILER}"))
else:
entries.append(
cmake_cache_path("CMAKE_CUDA_HOST_COMPILER", "${CMAKE_CXX_COMPILER}")
)
entries.append(cmake_cache_path("CMAKE_CUDA_HOST_COMPILER", "${CMAKE_CXX_COMPILER}"))
return entries
@@ -212,7 +214,7 @@ def std_initconfig_entries(self):
"#------------------{0}".format("-" * 60),
"# !!!! This is a generated file, edit at own risk !!!!",
"#------------------{0}".format("-" * 60),
"# CMake executable path: {0}".format(self.spec["cmake"].command.path),
"# CMake executable path: {0}".format(self.pkg.spec["cmake"].command.path),
"#------------------{0}\n".format("-" * 60),
]
@@ -220,7 +222,7 @@ def initconfig_package_entries(self):
"""This method is to be overwritten by the package"""
return []
def initconfig(self, spec, prefix):
def initconfig(self, pkg, spec, prefix):
cache_entries = (
self.std_initconfig_entries()
+ self.initconfig_compiler_entries()
@@ -236,11 +238,28 @@ def initconfig(self, spec, prefix):
@property
def std_cmake_args(self):
args = super(CachedCMakePackage, self).std_cmake_args
args = super(CachedCMakeBuilder, self).std_cmake_args
args.extend(["-C", self.cache_path])
return args
@run_after("install")
@spack.builder.run_after("install")
def install_cmake_cache(self):
mkdirp(self.spec.prefix.share.cmake)
install(self.cache_path, self.spec.prefix.share.cmake)
fs.mkdirp(self.pkg.spec.prefix.share.cmake)
fs.install(self.cache_path, self.pkg.spec.prefix.share.cmake)
class CachedCMakePackage(CMakePackage):
"""Specialized class for packages built using CMake initial cache.
This feature of CMake allows packages to increase reproducibility,
especially between Spack- and manual builds. It also allows packages to
sidestep certain parsing bugs in extremely long ``cmake`` commands, and to
avoid system limits on the length of the command line.
"""
CMakeBuilder = CachedCMakeBuilder
def flag_handler(self, name, flags):
if name in ("cflags", "cxxflags", "cppflags", "fflags"):
return None, None, None # handled in the cmake cache
return flags, None, None
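To round out the cached-CMake hunks, a hedged sketch of the per-package hook mentioned above ("This method is to be overwritten by the package"); the variant and dependency are hypothetical, and the helpers are the `cmake_cache_*` functions defined earlier in this file:

```python
# Hypothetical initconfig_package_entries override in a
# CachedCMakePackage recipe, emitting entries into the initial cache
# file that cmake later consumes via -C <cache>.
def initconfig_package_entries(self):
    return [
        cmake_cache_option("ENABLE_TESTS", self.spec.satisfies("+tests")),
        cmake_cache_path("ZLIB_ROOT", self.spec["zlib"].prefix),
    ]
```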

Some files were not shown because too many files have changed in this diff.