Compare commits

...

209 Commits

Author SHA1 Message Date
Michael Kuhn
fb83c7112e Fix pkgconfig dependencies (#39059)
pkg-config and pkgconf are providers.
2023-07-22 17:20:30 -07:00
Christopher Christofi
c811b71336 py-jaxlib: add conflict for missing cuda cuda_arch value (#39054)
* py-jaxlib: add conflict for missing cuda cuda_arch specification

* Update var/spack/repos/builtin/packages/py-jaxlib/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* py-jaxlib: conflict on missing cuda_arch value when building with cuda

* Update var/spack/repos/builtin/packages/py-jaxlib/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-07-22 12:27:49 -04:00
eugeneswalker
366c798b87 e4s oneapi ci: build with latest 2023.2 based image (#39048) 2023-07-22 09:22:02 -07:00
百地 希留耶
90ac0ef66e Implement fish completion (#29549)
* commands: provide more information to Command

* fish: Add script to generate fish completion

* fish: auto prepend `spack` command to avoid duplication

* fish: improve completion generation code readability

* commands: replace match-case with if-else

* fish: fix optspec variable name prefix

* fish: fix return value in get_optspecs

* fish: fix return value in get_optspecs

* format: split long line and trim trailing space

* bugfix: replace f-string with interpolation

* fish: complete more specs and some fixes

* fish: complete hash specs starting with /

* fish: improve compatibility

* style: trim trailing whitespace

* commands: add fish to update args and update tests

* commands: add fish completion file

* style: merge imports

* fish: source completion in setup-env

* fish: caret only completes dependencies

* fish: make sure we always get same order of output

* fish: spack activate
only show installed packages that have extensions

* fish: update completion file

* fish: make dict keys sorted

* Blacken code

* Fix bad merge

* Undo style changes to setup-env.fish

* Fix unit tests

* Style fix

* Compatible with fish_indent

* Use list for stability of order

* Sort one more place

* Sort more things

* Sorting unneeded

* Unsort

* Print difference

* Style fix

* Help messages need quotes

* Arguments to -a must be quoted

* Update types

* Update types

* Update types

* Add type hints

* Change order of positionals

* Always expand help

* Remove shared base class

* Fix type hints

* Remove platform-specific choices

* First line of help only

* Remove unused maps

* Remove suppress

* Remove debugging comments

* Better quoting

* Fish completions have no double dash

* Remove test for deleted class

* Fix grammar in header file

* Use single quotes in most places

* Better support for remainder nargs

* No magic strings

* * and + can also complete multiple

* lower case, no period

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-07-22 08:55:12 -05:00
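To try the completions added by this commit, a minimal sketch assuming a standard Spack checkout (the setup script path is taken from the commit bullets above; the completed command is illustrative):

```console
$ # in a fish session; setup-env.fish sources the completion file
$ source $SPACK_ROOT/share/spack/setup-env.fish
$ spack in<TAB>   # tab-completes subcommands, with help text as descriptions
```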
Mark Olesen
66e85ae39a openfoam: add versions 2306, 2212_230612 (patch), 2212 (#38694)
* openfoam: add versions 2306, 2212_230612 (patch), 2212

* Fix syntax error

---------

Co-authored-by: Mark Olesen <Mark.Olesen@esi-group.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-07-22 02:52:55 -04:00
Manuela Kuhn
54fdae4a79 py-sqlalchemy: add 2.0.19 (#38951)
* py-sqlalchemy: add 2.0.19

* [@spackbot] updating style on behalf of manuelakuhn

* Add py-cython and py-greenlet and fix dependency type

* Fix typo
2023-07-21 23:41:00 -05:00
Zach Jibben
b215bb41dd Update Truchas (#39026) 2023-07-21 16:07:32 -07:00
Elliott Slaughter
b85803ae6c legion: Update Python dependencies. Fix variant requirements. Remove TLS. (#39003)
* legion: Missing Python dependency. Fix variant dependencies. Remove TLS.

* Update Python version bound and add NumPy dependency.

* Update requires syntax.
2023-07-21 13:59:04 -07:00
Matthieu Dorier
c5c75e8921 quickjs: add quickjs package (#39041)
* added quickjs package

* edited style of quickjs package
2023-07-21 12:50:10 -07:00
Manuela Kuhn
9c5ae722b2 py-isort: add 5.12.0 and fix build of 5.10.1 (#39033) 2023-07-21 14:06:40 -05:00
Martin Aumüller
132bb59be8 qt-*: update for 6.5.2 (#39038) 2023-07-21 10:38:11 -04:00
Massimiliano Culpo
c0b42151c3 Remove spack.repo.IndexError (#39029)
This exception is never used and
overrides a built-in.
2023-07-21 15:33:30 +02:00
Mosè Giordano
c1be7f2354 julia: Update hashes of github-generated patch files (#39034) 2023-07-21 15:32:36 +02:00
eugeneswalker
4edeabb2a2 e4s ci: add cray-sles ministack (#38744)
* e4s ci: add cray-sles ministack

* fix typo: variables, not env
2023-07-21 05:57:27 -07:00
Ashwin Kumar Karnad
405f563909 binary_caches.rst: fix typo (#39030) 2023-07-21 10:39:53 +02:00
Adam J. Stewart
089d775cf2 py-mpi4py: does not yet support cython 3 (#38996) 2023-07-20 19:13:16 -04:00
Martin Aumüller
6610b8bc27 proj: fix build of v7 with GCC 13 & add 9.2.1 (#39004)
* proj: fix building with GCC 13

apply upstream patch from 7.2 branch

* proj: checksum 9.2.1

* proj: fix sha256 of patch

thank you, @adamjstewart
2023-07-20 17:44:13 -04:00
mschouler
d3c4b74095 Add recipe for py-plotext (#39023)
Co-authored-by: Marc Schouler <marc.schouler@inria.fr>
2023-07-20 16:43:38 -05:00
Samuel K. Gutiérrez
26a74bb3bc Add Quo-Vadis package. (#38998)
Signed-off-by: Samuel K. Gutierrez <samuel@lanl.gov>
2023-07-20 12:07:36 -07:00
Sergey Kosukhin
d2566e3d62 nag: update the versioning scheme (#35457)
* nag: append build number to the version
* nag: add version 7.1.7125
* nag: deprecate unavailable versions
2023-07-20 12:03:26 -07:00
willdunklin
3fbe5dd312 sensei: add version 4.1.0 (#38959) 2023-07-20 13:48:27 -05:00
Manuela Kuhn
d1ea315e57 serf: add 1.3.10 (#38847) 2023-07-20 11:37:41 -07:00
Tom Scogland
0bef599c21 update luajit and fix link on neovim to allow luajit to work on linux aarch64 (#38865) 2023-07-20 11:36:12 -07:00
Dmitriy
d1d2d76183 Add boost variant to henson and require it for aarch64 (#38916) 2023-07-20 11:29:18 -07:00
Rocco Meli
294d81e99e Update GNINA and libmolgrid (#38978)
* pin protobuf
* explicitly select python interpreter
* remove python pin
2023-07-20 11:14:06 -07:00
Martin Aumüller
22d2ef3d5a botan: checksum 3.1.0 and 3.1.1 (#39006) 2023-07-20 11:01:58 -07:00
Hariharan Devarajan
e087f3bf93 release gotcha 1.0.4 (#39007) 2023-07-20 10:59:52 -07:00
Martin Aumüller
ebdaa766ae tinygltf: new versions and add release branch (#39012)
* tinygltf: new versions and release branch
   for each minor release available, the newest patch release has been added

---------

Co-authored-by: aumuell <aumuell@users.noreply.github.com>
2023-07-20 10:56:14 -07:00
Martin Aumüller
a87ee7f427 qt: make partially buildable on macos (#38990)
- drop use_xcode = True, as this would lead to an attempt to install Xcode (#34064)
- don't automatically build Qt Location with +opengl, as this is
  still broken

This built successfully with qt@5.15.10+opengl+dbus+phonon on Ventura/arm without
Xcode installed (only command line tools) - I did not check with Xcode installed.
2023-07-20 11:53:38 -05:00
Adam J. Stewart
4f0020c794 py-lightly: add v1.4.13 (#39019) 2023-07-20 09:50:24 -07:00
Jen Herting
b23c6f2851 [py-wasabi] added version 1.1.2 (#38268)
* [py-wasabi] added version 1.1.2

* [py-wasabi] flake8

* [py-wasabi]

- added dependency on py-colorama
- updated homepage

* [py-wasabi] removed python check for py-colorama

* [@spackbot] updating style on behalf of qwertos

---------

Co-authored-by: qwertos <qwertos@users.noreply.github.com>
2023-07-20 11:14:58 -05:00
Manuela Kuhn
81f9910c26 py-stack-data: add 0.6.2 (#38952) 2023-07-20 11:03:56 -05:00
Jen Herting
b40e3898b4 [py-omegaconf] added version 2.2.2 (#38980) 2023-07-20 10:54:30 -05:00
Massimiliano Culpo
50b90e430d spack.util.lock: add type-hints, remove **kwargs in method signatures (#39011) 2023-07-20 09:41:23 +02:00
Massimiliano Culpo
3a565c66e9 Respect custom user store when bootstrapping (#39001)
The user store is lazily evaluated. The change
in #38975 made it such that the first evaluation
was happening in the middle of swapping to user
configuration.

Ensure we construct the user store before that.
2023-07-19 19:53:33 -04:00
Joe Schoonover
01167a1471 Add new feq-parse version (#38991)
* Add new feq-parse version
* Swap 2.0.0 for 2.0.1 - resolves feq-parse build failure
2023-07-19 19:33:05 -04:00
Alberto Sartori
3caa0093f8 justbuild: add v1.1.4 (#38985) 2023-07-19 13:27:54 -07:00
Xavier Delaruelle
d9fbdfbee9 modules: use curly braces to enclose value in Tcl modulefile (#38375)
Use curly braces instead of quotes to enclose value or text in Tcl
modulefile. Within curly braces Tcl special characters like [, ] or $
are treated verbatim whereas they are evaluated within quotes.

Curly braces are the Tcl-recommended way to enclose verbatim content [1].

Note: if curly brace characters are used within content, they must be
balanced. This point has been checked against the current repository and no
unbalanced curly braces have been spotted.

Fixes #24243

[1] https://wiki.tcl-lang.org/page/Tcl+Minimal+Escaping+Style
2023-07-19 17:57:37 +02:00
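A small hypothetical modulefile fragment illustrating the quoting difference this commit relies on (names are made up):

```tcl
set prefix /opt/apps/foo-1.0
# Within quotes, Tcl evaluates $variables and [commands]:
setenv FOO_CFLAGS "-I$prefix/include"
# Within curly braces, [, ] and $ are kept verbatim -- nothing is evaluated:
setenv FOO_NOTE {chars like [, ] and $prefix stay literal here}
```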
Jen Herting
ae08b25dac [py-openapi-schema-pydantic] New package (#38973) 2023-07-19 15:26:55 +02:00
Jen Herting
9ccb018b23 [py-langsmith] New package (#38971) 2023-07-19 15:21:57 +02:00
Harmen Stoppels
185bccb70f Fetch & patch: actually acquire stage lock, and many more issues (#38903)
* Fetching patches wouldn't result in acquiring a stage lock during install
* The installer would acquire a stage lock *after* fetching instead of
   before, leading to races
* The name of the stage for patches was random, so on build failure
   (where stage dirs are not removed), these directories would continue
   to exist after a second successful install.
* There was this redundant "composite fetch" object -- there's already
   a composite stage. Remove this.
* For some reason we do *double* shasum validation of patches, before
   and after compression -- that's just too much? I removed it.
2023-07-19 15:06:56 +02:00
Jen Herting
8c8186c757 [py-uc-micro-py] New package (#38967) 2023-07-19 14:19:45 +02:00
Jen Herting
8a76430039 [py-pydub] new package (#38966) 2023-07-19 14:07:31 +02:00
Jen Herting
33939656e2 [py-hatch-requirements-txt] new package (#38965) 2023-07-19 14:03:21 +02:00
Jen Herting
950b5579fb [py-ffmpy] New package (#38964) 2023-07-19 14:00:31 +02:00
Jen Herting
d996b4d240 [py-colorama] added version 0.4.6 (#38737)
* [py-colorama] added version 0.4.6

* [py-colorama] limited py-setuptools dependency
2023-07-19 13:51:25 +02:00
Harmen Stoppels
886946395d drop redundant rpaths post install (#38976)
Spack heuristically adds `<install prefix>/lib` and `<install prefix>/lib64` as rpath entries, as it doesn't know ahead of the build what the install dir is going to be. This PR cleans up non-existent absolute paths[^1], which

1. avoids redundant stat calls at runtime
2. drops redundant rpaths in `patchelf`, making it relocatable -- you don't need patchelf recursively then.

[^1]: It also removes relative paths not starting with `$` (so, `$ORIGIN/../lib` is retained -- we _could_ interpolate `$ORIGIN`, but that's hard to get right when symlinks have to be taken into account). Relative paths _are_ supported in glibc, but are relative to _the current working directory_, which is madness, and it would be better to drop those paths.
2023-07-19 09:48:31 +00:00
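A quick way to see the effect described above, assuming patchelf is available (the paths and output are illustrative, not real):

```console
$ # rpath of a freshly built binary: heuristic entries that may not exist
$ patchelf --print-rpath /path/to/prefix/bin/foo
/path/to/prefix/lib:/path/to/prefix/lib64:$ORIGIN/../lib
$ # after this PR, non-existing absolute entries are pruned post install,
$ # while the $ORIGIN-relative entry is retained
```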
Adam J. Stewart
57b69c9703 py-cython: add v3.0.0 (#38961) 2023-07-19 11:24:35 +02:00
Massimiliano Culpo
f34c93c5f8 llnl.util.lock: add type-hints (#38977)
Also uppercase global variables in the module
2023-07-19 11:23:08 +02:00
Massimiliano Culpo
a7f2abf924 Remove LazyReference from code (#38944)
A LazyReference object is a reference to an attribute of a
lazily evaluated singleton. Its only purpose is to let developers
use shorter names to refer to such an attribute.

This class does more harm than good, as it obfuscates the fact
that we are using the attribute of a global object. Also, it can easily
go out of sync with the singleton it refers to if, for instance, the
singleton is updated but the references are not.

This commit removes the LazyReference class entirely, and accesses
the attributes explicitly, going through the global value to which
they are attached.
2023-07-19 11:08:51 +02:00
Aiden Grossman
a99eaa9541 magma: add package name to conflict messages (#38984)
Without the package name being present in the conflict messages, it is
significantly more difficult to debug concretization failures in
environments that contain many packages.
2023-07-19 03:47:26 -04:00
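The mechanism in question is the msg= argument of the conflicts directive; prefixing it with the package name is what makes the failure traceable. A hedged sketch (the specs below are illustrative, not taken from the real magma recipe; the same applies to the root change in #38920 further down):

```python
from spack.package import *  # standard package.py preamble


class Magma(CMakePackage):
    """Hypothetical excerpt for illustration only."""

    # With the "magma:" prefix, a concretization error in a large
    # environment immediately names the offending package.
    conflicts(
        "+cuda",
        when="cuda_arch=none",
        msg="magma: a CUDA architecture must be specified, e.g. cuda_arch=70",
    )
```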
downloadico
76b6436ade petsc: add version 3.19.3 (#38974) 2023-07-19 08:32:58 +02:00
Emil Briggs
0facda31eb rmgdft: add v5.3.1, v5.4.0 and cuda variant (#37813)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2023-07-19 08:30:37 +02:00
Aiden Grossman
032fd38af0 mesa-glu: Patch register long to long (#38833)
mesa-glu still has a couple of instances of the register keyword, which
cause build failures with clang on my platform. This patch removes the
register keyword, which has no impact on correctness.
2023-07-19 08:29:24 +02:00
Aiden Grossman
b04b3aed9e gperf: patch usage of register keyword (#38893)
gperf still uses the register keyword in one place, which makes
compilation fail with C++17. This commit adds a patch file that removes
the usage of the register keyword so that it compiles properly.
2023-07-19 08:28:57 +02:00
Aiden Grossman
90b2e402f5 elfutils: remove conflicts with clang after version 0.186 (#38945)
In late 2021 elfutils was patched to make it build with clang, and these
patches ended up in version 0.186. This commit updates the conflicts to
specify this so elfutils can be built with clang.
2023-07-19 08:28:13 +02:00
Carlos Bederián
1f17f44def amdfftw: turn conflicts into conditional variants (#38221) 2023-07-19 08:23:40 +02:00
Sebastian Grimberg
cf87d9f199 palace: fix bugs introduced in #38910 (#38983) 2023-07-19 07:21:06 +02:00
Rocco Meli
d7a1a61702 Improve RDKit package (#36566) 2023-07-18 20:57:55 -04:00
markus-ferrell
416edfa229 Windows testing: enable tests for installer components (#36970)
These tests now work without any changes to core. Furthermore, it is
surprising that they had to be disabled (at least as long as the
installer.py tests are run on Windows: these tests are more basic
and their functionality would have been exercised automatically).
2023-07-18 16:19:14 -07:00
Sebastian Grimberg
9beb02ea83 palace: add v0.11.2 (#38910) 2023-07-18 22:29:35 +00:00
Manuela Kuhn
ce4162e28b py-poetry-core: add 1.6.1 and fix url (#38452)
* py-poetry-core: add 1.6.1 and fix url

* Update var/spack/repos/builtin/packages/py-poetry-core/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Re-add python upper bound for older versions

* Update var/spack/repos/builtin/packages/py-poetry-core/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-07-18 18:10:14 -04:00
Massimiliano Culpo
14f3297cca Ensure the bootstrap store has a padding length of zero (#38975)
Without this PR, padded length was propagating from user configuration to
bootstrap configuration, and was causing the issue reported in #38963
2023-07-18 23:49:22 +02:00
markus-ferrell
f24f98a1e2 Windows testing: enable bootstrap test (#36972) 2023-07-18 14:42:26 -07:00
Julien Loiseau
64361e1fc7 FleCSPH: update package (#37888)
Co-authored-by: Richard Berger <richard.berger@outlook.com>
2023-07-18 23:06:36 +02:00
Aiden Grossman
9a05dce3bf fftw: fix build with clang15+ (#38889)
In Clang 15, -Wint-conversion became an error instead of a warning,
breaking the fftw build with clang 15 and newer. This patch fixes fftw
builds with clang 15+ by passing -Wno-error=int-conversion as a cflag.
2023-07-18 23:02:51 +02:00
markus-ferrell
ffc283ab8b test_clear_failures_success: run on Windows too (#36792) 2023-07-18 22:42:57 +02:00
Harmen Stoppels
3fef586cfb binary cache docs: remove redundant flag and comment (#38960) 2023-07-18 22:38:04 +02:00
eugeneswalker
515b53ac50 e4s cray: expand spec list (#38947)
* e4s cray: expand spec list

* unzip: require %gcc

* remove datatransferkit
2023-07-18 20:04:26 +00:00
markus-ferrell
b710778bda Windows testing: enable architecture test (#36973)
Works out of the box: remove skip.
2023-07-18 12:33:52 -07:00
markus-ferrell
a965fe9354 Windows testing: enable "spack clean" tests (#36840)
They work out of the box on Windows. Simply removing skips.
2023-07-18 12:25:32 -07:00
fpruvost
e47a2a7a65 chameleon: update to version 1.2.0 (#38936) 2023-07-18 12:57:42 -04:00
Harmen Stoppels
5b23c5dcc0 buildcache push: make --allow-root the default and deprecate the option (#38878)
Without --allow-root, spack cannot push binaries that contain absolute
install paths. The flag is almost always needed, so there is no point in
requiring users to spell it out.

Even without --allow-root, rpaths would still have to be patched, so the 
flag is not there to guarantee binaries are not modified on install.

This commit makes --allow-root the default, and drops the code 
required for it. It also deprecates `spack buildcache preview`, since 
the command made sense only with --allow-root.

As a side effect, Spack no longer depends on binutils for relocation
2023-07-18 18:45:14 +02:00
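Usage after this change, sketched (the mirror name and spec are illustrative):

```console
$ # before: spack buildcache push --allow-root my-mirror zlib
$ spack buildcache push my-mirror zlib   # --allow-root is now implied and deprecated
```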
Massimiliano Culpo
ad1fdcdf48 Pin Spack dev dependencies on RtD (#38950) 2023-07-18 18:37:04 +02:00
Massimiliano Culpo
82aa27f5a5 Fix default construction of locks (#38953)
This fixes a typo introduced in a refactor
2023-07-18 14:36:41 +02:00
Taillefumier Mathieu
909f185f02 Update dbcsr and cp2k to latest version (#38939)
Co-authored-by: Mikael Simberg <mikael.simberg@iki.fi>
2023-07-18 11:48:42 +02:00
Xavier Delaruelle
8c7adbf8f3 modules: add support for conflict in lua modulefile (#36701)
Add support for conflict directives in Lua modulefiles, as already done
for Tcl modulefiles.

Note that conflicts are correctly honored on Lmod and Environment
Modules <4.2 only if mutually expressed on both modulefiles that
conflict with each other.

Migrate conflict code from Tcl-specific classes to the common part. Add
tests for Lmod and split the conflict test case in two.

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2023-07-18 10:24:46 +02:00
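The configuration surface this enables, as a hedged modules.yaml sketch: the `{name}` token makes each generated module conflict with other versions of itself, a pattern that previously only worked under the tcl section.

```yaml
modules:
  default:
    lmod:
      all:
        # emit a conflict("...") line in every generated Lua modulefile
        conflict: ["{name}"]
```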
Peter Scheibel
10165397da "spack develop": always pull full history for git repos (#38343) 2023-07-18 09:53:33 +02:00
Houjun Tang
af60b802f7 Update package.py (#38946) 2023-07-18 03:17:48 -04:00
Rocco Meli
4e8f91a0c7 wannier90 github (#38927) 2023-07-17 23:52:38 -07:00
Michael Kuhn
7b87f0a569 meson: add 1.2.0, 1.1.1 and 1.0.2 (#38935) 2023-07-17 23:50:39 -07:00
Andrey Perestoronin
377fecd86f added new packages (#38941) 2023-07-18 00:47:46 -04:00
David Huber
2c7df5ce35 Update gsi-ncdiag/1.1.1 sha256 (#38943) 2023-07-18 00:12:43 -04:00
Manuela Kuhn
3bcd1a6c0e py-soupsieve: add 2.4.1 (#38929) 2023-07-17 17:48:27 -05:00
Manuela Kuhn
a468ca402e py-setuptools: add 68.0.0 (#38930)
* py-setuptools: add 68.0.0

* [@spackbot] updating style on behalf of manuelakuhn
2023-07-17 17:47:08 -05:00
Manuela Kuhn
ecb9d35cd4 py-setuptools-rust: add 1.6.0 (#38932) 2023-07-17 17:46:03 -05:00
Manuela Kuhn
d7b2f9d756 py-sphinxcontrib-applehelp: add 1.0.4 (#38933) 2023-07-17 17:44:32 -05:00
Manuela Kuhn
8c303cd29a py-sphinxcontrib-htmlhelp: add 2.0.1 (#38934) 2023-07-17 17:43:37 -05:00
Dan Lipsa
4831d45852 Decompression: fix naming issues (#37749)
* When using system tools to unpack a .gz file, the input file needs a
  different name than the output file. Normally, we generate this new
  name by stripping the .gz extension off of the file name.
  This was not sufficient if the file name did not have an extension,
  so we temporarily rename the file in that case.
* When using system tar utility to untar on Windows, we were (erroneously)
  skipping the actual untar step if the filename was lacking a .tar
  extension
* For foo.txz, we were not changing the extension of the decompressed file
  (i.e. we would decompress foo.txz to foo.txz). This did not cause any
  problems, but is confusing, so has been updated such that the output
  filename reflects its decompressed state (i.e. foo.tar).
* Added test for strip_compression_extension
* Update test_native_unpacking to test each archive type with and without
  an extension as part of the file name (i.e. we test "foo.tar.gz", but
  also make sure we decompress properly if it is named "foo").
2023-07-17 14:33:18 -07:00
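A hedged sketch of the renaming rules described above (the function name and fallback suffix are hypothetical; the real logic lives in Spack's archive utilities):

```python
def decompressed_name(filename: str) -> str:
    """Illustration only: output name when unpacking with system tools."""
    if filename.endswith(".txz"):
        return filename[: -len(".txz")] + ".tar"  # foo.txz -> foo.tar
    if filename.endswith(".gz"):
        return filename[: -len(".gz")]  # foo.tar.gz -> foo.tar
    # No recognized extension: input and output names must still differ,
    # so the real code temporarily renames the input file instead.
    return filename + ".out"
```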
Harmen Stoppels
f05837a480 Fix wrong StageComposite keep override (#38938)
`Stage(keep=True)` was ignored when put in a composite that doesn't
override the value.
2023-07-17 23:20:24 +02:00
Manuela Kuhn
1fa60a6c70 py-pyside: fix build with python3.8 (#38886)
* py-pyside: fix build for version 1.2.2

* Remove check for python version

* Fix style

* Remove unnecessary patch

* Update var/spack/repos/builtin/packages/py-pyside/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update var/spack/repos/builtin/packages/py-pyside/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Remove py-markupsafe conflict

* Update var/spack/repos/builtin/packages/py-pyside/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update var/spack/repos/builtin/packages/py-pyside/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Move python check removal below subprocess patch

* Remove preference of 1.2.2

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-07-17 15:54:59 -05:00
snehring
1730bcaa31 soapdenovo2: strip optimization flags from injected flags (#38846)
* soapdenovo2: strip optimization flags from injected flags
* soapdenovo2: add maintainer
* soapdenovo2: only append on cflags
* soapdenovo2: clean up some wording and implementation
2023-07-17 13:33:37 -07:00
Alberto Sartori
e6235a8ff9 justbuild: add v1.1.3 (#38925) 2023-07-17 13:23:14 -07:00
Manuela Kuhn
884b4952af Fix python import tests (#38928)
Running `spack test run <python package>` resulted in the error
```
'str' object is not callable
```
because the python executable was not set correctly.
2023-07-17 13:19:47 -07:00
Seth R. Johnson
cc73789744 vecgeom: new version 1.2.4 (#38940) 2023-07-17 12:00:57 -07:00
Ashwin Kumar Karnad
c9b7eb3647 libxc: add kxc and lxc variants (#38937)
* libxc: add kxc and lxc variants
* libxc: add kxc and lxc variants for @5:0:
* Apply suggestion from @tldahlgren

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2023-07-17 13:53:29 -04:00
Harmen Stoppels
1100cdf5a0 Enable http/2 support by default in curl (#38750) 2023-07-17 09:21:51 -07:00
Manuela Kuhn
6ac93e1095 librsvg: add 2.56.2 and rust upper version limit for 2.51 (#38766)
* librsvg: add rust upper version limit

* librsvg: Add 2.56.2
2023-07-17 09:14:05 -07:00
Martin Aumüller
193e6e7678 qt-base: fix build on macos, when +network (#38519)
* qt-base: always link to GSS framework on macOS

On macos, the code in src/network/kernel/qauthenticator.cpp
unconditionally includes the header from the GSS framework, so we should
link against it.

This applies two patches from the dev branch. They are to be cherry-picked
into the 6.5 (probably released with 6.5.2) and 6.6 branches, but they
apply against 6.3.2 as well.

* qt-base: disable libproxy on macOS

src/network/CMakeLists.txt disables it on macOS anyway. And as it is not
found without pkg-config, building with +network would break because of
the feature being explicitly enabled.

* qt-base: don't depend on pkgconfig on macOS

On macOS, usage of pkg-config is disabled by unsetting
PKG_CONFIG_EXECUTABLE, unless the feature pkg-config is requested explicitly.

* qt-base: don't depend on at-spi2-core on macOS

Does not build on macOS and seems to be targeted at linux. Qt6 on
homebrew does not depend on it, either.

* qt-base: fix long lines

* qt-base: restrict use of pkgconfig to linux

yes, probably not needed on windows, either

Co-authored-by: Alec Scott <alec@bcs.sh>

* qt-base: disable libproxy on Windows as well

according to src/network/CMakeLists.txt it's only used on Unix

* qt-base: improvements based on reviewer suggestions

---------

Co-authored-by: Alec Scott <alec@bcs.sh>
2023-07-17 08:53:47 -07:00
Elliott Slaughter
dc216adde2 legion: Add 23.06.0, variants for UCX, max nodes, update CUDA version. (#38759)
* legion: Add 23.06.0, variants for UCX, max nodes, update CUDA version.

* legion: Make newer CUDA versions dependent on newer Legion.

* legion: Update CUDA arch list so that we can stop tracking manually.
2023-07-17 08:51:45 -07:00
Maxence Thévenet
bf43471a7c HiPACE++ 23.07 (#38862)
* Update package and fix compilation issues
* fix order
2023-07-17 08:23:27 -07:00
Daniele Cesarini
469f06a8f2 Added py-eprosima-fastdds package (#38877)
* Added py-eprosima-fastdds package

* Fixed python extension and dependency version

* Added build type for swig

* Added minimum cmake support

* Added py-test dependency

* Added suggestion on python extension

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Added suggestion on build type for cmake

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-07-17 11:12:29 -04:00
Massimiliano Culpo
53ae969aa0 Lock, database and store don't need global configuration on construction (#33495)
Lock objects can now be instantiated independently,
without being tied to the global configuration. The
same is true for database and store objects.

The database __init__ method has been simplified to
take a single lock configuration object. Some common
lock configurations (e.g. NO_LOCK or NO_TIMEOUT) have
been named and are provided as globals.

The use_store context manager keeps the configuration
consistent by pushing and popping an internal scope.
It can also be tuned by passing extra data to set up
e.g. upstreams or anything else that might be related
to the store.
2023-07-17 16:51:04 +02:00
Adam J. Stewart
2b5a7bb4d7 Update new PythonPackage template to prefer --config-settings (#38918) 2023-07-17 08:03:05 -05:00
Aiden Grossman
e91db77930 root: Add package name to all conflict messages (#38920)
Not having the package name in the conflict messages can make debugging
conflicts exceedingly hard when trying to concretize an environment with
a sufficient number of packages. This patch adds the package name to all
of the conflict messages so that it is easy to tell just from the
message which package is causing conflicts.
2023-07-17 07:10:36 -05:00
Peter Scheibel
31431f967a Environment/depfile: fix bug with Git hash versions (attempt #2) (#37560)
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
2023-07-17 11:17:32 +00:00
Massimiliano Culpo
63b88c4b75 Minimal cleanup of a few tests in test/packaging.py (#38880)
* Minimal cleanup of a few tests in packaging.py

* Use f-strings
2023-07-17 10:36:29 +02:00
Manuela Kuhn
5e7f989019 py-jinja2: add conflict for py-markupsafe@2.0.2 (#38913)
* py-jinja2: add conflict for py-markupsafe@2.0.2

* Update var/spack/repos/builtin/packages/py-jinja2/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-07-15 18:05:15 -05:00
Manuela Kuhn
319ef0f459 py-shiboken: fix build by restricting dependencies (#38900)
* py-shiboken: fix build by restricting dependencies

* Update var/spack/repos/builtin/packages/py-shiboken/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Remove conflict

* Remove py-markupsafe conflict

* Update var/spack/repos/builtin/packages/py-shiboken/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-07-15 18:57:35 -04:00
Manuela Kuhn
a9d5f24791 py-furo: add new package (#38904) 2023-07-15 17:16:34 -05:00
Manuela Kuhn
74d5da43a8 py-sphinx-rtd-theme: add 1.2.2 and py-sphinxcontrib-jquery: add 4.1 (#38896) 2023-07-15 16:25:58 -05:00
Elliott Slaughter
829b4fe8fe py-cupy: Add 11.3.0, 11.4.0, 11.5.0, 11.6.0, 12.0.0, 12.1.0 (#38911)
* py-cupy: Add 11.3.0, 11.4.0, 11.5.0, 11.6.0, 12.0.0, 12.1.0.

* Clean up version bounds.
2023-07-14 21:12:31 -07:00
Jonas Thies
5672c64356 add a phist patch to avoid trying to compile SSE code if that is not … (#38806)
* add a phist patch to avoid trying to compile SSE code if that is not available.

* phist: make the avoid-sse patch more robust, because the compiler on an ARM system still tried to compile SSE code
2023-07-14 21:10:57 -07:00
Richard Berger
1f58ac5ed3 spiner: update dependencies (#37367)
* spiner: update dependencies
* spiner: add v1.6.1 and updated dependency
2023-07-14 20:07:17 -04:00
Massimiliano Culpo
206a0a1658 legion package: use conditional variants for gasnet (#38902) 2023-07-14 14:44:15 -07:00
Michael Kuhn
b72d0e850d lmdb: add 0.9.31 (#38892) 2023-07-14 11:04:16 -07:00
Thomas Madlener
29835ac343 podio: add 0.16.6 tag and mark older releases as deprecated (#38891)
* podio: Add latest tag
* podio: Deprecate older versions
2023-07-14 11:03:08 -07:00
G-Ragghianti
e276131b2a new release and bug fix on check() (#38901) 2023-07-14 10:57:11 -07:00
Adam J. Stewart
c5f9ae864a GDAL: add v3.7.1 (#38884) 2023-07-14 13:23:01 -04:00
Harmen Stoppels
ac5976d17d Remove unused context manager (#38897) 2023-07-14 18:41:30 +02:00
Vicente Bolea
de719e9a4b adios2: add catalyst variant (#38852) 2023-07-14 08:51:28 -07:00
Harmen Stoppels
f30ede1ab8 ci: remove aws-ahug (#38777) 2023-07-14 10:49:57 -05:00
dependabot[bot]
9e0f1f8998 build(deps): bump actions/setup-python from 4.6.1 to 4.7.0 (#38887)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 4.6.1 to 4.7.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](bd6b4b6205...61a6322f88)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-14 15:48:29 +00:00
Manuela Kuhn
ee335c0d53 py-protobuf: add 4.23.3 (#38614)
* py-protobuf: add 4.23.3

* protobuf: add 3.23.3

* py-protobuf: disable cpp variant for @4.22:
2023-07-14 10:38:23 -05:00
Manuela Kuhn
0c986da030 py-simplejson: add 3.19.1 (#38898) 2023-07-14 10:34:33 -05:00
Adam J. Stewart
0d5d9524f2 py-lightly: add v1.4.12 (#38883) 2023-07-14 10:49:18 -04:00
Taillefumier Mathieu
c94137f6ea Use sirius namespacing for cmake (#38707)
* Use sirius namespacing for cmake

* Formatting

* Fix lapack variables
2023-07-14 10:43:49 -04:00
Wouter Deconinck
9259a6aae4 xrootd: new versions 5.6.0, 5.6.1 (#38844) 2023-07-14 08:50:52 -05:00
Rocco Meli
244dfb3a35 Fix issue on cray with super call (#38895) 2023-07-14 10:00:02 +00:00
Harmen Stoppels
e16397b5d8 disable superlu test (#38894) 2023-07-14 10:53:16 +02:00
Tamara Dahlgren
2e9e7ce7c4 Bugfix/spack spec: read and use the environment concretizer:unification option (#38248)
* Bugfix: spack.yaml concretizer:unify needs to be read and used
* Optional: add environment test to ensure configuration scheme is used
* Activate environment in unit tests
  A more proper solution would be to keep
  an environment instance configuration as
  an attribute, but that is a bigger refactor
* Delay evaluation of Environment.unify
* Slightly simplify unit tests

---------

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2023-07-13 23:43:20 +02:00
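The option being fixed is the per-environment unification setting in spack.yaml, e.g.:

```yaml
spack:
  specs:
  - zlib
  - hdf5
  concretizer:
    unify: true   # or false, or when_possible
```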
Adam J. Stewart
f802b64e7a Update TensorFlow ecosystem (#38747)
* Update TensorFlow ecosystem

* Re-add +cpp

* Do not use system protobuf

* Let bazel auto-detect macOS SDK version

* Unnecessary duplicated dep

* Remove unused import
2023-07-13 14:42:07 -05:00
Sergey Kosukhin
afe6f7ed79 eccodes: fix a few issues in the recipe (#38873)
* explicitly disable the Python 2 interface

* drop obsolete cmake argument HDF5_ROOT for newer versions

* set PYTHON_EXECUTABLE only when needed
2023-07-13 13:53:19 -04:00
Richard Berger
82d41a7be4 FleCSI updates (#38870)
* flecsi: update maintainers
* flecsi: allow newer HPX to be used
* flecsi: propagate ROCm variants when using legion
* flecsi: add v2.2.1
2023-07-13 10:19:11 -07:00
Mikael Simberg
412a09e78b Add keep_werror = "specific" to mpich (#38861) 2023-07-13 09:27:03 -07:00
Harmen Stoppels
fbc0956d19 ccache: 4.8.2 (#38874) 2023-07-13 09:14:06 -07:00
Harmen Stoppels
dcb4bc3c54 ca-certificates-mozilla: add 2023-05-30 (#38875) 2023-07-13 09:12:46 -07:00
Manuela Kuhn
1ac2f34333 foonathan-memory: add 0.7-3 (#38879) 2023-07-13 08:59:14 -07:00
Harmen Stoppels
033eb77aa9 spack buildcache push: improve argparse (#38876) 2023-07-13 16:01:09 +02:00
Harmen Stoppels
522d9e260b mirrors: distinguish between source/binary mirror; simplify schema (#34523)
Allow the following formats:

```yaml
mirrors:
  name: <url>
```

```yaml
mirrors:
  name:
    url: s3://xyz
    access_pair: [x, y]
```

```yaml
mirrors:
  name:
    fetch: http://xyz
    push:
      url: s3://xyz
      access_pair: [x, y]
```

And reserve two new properties to indicate the mirror type (e.g.
mirror.spack.io is a source mirror, not a binary cache)

```yaml
mirrors:
  spack-public:
    source: true
    binary: false
    url: https://mirror.spack.io
```
2023-07-13 11:29:17 +00:00
Massimiliano Culpo
3261889e3a spack audit: allow skipping version checks from package.py (#28372)
A few packages have version directives evaluated
within if statements, conditional on the value of
`platform.platform()`.

Sometimes there are no cases for e.g. platform=darwin and that
causes a lot of spurious failures with version existence
audits.

This PR allows expressing conditions to skip version
existence checks in audits and avoid these spurious reports.
2023-07-13 06:47:47 -04:00
Harmen Stoppels
161b30a32f Add type hints to spack.installer (#38872) 2023-07-13 10:41:19 +00:00
Adam J. Stewart
b67f1f395b Add missing space in error msg (#38863) 2023-07-13 10:24:02 +02:00
dependabot[bot]
33c2fd7228 build(deps): bump docker/setup-buildx-action from 2.9.0 to 2.9.1 (#38868)
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 2.9.0 to 2.9.1.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](2a1a44ac4a...4c0219f9ac)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-13 10:23:22 +02:00
Manuela Kuhn
7d5007b5e4 Restrict py-pip version for packages using --install-option (#38837) 2023-07-13 10:00:40 +02:00
Adam J. Stewart
bb7f437bf5 Standardize subcommand help strings (#38804)
### Rationale

While working on #29549, I noticed a lot of inconsistencies in our argparse help messages. This is important for fish where these help messages end up as descriptions in the tab completion menu. See https://github.com/spack/spack/pull/29549#issuecomment-1627596477 for some examples of longer or more stylized help messages.

### Implementation

This PR makes the following changes:

- [x] help messages start with a lowercase letter.
- [x] Help messages do not end with a period
- [x] the first line of a help message is short and simple

    longer text is separated by an empty line
- [x] "help messages do not use triple quotes" 

    """(except docstrings)"""
- [x] Parentheses not needed for string concatenation inside function call
- [x] Remove "..." "..." string concatenation leftover from black reformatting
- [x] Remove Sphinx argument docs from help messages

The first 2 choices aren't very controversial, and are designed to match the syntax of the `--help` flag automatically added by argparse. The 3rd choice is more up for debate, and is designed to match our package/module docstrings. The 4th choice is designed to avoid excessive newline characters and indentation. We may actually want to go even further and disallow docstrings altogether.

### Alternatives

Choice 3 in particular has a lot of alternatives. My goal is solely to ensure that fish tab completion looks reasonable. Alternatives include:

1. Get rid of long help messages, only allow short simple messages
2. Move longer help messages to epilog
3. Separate by 2 newline characters instead of 1
4. Separate by period instead of newline. First sentence goes into tab completion description

The number of commands with long help text is actually rather small, and is mostly relegated to `spack ci` and `spack buildcache`. So 1 isn't actually as ridiculous as it sounds.

Let me know if there are any other standardizations or alternatives you would like to suggest.
2023-07-13 00:18:23 -07:00
Vicente Bolea
6312ae8464 vtk-m: modernize vtk-m recipe (#38726) 2023-07-13 08:21:38 +02:00
Harmen Stoppels
3828ae2a52 ci: populate caches in before script (#38762)
* ci: run spack list in power ci

Let's see if Spack itself is the bottleneck in CI...

* rebuild curl in CI

* more of the same please!

* drop the profiler

* undo rebuildme test in ci variant

* add comment for posterity

* enable profiling

* trigger CI

* See how it goes now that perf regressions are fixed on develop

* try shorter poll intervals

* Revert "try shorter poll intervals"

This reverts commit d60c34ad3eceead0c13a5277cf8e783fd42b7458.

* Remove spec.format call in Database._get_matching_spec_key

* once more in ci please

* undo irrelevant changes

* run spack list in before script

* test in ci

* -:

* Undo CI testing
2023-07-12 18:12:22 -05:00
dependabot[bot]
c2bdb4600a build(deps): bump docker/setup-buildx-action from 2.8.0 to 2.9.0 (#38783)
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 2.8.0 to 2.9.0.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](16c0bc4a6e...2a1a44ac4a)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-12 17:18:05 -04:00
Jonathon Anderson
90208da8a5 containers: retain shallow git data (#37734) 2023-07-12 21:03:10 +00:00
Michael Fink
c8026c3c87 Add path to MPI executables to ^mpi dependents (#35758) 2023-07-12 13:58:43 -05:00
G-Ragghianti
f895f80bc2 Package papi: update for smoke tests (#38711)
* Adding papi smoke tests
* smoke tests
* update to new test framework
2023-07-12 11:38:57 -07:00
Stephan Grein
0ca11d7033 Fix fmt and spdlog versions for micromamba. (#38739)
The spdlog project states precisely which fmt version should be used for
compatibility. The latest version, 1.11.0, depends explicitly on fmt 9.1.0.
Without a fixed version, the micromamba build fails when using spack
install micromamba on e.g. Rocky Linux 8.5.
2023-07-12 11:24:26 -07:00
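As a hedged package.py sketch of the pin described above (exact version ranges per the real micromamba recipe):

```python
from spack.package import *


class Micromamba(CMakePackage):
    """Hypothetical excerpt for illustration only."""

    # spdlog 1.11.0 explicitly requires fmt 9.1.0, so pin the pair together
    depends_on("spdlog@1.11.0")
    depends_on("fmt@9.1.0")
```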
Dax Lynch
4fa7dc03ae py-callmonitor: added new package (#38764)
* py-callmonitor: added new package

* depends_on numpy

* Update var/spack/repos/builtin/packages/py-callmonitor/package.py

Co-authored-by: Manuela Kuhn <36827019+manuelakuhn@users.noreply.github.com>

---------

Co-authored-by: eugeneswalker <eugenesunsetwalker@gmail.com>
Co-authored-by: eugeneswalker <38933153+eugeneswalker@users.noreply.github.com>
Co-authored-by: Manuela Kuhn <36827019+manuelakuhn@users.noreply.github.com>
2023-07-12 11:19:57 -07:00
Vicente Bolea
5ba99b8eb2 adios2: add aws variant (#38857) 2023-07-12 11:14:47 -07:00
Samuel Browne
78b24b45f6 Add gettext as a dependency of bison (#35979)
The 'bison' executable requires libtextstyle to run.  I think this was
usually satisfied because gettext is often installed with the OS, or
brought in accidentally via perl/m4.

Looks like the libtextstyle library dependency started in Bison 3.4
2023-07-12 14:01:12 -04:00
kjrstory
c2bafd7b7f openfoam-org: add precision option (#38746) 2023-07-12 14:49:45 +02:00
Andre Sailer
778cbb225c Static-analysis-suite: mark versions deprecate for this obsolete package (#38754) 2023-07-12 14:47:03 +02:00
Mikael Simberg
62f24f1b2a Patch broken CMake handling when no architectures are found in HIP package (#37022) 2023-07-12 14:39:41 +02:00
Vicente Bolea
37ef31dc22 vtk-m: correct cuda_arch variant behavior (#38697)
Co-authored-by: eugeneswalker <eugenesunsetwalker@gmail.com>
2023-07-12 14:34:50 +02:00
Mikael Simberg
7f2be62ff2 Add conflict for hpx on ARM (#38812)
HPX requires use of Boost.Context on ARM.
2023-07-12 13:38:57 +02:00
snehring
ca16066eef apptainer: add flag_handler to discard spack flags (#38843) 2023-07-12 13:31:47 +02:00
Vanessasaurus
6a762501f8 flux-sched: add v0.28.0 (#38860)
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
2023-07-12 05:33:48 -04:00
Seth R. Johnson
29fa4bf64c hepmc3: add protobuf variant and update flags (#38841) 2023-07-12 11:21:49 +02:00
Adam J. Stewart
d2ed8c5226 py-torch: rename master to main (#38858) 2023-07-12 11:20:47 +02:00
Massimiliano Culpo
fb223f034b Fix build of CentOS stream docker image (#38824) 2023-07-12 10:55:43 +02:00
Daniele Cesarini
ca4c59cd77 REGALE: add new package (#38444)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2023-07-12 10:21:36 +02:00
Richard Berger
dfcb3bca65 legion: add ofi-slingshot11 conduit (#38859) 2023-07-12 03:53:26 -04:00
Tamara Dahlgren
ce6b79cd96 cbflib: fix build with newer gfortran (#38632) 2023-07-12 09:15:26 +02:00
Rocco Meli
831bfb43f5 DLA-Future: ensure umpire~cuda~rocm when ~cuda~rocm (#38835)
Co-authored-by: Raffaele Solcà <rasolca@cscs.ch>
2023-07-12 09:11:07 +02:00
Wileam Y. Phan
60697b421e fpm: add versions up to 0.9.0 (#38856) 2023-07-12 09:09:36 +02:00
Michael Kuhn
d5d0b8821c installer: Improve status reporting (#37903)
Refactor `TermTitle` into `InstallStatus` and use it to show progress
information both in the terminal title and inline. This also
turns on the terminal title status by default.

The inline output will look like the following after this change:
```
==> Installing m4-1.4.19-w2fxrpuz64zdq63woprqfxxzc3tzu7p3 [4/4]
```
2023-07-12 08:54:45 +02:00
Samuel Browne
d3704130b6 trilinos: Add CMake minimum and 14.2.0 version (#38853)
* Update minimum CMake version for Trilinos
  Changed to 3.23 as of release 14.0.0.
* Add Trilinos 14.2.0
2023-07-12 01:04:07 -04:00
Seth R. Johnson
9ef138dad5 protobuf: use cxxstd from abseil-cpp to fix C++17 build (#38840) 2023-07-11 20:03:51 -04:00
Thomas Madlener
6b51bfb713 edm4hep: Add tag for version 0.10 and deprecate older versions (#38817)
* edm4hep: add latest tag
* edm4hep: Mark all older versions as deprecated
2023-07-11 16:00:06 -07:00
MatthewLieber
fb7cdb0408 Adding sha for the 7.2 release of OMB (#38842)
Co-authored-by: Matt Lieber <lieber.31@osu.edu>
2023-07-11 15:56:25 -07:00
Chris Green
1e1b086484 [procps] Improve gettext/libintl handling (#38646)
Fixes #38639.
2023-07-11 15:35:55 -07:00
Stephen Hudson
0d51faf6cb libEnsemble: add v0.10.1 (#38845) 2023-07-11 18:18:16 -04:00
Manuela Kuhn
ed247744e7 py-charm4py: add missing dependencies (#38830) 2023-07-11 14:49:49 -04:00
Christian Glusa
299066feb5 Py-PyNucleus: Update dependencies, enable parallel build (#38779) 2023-07-11 12:50:41 -04:00
Martin Aumüller
2e695fa03f ispc: on ARM, build with ARM targets enabled, and updates (#38080)
* llvm: fix build with libcxx=none

* ispc: checksum 1.20.0

* ispc: ensure that it does not crash immediately

this would happen if linked to the wrong libc++

* ispc: fix build on macos

find ncurses instead of curses and link against tinfo in order to avoid
unresolved references to _del_curterm, _set_curterm, _setupterm, and
_tigetnum

* ispc: enable arm targets, if building on arm

* ispc: remove double cmake argument

I forgot to remove the constant -DARM_ENABLED=FALSE when adding
-DARM_ENABLED with a value depending on target architecture

* ispc: fix linux build

since 1.20, linux build uses TBB as default tasking system and thus
needs to depend on it

* ispc: try to fix link error on linux

link against both curses (as before) and tinfo (added because of macos)

* ispc: update for recent llvm changes

libcxx=none instead of ~libcxx
2023-07-11 09:44:18 -07:00
Adam J. Stewart
5fc949f252 py-black: add v23.7.0 (#38834) 2023-07-11 09:50:19 -05:00
Adam J. Stewart
e6876f47e6 py-lightning: add v2.0.5 (#38828) 2023-07-11 09:49:50 -05:00
Rocco Meli
364884df97 fix tiled-mm (#38774) 2023-07-11 14:28:37 +02:00
Massimiliano Culpo
d0e39a9870 Add CHANGELOG entry for v0.20.1 (#38836) 2023-07-11 13:35:04 +02:00
Todd Gamblin
162d0926f9 mypy: add more ignored modules to pyproject.toml (#38769)
`mypy` will check *all* imported packages, even optional dependencies outside your
project, and this can cause issues if you are targeting python versions *older* than the
one you're running in. `mypy` will report issues in the latest versions of dependencies
as errors even if installing on some older python would have installed an older version
of the dependency.

We saw this problem before with `numpy` in #34732. We've started seeing it with IPython
in #38704. This fixes the issue by exempting `IPython` and a number of Spack's
other imports from `mypy` checking.
2023-07-11 13:30:07 +02:00
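The usual mechanism for such exemptions, sketched (the module list is abbreviated; see Spack's pyproject.toml for the actual set and options):

```toml
[[tool.mypy.overrides]]
module = ["IPython", "IPython.*", "numpy", "numpy.*"]
ignore_errors = true
ignore_missing_imports = true
```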
Manuela Kuhn
f0ef0ceb34 py-gluoncv: switch to PyPI and add 0.10.5.post0 (#38814)
* py-gluoncv: switch to PyPI and add 0.10.5.post0

* Fix style

* Remove no-unicode-readme.patch
2023-07-10 21:19:15 -05:00
Adam J. Stewart
5ba40913af py-numpy: add v1.25.1 (#38799) 2023-07-10 21:18:39 -05:00
Jen Herting
7288f11cf9 [arrow] tuple has no method append (#38820) 2023-07-10 18:12:08 -07:00
Wouter Deconinck
d5b01e45ce mlpack: remove go variant from cmake_args (#38821) 2023-07-10 18:10:02 -07:00
Manuela Kuhn
8827f01865 py-minkowskiengine: add missing openblas dependency (#38742)
* py-minkowskiengine: add missing openblas dependency

* Add comment about blas
2023-07-10 17:21:16 -05:00
renjithravindrankannath
2163c3701c Setting library path as lib similar to other rocm packages. (#37568)
* Setting library path as lib similar to other rocm packages.
* Fix style check failure
* Restricting changes to 5.4.3 and above
* Including comgr change
2023-07-10 14:55:39 -07:00
afzpatel
db23fd055c new hip-examples package (#35891)
* initial commit for adding hip-examples package
* adding test to hip-examples
* fixed compile error on add4
* change standalone test to use new syntax
2023-07-10 13:05:46 -07:00
Sangu Mbekelu
73acf110ff py-sacrebleu (#37159)
* new mosesdecoder package

* "new py-sacrebleu package"

* Delete package.py

* [@spackbot] updating style on behalf of Sangu-Mbekelu

* Update package.py

updating package based on review

---------

Co-authored-by: Sangu Mbekelu <s.mbekelu9@gmail.com>
2023-07-10 12:16:42 -07:00
kjrstory
ff49969264 Package:Openfoam-org version url using function (#37587)
* Package:Openfoam-org version url using function
* Package:Openfoam-org small style fix
* openfoam-org: url_for_function
2023-07-10 11:19:29 -07:00
eugeneswalker
eb94d830e1 legion +rocm: set HIP_PATH to {hip.prefix}/hip (#38819) 2023-07-10 10:35:12 -07:00
Chris Green
8fdd8fcf63 Perl package: detect opcode support in externals (#38618)
Spack-installed Perl always has opcode support, but external Perl
installations might not. This commit adds a +opcode variant and
updates the external detection logic to check for opcode support.

The postgresql package is updated to require perl+opcode (in
combination with the above, this helps detect when an external
Perl instance is sufficient for a Spack build of postgresql, or
if Spack needs to build its own Perl).
2023-07-10 10:01:20 -07:00
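What requiring perl+opcode looks like in a recipe, as a hedged sketch (the variant and condition below are illustrative, not the exact postgresql package code):

```python
from spack.package import *


class Postgresql(AutotoolsPackage):
    """Hypothetical excerpt for illustration only."""

    variant("perl", default=False, description="build with Perl support")
    # An external Perl without opcode support will no longer satisfy this.
    depends_on("perl+opcode", when="+perl")
```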
Michael Kuhn
30b077e63c glib: add 2.76.4 (#38813) 2023-07-10 09:06:05 -07:00
Taillefumier Mathieu
64c6d63675 Update cp2k recipe to use cmake or the current build system (#35718)
* Update cp2k recipe to use cmake or the current build system

Offers the possibility to build cp2k with the new CMake build system. Commands like this are now supported:

spack install cp2k@master build_system=cmake +.....

the recipe supports the following optional functionalities

- superlu, cosma, sirius, spglib, metis, libxc, libint, cuda/rocm, mkl/openblas/sci (and others), mpi, openmp, dbcsr
- dbcsr is built separately using the currently available recipe.

Two PRs need to be merged to be fully functional (cosma update in spack + one PR in cp2k github).

* Fix indentation

* Fix indentation

* Update libvori

* More typos

* Simplify BLAS/LAPACK

* Simplify BLAS/LAPACK

* Add A100 gpu value

* Fix typo

* Add the enable_regtests option

if -DCP2K_ENABLE_REGTESTS=ON (+enable_regtests with spack) then the location of the binary executables will be in the cp2k root directory under exe/build-cmake-*. This option is needed to run the regtests afterwards.

* Minor update

* more fixes

* Update var/spack/repos/builtin/packages/cp2k/package.py

Co-authored-by: Mikael Simberg <mikael.simberg@iki.fi>

* Update var/spack/repos/builtin/packages/cp2k/package.py

Co-authored-by: Mikael Simberg <mikael.simberg@iki.fi>

* Update var/spack/repos/builtin/packages/cp2k/package.py

Co-authored-by: Mikael Simberg <mikael.simberg@iki.fi>

* Update var/spack/repos/builtin/packages/cp2k/package.py

Co-authored-by: Mikael Simberg <mikael.simberg@iki.fi>

* Update var/spack/repos/builtin/packages/cp2k/package.py

Co-authored-by: Mikael Simberg <mikael.simberg@iki.fi>

* Update var/spack/repos/builtin/packages/cp2k/package.py

Co-authored-by: Mikael Simberg <mikael.simberg@iki.fi>

* small changes

* Remove any reference to nvidia architecture in the rocm list

* Update var/spack/repos/builtin/packages/cp2k/package.py

Co-authored-by: Alberto Invernizzi <9337627+albestro@users.noreply.github.com>

* Update var/spack/repos/builtin/packages/cp2k/package.py

Co-authored-by: Alberto Invernizzi <9337627+albestro@users.noreply.github.com>

* Update var/spack/repos/builtin/packages/cp2k/package.py

Co-authored-by: Alberto Invernizzi <9337627+albestro@users.noreply.github.com>

* Update var/spack/repos/builtin/packages/cp2k/package.py

Co-authored-by: Alberto Invernizzi <9337627+albestro@users.noreply.github.com>

* Update var/spack/repos/builtin/packages/cp2k/package.py

Co-authored-by: Alberto Invernizzi <9337627+albestro@users.noreply.github.com>

* Update var/spack/repos/builtin/packages/cp2k/package.py

Co-authored-by: Alberto Invernizzi <9337627+albestro@users.noreply.github.com>

* Final reformating

* Update py-fypp

---------

Co-authored-by: Mikael Simberg <mikael.simberg@iki.fi>
Co-authored-by: Alberto Invernizzi <9337627+albestro@users.noreply.github.com>
2023-07-10 14:02:52 +02:00
Massimiliano Culpo
0ed6ff3823 Removing inactive maintainer (#38773) 2023-07-10 11:31:09 +02:00
Todd Gamblin
757f8ae59c find: add --hashes shortcut for piping to other commands (#38663)
People frequently ask us how to pipe `spack find` output to other commands, and we tell
them to do things like this:

```console
$ spack find --format "/{hash}" | spack uninstall -ay
```

Sometimes users don't know about hash references and come up with potentially ambiguous
formulations like this:

```console
spack find --format {name}@{version}%{compiler} | spack uninstall -ay
```

Since this is a common enough thing to want to do, and to make it more obvious how, this
PR adds `-H` / `--hashes` as a shortcut, so you can now just do:

```console
spack find -H | spack uninstall -ay
```
2023-07-10 09:43:37 +02:00
Dax Lynch
27c62b981a Added package py-bitstruct (#38761)
* Added packages bitstruct, callmonitor, and PYnvtx

* Revert "Added packages bitstruct, callmonitor, and PYnvtx"

This reverts commit 76d25aa76b.

* py-bitstruct: This module is intended to have a similar interface to the python struct module, but working on bits instead of primitive data types (char, int, …)

* Update package.py

To pass the style prechecks

* PyNVTX: new package

* Delete package.py

Accidentally added this package.

* Update var/spack/repos/builtin/packages/py-bitstruct/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: eugeneswalker <38933153+eugeneswalker@users.noreply.github.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-07-09 20:19:07 -05:00
Manuela Kuhn
1ed934c710 py-triangle: restrict Python version (#38808) 2023-07-09 20:12:42 -04:00
Harmen Stoppels
eef14ddcad openssl: prefer 3.x (#36729)
* openssl: prefer 3.x

This PR is not intended to be merged immediately, but it would be good
to see what packages fail to build in CI so that we can get proper
version constraints on openssl (before all packages update and support
both openssl 1 and 3)

* Disable assembly for 3.x %oneapi

* cmake: depend on spack curl, to deal with curl - openssl compat

* also make zlib external

* remove overly strict & unsafe requirement on py-cryptography patch version number

* update openssl compat bounds in py-cryptography

* smaller diff

* Make libssh2 an autotools/cmake package

* fix weird upper bound in libssh2 as there is no openssl v2

* libssh2: pc file lists plain -lssl -lcrypto w/o leading -L flag, confusing libgit2 parsing of pkg-config output

* Actually fix the issue in libssh2: its pc file looks broken
2023-07-09 17:48:00 -04:00
Jonathon Anderson
db879a5679 ci: Fix broken SPACK_CHECKOUT_VERSION (#38778) 2023-07-09 12:37:36 -07:00
Vanessasaurus
d0804c44f1 Automated deployment to update package flux-core 2023-07-08 (#38790)
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
2023-07-09 12:12:55 -07:00
Harmen Stoppels
374fda1063 Don’t call spec.format in Database._get_matching_spec_key (#38792)
`"%s" % spec` formats the spec with deps included, which produces sometimes KBs
of data and is slow to run in pure Python. It can delay otherwise very short-lived
read/write locks on the database.

Discovered in #38762 where profile output showed about 2 seconds is spent in
`spec.format`, which is significant overhead when using multiprocessing to install
from binary cache in parallel (installation often takes <5s for small packages). With
this change, `spec.format` no longer shows up in profile output.

(This line hasn't changed since Spack v0.9 ;p)

* move format() call to custom NoSuchSpecError exception
* add a comment saying why, so we can eventually change `Spec.__str__`
2023-07-09 11:27:38 -04:00
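A minimal sketch of the pattern adopted (only `NoSuchSpecError` is named in the commit; the rest is illustrative): formatting moves from the hot lookup path to the error path.

```python
class NoSuchSpecError(KeyError):
    """Sketch: raised when a spec is not found in the database."""

    def __init__(self, spec):
        self.spec = spec
        # spec formatting now happens only here, when an error is actually
        # raised, instead of on every database key lookup
        super().__init__(f"no such spec in database: {spec}")
```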
Jonathon Anderson
3c14569b8e pkgconf: Update to new upstream URL (#38800)
See 437c2a3218
2023-07-09 16:12:47 +02:00
Michael Kuhn
841402c57a gcc: add 10.5.0 (#38784) 2023-07-09 12:13:04 +02:00
434 changed files with 11917 additions and 5693 deletions

View File

@@ -17,10 +17,13 @@ concurrency:
 jobs:
   # Run audits on all the packages in the built-in repository
   package-audits:
-    runs-on: ubuntu-latest
+    runs-on: ${{ matrix.operating_system }}
+    strategy:
+      matrix:
+        operating_system: ["ubuntu-latest", "macos-latest"]
     steps:
       - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # @v2
-      - uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0 # @v2
+      - uses: actions/setup-python@61a6322f88396a6271a6ee3565807d608ecaddd1 # @v2
         with:
           python-version: ${{inputs.python_version}}
       - name: Install Python packages
@@ -41,4 +44,4 @@ jobs:
       - uses: codecov/codecov-action@eaaf4bedf32dbdc6b720b63067d99c4d77d6047d # @v2.1.0
         if: ${{ inputs.with_coverage == 'true' }}
         with:
-          flags: unittests,linux,audits
+          flags: unittests,audits

View File

@@ -95,7 +95,7 @@ jobs:
         uses: docker/setup-qemu-action@2b82ce82d56a2a04d2637cd93a637ae1b359c0a7 # @v1
       - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@16c0bc4a6e6ada2cfd8afd41d22d95379cf7c32a # @v1
+        uses: docker/setup-buildx-action@4c0219f9ac95b02789c1075625400b2acbff50b1 # @v1
       - name: Log in to GitHub Container Registry
         uses: docker/login-action@465a07811f14bebb1938fbed4728c6a1ff8901fc # @v1

View File

@@ -17,7 +17,7 @@ jobs:
       - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
         with:
           fetch-depth: 0
-      - uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0
+      - uses: actions/setup-python@61a6322f88396a6271a6ee3565807d608ecaddd1
         with:
           python-version: 3.9
       - name: Install Python packages

View File

@@ -50,7 +50,7 @@ jobs:
       - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # @v2
         with:
           fetch-depth: 0
-      - uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0 # @v2
+      - uses: actions/setup-python@61a6322f88396a6271a6ee3565807d608ecaddd1 # @v2
         with:
           python-version: ${{ matrix.python-version }}
       - name: Install System packages
@@ -97,7 +97,7 @@ jobs:
       - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # @v2
         with:
           fetch-depth: 0
-      - uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0 # @v2
+      - uses: actions/setup-python@61a6322f88396a6271a6ee3565807d608ecaddd1 # @v2
         with:
           python-version: '3.11'
       - name: Install System packages
@@ -155,7 +155,7 @@ jobs:
       - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # @v2
         with:
           fetch-depth: 0
-      - uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0 # @v2
+      - uses: actions/setup-python@61a6322f88396a6271a6ee3565807d608ecaddd1 # @v2
         with:
           python-version: '3.11'
       - name: Install System packages
@@ -189,7 +189,7 @@ jobs:
       - uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # @v2
         with:
           fetch-depth: 0
-      - uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0 # @v2
+      - uses: actions/setup-python@61a6322f88396a6271a6ee3565807d608ecaddd1 # @v2
         with:
           python-version: ${{ matrix.python-version }}
       - name: Install Python packages

View File

@@ -19,7 +19,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # @v2
- uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0 # @v2
- uses: actions/setup-python@61a6322f88396a6271a6ee3565807d608ecaddd1 # @v2
with:
python-version: '3.11'
cache: 'pip'
@@ -38,7 +38,7 @@ jobs:
- uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # @v2
with:
fetch-depth: 0
- uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0 # @v2
- uses: actions/setup-python@61a6322f88396a6271a6ee3565807d608ecaddd1 # @v2
with:
python-version: '3.11'
cache: 'pip'

View File

@@ -18,7 +18,7 @@ jobs:
- uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
fetch-depth: 0
- uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0
- uses: actions/setup-python@61a6322f88396a6271a6ee3565807d608ecaddd1
with:
python-version: 3.9
- name: Install Python packages
@@ -42,7 +42,7 @@ jobs:
- uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
fetch-depth: 0
- uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0
- uses: actions/setup-python@61a6322f88396a6271a6ee3565807d608ecaddd1
with:
python-version: 3.9
- name: Install Python packages
@@ -66,7 +66,7 @@ jobs:
- uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
fetch-depth: 0
- uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0
- uses: actions/setup-python@61a6322f88396a6271a6ee3565807d608ecaddd1
with:
python-version: 3.9
- name: Install Python packages

View File

@@ -1,3 +1,21 @@
# v0.20.1 (2023-07-10)
## Spack Bugfixes
- Specs removed from an environment were not actually removed if `--force` was not given (#37877)
- Speed-up module file generation (#37739)
- Hotfix for a few recipes that treat CMake as a link dependency (#35816)
- Fix re-running stand-alone test a second time, which was getting a trailing spurious failure (#37840)
- Fixed reading the JSON manifest on Cray, which reported non-concrete specs (#37909)
- Fixed a few bugs when generating Dockerfiles from Spack (#37766,#37769)
- Fixed a few long-standing bugs when generating module files (#36678,#38347,#38465,#38455)
- Fixed issues with building Python extensions using an external Python (#38186)
- Fixed compiler removal from command line (#38057)
- Show external status as [e] (#33792)
- Backported `archspec` fixes (#37793)
- Improved a few error messages (#37791)
# v0.20.0 (2023-05-21)
`v0.20.0` is a major feature release.

View File

@@ -216,10 +216,11 @@ config:
# manipulation by unprivileged user (e.g. AFS)
allow_sgid: true
# Whether to set the terminal title to display status information during
# building and installing packages. This gives information about Spack's
# current progress as well as the current and total number of packages.
terminal_title: false
# Whether to show status information during building and installing packages.
# This gives information about Spack's current progress as well as the current
# and total number of packages. Information is shown both in the terminal
# title and inline.
install_status: true
# Number of seconds a buildcache's index.json is cached locally before probing
# for updates, within a single Spack invocation. Defaults to 10 minutes.

View File

@@ -1,2 +1,4 @@
mirrors:
spack-public: https://mirror.spack.io
spack-public:
binary: false
url: https://mirror.spack.io

View File

@@ -48,14 +48,10 @@ Here is an example where a build cache is created in a local directory named
.. code-block:: console
$ spack buildcache push --allow-root ./spack-cache ninja
$ spack buildcache push ./spack-cache ninja
==> Pushing binary packages to file:///home/spackuser/spack/spack-cache/build_cache
Not that ``ninja`` must be installed locally for this to work.
We're using the ``--allow-root`` flag to tell Spack that is OK when any of
the binaries we're pushing contain references to the local Spack install
directory.
Note that ``ninja`` must be installed locally for this to work.
Once you have a build cache, you can add it as a mirror, discussed next.

View File

@@ -214,6 +214,7 @@ def setup(sphinx):
# Spack classes that intersphinx is unable to resolve
("py:class", "spack.version.StandardVersion"),
("py:class", "spack.spec.DependencySpec"),
("py:class", "spack.spec.SpecfileReaderBase"),
("py:class", "spack.install_test.Pb"),
]

View File

@@ -292,12 +292,13 @@ It is also worth noting that:
non_bindable_shared_objects = ["libinterface.so"]
----------------------
``terminal_title``
``install_status``
----------------------
By setting this option to ``true``, Spack will update the terminal's title to
provide information about its current progress as well as the current and
total package numbers.
When set to ``true``, Spack will show information about its current progress
as well as the current and total package numbers. Progress is shown both
in the terminal title and inline. Setting it to ``false`` will not show any
progress information.
To work properly, this requires your terminal to reset its title after
Spack has finished its work, otherwise Spack's status information will

View File

@@ -275,10 +275,12 @@ of the installed software. For instance, in the snippet below:
set:
BAR: 'bar'
# This anonymous spec selects any package that
# depends on openmpi. The double colon at the
# depends on mpi. The double colon at the
# end clears the set of rules that matched so far.
^openmpi::
^mpi::
environment:
prepend_path:
PATH: '{^mpi.prefix}/bin'
set:
BAR: 'baz'
# Selects any zlib package
@@ -293,7 +295,9 @@ of the installed software. For instance, in the snippet below:
- FOOBAR
you are instructing Spack to set the environment variable ``BAR=bar`` for every module,
unless the associated spec satisfies ``^openmpi`` in which case ``BAR=baz``.
unless the associated spec satisfies the abstract dependency ``^mpi`` in which case
``BAR=baz``, and the directory containing the respective MPI executables is prepended
to the ``PATH`` variable.
In addition, in any spec that satisfies ``zlib`` the value ``foo`` will be
prepended to ``LD_LIBRARY_PATH`` and in any spec that satisfies ``zlib%gcc@4.8``
the variable ``FOOBAR`` will be unset.
@@ -396,28 +400,30 @@ that are already in the Lmod hierarchy.
.. note::
Tcl modules
Tcl modules also allow for explicit conflicts between modulefiles.
Tcl and Lua modules also allow for explicit conflicts between modulefiles.
.. code-block:: yaml
.. code-block:: yaml
modules:
default:
enable:
- tcl
tcl:
projections:
all: '{name}/{version}-{compiler.name}-{compiler.version}'
all:
conflict:
- '{name}'
- 'intel/14.0.1'
modules:
default:
enable:
- tcl
tcl:
projections:
all: '{name}/{version}-{compiler.name}-{compiler.version}'
all:
conflict:
- '{name}'
- 'intel/14.0.1'
will create module files that will conflict with ``intel/14.0.1`` and with the
base directory of the same module, effectively preventing the possibility to
load two or more versions of the same software at the same time. The tokens
that are available for use in this directive are the same understood by
the :meth:`~spack.spec.Spec.format` method.
will create module files that will conflict with ``intel/14.0.1`` and with the
base directory of the same module, effectively preventing the possibility to
load two or more versions of the same software at the same time. The tokens
that are available for use in this directive are the same understood by the
:meth:`~spack.spec.Spec.format` method.
For Lmod and Environment Modules versions prior to 4.2, it is important to
express the conflict on both modulefiles conflicting with each other.
.. note::

View File

@@ -6,3 +6,8 @@ python-levenshtein==0.21.1
docutils==0.18.1
pygments==2.15.1
urllib3==2.0.3
pytest==7.4.0
isort==5.12.0
black==23.1.0
flake8==6.0.0
mypy==1.4.1

View File

@@ -9,7 +9,7 @@
import re
import sys
from argparse import ArgumentParser
from typing import IO, Optional, Sequence, Tuple
from typing import IO, Any, Iterable, List, Optional, Sequence, Tuple, Union
class Command:
@@ -25,9 +25,9 @@ def __init__(
prog: str,
description: Optional[str],
usage: str,
positionals: Sequence[Tuple[str, str]],
optionals: Sequence[Tuple[Sequence[str], str, str]],
subcommands: Sequence[Tuple[ArgumentParser, str]],
positionals: List[Tuple[str, Optional[Iterable[Any]], Union[int, str, None], str]],
optionals: List[Tuple[Sequence[str], List[str], str, Union[int, str, None], str]],
subcommands: List[Tuple[ArgumentParser, str, str]],
) -> None:
"""Initialize a new Command instance.
@@ -96,13 +96,30 @@ def parse(self, parser: ArgumentParser, prog: str) -> Command:
if action.option_strings:
flags = action.option_strings
dest_flags = fmt._format_action_invocation(action)
help = self._expand_help(action) if action.help else ""
help = help.replace("\n", " ")
optionals.append((flags, dest_flags, help))
nargs = action.nargs
help = (
self._expand_help(action)
if action.help and action.help != argparse.SUPPRESS
else ""
)
help = help.split("\n")[0]
if action.choices is not None:
dest = [str(choice) for choice in action.choices]
else:
dest = [action.dest]
optionals.append((flags, dest, dest_flags, nargs, help))
elif isinstance(action, argparse._SubParsersAction):
for subaction in action._choices_actions:
subparser = action._name_parser_map[subaction.dest]
subcommands.append((subparser, subaction.dest))
help = (
self._expand_help(subaction)
if subaction.help and action.help != argparse.SUPPRESS
else ""
)
help = help.split("\n")[0]
subcommands.append((subparser, subaction.dest, help))
# Look for aliases of the form 'name (alias, ...)'
if self.aliases and isinstance(subaction.metavar, str):
@@ -111,12 +128,22 @@ def parse(self, parser: ArgumentParser, prog: str) -> Command:
aliases = match.group(2).split(", ")
for alias in aliases:
subparser = action._name_parser_map[alias]
subcommands.append((subparser, alias))
help = (
self._expand_help(subaction)
if subaction.help and action.help != argparse.SUPPRESS
else ""
)
help = help.split("\n")[0]
subcommands.append((subparser, alias, help))
else:
args = fmt._format_action_invocation(action)
help = self._expand_help(action) if action.help else ""
help = help.replace("\n", " ")
positionals.append((args, help))
help = (
self._expand_help(action)
if action.help and action.help != argparse.SUPPRESS
else ""
)
help = help.split("\n")[0]
positionals.append((args, action.choices, action.nargs, help))
return Command(prog, description, usage, positionals, optionals, subcommands)
@@ -146,7 +173,7 @@ def _write(self, parser: ArgumentParser, prog: str, level: int = 0) -> None:
cmd = self.parse(parser, prog)
self.out.write(self.format(cmd))
for subparser, prog in cmd.subcommands:
for subparser, prog, help in cmd.subcommands:
self._write(subparser, prog, level=level + 1)
def write(self, parser: ArgumentParser) -> None:
@@ -205,13 +232,13 @@ def format(self, cmd: Command) -> str:
if cmd.positionals:
string.write(self.begin_positionals())
for args, help in cmd.positionals:
for args, choices, nargs, help in cmd.positionals:
string.write(self.positional(args, help))
string.write(self.end_positionals())
if cmd.optionals:
string.write(self.begin_optionals())
for flags, dest_flags, help in cmd.optionals:
for flags, dest, dest_flags, nargs, help in cmd.optionals:
string.write(self.optional(dest_flags, help))
string.write(self.end_optionals())
@@ -338,7 +365,7 @@ def end_optionals(self) -> str:
"""
return ""
def begin_subcommands(self, subcommands: Sequence[Tuple[ArgumentParser, str]]) -> str:
def begin_subcommands(self, subcommands: List[Tuple[ArgumentParser, str, str]]) -> str:
"""Table with links to other subcommands.
Arguments:
@@ -355,114 +382,8 @@ def begin_subcommands(self, subcommands: Sequence[Tuple[ArgumentParser, str]]) -
"""
for cmd, _ in subcommands:
for cmd, _, _ in subcommands:
prog = re.sub(r"^[^ ]* ", "", cmd.prog)
string += " * :ref:`{0} <{1}>`\n".format(prog, cmd.prog.replace(" ", "-"))
return string + "\n"
class ArgparseCompletionWriter(ArgparseWriter):
"""Write argparse output as shell programmable tab completion functions."""
def format(self, cmd: Command) -> str:
"""Return the string representation of a single node in the parser tree.
Args:
cmd: Parsed information about a command or subcommand.
Returns:
String representation of this subcommand.
"""
assert cmd.optionals # we should always at least have -h, --help
assert not (cmd.positionals and cmd.subcommands) # one or the other
# We only care about the arguments/flags, not the help messages
positionals: Tuple[str, ...] = ()
if cmd.positionals:
positionals, _ = zip(*cmd.positionals)
optionals, _, _ = zip(*cmd.optionals)
subcommands: Tuple[str, ...] = ()
if cmd.subcommands:
_, subcommands = zip(*cmd.subcommands)
# Flatten lists of lists
optionals = [x for xx in optionals for x in xx]
return (
self.start_function(cmd.prog)
+ self.body(positionals, optionals, subcommands)
+ self.end_function(cmd.prog)
)
def start_function(self, prog: str) -> str:
"""Return the syntax needed to begin a function definition.
Args:
prog: Program name.
Returns:
Function definition beginning.
"""
name = prog.replace("-", "_").replace(" ", "_")
return "\n_{0}() {{".format(name)
def end_function(self, prog: str) -> str:
"""Return the syntax needed to end a function definition.
Args:
prog: Program name
Returns:
Function definition ending.
"""
return "}\n"
def body(
self, positionals: Sequence[str], optionals: Sequence[str], subcommands: Sequence[str]
) -> str:
"""Return the body of the function.
Args:
positionals: List of positional arguments.
optionals: List of optional arguments.
subcommands: List of subcommand parsers.
Returns:
Function body.
"""
return ""
def positionals(self, positionals: Sequence[str]) -> str:
"""Return the syntax for reporting positional arguments.
Args:
positionals: List of positional arguments.
Returns:
Syntax for positional arguments.
"""
return ""
def optionals(self, optionals: Sequence[str]) -> str:
"""Return the syntax for reporting optional flags.
Args:
optionals: List of optional arguments.
Returns:
Syntax for optional flags.
"""
return ""
def subcommands(self, subcommands: Sequence[str]) -> str:
"""Return the syntax for reporting subcommands.
Args:
subcommands: List of subcommand parsers.
Returns:
Syntax for subcommand parsers
"""
return ""
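
Taken together, the richer tuples threaded through this diff let a completion
writer decide, per flag, whether a value is expected and which literal choices
to offer. A rough consumer sketch under the tuple shapes shown above
(``emit_optspecs`` is a hypothetical helper, not part of the diff):

.. code-block:: python

   def emit_optspecs(optionals):
       """Yield one fish-style option spec per flag, e.g. 'f/force'."""
       for flags, dest, dest_flags, nargs, help in optionals:
           short = [f.lstrip("-") for f in flags if not f.startswith("--")]
           long = [f.lstrip("-") for f in flags if f.startswith("--")]
           spec = "/".join(short + long)
           # argparse uses nargs == 0 for pure switches like store_true;
           # anything else means the flag expects at least one value.
           if nargs != 0 and dest:
               spec += "=" + ",".join(dest)  # offer known choices, if any
           yield spec
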

View File

@@ -821,7 +821,7 @@ def __getattr__(self, name):
# 'instance'/'_instance' to be defined or it will enter an infinite
# loop, so protect against that here.
if name in ["_instance", "instance"]:
raise AttributeError()
raise AttributeError(f"cannot create {name}")
return getattr(self.instance, name)
def __getitem__(self, name):
@@ -843,27 +843,6 @@ def __repr__(self):
return repr(self.instance)
class LazyReference:
"""Lazily evaluated reference to part of a singleton."""
def __init__(self, ref_function):
self.ref_function = ref_function
def __getattr__(self, name):
if name == "ref_function":
raise AttributeError()
return getattr(self.ref_function(), name)
def __getitem__(self, name):
return self.ref_function()[name]
def __str__(self):
return str(self.ref_function())
def __repr__(self):
return repr(self.ref_function())
def load_module_from_file(module_name, module_path):
"""Loads a python module from the path of the corresponding file.

View File
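
The guard above matters because ``__getattr__`` is only consulted when normal
attribute lookup fails: if ``_instance`` itself is missing, delegating to
``self.instance`` would re-enter ``__getattr__`` endlessly. A stripped-down
version of the same idea (not the full Spack class):

.. code-block:: python

   class Singleton:
       def __init__(self, factory):
           self.factory = factory
           self._instance = None

       @property
       def instance(self):
           if self._instance is None:
               self._instance = self.factory()
           return self._instance

       def __getattr__(self, name):
           # A half-initialized object with no _instance would otherwise
           # bounce between __getattr__ and the instance property forever.
           if name in ("_instance", "instance"):
               raise AttributeError(f"cannot create {name}")
           return getattr(self.instance, name)
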

@@ -9,9 +9,10 @@
import sys
import time
from datetime import datetime
from types import TracebackType
from typing import IO, Any, Callable, ContextManager, Dict, Generator, Optional, Tuple, Type, Union
import llnl.util.tty as tty
from llnl.util.lang import pretty_seconds
from llnl.util import lang, tty
import spack.util.string
@@ -34,9 +35,12 @@
]
#: A useful replacement for callback arguments that should return True when
#: no function is provided (for example, lock release functions).
true_fn = lambda: True
ReleaseFnType = Optional[Callable[[], bool]]
def true_fn() -> bool:
"""A function that always returns True."""
return True
class OpenFile:
@@ -48,7 +52,7 @@ class OpenFile:
file descriptors as well in the future.
"""
def __init__(self, fh):
def __init__(self, fh: IO) -> None:
self.fh = fh
self.refs = 0
@@ -78,11 +82,11 @@ class OpenFileTracker:
work in Python and assume the GIL.
"""
def __init__(self):
def __init__(self) -> None:
"""Create a new ``OpenFileTracker``."""
self._descriptors = {}
self._descriptors: Dict[Any, OpenFile] = {}
def get_fh(self, path):
def get_fh(self, path: str) -> IO:
"""Get a filehandle for a lockfile.
This routine will open writable files for read/write even if you're asking
@@ -90,7 +94,7 @@ def get_fh(self, path):
(write) lock later if requested.
Arguments:
path (str): path to lock file we want a filehandle for
path: path to lock file we want a filehandle for
"""
# Open writable files as 'r+' so we can upgrade to write later
os_mode, fh_mode = (os.O_RDWR | os.O_CREAT), "r+"
@@ -157,7 +161,7 @@ def purge(self):
#: Open file descriptors for locks in this process. Used to prevent one process
# from opening the same file many times for different byte range locks
file_tracker = OpenFileTracker()
FILE_TRACKER = OpenFileTracker()
def _attempts_str(wait_time, nattempts):
@@ -166,7 +170,7 @@ def _attempts_str(wait_time, nattempts):
return ""
attempts = spack.util.string.plural(nattempts, "attempt")
return " after {} and {}".format(pretty_seconds(wait_time), attempts)
return " after {} and {}".format(lang.pretty_seconds(wait_time), attempts)
class LockType:
@@ -188,7 +192,7 @@ def to_module(tid):
return lock
@staticmethod
def is_valid(op):
def is_valid(op: int) -> bool:
return op == LockType.READ or op == LockType.WRITE
@@ -207,7 +211,16 @@ class Lock:
overlapping byte ranges in the same file).
"""
def __init__(self, path, start=0, length=0, default_timeout=None, debug=False, desc=""):
def __init__(
self,
path: str,
*,
start: int = 0,
length: int = 0,
default_timeout: Optional[float] = None,
debug: bool = False,
desc: str = "",
) -> None:
"""Construct a new lock on the file at ``path``.
By default, the lock applies to the whole file. Optionally,
@@ -220,17 +233,17 @@ def __init__(self, path, start=0, length=0, default_timeout=None, debug=False, d
beginning of the file.
Args:
path (str): path to the lock
start (int): optional byte offset at which the lock starts
length (int): optional number of bytes to lock
default_timeout (int): number of seconds to wait for lock attempts,
path: path to the lock
start: optional byte offset at which the lock starts
length: optional number of bytes to lock
default_timeout: seconds to wait for lock attempts,
where None means to wait indefinitely
debug (bool): debug mode specific to locking
desc (str): optional debug message lock description, which is
debug: debug mode specific to locking
desc: optional debug message lock description, which is
helpful for distinguishing between different Spack locks.
"""
self.path = path
self._file = None
self._file: Optional[IO] = None
self._reads = 0
self._writes = 0
@@ -242,7 +255,7 @@ def __init__(self, path, start=0, length=0, default_timeout=None, debug=False, d
self.debug = debug
# optional debug description
self.desc = " ({0})".format(desc) if desc else ""
self.desc = f" ({desc})" if desc else ""
# If the user doesn't set a default timeout, or if they choose
# None, 0, etc. then lock attempts will not time out (unless the
@@ -250,11 +263,15 @@ def __init__(self, path, start=0, length=0, default_timeout=None, debug=False, d
self.default_timeout = default_timeout or None
# PID and host of lock holder (only used in debug mode)
self.pid = self.old_pid = None
self.host = self.old_host = None
self.pid: Optional[int] = None
self.old_pid: Optional[int] = None
self.host: Optional[str] = None
self.old_host: Optional[str] = None
@staticmethod
def _poll_interval_generator(_wait_times=None):
def _poll_interval_generator(
_wait_times: Optional[Tuple[float, float, float]] = None
) -> Generator[float, None, None]:
"""This implements a backoff scheme for polling a contended resource
by suggesting a succession of wait times between polls.
@@ -277,21 +294,21 @@ def _poll_interval_generator(_wait_times=None):
num_requests += 1
yield wait_time
def __repr__(self):
def __repr__(self) -> str:
"""Formal representation of the lock."""
rep = "{0}(".format(self.__class__.__name__)
for attr, value in self.__dict__.items():
rep += "{0}={1}, ".format(attr, value.__repr__())
return "{0})".format(rep.strip(", "))
def __str__(self):
def __str__(self) -> str:
"""Readable string (with key fields) of the lock."""
location = "{0}[{1}:{2}]".format(self.path, self._start, self._length)
timeout = "timeout={0}".format(self.default_timeout)
activity = "#reads={0}, #writes={1}".format(self._reads, self._writes)
return "({0}, {1}, {2})".format(location, timeout, activity)
def _lock(self, op, timeout=None):
def _lock(self, op: int, timeout: Optional[float] = None) -> Tuple[float, int]:
"""This takes a lock using POSIX locks (``fcntl.lockf``).
The lock is implemented as a spin lock using a nonblocking call
@@ -310,7 +327,7 @@ def _lock(self, op, timeout=None):
# Create file and parent directories if they don't exist.
if self._file is None:
self._ensure_parent_directory()
self._file = file_tracker.get_fh(self.path)
self._file = FILE_TRACKER.get_fh(self.path)
if LockType.to_module(op) == fcntl.LOCK_EX and self._file.mode == "r":
# Attempt to upgrade to write lock w/a read-only file.
@@ -319,7 +336,7 @@ def _lock(self, op, timeout=None):
self._log_debug(
"{} locking [{}:{}]: timeout {}".format(
op_str.lower(), self._start, self._length, pretty_seconds(timeout or 0)
op_str.lower(), self._start, self._length, lang.pretty_seconds(timeout or 0)
)
)
@@ -343,15 +360,20 @@ def _lock(self, op, timeout=None):
total_wait_time = time.time() - start_time
raise LockTimeoutError(op_str.lower(), self.path, total_wait_time, num_attempts)
def _poll_lock(self, op):
def _poll_lock(self, op: int) -> bool:
"""Attempt to acquire the lock in a non-blocking manner. Return whether
the locking attempt succeeds
"""
assert self._file is not None, "cannot poll a lock without the file being set"
module_op = LockType.to_module(op)
try:
# Try to get the lock (will raise if not available.)
fcntl.lockf(
self._file, module_op | fcntl.LOCK_NB, self._length, self._start, os.SEEK_SET
self._file.fileno(),
module_op | fcntl.LOCK_NB,
self._length,
self._start,
os.SEEK_SET,
)
# help for debugging distributed locking
@@ -377,7 +399,7 @@ def _poll_lock(self, op):
return False
def _ensure_parent_directory(self):
def _ensure_parent_directory(self) -> str:
parent = os.path.dirname(self.path)
# relative paths to lockfiles in the current directory have no parent
@@ -396,20 +418,22 @@ def _ensure_parent_directory(self):
raise
return parent
def _read_log_debug_data(self):
def _read_log_debug_data(self) -> None:
"""Read PID and host data out of the file if it is there."""
assert self._file is not None, "cannot read debug log without the file being set"
self.old_pid = self.pid
self.old_host = self.host
line = self._file.read()
if line:
pid, host = line.strip().split(",")
_, _, self.pid = pid.rpartition("=")
_, _, pid = pid.rpartition("=")
_, _, self.host = host.rpartition("=")
self.pid = int(self.pid)
self.pid = int(pid)
def _write_log_debug_data(self):
def _write_log_debug_data(self) -> None:
"""Write PID and host data to the file, recording old values."""
assert self._file is not None, "cannot write debug log without the file being set"
self.old_pid = self.pid
self.old_host = self.host
@@ -423,20 +447,21 @@ def _write_log_debug_data(self):
self._file.flush()
os.fsync(self._file.fileno())
def _unlock(self):
def _unlock(self) -> None:
"""Releases a lock using POSIX locks (``fcntl.lockf``)
Releases the lock regardless of mode. Note that read locks may
be masquerading as write locks, but this removes either.
"""
fcntl.lockf(self._file, fcntl.LOCK_UN, self._length, self._start, os.SEEK_SET)
file_tracker.release_by_fh(self._file)
assert self._file is not None, "cannot unlock without the file being set"
fcntl.lockf(self._file.fileno(), fcntl.LOCK_UN, self._length, self._start, os.SEEK_SET)
FILE_TRACKER.release_by_fh(self._file)
self._file = None
self._reads = 0
self._writes = 0
def acquire_read(self, timeout=None):
def acquire_read(self, timeout: Optional[float] = None) -> bool:
"""Acquires a recursive, shared lock for reading.
Read and write locks can be acquired and released in arbitrary
@@ -461,7 +486,7 @@ def acquire_read(self, timeout=None):
self._reads += 1
return False
def acquire_write(self, timeout=None):
def acquire_write(self, timeout: Optional[float] = None) -> bool:
"""Acquires a recursive, exclusive lock for writing.
Read and write locks can be acquired and released in arbitrary
@@ -491,7 +516,7 @@ def acquire_write(self, timeout=None):
self._writes += 1
return False
def is_write_locked(self):
def is_write_locked(self) -> bool:
"""Check if the file is write locked
Return:
@@ -508,7 +533,7 @@ def is_write_locked(self):
return False
def downgrade_write_to_read(self, timeout=None):
def downgrade_write_to_read(self, timeout: Optional[float] = None) -> None:
"""
Downgrade from an exclusive write lock to a shared read.
@@ -527,7 +552,7 @@ def downgrade_write_to_read(self, timeout=None):
else:
raise LockDowngradeError(self.path)
def upgrade_read_to_write(self, timeout=None):
def upgrade_read_to_write(self, timeout: Optional[float] = None) -> None:
"""
Attempts to upgrade from a shared read lock to an exclusive write.
@@ -546,7 +571,7 @@ def upgrade_read_to_write(self, timeout=None):
else:
raise LockUpgradeError(self.path)
def release_read(self, release_fn=None):
def release_read(self, release_fn: ReleaseFnType = None) -> bool:
"""Releases a read lock.
Arguments:
@@ -582,7 +607,7 @@ def release_read(self, release_fn=None):
self._reads -= 1
return False
def release_write(self, release_fn=None):
def release_write(self, release_fn: ReleaseFnType = None) -> bool:
"""Releases a write lock.
Arguments:
@@ -623,58 +648,58 @@ def release_write(self, release_fn=None):
else:
return False
def cleanup(self):
def cleanup(self) -> None:
if self._reads == 0 and self._writes == 0:
os.unlink(self.path)
else:
raise LockError("Attempting to cleanup active lock.")
def _get_counts_desc(self):
def _get_counts_desc(self) -> str:
return (
"(reads {0}, writes {1})".format(self._reads, self._writes) if tty.is_verbose() else ""
)
def _log_acquired(self, locktype, wait_time, nattempts):
def _log_acquired(self, locktype, wait_time, nattempts) -> None:
attempts_part = _attempts_str(wait_time, nattempts)
now = datetime.now()
desc = "Acquired at %s" % now.strftime("%H:%M:%S.%f")
self._log_debug(self._status_msg(locktype, "{0}{1}".format(desc, attempts_part)))
def _log_acquiring(self, locktype):
def _log_acquiring(self, locktype) -> None:
self._log_debug(self._status_msg(locktype, "Acquiring"), level=3)
def _log_debug(self, *args, **kwargs):
def _log_debug(self, *args, **kwargs) -> None:
"""Output lock debug messages."""
kwargs["level"] = kwargs.get("level", 2)
tty.debug(*args, **kwargs)
def _log_downgraded(self, wait_time, nattempts):
def _log_downgraded(self, wait_time, nattempts) -> None:
attempts_part = _attempts_str(wait_time, nattempts)
now = datetime.now()
desc = "Downgraded at %s" % now.strftime("%H:%M:%S.%f")
self._log_debug(self._status_msg("READ LOCK", "{0}{1}".format(desc, attempts_part)))
def _log_downgrading(self):
def _log_downgrading(self) -> None:
self._log_debug(self._status_msg("WRITE LOCK", "Downgrading"), level=3)
def _log_released(self, locktype):
def _log_released(self, locktype) -> None:
now = datetime.now()
desc = "Released at %s" % now.strftime("%H:%M:%S.%f")
self._log_debug(self._status_msg(locktype, desc))
def _log_releasing(self, locktype):
def _log_releasing(self, locktype) -> None:
self._log_debug(self._status_msg(locktype, "Releasing"), level=3)
def _log_upgraded(self, wait_time, nattempts):
def _log_upgraded(self, wait_time, nattempts) -> None:
attempts_part = _attempts_str(wait_time, nattempts)
now = datetime.now()
desc = "Upgraded at %s" % now.strftime("%H:%M:%S.%f")
self._log_debug(self._status_msg("WRITE LOCK", "{0}{1}".format(desc, attempts_part)))
def _log_upgrading(self):
def _log_upgrading(self) -> None:
self._log_debug(self._status_msg("READ LOCK", "Upgrading"), level=3)
def _status_msg(self, locktype, status):
def _status_msg(self, locktype: str, status: str) -> str:
status_desc = "[{0}] {1}".format(status, self._get_counts_desc())
return "{0}{1.desc}: {1.path}[{1._start}:{1._length}] {2}".format(
locktype, self, status_desc
@@ -709,7 +734,13 @@ class LockTransaction:
"""
def __init__(self, lock, acquire=None, release=None, timeout=None):
def __init__(
self,
lock: Lock,
acquire: Union[ReleaseFnType, ContextManager] = None,
release: Union[ReleaseFnType, ContextManager] = None,
timeout: Optional[float] = None,
) -> None:
self._lock = lock
self._timeout = timeout
self._acquire_fn = acquire
@@ -724,15 +755,20 @@ def __enter__(self):
else:
return self._as
def __exit__(self, type, value, traceback):
def __exit__(
self,
exc_type: Optional[Type[BaseException]],
exc_value: Optional[BaseException],
traceback: Optional[TracebackType],
) -> bool:
suppress = False
def release_fn():
if self._release_fn is not None:
return self._release_fn(type, value, traceback)
return self._release_fn(exc_type, exc_value, traceback)
if self._as and hasattr(self._as, "__exit__"):
if self._as.__exit__(type, value, traceback):
if self._as.__exit__(exc_type, exc_value, traceback):
suppress = True
if self._exit(release_fn):
@@ -740,6 +776,12 @@ def release_fn():
return suppress
def _enter(self) -> bool:
return NotImplemented
def _exit(self, release_fn: ReleaseFnType) -> bool:
return NotImplemented
class ReadTransaction(LockTransaction):
"""LockTransaction context manager that does a read and releases it."""
@@ -785,7 +827,7 @@ def __init__(self, lock_type, path, time, attempts):
super().__init__(
fmt.format(
lock_type,
pretty_seconds(time),
lang.pretty_seconds(time),
attempts,
"attempt" if attempts == 1 else "attempts",
path,

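
The now-typed ``_poll_interval_generator`` implements a backoff schedule for a
contended lock: poll quickly at first, then settle into longer waits so that
spinning processes stop hammering the filesystem. A self-contained sketch of
the idea (the schedule and attempt counts below are illustrative, not Spack's
actual values):

.. code-block:: python

   import time

   def poll_intervals(wait_times=(0.1, 0.5, 1.0)):
       """Yield wait times: a burst of fast polls, then medium, then slow."""
       fast, medium, slow = wait_times
       for _ in range(10):
           yield fast
       for _ in range(20):
           yield medium
       while True:
           yield slow

   def acquire_with_backoff(try_lock, timeout=5.0):
       """Spin on try_lock() with the backoff schedule until timeout."""
       start = time.time()
       for interval in poll_intervals():
           if try_lock():
               return True
           if time.time() - start > timeout:
               return False
           time.sleep(interval)
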
View File

@@ -725,11 +725,22 @@ def _version_constraints_are_satisfiable_by_some_version_in_repo(pkgs, error_cls
dependencies_to_check.extend([edge.spec for edge in dependency_data.values()])
host_architecture = spack.spec.ArchSpec.default_arch()
for s in dependencies_to_check:
dependency_pkg_cls = None
try:
dependency_pkg_cls = spack.repo.path.get_pkg_class(s.name)
assert any(v.intersects(s.versions) for v in list(dependency_pkg_cls.versions))
# Some packages have hacks that might cause failures on some platforms.
# Packages can explicitly set conditions to skip version checks in that case.
skip_conditions = getattr(dependency_pkg_cls, "skip_version_audit", [])
skip_version_check = False
for condition in skip_conditions:
if host_architecture.satisfies(spack.spec.Spec(condition).architecture):
skip_version_check = True
break
assert skip_version_check or any(
v.intersects(s.versions) for v in list(dependency_pkg_cls.versions)
)
except Exception:
summary = (
"{0}: dependency on {1} cannot be satisfied " "by known versions of {1.name}"

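
Distilled from the hunk above: a package class may define
``skip_version_audit``, a list of spec strings, and the audit skips the
version-existence assertion when the host architecture satisfies one of them.
Roughly, as a standalone restatement rather than the exact audit code:

.. code-block:: python

   def should_skip_version_check(pkg_cls, host_architecture):
       """True if any skip_version_audit condition matches this host."""
       import spack.spec  # assumed importable in an audit context

       for condition in getattr(pkg_cls, "skip_version_audit", []):
           if host_architecture.satisfies(spack.spec.Spec(condition).architecture):
               return True
       return False
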
View File

@@ -61,6 +61,22 @@
_build_cache_keys_relative_path = "_pgp"
class BuildCacheDatabase(spack_db.Database):
"""A database for binary buildcaches.
A database supports writing buildcache index files, in which case certain fields are not
needed in each install record, and no locking is required. To use this feature, it passes
``lock_cfg=NO_LOCK`` and overrides the list of ``record_fields``.
"""
record_fields = ("spec", "ref_count", "in_buildcache")
def __init__(self, root):
super().__init__(root, lock_cfg=spack_db.NO_LOCK)
self._write_transaction_impl = llnl.util.lang.nullcontext
self._read_transaction_impl = llnl.util.lang.nullcontext
class FetchCacheError(Exception):
"""Error thrown when fetching the cache failed, usually a composite error list."""
@@ -190,8 +206,7 @@ def _associate_built_specs_with_mirror(self, cache_key, mirror_url):
tmpdir = tempfile.mkdtemp()
try:
db_root_dir = os.path.join(tmpdir, "db_root")
db = spack_db.Database(None, db_dir=db_root_dir, enable_transaction_locking=False)
db = BuildCacheDatabase(tmpdir)
try:
self._index_file_cache.init_entry(cache_key)
@@ -317,9 +332,9 @@ def update(self, with_cooldown=False):
from each configured mirror and stored locally (both in memory and
on disk under ``_index_cache_root``)."""
self._init_local_index_cache()
mirrors = spack.mirror.MirrorCollection()
configured_mirror_urls = [m.fetch_url for m in mirrors.values()]
configured_mirror_urls = [
m.fetch_url for m in spack.mirror.MirrorCollection(binary=True).values()
]
items_to_remove = []
spec_cache_clear_needed = False
spec_cache_regenerate_needed = not self._mirrors_for_spec
@@ -703,7 +718,7 @@ def get_buildfile_manifest(spec):
# look for them to decide if text file needs to be relocated or not
prefixes = [d.prefix for d in spec.traverse(root=True, deptype="all") if not d.external]
prefixes.append(spack.hooks.sbang.sbang_install_path())
prefixes.append(str(spack.store.layout.root))
prefixes.append(str(spack.store.STORE.layout.root))
# Create a giant regex that matches all prefixes
regex = utf8_paths_to_single_binary_regex(prefixes)
@@ -716,7 +731,7 @@ def get_buildfile_manifest(spec):
for rel_path in visitor.symlinks:
abs_path = os.path.join(root, rel_path)
link = os.readlink(abs_path)
if os.path.isabs(link) and link.startswith(spack.store.layout.root):
if os.path.isabs(link) and link.startswith(spack.store.STORE.layout.root):
data["link_to_relocate"].append(rel_path)
# Non-symlinks.
@@ -764,9 +779,9 @@ def get_buildinfo_dict(spec):
return {
"sbang_install_path": spack.hooks.sbang.sbang_install_path(),
"buildpath": spack.store.layout.root,
"buildpath": spack.store.STORE.layout.root,
"spackprefix": spack.paths.prefix,
"relative_prefix": os.path.relpath(spec.prefix, spack.store.layout.root),
"relative_prefix": os.path.relpath(spec.prefix, spack.store.STORE.layout.root),
"relocate_textfiles": manifest["text_to_relocate"],
"relocate_binaries": manifest["binary_to_relocate"],
"relocate_links": manifest["link_to_relocate"],
@@ -1059,13 +1074,10 @@ def generate_package_index(cache_prefix, concurrency=32):
tty.debug("Retrieving spec descriptor files from {0} to build index".format(cache_prefix))
tmpdir = tempfile.mkdtemp()
db_root_dir = os.path.join(tmpdir, "db_root")
db = spack_db.Database(
None,
db_dir=db_root_dir,
enable_transaction_locking=False,
record_fields=["spec", "ref_count", "in_buildcache"],
)
db = BuildCacheDatabase(tmpdir)
db.root = None
db_root_dir = db.database_directory
try:
_read_specs_and_push_index(file_list, read_fn, cache_prefix, db, db_root_dir, concurrency)
@@ -1206,9 +1218,6 @@ class PushOptions(NamedTuple):
#: Overwrite existing tarball/metadata files in buildcache
force: bool = False
#: Allow absolute paths to package prefixes when creating a tarball
allow_root: bool = False
#: Regenerate indices after pushing
regenerate_index: bool = False
@@ -1253,7 +1262,7 @@ def _build_tarball_in_stage_dir(spec: Spec, out_url: str, stage_dir: str, option
# without concretizing with the current spack packages
# and preferences
spec_file = spack.store.layout.spec_file_path(spec)
spec_file = spack.store.STORE.layout.spec_file_path(spec)
specfile_name = tarball_name(spec, ".spec.json")
specfile_path = os.path.realpath(os.path.join(cache_prefix, specfile_name))
signed_specfile_path = "{0}.sig".format(specfile_path)
@@ -1281,9 +1290,6 @@ def _build_tarball_in_stage_dir(spec: Spec, out_url: str, stage_dir: str, option
# create info for later relocation and create tar
buildinfo = get_buildinfo_dict(spec)
if not options.allow_root:
ensure_package_relocatable(buildinfo, binaries_dir)
_do_create_tarball(tarfile_path, binaries_dir, pkg_dir, buildinfo)
# get the sha256 checksum of the tarball
@@ -1305,7 +1311,7 @@ def _build_tarball_in_stage_dir(spec: Spec, out_url: str, stage_dir: str, option
# Add original install prefix relative to layout root to spec.json.
# This will be used to determine if the directory layout has changed.
buildinfo = {}
buildinfo["relative_prefix"] = os.path.relpath(spec.prefix, spack.store.layout.root)
buildinfo["relative_prefix"] = os.path.relpath(spec.prefix, spack.store.STORE.layout.root)
spec_dict["buildinfo"] = buildinfo
with open(specfile_path, "w") as outfile:
@@ -1363,7 +1369,7 @@ def specs_to_be_packaged(
packageable = lambda n: not n.external and n.installed
# Mass install check
with spack.store.db.read_transaction():
with spack.store.STORE.db.read_transaction():
return list(filter(packageable, nodes))
@@ -1465,8 +1471,9 @@ def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
"signature_verified": "true-if-binary-pkg-was-already-verified"
}
"""
if not spack.mirror.MirrorCollection():
tty.die("Please add a spack mirror to allow " + "download of pre-compiled packages.")
configured_mirrors = spack.mirror.MirrorCollection(binary=True).values()
if not configured_mirrors:
tty.die("Please add a spack mirror to allow download of pre-compiled packages.")
tarball = tarball_path_name(spec, ".spack")
specfile_prefix = tarball_name(spec, ".spec")
@@ -1483,11 +1490,7 @@ def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
# we need was in an un-indexed mirror. No need to check any
# mirror for the spec twice though.
try_first = [i["mirror_url"] for i in mirrors_for_spec] if mirrors_for_spec else []
try_next = [
i.fetch_url
for i in spack.mirror.MirrorCollection().values()
if i.fetch_url not in try_first
]
try_next = [i.fetch_url for i in configured_mirrors if i.fetch_url not in try_first]
for url in try_first + try_next:
mirrors_to_try.append(
@@ -1565,12 +1568,6 @@ def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
return None
def ensure_package_relocatable(buildinfo, binaries_dir):
"""Check if package binaries are relocatable."""
binaries = [os.path.join(binaries_dir, f) for f in buildinfo["relocate_binaries"]]
relocate.ensure_binaries_are_relocatable(binaries)
def dedupe_hardlinks_if_necessary(root, buildinfo):
"""Updates a buildinfo dict for old archives that did
not dedupe hardlinks. De-duping hardlinks is necessary
@@ -1609,7 +1606,7 @@ def relocate_package(spec):
"""
workdir = str(spec.prefix)
buildinfo = read_buildinfo_file(workdir)
new_layout_root = str(spack.store.layout.root)
new_layout_root = str(spack.store.STORE.layout.root)
new_prefix = str(spec.prefix)
new_rel_prefix = str(os.path.relpath(new_prefix, new_layout_root))
new_spack_prefix = str(spack.paths.prefix)
@@ -1857,7 +1854,7 @@ def extract_tarball(spec, download_result, unsigned=False, force=False):
tarfile_path, size, contents, "sha256", expected, local_checksum
)
new_relative_prefix = str(os.path.relpath(spec.prefix, spack.store.layout.root))
new_relative_prefix = str(os.path.relpath(spec.prefix, spack.store.STORE.layout.root))
# if the original relative prefix is in the spec file use it
buildinfo = spec_dict.get("buildinfo", {})
old_relative_prefix = buildinfo.get("relative_prefix", new_relative_prefix)
@@ -1869,7 +1866,7 @@ def extract_tarball(spec, download_result, unsigned=False, force=False):
# The directory created is the base directory name of the old prefix.
# Moving the old prefix name to the new prefix location should preserve
# hard links and symbolic links.
extract_tmp = os.path.join(spack.store.layout.root, ".tmp")
extract_tmp = os.path.join(spack.store.STORE.layout.root, ".tmp")
mkdirp(extract_tmp)
extracted_dir = os.path.join(extract_tmp, old_relative_prefix.split(os.path.sep)[-1])
@@ -1896,7 +1893,9 @@ def extract_tarball(spec, download_result, unsigned=False, force=False):
raise e
else:
manifest_file = os.path.join(
spec.prefix, spack.store.layout.metadata_dir, spack.store.layout.manifest_file_name
spec.prefix,
spack.store.STORE.layout.metadata_dir,
spack.store.STORE.layout.manifest_file_name,
)
if not os.path.exists(manifest_file):
spec_id = spec.format("{name}/{hash:7}")
@@ -1955,7 +1954,7 @@ def install_root_node(spec, unsigned=False, force=False, sha256=None):
tty.msg('Installing "{0}" from a buildcache'.format(spec.format()))
extract_tarball(spec, download_result, unsigned, force)
spack.hooks.post_install(spec, False)
spack.store.db.add(spec, spack.store.layout)
spack.store.STORE.db.add(spec, spack.store.STORE.layout)
def install_single_spec(spec, unsigned=False, force=False):
@@ -1980,7 +1979,9 @@ def try_direct_fetch(spec, mirrors=None):
specfile_is_signed = False
found_specs = []
for mirror in spack.mirror.MirrorCollection(mirrors=mirrors).values():
binary_mirrors = spack.mirror.MirrorCollection(mirrors=mirrors, binary=True).values()
for mirror in binary_mirrors:
buildcache_fetch_url_json = url_util.join(
mirror.fetch_url, _build_cache_relative_path, specfile_name
)
@@ -2043,7 +2044,7 @@ def get_mirrors_for_spec(spec=None, mirrors_to_check=None, index_only=False):
if spec is None:
return []
if not spack.mirror.MirrorCollection(mirrors=mirrors_to_check):
if not spack.mirror.MirrorCollection(mirrors=mirrors_to_check, binary=True):
tty.debug("No Spack mirrors are currently configured")
return {}
@@ -2082,7 +2083,7 @@ def clear_spec_cache():
def get_keys(install=False, trust=False, force=False, mirrors=None):
"""Get pgp public keys available on mirror with suffix .pub"""
mirror_collection = mirrors or spack.mirror.MirrorCollection()
mirror_collection = mirrors or spack.mirror.MirrorCollection(binary=True)
if not mirror_collection:
tty.die("Please add a spack mirror to allow " + "download of build caches.")
@@ -2243,7 +2244,7 @@ def check_specs_against_mirrors(mirrors, specs, output_file=None):
"""
rebuilds = {}
for mirror in spack.mirror.MirrorCollection(mirrors).values():
for mirror in spack.mirror.MirrorCollection(mirrors, binary=True).values():
tty.debug("Checking for built specs at {0}".format(mirror.fetch_url))
rebuild_list = []
@@ -2287,7 +2288,7 @@ def _download_buildcache_entry(mirror_root, descriptions):
def download_buildcache_entry(file_descriptions, mirror_url=None):
if not mirror_url and not spack.mirror.MirrorCollection():
if not mirror_url and not spack.mirror.MirrorCollection(binary=True):
tty.die(
"Please provide or add a spack mirror to allow " + "download of buildcache entries."
)
@@ -2296,7 +2297,7 @@ def download_buildcache_entry(file_descriptions, mirror_url=None):
mirror_root = os.path.join(mirror_url, _build_cache_relative_path)
return _download_buildcache_entry(mirror_root, file_descriptions)
for mirror in spack.mirror.MirrorCollection().values():
for mirror in spack.mirror.MirrorCollection(binary=True).values():
mirror_root = os.path.join(mirror.fetch_url, _build_cache_relative_path)
if _download_buildcache_entry(mirror_root, file_descriptions):

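
The recurring change in this file is that callers which only deal in binary
packages now ask ``MirrorCollection`` for binary mirrors up front, instead of
filtering (or forgetting to filter) afterwards. A toy model of that filtering,
assuming the ``binary: false`` mirror attribute from the mirrors.yaml hunk
earlier:

.. code-block:: python

   class MirrorCollection(dict):
       def __init__(self, mirrors, binary=None):
           # Keep a mirror unless the caller asked for binary mirrors only
           # and the mirror explicitly opted out with `binary: false`.
           super().__init__(
               (name, m) for name, m in mirrors.items()
               if binary is None or m.get("binary", True) == binary
           )

   mirrors = {
       "spack-public": {"binary": False, "url": "https://mirror.spack.io"},
       "my-cache": {"url": "file:///home/user/spack-cache"},
   }
   assert list(MirrorCollection(mirrors, binary=True)) == ["my-cache"]
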
View File

@@ -4,7 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Function and classes needed to bootstrap Spack itself."""
from .config import ensure_bootstrap_configuration, is_bootstrapping
from .config import ensure_bootstrap_configuration, is_bootstrapping, store_path
from .core import all_core_root_specs, ensure_core_dependencies, ensure_patchelf_in_path_or_raise
from .environment import BootstrapEnvironment, ensure_environment_dependencies
from .status import status_message
@@ -18,4 +18,5 @@
"ensure_environment_dependencies",
"BootstrapEnvironment",
"status_message",
"store_path",
]

View File

@@ -50,7 +50,7 @@ def _try_import_from_store(
# We have to run as part of this python interpreter
query_spec += " ^" + spec_for_current_python()
installed_specs = spack.store.db.query(query_spec, installed=True)
installed_specs = spack.store.STORE.db.query(query_spec, installed=True)
for candidate_spec in installed_specs:
pkg = candidate_spec["python"].package
@@ -183,7 +183,7 @@ def _executables_in_store(
executables_str = ", ".join(executables)
msg = "[BOOTSTRAP EXECUTABLES {0}] Try installed specs with query '{1}'"
tty.debug(msg.format(executables_str, query_spec))
installed_specs = spack.store.db.query(query_spec, installed=True)
installed_specs = spack.store.STORE.db.query(query_spec, installed=True)
if installed_specs:
for concrete_spec in installed_specs:
bin_dir = concrete_spec.prefix.bin

View File

@@ -150,18 +150,19 @@ def _add_compilers_if_missing() -> None:
@contextlib.contextmanager
def _ensure_bootstrap_configuration() -> Generator:
spack.store.ensure_singleton_created()
bootstrap_store_path = store_path()
user_configuration = _read_and_sanitize_configuration()
with spack.environment.no_active_environment():
with spack.platforms.prevent_cray_detection(), spack.platforms.use_platform(
spack.platforms.real_host()
), spack.repo.use_repositories(spack.paths.packages_path), spack.store.use_store(
bootstrap_store_path
):
), spack.repo.use_repositories(spack.paths.packages_path):
# Default configuration scopes excluding command line
# and builtin but accounting for platform specific scopes
config_scopes = _bootstrap_config_scopes()
with spack.config.use_configuration(*config_scopes):
with spack.config.use_configuration(*config_scopes), spack.store.use_store(
bootstrap_store_path, extra_data={"padded_length": 0}
):
# We may need to compile code from sources, so ensure we
# have compilers for the current platform
_add_compilers_if_missing()

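
The reordering here is subtle but deliberate: the bootstrap store must be
created inside the bootstrap configuration scope so that it is built against
that configuration (forcing ``padded_length: 0``). A schematic with stand-in
context managers, not Spack's real ones:

.. code-block:: python

   from contextlib import contextmanager

   state = {"config": "user", "store": None}

   @contextmanager
   def use_configuration(name):
       prev, state["config"] = state["config"], name
       try:
           yield
       finally:
           state["config"] = prev

   @contextmanager
   def use_store(path, extra_data=None):
       # The store records which configuration was active at creation time.
       prev, state["store"] = state["store"], (path, state["config"], extra_data)
       try:
           yield
       finally:
           state["store"] = prev

   with use_configuration("bootstrap"):
       with use_store("/bootstrap/store", extra_data={"padded_length": 0}):
           assert state["store"][1] == "bootstrap"  # store sees bootstrap config
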
View File

@@ -39,7 +39,7 @@ def check_paths(path_list, filetype, predicate):
check_paths(pkg.sanity_check_is_file, "file", os.path.isfile)
check_paths(pkg.sanity_check_is_dir, "directory", os.path.isdir)
ignore_file = llnl.util.lang.match_predicate(spack.store.layout.hidden_file_regexes)
ignore_file = llnl.util.lang.match_predicate(spack.store.STORE.layout.hidden_file_regexes)
if all(map(ignore_file, os.listdir(pkg.prefix))):
msg = "Install failed for {0}. Nothing was installed!"
raise spack.installer.InstallError(msg.format(pkg.name))

View File

@@ -296,8 +296,46 @@ def std_args(pkg, generator=None):
define("CMAKE_PREFIX_PATH", spack.build_environment.get_cmake_prefix_path(pkg)),
]
)
return args
@staticmethod
def define_cuda_architectures(pkg):
"""Returns the str ``-DCMAKE_CUDA_ARCHITECTURES:STRING=(expanded cuda_arch)``.
``cuda_arch`` is a variant composed of a list of target CUDA architectures and
is declared in the cuda package.
This method is a no-op for cmake<3.18 or when the ``cuda_arch`` variant is not set.
"""
cmake_flag = str()
if "cuda_arch" in pkg.spec.variants and pkg.spec.satisfies("^cmake@3.18:"):
cmake_flag = CMakeBuilder.define(
"CMAKE_CUDA_ARCHITECTURES", pkg.spec.variants["cuda_arch"].value
)
return cmake_flag
@staticmethod
def define_hip_architectures(pkg):
"""Returns the str ``-DCMAKE_HIP_ARCHITECTURES:STRING=(expanded amdgpu_target)``.
``amdgpu_target`` is a variant composed of a list of the target HIP
architectures and is declared in the rocm package.
This method is a no-op for cmake<3.21 or when the ``amdgpu_target`` variant is
not set.
"""
cmake_flag = str()
if "amdgpu_target" in pkg.spec.variants and pkg.spec.satisfies("^cmake@3.21:"):
cmake_flag = CMakeBuilder.define(
"CMAKE_HIP_ARCHITECTURES", pkg.spec.variants["amdgpu_target"].value
)
return cmake_flag
@staticmethod
def define(cmake_var, value):
"""Return a CMake command line argument that defines a variable.

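
A hypothetical recipe showing how the two new helpers are meant to be consumed
from a package's ``cmake_args`` (the package is invented for illustration;
``CMakePackage`` and ``CMakeBuilder`` come from Spack's build systems):

.. code-block:: python

   class MyGpuPackage(CMakePackage):  # hypothetical package
       def cmake_args(self):
           args = []
           # Each helper returns an empty string when cmake is too old or
           # the variant is unset, so guard before appending.
           cuda_flag = CMakeBuilder.define_cuda_architectures(self)
           if cuda_flag:
               args.append(cuda_flag)
           hip_flag = CMakeBuilder.define_hip_architectures(self)
           if hip_flag:
               args.append(hip_flag)
           return args
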
View File

@@ -173,7 +173,7 @@ def test_imports(self):
# Make sure we are importing the installed modules,
# not the ones in the source directory
python = inspect.getmodule(self).python.path
python = inspect.getmodule(self).python
for module in self.import_modules:
with test_part(
self,
@@ -286,7 +286,7 @@ def get_external_python_for_prefix(self):
spack.spec.Spec: The external Spec for python most likely to be compatible with self.spec
"""
python_externals_installed = [
s for s in spack.store.db.query("python") if s.prefix == self.spec.external_path
s for s in spack.store.STORE.db.query("python") if s.prefix == self.spec.external_path
]
if python_externals_installed:
return python_externals_installed[0]
@@ -401,7 +401,8 @@ def build_directory(self):
def config_settings(self, spec, prefix):
"""Configuration settings to be passed to the PEP 517 build backend.
Requires pip 22.1+, which requires Python 3.7+.
Requires pip 22.1 or newer.
Args:
spec (spack.spec.Spec): build spec
@@ -415,6 +416,8 @@ def config_settings(self, spec, prefix):
def install_options(self, spec, prefix):
"""Extra arguments to be supplied to the setup.py install command.
Requires pip 23.0 or older.
Args:
spec (spack.spec.Spec): build spec
prefix (spack.util.prefix.Prefix): installation prefix
@@ -428,6 +431,8 @@ def global_options(self, spec, prefix):
"""Extra global options to be supplied to the setup.py call before the install
or bdist_wheel command.
Deprecated in pip 23.1.
Args:
spec (spack.spec.Spec): build spec
prefix (spack.util.prefix.Prefix): installation prefix

View File

@@ -224,7 +224,7 @@ def _print_staging_summary(spec_labels, stages, mirrors_to_check, rebuild_decisi
if not stages:
return
mirrors = spack.mirror.MirrorCollection(mirrors=mirrors_to_check)
mirrors = spack.mirror.MirrorCollection(mirrors=mirrors_to_check, binary=True)
tty.msg("Checked the following mirrors for binaries:")
for m in mirrors.values():
tty.msg(" {0}".format(m.fetch_url))
@@ -1257,20 +1257,11 @@ def main_script_replacements(cmd):
output_object["stages"] = stage_names
# Capture the version of spack used to generate the pipeline, transform it
# into a value that can be passed to "git checkout", and save it in a
# global yaml variable
# Capture the version of Spack used to generate the pipeline, which can be
# passed to `git checkout` for version consistency. If we aren't in a Git
# repository, presume we are a Spack release and use the Git tag instead.
spack_version = spack.main.get_version()
version_to_clone = None
v_match = re.match(r"^\d+\.\d+\.\d+$", spack_version)
if v_match:
version_to_clone = "v{0}".format(v_match.group(0))
else:
v_match = re.match(r"^[^-]+-[^-]+-([a-f\d]+)$", spack_version)
if v_match:
version_to_clone = v_match.group(1)
else:
version_to_clone = spack_version
version_to_clone = spack.main.get_spack_commit() or f"v{spack.spack_version}"
output_object["variables"] = {
"SPACK_ARTIFACTS_ROOT": rel_artifacts_root,
@@ -1428,9 +1419,7 @@ def _push_mirror_contents(input_spec, sign_binaries, mirror_url):
unsigned = not sign_binaries
tty.debug("Creating buildcache ({0})".format("unsigned" if unsigned else "signed"))
push_url = spack.mirror.Mirror.from_url(mirror_url).push_url
return bindist.push(
input_spec, push_url, bindist.PushOptions(force=True, allow_root=True, unsigned=unsigned)
)
return bindist.push(input_spec, push_url, bindist.PushOptions(force=True, unsigned=unsigned))
def push_mirror_contents(input_spec: spack.spec.Spec, mirror_url, sign_binaries):

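
The replacement collapses the old regex matching into one expression: prefer
the exact commit Spack was generated from, otherwise fall back to the release
tag. A sketch with a stand-in for ``get_spack_commit`` (which returns None
outside a Git repository):

.. code-block:: python

   def version_to_clone(get_spack_commit, spack_version):
       # e.g. "a1b2c3d" from a Git checkout, or "v0.20.1" for a release
       return get_spack_commit() or f"v{spack_version}"

   assert version_to_clone(lambda: "a1b2c3d", "0.20.1") == "a1b2c3d"
   assert version_to_clone(lambda: None, "0.20.1") == "v0.20.1"
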
View File

@@ -273,9 +273,9 @@ def disambiguate_spec_from_hashes(spec, hashes, local=False, installed=True, fir
See ``spack.database.Database._query`` for details.
"""
if local:
matching_specs = spack.store.db.query_local(spec, hashes=hashes, installed=installed)
matching_specs = spack.store.STORE.db.query_local(spec, hashes=hashes, installed=installed)
else:
matching_specs = spack.store.db.query(spec, hashes=hashes, installed=installed)
matching_specs = spack.store.STORE.db.query(spec, hashes=hashes, installed=installed)
if not matching_specs:
tty.die("Spec '%s' matches no installed packages." % spec)
@@ -473,7 +473,7 @@ def format_list(specs):
out = ""
# getting lots of prefixes requires DB lookups. Ensure
# all spec.prefix calls are in one transaction.
with spack.store.db.read_transaction():
with spack.store.STORE.db.read_transaction():
for string, spec in formatted:
if not string:
# print newline from above

View File

@@ -59,7 +59,7 @@ def setup_parser(subparser):
subparser.add_argument(
"package_or_file",
help="name of package to show contributions for, " "or path to a file in the spack repo",
help="name of package to show contributions for, or path to a file in the spack repo",
)

View File

@@ -43,52 +43,50 @@ def setup_parser(subparser):
subparsers = subparser.add_subparsers(help="buildcache sub-commands")
push = subparsers.add_parser("push", aliases=["create"], help=push_fn.__doc__)
push.add_argument("-f", "--force", action="store_true", help="overwrite tarball if it exists.")
push.add_argument("-f", "--force", action="store_true", help="overwrite tarball if it exists")
push.add_argument(
"-u", "--unsigned", action="store_true", help="push unsigned buildcache tarballs"
)
push.add_argument(
"-a",
"--allow-root",
"-a",
action="store_true",
help="allow install root string in binary files after RPATH substitution",
)
push.add_argument(
"-k", "--key", metavar="key", type=str, default=None, help="Key for signing."
push_sign = push.add_mutually_exclusive_group(required=False)
push_sign.add_argument(
"--unsigned", "-u", action="store_true", help="push unsigned buildcache tarballs"
)
push.add_argument("mirror", type=str, help="Mirror name, path, or URL.")
push_sign.add_argument(
"--key", "-k", metavar="key", type=str, default=None, help="key for signing"
)
push.add_argument("mirror", type=str, help="mirror name, path, or URL")
push.add_argument(
"--update-index",
"--rebuild-index",
action="store_true",
default=False,
help="Regenerate buildcache index after building package(s)",
help="regenerate buildcache index after building package(s)",
)
push.add_argument(
"--spec-file", default=None, help="Create buildcache entry for spec from json or yaml file"
"--spec-file", default=None, help="create buildcache entry for spec from json or yaml file"
)
push.add_argument(
"--only",
default="package,dependencies",
dest="things_to_install",
choices=["package", "dependencies"],
help=(
"Select the buildcache mode. the default is to"
" build a cache for the package along with all"
" its dependencies. Alternatively, one can"
" decide to build a cache for only the package"
" or only the dependencies"
),
help="select the buildcache mode. "
"The default is to build a cache for the package along with all its dependencies. "
"Alternatively, one can decide to build a cache for only the package or only the "
"dependencies",
)
arguments.add_common_arguments(push, ["specs"])
push.set_defaults(func=push_fn)
install = subparsers.add_parser("install", help=install_fn.__doc__)
install.add_argument(
"-f", "--force", action="store_true", help="overwrite install directory if it exists."
"-f", "--force", action="store_true", help="overwrite install directory if it exists"
)
install.add_argument(
"-m", "--multiple", action="store_true", help="allow all matching packages "
"-m", "--multiple", action="store_true", help="allow all matching packages"
)
install.add_argument(
"-u",
@@ -142,11 +140,11 @@ def setup_parser(subparser):
"-m",
"--mirror-url",
default=None,
help="Override any configured mirrors with this mirror URL",
help="override any configured mirrors with this mirror URL",
)
check.add_argument(
"-o", "--output-file", default=None, help="File where rebuild info should be written"
"-o", "--output-file", default=None, help="file where rebuild info should be written"
)
# used to construct scope arguments below
@@ -162,13 +160,13 @@ def setup_parser(subparser):
)
check.add_argument(
"-s", "--spec", default=None, help="Check single spec instead of release specs file"
"-s", "--spec", default=None, help="check single spec instead of release specs file"
)
check.add_argument(
"--spec-file",
default=None,
help=("Check single spec from json or yaml file instead of release specs file"),
help="check single spec from json or yaml file instead of release specs file",
)
check.set_defaults(func=check_fn)
@@ -176,15 +174,15 @@ def setup_parser(subparser):
# Download tarball and specfile
download = subparsers.add_parser("download", help=download_fn.__doc__)
download.add_argument(
"-s", "--spec", default=None, help="Download built tarball for spec from mirror"
"-s", "--spec", default=None, help="download built tarball for spec from mirror"
)
download.add_argument(
"--spec-file",
default=None,
help=("Download built tarball for spec (from json or yaml file) from mirror"),
help="download built tarball for spec (from json or yaml file) from mirror",
)
download.add_argument(
"-p", "--path", default=None, help="Path to directory where tarball should be downloaded"
"-p", "--path", default=None, help="path to directory where tarball should be downloaded"
)
download.set_defaults(func=download_fn)
@@ -193,52 +191,52 @@ def setup_parser(subparser):
"get-buildcache-name", help=get_buildcache_name_fn.__doc__
)
getbuildcachename.add_argument(
"-s", "--spec", default=None, help="Spec string for which buildcache name is desired"
"-s", "--spec", default=None, help="spec string for which buildcache name is desired"
)
getbuildcachename.add_argument(
"--spec-file",
default=None,
help=("Path to spec json or yaml file for which buildcache name is desired"),
help="path to spec json or yaml file for which buildcache name is desired",
)
getbuildcachename.set_defaults(func=get_buildcache_name_fn)
# Given the root spec, save the yaml of the dependent spec to a file
savespecfile = subparsers.add_parser("save-specfile", help=save_specfile_fn.__doc__)
savespecfile.add_argument("--root-spec", default=None, help="Root spec of dependent spec")
savespecfile.add_argument("--root-spec", default=None, help="root spec of dependent spec")
savespecfile.add_argument(
"--root-specfile",
default=None,
help="Path to json or yaml file containing root spec of dependent spec",
help="path to json or yaml file containing root spec of dependent spec",
)
savespecfile.add_argument(
"-s",
"--specs",
default=None,
help="List of dependent specs for which saved yaml is desired",
help="list of dependent specs for which saved yaml is desired",
)
savespecfile.add_argument(
"--specfile-dir", default=None, help="Path to directory where spec yamls should be saved"
"--specfile-dir", default=None, help="path to directory where spec yamls should be saved"
)
savespecfile.set_defaults(func=save_specfile_fn)
# Sync buildcache entries from one mirror to another
sync = subparsers.add_parser("sync", help=sync_fn.__doc__)
sync.add_argument(
"--manifest-glob", help="A quoted glob pattern identifying copy manifest files"
"--manifest-glob", help="a quoted glob pattern identifying copy manifest files"
)
sync.add_argument(
"src_mirror",
metavar="source mirror",
type=arguments.mirror_name_or_url,
nargs="?",
help="Source mirror name, path, or URL",
help="source mirror name, path, or URL",
)
sync.add_argument(
"dest_mirror",
metavar="destination mirror",
type=arguments.mirror_name_or_url,
nargs="?",
help="Destination mirror name, path, or URL",
help="destination mirror name, path, or URL",
)
sync.set_defaults(func=sync_fn)
@@ -247,14 +245,14 @@ def setup_parser(subparser):
"update-index", aliases=["rebuild-index"], help=update_index_fn.__doc__
)
update_index.add_argument(
"mirror", type=arguments.mirror_name_or_url, help="Destination mirror name, path, or URL"
"mirror", type=arguments.mirror_name_or_url, help="destination mirror name, path, or URL"
)
update_index.add_argument(
"-k",
"--keys",
default=False,
action="store_true",
help="If provided, key index will be updated as well as package index",
help="if provided, key index will be updated as well as package index",
)
update_index.set_defaults(func=update_index_fn)
@@ -309,6 +307,11 @@ def push_fn(args):
"""create a binary package and push it to a mirror"""
mirror = arguments.mirror_name_or_url(args.mirror)
if args.allow_root:
tty.warn(
"The flag `--allow-root` is the default in Spack 0.21, will be removed in Spack 0.22"
)
url = mirror.push_url
specs = bindist.specs_to_be_packaged(
@@ -338,7 +341,6 @@ def push_fn(args):
bindist.PushOptions(
force=args.force,
unsigned=args.unsigned,
allow_root=args.allow_root,
key=args.key,
regenerate_index=args.update_index,
),
@@ -411,25 +413,19 @@ def keys_fn(args):
def preview_fn(args):
"""analyze an installed spec and reports whether executables
and libraries are relocatable
"""
constraints = spack.cmd.parse_specs(args.specs)
specs = spack.store.find(constraints, multiple=True)
# Cycle over the specs that match
for spec in specs:
print("Relocatable nodes")
print("--------------------------------")
print(spec.tree(status_fn=spack.relocate.is_relocatable))
"""analyze an installed spec and reports whether executables and libraries are relocatable"""
tty.warn(
"`spack buildcache preview` is deprecated since `spack buildcache push --allow-root` is "
"now the default. This command will be removed in Spack 0.22"
)
def check_fn(args):
"""Check specs (either a single spec from --spec, or else the full set
of release specs) against remote binary mirror(s) to see if any need
to be rebuilt. This command uses the process exit code to indicate
its result, specifically, if the exit code is non-zero, then at least
one of the indicated specs needs to be rebuilt.
"""check specs against remote binary mirror(s) to see if any need to be rebuilt
either a single spec from --spec, or else the full set of release specs. this command uses the
process exit code to indicate its result, specifically, if the exit code is non-zero, then at
least one of the indicated specs needs to be rebuilt
"""
if args.spec or args.spec_file:
specs = [_concrete_spec_from_args(args)]
@@ -460,10 +456,12 @@ def check_fn(args):
def download_fn(args):
"""Download buildcache entry from a remote mirror to local folder. This
command uses the process exit code to indicate its result, specifically,
a non-zero exit code indicates that the command failed to download at
least one of the required buildcache components."""
"""download buildcache entry from a remote mirror to local folder
this command uses the process exit code to indicate its result, specifically, a non-zero exit
code indicates that the command failed to download at least one of the required buildcache
components
"""
if not args.spec and not args.spec_file:
tty.msg("No specs provided, exiting.")
return
@@ -480,19 +478,18 @@ def download_fn(args):
def get_buildcache_name_fn(args):
"""Get name (prefix) of buildcache entries for this spec"""
"""get name (prefix) of buildcache entries for this spec"""
spec = _concrete_spec_from_args(args)
buildcache_name = bindist.tarball_name(spec, "")
print("{0}".format(buildcache_name))
def save_specfile_fn(args):
"""Get full spec for dependencies, relative to root spec, and write them
to files in the specified output directory. Uses exit code to signal
success or failure. An exit code of zero means the command was likely
successful. If any errors or exceptions are encountered, or if expected
command-line arguments are not provided, then the exit code will be
non-zero.
"""get full spec for dependencies and write them to files in the specified output directory
uses exit code to signal success or failure. an exit code of zero means the command was likely
successful. if any errors or exceptions are encountered, or if expected command-line arguments
are not provided, then the exit code will be non-zero
"""
if not args.root_spec and not args.root_specfile:
tty.msg("No root spec provided, exiting.")
@@ -546,12 +543,9 @@ def copy_buildcache_file(src_url, dest_url, local_path=None):
def sync_fn(args):
"""Syncs binaries (and associated metadata) from one mirror to another.
Requires an active environment in order to know which specs to sync.
"""sync binaries (and associated metadata) from one mirror to another
Args:
src (str): Source mirror URL
dest (str): Destination mirror URL
requires an active environment in order to know which specs to sync
"""
if args.manifest_glob:
manifest_copy(glob.glob(args.manifest_glob))
@@ -639,7 +633,7 @@ def update_index(mirror: spack.mirror.Mirror, update_keys=False):
def update_index_fn(args):
"""Update a buildcache index."""
"""update a buildcache index"""
update_index(args.mirror, update_keys=args.keys)
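A minimal sketch of driving the exit-code contract that check_fn and download_fn document above; "zlib" is only an example spec, not taken from this changeset:
import subprocess
# non-zero exit code means at least one of the indicated specs needs a rebuild
result = subprocess.run(["spack", "buildcache", "check", "-s", "zlib"])
if result.returncode != 0:
    print("at least one spec needs to be rebuilt")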


@@ -47,40 +47,36 @@ def setup_parser(subparser):
generate.add_argument(
"--output-file",
default=None,
help="""pathname for the generated gitlab ci yaml file
Path to the file where generated jobs file should
be written. Default is .gitlab-ci.yml in the root of
the repository.""",
help="pathname for the generated gitlab ci yaml file\n\n"
"path to the file where generated jobs file should be written. "
"default is .gitlab-ci.yml in the root of the repository",
)
generate.add_argument(
"--copy-to",
default=None,
help="""path to additional directory for job files
This option provides an absolute path to a directory
where the generated jobs yaml file should be copied.
Default is not to copy.""",
help="path to additional directory for job files\n\n"
"this option provides an absolute path to a directory where the generated "
"jobs yaml file should be copied. default is not to copy",
)
generate.add_argument(
"--optimize",
action="store_true",
default=False,
help="""(Experimental) optimize the gitlab yaml file for size
Run the generated document through a series of
optimization passes designed to reduce the size
of the generated file.""",
help="(experimental) optimize the gitlab yaml file for size\n\n"
"run the generated document through a series of optimization passes "
"designed to reduce the size of the generated file",
)
generate.add_argument(
"--dependencies",
action="store_true",
default=False,
help="(Experimental) disable DAG scheduling; use " ' "plain" dependencies.',
help="(experimental) disable DAG scheduling (use 'plain' dependencies)",
)
generate.add_argument(
"--buildcache-destination",
default=None,
help="Override the mirror configured in the environment (spack.yaml) "
+ "in order to push binaries from the generated pipeline to a "
+ "different location.",
help="override the mirror configured in the environment\n\n"
"allows for pushing binaries from the generated pipeline to a different location",
)
prune_group = generate.add_mutually_exclusive_group()
prune_group.add_argument(
@@ -88,45 +84,37 @@ def setup_parser(subparser):
action="store_true",
dest="prune_dag",
default=True,
help="""skip up-to-date specs
Do not generate jobs for specs that are up-to-date
on the mirror.""",
help="skip up-to-date specs\n\n"
"do not generate jobs for specs that are up-to-date on the mirror",
)
prune_group.add_argument(
"--no-prune-dag",
action="store_false",
dest="prune_dag",
default=True,
help="""process up-to-date specs
Generate jobs for specs even when they are up-to-date
on the mirror.""",
help="process up-to-date specs\n\n"
"generate jobs for specs even when they are up-to-date on the mirror",
)
generate.add_argument(
"--check-index-only",
action="store_true",
dest="index_only",
default=False,
help="""only check spec state from buildcache indices
Spack always checks specs against configured binary
mirrors, regardless of the DAG pruning option.
If enabled, Spack will assume all remote buildcache
indices are up-to-date when assessing whether the spec
on the mirror, if present, is up-to-date. This has the
benefit of reducing pipeline generation time but at the
potential cost of needlessly rebuilding specs when the
indices are outdated.
If not enabled, Spack will fetch remote spec files
directly to assess whether the spec on the mirror is
up-to-date.""",
help="only check spec state from buildcache indices\n\n"
"Spack always checks specs against configured binary mirrors, regardless of the DAG "
"pruning option. if enabled, Spack will assume all remote buildcache indices are "
"up-to-date when assessing whether the spec on the mirror, if present, is up-to-date. "
"this has the benefit of reducing pipeline generation time but at the potential cost of "
"needlessly rebuilding specs when the indices are outdated. if not enabled, Spack will "
"fetch remote spec files directly to assess whether the spec on the mirror is up-to-date",
)
generate.add_argument(
"--artifacts-root",
default=None,
help="""path to the root of the artifacts directory
If provided, concrete environment files (spack.yaml,
spack.lock) will be generated under this directory.
Their location will be passed to generated child jobs
through the SPACK_CONCRETE_ENVIRONMENT_PATH variable.""",
help="path to the root of the artifacts directory\n\n"
"if provided, concrete environment files (spack.yaml, spack.lock) will be generated under "
"this directory. their location will be passed to generated child jobs through the "
"SPACK_CONCRETE_ENVIRONMENT_PATH variable",
)
generate.set_defaults(func=ci_generate)
@@ -150,13 +138,13 @@ def setup_parser(subparser):
"--tests",
action="store_true",
default=False,
help="""run stand-alone tests after the build""",
help="run stand-alone tests after the build",
)
rebuild.add_argument(
"--fail-fast",
action="store_true",
default=False,
help="""stop stand-alone tests after the first failure""",
help="stop stand-alone tests after the first failure",
)
rebuild.set_defaults(func=ci_rebuild)
@@ -166,10 +154,10 @@ def setup_parser(subparser):
description=deindent(ci_reproduce.__doc__),
help=spack.cmd.first_line(ci_reproduce.__doc__),
)
reproduce.add_argument("job_url", help="Url of job artifacts bundle")
reproduce.add_argument("job_url", help="URL of job artifacts bundle")
reproduce.add_argument(
"--working-dir",
help="Where to unpack artifacts",
help="where to unpack artifacts",
default=os.path.join(os.getcwd(), "ci_reproduction"),
)
@@ -177,12 +165,12 @@ def setup_parser(subparser):
def ci_generate(args):
"""Generate jobs file from a CI-aware spack file.
"""generate jobs file from a CI-aware spack file
If you want to report the results on CDash, you will need to set
the SPACK_CDASH_AUTH_TOKEN before invoking this command. The
value must be the CDash authorization token needed to create a
build group and register all generated jobs under it."""
if you want to report the results on CDash, you will need to set the SPACK_CDASH_AUTH_TOKEN
before invoking this command. the value must be the CDash authorization token needed to create
a build group and register all generated jobs under it
"""
env = spack.cmd.require_active_env(cmd_name="ci generate")
output_file = args.output_file
@@ -223,10 +211,11 @@ def ci_generate(args):
def ci_reindex(args):
"""Rebuild the buildcache index for the remote mirror.
"""rebuild the buildcache index for the remote mirror
Use the active, gitlab-enabled environment to rebuild the buildcache
index for the associated mirror."""
use the active, gitlab-enabled environment to rebuild the buildcache index for the associated
mirror
"""
env = spack.cmd.require_active_env(cmd_name="ci rebuild-index")
yaml_root = env.manifest[ev.TOP_LEVEL_KEY]
@@ -242,10 +231,11 @@ def ci_reindex(args):
def ci_rebuild(args):
"""Rebuild a spec if it is not on the remote mirror.
"""rebuild a spec if it is not on the remote mirror
Check a single spec against the remote mirror, and rebuild it from
source if the mirror does not contain the hash."""
check a single spec against the remote mirror, and rebuild it from source if the mirror does
not contain the hash
"""
env = spack.cmd.require_active_env(cmd_name="ci rebuild")
# Make sure the environment is "gitlab-enabled", or else there's nothing
@@ -606,7 +596,7 @@ def ci_rebuild(args):
)
reports_dir = fs.join_path(os.getcwd(), "cdash_report")
if args.tests and broken_tests:
tty.warn("Unable to run stand-alone tests since listed in " "ci's 'broken-tests-packages'")
tty.warn("Unable to run stand-alone tests since listed in ci's 'broken-tests-packages'")
if cdash_handler:
msg = "Package is listed in ci's broken-tests-packages"
cdash_handler.report_skipped(job_spec, reports_dir, reason=msg)
@@ -649,7 +639,7 @@ def ci_rebuild(args):
tty.warn("No recognized test results reporting option")
else:
tty.warn("Unable to run stand-alone tests due to unsuccessful " "installation")
tty.warn("Unable to run stand-alone tests due to unsuccessful installation")
if cdash_handler:
msg = "Failed to install the package"
cdash_handler.report_skipped(job_spec, reports_dir, reason=msg)
@@ -728,10 +718,11 @@ def ci_rebuild(args):
def ci_reproduce(args):
"""Generate instructions for reproducing the spec rebuild job.
"""generate instructions for reproducing the spec rebuild job
Artifacts of the provided gitlab pipeline rebuild job's URL will be
used to derive instructions for reproducing the build locally."""
artifacts of the provided gitlab pipeline rebuild job's URL will be used to derive
instructions for reproducing the build locally
"""
job_url = args.job_url
work_dir = args.working_dir
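A hedged sketch of the CDash workflow the ci_generate docstring above describes; the token value and output path are placeholders:
import os
import subprocess
# SPACK_CDASH_AUTH_TOKEN must be set before `spack ci generate` for CDash reporting
env = dict(os.environ, SPACK_CDASH_AUTH_TOKEN="<cdash-token>")
subprocess.run(["spack", "ci", "generate", "--output-file", ".gitlab-ci.yml"], env=env, check=True)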


@@ -115,7 +115,7 @@ def clean(parser, args):
tty.msg("Removing all temporary build stages")
spack.stage.purge()
# Temp directory where buildcaches are extracted
extract_tmp = os.path.join(spack.store.layout.root, ".tmp")
extract_tmp = os.path.join(spack.store.STORE.layout.root, ".tmp")
if os.path.exists(extract_tmp):
tty.debug("Removing {0}".format(extract_tmp))
shutil.rmtree(extract_tmp)


@@ -48,7 +48,7 @@ def get_origin_info(remote):
)
except ProcessError:
origin_url = _SPACK_UPSTREAM
tty.warn("No git repository found; " "using default upstream URL: %s" % origin_url)
tty.warn("No git repository found; using default upstream URL: %s" % origin_url)
return (origin_url.strip(), branch.strip())
@@ -69,7 +69,7 @@ def clone(parser, args):
files_in_the_way = os.listdir(prefix)
if files_in_the_way:
tty.die(
"There are already files there! " "Delete these files before boostrapping spack.",
"There are already files there! Delete these files before boostrapping spack.",
*files_in_the_way,
)


@@ -9,16 +9,11 @@
import re
import sys
from argparse import ArgumentParser, Namespace
from typing import IO, Any, Callable, Dict, Sequence, Set
from typing import IO, Any, Callable, Dict, Iterable, List, Optional, Sequence, Set, Tuple, Union
import llnl.util.filesystem as fs
import llnl.util.tty as tty
from llnl.util.argparsewriter import (
ArgparseCompletionWriter,
ArgparseRstWriter,
ArgparseWriter,
Command,
)
from llnl.util.argparsewriter import ArgparseRstWriter, ArgparseWriter, Command
from llnl.util.tty.colify import colify
import spack.cmd
@@ -43,7 +38,13 @@
"format": "bash",
"header": os.path.join(spack.paths.share_path, "bash", "spack-completion.in"),
"update": os.path.join(spack.paths.share_path, "spack-completion.bash"),
}
},
"fish": {
"aliases": True,
"format": "fish",
"header": os.path.join(spack.paths.share_path, "fish", "spack-completion.in"),
"update": os.path.join(spack.paths.share_path, "spack-completion.fish"),
},
}
@@ -178,9 +179,63 @@ def format(self, cmd: Command) -> str:
}
class BashCompletionWriter(ArgparseCompletionWriter):
class BashCompletionWriter(ArgparseWriter):
"""Write argparse output as bash programmable tab completion."""
def format(self, cmd: Command) -> str:
"""Return the string representation of a single node in the parser tree.
Args:
cmd: Parsed information about a command or subcommand.
Returns:
String representation of this subcommand.
"""
assert cmd.optionals # we should always at least have -h, --help
assert not (cmd.positionals and cmd.subcommands) # one or the other
# We only care about the arguments/flags, not the help messages
positionals: Tuple[str, ...] = ()
if cmd.positionals:
positionals, _, _, _ = zip(*cmd.positionals)
optionals, _, _, _, _ = zip(*cmd.optionals)
subcommands: Tuple[str, ...] = ()
if cmd.subcommands:
_, subcommands, _ = zip(*cmd.subcommands)
# Flatten lists of lists
optionals = [x for xx in optionals for x in xx]
return (
self.start_function(cmd.prog)
+ self.body(positionals, optionals, subcommands)
+ self.end_function(cmd.prog)
)
def start_function(self, prog: str) -> str:
"""Return the syntax needed to begin a function definition.
Args:
prog: Program name.
Returns:
Function definition beginning.
"""
name = prog.replace("-", "_").replace(" ", "_")
return "\n_{0}() {{".format(name)
def end_function(self, prog: str) -> str:
"""Return the syntax needed to end a function definition.
Args:
prog: Program name
Returns:
Function definition ending.
"""
return "}\n"
def body(
self, positionals: Sequence[str], optionals: Sequence[str], subcommands: Sequence[str]
) -> str:
@@ -264,6 +319,396 @@ def subcommands(self, subcommands: Sequence[str]) -> str:
return 'SPACK_COMPREPLY="{0}"'.format(" ".join(subcommands))
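# For orientation (illustrative and abridged, not emitted verbatim here): for
# "spack buildcache" the writer above produces one shell function per command,
#
#   _spack_buildcache() {
#       ...
#       SPACK_COMPREPLY="push install check download ..."
#   }
#
# opened by start_function() and closed by end_function().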
# Map argument destination names to their complete commands
# Earlier items in the list have higher precedence
_dest_to_fish_complete = {
("activate", "view"): "-f -a '(__fish_complete_directories)'",
("bootstrap root", "path"): "-f -a '(__fish_complete_directories)'",
("mirror add", "mirror"): "-f",
("repo add", "path"): "-f -a '(__fish_complete_directories)'",
("test find", "filter"): "-f -a '(__fish_spack_tests)'",
("bootstrap", "name"): "-f -a '(__fish_spack_bootstrap_names)'",
("buildcache create", "key"): "-f -a '(__fish_spack_gpg_keys)'",
("build-env", r"spec \[--\].*"): "-f -a '(__fish_spack_build_env_spec)'",
("checksum", "package"): "-f -a '(__fish_spack_packages)'",
(
"checksum",
"versions",
): "-f -a '(__fish_spack_package_versions $__fish_spack_argparse_argv[1])'",
("config", "path"): "-f -a '(__fish_spack_colon_path)'",
("config", "section"): "-f -a '(__fish_spack_config_sections)'",
("develop", "specs?"): "-f -k -a '(__fish_spack_specs_or_id)'",
("diff", "specs?"): "-f -a '(__fish_spack_installed_specs)'",
("gpg sign", "output"): "-f -a '(__fish_complete_directories)'",
("gpg", "keys?"): "-f -a '(__fish_spack_gpg_keys)'",
("graph", "specs?"): "-f -k -a '(__fish_spack_specs_or_id)'",
("help", "help_command"): "-f -a '(__fish_spack_commands)'",
("list", "filter"): "-f -a '(__fish_spack_packages)'",
("mirror", "mirror"): "-f -a '(__fish_spack_mirrors)'",
("pkg", "package"): "-f -a '(__fish_spack_pkg_packages)'",
("remove", "specs?"): "-f -a '(__fish_spack_installed_specs)'",
("repo", "namespace_or_path"): "$__fish_spack_force_files -a '(__fish_spack_repos)'",
("restage", "specs?"): "-f -k -a '(__fish_spack_specs_or_id)'",
("rm", "specs?"): "-f -a '(__fish_spack_installed_specs)'",
("solve", "specs?"): "-f -k -a '(__fish_spack_specs_or_id)'",
("spec", "specs?"): "-f -k -a '(__fish_spack_specs_or_id)'",
("stage", "specs?"): "-f -k -a '(__fish_spack_specs_or_id)'",
("test-env", r"spec \[--\].*"): "-f -a '(__fish_spack_build_env_spec)'",
("test", r"\[?name.*"): "-f -a '(__fish_spack_tests)'",
("undevelop", "specs?"): "-f -k -a '(__fish_spack_specs_or_id)'",
("verify", "specs_or_files"): "$__fish_spack_force_files -a '(__fish_spack_installed_specs)'",
("view", "path"): "-f -a '(__fish_complete_directories)'",
("", "comment"): "-f",
("", "compiler_spec"): "-f -a '(__fish_spack_installed_compilers)'",
("", "config_scopes"): "-f -a '(__fish_complete_directories)'",
("", "extendable"): "-f -a '(__fish_spack_extensions)'",
("", "installed_specs?"): "-f -a '(__fish_spack_installed_specs)'",
("", "job_url"): "-f",
("", "location_env"): "-f -a '(__fish_complete_directories)'",
("", "pytest_args"): "-f -a '(__fish_spack_unit_tests)'",
("", "package_or_file"): "$__fish_spack_force_files -a '(__fish_spack_packages)'",
("", "package_or_user"): "-f -a '(__fish_spack_packages)'",
("", "package"): "-f -a '(__fish_spack_packages)'",
("", "PKG"): "-f -a '(__fish_spack_packages)'",
("", "prefix"): "-f -a '(__fish_complete_directories)'",
("", r"rev\d?"): "-f -a '(__fish_spack_git_rev)'",
("", "specs?"): "-f -k -a '(__fish_spack_specs)'",
("", "tags?"): "-f -a '(__fish_spack_tags)'",
("", "virtual_package"): "-f -a '(__fish_spack_providers)'",
("", "working_dir"): "-f -a '(__fish_complete_directories)'",
("", r"(\w*_)?env"): "-f -a '(__fish_spack_environments)'",
("", r"(\w*_)?dir(ectory)?"): "-f -a '(__fish_spack_environments)'",
("", r"(\w*_)?mirror_name"): "-f -a '(__fish_spack_mirrors)'",
}
def _fish_dest_get_complete(prog: str, dest: str) -> Optional[str]:
"""Map from subcommand to autocompletion argument.
Args:
prog: Program name.
dest: Destination.
Returns:
Autocompletion argument.
"""
s = prog.split(None, 1)
subcmd = s[1] if len(s) == 2 else ""
for (prog_key, pos_key), value in _dest_to_fish_complete.items():
if subcmd.startswith(prog_key) and re.match("^" + pos_key + "$", dest):
return value
return None
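# Worked examples, derived from the table above (dict order gives precedence):
#   _fish_dest_get_complete("spack checksum", "package")
#       -> "-f -a '(__fish_spack_packages)'"
#   _fish_dest_get_complete("spack graph", "specs")   # matches ("graph", "specs?")
#       -> "-f -k -a '(__fish_spack_specs_or_id)'"
#   _fish_dest_get_complete("spack foo", "no_such_dest") -> None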
class FishCompletionWriter(ArgparseWriter):
"""Write argparse output as bash programmable tab completion."""
def format(self, cmd: Command) -> str:
"""Return the string representation of a single node in the parser tree.
Args:
cmd: Parsed information about a command or subcommand.
Returns:
String representation of a node.
"""
assert cmd.optionals # we should always at least have -h, --help
assert not (cmd.positionals and cmd.subcommands) # one or the other
# We also need help messages and how arguments are used
# So we pass everything to completion writer
positionals = cmd.positionals
optionals = cmd.optionals
subcommands = cmd.subcommands
return (
self.prog_comment(cmd.prog)
+ self.optspecs(cmd.prog, optionals)
+ self.complete(cmd.prog, positionals, optionals, subcommands)
)
def _quote(self, string: str) -> str:
"""Quote string and escape special characters if necessary.
Args:
string: Input string.
Returns:
Quoted string.
"""
# Goal here is to match fish_indent behavior
# Strings without spaces (or other special characters) do not need to be escaped
if not any([sub in string for sub in [" ", "'", '"']]):
return string
string = string.replace("'", r"\'")
return f"'{string}'"
def optspecs(
self,
prog: str,
optionals: List[Tuple[Sequence[str], List[str], str, Union[int, str, None], str]],
) -> str:
"""Read the optionals and return the command to set optspec.
Args:
prog: Program name.
optionals: List of optional arguments.
Returns:
Command to set optspec variable.
"""
# Variables of optspecs
optspec_var = "__fish_spack_optspecs_" + prog.replace(" ", "_").replace("-", "_")
if optionals is None:
return "set -g %s\n" % optspec_var
# Build optspec by iterating over options
args = []
for flags, dest, _, nargs, _ in optionals:
if len(flags) == 0:
continue
required = ""
# Because nargs '?' is treated differently in fish, we treat it as required.
# Because multi-argument options are not supported, we treat it like one argument.
required = "="
if nargs == 0:
required = ""
# Pair short options with long options
# We need to do this because fish doesn't support multiple short
# or long options.
# However, since we are pairing options only, this is fine.
short = [f[1:] for f in flags if f.startswith("-") and len(f) == 2]
long = [f[2:] for f in flags if f.startswith("--")]
while len(short) > 0 and len(long) > 0:
arg = "%s/%s%s" % (short.pop(), long.pop(), required)
while len(short) > 0:
arg = "%s/%s" % (short.pop(), required)
while len(long) > 0:
arg = "%s%s" % (long.pop(), required)
args.append(arg)
# Even if there is no option, we still set the variable.
# In fish such a variable is an empty array; we use it to
# indicate that the subcommand exists.
args = " ".join(args)
return "set -g %s %s\n" % (optspec_var, args)
@staticmethod
def complete_head(
prog: str, index: Optional[int] = None, nargs: Optional[Union[int, str]] = None
) -> str:
"""Return the head of the completion command.
Args:
prog: Program name.
index: Index of positional argument.
nargs: Number of arguments.
Returns:
Head of the completion command.
"""
# Split command and subcommand
s = prog.split(None, 1)
subcmd = s[1] if len(s) == 2 else ""
if index is None:
return "complete -c %s -n '__fish_spack_using_command %s'" % (s[0], subcmd)
elif nargs in [argparse.ZERO_OR_MORE, argparse.ONE_OR_MORE, argparse.REMAINDER]:
head = "complete -c %s -n '__fish_spack_using_command_pos_remainder %d %s'"
else:
head = "complete -c %s -n '__fish_spack_using_command_pos %d %s'"
return head % (s[0], index, subcmd)
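# Examples:
#   complete_head("spack install")
#       -> "complete -c spack -n '__fish_spack_using_command install'"
#   complete_head("spack install", 0, None)
#       -> "complete -c spack -n '__fish_spack_using_command_pos 0 install'"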
def complete(
self,
prog: str,
positionals: List[Tuple[str, Optional[Iterable[Any]], Union[int, str, None], str]],
optionals: List[Tuple[Sequence[str], List[str], str, Union[int, str, None], str]],
subcommands: List[Tuple[ArgumentParser, str, str]],
) -> str:
"""Return all the completion commands.
Args:
prog: Program name.
positionals: List of positional arguments.
optionals: List of optional arguments.
subcommands: List of subcommand parsers.
Returns:
Completion command.
"""
commands = []
if positionals:
commands.append(self.positionals(prog, positionals))
if subcommands:
commands.append(self.subcommands(prog, subcommands))
if optionals:
commands.append(self.optionals(prog, optionals))
return "".join(commands)
def positionals(
self,
prog: str,
positionals: List[Tuple[str, Optional[Iterable[Any]], Union[int, str, None], str]],
) -> str:
"""Return the completion for positional arguments.
Args:
prog: Program name.
positionals: List of positional arguments.
Returns:
Completion command.
"""
commands = []
for idx, (args, choices, nargs, help) in enumerate(positionals):
# Make sure we always get same order of output
if isinstance(choices, dict):
choices = sorted(choices.keys())
elif isinstance(choices, (set, frozenset)):
choices = sorted(choices)
# Remove platform-specific choices to avoid hard-coding the platform.
if choices is not None:
valid_choices = []
for choice in choices:
if spack.platforms.host().name not in choice:
valid_choices.append(choice)
choices = valid_choices
head = self.complete_head(prog, idx, nargs)
if choices is not None:
# If there are choices, we provide a completion for all possible values.
commands.append(head + " -f -a %s" % self._quote(" ".join(choices)))
else:
# Otherwise, we try to find a predefined completion for it
value = _fish_dest_get_complete(prog, args)
if value is not None:
commands.append(head + " " + value)
return "\n".join(commands) + "\n"
def prog_comment(self, prog: str) -> str:
"""Return a comment line for the command.
Args:
prog: Program name.
Returns:
Comment line.
"""
return "\n# %s\n" % prog
def optionals(
self,
prog: str,
optionals: List[Tuple[Sequence[str], List[str], str, Union[int, str, None], str]],
) -> str:
"""Return the completion for optional arguments.
Args:
prog: Program name.
optionals: List of optional arguments.
Returns:
Completion command.
"""
commands = []
head = self.complete_head(prog)
for flags, dest, _, nargs, help in optionals:
# Make sure we always get same order of output
if isinstance(dest, dict):
dest = sorted(dest.keys())
elif isinstance(dest, (set, frozenset)):
dest = sorted(dest)
# Remove platform-specific choices to avoid hard-coding the platform.
if dest is not None:
valid_choices = []
for choice in dest:
if spack.platforms.host().name not in choice:
valid_choices.append(choice)
dest = valid_choices
# To provide a description for optionals, and also possible values,
# we need to use two separate completion commands.
# Otherwise, each option would have the same description.
prefix = head
# Add all flags to the completion
for f in flags:
if f.startswith("--"):
long = f[2:]
prefix += " -l %s" % long
elif f.startswith("-"):
short = f[1:]
assert len(short) == 1
prefix += " -s %s" % short
# Check if the option requires an argument.
# Currently multi-argument options are not supported, so we treat it like one argument.
if nargs != 0:
prefix += " -r"
if dest is not None:
# If there are choices, we provide a completion for all possible values.
commands.append(prefix + " -f -a %s" % self._quote(" ".join(dest)))
else:
# Otherwise, we try to find a predefined completion for it
value = _fish_dest_get_complete(prog, dest)
if value is not None:
commands.append(prefix + " " + value)
if help:
commands.append(prefix + " -d %s" % self._quote(help))
return "\n".join(commands) + "\n"
def subcommands(self, prog: str, subcommands: List[Tuple[ArgumentParser, str, str]]) -> str:
"""Return the completion for subcommands.
Args:
prog: Program name.
subcommands: List of subcommand parsers.
Returns:
Completion command.
"""
commands = []
head = self.complete_head(prog, 0)
for _, subcommand, help in subcommands:
command = head + " -f -a %s" % self._quote(subcommand)
if help is not None and len(help) > 0:
help = help.split("\n")[0]
command += " -d %s" % self._quote(help)
commands.append(command)
return "\n".join(commands) + "\n"
@formatter
def subcommands(args: Namespace, out: IO) -> None:
"""Hierarchical tree of subcommands.
@@ -371,6 +816,15 @@ def bash(args: Namespace, out: IO) -> None:
writer.write(parser)
@formatter
def fish(args: Namespace, out: IO) -> None:
parser = spack.main.make_argument_parser()
spack.main.add_all_commands(parser)
writer = FishCompletionWriter(parser.prog, out, args.aliases)
writer.write(parser)
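A sketch of how this formatter is typically driven, mirroring the bash path and the update_completion_args table above (the exact CLI flags are assumptions, not part of this changeset):
import subprocess
# regenerate the fish completion script from the current parser tree
subprocess.run(
    ["spack", "commands", "--aliases", "--format=fish",
     "--header", "share/spack/fish/spack-completion.in",
     "--update", "share/spack/spack-completion.fish"],
    check=True,
)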
def prepend_header(args: Namespace, out: IO) -> None:
"""Prepend header text at the beginning of a file.


@@ -82,12 +82,12 @@ def _specs(self, **kwargs):
# return everything for an empty query.
if not qspecs:
return spack.store.db.query(**kwargs)
return spack.store.STORE.db.query(**kwargs)
# Return only matching stuff otherwise.
specs = {}
for spec in qspecs:
for s in spack.store.db.query(spec, **kwargs):
for s in spack.store.STORE.db.query(spec, **kwargs):
# This is fast for already-concrete specs
specs[s.dag_hash()] = s
@@ -265,7 +265,7 @@ def recurse_dependents():
"--dependents",
action="store_true",
dest="dependents",
help="also uninstall any packages that depend on the ones given " "via command line",
help="also uninstall any packages that depend on the ones given via command line",
)
@@ -286,7 +286,7 @@ def deptype():
"--deptype",
action=DeptypeAction,
default=dep.all_deptypes,
help="comma-separated list of deptypes to traverse\ndefault=%s"
help="comma-separated list of deptypes to traverse\n\ndefault=%s"
% ",".join(dep.all_deptypes),
)
@@ -350,9 +350,9 @@ def install_status():
"--install-status",
action="store_true",
default=True,
help="show install status of packages. packages can be: "
help="show install status of packages\n\npackages can be: "
"installed [+], missing and needed by an installed package [-], "
"installed in and upstream instance [^], "
"installed in an upstream instance [^], "
"or not installed (no annotation)",
)
@@ -393,24 +393,23 @@ def add_cdash_args(subparser, add_help):
cdash_help = {}
if add_help:
cdash_help["upload-url"] = "CDash URL where reports will be uploaded"
cdash_help[
"build"
] = """The name of the build that will be reported to CDash.
Defaults to spec of the package to operate on."""
cdash_help[
"site"
] = """The site name that will be reported to CDash.
Defaults to current system hostname."""
cdash_help[
"track"
] = """Results will be reported to this group on CDash.
Defaults to Experimental."""
cdash_help[
"buildstamp"
] = """Instead of letting the CDash reporter prepare the
buildstamp which, when combined with build name, site and project,
uniquely identifies the build, provide this argument to identify
the build yourself. Format: %%Y%%m%%d-%%H%%M-[cdash-track]"""
cdash_help["build"] = (
"name of the build that will be reported to CDash\n\n"
"defaults to spec of the package to operate on"
)
cdash_help["site"] = (
"site name that will be reported to CDash\n\n" "defaults to current system hostname"
)
cdash_help["track"] = (
"results will be reported to this group on CDash\n\n" "defaults to Experimental"
)
cdash_help["buildstamp"] = (
"use custom buildstamp\n\n"
"instead of letting the CDash reporter prepare the "
"buildstamp which, when combined with build name, site and project, "
"uniquely identifies the build, provide this argument to identify "
"the build yourself. format: %%Y%%m%%d-%%H%%M-[cdash-track]"
)
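# For illustration only (not part of this module): a buildstamp matching the
# documented format could be built as
#   import datetime
#   stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M") + "-Experimental"
#   # -> e.g. "20230722-1430-Experimental"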
else:
cdash_help["upload-url"] = argparse.SUPPRESS
cdash_help["build"] = argparse.SUPPRESS
@@ -542,16 +541,16 @@ def add_s3_connection_args(subparser, add_help):
"--s3-access-key-id", help="ID string to use to connect to this S3 mirror"
)
subparser.add_argument(
"--s3-access-key-secret", help="Secret string to use to connect to this S3 mirror"
"--s3-access-key-secret", help="secret string to use to connect to this S3 mirror"
)
subparser.add_argument(
"--s3-access-token", help="Access Token to use to connect to this S3 mirror"
"--s3-access-token", help="access token to use to connect to this S3 mirror"
)
subparser.add_argument(
"--s3-profile", help="S3 profile name to use to connect to this S3 mirror", default=None
)
subparser.add_argument(
"--s3-endpoint-url", help="Endpoint URL to use to connect to this S3 mirror"
"--s3-endpoint-url", help="endpoint URL to use to connect to this S3 mirror"
)


@@ -106,7 +106,7 @@ def emulate_env_utility(cmd_name, context, args):
visitor = AreDepsInstalledVisitor(context=context)
# Mass install check needs read transaction.
with spack.store.db.read_transaction():
with spack.store.STORE.db.read_transaction():
traverse.traverse_breadth_first_with_visitor([spec], traverse.CoverNodesVisitor(visitor))
if visitor.has_uninstalled_deps:


@@ -14,18 +14,16 @@
def setup_parser(subparser):
subparser.add_argument(
"-f", "--force", action="store_true", help="Re-concretize even if already concretized."
"-f", "--force", action="store_true", help="re-concretize even if already concretized"
)
subparser.add_argument(
"--test",
default=None,
choices=["root", "all"],
help="""Concretize with test dependencies. When 'root' is chosen, test
dependencies are only added for the environment's root specs. When 'all' is
chosen, test dependencies are enabled for all packages in the environment.""",
help="concretize with test dependencies of only root packages or all packages",
)
subparser.add_argument(
"-q", "--quiet", action="store_true", help="Don't print concretized specs"
"-q", "--quiet", action="store_true", help="don't print concretized specs"
)
spack.cmd.common.arguments.add_concretizer_args(subparser)


@@ -42,7 +42,7 @@ def setup_parser(subparser):
get_parser = sp.add_parser("get", help="print configuration values")
get_parser.add_argument(
"section",
help="configuration section to print. " "options: %(choices)s",
help="configuration section to print\n\noptions: %(choices)s",
nargs="?",
metavar="section",
choices=spack.config.section_schemas,
@@ -53,7 +53,7 @@ def setup_parser(subparser):
)
blame_parser.add_argument(
"section",
help="configuration section to print. " "options: %(choices)s",
help="configuration section to print\n\noptions: %(choices)s",
metavar="section",
choices=spack.config.section_schemas,
)
@@ -61,7 +61,7 @@ def setup_parser(subparser):
edit_parser = sp.add_parser("edit", help="edit configuration file")
edit_parser.add_argument(
"section",
help="configuration section to edit. " "options: %(choices)s",
help="configuration section to edit\n\noptions: %(choices)s",
metavar="section",
nargs="?",
choices=spack.config.section_schemas,
@@ -76,7 +76,7 @@ def setup_parser(subparser):
add_parser.add_argument(
"path",
nargs="?",
help="colon-separated path to config that should be added," " e.g. 'config:default:true'",
help="colon-separated path to config that should be added, e.g. 'config:default:true'",
)
add_parser.add_argument("-f", "--file", help="file from which to set all config values")
@@ -88,7 +88,7 @@ def setup_parser(subparser):
"--local",
action="store_true",
default=False,
help="Set packages preferences based on local installs, rather " "than upstream.",
help="set packages preferences based on local installs, rather than upstream",
)
remove_parser = sp.add_parser("remove", aliases=["rm"], help="remove configuration parameters")
@@ -157,7 +157,7 @@ def config_get(args):
tty.die("environment has no %s file" % ev.manifest_name)
else:
tty.die("`spack config get` requires a section argument " "or an active environment.")
tty.die("`spack config get` requires a section argument or an active environment.")
def config_blame(args):
@@ -180,7 +180,7 @@ def config_edit(args):
# If we aren't editing a spack.yaml file, get config path from scope.
scope, section = _get_scope_and_section(args)
if not scope and not section:
tty.die("`spack config edit` requires a section argument " "or an active environment.")
tty.die("`spack config edit` requires a section argument or an active environment.")
config_file = spack.config.config.get_config_filename(scope, section)
if args.print_file:
@@ -374,7 +374,7 @@ def config_revert(args):
proceed = True
if not args.yes_to_all:
msg = "The following scopes will be restored from the corresponding" " backup files:\n"
msg = "The following scopes will be restored from the corresponding backup files:\n"
for entry in to_be_restored:
msg += "\t[scope={0.scope}, bkp={0.bkp}]\n".format(entry)
msg += "This operation cannot be undone."
@@ -399,8 +399,8 @@ def config_prefer_upstream(args):
if scope is None:
scope = spack.config.default_modify_scope("packages")
all_specs = set(spack.store.db.query(installed=True))
local_specs = set(spack.store.db.query_local(installed=True))
all_specs = set(spack.store.STORE.db.query(installed=True))
local_specs = set(spack.store.STORE.db.query_local(installed=True))
pref_specs = local_specs if args.local else all_specs - local_specs
conflicting_variants = set()


@@ -10,7 +10,7 @@
import spack.container
import spack.container.images
description = "creates recipes to build images for different" " container runtimes"
description = "creates recipes to build images for different container runtimes"
section = "container"
level = "long"


@@ -325,6 +325,7 @@ class PythonPackageTemplate(PackageTemplate):
# FIXME: Add a build backend, usually defined in pyproject.toml. If no such file
# exists, use setuptools.
# depends_on("py-setuptools", type="build")
# depends_on("py-hatchling", type="build")
# depends_on("py-flit-core", type="build")
# depends_on("py-poetry-core", type="build")
@@ -332,17 +333,11 @@ class PythonPackageTemplate(PackageTemplate):
# depends_on("py-foo", type=("build", "run"))"""
body_def = """\
def global_options(self, spec, prefix):
# FIXME: Add options to pass to setup.py
def config_settings(self, spec, prefix):
# FIXME: Add configuration settings to be passed to the build backend
# FIXME: If not needed, delete this function
options = []
return options
def install_options(self, spec, prefix):
# FIXME: Add options to pass to setup.py install
# FIXME: If not needed, delete this function
options = []
return options"""
settings = {}
return settings"""
def __init__(self, name, url, *args, **kwargs):
# If the user provided `--name py-numpy`, don't rename it py-py-numpy
@@ -612,7 +607,7 @@ def setup_parser(subparser):
"--template",
metavar="TEMPLATE",
choices=sorted(templates.keys()),
help="build system template to use. options: %(choices)s",
help="build system template to use\n\noptions: %(choices)s",
)
subparser.add_argument(
"-r", "--repo", help="path to a repository where the package should be created"
@@ -620,7 +615,7 @@ def setup_parser(subparser):
subparser.add_argument(
"-N",
"--namespace",
help="specify a namespace for the package. must be the namespace of "
help="specify a namespace for the package\n\nmust be the namespace of "
"a repository registered with Spack",
)
subparser.add_argument(
@@ -878,7 +873,7 @@ def get_build_system(template, url, guesser):
# Use whatever build system the guesser detected
selected_template = guesser.build_system
if selected_template == "generic":
tty.warn("Unable to detect a build system. " "Using a generic package template.")
tty.warn("Unable to detect a build system. Using a generic package template.")
else:
msg = "This package looks like it uses the {0} build system"
tty.msg(msg.format(selected_template))


@@ -60,16 +60,16 @@ def create_db_tarball(args):
tarball_name = "spack-db.%s.tar.gz" % _debug_tarball_suffix()
tarball_path = os.path.abspath(tarball_name)
base = os.path.basename(str(spack.store.root))
base = os.path.basename(str(spack.store.STORE.root))
transform_args = []
if "GNU" in tar("--version", output=str):
transform_args = ["--transform", "s/^%s/%s/" % (base, tarball_name)]
else:
transform_args = ["-s", "/^%s/%s/" % (base, tarball_name)]
wd = os.path.dirname(str(spack.store.root))
wd = os.path.dirname(str(spack.store.STORE.root))
with working_dir(wd):
files = [spack.store.db._index_path]
files = [spack.store.STORE.db._index_path]
files += glob("%s/*/*/*/.spack/spec.json" % base)
files += glob("%s/*/*/*/.spack/spec.yaml" % base)
files = [os.path.relpath(f) for f in files]


@@ -26,8 +26,8 @@ def setup_parser(subparser):
"--installed",
action="store_true",
default=False,
help="List installed dependencies of an installed spec, "
"instead of possible dependencies of a package.",
help="list installed dependencies of an installed spec "
"instead of possible dependencies of a package",
)
subparser.add_argument(
"-t",
@@ -60,7 +60,7 @@ def dependencies(parser, args):
format_string = "{name}{@version}{%compiler}{/hash:7}"
if sys.stdout.isatty():
tty.msg("Dependencies of %s" % spec.format(format_string, color=True))
deps = spack.store.db.installed_relatives(
deps = spack.store.STORE.db.installed_relatives(
spec, "children", args.transitive, deptype=args.deptype
)
if deps:


@@ -25,15 +25,15 @@ def setup_parser(subparser):
"--installed",
action="store_true",
default=False,
help="List installed dependents of an installed spec, "
"instead of possible dependents of a package.",
help="list installed dependents of an installed spec "
"instead of possible dependents of a package",
)
subparser.add_argument(
"-t",
"--transitive",
action="store_true",
default=False,
help="Show all transitive dependents.",
help="show all transitive dependents",
)
arguments.add_common_arguments(subparser, ["spec"])
@@ -96,7 +96,7 @@ def dependents(parser, args):
format_string = "{name}{@version}{%compiler}{/hash:7}"
if sys.stdout.isatty():
tty.msg("Dependents of %s" % spec.cformat(format_string))
deps = spack.store.db.installed_relatives(spec, "parents", args.transitive)
deps = spack.store.STORE.db.installed_relatives(spec, "parents", args.transitive)
if deps:
spack.cmd.display_specs(deps, long=True)
else:


@@ -26,7 +26,7 @@
from spack.database import InstallStatuses
from spack.error import SpackError
description = "Replace one package with another via symlinks"
description = "replace one package with another via symlinks"
section = "admin"
level = "long"
@@ -46,7 +46,7 @@ def setup_parser(sp):
action="store_true",
default=True,
dest="dependencies",
help="Deprecate dependencies (default)",
help="deprecate dependencies (default)",
)
deps.add_argument(
"-D",
@@ -54,7 +54,7 @@ def setup_parser(sp):
action="store_false",
default=True,
dest="dependencies",
help="Do not deprecate dependencies",
help="do not deprecate dependencies",
)
install = sp.add_mutually_exclusive_group()
@@ -64,7 +64,7 @@ def setup_parser(sp):
action="store_true",
default=False,
dest="install",
help="Concretize and install deprecator spec",
help="concretize and install deprecator spec",
)
install.add_argument(
"-I",
@@ -72,7 +72,7 @@ def setup_parser(sp):
action="store_false",
default=False,
dest="install",
help="Deprecator spec must already be installed (default)",
help="deprecator spec must already be installed (default)",
)
sp.add_argument(
@@ -81,7 +81,7 @@ def setup_parser(sp):
type=str,
default="soft",
choices=["soft", "hard"],
help="Type of filesystem link to use for deprecation (default soft)",
help="type of filesystem link to use for deprecation (default soft)",
)
sp.add_argument(
@@ -130,7 +130,7 @@ def deprecate(parser, args):
already_deprecated = []
already_deprecated_for = []
for spec in all_deprecate:
deprecated_for = spack.store.db.deprecator(spec)
deprecated_for = spack.store.STORE.db.deprecator(spec)
if deprecated_for:
already_deprecated.append(spec)
already_deprecated_for.append(deprecated_for)


@@ -25,14 +25,14 @@ def setup_parser(subparser):
"--source-path",
dest="source_path",
default=None,
help="path to source directory. defaults to the current directory",
help="path to source directory (defaults to the current directory)",
)
subparser.add_argument(
"-i",
"--ignore-dependencies",
action="store_true",
dest="ignore_deps",
help="don't try to install dependencies of requested packages",
help="do not try to install dependencies of requested packages",
)
arguments.add_common_arguments(subparser, ["no_checksum", "deprecated"])
subparser.add_argument(
@@ -55,16 +55,13 @@ def setup_parser(subparser):
type=str,
dest="shell",
default=None,
help="drop into a build environment in a new shell, e.g. bash, zsh",
help="drop into a build environment in a new shell, e.g., bash",
)
subparser.add_argument(
"--test",
default=None,
choices=["root", "all"],
help="""If 'root' is chosen, run package tests during
installation for top-level packages (but skip tests for dependencies).
if 'all' is chosen, run package tests during installation for all
packages. If neither are chosen, don't run tests for any packages.""",
help="run tests on only root packages or all packages",
)
arguments.add_common_arguments(subparser, ["spec"])


@@ -20,7 +20,7 @@
def setup_parser(subparser):
subparser.add_argument("-p", "--path", help="Source location of package")
subparser.add_argument("-p", "--path", help="source location of package")
clone_group = subparser.add_mutually_exclusive_group()
clone_group.add_argument(
@@ -28,18 +28,18 @@ def setup_parser(subparser):
action="store_false",
dest="clone",
default=None,
help="Do not clone. The package already exists at the source path",
help="do not clone, the package already exists at the source path",
)
clone_group.add_argument(
"--clone",
action="store_true",
dest="clone",
default=None,
help="Clone the package even if the path already exists",
help="clone the package even if the path already exists",
)
subparser.add_argument(
"-f", "--force", help="Remove any files or directories that block cloning source code"
"-f", "--force", help="remove any files or directories that block cloning source code"
)
arguments.add_common_arguments(subparser, ["spec"])
@@ -66,8 +66,7 @@ def develop(parser, args):
# Both old syntax `spack develop pkg@x` and new syntax `spack develop pkg@=x`
# are currently supported.
spec = spack.spec.parse_with_version_concrete(entry["spec"])
pkg_cls = spack.repo.path.get_pkg_class(spec.name)
pkg_cls(spec).stage.steal_source(abspath)
env.develop(spec=spec, path=path, clone=True)
if not env.dev_specs:
tty.warn("No develop specs to download")


@@ -29,7 +29,7 @@ def setup_parser(subparser):
action="store_true",
default=False,
dest="dump_json",
help="Dump json output instead of pretty printing.",
help="dump json output instead of pretty printing",
)
subparser.add_argument(
"--first",


@@ -62,7 +62,7 @@ def setup_parser(subparser):
dest="path",
action="store_const",
const=spack.paths.build_systems_path,
help="Edit the build system with the supplied name.",
help="edit the build system with the supplied name",
)
excl_args.add_argument(
"-c",


@@ -102,7 +102,7 @@ def env_activate_setup_parser(subparser):
dest="with_view",
const=True,
default=True,
help="update PATH etc. with associated view",
help="update PATH, etc., with associated view",
)
view_options.add_argument(
"-V",
@@ -111,7 +111,7 @@ def env_activate_setup_parser(subparser):
dest="with_view",
const=False,
default=True,
help="do not update PATH etc. with associated view",
help="do not update PATH, etc., with associated view",
)
subparser.add_argument(
@@ -161,7 +161,7 @@ def env_activate(args):
# Error out when -e, -E, -D flags are given, cause they are ambiguous.
if args.env or args.no_env or args.env_dir:
tty.die("Calling spack env activate with --env, --env-dir and --no-env " "is ambiguous")
tty.die("Calling spack env activate with --env, --env-dir and --no-env is ambiguous")
env_name_or_dir = args.activate_env or args.dir
@@ -250,7 +250,7 @@ def env_deactivate(args):
# Error out when -e, -E, -D flags are given, cause they are ambiguous.
if args.env or args.no_env or args.env_dir:
tty.die("Calling spack env deactivate with --env, --env-dir and --no-env " "is ambiguous")
tty.die("Calling spack env deactivate with --env, --env-dir and --no-env is ambiguous")
if ev.active_environment() is None:
tty.die("No environment is currently active.")
@@ -290,7 +290,7 @@ def env_create_setup_parser(subparser):
"envfile",
nargs="?",
default=None,
help="either a lockfile (must end with '.json' or '.lock') or a manifest file.",
help="either a lockfile (must end with '.json' or '.lock') or a manifest file",
)
@@ -608,16 +608,16 @@ def env_depfile_setup_parser(subparser):
"--make-target-prefix",
default=None,
metavar="TARGET",
help="prefix Makefile targets (and variables) with <TARGET>/<name>. By default "
help="prefix Makefile targets (and variables) with <TARGET>/<name>\n\nby default "
"the absolute path to the directory makedeps under the environment metadata dir is "
"used. Can be set to an empty string --make-prefix ''.",
"used. can be set to an empty string --make-prefix ''",
)
subparser.add_argument(
"--make-disable-jobserver",
default=True,
action="store_false",
dest="jobserver",
help="disable POSIX jobserver support.",
help="disable POSIX jobserver support",
)
subparser.add_argument(
"--use-buildcache",
@@ -625,8 +625,8 @@ def env_depfile_setup_parser(subparser):
type=arguments.use_buildcache,
default="package:auto,dependencies:auto",
metavar="[{auto,only,never},][package:{auto,only,never},][dependencies:{auto,only,never}]",
help="When using `only`, redundant build dependencies are pruned from the DAG. "
"This flag is passed on to the generated spack install commands.",
help="when using `only`, redundant build dependencies are pruned from the DAG\n\n"
"this flag is passed on to the generated spack install commands",
)
subparser.add_argument(
"-o",
@@ -640,7 +640,7 @@ def env_depfile_setup_parser(subparser):
"--generator",
default="make",
choices=("make",),
help="specify the depfile type. Currently only make is supported.",
help="specify the depfile type\n\ncurrently only make is supported",
)
subparser.add_argument(
metavar="specs",


@@ -22,7 +22,7 @@
def setup_parser(subparser):
subparser.epilog = (
"If called without argument returns " "the list of all valid extendable packages"
"If called without argument returns the list of all valid extendable packages"
)
arguments.add_common_arguments(subparser, ["long", "very_long"])
subparser.add_argument(
@@ -91,7 +91,7 @@ def extensions(parser, args):
if args.show in ("installed", "all"):
# List specs of installed extensions.
installed = [s.spec for s in spack.store.db.installed_extensions_for(spec)]
installed = [s.spec for s in spack.store.STORE.db.installed_extensions_for(spec)]
if args.show == "all":
print


@@ -42,7 +42,7 @@ def setup_parser(subparser):
"--path",
default=None,
action="append",
help="Alternative search paths for finding externals. May be repeated",
help="one or more alternative search paths for finding externals",
)
find_parser.add_argument(
"--scope",
@@ -66,10 +66,8 @@ def setup_parser(subparser):
read_cray_manifest = sp.add_parser(
"read-cray-manifest",
help=(
"consume a Spack-compatible description of externally-installed "
"packages, including dependency relationships"
),
help="consume a Spack-compatible description of externally-installed packages, including "
"dependency relationships",
)
read_cray_manifest.add_argument(
"--file", default=None, help="specify a location other than the default"
@@ -92,7 +90,7 @@ def setup_parser(subparser):
read_cray_manifest.add_argument(
"--fail-on-error",
action="store_true",
help=("if a manifest file cannot be parsed, fail and report the " "full stack trace"),
help="if a manifest file cannot be parsed, fail and report the full stack trace",
)
@@ -111,14 +109,14 @@ def external_find(args):
# For most exceptions, just print a warning and continue.
# Note that KeyboardInterrupt does not subclass Exception
# (so CTRL-C will terminate the program as expected).
skip_msg = "Skipping manifest and continuing with other external " "checks"
skip_msg = "Skipping manifest and continuing with other external checks"
if (isinstance(e, IOError) or isinstance(e, OSError)) and e.errno in [
errno.EPERM,
errno.EACCES,
]:
# The manifest file does not have sufficient permissions enabled:
# print a warning and keep going
tty.warn("Unable to read manifest due to insufficient " "permissions.", skip_msg)
tty.warn("Unable to read manifest due to insufficient permissions.", skip_msg)
else:
tty.warn("Unable to read manifest, unexpected error: {0}".format(str(e)), skip_msg)
@@ -168,7 +166,7 @@ def external_find(args):
)
if new_entries:
path = spack.config.config.get_config_filename(args.scope, "packages")
msg = "The following specs have been detected on this system " "and added to {0}"
msg = "The following specs have been detected on this system and added to {0}"
tty.msg(msg.format(path))
spack.cmd.display_specs(new_entries)
else:
@@ -236,7 +234,7 @@ def _collect_and_consume_cray_manifest_files(
if fail_on_error:
raise
else:
tty.warn("Failure reading manifest file: {0}" "\n\t{1}".format(path, str(e)))
tty.warn("Failure reading manifest file: {0}\n\t{1}".format(path, str(e)))
def external_list(args):


@@ -10,6 +10,7 @@
import spack.config
import spack.environment as ev
import spack.repo
import spack.traverse
description = "fetch archives for packages"
section = "build"
@@ -36,6 +37,12 @@ def setup_parser(subparser):
def fetch(parser, args):
if args.no_checksum:
spack.config.set("config:checksum", False, scope="command_line")
if args.deprecated:
spack.config.set("config:deprecated", True, scope="command_line")
if args.specs:
specs = spack.cmd.parse_specs(args.specs, concretize=True)
else:
@@ -51,24 +58,21 @@ def fetch(parser, args):
else:
specs = env.all_specs()
if specs == []:
tty.die(
"No uninstalled specs in environment. Did you " "run `spack concretize` yet?"
)
tty.die("No uninstalled specs in environment. Did you run `spack concretize` yet?")
else:
tty.die("fetch requires at least one spec argument")
if args.no_checksum:
spack.config.set("config:checksum", False, scope="command_line")
if args.dependencies or args.missing:
to_be_fetched = spack.traverse.traverse_nodes(specs, key=spack.traverse.by_dag_hash)
else:
to_be_fetched = specs
if args.deprecated:
spack.config.set("config:deprecated", True, scope="command_line")
for spec in to_be_fetched:
if args.missing and spec.installed:
continue
for spec in specs:
if args.missing or args.dependencies:
for s in spec.traverse(root=False):
# Skip already-installed packages with --missing
if args.missing and s.installed:
continue
pkg = spec.package
s.package.do_fetch()
spec.package.do_fetch()
pkg.stage.keep = True
with pkg.stage:
pkg.do_fetch()
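
A note on the refactor above: spack.traverse.traverse_nodes keyed by DAG hash visits every node reachable from the given specs exactly once, replacing the hand-rolled per-spec spec.traverse() loop. A minimal sketch of such a deduplicating traversal (not Spack's implementation; it assumes each node exposes dag_hash() and dependencies()):

    def traverse_nodes(roots, key=lambda n: n.dag_hash()):
        # visit each node reachable from any root exactly once,
        # deduplicated by the given key (here: the DAG hash)
        seen = set()
        stack = list(roots)
        while stack:
            node = stack.pop()
            k = key(node)
            if k in seen:
                continue
            seen.add(k)
            yield node
            stack.extend(node.dependencies())
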

View File

@@ -30,6 +30,14 @@ def setup_parser(subparser):
default=None,
help="output specs with the specified format string",
)
format_group.add_argument(
"-H",
"--hashes",
action="store_const",
dest="format",
const="{/hash}",
help="same as '--format {/hash}'; use with xargs or $()",
)
format_group.add_argument(
"--json",
action="store_true",

View File
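
The -H/--hashes flag added above is sugar for --format {/hash}: store_const writes the constant into the same dest as --format. A standalone argparse sketch of that pattern (hypothetical parser, not Spack's setup code):

    import argparse

    parser = argparse.ArgumentParser()
    group = parser.add_mutually_exclusive_group()
    group.add_argument("--format", default=None)
    group.add_argument("-H", "--hashes", action="store_const", dest="format", const="{/hash}")

    print(parser.parse_args(["-H"]).format)                  # -> {/hash}
    print(parser.parse_args(["--format", "{name}"]).format)  # -> {name}
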

@@ -20,7 +20,7 @@ def setup_parser(subparser):
def gc(parser, args):
specs = spack.store.db.unused_specs
specs = spack.store.STORE.db.unused_specs
# Restrict garbage collection to the active environment
# speculating over roots that are yet to be installed

View File

@@ -68,7 +68,7 @@ def setup_parser(subparser):
metavar="DEST",
type=str,
dest="secret",
help="export the private key to a file.",
help="export the private key to a file",
)
create.set_defaults(func=gpg_create)
@@ -86,7 +86,7 @@ def setup_parser(subparser):
export = subparsers.add_parser("export", help=gpg_export.__doc__)
export.add_argument("location", type=str, help="where to export keys")
export.add_argument(
"keys", nargs="*", help="the keys to export; " "all public keys if unspecified"
"keys", nargs="*", help="the keys to export (all public keys if unspecified)"
)
export.add_argument("--secret", action="store_true", help="export secret keys")
export.set_defaults(func=gpg_export)
@@ -99,29 +99,29 @@ def setup_parser(subparser):
"--directory",
metavar="directory",
type=str,
help="local directory where keys will be published.",
help="local directory where keys will be published",
)
output.add_argument(
"-m",
"--mirror-name",
metavar="mirror-name",
type=str,
help="name of the mirror where " + "keys will be published.",
help="name of the mirror where keys will be published",
)
output.add_argument(
"--mirror-url",
metavar="mirror-url",
type=str,
help="URL of the mirror where " + "keys will be published.",
help="URL of the mirror where keys will be published",
)
publish.add_argument(
"--rebuild-index",
action="store_true",
default=False,
help=("Regenerate buildcache key index " "after publishing key(s)"),
help="regenerate buildcache key index after publishing key(s)",
)
publish.add_argument(
"keys", nargs="*", help="the keys to publish; " "all public keys if unspecified"
"keys", nargs="*", help="keys to publish (all public keys if unspecified)"
)
publish.set_defaults(func=gpg_publish)
@@ -146,7 +146,7 @@ def gpg_create(args):
def gpg_export(args):
"""export a gpg key, optionally including secret key."""
"""export a gpg key, optionally including secret key"""
keys = args.keys
if not keys:
keys = spack.util.gpg.signing_keys()
@@ -168,7 +168,7 @@ def gpg_sign(args):
elif not keys:
raise RuntimeError("no signing keys are available")
else:
raise RuntimeError("multiple signing keys are available; " "please choose one")
raise RuntimeError("multiple signing keys are available; please choose one")
output = args.output
if not output:
output = args.spec[0] + ".asc"
@@ -216,7 +216,7 @@ def gpg_publish(args):
url = spack.util.url.path_to_file_url(args.directory)
mirror = spack.mirror.Mirror(url, url)
elif args.mirror_name:
mirror = spack.mirror.MirrorCollection().lookup(args.mirror_name)
mirror = spack.mirror.MirrorCollection(binary=True).lookup(args.mirror_name)
elif args.mirror_url:
mirror = spack.mirror.Mirror(args.mirror_url, args.mirror_url)

View File

@@ -63,7 +63,7 @@ def graph(parser, args):
if env:
specs = env.all_specs()
else:
specs = spack.store.db.query()
specs = spack.store.STORE.db.query()
else:
specs = spack.cmd.parse_specs(args.specs, concretize=not args.static)

View File

@@ -75,10 +75,9 @@ def setup_parser(subparser):
default="package,dependencies",
dest="things_to_install",
choices=["package", "dependencies"],
help="""select the mode of installation.
the default is to install the package along with all its dependencies.
alternatively one can decide to install only the package or only
the dependencies""",
help="select the mode of installation\n\n"
"default is to install the package along with all its dependencies. "
"alternatively, one can decide to install only the package or only the dependencies",
)
subparser.add_argument(
"-u",
@@ -143,12 +142,11 @@ def setup_parser(subparser):
type=arguments.use_buildcache,
default="package:auto,dependencies:auto",
metavar="[{auto,only,never},][package:{auto,only,never},][dependencies:{auto,only,never}]",
help="""select the mode of buildcache for the 'package' and 'dependencies'.
Default: package:auto,dependencies:auto
- `auto` behaves like --use-cache
- `only` behaves like --cache-only
- `never` behaves like --no-cache
""",
help="select the mode of buildcache for the 'package' and 'dependencies'\n\n"
"default: package:auto,dependencies:auto\n\n"
"- `auto` behaves like --use-cache\n"
"- `only` behaves like --cache-only\n"
"- `never` behaves like --no-cache",
)
subparser.add_argument(
@@ -156,8 +154,8 @@ def setup_parser(subparser):
action="store_true",
dest="include_build_deps",
default=False,
help="""include build deps when installing from cache,
which is useful for CI pipeline troubleshooting""",
help="include build deps when installing from cache, "
"useful for CI pipeline troubleshooting",
)
subparser.add_argument(
@@ -186,7 +184,7 @@ def setup_parser(subparser):
dest="install_verbose",
help="display verbose build output while installing",
)
subparser.add_argument("--fake", action="store_true", help="fake install for debug purposes.")
subparser.add_argument("--fake", action="store_true", help="fake install for debug purposes")
subparser.add_argument(
"--only-concrete",
action="store_true",
@@ -199,14 +197,13 @@ def setup_parser(subparser):
"--add",
action="store_true",
default=False,
help="""(with environment) add spec to the environment as a root.""",
help="(with environment) add spec to the environment as a root",
)
updateenv_group.add_argument(
"--no-add",
action="store_false",
dest="add",
help="""(with environment) do not add spec to the environment as a
root (the default behavior).""",
help="(with environment) do not add spec to the environment as a root",
)
subparser.add_argument(
@@ -216,7 +213,7 @@ def setup_parser(subparser):
default=[],
dest="specfiles",
metavar="SPEC_YAML_FILE",
help="install from file. Read specs to install from .yaml files",
help="read specs to install from .yaml files",
)
cd_group = subparser.add_mutually_exclusive_group()
@@ -227,19 +224,12 @@ def setup_parser(subparser):
"--test",
default=None,
choices=["root", "all"],
help="""If 'root' is chosen, run package tests during
installation for top-level packages (but skip tests for dependencies).
if 'all' is chosen, run package tests during installation for all
packages. If neither are chosen, don't run tests for any packages.""",
help="run tests on only root packages or all packages",
)
arguments.add_common_arguments(subparser, ["log_format"])
subparser.add_argument("--log-file", default=None, help="filename for the log file")
subparser.add_argument(
"--log-file",
default=None,
help="filename for the log file. if not passed a default will be used",
)
subparser.add_argument(
"--help-cdash", action="store_true", help="Show usage instructions for CDash reporting"
"--help-cdash", action="store_true", help="show usage instructions for CDash reporting"
)
arguments.add_cdash_args(subparser, False)
arguments.add_common_arguments(subparser, ["yes_to_all", "spec"])
@@ -276,11 +266,11 @@ def require_user_confirmation_for_overwrite(concrete_specs, args):
if args.yes_to_all:
return
installed = list(filter(lambda x: x, map(spack.store.db.query_one, concrete_specs)))
installed = list(filter(lambda x: x, map(spack.store.STORE.db.query_one, concrete_specs)))
display_args = {"long": True, "show_flags": True, "variants": True}
if installed:
tty.msg("The following package specs will be " "reinstalled:\n")
tty.msg("The following package specs will be reinstalled:\n")
spack.cmd.display_specs(installed, **display_args)
not_installed = list(filter(lambda x: x not in installed, concrete_specs))

View File

@@ -66,10 +66,9 @@ def setup_parser(subparser):
default="package,dependencies",
dest="things_to_load",
choices=["package", "dependencies"],
help="""select whether to load the package and its dependencies
the default is to load the package and all dependencies
alternatively one can decide to load only the package or only
the dependencies""",
help="select whether to load the package and its dependencies\n\n"
"the default is to load the package and all dependencies. alternatively, "
"one can decide to load only the package or only the dependencies",
)
subparser.add_argument(
@@ -102,7 +101,7 @@ def load(parser, args):
)
return 1
with spack.store.db.read_transaction():
with spack.store.STORE.db.read_transaction():
if "dependencies" in args.things_to_load:
include_roots = "package" in args.things_to_load
specs = [

View File

@@ -55,13 +55,13 @@ def setup_parser(subparser):
directories.add_argument(
"--source-dir",
action="store_true",
help="source directory for a spec " "(requires it to be staged first)",
help="source directory for a spec (requires it to be staged first)",
)
directories.add_argument(
"-b",
"--build-dir",
action="store_true",
help="build directory for a spec " "(requires it to be staged first)",
help="build directory for a spec (requires it to be staged first)",
)
directories.add_argument(
"-e",
@@ -162,7 +162,7 @@ def location(parser, args):
# source dir remains, which requires the spec to be staged
if not pkg.stage.expanded:
tty.die(
"Source directory does not exist yet. " "Run this to create it:",
"Source directory does not exist yet. Run this to create it:",
"spack stage " + " ".join(args.spec),
)

View File

@@ -39,7 +39,7 @@ def line_to_rtf(str):
def setup_parser(subparser):
spack_source_group = subparser.add_mutually_exclusive_group(required=True)
spack_source_group.add_argument(
"-v", "--spack-version", default="", help="download given spack version e.g. 0.16.0"
"-v", "--spack-version", default="", help="download given spack version"
)
spack_source_group.add_argument(
"-s", "--spack-source", default="", help="full path to spack source"
@@ -50,7 +50,7 @@ def setup_parser(subparser):
"--git-installer-verbosity",
default="",
choices=["SILENT", "VERYSILENT"],
help="Level of verbosity provided by bundled Git Installer. Default is fully verbose",
help="level of verbosity provided by bundled git installer (default is fully verbose)",
required=False,
action="store",
dest="git_verbosity",

View File

@@ -35,10 +35,7 @@ def setup_parser(subparser):
"--all",
action="store_true",
dest="all",
help="Mark ALL installed packages that match each "
"supplied spec. If you `mark --all libelf`,"
" ALL versions of `libelf` are marked. If no spec is "
"supplied, all installed packages will be marked.",
help="mark ALL installed packages that match each supplied spec",
)
exim = subparser.add_mutually_exclusive_group(required=True)
exim.add_argument(
@@ -46,14 +43,14 @@ def setup_parser(subparser):
"--explicit",
action="store_true",
dest="explicit",
help="Mark packages as explicitly installed.",
help="mark packages as explicitly installed",
)
exim.add_argument(
"-i",
"--implicit",
action="store_true",
dest="implicit",
help="Mark packages as implicitly installed.",
help="mark packages as implicitly installed",
)
@@ -74,7 +71,7 @@ def find_matching_specs(specs, allow_multiple_matches=False):
for spec in specs:
install_query = [InstallStatuses.INSTALLED]
matching = spack.store.db.query_local(spec, installed=install_query)
matching = spack.store.STORE.db.query_local(spec, installed=install_query)
# For each spec provided, make sure it refers to only one package.
# Fail and ask user to be unambiguous if it doesn't
if not allow_multiple_matches and len(matching) > 1:
@@ -105,7 +102,7 @@ def do_mark(specs, explicit):
explicit (bool): whether to mark specs as explicitly installed
"""
for spec in specs:
spack.store.db.update_explicit(spec, explicit)
spack.store.STORE.db.update_explicit(spec, explicit)
def mark_specs(args, specs):

View File

@@ -21,7 +21,6 @@
import spack.util.path
import spack.util.web as web_util
from spack.error import SpackError
from spack.util.spack_yaml import syaml_dict
description = "manage mirrors (source and binary)"
section = "config"
@@ -55,13 +54,13 @@ def setup_parser(subparser):
)
create_parser.add_argument(
"--exclude-specs",
help="specs which Spack should not try to add to a mirror" " (specified on command line)",
help="specs which Spack should not try to add to a mirror (specified on command line)",
)
create_parser.add_argument(
"--skip-unstable-versions",
action="store_true",
help="don't cache versions unless they identify a stable (unchanging)" " source code",
help="don't cache versions unless they identify a stable (unchanging) source code",
)
create_parser.add_argument(
"-D", "--dependencies", action="store_true", help="also fetch all dependencies"
@@ -104,6 +103,15 @@ def setup_parser(subparser):
default=spack.config.default_modify_scope(),
help="configuration scope to modify",
)
add_parser.add_argument(
"--type",
action="append",
choices=("binary", "source"),
help=(
"specify the mirror type: for both binary "
"and source use `--type binary --type source` (default)"
),
)
arguments.add_s3_connection_args(add_parser, False)
# Remove
remove_parser = sp.add_parser("remove", aliases=["rm"], help=mirror_remove.__doc__)
@@ -120,8 +128,12 @@ def setup_parser(subparser):
set_url_parser = sp.add_parser("set-url", help=mirror_set_url.__doc__)
set_url_parser.add_argument("name", help="mnemonic name for mirror", metavar="mirror")
set_url_parser.add_argument("url", help="url of mirror directory from 'spack mirror create'")
set_url_parser.add_argument(
"--push", action="store_true", help="set only the URL used for uploading new packages"
set_url_push_or_fetch = set_url_parser.add_mutually_exclusive_group(required=False)
set_url_push_or_fetch.add_argument(
"--push", action="store_true", help="set only the URL used for uploading"
)
set_url_push_or_fetch.add_argument(
"--fetch", action="store_true", help="set only the URL used for downloading"
)
set_url_parser.add_argument(
"--scope",
@@ -132,6 +144,35 @@ def setup_parser(subparser):
)
arguments.add_s3_connection_args(set_url_parser, False)
# Set
set_parser = sp.add_parser("set", help=mirror_set.__doc__)
set_parser.add_argument("name", help="mnemonic name for mirror", metavar="mirror")
set_parser_push_or_fetch = set_parser.add_mutually_exclusive_group(required=False)
set_parser_push_or_fetch.add_argument(
"--push", action="store_true", help="modify just the push connection details"
)
set_parser_push_or_fetch.add_argument(
"--fetch", action="store_true", help="modify just the fetch connection details"
)
set_parser.add_argument(
"--type",
action="append",
choices=("binary", "source"),
help=(
"specify the mirror type: for both binary "
"and source use `--type binary --type source`"
),
)
set_parser.add_argument("--url", help="url of mirror directory from 'spack mirror create'")
set_parser.add_argument(
"--scope",
choices=scopes,
metavar=scopes_metavar,
default=spack.config.default_modify_scope(),
help="configuration scope to modify",
)
arguments.add_s3_connection_args(set_parser, False)
# List
list_parser = sp.add_parser("list", help=mirror_list.__doc__)
list_parser.add_argument(
@@ -144,105 +185,85 @@ def setup_parser(subparser):
def mirror_add(args):
"""Add a mirror to Spack."""
"""add a mirror to Spack"""
if (
args.s3_access_key_id
or args.s3_access_key_secret
or args.s3_access_token
or args.s3_profile
or args.s3_endpoint_url
or args.type
):
connection = {"url": args.url}
if args.s3_access_key_id and args.s3_access_key_secret:
connection["access_pair"] = (args.s3_access_key_id, args.s3_access_key_secret)
connection["access_pair"] = [args.s3_access_key_id, args.s3_access_key_secret]
if args.s3_access_token:
connection["access_token"] = args.s3_access_token
if args.s3_profile:
connection["profile"] = args.s3_profile
if args.s3_endpoint_url:
connection["endpoint_url"] = args.s3_endpoint_url
mirror = spack.mirror.Mirror(fetch_url=connection, push_url=connection, name=args.name)
if args.type:
connection["binary"] = "binary" in args.type
connection["source"] = "source" in args.type
mirror = spack.mirror.Mirror(connection, name=args.name)
else:
mirror = spack.mirror.Mirror(args.url, name=args.name)
spack.mirror.add(mirror, args.scope)
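
With the new --type flag, mirror_add records binary/source availability on the connection dict before constructing the Mirror. A hypothetical payload for `spack mirror add --type binary --type source m s3://bucket` (field names follow the code above; the concrete values are made up):

    connection = {
        "url": "s3://bucket",
        "access_pair": ["EXAMPLE_KEY_ID", "EXAMPLE_SECRET"],  # made-up credentials
        "binary": True,   # "binary" in args.type
        "source": True,   # "source" in args.type
    }
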
def mirror_remove(args):
"""Remove a mirror by name."""
"""remove a mirror by name"""
spack.mirror.remove(args.name, args.scope)
def mirror_set_url(args):
"""Change the URL of a mirror."""
url = args.url
def _configure_mirror(args):
mirrors = spack.config.get("mirrors", scope=args.scope)
if not mirrors:
mirrors = syaml_dict()
if args.name not in mirrors:
tty.die("No mirror found with name %s." % args.name)
tty.die(f"No mirror found with name {args.name}.")
entry = mirrors[args.name]
key_values = ["s3_access_key_id", "s3_access_token", "s3_profile"]
entry = spack.mirror.Mirror(mirrors[args.name], args.name)
direction = "fetch" if args.fetch else "push" if args.push else None
changes = {}
if args.url:
changes["url"] = args.url
if args.s3_access_key_id and args.s3_access_key_secret:
changes["access_pair"] = [args.s3_access_key_id, args.s3_access_key_secret]
if args.s3_access_token:
changes["access_token"] = args.s3_access_token
if args.s3_profile:
changes["profile"] = args.s3_profile
if args.s3_endpoint_url:
changes["endpoint_url"] = args.s3_endpoint_url
if any(value for value in key_values if value in args):
incoming_data = {
"url": url,
"access_pair": (args.s3_access_key_id, args.s3_access_key_secret),
"access_token": args.s3_access_token,
"profile": args.s3_profile,
"endpoint_url": args.s3_endpoint_url,
}
try:
fetch_url = entry["fetch"]
push_url = entry["push"]
except TypeError:
fetch_url, push_url = entry, entry
# argparse cannot distinguish between --binary and --no-binary when same dest :(
# notice that set-url does not have these args, so getattr
if getattr(args, "type", None):
changes["binary"] = "binary" in args.type
changes["source"] = "source" in args.type
changes_made = False
changed = entry.update(changes, direction)
if args.push:
if isinstance(push_url, dict):
changes_made = changes_made or push_url != incoming_data
push_url = incoming_data
else:
changes_made = changes_made or push_url != url
push_url = url
else:
if isinstance(push_url, dict):
changes_made = changes_made or push_url != incoming_data or push_url != incoming_data
fetch_url, push_url = incoming_data, incoming_data
else:
changes_made = changes_made or push_url != url
fetch_url, push_url = url, url
items = [
(
(n, u)
if n != args.name
else (
(n, {"fetch": fetch_url, "push": push_url})
if fetch_url != push_url
else (n, {"fetch": fetch_url, "push": fetch_url})
)
)
for n, u in mirrors.items()
]
mirrors = syaml_dict(items)
spack.config.set("mirrors", mirrors, scope=args.scope)
if changes_made:
tty.msg(
"Changed%s url or connection information for mirror %s."
% ((" (push)" if args.push else ""), args.name)
)
if changed:
mirrors[args.name] = entry.to_dict()
spack.config.set("mirrors", mirrors, scope=args.scope)
else:
tty.msg("No changes made to mirror %s." % args.name)
def mirror_set(args):
"""configure the connection details of a mirror"""
_configure_mirror(args)
def mirror_set_url(args):
"""change the URL of a mirror"""
_configure_mirror(args)
def mirror_list(args):
"""Print out available mirrors to the console."""
"""print out available mirrors to the console"""
mirrors = spack.mirror.MirrorCollection(scope=args.scope)
if not mirrors:
@@ -395,9 +416,7 @@ def process_mirror_stats(present, mirrored, error):
def mirror_create(args):
"""Create a directory to be used as a spack mirror, and fill it with
package archives.
"""
"""create a directory to be used as a spack mirror, and fill it with package archives"""
if args.specs and args.all:
raise SpackError(
"cannot specify specs on command line if you chose to mirror all specs with '--all'"
@@ -470,7 +489,7 @@ def create_mirror_for_all_specs_inside_environment(path, skip_unstable_versions,
def mirror_destroy(args):
"""Given a url, recursively delete everything under it."""
"""given a url, recursively delete everything under it"""
mirror_url = None
if args.mirror_name:
@@ -490,6 +509,7 @@ def mirror(parser, args):
"remove": mirror_remove,
"rm": mirror_remove,
"set-url": mirror_set_url,
"set": mirror_set,
"list": mirror_list,
}

View File

@@ -11,6 +11,7 @@
import sys
from llnl.util import filesystem, tty
from llnl.util.tty import color
import spack.cmd
import spack.cmd.common.arguments as arguments
@@ -31,7 +32,7 @@ def setup_parser(subparser):
action="store",
dest="module_set_name",
default="default",
help="Named module set to use from modules configuration.",
help="named module set to use from modules configuration",
)
sp = subparser.add_subparsers(metavar="SUBCOMMAND", dest="subparser_name")
@@ -347,14 +348,20 @@ def refresh(module_type, specs, args):
spack.modules.common.generate_module_index(
module_type_root, writers, overwrite=args.delete_tree
)
errors = []
for x in writers:
try:
x.write(overwrite=True)
except spack.error.SpackError as e:
msg = f"{x.layout.filename}: {e.message}"
errors.append(msg)
except Exception as e:
tty.debug(e)
msg = "Could not write module file [{0}]"
tty.warn(msg.format(x.layout.filename))
tty.warn("\t--> {0} <--".format(str(e)))
msg = f"{x.layout.filename}: {str(e)}"
errors.append(msg)
if errors:
errors.insert(0, color.colorize("@*{some module files could not be written}"))
tty.warn("\n".join(errors))
#: Dictionary populated with the list of sub-commands.
@@ -368,7 +375,9 @@ def refresh(module_type, specs, args):
def modules_cmd(parser, args, module_type, callbacks=callbacks):
# Qualifiers to be used when querying the db for specs
constraint_qualifiers = {"refresh": {"installed": True, "known": True}}
constraint_qualifiers = {
"refresh": {"installed": True, "known": lambda x: not spack.repo.path.exists(x)}
}
query_args = constraint_qualifiers.get(args.subparser_name, {})
# Get the specs that match the query from the DB

View File

@@ -30,7 +30,7 @@ def add_command(parser, command_dict):
def setdefault(module_type, specs, args):
"""Set the default module file, when multiple are present"""
"""set the default module file, when multiple are present"""
# For details on the underlying mechanism see:
#
# https://lmod.readthedocs.io/en/latest/060_locating.html#marking-a-version-as-default

View File

@@ -29,7 +29,7 @@ def add_command(parser, command_dict):
def setdefault(module_type, specs, args):
"""Set the default module file, when multiple are present"""
"""set the default module file, when multiple are present"""
# Currently, accepts only a single matching spec
spack.cmd.modules.one_spec_or_raise(specs)
spec = specs[0]

View File

@@ -7,7 +7,11 @@
import spack.cmd
import spack.cmd.common.arguments as arguments
import spack.config
import spack.environment as ev
import spack.package_base
import spack.repo
import spack.traverse
description = "patch expanded archive sources in preparation for install"
section = "build"
@@ -21,7 +25,10 @@ def setup_parser(subparser):
def patch(parser, args):
if not args.specs:
tty.die("patch requires at least one spec argument")
env = ev.active_environment()
if not env:
tty.die("`spack patch` requires a spec or an active environment")
return _patch_env(env)
if args.no_checksum:
spack.config.set("config:checksum", False, scope="command_line")
@@ -29,6 +36,19 @@ def patch(parser, args):
if args.deprecated:
spack.config.set("config:deprecated", True, scope="command_line")
specs = spack.cmd.parse_specs(args.specs, concretize=True)
specs = spack.cmd.parse_specs(args.specs, concretize=False)
for spec in specs:
spec.package.do_patch()
_patch(spack.cmd.matching_spec_from_env(spec).package)
def _patch_env(env: ev.Environment):
tty.msg(f"Patching specs from environment {env.name}")
for spec in spack.traverse.traverse_nodes(env.concrete_roots()):
_patch(spec.package)
def _patch(pkg: spack.package_base.PackageBase):
pkg.stage.keep = True
with pkg.stage:
pkg.do_patch()
tty.msg(f"Patched {pkg.name} in {pkg.stage.path}")

View File

@@ -58,7 +58,7 @@ def setup_parser(subparser):
"--type",
action="store",
default="C",
help="Types of changes to show (A: added, R: removed, " "C: changed); default is 'C'",
help="types of changes to show (A: added, R: removed, C: changed); default is 'C'",
)
rm_parser = sp.add_parser("removed", help=pkg_removed.__doc__)
@@ -81,7 +81,7 @@ def setup_parser(subparser):
"--canonical",
action="store_true",
default=False,
help="dump canonical source as used by package hash.",
help="dump canonical source as used by package hash",
)
arguments.add_common_arguments(source_parser, ["spec"])

View File

@@ -17,9 +17,7 @@
def setup_parser(subparser):
subparser.epilog = (
"If called without argument returns " "the list of all valid virtual packages"
)
subparser.epilog = "If called without argument returns the list of all valid virtual packages"
subparser.add_argument(
"virtual_package", nargs="*", help="find packages that provide this virtual package"
)

View File

@@ -11,4 +11,4 @@
def reindex(parser, args):
spack.store.store.reindex()
spack.store.STORE.reindex()

View File

@@ -27,7 +27,7 @@ def setup_parser(subparser):
create_parser.add_argument("directory", help="directory to create the repo in")
create_parser.add_argument(
"namespace",
help="namespace to identify packages in the repository. " "defaults to the directory name",
help="namespace to identify packages in the repository (defaults to the directory name)",
nargs="?",
)
create_parser.add_argument(
@@ -36,10 +36,8 @@ def setup_parser(subparser):
action="store",
dest="subdir",
default=spack.repo.packages_dir_name,
help=(
"subdirectory to store packages in the repository."
" Default 'packages'. Use an empty string for no subdirectory."
),
help="subdirectory to store packages in the repository\n\n"
"default 'packages'. use an empty string for no subdirectory",
)
# List
@@ -78,14 +76,14 @@ def setup_parser(subparser):
def repo_create(args):
"""Create a new package repository."""
"""create a new package repository"""
full_path, namespace = spack.repo.create_repo(args.directory, args.namespace, args.subdir)
tty.msg("Created repo with namespace '%s'." % namespace)
tty.msg("To register it with spack, run this command:", "spack repo add %s" % full_path)
def repo_add(args):
"""Add a package source to Spack's configuration."""
"""add a package source to Spack's configuration"""
path = args.path
# real_path is absolute and handles substitution.
@@ -116,7 +114,7 @@ def repo_add(args):
def repo_remove(args):
"""Remove a repository from Spack's configuration."""
"""remove a repository from Spack's configuration"""
repos = spack.config.get("repos", scope=args.scope)
namespace_or_path = args.namespace_or_path
@@ -146,7 +144,7 @@ def repo_remove(args):
def repo_list(args):
"""Show registered repositories and their namespaces."""
"""show registered repositories and their namespaces"""
roots = spack.config.get("repos", scope=args.scope)
repos = []
for r in roots:

View File

@@ -33,7 +33,7 @@ def setup_parser(subparser):
"--show",
action="store",
default="opt,solutions",
help="select outputs: comma-separated list of: \n"
help="select outputs\n\ncomma-separated list of:\n"
" asp asp program text\n"
" opt optimization criteria for best model\n"
" output raw clingo output\n"

View File

@@ -100,7 +100,7 @@ def spec(parser, args):
# spec in the DAG. This avoids repeatedly querying the DB.
tree_context = lang.nullcontext
if args.install_status:
tree_context = spack.store.db.read_transaction
tree_context = spack.store.STORE.db.read_transaction
# Use command line specified specs, otherwise try to use environment specs.
if args.specs:

View File

@@ -9,9 +9,12 @@
import spack.cmd
import spack.cmd.common.arguments as arguments
import spack.config
import spack.environment as ev
import spack.package_base
import spack.repo
import spack.stage
import spack.traverse
description = "expand downloaded archive in preparation for install"
section = "build"
@@ -27,24 +30,18 @@ def setup_parser(subparser):
def stage(parser, args):
if not args.specs:
env = ev.active_environment()
if env:
tty.msg("Staging specs from environment %s" % env.name)
for spec in env.specs_by_hash.values():
for dep in spec.traverse():
dep.package.do_stage()
tty.msg("Staged {0} in {1}".format(dep.package.name, dep.package.stage.path))
return
else:
tty.die("`spack stage` requires a spec or an active environment")
if args.no_checksum:
spack.config.set("config:checksum", False, scope="command_line")
if args.deprecated:
spack.config.set("config:deprecated", True, scope="command_line")
if not args.specs:
env = ev.active_environment()
if not env:
tty.die("`spack stage` requires a spec or an active environment")
return _stage_env(env)
specs = spack.cmd.parse_specs(args.specs, concretize=False)
# We temporarily modify the working directory when setting up a stage, so we need to
@@ -57,7 +54,24 @@ def stage(parser, args):
for spec in specs:
spec = spack.cmd.matching_spec_from_env(spec)
pkg = spec.package
if custom_path:
spec.package.path = custom_path
spec.package.do_stage()
tty.msg("Staged {0} in {1}".format(spec.package.name, spec.package.stage.path))
pkg.path = custom_path
_stage(pkg)
def _stage_env(env: ev.Environment):
tty.msg(f"Staging specs from environment {env.name}")
for spec in spack.traverse.traverse_nodes(env.concrete_roots()):
_stage(spec.package)
def _stage(pkg: spack.package_base.PackageBase):
# Use context manager to ensure we don't restage while an installation is in progress
# keep = True ensures that the stage is not removed after exiting the context manager
pkg.stage.keep = True
with pkg.stage:
pkg.do_stage()
tty.msg(f"Staged {pkg.name} in {pkg.stage.path}")

View File

@@ -12,7 +12,7 @@
import spack.store
import spack.tag
description = "Show package tags and associated packages"
description = "show package tags and associated packages"
section = "basic"
level = "long"

View File

@@ -35,39 +35,35 @@ def setup_parser(subparser):
"run", description=test_run.__doc__, help=spack.cmd.first_line(test_run.__doc__)
)
alias_help_msg = "Provide an alias for this test-suite"
alias_help_msg += " for subsequent access."
run_parser.add_argument("--alias", help=alias_help_msg)
run_parser.add_argument(
"--alias", help="provide an alias for this test-suite for subsequent access"
)
run_parser.add_argument(
"--fail-fast",
action="store_true",
help="Stop tests for each package after the first failure.",
help="stop tests for each package after the first failure",
)
run_parser.add_argument(
"--fail-first", action="store_true", help="Stop after the first failed package."
"--fail-first", action="store_true", help="stop after the first failed package"
)
run_parser.add_argument(
"--externals", action="store_true", help="Test packages that are externally installed."
"--externals", action="store_true", help="test packages that are externally installed"
)
run_parser.add_argument(
"-x",
"--explicit",
action="store_true",
help="Only test packages that are explicitly installed.",
help="only test packages that are explicitly installed",
)
run_parser.add_argument(
"--keep-stage", action="store_true", help="Keep testing directory for debugging"
"--keep-stage", action="store_true", help="keep testing directory for debugging"
)
arguments.add_common_arguments(run_parser, ["log_format"])
run_parser.add_argument(
"--log-file",
default=None,
help="filename for the log file. if not passed a default will be used",
)
run_parser.add_argument("--log-file", default=None, help="filename for the log file")
arguments.add_cdash_args(run_parser, False)
run_parser.add_argument(
"--help-cdash", action="store_true", help="Show usage instructions for CDash reporting"
"--help-cdash", action="store_true", help="show usage instructions for CDash reporting"
)
cd_group = run_parser.add_mutually_exclusive_group()
@@ -96,7 +92,7 @@ def setup_parser(subparser):
find_parser.add_argument(
"filter",
nargs=argparse.REMAINDER,
help="optional case-insensitive glob patterns to filter results.",
help="optional case-insensitive glob patterns to filter results",
)
# Status
@@ -104,7 +100,7 @@ def setup_parser(subparser):
"status", description=test_status.__doc__, help=spack.cmd.first_line(test_status.__doc__)
)
status_parser.add_argument(
"names", nargs=argparse.REMAINDER, help="Test suites for which to print status"
"names", nargs=argparse.REMAINDER, help="test suites for which to print status"
)
# Results
@@ -142,15 +138,15 @@ def setup_parser(subparser):
)
arguments.add_common_arguments(remove_parser, ["yes_to_all"])
remove_parser.add_argument(
"names", nargs=argparse.REMAINDER, help="Test suites to remove from test stage"
"names", nargs=argparse.REMAINDER, help="test suites to remove from test stage"
)
def test_run(args):
"""Run tests for the specified installed packages.
"""run tests for the specified installed packages
If no specs are listed, run tests for all packages in the current
environment or all installed packages if there is no active environment.
if no specs are listed, run tests for all packages in the current
environment or all installed packages if there is no active environment
"""
if args.alias:
suites = spack.install_test.get_named_test_suites(args.alias)
@@ -178,7 +174,7 @@ def test_run(args):
specs = spack.cmd.parse_specs(args.specs) if args.specs else [None]
specs_to_test = []
for spec in specs:
matching = spack.store.db.query_local(spec, hashes=hashes, explicit=explicit)
matching = spack.store.STORE.db.query_local(spec, hashes=hashes, explicit=explicit)
if spec and not matching:
tty.warn("No {0}installed packages match spec {1}".format(explicit_str, spec))
"""
@@ -231,7 +227,7 @@ def create_reporter(args, specs_to_test, test_suite):
def test_list(args):
"""List installed packages with available tests."""
"""list installed packages with available tests"""
tagged = set(spack.repo.path.packages_with_tags(*args.tag)) if args.tag else set()
def has_test_and_tags(pkg_class):
@@ -256,17 +252,17 @@ def has_test_and_tags(pkg_class):
env = ev.active_environment()
hashes = env.all_hashes() if env else None
specs = spack.store.db.query(hashes=hashes)
specs = spack.store.STORE.db.query(hashes=hashes)
specs = list(filter(lambda s: has_test_and_tags(s.package_class), specs))
spack.cmd.display_specs(specs, long=True)
def test_find(args): # TODO: merge with status (noargs)
"""Find tests that are running or have available results.
"""find tests that are running or have available results
Displays aliases for tests that have them, otherwise test suite content
hashes."""
displays aliases for tests that have them, otherwise test suite content hashes
"""
test_suites = spack.install_test.get_all_test_suites()
# Filter tests by filter argument
@@ -302,7 +298,7 @@ def match(t, f):
def test_status(args):
"""Get the current status for the specified Spack test suite(s)."""
"""get the current status for the specified Spack test suite(s)"""
if args.names:
test_suites = []
for name in args.names:
@@ -333,7 +329,7 @@ def _report_suite_results(test_suite, args, constraints):
qspecs = spack.cmd.parse_specs(constraints)
specs = {}
for spec in qspecs:
for s in spack.store.db.query(spec, installed=True):
for s in spack.store.STORE.db.query(spec, installed=True):
specs[s.dag_hash()] = s
specs = sorted(specs.values())
test_specs = dict((test_suite.test_pkg_id(s), s) for s in test_suite.specs if s in specs)
@@ -387,7 +383,7 @@ def _report_suite_results(test_suite, args, constraints):
def test_results(args):
"""Get the results from Spack test suite(s) (default all)."""
"""get the results from Spack test suite(s) (default all)"""
if args.names:
try:
sep_index = args.names.index("--")
@@ -414,12 +410,13 @@ def test_results(args):
def test_remove(args):
"""Remove results from Spack test suite(s) (default all).
"""remove results from Spack test suite(s) (default all)
If no test suite is listed, remove results for all suites.
if no test suite is listed, remove results for all suites.
Removed tests can no longer be accessed for results or status, and will not
appear in `spack test list` results."""
removed tests can no longer be accessed for results or status, and will not
appear in `spack test list` results
"""
if args.names:
test_suites = []
for name in args.names:

View File

@@ -54,7 +54,7 @@ def setup_parser(subparser):
"--force",
action="store_true",
dest="force",
help="remove regardless of whether other packages or environments " "depend on this one",
help="remove regardless of whether other packages or environments depend on this one",
)
subparser.add_argument(
"--remove",
@@ -103,7 +103,7 @@ def find_matching_specs(
has_errors = False
for spec in specs:
install_query = [InstallStatuses.INSTALLED, InstallStatuses.DEPRECATED]
matching = spack.store.db.query_local(
matching = spack.store.STORE.db.query_local(
spec, hashes=hashes, installed=install_query, origin=origin
)
# For each spec provided, make sure it refers to only one package.
@@ -139,7 +139,7 @@ def installed_dependents(specs: List[spack.spec.Spec]) -> List[spack.spec.Spec]:
# input; in that case we return an empty list.
def is_installed(spec):
record = spack.store.db.query_local_by_spec_hash(spec.dag_hash())
record = spack.store.STORE.db.query_local_by_spec_hash(spec.dag_hash())
return record and record.installed
specs = traverse.traverse_nodes(

View File

@@ -53,15 +53,15 @@ def setup_parser(subparser):
)
subparser.add_argument(
"-a", "--all", action="store_true", help="unload all loaded Spack packages."
"-a", "--all", action="store_true", help="unload all loaded Spack packages"
)
def unload(parser, args):
"""Unload spack packages from the user environment."""
"""unload spack packages from the user environment"""
if args.specs and args.all:
raise spack.error.SpackError(
"Cannot specify specs on command line" " when unloading all specs with '--all'"
"Cannot specify specs on command line when unloading all specs with '--all'"
)
hashes = os.environ.get(uenv.spack_loaded_hashes_var, "").split(":")
@@ -71,7 +71,7 @@ def unload(parser, args):
for spec in spack.cmd.parse_specs(args.specs)
]
else:
specs = spack.store.db.query(hashes=hashes)
specs = spack.store.STORE.db.query(hashes=hashes)
if not args.shell:
specs_str = " ".join(args.specs) or "SPECS"

View File

@@ -10,7 +10,7 @@
import spack.store
import spack.verify
description = "Check that all spack packages are on disk as installed"
description = "check that all spack packages are on disk as installed"
section = "admin"
level = "long"
@@ -19,14 +19,14 @@ def setup_parser(subparser):
setup_parser.parser = subparser
subparser.add_argument(
"-l", "--local", action="store_true", help="Verify only locally installed packages"
"-l", "--local", action="store_true", help="verify only locally installed packages"
)
subparser.add_argument(
"-j", "--json", action="store_true", help="Ouptut json-formatted errors"
"-j", "--json", action="store_true", help="ouptut json-formatted errors"
)
subparser.add_argument("-a", "--all", action="store_true", help="Verify all packages")
subparser.add_argument("-a", "--all", action="store_true", help="verify all packages")
subparser.add_argument(
"specs_or_files", nargs=argparse.REMAINDER, help="Specs or files to verify"
"specs_or_files", nargs=argparse.REMAINDER, help="specs or files to verify"
)
type = subparser.add_mutually_exclusive_group()
@@ -37,7 +37,7 @@ def setup_parser(subparser):
const="specs",
dest="type",
default="specs",
help="Treat entries as specs (default)",
help="treat entries as specs (default)",
)
type.add_argument(
"-f",
@@ -46,7 +46,7 @@ def setup_parser(subparser):
const="files",
dest="type",
default="specs",
help="Treat entries as absolute filenames. Cannot be used with '-a'",
help="treat entries as absolute filenames\n\ncannot be used with '-a'",
)
@@ -71,7 +71,7 @@ def verify(parser, args):
spec_args = spack.cmd.parse_specs(args.specs_or_files)
if args.all:
query = spack.store.db.query_local if local else spack.store.db.query
query = spack.store.STORE.db.query_local if local else spack.store.STORE.db.query
# construct spec list
if spec_args:

View File

@@ -26,7 +26,7 @@ def setup_parser(subparser):
output.add_argument(
"--safe-only",
action="store_true",
help="[deprecated] only list safe versions " "of the package",
help="[deprecated] only list safe versions of the package",
)
output.add_argument(
"-r", "--remote", action="store_true", help="only list remote versions of the package"
@@ -35,7 +35,7 @@ def setup_parser(subparser):
"-n",
"--new",
action="store_true",
help="only list remote versions newer than " "the latest checksummed version",
help="only list remote versions newer than the latest checksummed version",
)
subparser.add_argument(
"-c", "--concurrency", default=32, type=int, help="number of concurrent requests"

View File

@@ -44,7 +44,7 @@
from spack.filesystem_view import YamlFilesystemView, view_func_parser
from spack.util import spack_yaml as s_yaml
description = "project packages to a compact naming scheme on the filesystem."
description = "project packages to a compact naming scheme on the filesystem"
section = "environments"
level = "short"
@@ -70,7 +70,7 @@ def squash(matching_specs):
return matching_in_view[0] if matching_in_view else matching_specs[0]
# make function always return a list to keep consistency between py2/3
return list(map(squash, map(spack.store.db.query, specs)))
return list(map(squash, map(spack.store.STORE.db.query, specs)))
def setup_parser(sp):
@@ -81,7 +81,7 @@ def setup_parser(sp):
"--verbose",
action="store_true",
default=False,
help="If not verbose only warnings/errors will be printed.",
help="if not verbose only warnings/errors will be printed",
)
sp.add_argument(
"-e",
@@ -95,7 +95,7 @@ def setup_parser(sp):
"--dependencies",
choices=["true", "false", "yes", "no"],
default="true",
help="Link/remove/list dependencies.",
help="link/remove/list dependencies",
)
ssp = sp.add_subparsers(metavar="ACTION", dest="action")
@@ -137,12 +137,11 @@ def setup_parser(sp):
if cmd in ("symlink", "hardlink", "copy"):
# invalid for remove/statlink, for those commands the view needs to
# already know its own projections.
help_msg = "Initialize view using projections from file."
act.add_argument(
"--projection-file",
dest="projection_file",
type=spack.cmd.extant_file,
help=help_msg,
help="initialize view using projections from file",
)
if cmd == "remove":
@@ -150,7 +149,7 @@ def setup_parser(sp):
act.add_argument(
"--no-remove-dependents",
action="store_true",
help="Do not remove dependents of specified specs.",
help="do not remove dependents of specified specs",
)
# with all option, spec is an optional argument
@@ -201,7 +200,7 @@ def view(parser, args):
view = YamlFilesystemView(
path,
spack.store.layout,
spack.store.STORE.layout,
projections=ordered_projections,
ignore_conflicts=getattr(args, "ignore_conflicts", False),
link=link_fn,

View File

@@ -515,7 +515,7 @@ def compiler_for_spec(compiler_spec, arch_spec):
if len(compilers) < 1:
raise NoCompilerForSpecError(compiler_spec, arch_spec.os)
if len(compilers) > 1:
msg = "Multiple definitions of compiler %s" % compiler_spec
msg = "Multiple definitions of compiler %s " % compiler_spec
msg += "for architecture %s:\n %s" % (arch_spec, compilers)
tty.debug(msg)
return compilers[0]

View File

@@ -4,8 +4,11 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import re
from typing import List
import llnl.util.lang
import spack.compiler
@@ -32,7 +35,13 @@ class Nag(spack.compiler.Compiler):
}
version_argument = "-V"
version_regex = r"NAG Fortran Compiler Release ([0-9.]+)"
@classmethod
@llnl.util.lang.memoized
def extract_version_from_output(cls, output):
match = re.search(r"NAG Fortran Compiler Release (\d+).(\d+)\(.*\) Build (\d+)", output)
if match:
return ".".join(match.groups())
@property
def verbose_flag(self):

View File
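
The memoized extract_version_from_output above joins the three captured groups into a dotted version string. Checking the regex against a made-up NAG banner (the sample line is illustrative, not verified compiler output):

    import re

    output = "NAG Fortran Compiler Release 7.1(Hanzomon) Build 7125"
    match = re.search(r"NAG Fortran Compiler Release (\d+).(\d+)\(.*\) Build (\d+)", output)
    print(".".join(match.groups()))  # -> 7.1.7125
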

@@ -767,7 +767,7 @@ def _add_command_line_scopes(cfg, command_line_scopes):
_add_platform_scope(cfg, ImmutableConfigScope, name, path)
def _config():
def create():
"""Singleton Configuration instance.
This constructs one instance associated with this module and returns
@@ -825,7 +825,7 @@ def _config():
#: This is the singleton configuration instance for Spack.
config: Union[Configuration, llnl.util.lang.Singleton] = llnl.util.lang.Singleton(_config)
config: Union[Configuration, llnl.util.lang.Singleton] = llnl.util.lang.Singleton(create)
def add_from_file(filename, scope=None):

View File

@@ -5,6 +5,7 @@
"""Manages the details on the images used in the various stages."""
import json
import os.path
import shlex
import sys
import llnl.util.filesystem as fs
@@ -130,8 +131,11 @@ def checkout_command(url, ref, enforce_sha, verify):
if enforce_sha or verify:
ref = _verify_ref(url, ref, enforce_sha)
command = (
"git clone {0} . && git fetch origin {1}:container_branch &&"
" git checkout container_branch "
).format(url, ref)
return command
return " && ".join(
[
"git init --quiet",
f"git remote add origin {shlex.quote(url)}",
f"git fetch --depth=1 origin {shlex.quote(ref)}",
"git checkout --detach FETCH_HEAD",
]
)
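
The rewritten checkout_command swaps a full clone for a shallow fetch of a single ref and shell-quotes the interpolated values. A runnable sketch that reproduces the generated string (ref verification from the code above is omitted; the URL and ref are examples):

    import shlex

    def checkout_command(url: str, ref: str) -> str:
        return " && ".join(
            [
                "git init --quiet",
                f"git remote add origin {shlex.quote(url)}",
                f"git fetch --depth=1 origin {shlex.quote(ref)}",
                "git checkout --detach FETCH_HEAD",
            ]
        )

    print(checkout_command("https://github.com/spack/spack.git", "releases/v0.20"))
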

View File

@@ -194,7 +194,7 @@ def read(path, apply_updates):
spack.compilers.add_compilers_to_config(compilers, init_config=False)
if apply_updates:
for spec in specs.values():
spack.store.db.add(spec, directory_layout=None)
spack.store.STORE.db.add(spec, directory_layout=None)
class ManifestValidationError(spack.error.SpackError):

View File

@@ -2,7 +2,6 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Spack's installation tracking database.
The database serves two purposes:
@@ -19,14 +18,13 @@
provides a cache and a sanity checking mechanism for what is in the
filesystem.
"""
import contextlib
import datetime
import os
import socket
import sys
import time
from typing import Dict
from typing import Dict, List, NamedTuple, Set, Type, Union
try:
import uuid
@@ -36,14 +34,13 @@
_use_uuid = False
pass
from typing import Optional, Tuple
import llnl.util.filesystem as fs
import llnl.util.lang as lang
import llnl.util.tty as tty
import spack.hash_types as ht
import spack.repo
import spack.spec
import spack.store
import spack.util.lock as lk
import spack.util.spack_json as sjson
import spack.version as vn
@@ -54,17 +51,17 @@
# TODO: Provide an API automatically retyring a build after detecting and
# TODO: clearing a failure.
# DB goes in this directory underneath the root
_db_dirname = ".spack-db"
#: DB goes in this directory underneath the root
_DB_DIRNAME = ".spack-db"
# DB version. This is stuck in the DB file to track changes in format.
# Increment by one when the database format changes.
# Versions before 5 were not integers.
_db_version = vn.Version("7")
#: DB version. This is stuck in the DB file to track changes in format.
#: Increment by one when the database format changes.
#: Versions before 5 were not integers.
_DB_VERSION = vn.Version("7")
# For any version combinations here, skip reindex when upgrading.
# Reindexing can take considerable time and is not always necessary.
_skip_reindex = [
#: For any version combinations here, skip reindex when upgrading.
#: Reindexing can take considerable time and is not always necessary.
_SKIP_REINDEX = [
# reindexing takes a significant amount of time, and there's
# no reason to do it from DB version 0.9.3 to version 5. The
# only difference is that v5 can contain "deprecated_for"
@@ -75,26 +72,26 @@
(vn.Version("6"), vn.Version("7")),
]
# Default timeout for spack database locks in seconds or None (no timeout).
# A balance needs to be struck between quick turnaround for parallel installs
# (to avoid excess delays) and waiting long enough when the system is busy
# (to ensure the database is updated).
_db_lock_timeout = 120
#: Default timeout for spack database locks in seconds or None (no timeout).
#: A balance needs to be struck between quick turnaround for parallel installs
#: (to avoid excess delays) and waiting long enough when the system is busy
#: (to ensure the database is updated).
_DEFAULT_DB_LOCK_TIMEOUT = 120
# Default timeout for spack package locks in seconds or None (no timeout).
# A balance needs to be struck between quick turnaround for parallel installs
# (to avoid excess delays when performing a parallel installation) and waiting
# long enough for the next possible spec to install (to avoid excessive
# checking of the last high priority package) or holding on to a lock (to
# ensure a failed install is properly tracked).
_pkg_lock_timeout = None
#: Default timeout for spack package locks in seconds or None (no timeout).
#: A balance needs to be struck between quick turnaround for parallel installs
#: (to avoid excess delays when performing a parallel installation) and waiting
#: long enough for the next possible spec to install (to avoid excessive
#: checking of the last high priority package) or holding on to a lock (to
#: ensure a failed install is properly tracked).
_DEFAULT_PKG_LOCK_TIMEOUT = None
# Types of dependencies tracked by the database
# We store by DAG hash, so we track the dependencies that the DAG hash includes.
_tracked_deps = ht.dag_hash.deptype
#: Types of dependencies tracked by the database
#: We store by DAG hash, so we track the dependencies that the DAG hash includes.
_TRACKED_DEPENDENCIES = ht.dag_hash.deptype
# Default list of fields written for each install record
default_install_record_fields = [
#: Default list of fields written for each install record
DEFAULT_INSTALL_RECORD_FIELDS = (
"spec",
"ref_count",
"path",
@@ -102,10 +99,10 @@
"explicit",
"installation_time",
"deprecated_for",
]
)
def reader(version):
def reader(version: vn.StandardVersion) -> Type["spack.spec.SpecfileReaderBase"]:
reader_cls = {
vn.Version("5"): spack.spec.SpecfileV1,
vn.Version("6"): spack.spec.SpecfileV3,
@@ -114,7 +111,7 @@ def reader(version):
return reader_cls[version]
def _now():
def _now() -> float:
"""Returns the time since the epoch"""
return time.time()
@@ -178,9 +175,9 @@ class InstallRecord:
dependents left.
Args:
spec (spack.spec.Spec): spec tracked by the install record
path (str): path where the spec has been installed
installed (bool): whether or not the spec is currently installed
spec: spec tracked by the install record
path: path where the spec has been installed
installed: whether or not the spec is currently installed
ref_count (int): number of specs that depend on this one
explicit (bool or None): whether or not this spec was explicitly
installed, or pulled-in as a dependency of something else
@@ -189,14 +186,14 @@ class InstallRecord:
def __init__(
self,
spec,
path,
installed,
ref_count=0,
explicit=False,
installation_time=None,
deprecated_for=None,
in_buildcache=False,
spec: "spack.spec.Spec",
path: str,
installed: bool,
ref_count: int = 0,
explicit: bool = False,
installation_time: Optional[float] = None,
deprecated_for: Optional["spack.spec.Spec"] = None,
in_buildcache: bool = False,
origin=None,
):
self.spec = spec
@@ -218,7 +215,7 @@ def install_type_matches(self, installed):
else:
return InstallStatuses.MISSING in installed
def to_dict(self, include_fields=default_install_record_fields):
def to_dict(self, include_fields=DEFAULT_INSTALL_RECORD_FIELDS):
rec_dict = {}
for field_name in include_fields:
@@ -254,11 +251,14 @@ class ForbiddenLockError(SpackError):
class ForbiddenLock:
def __getattribute__(self, name):
def __getattr__(self, name):
raise ForbiddenLockError("Cannot access attribute '{0}' of lock".format(name))
def __reduce__(self):
return ForbiddenLock, tuple()
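
Switching from __getattribute__ to __getattr__ matters here: __getattr__ only fires when normal lookup fails, so the explicitly defined __reduce__ stays reachable and the lock object remains picklable. A self-contained check (toy error type standing in for ForbiddenLockError):

    import pickle

    class ForbiddenLock:
        def __getattr__(self, name):
            raise RuntimeError(f"Cannot access attribute '{name}' of lock")

        def __reduce__(self):
            return ForbiddenLock, tuple()

    # round-trips because __reduce__ resolves through normal class lookup
    lock = pickle.loads(pickle.dumps(ForbiddenLock()))
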
_query_docstring = """
_QUERY_DOCSTRING = """
Args:
query_spec: queries iterate through specs in the database and
@@ -306,73 +306,106 @@ def __getattribute__(self, name):
"""
#: Data class to configure locks in Database objects
#:
#: Args:
#: enable (bool): whether to enable locks or not.
#: database_timeout (int or None): timeout for the database lock
#: package_timeout (int or None): timeout for the package lock
class LockConfiguration(NamedTuple):
enable: bool
database_timeout: Optional[int]
package_timeout: Optional[int]
#: Configure a database to avoid using locks
NO_LOCK: LockConfiguration = LockConfiguration(
enable=False, database_timeout=None, package_timeout=None
)
#: Configure the database to use locks without a timeout
NO_TIMEOUT: LockConfiguration = LockConfiguration(
enable=True, database_timeout=None, package_timeout=None
)
#: Default configuration for database locks
DEFAULT_LOCK_CFG: LockConfiguration = LockConfiguration(
enable=True,
database_timeout=_DEFAULT_DB_LOCK_TIMEOUT,
package_timeout=_DEFAULT_PKG_LOCK_TIMEOUT,
)
def lock_configuration(configuration):
"""Return a LockConfiguration from a spack.config.Configuration object."""
return LockConfiguration(
enable=configuration.get("config:locks", True),
database_timeout=configuration.get("config:db_lock_timeout"),
package_timeout=configuration.get("config:package_lock_timeout"),
)
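
The lock settings now travel as a single value object instead of ad hoc config lookups in Database.__init__. A standalone restatement of the NamedTuple with the defaults from above (120 s mirrors _DEFAULT_DB_LOCK_TIMEOUT), plus a variant derived via _replace:

    from typing import NamedTuple, Optional

    class LockConfiguration(NamedTuple):
        enable: bool
        database_timeout: Optional[int]
        package_timeout: Optional[int]

    DEFAULT_LOCK_CFG = LockConfiguration(enable=True, database_timeout=120, package_timeout=None)
    NO_LOCK = LockConfiguration(enable=False, database_timeout=None, package_timeout=None)

    slow_cfg = DEFAULT_LOCK_CFG._replace(database_timeout=300)
    print(slow_cfg.database_timeout)  # -> 300
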
class Database:
"""Per-process lock objects for each install prefix."""
#: Per-process lock objects for each install prefix
_prefix_locks: Dict[str, lk.Lock] = {}
"""Per-process failure (lock) objects for each install prefix."""
#: Per-process failure (lock) objects for each install prefix
_prefix_failures: Dict[str, lk.Lock] = {}
#: Fields written for each install record
record_fields: Tuple[str, ...] = DEFAULT_INSTALL_RECORD_FIELDS
def __init__(
self,
root,
db_dir=None,
upstream_dbs=None,
is_upstream=False,
enable_transaction_locking=True,
record_fields=default_install_record_fields,
):
"""Create a Database for Spack installations under ``root``.
root: str,
upstream_dbs: Optional[List["Database"]] = None,
is_upstream: bool = False,
lock_cfg: LockConfiguration = DEFAULT_LOCK_CFG,
) -> None:
"""Database for Spack installations.
A Database is a cache of Specs data from ``$prefix/spec.yaml``
files in Spack installation directories.
A Database is a cache of Specs data from ``$prefix/spec.yaml`` files
in Spack installation directories.
By default, Database files (data and lock files) are stored
under ``root/.spack-db``, which is created if it does not
exist. This is the ``db_dir``.
Database files (data and lock files) are stored under ``root/.spack-db``, which is
created if it does not exist. This is the "database directory".
The Database will attempt to read an ``index.json`` file in
``db_dir``. If that does not exist, it will create a database
when needed by scanning the entire Database root for ``spec.yaml``
files according to Spack's ``DirectoryLayout``.
The database will attempt to read an ``index.json`` file in the database directory.
If that does not exist, it will create a database when needed by scanning the entire
store root for ``spec.json`` files according to Spack's directory layout.
Caller may optionally provide a custom ``db_dir`` parameter
where data will be stored. This is intended to be used for
testing the Database class.
This class supports writing buildcache index files, in which case
certain fields are not needed in each install record, and no
transaction locking is required. To use this feature, provide
``enable_transaction_locking=False``, and specify a list of needed
fields in ``record_fields``.
Args:
root: root directory where to create the database directory.
upstream_dbs: upstream databases for this repository.
is_upstream: whether this repository is an upstream.
lock_cfg: configuration for the locks to be used by this repository.
Relevant only if the repository is not an upstream.
"""
self.root = root
self.database_directory = os.path.join(self.root, _DB_DIRNAME)
# Set up layout of database files within the db dir
self._index_path = os.path.join(self.database_directory, "index.json")
self._verifier_path = os.path.join(self.database_directory, "index_verifier")
self._lock_path = os.path.join(self.database_directory, "lock")
# This is for other classes to use to lock prefix directories.
self.prefix_lock_path = os.path.join(self.database_directory, "prefix_lock")
# Ensure a persistent location for dealing with parallel installation
# failures (e.g., across near-concurrent processes).
self._failure_dir = os.path.join(self.database_directory, "failures")
# Support special locks for handling parallel installation failures
# of a spec.
self.prefix_fail_path = os.path.join(self.database_directory, "prefix_failures")
# Create needed directories and files
if not is_upstream and not os.path.exists(self.database_directory):
fs.mkdirp(self.database_directory)
if not is_upstream and not os.path.exists(self._failure_dir):
fs.mkdirp(self._failure_dir)
@@ -389,10 +422,9 @@ def __init__(
self._state_is_inconsistent = False
# initialize rest of state.
self.db_lock_timeout = lock_cfg.database_timeout
self.package_lock_timeout = lock_cfg.package_timeout
tty.debug("DATABASE LOCK TIMEOUT: {0}s".format(str(self.db_lock_timeout)))
timeout_format_str = (
"{0}s".format(str(self.package_lock_timeout))
@@ -401,18 +433,22 @@ def __init__(
)
tty.debug("PACKAGE LOCK TIMEOUT: {0}".format(str(timeout_format_str)))
self.lock: Union[ForbiddenLock, lk.Lock]
if self.is_upstream:
self.lock = ForbiddenLock()
else:
self.lock = lk.Lock(
self._lock_path,
default_timeout=self.db_lock_timeout,
desc="database",
enable=lock_cfg.enable,
)
self._data: Dict[str, InstallRecord] = {}
# For every installed spec we keep track of its install prefix, so that
# we can answer the simple query whether a given path is already taken
# before installing a different spec.
self._installed_prefixes: Set[str] = set()
self.upstream_dbs = list(upstream_dbs) if upstream_dbs else []
@@ -424,14 +460,8 @@ def __init__(
# message)
self._fail_when_missing_deps = False
self._write_transaction_impl = lk.WriteTransaction
self._read_transaction_impl = lk.ReadTransaction
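The upstream/local branch in the constructor above follows a null-object pattern: read-only upstream databases get a lock whose acquire methods always refuse, so the rest of the code can call lock methods unconditionally. A minimal sketch of that idea (class names here are hypothetical stand-ins, not Spack's):

class RefusingLock:
    """Stand-in for ForbiddenLock: any attempt to lock an upstream DB fails."""
    def acquire_read(self, timeout=None):
        raise RuntimeError("cannot lock an upstream (read-only) database")
    acquire_write = acquire_read

class PrintingLock:
    """Stand-in for a real file lock; just records what would happen."""
    def __init__(self, path, default_timeout=None):
        self.path, self.default_timeout = path, default_timeout
    def acquire_read(self, timeout=None):
        print(f"read-locking {self.path}")
    def acquire_write(self, timeout=None):
        print(f"write-locking {self.path}")

def make_lock(lock_path, is_upstream):
    return RefusingLock() if is_upstream else PrintingLock(lock_path)

make_lock("/tmp/spack-db/lock", is_upstream=False).acquire_write()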
def write_transaction(self):
"""Get a write lock context manager for use in a `with` block."""
@@ -448,7 +478,7 @@ def _failed_spec_path(self, spec):
return os.path.join(self._failure_dir, "{0}-{1}".format(spec.name, spec.dag_hash()))
def clear_all_failures(self) -> None:
"""Force remove install failure tracking files."""
tty.debug("Releasing prefix failure locks")
for pkg_id in list(self._prefix_failures.keys()):
@@ -466,19 +496,17 @@ def clear_all_failures(self):
"Unable to remove failure marking file {0}: {1}".format(fail_mark, str(exc))
)
def clear_failure(self, spec: "spack.spec.Spec", force: bool = False) -> None:
"""
Remove any persistent and cached failure tracking for the spec.
see `mark_failed()`.
Args:
spec: the spec whose failure indicators are being removed
force: True if the failure information should be cleared when a prefix failure
lock exists for the file, or False if the failure should not be cleared (e.g.,
it may be associated with a concurrent build)
"""
failure_locked = self.prefix_failure_locked(spec)
if failure_locked and not force:
@@ -504,7 +532,7 @@ def clear_failure(self, spec, force=False):
)
)
def mark_failed(self, spec: "spack.spec.Spec") -> lk.Lock:
"""
Mark a spec as failing to install.
@@ -514,7 +542,7 @@ def mark_failed(self, spec):
containing the spec, in a subdirectory of the database to enable
persistence across overlapping but separate related build processes.
The failure lock file, ``spack.store.STORE.db.prefix_failures``, lives
alongside the install DB. ``n`` is the sys.maxsize-bit prefix of the
associated DAG hash to make the likelihood of collision very low with
no cleanup required.
@@ -554,7 +582,7 @@ def mark_failed(self, spec):
return self._prefix_failures[prefix]
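The docstring's "``n`` is the sys.maxsize-bit prefix of the associated DAG hash" can be made concrete: take as many leading hex digits of the hash as fit in a native integer and use the result as the byte offset for a range lock. A standalone sketch of the offset computation (illustrative, not Spack's exact code):

import sys

def failure_lock_offset(dag_hash: str) -> int:
    # sys.maxsize has 63 significant bits on 64-bit platforms,
    # i.e. room for 15 full hex digits of the hash.
    hex_digits = sys.maxsize.bit_length() // 4
    return int(dag_hash[:hex_digits], 16)

print(failure_lock_offset("abcdef0123456789deadbeef"))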
def prefix_failed(self, spec: "spack.spec.Spec") -> bool:
"""Return True if the prefix (installation) is marked as failed."""
# The failure was detected in this process.
if spec.prefix in self._prefix_failures:
@@ -569,7 +597,7 @@ def prefix_failed(self, spec):
# spack build process running concurrently.
return self.prefix_failure_marked(spec)
def prefix_failure_locked(self, spec: "spack.spec.Spec") -> bool:
"""Return True if a process has a failure lock on the spec."""
check = lk.Lock(
self.prefix_fail_path,
@@ -581,18 +609,18 @@ def prefix_failure_locked(self, spec):
return check.is_write_locked()
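Both the failure locks and the prefix locks described here are byte-range locks: many specs share one lock file, and each spec locks a single byte at its own offset. On POSIX systems the underlying primitive looks roughly like this (a sketch; Spack's own Lock class layers timeouts and read/write upgrades on top):

import fcntl
import os

def lock_nth_byte(path: str, n: int) -> int:
    """Take an exclusive lock on exactly one byte at offset n."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    fcntl.lockf(fd, fcntl.LOCK_EX, 1, n)
    return fd  # caller releases with LOCK_UN and closes the fd

fd = lock_nth_byte("/tmp/prefix_lock_demo", 42)
fcntl.lockf(fd, fcntl.LOCK_UN, 1, 42)
os.close(fd)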
def prefix_failure_marked(self, spec: "spack.spec.Spec") -> bool:
"""Determine if the spec has a persistent failure marking."""
return os.path.exists(self._failed_spec_path(spec))
def prefix_lock(self, spec: "spack.spec.Spec", timeout: Optional[float] = None) -> lk.Lock:
"""Get a lock on a particular spec's installation directory.
NOTE: The installation directory **does not** need to exist.
Prefix lock is a byte range lock on the nth byte of a file.
The lock file is ``spack.store.STORE.db.prefix_lock`` -- the DB
tells us what to call it and it lives alongside the install DB.
n is the sys.maxsize-bit prefix of the DAG hash. This makes
@@ -657,7 +685,7 @@ def _write_to_file(self, stream):
"""
# map from per-spec hash code to installation record.
installs = dict(
(k, v.to_dict(include_fields=self.record_fields)) for k, v in self._data.items()
)
# database includes installation list and version.
@@ -670,7 +698,7 @@ def _write_to_file(self, stream):
"database": {
# TODO: move this to a top-level _meta section if we ever
# TODO: bump the DB version to 7
"version": str(_db_version),
"version": str(_DB_VERSION),
# dictionary of installation records, keyed by DAG hash
"installs": installs,
}
@@ -710,7 +738,9 @@ def db_for_spec_hash(self, hash_key):
if hash_key in db._data:
return db
def query_by_spec_hash(
self, hash_key: str, data: Optional[Dict[str, InstallRecord]] = None
) -> Tuple[bool, Optional[InstallRecord]]:
"""Get a spec for hash, and whether it's installed upstream.
Return:
@@ -805,16 +835,16 @@ def check(cond, msg):
# TODO: better version checking semantics.
version = vn.Version(db["version"])
if version > _DB_VERSION:
raise InvalidDatabaseVersionError(self, _DB_VERSION, version)
elif version < _DB_VERSION:
if not any(old == version and new == _DB_VERSION for old, new in _SKIP_REINDEX):
tty.warn(
"Spack database version changed from %s to %s. Upgrading."
% (version, _DB_VERSION)
)
self.reindex(spack.store.STORE.layout)
installs = dict(
(k, v.to_dict(include_fields=self.record_fields))
for k, v in self._data.items()
@@ -976,7 +1006,7 @@ def _construct_from_directory_layout(self, directory_layout, old_data):
# applications.
tty.debug("RECONSTRUCTING FROM OLD DB: {0}".format(entry.spec))
try:
layout = None if entry.spec.external else directory_layout
kwargs = {
"spec": entry.spec,
"directory_layout": layout,
@@ -1002,7 +1032,7 @@ def _check_ref_counts(self):
counts = {}
for key, rec in self._data.items():
counts.setdefault(key, 0)
for dep in rec.spec.dependencies(deptype=_TRACKED_DEPENDENCIES):
dep_key = dep.dag_hash()
counts.setdefault(dep_key, 0)
counts[dep_key] += 1
@@ -1091,13 +1121,13 @@ def _add(
):
"""Add an install record for this spec to the database.
Assumes spec is installed in ``directory_layout.path_for_spec(spec)``.
Also ensures dependencies are present and updated in the DB as
either installed or missing.
Args:
spec (spack.spec.Spec): spec to be added
directory_layout: layout of the spec installation
explicit:
Possible values: True, False, any
@@ -1124,7 +1154,7 @@ def _add(
# Retrieve optional arguments
installation_time = installation_time or _now()
for edge in spec.edges_to_dependencies(deptype=_TRACKED_DEPENDENCIES):
if edge.spec.dag_hash() in self._data:
continue
# allow missing build-only deps. This prevents excessive
@@ -1176,7 +1206,7 @@ def _add(
self._data[key] = InstallRecord(new_spec, path, installed, ref_count=0, **extra_args)
# Connect dependencies from the DB to the new copy.
for dep in spec.edges_to_dependencies(deptype=_TRACKED_DEPENDENCIES):
dkey = dep.spec.dag_hash()
upstream, record = self.query_by_spec_hash(dkey)
new_spec._add_dependency(record.spec, deptypes=dep.deptypes, virtuals=dep.virtuals)
@@ -1216,7 +1246,7 @@ def _get_matching_spec_key(self, spec, **kwargs):
match = self.query_one(spec, **kwargs)
if match:
return match.dag_hash()
raise NoSuchSpecError(spec)
return key
@_autospec
@@ -1239,7 +1269,7 @@ def _decrement_ref_count(self, spec):
if rec.ref_count == 0 and not rec.installed:
del self._data[key]
for dep in spec.dependencies(deptype=_TRACKED_DEPENDENCIES):
self._decrement_ref_count(dep)
def _increment_ref_count(self, spec):
@@ -1269,8 +1299,8 @@ def _remove(self, spec):
# Remove any reference to this node from dependencies and
# decrement the reference count
rec.spec.detach(deptype=_TRACKED_DEPENDENCIES)
for dep in rec.spec.dependencies(deptype=_TRACKED_DEPENDENCIES):
self._decrement_ref_count(dep)
if rec.deprecated_for:
@@ -1386,10 +1416,7 @@ def installed_relatives(self, spec, direction="children", transitive=True, depty
@_autospec
def installed_extensions_for(self, extendee_spec):
"""
Return the specs of all packages that extend
the given spec
"""
"""Returns the specs of all packages that extend the given spec"""
for spec in self.query():
if spec.package.extends(extendee_spec):
yield spec.package
@@ -1416,7 +1443,7 @@ def _get_by_hash_local(self, dag_hash, default=None, installed=any):
# nothing found
return default
def get_by_hash_local(self, dag_hash, default=None, installed=any):
"""Look up a spec in *this DB* by DAG hash, or by a DAG hash prefix.
Arguments:
@@ -1440,7 +1467,7 @@ def get_by_hash_local(self, *args, **kwargs):
"""
with self.read_transaction():
return self._get_by_hash_local(dag_hash, default=default, installed=installed)
def get_by_hash(self, dag_hash, default=None, installed=any):
"""Look up a spec by DAG hash, or by a DAG hash prefix.
@@ -1526,7 +1553,7 @@ def _query(
if explicit is not any and rec.explicit != explicit:
continue
if known is not any and known(rec.spec.name):
continue
if start_date or end_date:
@@ -1541,7 +1568,7 @@ def _query(
if _query.__doc__ is None:
_query.__doc__ = ""
_query.__doc__ += _QUERY_DOCSTRING
def query_local(self, *args, **kwargs):
"""Query only the local Spack database.
@@ -1555,7 +1582,7 @@ def query_local(self, *args, **kwargs):
if query_local.__doc__ is None:
query_local.__doc__ = ""
query_local.__doc__ += _QUERY_DOCSTRING
def query(self, *args, **kwargs):
"""Query the Spack database including all upstream databases."""
@@ -1574,7 +1601,7 @@ def query(self, *args, **kwargs):
if query.__doc__ is None:
query.__doc__ = ""
query.__doc__ += _QUERY_DOCSTRING
def query_one(self, query_spec, known=any, installed=True):
"""Query for exactly one spec that matches the query spec.
@@ -1672,3 +1699,17 @@ def __init__(self, database, expected, found):
@property
def database_version_message(self):
return f"The expected DB version is '{self.expected}', but '{self.found}' was found."
class NoSuchSpecError(KeyError):
"""Raised when a spec is not found in the database."""
def __init__(self, spec):
self.spec = spec
super().__init__(spec)
def __str__(self):
# This exception is raised frequently, and almost always
# caught, so ensure we don't pay the cost of Spec.__str__
# unless the exception is actually printed.
return f"No such spec in database: {self.spec}"

View File

@@ -325,7 +325,7 @@ def path_for_spec(self, spec):
if spec.external:
return spec.external_path
if self.check_upstream:
upstream, record = spack.store.STORE.db.query_by_spec_hash(spec.dag_hash())
if upstream:
raise SpackError(
"Internal error: attempted to call path_for_spec on"

View File

@@ -8,6 +8,7 @@
"""
import os
import re
from enum import Enum
from typing import List, Optional
@@ -45,8 +46,8 @@ class DepfileNode:
def __init__(
self, target: spack.spec.Spec, prereqs: List[spack.spec.Spec], buildcache: UseBuildCache
):
self.target = MakefileSpec(target)
self.prereqs = list(MakefileSpec(x) for x in prereqs)
if buildcache == UseBuildCache.ONLY:
self.buildcache_flag = "--use-buildcache=only"
elif buildcache == UseBuildCache.NEVER:
@@ -89,6 +90,32 @@ def accept(self, node):
return True
class MakefileSpec(object):
"""Limited interface to spec to help generate targets etc. without
introducing unwanted special characters.
"""
_pattern = None
def __init__(self, spec):
self.spec = spec
def safe_name(self):
return self.safe_format("{name}-{version}-{hash}")
def spec_hash(self):
return self.spec.dag_hash()
def safe_format(self, format_str):
unsafe_result = self.spec.format(format_str)
if not MakefileSpec._pattern:
MakefileSpec._pattern = re.compile(r"[^A-Za-z0-9_.-]")
return MakefileSpec._pattern.sub("_", unsafe_result)
def unsafe_format(self, format_str):
return self.spec.format(format_str)
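The point of safe_format vs. unsafe_format: make target names must avoid spaces and shell metacharacters, while human-readable output may keep them. The sanitizing step in isolation:

import re

_SAFE = re.compile(r"[^A-Za-z0-9_.-]")

def make_safe(name: str) -> str:
    # Squash every character make could misinterpret to an underscore.
    return _SAFE.sub("_", name)

print(make_safe("zlib-1.2.13 %gcc@12 +shared"))  # zlib-1.2.13__gcc_12__shared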
class MakefileModel:
"""This class produces all data to render a makefile for specs of an environment."""
@@ -114,7 +141,7 @@ def __init__(
self.env_path = env.path
# These specs are built in the default target.
self.roots = list(MakefileSpec(x) for x in roots)
# The SPACK_PACKAGE_IDS variable is "exported", which can be used when including
# generated makefiles to add post-install hooks, like pushing to a buildcache,
@@ -131,17 +158,19 @@ def __init__(
# And here we collect a tuple of (target, prereqs, dag_hash, nice_name, buildcache_flag)
self.make_adjacency_list = [
(
item.target.safe_name(),
" ".join(self._install_target(s.safe_name()) for s in item.prereqs),
item.target.spec_hash(),
item.target.unsafe_format(
"{name}{@version}{%compiler}{variants}{arch=architecture}"
),
item.buildcache_flag,
)
for item in adjacency_list
]
# Root specs without deps are the prereqs for the environment target
self.root_install_targets = [self._install_target(s.safe_name()) for s in self.roots]
self.jobserver_support = "+" if jobserver else ""
@@ -157,7 +186,7 @@ def __init__(
self.phony_convenience_targets: List[str] = []
for node in adjacency_list:
tgt = node.target.safe_name()
self.all_pkg_identifiers.append(tgt)
self.all_install_related_targets.append(self._install_target(tgt))
self.all_install_related_targets.append(self._install_deps_target(tgt))
@@ -165,9 +194,6 @@ def __init__(
self.phony_convenience_targets.append(os.path.join("install", tgt))
self.phony_convenience_targets.append(os.path.join("install-deps", tgt))
def _target(self, name: str) -> str:
# The `all` and `clean` targets are phony. It doesn't make sense to
# have /abs/path/to/env/metadir/{all,clean} targets. But it *does* make

View File

@@ -29,6 +29,7 @@
import spack.concretize
import spack.config
import spack.error
import spack.fetch_strategy
import spack.hash_types as ht
import spack.hooks
import spack.main
@@ -153,7 +154,7 @@ def installed_specs():
"""
env = spack.environment.active_environment()
hashes = env.all_hashes() if env else None
return spack.store.STORE.db.query(hashes=hashes)
def valid_env_name(name):
@@ -421,7 +422,7 @@ def _is_dev_spec_and_has_changed(spec):
# Not installed -> nothing to compare against
return False
_, record = spack.store.STORE.db.query_by_spec_hash(spec.dag_hash())
mtime = fs.last_modification_time_recursive(dev_path_var.value)
return mtime > record.installation_time
@@ -582,7 +583,7 @@ def view(self, new=None):
raise SpackEnvironmentViewError(msg)
return SimpleFilesystemView(
root,
spack.store.STORE.layout,
ignore_conflicts=True,
projections=self.projections,
link=self.link_type,
@@ -622,7 +623,7 @@ def specs_for_view(self, concretized_root_specs):
specs = list(dedupe(concretized_root_specs, key=traverse.by_dag_hash))
# Filter selected, installed specs
with spack.store.STORE.db.read_transaction():
specs = [s for s in specs if s in self and s.installed]
return specs
@@ -731,7 +732,7 @@ def __init__(self, manifest_dir: Union[str, pathlib.Path]) -> None:
self.txlock = lk.Lock(self._transaction_lock_path)
self._unify = None
self.new_specs: List[Spec] = []
self.new_installs: List[Spec] = []
self.views: Dict[str, ViewDescriptor] = {}
@@ -755,6 +756,16 @@ def __init__(self, manifest_dir: Union[str, pathlib.Path]) -> None:
self.manifest = EnvironmentManifestFile(manifest_dir)
self._read()
@property
def unify(self):
if self._unify is None:
self._unify = spack.config.get("concretizer:unify", False)
return self._unify
@unify.setter
def unify(self, value):
self._unify = value
def __reduce__(self):
return _create_environment, (self.path,)
@@ -816,9 +827,6 @@ def _construct_state_from_manifest(self):
else:
self.views = {}
# Retrieve dev-build packages:
self.dev_specs = copy.deepcopy(env_configuration.get("develop", {}))
for name, entry in self.dev_specs.items():
@@ -1249,16 +1257,23 @@ def develop(self, spec: Spec, path: str, clone: bool = False) -> bool:
tty.msg("Configuring spec %s for development at path %s" % (spec, path))
if clone:
# "steal" the source code via staging API
# "steal" the source code via staging API. We ask for a stage
# to be created, then copy it afterwards somewhere else. It would be
# better if we can create the `source_path` directly into its final
# destination.
abspath = spack.util.path.canonicalize_path(path, default_wd=self.path)
pkg_cls = spack.repo.path.get_pkg_class(spec.name)
# We construct a package class ourselves, rather than asking for
# Spec.package, since Spec only allows this when it is concrete
package = pkg_cls(spec)
if isinstance(package.fetcher, spack.fetch_strategy.GitFetchStrategy):
package.fetcher.get_full_repo = True
# If we retrieved this version before and cached it, we may have
# done so without cloning the full git repo; likewise, any
# mirror might store an instance with truncated history.
package.stage.disable_mirrors()
package.stage.steal_source(abspath)
# If it wasn't already in the list, append it
entry = {"path": path, "spec": str(spec)}
@@ -1826,7 +1841,7 @@ def _partition_roots_by_install_status(self):
specs. This is done in a single read transaction per environment instead
of per spec."""
installed, uninstalled = [], []
with spack.store.STORE.db.read_transaction():
for concretized_hash in self.concretized_order:
spec = self.specs_by_hash[concretized_hash]
if not spec.installed or (
@@ -1871,9 +1886,9 @@ def install_specs(self, specs=None, **install_args):
# Already installed root specs should be marked explicitly installed in the
# database.
if specs_dropped:
with spack.store.STORE.db.write_transaction():  # do all in one transaction
for spec in specs_dropped:
spack.store.STORE.db.update_explicit(spec, True)
if not specs_to_install:
tty.msg("All of the packages are already installed")
@@ -1936,7 +1951,7 @@ def added_specs(self):
"""
# use a transaction to avoid overhead of repeated calls
# to `package.spec.installed`
with spack.store.STORE.db.read_transaction():
concretized = dict(self.concretized_specs())
for spec in self.user_specs:
concrete = concretized.get(spec)

View File

@@ -130,7 +130,7 @@ def activate(env, use_env_repo=False, add_view=True):
#
try:
if add_view and ev.default_view_name in env.views:
with spack.store.STORE.db.read_transaction():
env.add_default_view_to_env(env_mods)
except (spack.repo.UnknownPackageError, spack.repo.UnknownNamespaceError) as e:
tty.error(e)
@@ -165,7 +165,7 @@ def deactivate():
if ev.default_view_name in active.views:
try:
with spack.store.STORE.db.read_transaction():
active.rm_default_view_from_env(env_mods)
except (spack.repo.UnknownPackageError, spack.repo.UnknownNamespaceError) as e:
tty.warn(e)

View File

@@ -42,7 +42,6 @@
import spack.url
import spack.util.crypto as crypto
import spack.util.git
import spack.util.url as url_util
import spack.util.web as web_util
import spack.version
@@ -228,24 +227,6 @@ def mirror_id(self):
"""BundlePackages don't have a mirror id."""
@fetcher
class URLFetchStrategy(FetchStrategy):
"""URLFetchStrategy pulls source code from a URL for an archive, check the
@@ -488,17 +469,7 @@ def check(self):
if not self.digest:
raise NoDigestError("Attempt to check URLFetchStrategy with no digest.")
verify_checksum(self.archive_file, self.digest)
@_needs_stage
def reset(self):
@@ -1388,6 +1359,45 @@ def fetch(self):
raise FailedDownloadError(self.url)
@fetcher
class FetchAndVerifyExpandedFile(URLFetchStrategy):
"""Fetch strategy that verifies the content digest during fetching,
as well as after expanding it."""
def __init__(self, url, archive_sha256: str, expanded_sha256: str):
super().__init__(url, archive_sha256)
self.expanded_sha256 = expanded_sha256
def expand(self):
"""Verify checksum after expanding the archive."""
# Expand the archive
super().expand()
# Ensure a single patch file.
src_dir = self.stage.source_path
files = os.listdir(src_dir)
if len(files) != 1:
raise ChecksumError(self, f"Expected a single file in {src_dir}.")
verify_checksum(os.path.join(src_dir, files[0]), self.expanded_sha256)
def verify_checksum(file, digest):
checker = crypto.Checker(digest)
if not checker.check(file):
# On failure, provide some information about the file size and
# contents, so that we can quickly see what the issue is (redirect
# was not followed, empty file, text instead of binary, ...)
size, contents = fs.filesummary(file)
raise ChecksumError(
f"{checker.hash_name} checksum failed for {file}",
f"Expected {digest} but got {checker.sum}. "
f"File size = {size} bytes. Contents = {contents!r}",
)
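A self-contained analogue of verify_checksum using hashlib directly; Spack's crypto.Checker additionally infers the hash algorithm from the digest length, which this sketch hard-codes to sha256:

import hashlib

def sha256_matches(path: str, expected: str) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in 1 MiB chunks so large archives don't load into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected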
def stable_target(fetcher):
"""Returns whether the fetcher target is expected to have a stable
checksum. This is only true if the target is a preexisting archive

View File

@@ -88,7 +88,7 @@ def view_copy(src: str, dst: str, view, spec: Optional[spack.spec.Spec] = None):
elif spack.relocate.is_binary(dst):
spack.relocate.relocate_text_bin(binaries=[dst], prefixes=prefix_to_projection)
else:
prefix_to_projection[spack.store.STORE.layout.root] = view._root
# This is vestigial code for the *old* location of sbang.
prefix_to_projection[
@@ -379,7 +379,7 @@ def needs_file(spec, file):
# check if this spec owns a file of that name (through the
# manifest in the metadata dir, which we have in the view).
manifest_file = os.path.join(
self.get_path_meta_folder(spec), spack.store.STORE.layout.manifest_file_name
)
try:
with open(manifest_file, "r") as f:
@@ -506,14 +506,16 @@ def get_projection_for_spec(self, spec):
def get_all_specs(self):
md_dirs = []
for root, dirs, files in os.walk(self._root):
if spack.store.STORE.layout.metadata_dir in dirs:
md_dirs.append(os.path.join(root, spack.store.STORE.layout.metadata_dir))
specs = []
for md_dir in md_dirs:
if os.path.exists(md_dir):
for name_dir in os.listdir(md_dir):
filename = os.path.join(
md_dir, name_dir, spack.store.STORE.layout.spec_file_name
)
spec = get_spec_from_file(filename)
if spec:
specs.append(spec)
@@ -531,18 +533,18 @@ def get_path_meta_folder(self, spec):
"Get path to meta folder for either spec or spec name."
return os.path.join(
self.get_projection_for_spec(spec),
spack.store.STORE.layout.metadata_dir,
getattr(spec, "name", spec),
)
def get_spec(self, spec):
dotspack = self.get_path_meta_folder(spec)
filename = os.path.join(dotspack, spack.store.STORE.layout.spec_file_name)
return get_spec_from_file(filename)
def link_meta_folder(self, spec):
src = spack.store.STORE.layout.metadata_path(spec)
tgt = self.get_path_meta_folder(spec)
tree = LinkTree(src)
@@ -673,7 +675,7 @@ def add_specs(self, *specs, **kwargs):
# Ignore spack meta data folder.
def skip_list(file):
return os.path.basename(file) == spack.store.STORE.layout.metadata_dir
visitor = SourceMergeVisitor(ignore=skip_list)
@@ -735,14 +737,18 @@ def _source_merge_visitor_to_merge_map(self, visitor: SourceMergeVisitor):
def relative_metadata_dir_for_spec(self, spec):
return os.path.join(
self.get_relative_projection_for_spec(spec),
spack.store.STORE.layout.metadata_dir,
spec.name,
)
def link_metadata(self, specs):
metadata_visitor = SourceMergeVisitor()
for spec in specs:
src_prefix = os.path.join(
spec.package.view_source(), spack.store.STORE.layout.metadata_dir
)
proj = self.relative_metadata_dir_for_spec(spec)
metadata_visitor.set_projection(proj)
visit_directory_tree(src_prefix, metadata_visitor)

View File

@@ -0,0 +1,124 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
from typing import IO, Optional, Tuple
import llnl.util.tty as tty
from llnl.util.filesystem import BaseDirectoryVisitor, visit_directory_tree
from spack.util.elf import ElfParsingError, parse_elf
def should_keep(path: bytes) -> bool:
"""Return True iff path starts with $ (typically for $ORIGIN/${ORIGIN}) or is
absolute and exists."""
return path.startswith(b"$") or (os.path.isabs(path) and os.path.lexists(path))
def _drop_redundant_rpaths(f: IO) -> Optional[Tuple[bytes, bytes]]:
"""Drop redundant entries from rpath.
Args:
f: File object to patch opened in r+b mode.
Returns:
A tuple of the old and new rpath if the rpath was patched, None otherwise.
"""
try:
elf = parse_elf(f, interpreter=False, dynamic_section=True)
except ElfParsingError:
return None
# Nothing to do.
if not elf.has_rpath:
return None
old_rpath_str = elf.dt_rpath_str
new_rpath_str = b":".join(p for p in old_rpath_str.split(b":") if should_keep(p))
# Nothing to write.
if old_rpath_str == new_rpath_str:
return None
# Pad with 0 bytes.
pad = len(old_rpath_str) - len(new_rpath_str)
# This can never happen since we're filtering, but lets be defensive.
if pad < 0:
return None
# The rpath is at a given offset in the string table used by the
# dynamic section.
rpath_offset = elf.pt_dynamic_strtab_offset + elf.rpath_strtab_offset
f.seek(rpath_offset)
f.write(new_rpath_str + b"\x00" * pad)
return old_rpath_str, new_rpath_str
def drop_redundant_rpaths(path: str) -> Optional[Tuple[bytes, bytes]]:
"""Drop redundant entries from rpath.
Args:
path: Path to a potential ELF file to patch.
Returns:
A tuple of the old and new rpath if the rpath was patched, None otherwise.
"""
try:
with open(path, "r+b") as f:
return _drop_redundant_rpaths(f)
except OSError:
return None
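The core of the transformation: keep $ORIGIN-style and existing absolute entries, drop the rest, and NUL-pad so the string table keeps its size (which is what makes the in-place rewrite safe). The rule in isolation:

import os

def filter_rpath(rpath: bytes) -> bytes:
    keep = lambda p: p.startswith(b"$") or (os.path.isabs(p) and os.path.lexists(p))
    new = b":".join(p for p in rpath.split(b":") if keep(p))
    # Pad with NUL bytes so the rewritten string occupies the same space.
    return new + b"\x00" * (len(rpath) - len(new))

print(filter_rpath(b"$ORIGIN/../lib:/definitely/not/there:/usr"))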
class ElfFilesWithRPathVisitor(BaseDirectoryVisitor):
"""Visitor that collects all elf files that have an rpath"""
def __init__(self):
# Map from (ino, dev) -> path. We need 1 path per file, if there are hardlinks,
# we don't need to store the path multiple times.
self.visited = set()
def visit_file(self, root, rel_path, depth):
filepath = os.path.join(root, rel_path)
s = os.lstat(filepath)
identifier = (s.st_ino, s.st_dev)
# We're hitting a hardlink or symlink of an excluded lib, no need to parse.
if identifier in self.visited:
return
self.visited.add(identifier)
result = drop_redundant_rpaths(filepath)
if result is not None:
old, new = result
tty.debug(f"Patched rpath in {rel_path} from {old!r} to {new!r}")
def visit_symlinked_file(self, root, rel_path, depth):
pass
def before_visit_dir(self, root, rel_path, depth):
# Always enter dirs
return True
def before_visit_symlinked_dir(self, root, rel_path, depth):
# Never enter symlinked dirs
return False
def post_install(spec, explicit=None):
# Skip externals
if spec.external:
return
# Only enable on platforms using ELF.
if not spec.satisfies("platform=linux") and not spec.satisfies("platform=cray"):
return
visit_directory_tree(spec.prefix, ElfFilesWithRPathVisitor())

View File

@@ -41,7 +41,7 @@
def sbang_install_path():
"""Location sbang should be installed within Spack's ``install_tree``."""
sbang_root = str(spack.store.STORE.unpadded_root)
install_path = os.path.join(sbang_root, "bin", "sbang")
path_length = len(install_path)
if path_length > system_shebang_limit:

File diff suppressed because it is too large

View File

@@ -436,7 +436,7 @@ def make_argument_parser(**kwargs):
default=None,
action="append",
dest="config_vars",
help="add one or more custom, one off config settings.",
help="add one or more custom, one off config settings",
)
parser.add_argument(
"-C",
@@ -451,9 +451,9 @@ def make_argument_parser(**kwargs):
"--debug",
action="count",
default=0,
help="write out debug messages " "(more d's for more verbosity: -d, -dd, -ddd, etc.)",
help="write out debug messages\n\n(more d's for more verbosity: -d, -dd, -ddd, etc.)",
)
parser.add_argument("--timestamp", action="store_true", help="Add a timestamp to tty output")
parser.add_argument("--timestamp", action="store_true", help="add a timestamp to tty output")
parser.add_argument("--pdb", action="store_true", help="run spack under the pdb debugger")
env_group = parser.add_mutually_exclusive_group()
@@ -527,8 +527,7 @@ def make_argument_parser(**kwargs):
"--sorted-profile",
default=None,
metavar="STAT",
help="profile and sort by one or more of:\n[%s]"
% ",\n ".join([", ".join(line) for line in stat_lines]),
help=f"profile and sort\n\none or more of: {stat_lines[0]}",
)
parser.add_argument(
"--lines",
@@ -555,7 +554,7 @@ def make_argument_parser(**kwargs):
"-V", "--version", action="store_true", help="show version number and exit"
)
parser.add_argument(
"--print-shell-vars", action="store", help="print info needed by setup-env.[c]sh"
"--print-shell-vars", action="store", help="print info needed by setup-env.*sh"
)
return parser
@@ -848,7 +847,7 @@ def shell_set(var, value):
if "modules" in info:
generic_arch = archspec.cpu.host().family
module_spec = "environment-modules target={0}".format(generic_arch)
specs = spack.store.STORE.db.query(module_spec)
if specs:
shell_set("_sp_module_prefix", specs[-1].prefix)
else:

View File

@@ -18,6 +18,7 @@
import sys
import traceback
import urllib.parse
from typing import Optional, Union
import llnl.util.tty as tty
from llnl.util.filesystem import mkdirp
@@ -40,15 +41,6 @@
supported_url_schemes = ("file", "http", "https", "sftp", "ftp", "s3", "gs")
def _url_or_path_to_url(url_or_path: str) -> str:
"""For simplicity we allow mirror URLs in config files to be local, relative paths.
This helper function takes care of distinguishing between URLs and paths, and
@@ -71,36 +63,24 @@ class Mirror:
to them. These two URLs are usually the same.
"""
def __init__(self, data: Union[str, dict], name: Optional[str] = None):
self._data = data
self._name = name
@staticmethod
def from_yaml(stream, name=None):
return Mirror(syaml.load(stream), name)
@staticmethod
def from_json(stream, name=None):
try:
return Mirror(sjson.load(stream), name)
except Exception as e:
raise sjson.SpackJSONError("error parsing JSON mirror:", str(e)) from e
@staticmethod
def from_local_path(path: str):
return Mirror(url_util.path_to_file_url(path))
@staticmethod
def from_url(url: str):
@@ -111,165 +91,220 @@ def from_url(url: str):
url, ", ".join(supported_url_schemes)
)
)
return Mirror(url)
def __eq__(self, other):
if not isinstance(other, Mirror):
return NotImplemented
return self._data == other._data and self._name == other._name
def __str__(self):
return f"{self._name}: {self.push_url} {self.fetch_url}"
def __repr__(self):
return "".join(
(
"Mirror(",
", ".join(
"%s=%s" % (k, repr(v))
for k, v in (
("fetch_url", self._fetch_url),
("push_url", self._push_url),
("name", self._name),
)
if k == "fetch_url" or v
),
")",
)
)
return f"Mirror(name={self._name!r}, data={self._data!r})"
def to_json(self, stream=None):
return sjson.dump(self.to_dict(), stream)
def to_yaml(self, stream=None):
return syaml.dump(self.to_dict(), stream)
def to_dict(self):
return self._data
def display(self, max_len=0):
fetch, push = self.fetch_url, self.push_url
# don't print the same URL twice
url = fetch if fetch == push else f"fetch: {fetch} push: {push}"
source = "s" if self.source else " "
binary = "b" if self.binary else " "
print(f"{self.name: <{max_len}} [{source}{binary}] {url}")
@property
def name(self):
return self._name or "<unnamed>"
@property
def binary(self):
return isinstance(self._data, str) or self._data.get("binary", True)
@property
def source(self):
return isinstance(self._data, str) or self._data.get("source", True)
@property
def fetch_url(self):
"""Get the valid, canonicalized fetch URL"""
return self.get_url("fetch")
@property
def push_url(self):
"""Get the valid, canonicalized push URL. Returns fetch URL if no custom
push URL is defined"""
if self._push_url is None:
return self.fetch_url
url_or_path = self._push_url if isinstance(self._push_url, str) else self._push_url["url"]
return _url_or_path_to_url(url_or_path)
"""Get the valid, canonicalized fetch URL"""
return self.get_url("push")
def _update_connection_dict(self, current_data: dict, new_data: dict, top_level: bool):
keys = ["url", "access_pair", "access_token", "profile", "endpoint_url"]
if top_level:
keys += ["binary", "source"]
changed = False
for key in keys:
if key in new_data and current_data.get(key) != new_data[key]:
current_data[key] = new_data[key]
changed = True
return changed
def update(self, data: dict, direction: Optional[str] = None) -> bool:
"""Modify the mirror with the given data. This takes care
of expanding trivial mirror definitions by URL to something more
rich with a dict if necessary
Args:
data (dict): The data to update the mirror with.
direction (str): The direction to update the mirror in (fetch
or push or None for top-level update)
Returns:
bool: True if the mirror was updated, False otherwise."""
# Modify the top-level entry when no direction is given.
if not data:
return False
# If we only update a URL, there's typically no need to expand things to a dict.
set_url = data["url"] if len(data) == 1 and "url" in data else None
if direction is None:
# First deal with the case where the current top-level entry is just a string.
if isinstance(self._data, str):
# Can we replace that string with something new?
if set_url:
if self._data == set_url:
return False
self._data = set_url
return True
# Otherwise promote to a dict
self._data = {"url": self._data}
# And update the dictionary accordingly.
return self._update_connection_dict(self._data, data, top_level=True)
# Otherwise, update the fetch / push entry; turn top-level
# url string into a dict if necessary.
if isinstance(self._data, str):
self._data = {"url": self._data}
# Create a new fetch / push entry if necessary
if direction not in self._data:
# Keep config minimal if we're just setting the URL.
if set_url:
self._data[direction] = set_url
return True
self._data[direction] = {}
entry = self._data[direction]
# Keep the entry simple if we're just swapping out the URL.
if isinstance(entry, str):
if set_url:
if entry == set_url:
return False
self._data[direction] = set_url
return True
# Otherwise promote to a dict
self._data[direction] = {"url": entry}
return self._update_connection_dict(self._data[direction], data, top_level=False)
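The promotion rule update() implements, reduced to a standalone sketch: a mirror that is just a URL string stays a string as long as only the URL changes, and is promoted to a dict as soon as any richer key is set (illustrative only, not Spack's method):

def update_mirror(data, new):
    if isinstance(data, str):
        if set(new) == {"url"}:
            return new["url"]          # still just a URL
        data = {"url": data}           # promote to a dict
    data.update(new)
    return data

print(update_mirror("https://a.example", {"url": "https://b.example"}))
print(update_mirror("https://a.example", {"binary": False}))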
def _get_value(self, attribute: str, direction: str):
"""Returns the most specific value for a given attribute (either push/fetch or global)"""
if direction not in ("fetch", "push"):
raise ValueError(f"direction must be either 'fetch' or 'push', not {direction}")
if isinstance(self._data, str):
return None
# Either a string (url) or a dictionary, we care about the dict here.
value = self._data.get(direction, {})
# Return top-level entry if only a URL was set.
if isinstance(value, str):
return self._data.get(attribute, None)
return self._data.get(direction, {}).get(attribute, None)
def get_url(self, direction: str):
if direction not in ("fetch", "push"):
raise ValueError(f"direction must be either 'fetch' or 'push', not {direction}")
# Whole mirror config is just a url.
if isinstance(self._data, str):
return _url_or_path_to_url(self._data)
# Default value
url = self._data.get("url")
# Override it with a direction-specific value
if direction in self._data:
# Either a url as string or a dict with url key
info = self._data[direction]
if isinstance(info, str):
url = info
elif "url" in info:
url = info["url"]
return _url_or_path_to_url(url) if url else None
def get_access_token(self, direction: str):
return self._get_value("access_token", direction)
def get_access_pair(self, direction: str):
return self._get_value("access_pair", direction)
def get_profile(self, direction: str):
return self._get_value("profile", direction)
def get_endpoint_url(self, direction: str):
return self._get_value("endpoint_url", direction)
class MirrorCollection(collections.abc.Mapping):
"""A mapping of mirror names to mirrors."""
def __init__(
self,
mirrors=None,
scope=None,
binary: Optional[bool] = None,
source: Optional[bool] = None,
):
"""Initialize a mirror collection.
Args:
mirrors: A name-to-mirror mapping to initialize the collection with.
scope: The scope to use when looking up mirrors from the config.
binary: If True, only include binary mirrors.
If False, omit binary mirrors.
If None, do not filter on binary mirrors.
source: If True, only include source mirrors.
If False, omit source mirrors.
If None, do not filter on source mirrors."""
self._mirrors = {
name: Mirror(data=mirror, name=name)
for name, mirror in (
mirrors.items()
if mirrors is not None
else spack.config.get("mirrors", scope=scope).items()
)
}
if source is not None:
self._mirrors = {k: v for k, v in self._mirrors.items() if v.source == source}
if binary is not None:
self._mirrors = {k: v for k, v in self._mirrors.items() if v.binary == binary}
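The source/binary filtering above, applied to plain data, showing that a mirror given as a bare URL string counts as both a source and a binary mirror (a sketch, not Spack's API):

def mirror_flags(data):
    if isinstance(data, str):
        return True, True  # a bare URL serves both roles
    return data.get("source", True), data.get("binary", True)

mirrors = {
    "plain": "https://mirror.example",
    "binary-only": {"url": "https://cache.example", "source": False},
}
binary_mirrors = {k: v for k, v in mirrors.items() if mirror_flags(v)[1]}
print(sorted(binary_mirrors))  # ['binary-only', 'plain']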
def __eq__(self, other):
return self._mirrors == other._mirrors
@@ -325,7 +360,7 @@ def lookup(self, name_or_url):
result = self.get(name_or_url)
if result is None:
result = Mirror(name_or_url)
return result
@@ -576,24 +611,8 @@ def remove(name, scope):
if name not in mirrors:
tty.die("No mirror with name %s" % name)
mirrors.pop(name)
spack.config.set("mirrors", mirrors, scope=scope)
debug_msg_url = "url %s"
debug_msg = ["Removed mirror %s with"]
values = [name]
try:
fetch_value = old_value["fetch"]
push_value = old_value["push"]
debug_msg.extend(("fetch", debug_msg_url, "and push", debug_msg_url))
values.extend((fetch_value, push_value))
except TypeError:
debug_msg.append(debug_msg_url)
values.append(old_value)
tty.debug(" ".join(debug_msg) % tuple(values))
tty.msg("Removed mirror %s." % name)
@@ -656,12 +675,9 @@ def create_mirror_from_package_object(pkg_obj, mirror_cache, mirror_stats):
num_retries = 3
while num_retries > 0:
try:
# Includes patches and resources
with pkg_obj.stage as pkg_stage:
pkg_stage.cache_mirror(mirror_cache, mirror_stats)
exception = None
break
except Exception as e:

View File

@@ -35,6 +35,7 @@
import os.path
import pathlib
import re
import string
import warnings
from typing import Optional
@@ -248,7 +249,7 @@ def generate_module_index(root, modules, overwrite=False):
def _generate_upstream_module_index():
module_indices = read_module_indices()
return UpstreamModuleIndex(spack.store.STORE.db, module_indices)
upstream_module_index = llnl.util.lang.Singleton(_generate_upstream_module_index)
@@ -353,7 +354,7 @@ def get_module(module_type, spec, get_full_path, module_set_name="default", requ
try:
upstream = spec.installed_upstream
except spack.repo.UnknownPackageError:
upstream, record = spack.store.STORE.db.query_by_spec_hash(spec.dag_hash())
if upstream:
module = spack.modules.common.upstream_module_index.upstream_module(spec, module_type)
if not module:
@@ -470,6 +471,11 @@ def hash(self):
return self.spec.dag_hash(length=hash_length)
return None
@property
def conflicts(self):
"""Conflicts for this module file"""
return self.conf.get("conflict", [])
@property
def excluded(self):
"""Returns True if the module has been excluded, False otherwise."""
@@ -763,6 +769,36 @@ def has_manpath_modifications(self):
else:
return False
@tengine.context_property
def conflicts(self):
"""List of conflicts for the module file."""
fmts = []
projection = proj.get_projection(self.conf.projections, self.spec)
for item in self.conf.conflicts:
self._verify_conflict_naming_consistency_or_raise(item, projection)
item = self.spec.format(item)
fmts.append(item)
return fmts
def _verify_conflict_naming_consistency_or_raise(self, item, projection):
f = string.Formatter()
errors = []
if len([x for x in f.parse(item)]) > 1:
for naming_dir, conflict_dir in zip(projection.split("/"), item.split("/")):
if naming_dir != conflict_dir:
errors.extend(
[
f"spec={self.spec.cshort_spec}",
f"conflict_scheme={item}",
f"naming_scheme={projection}",
]
)
if errors:
raise ModulesError(
message="conflict scheme does not match naming scheme",
long_message="\n ".join(errors),
)
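How the check decides an entry is a format string at all: string.Formatter().parse() yields one tuple per literal/field segment, so more than one segment means there is at least one {token} to expand and the per-directory comparison must run:

import string

f = string.Formatter()
print(len(list(f.parse("gcc"))))               # 1 -> plain name, no check needed
print(len(list(f.parse("{name}/{version}"))))  # 2 -> compare against projection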
@tengine.context_property
def autoload(self):
"""List of modules that needs to be loaded automatically."""

View File

@@ -7,13 +7,9 @@
non-hierarchical modules.
"""
import posixpath
from typing import Any, Dict
import spack.config
import spack.tengine as tengine
from .common import BaseConfiguration, BaseContext, BaseFileLayout, BaseModuleFileWriter
@@ -56,11 +52,6 @@ def make_context(spec, module_set_name, explicit):
class TclConfiguration(BaseConfiguration):
"""Configuration class for tcl module files."""
class TclFileLayout(BaseFileLayout):
"""File layout for tcl module files."""
@@ -74,29 +65,6 @@ def prerequisites(self):
"""List of modules that needs to be loaded automatically."""
return self._create_module_list_of("specs_to_prereq")
class TclModulefileWriter(BaseModuleFileWriter):
"""Writer class for tcl module files."""

View File

@@ -86,7 +86,7 @@ def __init__(self):
# distro.linux_distribution, while still calling __init__
# methods further up the MRO, we skip LinuxDistro in the MRO and
# call the OperatingSystem superclass __init__ method
super(LinuxDistro, self).__init__(name, version)
else:
super().__init__()
self.modulecmd = module

View File

@@ -11,7 +11,6 @@
import base64
import collections
import contextlib
import functools
import glob
@@ -45,6 +44,7 @@
import spack.mirror
import spack.mixins
import spack.multimethod
import spack.patch
import spack.paths
import spack.repo
import spack.spec
@@ -63,7 +63,7 @@
install_test_root,
)
from spack.installer import InstallError, PackageInstaller
from spack.stage import DIYStage, ResourceStage, Stage, StageComposite, compute_stage_name
from spack.util.executable import ProcessError, which
from spack.util.package_hash import package_hash
from spack.util.web import FetchError
@@ -639,7 +639,7 @@ class PackageBase(WindowsRPath, PackageViewMixin, metaclass=PackageMeta):
def __init__(self, spec):
# this determines how the package should be built.
self.spec: "spack.spec.Spec" = spec
# Allow custom staging paths for packages
self.path = None
@@ -982,20 +982,17 @@ def find_valid_url_for_version(self, version):
return None
def _make_resource_stage(self, root_stage, resource):
return ResourceStage(
resource.fetcher,
root=root_stage,
resource=resource,
name=self._resource_stage(resource),
mirror_paths=spack.mirror.mirror_archive_paths(
resource.fetcher, os.path.join(self.name, f"{resource.name}-{self.version}")
),
path=self.path,
)
def _download_search(self):
dynamic_fetcher = fs.from_list_url(self)
@@ -1004,7 +1001,7 @@ def _download_search(self):
def _make_root_stage(self, fetcher):
# Construct a mirror path (TODO: get this out of package.py)
mirror_paths = spack.mirror.mirror_archive_paths(
fetcher, os.path.join(self.name, "%s-%s" % (self.name, self.version)), self.spec
fetcher, os.path.join(self.name, f"{self.name}-{self.version}"), self.spec
)
# Construct a path where the stage should build..
s = self.spec
@@ -1022,24 +1019,21 @@ def _make_stage(self):
# If it's a dev package (not transitively), use a DIY stage object
dev_path_var = self.spec.variants.get("dev_path", None)
if dev_path_var:
return DIYStage(dev_path_var.value)
# To fetch the current version
source_stage = self._make_root_stage(self.fetcher)
# Extend it with all resources and patches
all_stages = StageComposite()
all_stages.append(source_stage)
all_stages.extend(
self._make_resource_stage(source_stage, r) for r in self._get_needed_resources()
)
all_stages.extend(
p.stage for p in self.spec.patches if isinstance(p, spack.patch.UrlPatch)
)
return all_stages
@property
def stage(self):
@@ -1082,7 +1076,7 @@ def env_mods_path(self):
@property
def metadata_dir(self):
"""Return the install metadata directory."""
return spack.store.STORE.layout.metadata_path(self.spec)
@property
def install_env_path(self):
@@ -1188,26 +1182,12 @@ def installed_upstream(self):
warnings.warn(msg)
return self.spec.installed_upstream
@property
def fetcher(self):
if not self.spec.versions.concrete:
raise ValueError("Cannot retrieve fetcher for package without concrete version.")
if not self._fetcher:
self._fetcher = fs.for_package_version(self)
return self._fetcher
@fetcher.setter
@@ -1353,7 +1333,7 @@ def remove_prefix(self):
Removes the prefix for a package along with any empty parent
directories
"""
spack.store.STORE.layout.remove_install_directory(self.spec)
@property
def download_instr(self):
@@ -1446,11 +1426,6 @@ def do_fetch(self, mirror_only=False):
             self.stage.cache_local()

-        for patch in self.spec.patches:
-            patch.fetch()
-            if patch.stage:
-                patch.stage.cache_local()
-
     def do_stage(self, mirror_only=False):
         """Unpacks and expands the fetched tarball."""
         # Always create the stage directory at this point. Why? A no-code
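The loop removed above fetched and cached patch archives as a separate pass inside do_fetch. Since UrlPatch stages are now members of the composite returned by _make_stage, operations on self.stage such as cache_local() already reach them, and a second pass would fetch the same archives twice. (This reading is inferred from the hunks shown in this diff; the full body of do_fetch is not visible here.)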
@@ -1785,13 +1760,6 @@ def _resource_stage(self, resource):
         resource_stage_folder = "-".join(pieces)
         return resource_stage_folder

-    @contextlib.contextmanager
-    def _stage_and_write_lock(self):
-        """Prefix lock nested in a stage."""
-        with self.stage:
-            with spack.store.db.prefix_write_lock(self.spec):
-                yield
-
     def do_install(self, **kwargs):
         """Called by commands to install a package and/or its dependencies.
@@ -2215,20 +2183,20 @@ def uninstall_by_spec(spec, force=False, deprecator=None):
         if not os.path.isdir(spec.prefix):
             # prefix may not exist, but DB may be inconsistent. Try to fix by
            # removing, but omit hooks.
-            specs = spack.store.db.query(spec, installed=True)
+            specs = spack.store.STORE.db.query(spec, installed=True)
             if specs:
                 if deprecator:
-                    spack.store.db.deprecate(specs[0], deprecator)
+                    spack.store.STORE.db.deprecate(specs[0], deprecator)
                     tty.debug("Deprecating stale DB entry for {0}".format(spec.short_spec))
                 else:
-                    spack.store.db.remove(specs[0])
+                    spack.store.STORE.db.remove(specs[0])
                     tty.debug("Removed stale DB entry for {0}".format(spec.short_spec))
                 return
             else:
                 raise InstallError(str(spec) + " is not installed.")

         if not force:
-            dependents = spack.store.db.installed_relatives(
+            dependents = spack.store.STORE.db.installed_relatives(
                 spec, direction="parents", transitive=True, deptype=("link", "run")
             )
             if dependents:
@@ -2241,7 +2209,7 @@ def uninstall_by_spec(spec, force=False, deprecator=None):
             pkg = None

         # Pre-uninstall hook runs first.
-        with spack.store.db.prefix_write_lock(spec):
+        with spack.store.STORE.db.prefix_write_lock(spec):
             if pkg is not None:
                 try:
                     spack.hooks.pre_uninstall(spec)
@@ -2267,17 +2235,17 @@ def uninstall_by_spec(spec, force=False, deprecator=None):
             tty.debug(msg.format(spec.short_spec))
             # test if spec is already deprecated, not whether we want to
             # deprecate it now
-            deprecated = bool(spack.store.db.deprecator(spec))
-            spack.store.layout.remove_install_directory(spec, deprecated)
+            deprecated = bool(spack.store.STORE.db.deprecator(spec))
+            spack.store.STORE.layout.remove_install_directory(spec, deprecated)

         # Delete DB entry
         if deprecator:
             msg = "deprecating DB entry [{0}] in favor of [{1}]"
             tty.debug(msg.format(spec.short_spec, deprecator.short_spec))
-            spack.store.db.deprecate(spec, deprecator)
+            spack.store.STORE.db.deprecate(spec, deprecator)
         else:
             msg = "Deleting DB entry [{0}]"
             tty.debug(msg.format(spec.short_spec))
-            spack.store.db.remove(spec)
+            spack.store.STORE.db.remove(spec)

         if pkg is not None:
             try:
@@ -2308,24 +2276,24 @@ def do_deprecate(self, deprecator, link_fn):
         spec = self.spec

         # Install deprecator if it isn't installed already
-        if not spack.store.db.query(deprecator):
+        if not spack.store.STORE.db.query(deprecator):
             deprecator.package.do_install()

-        old_deprecator = spack.store.db.deprecator(spec)
+        old_deprecator = spack.store.STORE.db.deprecator(spec)
         if old_deprecator:
             # Find this spec's yaml file from its old deprecation
-            self_yaml = spack.store.layout.deprecated_file_path(spec, old_deprecator)
+            self_yaml = spack.store.STORE.layout.deprecated_file_path(spec, old_deprecator)
         else:
-            self_yaml = spack.store.layout.spec_file_path(spec)
+            self_yaml = spack.store.STORE.layout.spec_file_path(spec)

         # copy spec metadata to "deprecated" dir of deprecator
-        depr_yaml = spack.store.layout.deprecated_file_path(spec, deprecator)
+        depr_yaml = spack.store.STORE.layout.deprecated_file_path(spec, deprecator)
         fsys.mkdirp(os.path.dirname(depr_yaml))
         shutil.copy2(self_yaml, depr_yaml)

         # Any specs deprecated in favor of this spec are re-deprecated in
         # favor of its new deprecator
-        for deprecated in spack.store.db.specs_deprecated_by(spec):
+        for deprecated in spack.store.STORE.db.specs_deprecated_by(spec):
             deprecated.package.do_deprecate(deprecator, link_fn)

         # Now that we've handled metadata, uninstall and replace with link
@@ -2341,7 +2309,7 @@ def view(self):
         Extensions added to this view will modify the installation prefix of
         this package.
         """
-        return YamlFilesystemView(self.prefix, spack.store.layout)
+        return YamlFilesystemView(self.prefix, spack.store.STORE.layout)

     def do_restage(self):
         """Reverts expanded/checked out source to a pristine state."""
@@ -2349,9 +2317,6 @@ def do_restage(self):
     def do_clean(self):
         """Removes the package's build stage and source tarball."""
-        for patch in self.spec.patches:
-            patch.clean()
-
         self.stage.destroy()

     @classmethod
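The do_clean hunk above follows the same reasoning as the do_fetch change: patch stages belong to the composite built by _make_stage, so self.stage.destroy() tears them down along with the source and resource stages, and the per-patch clean() calls become redundant.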
@@ -2468,7 +2433,7 @@ def flatten_dependencies(spec, flat_dir):
     for dep in spec.traverse(root=False):
         name = dep.name

-        dep_path = spack.store.layout.path_for_spec(dep)
+        dep_path = spack.store.STORE.layout.path_for_spec(dep)
         dep_files = LinkTree(dep_path)

         os.mkdir(flat_dir + "/" + name)
Some files were not shown because too many files have changed in this diff.