Compare commits

..

127 Commits

Author SHA1 Message Date
Harmen Stoppels
296e5308a7 mirror: fetch by digest (#45809)
Source mirrors store entries by digest and add a human readable alias of the
form 'name-version'. If no digest is available, the alias is used as the primary
storage location.

Spack erroneously fetches by alias when the digest path does not exist. This is
problematic if `version(..., sha256=...)` changes in package.py, and the mirror
is populated with the old shasum. That would result in an error when a digest
is available, but in case of git versions with a modified commit sha, the wrong
sources would be fetched without error. With this PR, only the digest path is
used, not the alias, in case a digest is available. This is also a small performance
optimization, as the number of requests is halved for mirrors that don't contain
the sources.

Further, for git sources the tag was used as a digest, but this is a moving
target. Only commit sha is used now.

Also whenever the alias already existed, Spack used to keep it in place when
updating the mirror cache, which means that aliases would always point to
outdated mirror entries whenever digests are modified. With this PR the alias
is moved in place.

Lastly, fix a recent regression where `Stage.disable_mirrors` disabled mirrors
but not the local download cache, which was the intention.
2024-08-24 09:09:25 +02:00
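
A minimal sketch of the digest-first lookup described in the commit above, using illustrative names (mirror_root, digest, alias) and an assumed `_source-cache/archive` layout rather than Spack's actual internals:

    import os
    from typing import Optional

    def mirror_entry_path(mirror_root: str, digest: Optional[str], alias: str) -> str:
        # Prefer the content-addressed path whenever a digest (e.g. sha256) is known;
        # fall back to the human-readable alias only when no digest is available.
        if digest:
            return os.path.join(mirror_root, "_source-cache", "archive", digest[:2], digest)
        return os.path.join(mirror_root, alias)
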
Massimiliano Culpo
47e79c32fd Substitute __import__ with importlib.import_module (#45965) 2024-08-23 21:41:26 +02:00
David Gardner
906799eec5 add SuperLU_MT v4.0.1 (#45924) 2024-08-23 09:53:21 -07:00
kwryankrattiger
dcdcab7b2c VTK-m: Point to github mirror for source tarball (#45893)
* VTK-m: Point to github mirror for source tarball
  The gitlab.kitware.com source location seems to have intermittent
  network issues. Switching to the mirror hosted at GitHub may alleviate
  some of the timeouts.
* Update sha256 for GitHub tarballs

---------

Co-authored-by: Zack Galbreath <zack.galbreath@kitware.com>
2024-08-23 09:39:59 -07:00
snehring
86050decb9 gromacs: add env mods for cufftmp w/ gcc (#45887) 2024-08-23 09:37:38 -07:00
Stephen Nicholas Swatman
fff8165f2f davix: add versions 0.8.2-0.8.7 and dependencies (#45853)
* davix: add versions 0.8.2-0.8.7 and dependencies
  This commit adds new versions 0.8.2-0.8.7 of the davix package, and it
  also improves the handling of embedded packages. Davix will try to build
  libcurl from its own embedded version of that code, which doesn't mesh
  well with Spack's design philosophy, so I've changed the CMake
  configuration to disallow the builtin libcurl and use a Spack dependency
  instead. Up to version 0.8.7, RapidJSON was also builtin, but version
  0.8.7 allows users to specify that they want to use a pre-installed
  version of RapidJSON, so this commit also adds that as a dependency for
  versions 0.8.7:.
* Fix old versions
2024-08-23 09:35:48 -07:00
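
A hedged sketch of the kind of CMake configuration described in the davix commit above; the option names EMBEDDED_LIBCURL and EMBEDDED_RAPIDJSON are assumptions for illustration, not verified davix options:

    def cmake_args(self):
        # assumed option: build against Spack's curl instead of the embedded copy
        args = [self.define("EMBEDDED_LIBCURL", False)]
        if self.spec.satisfies("@0.8.7:"):
            # assumed option: use a pre-installed RapidJSON
            args.append(self.define("EMBEDDED_RAPIDJSON", False))
        return args
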
Wouter Deconinck
5a9dbcc0c4 ddt: add v23.0.4 -> v24.0.3 (#45861)
* ddt: add v23.0.4 -> v24.0.3
* ddt: fix url_for_version for 22.1.3

---------

Co-authored-by: Lydéric Debusschère <lyderic.de@gmail.com>
2024-08-23 09:12:42 -07:00
Adam J. Stewart
bd627465f3 py-autograd: mark numpy 2 compatibility (#45942)
* py-autograd: mark numpy 2 compatibility

* Fix syntax error
2024-08-23 09:08:15 -07:00
Stephen Nicholas Swatman
f96e8757b8 acts: add v36.0.0, v36.1.0 and fixes (#45874)
* acts: add v36.0.0, v36.1.0 and fixes

This commit makes several changes to the Acts repository, namely:

1. It adds versions 36.0.0 and 36.1.0.
2. It adds the traccc plugin and related dependencies.
3. It updates the version requirements of some dependencies.
4. It adds the Geant4 module of GeoModel.
5. It updates the C++ standard requirement.
6. It adds a new variant determining the scalar type to use.

This commit supersedes #45851. Thanks @jmcarcell for the version
updates; I have added you as co-author.

Co-authored-by: Juan Miguel Carceller <22276694+jmcarcell@users.noreply.github.com>

* Updates

* alphabetic cmake args

---------

Co-authored-by: Juan Miguel Carceller <22276694+jmcarcell@users.noreply.github.com>
2024-08-23 07:09:29 -06:00
Harmen Stoppels
b8cbbb8e2e spack create: add depends_on(<lang>) statements (#45296) 2024-08-23 10:33:05 +02:00
Harmen Stoppels
d40f847497 Add missing MultiMethodMeta metaclass in builders (#45879)
* Add missing MultiMethodMeta metaclass in builders

and remove the Python 2 fallback option in favor of hard errors to catch
similar issues going forward.

The fallback option can cause about 10K stat calls due to use of
`realpath` in the inspect module, depending on how deep Spack itself is
nested in the file system, which is ... undesirable.

* code shuffling to avoid circular import

* more reshuffling

* move reserved variant names into variants module
2024-08-23 09:23:25 +02:00
Wouter Deconinck
ed34dfca96 xrootd: change urls to xrootd.web.cern.ch (#45895)
* xrootd: change urls to xrootd.web.cern.ch
* xrootd: change homepage
2024-08-22 20:02:14 -06:00
Christopher Christofi
3ee6a5b96f py-ipykernel: add version 6.29.5 (#45876)
* py-ipykernel: add version 6.29.5
* add maintainers for spack package
2024-08-22 18:24:38 -07:00
Kyoko Nagahashi
88bcfddbbb New package: linux-external-modules (#45797) 2024-08-22 19:20:01 -06:00
Juan Miguel Carceller
c49269f9dd poppler: change the URL of the test repository (#45857)
* poppler: change the URL of the test repository
2024-08-22 18:03:28 -07:00
Adam J. Stewart
ef45c392e0 py-scipy: add v1.14.1 (#45847) 2024-08-22 18:00:57 -07:00
AMD Toolchain Support
8b811171c7 removing -Ofast with aocc (#45880)
Co-authored-by: shbhaska <shbhaska@amd.com>
2024-08-22 17:54:58 -07:00
Richard Berger
823a2c1e4b kokkos-tools: add new package (#45382) 2024-08-22 17:09:19 -07:00
Massimiliano Culpo
ead25b1e9e Add a new audit to find missing package.py files (#45868)
* Add a new audit to find missing package.py files

* Remove directory without package.py
2024-08-22 14:22:54 -07:00
Stephen Nicholas Swatman
d5eefcba87 llvm-amdgpu: Conflict with MacOS (#45633)
Currently, the llvm-amdgpu package doesn't compile on MacOS, but it is
also not marked as a conflict. This causes problems because it seems
that Spack is very happy to pull in llvm-amdgpu as the default package
to satisfy any virtual libllvm dependency, which can cause dependent
specs to fail to install on MacOS. This commit marks a conflict between
this llvm package and the Darwin platform.
2024-08-22 11:14:20 -06:00
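
A short sketch of how such a platform conflict is typically expressed in a Spack package (the message text is illustrative):

    conflicts("platform=darwin", msg="llvm-amdgpu does not currently build on macOS")
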
Nicole C.
1bcb1fcebc Windows: port tests for mirror/blame (#45259) 2024-08-22 09:49:32 -07:00
John W. Parent
f19b657235 VTK package: patch to fix NetCDFC - HDF5 interface (#43087)
Patch from Windows is also needed on Linux
2024-08-22 09:48:13 -07:00
Stephen Nicholas Swatman
8e1bd9a403 root: remove +webgui~http conflict version clause (#45856)
* root: set +webgui~http conflict from @6.28.12:

Currently, the ROOT spec correctly identifies a conflict between +webgui
and ~http, but this conflict is marked as affecting @6.29.00: only. As a
matter of fact, ROOT 6.28.12 is also affected. This commit, therefore,
updates the when clause on the conflict to @6.28.12:.

* Remove when clause entirely

* oops
2024-08-22 08:14:09 -05:00
Stephen Nicholas Swatman
ba56622574 geomodel: fix bug in cmake_args (#45869) 2024-08-22 05:13:21 -06:00
Massimiliano Culpo
836be2364c Make spack compiler find use external find (#45784)
so that there is no duplicate detection logic for compilers
2024-08-22 12:13:08 +02:00
Juan Miguel Carceller
b623f58782 root: add version 6.32.04 (#45850)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-08-21 16:43:48 -07:00
John W. Parent
182bc87fe1 Windows: Port icu4c; define cxx std flags for MSVC (#45547)
* Adds an MSBuild system + Builder to the icu4c package
* Adds custom install method as MSBuild system does not vendor an
  install target
* The cxxstd variant is not supported on Windows (there are no config
  options you can use to tell the build system which C++ standard to
  build against), so the variant definition was updated to apply
  everywhere except Windows

Also, this commit defines the c/cxx..._flag properties of the MSVC
compiler (although they are not used by `icu4c` and not strictly
necessary to bundle with this PR).
2024-08-21 16:08:57 -06:00
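
A hedged sketch of defining a variant everywhere except Windows via conditional variants; the platform list and value set are illustrative assumptions, not the exact icu4c definition:

    for plat in ("linux", "darwin", "freebsd"):
        variant(
            "cxxstd",
            default="17",
            values=("14", "17", "20"),
            multi=False,
            description="C++ standard used during compilation",
            when=f"platform={plat}",
        )
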
Stephen Nicholas Swatman
f93595ba2f acts: add GeoModel dependency (#45859)
This commit adds a dependency on GeoModel 4.6.0 when the GeoModel plugin
is enabled. Note that the dependency is upgraded to 6.3.0 in Acts
36.1.0, but that will need to be covered in #45851.
2024-08-21 11:27:54 -06:00
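
A short sketch of the conditional dependency described above (version bound as stated in the commit message):

    depends_on("geomodel@4.6.0:", when="+geomodel")
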
Adam J. Stewart
aa5b17ceb5 py-shapely: add v2.0.6 (#45833) 2024-08-21 09:47:24 -07:00
Dom Heinzeller
eb5a1d3b4c Add fms@2024.02 (#45838) 2024-08-21 09:44:40 -07:00
Wouter Deconinck
84f680239e geoip: deprecate due to duplication (#45840)
* geoip: deprecate due to duplication
* geoip-api-c: fixed hashes; checked license; verified c code

---------

Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2024-08-21 09:42:54 -07:00
Stephen Nicholas Swatman
c7b693a0df geomodel: add versions 5.1.0-6.4.0 (#45858)
* geomodel: add versions 5.1.0-6.4.0

This PR adds new versions 5.1.0 through 6.4.0 of the geomodel package.
It also updates the CMake configuration to use the `define_from_variant`
mechanism and it adds me as a maintainer.

* Undo cmake change
2024-08-21 10:41:36 -06:00
Alex Seaton
7d5ad18573 heyoka: add v5.1.0 (#45841) 2024-08-21 09:30:01 -07:00
Stephen Herbener
2921e04353 Added latest version of eckit (#45834) 2024-08-21 09:01:54 -07:00
Massimiliano Culpo
33464a7038 gcc: simplify version_regex, change string to filter out Apple clang (#45852) 2024-08-21 16:36:07 +02:00
Juan Miguel Carceller
d3cdb2a344 sherpa: add v3.0.0, remove deprecated @:2.2.10 (#45101)
* Remove deprecated versions

* Add sherpa 3.0.0 and CMake builds

* Address comments in #45101

* Add builder classes for cmake and autotools

---------

Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-08-21 08:41:18 -05:00
Massimiliano Culpo
34df21b62c gcc: restore old detection (#45810) 2024-08-21 11:31:52 +02:00
Wouter Deconinck
e8a13642a0 packages/*: http -> https where permanent redirect (#45835)
* *: http -> https where permanent redirect

* fix: style
2024-08-21 09:18:24 +02:00
Lin Guo
dc3c96dd2f orca: add 6.0.0 avx2 version (#45824)
The avx2 version can be downloaded from the ORCA [forum](https://orcaforum.kofo.mpg.de/app.php/dlext/?view=detail&df_id=214#).

The version is named `avx2-6.0.0` (as opposed to the more
natural-looking `6.0.0-avx2`) to avoid the avx2 version shadowing the
non-avx2 one. Definitely open to better suggestions.
2024-08-20 18:41:30 -07:00
Victor Brunini
c29652580a arborx: Enable use of Kokkos from Trilinos for cuda/rocm. (#45220) 2024-08-20 15:48:42 -07:00
Matt Thompson
d714a9b223 mapl: add 2.47.2, 2.46.3 (#45795) 2024-08-20 13:10:09 -07:00
Marcel Koch
f596a8cdad ginkgo: add v1.8.0 (#45791)
* bump[Ginkgo]: add version 1.8.0
* [Ginkgo] add patch for rocthrust
* [Ginkgo] change maintainer
* [Ginkgo] use patch from PR
* [Ginkgo] fix style issues

---------

Co-authored-by: Terry Cojean <terry.cojean@kit.edu>
2024-08-20 09:36:51 -07:00
Vicente Bolea
3699a0ec9b paraview: add new v5.13.0-RC2 release (#45754) 2024-08-20 10:52:35 -05:00
Vicente Bolea
6c268234ba paraview: add smoke tests (#45759) 2024-08-20 10:46:45 -05:00
Greg Becker
c1736077bb spack bootstrap status --dev: function call for new interface (#45822) 2024-08-20 13:04:39 +00:00
psakievich
85905959dc Increase min version for sparse_checkout (#45818)
* Increase min version for sparse_checkout

* Update git_fetch.py

* style
2024-08-20 13:04:23 +00:00
Harmen Stoppels
2ae5596e92 Unify url and oci buildcache push code paths (#45776) 2024-08-20 13:17:49 +02:00
Sajid Ali
9d0b9f086f Fix linking for python with external ncurses (#45803)
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2024-08-20 06:09:55 +02:00
Mikael Simberg
da079ed06f ninja: add 1.12.1 (#45789) 2024-08-19 17:14:13 -06:00
Fernando Ayats
a69c5b3e32 freefem: add v4.13, v4.12 and v4.11 (#45808) 2024-08-19 13:21:10 -07:00
Massimiliano Culpo
e3cce2bd96 binutils: add v2.43.1 (#45806) 2024-08-19 13:40:51 -06:00
AcriusWinter
0d668e4e92 hsakmt-roct: remove use of deprecated run_test method (#45763)
* hsakmt-roct: new test API
* hsakmt-roct: minor change to check_install script variable name

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2024-08-19 12:20:54 -07:00
David Gardner
ad6c7380c5 silo: add python variant (#45757)
* add python variant

* use enable_or_disable

* use extend
2024-08-19 10:56:44 -07:00
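
A minimal sketch of the enable_or_disable pattern referenced above, assuming an Autotools-based package:

    def configure_args(self):
        args = []
        # expands to ["--enable-python"] or ["--disable-python"] depending on the variant
        args.extend(self.enable_or_disable("python"))
        return args
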
Adam J. Stewart
c064a30765 py-numpy: add v2.1.0 (#45807) 2024-08-19 10:25:37 -07:00
AcriusWinter
4a4f156d99 parallel-netcdf: new test API (#45170)
* parallel-netcdf: new test API
* parallel-netcdf: fix test args and tweak docstring and variables

---------

Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
2024-08-19 09:39:42 -07:00
AcriusWinter
cb8878aaf4 hipsolver: remove use of deprecated run_test method (#45761)
* hipsolver: new test API
2024-08-19 09:38:19 -07:00
Vicente Bolea
d49f3a0960 diy: add smoke test (#45749)
Installing examples for running smoke tests for the DIY project.
2024-08-19 11:04:09 -05:00
Massimiliano Culpo
15413c7258 llvm based compilers: filter out non-compilers (#45805) 2024-08-19 09:28:37 -06:00
Teague Sterling
de754c7a47 perl-bio-bigfile: new package (#44505)
* Adding the perl-bio-db-bigfile package

* Update package.py

* Update package.py

* Update package.py

* Updating dependent package handling

Signed-off-by: Teague Sterling <teaguesterling@gmail.com>

* Updating dependent package handling

Signed-off-by: Teague Sterling <teaguesterling@gmail.com>

* Reverting variants

Signed-off-by: Teague Sterling <teaguesterling@gmail.com>

* Rename package.py to package.py

* Update package.py

* Removing unneeded dependencies

Signed-off-by: Teague Sterling <teaguesterling@gmail.com>

---------

Signed-off-by: Teague Sterling <teaguesterling@gmail.com>
2024-08-19 16:14:49 +01:00
Harmen Stoppels
ac9398ed21 build_environment: explicitly disable ccache if disabled (#45275) 2024-08-19 13:49:38 +02:00
Harmen Stoppels
57769fac7d Simplify URLFetchStrategy (#45741) 2024-08-19 11:34:13 +02:00
Wouter Deconinck
c65fd7e12d apfel: add v3.1.1 (now CMakePackage) (#45661)
Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-08-19 10:45:49 +02:00
snehring
c71d778875 salmon: add v1.10.3 (#45695)
Signed-off-by: Shane Nehring <snehring@iastate.edu>
2024-08-19 10:38:56 +02:00
Luke Robison
a7313dc407 WRF: add -fpermissive when using gcc@14: (#45438) 2024-08-19 10:30:18 +02:00
Wouter Deconinck
31477d5dc7 activeharmony: replace dead links (#45704) 2024-08-19 10:28:57 +02:00
Wouter Deconinck
382ba0d041 mlpack: add through v4.4.0 (#45707) 2024-08-19 10:26:10 +02:00
Adam J. Stewart
886c950423 py-keras: add v3.5 (#45711) 2024-08-19 10:11:08 +02:00
Matthias Wolf
3798b16a29 py-brain-indexer: new package (#44718) 2024-08-19 10:09:43 +02:00
Matt Thompson
796617054d py-pyyaml: add v6.0.2 (#45716) 2024-08-19 10:05:39 +02:00
Pranav Sivaraman
78fc25ec12 magic-enum: fix minimum compiler versions conflicts (#45705) 2024-08-19 10:04:09 +02:00
Wouter Deconinck
6de51fdc58 librsvg: depends_on cairo +png (#45729) 2024-08-19 09:55:28 +02:00
snehring
430ba496d1 liftoff: add new package (#45726)
Signed-off-by: Shane Nehring <snehring@iastate.edu>
2024-08-19 09:48:47 +02:00
Wouter Deconinck
e1ede9c04b bind9: add v9.18.28, v9.20.0 (#45728) 2024-08-19 09:47:16 +02:00
Wouter Deconinck
856dd3417b gradle: add through v8.9 (#45731) 2024-08-19 09:44:45 +02:00
Wouter Deconinck
e49c6f68bc maven: add v3.8.8, v3.9.8 (#45732) 2024-08-19 09:44:00 +02:00
Alex Leute
eed7a1af24 mlc-llm: new package and dependency (#44726) 2024-08-19 09:33:00 +02:00
Rocco Meli
22e40541c7 CP2K: add 2024.2, fix dbcsr+g2g+plumed (#45614) 2024-08-19 09:19:17 +02:00
Wouter Deconinck
8561c89c25 hadoop: add v3.3.3 -> v3.4.0 (#45735) 2024-08-19 09:05:26 +02:00
dslarm
6501705de2 armpl-gcc - finish enabling debian12 (#45744) 2024-08-19 09:01:09 +02:00
Wouter Deconinck
0b3e1fd412 openssh: add v9.8p1 (#45736)
Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2024-08-19 08:52:09 +02:00
Wouter Deconinck
c260da5127 shared-mime-info: fix url for certificate name mismatch (#45779) 2024-08-19 08:42:03 +02:00
Wouter Deconinck
f63261dc65 ghostscript: add v10.01.2, v10.02.1, v10.03.1 (#45780) 2024-08-19 08:40:56 +02:00
Wouter Deconinck
1c081611ea graphviz: add v8.1.0 -> v12.1.0 (#45675) 2024-08-19 08:33:02 +02:00
Alec Scott
428b4e340a Remove deprecated --safe-only in spack version cmd (#45765) 2024-08-19 08:28:19 +02:00
Wouter Deconinck
20bf239a6a xorg-server: add variants dri and glx (#45787)
Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2024-08-19 08:07:56 +02:00
Massimiliano Culpo
cd682613cf dbcsr: avoid using a set in a message (#45804) 2024-08-19 07:35:33 +02:00
Axel Huebl
c1852e3706 WarpX: Python on pyAMReX (#45251)
* WarpX: Python on pyAMReX

Long overdue update for WarpX: in 2024, we updated our Python
bindings to rely on the new pyAMReX package. This deprecates the old
`py-warpx` package and adds a new dependency and variant to WarpX.

Also deprecates old versions that we will not continue to support.

* Update Cloud/E4S Pipelines for WarpX

`py-warpx` is replaced by `warpx +python`
oneAPI does not support IPO/LTO: disable it for `py-amrex` even though
pybind11 strongly encourages it.
2024-08-18 21:14:04 -07:00
Rémi Lacroix
855a8476e4 Scotch: Fix sha256 for some older versions. (#44494)
Most likely caused by a change in Inria's Gitlab.
2024-08-18 21:18:00 +02:00
Auriane R.
d4a892f200 py-torch-nvidia-apex: Add 24.04.01 and variants from the readme (#45019)
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-08-18 21:16:58 +02:00
Harmen Stoppels
66e2836ba1 py-torchaudio: upperbound on cuda (#45773)
* py-torchaudio: upperbound on cuda

* actually get bound right

* make adam happy
2024-08-17 11:18:32 -06:00
Teague Sterling
52ab0c66fe xorgproto: new package (#45569)
* xorgproto: new package

Signed-off-by: Teague Sterling <teaguesterling@gmail.com>

* adding providers for xorgprotos

Signed-off-by: Teague Sterling <teaguesterling@gmail.com>

* Update var/spack/repos/builtin/packages/xorgproto/package.py

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

* Update etc/spack/defaults/packages.yaml

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

* Update etc/spack/defaults/packages.yaml

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

* Update var/spack/repos/builtin/packages/xorgproto/package.py

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

* Update var/spack/repos/builtin/packages/xorgproto/package.py

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

* Update var/spack/repos/builtin/packages/xorgproto/package.py

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

* Update var/spack/repos/builtin/packages/xorgproto/package.py

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

* [@spackbot] updating style on behalf of teaguesterling

* xorgproto: depends_on meson type build

---------

Signed-off-by: Teague Sterling <teaguesterling@gmail.com>
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2024-08-17 10:24:28 -06:00
Stephen Hudson
f316068b27 libEnsemble: add v1.4.2 (#45760) 2024-08-17 08:25:00 -05:00
Wouter Deconinck
553cc3b70a util/web.py: parse new GitLab JS dropdown links (#45764)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2024-08-17 09:02:03 +02:00
Chris Marsh
f0f9a16e4f esmf package: add (optional) Python bindings (#45504)
* Add `+python` variant
* `esmf` package installs Python bindings when `+python` is set

Note: this does not inherit `PythonPackage`, which would force an either/or
choice between the Makefile and Pip builder: it instantiates a
`PythonPipBuilder` as needed (when `+python` is set).
2024-08-16 15:02:50 -07:00
Greg Becker
9ec8eaa0d3 include_concrete: read from older env formats properly (#45766)
* include_concrete: read from older env formats properly
* spack env rm: fix logic for checking env includes
* regression test

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2024-08-16 20:40:41 +00:00
Adam J. Stewart
00182b19dc GDAL: add v3.9.2 (#45794) 2024-08-16 13:26:35 -07:00
James Smillie
cc7a29c55a Windows: fix stage cleaning for long paths (#45786)
Paths over 260 characters in length are not handled by `shutil.rmtree`
unless they use the extended-length path syntax (using a prefix of
"\\?\").

This fixes an issue where stage cleaning fails when paths in a stage
exceed the normal 260-character limit.

This indicates that other parts of the codebase should be examined/
refactored to handle long paths.
2024-08-16 11:16:13 -07:00
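
A self-contained sketch of the extended-length path workaround described above (Spack's actual change is in remove_linked_tree, shown in the diff further below):

    import os
    import shutil
    import sys

    def rmtree_long_path(path: str) -> None:
        # On Windows, the \\?\ prefix opts into extended-length paths (> 260 characters).
        if sys.platform == "win32" and not path.startswith("\\\\?\\"):
            path = "\\\\?\\" + os.path.abspath(path)
        shutil.rmtree(path)
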
eugeneswalker
61b0f4f84d e4s ci: add wrf (#45719)
* e4s ci: add wrf

* e4s ci: also add wrf companion/adjacent package wps

* e4s oneapi: comment out wps: %oneapi not supported?
2024-08-16 09:57:46 -07:00
Massimiliano Culpo
fe3bfa482e Run unit test in parallel again in CI (#45793)
The --trace-config option was failing for Linux unit tests,
so we were running them serially.
2024-08-16 16:11:08 +00:00
Paul R. C. Kent
e5f53a6250 py-lxml: add v5.2.2 (#45785)
* add v5.2.2

* py-lxml dependency improvements

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

---------

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2024-08-16 10:04:29 -06:00
bk
a7e8080784 harfbuzz: enable freetype in MesonBuilder (#45654)
* harfbuzz: enable freetype in MesonBuilder to facilitate depends_on("freetype")

* spack style fix

* freetype is defined as a depends_on(), so set as enabled in MesonBuilder rather than an option/flag/variant

* add back depends_on($lang) lines for new test api

* keep original order
2024-08-16 10:59:27 -05:00
Wouter Deconinck
f5e934f2dc *: avoid js redirect for homepages on sourceforge (#45783) 2024-08-16 17:35:04 +02:00
Harmen Stoppels
54b57c5d1e Revert "Change environment modifications to escape with double quotes (#36789)" (#42780)
This reverts commit 690394fabc, as it causes arbitrary code execution.
2024-08-16 17:32:48 +02:00
Harmen Stoppels
725ef8f5c8 oci: support --only=package (#45775)
Previously `spack buildcache push --only=package` errored in the OCI
case, but it's been requested that OCI can be used as pure storage w/o
the need for runnable container images.

This commit makes it so that

1. manifests refer only to runtime dependencies that were selected to be
   pushed
2. failure to upload a blob among the selected specs does not prevent a
   manifest/tag from being created for dependents: they just don't refer to
   the missing blob as a layer/dependency

This fixes the following issues:

1. dependents of non-redistributable specs can now be pushed to oci
   build caches without error
2. failure to upload one tarball does not cause cascading failures for
   dependents whose tarballs do upload successfully -- so it's better
   best-effort behavior
3. for some people uploading with deps caused a massive amount of
   fetches of their manifests (which certain registries count as a
   download of an image, even though their layers are not fetched) --
   being able to specify --only=package reduces the number of fetches.
2024-08-16 15:24:04 +02:00
Harmen Stoppels
f51a9a9107 stage: provide mirrors in constructor (#45792)
Stage objects create mirrors ad-hoc from current config.

- There is no way to prevent mirrors from being used
- There is no way to restrict mirrors to source/binary, which is of
  course context dependent.
- Stage is also used in build caches, where iterating over mirrors is
  already implemented differently, and wouldn't work anyway because it's
  source only, and in particular it makes no sense for OCI build caches.

This commit:

1. Injects the sensible mirrors into the stage object from contexts
   where it is relevant
2. Separates mirrors from cache, so that w/o mirrors download cache can
   still be used
2024-08-16 15:21:47 +02:00
Massimiliano Culpo
4f0e336ed0 Remove "test_foreground_background" 2024-08-16 14:22:59 +02:00
Massimiliano Culpo
64774f3015 Skip test_foreground_background + other minor cleanups
The test_foreground_background unit test has been marked
xfail for a while, meaning:
- Nobody looks at the results of the test
- It still runs every time

That test happens to hang frequently on some Apple M1 machines I have access to,
so here I mark it as skip.

Also went through other xfailing and skipped tests, and applied minor changes.
2024-08-16 14:22:59 +02:00
Massimiliano Culpo
4e9fbca033 Clean up test/cmd/ci.py (#45774)
* Use absolute paths instead of https:// fake mirrors (this speeds up tests by avoiding requests)
* Add a fixture to gather in a single place code that is copy/pasted in a lot of tests
* General clean-up of tests and repeated code

Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
2024-08-16 13:42:01 +02:00
AcriusWinter
a2fd26bbcc rccl: new test API (#45241)
* rccl: new test API
* rccl: stand-alone test docstring tweak

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2024-08-15 22:03:05 -06:00
AcriusWinter
067da09b46 hypre: get rid of use of deprecated run_test method (#45762)
* hypre: get rid of deprecated run_test method
* hypre: use mkdirp directly
* hypre: use install() for ij for addition of permissions fix

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2024-08-15 20:37:39 -06:00
AcriusWinter
b1b0c108bb parsec: old to new test API (#45122)
* parsec: old to new test API
* parsec: restore stand-alone test subparts; preliminary test build fixes

---------

Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
2024-08-15 18:22:55 -06:00
AcriusWinter
c624088a7b n2p2: convert from old to new test API (#45141)
* n2p2: convert from old to new test API
* n2p2: Enhance stand-alone testing checks to reduce unnecessary processing

---------

Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
2024-08-15 15:47:55 -06:00
AcriusWinter
a965c7c5c8 Open3d: Reinstate re-use of stand-alone test method (#45755)
* open3d: Reinstate re-use of stand-alone test method
* open3d: ignore test_open3d_import when ~python

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2024-08-15 13:52:31 -07:00
Greg Sjaardema
904d43f0e6 seacas: new version (#45777)
Update fmt dependency to support fmt@11:
Use an adios2 release instead of master
New release of seacas
2024-08-15 15:21:26 -05:00
AcriusWinter
10b6d7282a Cache extra test sources update (#45493)
* stand-alone test API update: self.cache_extra_test_sources(...) -> cache_extra_test_sources(self, ...)
* superlu: switch to new cache_extra_test_sources API

---------

Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
2024-08-15 12:55:16 -06:00
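
A hedged sketch of the API change noted above: the helper is now a free function taking the package as its first argument (assuming it is available in the package module namespace):

    @run_after("install")
    def setup_standalone_tests(self):
        # previously: self.cache_extra_test_sources(["examples"])
        cache_extra_test_sources(self, ["examples"])
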
Richard Berger
7112a49d1e libmesh: explicitly disable metis in ~metis case (#45781) 2024-08-15 11:28:21 -07:00
Mikael Simberg
b11bd6b745 pika: add 0.27.0 (#45772) 2024-08-15 11:11:19 -07:00
Derek Ryan Strong
4d0b04cf34 hwloc: add v2.11.1 (#45767)
* Add hwloc v2.11.1
2024-08-15 11:10:10 -07:00
Massimiliano Culpo
165c171659 Update archspec to v0.2.5-dev (7e6740012b897ae4a950f0bba7e9726b767e921f) (#45721) 2024-08-15 19:49:07 +02:00
David Gardner
aa3c62d936 limit patch range (#45756) 2024-08-15 15:08:24 +02:00
Alex Richert
cba2fe914c g2: add 3.5.0 and 3.5.1 (#45750) 2024-08-14 23:43:40 -06:00
psakievich
1b82779087 Add options for sparse checkout in GitFetcher (#45473)
* Add options for sparse checkout in GitFetcher

Newer versions of git have a beta feature called sparse checkout
that allows users to check out a portion of a large repo.

This feature will be ideal for monolithic repo projects that want to
model their infrastructure via spack.  This PR implements an addition
to the GitFetcher that allows users to add a `git_sparse_paths`
attribute to package classes or versions which will then use sparse
checkout on those directories/files for the package.

* Style

* Split git clone into multiple functions

* Add sparse-checkout impl

* Internalize src clone functions

* Docs

* Adding sparse clone test

* Add test for partial clone

* [@spackbot] updating style on behalf of psakievich

* Small fixes

* Restore default branch status

* Fix attributes for package

* Update lib/spack/docs/packaging_guide.rst

Co-authored-by: Matthew Mosby <44072882+mdmosby@users.noreply.github.com>

* Extend unit test to multiple git versions

* style

---------

Co-authored-by: psakievich <psakievich@users.noreply.github.com>
Co-authored-by: Matthew Mosby <44072882+mdmosby@users.noreply.github.com>
2024-08-15 05:28:34 +00:00
Evan Parker
55b1b0f3f0 py-fortranformat: update to version 2.0.0 (#45748)
* Feature update py-fortranformat
  Add more recent versions of py-fortranformat. The currently included release (0.2.5) is from 2014. I've added the latest point release of each of the major versions from the last 4 years.
* update homepage
2024-08-14 23:14:02 -06:00
AcriusWinter
4606c8ed68 magma: old to new test API (#45140)
* magma: old to new test API
* magma: simplify stand-alone test method/part docstrings/purposes 

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2024-08-14 19:37:12 -06:00
AcriusWinter
dd53eeb322 libpressio: old to new test API (#45151)
* libpressio: old to new test API
* libpressio: minor stand-alone test simplifications

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2024-08-14 15:59:08 -07:00
Greg Becker
f42486b684 spack info: use spec fullname (#45753) 2024-08-14 22:00:00 +00:00
Alex Richert
44ecea3813 g2tmpl: add 1.13.0 (#45751) 2024-08-14 15:25:57 -06:00
364 changed files with 5575 additions and 4832 deletions

View File

@@ -1263,6 +1263,11 @@ Git fetching supports the following parameters to ``version``:
option ``--depth 1`` will be used if the version of git and the specified
transport protocol support it, and ``--single-branch`` will be used if the
version of git supports it.
* ``git_sparse_paths``: Use ``sparse-checkout`` to only clone these relative paths.
This feature requires ``git`` to be version ``2.25.0`` or later but is useful for
large repositories that have separate portions that can be built independently.
If the paths provided are directories, then all subdirectories and associated files
will also be cloned.
Only one of ``tag``, ``branch``, or ``commit`` can be used at a time.
@@ -1361,6 +1366,41 @@ Submodules
For more information about git submodules see the manpage of git: ``man
git-submodule``.
Sparse-Checkout
^^^^^^^^^^^^^^^

You can supply ``git_sparse_paths`` at the package or version level to use git's
sparse-checkout feature. This will clone only the paths specified in the
``git_sparse_paths`` attribute for the package, along with the files in the top-level directory.
This feature allows you to clone only what you need from a large repository.
Note that this is a newer git feature and requires git ``2.25.0`` or greater.
If ``git_sparse_paths`` is supplied and the git version is too old,
a warning will be issued and the package will fall back to the standard cloning operations instead.
``git_sparse_paths`` should be supplied as a list of paths, a callable function for versions,
or a more complex package attribute using the ``@property`` decorator. The return value should be
a list for a callable implementation of ``git_sparse_paths``.
.. code-block:: python

   def sparse_path_function(package):
       """A callable function that can be used inside a version."""
       # paths can be directories or files; all subdirectories and files are included
       paths = ["doe", "rae", "me/file.cpp"]
       if package.spec.version > Version("1.2.0"):
           paths.extend(["fae"])
       return paths

   class MyPackage(Package):
       # can also be a package attribute that will be used if not specified in versions
       git_sparse_paths = ["doe", "rae"]

       # use the package attribute
       version("1.0.0")
       version("1.1.0")
       # use the function
       version("1.1.5", git_sparse_paths=sparse_path_function)
       version("1.2.0", git_sparse_paths=sparse_path_function)
       version("1.2.5", git_sparse_paths=sparse_path_function)
.. _github-fetch:
^^^^^^

View File

@@ -18,7 +18,7 @@
* Homepage: https://pypi.python.org/pypi/archspec
* Usage: Labeling, comparison and detection of microarchitectures
* Version: 0.2.4 (commit 48b92512b9ce203ded0ebd1ac41b42593e931f7c)
* Version: 0.2.5-dev (commit 7e6740012b897ae4a950f0bba7e9726b767e921f)
astunparse
----------------

View File

@@ -47,7 +47,11 @@ def decorator(factory):
def partial_uarch(
name: str = "", vendor: str = "", features: Optional[Set[str]] = None, generation: int = 0
name: str = "",
vendor: str = "",
features: Optional[Set[str]] = None,
generation: int = 0,
cpu_part: str = "",
) -> Microarchitecture:
"""Construct a partial microarchitecture, from information gathered during system scan."""
return Microarchitecture(
@@ -57,6 +61,7 @@ def partial_uarch(
features=features or set(),
compilers={},
generation=generation,
cpu_part=cpu_part,
)
@@ -90,6 +95,7 @@ def proc_cpuinfo() -> Microarchitecture:
return partial_uarch(
vendor=_canonicalize_aarch64_vendor(data),
features=_feature_set(data, key="Features"),
cpu_part=data.get("CPU part", ""),
)
if architecture in (PPC64LE, PPC64):
@@ -345,6 +351,10 @@ def sorting_fn(item):
generic_candidates = [c for c in candidates if c.vendor == "generic"]
best_generic = max(generic_candidates, key=sorting_fn)
# Relevant for AArch64. Filter on "cpu_part" if we have any match
if info.cpu_part != "" and any(c for c in candidates if info.cpu_part == c.cpu_part):
candidates = [c for c in candidates if info.cpu_part == c.cpu_part]
# Filter the candidates to be descendant of the best generic candidate.
# This is to avoid that the lack of a niche feature that can be disabled
# from e.g. BIOS prevents detection of a reasonably performant architecture

View File

@@ -2,9 +2,7 @@
# Archspec Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Types and functions to manage information
on CPU microarchitectures.
"""
"""Types and functions to manage information on CPU microarchitectures."""
import functools
import platform
import re
@@ -65,21 +63,24 @@ class Microarchitecture:
passed in as argument above.
* versions: versions that support this micro-architecture.
generation (int): generation of the micro-architecture, if
relevant.
generation (int): generation of the micro-architecture, if relevant.
cpu_part (str): cpu part of the architecture, if relevant.
"""
# pylint: disable=too-many-arguments
# pylint: disable=too-many-arguments,too-many-instance-attributes
#: Aliases for micro-architecture's features
feature_aliases = FEATURE_ALIASES
def __init__(self, name, parents, vendor, features, compilers, generation=0):
def __init__(self, name, parents, vendor, features, compilers, generation=0, cpu_part=""):
self.name = name
self.parents = parents
self.vendor = vendor
self.features = features
self.compilers = compilers
# Only relevant for PowerPC
self.generation = generation
# Only relevant for AArch64
self.cpu_part = cpu_part
# Cache the ancestor computation
self._ancestors = None
@@ -111,6 +112,7 @@ def __eq__(self, other):
and self.parents == other.parents # avoid ancestors here
and self.compilers == other.compilers
and self.generation == other.generation
and self.cpu_part == other.cpu_part
)
@coerce_target_names
@@ -143,7 +145,8 @@ def __repr__(self):
cls_name = self.__class__.__name__
fmt = (
cls_name + "({0.name!r}, {0.parents!r}, {0.vendor!r}, "
"{0.features!r}, {0.compilers!r}, {0.generation!r})"
"{0.features!r}, {0.compilers!r}, generation={0.generation!r}, "
"cpu_part={0.cpu_part!r})"
)
return fmt.format(self)
@@ -190,6 +193,7 @@ def to_dict(self):
"generation": self.generation,
"parents": [str(x) for x in self.parents],
"compilers": self.compilers,
"cpupart": self.cpu_part,
}
@staticmethod
@@ -202,6 +206,7 @@ def from_dict(data) -> "Microarchitecture":
features=set(data["features"]),
compilers=data.get("compilers", {}),
generation=data.get("generation", 0),
cpu_part=data.get("cpupart", ""),
)
def optimization_flags(self, compiler, version):
@@ -360,8 +365,11 @@ def fill_target_from_dict(name, data, targets):
features = set(values["features"])
compilers = values.get("compilers", {})
generation = values.get("generation", 0)
cpu_part = values.get("cpupart", "")
targets[name] = Microarchitecture(name, parents, vendor, features, compilers, generation)
targets[name] = Microarchitecture(
name, parents, vendor, features, compilers, generation=generation, cpu_part=cpu_part
)
known_targets = {}
data = archspec.cpu.schema.TARGETS_JSON["microarchitectures"]

View File

@@ -2225,10 +2225,14 @@
],
"nvhpc": [
{
"versions": "21.11:",
"versions": "21.11:23.8",
"name": "zen3",
"flags": "-tp {name}",
"warnings": "zen4 is not fully supported by nvhpc yet, falling back to zen3"
"warnings": "zen4 is not fully supported by nvhpc versions < 23.9, falling back to zen3"
},
{
"versions": "23.9:",
"flags": "-tp {name}"
}
]
}
@@ -2711,7 +2715,8 @@
"flags": "-mcpu=thunderx2t99"
}
]
}
},
"cpupart": "0x0af"
},
"a64fx": {
"from": ["armv8.2a"],
@@ -2779,7 +2784,8 @@
"flags": "-march=armv8.2-a+crc+crypto+fp16+sve"
}
]
}
},
"cpupart": "0x001"
},
"cortex_a72": {
"from": ["aarch64"],
@@ -2816,7 +2822,8 @@
"flags" : "-mcpu=cortex-a72"
}
]
}
},
"cpupart": "0xd08"
},
"neoverse_n1": {
"from": ["cortex_a72", "armv8.2a"],
@@ -2902,7 +2909,8 @@
"flags": "-tp {name}"
}
]
}
},
"cpupart": "0xd0c"
},
"neoverse_v1": {
"from": ["neoverse_n1", "armv8.4a"],
@@ -2926,8 +2934,6 @@
"lrcpc",
"dcpop",
"sha3",
"sm3",
"sm4",
"asimddp",
"sha512",
"sve",
@@ -3028,7 +3034,8 @@
"flags": "-tp {name}"
}
]
}
},
"cpupart": "0xd40"
},
"neoverse_v2": {
"from": ["neoverse_n1", "armv9.0a"],
@@ -3052,13 +3059,10 @@
"lrcpc",
"dcpop",
"sha3",
"sm3",
"sm4",
"asimddp",
"sha512",
"sve",
"asimdfhm",
"dit",
"uscat",
"ilrcpc",
"flagm",
@@ -3066,18 +3070,12 @@
"sb",
"dcpodp",
"sve2",
"sveaes",
"svepmull",
"svebitperm",
"svesha3",
"svesm4",
"flagm2",
"frint",
"svei8mm",
"svebf16",
"i8mm",
"bf16",
"dgh"
"bf16"
],
"compilers" : {
"gcc": [
@@ -3102,15 +3100,19 @@
"flags" : "-march=armv8.5-a+sve -mtune=cortex-a76"
},
{
"versions": "10.0:11.99",
"versions": "10.0:11.3.99",
"flags" : "-march=armv8.5-a+sve+sve2+i8mm+bf16 -mtune=cortex-a77"
},
{
"versions": "11.4:11.99",
"flags" : "-mcpu=neoverse-v2"
},
{
"versions": "12.0:12.99",
"versions": "12.0:12.2.99",
"flags" : "-march=armv9-a+i8mm+bf16 -mtune=cortex-a710"
},
{
"versions": "13.0:",
"versions": "12.3:",
"flags" : "-mcpu=neoverse-v2"
}
],
@@ -3145,7 +3147,113 @@
"flags": "-tp {name}"
}
]
}
},
"cpupart": "0xd4f"
},
"neoverse_n2": {
"from": ["neoverse_n1", "armv9.0a"],
"vendor": "ARM",
"features": [
"fp",
"asimd",
"evtstrm",
"aes",
"pmull",
"sha1",
"sha2",
"crc32",
"atomics",
"fphp",
"asimdhp",
"cpuid",
"asimdrdm",
"jscvt",
"fcma",
"lrcpc",
"dcpop",
"sha3",
"asimddp",
"sha512",
"sve",
"asimdfhm",
"uscat",
"ilrcpc",
"flagm",
"ssbs",
"sb",
"dcpodp",
"sve2",
"flagm2",
"frint",
"svei8mm",
"svebf16",
"i8mm",
"bf16"
],
"compilers" : {
"gcc": [
{
"versions": "4.8:5.99",
"flags": "-march=armv8-a"
},
{
"versions": "6:6.99",
"flags" : "-march=armv8.1-a"
},
{
"versions": "7.0:7.99",
"flags" : "-march=armv8.2-a -mtune=cortex-a72"
},
{
"versions": "8.0:8.99",
"flags" : "-march=armv8.4-a+sve -mtune=cortex-a72"
},
{
"versions": "9.0:9.99",
"flags" : "-march=armv8.5-a+sve -mtune=cortex-a76"
},
{
"versions": "10.0:10.99",
"flags" : "-march=armv8.5-a+sve+sve2+i8mm+bf16 -mtune=cortex-a77"
},
{
"versions": "11.0:",
"flags" : "-mcpu=neoverse-n2"
}
],
"clang" : [
{
"versions": "9.0:10.99",
"flags" : "-march=armv8.5-a+sve"
},
{
"versions": "11.0:13.99",
"flags" : "-march=armv8.5-a+sve+sve2+i8mm+bf16"
},
{
"versions": "14.0:15.99",
"flags" : "-march=armv9-a+i8mm+bf16"
},
{
"versions": "16.0:",
"flags" : "-mcpu=neoverse-n2"
}
],
"arm" : [
{
"versions": "23.04.0:",
"flags" : "-mcpu=neoverse-n2"
}
],
"nvhpc" : [
{
"versions": "23.3:",
"name": "neoverse-n1",
"flags": "-tp {name}"
}
]
},
"cpupart": "0xd49"
},
"m1": {
"from": ["armv8.4a"],
@@ -3211,7 +3319,8 @@
"flags" : "-mcpu=apple-m1"
}
]
}
},
"cpupart": "0x022"
},
"m2": {
"from": ["m1", "armv8.5a"],
@@ -3289,7 +3398,8 @@
"flags" : "-mcpu=apple-m2"
}
]
}
},
"cpupart": "0x032"
},
"arm": {
"from": [],

View File

@@ -52,6 +52,9 @@
}
}
}
},
"cpupart": {
"type": "string"
}
},
"required": [
@@ -107,4 +110,4 @@
"additionalProperties": false
}
}
}
}

View File

@@ -1624,6 +1624,12 @@ def remove_linked_tree(path):
shutil.rmtree(os.path.realpath(path), **kwargs)
os.unlink(path)
else:
if sys.platform == "win32":
# Adding this prefix allows shutil to remove long paths on windows
# https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=registry
long_path_pfx = "\\\\?\\"
if not path.startswith(long_path_pfx):
path = long_path_pfx + path
shutil.rmtree(path, **kwargs)

View File

@@ -84,20 +84,6 @@ def index_by(objects, *funcs):
return result
def caller_locals():
"""This will return the locals of the *parent* of the caller.
This allows a function to insert variables into its caller's
scope. Yes, this is some black magic, and yes it's useful
for implementing things like depends_on and provides.
"""
# Passing zero here skips line context for speed.
stack = inspect.stack(0)
try:
return stack[2][0].f_locals
finally:
del stack
def attr_setdefault(obj, name, value):
"""Like dict.setdefault, but for objects."""
if not hasattr(obj, name):

View File

@@ -10,6 +10,7 @@
import errno
import io
import multiprocessing
import multiprocessing.connection
import os
import re
import select

View File

@@ -42,6 +42,7 @@ def _search_duplicate_compilers(error_cls):
import inspect
import io
import itertools
import os
import pathlib
import pickle
import re
@@ -210,6 +211,11 @@ def _search_duplicate_compilers(error_cls):
group="configs", tag="CFG-PACKAGES", description="Sanity checks on packages.yaml", kwargs=()
)
#: Sanity checks on packages.yaml
config_repos = AuditClass(
group="configs", tag="CFG-REPOS", description="Sanity checks on repositories", kwargs=()
)
@config_packages
def _search_duplicate_specs_in_externals(error_cls):
@@ -367,6 +373,27 @@ def _ensure_all_virtual_packages_have_default_providers(error_cls):
]
@config_repos
def _ensure_no_folders_without_package_py(error_cls):
"""Check that we don't leave any folder without a package.py in repos"""
errors = []
for repository in spack.repo.PATH.repos:
missing = []
for entry in os.scandir(repository.packages_path):
if not entry.is_dir():
continue
package_py = pathlib.Path(entry.path) / spack.repo.package_file_name
if not package_py.exists():
missing.append(entry.path)
if missing:
summary = (
f"The '{repository.namespace}' repository misses a package.py file"
f" in the following folders"
)
errors.append(error_cls(summary=summary, details=[f"{x}" for x in missing]))
return errors
def _make_config_error(config_data, summary, error_cls):
s = io.StringIO()
s.write("Occurring in the following file:\n")
@@ -527,7 +554,7 @@ def _ensure_all_package_names_are_lowercase(pkgs, error_cls):
badname_regex, errors = re.compile(r"[_A-Z]"), []
for pkg_name in pkgs:
if badname_regex.search(pkg_name):
error_msg = "Package name '{}' is either lowercase or conatine '_'".format(pkg_name)
error_msg = f"Package name '{pkg_name}' should be lowercase and must not contain '_'"
errors.append(error_cls(error_msg, []))
return errors

View File

@@ -6,7 +6,6 @@
import codecs
import collections
import concurrent.futures
import contextlib
import copy
import hashlib
import io
@@ -25,7 +24,7 @@
import urllib.request
import warnings
from contextlib import closing
from typing import Dict, Generator, Iterable, List, NamedTuple, Optional, Set, Tuple, Union
from typing import Dict, Iterable, List, NamedTuple, Optional, Set, Tuple, Union
import llnl.util.filesystem as fsys
import llnl.util.lang
@@ -958,7 +957,7 @@ def _spec_files_from_cache(url: str):
raise ListMirrorSpecsError("Failed to get list of specs from {0}".format(url))
def generate_package_index(url: str, tmpdir: str, concurrency: int = 32):
def _url_generate_package_index(url: str, tmpdir: str, concurrency: int = 32):
"""Create or replace the build cache index on the given mirror. The
buildcache index contains an entry for each binary package under the
cache_prefix.
@@ -1119,7 +1118,7 @@ def _exists_in_buildcache(spec: Spec, tmpdir: str, out_url: str) -> ExistsInBuil
return ExistsInBuildcache(signed, unsigned, tarball)
def _upload_tarball_and_specfile(
def _url_upload_tarball_and_specfile(
spec: Spec, tmpdir: str, out_url: str, exists: ExistsInBuildcache, signing_key: Optional[str]
):
files = BuildcacheFiles(spec, tmpdir, out_url)
@@ -1154,49 +1153,146 @@ def _upload_tarball_and_specfile(
)
class Uploader:
def __init__(self, mirror: spack.mirror.Mirror, force: bool, update_index: bool):
self.mirror = mirror
self.force = force
self.update_index = update_index
self.tmpdir: str
self.executor: concurrent.futures.Executor
def __enter__(self):
self._tmpdir = tempfile.TemporaryDirectory(dir=spack.stage.get_stage_root())
self._executor = spack.util.parallel.make_concurrent_executor()
self.tmpdir = self._tmpdir.__enter__()
self.executor = self.executor = self._executor.__enter__()
return self
def __exit__(self, *args):
self._executor.__exit__(*args)
self._tmpdir.__exit__(*args)
def push_or_raise(self, specs: List[spack.spec.Spec]) -> List[spack.spec.Spec]:
skipped, errors = self.push(specs)
if errors:
raise PushToBuildCacheError(
f"Failed to push {len(errors)} specs to {self.mirror.push_url}:\n"
+ "\n".join(
f"Failed to push {_format_spec(spec)}: {error}" for spec, error in errors
)
)
return skipped
def push(
self, specs: List[spack.spec.Spec]
) -> Tuple[List[spack.spec.Spec], List[Tuple[spack.spec.Spec, BaseException]]]:
raise NotImplementedError
def tag(self, tag: str, roots: List[spack.spec.Spec]):
"""Make a list of selected specs together available under the given tag"""
pass
class OCIUploader(Uploader):
def __init__(
self,
mirror: spack.mirror.Mirror,
force: bool,
update_index: bool,
base_image: Optional[str],
) -> None:
super().__init__(mirror, force, update_index)
self.target_image = spack.oci.oci.image_from_mirror(mirror)
self.base_image = ImageReference.from_string(base_image) if base_image else None
def push(
self, specs: List[spack.spec.Spec]
) -> Tuple[List[spack.spec.Spec], List[Tuple[spack.spec.Spec, BaseException]]]:
skipped, base_images, checksums, upload_errors = _oci_push(
target_image=self.target_image,
base_image=self.base_image,
installed_specs_with_deps=specs,
force=self.force,
tmpdir=self.tmpdir,
executor=self.executor,
)
self._base_images = base_images
self._checksums = checksums
# only update index if any binaries were uploaded
if self.update_index and len(skipped) + len(upload_errors) < len(specs):
_oci_update_index(self.target_image, self.tmpdir, self.executor)
return skipped, upload_errors
def tag(self, tag: str, roots: List[spack.spec.Spec]):
tagged_image = self.target_image.with_tag(tag)
# _push_oci may not populate self._base_images if binaries were already in the registry
for spec in roots:
_oci_update_base_images(
base_image=self.base_image,
target_image=self.target_image,
spec=spec,
base_image_cache=self._base_images,
)
_oci_put_manifest(
self._base_images, self._checksums, tagged_image, self.tmpdir, None, None, *roots
)
class URLUploader(Uploader):
def __init__(
self,
mirror: spack.mirror.Mirror,
force: bool,
update_index: bool,
signing_key: Optional[str],
) -> None:
super().__init__(mirror, force, update_index)
self.url = mirror.push_url
self.signing_key = signing_key
def push(
self, specs: List[spack.spec.Spec]
) -> Tuple[List[spack.spec.Spec], List[Tuple[spack.spec.Spec, BaseException]]]:
return _url_push(
specs,
out_url=self.url,
force=self.force,
update_index=self.update_index,
signing_key=self.signing_key,
tmpdir=self.tmpdir,
executor=self.executor,
)
def make_uploader(
mirror: spack.mirror.Mirror,
force: bool = False,
update_index: bool = False,
signing_key: Optional[str] = None,
base_image: Optional[str] = None,
) -> Uploader:
"""Builder for the appropriate uploader based on the mirror type"""
if mirror.push_url.startswith("oci://"):
return OCIUploader(
mirror=mirror, force=force, update_index=update_index, base_image=base_image
)
else:
return URLUploader(
mirror=mirror, force=force, update_index=update_index, signing_key=signing_key
)
def _format_spec(spec: Spec) -> str:
return spec.cformat("{name}{@version}{/hash:7}")
@contextlib.contextmanager
def default_push_context() -> Generator[Tuple[str, concurrent.futures.Executor], None, None]:
with tempfile.TemporaryDirectory(
dir=spack.stage.get_stage_root()
) as tmpdir, spack.util.parallel.make_concurrent_executor() as executor:
yield tmpdir, executor
def push_or_raise(
specs: List[Spec],
out_url: str,
signing_key: Optional[str],
force: bool = False,
update_index: bool = False,
) -> List[Spec]:
"""Same as push, but raises an exception on error. Returns a list of skipped specs already
present in the build cache when force=False."""
skipped, errors = push(specs, out_url, signing_key, force, update_index)
if errors:
raise PushToBuildCacheError(
f"Failed to push {len(errors)} specs to {out_url}:\n"
+ "\n".join(f"Failed to push {_format_spec(spec)}: {error}" for spec, error in errors)
)
return skipped
def push(
specs: List[Spec],
out_url: str,
signing_key: Optional[str],
force: bool = False,
update_index: bool = False,
) -> Tuple[List[Spec], List[Tuple[Spec, BaseException]]]:
"""Pushes to the provided build cache, and returns a list of skipped specs that were already
present (when force=False). Does not raise on error."""
with default_push_context() as (tmpdir, executor):
return _push(specs, out_url, signing_key, force, update_index, tmpdir, executor)
class FancyProgress:
def __init__(self, total: int):
self.n = 0
@@ -1234,7 +1330,7 @@ def fail(self) -> None:
tty.info(f"{self.pre}Failed to push {self.pretty_spec}")
def _push(
def _url_push(
specs: List[Spec],
out_url: str,
signing_key: Optional[str],
@@ -1279,7 +1375,7 @@ def _push(
upload_futures = [
executor.submit(
_upload_tarball_and_specfile,
_url_upload_tarball_and_specfile,
spec,
tmpdir,
out_url,
@@ -1309,12 +1405,12 @@ def _push(
if signing_key:
keys_tmpdir = os.path.join(tmpdir, "keys")
os.mkdir(keys_tmpdir)
push_keys(out_url, keys=[signing_key], update_index=update_index, tmpdir=keys_tmpdir)
_url_push_keys(out_url, keys=[signing_key], update_index=update_index, tmpdir=keys_tmpdir)
if update_index:
index_tmpdir = os.path.join(tmpdir, "index")
os.mkdir(index_tmpdir)
generate_package_index(out_url, index_tmpdir)
_url_generate_package_index(out_url, index_tmpdir)
return skipped, errors
@@ -1431,12 +1527,9 @@ def _oci_put_manifest(
for s in expected_blobs:
# If a layer for a dependency has gone missing (due to removed manifest in the registry, a
# failed push, or a local forced uninstall), we cannot create a runnable container image.
# If an OCI registry is only used for storage, this is not a hard error, but for now we
# raise an exception unconditionally, until someone requests a more lenient behavior.
checksum = checksums.get(s.dag_hash())
if not checksum:
raise MissingLayerError(f"missing layer for {_format_spec(s)}")
config["rootfs"]["diff_ids"].append(str(checksum.uncompressed_digest))
if checksum:
config["rootfs"]["diff_ids"].append(str(checksum.uncompressed_digest))
# Set the environment variables
config["config"]["Env"] = [f"{k}={v}" for k, v in env.items()]
@@ -1481,6 +1574,7 @@ def _oci_put_manifest(
"size": checksums[s.dag_hash()].size,
}
for s in expected_blobs
if s.dag_hash() in checksums
),
],
}
@@ -1519,7 +1613,7 @@ def _oci_update_base_images(
)
def _push_oci(
def _oci_push(
*,
target_image: ImageReference,
base_image: Optional[ImageReference],
@@ -2645,7 +2739,7 @@ def get_keys(install=False, trust=False, force=False, mirrors=None):
)
def push_keys(
def _url_push_keys(
*mirrors: Union[spack.mirror.Mirror, str],
keys: List[str],
tmpdir: str,
@@ -3096,7 +3190,3 @@ class CannotListKeys(GenerateIndexError):
class PushToBuildCacheError(spack.error.SpackError):
"""Raised when unable to push objects to binary mirror"""
class MissingLayerError(spack.error.SpackError):
"""Raised when a required layer for a dependency is missing in an OCI registry."""

View File

@@ -4,6 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Common basic functions used through the spack.bootstrap package"""
import fnmatch
import importlib
import os.path
import re
import sys
@@ -28,7 +29,7 @@
def _python_import(module: str) -> bool:
try:
__import__(module)
importlib.import_module(module)
except ImportError:
return False
return True

View File

@@ -143,11 +143,7 @@ def _bootstrap_config_scopes() -> Sequence["spack.config.ConfigScope"]:
def _add_compilers_if_missing() -> None:
arch = spack.spec.ArchSpec.frontend_arch()
if not spack.compilers.compilers_for_arch(arch):
new_compilers = spack.compilers.find_new_compilers(
mixed_toolchain=sys.platform == "darwin"
)
if new_compilers:
spack.compilers.add_compilers_to_config(new_compilers)
spack.compilers.find_compilers()
@contextlib.contextmanager

View File

@@ -124,7 +124,7 @@ def _development_requirements() -> List[RequiredResponseType]:
# Ensure we trigger environment modifications if we have an environment
if BootstrapEnvironment.spack_yaml().exists():
with BootstrapEnvironment() as env:
env.update_syspath_and_environ()
env.load()
return [
_required_executable(

View File

@@ -457,9 +457,12 @@ def set_wrapper_variables(pkg, env):
env.set(SPACK_DEBUG_LOG_ID, pkg.spec.format("{name}-{hash:7}"))
env.set(SPACK_DEBUG_LOG_DIR, spack.main.spack_working_dir)
# Find ccache binary and hand it to build environment
if spack.config.get("config:ccache"):
# Enable ccache in the compiler wrapper
env.set(SPACK_CCACHE_BINARY, spack.util.executable.which_string("ccache", required=True))
else:
# Avoid cache pollution if a build system forces `ccache <compiler wrapper invocation>`.
env.set("CCACHE_DISABLE", "1")
# Gather information about various types of dependencies
link_deps = set(pkg.spec.traverse(root=False, deptype=("link")))

View File

@@ -85,28 +85,20 @@ def homepage(cls):
return "https://bioconductor.org/packages/" + cls.bioc
@lang.classproperty
def urls(cls):
def url(cls):
if cls.cran:
return [
return (
"https://cloud.r-project.org/src/contrib/"
+ f"{cls.cran}_{str(list(cls.versions)[0])}.tar.gz",
"https://cloud.r-project.org/src/contrib/Archive/{cls.cran}/"
+ f"{cls.cran}_{str(list(cls.versions)[0])}.tar.gz",
]
elif cls.bioc:
return [
"https://bioconductor.org/packages/release/bioc/src/contrib/"
+ f"{cls.bioc}_{str(list(cls.versions)[0])}.tar.gz",
"https://bioconductor.org/packages/release/data/annotation/src/contrib/"
+ f"{cls.bioc}_{str(list(cls.versions)[0])}.tar.gz",
]
else:
return [cls.url]
+ cls.cran
+ "_"
+ str(list(cls.versions)[0])
+ ".tar.gz"
)
@lang.classproperty
def list_url(cls):
if cls.cran:
return "https://cloud.r-project.org/src/contrib/"
return "https://cloud.r-project.org/src/contrib/Archive/" + cls.cran + "/"
@property
def git(self):

View File

@@ -12,6 +12,7 @@
from llnl.util import lang
import spack.build_environment
import spack.multimethod
#: Builder classes, as registered by the "builder" decorator
BUILDER_CLS = {}
@@ -295,7 +296,11 @@ def _decorator(fn):
return _decorator
class BuilderMeta(PhaseCallbacksMeta, type(collections.abc.Sequence)): # type: ignore
class BuilderMeta(
PhaseCallbacksMeta,
spack.multimethod.MultiMethodMeta,
type(collections.abc.Sequence), # type: ignore
):
pass

View File

@@ -9,11 +9,11 @@
import llnl.util.lang
from llnl.util.filesystem import mkdirp
from llnl.util.symlink import symlink
import spack.config
import spack.error
import spack.fetch_strategy
import spack.mirror
import spack.paths
import spack.util.file_cache
import spack.util.path
@@ -74,23 +74,6 @@ def store(self, fetcher, relative_dest):
mkdirp(os.path.dirname(dst))
fetcher.archive(dst)
def symlink(self, mirror_ref):
"""Symlink a human readible path in our mirror to the actual
storage location."""
cosmetic_path = os.path.join(self.root, mirror_ref.cosmetic_path)
storage_path = os.path.join(self.root, mirror_ref.storage_path)
relative_dst = os.path.relpath(storage_path, start=os.path.dirname(cosmetic_path))
if not os.path.exists(cosmetic_path):
if os.path.lexists(cosmetic_path):
# In this case the link itself exists but it is broken: remove
# it and recreate it (in order to fix any symlinks broken prior
# to https://github.com/spack/spack/pull/13908)
os.unlink(cosmetic_path)
mkdirp(os.path.dirname(cosmetic_path))
symlink(relative_dst, cosmetic_path)
#: Spack's local cache for downloaded source archives
FETCH_CACHE: Union[spack.fetch_strategy.FsCache, llnl.util.lang.Singleton] = (

View File

@@ -1382,8 +1382,10 @@ def push_to_build_cache(spec: spack.spec.Spec, mirror_url: str, sign_binaries: b
"""
tty.debug(f"Pushing to build cache ({'signed' if sign_binaries else 'unsigned'})")
signing_key = bindist.select_signing_key() if sign_binaries else None
mirror = spack.mirror.Mirror.from_url(mirror_url)
try:
bindist.push_or_raise([spec], out_url=mirror_url, signing_key=signing_key)
with bindist.make_uploader(mirror, signing_key=signing_key) as uploader:
uploader.push_or_raise([spec])
return True
except bindist.PushToBuildCacheError as e:
tty.error(f"Problem writing to {mirror_url}: {e}")
@@ -1433,10 +1435,6 @@ def copy_stage_logs_to_artifacts(job_spec: spack.spec.Spec, job_log_dir: str) ->
job_log_dir: path into which build log should be copied
"""
tty.debug(f"job spec: {job_spec}")
if not job_spec:
msg = f"Cannot copy stage logs: job spec ({job_spec}) is required"
tty.error(msg)
return
try:
pkg_cls = spack.repo.PATH.get_pkg_class(job_spec.name)

View File

@@ -4,6 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import argparse
import importlib
import os
import re
import sys
@@ -114,8 +115,8 @@ def get_module(cmd_name):
try:
# Try to import the command from the built-in directory
module_name = "%s.%s" % (__name__, pname)
module = __import__(module_name, fromlist=[pname, SETUP_PARSER, DESCRIPTION], level=0)
module_name = f"{__name__}.{pname}"
module = importlib.import_module(module_name)
tty.debug("Imported {0} from built-in commands".format(pname))
except ImportError:
module = spack.extensions.get_module(cmd_name)
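For context on the substitution above, a small comparison of the two import styles; this is plain standard-library behavior, not Spack-specific code:

import importlib

leaf = importlib.import_module("collections.abc")  # returns the submodule itself
top = __import__("collections.abc")                # returns the top-level package
assert leaf.__name__ == "collections.abc"
assert top.__name__ == "collections"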

View File

@@ -37,7 +37,6 @@
from spack import traverse
from spack.cmd import display_specs
from spack.cmd.common import arguments
from spack.oci.image import ImageReference
from spack.spec import Spec, save_dependency_specfiles
description = "create, download and install binary packages"
@@ -392,13 +391,8 @@ def push_fn(args):
else:
roots = spack.cmd.require_active_env(cmd_name="buildcache push").concrete_roots()
mirror: spack.mirror.Mirror = args.mirror
# Check if this is an OCI image.
try:
target_image = spack.oci.oci.image_from_mirror(mirror)
except ValueError:
target_image = None
mirror = args.mirror
assert isinstance(mirror, spack.mirror.Mirror)
push_url = mirror.push_url
@@ -409,14 +403,11 @@ def push_fn(args):
unsigned = not (args.key or args.signed)
# For OCI images, we require dependencies to be pushed for now.
if target_image:
if "dependencies" not in args.things_to_install:
tty.die("Dependencies must be pushed for OCI images.")
if not unsigned:
tty.warn(
"Code signing is currently not supported for OCI images. "
"Use --unsigned to silence this warning."
)
if mirror.push_url.startswith("oci://") and not unsigned:
tty.warn(
"Code signing is currently not supported for OCI images. "
"Use --unsigned to silence this warning."
)
unsigned = True
# Select a signing key, or None if unsigned.
@@ -447,49 +438,17 @@ def push_fn(args):
(s, PackageNotInstalledError("package not installed")) for s in not_installed
)
with bindist.default_push_context() as (tmpdir, executor):
if target_image:
base_image = ImageReference.from_string(args.base_image) if args.base_image else None
skipped, base_images, checksums, upload_errors = bindist._push_oci(
target_image=target_image,
base_image=base_image,
installed_specs_with_deps=specs,
force=args.force,
tmpdir=tmpdir,
executor=executor,
)
if upload_errors:
failed.extend(upload_errors)
# Apart from creating manifests for each individual spec, we allow users to create a
# separate image tag for all root specs and their runtime dependencies.
elif args.tag:
tagged_image = target_image.with_tag(args.tag)
# _push_oci may not populate base_images if binaries were already in the registry
for spec in roots:
bindist._oci_update_base_images(
base_image=base_image,
target_image=target_image,
spec=spec,
base_image_cache=base_images,
)
bindist._oci_put_manifest(
base_images, checksums, tagged_image, tmpdir, None, None, *roots
)
tty.info(f"Tagged {tagged_image}")
else:
skipped, upload_errors = bindist._push(
specs,
out_url=push_url,
force=args.force,
update_index=args.update_index,
signing_key=signing_key,
tmpdir=tmpdir,
executor=executor,
)
failed.extend(upload_errors)
with bindist.make_uploader(
mirror=mirror,
force=args.force,
update_index=args.update_index,
signing_key=signing_key,
base_image=args.base_image,
) as uploader:
skipped, upload_errors = uploader.push(specs=specs)
failed.extend(upload_errors)
if not upload_errors and args.tag:
uploader.tag(args.tag, roots)
if skipped:
if len(specs) == 1:
@@ -522,13 +481,6 @@ def push_fn(args):
),
)
# Update the OCI index if requested
if target_image and len(skipped) < len(specs) and args.update_index:
with tempfile.TemporaryDirectory(
dir=spack.stage.get_stage_root()
) as tmpdir, spack.util.parallel.make_concurrent_executor() as executor:
bindist._oci_update_index(target_image, tmpdir, executor)
def install_fn(args):
"""install from a binary package"""
@@ -816,7 +768,7 @@ def update_index(mirror: spack.mirror.Mirror, update_keys=False):
url = mirror.push_url
with tempfile.TemporaryDirectory(dir=spack.stage.get_stage_root()) as tmpdir:
bindist.generate_package_index(url, tmpdir)
bindist._url_generate_package_index(url, tmpdir)
if update_keys:
keys_url = url_util.join(

View File

@@ -50,6 +50,7 @@ def setup_parser(subparser):
default=lambda: spack.config.default_modify_scope("compilers"),
help="configuration scope to modify",
)
arguments.add_common_arguments(find_parser, ["jobs"])
# Remove
remove_parser = sp.add_parser("remove", aliases=["rm"], help="remove compiler by spec")
@@ -78,25 +79,21 @@ def setup_parser(subparser):
def compiler_find(args):
"""Search either $PATH or a list of paths OR MODULES for compilers and
add them to Spack's configuration.
"""
# None signals spack.compiler.find_compilers to use its default logic
paths = args.add_paths or None
# Below scope=None because we want new compilers that don't appear
# in any other configuration.
new_compilers = spack.compilers.find_new_compilers(
paths, scope=None, mixed_toolchain=args.mixed_toolchain
new_compilers = spack.compilers.find_compilers(
path_hints=paths,
scope=args.scope,
mixed_toolchain=args.mixed_toolchain,
max_workers=args.jobs,
)
if new_compilers:
spack.compilers.add_compilers_to_config(new_compilers, scope=args.scope)
n = len(new_compilers)
s = "s" if n > 1 else ""
config = spack.config.CONFIG
filename = config.get_config_filename(args.scope, "compilers")
tty.msg("Added %d new compiler%s to %s" % (n, s, filename))
colify(reversed(sorted(c.spec.display_str for c in new_compilers)), indent=4)
filename = spack.config.CONFIG.get_config_filename(args.scope, "compilers")
tty.msg(f"Added {n:d} new compiler{s} to {filename}")
compiler_strs = sorted(f"{c.spec.name}@{c.spec.version}" for c in new_compilers)
colify(reversed(compiler_strs), indent=4)
else:
tty.msg("Found no new compilers")
tty.msg("Compilers are defined in the following files:")

View File

@@ -6,6 +6,7 @@
import re
import sys
import urllib.parse
from typing import List
import llnl.util.tty as tty
from llnl.util.filesystem import mkdirp
@@ -14,9 +15,15 @@
import spack.stage
import spack.util.web
from spack.spec import Spec
from spack.url import UndetectableNameError, UndetectableVersionError, parse_name, parse_version
from spack.url import (
UndetectableNameError,
UndetectableVersionError,
find_versions_of_archive,
parse_name,
parse_version,
)
from spack.util.editor import editor
from spack.util.executable import ProcessError, which
from spack.util.executable import which
from spack.util.format import get_version_lines
from spack.util.naming import mod_to_class, simplify_name, valid_fully_qualified_module_name
@@ -89,14 +96,20 @@ class BundlePackageTemplate:
url_def = " # There is no URL since there is no code to download."
body_def = " # There is no need for install() since there is no code."
def __init__(self, name, versions):
def __init__(self, name: str, versions, languages: List[str]):
self.name = name
self.class_name = mod_to_class(name)
self.versions = versions
self.languages = languages
def write(self, pkg_path):
"""Writes the new package file."""
all_deps = [f' depends_on("{lang}", type="build")' for lang in self.languages]
if all_deps and self.dependencies:
all_deps.append("")
all_deps.append(self.dependencies)
# Write out a template for the file
with open(pkg_path, "w") as pkg_file:
pkg_file.write(
@@ -106,7 +119,7 @@ def write(self, pkg_path):
base_class_name=self.base_class_name,
url_def=self.url_def,
versions=self.versions,
dependencies=self.dependencies,
dependencies="\n".join(all_deps),
body_def=self.body_def,
)
)
@@ -125,8 +138,8 @@ def install(self, spec, prefix):
url_line = ' url = "{url}"'
def __init__(self, name, url, versions):
super().__init__(name, versions)
def __init__(self, name, url, versions, languages: List[str]):
super().__init__(name, versions, languages)
self.url_def = self.url_line.format(url=url)
@@ -214,13 +227,13 @@ def luarocks_args(self):
args = []
return args"""
def __init__(self, name, url, *args, **kwargs):
def __init__(self, name, url, versions, languages: List[str]):
# If the user provided `--name lua-lpeg`, don't rename it lua-lua-lpeg
if not name.startswith("lua-"):
# Make it more obvious that we are renaming the package
tty.msg("Changing package name from {0} to lua-{0}".format(name))
name = "lua-{0}".format(name)
super().__init__(name, url, *args, **kwargs)
super().__init__(name, url, versions, languages)
class MesonPackageTemplate(PackageTemplate):
@@ -321,14 +334,14 @@ class RacketPackageTemplate(PackageTemplate):
# subdirectory = None
"""
def __init__(self, name, url, *args, **kwargs):
def __init__(self, name, url, versions, languages: List[str]):
# If the user provided `--name rkt-scribble`, don't rename it rkt-rkt-scribble
if not name.startswith("rkt-"):
# Make it more obvious that we are renaming the package
tty.msg("Changing package name from {0} to rkt-{0}".format(name))
name = "rkt-{0}".format(name)
self.body_def = self.body_def.format(name[4:])
super().__init__(name, url, *args, **kwargs)
super().__init__(name, url, versions, languages)
class PythonPackageTemplate(PackageTemplate):
@@ -361,7 +374,7 @@ def config_settings(self, spec, prefix):
settings = {}
return settings"""
def __init__(self, name, url, *args, **kwargs):
def __init__(self, name, url, versions, languages: List[str]):
# If the user provided `--name py-numpy`, don't rename it py-py-numpy
if not name.startswith("py-"):
# Make it more obvious that we are renaming the package
@@ -415,7 +428,7 @@ def __init__(self, name, url, *args, **kwargs):
+ self.url_line
)
super().__init__(name, url, *args, **kwargs)
super().__init__(name, url, versions, languages)
class RPackageTemplate(PackageTemplate):
@@ -434,7 +447,7 @@ def configure_args(self):
args = []
return args"""
def __init__(self, name, url, *args, **kwargs):
def __init__(self, name, url, versions, languages: List[str]):
# If the user provided `--name r-rcpp`, don't rename it r-r-rcpp
if not name.startswith("r-"):
# Make it more obvious that we are renaming the package
@@ -454,7 +467,7 @@ def __init__(self, name, url, *args, **kwargs):
if bioc:
self.url_line = ' url = "{0}"\n' ' bioc = "{1}"'.format(url, r_name)
super().__init__(name, url, *args, **kwargs)
super().__init__(name, url, versions, languages)
class PerlmakePackageTemplate(PackageTemplate):
@@ -474,14 +487,14 @@ def configure_args(self):
args = []
return args"""
def __init__(self, name, *args, **kwargs):
def __init__(self, name, url, versions, languages: List[str]):
# If the user provided `--name perl-cpp`, don't rename it perl-perl-cpp
if not name.startswith("perl-"):
# Make it more obvious that we are renaming the package
tty.msg("Changing package name from {0} to perl-{0}".format(name))
name = "perl-{0}".format(name)
super().__init__(name, *args, **kwargs)
super().__init__(name, url, versions, languages)
class PerlbuildPackageTemplate(PerlmakePackageTemplate):
@@ -506,7 +519,7 @@ class OctavePackageTemplate(PackageTemplate):
# FIXME: Add additional dependencies if required.
# depends_on("octave-foo", type=("build", "run"))"""
def __init__(self, name, *args, **kwargs):
def __init__(self, name, url, versions, languages: List[str]):
# If the user provided `--name octave-splines`, don't rename it
# octave-octave-splines
if not name.startswith("octave-"):
@@ -514,7 +527,7 @@ def __init__(self, name, *args, **kwargs):
tty.msg("Changing package name from {0} to octave-{0}".format(name))
name = "octave-{0}".format(name)
super().__init__(name, *args, **kwargs)
super().__init__(name, url, versions, languages)
class RubyPackageTemplate(PackageTemplate):
@@ -534,7 +547,7 @@ def build(self, spec, prefix):
# FIXME: If not needed delete this function
pass"""
def __init__(self, name, *args, **kwargs):
def __init__(self, name, url, versions, languages: List[str]):
# If the user provided `--name ruby-numpy`, don't rename it
# ruby-ruby-numpy
if not name.startswith("ruby-"):
@@ -542,7 +555,7 @@ def __init__(self, name, *args, **kwargs):
tty.msg("Changing package name from {0} to ruby-{0}".format(name))
name = "ruby-{0}".format(name)
super().__init__(name, *args, **kwargs)
super().__init__(name, url, versions, languages)
class MakefilePackageTemplate(PackageTemplate):
@@ -580,14 +593,14 @@ def configure_args(self, spec, prefix):
args = []
return args"""
def __init__(self, name, *args, **kwargs):
def __init__(self, name, url, versions, languages: List[str]):
# If the user provided `--name py-pyqt4`, don't rename it py-py-pyqt4
if not name.startswith("py-"):
# Make it more obvious that we are renaming the package
tty.msg("Changing package name from {0} to py-{0}".format(name))
name = "py-{0}".format(name)
super().__init__(name, *args, **kwargs)
super().__init__(name, url, versions, languages)
templates = {
@@ -658,8 +671,48 @@ def setup_parser(subparser):
)
class BuildSystemGuesser:
"""An instance of BuildSystemGuesser provides a callable object to be used
#: C file extensions
C_EXT = {".c"}
#: C++ file extensions
CXX_EXT = {
".C",
".c++",
".cc",
".ccm",
".cpp",
".CPP",
".cxx",
".h++",
".hh",
".hpp",
".hxx",
".inl",
".ipp",
".ixx",
".tcc",
".tpp",
}
#: Fortran file extensions
FORTRAN_EXT = {
".f77",
".F77",
".f90",
".F90",
".f95",
".F95",
".f",
".F",
".for",
".FOR",
".ftn",
".FTN",
}
class BuildSystemAndLanguageGuesser:
"""An instance of BuildSystemAndLanguageGuesser provides a callable object to be used
during ``spack create``. By passing this object to ``spack checksum``, we
can take a peek at the fetched tarball and discern the build system it uses
"""
@@ -667,81 +720,119 @@ class BuildSystemGuesser:
def __init__(self):
"""Sets the default build system."""
self.build_system = "generic"
self._c = False
self._cxx = False
self._fortran = False
def __call__(self, stage, url):
# List of files in the archive ordered by their depth in the directory tree.
self._file_entries: List[str] = []
def __call__(self, archive: str, url: str) -> None:
"""Try to guess the type of build system used by a project based on
the contents of its archive or the URL it was downloaded from."""
if url is not None:
# Most octave extensions are hosted on Octave-Forge:
# https://octave.sourceforge.net/index.html
# They all have the same base URL.
if "downloads.sourceforge.net/octave/" in url:
self.build_system = "octave"
return
if url.endswith(".gem"):
self.build_system = "ruby"
return
if url.endswith(".whl") or ".whl#" in url:
self.build_system = "python"
return
if url.endswith(".rock"):
self.build_system = "lua"
return
# A list of clues that give us an idea of the build system a package
# uses. If the regular expression matches a file contained in the
# archive, the corresponding build system is assumed.
# NOTE: Order is important here. If a package supports multiple
# build systems, we choose the first match in this list.
clues = [
(r"/CMakeLists\.txt$", "cmake"),
(r"/NAMESPACE$", "r"),
(r"/Cargo\.toml$", "cargo"),
(r"/go\.mod$", "go"),
(r"/configure$", "autotools"),
(r"/configure\.(in|ac)$", "autoreconf"),
(r"/Makefile\.am$", "autoreconf"),
(r"/pom\.xml$", "maven"),
(r"/SConstruct$", "scons"),
(r"/waf$", "waf"),
(r"/pyproject.toml", "python"),
(r"/setup\.(py|cfg)$", "python"),
(r"/WORKSPACE$", "bazel"),
(r"/Build\.PL$", "perlbuild"),
(r"/Makefile\.PL$", "perlmake"),
(r"/.*\.gemspec$", "ruby"),
(r"/Rakefile$", "ruby"),
(r"/setup\.rb$", "ruby"),
(r"/.*\.pro$", "qmake"),
(r"/.*\.rockspec$", "lua"),
(r"/(GNU)?[Mm]akefile$", "makefile"),
(r"/DESCRIPTION$", "octave"),
(r"/meson\.build$", "meson"),
(r"/configure\.py$", "sip"),
]
# Peek inside the compressed file.
if stage.archive_file.endswith(".zip") or ".zip#" in stage.archive_file:
if archive.endswith(".zip") or ".zip#" in archive:
try:
unzip = which("unzip")
output = unzip("-lq", stage.archive_file, output=str)
except ProcessError:
assert unzip is not None
output = unzip("-lq", archive, output=str)
except Exception:
output = ""
else:
try:
tar = which("tar")
output = tar("--exclude=*/*/*", "-tf", stage.archive_file, output=str)
except ProcessError:
assert tar is not None
output = tar("tf", archive, output=str)
except Exception:
output = ""
lines = output.splitlines()
self._file_entries[:] = output.splitlines()
# Determine the build system based on the files contained
# in the archive.
for pattern, bs in clues:
if any(re.search(pattern, line) for line in lines):
self.build_system = bs
break
# Files closest to the root should be considered first when determining build system.
self._file_entries.sort(key=lambda p: p.count("/"))
self._determine_build_system(url)
self._determine_language()
def _determine_build_system(self, url: str) -> None:
# Most octave extensions are hosted on Octave-Forge:
# https://octave.sourceforge.net/index.html
# They all have the same base URL.
if "downloads.sourceforge.net/octave/" in url:
self.build_system = "octave"
elif url.endswith(".gem"):
self.build_system = "ruby"
elif url.endswith(".whl") or ".whl#" in url:
self.build_system = "python"
elif url.endswith(".rock"):
self.build_system = "lua"
elif self._file_entries:
# A list of clues that give us an idea of the build system a package
# uses. If the regular expression matches a file contained in the
# archive, the corresponding build system is assumed.
# NOTE: Order is important here. If a package supports multiple
# build systems, we choose the first match in this list.
clues = [
(re.compile(pattern), build_system)
for pattern, build_system in (
(r"/CMakeLists\.txt$", "cmake"),
(r"/NAMESPACE$", "r"),
(r"/Cargo\.toml$", "cargo"),
(r"/go\.mod$", "go"),
(r"/configure$", "autotools"),
(r"/configure\.(in|ac)$", "autoreconf"),
(r"/Makefile\.am$", "autoreconf"),
(r"/pom\.xml$", "maven"),
(r"/SConstruct$", "scons"),
(r"/waf$", "waf"),
(r"/pyproject.toml", "python"),
(r"/setup\.(py|cfg)$", "python"),
(r"/WORKSPACE$", "bazel"),
(r"/Build\.PL$", "perlbuild"),
(r"/Makefile\.PL$", "perlmake"),
(r"/.*\.gemspec$", "ruby"),
(r"/Rakefile$", "ruby"),
(r"/setup\.rb$", "ruby"),
(r"/.*\.pro$", "qmake"),
(r"/.*\.rockspec$", "lua"),
(r"/(GNU)?[Mm]akefile$", "makefile"),
(r"/DESCRIPTION$", "octave"),
(r"/meson\.build$", "meson"),
(r"/configure\.py$", "sip"),
)
]
# Determine the build system based on the files contained in the archive.
for file in self._file_entries:
for pattern, build_system in clues:
if pattern.search(file):
self.build_system = build_system
return
def _determine_language(self):
for entry in self._file_entries:
_, ext = os.path.splitext(entry)
if not self._c and ext in C_EXT:
self._c = True
elif not self._cxx and ext in CXX_EXT:
self._cxx = True
elif not self._fortran and ext in FORTRAN_EXT:
self._fortran = True
if self._c and self._cxx and self._fortran:
return
@property
def languages(self) -> List[str]:
langs: List[str] = []
if self._c:
langs.append("c")
if self._cxx:
langs.append("cxx")
if self._fortran:
langs.append("fortran")
return langs
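A self-contained sketch of the extension-based language detection shown above; the extension sets are abbreviated stand-ins for C_EXT, CXX_EXT and FORTRAN_EXT:

import os
from typing import Iterable, List

C_EXT = {".c"}
CXX_EXT = {".cc", ".cpp", ".cxx", ".hh", ".hpp", ".hxx"}
FORTRAN_EXT = {".f", ".F", ".f77", ".F77", ".f90", ".F90"}

def detect_languages(entries: Iterable[str]) -> List[str]:
    # Classify archive entries by extension, as the languages property above does.
    c = cxx = fortran = False
    for entry in entries:
        ext = os.path.splitext(entry)[1]
        c = c or ext in C_EXT
        cxx = cxx or ext in CXX_EXT
        fortran = fortran or ext in FORTRAN_EXT
    return [lang for flag, lang in ((c, "c"), (cxx, "cxx"), (fortran, "fortran")) if flag]

# detect_languages(["src/main.cpp", "src/util.c"]) -> ["c", "cxx"]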
def get_name(name, url):
@@ -811,7 +902,7 @@ def get_url(url):
def get_versions(args, name):
"""Returns a list of versions and hashes for a package.
Also returns a BuildSystemGuesser object.
Also returns a BuildSystemAndLanguageGuesser object.
Returns default values if no URL is provided.
@@ -820,7 +911,7 @@ def get_versions(args, name):
name (str): The name of the package
Returns:
tuple: versions and hashes, and a BuildSystemGuesser object
tuple: versions and hashes, and a BuildSystemAndLanguageGuesser object
"""
# Default version with hash
@@ -834,7 +925,7 @@ def get_versions(args, name):
# version("1.2.4")"""
# Default guesser
guesser = BuildSystemGuesser()
guesser = BuildSystemAndLanguageGuesser()
valid_url = True
try:
@@ -847,7 +938,7 @@ def get_versions(args, name):
if args.url is not None and args.template != "bundle" and valid_url:
# Find available versions
try:
url_dict = spack.url.find_versions_of_archive(args.url)
url_dict = find_versions_of_archive(args.url)
if len(url_dict) > 1 and not args.batch and sys.stdin.isatty():
url_dict_filtered = spack.stage.interactive_version_filter(url_dict)
if url_dict_filtered is None:
@@ -874,7 +965,7 @@ def get_versions(args, name):
return versions, guesser
def get_build_system(template, url, guesser):
def get_build_system(template: str, url: str, guesser: BuildSystemAndLanguageGuesser) -> str:
"""Determine the build system template.
If a template is specified, always use that. Otherwise, if a URL
@@ -882,11 +973,10 @@ def get_build_system(template, url, guesser):
build system it uses. Otherwise, use a generic template by default.
Args:
template (str): ``--template`` argument given to ``spack create``
url (str): ``url`` argument given to ``spack create``
args (argparse.Namespace): The arguments given to ``spack create``
guesser (BuildSystemGuesser): The first_stage_function given to
``spack checksum`` which records the build system it detects
template: ``--template`` argument given to ``spack create``
url: ``url`` argument given to ``spack create``
guesser: The first_stage_function given to ``spack checksum`` which records the build
system it detects
Returns:
str: The name of the build system template to use
@@ -960,7 +1050,7 @@ def create(parser, args):
build_system = get_build_system(args.template, url, guesser)
# Create the package template object
constr_args = {"name": name, "versions": versions}
constr_args = {"name": name, "versions": versions, "languages": guesser.languages}
package_class = templates[build_system]
if package_class != BundlePackageTemplate:
constr_args["url"] = url

View File

@@ -10,8 +10,10 @@
import spack.cmd
import spack.config
import spack.fetch_strategy
import spack.package_base
import spack.repo
import spack.spec
import spack.stage
import spack.util.path
import spack.version
from spack.cmd.common import arguments
@@ -62,7 +64,7 @@ def change_fn(section):
spack.config.change_or_add("develop", find_fn, change_fn)
def _retrieve_develop_source(spec, abspath):
def _retrieve_develop_source(spec: spack.spec.Spec, abspath: str) -> None:
# "steal" the source code via staging API. We ask for a stage
# to be created, then copy it afterwards somewhere else. It would be
# better if we can create the `source_path` directly into its final
@@ -71,13 +73,13 @@ def _retrieve_develop_source(spec, abspath):
# We construct a package class ourselves, rather than asking for
# Spec.package, since Spec only allows this when it is concrete
package = pkg_cls(spec)
source_stage = package.stage[0]
source_stage: spack.stage.Stage = package.stage[0]
if isinstance(source_stage.fetcher, spack.fetch_strategy.GitFetchStrategy):
source_stage.fetcher.get_full_repo = True
# If we retrieved this version before and cached it, we may have
# done so without cloning the full git repo; likewise, any
# mirror might store an instance with truncated history.
source_stage.disable_mirrors()
source_stage.default_fetcher_only = True
source_stage.fetcher.set_package(package)
package.stage.steal_source(abspath)

View File

@@ -468,32 +468,30 @@ def env_remove(args):
This removes an environment managed by Spack. Directory environments
and manifests embedded in repositories should be removed manually.
"""
read_envs = []
remove_envs = []
valid_envs = []
bad_envs = []
invalid_envs = []
for env_name in ev.all_environment_names():
try:
env = ev.read(env_name)
valid_envs.append(env_name)
valid_envs.append(env)
if env_name in args.rm_env:
read_envs.append(env)
remove_envs.append(env)
except (spack.config.ConfigFormatError, ev.SpackEnvironmentConfigError):
invalid_envs.append(env_name)
if env_name in args.rm_env:
bad_envs.append(env_name)
# Check if env is linked to another before trying to remove
for name in valid_envs:
# Check if remove_env is included from another env before trying to remove
for env in valid_envs:
for remove_env in remove_envs:
# don't check if environment is included to itself
if name == env_name:
if env.name == remove_env.name:
continue
environ = ev.Environment(ev.root(name))
if ev.root(env_name) in environ.included_concrete_envs:
msg = f'Environment "{env_name}" is being used by environment "{name}"'
if remove_env.path in env.included_concrete_envs:
msg = f'Environment "{remove_env.name}" is being used by environment "{env.name}"'
if args.force:
tty.warn(msg)
else:
@@ -506,7 +504,7 @@ def env_remove(args):
if not answer:
tty.die("Will not remove any environments")
for env in read_envs:
for env in remove_envs:
name = env.name
if env.active:
tty.die(f"Environment {name} can't be removed while activated.")

View File

@@ -224,7 +224,7 @@ def gpg_publish(args):
mirror = spack.mirror.Mirror(args.mirror_url, args.mirror_url)
with tempfile.TemporaryDirectory(dir=spack.stage.get_stage_root()) as tmpdir:
spack.binary_distribution.push_keys(
spack.binary_distribution._url_push_keys(
mirror, keys=args.keys, tmpdir=tmpdir, update_index=args.update_index
)

View File

@@ -502,7 +502,7 @@ def print_licenses(pkg, args):
def info(parser, args):
spec = spack.spec.Spec(args.package)
pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
pkg_cls = spack.repo.PATH.get_pkg_class(spec.fullname)
pkg = pkg_cls(spec)
# Output core package information

View File

@@ -23,11 +23,6 @@ def setup_parser(subparser):
output.add_argument(
"-s", "--safe", action="store_true", help="only list safe versions of the package"
)
output.add_argument(
"--safe-only",
action="store_true",
help="[deprecated] only list safe versions of the package",
)
output.add_argument(
"-r", "--remote", action="store_true", help="only list remote versions of the package"
)
@@ -47,17 +42,13 @@ def versions(parser, args):
safe_versions = pkg.versions
if args.safe_only:
tty.warn('"--safe-only" is deprecated. Use "--safe" instead.')
args.safe = args.safe_only
if not (args.remote or args.new):
if sys.stdout.isatty():
tty.msg("Safe versions (already checksummed):")
if not safe_versions:
if sys.stdout.isatty():
tty.warn("Found no versions for {0}".format(pkg.name))
tty.warn(f"Found no versions for {pkg.name}")
tty.debug("Manually add versions to the package.")
else:
colify(sorted(safe_versions, reverse=True), indent=2)
@@ -83,12 +74,12 @@ def versions(parser, args):
if not remote_versions:
if sys.stdout.isatty():
if not fetched_versions:
tty.warn("Found no versions for {0}".format(pkg.name))
tty.warn(f"Found no versions for {pkg.name}")
tty.debug(
"Check the list_url and list_depth attributes of "
"the package to help Spack find versions."
)
else:
tty.warn("Found no unchecksummed versions for {0}".format(pkg.name))
tty.warn(f"Found no unchecksummed versions for {pkg.name}")
else:
colify(sorted(remote_versions, reverse=True), indent=2)

View File

@@ -7,11 +7,11 @@
system and configuring Spack to use multiple compilers.
"""
import collections
import itertools
import multiprocessing.pool
import importlib
import os
import sys
import warnings
from typing import Dict, List, Optional, Tuple
from typing import Dict, List, Optional
import archspec.cpu
@@ -22,11 +22,12 @@
import spack.compiler
import spack.config
import spack.error
import spack.operating_systems
import spack.paths
import spack.platforms
import spack.repo
import spack.spec
import spack.version
from spack.operating_systems import windows_os
from spack.util.environment import get_path
from spack.util.naming import mod_to_class
@@ -63,6 +64,10 @@
}
#: Tag used to identify packages providing a compiler
COMPILER_TAG = "compiler"
def pkg_spec_for_compiler(cspec):
"""Return the spec of the package that provides the compiler."""
for spec, package in _compiler_to_pkg.items():
@@ -127,7 +132,7 @@ def get_compiler_config(
# Do not init config because there is a non-empty scope
return config
_init_compiler_config(configuration, scope=scope)
find_compilers(scope=scope)
config = configuration.get("compilers", scope=scope)
return config
@@ -136,125 +141,8 @@ def get_compiler_config_from_packages(
configuration: "spack.config.Configuration", *, scope: Optional[str] = None
) -> List[Dict]:
"""Return the compiler configuration from packages.yaml"""
config = configuration.get("packages", scope=scope)
if not config:
return []
packages = []
compiler_package_names = supported_compilers() + list(package_name_to_compiler_name.keys())
for name, entry in config.items():
if name not in compiler_package_names:
continue
externals_config = entry.get("externals", None)
if not externals_config:
continue
packages.extend(_compiler_config_from_package_config(externals_config))
return packages
def _compiler_config_from_package_config(config):
compilers = []
for entry in config:
compiler = _compiler_config_from_external(entry)
if compiler:
compilers.append(compiler)
return compilers
def _compiler_config_from_external(config):
extra_attributes_key = "extra_attributes"
compilers_key = "compilers"
c_key, cxx_key, fortran_key = "c", "cxx", "fortran"
# Allow `@x.y.z` instead of `@=x.y.z`
spec = spack.spec.parse_with_version_concrete(config["spec"])
compiler_spec = spack.spec.CompilerSpec(
package_name_to_compiler_name.get(spec.name, spec.name), spec.version
)
err_header = f"The external spec '{spec}' cannot be used as a compiler"
# If extra_attributes is not there I might not want to use this entry as a compiler,
# therefore just leave a debug message, but don't be loud with a warning.
if extra_attributes_key not in config:
tty.debug(f"[{__file__}] {err_header}: missing the '{extra_attributes_key}' key")
return None
extra_attributes = config[extra_attributes_key]
# If I have 'extra_attributes' warn if 'compilers' is missing, or we don't have a C compiler
if compilers_key not in extra_attributes:
warnings.warn(
f"{err_header}: missing the '{compilers_key}' key under '{extra_attributes_key}'"
)
return None
attribute_compilers = extra_attributes[compilers_key]
if c_key not in attribute_compilers:
warnings.warn(
f"{err_header}: missing the C compiler path under "
f"'{extra_attributes_key}:{compilers_key}'"
)
return None
c_compiler = attribute_compilers[c_key]
# C++ and Fortran compilers are not mandatory, so let's just leave a debug trace
if cxx_key not in attribute_compilers:
tty.debug(f"[{__file__}] The external spec {spec} does not have a C++ compiler")
if fortran_key not in attribute_compilers:
tty.debug(f"[{__file__}] The external spec {spec} does not have a Fortran compiler")
# compilers format has cc/fc/f77, externals format has "c/fortran"
paths = {
"cc": c_compiler,
"cxx": attribute_compilers.get(cxx_key, None),
"fc": attribute_compilers.get(fortran_key, None),
"f77": attribute_compilers.get(fortran_key, None),
}
if not spec.architecture:
host_platform = spack.platforms.host()
operating_system = host_platform.operating_system("default_os")
target = host_platform.target("default_target").microarchitecture
else:
target = spec.architecture.target
if not target:
target = spack.platforms.host().target("default_target")
target = target.microarchitecture
operating_system = spec.os
if not operating_system:
host_platform = spack.platforms.host()
operating_system = host_platform.operating_system("default_os")
compiler_entry = {
"compiler": {
"spec": str(compiler_spec),
"paths": paths,
"flags": extra_attributes.get("flags", {}),
"operating_system": str(operating_system),
"target": str(target.family),
"modules": config.get("modules", []),
"environment": extra_attributes.get("environment", {}),
"extra_rpaths": extra_attributes.get("extra_rpaths", []),
"implicit_rpaths": extra_attributes.get("implicit_rpaths", None),
}
}
return compiler_entry
def _init_compiler_config(
configuration: "spack.config.Configuration", *, scope: Optional[str]
) -> None:
"""Compiler search used when Spack has no compilers."""
compilers = find_compilers()
compilers_dict = []
for compiler in compilers:
compilers_dict.append(_to_dict(compiler))
configuration.set("compilers", compilers_dict, scope=scope)
packages_yaml = configuration.get("packages", scope=scope)
return CompilerConfigFactory.from_packages_yaml(packages_yaml)
def compiler_config_files():
@@ -278,9 +166,7 @@ def add_compilers_to_config(compilers, scope=None):
compilers: a list of Compiler objects.
scope: configuration scope to modify.
"""
compiler_config = get_compiler_config(
configuration=spack.config.CONFIG, scope=scope, init_config=False
)
compiler_config = get_compiler_config(configuration=spack.config.CONFIG, scope=scope)
for compiler in compilers:
if not compiler.cc:
tty.debug(f"{compiler.spec} does not have a C compiler")
@@ -329,9 +215,7 @@ def _remove_compiler_from_scope(compiler_spec, scope):
True if one or more compiler entries were actually removed, False otherwise
"""
assert scope is not None, "a specific scope is needed when calling this function"
compiler_config = get_compiler_config(
configuration=spack.config.CONFIG, scope=scope, init_config=False
)
compiler_config = get_compiler_config(configuration=spack.config.CONFIG, scope=scope)
filtered_compiler_config = [
compiler_entry
for compiler_entry in compiler_config
@@ -380,79 +264,77 @@ def all_compiler_specs(scope=None, init_config=True):
def find_compilers(
path_hints: Optional[List[str]] = None, *, mixed_toolchain=False
path_hints: Optional[List[str]] = None,
*,
scope: Optional[str] = None,
mixed_toolchain: bool = False,
max_workers: Optional[int] = None,
) -> List["spack.compiler.Compiler"]:
"""Return the list of compilers found in the paths given as arguments.
"""Searches for compiler in the paths given as argument. If any new compiler is found, the
configuration is updated, and the list of new compiler objects is returned.
Args:
path_hints: list of path hints where to look for. A sensible default based on the ``PATH``
environment variable will be used if the value is None
scope: configuration scope to modify
mixed_toolchain: allow mixing compilers from different toolchains if otherwise missing for
a certain language
max_workers: number of processes used to search for compilers
"""
import spack.detection
known_compilers = set(all_compilers(init_config=False))
if path_hints is None:
path_hints = get_path("PATH")
default_paths = fs.search_paths_for_executables(*path_hints)
if sys.platform == "win32":
default_paths.extend(windows_os.WindowsOs().compiler_search_paths)
compiler_pkgs = spack.repo.PATH.packages_with_tags(COMPILER_TAG, full=True)
# To detect the version of the compilers, we dispatch a certain number
# of function calls to different workers. Here we construct the list
# of arguments for each call.
arguments = []
for o in all_os_classes():
search_paths = getattr(o, "compiler_search_paths", default_paths)
arguments.extend(arguments_to_detect_version_fn(o, search_paths))
# Here we map the function arguments to the corresponding calls
tp = multiprocessing.pool.ThreadPool()
try:
detected_versions = tp.map(detect_version, arguments)
finally:
tp.close()
def valid_version(item: Tuple[Optional[DetectVersionArgs], Optional[str]]) -> bool:
value, error = item
if error is None:
return True
try:
# This will fail on Python 2.6 if a non ascii
# character is in the error
tty.debug(error)
except UnicodeEncodeError:
pass
return False
def remove_errors(
item: Tuple[Optional[DetectVersionArgs], Optional[str]]
) -> DetectVersionArgs:
value, _ = item
assert value is not None
return value
return make_compiler_list(
[remove_errors(detected) for detected in detected_versions if valid_version(detected)],
mixed_toolchain=mixed_toolchain,
detected_packages = spack.detection.by_path(
compiler_pkgs, path_hints=default_paths, max_workers=max_workers
)
valid_compilers = {}
for name, detected in detected_packages.items():
compilers = [x for x in detected if CompilerConfigFactory.from_external_spec(x.spec)]
if not compilers:
continue
valid_compilers[name] = compilers
def find_new_compilers(
path_hints: Optional[List[str]] = None,
scope: Optional[str] = None,
*,
mixed_toolchain: bool = False,
):
"""Same as ``find_compilers`` but return only the compilers that are not
already in compilers.yaml.
def _has_fortran_compilers(x):
if "compilers" not in x.spec.extra_attributes:
return False
Args:
path_hints: list of path hints where to look for. A sensible default based on the ``PATH``
environment variable will be used if the value is None
scope: scope to look for a compiler. If None consider the merged configuration.
mixed_toolchain: allow mixing compilers from different toolchains if otherwise missing for
a certain language
"""
compilers = find_compilers(path_hints, mixed_toolchain=mixed_toolchain)
return "fortran" in x.spec.extra_attributes["compilers"]
return select_new_compilers(compilers, scope)
if mixed_toolchain:
gccs = [x for x in valid_compilers.get("gcc", []) if _has_fortran_compilers(x)]
if gccs:
best_gcc = sorted(
gccs, key=lambda x: spack.spec.parse_with_version_concrete(x.spec).version
)[-1]
gfortran = best_gcc.spec.extra_attributes["compilers"]["fortran"]
for name in ("llvm", "apple-clang"):
if name not in valid_compilers:
continue
candidates = valid_compilers[name]
for candidate in candidates:
if _has_fortran_compilers(candidate):
continue
candidate.spec.extra_attributes["compilers"]["fortran"] = gfortran
new_compilers = []
for name, detected in valid_compilers.items():
for config in CompilerConfigFactory.from_specs([x.spec for x in detected]):
c = _compiler_from_config_entry(config["compiler"])
if c in known_compilers:
continue
new_compilers.append(c)
add_compilers_to_config(new_compilers, scope=scope)
return new_compilers
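A hedged usage sketch of the consolidated find_compilers entry point above; the keyword names follow the signature in this hunk, while the paths, scope and worker count are hypothetical:

import spack.compilers

new_compilers = spack.compilers.find_compilers(
    path_hints=["/usr/bin", "/opt/compilers/bin"],  # hypothetical search paths
    scope="user",                                   # hypothetical config scope
    mixed_toolchain=True,
    max_workers=4,
)
for compiler in new_compilers:
    print(compiler.spec.name, compiler.spec.version)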
def select_new_compilers(compilers, scope=None):
@@ -462,7 +344,9 @@ def select_new_compilers(compilers, scope=None):
compilers_not_in_config = []
for c in compilers:
arch_spec = spack.spec.ArchSpec((None, c.operating_system, c.target))
same_specs = compilers_for_spec(c.spec, arch_spec, scope=scope, init_config=False)
same_specs = compilers_for_spec(
c.spec, arch_spec=arch_spec, scope=scope, init_config=False
)
if not same_specs:
compilers_not_in_config.append(c)
@@ -531,7 +415,12 @@ def find(compiler_spec, scope=None, init_config=True):
def find_specs_by_arch(compiler_spec, arch_spec, scope=None, init_config=True):
"""Return specs of available compilers that match the supplied
compiler spec. Return an empty list if nothing found."""
return [c.spec for c in compilers_for_spec(compiler_spec, arch_spec, scope, True, init_config)]
return [
c.spec
for c in compilers_for_spec(
compiler_spec, arch_spec=arch_spec, scope=scope, init_config=init_config
)
]
def all_compilers(scope=None, init_config=True):
@@ -553,14 +442,11 @@ def all_compilers_from(configuration, scope=None, init_config=True):
@_auto_compiler_spec
def compilers_for_spec(
compiler_spec, arch_spec=None, scope=None, use_cache=True, init_config=True
):
def compilers_for_spec(compiler_spec, *, arch_spec=None, scope=None, init_config=True):
"""This gets all compilers that satisfy the supplied CompilerSpec.
Returns an empty list if none are found.
"""
config = all_compilers_config(spack.config.CONFIG, scope=scope, init_config=init_config)
matches = set(find(compiler_spec, scope, init_config))
compilers = []
for cspec in matches:
@@ -569,7 +455,7 @@ def compilers_for_spec(
def compilers_for_arch(arch_spec, scope=None):
config = all_compilers_config(spack.config.CONFIG, scope=scope)
config = all_compilers_config(spack.config.CONFIG, scope=scope, init_config=False)
return list(get_compilers(config, arch_spec=arch_spec))
@@ -766,7 +652,7 @@ def class_for_compiler_name(compiler_name):
submodule_name = compiler_name.replace("-", "_")
module_name = ".".join(["spack", "compilers", submodule_name])
module_obj = __import__(module_name, fromlist=[None])
module_obj = importlib.import_module(module_name)
cls = getattr(module_obj, mod_to_class(compiler_name))
# make a note of the name in the module so we can get to it easily.
@@ -819,228 +705,6 @@ def all_compiler_types():
)
def arguments_to_detect_version_fn(
operating_system: spack.operating_systems.OperatingSystem, paths: List[str]
) -> List[DetectVersionArgs]:
"""Returns a list of DetectVersionArgs tuples to be used in a
corresponding function to detect compiler versions.
The ``operating_system`` instance can customize the behavior of this
function by providing a method called with the same name.
Args:
operating_system: the operating system on which we are looking for compilers
paths: paths to search for compilers
Returns:
List of DetectVersionArgs tuples. Each item in the list will be later
mapped to the corresponding function call to detect the version of the
compilers in this OS.
"""
def _default(search_paths: List[str]) -> List[DetectVersionArgs]:
command_arguments: List[DetectVersionArgs] = []
files_to_be_tested = fs.files_in(*search_paths)
for compiler_name in supported_compilers_for_host_platform():
compiler_cls = class_for_compiler_name(compiler_name)
for language in ("cc", "cxx", "f77", "fc"):
# Select only the files matching a regexp
for (file, full_path), regexp in itertools.product(
files_to_be_tested, compiler_cls.search_regexps(language)
):
match = regexp.match(file)
if match:
compiler_id = CompilerID(operating_system, compiler_name, None)
detect_version_args = DetectVersionArgs(
id=compiler_id,
variation=NameVariation(*match.groups()),
language=language,
path=full_path,
)
command_arguments.append(detect_version_args)
return command_arguments
fn = getattr(operating_system, "arguments_to_detect_version_fn", _default)
return fn(paths)
def detect_version(
detect_version_args: DetectVersionArgs,
) -> Tuple[Optional[DetectVersionArgs], Optional[str]]:
"""Computes the version of a compiler and adds it to the information
passed as input.
As this function is meant to be executed by worker processes it won't
raise any exception but instead will return a (value, error) tuple that
needs to be checked by the code dispatching the calls.
Args:
detect_version_args: information on the compiler for which we should detect the version.
Returns:
A ``(DetectVersionArgs, error)`` tuple. If ``error`` is ``None`` the
version of the compiler was computed correctly and the first argument
of the tuple will contain it. Otherwise ``error`` is a string
containing an explanation on why the version couldn't be computed.
"""
def _default(fn_args):
compiler_id = fn_args.id
language = fn_args.language
compiler_cls = class_for_compiler_name(compiler_id.compiler_name)
path = fn_args.path
# Get compiler names and the callback to detect their versions
callback = getattr(compiler_cls, f"{language}_version")
try:
version = callback(path)
if version and str(version).strip() and version != "unknown":
value = fn_args._replace(id=compiler_id._replace(version=version))
return value, None
error = f"Couldn't get version for compiler {path}".format(path)
except spack.util.executable.ProcessError as e:
error = f"Couldn't get version for compiler {path}\n" + str(e)
except spack.util.executable.ProcessTimeoutError as e:
error = f"Couldn't get version for compiler {path}\n" + str(e)
except Exception as e:
# Catching "Exception" here is fine because it just
# means something went wrong running a candidate executable.
error = "Error while executing candidate compiler {0}" "\n{1}: {2}".format(
path, e.__class__.__name__, str(e)
)
return None, error
operating_system = detect_version_args.id.os
fn = getattr(operating_system, "detect_version", _default)
return fn(detect_version_args)
def make_compiler_list(
detected_versions: List[DetectVersionArgs], mixed_toolchain: bool = False
) -> List["spack.compiler.Compiler"]:
"""Process a list of detected versions and turn them into a list of
compiler specs.
Args:
detected_versions: list of DetectVersionArgs containing a valid version
mixed_toolchain: allow mixing compilers from different toolchains if language is missing
Returns:
list: list of Compiler objects
"""
group_fn = lambda x: (x.id, x.variation, x.language)
sorted_compilers = sorted(detected_versions, key=group_fn)
# Gather items in a dictionary by the id, name variation and language
compilers_d: Dict[CompilerID, Dict[NameVariation, dict]] = {}
for sort_key, group in itertools.groupby(sorted_compilers, key=group_fn):
compiler_id, name_variation, language = sort_key
by_compiler_id = compilers_d.setdefault(compiler_id, {})
by_name_variation = by_compiler_id.setdefault(name_variation, {})
by_name_variation[language] = next(x.path for x in group)
def _default_make_compilers(cmp_id, paths):
operating_system, compiler_name, version = cmp_id
compiler_cls = class_for_compiler_name(compiler_name)
spec = spack.spec.CompilerSpec(compiler_cls.name, f"={version}")
paths = [paths.get(x, None) for x in ("cc", "cxx", "f77", "fc")]
# TODO: johnwparent - revist the following line as per discussion at:
# https://github.com/spack/spack/pull/33385/files#r1040036318
target = archspec.cpu.host()
compiler = compiler_cls(spec, operating_system, str(target.family), paths)
return [compiler]
# For compilers with the same compiler id:
#
# - Prefer with C compiler to without
# - Prefer with C++ compiler to without
# - Prefer no variations to variations (e.g., clang to clang-gpu)
#
sort_fn = lambda variation: (
"cc" not in by_compiler_id[variation], # None last
"cxx" not in by_compiler_id[variation], # None last
getattr(variation, "prefix", None),
getattr(variation, "suffix", None),
)
# Flatten to a list of compiler id, primary variation and compiler dictionary
flat_compilers: List[Tuple[CompilerID, NameVariation, dict]] = []
for compiler_id, by_compiler_id in compilers_d.items():
ordered = sorted(by_compiler_id, key=sort_fn)
selected_variation = ordered[0]
selected = by_compiler_id[selected_variation]
# Fill any missing parts from subsequent entries (without mixing toolchains)
for lang in ["cxx", "f77", "fc"]:
if lang not in selected:
next_lang = next(
(by_compiler_id[v][lang] for v in ordered if lang in by_compiler_id[v]), None
)
if next_lang:
selected[lang] = next_lang
flat_compilers.append((compiler_id, selected_variation, selected))
# Next, fill out the blanks of missing compilers by creating a mixed toolchain (if requested)
if mixed_toolchain:
make_mixed_toolchain(flat_compilers)
# Finally, create the compiler list
compilers: List["spack.compiler.Compiler"] = []
for compiler_id, _, compiler in flat_compilers:
make_compilers = getattr(compiler_id.os, "make_compilers", _default_make_compilers)
candidates = make_compilers(compiler_id, compiler)
compilers.extend(x for x in candidates if x.cc is not None)
return compilers
def make_mixed_toolchain(compilers: List[Tuple[CompilerID, NameVariation, dict]]) -> None:
"""Add missing compilers across toolchains when they are missing for a particular language.
This currently only adds the most sensible gfortran to (apple)-clang if it doesn't have a
fortran compiler (no flang)."""
# First collect the clangs that are missing a fortran compiler
clangs_without_flang = [
(id, variation, compiler)
for id, variation, compiler in compilers
if id.compiler_name in ("clang", "apple-clang")
and "f77" not in compiler
and "fc" not in compiler
]
if not clangs_without_flang:
return
# Filter on GCCs with fortran compiler
gccs_with_fortran = [
(id, variation, compiler)
for id, variation, compiler in compilers
if id.compiler_name == "gcc" and "f77" in compiler and "fc" in compiler
]
# Sort these GCCs by "best variation" (no prefix / suffix first)
gccs_with_fortran.sort(
key=lambda x: (getattr(x[1], "prefix", None), getattr(x[1], "suffix", None))
)
# Attach the optimal GCC fortran compiler to the clangs that don't have one
for clang_id, _, clang_compiler in clangs_without_flang:
gcc_compiler = next(
(gcc[2] for gcc in gccs_with_fortran if gcc[0].os == clang_id.os), None
)
if not gcc_compiler:
continue
# Update the fc / f77 entries
clang_compiler["f77"] = gcc_compiler["f77"]
clang_compiler["fc"] = gcc_compiler["fc"]
def is_mixed_toolchain(compiler):
"""Returns True if the current compiler is a mixed toolchain,
False otherwise.
@@ -1087,6 +751,155 @@ def name_matches(name, name_list):
return False
_EXTRA_ATTRIBUTES_KEY = "extra_attributes"
_COMPILERS_KEY = "compilers"
_C_KEY = "c"
_CXX_KEY, _FORTRAN_KEY = "cxx", "fortran"
class CompilerConfigFactory:
"""Class aggregating all ways of constructing a list of compiler config entries."""
@staticmethod
def from_specs(specs: List["spack.spec.Spec"]) -> List[dict]:
result = []
compiler_package_names = supported_compilers() + list(package_name_to_compiler_name.keys())
for s in specs:
if s.name not in compiler_package_names:
continue
candidate = CompilerConfigFactory.from_external_spec(s)
if candidate is None:
continue
result.append(candidate)
return result
@staticmethod
def from_packages_yaml(packages_yaml) -> List[dict]:
compiler_specs = []
compiler_package_names = supported_compilers() + list(package_name_to_compiler_name.keys())
for name, entry in packages_yaml.items():
if name not in compiler_package_names:
continue
externals_config = entry.get("externals", None)
if not externals_config:
continue
current_specs = []
for current_external in externals_config:
compiler = CompilerConfigFactory._spec_from_external_config(current_external)
if compiler:
current_specs.append(compiler)
compiler_specs.extend(current_specs)
return CompilerConfigFactory.from_specs(compiler_specs)
@staticmethod
def _spec_from_external_config(config):
# Allow `@x.y.z` instead of `@=x.y.z`
err_header = f"The external spec '{config['spec']}' cannot be used as a compiler"
# If extra_attributes is not there I might not want to use this entry as a compiler,
# therefore just leave a debug message, but don't be loud with a warning.
if _EXTRA_ATTRIBUTES_KEY not in config:
tty.debug(f"[{__file__}] {err_header}: missing the '{_EXTRA_ATTRIBUTES_KEY}' key")
return None
extra_attributes = config[_EXTRA_ATTRIBUTES_KEY]
result = spack.spec.Spec(
str(spack.spec.parse_with_version_concrete(config["spec"])),
external_modules=config.get("modules"),
)
result.extra_attributes = extra_attributes
return result
@staticmethod
def from_external_spec(spec: "spack.spec.Spec") -> Optional[dict]:
spec = spack.spec.parse_with_version_concrete(spec)
extra_attributes = getattr(spec, _EXTRA_ATTRIBUTES_KEY, None)
if extra_attributes is None:
return None
paths = CompilerConfigFactory._extract_compiler_paths(spec)
if paths is None:
return None
compiler_spec = spack.spec.CompilerSpec(
package_name_to_compiler_name.get(spec.name, spec.name), spec.version
)
operating_system, target = CompilerConfigFactory._extract_os_and_target(spec)
compiler_entry = {
"compiler": {
"spec": str(compiler_spec),
"paths": paths,
"flags": extra_attributes.get("flags", {}),
"operating_system": str(operating_system),
"target": str(target.family),
"modules": getattr(spec, "external_modules", []),
"environment": extra_attributes.get("environment", {}),
"extra_rpaths": extra_attributes.get("extra_rpaths", []),
"implicit_rpaths": extra_attributes.get("implicit_rpaths", None),
}
}
return compiler_entry
@staticmethod
def _extract_compiler_paths(spec: "spack.spec.Spec") -> Optional[Dict[str, str]]:
err_header = f"The external spec '{spec}' cannot be used as a compiler"
extra_attributes = spec.extra_attributes
# If I have 'extra_attributes' warn if 'compilers' is missing,
# or we don't have a C compiler
if _COMPILERS_KEY not in extra_attributes:
warnings.warn(
f"{err_header}: missing the '{_COMPILERS_KEY}' key under '{_EXTRA_ATTRIBUTES_KEY}'"
)
return None
attribute_compilers = extra_attributes[_COMPILERS_KEY]
if _C_KEY not in attribute_compilers:
warnings.warn(
f"{err_header}: missing the C compiler path under "
f"'{_EXTRA_ATTRIBUTES_KEY}:{_COMPILERS_KEY}'"
)
return None
c_compiler = attribute_compilers[_C_KEY]
# C++ and Fortran compilers are not mandatory, so let's just leave a debug trace
if _CXX_KEY not in attribute_compilers:
tty.debug(f"[{__file__}] The external spec {spec} does not have a C++ compiler")
if _FORTRAN_KEY not in attribute_compilers:
tty.debug(f"[{__file__}] The external spec {spec} does not have a Fortran compiler")
# compilers format has cc/fc/f77, externals format has "c/fortran"
return {
"cc": c_compiler,
"cxx": attribute_compilers.get(_CXX_KEY, None),
"fc": attribute_compilers.get(_FORTRAN_KEY, None),
"f77": attribute_compilers.get(_FORTRAN_KEY, None),
}
@staticmethod
def _extract_os_and_target(spec: "spack.spec.Spec"):
if not spec.architecture:
host_platform = spack.platforms.host()
operating_system = host_platform.operating_system("default_os")
target = host_platform.target("default_target").microarchitecture
else:
target = spec.architecture.target
if not target:
target = spack.platforms.host().target("default_target")
target = target.microarchitecture
operating_system = spec.os
if not operating_system:
host_platform = spack.platforms.host()
operating_system = host_platform.operating_system("default_os")
return operating_system, target
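To make the factory's expected input concrete, a hypothetical packages.yaml fragment (rendered here as a Python dict) of the shape that from_packages_yaml and _extract_compiler_paths consume; every path and version is illustrative only:

packages_yaml = {
    "gcc": {
        "externals": [
            {
                "spec": "gcc@12.3.0",  # hypothetical version
                "modules": [],
                "extra_attributes": {
                    "compilers": {
                        "c": "/usr/bin/gcc-12",  # the C entry is required
                        "cxx": "/usr/bin/g++-12",  # optional
                        "fortran": "/usr/bin/gfortran-12",  # optional
                    },
                    "flags": {},
                    "environment": {},
                },
            }
        ]
    }
}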
class InvalidCompilerConfigurationError(spack.error.SpackError):
def __init__(self, compiler_spec):
super().__init__(

View File

@@ -223,6 +223,30 @@ def get_oneapi_root(pth: str):
)
self.msvc_compiler_environment = CmdCall(*env_cmds)
@property
def cxx11_flag(self):
return "/std:c++11"
@property
def cxx14_flag(self):
return "/std:c++14"
@property
def cxx17_flag(self):
return "/std:c++17"
@property
def cxx20_flag(self):
return "/std:c++20"
@property
def c11_flag(self):
return "/std:c11"
@property
def c17_flag(self):
return "/std:c17"
@property
def msvc_version(self):
"""This is the VCToolset version *NOT* the actual version of the cl compiler

View File

@@ -239,7 +239,7 @@ def update_configuration(
external_entries = pkg_config.get("externals", [])
assert not isinstance(external_entries, bool), "unexpected value for external entry"
all_new_specs.extend([spack.spec.Spec(x["spec"]) for x in external_entries])
all_new_specs.extend([x.spec for x in new_entries])
if buildable is False:
pkg_config["buildable"] = False
pkg_to_cfg[package_name] = pkg_config

View File

@@ -62,7 +62,7 @@ def common_windows_package_paths(pkg_cls=None) -> List[str]:
def file_identifier(path):
s = os.stat(path)
return (s.st_dev, s.st_ino)
return s.st_dev, s.st_ino
def executables_in_path(path_hints: List[str]) -> Dict[str, str]:
@@ -80,6 +80,8 @@ def executables_in_path(path_hints: List[str]) -> Dict[str, str]:
constructed based on the PATH environment variable.
"""
search_paths = llnl.util.filesystem.search_paths_for_executables(*path_hints)
# Make sure we don't doubly list /usr/lib and /lib etc
search_paths = list(llnl.util.lang.dedupe(search_paths, key=file_identifier))
return path_to_dict(search_paths)
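A standalone sketch of the dedupe-by-file-identity idea above, useful because directories such as /lib and /usr/lib often alias the same inode; it uses plain os.stat rather than llnl's dedupe helper:

import os
from typing import Iterable, List

def dedupe_by_identity(paths: Iterable[str]) -> List[str]:
    # Keep one representative per (st_dev, st_ino) pair, preserving order.
    seen, unique = set(), []
    for path in paths:
        try:
            stat = os.stat(path)
        except OSError:
            continue  # skip paths that do not exist
        key = (stat.st_dev, stat.st_ino)
        if key not in seen:
            seen.add(key)
            unique.append(path)
    return unique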

View File

@@ -32,10 +32,9 @@ class OpenMpi(Package):
"""
import collections
import collections.abc
import functools
import os.path
import re
from typing import TYPE_CHECKING, Any, Callable, List, Optional, Set, Tuple, Union
from typing import TYPE_CHECKING, Any, Callable, List, Optional, Tuple, Union
import llnl.util.lang
import llnl.util.tty.color
@@ -48,6 +47,7 @@ class OpenMpi(Package):
import spack.util.crypto
import spack.variant
from spack.dependency import Dependency
from spack.directives_meta import DirectiveError, DirectiveMeta
from spack.fetch_strategy import from_kwargs
from spack.resource import Resource
from spack.version import (
@@ -80,22 +80,6 @@ class OpenMpi(Package):
"redistribute",
]
#: These are variant names used by Spack internally; packages can't use them
reserved_names = [
"arch",
"architecture",
"dev_path",
"namespace",
"operating_system",
"os",
"patches",
"platform",
"target",
]
#: Names of possible directives. This list is mostly populated using the @directive decorator.
#: Some directives leverage others and in that case are not automatically added.
directive_names = ["build_system"]
_patch_order_index = 0
@@ -155,219 +139,6 @@ def _make_when_spec(value: WhenType) -> Optional["spack.spec.Spec"]:
return spack.spec.Spec(value)
class DirectiveMeta(type):
"""Flushes the directives that were temporarily stored in the staging
area into the package.
"""
# Set of all known directives
_directive_dict_names: Set[str] = set()
_directives_to_be_executed: List[str] = []
_when_constraints_from_context: List[str] = []
_default_args: List[dict] = []
def __new__(cls, name, bases, attr_dict):
# Initialize the attribute containing the list of directives
# to be executed. Here we go reversed because we want to execute
# commands:
# 1. in the order they were defined
# 2. following the MRO
attr_dict["_directives_to_be_executed"] = []
for base in reversed(bases):
try:
directive_from_base = base._directives_to_be_executed
attr_dict["_directives_to_be_executed"].extend(directive_from_base)
except AttributeError:
# The base class didn't have the required attribute.
# Continue searching
pass
# De-duplicates directives from base classes
attr_dict["_directives_to_be_executed"] = [
x for x in llnl.util.lang.dedupe(attr_dict["_directives_to_be_executed"])
]
# Move things to be executed from module scope (where they
# are collected first) to class scope
if DirectiveMeta._directives_to_be_executed:
attr_dict["_directives_to_be_executed"].extend(
DirectiveMeta._directives_to_be_executed
)
DirectiveMeta._directives_to_be_executed = []
return super(DirectiveMeta, cls).__new__(cls, name, bases, attr_dict)
def __init__(cls, name, bases, attr_dict):
# The instance is being initialized: if it is a package we must ensure
# that the directives are called to set it up.
if "spack.pkg" in cls.__module__:
# Ensure the presence of the dictionaries associated with the directives.
# All dictionaries are defaultdicts that create lists for missing keys.
for d in DirectiveMeta._directive_dict_names:
setattr(cls, d, {})
# Lazily execute directives
for directive in cls._directives_to_be_executed:
directive(cls)
# Ignore any directives executed *within* top-level
# directives by clearing out the queue they're appended to
DirectiveMeta._directives_to_be_executed = []
super(DirectiveMeta, cls).__init__(name, bases, attr_dict)
@staticmethod
def push_to_context(when_spec):
"""Add a spec to the context constraints."""
DirectiveMeta._when_constraints_from_context.append(when_spec)
@staticmethod
def pop_from_context():
"""Pop the last constraint from the context"""
return DirectiveMeta._when_constraints_from_context.pop()
@staticmethod
def push_default_args(default_args):
"""Push default arguments"""
DirectiveMeta._default_args.append(default_args)
@staticmethod
def pop_default_args():
"""Pop default arguments"""
return DirectiveMeta._default_args.pop()
@staticmethod
def directive(dicts=None):
"""Decorator for Spack directives.
Spack directives allow you to modify a package while it is being
defined, e.g. to add version or dependency information. Directives
are one of the key pieces of Spack's package "language", which is
embedded in python.
Here's an example directive:
.. code-block:: python
@directive(dicts='versions')
version(pkg, ...):
...
This directive allows you to write:
.. code-block:: python
class Foo(Package):
version(...)
The ``@directive`` decorator handles a couple things for you:
1. Adds the class scope (pkg) as an initial parameter when
called, like a class method would. This allows you to modify
a package from within a directive, while the package is still
being defined.
2. It automatically adds a dictionary called "versions" to the
package so that you can refer to pkg.versions.
The ``(dicts='versions')`` part ensures that ALL packages in Spack
will have a ``versions`` attribute after they're constructed, and
that if no directive actually modified it, it will just be an
empty dict.
This is just a modular way to add storage attributes to the
Package class, and it's how Spack gets information from the
packages to the core.
"""
global directive_names
if isinstance(dicts, str):
dicts = (dicts,)
if not isinstance(dicts, collections.abc.Sequence):
message = "dicts arg must be list, tuple, or string. Found {0}"
raise TypeError(message.format(type(dicts)))
# Add the dictionary names if not already there
DirectiveMeta._directive_dict_names |= set(dicts)
# This decorator just returns the directive functions
def _decorator(decorated_function):
directive_names.append(decorated_function.__name__)
@functools.wraps(decorated_function)
def _wrapper(*args, **_kwargs):
# First merge default args with kwargs
kwargs = dict()
for default_args in DirectiveMeta._default_args:
kwargs.update(default_args)
kwargs.update(_kwargs)
# Inject when arguments from the context
if DirectiveMeta._when_constraints_from_context:
# Check that directives not yet supporting the when= argument
# are not used inside the context manager
if decorated_function.__name__ == "version":
msg = (
'directive "{0}" cannot be used within a "when"'
' context since it does not support a "when=" '
"argument"
)
msg = msg.format(decorated_function.__name__)
raise DirectiveError(msg)
when_constraints = [
spack.spec.Spec(x) for x in DirectiveMeta._when_constraints_from_context
]
if kwargs.get("when"):
when_constraints.append(spack.spec.Spec(kwargs["when"]))
when_spec = spack.spec.merge_abstract_anonymous_specs(*when_constraints)
kwargs["when"] = when_spec
# If any of the arguments are executors returned by a
# directive passed as an argument, don't execute them
# lazily. Instead, let the called directive handle them.
# This allows nested directive calls in packages. The
# caller can return the directive if it should be queued.
def remove_directives(arg):
directives = DirectiveMeta._directives_to_be_executed
if isinstance(arg, (list, tuple)):
# Descend into args that are lists or tuples
for a in arg:
remove_directives(a)
else:
# Remove directives args from the exec queue
remove = next((d for d in directives if d is arg), None)
if remove is not None:
directives.remove(remove)
# Nasty, but it's the best way I can think of to avoid
# side effects if directive results are passed as args
remove_directives(args)
remove_directives(list(kwargs.values()))
# A directive returns either something that is callable on a
# package or a sequence of them
result = decorated_function(*args, **kwargs)
# ...so if it is not a sequence make it so
values = result
if not isinstance(values, collections.abc.Sequence):
values = (values,)
DirectiveMeta._directives_to_be_executed.extend(values)
# wrapped function returns same result as original so
# that we can nest directives
return result
return _wrapper
return _decorator
SubmoduleCallback = Callable[["spack.package_base.PackageBase"], Union[str, List[str], bool]]
directive = DirectiveMeta.directive
@@ -846,7 +617,7 @@ def format_error(msg, pkg):
msg += " @*r{{[{0}, variant '{1}']}}"
return llnl.util.tty.color.colorize(msg.format(pkg.name, name))
if name in reserved_names:
if name in spack.variant.reserved_names:
def _raise_reserved_name(pkg):
msg = "The name '%s' is reserved by Spack" % name
@@ -1110,10 +881,6 @@ def _execute_languages(pkg: "spack.package_base.PackageBase"):
return _execute_languages
class DirectiveError(spack.error.SpackError):
"""This is raised when something is wrong with a package directive."""
class DependencyError(DirectiveError):
"""This is raised when a dependency specification is invalid."""

View File

@@ -0,0 +1,234 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import collections.abc
import functools
from typing import List, Set
import llnl.util.lang
import spack.error
import spack.spec
#: Names of possible directives. This list is mostly populated using the @directive decorator.
#: Some directives leverage others and in that case are not automatically added.
directive_names = ["build_system"]
class DirectiveMeta(type):
"""Flushes the directives that were temporarily stored in the staging
area into the package.
"""
# Set of all known directives
_directive_dict_names: Set[str] = set()
_directives_to_be_executed: List[str] = []
_when_constraints_from_context: List[str] = []
_default_args: List[dict] = []
def __new__(cls, name, bases, attr_dict):
# Initialize the attribute containing the list of directives
# to be executed. Here we go reversed because we want to execute
# commands:
# 1. in the order they were defined
# 2. following the MRO
attr_dict["_directives_to_be_executed"] = []
for base in reversed(bases):
try:
directive_from_base = base._directives_to_be_executed
attr_dict["_directives_to_be_executed"].extend(directive_from_base)
except AttributeError:
# The base class didn't have the required attribute.
# Continue searching
pass
# De-duplicates directives from base classes
attr_dict["_directives_to_be_executed"] = [
x for x in llnl.util.lang.dedupe(attr_dict["_directives_to_be_executed"])
]
# Move things to be executed from module scope (where they
# are collected first) to class scope
if DirectiveMeta._directives_to_be_executed:
attr_dict["_directives_to_be_executed"].extend(
DirectiveMeta._directives_to_be_executed
)
DirectiveMeta._directives_to_be_executed = []
return super(DirectiveMeta, cls).__new__(cls, name, bases, attr_dict)
def __init__(cls, name, bases, attr_dict):
# The instance is being initialized: if it is a package we must ensure
# that the directives are called to set it up.
if "spack.pkg" in cls.__module__:
# Ensure the presence of the dictionaries associated with the directives.
# All dictionaries are defaultdicts that create lists for missing keys.
for d in DirectiveMeta._directive_dict_names:
setattr(cls, d, {})
# Lazily execute directives
for directive in cls._directives_to_be_executed:
directive(cls)
# Ignore any directives executed *within* top-level
# directives by clearing out the queue they're appended to
DirectiveMeta._directives_to_be_executed = []
super(DirectiveMeta, cls).__init__(name, bases, attr_dict)
@staticmethod
def push_to_context(when_spec):
"""Add a spec to the context constraints."""
DirectiveMeta._when_constraints_from_context.append(when_spec)
@staticmethod
def pop_from_context():
"""Pop the last constraint from the context"""
return DirectiveMeta._when_constraints_from_context.pop()
@staticmethod
def push_default_args(default_args):
"""Push default arguments"""
DirectiveMeta._default_args.append(default_args)
@staticmethod
def pop_default_args():
"""Pop default arguments"""
return DirectiveMeta._default_args.pop()
@staticmethod
def directive(dicts=None):
"""Decorator for Spack directives.
Spack directives allow you to modify a package while it is being
defined, e.g. to add version or dependency information. Directives
are one of the key pieces of Spack's package "language", which is
embedded in python.
Here's an example directive:
.. code-block:: python
@directive(dicts='versions')
version(pkg, ...):
...
This directive allows you to write:
.. code-block:: python
class Foo(Package):
version(...)
The ``@directive`` decorator handles a couple things for you:
1. Adds the class scope (pkg) as an initial parameter when
called, like a class method would. This allows you to modify
a package from within a directive, while the package is still
being defined.
2. It automatically adds a dictionary called "versions" to the
package so that you can refer to pkg.versions.
The ``(dicts='versions')`` part ensures that ALL packages in Spack
will have a ``versions`` attribute after they're constructed, and
that if no directive actually modified it, it will just be an
empty dict.
This is just a modular way to add storage attributes to the
Package class, and it's how Spack gets information from the
packages to the core.
"""
global directive_names
if isinstance(dicts, str):
dicts = (dicts,)
if not isinstance(dicts, collections.abc.Sequence):
message = "dicts arg must be list, tuple, or string. Found {0}"
raise TypeError(message.format(type(dicts)))
# Add the dictionary names if not already there
DirectiveMeta._directive_dict_names |= set(dicts)
# This decorator just returns the directive functions
def _decorator(decorated_function):
directive_names.append(decorated_function.__name__)
@functools.wraps(decorated_function)
def _wrapper(*args, **_kwargs):
# First merge default args with kwargs
kwargs = dict()
for default_args in DirectiveMeta._default_args:
kwargs.update(default_args)
kwargs.update(_kwargs)
# Inject when arguments from the context
if DirectiveMeta._when_constraints_from_context:
# Check that directives not yet supporting the when= argument
# are not used inside the context manager
if decorated_function.__name__ == "version":
msg = (
'directive "{0}" cannot be used within a "when"'
' context since it does not support a "when=" '
"argument"
)
msg = msg.format(decorated_function.__name__)
raise DirectiveError(msg)
when_constraints = [
spack.spec.Spec(x) for x in DirectiveMeta._when_constraints_from_context
]
if kwargs.get("when"):
when_constraints.append(spack.spec.Spec(kwargs["when"]))
when_spec = spack.spec.merge_abstract_anonymous_specs(*when_constraints)
kwargs["when"] = when_spec
# If any of the arguments are executors returned by a
# directive passed as an argument, don't execute them
# lazily. Instead, let the called directive handle them.
# This allows nested directive calls in packages. The
# caller can return the directive if it should be queued.
def remove_directives(arg):
directives = DirectiveMeta._directives_to_be_executed
if isinstance(arg, (list, tuple)):
# Descend into args that are lists or tuples
for a in arg:
remove_directives(a)
else:
# Remove directives args from the exec queue
remove = next((d for d in directives if d is arg), None)
if remove is not None:
directives.remove(remove)
# Nasty, but it's the best way I can think of to avoid
# side effects if directive results are passed as args
remove_directives(args)
remove_directives(list(kwargs.values()))
# A directive returns either something that is callable on a
# package or a sequence of them
result = decorated_function(*args, **kwargs)
# ...so if it is not a sequence make it so
values = result
if not isinstance(values, collections.abc.Sequence):
values = (values,)
DirectiveMeta._directives_to_be_executed.extend(values)
# wrapped function returns same result as original so
# that we can nest directives
return result
return _wrapper
return _decorator
class DirectiveError(spack.error.SpackError):
"""This is raised when something is wrong with a package directive."""

View File

@@ -1214,7 +1214,6 @@ def scope_name(self):
def include_concrete_envs(self):
"""Copy and save the included envs' specs internally"""
lockfile_meta = None
root_hash_seen = set()
concrete_hash_seen = set()
self.included_concrete_spec_data = {}
@@ -1225,37 +1224,26 @@ def include_concrete_envs(self):
raise SpackEnvironmentError(f"Unable to find env at {env_path}")
env = Environment(env_path)
with open(env.lock_path) as f:
lockfile_as_dict = env._read_lockfile(f)
# Lockfile_meta must match each env and use at least format version 5
if lockfile_meta is None:
lockfile_meta = lockfile_as_dict["_meta"]
elif lockfile_meta != lockfile_as_dict["_meta"]:
raise SpackEnvironmentError("All lockfile _meta values must match")
elif lockfile_meta["lockfile-version"] < 5:
raise SpackEnvironmentError("The lockfile format must be at version 5 or higher")
self.included_concrete_spec_data[env_path] = {"roots": [], "concrete_specs": {}}
# Copy unique root specs from env
self.included_concrete_spec_data[env_path] = {"roots": []}
for root_dict in lockfile_as_dict["roots"]:
for root_dict in env._concrete_roots_dict():
if root_dict["hash"] not in root_hash_seen:
self.included_concrete_spec_data[env_path]["roots"].append(root_dict)
root_hash_seen.add(root_dict["hash"])
# Copy unique concrete specs from env
for concrete_spec in lockfile_as_dict["concrete_specs"]:
if concrete_spec not in concrete_hash_seen:
self.included_concrete_spec_data[env_path].update(
{"concrete_specs": lockfile_as_dict["concrete_specs"]}
for dag_hash, spec_details in env._concrete_specs_dict().items():
if dag_hash not in concrete_hash_seen:
self.included_concrete_spec_data[env_path]["concrete_specs"].update(
{dag_hash: spec_details}
)
concrete_hash_seen.add(concrete_spec)
concrete_hash_seen.add(dag_hash)
if "include_concrete" in lockfile_as_dict.keys():
self.included_concrete_spec_data[env_path]["include_concrete"] = lockfile_as_dict[
"include_concrete"
]
# Copy transitive include data
transitive = env.included_concrete_spec_data
if transitive:
self.included_concrete_spec_data[env_path]["include_concrete"] = transitive
self._read_lockfile_dict(self._to_lockfile_dict())
self.write()
@@ -1656,7 +1644,7 @@ def _concretize_separately(self, tests=False):
# Ensure we have compilers in compilers.yaml to avoid that
# processes try to write the config file in parallel
_ = spack.compilers.get_compiler_config(spack.config.CONFIG, init_config=True)
_ = spack.compilers.all_compilers_config(spack.config.CONFIG)
# Early return if there is nothing to do
if len(args) == 0:
@@ -2173,16 +2161,23 @@ def _get_environment_specs(self, recurse_dependencies=True):
return specs
def _to_lockfile_dict(self):
"""Create a dictionary to store a lockfile for this environment."""
def _concrete_specs_dict(self):
concrete_specs = {}
for s in traverse.traverse_nodes(self.specs_by_hash.values(), key=traverse.by_dag_hash):
spec_dict = s.node_dict_with_hashes(hash=ht.dag_hash)
# Assumes no legacy formats, since this was just created.
spec_dict[ht.dag_hash.name] = s.dag_hash()
concrete_specs[s.dag_hash()] = spec_dict
return concrete_specs
def _concrete_roots_dict(self):
hash_spec_list = zip(self.concretized_order, self.concretized_user_specs)
return [{"hash": h, "spec": str(s)} for h, s in hash_spec_list]
def _to_lockfile_dict(self):
"""Create a dictionary to store a lockfile for this environment."""
concrete_specs = self._concrete_specs_dict()
root_specs = self._concrete_roots_dict()
spack_dict = {"version": spack.spack_version}
spack_commit = spack.main.get_spack_commit()
@@ -2203,7 +2198,7 @@ def _to_lockfile_dict(self):
# spack version information
"spack": spack_dict,
# users specs + hashes are the 'roots' of the environment
"roots": [{"hash": h, "spec": str(s)} for h, s in hash_spec_list],
"roots": root_specs,
# Concrete specs by hash, including dependencies
"concrete_specs": concrete_specs,
}
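For orientation, the two helpers roughly produce lockfile fragments of the following shape (keys and values here are illustrative, not taken from a real environment):

roots = [{"hash": "abcdef1234567890", "spec": "zlib@1.3.1"}]
concrete_specs = {
    "abcdef1234567890": {
        "name": "zlib",
        "version": "1.3.1",
        "hash": "abcdef1234567890",  # dag hash injected by _concrete_specs_dict
        # ... remaining node dict fields from node_dict_with_hashes()
    }
}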

View File

@@ -54,7 +54,7 @@
import spack.version
import spack.version.git_ref_lookup
from spack.util.compression import decompressor_for
from spack.util.executable import CommandNotFoundError, which
from spack.util.executable import CommandNotFoundError, Executable, which
#: List of all fetch strategies, created by FetchStrategy metaclass.
all_strategies = []
@@ -246,33 +246,28 @@ class URLFetchStrategy(FetchStrategy):
# these are checksum types. The generic 'checksum' is deprecated for
# specific hash names, but we need it for backward compatibility
optional_attrs = list(crypto.hashes.keys()) + ["checksum"]
optional_attrs = [*crypto.hashes.keys(), "checksum"]
def __init__(self, url=None, checksum=None, **kwargs):
def __init__(self, *, url: str, checksum: Optional[str] = None, **kwargs) -> None:
super().__init__(**kwargs)
# Prefer values in kwargs to the positionals.
self.url = kwargs.get("url", url)
self.url = url
self.mirrors = kwargs.get("mirrors", [])
# digest can be set as the first argument, or from an explicit
# kwarg by the hash name.
self.digest = kwargs.get("checksum", checksum)
self.digest: Optional[str] = checksum
for h in self.optional_attrs:
if h in kwargs:
self.digest = kwargs[h]
self.expand_archive = kwargs.get("expand", True)
self.extra_options = kwargs.get("fetch_options", {})
self._curl = None
self.extension = kwargs.get("extension", None)
if not self.url:
raise ValueError("URLFetchStrategy requires a url for fetching.")
self.expand_archive: bool = kwargs.get("expand", True)
self.extra_options: dict = kwargs.get("fetch_options", {})
self._curl: Optional[Executable] = None
self.extension: Optional[str] = kwargs.get("extension", None)
@property
def curl(self):
def curl(self) -> Executable:
if not self._curl:
self._curl = web_util.require_curl()
return self._curl
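With the keyword-only constructor above, callers spell the URL and digest out explicitly; a minimal sketch with placeholder values (the diff itself uses the same ``sha256=`` spelling elsewhere):

from spack.fetch_strategy import URLFetchStrategy

fetcher = URLFetchStrategy(
    url="https://example.com/foo-1.0.tar.gz",
    sha256="0" * 64,  # any hash name from crypto.hashes, or checksum=..., sets the digest
    expand=True,
)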
@@ -348,8 +343,8 @@ def _fetch_urllib(self, url):
if os.path.lexists(save_file):
os.remove(save_file)
with open(save_file, "wb") as _open_file:
shutil.copyfileobj(response, _open_file)
with open(save_file, "wb") as f:
shutil.copyfileobj(response, f)
self._check_headers(str(response.headers))
@@ -468,7 +463,7 @@ def check(self):
"""Check the downloaded archive against a checksum digest.
No-op if this stage checks code out of a repository."""
if not self.digest:
raise NoDigestError("Attempt to check URLFetchStrategy with no digest.")
raise NoDigestError(f"Attempt to check {self.__class__.__name__} with no digest.")
verify_checksum(self.archive_file, self.digest)
@@ -479,8 +474,8 @@ def reset(self):
"""
if not self.archive_file:
raise NoArchiveFileError(
"Tried to reset URLFetchStrategy before fetching",
"Failed on reset() for URL %s" % self.url,
f"Tried to reset {self.__class__.__name__} before fetching",
f"Failed on reset() for URL{self.url}",
)
# Remove everything but the archive from the stage
@@ -493,14 +488,10 @@ def reset(self):
self.expand()
def __repr__(self):
url = self.url if self.url else "no url"
return "%s<%s>" % (self.__class__.__name__, url)
return f"{self.__class__.__name__}<{self.url}>"
def __str__(self):
if self.url:
return self.url
else:
return "[no url]"
return self.url
@fetcher
@@ -513,7 +504,7 @@ def fetch(self):
# check whether the cache file exists.
if not os.path.isfile(path):
raise NoCacheError("No cache of %s" % path)
raise NoCacheError(f"No cache of {path}")
# remove old symlink if one is there.
filename = self.stage.save_filename
@@ -523,8 +514,8 @@ def fetch(self):
# Symlink to local cached archive.
symlink(path, filename)
# Remove link if checksum fails, or subsequent fetchers
# will assume they don't need to download.
# Remove link if checksum fails, or subsequent fetchers will assume they don't need to
# download.
if self.digest:
try:
self.check()
@@ -533,12 +524,12 @@ def fetch(self):
raise
# Notify the user how we fetched.
tty.msg("Using cached archive: {0}".format(path))
tty.msg(f"Using cached archive: {path}")
class OCIRegistryFetchStrategy(URLFetchStrategy):
def __init__(self, url=None, checksum=None, **kwargs):
super().__init__(url, checksum, **kwargs)
def __init__(self, *, url: str, checksum: Optional[str] = None, **kwargs):
super().__init__(url=url, checksum=checksum, **kwargs)
self._urlopen = kwargs.get("_urlopen", spack.oci.opener.urlopen)
@@ -583,18 +574,18 @@ def __init__(self, **kwargs):
# Set a URL based on the type of fetch strategy.
self.url = kwargs.get(self.url_attr, None)
if not self.url:
raise ValueError("%s requires %s argument." % (self.__class__, self.url_attr))
raise ValueError(f"{self.__class__} requires {self.url_attr} argument.")
for attr in self.optional_attrs:
setattr(self, attr, kwargs.get(attr, None))
@_needs_stage
def check(self):
tty.debug("No checksum needed when fetching with {0}".format(self.url_attr))
tty.debug(f"No checksum needed when fetching with {self.url_attr}")
@_needs_stage
def expand(self):
tty.debug("Source fetched with %s is already expanded." % self.url_attr)
tty.debug(f"Source fetched with {self.url_attr} is already expanded.")
@_needs_stage
def archive(self, destination, *, exclude: Optional[str] = None):
@@ -614,10 +605,10 @@ def archive(self, destination, *, exclude: Optional[str] = None):
)
def __str__(self):
return "VCS: %s" % self.url
return f"VCS: {self.url}"
def __repr__(self):
return "%s<%s>" % (self.__class__, self.url)
return f"{self.__class__}<{self.url}>"
@fetcher
@@ -720,11 +711,17 @@ class GitFetchStrategy(VCSFetchStrategy):
"submodules",
"get_full_repo",
"submodules_delete",
"git_sparse_paths",
]
git_version_re = r"git version (\S+)"
def __init__(self, **kwargs):
self.commit: Optional[str] = None
self.tag: Optional[str] = None
self.branch: Optional[str] = None
# Discards the keywords in kwargs that may conflict with the next call
# to __init__
forwarded_args = copy.copy(kwargs)
@@ -735,6 +732,7 @@ def __init__(self, **kwargs):
self.submodules = kwargs.get("submodules", False)
self.submodules_delete = kwargs.get("submodules_delete", False)
self.get_full_repo = kwargs.get("get_full_repo", False)
self.git_sparse_paths = kwargs.get("git_sparse_paths", None)
@property
def git_version(self):
@@ -772,68 +770,71 @@ def git(self):
@property
def cachable(self):
return self.cache_enabled and bool(self.commit or self.tag)
return self.cache_enabled and bool(self.commit)
def source_id(self):
return self.commit or self.tag
# TODO: tree-hash would secure download cache and mirrors, commit only secures checkouts.
return self.commit
def mirror_id(self):
repo_ref = self.commit or self.tag or self.branch
if repo_ref:
if self.commit:
repo_path = urllib.parse.urlparse(self.url).path
result = os.path.sep.join(["git", repo_path, repo_ref])
result = os.path.sep.join(["git", repo_path, self.commit])
return result
def _repo_info(self):
args = ""
if self.commit:
args = " at commit {0}".format(self.commit)
args = f" at commit {self.commit}"
elif self.tag:
args = " at tag {0}".format(self.tag)
args = f" at tag {self.tag}"
elif self.branch:
args = " on branch {0}".format(self.branch)
args = f" on branch {self.branch}"
return "{0}{1}".format(self.url, args)
return f"{self.url}{args}"
@_needs_stage
def fetch(self):
if self.stage.expanded:
tty.debug("Already fetched {0}".format(self.stage.source_path))
tty.debug(f"Already fetched {self.stage.source_path}")
return
self.clone(commit=self.commit, branch=self.branch, tag=self.tag)
if self.git_sparse_paths:
self._sparse_clone_src()
else:
self._clone_src()
self.submodule_operations()
def clone(self, dest=None, commit=None, branch=None, tag=None, bare=False):
def bare_clone(self, dest: str) -> None:
"""
Clone a repository to a path.
Execute a bare clone for metadata only
This method handles cloning from git, but does not require a stage.
Arguments:
dest (str or None): The path into which the code is cloned. If None,
requires a stage and uses the stage's source path.
commit (str or None): A commit to fetch from the remote. Only one of
commit, branch, and tag may be non-None.
branch (str or None): A branch to fetch from the remote.
tag (str or None): A tag to fetch from the remote.
bare (bool): Execute a "bare" git clone (--bare option to git)
Requires a destination since bare cloning does not provide source
and shouldn't be used for staging.
"""
# Default to spack source path
dest = dest or self.stage.source_path
tty.debug("Cloning git repository: {0}".format(self._repo_info()))
tty.debug(f"Cloning git repository: {self._repo_info()}")
git = self.git
debug = spack.config.get("config:debug")
if bare:
# We don't need to worry about which commit/branch/tag is checked out
clone_args = ["clone", "--bare"]
if not debug:
clone_args.append("--quiet")
clone_args.extend([self.url, dest])
git(*clone_args)
elif commit:
# We don't need to worry about which commit/branch/tag is checked out
clone_args = ["clone", "--bare"]
if not debug:
clone_args.append("--quiet")
clone_args.extend([self.url, dest])
git(*clone_args)
def _clone_src(self) -> None:
"""Clone a repository to a path using git."""
# Default to spack source path
dest = self.stage.source_path
tty.debug(f"Cloning git repository: {self._repo_info()}")
git = self.git
debug = spack.config.get("config:debug")
if self.commit:
# Need to do a regular clone and check out everything if
# they asked for a particular commit.
clone_args = ["clone", self.url]
@@ -852,7 +853,7 @@ def clone(self, dest=None, commit=None, branch=None, tag=None, bare=False):
)
with working_dir(dest):
checkout_args = ["checkout", commit]
checkout_args = ["checkout", self.commit]
if not debug:
checkout_args.insert(1, "--quiet")
git(*checkout_args)
@@ -864,10 +865,10 @@ def clone(self, dest=None, commit=None, branch=None, tag=None, bare=False):
args.append("--quiet")
# If we want a particular branch ask for it.
if branch:
args.extend(["--branch", branch])
elif tag and self.git_version >= spack.version.Version("1.8.5.2"):
args.extend(["--branch", tag])
if self.branch:
args.extend(["--branch", self.branch])
elif self.tag and self.git_version >= spack.version.Version("1.8.5.2"):
args.extend(["--branch", self.tag])
# Try to be efficient if we're using a new enough git.
# This checks out only one branch's history
@@ -899,7 +900,7 @@ def clone(self, dest=None, commit=None, branch=None, tag=None, bare=False):
# For tags, be conservative and check them out AFTER
# cloning. Later git versions can do this with clone
# --branch, but older ones fail.
if tag and self.git_version < spack.version.Version("1.8.5.2"):
if self.tag and self.git_version < spack.version.Version("1.8.5.2"):
# pull --tags returns a "special" error code of 1 in
# older versions that we have to ignore.
# see: https://github.com/git/git/commit/19d122b
@@ -912,6 +913,79 @@ def clone(self, dest=None, commit=None, branch=None, tag=None, bare=False):
git(*pull_args, ignore_errors=1)
git(*co_args)
def _sparse_clone_src(self, **kwargs):
"""Use git's sparse checkout feature to clone portions of a git repository"""
dest = self.stage.source_path
git = self.git
if self.git_version < spack.version.Version("2.26.0"):
# technically this should be supported for 2.25, but bumping for OS issues
# see https://github.com/spack/spack/issues/45771
# code paths exist where the package is not set. Ensure some identifier for the
# package that was configured for sparse checkout exists in the error message
identifier = str(self.url)
if self.package:
identifier += f" ({self.package.name})"
tty.warn(
(
f"{identifier} is configured for git sparse-checkout "
"but the git version is too old to support sparse cloning. "
"Cloning the full repository instead."
)
)
self._clone_src()
else:
# default to depth=2 to allow for retention of some git properties
depth = kwargs.get("depth", 2)
needs_fetch = self.branch or self.tag
git_ref = self.branch or self.tag or self.commit
assert git_ref
clone_args = ["clone"]
if needs_fetch:
clone_args.extend(["--branch", git_ref])
if self.get_full_repo:
clone_args.append("--no-single-branch")
else:
clone_args.append("--single-branch")
clone_args.extend(
[f"--depth={depth}", "--no-checkout", "--filter=blob:none", self.url]
)
sparse_args = ["sparse-checkout", "set"]
if callable(self.git_sparse_paths):
sparse_args.extend(self.git_sparse_paths())
else:
sparse_args.extend([p for p in self.git_sparse_paths])
sparse_args.append("--cone")
checkout_args = ["checkout", git_ref]
if not spack.config.get("config:debug"):
clone_args.insert(1, "--quiet")
checkout_args.insert(1, "--quiet")
with temp_cwd():
git(*clone_args)
repo_name = get_single_file(".")
if self.stage:
self.stage.srcdir = repo_name
shutil.move(repo_name, dest)
with working_dir(dest):
git(*sparse_args)
git(*checkout_args)
def submodule_operations(self):
dest = self.stage.source_path
git = self.git
if self.submodules_delete:
with working_dir(dest):
for submodule_to_delete in self.submodules_delete:
@@ -964,7 +1038,7 @@ def protocol_supports_shallow_clone(self):
return not (self.url.startswith("http://") or self.url.startswith("/"))
def __str__(self):
return "[git] {0}".format(self._repo_info())
return f"[git] {self._repo_info()}"
@fetcher
@@ -1288,7 +1362,7 @@ def reset(self):
shutil.move(scrubbed, source_path)
def __str__(self):
return "[hg] %s" % self.url
return f"[hg] {self.url}"
@fetcher
@@ -1297,47 +1371,16 @@ class S3FetchStrategy(URLFetchStrategy):
url_attr = "s3"
def __init__(self, *args, **kwargs):
try:
super().__init__(*args, **kwargs)
except ValueError:
if not kwargs.get("url"):
raise ValueError("S3FetchStrategy requires a url for fetching.")
@_needs_stage
def fetch(self):
if not self.url.startswith("s3://"):
raise spack.error.FetchError(
f"{self.__class__.__name__} can only fetch from s3:// urls."
)
if self.archive_file:
tty.debug(f"Already downloaded {self.archive_file}")
return
parsed_url = urllib.parse.urlparse(self.url)
if parsed_url.scheme != "s3":
raise spack.error.FetchError("S3FetchStrategy can only fetch from s3:// urls.")
basename = os.path.basename(parsed_url.path)
request = urllib.request.Request(
self.url, headers={"User-Agent": web_util.SPACK_USER_AGENT}
)
with working_dir(self.stage.path):
try:
response = web_util.urlopen(request)
except (TimeoutError, urllib.error.URLError) as e:
raise FailedDownloadError(e) from e
tty.debug(f"Fetching {self.url}")
with open(basename, "wb") as f:
shutil.copyfileobj(response, f)
content_type = web_util.get_header(response.headers, "Content-type")
if content_type == "text/html":
warn_content_type_mismatch(self.archive_file or "the archive")
if self.stage.save_filename:
fs.rename(os.path.join(self.stage.path, basename), self.stage.save_filename)
self._fetch_urllib(self.url)
if not self.archive_file:
raise FailedDownloadError(
RuntimeError(f"Missing archive {self.archive_file} after fetching")
@@ -1350,46 +1393,17 @@ class GCSFetchStrategy(URLFetchStrategy):
url_attr = "gs"
def __init__(self, *args, **kwargs):
try:
super().__init__(*args, **kwargs)
except ValueError:
if not kwargs.get("url"):
raise ValueError("GCSFetchStrategy requires a url for fetching.")
@_needs_stage
def fetch(self):
if not self.url.startswith("gs"):
raise spack.error.FetchError(
f"{self.__class__.__name__} can only fetch from gs:// urls."
)
if self.archive_file:
tty.debug("Already downloaded {0}".format(self.archive_file))
tty.debug(f"Already downloaded {self.archive_file}")
return
parsed_url = urllib.parse.urlparse(self.url)
if parsed_url.scheme != "gs":
raise spack.error.FetchError("GCSFetchStrategy can only fetch from gs:// urls.")
basename = os.path.basename(parsed_url.path)
request = urllib.request.Request(
self.url, headers={"User-Agent": web_util.SPACK_USER_AGENT}
)
with working_dir(self.stage.path):
try:
response = web_util.urlopen(request)
except (TimeoutError, urllib.error.URLError) as e:
raise FailedDownloadError(e) from e
tty.debug(f"Fetching {self.url}")
with open(basename, "wb") as f:
shutil.copyfileobj(response, f)
content_type = web_util.get_header(response.headers, "Content-type")
if content_type == "text/html":
warn_content_type_mismatch(self.archive_file or "the archive")
if self.stage.save_filename:
os.rename(os.path.join(self.stage.path, basename), self.stage.save_filename)
self._fetch_urllib(self.url)
if not self.archive_file:
raise FailedDownloadError(
@@ -1403,7 +1417,7 @@ class FetchAndVerifyExpandedFile(URLFetchStrategy):
as well as after expanding it."""
def __init__(self, url, archive_sha256: str, expanded_sha256: str):
super().__init__(url, archive_sha256)
super().__init__(url=url, checksum=archive_sha256)
self.expanded_sha256 = expanded_sha256
def expand(self):
@@ -1445,14 +1459,14 @@ def stable_target(fetcher):
return False
def from_url(url):
def from_url(url: str) -> URLFetchStrategy:
"""Given a URL, find an appropriate fetch strategy for it.
Currently just gives you a URLFetchStrategy that uses curl.
TODO: make this return appropriate fetch strategies for other
types of URLs.
"""
return URLFetchStrategy(url)
return URLFetchStrategy(url=url)
def from_kwargs(**kwargs):
@@ -1521,10 +1535,12 @@ def _check_version_attributes(fetcher, pkg, version):
def _extrapolate(pkg, version):
"""Create a fetcher from an extrapolated URL for this version."""
try:
return URLFetchStrategy(pkg.url_for_version(version), fetch_options=pkg.fetch_options)
return URLFetchStrategy(url=pkg.url_for_version(version), fetch_options=pkg.fetch_options)
except spack.package_base.NoURLError:
msg = "Can't extrapolate a URL for version %s " "because package %s defines no URLs"
raise ExtrapolationError(msg % (version, pkg.name))
raise ExtrapolationError(
f"Can't extrapolate a URL for version {version} because "
f"package {pkg.name} defines no URLs"
)
def _from_merged_attrs(fetcher, pkg, version):
@@ -1541,8 +1557,11 @@ def _from_merged_attrs(fetcher, pkg, version):
attrs["fetch_options"] = pkg.fetch_options
attrs.update(pkg.versions[version])
if fetcher.url_attr == "git" and hasattr(pkg, "submodules"):
attrs.setdefault("submodules", pkg.submodules)
if fetcher.url_attr == "git":
pkg_attr_list = ["submodules", "git_sparse_paths"]
for pkg_attr in pkg_attr_list:
if hasattr(pkg, pkg_attr):
attrs.setdefault(pkg_attr, getattr(pkg, pkg_attr))
return fetcher(**attrs)
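Because of the attribute forwarding above, a package can opt into sparse checkout with a class-level attribute, the same way ``submodules`` is forwarded. The package name, URL, and paths below are made up for illustration:

from spack.package import *  # as in a regular package.py

class MyPkg(Package):
    git = "https://example.com/org/repo.git"
    git_sparse_paths = ["src", "cmake"]  # may also be a callable returning paths

    version("1.0", commit="0123456789abcdef0123456789abcdef01234567")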
@@ -1637,11 +1656,9 @@ def for_package_version(pkg, version=None):
raise InvalidArgsError(pkg, version, **args)
def from_url_scheme(url, *args, **kwargs):
def from_url_scheme(url: str, **kwargs) -> FetchStrategy:
"""Finds a suitable FetchStrategy by matching its url_attr with the scheme
in the given url."""
url = kwargs.get("url", url)
parsed_url = urllib.parse.urlparse(url, scheme="file")
scheme_mapping = kwargs.get("scheme_mapping") or {
@@ -1658,11 +1675,9 @@ def from_url_scheme(url, *args, **kwargs):
for fetcher in all_strategies:
url_attr = getattr(fetcher, "url_attr", None)
if url_attr and url_attr == scheme:
return fetcher(url, *args, **kwargs)
return fetcher(url=url, **kwargs)
raise ValueError(
'No FetchStrategy found for url with scheme: "{SCHEME}"'.format(SCHEME=parsed_url.scheme)
)
raise ValueError(f'No FetchStrategy found for url with scheme: "{parsed_url.scheme}"')
def from_list_url(pkg):
@@ -1687,7 +1702,9 @@ def from_list_url(pkg):
)
# construct a fetcher
return URLFetchStrategy(url_from_list, checksum, fetch_options=pkg.fetch_options)
return URLFetchStrategy(
url=url_from_list, checksum=checksum, fetch_options=pkg.fetch_options
)
except KeyError as e:
tty.debug(e)
tty.msg("Cannot find version %s in url_list" % pkg.version)
@@ -1715,10 +1732,10 @@ def store(self, fetcher, relative_dest):
mkdirp(os.path.dirname(dst))
fetcher.archive(dst)
def fetcher(self, target_path, digest, **kwargs):
def fetcher(self, target_path: str, digest: Optional[str], **kwargs) -> CacheURLFetchStrategy:
path = os.path.join(self.root, target_path)
url = url_util.path_to_file_url(path)
return CacheURLFetchStrategy(url, digest, **kwargs)
return CacheURLFetchStrategy(url=url, checksum=digest, **kwargs)
def destroy(self):
shutil.rmtree(self.root, ignore_errors=True)

View File

@@ -20,6 +20,7 @@
systems (e.g. modules, lmod, etc.) or to add other custom
features.
"""
import importlib
from llnl.util.lang import ensure_last, list_modules
@@ -46,11 +47,7 @@ def _populate_hooks(cls):
for name in relative_names:
module_name = __name__ + "." + name
# When importing a module from a package, __import__('A.B', ...)
# returns package A when 'fromlist' is empty. If fromlist is not
# empty it returns the submodule B instead
# See: https://stackoverflow.com/a/2725668/771663
module_obj = __import__(module_name, fromlist=[None])
module_obj = importlib.import_module(module_name)
cls._hooks.append((module_name, module_obj))
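For context, the difference between the two calls is standard-library behavior, sketched below (nothing Spack-specific):

import importlib

top = __import__("importlib.machinery")               # returns the top-level package
sub = importlib.import_module("importlib.machinery")  # returns the submodule itself

assert top is importlib
assert sub.__name__ == "importlib.machinery"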
@property

View File

@@ -24,5 +24,6 @@ def post_install(spec, explicit):
# Push the package to all autopush mirrors
for mirror in spack.mirror.MirrorCollection(binary=True, autopush=True).values():
signing_key = bindist.select_signing_key() if mirror.signed else None
bindist.push_or_raise([spec], out_url=mirror.push_url, signing_key=signing_key, force=True)
with bindist.make_uploader(mirror=mirror, force=True, signing_key=signing_key) as uploader:
uploader.push_or_raise([spec])
tty.msg(f"{spec.name}: Pushed to build cache: '{mirror.name}'")

View File

@@ -1611,9 +1611,7 @@ def _add_tasks(self, request: BuildRequest, all_deps):
def _add_compiler_package_to_config(self, pkg: "spack.package_base.PackageBase") -> None:
compiler_search_prefix = getattr(pkg, "compiler_search_prefix", pkg.spec.prefix)
spack.compilers.add_compilers_to_config(
spack.compilers.find_compilers([compiler_search_prefix])
)
spack.compilers.find_compilers([compiler_search_prefix])
def _install_task(self, task: BuildTask, install_status: InstallStatus) -> None:
"""

View File

@@ -21,6 +21,7 @@
from typing import List, Optional, Union
import llnl.url
import llnl.util.symlink
import llnl.util.tty as tty
from llnl.util.filesystem import mkdirp
@@ -30,6 +31,7 @@
import spack.fetch_strategy
import spack.mirror
import spack.oci.image
import spack.repo
import spack.spec
import spack.util.path
import spack.util.spack_json as sjson
@@ -426,51 +428,74 @@ def _determine_extension(fetcher):
return ext
class MirrorReference:
"""A ``MirrorReference`` stores the relative paths where you can store a
package/resource in a mirror directory.
class MirrorLayout:
"""A ``MirrorLayout`` object describes the relative path of a mirror entry."""
The appropriate storage location is given by ``storage_path``. The
``cosmetic_path`` property provides a reference that a human could generate
themselves based on reading the details of the package.
A user can iterate over a ``MirrorReference`` object to get all the
possible names that might be used to refer to the resource in a mirror;
this includes names generated by previous naming schemes that are no-longer
reported by ``storage_path`` or ``cosmetic_path``.
"""
def __init__(self, cosmetic_path, global_path=None):
self.global_path = global_path
self.cosmetic_path = cosmetic_path
@property
def storage_path(self):
if self.global_path:
return self.global_path
else:
return self.cosmetic_path
def __init__(self, path: str) -> None:
self.path = path
def __iter__(self):
if self.global_path:
yield self.global_path
yield self.cosmetic_path
"""Yield all paths including aliases where the resource can be found."""
yield self.path
def make_alias(self, root: str) -> None:
"""Make the entry ``root / self.path`` available under a human readable alias"""
pass
class OCIImageLayout:
"""Follow the OCI Image Layout Specification to archive blobs
class DefaultLayout(MirrorLayout):
def __init__(self, alias_path: str, digest_path: Optional[str] = None) -> None:
# When we have a digest, it is used as the primary storage location. If not, then we use
# the human-readable alias. In case of mirrors of a VCS checkout, we currently do not have
# a digest, which is why an alias is required and a digest optional.
super().__init__(path=digest_path or alias_path)
self.alias = alias_path
self.digest_path = digest_path
Paths are of the form `blobs/<algorithm>/<digest>`
"""
def make_alias(self, root: str) -> None:
"""Symlink a human readible path in our mirror to the actual storage location."""
# We already use the human-readable path as the main storage location.
if not self.digest_path:
return
alias, digest = os.path.join(root, self.alias), os.path.join(root, self.digest_path)
alias_dir = os.path.dirname(alias)
relative_dst = os.path.relpath(digest, start=alias_dir)
mkdirp(alias_dir)
tmp = f"{alias}.tmp"
llnl.util.symlink.symlink(relative_dst, tmp)
try:
os.rename(tmp, alias)
except OSError:
# Clean up the temporary if possible
try:
os.unlink(tmp)
except OSError:
pass
raise
def __iter__(self):
if self.digest_path:
yield self.digest_path
yield self.alias
class OCILayout(MirrorLayout):
"""Follow the OCI Image Layout Specification to archive blobs where paths are of the form
``blobs/<algorithm>/<digest>``"""
def __init__(self, digest: spack.oci.image.Digest) -> None:
self.storage_path = os.path.join("blobs", digest.algorithm, digest.digest)
def __iter__(self):
yield self.storage_path
super().__init__(os.path.join("blobs", digest.algorithm, digest.digest))
def mirror_archive_paths(fetcher, per_package_ref, spec=None):
def default_mirror_layout(
fetcher: "spack.fetch_strategy.FetchStrategy",
per_package_ref: str,
spec: Optional["spack.spec.Spec"] = None,
) -> MirrorLayout:
"""Returns a ``MirrorReference`` object which keeps track of the relative
storage path of the resource associated with the specified ``fetcher``."""
ext = None
@@ -494,7 +519,7 @@ def mirror_archive_paths(fetcher, per_package_ref, spec=None):
if global_ref and ext:
global_ref += ".%s" % ext
return MirrorReference(per_package_ref, global_ref)
return DefaultLayout(per_package_ref, global_ref)
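A small, made-up illustration of how the new layout classes behave (both paths below are placeholders):

from spack.mirror import DefaultLayout

layout = DefaultLayout(
    alias_path="zlib/zlib-1.3.1.tar.gz",
    digest_path="_source-cache/archive/ab/abcd1234.tar.gz",
)
assert layout.path == "_source-cache/archive/ab/abcd1234.tar.gz"  # digest is the primary location
assert list(layout) == [layout.digest_path, layout.alias]
# layout.make_alias(root) would symlink root/<alias> -> root/<digest path>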
def get_all_versions(specs):

View File

@@ -28,11 +28,9 @@
import inspect
from contextlib import contextmanager
from llnl.util.lang import caller_locals
import spack.directives
import spack.directives_meta
import spack.error
from spack.spec import Spec
import spack.spec
class MultiMethodMeta(type):
@@ -165,9 +163,9 @@ def __init__(self, condition):
condition (str): condition to be met
"""
if isinstance(condition, bool):
self.spec = Spec() if condition else None
self.spec = spack.spec.Spec() if condition else None
else:
self.spec = Spec(condition)
self.spec = spack.spec.Spec(condition)
def __call__(self, method):
"""This annotation lets packages declare multiple versions of
@@ -229,11 +227,9 @@ def install(self, prefix):
platform-specific versions. There's not much we can do to get
around this because of the way decorators work.
"""
# In Python 2, Get the first definition of the method in the
# calling scope by looking at the caller's locals. In Python 3,
# we handle this using MultiMethodMeta.__prepare__.
if MultiMethodMeta._locals is None:
MultiMethodMeta._locals = caller_locals()
assert (
MultiMethodMeta._locals is not None
), "cannot use multimethod, missing MultiMethodMeta metaclass?"
# Create a multimethod with this name if there is not one already
original_method = MultiMethodMeta._locals.get(method.__name__)
@@ -266,17 +262,17 @@ def __enter__(self):
and add their constraint to whatever may be already present in the directive
`when=` argument.
"""
spack.directives.DirectiveMeta.push_to_context(str(self.spec))
spack.directives_meta.DirectiveMeta.push_to_context(str(self.spec))
def __exit__(self, exc_type, exc_val, exc_tb):
spack.directives.DirectiveMeta.pop_from_context()
spack.directives_meta.DirectiveMeta.pop_from_context()
@contextmanager
def default_args(**kwargs):
spack.directives.DirectiveMeta.push_default_args(kwargs)
spack.directives_meta.DirectiveMeta.push_default_args(kwargs)
yield
spack.directives.DirectiveMeta.pop_default_args()
spack.directives_meta.DirectiveMeta.pop_default_args()
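For reference, these are the package-side context managers that push and pop those constraints; typical usage inside a package.py (where the names come from ``from spack.package import *``) looks like the following, with illustrative dependencies:

with when("+mpi"):
    depends_on("mpi")

with default_args(type="build"):
    depends_on("cmake")
    depends_on("ninja")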
class MultiMethodError(spack.error.SpackError):

View File

@@ -390,15 +390,12 @@ def make_stage(
) -> spack.stage.Stage:
_urlopen = _urlopen or spack.oci.opener.urlopen
fetch_strategy = spack.fetch_strategy.OCIRegistryFetchStrategy(
url, checksum=digest.digest, _urlopen=_urlopen
url=url, checksum=digest.digest, _urlopen=_urlopen
)
# Use blobs/<alg>/<encoded> as the cache path, which follows
# the OCI Image Layout Specification. What's missing though,
# is the `oci-layout` and `index.json` files, which are
# required by the spec.
return spack.stage.Stage(
fetch_strategy,
mirror_paths=spack.mirror.OCIImageLayout(digest),
name=digest.digest,
keep=keep,
fetch_strategy, mirror_paths=spack.mirror.OCILayout(digest), name=digest.digest, keep=keep
)

View File

@@ -15,6 +15,7 @@
import functools
import glob
import hashlib
import importlib
import inspect
import io
import os
@@ -740,7 +741,7 @@ def __init__(self, spec):
raise ValueError(msg.format(self))
# init internal variables
self._stage = None
self._stage: Optional[StageComposite] = None
self._fetcher = None
self._tester: Optional["PackageTest"] = None
@@ -868,7 +869,7 @@ def module(cls):
We use this to add variables to package modules. This makes
install() methods easier to write (e.g., can call configure())
"""
return __import__(cls.__module__, fromlist=[cls.__name__])
return importlib.import_module(cls.__module__)
@classproperty
def namespace(cls):
@@ -1098,9 +1099,10 @@ def _make_resource_stage(self, root_stage, resource):
root=root_stage,
resource=resource,
name=self._resource_stage(resource),
mirror_paths=spack.mirror.mirror_archive_paths(
mirror_paths=spack.mirror.default_mirror_layout(
resource.fetcher, os.path.join(self.name, pretty_resource_name)
),
mirrors=spack.mirror.MirrorCollection(source=True).values(),
path=self.path,
)
@@ -1112,7 +1114,7 @@ def _make_root_stage(self, fetcher):
# Construct a mirror path (TODO: get this out of package.py)
format_string = "{name}-{version}"
pretty_name = self.spec.format_path(format_string)
mirror_paths = spack.mirror.mirror_archive_paths(
mirror_paths = spack.mirror.default_mirror_layout(
fetcher, os.path.join(self.name, pretty_name), self.spec
)
# Construct a path where the stage should build..
@@ -1121,6 +1123,7 @@ def _make_root_stage(self, fetcher):
stage = Stage(
fetcher,
mirror_paths=mirror_paths,
mirrors=spack.mirror.MirrorCollection(source=True).values(),
name=stage_name,
path=self.path,
search_fn=self._download_search,
@@ -1177,7 +1180,7 @@ def stage(self):
return self._stage
@stage.setter
def stage(self, stage):
def stage(self, stage: StageComposite):
"""Allow a stage object to be set to override the default."""
self._stage = stage

View File

@@ -319,18 +319,19 @@ def stage(self) -> "spack.stage.Stage":
self.url, archive_sha256=self.archive_sha256, expanded_sha256=self.sha256
)
else:
fetcher = fs.URLFetchStrategy(self.url, sha256=self.sha256, expand=False)
fetcher = fs.URLFetchStrategy(url=self.url, sha256=self.sha256, expand=False)
# The same package can have multiple patches with the same name but
# with different contents, therefore apply a subset of the hash.
name = "{0}-{1}".format(os.path.basename(self.url), fetch_digest[:7])
per_package_ref = os.path.join(self.owner.split(".")[-1], name)
mirror_ref = spack.mirror.mirror_archive_paths(fetcher, per_package_ref)
mirror_ref = spack.mirror.default_mirror_layout(fetcher, per_package_ref)
self._stage = spack.stage.Stage(
fetcher,
name=f"{spack.stage.stage_prefix}patch-{fetch_digest}",
mirror_paths=mirror_ref,
mirrors=spack.mirror.MirrorCollection(source=True).values(),
)
return self._stage

View File

@@ -365,9 +365,9 @@ def __init__(self, namespace):
def __getattr__(self, name):
"""Getattr lazily loads modules if they're not already loaded."""
submodule = self.__package__ + "." + name
submodule = f"{self.__package__}.{name}"
try:
setattr(self, name, __import__(submodule))
setattr(self, name, importlib.import_module(submodule))
except ImportError:
msg = "'{0}' object has no attribute {1}"
raise AttributeError(msg.format(type(self), name))

View File

@@ -32,7 +32,6 @@
import spack.config
import spack.config as sc
import spack.deptypes as dt
import spack.directives
import spack.environment as ev
import spack.error
import spack.package_base
@@ -285,16 +284,14 @@ def _create_counter(specs: List[spack.spec.Spec], tests: bool):
return NoDuplicatesCounter(specs, tests=tests)
def all_compilers_in_config(configuration):
return spack.compilers.all_compilers_from(configuration)
def all_libcs() -> Set[spack.spec.Spec]:
"""Return a set of all libc specs targeted by any configured compiler. If none, fall back to
libc determined from the current Python process if dynamically linked."""
libcs = {
c.default_libc for c in all_compilers_in_config(spack.config.CONFIG) if c.default_libc
c.default_libc
for c in spack.compilers.all_compilers_from(spack.config.CONFIG)
if c.default_libc
}
if libcs:
@@ -613,7 +610,7 @@ def _external_config_with_implicit_externals(configuration):
if not using_libc_compatibility():
return packages_yaml
for compiler in all_compilers_in_config(configuration):
for compiler in spack.compilers.all_compilers_from(configuration):
libc = compiler.default_libc
if libc:
entry = {"spec": f"{libc} %{compiler.spec}", "prefix": libc.external_path}
@@ -1880,8 +1877,7 @@ def _spec_clauses(
# validate variant value only if spec not concrete
if not spec.concrete:
reserved_names = spack.directives.reserved_names
if not spec.virtual and vname not in reserved_names:
if not spec.virtual and vname not in spack.variant.reserved_names:
pkg_cls = self.pkg_class(spec.name)
try:
variant_def, _ = pkg_cls.variants[vname]
@@ -3002,7 +2998,7 @@ class CompilerParser:
def __init__(self, configuration) -> None:
self.compilers: Set[KnownCompiler] = set()
for c in all_compilers_in_config(configuration):
for c in spack.compilers.all_compilers_from(configuration):
if using_libc_compatibility() and not c_compiler_runs(c):
tty.debug(
f"the C compiler {c.cc} does not exist, or does not run correctly."
@@ -3466,7 +3462,7 @@ def reorder_flags(self):
"""
# reverse compilers so we get highest priority compilers that share a spec
compilers = dict(
(c.spec, c) for c in reversed(all_compilers_in_config(spack.config.CONFIG))
(c.spec, c) for c in reversed(spack.compilers.all_compilers_from(spack.config.CONFIG))
)
cmd_specs = dict((s.name, s) for spec in self._command_line_specs for s in spec.traverse())

View File

@@ -1639,7 +1639,7 @@ def _add_flag(self, name, value, propagate):
Known flags currently include "arch"
"""
if propagate and name in spack.directives.reserved_names:
if propagate and name in vt.reserved_names:
raise UnsupportedPropagationError(
f"Propagation with '==' is not supported for '{name}'."
)
@@ -2935,9 +2935,7 @@ def ensure_valid_variants(spec):
pkg_variants = pkg_cls.variants
# reserved names are variants that may be set on any package
# but are not necessarily recorded by the package's class
not_existing = set(spec.variants) - (
set(pkg_variants) | set(spack.directives.reserved_names)
)
not_existing = set(spec.variants) - (set(pkg_variants) | set(vt.reserved_names))
if not_existing:
raise vt.UnknownVariantError(spec, not_existing)
@@ -4341,9 +4339,9 @@ def attach_git_version_lookup(self):
v.attach_lookup(spack.version.git_ref_lookup.GitRefLookup(self.fullname))
def parse_with_version_concrete(string: str, compiler: bool = False):
def parse_with_version_concrete(spec_like: Union[str, Spec], compiler: bool = False):
"""Same as Spec(string), but interprets @x as @=x"""
s: Union[CompilerSpec, Spec] = CompilerSpec(string) if compiler else Spec(string)
s: Union[CompilerSpec, Spec] = CompilerSpec(spec_like) if compiler else Spec(spec_like)
interpreted_version = s.versions.concrete_range_as_version
if interpreted_version:
s.versions = vn.VersionList([interpreted_version])
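Assuming the docstring above, a quick sketch of the intended behavior (the spec string is illustrative):

from spack.spec import parse_with_version_concrete

s = parse_with_version_concrete("zlib@1.3.1")
# behaves like Spec("zlib@=1.3.1"): the version is pinned exactly rather than
# interpreted as the open range 1.3.1: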

View File

@@ -13,7 +13,7 @@
import stat
import sys
import tempfile
from typing import Callable, Dict, Iterable, List, Optional, Set
from typing import Callable, Dict, Generator, Iterable, List, Optional, Set
import llnl.string
import llnl.util.lang
@@ -352,8 +352,10 @@ class Stage(LockableStagingDir):
def __init__(
self,
url_or_fetch_strategy,
*,
name=None,
mirror_paths=None,
mirror_paths: Optional[spack.mirror.MirrorLayout] = None,
mirrors: Optional[Iterable[spack.mirror.Mirror]] = None,
keep=False,
path=None,
lock=True,
@@ -362,36 +364,30 @@ def __init__(
"""Create a stage object.
Parameters:
url_or_fetch_strategy
URL of the archive to be downloaded into this stage, OR
a valid FetchStrategy.
URL of the archive to be downloaded into this stage, OR a valid FetchStrategy.
name
If a name is provided, then this stage is a named stage
and will persist between runs (or if you construct another
stage object later). If name is not provided, then this
If a name is provided, then this stage is a named stage and will persist between runs
(or if you construct another stage object later). If name is not provided, then this
stage will be given a unique name automatically.
mirror_paths
If provided, Stage will search Spack's mirrors for
this archive at each of the provided relative mirror paths
before using the default fetch strategy.
If provided, Stage will search Spack's mirrors for this archive at each of the
provided relative mirror paths before using the default fetch strategy.
keep
By default, when used as a context manager, the Stage
is deleted on exit when no exceptions are raised.
Pass True to keep the stage intact even if no
exceptions are raised.
By default, when used as a context manager, the Stage is deleted on exit when no
exceptions are raised. Pass True to keep the stage intact even if no exceptions are
raised.
path
If provided, the stage path to use for associated builds.
lock
True if the stage directory file lock is to be used, False
otherwise.
True if the stage directory file lock is to be used, False otherwise.
search_fn
The search function that provides the fetch strategy
instance.
The search function that provides the fetch strategy instance.
"""
super().__init__(name, path, keep, lock)
@@ -407,30 +403,37 @@ def __init__(
# self.fetcher can change with mirrors.
self.default_fetcher = self.fetcher
self.search_fn = search_fn
# used for mirrored archives of repositories.
self.skip_checksum_for_mirror = True
# If we fetch from a mirror, but the original data is from, say, git, we cannot currently
# prove that they are equal (we don't even have a tree hash in package.py). This bool is
# used to skip checksum verification and instead warn the user.
if isinstance(self.default_fetcher, fs.URLFetchStrategy):
self.skip_checksum_for_mirror = not bool(self.default_fetcher.digest)
else:
self.skip_checksum_for_mirror = True
self.srcdir = None
self.mirror_paths = mirror_paths
self.mirror_layout = mirror_paths
self.mirrors = list(mirrors) if mirrors else []
# Allow users to disable both mirrors and the download cache
self.default_fetcher_only = False
@property
def expected_archive_files(self):
"""Possible archive file paths."""
paths = []
fnames = []
expanded = True
if isinstance(self.default_fetcher, fs.URLFetchStrategy):
expanded = self.default_fetcher.expand_archive
fnames.append(url_util.default_download_filename(self.default_fetcher.url))
if self.mirror_paths:
fnames.extend(os.path.basename(x) for x in self.mirror_paths)
if self.mirror_layout:
fnames.append(os.path.basename(self.mirror_layout.path))
paths.extend(os.path.join(self.path, f) for f in fnames)
paths = [os.path.join(self.path, f) for f in fnames]
if not expanded:
# If the download file is not compressed, the "archive" is a
# single file placed in Stage.source_path
# If the download file is not compressed, the "archive" is a single file placed in
# Stage.source_path
paths.extend(os.path.join(self.source_path, f) for f in fnames)
return paths
@@ -463,80 +466,63 @@ def source_path(self):
"""Returns the well-known source directory path."""
return os.path.join(self.path, _source_path_subdir)
def disable_mirrors(self):
"""The Stage will not attempt to look for the associated fetcher
target in any of Spack's mirrors (including the local download cache).
"""
self.mirror_paths = []
def fetch(self, mirror_only=False, err_msg=None):
"""Retrieves the code or archive
Args:
mirror_only (bool): only fetch from a mirror
err_msg (str or None): the error message to display if all fetchers
fail or ``None`` for the default fetch failure message
"""
fetchers = []
def _generate_fetchers(self, mirror_only=False) -> Generator[fs.FetchStrategy, None, None]:
fetchers: List[fs.FetchStrategy] = []
if not mirror_only:
fetchers.append(self.default_fetcher)
# If this archive is normally fetched from a URL, then use the same digest.
if isinstance(self.default_fetcher, fs.URLFetchStrategy):
digest = self.default_fetcher.digest
expand = self.default_fetcher.expand_archive
extension = self.default_fetcher.extension
else:
digest = None
expand = True
extension = None
# TODO: move mirror logic out of here and clean it up!
# TODO: Or @alalazo may have some ideas about how to use a
# TODO: CompositeFetchStrategy here.
self.skip_checksum_for_mirror = True
if self.mirror_paths:
# Join URLs of mirror roots with mirror paths. Because
# urljoin() will strip everything past the final '/' in
# the root, so we add a '/' if it is not present.
mirror_urls = [
url_util.join(mirror.fetch_url, rel_path)
for mirror in spack.mirror.MirrorCollection(source=True).values()
if not mirror.fetch_url.startswith("oci://")
for rel_path in self.mirror_paths
]
# If this archive is normally fetched from a tarball URL,
# then use the same digest. `spack mirror` ensures that
# the checksum will be the same.
digest = None
expand = True
extension = None
if isinstance(self.default_fetcher, fs.URLFetchStrategy):
digest = self.default_fetcher.digest
expand = self.default_fetcher.expand_archive
extension = self.default_fetcher.extension
# Have to skip the checksum for things archived from
# repositories. How can this be made safer?
self.skip_checksum_for_mirror = not bool(digest)
if not self.default_fetcher_only and self.mirror_layout and self.mirrors:
# Add URL strategies for all the mirrors with the digest
# Insert fetchers in the order that the URLs are provided.
for url in reversed(mirror_urls):
fetchers.insert(
0, fs.from_url_scheme(url, digest, expand=expand, extension=extension)
fetchers[:0] = (
fs.from_url_scheme(
url_util.join(mirror.fetch_url, self.mirror_layout.path),
checksum=digest,
expand=expand,
extension=extension,
)
for mirror in self.mirrors
if not mirror.fetch_url.startswith("oci://") # no support for mirrors yet
)
if self.default_fetcher.cachable:
for rel_path in reversed(list(self.mirror_paths)):
cache_fetcher = spack.caches.FETCH_CACHE.fetcher(
rel_path, digest, expand=expand, extension=extension
)
fetchers.insert(0, cache_fetcher)
if not self.default_fetcher_only and self.mirror_layout and self.default_fetcher.cachable:
fetchers.insert(
0,
spack.caches.FETCH_CACHE.fetcher(
self.mirror_layout.path, digest, expand=expand, extension=extension
),
)
def generate_fetchers():
for fetcher in fetchers:
yield fetcher
# The search function may be expensive, so wait until now to
# call it so the user can stop if a prior fetcher succeeded
if self.search_fn and not mirror_only:
dynamic_fetchers = self.search_fn()
for fetcher in dynamic_fetchers:
yield fetcher
yield from fetchers
# The search function may be expensive, so wait until now to call it so the user can stop
# if a prior fetcher succeeded
if self.search_fn and not mirror_only:
yield from self.search_fn()
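The ordering and laziness above can be illustrated with a small self-contained sketch (plain Python with assumed names, not Spack code): cheap candidates are yielded first, and the expensive search only runs if the caller exhausts them without success.

def candidate_fetchers(cheap, expensive_search):
    yield from cheap                 # e.g. download cache, mirrors, default fetcher
    yield from expensive_search()    # only evaluated if everything above failed

calls = []
def search():
    calls.append("searched")
    return ["dynamic"]

for f in candidate_fetchers(["cache", "mirror"], search):
    if f == "mirror":   # pretend this fetcher succeeded
        break
assert calls == []      # the expensive search was never invoked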
def fetch(self, mirror_only: bool = False, err_msg: Optional[str] = None) -> None:
"""Retrieves the code or archive
Args:
mirror_only: only fetch from a mirror
err_msg: the error message to display if all fetchers fail or ``None`` for the default
fetch failure message
"""
errors: List[str] = []
for fetcher in generate_fetchers():
for fetcher in self._generate_fetchers(mirror_only):
try:
fetcher.stage = self
self.fetcher = fetcher
@@ -595,56 +581,60 @@ def steal_source(self, dest):
self.destroy()
def check(self):
"""Check the downloaded archive against a checksum digest.
No-op if this stage checks code out of a repository."""
"""Check the downloaded archive against a checksum digest."""
if self.fetcher is not self.default_fetcher and self.skip_checksum_for_mirror:
cache = isinstance(self.fetcher, fs.CacheURLFetchStrategy)
if cache:
secure_msg = "your download cache is in a secure location"
else:
secure_msg = "you trust this mirror and have a secure connection"
tty.warn(
"Fetching from mirror without a checksum!",
"This package is normally checked out from a version "
"control system, but it has been archived on a spack "
"mirror. This means we cannot know a checksum for the "
"tarball in advance. Be sure that your connection to "
"this mirror is secure!",
f"Using {'download cache' if cache else 'a mirror'} instead of version control",
"The required sources are normally checked out from a version control system, "
f"but have been archived {'in download cache' if cache else 'on a mirror'}: "
f"{self.fetcher}. Spack lacks a tree hash to verify the integrity of this "
f"archive. Make sure {secure_msg}.",
)
elif spack.config.get("config:checksum"):
self.fetcher.check()
def cache_local(self):
spack.caches.FETCH_CACHE.store(self.fetcher, self.mirror_paths.storage_path)
spack.caches.FETCH_CACHE.store(self.fetcher, self.mirror_layout.path)
def cache_mirror(self, mirror, stats):
def cache_mirror(
self, mirror: spack.caches.MirrorCache, stats: spack.mirror.MirrorStats
) -> None:
"""Perform a fetch if the resource is not already cached
Arguments:
mirror (spack.caches.MirrorCache): the mirror to cache this Stage's
resource in
stats (spack.mirror.MirrorStats): this is updated depending on whether the
caching operation succeeded or failed
mirror: the mirror to cache this Stage's resource in
stats: this is updated depending on whether the caching operation succeeded or failed
"""
if isinstance(self.default_fetcher, fs.BundleFetchStrategy):
# BundleFetchStrategy has no source to fetch. The associated
# fetcher does nothing but the associated stage may still exist.
# There is currently no method available on the fetcher to
# distinguish this ('cachable' refers to whether the fetcher
# refers to a resource with a fixed ID, which is not the same
# concept as whether there is anything to fetch at all) so we
# must examine the type of the fetcher.
# BundleFetchStrategy has no source to fetch. The associated fetcher does nothing but
# the associated stage may still exist. There is currently no method available on the
# fetcher to distinguish this ('cachable' refers to whether the fetcher refers to a
# resource with a fixed ID, which is not the same concept as whether there is anything
# to fetch at all) so we must examine the type of the fetcher.
return
if mirror.skip_unstable_versions and not fs.stable_target(self.default_fetcher):
elif mirror.skip_unstable_versions and not fs.stable_target(self.default_fetcher):
return
absolute_storage_path = os.path.join(mirror.root, self.mirror_paths.storage_path)
elif not self.mirror_layout:
return
absolute_storage_path = os.path.join(mirror.root, self.mirror_layout.path)
if os.path.exists(absolute_storage_path):
stats.already_existed(absolute_storage_path)
else:
self.fetch()
self.check()
mirror.store(self.fetcher, self.mirror_paths.storage_path)
mirror.store(self.fetcher, self.mirror_layout.path)
stats.added(absolute_storage_path)
mirror.symlink(self.mirror_paths)
self.mirror_layout.make_alias(mirror.root)
def expand_archive(self):
"""Changes to the stage directory and attempt to expand the downloaded
@@ -652,9 +642,9 @@ def expand_archive(self):
downloaded."""
if not self.expanded:
self.fetcher.expand()
tty.debug("Created stage in {0}".format(self.path))
tty.debug(f"Created stage in {self.path}")
else:
tty.debug("Already staged {0} in {1}".format(self.name, self.path))
tty.debug(f"Already staged {self.name} in {self.path}")
def restage(self):
"""Removes the expanded archive path if it exists, then re-expands
@@ -1173,18 +1163,20 @@ def _fetch_and_checksum(url, options, keep_stage, action_fn=None):
try:
url_or_fs = url
if options:
url_or_fs = fs.URLFetchStrategy(url, fetch_options=options)
url_or_fs = fs.URLFetchStrategy(url=url, fetch_options=options)
with Stage(url_or_fs, keep=keep_stage) as stage:
# Fetch the archive
stage.fetch()
if action_fn is not None:
archive = stage.archive_file
assert archive is not None, f"Archive not found for {url}"
if action_fn is not None and archive:
# Only run first_stage_function the first time,
# no need to run it every time
action_fn(stage, url)
action_fn(archive, url)
# Checksum the archive and add it to the list
checksum = spack.util.crypto.checksum(hashlib.sha256, stage.archive_file)
checksum = spack.util.crypto.checksum(hashlib.sha256, archive)
return checksum, None
except fs.FailedDownloadError:
return None, f"[WORKER] Failed to fetch {url}"
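The checksum step above amounts to hashing the downloaded archive with SHA-256; a dependency-free equivalent (illustrative helper name, not Spack's implementation):

import hashlib

def sha256_of(path, chunk=65536):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()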


@@ -12,7 +12,7 @@
modifications to global state in memory that must be replicated in the
child process.
"""
import importlib
import io
import multiprocessing
import pickle
@@ -118,7 +118,7 @@ def __init__(self, module_patches, class_patches):
def restore(self):
for module_name, attr_name, value in self.module_patches:
value = pickle.load(value)
module = __import__(module_name)
module = importlib.import_module(module_name)
setattr(module, attr_name, value)
for class_fqn, attr_name, value in self.class_patches:
value = pickle.load(value)
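For reference, one behavioral difference between the two calls (which may or may not be the motivation for this substitution) is how dotted names resolve; a tiny standalone check:

import importlib

pkg = __import__("os.path")                     # returns the top-level package
mod = importlib.import_module("os.path")        # returns the submodule itself
assert pkg.__name__ == "os"
assert mod.__name__ in ("posixpath", "ntpath")  # the concrete os.path module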


@@ -8,7 +8,9 @@
import io
import json
import os
import pathlib
import platform
import shutil
import sys
import tarfile
import urllib.error
@@ -16,12 +18,11 @@
import urllib.response
from pathlib import Path, PurePath
import py
import pytest
import archspec.cpu
from llnl.util.filesystem import join_path, visit_directory_tree
from llnl.util.filesystem import copy_tree, join_path, visit_directory_tree
from llnl.util.symlink import readlink
import spack.binary_distribution as bindist
@@ -81,72 +82,67 @@ def test_mirror(mirror_dir):
@pytest.fixture(scope="module")
def config_directory(tmpdir_factory):
tmpdir = tmpdir_factory.mktemp("test_configs")
# restore some sane defaults for packages and config
config_path = py.path.local(spack.paths.etc_path)
modules_yaml = config_path.join("defaults", "modules.yaml")
os_modules_yaml = config_path.join(
"defaults", "%s" % platform.system().lower(), "modules.yaml"
)
packages_yaml = config_path.join("defaults", "packages.yaml")
config_yaml = config_path.join("defaults", "config.yaml")
repos_yaml = config_path.join("defaults", "repos.yaml")
tmpdir.ensure("site", dir=True)
tmpdir.ensure("user", dir=True)
tmpdir.ensure("site/%s" % platform.system().lower(), dir=True)
modules_yaml.copy(tmpdir.join("site", "modules.yaml"))
os_modules_yaml.copy(tmpdir.join("site/%s" % platform.system().lower(), "modules.yaml"))
packages_yaml.copy(tmpdir.join("site", "packages.yaml"))
config_yaml.copy(tmpdir.join("site", "config.yaml"))
repos_yaml.copy(tmpdir.join("site", "repos.yaml"))
yield tmpdir
tmpdir.remove()
def config_directory(tmp_path_factory):
# Copy defaults to a temporary "site" scope
defaults_dir = tmp_path_factory.mktemp("test_configs")
config_path = pathlib.Path(spack.paths.etc_path)
copy_tree(str(config_path / "defaults"), str(defaults_dir / "site"))
# Create a "user" scope
(defaults_dir / "user").mkdir()
# Detect compilers
cfg_scopes = [
spack.config.DirectoryConfigScope(name, str(defaults_dir / name))
for name in [f"site/{platform.system().lower()}", "site", "user"]
]
with spack.config.use_configuration(*cfg_scopes):
_ = spack.compilers.find_compilers(scope="site")
yield defaults_dir
shutil.rmtree(str(defaults_dir))
@pytest.fixture(scope="function")
def default_config(tmpdir, config_directory, monkeypatch, install_mockery):
def default_config(tmp_path, config_directory, monkeypatch, install_mockery):
# This fixture depends on install_mockery to ensure
# there is a clear order of initialization. The substitution of the
# config scopes here is done on top of the substitution that comes with
# install_mockery
mutable_dir = tmpdir.mkdir("mutable_config").join("tmp")
config_directory.copy(mutable_dir)
mutable_dir = tmp_path / "mutable_config" / "tmp"
mutable_dir.mkdir(parents=True)
copy_tree(str(config_directory), str(mutable_dir))
cfg = spack.config.Configuration(
*[
spack.config.DirectoryConfigScope(name, str(mutable_dir))
for name in [f"site/{platform.system().lower()}", "site", "user"]
]
)
scopes = [
spack.config.DirectoryConfigScope(name, str(mutable_dir / name))
for name in [f"site/{platform.system().lower()}", "site", "user"]
]
spack.config.CONFIG, old_config = cfg, spack.config.CONFIG
spack.config.CONFIG.set("repos", [spack.paths.mock_packages_path])
njobs = spack.config.get("config:build_jobs")
if not njobs:
spack.config.set("config:build_jobs", 4, scope="user")
extensions = spack.config.get("config:template_dirs")
if not extensions:
spack.config.set(
"config:template_dirs",
[os.path.join(spack.paths.share_path, "templates")],
scope="user",
)
with spack.config.use_configuration(*scopes):
spack.config.CONFIG.set("repos", [spack.paths.mock_packages_path])
njobs = spack.config.get("config:build_jobs")
if not njobs:
spack.config.set("config:build_jobs", 4, scope="user")
extensions = spack.config.get("config:template_dirs")
if not extensions:
spack.config.set(
"config:template_dirs",
[os.path.join(spack.paths.share_path, "templates")],
scope="user",
)
mutable_dir.ensure("build_stage", dir=True)
build_stage = spack.config.get("config:build_stage")
if not build_stage:
spack.config.set(
"config:build_stage", [str(mutable_dir.join("build_stage"))], scope="user"
)
timeout = spack.config.get("config:connect_timeout")
if not timeout:
spack.config.set("config:connect_timeout", 10, scope="user")
(mutable_dir / "build_stage").mkdir()
build_stage = spack.config.get("config:build_stage")
if not build_stage:
spack.config.set(
"config:build_stage", [str(mutable_dir / "build_stage")], scope="user"
)
timeout = spack.config.get("config:connect_timeout")
if not timeout:
spack.config.set("config:connect_timeout", 10, scope="user")
yield spack.config.CONFIG
spack.config.CONFIG = old_config
mutable_dir.remove()
yield spack.config.CONFIG
@pytest.fixture(scope="function")
@@ -357,7 +353,7 @@ def test_push_and_fetch_keys(mock_gnupghome, tmp_path):
assert len(keys) == 1
fpr = keys[0]
bindist.push_keys(mirror, keys=[fpr], tmpdir=str(tmp_path), update_index=True)
bindist._url_push_keys(mirror, keys=[fpr], tmpdir=str(tmp_path), update_index=True)
# dir 2: import the key from the mirror, and confirm that its fingerprint
# matches the one created above
@@ -492,7 +488,7 @@ def mock_list_url(url, recursive=False):
test_url = "file:///fake/keys/dir"
with pytest.raises(GenerateIndexError, match="Unable to generate package index"):
bindist.generate_package_index(test_url, str(tmp_path))
bindist._url_generate_package_index(test_url, str(tmp_path))
assert (
"Warning: Encountered problem listing packages at "
@@ -513,7 +509,7 @@ def mock_list_url(url, recursive=False):
bindist.generate_key_index(url, str(tmp_path))
with pytest.raises(GenerateIndexError, match="Unable to generate package index"):
bindist.generate_package_index(url, str(tmp_path))
bindist._url_generate_package_index(url, str(tmp_path))
assert f"Encountered problem listing packages at {url}" in capfd.readouterr().err


@@ -10,6 +10,7 @@
import spack.binary_distribution as bd
import spack.main
import spack.mirror
import spack.spec
import spack.util.url
@@ -22,17 +23,21 @@ def test_build_tarball_overwrite(install_mockery, mock_fetch, monkeypatch, tmp_p
specs = [spec]
# Runs fine the first time, second time it's a no-op
out_url = spack.util.url.path_to_file_url(str(tmp_path))
skipped = bd.push_or_raise(specs, out_url, signing_key=None)
assert not skipped
# populate cache, everything is new
mirror = spack.mirror.Mirror.from_local_path(str(tmp_path))
with bd.make_uploader(mirror) as uploader:
skipped = uploader.push_or_raise(specs)
assert not skipped
skipped = bd.push_or_raise(specs, out_url, signing_key=None)
assert skipped == specs
# should skip all
with bd.make_uploader(mirror) as uploader:
skipped = uploader.push_or_raise(specs)
assert skipped == specs
# Should work fine with force=True
skipped = bd.push_or_raise(specs, out_url, signing_key=None, force=True)
assert not skipped
# with force=True none should be skipped
with bd.make_uploader(mirror, force=True) as uploader:
skipped = uploader.push_or_raise(specs)
assert not skipped
# Remove the tarball, which should cause push to push.
os.remove(
@@ -42,5 +47,6 @@ def test_build_tarball_overwrite(install_mockery, mock_fetch, monkeypatch, tmp_p
/ bd.tarball_name(spec, ".spack")
)
skipped = bd.push_or_raise(specs, out_url, signing_key=None)
assert not skipped
with bd.make_uploader(mirror) as uploader:
skipped = uploader.push_or_raise(specs)
assert not skipped
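The uploader pattern exercised above, in isolation (the mirror path is a placeholder and the specs list is assumed to hold concrete Specs):

import spack.binary_distribution as bd
import spack.mirror

specs = []  # concrete Specs to push; left empty only to keep the sketch self-contained
mirror = spack.mirror.Mirror.from_local_path("/tmp/my_build_cache")  # placeholder path
with bd.make_uploader(mirror) as uploader:   # pass force=True to overwrite existing tarballs
    skipped = uploader.push_or_raise(specs)  # returns the specs that were skipped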


@@ -56,6 +56,6 @@ def test_build_systems(url_and_build_system):
url, build_system = url_and_build_system
with spack.stage.Stage(url) as stage:
stage.fetch()
guesser = spack.cmd.create.BuildSystemGuesser()
guesser(stage, url)
guesser = spack.cmd.create.BuildSystemAndLanguageGuesser()
guesser(stage.archive_file, url)
assert build_system == guesser.build_system


@@ -3,6 +3,8 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import pytest
from llnl.util.filesystem import working_dir
@@ -33,11 +35,10 @@ def test_blame_by_percent(mock_packages):
assert "EMAIL" in out
@pytest.mark.not_on_windows("Not supported on Windows (yet)")
def test_blame_file(mock_packages):
"""Sanity check the blame command to make sure it works."""
with working_dir(spack.paths.prefix):
out = blame("bin/spack")
out = blame(os.path.join("bin", "spack"))
assert "LAST_COMMIT" in out
assert "AUTHOR" in out
assert "EMAIL" in out


@@ -7,6 +7,7 @@
import json
import os
import shutil
from typing import List
import pytest
@@ -16,6 +17,7 @@
import spack.environment as ev
import spack.error
import spack.main
import spack.mirror
import spack.spec
import spack.util.url
from spack.spec import Spec
@@ -380,18 +382,22 @@ def test_correct_specs_are_pushed(
# Concretize dttop and add it to the temporary database (without prefixes)
spec = default_mock_concretization("dttop")
temporary_store.db.add(spec, directory_layout=None)
slash_hash = "/{0}".format(spec.dag_hash())
slash_hash = f"/{spec.dag_hash()}"
packages_to_push = []
class DontUpload(spack.binary_distribution.Uploader):
def __init__(self):
super().__init__(spack.mirror.Mirror.from_local_path(str(tmpdir)), False, False)
self.pushed = []
def fake_push(specs, *args, **kwargs):
assert all(isinstance(s, Spec) for s in specs)
packages_to_push.extend(s.name for s in specs)
skipped = []
errors = []
return skipped, errors
def push(self, specs: List[spack.spec.Spec]):
self.pushed.extend(s.name for s in specs)
return [], [] # nothing skipped, nothing errored
monkeypatch.setattr(spack.binary_distribution, "_push", fake_push)
uploader = DontUpload()
monkeypatch.setattr(
spack.binary_distribution, "make_uploader", lambda *args, **kwargs: uploader
)
buildcache_create_args = ["create", "--unsigned"]
@@ -403,10 +409,10 @@ def fake_push(specs, *args, **kwargs):
buildcache(*buildcache_create_args)
# Order is not guaranteed, so we can't just compare lists
assert set(packages_to_push) == set(expected)
assert set(uploader.pushed) == set(expected)
# Ensure no duplicates
assert len(set(packages_to_push)) == len(packages_to_push)
assert len(set(uploader.pushed)) == len(uploader.pushed)
@pytest.mark.parametrize("signed", [True, False])

File diff suppressed because it is too large


@@ -6,7 +6,6 @@
import filecmp
import os
import shutil
import subprocess
import pytest
@@ -156,22 +155,6 @@ def test_update_with_header(tmpdir):
commands("--update", str(update_file), "--header", str(filename))
@pytest.mark.xfail
def test_no_pipe_error():
"""Make sure we don't see any pipe errors when piping output."""
proc = subprocess.Popen(
["spack", "commands", "--format=rst"], stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
# Call close() on stdout to cause a broken pipe
proc.stdout.close()
proc.wait()
stderr = proc.stderr.read().decode("utf-8")
assert "Broken pipe" not in stderr
def test_bash_completion():
"""Test the bash completion writer."""
out1 = commands("--format=bash")


@@ -81,34 +81,6 @@ def test_compiler_find_without_paths(no_compilers_yaml, working_env, mock_execut
assert "gcc" in output
@pytest.mark.regression("17589")
def test_compiler_find_no_apple_gcc(no_compilers_yaml, working_env, mock_executable):
"""Tests that Spack won't mistake Apple's GCC as a "real" GCC, since it's really
Clang with a few tweaks.
"""
gcc_path = mock_executable(
"gcc",
output="""
if [ "$1" = "-dumpversion" ]; then
echo "4.2.1"
elif [ "$1" = "--version" ]; then
echo "Configured with: --prefix=/dummy"
echo "Apple clang version 11.0.0 (clang-1100.0.33.16)"
echo "Target: x86_64-apple-darwin18.7.0"
echo "Thread model: posix"
echo "InstalledDir: /dummy"
else
echo "clang: error: no input files"
fi
""",
)
os.environ["PATH"] = str(gcc_path.parent)
output = compiler("find", "--scope=site")
assert "gcc" not in output
@pytest.mark.regression("37996")
def test_compiler_remove(mutable_config, mock_packages):
"""Tests that we can remove a compiler from configuration."""
@@ -131,7 +103,7 @@ def test_removing_compilers_from_multiple_scopes(mutable_config, mock_packages):
@pytest.mark.not_on_windows("Cannot execute bash script on Windows")
def test_compiler_add(mutable_config, mock_packages, mock_executable):
def test_compiler_add(mutable_config, mock_executable):
"""Tests that we can add a compiler to configuration."""
expected_version = "4.5.3"
gcc_path = mock_executable(
@@ -149,7 +121,12 @@ def test_compiler_add(mutable_config, mock_packages, mock_executable):
compilers_before_find = set(spack.compilers.all_compiler_specs())
args = spack.util.pattern.Bunch(
all=None, compiler_spec=None, add_paths=[str(root_dir)], scope=None, mixed_toolchain=False
all=None,
compiler_spec=None,
add_paths=[str(root_dir)],
scope=None,
mixed_toolchain=False,
jobs=1,
)
spack.cmd.compiler.compiler_find(args)
compilers_after_find = set(spack.compilers.all_compiler_specs())
@@ -229,7 +206,7 @@ def test_compiler_find_path_order(no_compilers_yaml, working_env, compilers_dir)
for name in ("gcc-8", "g++-8", "gfortran-8"):
shutil.copy(compilers_dir / name, new_dir / name)
# Set PATH to have the new folder searched first
os.environ["PATH"] = "{}:{}".format(str(new_dir), str(compilers_dir))
os.environ["PATH"] = f"{str(new_dir)}:{str(compilers_dir)}"
compiler("find", "--scope=site")


@@ -4,6 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import tarfile
import pytest
@@ -154,24 +155,24 @@ def test_create_template_bad_name(mock_test_repo, name, expected):
def test_build_system_guesser_no_stage():
"""Test build system guesser when stage not provided."""
guesser = spack.cmd.create.BuildSystemGuesser()
guesser = spack.cmd.create.BuildSystemAndLanguageGuesser()
# Ensure get the expected build system
with pytest.raises(AttributeError, match="'NoneType' object has no attribute"):
guesser(None, "/the/url/does/not/matter")
def test_build_system_guesser_octave():
def test_build_system_guesser_octave(tmp_path):
"""
Test build system guesser for the special case, where the same base URL
identifies the build system rather than guessing the build system from
files contained in the archive.
"""
url, expected = "downloads.sourceforge.net/octave/", "octave"
guesser = spack.cmd.create.BuildSystemGuesser()
guesser = spack.cmd.create.BuildSystemAndLanguageGuesser()
# Ensure get the expected build system
guesser(None, url)
guesser(str(tmp_path / "archive.tar.gz"), url)
assert guesser.build_system == expected
# Also ensure get the correct template
@@ -207,3 +208,40 @@ def _parse_name_offset(path, v):
def test_no_url():
"""Test creation of package without a URL."""
create("--skip-editor", "-n", "create-new-package")
@pytest.mark.parametrize(
"source_files,languages",
[
(["fst.c", "snd.C"], ["c", "cxx"]),
(["fst.c", "snd.cxx"], ["c", "cxx"]),
(["fst.F", "snd.cc"], ["cxx", "fortran"]),
(["fst.f", "snd.c"], ["c", "fortran"]),
(["fst.jl", "snd.py"], []),
],
)
def test_language_and_build_system_detection(tmp_path, source_files, languages):
"""Test that languages are detected from tarball, and the build system is guessed from the
most top-level build system file."""
def add(tar: tarfile.TarFile, name: str, type):
tarinfo = tarfile.TarInfo(name)
tarinfo.type = type
tar.addfile(tarinfo)
tarball = str(tmp_path / "example.tar.gz")
with tarfile.open(tarball, "w:gz") as tar:
add(tar, "./third-party/", tarfile.DIRTYPE)
add(tar, "./third-party/example/", tarfile.DIRTYPE)
add(tar, "./third-party/example/CMakeLists.txt", tarfile.REGTYPE) # false positive
add(tar, "./configure", tarfile.REGTYPE) # actual build system
add(tar, "./src/", tarfile.DIRTYPE)
for file in source_files:
add(tar, f"src/{file}", tarfile.REGTYPE)
guesser = spack.cmd.create.BuildSystemAndLanguageGuesser()
guesser(str(tarball), "https://example.com")
assert guesser.build_system == "autotools"
assert guesser.languages == languages
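The "most top-level build system file" rule exercised above can be sketched in isolation (an assumed heuristic for illustration, not the guesser's actual code): rank candidates by path depth and let the shallowest one decide.

candidates = ["third-party/example/CMakeLists.txt", "configure"]
best = min(candidates, key=lambda p: p.count("/"))
assert best == "configure"   # the shallower file wins, so autotools is guessed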


@@ -1734,6 +1734,17 @@ def test_env_include_concrete_env_yaml(env_name):
assert test.path in combined_yaml["include_concrete"]
@pytest.mark.regression("45766")
@pytest.mark.parametrize("format", ["v1", "v2", "v3"])
def test_env_include_concrete_old_env(format, tmpdir):
lockfile = os.path.join(spack.paths.test_path, "data", "legacy_env", f"{format}.lock")
# create an env from old .lock file -- this does not update the format
env("create", "old-env", lockfile)
env("create", "--include-concrete", "old-env", "test")
assert ev.read("old-env").all_specs() == ev.read("test").all_specs()
def test_env_bad_include_concrete_env():
with pytest.raises(ev.SpackEnvironmentError):
env("create", "--include-concrete", "nonexistant_env", "combined_env")


@@ -100,7 +100,7 @@ def test_get_executables(working_env, mock_executable):
# TODO: this test should be made to work, but in the meantime it is
# causing intermittent (spurious) CI failures on all PRs
@pytest.mark.skipif(sys.platform == "win32", reason="Test fails intermittently on Windows")
@pytest.mark.not_on_windows("Test fails intermittently on Windows")
def test_find_external_cmd_not_buildable(mutable_config, working_env, mock_executable):
"""When the user invokes 'spack external find --not-buildable', the config
for any package where Spack finds an external version should be marked as


@@ -2,29 +2,13 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import pytest
from spack.main import SpackCommand
@pytest.mark.xfail
def test_reuse_after_help():
"""Test `spack help` can be called twice with the same SpackCommand."""
help_cmd = SpackCommand("help", subprocess=True)
help_cmd()
# This second invocation will somehow fail because the parser no
# longer works after add_all_commands() is called in
# SpackArgumentParser.format_help_sections().
#
# TODO: figure out why this doesn't work properly and change this
# test to use a single SpackCommand.
#
# It seems that parse_known_args() finds "too few arguments" the
# second time through b/c add_all_commands() ends up leaving extra
# positionals in the parser. But this used to work before we loaded
# commands lazily.
help_cmd()


@@ -24,6 +24,12 @@
style = spack.main.SpackCommand("style")
ISORT = which("isort")
BLACK = which("black")
FLAKE8 = which("flake8")
MYPY = which("mypy")
@pytest.fixture(autouse=True)
def has_develop_branch(git):
"""spack style requires git and a develop branch to run -- skip if we're missing either."""
@@ -190,8 +196,8 @@ def external_style_root(git, flake8_package_with_errors, tmpdir):
yield tmpdir, py_file
@pytest.mark.skipif(not which("isort"), reason="isort is not installed.")
@pytest.mark.skipif(not which("black"), reason="black is not installed.")
@pytest.mark.skipif(not ISORT, reason="isort is not installed.")
@pytest.mark.skipif(not BLACK, reason="black is not installed.")
def test_fix_style(external_style_root):
"""Make sure spack style --fix works."""
tmpdir, py_file = external_style_root
@@ -209,10 +215,10 @@ def test_fix_style(external_style_root):
assert filecmp.cmp(broken_py, fixed_py)
@pytest.mark.skipif(not which("flake8"), reason="flake8 is not installed.")
@pytest.mark.skipif(not which("isort"), reason="isort is not installed.")
@pytest.mark.skipif(not which("mypy"), reason="mypy is not installed.")
@pytest.mark.skipif(not which("black"), reason="black is not installed.")
@pytest.mark.skipif(not FLAKE8, reason="flake8 is not installed.")
@pytest.mark.skipif(not ISORT, reason="isort is not installed.")
@pytest.mark.skipif(not MYPY, reason="mypy is not installed.")
@pytest.mark.skipif(not BLACK, reason="black is not installed.")
def test_external_root(external_style_root, capfd):
"""Ensure we can run in a separate root directory w/o configuration files."""
tmpdir, py_file = external_style_root
@@ -238,7 +244,7 @@ def test_external_root(external_style_root, capfd):
assert "lib/spack/spack/dummy.py:7: [F401] 'os' imported but unused" in output
@pytest.mark.skipif(not which("flake8"), reason="flake8 is not installed.")
@pytest.mark.skipif(not FLAKE8, reason="flake8 is not installed.")
def test_style(flake8_package, tmpdir):
root_relative = os.path.relpath(flake8_package, spack.paths.prefix)
@@ -264,7 +270,7 @@ def test_style(flake8_package, tmpdir):
assert "spack style checks were clean" in output
@pytest.mark.skipif(not which("flake8"), reason="flake8 is not installed.")
@pytest.mark.skipif(not FLAKE8, reason="flake8 is not installed.")
def test_style_with_errors(flake8_package_with_errors):
root_relative = os.path.relpath(flake8_package_with_errors, spack.paths.prefix)
output = style(
@@ -275,8 +281,8 @@ def test_style_with_errors(flake8_package_with_errors):
assert "spack style found errors" in output
@pytest.mark.skipif(not which("black"), reason="black is not installed.")
@pytest.mark.skipif(not which("flake8"), reason="flake8 is not installed.")
@pytest.mark.skipif(not BLACK, reason="black is not installed.")
@pytest.mark.skipif(not FLAKE8, reason="flake8 is not installed.")
def test_style_with_black(flake8_package_with_errors):
output = style("--tool", "black,flake8", flake8_package_with_errors, fail_on_error=False)
assert "black found errors" in output


@@ -11,13 +11,6 @@
versions = SpackCommand("versions")
def test_safe_only_versions():
"""Only test the safe versions of a package.
(Using the deprecated command line argument)
"""
versions("--safe-only", "zlib")
def test_safe_versions():
"""Only test the safe versions of a package."""


@@ -19,27 +19,6 @@
from spack.util.executable import Executable, ProcessError
@pytest.fixture()
def make_args_for_version(monkeypatch):
def _factory(version, path="/usr/bin/gcc"):
class MockOs:
pass
compiler_name = "gcc"
compiler_cls = spack.compilers.class_for_compiler_name(compiler_name)
monkeypatch.setattr(compiler_cls, "cc_version", lambda x: version)
compiler_id = spack.compilers.CompilerID(
os=MockOs, compiler_name=compiler_name, version=None
)
variation = spack.compilers.NameVariation(prefix="", suffix="")
return spack.compilers.DetectVersionArgs(
id=compiler_id, variation=variation, language="cc", path=path
)
return _factory
def test_multiple_conflicting_compiler_definitions(mutable_config):
compiler_def = {
"compiler": {
@@ -82,21 +61,6 @@ def test_get_compiler_duplicates(mutable_config, compiler_factory):
assert len(duplicates) == 1
@pytest.mark.parametrize(
"input_version,expected_version,expected_error",
[(None, None, "Couldn't get version for compiler /usr/bin/gcc"), ("4.9", "4.9", None)],
)
def test_version_detection_is_empty(
make_args_for_version, input_version, expected_version, expected_error
):
args = make_args_for_version(version=input_version)
result, error = spack.compilers.detect_version(args)
if not error:
assert result.id.version == expected_version
assert error == expected_error
def test_compiler_flags_from_config_are_grouped():
compiler_entry = {
"spec": "intel@17.0.2",
@@ -906,51 +870,30 @@ def prepare_executable(name):
@pytest.mark.parametrize(
"detected_versions,expected_length",
"compilers_extra_attributes,expected_length",
[
# If we detect a C compiler we expect the result to be valid
(
[
spack.compilers.DetectVersionArgs(
id=spack.compilers.CompilerID(
os="ubuntu20.04", compiler_name="clang", version="12.0.0"
),
variation=spack.compilers.NameVariation(prefix="", suffix="-12"),
language="cc",
path="/usr/bin/clang-12",
),
spack.compilers.DetectVersionArgs(
id=spack.compilers.CompilerID(
os="ubuntu20.04", compiler_name="clang", version="12.0.0"
),
variation=spack.compilers.NameVariation(prefix="", suffix="-12"),
language="cxx",
path="/usr/bin/clang++-12",
),
],
1,
),
({"c": "/usr/bin/clang-12", "cxx": "/usr/bin/clang-12"}, 1),
# If we detect only a C++ compiler we expect the result to be discarded
(
[
spack.compilers.DetectVersionArgs(
id=spack.compilers.CompilerID(
os="ubuntu20.04", compiler_name="clang", version="12.0.0"
),
variation=spack.compilers.NameVariation(prefix="", suffix="-12"),
language="cxx",
path="/usr/bin/clang++-12",
)
],
0,
),
({"cxx": "/usr/bin/clang-12"}, 0),
],
)
def test_detection_requires_c_compiler(detected_versions, expected_length):
def test_detection_requires_c_compiler(compilers_extra_attributes, expected_length):
"""Tests that compilers automatically added to the configuration have
at least a C compiler.
"""
result = spack.compilers.make_compiler_list(detected_versions)
packages_yaml = {
"llvm": {
"externals": [
{
"spec": "clang@12.0.0",
"prefix": "/usr",
"extra_attributes": {"compilers": compilers_extra_attributes},
}
]
}
}
result = spack.compilers.CompilerConfigFactory.from_packages_yaml(packages_yaml)
assert len(result) == expected_length


@@ -1,471 +0,0 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Test detection of compiler version"""
import pytest
import spack.compilers.aocc
import spack.compilers.arm
import spack.compilers.cce
import spack.compilers.clang
import spack.compilers.fj
import spack.compilers.gcc
import spack.compilers.intel
import spack.compilers.nag
import spack.compilers.nvhpc
import spack.compilers.oneapi
import spack.compilers.pgi
import spack.compilers.xl
import spack.compilers.xl_r
import spack.util.module_cmd
@pytest.mark.parametrize(
"version_str,expected_version",
[
(
"Arm C/C++/Fortran Compiler version 19.0 (build number 73) (based on LLVM 7.0.2)\n"
"Target: aarch64--linux-gnu\n"
"Thread model: posix\n"
"InstalledDir:\n"
"/opt/arm/arm-hpc-compiler-19.0_Generic-AArch64_RHEL-7_aarch64-linux/bin\n",
"19.0",
),
(
"Arm C/C++/Fortran Compiler version 19.3.1 (build number 75) (based on LLVM 7.0.2)\n"
"Target: aarch64--linux-gnu\n"
"Thread model: posix\n"
"InstalledDir:\n"
"/opt/arm/arm-hpc-compiler-19.0_Generic-AArch64_RHEL-7_aarch64-linux/bin\n",
"19.3.1",
),
],
)
def test_arm_version_detection(version_str, expected_version):
version = spack.compilers.arm.Arm.extract_version_from_output(version_str)
assert version == expected_version
@pytest.mark.parametrize(
"version_str,expected_version",
[
("Cray C : Version 8.4.6 Mon Apr 15, 2019 12:13:39\n", "8.4.6"),
("Cray C++ : Version 8.4.6 Mon Apr 15, 2019 12:13:45\n", "8.4.6"),
("Cray clang Version 8.4.6 Mon Apr 15, 2019 12:13:45\n", "8.4.6"),
("Cray Fortran : Version 8.4.6 Mon Apr 15, 2019 12:13:55\n", "8.4.6"),
],
)
def test_cce_version_detection(version_str, expected_version):
version = spack.compilers.cce.Cce.extract_version_from_output(version_str)
assert version == expected_version
@pytest.mark.regression("10191")
@pytest.mark.parametrize(
"version_str,expected_version",
[
# macOS clang
(
"Apple clang version 11.0.0 (clang-1100.0.33.8)\n"
"Target: x86_64-apple-darwin18.7.0\n"
"Thread model: posix\n"
"InstalledDir: "
"/Applications/Xcode.app/Contents/Developer/Toolchains/"
"XcodeDefault.xctoolchain/usr/bin\n",
"11.0.0",
),
(
"Apple LLVM version 7.0.2 (clang-700.1.81)\n"
"Target: x86_64-apple-darwin15.2.0\n"
"Thread model: posix\n",
"7.0.2",
),
],
)
def test_apple_clang_version_detection(version_str, expected_version):
cls = spack.compilers.class_for_compiler_name("apple-clang")
version = cls.extract_version_from_output(version_str)
assert version == expected_version
@pytest.mark.regression("10191")
@pytest.mark.parametrize(
"version_str,expected_version",
[
# LLVM Clang
(
"clang version 6.0.1-svn334776-1~exp1~20181018152737.116 (branches/release_60)\n"
"Target: x86_64-pc-linux-gnu\n"
"Thread model: posix\n"
"InstalledDir: /usr/bin\n",
"6.0.1",
),
(
"clang version 3.1 (trunk 149096)\n"
"Target: x86_64-unknown-linux-gnu\n"
"Thread model: posix\n",
"3.1",
),
(
"clang version 8.0.0-3~ubuntu18.04.1 (tags/RELEASE_800/final)\n"
"Target: x86_64-pc-linux-gnu\n"
"Thread model: posix\n"
"InstalledDir: /usr/bin\n",
"8.0.0",
),
(
"clang version 9.0.1-+201911131414230800840845a1eea-1~exp1~20191113231141.78\n"
"Target: x86_64-pc-linux-gnu\n"
"Thread model: posix\n"
"InstalledDir: /usr/bin\n",
"9.0.1",
),
(
"clang version 8.0.0-3 (tags/RELEASE_800/final)\n"
"Target: aarch64-unknown-linux-gnu\n"
"Thread model: posix\n"
"InstalledDir: /usr/bin\n",
"8.0.0",
),
(
"clang version 11.0.0\n"
"Target: aarch64-unknown-linux-gnu\n"
"Thread model: posix\n"
"InstalledDir: /usr/bin\n",
"11.0.0",
),
],
)
def test_clang_version_detection(version_str, expected_version):
version = spack.compilers.clang.Clang.extract_version_from_output(version_str)
assert version == expected_version
@pytest.mark.parametrize(
"version_str,expected_version",
[
# C compiler
(
"fcc (FCC) 4.0.0a 20190314\n"
"simulating gcc version 6.1\n"
"Copyright FUJITSU LIMITED 2019",
"4.0.0a",
),
# C++ compiler
(
"FCC (FCC) 4.0.0a 20190314\n"
"simulating gcc version 6.1\n"
"Copyright FUJITSU LIMITED 2019",
"4.0.0a",
),
# Fortran compiler
("frt (FRT) 4.0.0a 20190314\n" "Copyright FUJITSU LIMITED 2019", "4.0.0a"),
],
)
def test_fj_version_detection(version_str, expected_version):
version = spack.compilers.fj.Fj.extract_version_from_output(version_str)
assert version == expected_version
@pytest.mark.parametrize(
"version_str,expected_version",
[
# Output of -dumpversion changed to return only major from GCC 7
("4.4.7\n", "4.4.7"),
("7\n", "7"),
],
)
def test_gcc_version_detection(version_str, expected_version):
version = spack.compilers.gcc.Gcc.extract_version_from_output(version_str)
assert version == expected_version
@pytest.mark.parametrize(
"version_str,expected_version",
[
(
"icpc (ICC) 12.1.5 20120612\n"
"Copyright (C) 1985-2012 Intel Corporation. All rights reserved.\n",
"12.1.5",
),
(
"ifort (IFORT) 12.1.5 20120612\n"
"Copyright (C) 1985-2012 Intel Corporation. All rights reserved.\n",
"12.1.5",
),
],
)
def test_intel_version_detection(version_str, expected_version):
version = spack.compilers.intel.Intel.extract_version_from_output(version_str)
assert version == expected_version
@pytest.mark.parametrize(
"version_str,expected_version",
[
( # ICX/ICPX
"Intel(R) oneAPI DPC++ Compiler 2021.1.2 (2020.10.0.1214)\n"
"Target: x86_64-unknown-linux-gnu\n"
"Thread model: posix\n"
"InstalledDir: /made/up/path",
"2021.1.2",
),
( # ICX/ICPX
"Intel(R) oneAPI DPC++ Compiler 2021.2.0 (2021.2.0.20210317)\n"
"Target: x86_64-unknown-linux-gnu\n"
"Thread model: posix\n"
"InstalledDir: /made/up/path",
"2021.2.0",
),
( # ICX/ICPX
"Intel(R) oneAPI DPC++/C++ Compiler 2021.3.0 (2021.3.0.20210619)\n"
"Target: x86_64-unknown-linux-gnu\n"
"Thread model: posix\n"
"InstalledDir: /made/up/path",
"2021.3.0",
),
( # ICX/ICPX
"Intel(R) oneAPI DPC++/C++ Compiler 2021.4.0 (2021.4.0.20210924)\n"
"Target: x86_64-unknown-linux-gnu\n"
"Thread model: posix\n"
"InstalledDir: /made/up/path",
"2021.4.0",
),
( # IFX
"ifx (IFORT) 2021.1.2 Beta 20201214\n"
"Copyright (C) 1985-2020 Intel Corporation. All rights reserved.",
"2021.1.2",
),
( # IFX
"ifx (IFORT) 2021.2.0 Beta 20210317\n"
"Copyright (C) 1985-2020 Intel Corporation. All rights reserved.",
"2021.2.0",
),
( # IFX
"ifx (IFORT) 2021.3.0 Beta 20210619\n"
"Copyright (C) 1985-2020 Intel Corporation. All rights reserved.",
"2021.3.0",
),
( # IFX
"ifx (IFORT) 2021.4.0 Beta 20210924\n"
"Copyright (C) 1985-2021 Intel Corporation. All rights reserved.",
"2021.4.0",
),
( # IFX
"ifx (IFORT) 2022.0.0 20211123\n"
"Copyright (C) 1985-2021 Intel Corporation. All rights reserved.",
"2022.0.0",
),
( # IFX
"ifx (IFX) 2023.1.0 20230320\n"
"Copyright (C) 1985-2023 Intel Corporation. All rights reserved.",
"2023.1.0",
),
],
)
def test_oneapi_version_detection(version_str, expected_version):
version = spack.compilers.oneapi.Oneapi.extract_version_from_output(version_str)
assert version == expected_version
@pytest.mark.parametrize(
"version_str,expected_version",
[
(
"NAG Fortran Compiler Release 6.0(Hibiya) Build 1037\n"
"Product NPL6A60NA for x86-64 Linux\n",
"6.0.1037",
)
],
)
def test_nag_version_detection(version_str, expected_version):
version = spack.compilers.nag.Nag.extract_version_from_output(version_str)
assert version == expected_version
@pytest.mark.parametrize(
"version_str,expected_version",
[
# C compiler on x86-64
(
"nvc 20.9-0 LLVM 64-bit target on x86-64 Linux -tp haswell\n"
"NVIDIA Compilers and Tools\n"
"Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.",
"20.9",
),
# C++ compiler on x86-64
(
"nvc++ 20.9-0 LLVM 64-bit target on x86-64 Linux -tp haswell\n"
"NVIDIA Compilers and Tools\n"
"Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.",
"20.9",
),
# Fortran compiler on x86-64
(
"nvfortran 20.9-0 LLVM 64-bit target on x86-64 Linux -tp haswell\n"
"NVIDIA Compilers and Tools\n"
"Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.",
"20.9",
),
# C compiler on Power
(
"nvc 20.9-0 linuxpower target on Linuxpower\n"
"NVIDIA Compilers and Tools\n"
"Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.",
"20.9",
),
# C++ compiler on Power
(
"nvc++ 20.9-0 linuxpower target on Linuxpower\n"
"NVIDIA Compilers and Tools\n"
"Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.",
"20.9",
),
# Fortran compiler on Power
(
"nvfortran 20.9-0 linuxpower target on Linuxpower\n"
"NVIDIA Compilers and Tools\n"
"Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.",
"20.9",
),
# C compiler on Arm
(
"nvc 20.9-0 linuxarm64 target on aarch64 Linux\n"
"NVIDIA Compilers and Tools\n"
"Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.",
"20.9",
),
# C++ compiler on Arm
(
"nvc++ 20.9-0 linuxarm64 target on aarch64 Linux\n"
"NVIDIA Compilers and Tools\n"
"Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.",
"20.9",
),
# Fortran compiler on Arm
(
"nvfortran 20.9-0 linuxarm64 target on aarch64 Linux\n"
"NVIDIA Compilers and Tools\n"
"Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.",
"20.9",
),
],
)
def test_nvhpc_version_detection(version_str, expected_version):
version = spack.compilers.nvhpc.Nvhpc.extract_version_from_output(version_str)
assert version == expected_version
@pytest.mark.parametrize(
"version_str,expected_version",
[
# Output on x86-64
(
"pgcc 15.10-0 64-bit target on x86-64 Linux -tp sandybridge\n"
"The Portland Group - PGI Compilers and Tools\n"
"Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.\n",
"15.10",
),
# Output on PowerPC
(
"pgcc 17.4-0 linuxpower target on Linuxpower\n"
"PGI Compilers and Tools\n"
"Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.\n",
"17.4",
),
# Output when LLVM-enabled
(
"pgcc-llvm 18.4-0 LLVM 64-bit target on x86-64 Linux -tp haswell\n"
"PGI Compilers and Tools\n"
"Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.\n",
"18.4",
),
],
)
def test_pgi_version_detection(version_str, expected_version):
version = spack.compilers.pgi.Pgi.extract_version_from_output(version_str)
assert version == expected_version
@pytest.mark.parametrize(
"version_str,expected_version",
[
("IBM XL C/C++ for Linux, V11.1 (5724-X14)\n" "Version: 11.01.0000.0000\n", "11.1"),
("IBM XL Fortran for Linux, V13.1 (5724-X16)\n" "Version: 13.01.0000.0000\n", "13.1"),
("IBM XL C/C++ for AIX, V11.1 (5724-X13)\n" "Version: 11.01.0000.0009\n", "11.1"),
(
"IBM XL C/C++ Advanced Edition for Blue Gene/P, V9.0\n" "Version: 09.00.0000.0017\n",
"9.0",
),
],
)
def test_xl_version_detection(version_str, expected_version):
version = spack.compilers.xl.Xl.extract_version_from_output(version_str)
assert version == expected_version
version = spack.compilers.xl_r.XlR.extract_version_from_output(version_str)
assert version == expected_version
@pytest.mark.parametrize(
"version_str,expected_version",
[
# This applies to C,C++ and FORTRAN compiler
(
"AMD clang version 12.0.0 (CLANG: AOCC_3_1_0-Build#126 2021_06_07)"
"(based on LLVM Mirror.Version.12.0.0)\n"
"Target: x86_64-unknown-linux-gnu\n"
"Thread model: posix\n",
"3.1.0",
),
(
"AMD clang version 12.0.0 (CLANG: AOCC_3.0.0-Build#78 2020_12_10)"
"(based on LLVM Mirror.Version.12.0.0)\n"
"Target: x86_64-unknown-linux-gnu\n"
"Thread model: posix\n",
"3.0.0",
),
(
"AMD clang version 11.0.0 (CLANG: AOCC_2.3.0-Build#85 2020_11_10)"
"(based on LLVM Mirror.Version.11.0.0)\n"
"Target: x86_64-unknown-linux-gnu\n"
"Thread model: posix\n",
"2.3.0",
),
(
"AMD clang version 10.0.0 (CLANG: AOCC_2.2.0-Build#93 2020_06_25)"
"(based on LLVM Mirror.Version.10.0.0)\n"
"Target: x86_64-unknown-linux-gnu\n"
"Thread model: posix\n",
"2.2.0",
),
],
)
def test_aocc_version_detection(version_str, expected_version):
version = spack.compilers.aocc.Aocc.extract_version_from_output(version_str)
assert version == expected_version
@pytest.mark.regression("33901")
@pytest.mark.parametrize(
"version_str",
[
(
"Apple clang version 11.0.0 (clang-1100.0.33.8)\n"
"Target: x86_64-apple-darwin18.7.0\n"
"Thread model: posix\n"
"InstalledDir: "
"/Applications/Xcode.app/Contents/Developer/Toolchains/"
"XcodeDefault.xctoolchain/usr/bin\n"
),
(
"Apple LLVM version 7.0.2 (clang-700.1.81)\n"
"Target: x86_64-apple-darwin15.2.0\n"
"Thread model: posix\n"
),
],
)
def test_apple_clang_not_detected_as_cce(version_str):
version = spack.compilers.cce.Cce.extract_version_from_output(version_str)
assert version == "unknown"


@@ -1003,7 +1003,7 @@ def temporary_store(tmpdir, request):
def mock_fetch(mock_archive, monkeypatch):
"""Fake the URL for a package so it downloads from a file."""
monkeypatch.setattr(
spack.package_base.PackageBase, "fetcher", URLFetchStrategy(mock_archive.url)
spack.package_base.PackageBase, "fetcher", URLFetchStrategy(url=mock_archive.url)
)
@@ -1418,6 +1418,24 @@ def mock_git_repository(git, tmpdir_factory):
r1 = rev_hash(branch)
r1_file = branch_file
multiple_directories_branch = "many_dirs"
num_dirs = 3
num_files = 2
dir_files = []
for i in range(num_dirs):
for j in range(num_files):
dir_files.append(f"dir{i}/file{j}")
git("checkout", "-b", multiple_directories_branch)
for f in dir_files:
repodir.ensure(f, file=True)
git("add", f)
git("-c", "commit.gpgsign=false", "commit", "-m", "many_dirs add files")
# restore default
git("checkout", default_branch)
# Map of version -> bunch. Each bunch includes; all the args
# that must be specified as part of a version() declaration (used to
# manufacture a version for the 'git-test' package); the associated
@@ -1437,6 +1455,11 @@ def mock_git_repository(git, tmpdir_factory):
"default-no-per-version-git": Bunch(
revision=default_branch, file=r0_file, args={"branch": default_branch}
),
"many-directories": Bunch(
revision=multiple_directories_branch,
file=dir_files[0],
args={"git": url, "branch": multiple_directories_branch},
),
}
t = Bunch(


@@ -27,7 +27,7 @@ def test_listing_possible_os():
assert expected_os in output
@pytest.mark.skipif(str(spack.platforms.host()) == "windows", reason="test unsupported on Windows")
@pytest.mark.not_on_windows("test unsupported on Windows")
@pytest.mark.maybeslow
@pytest.mark.requires_executables("git")
def test_bootstrap_phase(minimal_configuration, config_dumper, capsys):


@@ -16,6 +16,7 @@
import spack
import spack.cmd
import spack.cmd.external
import spack.compilers
import spack.config
import spack.cray_manifest as cray_manifest


@@ -0,0 +1,10 @@
<html>
<head>
This is the root page.
</head>
<body>
This is a page with a Vue javascript drop down with links as used in GitLab.
<div class="js-source-code-dropdown" data-css-class="" data-download-artifacts="[]" data-download-links="[{&quot;text&quot;:&quot;tar.gz&quot;,&quot;path&quot;:&quot;/foo-5.0.0.tar.gz&quot;}]"></div>
</body>
</html>

View File

@@ -3,54 +3,21 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import pytest
import spack.config
import spack.error
import spack.fetch_strategy
import spack.stage
@pytest.mark.parametrize("_fetch_method", ["curl", "urllib"])
def test_gcsfetchstrategy_without_url(_fetch_method):
"""Ensure constructor with no URL fails."""
with spack.config.override("config:url_fetch_method", _fetch_method):
with pytest.raises(ValueError):
spack.fetch_strategy.GCSFetchStrategy(None)
@pytest.mark.parametrize("_fetch_method", ["curl", "urllib"])
def test_gcsfetchstrategy_bad_url(tmpdir, _fetch_method):
"""Ensure fetch with bad URL fails as expected."""
testpath = str(tmpdir)
with spack.config.override("config:url_fetch_method", _fetch_method):
fetcher = spack.fetch_strategy.GCSFetchStrategy(url="file:///does-not-exist")
assert fetcher is not None
with spack.stage.Stage(fetcher, path=testpath) as stage:
assert stage is not None
assert fetcher.archive_file is None
with pytest.raises(spack.error.FetchError):
fetcher.fetch()
@pytest.mark.parametrize("_fetch_method", ["curl", "urllib"])
def test_gcsfetchstrategy_downloaded(tmpdir, _fetch_method):
def test_gcsfetchstrategy_downloaded(tmp_path):
"""Ensure fetch with archive file already downloaded is a noop."""
testpath = str(tmpdir)
archive = os.path.join(testpath, "gcs.tar.gz")
archive = tmp_path / "gcs.tar.gz"
with spack.config.override("config:url_fetch_method", _fetch_method):
class Archived_GCSFS(spack.fetch_strategy.GCSFetchStrategy):
@property
def archive_file(self):
return str(archive)
class Archived_GCSFS(spack.fetch_strategy.GCSFetchStrategy):
@property
def archive_file(self):
return archive
url = "gcs:///{0}".format(archive)
fetcher = Archived_GCSFS(url=url)
with spack.stage.Stage(fetcher, path=testpath):
fetcher.fetch()
fetcher = Archived_GCSFS(url="gs://example/gcs.tar.gz")
with spack.stage.Stage(fetcher, path=str(tmp_path)):
fetcher.fetch()


@@ -390,3 +390,38 @@ def submodules_callback(package):
assert not os.path.isfile(file_path)
file_path = os.path.join(s.package.stage.source_path, "third_party/submodule1/r0_file_1")
assert not os.path.isfile(file_path)
@pytest.mark.disable_clean_stage_check
def test_git_sparse_paths_partial_clone(
mock_git_repository, git_version, default_mock_concretization, mutable_mock_repo, monkeypatch
):
"""
Test partial clone of repository when using git_sparse_paths property
"""
type_of_test = "many-directories"
sparse_paths = ["dir0"]
omitted_paths = ["dir1", "dir2"]
t = mock_git_repository.checks[type_of_test]
args = copy.copy(t.args)
args["git_sparse_paths"] = sparse_paths
s = default_mock_concretization("git-test")
monkeypatch.setitem(s.package.versions, Version("git"), args)
s.package.do_stage()
with working_dir(s.package.stage.source_path):
# top level directory files are cloned via sparse-checkout
assert os.path.isfile("r0_file")
for p in sparse_paths:
assert os.path.isdir(p)
if git_version < Version("2.26.0.0"):
# older versions of git should fall back to a full clone
for p in omitted_paths:
assert os.path.isdir(p)
else:
for p in omitted_paths:
assert not os.path.isdir(p)
# fixture file is in the sparse-path expansion tree
assert os.path.isfile(t.file)
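The version-dependent behavior asserted above boils down to a simple rule; a standalone sketch (the threshold is taken from the assertions, everything else is assumed for illustration):

def materialized_dirs(git_version, sparse_paths, all_dirs):
    if git_version < (2, 26):
        return set(all_dirs)      # too old for sparse-checkout: fall back to a full clone
    return set(sparse_paths)      # only the requested paths are checked out

assert materialized_dirs((2, 25), {"dir0"}, {"dir0", "dir1", "dir2"}) == {"dir0", "dir1", "dir2"}
assert materialized_dirs((2, 40), {"dir0"}, {"dir0", "dir1", "dir2"}) == {"dir0"}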


@@ -610,10 +610,9 @@ def test_install_from_binary_with_missing_patch_succeeds(
temporary_store.db.add(s, directory_layout=temporary_store.layout, explicit=True)
# Push it to a binary cache
build_cache = tmp_path / "my_build_cache"
binary_distribution.push_or_raise(
[s], out_url=build_cache.as_uri(), signing_key=None, force=False
)
mirror = spack.mirror.Mirror.from_local_path(str(tmp_path / "my_build_cache"))
with binary_distribution.make_uploader(mirror=mirror) as uploader:
uploader.push_or_raise([s])
# Now re-install it.
s.package.do_uninstall()
@@ -624,7 +623,7 @@ def test_install_from_binary_with_missing_patch_succeeds(
s.package.do_install()
# Binary install: succeeds, we don't need the patch.
spack.mirror.add(spack.mirror.Mirror.from_local_path(str(build_cache)))
spack.mirror.add(mirror)
s.package.do_install(package_cache_only=True, dependencies_cache_only=True, unsigned=True)
assert temporary_store.db.query_local_by_spec_hash(s.dag_hash())


@@ -493,11 +493,13 @@ def fake_package_list(compiler, architecture, pkgs):
def test_bootstrapping_compilers_with_different_names_from_spec(
install_mockery, mutable_config, mock_fetch, archspec_host_is_spack_test_host
):
"""Tests that, when we bootstrap '%oneapi' we can translate it to the
'intel-oneapi-compilers' package.
"""
with spack.config.override("config:install_missing_compilers", True):
with spack.concretize.disable_compiler_existence_check():
spec = spack.spec.Spec("trivial-install-test-package%oneapi@=22.2.0").concretized()
spec.package.do_install()
assert (
spack.spec.CompilerSpec("oneapi@=22.2.0") in spack.compilers.all_compiler_specs()
)
@@ -582,7 +584,7 @@ def test_clear_failures_success(tmpdir):
assert os.path.isfile(failures.locker.lock_path)
@pytest.mark.xfail(sys.platform == "win32", reason="chmod does not prevent removal on Win")
@pytest.mark.not_on_windows("chmod does not prevent removal on Win")
def test_clear_failures_errs(tmpdir, capsys):
"""Test the clear_failures exception paths."""
failures = spack.database.FailureTracker(str(tmpdir), default_timeout=0.1)
@@ -749,29 +751,6 @@ def test_install_task_use_cache(install_mockery, monkeypatch):
assert request.pkg_id in installer.installed
def test_install_task_add_compiler(install_mockery, monkeypatch, capfd):
config_msg = "mock add_compilers_to_config"
def _add(_compilers):
tty.msg(config_msg)
installer = create_installer(["pkg-a"], {})
task = create_build_task(installer.build_requests[0].pkg)
task.compiler = True
# Preclude any meaningful side-effects
monkeypatch.setattr(spack.package_base.PackageBase, "unit_test_check", _true)
monkeypatch.setattr(inst.PackageInstaller, "_setup_install_dir", _noop)
monkeypatch.setattr(spack.build_environment, "start_build_process", _noop)
monkeypatch.setattr(spack.database.Database, "add", _noop)
monkeypatch.setattr(spack.compilers, "add_compilers_to_config", _add)
installer._install_task(task, None)
out = capfd.readouterr()[0]
assert config_msg in out
def test_release_lock_write_n_exception(install_mockery, tmpdir, capsys):
"""Test _release_lock for supposed write lock with exception."""
installer = create_installer(["trivial-install-test-package"], {})


@@ -274,7 +274,7 @@ def test_symlinks_false(self, stage):
assert not os.path.islink("dest/2")
check_added_exe_permissions("source/2", "dest/2")
@pytest.mark.skipif(sys.platform == "win32", reason="Broken symlinks not allowed on Windows")
@pytest.mark.not_on_windows("Broken symlinks not allowed on Windows")
def test_allow_broken_symlinks(self, stage):
"""Test installing with a broken symlink."""
with fs.working_dir(str(stage)):


@@ -4,19 +4,13 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import contextlib
import multiprocessing
import os
import signal
import sys
import time
from types import ModuleType
from typing import Optional
import pytest
import llnl.util.lang as lang
import llnl.util.tty.log as log
import llnl.util.tty.pty as pty
from spack.util.executable import which
@@ -173,342 +167,3 @@ def test_log_subproc_and_echo_output_capfd(capfd, tmpdir):
print("logged")
assert capfd.readouterr()[0] == "echo\n"
#
# Tests below use a pseudoterminal to test llnl.util.tty.log
#
def simple_logger(**kwargs):
"""Mock logger (minion) process for testing log.keyboard_input."""
running = [True]
def handler(signum, frame):
running[0] = False
signal.signal(signal.SIGUSR1, handler)
log_path = kwargs["log_path"]
with log.log_output(log_path):
while running[0]:
print("line")
time.sleep(1e-3)
def mock_shell_fg(proc, ctl, **kwargs):
"""PseudoShell controller function for test_foreground_background."""
ctl.fg()
ctl.status()
ctl.wait_enabled()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_fg_no_termios(proc, ctl, **kwargs):
"""PseudoShell controller function for test_foreground_background."""
ctl.fg()
ctl.status()
ctl.wait_disabled_fg()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_bg(proc, ctl, **kwargs):
"""PseudoShell controller function for test_foreground_background."""
ctl.bg()
ctl.status()
ctl.wait_disabled()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_tstp_cont(proc, ctl, **kwargs):
"""PseudoShell controller function for test_foreground_background."""
ctl.tstp()
ctl.wait_stopped()
ctl.cont()
ctl.wait_running()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_tstp_tstp_cont(proc, ctl, **kwargs):
"""PseudoShell controller function for test_foreground_background."""
ctl.tstp()
ctl.wait_stopped()
ctl.tstp()
ctl.wait_stopped()
ctl.cont()
ctl.wait_running()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_tstp_tstp_cont_cont(proc, ctl, **kwargs):
"""PseudoShell controller function for test_foreground_background."""
ctl.tstp()
ctl.wait_stopped()
ctl.tstp()
ctl.wait_stopped()
ctl.cont()
ctl.wait_running()
ctl.cont()
ctl.wait_running()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_bg_fg(proc, ctl, **kwargs):
"""PseudoShell controller function for test_foreground_background."""
ctl.bg()
ctl.status()
ctl.wait_disabled()
ctl.fg()
ctl.status()
ctl.wait_enabled()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_bg_fg_no_termios(proc, ctl, **kwargs):
"""PseudoShell controller function for test_foreground_background."""
ctl.bg()
ctl.status()
ctl.wait_disabled()
ctl.fg()
ctl.status()
ctl.wait_disabled_fg()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_fg_bg(proc, ctl, **kwargs):
"""PseudoShell controller function for test_foreground_background."""
ctl.fg()
ctl.status()
ctl.wait_enabled()
ctl.bg()
ctl.status()
ctl.wait_disabled()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_fg_bg_no_termios(proc, ctl, **kwargs):
"""PseudoShell controller function for test_foreground_background."""
ctl.fg()
ctl.status()
ctl.wait_disabled_fg()
ctl.bg()
ctl.status()
ctl.wait_disabled()
os.kill(proc.pid, signal.SIGUSR1)
@contextlib.contextmanager
def no_termios():
saved = log.termios
log.termios = None
try:
yield
finally:
log.termios = saved
@pytest.mark.skipif(not which("ps"), reason="requires ps utility")
@pytest.mark.skipif(not termios, reason="requires termios support")
@pytest.mark.parametrize(
"test_fn,termios_on_or_off",
[
# tests with termios
(mock_shell_fg, lang.nullcontext),
(mock_shell_bg, lang.nullcontext),
(mock_shell_bg_fg, lang.nullcontext),
(mock_shell_fg_bg, lang.nullcontext),
(mock_shell_tstp_cont, lang.nullcontext),
(mock_shell_tstp_tstp_cont, lang.nullcontext),
(mock_shell_tstp_tstp_cont_cont, lang.nullcontext),
# tests without termios
(mock_shell_fg_no_termios, no_termios),
(mock_shell_bg, no_termios),
(mock_shell_bg_fg_no_termios, no_termios),
(mock_shell_fg_bg_no_termios, no_termios),
(mock_shell_tstp_cont, no_termios),
(mock_shell_tstp_tstp_cont, no_termios),
(mock_shell_tstp_tstp_cont_cont, no_termios),
],
)
@pytest.mark.xfail(reason="Fails almost consistently when run with coverage and xdist")
def test_foreground_background(test_fn, termios_on_or_off, tmpdir):
"""Functional tests for foregrounding and backgrounding a logged process.
This ensures that things like SIGTTOU are not raised and that
terminal settings are corrected on foreground/background and on
process stop and start.
"""
shell = pty.PseudoShell(test_fn, simple_logger)
log_path = str(tmpdir.join("log.txt"))
# run the shell test
with termios_on_or_off():
shell.start(log_path=log_path, debug=True)
exitcode = shell.join()
# processes completed successfully
assert exitcode == 0
# assert log was created
assert os.path.exists(log_path)
def synchronized_logger(**kwargs):
"""Mock logger (minion) process for testing log.keyboard_input.
This logger synchronizes with the parent process to test that 'v' can
toggle output. It is used in ``test_foreground_background_output`` below.
"""
running = [True]
def handler(signum, frame):
running[0] = False
signal.signal(signal.SIGUSR1, handler)
log_path = kwargs["log_path"]
write_lock = kwargs["write_lock"]
v_lock = kwargs["v_lock"]
sys.stderr.write(os.getcwd() + "\n")
with log.log_output(log_path) as logger:
with logger.force_echo():
print("forced output")
while running[0]:
with write_lock:
if v_lock.acquire(False): # non-blocking acquire
print("off")
v_lock.release()
else:
print("on") # lock held; v is toggled on
time.sleep(1e-2)
def mock_shell_v_v(proc, ctl, **kwargs):
"""Controller function for test_foreground_background_output."""
write_lock = kwargs["write_lock"]
v_lock = kwargs["v_lock"]
ctl.fg()
ctl.wait_enabled()
time.sleep(0.1)
write_lock.acquire() # suspend writing
v_lock.acquire() # enable v lock
ctl.write(b"v") # toggle v on stdin
time.sleep(0.1)
write_lock.release() # resume writing
time.sleep(0.1)
write_lock.acquire() # suspend writing
ctl.write(b"v") # toggle v on stdin
time.sleep(0.1)
v_lock.release() # disable v lock
write_lock.release() # resume writing
time.sleep(0.1)
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_v_v_no_termios(proc, ctl, **kwargs):
"""Controller function for test_foreground_background_output."""
write_lock = kwargs["write_lock"]
v_lock = kwargs["v_lock"]
ctl.fg()
ctl.wait_disabled_fg()
time.sleep(0.1)
write_lock.acquire() # suspend writing
v_lock.acquire() # enable v lock
ctl.write(b"v\n") # toggle v on stdin
time.sleep(0.1)
write_lock.release() # resume writing
time.sleep(0.1)
write_lock.acquire() # suspend writing
ctl.write(b"v\n") # toggle v on stdin
time.sleep(0.1)
v_lock.release() # disable v lock
write_lock.release() # resume writing
time.sleep(0.1)
os.kill(proc.pid, signal.SIGUSR1)
@pytest.mark.skipif(not which("ps"), reason="requires ps utility")
@pytest.mark.skipif(not termios, reason="requires termios support")
@pytest.mark.parametrize(
"test_fn,termios_on_or_off",
[(mock_shell_v_v, lang.nullcontext), (mock_shell_v_v_no_termios, no_termios)],
)
@pytest.mark.xfail(reason="Fails almost consistently when run with coverage and xdist")
def test_foreground_background_output(test_fn, capfd, termios_on_or_off, tmpdir):
"""Tests hitting 'v' toggles output, and that force_echo works."""
if sys.version_info >= (3, 8) and sys.platform == "darwin" and termios_on_or_off == no_termios:
return
shell = pty.PseudoShell(test_fn, synchronized_logger)
log_path = str(tmpdir.join("log.txt"))
# Locks for synchronizing with minion
write_lock = multiprocessing.Lock() # must be held by minion to write
v_lock = multiprocessing.Lock() # held while controller is in v mode
with termios_on_or_off():
shell.start(write_lock=write_lock, v_lock=v_lock, debug=True, log_path=log_path)
exitcode = shell.join()
out, err = capfd.readouterr()
print(err) # will be shown if something goes wrong
print(out)
# processes completed successfully
assert exitcode == 0
# split output into lines
output = out.strip().split("\n")
# also get lines of log file
assert os.path.exists(log_path)
with open(log_path) as logfile:
log_data = logfile.read().strip().split("\n")
# Controller and minion process coordinate with locks such that the
# minion writes "off" when echo is off, and "on" when echo is on. The
# output should contain mostly "on" lines, but may contain "off"
# lines if the controller is slow. The important thing to observe
# here is that the output eventually ends with 'on'.
assert ["forced output", "on"] == lang.uniq(output) or [
"forced output",
"off",
"on",
] == lang.uniq(output)
# log should be off for a while, then on, then off
assert ["forced output", "off", "on", "off"] == lang.uniq(log_data) and log_data.count(
"off"
) > 2 # ensure some "off" lines were omitted

View File

@@ -9,16 +9,13 @@
This just tests whether the right args are getting passed to make.
"""
import os
import sys
import pytest
from spack.build_environment import MakeExecutable
from spack.util.environment import path_put_first
pytestmark = pytest.mark.skipif(
sys.platform == "win32", reason="MakeExecutable not supported on Windows"
)
pytestmark = pytest.mark.not_on_windows("MakeExecutable not supported on Windows")
@pytest.fixture(autouse=True)

View File

@@ -10,20 +10,21 @@
from llnl.util.symlink import resolve_link_target_relative_to_the_link
import spack.caches
import spack.config
import spack.fetch_strategy
import spack.mirror
import spack.patch
import spack.repo
import spack.stage
import spack.util.executable
import spack.util.spack_json as sjson
import spack.util.url as url_util
from spack.spec import Spec
from spack.stage import Stage
from spack.util.executable import which
from spack.util.spack_yaml import SpackYAMLError
pytestmark = [
pytest.mark.not_on_windows("does not run on windows"),
pytest.mark.usefixtures("mutable_config", "mutable_mock_repo"),
]
pytestmark = [pytest.mark.usefixtures("mutable_config", "mutable_mock_repo")]
# paths in repos that shouldn't be in the mirror tarballs.
exclude = [".hg", ".git", ".svn"]
@@ -51,7 +52,7 @@ def set_up_package(name, repository, url_attr):
def check_mirror():
with Stage("spack-mirror-test") as stage:
with spack.stage.Stage("spack-mirror-test") as stage:
mirror_root = os.path.join(stage.path, "test-mirror")
# register mirror with spack config
mirrors = {"spack-mirror-test": url_util.path_to_file_url(mirror_root)}
@@ -66,8 +67,8 @@ def check_mirror():
for spec in specs:
fetcher = spec.package.fetcher
per_package_ref = os.path.join(spec.name, "-".join([spec.name, str(spec.version)]))
mirror_paths = spack.mirror.mirror_archive_paths(fetcher, per_package_ref)
expected_path = os.path.join(mirror_root, mirror_paths.storage_path)
mirror_layout = spack.mirror.default_mirror_layout(fetcher, per_package_ref)
expected_path = os.path.join(mirror_root, mirror_layout.path)
assert os.path.exists(expected_path)
# Now try to fetch each package.
@@ -202,14 +203,12 @@ def test_invalid_json_mirror_collection(invalid_json, error_message):
def test_mirror_archive_paths_no_version(mock_packages, mock_archive):
spec = Spec("trivial-install-test-package@=nonexistingversion").concretized()
fetcher = spack.fetch_strategy.URLFetchStrategy(mock_archive.url)
spack.mirror.mirror_archive_paths(fetcher, "per-package-ref", spec)
fetcher = spack.fetch_strategy.URLFetchStrategy(url=mock_archive.url)
spack.mirror.default_mirror_layout(fetcher, "per-package-ref", spec)
def test_mirror_with_url_patches(mock_packages, monkeypatch):
spec = Spec("patch-several-dependencies")
spec.concretize()
spec = Spec("patch-several-dependencies").concretized()
files_cached_in_mirror = set()
def record_store(_class, fetcher, relative_dst, cosmetic_path=None):
@@ -228,30 +227,25 @@ def successful_expand(_class):
def successful_apply(*args, **kwargs):
pass
def successful_symlink(*args, **kwargs):
def successful_make_alias(*args, **kwargs):
pass
with Stage("spack-mirror-test") as stage:
with spack.stage.Stage("spack-mirror-test") as stage:
mirror_root = os.path.join(stage.path, "test-mirror")
monkeypatch.setattr(spack.fetch_strategy.URLFetchStrategy, "fetch", successful_fetch)
monkeypatch.setattr(spack.fetch_strategy.URLFetchStrategy, "expand", successful_expand)
monkeypatch.setattr(spack.patch, "apply_patch", successful_apply)
monkeypatch.setattr(spack.caches.MirrorCache, "store", record_store)
monkeypatch.setattr(spack.caches.MirrorCache, "symlink", successful_symlink)
monkeypatch.setattr(spack.mirror.DefaultLayout, "make_alias", successful_make_alias)
with spack.config.override("config:checksum", False):
spack.mirror.create(mirror_root, list(spec.traverse()))
assert not (
set(
[
"abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234",
"abcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcd.gz",
]
)
- files_cached_in_mirror
)
assert {
"abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234",
"abcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcd.gz",
}.issubset(files_cached_in_mirror)
class MockFetcher:
@@ -266,23 +260,21 @@ def archive(dst):
@pytest.mark.regression("14067")
def test_mirror_cache_symlinks(tmpdir):
def test_mirror_layout_make_alias(tmpdir):
"""Confirm that the cosmetic symlink created in the mirror cache (which may
be relative) targets the storage path correctly.
"""
cosmetic_path = "zlib/zlib-1.2.11.tar.gz"
global_path = "_source-cache/archive/c3/c3e5.tar.gz"
cache = spack.caches.MirrorCache(str(tmpdir), False)
reference = spack.mirror.MirrorReference(cosmetic_path, global_path)
alias = os.path.join("zlib", "zlib-1.2.11.tar.gz")
path = os.path.join("_source-cache", "archive", "c3", "c3e5.tar.gz")
cache = spack.caches.MirrorCache(root=str(tmpdir), skip_unstable_versions=False)
layout = spack.mirror.DefaultLayout(alias, path)
cache.store(MockFetcher(), reference.storage_path)
cache.symlink(reference)
cache.store(MockFetcher(), layout.path)
layout.make_alias(cache.root)
link_target = resolve_link_target_relative_to_the_link(
os.path.join(cache.root, reference.cosmetic_path)
)
link_target = resolve_link_target_relative_to_the_link(os.path.join(cache.root, layout.alias))
assert os.path.exists(link_target)
assert os.path.normpath(link_target) == os.path.join(cache.root, reference.storage_path)
assert os.path.normpath(link_target) == os.path.join(cache.root, layout.path)
@pytest.mark.regression("31627")

View File

@@ -11,6 +11,7 @@
import os
import pathlib
import re
import urllib.error
from contextlib import contextmanager
import pytest
@@ -293,7 +294,10 @@ def test_uploading_with_base_image_in_docker_image_manifest_v2_format(
def test_best_effort_upload(mutable_database: spack.database.Database, monkeypatch):
"""Failure to upload a blob or manifest should not prevent others from being uploaded"""
"""Failure to upload a blob or manifest should not prevent others from being uploaded -- it
should be a best-effort operation. If any runtime dep fails to upload, it results in a missing
layer for dependents. But we do still create manifests for dependents, so that the build cache
is maximally useful. (The downside is that container images are not runnable)."""
_push_blob = spack.binary_distribution._oci_push_pkg_blob
_push_manifest = spack.binary_distribution._oci_put_manifest
@@ -315,32 +319,51 @@ def put_manifest(base_images, checksums, image_ref, tmpdir, extra_config, annota
monkeypatch.setattr(spack.binary_distribution, "_oci_push_pkg_blob", push_blob)
monkeypatch.setattr(spack.binary_distribution, "_oci_put_manifest", put_manifest)
mirror("add", "oci-test", "oci://example.com/image")
registry = InMemoryOCIRegistry("example.com")
with oci_servers(registry):
mirror("add", "oci-test", "oci://example.com/image")
image = ImageReference.from_string("example.com/image")
with pytest.raises(spack.error.SpackError, match="The following 4 errors occurred") as e:
with oci_servers(registry):
with pytest.raises(spack.error.SpackError, match="The following 2 errors occurred") as e:
buildcache("push", "--update-index", "oci-test", "mpileaks^mpich")
error = str(e.value)
# mpich's blob failed to upload and libdwarf's manifest failed to upload
assert re.search("mpich.+: Exception: Blob Server Error", e.value)
assert re.search("libdwarf.+: Exception: Manifest Server Error", e.value)
# mpich's blob failed to upload
assert re.search("mpich.+: Exception: Blob Server Error", error)
mpileaks: spack.spec.Spec = mutable_database.query_local("mpileaks^mpich")[0]
# libdwarf's manifest failed to upload
assert re.search("libdwarf.+: Exception: Manifest Server Error", error)
without_manifest = ("mpich", "libdwarf")
# since there is no blob for mpich, runtime dependents cannot refer to it in their
# manifests, which is a transitive error.
assert re.search("callpath.+: MissingLayerError: missing layer for mpich", error)
assert re.search("mpileaks.+: MissingLayerError: missing layer for mpich", error)
# Verify that manifests of mpich/libdwarf are missing due to upload failure.
for name in without_manifest:
tagged_img = image.with_tag(default_tag(mpileaks[name]))
with pytest.raises(urllib.error.HTTPError, match="404"):
get_manifest_and_config(tagged_img)
mpileaks: spack.spec.Spec = mutable_database.query_local("mpileaks^mpich")[0]
# Collect the layer digests of successfully uploaded packages. Every package should refer
# to its own tarballs and those of its runtime deps that were uploaded.
pkg_to_all_digests = {}
pkg_to_own_digest = {}
for s in mpileaks.traverse():
if s.name in without_manifest:
continue
# This should not raise a 404.
manifest, _ = get_manifest_and_config(image.with_tag(default_tag(s)))
# ensure that packages not affected by errors were uploaded still.
uploaded_tags = {tag for _, tag in registry.manifests.keys()}
failures = {"mpich", "libdwarf", "callpath", "mpileaks"}
expected_tags = {default_tag(s) for s in mpileaks.traverse() if s.name not in failures}
# Collect layer digests
pkg_to_all_digests[s.name] = {layer["digest"] for layer in manifest["layers"]}
pkg_to_own_digest[s.name] = manifest["layers"][-1]["digest"]
assert expected_tags
assert uploaded_tags == expected_tags
# Verify that all packages reference blobs of their runtime deps that uploaded fine.
for s in mpileaks.traverse():
if s.name in without_manifest:
continue
expected_digests = {
pkg_to_own_digest[t.name]
for t in s.traverse(deptype=("link", "run"), root=True)
if t.name not in without_manifest
}
# Test with issubset, cause we don't have the blob of libdwarf as it has no manifest.
assert expected_digests and expected_digests.issubset(pkg_to_all_digests[s.name])
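# Illustrative sketch (not part of the test above): the "best effort" behavior
# being exercised -- attempt every upload, record per-item failures, and report
# them all at once so a single bad blob or manifest does not block the rest.
def upload(name):
    """Hypothetical upload helper that fails for a couple of packages, mimicking the test."""
    if name in ("mpich", "libdwarf"):
        raise RuntimeError("server error")

failures = []
for item in ["mpich", "libdwarf", "callpath", "mpileaks"]:
    try:
        upload(item)
    except Exception as exc:
        failures.append(f"{item}: {exc}")
if failures:
    print(f"The following {len(failures)} errors occurred: " + "; ".join(failures))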

View File

@@ -259,6 +259,7 @@ def test_git_url_top_level_git_versions(version_str, tag, commit, branch):
assert fetcher.tag == tag
assert fetcher.commit == commit
assert fetcher.branch == branch
assert fetcher.url == pkg_factory("git-url-top-level").git
@pytest.mark.usefixtures("mock_packages", "config")
@@ -319,3 +320,14 @@ def test_package_deprecated_version(mock_packages, mock_fetch, mock_stage):
assert spack.package_base.deprecated_version(pkg_cls, "1.1.0")
assert not spack.package_base.deprecated_version(pkg_cls, "1.0.0")
def test_package_can_have_sparse_checkout_properties(mock_packages, mock_fetch, mock_stage):
spec = Spec("git-sparsepaths-pkg")
pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
assert hasattr(pkg_cls, "git_sparse_paths")
fetcher = spack.fetch_strategy.for_package_version(pkg_cls(spec), "1.0")
assert isinstance(fetcher, spack.fetch_strategy.GitFetchStrategy)
assert hasattr(fetcher, "git_sparse_paths")
assert fetcher.git_sparse_paths == pkg_cls.git_sparse_paths

View File

@@ -48,7 +48,7 @@
def test_buildcache(mock_archive, tmp_path, monkeypatch, mutable_config):
# Install a test package
spec = Spec("trivial-install-test-package").concretized()
monkeypatch.setattr(spec.package, "fetcher", URLFetchStrategy(mock_archive.url))
monkeypatch.setattr(spec.package, "fetcher", URLFetchStrategy(url=mock_archive.url))
spec.package.do_install()
pkghash = "/" + str(spec.dag_hash(7))

View File

@@ -3,54 +3,19 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import pytest
import spack.config as spack_config
import spack.error
import spack.fetch_strategy as spack_fs
import spack.stage as spack_stage
@pytest.mark.parametrize("_fetch_method", ["curl", "urllib"])
def test_s3fetchstrategy_sans_url(_fetch_method):
"""Ensure constructor with no URL fails."""
with spack_config.override("config:url_fetch_method", _fetch_method):
with pytest.raises(ValueError):
spack_fs.S3FetchStrategy(None)
@pytest.mark.parametrize("_fetch_method", ["curl", "urllib"])
def test_s3fetchstrategy_bad_url(tmpdir, _fetch_method):
"""Ensure fetch with bad URL fails as expected."""
testpath = str(tmpdir)
with spack_config.override("config:url_fetch_method", _fetch_method):
fetcher = spack_fs.S3FetchStrategy(url="file:///does-not-exist")
assert fetcher is not None
with spack_stage.Stage(fetcher, path=testpath) as stage:
assert stage is not None
assert fetcher.archive_file is None
with pytest.raises(spack.error.FetchError):
fetcher.fetch()
@pytest.mark.parametrize("_fetch_method", ["curl", "urllib"])
def test_s3fetchstrategy_downloaded(tmpdir, _fetch_method):
def test_s3fetchstrategy_downloaded(tmp_path):
"""Ensure fetch with archive file already downloaded is a noop."""
testpath = str(tmpdir)
archive = os.path.join(testpath, "s3.tar.gz")
archive = tmp_path / "s3.tar.gz"
with spack_config.override("config:url_fetch_method", _fetch_method):
class Archived_S3FS(spack_fs.S3FetchStrategy):
@property
def archive_file(self):
return archive
class Archived_S3FS(spack_fs.S3FetchStrategy):
@property
def archive_file(self):
return archive
url = "s3:///{0}".format(archive)
fetcher = Archived_S3FS(url=url)
with spack_stage.Stage(fetcher, path=testpath):
fetcher.fetch()
fetcher = Archived_S3FS(url="s3://example/s3.tar.gz")
with spack_stage.Stage(fetcher, path=str(tmp_path)):
fetcher.fetch()

View File

@@ -2,10 +2,8 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Test Spack's custom YAML format."""
import io
import sys
import pytest
@@ -126,7 +124,7 @@ def test_yaml_aliases():
),
],
)
@pytest.mark.xfail(sys.platform == "win32", reason="fails on Windows")
@pytest.mark.not_on_windows(reason="fails on Windows")
def test_round_trip_configuration(initial_content, expected_final_content, tmp_path):
"""Test that configuration can be loaded and dumped without too many changes"""
file = tmp_path / "test.yaml"

View File

@@ -76,12 +76,6 @@ def fn_urls(v):
return factory
def test_urlfetchstrategy_sans_url():
"""Ensure constructor with no URL fails."""
with pytest.raises(ValueError):
fs.URLFetchStrategy(None)
@pytest.mark.parametrize("method", ["curl", "urllib"])
def test_urlfetchstrategy_bad_url(tmp_path, mutable_config, method):
"""Ensure fetch with bad URL fails as expected."""
@@ -267,7 +261,7 @@ def is_true():
monkeypatch.setattr(sys.stdout, "isatty", is_true)
monkeypatch.setattr(tty, "msg_enabled", is_true)
with spack.config.override("config:url_fetch_method", "curl"):
fetcher = fs.URLFetchStrategy(mock_archive.url)
fetcher = fs.URLFetchStrategy(url=mock_archive.url)
with Stage(fetcher, path=testpath) as stage:
assert fetcher.archive_file is None
stage.fetch()
@@ -280,7 +274,7 @@ def is_true():
def test_url_extra_fetch(tmp_path, mutable_config, mock_archive, _fetch_method):
"""Ensure a fetch after downloading is effectively a no-op."""
mutable_config.set("config:url_fetch_method", _fetch_method)
fetcher = fs.URLFetchStrategy(mock_archive.url)
fetcher = fs.URLFetchStrategy(url=mock_archive.url)
with Stage(fetcher, path=str(tmp_path)) as stage:
assert fetcher.archive_file is None
stage.fetch()

View File

@@ -160,22 +160,13 @@ def test_reverse_environment_modifications(working_env):
assert os.environ == start_env
def test_escape_double_quotes_in_shell_modifications():
to_validate = envutil.EnvironmentModifications()
def test_shell_modifications_are_properly_escaped():
"""Test that variable values are properly escaped so that they can safely be eval'd."""
changes = envutil.EnvironmentModifications()
changes.set("VAR", "$PATH")
changes.append_path("VAR", "$ANOTHER_PATH")
changes.set("RM_RF", "$(rm -rf /)")
to_validate.set("VAR", "$PATH")
to_validate.append_path("VAR", "$ANOTHER_PATH")
to_validate.set("QUOTED_VAR", '"MY_VAL"')
if sys.platform == "win32":
cmds = to_validate.shell_modifications(shell="bat")
assert r'set "VAR=$PATH;$ANOTHER_PATH"' in cmds
assert r'set "QUOTED_VAR="MY_VAL"' in cmds
cmds = to_validate.shell_modifications(shell="pwsh")
assert "$Env:VAR='$PATH;$ANOTHER_PATH'" in cmds
assert "$Env:QUOTED_VAR='\"MY_VAL\"'" in cmds
else:
cmds = to_validate.shell_modifications()
assert 'export VAR="$PATH:$ANOTHER_PATH"' in cmds
assert r'export QUOTED_VAR="\"MY_VAL\""' in cmds
script = changes.shell_modifications(shell="sh")
assert f"export VAR='$PATH{os.pathsep}$ANOTHER_PATH'" in script
assert "export RM_RF='$(rm -rf /)'" in script

View File

@@ -9,6 +9,7 @@
import pytest
import spack.directives
import spack.directives_meta
import spack.paths
import spack.repo
import spack.util.package_hash as ph
@@ -211,13 +212,13 @@ def foo():
{directives}
""".format(
directives="\n".join(" %s()" % name for name in spack.directives.directive_names)
directives="\n".join(" %s()" % name for name in spack.directives_meta.directive_names)
)
def test_remove_all_directives():
"""Ensure all directives are removed from packages before hashing."""
for name in spack.directives.directive_names:
for name in spack.directives_meta.directive_names:
assert name in many_directives
tree = ast.parse(many_directives)
@@ -225,7 +226,7 @@ def test_remove_all_directives():
tree = ph.RemoveDirectives(spec).visit(tree)
unparsed = unparse(tree, py_ver_consistent=True)
for name in spack.directives.directive_names:
for name in spack.directives_meta.directive_names:
assert name not in unparsed

View File

@@ -37,6 +37,7 @@ def _create_url(relative_url):
page_4 = _create_url("4.html")
root_with_fragment = _create_url("index_with_fragment.html")
root_with_javascript = _create_url("index_with_javascript.html")
@pytest.mark.parametrize(
@@ -148,6 +149,11 @@ def test_find_versions_of_archive_with_fragment():
assert Version("5.0.0") in versions
def test_find_versions_of_archive_with_javascript():
versions = spack.url.find_versions_of_archive(root_tarball, root_with_javascript, list_depth=0)
assert Version("5.0.0") in versions
def test_get_header():
headers = {"Content-type": "text/plain"}

View File

@@ -1,35 +0,0 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import inspect
import llnl.util.tty as tty
from llnl.util.lang import list_modules, memoized
from spack.util.naming import mod_to_class
__all__ = ["list_classes"]
@memoized
def list_classes(parent_module, mod_path):
"""Given a parent path (e.g., spack.platforms or spack.analyzers),
use list_modules to derive the module names, and then mod_to_class
to derive class names. Import the classes and return them in a list
"""
classes = []
for name in list_modules(mod_path):
mod_name = "%s.%s" % (parent_module, name)
class_name = mod_to_class(name)
mod = __import__(mod_name, fromlist=[class_name])
if not hasattr(mod, class_name):
tty.die("No class %s defined in %s" % (class_name, mod_name))
cls = getattr(mod, class_name)
if not inspect.isclass(cls):
tty.die("%s.%s is not a class" % (mod_name, class_name))
classes.append(cls)
return classes

View File

@@ -11,6 +11,7 @@
import os.path
import pickle
import re
import shlex
import sys
from functools import wraps
from typing import Any, Callable, Dict, List, MutableMapping, Optional, Tuple, Union
@@ -63,26 +64,6 @@
ModificationList = List[Union["NameModifier", "NameValueModifier"]]
_find_unsafe = re.compile(r"[^\w@%+=:,./-]", re.ASCII).search
def double_quote_escape(s):
"""Return a shell-escaped version of the string *s*.
This is similar to how shlex.quote works, but it escapes with double quotes
instead of single quotes, to allow environment variable expansion within
quoted strings.
"""
if not s:
return '""'
if _find_unsafe(s) is None:
return s
# use double quotes, and escape double quotes in the string
# the string $"b is then quoted as "$\"b"
return '"' + s.replace('"', r"\"") + '"'
def system_env_normalize(func):
"""Decorator wrapping calls to system env modifications,
converting all env variable names to all upper case on Windows, no-op
@@ -182,7 +163,7 @@ def _nix_env_var_to_source_line(var: str, val: str) -> str:
fname=BASH_FUNCTION_FINDER.sub(r"\1", var), decl=val
)
else:
source_line = f"{var}={double_quote_escape(val)}; export {var}"
source_line = f"{var}={shlex.quote(val)}; export {var}"
return source_line
@@ -691,11 +672,10 @@ def shell_modifications(
if new is None:
cmds += _SHELL_UNSET_STRINGS[shell].format(name)
else:
if sys.platform != "win32":
new_env_name = double_quote_escape(new_env[name])
else:
new_env_name = new_env[name]
cmd = _SHELL_SET_STRINGS[shell].format(name, new_env_name)
value = new_env[name]
if shell not in ("bat", "pwsh"):
value = shlex.quote(value)
cmd = _SHELL_SET_STRINGS[shell].format(name, value)
cmds += cmd
return cmds

View File

@@ -5,7 +5,7 @@
import ast
import spack.directives
import spack.directives_meta
import spack.error
import spack.package_base
import spack.repo
@@ -82,7 +82,7 @@ def visit_Expr(self, node):
node.value
and isinstance(node.value, ast.Call)
and isinstance(node.value.func, ast.Name)
and node.value.func.id in spack.directives.directive_names
and node.value.func.id in spack.directives_meta.directive_names
)
else node
)

View File

@@ -7,6 +7,7 @@
import concurrent.futures
import email.message
import errno
import json
import os
import os.path
import re
@@ -152,7 +153,8 @@ class HTMLParseError(Exception):
class LinkParser(HTMLParser):
"""This parser just takes an HTML page and strips out the hrefs on the
links. Good enough for a really simple spider."""
links, as well as some javascript tags used on GitLab servers.
Good enough for a really simple spider."""
def __init__(self):
super().__init__()
@@ -160,9 +162,18 @@ def __init__(self):
def handle_starttag(self, tag, attrs):
if tag == "a":
for attr, val in attrs:
if attr == "href":
self.links.append(val)
self.links.extend(val for key, val in attrs if key == "href")
# GitLab uses a javascript function to place dropdown links:
# <div class="js-source-code-dropdown" ...
# data-download-links="[{"path":"/graphviz/graphviz/-/archive/12.0.0/graphviz-12.0.0.zip",...},...]"/>
if tag == "div" and ("class", "js-source-code-dropdown") in attrs:
try:
links_str = next(val for key, val in attrs if key == "data-download-links")
links = json.loads(links_str)
self.links.extend(x["path"] for x in links)
except Exception:
pass
class ExtractMetadataParser(HTMLParser):
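# Illustrative sketch (assumed HTML snippet, not from the diff): how a parser
# like LinkParser above recovers archive links from GitLab's
# "js-source-code-dropdown" div by decoding its data-download-links JSON.
import json
from html.parser import HTMLParser

class DropdownLinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "div" and ("class", "js-source-code-dropdown") in attrs:
            payload = next((val for key, val in attrs if key == "data-download-links"), "[]")
            self.links.extend(entry["path"] for entry in json.loads(payload))

parser = DropdownLinkParser()
parser.feed('<div class="js-source-code-dropdown" data-download-links='
            '\'[{"path": "/graphviz/graphviz/-/archive/12.0.0/graphviz-12.0.0.zip"}]\'></div>')
print(parser.links)  # ['/graphviz/graphviz/-/archive/12.0.0/graphviz-12.0.0.zip']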

View File

@@ -17,10 +17,22 @@
import llnl.util.tty.color
from llnl.string import comma_or
import spack.directives
import spack.error as error
import spack.parser
#: These are variant names used by Spack internally; packages can't use them
reserved_names = [
"arch",
"architecture",
"dev_path",
"namespace",
"operating_system",
"os",
"patches",
"platform",
"target",
]
special_variant_values = [None, "none", "*"]
@@ -679,7 +691,7 @@ def substitute_abstract_variants(spec):
# in $spack/lib/spack/spack/spec_list.py
failed = []
for name, v in spec.variants.items():
if name in spack.directives.reserved_names:
if name in reserved_names:
if name == "dev_path":
new_variant = SingleValuedVariant(name, v._original_value)
spec.variants.substitute(new_variant)

View File

@@ -138,7 +138,7 @@ def lookup_ref(self, ref) -> Tuple[Optional[str], int]:
# Only clone if we don't have it!
if not os.path.exists(dest):
self.fetcher.clone(dest, bare=True)
self.fetcher.bare_clone(dest)
# Lookup commit info
with working_dir(dest):

View File

@@ -258,7 +258,6 @@ ci:
- py-scipy
- py-statsmodels
- py-warlock
- py-warpx
- raja
- slepc
- slurm
@@ -272,6 +271,7 @@ ci:
- vtk
- vtk-h
- vtk-m
- warpx +python
- zfp
build-job:
tags: [ "spack", "medium" ]

View File

@@ -169,13 +169,13 @@ spack:
# - phist # fortran_bindings/CMakeFiles/phist_fort.dir/phist_testing.F90.o: ftn-78 ftn: ERROR in command line. The -f option has an invalid argument, "no-math-errno".
# - plasma # %cce conflict
# - py-jupyterhub # rust: ld.lld: error: relocation R_X86_64_32 cannot be used against local symbol; recompile with -fPIC'; defined in /opt/cray/pe/cce/15.0.1/cce/x86_64/lib/no_mmap.o, referenced by /opt/cray/pe/cce/15.0.1/cce/x86_64/lib/no_mmap.o:(__no_mmap_for_malloc)
# - py-warpx # py-scipy: meson.build:82:0: ERROR: Unknown compiler(s): [['/home/gitlab-runner-3/builds/dWfnZWPh/0/spack/spack/lib/spack/env/cce/ftn']]
# - quantum-espresso # quantum-espresso: CMake Error at cmake/FindSCALAPACK.cmake:503 (message): A required library with SCALAPACK API not found. Please specify library
# - scr # scr: make[2]: *** [examples/CMakeFiles/test_ckpt_F.dir/build.make:112: examples/test_ckpt_F] Error 1: /opt/cray/pe/cce/15.0.1/binutils/x86_64/x86_64-pc-linux-gnu/bin/ld: /opt/cray/pe/mpich/8.1.25/ofi/cray/10.0/lib/libmpi_cray.so: undefined reference to `PMI_Barrier'
# - strumpack ~slate # strumpack: [test/CMakeFiles/test_HSS_seq.dir/build.make:117: test/test_HSS_seq] Error 1: ld.lld: error: undefined reference due to --no-allow-shlib-undefined: mpi_abort_
# - upcxx # upcxx: configure error: User requested --enable-ofi but I don't know how to build ofi programs for your system
# - variorum # variorum: /opt/cray/pe/cce/15.0.1/binutils/x86_64/x86_64-pc-linux-gnu/bin/ld: /opt/cray/pe/lib64/libpals.so.0: undefined reference to `json_array_append_new@@libjansson.so.4'
# - xyce +mpi +shared +pymi +pymi_static_tpls ^trilinos~shylu # openblas: ftn-2307 ftn: ERROR in command line: The "-m" option must be followed by 0, 1, 2, 3 or 4.; make[2]: *** [<builtin>: spotrf2.o] Error 1; make[1]: *** [Makefile:27: lapacklib] Error 2; make: *** [Makefile:250: netlib] Error 2
# - warpx +python # py-scipy: meson.build:82:0: ERROR: Unknown compiler(s): [['/home/gitlab-runner-3/builds/dWfnZWPh/0/spack/spack/lib/spack/env/cce/ftn']]
cdash:
build-group: E4S Cray

View File

@@ -163,13 +163,13 @@ spack:
# - phist
# - plasma
# - py-jupyterhub
# - py-warpx
# - quantum-espresso
# - scr
# - strumpack ~slate
# - upcxx
# - variorum
# - xyce +mpi +shared +pymi +pymi_static_tpls ^trilinos~shylu
# - warpx +python
cdash:
build-group: E4S Cray SLES

View File

@@ -133,7 +133,6 @@ spack:
- py-jupyterhub
- py-libensemble
- py-petsc4py
- py-warpx
- qthreads scheduler=distrib
- quantum-espresso
- raja
@@ -157,6 +156,7 @@ spack:
- upcxx
# - veloc
- wannier90
- warpx +python
- xyce +mpi +shared +pymi +pymi_static_tpls
# INCLUDED IN ECP DAV CPU
# - adios2

View File

@@ -134,7 +134,6 @@ spack:
- py-jupyterhub
- py-libensemble
- py-petsc4py
- py-warpx
- qthreads scheduler=distrib
- quantum-espresso
- raja
@@ -157,6 +156,9 @@ spack:
- umpire
- upcxx
- wannier90
- warpx +python
- wps
- wrf
- xyce +mpi +shared +pymi +pymi_static_tpls
# INCLUDED IN ECP DAV CPU
- adios2

View File

@@ -143,12 +143,11 @@ spack:
- precice
- pruners-ninja
- pumi
- py-amrex
- py-amrex ~ipo # oneAPI 2024.2.0 builds do not support IPO/LTO says CMake, even though pybind11 strongly encourages it
- py-h5py
- py-jupyterhub
- py-libensemble
- py-petsc4py
- py-warpx
- qthreads scheduler=distrib
- raja
- rempi
@@ -171,6 +170,7 @@ spack:
- upcxx
- variorum
- wannier90
- wrf
- xyce +mpi +shared +pymi +pymi_static_tpls
# INCLUDED IN ECP DAV CPU
- adios2
@@ -188,6 +188,7 @@ spack:
- veloc
# - visit # visit: +visit: visit_vtk/lightweight/vtkSkewLookupTable.C:32:10: error: cannot initialize return object of type 'unsigned char *' with an rvalue of type 'const unsigned char *'
- vtk-m ~openmp
- warpx +python ~python_ipo ^py-amrex ~ipo # oneAPI 2024.2.0 builds do not support IPO/LTO says CMake, even though pybind11 strongly encourages it
- zfp
# --
# - chapel ~cuda ~rocm # llvm: closures.c:(.text+0x305e): undefined reference to `_intel_fast_memset'
@@ -197,6 +198,7 @@ spack:
# - lbann # 2024.2 internal compiler error
# - plasma # 2024.2 internal compiler error
# - quantum-espresso # quantum-espresso: external/mbd/src/mbd_c_api.F90(392): error #6645: The name of the module procedure conflicts with a name in the encompassing scoping unit. [F_C_STRING]
# - wps # wps: InstallError: Compiler not recognized nor supported.
# PYTHON PACKAGES
- opencv +python3
@@ -232,10 +234,10 @@ spack:
- sundials +sycl cxxstd=17 +examples-install
- tau +mpi +opencl +level_zero ~pdt +syscall # requires libdrm.so to be installed
- upcxx +level_zero
- warpx ~qed +python ~python_ipo compute=sycl ^py-amrex ~ipo # qed for https://github.com/ECP-WarpX/picsar/pull/53 prior to 24.09 release; ~ipo for oneAPI 2024.2.0 GPU builds do not support IPO/LTO says CMake, even though pybind11 strongly encourages it
# --
# - hpctoolkit +level_zero # dyninst@12.3.0%gcc: /usr/bin/ld: libiberty/./d-demangle.c:142: undefined reference to `_intel_fast_memcpy'; can't mix intel-tbb@%oneapi with dyninst%gcc
# - slate +sycl # slate: ifx: error #10426: option '-fopenmp-targets' requires '-fiopenmp'
# - warpx compute=sycl # warpx: fetchedamrex-src/Src/Base/AMReX_RandomEngine.H:18:10: fatal error: 'oneapi/mkl/rng/device.hpp' file not found
ci:

View File

@@ -137,7 +137,6 @@ spack:
- py-jupyterhub
- py-libensemble
- py-petsc4py
- py-warpx
- qthreads scheduler=distrib
- quantum-espresso
- raja
@@ -160,6 +159,9 @@ spack:
- umpire
- upcxx
- wannier90
- warpx +python
- wps
- wrf
- xyce +mpi +shared +pymi +pymi_static_tpls
# INCLUDED IN ECP DAV CPU
- adios2

View File

@@ -147,7 +147,6 @@ spack:
- py-jupyterhub
- py-libensemble
- py-petsc4py
- py-warpx
- qthreads scheduler=distrib
- quantum-espresso
- raja
@@ -171,6 +170,8 @@ spack:
- upcxx
- variorum
- wannier90
- wps
- wrf
- xyce +mpi +shared +pymi +pymi_static_tpls
# INCLUDED IN ECP DAV CPU
- adios2
@@ -188,6 +189,7 @@ spack:
- veloc
- visit # silo: https://github.com/spack/spack/issues/39538
- vtk-m
- warpx +python
- zfp
# --
# - geopm # geopm: https://github.com/spack/spack/issues/38795

Some files were not shown because too many files have changed in this diff.