Compare commits

...

1345 Commits

Author SHA1 Message Date
Gregory Becker
1b4ff30665 clean up post rebase 2023-06-14 11:18:41 -07:00
Gregory Becker
c10797718a fixup after rebase 2023-06-13 12:11:27 -07:00
becker33
81ea29b007 [@spackbot] updating style on behalf of becker33 2023-06-13 11:54:00 -07:00
Gregory Becker
d8666a7fdf !fixup 511dd5a968c23f3f0c660b0cc5216415e60c6018 2023-06-13 11:54:00 -07:00
Gregory Becker
7c41bba6f8 error causation in separate file loaded for debug only 2023-06-13 11:53:58 -07:00
Gregory Becker
20e9fe3785 solver: nodes directly caused by dependency conditions 2023-06-13 11:51:01 -07:00
Gregory Becker
401218b4f1 solver: reorder code so it's easier to instrument for debugging 2023-06-13 11:50:57 -07:00
Gregory Becker
adfc1c0896 error messages: add causation for conflicting variants error 2023-06-13 11:49:45 -07:00
Gregory Becker
f4402c1cde error messages: print cause tree as part of error 2023-06-13 11:48:02 -07:00
Gregory Becker
c1d6d93388 error-chaining: expand to additional error messages 2023-06-13 11:44:54 -07:00
Gregory Becker
e9012c7781 solver: first prototype of chained error messages for one message type 2023-06-13 11:42:05 -07:00
Gregory Becker
59acfe4f0b solver: treat literals as conditions 2023-06-13 11:28:47 -07:00
Gregory Becker
004ff9d4e2 error messages: compute connections between conditions 2023-06-13 11:28:47 -07:00
Gregory Becker
9d20be5fe5 solver: remove indirection for dependency conditions 2023-06-13 11:28:47 -07:00
Gregory Becker
edc07dab27 solver: remove indirection for external conditions 2023-06-13 11:28:47 -07:00
Gregory Becker
acde8ef104 solver: remove indirection for provider conditions 2023-06-13 11:28:47 -07:00
Massimiliano Culpo
ed76966a3a gnupg: add v2.4.2 (#38059)
* gnupg: add v2.4.2

* [@spackbot] updating style on behalf of alalazo

---------

Co-authored-by: alalazo <alalazo@users.noreply.github.com>
2023-06-02 09:28:54 -07:00
willdunklin
2015a51d1a sz: add versions v2.1.12.5 and v2.1.12.4 (#38060) 2023-06-02 09:28:05 -05:00
Manuela Kuhn
34b8fe827e py-flask: add 2.3.2, py-blinker: add 1.6.2, py-werkzeug: add 2.3.4 (#38025) 2023-06-02 05:57:22 -04:00
Manuela Kuhn
6f1ed9b2e4 py-exceptiongroup: add 1.1.1 (#38018) 2023-06-02 05:47:49 -04:00
M. Eric Irrgang
dd00f50943 Add py-gmxapi version 0.4.2 (#38053) 2023-06-02 03:07:34 -04:00
Sam Grayson
f0ec625321 singularityce: add v3.11.3 (#38055) 2023-06-02 08:04:39 +02:00
Manuela Kuhn
d406c371a8 py-formulaic: add 0.6.1 (#38035) 2023-06-01 19:23:15 -04:00
Sean Koyama
42d374a34d double-batched-fft-library: patch to add search paths to findOpenCL.cmake (#36355)
* double-batched-fft-library: PATCH: add search paths to find libOpenCL

* Apply patch up to version 0.3.6
2023-06-01 16:17:04 -07:00
Sean Koyama
d90e4fcc3d lua-luafilesystem: update source URL and improve rockspec detection (#36692)
* lua-luafilesystem: updated sources to new URL. Changed versioning to dot-separated versions.

* lua-luafilesystem: override install phase to find correct rockspec

* lua-luafilesystem: improved rockspec detection

* lua-luafilesystem: added lua version constraint for older versions
2023-06-01 16:15:42 -07:00
Manuela Kuhn
a44fde9dc9 py-docutils: add 0.20.1 (#38001) 2023-06-01 18:18:29 -04:00
Manuela Kuhn
9ac8841dab py-fonttools: add 4.39.4 (#38027) 2023-06-01 18:13:20 -04:00
Manuela Kuhn
a1f87638ec py-future: add 0.18.3 (#38041)
* py-future: add 0.18.3

* Update var/spack/repos/builtin/packages/py-future/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-06-01 17:48:13 -04:00
Manuela Kuhn
3b55e0a65d py-executing: add 1.2.0 (#38019) 2023-06-01 17:38:09 -04:00
Massimiliano Culpo
42667fe7fa Memoize a few hot functions during module file generation (#37739) 2023-06-01 13:36:42 -07:00
Ted Stern
cd27611d2f Adding libpsm3 package (#37444)
* Adding libpsm3 package
* Make changes suggested by flake8
* Make one more flake8-suggested change, blank line after 'import os'
* Change to standard header to pass flake8 tests
* Update doc string, remove unnecessary comments
* Reviewer-recommended changes
* Alphabetize variants
* Use helper functions
* Change quotes to pass spack style check
2023-06-01 13:01:43 -07:00
Massimiliano Culpo
b111d2172e py-archspec: add v0.2.1 (#38014) 2023-06-01 14:58:45 -05:00
Tobias Ribizel
055263fa3c Fix OpenCV detection on Ubuntu (#35336)
* fix OpenCV detection on Ubuntu

* Update package.py

* Simplify version detection

* remove superfluous `return`
2023-06-01 14:38:17 -05:00
Manuela Kuhn
f34f207bdc py-datalad-neuroimaging: add 0.3.3 (#38017) 2023-06-01 14:31:54 -05:00
snehring
0c9f0fd40d openssh: depend on krb5+shared when +gssapi (#38034) 2023-06-01 13:17:44 -04:00
pauleonix
24d5b1e645 asio: Add stable versions up to 1.28.0 (#38007)
Co-authored-by: Mikael Simberg <mikael.simberg@iki.fi>

- Add pkgconfig dependency from 1.23.0 onward.
- Add conflict of old versions with new gcc due to missing includes.
- Deprecate uneven minor versions because they are not regarded as stable.
- Add maintainer
2023-06-01 11:09:27 +02:00
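A minimal recipe sketch of the directives these bullets describe (the checksums and the gcc/version bounds below are placeholders, not the values from the PR):

    from spack.package import *


    class Asio(AutotoolsPackage):
        """Sketch only: directives mirroring the asio changes above."""

        version("1.28.0", sha256="...")  # placeholder checksum
        # uneven minor versions are not regarded as stable upstream
        version("1.27.0", sha256="...", deprecated=True)

        # pkgconfig dependency from 1.23.0 onward
        depends_on("pkgconfig", type="build", when="@1.23.0:")

        # old versions miss includes required by newer gcc (bounds assumed)
        conflicts("%gcc@12:", when="@:1.16")

        maintainers("pauleonix")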
Mikael Simberg
616f7bcaef pika: add 0.16.0 and pika-algorithms 0.1.3 (#38021)
* pika: add 0.16.0

* pika-algorithms: add 0.1.3

---------

Co-authored-by: Alberto Invernizzi <9337627+albestro@users.noreply.github.com>
2023-06-01 04:18:26 -04:00
Greg Becker
dace0316a2 Spec.format: print false attributes if requested (#37932) 2023-06-01 09:08:45 +02:00
Greg Sjaardema
3bb86418b8 seacas: add 2023-05 release, update fmt dependency (#38008)
The fmt dependency for the previous release was incorrect, as it does not work with the latest lib::fmt. Fixed that specification.
2023-06-01 02:48:50 -04:00
Veselin Dobrev
6f6489a2c7 xSDK examples v0.4.0 (#37295)
* [xsdk-examples] Initial commit for v0.4.0

* [xsdk-examples] v0.4.0 depends on xsdk@0.8.0

* add in missing xsdk dependencies

* [xsdk-examples] remove repeated 'depends_on' directive

* [xsdk-examples] simplify and extend a bit the package

[mfem] process more optional dependencies of HiOp

[strumpack, superlu-dist] add a workaround for an issue on Mac

* [mfem] fix the handling of the hiop dependency

* [@spackbot] updating style on behalf of v-dobrev

* [xsdk-examples] enable 'heffte' and 'tasmanian' if enabled in 'xsdk'

* [xsdk-examples] Add PUMI dependency

* [xsdk-examples] Add preCICE dependency

* [xsdk-examples] add +rocm

* heffte: add in a backport fix for building xsdk-examples with cuda

* [xsdk] Remove the explicit requirement for deal.II to be built +hdf5

* ENABLE_ROCM -> ENABLE_HIP

* [hiop] Workaround for CMake not finding Cray's BLAS (libsci)

[xsdk-examples] Set CUDA/HIP architectures; sync cuda/rocm variants with xsdk

* [@spackbot] updating style on behalf of v-dobrev

* [exago] Workaround for CMake not finding Cray's LAPACK/BLAS, libsci

[mfem] Tweaks for running tests under Flux and PBS

* [slate] Pass CUDA/HIP architectures to CMake

* [heffte] For newer CMake versions, set CMAKE_CUDA_ARCHITECTURES

* [hypre] Patch v2.26.0 to fix sequential compilation in 'src/seq_mv'

* [xsdk-examples] Some tweaks in dependencies and compilers used

* [xsdk] Make the 'trilinos' variant sticky

[xsdk-examples] Tweak dependencies

* [slate] Fix copy-paste error

* [xsdk-examples] Workaround for CMakePackage not having the legacy
   property 'build_directory'

* [xsdk-examples] Replace the testing branch used temporarily for v0.4.0 with
  the official release

---------

Co-authored-by: Satish Balay <balay@mcs.anl.gov>
2023-05-31 21:57:53 -04:00
Brian Van Essen
543b697df1 CachedCMakePackage: add CUDA/HIP options and improve independent builds (#37592)
* Add CMake options for building with CUDA/HIP support to
  CachedCMakePackages (intended to reduce duplication across packages
  building with +hip/+cuda and using CachedCMakePackage)
* Define generic variables like CMAKE_PREFIX_PATH for
  CachedCMakePackages (so that a user may invoke "cmake" themselves
  without needing to set them on the command line).
* Make `lbann` a CachedCMakePackage.

Co-authored-by: Chris White <white238@llnl.gov>
2023-05-31 17:35:11 -07:00
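A rough sketch of what a recipe inherits from this change; cmake_cache_option and initconfig_hardware_entries are existing CachedCMakePackage hooks, but the exact entries the PR emits are not reproduced here:

    from spack.package import *


    class MySolver(CachedCMakePackage, CudaPackage, ROCmPackage):
        """Sketch: shared CUDA/HIP cache entries instead of per-package logic."""

        def initconfig_hardware_entries(self):
            entries = super().initconfig_hardware_entries()
            if self.spec.satisfies("+cuda"):
                entries.append(cmake_cache_option("ENABLE_CUDA", True))
            if self.spec.satisfies("+rocm"):
                entries.append(cmake_cache_option("ENABLE_HIP", True))
            return entries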
QuellynSnead
042dc2e1d8 libtheora: enforce math library (libm.so) linkage (#37891) 2023-05-31 13:45:11 -07:00
Mosè Giordano
f745e49d9a julia: add patch to fix printing of BigFloat with MPFR v4.2 (#37992) 2023-05-31 15:32:12 -04:00
Michael Kuhn
eda21cdfba gcc: add 11.4.0 (#37988) 2023-05-31 09:15:45 -04:00
Adam J. Stewart
bc8b026072 py-pandas: add v2.0.2 (#38005) 2023-05-30 22:48:51 -04:00
Tiziano Müller
0f84782fcc Bugfix: cray manifest parsing regression (#37909)
fa7719a changed syntax for specifying exact versions, which are
required for some compiler specs (including those read as part
of parsing a Cray manifest). This fixes that and also makes a
couple other improvements to manifest parsing.

* Instantiate compiler specs with exact versions (fixes #37893)
* fix slingshot network detection (CPE 22.10+ has libcxi.so
  in /usr/lib64)
* "spack external find": add arg to ignore default dir for cray
  manifests
2023-05-30 18:03:44 -07:00
AMD Toolchain Support
43b86ce282 HPL: amdalloc with AOCC 4.0 (#37757)
* adding amdalloc when using aocc 4

* adding libm for aocc 3.2
2023-05-30 16:40:41 -07:00
AMD Toolchain Support
d30698d9a8 cp2k fixes for aocc (#37758) 2023-05-30 16:33:00 -07:00
Massimiliano Culpo
8e9efa86c8 Simplify implementation of "get_compiler_config" (#37989) 2023-05-30 15:11:33 -07:00
Jack Morrison
84faf5a6cf intel-mpi-benchmarks: Add MPI implementation check variant (#37363)
* Add MPI implementation check variant to Intel MPI Benchmarks

* [@spackbot] updating style on behalf of jack-morrison
2023-05-30 16:37:58 -05:00
Eric Berquist
9428749a3c Update SST packages to 13.0.0 (#37467)
* add sst-{core,elements,macro} v13.0.0

* Add newest DUMPI versions and remove unavailable ones

* update maintainer lists

* sst-core: tracking and profiling flags

* sst-elements with Pin requires the Pin location

* sst-core: Zoltan integration was removed in version 12

* spack style fixes

* sst-core: ensure Python is in the sst{sim,info}.x rpaths

* sst-macro: update homepage and maintainers

* spack style --fix
2023-05-30 16:35:33 -05:00
Michael Kuhn
efdac68c28 julia: fix build for @1.8.4:1.8.5 (#37990)
julia@1.8.4:1.8.5 fails to build because it does not find libstdc++ (see https://github.com/JuliaLang/julia/issues/47987).
2023-05-30 13:49:15 -07:00
Carlos Bederián
5398c31e82 ucx: add 1.14.1 (#37991) 2023-05-30 13:46:39 -07:00
Thomas Madlener
188168c476 podio: Add 0.16.5 tag (#37994) 2023-05-30 13:39:15 -07:00
Michael Kuhn
4af84ac208 gobject-introspection: add 1.76.1 (#37995) 2023-05-30 13:37:53 -07:00
Manuela Kuhn
deb8b51098 py-dunamai: add 1.17.0 (#38003) 2023-05-30 15:34:15 -04:00
Manuela Kuhn
0d582b2ea9 py-duecredit: add 0.9.2 (#38002) 2023-05-30 15:33:54 -04:00
Manuela Kuhn
f88b01c34b py-datalad-container: add 1.2.0 (#38000) 2023-05-30 15:28:34 -04:00
Manuela Kuhn
0533c6a1b8 py-datalad-metalad: add 0.4.17 and new dep py-datalad-deprecated (#37993)
* py-datalad-metalad: add 0.4.17 and new dep py-datalad-deprecated

* Fix style
2023-05-30 15:18:15 -04:00
Manuela Kuhn
f73d5c2b0e py-cryptography: add 40.0.2 (#37925)
* py-cryptography: add 40.0.2

* Add pkgconfig dependency
2023-05-30 10:26:17 -05:00
Tamara Dahlgren
567d0ee455 tests/hdf5: convert to new stand-alone test process (#35732) 2023-05-29 16:56:17 +02:00
Tamara Dahlgren
577df6f498 sqlite: convert to new stand-alone test process (#37722) 2023-05-29 07:59:23 -04:00
Harmen Stoppels
8790efbcfe Remove patchelf self-relocation (#33834) 2023-05-29 13:14:24 +02:00
Tamara Dahlgren
212b1edb6b tests/biobambam2: convert to new stand-alone test process (#35696) 2023-05-29 12:54:38 +02:00
snehring
d85a27f317 sentieon-genomics: replacing square brackets in help (#37622) 2023-05-29 12:52:36 +02:00
Chris White
5622afbfd1 amgx: add v2.2.0 and v2.3.0 (#37567) 2023-05-29 12:42:36 +02:00
Carlos Bederián
f345038317 nwchem: fix non-fftw fftw-api providers (#37442) 2023-05-29 12:41:41 +02:00
Michael Kuhn
e43d4cfee0 glib: add 2.76.3 and 2.76.2, update build systems (#37226)
This also converts the package to the new way of handling multiple build
systems.

Co-authored-by: michaelkuhn <michaelkuhn@users.noreply.github.com>
2023-05-29 06:38:57 -04:00
Vanessasaurus
7070658e2a Automated deployment to update package flux-core 2023-05-15 (#37684)
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
2023-05-29 12:35:22 +02:00
dependabot[bot]
fc4b032fb4 build(deps): bump codecov/codecov-action from 3.1.3 to 3.1.4 (#37688)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 3.1.3 to 3.1.4.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](894ff025c7...eaaf4bedf3)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-29 12:34:34 +02:00
Vanessasaurus
8c97d8ad3f add flux-security variant to flux-core (#37689)
This will build flux-security separately to have a flux-imp
that can be defined in a flux broker.toml. Note that a user
who wants a multi-user setup is advised to create a view,
and then a system/broker.toml in the flux config directory that
points to it.

Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2023-05-29 12:34:07 +02:00
Alberto Invernizzi
26107fe6b2 highfive: add newer versions + add git repo (#37635)
* add versions
* add develop pointing to git@master
2023-05-29 06:09:02 -04:00
Tamara Dahlgren
9278c0df21 binutils: convert to new stand-alone test process (#37690) 2023-05-29 11:59:09 +02:00
Tamara Dahlgren
37e95713f4 libsigsegv: convert to new stand-alone test process (#37691) 2023-05-29 11:58:39 +02:00
SXS Bot
3ae8a3a517 spectre: add v2023.05.16 (#37716)
Co-authored-by: sxs-bot <sxs-bot@users.noreply.github.com>
2023-05-29 11:56:21 +02:00
Tamara Dahlgren
031af84e90 m4: convert to new stand-alone test process (#37723) 2023-05-29 11:49:28 +02:00
Annop Wongwathanarat
7d4b65491d armpl-gcc: add version 23.04.1 (#37907) 2023-05-29 05:49:19 -04:00
Harmen Stoppels
3038d1e7cd Use @= in some packages (#37737)
Change the pattern @x.y:x.y.0 -> @=x.y

Co-authored-by: haampie <haampie@users.noreply.github.com>
2023-05-29 11:47:00 +02:00
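For illustration, the rewrite amounts to this in a recipe (package name made up):

    # Before: a range whose endpoints effectively pin version x.y
    depends_on("example-lib@1.2:1.2.0")

    # After: '@=' matches exactly version 1.2, stating the intent directly
    depends_on("example-lib@=1.2")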
Lehman Garrison
b2e6ef97ce hdfview: add 3.1.4, 3.2.0, 3.3.0. Update dependencies. (#37745) 2023-05-29 11:41:44 +02:00
Tamara Dahlgren
e55236ce5b tests/arborx: convert to new stand-alone test process (#37778)
Co-authored-by: Andrey Prokopenko <andrey.prok@gmail.com>
2023-05-29 11:33:07 +02:00
Alec Scott
68dfd6ba6e hbase: add v2.5.3 (#37518)
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2023-05-29 11:29:11 +02:00
Jim Galarowicz
38d2459f94 survey: add v1.0.8 (#37385) 2023-05-29 11:26:56 +02:00
Nicolas Cornu
e309f367af catch2: new versions (#37742) 2023-05-29 11:26:06 +02:00
Harmen Stoppels
3b59c95323 fix InternalConcretizerError msg (#37791) 2023-05-29 11:24:43 +02:00
Howard Pritchard
fddaeadff8 OPENMPI: disable use of sphinx (#37717)
Sphinx is used to build Open MPI man pages, etc. as part of the make dist
process that creates release tarballs. There should be no need to do
this within Spack. Also, some sites have older Sphinx installs which
aren't compatible with the needs of the Open MPI documentation;
for example, attempts to install openmpi@main fail at NERSC owing to
such a situation.

Since Spack normally is used to build from release tarballs, in which
the docs have already been installed, this should present no issues.

This configuration option will be ignored for older than 5.0.0 Open MPI releases.

Signed-off-by: Howard Pritchard <howardp@lanl.gov>
2023-05-29 11:23:42 +02:00
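A plausible shape for the change, assuming Open MPI 5.x configure understands --disable-sphinx and guarding it per the note above:

    # inside the openmpi recipe (sketch)
    def configure_args(self):
        args = []
        # Release tarballs ship pre-built docs; skip the Sphinx tooling.
        # Releases before 5.0.0 do not recognize the option, so guard it.
        if self.spec.satisfies("@5.0.0:"):
            args.append("--disable-sphinx")
        return args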
Joel Falcou
c85eaf9dc5 Add package for KUMI tuple library (#37795) 2023-05-29 11:23:06 +02:00
Brian Vanderwende
ddec7f8aec bbcp: fix cloning "master" version (#37802) 2023-05-29 11:21:48 +02:00
AMD Toolchain Support
f057d7154b NAMD: add AVXTILES support (#37040) 2023-05-29 11:10:09 +02:00
Tamara Dahlgren
a102950d67 berkeley-db: convert to new stand-alone test process (#35702) 2023-05-29 10:56:42 +02:00
Howard Pritchard
783be9b350 dealii - add platform-introspection variant (#37833)
This option is needed for DFT-FE; more accurately, the check needs to
be turned off on a number of platforms or else the code doesn't work.

Signed-off-by: Howard Pritchard <howardp@lanl.gov>
2023-05-29 10:56:03 +02:00
Juan Miguel Carceller
27c8135207 xrootd: add patch for when libraries are installed in lib64 (#37858)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-05-29 10:36:57 +02:00
Tamara Dahlgren
77ce4701b9 Bugfix/tests: add slash to test log message (#37874) 2023-05-29 10:36:36 +02:00
Juan Miguel Carceller
73ad3f729e dd4hep: add patch to fix missing hits when using LCIO (#37854)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-05-29 10:35:14 +02:00
Jim Edwards
1e7a64ad85 parallelio: add v2.6.0 release and ncint variant (#37805) 2023-05-29 10:19:15 +02:00
Tamara Dahlgren
3a5864bcdb tests/sip: convert to new stand-alone test process (#35693) 2023-05-29 10:16:35 +02:00
Alec Scott
7e13a7dccb cbc: add v2.10.9 (#37974) 2023-05-29 04:04:28 -04:00
Alec Scott
e3249fa155 bedops: add v2.4.41 (#37972) 2023-05-29 04:04:06 -04:00
Thomas-Ulrich
0c20760576 pumi: fix simmodsuite base variant, and mpi lib name (#37401) 2023-05-29 09:59:32 +02:00
Alec Scott
7ee7995493 cracklib: add v2.9.9 (#37976) 2023-05-29 03:59:13 -04:00
Alec Scott
ba1fac1c31 cpio: add v2.14 (#37975) 2023-05-29 03:58:53 -04:00
Adam J. Stewart
b05f0ecb6f py-segmentation-models-pytorch: add v0.3.3 (#37970) 2023-05-29 09:48:32 +02:00
Alec Scott
d5c66b75c3 cyrus-sasl: add v2.1.28 (#37978) 2023-05-29 09:31:54 +02:00
Alec Scott
98303d6956 double-conversion: add v3.3.0 (#37979) 2023-05-29 09:31:38 +02:00
Alec Scott
4622d638a6 elfio: add v3.11 (#37980) 2023-05-29 09:31:23 +02:00
Alec Scott
02023265fc erfa: add v2.0.0 (#37981) 2023-05-29 09:31:09 +02:00
Alec Scott
8a075998f8 erlang: add v26.0 (#37982) 2023-05-29 09:30:28 +02:00
Alec Scott
f2f48b1872 fasttransforms: add v0.6.2 (#37983) 2023-05-29 09:30:10 +02:00
Alec Scott
168d63c447 glog: add v0.6.0 (#37985) 2023-05-29 09:29:34 +02:00
Alec Scott
c25d4cbc1d gradle: add v8.1.1 (#37986) 2023-05-29 09:29:07 +02:00
snehring
ccb07538f7 Beast2: add v2.7.4, add javafx (#37419) 2023-05-29 09:28:32 +02:00
Alec Scott
1356b13b2f form: add v4.3.1 (#37984) 2023-05-29 09:24:34 +02:00
Tamara Dahlgren
935f862863 tests/flibcpp: convert to new stand-alone test process (#37782) 2023-05-28 10:47:23 +02:00
Tamara Dahlgren
9f6d9df302 patchelf: convert to new stand-alone test process (#37831) 2023-05-28 10:46:23 +02:00
Tamara Dahlgren
65d33c02a1 gasnet: convert to new stand-alone test process (#35727) 2023-05-28 10:45:11 +02:00
Tamara Dahlgren
40073e7b21 hdf: convert to new stand-alone test process (#37843) 2023-05-28 10:44:42 +02:00
Tamara Dahlgren
752e02e2f2 tests/upcxx: convert to new stand-alone test process (#37832)
Co-authored-by: Dan Bonachea <dobonachea@lbl.gov>
2023-05-28 10:44:04 +02:00
Vanessasaurus
d717b3a33f Automated deployment to update package flux-core 2023-05-04 (#37421)
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
2023-05-28 10:42:44 +02:00
Bruno Turcksin
9817f24c9a kokkos-nvcc-wrapper: Remove unnecessary dependencies (#37794) 2023-05-28 10:19:30 +02:00
Bruno Turcksin
1f7c4b0557 Kokkos: remove unused variants (#37800) 2023-05-28 10:18:53 +02:00
Xavier Delaruelle
6c42d2b7f7 modules: improve default naming scheme (#37808)
Change default naming scheme for tcl modules for a more user-friendly
experience. 

Change from flat projection to "per software name" projection.

Flat naming scheme restrains module selection capabilities. The
`{name}/{version}...` scheme makes it possible to use user-friendly
mechanisms:

* implicit defaults (`module load git`)
* extended default (`module load git/2`)
* advanced version specifiers (`module load git@2:`)
2023-05-28 10:06:30 +02:00
Tamara Dahlgren
8df036a5a5 tests/cmake: convert to new stand-alone test process (#37724) 2023-05-28 09:59:21 +02:00
dependabot[bot]
582ebee74c build(deps): bump actions/setup-python from 4.6.0 to 4.6.1 (#37894)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 4.6.0 to 4.6.1.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](57ded4d7d5...bd6b4b6205)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-28 09:58:14 +02:00
Annop Wongwathanarat
1017b9ddde acfl: add version 23.04.1 and fix checksums for 22.1 (#37908)
Checksums for 22.1 need updates due to an IT incident at developer.arm.com. The package tarballs needed to be recreated.
2023-05-28 09:57:31 +02:00
Thomas Madlener
80ae73119d whizard: Fix parallel build race condition (#37890) 2023-05-28 09:55:52 +02:00
Tamara Dahlgren
1d88f690a4 tests/darshan-runtime: convert to new stand-alone test process (#37838) 2023-05-28 09:46:31 +02:00
Filippo Spiga
fbb271d804 Adding NVIDIA HPC SDK 23.5 (#37913) 2023-05-28 09:45:08 +02:00
Hao Lyu
d6aac873b7 copy namelist and xml to ./bin (#37933) 2023-05-28 09:43:27 +02:00
Tamara Dahlgren
ab3ffd9361 archer: convert to new stand-alone test process (#35697) 2023-05-28 09:41:38 +02:00
Tamara Dahlgren
3b9454a5cc tests/bolt: convert to new stand-alone test process (#35695)
* bolt: convert to new stand-alone test process
* Remove redundant test_requires_compiler (see above directives)
2023-05-27 18:46:11 -07:00
Tamara Dahlgren
c8eb0f9361 tests/amrex: convert to new stand-alone test process (#35698)
* amrex: convert to new stand-alone test process
* smoke->stand-alone
2023-05-27 18:44:06 -07:00
George Young
fb0f14eb06 py-macs2: add 2.2.8, updated dependencies (#37944)
* py-macs2: add 2.2.8, updated dependencies

* Update package.py

---------

Co-authored-by: LMS Bioinformatics <bioinformatics@lms.mrc.ac.uk>
2023-05-27 14:29:12 -05:00
George Young
e489ee4e2e py-cutadapt: add 4.4, 4.3, 4.2 versions (#37929)
* py-cutadapt: add 4.4, 4.3, 4.2 versions

* Update var/spack/repos/builtin/packages/py-cutadapt/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update var/spack/repos/builtin/packages/py-cutadapt/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update package.py

---------

Co-authored-by: LMS Bioinformatics <bioinformatics@lms.mrc.ac.uk>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-05-27 14:27:43 -05:00
Alex Richert
fcd49f2f08 Add shared and pic variants to libtiff (#37965)
* Add static-only option for libtiff

* update libtiff to add pic variant

* fix libtiff pic setting
2023-05-27 11:24:27 -05:00
Alex Richert
b3268c2703 freetype: add pic and shared variants (#37898) 2023-05-27 01:05:10 +02:00
George Young
d1bfcfafe3 py-multiqc: add 1.14, bump dependencies (#37946)
Co-authored-by: LMS Bioinformatics <bioinformatics@lms.mrc.ac.uk>
2023-05-26 17:55:03 -05:00
Manuela Kuhn
490c9f5e16 py-datalad-metadata-model: add 0.3.10 (#37937) 2023-05-26 17:28:36 -05:00
Manuela Kuhn
85628d1474 py-debugpy: add 1.6.7 (#37941) 2023-05-26 17:27:50 -05:00
Manuela Kuhn
720c34d18d py-distro: add 1.8.0 (#37942) 2023-05-26 17:27:06 -05:00
mschouler
cd175377ca Update melissa build (#37609)
* Remove deprecated package

* Add up-to-date melissa builds

* Remove blank lines and FIXME comments

* Use directive syntax for maintainers and remove unnecessary comments

* Remove unused function

* Deprecate former melissa recipe

* Change melissa python package name

* Update setuptools and rapidjson dependencies versions

* Fix mypy error

* Restore rapidjson version

* Variant simplification

* Make variants lower case

* Deprecate former omitted version

* Make torch version consistent with requirement file

* Fix variants definition

* Fix style error

---------

Co-authored-by: Marc Schouler <marc.schouler@inria.fr>
2023-05-26 17:26:29 -05:00
Manuela Kuhn
b91ec05e13 py-coloredlogs: add 15.0.1 and py-humanfriendly: add 10.0 (#37905) 2023-05-26 17:22:02 -05:00
Manuela Kuhn
3bb15f420b py-contourpy: add 1.0.7 (#37914) 2023-05-26 17:19:40 -05:00
Manuela Kuhn
124a81df5b py-coverage: add 5.5 (#37922) 2023-05-26 17:06:39 -05:00
Lee James O'Riordan
d9472c083d Update py-pennylane ecosystem to support v0.30.0 (#37763)
* Update PennyLane ecosystem for 0.30 release

* Update package dep versions

* Fix formatting

* Update dep versions

* Remove PL hard pin and rely on PLQ to define version

* Update var/spack/repos/builtin/packages/py-pennylane-lightning-kokkos/package.py

Co-authored-by: Vincent Michaud-Rioux <vincent.michaud-rioux@xanadu.ai>

* Convert pybind11 from build to link dep, and PL ver limit

---------

Co-authored-by: Vincent Michaud-Rioux <vincent.michaud-rioux@xanadu.ai>
2023-05-26 16:55:21 -05:00
Manuela Kuhn
ac2a5ef4dd py-beautifulsoup4: add 4.12.2 (#37820) 2023-05-26 16:54:43 -05:00
Manuela Kuhn
ea210a6acf py-chardet: add 5.1.0 (#37879)
* py-chardet: add 5.1.0

* Remove py-setuptools as run dependency
2023-05-26 16:51:54 -05:00
kwryankrattiger
afb3bef7af CI: Use relative path in default script (#36649) 2023-05-26 14:28:48 -06:00
George Young
b5b5881426 picard: add 3.0.0, switch to java@17: (#37948)
Co-authored-by: LMS Bioinformatics <bioinformatics@lms.mrc.ac.uk>
2023-05-26 15:38:26 -04:00
George Young
76fc7915a8 minimap2: adding 2.26 (#37945)
Co-authored-by: LMS Bioinformatics <bioinformatics@lms.mrc.ac.uk>
2023-05-26 10:12:14 -07:00
George Young
e7798b619b hyphy: add 2.5.51hf, update dependencies for switch to MPI (#37938)
Co-authored-by: LMS Bioinformatics <bioinformatics@lms.mrc.ac.uk>
2023-05-26 09:48:26 -07:00
Kai Torben Ohlhus
8ecef12a20 Strip inactive maintainer. (#36048) 2023-05-26 10:47:39 -04:00
Brian Spilner
694292ebbf cdo: add 2.2.0 (#37244) 2023-05-26 10:01:49 +02:00
Howard Pritchard
7f18f6f8a1 PMIx and PRRTe: disabled use of sphinx (#37750)
Related to https://github.com/spack/spack/pull/37717

No need to be rebuilding openmpi man pages and other docs in
spack as it almost always is used with release tarballs.

See #37717 for more details.

Signed-off-by: Howard Pritchard <howardp@lanl.gov>
2023-05-26 09:49:02 +02:00
John W. Parent
0b12a480eb Windows MSVC: do not set sdk version if installing sdk (#37930)
Note the win-sdk package is not installable and reports an error
which instructs the user how to add it. Without this fix, a
(more confusing) error occurs before this message can be generated.
2023-05-25 21:38:04 -04:00
George Young
2d91a79af3 fastp: add version 0.23.3, add build dependencies (#37931)
Co-authored-by: LMS Bioinformatics <bioinformatics@lms.mrc.ac.uk>
2023-05-25 21:28:13 -04:00
Leonard-Anderson-NNL
72fcee7227 gradle:add 7.3 (#37928)
Co-authored-by: Cloud User <leonardanderson@leonardander001.hzterscemazurawp3xenxzahla.bx.internal.cloudapp.net>
2023-05-25 20:55:09 -04:00
John W. Parent
d147ef231f Windows: fix "spack build-env" (#37923)
"spack build-env" was not generating proper environment variable
definitions on Windows; this commit updates the generated commands
to succeed with batch/PowerShell.
2023-05-25 17:08:15 -07:00
eugeneswalker
1c7af83d32 update ci ml darwin keypath (#37927) 2023-05-25 16:27:56 -07:00
John W. Parent
b982dfc071 Windows CI: add paraview deps nightly build (#37924)
Add a nightly job to attempt building all Paraview dependencies and
upload the results to cdash. This check doesn't affect the reported
build/test status of Spack. We are using this to monitor the state of
Windows support while working on more-robust checks (eventually the
Windows build will have to succeed to merge PRs to Spack).
2023-05-25 16:13:41 -07:00
H. Joe Lee
c0da8a00fc fix(protobuf-c): set version bound for protobuf dependency (#37917)
Fix #37887 GitLab CI failure.
2023-05-25 19:08:54 -04:00
H. Joe Lee
3f18f689d8 fix(dpdk): add a new version 23.03. (#37919)
Fix E4S GitLab CI issue #37887.
2023-05-25 18:58:41 -04:00
Simon Pintarelli
9dc4553cf3 sirius: add rocsolver/wannier90 (#37900)
* sirius: add rocsolver dependency for 7.5:
* add wannier90
2023-05-25 15:58:04 -07:00
Tim Haines
9a99c94b75 Dyninst: add standalone test (#37876)
* Dyninst: add standalone test
* Add docstring with description
* Don't use join_path for builtin path objects
* Whitespace
* Update format of docstring
2023-05-25 18:48:52 -04:00
John W. Parent
682f0b2a54 Protobuf package: CMake fix for Windows build (#37926)
Qualify reference with namespace. A pending upstream PR will eventually
make this unnecessary, so the patch is only applied to 3.22.x versions.
2023-05-25 15:41:16 -07:00
Richard Berger
dbab0c1ff5 flecsi: disable cinch dependency for v1 release (#37857)
* flecsi: disable cinch dependency for v1 releases
* [@spackbot] updating style on behalf of rbberger

---------

Co-authored-by: rbberger <rbberger@users.noreply.github.com>
2023-05-25 14:59:45 -07:00
Xavier Delaruelle
2bf95f5340 environment-modules: fix @main version requirements (#37807)
Some requirements for @main version of environment-modules were missing:

* python (to build ChangeLog documentation file)
* py-sphinx@1.0: (to build man-pages, etc)

Also adding gzip, which is now required to build ChangeLog.gz (which is
now shipped instead of ChangeLog).

Other versions do not require these tools (as documentation is
pre-built in the dist tarball).
2023-05-25 22:29:26 +02:00
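Expressed as directives, the missing requirements amount to roughly this sketch:

    # build-only tools needed when building @main, where documentation
    # is not pre-built as it is in dist tarballs
    depends_on("python", type="build", when="@main")
    depends_on("py-sphinx@1.0:", type="build", when="@main")
    depends_on("gzip", type="build", when="@main")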
Tamara Dahlgren
55561405b8 Bugfix/tests: write not append stand-alone test status (#37841) 2023-05-25 12:36:24 -07:00
H. Joe Lee
8eef458cea fix(pmdk): add pkgconfig as dependency (#37896)
Fix #37887 failure.
2023-05-25 10:35:48 -07:00
Stephen Sachs
64eea9d996 [devito] Move to version 4.8.1 (#37915)
* [devito] Move to version 4.8.1

* Fix: Adding patch file

* Update var/spack/repos/builtin/packages/py-devito/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update var/spack/repos/builtin/packages/py-devito/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Addressing @adamjstewart comments

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-05-25 13:33:10 -04:00
Alex Richert
60b4e2128b Add shared variant to geos package (#37899)
Co-authored-by: alexrichert <alexrichert@gmail.com>
2023-05-25 11:42:51 -04:00
Adam J. Stewart
2f8cea2792 Add macOS ML CI stacks (#36586)
* Add macOS ML CI stacks

* torchmeta is no longer maintained and requires ancient PyTorch

* Add MXNet

* update darwin aarch64 stacks

* add darwin-aarch64 scoped config.yaml

* remove unnecessary cleanup job

* fix specifications

* fix labels

* fix labels

* fix indent on tags specification

* no tags for trigger jobs

* try overriding tags in stack spack.yaml

* do not use CI_STACK_CONFIG_SCOPES

* incorporate config:install_tree:root: overrides and compiler defs

* copy relevant ci-scoped config settings directly into stack spack.yaml

* remove build-job-remove

* spack ci generate: add debug flag

* include cdash config directly in stack spack.yaml

* customize build-job script section to avoid absolute paths

* add any-job specification

* tags: use aarch64-macos instead of aarch64

* generate tags: use aarch64-macos instead of aarch64

* do not add morepadding

* use shared mirror; comment out known failures

* remove any-job

* nproc || true

* comment out specs failing due to bazel from cache codesign issue

---------

Co-authored-by: eugeneswalker <eugenesunsetwalker@gmail.com>
2023-05-25 01:12:54 -04:00
M. Eric Irrgang
06f9bcf734 Update py-gmxapi for 0.4.1. (#37834)
* Update py-gmxapi for 0.4.1.

* Note 0.4.1 hash from PyPI.
* Note relaxed dependencies for future versions.

* Update var/spack/repos/builtin/packages/py-gmxapi/package.py
2023-05-24 22:03:10 -05:00
Manuela Kuhn
ee2725762f py-charset-normalizer: add 3.1.0 (#37880) 2023-05-24 21:57:29 -05:00
Manuela Kuhn
eace0a177c py-bids-validator: add 1.11.0 (#37845) 2023-05-24 21:36:45 -05:00
Manuela Kuhn
80c7d74707 py-bottleneck: add 1.3.7 (#37847) 2023-05-24 21:36:01 -05:00
Manuela Kuhn
a6f5bf821d py-certifi: add 2023.5.7 (#37848) 2023-05-24 21:35:13 -05:00
Manuela Kuhn
b214406253 py-attrs: add 23.1.0 (#37817)
* py-attrs: add 23.1.0

* Add missing dependency
2023-05-24 20:21:31 -05:00
Manuela Kuhn
5b003d80e5 py-babel: add 2.12.1 (#37818)
* py-babel: add 2.12.1

* Update var/spack/repos/builtin/packages/py-babel/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-05-24 20:13:02 -05:00
Adam J. Stewart
185b2d3ee7 py-rasterio: add v1.3.7 (#37886) 2023-05-24 18:48:16 -04:00
Adam J. Stewart
71bb2a1899 py-lightly: add v1.4.6 (#37885) 2023-05-24 16:43:07 -04:00
Nathalie Furmento
785c31b730 starpu: add release 1.4.1 (#37883) 2023-05-24 09:36:56 -07:00
QuellynSnead
175da4a88a paraview (protobuf failure) #37437 (#37440)
When attempting to build paraview@5.10.1 using a recent Intel
compiler (Classic or OneAPI) or the IBM XL compiler, the build
fails if the version of protobuf used is > 3.18
2023-05-24 11:09:48 -05:00
willdunklin
73fc1ef11c sensei: Allow Paraview 5.11 for sensei develop version (#37719) 2023-05-24 11:00:35 -05:00
Stephen Sachs
2d77e44f6f Pcluster local buildcache (#37852)
* [pcluster pipeline] Use local buildcache instead of upstream spack

Spack currently does not relocate compiler references from upstream spack
installations. When using a buildcache we don't need an upstream spack.

* gcc needs to be installed via postinstall to get correct deps

* quantum-espresso %gcc@12.3.0 hits an ICE on neoverse_{n,v}1

* Force gitlab to pull the new container

* Revert "Force gitlab to pull the new container"

This reverts commit 3af5f4cd88.

Seems the gitlab version does not yet support "pull_policy" in .gitlab-ci.yml

* Gitlab keeps picking up wrong container. Renaming

* Update containers once more after failed build
2023-05-24 06:55:00 -07:00
Greg Becker
033599c4cd bugfix: env concretize after remove (#37877) 2023-05-24 15:41:57 +02:00
Harmen Stoppels
8096ed4b22 spack remove: fix traversal when user specs intersect (#37882)
drop unnecessary double loop over the matching user specs.
2023-05-24 09:23:46 -04:00
Simon Pintarelli
b49bfe25af update nlcglib package (#37578) 2023-05-24 11:17:15 +02:00
Houjun Tang
8b2f34d802 Add async vol v1.6 (#37875) 2023-05-24 01:47:46 -04:00
H. Joe Lee
3daed0d6a7 hdf5-vol-daos: add a new package (#35653)
* hdf5-vol-daos: add a new package
* hdf5-vol-daos: address @soumagne review

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2023-05-23 23:42:42 -04:00
Glenn Johnson
d6c1f75e8d julia: remove myself from maintainers list (#37868) 2023-05-23 16:22:56 -05:00
Laura Weber
c80a4c1ddc Updated hash for latest maintenance release (2022.2.1) (#37842) 2023-05-23 14:08:55 -07:00
Glenn Johnson
466abcb62d gate: remove myself as maintainer (#37862) 2023-05-23 14:03:52 -07:00
Glenn Johnson
69e99f0c16 Remove myself as maintainer of R packages (#37859)
* Remove myself as maintainer of R packages
  I will no longer have the time to properly maintain these packages.
* fix flake8 test for import
2023-05-23 15:35:32 -05:00
Glenn Johnson
bbee6dfc58 bart: remove myself as maintainer (#37860) 2023-05-23 16:08:07 -04:00
Glenn Johnson
2d60cf120b heasoft: remove myself as maintainer (#37866) 2023-05-23 14:37:57 -05:00
Glenn Johnson
db17fc2f33 opencv: remove myself from maintainers list (#37870) 2023-05-23 14:34:52 -05:00
eugeneswalker
c62080d498 e4s ci: add dealii (#32484) 2023-05-23 21:34:31 +02:00
Glenn Johnson
f9bbe549fa gatetools: remove myself as maintainer (#37863) 2023-05-23 15:32:54 -04:00
H. Joe Lee
55d7fec69c daos: add a new package (#35649) 2023-05-23 21:30:23 +02:00
Glenn Johnson
e938907150 reditools: remove myself as maintainer (#37871) 2023-05-23 15:28:16 -04:00
Glenn Johnson
0c40b86e96 itk: remove myself as maintainer (#37867) 2023-05-23 15:27:55 -04:00
Glenn Johnson
3d4cf0d8eb mumax: remove myself as maintainer (#37869) 2023-05-23 15:23:28 -04:00
Glenn Johnson
966e19d278 gurobi: remove myself as maintainer (#37865) 2023-05-23 15:23:05 -04:00
Glenn Johnson
8f930462bd fplo: remove myself as maintainer (#37861) 2023-05-23 15:17:50 -04:00
kjrstory
bf4fccee15 New package: FDS (#37850) 2023-05-23 11:59:05 -07:00
Manuela Kuhn
784771a008 py-bleach: add 6.0.0 (#37846) 2023-05-23 11:50:53 -07:00
Glenn Johnson
e4a9d9ae5b Bioc updates (#37297)
* add version 1.48.0 to bioconductor package r-a4
* add version 1.48.0 to bioconductor package r-a4base
* add version 1.48.0 to bioconductor package r-a4classif
* add version 1.48.0 to bioconductor package r-a4core
* add version 1.48.0 to bioconductor package r-a4preproc
* add version 1.48.0 to bioconductor package r-a4reporting
* add version 1.54.0 to bioconductor package r-absseq
* add version 1.30.0 to bioconductor package r-acde
* add version 1.78.0 to bioconductor package r-acgh
* add version 2.56.0 to bioconductor package r-acme
* add version 1.70.0 to bioconductor package r-adsplit
* add version 1.72.0 to bioconductor package r-affxparser
* add version 1.78.0 to bioconductor package r-affy
* add version 1.76.0 to bioconductor package r-affycomp
* add version 1.58.0 to bioconductor package r-affycontam
* add version 1.72.0 to bioconductor package r-affycoretools
* add version 1.48.0 to bioconductor package r-affydata
* add version 1.52.0 to bioconductor package r-affyilm
* add version 1.70.0 to bioconductor package r-affyio
* add version 1.76.0 to bioconductor package r-affyplm
* add version 1.46.0 to bioconductor package r-affyrnadegradation
* add version 1.48.0 to bioconductor package r-agdex
* add version 3.32.0 to bioconductor package r-agilp
* add version 2.50.0 to bioconductor package r-agimicrorna
* add version 1.32.0 to bioconductor package r-aims
* add version 1.32.0 to bioconductor package r-aldex2
* add version 1.38.0 to bioconductor package r-allelicimbalance
* add version 1.26.0 to bioconductor package r-alpine
* add version 2.62.0 to bioconductor package r-altcdfenvs
* add version 2.24.0 to bioconductor package r-anaquin
* add version 1.28.0 to bioconductor package r-aneufinder
* add version 1.28.0 to bioconductor package r-aneufinderdata
* add version 1.72.0 to bioconductor package r-annaffy
* add version 1.78.0 to bioconductor package r-annotate
* add version 1.62.0 to bioconductor package r-annotationdbi
* add version 1.24.0 to bioconductor package r-annotationfilter
* add version 1.42.0 to bioconductor package r-annotationforge
* add version 3.8.0 to bioconductor package r-annotationhub
* add version 3.30.0 to bioconductor package r-aroma-light
* add version 1.32.0 to bioconductor package r-bamsignals
* add version 2.16.0 to bioconductor package r-beachmat
* add version 2.60.0 to bioconductor package r-biobase
* add version 2.8.0 to bioconductor package r-biocfilecache
* add version 0.46.0 to bioconductor package r-biocgeneric
* add version 1.10.0 to bioconductor package r-biocio
* add version 1.18.0 to bioconductor package r-biocneighbors
* add version 1.34.0 to bioconductor package r-biocparallel
* add version 1.16.0 to bioconductor package r-biocsingular
* add version 2.28.0 to bioconductor package r-biocstyle
* add version 3.17.1 to bioconductor package r-biocversion
* add version 2.56.0 to bioconductor package r-biomart
* add version 1.28.0 to bioconductor package r-biomformat
* add version 2.68.0 to bioconductor package r-biostrings
* add version 1.48.0 to bioconductor package r-biovizbase
* add version 1.10.0 to bioconductor package r-bluster
* add version 1.68.0 to bioconductor package r-bsgenome
* add version 1.36.0 to bioconductor package r-bsseq
* add version 1.42.0 to bioconductor package r-bumphunter
* add version 2.66.0 to bioconductor package r-category
* add version 2.30.0 to bioconductor package r-champ
* add version 2.32.0 to bioconductor package r-champdata
* add version 1.50.0 to bioconductor package r-chipseq
* add version 4.8.0 to bioconductor package r-clusterprofiler
* add version 1.36.0 to bioconductor package r-cner
* add version 1.32.0 to bioconductor package r-codex
* add version 2.16.0 to bioconductor package r-complexheatmap
* add version 1.74.0 to bioconductor package r-ctc
* add version 2.28.0 to bioconductor package r-decipher
* add version 0.26.0 to bioconductor package r-delayedarray
* add version 1.22.0 to bioconductor package r-delayedmatrixstats
* add version 1.40.0 to bioconductor package r-deseq2
* add version 1.46.0 to bioconductor package r-dexseq
* add version 1.42.0 to bioconductor package r-dirichletmultinomial
* add version 2.14.0 to bioconductor package r-dmrcate
* add version 1.74.0 to bioconductor package r-dnacopy
* add version 3.26.0 to bioconductor package r-dose
* add version 2.48.0 to bioconductor package r-dss
* add version 3.42.0 to bioconductor package r-edger
* add version 1.20.0 to bioconductor package r-enrichplot
* add version 2.24.0 to bioconductor package r-ensembldb
* add version 1.46.0 to bioconductor package r-exomecopy
* add version 2.8.0 to bioconductor package r-experimenthub
* add version 1.26.0 to bioconductor package r-fgsea
* add version 2.72.0 to bioconductor package r-gcrma
* add version 1.36.0 to bioconductor package r-gdsfmt
* add version 1.82.0 to bioconductor package r-genefilter
* add version 1.36.0 to bioconductor package r-genelendatabase
* add version 1.72.0 to bioconductor package r-genemeta
* add version 1.78.0 to bioconductor package r-geneplotter
* add version 1.22.0 to bioconductor package r-genie3
* add version 1.36.0 to bioconductor package r-genomeinfodb
* update r-genomeinfodbdata
* add version 1.36.0 to bioconductor package r-genomicalignments
* add version 1.52.0 to bioconductor package r-genomicfeatures
* add version 1.52.0 to bioconductor package r-genomicranges
* add version 2.68.0 to bioconductor package r-geoquery
* add version 1.48.0 to bioconductor package r-ggbio
* add version 3.8.0 to bioconductor package r-ggtree
* add version 2.10.0 to bioconductor package r-glimma
* add version 1.12.0 to bioconductor package r-glmgampoi
* add version 5.54.0 to bioconductor package r-globaltest
* update r-go-db
* add version 1.20.0 to bioconductor package r-gofuncr
* add version 2.26.0 to bioconductor package r-gosemsim
* add version 1.52.0 to bioconductor package r-goseq
* add version 2.66.0 to bioconductor package r-gostats
* add version 1.78.0 to bioconductor package r-graph
* add version 1.62.0 to bioconductor package r-gseabase
* add version 1.32.0 to bioconductor package r-gtrellis
* add version 1.44.0 to bioconductor package r-gviz
* add version 1.28.0 to bioconductor package r-hdf5array
* add version 1.72.0 to bioconductor package r-hypergraph
* add version 1.36.0 to bioconductor package r-illumina450probevariants-db
* add version 0.42.0 to bioconductor package r-illuminaio
* add version 1.74.0 to bioconductor package r-impute
* add version 1.38.0 to bioconductor package r-interactivedisplaybase
* add version 2.34.0 to bioconductor package r-iranges
* add version 1.60.0 to bioconductor package r-kegggraph
* add version 1.40.0 to bioconductor package r-keggrest
* add version 3.56.0 to bioconductor package r-limma
* add version 2.52.0 to bioconductor package r-lumi
* add version 1.76.0 to bioconductor package r-makecdfenv
* add version 1.78.0 to bioconductor package r-marray
* add version 1.12.0 to bioconductor package r-matrixgenerics
* add version 1.8.0 to bioconductor package r-metapod
* add version 2.46.0 to bioconductor package r-methylumi
* add version 1.46.0 to bioconductor package r-minfi
* add version 1.34.0 to bioconductor package r-missmethyl
* add version 1.80.0 to bioconductor package r-mlinterfaces
* add version 1.12.0 to bioconductor package r-mscoreutils
* add version 2.26.0 to bioconductor package r-msnbase
* add version 2.56.0 to bioconductor package r-multtest
* add version 1.38.0 to bioconductor package r-mzid
* add version 2.34.0 to bioconductor package r-mzr
* add version 1.62.0 to bioconductor package r-oligoclasses
* update r-org-hs-eg-db
* add version 1.42.0 to bioconductor package r-organismdbi
* add version 1.40.0 to bioconductor package r-pathview
* add version 1.92.0 to bioconductor package r-pcamethods
* update r-pfam-db
* add version 1.44.0 to bioconductor package r-phyloseq
* add version 1.62.0 to bioconductor package r-preprocesscore
* add version 1.32.0 to bioconductor package r-protgenerics
* add version 1.34.0 to bioconductor package r-quantro
* add version 2.32.0 to bioconductor package r-qvalue
* add version 1.76.0 to bioconductor package r-rbgl
* add version 2.40.0 to bioconductor package r-reportingtools
* add version 2.44.0 to bioconductor package r-rgraphviz
* add version 2.44.0 to bioconductor package r-rhdf5
* add version 1.12.0 to bioconductor package r-rhdf5filters
* add version 1.22.0 to bioconductor package r-rhdf5lib
* add version 1.76.0 to bioconductor package r-roc
* add version 1.28.0 to bioconductor package r-rots
* add version 2.16.0 to bioconductor package r-rsamtools
* add version 1.60.0 to bioconductor package r-rtracklayer
* add version 0.38.0 to bioconductor package r-s4vectors
* add version 1.8.0 to bioconductor package r-scaledmatrix
* add version 1.28.0 to bioconductor package r-scater
* add version 1.14.0 to bioconductor package r-scdblfinder
* add version 1.28.0 to bioconductor package r-scran
* add version 1.10.0 to bioconductor package r-scuttle
* add version 1.66.0 to bioconductor package r-seqlogo
* add version 1.58.0 to bioconductor package r-shortread
* add version 1.74.0 to bioconductor package r-siggenes
* add version 1.22.0 to bioconductor package r-singlecellexperiment
* add version 1.34.0 to bioconductor package r-snprelate
* add version 1.50.0 to bioconductor package r-snpstats
* add version 2.36.0 to bioconductor package r-somaticsignatures
* add version 1.12.0 to bioconductor package r-sparsematrixstats
* add version 1.40.0 to bioconductor package r-spem
* add version 1.38.0 to bioconductor package r-sseq
* add version 1.30.0 to bioconductor package r-summarizedexperiment
* add version 3.48.0 to bioconductor package r-sva
* add version 1.38.0 to bioconductor package r-tfbstools
* add version 1.22.0 to bioconductor package r-tmixclust
* add version 2.52.0 to bioconductor package r-topgo
* add version 1.24.0 to bioconductor package r-treeio
* add version 1.28.0 to bioconductor package r-tximport
* add version 1.28.0 to bioconductor package r-tximportdata
* add version 1.46.0 to bioconductor package r-variantannotation
* add version 3.68.0 to bioconductor package r-vsn
* add version 2.6.0 to bioconductor package r-watermelon
* add version 2.46.0 to bioconductor package r-xde
* add version 1.58.0 to bioconductor package r-xmapbridge
* add version 0.40.0 to bioconductor package r-xvector
* add version 1.26.0 to bioconductor package r-yapsa
* add version 1.26.0 to bioconductor package r-yarn
* add version 1.46.0 to bioconductor package r-zlibbioc
* Revert "add version 1.82.0 to bioconductor package r-genefilter"
  This reverts commit 1702071c6d.
* Revert "add version 0.38.0 to bioconductor package r-s4vectors"
  This reverts commit 58a7df2387.
* add version 0.38.0 to bioconductor package r-s4vectors
* Revert "add version 1.28.0 to bioconductor package r-aneufinder"
  This reverts commit 0a1f59de6c.
* add version 1.28.0 to bioconductor package r-aneufinder
* Revert "add version 2.16.0 to bioconductor package r-beachmat"
  This reverts commit cd49fb8e4c.
* add version 2.16.0 to bioconductor package r-beachmat
* Revert "add version 4.8.0 to bioconductor package r-clusterprofiler"
  This reverts commit 6e9a951cbe.
* add version 4.8.0 to bioconductor package r-clusterprofiler
* Fix syntax error
* r-genefilter: add version 1.82.0
* new package: r-basilisk-utils
* new package: r-basilisk
* new package: r-densvis
* new package: r-dir-expiry
* r-affyplm: add zlib dependency
* r-cner: add zlib dependency
* r-mzr: add zlib dependency
* r-rhdf5filters: add zstd dependency
* r-shortread: add zlib dependency
* r-snpstats: add zlib dependency

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2023-05-23 11:40:00 -07:00
snehring
a6886983dc usalign: new package (#37646)
* usalign: adding new package
* usalign: updating shasum, adding note about distribution
2023-05-23 11:30:40 -07:00
Andrey Parfenov
93a34a9635 hpcg: apply patch with openmp pragma changes for intel and oneapi compilers (#37856)
Signed-off-by: Andrey Parfenov <andrey.parfenov@intel.com>
2023-05-23 11:52:45 -04:00
Todd Gamblin
91a54029f9 libgcrypt: patch 1.10.2 on macos (#37844)
macOS doesn't have `getrandom`, and 1.10.2 fails to compile because of this.

There's an upstream fix at https://dev.gnupg.org/T6442 that will be in the next
`libgcrypt` release, but the patch is available now.
2023-05-23 06:03:13 -04:00
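Backporting a single upstream fix like this is typically one directive in the recipe; the patch file name and the platform constraint below are assumptions:

    # macOS lacks getrandom(); carry the upstream fix from
    # https://dev.gnupg.org/T6442 until the next libgcrypt release
    patch("getrandom-macos.patch", when="@1.10.2 platform=darwin")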
Juan Miguel Carceller
5400b49ed6 dd4hep: add LD_LIBRARY_PATH for plugins for Gaudi (#37824)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-05-23 10:28:26 +01:00
Juan Miguel Carceller
c17fc3c0c1 gaudi: add gaudi to LD_LIBRARY_PATH (#37821)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-05-23 10:27:55 +01:00
Juan Miguel Carceller
6f248836ea dd4hep: restrict podio versions (#37699)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-05-23 10:20:31 +01:00
snehring
693c1821b0 py-pastml: adding version for compatibility with py-topiary-asr (#37828) 2023-05-22 14:44:00 -05:00
Manuela Kuhn
62afe3bd5a py-asttokens: add 2.2.1 (#37816) 2023-05-22 14:40:08 -05:00
genric
53a756d045 py-dask: add v2023.4.1 (#37550)
* py-dask: add v2023.4.1

* address review comments
2023-05-22 14:29:23 -05:00
Adam J. Stewart
321b687ae6 py-huggingface-hub: add v0.14.1, cli variant (#37815) 2023-05-22 11:19:41 -07:00
Adam J. Stewart
c8617f0574 py-fiona: add v1.9.4 (#37780) 2023-05-22 13:17:13 -05:00
Adam J. Stewart
7843e2ead0 azcopy: add new package (#37693) 2023-05-22 11:09:06 -07:00
Juan Miguel Carceller
dca3d071d7 gaudi: fix issue with fmt::format (#37810)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-05-22 10:33:05 -07:00
eugeneswalker
436f077482 tau %oneapi: -Wno-error=implicit-function-declaration (#37829) 2023-05-22 13:13:02 -04:00
simonleary-umass-edu
ab3f705019 deleted package.py: better error message (#37814)
adds the namespace to the exception object's string representation
2023-05-22 09:59:07 -07:00
Tamara Dahlgren
d739989ec8 swig: convert to new stand-alone test process (#37786) 2023-05-22 09:39:30 -07:00
Jordan Galby
52ee1967d6 llvm: Fix hwloc@1 and hwloc@:2.3 compatibility (#35387) 2023-05-22 10:28:57 -05:00
Andrey Prokopenko
1af7284b5d arborx: new version 1.4 (#37809) 2023-05-21 12:25:53 -07:00
Todd Gamblin
e1bcefd805 Update CHANGELOG.md for v0.20.0 2023-05-21 01:48:34 +02:00
Manuela Kuhn
2159b0183d py-argcomplete: add 3.0.8 (#37797)
* py-argcomplete: add 3.0.8

* Update var/spack/repos/builtin/packages/py-argcomplete/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* [@spackbot] updating style on behalf of manuelakuhn

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-05-20 11:31:28 -05:00
kwryankrattiger
078fd225a9 Mochi-Margo: Add patch for pthreads detection (#36109) 2023-05-20 07:43:27 -07:00
Manuela Kuhn
83974828c7 libtiff: disable use of sphinx (#37803) 2023-05-19 21:37:19 -05:00
Manuela Kuhn
2412f74557 py-anyio: add 3.6.2 (#37796) 2023-05-19 21:36:39 -05:00
Manuela Kuhn
db06d3621d py-alabaster: add 0.7.13 (#37798) 2023-05-19 21:34:00 -05:00
Jose E. Roman
c25170d2f9 New patch release SLEPc 3.19.1 (#37675)
* New patch release SLEPc 3.19.1

* py-slepc4py: add explicit dependency on py-numpy
2023-05-19 21:33:21 -05:00
Vanessasaurus
b3dfe13670 Automated deployment to update package flux-security 2023-05-16 (#37696)
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
2023-05-19 12:07:57 -07:00
Harmen Stoppels
6358e84b48 fix binutils dep of spack itself (#37738) 2023-05-19 12:02:36 -07:00
Swann Perarnau
8e634d8e49 aml: v0.2.1 (#37621)
* aml: v0.2.1

* add version 0.2.1
* fix hip variant bug

* [fix] pkgconf required for all builds

On top of needing pkgconf for autoreconf builds, the release configure
script needs pkgconf to detect dependencies if any of the hwloc, ze, or
opencl variants are active.

* Remove deprecation for v0.2.0 based on PR advice.
2023-05-19 11:58:28 -07:00
Mark W. Krentel
1a21376515 intel-xed: add version 2023.04.16 (#37582)
* intel-xed: add version 2023.04.16
 1. add version 2023.04.16
 2. adjust the mbuild resource to better match the xed version at the time
 3. replace three conflicts() with one new requires() for x86_64 target
 4. add patch for libxed-ild for some new avx512 instructions
    * [@spackbot] updating style on behalf of mwkrentel
    * Fix the build for 2023.04.16.  XED requires its source directory to be exactly 'xed', so add a symlink.
 5. move the mbuild resource up one level, xed wants it to be in the same directory as the xed source dir
 6. deprecate 10.2019.03
    * semantic style fix: add OSError to except
    * [@spackbot] updating style on behalf of mwkrentel

---------

Co-authored-by: mwkrentel <mwkrentel@users.noreply.github.com>
2023-05-19 10:28:18 -07:00
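A sketch of the arrangement items 2, 5, and the symlink fix describe; the repo URL is the real mbuild upstream, while the commit pin and hook name are placeholders:

    import os

    from spack.package import *


    class IntelXed(Package):
        """Sketch only: mbuild resource plus the 'xed' directory-name fix."""

        # stage the mbuild build tool alongside the xed sources (items 2, 5)
        resource(
            name="mbuild",
            git="https://github.com/intelxed/mbuild.git",
            commit="...",  # placeholder: pinned to match the xed version
            placement="mbuild",
        )

        @run_before("install")
        def ensure_xed_dirname(self):
            # XED requires its source directory to be named exactly 'xed',
            # so point a sibling symlink at the actual stage directory
            src = self.stage.source_path
            link = os.path.join(os.path.dirname(src), "xed")
            if not os.path.islink(link):
                os.symlink(src, link)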
Harmen Stoppels
bf45a2b6d3 spack env create: generate a view when newly created env has concrete specs (#37799) 2023-05-19 18:44:54 +02:00
Thomas-Ulrich
475ce955e7 hipsycl: add v0.9.4 (#37247) 2023-05-19 18:29:45 +02:00
Robert Underwood
5e44289787 updates for the libpressio ecosystem (#37764)
* updates for the libpressio ecosystem

* [@spackbot] updating style on behalf of robertu94

* style fix: remove FIXME

---------

Co-authored-by: Robert Underwood <runderwood@anl.gov>
Co-authored-by: eugeneswalker <eugenesunsetwalker@gmail.com>
2023-05-19 09:24:31 -07:00
Massimiliano Culpo
e66888511f archspec: fix entry in the JSON file (#37793) 2023-05-19 09:57:57 -04:00
Tamara Dahlgren
e9e5beee1f fortrilinos: convert to new stand-alone test process (#37783) 2023-05-19 08:06:33 -04:00
Tamara Dahlgren
ffd134c09d formetis: converted to new stand-alone test process (#37785) 2023-05-19 08:05:23 -04:00
Massimiliano Culpo
bfadd5c9a5 lmod: allow core compiler to be specified with a version range (#37789)
Use CompilerSpec with satisfies instead of string equality tests

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2023-05-19 13:21:40 +02:00
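The difference is essentially the following, using spack.spec.CompilerSpec (the module-generation call sites are elided):

    from spack.spec import CompilerSpec

    core = CompilerSpec("gcc@9")           # core compiler, now a range
    compiler = CompilerSpec("gcc@=9.4.0")  # a concrete compiler in use

    # Before: string equality, which never matches a version range
    assert str(compiler) != str(core)

    # After: satisfies() understands ranges, so gcc 9.4.0 counts as core
    assert compiler.satisfies(core)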
Greg Becker
16e9279420 compiler specs: do not print '@=' when clear from context (#37787)
Ensure that spack compiler add/find/list and lists of concrete specs
print the compiler effectively as {compiler.name}{@compiler.version}.

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2023-05-19 11:31:27 +02:00
Satish Balay
ac0903ef9f llvm: add version 16.0.3 (#37472) 2023-05-19 03:37:57 -04:00
Cyrus Harrison
648839dffd add conduit 0.8.8 release (#37776) 2023-05-19 00:34:19 -04:00
Pieter Ghysels
489a604920 Add STRUMPACK versions 7.1.2 and 7.1.3 (#37779) 2023-05-18 23:18:50 -04:00
eugeneswalker
2ac3435810 legion +rocm: apply patch for --offload-arch (#37775)
* legion +rocm: apply patch for --offload-arch

* constrain to latest version
2023-05-18 23:03:50 -04:00
Alec Scott
69ea180d26 fzf: add v0.40.0 and refactor package (#37569)
* fzf: add v0.40.0 and refactor package
* Remove unused imports
2023-05-18 15:23:20 -07:00
Alec Scott
f52f217df0 roctracer-dev-api: add v5.5.0 (#37484) 2023-05-18 15:11:36 -07:00
Alec Scott
df74aa5d7e amqp-cpp: add v4.3.24 (#37504) 2023-05-18 15:09:30 -07:00
Alec Scott
41932c53ae libjwt: add v1.15.3 (#37521) 2023-05-18 15:05:27 -07:00
Alec Scott
4296db794f rdkit: add v2023_03_1 (#37529) 2023-05-18 15:05:07 -07:00
H. Joe Lee
9ab9302409 py-jarvis-util: add a new package (#37729)
* py-jarvis-util: add a new package

* Apply suggestions from code review

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-05-18 17:28:59 -04:00
Benjamin Meyers
0187376e54 Update py-nltk (#37703)
* Update py-nltk

* [@spackbot] updating style on behalf of meyersbs

* Update var/spack/repos/builtin/packages/py-nltk/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-05-18 17:14:20 -04:00
Aditya Bhamidipati
7340d2cb83 update package nccl version 2.17, 2.18 (#37721) 2023-05-18 15:47:01 -05:00
Benjamin Meyers
641d4477d5 New package py-tensorly (#37705) 2023-05-18 15:38:23 -05:00
Benjamin Meyers
3ff2fb69af New package py-scikit-tensor-py3 (#37706) 2023-05-18 15:37:43 -05:00
vucoda
e3024b1bcb Update py-jedi 0.17.2 checksum in package.py (#37700)
The PyPI checksum for jedi-0.17.2.tar.gz does not match the one in the package.py file.

https://pypi.org/project/jedi/0.17.2/#copy-hash-modal-60e943e3-1192-4b12-90e2-4d639cb5b4f7
2023-05-18 15:15:47 -05:00
Dom Heinzeller
e733b87865 Remove references to gmake executable, only use make (#37280) 2023-05-18 19:03:03 +00:00
Victor Lopez Herrero
919985dc1b dlb: add v3.3.1 (#37759) 2023-05-18 10:29:41 -07:00
Michael Kuhn
d746f7d427 harfbuzz: add 7.3.0 (#37753)
Do not prefer 5.3.1 anymore since it does not build with newer compilers. Linux distributions have also moved to 6.x and newer.
2023-05-18 10:27:48 -07:00
kaanolgu
b6deab515b [babelstream] FIX: maintainers list missing a comma (#37761) 2023-05-18 10:25:04 -07:00
Glenn Johnson
848220c4ba update R: add version 4.3.0 (#37090) 2023-05-18 10:13:28 -07:00
Glenn Johnson
98462bd27e Cran updates (#37296)
* add version 1.28.3 to r-hexbin
* add version 5.0-1 to r-hmisc
* add version 0.5.5 to r-htmltools
* add version 1.6.2 to r-htmlwidgets
* add version 1.4.2 to r-igraph
* add version 0.42.19 to r-imager
* add version 1.0-5 to r-inum
* add version 0.9-14 to r-ipred
* add version 1.3.2 to r-irkernel
* add version 2.2.0 to r-janitor
* add version 0.1-10 to r-jpeg
* add version 1.2.2 to r-jsonify
* add version 0.9-32 to r-kernlab
* add version 1.7-2 to r-klar
* add version 1.42 to r-knitr
* add version 1.14.0 to r-ks
* add version 2.11.0 to r-labelled
* add version 1.7.2.1 to r-lava
* add version 0.6-15 to r-lavaan
* add version 2.1.2 to r-leaflet
* add version 2.9-0 to r-lfe
* add version 1.1.6 to r-lhs
* add version 1.1-33 to r-lme4
* add version 1.5-9.7 to r-locfit
* add version 0.4.3 to r-log4r
* add version 5.6.18 to r-lpsolve
* add version 0.2-11 to r-lwgeom
* add version 2.7.4 to r-magick
* add version 1.22.1 to r-maldiquant
* add version 1.2.11 to r-mapproj
* add version 1.6 to r-markdown
* add version 7.3-59 to r-mass
* add version 1.5-4 to r-matrix
* add version 0.63.0 to r-matrixstats
* add version 4.2-3 to r-memuse
* add version 4.0-0 to r-metafor
* add version 1.8-42 to r-mgcv
* add version 3.15.0 to r-mice
* add version 0.4-5 to r-mitml
* add version 2.0.0 to r-mixtools
* add version 0.1.11 to r-modelr
* add version 1.4-23 to r-multcomp
* add version 0.1-9 to r-multcompview
* add version 0.1-13 to r-mutoss
* add version 1.18.1 to r-network
* add version 3.3.4 to r-nleqslv
* add version 3.1-162 to r-nlme
* add version 0.26 to r-nmf
* add version 0.60-17 to r-np
* add version 4.2.5.2 to r-openxlsx
* add version 2022.11-16 to r-ordinal
* add version 0.6.0.8 to r-osqp
* add version 0.9.1 to r-packrat
* add version 1.35.0 to r-parallelly
* add version 1.3-13 to r-party
* add version 1.2-20 to r-partykit
* add version 1.7-0 to r-pbapply
* add version 0.3-9 to r-pbdzmq
* add version 1.2 to r-pegas
* add version 1.5-1 to r-phytools
* add version 1.9.0 to r-pillar
* add version 1.4.0 to r-pkgbuild
* add version 2.1.0 to r-pkgcache
* add version 0.5.0 to r-pkgdepends
* add version 2.0.7 to r-pkgdown
* add version 1.3.2 to r-pkgload
* add version 0.1-8 to r-png
* add version 1.1.22 to r-polspline
* add version 1.0.1 to r-pool
* add version 1.4.1 to r-posterior
* add version 3.8.1 to r-processx
* add version 2023.03.31 to r-prodlim
* add version 1.0-12 to r-proj4
* add version 2.5.0 to r-projpred
* add version 0.1.6 to r-pryr
* add version 1.7.5 to r-ps
* add version 1.0.1 to r-purrr
* add version 1.3.2 to r-qqconf
* add version 0.25.5 to r-qs
* add version 1.60 to r-qtl
* add version 0.4.22 to r-quantmod
* add version 5.95 to r-quantreg
* add version 0.7.8 to r-questionr
* add version 1.2.5 to r-ragg
* add version 0.15.1 to r-ranger
* add version 3.6-20 to r-raster
* add version 2.2.13 to r-rbibutils
* add version 1.0.10 to r-rcpp
* add version 0.12.2.0.0 to r-rcpparmadillo
* add version 0.1.7 to r-rcppde
* add version 0.3.13 to r-rcppgsl
* add version 1.98-1.12 to r-rcurl
* add version 1.2-1 to r-rda
* add version 2.1.4 to r-readr
* add version 1.4.2 to r-readxl
* add version 1.0.6 to r-recipes
* add version 1.1.6 to r-repr
* add version 1.2.16 to r-reproducible
* add version 0.3.0 to r-require
* add version 1.28 to r-reticulate
* add version 2.0.7 to r-rfast
* add version 1.6-6 to r-rgdal
* add version 0.6-2 to r-rgeos
* add version 1.1.3 to r-rgl
* add version 0.2.18 to r-rinside
* add version 4-14 to r-rjags
* add version 1.3-1.8 to r-rjsonio
* add version 2.21 to r-rmarkdown
* add version 0.9-2 to r-rmpfr
* add version 0.7-1 to r-rmpi
* add version 6.6-0 to r-rms
* add version 0.10.25 to r-rmysql
* add version 0.8.7 to r-rncl
* add version 2.4.11 to r-rnexml
* add version 0.95-1 to r-robustbase
* add version 1.3-20 to r-rodbc
* add version 7.2.3 to r-roxygen2
* add version 1.4.5 to r-rpostgres
* add version 0.7-5 to r-rpostgresql
* add version 0.8.29 to r-rsconnect
* add version 0.4-15 to r-rsnns
* add version 2.3.1 to r-rsqlite
* add version 0.7.2 to r-rstatix
* add version 1.1.2 to r-s2
* add version 0.4.5 to r-sass
* add version 0.1.9 to r-scatterpie
* add version 0.3-43 to r-scatterplot3d
* add version 3.2.4 to r-scs
* add version 1.6-4 to r-segmented
* add version 4.2-30 to r-seqinr
* add version 0.26 to r-servr
* add version 4.3.0 to r-seurat
* add version 1.0-12 to r-sf
* add version 0.4.2 to r-sfheaders
* add version 1.1-15 to r-sfsmisc
* add version 1.7.4 to r-shiny
* add version 1.9.0 to r-signac
* add version 1.6.0.3 to r-smoof
* add version 0.1.7-1 to r-sourcetools
* add version 1.6-0 to r-sp
* add version 1.3-0 to r-spacetime
* add version 7.3-16 to r-spatial
* add version 2.0-0 to r-spatialeco
* add version 1.2-8 to r-spatialreg
* add version 3.0-5 to r-spatstat
* add version 3.0-1 to r-spatstat-data
* add version 3.1-0 to r-spatstat-explore
* add version 3.1-0 to r-spatstat-geom
* add version 3.1-0 to r-spatstat-linnet
* add version 3.1-4 to r-spatstat-random
* add version 3.0-1 to r-spatstat-sparse
* add version 3.0-2 to r-spatstat-utils
* add version 2.2.2 to r-spdata
* add version 1.2-8 to r-spdep
* add version 0.6-1 to r-stars
* add version 1.5.0 to r-statmod
* add version 4.8.0 to r-statnet-common
* add version 1.7.12 to r-stringi
* add version 1.5.0 to r-stringr
* add version 1.9.1 to r-styler
* add version 3.5-5 to r-survival
* add version 1.5-4 to r-tclust
* add version 1.7-29 to r-terra
* add version 3.1.7 to r-testthat
* add version 1.1-2 to r-th-data
* add version 1.2 to r-tictoc
* add version 1.3.2 to r-tidycensus
* add version 1.2.3 to r-tidygraph
* add version 1.3.0 to r-tidyr
* add version 2.0.0 to r-tidyverse
* add version 0.2.0 to r-timechange
* add version 0.45 to r-tinytex
* add version 0.4.1 to r-triebeard
* add version 1.0-9 to r-truncnorm
* add version 0.10-53 to r-tseries
* add version 0.8-1 to r-units
* add version 4.3.0 to r-v8
* add version 1.4-11 to r-vcd
* add version 1.14.0 to r-vcfr
* add version 0.6.2 to r-vctrs
* add version 1.1-8 to r-vgam
* add version 0.4.0 to r-vioplot
* add version 1.6.1 to r-vroom
* add version 1.72-1 to r-wgcna
* add version 0.4.1 to r-whisker
* add version 0.7.2 to r-wk
* add version 0.39 to r-xfun
* add version 1.7.5.1 to r-xgboost
* add version 1.0.7 to r-xlconnect
* add version 3.99-0.14 to r-xml
* add version 0.13.1 to r-xts
* add version 2.3.7 to r-yaml
* add version 2.3.0 to r-zip
* add version 1.8-12 to r-zoo
* r-bigmem: dependency on uuid
* r-bio3d: dependency on zlib
* r-devtools: dependency cleanup
* r-dose: dependency cleanup
* r-dss: dependency cleanup
* r-enrichplot: dependency cleanup
* r-fgsea: dependency cleanup
* r-geor: dependency cleanup
* r-ggridges: dependency cleanup
* r-lobstr: dependency cleanup
* r-lubridate: dependency cleanup
* r-mnormt: dependency cleanup
* r-sctransform: version format correction
* r-seuratobject: dependency cleanup
* r-tidyselect: dependency cleanup
* r-tweenr: dependency cleanup
* r-uwot: dependency cleanup
* new package: r-clock
* new package: r-conflicted
* new package: r-diagram
* new package: r-doby
* new package: r-httr2
* new package: r-kableextra
* new package: r-mclogit
* new package: r-memisc
* new package: r-spatstat-model
* r-rmysql: use mariadb-client
* r-snpstats: add zlib dependency
* r-qs: add zstd dependency
* r-rcppcnpy: add zlib dependency
* black reformatting
* Revert "r-dose: dependency cleanup"
  This reverts commit 4c8ae8f5615ee124fff01ce43eddd3bb5d06b9bc.
* Revert "r-dss: dependency cleanup"
  This reverts commit a6c5c15c617a9a688fdcfe2b70c501c3520d4706.
* Revert "r-enrichplot: dependency cleanup"
  This reverts commit 65e116c18a94d885bc1a0ae667c1ef07d1fe5231.
* Revert "r-fgsea: dependency cleanup"
  This reverts commit ffe2cdcd1f73f69d66167b941970ede0281b56d7.
* r-rda: this package is back in CRAN
* r-sctransform: fix copyright
* r-seurat: fix copyright
* r-seuratobject: fix copyright
* Revert "add version 6.0-94 to r-caret"
  This reverts commit 236260597de97a800bfc699aec1cd1d0e3d1ac60.
* add version 6.0-94 to r-caret
* Revert "add version 1.8.5 to r-emmeans"
  This reverts commit 64a129beb0bd88d5c88fab564cade16c03b956ec.
* add version 1.8.5 to r-emmeans
* Revert "add version 5.0-1 to r-hmisc"
  This reverts commit 517643f4fd8793747365dfcfc264b894d2f783bd.
* add version 5.0-1 to r-hmisc
* Revert "add version 1.42 to r-knitr"
  This reverts commit 2a0d9a4c1f0ba173f7423fed59ba725bac902c37.
* add version 1.42 to r-knitr
* Revert "add version 1.6 to r-markdown"
  This reverts commit 4b5565844b5704559b819d2e775fe8dec625af99.
* add version 1.6 to r-markdown
* Revert "add version 0.26 to r-nmf"
  This reverts commit 4c44a788b17848f2cda67b32312a342c0261caec.
* add version 0.26 to r-nmf
* Revert "add version 2.3.1 to r-rsqlite"
  This reverts commit 5722ee2297276e4db8beee461d39014b0b17e420.
* add version 2.3.1 to r-rsqlite
* Revert "add version 1.0-12 to r-sf"
  This reverts commit ee1734fd62cc02ca7a9359a87ed734f190575f69.
* add version 1.0-12 to r-sf
* fix syntax error
2023-05-18 09:57:43 -07:00
Cameron Stanavige
2e2515266d unifyfs: new v1.1 release (#37756)
Add v1.1 release
Update mochi-margo dependency compatible versions
Update version range of libfabric conflict
2023-05-18 09:42:27 -07:00
Chris Green
776ab13276 [xrootd] New variants, new version, improve build config (#37682)
* Add FNAL Spack team to maintainers

* New variants and configuration improvements

* Version-dependent "no-systemd" patches.

* New variants `client_only`, and `davix`

* Better handling of `cxxstd` for different versions, including
  improved patching and CMake options.

* Version-specific CMake requirements.

* Better version-specific handling of `openssl` dependency.

* `py-setuptools` required for `+python` build.

* Specific enable/disable of CMake options and use of
  `-DFORCE_ENABLED=TRUE` to prevent unwanted/non-portable activation
  of features.

* Better handling of `+python` configuration.

* New version 5.5.5
2023-05-18 10:49:18 -05:00
Massimiliano Culpo
c2ce9a6d93 Bump Spack version on develop to 0.21.0.dev0 (#37760) 2023-05-18 12:47:55 +02:00
Peter Scheibel
4e3ed56dfa Bugfix: allow preferred new versions from externals (#37747) 2023-05-18 09:40:26 +02:00
Tamara Dahlgren
dcfcc03497 maintainers: switch from list to directive (#37752) 2023-05-17 22:25:57 +00:00
Stephen Sachs
125c20bc06 Add aws-pcluster[-aarch64] stacks (#37627)
Add aws-pcluster[-aarch64] stacks. These stacks build packages defined in
https://github.com/spack/spack-configs/tree/main/AWS/parallelcluster

They use a custom container from https://github.com/spack/gitlab-runners which
includes necessary ParallelCluster software to link and build as well as an
upstream spack installation with current GCC and dependencies.

Intel and ARM software is installed and used during the build stage but removed
from the buildcache before the signing stage.

Files `configs/linux/{arch}/ci.yaml` select the necessary providers in order to
build for specific architectures (icelake, skylake, neoverse_{n,v}1).
2023-05-17 16:21:10 -06:00
Brian Van Essen
f7696a4480 Added version 1.3.1 (#37735) 2023-05-17 14:51:02 -07:00
Harmen Stoppels
a5d7667cb6 lmod: fix build, bump patch version (#37744) 2023-05-17 13:18:02 -04:00
Massimiliano Culpo
d45818ccff Limit deepcopy to just the initial "all" section (#37718)
Modifications:
- [x] Limit the scope of the deepcopy when initializing module file writers
2023-05-17 10:17:41 -07:00
Scott Wittenburg
bcb7af6eb3 gitlab ci: no copy-only pipelines w/ deprecated config (#37720)
Make it clear that copy-only pipelines are not supported while still
using the deprecated ci config format. Also ensure that the deprecated
stack does not fail on spack pipelines for tags.
2023-05-17 09:46:30 -06:00
Juan Miguel Carceller
f438fb6c79 whizard: build newer versions in parallel (#37422) 2023-05-17 17:15:50 +02:00
Harmen Stoppels
371a8a361a libxcb: depend on python, remove releases that need python 2 (#37698) 2023-05-17 17:05:30 +02:00
Tamara Dahlgren
86b9ce1c88 spack test: fix stand-alone test suite status reporting (#37602)
* Fix reporting of packageless specs as having no tests

* Add test_test_output_multiple_specs with update to simple-standalone-test (and tests)

* Refactored test status summary; added more tests or checks
2023-05-17 16:03:21 +02:00
Seth R. Johnson
05232034f5 celeritas: new version 0.2.2 (#37731)
* celeritas: new version 0.2.2

* [@spackbot] updating style on behalf of sethrj
2023-05-17 05:38:09 -04:00
Peter Scheibel
7a3da0f606 Tk/Tcl packages: speed up file search (#35902) 2023-05-17 09:27:05 +02:00
Yoshiaki Senda
d96406a161 Add recently added Spack Docker Images to documentation (#37732)
Signed-off-by: Yoshiaki Senda <yoshiaki@live.it>
2023-05-17 08:48:27 +02:00
Tamara Dahlgren
ffa5962356 emacs: convert to new stand-alone test process (#37725) 2023-05-17 00:25:35 -04:00
Massimiliano Culpo
67e74da3ba Fix spack find not able to display version ranges in compilers (#37715) 2023-05-17 00:24:38 -04:00
Chris Green
9ee2d79de1 libxpm package: fix RHEL8 build with libintl (#37713)
Set LDFLAGS rather than LDLIBS
2023-05-16 13:32:26 -05:00
John W. Parent
79e4a13eee Windows: fix MSVC version handling (#37711)
MSVC compiler logic was using string parsing to extract version
from the compiler spec, which was fragile. This broke in #37572, so it has
been fixed and made more robust by using attribute access.
2023-05-16 11:00:55 -07:00
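To see why attribute access is sturdier here, consider a minimal sketch (a hypothetical class, not Spack's real `CompilerSpec`): string parsing silently yields the wrong value once the printed format changes, while the attribute stays correct.

```python
# Hypothetical stand-in for a compiler spec; illustration only.
class CompilerSpec:
    def __init__(self, name, version):
        self.name = name
        self.version = version

    def __str__(self):
        # Printing concrete versions as "@=x.y" is the format change from #37572.
        return f"{self.name}@={self.version}"


spec = CompilerSpec("msvc", "19.36")
print(str(spec).split("@")[-1])  # "=19.36" -- string parsing broke silently
print(spec.version)              # "19.36"  -- attribute access is unaffected
```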
kwryankrattiger
4627438373 CI: Expand E4S ROCm stack to include missing DaV packages (#36843)
* CI: Expand E4S ROCm stack to include missing DaV packages

Ascent: Fixup for VTK-m with Kokkos backend

* DaV SDK: Removed duplicated openmp variant for ascent

* Drop visit and add conflict for Kokkos

* E4S: Drop ascent from CUDA builds
2023-05-16 09:34:52 -05:00
Harmen Stoppels
badaaf7092 gha rhel8-platform-python: configure git safe.directory (#37708) 2023-05-16 16:31:13 +02:00
Harmen Stoppels
815ac000cc Revert "hdf5: fix showconfig (#34920)" (#37707)
This reverts commit 192e564e26.
2023-05-16 15:57:15 +02:00
Peter Scheibel
7bc5b26c52 Requirements and preferences should not define (non-git) versions (#37687)
Ensure that requirements `packages:*:require:@x` and preferences `packages:*:version:[x]`
fail concretization when no version defined in the package satisfies `x`. This always holds
except for git versions -- they are defined on the fly.
2023-05-16 15:45:11 +02:00
Harmen Stoppels
a0e7ca94b2 gha bootstrap-dev-rhel8: configure git safe.directory (#37702)
git has been updated to something more recent
2023-05-16 15:21:42 +02:00
Harmen Stoppels
e56c90d839 check_modules_set_name: do not check for "enable" key (#37701) 2023-05-16 11:51:52 +02:00
Ye Luo
54003d4d72 Update llvm recipe regarding libomptarget. (#36675) 2023-05-16 11:20:02 +02:00
QuellynSnead
c47b554fa1 libxcb/xcb-proto: Enable internal Python dependency (#37575)
In the past, Spack did not allow two different versions of the
same package within a DAG. That led to difficulties with packages
that still required Python 2 while other packages had already
switched to Python 3.

The libxcb and xcb-proto packages did not have Python 3 support
for a time. To get around this issue, Spack maintainers disabled
their dependency on an internal (i.e., Spack-provided) Python
(see #4145), forcing these packages to look for a system-provided
Python (see #7646).

This has worked for us all right, but with the arrival of our most
recent platform we seem to be missing the critical xcbgen Python
module on the system. Since most software has largely moved on to
Python 3 now, let's re-enable internal Spack dependencies for the
libxcb and xcb-proto packages.
2023-05-16 10:00:01 +02:00
Mikael Simberg
b027f64a7f Add conflict for pika with fmt@10 and +cuda/rocm (#37679) 2023-05-16 09:24:02 +02:00
Greg Becker
3765a5f7f8 unify: when_possible and unify: true -- Bugfix for error in 37438 (#37681)
Two bugs came in from #37438

1. `unify: when_possible` was broken because of an incorrect assertion. Abstract/concrete
   spec pairs were compared against the results that were in the process of being computed,
   rather than against the previous results.
2. `unify: true` had an ordering bug that could mix the association between abstract and
   concrete specs

- [x] 1 is resolved by creating a lookup from old concrete specs to old abstract specs,
      and we use that to associate the "new" concrete specs that happen to be the old
      ones with their abstract specs (since those are stripped out for concretization)
- [x] 2 is resolved by combining the new and old abstract as lists instead of combining
      them as sets. This is important because `set() | set()` does not make any ordering
      promises: `set` iteration order is arbitrary (it is `dict`, not `set`, that preserves
      insertion order in `python@3.7:`)
2023-05-16 01:08:34 -04:00
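The ordering point in item 2 is plain Python, independent of Spack: list concatenation keeps the pairing order between abstract and concrete specs, while `set` union may iterate in any order.

```python
old_abstract = ["a", "b", "c"]
new_abstract = ["d"]

as_lists = old_abstract + new_abstract           # always ['a', 'b', 'c', 'd']
as_sets = set(old_abstract) | set(new_abstract)  # iteration order is arbitrary

print(as_lists)
print(as_sets)  # unsafe to pair element-by-element with concrete specs
```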
Robert Blake
690661eadd Upgrading kosh to 3.0 (#37471)
* Upgrading kosh to 3.0.

* Accidentally regressed the package, changing back.

* Updating py-hdbscan versions for kosh.

* Fixing bug in patch.

* Adding 3.0.1

* Removing 3.0.

* Updating package deps for hdbscan to match requirements.txt.

* Version reqs for 3.0.*, need newer numpy and networkx

* spack style

* Reordering to match setup.py, adding "type" to python depends.
2023-05-16 01:08:20 -04:00
eugeneswalker
f7bbc326e4 trilinos: @develop fixes (#37615)
* trilinos@develop fixes

* Update var/spack/repos/builtin/packages/trilinos/package.py

Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>

---------

Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
2023-05-15 17:25:14 -07:00
Scott Wittenburg
a184bfc1a6 gitlab ci: reduce job name length of build_systems pipeline (#37686) 2023-05-16 00:26:37 +02:00
Alec Scott
81634440fb circos: add v0.69-9 (#37479) 2023-05-15 14:43:44 -07:00
Alec Scott
711d7683ac alluxio: add v2.9.3 (#37488) 2023-05-15 14:42:48 -07:00
Alec Scott
967356bcf5 codec2: add v1.1.0 (#37480) 2023-05-15 14:42:08 -07:00
Alec Scott
c006ed034a coinutils: add v2.11.9 (#37481) 2023-05-15 14:41:25 -07:00
Alec Scott
d065c65d94 g2c: add v1.7.0 (#37482) 2023-05-15 14:40:41 -07:00
Alec Scott
e23c372ff1 shadow: add v4.13 (#37485) 2023-05-15 14:38:32 -07:00
Alec Scott
25d2de5629 yoda: add v1.9.8 (#37487) 2023-05-15 14:37:31 -07:00
Alec Scott
d73a23ce35 cpp-httplib: add v0.12.3 (#37490) 2023-05-15 14:35:32 -07:00
Alec Scott
a62cb3c0f4 entt: add v3.11.1 (#37491) 2023-05-15 14:34:47 -07:00
Alec Scott
177da4595e harfbuzz: add v7.2.0 (#37492) 2023-05-15 14:34:06 -07:00
Alec Scott
e4f05129fe libconfuse: add v3.3 (#37493) 2023-05-15 14:33:19 -07:00
Alec Scott
c25b994917 libnsl: add v2.0.0 (#37494) 2023-05-15 14:32:49 -07:00
Alec Scott
95c4c5270a p11-kit: add v0.24.1 (#37495) 2023-05-15 14:31:43 -07:00
Alec Scott
1cf6a15a08 packmol: add v20.0.0 (#37496)
* packmol: add v20.0.0
* Fix zoltan homepage url
2023-05-15 14:29:01 -07:00
Alec Scott
47d206611a perl-module-build-tiny: add v0.044 (#37497) 2023-05-15 14:25:53 -07:00
Alec Scott
a6789cf653 zoltan: add v3.901 (#37498) 2023-05-15 14:25:00 -07:00
Alec Scott
933cd858e0 bdii: add v6.0.1 (#37499) 2023-05-15 14:24:14 -07:00
Alec Scott
8856361076 audit-userspace: add v3.1.1 (#37505) 2023-05-15 14:15:48 -07:00
Alec Scott
d826df7ef6 babl: add v0.1.106 (#37506) 2023-05-15 14:15:28 -07:00
Alec Scott
d8a9b42da6 actsvg: add v0.4.33 (#37503) 2023-05-15 14:14:45 -07:00
Alec Scott
7d926f86e8 bat: add v0.23.0 (#37507) 2023-05-15 14:09:51 -07:00
Alec Scott
1579544d57 beast-tracer: add v1.7.2 (#37508) 2023-05-15 14:09:18 -07:00
Alec Scott
1cee3fb4a5 cronie: add v1.6.1 (#37509) 2023-05-15 14:08:41 -07:00
Alec Scott
a8e2ad53dd cups: add v2.3.3 (#37510) 2023-05-15 14:08:09 -07:00
Alec Scott
6821fa7246 diamond: add v2.1.6 (#37511) 2023-05-15 14:07:31 -07:00
Alec Scott
09c68da1bd dust: add v0.8.6 (#37513) 2023-05-15 14:06:31 -07:00
Alec Scott
73064d62cf f3d: add v2.0.0 (#37514) 2023-05-15 14:05:37 -07:00
Alec Scott
168ed2a782 fullock: add v1.0.50 (#37515) 2023-05-15 14:02:36 -07:00
Alec Scott
9f60b29495 graphviz: add v8.0.5 (#37517) 2023-05-15 14:00:50 -07:00
Alec Scott
7abcd78426 krakenuniq: add v1.0.4 (#37519) 2023-05-15 13:59:15 -07:00
Alec Scott
d5295301de libfyaml: add v0.8 (#37520) 2023-05-15 13:58:13 -07:00
Alec Scott
beccc49b81 libluv: add v1.44.2-1 (#37522) 2023-05-15 13:55:57 -07:00
Alec Scott
037e7ffe33 libvterm: add v0.3.1 (#37524) 2023-05-15 13:54:15 -07:00
Alec Scott
293da8ed20 lighttpd: add v1.4.69 (#37525) 2023-05-15 13:53:30 -07:00
Alec Scott
2780ab2f6c mrchem: add v1.1.2 (#37526) 2023-05-15 13:51:54 -07:00
Alec Scott
1ed3c81b58 mutationpp: add v1.0.5 (#37527) 2023-05-15 13:50:48 -07:00
Alec Scott
50ce0a25b2 preseq: add v2.0.3 (#37528) 2023-05-15 13:49:47 -07:00
Alec Scott
d784227603 shtools: add v4.10.2 (#37530) 2023-05-15 13:47:15 -07:00
Alec Scott
ab9ed91539 tig: add v2.5.8 (#37531) 2023-05-15 13:46:03 -07:00
Alec Scott
421256063e trimgalore: add v0.6.9 (#37532) 2023-05-15 13:44:44 -07:00
Alec Scott
75459bc70c vdt: add v0.4.4 (#37533) 2023-05-15 13:43:40 -07:00
Carson Woods
33752eabb8 Improve package source code context display on error (#37655)
Spack displays package code context when it shouldn't (e.g., on `FetchError`s)
and doesn't display it when it should (e.g., when errors occur in builder classes).
The line attribution can sometimes be off by one, as well.

- [x] Display package context when errors occur in a subclass of `PackageBase`
- [x] Display package context when errors occur in a subclass of `BaseBuilder`
- [x] Do not display package context when errors occur in `PackageBase`,
      `BaseBuilder` or other core code that is not in a `package.py` file.
- [x] Fix off-by-one error for core code (don't subtract one from the line number *unless*
      it's in an actual `package.py` file).

---------

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2023-05-15 13:38:11 -07:00
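The last checklist item amounts to a one-line rule; a minimal sketch follows (the helper name is hypothetical, not Spack's actual function):

```python
def display_line(lineno: int, filename: str) -> int:
    """Hypothetical helper: adjust the reported line only for package code."""
    if filename.endswith("package.py"):
        return lineno - 1  # package recipes: stored line numbers are offset by one
    return lineno          # core code: report the line number unchanged


print(display_line(42, "var/spack/repos/builtin/packages/zlib/package.py"))  # 41
print(display_line(42, "lib/spack/spack/build_environment.py"))              # 42
```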
Alec Scott
f1d1bb9167 gsl-lite: add v0.41.0 (#37483) 2023-05-15 13:36:03 -07:00
Alec Scott
68eaff24b0 crtm-fix: correct invalid checksum for v2.4.0 (#37500) 2023-05-15 13:34:32 -07:00
Alec Scott
862024cae1 dos2unix: add v7.4.4 (#37512) 2023-05-15 13:31:19 -07:00
Adam J. Stewart
9d6bcd67c3 Update PyTorch ecosystem (#37562) 2023-05-15 13:29:44 -07:00
Chris White
d97ecfe147 SUNDIALS: new version of sundials and guard against examples being installed (#37576)
* add new version of sundials and guard against examples not installing
* fix flipping of variant
* fix directory not being there when writing a file
2023-05-15 13:21:37 -07:00
Alec Scott
0d991de50a subversion: add v1.14.2 (#37543) 2023-05-15 13:16:04 -07:00
Alec Scott
4f278a0255 go: add v1.20.4 (#37660)
* go: add v1.20.4
* Deprecate v1.20.2 and v1.19.7 due to CVE-2023-24538
2023-05-15 13:10:02 -07:00
Chris Green
6e72a3cff1 [davix] Enable third party copies with gSOAP (#37648)
* [davix] Enable third party copies with gSOAP

* Add FNAL Spack team to maintainers
2023-05-15 14:46:52 -05:00
snehring
1532c77ce6 micromamba: adding version 1.4.2 (#37594)
* micromamba: adding version 1.4.2
* micromamba: change to micromamba-1.4.2 tag artifacts
2023-05-15 10:40:54 -07:00
Mikael Simberg
5ffbce275c Add ut (#37603) 2023-05-15 10:35:55 -07:00
Carsten Uphoff
0e2ff2dddb Add double-batched FFT library v0.4.0 (#37616)
Signed-off-by: Carsten Uphoff <carsten.uphoff@intel.com>
2023-05-15 10:28:52 -07:00
Mikael Simberg
c0c446a095 stdexec: Add 23.03 (#37638) 2023-05-15 10:20:12 -07:00
snehring
33dbd44449 tmscore: adding new package (#37644) 2023-05-15 10:17:50 -07:00
Sean Koyama
7b0979c1e9 hwloc: explicitly disable building netloc for ~netloc (#35604)
* hwloc: explicitly disable building netloc for ~netloc

* hwloc: update syntax for netloc variant configure argument

---------

Co-authored-by: Sean Koyama <skoyama@anl.gov>
2023-05-15 12:16:21 -05:00
snehring
c9849dd41d tmalign: new version 20220412 (#37645) 2023-05-15 10:14:58 -07:00
Chris Green
d44e97d3f2 [scitokens-cpp] New variant cxxstd, depend on standalone jwt-cpp (#37643)
* Add FNAL Spack team to maintainers
* New variant `cxxstd`
* Depend on `jwt-cpp`
* New versions: 0.7.2, 0.7.3
2023-05-15 13:08:00 -04:00
Adam J. Stewart
8713ab0f67 py-timm: add v0.9 (#37654)
* py-timm: add v0.9
* add v0.9.1 and v0.9.2
* add new package py-safetensors (v0.3.1)
2023-05-15 09:41:58 -07:00
Harmen Stoppels
6a47339bf8 oneapi: before script load modules (#37678) 2023-05-15 18:39:58 +02:00
Alec Scott
1c0fb6d641 amrfinder: add v3.11.8 (#37656) 2023-05-15 09:38:29 -07:00
Alec Scott
b45eee29eb canal: add v1.1.6 (#37657) 2023-05-15 09:36:17 -07:00
Alec Scott
6d26274459 code-server: add v4.12.0 (#37658) 2023-05-15 09:35:17 -07:00
Alec Scott
2fb07de7bc fplll: add v5.4.4 (#37659) 2023-05-15 09:34:13 -07:00
Alec Scott
7678dc6b49 iso-codes: add v4.15.0 (#37661) 2023-05-15 09:27:05 -07:00
Frank Willmore
1944dd55a7 Update package.py for maker (#37662) 2023-05-15 09:25:43 -07:00
Adam J. Stewart
0b6c724743 py-sphinx: add v7.0.1 (#37665) 2023-05-15 09:23:22 -07:00
eugeneswalker
fa98023375 new pkg: py-psana (#37666) 2023-05-15 09:19:54 -07:00
Todd Gamblin
e79a911bac bugfix: allow reuse of packages from foreign namespaces
We currently throw a nasty error if you try to reuse packages from some other namespace
(e.g., OLCF), but we should be able to reuse patched local versions of builtin packages.

Right now the only obstacle to that is that we try to look up virtual info for unknown
namespaces, and we can't get the package from the repo to do that. We *can* assume that
a package with a known namespace is similar, and that its virtual provider information
is reasonably accurate, so we now do that. This isn't 100% accurate, but neither is
relying on the package itself, as it may have gone out of date.

The real solution here is virtual edge information, but this is a stopgap until we have
that.
2023-05-15 09:15:49 -07:00
Todd Gamblin
fd3efc71fd bugfix: don't look up virtual information for unknown packages
`spec_clauses()` attempts to look up package information for concrete specs in order to
determine which virtuals they may provide. This fails for renamed/deleted dependencies
of buildcaches and installed packages.

This will eventually be fixed by #35258, which adds virtual information on edges, but we
need a workaround to make older buildcaches usable.

- [x] make an exception for renamed packages and omit their virtual constraints
- [x] add a note that this will be solved by adding virtuals to edges
2023-05-15 09:15:49 -07:00
Todd Gamblin
0458de18de bugfix: don't look up patches from packages for concrete specs
The concretizer can fail with `reuse:true` if a buildcache or installation contains a
package with a dependency that has been renamed or deleted in the main repo (e.g.,
`netcdf` was refactored to `netcdf-c`, `netcdf-fortran`, etc., but there are still
binary packages with dependencies called `netcdf`).

We should still be able to install things for which we are missing `package.py` files.

`Spec.inject_patches_variant()` was failing this requirement by attempting to look up
the package class for concrete specs.  This isn't needed -- we can skip it.

- [x] swap two conditions in `Spec.inject_patches_variant()`
2023-05-15 09:15:49 -07:00
Vanessasaurus
f94ac8c770 add new package flux-security (#37668)
I will follow this up with a variant for flux-core that adds flux-security, and then automation in the flux-framework/spack repository.

Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2023-05-15 09:14:44 -07:00
Andrew W Elble
a03c28a916 routinator: update, deprecate old version (#37676) 2023-05-15 09:10:40 -07:00
Victor Lopez Herrero
7b7fdf27f3 dlb: add v3.3 (#37677) 2023-05-15 09:08:57 -07:00
Sergey Kosukhin
192e564e26 hdf5: fix showconfig (#34920)
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
2023-05-15 11:03:03 -05:00
Chris Green
b8c5099cde [jwt-cpp] New package (#37641)
* [jwt-cpp] New package

* Update homepage

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

* [@spackbot] updating style on behalf of greenc-FNAL

---------

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
Co-authored-by: greenc-FNAL <greenc-FNAL@users.noreply.github.com>
2023-05-15 10:16:11 -05:00
Stephen Sachs
ea5bca9067 palace: add v0.11.1 and explicit BLAS support (#37605) 2023-05-15 16:11:50 +02:00
Harmen Stoppels
e33eafd34f Bump tutorial command (#37674) 2023-05-15 13:54:52 +02:00
Xavier Delaruelle
e1344b5497 environment-modules: add version 5.3.0 (#37671) 2023-05-15 09:32:53 +02:00
Todd Gamblin
cf9dc3fc81 spack find: get rid of @= in arch/compiler headers (#37672)
The @= in `spack find` output adds a bit of noise. Remove it as we
did for `spack spec` and `spack concretize`.

This modifies display_specs so it actually covers other places we use that routine, as
well, e.g., `spack buildcache list`.

before:

```
-- linux-ubuntu20.04-aarch64 / gcc@=11.1.0 -----------------------
ofdlcpi libpressio@0.88.0
```

after:

```
-- linux-ubuntu20.04-aarch64 / gcc@11.1.0 -----------------------
ofdlcpi libpressio@0.88.0
```
2023-05-15 09:08:50 +02:00
Bruno Turcksin
d265dd2487 Kokkos: add new release and new architectures (#37650) 2023-05-14 13:21:40 -07:00
Greg Becker
a2a6e65e27 concretizer: don't change concrete environments without --force (#37438)
If a user does not explicitly `--force` the concretization of an entire environment,
Spack will try to reuse the concrete specs that are already in the lockfile.

---------

Co-authored-by: becker33 <becker33@users.noreply.github.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2023-05-14 13:36:03 +02:00
Paul R. C. Kent
0085280db8 gcc: add 12.3.0 (#37553) 2023-05-14 12:08:41 +02:00
Andrew W Elble
6e07bf149d freecad: new package w/ dependencies/updates (#37557)
* freecad: new package w/ dependencies/updates

* review

* symbols/debug variants only when autotools
2023-05-13 21:14:50 -05:00
dale-mittleman
811cd5e7ef Adding librdkafka versions 1.9.2, 2.0.2 (#37501)
Co-authored-by: Alec Scott <hi@alecbcs.com>
2023-05-13 16:00:12 -07:00
Adam J. Stewart
081e21f55e py-lightly: py-torch~distributed supported in next release (#37558) 2023-05-13 15:49:45 -07:00
Todd Gamblin
c5a24675a1 spack spec: remove noisy @= from output (#37663)
@= is accurate, but noisy. Other UI commands tend not to
print the redundant `@=` for known concrete versions;
make `spack spec` consistent with them.
2023-05-13 11:34:15 -07:00
eugeneswalker
e9bfe5cd35 new pkg: py-psmon (#37652) 2023-05-13 11:16:44 -07:00
eugeneswalker
ca84c96478 new pkg: py-psalg (#37653) 2023-05-13 09:02:57 -07:00
Chris Green
c9a790bce9 [gsoap] New package gSOAP (#37647) 2023-05-13 11:01:50 -05:00
eugeneswalker
91c5b4aeb0 e4s ci stacks: add: hdf5-vol-{log,cache} (#37651) 2023-05-13 04:54:44 +00:00
Larry Knox
c2968b4d8c Add HDF5 version 1.14.1 (#37579)
* Add HDF5 version 1.14.1
* Update to version HDF5 1.14.1-2.
2023-05-12 20:54:08 -04:00
Scott Wittenburg
c08be95d5e gitlab ci: release fixes and improvements (#37601)
* gitlab ci: release fixes and improvements

  - use rules to reduce boilerplate in .gitlab-ci.yml
  - support copy-only pipeline jobs
  - make pipelines for release branches rebuild everything
  - make pipelines for protected tags copy-only

* gitlab ci: remove url changes used in testing

* gitlab ci: tag mirrors need public key

Make sure that mirrors associated with release branches and tags
contain the public key needed to verify the signed binaries.  This
also ensures that when stack-specific mirror contents are copied
to the root, the root mirror has the public key as well.

* review: be more specific about tags, curl flags

* Make the check in ci.yaml consistent with the .gitlab-ci.yml

---------

Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
2023-05-12 15:22:42 -05:00
Lehman Garrison
4e5fb62679 py-asdf: add 2.15.0 and dependencies (#37642)
* py-asdf: add 2.15.0 and dependencies

* py-asdf: PR review
2023-05-12 15:35:22 -04:00
Adam J. Stewart
cafc21c43d py-lightly: add v1.4.5 (#37625) 2023-05-12 11:39:03 -07:00
Adam J. Stewart
72699b43ab py-dill: add v0.3.1.1 (#37415) 2023-05-12 11:37:51 -07:00
MatthewLieber
6c85f59a89 Osu/mvapich2.3.7 1 (#37636)
* add 3.0b release

* adding mvapich2 version 2.3.7-1

---------

Co-authored-by: Matt Lieber <lieber.31@osu.edu>
2023-05-12 11:30:05 -07:00
Nathan Hanford
eef2536055 Allow buildcache specs to be referenced by hash (#35042)
Currently, specs on buildcache mirrors must be referenced by their full description. This PR allows buildcache specs to be referenced by their hashes, rather than their full description.

### How it works

Hash resolution has been moved from `SpecParser` into `Spec`, and now includes the ability to execute a `BinaryCacheQuery` after checking the local store, but before concluding that the hash doesn't exist.

### Side-effects of Proposed Changes

Failures will take longer when nonexistent hashes are parsed, as mirrors will now be scanned.

### Other Changes

- `BinaryCacheIndex.update` has been modified to fail appropriately only when mirrors have been configured.
- Tests of hash failures have been updated to use `mutable_empty_config` so they don't needlessly search mirrors.
- Documentation has been clarified for `BinaryCacheQuery`, and more documentation has been added to the hash resolution functions added to `Spec`.
2023-05-12 10:27:42 -07:00
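The lookup order can be pictured with a small self-contained sketch; the two lookup functions below are hypothetical stand-ins for the local store query and the `BinaryCacheQuery` over configured mirrors, not Spack's actual API.

```python
def local_store_lookup(prefix: str) -> list:
    return []  # pretend nothing installed locally matches the hash prefix


def mirror_lookup(prefix: str) -> list:
    return ["zlib@1.2.13 /abcdef"]  # pretend one buildcache entry matches


def resolve_hash_prefix(prefix: str) -> list:
    specs = local_store_lookup(prefix)  # 1. check the local store first
    if not specs:
        specs = mirror_lookup(prefix)   # 2. only then scan buildcache mirrors
    if not specs:                       # 3. fail only after both sources miss
        raise ValueError(f"no spec matches hash /{prefix}")
    return specs


print(resolve_hash_prefix("abcdef"))
```

This ordering is also why failures on nonexistent hashes now take longer: the mirror scan happens before giving up.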
Massimiliano Culpo
e2ae60a3b0 Update archspec to v0.2.1 (#37633) 2023-05-12 18:59:58 +02:00
Chris Green
d942fd62b5 [root] New version 6.28.04 with C++20 support (#37640)
* Add FNAL Spack team to maintainers.
* New version 6.28/04.
* Support C++20 with ROOT >= 6.28.04.
2023-05-12 09:50:48 -07:00
Andrey Parfenov
99d511d3b0 Add more variants for STREAM to customize build (#37283)
* Added STREAM builds customization

* Changed stream_type to enum

* fix code style issues

Signed-off-by: Andrey Parfenov <andrey.parfenov@intel.com>

* rm not necessary optimization

Signed-off-by: Andrey Parfenov <andrey.parfenov@intel.com>

---------

Signed-off-by: Andrey Parfenov <andrey.parfenov@intel.com>
Co-authored-by: iermolae <igor.ermolaev@intel.com>
2023-05-12 12:17:59 -04:00
Adam J. Stewart
ab8661533b GDAL: add v3.7.0 (#37598) 2023-05-12 12:13:10 -04:00
Robert Cohn
f423edd526 intel-oneapi-mkl: support gnu openmp (#37637)
* intel-oneapi-mkl: support gnu openmp

* intel-oneapi-mkl: support gnu openmp
2023-05-12 12:03:19 -04:00
Manuela Kuhn
0a4d4da5ce py-rsatoolbox: add 0.0.5, 0.1.0 and 0.1.2 (#37595)
* py-rsatoolbox: add 0.0.5, 0.1.0 and 0.1.2 from wheels

* py-setuptools: add 63.4.3

* remove wheels and open up requirements

* Fix style

* Update var/spack/repos/builtin/packages/py-rsatoolbox/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update var/spack/repos/builtin/packages/py-rsatoolbox/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Change version for python restriction

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-05-12 10:32:12 -05:00
Manuela Kuhn
845187f270 py-mne: add 1.4.0 and py-importlib-resources: add 5.12.0 (#37624)
* py-mne: add 1.4.0 and py-importlib-resources: add 5.12.0

* Fix style

* Update var/spack/repos/builtin/packages/py-mne/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-05-12 10:31:35 -05:00
Lehman Garrison
9b35c3cdcc Update tensorflow variant defaults to match upstream defaults (#37610)
* Update tensorflow variant defaults to match project's defaults

* Apply code style
2023-05-12 10:27:51 -05:00
Robert Cohn
fe8734cd52 Fix logic in setting oneapi microarchitecture flags (#37634) 2023-05-12 10:58:08 -04:00
Chris Green
40b1aa6b67 [geant4,geant4-data] New version 10.7.4 (#37382) 2023-05-12 15:20:50 +01:00
Eduardo Rothe
ced8ce6c34 cudnn: add versions 8.5.0, 8.6.0, 8.7.0 (#35998) 2023-05-12 07:38:11 -04:00
Tamara Dahlgren
9201b66792 AML: Convert to new stand-alone test process (#35701) 2023-05-12 13:22:11 +02:00
Massimiliano Culpo
fd45839c04 Improve error message for buildcaches (#37626) 2023-05-12 11:55:13 +02:00
Mikael Simberg
2e25db0755 Add pika 0.15.1 (#37628) 2023-05-12 11:45:30 +02:00
Massimiliano Culpo
ebfc706c8c Improve error messages when Spack finds a too new DB / lockfile (#37614)
This PR ensures that we'll get a comprehensible error message whenever an old
version of Spack tries to use a DB or a lockfile that is "too new".

* Fix error message when using a too new DB
* Add a unit-test to ensure we have a comprehensible error message
2023-05-12 08:13:10 +00:00
Steven R. Brandt
644a10ee35 Coastal Codes (#37176)
* Coastal codes installation
* Finished debugging swan.
* Fix formatting errors identified by flake8
* Another attempt to fix formatting.
* Fixed year in header.
* Fixed maintainers syntax and other details from review comments.
* Remove redundant url.

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2023-05-12 00:57:59 -04:00
snehring
bb96e4c9cc py-pysam: adding version 0.21.0 (#37623)
* py-pysam: adding version 0.21.0

* Update var/spack/repos/builtin/packages/py-pysam/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-05-12 00:47:53 -04:00
Tamara Dahlgren
d204a08aea Install/update the qt dependency (#37600) 2023-05-11 22:58:33 -05:00
Tamara Dahlgren
8e18297cf2 Environments: store spack version/commit in spack.lock (#32801)
Add a section to the lock file to track the Spack version/commit that produced
an environment. This should (eventually) enhance reproducibility, though we
do not currently do anything with the information. It just adds to provenance
at the moment.

Changes include:
- [x] adding the version/commit to `spack.lock`
- [x] refactor `spack.main.get_version()
- [x] fix a couple of environment lock file-related typos
2023-05-11 23:13:36 -04:00
MatthewLieber
b06d20be19 add 3.0b release (#37599)
Co-authored-by: Matt Lieber <lieber.31@osu.edu>
2023-05-11 17:10:15 -07:00
Alec Scott
a14d6fe56d gegl: add v0.4.44 (#37516) 2023-05-11 15:52:22 -07:00
eugeneswalker
47ec6a6ae5 e4s ci: trilinos +rocm: enable belos to fix build failure (#37617) 2023-05-11 14:02:20 -07:00
Massimiliano Culpo
5c7dda7e14 Allow using -j to control the parallelism of concretization (#37608)
fixes #29464

This PR allows to use
```
$ spack concretize -j X
```
to set a cap on the parallelism of concretization from the command line
2023-05-11 13:29:17 -07:00
Dom Heinzeller
0e87243284 libpng package: fix build error on macOS arm64 (#37613)
Turn off ARM NEON support on MacOS arm64

Co-authored-by: Stephen Herbener <stephen.herbener@gmail.com>
2023-05-11 16:27:43 -04:00
Nichols A. Romero
384f5f9960 Update Intel Pin package up to 3.27 (#37470) 2023-05-11 19:06:03 +02:00
Andrey Parfenov
c0f020d021 add openmp_max_threads variant and enable avx 512 optimizations for icelake (#37379)
* add openmp_max_threads variant and enable avx 512 optimizations for icelake and cascadelake

Signed-off-by: Andrey Parfenov <andrey.parfenov@intel.com>

* revert manual enabling of avx512 for icelake and cascadelake

Signed-off-by: Andrey Parfenov <andrey.parfenov@intel.com>

---------

Signed-off-by: Andrey Parfenov <andrey.parfenov@intel.com>
2023-05-11 09:23:04 -05:00
Tamara Dahlgren
dc58449bbf caliper: convert to new stand-alone test process (#35691) 2023-05-11 14:40:02 +02:00
Tamara Dahlgren
d8a72b68dd bricks: convert to new stand-alone test process (#35694) 2023-05-11 14:39:09 +02:00
Mosè Giordano
040c6e486e julia: Fix llvm shlib symbol version for v1.9 (#37606) 2023-05-11 08:22:40 -04:00
Harmen Stoppels
4fa7880b19 lmod: fix CompilerSpec concrete version / range (#37604) 2023-05-11 12:00:07 +02:00
Nisarg Patel
f090b05346 Update providers of virtual packages related to Intel OneAPI (#37412)
* add a virtual dependency name instead of complete package name

* add OneAPI components as providers of virtual packages

* Revert the default of tbb

---------

Co-authored-by: Nisarg Patel <nisarg.patel@lrz.de>
2023-05-11 05:58:24 -04:00
Mikael Simberg
0c69e5a442 Add fmt 10.0.0 (#37591) 2023-05-11 04:57:47 -04:00
Massimiliano Culpo
8da29d1231 Improve the message for errors in package recipes (#37589)
fixes #30355
2023-05-11 10:34:39 +02:00
Massimiliano Culpo
297329f4b5 Improve error message for missing "command" entry in containerize (#37590)
fixes #21242
2023-05-11 10:33:51 +02:00
Mosè Giordano
1b6621a14b julia: Add v1.9.0 (#35631) 2023-05-11 10:30:52 +02:00
Peter Scheibel
bfa54da292 Allow clingo to enforce flags when they appear in requirements (#37584)
Flags are encoded differently from other variants, and they need a choice rule to
ensure clingo has a choice to impose (or not) a constraint.
2023-05-11 09:17:16 +02:00
Jaelyn Litzinger
730ab1574f Upgrade exago's petsc dependency to v3.19.0 (#37092)
* add petsc 3.19 for exago@develop
* simplify version syntax
2023-05-10 18:25:11 -07:00
Harmen Stoppels
2c17c4e632 ci: remove --mirror-url flag (#37457)
The flags --mirror-name / --mirror-url / --directory were deprecated in 
favor of just passing a positional name, url or directory, and letting spack
figure it out.

---------

Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
2023-05-10 16:34:29 -06:00
John W. Parent
ec800cccbb Windows: Fix external detection for service accounts (#37293)
Prior to this PR, the HOMEDRIVE environment variable was used to
detect what drive we are operating in. This variable is not available
for service account logins (like what is used for CI), so switch to
extracting the drive from PROGRAMFILES (which is more widely defined).
2023-05-10 18:12:58 -04:00
John W. Parent
85cc9097cb Windows: prefer Python decompression support (#36507)
On Windows, several commonly available system tools for decompression
are unreliable (gz/bz2/xz). This commit refactors `decompressor_for`
to call out to a Windows or Unix-specific method:

* The decompressor_for_nix method behaves the same as before and
  generally treats the Python/system support options for decompression
  as interchangeable (although avoids using Python's built-in tar
  support since that has had issues with permissions).
* The decompressor_for_win method can only use Python support for
  gz/bz2/xz, although for a tar.gz it does use system support for
  untar (after the decompression step). .zip uses the system tar
  utility, and .Z depends on external support (i.e. that the user
  has installed 7zip).

A naming scheme has been introduced for the various _decompression
methods:

* _system_gunzip means to use a system tool (and fail if it's not
    available)
* _py_gunzip means to use Python's built-in support for decompressing
    .gzip files (and fail if it's not available)
* _gunzip is a method that can do either
2023-05-10 18:07:56 -04:00
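As an example of the naming scheme, a `_py_gunzip`-style method can be sketched with only the standard library (an illustration, not the actual Spack implementation):

```python
import gzip
import shutil


def _py_gunzip(archive_file: str) -> str:
    """Decompress a .gz file using Python's gzip module; no system tool needed."""
    assert archive_file.endswith(".gz")
    decompressed_file = archive_file[: -len(".gz")]
    with gzip.open(archive_file, "rb") as src, open(decompressed_file, "wb") as dst:
        shutil.copyfileobj(src, dst)
    return decompressed_file
```

Per the scheme, `_system_gunzip` would shell out to a system tool instead, and `_gunzip` would try one and fall back to the other.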
snehring
830ee6a1eb py-gtdbtk: adding version 2.3.0 (#37581)
* py-gtdbtk: adding version 2.3.0

* py-gtdbtk: adding missing pydantic dep

* py-gtdbtk: restrict pydantic dep
2023-05-10 16:58:42 -05:00
Alec Scott
0da7b83d0b fd: merge fd-find with fd (#37580) 2023-05-10 14:29:13 -07:00
SoniaScard
f51a4a1ae1 Ophidia-analytics-framework, ophidia-io-server: Work (#36801)
* ophidia-io-server: new package at v1.7
* ophidia-io-server: Fix package
* ophidia-analytics-framework: new package at v1.7
* Fix code style in ophidia-analytics-framework
* Merge
* ophidia-analytics-framework: update package to v1.7.3
* Update package.py
* Fix style

---------

Co-authored-by: SoniaScard <SoniaScard@users.noreply.github.com>
Co-authored-by: Donatello Elia <eldoo@users.noreply.github.com>
2023-05-10 08:38:43 -07:00
H. Joe Lee
e49f10a28e fix(hdf5): h5pfc link failure (#37468)
* fix(hdf5): h5pfc link failure
  develop branch doesn't need linking any more.
  See: acb186f6e5
* [@spackbot] updating style on behalf of hyoklee

---------

Co-authored-by: hyoklee <hyoklee@users.noreply.github.com>
2023-05-10 08:33:27 -07:00
Harmen Stoppels
1d96fdc74a Fix compiler version issues (concrete vs range) (#37572) 2023-05-10 17:26:22 +02:00
Robert Cohn
8eb1829554 intel-oneapi-mkl: add threading support (#37586) 2023-05-10 10:47:57 -04:00
matteo-chesi
e70755f692 cuda: add versions 12.0.1, 12.1.0 and 12.1.1 (#37083) 2023-05-10 15:31:07 +02:00
G-Ragghianti
ebb40ee0d1 New option "--first" for "spack location" (#36283) 2023-05-10 12:26:29 +02:00
Robert Cohn
a2ea30aceb Create include/lib in prefix for oneapi packages (#37552) 2023-05-10 06:25:00 -04:00
Tamara Dahlgren
9a37c8fcb1 Stand-alone testing: make recipe support and processing spack-/pytest-like (#34236)
This is a refactor of Spack's stand-alone test process to be more spack- and pytest-like. 

It is more spack-like in that test parts are no longer "hidden" in a package's run_test()
method and pytest-like in that any package method whose name starts with test_ 
(i.e., a "test" method) is a test part. We also support the ability to embed test parts in a
test method when that makes sense.

Test methods are now implicit test parts. The docstring is the purpose for the test part. 
The name of the method is the name of the test part. The working directory is the active
spec's test stage directory. You can embed test parts using the test_part context manager.

Functionality added by this commit:
* Adds support for multiple test_* stand-alone package test methods, each of which is 
   an implicit test_part for execution and reporting purposes;
* Deprecates package use of run_test();
* Exposes some functionality from run_test() as optional helper methods;
* Adds a SkipTest exception that can be used to flag stand-alone tests as being skipped;
* Updates the packaging guide section on stand-alone tests to provide more examples;
* Restores the ability to run tests "inherited" from provided virtual packages;
* Prints the test log path (like we currently do for build log paths);
* Times and reports the post-install process (since it can include post-install tests);
* Corrects context-related error message to distinguish test recipes from build recipes.
2023-05-10 11:34:54 +02:00
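A hedged sketch of what a recipe looks like under the new process (the package, executable, and flags are made up; the `test_part` and `Executable` usage follows the conventions described above):

```python
from spack.package import *


class Example(Package):
    """Hypothetical package illustrating implicit test parts."""

    homepage = "https://example.org"
    url = "https://example.org/example-1.0.tar.gz"

    version("1.0", sha256="0" * 64)  # placeholder checksum

    def test_version_flag(self):
        """check that the installed binary reports its version"""
        exe = Executable(self.prefix.bin.example)
        exe("--version")

    def test_embedded_parts(self):
        """embed two test parts in one test method"""
        exe = Executable(self.prefix.bin.example)
        with test_part(self, "test_help", purpose="help text prints"):
            exe("--help")
        with test_part(self, "test_smoke", purpose="trivial run succeeds"):
            exe("--smoke")
```

Each `test_*` method is an implicit test part named after the method, with its docstring as the purpose.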
Alec Scott
49677b9be5 squashfs-mount: add v0.4.0 (#37478) 2023-05-10 10:49:21 +02:00
Alec Scott
6fc44eb540 shared-mime-info: add v1.10 (#37477) 2023-05-10 10:49:00 +02:00
Alec Scott
234febe545 kinesis: add v2.4.8 (#37476) 2023-05-10 10:48:44 +02:00
Alec Scott
83a1245bfd unifdef: add v2.12 (#37456) 2023-05-10 10:48:20 +02:00
Alec Scott
241c37fcf7 conmon: add v2.1.7 (#37320) 2023-05-10 10:47:54 +02:00
Alec Scott
a8114ec52c runc: add v1.1.6 (#37308) 2023-05-10 10:47:44 +02:00
Manuela Kuhn
f92b5d586f py-datalad: add 0.18.3 (#37411)
* py-datalad: add 0.18.3

* [@spackbot] updating style on behalf of manuelakuhn

* Remove metadata variant

* Fix dependencies

* Remove redundant version restriction
2023-05-10 03:57:59 -04:00
Alec Scott
492d68c339 r-knitr: add v1.42 (#37203) 2023-05-09 17:43:58 -05:00
Alec Scott
2dcc55d6c5 ssht: add v1.5.2 (#37542) 2023-05-09 11:37:40 -07:00
eugeneswalker
dc897535df py-loguru: add v0.2.5, v0.3.0 (#37574)
* py-loguru: add v0.2.5

* py-loguru: add v0.3.0
2023-05-09 11:16:02 -07:00
kwryankrattiger
45e1d3498c CI: Backwards compatibility requires script override behavior (#37015) 2023-05-09 10:42:06 -06:00
eugeneswalker
af0f094292 memkind: parallel = false (#37566) 2023-05-09 09:04:54 -07:00
Alec Scott
44b51acb7b z-checker: add v0.9.0 (#37534) 2023-05-09 06:52:43 -07:00
eugeneswalker
13dd05e5ec hip: get_paths for hipify-clang (#37559)
* hip: get_paths for hipify-clang

* fix: need to actually use get_paths now to get hipify-clang path

* set hipify-clang path differently for external vs spack-installed case

* [@spackbot] updating style on behalf of eugeneswalker
2023-05-09 06:51:04 -07:00
Massimiliano Culpo
89520467e0 Use single quotes to inline manifest in Dockerfiles (#37571)
fixes #22341

Using double quotes creates issues with shell variable substitutions,
in particular when the manifest has "definitions:" in it. Use single
quotes instead.
2023-05-09 13:20:25 +02:00
Harmen Stoppels
9e1440ec7b spack view copy: relocate symlinks (#32306) 2023-05-09 12:17:16 +02:00
Alec Scott
71cd94e524 gh: add conflict for v2.28.0 and macos (#37563) 2023-05-09 08:55:54 +02:00
Alec Scott
ba696de71b breseq: add v0.38.1 (#37535) 2023-05-08 14:39:45 -07:00
Alec Scott
06b63cfce3 exiv2: add v0.27.6 (#37536) 2023-05-08 14:25:11 -07:00
Alec Scott
5be1e6e852 hazelcast: add v5.2.3 (#37537) 2023-05-08 14:22:46 -07:00
Alec Scott
e651c2b122 libjpeg-turbo: add v2.1.5 (#37539) 2023-05-08 14:19:24 -07:00
Alec Scott
ec2a4869ef mlst: add v2.23.0 (#37540) 2023-05-08 14:03:57 -07:00
Alec Scott
082fb1f6e9 scitokens-cpp: add v1.0.1 (#37541) 2023-05-08 14:00:33 -07:00
Alec Scott
95a65e85df delta: add v2.3.0 (#37545) 2023-05-08 13:46:08 -07:00
Alec Scott
d9e7aa4253 fd-find: add v8.7.0 (#37547) 2023-05-08 13:43:13 -07:00
Alec Scott
5578209117 druid: add v1.2.8 (#37546) 2023-05-08 13:42:10 -07:00
Alec Scott
b013a2de50 fd-find: add v8.7.0 (#37547) 2023-05-08 13:28:50 -07:00
Mark W. Krentel
d1c722a49c hpcviewer: add version 2023.04 (#37556) 2023-05-08 12:30:10 -07:00
eugeneswalker
3446feff70 use latest trilinos for +cuda variants (#37164) 2023-05-08 12:29:54 -07:00
eugeneswalker
41afeacaba new package: psalg (#37357)
* new package: psalg

* use new maintainer syntax

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2023-05-08 12:26:19 -07:00
Massimiliano Culpo
0139288ced Add a "requires" directive, extend functionality of package requirements (#36286)
Add a "require" directive to packages, which functions exactly like
requirements specified in packages.yaml (uses the same fact-generation
logic); update both to allow making the requirement conditional.

* Packages may now use "require" to add constraints. This can be useful
  for something like "require(%gcc)" (where before we had to add a
  conflict for every compiler except gcc).
* Requirements (in packages.yaml or in a "require" directive) can be
  conditional on a spec, e.g. "require(%gcc, when=@1.0.0)" (version
  1.0.0 can only build with gcc).
* Requirements may include a message which clarifies why they are needed.
  The concretizer assigns a high priority to errors which generate these
  messages (in particular over errors for unsatisfied requirements that
  do not produce messages, but also over a number of more-generic
  errors).
2023-05-08 10:12:26 -07:00
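A sketch of the directive in a hypothetical package.py, following the title's `requires` spelling and the semantics described above (package, versions, and message are made up):

```python
from spack.package import *


class Example(Package):
    """Hypothetical package demonstrating the requires directive."""

    homepage = "https://example.org"
    url = "https://example.org/example-1.0.tar.gz"

    version("2.0.0", sha256="0" * 64)  # placeholder checksum
    version("1.0.0", sha256="1" * 64)  # placeholder checksum

    # One requirement replaces a conflicts() for every non-gcc compiler:
    requires("%gcc")

    # A conditional requirement with a message the concretizer can surface:
    requires(
        "%gcc@10:",
        when="@2.0.0:",
        msg="example 2.x needs the newer libstdc++ shipped with gcc 10 and up",
    )
```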
eugeneswalker
d0cba2bf35 caliper +rocm: use hipcc for CMAKE_CXX_COMPILER (#35219) 2023-05-08 09:40:23 -07:00
Mark W. Krentel
6afa2d6298 libmonitor: add version 2023.03.15 (#37434) 2023-05-08 08:58:11 -07:00
Seth R. Johnson
947445ccdd Fix pixman macOS build and add missing build deps (#36982) 2023-05-08 08:47:51 -07:00
snehring
b605cd0151 trinity: adding version 2.15.1 (#37076) 2023-05-08 08:32:05 -07:00
Mikael Simberg
273bccbccf Add HPX 1.9.0 (#37426) 2023-05-08 12:24:02 +02:00
Alec Scott
d5d596a851 pkgconf: add v1.9.4 (#36437) 2023-05-07 23:27:28 +02:00
eugeneswalker
0ddb5de27c caliper +rocm: patch missing libunwind include dir (#37461)
* patch missing libunwind include dir

* caliper +libunwind +sampler: patch libunwind include dir
2023-05-07 12:49:46 -07:00
eugeneswalker
8942909852 petsc@3.19.1 +rocm: conflicts with rocprim@5.3.0 (#37474)
* petsc@3.19.1 +rocm: conflicts with rocprim@5.3.0

* conflict with rocprim@5.3.0:5.3.2 when +rocm
2023-05-07 17:27:52 +00:00
Alec Scott
0143d5bf01 libtiff: add v4.5.0 (#37523) 2023-05-07 12:22:56 -04:00
Harmen Stoppels
e17d6d5eee gitlab ci: bump tutorial image (#37544) 2023-05-07 16:24:33 +02:00
Alec Scott
2a54cda953 libblastrampoline: add v5.8.0 (#37538) 2023-05-07 10:45:34 +02:00
Alec Scott
097d3e15b4 libpcap: add v1.10.4 (#37451) 2023-05-05 22:02:57 -07:00
Tamara Dahlgren
374264f610 Packaging Guide: build-time test updates: option and test logs (#37093)
* Packaging Guide: build-time test updates: option and test logs
* Fix a couple of typos
2023-05-05 22:19:06 -06:00
Chris Green
d6bf9bc8f1 [elfutils] iconv is required (see ./configure --help) (#37464) 2023-05-05 22:16:03 -06:00
Harmen Stoppels
fa7719a031 Improve version, version range, and version list syntax and behavior (#36273)
## Version types, parsing and printing

- The version classes have changed: `VersionBase` is removed, there is now a
  `ConcreteVersion` base class. `StandardVersion` and `GitVersion` both inherit
  from this.

- The public api (`Version`, `VersionRange`, `ver`) has changed a bit:
  1. `Version` produces either `StandardVersion` or `GitVersion` instances.
  2. `VersionRange` produces a `ClosedOpenRange`, but this shouldn't affect the user.
  3. `ver` produces any of `VersionList`, `ClosedOpenRange`, `StandardVersion`
     or `GitVersion`.

- No unexpected type promotion, so that the following is no longer an identity:
  `Version(x) != VersionRange(x, x)`.

- `VersionList.concrete` now returns a version if it contains only a single element
  subtyping `ConcreteVersion` (i.e. `StandardVersion(...)` or `GitVersion(...)`)

- In version lists, the parser turns `@x` into `VersionRange(x, x)` instead
  of `Version(x)`.

- The above also means that `ver("x")` produces a range, whereas
  `ver("=x")` produces a `StandardVersion`. The `=` is part of _VersionList_
  syntax.

- `VersionList.__str__` now outputs `=x.y.z` for specific version entries,
  and `x.y.z` as a short-hand for ranges `x.y.z:x.y.z`.

- `Spec.format` no longer aliases `{version}` to `{versions}`, but pulls the
  concrete version out of the list and prints that -- except when the list is
  not concrete, in which case it falls back to `{versions}` to avoid a pedantic error.
  For projections of concrete specs, `{version}` should be used to render
  `1.2.3` instead of `=1.2.3` (which you would get with `{versions}`).
  The default `Spec` format string used in `Spec.__str__` now uses
  `{versions}` so that `str(Spec(string)) == string` holds.

## Changes to `GitVersion`

- `GitVersion` is a small wrapper around `StandardVersion` which enriches it
   with a git ref. It no longer inherits from it.

- `GitVersion` _always_ needs to be able to look up an associated Spack version
  if it was not assigned (yet). It throws a `VersionLookupError` whenever `ref_version`
  is accessed but it has no means to look up the ref; in the past Spack would
  not error and use the commit sha as a literal version, which was incorrect.
   
- `GitVersion` is never equal to `StandardVersion`, nor is satisfied by it. This
  is such that we don't lose transitivity. This fixes the following bug on `develop`
  where `git_version_a == standard_version == git_version_b` does not imply
  `git_version_a == git_version_b`. It also ensures equality always implies equal
  hash, which is also currently broken on develop; inclusion tests of a set of
  versions + git versions would behave differently from inclusion tests of a
  list of the same objects.

- The above means `ver("ref=1.2.3) != ver("=1.2.3")` could break packages that branch
  on specific versions, but that was brittle already, since the same happens with
  externals: `pkg@1.2.3-external` suffixes wouldn't be exactly equal either. Instead,
  those checks should be `x.satisfies("@1.2.3")` which works both for git versions and
  custom version suffixes.

- `GitVersion` from a commit will now print as `<hash>=<version>` once the
  git ref is resolved to a spack version. This is for reliability -- version is frozen
  when added to the database and queried later. It also improves performance
  since there is no need to clone all repos of all git versions after `spack clean -m`
  is run and something queries the database, triggering version comparison, such
  as potentially reuse concretization.

- The "empty VerstionStrComponent trick" for `GitVerison` is dropped since it wasn't
  representable as a version string (by design). Instead, it's replaced by `git`,
  so you get `1.2.3.git.4` (which reads 4 commits after a tag 1.2.3). This means
  that there's an edge case for version schemes `1.1.1`, `1.1.1a`, since the
  generated git version `1.1.1.git.1` (1 commit after `1.1.1`) compares larger
  than `1.1.1a`, since `a < git` are compared as strings. This is currently a
  wont-fix edge case, but if really required, could be fixed by special casing
  the `git` string.

- Saved, concrete specs (database, lock file, ...) that only had a git sha as their
  version, and have no means to look up the effective Spack version anymore, will
  now see their version mapped to `hash=develop`. Previously these specs
  would always have their sha literally interpreted as a version string (even when
  it _could_ be looked up). This only applies to databases, lock files and spec.json
  files created before Spack 0.20; after this PR, we always have a Spack version
  associated to the relevant `GitVersion`.

- Fixes a bug where previously `to_dict` / `from_dict` (de)serialization would not
  reattach the repo to the `GitVersion`, causing the git hash to be used as a literal
  (bogus) version instead of the resolved version. In particular, this broke
  version comparison in the build process on macOS/Windows.
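
A sketch of the recommended check, in hypothetical package logic (the package and
version are made up; `satisfies` is the point):

```python
# Hypothetical package code: branch on a range, not on exact version equality.
def needs_workaround(spec):
    # True for @=1.2.3, for a git commit resolved to 1.2.3 (<sha>=1.2.3),
    # and for suffixed externals such as @=1.2.3-external.
    return spec.satisfies("@1.2.3")

# By contrast, `spec.version == ver("=1.2.3")` is False for a GitVersion,
# even when its resolved version is 1.2.3.
```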


## Installing or matching specific versions

- In the past, `spack install pkg@3.2` would install `pkg@=3.2` if it was a
  known specific version defined in the package, even when newer patch releases
  `3.2.1`, `3.2.2`, `...` were available. This behavior only existed because there
  was no syntax to distinguish the specific version `3.2` from the range that
  also contains `3.2.1`, `3.2.2`, etc. Since there is syntax for this now through
  `pkg@=3.2`, the old exact-matching behavior is removed. This means that
  `spack install pkg@3.2` constrains the `pkg` version to the range `3.2`, and
  `spack install pkg@=3.2` constrains it to the specific version `3.2` (see the
  sketch after this list).

- Likewise, in directives such as `depends_on("pkg@2.3")` and their `when`
  conditions, e.g. `conflicts("...", when="@2.3")`, versions are ranges, and
  specific version matches require `@=2.3`.

- No matching version: if `pkg@3.2` matches nothing, concretization
  errors. However, if you run `spack install pkg@=3.2` and this version
  doesn't exist, Spack will define it; this allows you to install non-registered
  versions.

- For consistency, you can now do `%gcc@10` and let it match a configured
  `10.x.y` compiler. It errors when there is no matching compiler.
  In the past it was interpreted as the specific version `gcc@=10`, which
  would get bootstrapped.

- When compiler _bootstrapping_ is enabled, `%gcc@=10.2.0` can be used to
  bootstrap a specific compiler version.
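
The resulting matching rules, sketched with `Spec.satisfies` (a hypothetical
package name; assuming `spack.spec.Spec` behaves as described above):

```python
from spack.spec import Spec

print(Spec("pkg@=3.2").satisfies("@3.2"))     # True:  3.2 lies in the range 3.2
print(Spec("pkg@=3.2.1").satisfies("@3.2"))   # True:  so does the patch release
print(Spec("pkg@=3.2.1").satisfies("@=3.2"))  # False: exact versions differ
```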

## Other changes

- Externals, compilers, and develop spec definitions are backwards compatible.
  They are typically defined as `pkg@3.2.1` even though they should be
  saying `pkg@=3.2.1`. Spack now transforms `pkg@3` into `pkg@=3` in those cases.

- Finally, fix the strictness of the `version(...)` directive/declaration. It just does a
  simple type check, and now requires strings/integers. Floats are not allowed because
  they are ambiguous: `str(3.10) == "3.1"` (see below).
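
The ambiguity is plain Python, no Spack required:

```python
# The float literal 3.10 *is* the float 3.1, so the trailing zero is lost.
assert 3.10 == 3.1
assert str(3.10) == "3.1"

# Hence in a package recipe:
#   version("3.10")  -> fine, the intended version
#   version(3.10)    -> now a type error; it would silently mean 3.1
```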
2023-05-05 22:04:41 -06:00
Alec Scott
f6497972b8 apr: add v1.7.4 (#37445) 2023-05-05 18:28:58 -04:00
Alec Scott
5c3ec5f47c fms: add v2023.01 (#37450) 2023-05-05 18:24:10 -04:00
Manuela Kuhn
16bddf152e py-palettable: add 3.3.3 (#37443) 2023-05-05 18:23:33 -04:00
Mittagskogel
4403df4f08 hdf5: Add conflict for older cmake versions. (#37463)
See HDFGroup/hdf5 issue 2906
2023-05-05 18:19:39 -04:00
Alec Scott
7cf45442a7 phast: add v1.6 (#37455) 2023-05-05 18:14:45 -04:00
Alec Scott
edb8bc91b2 mmg: add v5.7.1 (#37453) 2023-05-05 18:14:23 -04:00
Alec Scott
9610ecb936 octave: add v8.2.0 (#37454) 2023-05-05 18:09:16 -04:00
Eric Berquist
b2a8e8734e Fix typos in packaging guide (#37460) 2023-05-05 22:08:58 +00:00
Alec Scott
c287dbbf13 logrotate: add v3.21.0 (#37452) 2023-05-05 18:08:54 -04:00
Alec Scott
1808ce11de extrae: add v4.0.4 (#37449) 2023-05-05 18:03:55 -04:00
Alec Scott
13b0b1d574 cli11: add v2.3.2 (#37447) 2023-05-05 11:33:49 -07:00
snehring
261a34a7a4 Adding ncbi-vdb 3.0.2 (#37435)
* ncbi-vdb: adding version 3.0.2
* sra-tools: adding version restriction for newer versions
2023-05-05 11:06:07 -07:00
Alec Scott
222b44bd45 biobloom: add v2.3.5 (#37446) 2023-05-05 11:01:22 -07:00
Alec Scott
c105ac07bb editline: add v1.17.1 (#37448) 2023-05-05 10:57:47 -07:00
Harmen Stoppels
9ef062fcca Add spack buildcache push (alias to buildcache create) (#34861)
`spack buildcache create` is a misnomer because it's the only way to push to
an existing buildcache (and it in fact calls binary_distribution.push).

Also we have `spack buildcache update-index` but for create the flag is
`--rebuild-index`, which is confusing (and also... why "rebuild"
something if the command is "create" in the first place, that implies it
wasn't there to begin with).

So, after this PR, you can use either

```
spack buildcache create --rebuild-index
```

or

```
spack buildcache push --update-index
```

Also, alias `spack buildcache rebuild-index` to `spack buildcache
update-index`.
2023-05-05 19:54:26 +02:00
Harmen Stoppels
ddea33bdc0 Update tutorial pipeline to Ubuntu 22.04 (#35451) 2023-05-05 17:52:07 +02:00
Harmen Stoppels
803425780e ci: stop downloading recent gmake (#37458) 2023-05-05 17:09:56 +02:00
Chris Green
d600aef4f4 Relax environment manifest filename requirements and lockfile identification criteria (#37413)
* Relax filename requirements and lockfile identification criteria

* Tests

* Update function docs and help text

* Update function documentation

* Update Sphinx documentation

* Adjustments per https://github.com/spack/spack/pull/37413#pullrequestreview-1413540132

* Further tweaks per https://github.com/spack/spack/pull/37413#pullrequestreview-1413971254

* Doc fixes per https://github.com/spack/spack/pull/37413#issuecomment-1535976068
2023-05-05 07:40:49 -05:00
Harmen Stoppels
af9b9f6baf binutils: enable debug section compression with zlib by default (#37359) 2023-05-05 14:14:48 +02:00
Harmen Stoppels
bbc779f3f0 cc: deal with -Wl,-rpath= without value, deal with NAG (#37215)
Spack never parsed `nagfor` linker arguments put on the compiler line: 
```
nagfor -Wl,-Wl,,-rpath,,/path
```
so, let's continue not attempting to parse that.
2023-05-05 12:16:31 +02:00
Harmen Stoppels
3ecb84d398 elfutils: unconditionally depend on zstd (#37368) 2023-05-05 10:40:14 +02:00
Michael Kuhn
b2c3973d4a meson: change default build type to "release" (#37436)
The same was done for CMake in #36679.
2023-05-05 10:35:40 +02:00
Harmen Stoppels
35e1dc8eba spack uninstall: reduce verbosity with named environments (#34001) 2023-05-05 10:23:08 +02:00
Harmen Stoppels
bf71b78094 deprecate buildcache create --rel, buildcache install --allow-root (#37285)
`buildcache create --rel`: deprecate this because there is no point in
making things relative before tarballing; on install you need to expand
`$ORIGIN` / `@loader_path` / relative symlinks anyways because some
dependencies may actually be in an upstream, or have different
projections.

`buildcache install --allow-root`: this flag was propagated through a
lot of functions but was ultimately unused.
2023-05-05 09:51:53 +02:00
Sergey Kosukhin
85730de055 mpich: avoid '-fallow-argument-mismatch' in the compiler wrappers (#33323) 2023-05-05 09:37:12 +02:00
Alec Scott
bc88b581b4 cleaveland4: add v4.5 (#37319) 2023-05-05 03:03:47 -04:00
Alec Scott
1ae556829a Revert "lua: add v5.4.5 (#37334)" (#37431)
This reverts commit 59e2ef6ad6.
2023-05-05 02:58:45 -04:00
Massimiliano Culpo
0c5a5e2ce0 Remove "blacklist" and "whitelist" from module configuration (#37432)
The sections were deprecated in v0.19
2023-05-05 00:28:34 -04:00
Greg Becker
c3593e5b48 Allow choosing the name of the packages subdirectory in repositories (#36643)
Co-authored-by: becker33 <becker33@users.noreply.github.com>
2023-05-04 23:36:21 +02:00
Robert Cohn
3c40d9588f intel-oneapi-mkl: include mpi libs when using +cluster (#37386) 2023-05-04 14:46:55 -04:00
Massimiliano Culpo
16613408e4 Place an upper bound on urllib3 to build docs (#37433) 2023-05-04 19:40:43 +02:00
Jack Morrison
6f22b5d724 iperf2: Add new versions 2.1.{7,8,9} (#37408) 2023-05-04 10:34:49 -07:00
Robert Cohn
420e093e42 detect ifx 2023.1, add test (#37377) 2023-05-04 10:27:19 -07:00
Erik Heeren
8e73eeb4b9 py-amici, py-python-libsbml: new packages (#35532)
* py-amici, py-python-libsbml: new packages

* Apply suggestions from code review

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Swig and cmake are build-only dependencies

* cmake as a run dependency after all

* py-amici: default boost and hdf5 variants to True

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-05-04 11:19:48 -05:00
Harmen Stoppels
a5300b5726 perl: fix jobserver job issue (#37428)
When building perl with posix jobserver, it seems to eat jobs, which
reduces parallelism to 1 in many cases, and is rather annoying. This is
solved in GNU Make 4.4 (fifo is more stable than file descriptors), but
that version is typically not available.

So, fix this issue by simply unsetting MAKEFLAGS for the duration of
./Configure. That's enough, and the build phase runs perfectly in
parallel again.
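
A sketch of the workaround (illustrative; `run_configure` stands in for the
recipe's actual `./Configure` invocation):

```python
import os

# Hide the jobserver file descriptors advertised via MAKEFLAGS while
# ./Configure runs, then restore them so the build phase stays parallel.
saved = os.environ.pop("MAKEFLAGS", None)
try:
    run_configure()  # hypothetical: runs ./Configure -des ...
finally:
    if saved is not None:
        os.environ["MAKEFLAGS"] = saved
```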
2023-05-04 17:50:00 +02:00
Massimiliano Culpo
86d3bad1e0 cmake build system: change default build type to Release (#36679)
This switches the default CMake build type to `build_type=Release`.

This offers:
- higher optimization level, including loop vectorization on older GCC
- adds NDEBUG define, which disables assertions, which could cause speedups if assertions are in loops etc
- no `-g` means smaller install size

Downsides are:
- worse backtraces (though this does NOT strip symbols)
- perf reports may be useless
- no function arguments / local variables in debugger (though they could be, of course)
- no file path / line numbers in debugger

The downsides can be mitigated by overriding to `build_type=RelWithDebInfo` in `packages.yaml`,
if needed.  The upside is that builds will be MUCH smaller (and faster) with this change.

---------

Co-authored-by: Gregory Becker <becker33@llnl.gov>
2023-05-04 11:33:35 -04:00
Massimiliano Culpo
600955edd4 Update vendored ruamel.yaml to v0.17.21 (#37008)
* Vendor ruamel.yaml v0.17.21

* Add unit test for whitespace regression

* Add an abstraction layer in Spack to wrap ruamel.yaml

All YAML operations are routed through spack.util.spack_yaml

The custom classes have been adapted to the new ruamel.yaml
class hierarchy.

Fixed line annotation issue in "spack config blame"
2023-05-04 08:00:38 -07:00
Massimiliano Culpo
95e61f2fdf Remove the old spec format in configuration (#37425)
The format was deprecated in v0.15
2023-05-04 07:59:11 -07:00
Mikael Simberg
e6d37b3b61 Add pika 0.15.0 (#37403) 2023-05-04 07:48:24 -04:00
Massimiliano Culpo
cf5daff6f5 Deprecate env: as top level environment key (#37424) 2023-05-04 07:08:29 -04:00
Annop Wongwathanarat
e5dcaebd43 acfl: add compiler-package mapping and fix version number (#36768) 2023-05-04 03:59:15 -05:00
Harmen Stoppels
84a70c26d9 buildcache metadata: store hash -> prefix mapping (#37404)
This ensures that:

a) no externals are added to the tarball metadata file
b) no externals are added to the prefix to prefix map on install, also
for old tarballs that did include externals
c) ensure that the prefix -> prefix map is always string to string, and
doesn't contain None in case for some reason a hash is missing
2023-05-04 10:09:22 +02:00
Mikael Simberg
3bfd948ec8 Add patches for generic context coroutine stack allocation in pika on macos (#37288) 2023-05-04 09:56:02 +02:00
Sam Reeve
58e527935c cabana: Add optional silo build (#37393) 2023-05-04 00:43:31 -04:00
Adam J. Stewart
c511fbb717 py-lightly: add v1.4.4 (#37406) 2023-05-03 15:41:05 -04:00
Bryce Torcello
541cdbbef2 docs: update RHEL/CentOS system prerequisites (#36720) 2023-05-03 19:04:16 +02:00
Alec Scott
eda806ab97 libjwt: add v1.15.2 (#37333) 2023-05-03 09:34:36 -07:00
Alec Scott
605aa45aab mii: add v1.1.2 (#37335) 2023-05-03 09:34:04 -07:00
Alec Scott
e8462f82ef texinfo: add v7.0.3 (#37348) 2023-05-03 09:33:28 -07:00
Manuela Kuhn
387ee494b0 py-mne: add 1.3.1 (#37399) 2023-05-03 10:17:17 -05:00
Manuela Kuhn
1f24fb1dd8 py-neurora: add 1.1.6.9 (#37398) 2023-05-03 10:13:58 -05:00
Manuela Kuhn
379aec5757 py-neurokit2: add 0.2.4 (#37396) 2023-05-03 10:13:12 -05:00
Manuela Kuhn
a091f7642d py-bidskit: add 2023.2.16 (#37395) 2023-05-03 10:12:16 -05:00
Manuela Kuhn
edc70942d7 py-nilearn: add 0.10.1 (#37394) 2023-05-03 10:11:14 -05:00
Alec Scott
a94d18ad08 perl-module-install: add v1.21 (#37341) 2023-05-03 08:26:50 -04:00
Egbert Eich
1491d8471d Add 'zypper' to the valid container.os_packages options (#36681)
Signed-off-by: Egbert Eich <eich@suse.com>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: e4t <e4t@users.noreply.github.com>
2023-05-03 13:05:14 +02:00
Massimiliano Culpo
03d1841385 Allow adding specs to an environment without the 'specs' attribute (#37378) 2023-05-03 13:01:16 +02:00
Harmen Stoppels
7c8590ee44 remove unused global in bindist tests (#37358)
* remove unused global in bindist tests
* remove unused function
2023-05-03 05:34:14 -04:00
Jack Morrison
71aa12f72c Intel MPI Benchmarks 'IMB-P2P' is available for versions newer than 2018. (#37360) 2023-05-03 05:29:14 -04:00
Scott Wittenburg
c7e60f441a buildcache push: improve printing (#36141) 2023-05-03 10:42:22 +02:00
Tim Haines
11aa2d721e intel-tbb: Add versions 2021.8.0 and 2021.9.0 (#37391) 2023-05-03 00:13:31 -04:00
Chris Green
c110bcc5af libintl, iconv, gettext: account for libc provider and externals (#35450)
* libiconv can be provided by libc, so update packages which depend on
  libiconv to require the iconv virtual instead
* Many packages need special consideration when locating iconv depending
  on whether it is provided by libc (no prefix provided) or the libiconv
  package (in that case we want to provide a prefix)
* It was also noticed that when an iconv external was provided, that
  there was interference with linking (this should generally be handled
  by Spack's compiler wrappers and bears further investigation)
* Like iconv, libintl can be provided by libc or another package, namely
  gettext. It is not converted to a provider like libiconv because it
  provides additional routines. The logic is similar to that of iconv
  but instead of checking the provider, we check whether the gettext
  installation includes libintl.
2023-05-02 18:18:30 -07:00
renjithravindrankannath
4edd364a8b Guard use of OpenMP in rocblas test (#36673)
* Provide openmp from rocm-openmp-extras for rocblas test
* Addressing the prechecks/audit/package-audits check
* Correcting style check errors.
* rocm-openmp-extras path variable restricting for test
* Correcting the env variable to run_tests
* Guard use of OpenMP to make it optional in rocblas test
* Removing unused patch
2023-05-02 13:11:13 -07:00
Zack Galbreath
42ede698c2 trilinos: add version 14.0.0 (#37387) 2023-05-02 14:53:37 -04:00
Massimiliano Culpo
68a4b2e4e4 GitHub Actions: do not install six in CI (#37361)
* GitHub Actions: do not install six in CI
* Remove workflow code that was commented out
* Remove any use of "six" from packages
2023-05-02 13:28:24 -04:00
renjithravindrankannath
131e1c0937 hip: Patch to handle file reorg changes for the tests (#36993)
* Patch to handle file reorg changes for the tests
* Correcting patch file name
* Limiting hipify-clang path to 5.4 and later
* Set hipify-clang path env in CMake
2023-05-02 13:23:26 -04:00
Daniel Ahlin
b8136d7052 gromacs: add 2023.1 version (#37371) 2023-05-02 12:53:11 -04:00
Alec Scott
c4ac9246e2 muparser: add v2.3.4 (#37298) 2023-05-02 11:34:44 -04:00
Alec Scott
3dd3526de9 globalarrays: add v5.8.2 (#37325) 2023-05-02 10:43:13 -04:00
Alec Scott
5c4636c86d spot: add v2.11.5 (#37299) 2023-05-02 08:43:10 -04:00
Alec Scott
a6ee9369b6 osi: add v0.108.8 (#37340) 2023-05-02 08:33:14 -04:00
Richard Berger
b8f35c4aa7 ports-of-call: add version 1.5.1 (#37366) 2023-05-02 08:23:28 -04:00
Glenn Johnson
e5a48033bd proj: don't depend on googletest at build time (#37240)
* proj: v6 depends on googletest at build time

* Have cmake block check for run_tests
2023-05-02 08:23:13 -04:00
Alec Scott
49497dd254 uriparser: add v0.9.7 (#37350) 2023-05-02 08:15:49 -04:00
Alec Scott
d60b055f64 redis-plus-plus: add v1.3.8 (#37344) 2023-05-02 08:15:27 -04:00
Alec Scott
59e2ef6ad6 lua: add v5.4.5 (#37334) 2023-05-02 07:38:33 -04:00
Alec Scott
a7744b0dbc sqlitebrowser: add v3.12.2 (#37300) 2023-05-02 07:38:11 -04:00
Jonathon Anderson
78cfad7881 intel-tbb: backport GCC 13 support patch (#37291) 2023-05-02 06:25:06 -04:00
Tristan Carel
60d3ed86d9 steps: add version 4.1.1, remove others (#37250)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2023-05-02 05:37:05 -04:00
Luca Heltai
c16c5ad106 dealii: add support for 9.4.1 and 9.4.2 (#36627)
* Make sure the standard is cxx17 for 9.4:

* Fix patches for > 9.4
2023-05-02 10:49:53 +02:00
Mark W. Krentel
9c854bf78e libpfm4: add version 4.13.0 (#37364) 2023-05-02 04:39:11 -04:00
Harmen Stoppels
27bce8d489 gdb: add missing zstd, add system dep for zlib (#37369) 2023-05-02 04:34:13 -04:00
Massimiliano Culpo
a92f1e37aa Disable module file generation by default (#37258)
* Disable module generation by default (#35564)

a) It's used by site administrators, so it's niche
b) If it's used by site administrators, they likely need to modify the config anyhow, so the default config only serves as an example to get started
c) it's too arbitrary to enable tcl, but disable lmod

* Remove leftover from old module file schema

* Warn if module file config is detected and generation is disabled

---------

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2023-05-02 10:28:27 +02:00
Olivier Cessenat
99c3ecc139 visit: python 3.9 and above acceptable (#37071) 2023-05-02 10:18:44 +02:00
Alec Scott
0f6170875c gh: add v2.28.0 (#37302) 2023-05-02 10:12:27 +02:00
Weiqun Zhang
986809c4c5 amrex: add 23.05 (#37362) 2023-05-02 04:04:28 -04:00
Alec Scott
d9c128132a tree: add v2.1.0 (#37349) 2023-05-02 03:30:43 -04:00
Alec Scott
a0cac6c6bf gnupg: add v2.4.1 (#37326) 2023-05-02 09:21:42 +02:00
Alec Scott
470523cc35 libibumad: add v44.1 (#37332) 2023-05-01 22:49:15 -07:00
Alec Scott
c82d4fdc08 pharokka: add v1.3.2 (#37342) 2023-05-01 22:48:30 -07:00
Alec Scott
c03448c827 jsoncpp: add v1.9.5 (#37303) 2023-05-02 01:41:11 -04:00
Alec Scott
59cd0711ba nco: add v5.1.5 (#37338) 2023-05-02 00:44:00 -04:00
Alec Scott
8ef31b23a4 kubernetes: add v1.27.1 (#37331) 2023-05-02 00:43:38 -04:00
Alec Scott
1bae07e54e libyogrt: add v1.33 (#37305) 2023-05-02 00:14:56 -04:00
Alec Scott
c7a2766ded brynet: add v1.12.2 (#37309) 2023-05-02 00:05:01 -04:00
Alec Scott
2a95b1e282 oniguruma: add v6.9.8 (#37307) 2023-05-02 00:04:36 -04:00
Alec Scott
5eb217e878 nanomsg: add v1.2 (#37306) 2023-05-02 00:04:15 -04:00
Alec Scott
22486eeb4e faust: add v2.54.9 (#37310) 2023-05-02 00:03:59 -04:00
Alec Scott
e5665730b6 liblouis: add v3.25.0 (#37304) 2023-05-02 00:03:44 -04:00
Alec Scott
edccf0d819 mapserver: add v8.0.1 (#37312) 2023-05-01 23:58:33 -04:00
Alec Scott
815b2f542a libmmtf-cpp: add v1.1.0 (#37311) 2023-05-01 23:58:11 -04:00
Alec Scott
b50f6510d4 beakerlib: add v1.29.3 (#37316) 2023-05-01 23:47:18 -04:00
Alec Scott
226f331a21 kubectl: add v1.27.1 (#37330) 2023-05-01 23:41:39 -04:00
Alec Scott
bc0477f3e0 bedtools2: add v2.31.0 (#37301) 2023-05-01 23:28:01 -04:00
Alec Scott
cf924e397f wsmancli: add v2.6.2 (#37351) 2023-05-01 23:22:13 -04:00
eugeneswalker
bfe0bc1c6b new package: py-lcls-krtc (#37263)
* new package: py-lcls-krtc

* new package: py-pykerberos

* py-lcls-krtc: ^py-pykerberos for link
2023-05-01 23:21:54 -04:00
Alec Scott
1d2e30b8b2 heaptrack: add v1.3.0 (#37327) 2023-05-01 23:16:57 -04:00
Alec Scott
e11f635174 mixcr: add v4.3.2 (#37336) 2023-05-01 23:16:39 -04:00
Alec Scott
611c24c01c c-blosc: add v1.21.2 (#37317) 2023-05-01 23:16:24 -04:00
Alec Scott
2f49e20b12 nginx: add v1.24.0 (#37339) 2023-05-01 23:10:44 -04:00
Alec Scott
409bba7cf9 alembic: add v1.8.5 (#37315) 2023-05-01 23:05:33 -04:00
Alec Scott
378aa835f8 tippecanoe: add v1.36.0 (#37313) 2023-05-01 23:05:12 -04:00
Alec Scott
2500443f11 mpi-bash: add v1.3 (#37337) 2023-05-01 23:04:48 -04:00
Satish Balay
f5c32d57e0 petsc, py-petsc4py: add v3.19.1 (#37356) 2023-05-01 18:58:03 -04:00
Benjamin Meyers
ba6dadf760 Update py-editdistance@0.6.2 (#37365)
* Update py-editdistance@0.6.2

* [@spackbot] updating style on behalf of meyersbs
2023-05-01 18:33:37 -04:00
Alec Scott
83535ed503 console-bridge: add v1.0.2 (#37321) 2023-05-01 14:53:59 -07:00
Alec Scott
051d668ca4 dropwatch: add v1.5.4 (#37322) 2023-05-01 14:50:00 -07:00
Alec Scott
3029e943b1 flibcpp: add v1.0.2 (#37323) 2023-05-01 14:47:47 -07:00
Alec Scott
3d48bd88d3 helib: add v2.2.2 (#37328) 2023-05-01 14:42:12 -07:00
Alec Scott
fcf9068a04 hunspell: add v1.7.2 (#37329) 2023-05-01 14:40:40 -07:00
Alec Scott
724c34db5f procenv: add v0.60 (#37343) 2023-05-01 14:05:40 -07:00
Alec Scott
1b79725229 restic: add v0.15.2 (#37345) 2023-05-01 13:59:36 -07:00
Alec Scott
06e6c341e4 shapeit4: add v4.2.2 (#37347) 2023-05-01 13:56:26 -07:00
Adam J. Stewart
a2fc3dbfc5 py-sphinx: add v7.0.0 (#37352) 2023-05-01 12:17:50 -07:00
Alec Scott
fbda3e23ac scons: add v4.5.2 (#37346) 2023-05-01 11:12:09 -04:00
Massimiliano Culpo
3c3a4c7577 Factor YAML manifest manipulation out of the Environment class (#36927)
Change the signature of the Environment.__init__ method to have
a single argument, i.e. the directory where the environment manifest 
is located. Initializing that directory is now delegated to a function 
taking care of all the error handling upfront. Environment objects 
require a "spack.yaml" to be available to be constructed.

Add a class to manage the environment manifest file. The environment 
now delegates to an attribute of that class the responsibility of keeping
track of changes modifying the manifest. This allows simplifying the 
updates of the manifest file, and helps keeping in sync the spec lists in
memory with the spack.yaml on disk.
2023-05-01 15:06:10 +02:00
eugeneswalker
cfb34d19fe Revert "new package: roentdek (#37265)" (#37355)
This reverts commit fec20bb567.
2023-05-01 08:36:36 +02:00
snehring
8183210e59 Feature/topiary (#37157)
* generax: adding new package generax

* muscle5: adding new package muscle5

* py-custom-inherit: adding new package py-custom-inherit

* py-ete3: adding new package py-ete3

* py-itolapi: adding new package py-itolapi

* py-opentree: adding new package py-opentree

* py-pypng: adding new package py-pypng

* py-toyplot: adding new package py-toyplot

* py-toytree: adding new package py-toytree

* py-pastml: adding new package py-pastml

* raxml-ng: adding new version 1.1.0

* py-topiary: adding new package py-topiary

* generax: adding master branch version
generax: adding version 2.0.1
generax: add mpi variant

* py-topiary: add main

* generax: correcting commit for 2.0.1

* Update var/spack/repos/builtin/packages/py-itolapi/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update var/spack/repos/builtin/packages/py-opentree/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* py-topiary-asr: rename package, requested changes.

* Update var/spack/repos/builtin/packages/py-topiary-asr/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-04-29 17:52:32 -04:00
Manuela Kuhn
896a81ba35 py-nipype: add 1.8.6 (#37279)
* py-nipype: add 1.8.6

* Exchange test dep with skip_modules
2023-04-29 15:56:15 -05:00
Greg Becker
21cadf96e0 Spec.format: fix bug in dependency hash formatting (#37073)
Co-authored-by: becker33 <becker33@users.noreply.github.com>
2023-04-28 23:33:05 +02:00
eugeneswalker
cceeb96e06 new package: py-amityping (#37262) 2023-04-28 13:51:52 -05:00
Alec Scott
7853ffc881 libmaxminddb: add v1.7.1 (#37185) 2023-04-28 11:08:21 -07:00
Victor Brunini
4363b1c2dc superlu-dist: do not discard cflags from Spack (#37260)
Make sure to append additional flags needed for specific compilers
in the flag_handler instead of adding them as separate cmake define
lines that override the main spack cflags.
2023-04-28 09:37:58 -04:00
Harmen Stoppels
c85877566f Reduce the number of stat calls in "spack verify" (#37251)
Spack comes to a crawl post-install of nvhpc, which is partly thanks to
this post install hook which has a lot of redundancy, and isn't correct.

1. There's no need to store "type" because that _is_ "mode".
2. There are more file types than "symlink", "dir", "file".
3. Don't checksum device type things
4. Don't run 3 stat calls (exists, stat, isdir/islink), but one lstat
   call
5. Don't read entire files into memory

I also don't know why `spack.crypto` wasn't used for checksumming, but I
guess it's too late for that now. Finally md5 would've been the faster
algorithm, which would've been fine given that a non cryptographicall
checksum was used anyways.
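
A sketch of the single-`lstat` approach described in point 4 (plain `os`/`stat`;
not the verbatim hook):

```python
import os
import stat


def classify(path):
    st = os.lstat(path)  # one syscall instead of exists() + stat() + isdir()/islink()
    mode = st.st_mode    # "type" is derivable from the mode, so don't store both
    if stat.S_ISLNK(mode):
        return "link"
    if stat.S_ISDIR(mode):
        return "dir"
    if stat.S_ISREG(mode):
        return "file"    # only regular files are worth checksumming
    return "other"       # devices, sockets, fifos: skip checksums entirely
```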
2023-04-28 13:24:24 +00:00
dale-mittleman
fc201a2b75 Fixing patch version constraint for llvm external ncurses patch (#37145) 2023-04-28 15:10:03 +02:00
Massimiliano Culpo
ddd191b1c0 libxml2: fix test method (#37242)
This was discovered using #34236
2023-04-28 13:45:16 +02:00
eugeneswalker
fec20bb567 new package: roentdek (#37265) 2023-04-28 04:35:56 -07:00
eugeneswalker
63869bba47 new package: py-pyabel (#37264) 2023-04-28 04:31:18 -07:00
eugeneswalker
eacf11e6cc new package: xtcdata (#37261) 2023-04-28 04:31:06 -07:00
snehring
7bd987aa5b sentieon-genomics: adding new version 20211207. (#37223) 2023-04-28 12:33:22 +02:00
Jonathon Anderson
cba8d1253d Add container images supporting RHEL alternatives (#36713)
Add container support for AlmaLinux, Fedora 37 and 38 and Rocky Linux
2023-04-28 12:28:33 +02:00
Andrew W Elble
c87cc5c7b1 kicad: new version 7.0.2 (#37228) 2023-04-28 12:25:28 +02:00
Richard Berger
4a7893bed6 slurm: add version 22-05-8-1 and 23-02-1-1 (#37238) 2023-04-28 12:24:10 +02:00
Adam J. Stewart
44196085f6 py-psycopg2: add v2.9.6 (#37235) 2023-04-28 11:50:27 +02:00
John Jolly
076660804d bricks: Fix package to properly find spack opencl-clhpp (#37239)
The bricks package uses header from the opencl-clhpp package when built with
the cuda variant activated. In order to find the header files, the bricks
CMakeLists.txt uses the `find_package(OpenCL 2.0)` statement. The CMake
FindOpenCL module searches several paths to find the header files. Eventually
it will search for header files in the local /usr/include directories. If
OpenCL headers are found, but CUDA is not installed locally, then the build
will fail.

One of the CMake variables searched for a path to the OpenCL headers is
OCL_ROOT. This fix utilizes the OCL_ROOT variable to identify the correct path
to the installed opencl-clhpp package within Spack. Also, if the cuda variant is
not used, then the OpenCL build is disabled to prevent a build failure due to
improperly-identified locally-installed OpenCL header files.

The default behavior of the build process has not changed. External variable
definitions must be made to activate these features. Specifically, to disable
the OpenCL build, this flag must be provided to CMake:

    -DBRICK_USE_OPENCL=OFF

The Spack build process explicitly uses this option unless the cuda variant is
specified. If the cuda variant is specified, then the BRICK_USE_OPENCL variable
is set to ON and the OCL_ROOT variable is set to the path of the opencl-clhpp
include directory.
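
A sketch of that recipe logic (hypothetical `cmake_args`; assumes a `+cuda`
variant and the `opencl-clhpp` dependency named above):

```python
def cmake_args(self):
    # Hypothetical sketch of the behavior described in this message.
    if self.spec.satisfies("+cuda"):
        return [
            self.define("BRICK_USE_OPENCL", True),
            # Point CMake's FindOpenCL at Spack's opencl-clhpp, not /usr/include.
            self.define("OCL_ROOT", self.spec["opencl-clhpp"].prefix),
        ]
    return [self.define("BRICK_USE_OPENCL", False)]
```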
2023-04-28 11:37:22 +02:00
Alec Scott
e57db5c528 apfel: add v3.0.6 (#37267) 2023-04-28 11:18:05 +02:00
Alec Scott
01cdc4f960 cmor: add v3.7.2 (#37268) 2023-04-28 11:17:44 +02:00
Alec Scott
93e88853c6 figtree: add v1.4.4 (#37269) 2023-04-28 11:17:31 +02:00
Alec Scott
561c192e30 fping: add v5.1 (#37270) 2023-04-28 11:17:16 +02:00
Alec Scott
65c625a44f gapbs: add v1.4 (#37271) 2023-04-28 11:16:55 +02:00
Alec Scott
08291cb63d glfw: add v3.3.8 (#37272) 2023-04-28 11:16:41 +02:00
Alec Scott
dcd0f8d252 jemalloc: add v5.3.0 (#37273) 2023-04-28 11:14:12 +02:00
Alec Scott
fa0367691e sandbox: add v2.25 (#37274) 2023-04-28 11:13:56 +02:00
Alec Scott
e3a76bf2a9 snap-berkeley: add v2.0.3 (#37275) 2023-04-28 11:13:39 +02:00
Alec Scott
e2f9cdfbdc tcpdump: add v4.99.4 (#37276) 2023-04-28 11:13:22 +02:00
Alec Scott
1eb4e30d28 wps: add v4.5 (#37277) 2023-04-28 11:12:28 +02:00
Sangu Mbekelu
2f529a7320 py-subword-nmt (#37161)
* "new py-subword-nmt package"

* [@spackbot] updating style on behalf of Sangu-Mbekelu

* Update package.py

updating package based on review

* [@spackbot] updating style on behalf of Sangu-Mbekelu

---------

Co-authored-by: Sangu Mbekelu <s.mbekelu9@gmail.com>
2023-04-28 00:22:45 -04:00
Manuela Kuhn
cb4234b971 py-virtualenv: add 20.22.0 (#37259)
* py-virtualenv: add 20.22.0

* [@spackbot] updating style on behalf of manuelakuhn

* Fix dependency versions for release 20.22.0

* Remove python version restrictions

* [@spackbot] updating style on behalf of manuelakuhn

* py-platformdirs: add 3.5.0

* py-filelock: add 3.12.0

* Fix dependency bound for py-platformdirs

* py-importlib-metadata: add 6.6.0
2023-04-27 22:28:18 -05:00
Adam J. Stewart
ee7cdb8a68 macOS: use Apple GL/GLU by default (#36618)
* macOS: use Apple GL/GLU by default

* Use CLT instead

* Use CLT instead

* Undo change to libuuid
2023-04-27 21:54:48 -05:00
eugeneswalker
7dc0bf5fcb legion: add versions up to 23.03.0 (#37257)
* legion: add versions up to 23.03.0

* add maintainer
2023-04-27 19:17:42 -04:00
Chris Green
d01b542df7 [py-breathe] New version 4.35.0 (#37233)
* [py-breathe] New version 4.35.0

* Update var/spack/repos/builtin/packages/py-breathe/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-04-27 13:57:31 -04:00
Thomas-Ulrich
c59eb8cdea fixes #29350 by enabling headers variant in binutils when compiling llvm with gold (#37245) 2023-04-27 09:53:45 -07:00
Manuela Kuhn
ed34c4a004 py-memory-profiler: add 0.61.0 (#37248) 2023-04-27 11:38:08 -05:00
Chris Green
7cbaf2ff56 [py-sphinx-design] New versions 0.4.0, 0.4.1 (#37234)
* [py-sphinx-design] New versions 0.4.0, 0.4.1

* conflicts() -> depends_on()

Per @adamjstewart

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-04-27 11:37:23 -05:00
Massimiliano Culpo
cd851e173d gcc: add v13.1.0 (#37230)
version 13.1 builds with apple-clang@14.0.3
2023-04-27 16:41:44 +02:00
Harmen Stoppels
8be2f017e7 gcc: no need to special case macos/linux wrt rpaths (#37243) 2023-04-27 15:52:28 +02:00
Michael Kuhn
f6464abfcb Fix pkgconfig dependencies (#37236)
Packages shouldn't typically depend on pkg-config or pkgconf but on the
virtual provider pkgconfig.
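
In recipe terms (the usual idiom: `pkgconfig` is the virtual, `pkg-config` and
`pkgconf` are its providers):

```python
# Preferred: let the concretizer pick any provider of the virtual.
depends_on("pkgconfig", type="build")

# Avoid: needlessly pins one concrete provider.
# depends_on("pkg-config", type="build")
# depends_on("pkgconf", type="build")
```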
2023-04-27 10:04:22 +02:00
Alec Scott
ce81f15e6f r-covr: add v3.6.2 (#37194) 2023-04-27 02:22:29 -04:00
Wouter Deconinck
4d597bdecc py-gssapi: depends_on krb5 (#37227)
This dependency was somehow overlooked (but is pretty crucial) because it doesn't appear in the python requirements...
2023-04-26 22:35:15 -04:00
Manuela Kuhn
62ea6bed89 py-bidscoin: add 4.0.0 and py-pydeface as new dep package (#37231) 2023-04-26 20:58:57 -04:00
Manuela Kuhn
742fbd458d py-mypy: add 1.2.0 (#37224) 2023-04-26 20:54:11 -04:00
Manuela Kuhn
10b6651d35 py-flake8: add 6.0.0 and update dependencies (#37222) 2023-04-26 20:53:49 -04:00
Alec Scott
3f3fc804c6 r-afex: add v1.2-1 (#37188) 2023-04-26 19:36:53 -05:00
Alec Scott
c7d586d03a r-argparse: add v2.2.2 (#37189) 2023-04-26 19:36:11 -05:00
Alec Scott
b17feb9b06 r-bit: add v4.0.5 (#37190) 2023-04-26 19:35:30 -05:00
Alec Scott
2e4a9f1abf r-loo: add v2.6.0 (#37205)
* r-loo: add v2.6.0

* Split r dependency from other deps to align with common r package style
2023-04-26 19:34:19 -05:00
Alec Scott
117d374a61 r-bookdown: add v0.33 (#37191)
* r-bookdown: add v0.33

* Add r@3.5.0: dependency to package
2023-04-26 19:33:18 -05:00
Alec Scott
67cf37d750 r-compositions: add v2.0-6 (#37192)
* r-compositions: add v2.0-6

* Split r dependecies from other r packages to match common format
2023-04-26 19:32:15 -05:00
Alec Scott
6e276ecb4c r-convevol: add v2.0.0 (#37193) 2023-04-26 19:31:34 -05:00
Alec Scott
8c6a3f15e5 r-desolve: add v1.35 (#37195) 2023-04-26 19:29:14 -05:00
Alec Scott
65bbd2a1e6 r-fastmap: add v1.1.1 (#37196) 2023-04-26 19:27:49 -05:00
Alec Scott
426f8e987b r-formatr: add v1.14 (#37197) 2023-04-26 19:27:11 -05:00
Alec Scott
a78e061117 r-ggvis: add v0.4.8 (#37198) 2023-04-26 19:26:24 -05:00
Alec Scott
17e14757e2 r-glmnet: add v4.1-7 (#37199)
* r-glmnet: add v4.1-7

* Split r dependency from other deps to follow common format
2023-04-26 19:25:42 -05:00
Alec Scott
2bb3553f24 r-insight: add v0.19.1 (#37200) 2023-04-26 19:24:55 -05:00
Alec Scott
6267a1f68d r-interp: add v1.1-4 (#37201)
* r-interp: add v1.1-4

* split r dep from other dependencies to match common format
2023-04-26 19:24:08 -05:00
Alec Scott
666ceb998c r-jomo: add v2.7-6 (#37202) 2023-04-26 19:23:09 -05:00
Alec Scott
e4cc4018c1 r-lattice: add v0.21-8 (#37204)
* r-lattice: add v0.21-8

* Enforce a higher version of R as according to cran
2023-04-26 19:12:26 -05:00
Alec Scott
5551b7a506 r-magic: add v1.6-1 (#37206) 2023-04-26 19:06:27 -05:00
Alec Scott
facc93a30e r-maptools: add v1.1-6 (#37207) 2023-04-26 19:05:29 -05:00
Alec Scott
b3101d1c85 r-meta: add v6.2-1 (#37208) 2023-04-26 19:04:07 -05:00
H. Joe Lee
0d7890baa5 spdk: add a new package (#35520)
* spdk: add a new package
* chore: fix formatting and style
* fix: add rdma-core dependency
* fix: remove spdk < 23.01 versions per @soumagne review
* spdk: add 22.01.1 version
* spdk: address @soumagne reviews
* spdk: fix fio audit failure
* spdk: fix fio version and remove debugging info
2023-04-26 16:38:14 -07:00
Wouter Deconinck
d43ae9fa10 scitokens-cpp: depends_on pkgconfig (#37168)
A missing dependency, showing up as:
```console
#24 1113.1 3 errors found in build log:
#24 1113.1      20    -- Check for working CXX compiler: /opt/spack/lib/spack/env/gcc/g++ 
#24 1113.1            - skipped
#24 1113.1      21    -- Detecting CXX compile features
#24 1113.1      22    -- Detecting CXX compile features - done
#24 1113.1      23    -- Found CURL: /opt/software/linux-debian-x86_64_v3/gcc-12.2.0/curl-
#24 1113.1            7.85.0-ca5zznsdkp2hwbnikha6sgqzcpuw74sc/lib/libcurl.so (found versio
#24 1113.1            n "7.85.0")
#24 1113.1      24    -- Found UUID : /opt/software/linux-debian-x86_64_v3/gcc-12.2.0/util
#24 1113.1            -linux-uuid-2.38.1-45xrjgjjauvev5qarwji7xgmlnhqlzkh/lib/libuuid.so
#24 1113.1      25    -- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
#24 1113.1   >> 26    CMake Error at /opt/software/linux-debian-x86_64_v3/gcc-12.2.0/cmake
#24 1113.1            -3.24.3-6doi3daa6ihw6mk4zbk5oiiuiaahsny7/share/cmake-3.24/Modules/Fi
#24 1113.1            ndPkgConfig.cmake:663 (message):
#24 1113.1      27      pkg-config tool not found
#24 1113.1      28    Call Stack (most recent call first):
#24 1113.1      29      /opt/software/linux-debian-x86_64_v3/gcc-12.2.0/cmake-3.24.3-6doi3
#24 1113.1            daa6ihw6mk4zbk5oiiuiaahsny7/share/cmake-3.24/Modules/FindPkgConfig.c
#24 1113.1            make:829 (_pkg_check_modules_internal)
#24 1113.1      30      CMakeLists.txt:39 (pkg_check_modules)
#24 1113.1      31    
#24 1113.1      32    
```

Ref: https://github.com/scitokens/scitokens-cpp/blob/v1.0.0/CMakeLists.txt#L32
2023-04-27 00:02:55 +02:00
Trevor [LAS]
055e5abc93 py-ipyrad (#37160)
* Adding py-ipyrad for testing

* py-ipyrad: placating flake8

* py-ipyrad: adding version 0.9.90, fixing hard coded path.

* py-ipyrad: use join_path instead of hard coded linux path

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* py-ipyrad: Removing unneeded dependencies

* py-ipyrad: Readded future (see ipyrad setup.py)

* py-ipyrad: Switch to an anchored link in the docs

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* py-ipyrad: Removed patch decorator

---------

Co-authored-by: snehring <snehring@iastate.edu>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-04-26 17:04:57 -04:00
Michael Kuhn
16f86c7f96 rocksdb: add 8.1.1 (#37169) 2023-04-26 10:07:42 -07:00
Michael Kuhn
0292568f97 libbson, mongo-c-driver: add 1.23.3 (#37175) 2023-04-26 10:05:45 -07:00
Gerhard Theurich
61c24dc7c3 esmf: add v8.4.2 (#37183) 2023-04-26 10:03:13 -07:00
Alec Scott
cbb8900e6e genrich: add v0.6.1 (#37184) 2023-04-26 10:01:59 -07:00
Alec Scott
6084b9be5b openwsman: add v2.7.2 (#37186) 2023-04-26 09:59:21 -07:00
Alec Scott
e4eba24191 shortstack: add v4.0.1 (#37187) 2023-04-26 09:58:27 -07:00
Alec Scott
aa0b013efe abyss: add v2.3.5 (#37209) 2023-04-26 09:28:48 -07:00
Alec Scott
9d49664679 advancecomp: add v2.5 (#37210) 2023-04-26 09:26:04 -07:00
Alec Scott
8c4bd595f3 barrnap: add v0.9 (#37212) 2023-04-26 09:19:27 -07:00
Alec Scott
8669844084 check: add v0.15.2 (#37216) 2023-04-26 09:18:49 -07:00
Alec Scott
2477fe2ff7 bowtie: add v1.3.1 (#37213) 2023-04-26 09:18:04 -07:00
Alec Scott
3f7ea01e9d faiss: add v1.7.4 (#37217) 2023-04-26 09:15:52 -07:00
Alec Scott
c334a16a2f nanoflann: add v1.4.3 (#37218) 2023-04-26 09:13:52 -07:00
Alec Scott
4fec857504 pestpp: add v5.2.3 (#37219) 2023-04-26 09:12:52 -07:00
Alec Scott
f859da6119 rinetd: add v0.73 (#37220) 2023-04-26 09:11:37 -07:00
Alec Scott
1c19eccf29 specfem3d-globe: add v8.0.0 (#37221) 2023-04-26 09:10:22 -07:00
Adam J. Stewart
1b7bf9a95b py-lightly: add v1.4.3 (#37178) 2023-04-26 17:47:40 +02:00
Adam J. Stewart
8e7b2c999a py-sphinx: add v6.2.1 (#37177) 2023-04-26 17:46:55 +02:00
Manuela Kuhn
dec0c540e9 py-wesanderson: add new package (#37182) 2023-04-26 10:29:20 -05:00
Adam J. Stewart
d74b02f59a py-tensorflow: add v2.11–2.12 (#36263) 2023-04-26 08:03:47 -05:00
Richard Berger
37335f8fcf flecsi: add 2.2.0 release, cleanup (#37158)
* flecsi: add 2.2.0 release, cleanup

* flecsi: deprecate 1.x versions

* flecsi: add missing 1.4.1 release

* flecsi: add missing 2.0.0 release
2023-04-26 05:52:15 -07:00
Thomas Madlener
eb4552542d elfutils package: fix +debuginfod installation issue (#36758)
* elfutils cannot build against libarchive@3.62+iconv 
* elfutils needs libmicrohttpd version 0.9.50 or older
* elfutils: explicitly depend on pkg-config
* libmicrohttpd: Add several new versions
2023-04-25 22:57:19 -07:00
Benjamin Meyers
b5f546b72b New: py-ax-platform; Update: py-botorch, py-gpytorch, py-linear-operator, py-pyro-ppl, py-typeguard (#37143) 2023-04-25 21:53:12 -04:00
Alec Scott
b2fb851d18 gmp: Add patch for MacOS M1/M2 machines to fix segfault (#37166)
* Add patch for MacOS M1/M2 machines to fix segfault when using gmp

* Update var/spack/repos/builtin/packages/gmp/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Restrict patch to v6.2.1

* Update var/spack/repos/builtin/packages/gmp/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-04-25 20:43:15 -04:00
Simon Frasch
f4b0510156 spla: version 1.5.5 (#37173) 2023-04-25 20:03:59 -04:00
Austin McCartney
a371e1e645 Narrow patch diff context for zziplib package (#29236)
The patch for the zziplib package applied for version 0.13.69 and
earlier includes a reference to Creative Commons
Attribution-NonCommercial-ShareAlike 2.0 Generic license, which
causes Flexera's ~expensive perl script~ FlexNet Code Insights open
source license and compliance tool to flag Spack as a non-commercial
product. This patch narrows the diff context in the patch to exclude
this text. The semantics of the patch file are unchanged.
2023-04-25 19:58:18 -04:00
Cyrus Harrison
80881f9119 add ascent 0.9.1 release (#37139)
* add ascent 0.9.1 release
* fix typo
2023-04-25 12:54:24 -07:00
Adam J. Stewart
c82594de21 py-scipy: PyPI-based Python version support (#37137) 2023-04-25 12:42:24 -07:00
Adam J. Stewart
eb15d49d61 py-numpy: PyPI-based Python version support, add v1.24.3 (#37136)
* py-numpy: add v1.24.3
* PyPI-based Python version support
* Sort in reverse order
2023-04-25 10:55:10 -07:00
Harmen Stoppels
97beb2658b require: do not allow additional properties (#37174) 2023-04-25 11:52:51 +02:00
eugeneswalker
75bf6d665c e4s ci: add xyce (#36841)
* e4s ci: add xyce

* relax trilinos constraints for xyce

* also relax trilinos constraint for e4s-power stack

* allow trilinos~shylu for xyce
2023-04-24 16:32:01 -07:00
eugeneswalker
c76023d7ac e4s ci: add mgard (#36584) 2023-04-24 23:35:40 +02:00
Alec Scott
6edfc07092 megadock: add v4.1.1 (#37154) 2023-04-24 14:04:31 -05:00
Alec Scott
a6b2a713d7 sdl2-image: add v2.6.3 (#37155) 2023-04-24 12:02:54 -07:00
Carlos Bederián
416b570825 gromacs: rework plumed support (#37013) 2023-04-24 20:59:45 +02:00
Alec Scott
28ad553856 libgit2: add v1.6.4 (#37153) 2023-04-24 11:56:27 -07:00
Alec Scott
a17f5efa75 libcyaml: add v1.4.0 (#37152) 2023-04-24 10:58:14 -07:00
Alec Scott
c8b21f43eb imath: add v3.1.7 (#37151) 2023-04-24 10:51:57 -07:00
Alec Scott
b1be17144b glab: add v1.28.1 (#37150) 2023-04-24 13:37:52 -04:00
Juan Miguel Carceller
977ec0bd91 Extend patch to 1.82.0 (#37138)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-04-24 09:33:13 -07:00
Adam J. Stewart
4d58eaaf15 py-sphinx: add v6.2.0 (#37140) 2023-04-24 09:28:17 -07:00
Seth R. Johnson
3092bd2559 silo: loosen hdf5 version requirements (#37146)
We have successfully been building silo@4.10.2 against hdf5@1.10.4 for
some time. Refinement of #34275 (which was concerned with 4.11 but
unnecessarily restricted 4.10).
2023-04-24 09:23:16 -07:00
Adam J. Stewart
48347bbd7c py-pandas: add v2.0.1 (#37148) 2023-04-24 09:20:44 -07:00
Adam J. Stewart
07dbbff728 py-lightning: add v2.0.2 (#37149) 2023-04-24 09:19:49 -07:00
Adam J. Stewart
b5d66e0144 MXNet: add v1.9 (#36590) 2023-04-24 11:13:09 -05:00
eugeneswalker
81d3df3289 gptune: add new v4.0.0; relax py-numpy requirement to align with requirements.txt (#37141) 2023-04-24 08:48:46 -07:00
Anton Kozhevnikov
d8169ba440 do not use device_alloc (#37147) 2023-04-24 17:34:35 +02:00
Adam J. Stewart
7df2865dce Copy more logs to CI artifacts (#36783)
* Copy more logs to CI artifacts

* Trigger rebuilds again

* Remove test variant
2023-04-24 10:08:30 -05:00
Alec Scott
e69d693727 graphviz: add v8.0.1 (#36724) 2023-04-24 10:40:39 +02:00
Alec Scott
af9c466d86 libdap4: add v3.20.6 (#37112) 2023-04-24 10:40:12 +02:00
Alec Scott
43089a3931 libconfig: add v1.7.3 (#37111) 2023-04-24 10:39:56 +02:00
Alec Scott
1638821831 libcap-ng: add v0.8.3 (#37110) 2023-04-24 10:39:44 +02:00
Alec Scott
97c03b4c31 hohqmesh: add v1.3.0 (#37108) 2023-04-24 10:39:31 +02:00
Alec Scott
405ab9551f highway: add v1.0.4 (#37107) 2023-04-24 10:39:10 +02:00
Alec Scott
edd44032fd grep: add v3.10 (#37106) 2023-04-24 10:38:53 +02:00
Alec Scott
fad766e8d0 gpgme: add v1.20.0 (#37105) 2023-04-24 10:38:00 +02:00
Alec Scott
932195a913 fio: add v3.34 (#37104) 2023-04-24 10:37:49 +02:00
Alec Scott
a09fcca192 fdupes: add v2.2.1 (#37103) 2023-04-24 10:37:37 +02:00
Alec Scott
df9bd3c439 cpp-argparse: add v2.9 (#37102) 2023-04-24 10:37:25 +02:00
Alec Scott
f5b1d1f494 codec2: add v1.0.5 (#37101) 2023-04-24 10:37:13 +02:00
Alec Scott
43a069a60f bracken: add v2.8 (#37099) 2023-04-24 10:36:33 +02:00
Alec Scott
f3ad153dcd beast2: add v2.6.7 (#37098) 2023-04-24 10:36:21 +02:00
Alec Scott
794ab80086 args: add v6.4.6 (#37097) 2023-04-24 10:36:06 +02:00
Alec Scott
8fc8e8019a apktool: add v2.7.0 (#37096) 2023-04-24 10:35:52 +02:00
Alec Scott
6880bfd412 lis: add v2.1.1 (#37113) 2023-04-24 10:34:58 +02:00
Alec Scott
90e31c39cc maeparser: add v1.3.1 (#37114) 2023-04-24 10:34:44 +02:00
Alec Scott
6d1969e4d7 mrcpp: add v1.4.2 (#37115) 2023-04-24 10:34:30 +02:00
Alec Scott
90b4753f72 msmc2: add v2.1.4 (#37116) 2023-04-24 10:34:16 +02:00
Alec Scott
9285f591cc opam: add v2.1.3 (#37117) 2023-04-24 10:34:02 +02:00
Alec Scott
cf2ca0c9f3 openal-soft: add v1.23.1 (#37118) 2023-04-24 10:33:48 +02:00
Alec Scott
dffa5c56c8 poke: add v3.1 (#37120) 2023-04-24 10:33:35 +02:00
Alec Scott
302855ea86 pugixml: add v1.13 (#37121) 2023-04-24 10:33:20 +02:00
Alec Scott
a747a6e1f1 rabbitmq-c: add v0.13.0 (#37122) 2023-04-24 10:32:42 +02:00
Alec Scott
9e45571ebc rsem: add v1.3.3 (#37123) 2023-04-24 10:32:22 +02:00
Alec Scott
dae888b981 sina: add v1.13.0 (#37124) 2023-04-24 10:32:09 +02:00
Alec Scott
d98e129931 speexdsp: add v1.2.1 (#37125) 2023-04-24 10:31:53 +02:00
Alec Scott
a6b9f88b6c string-view-lite: add v1.7.0 (#37126) 2023-04-24 10:31:39 +02:00
Alec Scott
6412b7ac5d transposome: add v0.12.1 (#37127) 2023-04-24 10:31:10 +02:00
Alec Scott
410f00e36d tree-sitter: add v0.20.8 (#37128) 2023-04-24 10:30:36 +02:00
Alec Scott
240d40e159 units: add v2.22 (#37129) 2023-04-24 10:30:22 +02:00
Alec Scott
b34a354efe vc: add v1.4.3 (#37130) 2023-04-24 10:30:02 +02:00
Alec Scott
2b3939a65e vtk: add v9.2.6 (#37131) 2023-04-24 10:29:45 +02:00
Alec Scott
2be4200455 zstr: add v1.0.7 (#37132) 2023-04-24 10:29:21 +02:00
Wouter Deconinck
a3354a547f py-numpy: set openblas symbol_suffix in site.cfg (#37134)
* py-numpy: set openblas `symbol_suffix` in site.cfg

This writes the correct `symbol_suffix` variant value from the `openblas` in the spec into the `site.cfg`. Fixes #37133.

* py-numpy: fix style

* py-numpy: handle symbol_suffix == "none"
2023-04-23 09:27:26 -04:00
Andrew W Elble
8143c086a0 qgis: unpin qca, update qca (#37049)
allows compilation with gcc 11
2023-04-22 17:41:39 -05:00
Wouter Deconinck
806f630c8f (py-)codecov: not on pypi anymore, replace by static binary (#36809)
* py-codecov: deprecate since not on pypi anymore

* codecov: new package

* [@spackbot] updating style on behalf of wdconinc

* codecov: use github URL instead, multi-platform

* fix: install to prefix.bin.codecov

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* codecov: use versions lookup dict

* codecov: versions -> _versions, fix style

---------

Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-04-22 11:01:15 -05:00
Alec Scott
9d2aab415d bubblewrap: add v0.8.0 (#37100) 2023-04-22 15:40:01 +02:00
Erik Schnetter
a5d1c645a0 hwloc: New version 2.9.1 (#37084) 2023-04-22 04:22:37 -04:00
Erik Schnetter
d443a44854 openblas: New version 0.3.23 (#36986)
Silence make

Set a fixed and large NUM_THREADS by default, to avoid it being initialized with the host CPU count.

Set OMP_NUM_THREADS/OPENBLAS_NUM_THREADS in terms of make_jobs so that tests don't need excessive CPU.

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2023-04-22 09:02:28 +02:00
Wouter Deconinck
6808df5729 opencascade: new version 7.7.1 (#37057)
No build system or dependency changes necessary.

New maintenance release announcement at https://www.opencascade.com/open-cascade-technology-7-7-1-maintenance-release/.

Full diff from 7.7.0 to 7.7.1 at https://git.dev.opencascade.org/gitweb/?p=occt.git;a=commitdiff;h=ffce0d66bbaafe3a95984d0e61804c201b9995d2;hp=185d29b92f6764ffa9fc195b7dbe7bba3c4ac855
2023-04-21 22:11:18 -07:00
Adam J. Stewart
6e4e7ab8b1 GDAL: add v3.6.4 (#37087) 2023-04-21 15:18:03 -07:00
John W. Parent
d8451b0c3f Windows: shell variables are case-insensitive (#36813)
If we modify both Path and PATH, on Windows they will clobber one
another. This PR updates the shell modification logic to automatically
convert variable names to upper-case on Windows.
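
A minimal sketch of the normalization (illustrative; not the exact Spack helper):

```python
import sys


def normalize_env_var_name(name: str) -> str:
    # On Windows the environment is case-insensitive, so "Path" and "PATH"
    # would clobber one another; normalize to upper-case there.
    return name.upper() if sys.platform == "win32" else name
```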
2023-04-21 11:38:58 -07:00
Alec Scott
255c9ed5e9 librdkafka: add v2.1.0 (#36853) 2023-04-21 10:52:01 -07:00
Keita Iwabuchi
bd8e27503e Metall: add v0.24 and v0.25 (#36981)
* Metall package: add v0.22, v0.23, and v0.23.1
* Metall package: add v0.24 and v0.25
* Metall package: increase required Boost version
2023-04-21 10:50:01 -07:00
Bryce Torcello
459c5cfad6 r: fix building :3.6.1 with gcc 10: (#37080) 2023-04-21 11:43:09 -05:00
Robert Underwood
ff689be250 py-cupy allow customizing architecture and threads (#37072)
* py-cupy allow customizing architecture and threads

* Update var/spack/repos/builtin/packages/py-cupy/package.py

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>

* add missing self

---------

Co-authored-by: Robert Underwood <runderwood@anl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2023-04-21 09:52:47 -05:00
eflumerf
aaac8b0545 xmlrpc-c: Add variant to enable curl client (#37075)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2023-04-21 16:50:10 +02:00
Tim Haines
066e1e083d Dyninst: add conflict for intel-parallel-studio (#37036) 2023-04-21 16:49:23 +02:00
John W. Parent
5c742d4f2b Harden Spack's powershell interface (#37079)
Paths with spaces are an issue on Windows and our current powershell
scripts are not sufficiently hardened against their use.
This PR removes problematic cmdlets that do not work well with paths
with spaces and adds escape quotes in other areas where this could be an
issue.
2023-04-21 08:58:37 -05:00
markus-ferrell
c64ca97877 Enable verify tests on windows (#36975) 2023-04-21 14:32:33 +02:00
markus-ferrell
cd4dddbef1 Enable versions cmd tests on windows (#36974) 2023-04-21 14:31:14 +02:00
markus-ferrell
e77b1da772 Enable test suite tests on windows (#36966) 2023-04-21 14:18:06 +02:00
markus-ferrell
e1e8d3b66e Enable database tests for windows (#36968) 2023-04-21 14:15:29 +02:00
markus-ferrell
962df6334d Enable config values tests on windows (#36969) 2023-04-21 14:15:09 +02:00
markus-ferrell
ba255cf5ec Enable graph tests on windows (#36967) 2023-04-21 14:14:51 +02:00
Annop Wongwathanarat
2ca049e74a armpl-gcc: fix PKG_CONFIG_PATH for 23.04 (#37001) 2023-04-21 14:10:03 +02:00
snehring
cd2893640d interproscan: add version 5.61-93.0, fix build issue (#37009) 2023-04-21 14:05:23 +02:00
Glenn Johnson
b686974160 augustus: add version 3.5.0 (#37037)
Adjust dependencies and constraints.
2023-04-21 12:01:00 +02:00
Eric Brugger
3303dcbfbd VisIt: add v3.3.3. (#37034) 2023-04-21 11:59:20 +02:00
Glenn Johnson
950079845c zziplib: support multiple build systems (#37058)
This PR rewrites the zziplib recipe to use Spack's multiple build system
support.
2023-04-21 11:58:00 +02:00
Brian Vanderwende
57b56499b2 Add flag handler for mpi-serial (#37060) 2023-04-21 11:55:30 +02:00
Glenn Johnson
00715a66a1 arpack-ng: add version 3.9.0 (#37032) 2023-04-21 11:43:33 +02:00
dependabot[bot]
581bd6bb9a build(deps): bump codecov/codecov-action from 3.1.2 to 3.1.3 (#37077)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 3.1.2 to 3.1.3.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](40a12dcee2...894ff025c7)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-21 11:41:06 +02:00
dependabot[bot]
056eb659bb build(deps): bump actions/setup-python from 4.5.0 to 4.6.0 (#37078)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 4.5.0 to 4.6.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](d27e3f3d7c...57ded4d7d5)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-21 11:40:37 +02:00
Thomas Madlener
6476b2d79c edm4hep: add v0.8 (#37006) 2023-04-21 11:31:42 +02:00
Ashwin Kumar Karnad
a2078d709b octopus: add v12.2 (#35487) 2023-04-21 11:30:30 +02:00
Ashwin Kumar Karnad
c8ad4807c2 octopus: fix download links (#37081) 2023-04-21 05:22:34 -04:00
Harmen Stoppels
7cf53a647e readline: fix for nvhpc (#37024) 2023-04-21 11:02:55 +02:00
Massimiliano Culpo
cac44b9e15 Update archspec to latest release (#37070)
Fix -mcpu flags for gcc on neoverse-v1

Add support for NVHPC flags
2023-04-21 11:01:37 +02:00
Wouter Deconinck
354d59500b glib: new versions 2.76.1, 2.74.7 (#36361)
* glib: new version 2.76.1

This adds a new stable version of glib, 2.76.1 (skipping the 2.75 unstable series).

`mkenums.py` check is now specified as a dict, after r62dca6c1cf. The `filter_file` should disable both old and new. Better (maybe, but more complicated) would be to add the `can_fail` flag for this test.

The `iconv` argument was already deprecated and has now been removed. It is now resolved through meson itself, e71ecc8771.

Builds successfully on my system (and several dependents on top of it):
```console
==> glib: Successfully installed glib-2.76.1-7iy4mee2evabd357gviozbtyh5yxi27t
```
as does the previous 2.74.6 version

* glib: patch for 2.76.1, new version 2.74.7
2023-04-21 10:25:55 +02:00
Paul R. C. Kent
a639b22c7c add-llvm-1601-1602 (#37026) 2023-04-20 19:56:52 -07:00
Wouter Deconinck
4f13e2fa15 py-ocnn: new package (octree-based sparse CNN) (#37030)
O-CNN is an octree-based sparse convolutional neural network framework for 3D deep learning. It is built on py-torch.
2023-04-20 21:43:47 -05:00
Bernhard Kaindl
e3109a96d4 bcache: Simplify check if -lintl shall be added to LDFLAGS (#36569)
Replace my initial libintl check with the much nicer check for
"intl" in self.spec["gettext"].libs.names. Thanks to Chris Green!

Co-authored-by: Bernhard Kaindl <bkaindl@gmail.com>
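
The check in recipe form (a sketch built around the expression quoted above;
`env` is the `EnvironmentModifications` object passed to a setup-environment hook):

```python
# Sketch: add -lintl only when the gettext in the spec actually ships libintl
# (glibc provides those symbols natively, so no extra flag is needed there).
if "intl" in self.spec["gettext"].libs.names:
    env.append_flags("LDFLAGS", "-lintl")
```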
2023-04-20 18:56:49 -07:00
Pariksheet Nanda
0b606b01dc uqtk: bump (#36670)
1. support version 3.1.3, which now depends on sundials@6

2. support version 3.1.2:, which broke the two patch files; they have
   therefore been replaced by more flexible filter_file() commands inside
   a patch() function.

3. rename the variant for python extension from using the package name
   "+pyuqtk" to the more standard "+python"

4. add maintainers @omsai and the upstream developer @bjdebus who
   offered to help with the spack packaging.

5. swig should only be a build-time dependency.  swig is only
   necessary until @:3.1.0

6. confirmed python dependencies are correct by inspecting imports,
   narrowed the python dependency types to build and run, and confirmed all
   31 build-time tests pass including the 9 python tests:

```console
$ spack env create uqtk-dev
$ spack add uqtk@3.1.3
$ spack install --test root && cat $(spack location -i uqtk)/.spack/install-time-test-log.txt
==> Testing package uqtk-3.1.3-nok6fut
==> [2023-04-19-14:56:25.005361] Running build-time tests
==> [2023-04-19-14:56:25.005536] RUN-TESTS: build-time tests [check]
==> [2023-04-19-14:56:25.009543] '/home/omsai/src/spack/opt/spack/linux-pureos10-skylake/gcc-10.2.1/gmake-4.4.1-b6g4apmfvxz3bn4eabh37dehcrg65fj7/bin/make' '-j4' '-n' 'test'
==> [2023-04-19-14:56:25.014903] '/home/omsai/src/spack/opt/spack/linux-pureos10-skylake/gcc-10.2.1/gmake-4.4.1-b6g4apmfvxz3bn4eabh37dehcrg65fj7/bin/make' '-j4' 'test'
Running tests...
/home/omsai/src/spack/opt/spack/linux-pureos10-skylake/gcc-10.2.1/cmake-3.26.3-zjmsfz23j5l4ytniz26uzvxonlu5qebr/bin/ctest --force-new-ctest-process
Test project /tmp/omsai/spack-stage/spack-stage-uqtk-3.1.3-nok6fut47h42cnaau7wkoohgqy5f2qqa/spack-build-nok6fut
      Start  1: ArrayReadAndWrite
      Start  2: ArrayDelColumn
      Start  3: Array1DMiscTest
      Start  4: Array2DMiscTest
 1/31 Test  #1: ArrayReadAndWrite ................   Passed    0.01 sec
      Start  5: ArraySortTest
 2/31 Test  #2: ArrayDelColumn ...................   Passed    0.01 sec
      Start  6: MultiIndexTest
 3/31 Test  #3: Array1DMiscTest ..................   Passed    0.01 sec
      Start  7: CorrTest
 4/31 Test  #4: Array2DMiscTest ..................   Passed    0.01 sec
      Start  8: QuadLUTest
 5/31 Test  #5: ArraySortTest ....................   Passed    0.02 sec
      Start  9: MCMC2dTest
 6/31 Test  #6: MultiIndexTest ...................   Passed    0.01 sec
      Start 10: MCMCRandomTest
 7/31 Test  #8: QuadLUTest .......................   Passed    0.02 sec
      Start 11: MCMCNestedTest
 8/31 Test #10: MCMCRandomTest ...................   Passed    0.02 sec
      Start 12: Deriv1dTest
 9/31 Test #12: Deriv1dTest ......................   Passed    0.01 sec
      Start 13: SecondDeriv1dTest
10/31 Test #13: SecondDeriv1dTest ................   Passed    0.01 sec
      Start 14: GradHessianTest
11/31 Test #11: MCMCNestedTest ...................   Passed    0.03 sec
      Start 15: GradientPCETest
12/31 Test #14: GradHessianTest ..................   Passed    0.01 sec
      Start 16: PCE1dTest
13/31 Test #15: GradientPCETest ..................   Passed    0.01 sec
      Start 17: PCEImplTest
14/31 Test #16: PCE1dTest ........................   Passed    0.01 sec
      Start 18: PCELogTest
15/31 Test #18: PCELogTest .......................   Passed    0.01 sec
      Start 19: Hessian2dTest
16/31 Test #19: Hessian2dTest ....................   Passed    0.01 sec
      Start 20: BCS1dTest
17/31 Test #20: BCS1dTest ........................   Passed    0.01 sec
      Start 21: BCS2dTest
18/31 Test #21: BCS2dTest ........................   Passed    0.01 sec
      Start 22: LowRankRegrTest
19/31 Test #22: LowRankRegrTest ..................   Passed    0.01 sec
      Start 23: PyModTest
20/31 Test #17: PCEImplTest ......................   Passed    0.07 sec
      Start 24: PyArrayTest
21/31 Test #23: PyModTest ........................   Passed    0.08 sec
      Start 25: PyArrayTest2
22/31 Test #25: PyArrayTest2 .....................   Passed    0.30 sec
      Start 26: PyQuadTest
23/31 Test #24: PyArrayTest ......................   Passed    1.44 sec
      Start 27: PyBCSTest1D
24/31 Test #26: PyQuadTest .......................   Passed    1.68 sec
      Start 28: PyBCSTest2D
25/31 Test #27: PyBCSTest1D ......................   Passed    1.66 sec
      Start 29: PyBADPTest
26/31 Test  #7: CorrTest .........................   Passed    3.43 sec
      Start 30: PyRegressionTest
27/31 Test #28: PyBCSTest2D ......................   Passed    1.50 sec
      Start 31: PyGalerkinTest
28/31 Test  #9: MCMC2dTest .......................   Passed    3.90 sec
29/31 Test #29: PyBADPTest .......................   Passed    1.66 sec
30/31 Test #30: PyRegressionTest .................   Passed    1.72 sec
31/31 Test #31: PyGalerkinTest ...................   Passed    1.63 sec

100% tests passed, 0 tests failed out of 31

Total Test time (real) =   5.35 sec
==> [2023-04-19-14:56:30.382797] '/home/omsai/src/spack/opt/spack/linux-pureos10-skylake/gcc-10.2.1/gmake-4.4.1-b6g4apmfvxz3bn4eabh37dehcrg65fj7/bin/make' '-j4' '-n' 'check'
==> [2023-04-19-14:56:30.385983] Target 'check' not found in Makefile
```
2023-04-20 18:53:39 -07:00
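As a rough illustration of point 2 above, a `patch()` method can apply `filter_file()` edits directly to the staged sources. This is a minimal, hypothetical sketch in which the edited file and the replacement strings are placeholders, not the actual uqtk changes:

```python
from spack.package import *


class Uqtk(CMakePackage):
    """Sketch only: versions, checksums and most metadata elided."""

    variant("python", default=False, description="Build the python extension")

    def patch(self):
        # filter_file() edits the staged sources in place, which tolerates
        # small upstream changes better than a static patch file does.
        filter_file(
            "find_package(SUNDIALS)",
            "find_package(SUNDIALS 6 REQUIRED)",
            "CMakeLists.txt",
            string=True,
        )
```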
afzpatel
134b954c7f enabling test for rccl hsakmt-roct and rocm-opencl (#35465)
* initial commit for enabling test for rccl hsakmt-roct and rocm-opencl
* fix styling and cleaning code
* adding missing imports and minor fixes
* minor style fix
* modifying hsakmt-roct test to run right after installation
2023-04-20 18:39:49 -07:00
Daniel Ahlin
bf970ebf9d nccl-fastsocket: initial packaging for nccl-fastsocket (#36557)
* nccl-fastsocket: Add NCCL transport plugin for GCP
* nccl-fastsocket: remove auto-gen. header and fix maintainers
* Update var/spack/repos/builtin/packages/nccl-fastsocket/package.py
* nccl-fastsocket: Add rationale for setting LD_LIBRARY_PATH

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2023-04-20 21:38:40 -04:00
MatthewLieber
af47b9170c osu-micro-benchmarks: adding sha for 7.1 release (#36702)
* adding sha for 7.1 release
* add a new version

---------

Co-authored-by: Matt Lieber <lieber.31@osu.edu>
2023-04-20 18:33:58 -07:00
Ashwin Kumar Karnad
bf04551bf5 Fix bigdft-suite compilation (#36612)
(from d20f284100)
2023-04-20 18:18:11 -07:00
Wouter Deconinck
a419ffcf50 osg-ca-certs: igtf link should point to version, not 'current' (#35977)
* osg-ca-certs: igtf link should point to version, not 'current'
* osg-ca-certs: new version 1.110.igtf.1.119
* [@spackbot] updating style on behalf of wdconinc

---------

Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2023-04-20 18:09:27 -07:00
Cory Bloor
de7c82f13f hiprand: split into new package (#36798) 2023-04-20 17:48:54 -07:00
Adam J. Stewart
37fb2c8dce py-kornia: add v0.6.12 (#37074) 2023-04-20 20:48:35 -04:00
Alec Scott
329fa0f067 libxaw3d: add v1.6.4 (#36854) 2023-04-20 17:35:32 -07:00
snehring
9d97ed1190 bamutil: fix filter_file invocation (#36978) 2023-04-20 17:25:51 -07:00
snehring
794fc47722 gmap-gsnap: adding new version 2023-03-24 (#36983) 2023-04-20 17:12:30 -07:00
Erik Schnetter
5842e17085 netlib-lapack: New version 3.11.0 (#36984) 2023-04-20 17:10:47 -07:00
Annop Wongwathanarat
5ef7e90462 lammps: add linking with acfl for FFT (#37000) 2023-04-20 16:48:24 -07:00
Adam J. Stewart
05215e016b ML CI: fix mirror name (#37007) 2023-04-20 16:45:15 -07:00
Pieter Ghysels
1ddde85221 Upgrade to STRUMPACK v7.1.0, update minimum CMake version (#37014)
* Upgrade to STRUMPACK v7.1.0, update minimum CMake version
* Update to v7.1.1, which has a fix for ROCm
2023-04-20 16:42:36 -07:00
Glenn Johnson
da8be02e66 ants: add version 2.4.3 (#37031)
* ants: add version 2.4.3
  - add version 2.4.3
  - deprecate old git version
* [@spackbot] updating style on behalf of glennpj

---------

Co-authored-by: glennpj <glennpj@users.noreply.github.com>
2023-04-20 16:31:27 -07:00
Rémi Lacroix
c348776096 Add new package MozJPEG (#37052)
* Add new package MozJPEG
  MozJPEG is a patched version of libjpeg-turbo that improves JPEG compression efficiency, achieving higher visual quality and smaller file sizes at the same time.
* MozJPEG: Add myself as a maintainer and fix style
2023-04-20 19:28:07 -04:00
Glenn Johnson
683b933f68 atompaw: add version 4.2.0.2 (#37035) 2023-04-20 16:07:33 -07:00
Alec Scott
5c5f3c00a3 emacs: default to +tls true (#37038)
* emacs: default to +tls true
* Add myself as a maintainer for emacs pkg
2023-04-20 15:53:11 -07:00
AMD Toolchain Support
d5c549f1e6 stream optimization for aocc (#37039) 2023-04-20 15:45:17 -07:00
Olivier Cessenat
8fb38e3982 poppler-data: new version 0.4.12 (#37042) 2023-04-20 15:35:51 -07:00
Olivier Cessenat
61ce48c2ac poppler: new version 23.04.0 (#37043) 2023-04-20 15:34:40 -07:00
Olivier Cessenat
af19e51966 gxsview: new version 2022.11.04 (#37050) 2023-04-20 15:32:02 -07:00
Olivier Cessenat
04aaaa8270 netpbm: new version 10.73.43 (#37051) 2023-04-20 15:31:03 -07:00
Adam J. Stewart
f174bca689 py-lightly: add v1.4.2 (#37053)
* py-lightly: add v1.4.2
* pytorch and lightning 2.0 now supported
2023-04-20 15:29:53 -07:00
Cameron Book
7c9359d57e Add scotch-7.0.3 checksum. (#37054) 2023-04-20 15:26:50 -07:00
Kelly (KT) Thompson
09100ac5d9 Provide version 1.16 of lcov; volunteer to be maintainer. (#37056) 2023-04-20 13:51:39 -07:00
Emil Briggs
2b2c84aa8a rmgdft: new version 5.2.0 (#37059)
* Update for v5.0.4 release.
* Updated for version 5.0.5.
* Updated for new release.
2023-04-20 13:38:31 -07:00
snehring
a513272665 prism: adding new version 4.7 and java version restriction (#37061) 2023-04-20 13:36:01 -07:00
Matthew Thompson
843c0c060b gftl: add version 1.9.0, 1.10.0 (#37062) 2023-04-20 13:32:10 -07:00
Matthew Thompson
772772f14d gftl-shared: add version 1.6.0 (#37063) 2023-04-20 13:25:54 -07:00
Erik Schnetter
be133cd3d6 mpitrampoline: New version 5.3.0 (#37065) 2023-04-20 13:23:30 -07:00
Matthew Thompson
0ffedf2eba fargparse: add version 1.5.0 (#37066) 2023-04-20 13:22:08 -07:00
Matthew Thompson
82cfccaf85 pfunit: add version 4.7.0 (#37067) 2023-04-20 13:21:10 -07:00
Matthew Thompson
7d9e531de2 yafyaml: add version 1.1.0 (#37068) 2023-04-20 13:19:47 -07:00
Matthew Thompson
c0f096467e pflogger: add version 1.9.5, 1.10.0 (#37069) 2023-04-20 13:18:25 -07:00
Vicente Bolea
9684b03bff adios2: add latest release (#36563) 2023-04-20 14:50:51 -05:00
Alec Scott
1fb5707235 r-renv: add v0.17.3 (#36944) 2023-04-20 12:33:29 -05:00
Brian Vanderwende
381f1b76a7 Add RPC lib path to wrapper flags for HDF4 (#35628) 2023-04-20 08:49:04 -07:00
Massimiliano Culpo
1b7cf171ce Use core API to create a Makefile during bootstrapping (#37023) 2023-04-20 15:11:56 +02:00
Adam J. Stewart
01913d08e7 google-cloud-cli: add new package (#36830)
* google-cloud-cli: add new package

* black fixes

* Less verbose

* [@spackbot] updating style on behalf of adamjstewart

* More robust if ver doesn't exist for platform

* Deprecate ancient GEE

* Fix ppc64le bug

---------

Co-authored-by: adamjstewart <adamjstewart@users.noreply.github.com>
2023-04-19 16:12:03 -07:00
Adam J. Stewart
d5fa062e4a py-einops: add v0.6.1 (#37027) 2023-04-19 15:23:19 -07:00
Erik Schnetter
b164c03b09 coreutils: New version 9.3 (#37021) 2023-04-19 14:18:34 -07:00
Harmen Stoppels
d51af675ef make version(...) kwargs explicit (#36998)
- [x] Replace `version(ver, checksum=None, **kwargs)` signature with
      `version(ver, checksum=None, *, sha256=..., ...)` explicitly listing all arguments.
- [x] Fix various issues in packages:
  - `tags` instead of `tag`
  - `default` instead of `preferred`
  - `sha26` instead of `sha256`
  - etc

Also, use `sha256=...` consistently.

Note: setting `sha256` currently doesn't validate the checksum length, so you could do
`sha256="a"*32` and it would get checked as `md5`... but that's something for another PR.
2023-04-19 14:17:47 -07:00
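The shape of the signature change above can be summarized in a standalone sketch; the exact argument list below is illustrative, not Spack's full implementation:

```python
from typing import Optional


# Before: unknown keyword arguments such as sha26= or tag= disappeared
# silently into **kwargs instead of failing.
def version_before(ver, checksum=None, **kwargs):
    ...


# After: all accepted arguments are explicit and keyword-only, so a typo
# like version("1.0", sha26="...") raises a TypeError when the package
# file is loaded.
def version_after(
    ver,
    checksum: Optional[str] = None,
    *,
    sha256: Optional[str] = None,
    md5: Optional[str] = None,
    tag: Optional[str] = None,
    preferred: bool = False,
    deprecated: bool = False,
):
    ...
```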
Wouter Deconinck
c7508dc216 py-torch: define property cmake_prefix_paths (#37012)
* py-torch: define property cmake_prefix_paths

`py-torch` installs `libtorch` and a cmake config in a non-standard location. This points downstream code to the relevant locations. From there it should pick up the correctly library and include paths for C++ projects.

* py-torch: python_platlib suggestion

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* [@spackbot] updating style on behalf of wdconinc

* py-torch: back to self.spec["python"].package.platlib

* [@spackbot] updating style on behalf of wdconinc

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2023-04-19 17:12:45 -04:00
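A plausible shape for the property described above, assuming the cmake config is installed under `<platlib>/torch`; this is a sketch, not necessarily the merged code:

```python
from spack.package import *


class PyTorch(PythonPackage):
    """Sketch only."""

    @property
    def cmake_prefix_paths(self):
        # libtorch and its cmake config live under python's platlib
        # rather than the package prefix, so point CMake consumers there.
        return [join_path(self.spec["python"].package.platlib, "torch")]
```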
Veselin Dobrev
6ca41cfbcb Add mfem v4.5.2 and related updates/tweaks in other packages (#36154)
* Add mfem v4.5.2 and related updates/tweaks in other packages

* [mfem] Add the release source link for MFEM v4.5.2

* [mfem] Remove 'goxberry' (his request) from MFEM's maintainers list
2023-04-19 05:55:31 -07:00
Harmen Stoppels
ae909b3688 Extract depfile logic from cli command into a core module (#36995) 2023-04-19 14:36:29 +02:00
Alec Scott
3a5e48f476 installer.py: drop build edges of installed packages by default (#36707)
This means that `spack install` will now build the minimal set of packages
required to install the root(s).

To opt out of build edge pruning, use `spack install --include-build-deps`.

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2023-04-19 10:00:40 +02:00
Alec Scott
899838b325 apachetop: add v0.23.2 (#37017) 2023-04-19 09:48:52 +02:00
Alec Scott
001bf93ea3 r-rcpptoml: add v0.2.2 (#36886) 2023-04-19 02:07:40 -04:00
Todd Gamblin
b260234faf editing: add higher-precedence SPACK_EDITOR environment variable
Other tools like git support `GIT_EDITOR` which takes higher precedence than the
standard `VISUAL` or `EDITOR` variables. This adds similar support for Spack, in the
`SPACK_EDITOR` env var.

- [x] consolidate editor code from hooks into `spack.util.editor`
- [x] add more editor tests
- [x] add support for `SPACK_EDITOR`
- [x] add a documentation section for controlling the editor and reference it
2023-04-18 16:23:00 -07:00
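The resulting lookup order is simple to state; here is a standalone sketch of the precedence, not the actual `spack.util.editor` code:

```python
import os
import shutil


def choose_editor(default="vi"):
    # SPACK_EDITOR wins over the conventional variables, mirroring how
    # git consults GIT_EDITOR before VISUAL and EDITOR.
    for var in ("SPACK_EDITOR", "VISUAL", "EDITOR"):
        value = os.environ.get(var)
        if value:
            return value
    # Fall back to a common editor found on PATH.
    return shutil.which(default) or default
```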
Todd Gamblin
2f30da1762 refactor: unify use of spack.util.editor
Code from `spack.util.editor` was duplicated into our licensing hook in #11968. We
really only want one place where editor search logic is implemented. This consolidates
the logic into `spack.util.editor`, including a special case to run `gvim` with `-f`.

- [x] consolidate editor search logic in spack.util.editor
- [x] add tests for licensing case, where `Executable` is used instead of `os.execv`
- [x] make `_exec_func` argument of `editor()` into public `exec_fn` arg
- [x] add type annotations
2023-04-18 16:23:00 -07:00
Massimiliano Culpo
d92c21ec97 Fix compilation on Cray (target: any) (#37011)
fixes #36628

Fix using compilers that declare "target: any" in their
configuration. This should happen only on Cray with the
module based programming environment.
2023-04-18 15:47:52 -07:00
Benjamin Meyers
5960d74dca Update and fix py-dgl+cuda (#36823)
* Update and fix py-dgl+cuda

* [@spackbot] updating style on behalf of meyersbs

* Update py-dgl
2023-04-18 15:56:18 -05:00
Jonathon Anderson
c7232f4505 lammps: backport fix for +rocm+kokkos+kspace (#36850)
* lammps: backport hipfft fix for ROCm-based builds

* lammps: Mark incompatibility with Kokkos 4.x for old versions
2023-04-18 15:41:18 -05:00
Jonathon Anderson
08fd8c8d0a spack ci: preserve custom attributes in build jobs (#36651)
* Simplify test/cmd/ci.py::test_ci_generate_with_custom_scripts

* Rearrange the build-job logic in generate_gitlab_ci_yaml

* Preserve all unknown attributes in build jobs

* Slip tests for custom attributes in the tests for other job types

* Support custom artifacts

* [@spackbot] updating style on behalf of blue42u

* Don't bother sorting needs

---------

Co-authored-by: blue42u <blue42u@users.noreply.github.com>
2023-04-18 15:40:43 -05:00
Alec Scott
eb72217228 r-httpuv: add v1.6.9 (#36951) 2023-04-18 15:12:13 -05:00
Alec Scott
945d032f08 r-git2r: add v0.31.0 (#36864) 2023-04-18 15:08:24 -05:00
Alec Scott
78bfeb11ae r-highr: add v0.10 (#36867) 2023-04-18 15:06:24 -05:00
kwryankrattiger
9745865250 DaV SDK: Enable ParaView raytracing within the SDK (#36844)
* DaV SDK: Enable ParaView raytracing within the SDK

* CI: Drop swr testing from Data Vis SDK

* ISPC: extend LLVM requirement to main

* DaV SDK: Disallow concretizing develop unifyfs

No longer needed after mochi-margo patch
2023-04-18 13:39:47 -05:00
Benjamin Meyers
accbf2cffc New packages: py-ogb, py-outdated, py-littleutils (#36824)
* New packages: py-ogb, py-outdated, py-littleutils

* Update var/spack/repos/builtin/packages/py-outdated/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-04-18 11:08:57 -05:00
Annop Wongwathanarat
5d8ca81e82 py-scipy: link with OpenMP version of armpl-gcc when requested (#37002) 2023-04-18 11:08:29 -05:00
Harmen Stoppels
11e5b0cd91 fix typo (#36997) 2023-04-18 08:07:57 -05:00
Todd Gamblin
6845f41d67 Revert addition of SPACK_EDITOR pending review.
This reverts commit d8a26905ee.
This reverts commit 1ee049ccc3.

These were spuriously pushed to `develop`.
2023-04-18 03:57:24 -07:00
Todd Gamblin
1ee049ccc3 editing: add higher-precedence SPACK_EDITOR environment variable
Other tools like git support `GIT_EDITOR` which takes higher precedence than the
standard `VISUAL` or `EDITOR` variables. This adds similar support for Spack, in the
`SPACK_EDITOR` env var.

- [x] consolidate editor code from hooks into `spack.util.editor`
- [x] add more editor tests
- [x] add support for `SPACK_EDITOR`
- [x] add a documentation section for controlling the editor and reference it
2023-04-18 03:42:56 -07:00
Todd Gamblin
d8a26905ee refactor: unify use of spack.util.editor
Code from `spack.util.editor` was duplicated into our licensing hook in #11968. We
really only want one place where editor search logic is implemented. This consolidates
the logic into `spack.util.editor`, including a special case to run `gvim` with `-f`.

- [x] consolidate editor search logic in spack.util.editor
- [x] add tests for licensing case, where `Executable` is used instead of `os.execv`
- [x] make `_exec_func` argument of `editor()` into public `exec_fn` arg
- [x] add type annotations
2023-04-18 03:00:04 -07:00
Greg Becker
480b7f397e Allow users to remove items from hierarchy per-path (#31351)
* lmod modules: allow users to remove items from hierarchy per-spec

This allows MPI wrappers that depend on MPI to be removed from the MPI portion of
the hierarchy and be made available when the appropriate compiler is loaded.

module load gcc
module load mpi-wrapper  # implicitly loads mpi
module load hdf5

This allows users to treat an mpi wrapper like an mpi program
2023-04-17 22:17:11 -07:00
Wouter Deconinck
2bc1779a71 py-html5lib, madgraph5amc: correct hashes of empty files (#36979)
* py-html5lib: correct hashes of empty files

Going through and fixing hashes that are due to empty string.
```console
$ grep -Ir $(echo -n | sha256sum | awk '{print$1}') $SPACK_ROOT/var/spack/repos/builtin
/home/wdconinc/git/spack/var/spack/repos/builtin/packages/madgraph5amc/package.py:        sha256="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
/home/wdconinc/git/spack/var/spack/repos/builtin/packages/py-html5lib/package.py:    version("0.99", sha256="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
```

```console
$ spack checksum py-html5lib 0.99
==> Found 1 version of py-html5lib:
  
  0.99  https://files.pythonhosted.org/packages/source/h/html5lib/html5lib-0.99.tar.gz

==> Fetching https://files.pythonhosted.org/packages/source/h/html5lib/html5lib-0.99.tar.gz

    version("0.99", sha256="aff6fd3031c563883197e5a04b7df324086ff5f358278a0386808c463a077e59")
```

* madgraph5: remove incorrectly hashed version
2023-04-17 23:14:10 -04:00
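The bad checksum is easy to recognize: it is the SHA-256 digest of zero bytes, so any failed download that leaves an empty file would still "verify":

```python
import hashlib

# SHA-256 of empty input, the value both packages had pinned.
print(hashlib.sha256(b"").hexdigest())
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```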
Wouter Deconinck
9ea7937f6d qt: new version 5.15.9 (#36709)
This adds the new LTS version of Qt5. No build system changes needed.

The bundled libjpeg and sqlite versions were updated, but it is unclear if these are actual build requirements, and we have not been tracking these specific versions in the version dependencies (likely due to exactly this lack of clarity).

Compare: https://github.com/qt/qtbase/compare/v5.15.8-lts-lgpl...v5.15.9-lts-lgpl
2023-04-17 15:06:44 -04:00
Harmen Stoppels
381c0af988 Revert "move depfile logic into its own module, separate traversal logic from model (#36911)" (#36985)
This reverts commit a676f706a8.
2023-04-17 20:58:38 +02:00
Paul Kuberry
4519b42214 xyce: patch issue affecting MPICH (#36826) 2023-04-17 10:19:08 -07:00
Rémi Lacroix
8e50c08b3f Add NetCDF95 package. (#36959)
* Add NetCDF95 package.
  NetCDF95 is an alternative Fortran interface to the NetCDF library which uses Fortran 2003 features.
* [@spackbot] updating style on behalf of RemiLacroix-IDRIS

---------

Co-authored-by: RemiLacroix-IDRIS <RemiLacroix-IDRIS@users.noreply.github.com>
2023-04-17 10:02:54 -07:00
Tim Haines
121ef695e3 Boost: add version 1.82.0 (#36924) 2023-04-17 18:35:45 +02:00
Sergey Kosukhin
275bfc15b4 netcdf-c: major refactoring, new variants and versions (#36485) 2023-04-17 17:59:10 +02:00
Adam J. Stewart
036695ac94 CI: update Linux images in ML pipelines (#36766)
Add missing openssl/curl/pkgconfig deps to py-tokenizers

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2023-04-17 17:42:03 +02:00
Alec Scott
19b55d6b27 r-rstantools: add v2.3.1 (#36890) 2023-04-17 08:53:06 -05:00
Alec Scott
993303f840 r-r-utils: add v2.12.2 (#36884) 2023-04-17 08:29:41 -05:00
Harmen Stoppels
a676f706a8 move depfile logic into its own module, separate traversal logic from model (#36911) 2023-04-17 15:27:01 +02:00
Alec Scott
2081ab8be1 cgl: add v0.60.7 (#36940) 2023-04-17 15:07:34 +02:00
Alec Scott
f2efec7fcc masurca: add v4.1.0 (#36942) 2023-04-17 15:07:17 +02:00
Alec Scott
e7b549216c spot: add v2.11.4 (#36943) 2023-04-17 15:06:56 +02:00
Alec Scott
426570fc3f glab: add v1.28.0 (#36946) 2023-04-17 15:06:32 +02:00
Carlos Bederián
0e55055394 plumed: add 2.8.2, 2.7.6 and their supported gromacs combinations (#36929)
* plumed: Add 2.8.2, 2.7.6

* gromacs: add plumed 2.8.2 and 2.7.6 support
2023-04-17 15:05:31 +02:00
Alec Scott
1bbeb61668 slang: add v2.3.3 (#36952) 2023-04-17 15:04:57 +02:00
Alec Scott
376d87619b transdecoder: add v5.7.0 (#36953) 2023-04-17 15:04:41 +02:00
Alec Scott
f82b4a68b8 Add "build" stage to many Go packages (#36795) 2023-04-17 15:03:09 +02:00
Adam J. Stewart
abb236e98a py-lightly: py-torch+distributed required (#36955) 2023-04-17 15:02:13 +02:00
Adam J. Stewart
adf0ce6621 py-sphinx: add new versions (#36938) 2023-04-17 15:01:32 +02:00
Alec Scott
5368049128 ruby-rake: add v13.0.6 (#36945) 2023-04-17 15:00:39 +02:00
Alec Scott
8afa166edf hepmc3: add v3.2.6 (#36941) 2023-04-17 15:00:22 +02:00
Taillefumier Mathieu
79a1b3bc12 Add umpire support to SIRIUS (#36958) 2023-04-17 08:53:11 -04:00
Glenn Johnson
4b6ab53a10 Add dependency changes to R packages (#36937)
This PR adds dependency changes that were missed with some recent merges
of R package updates.
2023-04-17 09:50:46 +02:00
Harmen Stoppels
58f389779a rust: depend on curl+nghttp2 (#36947) 2023-04-16 21:50:37 -05:00
Wouter Deconinck
508cad0bbc apptainer: new versions 1.1.6, 1.1.7 (#36838)
* apptainer: new versions 1.1.6, 1.1.7

No changes to build system required. https://github.com/apptainer/apptainer/compare/v1.1.5...v1.1.7

* apptainer: rm hanging chad
2023-04-16 18:51:06 -07:00
Juan Miguel Carceller
d22fa1be19 Change extend to append (#36939)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-04-16 18:40:48 -07:00
Wouter Deconinck
77830d92bd libmypaint: change extend to append (#36957)
* libmypaint: change extend to append

For same reason as #36939, `extend` takes a list as argument, while `append` takes list entry. Here `append` should be used.

* libmypaint: depends_on intltool
2023-04-16 18:37:48 -07:00
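The distinction behind this fix and #36939, in plain Python:

```python
args = []

# extend() takes an iterable and splices in its elements ...
args.extend(["--with-x", "--enable-foo"])

# ... while append() adds exactly one entry.
args.append("--with-y")

# Passing a bare string to extend() does not fail: it splices in the
# individual characters, which is almost never what a recipe wants.
args.extend("-g")
print(args)  # ['--with-x', '--enable-foo', '--with-y', '-', 'g']
```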
Alec Scott
14075f9f4c r-jsonlite: add v1.8.4 (#36872) 2023-04-15 22:02:50 -04:00
Alec Scott
fc58f5d174 r-progressr: add v0.13.0 (#36881) 2023-04-15 21:57:48 -04:00
Alec Scott
25ba8702c7 r-hoardr: add v0.5.3 (#36869) 2023-04-15 21:37:42 -04:00
Alec Scott
b064f9d1e0 r-sn: add v2.1.1 (#36891) 2023-04-15 13:09:25 -05:00
Alec Scott
d43690a91b r-rstan: add v2.21.8 (#36889) 2023-04-15 13:07:57 -05:00
Alec Scott
b6c2487f74 r-timedate: add v4022.108 (#36897) 2023-04-15 09:31:54 -07:00
Alec Scott
32c3ce204d libctl: add v4.5.1 (#36852) 2023-04-15 09:20:43 -05:00
Alec Scott
8893605b1e r-yulab-utils: add v0.0.6 (#36900) 2023-04-15 08:33:32 -05:00
Alec Scott
1345e98ae7 r-spades-tools: add v1.0.1 (#36892) 2023-04-15 15:25:05 +02:00
Alec Scott
7dfcb37975 r-svglite: add v2.1.1 (#36893) 2023-04-15 15:24:49 +02:00
Alec Scott
7854f64e1e r-taxizedb: add v0.3.1 (#36894) 2023-04-15 15:24:33 +02:00
Alec Scott
e18f6ffc28 r-tibble: add v3.2.1 (#36895) 2023-04-15 15:24:19 +02:00
Alec Scott
9e463adac1 r-tidytree: add v0.4.2 (#36896) 2023-04-15 15:24:04 +02:00
Alec Scott
839f820127 sdl2: add v2.26.5 (#36901) 2023-04-15 15:22:26 +02:00
Alec Scott
1c0b028aaa r-utf8: add v1.2.3 (#36898) 2023-04-15 15:22:10 +02:00
Alec Scott
db38f4d75a r-xts: add v0.13.0 (#36899) 2023-04-15 15:21:54 +02:00
Alec Scott
4cbc0c420c r-dendextend: add v1.17.1 (#36861) 2023-04-15 08:15:17 -05:00
Alec Scott
856b6d6d01 r-curl: add v5.0.0 (#36860) 2023-04-15 08:03:41 -05:00
Alec Scott
51fb778358 r-earth: add v5.3.2 (#36863) 2023-04-15 07:57:03 -05:00
Howard Pritchard
cad16f5d3d OpenMPI: tell it where libcuda.so is (#36784)
Starting with the 5.0.x release stream, the CUDA-related configury
items in Open MPI once again need --with-cuda-libdir so that
libcuda.so can be found at configure time; otherwise there is no
CUDA support unless someone copies libcuda.so to $CUDA_HOME/lib64
(a recipe sketch follows this entry).

related to #36760

Signed-off-by: Howard Pritchard <howardp@lanl.gov>
2023-04-15 14:53:15 +02:00
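In recipe terms this plausibly amounts to one extra configure argument when CUDA is enabled; a hedged sketch in which the variant handling and the stub-library path are assumptions, not the merged openmpi change:

```python
# Hypothetical fragment of the openmpi recipe; not the merged change.
def configure_args(self):
    args = []
    if self.spec.satisfies("@5.0.0: +cuda"):
        cuda_prefix = self.spec["cuda"].prefix
        args.append("--with-cuda={0}".format(cuda_prefix))
        # 5.0.x needs the library directory spelled out so the toolkit's
        # stub libcuda.so is found at configure time, even on build nodes
        # without the driver installed.
        args.append(
            "--with-cuda-libdir={0}".format(join_path(cuda_prefix.lib64, "stubs"))
        )
    return args
```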
Annop Wongwathanarat
cf817bdd4e openblas: add -Wno-error=implicit-function-declaration for Arm compiler (#36821) 2023-04-15 14:45:35 +02:00
Annop Wongwathanarat
22984edb1b netlib-scalapack: add -Wno-error=implicit-function-declaration for Arm compiler (#36822) 2023-04-15 14:44:44 +02:00
Annop Wongwathanarat
03ba24b370 scorpio: add -Wno-error=implicit-function-declaration for Arm compiler (#36827) 2023-04-15 14:43:28 +02:00
Annop Wongwathanarat
6d366c39ae libffi: add -Wno-error=implicit-function-declaration for Arm compiler (#36832) 2023-04-15 14:42:13 +02:00
Jon Rood
242c1e4535 gingko: remove redundant build type variant (#36703) 2023-04-15 14:41:48 +02:00
Annop Wongwathanarat
5071174194 superlu-dist: add -Wno-error=implicit-function-declaration for Arm compiler (#36831) 2023-04-15 14:40:54 +02:00
dependabot[bot]
2651def9fa build(deps): bump actions/checkout from 3.5.1 to 3.5.2 (#36848)
Bumps [actions/checkout](https://github.com/actions/checkout) from 3.5.1 to 3.5.2.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](83b7061638...8e5e7e5ab8)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-15 14:37:52 +02:00
Alec Scott
a37ed72a07 cub: add v2.1.0 (#36851) 2023-04-15 14:33:55 +02:00
Alec Scott
0bd54147d8 openjpeg: add v2.5.0 (#36858) 2023-04-15 14:33:02 +02:00
Alec Scott
0d5394e184 r-blockmodeling: add v1.1.4 (#36859) 2023-04-15 14:32:01 +02:00
Alec Scott
ff732bdb4d r-dt: add v0.27 (#36862) 2023-04-15 14:30:47 +02:00
Alec Scott
0ccc044f79 r-gson: add v0.1.0 (#36865) 2023-04-15 14:29:09 +02:00
Alec Scott
a465856cef r-gtools: add v3.9.4 (#36866) 2023-04-15 14:28:53 +02:00
Alec Scott
0e8f23f202 r-hms: add v1.1.3 (#36868) 2023-04-15 14:28:05 +02:00
Alec Scott
f936a1ef7e r-httr: add v1.4.5 (#36870) 2023-04-15 14:27:25 +02:00
Alec Scott
54bcdb8dec r-isoband: add v0.2.7 (#36871) 2023-04-15 14:27:10 +02:00
Alec Scott
2022b38fe2 r-listenv: add v0.9.0 (#36873) 2023-04-15 14:26:26 +02:00
Alec Scott
06d3b882ad r-lubridate: add v1.9.2 (#36874) 2023-04-15 14:25:49 +02:00
Alec Scott
4508803643 r-nimble: add v0.13.1 (#36875) 2023-04-15 14:25:34 +02:00
Alec Scott
f5c5a4a0b1 r-openssl: add v2.0.6 (#36876) 2023-04-15 14:25:18 +02:00
Alec Scott
60ee39d3d8 r-pbkrtest: add v0.5.2 (#36877) 2023-04-15 14:25:04 +02:00
Alec Scott
3b98ad4479 r-phangorn: add v2.11.1 (#36878) 2023-04-15 14:24:50 +02:00
Alec Scott
78c54a42bb r-pkgmaker: add v0.32.8 (#36879) 2023-04-15 14:24:33 +02:00
Alec Scott
0d624d9842 r-plyr: add v1.8.8 (#36880) 2023-04-15 14:24:18 +02:00
Alec Scott
b0eaab8242 r-psych: add v2.3.3 (#36882) 2023-04-15 14:23:22 +02:00
Alec Scott
4655936389 r-qtl: add v1.58 (#36883) 2023-04-15 14:20:26 +02:00
Alec Scott
7f319429fd r-rcppparallel: add v5.1.7 (#36885) 2023-04-15 14:20:02 +02:00
Alec Scott
7c20aaaab8 r-rlang: add v1.1.0 (#36887) 2023-04-15 14:19:18 +02:00
Alec Scott
a3d49a56a8 r-rrblup: add v4.6.2 (#36888) 2023-04-15 14:19:02 +02:00
finkandreas
e7238a0f26 suite-sparse: fix build with GPU targets 2023-04-15 14:18:45 +02:00
Juan Miguel Carceller
161485a774 cairo: add a dependency on which (#36908) 2023-04-15 14:16:41 +02:00
Alec Scott
b6c3cbff6f ncio: add v1.1.2 (#36857) 2023-04-15 14:09:00 +02:00
Alec Scott
55adddc239 aspcud: add v1.9.6 (#36912) 2023-04-15 14:08:43 +02:00
Alec Scott
903626c899 conserver: add v8.2.7 (#36913) 2023-04-15 14:07:16 +02:00
Alec Scott
afc6560429 exa: add v0.10.1 (#36914) 2023-04-15 14:06:57 +02:00
Alec Scott
03be2635eb libverto: add v0.3.2 (#36915) 2023-04-15 14:06:39 +02:00
Alec Scott
efa51e124d r-dorng: add v1.8.6 (#36916) 2023-04-15 14:06:20 +02:00
Alec Scott
6ab070d2ae r-gower: add v1.0.1 (#36917) 2023-04-15 14:05:53 +02:00
Alec Scott
433afe39a7 r-grbase: add v1.8.9 (#36918) 2023-04-15 14:05:36 +02:00
Alec Scott
9ba4b0cab8 r-tigris: add v2.0.1 (#36919) 2023-04-15 14:05:13 +02:00
Alec Scott
554d9125ef routino: add v3.3.3 (#36920) 2023-04-15 14:04:52 +02:00
Alec Scott
9e4dd4972f trident: add v23.01.1 (#36921) 2023-04-15 14:03:53 +02:00
Alec Scott
39c7b10735 double-conversion: add v3.2.1 (#36931) 2023-04-15 13:52:28 +02:00
Alec Scott
4bb4225f09 kraken: add v1.1.1 (#36932) 2023-04-15 13:51:34 +02:00
Alec Scott
1582ce6397 r-bayesplot: add v1.10.0 (#36933) 2023-04-15 13:50:58 +02:00
Caleb Robinson
becebe99b4 Updating torchgeo to 0.4.1 (#36793)
* Updating torchgeo to 0.4.1

* Added some commas

* Update var/spack/repos/builtin/packages/py-torchgeo/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update var/spack/repos/builtin/packages/py-torchgeo/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update var/spack/repos/builtin/packages/py-torchgeo/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Changed fiona bounds

* Update var/spack/repos/builtin/packages/py-torchgeo/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-04-14 22:17:35 -04:00
Adam J. Stewart
e3bb6d98fb py-pyupgrade: add v3.3.1 (#36926) 2023-04-14 17:18:00 -07:00
Juan Miguel Carceller
89b8445eb0 Add jmcarcell to the maintainers of some packages (#36909)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-04-14 17:03:50 -07:00
Dan Bonachea
b7c2cc9b85 upcxx: Fixes for Cray targets (#36708)
* upcxx: Enhance auto-detection for HPE Cray EX platforms

1. Some Cray EX systems use ALPS instead of SLURM, so ensure we default
   the pmi-runcmd appropriately.

2. Some Cray EX systems run a stock kernel and lack a Cray PrgEnv
   (yes, really), so add a check for libfabric CXI provider as a
   last resort for detecting Cray EX, and ensure we don't choke on
   a lack of `$CRAYPE_DIR`.

* upcxx: Cray XC improvements

1. Future-proof Cray XC detection, in case Spack ever starts reporting
   it as "linux".
2. Revert cray-libsci workaround for ALCF Theta. The workaround no longer
   appears to be necessary, and is actually causing failures on Theta now.

* upcxx: Add level_zero variant detection
2023-04-14 16:17:41 -07:00
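The last-resort detection described in the first bullet might look roughly like this; a standalone sketch in which the libfabric provider path is an assumption, not the recipe's actual probe:

```python
import glob
import os


def looks_like_cray_ex():
    # A Cray PrgEnv normally sets CRAYPE_DIR, but some Cray EX systems
    # run a stock kernel without it, so its absence is not conclusive.
    if os.environ.get("CRAYPE_DIR"):
        return True
    # Fall back to probing for the libfabric CXI provider, which is
    # characteristic of the Slingshot network on Cray EX (path assumed).
    return bool(glob.glob("/usr/lib64/libfabric*/*cxi*"))
```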
Sergey Kosukhin
5efd689803 proj: fix installation of datum grid with Autotools (#36906) 2023-04-14 18:18:21 -04:00
Adam J. Stewart
4d11001046 Issue Templates: improve details formatting (#36923) 2023-04-14 12:16:22 -05:00
Doug Jacobsen
690394fabc Change environment modifications to escape with double quotes (#36789)
This commit changes the environment modifications class to escape
strings with double quotes instead of single quotes.

Single quotes prevent the expansion of environment variables that are
nested within environment variable definitions.
2023-04-14 10:13:17 -07:00
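The underlying shell semantics, in a small illustration of why the generated export lines switched quoting (a sketch, not the actual environment-modifications class):

```python
def export_line(name, value, double_quotes=True):
    # Double quotes let the shell expand variables nested inside the
    # value; single quotes would pass $HOME through literally.
    quote = '"' if double_quotes else "'"
    return "export {0}={1}{2}{1}".format(name, quote, value)


print(export_line("MY_BIN", "$HOME/bin"))         # export MY_BIN="$HOME/bin"
print(export_line("MY_BIN", "$HOME/bin", False))  # export MY_BIN='$HOME/bin'
```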
John W. Parent
7d083cf138 cmake: add version 3.26.3 (#36648) 2023-04-14 11:03:36 -04:00
Scott Wittenburg
bfa94c5781 gitlab ci: Better tagging of "service" jobs (#36846)
- Tag non-rebuild jobs to target a cheaper (and more highly available)
subset of runners.

- Add missing resource requests to these jobs as well.
2023-04-14 09:03:12 -06:00
kwryankrattiger
3710774d02 VTK-m: use conflict with virtuals variant with ROCm (#36845) 2023-04-14 16:55:46 +02:00
Alec Scott
32808f4fb5 yambo: add v5.1.1 (#36905) 2023-04-14 07:13:33 -04:00
Alec Scott
7c2fc29b6a tempestremap: add v2.1.6 (#36903) 2023-04-14 07:03:35 -04:00
Alec Scott
5504fd6730 speex: add v1.2.1 (#36902) 2023-04-14 12:28:37 +02:00
Alec Scott
7066b61be3 vecmem: add v0.24.0 (#36904) 2023-04-14 12:27:12 +02:00
Massimiliano Culpo
9ec289857c netcdf: fix bugs introduced with multiple build systems split (#36825)
Fixes #36689

- The "base" builder class should be last in the MRO
- `filter_compiler_wrappers` needs to be moved to builders
- Decorating a function from a mixin class require using
   the correct metaclass for the mixin
2023-04-14 10:59:12 +02:00
Massimiliano Culpo
92144d6375 lz4: fix bug on darwin, use makefile by default (#36820)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2023-04-14 09:43:59 +02:00
Wouter Deconinck
0c2aafec33 cernlib: depends_on openssl when platform=linux (#36837)
Fixes #36836.
2023-04-14 09:17:13 +02:00
Adam J. Stewart
2380810d60 py-lightning: add v1.9.5 (#36803) 2023-04-14 09:08:19 +02:00
Alec Scott
7786da9117 kubernetes: add v1.27.0 and split off new package kubectl (#36780) 2023-04-14 09:07:42 +02:00
Wouter Deconinck
87dca0130c (py-)onnx: new version 1.13.0, 1.13.1 (for py-torch@2) (#36797)
* (py-)onnx: new version 1.13.0, 1.13.1 (for py-torch@2)

ONNX has a new version 1.13.1 which is required for py-torch +onnx_ml.
This version adds Python 3.11 support and Mac M1/M2 support. No build
system changes.

- https://github.com/onnx/onnx/releases/tag/v1.13.0 (main changes)
- https://github.com/onnx/onnx/releases/tag/v1.13.1 (bugfixes)
- https://github.com/onnx/onnx/commits/main/CMakeLists.txt (top 6
  commits are on top of the last 1.12 release)
- https://github.com/onnx/onnx/blob/v1.13.1/requirements.txt
- https://github.com/onnx/onnx/blob/v1.13.1/requirements-release.txt

* py-onnx: update dependencies

* Apply suggestions from code review

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* protobuf: new version 3.22.2, depends_on abseil-cpp

* onnx: require c++14 for protobuf

* py-protobuf: new preferred version 3.20.3

* protobuf: new versions 3.20.2, 3.20.3

* [@spackbot] updating style on behalf of wdconinc

* protobuf: no double depends_on abseil-cpp

* py-protobuf: no preferred 3.20.3

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2023-04-13 21:25:58 -05:00
MikeG
f171d7ed15 py-dask-mpi: remove jupyter-server-proxy (#36680)
* py-dask-mpi: remove jupyter-server-proxy

This dependency isn't a 'hard' one; it optionally simplifies getting access to the web consoles.
See: https://github.com/dask/dask-mpi/pull/102

* Add patch to remove unnecessary dependency

* review comments

* pass formatting
2023-04-13 21:22:59 -05:00
Nisarg Patel
e2812a6e96 Update m4 (#36835)
* Update m4

For %oneapi & %intel, we explicitly set -O0 so dependents of m4 do not break.
The default optimization level for icx/icpx is "-O2", but building m4 with
this level breaks the build of dependents, so we set it explicitly to "-O0"
(see the sketch after this entry).

* [@spackbot] updating style on behalf of hpcnpatel
2023-04-13 21:02:43 -04:00
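A per-compiler flag override like this is typically expressed through Spack's flag handler hook; a hedged sketch of the idea, which may differ from the merged m4 change:

```python
def flag_handler(self, name, flags):
    if name == "cflags" and self.spec.compiler.name in ("oneapi", "intel"):
        # icx/icpx default to -O2, which breaks the build of m4's
        # dependents, so pin optimization down to -O0.
        flags.append("-O0")
    return (flags, None, None)
```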
Adam J. Stewart
2b47a7a22f py-lightly: add new package (#36839) 2023-04-13 19:16:22 -05:00
eugeneswalker
a9db5620f5 remove x86_64_v3 tags (#36828) 2023-04-13 15:29:15 -05:00
Erik Schnetter
f4833a9869 mpiwrapper: New version 2.10.3 (#36816) 2023-04-13 20:28:51 +02:00
dependabot[bot]
4a44e4bed9 build(deps): bump actions/checkout from 3.5.0 to 3.5.1 (#36815)
Bumps [actions/checkout](https://github.com/actions/checkout) from 3.5.0 to 3.5.1.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](8f4b7f8486...83b7061638)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-13 20:28:26 +02:00
Erik Schnetter
44e2d1cd03 mpitrampoline: add v5.2.3 (#36814) 2023-04-13 20:27:55 +02:00
Wouter Deconinck
ff319e9863 Resolve <include-fragment> tags e.g. in github release pages (#36674)
This aims to resolve #34164 by resolving the <include-fragment> tags
that GitHub has started using for their release pages, see
https://github.github.io/include-fragment-element/.

This feels a bit hacky but intended as a starting point for discussion.
After reading a page during spidering, it first parses for
include-fragments, gets them all, and treats them all as separate pages.
Then it looks for href links in both the page itself and the fragments.

Co-authored-by: Alec Scott <alec@bcs.sh>
2023-04-13 20:26:26 +02:00
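Conceptually the spider change reduces to collecting fragment URLs before extracting links; a standalone sketch with Python's html.parser, not the actual Spack spider code:

```python
from html.parser import HTMLParser


class FragmentCollector(HTMLParser):
    """Collect src attributes of <include-fragment> tags so each
    fragment can be fetched and scanned as a separate page."""

    def __init__(self):
        super().__init__()
        self.fragments = []

    def handle_starttag(self, tag, attrs):
        if tag == "include-fragment":
            src = dict(attrs).get("src")
            if src:
                self.fragments.append(src)


parser = FragmentCollector()
parser.feed('<include-fragment src="/spack/spack/releases?page=2"></include-fragment>')
print(parser.fragments)  # ['/spack/spack/releases?page=2']
```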
Massimiliano Culpo
d918ae0bde containerize: strip binaries in a less aggressive way (#36683) 2023-04-13 17:09:34 +02:00
kwryankrattiger
b940468890 CI: Update Data Vis SDK image (#36761)
Supersedes #34224
2023-04-13 09:54:42 -05:00
Daniel Ahlin
0707ffd4e4 gromacs: Specify c++ stdlib for aocc (#36553)
Co-authored-by: Christoph Junghans <junghans@lanl.gov>
2023-04-13 07:39:58 -04:00
John W. Parent
7801577cd5 lz4: fixup incorrect builder method signature (#36812) 2023-04-13 12:33:55 +02:00
Jack Morrison
8668635f1f libfabric: Add v1.18.0; Drop linux-headers dependency (#36805)
* Remove libfabric dependency on linux-headers package when building with OPX support.
* Add libfabric verison 1.18.0
2023-04-12 21:50:19 -04:00
Daniel Ahlin
49fe67572e gromacs: Intel oneapi 2023 fixes (#36555)
* gromacs: oneapi@2023 - g++-dependency and MKL path

Fixes to build with oneapi@2023 and MKL 2023.
* Depend on gcc and add GMX_GPLUSPLUS_PATH to it
* MKL seems to be picked up without explicit directives
  and the old directives fails (at least for my tests) with MKL
  2023.

* gromacs: oneapi@2023 fix - address style errors

* gromacs: add danielahlin as maintainer

Adding danielahlin as maintainer as discussed in https://github.com/spack/spack/pull/36555#issuecomment-1496224128 and https://github.com/spack/spack/pull/36555#issuecomment-1504682712

* Update package.py

---------

Co-authored-by: Christoph Junghans <christoph.junghans@gmail.com>
Co-authored-by: Christoph Junghans <junghans@lanl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2023-04-12 19:19:42 -06:00
John W. Parent
ced6353e14 Paraview package: build on Windows (#36583)
* Prevent use of x11
* Don't define mpi compilers in cmake interface as MSMPI has no compiler
  wrapper.
2023-04-12 21:18:32 -04:00
Massimiliano Culpo
32f2d7ab7e libgcrypt: add v1.10.2 (#36799) 2023-04-12 16:10:12 -04:00
John W. Parent
135a6a07f1 WGL: remove invalid variant detection (#36579)
The plat variant was removed in a previous PR; now prevent that variant
from being detected externally.
2023-04-12 13:04:51 -07:00
Massimiliano Culpo
a7ae996e6b Stop installing codecov from pip (#36804)
It shouldn't be needed since we use Codecov's Github action

See here for more information https://community.codecov.com/t/codecov-yanked-from-pypi-all-versions/4259
2023-04-12 19:28:55 +02:00
Wouter Deconinck
e08c02c471 py-hatchling: new version 1.14.0; new pkgs py-calver, py-trove-classifiers (#36796)
* py-hatchling: new version 1.14.0; new pkgs py-calver, py-trove-classifiers

Hatchling 1.14.0 adds a new dependency on trove-classifiers, which in turn
depends on calver. So, those two packages needed to be added as well.

- f8915309d3
- https://github.com/pypa/trove-classifiers/blob/2023.3.9/pyproject.toml
- https://github.com/di/calver/blob/2022.06.26/pyproject.toml

* Apply suggestions from code review

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-04-12 12:52:10 -04:00
dependabot[bot]
385834bd43 build(deps): bump codecov/codecov-action from 3.1.1 to 3.1.2 (#36794)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 3.1.1 to 3.1.2.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](d9f34f8cd5...40a12dcee2)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-12 11:28:39 -04:00
Yang Zongze
46e45507dc hpddm: add new versions (#36769) 2023-04-12 08:59:33 +02:00
Simon Pintarelli
e3a420daed umpire: depends on camp~cuda when ~cuda (#36788) 2023-04-12 01:54:41 -04:00
Matthieu Dorier
78af0545c4 duckdb: add new package (#36682) 2023-04-12 07:26:53 +02:00
Kevin Huck
c3f207428c apex: add v2.6.2 (#36686) 2023-04-12 07:24:23 +02:00
Laura Weber
169704f72a sublime-text: Add licensing information (#36690) 2023-04-12 07:18:41 +02:00
Laura Weber
d4b8589b35 tecplot: Add version 2022r2, Licensing, Env setup (#36695) 2023-04-12 07:15:20 +02:00
Adam J. Stewart
60b573727e PROJ: googletest only needed to test (#36691) 2023-04-12 07:01:52 +02:00
Adam J. Stewart
e3e395769d py-tensorboard: add new versions (#36712) 2023-04-12 07:00:57 +02:00
Wouter Deconinck
8da72eb40b scitokens-cpp: new versions 0.7.1, 1.0.0 (#36714)
Two new versions were released since 0.7.0.

- https://github.com/scitokens/scitokens-cpp/releases/tag/v0.7.1
- https://github.com/scitokens/scitokens-cpp/releases/tag/v1.0.0

Some changes to dependencies:
- openssl always was an explicit requirement (but transitively satisfied by curl),
- cmake 3.10 required for both new versions.

Tested that all these new versions build fine:
```console
18:04:40 wdconinc@menelaos ~/git/spack (develop *$%>) $ spack find -lvx
==> In environment scitokens-cpp
==> Root specs
------- scitokens-cpp@0.7.0  ------- scitokens-cpp@0.7.1  ------- scitokens-cpp@1.0.0

==> Installed packages
-- linux-ubuntu23.04-skylake / gcc@12.2.0 -----------------------
uvwdosd cmake@3.26.3~doc+ncurses+ownlibs~qt build_system=generic build_type=Release          5gqdlyl scitokens-cpp@0.7.1~ipo build_system=cmake build_type=RelWithDebInfo generator=make
r4bdoii scitokens-cpp@0.7.0~ipo build_system=cmake build_type=RelWithDebInfo generator=make  xnnvbsf scitokens-cpp@1.0.0~ipo build_system=cmake build_type=RelWithDebInfo generator=make
==> 4 installed packages
```
2023-04-12 06:59:55 +02:00
Alec Scott
55fd89a576 glab: add v1.27.0 (#36723) 2023-04-12 06:55:29 +02:00
Alec Scott
a8b8c080a8 libcap: add v2.68 (#36725) 2023-04-12 06:55:15 +02:00
Alec Scott
76024ed28c nginx: add v1.23.4 (#36726) 2023-04-12 06:54:59 +02:00
Alec Scott
2eef4bd42a apr: add v1.7.3 (#36733) 2023-04-12 06:52:40 +02:00
Alec Scott
7df17ad7d6 feh: add v3.10 (#36734) 2023-04-12 06:52:21 +02:00
Alec Scott
729675bcf6 libgpg-error: add v1.47 (#36735) 2023-04-12 06:52:05 +02:00
Alec Scott
c12ce32b55 libxcomposite: add v0.4.6 (#36736) 2023-04-12 06:51:31 +02:00
Alec Scott
1d5e085549 libxrandr: add v1.5.3 (#36737) 2023-04-12 06:51:18 +02:00
Alec Scott
d68378d312 libxxf86misc: add v1.0.4 (#36738) 2023-04-12 06:51:04 +02:00
Alec Scott
9e9d5f72a9 perl-exporter-tiny: add v1.006002 (#36739) 2023-04-12 06:50:47 +02:00
Alec Scott
7212aae423 solr: add v8.11.2 (#36748) 2023-04-12 06:50:31 +02:00
Alec Scott
77da0707b0 talloc: add v2.4.0 (#36749) 2023-04-12 06:50:13 +02:00
Alec Scott
850b3bf27a task: add v2.6.2 (#36750) 2023-04-12 06:49:58 +02:00
Alec Scott
e0546f1b99 thrift: add v0.18.1 (#36751) 2023-04-12 06:49:44 +02:00
Alec Scott
a2365b68c2 xcb-proto: add v1.15.2 (#36752) 2023-04-12 06:49:13 +02:00
Alec Scott
f898406ae7 xerces-c: add v3.2.4 (#36753) 2023-04-12 06:48:55 +02:00
Alec Scott
6d96977980 xkeyboard-config: add v2.34 (#36754) 2023-04-12 06:48:39 +02:00
Adam J. Stewart
d79423fe9e protobuf: add new versions (#36711) 2023-04-12 06:47:42 +02:00
Glenn Johnson
f4658a520c r-hydrogof: add new package and dependencies (#36763)
- r-hydrogof
- r-hydrotsm
- r-automap
2023-04-12 06:39:06 +02:00
Alec Scott
94615285d1 sed: add v4.9 (#36747) 2023-04-12 06:38:22 +02:00
willdunklin
bf81f812a5 ascent: patch v0.9.0 for finding RAJA (#36645) 2023-04-12 06:36:48 +02:00
Adam J. Stewart
4863d6f21b py-fiona: add v1.9.3 (#36765) 2023-04-12 06:33:35 +02:00
Wouter Deconinck
2e242897e0 simsipm: add v2.0.2 (#36718) 2023-04-12 06:27:30 +02:00
Thomas Madlener
31c583581f lhapdf: Pass additional python lib dirs only if possible (#36771) 2023-04-12 06:25:54 +02:00
Howard Pritchard
781769545f Open MPI: fix pmix dependency for 5.0.x releases (#36773)
The Open MPI 5.0.x release stream needs PMIx 4.2(?).x, not 5
(which translates to pmix-master).

Signed-off-by: Howard Pritchard <howardp@lanl.gov>
2023-04-12 06:22:56 +02:00
Annop Wongwathanarat
575bd1c3f8 acfl: add version 23.04 (#36775) 2023-04-12 06:15:18 +02:00
Annop Wongwathanarat
6cfb9339ee armpl-gcc: add version 23.04 (#36776) 2023-04-12 06:13:57 +02:00
afzpatel
9ddfdec193 Enable rocm support and gtest variant for ucx (#36693) 2023-04-12 06:12:44 +02:00
Alec Scott
4ba49f3814 libxcursor: add v1.2.1 (#36777) 2023-04-12 06:10:42 +02:00
Alec Scott
b83ff386b8 libxmu: add v1.1.4 (#36778) 2023-04-12 06:09:07 +02:00
snehring
d5d8b81e21 py-drep: adding new version 3.4.2 (#36625) 2023-04-11 20:25:38 -05:00
Jonathon Anderson
8e12eef4e1 py-torch: Update conflicts for +/~tensorpipe (#36781)
* py-torch: Update conflicts for +/~tensorpipe

* [@spackbot] updating style on behalf of blue42u

* Update var/spack/repos/builtin/packages/py-torch/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* [@spackbot] updating style on behalf of blue42u

---------

Co-authored-by: blue42u <blue42u@users.noreply.github.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-04-11 20:24:42 -05:00
John W. Parent
f1c0775245 Perl package: fix 64-bit detection on Windows (#36785) 2023-04-11 19:08:15 -04:00
John W. Parent
530669346a Windows/MSVC: propagate all VCVARS changes to Spack env (#36582)
MSVC compilers rely on vcvars environment setup scripts to establish the
build environment variables necessary for all projects to build
successfully. Prior to this we were only piping LIB, INCLUDE, and PATH
changes through.

Instead we need to propagate all changes to the env variables made by
vcvars in order to establish robust support for the MSVC compiler.
This most significantly impacts projects that need to be built with
NMake and MSBuild (see the sketch after this entry).
2023-04-11 19:04:34 -04:00
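A common pattern for capturing everything vcvars changes is to run it in a child cmd shell, dump the environment with `set`, and parse the result; a hedged, Windows-only sketch, not Spack's exact implementation:

```python
import subprocess


def vcvars_environment(vcvars_bat, arch="x64"):
    """Run a vcvars batch script and return the complete environment it
    produces, so every changed variable can be propagated rather than
    just LIB, INCLUDE, and PATH."""
    cmd = '"{0}" {1} && set'.format(vcvars_bat, arch)
    out = subprocess.check_output(cmd, shell=True, text=True)
    env = {}
    for line in out.splitlines():
        key, sep, value = line.partition("=")
        if sep:
            env[key] = value
    return env
```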
kwryankrattiger
68f50f9b11 Fides 1.2 (#36787)
* VTK-m: Add conflict for 64 bit ids and Ascent types

* Fides: Add version 1.2

* [@spackbot] updating style on behalf of kwryankrattiger
2023-04-11 18:44:10 -04:00
Michael Kuhn
9996812067 meson: add 1.1.0 (#36790) 2023-04-11 18:07:55 -04:00
Alec Scott
2bf8284659 go: add v1.20.3 and v1.19.8 (#36791) 2023-04-11 17:34:31 -04:00
markus-ferrell
3edb044706 Windows testing: reenable tests for "spack dependents" (#36786)
All the tests worked out of the box. This just removes the skip statements.
2023-04-11 14:30:40 -07:00
eugeneswalker
0e9b5a05e8 mgard: add 2023-03-31, 2023-01-10 (#36585) 2023-04-11 23:16:14 +02:00
Alec Scott
d75343031e xtrans: add v1.4.0 (#36779) 2023-04-11 23:12:05 +02:00
kwryankrattiger
6e659cc38b ParaView: Need VTKm for CUDA (#36620) 2023-04-11 15:41:23 -05:00
Wouter Deconinck
ae2dd867a1 Update package.py (#36722)
No changes to the package recipe required.

Changelog: https://gitlab.com/Pythia8/releases/-/compare/pythia8306...pythia8309
2023-04-11 08:37:54 -07:00
Juan Miguel Carceller
d7dc26aeb7 Limit version of intel-tbb (#36731)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-04-11 08:11:14 -07:00
Wouter Deconinck
ea04ad108f xrootd: new version 5.5.4 (#36721)
No changes to the build recipe required. Changelog at https://github.com/xrootd/xrootd/compare/v5.5.3...v5.5.4.

Built successfully on my test system:
```console
[+] /opt/software/linux-ubuntu23.04-skylake/gcc-12.2.0/xrootd-5.5.4-cgyz43ivwwqkc7bhdofnxhl2fusysg3m
```
2023-04-11 08:07:59 -07:00
Nisarg Patel
eecd2f984f Update opencoarray to v2.10.1 and fgsl to v1.5.0 (#36626) 2023-04-11 16:29:10 +02:00
Harmen Stoppels
1c3961bdd0 Remove a unit-test that monkey-patches os.stat (#36757)
"test_create_stage_root_bad_uid" started failing as pytest updated to v7.3.0
2023-04-11 14:02:35 +02:00
eugeneswalker
a88fdb216f ci: gpu-tests stack: swap x86_64-{cuda,rocm} for x86_64 (#36759) 2023-04-10 17:01:18 -07:00
Alec Scott
8feeb07d4f r-caretensemble: add v2.0.2 (#36742) 2023-04-10 17:39:28 -05:00
kwryankrattiger
b2310f9e64 Ci backwards compat (#36045)
* CI: Fixup docs for bootstrap.

* CI: Add compatibility shim

* Add an update method for CI

Update requires manually renaming section to `ci`. After
this patch, updating and using the deprecated `gitlab-ci` section
should be possible.

* Fix typos in generate warnings

* Fixup CI schema validation

* Add unit tests for legacy CI

* Add deprecated CI stack for continuous testing

* Allow updating gitlab-ci section directly with env update

* Make warning give good advice for updating gitlab-ci

* Fix typo in CI name

* Remove white space

* Remove unneeded component of deprecated-ci
2023-04-10 16:46:45 -05:00
Alec Scott
2c7d7388da r-gsodr: add v3.1.8 (#36746) 2023-04-10 17:37:39 -04:00
Alec Scott
1f7b1f5ee1 r-conquer: add v1.3.3 (#36743) 2023-04-10 17:34:49 -04:00
Alec Scott
6054daf1e0 r-data-table: add v1.14.8 (#36744) 2023-04-10 17:34:28 -04:00
Alec Scott
15bbd0724d r-cachem: add v1.0.7 (#36741) 2023-04-10 17:34:06 -04:00
Alec Scott
5771500255 r-ff: add v4.0.9 (#36745) 2023-04-10 16:03:03 -05:00
Alec Scott
4d535ed50f r-blob: add v1.2.4 (#36740) 2023-04-10 15:50:03 -05:00
Alec Scott
219c49db04 r-intervals: add v0.15.3 (#36727) 2023-04-10 15:48:17 -05:00
snehring
4006ce28aa armadillo: add new version 12.2.0 (#36755) 2023-04-10 12:37:51 -07:00
Thomas Madlener
cb9ae0fa77 krb5, xrootd: Make usable with OpenSSL 3 (#36728) 2023-04-10 05:34:08 -04:00
Michael Kuhn
c35e59a6c4 zstd: add 1.5.5 (#36717) 2023-04-09 21:02:18 +02:00
Jonathon Anderson
c32aeea1a4 py-wheel: Add dependency on python+ctypes (#36716) 2023-04-09 10:59:05 -05:00
Adam J. Stewart
6de3786c36 cuDNN: LD_LIBRARY_PATH required (#36466) 2023-04-08 12:05:34 -05:00
SXS Bot
3869761216 spectre: add v2023.04.07 (#36704)
Co-authored-by: sxs-bot <sxs-bot@users.noreply.github.com>
2023-04-07 21:17:22 -04:00
Vanessasaurus
04209599ef Automated deployment to update package flux-pmix 2023-04-07 (#36700)
* Automated deployment to update package flux-pmix 2023-04-07
* fix style

Signed-off-by: vsoch <vsoch@users.noreply.github.com>

---------

Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2023-04-07 20:27:32 -04:00
Wouter Deconinck
9d05e65578 wayland: new version 1.22 (#36694)
No build system changes compared to the previous release, https://gitlab.freedesktop.org/wayland/wayland/-/compare/1.21.0...1.22.0
2023-04-07 16:58:15 -07:00
John W. Parent
9d7b79d8e1 Windows support: correct 64-bit check (#36578) 2023-04-07 16:50:54 -07:00
Wouter Deconinck
65ee864232 cppzmq: new versions 4.8.1, 4.9.0 (updated cmake dependency) (#36699)
* cppzmq: new versions 4.8.1, 4.9.0 (updated cmake dependency)
  No important changes in the build system (https://github.com/zeromq/cppzmq/compare/v4.7.1...v4.9.0), other than the more recent cmake required starting with 4.8.0.
  There is also a patch version 4.8.0, but presumably 4.8.1 is preferred.
* cppzmq: add maintainer
2023-04-07 09:36:22 -07:00
Richard Berger
4703c3bcb8 lammps: add missing hipfft dependency (#36697) 2023-04-07 09:28:28 -07:00
Massimiliano Culpo
a7b2196eab Fix incorrect reformatting of spack.yaml (#36698)
* Extract a method to warn when the manifest is not up-to-date

* Extract methods to update the repository and ensure dir exists

* Simplify further the write method, add failing unit-test

* Fix the function computing YAML equivalence between two instances
2023-04-07 13:37:28 +02:00
kwryankrattiger
4ace1e660a Ecp hdf5 vol (#35195)
* ECP-SDK: enable hdf5 VOL adapters
- When +hdf5, enable VOL adapters suitable for the SDK.
- Each VOL package must prepend to the HDF5_PLUGIN_PATH.
- hdf5: 1.13.3 will break existing VOL packages, constrain
  VOLs related to SDK and add note to keep 1.13.2 available.
- hdf5-vol-async:
    - Do not set HDF5_VOL_CONNECTOR, consumers must opt-in.
    - Enforce DAG constraints on MPI to require threaded version.
    - Depend on an explicit version of argbots to relax
      concretization issues in other spack environments.
- paraview: fix compiler flag usage for the 110 ABI (followup to #33617).
* ECP Data and ViS: Add constraits for HDF5 VOLS
* CI: HDF5 1.14 builds without VisIt
* hdf5-vol-async: Update docs string

---------

Co-authored-by: Stephen McDowell <stephen.mcdowell@kitware.com>
2023-04-06 15:21:11 -07:00
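The HDF5_PLUGIN_PATH convention mentioned in the first bullet group would look something like this in a VOL connector's recipe; a sketch under the assumption that the plugin lands in the package's lib directory:

```python
def setup_run_environment(self, env):
    # Prepend rather than overwrite so several VOL connectors can
    # coexist on the path; consumers still opt in explicitly by
    # setting HDF5_VOL_CONNECTOR themselves.
    env.prepend_path("HDF5_PLUGIN_PATH", self.prefix.lib)
```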
SoniaScard
12eff8daad Packages/ophidia server (#36632)
* ophidia-server: new package at v1.7
* Merge
* Update package.py
  Fix style in package.py
* Fix maintainers syntax

---------

Co-authored-by: SoniaScard <SoniaScard@users.noreply.github.com>
2023-04-06 17:11:52 -04:00
Adam J. Stewart
017d66eb79 abseil-cpp: add v20230125.2 (#36636) 2023-04-06 13:29:42 -07:00
kwryankrattiger
9e11d0e489 ParaView: add latest release v5.11.1 (#36577)
* ParaView: add latest release v5.11.1

* ParaView: No longer need XDMF patch after 5.11.1
2023-04-06 11:18:42 -07:00
Miroslav Stoyanov
74ad92e16b manually add mpi home to testing (#36633) 2023-04-06 10:52:43 -07:00
Jonathon Anderson
3e2d03984e Fix py-torch build on Linux >=6.0.3 (#35983)
* Fix py-torch build on Linux >=6.0.3

* Update var/spack/repos/builtin/packages/py-torch/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-04-06 13:48:05 -04:00
Adam J. Stewart
bd8dc1919b py-grpcio: add v1.52.0 (#36637) 2023-04-06 10:46:52 -07:00
Adam J. Stewart
6922e8f282 py-portpicker: add new package (#36638) 2023-04-06 10:39:15 -07:00
kwryankrattiger
a70f307f7e ParaView: Add variant for raytracing (#36640)
* ParaView: Add variant for raytracing

* [@spackbot] updating style on behalf of kwryankrattiger
2023-04-06 10:30:00 -07:00
Wouter Deconinck
c338d2fb02 root: depends_on vc@1.3.0: (open-ended) (#36663) 2023-04-06 09:51:20 -07:00
Vanessasaurus
b32edd3a72 Automated deployment to update package flux-core 2023-04-06 (#36676)
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
2023-04-06 12:47:51 -04:00
John W. Parent
44e15da92c Spack on Windows: spack.bat comment syntax (#36531)
Comments must start with `rem` in most cases.
2023-04-06 09:43:27 -07:00
Cameron Stanavige
685dd7272a veloc/scr component releases (#36672)
VeloC/SCR component releases needed for upcoming VeloC release.

* AXL v0.8.0
* ER  v0.4.0
* KVTree v1.4.0
* Rankstr v0.3.0
* Redset v0.3.0
* Shuffile v0.3.0
* Spath v0.3.0

Added some dependency compatibility constraints, as the new component
versions change how their cmake config works.
2023-04-06 09:27:32 -07:00
Massimiliano Culpo
0b9694575f spack install: fail if --log-file and not --log-format (#36684)
fixes #34551

"spack install" now fails if a user passed the --logfile option without specifying a log format.
2023-04-06 09:09:18 -07:00
Rocco Meli
6e490f2239 GNINA: add cuDNN variant and make RDKit optional (#36270)
* add cuDNN variant and make RDKit optional
* [@spackbot] updating style on behalf of RMeli
* add newer version of rdkit

---------

Co-authored-by: RMeli <RMeli@users.noreply.github.com>
2023-04-06 08:49:02 -07:00
Massimiliano Culpo
0ce548a850 singularity: deprecate all versions pre-fork, remove maintainer (#36654) 2023-04-06 11:31:10 +02:00
Alec Scott
d19475c3b4 crmc: add v2.0.1 (#36236)
Hint: The diff for this file is large because it initially had CRLF (DOS) eol style.
2023-04-06 06:12:56 +02:00
Andrey Perestoronin
3d1320c834 intel-oneapi-openvpl-2023.1.0 (#36668) 2023-04-05 17:48:12 -04:00
Andrey Perestoronin
bb735fb896 intel 2023.1.0 release fix versioning (#36665) 2023-04-05 16:58:11 -04:00
Jen Herting
767046f8da [py-mizani] added version 0.8.1 (#36500) 2023-04-05 16:57:56 -04:00
Axel Huebl
8f800aca72 openPMD-api: 0.15.1 Patch version.hpp (#36662)
Forgot to bump a version in the public header during release.
2023-04-05 16:28:24 -04:00
Jen Herting
65556eb53e New package: py-qiskit-ibm-provider and more (#36661)
* [py-singledispatchmethod] new package

* [py-singledispatchmethod] added missing dependency

* new package required for qiskit

* [py-rustworkx]

- removed redundant dependency on py-wheel
- added dependency on py-numpy

* [py-websocket-client] added version 1.5.1

* modified it is ahead of upstream

* [py-qiskit-terra]

- flake8
- removed duplicate dependencies
- fixed acceptable range for py-symengine

* [py-qiskit-terra] tweedledum no longer needed

* [py-qiskit-terra] depends on py-singledispatchmethod

* [py-qiskit-ibm-provider] new package

---------

Co-authored-by: Sid Pendelberry <sid@rit.edu>
2023-04-05 14:51:35 -04:00
Massimiliano Culpo
a080cf0193 archspec: add v0.2.0, deprecate old versions (#36653)
* archspec: add v0.2.0, deprecate old versions

* Simplify version ranges

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Remove py-setuptools

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-04-05 14:51:12 -04:00
Massimiliano Culpo
3e1c6b27a4 Update archspec to HEAD of develop (#36657) 2023-04-05 13:23:42 -04:00
Jen Herting
f28219cdda [py-backports-zoneinfo] New package (#36499)
* [py-backports-zoneinfo] New package

* [py-backports-zoneinfo] removed references to python 3.6
2023-04-05 12:05:20 -05:00
Daniel Ahlin
4206478f5e gromacs: add cufftmp variant to enable distributed PME when using CUDA (#36552)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2023-04-05 12:30:24 -04:00
Dan Bonachea
eebfb1cf07 UPC++/GASNet-EX 2023.3.0 update (#36629) 2023-04-05 13:55:17 +02:00
Mikael Simberg
8235e1f38a pika: Add version 0.14.0 (#36652)
* Rename PIKA_WITH_P2300_REFERENCE_IMPLEMENTATION CMake option in pika package

* Remove unnecessary use of self in pika package

* Use append instead of list += for single options in pika package

* Add pika 0.14.0
2023-04-05 13:18:48 +02:00
Massimiliano Culpo
a1703fa437 binutils: deprecate old version, build static on darwin (#36646)
The issue comes from libctf.
2023-04-05 09:20:09 +02:00
Massimiliano Culpo
4b3cc800ff minisign: add v0.11 (#36647) 2023-04-04 23:59:27 -04:00
Thomas-Ulrich
5cf7c60d74 add tandem package (#36546)
* add tandem package
* apply black
* fix import
* fix year of license
* add version 1.0 and associated compile fix
* change git to property
* add conflict to intel
2023-04-04 18:37:20 -07:00
SoniaScard
674c22f815 Packages/ophidia primitives (#36575)
* ophidia-primitives: new package at v1.7
* ophidia-primitives: Add mantainers
* ophidia-primitives: Fix style
* ophidia-primitives: update configure arguments
* Fix style in package.py
---------

Co-authored-by: SoniaScard <SoniaScard@users.noreply.github.com>
Co-authored-by: Donatello Elia <eldoo@users.noreply.github.com>
2023-04-04 18:30:22 -07:00
Luc Berger
5a4890cef8 Kokkos: add release 4.0.0 (#36532)
* Kokkos: add release 4.0.0
* Kokkos: updating default c++ standard requirement
  Kokkos now requires C++17 as its new minimum C++ standard.
* Kokkos: adding support for new GPU architectures
  The new updates include NVIDIA Hopper and AMD Navi
* Kokkos: fixing style...
* paraview +rocm: constrain kokkos dep to @:3.7.01

---------

Co-authored-by: eugeneswalker <38933153+eugeneswalker@users.noreply.github.com>
Co-authored-by: eugeneswalker <eugenesunsetwalker@gmail.com>
2023-04-04 18:23:53 -07:00
afzpatel
66a9a9caa8 Enable tests for rocm packages - rocm-smi-lib, rocm-cmake and rocm-clang-ocl (#35299)
* initial commit for enabling test for rocm-smi-lib, rocm-cmake and rocm-clang-ocl
* fix styling and cleaning code
* disabling some tests for rocm-smi-lib
* fix style errors
2023-04-04 18:11:26 -07:00
Adam J. Stewart
c3a41c742e bazel: new versions, macOS patches (#36641) 2023-04-04 17:28:05 -04:00
Massimiliano Culpo
664c12c7be zig: add v0.10.1 (#36639) 2023-04-04 21:57:30 +02:00
John W. Parent
44306d3492 hdf5 package: fix wrapper handling on Windows (#36265)
* msmpi has no wrappers so don't set MPI_CXX_COMPILER etc. for that
  MPI implementation
* hdf5 on Windows does not have h5cc etc., so do not try to filter
  them on Windows
2023-04-04 11:40:30 -07:00
Howard Pritchard
354d498971 OpenMPI: make memchecker variant sticky (#36329)
related to #36255

Signed-off-by: Howard Pritchard <howardp@lanl.gov>
2023-04-04 14:23:07 -04:00
willdunklin
d2fd68071e sensei: fix ascent constraints (#36622) 2023-04-04 11:49:13 -05:00
eugeneswalker
1a2510d031 suite-sparse ^openblas~shared threads=openmp: add -fopenmp (#36521)
* suite-sparse ^openblas~shared threads=openmp: add -fopenmp to cflags, cxxflags

* use compiler.openmp_flag
2023-04-04 08:52:34 -07:00
Jonathon Anderson
78f5b2a2c6 Add workflow:rules:always to spack ci output (#36011) 2023-04-04 10:03:58 -05:00
Wouter Deconinck
d0cd340628 py-uproot, py-awkward, py-awkward-cpp: new versions (#36421)
* py-uproot: new versions 5.0.4, 5.0.5

No changes in dependency versions

* py-awkward-cpp: new versions

* py-awkward: new versions

* py-awkward: new version in 1.10.* series
2023-04-04 09:46:29 -05:00
Filippo Spiga
6a2f80e2c6 Adding NVIDIA HPC SDK 23.3 (#36624)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2023-04-04 14:55:26 +02:00
Valentin Volkl
ed0c6cd0f3 frontier-client: fix build with clang (#29590) 2023-04-04 08:23:32 -04:00
Jean Luca Bez
6aa4c29119 update h5bench VOL-ASYNC dependency with fixes (#36514) 2023-04-04 10:39:17 +02:00
Harmen Stoppels
dc1399386c Make spack config update work on environments (#36542)
Previously `spack -e bla config update <section>` would treat the
environment config scope as a standard config file instead of a
single-file config scope. This fixes that.
2023-04-04 10:19:05 +02:00
Matthew Thompson
6fca0f8018 pfunit: fix error with BUILD_SHARED_LIBS (#36554) 2023-04-04 10:15:26 +02:00
Wouter Deconinck
88cc949841 libdrm: new version 2.4.115 (#35586)
https://gitlab.freedesktop.org/mesa/drm/-/compare/libdrm-2.4.114...libdrm-2.4.115 only bugfixes
2023-04-04 10:14:19 +02:00
Juan Miguel Carceller
c81d0a9e97 acts: add a dependency on git-lfs (#36517)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-04-04 10:13:11 +02:00
John W. Parent
e4794274d6 Add versions 3.24.4, 3.25.3 (#36001)
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
2023-04-04 10:11:21 +02:00
Harmen Stoppels
2b112dd02c llvm: add missing zstd dep (#36613) 2023-04-04 10:06:41 +02:00
Alex Richert
5da231969e esmf: add static netcdf-c support (#34579) 2023-04-04 10:01:21 +02:00
Valentin Volkl
c3b0806f6c lhapdfsets: fix an error by avoiding the lhapdf command (#35997)
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2023-04-04 09:56:10 +02:00
George Malerbo
77d55ebbd1 sdl2-ttf: add new package 2023-04-04 09:49:39 +02:00
Harmen Stoppels
e9a1d0a157 filter __spack_path_placeholder__ in generated module files after buildcache install (#36611)
* filter __spack_path_placeholder__ in generated module files after buildcache install

* fix windows
2023-04-04 09:45:43 +02:00
Weiqun Zhang
5560017ebe amrex: add v23.04 (#36596) 2023-04-04 09:41:48 +02:00
Adam J. Stewart
87da3a07bb PyTorch: fix ppc64le patching (#36621) 2023-04-04 09:27:23 +02:00
Daryl W. Grunau
712b30c9c5 eospac: expose version 6.5.6 (#36615)
Co-authored-by: Daryl W. Grunau <dwg@lanl.gov>
2023-04-03 23:31:14 -04:00
Andrey Perestoronin
dc194ec14c added new packages (#36608) 2023-04-03 23:30:51 -04:00
Daniel Ahlin
5c0004bbc1 gromacs: make package inherit from CudaPackage (#36551)
To conform to the Spack way of specifying cuda_arch. This commit does
not change version-specific gencode handling for older GROMACS
versions.
2023-04-03 23:10:47 -04:00
Adam J. Stewart
3d923fd5b8 py-pillow: add v9.5.0 (#36593) 2023-04-03 20:13:03 -05:00
Adam J. Stewart
a8c400bae0 py-pandas: add v2.0.0 (#36617) 2023-04-03 20:12:48 -05:00
Xavier Delaruelle
7a77ecbdb6 modules: remove default symlink on uninstall (#36454)
When an app is uninstalled, if it matches a default, then remove the
default symlink targeting its modulefile.

Until now, when a default was uninstalled, the default symlink was
left pointing to a nonexistent modulefile.
2023-04-03 22:54:18 +02:00
Mikael Simberg
91636f0e9d Bump required CMake version for stdexec (#36605) 2023-04-03 16:04:14 -04:00
Massimiliano Culpo
f91968cf6f Improve Dockerfile recipe generation (#35187)
- Update default image to Ubuntu 22.04 (previously was still Ubuntu 18.04)
- Optionally use depfiles to install the environment within the container
- Allow extending Dockerfile Jinja2 template
- Allow extending Singularity definition file Jinja2 template
- Deprecate previous options to add extra instructions
2023-04-03 21:05:19 +02:00
Benjamin Meyers
3d149a7db2 New package py-pycorenlp (#36609) 2023-04-03 13:40:56 -05:00
Cory Bloor
cfea2d1010 hip: mathlibs inherit CudaPackage and ROCmPackage (#34586)
Unless the amdgpu_target is overridden, the libraries will default to
being built for cuda, since amdgpu_target=none is both the default and in
conflict with +rocm. This requires a custom disjoint set that includes
both the 'auto' value used by the rocm mathlibs and the 'none' value
used by ROCmPackage.

* Fix search for hip+cuda in hipcub@5.1 and later

This patch is not strictly necessary, but it may fix the search for HIP
in certain environments.

* Backport fix for CUDA 11.5 to hipsparse
2023-04-03 20:28:24 +02:00
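A minimal sketch of the variant pattern described above, assuming Spack's `spack.variant.disjoint_sets` helper and a hypothetical target list (illustrative, not the actual mathlib package code):

```
from spack.variant import disjoint_sets

# Hypothetical subset of AMD GPU targets; the real lists live in ROCmPackage.
amdgpu_targets = ("gfx906", "gfx908", "gfx90a")

# 'auto' and 'none' each form their own set, so neither can be combined
# with the other or with any concrete architecture; 'auto' is the default.
values = (
    disjoint_sets(("auto",), ("none",), amdgpu_targets)
    .with_default("auto")
    .with_error("'auto' and 'none' cannot be combined with specific targets")
)
# In a package recipe this would be passed as
# variant("amdgpu_target", values=values, ...).
```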
vijay kallesh
0860139c83 Updated homepage URL (#36519) 2023-04-03 11:21:13 -07:00
Vanessasaurus
182ef042ce Automated deployment to update package flux-sched 2023-04-01 (#36591)
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
2023-04-03 10:48:27 -07:00
Massimiliano Culpo
9d68100891 Rework error handling within the ASP logic program (#36536)
* Reduce effort on grounding by employing cardinality constraints

If we use a cardinality constraint instead of a rule
using pairs of values, we'll end up grounding 1 rule
instead of all the possible pair combinations of the
allowed values.

* Display all errors from concretization, instead of just one

If clingo produces multiple "error" facts, we now print all
of them in the error message. Before, we were printing just
the one with the lowest priority.

Consolidate a few common patterns in concretize.lp to ensure
that certain node attributes have one and only one value
assigned.

All errors are displayed, so use a single criterion
instead of three.

* Account for weights in concretize.lp

To recover the optimization order we had before, account
for weights of errors when minimizing.

The priority is mapped to powers of 10, so we effectively
get back the same results as with priorities.
2023-04-03 19:23:29 +02:00
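A small arithmetic illustration of the powers-of-10 mapping described above (illustrative Python, not the solver code); it preserves the old per-priority ordering as long as fewer than ten errors occur at any single priority level:

```
def total_weight(errors):
    # errors: iterable of (priority, count); higher priority, larger power of 10
    return sum(count * 10**priority for priority, count in errors)

# One priority-2 error still outweighs nine priority-1 errors, so a single
# minimization over total_weight recovers the per-priority optimization order.
assert total_weight([(2, 1)]) > total_weight([(1, 9)])
```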
Axel Huebl
ebffc53b93 openPMD-api: 0.15.1 (#36604)
Latest patch release, fixing build issues.
2023-04-03 16:39:34 +02:00
Robert Cohn
4c3edac454 namd: add oneapi support (#36139) 2023-04-03 16:39:09 +02:00
Matin Raayai
5e33f6bbc5 External Detection for llvm-amdgpu (#36595)
* Added external detection of llvm-amdgpu.

* Style cleaning for llvm-amdgpu.
2023-04-03 14:51:58 +02:00
Harmen Stoppels
a19f13f57a zstd: fix makefile build, default to makefile (#36606) 2023-04-03 14:50:38 +02:00
Harmen Stoppels
93cad90413 openblas: use "all" target since it's marked sequential (#35905)
openblas likes to concurrently write to the same archive from different
targets, and solves that through `.NOTPARALLEL: all`.

We run `make x y z`, which is not affected by `.NOTPARALLEL`, and run into
races.
2023-04-03 11:45:38 +02:00
Xavier Delaruelle
7e4927b892 modules: correctly detect explicit installation (#36533)
When generating a modulefile, correctly detect a software installation
that the user asked for as an explicit installation.

The explicit installation status was previously fetched from the spec's
database record, which was only set after modulefile generation.

The code is updated to pass the explicit status of a software
installation down to the object that generates modulefiles.

Fixes #34730.
Fixes #12105.

A value for the explicit argument has to be set when creating a new
installation, but for operations on an existing installation this value
is retrieved from the database. Such operations are: module rm, module
refresh, module setdefaults, or when the get_module function is used.

Tests that mimic an installation are updated accordingly, since the
explicit argument has to be set in that situation.
2023-04-03 11:19:18 +02:00
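A hedged sketch of the flow described above (all names here are hypothetical, not the actual Spack code): the explicit flag is passed down when a new installation is created, and looked up in the database for operations on existing installations:

```
def resolve_explicit(spec, database, explicit=None):
    # New installation: the caller passes explicit=True/False down.
    if explicit is not None:
        return explicit
    # Existing installation (module rm/refresh/setdefaults, get_module):
    # fall back to the database record.
    record = database.get_record(spec)  # hypothetical lookup
    return bool(record and record.explicit)
```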
Harmen Stoppels
88548ba76f binutils 2.40: add missing zstd dep (#36598) 2023-04-03 06:41:38 +02:00
Laura Weber
2117e37a0b sublime-text: Fix the dependencies of the recent update (#36562) 2023-04-02 19:53:54 +02:00
Tristan Carel
ac62817ba0 petsc: simplify dependencies with hypre (#36573)
* petsc: simplify dependencies with hypre

* add propagation of `complex` variant to hypre
2023-04-02 10:00:10 -05:00
Harmen Stoppels
08cf82b977 papyrus: disable tests (#36599) 2023-04-02 13:34:25 +02:00
Leonard-Anderson-NNL
4242989f22 New package: py-mike and dependency. (#36587)
Co-authored-by: Cloud User <leonardanderson@leonardander004.hzterscemazurawp3xenxzahla.bx.internal.cloudapp.net>
2023-04-01 21:53:48 -05:00
Juan Miguel Carceller
b524351e10 root: vc variant: root accepts only vc version 1.3.0 (#36594)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-04-01 21:47:34 +02:00
Alex Richert
b9bde03df5 yafyaml: Add fismahigh variant to comply with FISMA standards (#36471) 2023-04-01 21:12:00 +02:00
Satish Balay
5c3bc36fde petsc,py-petsc4py,slepc,py-slepc4py: add version 3.19.0 (#36588) 2023-04-01 12:40:19 -05:00
John W. Parent
2a2c9cdf02 CMake: add version 3.26.2 (#36567) 2023-04-01 15:21:13 +02:00
Bernhard Kaindl
8b98564840 libxpm: Fix linking with libintl (#36568)
Original Author of this change: Chris Green <greenc@fnal.gov>

Two changes:
- Remove adding the library path using -L: It is obsolete now
  that the library paths come before the system paths.
- Link with -lintl only if the gettext recipe provides it:
  When we are on a glibc system, we can use external gettext,
  which means we use the libintl inside libc.so: no -lintl then.

This change was already submitted in #35450 and reviewed, but is
stuck in this big PR, which is trying to do too much in a single PR.
2023-04-01 15:11:14 +02:00
John W. Parent
d79c8179fc Perl package: change attr to method (#36580)
This fixes a bug in the Windows build of Perl.

An attribute defined in the package class is inaccessible from the
install method due to the builder; refactor it to be a method.
2023-03-31 21:57:55 -04:00
John W. Parent
1175831203 netcdf-c package: fix patch applied on Windows (#36581) 2023-03-31 20:52:36 -04:00
Rocco Meli
210b2e8caa MDAnalysis 2.4.3 (#36541)
* mda 2.4.3

* black

* mda update and reshuffle

* add tidynamics
2023-03-31 18:47:56 +00:00
John W. Parent
a8e2961010 Allow configurable stage names (#36509)
Add `config:stage_name` which is a Spec format string that can
customize the names of stages created by Spack. This was primarily
created to allow generating shorter stage names on Windows (along
with `config:build_stage`, this can be used to create stages with
short absolute paths).

By default, this is not set and the prior stage name format is used.

This also removes the username component that is always added to
Stage paths on Windows (if users want to include this, they can
add it to the `build_stage`).
2023-03-31 11:46:47 -07:00
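For illustration, a hedged example of the kind of Spec format string that `config:stage_name` accepts (the tokens follow Spack's spec formatting; the exact format string here is an assumption, not the shipped default):

```
from spack.spec import Spec

spec = Spec("zlib@1.2.13").concretized()
# A short, unambiguous stage name such as "zlib-1.2.13-abcdefg",
# useful for keeping absolute stage paths short on Windows:
print(spec.format("{name}-{version}-{hash:7}"))
```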
Alec Scott
a6a364d3b8 py-aioitertools: add v0.11.0 (#36540)
* py-aioitertools: add v0.11.0

* Update var/spack/repos/builtin/packages/py-aioitertools/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Update package.py

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-03-31 14:16:12 -04:00
Harmen Stoppels
46bbce1922 compiler wrapper: fix -Xlinker parsing (#35929)
* compiler wrapper: fix -Xlinker parsing
* handle the case of -rpath without a value; avoid dropping the flag
* also handle the -Xlinker -rpath -Xlinker without further args case...
* fix test
* get rid of global $rp var, reduce branching
2023-03-31 09:47:24 -07:00
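The wrapper itself is a shell script; the following is a hedged Python sketch of the pairing logic described above, showing how `-Xlinker -rpath -Xlinker <dir>` sequences are collected and how a trailing `-rpath` without a value is kept rather than dropped:

```
def split_rpaths(args):
    """Separate rpath directories from the remaining linker arguments."""
    args = list(args)
    rpaths, other = [], []
    i = 0
    while i < len(args):
        if args[i : i + 2] == ["-Xlinker", "-rpath"]:
            if args[i + 2 : i + 3] == ["-Xlinker"] and i + 3 < len(args):
                rpaths.append(args[i + 3])  # complete 4-token sequence
                i += 4
            else:
                other.extend(args[i : i + 2])  # -rpath without a value: keep it
                i += 2
        else:
            other.append(args[i])
            i += 1
    return rpaths, other

# split_rpaths(["-Xlinker", "-rpath", "-Xlinker", "/opt/lib", "-lm"])
# returns (["/opt/lib"], ["-lm"])
```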
Satish Balay
14465e61ae petsc: builds break with gnu-make-4.4.1 (#35906)
Use the 'make' detected by Spack within the PETSc build
2023-03-31 09:28:48 -07:00
kwryankrattiger
4064191fbc Ascent: Drop VTK-h dependency for 0.9 (#36458)
* Ascent: Drop VTK-h dependency for 0.9

* Ascent: Remove duplicate OpenMP constraints

* Ascent: 0.9.0 cannot build with vtk-m@2

* Ascent: Only needs vtkm when using vtkh

* Ascent: Require fides when building with ADIOS2
2023-03-31 11:24:45 -05:00
renjithravindrankannath
7bb64b526f rocsparse: exclude v5.4 from being patched (#36571) 2023-03-31 18:14:04 +02:00
Cameron Rutherford
4f42092f4f hiop: add v0.7.2 (#36570) 2023-03-31 11:33:26 -04:00
Tristan Carel
3b5b9e8474 steps: add variants gmsh and kokkos (#36543) 2023-03-31 11:27:58 -04:00
Paul Romano
aabd76cb74 OpenMC: add v0.13.3 (#36564)
* OpenMC: add v0.13.3

* Add missing version in list for py-openmc
2023-03-31 11:18:26 -04:00
H. Joe Lee
fbde853360 hdf5: add a map variant (#35523)
* hdf5: add map variant
* hdf5: add minimum version for map variant
* hdf5: fix black error
2023-03-31 10:48:08 -04:00
Benjamin Meyers
951f691d1b New: py-dalib, py-qdldl, py-qpsolvers; Update: py-ecos, py-scs, py-osqp (#36549)
* New: py-dalib, py-qdldl, py-qpsolvers; Update: py-ecos, py-scs, py-osqp

* [@spackbot] updating style on behalf of meyersbs

* Update py-qpsolvers homepage

* Add when clause to py-future
2023-03-31 09:34:02 -05:00
Benjamin Meyers
27978fd355 New package py-json2html (#36111)
* New package py-json2html

* Remove deprecated python deps
2023-03-31 09:32:31 -05:00
John W. Parent
349b5d982b Add latest CMake release (#36119) 2023-03-31 08:28:49 -04:00
Ashwin Kumar Karnad
c2c56c1ca1 octopus: Add sparskit variant (#36494)
Co-authored-by: Hans Fangohr <fangohr@users.noreply.github.com>
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
2023-03-31 08:18:53 -04:00
Weston Ortiz
7e06b5bc88 goma: add additional versions up to 7.4.3 (#36561) 2023-03-31 06:49:12 -04:00
Miroslav Stoyanov
17cec3b101 Deprecate old tasmanian (#36534)
* deprecated old versions and deps
* syntax fix

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2023-03-31 06:48:48 -04:00
Harmen Stoppels
b0e7b8c794 typehint a few globals (#36544) 2023-03-31 08:32:56 +02:00
Alec Scott
98ac3acc92 py-aiobotocore: add v2.5.0 (#36538)
* py-aiobotocore: add v2.5.0

* Fix dependencies for v2.5.0

* Add py-botocore@1.29.76
2023-03-30 20:52:37 -05:00
Benjamin Meyers
7d66c3b5d1 New package py-coclust (#36548)
* New package py-coclust

* [@spackbot] updating style on behalf of meyersbs
2023-03-30 20:41:15 -05:00
Benjamin Meyers
ae9a65ae56 New package py-soyclustering (#36550)
* New package py-soyclustering

* [@spackbot] updating style on behalf of meyersbs
2023-03-30 20:33:33 -05:00
Michael Kuhn
996442ea9b py-h5py: add 3.8.0 (#35960) 2023-03-30 20:20:55 -05:00
marcosmazz
bd20b7b8b7 QE v7.1 add post-processing tools installation (#36484)
* QE v7.1 add post-processing tools installation

Quantum-Espresso@7.1 ships with an incomplete CMakeLists.txt that prevents the installation of post-processing tools.
Added patches; this is fixed in two different upstream commits.

* fixed style

* removed spaces

* added MR references
2023-03-30 16:28:14 -07:00
Scott Wittenburg
08426ec492 gitlab ci: request more memory for publish job (#36560) 2023-03-31 00:19:59 +02:00
Adam J. Stewart
f863859bde py-lightning: add v2.0.1 (#36565) 2023-03-30 17:57:31 -04:00
Ashwin Kumar Karnad
07559d778e octopus: Add berkeleygw variant (#36495)
* octopus: Add berkeleygw variant
* octopus: style fix
2023-03-30 14:57:05 -07:00
Edoardo Aprà
33833a4f32 nwchem: add v7.2.0 (#36193)
* removed env variables that are now the default; added fftw3 spec; removed obsolete version 7.0.0

* tweak NWCHEM_MODULES based on version
2023-03-30 22:15:56 +02:00
Adam J. Stewart
e544bb6271 py-scipy: py-pythran is only a build dep (#36528) 2023-03-30 16:13:17 -04:00
Massimiliano Culpo
e1a104e3a2 Add type-hints to spack.bootstrap (#36491) 2023-03-30 22:12:18 +02:00
Alex Richert
f5624f096c perl: add patching to allow building with intel@19.1.3 (#35666) 2023-03-30 13:07:33 -04:00
Cyrus Harrison
d82fc158ca add conduit 0.8.7 release (#36357) 2023-03-30 12:40:35 -04:00
Ashwin Kumar Karnad
709c5c4844 octopus: add etsf-io variant (#36490) 2023-03-30 18:15:04 +02:00
Ashwin Kumar Karnad
e356390575 octopus: Add NFFT variant (#36489) 2023-03-30 18:04:29 +02:00
Ashwin Kumar Karnad
906a8c56af octopus+cgal: requires boost: add --with-boost=spec["boost"].prefix (#36472) 2023-03-30 17:37:11 +02:00
Adam J. Stewart
6c5d3299fe Feature Request template: remove spack version requirement (#36503) 2023-03-30 16:39:36 +02:00
Richard Berger
fefa4b8cc4 r3d: add pic variant (#36476) 2023-03-30 16:37:35 +02:00
Erik Heeren
2ca471745c easyloggingpp: new package (#36487) 2023-03-30 16:35:33 +02:00
Alec Scott
3756d5761b poke: add v3.0 (#36479) 2023-03-30 16:32:05 +02:00
Nathalie Furmento
0c05f2bc21 Add StarPU latest release 1.4.0 (#36518)
* starpu: add v1.4.0
2023-03-30 09:59:00 -04:00
Harmen Stoppels
054cbe84de python: sequential make install :( (#35557) 2023-03-30 07:16:11 -04:00
Eric Brugger
a185343493 VisIt: Update to VisIt 3.3.2. (#36510) 2023-03-30 03:34:20 -04:00
Massimiliano Culpo
16404034dc Fix a couple of minor bugs with ASP weights (#36522)
Reorder versions so that deprecated ones are last.

Account for the default not being used when the variant exists.
2023-03-30 01:08:57 -04:00
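A hedged illustration of the reordering (the field names are hypothetical, not the solver code): Python's `sorted` is stable, so keying on the deprecation flag pushes deprecated versions to the end while keeping the newest-first order within each group:

```
from collections import namedtuple

Version = namedtuple("Version", ["version", "deprecated"])
versions = [Version("2.1", False), Version("2.0", True), Version("1.9", False)]

# False sorts before True, so the deprecated 2.0 moves to the end while
# 2.1 and 1.9 keep their relative order.
ordered = sorted(versions, key=lambda v: v.deprecated)
```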
Sergey Kosukhin
516a023173 py-findlibs: add patch to support paths under lib64 (#36524) 2023-03-30 01:04:15 -04:00
Jack Morrison
f8064cd744 * Add new linux-headers version 6.2.8. (#36474)
* Add new libfabric versions 1.17.1, 1.17.0, 1.16.0, 1.15.2.
* Add libfabric dependency on numactl and linux-headers when building
  with OPX provider support.
* Set libfabric flag_handler to pass compiler flags as arguments to
  configure.
2023-03-29 21:34:33 -05:00
Adam J. Stewart
781c87840d py-kornia: add v0.6.11 (#36512) 2023-03-29 14:33:23 -05:00
Adam J. Stewart
b9a3254b66 py-black: add v23.3.0 (#36513) 2023-03-29 14:33:08 -05:00
Adam J. Stewart
20d2e6a9fd py-pyproj: add v3.5.0 (#36504)
* py-pyproj: add v3.5.0
* PROJ: add v9, fix datum grid installation
* [@spackbot] updating style on behalf of adamjstewart

---------

Co-authored-by: adamjstewart <adamjstewart@users.noreply.github.com>
2023-03-29 12:30:39 -07:00
snehring
b6390e335c gmap-gsnap: Adding new version and perl run dep (#36508) 2023-03-29 12:28:17 -07:00
Ivan Maidanski
3e09f04241 libatomic_ops: add v7.8.0 (#36515) 2023-03-29 12:23:16 -07:00
Harmen Stoppels
67c4ada08b py-cython: set upperbound for backported patch (#36525) 2023-03-29 12:43:40 -04:00
Adam J. Stewart
701e46464d Python: add Apple Clang version conflicts (#36511)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2023-03-29 10:49:10 -04:00
Harmen Stoppels
dba57ff113 ci: require x86_64_v3 everywhere (#36158) 2023-03-29 15:58:48 +02:00
Massimiliano Culpo
7579eaf75a Fix dyninst build with old boost (#36198)
Fix patching old boost versions to account for builders.

Add a proper version constraint on boost for recent dyninst.
The constraint can be found in dyninst source code under
"cmake/Boost.cmake" which contains:

  set(_boost_min_version 1.70.0)

Co-authored-by: Greg Becker <becker33@llnl.gov>
2023-03-29 15:26:51 +02:00
julian-zeidler
aa99063065 charliecloud: add squashfuse variant, add v0.32, v0.31 (#36404) 2023-03-29 01:33:05 -04:00
John W. Parent
18c21d0c32 Add CMake version 3.26.1 (#36349) 2023-03-28 21:03:39 -04:00
John W. Parent
6b03c9f285 Windows: spack.bat CLI handling robustness (#36281)
* The current develop spack.bat file cannot handle any reserved characters
  being passed via the CLI, particularly '=' and '?'. To address this,
  redo the CLI-parsing `for` loop to use custom logic to allow for more
  granular handling of CLI args.
* We previously took a less-than-ideal approach to escaping local scope and
  handling unset variables, as well as the actual parsing of CLI
  arguments. To address this, don't quote the args and then try to
  parse the quotes we just added (which left spack flags
  undefined). Instead, leverage batch script features. Since we are
  not unnecessarily quoting things, we don't need to think about
  removing quotes, and in the case of paths with spaces, we should _not_
  be removing the quotes as we currently do.
2023-03-28 16:50:37 -07:00
John W. Parent
7ffe2fadfe WGL package: correct libs/headers detection (#35113)
Corrects libs detection with a more specific root; otherwise there
can be inconsistencies between the version of WGL requested and the
version picked up by `find_libraries`.

Corrects headers detection - win-sdk, win-wdk, and WGL headers all
exist under the same directory, so we can compute the headers for WGL
without querying the spec for win-sdk (which causes errors).

This commit also removes the `plat` variant of `wgl`, which is
redundant with the Spec's target.
2023-03-28 16:31:10 -07:00
Ashwin Kumar Karnad
a3a9b48ed7 octopus: Add pnfft variant (#36492) 2023-03-28 13:50:36 -07:00
Laura Weber
e0fb737e8e SublimeText: add Sublime Text 4, build 4143 (#36480)
* Add Sublime Text 4, build 4143
* Reformatted with black
* Manual formatting adjustments.
2023-03-28 13:49:27 -07:00
Alec Scott
5e70943d1b perl-extutils-makemaker: add v7.70 (#36478) 2023-03-28 13:43:20 -07:00
Alec Scott
a48abfee75 at-spi2-core: add v2.48.0 (#36477) 2023-03-28 13:37:58 -07:00
Erik Heeren
af86759116 glm: add version 0.9.9.3 (#36486) 2023-03-28 13:34:21 -07:00
Erik Schnetter
6a868ec9c5 snappy: New version 1.1.10 (#36473) 2023-03-28 13:09:17 -07:00
Ryan Marcellino
c6ab42a86a xdotool: add version 3.20211022.1 (#36505) 2023-03-28 12:56:13 -07:00
Ryan Marcellino
a771bfadd1 py-pyautogui: add v0.9.53 (#36498) 2023-03-28 13:50:29 -05:00
Todd Gamblin
d76a8b7de7 retry: bugfix: package requirements with git commits (#35057) (#36347)
- [x] Specs that define 'new' versions in the require: section need to generate associated facts to indicate that those versions are valid.
- [x] add test to verify success with unknown versions.
- [x] remove an unneeded check that was leading to extra complexity and test
      failures (at this point, `hash=version` does not require listing out that version
      in `packages.yaml`)
- [x] unique index for origin (don't reuse 0)

Co-authored-by: Peter Josef Scheibel <scheibel1@llnl.gov>
2023-03-28 11:18:54 -07:00
MatthewLieber
9a93f223d6 mvapich: add pmi_version variant, add process_manager=none (#36324)
Co-authored-by: Matt Lieber <lieber.31@osu.edu>
2023-03-28 19:46:42 +02:00
Howard Pritchard
b8735fd1e4 PMIX: add current 4.2.x releases (#36496)
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
2023-03-28 10:13:26 -07:00
Erik Schnetter
eb80f4d9af simulationio: New version 9.0.3 (#36465)
* simulationio: New version 9.0.3
* simulationio: Require swig @3
2023-03-28 10:09:13 -07:00
Gerhard Theurich
9160e78729 Add ESMF v8.4.1, and number of other changes to improve ESMF integration with Spack (#35980)
* Add v8.4.1, and a few other changes.
    Minor adjustments for better alignment between Spack and the ESMF
    native build. For ESMF >= 8.3.1, Spack now defaults to using
    external-parallelio. Before that it used the internal version, which
    was PIO-1 all the way up to v8.3.0b10 anyway! Xerces is disabled by
    default.
* Deal with two long lines flagged by prechecks/style.
* Try to satisfy prechecks/style.
* Try to satisfy flake8 rules wrt indentation of continuation lines.
* Now trying to satisfy "black reformatting".
* For "black" formatting really put that ugly comma at the end before
closing parentheses. Interesting.
* Support building against external-parallelio even w/o mpi, but select the
   external-parallelio dependency accordingly.
* Correct C compiler setting.
* Handle `pnetcdf` variant consistent with how `ParallelIO` does it. And
  also pass the `pnetcdf` variant down to the `external-parallelio`
  dependency if set.
* Long line formatting again.
* Simplify handling of tarball URL construction and update sha256
  checksums.
* Align version check with recommended self.spec.satisfies().
* Deprecate v8.4.0 which has a bug that can cause memory corruption, fixed
  in v8.4.1.
* Use double quotes vs single quotes as per style-check... although
  https://spack-tutorial.readthedocs.io/en/latest/tutorial_packaging.html#querying-spec-versions
  clearly shows it with single quotes.
2023-03-28 09:33:49 -07:00
Eric Brugger
0d829b632f VisIt: Update to VisIt 3.3.1. (#36464) 2023-03-28 10:08:03 -04:00
eugeneswalker
c88b30b426 e4s power ci: ecp-data-vis-sdk: disable visit due to build issues (#36475) 2023-03-28 06:02:47 -04:00
Harmen Stoppels
d862edcce0 macos: set new ad-hoc signature with codesign after binary string replacement (#35585) 2023-03-28 00:33:35 -04:00
Erik Heeren
3b497359b7 ISPC: unblock 1.17-1.19 (#36346)
* ispc: attempts at getting more recent versions to work
* ispc: more attempts to get newer versions to build
* ispc: cleanup
* llvm: remove ispc_patches variant again
* ispc: unpin ncurses
* ispc: satisfy style checks
* ispc: 1.19 is only compatible with LLVM 13-15 
  otherwise it would not build against develop, as this now has LLVM 16
* ispc: relax LLVM version to what ispc requires itself
  verified that it builds against LLVM 13, 14, 15, but not 12 and 16
* ispc: use spec.satisfies instead of version comparison
  according to suggestions from review and docs, this is the canonical way to do it
* ispc: checksum 1.18.1
  just in order to include all versions, also checked that it builds

---------

Co-authored-by: Martin Aumüller <aumuell@reserv.at>
2023-03-27 11:30:36 -07:00
G-Ragghianti
4c599980da Package heffte: Enable smoke test to find MPI launcher (#35724)
* Enable smoke test to find MPI launcher
* Adding self as maintainer
* Style fix
* Update var/spack/repos/builtin/packages/heffte/package.py

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2023-03-27 11:26:56 -07:00
Ashwin Kumar Karnad
7266cc9b92 octopus: Update compiler flags (#36446)
* octopus: set the right compiler flags
  https://github.com/fangohr/octopus-in-spack/pull/70
* octopus: fix pep8 style issue
2023-03-27 11:13:28 -07:00
Annop Wongwathanarat
cc4a528274 sw4lite: add linking with libarmflang (#36461) 2023-03-27 11:08:53 -07:00
Shihab Shahriar Khan
efbfe38f63 arborx: make explicit the need to specify cuda_arch when +cuda (#36450) 2023-03-27 10:49:17 -07:00
Harmen Stoppels
5072e48dab Add llnl.util.filesystem.find_first (#36083)
Add a `find_first` method that locates one instance of a file
that matches a specified pattern by recursively searching a directory
tree. Unlike other `find` methods, this only locates one file at most,
so can use optimizations that avoid searching the entire tree:
Typically the relevant files are at low depth, so it makes sense to
locate files through iterative deepening and early exit.
2023-03-27 09:42:16 -07:00
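A minimal sketch of the iterative-deepening idea described above (a hypothetical re-implementation, not the actual `llnl.util.filesystem` code): scan shallow levels first, return on the first match, and only descend further when nothing has been found:

```
import fnmatch
import os

def find_first(root, pattern, max_depth=6):
    for depth in range(max_depth + 1):
        found = _find_at_depth(root, pattern, depth)
        if found is not None:
            return found  # early exit on the first match
    return None

def _find_at_depth(directory, pattern, depth):
    try:
        entries = sorted(os.scandir(directory), key=lambda e: e.name)
    except OSError:
        return None
    if depth == 0:
        for entry in entries:
            if entry.is_file() and fnmatch.fnmatch(entry.name, pattern):
                return entry.path
        return None
    for entry in entries:
        if entry.is_dir(follow_symlinks=False):
            found = _find_at_depth(entry.path, pattern, depth - 1)
            if found is not None:
                return found
    return None
```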
Satish Balay
c5b3fc6929 py-mpi4py: always force rebuild cython sources (#36460) 2023-03-27 10:05:31 -05:00
kaanolgu
deca4ce107 Babelstream Spack Package (#36164) 2023-03-27 17:04:28 +02:00
Jose E. Roman
e612436e26 New patch release SLEPc 3.18.3 (#36401)
Thanks-To: Satish Balay
2023-03-27 09:47:18 -05:00
Wouter Deconinck
fd19759783 xrootd: update patch (#36350)
Due to case change, a patch in xrootd doesn't apply cleanly. This fixes the patch by turning it into a `filter_file` with a more limited regex match.
2023-03-27 11:36:24 +02:00
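For reference, a hedged sketch of the `filter_file` pattern mentioned above (the regex and file name are placeholders, not the actual xrootd change):

```
from llnl.util.filesystem import filter_file

# Replace a narrowly matched identifier in one source file instead of
# carrying a context-sensitive patch that breaks when the file changes.
filter_file(r"\bOldCaseName\b", "NewCaseName", "src/XrdSomething.cc")
```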
Paul R. C. Kent
3265215f1d py-pyscf: add v2.2.0,2.1.1,2.1.0 libcint: add v5.2.0 (#35497)
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-03-27 11:34:17 +02:00
Auriane R
428e5726c1 Rename p2300 variant to stdexec (#36322) 2023-03-27 11:24:25 +02:00
dependabot[bot]
e0570c819c build(deps): bump actions/checkout from 3.4.0 to 3.5.0 (#36418)
Bumps [actions/checkout](https://github.com/actions/checkout) from 3.4.0 to 3.5.0.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](24cb908017...8f4b7f8486)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-27 09:16:07 +00:00
Xavier Delaruelle
ea60220a84 modules: enhance help message (#36410)
Update the tcl and lmod modulefile templates to provide more information
in the help message (name, version and target), as is already done in
whatis for lmod modulefiles.
2023-03-27 10:48:25 +02:00
Dennis Klein
84d67190a6 fairlogger: new versions, deprecations and various other updates (#36405) 2023-03-27 10:46:15 +02:00
Alec Scott
80a34ae9cc perl-moose: add v2.2203 (#36368) 2023-03-27 10:42:39 +02:00
Tamara Dahlgren
7beb57cb05 Bugfix: add perl-data-optlist 0.113 dependency (#36411) 2023-03-27 10:42:15 +02:00
Wouter Deconinck
055c30acfb root: new version 6.28.02 (#36419)
Only bugfixes: https://root.cern/doc/v628/release-notes.html#release-6.2802.

Builds fine on my test system:
```
[+] /opt/software/linux-ubuntu23.04-skylake/gcc-12.2.0/root-6.28.02-zwzhoz6d4323lggrqi66y6prg4hlzwie
```
2023-03-27 10:34:00 +02:00
Alec Scott
3cd61b9b83 libslirp: add v4.7.0 (#36423) 2023-03-27 10:33:25 +02:00
Alec Scott
6e8f449882 perl-exporter-tiny: add v1.006001 (#36424) 2023-03-27 10:33:07 +02:00
Alec Scott
51b0023638 perl-params-validate: add v1.31 (#36425) 2023-03-27 10:32:51 +02:00
Alec Scott
f02c374181 perl-time-hires: add v1.9758 (#36426) 2023-03-27 10:32:36 +02:00
Alec Scott
ff3245382e perl-time-piece: add v1.3401 (#36427) 2023-03-27 10:32:20 +02:00
Wouter Deconinck
ebc492f1e8 Xorg docs: new versions (#36455) 2023-03-27 04:22:34 -04:00
Alec Scott
5b8f005962 perl-try-tiny: add v0.31 (#36428) 2023-03-27 10:05:54 +02:00
Alec Scott
b11febbbc9 perl-uri: add v5.08 (#36429) 2023-03-27 10:05:35 +02:00
Alec Scott
5837d4c587 perl-xml-parser-lite: add v0.722 (#36430) 2023-03-27 10:04:15 +02:00
Alec Scott
e5ea7b6e32 perl-xml-parser: add v2.46 (#36431) 2023-03-27 10:04:00 +02:00
Alec Scott
c69a1af5c7 perl-xml-simple: add v2.25 (#36432) 2023-03-27 10:03:44 +02:00
Alec Scott
b40f9f72ed perl-xml-writer: add v0.900 (#36433) 2023-03-27 10:03:30 +02:00
Alec Scott
ab3fd38eda perl-yaml-libyaml: add v0.84 (#36434) 2023-03-27 10:03:10 +02:00
Alec Scott
5b9b8902ac perl-yaml-tiny: add v1.74 (#36435) 2023-03-27 10:02:52 +02:00
Alec Scott
8671f32b14 perl-yaml: add v1.30 (#36436) 2023-03-27 10:02:33 +02:00
Alec Scott
24f41ad050 postgresql: add v15.2 (#36438) 2023-03-27 10:02:17 +02:00
Alec Scott
732153c9e2 pstreams: add v1.0.3 (#36440) 2023-03-27 09:57:26 +02:00
Adam J. Stewart
227d19ef02 py-pytest-cov: add v4.0.0 (#36447) 2023-03-27 09:56:50 +02:00
Alec Scott
7600422183 presentproto: add v1.1 (#36439) 2023-03-27 09:51:52 +02:00
Wouter Deconinck
c2b4f5bf45 libpthread-stubs: adapt to XorgPackage (#36453)
This resolves a loose end from #36241 (missed due to package name). `libpthread-stubs` is another package from the xcb project that is now tracked through https://gitlab.freedesktop.org/xorg/ instead. No new versions; no changed hashes.
2023-03-27 09:48:55 +02:00
Satish Balay
56d98c3f0a petsc, py-petsc4py: add v3.18.4, v3.18.5 (#36406)
* py-petsc4py: update ldshared-dev.patch [to work with current @main]

* petsc4py: always force rebuild cython sources
2023-03-27 09:44:43 +02:00
Axel Huebl
37fadd9b2f openPMD-api: 0.15.0 (#36452) 2023-03-27 09:42:09 +02:00
AMD Toolchain Support
72318ba364 LAMMPS: Add Intel Package Support for AOCC 4.0 (#36155)
Add AOCC support for LAMMPS INTEL Package

Co-authored-by: Tooley, Phil <phil.tooley@amd.com>
Co-authored-by: usivakum <Umashankar.Sivakumar@amd.com>
2023-03-27 09:23:03 +02:00
Wouter Deconinck
32416eedd8 lhapdf: new version 6.5.4 (#36451) 2023-03-27 09:14:50 +02:00
Wouter Deconinck
417a5e4c3e sherpa: new bugfix version 2.2.15 (#36420)
Minor bugfix only, https://gitlab.com/sherpa-team/sherpa/-/compare/v2.2.14...v2.2.15
2023-03-27 09:14:23 +02:00
Wouter Deconinck
85f1eb4534 py-versioneer: new versions, depends_on py-tomli (#36415) 2023-03-26 08:55:24 -05:00
Alec Scott
39100c5336 gobject-introspection: add v1.72.1 (#36422) 2023-03-26 13:02:42 +02:00
Wouter Deconinck
9f59d4f199 py-build: new versions 0.10.0 (-> flit) (#36416)
* py-build: new versions 0.10.0 (-> flit)

py-build switched to flit with 0.10, https://github.com/pypa/build/blob/0.10.0/pyproject.toml

```
==> py-build: Successfully installed py-build-0.10.0-twgngkplyegllaovlp45r76nsk7bqezw
```

* py-pyproject-hooks: new package 1.0.0

* py-build: depends_on py-pyproject-hooks

* py-pyproject-hooks: use pypi url and hash

* py-build: replace patch with filter_file

* py-build: remove patch in favor of filter_file
2023-03-25 21:49:24 -05:00
Wouter Deconinck
334f704d36 py-iniconfig: new version 2.0.0 (-> hatchling) (#36413)
* py-iniconfig: new version 2.0.0 (-> hatchling)

py-iniconfig switched to hatchling with v2.0.0:
https://github.com/pytest-dev/iniconfig/blob/v2.0.0/pyproject.toml

```
==> py-iniconfig: Successfully installed py-iniconfig-2.0.0-ttoip2aalmxqqybv3vnozcabk47vg2yn
```

* Update var/spack/repos/builtin/packages/py-iniconfig/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-03-25 16:19:08 -05:00
Alec Scott
eb7b18e827 py-agate-dbf: add v0.2.2 (#36445) 2023-03-25 16:17:34 -05:00
Alec Scott
9a4b710f4e py-advancedhtmlparser: add v9.0.1 (#36444) 2023-03-25 16:16:51 -05:00
Alec Scott
d7a75b4fae py-absl-py: add v1.4.0 (#36443) 2023-03-25 16:15:52 -05:00
Alec Scott
c20feda19c py-about-time: add v4.2.1 (#36442) 2023-03-25 16:15:00 -05:00
Alec Scott
e2a170f8a2 py-a2wsgi: add v1.7.0 (#36441) 2023-03-25 13:37:45 -05:00
Wouter Deconinck
6777f2d9e9 py-pathspec: new versions through 0.11.1 (-> flit) (#36414)
py-pathspec changed from setuptools to using flit as the build system:
https://github.com/cpburnz/python-pathspec/blob/v0.11.1/pyproject.toml

```
==> py-pathspec: Successfully installed py-pathspec-0.11.1-5jxzfunl4o7ubzpwq5442diobkg7t5fl
```
2023-03-25 11:31:47 -05:00
Eric Brugger
a7175979cd Silo: Add conflict for silo 4.11 and gcc 11.1. (#36335) 2023-03-25 11:50:59 +01:00
Xavier Delaruelle
c4923fe3b3 modules: add support for append_flags/remove_flags (#36402)
Adapt the tcl and lmod modulefile templates to generate append-path or
remove-path commands in the modulefile when, respectively, append_flags or
remove_flags commands are defined in the package for the run environment.

Fixes #10299.
2023-03-24 15:38:24 -04:00
Dennis Klein
ae504ce2fe fairmq: add new package (#36400) 2023-03-24 15:33:23 -04:00
Arnur Nigmetov
e4edcf6104 henson: simplify args logic with define_from_variant (#36398)
Co-authored-by: Arnur Nigmetov <nigmetov@tugraz.at>
2023-03-24 15:28:20 -04:00
AMD Toolchain Support
693eea499c Changes to AOCC spack recipe to use new compiler download URLs (#36353) 2023-03-24 12:24:29 -07:00
John W. Parent
a451f55340 Expat package: add CMake-build option (#35109)
* Add option to optionally build with CMake
* Autotools is preferred where available
* Unlike the autotools-based build, the CMake-based build creates
  either static or shared libs, not both (the default is shared
  and is controlled with a new "shared" variant that only exists
  when building with cmake)
* Note that `cmake~ownlibs` depends on expat, so would require
  `expat build_system=autotools` (to avoid a cyclic dependency)
2023-03-24 11:10:46 -07:00
Jim Edwards
15f7b72557 gfortran version fix (#36351)
* gfortran version fix
* modified approach to get gfortran version
* add checksum for v8.4.1
2023-03-24 10:47:44 -07:00
Alec Scott
6bf33b2b7f perl-module-corelist: add v5.20230320 (#36366)
* perl-module-corelist: add v5.20230320
* [@spackbot] updating style on behalf of alecbcs

---------

Co-authored-by: alecbcs <alecbcs@users.noreply.github.com>
2023-03-24 10:23:57 -07:00
Alec Scott
dac62c8cf8 perl-module-build: add v0.4232 (#36365) 2023-03-24 10:21:17 -07:00
Adam J. Stewart
30b8b0e6f5 py-timm: add v0.6.13 (#36377) 2023-03-24 13:19:27 -04:00
Alec Scott
8741d2a7ed perl-moo: add v2.005005 (#36367) 2023-03-24 10:00:36 -07:00
Alec Scott
4ec1d860fc perl-mro-compat: add v0.15 (#36369) 2023-03-24 09:55:46 -07:00
Alec Scott
df614169bd perl-net-http: add v6.22 (#36370) 2023-03-24 09:54:11 -07:00
Alec Scott
1029590672 perl-padwalker: add v2.5 (#36372) 2023-03-24 09:46:34 -07:00
Alec Scott
c7e2346d8b perl-package-deprecationmanager: add v0.18 (#36371) 2023-03-24 09:46:14 -07:00
Alec Scott
93631d7512 perl-parallel-forkmanager: add v2.02 (#36373) 2023-03-24 09:44:39 -07:00
Alec Scott
54e5dc3eb5 perl-path-tiny: add v0.144 (#36374) 2023-03-24 09:43:12 -07:00
Alec Scott
0a9eea593b perl-pdf-api2: add v2.044 (#36375) 2023-03-24 09:41:39 -07:00
Alec Scott
17e50f519a perl-pegex: add v0.75 (#36376) 2023-03-24 09:39:13 -07:00
Alec Scott
d15fe6a345 perl-perl4-corelibs: add v0.005 (#36379) 2023-03-24 15:55:53 +01:00
Alec Scott
3504866185 perl-perlio-utf8-strict: add v0.010 (#36380) 2023-03-24 15:55:37 +01:00
Alec Scott
d0aee3aa30 perl-scalar-list-utils: add v1.63 (#36381) 2023-03-24 15:55:17 +01:00
Alec Scott
bbf7ff348a perl-statistics-descriptive: add v3.0800 (#36382) 2023-03-24 15:54:59 +01:00
Alec Scott
c6ec5a71a7 perl-sub-exporter: add v0.989 (#36383) 2023-03-24 15:54:43 +01:00
Alec Scott
56358c5901 perl-sub-install: add v0.929 (#36384) 2023-03-24 15:54:05 +01:00
Alec Scott
06d8196dfd perl-sub-name: add v0.26 (#36385) 2023-03-24 15:53:47 +01:00
Alec Scott
6a119b911c perl-sub-quote: add v2.006008 (#36386) 2023-03-24 15:53:32 +01:00
Alec Scott
5e1cfeaad0 perl-svg: add v2.87 (#36387) 2023-03-24 15:53:16 +01:00
Alec Scott
d199c1a7cf perl-test-deep: add v1.204 (#36388) 2023-03-24 15:52:57 +01:00
Alec Scott
795ee106f0 perl-test-differences: add v0.69 (#36389) 2023-03-24 15:43:43 +01:00
Alec Scott
00f4021e6a perl-test-fatal: add v0.017 (#36390) 2023-03-24 15:42:54 +01:00
Alec Scott
d367fded81 perl-test-most: add v0.38 (#36391) 2023-03-24 15:41:40 +01:00
Alec Scott
56c086ea17 perl-test-needs: add v0.002010 (#36392) 2023-03-24 15:41:21 +01:00
Alec Scott
4dcca72e89 perl-test-requires: add v0.11 (#36393) 2023-03-24 15:41:03 +01:00
Alec Scott
8326ef0772 perl-test-warnings: add v0.031 (#36394) 2023-03-24 15:14:59 +01:00
Alec Scott
27456f53aa perl-text-csv: add v2.02 (#36395) 2023-03-24 15:13:06 +01:00
Alec Scott
326442b169 perl-text-format: add v0.62 (#36396) 2023-03-24 15:12:43 +01:00
Alec Scott
ef2b31f7d1 perl-text-simpletable: add v2.07 (#36397) 2023-03-24 15:12:27 +01:00
Alec Scott
f2abf90bfc gh: add v2.25.1 (#36364) 2023-03-24 14:46:37 +01:00
Xavier Delaruelle
906151075d modules tcl: simplify env modification block in template (#36334)
Simplify the environment modification block in the modulefile Tcl
template by always passing a path delimiter to the prepend-path,
append-path and remove-path commands.

Remove the --delim option from the setenv command, as this command does
not accept such an option.

Update the test_prepend_path_separator test to explicitly check the 6
path-like commands that should be present in the generated modulefile.
2023-03-24 10:28:10 +01:00
Massimiliano Culpo
d0d5526110 Add a pre-check job to bootstrap the environment on Python 3.6 (#36358)
* Add a pre-check job that just bootstraps the environment on Python 3.6
* py-typing-extensions: restore information on Python 3.6 installation
* Fix job name; try to run a quick test on installed Python packages
2023-03-24 04:08:42 +00:00
Wouter Deconinck
729b8113cc git: new version 2.40.0 (#36354)
Release notes: https://github.com/git/git/blob/v2.40.0/Documentation/RelNotes/2.40.0.txt (nothing worrisome there).

Commit history with recent changes to installation procedure: https://github.com/git/git/commits/master/INSTALL (nothing worrisome there).

This builds fine on my system,
```
==> git: Successfully installed git-2.40.0-sb7gmy64ivwstfwwjyff7y5mbbc7vtos
```
2023-03-24 00:08:27 -04:00
John W. Parent
c59bebbff9 zstd package: add CMake build (#35104)
* This enables building zstd on Windows
* CMake is now the default for all systems
2023-03-23 16:58:12 -07:00
Harmen Stoppels
118d8e4f57 unit tests: don't hard-code arch in compiler config (#36360)
This breaks when testing on non-x86_64 machines outside CI
2023-03-23 23:22:45 +01:00
Dr. Christian Tacke
2d9c913eb1 faircmakemodules: Add new package (#36345)
Co-authored-by: Dennis Klein <d.klein@gsi.de>
2023-03-23 17:48:24 -04:00
2366 changed files with 47123 additions and 23270 deletions

View File

@@ -9,7 +9,7 @@ body:
Thanks for taking the time to report this build failure. To proceed with the report please:
1. Title the issue `Installation issue: <name-of-the-package>`.
2. Provide the information required below.
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
- type: textarea
id: reproduce
@@ -29,7 +29,9 @@ body:
description: |
Please post the error message from spack inside the `<details>` tag below:
value: |
<details><summary>Error message</summary><pre>
<details><summary>Error message</summary>
<pre>
...
</pre></details>
validations:
@@ -53,7 +55,7 @@ body:
Please upload the following files:
* **`spack-build-out.txt`**
* **`spack-build-env.txt`**
They should be present in the stage directory of the failing build. Also upload any `config.log` or similar file if one exists.
- type: markdown
attributes:

View File

@@ -1,4 +1,4 @@
name: "\U0001F38A Feature request"
name: "\U0001F38A Feature request"
description: Suggest adding a feature that is not yet in Spack
labels: [feature]
body:
@@ -29,13 +29,11 @@ body:
attributes:
label: General information
options:
- label: I have run `spack --version` and reported the version of Spack
required: true
- label: I have searched the issues of this repo and believe this is not a duplicate
required: true
- type: markdown
attributes:
value: |
If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on [our Slack](https://slack.spack.io/) first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack!

View File

@@ -21,7 +21,9 @@ body:
description: |
Please post the error message from spack inside the `<details>` tag below:
value: |
<details><summary>Error message</summary><pre>
<details><summary>Error message</summary>
<pre>
...
</pre></details>
validations:

View File

@@ -19,13 +19,13 @@ jobs:
package-audits:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # @v2
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
- uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0 # @v2
with:
python-version: ${{inputs.python_version}}
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools pytest codecov coverage[toml]
pip install --upgrade pip setuptools pytest coverage[toml]
- name: Package audits (with coverage)
if: ${{ inputs.with_coverage == 'true' }}
run: |
@@ -38,7 +38,7 @@ jobs:
run: |
. share/spack/setup-env.sh
$(which spack) audit packages
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70 # @v2.1.0
- uses: codecov/codecov-action@eaaf4bedf32dbdc6b720b63067d99c4d77d6047d # @v2.1.0
if: ${{ inputs.with_coverage == 'true' }}
with:
flags: unittests,linux,audits

View File

@@ -24,7 +24,7 @@ jobs:
make patch unzip which xz python3 python3-devel tree \
cmake bison bison-devel libstdc++-static
- name: Checkout
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
with:
fetch-depth: 0
- name: Setup non-root user
@@ -62,7 +62,7 @@ jobs:
make patch unzip xz-utils python3 python3-dev tree \
cmake bison
- name: Checkout
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
with:
fetch-depth: 0
- name: Setup non-root user
@@ -99,7 +99,7 @@ jobs:
bzip2 curl file g++ gcc gfortran git gnupg2 gzip \
make patch unzip xz-utils python3 python3-dev tree
- name: Checkout
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
with:
fetch-depth: 0
- name: Setup non-root user
@@ -133,7 +133,7 @@ jobs:
make patch unzip which xz python3 python3-devel tree \
cmake bison
- name: Checkout
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
with:
fetch-depth: 0
- name: Setup repo
@@ -158,7 +158,7 @@ jobs:
run: |
brew install cmake bison@2.7 tree
- name: Checkout
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
- name: Bootstrap clingo
run: |
source share/spack/setup-env.sh
@@ -179,7 +179,7 @@ jobs:
run: |
brew install tree
- name: Checkout
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
- name: Bootstrap clingo
run: |
set -ex
@@ -204,7 +204,7 @@ jobs:
runs-on: ubuntu-20.04
steps:
- name: Checkout
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
with:
fetch-depth: 0
- name: Setup repo
@@ -247,7 +247,7 @@ jobs:
bzip2 curl file g++ gcc patchelf gfortran git gzip \
make patch unzip xz-utils python3 python3-dev tree
- name: Checkout
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
with:
fetch-depth: 0
- name: Setup non-root user
@@ -283,7 +283,7 @@ jobs:
make patch unzip xz-utils python3 python3-dev tree \
gawk
- name: Checkout
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
with:
fetch-depth: 0
- name: Setup non-root user
@@ -316,7 +316,7 @@ jobs:
# Remove GnuPG since we want to bootstrap it
sudo rm -rf /usr/local/bin/gpg
- name: Checkout
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
- name: Bootstrap GnuPG
run: |
source share/spack/setup-env.sh
@@ -333,7 +333,7 @@ jobs:
# Remove GnuPG since we want to bootstrap it
sudo rm -rf /usr/local/bin/gpg
- name: Checkout
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
- name: Bootstrap GnuPG
run: |
source share/spack/setup-env.sh

View File

@@ -45,12 +45,18 @@ jobs:
[leap15, 'linux/amd64,linux/arm64,linux/ppc64le', 'opensuse/leap:15'],
[ubuntu-bionic, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:18.04'],
[ubuntu-focal, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:20.04'],
[ubuntu-jammy, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:22.04']]
[ubuntu-jammy, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:22.04'],
[almalinux8, 'linux/amd64,linux/arm64,linux/ppc64le', 'almalinux:8'],
[almalinux9, 'linux/amd64,linux/arm64,linux/ppc64le', 'almalinux:9'],
[rockylinux8, 'linux/amd64,linux/arm64', 'rockylinux:8'],
[rockylinux9, 'linux/amd64,linux/arm64,linux/ppc64le', 'rockylinux:9'],
[fedora37, 'linux/amd64,linux/arm64,linux/ppc64le', 'fedora:37'],
[fedora38, 'linux/amd64,linux/arm64,linux/ppc64le', 'fedora:38']]
name: Build ${{ matrix.dockerfile[0] }}
if: github.repository == 'spack/spack'
steps:
- name: Checkout
uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # @v2
uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
- name: Set Container Tag Normal (Nightly)
run: |

View File

@@ -35,7 +35,7 @@ jobs:
core: ${{ steps.filter.outputs.core }}
packages: ${{ steps.filter.outputs.packages }}
steps:
- uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # @v2
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
if: ${{ github.event_name == 'push' }}
with:
fetch-depth: 0

View File

@@ -0,0 +1,31 @@
name: Windows Paraview Nightly
on:
schedule:
- cron: '0 2 * * *' # Run at 2 am
defaults:
run:
shell:
powershell Invoke-Expression -Command "./share/spack/qa/windows_test_setup.ps1"; {0}
jobs:
build-paraview-deps:
runs-on: windows-latest
steps:
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
with:
fetch-depth: 0
- uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0
with:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip six pywin32 setuptools coverage
- name: Build Test
run: |
spack compiler find
spack external find cmake ninja win-sdk win-wdk wgl msmpi
spack -d install -y --cdash-upload-url https://cdash.spack.io/submit.php?project=Spack+on+Windows --cdash-track Nightly --only dependencies paraview
exit 0

View File

@@ -47,10 +47,10 @@ jobs:
on_develop: false
steps:
- uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # @v2
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
with:
fetch-depth: 0
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
- uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0 # @v2
with:
python-version: ${{ matrix.python-version }}
- name: Install System packages
@@ -62,7 +62,7 @@ jobs:
cmake bison libbison-dev kcov
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools pytest codecov[toml] pytest-xdist pytest-cov
pip install --upgrade pip setuptools pytest pytest-xdist pytest-cov
pip install --upgrade flake8 "isort>=4.3.5" "mypy>=0.900" "click" "black"
- name: Setup git configuration
run: |
@@ -87,17 +87,17 @@ jobs:
UNIT_TEST_COVERAGE: ${{ matrix.python-version == '3.11' }}
run: |
share/spack/qa/run-unit-tests
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
- uses: codecov/codecov-action@eaaf4bedf32dbdc6b720b63067d99c4d77d6047d
with:
flags: unittests,linux,${{ matrix.concretizer }}
# Test shell integration
shell:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # @v2
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
with:
fetch-depth: 0
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
- uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0 # @v2
with:
python-version: '3.11'
- name: Install System packages
@@ -107,7 +107,7 @@ jobs:
sudo apt-get install -y coreutils kcov csh zsh tcsh fish dash bash
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools pytest codecov coverage[toml] pytest-xdist
pip install --upgrade pip setuptools pytest coverage[toml] pytest-xdist
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
@@ -118,7 +118,7 @@ jobs:
COVERAGE: true
run: |
share/spack/qa/run-shell-tests
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
- uses: codecov/codecov-action@eaaf4bedf32dbdc6b720b63067d99c4d77d6047d
with:
flags: shelltests,linux
@@ -133,10 +133,11 @@ jobs:
dnf install -y \
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch tcl unzip which xz
- uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # @v2
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
- name: Setup repo and non-root user
run: |
git --version
git config --global --add safe.directory /__w/spack/spack
git fetch --unshallow
. .github/workflows/setup_git.sh
useradd spack-test
@@ -151,10 +152,10 @@ jobs:
clingo-cffi:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # @v2
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
with:
fetch-depth: 0
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
- uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0 # @v2
with:
python-version: '3.11'
- name: Install System packages
@@ -163,7 +164,7 @@ jobs:
sudo apt-get -y install coreutils cvs gfortran graphviz gnupg2 mercurial ninja-build kcov
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools pytest codecov coverage[toml] pytest-cov clingo pytest-xdist
pip install --upgrade pip setuptools pytest coverage[toml] pytest-cov clingo pytest-xdist
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
@@ -175,7 +176,7 @@ jobs:
SPACK_TEST_SOLVER: clingo
run: |
share/spack/qa/run-unit-tests
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70 # @v2.1.0
- uses: codecov/codecov-action@eaaf4bedf32dbdc6b720b63067d99c4d77d6047d # @v2.1.0
with:
flags: unittests,linux,clingo
# Run unit tests on MacOS
@@ -185,16 +186,16 @@ jobs:
matrix:
python-version: ["3.10"]
steps:
- uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # @v2
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
with:
fetch-depth: 0
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
- uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0 # @v2
with:
python-version: ${{ matrix.python-version }}
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools
pip install --upgrade pytest codecov coverage[toml] pytest-xdist pytest-cov
pip install --upgrade pip setuptools
pip install --upgrade pytest coverage[toml] pytest-xdist pytest-cov
- name: Setup Homebrew packages
run: |
brew install dash fish gcc gnupg2 kcov
@@ -210,6 +211,6 @@ jobs:
$(which spack) solve zlib
common_args=(--dist loadfile --tx '4*popen//python=./bin/spack-tmpconfig python -u ./bin/spack python' -x)
$(which spack) unit-test --cov --cov-config=pyproject.toml --cov-report=xml:coverage.xml "${common_args[@]}"
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
- uses: codecov/codecov-action@eaaf4bedf32dbdc6b720b63067d99c4d77d6047d
with:
flags: unittests,macos

View File

@@ -18,8 +18,8 @@ jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # @v2
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
- uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0 # @v2
with:
python-version: '3.11'
cache: 'pip'
@@ -35,16 +35,16 @@ jobs:
style:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f # @v2
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
with:
fetch-depth: 0
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # @v2
- uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0 # @v2
with:
python-version: '3.11'
cache: 'pip'
- name: Install Python packages
run: |
python3 -m pip install --upgrade pip six setuptools types-six black==23.1.0 mypy isort clingo flake8
python3 -m pip install --upgrade pip setuptools types-six black==23.1.0 mypy isort clingo flake8
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
@@ -58,3 +58,29 @@ jobs:
with:
with_coverage: ${{ inputs.with_coverage }}
python_version: '3.11'
# Check that spack can bootstrap the development environment on Python 3.6 - RHEL8
bootstrap-dev-rhel8:
runs-on: ubuntu-latest
container: registry.access.redhat.com/ubi8/ubi
steps:
- name: Install dependencies
run: |
dnf install -y \
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch tcl unzip which xz
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab # @v2
- name: Setup repo and non-root user
run: |
git --version
git config --global --add safe.directory /__w/spack/spack
git fetch --unshallow
. .github/workflows/setup_git.sh
useradd spack-test
chown -R spack-test .
- name: Bootstrap Spack development environment
shell: runuser -u spack-test -- bash {0}
run: |
source share/spack/setup-env.sh
spack -d bootstrap now --dev
spack style -t black
spack unit-test -V

View File

@@ -15,15 +15,15 @@ jobs:
unit-tests:
runs-on: windows-latest
steps:
- uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
with:
fetch-depth: 0
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
- uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0
with:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip six pywin32 setuptools codecov pytest-cov clingo
python -m pip install --upgrade pip pywin32 setuptools pytest-cov clingo
- name: Create local develop
run: |
./.github/workflows/setup_git.ps1
@@ -33,21 +33,21 @@ jobs:
./share/spack/qa/validate_last_exit.ps1
coverage combine -a
coverage xml
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
- uses: codecov/codecov-action@eaaf4bedf32dbdc6b720b63067d99c4d77d6047d
with:
flags: unittests,windows
unit-tests-cmd:
runs-on: windows-latest
steps:
- uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
with:
fetch-depth: 0
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
- uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0
with:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip six pywin32 setuptools codecov coverage pytest-cov clingo
python -m pip install --upgrade pip pywin32 setuptools coverage pytest-cov clingo
- name: Create local develop
run: |
./.github/workflows/setup_git.ps1
@@ -57,99 +57,24 @@ jobs:
./share/spack/qa/validate_last_exit.ps1
coverage combine -a
coverage xml
- uses: codecov/codecov-action@d9f34f8cd5cb3b3eb79b3e4b5dae3a16df499a70
- uses: codecov/codecov-action@eaaf4bedf32dbdc6b720b63067d99c4d77d6047d
with:
flags: unittests,windows
build-abseil:
runs-on: windows-latest
steps:
- uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f
- uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
with:
fetch-depth: 0
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
- uses: actions/setup-python@bd6b4b6205c4dbad673328db7b31b7fab9e241c0
with:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip six pywin32 setuptools codecov coverage
python -m pip install --upgrade pip pywin32 setuptools coverage
- name: Build Test
run: |
spack compiler find
spack external find cmake
spack external find ninja
spack -d install abseil-cpp
# TODO: johnwparent - reduce the size of the installer operations
# make-installer:
# runs-on: windows-latest
# steps:
# - name: Disable Windows Symlinks
# run: |
# git config --global core.symlinks false
# shell:
# powershell
# - uses: actions/checkout@24cb9080177205b6e8c946b17badbe402adc938f
# with:
# fetch-depth: 0
# - uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
# with:
# python-version: 3.9
# - name: Install Python packages
# run: |
# python -m pip install --upgrade pip six pywin32 setuptools
# - name: Add Light and Candle to Path
# run: |
# $env:WIX >> $GITHUB_PATH
# - name: Run Installer
# run: |
# ./share/spack/qa/setup_spack_installer.ps1
# spack make-installer -s . -g SILENT pkg
# echo "installer_root=$((pwd).Path)" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf8 -Append
# env:
# ProgressPreference: SilentlyContinue
# - uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
# with:
# name: Windows Spack Installer Bundle
# path: ${{ env.installer_root }}\pkg\Spack.exe
# - uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb
# with:
# name: Windows Spack Installer
# path: ${{ env.installer_root}}\pkg\Spack.msi
# execute-installer:
# needs: make-installer
# runs-on: windows-latest
# defaults:
# run:
# shell: pwsh
# steps:
# - uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435
# with:
# python-version: 3.9
# - name: Install Python packages
# run: |
# python -m pip install --upgrade pip six pywin32 setuptools
# - name: Setup installer directory
# run: |
# mkdir -p spack_installer
# echo "spack_installer=$((pwd).Path)\spack_installer" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf8 -Append
# - uses: actions/download-artifact@v3
# with:
# name: Windows Spack Installer Bundle
# path: ${{ env.spack_installer }}
# - name: Execute Bundled Installer
# run: |
# $proc = Start-Process ${{ env.spack_installer }}\spack.exe "/install /quiet" -Passthru
# $handle = $proc.Handle # cache proc.Handle
# $proc.WaitForExit();
# $LASTEXITCODE
# env:
# ProgressPreference: SilentlyContinue
# - uses: actions/download-artifact@v3
# with:
# name: Windows Spack Installer
# path: ${{ env.spack_installer }}
# - name: Execute MSI
# run: |
# $proc = Start-Process ${{ env.spack_installer }}\spack.msi "/quiet" -Passthru
# $handle = $proc.Handle # cache proc.Handle
# $proc.WaitForExit();
# $LASTEXITCODE

View File

@@ -1,3 +1,221 @@
# v0.20.0 (2023-05-21)
`v0.20.0` is a major feature release.
## Features in this release
1. **`requires()` directive and enhanced package requirements**
We've added some more enhancements to requirements in Spack (#36286).
There is a new `requires()` directive for packages. `requires()` is the opposite of
`conflicts()`. You can use it to impose constraints on this package when certain
conditions are met:
```python
requires(
"%apple-clang",
when="platform=darwin",
msg="This package builds only with clang on macOS"
)
```
More on this in [the docs](
https://spack.rtfd.io/en/latest/packaging_guide.html#conflicts-and-requirements).
You can also now add a `when:` clause to `require:` in your `packages.yaml`
configuration or in an environment:
```yaml
packages:
openmpi:
require:
- any_of: ["%gcc"]
when: "@:4.1.4"
message: "Only OpenMPI 4.1.5 and up can build with fancy compilers"
```
More details can be found [here](
https://spack.readthedocs.io/en/latest/build_settings.html#package-requirements)
2. **Exact versions**
Spack did not previously have a way to distinguish a version if it was a prefix of
some other version. For example, `@3.2` would match `3.2`, `3.2.1`, `3.2.2`, etc. You
can now match *exactly* `3.2` with `@=3.2`. This is useful, for example, if you need
to patch *only* the `3.2` version of a package. The new syntax is described in [the docs](
https://spack.readthedocs.io/en/latest/basic_usage.html#version-specifier).
Generally, when writing packages, you should prefer to use ranges like `@3.2` over
the specific versions, as this allows the concretizer more leeway when selecting
versions of dependencies. More details and recommendations are in the [packaging guide](
https://spack.readthedocs.io/en/latest/packaging_guide.html#ranges-versus-specific-versions).
See #36273 for full details on the version refactor.
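For instance, a package recipe (the package and patch names here are hypothetical) can pin a patch to exactly the `3.2` release while leaving a dependency range open:
```python
class Foo(Package):
    # applies only to the exact 3.2 release, not to 3.2.1 or 3.2.2
    patch("fix-3.2.patch", when="@=3.2")

    # any 3.2.x release of the dependency is acceptable
    depends_on("bar@3.2")
```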
3. **New testing interface**
Writing package tests is now much simpler with a new [test interface](
https://spack.readthedocs.io/en/latest/packaging_guide.html#stand-alone-tests).
Writing a test is now as easy as adding a method that starts with `test_`:
```python
class MyPackage(Package):
...
def test_always_fails(self):
"""use assert to always fail"""
assert False
def test_example(self):
"""run installed example"""
example = which(self.prefix.bin.example)
example()
```
You can use Python's native `assert` statement to implement your checks -- no more
need to fiddle with `run_test` or other test framework methods. Spack will
introspect the class and run `test_*` methods when you run `spack test`.
4. **More stable concretization**
* Now, `spack concretize` will *only* concretize the new portions of the environment
and will not change existing parts of an environment unless you specify `--force`.
This has always been true for `unify:false`, but not for `unify:true` and
`unify:when_possible` environments. Now it is true for all of them (#37438, #37681).
* The concretizer has a new `--reuse-deps` argument that *only* reuses dependencies.
That is, it will always treat the *roots* of your environment as it would with
`--fresh`. This allows you to upgrade just the roots of your environment while
keeping everything else stable (#30990).
5. **Weekly develop snapshot releases**
Since last year, we have maintained a buildcache of `develop` at
https://binaries.spack.io/develop, but the cache can grow to contain so many builds
as to be unwieldy. When we get a stable `develop` build, we snapshot the release and
add a corresponding tag to the Spack repository. So, you can use a stack from a specific
day. There are now tags in the spack repository like:
* `develop-2023-05-14`
* `develop-2023-05-18`
that correspond to build caches like:
* https://binaries.spack.io/develop-2023-05-14/e4s
* https://binaries.spack.io/develop-2023-05-18/e4s
We plan to store these snapshot releases weekly.
6. **Specs in buildcaches can be referenced by hash.**
* Previously, you could run `spack buildcache list` and see the hashes in
buildcaches, but referring to them by hash would fail.
* You can now run commands like `spack spec` and `spack install` and refer to
buildcache hashes directly, e.g. `spack install /abc123` (#35042)
7. **New package and buildcache index websites**
Our public websites for searching packages have been completely revamped and updated.
You can check them out here:
* *Package Index*: https://packages.spack.io
* *Buildcache Index*: https://cache.spack.io
Both are searchable and more interactive than before. Currently major releases are
shown; UI for browsing `develop` snapshots is coming soon.
8. **Default CMake and Meson build types are now Release**
Spack has historically defaulted to building with optimization and debugging, but
packages like `llvm` can be enormous with debug turned on. Our default build type for
all Spack packages is now `Release` (#36679, #37436). This has a number of benefits:
* much smaller binaries;
* higher default optimization level; and
* defining `NDEBUG` disables assertions, which may lead to further speedups.
You can still get the old behavior back through requirements and package preferences.
## Other new commands and directives
* `spack checksum` can automatically add new versions to packages (#24532)
* new command: `spack pkg grep` to easily search package files (#34388)
* New `maintainers` directive (#35083)
* Add `spack buildcache push` (alias to `buildcache create`) (#34861)
* Allow using `-j` to control the parallelism of concretization (#37608)
* Add `--exclude` option to 'spack external find' (#35013)
## Other new features of note
* editing: add higher-precedence `SPACK_EDITOR` environment variable
* Many YAML formatting improvements from updating `ruamel.yaml` to the latest version
supporting Python 3.6 (#31091, #24885, #37008).
* Requirements and preferences should not define (non-git) versions (#37687, #37747)
* Environments now store spack version/commit in `spack.lock` (#32801)
* User can specify the name of the `packages` subdirectory in repositories (#36643)
* Add container images supporting RHEL alternatives (#36713)
* make version(...) kwargs explicit (#36998)
## Notable refactors
* buildcache create: reproducible tarballs (#35623)
* Bootstrap most of Spack dependencies using environments (#34029)
* Split `satisfies(..., strict=True/False)` into two functions (#35681)
* spack install: simplify behavior when inside environments (#35206)
## Binary cache and stack updates
* Major simplification of CI boilerplate in stacks (#34272, #36045)
* Many improvements to our CI pipeline's reliability
## Removals, Deprecations, and disablements
* Module file generation is disabled by default; you'll need to enable it to use it (#37258)
* Support for Python 2 was deprecated in `v0.19.0` and has been removed. `v0.20.0` only
supports Python 3.6 and higher.
* Deprecated target names are no longer recognized by Spack. Use generic names instead:
* `graviton` is now `cortex_a72`
* `graviton2` is now `neoverse_n1`
* `graviton3` is now `neoverse_v1`
* `blacklist` and `whitelist` in module configuration were deprecated in `v0.19.0` and are
removed in this release. Use `exclude` and `include` instead.
* The `ignore=` parameter of the `extends()` directive has been removed. It was not used by
any builtin packages and is no longer needed to avoid conflicts in environment views (#35588).
* Support for the old YAML buildcache format has been removed. It was deprecated in `v0.19.0` (#34347).
* `spack find --bootstrap` has been removed. It was deprecated in `v0.19.0`. Use `spack
--bootstrap find` instead (#33964).
* `spack bootstrap trust` and `spack bootstrap untrust` are now removed, having been
deprecated in `v0.19.0`. Use `spack bootstrap enable` and `spack bootstrap disable`.
* The `--mirror-name`, `--mirror-url`, and `--directory` options to buildcache and
mirror commands were deprecated in `v0.19.0` and have now been removed. They have been
replaced by positional arguments (#37457).
* Deprecate `env:` as top-level environment key (#37424)
* Deprecate `buildcache create --rel` and `buildcache install --allow-root` (#37285)
* Support for very old perl-like spec format strings (e.g., `$_$@$%@+$+$=`) has been
removed (#37425). This was deprecated in `v0.15` (#10556).
## Notable Bugfixes
* bugfix: don't fetch package metadata for unknown concrete specs (#36990)
* Improve package source code context display on error (#37655)
* Relax environment manifest filename requirements and lockfile identification criteria (#37413)
* `installer.py`: drop build edges of installed packages by default (#36707)
* Bugfix: package requirements with git commits (#35057, #36347)
* Package requirements: allow single specs in requirement lists (#36258)
* conditional variant values: allow boolean (#33939)
* spack uninstall: follow run/link edges on --dependents (#34058)
## Spack community stats
* 7,179 total packages, 499 new since `v0.19.0`
* 329 new Python packages
* 31 new R packages
* 336 people contributed to this release
* 317 committers to packages
* 62 committers to core
# v0.19.1 (2023-02-07)
### Spack Bugfixes

View File

@@ -50,24 +50,69 @@ setlocal enabledelayedexpansion
:: flags will always start with '-', e.g. --help or -V
:: subcommands will never start with '-'
:: everything after the subcommand is an arg
for %%x in (%*) do (
set t="%%~x"
:: we cannot allow batch's "for" loop to directly process CL args
:: a number of batch reserved characters are commonly passed to
:: spack and allowing batch's "for" method to process the raw inputs
:: results in a large number of formatting issues
:: instead, treat the entire CLI as one string
:: and split by space manually
:: capture cl args in variable named cl_args
set cl_args=%*
:process_cl_args
rem tokens=1* returns the first processed token produced
rem by tokenizing the input string cl_args on spaces into
rem the named variable %%g
rem While this may look like a for loop, it only
rem executes a single time for each of the cl args
rem the actual iterative loop is performed by the
rem goto process_cl_args stanza
rem we are simply leveraging the "for" method's string
rem tokenization
for /f "tokens=1*" %%g in ("%cl_args%") do (
set t=%%~g
rem remainder of string is composed into %%h
rem these are the cl args yet to be processed
rem assign cl_args var to only the args to be processed
rem effectively discarding the current arg %%g
rem this will be empty when we have no further tokens to process
set cl_args=%%h
rem process the first space delineated cl arg
rem of this iteration
if "!t:~0,1!" == "-" (
if defined _sp_subcommand (
:: We already have a subcommand, processing args now
set "_sp_args=!_sp_args! !t!"
rem We already have a subcommand, processing args now
if not defined _sp_args (
set "_sp_args=!t!"
) else (
set "_sp_args=!_sp_args! !t!"
)
) else (
set "_sp_flags=!_sp_flags! !t!"
shift
if not defined _sp_flags (
set "_sp_flags=!t!"
shift
) else (
set "_sp_flags=!_sp_flags! !t!"
shift
)
)
) else if not defined _sp_subcommand (
set "_sp_subcommand=!t!"
shift
) else (
set "_sp_args=!_sp_args! !t!"
shift
if not defined _sp_args (
set "_sp_args=!t!"
shift
) else (
set "_sp_args=!_sp_args! !t!"
shift
)
)
)
rem if this is not empty, we have more tokens to process
rem start above process again with remaining unprocessed cl args
if defined cl_args goto :process_cl_args
:: --help, -h and -V flags don't require further output parsing.
:: If we encounter one, execute and exit
@@ -95,31 +140,21 @@ if not defined _sp_subcommand (
:: pass parsed variables outside of local scope. Need to do
:: this because delayedexpansion can only be set by setlocal
echo %_sp_flags%>flags
echo %_sp_args%>args
echo %_sp_subcommand%>subcmd
endlocal
set /p _sp_subcommand=<subcmd
set /p _sp_flags=<flags
set /p _sp_args=<args
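rem if a variable was empty above, "echo %var%>file" degenerated into a bare
rem "echo", which wrote the terminal's echo state ("ECHO is on/off.") to the
rem file; the checks below detect that and reset the variable to empty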
if "%_sp_subcommand%"=="ECHO is off." (set "_sp_subcommand=")
if "%_sp_subcommand%"=="ECHO is on." (set "_sp_subcommand=")
if "%_sp_flags%"=="ECHO is off." (set "_sp_flags=")
if "%_sp_flags%"=="ECHO is on." (set "_sp_flags=")
if "%_sp_args%"=="ECHO is off." (set "_sp_args=")
if "%_sp_args%"=="ECHO is on." (set "_sp_args=")
del subcmd
del flags
del args
endlocal & (
set "_sp_flags=%_sp_flags%"
set "_sp_args=%_sp_args%"
set "_sp_subcommand=%_sp_subcommand%"
)
:: Filter out some commands. For any others, just run the command.
if %_sp_subcommand% == "cd" (
if "%_sp_subcommand%" == "cd" (
goto :case_cd
) else if %_sp_subcommand% == "env" (
) else if "%_sp_subcommand%" == "env" (
goto :case_env
) else if %_sp_subcommand% == "load" (
) else if "%_sp_subcommand%" == "load" (
goto :case_load
) else if %_sp_subcommand% == "unload" (
) else if "%_sp_subcommand%" == "unload" (
goto :case_load
) else (
goto :default_case
@@ -154,20 +189,20 @@ goto :end_switch
if NOT defined _sp_args (
goto :default_case
)
set args_no_quote=%_sp_args:"=%
if NOT "%args_no_quote%"=="%args_no_quote:--help=%" (
if NOT "%_sp_args%"=="%_sp_args:--help=%" (
goto :default_case
) else if NOT "%args_no_quote%"=="%args_no_quote: -h=%" (
) else if NOT "%_sp_args%"=="%_sp_args: -h=%" (
goto :default_case
) else if NOT "%args_no_quote%"=="%args_no_quote:--bat=%" (
) else if NOT "%_sp_args%"=="%_sp_args:--bat=%" (
goto :default_case
) else if NOT "%args_no_quote%"=="%args_no_quote:deactivate=%" (
) else if NOT "%_sp_args%"=="%_sp_args:deactivate=%" (
for /f "tokens=* USEBACKQ" %%I in (
`call python %spack% %_sp_flags% env deactivate --bat %args_no_quote:deactivate=%`
`call python %spack% %_sp_flags% env deactivate --bat %_sp_args:deactivate=%`
) do %%I
) else if NOT "%args_no_quote%"=="%args_no_quote:activate=%" (
) else if NOT "%_sp_args%"=="%_sp_args:activate=%" (
for /f "tokens=* USEBACKQ" %%I in (
`python %spack% %_sp_flags% env activate --bat %args_no_quote:activate=%`
`python %spack% %_sp_flags% env activate --bat %_sp_args:activate=%`
) do %%I
) else (
goto :default_case
@@ -188,7 +223,7 @@ if defined _sp_args (
for /f "tokens=* USEBACKQ" %%I in (
`python "%spack%" %_sp_flags% %_sp_subcommand% --bat %_sp_args%`) do %%I
)
goto :end_switch
:case_unload

View File

@@ -23,8 +23,20 @@ packages:
providers:
elf: [libelf]
fuse: [macfuse]
gl: [apple-gl]
glu: [apple-glu]
unwind: [apple-libunwind]
uuid: [apple-libuuid]
apple-gl:
buildable: false
externals:
- spec: apple-gl@4.1.0
prefix: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
apple-glu:
buildable: false
externals:
- spec: apple-glu@1.3.0
prefix: /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
apple-libunwind:
buildable: false
externals:

View File

@@ -40,9 +40,8 @@ modules:
roots:
tcl: $spack/share/spack/modules
lmod: $spack/share/spack/lmod
# What type of modules to use
enable:
- tcl
# What type of modules to use ("tcl" and/or "lmod")
enable: []
tcl:
all:

View File

@@ -20,7 +20,7 @@ packages:
awk: [gawk]
blas: [openblas, amdblis]
D: [ldc]
daal: [intel-daal]
daal: [intel-oneapi-daal]
elf: [elfutils]
fftw-api: [fftw, amdfftw]
flame: [libflame, amdlibflame]
@@ -30,7 +30,7 @@ packages:
golang: [go, gcc]
go-or-gccgo-bootstrap: [go-bootstrap, gcc]
iconv: [libiconv]
ipp: [intel-ipp]
ipp: [intel-oneapi-ipp]
java: [openjdk, jdk, ibm-java]
jpeg: [libjpeg-turbo, libjpeg]
lapack: [openblas, amdlibflame]
@@ -40,7 +40,7 @@ packages:
lua-lang: [lua, lua-luajit-openresty, lua-luajit]
luajit: [lua-luajit-openresty, lua-luajit]
mariadb-client: [mariadb-c-client, mariadb]
mkl: [intel-mkl]
mkl: [intel-oneapi-mkl]
mpe: [mpe2]
mpi: [openmpi, mpich]
mysql-client: [mysql, mariadb-c-client]

View File

@@ -3,3 +3,4 @@ config:
concretizer: clingo
build_stage::
- '$spack/.staging'
stage_name: '{name}-{version}-{hash:7}'

View File

@@ -19,3 +19,4 @@ packages:
- msvc
providers:
mpi: [msmpi]
gl: [wgl]

View File

@@ -1103,16 +1103,31 @@ Below are more details about the specifiers that you can add to specs.
Version specifier
^^^^^^^^^^^^^^^^^
A version specifier comes somewhere after a package name and starts
with ``@``. It can be a single version, e.g. ``@1.0``, ``@3``, or
``@1.2a7``. Or, it can be a range of versions, such as ``@1.0:1.5``
(all versions between ``1.0`` and ``1.5``, inclusive). Version ranges
can be open, e.g. ``:3`` means any version up to and including ``3``.
This would include ``3.4`` and ``3.4.2``. ``4.2:`` means any version
above and including ``4.2``. Finally, a version specifier can be a
set of arbitrary versions, such as ``@1.0,1.5,1.7`` (``1.0``, ``1.5``,
or ``1.7``). When you supply such a specifier to ``spack install``,
it constrains the set of versions that Spack will install.
A version specifier ``pkg@<specifier>`` comes after a package name
and starts with ``@``. It can be something abstract that matches
multiple known versions, or a specific version. During concretization,
Spack will pick the optimal version within the spec's constraints
according to policies set for the particular Spack installation.
The version specifier can be *a specific version*, such as ``@=1.0.0`` or
``@=1.2a7``. Or, it can be *a range of versions*, such as ``@1.0:1.5``.
Version ranges are inclusive, so this example includes both ``1.0``
and any ``1.5.x`` version. Version ranges can be unbounded, e.g. ``@:3``
means any version up to and including ``3``. This would include ``3.4``
and ``3.4.2``. Similarly, ``@4.2:`` means any version above and including
``4.2``. As a short-hand, ``@3`` is equivalent to the range ``@3:3`` and
includes any version with major version ``3``.
Notice that you can distinguish between the specific version ``@=3.2`` and
the range ``@3.2``. This is useful for packages that follow a versioning
scheme that omits the zero patch version number: ``3.2``, ``3.2.1``,
``3.2.2``, etc. In general it is preferable to use the range syntax
``@3.2``, since ranges also match versions with one-off suffixes, such as
``3.2-custom``.
A version specifier can also be a list of ranges and specific versions,
separated by commas. For example, ``@1.0:1.5,=1.7.1`` matches any version
in the range ``1.0:1.5`` and the specific version ``1.7.1``.
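These distinctions can be checked interactively with ``spack python`` (a sketch
using Spack's ``Spec`` class; ``pkg`` is a hypothetical package name):

.. code-block:: python

   from spack.spec import Spec

   Spec("pkg@=3.2.1").satisfies("@3.2")   # True: 3.2.1 lies in the range 3.2
   Spec("pkg@=3.2.1").satisfies("@=3.2")  # False: not exactly version 3.2
   Spec("pkg@=1.7.1").satisfies("@1.0:1.5,=1.7.1")  # True: in the list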
For packages with a ``git`` attribute, ``git`` references
may be specified instead of a numerical version, i.e. branches, tags
@@ -1121,36 +1136,35 @@ reference provided. Acceptable syntaxes for this are:
.. code-block:: sh
# branches and tags
foo@git.develop # use the develop branch
foo@git.0.19 # use the 0.19 tag
# commit hashes
foo@abcdef1234abcdef1234abcdef1234abcdef1234 # 40 character hashes are automatically treated as git commits
foo@git.abcdef1234abcdef1234abcdef1234abcdef1234
Spack versions from git references either have an associated version supplied by the user,
or infer a relationship to known versions from the structure of the git repository. If an
associated version is supplied by the user, Spack treats the git version as equivalent to that
version for all version comparisons in the package logic (e.g. ``depends_on('foo', when='@1.5')``).
# branches and tags
foo@git.develop # use the develop branch
foo@git.0.19 # use the 0.19 tag
The associated version can be assigned with ``[git ref]=[version]`` syntax, with the caveat that the specified version must be known to Spack, either from the package definition or from the configuration preferences (i.e. ``packages.yaml``).
Spack always needs to associate a Spack version with the git reference,
which is used for version comparison. This Spack version is heuristically
taken from the closest valid git tag among ancestors of the git ref.
Once a Spack version is associated with a git ref, it is always printed with
the git ref. For example, if the commit ``@git.abcdefg`` is tagged
``0.19``, then the spec will be shown as ``@git.abcdefg=0.19``.
If the git ref is not exactly a tag, then the distance to the nearest tag
is also part of the resolved version. ``@git.abcdefg=0.19.git.8`` means
that the commit is 8 commits away from the ``0.19`` tag.
In cases where Spack cannot resolve a sensible version from a git ref,
users can specify the Spack version to use for the git ref. This is done
by appending ``=`` and the Spack version to the git ref. For example:
.. code-block:: sh
foo@git.my_ref=3.2 # use the my_ref tag or branch, but treat it as version 3.2 for version comparisons
foo@git.abcdef1234abcdef1234abcdef1234abcdef1234=develop # use the given commit, but treat it as develop for version comparisons
If an associated version is not supplied then the tags in the git repo are used to determine
the most recent previous version known to Spack. Details about how versions are compared
and how Spack determines if one version is less than another are discussed in the developer guide.
If the version spec is not provided, then Spack will choose one
according to policies set for the particular Spack installation. If
the spec is ambiguous, i.e. it could match multiple versions, Spack
will choose a version within the spec's constraints according to
policies set for the particular Spack installation.
Details about how versions are compared and how Spack determines if
one version is less than another are discussed in the developer guide.

View File

@@ -36,7 +36,7 @@ Build caches are created via:
.. code-block:: console
$ spack buildcache create <path/url/mirror name> <spec>
$ spack buildcache push <path/url/mirror name> <spec>
This command takes the locally installed spec and its dependencies, and
creates tarballs of their install prefixes. It also generates metadata files,
@@ -48,7 +48,7 @@ Here is an example where a build cache is created in a local directory named
.. code-block:: console
$ spack buildcache create --allow-root ./spack-cache ninja
$ spack buildcache push --allow-root ./spack-cache ninja
==> Pushing binary packages to file:///home/spackuser/spack/spack-cache/build_cache
Note that ``ninja`` must be installed locally for this to work.
@@ -177,7 +177,7 @@ need to be adjusted for better re-locatability.
--------------------
^^^^^^^^^^^^^^^^^^^^^^^^^^^
``spack buildcache create``
``spack buildcache push``
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Create tarball of installed Spack package and all dependencies.

View File

@@ -325,42 +325,99 @@ on the command line, because it can specify constraints on packages
is not possible to specify constraints on dependencies while also keeping
those dependencies optional.
The package requirements configuration is specified in ``packages.yaml``
keyed by package name:
^^^^^^^^^^^^^^^^^^^
Requirements syntax
^^^^^^^^^^^^^^^^^^^
The package requirements configuration is specified in ``packages.yaml``,
keyed by package name and expressed using the Spec syntax. In the simplest
case you can specify attributes that you always want the package to have
by providing a single spec string to ``require``:
.. code-block:: yaml
packages:
libfabric:
require: "@1.13.2"
In the above example, ``libfabric`` will always build with version 1.13.2. If you
need to compose multiple configuration scopes, ``require`` accepts a list of
strings:
.. code-block:: yaml
packages:
libfabric:
require:
- "@1.13.2"
- "%gcc"
In this case ``libfabric`` will always build with version 1.13.2 **and** using GCC
as a compiler.
For more complex use cases, ``require`` also accepts a list of objects. These objects
must have either an ``any_of`` or a ``one_of`` field containing a list of spec strings,
and they can optionally have a ``when`` and a ``message`` attribute:
.. code-block:: yaml
packages:
openmpi:
require:
- any_of: ["~cuda", "%gcc"]
- any_of: ["@4.1.5", "%gcc"]
message: "in this example only 4.1.5 can build with other compilers"
``any_of`` is a list of specs. One of those specs must be satisfied
and it is also allowed for the concretized spec to match more than one.
In the above example, that means you could build ``openmpi@4.1.5%gcc``,
``openmpi@4.1.5%clang`` or ``openmpi@3.9%gcc``, but
not ``openmpi@3.9%clang``.
If a custom message is provided, and the requirement is not satisfiable,
Spack will print the custom error message:
.. code-block:: console
$ spack spec openmpi@3.9%clang
==> Error: in this example only 4.1.5 can build with other compilers
We could express a similar requirement using the ``when`` attribute:
.. code-block:: yaml
packages:
openmpi:
require:
- any_of: ["%gcc"]
when: "@:4.1.4"
message: "in this example only 4.1.5 can build with other compilers"
In the example above, if the version turns out to be 4.1.4 or less, we require the compiler to be GCC.
For readability, Spack also allows a ``spec`` key accepting a string when there is only a single
constraint:
.. code-block:: yaml
packages:
openmpi:
require:
- spec: "%gcc"
when: "@:4.1.4"
message: "in this example only 4.1.5 can build with other compilers"
This code snippet and the one before it are semantically equivalent.
Finally, instead of ``any_of`` you can use ``one_of``, which also takes a list of specs. The final
concretized spec must match one and only one of them:
.. code-block:: yaml
packages:
mpich:
require:
- one_of: ["+cuda", "+rocm"]
Requirements are expressed using Spec syntax (the same as what is provided
to ``spack install``). In the simplest case, you can specify attributes
that you always want the package to have by providing a single spec to
``require``; in the above example, ``libfabric`` will always build
with version 1.13.2.
You can provide a more-relaxed constraint and allow the concretizer to
choose between a set of options using ``any_of`` or ``one_of``:
* ``any_of`` is a list of specs. One of those specs must be satisfied
and it is also allowed for the concretized spec to match more than one.
In the above example, that means you could build ``openmpi+cuda%gcc``,
``openmpi~cuda%clang`` or ``openmpi~cuda%gcc`` (in the last case,
note that both specs in the ``any_of`` for ``openmpi`` are
satisfied).
* ``one_of`` is also a list of specs, and the final concretized spec
must match exactly one of them. In the above example, that means
you could build ``mpich+cuda`` or ``mpich+rocm`` but not
``mpich+cuda+rocm`` (note the current package definition for
``mpich`` already includes a conflict, so this is redundant but
still demonstrates the concept).
In the example above, that means you could build ``mpich+cuda`` or ``mpich+rocm`` but not ``mpich+cuda+rocm``.
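You can verify this kind of matching from ``spack python`` (a sketch;
``Spec.satisfies`` is the primitive Spack uses for constraint checks):

.. code-block:: python

   from spack.spec import Spec

   # +cuda+rocm matches *both* entries of the one_of group, so that
   # requirement would reject it; +cuda alone matches exactly one.
   Spec("mpich+cuda+rocm").satisfies("+cuda")  # True
   Spec("mpich+cuda+rocm").satisfies("+rocm")  # True
   Spec("mpich+cuda").satisfies("+rocm")       # False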
.. note::
@@ -368,6 +425,13 @@ choose between a set of options using ``any_of`` or ``one_of``:
preference: items that appear earlier in the list are preferred
(note that these preferences can be ignored in favor of others).
.. note::
When using a conditional requirement, Spack is allowed to actively avoid the triggering
condition (the ``when=...`` spec) if that leads to a concrete spec with better scores in
the optimization criteria. To check the current optimization criteria and their
priorities you can run ``spack solve zlib``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting default requirements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

View File

@@ -215,8 +215,9 @@ def setup(sphinx):
("py:class", "spack.repo._PrependFileLoader"),
("py:class", "spack.build_systems._checks.BaseBuilder"),
# Spack classes that intersphinx is unable to resolve
("py:class", "spack.version.VersionBase"),
("py:class", "spack.version.StandardVersion"),
("py:class", "spack.spec.DependencySpec"),
("py:class", "spack.install_test.Pb"),
]
# The reST default role (used for this markup: `text`) to use for all documents.

View File

@@ -20,8 +20,9 @@ case you want to skip directly to specific docs:
* :ref:`packages.yaml <build-settings>`
* :ref:`repos.yaml <repositories>`
You can also add any of these as inline configuration in ``spack.yaml``
in an :ref:`environment <environment-configuration>`.
You can also add any of these as inline configuration in the YAML
manifest file (``spack.yaml``) describing an :ref:`environment
<environment-configuration>`.
-----------
YAML Format

View File

@@ -143,6 +143,26 @@ The OS that are currently supported are summarized in the table below:
* - Amazon Linux 2
- ``amazonlinux:2``
- ``spack/amazon-linux``
* - AlmaLinux 8
- ``almalinux:8``
- ``spack/almalinux8``
* - AlmaLinux 9
- ``almalinux:9``
- ``spack/almalinux9``
* - Rocky Linux 8
- ``rockylinux:8``
- ``spack/rockylinux8``
* - Rocky Linux 9
- ``rockylinux:9``
- ``spack/rockylinux9``
* - Fedora Linux 37
- ``fedora:37``
- ``spack/fedora37``
* - Fedora Linux 38
- ``fedora:38``
- ``spack/fedora38``
All the images are tagged with the corresponding release of Spack:
@@ -444,6 +464,120 @@ attribute:
The minimum version of Singularity required to build a SIF (Singularity Image Format)
image from the recipes generated by Spack is ``3.5.3``.
------------------------------
Extending the Jinja2 Templates
------------------------------
The Dockerfile and the Singularity definition file that Spack can generate are based on
a few Jinja2 templates that are rendered according to the environment being containerized.
Even though Spack allows a great deal of customization by just setting appropriate values for
the configuration options, sometimes that is not enough.
In those cases, a user can directly extend the template that Spack uses to render the image
to e.g. set additional environment variables or perform specific operations either before or
after a given stage of the build. Let's consider as an example the following structure:
.. code-block:: console
$ tree /opt/environment
/opt/environment
├── data
│ └── data.csv
├── spack.yaml
└── templates
└── container
└── CustomDockerfile
containing both the custom template extension and the environment manifest file. To use a custom
template, the environment must register the directory containing it, and declare its use under the
``container`` configuration:
.. code-block:: yaml
:emphasize-lines: 7-8,12
spack:
specs:
- hdf5~mpi
concretizer:
unify: true
config:
template_dirs:
- /opt/environment/templates
container:
format: docker
depfile: true
template: container/CustomDockerfile
The template extension can override two blocks, named ``build_stage`` and ``final_stage``, similarly to
the example below:
.. code-block::
:emphasize-lines: 3,8
{% extends "container/Dockerfile" %}
{% block build_stage %}
RUN echo "Start building"
{{ super() }}
{% endblock %}
{% block final_stage %}
{{ super() }}
COPY data /share/myapp/data
{% endblock %}
The recipe that gets generated contains the two extra instructions that we added in our template extension:
.. code-block:: Dockerfile
:emphasize-lines: 4,43
# Build stage with Spack pre-installed and ready to be used
FROM spack/ubuntu-jammy:latest as builder
RUN echo "Start building"
# What we want to install and how we want to install it
# is specified in a manifest file (spack.yaml)
RUN mkdir /opt/spack-environment \
&& (echo "spack:" \
&& echo " specs:" \
&& echo " - hdf5~mpi" \
&& echo " concretizer:" \
&& echo " unify: true" \
&& echo " config:" \
&& echo " template_dirs:" \
&& echo " - /tmp/environment/templates" \
&& echo " install_tree: /opt/software" \
&& echo " view: /opt/view") > /opt/spack-environment/spack.yaml
# Install the software, remove unnecessary deps
RUN cd /opt/spack-environment && spack env activate . && spack concretize && spack env depfile -o Makefile && make -j $(nproc) && spack gc -y
# Strip all the binaries
RUN find -L /opt/view/* -type f -exec readlink -f '{}' \; | \
xargs file -i | \
grep 'charset=binary' | \
grep 'x-executable\|x-archive\|x-sharedlib' | \
awk -F: '{print $1}' | xargs strip -s
# Modifications to the environment that are necessary to run
RUN cd /opt/spack-environment && \
spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
# Bare OS image to run the installed executables
FROM ubuntu:22.04
COPY --from=builder /opt/spack-environment /opt/spack-environment
COPY --from=builder /opt/software /opt/software
COPY --from=builder /opt/._view /opt/._view
COPY --from=builder /opt/view /opt/view
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
COPY data /share/myapp/data
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l", "-c", "$*", "--" ]
CMD [ "/bin/bash" ]
.. _container_config_options:
-----------------------
@@ -464,6 +598,10 @@ to customize the generation of container recipes:
- The format of the recipe
- ``docker`` or ``singularity``
- Yes
* - ``depfile``
- Whether to use a depfile for installation, or not
- True or False (default)
- No
* - ``images:os``
- Operating system used as a base for the image
- See :ref:`containers-supported-os`
@@ -498,7 +636,7 @@ to customize the generation of container recipes:
- No
* - ``os_packages:command``
- Tool used to manage system packages
- ``apt``, ``yum``
- ``apt``, ``yum``, ``zypper``, ``apk``, ``yum_amazon``
- Only with custom base images
* - ``os_packages:update``
- Whether or not to update the list of available packages
@@ -512,14 +650,6 @@ to customize the generation of container recipes:
- System packages needed at run-time
- Valid packages for the current OS
- No
* - ``extra_instructions:build``
- Extra instructions (e.g. `RUN`, `COPY`, etc.) at the end of the ``build`` stage
- Anything understood by the current ``format``
- No
* - ``extra_instructions:final``
- Extra instructions (e.g. `RUN`, `COPY`, etc.) at the end of the ``final`` stage
- Anything understood by the current ``format``
- No
* - ``labels``
- Labels to tag the image
- Pairs of key-value strings

View File

@@ -94,9 +94,9 @@ an Environment, the ``.spack-env`` directory also contains:
* ``logs/``: A directory containing the build logs for the packages
in this Environment.
Spack Environments can also be created from either a ``spack.yaml``
manifest or a ``spack.lock`` lockfile. To create an Environment from a
``spack.yaml`` manifest:
Spack Environments can also be created from either a manifest file
(usually but not necessarily named, ``spack.yaml``) or a lockfile.
To create an Environment from a manifest:
.. code-block:: console
@@ -174,7 +174,7 @@ Anonymous specs can be created in place using the command:
$ spack env create -d .
In this case Spack simply creates a spack.yaml file in the requested
In this case Spack simply creates a ``spack.yaml`` file in the requested
directory.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -347,7 +347,7 @@ the Environment and then install the concretized specs.
(see :ref:`build-jobs`). To speed up environment builds further, independent
packages can be installed in parallel by launching more Spack instances. For
example, the following will build at most four packages in parallel using
three background jobs:
.. code-block:: console
@@ -395,7 +395,7 @@ version (and other constraints) passed as the spec argument to the
For packages with ``git`` attributes, git branches, tags, and commits can
also be used as valid concrete versions (see :ref:`version-specifier`).
This means that for a package ``foo``, ``spack develop foo@git.main`` will clone
the ``main`` branch of the package, and ``spack install`` will install from
that git clone if ``foo`` is in the environment.
Further development on ``foo`` can be tested by reinstalling the environment,
@@ -589,10 +589,11 @@ user support groups providing a large software stack for their HPC center.
.. admonition:: Re-concretization of user specs
When using *unified* concretization (when possible), the entire set of specs will be
re-concretized after any addition of new user specs, to ensure that
the environment remains consistent / minimal. When instead unified concretization is
disabled, only the new specs will be concretized after any addition.
The ``spack concretize`` command without additional arguments will *not* change any
previously concretized specs. This may prevent it from finding a solution when using
``unify: true``, and it may prevent it from finding a minimal solution when using
``unify: when_possible``. You can force Spack to ignore the existing concrete environment
with ``spack concretize -f``.
^^^^^^^^^^^^^
Spec Matrices
@@ -1121,19 +1122,19 @@ index once every package is pushed. Note how this target uses the generated
SPACK ?= spack
BUILDCACHE_DIR = $(CURDIR)/tarballs
.PHONY: all
all: push
include env.mk
example/push/%: example/install/%
@mkdir -p $(dir $@)
$(info About to push $(SPEC) to a buildcache)
$(SPACK) -e . buildcache create --allow-root --only=package --directory $(BUILDCACHE_DIR) /$(HASH)
@touch $@
push: $(addprefix example/push/,$(example/SPACK_PACKAGE_IDS))
$(info Updating the buildcache index)
$(SPACK) -e . buildcache update-index --directory $(BUILDCACHE_DIR)

View File

@@ -41,12 +41,9 @@ A build matrix showing which packages are working on which systems is shown belo
.. code-block:: console
yum update -y
yum install -y epel-release
yum update -y
yum --enablerepo epel groupinstall -y "Development Tools"
yum --enablerepo epel install -y curl findutils gcc-c++ gcc gcc-gfortran git gnupg2 hostname iproute redhat-lsb-core make patch python3 python3-pip python3-setuptools unzip
python3 -m pip install boto3
dnf install epel-release
dnf group install "Development Tools"
dnf install curl findutils gcc-gfortran gnupg2 hostname iproute redhat-lsb-core python3 python3-pip python3-setuptools unzip python3-boto3
.. tab-item:: macOS Brew
@@ -320,7 +317,7 @@ installed, but you know that new compilers have been added to your
.. code-block:: console
$ module load gcc-4.9.0
$ module load gcc/4.9.0
$ spack compiler find
==> Added 1 new compiler to ~/.spack/linux/compilers.yaml
gcc@4.9.0
@@ -368,7 +365,8 @@ Manual compiler configuration
If auto-detection fails, you can manually configure a compiler by
editing your ``~/.spack/<platform>/compilers.yaml`` file. You can do this by running
``spack config edit compilers``, which will open the file in your ``$EDITOR``.
``spack config edit compilers``, which will open the file in
:ref:`your favorite editor <controlling-the-editor>`.
Each compiler configuration in the file looks like this:

View File

@@ -163,7 +163,7 @@ your site.
Mirror environment
^^^^^^^^^^^^^^^^^^
To create a mirror of all packages required by a concerte environment, activate the environment and call ``spack mirror create -a``.
To create a mirror of all packages required by a concrete environment, activate the environment and call ``spack mirror create -a``.
This is especially useful to create a mirror of an environment concretized on another machine.
.. code-block:: console

View File

@@ -35,27 +35,27 @@ showing lots of installed packages:
$ module avail
--------------------------------------------------------------- ~/spack/share/spack/modules/linux-ubuntu14-x86_64 ---------------------------------------------------------------
autoconf-2.69-gcc-4.8-qextxkq hwloc-1.11.6-gcc-6.3.0-akcisez m4-1.4.18-gcc-4.8-ev2znoc openblas-0.2.19-gcc-6.3.0-dhkmed6 py-setuptools-34.2.0-gcc-6.3.0-fadur4s
automake-1.15-gcc-4.8-maqvukj isl-0.18-gcc-4.8-afi6taq m4-1.4.18-gcc-6.3.0-uppywnz openmpi-2.1.0-gcc-6.3.0-go2s4z5 py-six-1.10.0-gcc-6.3.0-p4dhkaw
binutils-2.28-gcc-4.8-5s7c6rs libiconv-1.15-gcc-4.8-at46wg3 mawk-1.3.4-gcc-4.8-acjez57 openssl-1.0.2k-gcc-4.8-dkls5tk python-2.7.13-gcc-6.3.0-tyehea7
bison-3.0.4-gcc-4.8-ek4luo5 libpciaccess-0.13.4-gcc-6.3.0-gmufnvh mawk-1.3.4-gcc-6.3.0-ostdoms openssl-1.0.2k-gcc-6.3.0-gxgr5or readline-7.0-gcc-4.8-xhufqhn
bzip2-1.0.6-gcc-4.8-iffrxzn libsigsegv-2.11-gcc-4.8-pp2cvte mpc-1.0.3-gcc-4.8-g5mztc5 pcre-8.40-gcc-4.8-r5pbrxb readline-7.0-gcc-6.3.0-zzcyicg
bzip2-1.0.6-gcc-6.3.0-bequudr libsigsegv-2.11-gcc-6.3.0-7enifnh mpfr-3.1.5-gcc-4.8-o7xm7az perl-5.24.1-gcc-4.8-dg5j65u sqlite-3.8.5-gcc-6.3.0-6zoruzj
cmake-3.7.2-gcc-6.3.0-fowuuby libtool-2.4.6-gcc-4.8-7a523za mpich-3.2-gcc-6.3.0-dmvd3aw perl-5.24.1-gcc-6.3.0-6uzkpt6 tar-1.29-gcc-4.8-wse2ass
curl-7.53.1-gcc-4.8-3fz46n6 libtool-2.4.6-gcc-6.3.0-n7zmbzt ncurses-6.0-gcc-4.8-dcpe7ia pkg-config-0.29.2-gcc-4.8-ib33t75 tcl-8.6.6-gcc-4.8-tfxzqbr
expat-2.2.0-gcc-4.8-mrv6bd4 libxml2-2.9.4-gcc-4.8-ryzxnsu ncurses-6.0-gcc-6.3.0-ucbhcdy pkg-config-0.29.2-gcc-6.3.0-jpgubk3 util-macros-1.19.1-gcc-6.3.0-xorz2x2
flex-2.6.3-gcc-4.8-yf345oo libxml2-2.9.4-gcc-6.3.0-rltzsdh netlib-lapack-3.6.1-gcc-6.3.0-js33dog py-appdirs-1.4.0-gcc-6.3.0-jxawmw7 xz-5.2.3-gcc-4.8-mew4log
gcc-6.3.0-gcc-4.8-24puqve lmod-7.4.1-gcc-4.8-je4srhr netlib-scalapack-2.0.2-gcc-6.3.0-5aidk4l py-numpy-1.12.0-gcc-6.3.0-oemmoeu xz-5.2.3-gcc-6.3.0-3vqeuvb
gettext-0.19.8.1-gcc-4.8-yymghlh lua-5.3.4-gcc-4.8-im75yaz netlib-scalapack-2.0.2-gcc-6.3.0-hjsemcn py-packaging-16.8-gcc-6.3.0-i2n3dtl zip-3.0-gcc-4.8-rwar22d
gmp-6.1.2-gcc-4.8-5ub2wu5 lua-luafilesystem-1_6_3-gcc-4.8-wkey3nl netlib-scalapack-2.0.2-gcc-6.3.0-jva724b py-pyparsing-2.1.10-gcc-6.3.0-tbo6gmw zlib-1.2.11-gcc-4.8-pgxsxv7
help2man-1.47.4-gcc-4.8-kcnqmau lua-luaposix-33.4.0-gcc-4.8-mdod2ry netlib-scalapack-2.0.2-gcc-6.3.0-rgqfr6d py-scipy-0.19.0-gcc-6.3.0-kr7nat4 zlib-1.2.11-gcc-6.3.0-7cqp6cj
autoconf/2.69-gcc-4.8-qextxkq hwloc/1.11.6-gcc-6.3.0-akcisez m4/1.4.18-gcc-4.8-ev2znoc openblas/0.2.19-gcc-6.3.0-dhkmed6 py-setuptools/34.2.0-gcc-6.3.0-fadur4s
automake/1.15-gcc-4.8-maqvukj isl/0.18-gcc-4.8-afi6taq m4/1.4.18-gcc-6.3.0-uppywnz openmpi/2.1.0-gcc-6.3.0-go2s4z5 py-six/1.10.0-gcc-6.3.0-p4dhkaw
binutils/2.28-gcc-4.8-5s7c6rs libiconv/1.15-gcc-4.8-at46wg3 mawk/1.3.4-gcc-4.8-acjez57 openssl/1.0.2k-gcc-4.8-dkls5tk python/2.7.13-gcc-6.3.0-tyehea7
bison/3.0.4-gcc-4.8-ek4luo5 libpciaccess/0.13.4-gcc-6.3.0-gmufnvh mawk/1.3.4-gcc-6.3.0-ostdoms openssl/1.0.2k-gcc-6.3.0-gxgr5or readline/7.0-gcc-4.8-xhufqhn
bzip2/1.0.6-gcc-4.8-iffrxzn libsigsegv/2.11-gcc-4.8-pp2cvte mpc/1.0.3-gcc-4.8-g5mztc5 pcre/8.40-gcc-4.8-r5pbrxb readline/7.0-gcc-6.3.0-zzcyicg
bzip2/1.0.6-gcc-6.3.0-bequudr libsigsegv/2.11-gcc-6.3.0-7enifnh mpfr/3.1.5-gcc-4.8-o7xm7az perl/5.24.1-gcc-4.8-dg5j65u sqlite/3.8.5-gcc-6.3.0-6zoruzj
cmake/3.7.2-gcc-6.3.0-fowuuby libtool/2.4.6-gcc-4.8-7a523za mpich/3.2-gcc-6.3.0-dmvd3aw perl/5.24.1-gcc-6.3.0-6uzkpt6 tar/1.29-gcc-4.8-wse2ass
curl/7.53.1-gcc-4.8-3fz46n6 libtool/2.4.6-gcc-6.3.0-n7zmbzt ncurses/6.0-gcc-4.8-dcpe7ia pkg-config/0.29.2-gcc-4.8-ib33t75 tcl/8.6.6-gcc-4.8-tfxzqbr
expat/2.2.0-gcc-4.8-mrv6bd4 libxml2/2.9.4-gcc-4.8-ryzxnsu ncurses/6.0-gcc-6.3.0-ucbhcdy pkg-config/0.29.2-gcc-6.3.0-jpgubk3 util-macros/1.19.1-gcc-6.3.0-xorz2x2
flex/2.6.3-gcc-4.8-yf345oo libxml2/2.9.4-gcc-6.3.0-rltzsdh netlib-lapack/3.6.1-gcc-6.3.0-js33dog py-appdirs/1.4.0-gcc-6.3.0-jxawmw7 xz/5.2.3-gcc-4.8-mew4log
gcc/6.3.0-gcc-4.8-24puqve lmod/7.4.1-gcc-4.8-je4srhr netlib-scalapack/2.0.2-gcc-6.3.0-5aidk4l py-numpy/1.12.0-gcc-6.3.0-oemmoeu xz/5.2.3-gcc-6.3.0-3vqeuvb
gettext/0.19.8.1-gcc-4.8-yymghlh lua/5.3.4-gcc-4.8-im75yaz netlib-scalapack/2.0.2-gcc-6.3.0-hjsemcn py-packaging/16.8-gcc-6.3.0-i2n3dtl zip/3.0-gcc-4.8-rwar22d
gmp/6.1.2-gcc-4.8-5ub2wu5 lua-luafilesystem/1_6_3-gcc-4.8-wkey3nl netlib-scalapack/2.0.2-gcc-6.3.0-jva724b py-pyparsing/2.1.10-gcc-6.3.0-tbo6gmw zlib/1.2.11-gcc-4.8-pgxsxv7
help2man/1.47.4-gcc-4.8-kcnqmau lua-luaposix/33.4.0-gcc-4.8-mdod2ry netlib-scalapack/2.0.2-gcc-6.3.0-rgqfr6d py-scipy/0.19.0-gcc-6.3.0-kr7nat4 zlib/1.2.11-gcc-6.3.0-7cqp6cj
The names should look familiar, as they resemble the output from ``spack find``.
For example, you could type the following command to load the ``cmake`` module:
.. code-block:: console
$ module load cmake-3.7.2-gcc-6.3.0-fowuuby
$ module load cmake/3.7.2-gcc-6.3.0-fowuuby
Neither of these is particularly pretty, easy to remember, or easy to
type. Luckily, Spack offers many facilities for customizing the module
@@ -779,35 +779,35 @@ cut-and-pasted into a shell script. For example:
$ spack module tcl loads --dependencies py-numpy git
# bzip2@1.0.6%gcc@4.9.3=linux-x86_64
module load bzip2-1.0.6-gcc-4.9.3-ktnrhkrmbbtlvnagfatrarzjojmkvzsx
module load bzip2/1.0.6-gcc-4.9.3-ktnrhkrmbbtlvnagfatrarzjojmkvzsx
# ncurses@6.0%gcc@4.9.3=linux-x86_64
module load ncurses-6.0-gcc-4.9.3-kaazyneh3bjkfnalunchyqtygoe2mncv
module load ncurses/6.0-gcc-4.9.3-kaazyneh3bjkfnalunchyqtygoe2mncv
# zlib@1.2.8%gcc@4.9.3=linux-x86_64
module load zlib-1.2.8-gcc-4.9.3-v3ufwaahjnviyvgjcelo36nywx2ufj7z
module load zlib/1.2.8-gcc-4.9.3-v3ufwaahjnviyvgjcelo36nywx2ufj7z
# sqlite@3.8.5%gcc@4.9.3=linux-x86_64
module load sqlite-3.8.5-gcc-4.9.3-a3eediswgd5f3rmto7g3szoew5nhehbr
module load sqlite/3.8.5-gcc-4.9.3-a3eediswgd5f3rmto7g3szoew5nhehbr
# readline@6.3%gcc@4.9.3=linux-x86_64
module load readline-6.3-gcc-4.9.3-se6r3lsycrwxyhreg4lqirp6xixxejh3
module load readline/6.3-gcc-4.9.3-se6r3lsycrwxyhreg4lqirp6xixxejh3
# python@3.5.1%gcc@4.9.3=linux-x86_64
module load python-3.5.1-gcc-4.9.3-5q5rsrtjld4u6jiicuvtnx52m7tfhegi
module load python/3.5.1-gcc-4.9.3-5q5rsrtjld4u6jiicuvtnx52m7tfhegi
# py-setuptools@20.5%gcc@4.9.3=linux-x86_64
module load py-setuptools-20.5-gcc-4.9.3-4qr2suj6p6glepnedmwhl4f62x64wxw2
module load py-setuptools/20.5-gcc-4.9.3-4qr2suj6p6glepnedmwhl4f62x64wxw2
# py-nose@1.3.7%gcc@4.9.3=linux-x86_64
module load py-nose-1.3.7-gcc-4.9.3-pwhtjw2dvdvfzjwuuztkzr7b4l6zepli
module load py-nose/1.3.7-gcc-4.9.3-pwhtjw2dvdvfzjwuuztkzr7b4l6zepli
# openblas@0.2.17%gcc@4.9.3+shared=linux-x86_64
module load openblas-0.2.17-gcc-4.9.3-pw6rmlom7apfsnjtzfttyayzc7nx5e7y
module load openblas/0.2.17-gcc-4.9.3-pw6rmlom7apfsnjtzfttyayzc7nx5e7y
# py-numpy@1.11.0%gcc@4.9.3+blas+lapack=linux-x86_64
module load py-numpy-1.11.0-gcc-4.9.3-mulodttw5pcyjufva4htsktwty4qd52r
module load py-numpy/1.11.0-gcc-4.9.3-mulodttw5pcyjufva4htsktwty4qd52r
# curl@7.47.1%gcc@4.9.3=linux-x86_64
module load curl-7.47.1-gcc-4.9.3-ohz3fwsepm3b462p5lnaquv7op7naqbi
module load curl/7.47.1-gcc-4.9.3-ohz3fwsepm3b462p5lnaquv7op7naqbi
# autoconf@2.69%gcc@4.9.3=linux-x86_64
module load autoconf-2.69-gcc-4.9.3-bkibjqhgqm5e3o423ogfv2y3o6h2uoq4
module load autoconf/2.69-gcc-4.9.3-bkibjqhgqm5e3o423ogfv2y3o6h2uoq4
# cmake@3.5.0%gcc@4.9.3~doc+ncurses+openssl~qt=linux-x86_64
module load cmake-3.5.0-gcc-4.9.3-x7xnsklmgwla3ubfgzppamtbqk5rwn7t
module load cmake/3.5.0-gcc-4.9.3-x7xnsklmgwla3ubfgzppamtbqk5rwn7t
# expat@2.1.0%gcc@4.9.3=linux-x86_64
module load expat-2.1.0-gcc-4.9.3-6pkz2ucnk2e62imwakejjvbv6egncppd
module load expat/2.1.0-gcc-4.9.3-6pkz2ucnk2e62imwakejjvbv6egncppd
# git@2.8.0-rc2%gcc@4.9.3+curl+expat=linux-x86_64
module load git-2.8.0-rc2-gcc-4.9.3-3bib4hqtnv5xjjoq5ugt3inblt4xrgkd
module load git/2.8.0-rc2-gcc-4.9.3-3bib4hqtnv5xjjoq5ugt3inblt4xrgkd
The script may be further edited by removing unnecessary modules.
@@ -826,12 +826,12 @@ For example, consider the following on one system:
.. code-block:: console
$ module avail
linux-SuSE11-x86_64/antlr-2.7.7-gcc-5.3.0-bdpl46y
linux-SuSE11-x86_64/antlr/2.7.7-gcc-5.3.0-bdpl46y
$ spack module tcl loads antlr # WRONG!
# antlr@2.7.7%gcc@5.3.0~csharp+cxx~java~python arch=linux-SuSE11-x86_64
module load antlr-2.7.7-gcc-5.3.0-bdpl46y
module load antlr/2.7.7-gcc-5.3.0-bdpl46y
$ spack module tcl loads --prefix linux-SuSE11-x86_64/ antlr
# antlr@2.7.7%gcc@5.3.0~csharp+cxx~java~python arch=linux-SuSE11-x86_64
module load linux-SuSE11-x86_64/antlr-2.7.7-gcc-5.3.0-bdpl46y
module load linux-SuSE11-x86_64/antlr/2.7.7-gcc-5.3.0-bdpl46y

File diff suppressed because it is too large

View File

@@ -632,18 +632,18 @@ Here's an example of what bootstrapping some compilers might look like:
exclude:
- '%gcc@7.3.0 os=centos7'
- '%gcc@5.5.0 os=ubuntu18.04'
gitlab-ci:
ci:
bootstrap:
- name: compiler-pkgs
compiler-agnostic: true
mappings:
# mappings similar to the example higher up in this description
pipeline-gen:
# similar to the example higher up in this description
...
The example above adds a list to the ``definitions`` called ``compiler-pkgs``
(you can add any number of these), which lists compiler packages that should
be staged ahead of the full matrix of release specs (in this example, only
readline). Then within the ``gitlab-ci`` section, note the addition of a
readline). Then within the ``ci`` section, note the addition of a
``bootstrap`` section, which can contain a list of items, each referring to
a list in the ``definitions`` section. These items can either
be a dictionary or a string. If you supply a dictionary, it must have a name
@@ -709,7 +709,7 @@ be reported.
Take a look at the
`schema <https://github.com/spack/spack/blob/develop/lib/spack/spack/schema/ci.py>`_
for the gitlab-ci section of the spack environment file, to see precisely what
for the ci section of the spack environment file, to see precisely what
syntax is allowed there.
.. _reserved_tags:

View File

A package repository is a directory structured like this::
...
The top-level ``repo.yaml`` file contains configuration metadata for the
repository. The packages subdirectory, typically ``packages``, contains
subdirectories for each package in the repository. Each package directory
contains a ``package.py`` file and any patches or other files needed to build the
package.
The ``repo.yaml`` file may also contain a ``subdirectory`` key,
which can modify the name of the subdirectory used for packages. As seen above,
the default value is ``packages``. An empty string (``subdirectory: ''``) requires
a flattened repo structure in which the package names are top-level subdirectories.
Package repositories allow you to:
1. Maintain your own packages separately from Spack;
@@ -373,6 +378,24 @@ You can supply a custom namespace with a second argument, e.g.:
repo:
namespace: 'llnl.comp'
You can also create repositories with custom structure with the ``-d/--subdirectory``
argument, e.g.:
.. code-block:: console
$ spack repo create -d applications myrepo apps
==> Created repo with namespace 'apps'.
==> To register it with Spack, run this command:
spack repo add ~/myrepo
$ ls myrepo
applications/ repo.yaml
$ cat myrepo/repo.yaml
repo:
namespace: apps
subdirectory: applications
^^^^^^^^^^^^^^^^^^
``spack repo add``
^^^^^^^^^^^^^^^^^^

View File

@@ -10,3 +10,4 @@ python-levenshtein
# https://stackoverflow.com/questions/67542699
docutils <0.17
pygments <2.13
urllib3 <2

lib/spack/env/cc
View File

@@ -430,43 +430,57 @@ other_args_list=""
# Global state for keeping track of -Wl,-rpath -Wl,/path
wl_expect_rpath=no
# Same, but for -Xlinker -rpath -Xlinker /path
xlinker_expect_rpath=no
parse_Wl() {
# drop -Wl
shift
while [ $# -ne 0 ]; do
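# Two-state parser: a bare -rpath/--rpath sets wl_expect_rpath=yes,
# so the *next* word is recorded as an rpath directory instead of a flag.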
if [ "$wl_expect_rpath" = yes ]; then
rp="$1"
if system_dir "$1"; then
append system_rpath_dirs_list "$1"
else
append rpath_dirs_list "$1"
fi
wl_expect_rpath=no
else
rp=""
case "$1" in
-rpath=*)
rp="${1#-rpath=}"
arg="${1#-rpath=}"
if [ -z "$arg" ]; then
shift; continue
elif system_dir "$arg"; then
append system_rpath_dirs_list "$arg"
else
append rpath_dirs_list "$arg"
fi
;;
--rpath=*)
rp="${1#--rpath=}"
arg="${1#--rpath=}"
if [ -z "$arg" ]; then
shift; continue
elif system_dir "$arg"; then
append system_rpath_dirs_list "$arg"
else
append rpath_dirs_list "$arg"
fi
;;
-rpath|--rpath)
wl_expect_rpath=yes
;;
"$dtags_to_strip")
;;
-Wl)
# Nested -Wl,-Wl means we're in NAG compiler territory; we don't support
# it.
return 1
;;
*)
append other_args_list "-Wl,$1"
;;
esac
fi
if [ -n "$rp" ]; then
if system_dir "$rp"; then
append system_rpath_dirs_list "$rp"
else
append rpath_dirs_list "$rp"
fi
fi
shift
done
# Shell functions have no local variables, so always reset this to the empty string.
rp=""
}
@@ -569,42 +583,78 @@ while [ $# -ne 0 ]; do
;;
-Wl,*)
IFS=,
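# With IFS set to ",", the unquoted expansion below splits an argument
# such as -Wl,-rpath,/opt/lib into separate words for parse_Wl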
parse_Wl $1
if ! parse_Wl ${1#-Wl,}; then
append other_args_list "$1"
fi
unset IFS
;;
-Xlinker)
if [ "$2" = "-rpath" ]; then
if [ "$3" != "-Xlinker" ]; then
die "-Xlinker,-rpath was not followed by -Xlinker,*"
shift
if [ $# -eq 0 ]; then
# -Xlinker without value: let the compiler error about it.
append other_args_list -Xlinker
xlinker_expect_rpath=no
break
elif [ "$xlinker_expect_rpath" = yes ]; then
# Register the path of -Xlinker -rpath <other args> -Xlinker <path>
if system_dir "$1"; then
append system_rpath_dirs_list "$1"
else
append rpath_dirs_list "$1"
fi
shift 3;
rp="$1"
elif [ "$2" = "$dtags_to_strip" ]; then
shift # We want to remove explicitly this flag
xlinker_expect_rpath=no
else
append other_args_list "$1"
case "$1" in
-rpath=*)
arg="${1#-rpath=}"
if system_dir "$arg"; then
append system_rpath_dirs_list "$arg"
else
append rpath_dirs_list "$arg"
fi
;;
--rpath=*)
arg="${1#--rpath=}"
if system_dir "$arg"; then
append system_rpath_dirs_list "$arg"
else
append rpath_dirs_list "$arg"
fi
;;
-rpath|--rpath)
xlinker_expect_rpath=yes
;;
"$dtags_to_strip")
;;
*)
append other_args_list -Xlinker
append other_args_list "$1"
;;
esac
fi
;;
"$dtags_to_strip")
;;
*)
if [ "$1" = "$dtags_to_strip" ]; then
: # We want to remove explicitly this flag
else
append other_args_list "$1"
fi
append other_args_list "$1"
;;
esac
# test rpaths against system directories in one place.
if [ -n "$rp" ]; then
if system_dir "$rp"; then
append system_rpath_dirs_list "$rp"
else
append rpath_dirs_list "$rp"
fi
fi
shift
done
# We found `-Xlinker -rpath` but no matching value `-Xlinker /path`. Just append
# `-Xlinker -rpath` again and let the compiler or linker handle the error during arg
# parsing.
if [ "$xlinker_expect_rpath" = yes ]; then
append other_args_list -Xlinker
append other_args_list -rpath
fi
# Same, but for -Wl flags.
if [ "$wl_expect_rpath" = yes ]; then
append other_args_list -Wl,-rpath
fi
#
# Add flags from Spack's cppflags, cflags, cxxflags, fcflags, fflags, and
# ldflags. We stick to the order that gmake puts the flags in by default.
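
An aside on the rewritten parse_Wl above: it is a small state machine over the comma-separated payload of a -Wl,... argument. The Python sketch below mirrors the same classification logic; the function name and the SYSTEM_DIRS set are illustrative assumptions, not Spack code:

SYSTEM_DIRS = {"/lib", "/lib64", "/usr/lib", "/usr/lib64"}  # assumed; cc derives these

def system_dir(path):
    # Stand-in for the wrapper's system_dir check.
    return path.rstrip("/") in SYSTEM_DIRS

def parse_wl(arg):
    """Classify the rpath directories in a '-Wl,...' argument, like parse_Wl."""
    rpath_dirs, system_rpath_dirs, other_args = [], [], []
    expect_rpath = False                      # mirrors wl_expect_rpath
    for tok in arg[len("-Wl,"):].split(","):  # mirrors the IFS=, word split
        if expect_rpath:                      # previous token was -rpath/--rpath
            (system_rpath_dirs if system_dir(tok) else rpath_dirs).append(tok)
            expect_rpath = False
        elif tok in ("-rpath", "--rpath"):
            expect_rpath = True
        elif tok.startswith(("-rpath=", "--rpath=")):
            path = tok.split("=", 1)[1]
            if path:                          # '-rpath=' with no value is dropped
                (system_rpath_dirs if system_dir(path) else rpath_dirs).append(path)
        elif tok == "-Wl":
            raise ValueError("nested -Wl,-Wl is not supported")  # NAG territory
        else:
            other_args.append("-Wl," + tok)
    return rpath_dirs, system_rpath_dirs, other_args

print(parse_wl("-Wl,-rpath,/opt/lib,-rpath=/usr/lib,--enable-new-dtags"))
# (['/opt/lib'], ['/usr/lib'], ['-Wl,--enable-new-dtags'])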


@@ -18,7 +18,7 @@
* Homepage: https://pypi.python.org/pypi/archspec
* Usage: Labeling, comparison and detection of microarchitectures
-* Version: 0.2.0 (commit e44bad9c7b6defac73696f64078b2fe634719b62)
+* Version: 0.2.1 (commit 9e1117bd8a2f0581bced161f2a2e8d6294d0300b)
astunparse
----------------
@@ -101,10 +101,7 @@
* Usage: Used for config files. Ruamel is based on PyYAML but is more
actively maintained and has more features, including round-tripping
comments read from config files.
-* Version: 0.11.15 (last version supporting Python 2.6)
-* Note: This package has been slightly modified to improve Python 2.6
-  compatibility -- some ``{}`` format strings were replaced, and the
-  import for ``OrderedDict`` was tweaked.
+* Version: 0.17.21
six
---


@@ -0,0 +1 @@
from ruamel import *


@@ -1,21 +1,21 @@
The MIT License (MIT)
Copyright (c) 2014-2022 Anthon van der Neut, Ruamel bvba
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
-THE SOFTWARE.
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.


@@ -0,0 +1,57 @@
# coding: utf-8
if False: # MYPY
from typing import Dict, Any # NOQA
_package_data = dict(
full_package_name='ruamel.yaml',
version_info=(0, 17, 21),
__version__='0.17.21',
version_timestamp='2022-02-12 09:49:22',
author='Anthon van der Neut',
author_email='a.van.der.neut@ruamel.eu',
description='ruamel.yaml is a YAML parser/emitter that supports roundtrip preservation of comments, seq/map flow style, and map key order', # NOQA
entry_points=None,
since=2014,
extras_require={
':platform_python_implementation=="CPython" and python_version<"3.11"': ['ruamel.yaml.clib>=0.2.6'], # NOQA
'jinja2': ['ruamel.yaml.jinja2>=0.2'],
'docs': ['ryd'],
},
classifiers=[
'Programming Language :: Python :: 3 :: Only',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: 3.10',
'Programming Language :: Python :: Implementation :: CPython',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: Text Processing :: Markup',
'Typing :: Typed',
],
keywords='yaml 1.2 parser round-trip preserve quotes order config',
read_the_docs='yaml',
supported=[(3, 5)], # minimum
tox=dict(
env='*f', # f for 3.5
fl8excl='_test/lib',
),
# universal=True,
python_requires='>=3',
rtfd='yaml',
) # type: Dict[Any, Any]
version_info = _package_data['version_info']
__version__ = _package_data['__version__']
try:
from .cyaml import * # NOQA
__with_libyaml__ = True
except (ImportError, ValueError): # for Jython
__with_libyaml__ = False
from ruamel.yaml.main import * # NOQA


@@ -0,0 +1,20 @@
# coding: utf-8
if False: # MYPY
from typing import Any, Dict, Optional, List, Union, Optional, Iterator # NOQA
anchor_attrib = '_yaml_anchor'
class Anchor:
__slots__ = 'value', 'always_dump'
attrib = anchor_attrib
def __init__(self):
# type: () -> None
self.value = None
self.always_dump = False
def __repr__(self):
# type: () -> Any
ad = ', (always dump)' if self.always_dump else ""
return 'Anchor({!r}{})'.format(self.value, ad)

File diff suppressed because it is too large


@@ -0,0 +1,268 @@
# coding: utf-8
# partially from package six by Benjamin Peterson
import sys
import os
import io
import traceback
from abc import abstractmethod
import collections.abc
# fmt: off
if False: # MYPY
from typing import Any, Dict, Optional, List, Union, BinaryIO, IO, Text, Tuple # NOQA
from typing import Optional # NOQA
# fmt: on
_DEFAULT_YAML_VERSION = (1, 2)
try:
from collections import OrderedDict
except ImportError:
from ordereddict import OrderedDict # type: ignore
# to get the right name import ... as ordereddict doesn't do that
class ordereddict(OrderedDict): # type: ignore
if not hasattr(OrderedDict, 'insert'):
def insert(self, pos, key, value):
# type: (int, Any, Any) -> None
if pos >= len(self):
self[key] = value
return
od = ordereddict()
od.update(self)
for k in od:
del self[k]
for index, old_key in enumerate(od):
if pos == index:
self[key] = value
self[old_key] = od[old_key]
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
# replace with f-strings when 3.5 support is dropped
# ft = '42'
# assert _F('abc {ft!r}', ft=ft) == 'abc %r' % ft
# 'abc %r' % ft -> _F('abc {ft!r}' -> f'abc {ft!r}'
def _F(s, *superfluous, **kw):
# type: (Any, Any, Any) -> Any
if superfluous:
raise TypeError
return s.format(**kw)
StringIO = io.StringIO
BytesIO = io.BytesIO
if False: # MYPY
# StreamType = Union[BinaryIO, IO[str], IO[unicode], StringIO]
# StreamType = Union[BinaryIO, IO[str], StringIO] # type: ignore
StreamType = Any
StreamTextType = StreamType # Union[Text, StreamType]
VersionType = Union[List[int], str, Tuple[int, int]]
builtins_module = 'builtins'
def with_metaclass(meta, *bases):
# type: (Any, Any) -> Any
"""Create a base class with a metaclass."""
return meta('NewBase', bases, {})
DBG_TOKEN = 1
DBG_EVENT = 2
DBG_NODE = 4
_debug = None # type: Optional[int]
if 'RUAMELDEBUG' in os.environ:
_debugx = os.environ.get('RUAMELDEBUG')
if _debugx is None:
_debug = 0
else:
_debug = int(_debugx)
if bool(_debug):
class ObjectCounter:
def __init__(self):
# type: () -> None
self.map = {} # type: Dict[Any, Any]
def __call__(self, k):
# type: (Any) -> None
self.map[k] = self.map.get(k, 0) + 1
def dump(self):
# type: () -> None
for k in sorted(self.map):
sys.stdout.write('{} -> {}'.format(k, self.map[k]))
object_counter = ObjectCounter()
# used from yaml util when testing
def dbg(val=None):
# type: (Any) -> Any
global _debug
if _debug is None:
# set to true or false
_debugx = os.environ.get('YAMLDEBUG')
if _debugx is None:
_debug = 0
else:
_debug = int(_debugx)
if val is None:
return _debug
return _debug & val
class Nprint:
def __init__(self, file_name=None):
# type: (Any) -> None
self._max_print = None # type: Any
self._count = None # type: Any
self._file_name = file_name
def __call__(self, *args, **kw):
# type: (Any, Any) -> None
if not bool(_debug):
return
out = sys.stdout if self._file_name is None else open(self._file_name, 'a')
dbgprint = print # to fool checking for print statements by dv utility
kw1 = kw.copy()
kw1['file'] = out
dbgprint(*args, **kw1)
out.flush()
if self._max_print is not None:
if self._count is None:
self._count = self._max_print
self._count -= 1
if self._count == 0:
dbgprint('forced exit\n')
traceback.print_stack()
out.flush()
sys.exit(0)
if self._file_name:
out.close()
def set_max_print(self, i):
# type: (int) -> None
self._max_print = i
self._count = None
def fp(self, mode='a'):
# type: (str) -> Any
out = sys.stdout if self._file_name is None else open(self._file_name, mode)
return out
nprint = Nprint()
nprintf = Nprint('/var/tmp/ruamel.yaml.log')
# char checkers following production rules
def check_namespace_char(ch):
# type: (Any) -> bool
if '\x21' <= ch <= '\x7E': # ! to ~
return True
if '\xA0' <= ch <= '\uD7FF':
return True
if ('\uE000' <= ch <= '\uFFFD') and ch != '\uFEFF': # excl. byte order mark
return True
if '\U00010000' <= ch <= '\U0010FFFF':
return True
return False
def check_anchorname_char(ch):
# type: (Any) -> bool
if ch in ',[]{}':
return False
return check_namespace_char(ch)
def version_tnf(t1, t2=None):
# type: (Any, Any) -> Any
"""
return True if ruamel.yaml version_info < t1, None if t2 is specified and bigger else False
"""
from ruamel.yaml import version_info # NOQA
if version_info < t1:
return True
if t2 is not None and version_info < t2:
return None
return False
class MutableSliceableSequence(collections.abc.MutableSequence): # type: ignore
__slots__ = ()
def __getitem__(self, index):
# type: (Any) -> Any
if not isinstance(index, slice):
return self.__getsingleitem__(index)
return type(self)([self[i] for i in range(*index.indices(len(self)))]) # type: ignore
def __setitem__(self, index, value):
# type: (Any, Any) -> None
if not isinstance(index, slice):
return self.__setsingleitem__(index, value)
assert iter(value)
# nprint(index.start, index.stop, index.step, index.indices(len(self)))
if index.step is None:
del self[index.start : index.stop]
for elem in reversed(value):
self.insert(0 if index.start is None else index.start, elem)
else:
range_parms = index.indices(len(self))
nr_assigned_items = (range_parms[1] - range_parms[0] - 1) // range_parms[2] + 1
# need to test before changing, in case TypeError is caught
if nr_assigned_items < len(value):
raise TypeError(
'too many elements in value {} < {}'.format(nr_assigned_items, len(value))
)
elif nr_assigned_items > len(value):
raise TypeError(
'not enough elements in value {} > {}'.format(
nr_assigned_items, len(value)
)
)
for idx, i in enumerate(range(*range_parms)):
self[i] = value[idx]
def __delitem__(self, index):
# type: (Any) -> None
if not isinstance(index, slice):
return self.__delsingleitem__(index)
# nprint(index.start, index.stop, index.step, index.indices(len(self)))
for i in reversed(range(*index.indices(len(self)))):
del self[i]
@abstractmethod
def __getsingleitem__(self, index):
# type: (Any) -> Any
raise IndexError
@abstractmethod
def __setsingleitem__(self, index, value):
# type: (Any, Any) -> None
raise IndexError
@abstractmethod
def __delsingleitem__(self, index):
# type: (Any) -> None
raise IndexError
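
The _F helper defined above deserves a note: it is a keyword-only wrapper over str.format, kept mechanically convertible to f-strings once Python 3.5 support is dropped (per the comment in the file). A quick sketch of its contract, using only what the vendored module defines:

from ruamel.yaml.compat import _F  # the helper shown above

ft = "42"
assert _F("abc {ft!r}", ft=ft) == "abc '42'"   # same result as f"abc {ft!r}"
try:
    _F("abc {}", ft)    # positional arguments are rejected by design
except TypeError:
    pass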


@@ -0,0 +1,243 @@
# coding: utf-8
import warnings
from ruamel.yaml.error import MarkedYAMLError, ReusedAnchorWarning
from ruamel.yaml.compat import _F, nprint, nprintf # NOQA
from ruamel.yaml.events import (
StreamStartEvent,
StreamEndEvent,
MappingStartEvent,
MappingEndEvent,
SequenceStartEvent,
SequenceEndEvent,
AliasEvent,
ScalarEvent,
)
from ruamel.yaml.nodes import MappingNode, ScalarNode, SequenceNode
if False: # MYPY
from typing import Any, Dict, Optional, List # NOQA
__all__ = ['Composer', 'ComposerError']
class ComposerError(MarkedYAMLError):
pass
class Composer:
def __init__(self, loader=None):
# type: (Any) -> None
self.loader = loader
if self.loader is not None and getattr(self.loader, '_composer', None) is None:
self.loader._composer = self
self.anchors = {} # type: Dict[Any, Any]
@property
def parser(self):
# type: () -> Any
if hasattr(self.loader, 'typ'):
self.loader.parser
return self.loader._parser
@property
def resolver(self):
# type: () -> Any
# assert self.loader._resolver is not None
if hasattr(self.loader, 'typ'):
self.loader.resolver
return self.loader._resolver
def check_node(self):
# type: () -> Any
# Drop the STREAM-START event.
if self.parser.check_event(StreamStartEvent):
self.parser.get_event()
# If there are more documents available?
return not self.parser.check_event(StreamEndEvent)
def get_node(self):
# type: () -> Any
# Get the root node of the next document.
if not self.parser.check_event(StreamEndEvent):
return self.compose_document()
def get_single_node(self):
# type: () -> Any
# Drop the STREAM-START event.
self.parser.get_event()
# Compose a document if the stream is not empty.
document = None # type: Any
if not self.parser.check_event(StreamEndEvent):
document = self.compose_document()
# Ensure that the stream contains no more documents.
if not self.parser.check_event(StreamEndEvent):
event = self.parser.get_event()
raise ComposerError(
'expected a single document in the stream',
document.start_mark,
'but found another document',
event.start_mark,
)
# Drop the STREAM-END event.
self.parser.get_event()
return document
def compose_document(self):
# type: (Any) -> Any
# Drop the DOCUMENT-START event.
self.parser.get_event()
# Compose the root node.
node = self.compose_node(None, None)
# Drop the DOCUMENT-END event.
self.parser.get_event()
self.anchors = {}
return node
def return_alias(self, a):
# type: (Any) -> Any
return a
def compose_node(self, parent, index):
# type: (Any, Any) -> Any
if self.parser.check_event(AliasEvent):
event = self.parser.get_event()
alias = event.anchor
if alias not in self.anchors:
raise ComposerError(
None,
None,
_F('found undefined alias {alias!r}', alias=alias),
event.start_mark,
)
return self.return_alias(self.anchors[alias])
event = self.parser.peek_event()
anchor = event.anchor
if anchor is not None: # have an anchor
if anchor in self.anchors:
# raise ComposerError(
# "found duplicate anchor %r; first occurrence"
# % (anchor), self.anchors[anchor].start_mark,
# "second occurrence", event.start_mark)
ws = (
'\nfound duplicate anchor {!r}\nfirst occurrence {}\nsecond occurrence '
'{}'.format((anchor), self.anchors[anchor].start_mark, event.start_mark)
)
warnings.warn(ws, ReusedAnchorWarning)
self.resolver.descend_resolver(parent, index)
if self.parser.check_event(ScalarEvent):
node = self.compose_scalar_node(anchor)
elif self.parser.check_event(SequenceStartEvent):
node = self.compose_sequence_node(anchor)
elif self.parser.check_event(MappingStartEvent):
node = self.compose_mapping_node(anchor)
self.resolver.ascend_resolver()
return node
def compose_scalar_node(self, anchor):
# type: (Any) -> Any
event = self.parser.get_event()
tag = event.tag
if tag is None or tag == '!':
tag = self.resolver.resolve(ScalarNode, event.value, event.implicit)
node = ScalarNode(
tag,
event.value,
event.start_mark,
event.end_mark,
style=event.style,
comment=event.comment,
anchor=anchor,
)
if anchor is not None:
self.anchors[anchor] = node
return node
def compose_sequence_node(self, anchor):
# type: (Any) -> Any
start_event = self.parser.get_event()
tag = start_event.tag
if tag is None or tag == '!':
tag = self.resolver.resolve(SequenceNode, None, start_event.implicit)
node = SequenceNode(
tag,
[],
start_event.start_mark,
None,
flow_style=start_event.flow_style,
comment=start_event.comment,
anchor=anchor,
)
if anchor is not None:
self.anchors[anchor] = node
index = 0
while not self.parser.check_event(SequenceEndEvent):
node.value.append(self.compose_node(node, index))
index += 1
end_event = self.parser.get_event()
if node.flow_style is True and end_event.comment is not None:
if node.comment is not None:
nprint(
'Warning: unexpected end_event commment in sequence '
'node {}'.format(node.flow_style)
)
node.comment = end_event.comment
node.end_mark = end_event.end_mark
self.check_end_doc_comment(end_event, node)
return node
def compose_mapping_node(self, anchor):
# type: (Any) -> Any
start_event = self.parser.get_event()
tag = start_event.tag
if tag is None or tag == '!':
tag = self.resolver.resolve(MappingNode, None, start_event.implicit)
node = MappingNode(
tag,
[],
start_event.start_mark,
None,
flow_style=start_event.flow_style,
comment=start_event.comment,
anchor=anchor,
)
if anchor is not None:
self.anchors[anchor] = node
while not self.parser.check_event(MappingEndEvent):
# key_event = self.parser.peek_event()
item_key = self.compose_node(node, None)
# if item_key in node.value:
# raise ComposerError("while composing a mapping",
# start_event.start_mark,
# "found duplicate key", key_event.start_mark)
item_value = self.compose_node(node, item_key)
# node.value[item_key] = item_value
node.value.append((item_key, item_value))
end_event = self.parser.get_event()
if node.flow_style is True and end_event.comment is not None:
node.comment = end_event.comment
node.end_mark = end_event.end_mark
self.check_end_doc_comment(end_event, node)
return node
def check_end_doc_comment(self, end_event, node):
# type: (Any, Any) -> None
if end_event.comment and end_event.comment[1]:
# pre comments on an end_event, no following to move to
if node.comment is None:
node.comment = [None, None]
assert not isinstance(node, ScalarEvent)
# this is a post comment on a mapping node, add as third element
# in the list
node.comment.append(end_event.comment[1])
end_event.comment[1] = None
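
A short sketch of the Composer behavior defined above, assuming the vendored package imports as ruamel.yaml: aliases resolve through Composer.anchors, and a reused anchor warns instead of failing.

import warnings
from ruamel.yaml import YAML
from ruamel.yaml.error import ReusedAnchorWarning

yaml = YAML(typ="safe", pure=True)
data = yaml.load("base: &a {x: 1}\nderived: *a\n")  # '*a' resolved via the anchors map
assert data["derived"] == {"x": 1}

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    yaml.load("a: &k 1\nb: &k 2\n")                 # anchor 'k' defined twice
assert any(issubclass(w.category, ReusedAnchorWarning) for w in caught)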


@@ -0,0 +1,14 @@
# coding: utf-8
import warnings
from ruamel.yaml.util import configobj_walker as new_configobj_walker
if False: # MYPY
from typing import Any # NOQA
def configobj_walker(cfg):
# type: (Any) -> Any
warnings.warn('configobj_walker has moved to ruamel.yaml.util, please update your code')
return new_configobj_walker(cfg)

File diff suppressed because it is too large


@@ -0,0 +1,183 @@
# coding: utf-8
from _ruamel_yaml import CParser, CEmitter # type: ignore
from ruamel.yaml.constructor import Constructor, BaseConstructor, SafeConstructor
from ruamel.yaml.representer import Representer, SafeRepresenter, BaseRepresenter
from ruamel.yaml.resolver import Resolver, BaseResolver
if False: # MYPY
from typing import Any, Union, Optional # NOQA
from ruamel.yaml.compat import StreamTextType, StreamType, VersionType # NOQA
__all__ = ['CBaseLoader', 'CSafeLoader', 'CLoader', 'CBaseDumper', 'CSafeDumper', 'CDumper']
# this includes some hacks to solve the usage of resolver by lower level
# parts of the parser
class CBaseLoader(CParser, BaseConstructor, BaseResolver): # type: ignore
def __init__(self, stream, version=None, preserve_quotes=None):
# type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
CParser.__init__(self, stream)
self._parser = self._composer = self
BaseConstructor.__init__(self, loader=self)
BaseResolver.__init__(self, loadumper=self)
# self.descend_resolver = self._resolver.descend_resolver
# self.ascend_resolver = self._resolver.ascend_resolver
# self.resolve = self._resolver.resolve
class CSafeLoader(CParser, SafeConstructor, Resolver): # type: ignore
def __init__(self, stream, version=None, preserve_quotes=None):
# type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
CParser.__init__(self, stream)
self._parser = self._composer = self
SafeConstructor.__init__(self, loader=self)
Resolver.__init__(self, loadumper=self)
# self.descend_resolver = self._resolver.descend_resolver
# self.ascend_resolver = self._resolver.ascend_resolver
# self.resolve = self._resolver.resolve
class CLoader(CParser, Constructor, Resolver): # type: ignore
def __init__(self, stream, version=None, preserve_quotes=None):
# type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
CParser.__init__(self, stream)
self._parser = self._composer = self
Constructor.__init__(self, loader=self)
Resolver.__init__(self, loadumper=self)
# self.descend_resolver = self._resolver.descend_resolver
# self.ascend_resolver = self._resolver.ascend_resolver
# self.resolve = self._resolver.resolve
class CBaseDumper(CEmitter, BaseRepresenter, BaseResolver): # type: ignore
def __init__(
self,
stream,
default_style=None,
default_flow_style=None,
canonical=None,
indent=None,
width=None,
allow_unicode=None,
line_break=None,
encoding=None,
explicit_start=None,
explicit_end=None,
version=None,
tags=None,
block_seq_indent=None,
top_level_colon_align=None,
prefix_colon=None,
):
# type: (StreamType, Any, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None # NOQA
CEmitter.__init__(
self,
stream,
canonical=canonical,
indent=indent,
width=width,
encoding=encoding,
allow_unicode=allow_unicode,
line_break=line_break,
explicit_start=explicit_start,
explicit_end=explicit_end,
version=version,
tags=tags,
)
self._emitter = self._serializer = self._representer = self
BaseRepresenter.__init__(
self,
default_style=default_style,
default_flow_style=default_flow_style,
dumper=self,
)
BaseResolver.__init__(self, loadumper=self)
class CSafeDumper(CEmitter, SafeRepresenter, Resolver): # type: ignore
def __init__(
self,
stream,
default_style=None,
default_flow_style=None,
canonical=None,
indent=None,
width=None,
allow_unicode=None,
line_break=None,
encoding=None,
explicit_start=None,
explicit_end=None,
version=None,
tags=None,
block_seq_indent=None,
top_level_colon_align=None,
prefix_colon=None,
):
# type: (StreamType, Any, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None # NOQA
self._emitter = self._serializer = self._representer = self
CEmitter.__init__(
self,
stream,
canonical=canonical,
indent=indent,
width=width,
encoding=encoding,
allow_unicode=allow_unicode,
line_break=line_break,
explicit_start=explicit_start,
explicit_end=explicit_end,
version=version,
tags=tags,
)
self._emitter = self._serializer = self._representer = self
SafeRepresenter.__init__(
self, default_style=default_style, default_flow_style=default_flow_style
)
Resolver.__init__(self)
class CDumper(CEmitter, Representer, Resolver): # type: ignore
def __init__(
self,
stream,
default_style=None,
default_flow_style=None,
canonical=None,
indent=None,
width=None,
allow_unicode=None,
line_break=None,
encoding=None,
explicit_start=None,
explicit_end=None,
version=None,
tags=None,
block_seq_indent=None,
top_level_colon_align=None,
prefix_colon=None,
):
# type: (StreamType, Any, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None # NOQA
CEmitter.__init__(
self,
stream,
canonical=canonical,
indent=indent,
width=width,
encoding=encoding,
allow_unicode=allow_unicode,
line_break=line_break,
explicit_start=explicit_start,
explicit_end=explicit_end,
version=version,
tags=tags,
)
self._emitter = self._serializer = self._representer = self
Representer.__init__(
self, default_style=default_style, default_flow_style=default_flow_style
)
Resolver.__init__(self)


@@ -0,0 +1,219 @@
# coding: utf-8
from ruamel.yaml.emitter import Emitter
from ruamel.yaml.serializer import Serializer
from ruamel.yaml.representer import (
Representer,
SafeRepresenter,
BaseRepresenter,
RoundTripRepresenter,
)
from ruamel.yaml.resolver import Resolver, BaseResolver, VersionedResolver
if False: # MYPY
from typing import Any, Dict, List, Union, Optional # NOQA
from ruamel.yaml.compat import StreamType, VersionType # NOQA
__all__ = ['BaseDumper', 'SafeDumper', 'Dumper', 'RoundTripDumper']
class BaseDumper(Emitter, Serializer, BaseRepresenter, BaseResolver):
def __init__(
self,
stream,
default_style=None,
default_flow_style=None,
canonical=None,
indent=None,
width=None,
allow_unicode=None,
line_break=None,
encoding=None,
explicit_start=None,
explicit_end=None,
version=None,
tags=None,
block_seq_indent=None,
top_level_colon_align=None,
prefix_colon=None,
):
# type: (Any, StreamType, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None # NOQA
Emitter.__init__(
self,
stream,
canonical=canonical,
indent=indent,
width=width,
allow_unicode=allow_unicode,
line_break=line_break,
block_seq_indent=block_seq_indent,
dumper=self,
)
Serializer.__init__(
self,
encoding=encoding,
explicit_start=explicit_start,
explicit_end=explicit_end,
version=version,
tags=tags,
dumper=self,
)
BaseRepresenter.__init__(
self,
default_style=default_style,
default_flow_style=default_flow_style,
dumper=self,
)
BaseResolver.__init__(self, loadumper=self)
class SafeDumper(Emitter, Serializer, SafeRepresenter, Resolver):
def __init__(
self,
stream,
default_style=None,
default_flow_style=None,
canonical=None,
indent=None,
width=None,
allow_unicode=None,
line_break=None,
encoding=None,
explicit_start=None,
explicit_end=None,
version=None,
tags=None,
block_seq_indent=None,
top_level_colon_align=None,
prefix_colon=None,
):
# type: (StreamType, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None # NOQA
Emitter.__init__(
self,
stream,
canonical=canonical,
indent=indent,
width=width,
allow_unicode=allow_unicode,
line_break=line_break,
block_seq_indent=block_seq_indent,
dumper=self,
)
Serializer.__init__(
self,
encoding=encoding,
explicit_start=explicit_start,
explicit_end=explicit_end,
version=version,
tags=tags,
dumper=self,
)
SafeRepresenter.__init__(
self,
default_style=default_style,
default_flow_style=default_flow_style,
dumper=self,
)
Resolver.__init__(self, loadumper=self)
class Dumper(Emitter, Serializer, Representer, Resolver):
def __init__(
self,
stream,
default_style=None,
default_flow_style=None,
canonical=None,
indent=None,
width=None,
allow_unicode=None,
line_break=None,
encoding=None,
explicit_start=None,
explicit_end=None,
version=None,
tags=None,
block_seq_indent=None,
top_level_colon_align=None,
prefix_colon=None,
):
# type: (StreamType, Any, Any, Optional[bool], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None # NOQA
Emitter.__init__(
self,
stream,
canonical=canonical,
indent=indent,
width=width,
allow_unicode=allow_unicode,
line_break=line_break,
block_seq_indent=block_seq_indent,
dumper=self,
)
Serializer.__init__(
self,
encoding=encoding,
explicit_start=explicit_start,
explicit_end=explicit_end,
version=version,
tags=tags,
dumper=self,
)
Representer.__init__(
self,
default_style=default_style,
default_flow_style=default_flow_style,
dumper=self,
)
Resolver.__init__(self, loadumper=self)
class RoundTripDumper(Emitter, Serializer, RoundTripRepresenter, VersionedResolver):
def __init__(
self,
stream,
default_style=None,
default_flow_style=None,
canonical=None,
indent=None,
width=None,
allow_unicode=None,
line_break=None,
encoding=None,
explicit_start=None,
explicit_end=None,
version=None,
tags=None,
block_seq_indent=None,
top_level_colon_align=None,
prefix_colon=None,
):
# type: (StreamType, Any, Optional[bool], Optional[int], Optional[int], Optional[int], Optional[bool], Any, Any, Optional[bool], Optional[bool], Any, Any, Any, Any, Any) -> None # NOQA
Emitter.__init__(
self,
stream,
canonical=canonical,
indent=indent,
width=width,
allow_unicode=allow_unicode,
line_break=line_break,
block_seq_indent=block_seq_indent,
top_level_colon_align=top_level_colon_align,
prefix_colon=prefix_colon,
dumper=self,
)
Serializer.__init__(
self,
encoding=encoding,
explicit_start=explicit_start,
explicit_end=explicit_end,
version=version,
tags=tags,
dumper=self,
)
RoundTripRepresenter.__init__(
self,
default_style=default_style,
default_flow_style=default_flow_style,
dumper=self,
)
VersionedResolver.__init__(self, loader=self)
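
A minimal sketch of how these dumper stacks are reached in practice, assuming the vendored package imports as ruamel.yaml (typ='safe' selects the SafeRepresenter-based stack above):

import sys
from ruamel.yaml import YAML

yaml = YAML(typ="safe", pure=True)
yaml.default_flow_style = False
yaml.dump({"pkg": "zlib", "version": [1, 2, 13]}, sys.stdout)
# pkg: zlib
# version:
# - 1
# - 2
# - 13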

File diff suppressed because it is too large


@@ -0,0 +1,332 @@
# coding: utf-8
import warnings
import textwrap
from ruamel.yaml.compat import _F
if False: # MYPY
from typing import Any, Dict, Optional, List, Text # NOQA
__all__ = [
'FileMark',
'StringMark',
'CommentMark',
'YAMLError',
'MarkedYAMLError',
'ReusedAnchorWarning',
'UnsafeLoaderWarning',
'MarkedYAMLWarning',
'MarkedYAMLFutureWarning',
]
class StreamMark:
__slots__ = 'name', 'index', 'line', 'column'
def __init__(self, name, index, line, column):
# type: (Any, int, int, int) -> None
self.name = name
self.index = index
self.line = line
self.column = column
def __str__(self):
# type: () -> Any
where = _F(
' in "{sname!s}", line {sline1:d}, column {scolumn1:d}',
sname=self.name,
sline1=self.line + 1,
scolumn1=self.column + 1,
)
return where
def __eq__(self, other):
# type: (Any) -> bool
if self.line != other.line or self.column != other.column:
return False
if self.name != other.name or self.index != other.index:
return False
return True
def __ne__(self, other):
# type: (Any) -> bool
return not self.__eq__(other)
class FileMark(StreamMark):
__slots__ = ()
class StringMark(StreamMark):
__slots__ = 'name', 'index', 'line', 'column', 'buffer', 'pointer'
def __init__(self, name, index, line, column, buffer, pointer):
# type: (Any, int, int, int, Any, Any) -> None
StreamMark.__init__(self, name, index, line, column)
self.buffer = buffer
self.pointer = pointer
def get_snippet(self, indent=4, max_length=75):
# type: (int, int) -> Any
if self.buffer is None: # always False
return None
head = ""
start = self.pointer
while start > 0 and self.buffer[start - 1] not in '\0\r\n\x85\u2028\u2029':
start -= 1
if self.pointer - start > max_length / 2 - 1:
head = ' ... '
start += 5
break
tail = ""
end = self.pointer
while end < len(self.buffer) and self.buffer[end] not in '\0\r\n\x85\u2028\u2029':
end += 1
if end - self.pointer > max_length / 2 - 1:
tail = ' ... '
end -= 5
break
snippet = self.buffer[start:end]
caret = '^'
caret = '^ (line: {})'.format(self.line + 1)
return (
' ' * indent
+ head
+ snippet
+ tail
+ '\n'
+ ' ' * (indent + self.pointer - start + len(head))
+ caret
)
def __str__(self):
# type: () -> Any
snippet = self.get_snippet()
where = _F(
' in "{sname!s}", line {sline1:d}, column {scolumn1:d}',
sname=self.name,
sline1=self.line + 1,
scolumn1=self.column + 1,
)
if snippet is not None:
where += ':\n' + snippet
return where
def __repr__(self):
# type: () -> Any
snippet = self.get_snippet()
where = _F(
' in "{sname!s}", line {sline1:d}, column {scolumn1:d}',
sname=self.name,
sline1=self.line + 1,
scolumn1=self.column + 1,
)
if snippet is not None:
where += ':\n' + snippet
return where
class CommentMark:
__slots__ = ('column',)
def __init__(self, column):
# type: (Any) -> None
self.column = column
class YAMLError(Exception):
pass
class MarkedYAMLError(YAMLError):
def __init__(
self,
context=None,
context_mark=None,
problem=None,
problem_mark=None,
note=None,
warn=None,
):
# type: (Any, Any, Any, Any, Any, Any) -> None
self.context = context
self.context_mark = context_mark
self.problem = problem
self.problem_mark = problem_mark
self.note = note
# warn is ignored
def __str__(self):
# type: () -> Any
lines = [] # type: List[str]
if self.context is not None:
lines.append(self.context)
if self.context_mark is not None and (
self.problem is None
or self.problem_mark is None
or self.context_mark.name != self.problem_mark.name
or self.context_mark.line != self.problem_mark.line
or self.context_mark.column != self.problem_mark.column
):
lines.append(str(self.context_mark))
if self.problem is not None:
lines.append(self.problem)
if self.problem_mark is not None:
lines.append(str(self.problem_mark))
if self.note is not None and self.note:
note = textwrap.dedent(self.note)
lines.append(note)
return '\n'.join(lines)
class YAMLStreamError(Exception):
pass
class YAMLWarning(Warning):
pass
class MarkedYAMLWarning(YAMLWarning):
def __init__(
self,
context=None,
context_mark=None,
problem=None,
problem_mark=None,
note=None,
warn=None,
):
# type: (Any, Any, Any, Any, Any, Any) -> None
self.context = context
self.context_mark = context_mark
self.problem = problem
self.problem_mark = problem_mark
self.note = note
self.warn = warn
def __str__(self):
# type: () -> Any
lines = [] # type: List[str]
if self.context is not None:
lines.append(self.context)
if self.context_mark is not None and (
self.problem is None
or self.problem_mark is None
or self.context_mark.name != self.problem_mark.name
or self.context_mark.line != self.problem_mark.line
or self.context_mark.column != self.problem_mark.column
):
lines.append(str(self.context_mark))
if self.problem is not None:
lines.append(self.problem)
if self.problem_mark is not None:
lines.append(str(self.problem_mark))
if self.note is not None and self.note:
note = textwrap.dedent(self.note)
lines.append(note)
if self.warn is not None and self.warn:
warn = textwrap.dedent(self.warn)
lines.append(warn)
return '\n'.join(lines)
class ReusedAnchorWarning(YAMLWarning):
pass
class UnsafeLoaderWarning(YAMLWarning):
text = """
The default 'Loader' for 'load(stream)' without further arguments can be unsafe.
Use 'load(stream, Loader=ruamel.yaml.Loader)' explicitly if that is OK.
Alternatively include the following in your code:
import warnings
warnings.simplefilter('ignore', ruamel.yaml.error.UnsafeLoaderWarning)
In most other cases you should consider using 'safe_load(stream)'"""
pass
warnings.simplefilter('once', UnsafeLoaderWarning)
class MantissaNoDotYAML1_1Warning(YAMLWarning):
def __init__(self, node, flt_str):
# type: (Any, Any) -> None
self.node = node
self.flt = flt_str
def __str__(self):
# type: () -> Any
line = self.node.start_mark.line
col = self.node.start_mark.column
return """
In YAML 1.1 floating point values should have a dot ('.') in their mantissa.
See the Floating-Point Language-Independent Type for YAML Version 1.1 specification
( http://yaml.org/type/float.html ). This dot is not required for JSON nor for YAML 1.2
Correct your float: "{}" on line: {}, column: {}
or alternatively include the following in your code:
import warnings
warnings.simplefilter('ignore', ruamel.yaml.error.MantissaNoDotYAML1_1Warning)
""".format(
self.flt, line, col
)
warnings.simplefilter('once', MantissaNoDotYAML1_1Warning)
class YAMLFutureWarning(Warning):
pass
class MarkedYAMLFutureWarning(YAMLFutureWarning):
def __init__(
self,
context=None,
context_mark=None,
problem=None,
problem_mark=None,
note=None,
warn=None,
):
# type: (Any, Any, Any, Any, Any, Any) -> None
self.context = context
self.context_mark = context_mark
self.problem = problem
self.problem_mark = problem_mark
self.note = note
self.warn = warn
def __str__(self):
# type: () -> Any
lines = [] # type: List[str]
if self.context is not None:
lines.append(self.context)
if self.context_mark is not None and (
self.problem is None
or self.problem_mark is None
or self.context_mark.name != self.problem_mark.name
or self.context_mark.line != self.problem_mark.line
or self.context_mark.column != self.problem_mark.column
):
lines.append(str(self.context_mark))
if self.problem is not None:
lines.append(self.problem)
if self.problem_mark is not None:
lines.append(str(self.problem_mark))
if self.note is not None and self.note:
note = textwrap.dedent(self.note)
lines.append(note)
if self.warn is not None and self.warn:
warn = textwrap.dedent(self.warn)
lines.append(warn)
return '\n'.join(lines)
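
A small illustration of the StringMark.get_snippet formatting defined above, the routine that produces the caret-annotated context in ruamel error messages (values chosen by hand for the example):

from ruamel.yaml.error import StringMark

buf = "key: [1, 2\nnext: 3\n"
mark = StringMark("<unicode string>", 5, 0, 5, buf, 5)  # points at the '['
print(str(mark))
# in "<unicode string>", line 1, column 6:
#     key: [1, 2
#          ^ (line: 1)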


@@ -0,0 +1,196 @@
# coding: utf-8
from ruamel.yaml.compat import _F
# Abstract classes.
if False: # MYPY
from typing import Any, Dict, Optional, List # NOQA
SHOW_LINES = False
def CommentCheck():
# type: () -> None
pass
class Event:
__slots__ = 'start_mark', 'end_mark', 'comment'
def __init__(self, start_mark=None, end_mark=None, comment=CommentCheck):
# type: (Any, Any, Any) -> None
self.start_mark = start_mark
self.end_mark = end_mark
# assert comment is not CommentCheck
if comment is CommentCheck:
comment = None
self.comment = comment
def __repr__(self):
# type: () -> Any
if True:
arguments = []
if hasattr(self, 'value'):
# if you use repr(getattr(self, 'value')) then flake8 complains about
# abuse of getattr with a constant. When you change to self.value
# then mypy throws an error
arguments.append(repr(self.value)) # type: ignore
for key in ['anchor', 'tag', 'implicit', 'flow_style', 'style']:
v = getattr(self, key, None)
if v is not None:
arguments.append(_F('{key!s}={v!r}', key=key, v=v))
if self.comment not in [None, CommentCheck]:
arguments.append('comment={!r}'.format(self.comment))
if SHOW_LINES:
arguments.append(
'({}:{}/{}:{})'.format(
self.start_mark.line,
self.start_mark.column,
self.end_mark.line,
self.end_mark.column,
)
)
arguments = ', '.join(arguments) # type: ignore
else:
attributes = [
key
for key in ['anchor', 'tag', 'implicit', 'value', 'flow_style', 'style']
if hasattr(self, key)
]
arguments = ', '.join(
[_F('{k!s}={attr!r}', k=key, attr=getattr(self, key)) for key in attributes]
)
if self.comment not in [None, CommentCheck]:
arguments += ', comment={!r}'.format(self.comment)
return _F(
'{self_class_name!s}({arguments!s})',
self_class_name=self.__class__.__name__,
arguments=arguments,
)
class NodeEvent(Event):
__slots__ = ('anchor',)
def __init__(self, anchor, start_mark=None, end_mark=None, comment=None):
# type: (Any, Any, Any, Any) -> None
Event.__init__(self, start_mark, end_mark, comment)
self.anchor = anchor
class CollectionStartEvent(NodeEvent):
__slots__ = 'tag', 'implicit', 'flow_style', 'nr_items'
def __init__(
self,
anchor,
tag,
implicit,
start_mark=None,
end_mark=None,
flow_style=None,
comment=None,
nr_items=None,
):
# type: (Any, Any, Any, Any, Any, Any, Any, Optional[int]) -> None
NodeEvent.__init__(self, anchor, start_mark, end_mark, comment)
self.tag = tag
self.implicit = implicit
self.flow_style = flow_style
self.nr_items = nr_items
class CollectionEndEvent(Event):
__slots__ = ()
# Implementations.
class StreamStartEvent(Event):
__slots__ = ('encoding',)
def __init__(self, start_mark=None, end_mark=None, encoding=None, comment=None):
# type: (Any, Any, Any, Any) -> None
Event.__init__(self, start_mark, end_mark, comment)
self.encoding = encoding
class StreamEndEvent(Event):
__slots__ = ()
class DocumentStartEvent(Event):
__slots__ = 'explicit', 'version', 'tags'
def __init__(
self,
start_mark=None,
end_mark=None,
explicit=None,
version=None,
tags=None,
comment=None,
):
# type: (Any, Any, Any, Any, Any, Any) -> None
Event.__init__(self, start_mark, end_mark, comment)
self.explicit = explicit
self.version = version
self.tags = tags
class DocumentEndEvent(Event):
__slots__ = ('explicit',)
def __init__(self, start_mark=None, end_mark=None, explicit=None, comment=None):
# type: (Any, Any, Any, Any) -> None
Event.__init__(self, start_mark, end_mark, comment)
self.explicit = explicit
class AliasEvent(NodeEvent):
__slots__ = 'style'
def __init__(self, anchor, start_mark=None, end_mark=None, style=None, comment=None):
# type: (Any, Any, Any, Any, Any) -> None
NodeEvent.__init__(self, anchor, start_mark, end_mark, comment)
self.style = style
class ScalarEvent(NodeEvent):
__slots__ = 'tag', 'implicit', 'value', 'style'
def __init__(
self,
anchor,
tag,
implicit,
value,
start_mark=None,
end_mark=None,
style=None,
comment=None,
):
# type: (Any, Any, Any, Any, Any, Any, Any, Any) -> None
NodeEvent.__init__(self, anchor, start_mark, end_mark, comment)
self.tag = tag
self.implicit = implicit
self.value = value
self.style = style
class SequenceStartEvent(CollectionStartEvent):
__slots__ = ()
class SequenceEndEvent(CollectionEndEvent):
__slots__ = ()
class MappingStartEvent(CollectionStartEvent):
__slots__ = ()
class MappingEndEvent(CollectionEndEvent):
__slots__ = ()
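
The Event.__repr__ above produces compact one-line summaries, handy when eyeballing raw event streams. Constructing a couple of events directly, using only the constructors defined in this file:

from ruamel.yaml.events import MappingStartEvent, ScalarEvent

ev = ScalarEvent(anchor=None, tag=None, implicit=(True, False), value="hello")
print(repr(ev))   # ScalarEvent('hello', implicit=(True, False))

ms = MappingStartEvent(anchor="a", tag="tag:yaml.org,2002:map", implicit=False)
print(repr(ms))   # MappingStartEvent(anchor='a', tag='tag:yaml.org,2002:map', implicit=False)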


@@ -0,0 +1,75 @@
# coding: utf-8
from ruamel.yaml.reader import Reader
from ruamel.yaml.scanner import Scanner, RoundTripScanner
from ruamel.yaml.parser import Parser, RoundTripParser
from ruamel.yaml.composer import Composer
from ruamel.yaml.constructor import (
BaseConstructor,
SafeConstructor,
Constructor,
RoundTripConstructor,
)
from ruamel.yaml.resolver import VersionedResolver
if False: # MYPY
from typing import Any, Dict, List, Union, Optional # NOQA
from ruamel.yaml.compat import StreamTextType, VersionType # NOQA
__all__ = ['BaseLoader', 'SafeLoader', 'Loader', 'RoundTripLoader']
class BaseLoader(Reader, Scanner, Parser, Composer, BaseConstructor, VersionedResolver):
def __init__(self, stream, version=None, preserve_quotes=None):
# type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
self.comment_handling = None
Reader.__init__(self, stream, loader=self)
Scanner.__init__(self, loader=self)
Parser.__init__(self, loader=self)
Composer.__init__(self, loader=self)
BaseConstructor.__init__(self, loader=self)
VersionedResolver.__init__(self, version, loader=self)
class SafeLoader(Reader, Scanner, Parser, Composer, SafeConstructor, VersionedResolver):
def __init__(self, stream, version=None, preserve_quotes=None):
# type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
self.comment_handling = None
Reader.__init__(self, stream, loader=self)
Scanner.__init__(self, loader=self)
Parser.__init__(self, loader=self)
Composer.__init__(self, loader=self)
SafeConstructor.__init__(self, loader=self)
VersionedResolver.__init__(self, version, loader=self)
class Loader(Reader, Scanner, Parser, Composer, Constructor, VersionedResolver):
def __init__(self, stream, version=None, preserve_quotes=None):
# type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
self.comment_handling = None
Reader.__init__(self, stream, loader=self)
Scanner.__init__(self, loader=self)
Parser.__init__(self, loader=self)
Composer.__init__(self, loader=self)
Constructor.__init__(self, loader=self)
VersionedResolver.__init__(self, version, loader=self)
class RoundTripLoader(
Reader,
RoundTripScanner,
RoundTripParser,
Composer,
RoundTripConstructor,
VersionedResolver,
):
def __init__(self, stream, version=None, preserve_quotes=None):
# type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
# self.reader = Reader.__init__(self, stream)
self.comment_handling = None # issue 385
Reader.__init__(self, stream, loader=self)
RoundTripScanner.__init__(self, loader=self)
RoundTripParser.__init__(self, loader=self)
Composer.__init__(self, loader=self)
RoundTripConstructor.__init__(self, preserve_quotes=preserve_quotes, loader=self)
VersionedResolver.__init__(self, version, loader=self)
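
These loader classes are mixin stacks normally reached through the YAML front end; RoundTripLoader (typ='rt', the default) is what preserves comments and key order across a load/dump cycle. A minimal round trip, assuming the vendored package imports as ruamel.yaml:

import sys
from ruamel.yaml import YAML

yaml = YAML()  # typ='rt' -> RoundTripLoader / RoundTripDumper
data = yaml.load("colors:  # palette\n- red\n- green\n")
data["colors"].append("blue")
yaml.dump(data, sys.stdout)  # the '# palette' comment survives the round trip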

File diff suppressed because it is too large


@@ -0,0 +1,135 @@
# coding: utf-8
import sys
from ruamel.yaml.compat import _F
if False: # MYPY
from typing import Dict, Any, Text # NOQA
class Node:
__slots__ = 'tag', 'value', 'start_mark', 'end_mark', 'comment', 'anchor'
def __init__(self, tag, value, start_mark, end_mark, comment=None, anchor=None):
# type: (Any, Any, Any, Any, Any, Any) -> None
self.tag = tag
self.value = value
self.start_mark = start_mark
self.end_mark = end_mark
self.comment = comment
self.anchor = anchor
def __repr__(self):
# type: () -> Any
value = self.value
# if isinstance(value, list):
# if len(value) == 0:
# value = '<empty>'
# elif len(value) == 1:
# value = '<1 item>'
# else:
# value = f'<{len(value)} items>'
# else:
# if len(value) > 75:
# value = repr(value[:70]+' ... ')
# else:
# value = repr(value)
value = repr(value)
return _F(
'{class_name!s}(tag={self_tag!r}, value={value!s})',
class_name=self.__class__.__name__,
self_tag=self.tag,
value=value,
)
def dump(self, indent=0):
# type: (int) -> None
if isinstance(self.value, str):
sys.stdout.write(
'{}{}(tag={!r}, value={!r})\n'.format(
' ' * indent, self.__class__.__name__, self.tag, self.value
)
)
if self.comment:
sys.stdout.write(' {}comment: {})\n'.format(' ' * indent, self.comment))
return
sys.stdout.write(
'{}{}(tag={!r})\n'.format(' ' * indent, self.__class__.__name__, self.tag)
)
if self.comment:
sys.stdout.write(' {}comment: {})\n'.format(' ' * indent, self.comment))
for v in self.value:
if isinstance(v, tuple):
for v1 in v:
v1.dump(indent + 1)
elif isinstance(v, Node):
v.dump(indent + 1)
else:
sys.stdout.write('Node value type? {}\n'.format(type(v)))
class ScalarNode(Node):
"""
styles:
? -> set() ? key, no value
" -> double quoted
' -> single quoted
| -> literal style
> -> folding style
"""
__slots__ = ('style',)
id = 'scalar'
def __init__(
self, tag, value, start_mark=None, end_mark=None, style=None, comment=None, anchor=None
):
# type: (Any, Any, Any, Any, Any, Any, Any) -> None
Node.__init__(self, tag, value, start_mark, end_mark, comment=comment, anchor=anchor)
self.style = style
class CollectionNode(Node):
__slots__ = ('flow_style',)
def __init__(
self,
tag,
value,
start_mark=None,
end_mark=None,
flow_style=None,
comment=None,
anchor=None,
):
# type: (Any, Any, Any, Any, Any, Any, Any) -> None
Node.__init__(self, tag, value, start_mark, end_mark, comment=comment)
self.flow_style = flow_style
self.anchor = anchor
class SequenceNode(CollectionNode):
__slots__ = ()
id = 'sequence'
class MappingNode(CollectionNode):
__slots__ = ('merge',)
id = 'mapping'
def __init__(
self,
tag,
value,
start_mark=None,
end_mark=None,
flow_style=None,
comment=None,
anchor=None,
):
# type: (Any, Any, Any, Any, Any, Any, Any) -> None
CollectionNode.__init__(
self, tag, value, start_mark, end_mark, flow_style, comment, anchor
)
self.merge = None
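
Node.dump above is a small debugging printer; building the node tree for {name: spack} by hand shows its output shape (tags written out per the YAML core schema):

from ruamel.yaml.nodes import MappingNode, ScalarNode

key = ScalarNode("tag:yaml.org,2002:str", "name")
val = ScalarNode("tag:yaml.org,2002:str", "spack")
root = MappingNode("tag:yaml.org,2002:map", [(key, val)])
root.dump()
# MappingNode(tag='tag:yaml.org,2002:map')
#  ScalarNode(tag='tag:yaml.org,2002:str', value='name')
#  ScalarNode(tag='tag:yaml.org,2002:str', value='spack')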


@@ -0,0 +1,884 @@
# coding: utf-8
# The following YAML grammar is LL(1) and is parsed by a recursive descent
# parser.
#
# stream ::= STREAM-START implicit_document? explicit_document*
# STREAM-END
# implicit_document ::= block_node DOCUMENT-END*
# explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
# block_node_or_indentless_sequence ::=
# ALIAS
# | properties (block_content |
# indentless_block_sequence)?
# | block_content
# | indentless_block_sequence
# block_node ::= ALIAS
# | properties block_content?
# | block_content
# flow_node ::= ALIAS
# | properties flow_content?
# | flow_content
# properties ::= TAG ANCHOR? | ANCHOR TAG?
# block_content ::= block_collection | flow_collection | SCALAR
# flow_content ::= flow_collection | SCALAR
# block_collection ::= block_sequence | block_mapping
# flow_collection ::= flow_sequence | flow_mapping
# block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)*
# BLOCK-END
# indentless_sequence ::= (BLOCK-ENTRY block_node?)+
# block_mapping ::= BLOCK-MAPPING_START
# ((KEY block_node_or_indentless_sequence?)?
# (VALUE block_node_or_indentless_sequence?)?)*
# BLOCK-END
# flow_sequence ::= FLOW-SEQUENCE-START
# (flow_sequence_entry FLOW-ENTRY)*
# flow_sequence_entry?
# FLOW-SEQUENCE-END
# flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
# flow_mapping ::= FLOW-MAPPING-START
# (flow_mapping_entry FLOW-ENTRY)*
# flow_mapping_entry?
# FLOW-MAPPING-END
# flow_mapping_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
#
# FIRST sets:
#
# stream: { STREAM-START <}
# explicit_document: { DIRECTIVE DOCUMENT-START }
# implicit_document: FIRST(block_node)
# block_node: { ALIAS TAG ANCHOR SCALAR BLOCK-SEQUENCE-START
# BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START }
# flow_node: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START }
# block_content: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START
# FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
# flow_content: { FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
# block_collection: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START }
# flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START }
# block_sequence: { BLOCK-SEQUENCE-START }
# block_mapping: { BLOCK-MAPPING-START }
# block_node_or_indentless_sequence: { ALIAS ANCHOR TAG SCALAR
# BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START
# FLOW-MAPPING-START BLOCK-ENTRY }
# indentless_sequence: { ENTRY }
# flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START }
# flow_sequence: { FLOW-SEQUENCE-START }
# flow_mapping: { FLOW-MAPPING-START }
# flow_sequence_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START
# FLOW-MAPPING-START KEY }
# flow_mapping_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START
# FLOW-MAPPING-START KEY }
# need to have full path with import, as pkg_resources tries to load parser.py in __init__.py
# only to not do anything with the package afterwards
# and for Jython too
from ruamel.yaml.error import MarkedYAMLError
from ruamel.yaml.tokens import * # NOQA
from ruamel.yaml.events import * # NOQA
from ruamel.yaml.scanner import Scanner, RoundTripScanner, ScannerError # NOQA
from ruamel.yaml.scanner import BlankLineComment
from ruamel.yaml.comments import C_PRE, C_POST, C_SPLIT_ON_FIRST_BLANK
from ruamel.yaml.compat import _F, nprint, nprintf # NOQA
if False: # MYPY
from typing import Any, Dict, Optional, List, Optional # NOQA
__all__ = ['Parser', 'RoundTripParser', 'ParserError']
def xprintf(*args, **kw):
# type: (Any, Any) -> Any
return nprintf(*args, **kw)
pass
class ParserError(MarkedYAMLError):
pass
class Parser:
# Since writing a recursive-descendant parser is a straightforward task, we
# do not give many comments here.
DEFAULT_TAGS = {'!': '!', '!!': 'tag:yaml.org,2002:'}
def __init__(self, loader):
# type: (Any) -> None
self.loader = loader
if self.loader is not None and getattr(self.loader, '_parser', None) is None:
self.loader._parser = self
self.reset_parser()
def reset_parser(self):
# type: () -> None
# Reset the state attributes (to clear self-references)
self.current_event = self.last_event = None
self.tag_handles = {} # type: Dict[Any, Any]
self.states = [] # type: List[Any]
self.marks = [] # type: List[Any]
self.state = self.parse_stream_start # type: Any
def dispose(self):
# type: () -> None
self.reset_parser()
@property
def scanner(self):
# type: () -> Any
if hasattr(self.loader, 'typ'):
return self.loader.scanner
return self.loader._scanner
@property
def resolver(self):
# type: () -> Any
if hasattr(self.loader, 'typ'):
return self.loader.resolver
return self.loader._resolver
def check_event(self, *choices):
# type: (Any) -> bool
# Check the type of the next event.
if self.current_event is None:
if self.state:
self.current_event = self.state()
if self.current_event is not None:
if not choices:
return True
for choice in choices:
if isinstance(self.current_event, choice):
return True
return False
def peek_event(self):
# type: () -> Any
# Get the next event.
if self.current_event is None:
if self.state:
self.current_event = self.state()
return self.current_event
def get_event(self):
# type: () -> Any
# Get the next event and proceed further.
if self.current_event is None:
if self.state:
self.current_event = self.state()
# assert self.current_event is not None
# if self.current_event.end_mark.line != self.peek_event().start_mark.line:
xprintf('get_event', repr(self.current_event), self.peek_event().start_mark.line)
self.last_event = value = self.current_event
self.current_event = None
return value
# stream ::= STREAM-START implicit_document? explicit_document*
# STREAM-END
# implicit_document ::= block_node DOCUMENT-END*
# explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
def parse_stream_start(self):
# type: () -> Any
# Parse the stream start.
token = self.scanner.get_token()
self.move_token_comment(token)
event = StreamStartEvent(token.start_mark, token.end_mark, encoding=token.encoding)
# Prepare the next state.
self.state = self.parse_implicit_document_start
return event
def parse_implicit_document_start(self):
# type: () -> Any
# Parse an implicit document.
if not self.scanner.check_token(DirectiveToken, DocumentStartToken, StreamEndToken):
self.tag_handles = self.DEFAULT_TAGS
token = self.scanner.peek_token()
start_mark = end_mark = token.start_mark
event = DocumentStartEvent(start_mark, end_mark, explicit=False)
# Prepare the next state.
self.states.append(self.parse_document_end)
self.state = self.parse_block_node
return event
else:
return self.parse_document_start()
def parse_document_start(self):
# type: () -> Any
# Parse any extra document end indicators.
while self.scanner.check_token(DocumentEndToken):
self.scanner.get_token()
# Parse an explicit document.
if not self.scanner.check_token(StreamEndToken):
version, tags = self.process_directives()
if not self.scanner.check_token(DocumentStartToken):
raise ParserError(
None,
None,
_F(
"expected '<document start>', but found {pt!r}",
pt=self.scanner.peek_token().id,
),
self.scanner.peek_token().start_mark,
)
token = self.scanner.get_token()
start_mark = token.start_mark
end_mark = token.end_mark
# if self.loader is not None and \
# end_mark.line != self.scanner.peek_token().start_mark.line:
# self.loader.scalar_after_indicator = False
event = DocumentStartEvent(
start_mark, end_mark, explicit=True, version=version, tags=tags,
comment=token.comment
) # type: Any
self.states.append(self.parse_document_end)
self.state = self.parse_document_content
else:
# Parse the end of the stream.
token = self.scanner.get_token()
event = StreamEndEvent(token.start_mark, token.end_mark, comment=token.comment)
assert not self.states
assert not self.marks
self.state = None
return event
def parse_document_end(self):
# type: () -> Any
# Parse the document end.
token = self.scanner.peek_token()
start_mark = end_mark = token.start_mark
explicit = False
if self.scanner.check_token(DocumentEndToken):
token = self.scanner.get_token()
end_mark = token.end_mark
explicit = True
event = DocumentEndEvent(start_mark, end_mark, explicit=explicit)
# Prepare the next state.
if self.resolver.processing_version == (1, 1):
self.state = self.parse_document_start
else:
self.state = self.parse_implicit_document_start
return event
def parse_document_content(self):
# type: () -> Any
if self.scanner.check_token(
DirectiveToken, DocumentStartToken, DocumentEndToken, StreamEndToken
):
event = self.process_empty_scalar(self.scanner.peek_token().start_mark)
self.state = self.states.pop()
return event
else:
return self.parse_block_node()
def process_directives(self):
# type: () -> Any
yaml_version = None
self.tag_handles = {}
while self.scanner.check_token(DirectiveToken):
token = self.scanner.get_token()
if token.name == 'YAML':
if yaml_version is not None:
raise ParserError(
None, None, 'found duplicate YAML directive', token.start_mark
)
major, minor = token.value
if major != 1:
raise ParserError(
None,
None,
'found incompatible YAML document (version 1.* is required)',
token.start_mark,
)
yaml_version = token.value
elif token.name == 'TAG':
handle, prefix = token.value
if handle in self.tag_handles:
raise ParserError(
None,
None,
_F('duplicate tag handle {handle!r}', handle=handle),
token.start_mark,
)
self.tag_handles[handle] = prefix
if bool(self.tag_handles):
value = yaml_version, self.tag_handles.copy() # type: Any
else:
value = yaml_version, None
if self.loader is not None and hasattr(self.loader, 'tags'):
self.loader.version = yaml_version
if self.loader.tags is None:
self.loader.tags = {}
for k in self.tag_handles:
self.loader.tags[k] = self.tag_handles[k]
for key in self.DEFAULT_TAGS:
if key not in self.tag_handles:
self.tag_handles[key] = self.DEFAULT_TAGS[key]
return value
# block_node_or_indentless_sequence ::= ALIAS
# | properties (block_content | indentless_block_sequence)?
# | block_content
# | indentless_block_sequence
# block_node ::= ALIAS
# | properties block_content?
# | block_content
# flow_node ::= ALIAS
# | properties flow_content?
# | flow_content
# properties ::= TAG ANCHOR? | ANCHOR TAG?
# block_content ::= block_collection | flow_collection | SCALAR
# flow_content ::= flow_collection | SCALAR
# block_collection ::= block_sequence | block_mapping
# flow_collection ::= flow_sequence | flow_mapping
def parse_block_node(self):
# type: () -> Any
return self.parse_node(block=True)
def parse_flow_node(self):
# type: () -> Any
return self.parse_node()
def parse_block_node_or_indentless_sequence(self):
# type: () -> Any
return self.parse_node(block=True, indentless_sequence=True)
def transform_tag(self, handle, suffix):
# type: (Any, Any) -> Any
return self.tag_handles[handle] + suffix
def parse_node(self, block=False, indentless_sequence=False):
# type: (bool, bool) -> Any
if self.scanner.check_token(AliasToken):
token = self.scanner.get_token()
event = AliasEvent(token.value, token.start_mark, token.end_mark) # type: Any
self.state = self.states.pop()
return event
anchor = None
tag = None
start_mark = end_mark = tag_mark = None
if self.scanner.check_token(AnchorToken):
token = self.scanner.get_token()
self.move_token_comment(token)
start_mark = token.start_mark
end_mark = token.end_mark
anchor = token.value
if self.scanner.check_token(TagToken):
token = self.scanner.get_token()
tag_mark = token.start_mark
end_mark = token.end_mark
tag = token.value
elif self.scanner.check_token(TagToken):
token = self.scanner.get_token()
start_mark = tag_mark = token.start_mark
end_mark = token.end_mark
tag = token.value
if self.scanner.check_token(AnchorToken):
token = self.scanner.get_token()
start_mark = tag_mark = token.start_mark
end_mark = token.end_mark
anchor = token.value
if tag is not None:
handle, suffix = tag
if handle is not None:
if handle not in self.tag_handles:
raise ParserError(
'while parsing a node',
start_mark,
_F('found undefined tag handle {handle!r}', handle=handle),
tag_mark,
)
tag = self.transform_tag(handle, suffix)
else:
tag = suffix
# if tag == '!':
# raise ParserError("while parsing a node", start_mark,
# "found non-specific tag '!'", tag_mark,
# "Please check 'http://pyyaml.org/wiki/YAMLNonSpecificTag'
# and share your opinion.")
if start_mark is None:
start_mark = end_mark = self.scanner.peek_token().start_mark
event = None
implicit = tag is None or tag == '!'
if indentless_sequence and self.scanner.check_token(BlockEntryToken):
comment = None
pt = self.scanner.peek_token()
if self.loader and self.loader.comment_handling is None:
if pt.comment and pt.comment[0]:
comment = [pt.comment[0], []]
pt.comment[0] = None
elif self.loader:
if pt.comment:
comment = pt.comment
end_mark = self.scanner.peek_token().end_mark
event = SequenceStartEvent(
anchor, tag, implicit, start_mark, end_mark, flow_style=False, comment=comment
)
self.state = self.parse_indentless_sequence_entry
return event
if self.scanner.check_token(ScalarToken):
token = self.scanner.get_token()
# self.scanner.peek_token_same_line_comment(token)
end_mark = token.end_mark
if (token.plain and tag is None) or tag == '!':
implicit = (True, False)
elif tag is None:
implicit = (False, True)
else:
implicit = (False, False)
# nprint('se', token.value, token.comment)
event = ScalarEvent(
anchor,
tag,
implicit,
token.value,
start_mark,
end_mark,
style=token.style,
comment=token.comment,
)
self.state = self.states.pop()
elif self.scanner.check_token(FlowSequenceStartToken):
pt = self.scanner.peek_token()
end_mark = pt.end_mark
event = SequenceStartEvent(
anchor,
tag,
implicit,
start_mark,
end_mark,
flow_style=True,
comment=pt.comment,
)
self.state = self.parse_flow_sequence_first_entry
elif self.scanner.check_token(FlowMappingStartToken):
pt = self.scanner.peek_token()
end_mark = pt.end_mark
event = MappingStartEvent(
anchor,
tag,
implicit,
start_mark,
end_mark,
flow_style=True,
comment=pt.comment,
)
self.state = self.parse_flow_mapping_first_key
elif block and self.scanner.check_token(BlockSequenceStartToken):
end_mark = self.scanner.peek_token().start_mark
# should inserting the comment be dependent on the
# indentation?
pt = self.scanner.peek_token()
comment = pt.comment
# nprint('pt0', type(pt))
if comment is None or comment[1] is None:
comment = pt.split_old_comment()
# nprint('pt1', comment)
event = SequenceStartEvent(
anchor, tag, implicit, start_mark, end_mark, flow_style=False, comment=comment
)
self.state = self.parse_block_sequence_first_entry
elif block and self.scanner.check_token(BlockMappingStartToken):
end_mark = self.scanner.peek_token().start_mark
comment = self.scanner.peek_token().comment
event = MappingStartEvent(
anchor, tag, implicit, start_mark, end_mark, flow_style=False, comment=comment
)
self.state = self.parse_block_mapping_first_key
elif anchor is not None or tag is not None:
# Empty scalars are allowed even if a tag or an anchor is
# specified.
event = ScalarEvent(anchor, tag, (implicit, False), "", start_mark, end_mark)
self.state = self.states.pop()
else:
if block:
node = 'block'
else:
node = 'flow'
token = self.scanner.peek_token()
raise ParserError(
_F('while parsing a {node!s} node', node=node),
start_mark,
_F('expected the node content, but found {token_id!r}', token_id=token.id),
token.start_mark,
)
return event
# block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)*
# BLOCK-END
def parse_block_sequence_first_entry(self):
# type: () -> Any
token = self.scanner.get_token()
# move any comment from start token
# self.move_token_comment(token)
self.marks.append(token.start_mark)
return self.parse_block_sequence_entry()
def parse_block_sequence_entry(self):
# type: () -> Any
if self.scanner.check_token(BlockEntryToken):
token = self.scanner.get_token()
self.move_token_comment(token)
if not self.scanner.check_token(BlockEntryToken, BlockEndToken):
self.states.append(self.parse_block_sequence_entry)
return self.parse_block_node()
else:
self.state = self.parse_block_sequence_entry
return self.process_empty_scalar(token.end_mark)
if not self.scanner.check_token(BlockEndToken):
token = self.scanner.peek_token()
raise ParserError(
'while parsing a block collection',
self.marks[-1],
_F('expected <block end>, but found {token_id!r}', token_id=token.id),
token.start_mark,
)
token = self.scanner.get_token() # BlockEndToken
event = SequenceEndEvent(token.start_mark, token.end_mark, comment=token.comment)
self.state = self.states.pop()
self.marks.pop()
return event
# indentless_sequence ::= (BLOCK-ENTRY block_node?)+
# indentless_sequence?
# sequence:
# - entry
# - nested
def parse_indentless_sequence_entry(self):
# type: () -> Any
if self.scanner.check_token(BlockEntryToken):
token = self.scanner.get_token()
self.move_token_comment(token)
if not self.scanner.check_token(
BlockEntryToken, KeyToken, ValueToken, BlockEndToken
):
self.states.append(self.parse_indentless_sequence_entry)
return self.parse_block_node()
else:
self.state = self.parse_indentless_sequence_entry
return self.process_empty_scalar(token.end_mark)
token = self.scanner.peek_token()
c = None
if self.loader and self.loader.comment_handling is None:
c = token.comment
start_mark = token.start_mark
else:
start_mark = self.last_event.end_mark # type: ignore
c = self.distribute_comment(token.comment, start_mark.line) # type: ignore
event = SequenceEndEvent(start_mark, start_mark, comment=c)
self.state = self.states.pop()
return event
# block_mapping ::= BLOCK-MAPPING_START
# ((KEY block_node_or_indentless_sequence?)?
# (VALUE block_node_or_indentless_sequence?)?)*
# BLOCK-END
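# Illustrative example (not from the original source): under YAML 1.2 a
# lone value indicator
#   : no-key
# is accepted; parse_block_mapping_key emits an empty scalar for the
# missing key via the processing_version > (1, 1) branch below.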
def parse_block_mapping_first_key(self):
# type: () -> Any
token = self.scanner.get_token()
self.marks.append(token.start_mark)
return self.parse_block_mapping_key()
def parse_block_mapping_key(self):
# type: () -> Any
if self.scanner.check_token(KeyToken):
token = self.scanner.get_token()
self.move_token_comment(token)
if not self.scanner.check_token(KeyToken, ValueToken, BlockEndToken):
self.states.append(self.parse_block_mapping_value)
return self.parse_block_node_or_indentless_sequence()
else:
self.state = self.parse_block_mapping_value
return self.process_empty_scalar(token.end_mark)
if self.resolver.processing_version > (1, 1) and self.scanner.check_token(ValueToken):
self.state = self.parse_block_mapping_value
return self.process_empty_scalar(self.scanner.peek_token().start_mark)
if not self.scanner.check_token(BlockEndToken):
token = self.scanner.peek_token()
raise ParserError(
'while parsing a block mapping',
self.marks[-1],
_F('expected <block end>, but found {token_id!r}', token_id=token.id),
token.start_mark,
)
token = self.scanner.get_token()
self.move_token_comment(token)
event = MappingEndEvent(token.start_mark, token.end_mark, comment=token.comment)
self.state = self.states.pop()
self.marks.pop()
return event
def parse_block_mapping_value(self):
# type: () -> Any
if self.scanner.check_token(ValueToken):
token = self.scanner.get_token()
# the value token might have a post comment; move it to e.g. the block
if self.scanner.check_token(ValueToken):
self.move_token_comment(token)
else:
if not self.scanner.check_token(KeyToken):
self.move_token_comment(token, empty=True)
# else: empty value for this key; cannot move token.comment
if not self.scanner.check_token(KeyToken, ValueToken, BlockEndToken):
self.states.append(self.parse_block_mapping_key)
return self.parse_block_node_or_indentless_sequence()
else:
self.state = self.parse_block_mapping_key
comment = token.comment
if comment is None:
token = self.scanner.peek_token()
comment = token.comment
if comment:
token._comment = [None, comment[1]]
comment = [comment[0], None]
return self.process_empty_scalar(token.end_mark, comment=comment)
else:
self.state = self.parse_block_mapping_key
token = self.scanner.peek_token()
return self.process_empty_scalar(token.start_mark)
# flow_sequence ::= FLOW-SEQUENCE-START
# (flow_sequence_entry FLOW-ENTRY)*
# flow_sequence_entry?
# FLOW-SEQUENCE-END
# flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
#
# Note that while production rules for both flow_sequence_entry and
# flow_mapping_entry are equal, their interpretations are different.
# For `flow_sequence_entry`, the part `KEY flow_node? (VALUE flow_node?)?`
# generates an inline mapping (set syntax).
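# Illustrative example (not from the original source): the flow sequence
#   [a: 1, b]
# is a two-entry sequence whose first entry is the single-pair mapping
# {a: 1}; parse_flow_sequence_entry emits the extra MappingStartEvent /
# MappingEndEvent pair around it.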
def parse_flow_sequence_first_entry(self):
# type: () -> Any
token = self.scanner.get_token()
self.marks.append(token.start_mark)
return self.parse_flow_sequence_entry(first=True)
def parse_flow_sequence_entry(self, first=False):
# type: (bool) -> Any
if not self.scanner.check_token(FlowSequenceEndToken):
if not first:
if self.scanner.check_token(FlowEntryToken):
self.scanner.get_token()
else:
token = self.scanner.peek_token()
raise ParserError(
'while parsing a flow sequence',
self.marks[-1],
_F("expected ',' or ']', but got {token_id!r}", token_id=token.id),
token.start_mark,
)
if self.scanner.check_token(KeyToken):
token = self.scanner.peek_token()
event = MappingStartEvent(
None, None, True, token.start_mark, token.end_mark, flow_style=True
) # type: Any
self.state = self.parse_flow_sequence_entry_mapping_key
return event
elif not self.scanner.check_token(FlowSequenceEndToken):
self.states.append(self.parse_flow_sequence_entry)
return self.parse_flow_node()
token = self.scanner.get_token()
event = SequenceEndEvent(token.start_mark, token.end_mark, comment=token.comment)
self.state = self.states.pop()
self.marks.pop()
return event
def parse_flow_sequence_entry_mapping_key(self):
# type: () -> Any
token = self.scanner.get_token()
if not self.scanner.check_token(ValueToken, FlowEntryToken, FlowSequenceEndToken):
self.states.append(self.parse_flow_sequence_entry_mapping_value)
return self.parse_flow_node()
else:
self.state = self.parse_flow_sequence_entry_mapping_value
return self.process_empty_scalar(token.end_mark)
def parse_flow_sequence_entry_mapping_value(self):
# type: () -> Any
if self.scanner.check_token(ValueToken):
token = self.scanner.get_token()
if not self.scanner.check_token(FlowEntryToken, FlowSequenceEndToken):
self.states.append(self.parse_flow_sequence_entry_mapping_end)
return self.parse_flow_node()
else:
self.state = self.parse_flow_sequence_entry_mapping_end
return self.process_empty_scalar(token.end_mark)
else:
self.state = self.parse_flow_sequence_entry_mapping_end
token = self.scanner.peek_token()
return self.process_empty_scalar(token.start_mark)
def parse_flow_sequence_entry_mapping_end(self):
# type: () -> Any
self.state = self.parse_flow_sequence_entry
token = self.scanner.peek_token()
return MappingEndEvent(token.start_mark, token.start_mark)
# flow_mapping ::= FLOW-MAPPING-START
# (flow_mapping_entry FLOW-ENTRY)*
# flow_mapping_entry?
# FLOW-MAPPING-END
# flow_mapping_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
def parse_flow_mapping_first_key(self):
# type: () -> Any
token = self.scanner.get_token()
self.marks.append(token.start_mark)
return self.parse_flow_mapping_key(first=True)
def parse_flow_mapping_key(self, first=False):
# type: (Any) -> Any
if not self.scanner.check_token(FlowMappingEndToken):
if not first:
if self.scanner.check_token(FlowEntryToken):
self.scanner.get_token()
else:
token = self.scanner.peek_token()
raise ParserError(
'while parsing a flow mapping',
self.marks[-1],
_F("expected ',' or '}}', but got {token_id!r}", token_id=token.id),
token.start_mark,
)
if self.scanner.check_token(KeyToken):
token = self.scanner.get_token()
if not self.scanner.check_token(
ValueToken, FlowEntryToken, FlowMappingEndToken
):
self.states.append(self.parse_flow_mapping_value)
return self.parse_flow_node()
else:
self.state = self.parse_flow_mapping_value
return self.process_empty_scalar(token.end_mark)
elif self.resolver.processing_version > (1, 1) and self.scanner.check_token(
ValueToken
):
self.state = self.parse_flow_mapping_value
return self.process_empty_scalar(self.scanner.peek_token().end_mark)
elif not self.scanner.check_token(FlowMappingEndToken):
self.states.append(self.parse_flow_mapping_empty_value)
return self.parse_flow_node()
token = self.scanner.get_token()
event = MappingEndEvent(token.start_mark, token.end_mark, comment=token.comment)
self.state = self.states.pop()
self.marks.pop()
return event
def parse_flow_mapping_value(self):
# type: () -> Any
if self.scanner.check_token(ValueToken):
token = self.scanner.get_token()
if not self.scanner.check_token(FlowEntryToken, FlowMappingEndToken):
self.states.append(self.parse_flow_mapping_key)
return self.parse_flow_node()
else:
self.state = self.parse_flow_mapping_key
return self.process_empty_scalar(token.end_mark)
else:
self.state = self.parse_flow_mapping_key
token = self.scanner.peek_token()
return self.process_empty_scalar(token.start_mark)
def parse_flow_mapping_empty_value(self):
# type: () -> Any
self.state = self.parse_flow_mapping_key
return self.process_empty_scalar(self.scanner.peek_token().start_mark)
def process_empty_scalar(self, mark, comment=None):
# type: (Any, Any) -> Any
return ScalarEvent(None, None, (True, False), "", mark, mark, comment=comment)
def move_token_comment(self, token, nt=None, empty=False):
# type: (Any, Optional[Any], Optional[bool]) -> Any
pass
class RoundTripParser(Parser):
"""roundtrip is a safe loader, that wants to see the unmangled tag"""
def transform_tag(self, handle, suffix):
# type: (Any, Any) -> Any
# return self.tag_handles[handle]+suffix
if handle == '!!' and suffix in (
'null',
'bool',
'int',
'float',
'binary',
'timestamp',
'omap',
'pairs',
'set',
'str',
'seq',
'map',
):
return Parser.transform_tag(self, handle, suffix)
return handle + suffix
def move_token_comment(self, token, nt=None, empty=False):
# type: (Any, Optional[Any], Optional[bool]) -> Any
token.move_old_comment(self.scanner.peek_token() if nt is None else nt, empty=empty)
class RoundTripParserSC(RoundTripParser):
"""roundtrip is a safe loader, that wants to see the unmangled tag"""
# some of the differences are based on the superclass testing
# if self.loader.comment_handling is not None
def move_token_comment(self, token, nt=None, empty=False):
# type: (Any, Optional[Any], Optional[bool]) -> None
token.move_new_comment(self.scanner.peek_token() if nt is None else nt, empty=empty)
def distribute_comment(self, comment, line):
# type: (Any, Any) -> Any
# ToDo, look at indentation of the comment to determine attachment
if comment is None:
return None
if not comment[0]:
return None
if comment[0][0] != line + 1:
nprintf('>>>dcxxx', comment, line)
assert comment[0][0] == line + 1
# if comment[0] - line > 1:
# return
typ = self.loader.comment_handling & 0b11
# nprintf('>>>dca', comment, line, typ)
if typ == C_POST:
return None
if typ == C_PRE:
c = [None, None, comment[0]]
comment[0] = None
return c
# nprintf('>>>dcb', comment[0])
for _idx, cmntidx in enumerate(comment[0]):
# nprintf('>>>dcb', cmntidx)
if isinstance(self.scanner.comments[cmntidx], BlankLineComment):
break
else:
return None # no space found
if _idx == 0:
return None # first line was blank
# nprintf('>>>dcc', idx)
if typ == C_SPLIT_ON_FIRST_BLANK:
c = [None, None, comment[0][:_idx]]
comment[0] = comment[0][_idx:]
return c
raise NotImplementedError # reserved
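# --- Illustrative usage (not part of the original file) -----------------
# A minimal sketch, assuming the standard ruamel.yaml package layout, of
# how the parsers above are normally driven: through the YAML() facade,
# which wires the scanner, parser, composer and constructor together.
if __name__ == '__main__':
    from ruamel.yaml import YAML

    yaml = YAML()  # round-trip by default, i.e. uses RoundTripParser
    data = yaml.load('a: 1  # eol comment\n')
    print(data)    # a CommentedMap that preserves the comment on dump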


@@ -0,0 +1,302 @@
# coding: utf-8
# This module contains abstractions for the input stream. You don't have to
# look further; there is no pretty code.
#
# We define two classes here.
#
# Mark(source, line, column)
# It's just a record and its only use is producing nice error messages.
# Parser does not use it for any other purposes.
#
# Reader(source, data)
# Reader determines the encoding of `data` and converts it to unicode.
# Reader provides the following methods and attributes:
# reader.peek(length=1) - return the next `length` characters
# reader.forward(length=1) - move the current position forward by
# `length` characters.
# reader.index - the number of the current character.
# reader.line, reader.column - the line and the column of the current
# character.
import codecs
from ruamel.yaml.error import YAMLError, FileMark, StringMark, YAMLStreamError
from ruamel.yaml.compat import _F # NOQA
from ruamel.yaml.util import RegExp
if False: # MYPY
from typing import Any, Dict, Optional, List, Union, Text, Tuple # NOQA
# from ruamel.yaml.compat import StreamTextType # NOQA
__all__ = ['Reader', 'ReaderError']
class ReaderError(YAMLError):
def __init__(self, name, position, character, encoding, reason):
# type: (Any, Any, Any, Any, Any) -> None
self.name = name
self.character = character
self.position = position
self.encoding = encoding
self.reason = reason
def __str__(self):
# type: () -> Any
if isinstance(self.character, bytes):
return _F(
"'{self_encoding!s}' codec can't decode byte #x{ord_self_character:02x}: "
'{self_reason!s}\n'
' in "{self_name!s}", position {self_position:d}',
self_encoding=self.encoding,
ord_self_character=ord(self.character),
self_reason=self.reason,
self_name=self.name,
self_position=self.position,
)
else:
return _F(
'unacceptable character #x{self_character:04x}: {self_reason!s}\n'
' in "{self_name!s}", position {self_position:d}',
self_character=self.character,
self_reason=self.reason,
self_name=self.name,
self_position=self.position,
)
class Reader:
# Reader:
# - determines the data encoding and converts it to a unicode string,
# - checks if characters are in allowed range,
# - adds '\0' to the end.
# Reader accepts
# - a `bytes` object,
# - a `str` object,
# - a file-like object with its `read` method returning `str`,
# - a file-like object with its `read` method returning `unicode`.
# Yeah, it's ugly and slow.
def __init__(self, stream, loader=None):
# type: (Any, Any) -> None
self.loader = loader
if self.loader is not None and getattr(self.loader, '_reader', None) is None:
self.loader._reader = self
self.reset_reader()
self.stream = stream # type: Any # as .read is called
def reset_reader(self):
# type: () -> None
self.name = None # type: Any
self.stream_pointer = 0
self.eof = True
self.buffer = ""
self.pointer = 0
self.raw_buffer = None # type: Any
self.raw_decode = None
self.encoding = None # type: Optional[Text]
self.index = 0
self.line = 0
self.column = 0
@property
def stream(self):
# type: () -> Any
try:
return self._stream
except AttributeError:
raise YAMLStreamError('input stream needs to be specified')
@stream.setter
def stream(self, val):
# type: (Any) -> None
if val is None:
return
self._stream = None
if isinstance(val, str):
self.name = '<unicode string>'
self.check_printable(val)
self.buffer = val + '\0'
elif isinstance(val, bytes):
self.name = '<byte string>'
self.raw_buffer = val
self.determine_encoding()
else:
if not hasattr(val, 'read'):
raise YAMLStreamError('stream argument needs to have a read() method')
self._stream = val
self.name = getattr(self.stream, 'name', '<file>')
self.eof = False
self.raw_buffer = None
self.determine_encoding()
def peek(self, index=0):
# type: (int) -> Text
try:
return self.buffer[self.pointer + index]
except IndexError:
self.update(index + 1)
return self.buffer[self.pointer + index]
def prefix(self, length=1):
# type: (int) -> Any
if self.pointer + length >= len(self.buffer):
self.update(length)
return self.buffer[self.pointer : self.pointer + length]
def forward_1_1(self, length=1):
# type: (int) -> None
if self.pointer + length + 1 >= len(self.buffer):
self.update(length + 1)
while length != 0:
ch = self.buffer[self.pointer]
self.pointer += 1
self.index += 1
if ch in '\n\x85\u2028\u2029' or (
ch == '\r' and self.buffer[self.pointer] != '\n'
):
self.line += 1
self.column = 0
elif ch != '\uFEFF':
self.column += 1
length -= 1
def forward(self, length=1):
# type: (int) -> None
if self.pointer + length + 1 >= len(self.buffer):
self.update(length + 1)
while length != 0:
ch = self.buffer[self.pointer]
self.pointer += 1
self.index += 1
if ch == '\n' or (ch == '\r' and self.buffer[self.pointer] != '\n'):
self.line += 1
self.column = 0
elif ch != '\uFEFF':
self.column += 1
length -= 1
def get_mark(self):
# type: () -> Any
if self.stream is None:
return StringMark(
self.name, self.index, self.line, self.column, self.buffer, self.pointer
)
else:
return FileMark(self.name, self.index, self.line, self.column)
def determine_encoding(self):
# type: () -> None
while not self.eof and (self.raw_buffer is None or len(self.raw_buffer) < 2):
self.update_raw()
if isinstance(self.raw_buffer, bytes):
if self.raw_buffer.startswith(codecs.BOM_UTF16_LE):
self.raw_decode = codecs.utf_16_le_decode # type: ignore
self.encoding = 'utf-16-le'
elif self.raw_buffer.startswith(codecs.BOM_UTF16_BE):
self.raw_decode = codecs.utf_16_be_decode # type: ignore
self.encoding = 'utf-16-be'
else:
self.raw_decode = codecs.utf_8_decode # type: ignore
self.encoding = 'utf-8'
self.update(1)
NON_PRINTABLE = RegExp(
'[^\x09\x0A\x0D\x20-\x7E\x85' '\xA0-\uD7FF' '\uE000-\uFFFD' '\U00010000-\U0010FFFF' ']'
)
_printable_ascii = ('\x09\x0A\x0D' + "".join(map(chr, range(0x20, 0x7F)))).encode('ascii')
@classmethod
def _get_non_printable_ascii(cls, data): # type: ignore
# type: (Text, bytes) -> Optional[Tuple[int, Text]]
ascii_bytes = data.encode('ascii') # type: ignore
non_printables = ascii_bytes.translate(None, cls._printable_ascii) # type: ignore
if not non_printables:
return None
non_printable = non_printables[:1]
return ascii_bytes.index(non_printable), non_printable.decode('ascii')
@classmethod
def _get_non_printable_regex(cls, data):
# type: (Text) -> Optional[Tuple[int, Text]]
match = cls.NON_PRINTABLE.search(data)
if not bool(match):
return None
return match.start(), match.group()
@classmethod
def _get_non_printable(cls, data):
# type: (Text) -> Optional[Tuple[int, Text]]
try:
return cls._get_non_printable_ascii(data) # type: ignore
except UnicodeEncodeError:
return cls._get_non_printable_regex(data)
def check_printable(self, data):
# type: (Any) -> None
non_printable_match = self._get_non_printable(data)
if non_printable_match is not None:
start, character = non_printable_match
position = self.index + (len(self.buffer) - self.pointer) + start
raise ReaderError(
self.name,
position,
ord(character),
'unicode',
'special characters are not allowed',
)
def update(self, length):
# type: (int) -> None
if self.raw_buffer is None:
return
self.buffer = self.buffer[self.pointer :]
self.pointer = 0
while len(self.buffer) < length:
if not self.eof:
self.update_raw()
if self.raw_decode is not None:
try:
data, converted = self.raw_decode(self.raw_buffer, 'strict', self.eof)
except UnicodeDecodeError as exc:
character = self.raw_buffer[exc.start]
if self.stream is not None:
position = self.stream_pointer - len(self.raw_buffer) + exc.start
else:
position = exc.start
raise ReaderError(self.name, position, character, exc.encoding, exc.reason)
else:
data = self.raw_buffer
converted = len(data)
self.check_printable(data)
self.buffer += data
self.raw_buffer = self.raw_buffer[converted:]
if self.eof:
self.buffer += '\0'
self.raw_buffer = None
break
def update_raw(self, size=None):
# type: (Optional[int]) -> None
if size is None:
size = 4096
data = self.stream.read(size)
if self.raw_buffer is None:
self.raw_buffer = data
else:
self.raw_buffer += data
self.stream_pointer += len(data)
if not data:
self.eof = True
# try:
# import psyco
# psyco.bind(Reader)
# except ImportError:
# pass
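# --- Illustrative usage (not part of the original file) -----------------
# A minimal sketch of the Reader API documented above: peek() looks ahead
# without consuming, forward() advances while tracking line/column, and
# get_mark() snapshots the current position for error messages.
if __name__ == '__main__':
    r = Reader('a: 1\nb: 2\n')
    assert r.peek() == 'a'         # current character, not consumed
    r.forward(5)                   # consume 'a: 1' and the newline
    mark = r.get_mark()
    print(mark.line, mark.column)  # -> 1 0 (both zero-based)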

File diff suppressed because it is too large


@@ -1,54 +1,166 @@
# coding: utf-8
from __future__ import absolute_import
import re
try:
from .error import * # NOQA
from .nodes import * # NOQA
from .compat import string_types
except (ImportError, ValueError): # for Jython
from ruamel.yaml.error import * # NOQA
from ruamel.yaml.nodes import * # NOQA
from ruamel.yaml.compat import string_types
if False: # MYPY
from typing import Any, Dict, List, Union, Text, Optional # NOQA
from ruamel.yaml.compat import VersionType # NOQA
from ruamel.yaml.compat import _DEFAULT_YAML_VERSION, _F # NOQA
from ruamel.yaml.error import * # NOQA
from ruamel.yaml.nodes import MappingNode, ScalarNode, SequenceNode # NOQA
from ruamel.yaml.util import RegExp # NOQA
__all__ = ['BaseResolver', 'Resolver', 'VersionedResolver']
_DEFAULT_VERSION = (1, 2)
# fmt: off
# resolvers consist of
# - a list of applicable version
# - a tag
# - a regexp
# - a list of first characters to match
implicit_resolvers = [
([(1, 2)],
'tag:yaml.org,2002:bool',
RegExp('''^(?:true|True|TRUE|false|False|FALSE)$''', re.X),
list('tTfF')),
([(1, 1)],
'tag:yaml.org,2002:bool',
RegExp('''^(?:y|Y|yes|Yes|YES|n|N|no|No|NO
|true|True|TRUE|false|False|FALSE
|on|On|ON|off|Off|OFF)$''', re.X),
list('yYnNtTfFoO')),
([(1, 2)],
'tag:yaml.org,2002:float',
RegExp('''^(?:
[-+]?(?:[0-9][0-9_]*)\\.[0-9_]*(?:[eE][-+]?[0-9]+)?
|[-+]?(?:[0-9][0-9_]*)(?:[eE][-+]?[0-9]+)
|[-+]?\\.[0-9_]+(?:[eE][-+][0-9]+)?
|[-+]?\\.(?:inf|Inf|INF)
|\\.(?:nan|NaN|NAN))$''', re.X),
list('-+0123456789.')),
([(1, 1)],
'tag:yaml.org,2002:float',
RegExp('''^(?:
[-+]?(?:[0-9][0-9_]*)\\.[0-9_]*(?:[eE][-+]?[0-9]+)?
|[-+]?(?:[0-9][0-9_]*)(?:[eE][-+]?[0-9]+)
|\\.[0-9_]+(?:[eE][-+][0-9]+)?
|[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+\\.[0-9_]* # sexagesimal float
|[-+]?\\.(?:inf|Inf|INF)
|\\.(?:nan|NaN|NAN))$''', re.X),
list('-+0123456789.')),
([(1, 2)],
'tag:yaml.org,2002:int',
RegExp('''^(?:[-+]?0b[0-1_]+
|[-+]?0o?[0-7_]+
|[-+]?[0-9_]+
|[-+]?0x[0-9a-fA-F_]+)$''', re.X),
list('-+0123456789')),
([(1, 1)],
'tag:yaml.org,2002:int',
RegExp('''^(?:[-+]?0b[0-1_]+
|[-+]?0?[0-7_]+
|[-+]?(?:0|[1-9][0-9_]*)
|[-+]?0x[0-9a-fA-F_]+
|[-+]?[1-9][0-9_]*(?::[0-5]?[0-9])+)$''', re.X), # sexagesimal int
list('-+0123456789')),
([(1, 2), (1, 1)],
'tag:yaml.org,2002:merge',
RegExp('^(?:<<)$'),
['<']),
([(1, 2), (1, 1)],
'tag:yaml.org,2002:null',
RegExp('''^(?: ~
|null|Null|NULL
| )$''', re.X),
['~', 'n', 'N', '']),
([(1, 2), (1, 1)],
'tag:yaml.org,2002:timestamp',
RegExp('''^(?:[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]
|[0-9][0-9][0-9][0-9] -[0-9][0-9]? -[0-9][0-9]?
(?:[Tt]|[ \\t]+)[0-9][0-9]?
:[0-9][0-9] :[0-9][0-9] (?:\\.[0-9]*)?
(?:[ \\t]*(?:Z|[-+][0-9][0-9]?(?::[0-9][0-9])?))?)$''', re.X),
list('0123456789')),
([(1, 2), (1, 1)],
'tag:yaml.org,2002:value',
RegExp('^(?:=)$'),
['=']),
# The following resolver is only for documentation purposes. It cannot work
# because plain scalars cannot start with '!', '&', or '*'.
([(1, 2), (1, 1)],
'tag:yaml.org,2002:yaml',
RegExp('^(?:!|&|\\*)$'),
list('!&*')),
]
# fmt: on
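# Illustrative note (not from the original source): with the table above,
# the plain scalar 'yes' matches the (1, 1) bool regexp and resolves to
# tag:yaml.org,2002:bool, while under the default (1, 2) rules it matches
# nothing and falls back to tag:yaml.org,2002:str.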
class ResolverError(YAMLError):
pass
class BaseResolver(object):
class BaseResolver:
DEFAULT_SCALAR_TAG = u'tag:yaml.org,2002:str'
DEFAULT_SEQUENCE_TAG = u'tag:yaml.org,2002:seq'
DEFAULT_MAPPING_TAG = u'tag:yaml.org,2002:map'
DEFAULT_SCALAR_TAG = 'tag:yaml.org,2002:str'
DEFAULT_SEQUENCE_TAG = 'tag:yaml.org,2002:seq'
DEFAULT_MAPPING_TAG = 'tag:yaml.org,2002:map'
yaml_implicit_resolvers = {}
yaml_path_resolvers = {}
yaml_implicit_resolvers = {} # type: Dict[Any, Any]
yaml_path_resolvers = {} # type: Dict[Any, Any]
def __init__(self):
self._loader_version = None
self.resolver_exact_paths = []
self.resolver_prefix_paths = []
def __init__(self, loadumper=None):
# type: (Any, Any) -> None
self.loadumper = loadumper
if self.loadumper is not None and getattr(self.loadumper, '_resolver', None) is None:
self.loadumper._resolver = self.loadumper
self._loader_version = None # type: Any
self.resolver_exact_paths = [] # type: List[Any]
self.resolver_prefix_paths = [] # type: List[Any]
@property
def parser(self):
# type: () -> Any
if self.loadumper is not None:
if hasattr(self.loadumper, 'typ'):
return self.loadumper.parser
return self.loadumper._parser
return None
@classmethod
def add_implicit_resolver(cls, tag, regexp, first):
def add_implicit_resolver_base(cls, tag, regexp, first):
# type: (Any, Any, Any) -> None
if 'yaml_implicit_resolvers' not in cls.__dict__:
cls.yaml_implicit_resolvers = cls.yaml_implicit_resolvers.copy()
# deepcopy doesn't work here
cls.yaml_implicit_resolvers = dict(
(k, cls.yaml_implicit_resolvers[k][:]) for k in cls.yaml_implicit_resolvers
)
if first is None:
first = [None]
for ch in first:
cls.yaml_implicit_resolvers.setdefault(ch, []).append(
(tag, regexp))
cls.yaml_implicit_resolvers.setdefault(ch, []).append((tag, regexp))
@classmethod
def add_implicit_resolver(cls, tag, regexp, first):
# type: (Any, Any, Any) -> None
if 'yaml_implicit_resolvers' not in cls.__dict__:
# deepcopy doesn't work here
cls.yaml_implicit_resolvers = dict(
(k, cls.yaml_implicit_resolvers[k][:]) for k in cls.yaml_implicit_resolvers
)
if first is None:
first = [None]
for ch in first:
cls.yaml_implicit_resolvers.setdefault(ch, []).append((tag, regexp))
implicit_resolvers.append(([(1, 2), (1, 1)], tag, regexp, first))
# @classmethod
# def add_implicit_resolver(cls, tag, regexp, first):
@classmethod
def add_path_resolver(cls, tag, path, kind=None):
# type: (Any, Any, Any) -> None
# Note: `add_path_resolver` is experimental. The API could be changed.
# `new_path` is a pattern that is matched against the path from the
# root to the node that is being considered. `node_path` elements are
@@ -63,7 +175,7 @@ def add_path_resolver(cls, tag, path, kind=None):
# against a sequence value with the index equal to `index_check`.
if 'yaml_path_resolvers' not in cls.__dict__:
cls.yaml_path_resolvers = cls.yaml_path_resolvers.copy()
new_path = []
new_path = [] # type: List[Any]
for element in path:
if isinstance(element, (list, tuple)):
if len(element) == 2:
@@ -72,7 +184,9 @@ def add_path_resolver(cls, tag, path, kind=None):
node_check = element[0]
index_check = True
else:
raise ResolverError("Invalid path element: %s" % element)
raise ResolverError(
_F('Invalid path element: {element!s}', element=element)
)
else:
node_check = None
index_check = element
@@ -82,13 +196,18 @@ def add_path_resolver(cls, tag, path, kind=None):
node_check = SequenceNode
elif node_check is dict:
node_check = MappingNode
elif node_check not in [ScalarNode, SequenceNode, MappingNode] \
and not isinstance(node_check, string_types) \
and node_check is not None:
raise ResolverError("Invalid node checker: %s" % node_check)
if not isinstance(index_check, (string_types, int)) \
and index_check is not None:
raise ResolverError("Invalid index checker: %s" % index_check)
elif (
node_check not in [ScalarNode, SequenceNode, MappingNode]
and not isinstance(node_check, str)
and node_check is not None
):
raise ResolverError(
_F('Invalid node checker: {node_check!s}', node_check=node_check)
)
if not isinstance(index_check, (str, int)) and index_check is not None:
raise ResolverError(
_F('Invalid index checker: {index_check!s}', index_check=index_check)
)
new_path.append((node_check, index_check))
if kind is str:
kind = ScalarNode
@@ -96,12 +215,12 @@ def add_path_resolver(cls, tag, path, kind=None):
kind = SequenceNode
elif kind is dict:
kind = MappingNode
elif kind not in [ScalarNode, SequenceNode, MappingNode] \
and kind is not None:
raise ResolverError("Invalid node kind: %s" % kind)
elif kind not in [ScalarNode, SequenceNode, MappingNode] and kind is not None:
raise ResolverError(_F('Invalid node kind: {kind!s}', kind=kind))
cls.yaml_path_resolvers[tuple(new_path), kind] = tag
def descend_resolver(self, current_node, current_index):
# type: (Any, Any) -> None
if not self.yaml_path_resolvers:
return
exact_paths = {}
@@ -109,13 +228,11 @@ def descend_resolver(self, current_node, current_index):
if current_node:
depth = len(self.resolver_prefix_paths)
for path, kind in self.resolver_prefix_paths[-1]:
if self.check_resolver_prefix(depth, path, kind,
current_node, current_index):
if self.check_resolver_prefix(depth, path, kind, current_node, current_index):
if len(path) > depth:
prefix_paths.append((path, kind))
else:
exact_paths[kind] = self.yaml_path_resolvers[path,
kind]
exact_paths[kind] = self.yaml_path_resolvers[path, kind]
else:
for path, kind in self.yaml_path_resolvers:
if not path:
@@ -126,39 +243,40 @@ def descend_resolver(self, current_node, current_index):
self.resolver_prefix_paths.append(prefix_paths)
def ascend_resolver(self):
# type: () -> None
if not self.yaml_path_resolvers:
return
self.resolver_exact_paths.pop()
self.resolver_prefix_paths.pop()
def check_resolver_prefix(self, depth, path, kind,
current_node, current_index):
node_check, index_check = path[depth-1]
if isinstance(node_check, string_types):
def check_resolver_prefix(self, depth, path, kind, current_node, current_index):
# type: (int, Any, Any, Any, Any) -> bool
node_check, index_check = path[depth - 1]
if isinstance(node_check, str):
if current_node.tag != node_check:
return
return False
elif node_check is not None:
if not isinstance(current_node, node_check):
return
return False
if index_check is True and current_index is not None:
return
if (index_check is False or index_check is None) \
and current_index is None:
return
if isinstance(index_check, string_types):
if not (isinstance(current_index, ScalarNode) and
index_check == current_index.value):
return
elif isinstance(index_check, int) and not isinstance(index_check,
bool):
return False
if (index_check is False or index_check is None) and current_index is None:
return False
if isinstance(index_check, str):
if not (
isinstance(current_index, ScalarNode) and index_check == current_index.value
):
return False
elif isinstance(index_check, int) and not isinstance(index_check, bool):
if index_check != current_index:
return
return False
return True
def resolve(self, kind, value, implicit):
# type: (Any, Any, Any) -> Any
if kind is ScalarNode and implicit[0]:
if value == u'':
resolvers = self.yaml_implicit_resolvers.get(u'', [])
if value == "":
resolvers = self.yaml_implicit_resolvers.get("", [])
else:
resolvers = self.yaml_implicit_resolvers.get(value[0], [])
resolvers += self.yaml_implicit_resolvers.get(None, [])
@@ -166,7 +284,7 @@ def resolve(self, kind, value, implicit):
if regexp.match(value):
return tag
implicit = implicit[1]
if self.yaml_path_resolvers:
if bool(self.yaml_path_resolvers):
exact_paths = self.resolver_exact_paths[-1]
if kind in exact_paths:
return exact_paths[kind]
@@ -181,158 +299,37 @@ def resolve(self, kind, value, implicit):
@property
def processing_version(self):
# type: () -> Any
return None
class Resolver(BaseResolver):
pass
Resolver.add_implicit_resolver(
u'tag:yaml.org,2002:bool',
re.compile(u'''^(?:yes|Yes|YES|no|No|NO
|true|True|TRUE|false|False|FALSE
|on|On|ON|off|Off|OFF)$''', re.X),
list(u'yYnNtTfFoO'))
Resolver.add_implicit_resolver(
u'tag:yaml.org,2002:float',
re.compile(u'''^(?:
[-+]?(?:[0-9][0-9_]*)\\.[0-9_]*(?:[eE][-+]?[0-9]+)?
|[-+]?(?:[0-9][0-9_]*)(?:[eE][-+]?[0-9]+)
|\\.[0-9_]+(?:[eE][-+][0-9]+)?
|[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+\\.[0-9_]*
|[-+]?\\.(?:inf|Inf|INF)
|\\.(?:nan|NaN|NAN))$''', re.X),
list(u'-+0123456789.'))
Resolver.add_implicit_resolver(
u'tag:yaml.org,2002:int',
re.compile(u'''^(?:[-+]?0b[0-1_]+
|[-+]?0o?[0-7_]+
|[-+]?(?:0|[1-9][0-9_]*)
|[-+]?0x[0-9a-fA-F_]+
|[-+]?[1-9][0-9_]*(?::[0-5]?[0-9])+)$''', re.X),
list(u'-+0123456789'))
Resolver.add_implicit_resolver(
u'tag:yaml.org,2002:merge',
re.compile(u'^(?:<<)$'),
[u'<'])
Resolver.add_implicit_resolver(
u'tag:yaml.org,2002:null',
re.compile(u'''^(?: ~
|null|Null|NULL
| )$''', re.X),
[u'~', u'n', u'N', u''])
Resolver.add_implicit_resolver(
u'tag:yaml.org,2002:timestamp',
re.compile(u'''^(?:[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]
|[0-9][0-9][0-9][0-9] -[0-9][0-9]? -[0-9][0-9]?
(?:[Tt]|[ \\t]+)[0-9][0-9]?
:[0-9][0-9] :[0-9][0-9] (?:\\.[0-9]*)?
(?:[ \\t]*(?:Z|[-+][0-9][0-9]?(?::[0-9][0-9])?))?)$''', re.X),
list(u'0123456789'))
Resolver.add_implicit_resolver(
u'tag:yaml.org,2002:value',
re.compile(u'^(?:=)$'),
[u'='])
# The following resolver is only for documentation purposes. It cannot work
# because plain scalars cannot start with '!', '&', or '*'.
Resolver.add_implicit_resolver(
u'tag:yaml.org,2002:yaml',
re.compile(u'^(?:!|&|\\*)$'),
list(u'!&*'))
# resolvers consist of
# - a list of applicable version
# - a tag
# - a regexp
# - a list of first characters to match
implicit_resolvers = [
([(1, 2)],
u'tag:yaml.org,2002:bool',
re.compile(u'''^(?:true|True|TRUE|false|False|FALSE)$''', re.X),
list(u'tTfF')),
([(1, 1)],
u'tag:yaml.org,2002:bool',
re.compile(u'''^(?:yes|Yes|YES|no|No|NO
|true|True|TRUE|false|False|FALSE
|on|On|ON|off|Off|OFF)$''', re.X),
list(u'yYnNtTfFoO')),
([(1, 2), (1, 1)],
u'tag:yaml.org,2002:float',
re.compile(u'''^(?:
[-+]?(?:[0-9][0-9_]*)\\.[0-9_]*(?:[eE][-+]?[0-9]+)?
|[-+]?(?:[0-9][0-9_]*)(?:[eE][-+]?[0-9]+)
|\\.[0-9_]+(?:[eE][-+][0-9]+)?
|[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+\\.[0-9_]*
|[-+]?\\.(?:inf|Inf|INF)
|\\.(?:nan|NaN|NAN))$''', re.X),
list(u'-+0123456789.')),
([(1, 2)],
u'tag:yaml.org,2002:int',
re.compile(u'''^(?:[-+]?0b[0-1_]+
|[-+]?0o?[0-7_]+
|[-+]?(?:0|[1-9][0-9_]*)
|[-+]?0x[0-9a-fA-F_]+)$''', re.X),
list(u'-+0123456789')),
([(1, 1)],
u'tag:yaml.org,2002:int',
re.compile(u'''^(?:[-+]?0b[0-1_]+
|[-+]?0o?[0-7_]+
|[-+]?(?:0|[1-9][0-9_]*)
|[-+]?0x[0-9a-fA-F_]+
|[-+]?[1-9][0-9_]*(?::[0-5]?[0-9])+)$''', re.X),
list(u'-+0123456789')),
([(1, 2), (1, 1)],
u'tag:yaml.org,2002:merge',
re.compile(u'^(?:<<)$'),
[u'<']),
([(1, 2), (1, 1)],
u'tag:yaml.org,2002:null',
re.compile(u'''^(?: ~
|null|Null|NULL
| )$''', re.X),
[u'~', u'n', u'N', u'']),
([(1, 2), (1, 1)],
u'tag:yaml.org,2002:timestamp',
re.compile(u'''^(?:[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]
|[0-9][0-9][0-9][0-9] -[0-9][0-9]? -[0-9][0-9]?
(?:[Tt]|[ \\t]+)[0-9][0-9]?
:[0-9][0-9] :[0-9][0-9] (?:\\.[0-9]*)?
(?:[ \\t]*(?:Z|[-+][0-9][0-9]?(?::[0-9][0-9])?))?)$''', re.X),
list(u'0123456789')),
([(1, 2), (1, 1)],
u'tag:yaml.org,2002:value',
re.compile(u'^(?:=)$'),
[u'=']),
# The following resolver is only for documentation purposes. It cannot work
# because plain scalars cannot start with '!', '&', or '*'.
([(1, 2), (1, 1)],
u'tag:yaml.org,2002:yaml',
re.compile(u'^(?:!|&|\\*)$'),
list(u'!&*')),
]
for ir in implicit_resolvers:
if (1, 2) in ir[0]:
Resolver.add_implicit_resolver_base(*ir[1:])
class VersionedResolver(BaseResolver):
"""
contrary to the "normal" resolver, the smart resolver delays loading
the pattern matching rules. That way it can decide to load 1.1 rules
or the (default) 1.2 that no longer support octal without 0o, sexagesimals
or the (default) 1.2 rules, that no longer support octal without 0o, sexagesimals
and Yes/No/On/Off booleans.
"""
def __init__(self, version=None):
BaseResolver.__init__(self)
def __init__(self, version=None, loader=None, loadumper=None):
# type: (Optional[VersionType], Any, Any) -> None
if loader is None and loadumper is not None:
loader = loadumper
BaseResolver.__init__(self, loader)
self._loader_version = self.get_loader_version(version)
self._version_implicit_resolver = {}
self._version_implicit_resolver = {} # type: Dict[Any, Any]
def add_version_implicit_resolver(self, version, tag, regexp, first):
# type: (VersionType, Any, Any, Any) -> None
if first is None:
first = [None]
impl_resolver = self._version_implicit_resolver.setdefault(version, {})
@@ -340,19 +337,23 @@ def add_version_implicit_resolver(self, version, tag, regexp, first):
impl_resolver.setdefault(ch, []).append((tag, regexp))
def get_loader_version(self, version):
# type: (Optional[VersionType]) -> Any
if version is None or isinstance(version, tuple):
return version
if isinstance(version, list):
return tuple(version)
# assume string
return tuple(map(int, version.split(u'.')))
return tuple(map(int, version.split('.')))
@property
def resolver(self):
def versioned_resolver(self):
# type: () -> Any
"""
select the resolver based on the version we are parsing
"""
version = self.processing_version
if isinstance(version, str):
version = tuple(map(int, version.split('.')))
if version not in self._version_implicit_resolver:
for x in implicit_resolvers:
if version in x[0]:
@@ -360,17 +361,18 @@ def resolver(self):
return self._version_implicit_resolver[version]
def resolve(self, kind, value, implicit):
# type: (Any, Any, Any) -> Any
if kind is ScalarNode and implicit[0]:
if value == u'':
resolvers = self.resolver.get(u'', [])
if value == "":
resolvers = self.versioned_resolver.get("", [])
else:
resolvers = self.resolver.get(value[0], [])
resolvers += self.resolver.get(None, [])
resolvers = self.versioned_resolver.get(value[0], [])
resolvers += self.versioned_resolver.get(None, [])
for tag, regexp in resolvers:
if regexp.match(value):
return tag
implicit = implicit[1]
if self.yaml_path_resolvers:
if bool(self.yaml_path_resolvers):
exact_paths = self.resolver_exact_paths[-1]
if kind in exact_paths:
return exact_paths[kind]
@@ -385,13 +387,19 @@ def resolve(self, kind, value, implicit):
@property
def processing_version(self):
# type: () -> Any
try:
version = self.yaml_version
version = self.loadumper._scanner.yaml_version
except AttributeError:
# dumping
version = self.use_version
try:
if hasattr(self.loadumper, 'typ'):
version = self.loadumper.version
else:
version = self.loadumper._serializer.use_version # dumping
except AttributeError:
version = None
if version is None:
version = self._loader_version
if version is None:
version = _DEFAULT_VERSION
version = _DEFAULT_YAML_VERSION
return version
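# --- Illustrative usage (not part of the original file) -----------------
# A minimal sketch, assuming the standard ruamel.yaml package layout, of
# version-dependent resolution: a %YAML directive switches the rule set
# that VersionedResolver consults.
if __name__ == '__main__':
    from ruamel.yaml import YAML

    yaml = YAML()
    doc12 = yaml.load('flag: yes\n')                  # 1.2 rules (default)
    doc11 = yaml.load('%YAML 1.1\n---\nflag: yes\n')  # 1.1 rules
    print(repr(doc12['flag']))  # 'yes'  (a string under 1.2 rules)
    print(repr(doc11['flag']))  # True   (a boolean under 1.1 rules)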


@@ -0,0 +1,47 @@
# coding: utf-8
"""
You cannot subclass bool, and subclassing is necessary for round-tripping anchored
bool values (and also if you want to preserve the original way of writing).
bool.__bases__ is (int,), so int is used as the basis for ScalarBoolean as well.
You can use these in an if statement, but not when testing equivalence.
"""
from ruamel.yaml.anchor import Anchor
if False: # MYPY
from typing import Text, Any, Dict, List # NOQA
__all__ = ['ScalarBoolean']
class ScalarBoolean(int):
def __new__(cls, *args, **kw):
# type: (Any, Any, Any) -> Any
anchor = kw.pop('anchor', None)
b = int.__new__(cls, *args, **kw)
if anchor is not None:
b.yaml_set_anchor(anchor, always_dump=True)
return b
@property
def anchor(self):
# type: () -> Any
if not hasattr(self, Anchor.attrib):
setattr(self, Anchor.attrib, Anchor())
return getattr(self, Anchor.attrib)
def yaml_anchor(self, any=False):
# type: (bool) -> Any
if not hasattr(self, Anchor.attrib):
return None
if any or self.anchor.always_dump:
return self.anchor
return None
def yaml_set_anchor(self, value, always_dump=False):
# type: (Any, bool) -> None
self.anchor.value = value
self.anchor.always_dump = always_dump
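# --- Illustrative usage (not part of the original file) -----------------
# A minimal sketch: ScalarBoolean behaves like 0/1 in truth tests and
# arithmetic while carrying an optional anchor for round-tripping.
if __name__ == '__main__':
    b = ScalarBoolean(1, anchor='flag')
    print(bool(b), b + 1)          # -> True 2
    print(b.yaml_anchor().value)   # -> flag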


@@ -0,0 +1,124 @@
# coding: utf-8
import sys
from ruamel.yaml.anchor import Anchor
if False: # MYPY
from typing import Text, Any, Dict, List # NOQA
__all__ = ['ScalarFloat', 'ExponentialFloat', 'ExponentialCapsFloat']
class ScalarFloat(float):
def __new__(cls, *args, **kw):
# type: (Any, Any, Any) -> Any
width = kw.pop('width', None)
prec = kw.pop('prec', None)
m_sign = kw.pop('m_sign', None)
m_lead0 = kw.pop('m_lead0', 0)
exp = kw.pop('exp', None)
e_width = kw.pop('e_width', None)
e_sign = kw.pop('e_sign', None)
underscore = kw.pop('underscore', None)
anchor = kw.pop('anchor', None)
v = float.__new__(cls, *args, **kw)
v._width = width
v._prec = prec
v._m_sign = m_sign
v._m_lead0 = m_lead0
v._exp = exp
v._e_width = e_width
v._e_sign = e_sign
v._underscore = underscore
if anchor is not None:
v.yaml_set_anchor(anchor, always_dump=True)
return v
def __iadd__(self, a): # type: ignore
# type: (Any) -> Any
return float(self) + a
x = type(self)(self + a)
x._width = self._width
x._underscore = self._underscore[:] if self._underscore is not None else None # NOQA
return x
def __ifloordiv__(self, a): # type: ignore
# type: (Any) -> Any
return float(self) // a
x = type(self)(self // a)
x._width = self._width
x._underscore = self._underscore[:] if self._underscore is not None else None # NOQA
return x
def __imul__(self, a): # type: ignore
# type: (Any) -> Any
return float(self) * a
x = type(self)(self * a)
x._width = self._width
x._underscore = self._underscore[:] if self._underscore is not None else None # NOQA
x._prec = self._prec # check for others
return x
def __ipow__(self, a): # type: ignore
# type: (Any) -> Any
return float(self) ** a
x = type(self)(self ** a)
x._width = self._width
x._underscore = self._underscore[:] if self._underscore is not None else None # NOQA
return x
def __isub__(self, a): # type: ignore
# type: (Any) -> Any
return float(self) - a
x = type(self)(self - a)
x._width = self._width
x._underscore = self._underscore[:] if self._underscore is not None else None # NOQA
return x
@property
def anchor(self):
# type: () -> Any
if not hasattr(self, Anchor.attrib):
setattr(self, Anchor.attrib, Anchor())
return getattr(self, Anchor.attrib)
def yaml_anchor(self, any=False):
# type: (bool) -> Any
if not hasattr(self, Anchor.attrib):
return None
if any or self.anchor.always_dump:
return self.anchor
return None
def yaml_set_anchor(self, value, always_dump=False):
# type: (Any, bool) -> None
self.anchor.value = value
self.anchor.always_dump = always_dump
def dump(self, out=sys.stdout):
# type: (Any) -> Any
out.write(
'ScalarFloat({}| w:{}, p:{}, s:{}, lz:{}, _:{}|{}, w:{}, s:{})\n'.format(
self,
self._width, # type: ignore
self._prec, # type: ignore
self._m_sign, # type: ignore
self._m_lead0, # type: ignore
self._underscore, # type: ignore
self._exp, # type: ignore
self._e_width, # type: ignore
self._e_sign, # type: ignore
)
)
class ExponentialFloat(ScalarFloat):
def __new__(cls, value, width=None, underscore=None):
# type: (Any, Any, Any) -> Any
return ScalarFloat.__new__(cls, value, width=width, underscore=underscore)
class ExponentialCapsFloat(ScalarFloat):
def __new__(cls, value, width=None, underscore=None):
# type: (Any, Any, Any) -> Any
return ScalarFloat.__new__(cls, value, width=width, underscore=underscore)
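# --- Illustrative usage (not part of the original file) -----------------
# A minimal sketch: ScalarFloat carries formatting details (width, prec,
# exponent style) so the dumper can re-emit the value as written. Note
# that the in-place operators above return a plain float early, which
# deliberately drops that stored formatting.
if __name__ == '__main__':
    f = ScalarFloat(1.5, width=4, prec=2)
    f.dump()        # shows the stored formatting attributes
    f += 1.0        # __iadd__ returns a plain float
    print(type(f))  # -> <class 'float'>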


@@ -0,0 +1,127 @@
# coding: utf-8
from ruamel.yaml.anchor import Anchor
if False: # MYPY
from typing import Text, Any, Dict, List # NOQA
__all__ = ['ScalarInt', 'BinaryInt', 'OctalInt', 'HexInt', 'HexCapsInt', 'DecimalInt']
class ScalarInt(int):
def __new__(cls, *args, **kw):
# type: (Any, Any, Any) -> Any
width = kw.pop('width', None)
underscore = kw.pop('underscore', None)
anchor = kw.pop('anchor', None)
v = int.__new__(cls, *args, **kw)
v._width = width
v._underscore = underscore
if anchor is not None:
v.yaml_set_anchor(anchor, always_dump=True)
return v
def __iadd__(self, a): # type: ignore
# type: (Any) -> Any
x = type(self)(self + a)
x._width = self._width # type: ignore
x._underscore = ( # type: ignore
self._underscore[:] if self._underscore is not None else None # type: ignore
) # NOQA
return x
def __ifloordiv__(self, a): # type: ignore
# type: (Any) -> Any
x = type(self)(self // a)
x._width = self._width # type: ignore
x._underscore = ( # type: ignore
self._underscore[:] if self._underscore is not None else None # type: ignore
) # NOQA
return x
def __imul__(self, a): # type: ignore
# type: (Any) -> Any
x = type(self)(self * a)
x._width = self._width # type: ignore
x._underscore = ( # type: ignore
self._underscore[:] if self._underscore is not None else None # type: ignore
) # NOQA
return x
def __ipow__(self, a): # type: ignore
# type: (Any) -> Any
x = type(self)(self ** a)
x._width = self._width # type: ignore
x._underscore = ( # type: ignore
self._underscore[:] if self._underscore is not None else None # type: ignore
) # NOQA
return x
def __isub__(self, a): # type: ignore
# type: (Any) -> Any
x = type(self)(self - a)
x._width = self._width # type: ignore
x._underscore = ( # type: ignore
self._underscore[:] if self._underscore is not None else None # type: ignore
) # NOQA
return x
@property
def anchor(self):
# type: () -> Any
if not hasattr(self, Anchor.attrib):
setattr(self, Anchor.attrib, Anchor())
return getattr(self, Anchor.attrib)
def yaml_anchor(self, any=False):
# type: (bool) -> Any
if not hasattr(self, Anchor.attrib):
return None
if any or self.anchor.always_dump:
return self.anchor
return None
def yaml_set_anchor(self, value, always_dump=False):
# type: (Any, bool) -> None
self.anchor.value = value
self.anchor.always_dump = always_dump
class BinaryInt(ScalarInt):
def __new__(cls, value, width=None, underscore=None, anchor=None):
# type: (Any, Any, Any, Any) -> Any
return ScalarInt.__new__(cls, value, width=width, underscore=underscore, anchor=anchor)
class OctalInt(ScalarInt):
def __new__(cls, value, width=None, underscore=None, anchor=None):
# type: (Any, Any, Any, Any) -> Any
return ScalarInt.__new__(cls, value, width=width, underscore=underscore, anchor=anchor)
# mixed casing of A-F is not supported; when loading, the first non-digit
# determines the case
class HexInt(ScalarInt):
"""uses lower case (a-f)"""
def __new__(cls, value, width=None, underscore=None, anchor=None):
# type: (Any, Any, Any, Any) -> Any
return ScalarInt.__new__(cls, value, width=width, underscore=underscore, anchor=anchor)
class HexCapsInt(ScalarInt):
"""uses upper case (A-F)"""
def __new__(cls, value, width=None, underscore=None, anchor=None):
# type: (Any, Any, Any, Any) -> Any
return ScalarInt.__new__(cls, value, width=width, underscore=underscore, anchor=anchor)
class DecimalInt(ScalarInt):
"""needed if anchor"""
def __new__(cls, value, width=None, underscore=None, anchor=None):
# type: (Any, Any, Any, Any) -> Any
return ScalarInt.__new__(cls, value, width=width, underscore=underscore, anchor=anchor)
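# --- Illustrative usage (not part of the original file) -----------------
# A minimal sketch: the subclasses tag an int with its source notation so
# the dumper can re-emit e.g. 0xff instead of 255; in-place arithmetic
# preserves the stored width/underscore information.
if __name__ == '__main__':
    h = HexInt(255, width=2)
    h += 1
    print(type(h).__name__, int(h), h._width)  # -> HexInt 256 2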


@@ -0,0 +1,152 @@
# coding: utf-8
from ruamel.yaml.anchor import Anchor
if False: # MYPY
from typing import Text, Any, Dict, List # NOQA
__all__ = [
'ScalarString',
'LiteralScalarString',
'FoldedScalarString',
'SingleQuotedScalarString',
'DoubleQuotedScalarString',
'PlainScalarString',
# PreservedScalarString is the old name, as it was the first style to be
# preserved on round-trip; use LiteralScalarString instead
'PreservedScalarString',
]
class ScalarString(str):
__slots__ = Anchor.attrib
def __new__(cls, *args, **kw):
# type: (Any, Any) -> Any
anchor = kw.pop('anchor', None)
ret_val = str.__new__(cls, *args, **kw)
if anchor is not None:
ret_val.yaml_set_anchor(anchor, always_dump=True)
return ret_val
def replace(self, old, new, maxreplace=-1):
# type: (Any, Any, int) -> Any
return type(self)((str.replace(self, old, new, maxreplace)))
@property
def anchor(self):
# type: () -> Any
if not hasattr(self, Anchor.attrib):
setattr(self, Anchor.attrib, Anchor())
return getattr(self, Anchor.attrib)
def yaml_anchor(self, any=False):
# type: (bool) -> Any
if not hasattr(self, Anchor.attrib):
return None
if any or self.anchor.always_dump:
return self.anchor
return None
def yaml_set_anchor(self, value, always_dump=False):
# type: (Any, bool) -> None
self.anchor.value = value
self.anchor.always_dump = always_dump
class LiteralScalarString(ScalarString):
__slots__ = 'comment' # the comment after the | on the first line
style = '|'
def __new__(cls, value, anchor=None):
# type: (Text, Any) -> Any
return ScalarString.__new__(cls, value, anchor=anchor)
PreservedScalarString = LiteralScalarString
class FoldedScalarString(ScalarString):
__slots__ = ('fold_pos', 'comment') # the comment after the > on the first line
style = '>'
def __new__(cls, value, anchor=None):
# type: (Text, Any) -> Any
return ScalarString.__new__(cls, value, anchor=anchor)
class SingleQuotedScalarString(ScalarString):
__slots__ = ()
style = "'"
def __new__(cls, value, anchor=None):
# type: (Text, Any) -> Any
return ScalarString.__new__(cls, value, anchor=anchor)
class DoubleQuotedScalarString(ScalarString):
__slots__ = ()
style = '"'
def __new__(cls, value, anchor=None):
# type: (Text, Any) -> Any
return ScalarString.__new__(cls, value, anchor=anchor)
class PlainScalarString(ScalarString):
__slots__ = ()
style = ''
def __new__(cls, value, anchor=None):
# type: (Text, Any) -> Any
return ScalarString.__new__(cls, value, anchor=anchor)
def preserve_literal(s):
# type: (Text) -> Text
return LiteralScalarString(s.replace('\r\n', '\n').replace('\r', '\n'))
def walk_tree(base, map=None):
# type: (Any, Any) -> None
"""
the routine here walks over a simple yaml tree (recursing in
dict values and list items) and converts strings that
have multiple lines to literal scalars
You can also provide an explicit (ordered) mapping of multiple transforms
(only the first matching one is applied):
map = ruamel.yaml.compat.ordereddict
map['\n'] = preserve_literal
map[':'] = SingleQuotedScalarString
walk_tree(data, map=map)
"""
from collections.abc import MutableMapping, MutableSequence
if map is None:
map = {'\n': preserve_literal}
if isinstance(base, MutableMapping):
for k in base:
v = base[k] # type: Text
if isinstance(v, str):
for ch in map:
if ch in v:
base[k] = map[ch](v)
break
else:
walk_tree(v, map=map)
elif isinstance(base, MutableSequence):
for idx, elem in enumerate(base):
if isinstance(elem, str):
for ch in map:
if ch in elem:
base[idx] = map[ch](elem)
break
else:
walk_tree(elem, map=map)
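# --- Illustrative usage (not part of the original file) -----------------
# A minimal sketch of walk_tree: strings containing a newline are replaced
# in place by LiteralScalarString, so a later dump uses block style (|).
if __name__ == '__main__':
    data = {'msg': 'line one\nline two\n', 'short': 'ok'}
    walk_tree(data)
    print(type(data['msg']).__name__)    # -> LiteralScalarString
    print(type(data['short']).__name__)  # -> str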

File diff suppressed because it is too large


@@ -0,0 +1,241 @@
# coding: utf-8
from ruamel.yaml.error import YAMLError
from ruamel.yaml.compat import nprint, DBG_NODE, dbg, nprintf # NOQA
from ruamel.yaml.util import RegExp
from ruamel.yaml.events import (
StreamStartEvent,
StreamEndEvent,
MappingStartEvent,
MappingEndEvent,
SequenceStartEvent,
SequenceEndEvent,
AliasEvent,
ScalarEvent,
DocumentStartEvent,
DocumentEndEvent,
)
from ruamel.yaml.nodes import MappingNode, ScalarNode, SequenceNode
if False: # MYPY
from typing import Any, Dict, Union, Text, Optional # NOQA
from ruamel.yaml.compat import VersionType # NOQA
__all__ = ['Serializer', 'SerializerError']
class SerializerError(YAMLError):
pass
class Serializer:
# 'id' and 3+ numbers, but not 000
ANCHOR_TEMPLATE = 'id%03d'
ANCHOR_RE = RegExp('id(?!000$)\\d{3,}')
def __init__(
self,
encoding=None,
explicit_start=None,
explicit_end=None,
version=None,
tags=None,
dumper=None,
):
# type: (Any, Optional[bool], Optional[bool], Optional[VersionType], Any, Any) -> None # NOQA
self.dumper = dumper
if self.dumper is not None:
self.dumper._serializer = self
self.use_encoding = encoding
self.use_explicit_start = explicit_start
self.use_explicit_end = explicit_end
if isinstance(version, str):
self.use_version = tuple(map(int, version.split('.')))
else:
self.use_version = version # type: ignore
self.use_tags = tags
self.serialized_nodes = {} # type: Dict[Any, Any]
self.anchors = {} # type: Dict[Any, Any]
self.last_anchor_id = 0
self.closed = None # type: Optional[bool]
self._templated_id = None
@property
def emitter(self):
# type: () -> Any
if hasattr(self.dumper, 'typ'):
return self.dumper.emitter
return self.dumper._emitter
@property
def resolver(self):
# type: () -> Any
if hasattr(self.dumper, 'typ'):
return self.dumper.resolver
return self.dumper._resolver
def open(self):
# type: () -> None
if self.closed is None:
self.emitter.emit(StreamStartEvent(encoding=self.use_encoding))
self.closed = False
elif self.closed:
raise SerializerError('serializer is closed')
else:
raise SerializerError('serializer is already opened')
def close(self):
# type: () -> None
if self.closed is None:
raise SerializerError('serializer is not opened')
elif not self.closed:
self.emitter.emit(StreamEndEvent())
self.closed = True
# def __del__(self):
# self.close()
def serialize(self, node):
# type: (Any) -> None
if dbg(DBG_NODE):
nprint('Serializing nodes')
node.dump()
if self.closed is None:
raise SerializerError('serializer is not opened')
elif self.closed:
raise SerializerError('serializer is closed')
self.emitter.emit(
DocumentStartEvent(
explicit=self.use_explicit_start, version=self.use_version, tags=self.use_tags
)
)
self.anchor_node(node)
self.serialize_node(node, None, None)
self.emitter.emit(DocumentEndEvent(explicit=self.use_explicit_end))
self.serialized_nodes = {}
self.anchors = {}
self.last_anchor_id = 0
def anchor_node(self, node):
# type: (Any) -> None
if node in self.anchors:
if self.anchors[node] is None:
self.anchors[node] = self.generate_anchor(node)
else:
anchor = None
try:
if node.anchor.always_dump:
anchor = node.anchor.value
except: # NOQA
pass
self.anchors[node] = anchor
if isinstance(node, SequenceNode):
for item in node.value:
self.anchor_node(item)
elif isinstance(node, MappingNode):
for key, value in node.value:
self.anchor_node(key)
self.anchor_node(value)
def generate_anchor(self, node):
# type: (Any) -> Any
try:
anchor = node.anchor.value
except: # NOQA
anchor = None
if anchor is None:
self.last_anchor_id += 1
return self.ANCHOR_TEMPLATE % self.last_anchor_id
return anchor
def serialize_node(self, node, parent, index):
# type: (Any, Any, Any) -> None
alias = self.anchors[node]
if node in self.serialized_nodes:
node_style = getattr(node, 'style', None)
if node_style != '?':
node_style = None
self.emitter.emit(AliasEvent(alias, style=node_style))
else:
self.serialized_nodes[node] = True
self.resolver.descend_resolver(parent, index)
if isinstance(node, ScalarNode):
# here check if node.tag equals the one that would result from parsing;
# if they are not equal, quoting is necessary for strings
detected_tag = self.resolver.resolve(ScalarNode, node.value, (True, False))
default_tag = self.resolver.resolve(ScalarNode, node.value, (False, True))
implicit = (
(node.tag == detected_tag),
(node.tag == default_tag),
node.tag.startswith('tag:yaml.org,2002:'),
)
self.emitter.emit(
ScalarEvent(
alias,
node.tag,
implicit,
node.value,
style=node.style,
comment=node.comment,
)
)
elif isinstance(node, SequenceNode):
implicit = node.tag == self.resolver.resolve(SequenceNode, node.value, True)
comment = node.comment
end_comment = None
seq_comment = None
if node.flow_style is True:
if comment: # eol comment on flow style sequence
seq_comment = comment[0]
# comment[0] = None
if comment and len(comment) > 2:
end_comment = comment[2]
else:
end_comment = None
self.emitter.emit(
SequenceStartEvent(
alias,
node.tag,
implicit,
flow_style=node.flow_style,
comment=node.comment,
)
)
index = 0
for item in node.value:
self.serialize_node(item, node, index)
index += 1
self.emitter.emit(SequenceEndEvent(comment=[seq_comment, end_comment]))
elif isinstance(node, MappingNode):
implicit = node.tag == self.resolver.resolve(MappingNode, node.value, True)
comment = node.comment
end_comment = None
map_comment = None
if node.flow_style is True:
if comment: # eol comment on flow style mapping
map_comment = comment[0]
# comment[0] = None
if comment and len(comment) > 2:
end_comment = comment[2]
self.emitter.emit(
MappingStartEvent(
alias,
node.tag,
implicit,
flow_style=node.flow_style,
comment=node.comment,
nr_items=len(node.value),
)
)
for key, value in node.value:
self.serialize_node(key, node, None)
self.serialize_node(value, node, key)
self.emitter.emit(MappingEndEvent(comment=[map_comment, end_comment]))
self.resolver.ascend_resolver()
def templated_id(s):
# type: (Text) -> Any
return Serializer.ANCHOR_RE.match(s)
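
A minimal sketch (not part of the diff) of how the anchor template and regex above fit together: generated anchors look like id001, id002, ..., and ANCHOR_RE deliberately refuses the all-zero id000.

import re

ANCHOR_TEMPLATE = 'id%03d'
ANCHOR_RE = re.compile('id(?!000$)\\d{3,}')

assert ANCHOR_RE.match(ANCHOR_TEMPLATE % 1)     # 'id001' matches
assert ANCHOR_RE.match(ANCHOR_TEMPLATE % 1000)  # 'id1000' still matches
assert ANCHOR_RE.match('id000') is None         # the template never emits this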

View File

@@ -0,0 +1,61 @@
# coding: utf-8
import datetime
import copy
# ToDo: at least on PY3 you could probably attach the tzinfo correctly to the object
# a more complete datetime might be used by safe loading as well
if False: # MYPY
from typing import Any, Dict, Optional, List # NOQA
class TimeStamp(datetime.datetime):
def __init__(self, *args, **kw):
# type: (Any, Any) -> None
self._yaml = dict(t=False, tz=None, delta=0) # type: Dict[Any, Any]
def __new__(cls, *args, **kw): # datetime is immutable
# type: (Any, Any) -> Any
return datetime.datetime.__new__(cls, *args, **kw)
def __deepcopy__(self, memo):
# type: (Any) -> Any
ts = TimeStamp(self.year, self.month, self.day, self.hour, self.minute, self.second)
ts._yaml = copy.deepcopy(self._yaml)
return ts
def replace(
self,
year=None,
month=None,
day=None,
hour=None,
minute=None,
second=None,
microsecond=None,
tzinfo=True,
fold=None,
):
# type: (Any, Any, Any, Any, Any, Any, Any, Any, Any) -> Any
if year is None:
year = self.year
if month is None:
month = self.month
if day is None:
day = self.day
if hour is None:
hour = self.hour
if minute is None:
minute = self.minute
if second is None:
second = self.second
if microsecond is None:
microsecond = self.microsecond
if tzinfo is True:
tzinfo = self.tzinfo
if fold is None:
fold = self.fold
ts = type(self)(year, month, day, hour, minute, second, microsecond, tzinfo, fold=fold)
ts._yaml = copy.deepcopy(self._yaml)
return ts
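
A short usage sketch, assuming ruamel.yaml is installed: the _yaml dict attached in __init__ survives both __deepcopy__ and replace(), which is why both are overridden on an otherwise immutable datetime subclass.

from ruamel.yaml.timestamp import TimeStamp

ts = TimeStamp(2023, 6, 14, 11, 18, 41)
ts._yaml['t'] = True          # remember that an explicit 'T' separator was used
ts2 = ts.replace(minute=30)   # returns a TimeStamp, not a bare datetime
assert ts2.minute == 30
assert ts2._yaml == ts._yaml  # the YAML metadata was carried over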

View File

@@ -0,0 +1,404 @@
# coding: utf-8
from ruamel.yaml.compat import _F, nprintf # NOQA
if False: # MYPY
from typing import Text, Any, Dict, Optional, List # NOQA
from .error import StreamMark # NOQA
SHOW_LINES = True
class Token:
__slots__ = 'start_mark', 'end_mark', '_comment'
def __init__(self, start_mark, end_mark):
# type: (StreamMark, StreamMark) -> None
self.start_mark = start_mark
self.end_mark = end_mark
def __repr__(self):
# type: () -> Any
# attributes = [key for key in self.__slots__ if not key.endswith('_mark') and
# hasattr('self', key)]
attributes = [key for key in self.__slots__ if not key.endswith('_mark')]
attributes.sort()
# arguments = ', '.join(
# [_F('{key!s}={gattr!r})', key=key, gattr=getattr(self, key)) for key in attributes]
# )
arguments = [
_F('{key!s}={gattr!r}', key=key, gattr=getattr(self, key)) for key in attributes
]
if SHOW_LINES:
try:
arguments.append('line: ' + str(self.start_mark.line))
except: # NOQA
pass
try:
arguments.append('comment: ' + str(self._comment))
except: # NOQA
pass
return '{}({})'.format(self.__class__.__name__, ', '.join(arguments))
@property
def column(self):
# type: () -> int
return self.start_mark.column
@column.setter
def column(self, pos):
# type: (Any) -> None
self.start_mark.column = pos
# old style ( <= 0.17) is a TWO element list with first being the EOL
# comment concatenated with following FLC/BLNK; and second being a list of FLC/BLNK
# preceding the token
# new style ( >= 0.17 ) is a THREE element list with the first being a list of
# preceding FLC/BLNK, the second EOL and the third following FLC/BLNK
# note that new style has differing order, and does not consist of CommentToken(s)
# but of CommentInfo instances
# any non-assigned values in new style are None, but first and last can be empty list
# new style routines add one comment at a time
# going to be deprecated in favour of add_comment_eol/post
def add_post_comment(self, comment):
# type: (Any) -> None
if not hasattr(self, '_comment'):
self._comment = [None, None]
else:
assert len(self._comment) in [2, 5] # make sure it is version 0
# if isinstance(comment, CommentToken):
# if comment.value.startswith('# C09'):
# raise
self._comment[0] = comment
# going to be deprecated in favour of add_comment_pre
def add_pre_comments(self, comments):
# type: (Any) -> None
if not hasattr(self, '_comment'):
self._comment = [None, None]
else:
assert len(self._comment) == 2 # make sure it is version 0
assert self._comment[1] is None
self._comment[1] = comments
return
# new style
def add_comment_pre(self, comment):
# type: (Any) -> None
if not hasattr(self, '_comment'):
self._comment = [[], None, None] # type: ignore
else:
assert len(self._comment) == 3
if self._comment[0] is None:
self._comment[0] = [] # type: ignore
self._comment[0].append(comment) # type: ignore
def add_comment_eol(self, comment, comment_type):
# type: (Any, Any) -> None
if not hasattr(self, '_comment'):
self._comment = [None, None, None]
else:
assert len(self._comment) == 3
assert self._comment[1] is None
if self.comment[1] is None:
self._comment[1] = [] # type: ignore
self._comment[1].extend([None] * (comment_type + 1 - len(self.comment[1]))) # type: ignore # NOQA
# nprintf('commy', self.comment, comment_type)
self._comment[1][comment_type] = comment # type: ignore
def add_comment_post(self, comment):
# type: (Any) -> None
if not hasattr(self, '_comment'):
self._comment = [None, None, []] # type: ignore
else:
assert len(self._comment) == 3
if self._comment[2] is None:
self._comment[2] = [] # type: ignore
self._comment[2].append(comment) # type: ignore
# def get_comment(self):
# # type: () -> Any
# return getattr(self, '_comment', None)
@property
def comment(self):
# type: () -> Any
return getattr(self, '_comment', None)
def move_old_comment(self, target, empty=False):
# type: (Any, bool) -> Any
"""move a comment from this token to target (normally next token)
used to combine e.g. comments before a BlockEntryToken to the
ScalarToken that follows it
empty is a special for empty values -> comment after key
"""
c = self.comment
if c is None:
return
# don't push beyond last element
if isinstance(target, (StreamEndToken, DocumentStartToken)):
return
delattr(self, '_comment')
tc = target.comment
if not tc: # target comment, just insert
# special for empty value in key: value issue 25
if empty:
c = [c[0], c[1], None, None, c[0]]
target._comment = c
# nprint('mco2:', self, target, target.comment, empty)
return self
if c[0] and tc[0] or c[1] and tc[1]:
raise NotImplementedError(_F('overlap in comment {c!r} {tc!r}', c=c, tc=tc))
if c[0]:
tc[0] = c[0]
if c[1]:
tc[1] = c[1]
return self
def split_old_comment(self):
# type: () -> Any
""" split the post part of a comment, and return it
as comment to be added. Delete second part if [None, None]
abc: # this goes to sequence
# this goes to first element
- first element
"""
comment = self.comment
if comment is None or comment[0] is None:
return None # nothing to do
ret_val = [comment[0], None]
if comment[1] is None:
delattr(self, '_comment')
return ret_val
def move_new_comment(self, target, empty=False):
# type: (Any, bool) -> Any
"""move a comment from this token to target (normally next token)
used to combine e.g. comments before a BlockEntryToken to the
ScalarToken that follows it
empty is a special for empty values -> comment after key
"""
c = self.comment
if c is None:
return
# don't push beyond last element
if isinstance(target, (StreamEndToken, DocumentStartToken)):
return
delattr(self, '_comment')
tc = target.comment
if not tc: # target comment, just insert
# special for empty value in key: value issue 25
if empty:
c = [c[0], c[1], c[2]]
target._comment = c
# nprint('mco2:', self, target, target.comment, empty)
return self
# if self and target have both pre, eol or post comments, something seems wrong
for idx in range(3):
if c[idx] is not None and tc[idx] is not None:
raise NotImplementedError(_F('overlap in comment {c!r} {tc!r}', c=c, tc=tc))
# move the comment parts
for idx in range(3):
if c[idx]:
tc[idx] = c[idx]
return self
# class BOMToken(Token):
# id = '<byte order mark>'
class DirectiveToken(Token):
__slots__ = 'name', 'value'
id = '<directive>'
def __init__(self, name, value, start_mark, end_mark):
# type: (Any, Any, Any, Any) -> None
Token.__init__(self, start_mark, end_mark)
self.name = name
self.value = value
class DocumentStartToken(Token):
__slots__ = ()
id = '<document start>'
class DocumentEndToken(Token):
__slots__ = ()
id = '<document end>'
class StreamStartToken(Token):
__slots__ = ('encoding',)
id = '<stream start>'
def __init__(self, start_mark=None, end_mark=None, encoding=None):
# type: (Any, Any, Any) -> None
Token.__init__(self, start_mark, end_mark)
self.encoding = encoding
class StreamEndToken(Token):
__slots__ = ()
id = '<stream end>'
class BlockSequenceStartToken(Token):
__slots__ = ()
id = '<block sequence start>'
class BlockMappingStartToken(Token):
__slots__ = ()
id = '<block mapping start>'
class BlockEndToken(Token):
__slots__ = ()
id = '<block end>'
class FlowSequenceStartToken(Token):
__slots__ = ()
id = '['
class FlowMappingStartToken(Token):
__slots__ = ()
id = '{'
class FlowSequenceEndToken(Token):
__slots__ = ()
id = ']'
class FlowMappingEndToken(Token):
__slots__ = ()
id = '}'
class KeyToken(Token):
__slots__ = ()
id = '?'
# def x__repr__(self):
# return 'KeyToken({})'.format(
# self.start_mark.buffer[self.start_mark.index:].split(None, 1)[0])
class ValueToken(Token):
__slots__ = ()
id = ':'
class BlockEntryToken(Token):
__slots__ = ()
id = '-'
class FlowEntryToken(Token):
__slots__ = ()
id = ','
class AliasToken(Token):
__slots__ = ('value',)
id = '<alias>'
def __init__(self, value, start_mark, end_mark):
# type: (Any, Any, Any) -> None
Token.__init__(self, start_mark, end_mark)
self.value = value
class AnchorToken(Token):
__slots__ = ('value',)
id = '<anchor>'
def __init__(self, value, start_mark, end_mark):
# type: (Any, Any, Any) -> None
Token.__init__(self, start_mark, end_mark)
self.value = value
class TagToken(Token):
__slots__ = ('value',)
id = '<tag>'
def __init__(self, value, start_mark, end_mark):
# type: (Any, Any, Any) -> None
Token.__init__(self, start_mark, end_mark)
self.value = value
class ScalarToken(Token):
__slots__ = 'value', 'plain', 'style'
id = '<scalar>'
def __init__(self, value, plain, start_mark, end_mark, style=None):
# type: (Any, Any, Any, Any, Any) -> None
Token.__init__(self, start_mark, end_mark)
self.value = value
self.plain = plain
self.style = style
class CommentToken(Token):
__slots__ = '_value', '_column', 'pre_done'  # '_column' slot needed for the start_mark=None path below
id = '<comment>'
def __init__(self, value, start_mark=None, end_mark=None, column=None):
# type: (Any, Any, Any, Any) -> None
if start_mark is None:
assert column is not None
self._column = column
Token.__init__(self, start_mark, None) # type: ignore
self._value = value
@property
def value(self):
# type: () -> str
if isinstance(self._value, str):
return self._value
return "".join(self._value)
@value.setter
def value(self, val):
# type: (Any) -> None
self._value = val
def reset(self):
# type: () -> None
if hasattr(self, 'pre_done'):
delattr(self, 'pre_done')
def __repr__(self):
# type: () -> Any
v = '{!r}'.format(self.value)
if SHOW_LINES:
try:
v += ', line: ' + str(self.start_mark.line)
except: # NOQA
pass
try:
v += ', col: ' + str(self.start_mark.column)
except: # NOQA
pass
return 'CommentToken({})'.format(v)
def __eq__(self, other):
# type: (Any) -> bool
if self.start_mark != other.start_mark:
return False
if self.end_mark != other.end_mark:
return False
if self.value != other.value:
return False
return True
def __ne__(self, other):
# type: (Any) -> bool
return not self.__eq__(other)
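
A short sketch, assuming ruamel.yaml is installed (CommentMark comes from ruamel.yaml.error): CommentToken keeps either a string or a list of fragments in _value, and the value property joins list fragments on access.

from ruamel.yaml.error import CommentMark
from ruamel.yaml.tokens import CommentToken

ct = CommentToken(['# part one', ' and two\n'], CommentMark(0), None)
assert ct.value == '# part one and two\n'  # fragments joined on access
ct.value = '# replaced\n'                  # the setter stores strings as-is
assert ct.value == '# replaced\n'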

View File

@@ -0,0 +1,256 @@
# coding: utf-8
"""
some helper functions that might be generally useful
"""
import datetime
from functools import partial
import re
if False: # MYPY
from typing import Any, Dict, Optional, List, Text # NOQA
from .compat import StreamTextType # NOQA
class LazyEval:
"""
Lightweight wrapper around lazily evaluated func(*args, **kwargs).
func is only evaluated when any attribute of its return value is accessed.
Every attribute access is passed through to the wrapped value.
(This only excludes special cases like method-wrappers, e.g., __hash__.)
The sole additional attribute is the lazy_self function which holds the
return value (or, prior to evaluation, func and arguments), in its closure.
"""
def __init__(self, func, *args, **kwargs):
# type: (Any, Any, Any) -> None
def lazy_self():
# type: () -> Any
return_value = func(*args, **kwargs)
object.__setattr__(self, 'lazy_self', lambda: return_value)
return return_value
object.__setattr__(self, 'lazy_self', lazy_self)
def __getattribute__(self, name):
# type: (Any) -> Any
lazy_self = object.__getattribute__(self, 'lazy_self')
if name == 'lazy_self':
return lazy_self
return getattr(lazy_self(), name)
def __setattr__(self, name, value):
# type: (Any, Any) -> None
setattr(self.lazy_self(), name, value)
RegExp = partial(LazyEval, re.compile)
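
A sketch of the laziness (the probe function is illustrative, not part of the module): nothing is compiled until the first attribute access, and the result is memoized so the wrapped factory runs exactly once.

import re
from ruamel.yaml.util import LazyEval

calls = []

def probe(pattern):
    calls.append(pattern)
    return re.compile(pattern)

lazy = LazyEval(probe, r'\d+')
assert calls == []                   # not compiled yet
assert lazy.match('42') is not None  # first access triggers the compile
assert lazy.match('7') is not None   # memoized value is reused
assert calls == [r'\d+']             # probe ran exactly once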
timestamp_regexp = RegExp(
"""^(?P<year>[0-9][0-9][0-9][0-9])
-(?P<month>[0-9][0-9]?)
-(?P<day>[0-9][0-9]?)
(?:((?P<t>[Tt])|[ \\t]+) # explicitly not retaining extra spaces
(?P<hour>[0-9][0-9]?)
:(?P<minute>[0-9][0-9])
:(?P<second>[0-9][0-9])
(?:\\.(?P<fraction>[0-9]*))?
(?:[ \\t]*(?P<tz>Z|(?P<tz_sign>[-+])(?P<tz_hour>[0-9][0-9]?)
(?::(?P<tz_minute>[0-9][0-9]))?))?)?$""",
re.X,
)
def create_timestamp(
year, month, day, t, hour, minute, second, fraction, tz, tz_sign, tz_hour, tz_minute
):
# type: (Any, Any, Any, Any, Any, Any, Any, Any, Any, Any, Any, Any) -> Any
# create a timestamp from match against timestamp_regexp
MAX_FRAC = 999999
year = int(year)
month = int(month)
day = int(day)
if not hour:
return datetime.date(year, month, day)
hour = int(hour)
minute = int(minute)
second = int(second)
frac = 0
if fraction:
frac_s = fraction[:6]
while len(frac_s) < 6:
frac_s += '0'
frac = int(frac_s)
if len(fraction) > 6 and int(fraction[6]) > 4:
frac += 1
if frac > MAX_FRAC:
fraction = 0
else:
fraction = frac
else:
fraction = 0
delta = None
if tz_sign:
tz_hour = int(tz_hour)
tz_minute = int(tz_minute) if tz_minute else 0
delta = datetime.timedelta(
hours=tz_hour, minutes=tz_minute, seconds=1 if frac > MAX_FRAC else 0
)
if tz_sign == '-':
delta = -delta
elif frac > MAX_FRAC:
delta = -datetime.timedelta(seconds=1)
# should do something else instead (or hook this up to the preceding if statement
# in reverse)
# if delta is None:
# return datetime.datetime(year, month, day, hour, minute, second, fraction)
# return datetime.datetime(year, month, day, hour, minute, second, fraction,
# datetime.timezone.utc)
# the above is not good enough though, should provide tzinfo. In Python3 that is easily
# doable; drop that kind of support for Python2 as it has no native tzinfo
data = datetime.datetime(year, month, day, hour, minute, second, fraction)
if delta:
data -= delta
return data
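
A sketch, assuming ruamel.yaml is installed: the named groups of timestamp_regexp line up one-to-one with create_timestamp's parameters, so a match can be splatted straight in.

from ruamel.yaml.util import create_timestamp, timestamp_regexp

match = timestamp_regexp.match('2023-06-14T11:18:41.5-07:00')
assert match is not None
dt = create_timestamp(**match.groupdict())
print(dt)  # 2023-06-14 18:18:41.500000 -- shifted to UTC by the tz delta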
# originally as comment
# https://github.com/pre-commit/pre-commit/pull/211#issuecomment-186466605
# if you use this in your code, I suggest adding a test in your test suite
# that check this routines output against a known piece of your YAML
# before upgrades to this code break your round-tripped YAML
def load_yaml_guess_indent(stream, **kw):
# type: (StreamTextType, Any) -> Any
"""guess the indent and block sequence indent of yaml stream/string
returns round_trip_loaded stream, indent level, block sequence indent
- block sequence indent is the number of spaces before a dash relative to previous indent
- if there are no block sequences, indent is taken from nested mappings, block sequence
indent is unset (None) in that case
"""
from .main import YAML
# load a YAML document, guess the indentation, if you use TABs you are on your own
def leading_spaces(line):
# type: (Any) -> int
idx = 0
while idx < len(line) and line[idx] == ' ':
idx += 1
return idx
if isinstance(stream, str):
yaml_str = stream # type: Any
elif isinstance(stream, bytes):
# most likely, but the Reader checks BOM for this
yaml_str = stream.decode('utf-8')
else:
yaml_str = stream.read()
map_indent = None
indent = None # default if not found for some reason
block_seq_indent = None
prev_line_key_only = None
key_indent = 0
for line in yaml_str.splitlines():
rline = line.rstrip()
lline = rline.lstrip()
if lline.startswith('- '):
l_s = leading_spaces(line)
block_seq_indent = l_s - key_indent
idx = l_s + 1
while line[idx] == ' ': # this will end as we rstripped
idx += 1
if line[idx] == '#': # comment after -
continue
indent = idx - key_indent
break
if map_indent is None and prev_line_key_only is not None and rline:
idx = 0
while line[idx] in ' -':
idx += 1
if idx > prev_line_key_only:
map_indent = idx - prev_line_key_only
if rline.endswith(':'):
key_indent = leading_spaces(line)
idx = 0
while line[idx] == ' ': # this will end on ':'
idx += 1
prev_line_key_only = idx
continue
prev_line_key_only = None
if indent is None and map_indent is not None:
indent = map_indent
yaml = YAML()
return yaml.load(yaml_str, **kw), indent, block_seq_indent # type: ignore
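
A sketch, assuming ruamel.yaml is installed: for a document with a four-space mapping indent and dashes offset two spaces from their key, the helper reports (4, 2).

from ruamel.yaml.util import load_yaml_guess_indent

document = """\
map:
    items:
      - a
      - b
"""
data, indent, block_seq_indent = load_yaml_guess_indent(document)
assert (indent, block_seq_indent) == (4, 2)
assert data['map']['items'] == ['a', 'b']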
def configobj_walker(cfg):
# type: (Any) -> Any
"""
walks over a ConfigObj (INI file with comments) generating
corresponding YAML output (including comments)
"""
from configobj import ConfigObj # type: ignore
assert isinstance(cfg, ConfigObj)
for c in cfg.initial_comment:
if c.strip():
yield c
for s in _walk_section(cfg):
if s.strip():
yield s
for c in cfg.final_comment:
if c.strip():
yield c
def _walk_section(s, level=0):
# type: (Any, int) -> Any
from configobj import Section
assert isinstance(s, Section)
indent = ' ' * level
for name in s.scalars:
for c in s.comments[name]:
yield indent + c.strip()
x = s[name]
if '\n' in x:
i = indent + ' '
x = '|\n' + i + x.strip().replace('\n', '\n' + i)
elif ':' in x:
x = "'" + x.replace("'", "''") + "'"
line = '{0}{1}: {2}'.format(indent, name, x)
c = s.inline_comments[name]
if c:
line += ' ' + c
yield line
for name in s.sections:
for c in s.comments[name]:
yield indent + c.strip()
line = '{0}{1}:'.format(indent, name)
c = s.inline_comments[name]
if c:
line += ' ' + c
yield line
for val in _walk_section(s[name], level=level + 1):
yield val
# def config_obj_2_rt_yaml(cfg):
# from .comments import CommentedMap, CommentedSeq
# from configobj import ConfigObj
# assert isinstance(cfg, ConfigObj)
# #for c in cfg.initial_comment:
# # if c.strip():
# # pass
# cm = CommentedMap()
# for name in s.sections:
# cm[name] = d = CommentedMap()
#
#
# #for c in cfg.final_comment:
# # if c.strip():
# # yield c
# return cm
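
A sketch, assuming the third-party configobj package is installed: a small INI fragment becomes YAML-ish lines, comments included (output shown is approximate).

from configobj import ConfigObj
from ruamel.yaml.util import configobj_walker

cfg = ConfigObj(['# servers section follows', '[server]', 'host = example.org'])
for line in configobj_walker(cfg):
    print(line)
# # servers section follows
# server:
#   host: example.org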

View File

@@ -0,0 +1,8 @@
"""
Run the `archspec` CLI as a module.
"""
import sys
from .cli import main
sys.exit(main())
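
A sketch of what the three lines above buy: the package becomes runnable as `python -m archspec` (archspec must be installed; the printed name depends on the host CPU).

import subprocess
import sys

result = subprocess.run(
    [sys.executable, '-m', 'archspec', 'cpu'],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. 'skylake' or 'zen2'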

View File

@@ -6,19 +6,61 @@
archspec command line interface
"""
import click
import argparse
import typing
import archspec
import archspec.cpu
@click.group(name="archspec")
@click.version_option(version=archspec.__version__)
def main():
"""archspec command line interface"""
def _make_parser() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(
"archspec",
description="archspec command line interface",
add_help=False,
)
parser.add_argument(
"--version",
"-V",
help="Show the version and exit.",
action="version",
version=f"archspec, version {archspec.__version__}",
)
parser.add_argument("--help", "-h", help="Show the help and exit.", action="help")
subcommands = parser.add_subparsers(
title="command",
metavar="COMMAND",
dest="command",
)
cpu_command = subcommands.add_parser(
"cpu",
help="archspec command line interface for CPU",
description="archspec command line interface for CPU",
)
cpu_command.set_defaults(run=cpu)
return parser
@main.command()
def cpu():
"""archspec command line interface for CPU"""
click.echo(archspec.cpu.host())
def cpu() -> int:
"""Run the `archspec cpu` subcommand."""
print(archspec.cpu.host())
return 0
def main(argv: typing.Optional[typing.List[str]] = None) -> int:
"""Run the `archspec` command line interface."""
parser = _make_parser()
try:
args = parser.parse_args(argv)
except SystemExit as err:
return err.code
if args.command is None:
parser.print_help()
return 0
return args.run()
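
A sketch, assuming a release of archspec that contains the argparse rewrite above: because main() now accepts an argv list, the CLI can be driven programmatically instead of only from the shell.

from archspec.cli import main

exit_code = main(['cpu'])  # prints the host microarchitecture
assert exit_code == 0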

View File

@@ -268,15 +268,14 @@ def tuplify(ver):
return flags
msg = (
"cannot produce optimized binary for micro-architecture '{0}'"
" with {1}@{2} [supported compiler versions are {3}]"
)
msg = msg.format(
self.name,
compiler,
version,
", ".join([x["versions"] for x in compiler_info]),
"cannot produce optimized binary for micro-architecture '{0}' with {1}@{2}"
)
if compiler_info:
versions = [x["versions"] for x in compiler_info]
msg += f' [supported compiler versions are {", ".join(versions)}]'
else:
msg += " [no supported compiler versions]"
msg = msg.format(self.name, compiler, version)
raise UnsupportedMicroarchitecture(msg)
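
A hypothetical mirror of the message construction above (the function name and inputs are illustrative only), showing the two shapes the rewrite produces:

def unsupported_msg(name, compiler, version, compiler_info):
    msg = "cannot produce optimized binary for micro-architecture '{0}' with {1}@{2}"
    if compiler_info:
        versions = [x["versions"] for x in compiler_info]
        msg += f' [supported compiler versions are {", ".join(versions)}]'
    else:
        msg += " [no supported compiler versions]"
    return msg.format(name, compiler, version)

print(unsupported_msg("icelake", "nvhpc", "21.2", []))
print(unsupported_msg("icelake", "gcc", "4.8", [{"versions": "8.0:"}]))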

View File

@@ -102,7 +102,8 @@
"name": "x86-64",
"flags": "-march={name} -mtune=generic"
}
]
],
"nvhpc": []
}
},
"x86_64_v2": {
@@ -157,7 +158,8 @@
"name": "x86-64-v2",
"flags": "-march={name} -mtune=generic"
}
]
],
"nvhpc": []
}
},
"x86_64_v3": {
@@ -228,6 +230,13 @@
"name": "x86-64-v3",
"flags": "-march={name} -mtune=generic"
}
],
"nvhpc" : [
{
"versions": ":",
"name": "px",
"flags": "-tp {name} -mpopcnt -msse3 -msse4.1 -msse4.2 -mssse3 -mavx -mavx2 -mbmi -mbmi2 -mf16c -mfma -mlzcnt -mxsave"
}
]
}
},
@@ -304,6 +313,13 @@
"name": "x86-64-v4",
"flags": "-march={name} -mtune=generic"
}
],
"nvhpc": [
{
"versions": ":",
"name": "px",
"flags": "-tp {name} -mpopcnt -msse3 -msse4.1 -msse4.2 -mssse3 -mavx -mavx2 -mbmi -mbmi2 -mf16c -mfma -mlzcnt -mxsave -mavx512f -mavx512bw -mavx512cd -mavx512dq -mavx512vl"
}
]
}
},
@@ -358,7 +374,8 @@
"versions": ":",
"flags": "-march={name} -mtune={name}"
}
]
],
"nvhpc": []
}
},
"core2": {
@@ -412,7 +429,8 @@
"versions": ":",
"flags": "-march={name} -mtune={name}"
}
]
],
"nvhpc": []
}
},
"nehalem": {
@@ -477,7 +495,8 @@
"name": "corei7",
"flags": "-march={name} -mtune={name}"
}
]
],
"nvhpc": []
}
},
"westmere": {
@@ -539,7 +558,8 @@
"name": "corei7",
"flags": "-march={name} -mtune={name}"
}
]
],
"nvhpc": []
}
},
"sandybridge": {
@@ -609,6 +629,12 @@
"versions": ":",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"flags": "-tp {name}"
}
]
}
},
@@ -681,6 +707,12 @@
"versions": ":",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"flags": "-tp {name}"
}
]
}
},
@@ -758,6 +790,12 @@
"versions": ":",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"flags": "-tp {name}"
}
]
}
},
@@ -827,6 +865,13 @@
"versions": ":",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"name": "haswell",
"flags": "-tp {name}"
}
]
}
},
@@ -899,6 +944,13 @@
"versions": ":",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"name": "haswell",
"flags": "-tp {name}"
}
]
}
},
@@ -1063,6 +1115,13 @@
"name": "skylake-avx512",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"name": "skylake",
"flags": "-tp {name}"
}
]
}
},
@@ -1143,6 +1202,13 @@
"versions": ":",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"name": "skylake",
"flags": "-tp {name}"
}
]
}
},
@@ -1222,6 +1288,13 @@
"versions": ":",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"name": "skylake",
"flags": "-tp {name}"
}
]
}
},
@@ -1329,6 +1402,13 @@
"name": "icelake-client",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"name": "skylake",
"flags": "-tp {name}"
}
]
}
},
@@ -1387,7 +1467,8 @@
"warnings": "Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags": "-msse2"
}
]
],
"nvhpc": []
}
},
"bulldozer": {
@@ -1451,6 +1532,12 @@
"warnings": "Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags": "-msse3"
}
],
"nvhpc": [
{
"versions": ":",
"flags": "-tp {name}"
}
]
}
},
@@ -1519,6 +1606,12 @@
"warnings": "Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags": "-msse3"
}
],
"nvhpc": [
{
"versions": ":",
"flags": "-tp {name}"
}
]
}
},
@@ -1588,6 +1681,13 @@
"warnings": "Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors",
"flags": "-msse4.2"
}
],
"nvhpc": [
{
"versions": ":",
"name": "piledriver",
"flags": "-tp {name}"
}
]
}
},
@@ -1663,6 +1763,13 @@
"name": "core-avx2",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"name": "piledriver",
"flags": "-tp {name}"
}
]
}
},
@@ -1741,6 +1848,12 @@
"name": "core-avx2",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"flags": "-tp {name}"
}
]
}
},
@@ -1820,6 +1933,12 @@
"name": "core-avx2",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": "20.5:",
"flags": "-tp {name}"
}
]
}
},
@@ -1902,6 +2021,12 @@
"name": "core-avx2",
"flags": "-march={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": "21.11:",
"flags": "-tp {name}"
}
]
}
},
@@ -1958,18 +2083,28 @@
"compilers": {
"gcc": [
{
"versions": "10.3:",
"versions": "10.3:13.0",
"name": "znver3",
"flags": "-march={name} -mtune={name} -mavx512f -mavx512dq -mavx512ifma -mavx512cd -mavx512bw -mavx512vl -mavx512vbmi -mavx512vbmi2 -mavx512vnni -mavx512bitalg"
},
{
"versions": "13.1:",
"name": "znver4",
"flags": "-march={name} -mtune={name}"
}
],
"clang": [
{
"versions": "12.0:",
"versions": "12.0:15.9",
"name": "znver3",
"flags": "-march={name} -mtune={name} -mavx512f -mavx512dq -mavx512ifma -mavx512cd -mavx512bw -mavx512vl -mavx512vbmi -mavx512vbmi2 -mavx512vnni -mavx512bitalg"
},
{
"versions": "16.0:",
"name": "znver4",
"flags": "-march={name} -mtune={name}"
}
],
"aocc": [
{
"versions": "3.0:3.9",
@@ -1982,7 +2117,15 @@
"name": "znver4",
"flags": "-march={name} -mtune={name}"
}
]
],
"nvhpc": [
{
"versions": "21.11:",
"name": "zen3",
"flags": "-tp {name}",
"warnings": "zen4 is not fully supported by nvhpc yet, falling back to zen3"
}
]
}
},
"ppc64": {
@@ -2087,7 +2230,8 @@
"versions": ":",
"flags": "-mcpu={name} -mtune={name}"
}
]
],
"nvhpc": []
}
},
"power8le": {
@@ -2116,6 +2260,13 @@
"name": "power8",
"flags": "-mcpu={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"name": "pwr8",
"flags": "-tp {name}"
}
]
}
},
@@ -2139,6 +2290,13 @@
"name": "power9",
"flags": "-mcpu={name} -mtune={name}"
}
],
"nvhpc": [
{
"versions": ":",
"name": "pwr9",
"flags": "-tp {name}"
}
]
}
},
@@ -2170,7 +2328,8 @@
"versions": ":",
"flags": "-march=armv8-a -mtune=generic"
}
]
],
"nvhpc": []
}
},
"armv8.1a": {
@@ -2552,6 +2711,13 @@
"versions": "20:",
"flags" : "-march=armv8.2-a+fp16+rcpc+dotprod+crypto"
}
],
"nvhpc" : [
{
"versions": "22.5:",
"name": "neoverse-n1",
"flags": "-tp {name}"
}
]
}
},
@@ -2617,15 +2783,31 @@
"flags" : "-march=armv8.2-a+crypto+fp16 -mtune=cortex-a72"
},
{
"versions": "8.0:8.9",
"versions": "8.0:8.4",
"flags" : "-march=armv8.2-a+fp16+dotprod+crypto -mtune=cortex-a72"
},
{
"versions": "9.0:9.9",
"versions": "8.5:8.9",
"flags" : "-mcpu=neoverse-v1"
},
{
"versions": "10.0:",
{
"versions": "9.0:9.3",
"flags" : "-march=armv8.2-a+fp16+dotprod+crypto -mtune=cortex-a72"
},
{
"versions": "9.4:9.9",
"flags" : "-mcpu=neoverse-v1"
},
{
"versions": "10.0:10.1",
"flags" : "-march=armv8.2-a+fp16+dotprod+crypto -mtune=cortex-a72"
},
{
"versions": "10.2:10.2.99",
"flags" : "-mcpu=zeus"
},
{
"versions": "10.3:",
"flags" : "-mcpu=neoverse-v1"
}
@@ -2657,6 +2839,13 @@
"versions": "22:",
"flags" : "-march=armv8.4-a+sve+ssbs+fp16+bf16+crypto+i8mm+rng"
}
],
"nvhpc" : [
{
"versions": "22.5:",
"name": "neoverse-n1",
"flags": "-tp {name}"
}
]
}
},
@@ -2782,6 +2971,10 @@
{
"versions": "13.0:",
"flags" : "-mcpu=apple-m1"
},
{
"versions": "16.0:",
"flags" : "-mcpu=apple-m2"
}
],
"apple-clang": [
@@ -2790,8 +2983,12 @@
"flags" : "-march=armv8.5-a"
},
{
"versions": "13.0:",
"flags" : "-mcpu=vortex"
"versions": "13.0:14.0.2",
"flags" : "-mcpu=apple-m1"
},
{
"versions": "14.0.2:",
"flags" : "-mcpu=apple-m2"
}
]
}
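
A sketch of how the nvhpc entries above are consumed, assuming an archspec release that ships them (the target name is illustrative): optimization_flags substitutes the entry's name into the "-tp {name}" template.

import archspec.cpu

target = archspec.cpu.TARGETS['zen3']
print(target.optimization_flags('nvhpc', '22.3'))  # expected: '-tp zen3'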

View File

@@ -1,2 +0,0 @@
import pkg_resources
pkg_resources.declare_namespace(__name__)

View File

@@ -1,38 +0,0 @@
ruamel.yaml
===========
``ruamel.yaml`` is a YAML 1.2 loader/dumper package for Python.
* `Overview <http://yaml.readthedocs.org/en/latest/overview.html>`_
* `Installing <http://yaml.readthedocs.org/en/latest/install.html>`_
* `Details <http://yaml.readthedocs.org/en/latest/detail.html>`_
* `Examples <http://yaml.readthedocs.org/en/latest/example.html>`_
* `Differences with PyYAML <http://yaml.readthedocs.org/en/latest/pyyaml.html>`_
.. image:: https://readthedocs.org/projects/yaml/badge/?version=stable
:target: https://yaml.readthedocs.org/en/stable
ChangeLog
=========
::
0.11.15 (2016-XX-XX):
- Change to prevent FutureWarning in NumPy, as reported by tgehring
("comparison to None will result in an elementwise object comparison in the future")
0.11.14 (2016-07-06):
- fix preserve_quotes missing on original Loaders (as reported
by Leynos, bitbucket issue 38)
0.11.13 (2016-07-06):
- documentation only, automated linux wheels
0.11.12 (2016-07-06):
- added support for roundtrip of single/double quoted scalars using:
ruamel.yaml.round_trip_load(stream, preserve_quotes=True)
0.11.0 (2016-02-18):
- RoundTripLoader loads 1.2 by default (no sexagesimals, 012 octals nor
yes/no/on/off booleans)

View File

@@ -1,85 +0,0 @@
# coding: utf-8
from __future__ import print_function
from __future__ import absolute_import
# install_requires of ruamel.base is not really required but the old
# ruamel.base installed __init__.py, and thus a new version should
# be installed at some point
_package_data = dict(
full_package_name="ruamel.yaml",
version_info=(0, 11, 15),
author="Anthon van der Neut",
author_email="a.van.der.neut@ruamel.eu",
description="ruamel.yaml is a YAML parser/emitter that supports roundtrip preservation of comments, seq/map flow style, and map key order", # NOQA
entry_points=None,
install_requires=dict(
any=[],
py26=["ruamel.ordereddict"],
py27=["ruamel.ordereddict"]
),
ext_modules=[dict(
name="_ruamel_yaml",
src=["ext/_ruamel_yaml.c", "ext/api.c", "ext/writer.c", "ext/dumper.c",
"ext/loader.c", "ext/reader.c", "ext/scanner.c", "ext/parser.c",
"ext/emitter.c"],
lib=[],
# test='#include "ext/yaml.h"\n\nint main(int argc, char* argv[])\n{\nyaml_parser_t parser;\nparser = parser; /* prevent warning */\nreturn 0;\n}\n' # NOQA
)
],
classifiers=[
"Programming Language :: Python :: 2.6",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.3",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Programming Language :: Python :: Implementation :: Jython",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: Text Processing :: Markup"
],
windows_wheels=True,
read_the_docs='yaml',
many_linux='libyaml-devel',
)
# < from ruamel.util.new import _convert_version
def _convert_version(tup):
"""create a PEP 386 pseudo-format conformant string from tuple tup"""
ret_val = str(tup[0]) # first is always digit
next_sep = "." # separator for next extension, can be "" or "."
for x in tup[1:]:
if isinstance(x, int):
ret_val += next_sep + str(x)
next_sep = '.'
continue
first_letter = x[0].lower()
next_sep = ''
if first_letter in 'abcr':
ret_val += 'rc' if first_letter == 'r' else first_letter
elif first_letter in 'pd':
ret_val += '.post' if first_letter == 'p' else '.dev'
return ret_val
# <
version_info = _package_data['version_info']
__version__ = _convert_version(version_info)
del _convert_version
try:
from .cyaml import * # NOQA
__with_libyaml__ = True
except (ImportError, ValueError): # for Jython
__with_libyaml__ = False
# body extracted to main.py
try:
from .main import * # NOQA
except ImportError:
from ruamel.yaml.main import * # NOQA
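
A self-contained sketch of the removed _convert_version helper (copied verbatim from the file above) with a few expected outputs:

def _convert_version(tup):
    """create a PEP 386 pseudo-format conformant string from tuple tup"""
    ret_val = str(tup[0])  # first is always digit
    next_sep = "."  # separator for next extension, can be "" or "."
    for x in tup[1:]:
        if isinstance(x, int):
            ret_val += next_sep + str(x)
            next_sep = '.'
            continue
        first_letter = x[0].lower()
        next_sep = ''
        if first_letter in 'abcr':
            ret_val += 'rc' if first_letter == 'r' else first_letter
        elif first_letter in 'pd':
            ret_val += '.post' if first_letter == 'p' else '.dev'
    return ret_val

assert _convert_version((0, 11, 15)) == '0.11.15'
assert _convert_version((0, 12, 0, 'rc', 1)) == '0.12.0rc1'
assert _convert_version((0, 11, 15, 'post', 1)) == '0.11.15.post1'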

View File

@@ -1,486 +0,0 @@
# coding: utf-8
from __future__ import absolute_import
from __future__ import print_function
"""
stuff to deal with comments and formatting on dict/list/ordereddict/set
these are not really related, formatting could be factored out as
a separate base
"""
import sys
if sys.version_info >= (3, 3):
from collections.abc import MutableSet
else:
from collections import MutableSet
__all__ = ["CommentedSeq", "CommentedMap", "CommentedOrderedMap",
"CommentedSet", 'comment_attrib', 'merge_attrib']
try:
from .compat import ordereddict
except ImportError:
from ruamel.yaml.compat import ordereddict
comment_attrib = '_yaml_comment'
format_attrib = '_yaml_format'
line_col_attrib = '_yaml_line_col'
anchor_attrib = '_yaml_anchor'
merge_attrib = '_yaml_merge'
tag_attrib = '_yaml_tag'
class Comment(object):
# sys.getsizeof tested the Comment objects; __slots__ makes them bigger
# and adding self.end did not matter
attrib = comment_attrib
def __init__(self):
self.comment = None # [post, [pre]]
# map key (mapping/omap/dict) or index (sequence/list) to a list of
# dict: post_key, pre_key, post_value, pre_value
# list: pre item, post item
self._items = {}
# self._start = [] # should not put these on first item
self._end = [] # end of document comments
def __str__(self):
if self._end:
end = ',\n end=' + str(self._end)
else:
end = ''
return "Comment(comment={0},\n items={1}{2})".format(
self.comment, self._items, end)
@property
def items(self):
return self._items
@property
def end(self):
return self._end
@end.setter
def end(self, value):
self._end = value
@property
def start(self):
return self._start
@start.setter
def start(self, value):
self._start = value
# to distinguish key from None
def NoComment():
pass
class Format(object):
attrib = format_attrib
def __init__(self):
self._flow_style = None
def set_flow_style(self):
self._flow_style = True
def set_block_style(self):
self._flow_style = False
def flow_style(self, default=None):
"""if default (the flow_style) is None, the flow style tacked on to
the object explicitly will be taken. If that is None as well the
default flow style rules the format down the line, or the type
of the constituent values (simple -> flow, map/list -> block)"""
if self._flow_style is None:
return default
return self._flow_style
class LineCol(object):
attrib = line_col_attrib
def __init__(self):
self.line = None
self.col = None
self.data = None
def add_kv_line_col(self, key, data):
if self.data is None:
self.data = {}
self.data[key] = data
def key(self, k):
return self._kv(k, 0, 1)
def value(self, k):
return self._kv(k, 2, 3)
def _kv(self, k, x0, x1):
if self.data is None:
return None
data = self.data[k]
return data[x0], data[x1]
def item(self, idx):
if self.data is None:
return None
return self.data[idx][0], self.data[idx][1]
def add_idx_line_col(self, key, data):
if self.data is None:
self.data = {}
self.data[key] = data
class Anchor(object):
attrib = anchor_attrib
def __init__(self):
self.value = None
self.always_dump = False
class Tag(object):
"""store tag information for roundtripping"""
attrib = tag_attrib
def __init__(self):
self.value = None
class CommentedBase(object):
@property
def ca(self):
if not hasattr(self, Comment.attrib):
setattr(self, Comment.attrib, Comment())
return getattr(self, Comment.attrib)
def yaml_end_comment_extend(self, comment, clear=False):
if clear:
self.ca.end = []
self.ca.end.extend(comment)
def yaml_key_comment_extend(self, key, comment, clear=False):
l = self.ca._items.setdefault(key, [None, None, None, None])
if clear or l[1] is None:
if comment[1] is not None:
assert isinstance(comment[1], list)
l[1] = comment[1]
else:
l[1].extend(comment[0])
l[0] = comment[0]
def yaml_value_comment_extend(self, key, comment, clear=False):
l = self.ca._items.setdefault(key, [None, None, None, None])
if clear or l[3] is None:
if comment[1] is not None:
assert isinstance(comment[1], list)
l[3] = comment[1]
else:
l[3].extend(comment[0])
l[2] = comment[0]
def yaml_set_start_comment(self, comment, indent=0):
"""overwrites any preceding comment lines on an object
expects comment to be without `#` and possibly have multiple lines
"""
from .error import Mark
from .tokens import CommentToken
pre_comments = self._yaml_get_pre_comment()
if comment[-1] == '\n':
comment = comment[:-1] # strip final newline if there
start_mark = Mark(None, None, None, indent, None, None)
for com in comment.split('\n'):
pre_comments.append(CommentToken('# ' + com + '\n', start_mark, None))
@property
def fa(self):
"""format attribute
set_flow_style()/set_block_style()"""
if not hasattr(self, Format.attrib):
setattr(self, Format.attrib, Format())
return getattr(self, Format.attrib)
def yaml_add_eol_comment(self, comment, key=NoComment, column=None):
"""
there is a problem as eol comments should start with ' #'
(at the beginning of a line the space does not have to precede
the '#'). The column index is for the '#' mark
"""
from .tokens import CommentToken
from .error import Mark
if column is None:
column = self._yaml_get_column(key)
if comment[0] != '#':
comment = '# ' + comment
if column is None:
if comment[0] == '#':
comment = ' ' + comment
column = 0
start_mark = Mark(None, None, None, column, None, None)
ct = [CommentToken(comment, start_mark, None), None]
self._yaml_add_eol_comment(ct, key=key)
@property
def lc(self):
if not hasattr(self, LineCol.attrib):
setattr(self, LineCol.attrib, LineCol())
return getattr(self, LineCol.attrib)
def _yaml_set_line_col(self, line, col):
self.lc.line = line
self.lc.col = col
def _yaml_set_kv_line_col(self, key, data):
self.lc.add_kv_line_col(key, data)
def _yaml_set_idx_line_col(self, key, data):
self.lc.add_idx_line_col(key, data)
@property
def anchor(self):
if not hasattr(self, Anchor.attrib):
setattr(self, Anchor.attrib, Anchor())
return getattr(self, Anchor.attrib)
def yaml_anchor(self):
if not hasattr(self, Anchor.attrib):
return None
return self.anchor
def yaml_set_anchor(self, value, always_dump=False):
self.anchor.value = value
self.anchor.always_dump = always_dump
@property
def tag(self):
if not hasattr(self, Tag.attrib):
setattr(self, Tag.attrib, Tag())
return getattr(self, Tag.attrib)
def yaml_set_tag(self, value):
self.tag.value = value
class CommentedSeq(list, CommentedBase):
__slots__ = [Comment.attrib, ]
def _yaml_add_comment(self, comment, key=NoComment):
if key is not NoComment:
self.yaml_key_comment_extend(key, comment)
else:
self.ca.comment = comment
def _yaml_add_eol_comment(self, comment, key):
self._yaml_add_comment(comment, key=key)
def _yaml_get_columnX(self, key):
return self.ca.items[key][0].start_mark.column
def insert(self, idx, val):
"""the comments after the insertion have to move forward"""
list.insert(self, idx, val)
for list_index in sorted(self.ca.items, reverse=True):
if list_index < idx:
break
self.ca.items[list_index+1] = self.ca.items.pop(list_index)
def pop(self, idx):
res = list.pop(self, idx)
self.ca.items.pop(idx, None) # might not be there -> default value
for list_index in sorted(self.ca.items):
if list_index < idx:
continue
self.ca.items[list_index-1] = self.ca.items.pop(list_index)
return res
def _yaml_get_column(self, key):
column = None
sel_idx = None
pre, post = key-1, key+1
if pre in self.ca.items:
sel_idx = pre
elif post in self.ca.items:
sel_idx = post
else:
# self.ca.items is not ordered
for row_idx, k1 in enumerate(self):
if row_idx >= key:
break
if row_idx not in self.ca.items:
continue
sel_idx = row_idx
if sel_idx is not None:
column = self._yaml_get_columnX(sel_idx)
return column
def _yaml_get_pre_comment(self):
if self.ca.comment is None:
pre_comments = []
self.ca.comment = [None, pre_comments]
else:
pre_comments = self.ca.comment[1] = []
return pre_comments
class CommentedMap(ordereddict, CommentedBase):
__slots__ = [Comment.attrib, ]
def _yaml_add_comment(self, comment, key=NoComment, value=NoComment):
"""values is set to key to indicate a value attachment of comment"""
if key is not NoComment:
self.yaml_key_comment_extend(key, comment)
return
if value is not NoComment:
self.yaml_value_comment_extend(value, comment)
else:
self.ca.comment = comment
def _yaml_add_eol_comment(self, comment, key):
"""add on the value line, with value specified by the key"""
self._yaml_add_comment(comment, value=key)
def _yaml_get_columnX(self, key):
return self.ca.items[key][2].start_mark.column
def _yaml_get_column(self, key):
column = None
sel_idx = None
pre, post, last = None, None, None
for x in self:
if pre is not None and x != key:
post = x
break
if x == key:
pre = last
last = x
if pre in self.ca.items:
sel_idx = pre
elif post in self.ca.items:
sel_idx = post
else:
# self.ca.items is not ordered
for row_idx, k1 in enumerate(self):
if k1 >= key:
break
if k1 not in self.ca.items:
continue
sel_idx = k1
if sel_idx is not None:
column = self._yaml_get_columnX(sel_idx)
return column
def _yaml_get_pre_comment(self):
if self.ca.comment is None:
pre_comments = []
self.ca.comment = [None, pre_comments]
else:
pre_comments = self.ca.comment[1] = []
return pre_comments
def update(self, *vals, **kwds):
try:
ordereddict.update(self, *vals, **kwds)
except TypeError:
# probably a plain dict was passed in and ordereddict.update choked on it
for x in vals[0]:
self[x] = vals[0][x]
def insert(self, pos, key, value, comment=None):
"""insert key value into given position
attach comment if provided
"""
ordereddict.insert(self, pos, key, value)
if comment is not None:
self.yaml_add_eol_comment(comment, key=key)
def mlget(self, key, default=None, list_ok=False):
"""multi-level get that expects dicts within dicts"""
if not isinstance(key, list):
return self.get(key, default)
# assume that the key is a list of recursively accessible dicts
def get_one_level(key_list, level, d):
if not list_ok:
assert isinstance(d, dict)
if level >= len(key_list):
if level > len(key_list):
raise IndexError
return d[key_list[level-1]]
return get_one_level(key_list, level+1, d[key_list[level-1]])
try:
return get_one_level(key, 1, self)
except KeyError:
return default
except (TypeError, IndexError):
if not list_ok:
raise
return default
def __getitem__(self, key):
try:
return ordereddict.__getitem__(self, key)
except KeyError:
for merged in getattr(self, merge_attrib, []):
if key in merged[1]:
return merged[1][key]
raise
def get(self, key, default=None):
try:
return self.__getitem__(key)
except:
return default
@property
def merge(self):
if not hasattr(self, merge_attrib):
setattr(self, merge_attrib, [])
return getattr(self, merge_attrib)
def add_yaml_merge(self, value):
self.merge.extend(value)
class CommentedOrderedMap(CommentedMap):
__slots__ = [Comment.attrib, ]
class CommentedSet(MutableSet, CommentedMap):
__slots__ = [Comment.attrib, 'odict']
def __init__(self, values=None):
self.odict = ordereddict()
MutableSet.__init__(self)
if values is not None:
self |= values
def add(self, value):
"""Add an element."""
self.odict[value] = None
def discard(self, value):
"""Remove an element. Do not raise an exception if absent."""
self.odict.pop(value, None)  # pop with default, so an absent value does not raise
def __contains__(self, x):
return x in self.odict
def __iter__(self):
for x in self.odict:
yield x
def __len__(self):
return len(self.odict)
def __repr__(self):
return 'set({0!r})'.format(self.odict.keys())
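
A sketch, assuming ruamel.yaml is installed: mlget (defined above) walks nested mappings with a list key and falls back to the default on a miss.

from ruamel.yaml.comments import CommentedMap

outer = CommentedMap()
outer['a'] = CommentedMap()
outer['a']['b'] = 1
assert outer.mlget(['a', 'b']) == 1
assert outer.mlget(['a', 'missing'], default=-1) == -1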

View File

@@ -1,123 +0,0 @@
# coding: utf-8
from __future__ import print_function
# partially from package six by Benjamin Peterson
import sys
import os
import types
try:
from ruamel.ordereddict import ordereddict
except:
try:
from collections.abc import OrderedDict
except ImportError:
try:
from collections import OrderedDict
except ImportError:
from ordereddict import OrderedDict
# to get the right name import ... as ordereddict doesn't do that
class ordereddict(OrderedDict):
if not hasattr(OrderedDict, 'insert'):
def insert(self, pos, key, value):
if pos >= len(self):
self[key] = value
return
od = ordereddict()
od.update(self)
for k in od:
del self[k]
for index, old_key in enumerate(od):
if pos == index:
self[key] = value
self[old_key] = od[old_key]
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
if PY3:
def utf8(s):
return s
def to_str(s):
return s
def to_unicode(s):
return s
else:
def utf8(s):
return s.encode('utf-8')
def to_str(s):
return str(s)
def to_unicode(s):
return unicode(s)
if PY3:
string_types = str,
integer_types = int,
class_types = type,
text_type = str
binary_type = bytes
MAXSIZE = sys.maxsize
unichr = chr
import io
StringIO = io.StringIO
BytesIO = io.BytesIO
else:
string_types = basestring,
integer_types = (int, long)
class_types = (type, types.ClassType)
text_type = unicode
binary_type = str
unichr = unichr # to allow importing
import StringIO
StringIO = StringIO.StringIO
import cStringIO
BytesIO = cStringIO.StringIO
if PY3:
builtins_module = 'builtins'
else:
builtins_module = '__builtin__'
def with_metaclass(meta, *bases):
"""Create a base class with a metaclass."""
return meta("NewBase", bases, {})
DBG_TOKEN = 1
DBG_EVENT = 2
DBG_NODE = 4
_debug = None
# used from yaml util when testing
def dbg(val=None):
global _debug
if _debug is None:
# set to true or false
_debug = os.environ.get('YAMLDEBUG')
if _debug is None:
_debug = 0
else:
_debug = int(_debug)
if val is None:
return _debug
return _debug & val
def nprint(*args, **kw):
if dbg():  # call dbg() to read the YAMLDEBUG flag; the bare function object is always truthy
print(*args, **kw)

View File

@@ -1,182 +0,0 @@
# coding: utf-8
from __future__ import absolute_import
from __future__ import print_function
try:
from .error import MarkedYAMLError
from .compat import utf8
except (ImportError, ValueError): # for Jython
from ruamel.yaml.error import MarkedYAMLError
from ruamel.yaml.compat import utf8
from ruamel.yaml.events import (
StreamStartEvent, StreamEndEvent, MappingStartEvent, MappingEndEvent,
SequenceStartEvent, SequenceEndEvent, AliasEvent, ScalarEvent,
)
from ruamel.yaml.nodes import (
MappingNode, ScalarNode, SequenceNode,
)
__all__ = ['Composer', 'ComposerError']
class ComposerError(MarkedYAMLError):
pass
class Composer(object):
def __init__(self):
self.anchors = {}
def check_node(self):
# Drop the STREAM-START event.
if self.check_event(StreamStartEvent):
self.get_event()
# Are there more documents available?
return not self.check_event(StreamEndEvent)
def get_node(self):
# Get the root node of the next document.
if not self.check_event(StreamEndEvent):
return self.compose_document()
def get_single_node(self):
# Drop the STREAM-START event.
self.get_event()
# Compose a document if the stream is not empty.
document = None
if not self.check_event(StreamEndEvent):
document = self.compose_document()
# Ensure that the stream contains no more documents.
if not self.check_event(StreamEndEvent):
event = self.get_event()
raise ComposerError(
"expected a single document in the stream",
document.start_mark, "but found another document",
event.start_mark)
# Drop the STREAM-END event.
self.get_event()
return document
def compose_document(self):
# Drop the DOCUMENT-START event.
self.get_event()
# Compose the root node.
node = self.compose_node(None, None)
# Drop the DOCUMENT-END event.
self.get_event()
self.anchors = {}
return node
def compose_node(self, parent, index):
if self.check_event(AliasEvent):
event = self.get_event()
alias = event.anchor
if alias not in self.anchors:
raise ComposerError(
None, None, "found undefined alias %r"
% utf8(alias), event.start_mark)
return self.anchors[alias]
event = self.peek_event()
anchor = event.anchor
if anchor is not None: # have an anchor
if anchor in self.anchors:
raise ComposerError(
"found duplicate anchor %r; first occurence"
% utf8(anchor), self.anchors[anchor].start_mark,
"second occurence", event.start_mark)
self.descend_resolver(parent, index)
if self.check_event(ScalarEvent):
node = self.compose_scalar_node(anchor)
elif self.check_event(SequenceStartEvent):
node = self.compose_sequence_node(anchor)
elif self.check_event(MappingStartEvent):
node = self.compose_mapping_node(anchor)
self.ascend_resolver()
return node
def compose_scalar_node(self, anchor):
event = self.get_event()
tag = event.tag
if tag is None or tag == u'!':
tag = self.resolve(ScalarNode, event.value, event.implicit)
node = ScalarNode(tag, event.value,
event.start_mark, event.end_mark, style=event.style,
comment=event.comment)
if anchor is not None:
self.anchors[anchor] = node
return node
def compose_sequence_node(self, anchor):
start_event = self.get_event()
tag = start_event.tag
if tag is None or tag == u'!':
tag = self.resolve(SequenceNode, None, start_event.implicit)
node = SequenceNode(tag, [],
start_event.start_mark, None,
flow_style=start_event.flow_style,
comment=start_event.comment, anchor=anchor)
if anchor is not None:
self.anchors[anchor] = node
index = 0
while not self.check_event(SequenceEndEvent):
node.value.append(self.compose_node(node, index))
index += 1
end_event = self.get_event()
if node.flow_style is True and end_event.comment is not None:
if node.comment is not None:
print('Warning: unexpected end_event comment in sequence '
'node {0}'.format(node.flow_style))
node.comment = end_event.comment
node.end_mark = end_event.end_mark
self.check_end_doc_comment(end_event, node)
return node
def compose_mapping_node(self, anchor):
start_event = self.get_event()
tag = start_event.tag
if tag is None or tag == u'!':
tag = self.resolve(MappingNode, None, start_event.implicit)
node = MappingNode(tag, [],
start_event.start_mark, None,
flow_style=start_event.flow_style,
comment=start_event.comment, anchor=anchor)
if anchor is not None:
self.anchors[anchor] = node
while not self.check_event(MappingEndEvent):
# key_event = self.peek_event()
item_key = self.compose_node(node, None)
# if item_key in node.value:
# raise ComposerError("while composing a mapping",
# start_event.start_mark,
# "found duplicate key", key_event.start_mark)
item_value = self.compose_node(node, item_key)
# node.value[item_key] = item_value
node.value.append((item_key, item_value))
end_event = self.get_event()
if node.flow_style is True and end_event.comment is not None:
node.comment = end_event.comment
node.end_mark = end_event.end_mark
self.check_end_doc_comment(end_event, node)
return node
def check_end_doc_comment(self, end_event, node):
if end_event.comment and end_event.comment[1]:
# pre comments on an end_event, no following to move to
if node.comment is None:
node.comment = [None, None]
assert not isinstance(node, ScalarEvent)
# this is a post comment on a mapping node, add as third element
# in the list
node.comment.append(end_event.comment[1])
end_event.comment[1] = None

View File

@@ -1,9 +0,0 @@
# coding: utf-8
import warnings
from ruamel.yaml.util import configobj_walker as new_configobj_walker
def configobj_walker(cfg):
warnings.warn("configobj_walker has move to ruamel.yaml.util, please update your code")
return new_configobj_walker(cfg)

File diff suppressed because it is too large

View File

@@ -1,102 +0,0 @@
# coding: utf-8
from __future__ import absolute_import
__all__ = ['BaseDumper', 'SafeDumper', 'Dumper', 'RoundTripDumper']
try:
from .emitter import * # NOQA
from .serializer import * # NOQA
from .representer import * # NOQA
from .resolver import * # NOQA
except (ImportError, ValueError): # for Jython
from ruamel.yaml.emitter import * # NOQA
from ruamel.yaml.serializer import * # NOQA
from ruamel.yaml.representer import * # NOQA
from ruamel.yaml.resolver import * # NOQA
class BaseDumper(Emitter, Serializer, BaseRepresenter, BaseResolver):
def __init__(self, stream,
default_style=None, default_flow_style=None,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None, block_seq_indent=None,
top_level_colon_align=None, prefix_colon=None):
Emitter.__init__(self, stream, canonical=canonical,
indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break,
block_seq_indent=block_seq_indent)
Serializer.__init__(self, encoding=encoding,
explicit_start=explicit_start,
explicit_end=explicit_end,
version=version, tags=tags)
Representer.__init__(self, default_style=default_style,
default_flow_style=default_flow_style)
Resolver.__init__(self)
class SafeDumper(Emitter, Serializer, SafeRepresenter, Resolver):
def __init__(self, stream,
default_style=None, default_flow_style=None,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None, block_seq_indent=None,
top_level_colon_align=None, prefix_colon=None):
Emitter.__init__(self, stream, canonical=canonical,
indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break,
block_seq_indent=block_seq_indent)
Serializer.__init__(self, encoding=encoding,
explicit_start=explicit_start,
explicit_end=explicit_end,
version=version, tags=tags)
SafeRepresenter.__init__(self, default_style=default_style,
default_flow_style=default_flow_style)
Resolver.__init__(self)
class Dumper(Emitter, Serializer, Representer, Resolver):
def __init__(self, stream,
default_style=None, default_flow_style=None,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None, block_seq_indent=None,
top_level_colon_align=None, prefix_colon=None):
Emitter.__init__(self, stream, canonical=canonical,
indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break,
block_seq_indent=block_seq_indent)
Serializer.__init__(self, encoding=encoding,
explicit_start=explicit_start,
explicit_end=explicit_end,
version=version, tags=tags)
Representer.__init__(self, default_style=default_style,
default_flow_style=default_flow_style)
Resolver.__init__(self)
class RoundTripDumper(Emitter, Serializer, RoundTripRepresenter, VersionedResolver):
def __init__(self, stream,
default_style=None, default_flow_style=None,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None, block_seq_indent=None,
top_level_colon_align=None, prefix_colon=None):
Emitter.__init__(self, stream, canonical=canonical,
indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break,
block_seq_indent=block_seq_indent,
top_level_colon_align=top_level_colon_align,
prefix_colon=prefix_colon)
Serializer.__init__(self, encoding=encoding,
explicit_start=explicit_start,
explicit_end=explicit_end,
version=version, tags=tags)
RoundTripRepresenter.__init__(self, default_style=default_style,
default_flow_style=default_flow_style)
VersionedResolver.__init__(self)
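
A sketch using the old-style top-level API these dumpers backed (it assumes an older ruamel.yaml release; the round_trip_* functions were deprecated and later removed): RoundTripDumper is what preserves comments and key order.

import ruamel.yaml

data = ruamel.yaml.round_trip_load('a: 1  # keep me\nb: 2\n')
print(ruamel.yaml.round_trip_dump(data), end='')
# a: 1  # keep me
# b: 2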

File diff suppressed because it is too large

View File

@@ -1,85 +0,0 @@
# coding: utf-8
from __future__ import absolute_import
__all__ = ['Mark', 'YAMLError', 'MarkedYAMLError']
try:
from .compat import utf8
except (ImportError, ValueError): # for Jython
from ruamel.yaml.compat import utf8
class Mark(object):
def __init__(self, name, index, line, column, buffer, pointer):
self.name = name
self.index = index
self.line = line
self.column = column
self.buffer = buffer
self.pointer = pointer
def get_snippet(self, indent=4, max_length=75):
if self.buffer is None:
return None
head = ''
start = self.pointer
while (start > 0 and
self.buffer[start-1] not in u'\0\r\n\x85\u2028\u2029'):
start -= 1
if self.pointer-start > max_length/2-1:
head = ' ... '
start += 5
break
tail = ''
end = self.pointer
while (end < len(self.buffer) and
self.buffer[end] not in u'\0\r\n\x85\u2028\u2029'):
end += 1
if end-self.pointer > max_length/2-1:
tail = ' ... '
end -= 5
break
snippet = utf8(self.buffer[start:end])
return ' '*indent + head + snippet + tail + '\n' \
+ ' '*(indent+self.pointer-start+len(head)) + '^'
def __str__(self):
snippet = self.get_snippet()
where = " in \"%s\", line %d, column %d" \
% (self.name, self.line+1, self.column+1)
if snippet is not None:
where += ":\n"+snippet
return where
class YAMLError(Exception):
pass
class MarkedYAMLError(YAMLError):
def __init__(self, context=None, context_mark=None,
problem=None, problem_mark=None, note=None):
self.context = context
self.context_mark = context_mark
self.problem = problem
self.problem_mark = problem_mark
self.note = note
def __str__(self):
lines = []
if self.context is not None:
lines.append(self.context)
if self.context_mark is not None \
and (self.problem is None or self.problem_mark is None or
self.context_mark.name != self.problem_mark.name or
self.context_mark.line != self.problem_mark.line or
self.context_mark.column != self.problem_mark.column):
lines.append(str(self.context_mark))
if self.problem is not None:
lines.append(self.problem)
if self.problem_mark is not None:
lines.append(str(self.problem_mark))
if self.note is not None:
lines.append(self.note)
return '\n'.join(lines)
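
`Mark` carries enough of the original buffer to render the caret snippet that `MarkedYAMLError.__str__` appends below the context and problem lines. A small hedged sketch (the buffer and offsets are invented; the constructor order is name, index, line, column, buffer, pointer):

    from ruamel.yaml.error import Mark

    buf = u'key: [1, 2\n'
    mark = Mark('<demo>', 10, 0, 10, buf, 10)  # points just past the final '2'
    print(mark)
    # ->
    #  in "<demo>", line 1, column 11:
    #      key: [1, 2
    #                ^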


@@ -1,106 +0,0 @@
# coding: utf-8
# Abstract classes.
def CommentCheck():
pass
class Event(object):
def __init__(self, start_mark=None, end_mark=None, comment=CommentCheck):
self.start_mark = start_mark
self.end_mark = end_mark
# assert comment is not CommentCheck
if comment is CommentCheck:
comment = None
self.comment = comment
def __repr__(self):
attributes = [key for key in ['anchor', 'tag', 'implicit', 'value',
'flow_style', 'style']
if hasattr(self, key)]
arguments = ', '.join(['%s=%r' % (key, getattr(self, key))
for key in attributes])
if self.comment not in [None, CommentCheck]:
arguments += ', comment={!r}'.format(self.comment)
return '%s(%s)' % (self.__class__.__name__, arguments)
class NodeEvent(Event):
def __init__(self, anchor, start_mark=None, end_mark=None, comment=None):
Event.__init__(self, start_mark, end_mark, comment)
self.anchor = anchor
class CollectionStartEvent(NodeEvent):
def __init__(self, anchor, tag, implicit, start_mark=None, end_mark=None,
flow_style=None, comment=None):
Event.__init__(self, start_mark, end_mark, comment)
self.anchor = anchor
self.tag = tag
self.implicit = implicit
self.flow_style = flow_style
class CollectionEndEvent(Event):
pass
# Implementations.
class StreamStartEvent(Event):
def __init__(self, start_mark=None, end_mark=None, encoding=None,
comment=None):
Event.__init__(self, start_mark, end_mark, comment)
self.encoding = encoding
class StreamEndEvent(Event):
pass
class DocumentStartEvent(Event):
def __init__(self, start_mark=None, end_mark=None,
explicit=None, version=None, tags=None, comment=None):
Event.__init__(self, start_mark, end_mark, comment)
self.explicit = explicit
self.version = version
self.tags = tags
class DocumentEndEvent(Event):
def __init__(self, start_mark=None, end_mark=None,
explicit=None, comment=None):
Event.__init__(self, start_mark, end_mark, comment)
self.explicit = explicit
class AliasEvent(NodeEvent):
pass
class ScalarEvent(NodeEvent):
def __init__(self, anchor, tag, implicit, value,
start_mark=None, end_mark=None, style=None, comment=None):
NodeEvent.__init__(self, anchor, start_mark, end_mark, comment)
self.tag = tag
self.implicit = implicit
self.value = value
self.style = style
class SequenceStartEvent(CollectionStartEvent):
pass
class SequenceEndEvent(CollectionEndEvent):
pass
class MappingStartEvent(CollectionStartEvent):
pass
class MappingEndEvent(CollectionEndEvent):
pass
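
These classes are the currency of the streaming layer: `parse()` (defined in the main module later in this diff) yields them and `emit()` consumes them. A hedged round-trip sketch built from a hand-assembled event list, assuming the usual `ruamel.yaml` import path:

    import ruamel.yaml
    from ruamel.yaml.events import (StreamStartEvent, DocumentStartEvent,
                                    ScalarEvent, DocumentEndEvent,
                                    StreamEndEvent)

    events = [
        StreamStartEvent(),
        DocumentStartEvent(),                          # implicit document
        ScalarEvent(None, None, (True, False), u'hello'),
        DocumentEndEvent(),
        StreamEndEvent(),
    ]
    print(ruamel.yaml.emit(events))  # emit() returns u'hello\n'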


@@ -1,61 +0,0 @@
# coding: utf-8
from __future__ import absolute_import
__all__ = ['BaseLoader', 'SafeLoader', 'Loader', 'RoundTripLoader']
try:
from .reader import * # NOQA
from .scanner import * # NOQA
from .parser import * # NOQA
from .composer import * # NOQA
from .constructor import * # NOQA
from .resolver import * # NOQA
except (ImportError, ValueError): # for Jython
from ruamel.yaml.reader import * # NOQA
from ruamel.yaml.scanner import * # NOQA
from ruamel.yaml.parser import * # NOQA
from ruamel.yaml.composer import * # NOQA
from ruamel.yaml.constructor import * # NOQA
from ruamel.yaml.resolver import * # NOQA
class BaseLoader(Reader, Scanner, Parser, Composer, BaseConstructor, BaseResolver):
def __init__(self, stream, version=None, preserve_quotes=None):
Reader.__init__(self, stream)
Scanner.__init__(self)
Parser.__init__(self)
Composer.__init__(self)
BaseConstructor.__init__(self)
BaseResolver.__init__(self)
class SafeLoader(Reader, Scanner, Parser, Composer, SafeConstructor, Resolver):
def __init__(self, stream, version=None, preserve_quotes=None):
Reader.__init__(self, stream)
Scanner.__init__(self)
Parser.__init__(self)
Composer.__init__(self)
SafeConstructor.__init__(self)
Resolver.__init__(self)
class Loader(Reader, Scanner, Parser, Composer, Constructor, Resolver):
def __init__(self, stream, version=None, preserve_quotes=None):
Reader.__init__(self, stream)
Scanner.__init__(self)
Parser.__init__(self)
Composer.__init__(self)
Constructor.__init__(self)
Resolver.__init__(self)
class RoundTripLoader(Reader, RoundTripScanner, RoundTripParser, Composer,
RoundTripConstructor, VersionedResolver):
def __init__(self, stream, version=None, preserve_quotes=None):
Reader.__init__(self, stream)
RoundTripScanner.__init__(self)
RoundTripParser.__init__(self)
Composer.__init__(self)
RoundTripConstructor.__init__(self, preserve_quotes=preserve_quotes)
VersionedResolver.__init__(self, version)
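
The loaders mirror the dumpers; note that only `RoundTripLoader` actually forwards `preserve_quotes` and `version`, while the other three accept and ignore them. A hedged sketch of the practical difference, assuming the usual `ruamel.yaml` import path:

    import ruamel.yaml

    text = u'a: 1  # keep me\n'
    plain = ruamel.yaml.load(text, Loader=ruamel.yaml.SafeLoader)
    # -> {'a': 1}; the comment is discarded
    rt = ruamel.yaml.load(text, Loader=ruamel.yaml.RoundTripLoader)
    # -> a CommentedMap that still carries the comment
    print(ruamel.yaml.round_trip_dump(rt))  # a: 1  # keep me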


@@ -1,395 +0,0 @@
# coding: utf-8
from __future__ import absolute_import
from ruamel.yaml.error import * # NOQA
from ruamel.yaml.tokens import * # NOQA
from ruamel.yaml.events import * # NOQA
from ruamel.yaml.nodes import * # NOQA
from ruamel.yaml.loader import * # NOQA
from ruamel.yaml.dumper import * # NOQA
from ruamel.yaml.compat import StringIO, BytesIO, with_metaclass, PY3
# import io
def scan(stream, Loader=Loader):
"""
Scan a YAML stream and produce scanning tokens.
"""
loader = Loader(stream)
try:
while loader.check_token():
yield loader.get_token()
finally:
loader.dispose()
def parse(stream, Loader=Loader):
"""
Parse a YAML stream and produce parsing events.
"""
loader = Loader(stream)
try:
while loader.check_event():
yield loader.get_event()
finally:
loader.dispose()
def compose(stream, Loader=Loader):
"""
Parse the first YAML document in a stream
and produce the corresponding representation tree.
"""
loader = Loader(stream)
try:
return loader.get_single_node()
finally:
loader.dispose()
def compose_all(stream, Loader=Loader):
"""
Parse all YAML documents in a stream
and produce corresponding representation trees.
"""
loader = Loader(stream)
try:
while loader.check_node():
yield loader.get_node()
finally:
loader.dispose()
def load(stream, Loader=Loader, version=None, preserve_quotes=None):
"""
Parse the first YAML document in a stream
and produce the corresponding Python object.
"""
loader = Loader(stream, version, preserve_quotes=preserve_quotes)
try:
return loader.get_single_data()
finally:
loader.dispose()
def load_all(stream, Loader=Loader, version=None):
"""
Parse all YAML documents in a stream
and produce corresponding Python objects.
"""
loader = Loader(stream, version)
try:
while loader.check_data():
yield loader.get_data()
finally:
loader.dispose()
def safe_load(stream, version=None):
"""
Parse the first YAML document in a stream
and produce the corresponding Python object.
Resolve only basic YAML tags.
"""
return load(stream, SafeLoader, version)
def safe_load_all(stream, version=None):
"""
Parse all YAML documents in a stream
and produce corresponding Python objects.
Resolve only basic YAML tags.
"""
return load_all(stream, SafeLoader, version)
def round_trip_load(stream, version=None, preserve_quotes=None):
"""
Parse the first YAML document in a stream
and produce the corresponding Python object.
Resolve only basic YAML tags.
"""
return load(stream, RoundTripLoader, version, preserve_quotes=preserve_quotes)
def round_trip_load_all(stream, version=None, preserve_quotes=None):
"""
Parse all YAML documents in a stream
and produce corresponding Python objects.
Resolve only basic YAML tags.
"""
return load_all(stream, RoundTripLoader, version, preserve_quotes=preserve_quotes)
def emit(events, stream=None, Dumper=Dumper,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None):
"""
Emit YAML parsing events into a stream.
If stream is None, return the produced string instead.
"""
getvalue = None
if stream is None:
stream = StringIO()
getvalue = stream.getvalue
dumper = Dumper(stream, canonical=canonical, indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break)
try:
for event in events:
dumper.emit(event)
finally:
dumper.dispose()
if getvalue:
return getvalue()
enc = None if PY3 else 'utf-8'
def serialize_all(nodes, stream=None, Dumper=Dumper,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=enc, explicit_start=None, explicit_end=None,
version=None, tags=None):
"""
Serialize a sequence of representation trees into a YAML stream.
If stream is None, return the produced string instead.
"""
getvalue = None
if stream is None:
if encoding is None:
stream = StringIO()
else:
stream = BytesIO()
getvalue = stream.getvalue
dumper = Dumper(stream, canonical=canonical, indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break,
encoding=encoding, version=version, tags=tags,
explicit_start=explicit_start, explicit_end=explicit_end)
try:
dumper.open()
for node in nodes:
dumper.serialize(node)
dumper.close()
finally:
dumper.dispose()
if getvalue:
return getvalue()
def serialize(node, stream=None, Dumper=Dumper, **kwds):
"""
Serialize a representation tree into a YAML stream.
If stream is None, return the produced string instead.
"""
return serialize_all([node], stream, Dumper=Dumper, **kwds)
def dump_all(documents, stream=None, Dumper=Dumper,
default_style=None, default_flow_style=None,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=enc, explicit_start=None, explicit_end=None,
version=None, tags=None, block_seq_indent=None,
top_level_colon_align=None, prefix_colon=None):
"""
Serialize a sequence of Python objects into a YAML stream.
If stream is None, return the produced string instead.
"""
getvalue = None
if top_level_colon_align is True:
top_level_colon_align = max([len(str(x)) for x in documents[0]])
if stream is None:
if encoding is None:
stream = StringIO()
else:
stream = BytesIO()
getvalue = stream.getvalue
dumper = Dumper(stream, default_style=default_style,
default_flow_style=default_flow_style,
canonical=canonical, indent=indent, width=width,
allow_unicode=allow_unicode, line_break=line_break,
encoding=encoding, explicit_start=explicit_start,
explicit_end=explicit_end, version=version,
tags=tags, block_seq_indent=block_seq_indent,
top_level_colon_align=top_level_colon_align, prefix_colon=prefix_colon,
)
try:
dumper.open()
for data in documents:
dumper.represent(data)
dumper.close()
finally:
dumper.dispose()
if getvalue:
return getvalue()
def dump(data, stream=None, Dumper=Dumper,
default_style=None, default_flow_style=None,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=enc, explicit_start=None, explicit_end=None,
version=None, tags=None, block_seq_indent=None):
"""
Serialize a Python object into a YAML stream.
If stream is None, return the produced string instead.
default_style is one of: None, '', '"', "'", '|', '>'
"""
return dump_all([data], stream, Dumper=Dumper,
default_style=default_style,
default_flow_style=default_flow_style,
canonical=canonical,
indent=indent, width=width,
allow_unicode=allow_unicode,
line_break=line_break,
encoding=encoding, explicit_start=explicit_start,
explicit_end=explicit_end,
version=version, tags=tags, block_seq_indent=block_seq_indent)
def safe_dump_all(documents, stream=None, **kwds):
"""
Serialize a sequence of Python objects into a YAML stream.
Produce only basic YAML tags.
If stream is None, return the produced string instead.
"""
return dump_all(documents, stream, Dumper=SafeDumper, **kwds)
def safe_dump(data, stream=None, **kwds):
"""
Serialize a Python object into a YAML stream.
Produce only basic YAML tags.
If stream is None, return the produced string instead.
"""
return dump_all([data], stream, Dumper=SafeDumper, **kwds)
def round_trip_dump(data, stream=None, Dumper=RoundTripDumper,
default_style=None, default_flow_style=None,
canonical=None, indent=None, width=None,
allow_unicode=None, line_break=None,
encoding=enc, explicit_start=None, explicit_end=None,
version=None, tags=None, block_seq_indent=None,
top_level_colon_align=None, prefix_colon=None):
allow_unicode = True if allow_unicode is None else allow_unicode
return dump_all([data], stream, Dumper=Dumper,
default_style=default_style,
default_flow_style=default_flow_style,
canonical=canonical,
indent=indent, width=width,
allow_unicode=allow_unicode,
line_break=line_break,
encoding=encoding, explicit_start=explicit_start,
explicit_end=explicit_end,
version=version, tags=tags, block_seq_indent=block_seq_indent,
top_level_colon_align=top_level_colon_align, prefix_colon=prefix_colon)
def add_implicit_resolver(tag, regexp, first=None,
Loader=Loader, Dumper=Dumper):
"""
Add an implicit scalar detector.
If an implicit scalar value matches the given regexp,
the corresponding tag is assigned to the scalar.
first is a sequence of possible initial characters or None.
"""
Loader.add_implicit_resolver(tag, regexp, first)
Dumper.add_implicit_resolver(tag, regexp, first)
def add_path_resolver(tag, path, kind=None, Loader=Loader, Dumper=Dumper):
"""
Add a path based resolver for the given tag.
A path is a list of keys that forms a path
to a node in the representation tree.
Keys can be string values, integers, or None.
"""
Loader.add_path_resolver(tag, path, kind)
Dumper.add_path_resolver(tag, path, kind)
def add_constructor(tag, constructor, Loader=Loader):
"""
Add a constructor for the given tag.
Constructor is a function that accepts a Loader instance
and a node object and produces the corresponding Python object.
"""
Loader.add_constructor(tag, constructor)
def add_multi_constructor(tag_prefix, multi_constructor, Loader=Loader):
"""
Add a multi-constructor for the given tag prefix.
Multi-constructor is called for a node if its tag starts with tag_prefix.
Multi-constructor accepts a Loader instance, a tag suffix,
and a node object and produces the corresponding Python object.
"""
Loader.add_multi_constructor(tag_prefix, multi_constructor)
def add_representer(data_type, representer, Dumper=Dumper):
"""
Add a representer for the given type.
Representer is a function accepting a Dumper instance
and an instance of the given data type
and producing the corresponding representation node.
"""
Dumper.add_representer(data_type, representer)
def add_multi_representer(data_type, multi_representer, Dumper=Dumper):
"""
Add a multi-representer for the given type.
Multi-representer is a function accepting a Dumper instance
and an instance of the given data type or subtype
and producing the corresponding representation node.
"""
Dumper.add_multi_representer(data_type, multi_representer)
class YAMLObjectMetaclass(type):
"""
The metaclass for YAMLObject.
"""
def __init__(cls, name, bases, kwds):
super(YAMLObjectMetaclass, cls).__init__(name, bases, kwds)
if 'yaml_tag' in kwds and kwds['yaml_tag'] is not None:
cls.yaml_loader.add_constructor(cls.yaml_tag, cls.from_yaml)
cls.yaml_dumper.add_representer(cls, cls.to_yaml)
class YAMLObject(with_metaclass(YAMLObjectMetaclass)):
"""
An object that can dump itself to a YAML stream
and load itself from a YAML stream.
"""
__slots__ = () # no direct instantiation, so allow immutable subclasses
yaml_loader = Loader
yaml_dumper = Dumper
yaml_tag = None
yaml_flow_style = None
@classmethod
def from_yaml(cls, loader, node):
"""
Convert a representation node to a Python object.
"""
return loader.construct_yaml_object(node, cls)
@classmethod
def to_yaml(cls, dumper, data):
"""
Convert a Python object to a representation node.
"""
return dumper.represent_yaml_object(cls.yaml_tag, data, cls,
flow_style=cls.yaml_flow_style)
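
`YAMLObjectMetaclass` is what makes subclassing work without explicit registration: defining `yaml_tag` wires `from_yaml`/`to_yaml` into the class's loader and dumper. A minimal hedged sketch (the `Point` class and its values are invented for illustration):

    import ruamel.yaml

    class Point(ruamel.yaml.YAMLObject):
        yaml_tag = u'!Point'

        def __init__(self, x, y):
            self.x = x
            self.y = y

    print(ruamel.yaml.dump(Point(1, 2)))   # !Point {x: 1, y: 2}
    pt = ruamel.yaml.load(u'!Point {x: 1, y: 2}')
    print(pt.x)                            # 1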


@@ -1,86 +0,0 @@
# coding: utf-8
from __future__ import print_function
# string_types comes from the vendored compat shim: str on Python 3,
# basestring on Python 2 (a bare `basestring` would NameError under PY3)
from ruamel.yaml.compat import string_types
class Node(object):
def __init__(self, tag, value, start_mark, end_mark, comment=None):
self.tag = tag
self.value = value
self.start_mark = start_mark
self.end_mark = end_mark
self.comment = comment
self.anchor = None
def __repr__(self):
value = self.value
# if isinstance(value, list):
# if len(value) == 0:
# value = '<empty>'
# elif len(value) == 1:
# value = '<1 item>'
# else:
# value = '<%d items>' % len(value)
# else:
# if len(value) > 75:
# value = repr(value[:70]+u' ... ')
# else:
# value = repr(value)
value = repr(value)
return '%s(tag=%r, value=%s)' % (self.__class__.__name__,
self.tag, value)
def dump(self, indent=0):
# debug helper; the format fields use explicit indices, since mixing
# '{0}' with auto-numbered '{!r}' raises ValueError at runtime
if isinstance(self.value, string_types):
print('{0}{1}(tag={2!r}, value={3!r})'.format(
' ' * indent, self.__class__.__name__, self.tag, self.value))
if self.comment:
print('  {0}comment: {1}'.format(
' ' * indent, self.comment))
return
print('{0}{1}(tag={2!r})'.format(
' ' * indent, self.__class__.__name__, self.tag))
if self.comment:
print('  {0}comment: {1}'.format(
' ' * indent, self.comment))
for v in self.value:
if isinstance(v, tuple):
for v1 in v:
v1.dump(indent+1)
elif isinstance(v, Node):
v.dump(indent+1)
else:
print('Node value type?', type(v))
class ScalarNode(Node):
"""
styles:
? -> set() ? key, no value
" -> double quoted
' -> single quoted
| -> literal style
> -> folded style
"""
id = 'scalar'
def __init__(self, tag, value, start_mark=None, end_mark=None, style=None,
comment=None):
Node.__init__(self, tag, value, start_mark, end_mark, comment=comment)
self.style = style
class CollectionNode(Node):
def __init__(self, tag, value, start_mark=None, end_mark=None,
flow_style=None, comment=None, anchor=None):
Node.__init__(self, tag, value, start_mark, end_mark, comment=comment)
self.flow_style = flow_style
self.anchor = anchor
class SequenceNode(CollectionNode):
id = 'sequence'
class MappingNode(CollectionNode):
id = 'mapping'
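
Nodes sit between events and native objects: `compose()` (defined in the main module earlier in this diff) returns this tree, and the representers build it when dumping. A quick hedged sketch of the shape, assuming the usual import path:

    from __future__ import print_function
    import ruamel.yaml

    node = ruamel.yaml.compose(u'a: [1, 2]\n')
    print(node.tag)            # tag:yaml.org,2002:map
    key, val = node.value[0]   # MappingNode.value is a list of (key, value) node pairs
    print(key.value, val.id)   # a sequence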


@@ -1,675 +0,0 @@
# coding: utf-8
from __future__ import absolute_import
# The following YAML grammar is LL(1) and is parsed by a recursive descent
# parser.
#
# stream ::= STREAM-START implicit_document? explicit_document*
# STREAM-END
# implicit_document ::= block_node DOCUMENT-END*
# explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
# block_node_or_indentless_sequence ::=
# ALIAS
# | properties (block_content |
# indentless_block_sequence)?
# | block_content
# | indentless_block_sequence
# block_node ::= ALIAS
# | properties block_content?
# | block_content
# flow_node ::= ALIAS
# | properties flow_content?
# | flow_content
# properties ::= TAG ANCHOR? | ANCHOR TAG?
# block_content ::= block_collection | flow_collection | SCALAR
# flow_content ::= flow_collection | SCALAR
# block_collection ::= block_sequence | block_mapping
# flow_collection ::= flow_sequence | flow_mapping
# block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)*
# BLOCK-END
# indentless_sequence ::= (BLOCK-ENTRY block_node?)+
# block_mapping ::= BLOCK-MAPPING_START
# ((KEY block_node_or_indentless_sequence?)?
# (VALUE block_node_or_indentless_sequence?)?)*
# BLOCK-END
# flow_sequence ::= FLOW-SEQUENCE-START
# (flow_sequence_entry FLOW-ENTRY)*
# flow_sequence_entry?
# FLOW-SEQUENCE-END
# flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
# flow_mapping ::= FLOW-MAPPING-START
# (flow_mapping_entry FLOW-ENTRY)*
# flow_mapping_entry?
# FLOW-MAPPING-END
# flow_mapping_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
#
# FIRST sets:
#
# stream: { STREAM-START }
# explicit_document: { DIRECTIVE DOCUMENT-START }
# implicit_document: FIRST(block_node)
# block_node: { ALIAS TAG ANCHOR SCALAR BLOCK-SEQUENCE-START
# BLOCK-MAPPING-START FLOW-SEQUENCE-START FLOW-MAPPING-START }
# flow_node: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START FLOW-MAPPING-START }
# block_content: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START
# FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
# flow_content: { FLOW-SEQUENCE-START FLOW-MAPPING-START SCALAR }
# block_collection: { BLOCK-SEQUENCE-START BLOCK-MAPPING-START }
# flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START }
# block_sequence: { BLOCK-SEQUENCE-START }
# block_mapping: { BLOCK-MAPPING-START }
# block_node_or_indentless_sequence: { ALIAS ANCHOR TAG SCALAR
# BLOCK-SEQUENCE-START BLOCK-MAPPING-START FLOW-SEQUENCE-START
# FLOW-MAPPING-START BLOCK-ENTRY }
# indentless_sequence: { ENTRY }
# flow_collection: { FLOW-SEQUENCE-START FLOW-MAPPING-START }
# flow_sequence: { FLOW-SEQUENCE-START }
# flow_mapping: { FLOW-MAPPING-START }
# flow_sequence_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START
# FLOW-MAPPING-START KEY }
# flow_mapping_entry: { ALIAS ANCHOR TAG SCALAR FLOW-SEQUENCE-START
# FLOW-MAPPING-START KEY }
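# Worked illustration (editorial addition, not original source): for the
# document
#
#   a: 1
#   b: [2, 3]
#
# the productions above reduce to the event sequence
#
#   StreamStart, DocumentStart(implicit),
#   MappingStart(block), Scalar 'a', Scalar '1',
#   Scalar 'b', SequenceStart(flow), Scalar '2', Scalar '3', SequenceEnd,
#   MappingEnd, DocumentEnd(implicit), StreamEnd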
__all__ = ['Parser', 'RoundTripParser', 'ParserError']
# need the full import path: pkg_resources tries to load parser.py while
# importing __init__.py, only to do nothing with the package afterwards;
# the full path is also needed for Jython
from ruamel.yaml.error import MarkedYAMLError # NOQA
from ruamel.yaml.tokens import * # NOQA
from ruamel.yaml.events import * # NOQA
from ruamel.yaml.scanner import * # NOQA
from ruamel.yaml.compat import utf8 # NOQA
class ParserError(MarkedYAMLError):
pass
class Parser(object):
# Since writing a recursive descent parser is a straightforward task, we
# do not give many comments here.
DEFAULT_TAGS = {
u'!': u'!',
u'!!': u'tag:yaml.org,2002:',
}
def __init__(self):
self.current_event = None
self.yaml_version = None
self.tag_handles = {}
self.states = []
self.marks = []
self.state = self.parse_stream_start
def dispose(self):
# Reset the state attributes (to clear self-references)
self.states = []
self.state = None
def check_event(self, *choices):
# Check the type of the next event.
if self.current_event is None:
if self.state:
self.current_event = self.state()
if self.current_event is not None:
if not choices:
return True
for choice in choices:
if isinstance(self.current_event, choice):
return True
return False
def peek_event(self):
# Get the next event.
if self.current_event is None:
if self.state:
self.current_event = self.state()
return self.current_event
def get_event(self):
# Get the next event and proceed further.
if self.current_event is None:
if self.state:
self.current_event = self.state()
value = self.current_event
self.current_event = None
return value
# stream ::= STREAM-START implicit_document? explicit_document*
# STREAM-END
# implicit_document ::= block_node DOCUMENT-END*
# explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
def parse_stream_start(self):
# Parse the stream start.
token = self.get_token()
token.move_comment(self.peek_token())
event = StreamStartEvent(token.start_mark, token.end_mark,
encoding=token.encoding)
# Prepare the next state.
self.state = self.parse_implicit_document_start
return event
def parse_implicit_document_start(self):
# Parse an implicit document.
if not self.check_token(DirectiveToken, DocumentStartToken,
StreamEndToken):
self.tag_handles = self.DEFAULT_TAGS
token = self.peek_token()
start_mark = end_mark = token.start_mark
event = DocumentStartEvent(start_mark, end_mark,
explicit=False)
# Prepare the next state.
self.states.append(self.parse_document_end)
self.state = self.parse_block_node
return event
else:
return self.parse_document_start()
def parse_document_start(self):
# Parse any extra document end indicators.
while self.check_token(DocumentEndToken):
self.get_token()
# Parse an explicit document.
if not self.check_token(StreamEndToken):
token = self.peek_token()
start_mark = token.start_mark
version, tags = self.process_directives()
if not self.check_token(DocumentStartToken):
raise ParserError(None, None,
"expected '<document start>', but found %r"
% self.peek_token().id,
self.peek_token().start_mark)
token = self.get_token()
end_mark = token.end_mark
event = DocumentStartEvent(
start_mark, end_mark,
explicit=True, version=version, tags=tags)
self.states.append(self.parse_document_end)
self.state = self.parse_document_content
else:
# Parse the end of the stream.
token = self.get_token()
event = StreamEndEvent(token.start_mark, token.end_mark,
comment=token.comment)
assert not self.states
assert not self.marks
self.state = None
return event
def parse_document_end(self):
# Parse the document end.
token = self.peek_token()
start_mark = end_mark = token.start_mark
explicit = False
if self.check_token(DocumentEndToken):
token = self.get_token()
end_mark = token.end_mark
explicit = True
event = DocumentEndEvent(start_mark, end_mark, explicit=explicit)
# Prepare the next state.
self.state = self.parse_document_start
return event
def parse_document_content(self):
if self.check_token(
DirectiveToken,
DocumentStartToken, DocumentEndToken, StreamEndToken):
event = self.process_empty_scalar(self.peek_token().start_mark)
self.state = self.states.pop()
return event
else:
return self.parse_block_node()
def process_directives(self):
self.yaml_version = None
self.tag_handles = {}
while self.check_token(DirectiveToken):
token = self.get_token()
if token.name == u'YAML':
if self.yaml_version is not None:
raise ParserError(
None, None,
"found duplicate YAML directive", token.start_mark)
major, minor = token.value
if major != 1:
raise ParserError(
None, None,
"found incompatible YAML document (version 1.* is "
"required)",
token.start_mark)
self.yaml_version = token.value
elif token.name == u'TAG':
handle, prefix = token.value
if handle in self.tag_handles:
raise ParserError(None, None,
"duplicate tag handle %r" % utf8(handle),
token.start_mark)
self.tag_handles[handle] = prefix
if self.tag_handles:
value = self.yaml_version, self.tag_handles.copy()
else:
value = self.yaml_version, None
for key in self.DEFAULT_TAGS:
if key not in self.tag_handles:
self.tag_handles[key] = self.DEFAULT_TAGS[key]
return value
# block_node_or_indentless_sequence ::= ALIAS
# | properties (block_content | indentless_block_sequence)?
# | block_content
# | indentless_block_sequence
# block_node ::= ALIAS
# | properties block_content?
# | block_content
# flow_node ::= ALIAS
# | properties flow_content?
# | flow_content
# properties ::= TAG ANCHOR? | ANCHOR TAG?
# block_content ::= block_collection | flow_collection | SCALAR
# flow_content ::= flow_collection | SCALAR
# block_collection ::= block_sequence | block_mapping
# flow_collection ::= flow_sequence | flow_mapping
def parse_block_node(self):
return self.parse_node(block=True)
def parse_flow_node(self):
return self.parse_node()
def parse_block_node_or_indentless_sequence(self):
return self.parse_node(block=True, indentless_sequence=True)
def transform_tag(self, handle, suffix):
return self.tag_handles[handle] + suffix
def parse_node(self, block=False, indentless_sequence=False):
if self.check_token(AliasToken):
token = self.get_token()
event = AliasEvent(token.value, token.start_mark, token.end_mark)
self.state = self.states.pop()
else:
anchor = None
tag = None
start_mark = end_mark = tag_mark = None
if self.check_token(AnchorToken):
token = self.get_token()
start_mark = token.start_mark
end_mark = token.end_mark
anchor = token.value
if self.check_token(TagToken):
token = self.get_token()
tag_mark = token.start_mark
end_mark = token.end_mark
tag = token.value
elif self.check_token(TagToken):
token = self.get_token()
start_mark = tag_mark = token.start_mark
end_mark = token.end_mark
tag = token.value
if self.check_token(AnchorToken):
token = self.get_token()
end_mark = token.end_mark
anchor = token.value
if tag is not None:
handle, suffix = tag
if handle is not None:
if handle not in self.tag_handles:
raise ParserError(
"while parsing a node", start_mark,
"found undefined tag handle %r" % utf8(handle),
tag_mark)
tag = self.transform_tag(handle, suffix)
else:
tag = suffix
# if tag == u'!':
# raise ParserError("while parsing a node", start_mark,
# "found non-specific tag '!'", tag_mark,
# "Please check 'http://pyyaml.org/wiki/YAMLNonSpecificTag'
# and share your opinion.")
if start_mark is None:
start_mark = end_mark = self.peek_token().start_mark
event = None
implicit = (tag is None or tag == u'!')
if indentless_sequence and self.check_token(BlockEntryToken):
end_mark = self.peek_token().end_mark
event = SequenceStartEvent(anchor, tag, implicit,
start_mark, end_mark)
self.state = self.parse_indentless_sequence_entry
else:
if self.check_token(ScalarToken):
token = self.get_token()
end_mark = token.end_mark
if (token.plain and tag is None) or tag == u'!':
implicit = (True, False)
elif tag is None:
implicit = (False, True)
else:
implicit = (False, False)
event = ScalarEvent(
anchor, tag, implicit, token.value,
start_mark, end_mark, style=token.style,
comment=token.comment
)
self.state = self.states.pop()
elif self.check_token(FlowSequenceStartToken):
end_mark = self.peek_token().end_mark
event = SequenceStartEvent(
anchor, tag, implicit,
start_mark, end_mark, flow_style=True)
self.state = self.parse_flow_sequence_first_entry
elif self.check_token(FlowMappingStartToken):
end_mark = self.peek_token().end_mark
event = MappingStartEvent(
anchor, tag, implicit,
start_mark, end_mark, flow_style=True)
self.state = self.parse_flow_mapping_first_key
elif block and self.check_token(BlockSequenceStartToken):
end_mark = self.peek_token().start_mark
# should inserting the comment be dependent on the
# indentation?
pt = self.peek_token()
comment = pt.comment
# print('pt0', type(pt))
if comment is None or comment[1] is None:
comment = pt.split_comment()
# print('pt1', comment)
event = SequenceStartEvent(
anchor, tag, implicit, start_mark, end_mark,
flow_style=False,
comment=comment,
)
self.state = self.parse_block_sequence_first_entry
elif block and self.check_token(BlockMappingStartToken):
end_mark = self.peek_token().start_mark
comment = self.peek_token().comment
event = MappingStartEvent(
anchor, tag, implicit, start_mark, end_mark,
flow_style=False, comment=comment)
self.state = self.parse_block_mapping_first_key
elif anchor is not None or tag is not None:
# Empty scalars are allowed even if a tag or an anchor is
# specified.
event = ScalarEvent(anchor, tag, (implicit, False), u'',
start_mark, end_mark)
self.state = self.states.pop()
else:
if block:
node = 'block'
else:
node = 'flow'
token = self.peek_token()
raise ParserError(
"while parsing a %s node" % node, start_mark,
"expected the node content, but found %r" % token.id,
token.start_mark)
return event
# block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)*
# BLOCK-END
def parse_block_sequence_first_entry(self):
token = self.get_token()
# move any comment from start token
# token.move_comment(self.peek_token())
self.marks.append(token.start_mark)
return self.parse_block_sequence_entry()
def parse_block_sequence_entry(self):
if self.check_token(BlockEntryToken):
token = self.get_token()
token.move_comment(self.peek_token())
if not self.check_token(BlockEntryToken, BlockEndToken):
self.states.append(self.parse_block_sequence_entry)
return self.parse_block_node()
else:
self.state = self.parse_block_sequence_entry
return self.process_empty_scalar(token.end_mark)
if not self.check_token(BlockEndToken):
token = self.peek_token()
raise ParserError(
"while parsing a block collection", self.marks[-1],
"expected <block end>, but found %r" %
token.id, token.start_mark)
token = self.get_token() # BlockEndToken
event = SequenceEndEvent(token.start_mark, token.end_mark,
comment=token.comment)
self.state = self.states.pop()
self.marks.pop()
return event
# indentless_sequence ::= (BLOCK-ENTRY block_node?)+
# indentless_sequence?
# sequence:
# - entry
# - nested
def parse_indentless_sequence_entry(self):
if self.check_token(BlockEntryToken):
token = self.get_token()
token.move_comment(self.peek_token())
if not self.check_token(BlockEntryToken,
KeyToken, ValueToken, BlockEndToken):
self.states.append(self.parse_indentless_sequence_entry)
return self.parse_block_node()
else:
self.state = self.parse_indentless_sequence_entry
return self.process_empty_scalar(token.end_mark)
token = self.peek_token()
event = SequenceEndEvent(token.start_mark, token.start_mark,
comment=token.comment)
self.state = self.states.pop()
return event
# block_mapping ::= BLOCK-MAPPING_START
# ((KEY block_node_or_indentless_sequence?)?
# (VALUE block_node_or_indentless_sequence?)?)*
# BLOCK-END
def parse_block_mapping_first_key(self):
token = self.get_token()
self.marks.append(token.start_mark)
return self.parse_block_mapping_key()
def parse_block_mapping_key(self):
if self.check_token(KeyToken):
token = self.get_token()
token.move_comment(self.peek_token())
if not self.check_token(KeyToken, ValueToken, BlockEndToken):
self.states.append(self.parse_block_mapping_value)
return self.parse_block_node_or_indentless_sequence()
else:
self.state = self.parse_block_mapping_value
return self.process_empty_scalar(token.end_mark)
if not self.check_token(BlockEndToken):
token = self.peek_token()
raise ParserError(
"while parsing a block mapping", self.marks[-1],
"expected <block end>, but found %r" % token.id,
token.start_mark)
token = self.get_token()
token.move_comment(self.peek_token())
event = MappingEndEvent(token.start_mark, token.end_mark,
comment=token.comment)
self.state = self.states.pop()
self.marks.pop()
return event
def parse_block_mapping_value(self):
if self.check_token(ValueToken):
token = self.get_token()
# value token might have post comment move it to e.g. block
token.move_comment(self.peek_token())
if not self.check_token(KeyToken, ValueToken, BlockEndToken):
self.states.append(self.parse_block_mapping_key)
return self.parse_block_node_or_indentless_sequence()
else:
self.state = self.parse_block_mapping_key
return self.process_empty_scalar(token.end_mark)
else:
self.state = self.parse_block_mapping_key
token = self.peek_token()
return self.process_empty_scalar(token.start_mark)
# flow_sequence ::= FLOW-SEQUENCE-START
# (flow_sequence_entry FLOW-ENTRY)*
# flow_sequence_entry?
# FLOW-SEQUENCE-END
# flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
#
# Note that while production rules for both flow_sequence_entry and
# flow_mapping_entry are equal, their interpretations are different.
# For `flow_sequence_entry`, the part `KEY flow_node? (VALUE flow_node?)?`
# generates an inline mapping (set syntax).
def parse_flow_sequence_first_entry(self):
token = self.get_token()
self.marks.append(token.start_mark)
return self.parse_flow_sequence_entry(first=True)
def parse_flow_sequence_entry(self, first=False):
if not self.check_token(FlowSequenceEndToken):
if not first:
if self.check_token(FlowEntryToken):
self.get_token()
else:
token = self.peek_token()
raise ParserError(
"while parsing a flow sequence", self.marks[-1],
"expected ',' or ']', but got %r" % token.id,
token.start_mark)
if self.check_token(KeyToken):
token = self.peek_token()
event = MappingStartEvent(None, None, True,
token.start_mark, token.end_mark,
flow_style=True)
self.state = self.parse_flow_sequence_entry_mapping_key
return event
elif not self.check_token(FlowSequenceEndToken):
self.states.append(self.parse_flow_sequence_entry)
return self.parse_flow_node()
token = self.get_token()
event = SequenceEndEvent(token.start_mark, token.end_mark,
comment=token.comment)
self.state = self.states.pop()
self.marks.pop()
return event
def parse_flow_sequence_entry_mapping_key(self):
token = self.get_token()
if not self.check_token(ValueToken,
FlowEntryToken, FlowSequenceEndToken):
self.states.append(self.parse_flow_sequence_entry_mapping_value)
return self.parse_flow_node()
else:
self.state = self.parse_flow_sequence_entry_mapping_value
return self.process_empty_scalar(token.end_mark)
def parse_flow_sequence_entry_mapping_value(self):
if self.check_token(ValueToken):
token = self.get_token()
if not self.check_token(FlowEntryToken, FlowSequenceEndToken):
self.states.append(self.parse_flow_sequence_entry_mapping_end)
return self.parse_flow_node()
else:
self.state = self.parse_flow_sequence_entry_mapping_end
return self.process_empty_scalar(token.end_mark)
else:
self.state = self.parse_flow_sequence_entry_mapping_end
token = self.peek_token()
return self.process_empty_scalar(token.start_mark)
def parse_flow_sequence_entry_mapping_end(self):
self.state = self.parse_flow_sequence_entry
token = self.peek_token()
return MappingEndEvent(token.start_mark, token.start_mark)
# flow_mapping ::= FLOW-MAPPING-START
# (flow_mapping_entry FLOW-ENTRY)*
# flow_mapping_entry?
# FLOW-MAPPING-END
# flow_mapping_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)?
def parse_flow_mapping_first_key(self):
token = self.get_token()
self.marks.append(token.start_mark)
return self.parse_flow_mapping_key(first=True)
def parse_flow_mapping_key(self, first=False):
if not self.check_token(FlowMappingEndToken):
if not first:
if self.check_token(FlowEntryToken):
self.get_token()
else:
token = self.peek_token()
raise ParserError(
"while parsing a flow mapping", self.marks[-1],
"expected ',' or '}', but got %r" % token.id,
token.start_mark)
if self.check_token(KeyToken):
token = self.get_token()
if not self.check_token(ValueToken,
FlowEntryToken, FlowMappingEndToken):
self.states.append(self.parse_flow_mapping_value)
return self.parse_flow_node()
else:
self.state = self.parse_flow_mapping_value
return self.process_empty_scalar(token.end_mark)
elif not self.check_token(FlowMappingEndToken):
self.states.append(self.parse_flow_mapping_empty_value)
return self.parse_flow_node()
token = self.get_token()
event = MappingEndEvent(token.start_mark, token.end_mark,
comment=token.comment)
self.state = self.states.pop()
self.marks.pop()
return event
def parse_flow_mapping_value(self):
if self.check_token(ValueToken):
token = self.get_token()
if not self.check_token(FlowEntryToken, FlowMappingEndToken):
self.states.append(self.parse_flow_mapping_key)
return self.parse_flow_node()
else:
self.state = self.parse_flow_mapping_key
return self.process_empty_scalar(token.end_mark)
else:
self.state = self.parse_flow_mapping_key
token = self.peek_token()
return self.process_empty_scalar(token.start_mark)
def parse_flow_mapping_empty_value(self):
self.state = self.parse_flow_mapping_key
return self.process_empty_scalar(self.peek_token().start_mark)
def process_empty_scalar(self, mark):
return ScalarEvent(None, None, (True, False), u'', mark, mark)
class RoundTripParser(Parser):
"""roundtrip is a safe loader, that wants to see the unmangled tag"""
def transform_tag(self, handle, suffix):
# return self.tag_handles[handle]+suffix
if handle == '!!' and suffix in (u'null', u'bool', u'int', u'float', u'binary',
u'timestamp', u'omap', u'pairs', u'set', u'str',
u'seq', u'map'):
return Parser.transform_tag(self, handle, suffix)
return handle+suffix


@@ -1,213 +0,0 @@
# coding: utf-8
from __future__ import absolute_import
# This module contains abstractions for the input stream. You don't have to
# look further; there is no pretty code here.
#
# We define two classes here.
#
# Mark(source, line, column)
# It's just a record and its only use is producing nice error messages.
# Parser does not use it for any other purposes.
#
# Reader(source, data)
# Reader determines the encoding of `data` and converts it to unicode.
# Reader provides the following methods and attributes:
# reader.peek(length=1) - return the next `length` characters
# reader.forward(length=1) - move the current position forward by `length`
# characters.
# reader.index - the number of the current character.
# reader.line, reader.column - the line and the column of the current
# character.
import codecs
import re
try:
from .error import YAMLError, Mark
from .compat import text_type, binary_type, PY3
except (ImportError, ValueError): # for Jython
from ruamel.yaml.error import YAMLError, Mark
from ruamel.yaml.compat import text_type, binary_type, PY3
__all__ = ['Reader', 'ReaderError']
class ReaderError(YAMLError):
def __init__(self, name, position, character, encoding, reason):
self.name = name
self.character = character
self.position = position
self.encoding = encoding
self.reason = reason
def __str__(self):
if isinstance(self.character, binary_type):
return "'%s' codec can't decode byte #x%02x: %s\n" \
" in \"%s\", position %d" \
% (self.encoding, ord(self.character), self.reason,
self.name, self.position)
else:
return "unacceptable character #x%04x: %s\n" \
" in \"%s\", position %d" \
% (self.character, self.reason,
self.name, self.position)
class Reader(object):
# Reader:
# - determines the data encoding and converts it to a unicode string,
# - checks if characters are in allowed range,
# - adds '\0' to the end.
# Reader accepts
# - a `str` object (PY2) / a `bytes` object (PY3),
# - a `unicode` object (PY2) / a `str` object (PY3),
# - a file-like object with its `read` method returning `str`,
# - a file-like object with its `read` method returning `unicode`.
# Yeah, it's ugly and slow.
def __init__(self, stream):
self.name = None
self.stream = None
self.stream_pointer = 0
self.eof = True
self.buffer = u''
self.pointer = 0
self.raw_buffer = None
self.raw_decode = None
self.encoding = None
self.index = 0
self.line = 0
self.column = 0
if isinstance(stream, text_type):
self.name = "<unicode string>"
self.check_printable(stream)
self.buffer = stream+u'\0'
elif isinstance(stream, binary_type):
self.name = "<byte string>"
self.raw_buffer = stream
self.determine_encoding()
else:
self.stream = stream
self.name = getattr(stream, 'name', "<file>")
self.eof = False
self.raw_buffer = None
self.determine_encoding()
def peek(self, index=0):
try:
return self.buffer[self.pointer+index]
except IndexError:
self.update(index+1)
return self.buffer[self.pointer+index]
def prefix(self, length=1):
if self.pointer+length >= len(self.buffer):
self.update(length)
return self.buffer[self.pointer:self.pointer+length]
def forward(self, length=1):
if self.pointer+length+1 >= len(self.buffer):
self.update(length+1)
while length:
ch = self.buffer[self.pointer]
self.pointer += 1
self.index += 1
if ch in u'\n\x85\u2028\u2029' \
or (ch == u'\r' and self.buffer[self.pointer] != u'\n'):
self.line += 1
self.column = 0
elif ch != u'\uFEFF':
self.column += 1
length -= 1
def get_mark(self):
if self.stream is None:
return Mark(self.name, self.index, self.line, self.column,
self.buffer, self.pointer)
else:
return Mark(self.name, self.index, self.line, self.column,
None, None)
def determine_encoding(self):
while not self.eof and (self.raw_buffer is None or
len(self.raw_buffer) < 2):
self.update_raw()
if isinstance(self.raw_buffer, binary_type):
if self.raw_buffer.startswith(codecs.BOM_UTF16_LE):
self.raw_decode = codecs.utf_16_le_decode
self.encoding = 'utf-16-le'
elif self.raw_buffer.startswith(codecs.BOM_UTF16_BE):
self.raw_decode = codecs.utf_16_be_decode
self.encoding = 'utf-16-be'
else:
self.raw_decode = codecs.utf_8_decode
self.encoding = 'utf-8'
self.update(1)
NON_PRINTABLE = re.compile(
u'[^\x09\x0A\x0D\x20-\x7E\x85\xA0-\uD7FF\uE000-\uFFFD]')
def check_printable(self, data):
match = self.NON_PRINTABLE.search(data)
if match:
character = match.group()
position = self.index+(len(self.buffer)-self.pointer)+match.start()
raise ReaderError(self.name, position, ord(character),
'unicode', "special characters are not allowed")
def update(self, length):
if self.raw_buffer is None:
return
self.buffer = self.buffer[self.pointer:]
self.pointer = 0
while len(self.buffer) < length:
if not self.eof:
self.update_raw()
if self.raw_decode is not None:
try:
data, converted = self.raw_decode(self.raw_buffer,
'strict', self.eof)
except UnicodeDecodeError as exc:
if PY3:
character = self.raw_buffer[exc.start]
else:
character = exc.object[exc.start]
if self.stream is not None:
position = self.stream_pointer - \
len(self.raw_buffer) + exc.start
else:
position = exc.start
raise ReaderError(self.name, position, character,
exc.encoding, exc.reason)
else:
data = self.raw_buffer
converted = len(data)
self.check_printable(data)
self.buffer += data
self.raw_buffer = self.raw_buffer[converted:]
if self.eof:
self.buffer += u'\0'
self.raw_buffer = None
break
def update_raw(self, size=None):
if size is None:
size = 4096 if PY3 else 1024
data = self.stream.read(size)
if self.raw_buffer is None:
self.raw_buffer = data
else:
self.raw_buffer += data
self.stream_pointer += len(data)
if not data:
self.eof = True
# try:
# import psyco
# psyco.bind(Reader)
# except ImportError:
# pass
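
The Reader is easiest to follow through its cursor methods: `peek` looks ahead without consuming, `forward` consumes while maintaining `index`, `line` and `column`, and `get_mark` snapshots the position for error messages. A short hedged sketch against the vendored module (the values in comments are what the code above should produce):

    from __future__ import print_function
    from ruamel.yaml.reader import Reader

    r = Reader(u'abc\ndef')
    print(r.peek(), r.prefix(3))       # a abc
    r.forward(4)                       # consume 'abc' plus the newline
    print(r.line, r.column, r.peek())  # 1 0 d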


@@ -1,888 +0,0 @@
# coding: utf-8
from __future__ import absolute_import
from __future__ import print_function
try:
from .error import * # NOQA
from .nodes import * # NOQA
from .compat import text_type, binary_type, to_unicode, PY2, PY3, ordereddict
from .scalarstring import * # NOQA
except (ImportError, ValueError): # for Jython
from ruamel.yaml.error import * # NOQA
from ruamel.yaml.nodes import * # NOQA
from ruamel.yaml.compat import text_type, binary_type, to_unicode, PY2, PY3, ordereddict
from ruamel.yaml.scalarstring import * # NOQA
import datetime
import sys
import types
if PY3:
import copyreg
import base64
else:
import copy_reg as copyreg
__all__ = ['BaseRepresenter', 'SafeRepresenter', 'Representer',
'RepresenterError', 'RoundTripRepresenter']
class RepresenterError(YAMLError):
pass
class BaseRepresenter(object):
yaml_representers = {}
yaml_multi_representers = {}
def __init__(self, default_style=None, default_flow_style=None):
self.default_style = default_style
self.default_flow_style = default_flow_style
self.represented_objects = {}
self.object_keeper = []
self.alias_key = None
def represent(self, data):
node = self.represent_data(data)
self.serialize(node)
self.represented_objects = {}
self.object_keeper = []
self.alias_key = None
if PY2:
def get_classobj_bases(self, cls):
bases = [cls]
for base in cls.__bases__:
bases.extend(self.get_classobj_bases(base))
return bases
def represent_data(self, data):
if self.ignore_aliases(data):
self.alias_key = None
else:
self.alias_key = id(data)
if self.alias_key is not None:
if self.alias_key in self.represented_objects:
node = self.represented_objects[self.alias_key]
# if node is None:
# raise RepresenterError(
# "recursive objects are not allowed: %r" % data)
return node
# self.represented_objects[alias_key] = None
self.object_keeper.append(data)
data_types = type(data).__mro__
if PY2:
# if type(data) is types.InstanceType:
if isinstance(data, types.InstanceType):
data_types = self.get_classobj_bases(data.__class__) + \
list(data_types)
if data_types[0] in self.yaml_representers:
node = self.yaml_representers[data_types[0]](self, data)
else:
for data_type in data_types:
if data_type in self.yaml_multi_representers:
node = self.yaml_multi_representers[data_type](self, data)
break
else:
if None in self.yaml_multi_representers:
node = self.yaml_multi_representers[None](self, data)
elif None in self.yaml_representers:
node = self.yaml_representers[None](self, data)
else:
node = ScalarNode(None, text_type(data))
# if alias_key is not None:
# self.represented_objects[alias_key] = node
return node
def represent_key(self, data):
"""
David Fraser: Extract a method to represent keys in mappings, so that
a subclass can choose not to quote them (for example)
used in repesent_mapping
https://bitbucket.org/davidfraser/pyyaml/commits/d81df6eb95f20cac4a79eed95ae553b5c6f77b8c
"""
return self.represent_data(data)
@classmethod
def add_representer(cls, data_type, representer):
if 'yaml_representers' not in cls.__dict__:
cls.yaml_representers = cls.yaml_representers.copy()
cls.yaml_representers[data_type] = representer
@classmethod
def add_multi_representer(cls, data_type, representer):
if 'yaml_multi_representers' not in cls.__dict__:
cls.yaml_multi_representers = cls.yaml_multi_representers.copy()
cls.yaml_multi_representers[data_type] = representer
def represent_scalar(self, tag, value, style=None):
if style is None:
style = self.default_style
node = ScalarNode(tag, value, style=style)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
return node
def represent_sequence(self, tag, sequence, flow_style=None):
value = []
node = SequenceNode(tag, value, flow_style=flow_style)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
best_style = True
for item in sequence:
node_item = self.represent_data(item)
if not (isinstance(node_item, ScalarNode) and not node_item.style):
best_style = False
value.append(node_item)
if flow_style is None:
if self.default_flow_style is not None:
node.flow_style = self.default_flow_style
else:
node.flow_style = best_style
return node
def represent_omap(self, tag, omap, flow_style=None):
value = []
node = SequenceNode(tag, value, flow_style=flow_style)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
best_style = True
for item_key in omap:
item_val = omap[item_key]
node_item = self.represent_data({item_key: item_val})
# if not (isinstance(node_item, ScalarNode) \
# and not node_item.style):
# best_style = False
value.append(node_item)
if flow_style is None:
if self.default_flow_style is not None:
node.flow_style = self.default_flow_style
else:
node.flow_style = best_style
return node
def represent_mapping(self, tag, mapping, flow_style=None):
value = []
node = MappingNode(tag, value, flow_style=flow_style)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
best_style = True
if hasattr(mapping, 'items'):
mapping = list(mapping.items())
try:
mapping = sorted(mapping)
except TypeError:
pass
for item_key, item_value in mapping:
node_key = self.represent_key(item_key)
node_value = self.represent_data(item_value)
if not (isinstance(node_key, ScalarNode) and not node_key.style):
best_style = False
if not (isinstance(node_value, ScalarNode) and not
node_value.style):
best_style = False
value.append((node_key, node_value))
if flow_style is None:
if self.default_flow_style is not None:
node.flow_style = self.default_flow_style
else:
node.flow_style = best_style
return node
def ignore_aliases(self, data):
return False
class SafeRepresenter(BaseRepresenter):
def ignore_aliases(self, data):
# https://docs.python.org/3/reference/expressions.html#parenthesized-forms :
# "i.e. two occurrences of the empty tuple may or may not yield the same object"
# so "data is ()" should not be used
if data is None or data == ():
return True
if isinstance(data, (binary_type, text_type, bool, int, float)):
return True
def represent_none(self, data):
return self.represent_scalar(u'tag:yaml.org,2002:null',
u'null')
if PY3:
def represent_str(self, data):
return self.represent_scalar(u'tag:yaml.org,2002:str', data)
def represent_binary(self, data):
if hasattr(base64, 'encodebytes'):
data = base64.encodebytes(data).decode('ascii')
else:
data = base64.encodestring(data).decode('ascii')
return self.represent_scalar(u'tag:yaml.org,2002:binary', data,
style='|')
else:
def represent_str(self, data):
tag = None
style = None
try:
data = unicode(data, 'ascii')
tag = u'tag:yaml.org,2002:str'
except UnicodeDecodeError:
try:
data = unicode(data, 'utf-8')
tag = u'tag:yaml.org,2002:str'
except UnicodeDecodeError:
data = data.encode('base64')
tag = u'tag:yaml.org,2002:binary'
style = '|'
return self.represent_scalar(tag, data, style=style)
def represent_unicode(self, data):
return self.represent_scalar(u'tag:yaml.org,2002:str', data)
def represent_bool(self, data):
if data:
value = u'true'
else:
value = u'false'
return self.represent_scalar(u'tag:yaml.org,2002:bool', value)
def represent_int(self, data):
return self.represent_scalar(u'tag:yaml.org,2002:int', text_type(data))
if PY2:
def represent_long(self, data):
return self.represent_scalar(u'tag:yaml.org,2002:int',
text_type(data))
inf_value = 1e300
while repr(inf_value) != repr(inf_value*inf_value):
inf_value *= inf_value
def represent_float(self, data):
if data != data or (data == 0.0 and data == 1.0):
value = u'.nan'
elif data == self.inf_value:
value = u'.inf'
elif data == -self.inf_value:
value = u'-.inf'
else:
value = to_unicode(repr(data)).lower()
# Note that in some cases `repr(data)` represents a float number
# without the decimal parts. For instance:
# >>> repr(1e17)
# '1e17'
# Unfortunately, this is not a valid float representation according
# to the definition of the `!!float` tag. We fix this by adding
# '.0' before the 'e' symbol.
if u'.' not in value and u'e' in value:
value = value.replace(u'e', u'.0e', 1)
return self.represent_scalar(u'tag:yaml.org,2002:float', value)
def represent_list(self, data):
# pairs = (len(data) > 0 and isinstance(data, list))
# if pairs:
# for item in data:
# if not isinstance(item, tuple) or len(item) != 2:
# pairs = False
# break
# if not pairs:
return self.represent_sequence(u'tag:yaml.org,2002:seq', data)
# value = []
# for item_key, item_value in data:
# value.append(self.represent_mapping(u'tag:yaml.org,2002:map',
# [(item_key, item_value)]))
# return SequenceNode(u'tag:yaml.org,2002:pairs', value)
def represent_dict(self, data):
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
def represent_ordereddict(self, data):
return self.represent_omap(u'tag:yaml.org,2002:omap', data)
def represent_set(self, data):
value = {}
for key in data:
value[key] = None
return self.represent_mapping(u'tag:yaml.org,2002:set', value)
def represent_date(self, data):
value = to_unicode(data.isoformat())
return self.represent_scalar(u'tag:yaml.org,2002:timestamp', value)
def represent_datetime(self, data):
value = to_unicode(data.isoformat(' '))
return self.represent_scalar(u'tag:yaml.org,2002:timestamp', value)
def represent_yaml_object(self, tag, data, cls, flow_style=None):
if hasattr(data, '__getstate__'):
state = data.__getstate__()
else:
state = data.__dict__.copy()
return self.represent_mapping(tag, state, flow_style=flow_style)
def represent_undefined(self, data):
raise RepresenterError("cannot represent an object: %s" % data)
SafeRepresenter.add_representer(type(None),
SafeRepresenter.represent_none)
SafeRepresenter.add_representer(str,
SafeRepresenter.represent_str)
if PY2:
SafeRepresenter.add_representer(unicode,
SafeRepresenter.represent_unicode)
else:
SafeRepresenter.add_representer(bytes,
SafeRepresenter.represent_binary)
SafeRepresenter.add_representer(bool,
SafeRepresenter.represent_bool)
SafeRepresenter.add_representer(int,
SafeRepresenter.represent_int)
if PY2:
SafeRepresenter.add_representer(long,
SafeRepresenter.represent_long)
SafeRepresenter.add_representer(float,
SafeRepresenter.represent_float)
SafeRepresenter.add_representer(list,
SafeRepresenter.represent_list)
SafeRepresenter.add_representer(tuple,
SafeRepresenter.represent_list)
SafeRepresenter.add_representer(dict,
SafeRepresenter.represent_dict)
SafeRepresenter.add_representer(set,
SafeRepresenter.represent_set)
SafeRepresenter.add_representer(ordereddict,
SafeRepresenter.represent_ordereddict)
SafeRepresenter.add_representer(datetime.date,
SafeRepresenter.represent_date)
SafeRepresenter.add_representer(datetime.datetime,
SafeRepresenter.represent_datetime)
SafeRepresenter.add_representer(None,
SafeRepresenter.represent_undefined)
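# Hedged usage note (editorial addition, not part of the original file):
# add_representer is also the public hook for user-defined types; a Decimal
# could, for example, be emitted as a plain string scalar (decimal and
# represent_decimal are illustrative, not from this codebase):
#
#     import decimal
#
#     def represent_decimal(dumper, data):
#         return dumper.represent_scalar(u'tag:yaml.org,2002:str',
#                                        text_type(data))
#
#     SafeRepresenter.add_representer(decimal.Decimal, represent_decimal)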
class Representer(SafeRepresenter):
if PY2:
def represent_str(self, data):
tag = None
style = None
try:
data = unicode(data, 'ascii')
tag = u'tag:yaml.org,2002:str'
except UnicodeDecodeError:
try:
data = unicode(data, 'utf-8')
tag = u'tag:yaml.org,2002:python/str'
except UnicodeDecodeError:
data = data.encode('base64')
tag = u'tag:yaml.org,2002:binary'
style = '|'
return self.represent_scalar(tag, data, style=style)
def represent_unicode(self, data):
tag = None
try:
data.encode('ascii')
tag = u'tag:yaml.org,2002:python/unicode'
except UnicodeEncodeError:
tag = u'tag:yaml.org,2002:str'
return self.represent_scalar(tag, data)
def represent_long(self, data):
tag = u'tag:yaml.org,2002:int'
if int(data) is not data:
tag = u'tag:yaml.org,2002:python/long'
return self.represent_scalar(tag, to_unicode(data))
def represent_complex(self, data):
if data.imag == 0.0:
data = u'%r' % data.real
elif data.real == 0.0:
data = u'%rj' % data.imag
elif data.imag > 0:
data = u'%r+%rj' % (data.real, data.imag)
else:
data = u'%r%rj' % (data.real, data.imag)
return self.represent_scalar(u'tag:yaml.org,2002:python/complex', data)
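# Worked example (editor's note): with the formatting above, complex(3, 4)
# is rendered as u'3.0+4.0j', complex(0, 1) as u'1.0j', and complex(3, -4)
# as u'3.0-4.0j' (the %r of the negative imaginary part carries the sign).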
def represent_tuple(self, data):
return self.represent_sequence(u'tag:yaml.org,2002:python/tuple', data)
def represent_name(self, data):
name = u'%s.%s' % (data.__module__, data.__name__)
return self.represent_scalar(u'tag:yaml.org,2002:python/name:' +
name, u'')
def represent_module(self, data):
return self.represent_scalar(
u'tag:yaml.org,2002:python/module:'+data.__name__, u'')
if PY2:
def represent_instance(self, data):
# For instances of classic classes, we use __getinitargs__ and
# __getstate__ to serialize the data.
# If data.__getinitargs__ exists, the object must be reconstructed
# by calling cls(**args), where args is a tuple returned by
# __getinitargs__. Otherwise, the cls.__init__ method should never
# be called and the class instance is created by instantiating a
# trivial class and assigning to the instance's __class__ variable.
# If data.__getstate__ exists, it returns the state of the object.
# Otherwise, the state of the object is data.__dict__.
# We produce either a !!python/object or !!python/object/new node.
# If data.__getinitargs__ does not exist and state is a dictionary,
# we produce a !!python/object node. Otherwise we produce a
# !!python/object/new node.
cls = data.__class__
class_name = u'%s.%s' % (cls.__module__, cls.__name__)
args = None
state = None
if hasattr(data, '__getinitargs__'):
args = list(data.__getinitargs__())
if hasattr(data, '__getstate__'):
state = data.__getstate__()
else:
state = data.__dict__
if args is None and isinstance(state, dict):
return self.represent_mapping(
u'tag:yaml.org,2002:python/object:'+class_name, state)
if isinstance(state, dict) and not state:
return self.represent_sequence(
u'tag:yaml.org,2002:python/object/new:' +
class_name, args)
value = {}
if args:
value['args'] = args
value['state'] = state
return self.represent_mapping(
u'tag:yaml.org,2002:python/object/new:'+class_name, value)
def represent_object(self, data):
# We use __reduce__ API to save the data. data.__reduce__ returns
# a tuple of length 2-5:
# (function, args, state, listitems, dictitems)
# For reconstructing, we call function(*args), then set its state,
# listitems, and dictitems if they are not None.
# A special case is when function.__name__ == '__newobj__'. In this
# case we create the object with args[0].__new__(*args).
# Another special case is when __reduce__ returns a string - we don't
# support it.
# We produce a !!python/object, !!python/object/new or
# !!python/object/apply node.
cls = type(data)
if cls in copyreg.dispatch_table:
reduce = copyreg.dispatch_table[cls](data)
elif hasattr(data, '__reduce_ex__'):
reduce = data.__reduce_ex__(2)
elif hasattr(data, '__reduce__'):
reduce = data.__reduce__()
else:
raise RepresenterError("cannot represent object: %r" % data)
reduce = (list(reduce)+[None]*5)[:5]
function, args, state, listitems, dictitems = reduce
args = list(args)
if state is None:
state = {}
if listitems is not None:
listitems = list(listitems)
if dictitems is not None:
dictitems = dict(dictitems)
if function.__name__ == '__newobj__':
function = args[0]
args = args[1:]
tag = u'tag:yaml.org,2002:python/object/new:'
newobj = True
else:
tag = u'tag:yaml.org,2002:python/object/apply:'
newobj = False
function_name = u'%s.%s' % (function.__module__, function.__name__)
if not args and not listitems and not dictitems \
and isinstance(state, dict) and newobj:
return self.represent_mapping(
u'tag:yaml.org,2002:python/object:'+function_name, state)
if not listitems and not dictitems \
and isinstance(state, dict) and not state:
return self.represent_sequence(tag+function_name, args)
value = {}
if args:
value['args'] = args
if state or not isinstance(state, dict):
value['state'] = state
if listitems:
value['listitems'] = listitems
if dictitems:
value['dictitems'] = dictitems
return self.represent_mapping(tag+function_name, value)
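# Editor's sketch: for a plain new-style instance, __reduce_ex__(2) as used
# above returns a tuple of roughly this shape (the exact callables vary by
# Python version), which the '__newobj__' branch turns into a
# !!python/object node with the instance __dict__ as state:
#
#     class C(object):
#         def __init__(self):
#             self.x = 1
#
#     C().__reduce_ex__(2)
#     # -> (<__newobj__>, (C,), {'x': 1}, None, None)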
if PY2:
Representer.add_representer(str,
Representer.represent_str)
Representer.add_representer(unicode,
Representer.represent_unicode)
Representer.add_representer(long,
Representer.represent_long)
Representer.add_representer(complex,
Representer.represent_complex)
Representer.add_representer(tuple,
Representer.represent_tuple)
Representer.add_representer(type,
Representer.represent_name)
if PY2:
Representer.add_representer(types.ClassType,
Representer.represent_name)
Representer.add_representer(types.FunctionType,
Representer.represent_name)
Representer.add_representer(types.BuiltinFunctionType,
Representer.represent_name)
Representer.add_representer(types.ModuleType,
Representer.represent_module)
if PY2:
Representer.add_multi_representer(types.InstanceType,
Representer.represent_instance)
Representer.add_multi_representer(object,
Representer.represent_object)
try:
from .comments import CommentedMap, CommentedOrderedMap, CommentedSeq, \
CommentedSet, comment_attrib, merge_attrib
except ImportError: # for Jython
from ruamel.yaml.comments import CommentedMap, CommentedOrderedMap, \
CommentedSeq, CommentedSet, comment_attrib, merge_attrib
class RoundTripRepresenter(SafeRepresenter):
# need to add type here and write out the .comment
# in serializer and emitter
def __init__(self, default_style=None, default_flow_style=None):
if default_flow_style is None:
default_flow_style = False
SafeRepresenter.__init__(self, default_style=default_style,
default_flow_style=default_flow_style)
def represent_none(self, data):
return self.represent_scalar(u'tag:yaml.org,2002:null',
u'')
def represent_preserved_scalarstring(self, data):
tag = None
style = '|'
if PY2 and not isinstance(data, unicode):
data = unicode(data, 'ascii')
tag = u'tag:yaml.org,2002:str'
return self.represent_scalar(tag, data, style=style)
def represent_single_quoted_scalarstring(self, data):
tag = None
style = "'"
if PY2 and not isinstance(data, unicode):
data = unicode(data, 'ascii')
tag = u'tag:yaml.org,2002:str'
return self.represent_scalar(tag, data, style=style)
def represent_double_quoted_scalarstring(self, data):
tag = None
style = '"'
if PY2 and not isinstance(data, unicode):
data = unicode(data, 'ascii')
tag = u'tag:yaml.org,2002:str'
return self.represent_scalar(tag, data, style=style)
def represent_sequence(self, tag, sequence, flow_style=None):
value = []
# if flow_style is None, the flow style explicitly attached to the
# object is taken; if that is None as well, the default flow style
# applies
try:
flow_style = sequence.fa.flow_style(flow_style)
except AttributeError:
flow_style = flow_style
try:
anchor = sequence.yaml_anchor()
except AttributeError:
anchor = None
node = SequenceNode(tag, value, flow_style=flow_style, anchor=anchor)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
best_style = True
try:
comment = getattr(sequence, comment_attrib)
item_comments = comment.items
node.comment = comment.comment
try:
node.comment.append(comment.end)
except AttributeError:
pass
except AttributeError:
item_comments = {}
for idx, item in enumerate(sequence):
node_item = self.represent_data(item)
node_item.comment = item_comments.get(idx)
if not (isinstance(node_item, ScalarNode) and not node_item.style):
best_style = False
value.append(node_item)
if flow_style is None:
if self.default_flow_style is not None:
node.flow_style = self.default_flow_style
else:
node.flow_style = best_style
return node
def represent_mapping(self, tag, mapping, flow_style=None):
value = []
try:
flow_style = mapping.fa.flow_style(flow_style)
except AttributeError:
flow_style = flow_style
try:
anchor = mapping.yaml_anchor()
except AttributeError:
anchor = None
node = MappingNode(tag, value, flow_style=flow_style, anchor=anchor)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
best_style = True
# no sorting! !!
try:
comment = getattr(mapping, comment_attrib)
node.comment = comment.comment
if node.comment and node.comment[1]:
for ct in node.comment[1]:
ct.reset()
item_comments = comment.items
for v in item_comments.values():
if v and v[1]:
for ct in v[1]:
ct.reset()
try:
node.comment.append(comment.end)
except AttributeError:
pass
except AttributeError:
item_comments = {}
for item_key, item_value in mapping.items():
node_key = self.represent_key(item_key)
node_value = self.represent_data(item_value)
item_comment = item_comments.get(item_key)
if item_comment:
assert getattr(node_key, 'comment', None) is None
node_key.comment = item_comment[:2]
nvc = getattr(node_value, 'comment', None)
if nvc is not None: # end comment already there
nvc[0] = item_comment[2]
nvc[1] = item_comment[3]
else:
node_value.comment = item_comment[2:]
if not (isinstance(node_key, ScalarNode) and not node_key.style):
best_style = False
if not (isinstance(node_value, ScalarNode) and not
node_value.style):
best_style = False
value.append((node_key, node_value))
if flow_style is None:
if self.default_flow_style is not None:
node.flow_style = self.default_flow_style
else:
node.flow_style = best_style
merge_list = [m[1] for m in getattr(mapping, merge_attrib, [])]
if merge_list:
# because of the call to represent_data here, the anchors
# are marked as being used and thereby created
if len(merge_list) == 1:
arg = self.represent_data(merge_list[0])
else:
arg = self.represent_data(merge_list)
arg.flow_style = True
value.insert(0,
(ScalarNode(u'tag:yaml.org,2002:merge', '<<'), arg))
return node
def represent_omap(self, tag, omap, flow_style=None):
value = []
try:
flow_style = omap.fa.flow_style(flow_style)
except AttributeError:
flow_style = flow_style
try:
anchor = omap.yaml_anchor()
except AttributeError:
anchor = None
node = SequenceNode(tag, value, flow_style=flow_style, anchor=anchor)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
best_style = True
try:
comment = getattr(omap, comment_attrib)
node.comment = comment.comment
if node.comment and node.comment[1]:
for ct in node.comment[1]:
ct.reset()
item_comments = comment.items
for v in item_comments.values():
if v and v[1]:
for ct in v[1]:
ct.reset()
try:
node.comment.append(comment.end)
except AttributeError:
pass
except AttributeError:
item_comments = {}
for item_key in omap:
item_val = omap[item_key]
node_item = self.represent_data({item_key: item_val})
# node item has two scalars in value: node_key and node_value
item_comment = item_comments.get(item_key)
if item_comment:
if item_comment[1]:
node_item.comment = [None, item_comment[1]]
assert getattr(node_item.value[0][0], 'comment', None) is None
node_item.value[0][0].comment = [item_comment[0], None]
nvc = getattr(node_item.value[0][1], 'comment', None)
if nvc is not None: # end comment already there
nvc[0] = item_comment[2]
nvc[1] = item_comment[3]
else:
node_item.value[0][1].comment = item_comment[2:]
# if not (isinstance(node_item, ScalarNode) \
# and not node_item.style):
# best_style = False
value.append(node_item)
if flow_style is None:
if self.default_flow_style is not None:
node.flow_style = self.default_flow_style
else:
node.flow_style = best_style
return node
def represent_set(self, setting):
flow_style = False
tag = u'tag:yaml.org,2002:set'
# return self.represent_mapping(tag, value)
value = []
flow_style = setting.fa.flow_style(flow_style)
try:
anchor = setting.yaml_anchor()
except AttributeError:
anchor = None
node = MappingNode(tag, value, flow_style=flow_style, anchor=anchor)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
best_style = True
# no sorting! !!
try:
comment = getattr(setting, comment_attrib)
node.comment = comment.comment
if node.comment and node.comment[1]:
for ct in node.comment[1]:
ct.reset()
item_comments = comment.items
for v in item_comments.values():
if v and v[1]:
for ct in v[1]:
ct.reset()
try:
node.comment.append(comment.end)
except AttributeError:
pass
except AttributeError:
item_comments = {}
for item_key in setting.odict:
node_key = self.represent_key(item_key)
node_value = self.represent_data(None)
item_comment = item_comments.get(item_key)
if item_comment:
assert getattr(node_key, 'comment', None) is None
node_key.comment = item_comment[:2]
node_key.style = node_value.style = "?"
if not (isinstance(node_key, ScalarNode) and not node_key.style):
best_style = False
if not (isinstance(node_value, ScalarNode) and not
node_value.style):
best_style = False
value.append((node_key, node_value))
best_style = best_style
return node
def represent_dict(self, data):
"""write out tag if saved on loading"""
try:
t = data.tag.value
except AttributeError:
t = None
if t:
while t and t[0] == '!':
t = t[1:]
tag = 'tag:yaml.org,2002:' + t
else:
tag = u'tag:yaml.org,2002:map'
return self.represent_mapping(tag, data)
RoundTripRepresenter.add_representer(type(None),
RoundTripRepresenter.represent_none)
RoundTripRepresenter.add_representer(
PreservedScalarString,
RoundTripRepresenter.represent_preserved_scalarstring)
RoundTripRepresenter.add_representer(
SingleQuotedScalarString,
RoundTripRepresenter.represent_single_quoted_scalarstring)
RoundTripRepresenter.add_representer(
DoubleQuotedScalarString,
RoundTripRepresenter.represent_double_quoted_scalarstring)
RoundTripRepresenter.add_representer(CommentedSeq,
RoundTripRepresenter.represent_list)
RoundTripRepresenter.add_representer(CommentedMap,
RoundTripRepresenter.represent_dict)
RoundTripRepresenter.add_representer(CommentedOrderedMap,
RoundTripRepresenter.represent_ordereddict)
if sys.version_info >= (2, 7):
import collections
RoundTripRepresenter.add_representer(collections.OrderedDict,
RoundTripRepresenter.represent_ordereddict)
RoundTripRepresenter.add_representer(CommentedSet,
RoundTripRepresenter.represent_set)
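# Usage sketch (editor's note): RoundTripRepresenter is what preserves
# comments and key order on output. Assuming the round_trip_load /
# round_trip_dump helpers from this package's main module (older releases
# may require dump(..., Dumper=RoundTripDumper) instead):
#
#     import ruamel.yaml
#
#     data = ruamel.yaml.round_trip_load("a: 1  # keep me\nb: 2\n")
#     data['b'] = 3
#     assert ruamel.yaml.round_trip_dump(data) == "a: 1  # keep me\nb: 3\n"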


@@ -1,60 +0,0 @@
# coding: utf-8
from __future__ import absolute_import
from __future__ import print_function
__all__ = ["ScalarString", "PreservedScalarString", "SingleQuotedScalarString",
"DoubleQuotedScalarString"]
try:
from .compat import text_type
except (ImportError, ValueError): # for Jython
from ruamel.yaml.compat import text_type
class ScalarString(text_type):
def __new__(cls, *args, **kw):
return text_type.__new__(cls, *args, **kw)
class PreservedScalarString(ScalarString):
def __new__(cls, value):
return ScalarString.__new__(cls, value)
class SingleQuotedScalarString(ScalarString):
def __new__(cls, value):
return ScalarString.__new__(cls, value)
class DoubleQuotedScalarString(ScalarString):
def __new__(cls, value):
return ScalarString.__new__(cls, value)
def preserve_literal(s):
return PreservedScalarString(s.replace('\r\n', '\n').replace('\r', '\n'))
def walk_tree(base):
"""
the routine here walks over a simple yaml tree (recursing in
dict values and list items) and converts strings that
have multiple lines to literal scalars
"""
from ruamel.yaml.compat import string_types
if isinstance(base, dict):
for k in base:
v = base[k]
if isinstance(v, string_types) and '\n' in v:
base[k] = preserve_literal(v)
else:
walk_tree(v)
elif isinstance(base, list):
for idx, elem in enumerate(base):
if isinstance(elem, string_types) and '\n' in elem:
base[idx] = preserve_literal(elem)
else:
walk_tree(elem)
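# Usage sketch (editor's note): walk_tree mutates the tree in place so that
# multi-line strings dump as literal block scalars:
#
#     data = {'msg': 'line one\nline two\n'}
#     walk_tree(data)
#     # data['msg'] is now a PreservedScalarString, emitted with '|' style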

File diff suppressed because it is too large


@@ -1,178 +0,0 @@
# coding: utf-8
from __future__ import absolute_import
import re
try:
from .error import YAMLError
from .compat import nprint, DBG_NODE, dbg, string_types
except (ImportError, ValueError): # for Jython
from ruamel.yaml.error import YAMLError
from ruamel.yaml.compat import nprint, DBG_NODE, dbg, string_types
from ruamel.yaml.events import (
StreamStartEvent, StreamEndEvent, MappingStartEvent, MappingEndEvent,
SequenceStartEvent, SequenceEndEvent, AliasEvent, ScalarEvent,
DocumentStartEvent, DocumentEndEvent,
)
from ruamel.yaml.nodes import (
MappingNode, ScalarNode, SequenceNode,
)
__all__ = ['Serializer', 'SerializerError']
class SerializerError(YAMLError):
pass
class Serializer(object):
# 'id' and 3+ numbers, but not 000
ANCHOR_TEMPLATE = u'id%03d'
ANCHOR_RE = re.compile(u'id(?!000$)\\d{3,}')
def __init__(self, encoding=None, explicit_start=None, explicit_end=None,
version=None, tags=None):
self.use_encoding = encoding
self.use_explicit_start = explicit_start
self.use_explicit_end = explicit_end
if isinstance(version, string_types):
self.use_version = tuple(map(int, version.split('.')))
else:
self.use_version = version
self.use_tags = tags
self.serialized_nodes = {}
self.anchors = {}
self.last_anchor_id = 0
self.closed = None
self._templated_id = None
def open(self):
if self.closed is None:
self.emit(StreamStartEvent(encoding=self.use_encoding))
self.closed = False
elif self.closed:
raise SerializerError("serializer is closed")
else:
raise SerializerError("serializer is already opened")
def close(self):
if self.closed is None:
raise SerializerError("serializer is not opened")
elif not self.closed:
self.emit(StreamEndEvent())
self.closed = True
# def __del__(self):
# self.close()
def serialize(self, node):
if dbg(DBG_NODE):
nprint('Serializing nodes')
node.dump()
if self.closed is None:
raise SerializerError("serializer is not opened")
elif self.closed:
raise SerializerError("serializer is closed")
self.emit(DocumentStartEvent(explicit=self.use_explicit_start,
version=self.use_version,
tags=self.use_tags))
self.anchor_node(node)
self.serialize_node(node, None, None)
self.emit(DocumentEndEvent(explicit=self.use_explicit_end))
self.serialized_nodes = {}
self.anchors = {}
self.last_anchor_id = 0
def anchor_node(self, node):
if node in self.anchors:
if self.anchors[node] is None:
self.anchors[node] = self.generate_anchor(node)
else:
anchor = None
try:
if node.anchor.always_dump:
anchor = node.anchor.value
except AttributeError:
pass
self.anchors[node] = anchor
if isinstance(node, SequenceNode):
for item in node.value:
self.anchor_node(item)
elif isinstance(node, MappingNode):
for key, value in node.value:
self.anchor_node(key)
self.anchor_node(value)
def generate_anchor(self, node):
try:
anchor = node.anchor.value
except AttributeError:
anchor = None
if anchor is None:
self.last_anchor_id += 1
return self.ANCHOR_TEMPLATE % self.last_anchor_id
return anchor
def serialize_node(self, node, parent, index):
alias = self.anchors[node]
if node in self.serialized_nodes:
self.emit(AliasEvent(alias))
else:
self.serialized_nodes[node] = True
self.descend_resolver(parent, index)
if isinstance(node, ScalarNode):
# check whether node.tag equals the tag that would result from parsing;
# if they differ, quoting is necessary for strings
detected_tag = self.resolve(ScalarNode, node.value, (True, False))
default_tag = self.resolve(ScalarNode, node.value, (False, True))
implicit = (node.tag == detected_tag), (node.tag == default_tag)
self.emit(ScalarEvent(alias, node.tag, implicit, node.value,
style=node.style, comment=node.comment))
elif isinstance(node, SequenceNode):
implicit = (node.tag == self.resolve(SequenceNode, node.value, True))
comment = node.comment
# print('comment >>>>>>>>>>>>>.', comment, node.flow_style)
end_comment = None
seq_comment = None
if node.flow_style is True:
if comment: # eol comment on flow style sequence
seq_comment = comment[0]
# comment[0] = None
if comment and len(comment) > 2:
end_comment = comment[2]
else:
end_comment = None
self.emit(SequenceStartEvent(alias, node.tag, implicit,
flow_style=node.flow_style,
comment=node.comment))
index = 0
for item in node.value:
self.serialize_node(item, node, index)
index += 1
self.emit(SequenceEndEvent(comment=[seq_comment, end_comment]))
elif isinstance(node, MappingNode):
implicit = (node.tag == self.resolve(MappingNode, node.value, True))
comment = node.comment
end_comment = None
map_comment = None
if node.flow_style is True:
if comment: # eol comment on flow style sequence
map_comment = comment[0]
# comment[0] = None
if comment and len(comment) > 2:
end_comment = comment[2]
self.emit(MappingStartEvent(alias, node.tag, implicit,
flow_style=node.flow_style,
comment=node.comment))
for key, value in node.value:
self.serialize_node(key, node, None)
self.serialize_node(value, node, key)
self.emit(MappingEndEvent(comment=[map_comment, end_comment]))
self.ascend_resolver()
def templated_id(s):
return Serializer.ANCHOR_RE.match(s)
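# Worked example (editor's note): generated anchors follow ANCHOR_TEMPLATE
# ('id' plus a 3+ digit number), and templated_id() recognizes them while
# rejecting the excluded 'id000':
#
#     assert templated_id(u'id001')
#     assert templated_id(u'id1234')
#     assert not templated_id(u'id000')
#     assert not templated_id(u'other')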


@@ -1,5 +0,0 @@
[egg_info]
tag_build =
tag_date = 0
tag_svn_revision = 0


@@ -1,195 +0,0 @@
# # header
# coding: utf-8
class Token(object):
def __init__(self, start_mark, end_mark):
self.start_mark = start_mark
self.end_mark = end_mark
def __repr__(self):
attributes = [key for key in self.__dict__
if not key.endswith('_mark')]
attributes.sort()
arguments = ', '.join(['%s=%r' % (key, getattr(self, key))
for key in attributes])
return '%s(%s)' % (self.__class__.__name__, arguments)
def add_post_comment(self, comment):
if not hasattr(self, '_comment'):
self._comment = [None, None]
self._comment[0] = comment
def add_pre_comments(self, comments):
if not hasattr(self, '_comment'):
self._comment = [None, None]
assert self._comment[1] is None
self._comment[1] = comments
def get_comment(self):
return getattr(self, '_comment', None)
@property
def comment(self):
return getattr(self, '_comment', None)
def move_comment(self, target):
"""move a comment from this token to target (normally next token)
used to combine e.g. comments before a BlockEntryToken to the
ScalarToken that follows it
"""
c = self.comment
if c is None:
return
# don't push beyond last element
if isinstance(target, StreamEndToken):
return
delattr(self, '_comment')
tc = target.comment
if not tc: # target comment, just insert
target._comment = c
return self
if c[0] and tc[0] or c[1] and tc[1]:
raise NotImplementedError('overlap in comment %r %r' % (c, tc))
if c[0]:
tc[0] = c[0]
if c[1]:
tc[1] = c[1]
return self
def split_comment(self):
""" split the post part of a comment, and return it
as comment to be added. Delete second part if [None, None]
abc: # this goes to sequence
# this goes to first element
- first element
"""
comment = self.comment
if comment is None or comment[0] is None:
return None # nothing to do
ret_val = [comment[0], None]
if comment[1] is None:
delattr(self, '_comment')
return ret_val
# class BOMToken(Token):
# id = '<byte order mark>'
class DirectiveToken(Token):
id = '<directive>'
def __init__(self, name, value, start_mark, end_mark):
Token.__init__(self, start_mark, end_mark)
self.name = name
self.value = value
class DocumentStartToken(Token):
id = '<document start>'
class DocumentEndToken(Token):
id = '<document end>'
class StreamStartToken(Token):
id = '<stream start>'
def __init__(self, start_mark=None, end_mark=None, encoding=None):
Token.__init__(self, start_mark, end_mark)
self.encoding = encoding
class StreamEndToken(Token):
id = '<stream end>'
class BlockSequenceStartToken(Token):
id = '<block sequence start>'
class BlockMappingStartToken(Token):
id = '<block mapping start>'
class BlockEndToken(Token):
id = '<block end>'
class FlowSequenceStartToken(Token):
id = '['
class FlowMappingStartToken(Token):
id = '{'
class FlowSequenceEndToken(Token):
id = ']'
class FlowMappingEndToken(Token):
id = '}'
class KeyToken(Token):
id = '?'
class ValueToken(Token):
id = ':'
class BlockEntryToken(Token):
id = '-'
class FlowEntryToken(Token):
id = ','
class AliasToken(Token):
id = '<alias>'
def __init__(self, value, start_mark, end_mark):
Token.__init__(self, start_mark, end_mark)
self.value = value
class AnchorToken(Token):
id = '<anchor>'
def __init__(self, value, start_mark, end_mark):
Token.__init__(self, start_mark, end_mark)
self.value = value
class TagToken(Token):
id = '<tag>'
def __init__(self, value, start_mark, end_mark):
Token.__init__(self, start_mark, end_mark)
self.value = value
class ScalarToken(Token):
id = '<scalar>'
def __init__(self, value, plain, start_mark, end_mark, style=None):
Token.__init__(self, start_mark, end_mark)
self.value = value
self.plain = plain
self.style = style
class CommentToken(Token):
id = '<comment>'
def __init__(self, value, start_mark, end_mark):
Token.__init__(self, start_mark, end_mark)
self.value = value
def reset(self):
if hasattr(self, 'pre_done'):
delattr(self, 'pre_done')
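# Worked example (editor's note): Token.__repr__ lists the non-mark
# attributes in sorted order, so a scalar token prints deterministically:
#
#     ScalarToken('abc', True, None, None)
#     # -> ScalarToken(plain=True, style=None, value='abc')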


@@ -1,139 +0,0 @@
# coding: utf-8
"""
some helper functions that might be generally useful
"""
from __future__ import print_function
from __future__ import absolute_import
from .compat import text_type, binary_type
from .main import round_trip_load
# originally as comment
# https://github.com/pre-commit/pre-commit/pull/211#issuecomment-186466605
# if you use this in your code, I suggest adding a test in your test suite
# that checks this routine's output against a known piece of your YAML
# before upgrades to this code break your round-tripped YAML
def load_yaml_guess_indent(stream, **kw):
"""guess the indent and block sequence indent of yaml stream/string
returns round_trip_loaded stream, indent level, block sequence indent
- block sequence indent is the number of spaces before a dash relative to previous indent
- if there are no block sequences, indent is taken from nested mappings, block sequence
indent is unset (None) in that case
"""
# load a yaml file and guess the indentation; TABs are not handled
def leading_spaces(line):
idx = 0
while idx < len(line) and line[idx] == ' ':
idx += 1
return idx
if isinstance(stream, text_type):
yaml_str = stream
elif isinstance(stream, binary_type):
yaml_str = stream.decode('utf-8') # most likely, but the Reader checks BOM for this
else:
yaml_str = stream.read()
map_indent = None
indent = None # default if not found for some reason
block_seq_indent = None
prev_line_key_only = None
key_indent = 0
for line in yaml_str.splitlines():
rline = line.rstrip()
lline = rline.lstrip()
if lline.startswith('- '):
l_s = leading_spaces(line)
block_seq_indent = l_s - key_indent
idx = l_s + 1
while line[idx] == ' ': # this will end as we rstripped
idx += 1
if line[idx] == '#': # comment after -
continue
indent = idx - key_indent
break
if map_indent is None and prev_line_key_only is not None and rline:
idx = 0
while line[idx] in ' -':
idx += 1
if idx > prev_line_key_only:
map_indent = idx - prev_line_key_only
if rline.endswith(':'):
key_indent = leading_spaces(line)
idx = 0
while line[idx] == ' ': # this will end on ':'
idx += 1
prev_line_key_only = idx
continue
prev_line_key_only = None
if indent is None and map_indent is not None:
indent = map_indent
return round_trip_load(yaml_str, **kw), indent, block_seq_indent
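# Usage sketch (editor's note): feed the function YAML text and reuse the
# detected indentation when dumping the data back out:
#
#     yaml_str = "top:\n    - a\n    - b\n"
#     data, indent, block_seq_indent = load_yaml_guess_indent(yaml_str)
#     # for this input: indent == 6, block_seq_indent == 4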
def configobj_walker(cfg):
"""
walks over a ConfigObj (INI file with comments) generating
corresponding YAML output (including comments)
"""
from configobj import ConfigObj
assert isinstance(cfg, ConfigObj)
for c in cfg.initial_comment:
if c.strip():
yield c
for s in _walk_section(cfg):
if s.strip():
yield s
for c in cfg.final_comment:
if c.strip():
yield c
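# Usage sketch (editor's note, requires the third-party configobj package;
# 'settings.ini' is a hypothetical file):
#
#     from configobj import ConfigObj
#
#     cfg = ConfigObj('settings.ini')
#     print('\n'.join(configobj_walker(cfg)))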
def _walk_section(s, level=0):
from configobj import Section
assert isinstance(s, Section)
indent = u' ' * level
for name in s.scalars:
for c in s.comments[name]:
yield indent + c.strip()
x = s[name]
if u'\n' in x:
i = indent + u' '
x = u'|\n' + i + x.strip().replace(u'\n', u'\n' + i)
elif ':' in x:
x = u"'" + x.replace(u"'", u"''") + u"'"
line = u'{0}{1}: {2}'.format(indent, name, x)
c = s.inline_comments[name]
if c:
line += u' ' + c
yield line
for name in s.sections:
for c in s.comments[name]:
yield indent + c.strip()
line = u'{0}{1}:'.format(indent, name)
c = s.inline_comments[name]
if c:
line += u' ' + c
yield line
for val in _walk_section(s[name], level=level+1):
yield val
# def config_obj_2_rt_yaml(cfg):
# from .comments import CommentedMap, CommentedSeq
# from configobj import ConfigObj
# assert isinstance(cfg, ConfigObj)
# #for c in cfg.initial_comment:
# # if c.strip():
# # pass
# cm = CommentedMap()
# for name in s.sections:
# cm[name] = d = CommentedMap()
#
#
# #for c in cfg.final_comment:
# # if c.strip():
# # yield c
# return cm


@@ -7,3 +7,4 @@ jinja2==3.0.3
six==1.16.0
macholib==1.16.2
altgraph==0.17.3
ruamel.yaml==0.17.21


@@ -5,18 +5,20 @@
import collections
import collections.abc
import errno
import fnmatch
import glob
import hashlib
import itertools
import numbers
import os
import posixpath
import re
import shutil
import stat
import sys
import tempfile
from contextlib import contextmanager
from typing import Callable, List, Match, Optional, Tuple, Union
from typing import Callable, Iterable, List, Match, Optional, Tuple, Union
from llnl.util import tty
from llnl.util.lang import dedupe, memoized
@@ -1671,6 +1673,38 @@ def fix_darwin_install_name(path):
break
def find_first(root: str, files: Union[Iterable[str], str], bfs_depth: int = 2) -> Optional[str]:
"""Find the first file matching a pattern.
The following
.. code-block:: console
$ find /usr -name 'abc*' -o -name 'def*' -quit
is equivalent to:
>>> find_first("/usr", ["abc*", "def*"])
Any glob pattern supported by fnmatch can be used.
The search order of this method is breadth-first over directories,
until depth bfs_depth, after which depth-first search is used.
Parameters:
root (str): The root directory to start searching from
files (str or Iterable): File pattern(s) to search for
bfs_depth (int): (advanced) parameter that specifies at which
depth to switch to depth-first search.
Returns:
str or None: The matching file or None when no file is found.
"""
if isinstance(files, str):
files = [files]
return FindFirstFile(root, *files, bfs_depth=bfs_depth).find()
def find(root, files, recursive=True):
"""Search for ``files`` starting from the ``root`` directory.
@@ -2720,3 +2754,105 @@ def filesummary(path, print_bytes=16) -> Tuple[int, bytes]:
return size, short_contents
except OSError:
return 0, b""
class FindFirstFile:
"""Uses hybrid iterative deepening to locate the first matching
file. Up to depth ``bfs_depth`` it uses iterative deepening, which
mimics breadth-first with the same memory footprint as depth-first
search, after which it switches to ordinary depth-first search using
``os.walk``."""
def __init__(self, root: str, *file_patterns: str, bfs_depth: int = 2):
"""Create a small summary of the given file. Does not error
when file does not exist.
Args:
root (str): directory in which to recursively search
file_patterns (str): glob file patterns understood by fnmatch
bfs_depth (int): until this depth breadth-first traversal is used,
when no match is found, the mode is switched to depth-first search.
"""
self.root = root
self.bfs_depth = bfs_depth
self.match: Callable
# normcase is trivial on posix
regex = re.compile("|".join(fnmatch.translate(os.path.normcase(p)) for p in file_patterns))
# On case sensitive filesystems match against normcase'd paths.
if os.path is posixpath:
self.match = regex.match
else:
self.match = lambda p: regex.match(os.path.normcase(p))
def find(self) -> Optional[str]:
"""Run the file search
Returns:
str or None: path of the matching file
"""
self.file = None
# First do iterative deepening (i.e. bfs through limited depth dfs)
for i in range(self.bfs_depth + 1):
if self._find_at_depth(self.root, i):
return self.file
# Then fall back to depth-first search
return self._find_dfs()
def _find_at_depth(self, path, max_depth, depth=0) -> bool:
"""Returns True when done. Notice search can be done
either because a file was found, or because it recursed
through all directories."""
try:
entries = os.scandir(path)
except OSError:
return True
done = True
with entries:
# At max depth we look for matching files.
if depth == max_depth:
for f in entries:
# Exit on match
if self.match(f.name):
self.file = os.path.join(path, f.name)
return True
# is_dir should not require a stat call, so it's a good optimization.
if self._is_dir(f):
done = False
return done
# At lower depth only recurse into subdirs
for f in entries:
if not self._is_dir(f):
continue
# If any subdir is not fully traversed, we're not done yet.
if not self._find_at_depth(os.path.join(path, f.name), max_depth, depth + 1):
done = False
# Early exit when we've found something.
if self.file:
return True
return done
def _is_dir(self, f: os.DirEntry) -> bool:
"""Returns True when f is dir we can enter (and not a symlink)."""
try:
return f.is_dir(follow_symlinks=False)
except OSError:
return False
def _find_dfs(self) -> Optional[str]:
"""Returns match or None"""
for dirpath, _, filenames in os.walk(self.root):
for file in filenames:
if self.match(file):
return os.path.join(dirpath, file)
return None
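# Usage sketch (editor's note): the find_first() wrapper defined earlier in
# this file is the intended entry point; the class can also be used directly.
# Both calls below return the path of the first match, or None:
#
#     match = find_first("/usr", ["abc*", "def*"], bfs_depth=2)
#     same = FindFirstFile("/usr", "abc*", "def*", bfs_depth=2).find()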


@@ -4,7 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
#: PEP440 canonical <major>.<minor>.<micro>.<devN> string
__version__ = "0.20.0.dev0"
__version__ = "0.21.0.dev0"
spack_version = __version__


@@ -289,9 +289,14 @@ def _check_build_test_callbacks(pkgs, error_cls):
pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
test_callbacks = getattr(pkg_cls, "build_time_test_callbacks", None)
if test_callbacks and "test" in test_callbacks:
msg = '{0} package contains "test" method in ' "build_time_test_callbacks"
instr = 'Remove "test" from: [{0}]'.format(", ".join(test_callbacks))
# TODO (post-34236): "test*"->"test_*" once remove deprecated methods
# TODO (post-34236): "test"->"test_" once remove deprecated methods
has_test_method = test_callbacks and any([m.startswith("test") for m in test_callbacks])
if has_test_method:
msg = '{0} package contains "test*" method(s) in ' "build_time_test_callbacks"
instr = 'Remove all methods whose names start with "test" from: [{0}]'.format(
", ".join(test_callbacks)
)
errors.append(error_cls(msg.format(pkg_name), [instr]))
return errors


@@ -24,10 +24,9 @@
import warnings
from contextlib import closing, contextmanager
from gzip import GzipFile
from typing import List, NamedTuple, Optional, Union
from urllib.error import HTTPError, URLError
import ruamel.yaml as yaml
import llnl.util.filesystem as fsys
import llnl.util.lang
import llnl.util.tty as tty
@@ -194,10 +193,17 @@ def _associate_built_specs_with_mirror(self, cache_key, mirror_url):
db_root_dir = os.path.join(tmpdir, "db_root")
db = spack_db.Database(None, db_dir=db_root_dir, enable_transaction_locking=False)
self._index_file_cache.init_entry(cache_key)
cache_path = self._index_file_cache.cache_path(cache_key)
with self._index_file_cache.read_transaction(cache_key):
db._read_from_file(cache_path)
try:
self._index_file_cache.init_entry(cache_key)
cache_path = self._index_file_cache.cache_path(cache_key)
with self._index_file_cache.read_transaction(cache_key):
db._read_from_file(cache_path)
except spack_db.InvalidDatabaseVersionError as e:
msg = (
f"you need a newer Spack version to read the buildcache index for the "
f"following mirror: '{mirror_url}'. {e.database_version_message}"
)
raise BuildcacheIndexError(msg) from e
spec_list = db.query_local(installed=False, in_buildcache=True)
@@ -420,7 +426,7 @@ def update(self, with_cooldown=False):
self._write_local_index_cache()
if all_methods_failed:
if configured_mirror_urls and all_methods_failed:
raise FetchCacheError(fetch_errors)
if fetch_errors:
tty.warn(
@@ -502,17 +508,17 @@ def _binary_index():
#: Singleton binary_index instance
binary_index = llnl.util.lang.Singleton(_binary_index)
binary_index: Union[BinaryCacheIndex, llnl.util.lang.Singleton] = llnl.util.lang.Singleton(
_binary_index
)
class NoOverwriteException(spack.error.SpackError):
"""
Raised when a file exists and must be overwritten.
"""
"""Raised when a file would be overwritten"""
def __init__(self, file_path):
super(NoOverwriteException, self).__init__(
'"{}" exists in buildcache. Use --force flag to overwrite.'.format(file_path)
f"Refusing to overwrite the following file: {file_path}"
)
@@ -615,7 +621,7 @@ def read_buildinfo_file(prefix):
filename = buildinfo_file_name(prefix)
with open(filename, "r") as inputfile:
content = inputfile.read()
buildinfo = yaml.load(content)
buildinfo = syaml.load(content)
return buildinfo
@@ -743,12 +749,14 @@ def get_buildfile_manifest(spec):
return data
def prefixes_to_hashes(spec):
def hashes_to_prefixes(spec):
"""Return a dictionary of hashes to prefixes for a spec and its deps, excluding externals"""
return {
str(s.prefix): s.dag_hash()
s.dag_hash(): str(s.prefix)
for s in itertools.chain(
spec.traverse(root=True, deptype="link"), spec.dependencies(deptype="run")
)
if not s.external
}
@@ -766,7 +774,7 @@ def get_buildinfo_dict(spec, rel=False):
"relocate_binaries": manifest["binary_to_relocate"],
"relocate_links": manifest["link_to_relocate"],
"hardlinks_deduped": manifest["hardlinks_deduped"],
"prefix_to_hash": prefixes_to_hashes(spec),
"hash_to_prefix": hashes_to_prefixes(spec),
}
@@ -775,11 +783,10 @@ def tarball_directory_name(spec):
Return name of the tarball directory according to the convention
<os>-<architecture>/<compiler>/<package>-<version>/
"""
return "%s/%s/%s-%s" % (
spec.architecture,
str(spec.compiler).replace("@", "-"),
spec.name,
spec.version,
return os.path.join(
str(spec.architecture),
f"{spec.compiler.name}-{spec.compiler.version}",
f"{spec.name}-{spec.version}",
)
@@ -788,13 +795,9 @@ def tarball_name(spec, ext):
Return the name of the tarfile according to the convention
<os>-<architecture>-<package>-<dag_hash><ext>
"""
return "%s-%s-%s-%s-%s%s" % (
spec.architecture,
str(spec.compiler).replace("@", "-"),
spec.name,
spec.version,
spec.dag_hash(),
ext,
return (
f"{spec.architecture}-{spec.compiler.name}-{spec.compiler.version}-"
f"{spec.name}-{spec.version}-{spec.dag_hash()}{ext}"
)
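# Worked example (editor's note): for a hypothetical concrete spec
# zlib@1.2.13 compiled with gcc@12.2.0 on linux-ubuntu22.04-x86_64 with a
# dag hash starting 'abc123', the two helpers above yield
#   linux-ubuntu22.04-x86_64/gcc-12.2.0/zlib-1.2.13
#   linux-ubuntu22.04-x86_64-gcc-12.2.0-zlib-1.2.13-abc123....spack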
@@ -1202,48 +1205,42 @@ def _do_create_tarball(tarfile_path, binaries_dir, pkg_dir, buildinfo):
tar_add_metadata(tar, buildinfo_file_name(pkg_dir), buildinfo)
def _build_tarball(
spec,
out_url,
force=False,
relative=False,
unsigned=False,
allow_root=False,
key=None,
regenerate_index=False,
):
class PushOptions(NamedTuple):
#: Overwrite existing tarball/metadata files in buildcache
force: bool = False
#: Whether to use relative RPATHs
relative: bool = False
#: Allow absolute paths to package prefixes when creating a tarball
allow_root: bool = False
#: Regenerate indices after pushing
regenerate_index: bool = False
#: Whether to sign or not.
unsigned: bool = False
#: What key to use for signing
key: Optional[str] = None
def push_or_raise(spec: Spec, out_url: str, options: PushOptions):
"""
Build a tarball from given spec and put it into the directory structure
used at the mirror (following <tarball_directory_name>).
This method raises :py:class:`NoOverwriteException` when ``force=False`` and the tarball or
spec.json file already exist in the buildcache.
"""
if not spec.concrete:
raise ValueError("spec must be concrete to build tarball")
with tempfile.TemporaryDirectory(dir=spack.stage.get_stage_root()) as tmpdir:
_build_tarball_in_stage_dir(
spec,
out_url,
stage_dir=tmpdir,
force=force,
relative=relative,
unsigned=unsigned,
allow_root=allow_root,
key=key,
regenerate_index=regenerate_index,
)
_build_tarball_in_stage_dir(spec, out_url, stage_dir=tmpdir, options=options)
def _build_tarball_in_stage_dir(
spec,
out_url,
stage_dir,
force=False,
relative=False,
unsigned=False,
allow_root=False,
key=None,
regenerate_index=False,
):
def _build_tarball_in_stage_dir(spec: Spec, out_url: str, stage_dir: str, options: PushOptions):
cache_prefix = build_cache_prefix(stage_dir)
tarfile_name = tarball_name(spec, ".spack")
tarfile_dir = os.path.join(cache_prefix, tarball_directory_name(spec))
@@ -1253,7 +1250,7 @@ def _build_tarball_in_stage_dir(
mkdirp(tarfile_dir)
if web_util.url_exists(remote_spackfile_path):
if force:
if options.force:
web_util.remove_url(remote_spackfile_path)
else:
raise NoOverwriteException(url_util.format(remote_spackfile_path))
@@ -1273,7 +1270,7 @@ def _build_tarball_in_stage_dir(
remote_signed_specfile_path = "{0}.sig".format(remote_specfile_path)
# If force and exists, overwrite. Otherwise raise exception on collision.
if force:
if options.force:
if web_util.url_exists(remote_specfile_path):
web_util.remove_url(remote_specfile_path)
if web_util.url_exists(remote_signed_specfile_path):
@@ -1290,7 +1287,7 @@ def _build_tarball_in_stage_dir(
# mode, Spack unfortunately *does* mutate rpaths and links ahead of time.
# For now, we only make a full copy of the spec prefix when in relative mode.
if relative:
if options.relative:
# tarfile is used because it preserves hardlink etc best.
binaries_dir = workdir
temp_tarfile_name = tarball_name(spec, ".tar")
@@ -1304,19 +1301,19 @@ def _build_tarball_in_stage_dir(
binaries_dir = spec.prefix
# create info for later relocation and create tar
buildinfo = get_buildinfo_dict(spec, relative)
buildinfo = get_buildinfo_dict(spec, options.relative)
# optionally make the paths in the binaries relative to each other
# in the spack install tree before creating tarball
if relative:
make_package_relative(workdir, spec, buildinfo, allow_root)
elif not allow_root:
if options.relative:
make_package_relative(workdir, spec, buildinfo, options.allow_root)
elif not options.allow_root:
ensure_package_relocatable(buildinfo, binaries_dir)
_do_create_tarball(tarfile_path, binaries_dir, pkg_dir, buildinfo)
# remove copy of install directory
if relative:
if options.relative:
shutil.rmtree(workdir)
# get the sha256 checksum of the tarball
@@ -1339,7 +1336,7 @@ def _build_tarball_in_stage_dir(
# This will be used to determine is the directory layout has changed.
buildinfo = {}
buildinfo["relative_prefix"] = os.path.relpath(spec.prefix, spack.store.layout.root)
buildinfo["relative_rpaths"] = relative
buildinfo["relative_rpaths"] = options.relative
spec_dict["buildinfo"] = buildinfo
with open(specfile_path, "w") as outfile:
@@ -1350,40 +1347,40 @@ def _build_tarball_in_stage_dir(
json.dump(spec_dict, outfile, indent=0, separators=(",", ":"))
# sign the tarball and spec file with gpg
if not unsigned:
key = select_signing_key(key)
sign_specfile(key, force, specfile_path)
if not options.unsigned:
key = select_signing_key(options.key)
sign_specfile(key, options.force, specfile_path)
# push tarball and signed spec json to remote mirror
web_util.push_to_url(spackfile_path, remote_spackfile_path, keep_original=False)
web_util.push_to_url(
signed_specfile_path if not unsigned else specfile_path,
remote_signed_specfile_path if not unsigned else remote_specfile_path,
signed_specfile_path if not options.unsigned else specfile_path,
remote_signed_specfile_path if not options.unsigned else remote_specfile_path,
keep_original=False,
)
tty.debug('Buildcache for "{0}" written to \n {1}'.format(spec, remote_spackfile_path))
# push the key to the build cache's _pgp directory so it can be
# imported
if not unsigned:
push_keys(out_url, keys=[key], regenerate_index=regenerate_index, tmpdir=stage_dir)
if not options.unsigned:
push_keys(out_url, keys=[key], regenerate_index=options.regenerate_index, tmpdir=stage_dir)
# create an index.json for the build_cache directory so specs can be
# found
if regenerate_index:
if options.regenerate_index:
generate_package_index(url_util.join(out_url, os.path.relpath(cache_prefix, stage_dir)))
return None
def nodes_to_be_packaged(specs, root=True, dependencies=True):
def specs_to_be_packaged(
specs: List[Spec], root: bool = True, dependencies: bool = True
) -> List[Spec]:
"""Return the list of nodes to be packaged, given a list of specs.
Args:
specs (List[spack.spec.Spec]): list of root specs to be processed
root (bool): include the root of each spec in the nodes
dependencies (bool): include the dependencies of each
specs: list of root specs to be processed
root: include the root of each spec in the nodes
dependencies: include the dependencies of each
spec in the nodes
"""
if not root and not dependencies:
@@ -1401,32 +1398,26 @@ def nodes_to_be_packaged(specs, root=True, dependencies=True):
return list(filter(packageable, nodes))
def push(specs, push_url, include_root: bool = True, include_dependencies: bool = True, **kwargs):
"""Create a binary package for each of the specs passed as input and push them
to a given push URL.
def push(spec: Spec, mirror_url: str, options: PushOptions):
"""Create and push binary package for a single spec to the specified
mirror url.
Args:
specs (List[spack.spec.Spec]): installed specs to be packaged
push_url (str): url where to push the binary package
include_root (bool): include the root of each spec in the nodes
include_dependencies (bool): include the dependencies of each
spec in the nodes
**kwargs: TODO
spec: Spec to package and push
mirror_url: Desired destination url for binary package
options: tarball creation and push options (see ``PushOptions``)
Returns:
True if package was pushed, False otherwise.
"""
# Be explicit about the argument type
if type(include_root) != bool or type(include_dependencies) != bool:
raise ValueError("Expected include_root/include_dependencies to be True/False")
try:
push_or_raise(spec, mirror_url, options)
except NoOverwriteException as e:
warnings.warn(str(e))
return False
nodes = nodes_to_be_packaged(specs, root=include_root, dependencies=include_dependencies)
# TODO: This seems to be an easy target for task
# TODO: distribution using a parallel pool
for node in nodes:
try:
_build_tarball(node, push_url, **kwargs)
except NoOverwriteException as e:
warnings.warn(str(e))
return True
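# Usage sketch (editor's note): given a concrete Spec object `spec`, callers
# build a PushOptions tuple and hand it to push() or push_or_raise(); the
# mirror URL below is hypothetical:
#
#     options = PushOptions(force=True, unsigned=True)
#     pushed = push(spec, "file:///tmp/mirror", options)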
def try_verify(specfile_path):
@@ -1672,7 +1663,7 @@ def dedupe_hardlinks_if_necessary(root, buildinfo):
buildinfo[key] = new_list
def relocate_package(spec, allow_root):
def relocate_package(spec):
"""
Relocate the given package
"""
@@ -1690,23 +1681,23 @@ def relocate_package(spec, allow_root):
old_spack_prefix = str(buildinfo.get("spackprefix"))
old_rel_prefix = buildinfo.get("relative_prefix")
old_prefix = os.path.join(old_layout_root, old_rel_prefix)
rel = buildinfo.get("relative_rpaths")
prefix_to_hash = buildinfo.get("prefix_to_hash", None)
if old_rel_prefix != new_rel_prefix and not prefix_to_hash:
rel = buildinfo.get("relative_rpaths", False)
# In the past prefix_to_hash was the default and externals were not dropped, so prefixes
# were not unique.
if "hash_to_prefix" in buildinfo:
hash_to_old_prefix = buildinfo["hash_to_prefix"]
elif "prefix_to_hash" in buildinfo:
hash_to_old_prefix = dict((v, k) for (k, v) in buildinfo["prefix_to_hash"].items())
else:
hash_to_old_prefix = dict()
if old_rel_prefix != new_rel_prefix and not hash_to_old_prefix:
msg = "Package tarball was created from an install "
msg += "prefix with a different directory layout and an older "
msg += "buildcache create implementation. It cannot be relocated."
raise NewLayoutException(msg)
# older buildcaches do not have the prefix_to_hash dictionary
# need to set an empty dictionary and add one entry to
# prefix_to_prefix to reproduce the old behavior
if not prefix_to_hash:
prefix_to_hash = dict()
hash_to_prefix = dict()
hash_to_prefix[spec.format("{hash}")] = str(spec.package.prefix)
new_deps = spack.build_environment.get_rpath_deps(spec.package)
for d in new_deps + spec.dependencies(deptype="run"):
hash_to_prefix[d.format("{hash}")] = str(d.prefix)
# Spurious replacements (e.g. sbang) will cause issues with binaries
# For example, the new sbang can be longer than the old one.
# Hence 2 dictionaries are maintained here.
@@ -1720,9 +1711,11 @@ def relocate_package(spec, allow_root):
# First match specific prefix paths. Possibly the *local* install prefix
# of some dependency is in an upstream, so we cannot assume the original
# spack store root can be mapped uniformly to the new spack store root.
for orig_prefix, hash in prefix_to_hash.items():
prefix_to_prefix_text[orig_prefix] = hash_to_prefix.get(hash, None)
prefix_to_prefix_bin[orig_prefix] = hash_to_prefix.get(hash, None)
for dag_hash, new_dep_prefix in hashes_to_prefixes(spec).items():
if dag_hash in hash_to_old_prefix:
old_dep_prefix = hash_to_old_prefix[dag_hash]
prefix_to_prefix_bin[old_dep_prefix] = new_dep_prefix
prefix_to_prefix_text[old_dep_prefix] = new_dep_prefix
# Only then add the generic fallback of install prefix -> install prefix.
prefix_to_prefix_text[old_prefix] = new_prefix
@@ -1796,7 +1789,15 @@ def is_backup_file(file):
relocate.relocate_text(text_names, prefix_to_prefix_text)
# relocate the install prefixes in binary files including dependencies
relocate.relocate_text_bin(files_to_relocate, prefix_to_prefix_bin)
changed_files = relocate.relocate_text_bin(files_to_relocate, prefix_to_prefix_bin)
# Add ad-hoc signatures to patched macho files when on macOS.
if "macho" in platform.binary_formats and sys.platform == "darwin":
codesign = which("codesign")
if not codesign:
return
for binary in changed_files:
codesign("-fs-", binary)
# If we are installing back to the same location
# relocate the sbang location if the spack directory changed
@@ -1853,7 +1854,7 @@ def _extract_inner_tarball(spec, filename, extract_to, unsigned, remote_checksum
return tarfile_path
def extract_tarball(spec, download_result, allow_root=False, unsigned=False, force=False):
def extract_tarball(spec, download_result, unsigned=False, force=False):
"""
extract binary tarball for given package into install area
"""
@@ -1949,7 +1950,7 @@ def extract_tarball(spec, download_result, allow_root=False, unsigned=False, for
os.remove(specfile_path)
try:
relocate_package(spec, allow_root)
relocate_package(spec)
except Exception as e:
shutil.rmtree(spec.prefix)
raise e
@@ -1968,7 +1969,7 @@ def extract_tarball(spec, download_result, allow_root=False, unsigned=False, for
_delete_staged_downloads(download_result)
def install_root_node(spec, allow_root, unsigned=False, force=False, sha256=None):
def install_root_node(spec, unsigned=False, force=False, sha256=None):
"""Install the root node of a concrete spec from a buildcache.
Checking the sha256 sum of a node before installation is usually needed only
@@ -1977,8 +1978,6 @@ def install_root_node(spec, allow_root, unsigned=False, force=False, sha256=None
Args:
spec: spec to be installed (note that only the root node will be installed)
allow_root (bool): allows the root directory to be present in binaries
(may affect relocation)
unsigned (bool): if True allows installing unsigned binaries
force (bool): force installation if the spec is already present in the
local store
@@ -2014,24 +2013,22 @@ def install_root_node(spec, allow_root, unsigned=False, force=False, sha256=None
# don't print long padded paths while extracting/relocating binaries
with spack.util.path.filter_padding():
tty.msg('Installing "{0}" from a buildcache'.format(spec.format()))
extract_tarball(spec, download_result, allow_root, unsigned, force)
spack.hooks.post_install(spec)
extract_tarball(spec, download_result, unsigned, force)
spack.hooks.post_install(spec, False)
spack.store.db.add(spec, spack.store.layout)
def install_single_spec(spec, allow_root=False, unsigned=False, force=False):
def install_single_spec(spec, unsigned=False, force=False):
"""Install a single concrete spec from a buildcache.
Args:
spec (spack.spec.Spec): spec to be installed
allow_root (bool): allows the root directory to be present in binaries
(may affect relocation)
unsigned (bool): if True allows installing unsigned binaries
force (bool): force installation if the spec is already present in the
local store
"""
for node in spec.traverse(root=True, order="post", deptype=("link", "run")):
install_root_node(node, allow_root=allow_root, unsigned=unsigned, force=force)
install_root_node(node, unsigned=unsigned, force=force)
def try_direct_fetch(spec, mirrors=None):
@@ -2418,6 +2415,10 @@ def __init__(self, all_architectures):
self.possible_specs = specs
def __call__(self, spec, **kwargs):
"""
Args:
spec (str): The spec being searched for in its string representation or hash.
"""
matches = []
if spec.startswith("/"):
# Matching a DAG hash
@@ -2439,6 +2440,10 @@ def __str__(self):
return "{}, due to: {}".format(self.args[0], self.args[1])
class BuildcacheIndexError(spack.error.SpackError):
"""Raised when a buildcache cannot be read for any reason"""
FetchIndexResult = collections.namedtuple("FetchIndexResult", "etag hash data fresh")


@@ -9,6 +9,7 @@
import sys
import sysconfig
import warnings
from typing import Dict, Optional, Sequence, Union
import archspec.cpu
@@ -21,8 +22,10 @@
from .config import spec_for_current_python
QueryInfo = Dict[str, "spack.spec.Spec"]
def _python_import(module):
def _python_import(module: str) -> bool:
try:
__import__(module)
except ImportError:
@@ -30,7 +33,9 @@ def _python_import(module):
return True
def _try_import_from_store(module, query_spec, query_info=None):
def _try_import_from_store(
module: str, query_spec: Union[str, "spack.spec.Spec"], query_info: Optional[QueryInfo] = None
) -> bool:
"""Return True if the module can be imported from an already
installed spec, False otherwise.
@@ -52,7 +57,7 @@ def _try_import_from_store(module, query_spec, query_info=None):
module_paths = [
os.path.join(candidate_spec.prefix, pkg.purelib),
os.path.join(candidate_spec.prefix, pkg.platlib),
] # type: list[str]
]
path_before = list(sys.path)
# NOTE: try module_paths first and last, last allows an existing version in path
@@ -89,7 +94,7 @@ def _try_import_from_store(module, query_spec, query_info=None):
return False
def _fix_ext_suffix(candidate_spec):
def _fix_ext_suffix(candidate_spec: "spack.spec.Spec"):
"""Fix the external suffixes of Python extensions on the fly for
platforms that may need it
@@ -157,7 +162,11 @@ def _fix_ext_suffix(candidate_spec):
os.symlink(abs_path, link_name)
def _executables_in_store(executables, query_spec, query_info=None):
def _executables_in_store(
executables: Sequence[str],
query_spec: Union["spack.spec.Spec", str],
query_info: Optional[QueryInfo] = None,
) -> bool:
"""Return True if at least one of the executables can be retrieved from
a spec in store, False otherwise.
@@ -193,7 +202,7 @@ def _executables_in_store(executables, query_spec, query_info=None):
return False
def _root_spec(spec_str):
def _root_spec(spec_str: str) -> str:
"""Add a proper compiler and target to a spec used during bootstrapping.
Args:


@@ -7,6 +7,7 @@
import contextlib
import os.path
import sys
from typing import Any, Dict, Generator, MutableSequence, Sequence
from llnl.util import tty
@@ -24,12 +25,12 @@
_REF_COUNT = 0
def is_bootstrapping():
def is_bootstrapping() -> bool:
"""Return True if we are in a bootstrapping context, False otherwise."""
return _REF_COUNT > 0
def spec_for_current_python():
def spec_for_current_python() -> str:
"""For bootstrapping purposes we are just interested in the Python
minor version (all patches are ABI compatible with the same minor).
@@ -41,14 +42,14 @@ def spec_for_current_python():
return f"python@{version_str}"
def root_path():
def root_path() -> str:
"""Root of all the bootstrap related folders"""
return spack.util.path.canonicalize_path(
spack.config.get("bootstrap:root", spack.paths.default_user_bootstrap_path)
)
def store_path():
def store_path() -> str:
"""Path to the store used for bootstrapped software"""
enabled = spack.config.get("bootstrap:enable", True)
if not enabled:
@@ -59,7 +60,7 @@ def store_path():
@contextlib.contextmanager
def spack_python_interpreter():
def spack_python_interpreter() -> Generator:
"""Override the current configuration to set the interpreter under
which Spack is currently running as the only Python external spec
available.
@@ -76,18 +77,18 @@ def spack_python_interpreter():
yield
def _store_path():
def _store_path() -> str:
bootstrap_root_path = root_path()
return spack.util.path.canonicalize_path(os.path.join(bootstrap_root_path, "store"))
def _config_path():
def _config_path() -> str:
bootstrap_root_path = root_path()
return spack.util.path.canonicalize_path(os.path.join(bootstrap_root_path, "config"))
@contextlib.contextmanager
def ensure_bootstrap_configuration():
def ensure_bootstrap_configuration() -> Generator:
"""Swap the current configuration for the one used to bootstrap Spack.
The context manager is reference counted to ensure we don't swap multiple
@@ -107,7 +108,7 @@ def ensure_bootstrap_configuration():
_REF_COUNT -= 1
def _read_and_sanitize_configuration():
def _read_and_sanitize_configuration() -> Dict[str, Any]:
"""Read the user configuration that needs to be reused for bootstrapping
and remove the entries that should not be copied over.
"""
@@ -120,9 +121,11 @@ def _read_and_sanitize_configuration():
return user_configuration
def _bootstrap_config_scopes():
def _bootstrap_config_scopes() -> Sequence["spack.config.ConfigScope"]:
tty.debug("[BOOTSTRAP CONFIG SCOPE] name=_builtin")
config_scopes = [spack.config.InternalConfigScope("_builtin", spack.config.config_defaults)]
config_scopes: MutableSequence["spack.config.ConfigScope"] = [
spack.config.InternalConfigScope("_builtin", spack.config.config_defaults)
]
configuration_paths = (spack.config.configuration_defaults_path, ("bootstrap", _config_path()))
for name, path in configuration_paths:
platform = spack.platforms.host().name
@@ -137,7 +140,7 @@ def _bootstrap_config_scopes():
return config_scopes
def _add_compilers_if_missing():
def _add_compilers_if_missing() -> None:
arch = spack.spec.ArchSpec.frontend_arch()
if not spack.compilers.compilers_for_arch(arch):
new_compilers = spack.compilers.find_new_compilers()
@@ -146,7 +149,7 @@ def _add_compilers_if_missing():
@contextlib.contextmanager
def _ensure_bootstrap_configuration():
def _ensure_bootstrap_configuration() -> Generator:
bootstrap_store_path = store_path()
user_configuration = _read_and_sanitize_configuration()
with spack.environment.no_active_environment():

Some files were not shown because too many files have changed in this diff