Compare commits

...

312 Commits

Author SHA1 Message Date
Gregory Becker
6e25c4393e typo 2024-04-18 11:16:52 -07:00
Gregory Becker
b93be559f5 debugging amrex concretization 2024-04-18 10:53:19 -07:00
Gregory Becker
948f5a2135 typo 2024-04-18 09:03:46 -07:00
Gregory Becker
493b131fa3 refactor environment to use matrix for rocm architectures 2024-04-17 17:41:14 -07:00
Gregory Becker
daee0dd42c try without ginkgo 2024-04-17 16:12:23 -07:00
Gregory Becker
0369568512 try full rocm stack 2024-04-17 15:28:51 -07:00
Gregory Becker
019e4f31a1 propagate rocm targets 2024-04-17 14:04:14 -07:00
Gregory Becker
a49729c581 require +rocm when possible in rocm stack 2024-04-17 12:18:51 -07:00
Gregory Becker
5e42a322e7 additional compilers supporting rocm 2024-04-17 09:08:14 -07:00
Gregory Becker
653b43be97 update external compiler config for rocmcc 2024-04-16 18:57:15 -07:00
Gregory Becker
7126297da5 rename variable to not shadow compiler property 2024-04-16 18:52:42 -07:00
Gregory Becker
3efc19028e allow rocm packages to build with amdclang in gcc stack 2024-04-16 15:12:15 -07:00
Gregory Becker
0b1f74bd04 rocm conflicts with compilers that do not support rocm 2024-04-12 11:31:08 -07:00
Greg Becker
fb4e1cad45 remove dpcpp compiler and package (#43418)
`dpcpp` is deprecated by intel and has been superseded by `oneapi` compilers for a very long time.

---------

Co-authored-by: becker33 <becker33@users.noreply.github.com>
2024-04-03 15:34:23 -07:00
Adam J. Stewart
3054b71e2e py-rarfile: add v4.2 (#43477) 2024-04-03 15:08:36 -07:00
downloadico
47163f7435 py-cig-pythia: add missing py-setuptools dependency (#43473) 2024-04-03 15:02:32 -07:00
Dom Heinzeller
e322a8382f py-torch: Add variant 'internal-protobuf' to build with the internal protobuf (#43056) 2024-04-03 15:57:54 -06:00
Greg Sjaardema
53fb4795ca Seacas exodusii 04 2024 (#43468)
* SEACAS: Update package.py to handle new SEACAS project name
  The base project name for the SEACAS project has changed from
  "SEACASProj" to "SEACAS" as of @2022-10-14, so the package
  needed to be updated to use the new project name when needed.

  The refactor also changes several:
      "-DSome_CMAKE_Option:BOOL=ON"
  to
     define("Some_CMAKE_Option", True)
* SEACAS, EXODUSII: New version; deprecate older versions; better variant descriptions
* [@spackbot] updating style on behalf of gsjaardema
* Fix long lines reported by flake8

---------

Co-authored-by: gsjaardema <gsjaardema@users.noreply.github.com>
2024-04-03 15:46:57 -06:00
eugeneswalker
4517c7fa9b ginkgo@1.7 %oneapi@2024.1: icpx 2024.1 no longer accepts sycl::ext::intel::ctz (#43476) 2024-04-03 15:46:42 -06:00
Wouter Deconinck
efaed17f91 root: new version 6.30.06 (#43472) 2024-04-03 15:41:41 -06:00
Thomas Madlener
2c17cd365d Make it possible to build whizard from a git checkout (#43447) 2024-04-03 21:55:57 +02:00
psakievich
dfe537f688 Convert curl env mod method to a side effect (#43474) 2024-04-03 12:02:48 -07:00
G-Ragghianti
be0002b460 slate: Removing scalapack as test dependency, adding python (#43452)
* removing scalapack as test dependency, adding python
* fixing style
* style
2024-04-03 11:20:03 -07:00
Daryl W. Grunau
743ee5f3de eospac: expose versions 6.5.8 and 6.5.9 (#43450)
Co-authored-by: Daryl W. Grunau <dwg@lanl.gov>
2024-04-03 11:17:15 -07:00
Cameron Smith
b6caf0156f simmetrix-simmodsuite: support RHEL8, fix module paths (#43455) 2024-04-03 11:07:43 -07:00
Christoph Junghans
ec00ffc244 byfl: fix llvm dep (#43460) 2024-04-03 10:39:06 -07:00
G-Ragghianti
f020256b9f magma add version 2.8.0 (#43417)
* Add release 2.8.0
* Changing C compiler to hipcc
* Final release version
* Adding new cmake definition for rocm support
* Enabling rocm version support
* Update sha256
* Updating website URL
* Removing unnecessary C compiler spec
* Adding rocm-core dependency
* fixing rocm-core version
* fixing rocm-core version
* fixing style
* bugfix
2024-04-03 11:25:46 -06:00
Peter Brady
04377e39e0 New package: Parthenon (#43426) 2024-04-03 10:05:02 -07:00
Vanessasaurus
ba2703fea6 flux-core: add v0.61.0 (#43465)
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
2024-04-03 12:07:32 +02:00
Adrien Bernede
92b1c8f763 RADIUSS packages update (Starting over #39613) (#41375) 2024-04-02 15:03:07 -07:00
Adam J. Stewart
2b29ecd9b6 py-pillow: add v10+ (#43441) 2024-04-02 14:46:21 -07:00
Adam J. Stewart
5b43bf1b58 py-scikit-image: add v0.21 and v0.22 (#43440)
* py-scikit-image: add v0.21 and v0.22
* Add maintainer and license
* Style fix
2024-04-02 14:41:29 -07:00
Juan Miguel Carceller
37d9770e02 gdb: add a dependency on pkgconfig (#43439)
* gdb: add a dependency on pkgconfig
* Apply dependency for 13.1 and onwards

---------

Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-04-02 14:39:03 -07:00
Brad Geltz
0e016ba6f5 geopm-runtime: New package (#42737)
* Add systemd

Signed-off-by: Brad Geltz <brad.geltz@intel.com>

* gobject-introspection: Correct glib versions

- The meson.build requirement that the glib version
  is >= the gobject-introspective version is not in place
  until v1.76.1.
- Prior to that, the requirement was glib >= 2.58.0.
- Bug introduced in acbf0d99c4, PR #42222.

Signed-off-by: Brad Geltz <brad.geltz@intel.com>

* util-linux: add v2.39.3

Signed-off-by: Brad Geltz <brad.geltz@intel.com>

* py-natsort: add new versions

Signed-off-by: Brad Geltz <brad.geltz@intel.com>

* geopm-service: default systemd support to true

- Make the dependency sticky to force a failure
  if systemd compilation fails, or force
  the user to disable the option.

Signed-off-by: Brad Geltz <brad.geltz@intel.com>

* geopm-service: Add initial multi-architecture support

- Restrict arch conflicts to 3.0.1
- Disable cpuid at configure time on non-x86_64 platforms.

Signed-off-by: Brad Geltz <brad.geltz@intel.com>

* geopm-service: update docstrings

Signed-off-by: Brad Geltz <brad.geltz@intel.com>

* Add py-geopmdpy

Signed-off-by: Brad Geltz <brad.geltz@intel.com>

* Add geopm-runtime recipe

Signed-off-by: Brad Geltz <brad.geltz@intel.com>

---------

Signed-off-by: Brad Geltz <brad.geltz@intel.com>
2024-04-02 17:27:36 +02:00
psakievich
7afa949da1 Add handling of custom ssl certs in urllib ops (#42953)
This PR allows the user to specify a path to a custom cert file (or directory) in
Spack's config:

```yaml
  # This is where custom certs for proxy/firewall are stored.
  # It can be a path or environment variable. To match ssl env configuration
  # the default is the environment variable SSL_CERT_FILE
  ssl_certs: $SSL_CERT_FILE
```

`config:ssl_certs` can be a path to a file or a directory, or it can be an environment
variable that resolves to one of those. When it points to something valid, Spack will
update the ssl context to include custom certs, and fetching via `urllib` and `curl`
will trust the provided certs.

This should resolve many issues with fetching behind corporate firewalls.


---------

Co-authored-by: psakievich <psakievich@users.noreply.github.com>
Co-authored-by: Alec Scott <alec@bcs.sh>
2024-04-01 11:11:13 -07:00
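For illustration, a minimal standard-library sketch of the behavior described above (not Spack's actual implementation; the helper name and the `$SSL_CERT_FILE` default are assumptions):

```python
import os
import ssl


def ssl_context_with_custom_certs(ssl_certs="$SSL_CERT_FILE"):
    """Build an SSL context that additionally trusts certs from a configured path."""
    path = os.path.expandvars(ssl_certs)  # allow env-var style config values
    context = ssl.create_default_context()
    if os.path.isfile(path):
        context.load_verify_locations(cafile=path)
    elif os.path.isdir(path):
        context.load_verify_locations(capath=path)
    return context
```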
Jose E. Roman
b81d7d0aac slepc, py-slepc4py, petsc, py-petsc4py add v3.21.0 (#43435)
* New release SLEPc 3.21

* petsc, py-petsc4py  add v3.21.0

* [@spackbot] updating style on behalf of joseeroman

---------

Co-authored-by: Satish Balay <balay@mcs.anl.gov>
Co-authored-by: joseeroman <joseeroman@users.noreply.github.com>
2024-03-31 08:57:44 -07:00
Peter Scheibel
e78484f501 Concretize when_possible: add failure detection and explicit message (#43202)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-03-31 14:02:09 +02:00
Adam J. Stewart
6fd43b4e75 PyTorch: update ecosystem (#43423) 2024-03-30 06:47:22 -06:00
G-Ragghianti
14edb55288 SLATE package: fix smoke test (#43425) 2024-03-29 17:17:48 -07:00
Juan Miguel Carceller
f062f1c5b3 ROOT package: add patch for builds with libAfterImage for MacOS (#43428) 2024-03-29 16:48:59 -07:00
eugeneswalker
7756c8f4fc ci devtools manylinux2014: update ci image with compatible gpg (#43421) 2024-03-29 16:12:55 -06:00
eugeneswalker
69c8a9e4ba veloc@1.7: depend on er@0.4: (#43433) 2024-03-29 12:54:27 -07:00
Todd Gamblin
47c0736952 xz: add comment to avoid 5.6 pending CVE resolution (#43432)
XZ is compromised; add a note for maintainers to avoid updating until we
have a release without the CVE.
2024-03-29 12:03:13 -06:00
kwryankrattiger
8b89287084 CI Reproducer on Metal (#43411)
* MacOS image remove requires override syntax

* Metal reproducer auto start and cross-platform
2024-03-29 12:32:54 -05:00
Julien Cortial
8bd6283b52 med: add v4.1.1, and v5.0.0, update package recipe (#43409) 2024-03-29 18:27:33 +01:00
Peter Scheibel
179e4f3ad1 Don't delete "spack develop" build artifacts after install (#43424)
After #41373, where we stopped considering the source directory to be the stage for develop builds,
we resumed *deleting* the stage even after a successful build.

We don't want this for develop builds because developers need to iterate; we should keep the artifacts
unless they explicitly run `spack clean`.  

Now:
- [x] Build artifacts for develop packages are not removed after a successful install
- [x] They are also not removed before an install starts, i.e. develop packages always 
      reuse prior artifacts, if available.
- [x] They can be deleted in any other context, e.g. by running  `spack clean --stage`
2024-03-29 09:36:31 -07:00
kwryankrattiger
e97787691b force oneapi compiler unless specified otherwise (#43419) 2024-03-29 09:20:26 -07:00
kjrstory
5932ee901c openradioss: add DEXEC_NAME to cmake variable and change style of cmake_args (#43365) 2024-03-29 10:45:54 +01:00
afzpatel
3bdebeba3c UCX: Add patch to set HIP_PLATFORM_AMD (#43403) 2024-03-29 10:41:45 +01:00
Massimiliano Culpo
d390ee1902 spack load: remove --only argument (#42120)
The argument was deprecated in v0.21 and slated
for removal in v0.22.
2024-03-29 10:19:10 +01:00
AMD Toolchain Support
4f9fe6f9bf Adding 'logging' and 'tracing' variants to enable AOCL DTL trace and logging capabilities (#43414) 2024-03-29 00:58:31 -06:00
Richard Berger
df6d6d9b5c flecsi: depend on legion@24.03.0: moving forward (#43410)
Prior FleCSI releases relied on commits on the control-replication branch (cr)
of Legion. That branch has now been merged into master and is part of the
24.03.0 Legion release.
2024-03-29 00:53:36 -06:00
Luc Berger
e57d33b29f Trilinos: update SuperLU dependency (#43346) 2024-03-29 00:48:27 -06:00
Wouter Deconinck
85c6d6dbab (py-)onnx: new version 1.14.{0,1}, 1.15.0 (#41877)
* (py-)onnx: new version 1.14.{0,1}, 1.15.0

Notes on `onnx`:
- The C++ standard was changed to 14 in 1.15, so no more filter_file is needed. (The C++ standard has since changed to 17 in master.)

Notes on `py-onnx`:
- `py-pybind11` was an unlisted requirement in CMakeLists.txt since 1.3 or so (before earliest spack package).

* py-onnx: depends_on pybind11 with type link, not run

* py-onnx: depends_on py-setuptools@64:
2024-03-29 00:33:27 -06:00
Kyle Knoepfel
5f9228746e Add ability to rename environments (#43296) 2024-03-28 15:15:04 -06:00
Adam J. Stewart
9f2451ddff py-jaxlib: ppc64le support has been fixed (#43422) 2024-03-28 21:15:17 +01:00
Tristan Carel
a05eb11b7b steps: add version 5.0.1 (#43360) 2024-03-28 17:43:10 +01:00
kwryankrattiger
ae2d0ff1cd CI: fail the rebuild command if buildcache push failed (#40045) 2024-03-28 17:02:41 +01:00
Greg Becker
7e906ced75 spack find: add options for local/upstream only (#42999)
Users requested an option to filter between local/upstream results in `spack find` output.

```
# default behavior, same as without --install-tree argument
$ spack find --install-tree all

# show only local results
$ spack find --install-tree local  

# show results from all upstreams
$ spack find --install-tree upstream 

# show results from a particular upstream or the local install_tree
$ spack find --install-tree /path/to/install/tree/root
```

---------

Co-authored-by: becker33 <becker33@users.noreply.github.com>
2024-03-28 10:00:55 -05:00
Andrey Perestoronin
647e89f6bc intel-oneapi 2024.1.0: added new version to packages (#43375)
* added new package versions

* add itac and inspector

* Remove ITAC and Inspector

* Set older version for MKL as a workaround to pass CI issue
2024-03-28 08:09:37 -06:00
Anderson Chauphan
3239c29fb0 trilinos: add v15.1.1 (#42996) 2024-03-28 10:08:28 -04:00
Martin Aumüller
abced0e87d openscenegraph: patch for compatibility with ffmpeg@5: (#43051)
Co-authored-by: aumuell <aumuell@users.noreply.github.com>
2024-03-28 11:21:47 +01:00
liam-o-marsh
300fc2ee42 scipy: register conflict with too-recent openblas (#43301) 2024-03-28 11:03:45 +01:00
Christopher Christofi
13c4258e54 py-folium: add new package with version 0.16.0 (#43352) 2024-03-28 10:50:37 +01:00
Sergey Kosukhin
f29cb7f953 zlib-ng: %nvhpc: Fix build with %nvhpc (similar to zlib) (#43095)
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-03-28 10:41:10 +01:00
Ryan Honeyager
826b8f25c5 hdf5: fixes floating point exceptions generated when fpe trapping is enabled (#42880) 2024-03-28 10:34:10 +01:00
Cody Balos
ebaeea7820 sundials: add new version (#43008)
* sundials: add new version
* note previous default
* update when clause for removed options

---------

Co-authored-by: David J. Gardner <gardner48@llnl.gov>
2024-03-28 10:12:49 +01:00
Kensuke WATANABE
f76eb993aa pixman: avoid assembler macros error with Fujitsu compiler (#43362) 2024-03-28 10:10:59 +01:00
Tristan Carel
0b2c370a83 py-pytest-mpi: new package starting at 0.6 (#43368) 2024-03-28 10:01:01 +01:00
Tristan Carel
6a9ee480bf py-vascpy: new package starting at 0.1.1 (#43370) 2024-03-28 09:58:43 +01:00
potter-s
cc80d52b62 aws-cli-2: restrict supported Python versions to @:3.11 (#43390)
Co-authored-by: Simon Potter <sp39@sanger.ac.uk>
2024-03-28 08:39:51 +01:00
Paul R. C. Kent
b9c7d3b89b llvm: add 18.1.2 (#43401) 2024-03-28 08:24:05 +01:00
AMD Toolchain Support
c1be6a5483 QE: backport 7.3.1 elpa build fix to 7.3 (#43394)
Co-authored-by: Ning Li <ning.li@amd.com>
2024-03-28 08:12:23 +01:00
Juan Miguel Carceller
42550208c3 gaudi: add version 38 and a gaudialg variant (#42856)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-03-28 08:10:00 +01:00
Ken Raffenetti
be231face6 mpich: fixup removal of pmi=off option (#43377) 2024-03-28 08:08:37 +01:00
Chris White
89ac747a76 add lua 5.4.6 (#43407) 2024-03-27 22:37:29 -06:00
Richard Berger
5d8f36d667 Legion: add Rust-based profiler to install (#43408)
* legion: add missing license
* legion: add rust-based profiler to install
2024-03-27 22:25:54 -06:00
Sreenivasa Murthy Kolam
6c3fed351f Fix build failure when run-tests is enabled for rocfft (#42424)
* initial commit to fix the error when running run tests for rocfft

* apply patch from 5.7.0 onwards

* add description for the patch that is added
2024-03-27 21:09:29 -05:00
afzpatel
b9cbd15674 hipcc: new package for HIP (#43245)
* initial commit to add hipcc package
* restore setup_dependent_package
2024-03-27 21:05:50 -05:00
Alex Richert
b8f633246a Add aocc support to qt (#43400) 2024-03-27 17:23:45 -06:00
Massimiliano Culpo
a2f3e98ab9 pinentry: add v1.3.0 (#43395) 2024-03-27 22:28:34 +01:00
Paul R. C. Kent
acffe37313 libffi: add 3.4.6 and 3.4.5 (#43404) 2024-03-27 22:27:19 +01:00
Greg Sjaardema
249e5415e8 Update exodusii package (#43379)
* SEACAS: Update package.py to handle new SEACAS project name
  The base project name for the SEACAS project has changed from
  "SEACASProj" to "SEACAS" as of @2022-10-14, so the package
  needed to be updated to use the new project name when needed.
  The refactor also changes several:
      "-DSome_CMAKE_Option:BOOL=ON"
  to
     define("Some_CMAKE_Option", True)
* exodusii -- refactor and bring up-to-date
* Add missed patch file
* [@spackbot] updating style on behalf of gsjaardema
* Apply seacas windows patch here also.
* Update url so old checksums valid; redo new checksums

---------

Co-authored-by: gsjaardema <gsjaardema@users.noreply.github.com>
2024-03-27 14:14:49 -07:00
Elliott Slaughter
e2a942d07e legion: Add 24.03.0, update HIP dependency (#43398)
* legion: Add 24.03.0, update HIP dependency.
* legion: Remove CUDA upper bound, update HIP bounds, use f-strings.
* legion: Fix format.
* legion: Fix format again.
2024-03-27 14:38:28 -06:00
afzpatel
32deca2a4c initial commit to add check function to rocthrust (#43405) 2024-03-27 13:06:55 -07:00
Yanfei Guo
e4c64865f1 argobots: update to v1.2 (#43381)
* argobots: update to v1.2
* Replace prior maintainer with current one.

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2024-03-27 13:58:37 -06:00
Adam J. Stewart
1175f37577 py-kornia: add v0.7.2 (#43184)
* py-kornia: add v0.7.2
* Add maintainers to recipe
* https not supported
* Debug build failure
* More detailed debug info
* Try rust+dev
* Fix pyo3 build
* Fix turbojpeg-sys build
* Fix dlpack-rs build
* Get rid of debug env vars
2024-03-27 10:58:06 -07:00
James Edgeley
faa183331f Update Nektar++ package (#43397)
* Update Nektar++ package
* shorten line
* correct to 5.5.0
* use cmake helpers
* style

---------

Co-authored-by: JamesEdgeley <JamesEdgeley@users.noreply.github.com>
2024-03-27 11:55:14 -06:00
afzpatel
bbac33871c bump rocprofiler-dev to 6.0 and add aqlprofile (#42459)
* Initial commit to bump rocprofiler-dev to 6.0 and add aqlprofile recipe
* bump to 6.0.2 and extracting binaries from deb pkg
* fixes for hpctoolkit build errors
* add yum and zyp aqlprofile packages
* fix style issues
2024-03-27 10:36:10 -07:00
afzpatel
6d4dd33c46 Enable ASAN in ROCm packages (#42704)
* Initial commit to enable ASAN
* fix styling
* fix styling
* add asan option for hip-tensor and roctracer-dev
2024-03-27 09:40:21 -07:00
Adrien Cotte
579bad05a8 Add py-modules-gui (mogui) (#43396) 2024-03-27 09:35:17 -07:00
psakievich
27a8eb0f68 Add config option and compiler support to reuse across OS's (#42693)
* Allow compilers to function across compatible OS's
* Add documentation in the default yaml

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
2024-03-27 15:39:07 +00:00
Tristan Carel
4cd993070f py-libsonata: new package starting at 0.1.25 (#43372)
* py-libsonata: new package starting at 0.1.25
* align spack version with the git branch
* Fix min required versions / use spack packages instead of submodules
2024-03-27 08:20:13 -07:00
renjithravindrankannath
4c55c6a268 rvs binary path updated for 6.0 (#43359) 2024-03-27 07:30:30 -07:00
Wouter Deconinck
a4a27fb1e4 root: new version 6.30.04 (#43378)
New version of ROOT, no changes to build system that aren't already accounted for (e.g. require http for webgui), https://github.com/root-project/root/compare/v6-30-02...v6-30-04
2024-03-27 07:28:34 -07:00
Massimiliano Culpo
66345e7185 Improve fixup macos rpath unit test (#43392)
Starting from XCode version 15 the linker ignores
duplicate rpaths, so the libraries don't need fixing
in those cases
2024-03-27 12:01:12 +01:00
dependabot[bot]
8f76f1b0d8 build(deps): bump actions/setup-python from 5.0.0 to 5.1.0 (#43384) 2024-03-27 10:49:45 +01:00
dependabot[bot]
4cab6f3af5 build(deps): bump codecov/codecov-action from 4.1.0 to 4.1.1 (#43385) 2024-03-27 10:49:33 +01:00
Harmen Stoppels
0d4665583b ci: inherit secrets in local workflows (#43391) 2024-03-27 09:48:14 +01:00
Harmen Stoppels
5d0ef9e4f4 ci: remove outdated version comments (#43389) 2024-03-27 09:26:12 +01:00
Harmen Stoppels
e145baf619 codecov: verbose: true (#43388) 2024-03-27 09:15:34 +01:00
Wouter Deconinck
6c912b30a2 Codespaces support for rapid PR evaluation (#41901)
* Create devcontainer.json

* Ensure codespace can be setup for current branch

* fix: find compilers in site scope

* fix: use cloud_pipelines ubuntu20.04 image

* fix: spack config --scope site add

* fix: use develop, not develop-root mirror
2024-03-26 21:13:32 -05:00
Jordan Ogas
f4da453f6b Charliecloud package: add 0.36 and 0.37; update dependencies. (#42590)
This adds a dependency on pkg-config, which in turn means pkg-config gets built
on pipelines using %oneapi/%cce; the pkg-config build is updated to disable
specific warnings-as-errors from these compilers.

Co-authored-by: Reid Priedhorsky <1682574+reidpr@users.noreply.github.com>
2024-03-26 16:13:29 -07:00
Harmen Stoppels
7e9caed8c2 ci: fix codecov upload (#43382) 2024-03-26 22:49:26 +01:00
Matthew Thompson
69509a6d9a pflogger: add version 1.14.0 (#43373) 2024-03-26 14:08:24 -06:00
Massimiliano Culpo
0841050d20 Add macos-14 as a runner (Apple M1) (#42728)
* Add macos-14 as a runner (Apple M1)

* Mark a test xfail

We need to check later if this test needs modifications
on Apple Silicon chips.

---------

Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Co-authored-by: alalazo <alalazo@users.noreply.github.com>
2024-03-26 12:36:21 -07:00
Cody Balos
321ffd732b axom package: add tests (#43312) 2024-03-26 11:21:28 -07:00
Simon Frasch
22922323e3 spfft package: add version 1.1.0 (#43318) 2024-03-26 10:19:06 -07:00
eugeneswalker
0b5b192c18 glvis: fix spack issue #42839 (#43369)
* glvis: fix spack issue #42839

* add issue link
2024-03-26 11:13:54 -06:00
Andrey Alekseenko
1275c57d88 gromacs: add new versions 2024, 2024.1; fix license (#43366) 2024-03-26 10:19:58 -06:00
Juan Miguel Carceller
29a39ac6a0 xrootd: add a dependency on pkgconfig when building with +davix (#43339)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-03-26 09:01:47 -07:00
kwryankrattiger
ae9c86a930 buildcache sync: manifest-glob with arbitrary destination (#41284)
* buildcache sync: manifest-glob with arbitrary destination

The current implementation of the --manifest-glob option is a bit restrictive,
requiring the destination to be known by the generation stage of CI.
This allows specifying an arbitrary destination mirror URL.

* Add unit test for buildcache sync with manifest

* Fix test and arguments for manifest-glob with override destination

* Add testing path for unused mirror argument
2024-03-26 08:47:45 -07:00
Massimiliano Culpo
83199a981d Allow unit test to work on Apple M1/M2 (#43363)
* Remove a few compilers from static test data

These compilers were used only in a bunch of tests, so
they are added only there.

* Remove clang@3.3 from unit test configuration

* Parametrize compilers.yaml

* Remove specially named gcc from static data

The compilers are used in two tests

* Remove apple-clang and macOS compilers from static data

The compiler was used only in multimethod tests

* Remove clang@3.5 (compiler seems to be unused)

* Remove gcc@4.4.0 (compiler seems to be unused)

* Exclude x86_64 tests on other architectures

* Mark two tests as for clingo only

* Update version syntax in compilers.yaml

* Parametrize tcl tests on architectures

* Parametrize lmod tests on architectures

* Substitute gcc@4.5.0 with gcc@4.8.0 so it can be used on aarch64

* Fix a few issues with aarch64 and unit-tests
2024-03-26 16:20:42 +01:00
eugeneswalker
ed40c3210e ci: add developer-tools-manylinux2014 stack (#43128)
* ci: add developer-tools-manylinux2014 stack

* add libtree, patchelf
2024-03-26 08:02:16 -07:00
Christopher Christofi
be96460ab2 py-nibabel: add new version 5.2.1 (#43335) 2024-03-26 12:46:48 +01:00
pauleonix
95caf55fe7 cuda: add 12.4.0, 12.3.2, 12.3.1 and 12.2.2 (#42748)
Add conflicts to ginkgo and petsc to avoid build failures with cuda@12.4

Co-authored-by: pauleonix <pauleonix@users.noreply.github.com>
2024-03-26 07:10:48 +01:00
Mark W. Krentel
960af24270 hpctoolkit: add version 2024.01.1 (#43353)
Add version 2024.01.1. Adjust the dependencies for develop, which no
longer uses libmonitor.
2024-03-25 17:57:45 -06:00
Christopher Christofi
899bef2aa8 py-nilearn: add new version (#43332)
* py-nilearn: add new version
* add maintainers and license
2024-03-25 16:45:13 -07:00
Adam J. Stewart
f0f092d9f1 py-keras: add v3.1.1 (#43283) 2024-03-26 00:38:01 +01:00
Christopher Christofi
6eaac2270d py-trx-python: add new package with version 0.2.9 (#43333) 2024-03-25 16:11:28 -07:00
John W. Parent
a9f3f6c007 seacas: fix linking on Windows (#43356) 2024-03-25 16:28:03 -06:00
Rocco Meli
08a04ebd46 spglib: add version 2.3.1 (#43345) 2024-03-25 14:27:32 -06:00
Matthew Thompson
d8e642ecb7 Update gftl, pflogger to match GFE 1.14 (#43291) 2024-03-25 13:22:50 -07:00
Christopher Christofi
669ed69d8e py-mne: add new version (#43334) 2024-03-25 13:21:07 -07:00
potter-s
7ebb21a0da Updated version (#43343)
Co-authored-by: Simon Potter <sp39@sanger.ac.uk>
2024-03-25 13:12:58 -07:00
Alec Scott
93ffa9ba5d go: add v1.22.1 (#43337) 2024-03-25 09:32:58 -07:00
liam-o-marsh
e5fdb90496 pyscf: new dependency bounds for pyscf, new version (#43300) 2024-03-25 10:24:01 -05:00
Danny McClanahan
303a0b3653 add command_line scope to help metavar (#42890)
It's now possible to add config on the command line with `spack -c <CONFIG_VARS> ...`, but the new `command_line` scope isn't reflected in the help output for `--scope`:

```bash
> spack help config
...
  --scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT
                        configuration scope to read/modify
...
```
2024-03-25 07:13:43 -07:00
Juan Miguel Carceller
9f07544bde r-rcpp: add version 1.0.11 and 1.0.12 (#43330) 2024-03-25 13:53:12 +01:00
Harmen Stoppels
9b046a39a8 strip url: fix whl suffix, remove exe (#43344) 2024-03-25 13:46:09 +01:00
Massimiliano Culpo
0c9a53ba3a Add intel-oneapi-runtime, allow injecting virtual dependencies (#42062)
This PR adds:
- A new runtime for `%oneapi` compilers, called `intel-oneapi-runtime`
- Information added to both `gcc-runtime` and `intel-oneapi-runtime`, to ensure
  that we don't mix compilers using different sonames for either `libgfortran`
  or `libifcore`

To do so, the following internal mechanisms have been implemented:
- Possibility to inject virtual dependencies from the `runtime_constraints`
  callback on packages

Information has been added to `gcc-runtime` to provide the correct soname
under different conditions on its `%gcc`.

Rules injected into the solver look like:

```prolog
% Add a dependency on 'gfortran@5' for nodes compiled with gcc@=13.2.0 and using the 'fortran' language
attr("dependency_holds", node(ID, Package), "gfortran", "link") :-
  attr("node", node(ID, Package)),
  attr("node_compiler", node(ID, Package), "gcc"),
  attr("node_compiler_version", node(ID, Package), "gcc", "13.2.0"),
  not external(node(ID, Package)),
  not runtime(Package),
  attr("language", node(ID, Package), "fortran").

attr("virtual_node", node(RuntimeID, "gfortran")) :-
  attr("depends_on", node(ID, Package), ProviderNode, "link"),
  provider(ProviderNode, node(RuntimeID, "gfortran")),
  attr("node", node(ID, Package)),
  attr("node_compiler", node(ID, Package), "gcc"),
  attr("node_compiler_version", node(ID, Package), "gcc", "13.2.0"),
  not external(node(ID, Package)),
  not runtime(Package),
  attr("language", node(ID, Package), "fortran").

attr("node_version_satisfies", node(RuntimeID, "gfortran"), "5") :-
  attr("depends_on", node(ID, Package), ProviderNode, "link"),
  provider(ProviderNode, node(RuntimeID, "gfortran")),
  attr("node", node(ID, Package)),
  attr("node_compiler", node(ID, Package), "gcc"),
  attr("node_compiler_version", node(ID, Package), "gcc", "13.2.0"),
  not external(node(ID, Package)),
  not runtime(Package),
  attr("language", node(ID, Package), "fortran").
```
2024-03-24 22:59:21 -07:00
Loic Hausammann
1fd4353289 openmpi: Add new variant: romio-filesystem=string (#43265)
Co-authored-by: loikki <loic.hausammann@id.ethz.ch>
2024-03-24 01:00:38 +01:00
potter-s
fcb8ed6409 py-plotly: Add versions up to 5.20 (#43284) 2024-03-24 00:55:42 +01:00
Marie Houillon
2f11862832 openCARP: Add v15.0 packages (#43299)
Co-authored-by: openCARP consortium <info@opencarp.org>
2024-03-24 00:35:32 +01:00
Gavin John
bff11ce8e7 py-kaleido: Add MacOS build, fix checksums (#43309)
* py-kaleido: Fix completely borked package.py

* Readd stuff I forgot to add

* And one last missing thing

* Remove python restriction

* [@spackbot] updating style on behalf of Pandapip1

* Add MacOS build

* Fix checksum

* Handle all supported OSes

* Split imports

* Remove extra version stuff
2024-03-24 00:31:38 +01:00
fgava90
218693431c paraview: fix range of exodusII-netcdf4.9.0.patch (#42926)
Co-authored-by: Gava, Francesco <francesco.gava@mclaren.com>
2024-03-23 20:33:20 +01:00
Alex Richert
e036cd9ef6 zlib-ng: New variants: +shared and +pic (#42796) 2024-03-23 19:32:42 +01:00
Sinan
cd5bef6780 py-line-profiler: Add 4.1.2 and 3.5.1 with their deps (#43156)
Co-authored-by: sbulut <sbulut@3vgeomatics.com>
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-03-23 11:16:48 -06:00
Davide
159e9a20d1 suite-sparse: Add version 7.3.1 (#43328) 2024-03-23 10:00:28 -06:00
Auriane R
99bb288db7 py-torch-nvidia-apex: @3.11: Add config_settings(PEP517), add missing py-packaging (#43306) 2024-03-23 16:04:10 +01:00
Chris White
99744a766b BLT: add new version 0.6.2 (#43257) 2024-03-23 15:25:29 +01:00
Juan Miguel Carceller
ddd8be51a0 re2: use the same C++ std used by abseil-cpp (#43288)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-03-23 14:48:38 +01:00
吴坎
bba66b1063 py-vl-convert-python: Add 1.3.0 (#43297) 2024-03-23 14:44:23 +01:00
Howard Pritchard
1c3c21d9c7 mpich: add variant +xpmem to specify use of xpmem (#43293)
Signed-off-by: Howard Pritchard <hppritcha@gmail.com>
2024-03-23 14:36:06 +01:00
Christopher Christofi
cbe9b3d01c bgen: add new package with version 1.1.7 (#43327) 2024-03-23 13:49:41 +01:00
Richard Berger
0abf5ba43c hip: don't set HIP_PATH in ROCm 5.5+ (#42882) 2024-03-23 13:22:06 +01:00
Jonas Eschle
9ab3c1332b py-tensorflow-probability: Re-activate +py-jax variant for @0.20:, Add 0.23.0 (#43249) 2024-03-23 13:15:26 +01:00
Christopher Christofi
b6425da50f New package: qctool (#43326) 2024-03-23 05:46:03 -06:00
YI Zeping
937a4dbf69 NWChem package: expand fftw patch applicability range to 7.2.2 (#43321)
* expand fftw patch applicability range to include 7.2.2
* fix formatting bug
2024-03-23 05:41:34 -06:00
Gavin John
cd779ee54d octave package: correction to jdk variant description (#43325) 2024-03-23 05:41:11 -06:00
Gavin John
7ddcb13325 openjdk package: use correct homepage (#43324) 2024-03-23 05:40:58 -06:00
Gavin John
7666046ce3 icedtea: change jdk dependency to java (#43323) 2024-03-23 05:40:43 -06:00
Miguel Dias Costa
8e89e61402 BerkeleyGW package: add version 4.0 (#43316) 2024-03-23 04:02:44 -06:00
Auriane R
d0dbfaa5d6 aws-ofi-nccl package: add versions including 1.8.1 (#43305)
The default url couldn't be the one with v0.0.0-aws, since spack was
replacing v0.0.0-aws with v<version_number>, thereby deleting the
-aws suffix. I used the url_for_version method to specify this suffix.
2024-03-22 17:49:00 -07:00
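A minimal sketch of such a `url_for_version` override inside the package class (the exact tag/URL layout here is an assumption, not copied from the recipe):

```python
def url_for_version(self, version):
    # Re-attach the "-aws" suffix that Spack's default v<version>
    # substitution would otherwise drop.
    return f"https://github.com/aws/aws-ofi-nccl/archive/refs/tags/v{version}-aws.tar.gz"
```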
Gavin John
26f562b5a7 New package: py-kneaddata (#43310) 2024-03-22 17:20:52 -07:00
Chris Marsh
2967804da1 netcdf-cxx4: fix reference error (#43322) 2024-03-22 16:05:17 -07:00
Harmen Stoppels
c3eaf4d6cf Support for prereleases (#43140)
This adds support for prereleases. Alpha, beta and release candidate
suffixes are ordered in the intuitive way:

```
1.2.0-alpha < 1.2.0-alpha.1 < 1.2.0-beta.2 < 1.2.0-rc.3 < 1.2.0 < 1.2.0-xyz
```

Alpha, beta and rc prereleases are defined as follows: split the version
string into components like before (on delimiters and string boundaries).
If there's a string component `alpha`, `beta` or `rc` followed by an optional
numeric component at the end, then the version is a prerelease.

So `1.2.0-alpha.1 == 1.2.0alpha1 == 1.2.0.alpha1` are all the same, as usual.

The strings `alpha`, `beta` and `rc` are chosen because they match semver,
they are sufficiently long to be unambiguous, and all contain at least
one non-hex character to distinguish them from shasum/digest type suffixes.

The comparison key is now stored as `(release_tuple, prerelease_tuple)`, so in
the above example:

```
((1,2,0),(ALPHA,)) < ((1,2,0),(ALPHA,1)) < ((1,2,0),(BETA,2)) < ((1,2,0),(RC,3)) < ((1,2,0),(FINAL,)) < ((1,2,0,"xyz"), (FINAL,))
```

The version ranges `@1.2.0:` and `@:1.1` do *not* include prereleases of
`1.2.0`.

So for packaging, if the `1.2.0alpha` and `1.2.0` versions have the same constraints on
dependencies, it's best to write

```python
depends_on("x@1:", when="@1.2.0alpha:")
```

However, `@1.2:` does include `1.2.0alpha`. This is because Spack considers
`1.2 < 1.2.0` as distinct versions, with `1.2 < 1.2.0alpha < 1.2.0` as a consequence.

Alternatively, the above `depends_on` statement can thus be written

```python
depends_on("x@1:", when="@1.2:")
```

which can be useful too: a short-hand that includes prereleases, while you
can still exclude them explicitly by spelling out the patch version number.

### Concretization

Concretization uses a different version order than `<`. Prereleases are ordered
between final releases and develop versions. That way, users should not
have to set `preferred=True` on every final release if they add just one
prerelease to a package. The concretizer is unlikely to pick a prerelease when
final releases are possible.

### Limitations

1. You can't express a range that includes all alpha releases but excludes all beta
   releases. The only alternative is good old repeated nines: `@:1.2.0alpha99`.

2. The Python ecosystem defaults to `a`, `b`, `rc` strings, so translation of Python versions to
   Spack versions requires expansion to `alpha`, `beta`, `rc`. It's mildly annoying, because
   this means we may need to compute URLs differently (not done in this commit).

### Hash

Care is taken not to break hashes of versions that do not have a prerelease
suffix.
2024-03-22 23:30:32 +01:00
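The ordering of comparison keys can be checked with plain tuple comparison; an illustrative snippet, using integers as stand-ins for the sentinels (Spack's internal representation may differ):

```python
ALPHA, BETA, RC, FINAL = 0, 1, 2, 3  # stand-ins for the prerelease sentinels

keys = [
    ((1, 2, 0), (ALPHA,)),
    ((1, 2, 0), (ALPHA, 1)),
    ((1, 2, 0), (BETA, 2)),
    ((1, 2, 0), (RC, 3)),
    ((1, 2, 0), (FINAL,)),
    ((1, 2, 0, "xyz"), (FINAL,)),
]
# Lexicographic tuple comparison reproduces the documented order.
assert keys == sorted(keys)
```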
John W. Parent
397334a4be Spack CI: Refactor process_command for Cross Platform support (#39739)
Generate CI scripts as powershell on Windows. This is intended to
output exactly the same bash scripts as before on Linux.

Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
2024-03-22 14:06:29 -07:00
Harmen Stoppels
434836be81 python wheels: do not "expand" (#43317) 2024-03-22 16:57:46 +01:00
Thomas-Ulrich
7b9b976f40 openssh: add 9.7p1 and 9.6p1; update workaround for clang (#40857)
Signed-off-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-03-22 08:24:32 -06:00
Mikael Simberg
4746e8a048 apex: Set APEX_WITH_KOKKOS CMake option in apex package (#43243)
* Make sure APEX_WITH_KOKKOS CMake option is set in apex

* Add conflict for apex with ~kokkos
2024-03-22 08:50:29 +01:00
Rocco Meli
69c684fef9 ELPA: enable GPU streams and update deprecated variables (#43145)
* elpa streams

* [@spackbot] updating style on behalf of RMeli

* Apply suggestions from @albestro

* Update var/spack/repos/builtin/packages/elpa/package.py

Co-authored-by: Mikael Simberg <mikael.simberg@iki.fi>

---------

Co-authored-by: RMeli <RMeli@users.noreply.github.com>
Co-authored-by: Mikael Simberg <mikael.simberg@iki.fi>
2024-03-22 08:42:24 +01:00
Valentin Volkl
2314aeb884 py-cmake: only run test suite when run_tests (#43246)
As the cmake build is triggered by scikit-build, the usual spack option
for enabling tests had no effect and the heavy test suite ran all the time.

Used https://github.com/scikit-build/cmake-python-distributions/issues/172#issuecomment-890322263
to work out how to pass options to the actual `cmake` build.

I also excluded some tests that failed for me on alma9 (gcc 11.4.1),
so the rest of the test suite can be run.
2024-03-22 04:34:43 +01:00
Martin Aumüller
d33e10a695 ffmpeg: add v6.1.1 and older patch release updates (#43050) 2024-03-22 03:39:53 +01:00
Henning Glawe
7668a0889a ncurses: Add terminfo for rxvt-unicode{,-256color} (#42721)
taken from the debian ncurses source package.
2024-03-22 02:50:24 +01:00
Thomas-Ulrich
d7a74bde9f easi: add v1.3.0, python bindings and master (#42784) 2024-03-22 02:44:20 +01:00
AMD Toolchain Support
fedf8128ae openblas: Add variant dynamic_dispatch: select best kernel at runtime (#42746)
Enable OpenBLAS's built-in CPU capability detection and kernel selection. 

This allows run-time selection of the "best" kernels for the running CPU, rather
than what is specified at build time. For example, it allows OpenBLAS to use
AVX512 kernels when running on ZEN4, even when built targeting the "ZEN" architecture.

Co-authored-by: Branden Moore <branden.moore@amd.com>
2024-03-22 02:27:21 +01:00
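A sketch of how such a variant typically maps onto OpenBLAS's build system; `DYNAMIC_ARCH=1` is OpenBLAS's own flag for runtime kernel dispatch, but the surrounding recipe code and helper name are illustrative, not the actual package.py:

```python
from spack.package import *


class Openblas(MakefilePackage):
    # ... rest of the recipe elided ...
    variant("dynamic_dispatch", default=True,
            description="Select the best kernels for the running CPU at runtime")

    def make_args(self):  # illustrative helper name
        args = []
        if self.spec.satisfies("+dynamic_dispatch"):
            # Build kernels for many CPU families; pick one via CPUID at runtime.
            args.append("DYNAMIC_ARCH=1")
        return args
```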
Richard Berger
f70af2cc57 libristra: depends_on() for lua: allow newer lua versions (#42810) 2024-03-22 01:53:33 +01:00
Alex Leute
50562e6a0e py-neptune-client and missing deps: new package (#43059)
Co-authored-by: Cecilia Lau <chlits@rit.edu>
Co-authored-by: Jen Herting <jen@herting.cc>
2024-03-22 01:51:13 +01:00
Sergey Kosukhin
4ac51b2127 libiconv: fix building with nvhpc (#43033) 2024-03-22 01:43:30 +01:00
HELICS-bot
81c9e346dc helics: Add version 3.5.1 (#43314)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-03-21 18:08:36 -06:00
Christopher Christofi
73e16a7881 py-mrcfile: add new version (#43125) 2024-03-22 00:00:15 +01:00
Alex Richert
af8868fa47 xv (image viewer): Add missing depends_on(libxt) (#43277) 2024-03-21 23:24:29 +01:00
Rémi Lacroix
cfd4e356f8 x264: Tag a recent commit: 20240314 (#43304)
x264 does not publish releases so tag a newer commit in Spack in order to be able to use an up-to-date version.
2024-03-21 23:20:00 +01:00
Adam J. Stewart
fc87dcad4c py-torchmetrics: add v1.3.2 (#43244) 2024-03-21 23:00:32 +01:00
snehring
65472159c7 dorado: adding version 0.5.3 (#43313) 2024-03-21 22:59:20 +01:00
Stephen Hudson
d1f9d8f06d libEnsemble: add v1.2.2 (#43308) 2024-03-21 22:57:19 +01:00
Rocco Meli
67ac9c46a8 namd: disable parallel build for 3.0b3 (#43215) 2024-03-21 22:26:36 +01:00
Stephen Sachs
aa39465188 Re enable aws pcluster buildcache stack (#38931)
* Changes to re-enable aws-pcluster pipelines

- Use compilers from pre-installed spack store such that compiler path relocation works when downloading from buildcache.
- Install gcc from hash so there is no risk of building gcc from source in the pipeline.
- `packages.yaml` files are now part of the pipelines.
- No more external `postinstall.sh`. The necessary steps are in `setup-pcluster.sh` and will be version controlled within this repo.
- Re-enable pipelines.

* Add  and

* Debugging output & mv skylake -> skylake_avx512

* Explicitly check for packages

* Handle case with no intel compiler

* compatibility when using setup-pcluster.sh on a pre-installed cluster.

* Disable palace as parser cannot read require clause at the moment

* ifort cannot build superlu in buildcache

`ifort` is unable to handle such long file names as used when cmake compiles
test programs inside build cache.

* Fix spack commit for intel compiler installation

* Need to fetch other commits before using them

* fix style

* Add TODO

* Update packages.yaml to not use 'compiler:', 'target:' or 'provider:'

Synchronize with changes in https://github.com/spack/spack-configs/blob/main/AWS/parallelcluster/

* Use Intel compiler from later version (orig commit no longer found)

* Use envsubst to deal with quoted newlines

This is cleaner than the `eval` command used.

* Need to fetch tags for checkout on version number

* Intel compiler needs to be from version that has compatible DB

* Install intel compiler with commit that has DB ver 7

* Decouple the intel compiler installation from current commit

- Use a completely different spack installation such that this current pipeline
commit remains untouched.
- Make the script succeed even if the compiler installation fails (e.g. because
the Database version has been updated)
- Make the install targets fall back to gcc in case the compiler did not install
correctly.

* Use generic target for x86_64_vX

There is no way to provision a skylake/icelake/zen runner. They are all in the
same pools under x86_64_v3 and x86_64_v4.

* Find the intel compiler in the current spack installation

* Remove SPACK_TARGET_ARCH

* Fix virtual package index & use package.yaml for intel compiler

* Use only one stack & pipeline per generic architecture

* Fix yaml format

* Cleanup typos

* Include fix for ifx.cfg to get the right gcc toolchain when linking

* [removeme] Adding timeout to debug hang in make (palace)

* Revert "[removeme] Adding timeout to debug hang in make (palace)"

This reverts commit fee8a01580489a4ea364368459e9353b46d0d7e2.

* palace x86_64_v4 gets stuck when compiling try newer oneapi

* Update comment

* Use the latest container image

* Update gcc_hashes to match new container

* Use only one tag providing tags per extends call

Also removed an unnecessary tag.

* Move generic setup script out of individual stack

* Cleanup from last commit

* Enable checking signature for packages available on the container

* Remove commented packages / Add comment for palace

* Enable openmpi@5 which needs pmix>3

* don't look for intel compiler on aarch64
2024-03-21 14:45:05 -05:00
downloadico
09810a5e7c py-cig-pythia: add py-cig-pythia package to spack (#43294) 2024-03-21 11:11:14 -07:00
Alec Scott
446c0f2325 Disable interactive editor when --batch if passed to checksum (#43102) 2024-03-21 18:15:09 +01:00
Auriane R
c4ce51c9be Add flash-attn package (#42939) 2024-03-21 07:23:07 -06:00
Harmen Stoppels
1f63a764ac jdk: new versions (#43264) 2024-03-21 12:49:29 +01:00
Alec Scott
384e198304 py-python-lsp-server: add v1.10.0 (#42799)
* py-python-lsp-server: add v1.10.0

* Apply suggestions from code review

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Remove py-wheel from package

* Apply suggestions from code review

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2024-03-21 10:36:15 +01:00
Auriane R
2303332415 Update aws-ofi-nccl to use the hwloc option (#43287) 2024-03-21 09:38:40 +01:00
Tom Scogland
0eb1957999 cmd/python: use runpy to allow multiprocessing in scripts (#41789)
Running a `spack-python` script like this:

```python

import spack
import multiprocessing

def echo(args):
    print(args)

if __name__ == "__main__":
    pool = multiprocessing.Pool(2)
    pool.map(echo, range(10))
```

will fail in `develop` with an error like this:

```console
_pickle.PicklingError: Can't pickle <function echo at 0x104865820>: attribute lookup echo on __main__ failed
```

Python expects to be able to look up the function `echo` in `sys.modules["__main__"]` in
subprocesses spawned by `multiprocessing`, but because we use `InteractiveConsole` to
run `spack python`, the executed file isn't considered to be the `__main__` module, and
lookups in subprocesses fail. We tried to fake this by setting `__name__` to `__main__`
in the `spack python` command, but that doesn't fix the fact that no `__main__` module
exists.

Another annoyance with `InteractiveConsole` is that `__file__` is not defined in the
main script scope, so you can't use it in your scripts.

We can use the [runpy.run_path()](https://docs.python.org/3/library/runpy.html#runpy.run_path) function,
which has been around since Python 3.2, to fix this.

- [x] Use `runpy` module to launch non-interactive `spack python` invocations
- [x] Only use `InteractiveConsole` for interactive `spack python`
2024-03-21 01:32:28 -07:00
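A minimal sketch of the `runpy` approach (the file name is a placeholder):

```python
import runpy

# Execute the script as a real __main__ module, so that functions defined in
# it can be found by multiprocessing workers, and __file__ is set as expected.
runpy.run_path("my_script.py", run_name="__main__")
```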
Greg Becker
de1f9593c6 cray: return false more readily in detection logic (#43150)
Often in containers, the files we use to detect whether a cray system supports new features are not available.

Given that the cray containers only support the newer versions, and that these versions have been
around for a while at this point and few sites don't support them, this PR changes the logic for
detecting cray systems so that:

1. Don't even consider whether something is the `cray` platform if `opt/cray` is not in `MODULEPATH`
2. Only use the `cray` platform if we can read files in /opt/cray/pe and positively detect an older version
3. Otherwise, assume we're *not* on a cray (this includes newer Cray PEs, which we treat as Linux); see the sketch below
2024-03-20 15:43:45 -07:00
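A sketch of the three-step logic above; the file path and the "older version" check are placeholders, not Spack's actual detection code:

```python
import os


def is_legacy_cray(pe_version_file="/opt/cray/pe/SOME-VERSION-FILE"):
    # 1. Not even a candidate unless Cray modules are visible.
    if "opt/cray" not in os.environ.get("MODULEPATH", ""):
        return False
    # 2./3. Only claim "cray" on a positive read of an older PE version;
    # unreadable files (e.g. in containers, or on newer PEs) mean plain Linux.
    try:
        with open(pe_version_file) as f:
            version = f.read().strip()
    except OSError:
        return False
    return version_is_older_pe(version)


def version_is_older_pe(version: str) -> bool:
    # Placeholder predicate: treat anything before 22.x as "older".
    major = int(version.split(".")[0])
    return major < 22
```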
Christopher Christofi
65fa71c1b4 py-pycm: new package (#43251)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Signed-off-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
2024-03-20 19:02:36 +01:00
Martin Lang
9802649716 berkeleygw: update FCPP flags for gcc (#42848)
Compilation with the old flags fails on PowerPC (power8le) due to syntax
errors in the output from the preprocessor. Compilation with the
extended set of flags works both on PowerPC and x86_64.

The correct set of flags was suggested by the berkeleygw developers:
https://groups.google.com/a/berkeleygw.org/g/help/c/ewi3RZgOyeE/m/jSIoe45PAgAJ

Uses spack-provided compiler prefix to call cpp when compiling berkeleygw with gcc.
2024-03-20 18:55:54 +01:00
Alex Leute
8d9d721f07 py-postcactus: new package (#42907)
Co-authored-by: Sid Pendelberry <sid@rit.edu>
2024-03-20 18:50:52 +01:00
Greg Becker
ecef72c471 Target.optimization_flags converts non-numeric versions to numeric (#43179) 2024-03-20 09:39:26 -07:00
Robert Cohn
485b6e2170 Update Intel download URLs (#43286) 2024-03-20 12:34:03 -04:00
mvlopri
ba02c6b70f seacas: update the variants and tpls (#43195)
Adds variants to turn off tests
Adds variants for some missing TPL options
Adds the variables required to build with ~shared

* Add pamgen to Trilinos as a variant to support SEACAS

This adds the ability to turn off and on pamgen as needed
through the variant interface for the Trilinos package.py.
Add changes for seacas package.py to build the appropriate
Trilinos variants.
Add zlib-api as depends_on instead of zlib directly for SEACAS
package.py

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-03-20 10:14:24 -06:00
Juan Miguel Carceller
7028669d50 fastjet: add version 3.4.2 (#43285)
* fastjet: add version 3.4.2

* Change the range of the ATLAS patch

---------

Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-03-20 16:46:09 +01:00
Dom Heinzeller
2f0a73f7ef Define variant typescript for py-jupyter-server with explicit dependency on npm (#43279)
* Add variant typescript for py-jupyter-server@:1, which then requires npm/node. Patch the build system for ~typescript so that it doesn't find any npm/node installations and attempt to build the typescript extension even though it shouldn't

* Fix formatting in var/spack/repos/builtin/packages/py-jupyter-server/package.py

* Constrain typescript variant to py-jupyter-server versions 1.10.2:1

* with when not needed if variant doesn't exist for other versions
2024-03-20 08:03:18 -06:00
Massimiliano Culpo
7cb0dbf77a Remove optimization criterion on OS mismatches (#43282) 2024-03-20 04:57:56 -06:00
G-Ragghianti
ac8800ffc7 heffte: Update MKL dependency to intel-oneapi-mkl (#43273) 2024-03-20 03:28:21 -06:00
Richard Berger
eb11fa7d18 lua-sol2: merge duplicate sol2 package into it (#43155) 2024-03-20 03:22:58 -06:00
Mikael Simberg
4d8381a775 fmt: Add master branch as version (#43239) 2024-03-20 03:18:06 -06:00
jmlapre
de5e20fc21 py-snoop: new package (#42945) 2024-03-20 01:18:24 +01:00
Adam J. Stewart
c33af49ed5 py-keras: add v3.1.0 (#43268) 2024-03-20 01:10:14 +01:00
Martin Aumüller
3addda6c4d mgard: disable C++11 warning also for apple-clang@15 (#43170) 2024-03-19 17:27:42 -06:00
AMD Toolchain Support
33f6f55d6b aocl-sparse: fix inconsistency in dependency logic (#43259) 2024-03-19 16:45:39 -06:00
AMD Toolchain Support
41d20d3731 amdblibm: add support for parallel build (#43258) 2024-03-19 16:45:09 -06:00
Bill Williams
dde8fa5561 scorep: add conflict for ROCm6 (#43240)
Co-authored-by: William Williams <william.williams@tu-dresden.de>
Co-authored-by: wrwilliams <wrwilliams@users.noreply.github.com>
2024-03-19 16:40:08 -06:00
Mikael Simberg
588a94bc8c apex: add v2.6.5 (#43242) 2024-03-19 16:02:58 -06:00
Alec Scott
06392f2c01 gnutls: add v3.8.3 (#43229)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2024-03-19 22:23:19 +01:00
Tom Payerle
f16e29559e n2p2: add --no-print-directory flag to calls to "make" (#43196)
This should fix issue #43192

Basically, had issue where a make variable was set to the output
of a shell function which included cd commands, and then the value
of that variable used as a makefile target.

The cd commands in the shell function caused assorted informational
messages (e.g. "Entering directory ...") which got included in the
return of the shell function, corrupting the value of the variable.
The presence of colons in the corrupted value caused make to issue
"multiple target" erros.

This fix adds --no-print-directory flags to the calls to the
make function in the package's build method, which resolves the
issue above.

It is admittedly a crude fix, and will remove *all* informational
messages re directory changes, thereby potentially making it more
difficult to diagnose/debug future issues building this package.
However, I do not see a way to turn off these messages in a
more surgical manner.
2024-03-19 20:04:15 +01:00
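A sketch of the shape of this fix in the package's build method; the target name is a placeholder, and the real recipe passes its own targets and flags:

```python
from spack.package import *


class N2p2(MakefilePackage):
    # ... rest of the recipe elided ...
    def build(self, spec, prefix):
        # --no-print-directory keeps "Entering directory ..." chatter out of
        # $(shell ...) output, so make variables no longer get corrupted.
        make("--no-print-directory", "some-target")
```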
Rémi Lacroix
ea96403157 ffmpeg: Fix patch hash (#43269) 2024-03-19 19:46:02 +01:00
G-Ragghianti
b659eac453 slate: add v2023.11.05 (#42913)
* Updated version of slate

* Added rocm version conflict

* Added patch to fix openMP problem
2024-03-19 08:58:34 -07:00
John W. Parent
ab590cc03a WGL: Update libs for new archspec on Win (#43253) 2024-03-19 09:43:29 -06:00
Harmen Stoppels
1a007a842b cmake: deprecate old patch releases and add missing gmake dep (#43261) 2024-03-19 15:41:50 +01:00
Martin Aumüller
9756354998 mgard: don't restrict protobuf version more than necessary (#43172)
* mgard: don't restrict protobuf version more than necessary

successfully built:
mgard@2022-11-18 ^protobuf@3.{4,21,25}
mgard@2023-01-10 ^protobuf@3.{4,25}
mgard@2023-03-31 ^protobuf@3.{4,25}

compile failures:
mgard@2022-11-18 ^protobuf@3.3
mgard@2023-01-10 ^protobuf@3.3
mgard@2023-03-31 ^protobuf@3.3

* mgard: add conflicts to address CI errors

* mgard: conflict between cuda and abseil@20240116.1

compiling mgard+cuda with gcc@12.3.0 and nvcc from cuda@12.3.0 against
protobuf pulling in abseil-cpp@20240116.1 results in the errors reported
here: https://github.com/abseil/abseil-cpp/issues/1629
2024-03-19 09:29:44 +01:00
Sinan
3984dd750c package/py-setuptools_add_new_versions (#43180)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2024-03-18 12:33:39 -06:00
Gregor Daiß
d5c1e16e43 sgpp: update dependency versions (#43178)
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-03-18 15:52:16 +01:00
Sinan
56ace9a087 py-wand: add v0.6.13 (#42972) 2024-03-18 15:13:50 +01:00
jdomke
6e0bab1706 fujitsu-mpi: add gcc and clang support (#43053)
Co-authored-by: domke <673751-domke@users.noreply.gitlab.com>
2024-03-18 15:12:44 +01:00
John W. Parent
193386f6ac netcdfc: consider static build in pkgconf filtering (#43084) 2024-03-18 14:40:32 +01:00
Christopher Christofi
755131fcdf py-optax: add new version (#43169) 2024-03-18 14:19:53 +01:00
dependabot[bot]
9a71733adb build(deps): bump docker/setup-buildx-action from 3.1.0 to 3.2.0 (#43204)
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 3.1.0 to 3.2.0.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](0d103c3126...2b51285047)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-18 14:10:11 +01:00
dependabot[bot]
cd919d51ea build(deps): bump docker/build-push-action from 5.2.0 to 5.3.0 (#43205)
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5.2.0 to 5.3.0.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](af5a7ed5ba...2cdde995de)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-18 14:09:29 +01:00
George Young
12adf66d07 telocal: add new package (#43241)
Co-authored-by: LMS Bioinformatics <bioinformatics@lms.mrc.ac.uk>
2024-03-18 07:08:32 -06:00
afzpatel
c02f58da8f llvm-amdgpu: add rpath to HIP rt (#42876) 2024-03-18 13:37:27 +01:00
Harmen Stoppels
9662d181a0 use directives in some packages (#43238) 2024-03-18 12:53:53 +01:00
Alec Scott
282df7aecc fzf: add v0.48.0 (#43230) 2024-03-18 12:53:37 +01:00
wspear
b4c0e6f03b scorep: add v8.4 (#43225) 2024-03-18 11:23:28 +01:00
Wileam Y. Phan
4cd8488139 intel-gtpin: add version 4.0 (#43216) 2024-03-18 11:09:29 +01:00
George Young
69a052841c py-cutadapt: updating to @4.7 (#43214)
Co-authored-by: LMS Bioinformatics <bioinformatics@lms.mrc.ac.uk>
2024-03-18 11:08:14 +01:00
Matthias Wolf
a3f39890c2 py-shacl: new version, update dependencies (#42905)
* py-shacl: new version, update dependencies

Also updates the dependencies py-prettytable and py-rdflib.

* review comments

* Update var/spack/repos/builtin/packages/py-pyshacl/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* py-poetry-core: add required 1.8.1

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2024-03-18 10:56:40 +01:00
Christopher Christofi
02d126ce2b py-geopandas: add new version 0.14.3 (#43235)
* py-geopandas: add new version

* add when specification on setuptools requirement
2024-03-18 10:52:19 +01:00
Christopher Christofi
339a63370f py-pyshp: add new version (#43234) 2024-03-18 10:23:11 +01:00
Christopher Christofi
fef6aed627 py-branca: add new package (#43236) 2024-03-18 10:16:58 +01:00
afzpatel
3445da807e rocm-smi-lib: remove standalone test and add build time test (#43129) 2024-03-18 10:13:08 +01:00
Pieter P
429c3598af Fix CMake generator documentation (#43232) 2024-03-18 10:02:33 +01:00
Todd Gamblin
3d8136493a performance: avoid jinja2 import at startup unless needed (#43237)
`jinja2` can be a costly import, and right now it happens at startup every time we run
Spack. This slows down `spack --print-shell-vars` a bit, which is needed by `setup-env.*sh`.
2024-03-18 10:00:37 +01:00
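The usual pattern for this kind of fix is to defer the import into the function that needs it; a generic sketch, not the actual Spack code:

```python
def render_template(name, context):
    # Deferred import: plain startup paths never pay jinja2's import cost.
    import jinja2

    env = jinja2.Environment(loader=jinja2.FileSystemLoader("templates"))
    return env.get_template(name).render(**context)
```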
Sergey Kosukhin
8cd160db85 zlib-ng: add variant new_strategies (#43219) 2024-03-18 09:42:43 +01:00
Simon Pintarelli
a7dd756b34 gcc 12.3 ICE patch for aarch64 (#43093)
* gcc12.3 patch for ICE on aarch64

* aarch64 ICE patch for gcc@13.2
2024-03-18 09:12:49 +01:00
Thomas Padioleau
53be280681 Remove bundled fmt (#43210) 2024-03-17 21:43:40 -07:00
Seth R. Johnson
5ab10d57be geant4: add maintainer, clean args (#43218) 2024-03-16 17:29:03 +00:00
dependabot[bot]
96061d2c00 build(deps): bump black from 24.2.0 to 24.3.0 in /lib/spack/docs (#43228)
Bumps [black](https://github.com/psf/black) from 24.2.0 to 24.3.0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/compare/24.2.0...24.3.0)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-15 17:24:44 -06:00
dependabot[bot]
e78d20dc84 build(deps): bump black in /.github/workflows/style (#43227)
Bumps [black](https://github.com/psf/black) from 24.2.0 to 24.3.0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/compare/24.2.0...24.3.0)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-15 16:09:59 -07:00
Adam J. Stewart
6d2341c109 py-black: add v24.3.0 (#43226) 2024-03-15 16:09:17 -07:00
Seth R. Johnson
968ad02473 geant4: patch old versions to work on new compiler/ubuntu (#43212)
* geant4: patch old version for %clang@15

* Backport ascii-V10-07-03

* Apply to all compilers, but only for 10.5-10.6
2024-03-15 12:22:08 -06:00
Jose E. Roman
b93882804f New patch release SLEPc 3.20.2 (#43211) 2024-03-15 09:51:08 -07:00
Wouter Deconinck
f58ebd4fbb pandora{pfa,sdk,monitoring}: new HEP package for particle flow analysis (#37714)
* pandorapfa: new package

* pandorasdk: new package

* [@spackbot] updating style on behalf of wdconinc

* pandorasdk: use self.define

* pandoramonitoring: new package

* pandorasdk: new version 3.4.2

* pandora{pfa,sdk,monitoring}: add maintainer jmcarcell

Co-authored-by: Juan Miguel Carceller <22276694+jmcarcell@users.noreply.github.com>

* pandora{pfa,sdk,monitoring}: add maintainer jmcarcell

Co-authored-by: Juan Miguel Carceller <22276694+jmcarcell@users.noreply.github.com>

* pandora{pfa,sdk,monitoring}: add maintainer jmcarcell

Co-authored-by: Juan Miguel Carceller <22276694+jmcarcell@users.noreply.github.com>

---------

Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
Co-authored-by: Juan Miguel Carceller <22276694+jmcarcell@users.noreply.github.com>
2024-03-15 14:58:36 +01:00
Massimiliano Culpo
6f7f9528e5 cray-rhel: add a lower bound to mgard (#43187) 2024-03-15 11:25:56 +01:00
Greg Becker
59c7ff8683 Allow compilers to be configured in packages.yaml (#42016)
Co-authored-by: becker33 <becker33@users.noreply.github.com>
2024-03-15 11:01:49 +01:00
John W. Parent
4495e0341d Clingo bootstrapping: Remove msvc constraint (#43199)
Patch allowing Clingo to build with VS22 has landed both in Spack
and Clingo upstream, update Spack's bootstrap constraints to handle
this.

Additionally, properly scope the patch application in the clingo
package to handle upstream patch.
2024-03-14 19:08:38 -06:00
eugeneswalker
ba39924046 e4s cray ci: mgard is broken, disable spec (#43194) 2024-03-14 14:06:36 -07:00
Sergey Kosukhin
751c3fef86 nag: add version 7.2.7200 (#43188) 2024-03-14 15:04:05 -06:00
Adrien Bernede
102811adb9 Fix Axom: index out of range when configuring axom~mpi on toss_4 (#43186) 2024-03-14 14:49:09 -06:00
Massimiliano Culpo
8f56eb620f Improve error message when an unknown compiler is requested (#43143)
Fixes #43141
2024-03-14 13:47:56 -07:00
Peter Scheibel
ec517b40e9 spack develop: stage build artifacts in same root as non-dev builds (#41373)
Currently (outside of this PR) when you `spack develop` a path, this path is treated as the staging
directory (this means that for example all build artifacts are placed in the develop path).

This PR creates a separate staging directory for all `spack develop`ed builds. It looks like

```
# the stage root
/the-stage-root-for-all-spack-builds/
    spack-stage-<hash>
        # Spack packages inheriting CMakePackage put their build artifacts here
        spack-build-<hash>/
```

Unlike non-develop builds, there is no `spack-src` directory, `source_path` is the provided `dev_path`.
Instead, separately, in the `dev_path`, we have:

```
/dev/path/for/foo/
    build-{arch}-<hash> -> /the-stage-root-for-all-spack-builds/spack-stage-<hash>/
```

The main benefit of this is that build artifacts for out-of-source builds that are relative to
`Stage.path` are easily identified (and you can delete them with `spack clean`).

Other behavior added here:

- [x] A symlink is made from the `dev_path` to the stage directory. This symlink name incorporates
    spec details, so that multiple Spack environments that develop the same path will not conflict
    with one another

- [x] `spack cd` and `spack location` have added a `-c` shorthand for `--source-dir`

Spack builds can still change the develop path (in particular to keep track of applied patches), 
and for in-source builds, this doesn't change much (although logs would not be written into 
the develop path). Packages inheriting from `CMakePackage` should get this benefit
automatically though.
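As a sketch of the resulting workflow (package name, paths, and hash are illustrative):

```console
$ spack develop --path /dev/path/for/foo foo@main   # register the dev path in the env
$ spack install                                     # stages under the shared stage root
$ spack cd -c foo                                   # new -c shorthand for --source-dir
$ ls /dev/path/for/foo
build-linux-abc123 -> /the-stage-root-for-all-spack-builds/spack-stage-abc123/
```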
2024-03-14 13:32:01 -07:00
Martin Aumüller
22cb3815fe ispc: add v1.22.0 & v1.23.0 (#43159)
* ispc: add v1.22.0 & v1.23.0
* ispc: fix build on macos
  apply upstream patch
* ispc: demote llvm to just a build dependency
  still builds fine
2024-03-14 10:59:45 -07:00
Greg Becker
f549354f78 move --deprecated arg to concretizer args (#43177) 2024-03-14 18:43:52 +01:00
Scott Wittenburg
dc212d0e59 mgard: add version 2023-12-09 (1.5.2) (#41493) 2024-03-14 11:15:47 -06:00
victor-decaria-nnl
8f14acb139 mfem: add MUMPS option (#42929)
* Added MUMPS option to mfem
* implement suggestions
* loosened mumps+metis dependency to just mumps
2024-03-14 09:09:24 -07:00
Harmen Stoppels
c38ef72b06 compiler.py: simplify implicit link dir bits (#43078) 2024-03-14 08:54:47 +01:00
jmuddnv
7d67d9ece4 nvhpc: add v24.3 (#43175) 2024-03-14 08:35:09 +01:00
Carlos Bederián
2c30962c74 amduprof: new package (#30575)
Co-authored-by: zzzoom <zzzoom@users.noreply.github.com>
2024-03-14 07:44:43 +01:00
dependabot[bot]
cc28334049 build(deps): bump pytest from 8.0.2 to 8.1.1 in /lib/spack/docs (#43134)
Bumps [pytest](https://github.com/pytest-dev/pytest) from 8.0.2 to 8.1.1.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/8.0.2...8.1.1)

---
updated-dependencies:
- dependency-name: pytest
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-14 07:40:15 +01:00
dependabot[bot]
dbdf5bacc4 build(deps): bump actions/checkout from 4.1.1 to 4.1.2 (#43152)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.1.1 to 4.1.2.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](b4ffde65f4...9bb56186c3)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-14 05:55:08 +01:00
Rocco Meli
531f01f0b9 highway: add v1.1.0 (#43174) 2024-03-14 05:43:37 +01:00
Sebastian Pipping
794593b478 expat: Add release 2.6.2 with security fixes (#43171) 2024-03-13 22:43:00 -06:00
Tom Scogland
afcf0d2e39 llvm-openmp: make llvm-openmp consistent with other llvm builds (#43165)
Add current versions of the 17 and 18 releases

Stop making it nearly impossible to compose this correctly with code built with gcc

Build for compatibility by default like we do in every other llvm package
2024-03-14 05:34:09 +01:00
dependabot[bot]
29ee861366 build(deps): bump docker/login-action from 3.0.0 to 3.1.0 (#43176)
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.0.0 to 3.1.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](343f7c4344...e92390c5fb)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-14 05:27:16 +01:00
John W. Parent
b1a984ef02 msvc: patch property ref bug (#43173) 2024-03-13 22:28:36 +00:00
Christopher Christofi
cc545d8c9a jags: add version 4.3.2 (#43166) 2024-03-13 23:06:03 +01:00
Ashwin Kumar Karnad
49ff816fb0 octopus: disable gdlib by default (#43161) 2024-03-13 13:04:39 -06:00
Ashwin Kumar Karnad
8c33841567 Add iamashwin99 as a maintainer in Octopus package (#43163) 2024-03-13 12:59:41 -06:00
Ashwin Kumar Karnad
21b50fbbe3 octopus: Support new version octopus@14 (#43160)
* add checksum for octopus 14

* Update dependencies for BerkeleyGW package
2024-03-13 12:59:16 -06:00
Martin Aumüller
2a8e503a04 protobuf: apply centos 8 patch only to @3.4: (#43162)
does not apply cleanly to older versions
2024-03-13 12:54:02 -06:00
Wileam Y. Phan
4b695d4722 rocm-openmp-extras: Fix resource download URLs (#43147)
* rocm-openmp-extras: Fix resource download URLs

* Apply review suggestions
2024-03-13 12:53:46 -06:00
Juan Miguel Carceller
2fa816184e py-ruff: add version 0.3.0 and deprecate the oldest version (#43069)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-03-13 11:03:28 -07:00
Brian Spilner
25f622e809 update to cdo-2.4.0 (#43071) 2024-03-13 11:01:41 -07:00
吴坎
7506acabe7 antlr4-cpp-runtime: update dependencies (#43115)
* update dependencies

* Update package.py
2024-03-13 11:00:54 -07:00
Dom Heinzeller
3a828358cb Update var/spack/repos/builtin/packages/ecmwf-atlas/package.py: set correct ectrans/trans variant, configure tesselation variant (#43151) 2024-03-13 10:19:24 -07:00
Adam J. Stewart
94a1d1414a spack.patch: support reversing patches (#43040)
The `patch()` directive can now be invoked with `reverse=True` to apply a patch in reverse.
This is useful for reverting commits that caused errors in projects, even if only the forward
patch is available, e.g. via a GitHub commit patch URL.

`patch(..., reverse=True)` runs `patch -R` behind the scenes. This is a POSIX option so we
can expect it to be available on the `patch` command.
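For illustration, a hypothetical use of the new option in a package recipe (the URL, checksum, and version range below are placeholders):

```python
# Hypothetical usage sketch; URL, sha256, and version range are placeholders.
patch(
    "https://github.com/example/project/commit/deadbeef.patch?full_index=1",
    sha256="0000000000000000000000000000000000000000000000000000000000000000",
    reverse=True,  # apply with `patch -R`, i.e. revert the commit
    when="@1.2.3",
)
```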

---------

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-03-12 23:22:10 -07:00
downloadico
0f080b38f4 abinit: add version 9.10.5 (#43148)
Set the FC variable to the MPI Fortran compiler, and also set the F90 variable
to the same compiler for versions 9.8 and up. FC needs to be set because the
configure script still uses FC.
2024-03-12 14:04:35 -07:00
Alec Scott
f1ec4859c8 unmaintained packages: add new versions (#43112)
* unmaintained packages: add new versions
* Fix parallel and numactl
* Revert numactl changes
* rollback lua-sol2 version
* Update alluxio version format
2024-03-12 13:34:17 -07:00
Greg Sjaardema
63baba0308 SEACAS: Make latest release available in spack (#43123) 2024-03-12 11:32:32 -07:00
Christopher Christofi
aeec861544 py-art: add new package (#43127) 2024-03-12 11:29:13 -07:00
Christopher Christofi
e54d4678f9 py-lazy-loader: add new version (#43130) 2024-03-12 11:14:17 -07:00
Dom Heinzeller
187b8adb4f ecmwf-atlas@0.36: depend on ecbuild@3.4: (#43133) 2024-03-12 11:00:15 -07:00
Samuel K. Gutiérrez
d6fd96f024 libquo: Update default version from 1.3.1 to 1.4. (#43057)
Signed-off-by: Samuel K. Gutierrez <samuel@lanl.gov>
2024-03-12 10:28:30 -07:00
Andrey Perestoronin
e3b6d2c3c7 Intel oneapi compilers 2023.2.4 (#43144)
* added intel-oneapi-compilers

* added classics version
2024-03-12 12:54:19 -04:00
Tamara Dahlgren
1e9c46296c perl testing: refactor stand-alone testing into base class (#43044) 2024-03-12 15:39:18 +01:00
Valentin Volkl
48183b37be gaudi: add py-yaml dependency (#43116)
Needed to parse job options in yaml format (https://gitlab.cern.ch/gaudi/Gaudi/-/blob/master/GaudiKernel/python/GaudiKernel/ProcessJobOptions.py#L528).
In principle this is optional, but it is needed for test 35 (GaudiKernel.pytest.nose.test_Configurables), which otherwise fails. There seems to be no harm in adding this as a full dependency.
2024-03-12 08:58:32 -05:00
Oakley Brunt
9a3d248348 py-psyclone and py-fparser: Add new releases (#42744)
* py-psyclone and py-fparser new releases added

* py-fparser: add missing releases for py-psyclone

* py-psyclone: actioned @adamjstewart comments

* py-psyclone: removed py-pytest-pylint

* py-pytest-pylint: added package @ latest version

* py-pytest-pylint: reformatted

* Update var/spack/repos/builtin/packages/py-psyclone/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* py-pytest-pylint: added build deps and runtime dep versions

* py-pytest-pylint: removed version from setuptools

* py-psyclone: add py-pytest-pylint test dep and alphabetize deps

* Update var/spack/repos/builtin/packages/py-pytest-pylint/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* py-psyclone: deps ordered

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2024-03-12 14:46:48 +01:00
Pierre Blanchard
03e22adb5b sleef: add v3.6 (#42978) 2024-03-12 11:04:17 +01:00
Massimiliano Culpo
5f5fc78236 Update archspec to v0.2.3 (#42854) 2024-03-12 09:31:15 +01:00
Adam J. Stewart
e12a8a69c7 py-neuralgcm: add new package (#43117)
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2024-03-11 21:02:03 -06:00
Christopher Christofi
001af62585 perl-math-symbolic: add required perl-parse-recdescent dependency (#42157)
* perl-math-symbolic: add required perl-parse-recdescent dependency

* perl-math-symbolic: add new runtime dependency
2024-03-11 17:09:51 -06:00
Christopher Christofi
f5e89df6f2 perl-parsetemplate: add new package (#41845)
* perl-parsetemplate: add new package

* perl-parsetemplate: update to latest copyright date and add license
2024-03-11 17:04:35 -06:00
Massimiliano Culpo
ce75adada6 Fix callbacks accumulation when using mixins with builders (#43100)
fixes #43097

Before this PR, mixins used together with builders would
completely mask the callbacks defined by classes coming
later in the MRO.

Here we fix the behavior by accumulating all callbacks,
and de-duplicating them later.
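A generic Python sketch of the accumulate-then-deduplicate idea (this is not Spack's actual builder machinery):

```python
# Generic illustration only; not Spack's builder code. Walk the MRO,
# accumulate every callback, and drop duplicates while preserving order.
def collect_callbacks(cls, attr="callbacks"):
    seen, result = set(), []
    for klass in cls.__mro__:
        for cb in klass.__dict__.get(attr, []):
            if cb not in seen:
                seen.add(cb)
                result.append(cb)
    return result
```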
2024-03-11 20:48:21 +01:00
Adam J. Stewart
24d37df1a2 py-climax: add new package (#43121) 2024-03-11 12:47:20 -07:00
Stephen Sachs
a9d294c532 aocl-utils: source sha has changed (#43120)
* aocl-utils: source sha has changed

* aocl-utils: version 4.1 hash updated
2024-03-11 12:45:30 -07:00
Adam J. Stewart
9dcaa56db4 py-onnx-opcounter: add new package (#43119) 2024-03-11 12:42:16 -07:00
Adam J. Stewart
98162aa2e1 py-earth2mip: add new package (#43062)
* py-earth2mip: add new package

* Fix audit tests

* Make things easier to concretize

* Make things easier to concretize

* Fix magics build

* Fix onnx build

* blacken

* cmake > py-cmake

* Fix onnxruntime build

* onnxruntime: remove unknown cmake vars

* Trick bazelisk into using Spack bazel

* Different deps for main

* protobuf: add v3.25.3

* Enforce upper bound on jaxlib version

* py-chex: add v0.1.85

* py-dm-haiku: add v0.0.12

* py-jax: add v0.4.25

* Older jaxlib doesn't build on ppc64le either
2024-03-11 12:28:03 -07:00
Alec Scott
3934df622c fzf: add v0.47.0 (#43113) 2024-03-11 11:58:15 -07:00
tpeterka
dbf5d79557 updated diy/package.py to version 3.6.0 (#43101) 2024-03-11 11:56:26 -07:00
eugeneswalker
97e29e501d e4s ci stacks: add cp2k cpu and gpu specs (#42454)
* e4s ci stacks: add cp2k cpu and gpu specs

* remove non-building cp2k specs
2024-03-11 09:29:51 -07:00
Wouter Deconinck
258c651a8f geant4: new variant timemory (#43111)
* geant4: new variant timemory

* geant4: depends_on timemory@3.2:

* geant4: fix style
2024-03-11 15:50:02 +00:00
downloadico
43ca6da346 ruby: add version 3.3.0 (#43058)
* ruby: add version 3.3.0

* added dependency on libyaml
2024-03-11 08:49:24 -07:00
Adam J. Stewart
9786bd932b Update TensorFlow ecosystem (#41069) 2024-03-11 14:33:16 +01:00
Sinan
c72619d4db package/qgis: add new version (#42888)
* package/qgis: add new version

* improve Qsci.pro

* improve

* fix undefined symbol qsciprinter error

* add import test

* fix bug

* add version 3.36

* [@spackbot] updating style on behalf of Sinan81

* fix long line

* only run import test when +python

* first attempt at stand-alone test

* add TODO

---------

Co-authored-by: sbulut <sbulut@3vgeomatics.com>
Co-authored-by: Sinan81 <Sinan81@users.noreply.github.com>
Co-authored-by: Sinan81 <Sinan@world>
2024-03-11 05:09:29 -05:00
Mikael Simberg
8ecae17c46 dla-future: Add patch for compilation with newer ROCm versions (#42348) 2024-03-11 10:44:00 +01:00
Massimiliano Culpo
1e47ccb83a Remove dead code (#43114)
* Remove dead code in spack
* Remove dead code in llnl
2024-03-11 00:47:55 -07:00
Jonas Eschle
d6421a69eb fix typo in dependency (#43105) 2024-03-10 18:56:27 -07:00
dependabot[bot]
000dff2fd4 build(deps): bump docker/build-push-action from 5.1.0 to 5.2.0 (#43107)
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5.1.0 to 5.2.0.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](4a13e500e5...af5a7ed5ba)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-10 18:55:44 -07:00
dependabot[bot]
1e413477dd build(deps): bump mypy from 1.8.0 to 1.9.0 in /lib/spack/docs (#43103)
Bumps [mypy](https://github.com/python/mypy) from 1.8.0 to 1.9.0.
- [Changelog](https://github.com/python/mypy/blob/master/CHANGELOG.md)
- [Commits](https://github.com/python/mypy/compare/v1.8.0...1.9.0)

---
updated-dependencies:
- dependency-name: mypy
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-10 18:54:58 -07:00
Pranav Sivaraman
8955e63a68 feat: add LLVM v18.1.1 (#43109) 2024-03-10 08:55:39 +01:00
913 changed files with 13417 additions and 7134 deletions

View File

@@ -0,0 +1,4 @@
{
"image": "ghcr.io/spack/ubuntu20.04-runner-amd64-gcc-11.4:2023.08.01",
"postCreateCommand": "./.devcontainer/postCreateCommand.sh"
}

View File

@@ -0,0 +1,20 @@
#!/bin/bash
# Load spack environment at terminal startup
cat <<EOF >> /root/.bashrc
. /workspaces/spack/share/spack/setup-env.sh
EOF
# Load spack environment in this script
. /workspaces/spack/share/spack/setup-env.sh
# Ensure generic targets for maximum matching with buildcaches
spack config --scope site add "packages:all:require:[target=x86_64_v3]"
spack config --scope site add "concretizer:targets:granularity:generic"
# Find compiler and install gcc-runtime
spack compiler find --scope site
# Setup buildcaches
spack mirror add --scope site develop https://binaries.spack.io/develop
spack buildcache keys --install --trust

View File

@@ -22,8 +22,8 @@ jobs:
matrix:
operating_system: ["ubuntu-latest", "macos-latest"]
steps:
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # @v2
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # @v2
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: ${{inputs.python_version}}
- name: Install Python packages
@@ -43,7 +43,9 @@ jobs:
. share/spack/setup-env.sh
$(which spack) audit packages
$(which spack) audit externals
- uses: codecov/codecov-action@54bcd8715eee62d40e33596ef5e8f0f48dbbccab # @v2.1.0
- uses: codecov/codecov-action@c16abc29c95fcf9174b58eb7e1abf4c866893bc8
if: ${{ inputs.with_coverage == 'true' }}
with:
flags: unittests,audits
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true

View File

@@ -24,7 +24,7 @@ jobs:
make patch unzip which xz python3 python3-devel tree \
cmake bison bison-devel libstdc++-static
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- name: Setup non-root user
@@ -62,7 +62,7 @@ jobs:
make patch unzip xz-utils python3 python3-dev tree \
cmake bison
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- name: Setup non-root user
@@ -99,7 +99,7 @@ jobs:
bzip2 curl file g++ gcc gfortran git gnupg2 gzip \
make patch unzip xz-utils python3 python3-dev tree
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- name: Setup non-root user
@@ -133,7 +133,7 @@ jobs:
make patch unzip which xz python3 python3-devel tree \
cmake bison
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- name: Setup repo
@@ -158,8 +158,8 @@ jobs:
run: |
brew install cmake bison@2.7 tree
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # @v2
uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: "3.12"
- name: Bootstrap clingo
@@ -182,7 +182,7 @@ jobs:
run: |
brew install tree
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- name: Bootstrap clingo
run: |
set -ex
@@ -207,7 +207,7 @@ jobs:
runs-on: ubuntu-20.04
steps:
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- name: Setup repo
@@ -250,7 +250,7 @@ jobs:
bzip2 curl file g++ gcc patchelf gfortran git gzip \
make patch unzip xz-utils python3 python3-dev tree
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- name: Setup non-root user
@@ -287,7 +287,7 @@ jobs:
make patch unzip xz-utils python3 python3-dev tree \
gawk
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- name: Setup non-root user
@@ -320,7 +320,7 @@ jobs:
# Remove GnuPG since we want to bootstrap it
sudo rm -rf /usr/local/bin/gpg
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- name: Bootstrap GnuPG
run: |
source share/spack/setup-env.sh
@@ -338,7 +338,7 @@ jobs:
# Remove GnuPG since we want to bootstrap it
sudo rm -rf /usr/local/bin/gpg
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- name: Bootstrap GnuPG
run: |
source share/spack/setup-env.sh

View File

@@ -55,7 +55,7 @@ jobs:
if: github.repository == 'spack/spack'
steps:
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # @v2
uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- uses: docker/metadata-action@8e5442c4ef9f78752691e2d8f8d19755c6f78e81
id: docker_meta
@@ -96,10 +96,10 @@ jobs:
uses: docker/setup-qemu-action@68827325e0b33c7199eb31dd4e31fbe9023e06e3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@0d103c3126aa41d772a8362f6aa67afac040f80c
uses: docker/setup-buildx-action@2b51285047da1547ffb1b2203d8be4c0af6b1f20
- name: Log in to GitHub Container Registry
uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d
uses: docker/login-action@e92390c5fb421da1463c202d546fed0ec5c39f20
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -107,13 +107,13 @@ jobs:
- name: Log in to DockerHub
if: github.event_name != 'pull_request'
uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d
uses: docker/login-action@e92390c5fb421da1463c202d546fed0ec5c39f20
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build & Deploy ${{ matrix.dockerfile[0] }}
uses: docker/build-push-action@4a13e500e55cf31b7a5d59a38ab2040ab0f42f56
uses: docker/build-push-action@2cdde995de11925a030ce8070c3d77a52ffcf1c0
with:
context: dockerfiles/${{ matrix.dockerfile[0] }}
platforms: ${{ matrix.dockerfile[1] }}

View File

@@ -18,6 +18,7 @@ jobs:
prechecks:
needs: [ changes ]
uses: ./.github/workflows/valid-style.yml
secrets: inherit
with:
with_coverage: ${{ needs.changes.outputs.core }}
all-prechecks:
@@ -35,7 +36,7 @@ jobs:
core: ${{ steps.filter.outputs.core }}
packages: ${{ steps.filter.outputs.packages }}
steps:
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # @v2
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
if: ${{ github.event_name == 'push' }}
with:
fetch-depth: 0
@@ -70,14 +71,17 @@ jobs:
if: ${{ github.repository == 'spack/spack' && needs.changes.outputs.bootstrap == 'true' }}
needs: [ prechecks, changes ]
uses: ./.github/workflows/bootstrap.yml
secrets: inherit
unit-tests:
if: ${{ github.repository == 'spack/spack' && needs.changes.outputs.core == 'true' }}
needs: [ prechecks, changes ]
uses: ./.github/workflows/unit_tests.yaml
secrets: inherit
windows:
if: ${{ github.repository == 'spack/spack' && needs.changes.outputs.core == 'true' }}
needs: [ prechecks ]
uses: ./.github/workflows/windows_python.yml
secrets: inherit
all:
needs: [ windows, unit-tests, bootstrap ]
runs-on: ubuntu-latest

View File

@@ -14,10 +14,10 @@ jobs:
build-paraview-deps:
runs-on: windows-latest
steps:
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: 3.9
- name: Install Python packages

View File

@@ -1,4 +1,4 @@
black==24.2.0
black==24.3.0
clingo==5.7.1
flake8==7.0.0
isort==5.13.2

View File

@@ -51,10 +51,10 @@ jobs:
on_develop: false
steps:
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # @v2
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # @v2
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: ${{ matrix.python-version }}
- name: Install System packages
@@ -91,17 +91,19 @@ jobs:
UNIT_TEST_COVERAGE: ${{ matrix.python-version == '3.11' }}
run: |
share/spack/qa/run-unit-tests
- uses: codecov/codecov-action@54bcd8715eee62d40e33596ef5e8f0f48dbbccab
- uses: codecov/codecov-action@c16abc29c95fcf9174b58eb7e1abf4c866893bc8
with:
flags: unittests,linux,${{ matrix.concretizer }}
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
# Test shell integration
shell:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # @v2
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # @v2
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: '3.11'
- name: Install System packages
@@ -122,9 +124,11 @@ jobs:
COVERAGE: true
run: |
share/spack/qa/run-shell-tests
- uses: codecov/codecov-action@54bcd8715eee62d40e33596ef5e8f0f48dbbccab
- uses: codecov/codecov-action@c16abc29c95fcf9174b58eb7e1abf4c866893bc8
with:
flags: shelltests,linux
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
# Test RHEL8 UBI with platform Python. This job is run
# only on PRs modifying core Spack
@@ -137,7 +141,7 @@ jobs:
dnf install -y \
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch tcl unzip which xz
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # @v2
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- name: Setup repo and non-root user
run: |
git --version
@@ -156,10 +160,10 @@ jobs:
clingo-cffi:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # @v2
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # @v2
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: '3.11'
- name: Install System packages
@@ -181,20 +185,23 @@ jobs:
SPACK_TEST_SOLVER: clingo
run: |
share/spack/qa/run-unit-tests
- uses: codecov/codecov-action@54bcd8715eee62d40e33596ef5e8f0f48dbbccab # @v2.1.0
- uses: codecov/codecov-action@c16abc29c95fcf9174b58eb7e1abf4c866893bc8
with:
flags: unittests,linux,clingo
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
# Run unit tests on MacOS
macos:
runs-on: macos-latest
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [macos-latest, macos-14]
python-version: ["3.11"]
steps:
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # @v2
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c # @v2
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: ${{ matrix.python-version }}
- name: Install Python packages
@@ -216,6 +223,8 @@ jobs:
$(which spack) solve zlib
common_args=(--dist loadfile --tx '4*popen//python=./bin/spack-tmpconfig python -u ./bin/spack python' -x)
$(which spack) unit-test --verbose --cov --cov-config=pyproject.toml --cov-report=xml:coverage.xml "${common_args[@]}"
- uses: codecov/codecov-action@54bcd8715eee62d40e33596ef5e8f0f48dbbccab
- uses: codecov/codecov-action@c16abc29c95fcf9174b58eb7e1abf4c866893bc8
with:
flags: unittests,macos
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true

View File

@@ -18,8 +18,8 @@ jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: '3.11'
cache: 'pip'
@@ -35,10 +35,10 @@ jobs:
style:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: '3.11'
cache: 'pip'
@@ -56,6 +56,7 @@ jobs:
share/spack/qa/run-style-tests
audit:
uses: ./.github/workflows/audit.yaml
secrets: inherit
with:
with_coverage: ${{ inputs.with_coverage }}
python_version: '3.11'
@@ -69,7 +70,7 @@ jobs:
dnf install -y \
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch tcl unzip which xz
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # @v2
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- name: Setup repo and non-root user
run: |
git --version

View File

@@ -15,10 +15,10 @@ jobs:
unit-tests:
runs-on: windows-latest
steps:
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: 3.9
- name: Install Python packages
@@ -33,16 +33,18 @@ jobs:
./share/spack/qa/validate_last_exit.ps1
coverage combine -a
coverage xml
- uses: codecov/codecov-action@54bcd8715eee62d40e33596ef5e8f0f48dbbccab
- uses: codecov/codecov-action@c16abc29c95fcf9174b58eb7e1abf4c866893bc8
with:
flags: unittests,windows
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
unit-tests-cmd:
runs-on: windows-latest
steps:
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: 3.9
- name: Install Python packages
@@ -57,16 +59,18 @@ jobs:
./share/spack/qa/validate_last_exit.ps1
coverage combine -a
coverage xml
- uses: codecov/codecov-action@54bcd8715eee62d40e33596ef5e8f0f48dbbccab
- uses: codecov/codecov-action@c16abc29c95fcf9174b58eb7e1abf4c866893bc8
with:
flags: unittests,windows
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
build-abseil:
runs-on: windows-latest
steps:
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
fetch-depth: 0
- uses: actions/setup-python@0a5c61591373683505ea898e09a3ea4f39ef2b9c
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: 3.9
- name: Install Python packages

View File

@@ -42,3 +42,8 @@ concretizer:
# "minimal": allows the duplication of 'build-tools' nodes only (e.g. py-setuptools, cmake etc.)
# "full" (experimental): allows separation of the entire build-tool stack (e.g. the entire "cmake" subDAG)
strategy: minimal
# Option to specify compatibility between operating systems for reuse of compilers and packages
# Specified as a key: [list], where the key is the OS being targeted and the list contains the OSes
# it can reuse. Note that this compatibility is directional, so mutual compatibility between two OSes
# requires two entries, i.e. os_compatible: {sonoma: [monterey], monterey: [sonoma]}
os_compatible: {}
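Spelled out as a block instead of the inline form, the mutual-compatibility example from the comment reads:

```yaml
concretizer:
  os_compatible:
    sonoma: [monterey]
    monterey: [sonoma]
```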

View File

@@ -101,6 +101,12 @@ config:
verify_ssl: true
# This is where custom certs for proxy/firewall are stored.
# It can be a path or environment variable. To match ssl env configuration
# the default is the environment variable SSL_CERT_FILE
ssl_certs: $SSL_CERT_FILE
# Suppress gpg warnings from binary package verification
# Only suppresses warnings, gpg failure will still fail the install
# Potential rationale to set True: users have already explicitly trusted the

View File

@@ -24,6 +24,7 @@ packages:
elf: [elfutils]
fftw-api: [fftw, amdfftw]
flame: [libflame, amdlibflame]
fortran-rt: [gcc-runtime, intel-oneapi-runtime]
fuse: [libfuse]
gl: [glx, osmesa]
glu: [mesa-glu, openglu]
@@ -34,7 +35,9 @@ packages:
java: [openjdk, jdk, ibm-java]
jpeg: [libjpeg-turbo, libjpeg]
lapack: [openblas, amdlibflame]
libgfortran: [ gcc-runtime ]
libglx: [mesa+glx, mesa18+glx]
libifcore: [ intel-oneapi-runtime ]
libllvm: [llvm]
libosmesa: [mesa+osmesa, mesa18+osmesa]
lua-lang: [lua, lua-luajit-openresty, lua-luajit]

View File

@@ -1119,6 +1119,9 @@ and ``3.4.2``. Similarly, ``@4.2:`` means any version above and including
``4.2``. As a short-hand, ``@3`` is equivalent to the range ``@3:3`` and
includes any version with major version ``3``.
Versions are ordered lexicographically by their components. For more details
on the order, see :ref:`the packaging guide <version-comparison>`.
Notice that you can distinguish between the specific version ``@=3.2`` and
the range ``@3.2``. This is useful for packages that follow a versioning
scheme that omits the zero patch version number: ``3.2``, ``3.2.1``,

View File

@@ -250,7 +250,7 @@ generator is Ninja. To switch to the Ninja generator, simply add:
.. code-block:: python
generator = "Ninja"
generator("ninja")
``CMakePackage`` defaults to "Unix Makefiles". If you switch to the
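For illustration, the new directive form in a package might look like this (the package name is made up):

.. code-block:: python

   class MyPkg(CMakePackage):
       """Illustrative package that builds with the Ninja generator."""

       generator("ninja")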

View File

@@ -173,6 +173,72 @@ arguments to ``Makefile.PL`` or ``Build.PL`` by overriding
]
^^^^^^^
Testing
^^^^^^^
``PerlPackage`` provides a simple stand-alone test for the installed
package that confirms its perl module(s) can be used.
These tests can be performed any time after the installation using
``spack -v test run``. (For more information on the command, see
:ref:`cmd-spack-test-run`.)
The base class automatically detects perl modules based on the presence
of ``*.pm`` files under the package's library directory. For example,
the files under ``perl-bignum``'s perl library are:
.. code-block:: console
$ find . -name "*.pm"
./bigfloat.pm
./bigrat.pm
./Math/BigFloat/Trace.pm
./Math/BigInt/Trace.pm
./Math/BigRat/Trace.pm
./bigint.pm
./bignum.pm
which results in the package having the ``use_modules`` property containing:
.. code-block:: python
use_modules = [
"bigfloat",
"bigrat",
"Math::BigFloat::Trace",
"Math::BigInt::Trace",
"Math::BigRat::Trace",
"bigint",
"bignum",
]
.. note::
This list can often be used to catch missing dependencies.
If the list is somehow wrong, you can provide the names of the modules
yourself by overriding ``use_modules`` like so:
.. code-block:: python
use_modules = ["bigfloat", "bigrat", "bigint", "bignum"]
If you only want a subset of the automatically detected modules to be
tested, you could instead define the ``skip_modules`` property on the
package. So, instead of overriding ``use_modules`` as shown above, you
could define the following:
.. code-block:: python
skip_modules = [
"Math::BigFloat::Trace",
"Math::BigInt::Trace",
"Math::BigRat::Trace",
]
for the same use tests.
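For instance, the stand-alone test for ``perl-bignum`` could be invoked as follows (output omitted):

.. code-block:: console

   $ spack -v test run perl-bignum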
^^^^^^^^^^^^^^^^^^^^^
Alternatives to Spack
^^^^^^^^^^^^^^^^^^^^^

View File

@@ -145,6 +145,22 @@ hosts when making ``ssl`` connections. Set to ``false`` to disable, and
tools like ``curl`` will use their ``--insecure`` options. Disabling
this can expose you to attacks. Use at your own risk.
--------------------
``ssl_certs``
--------------------
Path to custom certificates for SSL verification. The value can be a
filesystem path, or an environment variable that expands to a file path.
The default is the environment variable ``SSL_CERT_FILE``, matching the
convention used by many other applications that automatically detect
custom certificates.
When ``url_fetch_method:curl``, ``config:ssl_certs`` should resolve to
a single file; Spack will then set the environment variable ``CURL_CA_BUNDLE``
in the subprocess calling ``curl``.
If ``url_fetch_method:urllib``, both files and directories are supported,
i.e. ``config:ssl_certs:$SSL_CERT_FILE`` or ``config:ssl_certs:$SSL_CERT_DIR``
will work.
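For example, pointing Spack at a site certificate bundle might look like this (the path is illustrative):

.. code-block:: yaml

   config:
     ssl_certs: /etc/pki/tls/certs/ca-bundle.crt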
--------------------
``checksum``
--------------------

View File

@@ -250,9 +250,10 @@ Compiler configuration
Spack has the ability to build packages with multiple compilers and
compiler versions. Compilers can be made available to Spack by
specifying them manually in ``compilers.yaml``, or automatically by
running ``spack compiler find``, but for convenience Spack will
automatically detect compilers the first time it needs them.
specifying them manually in ``compilers.yaml`` or ``packages.yaml``,
or automatically by running ``spack compiler find``, but for
convenience Spack will automatically detect compilers the first time
it needs them.
.. _cmd-spack-compilers:
@@ -457,6 +458,48 @@ specification. The operations available to modify the environment are ``set``, `
prepend_path: # Similar for append|remove_path
LD_LIBRARY_PATH: /ld/paths/added/by/setvars/sh
.. note::
Spack is in the process of moving compilers from a separate
attribute to be handled like all other packages. As part of this
process, the ``compilers.yaml`` section will eventually be replaced
by configuration in the ``packages.yaml`` section. This new
configuration is now available, although it is not yet the default
behavior.
Compilers can also be configured as external packages in the
``packages.yaml`` config file. Any external package for a compiler
(e.g. ``gcc`` or ``llvm``) will be treated as a configured compiler
assuming the paths to the compiler executables are determinable from
the prefix.
If the paths to the compiler executables are not determinable from the
prefix, you can add them to the ``extra_attributes`` field. Similarly,
all other fields from the compilers config can be added to the
``extra_attributes`` field for an external representing a compiler.
.. code-block:: yaml
packages:
gcc:
external:
- spec: gcc@12.2.0 arch=linux-rhel8-skylake
prefix: /usr
extra_attributes:
environment:
set:
GCC_ROOT: /usr
external:
- spec: llvm+clang@15.0.0 arch=linux-rhel8-skylake
prefix: /usr
extra_attributes:
paths:
cc: /usr/bin/clang-with-suffix
cxx: /usr/bin/clang++-with-extra-info
fc: /usr/bin/gfortran
f77: /usr/bin/gfortran
extra_rpaths:
- /usr/lib/llvm/
^^^^^^^^^^^^^^^^^^^^^^^
Build Your Own Compiler

View File

@@ -893,26 +893,50 @@ as an option to the ``version()`` directive. Example situations would be a
"snapshot"-like Version Control System (VCS) tag, a VCS branch such as
``v6-16-00-patches``, or a URL specifying a regularly updated snapshot tarball.
.. _version-comparison:
^^^^^^^^^^^^^^^^^^
Version comparison
^^^^^^^^^^^^^^^^^^
Spack imposes a generic total ordering on the set of versions,
independently from the package they are associated with.
Most Spack versions are numeric, a tuple of integers; for example,
``0.1``, ``6.96`` or ``1.2.3.1``. Spack knows how to compare and sort
numeric versions.
``0.1``, ``6.96`` or ``1.2.3.1``. In this very basic case, version
comparison is lexicographical on the numeric components:
``1.2 < 1.2.1 < 1.2.2 < 1.10``.
Some Spack versions involve slight extensions of numeric syntax; for
example, ``py-sphinx-rtd-theme@=0.1.10a0``. In this case, numbers are
always considered to be "newer" than letters. This is for consistency
with `RPM <https://bugzilla.redhat.com/show_bug.cgi?id=50977>`_.
Spack also supports string components such as ``1.1.1a`` and
``1.y.0``. String components are considered less than numeric
components, so ``1.y.0 < 1.0``. This is for consistency with
`RPM <https://bugzilla.redhat.com/show_bug.cgi?id=50977>`_. String
components do not have to be separated by dots or any other delimiter.
So, the contrived version ``1y0`` is identical to ``1.y.0``.
Spack versions may also be arbitrary non-numeric strings, for example
``develop``, ``master``, ``local``.
Pre-release suffixes also contain string parts, but they are handled
in a special way. For example ``1.2.3alpha1`` is parsed as a pre-release
of the version ``1.2.3``. This allows Spack to order it before the
actual release: ``1.2.3alpha1 < 1.2.3``. Spack supports alpha, beta and
release candidate suffixes: ``1.2alpha1 < 1.2beta1 < 1.2rc1 < 1.2``. Any
suffix not recognized as a pre-release is treated as an ordinary
string component, so ``1.2 < 1.2-mysuffix``.
The order on versions is defined as follows. A version string is split
into a list of components based on delimiters such as ``.``, ``-`` etc.
Lists are then ordered lexicographically, where components are ordered
as follows:
Finally, there are a few special string components that are considered
"infinity versions". They include ``develop``, ``main``, ``master``,
``head``, ``trunk``, and ``stable``. For example: ``1.2 < develop``.
These are useful for specifying the most recent development version of
a package (often a moving target like a git branch), without assigning
a specific version number. Infinity versions are not automatically used when determining the latest version of a package unless explicitly required by another package or user.
More formally, the order on versions is defined as follows. A version
string is split into a list of components based on delimiters such as
``.`` and ``-`` and string boundaries. The components are split into
the **release** and a possible **pre-release** (if the last component
is numeric and the second to last is a string ``alpha``, ``beta`` or ``rc``).
The release components are ordered lexicographically, with comparison
between different types of components as follows:
#. The following special strings are considered larger than any other
numeric or non-numeric version component, and satisfy the following
@@ -925,6 +949,9 @@ as follows:
#. All other non-numeric components are less than numeric components,
and are ordered alphabetically.
Finally, if the release components are equal, the pre-release components
are used to break the tie, in the obvious way.
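A few concrete comparisons, as a sketch using Spack's version class (assumes Spack's libraries are importable):

.. code-block:: python

   from spack.version import Version

   assert Version("1.2") < Version("1.2.1") < Version("1.10")  # numeric components
   assert Version("1.y.0") < Version("1.0")          # string parts sort below numbers
   assert Version("1.2.3alpha1") < Version("1.2.3")  # pre-release before the release
   assert Version("1.2") < Version("develop")        # infinity versions win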
The logic behind this sort order is two-fold:
#. Non-numeric versions are usually used for special cases while

View File

@@ -6,8 +6,8 @@ python-levenshtein==0.25.0
docutils==0.20.1
pygments==2.17.2
urllib3==2.2.1
pytest==8.0.2
pytest==8.1.1
isort==5.13.2
black==24.2.0
black==24.3.0
flake8==7.0.0
mypy==1.8.0
mypy==1.9.0

lib/spack/env/cc (vendored)
View File

@@ -248,7 +248,7 @@ case "$command" in
lang_flags=C
debug_flags="-g"
;;
c++|CC|g++|clang++|armclang++|icpc|icpx|dpcpp|pgc++|nvc++|xlc++|xlc++_r|FCC|amdclang++|crayCC)
c++|CC|g++|clang++|armclang++|icpc|icpx|pgc++|nvc++|xlc++|xlc++_r|FCC|amdclang++|crayCC)
command="$SPACK_CXX"
language="C++"
comp="CXX"
@@ -526,7 +526,7 @@ categorize_arguments() {
continue
fi
replaced="$after$stripped"
replaced="$after$stripped"
# it matched, remove it
shift
@@ -913,4 +913,3 @@ fi
# Execute the full command, preserving spaces with IFS set
# to the alarm bell separator.
IFS="$lsep"; exec $full_command_list

View File

@@ -18,7 +18,7 @@
* Homepage: https://pypi.python.org/pypi/archspec
* Usage: Labeling, comparison and detection of microarchitectures
* Version: 0.2.2 (commit 1dc58a5776dd77e6fc6e4ba5626af5b1fb24996e)
* Version: 0.2.3 (commit 7b8fe60b69e2861e7dac104bc1c183decfcd3daf)
astunparse
----------------

View File

@@ -1,2 +1,3 @@
"""Init file to avoid namespace packages"""
__version__ = "0.2.2"
__version__ = "0.2.3"

View File

@@ -3,6 +3,7 @@
"""
import sys
from .cli import main
sys.exit(main())

View File

@@ -46,7 +46,11 @@ def _make_parser() -> argparse.ArgumentParser:
def cpu() -> int:
"""Run the `archspec cpu` subcommand."""
print(archspec.cpu.host())
try:
print(archspec.cpu.host())
except FileNotFoundError as exc:
print(exc)
return 1
return 0

View File

@@ -5,10 +5,14 @@
"""The "cpu" package permits to query and compare different
CPU microarchitectures.
"""
from .microarchitecture import Microarchitecture, UnsupportedMicroarchitecture
from .microarchitecture import TARGETS, generic_microarchitecture
from .microarchitecture import version_components
from .detect import host
from .microarchitecture import (
TARGETS,
Microarchitecture,
UnsupportedMicroarchitecture,
generic_microarchitecture,
version_components,
)
__all__ = [
"Microarchitecture",

View File

@@ -4,15 +4,17 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Detection of CPU microarchitectures"""
import collections
import functools
import os
import platform
import re
import struct
import subprocess
import warnings
from typing import Dict, List, Optional, Set, Tuple, Union
from .microarchitecture import generic_microarchitecture, TARGETS
from .schema import TARGETS_JSON
from ..vendor.cpuid.cpuid import CPUID
from .microarchitecture import TARGETS, Microarchitecture, generic_microarchitecture
from .schema import CPUID_JSON, TARGETS_JSON
#: Mapping from operating systems to chain of commands
#: to obtain a dictionary of raw info on the current cpu
@@ -22,43 +24,46 @@
#: functions checking the compatibility of the host with a given target
COMPATIBILITY_CHECKS = {}
# Constants for commonly used architectures
X86_64 = "x86_64"
AARCH64 = "aarch64"
PPC64LE = "ppc64le"
PPC64 = "ppc64"
RISCV64 = "riscv64"
def info_dict(operating_system):
"""Decorator to mark functions that are meant to return raw info on
the current cpu.
def detection(operating_system: str):
"""Decorator to mark functions that are meant to return partial information on the current cpu.
Args:
operating_system (str or tuple): operating system for which the marked
function is a viable factory of raw info dictionaries.
operating_system: operating system where this function can be used.
"""
def decorator(factory):
INFO_FACTORY[operating_system].append(factory)
@functools.wraps(factory)
def _impl():
info = factory()
# Check that info contains a few mandatory fields
msg = 'field "{0}" is missing from raw info dictionary'
assert "vendor_id" in info, msg.format("vendor_id")
assert "flags" in info, msg.format("flags")
assert "model" in info, msg.format("model")
assert "model_name" in info, msg.format("model_name")
return info
return _impl
return factory
return decorator
@info_dict(operating_system="Linux")
def proc_cpuinfo():
"""Returns a raw info dictionary by parsing the first entry of
``/proc/cpuinfo``
"""
info = {}
def partial_uarch(
name: str = "", vendor: str = "", features: Optional[Set[str]] = None, generation: int = 0
) -> Microarchitecture:
"""Construct a partial microarchitecture, from information gathered during system scan."""
return Microarchitecture(
name=name,
parents=[],
vendor=vendor,
features=features or set(),
compilers={},
generation=generation,
)
@detection(operating_system="Linux")
def proc_cpuinfo() -> Microarchitecture:
"""Returns a partial Microarchitecture, obtained from scanning ``/proc/cpuinfo``"""
data = {}
with open("/proc/cpuinfo") as file: # pylint: disable=unspecified-encoding
for line in file:
key, separator, value = line.partition(":")
@@ -70,11 +75,96 @@ def proc_cpuinfo():
#
# we are on a blank line separating two cpus. Exit early as
# we want to read just the first entry in /proc/cpuinfo
if separator != ":" and info:
if separator != ":" and data:
break
info[key.strip()] = value.strip()
return info
data[key.strip()] = value.strip()
architecture = _machine()
if architecture == X86_64:
return partial_uarch(
vendor=data.get("vendor_id", "generic"), features=_feature_set(data, key="flags")
)
if architecture == AARCH64:
return partial_uarch(
vendor=_canonicalize_aarch64_vendor(data),
features=_feature_set(data, key="Features"),
)
if architecture in (PPC64LE, PPC64):
generation_match = re.search(r"POWER(\d+)", data.get("cpu", ""))
try:
generation = int(generation_match.group(1))
except AttributeError:
# There might be no match under emulated environments. For instance
# emulating a ppc64le with QEMU and Docker still reports the host
# /proc/cpuinfo and not a Power
generation = 0
return partial_uarch(generation=generation)
if architecture == RISCV64:
if data.get("uarch") == "sifive,u74-mc":
data["uarch"] = "u74mc"
return partial_uarch(name=data.get("uarch", RISCV64))
return generic_microarchitecture(architecture)
class CpuidInfoCollector:
"""Collects the information we need on the host CPU from cpuid"""
# pylint: disable=too-few-public-methods
def __init__(self):
self.cpuid = CPUID()
registers = self.cpuid.registers_for(**CPUID_JSON["vendor"]["input"])
self.highest_basic_support = registers.eax
self.vendor = struct.pack("III", registers.ebx, registers.edx, registers.ecx).decode(
"utf-8"
)
registers = self.cpuid.registers_for(**CPUID_JSON["highest_extension_support"]["input"])
self.highest_extension_support = registers.eax
self.features = self._features()
def _features(self):
result = set()
def check_features(data):
registers = self.cpuid.registers_for(**data["input"])
for feature_check in data["bits"]:
current = getattr(registers, feature_check["register"])
if self._is_bit_set(current, feature_check["bit"]):
result.add(feature_check["name"])
for call_data in CPUID_JSON["flags"]:
if call_data["input"]["eax"] > self.highest_basic_support:
continue
check_features(call_data)
for call_data in CPUID_JSON["extension-flags"]:
if call_data["input"]["eax"] > self.highest_extension_support:
continue
check_features(call_data)
return result
def _is_bit_set(self, register: int, bit: int) -> bool:
mask = 1 << bit
return register & mask > 0
@detection(operating_system="Windows")
def cpuid_info():
"""Returns a partial Microarchitecture, obtained from running the cpuid instruction"""
architecture = _machine()
if architecture == X86_64:
data = CpuidInfoCollector()
return partial_uarch(vendor=data.vendor, features=data.features)
return generic_microarchitecture(architecture)
def _check_output(args, env):
@@ -83,14 +173,25 @@ def _check_output(args, env):
return str(output.decode("utf-8"))
WINDOWS_MAPPING = {
"AMD64": "x86_64",
"ARM64": "aarch64",
}
def _machine():
""" "Return the machine architecture we are on"""
"""Return the machine architecture we are on"""
operating_system = platform.system()
# If we are not on Darwin, trust what Python tells us
if operating_system != "Darwin":
# If we are not on Darwin or Windows, trust what Python tells us
if operating_system not in ("Darwin", "Windows"):
return platform.machine()
# Normalize windows specific names
if operating_system == "Windows":
platform_machine = platform.machine()
return WINDOWS_MAPPING.get(platform_machine, platform_machine)
# On Darwin it might happen that we are on M1, but using an interpreter
# built for x86_64. In that case "platform.machine() == 'x86_64'", so we
# need to fix that.
@@ -103,54 +204,47 @@ def _machine():
if "Apple" in output:
# Note that a native Python interpreter on Apple M1 would return
# "arm64" instead of "aarch64". Here we normalize to the latter.
return "aarch64"
return AARCH64
return "x86_64"
return X86_64
@info_dict(operating_system="Darwin")
def sysctl_info_dict():
@detection(operating_system="Darwin")
def sysctl_info() -> Microarchitecture:
"""Returns a raw info dictionary parsing the output of sysctl."""
child_environment = _ensure_bin_usrbin_in_path()
def sysctl(*args):
def sysctl(*args: str) -> str:
return _check_output(["sysctl"] + list(args), env=child_environment).strip()
if _machine() == "x86_64":
flags = (
sysctl("-n", "machdep.cpu.features").lower()
+ " "
+ sysctl("-n", "machdep.cpu.leaf7_features").lower()
if _machine() == X86_64:
features = (
f'{sysctl("-n", "machdep.cpu.features").lower()} '
f'{sysctl("-n", "machdep.cpu.leaf7_features").lower()}'
)
info = {
"vendor_id": sysctl("-n", "machdep.cpu.vendor"),
"flags": flags,
"model": sysctl("-n", "machdep.cpu.model"),
"model name": sysctl("-n", "machdep.cpu.brand_string"),
}
else:
model = "unknown"
model_str = sysctl("-n", "machdep.cpu.brand_string").lower()
if "m2" in model_str:
model = "m2"
elif "m1" in model_str:
model = "m1"
elif "apple" in model_str:
model = "m1"
features = set(features.split())
info = {
"vendor_id": "Apple",
"flags": [],
"model": model,
"CPU implementer": "Apple",
"model name": sysctl("-n", "machdep.cpu.brand_string"),
}
return info
# Flags detected on Darwin turned to their linux counterpart
for darwin_flag, linux_flag in TARGETS_JSON["conversions"]["darwin_flags"].items():
if darwin_flag in features:
features.update(linux_flag.split())
return partial_uarch(vendor=sysctl("-n", "machdep.cpu.vendor"), features=features)
model = "unknown"
model_str = sysctl("-n", "machdep.cpu.brand_string").lower()
if "m2" in model_str:
model = "m2"
elif "m1" in model_str:
model = "m1"
elif "apple" in model_str:
model = "m1"
return partial_uarch(name=model, vendor="Apple")
def _ensure_bin_usrbin_in_path():
# Make sure that /sbin and /usr/sbin are in PATH as sysctl is
# usually found there
# Make sure that /sbin and /usr/sbin are in PATH as sysctl is usually found there
child_environment = dict(os.environ.items())
search_paths = child_environment.get("PATH", "").split(os.pathsep)
for additional_path in ("/sbin", "/usr/sbin"):
@@ -160,22 +254,10 @@ def _ensure_bin_usrbin_in_path():
return child_environment
def adjust_raw_flags(info):
"""Adjust the flags detected on the system to homogenize
slightly different representations.
"""
# Flags detected on Darwin turned to their linux counterpart
flags = info.get("flags", [])
d2l = TARGETS_JSON["conversions"]["darwin_flags"]
for darwin_flag, linux_flag in d2l.items():
if darwin_flag in flags:
info["flags"] += " " + linux_flag
def adjust_raw_vendor(info):
"""Adjust the vendor field to make it human readable"""
if "CPU implementer" not in info:
return
def _canonicalize_aarch64_vendor(data: Dict[str, str]) -> str:
"""Adjust the vendor field to make it human-readable"""
if "CPU implementer" not in data:
return "generic"
# Mapping numeric codes to vendor (ARM). This list is a merge from
# different sources:
@@ -185,43 +267,37 @@ def adjust_raw_vendor(info):
# https://github.com/gcc-mirror/gcc/blob/master/gcc/config/aarch64/aarch64-cores.def
# https://patchwork.kernel.org/patch/10524949/
arm_vendors = TARGETS_JSON["conversions"]["arm_vendors"]
arm_code = info["CPU implementer"]
if arm_code in arm_vendors:
info["CPU implementer"] = arm_vendors[arm_code]
arm_code = data["CPU implementer"]
return arm_vendors.get(arm_code, arm_code)
def raw_info_dictionary():
"""Returns a dictionary with information on the cpu of the current host.
def _feature_set(data: Dict[str, str], key: str) -> Set[str]:
return set(data.get(key, "").split())
This function calls all the viable factories one after the other until
there's one that is able to produce the requested information.
def detected_info() -> Microarchitecture:
"""Returns a partial Microarchitecture with information on the CPU of the current host.
This function calls all the viable factories one after the other until there's one that is
able to produce the requested information. Falls-back to a generic microarchitecture, if none
of the calls succeed.
"""
# pylint: disable=broad-except
info = {}
for factory in INFO_FACTORY[platform.system()]:
try:
info = factory()
return factory()
except Exception as exc:
warnings.warn(str(exc))
if info:
adjust_raw_flags(info)
adjust_raw_vendor(info)
break
return info
return generic_microarchitecture(_machine())
def compatible_microarchitectures(info):
"""Returns an unordered list of known micro-architectures that are
compatible with the info dictionary passed as argument.
Args:
info (dict): dictionary containing information on the host cpu
def compatible_microarchitectures(info: Microarchitecture) -> List[Microarchitecture]:
"""Returns an unordered list of known micro-architectures that are compatible with the
partial Microarchitecture passed as input.
"""
architecture_family = _machine()
# If a tester is not registered, be conservative and assume no known
# target is compatible with the host
# If a tester is not registered, assume no known target is compatible with the host
tester = COMPATIBILITY_CHECKS.get(architecture_family, lambda x, y: False)
return [x for x in TARGETS.values() if tester(info, x)] or [
generic_microarchitecture(architecture_family)
@@ -230,8 +306,8 @@ def compatible_microarchitectures(info):
def host():
"""Detects the host micro-architecture and returns it."""
# Retrieve a dictionary with raw information on the host's cpu
info = raw_info_dictionary()
# Retrieve information on the host's cpu
info = detected_info()
# Get a list of possible candidates for this micro-architecture
candidates = compatible_microarchitectures(info)
@@ -258,16 +334,15 @@ def sorting_fn(item):
return max(candidates, key=sorting_fn)
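For orientation, a minimal usage sketch of the detection pipeline above; the printed names depend on the machine this runs on:

```python
# Minimal usage sketch; the resulting target name is machine-dependent.
import archspec.cpu

uarch = archspec.cpu.host()                   # best-scoring compatible target
print(uarch.name, uarch.vendor)               # e.g. "skylake GenuineIntel"
print([a.name for a in uarch.ancestors][:3])  # a few compatible ancestors
```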
def compatibility_check(architecture_family):
def compatibility_check(architecture_family: Union[str, Tuple[str, ...]]):
"""Decorator to register a function as a proper compatibility check.
A compatibility check function takes the raw info dictionary as a first
argument and an arbitrary target as the second argument. It returns True
if the target is compatible with the info dictionary, False otherwise.
A compatibility check function takes a partial Microarchitecture object as a first argument,
and an arbitrary target Microarchitecture as the second argument. It returns True if the
target is compatible with the first argument, False otherwise.
Args:
architecture_family (str or tuple): architecture family for which
this test can be used, e.g. x86_64 or ppc64le etc.
architecture_family: architecture family for which this test can be used
"""
# Turn the argument into something iterable
if isinstance(architecture_family, str):
@@ -280,86 +355,57 @@ def decorator(func):
return decorator
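To make the contract concrete, a hypothetical registration mirroring the real checks below; the family name `fictional64` is made up for illustration:

```python
# Hypothetical registration; "fictional64" is a made-up architecture family.
@compatibility_check(architecture_family="fictional64")
def compatibility_check_for_fictional64(info, target):
    """True if ``target`` can run on the host described by ``info``."""
    arch_root = TARGETS["fictional64"]
    return (
        target == arch_root or arch_root in target.ancestors
    ) and target.features.issubset(info.features)
```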
@compatibility_check(architecture_family=("ppc64le", "ppc64"))
@compatibility_check(architecture_family=(PPC64LE, PPC64))
def compatibility_check_for_power(info, target):
"""Compatibility check for PPC64 and PPC64LE architectures."""
basename = platform.machine()
generation_match = re.search(r"POWER(\d+)", info.get("cpu", ""))
try:
generation = int(generation_match.group(1))
except AttributeError:
# There might be no match under emulated environments. For instance
# emulating a ppc64le with QEMU and Docker still reports the host
# /proc/cpuinfo and not a Power
generation = 0
# We can use a target if it descends from our machine type and our
# generation (9 for POWER9, etc) is at least its generation.
arch_root = TARGETS[basename]
arch_root = TARGETS[_machine()]
return (
target == arch_root or arch_root in target.ancestors
) and target.generation <= generation
) and target.generation <= info.generation
@compatibility_check(architecture_family="x86_64")
@compatibility_check(architecture_family=X86_64)
def compatibility_check_for_x86_64(info, target):
"""Compatibility check for x86_64 architectures."""
basename = "x86_64"
vendor = info.get("vendor_id", "generic")
features = set(info.get("flags", "").split())
# We can use a target if it descends from our machine type, is from our
# vendor, and we have all of its features
arch_root = TARGETS[basename]
arch_root = TARGETS[X86_64]
return (
(target == arch_root or arch_root in target.ancestors)
and target.vendor in (vendor, "generic")
and target.features.issubset(features)
and target.vendor in (info.vendor, "generic")
and target.features.issubset(info.features)
)
@compatibility_check(architecture_family="aarch64")
@compatibility_check(architecture_family=AARCH64)
def compatibility_check_for_aarch64(info, target):
"""Compatibility check for AARCH64 architectures."""
basename = "aarch64"
features = set(info.get("Features", "").split())
vendor = info.get("CPU implementer", "generic")
# At the moment it's not clear how to detect compatibility with
# At the moment, it's not clear how to detect compatibility with
# a specific version of the architecture
if target.vendor == "generic" and target.name != "aarch64":
if target.vendor == "generic" and target.name != AARCH64:
return False
arch_root = TARGETS[basename]
arch_root = TARGETS[AARCH64]
arch_root_and_vendor = arch_root == target.family and target.vendor in (
vendor,
info.vendor,
"generic",
)
# On macOS it seems impossible to get all the CPU features
# with sysctl info, but for ARM we can get the exact model
if platform.system() == "Darwin":
model_key = info.get("model", basename)
model = TARGETS[model_key]
model = TARGETS[info.name]
return arch_root_and_vendor and (target == model or target in model.ancestors)
return arch_root_and_vendor and target.features.issubset(features)
return arch_root_and_vendor and target.features.issubset(info.features)
@compatibility_check(architecture_family="riscv64")
@compatibility_check(architecture_family=RISCV64)
def compatibility_check_for_riscv64(info, target):
"""Compatibility check for riscv64 architectures."""
basename = "riscv64"
uarch = info.get("uarch")
# sifive unmatched board
if uarch == "sifive,u74-mc":
uarch = "u74mc"
# catch-all for unknown uarchs
else:
uarch = "riscv64"
arch_root = TARGETS[basename]
arch_root = TARGETS[RISCV64]
return (target == arch_root or arch_root in target.ancestors) and (
target == uarch or target.vendor == "generic"
target.name == info.name or target.vendor == "generic"
)


@@ -13,6 +13,7 @@
import archspec
import archspec.cpu.alias
import archspec.cpu.schema
from .alias import FEATURE_ALIASES
from .schema import LazyDictionary
@@ -47,7 +48,7 @@ class Microarchitecture:
which has "broadwell" as a parent, supports running binaries
optimized for "broadwell".
vendor (str): vendor of the micro-architecture
features (list of str): supported CPU flags. Note that the semantic
features (set of str): supported CPU flags. Note that the semantic
of the flags in this field might vary among architectures, if
at all present. For instance x86_64 processors will list all
the flags supported by a given CPU while Arm processors will
@@ -180,24 +181,28 @@ def generic(self):
generics = [x for x in [self] + self.ancestors if x.vendor == "generic"]
return max(generics, key=lambda x: len(x.ancestors))
def to_dict(self, return_list_of_items=False):
"""Returns a dictionary representation of this object.
def to_dict(self):
"""Returns a dictionary representation of this object."""
return {
"name": str(self.name),
"vendor": str(self.vendor),
"features": sorted(str(x) for x in self.features),
"generation": self.generation,
"parents": [str(x) for x in self.parents],
"compilers": self.compilers,
}
Args:
return_list_of_items (bool): if True returns an ordered list of
items instead of the dictionary
"""
list_of_items = [
("name", str(self.name)),
("vendor", str(self.vendor)),
("features", sorted(str(x) for x in self.features)),
("generation", self.generation),
("parents", [str(x) for x in self.parents]),
]
if return_list_of_items:
return list_of_items
return dict(list_of_items)
@staticmethod
def from_dict(data) -> "Microarchitecture":
"""Construct a microarchitecture from a dictionary representation."""
return Microarchitecture(
name=data["name"],
parents=[TARGETS[x] for x in data["parents"]],
vendor=data["vendor"],
features=set(data["features"]),
compilers=data.get("compilers", {}),
generation=data.get("generation", 0),
)
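A quick round-trip sketch for the two methods above, assuming `broadwell` is a key in `TARGETS` and that `Microarchitecture` equality compares field by field:

```python
# Round-trip sketch; "broadwell" and its vendor value are illustrative and
# come from the bundled microarchitectures.json.
data = TARGETS["broadwell"].to_dict()
assert data["vendor"] == "GenuineIntel"
clone = Microarchitecture.from_dict(data)
assert clone == TARGETS["broadwell"]
```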
def optimization_flags(self, compiler, version):
"""Returns a string containing the optimization flags that needs
@@ -271,9 +276,7 @@ def tuplify(ver):
flags = flags_fmt.format(**compiler_entry)
return flags
msg = (
"cannot produce optimized binary for micro-architecture '{0}' with {1}@{2}"
)
msg = "cannot produce optimized binary for micro-architecture '{0}' with {1}@{2}"
if compiler_info:
versions = [x["versions"] for x in compiler_info]
msg += f' [supported compiler versions are {", ".join(versions)}]'
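A hedged usage sketch of `optimization_flags`; the exact output depends on the compiler entries shipped in microarchitectures.json:

```python
# Usage sketch; the flag string comes from the JSON compiler entries.
target = TARGETS["broadwell"]
print(target.optimization_flags("gcc", "9.3.0"))
# expected along the lines of: -march=broadwell -mtune=broadwell
```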
@@ -289,9 +292,7 @@ def generic_microarchitecture(name):
Args:
name (str): name of the micro-architecture
"""
return Microarchitecture(
name, parents=[], vendor="generic", features=[], compilers={}
)
return Microarchitecture(name, parents=[], vendor="generic", features=[], compilers={})
def version_components(version):
@@ -345,9 +346,7 @@ def fill_target_from_dict(name, data, targets):
compilers = values.get("compilers", {})
generation = values.get("generation", 0)
targets[name] = Microarchitecture(
name, parents, vendor, features, compilers, generation
)
targets[name] = Microarchitecture(name, parents, vendor, features, compilers, generation)
known_targets = {}
data = archspec.cpu.schema.TARGETS_JSON["microarchitectures"]


@@ -7,7 +7,9 @@
"""
import collections.abc
import json
import os.path
import os
import pathlib
from typing import Tuple
class LazyDictionary(collections.abc.MutableMapping):
@@ -46,21 +48,65 @@ def __len__(self):
return len(self.data)
def _load_json_file(json_file):
json_dir = os.path.join(os.path.dirname(__file__), "..", "json", "cpu")
json_dir = os.path.abspath(json_dir)
#: Environment variable that might point to a directory with a user defined JSON file
DIR_FROM_ENVIRONMENT = "ARCHSPEC_CPU_DIR"
def _factory():
filename = os.path.join(json_dir, json_file)
with open(filename, "r", encoding="utf-8") as file:
return json.load(file)
#: Environment variable that might point to a directory with extensions to JSON files
EXTENSION_DIR_FROM_ENVIRONMENT = "ARCHSPEC_EXTENSION_CPU_DIR"
return _factory
def _json_file(filename: str, allow_custom: bool = False) -> Tuple[pathlib.Path, pathlib.Path]:
"""Given a filename, returns the absolute path for the main JSON file, and an
optional absolute path for an extension JSON file.
Args:
filename: filename for the JSON file
allow_custom: if True, allows overriding the location where the file resides
"""
json_dir = pathlib.Path(__file__).parent / ".." / "json" / "cpu"
if allow_custom and DIR_FROM_ENVIRONMENT in os.environ:
json_dir = pathlib.Path(os.environ[DIR_FROM_ENVIRONMENT])
json_dir = json_dir.absolute()
json_file = json_dir / filename
extension_file = None
if allow_custom and EXTENSION_DIR_FROM_ENVIRONMENT in os.environ:
extension_dir = pathlib.Path(os.environ[EXTENSION_DIR_FROM_ENVIRONMENT])
extension_dir = extension_dir.absolute()
extension_file = extension_dir / filename
return json_file, extension_file
def _load(json_file: pathlib.Path, extension_file: pathlib.Path):
with open(json_file, "r", encoding="utf-8") as file:
data = json.load(file)
if not extension_file or not extension_file.exists():
return data
with open(extension_file, "r", encoding="utf-8") as file:
extension_data = json.load(file)
top_level_sections = list(data.keys())
for key in top_level_sections:
if key not in extension_data:
continue
data[key].update(extension_data[key])
return data
#: In memory representation of the data in microarchitectures.json,
#: loaded on first access
TARGETS_JSON = LazyDictionary(_load_json_file("microarchitectures.json"))
TARGETS_JSON = LazyDictionary(_load, *_json_file("microarchitectures.json", allow_custom=True))
#: JSON schema for microarchitectures.json, loaded on first access
SCHEMA = LazyDictionary(_load_json_file("microarchitectures_schema.json"))
TARGETS_JSON_SCHEMA = LazyDictionary(_load, *_json_file("microarchitectures_schema.json"))
#: Information on how to call 'cpuid' to get information on the HOST CPU
CPUID_JSON = LazyDictionary(_load, *_json_file("cpuid.json", allow_custom=True))
#: JSON schema for cpuid.json, loaded on first access
CPUID_JSON_SCHEMA = LazyDictionary(_load, *_json_file("cpuid_schema.json"))
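A sketch of how the two environment variables are meant to be used. Since `_json_file` resolves paths when this module is imported, they must be set beforehand; the paths below are placeholders:

```python
# Placeholder paths; both variables must be set *before* archspec.cpu.schema
# is imported, because _json_file() resolves them at import time.
import os

os.environ["ARCHSPEC_CPU_DIR"] = "/opt/site/json/cpu"            # replaces the bundled files
os.environ["ARCHSPEC_EXTENSION_CPU_DIR"] = "/opt/site/json/ext"  # merged on top by _load()

import archspec.cpu.schema  # TARGETS_JSON and CPUID_JSON now use the custom files
```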


@@ -9,11 +9,11 @@ language specific APIs.
Currently the repository contains the following JSON files:
```console
.
├── COPYRIGHT
└── cpu
   ├── microarchitectures.json # Contains information on CPU microarchitectures
   └── microarchitectures_schema.json # Schema for the file above
cpu/
├── cpuid.json # Contains information on CPUID calls to retrieve vendor and features on x86_64
├── cpuid_schema.json # Schema for the file above
├── microarchitectures.json # Contains information on CPU microarchitectures
└── microarchitectures_schema.json # Schema for the file above
```

File diff suppressed because it is too large


@@ -0,0 +1,134 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Schema for microarchitecture definitions and feature aliases",
"type": "object",
"additionalProperties": false,
"properties": {
"vendor": {
"type": "object",
"additionalProperties": false,
"properties": {
"description": {
"type": "string"
},
"input": {
"type": "object",
"additionalProperties": false,
"properties": {
"eax": {
"type": "integer"
},
"ecx": {
"type": "integer"
}
}
}
}
},
"highest_extension_support": {
"type": "object",
"additionalProperties": false,
"properties": {
"description": {
"type": "string"
},
"input": {
"type": "object",
"additionalProperties": false,
"properties": {
"eax": {
"type": "integer"
},
"ecx": {
"type": "integer"
}
}
}
}
},
"flags": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": false,
"properties": {
"description": {
"type": "string"
},
"input": {
"type": "object",
"additionalProperties": false,
"properties": {
"eax": {
"type": "integer"
},
"ecx": {
"type": "integer"
}
}
},
"bits": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": false,
"properties": {
"name": {
"type": "string"
},
"register": {
"type": "string"
},
"bit": {
"type": "integer"
}
}
}
}
}
}
},
"extension-flags": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": false,
"properties": {
"description": {
"type": "string"
},
"input": {
"type": "object",
"additionalProperties": false,
"properties": {
"eax": {
"type": "integer"
},
"ecx": {
"type": "integer"
}
}
},
"bits": {
"type": "array",
"items": {
"type": "object",
"additionalProperties": false,
"properties": {
"name": {
"type": "string"
},
"register": {
"type": "string"
},
"bit": {
"type": "integer"
}
}
}
}
}
}
}
}
}
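To make the schema concrete, a hypothetical fragment of `cpuid.json`, shown as a Python literal; the leaf-1 bit positions match the `example.py` further down:

```python
# Hypothetical cpuid.json fragment conforming to the schema above.
# sse2 (edx, bit 26) and avx (ecx, bit 28) match the checks in example.py.
example_entry = {
    "vendor": {
        "description": "CPU vendor string",
        "input": {"eax": 0, "ecx": 0},
    },
    "flags": [
        {
            "description": "features reported by leaf 1",
            "input": {"eax": 1, "ecx": 0},
            "bits": [
                {"name": "sse2", "register": "edx", "bit": 26},
                {"name": "avx", "register": "ecx", "bit": 28},
            ],
        }
    ],
}
```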


@@ -0,0 +1,20 @@
The MIT License (MIT)
Copyright (c) 2014 Anders Høst
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


@@ -0,0 +1,76 @@
cpuid.py
========
Now, this is silly!
Pure Python library for accessing information about x86 processors
by querying the [CPUID](http://en.wikipedia.org/wiki/CPUID)
instruction. Well, not exactly pure Python...
It works by allocating a small piece of virtual memory, copying
a raw x86 function to that memory, giving the memory execute
permissions and then calling the memory as a function. The injected
function executes the CPUID instruction and copies the result back
to a ctypes.Structure where it can be read by Python.
It should work fine on both 32 and 64 bit versions of Windows and Linux
running x86 processors. Apple OS X and other BSD systems should also work,
though this has not been tested...
Why?
----
For poops and giggles. Plus, having access to a low-level feature
without having to compile a C wrapper is pretty neat.
Examples
--------
Getting info with eax=0:
import cpuid
q = cpuid.CPUID()
eax, ebx, ecx, edx = q(0)
Running the files:
$ python example.py
Vendor ID : GenuineIntel
CPU name : Intel(R) Xeon(R) CPU W3550 @ 3.07GHz
Vector instructions supported:
SSE : Yes
SSE2 : Yes
SSE3 : Yes
SSSE3 : Yes
SSE4.1 : Yes
SSE4.2 : Yes
SSE4a : --
AVX : --
AVX2 : --
$ python cpuid.py
CPUID A B C D
00000000 0000000b 756e6547 6c65746e 49656e69
00000001 000106a5 00100800 009ce3bd bfebfbff
00000002 55035a01 00f0b2e4 00000000 09ca212c
00000003 00000000 00000000 00000000 00000000
00000004 00000000 00000000 00000000 00000000
00000005 00000040 00000040 00000003 00001120
00000006 00000003 00000002 00000001 00000000
00000007 00000000 00000000 00000000 00000000
00000008 00000000 00000000 00000000 00000000
00000009 00000000 00000000 00000000 00000000
0000000a 07300403 00000044 00000000 00000603
0000000b 00000000 00000000 00000095 00000000
80000000 80000008 00000000 00000000 00000000
80000001 00000000 00000000 00000001 28100800
80000002 65746e49 2952286c 6f655820 2952286e
80000003 55504320 20202020 20202020 57202020
80000004 30353533 20402020 37302e33 007a4847
80000005 00000000 00000000 00000000 00000000
80000006 00000000 00000000 01006040 00000000
80000007 00000000 00000000 00000000 00000100
80000008 00003024 00000000 00000000 00000000


@@ -0,0 +1,172 @@
# -*- coding: utf-8 -*-
#
# Copyright (c) 2024 Anders Høst
#
from __future__ import print_function
import platform
import os
import ctypes
from ctypes import c_uint32, c_long, c_ulong, c_size_t, c_void_p, POINTER, CFUNCTYPE
# Posix x86_64:
# Three first call registers : RDI, RSI, RDX
# Volatile registers : RAX, RCX, RDX, RSI, RDI, R8-11
# Windows x86_64:
# Three first call registers : RCX, RDX, R8
# Volatile registers : RAX, RCX, RDX, R8-11
# cdecl 32 bit:
# Three first call registers : Stack (%esp)
# Volatile registers : EAX, ECX, EDX
_POSIX_64_OPC = [
0x53, # push %rbx
0x89, 0xf0, # mov %esi,%eax
0x89, 0xd1, # mov %edx,%ecx
0x0f, 0xa2, # cpuid
0x89, 0x07, # mov %eax,(%rdi)
0x89, 0x5f, 0x04, # mov %ebx,0x4(%rdi)
0x89, 0x4f, 0x08, # mov %ecx,0x8(%rdi)
0x89, 0x57, 0x0c, # mov %edx,0xc(%rdi)
0x5b, # pop %rbx
0xc3 # retq
]
_WINDOWS_64_OPC = [
0x53, # push %rbx
0x89, 0xd0, # mov %edx,%eax
0x49, 0x89, 0xc9, # mov %rcx,%r9
0x44, 0x89, 0xc1, # mov %r8d,%ecx
0x0f, 0xa2, # cpuid
0x41, 0x89, 0x01, # mov %eax,(%r9)
0x41, 0x89, 0x59, 0x04, # mov %ebx,0x4(%r9)
0x41, 0x89, 0x49, 0x08, # mov %ecx,0x8(%r9)
0x41, 0x89, 0x51, 0x0c, # mov %edx,0xc(%r9)
0x5b, # pop %rbx
0xc3 # retq
]
_CDECL_32_OPC = [
0x53, # push %ebx
0x57, # push %edi
0x8b, 0x7c, 0x24, 0x0c, # mov 0xc(%esp),%edi
0x8b, 0x44, 0x24, 0x10, # mov 0x10(%esp),%eax
0x8b, 0x4c, 0x24, 0x14, # mov 0x14(%esp),%ecx
0x0f, 0xa2, # cpuid
0x89, 0x07, # mov %eax,(%edi)
0x89, 0x5f, 0x04, # mov %ebx,0x4(%edi)
0x89, 0x4f, 0x08, # mov %ecx,0x8(%edi)
0x89, 0x57, 0x0c, # mov %edx,0xc(%edi)
0x5f, # pop %edi
0x5b, # pop %ebx
0xc3 # ret
]
is_windows = os.name == "nt"
is_64bit = ctypes.sizeof(ctypes.c_voidp) == 8
class CPUID_struct(ctypes.Structure):
_register_names = ("eax", "ebx", "ecx", "edx")
_fields_ = [(r, c_uint32) for r in _register_names]
def __getitem__(self, item):
if item not in self._register_names:
raise KeyError(item)
return getattr(self, item)
def __repr__(self):
return "eax=0x{:x}, ebx=0x{:x}, ecx=0x{:x}, edx=0x{:x}".format(self.eax, self.ebx, self.ecx, self.edx)
class CPUID(object):
def __init__(self):
if platform.machine() not in ("AMD64", "x86_64", "x86", "i686"):
raise SystemError("Only available for x86")
if is_windows:
if is_64bit:
# VirtualAlloc seems to fail under some weird
# circumstances when ctypes.windll.kernel32 is
# used under 64 bit Python. CDLL fixes this.
self.win = ctypes.CDLL("kernel32.dll")
opc = _WINDOWS_64_OPC
else:
# Here ctypes.windll.kernel32 is needed to get the
# right DLL. Otherwise it will fail when running
# 32 bit Python on 64 bit Windows.
self.win = ctypes.windll.kernel32
opc = _CDECL_32_OPC
else:
opc = _POSIX_64_OPC if is_64bit else _CDECL_32_OPC
size = len(opc)
code = (ctypes.c_ubyte * size)(*opc)
if is_windows:
self.win.VirtualAlloc.restype = c_void_p
self.win.VirtualAlloc.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_ulong, ctypes.c_ulong]
self.addr = self.win.VirtualAlloc(None, size, 0x1000, 0x40)
if not self.addr:
raise MemoryError("Could not allocate RWX memory")
ctypes.memmove(self.addr, code, size)
else:
from mmap import (
mmap,
MAP_PRIVATE,
MAP_ANONYMOUS,
PROT_WRITE,
PROT_READ,
PROT_EXEC,
)
self.mm = mmap(
-1,
size,
flags=MAP_PRIVATE | MAP_ANONYMOUS,
prot=PROT_WRITE | PROT_READ | PROT_EXEC,
)
self.mm.write(code)
self.addr = ctypes.addressof(ctypes.c_int.from_buffer(self.mm))
func_type = CFUNCTYPE(None, POINTER(CPUID_struct), c_uint32, c_uint32)
self.func_ptr = func_type(self.addr)
def __call__(self, eax, ecx=0):
struct = self.registers_for(eax=eax, ecx=ecx)
return struct.eax, struct.ebx, struct.ecx, struct.edx
def registers_for(self, eax, ecx=0):
"""Calls cpuid with eax and ecx set as the input arguments, and returns a structure
containing eax, ebx, ecx, and edx.
"""
struct = CPUID_struct()
self.func_ptr(struct, eax, ecx)
return struct
def __del__(self):
if is_windows:
self.win.VirtualFree.restype = c_long
self.win.VirtualFree.argtypes = [c_void_p, c_size_t, c_ulong]
self.win.VirtualFree(self.addr, 0, 0x8000)
else:
self.mm.close()
if __name__ == "__main__":
def valid_inputs():
cpuid = CPUID()
for eax in (0x0, 0x80000000):
highest, _, _, _ = cpuid(eax)
while eax <= highest:
regs = cpuid(eax)
yield (eax, regs)
eax += 1
print(" ".join(x.ljust(8) for x in ("CPUID", "A", "B", "C", "D")).strip())
for eax, regs in valid_inputs():
print("%08x" % eax, " ".join("%08x" % reg for reg in regs))


@@ -0,0 +1,62 @@
# -*- coding: utf-8 -*-
#
# Copyright (c) 2024 Anders Høst
#
from __future__ import print_function
import struct
import cpuid
def cpu_vendor(cpu):
_, b, c, d = cpu(0)
return struct.pack("III", b, d, c).decode("utf-8")
def cpu_name(cpu):
name = "".join((struct.pack("IIII", *cpu(0x80000000 + i)).decode("utf-8")
for i in range(2, 5)))
return name.split('\x00', 1)[0]
def is_set(cpu, leaf, subleaf, reg_idx, bit):
"""
@param {leaf} %eax
@param {subleaf} %ecx, 0 in most cases
@param {reg_idx} idx of [%eax, %ebx, %ecx, %edx], 0-based
@param {bit} bit of reg selected by {reg_idx}, 0-based
"""
regs = cpu(leaf, subleaf)
if (1 << bit) & regs[reg_idx]:
return "Yes"
else:
return "--"
if __name__ == "__main__":
cpu = cpuid.CPUID()
print("Vendor ID : %s" % cpu_vendor(cpu))
print("CPU name : %s" % cpu_name(cpu))
print()
print("Vector instructions supported:")
print("SSE : %s" % is_set(cpu, 1, 0, 3, 25))
print("SSE2 : %s" % is_set(cpu, 1, 0, 3, 26))
print("SSE3 : %s" % is_set(cpu, 1, 0, 2, 0))
print("SSSE3 : %s" % is_set(cpu, 1, 0, 2, 9))
print("SSE4.1 : %s" % is_set(cpu, 1, 0, 2, 19))
print("SSE4.2 : %s" % is_set(cpu, 1, 0, 2, 20))
print("SSE4a : %s" % is_set(cpu, 0x80000001, 0, 2, 6))
print("AVX : %s" % is_set(cpu, 1, 0, 2, 28))
print("AVX2 : %s" % is_set(cpu, 7, 0, 1, 5))
print("BMI1 : %s" % is_set(cpu, 7, 0, 1, 3))
print("BMI2 : %s" % is_set(cpu, 7, 0, 1, 8))
# Intel RDT CMT/MBM
print("L3 Monitoring : %s" % is_set(cpu, 0xf, 0, 3, 1))
print("L3 Occupancy : %s" % is_set(cpu, 0xf, 1, 3, 0))
print("L3 Total BW : %s" % is_set(cpu, 0xf, 1, 3, 1))
print("L3 Local BW : %s" % is_set(cpu, 0xf, 1, 3, 2))


@@ -42,11 +42,6 @@ def convert_to_posix_path(path: str) -> str:
return format_os_path(path, mode=Path.unix)
def convert_to_windows_path(path: str) -> str:
"""Converts the input path to Windows style."""
return format_os_path(path, mode=Path.windows)
def convert_to_platform_path(path: str) -> str:
"""Converts the input path to the current platform's native style."""
return format_os_path(path, mode=Path.platform_path)


@@ -12,7 +12,7 @@
# Archive extensions allowed in Spack
PREFIX_EXTENSIONS = ("tar", "TAR")
EXTENSIONS = ("gz", "bz2", "xz", "Z")
NO_TAR_EXTENSIONS = ("zip", "tgz", "tbz2", "tbz", "txz")
NO_TAR_EXTENSIONS = ("zip", "tgz", "tbz2", "tbz", "txz", "whl")
# Add PREFIX_EXTENSIONS and EXTENSIONS last so that .tar.gz is matched *before* .tar or .gz
ALLOWED_ARCHIVE_TYPES = (
@@ -357,10 +357,8 @@ def strip_version_suffixes(path_or_url: str) -> str:
r"i[36]86",
r"ppc64(le)?",
r"armv?(7l|6l|64)?",
# PyPI
r"[._-]py[23].*\.whl",
r"[._-]cp[23].*\.whl",
r"[._-]win.*\.exe",
# PyPI wheels
r"-(?:py|cp)[23].*",
]
for regex in suffix_regexes:
@@ -403,7 +401,7 @@ def expand_contracted_extension_in_path(
def compression_ext_from_compressed_archive(extension: str) -> Optional[str]:
"""Returns compression extension for a compressed archive"""
extension = expand_contracted_extension(extension)
for ext in [*EXTENSIONS]:
for ext in EXTENSIONS:
if ext in extension:
return ext
return None
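The single PyPI-wheel pattern above replaces the two `.whl` patterns that were removed; a standalone re-creation of its effect, with a made-up package name (`strip_version_suffixes` applies each pattern at the end of the stem):

```python
# Standalone illustration of the new wheel rule; "example_pkg" is made up,
# and the ".whl" extension is assumed to have been stripped already.
import re

stem = "example_pkg-1.2.3-cp311-cp311-manylinux_2_17_x86_64"
print(re.sub(r"-(?:py|cp)[23].*", "", stem))  # -> example_pkg-1.2.3
```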


@@ -237,16 +237,6 @@ def _get_mime_type():
return file_command("-b", "-h", "--mime-type")
@memoized
def _get_mime_type_compressed():
"""Same as _get_mime_type but attempts to check for
compression first
"""
mime_uncompressed = _get_mime_type()
mime_uncompressed.add_default_arg("-Z")
return mime_uncompressed
def mime_type(filename):
"""Returns the mime type and subtype of a file.
@@ -262,21 +252,6 @@ def mime_type(filename):
return type, subtype
def compressed_mime_type(filename):
"""Same as mime_type but checks for type that has been compressed
Args:
filename (str): file to be analyzed
Returns:
Tuple containing the MIME type and subtype
"""
output = _get_mime_type_compressed()(filename, output=str, error=str).strip()
tty.debug("==> " + output)
type, _, subtype = output.partition("/")
return type, subtype
#: This generates the library filenames that may appear on any OS.
library_extensions = ["a", "la", "so", "tbd", "dylib"]
@@ -308,13 +283,6 @@ def paths_containing_libs(paths, library_names):
return rpaths_to_include
@system_path_filter
def same_path(path1, path2):
norm1 = os.path.abspath(path1).rstrip(os.path.sep)
norm2 = os.path.abspath(path2).rstrip(os.path.sep)
return norm1 == norm2
def filter_file(
regex: str,
repl: Union[str, Callable[[Match], str]],
@@ -909,17 +877,6 @@ def is_exe(path):
return os.path.isfile(path) and os.access(path, os.X_OK)
@system_path_filter
def get_filetype(path_name):
"""
Return the output of file path_name as a string to identify file type.
"""
file = Executable("file")
file.add_default_env("LC_ALL", "C")
output = file("-b", "-h", "%s" % path_name, output=str, error=str)
return output.strip()
def has_shebang(path):
"""Returns whether a path has a shebang line. Returns False if the file cannot be opened."""
try:
@@ -1169,20 +1126,6 @@ def write_tmp_and_move(filename):
shutil.move(tmp, filename)
@contextmanager
@system_path_filter
def open_if_filename(str_or_file, mode="r"):
"""Takes either a path or a file object, and opens it if it is a path.
If it's a file object, just yields the file object.
"""
if isinstance(str_or_file, str):
with open(str_or_file, mode) as f:
yield f
else:
yield str_or_file
@system_path_filter
def touch(path):
"""Creates an empty file at the specified path."""
@@ -1295,19 +1238,6 @@ def temp_cwd():
shutil.rmtree(tmp_dir, **kwargs)
@contextmanager
@system_path_filter
def temp_rename(orig_path, temp_path):
same_path = os.path.realpath(orig_path) == os.path.realpath(temp_path)
if not same_path:
shutil.move(orig_path, temp_path)
try:
yield
finally:
if not same_path:
shutil.move(temp_path, orig_path)
@system_path_filter
def can_access(file_name):
"""True if we have read/write access to the file."""


@@ -98,36 +98,6 @@ def caller_locals():
del stack
def get_calling_module_name():
"""Make sure that the caller is a class definition, and return the
enclosing module's name.
"""
# Passing zero here skips line context for speed.
stack = inspect.stack(0)
try:
# Make sure locals contain __module__
caller_locals = stack[2][0].f_locals
finally:
del stack
if "__module__" not in caller_locals:
raise RuntimeError(
"Must invoke get_calling_module_name() " "from inside a class definition!"
)
module_name = caller_locals["__module__"]
base_name = module_name.split(".")[-1]
return base_name
def attr_required(obj, attr_name):
"""Ensure that a class has a required attribute."""
if not hasattr(obj, attr_name):
raise RequiredAttributeError(
"No required attribute '%s' in class '%s'" % (attr_name, obj.__class__.__name__)
)
def attr_setdefault(obj, name, value):
"""Like dict.setdefault, but for objects."""
if not hasattr(obj, name):
@@ -513,42 +483,6 @@ def copy(self):
return clone
def in_function(function_name):
"""True if the caller was called from some function with
the supplied name, False otherwise."""
stack = inspect.stack()
try:
for elt in stack[2:]:
if elt[3] == function_name:
return True
return False
finally:
del stack
def check_kwargs(kwargs, fun):
"""Helper for making functions with kwargs. Checks whether the kwargs
are empty after all of them have been popped off. If they're
not, raises an error describing which kwargs are invalid.
Example::
def foo(self, **kwargs):
x = kwargs.pop('x', None)
y = kwargs.pop('y', None)
z = kwargs.pop('z', None)
check_kwargs(kwargs, self.foo)
# This raises a TypeError:
foo(w='bad kwarg')
"""
if kwargs:
raise TypeError(
"'%s' is an invalid keyword argument for function %s()."
% (next(iter(kwargs)), fun.__name__)
)
def match_predicate(*args):
"""Utility function for making string matching predicates.
@@ -764,11 +698,6 @@ def pretty_seconds(seconds):
return pretty_seconds_formatter(seconds)(seconds)
class RequiredAttributeError(ValueError):
def __init__(self, message):
super().__init__(message)
class ObjectWrapper:
"""Base class that wraps an object. Derived classes can add new behavior
while staying undercover.
@@ -935,25 +864,6 @@ def uniq(sequence):
return uniq_list
def star(func):
"""Unpacks arguments for use with Multiprocessing mapping functions"""
def _wrapper(args):
return func(*args)
return _wrapper
class Devnull:
"""Null stream with less overhead than ``os.devnull``.
See https://stackoverflow.com/a/2929954.
"""
def write(self, *_):
pass
def elide_list(line_list, max_num=10):
"""Takes a long list and limits it to a smaller number of elements,
replacing intervening elements with '...'. For example::


@@ -815,10 +815,6 @@ def __init__(self, path):
super().__init__(msg)
class LockLimitError(LockError):
"""Raised when exceed maximum attempts to acquire a lock."""
class LockTimeoutError(LockError):
"""Raised when an attempt to acquire a lock times out."""


@@ -44,10 +44,6 @@ def is_debug(level=1):
return _debug >= level
def is_stacktrace():
return _stacktrace
def set_debug(level=0):
global _debug
assert level >= 0, "Debug level must be a positive value"
@@ -252,37 +248,6 @@ def die(message, *args, **kwargs) -> NoReturn:
sys.exit(1)
def get_number(prompt, **kwargs):
default = kwargs.get("default", None)
abort = kwargs.get("abort", None)
if default is not None and abort is not None:
prompt += " (default is %s, %s to abort) " % (default, abort)
elif default is not None:
prompt += " (default is %s) " % default
elif abort is not None:
prompt += " (%s to abort) " % abort
number = None
while number is None:
msg(prompt, newline=False)
ans = input()
if ans == str(abort):
return None
if ans:
try:
number = int(ans)
if number < 1:
msg("Please enter a valid number.")
number = None
except ValueError:
msg("Please enter a valid number.")
elif default is not None:
number = default
return number
def get_yes_or_no(prompt, **kwargs):
default_value = kwargs.get("default", None)


@@ -17,7 +17,6 @@
import tarfile
import tempfile
import time
import traceback
import urllib.error
import urllib.parse
import urllib.request
@@ -111,10 +110,6 @@ def __init__(self, errors):
super().__init__(self.message)
class ListMirrorSpecsError(spack.error.SpackError):
"""Raised when unable to retrieve list of specs from the mirror"""
class BinaryCacheIndex:
"""
The BinaryCacheIndex tracks what specs are available on (usually remote)
@@ -541,83 +536,6 @@ def binary_index_location():
BINARY_INDEX: BinaryCacheIndex = llnl.util.lang.Singleton(BinaryCacheIndex) # type: ignore
class NoOverwriteException(spack.error.SpackError):
"""Raised when a file would be overwritten"""
def __init__(self, file_path):
super().__init__(f"Refusing to overwrite the following file: {file_path}")
class NoGpgException(spack.error.SpackError):
"""
Raised when gpg2 is not in PATH
"""
def __init__(self, msg):
super().__init__(msg)
class NoKeyException(spack.error.SpackError):
"""
Raised when gpg has no default key added.
"""
def __init__(self, msg):
super().__init__(msg)
class PickKeyException(spack.error.SpackError):
"""
Raised when multiple keys can be used to sign.
"""
def __init__(self, keys):
err_msg = "Multiple keys available for signing\n%s\n" % keys
err_msg += "Use spack buildcache create -k <key hash> to pick a key."
super().__init__(err_msg)
class NoVerifyException(spack.error.SpackError):
"""
Raised if file fails signature verification.
"""
pass
class NoChecksumException(spack.error.SpackError):
"""
Raised if file fails checksum verification.
"""
def __init__(self, path, size, contents, algorithm, expected, computed):
super().__init__(
f"{algorithm} checksum failed for {path}",
f"Expected {expected} but got {computed}. "
f"File size = {size} bytes. Contents = {contents!r}",
)
class NewLayoutException(spack.error.SpackError):
"""
Raised if directory layout is different from buildcache.
"""
def __init__(self, msg):
super().__init__(msg)
class InvalidMetadataFile(spack.error.SpackError):
pass
class UnsignedPackageException(spack.error.SpackError):
"""
Raised if installation of unsigned package is attempted without
the use of ``--no-check-signature``.
"""
def compute_hash(data):
if isinstance(data, str):
data = data.encode("utf-8")
@@ -992,15 +910,10 @@ def url_read_method(url):
if entry.endswith("spec.json") or entry.endswith("spec.json.sig")
]
read_fn = url_read_method
except KeyError as inst:
msg = "No packages at {0}: {1}".format(cache_prefix, inst)
tty.warn(msg)
except Exception as err:
# If we got some kind of S3 (access denied or other connection
# error), the first non boto-specific class in the exception
# hierarchy is Exception. Just print a warning and return
msg = "Encountered problem listing packages at {0}: {1}".format(cache_prefix, err)
tty.warn(msg)
# If we got some kind of S3 (access denied or other connection error), the first non
# boto-specific class in the exception hierarchy is Exception. Just print a warning and return
tty.warn(f"Encountered problem listing packages at {cache_prefix}: {err}")
return file_list, read_fn
@@ -1047,11 +960,10 @@ def generate_package_index(cache_prefix, concurrency=32):
"""
try:
file_list, read_fn = _spec_files_from_cache(cache_prefix)
except ListMirrorSpecsError as err:
tty.error("Unable to generate package index, {0}".format(err))
return
except ListMirrorSpecsError as e:
raise GenerateIndexError(f"Unable to generate package index: {e}") from e
tty.debug("Retrieving spec descriptor files from {0} to build index".format(cache_prefix))
tty.debug(f"Retrieving spec descriptor files from {cache_prefix} to build index")
tmpdir = tempfile.mkdtemp()
@@ -1061,27 +973,22 @@ def generate_package_index(cache_prefix, concurrency=32):
try:
_read_specs_and_push_index(file_list, read_fn, cache_prefix, db, db_root_dir, concurrency)
except Exception as err:
msg = "Encountered problem pushing package index to {0}: {1}".format(cache_prefix, err)
tty.warn(msg)
tty.debug("\n" + traceback.format_exc())
except Exception as e:
raise GenerateIndexError(
f"Encountered problem pushing package index to {cache_prefix}: {e}"
) from e
finally:
shutil.rmtree(tmpdir)
shutil.rmtree(tmpdir, ignore_errors=True)
def generate_key_index(key_prefix, tmpdir=None):
"""Create the key index page.
Creates (or replaces) the "index.json" page at the location given in
key_prefix. This page contains an entry for each key (.pub) under
key_prefix.
Creates (or replaces) the "index.json" page at the location given in key_prefix. This page
contains an entry for each key (.pub) under key_prefix.
"""
tty.debug(
" ".join(
("Retrieving key.pub files from", url_util.format(key_prefix), "to build key index")
)
)
tty.debug(f"Retrieving key.pub files from {url_util.format(key_prefix)} to build key index")
try:
fingerprints = (
@@ -1089,17 +996,8 @@ def generate_key_index(key_prefix, tmpdir=None):
for entry in web_util.list_url(key_prefix, recursive=False)
if entry.endswith(".pub")
)
except KeyError as inst:
msg = "No keys at {0}: {1}".format(key_prefix, inst)
tty.warn(msg)
return
except Exception as err:
# If we got some kind of S3 (access denied or other connection
# error), the first non boto-specific class in the exception
# hierarchy is Exception. Just print a warning and return
msg = "Encountered problem listing keys at {0}: {1}".format(key_prefix, err)
tty.warn(msg)
return
except Exception as e:
raise CannotListKeys(f"Encountered problem listing keys at {key_prefix}: {e}") from e
remove_tmpdir = False
@@ -1124,12 +1022,13 @@ def generate_key_index(key_prefix, tmpdir=None):
keep_original=False,
extra_args={"ContentType": "application/json"},
)
except Exception as err:
msg = "Encountered problem pushing key index to {0}: {1}".format(key_prefix, err)
tty.warn(msg)
except Exception as e:
raise GenerateIndexError(
f"Encountered problem pushing key index to {key_prefix}: {e}"
) from e
finally:
if remove_tmpdir:
shutil.rmtree(tmpdir)
shutil.rmtree(tmpdir, ignore_errors=True)
def tarfile_of_spec_prefix(tar: tarfile.TarFile, prefix: str) -> None:
@@ -1200,7 +1099,8 @@ def push_or_raise(spec: Spec, out_url: str, options: PushOptions):
used at the mirror (following <tarball_directory_name>).
This method raises :py:class:`NoOverwriteException` when ``force=False`` and the tarball or
spec.json file already exist in the buildcache.
spec.json file already exist in the buildcache. It raises :py:class:`PushToBuildCacheError`
when the tarball or spec.json file cannot be pushed to the buildcache.
"""
if not spec.concrete:
raise ValueError("spec must be concrete to build tarball")
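A hedged sketch of calling this API and handling the exceptions it documents; the mirror URL is a placeholder and `spec` is assumed to be a concrete, installed Spec (the `PushOptions` fields follow the call in `_push_to_build_cache` further down):

```python
# Usage sketch; the mirror URL is a placeholder.
try:
    push_or_raise(spec, "s3://my-mirror/build_cache", PushOptions(force=False, unsigned=True))
except NoOverwriteException:
    pass  # force=False and the tarball or spec.json already exists upstream
except PushToBuildCacheError as err:
    tty.error(str(err))  # upload failed: network, permissions, ...
```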
@@ -1278,13 +1178,18 @@ def _build_tarball_in_stage_dir(spec: Spec, out_url: str, stage_dir: str, option
key = select_signing_key(options.key)
sign_specfile(key, options.force, specfile_path)
# push tarball and signed spec json to remote mirror
web_util.push_to_url(spackfile_path, remote_spackfile_path, keep_original=False)
web_util.push_to_url(
signed_specfile_path if not options.unsigned else specfile_path,
remote_signed_specfile_path if not options.unsigned else remote_specfile_path,
keep_original=False,
)
try:
# push tarball and signed spec json to remote mirror
web_util.push_to_url(spackfile_path, remote_spackfile_path, keep_original=False)
web_util.push_to_url(
signed_specfile_path if not options.unsigned else specfile_path,
remote_signed_specfile_path if not options.unsigned else remote_specfile_path,
keep_original=False,
)
except Exception as e:
raise PushToBuildCacheError(
f"Encountered problem pushing binary {remote_spackfile_path}: {e}"
) from e
# push the key to the build cache's _pgp directory so it can be
# imported
@@ -1296,8 +1201,6 @@ def _build_tarball_in_stage_dir(spec: Spec, out_url: str, stage_dir: str, option
if options.regenerate_index:
generate_package_index(url_util.join(out_url, os.path.relpath(cache_prefix, stage_dir)))
return None
class NotInstalledError(spack.error.SpackError):
"""Raised when a spec is not installed but picked to be packaged."""
@@ -1352,28 +1255,6 @@ def specs_to_be_packaged(
return [s for s in itertools.chain(roots, deps) if not s.external]
def push(spec: Spec, mirror_url: str, options: PushOptions):
"""Create and push binary package for a single spec to the specified
mirror url.
Args:
spec: Spec to package and push
mirror_url: Desired destination url for binary package
options:
Returns:
True if package was pushed, False otherwise.
"""
try:
push_or_raise(spec, mirror_url, options)
except NoOverwriteException as e:
warnings.warn(str(e))
return False
return True
def try_verify(specfile_path):
"""Utility function to attempt to verify a local file. Assumes the
file is a clearsigned signature file.
@@ -2706,3 +2587,96 @@ def conditional_fetch(self) -> FetchIndexResult:
raise FetchIndexError(f"Remote index {url_manifest} is invalid")
return FetchIndexResult(etag=None, hash=index_digest.digest, data=result, fresh=False)
class NoOverwriteException(spack.error.SpackError):
"""Raised when a file would be overwritten"""
def __init__(self, file_path):
super().__init__(f"Refusing to overwrite the following file: {file_path}")
class NoGpgException(spack.error.SpackError):
"""
Raised when gpg2 is not in PATH
"""
def __init__(self, msg):
super().__init__(msg)
class NoKeyException(spack.error.SpackError):
"""
Raised when gpg has no default key added.
"""
def __init__(self, msg):
super().__init__(msg)
class PickKeyException(spack.error.SpackError):
"""
Raised when multiple keys can be used to sign.
"""
def __init__(self, keys):
err_msg = "Multiple keys available for signing\n%s\n" % keys
err_msg += "Use spack buildcache create -k <key hash> to pick a key."
super().__init__(err_msg)
class NoVerifyException(spack.error.SpackError):
"""
Raised if file fails signature verification.
"""
pass
class NoChecksumException(spack.error.SpackError):
"""
Raised if file fails checksum verification.
"""
def __init__(self, path, size, contents, algorithm, expected, computed):
super().__init__(
f"{algorithm} checksum failed for {path}",
f"Expected {expected} but got {computed}. "
f"File size = {size} bytes. Contents = {contents!r}",
)
class NewLayoutException(spack.error.SpackError):
"""
Raised if directory layout is different from buildcache.
"""
def __init__(self, msg):
super().__init__(msg)
class InvalidMetadataFile(spack.error.SpackError):
pass
class UnsignedPackageException(spack.error.SpackError):
"""
Raised if installation of unsigned package is attempted without
the use of ``--no-check-signature``.
"""
class ListMirrorSpecsError(spack.error.SpackError):
"""Raised when unable to retrieve list of specs from the mirror"""
class GenerateIndexError(spack.error.SpackError):
"""Raised when unable to generate key or package index for mirror"""
class CannotListKeys(GenerateIndexError):
"""Raised when unable to list keys when generating key index"""
class PushToBuildCacheError(spack.error.SpackError):
"""Raised when unable to push objects to binary mirror"""


@@ -213,9 +213,6 @@ def _root_spec(spec_str: str) -> str:
platform = str(spack.platforms.host())
if platform == "darwin":
spec_str += " %apple-clang"
elif platform == "windows":
# TODO (johnwparent): Remove version constraint when clingo patch is up
spec_str += " %msvc@:19.37"
elif platform == "linux":
spec_str += " %gcc"
elif platform == "freebsd":


@@ -147,7 +147,7 @@ def _add_compilers_if_missing() -> None:
mixed_toolchain=sys.platform == "darwin"
)
if new_compilers:
spack.compilers.add_compilers_to_config(new_compilers, init_config=False)
spack.compilers.add_compilers_to_config(new_compilers)
@contextlib.contextmanager


@@ -789,7 +789,7 @@ def setup_package(pkg, dirty, context: Context = Context.BUILD):
for mod in ["cray-mpich", "cray-libsci"]:
module("unload", mod)
if target.module_name:
if target and target.module_name:
load_module(target.module_name)
load_external_modules(pkg)


@@ -434,11 +434,6 @@ def _do_patch_libtool(self):
r"crtendS\.o",
]:
x.filter(regex=(rehead + o), repl="")
elif self.pkg.compiler.name == "dpcpp":
# Hack to filter out spurious predep_objects when building with Intel dpcpp
# (see https://github.com/spack/spack/issues/32863):
x.filter(regex=r"^(predep_objects=.*)/tmp/conftest-[0-9A-Fa-f]+\.o", repl=r"\1")
x.filter(regex=r"^(predep_objects=.*)/tmp/a-[0-9A-Fa-f]+\.o", repl=r"\1")
elif self.pkg.compiler.name == "nag":
for tag in ["fc", "f77"]:
marker = markers[tag]


@@ -4,6 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import collections.abc
import os
import re
from typing import Tuple
import llnl.util.filesystem as fs
@@ -15,6 +16,12 @@
from .cmake import CMakeBuilder, CMakePackage
def spec_uses_toolchain(spec):
gcc_toolchain_regex = re.compile(".*gcc-toolchain.*")
using_toolchain = list(filter(gcc_toolchain_regex.match, spec.compiler_flags["cxxflags"]))
return using_toolchain
def cmake_cache_path(name, value, comment="", force=False):
"""Generate a string for a cmake cache variable"""
force_str = " FORCE" if force else ""
@@ -213,7 +220,7 @@ def initconfig_mpi_entries(self):
else:
# starting with cmake 3.10, FindMPI expects MPIEXEC_EXECUTABLE
# vs the older versions which expect MPIEXEC
if self.pkg.spec["cmake"].satisfies("@3.10:"):
if spec["cmake"].satisfies("@3.10:"):
entries.append(cmake_cache_path("MPIEXEC_EXECUTABLE", mpiexec))
else:
entries.append(cmake_cache_path("MPIEXEC", mpiexec))
@@ -248,12 +255,17 @@ def initconfig_hardware_entries(self):
# Include the deprecated CUDA_TOOLKIT_ROOT_DIR for supporting BLT packages
entries.append(cmake_cache_path("CUDA_TOOLKIT_ROOT_DIR", cudatoolkitdir))
archs = spec.variants["cuda_arch"].value
if archs[0] != "none":
arch_str = ";".join(archs)
entries.append(
cmake_cache_string("CMAKE_CUDA_ARCHITECTURES", "{0}".format(arch_str))
)
# CUDA_FLAGS
cuda_flags = []
if not spec.satisfies("cuda_arch=none"):
cuda_archs = ";".join(spec.variants["cuda_arch"].value)
entries.append(cmake_cache_string("CMAKE_CUDA_ARCHITECTURES", cuda_archs))
if spec_uses_toolchain(spec):
cuda_flags.append("-Xcompiler {}".format(spec_uses_toolchain(spec)[0]))
entries.append(cmake_cache_string("CMAKE_CUDA_FLAGS", " ".join(cuda_flags)))
if "+rocm" in spec:
entries.append("#------------------{0}".format("-" * 30))
@@ -262,9 +274,6 @@ def initconfig_hardware_entries(self):
# Explicitly setting HIP_ROOT_DIR may be a patch that is no longer necessary
entries.append(cmake_cache_path("HIP_ROOT_DIR", "{0}".format(spec["hip"].prefix)))
entries.append(
cmake_cache_path("HIP_CXX_COMPILER", "{0}".format(self.spec["hip"].hipcc))
)
llvm_bin = spec["llvm-amdgpu"].prefix.bin
llvm_prefix = spec["llvm-amdgpu"].prefix
# Some ROCm systems seem to point to /<path>/rocm-<ver>/ and
@@ -277,11 +286,9 @@ def initconfig_hardware_entries(self):
archs = self.spec.variants["amdgpu_target"].value
if archs[0] != "none":
arch_str = ";".join(archs)
entries.append(
cmake_cache_string("CMAKE_HIP_ARCHITECTURES", "{0}".format(arch_str))
)
entries.append(cmake_cache_string("AMDGPU_TARGETS", "{0}".format(arch_str)))
entries.append(cmake_cache_string("GPU_TARGETS", "{0}".format(arch_str)))
entries.append(cmake_cache_string("CMAKE_HIP_ARCHITECTURES", arch_str))
entries.append(cmake_cache_string("AMDGPU_TARGETS", arch_str))
entries.append(cmake_cache_string("GPU_TARGETS", arch_str))
return entries
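For concreteness, the arch handling above joins variant values with `;` before writing them as cache strings; a hedged sketch of the expected result:

```python
# Sketch of the arch handling: variant values are ";"-joined.
cuda_archs = ["80", "90"]    # e.g. spec.variants["cuda_arch"].value
print(";".join(cuda_archs))  # -> "80;90"
# cmake_cache_string("CMAKE_CUDA_ARCHITECTURES", "80;90") is then expected to
# render roughly as: set(CMAKE_CUDA_ARCHITECTURES "80;90" CACHE STRING "")
```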


@@ -4,12 +4,15 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import inspect
import os
from typing import Iterable
from llnl.util.filesystem import filter_file
from llnl.util.filesystem import filter_file, find
from llnl.util.lang import memoized
import spack.builder
import spack.package_base
from spack.directives import build_system, extends
from spack.install_test import SkipTest, test_part
from spack.util.executable import Executable
from ._checks import BaseBuilder, execute_build_time_tests
@@ -28,6 +31,58 @@ class PerlPackage(spack.package_base.PackageBase):
extends("perl", when="build_system=perl")
@property
@memoized
def _platform_dir(self):
"""Name of platform-specific module subdirectory."""
perl = self.spec["perl"].command
options = "-E", "use Config; say $Config{archname}"
out = perl(*options, output=str.split, error=str.split)
return out.strip()
@property
def use_modules(self) -> Iterable[str]:
"""Names of the package's perl modules."""
module_files = find(self.prefix.lib, ["*.pm"], recursive=True)
# Drop the platform directory, if present
if self._platform_dir:
platform_dir = self._platform_dir + os.sep
module_files = [m.replace(platform_dir, "") for m in module_files]
# Drop the extension and library path
prefix = self.prefix.lib + os.sep
modules = [os.path.splitext(m)[0].replace(prefix, "") for m in module_files]
# Drop the perl subdirectory as well
return ["::".join(m.split(os.sep)[1:]) for m in modules]
@property
def skip_modules(self) -> Iterable[str]:
"""Names of modules that should be skipped when running tests.
These are a subset of use_modules.
Returns:
List of strings of module names.
"""
return []
def test_use(self):
"""Test 'use module'"""
if not self.use_modules:
raise SkipTest("Test requires use_modules package property.")
perl = self.spec["perl"].command
for module in self.use_modules:
if module in self.skip_modules:
continue
with test_part(self, f"test_use-{module}", purpose=f"checking use of {module}"):
options = ["-we", f'use strict; use {module}; print("OK\n")']
out = perl(*options, output=str.split, error=str.split)
assert "OK" in out
@spack.builder.builder("perl")
class PerlBuilder(BaseBuilder):
@@ -52,7 +107,7 @@ class PerlBuilder(BaseBuilder):
phases = ("configure", "build", "install")
#: Names associated with package methods in the old build-system format
legacy_methods = ("configure_args", "check")
legacy_methods = ("configure_args", "check", "test_use")
#: Names associated with package attributes in the old build-system format
legacy_attributes = ()


@@ -75,6 +75,8 @@
# does not like its directory structure.
#
import os
import spack.compilers
import spack.variant
from spack.directives import conflicts, depends_on, variant
from spack.package_base import PackageBase
@@ -143,7 +145,12 @@ class ROCmPackage(PackageBase):
depends_on("hip +rocm", when="+rocm")
# need amd gpu type for rocm builds
compilers_supporting_rocm = ("cce", "rocmcc", "clang", "aocc")
conflicts("amdgpu_target=none", when="+rocm")
# If this variable shadows a property, it overrides it
for cmp_name in spack.compilers.supported_compilers():
if cmp_name not in compilers_supporting_rocm:
conflicts(f"%{cmp_name}", when="+rocm")
# https://github.com/ROCm-Developer-Tools/HIP/blob/master/bin/hipcc
# It seems that hip-clang does not (yet?) accept this flag, in which case
@@ -154,6 +161,32 @@ def hip_flags(amdgpu_target):
archs = ",".join(amdgpu_target)
return "--amdgpu-target={0}".format(archs)
# ASAN
@staticmethod
def asan_on(env, llvm_path):
env.set("CC", llvm_path + "/bin/clang")
env.set("CXX", llvm_path + "/bin/clang++")
env.set("ASAN_OPTIONS", "detect_leaks=0")
asan_lib_path = None
for root, _, files in os.walk(llvm_path):
    if "libclang_rt.asan-x86_64.so" in files:
        asan_lib_path = root
        break
if asan_lib_path:
    env.prepend_path("LD_LIBRARY_PATH", asan_lib_path)
SET_DWARF_VERSION_4 = ""
try:
# This will throw an error if imported on a non-Linux platform.
import distro
distname = distro.id()
except ImportError:
distname = "unknown"
if "rhel" in distname or "sles" in distname:
SET_DWARF_VERSION_4 = "-gdwarf-4"
env.set("CFLAGS", "-fsanitize=address -shared-libasan -g " + SET_DWARF_VERSION_4)
env.set("CXXFLAGS", "-fsanitize=address -shared-libasan -g " + SET_DWARF_VERSION_4)
env.set("LDFLAGS", "-Wl,--enable-new-dtags -fuse-ld=lld -fsanitize=address -g -Wl,")
# HIP version vs Architecture
# TODO: add a bunch of lines like:


@@ -9,6 +9,8 @@
import inspect
from typing import List, Optional, Tuple
from llnl.util import lang
import spack.build_environment
#: Builder classes, as registered by the "builder" decorator
@@ -231,24 +233,27 @@ def __new__(mcs, name, bases, attr_dict):
for temporary_stage in (_RUN_BEFORE, _RUN_AFTER):
staged_callbacks = temporary_stage.callbacks
# We don't have callbacks in this class, move on
if not staged_callbacks:
# Here we have an adapter from an old-style package. This means there is no
# hierarchy of builders, and every callback that had to be combined between
# *Package and *Builder has been combined already by _PackageAdapterMeta
if name == "Adapter":
continue
# If we are here we have callbacks. To get a complete list, get first what
# was attached to parent classes, then prepend what we have registered here.
# If we are here we have callbacks. To get a complete list, we accumulate all the
# callbacks from base classes, we deduplicate them, then prepend what we have
# registered here.
#
# The order should be:
# 1. Callbacks are registered in order within the same class
# 2. Callbacks defined in derived classes precede those defined in base
# classes
callbacks_from_base = []
for base in bases:
callbacks_from_base = getattr(base, temporary_stage.attribute_name, None)
if callbacks_from_base:
break
else:
callbacks_from_base = []
current_callbacks = getattr(base, temporary_stage.attribute_name, None)
if not current_callbacks:
continue
callbacks_from_base.extend(current_callbacks)
callbacks_from_base = list(lang.dedupe(callbacks_from_base))
# Set the callbacks in this class and flush the temporary stage
attr_dict[temporary_stage.attribute_name] = staged_callbacks[:] + callbacks_from_base
del temporary_stage.callbacks[:]
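A self-contained sketch of the ordering rule described in the comments above, with a local stand-in for `lang.dedupe` (order-preserving de-duplication):

```python
# Sketch of the callback accumulation rule; dedupe() stands in for
# llnl.util.lang.dedupe and keeps the first occurrence of each item.
def dedupe(seq):
    seen = set()
    return [x for x in seq if not (x in seen or seen.add(x))]

base_a = ["check", "fix_libtool"]  # callbacks from one base class
base_b = ["check"]                 # callbacks from another base class
staged = ["patch_config"]          # callbacks registered in this class body

print(staged + dedupe(base_a + base_b))
# -> ['patch_config', 'check', 'fix_libtool']: derived-class callbacks first
```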


@@ -70,7 +70,7 @@
JOB_NAME_FORMAT = (
"{name}{@version} {/hash:7} {%compiler.name}{@compiler.version}{arch=architecture}"
)
IS_WINDOWS = sys.platform == "win32"
spack_gpg = spack.main.SpackCommand("gpg")
spack_compiler = spack.main.SpackCommand("compiler")
@@ -103,7 +103,7 @@ def get_job_name(spec: spack.spec.Spec, build_group: str = ""):
job_name = spec.format(JOB_NAME_FORMAT)
if build_group:
job_name = "{0} {1}".format(job_name, build_group)
job_name = f"{job_name} {build_group}"
return job_name[:255]
@@ -114,7 +114,7 @@ def _remove_reserved_tags(tags):
def _spec_deps_key(s):
return "{0}/{1}".format(s.name, s.dag_hash(7))
return f"{s.name}/{s.dag_hash(7)}"
def _add_dependency(spec_label, dep_label, deps):
@@ -213,7 +213,7 @@ def _print_staging_summary(spec_labels, stages, mirrors_to_check, rebuild_decisi
mirrors = spack.mirror.MirrorCollection(mirrors=mirrors_to_check, binary=True)
tty.msg("Checked the following mirrors for binaries:")
for m in mirrors.values():
tty.msg(" {0}".format(m.fetch_url))
tty.msg(f" {m.fetch_url}")
tty.msg("Staging summary ([x] means a job needs rebuilding):")
for stage_index, stage in enumerate(stages):
@@ -296,7 +296,7 @@ def append_dep(s, d):
for spec in spec_list:
for s in spec.traverse(deptype="all"):
if s.external:
tty.msg("Will not stage external pkg: {0}".format(s))
tty.msg(f"Will not stage external pkg: {s}")
continue
skey = _spec_deps_key(s)
@@ -305,7 +305,7 @@ def append_dep(s, d):
for d in s.dependencies(deptype="all"):
dkey = _spec_deps_key(d)
if d.external:
tty.msg("Will not stage external dep: {0}".format(d))
tty.msg(f"Will not stage external dep: {d}")
continue
append_dep(skey, dkey)
@@ -374,8 +374,8 @@ def get_stack_changed(env_path, rev1="HEAD^", rev2="HEAD"):
for path in lines:
if ".gitlab-ci.yml" in path or path in env_path:
tty.debug("env represented by {0} changed".format(env_path))
tty.debug("touched file: {0}".format(path))
tty.debug(f"env represented by {env_path} changed")
tty.debug(f"touched file: {path}")
return True
return False
@@ -419,7 +419,7 @@ def get_spec_filter_list(env, affected_pkgs, dependent_traverse_depth=None):
all_concrete_specs = env.all_specs()
tty.debug("All concrete environment specs:")
for s in all_concrete_specs:
tty.debug(" {0}/{1}".format(s.name, s.dag_hash()[:7]))
tty.debug(f" {s.name}/{s.dag_hash()[:7]}")
affected_pkgs = frozenset(affected_pkgs)
env_matches = [s for s in all_concrete_specs if s.name in affected_pkgs]
visited = set()
@@ -510,7 +510,7 @@ def __is_named(self, section):
and if so return the name, otherwise return None.
"""
for _name in self.named_jobs:
keys = ["{0}-job".format(_name), "{0}-job-remove".format(_name)]
keys = [f"{_name}-job", f"{_name}-job-remove"]
if any([key for key in keys if key in section]):
return _name
@@ -525,9 +525,9 @@ def __job_name(name, suffix=""):
jname = name
if suffix:
jname = "{0}-job{1}".format(name, suffix)
jname = f"{name}-job{suffix}"
else:
jname = "{0}-job".format(name)
jname = f"{name}-job"
return jname
@@ -739,7 +739,7 @@ def generate_gitlab_ci_yaml(
# Requested to prune untouched packages, but assume we won't do that
# unless we're actually in a git repo.
rev1, rev2 = get_change_revisions()
tty.debug("Got following revisions: rev1={0}, rev2={1}".format(rev1, rev2))
tty.debug(f"Got following revisions: rev1={rev1}, rev2={rev2}")
if rev1 and rev2:
# If the stack file itself did not change, proceed with pruning
if not get_stack_changed(env.manifest_path, rev1, rev2):
@@ -747,13 +747,13 @@ def generate_gitlab_ci_yaml(
affected_pkgs = compute_affected_packages(rev1, rev2)
tty.debug("affected pkgs:")
for p in affected_pkgs:
tty.debug(" {0}".format(p))
tty.debug(f" {p}")
affected_specs = get_spec_filter_list(
env, affected_pkgs, dependent_traverse_depth=dependent_depth
)
tty.debug("all affected specs:")
for s in affected_specs:
tty.debug(" {0}/{1}".format(s.name, s.dag_hash()[:7]))
tty.debug(f" {s.name}/{s.dag_hash()[:7]}")
# Allow overriding --prune-dag cli opt with environment variable
prune_dag_override = os.environ.get("SPACK_PRUNE_UP_TO_DATE", None)
@@ -978,7 +978,7 @@ def generate_gitlab_ci_yaml(
rebuild_decisions = {}
for stage_jobs in stages:
stage_name = "stage-{0}".format(stage_id)
stage_name = f"stage-{stage_id}"
stage_names.append(stage_name)
stage_id += 1
@@ -1009,7 +1009,7 @@ def generate_gitlab_ci_yaml(
job_object = spack_ci_ir["jobs"][release_spec_dag_hash]["attributes"]
if not job_object:
tty.warn("No match found for {0}, skipping it".format(release_spec))
tty.warn(f"No match found for {release_spec}, skipping it")
continue
if spack_pipeline_type is not None:
@@ -1119,7 +1119,7 @@ def main_script_replacements(cmd):
if artifacts_root:
job_object["needs"].append(
{"job": generate_job_name, "pipeline": "{0}".format(parent_pipeline_id)}
{"job": generate_job_name, "pipeline": f"{parent_pipeline_id}"}
)
# Let downstream jobs know whether the spec needed rebuilding, regardless
@@ -1185,19 +1185,17 @@ def main_script_replacements(cmd):
if spack_pipeline_type == "spack_pull_request":
spack.mirror.remove("ci_shared_pr_mirror", cfg.default_modify_scope())
tty.debug("{0} build jobs generated in {1} stages".format(job_id, stage_id))
tty.debug(f"{job_id} build jobs generated in {stage_id} stages")
if job_id > 0:
tty.debug(
"The max_needs_job is {0}, with {1} needs".format(max_needs_job, max_length_needs)
)
tty.debug(f"The max_needs_job is {max_needs_job}, with {max_length_needs} needs")
# Use "all_job_names" to populate the build group for this set
if cdash_handler and cdash_handler.auth_token:
try:
cdash_handler.populate_buildgroup(all_job_names)
except (SpackError, HTTPError, URLError) as err:
tty.warn("Problem populating buildgroup: {0}".format(err))
tty.warn(f"Problem populating buildgroup: {err}")
else:
tty.warn("Unable to populate buildgroup without CDash credentials")
@@ -1211,9 +1209,7 @@ def main_script_replacements(cmd):
sync_job = copy.deepcopy(spack_ci_ir["jobs"]["copy"]["attributes"])
sync_job["stage"] = "copy"
if artifacts_root:
sync_job["needs"] = [
{"job": generate_job_name, "pipeline": "{0}".format(parent_pipeline_id)}
]
sync_job["needs"] = [{"job": generate_job_name, "pipeline": f"{parent_pipeline_id}"}]
if "variables" not in sync_job:
sync_job["variables"] = {}
@@ -1230,6 +1226,7 @@ def main_script_replacements(cmd):
# TODO: Remove this condition in Spack 0.23
buildcache_source = os.environ.get("SPACK_SOURCE_MIRROR", None)
sync_job["variables"]["SPACK_BUILDCACHE_SOURCE"] = buildcache_source
sync_job["dependencies"] = []
output_object["copy"] = sync_job
job_id += 1
@@ -1348,7 +1345,7 @@ def main_script_replacements(cmd):
copy_specs_file = os.path.join(
copy_specs_dir,
"copy_{}_specs.json".format(spack_stack_name if spack_stack_name else "rebuilt"),
f"copy_{spack_stack_name if spack_stack_name else 'rebuilt'}_specs.json",
)
with open(copy_specs_file, "w") as fd:
@@ -1440,7 +1437,7 @@ def import_signing_key(base64_signing_key):
fd.write(decoded_key)
key_import_output = spack_gpg("trust", sign_key_path, output=str)
tty.debug("spack gpg trust {0}".format(sign_key_path))
tty.debug(f"spack gpg trust {sign_key_path}")
tty.debug(key_import_output)
# Now print the keys we have for verifying and signing
@@ -1466,45 +1463,39 @@ def can_verify_binaries():
return len(gpg_util.public_keys()) >= 1
def _push_mirror_contents(input_spec, sign_binaries, mirror_url):
def _push_to_build_cache(spec: spack.spec.Spec, sign_binaries: bool, mirror_url: str) -> None:
"""Unchecked version of the public API, for easier mocking"""
unsigned = not sign_binaries
tty.debug("Creating buildcache ({0})".format("unsigned" if unsigned else "signed"))
push_url = spack.mirror.Mirror.from_url(mirror_url).push_url
return bindist.push(input_spec, push_url, bindist.PushOptions(force=True, unsigned=unsigned))
bindist.push_or_raise(
spec,
spack.mirror.Mirror.from_url(mirror_url).push_url,
bindist.PushOptions(force=True, unsigned=not sign_binaries),
)
def push_mirror_contents(input_spec: spack.spec.Spec, mirror_url, sign_binaries):
def push_to_build_cache(spec: spack.spec.Spec, mirror_url: str, sign_binaries: bool) -> bool:
"""Push one or more binary packages to the mirror.
Arguments:
input_spec(spack.spec.Spec): Installed spec to push
mirror_url (str): Base url of target mirror
sign_binaries (bool): If True, spack will attempt to sign binary
package before pushing.
spec: Installed spec to push
mirror_url: URL of target mirror
sign_binaries: If True, spack will attempt to sign binary package before pushing.
"""
tty.debug(f"Pushing to build cache ({'signed' if sign_binaries else 'unsigned'})")
try:
return _push_mirror_contents(input_spec, sign_binaries, mirror_url)
except Exception as inst:
# If the mirror we're pushing to is on S3 and there's some
# permissions problem, for example, we can't just target
# that exception type here, since users of the
# `spack ci rebuild' may not need or want any dependency
# on boto3. So we use the first non-boto exception type
# in the hierarchy:
# boto3.exceptions.S3UploadFailedError
# boto3.exceptions.Boto3Error
# Exception
# BaseException
# object
err_msg = "Error msg: {0}".format(inst)
if any(x in err_msg for x in ["Access Denied", "InvalidAccessKeyId"]):
tty.msg("Permission problem writing to {0}".format(mirror_url))
tty.msg(err_msg)
_push_to_build_cache(spec, sign_binaries, mirror_url)
return True
except bindist.PushToBuildCacheError as e:
tty.error(str(e))
return False
except Exception as e:
# TODO (zackgalbreath): write an adapter for boto3 exceptions so we can catch a specific
# exception instead of parsing str(e)...
msg = str(e)
if any(x in msg for x in ["Access Denied", "InvalidAccessKeyId"]):
tty.error(f"Permission problem writing to {mirror_url}: {msg}")
return False
else:
raise inst
raise
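# Hedged sketch of the string-matching fallback above. boto3 is deliberately
# not imported here, so S3 permission failures are recognized by substring
# rather than by exception type (helper name is illustrative only):
def _looks_like_s3_permission_error(exc: Exception) -> bool:
    msg = str(exc)
    return any(x in msg for x in ("Access Denied", "InvalidAccessKeyId"))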
def remove_other_mirrors(mirrors_to_keep, scope=None):
@@ -1531,8 +1522,9 @@ def copy_files_to_artifacts(src, artifacts_dir):
try:
fs.copy(src, artifacts_dir)
except Exception as err:
msg = ("Unable to copy files ({0}) to artifacts {1} due to " "exception: {2}").format(
src, artifacts_dir, str(err)
msg = (
f"Unable to copy files ({src}) to artifacts {artifacts_dir} due to "
f"exception: {str(err)}"
)
tty.warn(msg)
@@ -1548,23 +1540,23 @@ def copy_stage_logs_to_artifacts(job_spec: spack.spec.Spec, job_log_dir: str) ->
job_spec: spec associated with spack install log
job_log_dir: path into which build log should be copied
"""
tty.debug("job spec: {0}".format(job_spec))
tty.debug(f"job spec: {job_spec}")
if not job_spec:
msg = "Cannot copy stage logs: job spec ({0}) is required"
tty.error(msg.format(job_spec))
msg = f"Cannot copy stage logs: job spec ({job_spec}) is required"
tty.error(msg)
return
try:
pkg_cls = spack.repo.PATH.get_pkg_class(job_spec.name)
job_pkg = pkg_cls(job_spec)
tty.debug("job package: {0}".format(job_pkg))
tty.debug(f"job package: {job_pkg}")
except AssertionError:
msg = "Cannot copy stage logs: job spec ({0}) must be concrete"
tty.error(msg.format(job_spec))
msg = f"Cannot copy stage logs: job spec ({job_spec}) must be concrete"
tty.error(msg)
return
stage_dir = job_pkg.stage.path
tty.debug("stage dir: {0}".format(stage_dir))
tty.debug(f"stage dir: {stage_dir}")
for file in [job_pkg.log_path, job_pkg.env_mods_path, *job_pkg.builder.archive_files]:
copy_files_to_artifacts(file, job_log_dir)
@@ -1577,10 +1569,10 @@ def copy_test_logs_to_artifacts(test_stage, job_test_dir):
test_stage (str): test stage path
job_test_dir (str): the destination artifacts test directory
"""
tty.debug("test stage: {0}".format(test_stage))
tty.debug(f"test stage: {test_stage}")
if not os.path.exists(test_stage):
msg = "Cannot copy test logs: job test stage ({0}) does not exist"
tty.error(msg.format(test_stage))
msg = f"Cannot copy test logs: job test stage ({test_stage}) does not exist"
tty.error(msg)
return
copy_files_to_artifacts(os.path.join(test_stage, "*", "*.txt"), job_test_dir)
@@ -1595,7 +1587,7 @@ def download_and_extract_artifacts(url, work_dir):
url (str): Complete url to artifacts.zip file
work_dir (str): Path to destination where artifacts should be extracted
"""
tty.msg("Fetching artifacts from: {0}\n".format(url))
tty.msg(f"Fetching artifacts from: {url}\n")
headers = {"Content-Type": "application/zip"}
@@ -1612,7 +1604,7 @@ def download_and_extract_artifacts(url, work_dir):
response_code = response.getcode()
if response_code != 200:
msg = "Error response code ({0}) in reproduce_ci_job".format(response_code)
msg = f"Error response code ({response_code}) in reproduce_ci_job"
raise SpackError(msg)
artifacts_zip_path = os.path.join(work_dir, "artifacts.zip")
@@ -1642,7 +1634,7 @@ def get_spack_info():
return git_log
return "no git repo, use spack {0}".format(spack.spack_version)
return f"no git repo, use spack {spack.spack_version}"
def setup_spack_repro_version(repro_dir, checkout_commit, merge_commit=None):
@@ -1665,8 +1657,8 @@ def setup_spack_repro_version(repro_dir, checkout_commit, merge_commit=None):
"""
# figure out the path to the spack git version being used for the
# reproduction
print("checkout_commit: {0}".format(checkout_commit))
print("merge_commit: {0}".format(merge_commit))
print(f"checkout_commit: {checkout_commit}")
print(f"merge_commit: {merge_commit}")
dot_git_path = os.path.join(spack.paths.prefix, ".git")
if not os.path.exists(dot_git_path):
@@ -1685,14 +1677,14 @@ def setup_spack_repro_version(repro_dir, checkout_commit, merge_commit=None):
git("log", "-1", checkout_commit, output=str, error=os.devnull, fail_on_error=False)
if git.returncode != 0:
tty.error("Missing commit: {0}".format(checkout_commit))
tty.error(f"Missing commit: {checkout_commit}")
return False
if merge_commit:
git("log", "-1", merge_commit, output=str, error=os.devnull, fail_on_error=False)
if git.returncode != 0:
tty.error("Missing commit: {0}".format(merge_commit))
tty.error(f"Missing commit: {merge_commit}")
return False
# Next attempt to clone your local spack repo into the repro dir
@@ -1715,7 +1707,7 @@ def setup_spack_repro_version(repro_dir, checkout_commit, merge_commit=None):
)
if git.returncode != 0:
tty.error("Unable to checkout {0}".format(checkout_commit))
tty.error(f"Unable to checkout {checkout_commit}")
tty.msg(co_out)
return False
@@ -1734,7 +1726,7 @@ def setup_spack_repro_version(repro_dir, checkout_commit, merge_commit=None):
)
if git.returncode != 0:
tty.error("Unable to merge {0}".format(merge_commit))
tty.error(f"Unable to merge {merge_commit}")
tty.msg(merge_out)
return False
@@ -1755,6 +1747,7 @@ def reproduce_ci_job(url, work_dir, autostart, gpg_url, runtime):
commands to run to reproduce the build once inside the container.
"""
work_dir = os.path.realpath(work_dir)
platform_script_ext = "ps1" if IS_WINDOWS else "sh"
download_and_extract_artifacts(url, work_dir)
gpg_path = None
@@ -1765,13 +1758,13 @@ def reproduce_ci_job(url, work_dir, autostart, gpg_url, runtime):
lock_file = fs.find(work_dir, "spack.lock")[0]
repro_lock_dir = os.path.dirname(lock_file)
tty.debug("Found lock file in: {0}".format(repro_lock_dir))
tty.debug(f"Found lock file in: {repro_lock_dir}")
yaml_files = fs.find(work_dir, ["*.yaml", "*.yml"])
tty.debug("yaml files:")
for yaml_file in yaml_files:
tty.debug(" {0}".format(yaml_file))
tty.debug(f" {yaml_file}")
pipeline_yaml = None
@@ -1786,10 +1779,10 @@ def reproduce_ci_job(url, work_dir, autostart, gpg_url, runtime):
pipeline_yaml = yaml_obj
if pipeline_yaml:
tty.debug("\n{0} is likely your pipeline file".format(yf))
tty.debug(f"\n{yf} is likely your pipeline file")
relative_concrete_env_dir = pipeline_yaml["variables"]["SPACK_CONCRETE_ENV_DIR"]
tty.debug("Relative environment path used by cloud job: {0}".format(relative_concrete_env_dir))
tty.debug(f"Relative environment path used by cloud job: {relative_concrete_env_dir}")
# Using the relative concrete environment path found in the generated
# pipeline variable above, copy the spack environment files so they'll
@@ -1803,10 +1796,11 @@ def reproduce_ci_job(url, work_dir, autostart, gpg_url, runtime):
shutil.copyfile(orig_yaml_path, copy_yaml_path)
# Find the install script in the unzipped artifacts and make it executable
install_script = fs.find(work_dir, "install.sh")[0]
st = os.stat(install_script)
os.chmod(install_script, st.st_mode | stat.S_IEXEC)
install_script = fs.find(work_dir, f"install.{platform_script_ext}")[0]
if not IS_WINDOWS:
# pointless on Windows
st = os.stat(install_script)
os.chmod(install_script, st.st_mode | stat.S_IEXEC)
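# The pattern above (stat, then chmod with st_mode | S_IEXEC) adds the
# owner-execute bit without clobbering the other permission bits; it is
# skipped on Windows because NTFS has no POSIX execute bit.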
# Find the repro details file. This just includes some values we wrote
# during `spack ci rebuild` to make reproduction easier. E.g. the job
# name is written here so we can easily find the configuration of the
@@ -1844,7 +1838,7 @@ def reproduce_ci_job(url, work_dir, autostart, gpg_url, runtime):
job_image = job_image_elt["name"]
else:
job_image = job_image_elt
tty.msg("Job ran with the following image: {0}".format(job_image))
tty.msg(f"Job ran with the following image: {job_image}")
# Because this job was run with a docker image, we will try
# to print a "docker run" command that bind-mounts the directory where
@@ -1919,65 +1913,75 @@ def reproduce_ci_job(url, work_dir, autostart, gpg_url, runtime):
job_tags = None
if "tags" in job_yaml:
job_tags = job_yaml["tags"]
tty.msg("Job ran with the following tags: {0}".format(job_tags))
tty.msg(f"Job ran with the following tags: {job_tags}")
entrypoint_script = [
["git", "config", "--global", "--add", "safe.directory", mount_as_dir],
[".", os.path.join(mount_as_dir if job_image else work_dir, "share/spack/setup-env.sh")],
[
".",
os.path.join(
mount_as_dir if job_image else work_dir,
f"share/spack/setup-env.{platform_script_ext}",
),
],
["spack", "gpg", "trust", mounted_gpg_path if job_image else gpg_path] if gpg_path else [],
["spack", "env", "activate", mounted_env_dir if job_image else repro_dir],
[os.path.join(mounted_repro_dir, "install.sh") if job_image else install_script],
[
(
os.path.join(mounted_repro_dir, f"install.{platform_script_ext}")
if job_image
else install_script
)
],
]
entry_script = os.path.join(mounted_workdir, f"entrypoint.{platform_script_ext}")
inst_list = []
# Finally, print out some instructions to reproduce the build
if job_image:
# Allow interactive
entrypoint_script.extend(
[
[
"echo",
"Re-run install script using:\n\t{0}".format(
os.path.join(mounted_repro_dir, "install.sh")
if job_image
else install_script
),
],
# Allow interactive
["exec", "$@"],
]
install_mechanism = (
os.path.join(mounted_repro_dir, f"install.{platform_script_ext}")
if job_image
else install_script
)
entrypoint_script.append(["echo", f"Re-run install script using:\n\t{install_mechanism}"])
# Allow interactive
if IS_WINDOWS:
entrypoint_script.extend(["&", "($args -Join ' ')", "-NoExit"])
else:
entrypoint_script.extend(["exec", "$@"])
process_command(
"entrypoint", entrypoint_script, work_dir, run=False, exit_on_failure=False
)
docker_command = [
[
runtime,
"run",
"-i",
"-t",
"--rm",
"--name",
"spack_reproducer",
"-v",
":".join([work_dir, mounted_workdir, "Z"]),
"-v",
":".join(
[
os.path.join(work_dir, "jobs_scratch_dir"),
os.path.join(mount_as_dir, "jobs_scratch_dir"),
"Z",
]
),
"-v",
":".join([os.path.join(work_dir, "spack"), mount_as_dir, "Z"]),
"--entrypoint",
os.path.join(mounted_workdir, "entrypoint.sh"),
job_image,
"bash",
]
runtime,
"run",
"-i",
"-t",
"--rm",
"--name",
"spack_reproducer",
"-v",
":".join([work_dir, mounted_workdir, "Z"]),
"-v",
":".join(
[
os.path.join(work_dir, "jobs_scratch_dir"),
os.path.join(mount_as_dir, "jobs_scratch_dir"),
"Z",
]
),
"-v",
":".join([os.path.join(work_dir, "spack"), mount_as_dir, "Z"]),
"--entrypoint",
]
if IS_WINDOWS:
docker_command.extend(["powershell.exe", job_image, entry_script, "powershell.exe"])
else:
docker_command.extend([entry_script, job_image, "bash"])
docker_command = [docker_command]
autostart = autostart and setup_result
process_command("start", docker_command, work_dir, run=autostart)
@@ -1986,22 +1990,26 @@ def reproduce_ci_job(url, work_dir, autostart, gpg_url, runtime):
inst_list.extend(
[
" - Start the docker container install",
" $ {0}/start.sh".format(work_dir),
f" $ {work_dir}/start.{platform_script_ext}",
]
)
else:
process_command("reproducer", entrypoint_script, work_dir, run=False)
autostart = autostart and setup_result
process_command("reproducer", entrypoint_script, work_dir, run=autostart)
inst_list.append("\nOnce on the tagged runner:\n\n")
inst_list.extend(
[" - Run the reproducer script", " $ {0}/reproducer.sh".format(work_dir)]
[
" - Run the reproducer script",
f" $ {work_dir}/reproducer.{platform_script_ext}",
]
)
if not setup_result:
inst_list.append("\n - Clone spack and acquire tested commit")
inst_list.append("\n {0}\n".format(spack_info))
inst_list.append(f"\n {spack_info}\n")
inst_list.append("\n")
inst_list.append("\n Path to clone spack: {0}/spack\n\n".format(work_dir))
inst_list.append(f"\n Path to clone spack: {work_dir}/spack\n\n")
tty.msg("".join(inst_list))
@@ -2020,50 +2028,78 @@ def process_command(name, commands, repro_dir, run=True, exit_on_failure=True):
Returns: the exit code from processing the command
"""
tty.debug("spack {0} arguments: {1}".format(name, commands))
tty.debug(f"spack {name} arguments: {commands}")
if len(commands) == 0 or isinstance(commands[0], str):
commands = [commands]
# Create a string [command 1] && [command 2] && ... && [command n] with commands
# quoted using double quotes.
args_to_string = lambda args: " ".join('"{}"'.format(arg) for arg in args)
full_command = " \n ".join(map(args_to_string, commands))
def compose_command_err_handling(args):
if not IS_WINDOWS:
args = [f'"{arg}"' for arg in args]
arg_str = " ".join(args)
result = arg_str + "\n"
# ErrorActionPreference will handle PWSH commandlets (Spack calls),
# but we need to handle EXEs (git, etc) ourselves
catch_exe_failure = (
"""
if ($LASTEXITCODE -ne 0){{
throw "Command {} has failed"
}}
"""
if IS_WINDOWS
else ""
)
if exit_on_failure and catch_exe_failure:
result += catch_exe_failure.format(arg_str)
return result
# Write the command to a shell script
script = "{0}.sh".format(name)
with open(script, "w") as fd:
fd.write("#!/bin/sh\n\n")
fd.write("\n# spack {0} command\n".format(name))
# Create a string [command 1] \n [command 2] \n ... \n [command n] with
# commands composed into a platform-dependent shell script (pwsh on Windows).
full_command = "\n".join(map(compose_command_err_handling, commands))
# Write the commands to a platform-appropriate script
if IS_WINDOWS:
script = f"{name}.ps1"
script_content = [f"\n# spack {name} command\n"]
if exit_on_failure:
fd.write("set -e\n")
script_content.append('$ErrorActionPreference = "Stop"\n')
if os.environ.get("SPACK_VERBOSE_SCRIPT"):
fd.write("set -x\n")
fd.write(full_command)
fd.write("\n")
script_content.append("Set-PSDebug -Trace 2\n")
else:
script = f"{name}.sh"
script_content = ["#!/bin/sh\n\n", f"\n# spack {name} command\n"]
if exit_on_failure:
script_content.append("set -e\n")
if os.environ.get("SPACK_VERBOSE_SCRIPT"):
script_content.append("set -x\n")
script_content.append(full_command)
script_content.append("\n")
st = os.stat(script)
os.chmod(script, st.st_mode | stat.S_IEXEC)
with open(script, "w") as fd:
for line in script_content:
fd.write(line)
copy_path = os.path.join(repro_dir, script)
shutil.copyfile(script, copy_path)
st = os.stat(copy_path)
os.chmod(copy_path, st.st_mode | stat.S_IEXEC)
if not IS_WINDOWS:
st = os.stat(copy_path)
os.chmod(copy_path, st.st_mode | stat.S_IEXEC)
# Run the generated install.sh shell script as if it were being run in
# Run the generated shell script as if it were being run in
# a login shell.
exit_code = None
if run:
try:
cmd_process = subprocess.Popen(["/bin/sh", "./{0}".format(script)])
# We use sh as the executor on Linux-like platforms, pwsh on Windows
interpreter = "powershell.exe" if IS_WINDOWS else "/bin/sh"
cmd_process = subprocess.Popen([interpreter, f"./{script}"])
cmd_process.wait()
exit_code = cmd_process.returncode
except (ValueError, subprocess.CalledProcessError, OSError) as err:
tty.error("Encountered error running {0} script".format(name))
tty.error(f"Encountered error running {name} script")
tty.error(err)
exit_code = 1
tty.debug("spack {0} exited {1}".format(name, exit_code))
tty.debug(f"spack {name} exited {exit_code}")
else:
# Delete the script, it is copied to the destination dir
os.remove(script)
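# Hedged usage sketch for process_command (command and directory values are
# illustrative; run=False only materializes the script without executing it):
#   exit_code = process_command(
#       "reproducer", [["spack", "env", "activate", "."]], work_dir, run=False
#   )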
@@ -2088,7 +2124,7 @@ def create_buildcache(
for mirror_url in destination_mirror_urls:
results.append(
PushResult(
success=push_mirror_contents(input_spec, mirror_url, sign_binaries), url=mirror_url
success=push_to_build_cache(input_spec, mirror_url, sign_binaries), url=mirror_url
)
)
@@ -2122,7 +2158,7 @@ def write_broken_spec(url, pkg_name, stack_name, job_url, pipeline_url, spec_dic
# If there is an S3 error (e.g., access denied or connection
# error), the first non boto-specific class in the exception
# hierarchy is Exception. Just print a warning and return
msg = "Error writing to broken specs list {0}: {1}".format(url, err)
msg = f"Error writing to broken specs list {url}: {err}"
tty.warn(msg)
finally:
shutil.rmtree(tmpdir)
@@ -2135,7 +2171,7 @@ def read_broken_spec(broken_spec_url):
try:
_, _, fs = web_util.read_from_url(broken_spec_url)
except (URLError, web_util.SpackWebError, HTTPError):
tty.warn("Unable to read broken spec from {0}".format(broken_spec_url))
tty.warn(f"Unable to read broken spec from {broken_spec_url}")
return None
broken_spec_contents = codecs.getreader("utf-8")(fs).read()
@@ -2150,14 +2186,14 @@ def display_broken_spec_messages(base_url, hashes):
for spec_hash, broken_spec in [tup for tup in broken_specs if tup[1]]:
details = broken_spec["broken-spec"]
if "job-name" in details:
item_name = "{0}/{1}".format(details["job-name"], spec_hash[:7])
item_name = f"{details['job-name']}/{spec_hash[:7]}"
else:
item_name = spec_hash
if "job-stack" in details:
item_name = "{0} (in stack {1})".format(item_name, details["job-stack"])
item_name = f"{item_name} (in stack {details['job-stack']})"
msg = " {0} was reported broken here: {1}".format(item_name, details["job-url"])
msg = f" {item_name} was reported broken here: {details['job-url']}"
tty.msg(msg)
@@ -2180,7 +2216,7 @@ def run_standalone_tests(**kwargs):
log_file = kwargs.get("log_file")
if cdash and log_file:
tty.msg("The test log file {0} option is ignored with CDash reporting".format(log_file))
tty.msg(f"The test log file {log_file} option is ignored with CDash reporting")
log_file = None
# Error out but do NOT terminate if there are missing required arguments.
@@ -2206,10 +2242,10 @@ def run_standalone_tests(**kwargs):
test_args.extend(["--log-file", log_file])
test_args.append(job_spec.name)
tty.debug("Running {0} stand-alone tests".format(job_spec.name))
tty.debug(f"Running {job_spec.name} stand-alone tests")
exit_code = process_command("test", test_args, repro_dir)
tty.debug("spack test exited {0}".format(exit_code))
tty.debug(f"spack test exited {exit_code}")
class CDashHandler:
@@ -2232,7 +2268,7 @@ def __init__(self, ci_cdash):
# append runner description to the site if available
runner = os.environ.get("CI_RUNNER_DESCRIPTION")
if runner:
self.site += " ({0})".format(runner)
self.site += f" ({runner})"
# track current spec, if any
self.current_spec = None
@@ -2260,21 +2296,13 @@ def build_name(self):
Returns: (str) current spec's CDash build name."""
spec = self.current_spec
if spec:
build_name = "{0}@{1}%{2} hash={3} arch={4} ({5})".format(
spec.name,
spec.version,
spec.compiler,
spec.dag_hash(),
spec.architecture,
self.build_group,
)
tty.debug(
"Generated CDash build name ({0}) from the {1}".format(build_name, spec.name)
)
build_name = f"{spec.name}@{spec.version}%{spec.compiler} \
hash={spec.dag_hash()} arch={spec.architecture} ({self.build_group})"
tty.debug(f"Generated CDash build name ({build_name}) from the {spec.name}")
return build_name
build_name = os.environ.get("SPACK_CDASH_BUILD_NAME")
tty.debug("Using CDash build name ({0}) from the environment".format(build_name))
tty.debug(f"Using CDash build name ({build_name}) from the environment")
return build_name
@property # type: ignore
@@ -2288,25 +2316,25 @@ def build_stamp(self):
Returns: (str) current CDash build stamp"""
build_stamp = os.environ.get("SPACK_CDASH_BUILD_STAMP")
if build_stamp:
tty.debug("Using build stamp ({0}) from the environment".format(build_stamp))
tty.debug(f"Using build stamp ({build_stamp}) from the environment")
return build_stamp
build_stamp = cdash_build_stamp(self.build_group, time.time())
tty.debug("Generated new build stamp ({0})".format(build_stamp))
tty.debug(f"Generated new build stamp ({build_stamp})")
return build_stamp
@property # type: ignore
@memoized
def project_enc(self):
tty.debug("Encoding project ({0}): {1})".format(type(self.project), self.project))
tty.debug(f"Encoding project ({type(self.project)}): {self.project})")
encode = urlencode({"project": self.project})
index = encode.find("=") + 1
return encode[index:]
@property
def upload_url(self):
url_format = "{0}/submit.php?project={1}"
return url_format.format(self.url, self.project_enc)
url_format = f"{self.url}/submit.php?project={self.project_enc}"
return url_format
def copy_test_results(self, source, dest):
"""Copy test results to artifacts directory."""
@@ -2324,7 +2352,7 @@ def create_buildgroup(self, opener, headers, url, group_name, group_type):
response_code = response.getcode()
if response_code not in [200, 201]:
msg = "Creating buildgroup failed (response code = {0})".format(response_code)
msg = f"Creating buildgroup failed (response code = {response_code})"
tty.warn(msg)
return None
@@ -2335,10 +2363,10 @@ def create_buildgroup(self, opener, headers, url, group_name, group_type):
return build_group_id
def populate_buildgroup(self, job_names):
url = "{0}/api/v1/buildgroup.php".format(self.url)
url = f"{self.url}/api/v1/buildgroup.php"
headers = {
"Authorization": "Bearer {0}".format(self.auth_token),
"Authorization": f"Bearer {self.auth_token}",
"Content-Type": "application/json",
}
@@ -2346,11 +2374,11 @@ def populate_buildgroup(self, job_names):
parent_group_id = self.create_buildgroup(opener, headers, url, self.build_group, "Daily")
group_id = self.create_buildgroup(
opener, headers, url, "Latest {0}".format(self.build_group), "Latest"
opener, headers, url, f"Latest {self.build_group}", "Latest"
)
if not parent_group_id or not group_id:
msg = "Failed to create or retrieve buildgroups for {0}".format(self.build_group)
msg = f"Failed to create or retrieve buildgroups for {self.build_group}"
tty.warn(msg)
return
@@ -2370,7 +2398,7 @@ def populate_buildgroup(self, job_names):
response_code = response.getcode()
if response_code != 200:
msg = "Error response code ({0}) in populate_buildgroup".format(response_code)
msg = f"Error response code ({response_code}) in populate_buildgroup"
tty.warn(msg)
def report_skipped(self, spec: spack.spec.Spec, report_dir: str, reason: Optional[str]):

View File

@@ -275,23 +275,37 @@ def setup_parser(subparser: argparse.ArgumentParser):
# Sync buildcache entries from one mirror to another
sync = subparsers.add_parser("sync", help=sync_fn.__doc__)
sync.add_argument(
"--manifest-glob", help="a quoted glob pattern identifying copy manifest files"
sync_manifest_source = sync.add_argument_group(
"Manifest Source",
"Specify a list of build cache objects to sync using manifest file(s)."
'This option takes the place of the "source mirror" for synchronization'
'and optionally takes a "destination mirror" ',
)
sync.add_argument(
sync_manifest_source.add_argument(
"--manifest-glob", help="a quoted glob pattern identifying CI rebuild manifest files"
)
sync_source_mirror = sync.add_argument_group(
"Named Source",
"Specify a single registered source mirror to synchronize from. This option requires"
"the specification of a destination mirror.",
)
sync_source_mirror.add_argument(
"src_mirror",
metavar="source mirror",
type=arguments.mirror_name_or_url,
nargs="?",
type=arguments.mirror_name_or_url,
help="source mirror name, path, or URL",
)
sync.add_argument(
"dest_mirror",
metavar="destination mirror",
type=arguments.mirror_name_or_url,
nargs="?",
type=arguments.mirror_name_or_url,
help="destination mirror name, path, or URL",
)
sync.set_defaults(func=sync_fn)
# Update buildcache index without copying any additional packages
@@ -1070,7 +1084,17 @@ def sync_fn(args):
requires an active environment in order to know which specs to sync
"""
if args.manifest_glob:
manifest_copy(glob.glob(args.manifest_glob))
# We pass args.src_mirror here because the destination cannot be made
# required when a named source mirror is given while staying optional for
# the --manifest-glob argument. For a manifest-glob sync, the source
# mirror positional argument, if given, serves as the destination mirror.
# If two mirrors are specified, the second is ignored and the first
# overrides the destination.
if args.dest_mirror:
tty.warn(f"Ignoring unused arguemnt: {args.dest_mirror.name}")
manifest_copy(glob.glob(args.manifest_glob), args.src_mirror)
return 0
if args.src_mirror is None or args.dest_mirror is None:
@@ -1121,7 +1145,7 @@ def sync_fn(args):
shutil.rmtree(tmpdir)
def manifest_copy(manifest_file_list):
def manifest_copy(manifest_file_list, dest_mirror=None):
"""Read manifest files containing information about specific specs to copy
from source to destination, remove duplicates since any binary package for
a given hash should be the same as any other, and copy all files specified
@@ -1135,10 +1159,17 @@ def manifest_copy(manifest_file_list):
# Last duplicate hash wins
deduped_manifest[spec_hash] = copy_list
build_cache_dir = bindist.build_cache_relative_path()
for spec_hash, copy_list in deduped_manifest.items():
for copy_file in copy_list:
tty.debug("copying {0} to {1}".format(copy_file["src"], copy_file["dest"]))
copy_buildcache_file(copy_file["src"], copy_file["dest"])
dest = copy_file["dest"]
if dest_mirror:
src_relative_path = os.path.join(
build_cache_dir, copy_file["src"].rsplit(build_cache_dir, 1)[1].lstrip("/")
)
dest = url_util.join(dest_mirror.push_url, src_relative_path)
tty.debug("copying {0} to {1}".format(copy_file["src"], dest))
copy_buildcache_file(copy_file["src"], dest)
def update_index(mirror: spack.mirror.Mirror, update_keys=False):
@@ -1165,14 +1196,18 @@ def update_index(mirror: spack.mirror.Mirror, update_keys=False):
url, bindist.build_cache_relative_path(), bindist.build_cache_keys_relative_path()
)
bindist.generate_key_index(keys_url)
try:
bindist.generate_key_index(keys_url)
except bindist.CannotListKeys as e:
# Do not error out if listing keys went wrong. This usually means that the _gpg path
# does not exist. TODO: distinguish between this and other errors.
tty.warn(f"did not update the key index: {e}")
def update_index_fn(args):
"""update a buildcache index"""
update_index(args.mirror, update_keys=args.keys)
return update_index(args.mirror, update_keys=args.keys)
def buildcache(parser, args):
if args.func:
args.func(args)
return args.func(args)

View File

@@ -183,7 +183,7 @@ def checksum(parser, args):
print()
if args.add_to_package:
add_versions_to_package(pkg, version_lines)
add_versions_to_package(pkg, version_lines, args.batch)
def print_checksum_status(pkg: PackageBase, version_hashes: dict):
@@ -229,7 +229,7 @@ def print_checksum_status(pkg: PackageBase, version_hashes: dict):
tty.die("Invalid checksums found.")
def add_versions_to_package(pkg: PackageBase, version_lines: str):
def add_versions_to_package(pkg: PackageBase, version_lines: str, is_batch: bool):
"""
Add checksummed versions to a package's instructions and open a user's
editor so they may double-check the work of the function.
@@ -282,5 +282,5 @@ def add_versions_to_package(pkg: PackageBase, version_lines: str):
tty.msg(f"Added {num_versions_added} new versions to {pkg.name}")
tty.msg(f"Open {filename} to review the additions.")
if sys.stdout.isatty():
if sys.stdout.isatty() and not is_batch:
editor(filename)
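# Assumed CLI behavior for the new batch flag above (illustrative):
#   spack checksum --batch --add-to-package <pkg>   # writes versions, skips editor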

View File

@@ -14,6 +14,7 @@
import spack.binary_distribution as bindist
import spack.ci as spack_ci
import spack.cmd
import spack.cmd.buildcache as buildcache
import spack.config as cfg
import spack.environment as ev
@@ -32,6 +33,7 @@
SPACK_COMMAND = "spack"
MAKE_COMMAND = "make"
INSTALL_FAIL_CODE = 1
FAILED_CREATE_BUILDCACHE_CODE = 100
def deindent(desc):
@@ -705,11 +707,9 @@ def ci_rebuild(args):
cdash_handler.report_skipped(job_spec, reports_dir, reason=msg)
cdash_handler.copy_test_results(reports_dir, job_test_dir)
# If the install succeeded, create a buildcache entry for this job spec
# and push it to one or more mirrors. If the install did not succeed,
# print out some instructions on how to reproduce this build failure
# outside of the pipeline environment.
if install_exit_code == 0:
# If the install succeeded, push it to one or more mirrors. Failure to push to any mirror
# will result in a non-zero exit code. Pushing is best-effort.
mirror_urls = [buildcache_mirror_url]
# TODO: Remove this block in Spack 0.23
@@ -721,13 +721,12 @@ def ci_rebuild(args):
destination_mirror_urls=mirror_urls,
sign_binaries=spack_ci.can_sign_binaries(),
):
msg = tty.msg if result.success else tty.warn
msg(
"{} {} to {}".format(
"Pushed" if result.success else "Failed to push",
job_spec.format("{name}{@version}{/hash:7}", color=clr.get_color_when()),
result.url,
)
if not result.success:
install_exit_code = FAILED_CREATE_BUILDCACHE_CODE
(tty.msg if result.success else tty.error)(
f'{"Pushed" if result.success else "Failed to push"} '
f'{job_spec.format("{name}{@version}{/hash:7}", color=clr.get_color_when())} '
f"to {result.url}"
)
# If this is a develop pipeline, check if the spec that we just built is
@@ -748,22 +747,22 @@ def ci_rebuild(args):
tty.warn(msg.format(broken_spec_path, err))
else:
# If the install did not succeed, print out some instructions on how to reproduce this
# build failure outside of the pipeline environment.
tty.debug("spack install exited non-zero, will not create buildcache")
api_root_url = os.environ.get("CI_API_V4_URL")
ci_project_id = os.environ.get("CI_PROJECT_ID")
ci_job_id = os.environ.get("CI_JOB_ID")
repro_job_url = "{0}/projects/{1}/jobs/{2}/artifacts".format(
api_root_url, ci_project_id, ci_job_id
)
repro_job_url = f"{api_root_url}/projects/{ci_project_id}/jobs/{ci_job_id}/artifacts"
# Control characters cause this to be printed in blue so it stands out
reproduce_msg = """
print(
f"""
\033[34mTo reproduce this build locally, run:
spack ci reproduce-build {0} [--working-dir <dir>] [--autostart]
spack ci reproduce-build {repro_job_url} [--working-dir <dir>] [--autostart]
If this project does not have public pipelines, you will need to first:
@@ -771,12 +770,9 @@ def ci_rebuild(args):
... then follow the printed instructions.\033[0;0m
""".format(
repro_job_url
"""
)
print(reproduce_msg)
rebuild_timer.stop()
try:
with open("install_timers.json", "w") as timelog:

View File

@@ -570,6 +570,14 @@ def add_concretizer_args(subparser):
default=None,
help="reuse installed dependencies only",
)
subgroup.add_argument(
"--deprecated",
action=ConfigSetAction,
dest="config:deprecated",
const=True,
default=None,
help="allow concretizer to select deprecated versions",
)
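# Assumed effect of the new flag (illustrative): any command that calls
# add_concretizer_args can now pass, e.g.,
#   spack spec --deprecated <pkg>
# which sets config:deprecated=true for that invocation only.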
def add_connection_args(subparser, add_help):

View File

@@ -89,7 +89,7 @@ def compiler_find(args):
paths, scope=None, mixed_toolchain=args.mixed_toolchain
)
if new_compilers:
spack.compilers.add_compilers_to_config(new_compilers, scope=args.scope, init_config=False)
spack.compilers.add_compilers_to_config(new_compilers, scope=args.scope)
n = len(new_compilers)
s = "s" if n > 1 else ""

View File

@@ -19,7 +19,7 @@
def setup_parser(subparser):
arguments.add_common_arguments(subparser, ["jobs"])
arguments.add_common_arguments(subparser, ["jobs", "no_checksum", "spec"])
subparser.add_argument(
"-d",
"--source-path",
@@ -34,7 +34,6 @@ def setup_parser(subparser):
dest="ignore_deps",
help="do not try to install dependencies of requested packages",
)
arguments.add_common_arguments(subparser, ["no_checksum", "deprecated"])
subparser.add_argument(
"--keep-prefix",
action="store_true",
@@ -63,7 +62,6 @@ def setup_parser(subparser):
choices=["root", "all"],
help="run tests on only root packages or all packages",
)
arguments.add_common_arguments(subparser, ["spec"])
stop_group = subparser.add_mutually_exclusive_group()
stop_group.add_argument(
@@ -125,9 +123,6 @@ def dev_build(self, args):
if args.no_checksum:
spack.config.set("config:checksum", False, scope="command_line")
if args.deprecated:
spack.config.set("config:deprecated", True, scope="command_line")
tests = False
if args.test == "all":
tests = True

View File

@@ -9,6 +9,7 @@
import shutil
import sys
import tempfile
from pathlib import Path
from typing import Optional
import llnl.string as string
@@ -44,6 +45,7 @@
"deactivate",
"create",
["remove", "rm"],
["rename", "mv"],
["list", "ls"],
["status", "st"],
"loads",
@@ -472,11 +474,82 @@ def env_remove(args):
tty.msg(f"Successfully removed environment '{bad_env_name}'")
#
# env rename
#
def env_rename_setup_parser(subparser):
"""rename an existing environment"""
subparser.add_argument(
"mv_from", metavar="from", help="name (or path) of existing environment"
)
subparser.add_argument(
"mv_to", metavar="to", help="new name (or path) for existing environment"
)
subparser.add_argument(
"-d",
"--dir",
action="store_true",
help="the specified arguments correspond to directory paths",
)
subparser.add_argument(
"-f", "--force", action="store_true", help="allow overwriting of an existing environment"
)
def env_rename(args):
"""Rename an environment.
This renames a managed environment or moves an anonymous environment.
"""
# Directory option has been specified
if args.dir:
if not ev.is_env_dir(args.mv_from):
tty.die("The specified path does not correspond to a valid spack environment")
from_path = Path(args.mv_from)
if not args.force:
if ev.is_env_dir(args.mv_to):
tty.die(
"The new path corresponds to an existing environment;"
" specify the --force flag to overwrite it."
)
if Path(args.mv_to).exists():
tty.die("The new path already exists; specify the --force flag to overwrite it.")
to_path = Path(args.mv_to)
# Name option being used
elif ev.exists(args.mv_from):
from_path = ev.environment.environment_dir_from_name(args.mv_from)
if not args.force and ev.exists(args.mv_to):
tty.die(
"The new name corresponds to an existing environment;"
" specify the --force flag to overwrite it."
)
to_path = ev.environment.root(args.mv_to)
# Neither
else:
tty.die("The specified name does not correspond to a managed spack environment")
# Guard against renaming from or to an active environment
active_env = ev.active_environment()
if active_env:
from_env = ev.Environment(from_path)
if from_env.path == active_env.path:
tty.die("Cannot rename active environment")
if to_path == active_env.path:
tty.die(f"{args.mv_to} is an active environment")
shutil.rmtree(to_path, ignore_errors=True)
fs.rename(from_path, to_path)
tty.msg(f"Successfully renamed environment {args.mv_from} to {args.mv_to}")
#
# env list
#
def env_list_setup_parser(subparser):
"""list available environments"""
"""list managed environments"""
def env_list(args):

View File

@@ -18,7 +18,7 @@
def setup_parser(subparser):
arguments.add_common_arguments(subparser, ["no_checksum", "deprecated"])
arguments.add_common_arguments(subparser, ["no_checksum", "specs"])
subparser.add_argument(
"-m",
"--missing",
@@ -28,7 +28,7 @@ def setup_parser(subparser):
subparser.add_argument(
"-D", "--dependencies", action="store_true", help="also fetch all dependencies"
)
arguments.add_common_arguments(subparser, ["specs"])
arguments.add_concretizer_args(subparser)
subparser.epilog = (
"With an active environment, the specs "
"parameter can be omitted. In this case all (uninstalled"
@@ -40,9 +40,6 @@ def fetch(parser, args):
if args.no_checksum:
spack.config.set("config:checksum", False, scope="command_line")
if args.deprecated:
spack.config.set("config:deprecated", True, scope="command_line")
if args.specs:
specs = spack.cmd.parse_specs(args.specs, concretize=True)
else:

View File

@@ -140,6 +140,12 @@ def setup_parser(subparser):
subparser.add_argument(
"--only-deprecated", action="store_true", help="show only deprecated packages"
)
subparser.add_argument(
"--install-tree",
action="store",
default="all",
help="Install trees to query: 'all' (default), 'local', 'upstream', upstream name or path",
)
subparser.add_argument("--start-date", help="earliest date of installation [YYYY-MM-DD]")
subparser.add_argument("--end-date", help="latest date of installation [YYYY-MM-DD]")
@@ -168,6 +174,12 @@ def query_arguments(args):
q_args = {"installed": installed, "known": known, "explicit": explicit}
install_tree = args.install_tree
upstreams = spack.config.get("upstreams", {})
if install_tree in upstreams.keys():
install_tree = upstreams[install_tree]["install_tree"]
q_args["install_tree"] = install_tree
# Time window of installation
for attribute in ("start_date", "end_date"):
date = getattr(args, attribute)

View File

@@ -176,7 +176,7 @@ def setup_parser(subparser):
dest="install_source",
help="install source files in prefix",
)
arguments.add_common_arguments(subparser, ["no_checksum", "deprecated"])
arguments.add_common_arguments(subparser, ["no_checksum"])
subparser.add_argument(
"-v",
"--verbose",
@@ -326,9 +326,6 @@ def install(parser, args):
if args.no_checksum:
spack.config.set("config:checksum", False, scope="command_line")
if args.deprecated:
spack.config.set("config:deprecated", True, scope="command_line")
if args.log_file and not args.log_format:
msg = "the '--log-format' must be specified when using '--log-file'"
tty.die(msg)

View File

@@ -5,8 +5,6 @@
import sys
import llnl.util.tty as tty
import spack.cmd
import spack.cmd.find
import spack.environment as ev
@@ -70,16 +68,6 @@ def setup_parser(subparser):
help="load the first match if multiple packages match the spec",
)
subparser.add_argument(
"--only",
default="package,dependencies",
dest="things_to_load",
choices=["package", "dependencies"],
help="select whether to load the package and its dependencies\n\n"
"the default is to load the package and all dependencies. alternatively, "
"one can decide to load only the package or only the dependencies",
)
subparser.add_argument(
"--list",
action="store_true",
@@ -110,11 +98,6 @@ def load(parser, args):
)
return 1
if args.things_to_load != "package,dependencies":
tty.warn(
"The `--only` flag in spack load is deprecated and will be removed in Spack v0.22"
)
with spack.store.STORE.db.read_transaction():
env_mod = uenv.environment_modifications_for_specs(*specs)
for spec in specs:

View File

@@ -53,6 +53,7 @@ def setup_parser(subparser):
"-S", "--stages", action="store_true", help="top level stage directory"
)
directories.add_argument(
"-c",
"--source-dir",
action="store_true",
help="source directory for a spec (requires it to be staged first)",

View File

@@ -28,7 +28,7 @@
def setup_parser(subparser):
arguments.add_common_arguments(subparser, ["no_checksum", "deprecated"])
arguments.add_common_arguments(subparser, ["no_checksum"])
sp = subparser.add_subparsers(metavar="SUBCOMMAND", dest="mirror_command")
@@ -72,6 +72,7 @@ def setup_parser(subparser):
" retrieve all versions of each package",
)
arguments.add_common_arguments(create_parser, ["specs"])
arguments.add_concretizer_args(create_parser)
# Destroy
destroy_parser = sp.add_parser("destroy", help=mirror_destroy.__doc__)
@@ -549,7 +550,4 @@ def mirror(parser, args):
if args.no_checksum:
spack.config.set("config:checksum", False, scope="command_line")
if args.deprecated:
spack.config.set("config:deprecated", True, scope="command_line")
action[args.mirror_command](args)

View File

@@ -19,7 +19,7 @@
def setup_parser(subparser):
arguments.add_common_arguments(subparser, ["no_checksum", "deprecated", "specs"])
arguments.add_common_arguments(subparser, ["no_checksum", "specs"])
arguments.add_concretizer_args(subparser)
@@ -33,9 +33,6 @@ def patch(parser, args):
if args.no_checksum:
spack.config.set("config:checksum", False, scope="command_line")
if args.deprecated:
spack.config.set("config:deprecated", True, scope="command_line")
specs = spack.cmd.parse_specs(args.specs, concretize=False)
for spec in specs:
_patch(spack.cmd.matching_spec_from_env(spec).package)

View File

@@ -116,39 +116,38 @@ def ipython_interpreter(args):
def python_interpreter(args):
"""A python interpreter is the default interpreter"""
# Fake a main python shell by setting __name__ to __main__.
console = code.InteractiveConsole({"__name__": "__main__", "spack": spack})
if "PYTHONSTARTUP" in os.environ:
startup_file = os.environ["PYTHONSTARTUP"]
if os.path.isfile(startup_file):
with open(startup_file) as startup:
console.runsource(startup.read(), startup_file, "exec")
if args.python_command:
propagate_exceptions_from(console)
console.runsource(args.python_command)
elif args.python_args:
propagate_exceptions_from(console)
if args.python_args and not args.python_command:
sys.argv = args.python_args
with open(args.python_args[0]) as file:
console.runsource(file.read(), args.python_args[0], "exec")
runpy.run_path(args.python_args[0], run_name="__main__")
else:
# Provides readline support, allowing user to use arrow keys
console.push("import readline")
# Provide tabcompletion
console.push("from rlcompleter import Completer")
console.push("readline.set_completer(Completer(locals()).complete)")
console.push('readline.parse_and_bind("tab: complete")')
# Fake a main python shell by setting __name__ to __main__.
console = code.InteractiveConsole({"__name__": "__main__", "spack": spack})
if "PYTHONSTARTUP" in os.environ:
startup_file = os.environ["PYTHONSTARTUP"]
if os.path.isfile(startup_file):
with open(startup_file) as startup:
console.runsource(startup.read(), startup_file, "exec")
if args.python_command:
propagate_exceptions_from(console)
console.runsource(args.python_command)
else:
# Provides readline support, allowing user to use arrow keys
console.push("import readline")
# Provide tabcompletion
console.push("from rlcompleter import Completer")
console.push("readline.set_completer(Completer(locals()).complete)")
console.push('readline.parse_and_bind("tab: complete")')
console.interact(
"Spack version %s\nPython %s, %s %s"
% (
spack.spack_version,
platform.python_version(),
platform.system(),
platform.machine(),
console.interact(
"Spack version %s\nPython %s, %s %s"
% (
spack.spack_version,
platform.python_version(),
platform.system(),
platform.machine(),
)
)
)
def propagate_exceptions_from(console):

View File

@@ -91,7 +91,6 @@ def setup_parser(subparser):
def _process_result(result, show, required_format, kwargs):
result.raise_if_unsat()
opt, _, _ = min(result.answers)
if ("opt" in show) and (not required_format):
tty.msg("Best of %d considered solutions." % result.nmodels)

View File

@@ -22,7 +22,7 @@
def setup_parser(subparser):
arguments.add_common_arguments(subparser, ["no_checksum", "deprecated", "specs"])
arguments.add_common_arguments(subparser, ["no_checksum", "specs"])
subparser.add_argument(
"-p", "--path", dest="path", help="path to stage package, does not add to spack tree"
)
@@ -33,9 +33,6 @@ def stage(parser, args):
if args.no_checksum:
spack.config.set("config:checksum", False, scope="command_line")
if args.deprecated:
spack.config.set("config:deprecated", True, scope="command_line")
if not args.specs:
env = ev.active_environment()
if not env:

View File

@@ -34,6 +34,13 @@ def setup_parser(subparser):
default=False,
help="show full pytest help, with advanced options",
)
subparser.add_argument(
"-n",
"--numprocesses",
type=int,
default=1,
help="run tests in parallel up to this wide, default 1 for sequential",
)
# extra spack arguments to list tests
list_group = subparser.add_argument_group("listing tests")
@@ -229,6 +236,16 @@ def unit_test(parser, args, unknown_args):
if args.extension:
pytest_root = spack.extensions.load_extension(args.extension)
if args.numprocesses is not None and args.numprocesses > 1:
pytest_args.extend(
[
"--dist",
"loadfile",
"--tx",
f"{args.numprocesses}*popen//python=spack-tmpconfig spack python",
]
)
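# With -n 4, the extension above is assumed to yield the pytest-xdist
# arguments (illustrative):
#   --dist loadfile --tx '4*popen//python=spack-tmpconfig spack python'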
# pytest.ini lives in the root of the spack repository.
with llnl.util.filesystem.working_dir(pytest_root):
if args.list:

View File

@@ -334,6 +334,40 @@ def __init__(
# used for version checks for API, e.g. C++11 flag
self._real_version = None
def __eq__(self, other):
return (
self.cc == other.cc
and self.cxx == other.cxx
and self.fc == other.fc
and self.f77 == other.f77
and self.spec == other.spec
and self.operating_system == other.operating_system
and self.target == other.target
and self.flags == other.flags
and self.modules == other.modules
and self.environment == other.environment
and self.extra_rpaths == other.extra_rpaths
and self.enable_implicit_rpaths == other.enable_implicit_rpaths
)
def __hash__(self):
return hash(
(
self.cc,
self.cxx,
self.fc,
self.f77,
self.spec,
self.operating_system,
self.target,
str(self.flags),
str(self.modules),
str(self.environment),
str(self.extra_rpaths),
self.enable_implicit_rpaths,
)
)
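# Note on the hash above: flags, modules, environment, and extra_rpaths may
# be unhashable containers (dicts/lists), hence the str() conversion. This
# preserves the invariant that equal compilers hash equally, i.e.
#   c1 == c2  implies  hash(c1) == hash(c2)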
def verify_executables(self):
"""Raise an error if any of the compiler executables is not valid.
@@ -389,8 +423,7 @@ def implicit_rpaths(self):
# Put CXX first since it has the most linking issues
# And because it has flags that affect linking
exe_paths = [x for x in [self.cxx, self.cc, self.fc, self.f77] if x]
link_dirs = self._get_compiler_link_paths(exe_paths)
link_dirs = self._get_compiler_link_paths()
all_required_libs = list(self.required_libs) + Compiler._all_compiler_rpath_libraries
return list(paths_containing_libs(link_dirs, all_required_libs))
@@ -403,43 +436,33 @@ def required_libs(self):
# By default every compiler returns the empty list
return []
def _get_compiler_link_paths(self, paths):
first_compiler = next((c for c in paths if c), None)
if not first_compiler:
return []
if not self.verbose_flag:
# In this case there is no mechanism to learn what link directories
# are used by the compiler
def _get_compiler_link_paths(self):
cc = self.cc if self.cc else self.cxx
if not cc or not self.verbose_flag:
# Cannot determine implicit link paths without a compiler / verbose flag
return []
# What flag types apply to first_compiler, in what order
flags = ["cppflags", "ldflags"]
if first_compiler == self.cc:
flags = ["cflags"] + flags
elif first_compiler == self.cxx:
flags = ["cxxflags"] + flags
if cc == self.cc:
flags = ["cflags", "cppflags", "ldflags"]
else:
flags.append("fflags")
flags = ["cxxflags", "cppflags", "ldflags"]
try:
tmpdir = tempfile.mkdtemp(prefix="spack-implicit-link-info")
fout = os.path.join(tmpdir, "output")
fin = os.path.join(tmpdir, "main.c")
with open(fin, "w+") as csource:
with open(fin, "w") as csource:
csource.write(
"int main(int argc, char* argv[]) { " "(void)argc; (void)argv; return 0; }\n"
"int main(int argc, char* argv[]) { (void)argc; (void)argv; return 0; }\n"
)
compiler_exe = spack.util.executable.Executable(first_compiler)
cc_exe = spack.util.executable.Executable(cc)
for flag_type in flags:
for flag in self.flags.get(flag_type, []):
compiler_exe.add_default_arg(flag)
cc_exe.add_default_arg(*self.flags.get(flag_type, []))
output = ""
with self.compiler_environment():
output = str(
compiler_exe(self.verbose_flag, fin, "-o", fout, output=str, error=str)
) # str for py2
output = cc_exe(self.verbose_flag, fin, "-o", fout, output=str, error=str)
return _parse_non_system_link_dirs(output)
except spack.util.executable.ProcessError as pe:
tty.debug("ProcessError: Command exited with non-zero status: " + pe.long_message)

View File

@@ -109,22 +109,113 @@ def _to_dict(compiler):
return {"compiler": d}
def get_compiler_config(scope=None, init_config=True):
def get_compiler_config(scope=None, init_config=False):
"""Return the compiler configuration for the specified architecture."""
config = spack.config.get("compilers", scope=scope) or []
config = spack.config.CONFIG.get("compilers", scope=scope) or []
if config or not init_config:
return config
merged_config = spack.config.get("compilers")
merged_config = spack.config.CONFIG.get("compilers")
if merged_config:
# Config is empty for this scope
# Do not init config because there is a non-empty scope
return config
_init_compiler_config(scope=scope)
config = spack.config.get("compilers", scope=scope)
config = spack.config.CONFIG.get("compilers", scope=scope)
return config
def get_compiler_config_from_packages(scope=None):
"""Return the compiler configuration from packages.yaml"""
config = spack.config.get("packages", scope=scope)
if not config:
return []
packages = []
compiler_package_names = supported_compilers() + list(package_name_to_compiler_name.keys())
for name, entry in config.items():
if name not in compiler_package_names:
continue
externals_config = entry.get("externals", None)
if not externals_config:
continue
packages.extend(_compiler_config_from_package_config(externals_config))
return packages
def _compiler_config_from_package_config(config):
compilers = []
for entry in config:
compiler = _compiler_config_from_external(entry)
if compiler:
compilers.append(compiler)
return compilers
def _compiler_config_from_external(config):
spec = spack.spec.parse_with_version_concrete(config["spec"])
# use str(spec.versions) to allow `@x.y.z` instead of `@=x.y.z`
compiler_spec = spack.spec.CompilerSpec(
package_name_to_compiler_name.get(spec.name, spec.name), spec.version
)
extra_attributes = config.get("extra_attributes", {})
prefix = config.get("prefix", None)
compiler_class = class_for_compiler_name(compiler_spec.name)
paths = extra_attributes.get("paths", {})
compiler_langs = ["cc", "cxx", "fc", "f77"]
for lang in compiler_langs:
if paths.setdefault(lang, None):
continue
if not prefix:
continue
# Check for files that satisfy the naming scheme for this compiler
bindir = os.path.join(prefix, "bin")
for f, regex in itertools.product(os.listdir(bindir), compiler_class.search_regexps(lang)):
if regex.match(f):
paths[lang] = os.path.join(bindir, f)
if all(v is None for v in paths.values()):
return None
if not spec.architecture:
host_platform = spack.platforms.host()
operating_system = host_platform.operating_system("default_os")
target = host_platform.target("default_target").microarchitecture
else:
target = spec.target
if not target:
host_platform = spack.platforms.host()
target = host_platform.target("default_target").microarchitecture
operating_system = spec.os
if not operating_system:
host_platform = spack.platforms.host()
operating_system = host_platform.operating_system("default_os")
compiler_entry = {
"compiler": {
"spec": str(compiler_spec),
"paths": paths,
"flags": extra_attributes.get("flags", {}),
"operating_system": str(operating_system),
"target": str(target.family),
"modules": config.get("modules", []),
"environment": extra_attributes.get("environment", {}),
"extra_rpaths": extra_attributes.get("extra_rpaths", []),
"implicit_rpaths": extra_attributes.get("implicit_rpaths", None),
}
}
return compiler_entry
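# Hedged example of the translation above (values illustrative): a
# packages.yaml external like
#   {"spec": "gcc@12.3.0", "prefix": "/usr",
#    "extra_attributes": {"paths": {"cc": "/usr/bin/gcc-12"}}}
# is assumed to yield a compilers.yaml-style entry with spec "gcc@12.3.0",
# the discovered bin/ paths, and the host OS/target filled in.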
def _init_compiler_config(*, scope):
"""Compiler search used when Spack has no compilers."""
compilers = find_compilers()
@@ -142,17 +233,20 @@ def compiler_config_files():
compiler_config = config.get("compilers", scope=name)
if compiler_config:
config_files.append(config.get_config_filename(name, "compilers"))
compiler_config_from_packages = get_compiler_config_from_packages(scope=name)
if compiler_config_from_packages:
config_files.append(config.get_config_filename(name, "packages"))
return config_files
def add_compilers_to_config(compilers, scope=None, init_config=True):
def add_compilers_to_config(compilers, scope=None):
"""Add compilers to the config for the specified architecture.
Arguments:
compilers: a list of Compiler objects.
scope: configuration scope to modify.
"""
compiler_config = get_compiler_config(scope, init_config)
compiler_config = get_compiler_config(scope, init_config=False)
for compiler in compilers:
if not compiler.cc:
tty.debug(f"{compiler.spec} does not have a C compiler")
@@ -184,6 +278,9 @@ def remove_compiler_from_config(compiler_spec, scope=None):
for current_scope in candidate_scopes:
removal_happened |= _remove_compiler_from_scope(compiler_spec, scope=current_scope)
msg = "`spack compiler remove` will not remove compilers defined in packages.yaml"
msg += "\nTo remove these compilers, either edit the config or use `spack external remove`"
tty.debug(msg)
return removal_happened
@@ -198,7 +295,7 @@ def _remove_compiler_from_scope(compiler_spec, scope):
True if one or more compiler entries were actually removed, False otherwise
"""
assert scope is not None, "a specific scope is needed when calling this function"
compiler_config = get_compiler_config(scope)
compiler_config = get_compiler_config(scope, init_config=False)
filtered_compiler_config = [
compiler_entry
for compiler_entry in compiler_config
@@ -221,7 +318,14 @@ def all_compilers_config(scope=None, init_config=True):
"""Return a set of specs for all the compiler versions currently
available to build with. These are instances of CompilerSpec.
"""
return get_compiler_config(scope, init_config)
from_packages_yaml = get_compiler_config_from_packages(scope)
if from_packages_yaml:
init_config = False
from_compilers_yaml = get_compiler_config(scope, init_config)
result = from_compilers_yaml + from_packages_yaml
key = lambda c: _compiler_from_config_entry(c["compiler"])
return list(llnl.util.lang.dedupe(result, key=key))
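# Minimal stand-in for the keyed dedupe assumed above (keeps the first
# occurrence per key), for illustration only:
def _dedupe_first(seq, key):
    seen = set()
    for item in seq:
        k = key(item)
        if k not in seen:
            seen.add(k)
            yield item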
def all_compiler_specs(scope=None, init_config=True):
@@ -388,7 +492,7 @@ def find_specs_by_arch(compiler_spec, arch_spec, scope=None, init_config=True):
def all_compilers(scope=None, init_config=True):
config = get_compiler_config(scope, init_config=init_config)
config = all_compilers_config(scope, init_config=init_config)
compilers = list()
for items in config:
items = items["compiler"]
@@ -403,10 +507,7 @@ def compilers_for_spec(
"""This gets all compilers that satisfy the supplied CompilerSpec.
Returns an empty list if none are found.
"""
if use_cache:
config = all_compilers_config(scope, init_config)
else:
config = get_compiler_config(scope, init_config)
config = all_compilers_config(scope, init_config)
matches = set(find(compiler_spec, scope, init_config))
compilers = []
@@ -583,9 +684,7 @@ def get_compiler_duplicates(compiler_spec, arch_spec):
scope_to_compilers = {}
for scope in config.scopes:
compilers = compilers_for_spec(
compiler_spec, arch_spec=arch_spec, scope=scope, use_cache=False
)
compilers = compilers_for_spec(compiler_spec, arch_spec=arch_spec, scope=scope)
if compilers:
scope_to_compilers[scope] = compilers

View File

@@ -1,34 +0,0 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import spack.compilers.oneapi
class Dpcpp(spack.compilers.oneapi.Oneapi):
"""This is the same as the oneAPI compiler but uses dpcpp instead of
icpx (for DPC++ source files). It explicitly refers to dpcpp, so that
CMake test files which check the compiler name (e.g. CMAKE_CXX_COMPILER)
detect it as dpcpp.
Ideally we could switch out icpx for dpcpp where needed in the oneAPI
compiler definition, but two things are needed for that: (a) a way to
tell the compiler that it should be using dpcpp and (b) a way to
customize the link_paths
See also: https://www.intel.com/content/www/us/en/develop/documentation/oneapi-dpcpp-cpp-compiler-dev-guide-and-reference/top/compiler-setup/using-the-command-line/invoking-the-compiler.html
"""
# Subclasses use possible names of C++ compiler
cxx_names = ["dpcpp"]
# Named wrapper links within build_env_path
link_paths = {
"cc": os.path.join("oneapi", "icx"),
"cxx": os.path.join("oneapi", "dpcpp"),
"f77": os.path.join("oneapi", "ifx"),
"fc": os.path.join("oneapi", "ifx"),
}

View File

@@ -10,6 +10,8 @@
import tempfile
from typing import Dict, List, Set
import archspec.cpu
import spack.compiler
import spack.operating_systems.windows_os
import spack.platforms
@@ -186,6 +188,9 @@ def __init__(self, *args, **kwargs):
# get current platform architecture and format for vcvars argument
arch = spack.platforms.real_host().default.lower()
arch = arch.replace("-", "_")
if str(archspec.cpu.host().family) == "x86_64":
arch = "amd64"
self.vcvars_call = VCVarsInvocation(vcvars_script_path, arch, self.msvc_version)
env_cmds.append(self.vcvars_call)
# Below is a check for a valid fortran path
@@ -318,7 +323,7 @@ def fc_version(cls, fc):
fc_path[fc_ver] = fc
if os.getenv("ONEAPI_ROOT"):
try:
sps = spack.operating_systems.windows_os.WindowsOs.compiler_search_paths
sps = spack.operating_systems.windows_os.WindowsOs().compiler_search_paths
except AttributeError:
raise SpackError("Windows compiler search paths not established")
clp = spack.util.executable.which_string("cl", path=sps)

View File

@@ -749,7 +749,6 @@ def _concretize_specs_together_new(*abstract_specs, **kwargs):
result = solver.solve(
abstract_specs, tests=kwargs.get("tests", False), allow_deprecated=allow_deprecated
)
result.raise_if_unsat()
return [s.copy() for s in result.specs]

View File

@@ -107,7 +107,7 @@
#: metavar to use for commands that accept scopes
#: this is shorter and more readable than listing all choices
SCOPES_METAVAR = "{defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT"
SCOPES_METAVAR = "{defaults,system,site,user,command_line}[/PLATFORM] or env:ENVIRONMENT"
#: Base name for the (internal) overrides scope.
_OVERRIDES_BASE_NAME = "overrides-"

View File

@@ -227,7 +227,7 @@ def read(path, apply_updates):
if apply_updates and compilers:
for compiler in compilers:
try:
spack.compilers.add_compilers_to_config([compiler], init_config=False)
spack.compilers.add_compilers_to_config([compiler])
except Exception:
warnings.warn(
f"Could not add compiler {str(compiler.spec)}: "

View File

@@ -1621,15 +1621,32 @@ def query_local(self, *args, **kwargs):
query_local.__doc__ += _QUERY_DOCSTRING
def query(self, *args, **kwargs):
"""Query the Spack database including all upstream databases."""
"""Query the Spack database including all upstream databases.
Additional Arguments:
install_tree (str): query 'all' (default), 'local', 'upstream', or an upstream path
"""
install_tree = kwargs.pop("install_tree", "all")
valid_trees = ["all", "upstream", "local", self.root] + [u.root for u in self.upstream_dbs]
if install_tree not in valid_trees:
msg = "Invalid install_tree argument to Database.query()\n"
msg += f"Try one of {', '.join(valid_trees)}"
tty.error(msg)
return []
upstream_results = []
for upstream_db in self.upstream_dbs:
upstreams = self.upstream_dbs
if install_tree not in ("all", "upstream"):
upstreams = [u for u in self.upstream_dbs if u.root == install_tree]
for upstream_db in upstreams:
# queries for upstream DBs need to *not* lock - we may not
# have permissions to do this and the upstream DBs won't know about
# us anyway (so e.g. they should never uninstall specs)
upstream_results.extend(upstream_db._query(*args, **kwargs) or [])
local_results = set(self.query_local(*args, **kwargs))
local_results = []
if install_tree in ("all", "local") or self.root == install_tree:
local_results = set(self.query_local(*args, **kwargs))
results = list(local_results) + list(x for x in upstream_results if x not in local_results)
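
A hedged usage sketch of the new argument; the spec string, database handle, and upstream path below are hypothetical:

    # Assumption: `db` is the active spack.database.Database instance.
    local_only = db.query("zlib", install_tree="local")
    upstream_only = db.query("zlib", install_tree="upstream")
    everything = db.query("zlib")  # install_tree defaults to "all"
    one_upstream = db.query("zlib", install_tree="/opt/shared/spack/opt/spack")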

View File

@@ -9,8 +9,6 @@
import tempfile
from typing import Any, Deque, Dict, Generator, List, NamedTuple, Tuple
import jinja2
from llnl.util import filesystem
import spack.repo
@@ -85,6 +83,8 @@ def _mock_layout(self) -> Generator[List[str], None, None]:
self.tmpdir.cleanup()
def _create_executable_scripts(self, mock_executables: MockExecutables) -> List[pathlib.Path]:
import jinja2
relative_paths = mock_executables.executables
script = mock_executables.script
script_template = jinja2.Template("#!/bin/bash\n{{ script }}\n")

View File

@@ -94,6 +94,9 @@ class OpenMpi(Package):
PatchesType = Optional[Union[Patcher, str, List[Union[Patcher, str]]]]
SUPPORTED_LANGUAGES = ("fortran", "cxx")
def _make_when_spec(value: WhenType) -> Optional["spack.spec.Spec"]:
"""Create a ``Spec`` that indicates when a directive should be applied.
@@ -585,6 +588,9 @@ def depends_on(
@see The section "Dependency specs" in the Spack Packaging Guide.
"""
if spack.spec.Spec(spec).name in SUPPORTED_LANGUAGES:
assert type == "build", "languages must be of 'build' type"
return _language(lang_spec_str=spec, when=when)
def _execute_depends_on(pkg: "spack.package_base.PackageBase"):
_depends_on(pkg, spec, when=when, type=type, patches=patches)
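
With this hook in place, a package can declare the languages it needs as ordinary build dependencies, which the directive reroutes to the language machinery below; a hedged package sketch (the package itself is hypothetical):

    class Mylib(Package):
        # Routed to _language(...) because "cxx" and "fortran" are in
        # SUPPORTED_LANGUAGES; languages must be of 'build' type.
        depends_on("cxx", type="build")
        depends_on("fortran", type="build")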
@@ -660,6 +666,7 @@ def patch(
level: int = 1,
when: WhenType = None,
working_dir: str = ".",
reverse: bool = False,
sha256: Optional[str] = None,
archive_sha256: Optional[str] = None,
) -> Patcher:
@@ -673,10 +680,10 @@ def patch(
level: patch level (as in the patch shell command)
when: optional anonymous spec that specifies when to apply the patch
working_dir: dir to change to before applying
reverse: reverse the patch
sha256: sha256 sum of the patch, used to verify the patch (only required for URL patches)
archive_sha256: sha256 sum of the *archive*, if the patch is compressed (only required for
compressed URL patches)
"""
def _execute_patch(pkg_or_dep: Union["spack.package_base.PackageBase", Dependency]):
@@ -711,13 +718,14 @@ def _execute_patch(pkg_or_dep: Union["spack.package_base.PackageBase", Dependenc
url_or_filename,
level,
working_dir=working_dir,
reverse=reverse,
ordering_key=ordering_key,
sha256=sha256,
archive_sha256=archive_sha256,
)
else:
patch = spack.patch.FilePatch(
pkg, url_or_filename, level, working_dir, ordering_key=ordering_key
pkg, url_or_filename, level, working_dir, reverse, ordering_key=ordering_key
)
cur_patches.append(patch)
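
With the keyword plumbed through to both patch classes, a package can ask for a patch to be applied in reverse; a minimal hedged sketch (the patch file and version are hypothetical):

    class Mypkg(Package):
        # Undo an upstream change that this hypothetical release shipped with.
        patch("undo-feature.patch", when="@1.2.0", reverse=True)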
@@ -919,9 +927,9 @@ def maintainers(*names: str):
"""
def _execute_maintainer(pkg):
maintainers_from_base = getattr(pkg, "maintainers", [])
# Here it is essential to copy, otherwise we might add to an empty list in the parent
pkg.maintainers = list(sorted(set(maintainers_from_base + list(names))))
maintainers = set(getattr(pkg, "maintainers", []))
maintainers.update(names)
pkg.maintainers = sorted(maintainers)
return _execute_maintainer
@@ -965,7 +973,6 @@ def license(
checked_by: string or list of strings indicating which github user checked the
license (if any).
when: A spec specifying when the license applies.
"""
return lambda pkg: _execute_license(pkg, license_identifier, when)
@@ -1012,6 +1019,21 @@ def _execute_requires(pkg: "spack.package_base.PackageBase"):
return _execute_requires
@directive("languages")
def _language(lang_spec_str: str, *, when: Optional[Union[str, bool]] = None):
"""Temporary implementation of language virtuals, until compilers are proper dependencies."""
def _execute_languages(pkg: "spack.package_base.PackageBase"):
when_spec = _make_when_spec(when)
if not when_spec:
return
languages = pkg.languages.setdefault(when_spec, set())
languages.add(lang_spec_str)
return _execute_languages
class DirectiveError(spack.error.SpackError):
"""This is raised when something is wrong with a package directive."""

View File

@@ -1427,7 +1427,7 @@ def _concretize_separately(self, tests=False):
# Ensure we have compilers in compilers.yaml to avoid that
# processes try to write the config file in parallel
_ = spack.compilers.get_compiler_config()
_ = spack.compilers.get_compiler_config(init_config=True)
# Early return if there is nothing to do
if len(args) == 0:

View File

@@ -2274,11 +2274,15 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
# whether to install source code with the package
self.install_source = install_args.get("install_source", False)
is_develop = pkg.spec.is_develop
# whether to keep the build stage after installation
self.keep_stage = install_args.get("keep_stage", False)
# Note: user commands do not have an explicit choice to disable
# keeping stages (i.e., we have a --keep-stage option, but not
# a --destroy-stage option), so we can override a default choice
# to destroy
self.keep_stage = is_develop or install_args.get("keep_stage", False)
# whether to restage
self.restage = install_args.get("restage", False)
self.restage = (not is_develop) and install_args.get("restage", False)
# whether to skip the patch phase
self.skip_patch = install_args.get("skip_patch", False)

View File

@@ -64,7 +64,7 @@
install_test_root,
)
from spack.installer import InstallError, PackageInstaller
from spack.stage import DIYStage, ResourceStage, Stage, StageComposite, compute_stage_name
from spack.stage import DevelopStage, ResourceStage, Stage, StageComposite, compute_stage_name
from spack.util.executable import ProcessError, which
from spack.util.package_hash import package_hash
from spack.version import GitVersion, StandardVersion
@@ -567,6 +567,7 @@ class PackageBase(WindowsRPath, PackageViewMixin, metaclass=PackageMeta):
provided_together: Dict["spack.spec.Spec", List[Set[str]]]
patches: Dict["spack.spec.Spec", List["spack.patch.Patch"]]
variants: Dict[str, Tuple["spack.variant.Variant", "spack.spec.Spec"]]
languages: Dict["spack.spec.Spec", Set[str]]
#: By default, packages are not virtual
#: Virtual packages override this attribute
@@ -1075,7 +1076,12 @@ def _make_stage(self):
# If it's a dev package (not transitively), use a DIY stage object
dev_path_var = self.spec.variants.get("dev_path", None)
if dev_path_var:
return DIYStage(dev_path_var.value)
dev_path = dev_path_var.value
link_format = spack.config.get("config:develop_stage_link")
if not link_format:
link_format = "build-{arch}-{hash:7}"
stage_link = self.spec.format_path(link_format)
return DevelopStage(compute_stage_name(self.spec), dev_path, stage_link)
# To fetch the current version
source_stage = self._make_root_stage(self.fetcher)
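
The link name comes from Spec.format_path, so each develop spec gets its own stage link; a hedged sketch of the derivation (the rendered value is illustrative):

    link_format = spack.config.get("config:develop_stage_link") or "build-{arch}-{hash:7}"
    stage_link = spec.format_path(link_format)
    # e.g. "build-linux-ubuntu22.04-x86_64-abcdef1" (hypothetical rendering)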
@@ -1407,7 +1413,7 @@ def do_fetch(self, mirror_only=False):
return
checksum = spack.config.get("config:checksum")
fetch = self.stage.managed_by_spack
fetch = self.stage.needs_fetching
if (
checksum
and fetch
@@ -1480,9 +1486,6 @@ def do_stage(self, mirror_only=False):
if self.has_code:
self.do_fetch(mirror_only)
self.stage.expand_archive()
if not os.listdir(self.stage.path):
raise spack.error.FetchError("Archive was empty for %s" % self.name)
else:
# Support for post-install hooks requires a stage.source_path
fsys.mkdirp(self.stage.source_path)
@@ -1516,7 +1519,7 @@ def do_patch(self):
# If we encounter an archive that failed to patch, restage it
# so that we can apply all the patches again.
if os.path.isfile(bad_file):
if self.stage.managed_by_spack:
if self.stage.requires_patch_success:
tty.debug("Patching failed last time. Restaging.")
self.stage.restage()
else:
@@ -1537,6 +1540,8 @@ def do_patch(self):
tty.msg("No patches needed for {0}".format(self.name))
return
errors = []
# Apply all the patches for specs that match this one
patched = False
for patch in patches:
@@ -1546,12 +1551,16 @@ def do_patch(self):
tty.msg("Applied patch {0}".format(patch.path_or_url))
patched = True
except spack.error.SpackError as e:
tty.debug(e)
# Touch bad file if anything goes wrong.
tty.msg("Patch %s failed." % patch.path_or_url)
fsys.touch(bad_file)
raise
error_msg = f"Patch {patch.path_or_url} failed."
if self.stage.requires_patch_success:
tty.msg(error_msg)
raise
else:
tty.debug(error_msg)
tty.debug(e)
errors.append(e)
if has_patch_fun:
try:
@@ -1569,24 +1578,29 @@ def do_patch(self):
# printed a message for each patch.
tty.msg("No patches needed for {0}".format(self.name))
except spack.error.SpackError as e:
tty.debug(e)
# Touch bad file if anything goes wrong.
tty.msg("patch() function failed for {0}".format(self.name))
fsys.touch(bad_file)
raise
error_msg = f"patch() function failed for {self.name}"
if self.stage.requires_patch_success:
tty.msg(error_msg)
raise
else:
tty.debug(error_msg)
tty.debug(e)
errors.append(e)
# Get rid of any old failed file -- patches have either succeeded
# or are not needed. This is mostly defensive -- it's needed
# if the restage() method doesn't clean *everything* (e.g., for a repo)
if os.path.isfile(bad_file):
os.remove(bad_file)
if not errors:
# Get rid of any old failed file -- patches have either succeeded
# or are not needed. This is mostly defensive -- it's needed
# if we didn't restage
if os.path.isfile(bad_file):
os.remove(bad_file)
# touch good or no patches file so that we skip next time.
if patched:
fsys.touch(good_file)
else:
fsys.touch(no_patches_file)
# touch good or no patches file so that we skip next time.
if patched:
fsys.touch(good_file)
else:
fsys.touch(no_patches_file)
@classmethod
def all_patches(cls):

View File

@@ -26,7 +26,11 @@
def apply_patch(
stage: "spack.stage.Stage", patch_path: str, level: int = 1, working_dir: str = "."
stage: "spack.stage.Stage",
patch_path: str,
level: int = 1,
working_dir: str = ".",
reverse: bool = False,
) -> None:
"""Apply the patch at patch_path to code in the stage.
@@ -35,6 +39,7 @@ def apply_patch(
patch_path: filesystem location for the patch to apply
level: patch level
working_dir: relative path *within* the stage to change to
reverse: reverse the patch
"""
git_utils_path = os.environ.get("PATH", "")
if sys.platform == "win32":
@@ -45,6 +50,10 @@ def apply_patch(
git_root = git_root / "usr" / "bin"
git_utils_path = os.pathsep.join([str(git_root), git_utils_path])
args = ["-s", "-p", str(level), "-i", patch_path, "-d", working_dir]
if reverse:
args.append("-R")
# TODO: Decouple Spack's patch support on Windows from Git
# for Windows, and instead have Spack directly fetch, install, and
# utilize that patch.
@@ -53,7 +62,7 @@ def apply_patch(
# flag is passed.
patch = which("patch", required=True, path=git_utils_path)
with llnl.util.filesystem.working_dir(stage.source_path):
patch("-s", "-p", str(level), "-i", patch_path, "-d", working_dir)
patch(*args)
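
For a reversed patch at the default level, this builds the usual patch invocation plus -R; a hedged sketch of the argument list (paths hypothetical):

    # Equivalent command line: patch -s -p 1 -i /path/to/foo.patch -d . -R
    args = ["-s", "-p", "1", "-i", "/path/to/foo.patch", "-d", "."]
    args.append("-R")  # only when reverse=True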
class Patch:
@@ -67,7 +76,12 @@ class Patch:
sha256: str
def __init__(
self, pkg: "spack.package_base.PackageBase", path_or_url: str, level: int, working_dir: str
self,
pkg: "spack.package_base.PackageBase",
path_or_url: str,
level: int,
working_dir: str,
reverse: bool = False,
) -> None:
"""Initialize a new Patch instance.
@@ -76,6 +90,7 @@ def __init__(
path_or_url: the relative path or URL to a patch file
level: patch level
working_dir: relative path *within* the stage to change to
reverse: reverse the patch
"""
# validate level (must be an integer >= 0)
if not isinstance(level, int) or not level >= 0:
@@ -87,6 +102,7 @@ def __init__(
self.path: Optional[str] = None # must be set before apply()
self.level = level
self.working_dir = working_dir
self.reverse = reverse
def apply(self, stage: "spack.stage.Stage") -> None:
"""Apply a patch to source in a stage.
@@ -97,7 +113,7 @@ def apply(self, stage: "spack.stage.Stage") -> None:
if not self.path or not os.path.isfile(self.path):
raise NoSuchPatchError(f"No such patch: {self.path}")
apply_patch(stage, self.path, self.level, self.working_dir)
apply_patch(stage, self.path, self.level, self.working_dir, self.reverse)
# TODO: Use TypedDict once Spack supports Python 3.8+ only
def to_dict(self) -> Dict[str, Any]:
@@ -111,6 +127,7 @@ def to_dict(self) -> Dict[str, Any]:
"sha256": self.sha256,
"level": self.level,
"working_dir": self.working_dir,
"reverse": self.reverse,
}
def __eq__(self, other: object) -> bool:
@@ -146,6 +163,7 @@ def __init__(
relative_path: str,
level: int,
working_dir: str,
reverse: bool = False,
ordering_key: Optional[Tuple[str, int]] = None,
) -> None:
"""Initialize a new FilePatch instance.
@@ -155,6 +173,7 @@ def __init__(
relative_path: path to patch, relative to the repository directory for a package.
level: level to pass to patch command
working_dir: path within the source directory where patch should be applied
reverse: reverse the patch
ordering_key: key used to ensure patches are applied in a consistent order
"""
self.relative_path = relative_path
@@ -182,7 +201,7 @@ def __init__(
msg += "package %s.%s does not exist." % (pkg.namespace, pkg.name)
raise ValueError(msg)
super().__init__(pkg, abs_path, level, working_dir)
super().__init__(pkg, abs_path, level, working_dir, reverse)
self.path = abs_path
self.ordering_key = ordering_key
@@ -228,6 +247,7 @@ def __init__(
level: int = 1,
*,
working_dir: str = ".",
reverse: bool = False,
sha256: str, # This is required for UrlPatch
ordering_key: Optional[Tuple[str, int]] = None,
archive_sha256: Optional[str] = None,
@@ -239,12 +259,13 @@ def __init__(
url: URL where the patch can be fetched
level: level to pass to patch command
working_dir: path within the source directory where patch should be applied
reverse: reverse the patch
ordering_key: key used to ensure patches are applied in a consistent order
sha256: sha256 sum of the patch, used to verify the patch
archive_sha256: sha256 sum of the *archive*, if the patch is compressed
(only required for compressed URL patches)
"""
super().__init__(pkg, url, level, working_dir)
super().__init__(pkg, url, level, working_dir, reverse)
self.url = url
self._stage: Optional["spack.stage.Stage"] = None
@@ -350,13 +371,20 @@ def from_dict(
dictionary["url"],
dictionary["level"],
working_dir=dictionary["working_dir"],
# Added in v0.22, fallback required for backwards compatibility
reverse=dictionary.get("reverse", False),
sha256=dictionary["sha256"],
archive_sha256=dictionary.get("archive_sha256"),
)
elif "relative_path" in dictionary:
patch = FilePatch(
pkg_cls, dictionary["relative_path"], dictionary["level"], dictionary["working_dir"]
pkg_cls,
dictionary["relative_path"],
dictionary["level"],
dictionary["working_dir"],
# Added in v0.22, fallback required for backwards compatibility
dictionary.get("reverse", False),
)
# If the patch in the repo changes, we cannot get it back, so we

View File

@@ -160,10 +160,15 @@ def detect(cls):
system, as the Cray compiler wrappers and other components of the Cray
programming environment are irrelevant without module support.
"""
craype_type, craype_version = cls.craype_type_and_version()
if craype_type == "EX" and craype_version >= spack.version.Version("21.10"):
if "opt/cray" not in os.environ.get("MODULEPATH", ""):
return False
return "opt/cray" in os.environ.get("MODULEPATH", "")
craype_type, craype_version = cls.craype_type_and_version()
if craype_type == "XC":
return True
if craype_type == "EX" and craype_version < spack.version.Version("21.10"):
return True
return False
def _default_target_from_env(self):
"""Set and return the default CrayPE target loaded in a clean login

View File

@@ -14,7 +14,7 @@
import xml.sax.saxutils
from typing import Dict, Optional
from urllib.parse import urlencode
from urllib.request import HTTPHandler, Request, build_opener
from urllib.request import HTTPSHandler, Request, build_opener
import llnl.util.tty as tty
from llnl.util.filesystem import working_dir
@@ -27,6 +27,7 @@
from spack.error import SpackError
from spack.util.crypto import checksum
from spack.util.log_parse import parse_log_events
from spack.util.web import urllib_ssl_cert_handler
from .base import Reporter
from .extract import extract_test_parts
@@ -427,7 +428,7 @@ def upload(self, filename):
# Compute md5 checksum for the contents of this file.
md5sum = checksum(hashlib.md5, filename, block_size=8192)
opener = build_opener(HTTPHandler)
opener = build_opener(HTTPSHandler(context=urllib_ssl_cert_handler()))
with open(filename, "rb") as f:
params_dict = {
"build": self.buildname,

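The upload now goes through an HTTPS handler wired to Spack's certificate helper. Assuming urllib_ssl_cert_handler() returns an ssl.SSLContext honoring config:ssl_certs, the stdlib pattern it feeds into is:

    import ssl
    from urllib.request import HTTPSHandler, build_opener

    # Assumption: urllib_ssl_cert_handler() yields an ssl.SSLContext; with
    # default trust roots the equivalent stdlib construction would be:
    opener = build_opener(HTTPSHandler(context=ssl.create_default_context()))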
View File

@@ -34,6 +34,7 @@
"strategy": {"type": "string", "enum": ["none", "minimal", "full"]}
},
},
"os_compatible": {"type": "object", "additionalProperties": {"type": "array"}},
},
}
}

View File

@@ -63,6 +63,7 @@
"oneOf": [{"type": "string"}, {"type": "array", "items": {"type": "string"}}]
},
"stage_name": {"type": "string"},
"develop_stage_link": {"type": "string"},
"test_stage": {"type": "string"},
"extensions": {"type": "array", "items": {"type": "string"}},
"template_dirs": {"type": "array", "items": {"type": "string"}},
@@ -72,6 +73,7 @@
"environments_root": {"type": "string"},
"connect_timeout": {"type": "integer", "minimum": 0},
"verify_ssl": {"type": "boolean"},
"ssl_certs": {"type": "string"},
"suppress_gpg_warnings": {"type": "boolean"},
"install_missing_compilers": {"type": "boolean"},
"debug": {"type": "boolean"},

View File

@@ -541,6 +541,7 @@ def _concretization_version_order(version_info: Tuple[GitOrStandardVersion, dict
info.get("preferred", False),
not info.get("deprecated", False),
not version.isdevelop(),
not version.is_prerelease(),
version,
)
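
Because tuples compare left to right, the new field makes any final release sort above any prerelease before the version itself is consulted; an illustration with stand-in tuples:

    # (preferred, not deprecated, not develop, not prerelease, version)
    prerelease = (False, True, True, False, (2, 0))  # e.g. 2.0-alpha1
    final = (False, True, True, True, (1, 9))        # e.g. 1.9
    assert max([prerelease, final]) == final  # 1.9 wins over 2.0-alpha1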
@@ -762,7 +763,6 @@ def solve(self, setup, specs, reuse=None, output=None, control=None, allow_depre
timer.stop("ground")
# With a grounded program, we can run the solve.
result = Result(specs)
models = [] # stable models if things go well
cores = [] # unsatisfiable cores if they do not
@@ -783,6 +783,7 @@ def on_model(model):
timer.stop("solve")
# once done, construct the solve result
result = Result(specs)
result.satisfiable = solve_result.satisfiable
if result.satisfiable:
@@ -823,11 +824,14 @@ def on_model(model):
print("Statistics:")
pprint.pprint(self.control.statistics)
if result.unsolved_specs and setup.concretize_everything:
result.raise_if_unsat()
if result.satisfiable and result.unsolved_specs and setup.concretize_everything:
unsolved_str = Result.format_unsolved(result.unsolved_specs)
raise InternalConcretizerError(
"Internal Spack error: the solver completed but produced specs"
f" that do not satisfy the request.\n\t{unsolved_str}"
" that do not satisfy the request. Please report a bug at "
f"https://github.com/spack/spack/issues\n\t{unsolved_str}"
)
return result, timer, self.control.statistics
@@ -1038,6 +1042,25 @@ def conflict_rules(self, pkg):
)
self.gen.newline()
def package_languages(self, pkg):
for when_spec, languages in pkg.languages.items():
condition_msg = f"{pkg.name} needs the {', '.join(sorted(languages))} language"
if when_spec != spack.spec.Spec():
condition_msg += f" when {when_spec}"
condition_id = self.condition(when_spec, name=pkg.name, msg=condition_msg)
for language in sorted(languages):
self.gen.fact(fn.pkg_fact(pkg.name, fn.language(condition_id, language)))
self.gen.newline()
def config_compatible_os(self):
"""Facts about compatible os's specified in configs"""
self.gen.h2("Compatible OS from concretizer config file")
os_data = spack.config.get("concretizer:os_compatible", {})
for recent, reusable in os_data.items():
for old in reusable:
self.gen.fact(fn.os_compatible(recent, old))
self.gen.newline()
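
The mapping is read from the new concretizer:os_compatible section added to the schema earlier in this diff; a hedged example of the config and the fact it yields (OS names hypothetical):

    # concretizer.yaml section, as the dict spack.config.get returns it:
    os_data = {"ubuntu22.04": ["ubuntu20.04"]}
    # The loop above then emits: os_compatible("ubuntu22.04", "ubuntu20.04")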
def compiler_facts(self):
"""Facts about available compilers."""
@@ -1087,6 +1110,9 @@ def pkg_rules(self, pkg, tests):
self.pkg_version_rules(pkg)
self.gen.newline()
# languages
self.package_languages(pkg)
# variants
self.variant_rules(pkg)
@@ -2157,7 +2183,7 @@ def versions_for(v):
if isinstance(v, vn.StandardVersion):
return [v]
elif isinstance(v, vn.ClosedOpenRange):
return [v.lo, vn.prev_version(v.hi)]
return [v.lo, vn._prev_version(v.hi)]
elif isinstance(v, vn.VersionList):
return sum((versions_for(e) for e in v), [])
else:
@@ -2292,8 +2318,6 @@ def setup(
self.possible_virtuals = node_counter.possible_virtuals()
self.pkgs = node_counter.possible_dependencies()
self.pkgs.update(spack.repo.PATH.packages_with_tags("runtime"))
# Fail if we already know an unreachable node is requested
for spec in specs:
missing_deps = [
@@ -2307,7 +2331,6 @@ def setup(
self.explicitly_required_namespaces[node.name] = node.namespace
self.gen = ProblemInstanceBuilder()
if not allow_deprecated:
self.gen.fact(fn.deprecated_versions_not_allowed())
@@ -2346,6 +2369,7 @@ def setup(
self.gen.newline()
self.gen.h1("General Constraints")
self.config_compatible_os()
self.compiler_facts()
# architecture defaults
@@ -2437,14 +2461,14 @@ def define_runtime_constraints(self):
"""Define the constraints to be imposed on the runtimes"""
recorder = RuntimePropertyRecorder(self)
for compiler in self.possible_compilers:
if compiler.name != "gcc":
continue
compiler_with_different_cls_names = {"oneapi": "intel-oneapi-compilers"}
compiler_cls_name = compiler_with_different_cls_names.get(compiler.name, compiler.name)
try:
compiler_cls = spack.repo.PATH.get_pkg_class(compiler.name)
compiler_cls = spack.repo.PATH.get_pkg_class(compiler_cls_name)
except spack.repo.UnknownPackageError:
continue
if hasattr(compiler_cls, "runtime_constraints"):
compiler_cls.runtime_constraints(compiler=compiler, pkg=recorder)
compiler_cls.runtime_constraints(spec=compiler.spec, pkg=recorder)
recorder.consume_facts()
@@ -2856,13 +2880,24 @@ def reset(self):
"""Resets the current state."""
self.current_package = None
def depends_on(self, dependency_str: str, *, when: str, type: str, description: str) -> None:
def depends_on(
self,
dependency_str: str,
*,
when: str,
type: str,
description: str,
languages: Optional[List[str]] = None,
) -> None:
"""Injects conditional dependencies on packages.
Conditional dependencies can be either "real" packages or virtual dependencies.
Args:
dependency_str: the dependency spec to inject
when: anonymous condition to be met on a package to have the dependency
type: dependency type
languages: languages needed by the package for the dependency to be considered
description: human-readable description of the rule for adding the dependency
"""
# TODO: The API for this function is not final, and is still subject to change. At
@@ -2888,26 +2923,45 @@ def depends_on(self, dependency_str: str, *, when: str, type: str, description:
f" not external({node_variable}),\n"
f" not runtime(Package)"
).replace(f'"{placeholder}"', f"{node_variable}")
if languages:
body_str += ",\n"
for language in languages:
body_str += f' attr("language", {node_variable}, "{language}")'
head_clauses = self._setup.spec_clauses(dependency_spec, body=False)
runtime_pkg = dependency_spec.name
is_virtual = head_clauses[0].args[0] == "virtual_node"
main_rule = (
f"% {description}\n"
f'1 {{ attr("depends_on", {node_variable}, node(0..X-1, "{runtime_pkg}"), "{type}") :'
f' max_dupes("gcc-runtime", X)}} 1:-\n'
f' max_dupes("{runtime_pkg}", X)}} 1:-\n'
f"{body_str}.\n\n"
)
if is_virtual:
main_rule = (
f"% {description}\n"
f'attr("dependency_holds", {node_variable}, "{runtime_pkg}", "{type}") :-\n'
f"{body_str}.\n\n"
)
self.rules.append(main_rule)
for clause in head_clauses:
if clause.args[0] == "node":
continue
runtime_node = f'node(RuntimeID, "{runtime_pkg}")'
head_str = str(clause).replace(f'"{runtime_pkg}"', runtime_node)
rule = (
f"{head_str} :-\n"
depends_on_constraint = (
f' attr("depends_on", {node_variable}, {runtime_node}, "{type}"),\n'
f"{body_str}.\n\n"
)
if is_virtual:
depends_on_constraint = (
f' attr("depends_on", {node_variable}, ProviderNode, "{type}"),\n'
f" provider(ProviderNode, {runtime_node}),\n"
)
rule = f"{head_str} :-\n" f"{depends_on_constraint}" f"{body_str}.\n\n"
self.rules.append(rule)
self.reset()
@@ -3483,9 +3537,14 @@ def solve_in_rounds(
if not result.unsolved_specs:
break
# This means we cannot progress with solving the input
if not result.satisfiable or not result.specs:
break
if not result.specs:
# This is also a problem: no specs were solved for, which
# means we would be in a loop if we tried again
unsolved_str = Result.format_unsolved(result.unsolved_specs)
raise InternalConcretizerError(
"Internal Spack error: a subset of input specs could not"
f" be solved for.\n\t{unsolved_str}"
)
input_specs = list(x for (x, y) in result.unsolved_specs)
for spec in result.specs:

View File

@@ -18,23 +18,23 @@
{ attr("virtual_node", node(0..X-1, Package)) } :- max_dupes(Package, X), virtual(Package).
% Integrity constraints on DAG nodes
:- attr("root", PackageNode), not attr("node", PackageNode).
:- attr("version", PackageNode, _), not attr("node", PackageNode), not attr("virtual_node", PackageNode).
:- attr("node_version_satisfies", PackageNode, _), not attr("node", PackageNode), not attr("virtual_node", PackageNode).
:- attr("hash", PackageNode, _), not attr("node", PackageNode).
:- attr("node_platform", PackageNode, _), not attr("node", PackageNode).
:- attr("node_os", PackageNode, _), not attr("node", PackageNode).
:- attr("node_target", PackageNode, _), not attr("node", PackageNode).
:- attr("node_compiler_version", PackageNode, _, _), not attr("node", PackageNode).
:- attr("variant_value", PackageNode, _, _), not attr("node", PackageNode).
:- attr("node_flag_compiler_default", PackageNode), not attr("node", PackageNode).
:- attr("node_flag", PackageNode, _, _), not attr("node", PackageNode).
:- attr("no_flags", PackageNode, _), not attr("node", PackageNode).
:- attr("external_spec_selected", PackageNode, _), not attr("node", PackageNode).
:- attr("depends_on", ParentNode, _, _), not attr("node", ParentNode).
:- attr("depends_on", _, ChildNode, _), not attr("node", ChildNode).
:- attr("node_flag_source", ParentNode, _, _), not attr("node", ParentNode).
:- attr("node_flag_source", _, _, ChildNode), not attr("node", ChildNode).
:- attr("root", PackageNode), not attr("node", PackageNode), internal_error("root without node").
:- attr("version", PackageNode, _), not attr("node", PackageNode), not attr("virtual_node", PackageNode), internal_error("version without node").
:- attr("node_version_satisfies", PackageNode, _), not attr("node", PackageNode), not attr("virtual_node", PackageNode), internal_error("version satisfies without node").
:- attr("hash", PackageNode, _), not attr("node", PackageNode), internal_error("hash without node").
:- attr("node_platform", PackageNode, _), not attr("node", PackageNode), internal_error("platform without node").
:- attr("node_os", PackageNode, _), not attr("node", PackageNode), internal_error("os without node").
:- attr("node_target", PackageNode, _), not attr("node", PackageNode), internal_error("target without node").
:- attr("node_compiler_version", PackageNode, _, _), not attr("node", PackageNode), internal_error("compiler without node").
:- attr("variant_value", PackageNode, _, _), not attr("node", PackageNode), internal_error("variant without node").
:- attr("node_flag_compiler_default", PackageNode), not attr("node", PackageNode), internal_error("compiler flag default without node").
:- attr("node_flag", PackageNode, _, _), not attr("node", PackageNode), internal_error("compiler flag without node").
:- attr("no_flags", PackageNode, _), not attr("node", PackageNode), internal_error("empty compiler flag without node").
:- attr("external_spec_selected", PackageNode, _), not attr("node", PackageNode), internal_error("external spec without node").
:- attr("depends_on", ParentNode, _, _), not attr("node", ParentNode), internal_error("depends on without node for parent").
:- attr("depends_on", _, ChildNode, _), not attr("node", ChildNode), internal_error("depends on without node for child").
:- attr("node_flag_source", ParentNode, _, _), not attr("node", ParentNode), internal_error("node flag source without flag node").
:- attr("node_flag_source", _, _, ChildNode), not attr("node", ChildNode), internal_error("node flag source without source node").
:- attr("virtual_node", VirtualNode), not provider(_, VirtualNode), internal_error("virtual node with no provider").
:- provider(_, VirtualNode), not attr("virtual_node", VirtualNode), internal_error("provider with no virtual node").
:- provider(PackageNode, _), not attr("node", PackageNode), internal_error("provider with no real node").
@@ -158,6 +158,14 @@ error(100, multiple_values_error, Attribute, Package)
attr_single_value(Attribute),
2 { attr(Attribute, node(ID, Package), Value) }.
%-----------------------------------------------------------------------------
% Languages used
%-----------------------------------------------------------------------------
attr("language", node(X, Package), Language) :-
condition_holds(ConditionID, node(X, Package)),
pkg_fact(Package,language(ConditionID, Language)).
%-----------------------------------------------------------------------------
% Version semantics
%-----------------------------------------------------------------------------
@@ -1011,16 +1019,6 @@ node_os_weight(PackageNode, Weight)
attr("node_os", PackageNode, OS),
os(OS, Weight).
% match semantics for OS's
node_os_match(PackageNode, DependencyNode) :-
depends_on(PackageNode, DependencyNode),
attr("node_os", PackageNode, OS),
attr("node_os", DependencyNode, OS).
node_os_mismatch(PackageNode, DependencyNode) :-
depends_on(PackageNode, DependencyNode),
not node_os_match(PackageNode, DependencyNode).
% every OS is compatible with itself. We can use `os_compatible` to declare
os_compatible(OS, OS) :- os(OS).
@@ -1172,11 +1170,13 @@ attr("node_compiler_version_satisfies", PackageNode, Compiler, Constraint)
% If the compiler version was set from the command line,
% respect it verbatim
:- attr("node_compiler_version_set", PackageNode, Compiler, Version),
not attr("node_compiler_version", PackageNode, Compiler, Version).
error(100, "Cannot set the required compiler: {2}%{0}@{1}", Compiler, Version, Package)
:- attr("node_compiler_version_set", node(X, Package), Compiler, Version),
not attr("node_compiler_version", node(X, Package), Compiler, Version).
:- attr("node_compiler_set", PackageNode, Compiler),
not attr("node_compiler_version", PackageNode, Compiler, _).
error(100, "Cannot set the required compiler: {1}%{0}", Compiler, Package)
:- attr("node_compiler_set", node(X, Package), Compiler),
not attr("node_compiler_version", node(X, Package), Compiler, _).
% Cannot select a compiler if it is not supported on the OS
% Compilers that are explicitly marked as allowed
@@ -1186,7 +1186,8 @@ error(100, "{0} compiler '%{1}@{2}' incompatible with 'os={3}'", Package, Compil
node_compiler(node(X, Package), CompilerID),
compiler_name(CompilerID, Compiler),
compiler_version(CompilerID, Version),
not compiler_os(CompilerID, OS),
compiler_os(CompilerID, CompilerOS),
not os_compatible(CompilerOS, OS),
not allow_compiler(Compiler, Version),
build(node(X, Package)).
@@ -1507,16 +1508,6 @@ opt_criterion(39, "compiler mismatches that are not from CLI").
build_priority(PackageNode, Priority)
}.
% Try to minimize the number of compiler mismatches in the DAG.
opt_criterion(35, "OS mismatches").
#minimize{ 0@235: #true }.
#minimize{ 0@35: #true }.
#minimize{
1@35+Priority,PackageNode,DependencyNode
: node_os_mismatch(PackageNode, DependencyNode),
build_priority(PackageNode, Priority)
}.
opt_criterion(30, "non-preferred OS's").
#minimize{ 0@230: #true }.
#minimize{ 0@30: #true }.

View File

@@ -10,6 +10,7 @@
import spack.deptypes as dt
import spack.package_base
import spack.repo
import spack.spec
PossibleDependencies = Set[str]
@@ -24,7 +25,13 @@ class Counter:
"""
def __init__(self, specs: List["spack.spec.Spec"], tests: bool) -> None:
self.specs = specs
runtime_pkgs = spack.repo.PATH.packages_with_tags("runtime")
runtime_virtuals = set()
for x in runtime_pkgs:
pkg_class = spack.repo.PATH.get_pkg_class(x)
runtime_virtuals.update(pkg_class.provided_virtual_names())
self.specs = specs + [spack.spec.Spec(x) for x in runtime_pkgs]
self.link_run_types: dt.DepFlag = dt.LINK | dt.RUN | dt.TEST
self.all_types: dt.DepFlag = dt.ALL
@@ -33,7 +40,9 @@ def __init__(self, specs: List["spack.spec.Spec"], tests: bool) -> None:
self.all_types = dt.LINK | dt.RUN | dt.BUILD
self._possible_dependencies: PossibleDependencies = set()
self._possible_virtuals: Set[str] = set(x.name for x in specs if x.virtual)
self._possible_virtuals: Set[str] = (
set(x.name for x in specs if x.virtual) | runtime_virtuals
)
def possible_dependencies(self) -> PossibleDependencies:
"""Returns the list of possible dependencies"""

View File

@@ -625,18 +625,6 @@ def __init__(self, *args):
else:
raise TypeError("__init__ takes 1 or 2 arguments. (%d given)" % nargs)
def _add_versions(self, version_list):
# If it already has a non-trivial version list, this is an error
if self.versions and self.versions != vn.any_version:
# Note: This may be impossible to reach by the current parser
# Keeping it in case the implementation changes.
raise MultipleVersionError(
"A spec cannot contain multiple version signifiers. Use a version list instead."
)
self.versions = vn.VersionList()
for version in version_list:
self.versions.add(version)
def _autospec(self, compiler_spec_like):
if isinstance(compiler_spec_like, CompilerSpec):
return compiler_spec_like
@@ -1420,6 +1408,13 @@ def external_path(self, ext_path):
def external(self):
return bool(self.external_path) or bool(self.external_modules)
@property
def is_develop(self):
"""Return whether the Spec represents a user-developed package
in a Spack ``Environment`` (i.e. using `spack develop`).
"""
return bool(self.variants.get("dev_path", False))
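
A hedged usage sketch of the new property; the spec strings are hypothetical, and dev_path is the variant that `spack develop` attaches:

    assert Spec("mypkg dev_path=/home/me/src/mypkg").is_develop
    assert not Spec("mypkg").is_develop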
def clear_dependencies(self):
"""Trim the dependencies of this spec."""
self._dependencies.clear()
@@ -1544,20 +1539,6 @@ def _dependencies_dict(self, depflag: dt.DepFlag = dt.ALL):
result[key] = list(group)
return result
#
# Private routines here are called by the parser when building a spec.
#
def _add_versions(self, version_list):
"""Called by the parser to add an allowable version."""
# If it already has a non-trivial version list, this is an error
if self.versions and self.versions != vn.any_version:
raise MultipleVersionError(
"A spec cannot contain multiple version signifiers." " Use a version list instead."
)
self.versions = vn.VersionList()
for version in version_list:
self.versions.add(version)
def _add_flag(self, name, value, propagate):
"""Called by the parser to add a known flag.
Known flags currently include "arch"
@@ -1626,14 +1607,6 @@ def _set_architecture(self, **kwargs):
else:
setattr(self.architecture, new_attr, new_value)
def _set_compiler(self, compiler):
"""Called by the parser to set the compiler."""
if self.compiler:
raise DuplicateCompilerSpecError(
"Spec for '%s' cannot have two compilers." % self.name
)
self.compiler = compiler
def _add_dependency(self, spec: "Spec", *, depflag: dt.DepFlag, virtuals: Tuple[str, ...]):
"""Called by the parser to add another spec as a dependency."""
if spec.name not in self._dependencies or not spec.name:
@@ -2995,7 +2968,6 @@ def _new_concretize(self, tests=False):
allow_deprecated = spack.config.get("config:deprecated", False)
solver = spack.solver.asp.Solver()
result = solver.solve([self], tests=tests, allow_deprecated=allow_deprecated)
result.raise_if_unsat()
# take the best answer
opt, i, answer = min(result.answers)

View File

@@ -208,7 +208,103 @@ def _mirror_roots():
]
class Stage:
class LockableStagingDir:
"""A directory whose lifetime can be managed with a context
manager (but persists if the user requests it). Instances can have
a specified name and if they do, then for all instances that have
the same name, only one can enter the context manager at a time.
"""
def __init__(self, name, path, keep, lock):
# TODO: This uses a protected member of tempfile, but seemed the only
# TODO: way to get a temporary name. It won't be the same as the
# TODO: temporary stage area in _stage_root.
self.name = name
if name is None:
self.name = stage_prefix + next(tempfile._get_candidate_names())
# Use the provided path or construct an optionally named stage path.
if path is not None:
self.path = path
else:
self.path = os.path.join(get_stage_root(), self.name)
# Flag to decide whether to delete the stage folder on exit or not
self.keep = keep
# File lock for the stage directory. We use one file for all
# stage locks. See spack.database.Database.prefix_locker.lock for
# details on this approach.
self._lock = None
self._use_locks = lock
# When stages are reused, we need to know whether to re-create
# it. This marks whether it has been created/destroyed.
self.created = False
def _get_lock(self):
if not self._lock:
sha1 = hashlib.sha1(self.name.encode("utf-8")).digest()
lock_id = prefix_bits(sha1, bit_length(sys.maxsize))
stage_lock_path = os.path.join(get_stage_root(), ".lock")
self._lock = spack.util.lock.Lock(
stage_lock_path, start=lock_id, length=1, desc=self.name
)
return self._lock
def __enter__(self):
"""
Entering a stage context will create the stage directory
Returns:
self
"""
if self._use_locks:
self._get_lock().acquire_write(timeout=60)
self.create()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
"""
Exiting from a stage context will delete the stage directory unless:
- it was explicitly requested not to do so
- an exception has been raised
Args:
exc_type: exception type
exc_val: exception value
exc_tb: exception traceback
Returns:
Boolean
"""
# Delete when there are no exceptions, unless asked to keep.
if exc_type is None and not self.keep:
self.destroy()
if self._use_locks:
self._get_lock().release_write()
def create(self):
"""
Ensures the top-level (config:build_stage) directory exists.
"""
# User has full permissions and group has only read permissions
if not os.path.exists(self.path):
mkdirp(self.path, mode=stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP)
elif not os.path.isdir(self.path):
os.remove(self.path)
mkdirp(self.path, mode=stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP)
# Make sure we can actually do something with the stage we made.
ensure_access(self.path)
self.created = True
def destroy(self):
raise NotImplementedError(f"{self.__class__.__name__} is abstract")
class Stage(LockableStagingDir):
"""Manages a temporary stage directory for building.
A Stage object is a context manager that handles a directory where
@@ -251,7 +347,8 @@ class Stage:
"""
#: Most staging is managed by Spack. DIYStage is one exception.
managed_by_spack = True
needs_fetching = True
requires_patch_success = True
def __init__(
self,
@@ -297,6 +394,8 @@ def __init__(
The search function that provides the fetch strategy
instance.
"""
super().__init__(name, path, keep, lock)
# TODO: fetch/stage coupling needs to be reworked -- the logic
# TODO: here is convoluted and not modular enough.
if isinstance(url_or_fetch_strategy, str):
@@ -314,72 +413,8 @@ def __init__(
self.srcdir = None
# TODO: This uses a protected member of tempfile, but seemed the only
# TODO: way to get a temporary name. It won't be the same as the
# TODO: temporary stage area in _stage_root.
self.name = name
if name is None:
self.name = stage_prefix + next(tempfile._get_candidate_names())
self.mirror_paths = mirror_paths
# Use the provided path or construct an optionally named stage path.
if path is not None:
self.path = path
else:
self.path = os.path.join(get_stage_root(), self.name)
# Flag to decide whether to delete the stage folder on exit or not
self.keep = keep
# File lock for the stage directory. We use one file for all
# stage locks. See spack.database.Database.prefix_locker.lock for
# details on this approach.
self._lock = None
if lock:
sha1 = hashlib.sha1(self.name.encode("utf-8")).digest()
lock_id = prefix_bits(sha1, bit_length(sys.maxsize))
stage_lock_path = os.path.join(get_stage_root(), ".lock")
self._lock = spack.util.lock.Lock(
stage_lock_path, start=lock_id, length=1, desc=self.name
)
# When stages are reused, we need to know whether to re-create
# it. This marks whether it has been created/destroyed.
self.created = False
def __enter__(self):
"""
Entering a stage context will create the stage directory
Returns:
self
"""
if self._lock is not None:
self._lock.acquire_write(timeout=60)
self.create()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
"""
Exiting from a stage context will delete the stage directory unless:
- it was explicitly requested not to do so
- an exception has been raised
Args:
exc_type: exception type
exc_val: exception value
exc_tb: exception traceback
Returns:
Boolean
"""
# Delete when there are no exceptions, unless asked to keep.
if exc_type is None and not self.keep:
self.destroy()
if self._lock is not None:
self._lock.release_write()
@property
def expected_archive_files(self):
"""Possible archive file paths."""
@@ -631,21 +666,6 @@ def restage(self):
"""
self.fetcher.reset()
def create(self):
"""
Ensures the top-level (config:build_stage) directory exists.
"""
# User has full permissions and group has only read permissions
if not os.path.exists(self.path):
mkdirp(self.path, mode=stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP)
elif not os.path.isdir(self.path):
os.remove(self.path)
mkdirp(self.path, mode=stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP)
# Make sure we can actually do something with the stage we made.
ensure_access(self.path)
self.created = True
def destroy(self):
"""Removes this stage directory."""
remove_linked_tree(self.path)
@@ -752,7 +772,8 @@ def __init__(self):
"cache_mirror",
"steal_source",
"disable_mirrors",
"managed_by_spack",
"needs_fetching",
"requires_patch_success",
]
)
@@ -808,8 +829,8 @@ class DIYStage:
directory naming convention.
"""
"""DIY staging is, by definition, not managed by Spack."""
managed_by_spack = False
needs_fetching = False
requires_patch_success = False
def __init__(self, path):
if path is None:
@@ -857,6 +878,69 @@ def cache_local(self):
tty.debug("Sources for DIY stages are not cached")
class DevelopStage(LockableStagingDir):
needs_fetching = False
requires_patch_success = False
def __init__(self, name, dev_path, reference_link):
super().__init__(name=name, path=None, keep=False, lock=True)
self.dev_path = dev_path
self.source_path = dev_path
# The path of a link that will point to this stage
if os.path.isabs(reference_link):
link_path = reference_link
else:
link_path = os.path.join(self.source_path, reference_link)
if not os.path.isdir(os.path.dirname(link_path)):
raise StageError(f"The directory containing {link_path} must exist")
self.reference_link = link_path
@property
def archive_file(self):
return None
def fetch(self, *args, **kwargs):
tty.debug("No fetching needed for develop stage.")
def check(self):
tty.debug("No checksum needed for develop stage.")
def expand_archive(self):
tty.debug("No expansion needed for develop stage.")
@property
def expanded(self):
"""Returns True since the source_path must exist."""
return True
def create(self):
super().create()
try:
llnl.util.symlink.symlink(self.path, self.reference_link)
except (llnl.util.symlink.AlreadyExistsError, FileExistsError):
pass
def destroy(self):
# Destroy all files, but do not follow symlinks
try:
shutil.rmtree(self.path)
except FileNotFoundError:
pass
try:
os.remove(self.reference_link)
except FileNotFoundError:
pass
self.created = False
def restage(self):
self.destroy()
self.create()
def cache_local(self):
tty.debug("Sources for Develop stages are not cached")
def ensure_access(file):
"""Ensure we can access a directory and die with an error if we can't."""
if not can_access(file):

View File

@@ -102,7 +102,10 @@ def to_dict_or_value(self):
if self.microarchitecture.vendor == "generic":
return str(self)
return syaml.syaml_dict(self.microarchitecture.to_dict(return_list_of_items=True))
# Get rid of compiler flag information before turning the uarch into a dict
uarch_dict = self.microarchitecture.to_dict()
uarch_dict.pop("compilers", None)
return syaml.syaml_dict(uarch_dict.items())
def __repr__(self):
cls_name = self.__class__.__name__
@@ -152,4 +155,6 @@ def optimization_flags(self, compiler):
# log this and just return compiler.version instead
tty.debug(str(e))
return self.microarchitecture.optimization_flags(compiler.name, str(compiler_version))
return self.microarchitecture.optimization_flags(
compiler.name, compiler_version.dotted_numeric_string
)

View File

@@ -8,13 +8,16 @@
import pytest
import archspec.cpu
import llnl.util.filesystem as fs
import spack.compilers
import spack.concretize
import spack.operating_systems
import spack.platforms
import spack.target
from spack.spec import ArchSpec, CompilerSpec, Spec
from spack.spec import ArchSpec, Spec
@pytest.fixture(scope="module")
@@ -123,52 +126,60 @@ def test_arch_spec_container_semantic(item, architecture_str):
@pytest.mark.parametrize(
"compiler_spec,target_name,expected_flags",
[
# Check compilers with version numbers from a single toolchain
# Homogeneous compilers
("gcc@4.7.2", "ivybridge", "-march=core-avx-i -mtune=core-avx-i"),
# Check mixed toolchains
("clang@8.0.0", "broadwell", ""),
("clang@3.5", "x86_64", "-march=x86-64 -mtune=generic"),
# Check Apple's Clang compilers
("apple-clang@9.1.0", "x86_64", "-march=x86-64"),
# Mixed toolchain
("clang@8.0.0", "broadwell", ""),
],
)
@pytest.mark.filterwarnings("ignore:microarchitecture specific")
def test_optimization_flags(compiler_spec, target_name, expected_flags, config):
def test_optimization_flags(compiler_spec, target_name, expected_flags, compiler_factory):
target = spack.target.Target(target_name)
compiler = spack.compilers.compilers_for_spec(compiler_spec).pop()
compiler_dict = compiler_factory(spec=compiler_spec, operating_system="")["compiler"]
if compiler_spec == "clang@8.0.0":
compiler_dict["paths"] = {
"cc": "/path/to/clang-8",
"cxx": "/path/to/clang++-8",
"f77": "/path/to/gfortran-9",
"fc": "/path/to/gfortran-9",
}
compiler = spack.compilers.compiler_from_dict(compiler_dict)
opt_flags = target.optimization_flags(compiler)
assert opt_flags == expected_flags
@pytest.mark.parametrize(
"compiler,real_version,target_str,expected_flags",
"compiler_str,real_version,target_str,expected_flags",
[
(CompilerSpec("gcc@=9.2.0"), None, "haswell", "-march=haswell -mtune=haswell"),
("gcc@=9.2.0", None, "haswell", "-march=haswell -mtune=haswell"),
# Check that custom string versions are accepted
(
CompilerSpec("gcc@=10foo"),
"9.2.0",
"icelake",
"-march=icelake-client -mtune=icelake-client",
),
("gcc@=10foo", "9.2.0", "icelake", "-march=icelake-client -mtune=icelake-client"),
# Check that we run version detection (4.4.0 doesn't support icelake)
(
CompilerSpec("gcc@=4.4.0-special"),
"9.2.0",
"icelake",
"-march=icelake-client -mtune=icelake-client",
),
("gcc@=4.4.0-special", "9.2.0", "icelake", "-march=icelake-client -mtune=icelake-client"),
# Check that the special case for Apple's clang is treated correctly
# i.e. it won't try to detect the version again
(CompilerSpec("apple-clang@=9.1.0"), None, "x86_64", "-march=x86-64"),
("apple-clang@=9.1.0", None, "x86_64", "-march=x86-64"),
],
)
def test_optimization_flags_with_custom_versions(
compiler, real_version, target_str, expected_flags, monkeypatch, config
compiler_str,
real_version,
target_str,
expected_flags,
monkeypatch,
mutable_config,
compiler_factory,
):
target = spack.target.Target(target_str)
compiler_dict = compiler_factory(spec=compiler_str, operating_system="redhat6")
mutable_config.set("compilers", [compiler_dict])
if real_version:
monkeypatch.setattr(spack.compiler.Compiler, "get_real_version", lambda x: real_version)
compiler = spack.compilers.compiler_from_dict(compiler_dict["compiler"])
opt_flags = target.optimization_flags(compiler)
assert opt_flags == expected_flags
@@ -203,9 +214,10 @@ def test_satisfy_strict_constraint_when_not_concrete(architecture_tuple, constra
)
@pytest.mark.usefixtures("mock_packages", "config")
@pytest.mark.only_clingo("Fixing the parser broke this test for the original concretizer.")
@pytest.mark.skipif(
str(archspec.cpu.host().family) != "x86_64", reason="tests are for x86_64 uarch ranges"
)
def test_concretize_target_ranges(root_target_range, dep_target_range, result, monkeypatch):
# Monkeypatch so that all concretization is done as if the machine is core2
monkeypatch.setattr(spack.platforms.test.Test, "default", "core2")
spec = Spec(f"a %gcc@10 foobar=bar target={root_target_range} ^b target={dep_target_range}")
with spack.concretize.disable_compiler_existence_check():
spec.concretize()

View File

@@ -19,6 +19,8 @@
import py
import pytest
import archspec.cpu
from llnl.util.filesystem import join_path, visit_directory_tree
import spack.binary_distribution as bindist
@@ -34,7 +36,7 @@
import spack.util.spack_yaml as syaml
import spack.util.url as url_util
import spack.util.web as web_util
from spack.binary_distribution import get_buildfile_manifest
from spack.binary_distribution import CannotListKeys, GenerateIndexError, get_buildfile_manifest
from spack.directory_layout import DirectoryLayout
from spack.paths import test_path
from spack.spec import Spec
@@ -463,50 +465,57 @@ def test_generate_index_missing(monkeypatch, tmpdir, mutable_config):
assert "libelf" not in cache_list
def test_generate_indices_key_error(monkeypatch, capfd):
def test_generate_key_index_failure(monkeypatch):
def list_url(url, recursive=False):
if "fails-listing" in url:
raise Exception("Couldn't list the directory")
return ["first.pub", "second.pub"]
def push_to_url(*args, **kwargs):
raise Exception("Couldn't upload the file")
monkeypatch.setattr(web_util, "list_url", list_url)
monkeypatch.setattr(web_util, "push_to_url", push_to_url)
with pytest.raises(CannotListKeys, match="Encountered problem listing keys"):
bindist.generate_key_index("s3://non-existent/fails-listing")
with pytest.raises(GenerateIndexError, match="problem pushing .* Couldn't upload"):
bindist.generate_key_index("s3://non-existent/fails-uploading")
def test_generate_package_index_failure(monkeypatch, capfd):
def mock_list_url(url, recursive=False):
print("mocked list_url({0}, {1})".format(url, recursive))
raise KeyError("Test KeyError handling")
raise Exception("Some HTTP error")
monkeypatch.setattr(web_util, "list_url", mock_list_url)
test_url = "file:///fake/keys/dir"
# Make sure generate_key_index handles the KeyError
bindist.generate_key_index(test_url)
with pytest.raises(GenerateIndexError, match="Unable to generate package index"):
bindist.generate_package_index(test_url)
err = capfd.readouterr()[1]
assert "Warning: No keys at {0}".format(test_url) in err
# Make sure generate_package_index handles the KeyError
bindist.generate_package_index(test_url)
err = capfd.readouterr()[1]
assert "Warning: No packages at {0}".format(test_url) in err
assert (
f"Warning: Encountered problem listing packages at {test_url}: Some HTTP error"
in capfd.readouterr().err
)
def test_generate_indices_exception(monkeypatch, capfd):
def mock_list_url(url, recursive=False):
print("mocked list_url({0}, {1})".format(url, recursive))
raise Exception("Test Exception handling")
monkeypatch.setattr(web_util, "list_url", mock_list_url)
test_url = "file:///fake/keys/dir"
url = "file:///fake/keys/dir"
# Make sure generate_key_index handles the Exception
bindist.generate_key_index(test_url)
with pytest.raises(GenerateIndexError, match=f"Encountered problem listing keys at {url}"):
bindist.generate_key_index(url)
err = capfd.readouterr()[1]
expect = "Encountered problem listing keys at {0}".format(test_url)
assert expect in err
with pytest.raises(GenerateIndexError, match="Unable to generate package index"):
bindist.generate_package_index(url)
# Make sure generate_package_index handles the Exception
bindist.generate_package_index(test_url)
err = capfd.readouterr()[1]
expect = "Encountered problem listing packages at {0}".format(test_url)
assert expect in err
assert f"Encountered problem listing packages at {url}" in capfd.readouterr().err
@pytest.mark.usefixtures("mock_fetch", "install_mockery")
@@ -573,11 +582,20 @@ def test_update_sbang(tmpdir, test_mirror):
uninstall_cmd("-y", "/%s" % new_spec.dag_hash())
def test_install_legacy_buildcache_layout(install_mockery_mutable_config):
@pytest.mark.skipif(
str(archspec.cpu.host().family) != "x86_64",
reason="test data uses gcc 4.5.0 which does not support aarch64",
)
def test_install_legacy_buildcache_layout(
mutable_config, compiler_factory, install_mockery_mutable_config
):
"""Legacy buildcache layout involved a nested archive structure
where the .spack file contained a repeated spec.json and another
compressed archive file containing the install tree. This test
makes sure we can still read that layout."""
mutable_config.set(
"compilers", [compiler_factory(spec="gcc@4.5.0", operating_system="debian6")]
)
legacy_layout_dir = os.path.join(test_path, "data", "mirrors", "legacy_layout")
mirror_url = "file://{0}".format(legacy_layout_dir)
filename = (

View File

@@ -9,6 +9,8 @@
import py.path
import pytest
import archspec.cpu
import llnl.util.filesystem as fs
import spack.build_systems.autotools
@@ -209,6 +211,9 @@ def test_autotools_gnuconfig_replacement_disabled(
assert "gnuconfig version of config.guess" not in f.read()
@pytest.mark.disable_clean_stage_check
@pytest.mark.skipif(
str(archspec.cpu.host().family) != "x86_64", reason="test data is specific for x86_64"
)
def test_autotools_gnuconfig_replacement_no_gnuconfig(self, mutable_database, monkeypatch):
"""
Tests whether a useful error message is shown when patch_config_files is

View File

@@ -164,3 +164,20 @@ def test_install_time_test_callback(tmpdir, config, mock_packages, mock_stage):
with open(s.package.tester.test_log_file, "r") as f:
results = f.read().replace("\n", " ")
assert "PyTestCallback test" in results
@pytest.mark.regression("43097")
@pytest.mark.usefixtures("builder_test_repository", "config")
def test_mixins_with_builders(working_env):
"""Tests that run_after and run_before callbacks are accumulated correctly,
when mixins are used with builders.
"""
s = spack.spec.Spec("builder-and-mixins").concretized()
builder = spack.builder.create(s.package)
# Check that callbacks added by the mixin are in the list
assert any(fn.__name__ == "before_install" for _, fn in builder.run_before_callbacks)
assert any(fn.__name__ == "after_install" for _, fn in builder.run_after_callbacks)
# Check that callback from the GenericBuilder are in the list too
assert any(fn.__name__ == "sanity_check_prefix" for _, fn in builder.run_after_callbacks)

View File

@@ -448,7 +448,7 @@ def _fail(self, args):
def test_ci_create_buildcache(tmpdir, working_env, config, mock_packages, monkeypatch):
"""Test that create_buildcache returns a list of objects with the correct
keys and types."""
monkeypatch.setattr(spack.ci, "push_mirror_contents", lambda a, b, c: True)
monkeypatch.setattr(spack.ci, "_push_to_build_cache", lambda a, b, c: True)
results = ci.create_buildcache(
None, destination_mirror_urls=["file:///fake-url-one", "file:///fake-url-two"]

Some files were not shown because too many files have changed in this diff.