Compare commits

72 Commits

Author SHA1 Message Date
Gregory Becker
1c232759da wip: working for installer tests
Signed-off-by: Gregory Becker <becker33@llnl.gov>
2025-03-20 12:31:33 -07:00
Robert Maaskant
0d2c624bcb glib: add v2.82.5 (#49281) 2025-03-06 17:49:14 +01:00
Alec Scott
765b6b7150 py-aiojobs: new-package (#49329)
* py-aiojobs: new-package

* Update var/spack/repos/builtin/packages/py-aiojobs/package.py

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

* Fix minimum required python dependency based on feedback

---------

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2025-03-06 07:11:06 -06:00
Seth R. Johnson
a91f96292c vecgeom: add development version of surface branch (#49313)
* vecgeom: add development version of surface branch

* Use tag on main branch

* Get full repo for versioning on master branch
2025-03-06 05:32:33 -05:00
Wouter Deconinck
18487a45ed xz: add v5.4.7, v5.6.2, v5.6.3 (#49330) 2025-03-06 09:47:25 +01:00
Wouter Deconinck
29485e2125 meson: add v1.5.2, v1.6.1, v1.7.0 (#49244) 2025-03-05 22:36:06 -06:00
dependabot[bot]
7674ea0b7d build(deps): bump types-six in /.github/workflows/requirements/style (#49295)
Bumps [types-six](https://github.com/python/typeshed) from 1.17.0.20241205 to 1.17.0.20250304.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-six
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-05 22:34:49 -06:00
Wouter Deconinck
693376ea97 qt-*: add v6.8.2 (#49320) 2025-03-05 20:03:34 -07:00
Massimiliano Culpo
88bf2a8bcf globalarrays: add unconditional dep on C++ (#49317)
See https://gitlab.spack.io/spack/spack/-/jobs/15482194

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2025-03-05 20:03:09 -07:00
Wouter Deconinck
03e9ca0a76 QtPackage: set QT_ADDITIONAL_SBOM_DOCUMENT_PATHS (#49319)
* QtPackage: set QT_ADDITIONAL_SBOM_DOCUMENT_PATHS

* QtPackage: self.spec.satisfies("@6.9:")

* QtPackage: if self.spec.satisfies("@6.9:")
2025-03-05 19:53:35 -07:00
Massimiliano Culpo
18399d0bd1 qt-svg: add dependency on C (#49316)
https://gitlab.spack.io/spack/spack/-/jobs/15482214

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2025-03-05 19:53:10 -07:00
Dan Bonachea
3aabff77d7 GASNet 2025.2 update (#49327)
* gasnet: deprecate old versions
  GASNet versions more than 2 years old are not supported.
  Update description text.
* gasnet: add 2025.2.0-snapshot version
2025-03-05 19:48:31 -07:00
Chris Marsh
aa86342814 Ensure that if TCL is already sourced on the system, its lib paths don't interfere with Spack's install step (#49325) 2025-03-05 19:48:04 -07:00
Weiqun Zhang
170a276f18 amrex: add v25.03 (#49252)
Starting from amrex-25.03, FFT is enabled by default in the Spack build.
2025-03-05 15:53:25 -08:00
Massimiliano Culpo
313524dc6d qrupdate: update to use oneapi packages (#49304)
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2025-03-05 13:44:37 -05:00
Massimiliano Culpo
5aae6e25a5 arpack-ng: update to use oneapi packages (#49302)
Also, remove deprecated versions

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2025-03-05 13:44:13 -05:00
Massimiliano Culpo
b58a52b6ce abinit: update to use oneapi packages (#49301)
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2025-03-05 13:44:01 -05:00
Chris White
32760e2885 sundials: expand patch when rule (#49296) 2025-03-05 16:13:19 +01:00
Harmen Stoppels
125feb125c Define Package API version (#49274)
Defines `spack.package_api_version` and `spack.min_package_api_version` 
as tuples (major, minor). 

These define, respectively, the current Package API version implemented by this version 
of Spack and the minimum Package API version it is backwards compatible with.

Repositories can optionally define:
```yaml
repo:
    namespace: my_repo
    api: v1.2
```
which indicates they are compatible with versions of Spack that implement 
Package API `>= 1.2` and `< 2.0`. When the `api` key is omitted, the default 
`v1.0` is assumed.
2025-03-05 15:42:48 +01:00
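
As a rough illustration of the rule above: a repository's declared API version is accepted when it falls between Spack's minimum and current Package API versions, compared as tuples. A minimal sketch of that check (the tuple values below use the v1.2 example from the commit message and are assumptions, not Spack's shipped values):

```python
from typing import Tuple

# Mirrors spack.package_api_version / spack.min_package_api_version (assumed values)
package_api_version: Tuple[int, int] = (1, 2)      # current API implemented
min_package_api_version: Tuple[int, int] = (1, 0)  # oldest API still supported

def repo_is_compatible(repo_api: Tuple[int, int]) -> bool:
    """A repo declaring `api: vX.Y` is usable if min <= (X, Y) <= current."""
    return min_package_api_version <= repo_api <= package_api_version

assert repo_is_compatible((1, 0))      # the default when `api` is omitted
assert not repo_is_compatible((2, 0))  # a major bump is a breaking change
```
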
Wouter Deconinck
8677063142 QtPackage: modify QT_ADDITIONAL_PACKAGES_PREFIX_PATH handling (#49297)
* QtPackage: mv QT_ADDITIONAL_PACKAGES_PREFIX_PATH handling

* geomodel: support Qt6

* qt-base: rm import re
2025-03-05 09:09:32 -05:00
Massimiliano Culpo
f015b18230 hydrogen: update to use oneapi packages (#49293)
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2025-03-05 09:06:32 +01:00
Massimiliano Culpo
aa9e610fa6 elemental: remove deprecated package (#49291)
This package has not been maintained since 2016.

We maintain an active fork in the hydrogen
package, so remove this one.

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2025-03-05 08:36:05 +01:00
Wouter Deconinck
7d62045c30 py-networkx: add up to v3.4.2 (#49289)
* py-networkx: add new versions up to 3.4.2
* py-networkx: add more requirements
* py-networkx: fix typo
* py-networkx: fix python and py-setuptools dependencies

---------

Co-authored-by: Joseph C Wang <joequant@gmail.com>
2025-03-04 17:02:54 -08:00
Chris Marsh
5b03173b99 r-packages: add missing gettext dependencies (#48910)
* add gettext dependency

* typo

* style
2025-03-04 17:07:01 -06:00
mvlopri
36fcdb8cfa Update the incorrect sha for the SEACAS package.py (#49292)
The sha256sum for the 2025-02-27 version of SEACAS is incorrect
due to the movement of the tagged version.
2025-03-04 16:03:28 -07:00
Chris Marsh
7d5b17fbf2 py-rpy2: Add 3.5.17 (#48911)
* Update rpy2 to newest version and clean up package

* Add me as maintainer

* Update depends section as per review. Add ipython variant. Fix some ranges and add support for python 3.9. Deprecated outdated versions

* refine depends_on and remove redundant version info

* style
2025-03-04 15:58:12 -07:00
Piotr Sacharuk
d6e3292955 flux-sched: Apply workarounds for oneAPI compiler for problem with build (#49282) 2025-03-04 15:28:33 -07:00
Chris Marsh
60f54df964 Explicitly depend on gettext for libintl (#48908) 2025-03-04 16:25:31 -06:00
Wouter Deconinck
487df807cc veccore: add typo fix for clang (#49288)
* veccore: add typo fix for clang

* veccore: apply ScalarWrapper.h patch for all compilers

---------

Co-authored-by: Joseph C Wang <joequant@gmail.com>
2025-03-04 14:35:47 -07:00
Zack Galbreath
cacdf84964 ci: add support for high priority local mirror (#49264) 2025-03-04 14:47:37 -06:00
fbrechin
e2293c758f Adding ability for repo paths from a manifest file to be expanded when creating an environment. (#49084)
* Adding ability for repo paths from a manifest file to be expanded when creating an environment.

A unit test was added to check that an environment variable will be expanded.
Also, a bug was fixed in the expansion of develop paths where if an environment variable
was in the path that then produced an absolute path the path would not be extended.

* Fixing new unit test for env repo var substitution

* Messed up resolving last rebase
2025-03-04 09:52:28 -08:00
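
To illustrate the expansion this commit describes, here is a hypothetical manifest whose `repos` entry contains an environment variable (the variable and paths are made up for the example):

```yaml
# spack.yaml, before the environment is created (hypothetical paths)
spack:
  repos:
  - $CUSTOM_REPOS/my-repo
# When the environment is created in a different directory, the entry is
# rewritten to the expanded absolute path, e.g.:
#   - /home/user/custom-repos/my-repo
```
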
Harmen Stoppels
f5a275adf5 gitignore: remove *_archive (#49278) 2025-03-04 18:37:18 +01:00
Paul
615ced32cd protobuf: add v3.29.3 (#49246) 2025-03-04 11:29:53 -06:00
Massimiliano Culpo
bc04d963e5 Remove debug print statements in unit-tests (#49280)
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2025-03-04 18:29:45 +01:00
Taillefumier Mathieu
11051ce5c7 CP2K: Add GRPP support (#49232) 2025-03-04 06:54:27 -07:00
Adam J. Stewart
631bddc52e py-pyarrow: add v19.0.1 (#49149)
* py-pyarrow: add v19.0.1

* Environment variables no longer needed either

* Remove py-pyarrow variants
2025-03-04 13:20:52 +01:00
Adam J. Stewart
b5f40aa7fb OpenCV: fix +cuda build (#49146) 2025-03-04 13:19:57 +01:00
Adam J. Stewart
57e0798af2 py-pip: mark Python 3.12+ support (#49148) 2025-03-04 13:18:38 +01:00
Chris White
0161b662f7 conduit: do not pass link flags to ar (#49263) 2025-03-03 19:53:11 -07:00
afzpatel
aa55b19680 fix +asan in ROCm packages (#48745)
* fix asan for hsa-rocr-dev
* add libclang_rt.asan-x86_64.so to LD_LIBRARY_PATH
* fix +asan for hipsparselt
* fix rocm-openmp-extras asan and add rccl +asan support
* add missing comgr build env variables
* add missing rocm-smi-lib build env variables
* minor dependency change
* fix style
2025-03-03 17:57:34 -08:00
dependabot[bot]
8cfffd88fa build(deps): bump pytest from 8.3.4 to 8.3.5 in /lib/spack/docs (#49268)
Bumps [pytest](https://github.com/pytest-dev/pytest) from 8.3.4 to 8.3.5.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/8.3.4...8.3.5)

---
updated-dependencies:
- dependency-name: pytest
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-03 19:18:42 -06:00
dependabot[bot]
2f8dcb8097 build(deps): bump python-levenshtein in /lib/spack/docs (#49269)
Bumps [python-levenshtein](https://github.com/rapidfuzz/python-Levenshtein) from 0.26.1 to 0.27.1.
- [Release notes](https://github.com/rapidfuzz/python-Levenshtein/releases)
- [Changelog](https://github.com/rapidfuzz/python-Levenshtein/blob/main/HISTORY.md)
- [Commits](https://github.com/rapidfuzz/python-Levenshtein/compare/v0.26.1...v0.27.1)

---
updated-dependencies:
- dependency-name: python-levenshtein
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-03 19:17:48 -06:00
dependabot[bot]
5b70fa8cc8 build(deps): bump sphinx from 8.2.1 to 8.2.3 in /lib/spack/docs (#49270)
Bumps [sphinx](https://github.com/sphinx-doc/sphinx) from 8.2.1 to 8.2.3.
- [Release notes](https://github.com/sphinx-doc/sphinx/releases)
- [Changelog](https://github.com/sphinx-doc/sphinx/blob/master/CHANGES.rst)
- [Commits](https://github.com/sphinx-doc/sphinx/compare/v8.2.1...v8.2.3)

---
updated-dependencies:
- dependency-name: sphinx
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-03 19:17:08 -06:00
Adam J. Stewart
b4025e89ed py-torchmetrics: add v1.6.2 (#49262) 2025-03-03 19:15:49 -06:00
Eric Berquist
8db74e1b2f tmux: add 3.5a, 3.5, and 3.3 (#49259)
* tmux: add 3.5a, 3.5, and 3.3

* tmux: patch is in releases from 3.5 onward

* tmux: versions 3.5 and newer can use jemalloc
2025-03-03 19:12:45 -06:00
Wouter Deconinck
1fcfbadba7 qwt: add v6.2.0, v6.3.0, support Qt6 (#45604)
* qwt: support building against Qt6

* qwt: fix style

* qwt: depends_on qt-base+opengl+widgets when +opengl

* visit: patch for missing cmath include

---------

Co-authored-by: Bernhard Kaindl <contact@bernhard.kaindl.dev>
2025-03-03 16:25:48 -08:00
Chris White
13ec35873f Axom: Changes from Axom repository (#49183)
* pull in new changes from axom project

* add new versions

* convert more conditionals to spec.satisfies

-------------
Co-authored-by: white238 <white238@users.noreply.github.com>
2025-03-03 15:47:45 -08:00
Philip Fackler
f96b6eac2b xolotl: new package (#48876)
* Adding xolotl package

* [@spackbot] updating style on behalf of PhilipFackler

* Removing redundant text

* Add blank line

* Update var/spack/repos/builtin/packages/xolotl/package.py

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

* Update var/spack/repos/builtin/packages/xolotl/package.py

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

* Switch to CudaPackage and remove source dir from runtime env

* [@spackbot] updating style on behalf of PhilipFackler

---------

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2025-03-03 15:18:28 -06:00
Rocco Meli
933a1a5cd9 update (#49261) 2025-03-03 10:38:10 -07:00
Stephen Nicholas Swatman
b2b9914efc acts dependencies: new versions as of 2025/03/03 (#49253)
This commit adds ACTS version 39.2.0 and detray version 0.89.0.
2025-03-03 09:32:59 -07:00
Rocco Meli
9ce9596981 multicharge: add v0.3.1 (#49255)
* multicharge: add v0.3.1

* fix url
2025-03-03 15:32:29 +01:00
Wouter Deconinck
fc30fe1f6b librsvg: add v2.56.4, v2.57.3, v2.58.2 (#45734)
* librsvg: add v2.56.4, v2.57.3, v2.58.2

---------

Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
2025-03-02 14:08:43 -08:00
Paul
25a4b98359 jacamar-ci: add v0.25.0 (#49248) 2025-03-02 14:50:43 -06:00
Adam J. Stewart
05c34b7312 py-pymc3: not compatible with numpy 2 (#49225) 2025-03-01 13:43:05 -06:00
Tahmid Khan
b22842af56 globalarrays: Add variant cxx which adds the --enable-cxx flag (#49241) 2025-03-01 13:16:04 -06:00
Vanessasaurus
0bef028692 Automated deployment to update package flux-sched 2025-02-28 (#49229)
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
2025-03-01 08:48:41 -07:00
Vanessasaurus
935facd069 Automated deployment to update package flux-security 2025-02-28 (#49230)
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
2025-03-01 08:47:19 -07:00
Adam J. Stewart
87e5255bbc py-matplotlib: add v3.10.1 (#49233) 2025-03-01 16:22:49 +01:00
dependabot[bot]
b42f0d793d build(deps): bump isort in /.github/workflows/requirements/style (#49212)
Bumps [isort](https://github.com/PyCQA/isort) from 6.0.0 to 6.0.1.
- [Release notes](https://github.com/PyCQA/isort/releases)
- [Changelog](https://github.com/PyCQA/isort/blob/main/CHANGELOG.md)
- [Commits](https://github.com/PyCQA/isort/compare/6.0.0...6.0.1)

---
updated-dependencies:
- dependency-name: isort
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-01 08:18:06 -07:00
dependabot[bot]
ccca0d3354 build(deps): bump isort from 6.0.0 to 6.0.1 in /lib/spack/docs (#49213)
Bumps [isort](https://github.com/PyCQA/isort) from 6.0.0 to 6.0.1.
- [Release notes](https://github.com/PyCQA/isort/releases)
- [Changelog](https://github.com/PyCQA/isort/blob/main/CHANGELOG.md)
- [Commits](https://github.com/PyCQA/isort/compare/6.0.0...6.0.1)

---
updated-dependencies:
- dependency-name: isort
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-01 08:17:39 -07:00
HELICS-bot
9699bbc7b9 helics: Add version 3.6.1 (#49231)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-03-01 08:16:21 -07:00
Raffaele Solcà
c7e251de9f Add dla-future v0.8.0 (#49235) 2025-03-01 08:14:52 -07:00
Robert Maaskant
d788b15529 libmd: add version 1.1.0 (#49239)
Release notes can be read at https://archive.hadrons.org/software/libmd/libmd-1.1.0.announce
2025-03-01 08:11:12 -07:00
Harmen Stoppels
8e7489bc17 Revert "Honor cmake_prefix_paths property if available (#42569)" (#49237)
This reverts commit fe171a560b.
2025-02-28 23:33:02 +01:00
John W. Parent
d234df62d7 Solver: Cache Concretization Results (#48198)
Concretizer caching for reusing solver results
2025-02-28 12:42:00 -06:00
Mikhail Titov
4a5922a0ec py-radical-*: new version 1.90 (#48586)
* rct: update packages (RE, RG, RP, RS, RU) with new version 1.90

* radical: added `url_for_version` for older versions

* radical: set latest versions for `radical.pilot` and `radical.utils`

* radical: fixed `url_for_version` setup

* radical: set the latest version for `radical.entk`

* radical: fixed style for `url_for_version`

* Apply suggestions from code review (python version dependency)

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

---------

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2025-02-28 07:38:45 -07:00
John W. Parent
5bd184aaaf Windows Rpath: Allow package test rpaths (#47072)
On Windows, libraries search their directory for dependencies, and
we help libraries in Spack-built packages locate their dependencies
by symlinking them into the dependent's directory (we refer to this
as simulated RPATHing).

We extend the convenience functionality here to support base library
directories outside of the package prefix: this is primarily for
running tests in the build directory (which is not located inside
of the final install prefix chosen by spack).
2025-02-27 19:16:00 -08:00
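
A minimal sketch of the simulated-RPATH idea described above, i.e. symlinking dependency DLLs next to the binaries that need them so Windows' directory-local search can resolve them. This helper is illustrative only, not Spack's actual implementation:

```python
import os
import pathlib
from typing import Iterable

def simulate_rpath(dependent_dir: str, dependency_lib_dirs: Iterable[str]) -> None:
    """Symlink every DLL from each dependency directory into dependent_dir."""
    dest = pathlib.Path(dependent_dir)
    for lib_dir in map(pathlib.Path, dependency_lib_dirs):
        for dll in lib_dir.glob("*.dll"):
            link = dest / dll.name
            if not link.exists():
                # Windows searches the binary's own directory first, so the
                # symlink makes the dependency resolvable at load time.
                os.symlink(dll, link)
```
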
Mikael Simberg
464c3b96fa fmt: Add 11.1.4 (#49218) 2025-02-27 19:12:26 -06:00
Scott Wittenburg
60544a4e84 ci: avoid py-mpi4py tests on darwin (#49227) 2025-02-27 18:07:59 -07:00
Greg Sjaardema
a664d98f37 seacas: new version with change set support (#49224)
This release contains modifications to most of the SEACAS applications to support ChangeSets to some degree.
See https://github.com/SandiaLabs/seacas/wiki/Dynamic_Topology for information about change sets, and
see https://github.com/SandiaLabs/seacas/wiki/Supporting-Change-Sets for information about how the various SEACAS applications support the use or creation of change sets.

The release also includes various other small changes, including formatting, portability, installation, TPL version updates, and spelling.
2025-02-27 18:02:51 -07:00
Sinan
0e3d7efb0f alps: add conflict (#48751)
Co-authored-by: Sinan81 <Sinan@world>
Co-authored-by: Sinan81 <Sinan81@users.noreply.github.com>
2025-02-27 17:57:55 -07:00
Chris Green
a8cd0b99f3 New recipes for PlantUML and py-sphinxcontrib-plantuml (#49204)
* new-recipe: plantuml
* new-recipe: py-sphinxcontrib-plantuml
2025-02-27 16:57:23 -08:00
126 changed files with 2077 additions and 1654 deletions

@@ -1,7 +1,7 @@
black==25.1.0
clingo==5.7.1
flake8==7.1.2
isort==6.0.0
isort==6.0.1
mypy==1.15.0
types-six==1.17.0.20241205
types-six==1.17.0.20250304
vermin==1.6.0

.gitignore

@@ -201,7 +201,6 @@ tramp
# Org-mode
.org-id-locations
*_archive
# flymake-mode
*_flymake.*

@@ -125,6 +125,8 @@ are stored in ``$spack/var/spack/cache``. These are stored indefinitely
by default. Can be purged with :ref:`spack clean --downloads
<cmd-spack-clean>`.
.. _Misc Cache:
--------------------
``misc_cache``
--------------------
@@ -334,3 +336,52 @@ create a new alias called ``inst`` that will always call ``install -v``:
aliases:
inst: install -v
-------------------------------
``concretization_cache:enable``
-------------------------------
When set to ``true``, Spack will utilize a cache of solver outputs from
successful concretization runs. When enabled, Spack will check the concretization
cache prior to running the solver. If a previous request to solve a given
problem is present in the cache, Spack will load the concrete specs and other
solver data from the cache rather than running the solver. Specs not previously
concretized will be added to the cache on a successful solve. The cache additionally
holds solver statistics, so commands like ``spack solve`` will still return information
about the run that produced a given solver result.
This cache is a subcache of the :ref:`Misc Cache` and as such will be cleaned when the Misc
Cache is cleaned.
When ``false`` or omitted, all concretization requests are performed from scratch.
----------------------------
``concretization_cache:url``
----------------------------
Path to the location where Spack will root the concretization cache. Currently this only supports
paths on the local filesystem.
Default location is under the :ref:`Misc Cache` at: ``$misc_cache/concretization``
------------------------------------
``concretization_cache:entry_limit``
------------------------------------
Sets a limit on the number of concretization results that Spack will cache. The limit is evaluated
after each concretization run; if Spack has stored more results than the limit allows, the
oldest concretization results are pruned until 10% of the limit has been removed.
Setting this value to 0 disables the automatic pruning. It is expected users will be
responsible for maintaining this cache.
-----------------------------------
``concretization_cache:size_limit``
-----------------------------------
Sets a limit on the size of the concretization cache in bytes. The limit is evaluated
after each concretization run; if the cache has grown larger than the limit allows, the
oldest concretization results are pruned until 10% of the limit has been removed.
Setting this value to 0 disables the automatic pruning. It is expected users will be
responsible for maintaining this cache.
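
Putting the options above together, a hedged example of what a ``config`` section enabling the cache might look like; the path is a placeholder, and the limits echo the defaults that appear in the implementation later in this diff (1000 entries, 3e8 bytes):

```yaml
config:
  concretization_cache:
    enable: true
    url: /path/to/concretization-cache  # local filesystem paths only, for now
    entry_limit: 1000                   # 0 disables count-based pruning
    size_limit: 300000000               # bytes; 0 disables size-based pruning
```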

@@ -1,13 +1,13 @@
sphinx==8.2.1
sphinx==8.2.3
sphinxcontrib-programoutput==0.18
sphinx_design==0.6.1
sphinx-rtd-theme==3.0.2
python-levenshtein==0.26.1
python-levenshtein==0.27.1
docutils==0.21.2
pygments==2.19.1
urllib3==2.3.0
pytest==8.3.4
isort==6.0.0
pytest==8.3.5
isort==6.0.1
black==25.1.0
flake8==7.1.2
mypy==1.11.1

@@ -7,6 +7,7 @@
import fnmatch
import glob
import hashlib
import io
import itertools
import numbers
import os
@@ -20,6 +21,7 @@
from contextlib import contextmanager
from itertools import accumulate
from typing import (
IO,
Callable,
Deque,
Dict,
@@ -2454,26 +2456,69 @@ class WindowsSimulatedRPath:
and vice versa.
"""
def __init__(self, package, link_install_prefix=True):
def __init__(
self,
package,
base_modification_prefix: Optional[Union[str, pathlib.Path]] = None,
link_install_prefix: bool = True,
):
"""
Args:
package (spack.package_base.PackageBase): Package requiring links
base_modification_prefix (str|pathlib.Path): Path representation indicating
the root directory in which to establish the simulated rpath, ie where the
symlinks that comprise the "rpath" behavior will be installed.
Note: This is a mutually exclusive option with `link_install_prefix`; using
both is an error.
Default: None
link_install_prefix (bool): Link against package's own install or stage root.
Packages that run their own executables during build and require rpaths to
the build directory during build time require this option. Default: install
the build directory during build time require this option.
Default: install
root
Note: This is a mutually exclusive option with `base_modification_prefix`; using
both is an error.
"""
self.pkg = package
self._addl_rpaths = set()
self._addl_rpaths: set[str] = set()
if link_install_prefix and base_modification_prefix:
raise RuntimeError(
"Invalid combination of arguments given to WindowsSimulated RPath.\n"
"Select either `link_install_prefix` to create an install prefix rpath"
" or specify a `base_modification_prefix` for any other link type. "
"Specifying both arguments is invalid."
)
if not (link_install_prefix or base_modification_prefix):
raise RuntimeError(
"Insufficient arguments given to WindowsSimulatedRpath.\n"
"WindowsSimulatedRPath requires one of link_install_prefix"
" or base_modification_prefix to be specified."
" Neither was provided."
)
self.link_install_prefix = link_install_prefix
self._additional_library_dependents = set()
if base_modification_prefix:
self.base_modification_prefix = pathlib.Path(base_modification_prefix)
else:
self.base_modification_prefix = pathlib.Path(self.pkg.prefix)
self._additional_library_dependents: set[pathlib.Path] = set()
if not self.link_install_prefix:
tty.debug(f"Generating rpath for non install context: {base_modification_prefix}")
@property
def library_dependents(self):
"""
Set of directories where package binaries/libraries are located.
"""
return set([pathlib.Path(self.pkg.prefix.bin)]) | self._additional_library_dependents
base_pths = set()
if self.link_install_prefix:
base_pths.add(pathlib.Path(self.pkg.prefix.bin))
base_pths |= self._additional_library_dependents
return base_pths
def add_library_dependent(self, *dest):
"""
@@ -2489,6 +2534,12 @@ def add_library_dependent(self, *dest):
new_pth = pathlib.Path(pth).parent
else:
new_pth = pathlib.Path(pth)
path_is_in_prefix = new_pth.is_relative_to(self.base_modification_prefix)
if not path_is_in_prefix:
raise RuntimeError(
f"Attempting to generate rpath symlink out of rpath context:\
{str(self.base_modification_prefix)}"
)
self._additional_library_dependents.add(new_pth)
@property
@@ -2577,6 +2628,33 @@ def establish_link(self):
self._link(library, lib_dir)
def make_package_test_rpath(pkg, test_dir: Union[str, pathlib.Path]):
"""Establishes a temp Windows simulated rpath for the pkg in the testing directory
so an executable can test the libraries/executables with proper access
to dependent DLLs.
Note: this is a no-op on all other platforms besides Windows
Args:
pkg (spack.package_base.PackageBase): the package for which the rpath should be computed
test_dir: the testing directory in which we should construct an rpath
"""
# link_install_prefix as false ensures we're not linking into the install prefix
mini_rpath = WindowsSimulatedRPath(pkg, link_install_prefix=False)
# add the testing directory as a location to install rpath symlinks
mini_rpath.add_library_dependent(test_dir)
# check for whether build_directory is available, if not
# assume the stage root is the build dir
build_dir_attr = getattr(pkg, "build_directory", None)
build_directory = build_dir_attr if build_dir_attr else pkg.stage.path
# add the build dir & build dir bin
mini_rpath.add_rpath(os.path.join(build_directory, "bin"))
mini_rpath.add_rpath(os.path.join(build_directory))
# construct rpath
mini_rpath.establish_link()
@system_path_filter
@memoized
def can_access_dir(path):
@@ -2805,6 +2883,20 @@ def keep_modification_time(*filenames):
os.utime(f, (os.path.getatime(f), mtime))
@contextmanager
def temporary_file_position(stream):
orig_pos = stream.tell()
yield
stream.seek(orig_pos)
@contextmanager
def current_file_position(stream: IO[str], loc: int, relative_to=io.SEEK_CUR):
with temporary_file_position(stream):
stream.seek(loc, relative_to)
yield
@contextmanager
def temporary_dir(
suffix: Optional[str] = None, prefix: Optional[str] = None, dir: Optional[str] = None

@@ -13,6 +13,18 @@
__version__ = "1.0.0.dev0"
spack_version = __version__
#: The current Package API version implemented by this version of Spack. The Package API defines
#: the Python interface for packages as well as the layout of package repositories. The minor
#: version is incremented when the package API is extended in a backwards-compatible way. The major
#: version is incremented upon breaking changes. This version is changed independently from the
#: Spack version.
package_api_version = (1, 0)
#: The minimum Package API version that this version of Spack is compatible with. This should
#: always be a tuple of the form ``(major, 0)``, since compatibility with vX.Y implies
#: compatibility with vX.0.
min_package_api_version = (1, 0)
def __try_int(v):
try:
@@ -79,4 +91,6 @@ def get_short_version() -> str:
"get_version",
"get_spack_commit",
"get_short_version",
"package_api_version",
"min_package_api_version",
]

@@ -1234,10 +1234,6 @@ def _make_runnable(self, dep: spack.spec.Spec, env: EnvironmentModifications):
if os.path.isdir(bin_dir):
env.prepend_path("PATH", bin_dir)
for cp_dir in spack.build_systems.cmake.get_cmake_prefix_path(dep.package):
env.append_path("CMAKE_PREFIX_PATH", cp_dir)
env.prune_duplicate_paths("CMAKE_PREFIX_PATH")
def _setup_pkg_and_run(
serialized_pkg: "spack.subprocess_context.PackageInstallContext",

@@ -215,6 +215,7 @@ def create_external_pruner() -> Callable[[spack.spec.Spec], RebuildDecision]:
"""Return a filter that prunes external specs"""
def rebuild_filter(s: spack.spec.Spec) -> RebuildDecision:
print(s.name, "external:", s.external)
if not s.external:
return RebuildDecision(True, "not external")
return RebuildDecision(False, "external spec")

@@ -16,6 +16,8 @@
import spack.concretize
import spack.config
import spack.environment as ev
import spack.hooks
import spack.hooks.report
import spack.paths
import spack.report
import spack.spec
@@ -329,13 +331,10 @@ def install(parser, args):
arguments.sanitize_reporter_options(args)
def reporter_factory(specs):
if args.log_format is None:
return lang.nullcontext()
return spack.report.build_context_manager(
reporter=args.reporter(), filename=report_filename(args, specs=specs), specs=specs
)
# TODO: This is hacky as hell
if args.log_format is not None:
spack.hooks.report.reporter = args.reporter()
spack.hooks.report.report_file = args.log_file
install_kwargs = install_kwargs_from_args(args)
@@ -346,9 +345,9 @@ def reporter_factory(specs):
try:
if env:
install_with_active_env(env, args, install_kwargs, reporter_factory)
install_with_active_env(env, args, install_kwargs)
else:
install_without_active_env(args, install_kwargs, reporter_factory)
install_without_active_env(args, install_kwargs)
except InstallError as e:
if args.show_log_on_error:
_dump_log_on_error(e)
@@ -382,7 +381,7 @@ def _maybe_add_and_concretize(args, env, specs):
env.write(regenerate=False)
def install_with_active_env(env: ev.Environment, args, install_kwargs, reporter_factory):
def install_with_active_env(env: ev.Environment, args, install_kwargs):
specs = spack.cmd.parse_specs(args.spec)
# The following two commands are equivalent:
@@ -416,8 +415,7 @@ def install_with_active_env(env: ev.Environment, args, install_kwargs, reporter_
install_kwargs["overwrite"] = [spec.dag_hash() for spec in specs_to_install]
try:
with reporter_factory(specs_to_install):
env.install_specs(specs_to_install, **install_kwargs)
env.install_specs(specs_to_install, **install_kwargs)
finally:
if env.views:
with env.write_transaction():
@@ -461,18 +459,17 @@ def concrete_specs_from_file(args):
return result
def install_without_active_env(args, install_kwargs, reporter_factory):
def install_without_active_env(args, install_kwargs):
concrete_specs = concrete_specs_from_cli(args, install_kwargs) + concrete_specs_from_file(args)
if len(concrete_specs) == 0:
tty.die("The `spack install` command requires a spec to install.")
with reporter_factory(concrete_specs):
if args.overwrite:
require_user_confirmation_for_overwrite(concrete_specs, args)
install_kwargs["overwrite"] = [spec.dag_hash() for spec in concrete_specs]
if args.overwrite:
require_user_confirmation_for_overwrite(concrete_specs, args)
install_kwargs["overwrite"] = [spec.dag_hash() for spec in concrete_specs]
installs = [s.package for s in concrete_specs]
install_kwargs["explicit"] = [s.dag_hash() for s in concrete_specs]
builder = PackageInstaller(installs, **install_kwargs)
builder.install()
installs = [s.package for s in concrete_specs]
install_kwargs["explicit"] = [s.dag_hash() for s in concrete_specs]
builder = PackageInstaller(installs, **install_kwargs)
builder.install()

@@ -389,6 +389,7 @@ def create_in_dir(
# dev paths in this environment to refer to their original
# locations.
_rewrite_relative_dev_paths_on_relocation(env, init_file_dir)
_rewrite_relative_repos_paths_on_relocation(env, init_file_dir)
return env
@@ -405,8 +406,8 @@ def _rewrite_relative_dev_paths_on_relocation(env, init_file_dir):
dev_path = substitute_path_variables(entry["path"])
expanded_path = spack.util.path.canonicalize_path(dev_path, default_wd=init_file_dir)
# Skip if the expanded path is the same (e.g. when absolute)
if dev_path == expanded_path:
# Skip if the substituted and expanded path is the same (e.g. when absolute)
if entry["path"] == expanded_path:
continue
tty.debug("Expanding develop path for {0} to {1}".format(name, expanded_path))
@@ -421,6 +422,34 @@ def _rewrite_relative_dev_paths_on_relocation(env, init_file_dir):
env._re_read()
def _rewrite_relative_repos_paths_on_relocation(env, init_file_dir):
"""When initializing the environment from a manifest file and we plan
to store the environment in a different directory, we have to rewrite
relative repo paths to absolute ones and expand environment variables."""
with env:
repos_specs = spack.config.get("repos", default={}, scope=env.scope_name)
if not repos_specs:
return
for i, entry in enumerate(repos_specs):
repo_path = substitute_path_variables(entry)
expanded_path = spack.util.path.canonicalize_path(repo_path, default_wd=init_file_dir)
# Skip if the substituted and expanded path is the same (e.g. when absolute)
if entry == expanded_path:
continue
tty.debug("Expanding repo path for {0} to {1}".format(entry, expanded_path))
repos_specs[i] = expanded_path
spack.config.set("repos", repos_specs, scope=env.scope_name)
env.repos_specs = None
# If we changed the environment's spack.yaml scope, that will not be reflected
# in the manifest that we read
env._re_read()
def environment_dir_from_name(name: str, exists_ok: bool = True) -> str:
"""Returns the directory associated with a named environment.

@@ -27,6 +27,7 @@
class _HookRunner:
#: Order in which hooks are executed
HOOK_ORDER = [
"spack.hooks.report",
"spack.hooks.module_file_generation",
"spack.hooks.licensing",
"spack.hooks.sbang",
@@ -67,3 +68,6 @@ def __call__(self, *args, **kwargs):
pre_uninstall = _HookRunner("pre_uninstall")
post_uninstall = _HookRunner("post_uninstall")
pre_installer = _HookRunner("pre_installer")
post_installer = _HookRunner("post_installer")

@@ -0,0 +1,263 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Hooks to produce reports of spec installations"""
import collections
import gzip
import os
import time
import traceback
import llnl.util.filesystem as fs
import spack.build_environment
import spack.paths
import spack.util.spack_json as sjson
reporter = None
report_file = None
Property = collections.namedtuple("Property", ["name", "value"])
class Record(dict):
def __getattr__(self, name):
# only called if no attribute exists
if name in self:
return self[name]
raise AttributeError(f"RequestRecord for {self.name} has no attribute {name}")
def __setattr__(self, name, value):
if name.startswith("_"):
super().__setattr__(name, value)
else:
self[name] = value
class RequestRecord(Record):
def __init__(self, spec):
super().__init__()
self.name = spec.name
self.errors = None
self.nfailures = None
self.npackages = None
self.time = None
self.timestamp = time.strftime("%a, %d %b %Y %H:%M:%S", time.gmtime())
self.properties = [
Property("architecture", spec.architecture),
Property("compiler", spec.compiler),
]
self.packages = []
self._seen = set()
def append_record(self, record, key):
self.packages.append(record)
self._seen.add(key)
def seen(self, key):
return key in self._seen
def summarize(self):
self.npackages = len(self.packages)
self.nfailures = len([r for r in self.packages if r.result == "failure"])
self.nerrors = len([r for r in self.packages if r.result == "error"])
self.time = sum(float(r.elapsed_time or 0.0) for r in self.packages)
class SpecRecord(Record):
pass
class InstallRecord(SpecRecord):
def __init__(self, spec):
super().__init__()
self._spec = spec
self._package = spec.package
self._start_time = time.time()
self.name = spec.name
self.id = spec.dag_hash()
self.elapsed_time = None
self.result = None
self.message = None
self.installed_from_binary_cache = None
def fetch_log(self):
try:
if os.path.exists(self._package.install_log_path):
stream = gzip.open(self._package.install_log_path, "rt", encoding="utf-8")
else:
stream = open(self._package.log_path, encoding="utf-8")
with stream as f:
return f.read()
except OSError:
return f"Cannot open log for {self._spec.cshort_spec}"
def fetch_time(self):
try:
with open(self._package.times_log_path, "r", encoding="utf-8") as f:
data = sjson.load(f.read())
return data["total"]
except Exception:
return None
def skip(self, msg):
self.result = "skipped"
self.elapsed_time = 0.0
self.message = msg
def succeed(self):
self.result = "success"
self.stdout = self.fetch_log()
self.installed_from_binary_cache = self._package.installed_from_binary_cache
self.elapsed_time = self.fetch_time()
def fail(self, exc):
if isinstance(exc, spack.build_environment.InstallError):
self.result = "failure"
self.message = exc.message or "Installation failure"
self.exception = exc.traceback
else:
self.result = "error"
self.message = str(exc) or "Unknown error"
self.exception = traceback.format_exc()
self.stdout = self.fetch_log() + self.message
requests = {}
def pre_installer(specs):
global requests
for root in specs:
request = RequestRecord(root)
requests[root.dag_hash()] = request
for dep in filter(lambda x: x.installed, root.traverse()):
record = InstallRecord(dep)
record.skip(msg="Spec already installed")
request.append_record(record, dep.dag_hash())
def post_installer(specs, hashes_to_failures):
global requests
global report_file
global reporter
try:
for root in specs:
request = requests[root.dag_hash()]
# Associate all dependency jobs with this request
for dep in root.traverse():
if request.seen(dep.dag_hash()):
continue # Already handled
record = InstallRecord(dep)
if dep.dag_hash() in hashes_to_failures:
record.fail(hashes_to_failures[dep.dag_hash()])
elif dep.installed:
record.succeed()
else:
# This package was never reached because of an earlier failure
continue
request.append_record(record, dep.dag_hash())
# Aggregate request-level data
request.summarize()
# Write the actual report
if not report_file:
basename = specs[0].format("test-{name}-{version}-{hash}.xml")
dirname = os.path.join(spack.paths.reports_path, "junit")
fs.mkdirp(dirname)
report_file = os.path.join(dirname, basename)
if reporter:
reporter.build_report(report_file, specs=list(requests.values()))
finally:
# Clean up after ourselves
requests = {}
reporter = None
report_file = None
# This is not thread safe, but that should be ok
# We only have one top-level thread launching build requests, and all parallelism
# is between the jobs of different requests
# requests: Dict[str, RequestRecord] = {}
# specs: Dict[str, InstallRecord] = {}
# def pre_installer(specs):
# global requests
# global specs
# for spec in specs:
# record = RequestRecord(spec)
# requests[spec.dag_hash()] = record
# for dep in filter(lambda x: x.installed, spec.traverse()):
# spec_record = InstallRecord(dep)
# spec_record.elapsed_time = "0.0"
# spec_record.result = "skipped"
# spec_record.message = "Spec already installed"
# specs[dep.dag_hash()] = spec_record
# def pre_install(spec):
# global specs
# specs[spec.dag_hash()] = InstallRecord(spec)
# def post_install(spec, explicit: bool):
# global specs
# record = specs[spec.dag_hash()]
# record.result = "success"
# record.stdout = record.fetch_log()
# record.installed_from_binary_cache = record._package.installed_from_binary_cache
# record.elapsed_time = time.time() - record._start_time
# def post_failure(spec, error):
# global specs
# record = specs[spec.dag_hash()]
# if isinstance(error, spack.build_environment.InstallError):
# record.result = "failure"
# record.message = exc.message or "Installation failure"
# record.exception = exc.traceback
# else:
# record.result = "error"
# record.message = str(exc) or "Unknown error"
# record.exception = traceback.format_exc()
# record.stdout = record.fetch_log() + record.message
# record.elapsed_time = time.time() - record._start_time
# def post_installer(specs):
# global requests
# global specs
# global reporter
# global report_file
# for spec in specs:
# # Find all associated spec records
# request_record = requests[spec.dag_hash()]
# for dep in spec.traverse(root=True):
# spec_record = specs[dep.dag_hash()]
# request_record.records.append(spec_record)
# # Aggregate statistics
# request_record.npackages = len(request_record.records)
# request_record.nfailures = len([r for r in request_record.records if r.result == "failure"])
# request_record.errors = len([r for r in request_record.records if r.result == "error"])
# request_record.time = sum(float(r.elapsed_time) for r in request_record.records)
# # Write the actual report
# filename = report_file or specs[0].name
# reporter.build_report(filename, specs=specs)
# # Clean up after ourselves
# requests = {}
# specs = {}

@@ -2014,11 +2014,13 @@ def _install_action(self, task: Task) -> InstallAction:
def install(self) -> None:
"""Install the requested package(s) and or associated dependencies."""
spack.hooks.pre_installer([r.pkg.spec for r in self.build_requests])
self._init_queue()
fail_fast_err = "Terminating after first install failure"
single_requested_spec = len(self.build_requests) == 1
failed_build_requests = []
failed_tasks = [] # self.failed tracks dependents of failed tasks, here only failures
install_status = InstallStatus(len(self.build_pq))
@@ -2171,13 +2173,24 @@ def install(self) -> None:
except KeyboardInterrupt as exc:
# The build has been terminated with a Ctrl-C so terminate
# regardless of the number of remaining specs.
failed_tasks.append((pkg, exc))
tty.error(
f"Failed to install {pkg.name} due to " f"{exc.__class__.__name__}: {str(exc)}"
)
hashes_to_failures = {pkg.spec.dag_hash(): exc for pkg, exc in failed_tasks}
spack.hooks.post_installer(
[r.pkg.spec for r in self.build_requests], hashes_to_failures
)
print("DDDDDD")
raise
except binary_distribution.NoChecksumException as exc:
if task.cache_only:
failed_tasks.append((pkg, exc))
hashes_to_failures = {pkg.spec.dag_hash(): exc for pkg, exc in failed_tasks}
spack.hooks.post_installer(
[r.pkg.spec for r in self.build_requests], hashes_to_failures
)
raise
# Checking hash on downloaded binary failed.
@@ -2192,6 +2205,7 @@ def install(self) -> None:
except (Exception, SystemExit) as exc:
self._update_failed(task, True, exc)
failed_tasks.append((pkg, exc))
# Best effort installs suppress the exception and mark the
# package as a failure.
@@ -2204,8 +2218,14 @@ def install(self) -> None:
f"Failed to install {pkg.name} due to "
f"{exc.__class__.__name__}: {str(exc)}"
)
# Terminate if requested to do so on the first failure.
if self.fail_fast:
hashes_to_failures = {pkg.spec.dag_hash(): exc for pkg, exc in failed_tasks}
spack.hooks.post_installer(
[r.pkg.spec for r in self.build_requests], hashes_to_failures
)
print("AAAAAAA")
raise spack.error.InstallError(
f"{fail_fast_err}: {str(exc)}", pkg=pkg
) from exc
@@ -2213,8 +2233,15 @@ def install(self) -> None:
# Terminate when a single build request has failed, or summarize errors later.
if task.is_build_request:
if single_requested_spec:
hashes_to_failures = {
pkg.spec.dag_hash(): exc for pkg, exc in failed_tasks
}
spack.hooks.post_installer(
[r.pkg.spec for r in self.build_requests], hashes_to_failures
)
print("BBBBB")
raise
failed_build_requests.append((pkg, pkg_id, str(exc)))
failed_build_requests.append((pkg, pkg_id, exc))
finally:
# Remove the install prefix if anything went wrong during
@@ -2238,9 +2265,13 @@ def install(self) -> None:
if request.install_args.get("install_package") and request.pkg_id not in self.installed
]
hashes_to_failures = {pkg.spec.dag_hash(): exc for pkg, exc in failed_tasks}
spack.hooks.post_installer([r.pkg.spec for r in self.build_requests], hashes_to_failures)
print("CCCCC", failed_build_requests)
if failed_build_requests or missing:
for _, pkg_id, err in failed_build_requests:
tty.error(f"{pkg_id}: {err}")
tty.error(f"{pkg_id}: {str(err)}")
for _, pkg_id in missing:
tty.error(f"{pkg_id}: Package was not installed")

@@ -125,9 +125,10 @@ def windows_establish_runtime_linkage(self):
# Spack should in general not modify things it has not installed
# we can reasonably expect externals to have their link interface properly established
if sys.platform == "win32" and not self.spec.external:
self.win_rpath.add_library_dependent(*self.win_add_library_dependent())
self.win_rpath.add_rpath(*self.win_add_rpath())
self.win_rpath.establish_link()
win_rpath = fsys.WindowsSimulatedRPath(self)
win_rpath.add_library_dependent(*self.win_add_library_dependent())
win_rpath.add_rpath(*self.win_add_rpath())
win_rpath.establish_link()
#: Registers which are the detectable packages, by repo and package name
@@ -742,7 +743,6 @@ def __init__(self, spec):
# Set up timing variables
self._fetch_time = 0.0
self.win_rpath = fsys.WindowsSimulatedRPath(self)
super().__init__()
def __getitem__(self, key: str) -> "PackageBase":

@@ -108,6 +108,8 @@ def _get_user_cache_path():
#: transient caches for Spack data (virtual cache, patch sha256 lookup, etc.)
default_misc_cache_path = os.path.join(user_cache_path, "cache")
#: concretization cache for Spack concretizations
default_conc_cache_path = os.path.join(default_misc_cache_path, "concretization")
# Below paths pull configuration from the host environment.
#

@@ -32,6 +32,7 @@
import llnl.util.tty as tty
from llnl.util.filesystem import working_dir
import spack
import spack.caches
import spack.config
import spack.error
@@ -49,6 +50,8 @@
#: Package modules are imported as spack.pkg.<repo-namespace>.<pkg-name>
ROOT_PYTHON_NAMESPACE = "spack.pkg"
_API_REGEX = re.compile(r"^v(\d+)\.(\d+)$")
def python_package_for_repo(namespace):
"""Returns the full namespace of a repository, given its relative one
@@ -909,19 +912,52 @@ def __reduce__(self):
return RepoPath.unmarshal, self.marshal()
def _parse_package_api_version(
config: Dict[str, Any],
min_api: Tuple[int, int] = spack.min_package_api_version,
max_api: Tuple[int, int] = spack.package_api_version,
) -> Tuple[int, int]:
api = config.get("api")
if api is None:
package_api = (1, 0)
else:
if not isinstance(api, str):
raise BadRepoError(f"Invalid Package API version '{api}'. Must be of the form vX.Y")
api_match = _API_REGEX.match(api)
if api_match is None:
raise BadRepoError(f"Invalid Package API version '{api}'. Must be of the form vX.Y")
package_api = (int(api_match.group(1)), int(api_match.group(2)))
if min_api <= package_api <= max_api:
return package_api
min_str = ".".join(str(i) for i in min_api)
max_str = ".".join(str(i) for i in max_api)
curr_str = ".".join(str(i) for i in package_api)
raise BadRepoError(
f"Package API v{curr_str} is not supported by this version of Spack ("
f"must be between v{min_str} and v{max_str})"
)
class Repo:
"""Class representing a package repository in the filesystem.
Each package repository must have a top-level configuration file
called `repo.yaml`.
Each package repository must have a top-level configuration file called `repo.yaml`.
Currently, `repo.yaml` must define:
It contains the following keys:
`namespace`:
A Python namespace where the repository's packages should live.
`subdirectory`:
An optional subdirectory name where packages are placed
`api`:
A string of the form vX.Y that indicates the Package API version. The default is "v1.0".
For the repo to be compatible with the current version of Spack, the version must be
greater than or equal to :py:data:`spack.min_package_api_version` and less than or equal to
:py:data:`spack.package_api_version`.
"""
def __init__(
@@ -958,7 +994,7 @@ def check(condition, msg):
f"{os.path.join(root, repo_config_name)} must define a namespace.",
)
self.namespace = config["namespace"]
self.namespace: str = config["namespace"]
check(
re.match(r"[a-zA-Z][a-zA-Z0-9_.]+", self.namespace),
f"Invalid namespace '{self.namespace}' in repo '{self.root}'. "
@@ -971,12 +1007,14 @@ def check(condition, msg):
# Keep name components around for checking prefixes.
self._names = self.full_namespace.split(".")
packages_dir = config.get("subdirectory", packages_dir_name)
packages_dir: str = config.get("subdirectory", packages_dir_name)
self.packages_path = os.path.join(self.root, packages_dir)
check(
os.path.isdir(self.packages_path), f"No directory '{packages_dir}' found in '{root}'"
)
self.package_api = _parse_package_api_version(config)
# Class attribute overrides by package name
self.overrides = overrides or {}
@@ -1026,7 +1064,7 @@ def is_prefix(self, fullname: str) -> bool:
parts = fullname.split(".")
return self._names[: len(parts)] == parts
def _read_config(self) -> Dict[str, str]:
def _read_config(self) -> Dict[str, Any]:
"""Check for a YAML config file in this db's root directory."""
try:
with open(self.config_file, encoding="utf-8") as reponame_file:
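
Based on the docstring above, a complete `repo.yaml` pinned to the current Package API might look like this (the namespace is a placeholder):

```yaml
repo:
  namespace: my_custom_repo  # Python namespace for the repo's packages
  subdirectory: packages     # optional subdirectory where packages live
  api: v1.0                  # optional; v1.0 is assumed when omitted
```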

@@ -217,6 +217,7 @@ def build_report_for_package(self, report_dir, package, duration):
nerrors = len(errors)
if nerrors > 0:
print("NERRORS")
self.success = False
if phase == "configure":
report_data[phase]["status"] = 1
@@ -410,6 +411,7 @@ def concretization_report(self, report_dir, msg):
self.current_package_name = self.base_buildname
self.upload(output_filename)
self.success = False
print("CONCRETIZATION")
self.finalize_report()
def initialize_report(self, report_dir):

@@ -58,6 +58,15 @@
{"type": "string"}, # deprecated
]
},
"concretization_cache": {
"type": "object",
"properties": {
"enable": {"type": "boolean"},
"url": {"type": "string"},
"entry_limit": {"type": "integer", "minimum": 0},
"size_limit": {"type": "integer", "minimum": 0},
},
},
"install_hash_length": {"type": "integer", "minimum": 1},
"install_path_scheme": {"type": "string"}, # deprecated
"build_stage": {

@@ -5,9 +5,12 @@
import collections.abc
import copy
import enum
import errno
import functools
import hashlib
import io
import itertools
import json
import os
import pathlib
import pprint
@@ -17,12 +20,25 @@
import typing
import warnings
from contextlib import contextmanager
from typing import Callable, Dict, Iterator, List, NamedTuple, Optional, Set, Tuple, Type, Union
from typing import (
IO,
Callable,
Dict,
Iterator,
List,
NamedTuple,
Optional,
Set,
Tuple,
Type,
Union,
)
import archspec.cpu
import llnl.util.lang
import llnl.util.tty as tty
from llnl.util.filesystem import current_file_position
from llnl.util.lang import elide_list
import spack
@@ -34,15 +50,18 @@
import spack.deptypes as dt
import spack.environment as ev
import spack.error
import spack.hash_types as ht
import spack.package_base
import spack.package_prefs
import spack.patch
import spack.paths
import spack.platforms
import spack.repo
import spack.solver.splicing
import spack.spec
import spack.store
import spack.util.crypto
import spack.util.hash
import spack.util.libc
import spack.util.module_cmd as md
import spack.util.path
@@ -51,6 +70,7 @@
import spack.version as vn
import spack.version.git_ref_lookup
from spack import traverse
from spack.util.file_cache import FileCache
from .core import (
AspFunction,
@@ -538,6 +558,365 @@ def format_unsolved(unsolved_specs):
msg += "\n\t(No candidate specs from solver)"
return msg
def to_dict(self, test: bool = False) -> dict:
"""Produces dict representation of Result object
Does not include anything related to unsatisfiability as we
are only interested in storing satisfiable results
"""
serial_node_arg = (
lambda node_dict: f"""{{"id": "{node_dict.id}", "pkg": "{node_dict.pkg}"}}"""
)
spec_hash_type = ht.process_hash if test else ht.dag_hash
ret = dict()
ret["asp"] = self.asp
ret["criteria"] = self.criteria
ret["optimal"] = self.optimal
ret["warnings"] = self.warnings
ret["nmodels"] = self.nmodels
ret["abstract_specs"] = [str(x) for x in self.abstract_specs]
ret["satisfiable"] = self.satisfiable
serial_answers = []
for answer in self.answers:
serial_answer = answer[:2]
serial_answer_dict = {}
for node, spec in answer[2].items():
serial_answer_dict[serial_node_arg(node)] = spec.to_dict(hash=spec_hash_type)
serial_answer = serial_answer + (serial_answer_dict,)
serial_answers.append(serial_answer)
ret["answers"] = serial_answers
ret["specs_by_input"] = {}
input_specs = {} if not self.specs_by_input else self.specs_by_input
for input, spec in input_specs.items():
ret["specs_by_input"][str(input)] = spec.to_dict(hash=spec_hash_type)
return ret
@staticmethod
def from_dict(obj: dict):
"""Returns Result object from compatible dictionary"""
def _dict_to_node_argument(dict):
id = dict["id"]
pkg = dict["pkg"]
return NodeArgument(id=id, pkg=pkg)
def _str_to_spec(spec_str):
return spack.spec.Spec(spec_str)
def _dict_to_spec(spec_dict):
loaded_spec = spack.spec.Spec.from_dict(spec_dict)
_ensure_external_path_if_external(loaded_spec)
spack.spec.Spec.ensure_no_deprecated(loaded_spec)
return loaded_spec
asp = obj.get("asp")
spec_list = obj.get("abstract_specs")
if not spec_list:
raise RuntimeError("Invalid json for concretization Result object")
if spec_list:
spec_list = [_str_to_spec(x) for x in spec_list]
result = Result(spec_list, asp)
result.criteria = obj.get("criteria")
result.optimal = obj.get("optimal")
result.warnings = obj.get("warnings")
result.nmodels = obj.get("nmodels")
result.satisfiable = obj.get("satisfiable")
result._unsolved_specs = []
answers = []
for answer in obj.get("answers", []):
loaded_answer = answer[:2]
answer_node_dict = {}
for node, spec in answer[2].items():
answer_node_dict[_dict_to_node_argument(json.loads(node))] = _dict_to_spec(spec)
loaded_answer.append(answer_node_dict)
answers.append(tuple(loaded_answer))
result.answers = answers
result._concrete_specs_by_input = {}
result._concrete_specs = []
for input, spec in obj.get("specs_by_input", {}).items():
result._concrete_specs_by_input[_str_to_spec(input)] = _dict_to_spec(spec)
result._concrete_specs.append(_dict_to_spec(spec))
return result
class ConcretizationCache:
"""Store for Spack concretization results and statistics
Serializes solver result objects and statistics to json and stores
at a given endpoint in a cache associated by the sha256 of the
asp problem and the involved control files.
"""
def __init__(self, root: Union[str, None] = None):
if not root:
root = spack.config.get(
"config:concretization_cache:url", spack.paths.default_conc_cache_path
)
self.root = pathlib.Path(spack.util.path.canonicalize_path(root))
self._fc = FileCache(self.root)
self._cache_manifest = ".cache_manifest"
self._manifest_queue: List[Tuple[pathlib.Path, int]] = []
def cleanup(self):
"""Prunes the concretization cache according to configured size and entry
count limits. Cleanup is done in FIFO ordering."""
# TODO: determine a better default
entry_limit = spack.config.get("config:concretization_cache:entry_limit", 1000)
bytes_limit = spack.config.get("config:concretization_cache:size_limit", 3e8)
# lock the entire cache as we're removing a lot of data from the
# manifest and cache itself
with self._fc.read_transaction(self._cache_manifest) as f:
count, cache_bytes = self._extract_cache_metadata(f)
if not count or not cache_bytes:
return
entry_count = int(count)
manifest_bytes = int(cache_bytes)
# move beyond the metadata entry
f.readline()
if entry_count > entry_limit and entry_limit > 0:
with self._fc.write_transaction(self._cache_manifest) as (old, new):
# prune the oldest 10% or until we have removed 10% of
# total bytes starting from oldest entry
# TODO: make this configurable?
prune_count = entry_limit // 10
lines_to_prune = f.readlines(prune_count)
for i, line in enumerate(lines_to_prune):
sha, cache_entry_bytes = self._parse_manifest_entry(line)
if sha and cache_entry_bytes:
cache_path = self._cache_path_from_hash(sha)
if self._fc.remove(cache_path):
entry_count -= 1
manifest_bytes -= int(cache_entry_bytes)
else:
tty.warn(
f"Invalid concretization cache entry: '{line}' on line: {i+1}"
)
self._write_manifest(f, entry_count, manifest_bytes)
elif manifest_bytes > bytes_limit and bytes_limit > 0:
with self._fc.write_transaction(self._cache_manifest) as (old, new):
# take 10% of current size off
prune_amount = bytes_limit // 10
total_pruned = 0
i = 0
while total_pruned < prune_amount:
sha, manifest_cache_bytes = self._parse_manifest_entry(f.readline())
if sha and manifest_cache_bytes:
entry_bytes = int(manifest_cache_bytes)
cache_path = self.root / sha[:2] / sha
if self._safe_remove(cache_path):
entry_count -= 1
manifest_bytes -= entry_bytes
total_pruned += entry_bytes
else:
tty.warn(
"Invalid concretization cache entry "
f"'{sha} {manifest_cache_bytes}' on line: {i}"
)
i += 1
self._write_manifest(f, entry_count, manifest_bytes)
for cache_dir in self.root.iterdir():
if cache_dir.is_dir() and not any(cache_dir.iterdir()):
self._safe_remove(cache_dir)
def cache_entries(self):
"""Generator producing cache entries"""
for cache_dir in self.root.iterdir():
# ensure component is cache entry directory
# not metadata file
if cache_dir.is_dir():
for cache_entry in cache_dir.iterdir():
if not cache_entry.is_dir():
yield cache_entry
else:
raise RuntimeError(
"Improperly formed concretization cache. "
f"Directory {cache_entry.name} is improperly located "
"within the concretization cache."
)
def _parse_manifest_entry(self, line):
"""Returns the (sha, bytes) pair parsed from a manifest entry line,
or (None, None) for empty or invalid lines."""
if line:
cache_values = line.strip("\n").split(" ")
if len(cache_values) < 2:
tty.warn(f"Invalid cache entry at {line}")
return None, None
return cache_values[0], cache_values[1]
return None, None
def _write_manifest(self, manifest_file, entry_count, entry_bytes):
"""Writes a new concretization cache manifest file.
Arguments:
manifest_file: IO stream opened for reading
and writing, wrapping the manifest file,
with its cursor at calltime positioned
where the manifest should be truncated
entry_count: new total entry count
entry_bytes: new total entry bytes count
"""
persisted_entries = manifest_file.readlines()
manifest_file.truncate(0)
# truncate(0) does not move the cursor; rewind before rewriting
manifest_file.seek(0)
manifest_file.write(f"{entry_count} {entry_bytes}\n")
manifest_file.writelines(persisted_entries)
def _results_from_cache(self, cache_entry_buffer: IO[str]) -> Union[Result, None]:
"""Returns a Results object from the concretizer cache
Reads the cache hit and uses `Result`'s own deserializer
to produce a new Result object
"""
with current_file_position(cache_entry_buffer, 0):
cache_str = cache_entry_buffer.read()
# TODO: Should this be an error if None?
# Same for _stats_from_cache
if cache_str:
cache_entry = json.loads(cache_str)
result_json = cache_entry["results"]
return Result.from_dict(result_json)
return None
def _stats_from_cache(self, cache_entry_buffer: IO[str]) -> Union[List, None]:
"""Returns concretization statistic from the
concretization associated with the cache.
Deserialzes the the json representation of the
statistics covering the cached concretization run
and returns the Python data structures
"""
with current_file_position(cache_entry_buffer, 0):
cache_str = cache_entry_buffer.read()
if cache_str:
return json.loads(cache_str)["statistics"]
return None
def _extract_cache_metadata(self, cache_stream: IO[str]):
"""Extracts and returns cache entry count and bytes count from head of manifest
file"""
# make sure we're always reading from the beginning of the stream
# concretization cache manifest data lives at the top of the file
with current_file_position(cache_stream, 0):
return self._parse_manifest_entry(cache_stream.readline())
def _prefix_digest(self, problem: str) -> Tuple[str, str]:
"""Return the first two characters of, and the full, sha256 of the given asp problem"""
prob_digest = hashlib.sha256(problem.encode()).hexdigest()
prefix = prob_digest[:2]
return prefix, prob_digest
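# For illustration (hypothetical problem string): the digest determines a
# two-level path, so entries shard across up to 256 subdirectories, e.g.
#
#     prefix, digest = cache._prefix_digest('node("zlib").')
#     assert prefix == digest[:2]
#     # entry path: <root>/<prefix>/<digest>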
def _cache_path_from_problem(self, problem: str) -> pathlib.Path:
"""Returns a Path object representing the path to the cache
entry for the given problem"""
prefix, digest = self._prefix_digest(problem)
return pathlib.Path(prefix) / digest
def _cache_path_from_hash(self, hash: str) -> pathlib.Path:
"""Returns a Path object representing the cache entry
corresponding to the given sha256 hash"""
return pathlib.Path(hash[:2]) / hash
def _lock_prefix_from_cache_path(self, cache_path: str):
"""Returns the bit location corresponding to a given cache entry path
for file locking"""
return spack.util.hash.base32_prefix_bits(
spack.util.hash.b32_hash(cache_path), spack.util.crypto.bit_length(sys.maxsize)
)
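# A sketch of the intent (assumed): b32_hash maps the entry path to a stable
# base32 digest, and base32_prefix_bits keeps its leading bit_length(sys.maxsize)
# bits (63 on typical 64-bit builds), yielding a deterministic integer lock
# offset for each cache path.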
def flush_manifest(self):
"""Updates the concretization cache manifest file after a cache write operation
Updates the current byte count and entry counts and writes to the head of the
manifest file"""
manifest_file = self.root / self._cache_manifest
manifest_file.touch(exist_ok=True)
with open(manifest_file, "r+", encoding="utf-8") as f:
# check if manifest is empty
count, cache_bytes = self._extract_cache_metadata(f)
if not count or not cache_bytes:
# cache is unintialized
count = 0
cache_bytes = 0
f.seek(0, io.SEEK_END)
for manifest_update in self._manifest_queue:
entry_path, entry_bytes = manifest_update
count += 1
cache_bytes += entry_bytes
f.write(f"{entry_path.name} {entry_bytes}")
f.seek(0, io.SEEK_SET)
new_stats = f"{int(count)+1} {int(cache_bytes)}\n"
f.write(new_stats)
def _register_cache_update(self, cache_path: pathlib.Path, bytes_written: int):
"""Adds manifest entry to update queue for later updates to the manifest"""
self._manifest_queue.append((cache_path, bytes_written))
def _safe_remove(self, cache_dir: pathlib.Path):
"""Removes cache entries with handling for the case where the entry has been
removed already or there are multiple cache entries in a directory"""
try:
if cache_dir.is_dir():
cache_dir.rmdir()
else:
cache_dir.unlink()
return True
except FileNotFoundError:
# This is acceptable, removal is idempotent
pass
except OSError as e:
if e.errno == errno.ENOTEMPTY:
# there exists another cache entry in this directory, don't clean yet
pass
return False
def store(self, problem: str, result: Result, statistics: List, test: bool = False):
"""Creates entry in concretization cache for problem if none exists,
storing the concretization Result object and statistics in the cache
as serialized json joined as a single file.
Hash membership is computed based on the sha256 of the provided asp
problem.
"""
cache_path = self._cache_path_from_problem(problem)
if self._fc.init_entry(cache_path):
# if an entry for this conc hash exists already, we don't want
# to overwrite it, just exit
tty.debug(f"Cache entry {cache_path} exists, will not be overwritten")
return
with self._fc.write_transaction(cache_path) as (old, new):
if old:
# Entry for this conc hash exists already, do not overwrite
tty.debug(f"Cache entry {cache_path} exists, will not be overwritten")
return
cache_dict = {"results": result.to_dict(test=test), "statistics": statistics}
bytes_written = new.write(json.dumps(cache_dict))
self._register_cache_update(cache_path, bytes_written)
def fetch(self, problem: str) -> Union[Tuple[Result, List], Tuple[None, None]]:
"""Returns the concretization cache result for a lookup based on the given problem.
Checks the concretization cache for the given problem, and either returns the
Python objects cached on disk representing the concretization results and statistics
or returns None if no cache entry was found.
"""
cache_path = self._cache_path_from_problem(problem)
result, statistics = None, None
with self._fc.read_transaction(cache_path) as f:
if f:
result = self._results_from_cache(f)
statistics = self._stats_from_cache(f)
if result and statistics:
tty.debug(f"Concretization cache hit at {str(cache_path)}")
return result, statistics
tty.debug(f"Concretization cache miss at {str(cache_path)}")
return None, None
CONC_CACHE: ConcretizationCache = llnl.util.lang.Singleton(
lambda: ConcretizationCache()
) # type: ignore
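# A minimal round-trip sketch (assuming a fully-formed `problem_repr` string;
# `run_solver` is a hypothetical stand-in for the clingo solve below):
#
#     result, stats = CONC_CACHE.fetch(problem_repr)
#     if result is None:
#         result, stats = run_solver(problem_repr)
#         CONC_CACHE.store(problem_repr, result, stats)
#         CONC_CACHE.flush_manifest()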
def _normalize_packages_yaml(packages_yaml):
normalized_yaml = copy.copy(packages_yaml)
@@ -806,6 +1185,15 @@ def solve(self, setup, specs, reuse=None, output=None, control=None, allow_depre
if sys.platform == "win32":
tty.debug("Ensuring basic dependencies {win-sdk, wgl} available")
spack.bootstrap.core.ensure_winsdk_external_or_raise()
control_files = ["concretize.lp", "heuristic.lp", "display.lp"]
if not setup.concretize_everything:
control_files.append("when_possible.lp")
if using_libc_compatibility():
control_files.append("libc_compatibility.lp")
else:
control_files.append("os_compatibility.lp")
if setup.enable_splicing:
control_files.append("splices.lp")
timer.start("setup")
asp_problem = setup.setup(specs, reuse=reuse, allow_deprecated=allow_deprecated)
@@ -815,123 +1203,133 @@ def solve(self, setup, specs, reuse=None, output=None, control=None, allow_depre
return Result(specs), None, None
timer.stop("setup")
timer.start("load")
# Add the problem instance
self.control.add("base", [], asp_problem)
# Load the file itself
timer.start("cache-check")
timer.start("ordering")
# ensure deterministic output
problem_repr = "\n".join(sorted(asp_problem.split("\n")))
timer.stop("ordering")
parent_dir = os.path.dirname(__file__)
self.control.load(os.path.join(parent_dir, "concretize.lp"))
self.control.load(os.path.join(parent_dir, "heuristic.lp"))
self.control.load(os.path.join(parent_dir, "display.lp"))
if not setup.concretize_everything:
self.control.load(os.path.join(parent_dir, "when_possible.lp"))
full_path = lambda x: os.path.join(parent_dir, x)
abs_control_files = [full_path(x) for x in control_files]
for ctrl_file in abs_control_files:
with open(ctrl_file, "r+", encoding="utf-8") as f:
problem_repr += "\n" + f.read()
# Binary compatibility is based on libc on Linux, and on the os tag elsewhere
if using_libc_compatibility():
self.control.load(os.path.join(parent_dir, "libc_compatibility.lp"))
else:
self.control.load(os.path.join(parent_dir, "os_compatibility.lp"))
if setup.enable_splicing:
self.control.load(os.path.join(parent_dir, "splices.lp"))
result = None
conc_cache_enabled = spack.config.get("config:concretization_cache:enable", True)
if conc_cache_enabled:
result, concretization_stats = CONC_CACHE.fetch(problem_repr)
timer.stop("load")
timer.stop("cache-check")
if not result:
timer.start("load")
# Add the problem instance
self.control.add("base", [], asp_problem)
# Load the files
[self.control.load(lp) for lp in abs_control_files]
timer.stop("load")
# Grounding is the first step in the solve -- it turns our facts
# and first-order logic rules into propositional logic.
timer.start("ground")
self.control.ground([("base", [])])
timer.stop("ground")
# With a grounded program, we can run the solve.
models = []  # stable models if things go well
cores = []  # unsatisfiable cores if they do not
def on_model(model):
models.append((model.cost, model.symbols(shown=True, terms=True)))
solve_kwargs = {
"assumptions": setup.assumptions,
"on_model": on_model,
"on_core": cores.append,
}
if clingo_cffi():
solve_kwargs["on_unsat"] = cores.append
timer.start("solve")
time_limit = spack.config.CONFIG.get("concretizer:timeout", -1)
error_on_timeout = spack.config.CONFIG.get("concretizer:error_on_timeout", True)
# Spack uses 0 to set no time limit, clingo API uses -1
if time_limit == 0:
time_limit = -1
with self.control.solve(**solve_kwargs, async_=True) as handle:
finished = handle.wait(time_limit)
if not finished:
specs_str = ", ".join(llnl.util.lang.elide_list([str(s) for s in specs], 4))
header = f"Spack is taking more than {time_limit} seconds to solve for {specs_str}"
if error_on_timeout:
raise UnsatisfiableSpecError(f"{header}, stopping concretization")
warnings.warn(f"{header}, using the best configuration found so far")
handle.cancel()
timer.start("solve")
time_limit = spack.config.CONFIG.get("concretizer:timeout", -1)
error_on_timeout = spack.config.CONFIG.get("concretizer:error_on_timeout", True)
# Spack uses 0 to set no time limit, clingo API uses -1
if time_limit == 0:
time_limit = -1
with self.control.solve(**solve_kwargs, async_=True) as handle:
finished = handle.wait(time_limit)
if not finished:
specs_str = ", ".join(llnl.util.lang.elide_list([str(s) for s in specs], 4))
header = (
f"Spack is taking more than {time_limit} seconds to solve for {specs_str}"
)
if error_on_timeout:
raise UnsatisfiableSpecError(f"{header}, stopping concretization")
warnings.warn(f"{header}, using the best configuration found so far")
handle.cancel()
solve_result = handle.get()
timer.stop("solve")
solve_result = handle.get()
timer.stop("solve")
# once done, construct the solve result
result = Result(specs)
result.satisfiable = solve_result.satisfiable
if result.satisfiable:
timer.start("construct_specs")
# get the best model
builder = SpecBuilder(specs, hash_lookup=setup.reusable_and_possible)
min_cost, best_model = min(models)
# first check for errors
error_handler = ErrorHandler(best_model, specs)
error_handler.raise_if_errors()
# build specs from spec attributes in the model
spec_attrs = [
(name, tuple(rest)) for name, *rest in extract_args(best_model, "attr")
]
answers = builder.build_specs(spec_attrs)
# add best spec to the results
result.answers.append((list(min_cost), 0, answers))
# get optimization criteria
criteria_args = extract_args(best_model, "opt_criterion")
result.criteria = build_criteria_names(min_cost, criteria_args)
# record the number of models the solver considered
result.nmodels = len(models)
# record the possible dependencies in the solve
result.possible_dependencies = setup.pkgs
timer.stop("construct_specs")
timer.stop()
elif cores:
result.control = self.control
result.cores.extend(cores)
result.raise_if_unsat()
if result.satisfiable and result.unsolved_specs and setup.concretize_everything:
unsolved_str = Result.format_unsolved(result.unsolved_specs)
raise InternalConcretizerError(
"Internal Spack error: the solver completed but produced specs"
" that do not satisfy the request. Please report a bug at "
f"https://github.com/spack/spack/issues\n\t{unsolved_str}"
)
if conc_cache_enabled:
CONC_CACHE.store(problem_repr, result, self.control.statistics, test=setup.tests)
concretization_stats = self.control.statistics
if output.timers:
timer.write_tty()
print()
if output.stats:
print("Statistics:")
pprint.pprint(concretization_stats)
return result, timer, concretization_stats
class ConcreteSpecsByHash(collections.abc.Mapping):
@@ -1373,7 +1771,7 @@ def effect_rules(self):
return
self.gen.h2("Imposed requirements")
for name in sorted(self._effect_cache):
cache = self._effect_cache[name]
for (spec_str, _), (effect_id, requirements) in cache.items():
self.gen.fact(fn.pkg_fact(name, fn.effect_id(effect_id)))
@@ -1426,8 +1824,8 @@ def define_variant(
elif isinstance(values, vt.DisjointSetsOfValues):
union = set()
for sid, s in enumerate(sorted(values.sets)):
for value in sorted(s):
pkg_fact(fn.variant_value_from_disjoint_sets(vid, value, sid))
union.update(s)
values = union
@@ -1608,7 +2006,7 @@ def package_provider_rules(self, pkg):
self.gen.fact(fn.pkg_fact(pkg.name, fn.possible_provider(vpkg_name)))
for when, provided in pkg.provided.items():
for vpkg in sorted(provided):
if vpkg.name not in self.possible_virtuals:
continue
@@ -1623,8 +2021,8 @@ def package_provider_rules(self, pkg):
condition_id = self.condition(
when, required_name=pkg.name, msg="Virtuals are provided together"
)
for set_id, virtuals_together in enumerate(sorted(sets_of_virtuals)):
for name in sorted(virtuals_together):
self.gen.fact(
fn.pkg_fact(pkg.name, fn.provided_together(condition_id, set_id, name))
)
@@ -1734,7 +2132,7 @@ def package_splice_rules(self, pkg):
for map in pkg.variants.values():
for k in map:
filt_match_variants.add(k)
filt_match_variants = sorted(filt_match_variants)
variant_constraints = self._gen_match_variant_splice_constraints(
pkg, cond, spec_to_splice, hash_var, splice_node, filt_match_variants
)
@@ -2264,7 +2662,7 @@ def define_package_versions_and_validate_preferences(
):
"""Declare any versions in specs not declared in packages."""
packages_yaml = spack.config.get("packages")
for pkg_name in sorted(possible_pkgs):
pkg_cls = self.pkg_class(pkg_name)
# All the versions from the corresponding package.py file. Since concepts
@@ -2592,7 +2990,7 @@ def define_variant_values(self):
"""
# Tell the concretizer about possible values from specs seen in spec_clauses().
# We might want to order these facts by pkg and name if we are debugging.
for pkg_name, variant_def_id, value in sorted(self.variant_values_from_specs):
try:
vid = self.variant_ids_by_def_id[variant_def_id]
except KeyError:
@@ -2630,6 +3028,8 @@ def concrete_specs(self):
# Declare as possible parts of specs that are not in package.py
# - Add versions to possible versions
# - Add OS to possible OS's
# is traverse deterministic?
for dep in spec.traverse():
self.possible_versions[dep.name].add(dep.version)
if isinstance(dep.version, vn.GitVersion):
@@ -2867,7 +3267,7 @@ def define_runtime_constraints(self):
recorder.consume_facts()
def literal_specs(self, specs):
for spec in sorted(specs):
self.gen.h2("Spec: %s" % str(spec))
condition_id = next(self._id_counter)
trigger_id = next(self._id_counter)
@@ -3368,7 +3768,7 @@ def consume_facts(self):
# on the available compilers)
self._setup.pkg_version_rules(runtime_pkg)
for imposed_spec, when_spec in sorted(self.runtime_conditions):
msg = f"{when_spec} requires {imposed_spec} at runtime"
_ = self._setup.condition(when_spec, imposed_spec=imposed_spec, msg=msg)
@@ -4225,6 +4625,9 @@ def solve_with_stats(
reusable_specs.extend(self.selector.reusable_specs(specs))
setup = SpackSolverSetup(tests=tests)
output = OutputConfiguration(timers=timers, stats=stats, out=out, setup_only=setup_only)
CONC_CACHE.flush_manifest()
CONC_CACHE.cleanup()
return self.driver.solve(
setup, specs, reuse=reusable_specs, output=output, allow_deprecated=allow_deprecated
)
@@ -4294,6 +4697,9 @@ def solve_in_rounds(
for spec in result.specs:
reusable_specs.extend(spec.traverse())
CONC_CACHE.flush_manifest()
CONC_CACHE.cleanup()
class UnsatisfiableSpecError(spack.error.UnsatisfiableSpecError):
"""There was an issue with the spec that was requested (i.e. a user error)."""

View File

@@ -569,7 +569,6 @@ def test_FetchCacheError_only_accepts_lists_of_errors():
def test_FetchCacheError_pretty_printing_multiple():
e = bindist.FetchCacheError([RuntimeError("Oops!"), TypeError("Trouble!")])
str_e = str(e)
print("'" + str_e + "'")
assert "Multiple errors" in str_e
assert "Error 1: RuntimeError: Oops!" in str_e
assert "Error 2: TypeError: Trouble!" in str_e

View File

@@ -210,7 +210,6 @@ def check_args_contents(cc, args, must_contain, must_not_contain):
"""
with set_env(SPACK_TEST_COMMAND="dump-args"):
cc_modified_args = cc(*args, output=str).strip().split("\n")
for a in must_contain:
assert a in cc_modified_args
for a in must_not_contain:

View File

@@ -347,7 +347,6 @@ def test_get_spec_filter_list(mutable_mock_env_path, mutable_mock_repo):
for key, val in expectations.items():
affected_specs = ci.get_spec_filter_list(e1, touched, dependent_traverse_depth=key)
affected_pkg_names = set([s.name for s in affected_specs])
print(f"{key}: {affected_pkg_names}")
assert affected_pkg_names == val

View File

@@ -214,9 +214,7 @@ def verify_mirror_contents():
if in_env_pkg in p:
found_pkg = True
assert found_pkg, f"Expected to find {in_env_pkg} in {dest_mirror_dir}"
# Install a package and put it in the buildcache
s = spack.concretize.concretize_one(out_env_pkg)

View File

@@ -450,6 +450,8 @@ def just_throw(*args, **kwargs):
content = filename.open().read()
print(content)
# Only libelf error is reported (through libdwarf root spec). libdwarf
# install is skipped and it is not an error.
assert 'tests="1"' in content
@@ -898,7 +900,6 @@ def test_cdash_configure_warning(tmpdir, mock_fetch, install_mockery, capfd):
specfile = "./spec.json"
with open(specfile, "w", encoding="utf-8") as f:
f.write(spec.to_json())
install("--log-file=cdash_reports", "--log-format=cdash", specfile)
# Verify Configure.xml exists with expected contents.
report_dir = tmpdir.join("cdash_reports")

View File

@@ -52,8 +52,7 @@ def test_load_shell(shell, set_command):
mpileaks_spec = spack.concretize.concretize_one("mpileaks")
# Ensure our reference variable is clean.
os.environ["CMAKE_PREFIX_PATH"] = "/hello" + os.pathsep + "/world"
shell_out = load(shell, "mpileaks")
@@ -70,7 +69,7 @@ def extract_value(output, variable):
paths_shell = extract_value(shell_out, "CMAKE_PREFIX_PATH")
# We should've prepended new paths, and keep old ones.
assert paths_shell[-2:] == ["/hello", "/world"]
# All but the last two paths are added by spack load; lookup what packages they're from.
pkgs = [prefix_to_pkg(p) for p in paths_shell[:-2]]

View File

@@ -42,7 +42,7 @@ def mock_pkg_git_repo(git, tmp_path_factory):
repo_dir = root_dir / "builtin.mock"
shutil.copytree(spack.paths.mock_packages_path, str(repo_dir))
repo_cache = spack.util.file_cache.FileCache(str(root_dir / "cache"))
repo_cache = spack.util.file_cache.FileCache(root_dir / "cache")
mock_repo = spack.repo.RepoPath(str(repo_dir), cache=repo_cache)
mock_repo_packages = mock_repo.repos[0].packages_path

View File

@@ -5,9 +5,13 @@
import pytest
import spack.config
import spack.environment as ev
import spack.main
from spack.main import SpackCommand
repo = spack.main.SpackCommand("repo")
env = SpackCommand("env")
def test_help_option():
@@ -33,3 +37,33 @@ def test_create_add_list_remove(mutable_config, tmpdir):
repo("remove", "--scope=site", str(tmpdir))
output = repo("list", "--scope=site", output=str)
assert "mockrepo" not in output
def test_env_repo_path_vars_substitution(
tmpdir, install_mockery, mutable_mock_env_path, monkeypatch
):
"""Test Spack correctly substitues repo paths with environment variables when creating an
environment from a manifest file."""
monkeypatch.setenv("CUSTOM_REPO_PATH", ".")
# setup environment from spack.yaml
envdir = tmpdir.mkdir("env")
with envdir.as_cwd():
with open("spack.yaml", "w", encoding="utf-8") as f:
f.write(
"""\
spack:
specs: []
repos:
- $CUSTOM_REPO_PATH
"""
)
# creating env from manifest file
env("create", "test", "./spack.yaml")
# check that repo path was correctly substituted with the environment variable
current_dir = os.getcwd()
with ev.read("test") as newenv:
repos_specs = spack.config.get("repos", default={}, scope=newenv.scope_name)
assert current_dir in repos_specs

View File

@@ -50,7 +50,7 @@ def test_list_long(capsys):
def test_list_long_with_pytest_arg(capsys):
with capsys.disabled():
output = spack_test("--list-long", cmd_test_py)
assert "unit_test.py::\n" in output
assert "test_list" in output
assert "test_list_with_pytest_arg" in output

View File

@@ -49,7 +49,6 @@ def test_single_file_verify_cmd(tmpdir):
sjson.dump({filepath: data}, f)
results = verify("manifest", "-f", filepath, fail_on_error=False)
assert not results
os.utime(filepath, (0, 0))

View File

@@ -2018,7 +2018,6 @@ def test_git_ref_version_is_equivalent_to_specified_version(self, git_ref):
s = Spec("develop-branch-version@git.%s=develop" % git_ref)
c = spack.concretize.concretize_one(s)
assert git_ref in str(c)
assert s.satisfies("@develop")
assert s.satisfies("@0.1:")
@@ -3255,3 +3254,54 @@ def test_spec_unification(unify, mutable_config, mock_packages):
maybe_fails = pytest.raises if unify is True else llnl.util.lang.nullcontext
with maybe_fails(spack.solver.asp.UnsatisfiableSpecError):
_ = spack.cmd.parse_specs([a_restricted, b], concretize=True)
def test_concretization_cache_roundtrip(use_concretization_cache, monkeypatch, mutable_config):
"""Tests whether we can write the results of a clingo solve to the cache
and load the same spec request from the cache to produce identical specs"""
# Force determinism:
# Solver setup is normally non-deterministic due to non-determinism in
# the ASP solver setup logic generation. The only other inputs to the cache
# keys are the .lp files, which are invariant over the course of this test.
# Memoizing the first setup result forces the same setup to be produced for
# the same specs, which guarantees cache hits, as it removes the only
# non-deterministic element of solver setup for the same spec.
# Basically just a quick and dirty memoization
solver_setup = spack.solver.asp.SpackSolverSetup.setup
def _setup(self, specs, *, reuse=None, allow_deprecated=False):
if not getattr(_setup, "cache_setup", None):
cache_setup = solver_setup(self, specs, reuse=reuse, allow_deprecated=allow_deprecated)
setattr(_setup, "cache_setup", cache_setup)
return getattr(_setup, "cache_setup")
# monkeypatch our forced determinism setup method into solver setup
monkeypatch.setattr(spack.solver.asp.SpackSolverSetup, "setup", _setup)
assert spack.config.get("config:concretization_cache:enable")
# run one standard concretization to populate the cache and the setup method
# memoization
h = spack.concretize.concretize_one("hdf5")
# due to our forced determinism above, we should not be observing
# cache misses, assert that we're not storing any new cache entries
def _ensure_no_store(self, problem: str, result, statistics, test=False):
# always throw; the store path should never be reached on a cache hit
assert False, "Unexpected concretization cache store; cache hit expected"
# Assert that we're actually hitting the cache
cache_fetch = spack.solver.asp.ConcretizationCache.fetch
def _ensure_cache_hits(self, problem: str):
result, statistics = cache_fetch(self, problem)
assert result, "Expected successful concretization cache hit"
assert statistics, "Expected statistics to be non null on cache hit"
return result, statistics
monkeypatch.setattr(spack.solver.asp.ConcretizationCache, "store", _ensure_no_store)
monkeypatch.setattr(spack.solver.asp.ConcretizationCache, "fetch", _ensure_cache_hits)
# ensure subsequent concretizations of the same spec produce the same spec
# object
for _ in range(5):
assert h == spack.concretize.concretize_one("hdf5")

View File

@@ -341,6 +341,16 @@ def pytest_collection_modifyitems(config, items):
item.add_marker(skip_as_slow)
@pytest.fixture(scope="function")
def use_concretization_cache(mutable_config, tmpdir):
"""Enables the use of the concretization cache"""
spack.config.set("config:concretization_cache:enable", True)
# ensure we have an isolated concretization cache
# (ConcretizationCache.__init__ reads config:concretization_cache:url)
new_conc_cache_loc = str(tmpdir.mkdir("concretization"))
spack.config.set("config:concretization_cache:url", new_conc_cache_loc)
yield
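# Hypothetical usage: request the fixture to route cache traffic to the
# temporary directory, e.g.
#
#     def test_my_cache_behavior(use_concretization_cache):
#         spack.concretize.concretize_one("zlib")  # populates the cache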
#
# These fixtures are applied to all tests
#
@@ -2138,8 +2148,7 @@ def _c_compiler_always_exists():
@pytest.fixture(scope="session")
def mock_test_cache(tmp_path_factory):
cache_dir = tmp_path_factory.mktemp("cache")
return spack.util.file_cache.FileCache(cache_dir)
class MockHTTPResponse(io.IOBase):

View File

@@ -14,3 +14,5 @@ config:
checksum: true
dirty: false
locks: {1}
concretization_cache:
enable: false

View File

@@ -161,7 +161,7 @@ def test_handle_unknown_package(temporary_store, config, mock_packages, tmp_path
"""
layout = temporary_store.layout
repo_cache = spack.util.file_cache.FileCache(str(tmp_path / "cache"))
repo_cache = spack.util.file_cache.FileCache(tmp_path / "cache")
mock_db = spack.repo.RepoPath(spack.paths.mock_packages_path, cache=repo_cache)
not_in_mock = set.difference(

View File

@@ -977,7 +977,6 @@ class MyBuildException(Exception):
def _install_fail_my_build_exception(installer, task, install_status, **kwargs):
if task.pkg.name == "pkg-a":
raise MyBuildException("mock internal package build error for pkg-a")
else:

View File

@@ -34,7 +34,7 @@ def extra_repo(tmp_path_factory, request):
subdirectory: '{request.param}'
"""
)
repo_cache = spack.util.file_cache.FileCache(cache_dir)
return spack.repo.Repo(str(repo_dir), cache=repo_cache), request.param
@@ -194,7 +194,7 @@ def _repo_paths(repos):
repo_paths, namespaces = _repo_paths(repos)
repo_cache = spack.util.file_cache.FileCache(str(tmp_path / "cache"))
repo_cache = spack.util.file_cache.FileCache(tmp_path / "cache")
repo_path = spack.repo.RepoPath(*repo_paths, cache=repo_cache)
assert len(repo_path.repos) == len(namespaces)
assert [x.namespace for x in repo_path.repos] == namespaces
@@ -319,3 +319,48 @@ def test_get_repo(self, mock_test_cache):
# foo is not there, raise
with pytest.raises(spack.repo.UnknownNamespaceError):
repo.get_repo("foo")
def test_parse_package_api_version():
"""Test that we raise an error if a repository has a version that is not supported."""
# valid version
assert spack.repo._parse_package_api_version(
{"api": "v1.2"}, min_api=(1, 0), max_api=(2, 3)
) == (1, 2)
# too new and too old
with pytest.raises(
spack.repo.BadRepoError,
match=r"Package API v2.4 is not supported .* \(must be between v1.0 and v2.3\)",
):
spack.repo._parse_package_api_version({"api": "v2.4"}, min_api=(1, 0), max_api=(2, 3))
with pytest.raises(
spack.repo.BadRepoError,
match=r"Package API v0.9 is not supported .* \(must be between v1.0 and v2.3\)",
):
spack.repo._parse_package_api_version({"api": "v0.9"}, min_api=(1, 0), max_api=(2, 3))
# default to v1.0 if not specified
assert spack.repo._parse_package_api_version({}, min_api=(1, 0), max_api=(2, 3)) == (1, 0)
# if v1.0 support is dropped we should also raise
with pytest.raises(
spack.repo.BadRepoError,
match=r"Package API v1.0 is not supported .* \(must be between v2.0 and v2.3\)",
):
spack.repo._parse_package_api_version({}, min_api=(2, 0), max_api=(2, 3))
# finally test invalid input
with pytest.raises(spack.repo.BadRepoError, match="Invalid Package API version"):
spack.repo._parse_package_api_version({"api": "v2"}, min_api=(1, 0), max_api=(3, 3))
with pytest.raises(spack.repo.BadRepoError, match="Invalid Package API version"):
spack.repo._parse_package_api_version({"api": 2.0}, min_api=(1, 0), max_api=(3, 3))
def test_repo_package_api_version(tmp_path: pathlib.Path):
"""Test that we can specify the API version of a repository."""
(tmp_path / "example" / "packages").mkdir(parents=True)
(tmp_path / "example" / "repo.yaml").write_text(
"""\
repo:
namespace: example
"""
)
cache = spack.util.file_cache.FileCache(tmp_path / "cache")
assert spack.repo.Repo(str(tmp_path / "example"), cache=cache).package_api == (1, 0)

View File

@@ -27,9 +27,7 @@ def check_spliced_spec_prefixes(spliced_spec):
text_file_path = os.path.join(node.prefix, node.name)
with open(text_file_path, "r", encoding="utf-8") as f:
text = f.read()
for modded_spec in node.traverse(root=True, deptype=dt.ALL & ~dt.BUILD):
assert modded_spec.prefix in text

View File

@@ -149,11 +149,8 @@ def test_reverse_environment_modifications(working_env):
os.environ.clear()
os.environ.update(start_env)
to_reverse.apply_modifications()
reversal.apply_modifications()
start_env.pop("UNSET")
assert os.environ == start_env

View File

@@ -257,7 +257,6 @@ def test_core_lib_files():
names.append(os.path.join(test_dir, n))
for filename in names:
print("Testing %s" % filename)
source = read_pyfile(filename)
check_ast_roundtrip(source)

View File

@@ -5,16 +5,17 @@
import errno
import math
import os
import pathlib
import shutil
from typing import IO, Dict, Optional, Tuple, Union
from llnl.util.filesystem import rename
from spack.error import SpackError
from spack.util.lock import Lock, ReadTransaction, WriteTransaction
def _maybe_open(path: Union[str, pathlib.Path]) -> Optional[IO[str]]:
try:
return open(path, "r", encoding="utf-8")
except OSError as e:
@@ -24,7 +25,7 @@ def _maybe_open(path: str) -> Optional[IO[str]]:
class ReadContextManager:
def __init__(self, path: Union[str, pathlib.Path]) -> None:
self.path = path
def __enter__(self) -> Optional[IO[str]]:
@@ -70,7 +71,7 @@ class FileCache:
"""
def __init__(self, root: Union[str, pathlib.Path], timeout=120):
"""Create a file cache object.
This will create the cache directory if it does not exist yet.
@@ -82,58 +83,60 @@ def __init__(self, root, timeout=120):
for cache files, this specifies how long Spack should wait
before assuming that there is a deadlock.
"""
if isinstance(root, str):
root = pathlib.Path(root)
self.root = root
self.root.mkdir(parents=True, exist_ok=True)
self._locks: Dict[Union[pathlib.Path, str], Lock] = {}
self.lock_timeout = timeout
def destroy(self):
"""Remove all files under the cache root."""
for f in os.listdir(self.root):
path = os.path.join(self.root, f)
if os.path.isdir(path):
shutil.rmtree(path, True)
for f in self.root.iterdir():
if f.is_dir():
shutil.rmtree(f, True)
else:
os.remove(path)
f.unlink()
def cache_path(self, key: Union[str, pathlib.Path]):
"""Path to the file in the cache for a particular key."""
return self.root / key
def _lock_path(self, key: Union[str, pathlib.Path]):
"""Path to the lock file for a particular key."""
keyfile = os.path.basename(key)
keydir = os.path.dirname(key)
return self.root / keydir / ("." + keyfile + ".lock")
def _get_lock(self, key: Union[str, pathlib.Path]):
"""Create a lock for a key, if necessary, and return a lock object."""
if key not in self._locks:
self._locks[key] = Lock(str(self._lock_path(key)), default_timeout=self.lock_timeout)
return self._locks[key]
def init_entry(self, key: Union[str, pathlib.Path]):
"""Ensure we can access a cache file. Create a lock for it if needed.
Return whether the cache file exists yet or not.
"""
cache_path = self.cache_path(key)
# Avoid using pathlib here to allow the logic below to
# function as is
# TODO: Maybe refactor the following logic for pathlib
exists = os.path.exists(cache_path)
if exists:
if not cache_path.is_file():
raise CacheError("Cache file is not a file: %s" % cache_path)
if not os.access(cache_path, os.R_OK):
raise CacheError("Cannot access cache file: %s" % cache_path)
else:
# if the file is hierarchical, make parent directories
parent = cache_path.parent
if parent != self.root:
parent.mkdir(parents=True, exist_ok=True)
if not os.access(parent, os.R_OK | os.W_OK):
raise CacheError("Cannot access cache directory: %s" % parent)
@@ -142,7 +145,7 @@ def init_entry(self, key):
self._get_lock(key)
return exists
def read_transaction(self, key: Union[str, pathlib.Path]):
"""Get a read transaction on a file cache item.
Returns a ReadTransaction context manager and opens the cache file for
@@ -153,9 +156,11 @@ def read_transaction(self, key):
"""
path = self.cache_path(key)
return ReadTransaction(
self._get_lock(key), acquire=lambda: ReadContextManager(path)  # type: ignore
)
def write_transaction(self, key: Union[str, pathlib.Path]):
"""Get a write transaction on a file cache item.
Returns a WriteTransaction context manager that opens a temporary file
@@ -167,9 +172,11 @@ def write_transaction(self, key):
if os.path.exists(path) and not os.access(path, os.W_OK):
raise CacheError(f"Insufficient permissions to write to file cache at {path}")
return WriteTransaction(
self._get_lock(key), acquire=lambda: WriteContextManager(path)  # type: ignore
)
def mtime(self, key: Union[str, pathlib.Path]) -> float:
"""Return modification time of cache file, or -inf if it does not exist.
Time is in units returned by os.stat in the mtime field, which is
@@ -179,14 +186,14 @@ def mtime(self, key) -> float:
if not self.init_entry(key):
return -math.inf
else:
return self.cache_path(key).stat().st_mtime
def remove(self, key: Union[str, pathlib.Path]):
file = self.cache_path(key)
lock = self._get_lock(key)
try:
lock.acquire_write()
file.unlink()
except OSError as e:
# File not found is OK, so remove is idempotent.
if e.errno != errno.ENOENT:
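# Illustrative use of the pathlib-friendly API (hypothetical key): string and
# Path keys are interchangeable after this change, e.g.
#
#     fc = FileCache(pathlib.Path("/tmp/cache"))
#     with fc.write_transaction(pathlib.Path("ab") / "entry") as (old, new):
#         new.write("data")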

View File

@@ -20,8 +20,9 @@ ci:
- k=$CI_GPG_KEY_ROOT/intermediate_ci_signing_key.gpg; [[ -r $k ]] && spack gpg trust $k
- k=$CI_GPG_KEY_ROOT/spack_public_key.gpg; [[ -r $k ]] && spack gpg trust $k
script::
- - spack config blame mirrors
- spack --color=always --backtrace ci rebuild -j ${SPACK_BUILD_JOBS} --tests > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
- - if [ -n "$SPACK_EXTRA_MIRROR" ]; then spack mirror add local "$SPACK_EXTRA_MIRROR"; fi
- spack config blame mirrors
- - spack --color=always --backtrace ci rebuild -j ${SPACK_BUILD_JOBS} --tests > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
after_script:
- - cat /proc/loadavg || true
- cat /proc/meminfo | grep 'MemTotal\|MemFree' || true

View File

@@ -2,6 +2,7 @@ ci:
broken-tests-packages:
- mpich
- openmpi
- py-mpi4py
pipeline-gen:
- build-job-remove:
tags: [spack]

View File

@@ -28,6 +28,7 @@ class Abinit(AutotoolsPackage):
license("Apache-2.0")
maintainers("downloadico")
version("10.2.7", sha256="e0e1049b01b4ebaec29be632cd554caeccb4b2a8acf2e148c8ac505e6b226dc1")
version("10.0.9", sha256="17650580295e07895f6c3c4b1f3f0fe0e0f3fea9bab5fd8ce7035b16a62f8e5e")
version("10.0.7", sha256="a9fc044b33861b7defd50fafd19a73eb6f225e18ae30b23bc731d9c8009c881c")
version("9.10.5", sha256="a9e0f0e058baa6088ea93d26ada369ccf0fe52dc9d4a865b1c38c20620148cd5")
@@ -76,13 +77,22 @@ class Abinit(AutotoolsPackage):
depends_on("fftw-api")
depends_on("netcdf-fortran")
depends_on("netcdf-c+mpi", when="+mpi")
depends_on("netcdf-c~mpi", when="~mpi")
depends_on("hdf5+mpi", when="+mpi")
depends_on("hdf5~mpi", when="~mpi")
with when("+mpi"):
depends_on("netcdf-c+mpi")
depends_on("hdf5+mpi")
depends_on("wannier90+shared", when="+wannier90")
with when("~mpi"):
depends_on("netcdf-c~mpi")
depends_on("hdf5~mpi")
# Cannot ask for +scalapack if it does not depend on MPI
conflicts("+scalapack")
# Cannot ask for +wannier90 if it does not depend on MPI
conflicts("+wannier90")
# constrain version of hdf5
depends_on("hdf5@:1.8", when="@9:")
depends_on("wannier90+shared", when="+wannier90+mpi")
# constrain libxc version
depends_on("libxc")
@@ -93,15 +103,8 @@ class Abinit(AutotoolsPackage):
depends_on("libxml2", when="@9:+libxml2")
# If the Intel suite is used for Lapack, it must be used for fftw and vice-versa
requires("^[virtuals=fftw-api] intel-oneapi-mkl", when="^[virtuals=lapack] intel-oneapi-mkl")
requires("^[virtuals=lapack] intel-oneapi-mkl", when="^[virtuals=fftw-api] intel-oneapi-mkl")
# libxml2 needs version 9 and above
conflicts("+libxml2", when="@:8")
@@ -115,10 +118,8 @@ class Abinit(AutotoolsPackage):
for fftw in ["amdfftw", "cray-fftw", "fujitsu-fftw", "fftw"]:
conflicts("+openmp", when=f"^{fftw}~openmp", msg=f"Need to request {fftw} +openmp")
mkl_message = "Need to set dependent variant to threads=openmp"
conflicts("+openmp", when="^intel-mkl threads=none", msg=mkl_message)
conflicts("+openmp", when="^intel-mkl threads=tbb", msg=mkl_message)
conflicts("+openmp", when="^intel-parallel-studio +mkl threads=none", msg=mkl_message)
with when("+openmp"):
requires("^intel-oneapi-mkl threads=openmp", when="^[virtuals=lapack] intel-oneapi-mkl")
conflicts(
"+openmp", when="^fujitsu-ssl2 ~parallel", msg="Need to request fujitsu-ssl2 +parallel"
@@ -139,9 +140,7 @@ class Abinit(AutotoolsPackage):
def configure_args(self):
spec = self.spec
options = self.with_or_without("libxml2")
oapp = options.append
if spec.satisfies("@:8"):

View File

@@ -40,6 +40,7 @@ class Acts(CMakePackage, CudaPackage):
# Supported Acts versions
version("main", branch="main")
version("master", branch="main", deprecated=True) # For compatibility
version("39.2.0", commit="94cf48783efd713f38106b18211d1c59f4e8cdec", submodules=True)
version("39.1.0", commit="09225b0d0bba24d57a696e347e3027b39404bb75", submodules=True)
version("39.0.0", commit="b055202e2fbdd509bc186eb4782714bc46f38f3f", submodules=True)
version("38.2.0", commit="9cb8f4494656553fd9b85955938b79b2fac4c9b0", submodules=True)

View File

@@ -43,6 +43,11 @@ class Alps(CMakePackage):
extends("python")
# https://github.com/ALPSim/ALPS/issues/9
conflicts(
"%gcc@14", when="@:2.3.3-beta.6", msg="use gcc older than version 14 or else build fails"
)
# See https://github.com/ALPSim/ALPS/issues/6#issuecomment-2604912169
# for why this is needed
resources = {

View File

@@ -25,6 +25,7 @@ class Amrex(CMakePackage, CudaPackage, ROCmPackage):
license("BSD-3-Clause")
version("develop", branch="development")
version("25.03", sha256="7a2dc60d01619afdcbce0ff624a3c1a5a605e28dd8721c0fbec638076228cab0")
version("25.02", sha256="2680a5a9afba04e211cd48d27799c5a25abbb36c6c3d2b6c13cd4757c7176b23")
version("25.01", sha256="29eb35cf67d66b0fd0654282454c210abfadf27fcff8478b256e3196f237c74f")
version("24.12", sha256="ca4b41ac73fabb9cf3600b530c9823eb3625f337d9b7b9699c1089e81c67fc67")
@@ -137,7 +138,8 @@ class Amrex(CMakePackage, CudaPackage, ROCmPackage):
)
variant("eb", default=True, description="Build Embedded Boundary classes", when="@24.10:")
variant("eb", default=False, description="Build Embedded Boundary classes", when="@:24.09")
variant("fft", default=False, description="Build FFT support", when="@24.11:")
variant("fft", default=True, description="Build FFT support", when="@25.03:")
variant("fft", default=False, description="Build FFT support", when="@24.11:25.02")
variant("fortran", default=False, description="Build Fortran API")
variant("linear_solvers", default=True, description="Build linear solvers")
variant("amrdata", default=False, description="Build data services")

View File

@@ -41,6 +41,7 @@ class ArpackNg(CMakePackage, AutotoolsPackage):
build_system("cmake", "autotools", default="cmake")
version("develop", branch="master")
version("3.9.1", sha256="f6641deb07fa69165b7815de9008af3ea47eb39b2bb97521fbf74c97aba6e844")
version("3.9.0", sha256="24f2a2b259992d3c797d80f626878aa8e2ed5009d549dad57854bbcfb95e1ed0")
version("3.8.0", sha256="ada5aeb3878874383307239c9235b716a8a170c6d096a6625bfd529844df003d")
version("3.7.0", sha256="972e3fc3cd0b9d6b5a737c9bf6fd07515c0d6549319d4ffb06970e64fa3cc2d6")
@@ -49,11 +50,6 @@ class ArpackNg(CMakePackage, AutotoolsPackage):
version("3.6.0", sha256="3c88e74cc10bba81dc2c72c4f5fff38a800beebaa0b4c64d321c28c9203b37ea")
version("3.5.0", sha256="50f7a3e3aec2e08e732a487919262238f8504c3ef927246ec3495617dde81239")
version("3.4.0", sha256="69e9fa08bacb2475e636da05a6c222b17c67f1ebeab3793762062248dd9d842f")
version(
"3.3.0",
sha256="ad59811e7d79d50b8ba19fd908f92a3683d883597b2c7759fdcc38f6311fe5b3",
deprecated=True,
)
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
@@ -63,11 +59,6 @@ class ArpackNg(CMakePackage, AutotoolsPackage):
variant("mpi", default=True, description="Activates MPI support")
variant("icb", default=False, when="@3.6:", description="Activates iso_c_binding support")
# The function pdlamch10 does not set the return variable.
# This is fixed upstream
# see https://github.com/opencollab/arpack-ng/issues/34
patch("pdlamch10.patch", when="@3.3.0")
patch("make_install.patch", when="@3.4.0")
patch("parpack_cmake.patch", when="@3.4.0")
@@ -100,17 +91,13 @@ def flag_handler(self, name, flags):
if self.spec.satisfies("%cce"):
flags.append("-hnopattern")
return flags, None, None
@property
def libs(self):
# TODO: do we need spec['arpack-ng:parallel'].libs ?
# query_parameters = self.spec.last_query.extra_parameters
libraries = ["libarpack"]
if self.spec.satisfies("+mpi"):
libraries = ["libparpack"] + libraries
return find_libraries(libraries, root=self.prefix, shared=True, recursive=True)
@@ -136,10 +123,8 @@ def cmake_args(self):
]
# If 64-bit BLAS is used:
if spec.satisfies("^[virtuals=lapack] openblas+ilp64") or spec.satisfies(
"^[virtuals=lapack] intel-oneapi-mkl+ilp64"
):
options.append("-DINTERFACE64=1")

View File

@@ -1,15 +0,0 @@
diff --git a/PARPACK/SRC/MPI/pdlamch10.f b/PARPACK/SRC/MPI/pdlamch10.f
index 6571da9..2882c2e 100644
--- a/PARPACK/SRC/MPI/pdlamch10.f
+++ b/PARPACK/SRC/MPI/pdlamch10.f
@@ -86,8 +86,8 @@
TEMP = TEMP1
END IF
*
- PDLAMCH = TEMP
+ PDLAMCH10 = TEMP
*
-* End of PDLAMCH
+* End of PDLAMCH10
*
END

View File

@@ -16,6 +16,7 @@ class Arrow(CMakePackage, CudaPackage):
license("Apache-2.0")
version("19.0.1", sha256="4c898504958841cc86b6f8710ecb2919f96b5e10fa8989ac10ac4fca8362d86a")
version("18.0.0", sha256="9c473f2c9914c59ab571761c9497cf0e5cfd3ea335f7782ccc6121f5cb99ae9b")
version("16.1.0", sha256="9762d9ecc13d09de2a03f9c625a74db0d645cb012de1e9a10dfed0b4ddc09524")
version("15.0.2", sha256="4735b349845bff1fe95ed11abbfed204eb092cabc37523aa13a80cb830fe5b5e")
@@ -41,8 +42,8 @@ class Arrow(CMakePackage, CudaPackage):
version("0.9.0", sha256="65f89a3910b6df02ac71e4d4283db9b02c5b3f1e627346c7b6a5982ae994af91")
version("0.8.0", sha256="c61a60c298c30546fc0b418a35be66ef330fb81b06c49928acca7f1a34671d54")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("c", type="build")
depends_on("cxx", type="build")
depends_on("boost@1.60: +filesystem +system")
depends_on("brotli", when="+brotli")
@@ -95,6 +96,7 @@ class Arrow(CMakePackage, CudaPackage):
variant(
"compute", default=False, description="Computational kernel functions and other support"
)
variant("dataset", default=False, description="Build the Arrow Dataset integration")
variant("gandiva", default=False, description="Build Gandiva support")
variant(
"glog",
@@ -156,6 +158,7 @@ def cmake_args(self):
args.append(self.define_from_variant("ARROW_COMPUTE", "compute"))
args.append(self.define_from_variant("ARROW_CUDA", "cuda"))
args.append(self.define_from_variant("ARROW_DATASET", "dataset"))
args.append(self.define_from_variant("ARROW_GANDIVA", "gandiva"))
args.append(self.define_from_variant("ARROW_GLOG", "glog"))
args.append(self.define_from_variant("ARROW_HDFS", "hdfs"))

View File

@@ -8,6 +8,7 @@
from os.path import join as pjoin
from spack.package import *
from spack.util.executable import which_string
def get_spec_path(spec, package_name, path_replacements={}, use_bin=False):
@@ -44,6 +45,8 @@ class Axom(CachedCMakePackage, CudaPackage, ROCmPackage):
version("main", branch="main")
version("develop", branch="develop")
version("0.10.1", tag="v0.10.1", commit="6626ee1c5668176fb64dd9a52dec3e8596b3ba6b")
version("0.10.0", tag="v0.10.0", commit="ea853a34a834415ea75f824160fc44cba9a0755d")
version("0.9.0", tag="v0.9.0", commit="5f531595d941d16fa3b8583bfc347a845d9feb6d")
version("0.8.1", tag="v0.8.1", commit="0da8a5b1be596887158ac2fcd321524ba5259e15")
version("0.8.0", tag="v0.8.0", commit="71fab3262eb7e1aa44a04c21d072b77f06362f7b")
@@ -58,9 +61,9 @@ class Axom(CachedCMakePackage, CudaPackage, ROCmPackage):
version("0.3.0", tag="v0.3.0", commit="20068ccab4b4f70055918b4f17960ec3ed6dbce8")
version("0.2.9", tag="v0.2.9", commit="9e9a54ede3326817c05f35922738516e43b5ec3d")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("fortran", type="build") # generated
depends_on("c", type="build")
depends_on("cxx", type="build")
depends_on("fortran", type="build", when="+fortran")
# https://github.com/spack/spack/issues/31829
patch("examples-oneapi.patch", when="@0.6.1 +examples %oneapi")
@@ -95,6 +98,8 @@ class Axom(CachedCMakePackage, CudaPackage, ROCmPackage):
description="Build with hooks for Adiak/Caliper performance analysis",
)
variant("opencascade", default=False, description="Build with opencascade")
variant("mfem", default=False, description="Build with mfem")
variant("hdf5", default=True, description="Build with hdf5")
variant("lua", default=True, description="Build with Lua")
@@ -180,6 +185,8 @@ class Axom(CachedCMakePackage, CudaPackage, ROCmPackage):
depends_on("rocprim", when="+rocm")
depends_on("opencascade", when="+opencascade")
with when("+mfem"):
depends_on("mfem+mpi", when="+mpi")
depends_on("mfem~mpi", when="~mpi")
@@ -196,7 +203,7 @@ class Axom(CachedCMakePackage, CudaPackage, ROCmPackage):
depends_on("py-sphinx")
depends_on("py-shroud")
depends_on("py-jsonschema")
depends_on("llvm+clang@10.0.0", type="build")
depends_on("llvm+clang@14", type="build")
# -----------------------------------------------------------------------
# Conflicts
@@ -213,6 +220,11 @@ class Axom(CachedCMakePackage, CudaPackage, ROCmPackage):
conflicts("+openmp", when="+rocm")
conflicts("+cuda", when="+rocm")
conflicts("~raja", when="+cuda")
conflicts("~raja", when="+rocm")
conflicts("~umpire", when="+cuda")
conflicts("~umpire", when="+rocm")
conflicts("^blt@:0.3.6", when="+rocm")
def flag_handler(self, name, flags):
@@ -277,12 +289,12 @@ def initconfig_compiler_entries(self):
else:
entries.append(cmake_cache_option("ENABLE_FORTRAN", False))
if "+cpp14" in spec and spec.satisfies("@:0.6.1"):
if spec.satisfies("+cpp14") and spec.satisfies("@:0.6.1"):
entries.append(cmake_cache_string("BLT_CXX_STD", "c++14", ""))
# Add optimization flag workaround for builds with cray compiler
if spec.satisfies("%cce"):
entries.append(cmake_cache_string("CMAKE_CXX_FLAGS_DEBUG", "-O1 -g"))
return entries
@@ -319,43 +331,40 @@ def initconfig_hardware_entries(self):
entries.append(cmake_cache_option("ENABLE_HIP", True))
hip_root = spec["hip"].prefix
rocm_root = hip_root + "/.."
rocm_root = os.path.dirname(spec["llvm-amdgpu"].prefix)
entries.append(cmake_cache_path("ROCM_PATH", rocm_root))
# Fixes for mpi for rocm until wrapper paths are fixed
# These flags are already part of the wrapped compilers on TOSS4 systems
hip_link_flags = ""
if spec.satisfies("+fortran") and self.is_fortran_compiler("amdflang"):
hip_link_flags += "-Wl,--disable-new-dtags "
if spec.satisfies("^hip@6.0.0:"):
hip_link_flags += "-L{0}/lib/llvm/lib -Wl,-rpath,{0}/lib/llvm/lib ".format(
rocm_root
)
else:
hip_link_flags += "-L{0}/llvm/lib -Wl,-rpath,{0}/llvm/lib ".format(rocm_root)
hip_link_flags += "-lpgmath -lflang -lflangrti -lompstub "
# Remove extra link library for crayftn
if spec.satisfies("+fortran") and self.is_fortran_compiler("crayftn"):
entries.append(
cmake_cache_string("BLT_CMAKE_IMPLICIT_LINK_LIBRARIES_EXCLUDE", "unwind")
)
# Additional libraries for TOSS4
hip_link_flags += " -L{0}/../lib -Wl,-rpath,{0}/../lib ".format(hip_root)
hip_link_flags += "-lamdhip64 -lhsakmt -lhsa-runtime64 -lamd_comgr "
entries.append(cmake_cache_string("CMAKE_EXE_LINKER_FLAGS", hip_link_flags))
@@ -373,7 +382,7 @@ def initconfig_hardware_entries(self):
)
)
if "+fortran" in spec and self.is_fortran_compiler("xlf"):
if spec.satisfies("+fortran") and self.is_fortran_compiler("xlf"):
# Grab lib directory for the current fortran compiler
libdir = pjoin(os.path.dirname(os.path.dirname(self.compiler.fc)), "lib")
description = (
@@ -398,9 +407,9 @@ def initconfig_hardware_entries(self):
)
if (
"+openmp" in spec
spec.satisfies("+openmp")
and "clang" in self.compiler.cxx
and "+fortran" in spec
and spec.satisfies("+fortran")
and self.is_fortran_compiler("xlf")
):
openmp_gen_exp = (
@@ -453,7 +462,8 @@ def initconfig_mpi_entries(self):
mpi_exec_index = [
index for index, entry in enumerate(entries) if "MPIEXEC_EXECUTABLE" in entry
]
if mpi_exec_index:
del entries[mpi_exec_index[0]]
entries.append(cmake_cache_path("MPIEXEC_EXECUTABLE", srun_wrapper))
else:
entries.append(cmake_cache_option("ENABLE_MPI", False))
@@ -488,8 +498,8 @@ def initconfig_package_entries(self):
entries.append(cmake_cache_path("CONDUIT_DIR", conduit_dir))
# optional tpls
for dep in ("mfem", "hdf5", "lua", "raja", "umpire"):
if "+%s" % dep in spec:
for dep in ("mfem", "hdf5", "lua", "raja", "umpire", "opencascade"):
if spec.satisfies("+%s" % dep):
dep_dir = get_spec_path(spec, dep, path_replacements)
entries.append(cmake_cache_path("%s_DIR" % dep.upper(), dep_dir))
else:
@@ -502,7 +512,7 @@ def initconfig_package_entries(self):
dep_dir = get_spec_path(spec, "caliper", path_replacements)
entries.append(cmake_cache_path("CALIPER_DIR", dep_dir))
if "+umpire" in spec and spec.satisfies("^camp"):
if spec.satisfies("+umpire") and spec.satisfies("^camp"):
dep_dir = get_spec_path(spec, "camp", path_replacements)
entries.append(cmake_cache_path("CAMP_DIR", dep_dir))
@@ -546,14 +556,14 @@ def initconfig_package_entries(self):
path2 = os.path.realpath(spec["doxygen"].prefix)
self.find_path_replacement(path1, path2, path_replacements, "DEVTOOLS_ROOT", entries)
if "+devtools" in spec and spec.satisfies("^llvm"):
if spec.satisfies("+devtools") and spec.satisfies("^llvm"):
clang_fmt_path = spec["llvm"].prefix.bin.join("clang-format")
entries.append(cmake_cache_path("CLANGFORMAT_EXECUTABLE", clang_fmt_path))
else:
entries.append("# ClangFormat disabled due to llvm and devtools not in spec\n")
entries.append(cmake_cache_option("ENABLE_CLANGFORMAT", False))
if "+python" in spec or "+devtools" in spec:
if spec.satisfies("+python") or spec.satisfies("+devtools"):
python_path = os.path.realpath(spec["python"].command.path)
for key in path_replacements:
python_path = python_path.replace(key, path_replacements[key])
@@ -599,6 +609,8 @@ def cmake_args(self):
options.append(self.define_from_variant("BUILD_SHARED_LIBS", "shared"))
options.append(self.define_from_variant("AXOM_ENABLE_EXAMPLES", "examples"))
options.append(self.define_from_variant("AXOM_ENABLE_TOOLS", "tools"))
if self.spec.satisfies("~raja") or self.spec.satisfies("+umpire"):
options.append("-DAXOM_ENABLE_MIR:BOOL=OFF")
return options

View File

@@ -136,6 +136,15 @@ def cmake_args(self):
args.append(self.define_from_variant("ADDRESS_SANITIZER", "asan"))
return args
def setup_build_environment(self, env):
if self.spec.satisfies("@5.7: +asan"):
env.set("CC", f"{self.spec['llvm-amdgpu'].prefix}/bin/clang")
env.set("CXX", f"{self.spec['llvm-amdgpu'].prefix}/bin/clang++")
env.set("ASAN_OPTIONS", "detect_leaks=0")
env.set("CFLAGS", "-fsanitize=address -shared-libasan")
env.set("CXXFLAGS", "-fsanitize=address -shared-libasan")
env.set("LDFLAGS", "-fuse-ld=lld")
@classmethod
def determine_version(cls, lib):
match = re.search(r"lib\S*\.so\.\d+\.\d+\.(\d)(\d\d)(\d\d)", lib)

View File

@@ -424,8 +424,6 @@ def hostconfig(self):
cfg.write(cmake_cache_entry("CMAKE_EXE_LINKER_FLAGS", linkerflags))
if spec.satisfies("+shared"):
cfg.write(cmake_cache_entry("CMAKE_SHARED_LINKER_FLAGS", linkerflags))
else:
cfg.write(cmake_cache_entry("CMAKE_STATIC_LINKER_FLAGS", linkerflags))
#######################
# BLT

View File

@@ -144,6 +144,13 @@ class Cp2k(MakefilePackage, CMakePackage, CudaPackage, ROCmPackage):
" conventional location. This option is only relevant when regtests need to be run.",
)
variant(
"grpp",
default=False,
description="Enable GRPP psuedo potentials",
when="@2025.2: build_system=cmake",
)
with when("+cuda"):
variant(
"cuda_arch_35_k20x",
@@ -1048,6 +1055,7 @@ def cmake_args(self):
self.define_from_variant("CP2K_ENABLE_GRID_GPU", "grid_gpu"),
self.define_from_variant("CP2K_ENABLE_DBM_GPU", "dbm_gpu"),
self.define_from_variant("CP2K_ENABLE_PW_GPU", "pw_gpu"),
self.define_from_variant("CP2K_USE_GRPP", "grpp"),
]
# we force the use of elpa openmp threading support; might need to be revisited, though
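The new GRPP entry leans on define_from_variant, which maps a variant's state directly onto a CMake cache flag, so no manual branching is needed:

# self.define_from_variant("CP2K_USE_GRPP", "grpp") renders as:
#   -DCP2K_USE_GRPP:BOOL=ON    for a spec with +grpp
#   -DCP2K_USE_GRPP:BOOL=OFF   for a spec with ~grpp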

View File

@@ -19,6 +19,7 @@ class Detray(CMakePackage):
license("MPL-2.0", checked_by="stephenswat")
version("0.89.0", sha256="b893b7f5434c1c9951433876ef43d1db1b08d36749f062e261b4e6d48e77d5db")
version("0.88.1", sha256="89134c86c6857cb3a821181e3bb0565ebb726dd8b1245678db1681483d792cf9")
version("0.88.0", sha256="bda15501c9c96af961e24ce243982f62051c535b9fe458fb28336a19b54eb47d")
version("0.87.0", sha256="2d4a76432dd6ddbfc00b88b5d482072e471fefc264b60748bb1f9a123963576e")
@@ -75,7 +76,7 @@ class Detray(CMakePackage):
depends_on("vecmem@1.8.0:", when="@0.76:")
depends_on("covfie@0.10.0:")
depends_on("nlohmann-json@3.11.0:", when="+json")
depends_on("dfelibs@20211029:")
depends_on("dfelibs@20211029:", when="@:0.88")
depends_on("acts-algebra-plugins@0.18.0: +vecmem")
depends_on("acts-algebra-plugins +vc", when="+vc")
depends_on("acts-algebra-plugins +eigen", when="+eigen")

View File

@@ -15,6 +15,7 @@ class DlaFuture(CMakePackage, CudaPackage, ROCmPackage):
license("BSD-3-Clause")
version("0.8.0", sha256="4c30c33ee22417514d839a75d99ae4c24860078fb595ee24ce4ebf45fbce5e69")
version("0.7.3", sha256="8c829b72f4ea9c924abdb6fe2ac7489304be4056ab76b8eba226c33ce7b7dc0e")
version(
"0.7.1",

View File

@@ -1,22 +0,0 @@
diff --git a/cmake/configure_files/ElementalConfig.cmake.in b/cmake/configure_files/ElementalConfig.cmake.in
index d37649f..8511d81 100644
--- a/cmake/configure_files/ElementalConfig.cmake.in
+++ b/cmake/configure_files/ElementalConfig.cmake.in
@@ -1,6 +1,8 @@
set(Elemental_INCLUDE_DIRS "@CMAKE_INSTALL_PREFIX@/include")
set(Elemental_INCLUDE_DIRS "${Elemental_INCLUDE_DIRS};@MPI_CXX_INCLUDE_PATH@")
-set(Elemental_INCLUDE_DIRS "${Elemental_INCLUDE_DIRS};@QD_INCLUDES@")
+IF(@QD_FOUND@)
+ set(Elemental_INCLUDE_DIRS "${Elemental_INCLUDE_DIRS};@QD_INCLUDES@")
+ENDIF()
set(Elemental_INCLUDE_DIRS "${Elemental_INCLUDE_DIRS};@MPC_INCLUDES@")
set(Elemental_INCLUDE_DIRS "${Elemental_INCLUDE_DIRS};@MPFR_INCLUDES@")
set(Elemental_INCLUDE_DIRS "${Elemental_INCLUDE_DIRS};@GMP_INCLUDES@")
@@ -13,6 +15,6 @@ set(Elemental_LINK_FLAGS "@EL_LINK_FLAGS@")
set(Elemental_DEFINITIONS "@Qt5Widgets_DEFINITIONS@")
# Our library dependencies (contains definitions for IMPORTED targets)
-include("@CMAKE_INSTALL_PREFIX@/CMake/ElementalTargets.cmake")
+include("${CMAKE_CURRENT_LIST_DIR}/ElementalTargets.cmake")
set(Elemental_LIBRARIES El)

View File

@@ -1,668 +0,0 @@
diff -Naur a/include/El/blas_like/level3.hpp b/include/El/blas_like/level3.hpp
--- a/include/El/blas_like/level3.hpp 2017-06-08 07:30:43.180249917 -0700
+++ b/include/El/blas_like/level3.hpp 2017-06-08 07:35:27.325434602 -0700
@@ -31,6 +31,10 @@
}
using namespace GemmAlgorithmNS;
+void GemmUseGPU(int min_M, int min_N, int min_K);
+
+void GemmUseCPU();
+
template<typename T>
void Gemm
( Orientation orientA, Orientation orientB,
diff -Naur a/include/El/core/imports/blas.hpp b/include/El/core/imports/blas.hpp
--- a/include/El/core/imports/blas.hpp 2017-06-08 07:30:43.522016908 -0700
+++ b/include/El/core/imports/blas.hpp 2017-06-08 07:35:06.834030908 -0700
@@ -916,4 +916,63 @@
} // namespace blas
} // namespace El
+
+#if defined(EL_USE_CUBLAS)
+
+namespace El {
+
+#ifdef EL_USE_64BIT_BLAS_INTS
+typedef long long int BlasInt;
+#else
+typedef int BlasInt;
+#endif
+
+namespace cublas {
+
+// NOTE: templated routines are custom and not wrappers
+
+// Level 3 BLAS
+// ============
+template<typename T>
+void Gemm
+( char transA, char transB, BlasInt m, BlasInt n, BlasInt k,
+ const T& alpha,
+ const T* A, BlasInt ALDim,
+ const T* B, BlasInt BLDim,
+ const T& beta,
+ T* C, BlasInt CLDim );
+
+void Gemm
+( char transA, char transB, BlasInt m, BlasInt n, BlasInt k,
+ const float& alpha,
+ const float* A, BlasInt ALDim,
+ const float* B, BlasInt BLDim,
+ const float& beta,
+ float* C, BlasInt CLDim );
+void Gemm
+( char transA, char transB, BlasInt m, BlasInt n, BlasInt k,
+ const double& alpha,
+ const double* A, BlasInt ALDim,
+ const double* B, BlasInt BLDim,
+ const double& beta,
+ double* C, BlasInt CLDim );
+void Gemm
+( char transA, char transB, BlasInt m, BlasInt n, BlasInt k,
+ const scomplex& alpha,
+ const scomplex* A, BlasInt ALDim,
+ const scomplex* B, BlasInt BLDim,
+ const scomplex& beta,
+ scomplex* C, BlasInt CLDim );
+void Gemm
+( char transA, char transB, BlasInt m, BlasInt n, BlasInt k,
+ const dcomplex& alpha,
+ const dcomplex* A, BlasInt ALDim,
+ const dcomplex* B, BlasInt BLDim,
+ const dcomplex& beta,
+ dcomplex* C, BlasInt CLDim );
+
+} // namespace cublas
+} // namespace El
+#endif
+
#endif // ifndef EL_IMPORTS_BLAS_DECL_HPP
diff -Naur a/src/blas_like/level3/Gemm.cpp b/src/blas_like/level3/Gemm.cpp
--- a/src/blas_like/level3/Gemm.cpp 2017-06-08 07:30:44.307096427 -0700
+++ b/src/blas_like/level3/Gemm.cpp 2017-06-08 07:34:23.062863489 -0700
@@ -16,6 +16,20 @@
namespace El {
+char gemm_cpu_gpu_switch = 'c';
+int min_M = 0, min_N = 0, min_K = 0;
+
+void GemmUseGPU(int _min_M, int _min_N, int _min_K) {
+ gemm_cpu_gpu_switch = 'g';
+ min_M = _min_M;
+ min_N = _min_N;
+ min_K = _min_K;
+}
+
+void GemmUseCPU() {
+ gemm_cpu_gpu_switch = 'c';
+}
+
template<typename T>
void Gemm
( Orientation orientA, Orientation orientB,
@@ -59,11 +73,30 @@
const Int k = ( orientA == NORMAL ? A.Width() : A.Height() );
if( k != 0 )
{
+#if defined(EL_USE_CUBLAS)
+ if (gemm_cpu_gpu_switch == 'g' &&
+ m >= min_M &&
+ n >= min_N &&
+ k >= min_K) {
+ cublas::Gemm
+ ( transA, transB, m, n, k,
+ alpha, A.LockedBuffer(), A.LDim(),
+ B.LockedBuffer(), B.LDim(),
+ beta, C.Buffer(), C.LDim() );
+ } else {
+ blas::Gemm
+ ( transA, transB, m, n, k,
+ alpha, A.LockedBuffer(), A.LDim(),
+ B.LockedBuffer(), B.LDim(),
+ beta, C.Buffer(), C.LDim() );
+ }
+#else
blas::Gemm
( transA, transB, m, n, k,
alpha, A.LockedBuffer(), A.LDim(),
B.LockedBuffer(), B.LDim(),
beta, C.Buffer(), C.LDim() );
+#endif
}
else
{
diff -Naur a/src/core/imports/blas/Gemm.hpp b/src/core/imports/blas/Gemm.hpp
--- a/src/core/imports/blas/Gemm.hpp 2017-06-08 07:30:45.090529967 -0700
+++ b/src/core/imports/blas/Gemm.hpp 2017-06-08 07:34:46.503009958 -0700
@@ -41,6 +41,12 @@
} // extern "C"
+
+#if defined(EL_USE_CUBLAS)
+#include <cublas.h>
+#include <cub/util_allocator.cuh>
+#endif
+
namespace El {
namespace blas {
@@ -515,3 +521,515 @@
} // namespace blas
} // namespace El
+
+
+#if EL_USE_CUBLAS
+
+#define USE_CUB 1
+
+namespace El {
+namespace cublas {
+
+#if USE_CUB
+cub::CachingDeviceAllocator g_allocator(true); // Caching allocator for device memory
+#endif
+
+template<typename T>
+void Gemm
+( char transA, char transB,
+ BlasInt m, BlasInt n, BlasInt k,
+ const T& alpha,
+ const T* A, BlasInt ALDim,
+ const T* B, BlasInt BLDim,
+ const T& beta,
+ T* C, BlasInt CLDim )
+{
+ // put something here
+ printf("integer version \n");
+}
+template void Gemm
+( char transA, char transB,
+ BlasInt m, BlasInt n, BlasInt k,
+ const Int& alpha,
+ const Int* A, BlasInt ALDim,
+ const Int* B, BlasInt BLDim,
+ const Int& beta,
+ Int* C, BlasInt CLDim );
+#ifdef EL_HAVE_QD
+template void Gemm
+( char transA, char transB,
+ BlasInt m, BlasInt n, BlasInt k,
+ const DoubleDouble& alpha,
+ const DoubleDouble* A, BlasInt ALDim,
+ const DoubleDouble* B, BlasInt BLDim,
+ const DoubleDouble& beta,
+ DoubleDouble* C, BlasInt CLDim );
+template void Gemm
+( char transA, char transB,
+ BlasInt m, BlasInt n, BlasInt k,
+ const QuadDouble& alpha,
+ const QuadDouble* A, BlasInt ALDim,
+ const QuadDouble* B, BlasInt BLDim,
+ const QuadDouble& beta,
+ QuadDouble* C, BlasInt CLDim );
+template void Gemm
+( char transA, char transB,
+ BlasInt m, BlasInt n, BlasInt k,
+ const Complex<DoubleDouble>& alpha,
+ const Complex<DoubleDouble>* A, BlasInt ALDim,
+ const Complex<DoubleDouble>* B, BlasInt BLDim,
+ const Complex<DoubleDouble>& beta,
+ Complex<DoubleDouble>* C, BlasInt CLDim );
+template void Gemm
+( char transA, char transB,
+ BlasInt m, BlasInt n, BlasInt k,
+ const Complex<QuadDouble>& alpha,
+ const Complex<QuadDouble>* A, BlasInt ALDim,
+ const Complex<QuadDouble>* B, BlasInt BLDim,
+ const Complex<QuadDouble>& beta,
+ Complex<QuadDouble>* C, BlasInt CLDim );
+#endif
+#ifdef EL_HAVE_QUAD
+template void Gemm
+( char transA, char transB,
+ BlasInt m, BlasInt n, BlasInt k,
+ const Quad& alpha,
+ const Quad* A, BlasInt ALDim,
+ const Quad* B, BlasInt BLDim,
+ const Quad& beta,
+ Quad* C, BlasInt CLDim );
+template void Gemm
+( char transA, char transB,
+ BlasInt m, BlasInt n, BlasInt k,
+ const Complex<Quad>& alpha,
+ const Complex<Quad>* A, BlasInt ALDim,
+ const Complex<Quad>* B, BlasInt BLDim,
+ const Complex<Quad>& beta,
+ Complex<Quad>* C, BlasInt CLDim );
+#endif
+#ifdef EL_HAVE_MPC
+template void Gemm
+( char transA, char transB,
+ BlasInt m, BlasInt n, BlasInt k,
+ const BigInt& alpha,
+ const BigInt* A, BlasInt ALDim,
+ const BigInt* B, BlasInt BLDim,
+ const BigInt& beta,
+ BigInt* C, BlasInt CLDim );
+template void Gemm
+( char transA, char transB,
+ BlasInt m, BlasInt n, BlasInt k,
+ const BigFloat& alpha,
+ const BigFloat* A, BlasInt ALDim,
+ const BigFloat* B, BlasInt BLDim,
+ const BigFloat& beta,
+ BigFloat* C, BlasInt CLDim );
+template void Gemm
+( char transA, char transB,
+ BlasInt m, BlasInt n, BlasInt k,
+ const Complex<BigFloat>& alpha,
+ const Complex<BigFloat>* A, BlasInt ALDim,
+ const Complex<BigFloat>* B, BlasInt BLDim,
+ const Complex<BigFloat>& beta,
+ Complex<BigFloat>* C, BlasInt CLDim );
+#endif
+
+void Gemm
+( char transA, char transB,
+ BlasInt m, BlasInt n, BlasInt k,
+ const float& alpha,
+ const float* A, BlasInt ALDim,
+ const float* B, BlasInt BLDim,
+ const float& beta,
+ float* C, BlasInt CLDim )
+{
+ EL_DEBUG_CSE
+ EL_DEBUG_ONLY(
+ if( std::toupper(transA) == 'N' )
+ {
+ if( ALDim < Max(m,1) )
+ LogicError("ALDim was too small: ALDim=",ALDim,",m=",m);
+ }
+ else
+ {
+ if( ALDim < Max(k,1) )
+ LogicError("ALDim was too small: ALDim=",ALDim,",k=",k);
+ }
+
+ if( std::toupper(transB) == 'N' )
+ {
+ if( BLDim < Max(k,1) )
+ LogicError("BLDim was too small: BLDim=",BLDim,",k=",k);
+ }
+ else
+ {
+ if( BLDim < Max(n,1) )
+ LogicError("BLDim was too small: BLDim=",BLDim,",n=",n);
+ }
+
+ if( CLDim < Max(m,1) )
+ LogicError("CLDim was too small: CLDim=",CLDim,",m=",m);
+ )
+ const char fixedTransA = ( std::toupper(transA) == 'C' ? 'T' : transA );
+ const char fixedTransB = ( std::toupper(transB) == 'C' ? 'T' : transB );
+
+ const mpi::Comm comm;
+ const Int commRank = mpi::Rank( comm );
+ if (commRank == 0) {
+ //printf("calling cublas Sgemm: m %d n %d k %d\n", m, n, k);
+ }
+
+ BlasInt rowA, colA, rowB, colB, rowC, colC;
+ // device memory size for A, B and C
+ BlasInt sizeA, sizeB, sizeC;
+ float *devA=NULL, *devB=NULL, *devC=NULL;
+
+ rowA = fixedTransA == 'T' ? k : m;
+ colA = fixedTransA == 'T' ? m : k;
+ rowB = fixedTransB == 'T' ? n : k;
+ colB = fixedTransB == 'T' ? k : n;
+ rowC = m;
+ colC = n;
+ sizeA = rowA * colA;
+ sizeB = rowB * colB;
+ sizeC = rowC * colC;
+
+ cublasStatus stat;
+
+#if USE_CUB
+ CubDebugExit(g_allocator.DeviceAllocate((void**)&devA,
+ sizeof(float) * (sizeA+sizeB+sizeC) ));
+#else
+ stat = cublasAlloc(sizeA+sizeB+sizeC, sizeof(float), (void **) &devA);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("Alloc A,B,C error\n"); }
+#endif
+
+ devB = devA + sizeA;
+ devC = devB + sizeB;
+
+ // copy matrix A, B and C to device
+ stat = cublasSetMatrix(rowA, colA, sizeof(float), A, ALDim, devA, rowA);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("SetMatrix A error\n"); }
+
+ stat = cublasSetMatrix(rowB, colB, sizeof(float), B, BLDim, devB, rowB);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("SetMatrix B error\n"); }
+
+ if (beta != 0.0)
+ {
+ stat = cublasSetMatrix(rowC, colC, sizeof(float), C, CLDim, devC, rowC);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("SetMatrix C error\n"); }
+ }
+
+ // cublas<t>gemm
+ cublasSgemm
+ ( fixedTransA, fixedTransB, m, n, k,
+ alpha, devA, rowA, devB, rowB, beta, devC, rowC );
+
+ // copy matrix C to host
+ stat = cublasGetMatrix(rowC, colC, sizeof(float), devC, rowC, C, CLDim);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("GetMatrix C error\n"); }
+
+ // free
+#if USE_CUB
+ CubDebugExit(g_allocator.DeviceFree(devA));
+#else
+ cublasFree(devA);
+#endif
+ //printf("CUBLAS float done ...\n");
+}
+
+void Gemm
+( char transA, char transB,
+ BlasInt m, BlasInt n, BlasInt k,
+ const double& alpha,
+ const double* A, BlasInt ALDim,
+ const double* B, BlasInt BLDim,
+ const double& beta,
+ double* C, BlasInt CLDim )
+{
+ EL_DEBUG_CSE
+ EL_DEBUG_ONLY(
+ if( std::toupper(transA) == 'N' )
+ {
+ if( ALDim < Max(m,1) )
+ LogicError("ALDim was too small: ALDim=",ALDim,",m=",m);
+ }
+ else
+ {
+ if( ALDim < Max(k,1) )
+ LogicError("ALDim was too small: ALDim=",ALDim,",k=",k);
+ }
+
+ if( std::toupper(transB) == 'N' )
+ {
+ if( BLDim < Max(k,1) )
+ LogicError("BLDim was too small: BLDim=",BLDim,",k=",k);
+ }
+ else
+ {
+ if( BLDim < Max(n,1) )
+ LogicError("BLDim was too small: BLDim=",BLDim,",n=",n);
+ }
+
+ if( CLDim < Max(m,1) )
+ LogicError("CLDim was too small: CLDim=",CLDim,",m=",m);
+ )
+ const char fixedTransA = ( std::toupper(transA) == 'C' ? 'T' : transA );
+ const char fixedTransB = ( std::toupper(transB) == 'C' ? 'T' : transB );
+
+ const mpi::Comm comm;
+ const Int commRank = mpi::Rank( comm );
+ if (commRank == 0) {
+ //printf("calling cublas Dgemm: m %d n %d k %d\n", m, n, k);
+ }
+
+ BlasInt rowA, colA, rowB, colB, rowC, colC;
+ // device memory size for A, B and C
+ BlasInt sizeA, sizeB, sizeC;
+ double *devA=NULL, *devB=NULL, *devC=NULL;
+
+ rowA = fixedTransA == 'T' ? k : m;
+ colA = fixedTransA == 'T' ? m : k;
+ rowB = fixedTransB == 'T' ? n : k;
+ colB = fixedTransB == 'T' ? k : n;
+ rowC = m;
+ colC = n;
+ sizeA = rowA * colA;
+ sizeB = rowB * colB;
+ sizeC = rowC * colC;
+
+ cublasStatus stat;
+
+#if USE_CUB
+ CubDebugExit(g_allocator.DeviceAllocate((void**)&devA,
+ sizeof(double) * (sizeA+sizeB+sizeC) ));
+#else
+ stat = cublasAlloc(sizeA+sizeB+sizeC, sizeof(double), (void **) &devA);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("Alloc A,B,C error\n"); }
+#endif
+
+ devB = devA + sizeA;
+ devC = devB + sizeB;
+
+ // copy matrix A, B and C to device
+ stat = cublasSetMatrix(rowA, colA, sizeof(double), A, ALDim, devA, rowA);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("SetMatrix A error\n"); }
+
+ stat = cublasSetMatrix(rowB, colB, sizeof(double), B, BLDim, devB, rowB);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("SetMatrix B error\n"); }
+
+ if (beta != 0.0)
+ {
+ stat = cublasSetMatrix(rowC, colC, sizeof(double), C, CLDim, devC, rowC);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("SetMatrix C error\n"); }
+ }
+
+ // cublas<t>gemm
+ cublasDgemm
+ ( fixedTransA, fixedTransB, m, n, k,
+ alpha, devA, rowA, devB, rowB, beta, devC, rowC );
+
+ // copy matrix C to host
+ stat = cublasGetMatrix(rowC, colC, sizeof(double), devC, rowC, C, CLDim);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("GetMatrix C error\n"); }
+
+ // free
+#if USE_CUB
+ CubDebugExit(g_allocator.DeviceFree(devA));
+#else
+ cublasFree(devA);
+#endif
+ //printf("CUBLAS double done ...\n");
+}
+
+void Gemm
+( char transA, char transB, BlasInt m, BlasInt n, BlasInt k,
+ const scomplex& alpha,
+ const scomplex* A, BlasInt ALDim,
+ const scomplex* B, BlasInt BLDim,
+ const scomplex& beta,
+ scomplex* C, BlasInt CLDim )
+{
+ EL_DEBUG_CSE
+ EL_DEBUG_ONLY(
+ if( std::toupper(transA) == 'N' )
+ {
+ if( ALDim < Max(m,1) )
+ LogicError("ALDim was too small: ALDim=",ALDim,",m=",m);
+ }
+ else
+ {
+ if( ALDim < Max(k,1) )
+ LogicError("ALDim was too small: ALDim=",ALDim,",k=",k);
+ }
+
+ if( std::toupper(transB) == 'N' )
+ {
+ if( BLDim < Max(k,1) )
+ LogicError("BLDim was too small: BLDim=",BLDim,",k=",k);
+ }
+ else
+ {
+ if( BLDim < Max(n,1) )
+ LogicError("BLDim was too small: BLDim=",BLDim,",n=",n);
+ }
+
+ if( CLDim < Max(m,1) )
+ LogicError("CLDim was too small: CLDim=",CLDim,",m=",m);
+ )
+
+ const char fixedTransA = transA;
+ const char fixedTransB = transB;
+
+ const mpi::Comm comm;
+ const Int commRank = mpi::Rank( comm );
+ if (commRank == 0) {
+ //printf("calling cublas Cgemm: m %d n %d k %d\n", m, n, k);
+ }
+
+ BlasInt rowA, colA, rowB, colB, rowC, colC;
+ // device memory size for A, B and C
+ BlasInt sizeA, sizeB, sizeC;
+ cuComplex *devA=NULL, *devB=NULL, *devC=NULL;
+
+ rowA = fixedTransA == 'T' ? k : m;
+ colA = fixedTransA == 'T' ? m : k;
+ rowB = fixedTransB == 'T' ? n : k;
+ colB = fixedTransB == 'T' ? k : n;
+ rowC = m;
+ colC = n;
+ sizeA = rowA * colA;
+ sizeB = rowB * colB;
+ sizeC = rowC * colC;
+
+ cublasStatus stat;
+ stat = cublasAlloc(sizeA+sizeB+sizeC, sizeof(cuComplex), (void **) &devA);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("Alloc A,B,C error\n"); }
+
+ devB = devA + sizeA;
+ devC = devB + sizeB;
+
+ // copy matrix A, B and C to device
+ stat = cublasSetMatrix(rowA, colA, sizeof(cuComplex), A, ALDim, devA, rowA);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("SetMatrix A error\n"); }
+
+ stat = cublasSetMatrix(rowB, colB, sizeof(cuComplex), B, BLDim, devB, rowB);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("SetMatrix B error\n"); }
+
+ if (beta.real() != 0.0 || beta.imag() != 0.0)
+ {
+ stat = cublasSetMatrix(rowC, colC, sizeof(cuComplex), C, CLDim, devC, rowC);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("SetMatrix C error\n"); }
+ }
+
+ // cublas<t>gemm
+ cublasCgemm
+ ( fixedTransA, fixedTransB, m, n, k,
+ *((cuComplex*) &alpha), devA, rowA, devB, rowB, *((cuComplex*) &beta), devC, rowC );
+
+ // copy matrix C to host
+ stat = cublasGetMatrix(rowC, colC, sizeof(cuComplex), devC, rowC, C, CLDim);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("GetMatrix C error\n"); }
+
+ // free
+ cublasFree(devA);
+}
+
+void Gemm
+( char transA, char transB, BlasInt m, BlasInt n, BlasInt k,
+ const dcomplex& alpha,
+ const dcomplex* A, BlasInt ALDim,
+ const dcomplex* B, BlasInt BLDim,
+ const dcomplex& beta,
+ dcomplex* C, BlasInt CLDim )
+{
+ EL_DEBUG_CSE
+ EL_DEBUG_ONLY(
+ if( std::toupper(transA) == 'N' )
+ {
+ if( ALDim < Max(m,1) )
+ LogicError("ALDim was too small: ALDim=",ALDim,",m=",m);
+ }
+ else
+ {
+ if( ALDim < Max(k,1) )
+ LogicError("ALDim was too small: ALDim=",ALDim,",k=",k);
+ }
+
+ if( std::toupper(transB) == 'N' )
+ {
+ if( BLDim < Max(k,1) )
+ LogicError("BLDim was too small: BLDim=",BLDim,",k=",k);
+ }
+ else
+ {
+ if( BLDim < Max(n,1) )
+ LogicError("BLDim was too small: BLDim=",BLDim,",n=",n);
+ }
+
+ if( CLDim < Max(m,1) )
+ LogicError("CLDim was too small: CLDim=",CLDim,",m=",m);
+ )
+
+ const char fixedTransA = transA;
+ const char fixedTransB = transB;
+
+ const mpi::Comm comm;
+ const Int commRank = mpi::Rank( comm );
+ if (commRank == 0) {
+ //printf("calling cublas Zgemm: m %d n %d k %d\n", m, n, k);
+ }
+
+ BlasInt rowA, colA, rowB, colB, rowC, colC;
+ // device memory size for A, B and C
+ BlasInt sizeA, sizeB, sizeC;
+ cuDoubleComplex *devA=NULL, *devB=NULL, *devC=NULL;
+
+ rowA = fixedTransA == 'T' ? k : m;
+ colA = fixedTransA == 'T' ? m : k;
+ rowB = fixedTransB == 'T' ? n : k;
+ colB = fixedTransB == 'T' ? k : n;
+ rowC = m;
+ colC = n;
+ sizeA = rowA * colA;
+ sizeB = rowB * colB;
+ sizeC = rowC * colC;
+
+ cublasStatus stat;
+ stat = cublasAlloc(sizeA+sizeB+sizeC, sizeof(cuDoubleComplex), (void **) &devA);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("Alloc A,B,C error\n"); }
+
+ devB = devA + sizeA;
+ devC = devB + sizeB;
+
+ // copy matrix A, B and C to device
+ stat = cublasSetMatrix(rowA, colA, sizeof(cuDoubleComplex), A, ALDim, devA, rowA);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("SetMatrix A error\n"); }
+
+ stat = cublasSetMatrix(rowB, colB, sizeof(cuDoubleComplex), B, BLDim, devB, rowB);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("SetMatrix B error\n"); }
+
+ if (beta.real() != 0.0 || beta.imag() != 0.0)
+ {
+ stat = cublasSetMatrix(rowC, colC, sizeof(cuDoubleComplex), C, CLDim, devC, rowC);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("SetMatrix C error\n"); }
+ }
+
+ cublasZgemm
+ ( fixedTransA, fixedTransB, m, n, k,
+ *((cuDoubleComplex*) &alpha), devA, rowA, devB, rowB, *((cuDoubleComplex*) &beta),
+ devC, rowC );
+
+ // copy matrix C to host
+ stat = cublasGetMatrix(rowC, colC, sizeof(cuDoubleComplex), devC, rowC, C, CLDim);
+ if (stat != CUBLAS_STATUS_SUCCESS) { RuntimeError("GetMatrix C error\n"); }
+
+ // free
+ cublasFree(devA);
+}
+
+} // namespace cublas
+} // namespace El
+
+#endif
+

View File

@@ -1,171 +0,0 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
from spack.package import *
class Elemental(CMakePackage):
"""Elemental: Distributed-memory dense and sparse-direct linear algebra
and optimization library."""
homepage = "https://libelemental.org"
url = "https://github.com/elemental/Elemental/archive/v0.87.7.tar.gz"
git = "https://github.com/elemental/Elemental.git"
license("Apache-2.0")
version("develop", branch="master")
version("0.87.7", sha256="7becfdbc223e9c72e65ae876d842c48d2037d13f83e9f41cea285e21b840d7d9")
version("0.87.6", sha256="b597987c99ddd3462e0619524c5b7f711177ae8ae541b1b961e11d96e15afc64")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
variant("shared", default=True, description="Enables the build of shared libraries")
variant("hybrid", default=True, description="Make use of OpenMP within MPI packing/unpacking")
variant(
"openmp_blas", default=False, description="Use OpenMP for threading in the BLAS library"
)
variant("c", default=False, description="Build C interface")
variant("parmetis", default=False, description="Enable ParMETIS")
variant("quad", default=False, description="Enable quad precision")
variant("int64", default=False, description="Use 64bit integers")
variant("cublas", default=False, description="Enable cuBLAS for local BLAS operations")
# When this variant is set remove the normal dependencies since
# Elemental has to build BLAS and ScaLAPACK internally
variant(
"int64_blas",
default=False,
description="Use 64bit integers for BLAS." " Requires local build of BLAS library.",
)
variant("scalapack", default=False, description="Build with ScaLAPACK library")
variant(
"build_type",
default="Release",
description="The build type to build",
values=("Debug", "Release"),
)
variant(
"blas",
default="openblas",
values=("openblas", "mkl", "accelerate", "essl"),
description="Enable the use of OpenBlas/MKL/Accelerate/ESSL",
)
variant(
"mpfr",
default=False,
description="Support GNU MPFR's" "arbitrary-precision floating-point arithmetic",
)
# Note that #1712 forces us to enumerate the different blas variants
depends_on("blas", when="~openmp_blas ~int64_blas")
# Hack to forward variant to openblas package
depends_on("openblas", when="blas=openblas ~openmp_blas ~int64_blas")
# Allow Elemental to build internally when using 8-byte ints
depends_on("openblas threads=openmp", when="blas=openblas +openmp_blas ~int64_blas")
depends_on("intel-mkl", when="blas=mkl")
depends_on("intel-mkl threads=openmp", when="blas=mkl +openmp_blas")
depends_on("intel-mkl@2017.1 +ilp64", when="blas=mkl +int64_blas")
depends_on("veclibfort", when="blas=accelerate")
depends_on("essl", when="blas=essl")
depends_on("essl threads=openmp", when="blas=essl +openmp_blas")
# Note that this forces us to use OpenBLAS until #1712 is fixed
depends_on("lapack", when="blas=openblas ~openmp_blas")
depends_on("netlib-lapack +external-blas", when="blas=essl")
depends_on("metis")
depends_on("metis +int64", when="+int64")
depends_on("mpi")
# Allow Elemental to build internally when using 8-byte ints
depends_on("scalapack", when="+scalapack ~int64_blas")
depends_on("gmp", when="+mpfr")
depends_on("mpc", when="+mpfr")
depends_on("mpfr", when="+mpfr")
patch("elemental_cublas.patch", when="+cublas")
patch("cmake_0.87.7.patch", when="@0.87.7")
conflicts("%intel@:17.0.2", when="@:0.87.7")
@property
def libs(self):
shared = True if "+shared" in self.spec else False
return find_libraries("libEl", root=self.prefix, shared=shared, recursive=True)
def cmake_args(self):
spec = self.spec
args = [
"-DCMAKE_INSTALL_MESSAGE:STRING=LAZY",
"-DCMAKE_C_COMPILER=%s" % spec["mpi"].mpicc,
"-DCMAKE_CXX_COMPILER=%s" % spec["mpi"].mpicxx,
"-DCMAKE_Fortran_COMPILER=%s" % spec["mpi"].mpifc,
"-DEL_PREFER_OPENBLAS:BOOL=TRUE",
"-DEL_DISABLE_SCALAPACK:BOOL=%s" % ("~scalapack" in spec),
"-DBUILD_SHARED_LIBS:BOOL=%s" % ("+shared" in spec),
"-DEL_HYBRID:BOOL=%s" % ("+hybrid" in spec),
"-DEL_C_INTERFACE:BOOL=%s" % ("+c" in spec),
"-DEL_DISABLE_PARMETIS:BOOL=%s" % ("~parmetis" in spec),
"-DEL_DISABLE_QUAD:BOOL=%s" % ("~quad" in spec),
"-DEL_USE_64BIT_INTS:BOOL=%s" % ("+int64" in spec),
"-DEL_USE_64BIT_BLAS_INTS:BOOL=%s" % ("+int64_blas" in spec),
"-DEL_DISABLE_MPFR:BOOL=%s" % ("~mpfr" in spec),
]
if self.spec.satisfies("%intel"):
ifort = env["SPACK_F77"]
intel_bin = os.path.dirname(ifort)
intel_root = os.path.dirname(intel_bin)
libfortran = find_libraries("libifcoremt", root=intel_root, recursive=True)
elif self.spec.satisfies("%gcc"):
# see <stage_folder>/debian/rules as an example:
mpif77 = Executable(spec["mpi"].mpif77)
libfortran = LibraryList(
mpif77("--print-file-name", "libgfortran.%s" % dso_suffix, output=str).strip()
)
elif self.spec.satisfies("%xl") or self.spec.satisfies("%xl_r"):
xl_fort = env["SPACK_F77"]
xl_bin = os.path.dirname(xl_fort)
xl_root = os.path.dirname(xl_bin)
libfortran = find_libraries("libxlf90_r", root=xl_root, recursive=True)
else:
libfortran = None
if libfortran:
args.append("-DGFORTRAN_LIB=%s" % libfortran.libraries[0])
# If using 64bit int BLAS libraries, elemental has to build
# them internally
if spec.satisfies("+int64_blas"):
args.extend(
[
"-DEL_BLAS_SUFFIX:STRING={0}".format(
("_64_" if "+int64_blas" in spec else "_")
),
"-DCUSTOM_BLAS_SUFFIX:BOOL=TRUE",
]
)
if spec.satisfies("+scalapack"):
args.extend(
[
"-DEL_LAPACK_SUFFIX:STRING={0}".format(
("_64_" if "+int64_blas" in spec else "_")
),
"-DCUSTOM_LAPACK_SUFFIX:BOOL=TRUE",
]
)
else:
math_libs = spec["lapack"].libs + spec["blas"].libs
if spec.satisfies("+scalapack"):
math_libs = spec["scalapack"].libs + math_libs
args.extend(["-DMATH_LIBS:STRING={0}".format(math_libs.ld_flags)])
return args

View File

@@ -22,6 +22,7 @@ class FluxSched(CMakePackage, AutotoolsPackage):
license("LGPL-3.0-only")
version("master", branch="master")
version("0.42.2", sha256="3a4a513c6539f2927e7a544f431e97456e50c71b63f8744d31e0dee3dc7fcc2e")
version("0.42.1", sha256="ab56b257e4918ad7e26ef6a375d0ea500a4929bf6633937f0c11c06e21db56b9")
version("0.41.0", sha256="c89baf72867031847748c157aa99f3b36755f2801df917aae66010d2112e10fe")
version("0.40.0", sha256="1484befcf8628b0af7833bf550d0bb3864db32b70f2c1bb363c35e30ada1ecc5")
@@ -110,6 +111,12 @@ class FluxSched(CMakePackage, AutotoolsPackage):
patch("no-valgrind.patch", when="@:0.20.0")
patch("jobid-sign-compare-fix.patch", when="@:0.22.0")
patch(
"https://github.com/flux-framework/flux-sched/pull/1338.patch?full_index=1",
when="@0.42.2 %oneapi@2025:",
sha256="b46579efa70176055f88493caa3fefbfea5a5663a33d9c561b71e83046f763c5",
)
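# Note: the ?full_index=1 query string pins GitHub's patch output to its full-index
# form, so the downloaded content, and hence the recorded sha256, stays stable.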
def url_for_version(self, version):
"""
Flux uses a fork of ZeroMQ's Collective Code Construction Contract

View File

@@ -20,6 +20,7 @@ class FluxSecurity(AutotoolsPackage):
license("LGPL-3.0-or-later")
version("master", branch="master")
version("0.14.0", sha256="fae93bdaf94110a614d2806dfddf8b70bb43f73d89a7cb6856f26ab9055afc70")
version("0.13.0", sha256="d61b8d0e6d6c8d7497e9542eadc110c496cbd57ba6a33bfd26271d805bda9869")
version("0.12.0", sha256="2876d1f10c4f898f2ff10d60ddb446af9c8a913dda69f0136d820ad1fdf28a93")
version("0.11.0", sha256="d1ef78a871155a252f07e4f0a636eb272d6c2048d5e0e943860dd687c6cf808a")

View File

@@ -17,6 +17,7 @@ class Fmt(CMakePackage):
license("MIT")
version("11.1.4", sha256="49b039601196e1a765e81c5c9a05a61ed3d33f23b3961323d7322e4fe213d3e6")
version("11.1.3", sha256="7df2fd3426b18d552840c071c977dc891efe274051d2e7c47e2c83c3918ba6df")
version("11.1.2", sha256="ef54df1d4ba28519e31bf179f6a4fb5851d684c328ca051ce5da1b52bf8b1641")
version("11.1.1", sha256="a25124e41c15c290b214c4dec588385153c91b47198dbacda6babce27edc4b45")

View File

@@ -12,8 +12,8 @@ class Gasnet(Package, CudaPackage, ROCmPackage):
"""GASNet is a language-independent, networking middleware layer that
provides network-independent, high-performance communication primitives
including Remote Memory Access (RMA) and Active Messages (AM). It has been
used to implement parallel programming models and libraries such as UPC,
UPC++, Co-Array Fortran, Legion, Chapel, and many others. The interface is
used to implement parallel programming models and libraries including UPC,
UPC++, multi-image Fortran, Legion, Chapel, and many others. The interface is
primarily intended as a compilation target and for use by runtime library
writers (as opposed to end users), and the primary goals are high
performance, interface portability, and expressiveness.
@@ -21,12 +21,12 @@ class Gasnet(Package, CudaPackage, ROCmPackage):
***NOTICE***: The GASNet library built by this Spack package is ONLY intended for
unit-testing purposes, and is generally UNSUITABLE FOR PRODUCTION USE.
The RECOMMENDED way to build GASNet is as an embedded library as configured
by the higher-level client runtime package (UPC++, Legion, etc), including
by the higher-level client runtime package (UPC++, Legion, Chapel, etc), including
system-specific configuration.
"""
homepage = "https://gasnet.lbl.gov"
url = "https://gasnet.lbl.gov/EX/GASNet-2021.3.0.tar.gz"
url = "https://gasnet.lbl.gov/EX/GASNet-2024.5.0.tar.gz"
git = "https://bitbucket.org/berkeleylab/gasnet.git"
maintainers("PHHargrove", "bonachea")
@@ -37,11 +37,26 @@ class Gasnet(Package, CudaPackage, ROCmPackage):
version("main", branch="stable")
version("master", branch="master")
# commit hash e2fdec corresponds to tag gex-2025.2.0-snapshot
version("2025.2.0-snapshot", commit="e2fdece76d86d7b4fa090fbff9b46eb98ce97177")
# Versions fetched from git require a Bootstrap step
def bootstrap_version():
return "@master:,2025.2.0-snapshot"
version("2024.5.0", sha256="f945e80f71d340664766b66290496d230e021df5e5cd88f404d101258446daa9")
version("2023.9.0", sha256="2d9f15a794e10683579ce494cd458b0dd97e2d3327c4d17e1fea79bd95576ce6")
version("2023.3.0", sha256="e1fa783d38a503cf2efa7662be591ca5c2bb98d19ac72a9bc6da457329a9a14f")
version("2022.9.2", sha256="2352d52f395a9aa14cc57d82957d9f1ebd928d0a0021fd26c5f1382a06cd6f1d")
version("2022.9.0", sha256="6873ff4ad8ebee49da4378f2d78095a6ccc31333d6ae4cd739b9f772af11f936")
version(
"2022.9.2",
deprecated=True,
sha256="2352d52f395a9aa14cc57d82957d9f1ebd928d0a0021fd26c5f1382a06cd6f1d",
)
version(
"2022.9.0",
deprecated=True,
sha256="6873ff4ad8ebee49da4378f2d78095a6ccc31333d6ae4cd739b9f772af11f936",
)
version(
"2022.3.0",
deprecated=True,
@@ -129,8 +144,8 @@ class Gasnet(Package, CudaPackage, ROCmPackage):
depends_on("mpi", when="conduits=mpi")
depends_on("libfabric", when="conduits=ofi")
depends_on("autoconf@2.69", type="build", when="@master:")
depends_on("automake@1.16:", type="build", when="@master:")
depends_on("autoconf@2.69", type="build", when=bootstrap_version())
depends_on("automake@1.16:", type="build", when=bootstrap_version())
conflicts("^hip@:4.4.0", when="+rocm")
@@ -139,7 +154,7 @@ class Gasnet(Package, CudaPackage, ROCmPackage):
depends_on("oneapi-level-zero@1.8.0:", when="+level_zero")
def install(self, spec, prefix):
if spec.satisfies("@master:"):
if spec.satisfies(Gasnet.bootstrap_version()):
bootstrapsh = Executable("./Bootstrap")
bootstrapsh()
# Record git-describe when fetched from git:
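bootstrap_version() centralizes the spec string naming every git-fetched version that needs the Bootstrap step; the comma in "@master:,2025.2.0-snapshot" reads as "any of these". A sketch of how the helper is consumed, mirroring the directives above:

depends_on("autoconf@2.69", type="build", when=bootstrap_version())  # in the class body
if spec.satisfies(Gasnet.bootstrap_version()):  # in install(), via the class qualifier
    Executable("./Bootstrap")()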

View File

@@ -80,7 +80,12 @@ class Geomodel(CMakePackage):
depends_on("pythia8", when="+pythia")
with when("+visualization"):
depends_on("hdf5+cxx")
depends_on("qt +gui +opengl +sql")
depends_on("qmake")
with when("^[virtuals=qmake] qt"):
depends_on("qt +gui +opengl +sql")
with when("^[virtuals=qmake] qt-base"):
depends_on("qt-base +gui +opengl +sql +widgets")
depends_on("qt-5compat")
depends_on("coin3d")
depends_on("soqt")
depends_on("opengl")
@@ -94,5 +99,8 @@ def cmake_args(self):
self.define_from_variant("GEOMODEL_BUILD_EXAMPLES", "examples"),
self.define_from_variant("GEOMODEL_BUILD_TOOLS", "tools"),
self.define_from_variant("CMAKE_CXX_STANDARD", "cxxstd"),
self.define(
"GEOMODEL_USE_QT6", self.spec.satisfies("+visualization ^[virtuals=qmake] qt-base")
),
]
return args

View File

@@ -19,7 +19,7 @@ class Glib(MesonPackage):
"""
homepage = "https://developer.gnome.org/glib/"
url = "https://download.gnome.org/sources/glib/2.82/glib-2.82.2.tar.xz"
url = "https://download.gnome.org/sources/glib/2.82/glib-2.82.5.tar.xz"
list_url = "https://download.gnome.org/sources/glib"
list_depth = 1
@@ -28,6 +28,7 @@ class Glib(MesonPackage):
license("LGPL-2.1-or-later")
# Even minor versions are stable, odd minor versions are development, only add even numbers
version("2.82.5", sha256="05c2031f9bdf6b5aba7a06ca84f0b4aced28b19bf1b50c6ab25cc675277cbc3f")
version("2.82.2", sha256="ab45f5a323048b1659ee0fbda5cecd94b099ab3e4b9abf26ae06aeb3e781fd63")
version("2.78.3", sha256="609801dd373796e515972bf95fc0b2daa44545481ee2f465c4f204d224b2bc21")
version("2.78.0", sha256="44eaab8b720877ce303c5540b657b126f12dc94972d9880b52959f43fb537b30")

View File

@@ -34,10 +34,11 @@ class Globalarrays(AutotoolsPackage):
version("5.6.1", sha256="b324deed49f930f55203e1d18294ce07dd02680b9ac0728ebc54f94a12557ebc")
version("5.6", sha256="a228dfbae9a6cfaae34694d7e56f589ac758e959b58f4bc49e6ef44058096767")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("fortran", type="build") # generated
depends_on("c", type="build")
depends_on("cxx", type="build")
depends_on("fortran", type="build")
variant("cxx", default=False, description="Enable C++")
variant("scalapack", default=False, description="Enable SCALAPACK")
variant(
"armci",
@@ -52,7 +53,6 @@ class Globalarrays(AutotoolsPackage):
depends_on("libfabric", when="armci=ofi")
depends_on("rdma-core", when="armci=openib")
depends_on("scalapack", when="+scalapack")
# See release https://github.com/GlobalArrays/ga/releases/tag/v5.7.1
@@ -74,6 +74,9 @@ def configure_args(self):
"--with-lapack={0}".format(lapack_libs),
]
if self.spec.satisfies("+cxx"):
args.append("--enable-cxx")
if self.spec.satisfies("+scalapack"):
scalapack_libs = self.spec["scalapack"].libs.ld_flags
args.append("--with-scalapack={0}".format(scalapack_libs))

View File

@@ -22,6 +22,7 @@ class Helics(CMakePackage):
version("develop", branch="develop", submodules=True)
version("main", branch="main", submodules=True)
version("master", branch="main", submodules=True)
version("3.6.1", sha256="d607c1b47dd5ae32f3076c4aa4aa584d37b6056a9bd049234494698ed95cd70f")
version("3.6.0", sha256="e111ac5d92e808f27e330afd1f8b8ca4d86adf6ccd74e3280f2d40fb3e0e2ce9")
version("3.5.3", sha256="f9ace240510b18caf642f55d08f9009a9babb203fbc032ec7d7d8aa6fd5e1553")
version("3.5.2", sha256="c2604694698a1e33c4a68f3d1c5ab0a228ef2bfca1b0d3bae94801dbd3b11048")

View File

@@ -90,6 +90,8 @@ class Hipsparselt(CMakePackage, ROCmPackage):
def setup_build_environment(self, env):
env.set("CXX", self.spec["hip"].hipcc)
if self.spec.satisfies("+asan"):
env.set("CC", f"{self.spec['llvm-amdgpu'].prefix}/bin/clang")
env.set("TENSILE_ROCM_ASSEMBLER_PATH", f"{self.spec['llvm-amdgpu'].prefix}/bin/clang++")
env.set(
"TENSILE_ROCM_OFFLOAD_BUNDLER_PATH",

View File

@@ -155,6 +155,17 @@ def determine_version(cls, lib):
ver = None
return ver
def setup_build_environment(self, env):
if self.spec.satisfies("@5.7: +asan"):
numa_inc = self.spec["numactl"].prefix.include
numa_lib = self.spec["numactl"].prefix.lib
env.set("CC", f"{self.spec['llvm-amdgpu'].prefix}/bin/clang")
env.set("CXX", f"{self.spec['llvm-amdgpu'].prefix}/bin/clang++")
env.set("ASAN_OPTIONS", "detect_leaks=0")
env.set("CFLAGS", f"-fsanitize=address -shared-libasan -I{numa_inc} -L{numa_lib}")
env.set("CXXFLAGS", f"-fsanitize=address -shared-libasan -I{numa_inc} -L{numa_lib}")
env.set("LDFLAGS", "-fuse-ld=lld")
def cmake_args(self):
spec = self.spec

View File

@@ -15,7 +15,10 @@
class Hydrogen(CachedCMakePackage, CudaPackage, ROCmPackage):
"""Hydrogen: Distributed-memory dense and sparse-direct linear algebra
and optimization library. Based on the Elemental library."""
and optimization library.
Based on the Elemental library.
"""
homepage = "https://libelemental.org"
url = "https://github.com/LLNL/Elemental/archive/v1.5.1.tar.gz"
@@ -31,8 +34,8 @@ class Hydrogen(CachedCMakePackage, CudaPackage, ROCmPackage):
version("1.5.2", sha256="a902cad3962471216cfa278ba0561c18751d415cd4d6b2417c02a43b0ab2ea33")
version("1.5.1", sha256="447da564278f98366906d561d9c8bc4d31678c56d761679c2ff3e59ee7a2895c")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("c", type="build")
depends_on("cxx", type="build")
# Older versions are no longer supported.
variant("shared", default=True, description="Enables the build of shared libraries.")
@@ -71,13 +74,6 @@ class Hydrogen(CachedCMakePackage, CudaPackage, ROCmPackage):
description="Use OpenMP taskloops instead of parallel for loops",
)
# Users should spec this on their own on the command line, no?
# This doesn't affect Hydrogen itself at all. Not one bit.
# variant(
# "openmp_blas",
# default=False,
# description="Use OpenMP for threading in the BLAS library")
variant("test", default=False, description="Builds test suite")
conflicts("+cuda", when="+rocm", msg="CUDA and ROCm support are mutually exclusive")
@@ -90,22 +86,25 @@ class Hydrogen(CachedCMakePackage, CudaPackage, ROCmPackage):
depends_on("blas")
depends_on("lapack")
# Note that #1712 forces us to enumerate the different blas variants
# Note that this forces us to use OpenBLAS until #1712 is fixed
depends_on("openblas", when="blas=openblas")
depends_on("openblas +ilp64", when="blas=openblas +int64_blas")
depends_on("openblas@0.3.21:0.3.23", when="blas=openblas arch=ppc64le:")
with when("blas=openblas"):
requires("^[virtuals=blas,lapack] openblas")
requires("^[virtuals=blas,lapack] openblas~ilp64", when="~int64_blas")
requires("^[virtuals=blas,lapack] openblas+ilp64", when="+int64_blas")
requires("^[virtuals=blas,lapack] openblas@0.3.21:0.3.23", when="arch=ppc64le:")
depends_on("intel-mkl", when="blas=mkl")
depends_on("intel-mkl +ilp64", when="blas=mkl +int64_blas")
with when("blas=mkl"):
requires("^[virtuals=blas,lapack] intel-oneapi-mkl")
requires("^[virtuals=blas,lapack] intel-oneapi-mkl ~ilp64", when="~int64_blas")
requires("^[virtuals=blas,lapack] intel-oneapi-mkl +ilp64", when="+int64_blas")
# I don't think this is true...
depends_on("veclibfort", when="blas=accelerate")
depends_on("essl", when="blas=essl")
depends_on("essl +ilp64", when="blas=essl +int64_blas")
depends_on("netlib-lapack +external-blas", when="blas=essl")
with when("blas=essl"):
requires("^[virtuals=blas] essl")
requires("^[virtuals=blas] essl ~ilp64", when="~int64_blas")
requires("^[virtuals=blas] essl +ilp64", when="+int64_blas")
requires("^[virtuals=lapack] netlib-lapack +external-blas")
depends_on("cray-libsci", when="blas=libsci")
@@ -117,14 +116,12 @@ class Hydrogen(CachedCMakePackage, CudaPackage, ROCmPackage):
depends_on("aluminum +rocm +ht", when="+al +rocm")
for arch in CudaPackage.cuda_arch_values:
depends_on("aluminum +cuda cuda_arch=%s" % arch, when="+al +cuda cuda_arch=%s" % arch)
depends_on(f"aluminum +cuda cuda_arch={arch}", when=f"+al +cuda cuda_arch={arch}")
# variants +rocm and amdgpu_targets are not automatically passed to
# dependencies, so do it manually.
for val in ROCmPackage.amdgpu_targets:
depends_on(
"aluminum +rocm amdgpu_target=%s" % val, when="+al +rocm amdgpu_target=%s" % val
)
depends_on(f"aluminum +rocm amdgpu_target={val}", when=f"+al +rocm amdgpu_target={val}")
depends_on("cuda@11.0.0:", when="+cuda")
depends_on("hipcub +rocm", when="+rocm +cub")
@@ -142,8 +139,7 @@ def libs(self):
return find_libraries("libHydrogen", root=self.prefix, shared=shared, recursive=True)
def cmake_args(self):
args = []
return args
return []
def get_cuda_flags(self):
spec = self.spec
@@ -188,9 +184,7 @@ def initconfig_compiler_entries(self):
entries.append(cmake_cache_string("OpenMP_CXX_FLAGS", "-fopenmp=libomp"))
entries.append(cmake_cache_string("OpenMP_CXX_LIB_NAMES", "libomp"))
entries.append(
cmake_cache_string(
"OpenMP_libomp_LIBRARY", "{0}/lib/libomp.dylib".format(clang_root)
)
cmake_cache_string("OpenMP_libomp_LIBRARY", f"{clang_root}/lib/libomp.dylib")
)
return entries
@@ -259,9 +253,9 @@ def initconfig_package_entries(self):
)
# CMAKE_PREFIX_PATH should handle this
entries.append(cmake_cache_string("OpenBLAS_DIR", spec["openblas"].prefix))
elif spec.satisfies("blas=mkl") or spec.satisfies("^intel-mkl"):
elif spec.satisfies("blas=mkl"):
entries.append(cmake_cache_option("Hydrogen_USE_MKL", True))
elif spec.satisfies("blas=essl") or spec.satisfies("^essl"):
elif spec.satisfies("blas=essl"):
entries.append(cmake_cache_string("BLA_VENDOR", "IBMESSL"))
# IF IBM ESSL is used it needs help finding the proper LAPACK libraries
entries.append(
@@ -280,7 +274,7 @@ def initconfig_package_entries(self):
)
elif spec.satisfies("blas=accelerate"):
entries.append(cmake_cache_option("Hydrogen_USE_ACCELERATE", True))
elif spec.satisfies("^netlib-lapack"):
elif spec.satisfies("^[virtuals=blas,lapack] netlib-lapack"):
entries.append(cmake_cache_string("BLA_VENDOR", "Generic"))
return entries

View File

@@ -19,6 +19,7 @@ class JacamarCi(GoPackage):
license("Apache-2.0 OR MIT")
version("develop", branch="develop")
version("0.25.0", sha256="20626ed931f5bf6ba1d5a2dd56af5793efa69a4f355bdac9b8bf742aaf806653")
version("0.24.2", sha256="d2b8be464b88a92df0ad2ba1e846226b993c4162779432cb8366fb9bca5c40db")
version("0.24.1", sha256="fe1036fee2e97e38457212bf1246895803eeb6e1a6aa1ecd24eba1d3ea994029")
version("0.23.0", sha256="796679e13ece5f88dd7d4a4f40a27a87a6f3273085bb07043b258a612a4b43d3")

View File

@@ -3,6 +3,10 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import subprocess
import sys
import llnl.util.filesystem as fsys
import llnl.util.tty as tty
from spack.package import *
@@ -57,19 +61,29 @@ def setup_run_environment(self, env):
@on_package_attributes(run_tests=True)
@run_after("install")
def build_test(self):
testdir = "smoke_test_build"
testdir = join_path(self.stage.source_path, "smoke_test_build")
cmakeExampleDir = join_path(self.stage.source_path, "examples")
cmake_args = [
cmakeExampleDir,
"-DBUILD_SHARED_LIBS=ON",
self.define("CMAKE_PREFIX_PATH", self.prefix),
]
adapter0_test_path = join_path(testdir, "adaptor0/adaptor0_test")
if sys.platform == "win32":
# Specify ninja generator for `cmake` call used to generate test artifact
# (this differs from the build of `libcatalyst` itself); if unspecified, the
# default is to use Visual Studio, which generates a more-complex path
# (adapter0/<CONFIG>/adaptor0_test rather than adaptor0/adaptor0_test).
cmake_args.append("-GNinja")
# To run the test binary on Windows, we need to construct an rpath
# for the current package being tested, including the package
# itself
fsys.make_package_test_rpath(self, adapter0_test_path)
cmake = which(self.spec["cmake"].prefix.bin.cmake)
with working_dir(testdir, create=True):
cmake(*cmake_args)
cmake(*(["--build", "."]))
tty.info("Running Catalyst test")
res = subprocess.run(["adaptor0/adaptor0_test", "catalyst"])
res = subprocess.run([adapter0_test_path, "catalyst"])
assert res.returncode == 0

View File

@@ -11,10 +11,11 @@ class Libmd(AutotoolsPackage):
macOS, Solaris) libraries and lacking on others like GNU systems."""
homepage = "https://www.hadrons.org/software/libmd/"
url = "https://archive.hadrons.org/software/libmd/libmd-1.0.3.tar.xz"
url = "https://archive.hadrons.org/software/libmd/libmd-1.1.0.tar.xz"
maintainers("haampie")
version("1.1.0", sha256="1bd6aa42275313af3141c7cf2e5b964e8b1fd488025caf2f971f43b00776b332")
version("1.0.4", sha256="f51c921042e34beddeded4b75557656559cf5b1f2448033b4c1eec11c07e530f")
version("1.0.3", sha256="5a02097f95cc250a3f1001865e4dbba5f1d15554120f95693c0541923c52af4a")
version("1.0.2", sha256="dc66b8278f82e7e1bf774fbd4bc83a0348e8f27afa185b2c2779cfcb3da25013")

View File

@@ -10,9 +10,14 @@ class Librsvg(AutotoolsPackage):
homepage = "https://wiki.gnome.org/Projects/LibRsvg"
url = "https://download.gnome.org/sources/librsvg/2.44/librsvg-2.44.14.tar.xz"
list_url = "https://download.gnome.org/sources/librsvg"
list_depth = 1
license("LGPL-2.1-or-later")
license("LGPL-2.1-or-later", checked_by="wdconinc")
version("2.58.2", sha256="18e9d70c08cf25f50d610d6d5af571561d67cf4179f962e04266475df6e2e224")
version("2.57.3", sha256="1b2267082c0b77ef93b15747a5c754584eb5886baf2d5a08011cde0659c2c479")
version("2.56.4", sha256="ea87fdcf5159348fcb08b14c43e91a9d3d9e45dc2006a875d1711bb65b6740f5")
version("2.56.2", sha256="3ec3c4d8f73e0ba4b9130026969e8371c092b734298d36e2fdb3eb4afcec1200")
version("2.51.0", sha256="89d32e38445025e1b1d9af3dd9d3aeb9f6fce527aeecbecf38b369b34c80c038")
version("2.50.2", sha256="6211f271ce4cd44a7318190d36712e9cea384a933d3e3570004edeb210a056d3")
@@ -27,6 +32,8 @@ class Librsvg(AutotoolsPackage):
depends_on("gobject-introspection", type="build")
depends_on("pkgconfig", type="build")
# rust minimal version also in `configure` file
depends_on("rust@1.70:", when="@2.57:", type="build")
# rust minimal version from NEWS file
depends_on("rust@1.65:", when="@2.56.1:", type="build")
# upper bound because "Unaligned references to packed fields are a hard
@@ -36,8 +43,9 @@ class Librsvg(AutotoolsPackage):
depends_on("gtk-doc", type="build", when="+doc")
# requirements according to `configure` file
depends_on("cairo@1.16:+gobject+png", when="@2.50:")
depends_on("cairo@1.15.12:+gobject+png", when="@2.44.14:")
depends_on("cairo@1.17:", when="@2.57:")
depends_on("cairo@1.16:", when="@2.50:")
depends_on("cairo@1.15.12:", when="@2.44.14:")
depends_on("cairo@1.2.0:+gobject+png")
depends_on("libcroco@0.6.1:", when="@:2.44.14")
depends_on("gdk-pixbuf@2.20:")
@@ -46,6 +54,7 @@ class Librsvg(AutotoolsPackage):
depends_on("glib@2.12:")
depends_on("harfbuzz@2:", when="@2.50:")
depends_on("libxml2@2.9:")
depends_on("pango@1.50:", when="@2.57.1:")
depends_on("pango@1.46:", when="@2.51:")
depends_on("pango@1.38:")

View File

@@ -315,6 +315,11 @@ def setup_run_environment(self, env):
def setup_dependent_run_environment(self, env, dependent_spec):
llvm_amdgpu_home = self.spec["llvm-amdgpu"].prefix
env.prepend_path("LD_LIBRARY_PATH", llvm_amdgpu_home + "/lib")
# Required for enabling asan on dependent packages
for root, _, files in os.walk(self.spec["llvm-amdgpu"].prefix):
if "libclang_rt.asan-x86_64.so" in files:
env.prepend_path("LD_LIBRARY_PATH", root)
env.prune_duplicate_paths("LD_LIBRARY_PATH")
@run_after("install")
def post_install(self):

View File

@@ -19,7 +19,9 @@ class Meson(PythonPackage):
license("Apache-2.0")
version("1.5.1", sha256="55f6acd5bf72c14d4aa5a781993633f84a1d117bdf2c2057735902ced9b81390")
version("1.7.0", sha256="a6ca46e2a11a0278bb6492ecd4e0520ff441b164ebfdef1e012b11beb848d26e")
version("1.6.1", sha256="4889795777b536ea1a351982f3ef7c7b06a786ccb47036daba63cc5757c59edb")
version("1.5.2", sha256="fb41882bef26ffc02647d9978cba502a4accdf2e94c0a6dc9cc498dd7463381e")
version("1.4.2", sha256="11d1336fe35e1ade57510a846a31d7dc2e3b6ac1e2491c2831bce5a2a192ba0d")
version("1.3.2", sha256="683082fb3c5cddf203b21d29bdf4c227e2f7964da5324a15e1a5f7db94322b4b")
version("1.2.2", sha256="1caa0ef6082e311bdca9836e7907f548b8c3f041a42ed41f0ff916b83ac7dddd")
@@ -27,6 +29,7 @@ class Meson(PythonPackage):
version("1.0.2", sha256="1f1239c3091668643f7d2086663d6afd8cc87fbab84fe7462bc18b9ba6d65de8")
with default_args(deprecated=True):
version("1.5.1", sha256="55f6acd5bf72c14d4aa5a781993633f84a1d117bdf2c2057735902ced9b81390")
version("1.2.1", sha256="e1f3b32b636cc86496261bd89e63f00f206754697c7069788b62beed5e042713")
version("1.2.0", sha256="603489f0aaa6305f806c6cc4a4455a965f22290fc74f65871f589b002110c790")
version("1.1.0", sha256="f29a3e14062043d75e82d16f1e41856e6b1ed7a7c016e10c7b13afa7ee6364cc")

View File

@@ -53,6 +53,8 @@ class Mgard(CMakePackage, CudaPackage):
depends_on("pkgconfig", type=("build",), when="@2022-11-18:")
depends_on("zstd")
depends_on("protobuf@3.4:", when="@2022-11-18:")
# See https://github.com/CODARcode/MGARD/issues/240
depends_on("protobuf@:3.28", when="@:2023-12-09")
depends_on("libarchive", when="@2021-11-12:")
depends_on("tclap", when="@2021-11-12")
depends_on("yaml-cpp", when="@2021-11-12:")

View File

@@ -19,6 +19,7 @@ class Multicharge(CMakePackage, MesonPackage):
build_system("cmake", "meson", default="meson")
version("0.3.1", sha256="180541714c26804a2d66edd892c8cd4cb40a21acbaf7edb24aaf04d580368b97")
version("0.3.0", sha256="e8f6615d445264798b12d2854e25c93938373dc149bb79e6eddd23fc4309749d")
variant("openmp", default=True, description="Enable OpenMP support")
@@ -27,6 +28,12 @@ class Multicharge(CMakePackage, MesonPackage):
depends_on("mctc-lib build_system=cmake", when="build_system=cmake")
depends_on("mctc-lib build_system=meson", when="build_system=meson")
def url_for_version(self, version):
if self.spec.satisfies("@:0.3.0"):
return f"https://github.com/grimme-lab/multicharge/releases/download/v{version}/multicharge-{version}.tar.xz"
else:
return f"https://github.com/grimme-lab/multicharge/releases/download/v{version}/multicharge-{version}-source.tar.xz"
class CMakeBuilder(cmake.CMakeBuilder):
def cmake_args(self):

View File

@@ -18,6 +18,7 @@ class NvplFft(Package):
license("UNKNOWN")
version("0.4.0.1", sha256="e0309f28a98a5f920919a9c6a766b89b507907bde66e665e0a239005c6942781")
version("0.3.0", sha256="e20791b77fa705e5a4f7aa5dada39b2a41e898189e0e60e680576128d532269b")
version("0.2.0.2", sha256="264343405aad6aca451bf8bd0988b6217b2bb17fd8f99394b83e04d9ab2f7f91")
version("0.1.0", sha256="0344f8e15e5b40f4d552f7013fe04a32e54a092cc3ebede51ddfce74b44c6e7d")

View File

@@ -940,6 +940,9 @@ def cmake_args(self):
if spec.variants["cuda_arch"].value[0] != "none":
cuda_arch = spec.variants["cuda_arch"].value
args.append(self.define("CUDA_ARCH_BIN", " ".join(cuda_arch)))
# https://github.com/opencv/opencv/pull/23021
if spec.satisfies("@4.9: ^cmake@3.18:"):
args.append(self.define("ENABLE_CUDA_FIRST_CLASS_LANGUAGE", True))
# TODO: this CMake flag is deprecated
if spec.target.family == "ppc64le":

View File

@@ -0,0 +1,39 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import glob
import os
from spack.package import *
class Plantuml(Package):
"""PlantUML is a highly versatile tool that facilitates the rapid
and straightforward creation of a wide array of diagrams."""
homepage = "https://plantuml.com"
url = "https://github.com/plantuml/plantuml/releases/download/v1.2025.1/plantuml-lgpl-1.2025.1.jar"
maintainers("greenc-FNAL", "knoepfel", "marcpaterno")
license("LGPL-3.0-or-later", checked_by="greenc-FNAL")
version(
"1.2025.1",
sha256="b08112f0c8ac2a2085c8c4a81ac9eac7bc5a3413a492c252cad4d39e473d9d6d",
expand=False,
)
depends_on("java@8.0:", type="run")
depends_on("graphviz", type="run")
def install(self, spec, prefix):
mkdirp(prefix.bin)
rename(glob.glob("plantuml-*.jar")[0], "plantuml.jar")
install("plantuml.jar", prefix.bin)
plantuml_wrapper = join_path(os.path.dirname(__file__), "plantuml")
install(plantuml_wrapper, prefix.bin.plantuml)
def setup_run_environment(self, env):
env.set("PLANTUML_JAR_LOCATION", join_path(self.prefix.bin, "plantuml.jar"))

View File

@@ -0,0 +1,5 @@
#!/bin/bash
if [[ "$*" != *"-gui"* ]]; then
VMARGS="-Djava.awt.headless=true"
fi
exec java $VMARGS -jar "$PLANTUML_JAR_LOCATION" "$@"

View File

@@ -15,6 +15,7 @@ class Protobuf(CMakePackage):
license("BSD-3-Clause")
version("3.29.3", sha256="c8d0ed0085f559444f70311791cf7aef414246b9942441443963184b534dbf9e")
version("3.28.2", sha256="1b6b6a7a7894f509f099c4469b5d4df525c2f3c9e4009e5b2db5b0f66cb8ee0e")
version("3.27.5", sha256="a4aa92d0a207298149bf553d9a3192f3562eb91740086f50fa52331e60fa480c")
version("3.26.1", sha256="f3c0830339eaa5036eba8ff8ce7fca5aa3088f7d616f7c3713d946f611ae92bf")

View File

@@ -0,0 +1,27 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PyAiojobs(PythonPackage):
"""Jobs scheduler for managing background task (asyncio)."""
homepage = "https://github.com/aio-libs/aiojobs"
pypi = "aiojobs/aiojobs-1.3.0.tar.gz"
maintainers("alecbcs")
license("Apache-2.0", checked_by="alecbcs")
version("1.3.0", sha256="03074c884b3dc388b8d798c0de24ec17d72b2799018497fda8062c0431a494b5")
variant("aiohttp", default=False, description="Enable aiohttp integration")
depends_on("python@3.8:", type=("build", "run"))
depends_on("py-setuptools@46.4:", type="build")
depends_on("py-async-timeout@4:", type=("build", "run"), when="^python@:3.10")
depends_on("py-aiohttp@3.9:", type=("build", "run"), when="+aiohttp")

View File

@@ -29,7 +29,8 @@ class PyCudf(PythonPackage):
depends_on("py-cython", type="build")
depends_on("py-numba@0.40.0:", type=("build", "run"))
depends_on("py-numpy@1.14.4:", type=("build", "run"))
depends_on("py-pyarrow+cuda+orc+parquet", type=("build", "run"))
depends_on("py-pyarrow", type=("build", "run"))
depends_on("arrow+cuda+orc+parquet")
depends_on("py-pandas@0.23.4:", type=("build", "run"))
depends_on("py-rmm", type=("build", "run"))
depends_on("cuda@10:")

View File

@@ -21,5 +21,6 @@ class PyDaskExpr(PythonPackage):
# Can't do circular run-time dependencies yet?
# depends_on("py-dask@2024.7.1", type="run")
depends_on("py-pyarrow@7: +dataset", type="run")
depends_on("py-pyarrow@7:", type="run")
depends_on("arrow+dataset")
depends_on("py-pandas@2:", type="run")

View File

@@ -39,7 +39,8 @@ class PyDatasets(PythonPackage):
depends_on("py-fsspec@:0.8.0", when="^python@:3.7")
depends_on("py-huggingface-hub@:0.0")
depends_on("py-importlib-metadata", when="^python@:3.7")
depends_on("py-pyarrow@1:3+parquet")
depends_on("py-pyarrow@1:3")
depends_on("arrow+parquet")
depends_on("py-tqdm@4.27:4.49")
with when("@2.8.0"):
depends_on("py-responses@:0.18")
@@ -49,7 +50,8 @@ class PyDatasets(PythonPackage):
depends_on("py-dill@:0.3.6")
depends_on("py-fsspec@2021.11.1:+http")
depends_on("py-huggingface-hub@0.2:0")
depends_on("py-pyarrow@6:+parquet")
depends_on("py-pyarrow@6:")
depends_on("arrow+parquet")
depends_on("py-tqdm@4.62.1:")
depends_on("python@3.7:")
with when("@2.20.0:"):
@@ -57,7 +59,8 @@ class PyDatasets(PythonPackage):
depends_on("py-dill@0.3.0:0.3.8") # temporary upper bound
depends_on("py-fsspec@2023.1.0:2024.5.0+http")
depends_on("py-huggingface-hub@0.21.2:")
depends_on("py-pyarrow@15:+parquet+dataset")
depends_on("py-pyarrow@15:")
depends_on("arrow+parquet+dataset")
depends_on("py-requests@2.32.2:")
depends_on("py-tqdm@4.66.3:")
depends_on("python@3.8:")

View File

@@ -25,6 +25,7 @@ class PyMatplotlib(PythonPackage):
license("Apache-2.0")
maintainers("adamjstewart", "rgommers")
version("3.10.1", sha256="e8d2d0e3881b129268585bf4765ad3ee73a4591d77b9a18c214ac7e3a79fb2ba")
version("3.10.0", sha256="b886d02a581b96704c9d1ffe55709e49b4d2d52709ccebc4be42db856e511278")
version("3.9.4", sha256="1e00e8be7393cbdc6fedfa8a6fba02cf3e83814b285db1c60b906a023ba41bc3")
version("3.9.3", sha256="cd5dbbc8e25cad5f706845c4d100e2c8b34691b412b93717ce38d8ae803bcfa5")

View File

@@ -15,6 +15,12 @@ class PyNetworkx(PythonPackage):
license("BSD-3-Clause")
version("3.4.2", sha256="307c3669428c5362aab27c8a1260aa8f47c4e91d3891f48be0141738d8d053e1")
version("3.4.1", sha256="f9df45e85b78f5bd010993e897b4f1fdb242c11e015b101bd951e5c0e29982d8")
version("3.4", sha256="1269b90f8f0d3a4095f016f49650f35ac169729f49b69d0572b2bb142748162b")
version("3.3", sha256="0c127d8b2f4865f59ae9cb8aafcd60b5c70f3241ebd66f7defad7c4ab90126c9")
version("3.2.1", sha256="9f1bb5cf3409bf324e0a722c20bdb4c20ee39bf1c30ce8ae499c8502b0b5e0c6")
version("3.2", sha256="bda29edf392d9bfa5602034c767d28549214ec45f620081f0b74dc036a1fbbc1")
version("3.1", sha256="de346335408f84de0eada6ff9fafafff9bcda11f0a0dfaa931133debb146ab61")
version("3.0", sha256="9a9992345353618ae98339c2b63d8201c381c2944f38a2ab49cb45a4c667e412")
version("2.8.6", sha256="bd2b7730300860cbd2dafe8e5af89ff5c9a65c3975b352799d87a6238b4301a6")
@@ -41,18 +47,29 @@ class PyNetworkx(PythonPackage):
description="Optional requirements that may require extra steps to install",
)
depends_on("python@3.10:", when="@3.3:", type=("build", "run"))
depends_on("python@3.9:", when="@3.2:", type=("build", "run"))
depends_on("python@3.8:", when="@2.7:", type=("build", "run"))
depends_on("python@3.7:", when="@2.6:", type=("build", "run"))
depends_on("py-setuptools", type="build")
depends_on("py-setuptools@61.2:", type="build", when="@3.2:")
with when("+default"):
# From requirements/default.txt
depends_on("py-numpy@1.24:", when="@3.4:", type=("build", "run"))
depends_on("py-numpy@1.23:", when="@3.3:", type=("build", "run"))
depends_on("py-numpy@1.22:", when="@3.2:", type=("build", "run"))
depends_on("py-numpy@1.20:", when="@3:", type=("build", "run"))
depends_on("py-numpy@1.19:", when="@2.8.6:", type=("build", "run"))
# https://github.com/networkx/networkx/pull/7390
depends_on("py-numpy@:1", when="@:3.2", type=("build", "run"))
depends_on("py-scipy@1.11.2:", when="@3.2:", type=("build", "run"))
depends_on("py-scipy@1.8:", when="@2.8.6:", type=("build", "run"))
depends_on("py-matplotlib@3.8:", when="@3.4:", type=("build", "run"))
depends_on("py-matplotlib@3.5:", when="@3.2:", type=("build", "run"))
depends_on("py-matplotlib@3.4:", when="@2.8.6:", type=("build", "run"))
depends_on("py-pandas@2.0:", when="@3.4:", type=("build", "run"))
depends_on("py-pandas@1.4:", when="@3.2:", type=("build", "run"))
depends_on("py-pandas@1.3:", when="@2.8.6:", type=("build", "run"))
# Historical dependencies
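Conditions compose inside the +default block: each dependency applies only when the variant is enabled and its own when= also matches, as in this sketch:

with when("+default"):
    # effective condition: +default AND @3.4:
    depends_on("py-numpy@1.24:", when="@3.4:", type=("build", "run"))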

View File

@@ -45,11 +45,16 @@ class PyPip(Package, PythonExtension):
version("9.0.1", sha256="690b762c0a8460c303c089d5d0be034fb15a5ea2b75bdf565f40421f542fefb0")
extends("python")
depends_on("python@3.8:", when="@24.1:", type=("build", "run"))
depends_on("python@3.7:", when="@22:", type=("build", "run"))
# Uses collections.MutableMapping
depends_on("python@:3.9", when="@:19.1", type=("build", "run"))
with default_args(type=("build", "run")):
depends_on("python@3.8:", when="@24.1:")
depends_on("python@3.7:", when="@22:")
# Uses pkgutil.ImpImporter
depends_on("python@:3.11", when="@:23.1.1")
# Uses collections.MutableMapping
depends_on("python@:3.9", when="@:19.1")
resource(
name="pip-bootstrap",

View File

@@ -5,7 +5,7 @@
from spack.package import *
class PyPyarrow(PythonPackage, CudaPackage):
class PyPyarrow(PythonPackage):
"""A cross-language development platform for in-memory data.
This package contains the Python bindings.
@@ -19,6 +19,7 @@ class PyPyarrow(PythonPackage, CudaPackage):
license("Apache-2.0")
version("19.0.1", sha256="3bf266b485df66a400f282ac0b6d1b500b9d2ae73314a153dbe97d6d5cc8a99e")
version("16.1.0", sha256="15fbb22ea96d11f0b5768504a3f961edab25eaf4197c341720c4a387f6c60315")
version("15.0.2", sha256="9c9bc803cb3b7bfacc1e96ffbfd923601065d9d3f911179d81e72d99fd74a3d9")
version("14.0.2", sha256="36cef6ba12b499d864d1def3e990f97949e0b79400d08b7cf74504ffbd3eb025")
@@ -36,39 +37,31 @@ class PyPyarrow(PythonPackage, CudaPackage):
version("0.11.0", sha256="07a6fd71c5d7440f2c42383dd2c5daa12d7f0a012f1e88288ed08a247032aead")
version("0.9.0", sha256="7db8ce2f0eff5a00d6da918ce9f9cfec265e13f8a119b4adb1595e5b19fd6242")
depends_on("cxx", type="build") # generated
depends_on("cxx", type="build")
variant("parquet", default=False, description="Build with Parquet support")
variant("orc", default=False, description="Build with orc support")
variant("dataset", default=False, description="Build with Dataset support")
with default_args(type="build"):
# CMakeLists.txt
depends_on("cmake@3.16:", when="@13:")
depends_on("cmake@3.5:", when="@11:")
depends_on("cmake@3.2:", when="@0.17:")
depends_on("cmake@2.7:")
conflicts("~parquet", when="+dataset")
# cmake_modules and pyarrow/__init__.py
depends_on("pkgconfig")
depends_on("cmake@3.0.0:", type="build")
depends_on("pkgconfig", type="build")
depends_on("python@3.8:", type=("build", "run"), when="@13:")
depends_on("python@3.7:", type=("build", "run"), when="@7:")
depends_on("python@3.6:", type=("build", "run"), when="@3:")
depends_on("python@3.5:", type=("build", "run"), when="@0.17:")
depends_on("py-setuptools", type="build")
depends_on("py-setuptools@40.1.0:", type="build", when="@10.0.1:")
depends_on("py-setuptools@38.6.0:", type="build", when="@7:")
depends_on("py-setuptools-scm@:7", type="build", when="@0.15:")
depends_on("py-cython", type="build")
depends_on("py-cython@0.29.31:", type="build", when="@14:")
depends_on("py-cython@0.29.31:2", type="build", when="@12:13")
depends_on("py-cython@0.29.22:2", type="build", when="@8:11")
depends_on("py-cython@0.29:2", type="build", when="@0.15:7")
depends_on("py-cython@:2", type="build", when="@:0.14")
# in newer pip versions --install-option does not exist
depends_on("py-pip@:23.0", type="build")
depends_on("py-numpy@1.16.6:", type=("build", "run"), when="@3:")
# Prior to python 3.9 numpy must be >=0.14,<1.25
depends_on("py-numpy@0.14:1.24", when="^python@:3.8", type=("build", "run"))
depends_on("py-numpy@1.25:", when="^python@3.9:", type=("build", "run"))
# https://github.com/apache/arrow/issues/39532
depends_on("py-numpy@:1", when="@:15", type=("build", "run"))
# pyproject.toml, setup.py
depends_on("py-cython@0.29.31:", when="@14:")
depends_on("py-cython@0.29.31:2", when="@12:13")
depends_on("py-cython@0.29.22:2", when="@8:11")
depends_on("py-cython@0.29:2", when="@0.15:7")
depends_on("py-cython@:2", when="@:0.14")
depends_on("py-setuptools-scm@8:+toml", when="@17:")
depends_on("py-setuptools-scm", when="@16")
depends_on("py-setuptools-scm@:7", when="@0.15:15")
depends_on("py-setuptools@64:", when="@17:")
depends_on("py-setuptools@40.1:", when="@10.0.1:")
depends_on("py-setuptools@38.6:", when="@7:")
depends_on("py-setuptools")
arrow_versions = (
"@0.9.0",
@@ -87,29 +80,41 @@ class PyPyarrow(PythonPackage, CudaPackage):
"@14.0.2",
"@15.0.2",
"@16.1.0",
"@19.0.1",
)
for v in arrow_versions:
depends_on("arrow+python" + v, when=v)
depends_on("arrow+parquet+python" + v, when="+parquet" + v)
depends_on("arrow+cuda" + v, when="+cuda" + v)
depends_on("arrow+orc" + v, when="+orc" + v)
# Historical dependencies
# In newer pip versions --install-option does not exist
depends_on("py-pip@:23.0", when="@:16", type="build")
with default_args(type=("build", "run")):
# pyproject.toml, setup.py
depends_on("py-numpy@1.16.6:", when="@3:17")
depends_on("py-numpy@1.14:", when="@0.11:")
depends_on("py-numpy@1.10:")
depends_on("py-numpy@:1", when="@:15")
patch("for_aarch64.patch", when="@0 target=aarch64:")
# Starting with pyarrow 17, backend support is built whenever arrow was built with it
@when("@:16")
def setup_build_environment(self, env):
env.set("PYARROW_WITH_PARQUET", self.spec.satisfies("+parquet"))
env.set("PYARROW_WITH_CUDA", self.spec.satisfies("+cuda"))
env.set("PYARROW_WITH_ORC", self.spec.satisfies("+orc"))
env.set("PYARROW_WITH_DATASET", self.spec.satisfies("+dataset"))
env.set("PYARROW_WITH_PARQUET", self.spec.satisfies("^arrow+parquet"))
env.set("PYARROW_WITH_CUDA", self.spec.satisfies("^arrow+cuda"))
env.set("PYARROW_WITH_ORC", self.spec.satisfies("^arrow+orc"))
env.set("PYARROW_WITH_DATASET", self.spec.satisfies("^arrow+dataset"))
@when("@:16")
def install_options(self, spec, prefix):
args = []
if spec.satisfies("+parquet"):
if spec.satisfies("^arrow+parquet"):
args.append("--with-parquet")
if spec.satisfies("+cuda"):
if spec.satisfies("^arrow+cuda"):
args.append("--with-cuda")
if spec.satisfies("+orc"):
if spec.satisfies("^arrow+orc"):
args.append("--with-orc")
if spec.satisfies("+dataset"):
if spec.satisfies("^arrow+dataset"):
args.append("--with-dataset")
return args
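With the variants removed, pre-17 builds derive their PYARROW_WITH_* switches from how arrow was concretized, so features are now requested on the dependency, e.g. spack install py-pyarrow@16.1.0 ^arrow+parquet+dataset; from 17 on the bindings pick up arrow's features automatically, hence the @when("@:16") guards. A one-line gloss (sketch):

self.spec.satisfies("^arrow+parquet")  # True when arrow provides parquet, enabling --with-parquet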

View File

@@ -22,7 +22,9 @@ class PyPymc3(PythonPackage):
depends_on("py-setuptools", type="build")
depends_on("py-arviz@0.4.1:", type=("build", "run"))
depends_on("py-theano@1.0.4:", type=("build", "run"))
depends_on("py-numpy@1.13.0:", type=("build", "run"))
# numpy 2 support added in pymc 5.21, pymc3 is the legacy package
# https://github.com/pymc-devs/pymc/pull/7688
depends_on("py-numpy@1.13.0:1", type=("build", "run"))
depends_on("py-scipy@0.18.1:", type=("build", "run"))
depends_on("py-pandas@0.18.0:", type=("build", "run"))
depends_on("py-patsy@0.4.0:", type=("build", "run"))

View File

@@ -2,6 +2,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.url
from spack.package import *
@@ -11,89 +12,42 @@ class PyRadicalEntk(PythonPackage):
homepage = "https://radical-cybertools.github.io"
git = "https://github.com/radical-cybertools/radical.entk.git"
pypi = "radical.entk/radical.entk-1.47.0.tar.gz"
pypi = "radical_entk/radical_entk-1.92.0.tar.gz"
maintainers("andre-merzky")
license("MIT")
version("develop", branch="devel")
version("1.47.0", sha256="a4338e3a87147c032fb3a16a03990155742cc64c6625cfb4e1588ae0e51aafda")
version("1.39.0", sha256="72d64b25df9f3cb1dcbc32323a669d86d947cf07d15bed91cfedca2a99fb3ef1")
version("1.92.0", sha256="908a5d35cbc801c8b064837a21cbf5ad1a9b4aed0db48f2db84ef85d4e529cef")
version(
"1.20.0",
sha256="1b9fc470b926a93528fd2a898636bdcd1c565bd58ba47608f9bead811d8a46d7",
"1.47.0",
sha256="a4338e3a87147c032fb3a16a03990155742cc64c6625cfb4e1588ae0e51aafda",
deprecated=True,
)
version(
"1.18.0",
sha256="049f70ec7e95819ec0ea706ee6275db04799ceff119dd7b675ef0d36d814de6f",
deprecated=True,
)
version(
"1.17.0",
sha256="695e162b8b6209384660400920f4a2e613d01f0b904e44cfe5b5d012dcc35af9",
deprecated=True,
)
version(
"1.16.0",
sha256="6611b4634ad554651601d9aed3a6d8b8273073da6218112bb472ce51f771ac8e",
deprecated=True,
)
version(
"1.14.0",
sha256="beb6de5625b52b3aeeace52f7b4ac608e9f1bb761d8e9cdfe85d3e36931ce9f3",
deprecated=True,
)
version(
"1.13.0",
sha256="5489338173409777d69885fd5fdb296552937d5a539a8182321bebe273647e1c",
deprecated=True,
)
version(
"1.12.0",
sha256="1ea4814c8324e28cc2b86e6f44d26aaa09c8257ed58f50d1d2eada99adaa17da",
deprecated=True,
)
version(
"1.11.0",
sha256="a912ae3aee4c1a323910dbbb33c87a65f02bb30da94e64d81bb3203c2109fb83",
deprecated=True,
)
version(
"1.9.0",
sha256="918c716ac5eecb012a57452f45f5a064af7ea72f70765c7b0c60be4322b23557",
deprecated=True,
)
version(
"1.8.0",
sha256="47a3f7f1409612d015a3e6633853d31ec4e4b0681aecb7554be16ebf39c7f756",
deprecated=True,
)
version(
"1.6.7",
sha256="9384568279d29b9619a565c075f287a08bca8365e2af55e520af0c2f3595f8a2",
"1.39.0",
sha256="72d64b25df9f3cb1dcbc32323a669d86d947cf07d15bed91cfedca2a99fb3ef1",
deprecated=True,
)
depends_on("py-radical-utils@1.40:", type=("build", "run"), when="@1.40:")
depends_on("py-radical-pilot@1.40:", type=("build", "run"), when="@1.40:")
depends_on("py-radical-utils@1.90:1.99", type=("build", "run"), when="@1.90:")
depends_on("py-radical-pilot@1.90:1.99", type=("build", "run"), when="@1.90:")
depends_on("py-radical-utils@1.40:1.52", type=("build", "run"), when="@1.40:1.52")
depends_on("py-radical-pilot@1.40:1.52.1", type=("build", "run"), when="@1.40:1.52")
depends_on("py-radical-utils@1.39", type=("build", "run"), when="@1.39")
depends_on("py-radical-pilot@1.39", type=("build", "run"), when="@1.39")
depends_on("py-radical-pilot@1.18:1.20", type=("build", "run"), when="@1.20")
depends_on("python@3.7:", type=("build", "run"), when="@1.53:")
depends_on("python@3.6:", type=("build", "run"), when="@:1.52")
depends_on("py-radical-utils@1.12:1.20", type=("build", "run"), when="@1.12:1.20")
depends_on("py-radical-pilot@1.12:1.17", type=("build", "run"), when="@1.12:1.19")
depends_on("py-radical-utils@:1.11", type=("build", "run"), when="@:1.11")
depends_on("py-radical-pilot@:1.11", type=("build", "run"), when="@:1.11")
depends_on("py-packaging", type=("build", "run"), when="@:1.20")
depends_on("py-pika@0.13.0", type=("build", "run"), when="@:1.20")
depends_on("py-requests", type=("build", "run"), when="@:1.20")
depends_on("python@3.6:", type=("build", "run"))
depends_on("py-setuptools", type="build")
def url_for_version(self, version):
    if version >= Version("1.48.1"):
        return super().url_for_version(version)
    url = self.url.replace("_", ".")
    return spack.url.substitute_version(url, self.url_version(version))
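The override accommodates PyPI's sdist filename normalization (underscores instead of dots in newer uploads): releases from 1.48.1 on use the stock radical_entk URL, while older tarballs keep the dotted name. A hedged illustration with an abbreviated URL:

url = ".../r/radical_entk/radical_entk-1.92.0.tar.gz"  # shape of the new-style URL
url.replace("_", ".")  # -> ".../r/radical.entk/radical.entk-1.92.0.tar.gz"
# spack.url.substitute_version() then swaps in the requested legacy version.

The same pattern recurs in the radical.gtod, radical.pilot, radical.saga, and radical.utils files below, each with its own cutoff version.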

View File

@@ -2,6 +2,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.url
from spack.package import *
@@ -13,40 +14,38 @@ class PyRadicalGtod(PythonPackage):
homepage = "https://radical-cybertools.github.io"
git = "https://github.com/radical-cybertools/radical.gtod.git"
pypi = "radical.gtod/radical.gtod-1.47.0.tar.gz"
pypi = "radical_gtod/radical_gtod-1.90.0.tar.gz"
maintainers("andre-merzky")
license("LGPL-3.0-or-later")
version("develop", branch="devel")
version("1.47.0", sha256="52e75bf14faf352165ffa0d9e32ca472bd63f479020cd78f832baa34f8acfe6d")
version("1.39.0", sha256="254f1e805b58a33b93c6180f018904db25538710ec9e75b3a3a9969d7206ecf6")
version("1.90.0", sha256="70889239d3a60f8f323f62b942939665464fa368c4a00d0fbc49c878658f57b2")
version(
"1.20.0",
sha256="8d0846de7a5d094146c01fbb7c137f343e4da06af51efafeba79dd3fdfe421dc",
"1.47.0",
sha256="52e75bf14faf352165ffa0d9e32ca472bd63f479020cd78f832baa34f8acfe6d",
deprecated=True,
)
version(
"1.16.0",
sha256="1fe9da598a965c7194ed9c7df49d5b30632a11a7f9ece12152bea9aaa91bd4b8",
deprecated=True,
)
version(
"1.13.0",
sha256="15df4ae728a8878b111cfdedffb9457aecc8003c2cfbdf2c918dfcb6b836cc93",
deprecated=True,
)
version(
"1.6.7",
sha256="8d7d32e3d0bcf6d7cf176454a9892a46919b03e1ed96bee389380e6d75d6eff8",
"1.39.0",
sha256="254f1e805b58a33b93c6180f018904db25538710ec9e75b3a3a9969d7206ecf6",
deprecated=True,
)
depends_on("c", type="build") # generated
depends_on("py-radical-utils", type=("build", "run"), when="@1.13:")
depends_on("py-radical-utils@1.90:1.99", type=("build", "run"), when="@1.90:")
depends_on("py-radical-utils@:1.52", type=("build", "run"), when="@1.13:1.52")
depends_on("python@3.7:", type=("build", "run"), when="@1.53:")
depends_on("python@3.6:", type=("build", "run"), when="@:1.52")
depends_on("python@3.6:", type=("build", "run"))
depends_on("py-setuptools", type="build")
def url_for_version(self, version):
    if version >= Version("1.47.1"):
        return super().url_for_version(version)
    url = self.url.replace("_", ".")
    return spack.url.substitute_version(url, self.url_version(version))

View File

@@ -2,6 +2,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.url
from spack.package import *
@@ -12,112 +13,50 @@ class PyRadicalPilot(PythonPackage):
homepage = "https://radical-cybertools.github.io"
git = "https://github.com/radical-cybertools/radical.pilot.git"
pypi = "radical.pilot/radical.pilot-1.47.0.tar.gz"
pypi = "radical_pilot/radical_pilot-1.92.0.tar.gz"
maintainers("andre-merzky")
license("MIT")
version("develop", branch="devel")
version("1.47.0", sha256="58f41a0c42fe61381f15263a63424294732606ab7cee717540c0b730308f7908")
version("1.39.0", sha256="7ba0bfa3258b861db71e73d52f0915bfb8b3ac1099badacf69628307cab3b913")
version("1.92.0", sha256="5c65df02ec097f71648259db8ed8638580ea8e4c1c7f360879afff7f99e56134")
version(
"1.20.0",
sha256="a0747e573a01a856dc330797dbee158f7e1cf8652001dc26f06a1d6c5e553bc6",
"1.47.0",
sha256="58f41a0c42fe61381f15263a63424294732606ab7cee717540c0b730308f7908",
deprecated=True,
)
version(
"1.18.1",
sha256="fd6a0ffaa727b6b9bab35d8f2dc300bf4d9c4ff3541136d83560aa7b853d6100",
deprecated=True,
)
version(
"1.17.0",
sha256="0bfbb321a623a684e6694241aa3b7804208846515d23afa3b930553274f4a69f",
deprecated=True,
)
version(
"1.16.0",
sha256="057941a206ee96b62b97a63a507c1136b7fe821ae9f9e5eebe7949a3f53941f9",
deprecated=True,
)
version(
"1.15.1",
sha256="35c3b179a0bc85f52d2165e98e19acf2bf79037dd14f4d9ff3fc55ae0122d17e",
deprecated=True,
)
version(
"1.14.0",
sha256="462471065de25f6d6e8baee705790828444c2eebb2073f5faf67a8da800d15a9",
deprecated=True,
)
version(
"1.13.0",
sha256="5bd9eef1884ccca09c242ab6d1361588a442d9cd980613c66604ba140786bde5",
deprecated=True,
)
version(
"1.12.0",
sha256="a266355d30d838f20b6cac190ce589ca919acd41883ad06aec62386239475133",
deprecated=True,
)
version(
"1.11.2",
sha256="9d239f747589b8ae5d6faaea90ea5304b6f230a1edfd8d4efb440bc3799c8a9d",
deprecated=True,
)
version(
"1.10.2",
sha256="56e9d8b1ce7ed05eff471d7df660e4940f485027e5f353aa36fd17425846a499",
deprecated=True,
)
version(
"1.10.1",
sha256="003f4c519b991bded31693026b69dd51547a5a69a5f94355dc8beff766524b3c",
deprecated=True,
)
version(
"1.9.2",
sha256="7c872ac9103a2aed0c5cd46057048a182f672191e194e0fd42794b0012e6e947",
deprecated=True,
)
version(
"1.8.0",
sha256="a4c3bca163db61206e15a2d820d9a64e888da5c72672448ae975c26768130b9d",
deprecated=True,
)
version(
"1.6.8",
sha256="fa8fd3f348a68b54ee8338d5c5cf1a3d99c10c0b6da804424a839239ee0d313d",
deprecated=True,
)
version(
"1.6.7",
sha256="6ca0a3bd3cda65034fa756f37fa05681d5a43441c1605408a58364f89c627970",
"1.39.0",
sha256="7ba0bfa3258b861db71e73d52f0915bfb8b3ac1099badacf69628307cab3b913",
deprecated=True,
)
depends_on("py-radical-utils@1.44:", type=("build", "run"), when="@1.47:")
depends_on("py-radical-saga@1.40:", type=("build", "run"), when="@1.47:")
depends_on("py-radical-gtod", type=("build", "run"), when="@1.14:")
depends_on("py-radical-utils@1.90:1.99", type=("build", "run"), when="@1.90:")
depends_on("py-radical-gtod@1.90:1.99", type=("build", "run"), when="@1.90:")
depends_on("py-radical-utils@1.44:1.52", type=("build", "run"), when="@1.47:1.52.1")
depends_on("py-radical-saga@1.40:", type=("build", "run"), when="@1.47")
depends_on("py-radical-gtod@:1.52", type=("build", "run"), when="@1.14:1.52.1")
depends_on("py-radical-utils@1.39", type=("build", "run"), when="@1.39")
depends_on("py-radical-saga@1.39", type=("build", "run"), when="@1.39")
depends_on("py-radical-gtod@1.39", type=("build", "run"), when="@1.39")
depends_on("py-radical-utils@1.12:1.20", type=("build", "run"), when="@1.12:1.20")
depends_on("py-radical-saga@1.12:1.20", type=("build", "run"), when="@1.12:1.20")
depends_on("py-radical-utils@1.8.4:1.11", type=("build", "run"), when="@1.11")
depends_on("py-radical-saga@1.8:1.11", type=("build", "run"), when="@1.11")
depends_on("py-radical-utils@:1.8.3", type=("build", "run"), when="@:1.10")
depends_on("py-radical-saga@:1.7", type=("build", "run"), when="@:1.10")
depends_on("py-pymongo@:3", type=("build", "run"), when="@:1.39")
depends_on("python@3.6:", type=("build", "run"))
depends_on("python@3.7:", type=("build", "run"), when="@1.48:")
depends_on("python@3.6:", type=("build", "run"), when="@:1.47")
depends_on("py-requests", type=("build", "run"), when="@1.90:")
depends_on("py-psij-python", type=("build", "run"), when="@1.48:")
depends_on("py-dill", type=("build", "run"), when="@1.14:")
depends_on("py-setproctitle", type=("build", "run"))
depends_on("py-setuptools", type="build")
def url_for_version(self, version):
    if version >= Version("1.49.3"):
        return super().url_for_version(version)
    url = self.url.replace("_", ".")
    return spack.url.substitute_version(url, self.url_version(version))

View File

@@ -2,6 +2,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.url
from spack.package import *
@@ -13,81 +14,39 @@ class PyRadicalSaga(PythonPackage):
homepage = "https://radical-cybertools.github.io"
git = "https://github.com/radical-cybertools/radical.saga.git"
pypi = "radical.saga/radical.saga-1.47.0.tar.gz"
pypi = "radical_saga/radical_saga-1.90.0.tar.gz"
maintainers("andre-merzky")
license("MIT")
version("develop", branch="devel")
version("1.47.0", sha256="fc9a8fc060e708852ce6c40b08a65111f8d72b9ad5f8afef9ceaa866c1351233")
version("1.39.0", sha256="0fea8103d3f96c821c977bcb55ff1c6a9844de727539b182dda4cbc2570df791")
version("1.90.0", sha256="55758339f58087477574ed598e5a34cb99d045a540a74ba9e11b34eead4af78d")
version(
"1.20.0",
sha256="d85f3ed564d9eaf3ead2aa349c854e944ca459492ebf88542404106fce4204ab",
"1.47.0",
sha256="fc9a8fc060e708852ce6c40b08a65111f8d72b9ad5f8afef9ceaa866c1351233",
deprecated=True,
)
version(
"1.18.0",
sha256="544d4ffafc0b311151724db371ee11e27744103068748962866351ce31ccb810",
deprecated=True,
)
version(
"1.17.0",
sha256="e48b42c232ac0ad53a410c1317746a5f15214fd3108fad773d098714fb4c40a0",
deprecated=True,
)
version(
"1.16.0",
sha256="d269e2e7043f05e8f1d45ca3d50be973857150d7928d53bedd6844f39b224786",
deprecated=True,
)
version(
"1.14.0",
sha256="337d8778bf392fd54845b1876de903c4c12f6fa938ef16220e1847561b66731a",
deprecated=True,
)
version(
"1.13.0",
sha256="90d8e875f48402deab87314ea5c08d591264fb576c461bd9663ac611fc2e547e",
deprecated=True,
)
version(
"1.12.0",
sha256="769c83bab95c0e3ef970da0fa6cb30878d7a31216ff8b542e894686357f7cb5b",
deprecated=True,
)
version(
"1.11.1",
sha256="edb1def63fadd192a4be4f508e9e65669745843e158ce27a965bf2f43d18b84d",
deprecated=True,
)
version(
"1.8.0",
sha256="6edf94897102a08dcb994f7f107a0e25e7f546a0a9488af3f8b92ceeeaaf58a6",
deprecated=True,
)
version(
"1.6.10",
sha256="8fe7e281e9f81234f34f5c7c7986871761e9e37230d2a874c65d18daeccd976a",
deprecated=True,
)
version(
"1.6.8",
sha256="d5e9f95a027087fb637cef065ff3af848e5902e403360189e36c9aa7c3f6f29b",
"1.39.0",
sha256="0fea8103d3f96c821c977bcb55ff1c6a9844de727539b182dda4cbc2570df791",
deprecated=True,
)
depends_on("py-radical-utils@1.40:", type=("build", "run"), when="@1.40:")
depends_on("py-radical-utils@1.90:1.99", type=("build", "run"), when="@1.90:")
depends_on("py-radical-utils@1.40:1.52", type=("build", "run"), when="@1.40:1.52")
depends_on("py-radical-utils@1.39", type=("build", "run"), when="@1.39")
depends_on("py-radical-utils@1.12:1.20", type=("build", "run"), when="@1.12:1.20")
depends_on("python@3.7:", type=("build", "run"), when="@1.53:")
depends_on("python@3.6:", type=("build", "run"), when="@:1.52")
depends_on("py-radical-utils@:1.11", type=("build", "run"), when="@:1.11")
depends_on("python@3.6:", type=("build", "run"))
depends_on("py-apache-libcloud", type=("build", "run"))
depends_on("py-apache-libcloud", type=("build", "run"), when="@:1.60")
depends_on("py-parse", type=("build", "run"))
depends_on("py-setuptools", type="build")
def url_for_version(self, version):
    if version >= Version("1.47.1"):
        return super().url_for_version(version)
    url = self.url.replace("_", ".")
    return spack.url.substitute_version(url, self.url_version(version))

View File

@@ -2,6 +2,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.url
from spack.package import *
@@ -11,92 +12,29 @@ class PyRadicalUtils(PythonPackage):
homepage = "https://radical-cybertools.github.io"
git = "https://github.com/radical-cybertools/radical.utils.git"
pypi = "radical.utils/radical.utils-1.47.0.tar.gz"
pypi = "radical_utils/radical_utils-1.91.1.tar.gz"
maintainers("andre-merzky")
license("MIT")
version("develop", branch="devel")
version("1.47.0", sha256="f85a4a452561dd018217f1ed38d97c9be96fa448437cfeb1b879121174fd5311")
version("1.39.0", sha256="fade87ee4c6ccf335d5e26d5158ce22ee891e4d4c576464274999ddf36dc4977")
version("1.91.1", sha256="5293f375f699161e451982b2e7668613c24e2562252f65e765ebbc83d8ae0118")
version(
"1.20.0",
sha256="9b39dd616d70c387fb3f97d3510a506bac92c159b6482c3aebd3d11eeaeebcc9",
"1.47.0",
sha256="f85a4a452561dd018217f1ed38d97c9be96fa448437cfeb1b879121174fd5311",
deprecated=True,
)
version(
"1.18.1",
sha256="5b3ab15417a1ef82f63f8a77763a177d6bc59b61a80823be0df8c0f7502d9b3e",
deprecated=True,
)
version(
"1.17.0",
sha256="ee3fec190e89522f648e191d2e380689842746f1eacda27772a9471215908cfe",
deprecated=True,
)
version(
"1.16.0",
sha256="6eddfba5c73e71c7c5ddeba6c8ebe5260616d66b26d1f7123613c3cd543d61e9",
deprecated=True,
)
version(
"1.15.0",
sha256="22e5028de75c0a471bfed587d437dded214625b150deaca0289474a3619d395b",
deprecated=True,
)
version(
"1.14.0",
sha256="f61f0e335bbdc51e4023458e7e6959551686ebf170adc5353220dcc83fd677c9",
deprecated=True,
)
version(
"1.13.0",
sha256="84c1cad8be988dad7fb2b8455d19a4fb0c979fab02c5b7a7b531a4ae8fe52580",
deprecated=True,
)
version(
"1.12.0",
sha256="1474dbe4d94cdf3e992e1711e10d73dffa352c1c29ff51d81c1686e5081e9398",
deprecated=True,
)
version(
"1.11.1",
sha256="4fec3f6d45d7309c891ab4f8aeda0257f06f9a8404ca87c7eb643cd8d7415804",
deprecated=True,
)
version(
"1.11.0",
sha256="81537c2a2f8a1a409b4a1aac67323c6b49cc994e2b70052425e2bc8d4622e2de",
deprecated=True,
)
version(
"1.9.1",
sha256="0837d75e7f9dcce5ba5ac63151ab1683d6ba9ab3954b076d1f170cc4a3cdb1b4",
deprecated=True,
)
version(
"1.8.4",
sha256="4777ba20e9f881bf3e73ad917638fdeca5a4b253d57ed7b321a07f670e3f737b",
deprecated=True,
)
version(
"1.8.0",
sha256="8582c65593f51d394fc263c6354ec5ad9cc7173369dcedfb2eef4f5e8146cf03",
deprecated=True,
)
version(
"1.6.7",
sha256="552f6c282f960ccd9d2401d686b0b3bfab35dfa94a26baeb2d3b4e45211f05a9",
"1.39.0",
sha256="fade87ee4c6ccf335d5e26d5158ce22ee891e4d4c576464274999ddf36dc4977",
deprecated=True,
)
depends_on("py-radical-gtod", type=("build", "run"), when="@:1.13")
depends_on("python@3.7:", type=("build", "run"), when="@1.53:")
depends_on("python@3.6:", type=("build", "run"), when="@:1.52")
depends_on("py-pymongo@:3", type=("build", "run"), when="@:1.39")
depends_on("python@3.6:", type=("build", "run"))
depends_on("py-colorama", type=("build", "run"))
depends_on("py-msgpack", type=("build", "run"))
depends_on("py-netifaces", type=("build", "run"))
@@ -108,3 +46,9 @@ class PyRadicalUtils(PythonPackage):
depends_on("py-setuptools")
# https://github.com/radical-cybertools/radical.utils/issues/403
depends_on("py-setuptools@:69.2", when="@:1.51")
def url_for_version(self, version):
    if version >= Version("1.48.1"):
        return super().url_for_version(version)
    url = self.url.replace("_", ".")
    return spack.url.substitute_version(url, self.url_version(version))

View File

@@ -10,39 +10,66 @@ class PyRpy2(PythonPackage):
interface to R from Python, a proposed high-level interface,
including wrappers to graphical libraries, as well as R-like
structures and functions.
"""
homepage = "https://rpy2.github.io"
pypi = "rpy2/rpy2-2.5.4.tar.gz"
license("GPL-2.0-or-later")
version("3.0.4", sha256="2af5158a5d56af7f7bf5e54d8d7e87b6f115ff40f056d82f93cad0cbf6acc0cb")
version("3.0.0", sha256="34efc2935d9015527837d6b1de29641863d184b19d39ad415d5384be8a015bce")
version("2.9.4", sha256="be57f741d0c284b5d8785ab03dff0e829303e5ac30e548d5ceb46e05b168812e")
version("2.8.6", sha256="004d13734a7b9a85cbc1e7a93ec87df741e28db1273ab5b0d9efaac04a9c5f98")
version("2.5.6", sha256="d0d584c435b5ed376925a95a4525dbe87de7fa9260117e9f208029e0c919ad06")
version("2.5.4", sha256="d521ecdd05cd0c31ab017cb63e9f63c29b524e46ec9063a920f640b5875f8a90")
maintainers("Chrismarsh")
# FIXME: Missing dependencies:
# ld: cannot find -licuuc
# ld: cannot find -licui18
version("3.5.17", sha256="dbff08c30f3d79161922623858a5b3b68a3fba8ee1747d6af41bc4ba68f3d582")
# All versions
depends_on("py-setuptools", type="build")
depends_on("r", type=("build", "run"))
# These are from 2019 and don't work cleanly with newer R (4+) and pandas versions,
# but the exact version incompatibility range is not clear without substantial testing
with default_args(deprecated=True):
    version("3.0.4", sha256="2af5158a5d56af7f7bf5e54d8d7e87b6f115ff40f056d82f93cad0cbf6acc0cb")
    version("3.0.0", sha256="34efc2935d9015527837d6b1de29641863d184b19d39ad415d5384be8a015bce")
# @3.0.0:
depends_on("py-cffi@1.0.0:", when="@3.0.0:", type=("build", "run"))
depends_on("py-simplegeneric", when="@3.0.0:", type=("build", "run"))
depends_on("py-pytest", when="@3:", type=("build", "run"))
variant("numpy", default=True, description="Numpy", when="@3.5.17:")
variant("pandas", default=True, description="Pandas", when="@3.5.17:")
variant("ipython", default=True, description="iPython", when="@3.5.17:")
# @2.9.0:
depends_on("r@3.3:", when="@2.9.0:", type=("build", "run"))
depends_on("python@3.5:", when="@2.9.0:", type=("build", "run"))
depends_on("py-jinja2", when="@2.9.0:", type=("build", "run"))
depends_on("py-six", when="@2.9.0:2.9", type=("build", "run"))
# many of the previous minor and patch versions change dependency versions so future updates
# should be careful of that
depends_on("python@3.8:", type=("build", "run"), when="@3.5.17:")
# @:2.8.6
depends_on("r@2.8:", when="@:2.8.6", type=("build", "run"))
depends_on("python@2.7:2.8,3.5:", type=("build", "run"))
# https://github.com/rpy2/rpy2/blob/RELEASE_3_5_17/setup.py#L42C1-L42C14
depends_on("r@3.5:", type=("build", "run"), when="@3.5.17:")
depends_on("py-setuptools@61:", type="build", when="@3.5.17:")
# Not explicitly stated as required, but needed based on testing
depends_on("readline", type=("build", "run"), when="@3.5.17:")
# @1.15.1: is needed at run time and @1.15.0: at build time; one version is used for simplicity
depends_on("py-cffi@1.15.1:", type=("build", "run"), when="@3.5.17:")
depends_on("py-jinja2", type=("build", "run"), when="@3.5.17:")
depends_on("py-tzlocal", type=("build", "run"), when="@3.5.17:")
# optional variant
depends_on("py-ipython", type=("build", "run"), when="+ipython")
# optional variant
depends_on("py-numpy@1.26:", type=("build", "run"), when="+numpy ^python@3.9:")
depends_on("py-numpy@:1.25", type=("build", "run"), when="+numpy ^python@:3.8")
# optional variant
depends_on("py-pandas", type=("build", "run"), when="+pandas ^python@:3.9")
depends_on("py-pandas@1.3.5:", type=("build", "run"), when="+pandas ^python@3.10:")
depends_on("py-backports-zoneinfo", type=("build", "run"), when="@3.5.17: ^python@:3.8")
# These are from 2019 and predate the pyproject.toml config that currently exists
with when("@3.0.0:3.0.4"):
# Doesn't support post-distutil removal until 3.5.13
# https://github.com/rpy2/rpy2/releases/tag/RELEASE_3_5_13
depends_on("python@3.5:3.11", type=("build", "run"))
depends_on("py-setuptools", type="build")
depends_on("py-cffi@1.0.0:", type=("build", "run"))
depends_on("py-simplegeneric", type=("build", "run"))
depends_on("py-pytest", type=("build", "run"))
depends_on("r@3.3:", type=("build", "run"))

View File

@@ -0,0 +1,23 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PySphinxcontribPlantuml(PythonPackage):
    """PlantUML for Sphinx."""

    homepage = "https://github.com/sphinx-contrib/plantuml/"
    pypi = "sphinxcontrib-plantuml/sphinxcontrib-plantuml-0.30.tar.gz"

    maintainers("greenc-FNAL", "knoepfel", "marcpaterno")

    license("BSD-2-Clause", checked_by="greenc-FNAL")

    version("0.30", sha256="2a1266ca43bddf44640ae44107003df4490de2b3c3154a0d627cfb63e9a169bf")

    depends_on("py-setuptools", type="build")
    depends_on("plantuml", type=("build", "run"))
    depends_on("py-sphinx@1.6:", type=("build", "run"))

View File

@@ -14,6 +14,7 @@ class PyTorchmetrics(PythonPackage):
license("Apache-2.0")
maintainers("adamjstewart")
version("1.6.2", sha256="a3fa6372dbf01183d0f6fda2159e9526fb62818aa3630660909c290425f67df6")
version("1.6.1", sha256="a5dc236694b392180949fdd0a0fcf2b57135c8b600e557c725e077eb41e53e64")
version("1.6.0", sha256="aebba248708fb90def20cccba6f55bddd134a58de43fb22b0c5ca0f3a89fa984")
version("1.5.2", sha256="2d0e4957af0ea76438d2779fe1a626d8cba6cda8607eadb54267598153e7ea63")

View File

@@ -10,7 +10,8 @@
class Qrupdate(MakefilePackage, SourceforgePackage):
"""qrupdate is a Fortran library for fast updates of QR and
Cholesky decompositions."""
Cholesky decompositions.
"""
homepage = "https://sourceforge.net/projects/qrupdate/"
sourceforge_mirror_path = "qrupdate/qrupdate-1.1.2.tar.gz"
@@ -35,7 +36,7 @@ def edit(self, spec, prefix):
makefile = FileFilter("Makefile")
makefile.filter("make", "$(MAKE)")
# We may like to compile with any Forran compiler, not always gfortran
# We may like to compile with any Fortran compiler, not always gfortran
makefile = FileFilter("Makeconf")
makefile.filter("FC=gfortran", "FC ?= gfortran")
@@ -45,16 +46,11 @@ def edit(self, spec, prefix):
def build(self, spec, prefix):
    lapack_blas = spec["lapack"].libs + spec["blas"].libs
    make_args = [
        "BLAS={0}".format(lapack_blas.ld_flags),
        "LAPACK={0}".format(lapack_blas.ld_flags),
    ]
    make_args = [f"BLAS={lapack_blas.ld_flags}", f"LAPACK={lapack_blas.ld_flags}"]
    # If 64-bit BLAS is used:
    if (
        spec.satisfies("^openblas+ilp64")
        or spec.satisfies("^intel-mkl+ilp64")
        or spec.satisfies("^intel-parallel-studio+mkl+ilp64")
    if spec.satisfies("^[virtuals=lapack] openblas+ilp64") or spec.satisfies(
        "^[virtuals=lapack] intel-oneapi-mkl+ilp64"
    ):
        if spec.satisfies("%intel") or spec.satisfies("%oneapi") or spec.satisfies("%nvhpc"):
            # 64-bit integers for ifort and nvfortran are promoted by:
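The ^[virtuals=lapack] qualifier introduced above is stricter than a bare ^openblas: it matches only when that node actually provides the lapack virtual in this spec. A minimal gloss:

spec.satisfies("^openblas+ilp64")                    # openblas anywhere in the DAG
spec.satisfies("^[virtuals=lapack] openblas+ilp64")  # only if openblas is the lapack provider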

View File

@@ -17,6 +17,7 @@ class Qt5compat(QtPackage):
license("LicenseRef-Qt-Commercial OR LGPL-3.0-only OR GPL-2.0-only OR GPL-3.0-only")
version("6.8.2", sha256="9b78a025f17d65eb826ee153f167546e6c12790235d75b7f4fcd03c166d9c689")
version("6.8.1", sha256="5e51feb8d9362d860017ae72f63daa5caeddf3ec3396e73a4b27c672536fd774")
version("6.8.0", sha256="0ea312a2d7e7033857712273e5ea42e61d1f485d23420307f7bbf0b8ca701453")
version("6.7.3", sha256="959634d1a6a53f9a483882e81da87ec182ff44d7747a0cc771c786b0f2cf52e0")

Some files were not shown because too many files have changed in this diff.