Compare commits


186 Commits

Author SHA1 Message Date
Tamara Dahlgren
f04f7bd6af repo: don't return builtin path plus docstring tweaks and TODOs 2024-12-19 18:49:23 -08:00
Tamara Dahlgren
062d3643b2 test/audit.py: Get tests to pass using mock packages 2024-12-19 18:31:24 -08:00
jnhealy2
6de1ebd71a Fix silent error when reporting builds to CDash (#47939)
* Fix silent error when reporting builds to CDash
   CDash has a 191 char maximum for build names.  When this
   is exceeded, CDash silently fails to correctly process the
   reported XML. This truncates CDash build names to 190 chars
   and emits a warning indicating it is doing so to prevent
   such errors from occurring.
* test/reporters.py: add unittest for buildname len issue
* test/reporters.py: rename cdash buildname test
* ci/common.py: fix syntax causing breaking test
   It appears that the CDash reporter is expecting a string
   as the buildname.
* Update lib/spack/spack/reporters/cdash.py
   Fix warning message to reflect actual issue.
   Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* ci/common.py: fix function call to actually call function

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: psakievich <psakiev@sandia.gov>
2024-12-19 14:19:44 -07:00
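
A minimal sketch of the truncation logic this commit describes (the helper
name is hypothetical; the real code lives in lib/spack/spack/reporters/cdash.py):

    import warnings

    CDASH_MAX_LEN = 191  # CDash silently mishandles names longer than this

    def truncate_build_name(name: str) -> str:
        """Cap a CDash build name at 190 characters, warning when we do."""
        if len(name) < CDASH_MAX_LEN:
            return name
        warnings.warn(
            "CDash build name exceeds the 191-character maximum; "
            "truncating to 190 characters to avoid silent XML errors"
        )
        return name[:190]
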
afzpatel
fd865efe87 Bump up the version for ROCm-6.3.0 (#47966)
* Bump up the version for ROCm-6.3.0
* changes for aqlprofile, rocprofiler-dev and omnitrace
* add rocfft patch, correct Clang_DIR and add aqlprofile yum package
* add rpp and rocm-openmp-extras changes
* hipblaslt changes
* add rvs rocm 6.3
* bump rocdecode and rocpydecode
* add rocdecode libva arg
* add llvm-amdgpu dependency for hipblaslt
* restrict half in miopen-hip
* fix for rocblas and hipblaslt
* fix hipblas-common target_include
* fix sha256 for rocm-tensile
   Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2024-12-19 14:09:27 -07:00
kwryankrattiger
93c09ed3b4 Move allocation override to the generate job (#48199) 2024-12-19 21:59:15 +01:00
Rocco Meli
9db8f8ea88 sirius: add libxc constraint (#48184)
* sirius: add libxc constraint

* add myself as maintainer
2024-12-19 10:36:04 -08:00
Lehman Garrison
eb178e6840 py-typer: add version 0.15.1 and "standard" optional dependencies (#48010)
* py-typer: add version 0.15.1 and "standard" optional dependencies
* py-typer: remove variant that only exists in source, not sdist. Remove trailing .0 from versions.
2024-12-19 10:35:05 -08:00
Wouter Deconinck
8487842e11 gaudi: add v39.1; patch for failing test; properly support +examples (#48130)
* gaudi: add v39.1; patch for failing test; properly support +examples

* gaudi: filter_file OPTIONS
* gaudi: rm patch to disable pytest RandomNumber.py
2024-12-19 10:14:00 -08:00
Sebastian Keller
2286b2ad5a sphexa package (#48128)
* sphexa package

* remove older versions

* avoid setting args twice

Co-authored-by: Rocco Meli <r.meli@bluemail.ch>

* rocprim should be hipcub

* address review comments

---------

Co-authored-by: Rocco Meli <r.meli@bluemail.ch>
2024-12-19 18:45:33 +01:00
Wouter Deconinck
ea0d99baf8 snakemake and py-snakemake-*: updates to latest versions (#47524)
* py-snakemake-interface-common: add v1.17.4

* py-snakemake-executor-plugin-azure-batch: add thru v0.3.0

* py-snakemake-executor-plugin-drmaa: add thru v0.1.5

* py-snakemake-executor-plugin-flux: add v0.1.1

* py-snakemake-executor-plugin-googlebatch: add thru v0.5.0

* py-snakemake-executor-plugin-kubernetes: add thru v0.2.2

* py-snakemake-executor-plugin-slurm: add thru v0.11.2

* py-snakemake-executor-plugin-tes: add v0.1.3

* py-snakemake-interface-executor-plugins: add thru 9.3.2

* py-snakemake-interface-report-plugins: add v1.1.0

* py-snakemake-storage-plugin-azure: add thru v0.4.2

* py-snakemake-storage-plugin-fs: add thru v1.0.6

* py-snakemake-storage-plugin-gcs: add thru v1.1.2

* py-snakemake-storage-plugin-s3: add thru v0.2.12

* py-snakemake-storage-plugin-zenodo: add thru v0.1.4

* snakemake: add v8.25.2

* [@spackbot] updating style on behalf of wdconinc

* snakemake, py-snakemake-*: apply suggestions from code review

* py-snakemake-executor-plugin-azure-batch: apply suggestions from code review

* snakemake, py-snakemake-*: fix style

* py-snakemake-executor-plugin-azure-batch: apply suggestion from code review

* py-snakemake-executor-plugin-drmaa: apply suggestion from code review

* py-snakemake-executor-plugin-drmaa: fix style

---------

Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
2024-12-19 07:16:32 -06:00
eugeneswalker
60be9ea068 ci: update darwin tags (#47993)
* ci: update darwin tags

* tag with apple-clang version

* move darwin aarch64 tagging into configs/darwin/aarch/ci.yaml
2024-12-19 04:55:02 -08:00
Seth R. Johnson
5640861aeb Improve package recipes for some HEP packages (#48185)
* Improve variant robustness for dd4hep and edm4hep

Now variants won't be "false" if there's a typo.

* Use libs instead of manual prefix paths

* Improve cmake for another HEP package

* Fix variant use and style

* Use directories for ODD
2024-12-19 07:29:08 -05:00
Mikael Simberg
d8fa6eb559 hpx-kokkos: Add 0.4.1 (#48207) 2024-12-19 04:17:39 -07:00
dependabot[bot]
ec7436be6b build(deps): bump sphinxcontrib-programoutput in /lib/spack/docs (#47992)
Bumps [sphinxcontrib-programoutput](https://github.com/NextThought/sphinxcontrib-programoutput) from 0.17 to 0.18.
- [Changelog](https://github.com/OpenNTI/sphinxcontrib-programoutput/blob/master/CHANGES.rst)
- [Commits](https://github.com/NextThought/sphinxcontrib-programoutput/compare/0.17...0.18)

---
updated-dependencies:
- dependency-name: sphinxcontrib-programoutput
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-19 12:08:20 +01:00
Thomas Gruber
48f6a4ffb4 likwid: add 5.4.1 and add patch for 5.4.0 (#48012) 2024-12-19 12:03:19 +01:00
Harmen Stoppels
96a0b0eb08 llnl.util.lang: remove testing literal backtrace output (#48209) 2024-12-19 11:55:41 +01:00
Wouter Deconinck
8d8e36d7e2 qt-*: add v6.8.0, v6.8.1 (#46947) 2024-12-19 11:52:30 +01:00
Wouter Deconinck
1c843b99ae Replace lzma with xz dependency (#39404) 2024-12-19 11:50:38 +01:00
MatthewLieber
93a0c0eafd osu-micro-benchmarks: fix AMDGPU_TARGET issue (#48171)
Co-authored-by: Matt Lieber <lieber.31@osu.edu>
2024-12-19 11:24:59 +01:00
Harmen Stoppels
0850e0bf08 docs: advertise --oci-username-variable and --oci-password-variable (#48189) 2024-12-19 10:15:01 +01:00
Alberto Invernizzi
6263f75303 ffmpeg: add v7.1 (#47783) 2024-12-19 09:52:38 +01:00
Adam J. Stewart
c184a68512 py-pydantic: add v1.10.19 (#48204) 2024-12-19 09:48:35 +01:00
Wouter Deconinck
69b17ea602 py-paramiko: add v3.3.2, v3.4.1, v3.5.0 (#48191)
* py-paramiko: add v3.3.2, v3.4.1, v3.5.0
* py-paramiko: deprecate v2.1.2 (CVE)
2024-12-19 00:15:44 -07:00
Howard Pritchard
5547b7b552 openmpi: add 5.0.6 (#48043)
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
2024-12-18 22:36:23 -07:00
Scott Wittenburg
ae6d1538d5 ci: Disable broken specs list (#48194) 2024-12-18 21:26:42 -07:00
Christoph Junghans
cdb0e80598 all-library: add v0.9.3 (#48193) 2024-12-18 21:16:59 -07:00
Dave Keeshan
233e57c4bc Install process has been changed and made simpler. Also added versions v0.0-3864 and v0.0-3876 (#48190) 2024-12-18 21:12:12 -07:00
Dom Heinzeller
918afd6385 spack external find grep on Linux AND macOS (#48134)
* Configure 'spack external find grep'
* Fix style for finding external grep
* Remove unused 're' Python module from grep
2024-12-18 21:02:20 -07:00
Christophe Prud'homme
83af81a14a mmg: add variant to install private headers for the parmmg package (#47386)
* update package: add variant to install private headers for parmmg package
* re-add maintainer
* renamed to +private_headers and only for 5.7:
   /cc @jcortial-safran
* fix style and code
* applied suggestions
  /cc @jcortial-safran @tldahlgren
* fix
2024-12-18 19:12:32 -08:00
John W. Parent
2b2538e82c Add new CMake versions (#47997) 2024-12-18 18:26:11 -08:00
Cody Balos
b6715bde32 sundials: add version 7.2.0 (#48202) 2024-12-18 18:24:20 -08:00
Wouter Deconinck
0db3b36874 sherpa: fix AutotoolsBuilder install signature (#48002) 2024-12-18 18:19:11 -08:00
Brian Vanderwende
0bc54a4640 New versions and fixed images resource (#48003) 2024-12-18 18:14:11 -08:00
Matt Thompson
7057ca3c0c mapl: add v2.51.1 (#48007) 2024-12-18 18:10:51 -08:00
Brian Vanderwende
40ac1613e3 Fix for modern GCC and for drifting download URL (#48015) 2024-12-18 17:51:07 -08:00
Brian Vanderwende
d3ab84e5d8 Add latest 2.x version (#48016) 2024-12-18 17:49:15 -08:00
Derek Ryan Strong
15197b1868 Add netlib-lapack v3.12.0 (#48029) 2024-12-18 17:39:22 -08:00
Brian Vanderwende
de45c90056 pnetcdf: New versions and examples option (#48018)
* New pnetcdf versions and examples option
* Refine spec for GCC workaround
* Refactor examples variant to conflict with older versions
Co-authored-by: Sergey Kosukhin <skosukhin@gmail.com>

---------

Co-authored-by: Sergey Kosukhin <skosukhin@gmail.com>
2024-12-18 17:36:58 -08:00
Dave Keeshan
82fc0c702d yosys: add v0.48 (#48036) 2024-12-18 17:34:14 -08:00
Timo Heister
51e889ea3f aspect: add v3.0.0 (#48040) 2024-12-18 17:32:21 -08:00
Paul Kuberry
ad8d1eddde xyce: update +pymi related dependencies (#48044) 2024-12-18 17:30:09 -08:00
Joseph Wang
ebb3736de7 nodejs: update to 22.11.0 (#48084) 2024-12-18 17:27:50 -08:00
sid
4d7a637788 r-lidr and dependency r-rlas (#48051)
* r-rlas is a dependency for r-lidr
* new package r-lidr w/ suggests to address masking issues
* fixed flake8 issues and added maintainers
* removed boost import statement for flake sake
2024-12-18 17:20:33 -08:00
Wouter Deconinck
8e163c3565 qmake: docs about virtual provider (#48055) 2024-12-18 17:16:30 -08:00
Bill Williams
f1fbf11b33 Score-P: mpi and shmem fixes (#48069)
* Score-P: Replace with-or-without, document options that are not currently explicitly mapped in package for mpi and shmem.
* trim long lines

---------

Co-authored-by: wrwilliams <wrwilliams@users.noreply.github.com>
2024-12-18 17:14:59 -08:00
Wouter Deconinck
be3a33ecf7 prmon: add v3.1.1, update py-matplotlib dependency (#48109)
* prmon: add v3.1.1, update py-matplotlib dependency
* prmon: depends_on py-matplotlib@:3.5
2024-12-18 17:12:55 -08:00
Adam J. Stewart
4be528448c py-scikit-image: add v0.25.0 (#48117)
* py-scikit-image: add v0.25.0
* Fix Python range
2024-12-18 17:10:20 -08:00
Wouter Deconinck
8b11918c1e dcap: add v2.47.13, v2.47.14, avoid bash for sh script (#48123)
* dcap: add v2.47.13, v2.47.14, avoid bash for sh script
* dcap: fix typo
2024-12-18 17:00:13 -08:00
Adam J. Stewart
5add010c71 py-torchdata: add v0.10.1 (#48118) 2024-12-18 16:58:23 -08:00
Wouter Deconinck
e77e1d6528 ghostscript: add v10.04.0 (fix CVEs) (#48126)
* ghostscript: add v10.04.0
2024-12-18 16:43:40 -08:00
Adam J. Stewart
6ede4e9f13 py-matplotlib: add v3.10.0 (#48127) 2024-12-18 16:26:43 -08:00
Wouter Deconinck
c50ac5ac25 py-gfal2-python: new package to fix gfal2-util (#48165)
* py-gfal2-python: add new package
* gfal2-util: depends_on py-gfal2-python
* py-gfal2-python: patch setup.py to find correct python
* py-gfal2-python: depends_on boost +python
2024-12-18 13:44:09 -08:00
Mikael Simberg
e7e5352e93 dla-future: Add 0.7.1 (#48188) 2024-12-18 13:18:36 -08:00
Mikael Simberg
36e74f360b pika: Add 0.31.0 (#48192) 2024-12-18 12:59:51 -08:00
dependabot[bot]
f362d45802 build(deps): bump actions/upload-artifact from 4.4.3 to 4.5.0 (#48180)
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.4.3 to 4.5.0.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](b4b15b8c7c...6f51ac03b9)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-18 13:01:30 -06:00
Zack Galbreath
9719220e8a Stop building for neoverse_n1 in our GitLab CI pipelines (#48186) 2024-12-18 17:12:05 +00:00
Todd Gamblin
30e2b15eea Use Literal now that we have typing_extensions in Spack. (#48172)
Improve our typing by updating some todo locations in the code to use
`Literal` instead of a simple `str`.

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2024-12-18 14:10:14 +01:00
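
For context, a small example of what `Literal` buys over a plain `str`
annotation (names are illustrative, not the actual Spack signatures):

    from typing_extensions import Literal

    # A plain `str` accepts anything; `Literal` lets mypy reject typos.
    def traverse(order: Literal["pre", "post"]) -> None:
        print(f"{order}-order traversal")

    traverse("pre")    # OK
    # traverse("mid")  # flagged by the type checker
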
Andrey Perestoronin
7ee1e518b0 Add intel-compiler 2025.0.4 patch and intel-dal 2025.0.1 patch packages (#48135)
* add compiler patch packages

* Add intel-dal patch package
2024-12-18 07:51:25 -05:00
eugeneswalker
4af8fbeddf ci: remove unmaintained, inactive gpu-tests stack (#48166) 2024-12-18 12:40:20 +01:00
Wouter Deconinck
2b85b706f1 findutils: add v4.10.0; faketime: new package (#48182)
* findutils: add v4.10.0
* faketime: new package
2024-12-18 00:13:02 -07:00
Samuel K. Gutiérrez
eadf8727e7 libquo: Improve dependency code, cleanup configure. (#48181)
* Fix issue reported by some users regarding some build dependencies.
* Remove invalid configure-time flag that was recently introduced.

Signed-off-by: Samuel K. Gutierrez <samuel@lanl.gov>
2024-12-17 19:58:07 -07:00
Buldram
de739db153 nim: remove bash dependency (#48132) 2024-12-17 18:51:38 -08:00
Jon Rood
a3bed44bf5 kokkos: add hip_relocatable_device_code variant. trilinos: kokkos should enable relocatable device code if requested for trilinos and it also requires the kokkos libraries be static. (#48143) 2024-12-17 18:48:35 -08:00
Stephen Hudson
3da04ccb19 libEnsemble: add v1.4.3 (#48144) 2024-12-17 18:47:27 -08:00
SXS Bot
f921b28032 spectre: add v2024.12.16 (#48146)
Co-authored-by: sxs-bot <sxs-bot@users.noreply.github.com>
2024-12-17 18:45:45 -08:00
Thomas Helfer
3d50d7173d Update tfel and mgis packages to new versions (#48176)
* update tfel package
* Update MGIS package
2024-12-17 18:26:08 -08:00
Rémi Lacroix
5a5f555fe2 NextFlow: Add versions 24.10.2 and 24.10.3. (#48153)
* NextFlow: add version 24.10.2.
* NextFlow: add version 24.10.3.
2024-12-17 17:44:10 -08:00
David Boehme
bb30c726a4 caliper: Add v2.12.1 (#48021)
* caliper: Add v2.12.1
* Only apply aarch patch in versions below 2.12
* Fix version spec for patch
* Remove obsolete comment
2024-12-17 17:40:54 -08:00
psakievich
0894180cc1 Add more functionality to the stage cmd (#46498)
* Add more functionality to the stage cmd

* Completion commands

* completion again

* Add tests, but they are slow

* Stale comment
2024-12-17 15:07:29 -08:00
kwryankrattiger
f211e2f9c4 CI: reduce output from helper scripts (#48145) 2024-12-17 12:37:57 -07:00
kwryankrattiger
f04ea573fa ci: don't error in CI for missing libs (#48169)
There are still more fix ups required for the missing libs to work as
expected in CI. Dropping the error requirement in favor of moving to a
log scraping method until we can verify all package issues have been
addressed correctly.
2024-12-17 19:43:36 +01:00
Zack Galbreath
364f70c16d Remove E4S Neoverse V1 pipeline (#48160)
Per discussion with the Spack CI team, our graviton2 runners have been
performing poorly and this stack seems no longer necessary.
2024-12-17 19:37:08 +01:00
Wouter Deconinck
5da1adad3a root: only depends_on fortran when +fortran (#48122) 2024-12-17 12:08:12 -06:00
kwryankrattiger
dfb529fc6e CI: set concretization pool size (#48077)
* Set the "build_jobs" on concretization/generate for CI

build_jobs also controls the concretization pool size. Set this
in the config section for CI generate.

This config is overwritten by build_job CI using the SPACK_BUILD_JOBS
environment variable. This implicitly will drop the default build
CPU request on all "default" grouped build jobs from (max) 16 to 8.

* Add default allocations for build jobs

* Add common jobs and concretize args to ci generate and rebuild

* CI: Specify parallel concretize and build jobs via argument

* Increase power and cray concretization limits

Lowering limits for these stacks creates timeouts

* Increase default pool size to 8

intermittent timeouts with 4 CPUs

* Add reduced requests for windows for now
2024-12-17 12:05:15 -06:00
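
A rough illustration of the knob involved (`spack.config.set` is Spack's real
API, though CI sets this via the generated YAML config section; the value here
is arbitrary):

    import spack.config

    # build_jobs bounds build parallelism and, per this change, also the
    # process pool used for concretization during pipeline generation.
    spack.config.set("config:build_jobs", 8)
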
Todd Gamblin
6e2625ae65 package_base: generify accessor methods for when-keyed dictionaries
This turns some variant-specific methods for dealing with when-keyed dictionaries into
more generic versions, in preparation for conditional version definitions.

`_by_name`, `_names`, etc. are replaced with generic methods for transforming
when-keyed dictionaries:
 * `_by_subkey()`
 * `_subkeys()`
 * `_num_definitions()`
 * `_definitions()`
 * `_remove_overridden_defs()`

And the variant accessors are refactored to use these methods underneath.

To do this, types like `WhenDict` had to be generified, and some `TypeVars`
were added for sortable keys and values.

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2024-12-17 07:25:14 -08:00
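
A hedged sketch of the shape involved (simplified stand-ins, not the actual
`_by_subkey` implementation): a when-keyed dictionary maps conditions to
per-name definitions, and a generic accessor regroups them by subkey.

    from typing import Dict, List, TypeVar

    K = TypeVar("K")
    V = TypeVar("V")

    # {when_condition: {name: definition}} -> {name: [definition, ...]}
    def by_subkey(when_keyed: Dict[str, Dict[K, V]]) -> Dict[K, List[V]]:
        result: Dict[K, List[V]] = {}
        for _when, defs in when_keyed.items():
            for name, definition in defs.items():
                result.setdefault(name, []).append(definition)
        return result

    print(by_subkey({"+mpi": {"foo": 1}, "~mpi": {"foo": 2, "bar": 3}}))
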
Todd Gamblin
7f24b11675 Vendor typing_extensions
We are using more and more typing features in Spack, and without features like
protocols, typing core is becoming harder and harder.

I think it's worth vendoring `typing_extensions` for this. It will get us a number of
useful capabilities:

* `Literal`
* `TypedDict`
* `Protocol`

among others.

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2024-12-17 07:25:14 -08:00
Mikael Simberg
bb9bb905a0 mold: Add 2.35.1 (#48136) 2024-12-17 06:54:59 -07:00
Harmen Stoppels
60b4882d4e ci: pcluster missing_library_policy: ignore (#48138) 2024-12-17 12:01:56 +01:00
Harmen Stoppels
19734832eb resolve_shared_libraries.py: exclude libanl.so from glibc (#48139) 2024-12-17 11:33:36 +01:00
Massimiliano Culpo
51fb1ed05b Temporarily pin Ubuntu to v22.04, where we use kcov (#48152)
Ubuntu doesn't package kcov in v24.04. Since GitHub
started upgrading their runner images, this makes
our CI fail; see e.g.

https://github.com/spack/spack/actions/runs/12366970840/job/34518012887?pr=47854

This is a temporary workaround, while we prepare a
more stable fix.

* Don't run too many unit tests
2024-12-17 11:30:47 +01:00
dependabot[bot]
69faa41c3f build(deps): bump docker/setup-buildx-action from 3.7.1 to 3.8.0 (#48150)
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 3.7.1 to 3.8.0.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](c47758b77c...6524bf65af)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-17 09:00:26 +01:00
Stephen Nicholas Swatman
72ef5b9010 acts dependencies: new versions as of 2024/12/16 (#48133)
This commit adds a new patch version of algebra-plugins and a new minor
version of detray.
2024-12-16 23:12:58 -07:00
Chris Marsh
795809f31b qgis: add 3.36 and 3.40, fix proj depend (#48110)
* Add newest LTR 3.34.13, constrain proj to work around build bug, add 3.40.1

* bound proj

* Improve comment
2024-12-16 21:57:57 -07:00
Chris Marsh
5db597ff87 qt5: patch internal RapidJSON (#48078)
* Fix qt5 internal RapidJSON build error with %gcc@14:

* fix style

* qt: patch url full_index

* qt: fix patch sha

* qt: patch when @5.9.2:

---------

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2024-12-16 20:06:30 -06:00
kwryankrattiger
b54227d5e9 Mesa: Update the Meson requirements for newer versions (#48116) 2024-12-16 15:18:15 -06:00
Wouter Deconinck
94cf51875f acts: don't use system dfelibs for 35.1:36.0 (#47994) 2024-12-16 14:39:21 -06:00
Harmen Stoppels
2f6e30fd24 ci: new image for developer-tools (#48065) 2024-12-16 13:08:05 +01:00
Harmen Stoppels
06eae96ef9 config:shared_linking:missing_library_policy to error/warn about accidental use of system libraries on linux/freebsd (#47365)
This commit adds a config option `config:shared_linking:missing_library_policy:error/warn/ignore` which will cause installation errors or warnings when ELF executables or libraries need shared libraries which cannot be resolved from RPATH search paths. The default is to ignore.

This is a safeguard against accidentally linking to system libraries instead of Spack libraries. It makes it more likely that build cache installs work on different machines. It works only at the level of libraries, not at the level of symbols. Some system dependencies are allowed (e.g. kernel and libc).

Packages can (though this is discouraged) set `unresolved_libraries` to a list of patterns of sonames/library names that are known to be unresolvable in RPATHs. In the future this could be made more fine-grained in a non-breaking way by allowing a dictionary of patterns `lib => [deps]`.
2024-12-16 12:32:36 +01:00
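
A minimal sketch of the policy semantics described above (not Spack's
implementation; the helper name is hypothetical):

    def apply_missing_library_policy(missing, policy="ignore"):
        """Handle shared libraries unresolvable from RPATH search paths."""
        if not missing or policy == "ignore":
            return
        message = "cannot resolve shared libraries: " + ", ".join(missing)
        if policy == "error":
            raise RuntimeError(message)  # fails the installation
        print("Warning: " + message)     # policy == "warn"
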
Harmen Stoppels
557083c33b curl: disable docs to drop perl dep (#48074) 2024-12-16 12:25:53 +01:00
Massimiliano Culpo
f6ab2f5b99 unit-test: port changes from compiler as deps (#48104)
Extracted #45189

Common test setup has been extracted in fixtures. Some matrix
dimensions moved from being "compiler" to be "targets".

Use --fake install for packages in test.
2024-12-16 09:27:41 +01:00
Rocco Meli
6005813518 CP2K: use ninja generator and add constraint on dla-future-fortran (#48033)
* cp2k ninja

* version
2024-12-16 09:24:21 +01:00
Joseph Wang
1df506959e py-packaging: update to 24.2 (#48087) 2024-12-14 09:57:40 -07:00
Todd Gamblin
0d0ff44e3e Spec: Remove _normal attribute and unused constructor arguments (#48119)
The `_normal` attribute on specs is no longer used and has no meaning.
It's left over from part of the original concretizer.

The `concrete` constructor argument is also not used by any part of core.

- [x] remove `_normal` attribute from `Spec`
- [x] remove `concrete` argument from `Spec.__init__`
- [x] remove unused `check_diamond_normalized_dag` function in tests
- [x] simplify `Spec` constructor and docstrings

I tried to add typing to `Spec` here, but it creates a huge number of type issues
because *most* things on `Spec` are optional. We probably need separate `Spec` and
`ConcreteSpec` classes before attempting that.

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2024-12-14 15:24:22 +01:00
Wouter Deconinck
f4bfeb7ed8 sundials: fix missing comma in list (#48106) 2024-12-14 06:38:01 -07:00
Zack Galbreath
a16350df69 Fix typo in ci/README.md (#48114) 2024-12-14 06:22:46 -07:00
John W. Parent
a2981cff1f New Package: Trame (#47920)
* Add trame
2024-12-13 17:16:31 -06:00
Richard Berger
d2372f8eee Add libcxi and its dependencies (#47705) 2024-12-13 16:09:17 -07:00
Buldram
c310c2911a nim: fix deps, deprecate and patch old versions (#47934)
* nim: fix deps, deprecate and patch old versions
* Fix runtime dependencies for produced binaries:
  - Add -rpaths pointing at dependencies to std wrapper modules
  - Set version constraints for OpenSSL
  - Added SQLite3 variant for <2.0
* Parallelize build with make
* Deprecate 1.0.10 due to CVEs
* Deprecate 1.9.3 as it's an old development version
* Backport patch for CVE-2021-21372 to <1.2.10/<1.4.4,
  CVE-2021-21374 to 1.4.2 and CVE-2021-46872 to 1.4.*
* Avoid empty low ranges that include devel
* Add previously missing CVE comment
* Keep "link" type for dynamic libraries for MSVC
* Omit "run" type for library dependencies
* Disable SQLite variant by default
* Fix version ranges
   Had assumed they were exclusive, but they're inclusive
* Correct version range for sqlite variant
   Difference doesn't matter outside of development versions
* Move patches to use GitHub URLs instead of files
* Retry CI
* append ?full_index=1
2024-12-13 15:29:18 -07:00
Rémi Lacroix
d68747912d suite-sparse: Add version 7.8.3. (#48101) 2024-12-13 14:53:38 -07:00
Harmen Stoppels
107e4515bd tests: fix a few open(...) calls (#48113) 2024-12-13 21:38:48 +01:00
Massimiliano Culpo
af6526bb82 ga: add a pylint check to avoid adding open calls without encoding= (#48099) 2024-12-13 21:21:26 +01:00
Raffaele Solcà
dd8dff7872 add dla-future v0.7.0 (#48098) 2024-12-13 12:28:21 -07:00
Adam J. Stewart
82d4b391bf py-matplotlib: add v3.9.3, v3.9.4 (#48107)
* py-matplotlib: add v3.9.3, v3.9.4
* Add upper bound on meson
2024-12-13 11:54:19 -07:00
Harmen Stoppels
a07e372770 filter_file: make tempfile later (#48108)
* filter_file: make tempfile later

* also add a `.` after the filename
2024-12-13 11:49:32 -07:00
kwryankrattiger
d35202d83e VisIt: Patch to fix python module deps (#48097)
Previously the pip setup would delete the visitmodule during the install
step. This was fixed by forcing the pip setup to only run once before
the dependents are created.
2024-12-13 11:33:07 -07:00
Tamara Dahlgren
1c1d439a01 Circular import fix: spack.config -> spack.environment (#48057)
Fix by moving setup to spack.main
2024-12-13 18:44:08 +01:00
Massimiliano Culpo
d52be82c06 netcdf-cxx4: use https instead of ftp (#47875)
* netcdf-cxx4: use https instead of ftp
* Update url for v4.3.0
2024-12-13 09:00:40 -08:00
Dominic Hofer
2a0fc464c9 Remove maintainer (#48105) 2024-12-13 17:17:50 +01:00
Massimiliano Culpo
cd26331b19 superlu: add v7.0, add metis as a dependency (#48061) 2024-12-13 11:26:27 +01:00
Kevin Huck
f5934db96b Updating APEX package (#48017)
* Updating APEX package
Adding version 2.7.1 and adding OpenCL support flags

* Update var/spack/repos/builtin/packages/apex/package.py

Co-authored-by: Mikael Simberg <mikael.simberg@iki.fi>

* Update var/spack/repos/builtin/packages/apex/package.py

Co-authored-by: Mikael Simberg <mikael.simberg@iki.fi>

* Fixing recommended change typo

* Removing superfluous conflict

---------

Co-authored-by: Mikael Simberg <mikael.simberg@iki.fi>
2024-12-13 10:20:30 +01:00
Harmen Stoppels
11b86ca75c package_base.py: remove deprecated props (#48066) 2024-12-13 10:19:16 +01:00
kwryankrattiger
0c2b546825 Remove extraneous newline from reproducer output (#48076)
* Remove extraneous newline from reproducer output

* Convert print -> tty.info
2024-12-13 09:38:43 +01:00
kwryankrattiger
ef615bcc7e CI: Move the build stage to the project root instead of tmp (#47996)
On macOS the tmpdir fills up over time and isn't cleared properly.
Move the build stage to a location where it is cleared after each build.
2024-12-13 06:55:01 +00:00
Wouter Deconinck
0f21f24356 cmake: ~ownlibs ^libarchive compression+=bz2lib,lzma,zstd (#48079) 2024-12-12 23:47:29 -07:00
Brian Vanderwende
e59ee0768f grads: Support newer hdf5 versions for grads (#48058)
* Support newer hdf5 versions for grads

* Cover netcdf dependency too

* Adjusted placement of two comments
2024-12-13 00:33:04 -06:00
Chris Marsh
6cbd9dcf13 Add +pic variant by default such that consumers of the static version of pcre2 can use it in a shared library. Fixes #47614 (#48071) 2024-12-12 19:38:10 -06:00
kwryankrattiger
92dbb55703 Make Autotools build_system point at correct build_directory (#48072) 2024-12-12 15:58:18 -07:00
Marc T. Henry de Frahan
e84631473c Add openfast FPE trapping variant (#48042) 2024-12-12 15:28:04 -07:00
Satish Balay
c213a8c2a7 camp: restrict cub dependency to cuda versions older than 10 - as newer cuda versions already include cub (#48008)
Adding an additional dependency on cub pulls in a version of cub incompatible with the one provided by CUDA
2024-12-12 13:17:51 -08:00
Harmen Stoppels
526af1cbe7 Sprinkle open(..., encoding=utf-8) (#48006)
Add missing encoding=utf-8 to various open calls. This makes
files like spec.json, spack.yaml, spack.lock, config.yaml etc locale
independent w.r.t. text encoding. In practice this is not often an
issue since Python 3.7, where the C locale is promoted to
C.UTF-8. But it's better to enforce UTF-8 explicitly, since there is
no guarantee text files are written in the right encoding.

Also avoid opening in text mode if it can be avoided.
2024-12-12 21:46:08 +01:00
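
A small example of the enforced pattern (the file name is illustrative):

    import json

    # Decoded as UTF-8 regardless of the process locale, so files like
    # spec.json read identically on any machine.
    with open("spec.json", encoding="utf-8") as f:
        spec_data = json.load(f)
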
Massimiliano Culpo
334a8b0991 ci: remove a custom implementation of a stdlib functionality (#48068) 2024-12-12 12:09:35 -08:00
Dom Heinzeller
1581922c9e cprnc: install rpath patch for v1.0.8 (#47913) 2024-12-12 20:31:40 +01:00
Harmen Stoppels
9cd2f0a536 filter_file: fix various bugs (#48038)
* `f.tell` on a `TextIOWrapper` does not return the offset in bytes, but
  an opaque integer that can only be used for `f.seek` on the same
  object. Spack assumes it's a byte offset.
* Do not open in a locale dependent way, but assume utf-8 (and allow
  users to override that)
* Use tempfile to generate a backup/temporary file in a safe way
* Comparison between None and str is valid and on purpose.
2024-12-12 20:07:39 +01:00
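
To make the first bullet concrete, a short demonstration of the
standard-library behavior involved (not Spack code):

    with open("demo.txt", "w", encoding="utf-8") as f:
        f.write("héllo\nworld\n")

    with open("demo.txt", encoding="utf-8") as f:
        f.readline()
        cookie = f.tell()  # opaque cookie, only meaningful to f.seek()

    with open("demo.txt", "rb") as f:
        f.readline()
        offset = f.tell()  # true byte offset: 7, since "héllo\n" is 7 bytes

    # Assuming `cookie` equals `offset` is the kind of bug fixed here.
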
Harmen Stoppels
687766b8ab spec.parser / spec.token: improvements (#48063)
Follow-up to #47956 

* Rename `token.py` -> `tokenize.py`
* Rename `parser.py` -> `spec_parser.py`
* Move common code related to iterating over tokens into `tokenize.py`
* Add "unexpected character token" (i.e. `.`) to `SpecTokens` by default instead of having a separate tokenizer / regex.
2024-12-12 17:08:20 +01:00
Adam J. Stewart
396a701860 Python: deprecate 3.8 (#46913)
* Python: deprecate 3.8

* Remove preference for EOL Python versions

* Explicitly deprecate things requiring EOL Python

* More deprecations

* deprecate old versions of slepc, py-petsc4py, py-slepc4py in sync with old versions of petsc

---------

Co-authored-by: Satish Balay <balay@mcs.anl.gov>
2024-12-12 15:22:06 +01:00
Massimiliano Culpo
7105cc8c01 Make use of ^ in 'depends_on' an error (#48062)
The use of `^` in `depends_on` directives has never been allowed, since
the dawn of Spack.

Up to now, we used to have an audit to catch this kind of issue, mainly
because in that way we could easily collect all issues and report them
to packagers at once.

Due to implementation details, this audit doesn't work if a dependency
without a `^` is followed by the same dependency with a `^`.

This PR makes this pattern an error, which will be reported eagerly, and
removes the corresponding audit. It also fixes a package using the wrong
idiom.

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2024-12-12 14:19:05 +01:00
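
To illustrate the now-rejected idiom (spec strings invented for the example):

    # In a package recipe, a `^` inside depends_on has never been valid
    # and is now reported eagerly:
    #
    #     depends_on("hdf5 ^mpich")   # wrong: rejected at definition time
    #
    # The constraint belongs in its own directive instead, e.g.:
    #
    #     depends_on("hdf5+mpi")
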
Wouter Deconinck
0ce38ed109 rivet: patch to fix missing headers (#48049) 2024-12-12 11:14:15 +01:00
Tamara Dahlgren
c548bcc9ef Environment: remove self import (#48056) 2024-12-12 10:30:19 +01:00
Tamara Dahlgren
f018e0fe42 Circular import fix: spack.schema.config -> spack.config (#48059)
fix by moving `merge_yaml` from `config.py` to `schema/__init__.py`
2024-12-12 10:07:53 +01:00
Tamara Dahlgren
9aefbb0e96 Circular import fix: spack.oci.opener -> spack.parser (#47956)
by splitting spack.parser into two modules
2024-12-12 10:02:07 +01:00
Marcel Koch
9265991767 ginkgo: add v1.9.0 (#47987)
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2024-12-11 16:53:01 -07:00
Dom Heinzeller
25cfea48f3 ESMF package: support clang with flang
So far, the ESMF package recipe in Spack assumes that the Spack
compilers clang and apple-clang are using gfortran as the Fortran
compiler. But with the latest improvements to the LLVM compilers,
we need to also support clang with flang.
2024-12-11 16:47:52 -07:00
Andy Porter
fc4316cafa py-psyclone: add v3.0.0 (#47964)
* Update py-psyclone package to version 3.0.0
* Add Sergi and myself as maintainers
* correct case of url and add url_for_version() method
2024-12-11 11:51:14 -08:00
Scott Wittenburg
de1416b3de ci: Refactor pipeline generation (#47459)
Reorganize the pipeline generation aspect of the ci module,
mostly to separate the representation, generation, and
pruning of pipeline graphs from platform-specific output
formatting.

Introduce a pipeline generation registry to support generating
pipelines for other platforms, though gitlab is still the only
supported format currently.

Fix a long-existing bug in pipeline pruning where only direct
dependencies were added to any node's dependency list.
2024-12-11 19:23:37 +00:00
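
The registry mentioned above might look roughly like this (a hedged sketch;
names and signatures are illustrative, not the actual ci module):

    _PIPELINE_GENERATORS = {}

    def pipeline_generator(name):
        """Register an output formatter for one CI platform."""
        def register(fn):
            _PIPELINE_GENERATORS[name] = fn
            return fn
        return register

    @pipeline_generator("gitlab")  # currently the only supported format
    def generate_gitlab(pipeline_graph, output_file):
        ...  # platform-specific formatting, kept separate from pruning
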
Harmen Stoppels
ba52c4f05d gha: fix git safe.directory config (#48041) 2024-12-11 19:11:44 +01:00
Rocco Meli
501ee68606 dbcsr: add v2.8.0 (#48035)
* dbcsr: add v2.8.0

* add myself as maintainer
2024-12-11 10:18:38 -07:00
Jon Rood
283eaaf323 amr-wind: remove unused cmake option (#48009)
* amr-wind: remove unused cmake option

* Style.
2024-12-11 09:28:31 -07:00
Wouter Deconinck
a3543008d9 qt-base: fix rpath for dependents (#47424)
ensure that CMAKE_INSTALL_RPATH_USE_LINK_PATH=ON works in qt packages.
2024-12-11 16:59:47 +01:00
Robert Cohn
f760e16688 umf only available with 2025 (#48027) 2024-12-11 16:33:56 +01:00
Harmen Stoppels
e9d2732e00 log.py: improve utf-8 handling, and non-utf-8 output (#48005) 2024-12-11 10:54:17 +01:00
Harmen Stoppels
03525528d6 llnl.path: make system_path_filter a noop on non-win32 (#48032) 2024-12-11 10:51:06 +01:00
Scott Wittenburg
a3985e7538 Revert "Set the "build_jobs" on concretization/generate for CI (#47660)" (#48028)
This reverts commit 316dcc1609.
2024-12-11 07:56:36 +01:00
Robert Cohn
ae28528ec7 sycl runtime needs umf (#48011) 2024-12-10 14:34:35 -08:00
Paul Kuberry
cb8880b388 Update compadre and py-pycompadre to v1.6.0 (#47948)
* compadre: add version 1.6.0
* py-pycompadre: add version 1.6.0
2024-12-10 13:29:31 -08:00
kwryankrattiger
316dcc1609 Set the "build_jobs" on concretization/generate for CI (#47660)
* Set the "build_jobs" on concretization/generate for CI

build_jobs also controls the concretization pool size. Set this
in the config section for CI generate.

This config is overwritten by build_job CI using the SPACK_BUILD_JOBS
environment variable. This implicitly will drop the default build
CPU request on all "default" grouped build jobs from (max) 16 to 8.

* Add default allocations for build jobs

* Add common jobs and concretize args to ci generate and rebuild

* CI: Specify parallel concretize and build jobs via argument

* Increase power and cray concretization limits

Lowering limits for these stacks creates timeouts
2024-12-10 14:13:23 -07:00
rfbgo
84ea7dbddf hp2p: new package (#47950)
* Add hp2p app definition
* update homepage
   Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* update to fstring

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2024-12-10 13:03:54 -08:00
dmagdavector
b6e4ff0242 py-nbclassic: add v1.1.0 (#47946)
* py-nbclassic: add v1.1
* py-nbclassic: reduce explicit dependencies for v1.1.0
  Having all the 'excess' packages listed did not break anything, as
  they were needed for `py-jupyter-server` (pulled in via `py-notebook-shim`)
  anyway, but the change makes it clearer why things are being pulled in.
2024-12-10 13:45:44 -07:00
Xuefeng Ding
c23ffbbd7a geant4: patch typo in wroot (#47955)
* bug fix

* not just 10.4, all versions

* geant4: comment; close patch when range

---------

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2024-12-10 14:36:01 -06:00
Luc Grosheintz
accd3ca860 highfive: add v2.10.1 (#47914)
* highfive: release 2.10.1
* Use sha256 of '.tar.gz'.

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2024-12-10 12:29:17 -08:00
dmagdavector
d47629a521 py-jupyterlab-server: add v2.23 to 2.27 (#47969)
* py-jupyterlab-server: add v2.23 to 2.27
* py-jupyterlab-server: style fixes
2024-12-10 12:06:25 -08:00
Lehman Garrison
7bb6c9b828 py-disbatch: add new package at version 3.0 (#47988) 2024-12-10 10:13:56 -08:00
Wouter Deconinck
7e5b5f8c57 veccore: add v0.8.2 (#47855)
* veccore: add v0.8.2

* Update cmake requirement

---------

Co-authored-by: Seth R Johnson <johnsonsr@ornl.gov>
2024-12-10 08:38:43 -07:00
Paul R. C. Kent
3a1c0f5c5f llvm: add v19.1.5 (#47897) 2024-12-10 15:19:10 +01:00
Massimiliano Culpo
b50dbb8604 pipelines: simplify and lint aws-pcluster-* (#47989) 2024-12-10 12:16:51 +01:00
Harmen Stoppels
30c00353d4 make level_zero variant consistent, add missing instances (#47985) 2024-12-10 09:25:30 +01:00
Tamara Dahlgren
466c3abaeb Remove remaining use of deprecated test callback (#47995) 2024-12-10 08:19:56 +01:00
Adam J. Stewart
478647f873 py-numpy: add v2.2.0 (#47999) 2024-12-09 23:39:00 -07:00
Adam J. Stewart
15f3851a92 py-scikit-learn: add v1.6.0 (#47998) 2024-12-09 22:59:24 -07:00
Laura Weber
5232ee1ed1 tecplot: updated hash for 2024r1m1 (#47886) 2024-12-09 18:31:35 -08:00
teddy
855943ff29 py-mgmetis: remove constraints 3.X for mpi4py & 1.X for numpy dependencies (#47890)
Co-authored-by: t. chantrait <teddy.chantrait@cea.fr>
2024-12-09 18:29:59 -08:00
Tobias Ribizel
449a462cde gurobi: add versions 11 and 12 (#47889)
This also means removing the Python support for these versions,
as the installation method was deprecated in favor of pip/conda
2024-12-09 18:27:30 -08:00
François Trahay
f3c6f00cc1 eztrace: new version for building from the dev branch of the git repository (#47891) 2024-12-09 18:25:54 -08:00
jgraciahlrs
42333ad66e extrae: relax requirements on binutils (#47893) 2024-12-09 18:24:15 -08:00
Luc Grosheintz
36f3566257 highfive: update maintainers. (#47896)
* highfive: update maintainers.
* switch maintainers.

---------

Co-authored-by: Nicolas Cornu <me+github@alkino.fr>
2024-12-09 18:22:10 -08:00
Adam J. Stewart
24fc720c0b py-twine: add v6.0.1 (#47899) 2024-12-09 18:18:19 -08:00
Andrey Alekseenko
fe0f4c1815 gromacs: support version 2024.4 (#47900)
And fix help formatting
2024-12-09 18:16:42 -08:00
Alberto Sartori
d68462ae8e justbuild: add version 1.4.1 (#47902) 2024-12-09 18:15:06 -08:00
Victor Lopez Herrero
0189e92329 dlb: add v3.5.0 (#47916) 2024-12-09 18:06:58 -08:00
Mark Abraham
8d83baa35e gromacs: conflict %apple-clang and +openmp (#47935) 2024-12-09 17:39:32 -08:00
Wouter Deconinck
12dd1208f3 geant4: add v11.3.0 (#47961)
* geant4: add v11.3.0
* geant4: rm deprecated 11.3.0.beta
* geant4: add 11.3.0 and associated data library versions
   - Data library versions taken from:
     - https://gitlab.cern.ch/geant4/geant4/-/blob/v11.3.0/cmake/Modules/G4DatasetDefinitions.cmake?ref_type=tags
   - Variants etc otherwise unchanged.
   - 11.3.0-beta version removed, release version marked as preferred.
* g4channeling: f-strings

---------

Co-authored-by: Ben Morgan <ben.morgan@warwick.ac.uk>
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
2024-12-09 15:01:45 -08:00
David Gardner
728c5e0e9d add main branch (#47952) 2024-12-09 14:54:38 -08:00
dmagdavector
c3e92a3d01 py-httpx: add v0.28, v0.28.1 (#47970)
* py-httpx: add v0.28, v0.28.1
* py-httpx: py-sniffio dependency only needed up to 0.27
2024-12-09 14:44:29 -08:00
Stephen Nicholas Swatman
49efa711d0 acts dependencies: new versions as of 2024/12/08 (#47981)
* acts dependencies: new versions as of 2024/12/08
  This commit includes a new version of ACTS, as well as new versions of
  the ACTS algebra plugins, covfie, detray, and geomodel.
* Fixes
* covfie: depends_on cmake@3.21: when @0.11:

---------

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2024-12-09 15:08:47 -07:00
Wouter Deconinck
ab4a645cbe various pkgs: use https homepage when http redirects (github.io) (#47974)
* various pkgs: use https homepage when http redirects (github.io)
* py-owslib: fix homepage
* py-owslib: fix homepage to rtd
* h5io: fix homepage
2024-12-09 14:48:33 -07:00
Wouter Deconinck
7c74247f23 py-greenlet: remove preference for v2.0.2 (#47962)
* py-greenlet: remove preference for 2.0.2
* py-greenlet: conflicts with gcc@:7 when @3.1.1:
2024-12-09 12:26:16 -08:00
Matt Thompson
728f13d4b2 mapl: add v2.51.0 (#47968) 2024-12-09 12:21:05 -08:00
Wouter Deconinck
4d6347c99c node-js: patch for %gcc@12.[1-2] when @22.2:22.5 (#47979)
* node-js: patch for %gcc@12.[1-2] when @22.2:22
* node-js: avoid url patch (serial in common.gypi)
2024-12-09 12:19:09 -08:00
Wouter Deconinck
b2a86fcaba py-plac: add v1.4.3; restrict to python@:3.11 for older (#47982) 2024-12-09 11:24:51 -08:00
Kin Fai Tse
da83ab35e8 add soci 4.0.3 (#47983) 2024-12-09 11:17:36 -08:00
Alberto Invernizzi
9cb2070eeb gh: add v2.59.0 -> v2.63.2 (#47958)
* gh: bump versions

* update go requirement (good catch @alecbcs!)
see 8446079656
2024-12-09 09:43:10 -08:00
Wouter Deconinck
a72490fc91 coverage.yml: set fail_ci_if_error = false again (#47986) 2024-12-09 16:59:44 +01:00
Mikael Simberg
f15e5f7163 mold: Add 2.35.0 (#47984) 2024-12-09 04:02:51 -07:00
dependabot[bot]
fc105a1a26 build(deps): bump types-six in /.github/workflows/requirements/style (#47954)
Bumps [types-six](https://github.com/python/typeshed) from 1.16.21.20241105 to 1.17.0.20241205.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-six
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-08 19:34:20 -06:00
Stephen Sachs
8a9e16dc3b aws-pcluster stacks: static spack.yaml (#47918) 2024-12-08 20:26:08 +01:00
eugeneswalker
0b7fc360fa e4s ci: add lammps +rocm (#47929)
* e4s ci: add lammps +rocm

* e4s rocm external stack: add def for rocm-openmp-extras external

* lammps +rocm external rocm has errors, comment out
2024-12-08 10:21:28 -08:00
Wouter Deconinck
79d79969bb celeritas: patch 0.5.0 for geant4@11.3.0: (#47976) 2024-12-08 09:12:14 -05:00
737 changed files with 20701 additions and 10993 deletions


@@ -66,7 +66,7 @@ jobs:
./share/spack/qa/validate_last_exit.ps1
spack -d audit externals
./share/spack/qa/validate_last_exit.ps1
- - uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882
+ - uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
if: ${{ inputs.with_coverage == 'true' && runner.os != 'Windows' }}
with:
name: coverage-audits-${{ matrix.system.os }}


@@ -94,7 +94,7 @@ jobs:
fi
- name: Upload Dockerfile
- uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882
+ uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
with:
name: dockerfiles_${{ matrix.dockerfile[0] }}
path: dockerfiles
@@ -103,7 +103,7 @@ jobs:
uses: docker/setup-qemu-action@49b3bc8e6bdd4a60e6116a5414239cba5943d3cf
- name: Set up Docker Buildx
- uses: docker/setup-buildx-action@c47758b77c9736f4b2ef4073d4d51994fabfe349
+ uses: docker/setup-buildx-action@6524bf65af31da8d45b59e8c27de4bd072b392f5
- name: Log in to GitHub Container Registry
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567
@@ -133,7 +133,7 @@ jobs:
needs: deploy-images
steps:
- name: Merge Artifacts
- uses: actions/upload-artifact/merge@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882
+ uses: actions/upload-artifact/merge@6f51ac03b9356f520e9adb1b1b7802705f340c2b
with:
name: dockerfiles
pattern: dockerfiles_*


@@ -32,4 +32,4 @@ jobs:
uses: codecov/codecov-action@05f5a9cfad807516dbbef9929c4a42df3eb78766
with:
verbose: true
- fail_ci_if_error: true
+ fail_ci_if_error: false


@@ -3,5 +3,5 @@ clingo==5.7.1
flake8==7.1.1
isort==5.13.2
mypy==1.8.0
- types-six==1.16.21.20241105
+ types-six==1.17.0.20241205
vermin==1.6.0


@@ -14,7 +14,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
- os: [ubuntu-latest]
+ os: [ubuntu-22.04]
python-version: ['3.7', '3.8', '3.9', '3.10', '3.11', '3.12']
on_develop:
- ${{ github.ref == 'refs/heads/develop' }}
@@ -24,19 +24,19 @@ jobs:
on_develop: ${{ github.ref == 'refs/heads/develop' }}
exclude:
- python-version: '3.7'
-  os: ubuntu-latest
+  os: ubuntu-22.04
  on_develop: false
- python-version: '3.8'
-  os: ubuntu-latest
+  os: ubuntu-22.04
  on_develop: false
- python-version: '3.9'
-  os: ubuntu-latest
+  os: ubuntu-22.04
  on_develop: false
- python-version: '3.10'
-  os: ubuntu-latest
+  os: ubuntu-22.04
  on_develop: false
- python-version: '3.11'
-  os: ubuntu-latest
+  os: ubuntu-22.04
  on_develop: false
steps:
@@ -55,7 +55,7 @@ jobs:
cmake bison libbison-dev kcov
- name: Install Python packages
run: |
- pip install --upgrade pip setuptools pytest pytest-xdist pytest-cov clingo
+ pip install --upgrade pip setuptools pytest pytest-xdist pytest-cov
pip install --upgrade flake8 "isort>=4.3.5" "mypy>=0.900" "click" "black"
- name: Setup git configuration
run: |
@@ -80,14 +80,14 @@ jobs:
UNIT_TEST_COVERAGE: ${{ matrix.python-version == '3.11' }}
run: |
share/spack/qa/run-unit-tests
- - uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882
+ - uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
with:
name: coverage-${{ matrix.os }}-python${{ matrix.python-version }}
path: coverage
include-hidden-files: true
# Test shell integration
shell:
- runs-on: ubuntu-latest
+ runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
@@ -113,7 +113,7 @@ jobs:
COVERAGE: true
run: |
share/spack/qa/run-shell-tests
- - uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882
+ - uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
with:
name: coverage-shell
path: coverage
@@ -134,7 +134,7 @@ jobs:
- name: Setup repo and non-root user
run: |
git --version
- git config --global --add safe.directory /__w/spack/spack
+ git config --global --add safe.directory '*'
git fetch --unshallow
. .github/workflows/bin/setup_git.sh
useradd spack-test
@@ -173,8 +173,9 @@ jobs:
spack bootstrap disable github-actions-v0.5
spack bootstrap disable github-actions-v0.6
spack bootstrap status
spack solve zlib
+ spack unit-test --verbose --cov --cov-config=pyproject.toml --cov-report=xml:coverage.xml lib/spack/spack/test/concretization/core.py
- - uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882
+ - uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
with:
name: coverage-clingo-cffi
path: coverage
@@ -210,9 +211,9 @@ jobs:
. share/spack/setup-env.sh
$(which spack) bootstrap disable spack-install
$(which spack) solve zlib
- common_args=(--dist loadfile --tx '4*popen//python=./bin/spack-tmpconfig python -u ./bin/spack python')
+ common_args=(--dist loadfile --tx '4*popen//python=./bin/spack-tmpconfig python -u ./bin/spack python' -x)
$(which spack) unit-test --verbose --cov --cov-config=pyproject.toml --cov-report=xml:coverage.xml "${common_args[@]}"
- - uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882
+ - uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
with:
name: coverage-${{ matrix.os }}-python${{ matrix.python-version }}
path: coverage
@@ -241,9 +242,9 @@ jobs:
env:
COVERAGE_FILE: coverage/.coverage-windows
run: |
- spack unit-test --verbose --cov --cov-config=pyproject.toml
+ spack unit-test -x --verbose --cov --cov-config=pyproject.toml
./share/spack/qa/validate_last_exit.ps1
- - uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882
+ - uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
with:
name: coverage-windows
path: coverage


@@ -13,8 +13,7 @@ concurrency:
jobs:
- # Validate that the code can be run on all the Python versions
- # supported by Spack
+ # Validate that the code can be run on all the Python versions supported by Spack
validate:
runs-on: ubuntu-latest
steps:
@@ -74,7 +73,7 @@ jobs:
- name: Setup repo and non-root user
run: |
git --version
- git config --global --add safe.directory /__w/spack/spack
+ git config --global --add safe.directory '*'
git fetch --unshallow
. .github/workflows/bin/setup_git.sh
useradd spack-test
@@ -87,6 +86,7 @@ jobs:
spack -d bootstrap now --dev
spack -d style -t black
spack unit-test -V
+ # Check we don't make the situation with circular imports worse
import-check:
runs-on: ubuntu-latest
steps:
@@ -146,3 +146,21 @@ jobs:
else
printf '\033[1;32mImport check passed: %s <= %s\033[0m\n' "$edges_after" "$edges_before"
fi
+ # Further style checks from pylint
+ pylint:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+ with:
+ fetch-depth: 0
+ - uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
+ with:
+ python-version: '3.13'
+ cache: 'pip'
+ - name: Install Python packages
+ run: |
+ pip install --upgrade pip setuptools pylint
+ - name: Pylint (Spack Core)
+ run: |
+ pylint -j 4 --disable=all --enable=unspecified-encoding --ignore-paths=lib/spack/external lib


@@ -102,6 +102,6 @@ PackageName: sbang
PackageHomePage: https://github.com/spack/sbang
PackageLicenseDeclared: Apache-2.0 OR MIT
- PackageName: six
- PackageHomePage: https://pypi.python.org/pypi/six
- PackageLicenseDeclared: MIT
+ PackageName: typing_extensions
+ PackageHomePage: https://pypi.org/project/typing-extensions/
+ PackageLicenseDeclared: Python-2.0


@@ -19,7 +19,7 @@ config:
install_tree:
root: $spack/opt/spack
projections:
all: "{architecture.platform}/{architecture.target}/{name}-{version}-{hash}"
all: "{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}"
# install_tree can include an optional padded length (int or boolean)
# default is False (do not pad)
# if padded_length is True, Spack will pad as close to the system max path
@@ -194,6 +194,12 @@ config:
# executables with many dependencies, in particular on slow filesystems.
bind: false
+ # Controls the handling of missing dynamic libraries after installation.
+ # Options are ignore (default), warn, or error. If set to error, the
+ # installation fails if installed binaries reference dynamic libraries that
+ # are not found in their specified rpaths.
+ missing_library_policy: ignore
# Set to 'false' to allow installation on filesystems that doesn't allow setgid bit
# manipulation by unprivileged user (e.g. AFS)


@@ -15,11 +15,12 @@
# -------------------------------------------------------------------------
packages:
all:
+ compiler:
+ - apple-clang
+ - clang
+ - gcc
providers:
- c: [apple-clang, llvm, gcc]
- cxx: [apple-clang, llvm, gcc]
elf: [libelf]
- fortran: [gcc]
fuse: [macfuse]
gl: [apple-gl]
glu: [apple-glu]


@@ -15,18 +15,19 @@
# -------------------------------------------------------------------------
packages:
all:
+ compiler: [gcc, clang, oneapi, xl, nag, fj, aocc]
providers:
awk: [gawk]
armci: [armcimpi]
blas: [openblas, amdblis]
- c: [gcc, llvm, intel-oneapi-compilers, xl, aocc]
- cxx: [gcc, llvm, intel-oneapi-compilers, xl, aocc]
+ c: [gcc]
+ cxx: [gcc]
D: [ldc]
daal: [intel-oneapi-daal]
elf: [elfutils]
fftw-api: [fftw, amdfftw]
flame: [libflame, amdlibflame]
- fortran: [gcc, llvm]
+ fortran: [gcc]
fortran-rt: [gcc-runtime, intel-oneapi-runtime]
fuse: [libfuse]
gl: [glx, osmesa]


@@ -15,8 +15,8 @@
# -------------------------------------------------------------------------
packages:
all:
+ compiler:
+ - msvc
providers:
- c : [msvc]
- cxx: [msvc]
mpi: [msmpi]
gl: [wgl]


@@ -265,25 +265,30 @@ infrastructure, or to cache Spack built binaries in Github Actions and
GitLab CI.
To get started, configure an OCI mirror using ``oci://`` as the scheme,
- and optionally specify a username and password (or personal access token):
+ and optionally specify variables that hold the username and password (or
+ personal access token) for the registry:

.. code-block:: console

-    $ spack mirror add --oci-username username --oci-password password my_registry oci://example.com/my_image
+    $ spack mirror add --oci-username-variable REGISTRY_USER \
+        --oci-password-variable REGISTRY_TOKEN \
+        my_registry oci://example.com/my_image

Spack follows the naming conventions of Docker, with Dockerhub as the default
registry. To use Dockerhub, you can omit the registry domain:

.. code-block:: console

-    $ spack mirror add --oci-username username --oci-password password my_registry oci://username/my_image
+    $ spack mirror add ... my_registry oci://username/my_image

From here, you can use the mirror as any other build cache:

.. code-block:: console

+    $ export REGISTRY_USER=...
+    $ export REGISTRY_TOKEN=...
     $ spack buildcache push my_registry <specs...> # push to the registry
-    $ spack install <specs...> # install from the registry
+    $ spack install <specs...> # or install from the registry
A unique feature of buildcaches on top of OCI registries is that it's incredibly
easy to generate a runnable container image with the binaries installed. This

View File

@@ -25,6 +25,14 @@ QMake does not appear to have a standardized way of specifying
the installation directory, so you may have to set environment
variables or edit ``*.pro`` files to get things working properly.
+ QMake packages will depend on the virtual ``qmake`` package which
+ is provided by multiple versions of Qt: ``qt`` provides Qt up to
+ Qt5, and ``qt-base`` provides Qt from version Qt6 onwards. This
+ split was motivated by the desire to split the single Qt package
+ into its components to allow for more fine-grained installation.
+ To depend on a specific version, refer to the documentation on
+ :ref:`virtual-dependencies`.
^^^^^^
Phases
^^^^^^


@@ -38,9 +38,11 @@ just have to configure an OCI registry and run ``spack buildcache push``.
spack -e . install
# Configure the registry
- spack -e . mirror add --oci-username ... --oci-password ... container-registry oci://example.com/name/image
+ spack -e . mirror add --oci-username-variable REGISTRY_USER \
+     --oci-password-variable REGISTRY_TOKEN \
+     container-registry oci://example.com/name/image

- # Push the image
+ # Push the image (do set REGISTRY_USER and REGISTRY_TOKEN)
spack -e . buildcache push --update-index --base-image ubuntu:22.04 --tag my_env container-registry
The resulting container image can then be run as follows:


@@ -178,8 +178,8 @@ Spec-related modules
Contains :class:`~spack.spec.Spec`. Also implements most of the logic for concretization
of specs.
- :mod:`spack.parser`
-    Contains :class:`~spack.parser.SpecParser` and functions related to parsing specs.
+ :mod:`spack.spec_parser`
+    Contains :class:`~spack.spec_parser.SpecParser` and functions related to parsing specs.
:mod:`spack.version`
Implements a simple :class:`~spack.version.Version` class with simple


@@ -5137,7 +5137,7 @@ other checks.
- Not applicable
* - :ref:`PythonPackage <pythonpackage>`
- Not applicable
- - ``test`` (module imports)
+ - ``test_imports`` (module imports)
* - :ref:`QMakePackage <qmakepackage>`
- ``check`` (``make check``)
- Not applicable
@@ -5146,7 +5146,7 @@ other checks.
- Not applicable
* - :ref:`SIPPackage <sippackage>`
- Not applicable
- - ``test`` (module imports)
+ - ``test_imports`` (module imports)
* - :ref:`WafPackage <wafpackage>`
- ``build_test`` (must be overridden)
- ``install_test`` (must be overridden)


@@ -1,5 +1,5 @@
sphinx==8.1.3
- sphinxcontrib-programoutput==0.17
+ sphinxcontrib-programoutput==0.18
sphinx_design==0.6.1
sphinx-rtd-theme==3.0.2
python-levenshtein==0.26.1

lib/spack/env/cc

@@ -40,6 +40,11 @@ readonly params="\
SPACK_ENV_PATH
SPACK_DEBUG_LOG_DIR
SPACK_DEBUG_LOG_ID
+ SPACK_COMPILER_SPEC
+ SPACK_CC_RPATH_ARG
+ SPACK_CXX_RPATH_ARG
+ SPACK_F77_RPATH_ARG
+ SPACK_FC_RPATH_ARG
SPACK_LINKER_ARG
SPACK_SHORT_SPEC
SPACK_SYSTEM_DIRS
@@ -218,7 +223,6 @@ for param in $params; do
if eval "test -z \"\${${param}:-}\""; then
die "Spack compiler must be run from Spack! Input '$param' is missing."
fi
- # FIXME (compiler as nodes) add checks on whether `SPACK_XX_RPATH` is set if `SPACK_XX` is set
done
# eval this because SPACK_MANAGED_DIRS and SPACK_SYSTEM_DIRS are inputs we don't wanna loop over.
@@ -342,9 +346,6 @@ case "$command" in
;;
ld|ld.gold|ld.lld)
mode=ld
if [ -z "$SPACK_CC_RPATH_ARG" ]; then
comp="CXX"
fi
;;
*)
die "Unknown compiler: $command"
@@ -402,7 +403,7 @@ dtags_to_strip="${SPACK_DTAGS_TO_STRIP}"
linker_arg="${SPACK_LINKER_ARG}"
# Set up rpath variable according to language.
rpath="ERROR: RPATH ARG WAS NOT SET, MAYBE THE PACKAGE DOES NOT DEPEND ON ${comp}?"
rpath="ERROR: RPATH ARG WAS NOT SET"
eval "rpath=\${SPACK_${comp}_RPATH_ARG:?${rpath}}"
# Dump the mode and exit if the command is dump-mode.
@@ -411,6 +412,13 @@ if [ "$SPACK_TEST_COMMAND" = "dump-mode" ]; then
exit
fi
+ # If, say, SPACK_CC is set but SPACK_FC is not, we want to know. Compilers do not
+ # *have* to set up Fortran executables, so we need to tell the user when a build is
+ # about to attempt to use them unsuccessfully.
+ if [ -z "$command" ]; then
+     die "Compiler '$SPACK_COMPILER_SPEC' does not have a $language compiler configured."
+ fi
#
# Filter '.' and Spack environment directories out of PATH so that
# this script doesn't just call itself
@@ -780,17 +788,15 @@ case "$mode" in
C)
extend spack_flags_list SPACK_ALWAYS_CFLAGS
extend spack_flags_list SPACK_CFLAGS
- preextend flags_list SPACK_TARGET_ARGS_CC
;;
CXX)
extend spack_flags_list SPACK_ALWAYS_CXXFLAGS
extend spack_flags_list SPACK_CXXFLAGS
- preextend flags_list SPACK_TARGET_ARGS_CXX
;;
- F)
- preextend flags_list SPACK_TARGET_ARGS_FORTRAN
- ;;
esac
+ # prepend target args
+ preextend flags_list SPACK_TARGET_ARGS
;;
esac


@@ -0,0 +1,254 @@
A. HISTORY OF THE SOFTWARE
==========================
Python was created in the early 1990s by Guido van Rossum at Stichting
Mathematisch Centrum (CWI, see http://www.cwi.nl) in the Netherlands
as a successor of a language called ABC. Guido remains Python's
principal author, although it includes many contributions from others.
In 1995, Guido continued his work on Python at the Corporation for
National Research Initiatives (CNRI, see http://www.cnri.reston.va.us)
in Reston, Virginia where he released several versions of the
software.
In May 2000, Guido and the Python core development team moved to
BeOpen.com to form the BeOpen PythonLabs team. In October of the same
year, the PythonLabs team moved to Digital Creations (now Zope
Corporation, see http://www.zope.com). In 2001, the Python Software
Foundation (PSF, see http://www.python.org/psf/) was formed, a
non-profit organization created specifically to own Python-related
Intellectual Property. Zope Corporation is a sponsoring member of
the PSF.
All Python releases are Open Source (see http://www.opensource.org for
the Open Source Definition). Historically, most, but not all, Python
releases have also been GPL-compatible; the table below summarizes
the various releases.
Release         Derived     Year        Owner       GPL-
                from                                compatible? (1)

0.9.0 thru 1.2              1991-1995   CWI         yes
1.3 thru 1.5.2  1.2         1995-1999   CNRI        yes
1.6             1.5.2       2000        CNRI        no
2.0             1.6         2000        BeOpen.com  no
1.6.1           1.6         2001        CNRI        yes (2)
2.1             2.0+1.6.1   2001        PSF         no
2.0.1           2.0+1.6.1   2001        PSF         yes
2.1.1           2.1+2.0.1   2001        PSF         yes
2.1.2           2.1.1       2002        PSF         yes
2.1.3           2.1.2       2002        PSF         yes
2.2 and above   2.1.1       2001-now    PSF         yes
Footnotes:
(1) GPL-compatible doesn't mean that we're distributing Python under
the GPL. All Python licenses, unlike the GPL, let you distribute
a modified version without making your changes open source. The
GPL-compatible licenses make it possible to combine Python with
other software that is released under the GPL; the others don't.
(2) According to Richard Stallman, 1.6.1 is not GPL-compatible,
because its license has a choice of law clause. According to
CNRI, however, Stallman's lawyer has told CNRI's lawyer that 1.6.1
is "not incompatible" with the GPL.
Thanks to the many outside volunteers who have worked under Guido's
direction to make these releases possible.
B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING PYTHON
===============================================================
PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
--------------------------------------------
1. This LICENSE AGREEMENT is between the Python Software Foundation
("PSF"), and the Individual or Organization ("Licensee") accessing and
otherwise using this software ("Python") in source or binary form and
its associated documentation.
2. Subject to the terms and conditions of this License Agreement, PSF hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
analyze, test, perform and/or display publicly, prepare derivative works,
distribute, and otherwise use Python alone or in any derivative version,
provided, however, that PSF's License Agreement and PSF's notice of copyright,
i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
2011, 2012, 2013, 2014 Python Software Foundation; All Rights Reserved" are
retained in Python alone or in any derivative version prepared by Licensee.
3. In the event Licensee prepares a derivative work that is based on
or incorporates Python or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python.
4. PSF is making Python available to Licensee on an "AS IS"
basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.
5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between PSF and
Licensee. This License Agreement does not grant permission to use PSF
trademarks or trade name in a trademark sense to endorse or promote
products or services of Licensee, or any third party.
8. By copying, installing or otherwise using Python, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.
BEOPEN.COM LICENSE AGREEMENT FOR PYTHON 2.0
-------------------------------------------
BEOPEN PYTHON OPEN SOURCE LICENSE AGREEMENT VERSION 1
1. This LICENSE AGREEMENT is between BeOpen.com ("BeOpen"), having an
office at 160 Saratoga Avenue, Santa Clara, CA 95051, and the
Individual or Organization ("Licensee") accessing and otherwise using
this software in source or binary form and its associated
documentation ("the Software").
2. Subject to the terms and conditions of this BeOpen Python License
Agreement, BeOpen hereby grants Licensee a non-exclusive,
royalty-free, world-wide license to reproduce, analyze, test, perform
and/or display publicly, prepare derivative works, distribute, and
otherwise use the Software alone or in any derivative version,
provided, however, that the BeOpen Python License is retained in the
Software, alone or in any derivative version prepared by Licensee.
3. BeOpen is making the Software available to Licensee on an "AS IS"
basis. BEOPEN MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, BEOPEN MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE SOFTWARE WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.
4. BEOPEN SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
AS A RESULT OF USING, MODIFYING OR DISTRIBUTING THE SOFTWARE, OR ANY
DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
5. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
6. This License Agreement shall be governed by and interpreted in all
respects by the law of the State of California, excluding conflict of
law provisions. Nothing in this License Agreement shall be deemed to
create any relationship of agency, partnership, or joint venture
between BeOpen and Licensee. This License Agreement does not grant
permission to use BeOpen trademarks or trade names in a trademark
sense to endorse or promote products or services of Licensee, or any
third party. As an exception, the "BeOpen Python" logos available at
http://www.pythonlabs.com/logos.html may be used according to the
permissions granted on that web page.
7. By copying, installing or otherwise using the software, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.
CNRI LICENSE AGREEMENT FOR PYTHON 1.6.1
---------------------------------------
1. This LICENSE AGREEMENT is between the Corporation for National
Research Initiatives, having an office at 1895 Preston White Drive,
Reston, VA 20191 ("CNRI"), and the Individual or Organization
("Licensee") accessing and otherwise using Python 1.6.1 software in
source or binary form and its associated documentation.
2. Subject to the terms and conditions of this License Agreement, CNRI
hereby grants Licensee a nonexclusive, royalty-free, world-wide
license to reproduce, analyze, test, perform and/or display publicly,
prepare derivative works, distribute, and otherwise use Python 1.6.1
alone or in any derivative version, provided, however, that CNRI's
License Agreement and CNRI's notice of copyright, i.e., "Copyright (c)
1995-2001 Corporation for National Research Initiatives; All Rights
Reserved" are retained in Python 1.6.1 alone or in any derivative
version prepared by Licensee. Alternately, in lieu of CNRI's License
Agreement, Licensee may substitute the following text (omitting the
quotes): "Python 1.6.1 is made available subject to the terms and
conditions in CNRI's License Agreement. This Agreement together with
Python 1.6.1 may be located on the Internet using the following
unique, persistent identifier (known as a handle): 1895.22/1013. This
Agreement may also be obtained from a proxy server on the Internet
using the following URL: http://hdl.handle.net/1895.22/1013".
3. In the event Licensee prepares a derivative work that is based on
or incorporates Python 1.6.1 or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python 1.6.1.
4. CNRI is making Python 1.6.1 available to Licensee on an "AS IS"
basis. CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6.1 WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.
5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
1.6.1 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 1.6.1,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
7. This License Agreement shall be governed by the federal
intellectual property law of the United States, including without
limitation the federal copyright law, and, to the extent such
U.S. federal law does not apply, by the law of the Commonwealth of
Virginia, excluding Virginia's conflict of law provisions.
Notwithstanding the foregoing, with regard to derivative works based
on Python 1.6.1 that incorporate non-separable material that was
previously distributed under the GNU General Public License (GPL), the
law of the Commonwealth of Virginia shall govern this License
Agreement only as to issues arising under or with respect to
Paragraphs 4, 5, and 7 of this License Agreement. Nothing in this
License Agreement shall be deemed to create any relationship of
agency, partnership, or joint venture between CNRI and Licensee. This
License Agreement does not grant permission to use CNRI trademarks or
trade name in a trademark sense to endorse or promote products or
services of Licensee, or any third party.
8. By clicking on the "ACCEPT" button where indicated, or by copying,
installing or otherwise using Python 1.6.1, Licensee agrees to be
bound by the terms and conditions of this License Agreement.
ACCEPT
CWI LICENSE AGREEMENT FOR PYTHON 0.9.0 THROUGH 1.2
--------------------------------------------------
Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam,
The Netherlands. All rights reserved.
Permission to use, copy, modify, and distribute this software and its
documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that
both that copyright notice and this permission notice appear in
supporting documentation, and that the name of Stichting Mathematisch
Centrum or CWI not be used in advertising or publicity pertaining to
distribution of the software without specific, written prior
permission.
STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO
THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE
FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

File diff suppressed because it is too large.

View File

@@ -0,0 +1 @@
from typing_extensions import *

View File

@@ -8,3 +8,4 @@ six==1.16.0
macholib==1.16.2
altgraph==0.17.3
ruamel.yaml==0.17.21
typing_extensions==4.1.1

View File

@@ -66,7 +66,7 @@ def _is_url(path_or_url: str) -> bool:
return result
def system_path_filter(_func=None, arg_slice: Optional[slice] = None):
def _system_path_filter(_func=None, arg_slice: Optional[slice] = None):
"""Filters function arguments to account for platform path separators.
Optional slicing range can be specified to select specific arguments
@@ -100,6 +100,16 @@ def path_filter_caller(*args, **kwargs):
return holder_func
def _noop_decorator(_func=None, arg_slice: Optional[slice] = None):
return _func if _func else lambda x: x
if sys.platform == "win32":
system_path_filter = _system_path_filter
else:
system_path_filter = _noop_decorator
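[Editor's note: a minimal standalone sketch of this platform-switch pattern; the join function is hypothetical, and on Windows the real _system_path_filter would be installed instead of the no-op.]

import sys
from typing import Optional

def _noop_decorator(_func=None, arg_slice: Optional[slice] = None):
    # Pass-through: supports both @system_path_filter and
    # @system_path_filter(arg_slice=...) call forms.
    return _func if _func else lambda x: x

system_path_filter = _noop_decorator  # non-Windows branch

@system_path_filter
def join(*parts):
    return "/".join(parts)

assert join("a", "b") == "a/b"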
def sanitize_win_longpath(path: str) -> str:
"""Strip Windows extended path prefix from strings
Returns sanitized string.

View File

@@ -301,35 +301,32 @@ def filter_file(
ignore_absent: bool = False,
start_at: Optional[str] = None,
stop_at: Optional[str] = None,
encoding: Optional[str] = "utf-8",
) -> None:
r"""Like sed, but uses python regular expressions.
Filters every line of each file through regex and replaces the file
with a filtered version. Preserves mode of filtered files.
Filters every line of each file through regex and replaces the file with a filtered version.
Preserves mode of filtered files.
As with re.sub, ``repl`` can be either a string or a callable.
If it is a callable, it is passed the match object and should
return a suitable replacement string. If it is a string, it
can contain ``\1``, ``\2``, etc. to represent back-substitution
as sed would allow.
As with re.sub, ``repl`` can be either a string or a callable. If it is a callable, it is
passed the match object and should return a suitable replacement string. If it is a string, it
can contain ``\1``, ``\2``, etc. to represent back-substitution as sed would allow.
Args:
regex (str): The regular expression to search for
repl (str): The string to replace matches with
*filenames: One or more files to search and replace
string (bool): Treat regex as a plain string. Default it False
backup (bool): Make backup file(s) suffixed with ``~``. Default is False
ignore_absent (bool): Ignore any files that don't exist.
Default is False
start_at (str): Marker used to start applying the replacements. If a
text line matches this marker filtering is started at the next line.
All contents before the marker and the marker itself are copied
verbatim. Default is to start filtering from the first line of the
file.
stop_at (str): Marker used to stop scanning the file further. If a text
line matches this marker filtering is stopped and the rest of the
file is copied verbatim. Default is to filter until the end of the
file.
regex: The regular expression to search for
repl: The string to replace matches with
*filenames: One or more files to search and replace
string: Treat regex as a plain string. Default is False
backup: Make backup file(s) suffixed with ``~``. Default is False
ignore_absent: Ignore any files that don't exist. Default is False
start_at: Marker used to start applying the replacements. If a text line matches this
marker filtering is started at the next line. All contents before the marker and the
marker itself are copied verbatim. Default is to start filtering from the first line of
the file.
stop_at: Marker used to stop scanning the file further. If a text line matches this marker
filtering is stopped and the rest of the file is copied verbatim. Default is to filter
until the end of the file.
encoding: The encoding to use when reading and writing the files. Default is ``utf-8``; pass
``None`` to use the system's default encoding.
"""
# Allow strings to use \1, \2, etc. for replacement, like sed
if not callable(repl):
@@ -345,72 +342,56 @@ def groupid_to_group(x):
if string:
regex = re.escape(regex)
for filename in path_to_os_path(*filenames):
msg = 'FILTER FILE: {0} [replacing "{1}"]'
tty.debug(msg.format(filename, regex))
backup_filename = filename + "~"
tmp_filename = filename + ".spack~"
if ignore_absent and not os.path.exists(filename):
msg = 'FILTER FILE: file "{0}" not found. Skipping to next file.'
tty.debug(msg.format(filename))
regex_compiled = re.compile(regex)
for path in path_to_os_path(*filenames):
if ignore_absent and not os.path.exists(path):
tty.debug(f'FILTER FILE: file "{path}" not found. Skipping to next file.')
continue
else:
tty.debug(f'FILTER FILE: {path} [replacing "{regex}"]')
# Create backup file. Don't overwrite an existing backup
# file in case this file is being filtered multiple times.
if not os.path.exists(backup_filename):
shutil.copy(filename, backup_filename)
fd, temp_path = tempfile.mkstemp(
prefix=f"{os.path.basename(path)}.", dir=os.path.dirname(path)
)
os.close(fd)
# Create a temporary file to read from. We cannot use backup_filename
# in case filter_file is invoked multiple times on the same file.
shutil.copy(filename, tmp_filename)
shutil.copy(path, temp_path)
errored = False
try:
# Open as a text file and filter until the end of the file is
# reached, or we found a marker in the line if it was specified
#
# To avoid translating line endings (\n to \r\n and vice-versa)
# we force os.open to ignore translations and use the line endings
# the file comes with
with open(tmp_filename, mode="r", errors="surrogateescape", newline="") as input_file:
with open(filename, mode="w", errors="surrogateescape", newline="") as output_file:
do_filtering = start_at is None
# Using iter and readline is a workaround needed not to
# disable input_file.tell(), which will happen if we call
# input_file.next() implicitly via the for loop
for line in iter(input_file.readline, ""):
if stop_at is not None:
current_position = input_file.tell()
# Open as a text file and filter until the end of the file is reached, or we found a
# marker in the line if it was specified. To avoid translating line endings (\n to
# \r\n and vice-versa) use newline="".
with open(
temp_path, mode="r", errors="surrogateescape", newline="", encoding=encoding
) as input_file, open(
path, mode="w", errors="surrogateescape", newline="", encoding=encoding
) as output_file:
if start_at is None and stop_at is None: # common case, avoids branching in loop
for line in input_file:
output_file.write(re.sub(regex_compiled, repl, line))
else:
# state is -1 before start_at; 0 between; 1 after stop_at
state = 0 if start_at is None else -1
for line in input_file:
if state == 0:
if stop_at == line.strip():
output_file.write(line)
break
if do_filtering:
filtered_line = re.sub(regex, repl, line)
output_file.write(filtered_line)
else:
do_filtering = start_at == line.strip()
output_file.write(line)
else:
current_position = None
# If we stopped filtering at some point, reopen the file in
# binary mode and copy verbatim the remaining part
if current_position and stop_at:
with open(tmp_filename, mode="rb") as input_binary_buffer:
input_binary_buffer.seek(current_position)
with open(filename, mode="ab") as output_binary_buffer:
output_binary_buffer.writelines(input_binary_buffer.readlines())
state = 1
else:
line = re.sub(regex_compiled, repl, line)
elif state == -1 and start_at == line.strip():
state = 0
output_file.write(line)
except BaseException:
# clean up the original file on failure.
shutil.move(backup_filename, filename)
# restore the original file
os.rename(temp_path, path)
errored = True
raise
finally:
os.remove(tmp_filename)
if not backup and os.path.exists(backup_filename):
os.remove(backup_filename)
if not errored and not backup:
os.unlink(temp_path)
class FileFilter:
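[Editor's note: for orientation, a minimal usage sketch of the rewritten filter_file; the Makefile name, markers, and flags are hypothetical, while the function itself lives in llnl.util.filesystem.]

from llnl.util.filesystem import filter_file

# Replace -O2 with -O3 only between the two markers; everything before
# "# BEGIN FLAGS" and after "# END FLAGS" is copied verbatim, and a
# "Makefile.inc~" backup of the original is kept.
filter_file(
    "-O2",
    "-O3",
    "Makefile.inc",
    start_at="# BEGIN FLAGS",
    stop_at="# END FLAGS",
    backup=True,
)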
@@ -1115,12 +1096,12 @@ def hash_directory(directory, ignore=[]):
@contextmanager
@system_path_filter
def write_tmp_and_move(filename):
def write_tmp_and_move(filename: str, *, encoding: Optional[str] = None):
"""Write to a temporary file, then move into place."""
dirname = os.path.dirname(filename)
basename = os.path.basename(filename)
tmp = os.path.join(dirname, ".%s.tmp" % basename)
with open(tmp, "w") as f:
with open(tmp, "w", encoding=encoding) as f:
yield f
shutil.move(tmp, filename)
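[Editor's note: a short usage sketch of write_tmp_and_move as changed above; target path and payload are hypothetical.]

from llnl.util.filesystem import write_tmp_and_move

with write_tmp_and_move("/tmp/index.json", encoding="utf-8") as f:
    f.write('{"keys": {}}')
# The hidden ".index.json.tmp" file is moved into place on exit, so
# readers never observe a partially written index.json.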

View File

@@ -73,7 +73,7 @@ def index_by(objects, *funcs):
if isinstance(f, str):
f = lambda x: getattr(x, funcs[0])
elif isinstance(f, tuple):
f = lambda x: tuple(getattr(x, p, None) for p in funcs[0])
f = lambda x: tuple(getattr(x, p) for p in funcs[0])
result = {}
for o in objects:
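[Editor's note: with the getattr default removed, indexing by a tuple of attributes now raises on a missing attribute instead of silently grouping under None. A minimal sketch; the Record type is hypothetical.]

from collections import namedtuple
from llnl.util.lang import index_by

Record = namedtuple("Record", ["name", "version"])
objs = [Record("zlib", "1.3"), Record("zlib", "1.2")]

by_name = index_by(objs, "name")               # {"zlib": [...]}
by_pair = index_by(objs, ("name", "version"))  # keyed by (name, version)
# index_by(objs, ("name", "missing")) now raises AttributeError.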
@@ -995,8 +995,11 @@ def _receive_forwarded(self, context: str, exc: Exception, tb: List[str]):
def grouped_message(self, with_tracebacks: bool = True) -> str:
"""Print out an error message coalescing all the forwarded errors."""
each_exception_message = [
"\n\t{0} raised {1}: {2}\n{3}".format(
context, exc.__class__.__name__, exc, f"\n{''.join(tb)}" if with_tracebacks else ""
"{0} raised {1}: {2}{3}".format(
context,
exc.__class__.__name__,
exc,
"\n{0}".format("".join(tb)) if with_tracebacks else "",
)
for context, exc, tb in self.exceptions
]

View File

@@ -96,8 +96,8 @@ def get_fh(self, path: str) -> IO:
Arguments:
path: path to lock file we want a filehandle for
"""
# Open writable files as 'r+' so we can upgrade to write later
os_mode, fh_mode = (os.O_RDWR | os.O_CREAT), "r+"
# Open writable files as rb+ so we can upgrade to write later
os_mode, fh_mode = (os.O_RDWR | os.O_CREAT), "rb+"
pid = os.getpid()
open_file = None # OpenFile object, if there is one
@@ -124,7 +124,7 @@ def get_fh(self, path: str) -> IO:
# we know path exists but not if it's writable. If it's read-only,
# only open the file for reading (and fail if we're trying to get
# an exclusive (write) lock on it)
os_mode, fh_mode = os.O_RDONLY, "r"
os_mode, fh_mode = os.O_RDONLY, "rb"
fd = os.open(path, os_mode)
fh = os.fdopen(fd, fh_mode)
@@ -243,7 +243,7 @@ def __init__(
helpful for distinguishing between different Spack locks.
"""
self.path = path
self._file: Optional[IO] = None
self._file: Optional[IO[bytes]] = None
self._reads = 0
self._writes = 0
@@ -329,9 +329,9 @@ def _lock(self, op: int, timeout: Optional[float] = None) -> Tuple[float, int]:
self._ensure_parent_directory()
self._file = FILE_TRACKER.get_fh(self.path)
if LockType.to_module(op) == fcntl.LOCK_EX and self._file.mode == "r":
if LockType.to_module(op) == fcntl.LOCK_EX and self._file.mode == "rb":
# Attempt to upgrade to write lock w/a read-only file.
# If the file were writable, we'd have opened it 'r+'
# If the file were writable, we'd have opened it rb+
raise LockROFileError(self.path)
self._log_debug(
@@ -426,7 +426,7 @@ def _read_log_debug_data(self) -> None:
line = self._file.read()
if line:
pid, host = line.strip().split(",")
pid, host = line.decode("utf-8").strip().split(",")
_, _, pid = pid.rpartition("=")
_, _, self.host = host.rpartition("=")
self.pid = int(pid)
@@ -442,7 +442,7 @@ def _write_log_debug_data(self) -> None:
# write pid, host to disk to sync over FS
self._file.seek(0)
self._file.write("pid=%s,host=%s" % (self.pid, self.host))
self._file.write(f"pid={self.pid},host={self.host}".encode("utf-8"))
self._file.truncate()
self._file.flush()
os.fsync(self._file.fileno())
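[Editor's note: a standalone sketch of why the payload is now encoded and decoded explicitly. The lock file is opened in binary mode, so the pid/host debug record crosses the file boundary as bytes; the path is hypothetical.]

import os

payload = f"pid={os.getpid()},host=example-host".encode("utf-8")
with open("/tmp/example.lock", "wb+") as fh:
    fh.write(payload)
    fh.seek(0)
    pid_field, host_field = fh.read().decode("utf-8").strip().split(",")
# pid_field == "pid=<pid>", host_field == "host=example-host"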

View File

@@ -161,7 +161,7 @@ def _err_check(result, func, args):
)
# Use conout$ here to handle a redirected stdout/get active console associated
# with spack
with open(r"\\.\CONOUT$", "w") as conout:
with open(r"\\.\CONOUT$", "w", encoding="utf-8") as conout:
# Link above would use kernel32.GetStdHandle(-11) however this would not handle
# a redirected stdout appropriately, so we always refer to the current CONSOLE out
# which is defined as conout$ on Windows.

View File

@@ -762,7 +762,7 @@ def __enter__(self):
self.reader = open(self.logfile, mode="rb+")
# Dup stdout so we can still write to it after redirection
self.echo_writer = open(os.dup(sys.stdout.fileno()), "w")
self.echo_writer = open(os.dup(sys.stdout.fileno()), "w", encoding=sys.stdout.encoding)
# Redirect stdout and stderr to write to logfile
self.stderr.redirect_stream(self.writer.fileno())
self.stdout.redirect_stream(self.writer.fileno())
@@ -879,10 +879,13 @@ def _writer_daemon(
write_fd.close()
# 1. Use line buffering (3rd param = 1) since Python 3 has a bug
# that prevents unbuffered text I/O.
# 2. Python 3.x before 3.7 does not open with UTF-8 encoding by default
# that prevents unbuffered text I/O. [needs citation]
# 2. Enforce a UTF-8 interpretation of build process output with errors replaced by '?'.
# The downside is that the log file will not contain the exact output of the build process.
# 3. closefd=False because Connection has "ownership"
read_file = os.fdopen(read_fd.fileno(), "r", 1, encoding="utf-8", closefd=False)
read_file = os.fdopen(
read_fd.fileno(), "r", 1, encoding="utf-8", errors="replace", closefd=False
)
if stdin_fd:
stdin_file = os.fdopen(stdin_fd.fileno(), closefd=False)
@@ -928,11 +931,7 @@ def _writer_daemon(
try:
while line_count < 100:
# Handle output from the calling process.
try:
line = _retry(read_file.readline)()
except UnicodeDecodeError:
# installs like --test=root gpgme produce non-UTF8 logs
line = "<line lost: output was not encoded as UTF-8>\n"
line = _retry(read_file.readline)()
if not line:
return
@@ -946,6 +945,13 @@ def _writer_daemon(
output_line = clean_line
if filter_fn:
output_line = filter_fn(clean_line)
enc = sys.stdout.encoding
if enc != "utf-8":
# On Python 3.6 and 3.7-3.14 with non-{utf-8,C} locale stdout
# may not be able to handle utf-8 output. We do an inefficient
# dance of re-encoding with errors replaced, so stdout.write
# does not raise.
output_line = output_line.encode(enc, "replace").decode(enc)
sys.stdout.write(output_line)
# Stripped output to log file.
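[Editor's note: a tiny sketch of the re-encode/replace dance above, using an ASCII stdout as a stand-in for a non-UTF-8 locale.]

line = "installing libfoo … done\n"
enc = "ascii"  # stand-in for sys.stdout.encoding under a non-UTF-8 locale
safe = line.encode(enc, "replace").decode(enc)
# safe == "installing libfoo ? done\n"; stdout.write(safe) cannot raise
# UnicodeEncodeError for this encoding.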

View File

@@ -55,7 +55,6 @@ def _search_duplicate_compilers(error_cls):
import spack.builder
import spack.config
import spack.deptypes
import spack.fetch_strategy
import spack.patch
import spack.repo
@@ -657,7 +656,7 @@ def _ensure_docstring_and_no_fixme(pkgs, error_cls):
for pkg_name in pkgs:
details = []
filename = spack.repo.PATH.filename_for_package_name(pkg_name)
with open(filename, "r") as package_file:
with open(filename, "r", encoding="utf-8") as package_file:
for i, line in enumerate(package_file):
pattern = next((r for r in fixme_regexes if r.search(line)), None)
if pattern:
@@ -810,7 +809,7 @@ def _uses_deprecated_globals(pkgs, error_cls):
continue
file = spack.repo.PATH.filename_for_package_name(pkg_name)
tree = ast.parse(open(file).read())
tree = ast.parse(open(file, "rb").read())
visitor = DeprecatedMagicGlobals(("std_cmake_args", "std_meson_args", "std_pip_args"))
visitor.visit(tree)
if visitor.references_to_globals:
@@ -1010,27 +1009,6 @@ def _issues_in_depends_on_directive(pkgs, error_cls):
for when, deps_by_name in pkg_cls.dependencies.items():
for dep_name, dep in deps_by_name.items():
# Check if there are nested dependencies declared. We don't want directives like:
#
# depends_on('foo+bar ^fee+baz')
#
# but we'd like to have two dependencies listed instead.
nested_dependencies = dep.spec.edges_to_dependencies()
# Filter out pure build dependencies, like:
#
# depends_on('foo+bar%gcc')
#
nested_dependencies = [
x for x in nested_dependencies if x.depflag != spack.deptypes.BUILD
]
if nested_dependencies:
summary = f"{pkg_name}: nested dependency declaration '{dep.spec}'"
ndir = len(nested_dependencies) + 1
details = [
f"split depends_on('{dep.spec}', when='{when}') into {ndir} directives",
f"in {filename}",
]
errors.append(error_cls(summary=summary, details=details))
def check_virtual_with_variants(spec, msg):
if not spec.virtual or not spec.variants:

View File

@@ -69,10 +69,8 @@
Digest,
ImageReference,
default_config,
default_index_tag,
default_manifest,
default_tag,
tag_is_spec,
ensure_valid_tag,
)
from spack.oci.oci import (
copy_missing_layers_with_retry,
@@ -83,7 +81,6 @@
)
from spack.package_prefs import get_package_dir_permissions, get_package_group
from spack.relocate_text import utf8_paths_to_single_binary_regex
from spack.spec import Spec
from spack.stage import Stage
from spack.util.executable import which
@@ -586,7 +583,7 @@ def buildinfo_file_name(prefix):
def read_buildinfo_file(prefix):
"""Read buildinfo file"""
with open(buildinfo_file_name(prefix), "r") as f:
with open(buildinfo_file_name(prefix), "r", encoding="utf-8") as f:
return syaml.load(f)
@@ -765,14 +762,7 @@ def tarball_directory_name(spec):
Return name of the tarball directory according to the convention
<os>-<architecture>/<compiler>/<package>-<version>/
"""
if spec.original_spec_format() < 5:
compiler = spec.annotations.compiler_node_attribute
assert compiler is not None, "a compiler spec is expected"
return spec.format_path(
f"{spec.architecture}/{compiler.name}-{compiler.version}/{spec.name}-{spec.version}"
)
return spec.format_path(f"{spec.architecture.platform}/{spec.name}-{spec.version}")
return spec.format_path("{architecture}/{compiler.name}-{compiler.version}/{name}-{version}")
def tarball_name(spec, ext):
@@ -780,17 +770,9 @@ def tarball_name(spec, ext):
Return the name of the tarfile according to the convention
<os>-<architecture>-<package>-<dag_hash><ext>
"""
if spec.original_spec_format() < 5:
compiler = spec.annotations.compiler_node_attribute
assert compiler is not None, "a compiler spec is expected"
spec_formatted = (
f"{spec.architecture}-{compiler.name}-{compiler.version}-{spec.name}"
f"-{spec.version}-{spec.dag_hash()}"
)
else:
spec_formatted = (
f"{spec.architecture.platform}-{spec.name}-{spec.version}-{spec.dag_hash()}"
)
spec_formatted = spec.format_path(
"{architecture}-{compiler.name}-{compiler.version}-{name}-{version}-{hash}"
)
return f"{spec_formatted}{ext}"
@@ -842,10 +824,10 @@ def _read_specs_and_push_index(
contents = read_method(file)
# Need full spec.json name or this gets confused with index.json.
if file.endswith(".json.sig"):
specfile_json = Spec.extract_json_from_clearsig(contents)
fetched_spec = Spec.from_dict(specfile_json)
specfile_json = spack.spec.Spec.extract_json_from_clearsig(contents)
fetched_spec = spack.spec.Spec.from_dict(specfile_json)
elif file.endswith(".json"):
fetched_spec = Spec.from_json(contents)
fetched_spec = spack.spec.Spec.from_json(contents)
else:
continue
@@ -855,17 +837,17 @@ def _read_specs_and_push_index(
# Now generate the index, compute its hash, and push the two files to
# the mirror.
index_json_path = os.path.join(temp_dir, "index.json")
with open(index_json_path, "w") as f:
with open(index_json_path, "w", encoding="utf-8") as f:
db._write_to_file(f)
# Read the index back in and compute its hash
with open(index_json_path) as f:
with open(index_json_path, encoding="utf-8") as f:
index_string = f.read()
index_hash = compute_hash(index_string)
# Write the hash out to a local file
index_hash_path = os.path.join(temp_dir, "index.json.hash")
with open(index_hash_path, "w") as f:
with open(index_hash_path, "w", encoding="utf-8") as f:
f.write(index_hash)
# Push the index itself
@@ -899,7 +881,7 @@ def _specs_from_cache_aws_cli(cache_prefix):
aws = which("aws")
def file_read_method(file_path):
with open(file_path) as fd:
with open(file_path, encoding="utf-8") as fd:
return fd.read()
tmpspecsdir = tempfile.mkdtemp()
@@ -1044,7 +1026,7 @@ def generate_key_index(key_prefix: str, tmpdir: str) -> None:
target = os.path.join(tmpdir, "index.json")
index = {"keys": dict((fingerprint, {}) for fingerprint in sorted(set(fingerprints)))}
with open(target, "w") as f:
with open(target, "w", encoding="utf-8") as f:
sjson.dump(index, f)
try:
@@ -1115,7 +1097,7 @@ class ExistsInBuildcache(NamedTuple):
class BuildcacheFiles:
def __init__(self, spec: Spec, local: str, remote: str):
def __init__(self, spec: spack.spec.Spec, local: str, remote: str):
"""
Args:
spec: The spec whose tarball and specfile are being managed.
@@ -1145,7 +1127,7 @@ def local_tarball(self) -> str:
return os.path.join(self.local, f"{self.spec.dag_hash()}.tar.gz")
def _exists_in_buildcache(spec: Spec, tmpdir: str, out_url: str) -> ExistsInBuildcache:
def _exists_in_buildcache(spec: spack.spec.Spec, tmpdir: str, out_url: str) -> ExistsInBuildcache:
"""returns a tuple of bools (signed, unsigned, tarball) indicating whether specfiles/tarballs
exist in the buildcache"""
files = BuildcacheFiles(spec, tmpdir, out_url)
@@ -1156,7 +1138,11 @@ def _exists_in_buildcache(spec: Spec, tmpdir: str, out_url: str) -> ExistsInBuil
def _url_upload_tarball_and_specfile(
spec: Spec, tmpdir: str, out_url: str, exists: ExistsInBuildcache, signing_key: Optional[str]
spec: spack.spec.Spec,
tmpdir: str,
out_url: str,
exists: ExistsInBuildcache,
signing_key: Optional[str],
):
files = BuildcacheFiles(spec, tmpdir, out_url)
tarball = files.local_tarball()
@@ -1174,7 +1160,7 @@ def _url_upload_tarball_and_specfile(
web_util.push_to_url(tarball, files.remote_tarball(), keep_original=False)
specfile = files.local_specfile()
with open(specfile, "w") as f:
with open(specfile, "w", encoding="utf-8") as f:
# Note: when using gpg clear sign, we need to avoid long lines (19995 chars).
# If lines are longer, they are truncated without error. Thanks GPG!
# So, here we still add newlines, but no indent, so save on file size and
@@ -1329,7 +1315,7 @@ def make_uploader(
)
def _format_spec(spec: Spec) -> str:
def _format_spec(spec: spack.spec.Spec) -> str:
return spec.cformat("{name}{@version}{/hash:7}")
@@ -1352,7 +1338,7 @@ def _progress(self):
return f"[{self.n:{digits}}/{self.total}] "
return ""
def start(self, spec: Spec, running: bool) -> None:
def start(self, spec: spack.spec.Spec, running: bool) -> None:
self.n += 1
self.running = running
self.pre = self._progress()
@@ -1371,18 +1357,18 @@ def fail(self) -> None:
def _url_push(
specs: List[Spec],
specs: List[spack.spec.Spec],
out_url: str,
signing_key: Optional[str],
force: bool,
update_index: bool,
tmpdir: str,
executor: concurrent.futures.Executor,
) -> Tuple[List[Spec], List[Tuple[Spec, BaseException]]]:
) -> Tuple[List[spack.spec.Spec], List[Tuple[spack.spec.Spec, BaseException]]]:
"""Pushes to the provided build cache, and returns a list of skipped specs that were already
present (when force=False), and a list of errors. Does not raise on error."""
skipped: List[Spec] = []
errors: List[Tuple[Spec, BaseException]] = []
skipped: List[spack.spec.Spec] = []
errors: List[Tuple[spack.spec.Spec, BaseException]] = []
exists_futures = [
executor.submit(_exists_in_buildcache, spec, tmpdir, out_url) for spec in specs
@@ -1455,7 +1441,7 @@ def _url_push(
return skipped, errors
def _oci_upload_success_msg(spec: Spec, digest: Digest, size: int, elapsed: float):
def _oci_upload_success_msg(spec: spack.spec.Spec, digest: Digest, size: int, elapsed: float):
elapsed = max(elapsed, 0.001) # guard against division by zero
return (
f"Pushed {_format_spec(spec)}: {digest} ({elapsed:.2f}s, "
@@ -1541,7 +1527,7 @@ def _oci_put_manifest(
):
architecture = _oci_archspec_to_gooarch(specs[0])
expected_blobs: List[Spec] = [
expected_blobs: List[spack.spec.Spec] = [
s
for s in traverse.traverse_nodes(specs, order="topo", deptype=("link", "run"), root=True)
if not s.external
@@ -1585,7 +1571,7 @@ def _oci_put_manifest(
config_file = os.path.join(tmpdir, f"{specs[0].dag_hash()}.config.json")
with open(config_file, "w") as f:
with open(config_file, "w", encoding="utf-8") as f:
json.dump(config, f, separators=(",", ":"))
config_file_checksum = Digest.from_sha256(
@@ -1655,19 +1641,33 @@ def _oci_update_base_images(
)
def _oci_default_tag(spec: spack.spec.Spec) -> str:
"""Return a valid, default image tag for a spec."""
return ensure_valid_tag(f"{spec.name}-{spec.version}-{spec.dag_hash()}.spack")
#: Default OCI index tag
default_index_tag = "index.spack"
def tag_is_spec(tag: str) -> bool:
"""Check if a tag is likely a Spec"""
return tag.endswith(".spack") and tag != default_index_tag
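[Editor's note: a minimal sketch of the tag scheme these helpers implement; the spec fields are hypothetical stand-ins for a real spack.spec.Spec.]

def _oci_default_tag_like(name: str, version: str, dag_hash: str) -> str:
    # Mirrors f"{spec.name}-{spec.version}-{spec.dag_hash()}.spack"
    return f"{name}-{version}-{dag_hash}.spack"

tag = _oci_default_tag_like("zlib", "1.3.1", "abcdef7")
assert tag.endswith(".spack") and tag != "index.spack"  # i.e. tag_is_spec(tag)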
def _oci_push(
*,
target_image: ImageReference,
base_image: Optional[ImageReference],
installed_specs_with_deps: List[Spec],
installed_specs_with_deps: List[spack.spec.Spec],
tmpdir: str,
executor: concurrent.futures.Executor,
force: bool = False,
) -> Tuple[
List[Spec],
List[spack.spec.Spec],
Dict[str, Tuple[dict, dict]],
Dict[str, spack.oci.oci.Blob],
List[Tuple[Spec, BaseException]],
List[Tuple[spack.spec.Spec, BaseException]],
]:
# Spec dag hash -> blob
checksums: Dict[str, spack.oci.oci.Blob] = {}
@@ -1676,13 +1676,15 @@ def _oci_push(
base_images: Dict[str, Tuple[dict, dict]] = {}
# Specs not uploaded because they already exist
skipped: List[Spec] = []
skipped: List[spack.spec.Spec] = []
if not force:
tty.info("Checking for existing specs in the buildcache")
blobs_to_upload = []
tags_to_check = (target_image.with_tag(default_tag(s)) for s in installed_specs_with_deps)
tags_to_check = (
target_image.with_tag(_oci_default_tag(s)) for s in installed_specs_with_deps
)
available_blobs = executor.map(_oci_get_blob_info, tags_to_check)
for spec, maybe_blob in zip(installed_specs_with_deps, available_blobs):
@@ -1710,8 +1712,8 @@ def _oci_push(
executor.submit(_oci_push_pkg_blob, target_image, spec, tmpdir) for spec in blobs_to_upload
]
manifests_to_upload: List[Spec] = []
errors: List[Tuple[Spec, BaseException]] = []
manifests_to_upload: List[spack.spec.Spec] = []
errors: List[Tuple[spack.spec.Spec, BaseException]] = []
# And update the spec to blob mapping for successful uploads
for spec, blob_future in zip(blobs_to_upload, blob_futures):
@@ -1737,7 +1739,7 @@ def _oci_push(
base_image_cache=base_images,
)
def extra_config(spec: Spec):
def extra_config(spec: spack.spec.Spec):
spec_dict = spec.to_dict(hash=ht.dag_hash)
spec_dict["buildcache_layout_version"] = CURRENT_BUILD_CACHE_LAYOUT_VERSION
spec_dict["binary_cache_checksum"] = {
@@ -1753,7 +1755,7 @@ def extra_config(spec: Spec):
_oci_put_manifest,
base_images,
checksums,
target_image.with_tag(default_tag(spec)),
target_image.with_tag(_oci_default_tag(spec)),
tmpdir,
extra_config(spec),
{"org.opencontainers.image.description": spec.format()},
@@ -1770,7 +1772,7 @@ def extra_config(spec: Spec):
manifest_progress.start(spec, manifest_future.running())
if error is None:
manifest_progress.ok(
f"Tagged {_format_spec(spec)} as {target_image.with_tag(default_tag(spec))}"
f"Tagged {_format_spec(spec)} as {target_image.with_tag(_oci_default_tag(spec))}"
)
else:
manifest_progress.fail()
@@ -1805,13 +1807,13 @@ def _oci_update_index(
db = BuildCacheDatabase(db_root_dir)
for spec_dict in spec_dicts:
spec = Spec.from_dict(spec_dict)
spec = spack.spec.Spec.from_dict(spec_dict)
db.add(spec)
db.mark(spec, "in_buildcache", True)
# Create the index.json file
index_json_path = os.path.join(tmpdir, "index.json")
with open(index_json_path, "w") as f:
with open(index_json_path, "w", encoding="utf-8") as f:
db._write_to_file(f)
# Create an empty config.json file
@@ -1920,7 +1922,7 @@ def _get_valid_spec_file(path: str, max_supported_layout: int) -> Tuple[Dict, in
try:
as_string = binary_content.decode("utf-8")
if path.endswith(".json.sig"):
spec_dict = Spec.extract_json_from_clearsig(as_string)
spec_dict = spack.spec.Spec.extract_json_from_clearsig(as_string)
else:
spec_dict = json.loads(as_string)
except Exception as e:
@@ -2016,7 +2018,7 @@ def fetch_url_to_mirror(url):
if fetch_url.startswith("oci://"):
ref = spack.oci.image.ImageReference.from_string(
fetch_url[len("oci://") :]
).with_tag(spack.oci.image.default_tag(spec))
).with_tag(_oci_default_tag(spec))
# Fetch the manifest
try:
@@ -2260,7 +2262,8 @@ def relocate_package(spec):
]
if analogs:
# Prefer same-name analogs and prefer higher versions
# This matches the preferences in Spec.splice, so we will find same node
# This matches the preferences in spack.spec.Spec.splice, so we
# will find same node
analog = max(analogs, key=lambda a: (a.name == s.name, a.version))
lookup_dag_hash = analog.dag_hash()
@@ -2696,10 +2699,10 @@ def try_direct_fetch(spec, mirrors=None):
# are concrete (as they are built) so we need to mark this spec
# concrete on read-in.
if specfile_is_signed:
specfile_json = Spec.extract_json_from_clearsig(specfile_contents)
fetched_spec = Spec.from_dict(specfile_json)
specfile_json = spack.spec.Spec.extract_json_from_clearsig(specfile_contents)
fetched_spec = spack.spec.Spec.from_dict(specfile_json)
else:
fetched_spec = Spec.from_json(specfile_contents)
fetched_spec = spack.spec.Spec.from_json(specfile_contents)
fetched_spec._mark_concrete()
found_specs.append({"mirror_url": mirror.fetch_url, "spec": fetched_spec})
@@ -2904,7 +2907,7 @@ def check_specs_against_mirrors(mirrors, specs, output_file=None):
}
if output_file:
with open(output_file, "w") as outf:
with open(output_file, "w", encoding="utf-8") as outf:
outf.write(json.dumps(rebuilds))
return 1 if rebuilds else 0
@@ -2998,7 +3001,7 @@ def __init__(self, all_architectures):
self.possible_specs = specs
def __call__(self, spec: Spec, **kwargs):
def __call__(self, spec: spack.spec.Spec, **kwargs):
"""
Args:
spec: The spec being searched for
@@ -3136,7 +3139,7 @@ def __init__(self, url: str, local_hash, urlopen=None) -> None:
def conditional_fetch(self) -> FetchIndexResult:
"""Download an index from an OCI registry type mirror."""
url_manifest = self.ref.with_tag(spack.oci.image.default_index_tag).manifest_url()
url_manifest = self.ref.with_tag(default_index_tag).manifest_url()
try:
response = self.urlopen(
urllib.request.Request(

View File

@@ -227,13 +227,12 @@ def _root_spec(spec_str: str) -> str:
# Add a compiler and platform requirement to the root spec.
platform = str(spack.platforms.host())
# FIXME (compiler as nodes): recover the compiler for source bootstrapping
# if platform == "darwin":
# spec_str += " %apple-clang"
if platform == "windows":
if platform == "darwin":
spec_str += " %apple-clang"
elif platform == "windows":
spec_str += " %msvc"
# elif platform == "linux":
# spec_str += " %gcc"
elif platform == "linux":
spec_str += " %gcc"
elif platform == "freebsd":
spec_str += " %clang"
spec_str += f" platform={platform}"

View File

@@ -16,9 +16,8 @@
import archspec.cpu
import spack.compilers.config
import spack.compilers.libraries
import spack.config
import spack.compiler
import spack.compilers
import spack.platforms
import spack.spec
import spack.traverse
@@ -40,7 +39,7 @@ def __init__(self, configuration):
self.external_cmake, self.external_bison = self._externals_from_yaml(configuration)
def _valid_compiler_or_raise(self):
def _valid_compiler_or_raise(self) -> "spack.compiler.Compiler":
if str(self.host_platform) == "linux":
compiler_name = "gcc"
elif str(self.host_platform) == "darwin":
@@ -48,30 +47,17 @@ def _valid_compiler_or_raise(self):
elif str(self.host_platform) == "windows":
compiler_name = "msvc"
elif str(self.host_platform) == "freebsd":
compiler_name = "llvm"
compiler_name = "clang"
else:
raise RuntimeError(f"Cannot bootstrap clingo from sources on {self.host_platform}")
candidates = [
x
for x in spack.compilers.config.CompilerFactory.from_packages_yaml(spack.config.CONFIG)
if x.name == compiler_name
]
candidates = spack.compilers.compilers_for_spec(
compiler_name, arch_spec=self.host_architecture
)
if not candidates:
raise RuntimeError(
f"Cannot find any version of {compiler_name} to bootstrap clingo from sources"
)
candidates.sort(key=lambda x: x.version, reverse=True)
best = candidates[0]
# Get compilers for bootstrapping from the 'builtin' repository
best.namespace = "builtin"
# If the compiler does not support C++ 14, fail with a legible error message
try:
_ = best.package.standard_flag(language="cxx", standard="14")
except RuntimeError as e:
raise RuntimeError(
"cannot find a compiler supporting C++ 14 [needed to bootstrap clingo]"
) from e
candidates.sort(key=lambda x: x.spec.version, reverse=True)
return candidates[0]
def _externals_from_yaml(
@@ -90,6 +76,9 @@ def _externals_from_yaml(
if not s.satisfies(requirements[pkg_name]):
continue
if not s.intersects(f"%{self.host_compiler.spec}"):
continue
if not s.intersects(f"arch={self.host_architecture}"):
continue
@@ -122,10 +111,11 @@ def concretize(self) -> "spack.spec.Spec":
# Tweak it to conform to the host architecture
for node in s.traverse():
node.architecture.os = str(self.host_os)
node.compiler = self.host_compiler.spec
node.architecture = self.host_architecture
if node.name == "gcc-runtime":
node.versions = self.host_compiler.versions
node.versions = self.host_compiler.spec.versions
for edge in spack.traverse.traverse_edges([s], cover="edges"):
if edge.spec.name == "python":
@@ -137,9 +127,6 @@ def concretize(self) -> "spack.spec.Spec":
if edge.spec.name == "cmake" and self.external_cmake:
edge.spec = self.external_cmake
if edge.spec.name == self.host_compiler.name:
edge.spec = self.host_compiler
if "libc" in edge.virtuals:
edge.spec = self.host_libc
@@ -155,12 +142,13 @@ def python_external_spec(self) -> "spack.spec.Spec":
return self._external_spec(result)
def libc_external_spec(self) -> "spack.spec.Spec":
detector = spack.compilers.libraries.CompilerPropertyDetector(self.host_compiler)
result = detector.default_libc()
result = self.host_compiler.default_libc
return self._external_spec(result)
def _external_spec(self, initial_spec) -> "spack.spec.Spec":
#TODO: Does this need to be changed?
initial_spec.namespace = "builtin"
initial_spec.compiler = self.host_compiler.spec
initial_spec.architecture = self.host_architecture
for flag_type in spack.spec.FlagMap.valid_compiler_flags():
initial_spec.compiler_flags[flag_type] = []

View File

@@ -11,7 +11,7 @@
from llnl.util import tty
import spack.compilers.config
import spack.compilers
import spack.config
import spack.environment
import spack.modules
@@ -143,8 +143,8 @@ def _bootstrap_config_scopes() -> Sequence["spack.config.ConfigScope"]:
def _add_compilers_if_missing() -> None:
arch = spack.spec.ArchSpec.frontend_arch()
if not spack.compilers.config.compilers_for_arch(arch):
spack.compilers.config.find_compilers()
if not spack.compilers.compilers_for_arch(arch):
spack.compilers.find_compilers()
@contextlib.contextmanager

View File

@@ -281,12 +281,7 @@ def try_import(self, module: str, abstract_spec_str: str) -> bool:
# Install the spec that should make the module importable
with spack.config.override(self.mirror_scope):
PackageInstaller(
[concrete_spec.package],
fail_fast=True,
package_use_cache=False,
dependencies_use_cache=False,
).install()
PackageInstaller([concrete_spec.package], fail_fast=True).install()
if _try_import_from_store(module, query_spec=concrete_spec, query_info=info):
self.last_search = info
@@ -322,10 +317,11 @@ def create_bootstrapper(conf: ConfigDictionary):
return _bootstrap_methods[btype](conf)
def source_is_enabled(conf: ConfigDictionary):
def source_is_enabled_or_raise(conf: ConfigDictionary):
"""Raise ValueError if the source is not enabled for bootstrapping"""
trusted, name = spack.config.get("bootstrap:trusted"), conf["name"]
return trusted.get(name, False)
if not trusted.get(name, False):
raise ValueError("source is not trusted")
def ensure_module_importable_or_raise(module: str, abstract_spec: Optional[str] = None):
@@ -355,10 +351,8 @@ def ensure_module_importable_or_raise(module: str, abstract_spec: Optional[str]
exception_handler = GroupedExceptionHandler()
for current_config in bootstrapping_sources():
if not source_is_enabled(current_config):
continue
with exception_handler.forward(current_config["name"], Exception):
source_is_enabled_or_raise(current_config)
current_bootstrapper = create_bootstrapper(current_config)
if current_bootstrapper.try_import(module, abstract_spec):
return
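[Editor's note: this refactor relies on the forward context manager recording the raised ValueError instead of aborting the loop; a minimal sketch of the pattern, with hypothetical source names, using GroupedExceptionHandler from llnl.util.lang.]

from llnl.util.lang import GroupedExceptionHandler

handler = GroupedExceptionHandler()
for name in ("github-actions", "install-from-source"):
    with handler.forward(name, Exception):
        raise ValueError("source is not trusted")  # stand-in failure

if handler:  # truthy once at least one exception was recorded
    print(handler.grouped_message(with_tracebacks=False))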
@@ -370,7 +364,11 @@ def ensure_module_importable_or_raise(module: str, abstract_spec: Optional[str]
msg = f'cannot bootstrap the "{module}" Python module '
if abstract_spec:
msg += f'from spec "{abstract_spec}" '
msg += exception_handler.grouped_message(with_tracebacks=tty.is_debug())
if tty.is_debug():
msg += exception_handler.grouped_message(with_tracebacks=True)
else:
msg += exception_handler.grouped_message(with_tracebacks=False)
msg += "\nRun `spack --debug ...` for more detailed errors"
raise ImportError(msg)
@@ -408,9 +406,8 @@ def ensure_executables_in_path_or_raise(
exception_handler = GroupedExceptionHandler()
for current_config in bootstrapping_sources():
if not source_is_enabled(current_config):
continue
with exception_handler.forward(current_config["name"], Exception):
source_is_enabled_or_raise(current_config)
current_bootstrapper = create_bootstrapper(current_config)
if current_bootstrapper.try_search_path(executables, abstract_spec):
# Additional environment variables needed

File diffs suppressed for seven files because one or more lines are too long.
View File

@@ -37,6 +37,7 @@
import multiprocessing
import os
import re
import stat
import sys
import traceback
import types
@@ -59,7 +60,7 @@
import spack.build_systems.meson
import spack.build_systems.python
import spack.builder
import spack.compilers.libraries
import spack.compilers
import spack.config
import spack.deptypes as dt
import spack.error
@@ -73,6 +74,7 @@
import spack.store
import spack.subprocess_context
import spack.util.executable
import spack.util.libc
from spack import traverse
from spack.context import Context
from spack.error import InstallError, NoHeadersError, NoLibrariesError
@@ -80,7 +82,6 @@
from spack.util.environment import (
SYSTEM_DIR_CASE_ENTRY,
EnvironmentModifications,
PrependPath,
env_flag,
filter_system_paths,
get_path,
@@ -296,10 +297,62 @@ def _add_werror_handling(keep_werror, env):
env.set("SPACK_COMPILER_FLAGS_REPLACE", " ".join(["|".join(item) for item in replace_flags]))
def set_wrapper_environment_variables_for_flags(pkg, env):
def set_compiler_environment_variables(pkg, env):
assert pkg.spec.concrete
compiler = pkg.compiler
spec = pkg.spec
# Make sure the executables for this compiler exist
compiler.verify_executables()
# Set compiler variables used by CMake and autotools
assert all(key in compiler.link_paths for key in ("cc", "cxx", "f77", "fc"))
# Populate an object with the list of environment modifications
# and return it
# TODO : add additional kwargs for better diagnostics, like requestor,
# ttyout, ttyerr, etc.
link_dir = spack.paths.build_env_path
# Set SPACK compiler variables so that our wrapper knows what to
# call. If there is no compiler configured then use a default
# wrapper which will emit an error if it is used.
if compiler.cc:
env.set("SPACK_CC", compiler.cc)
env.set("CC", os.path.join(link_dir, compiler.link_paths["cc"]))
else:
env.set("CC", os.path.join(link_dir, "cc"))
if compiler.cxx:
env.set("SPACK_CXX", compiler.cxx)
env.set("CXX", os.path.join(link_dir, compiler.link_paths["cxx"]))
else:
env.set("CC", os.path.join(link_dir, "c++"))
if compiler.f77:
env.set("SPACK_F77", compiler.f77)
env.set("F77", os.path.join(link_dir, compiler.link_paths["f77"]))
else:
env.set("F77", os.path.join(link_dir, "f77"))
if compiler.fc:
env.set("SPACK_FC", compiler.fc)
env.set("FC", os.path.join(link_dir, compiler.link_paths["fc"]))
else:
env.set("FC", os.path.join(link_dir, "fc"))
# Set SPACK compiler rpath flags so that our wrapper knows what to use
env.set("SPACK_CC_RPATH_ARG", compiler.cc_rpath_arg)
env.set("SPACK_CXX_RPATH_ARG", compiler.cxx_rpath_arg)
env.set("SPACK_F77_RPATH_ARG", compiler.f77_rpath_arg)
env.set("SPACK_FC_RPATH_ARG", compiler.fc_rpath_arg)
env.set("SPACK_LINKER_ARG", compiler.linker_arg)
# Check whether we want to force RPATH or RUNPATH
if spack.config.get("config:shared_linking:type") == "rpath":
env.set("SPACK_DTAGS_TO_STRIP", compiler.enable_new_dtags)
env.set("SPACK_DTAGS_TO_ADD", compiler.disable_new_dtags)
else:
env.set("SPACK_DTAGS_TO_STRIP", compiler.disable_new_dtags)
env.set("SPACK_DTAGS_TO_ADD", compiler.enable_new_dtags)
if pkg.keep_werror is not None:
keep_werror = pkg.keep_werror
else:
@@ -307,6 +360,10 @@ def set_wrapper_environment_variables_for_flags(pkg, env):
_add_werror_handling(keep_werror, env)
# Set the target parameters that the compiler will add
isa_arg = optimization_flags(compiler, spec.target)
env.set("SPACK_TARGET_ARGS", isa_arg)
# Trap spack-tracked compiler flags as appropriate.
# env_flags are easy to accidentally override.
inject_flags = {}
@@ -340,27 +397,74 @@ def set_wrapper_environment_variables_for_flags(pkg, env):
env.set(flag.upper(), " ".join(f for f in env_flags[flag]))
pkg.flags_to_build_system_args(build_system_flags)
env.set("SPACK_COMPILER_SPEC", str(spec.compiler))
env.set("SPACK_SYSTEM_DIRS", SYSTEM_DIR_CASE_ENTRY)
# FIXME (compiler as nodes): recover this one in the correct packages
# compiler.setup_custom_environment(pkg, env)
compiler.setup_custom_environment(pkg, env)
return env
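[Editor's note: schematically, the pattern restored here is that SPACK_CC points at the real compiler for the wrapper to exec, while CC points at the wrapper itself so build systems pick it up. A sketch with hypothetical paths.]

import os

link_dir = "/opt/spack/lib/spack/env"  # stand-in for spack.paths.build_env_path
real_cc = "/usr/bin/gcc-12"

env = {}
env["SPACK_CC"] = real_cc                      # what the wrapper execs
env["CC"] = os.path.join(link_dir, "gcc/gcc")  # what the build system calls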
def optimization_flags(compiler, target):
if spack.compilers.is_mixed_toolchain(compiler):
msg = (
"microarchitecture specific optimizations are not "
"supported yet on mixed compiler toolchains [check"
f" {compiler.name}@{compiler.version} for further details]"
)
tty.debug(msg)
return ""
# Try to check if the current compiler comes with a version number or
# has an unexpected suffix. If so, treat it as a compiler with a
# custom spec.
version_number, _ = archspec.cpu.version_components(compiler.version.dotted_numeric_string)
compiler_version = compiler.version
version_number, suffix = archspec.cpu.version_components(compiler.version)
if not version_number or suffix:
try:
compiler_version = compiler.real_version
except spack.util.executable.ProcessError as e:
# log this and just return compiler.version instead
tty.debug(str(e))
try:
result = target.optimization_flags(compiler.name, version_number)
result = target.optimization_flags(compiler.name, compiler_version.dotted_numeric_string)
except (ValueError, archspec.cpu.UnsupportedMicroarchitecture):
result = ""
return result
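[Editor's note: for reference, a small sketch of archspec's version_components as used on both sides of this hunk, assuming it splits a version string into a dotted-numeric part and a suffix.]

import archspec.cpu

number, suffix = archspec.cpu.version_components("10.2.1")
# expected: number == "10.2.1", suffix == "" for a clean release version
number, suffix = archspec.cpu.version_components("10.2.1-custom")
# a non-empty suffix is what triggers the real_version fallback above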
class FilterDefaultDynamicLinkerSearchPaths:
"""Remove rpaths to directories that are default search paths of the dynamic linker."""
def __init__(self, dynamic_linker: Optional[str]) -> None:
# Identify directories by (inode, device) tuple, which handles symlinks too.
self.default_path_identifiers: Set[Tuple[int, int]] = set()
if not dynamic_linker:
return
for path in spack.util.libc.default_search_paths_from_dynamic_linker(dynamic_linker):
try:
s = os.stat(path)
if stat.S_ISDIR(s.st_mode):
self.default_path_identifiers.add((s.st_ino, s.st_dev))
except OSError:
continue
def is_dynamic_loader_default_path(self, p: str) -> bool:
try:
s = os.stat(p)
return (s.st_ino, s.st_dev) in self.default_path_identifiers
except OSError:
return False
def __call__(self, dirs: List[str]) -> List[str]:
if not self.default_path_identifiers:
return dirs
return [p for p in dirs if not self.is_dynamic_loader_default_path(p)]
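[Editor's note: the (inode, device) identification deliberately treats symlinked duplicates of one directory as equal; a standalone sketch of the comparison.]

import os

def same_dir(a: str, b: str) -> bool:
    sa, sb = os.stat(a), os.stat(b)
    return (sa.st_ino, sa.st_dev) == (sb.st_ino, sb.st_dev)

# On distros where /lib64 is a symlink into /usr/lib64, same_dir returns
# True for both spellings, so only one rpath entry survives the filter.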
def set_wrapper_variables(pkg, env):
"""Set environment variables used by the Spack compiler wrapper (which have the prefix
`SPACK_`) and also add the compiler wrappers to PATH.
@@ -369,8 +473,39 @@ def set_wrapper_variables(pkg, env):
this function computes these options in a manner that is intended to match the DAG traversal
order in `SetupContext`. TODO: this is not the case yet, we're using post order, SetupContext
is using topo order."""
# Set compiler flags injected from the spec
set_wrapper_environment_variables_for_flags(pkg, env)
# Set environment variables if specified for
# the given compiler
compiler = pkg.compiler
env.extend(spack.schema.environment.parse(compiler.environment))
if compiler.extra_rpaths:
extra_rpaths = ":".join(compiler.extra_rpaths)
env.set("SPACK_COMPILER_EXTRA_RPATHS", extra_rpaths)
# Add spack build environment path with compiler wrappers first in
# the path. We add the compiler wrapper path, which includes default
# wrappers (cc, c++, f77, f90), AND a subdirectory containing
# compiler-specific symlinks. The latter ensures that builds that
# are sensitive to the *name* of the compiler see the right name when
# we're building with the wrappers.
#
# Conflicts on case-insensitive systems (like "CC" and "cc") are
# handled by putting one in the <build_env_path>/case-insensitive
# directory. Add that to the path too.
env_paths = []
compiler_specific = os.path.join(
spack.paths.build_env_path, os.path.dirname(pkg.compiler.link_paths["cc"])
)
for item in [spack.paths.build_env_path, compiler_specific]:
env_paths.append(item)
ci = os.path.join(item, "case-insensitive")
if os.path.isdir(ci):
env_paths.append(ci)
tty.debug("Adding compiler bin/ paths: " + " ".join(env_paths))
for item in env_paths:
env.prepend_path("PATH", item)
env.set_path(SPACK_ENV_PATH, env_paths)
# Working directory for the spack command itself, for debug logs.
if spack.config.get("config:debug"):
@@ -436,15 +571,22 @@ def set_wrapper_variables(pkg, env):
lib_path = os.path.join(pkg.prefix, libdir)
rpath_dirs.insert(0, lib_path)
filter_default_dynamic_linker_search_paths = FilterDefaultDynamicLinkerSearchPaths(
pkg.compiler.default_dynamic_linker
)
# TODO: filter_system_paths is again wrong (and probably unnecessary due to the is_system_path
# branch above). link_dirs should be filtered with entries from _parse_link_paths.
link_dirs = list(dedupe(filter_system_paths(link_dirs)))
include_dirs = list(dedupe(filter_system_paths(include_dirs)))
rpath_dirs = list(dedupe(filter_system_paths(rpath_dirs)))
rpath_dirs = filter_default_dynamic_linker_search_paths(rpath_dirs)
default_dynamic_linker_filter = spack.compilers.libraries.dynamic_linker_filter_for(pkg.spec)
if default_dynamic_linker_filter:
rpath_dirs = default_dynamic_linker_filter(rpath_dirs)
# TODO: implicit_rpaths is prefiltered by is_system_path, that should be removed in favor of
# just this filter.
implicit_rpaths = filter_default_dynamic_linker_search_paths(pkg.compiler.implicit_rpaths())
if implicit_rpaths:
env.set("SPACK_COMPILER_IMPLICIT_RPATHS", ":".join(implicit_rpaths))
# Spack managed directories include the stage, store and upstream stores. We extend this with
# their real paths to make it more robust (e.g. /tmp vs /private/tmp on macOS).
@@ -500,19 +642,22 @@ def set_package_py_globals(pkg, context: Context = Context.BUILD):
# Put spack compiler paths in module scope. (Some packages use it
# in setup_run_environment etc., so don't limit it to context == build)
link_dir = spack.paths.build_env_path
pkg_compiler = None
try:
pkg_compiler = pkg.compiler
except spack.compilers.NoCompilerForSpecError as e:
tty.debug(f"cannot set 'spack_cc': {str(e)}")
# FIXME (compiler as nodes): make this more general, and not tied to three languages
# Maybe add a callback?
global_names = {
"c": ("spack_cc",),
"cxx": ("spack_cxx",),
"fortran": ("spack_fc", "spack_f77"),
}
for language in ("c", "cxx", "fortran"):
spec = pkg.spec.dependencies(virtuals=[language])
value = None if not spec else os.path.join(link_dir, spec[0].package.link_paths[language])
for name in global_names[language]:
setattr(module, name, value)
if pkg_compiler is not None:
module.spack_cc = os.path.join(link_dir, pkg_compiler.link_paths["cc"])
module.spack_cxx = os.path.join(link_dir, pkg_compiler.link_paths["cxx"])
module.spack_f77 = os.path.join(link_dir, pkg_compiler.link_paths["f77"])
module.spack_fc = os.path.join(link_dir, pkg_compiler.link_paths["fc"])
else:
module.spack_cc = None
module.spack_cxx = None
module.spack_f77 = None
module.spack_fc = None
# Useful directories within the prefix are encapsulated in
# a Prefix object.
@@ -679,6 +824,7 @@ def setup_package(pkg, dirty, context: Context = Context.BUILD):
context == Context.TEST and pkg.test_requires_compiler
)
if need_compiler:
set_compiler_environment_variables(pkg, env_mods)
set_wrapper_variables(pkg, env_mods)
# Platform specific setup goes before package specific setup. This is for setting
@@ -690,11 +836,6 @@ def setup_package(pkg, dirty, context: Context = Context.BUILD):
env_mods.extend(setup_context.get_env_modifications())
tty.debug("setup_package: collected all modifications from dependencies")
tty.debug("setup_package: adding compiler wrappers paths")
for x in env_mods.group_by_name()["SPACK_ENV_PATH"]:
assert isinstance(x, PrependPath), "unexpected setting used for SPACK_ENV_PATH"
env_mods.prepend_path("PATH", x.value)
if context == Context.TEST:
env_mods.prepend_path("PATH", ".")
elif context == Context.BUILD and not dirty and not env_mods.is_unset("CPATH"):
@@ -708,6 +849,11 @@ def setup_package(pkg, dirty, context: Context = Context.BUILD):
# Load modules on an already clean environment, just before applying Spack's
# own environment modifications. This ensures Spack controls CC/CXX/... variables.
if need_compiler:
tty.debug("setup_package: loading compiler modules")
for mod in pkg.compiler.modules:
load_module(mod)
load_external_modules(pkg)
# Make sure nothing's strange about the Spack environment.


@@ -13,7 +13,6 @@
import spack.build_environment
import spack.builder
import spack.compilers.libraries
import spack.error
import spack.package_base
import spack.phase_callbacks
@@ -183,10 +182,7 @@ def patch_config_files(self) -> bool:
@property
def _removed_la_files_log(self) -> str:
"""File containing the list of removed libtool archives"""
build_dir = self.build_directory
if not os.path.isabs(self.build_directory):
build_dir = os.path.join(self.pkg.stage.path, build_dir)
return os.path.join(build_dir, "removed_la_files.txt")
return os.path.join(self.build_directory, "removed_la_files.txt")
@property
def archive_files(self) -> List[str]:
@@ -397,44 +393,33 @@ def _do_patch_libtool(self) -> None:
markers[tag] = "LIBTOOL TAG CONFIG: {0}".format(tag.upper())
# Replace empty linker flag prefixes:
if self.spec.satisfies("%nag"):
if self.pkg.compiler.name == "nag":
# Nag is mixed with gcc and g++, which are recognized correctly.
# Therefore, we change only Fortran values:
nag_pkg = self.spec["fortran"].package
for tag in ["fc", "f77"]:
marker = markers[tag]
x.filter(
regex='^wl=""$',
repl=f'wl="{nag_pkg.linker_arg}"',
start_at=f"# ### BEGIN {marker}",
stop_at=f"# ### END {marker}",
repl='wl="{0}"'.format(self.pkg.compiler.linker_arg),
start_at="# ### BEGIN {0}".format(marker),
stop_at="# ### END {0}".format(marker),
)
else:
compiler_spec = spack.compilers.libraries.compiler_spec(self.spec)
if compiler_spec:
x.filter(regex='^wl=""$', repl='wl="{0}"'.format(compiler_spec.package.linker_arg))
x.filter(regex='^wl=""$', repl='wl="{0}"'.format(self.pkg.compiler.linker_arg))
# Replace empty PIC flag values:
for compiler, marker in markers.items():
if compiler == "cc":
language = "c"
elif compiler == "cxx":
language = "cxx"
else:
language = "fortran"
if language not in self.spec:
continue
for cc, marker in markers.items():
x.filter(
regex='^pic_flag=""$',
repl=f'pic_flag="{self.spec[language].package.pic_flag}"',
start_at=f"# ### BEGIN {marker}",
stop_at=f"# ### END {marker}",
repl='pic_flag="{0}"'.format(
getattr(self.pkg.compiler, "{0}_pic_flag".format(cc))
),
start_at="# ### BEGIN {0}".format(marker),
stop_at="# ### END {0}".format(marker),
)
# Other compiler-specific patches:
if self.spec.satisfies("%fj"):
if self.pkg.compiler.name == "fj":
x.filter(regex="-nostdlib", repl="", string=True)
rehead = r"/\S*/"
for o in [
@@ -447,7 +432,7 @@ def _do_patch_libtool(self) -> None:
r"crtendS\.o",
]:
x.filter(regex=(rehead + o), repl="")
elif self.spec.satisfies("%nag"):
elif self.pkg.compiler.name == "nag":
for tag in ["fc", "f77"]:
marker = markers[tag]
start_at = "# ### BEGIN {0}".format(marker)
@@ -535,7 +520,12 @@ def configure_abs_path(self) -> str:
@property
def build_directory(self) -> str:
"""Override to provide another place to build the package"""
return self.configure_directory
# Handle the case where the configure directory is set to a non-absolute path
# Non-absolute paths are always relative to the staging source path
build_dir = self.configure_directory
if not os.path.isabs(build_dir):
build_dir = os.path.join(self.pkg.stage.source_path, build_dir)
return build_dir
@spack.phase_callbacks.run_before("autoreconf")
def delete_configure_to_force_update(self) -> None:
@@ -848,7 +838,7 @@ def remove_libtool_archives(self) -> None:
libtool_files = fs.find(str(self.pkg.prefix), "*.la", recursive=True)
with fs.safe_remove(*libtool_files):
fs.mkdirp(os.path.dirname(self._removed_la_files_log))
with open(self._removed_la_files_log, mode="w") as f:
with open(self._removed_la_files_log, mode="w", encoding="utf-8") as f:
f.write("\n".join(libtool_files))
def setup_build_environment(self, env):


@@ -11,7 +11,6 @@
import llnl.util.tty as tty
import spack.phase_callbacks
from spack.directives import depends_on
from .cmake import CMakeBuilder, CMakePackage
@@ -69,7 +68,12 @@ class CachedCMakeBuilder(CMakeBuilder):
@property
def cache_name(self):
return f"{self.pkg.name}-{self.spec.architecture.platform}-{self.spec.dag_hash()}.cmake"
return "{0}-{1}-{2}@{3}.cmake".format(
self.pkg.name,
self.pkg.spec.architecture,
self.pkg.spec.compiler.name,
self.pkg.spec.compiler.version,
)
@property
def cache_path(self):
@@ -112,9 +116,7 @@ def initconfig_compiler_entries(self):
# Fortran compiler is optional
if "FC" in os.environ:
spack_fc_entry = cmake_cache_path("CMAKE_Fortran_COMPILER", os.environ["FC"])
system_fc_entry = cmake_cache_path(
"CMAKE_Fortran_COMPILER", self.spec["fortran"].package.fortran
)
system_fc_entry = cmake_cache_path("CMAKE_Fortran_COMPILER", self.pkg.compiler.fc)
else:
spack_fc_entry = "# No Fortran compiler defined in spec"
system_fc_entry = "# No Fortran compiler defined in spec"
@@ -130,8 +132,8 @@ def initconfig_compiler_entries(self):
" " + cmake_cache_path("CMAKE_CXX_COMPILER", os.environ["CXX"]),
" " + spack_fc_entry,
"else()\n",
" " + cmake_cache_path("CMAKE_C_COMPILER", self.spec["c"].package.cc),
" " + cmake_cache_path("CMAKE_CXX_COMPILER", self.spec["cxx"].package.cxx),
" " + cmake_cache_path("CMAKE_C_COMPILER", self.pkg.compiler.cc),
" " + cmake_cache_path("CMAKE_CXX_COMPILER", self.pkg.compiler.cxx),
" " + system_fc_entry,
"endif()\n",
]
@@ -322,7 +324,7 @@ def initconfig(self, pkg, spec, prefix):
+ self.initconfig_package_entries()
)
with open(self.cache_name, "w") as f:
with open(self.cache_name, "w", encoding="utf-8") as f:
for entry in cache_entries:
f.write("%s\n" % entry)
f.write("\n")
@@ -350,10 +352,6 @@ class CachedCMakePackage(CMakePackage):
CMakeBuilder = CachedCMakeBuilder
# These dependencies are assumed in the builder
depends_on("c", type="build")
depends_on("cxx", type="build")
def flag_handler(self, name, flags):
if name in ("cflags", "cxxflags", "cppflags", "fflags"):
return None, None, None # handled in the cmake cache


@@ -5,22 +5,15 @@
import itertools
import os
import pathlib
import platform
import re
import sys
from typing import Dict, List, Optional, Sequence, Tuple, Union
import archspec.cpu
from typing import Dict, List, Sequence, Tuple, Union
import llnl.util.tty as tty
from llnl.util.lang import classproperty, memoized
from llnl.util.lang import classproperty
import spack
import spack.compilers.error
import spack.compilers.libraries
import spack.config
import spack.compiler
import spack.package_base
import spack.paths
import spack.util.executable
# Local "type" for type hints
@@ -51,9 +44,6 @@ class CompilerPackage(spack.package_base.PackageBase):
#: Static definition of languages supported by this class
compiler_languages: Sequence[str] = ["c", "cxx", "fortran"]
#: Relative path to compiler wrappers
link_paths: Dict[str, str] = {}
def __init__(self, spec: "spack.spec.Spec"):
super().__init__(spec)
msg = f"Supported languages for {spec} are not a subset of possible supported languages"
@@ -88,14 +78,14 @@ def executables(cls) -> Sequence[str]:
]
@classmethod
def determine_version(cls, exe: Path) -> str:
def determine_version(cls, exe: Path):
version_argument = cls.compiler_version_argument
if isinstance(version_argument, str):
version_argument = (version_argument,)
for va in version_argument:
try:
output = compiler_output(exe, version_argument=va)
output = spack.compiler.get_compiler_version_output(exe, va)
match = re.search(cls.compiler_version_regex, output)
if match:
return ".".join(match.groups())
@@ -106,7 +96,6 @@ def determine_version(cls, exe: Path) -> str:
f"[{__file__}] Cannot detect a valid version for the executable "
f"{str(exe)}, for package '{cls.name}': {e}"
)
return ""
@classmethod
def compiler_bindir(cls, prefix: Path) -> Path:
@@ -154,184 +143,3 @@ def determine_compiler_paths(cls, exes: Sequence[Path]) -> Dict[str, Path]:
def determine_variants(cls, exes: Sequence[Path], version_str: str) -> Tuple:
# path determination is separated so it can be reused in subclasses
return "", {"compilers": cls.determine_compiler_paths(exes=exes)}
#: Returns the argument needed to set the RPATH, or None if it does not exist
rpath_arg: Optional[str] = "-Wl,-rpath,"
#: Flag that needs to be used to pass an argument to the linker
linker_arg: str = "-Wl,"
#: Flag used to produce Position Independent Code
pic_flag: str = "-fPIC"
#: Flag used to get verbose output
verbose_flags: str = "-v"
#: Flag to activate OpenMP support
openmp_flag: str = "-fopenmp"
def standard_flag(self, *, language: str, standard: str) -> str:
"""Returns the flag used to enforce a given standard for a language"""
if language not in self.supported_languages:
raise spack.compilers.error.UnsupportedCompilerFlag(
f"{self.spec} does not provide the '{language}' language"
)
try:
return self._standard_flag(language=language, standard=standard)
except (KeyError, RuntimeError) as e:
raise spack.compilers.error.UnsupportedCompilerFlag(
f"{self.spec} does not provide the '{language}' standard {standard}"
) from e
def _standard_flag(self, *, language: str, standard: str) -> str:
raise NotImplementedError("Must be implemented by derived classes")
@property
def disable_new_dtags(self) -> str:
if platform.system() == "Darwin":
return ""
return "--disable-new-dtags"
@property
def enable_new_dtags(self) -> str:
if platform.system() == "Darwin":
return ""
return "--enable-new-dtags"
def setup_dependent_build_environment(self, env, dependent_spec):
# FIXME (compiler as nodes): check if this is good enough or should be made more general
# The package is not used as a compiler, so skip this setup
if not any(
lang in dependent_spec and dependent_spec[lang].name == self.spec.name
for lang in ("c", "cxx", "fortran")
):
return
# Populate an object with the list of environment modifications and return it
link_dir = pathlib.Path(spack.paths.build_env_path)
env_paths = []
for language, attr_name, wrapper_var_name, spack_var_name in [
("c", "cc", "CC", "SPACK_CC"),
("cxx", "cxx", "CXX", "SPACK_CXX"),
("fortran", "fortran", "F77", "SPACK_F77"),
("fortran", "fortran", "FC", "SPACK_FC"),
]:
if language not in dependent_spec or dependent_spec[language].name != self.spec.name:
continue
if not hasattr(self, attr_name):
continue
compiler = getattr(self, attr_name)
env.set(spack_var_name, compiler)
if language not in self.link_paths:
continue
wrapper_path = link_dir / self.link_paths.get(language)
env.set(wrapper_var_name, str(wrapper_path))
env.set(f"SPACK_{wrapper_var_name}_RPATH_ARG", self.rpath_arg)
uarch = dependent_spec.architecture.target
version_number, _ = archspec.cpu.version_components(
self.spec.version.dotted_numeric_string
)
try:
isa_arg = uarch.optimization_flags(self.archspec_name(), version_number)
except (ValueError, archspec.cpu.UnsupportedMicroarchitecture):
isa_arg = ""
if isa_arg:
env.set(f"SPACK_TARGET_ARGS_{attr_name.upper()}", isa_arg)
# Add spack build environment path with compiler wrappers first in
# the path. We add the compiler wrapper path, which includes default
# wrappers (cc, c++, f77, f90), AND a subdirectory containing
# compiler-specific symlinks. The latter ensures that builds that
# are sensitive to the *name* of the compiler see the right name when
# we're building with the wrappers.
#
# Conflicts on case-insensitive systems (like "CC" and "cc") are
# handled by putting one in the <build_env_path>/case-insensitive
# directory. Add that to the path too.
compiler_specific = os.path.join(
spack.paths.build_env_path, os.path.dirname(self.link_paths[language])
)
for item in [spack.paths.build_env_path, compiler_specific]:
env_paths.append(item)
ci = os.path.join(item, "case-insensitive")
if os.path.isdir(ci):
env_paths.append(ci)
# FIXME (compiler as nodes): make these paths language specific
env.set("SPACK_LINKER_ARG", self.linker_arg)
paths = _implicit_rpaths(pkg=self)
if paths:
env.set("SPACK_COMPILER_IMPLICIT_RPATHS", ":".join(paths))
# Check whether we want to force RPATH or RUNPATH
if spack.config.CONFIG.get("config:shared_linking:type") == "rpath":
env.set("SPACK_DTAGS_TO_STRIP", self.enable_new_dtags)
env.set("SPACK_DTAGS_TO_ADD", self.disable_new_dtags)
else:
env.set("SPACK_DTAGS_TO_STRIP", self.disable_new_dtags)
env.set("SPACK_DTAGS_TO_ADD", self.enable_new_dtags)
spec = self.spec
if spec.extra_attributes:
extra_rpaths = spec.extra_attributes.get("extra_rpaths")
if extra_rpaths:
extra_rpaths = ":".join(compiler.extra_rpaths)
env.append_path("SPACK_COMPILER_EXTRA_RPATHS", extra_rpaths)
for item in env_paths:
env.prepend_path("SPACK_ENV_PATH", item)
def archspec_name(self) -> str:
"""Name that archspec uses to refer to this compiler"""
return self.spec.name
def _implicit_rpaths(pkg: spack.package_base.PackageBase) -> List[str]:
detector = spack.compilers.libraries.CompilerPropertyDetector(pkg.spec)
paths = detector.implicit_rpaths()
return paths
@memoized
def _compiler_output(
compiler_path: Path, *, version_argument: str, ignore_errors: Tuple[int, ...] = ()
) -> str:
"""Returns the output from the compiler invoked with the given version argument.
Args:
compiler_path: path of the compiler to be invoked
version_argument: the argument used to extract version information
ignore_errors: compiler exit codes that should not be treated as failures
"""
compiler = spack.util.executable.Executable(compiler_path)
compiler_invocation_args = {
"output": str,
"error": str,
"ignore_errors": ignore_errors,
"timeout": 120,
"fail_on_error": True,
}
if version_argument:
output = compiler(version_argument, **compiler_invocation_args)
else:
output = compiler(**compiler_invocation_args)
return output
def compiler_output(
compiler_path: Path, *, version_argument: str, ignore_errors: Tuple[int, ...] = ()
) -> str:
"""Wrapper for _get_compiler_version_output()."""
# This ensures that we memoize compiler output by *absolute path*,
# not just executable name. If we don't do this, and the path changes
# (e.g., during testing), we can get incorrect results.
if not os.path.isabs(compiler_path):
compiler_path = spack.util.executable.which_string(compiler_path, required=True)
return _compiler_output(
compiler_path, version_argument=version_argument, ignore_errors=ignore_errors
)


@@ -1153,7 +1153,7 @@ def _determine_license_type(self):
# The file will have been created upon self.license_required AND
# self.license_files having been populated, so the "if" is usually
# true by the time the present function runs; ../hooks/licensing.py
with open(f) as fh:
with open(f, encoding="utf-8") as fh:
if re.search(r"^[ \t]*[^" + self.license_comment + "\n]", fh.read(), re.MULTILINE):
license_type = {
"ACTIVATION_TYPE": "license_file",
@@ -1185,7 +1185,7 @@ def configure(self):
# our configuration accordingly. We can do this because the tokens are
# quite long and specific.
validator_code = open("pset/check.awk", "r").read()
validator_code = open("pset/check.awk", "r", encoding="utf-8").read()
# Let's go a little further and distill the tokens (plus some noise).
tokenlike_words = set(re.findall(r"[A-Z_]{4,}", validator_code))
@@ -1222,7 +1222,7 @@ def configure(self):
config_draft.update(self._determine_license_type)
# Write sorted *by token* so the file looks less like a hash dump.
f = open("silent.cfg", "w")
f = open("silent.cfg", "w", encoding="utf-8")
for token, value in sorted(config_draft.items()):
if token in tokenlike_words:
f.write("%s=%s\n" % (token, value))
@@ -1273,7 +1273,7 @@ def configure_rpath(self):
raise InstallError("Cannot find compiler command to configure rpath:\n\t" + f)
compiler_cfg = os.path.abspath(f + ".cfg")
with open(compiler_cfg, "w") as fh:
with open(compiler_cfg, "w", encoding="utf-8") as fh:
fh.write("-Xlinker -rpath={0}\n".format(compilers_lib_dir))
@spack.phase_callbacks.run_after("install")
@@ -1297,7 +1297,7 @@ def configure_auto_dispatch(self):
ad.append(x)
compiler_cfg = os.path.abspath(f + ".cfg")
with open(compiler_cfg, "a") as fh:
with open(compiler_cfg, "a", encoding="utf-8") as fh:
fh.write("-ax{0}\n".format(",".join(ad)))
@spack.phase_callbacks.run_after("install")


@@ -75,7 +75,7 @@ def generate_luarocks_config(self, pkg, spec, prefix):
table_entries.append(self._generate_tree_line(d.name, d.prefix))
path = self._luarocks_config_path()
with open(path, "w") as config:
with open(path, "w", encoding="utf-8") as config:
config.write(
"""
deps_mode="all"


@@ -75,7 +75,7 @@ def toolchain_version(self):
Override this method to select a specific version of the toolchain or change
selection heuristics.
Default is whatever version of msvc has been selected by concretization"""
return "v" + self.spec["msvc"].package.platform_toolset_ver
return "v" + self.pkg.compiler.platform_toolset_ver
@property
def std_msbuild_args(self):


@@ -32,6 +32,9 @@ class IntelOneApiPackage(Package):
# organization (e.g. University/Company).
redistribute(source=False, binary=False)
# contains precompiled binaries without rpaths
unresolved_libraries = ["*"]
for c in [
"target=ppc64:",
"target=ppc64le:",
@@ -140,7 +143,7 @@ def setup_run_environment(self, env):
$ source {prefix}/{component}/{version}/env/vars.sh
"""
# Only if environment modifications are desired (default is +envmods)
if "+envmods" in self.spec:
if "~envmods" not in self.spec:
env.extend(
EnvironmentModifications.from_sourcing_file(
self.component_prefix.env.join("vars.sh"), *self.env_script_args


@@ -277,6 +277,10 @@ def update_external_dependencies(self, extendee_spec=None):
if not python.architecture.target:
python.architecture.target = archspec.cpu.host().family.name
# Ensure compiler information is present
if not python.compiler:
python.compiler = self.spec.compiler
python.external_path = self.spec.external_path
python._mark_concrete()
self.spec.add_dependency_edge(python, depflag=dt.BUILD | dt.LINK | dt.RUN, virtuals=())

File diff suppressed because it is too large


@@ -0,0 +1,41 @@
# Spack CI generators
This document describes how the ci module can be extended to provide novel
CI generators. The module currently has only a single generator, for gitlab.
The unit tests for the ci module also define a small custom generator for
testing purposes.
The process of generating a pipeline involves creating a ci-enabled spack
environment, activating it, and running `spack ci generate`, possibly with
arguments describing things like where the output should be written.
Internally, pipeline generation is broken into two components: general and
CI platform specific.
## General pipeline functionality
General pipeline functionality includes building a pipeline graph (really,
a forest), pruning it in a variety of ways, and gathering attributes for all
the generated spec build jobs from the spack configuration.
All of the above functionality is defined in the `__init__.py` of the top-level
ci module, and should be roughly the same for pipelines generated for any
platform.
## CI platform specific functionality
Functionality specific to CI platforms (e.g. gitlab or gha) should be
defined in a dedicated module. In order to define a generator for a new
platform, there are only a few requirements:
1. add a file under `ci` in which you define a generator method decorated with
the `@generator` decorator.
1. import it from `lib/spack/spack/ci/__init__.py`, so that your new generator
is registered.
1. the generator method must take as arguments PipelineDag, SpackCIConfig,
and PipelineOptions objects, in that order.
1. the generator method must produce an output file containing the
generated pipeline (see the sketch below).
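A minimal sketch of such a generator follows. It is hedged: the platform name
`jsonl` and its one-object-per-line output format are invented for illustration,
while the decorator, the argument order, and the `generate_ir`/`traverse_nodes`
calls mirror the existing gitlab generator.

```python
import json

from .generator_registry import generator


@generator("jsonl")
def generate_jsonl_pipeline(pipeline, spack_ci, options):
    """Toy generator: write one JSON object per build job, dependencies first."""
    spack_ci_ir = spack_ci.generate_ir()
    # assumes options.output_file was provided on the command line
    with open(options.output_file, "w", encoding="utf-8") as fd:
        # direction="parents" yields leaves (dependencies) before dependents
        for _, node in pipeline.traverse_nodes(direction="parents"):
            attributes = spack_ci_ir["jobs"][node.spec.dag_hash()]["attributes"]
            fd.write(json.dumps({"spec": str(node.spec), "attributes": attributes}) + "\n")
```

The module would still need to be imported from `lib/spack/spack/ci/__init__.py`
so that the `@generator("jsonl")` registration actually runs.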

File diff suppressed because it is too large


@@ -0,0 +1,825 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import codecs
import copy
import json
import os
import re
import ssl
import sys
import time
from collections import deque
from enum import Enum
from typing import Dict, Generator, List, Optional, Set, Tuple
from urllib.parse import quote, urlencode, urlparse
from urllib.request import HTTPHandler, HTTPSHandler, Request, build_opener
import llnl.util.filesystem as fs
import llnl.util.tty as tty
from llnl.util.lang import Singleton, memoized
import spack.binary_distribution as bindist
import spack.config as cfg
import spack.deptypes as dt
import spack.environment as ev
import spack.error
import spack.mirrors.mirror
import spack.schema
import spack.spec
import spack.util.spack_yaml as syaml
import spack.util.url as url_util
import spack.util.web as web_util
from spack import traverse
from spack.reporters import CDash, CDashConfiguration
from spack.reporters.cdash import SPACK_CDASH_TIMEOUT
from spack.reporters.cdash import build_stamp as cdash_build_stamp
def _urlopen():
error_handler = web_util.SpackHTTPDefaultErrorHandler()
# One opener with HTTPS ssl enabled
with_ssl = build_opener(
HTTPHandler(), HTTPSHandler(context=web_util.ssl_create_default_context()), error_handler
)
# One opener with HTTPS ssl disabled
without_ssl = build_opener(
HTTPHandler(), HTTPSHandler(context=ssl._create_unverified_context()), error_handler
)
# And dynamically dispatch based on the config:verify_ssl.
def dispatch_open(fullurl, data=None, timeout=None, verify_ssl=True):
opener = with_ssl if verify_ssl else without_ssl
timeout = timeout or cfg.get("config:connect_timeout", 1)
return opener.open(fullurl, data, timeout)
return dispatch_open
IS_WINDOWS = sys.platform == "win32"
SPACK_RESERVED_TAGS = ["public", "protected", "notary"]
_dyn_mapping_urlopener = Singleton(_urlopen)
def copy_files_to_artifacts(src, artifacts_dir):
"""
Copy file(s) to the given artifacts directory
Parameters:
src (str): the glob-friendly path expression for the file(s) to copy
artifacts_dir (str): the destination directory
"""
try:
fs.copy(src, artifacts_dir)
except Exception as err:
msg = (
f"Unable to copy files ({src}) to artifacts {artifacts_dir} due to "
f"exception: {str(err)}"
)
tty.warn(msg)
def win_quote(quote_str: str) -> str:
if IS_WINDOWS:
quote_str = f'"{quote_str}"'
return quote_str
def _spec_matches(spec, match_string):
return spec.intersects(match_string)
def _noop(x):
return x
def unpack_script(script_section, op=_noop):
script = []
for cmd in script_section:
if isinstance(cmd, list):
for subcmd in cmd:
script.append(op(subcmd))
else:
script.append(op(cmd))
return script
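# e.g. unpack_script(["a", ["b", "c"]], op=str.upper) -> ["A", "B", "C"]: nested
# command lists are flattened one level and ``op`` is applied to every command.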
def ensure_expected_target_path(path: str) -> str:
"""Returns passed paths with all Windows path separators exchanged
for posix separators
TODO (johnwparent): Refactor config + cli read/write to deal only in posix style paths
"""
if path:
return path.replace("\\", "/")
return path
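# e.g. ensure_expected_target_path("jobs_scratch_dir\\concrete_env") -> "jobs_scratch_dir/concrete_env"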
def update_env_scopes(
env: ev.Environment,
cli_scopes: List[str],
output_file: str,
transform_windows_paths: bool = False,
) -> None:
"""Add any config scopes from cli_scopes which aren't already included in the
environment, by reading the yaml, adding the missing includes, and writing the
updated yaml back to the same location.
"""
with open(env.manifest_path, "r", encoding="utf-8") as env_fd:
env_yaml_root = syaml.load(env_fd)
# Add config scopes to environment
env_includes = env_yaml_root["spack"].get("include", [])
include_scopes: List[str] = []
for scope in cli_scopes:
if scope not in include_scopes and scope not in env_includes:
include_scopes.insert(0, scope)
env_includes.extend(include_scopes)
env_yaml_root["spack"]["include"] = [
ensure_expected_target_path(i) if transform_windows_paths else i for i in env_includes
]
with open(output_file, "w", encoding="utf-8") as fd:
syaml.dump_config(env_yaml_root, fd, default_flow_style=False)
def write_pipeline_manifest(specs, src_prefix, dest_prefix, output_file):
"""Write out the file describing specs that should be copied"""
buildcache_copies = {}
for release_spec in specs:
release_spec_dag_hash = release_spec.dag_hash()
# TODO: This assumes signed version of the spec
buildcache_copies[release_spec_dag_hash] = [
{
"src": url_util.join(
src_prefix,
bindist.build_cache_relative_path(),
bindist.tarball_name(release_spec, ".spec.json.sig"),
),
"dest": url_util.join(
dest_prefix,
bindist.build_cache_relative_path(),
bindist.tarball_name(release_spec, ".spec.json.sig"),
),
},
{
"src": url_util.join(
src_prefix,
bindist.build_cache_relative_path(),
bindist.tarball_path_name(release_spec, ".spack"),
),
"dest": url_util.join(
dest_prefix,
bindist.build_cache_relative_path(),
bindist.tarball_path_name(release_spec, ".spack"),
),
},
]
target_dir = os.path.dirname(output_file)
if not os.path.exists(target_dir):
os.makedirs(target_dir)
with open(output_file, "w", encoding="utf-8") as fd:
fd.write(json.dumps(buildcache_copies))
class CDashHandler:
"""
Class for managing CDash data and processing.
"""
def __init__(self, ci_cdash):
# start with the gitlab ci configuration
self.url = ci_cdash.get("url")
self.build_group = ci_cdash.get("build-group")
self.project = ci_cdash.get("project")
self.site = ci_cdash.get("site")
# grab the authorization token when available
self.auth_token = os.environ.get("SPACK_CDASH_AUTH_TOKEN")
if self.auth_token:
tty.verbose("Using CDash auth token from environment")
# append runner description to the site if available
runner = os.environ.get("CI_RUNNER_DESCRIPTION")
if runner:
self.site += f" ({runner})"
def args(self):
return [
"--cdash-upload-url",
win_quote(self.upload_url),
"--cdash-build",
win_quote(self.build_name()),
"--cdash-site",
win_quote(self.site),
"--cdash-buildstamp",
win_quote(self.build_stamp),
]
def build_name(self, spec: Optional[spack.spec.Spec] = None) -> Optional[str]:
"""Returns the CDash build name.
A name will be generated if the `spec` is provided,
otherwise, the value will be retrieved from the environment
through the `SPACK_CDASH_BUILD_NAME` variable.
Returns: (str) given spec's CDash build name."""
if spec:
build_name = f"{spec.name}@{spec.version}%{spec.compiler} \
hash={spec.dag_hash()} arch={spec.architecture} ({self.build_group})"
tty.debug(f"Generated CDash build name ({build_name}) from the {spec.name}")
return build_name
env_build_name = os.environ.get("SPACK_CDASH_BUILD_NAME")
tty.debug(f"Using CDash build name ({env_build_name}) from the environment")
return env_build_name
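# Illustrative output (values invented): for a zlib spec in build group "my-group"
# this might produce something like
# "zlib@1.3.1%gcc@12.3.0 hash=<dag-hash> arch=linux-ubuntu22.04-x86_64 (my-group)".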
@property # type: ignore
def build_stamp(self):
"""Returns the CDash build stamp.
The one defined by SPACK_CDASH_BUILD_STAMP environment variable
is preferred due to the representation of timestamps; otherwise,
one will be built.
Returns: (str) current CDash build stamp"""
build_stamp = os.environ.get("SPACK_CDASH_BUILD_STAMP")
if build_stamp:
tty.debug(f"Using build stamp ({build_stamp}) from the environment")
return build_stamp
build_stamp = cdash_build_stamp(self.build_group, time.time())
tty.debug(f"Generated new build stamp ({build_stamp})")
return build_stamp
@property # type: ignore
@memoized
def project_enc(self):
tty.debug(f"Encoding project ({type(self.project)}): {self.project})")
encode = urlencode({"project": self.project})
index = encode.find("=") + 1
return encode[index:]
@property
def upload_url(self):
url_format = f"{self.url}/submit.php?project={self.project_enc}"
return url_format
def copy_test_results(self, source, dest):
"""Copy test results to artifacts directory."""
reports = fs.join_path(source, "*_Test*.xml")
copy_files_to_artifacts(reports, dest)
def create_buildgroup(self, opener, headers, url, group_name, group_type):
data = {"newbuildgroup": group_name, "project": self.project, "type": group_type}
enc_data = json.dumps(data).encode("utf-8")
request = Request(url, data=enc_data, headers=headers)
response = opener.open(request, timeout=SPACK_CDASH_TIMEOUT)
response_code = response.getcode()
if response_code not in [200, 201]:
msg = f"Creating buildgroup failed (response code = {response_code})"
tty.warn(msg)
return None
response_text = response.read()
response_json = json.loads(response_text)
build_group_id = response_json["id"]
return build_group_id
def populate_buildgroup(self, job_names):
url = f"{self.url}/api/v1/buildgroup.php"
headers = {
"Authorization": f"Bearer {self.auth_token}",
"Content-Type": "application/json",
}
opener = build_opener(HTTPHandler)
parent_group_id = self.create_buildgroup(opener, headers, url, self.build_group, "Daily")
group_id = self.create_buildgroup(
opener, headers, url, f"Latest {self.build_group}", "Latest"
)
if not parent_group_id or not group_id:
msg = f"Failed to create or retrieve buildgroups for {self.build_group}"
tty.warn(msg)
return
data = {
"dynamiclist": [
{"match": name, "parentgroupid": parent_group_id, "site": self.site}
for name in job_names
]
}
enc_data = json.dumps(data).encode("utf-8")
request = Request(url, data=enc_data, headers=headers)
request.get_method = lambda: "PUT"
response = opener.open(request, timeout=SPACK_CDASH_TIMEOUT)
response_code = response.getcode()
if response_code != 200:
msg = f"Error response code ({response_code}) in populate_buildgroup"
tty.warn(msg)
def report_skipped(self, spec: spack.spec.Spec, report_dir: str, reason: Optional[str]):
"""Explicitly report skipping testing of a spec (e.g., it's CI
configuration identifies it as known to have broken tests or
the CI installation failed).
Args:
spec: spec being tested
report_dir: directory where the report will be written
reason: reason the test is being skipped
"""
configuration = CDashConfiguration(
upload_url=self.upload_url,
packages=[spec.name],
build=self.build_name(),
site=self.site,
buildstamp=self.build_stamp,
track=None,
)
reporter = CDash(configuration=configuration)
reporter.test_skipped_report(report_dir, spec, reason)
class PipelineType(Enum):
COPY_ONLY = 1
spack_copy_only = 1
PROTECTED_BRANCH = 2
spack_protected_branch = 2
PULL_REQUEST = 3
spack_pull_request = 3
class PipelineOptions:
"""A container for all pipeline options that can be specified (whether
via cli, config/yaml, or environment variables)"""
def __init__(
self,
env: ev.Environment,
buildcache_destination: spack.mirrors.mirror.Mirror,
artifacts_root: str = "jobs_scratch_dir",
print_summary: bool = True,
output_file: Optional[str] = None,
check_index_only: bool = False,
broken_specs_url: Optional[str] = None,
rebuild_index: bool = True,
untouched_pruning_dependent_depth: Optional[int] = None,
prune_untouched: bool = False,
prune_up_to_date: bool = True,
prune_external: bool = True,
stack_name: Optional[str] = None,
pipeline_type: Optional[PipelineType] = None,
require_signing: bool = False,
cdash_handler: Optional["CDashHandler"] = None,
):
"""
Args:
env: Active spack environment
buildcache_destination: The mirror where built binaries should be pushed
artifacts_root: Path to location where artifacts should be stored
print_summary: Print a summary of the scheduled pipeline
output_file: Path where output file should be written
check_index_only: Only fetch the index or fetch all spec files
broken_specs_url: URL where broken specs (on develop) should be reported
rebuild_index: Generate a job to rebuild mirror index after rebuilds
untouched_pruning_dependent_depth: How many parents to traverse from changed pkg specs
prune_untouched: Prune jobs for specs that were unchanged in git history
prune_up_to_date: Prune specs from pipeline if binary exists on the mirror
prune_external: Prune specs from pipeline if they are external
stack_name: Name of spack stack
pipeline_type: Type of pipeline running (optional)
require_signing: Require buildcache to be signed (fail w/out signing key)
cdash_handler: Object for communicating build information with CDash
"""
self.env = env
self.buildcache_destination = buildcache_destination
self.artifacts_root = artifacts_root
self.print_summary = print_summary
self.output_file = output_file
self.check_index_only = check_index_only
self.broken_specs_url = broken_specs_url
self.rebuild_index = rebuild_index
self.untouched_pruning_dependent_depth = untouched_pruning_dependent_depth
self.prune_untouched = prune_untouched
self.prune_up_to_date = prune_up_to_date
self.prune_external = prune_external
self.stack_name = stack_name
self.pipeline_type = pipeline_type
self.require_signing = require_signing
self.cdash_handler = cdash_handler
class PipelineNode:
spec: spack.spec.Spec
parents: Set[str]
children: Set[str]
def __init__(self, spec: spack.spec.Spec):
self.spec = spec
self.parents = set()
self.children = set()
@property
def key(self):
"""Return key of the stored spec"""
return PipelineDag.key(self.spec)
class PipelineDag:
"""Turn a list of specs into a simple directed graph, that doesn't keep track
of edge types."""
@classmethod
def key(cls, spec: spack.spec.Spec) -> str:
return spec.dag_hash()
def __init__(self, specs: List[spack.spec.Spec]) -> None:
# Build dictionary of nodes
self.nodes: Dict[str, PipelineNode] = {
PipelineDag.key(s): PipelineNode(s)
for s in traverse.traverse_nodes(specs, deptype=dt.ALL_TYPES, root=True)
}
# Create edges
for edge in traverse.traverse_edges(
specs, deptype=dt.ALL_TYPES, root=False, cover="edges"
):
parent_key = PipelineDag.key(edge.parent)
child_key = PipelineDag.key(edge.spec)
self.nodes[parent_key].children.add(child_key)
self.nodes[child_key].parents.add(parent_key)
def prune(self, node_key: str):
"""Remove a node from the graph, and reconnect its parents and children"""
node = self.nodes[node_key]
for parent in node.parents:
self.nodes[parent].children.remove(node_key)
self.nodes[parent].children |= node.children
for child in node.children:
self.nodes[child].parents.remove(node_key)
self.nodes[child].parents |= node.parents
del self.nodes[node_key]
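# e.g. pruning b from the chain a -> b -> c reconnects the graph as a -> c:
# a inherits b's children and c inherits b's parents.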
def traverse_nodes(
self, direction: str = "children"
) -> Generator[Tuple[int, PipelineNode], None, None]:
"""Yields (depth, node) from the pipeline graph. Traversal is topologically
ordered from the roots if ``direction`` is ``children``, or from the leaves
if ``direction`` is ``parents``. The yielded depth is the length of the
longest path from the starting point to the yielded node."""
if direction == "children":
get_in_edges = lambda node: node.parents
get_out_edges = lambda node: node.children
else:
get_in_edges = lambda node: node.children
get_out_edges = lambda node: node.parents
sort_key = lambda k: self.nodes[k].spec.name
out_edges = {k: sorted(get_out_edges(n), key=sort_key) for k, n in self.nodes.items()}
num_in_edges = {k: len(get_in_edges(n)) for k, n in self.nodes.items()}
# Populate a queue with all the nodes that have no incoming edges
nodes = deque(
sorted(
[(0, key) for key in self.nodes.keys() if num_in_edges[key] == 0],
key=lambda item: item[1],
)
)
while nodes:
# Remove the next node, n, from the queue and yield it
depth, n_key = nodes.pop()
yield (depth, self.nodes[n_key])
# Remove an in-edge from every node, m, pointed to by an
# out-edge from n. If any of those nodes are left with
# 0 remaining in-edges, add them to the queue.
for m in out_edges[n_key]:
num_in_edges[m] -= 1
if num_in_edges[m] == 0:
nodes.appendleft((depth + 1, m))
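# e.g. for the chain a -> b -> c (writing nodes by name), direction="children"
# yields (0, a), (1, b), (2, c) (roots first), while direction="parents" yields
# (0, c), (1, b), (2, a) (leaves first).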
def get_dependencies(self, node: PipelineNode) -> List[PipelineNode]:
"""Returns a list of nodes corresponding to the direct dependencies
of the given node."""
return [self.nodes[k] for k in node.children]
class SpackCIConfig:
"""Spack CI object used to generate intermediate representation
used by the CI generator(s).
"""
def __init__(self, ci_config):
"""Given the information from the ci section of the config
and the staged jobs, set up the metadata needed for generating Spack
CI IR.
"""
self.ci_config = ci_config
self.named_jobs = ["any", "build", "copy", "cleanup", "noop", "reindex", "signing"]
self.ir = {
"jobs": {},
"rebuild-index": self.ci_config.get("rebuild-index", True),
"broken-specs-url": self.ci_config.get("broken-specs-url", None),
"broken-tests-packages": self.ci_config.get("broken-tests-packages", []),
"target": self.ci_config.get("target", "gitlab"),
}
jobs = self.ir["jobs"]
for name in self.named_jobs:
# Skip the special named jobs
if name not in ["any", "build"]:
jobs[name] = self.__init_job("")
def __init_job(self, release_spec):
"""Initialize job object"""
job_object = {"spec": release_spec, "attributes": {}}
if release_spec:
job_vars = job_object["attributes"].setdefault("variables", {})
job_vars["SPACK_JOB_SPEC_DAG_HASH"] = release_spec.dag_hash()
job_vars["SPACK_JOB_SPEC_PKG_NAME"] = release_spec.name
job_vars["SPACK_JOB_SPEC_PKG_VERSION"] = release_spec.format("{version}")
job_vars["SPACK_JOB_SPEC_COMPILER_NAME"] = release_spec.format("{compiler.name}")
job_vars["SPACK_JOB_SPEC_COMPILER_VERSION"] = release_spec.format("{compiler.version}")
job_vars["SPACK_JOB_SPEC_ARCH"] = release_spec.format("{architecture}")
job_vars["SPACK_JOB_SPEC_VARIANTS"] = release_spec.format("{variants}")
return job_object
def __is_named(self, section):
"""Check if a pipeline-gen configuration section is for a named job,
and if so return the name; otherwise return None.
"""
for _name in self.named_jobs:
keys = [f"{_name}-job", f"{_name}-job-remove"]
if any(key in section for key in keys):
return _name
return None
@staticmethod
def __job_name(name, suffix=""):
"""Compute the name of a named job with appropriate suffix.
Valid suffixes are either '-remove' or empty string or None
"""
assert isinstance(name, str)
if suffix:
jname = f"{name}-job{suffix}"
else:
jname = f"{name}-job"
return jname
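# e.g. __job_name("build") -> "build-job" and __job_name("build", "-remove") -> "build-job-remove"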
def __apply_submapping(self, dest, spec, section):
"""Apply submapping setion to the IR dict"""
matched = False
only_first = section.get("match_behavior", "first") == "first"
for match_attrs in reversed(section["submapping"]):
attrs = cfg.InternalConfigScope._process_dict_keyname_overrides(match_attrs)
for match_string in match_attrs["match"]:
if _spec_matches(spec, match_string):
matched = True
if "build-job-remove" in match_attrs:
spack.config.remove_yaml(dest, attrs["build-job-remove"])
if "build-job" in match_attrs:
spack.schema.merge_yaml(dest, attrs["build-job"])
break
if matched and only_first:
break
return dest
# Create jobs for all the pipeline specs
def init_pipeline_jobs(self, pipeline: PipelineDag):
for _, node in pipeline.traverse_nodes():
dag_hash = node.spec.dag_hash()
self.ir["jobs"][dag_hash] = self.__init_job(node.spec)
# Generate IR from the configs
def generate_ir(self):
"""Generate the IR from the Spack CI configurations."""
jobs = self.ir["jobs"]
# Implicit job defaults
defaults = [
{
"build-job": {
"script": [
"cd {env_dir}",
"spack env activate --without-view .",
"spack ci rebuild",
]
}
},
{"noop-job": {"script": ['echo "All specs already up to date, nothing to rebuild."']}},
]
# Job overrides
overrides = [
# Reindex script
{
"reindex-job": {
"script:": ["spack buildcache update-index --keys {index_target_mirror}"]
}
},
# Cleanup script
{
"cleanup-job": {
"script:": ["spack -d mirror destroy {mirror_prefix}/$CI_PIPELINE_ID"]
}
},
# Add signing job tags
{"signing-job": {"tags": ["aws", "protected", "notary"]}},
# Remove reserved tags
{"any-job-remove": {"tags": SPACK_RESERVED_TAGS}},
]
pipeline_gen = overrides + self.ci_config.get("pipeline-gen", []) + defaults
for section in reversed(pipeline_gen):
name = self.__is_named(section)
has_submapping = "submapping" in section
has_dynmapping = "dynamic-mapping" in section
section = cfg.InternalConfigScope._process_dict_keyname_overrides(section)
if name:
remove_job_name = self.__job_name(name, suffix="-remove")
merge_job_name = self.__job_name(name)
do_remove = remove_job_name in section
do_merge = merge_job_name in section
def _apply_section(dest, src):
if do_remove:
dest = spack.config.remove_yaml(dest, src[remove_job_name])
if do_merge:
dest = copy.copy(spack.schema.merge_yaml(dest, src[merge_job_name]))
if name == "build":
# Apply attributes to all build jobs
for _, job in jobs.items():
if job["spec"]:
_apply_section(job["attributes"], section)
elif name == "any":
# Apply section attributes to all jobs
for _, job in jobs.items():
_apply_section(job["attributes"], section)
else:
# Create a signing job if there is a script and the job hasn't
# been initialized yet
if name == "signing" and name not in jobs:
if "signing-job" in section:
if "script" not in section["signing-job"]:
continue
else:
jobs[name] = self.__init_job("")
# Apply attributes to named job
_apply_section(jobs[name]["attributes"], section)
elif has_submapping:
# Apply section jobs with specs to match
for _, job in jobs.items():
if job["spec"]:
job["attributes"] = self.__apply_submapping(
job["attributes"], job["spec"], section
)
elif has_dynmapping:
mapping = section["dynamic-mapping"]
dynmap_name = mapping.get("name")
# Check if this section should be skipped
dynmap_skip = os.environ.get("SPACK_CI_SKIP_DYNAMIC_MAPPING")
if dynmap_name and dynmap_skip:
if re.match(dynmap_skip, dynmap_name):
continue
# Get the endpoint
endpoint = mapping["endpoint"]
endpoint_url = urlparse(endpoint)
# Configure the request header
header = {"User-Agent": web_util.SPACK_USER_AGENT}
header.update(mapping.get("header", {}))
# Expand header environment variables, e.g. if tokens are passed
for key, value in header.items():
header[key] = os.path.expandvars(value)
verify_ssl = mapping.get("verify_ssl", spack.config.get("config:verify_ssl", True))
timeout = mapping.get("timeout", spack.config.get("config:connect_timeout", 1))
required = mapping.get("require", [])
allowed = mapping.get("allow", [])
ignored = mapping.get("ignore", [])
# required keys are implicitly allowed
allowed = sorted(set(allowed + required))
ignored = sorted(set(ignored))
required = sorted(set(required))
# Make sure required things are not also ignored
assert not any([ikey in required for ikey in ignored])
def job_query(job):
job_vars = job["attributes"]["variables"]
query = (
"{SPACK_JOB_SPEC_PKG_NAME}@{SPACK_JOB_SPEC_PKG_VERSION}"
# The preceding spaces are required (ref. https://github.com/spack/spack-gantry/blob/develop/docs/api.md#allocation)
" {SPACK_JOB_SPEC_VARIANTS}"
" arch={SPACK_JOB_SPEC_ARCH}"
"%{SPACK_JOB_SPEC_COMPILER_NAME}@{SPACK_JOB_SPEC_COMPILER_VERSION}"
).format_map(job_vars)
return f"spec={quote(query)}"
for job in jobs.values():
if not job["spec"]:
continue
# Create request for this job
query = job_query(job)
request = Request(
endpoint_url._replace(query=query).geturl(), headers=header, method="GET"
)
try:
response = _dyn_mapping_urlopener(
request, verify_ssl=verify_ssl, timeout=timeout
)
except Exception as e:
# For now just ignore any errors from dynamic mapping and continue
# This is still experimental, and failures should not stop CI
# from running normally
tty.warn(f"Failed to fetch dynamic mapping for query:\n\t{query}")
tty.warn(f"{e}")
continue
config = json.load(codecs.getreader("utf-8")(response))
# Strip ignore keys
if ignored:
for key in ignored:
if key in config:
config.pop(key)
# Only keep allowed keys
clean_config = {}
if allowed:
for key in allowed:
if key in config:
clean_config[key] = config[key]
else:
clean_config = config
# Verify all of the required keys are present
if required:
missing_keys = []
for key in required:
if key not in clean_config.keys():
missing_keys.append(key)
if missing_keys:
tty.warn(f"Response missing required keys: {missing_keys}")
if clean_config:
job["attributes"] = spack.schema.merge_yaml(
job.get("attributes", {}), clean_config
)
for _, job in jobs.items():
if job["spec"]:
job["spec"] = job["spec"].name
return self.ir
class SpackCIError(spack.error.SpackError):
def __init__(self, msg):
super().__init__(msg)


@@ -0,0 +1,36 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
# Holds all known formatters
"""Generators that support writing out pipelines for various CI platforms,
using a common pipeline graph definition.
"""
import spack.error
_generators = {}
def generator(name):
"""Decorator to register a pipeline generator method.
A generator method should take PipelineDag, SpackCIConfig, and
PipelineOptions arguments, and should produce a pipeline file.
"""
def _decorator(generate_method):
_generators[name] = generate_method
return generate_method
return _decorator
def get_generator(name):
try:
return _generators[name]
except KeyError:
raise UnknownGeneratorException(name)
class UnknownGeneratorException(spack.error.SpackError):
def __init__(self, generator_name):
super().__init__(f"No registered generator for {generator_name}")


@@ -0,0 +1,416 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import copy
import os
import shutil
from typing import List, Optional
import ruamel.yaml
import llnl.util.tty as tty
import spack
import spack.binary_distribution as bindist
import spack.config as cfg
import spack.mirrors.mirror
import spack.schema
import spack.spec
import spack.util.spack_yaml as syaml
from .common import (
SPACK_RESERVED_TAGS,
PipelineDag,
PipelineOptions,
PipelineType,
SpackCIConfig,
SpackCIError,
ensure_expected_target_path,
unpack_script,
update_env_scopes,
write_pipeline_manifest,
)
from .generator_registry import generator
# See https://docs.gitlab.com/ee/ci/yaml/#retry for descriptions of conditions
JOB_RETRY_CONDITIONS = [
# "always",
"unknown_failure",
"script_failure",
"api_failure",
"stuck_or_timeout_failure",
"runner_system_failure",
"runner_unsupported",
"stale_schedule",
# "job_execution_timeout",
"archived_failure",
"unmet_prerequisites",
"scheduler_failure",
"data_integrity_failure",
]
JOB_NAME_FORMAT = "{name}{@version} {/hash}"
def _remove_reserved_tags(tags):
"""Convenience function to strip reserved tags from jobs"""
return [tag for tag in tags if tag not in SPACK_RESERVED_TAGS]
def get_job_name(spec: spack.spec.Spec, build_group: Optional[str] = None) -> str:
"""Given a spec and possibly a build group, return the job name. If the
resulting name is longer than 255 characters, it will be truncated.
Arguments:
spec: Spec job will build
build_group: Name of build group this job belongs to (a CDash notion)
Returns: The job name
"""
job_name = spec.format(JOB_NAME_FORMAT)
if build_group:
job_name = f"{job_name} {build_group}"
return job_name[:255]
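# Illustrative name (hash and abbreviation depend on the spec format spec): a
# zlib build in group "nightly" might be named something like "zlib@1.3.1 /abcdef2 nightly".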
def maybe_generate_manifest(pipeline: PipelineDag, options: PipelineOptions, manifest_path):
# TODO: Consider including only hashes of rebuilt specs in the manifest,
# instead of full source and destination urls. Also, consider renaming
# the variable that controls whether or not to write the manifest from
# "SPACK_COPY_BUILDCACHE" to "SPACK_WRITE_PIPELINE_MANIFEST" or similar.
spack_buildcache_copy = os.environ.get("SPACK_COPY_BUILDCACHE", None)
if spack_buildcache_copy:
buildcache_copy_src_prefix = options.buildcache_destination.fetch_url
buildcache_copy_dest_prefix = spack_buildcache_copy
if options.pipeline_type == PipelineType.COPY_ONLY:
manifest_specs = [s for s in options.env.all_specs() if not s.external]
else:
manifest_specs = [n.spec for _, n in pipeline.traverse_nodes(direction="children")]
write_pipeline_manifest(
manifest_specs, buildcache_copy_src_prefix, buildcache_copy_dest_prefix, manifest_path
)
@generator("gitlab")
def generate_gitlab_yaml(pipeline: PipelineDag, spack_ci: SpackCIConfig, options: PipelineOptions):
"""Given a pipeline graph, job attributes, and pipeline options,
write a pipeline that can be consumed by GitLab to the given output file.
Arguments:
pipeline: An already pruned graph of jobs representing all the specs to build
spack_ci: An object containing the configured attributes of all jobs in the pipeline
options: An object containing all the pipeline options gathered from yaml, env, etc...
"""
ci_project_dir = os.environ.get("CI_PROJECT_DIR") or os.getcwd()
generate_job_name = os.environ.get("CI_JOB_NAME", "job-does-not-exist")
generate_pipeline_id = os.environ.get("CI_PIPELINE_ID", "pipeline-does-not-exist")
artifacts_root = options.artifacts_root
if artifacts_root.startswith(ci_project_dir):
artifacts_root = os.path.relpath(artifacts_root, ci_project_dir)
pipeline_artifacts_dir = os.path.join(ci_project_dir, artifacts_root)
output_file = options.output_file
if not output_file:
output_file = os.path.abspath(".gitlab-ci.yml")
else:
output_file_path = os.path.abspath(output_file)
gen_ci_dir = os.path.dirname(output_file_path)
if not os.path.exists(gen_ci_dir):
os.makedirs(gen_ci_dir)
spack_ci_ir = spack_ci.generate_ir()
concrete_env_dir = os.path.join(pipeline_artifacts_dir, "concrete_environment")
# Now that we've added the mirrors we know about, they should be properly
# reflected in the environment manifest file, so copy that into the
# concrete environment directory, along with the spack.lock file.
if not os.path.exists(concrete_env_dir):
os.makedirs(concrete_env_dir)
shutil.copyfile(options.env.manifest_path, os.path.join(concrete_env_dir, "spack.yaml"))
shutil.copyfile(options.env.lock_path, os.path.join(concrete_env_dir, "spack.lock"))
update_env_scopes(
options.env,
[
os.path.relpath(s.path, concrete_env_dir)
for s in cfg.scopes().values()
if not s.writable
and isinstance(s, (cfg.DirectoryConfigScope))
and os.path.exists(s.path)
],
os.path.join(concrete_env_dir, "spack.yaml"),
# Here transforming windows paths is only required in the special case
# of copy_only_pipelines, a unique scenario where the generate job and
# child pipelines are run on different platforms. To make this compatible
# w/ Windows, we cannot write Windows style path separators that will be
# consumed by the Posix copy job runner.
#
# TODO (johnwparent): Refactor config + cli read/write to deal only in
# posix style paths
transform_windows_paths=(options.pipeline_type == PipelineType.COPY_ONLY),
)
job_log_dir = os.path.join(pipeline_artifacts_dir, "logs")
job_repro_dir = os.path.join(pipeline_artifacts_dir, "reproduction")
job_test_dir = os.path.join(pipeline_artifacts_dir, "tests")
user_artifacts_dir = os.path.join(pipeline_artifacts_dir, "user_data")
# We communicate relative paths to the downstream jobs to avoid issues in
# situations where the CI_PROJECT_DIR varies between the pipeline
# generation job and the rebuild jobs. This can happen when gitlab
# checks out the project into a runner-specific directory, for example,
# and different runners are picked for generate and rebuild jobs.
rel_concrete_env_dir = os.path.relpath(concrete_env_dir, ci_project_dir)
rel_job_log_dir = os.path.relpath(job_log_dir, ci_project_dir)
rel_job_repro_dir = os.path.relpath(job_repro_dir, ci_project_dir)
rel_job_test_dir = os.path.relpath(job_test_dir, ci_project_dir)
rel_user_artifacts_dir = os.path.relpath(user_artifacts_dir, ci_project_dir)
def main_script_replacements(cmd):
return cmd.replace("{env_dir}", rel_concrete_env_dir)
output_object = {}
job_id = 0
stage_id = 0
stages: List[List] = []
stage_names = []
max_length_needs = 0
max_needs_job = ""
if options.pipeline_type != PipelineType.COPY_ONLY:
for level, node in pipeline.traverse_nodes(direction="parents"):
stage_id = level
if len(stages) == stage_id:
stages.append([])
stages[stage_id].append(node.spec)
stage_name = f"stage-{level}"
if stage_name not in stage_names:
stage_names.append(stage_name)
release_spec = node.spec
release_spec_dag_hash = release_spec.dag_hash()
job_object = spack_ci_ir["jobs"][release_spec_dag_hash]["attributes"]
if not job_object:
tty.warn(f"No match found for {release_spec}, skipping it")
continue
if options.pipeline_type is not None:
# For spack pipelines "public" and "protected" are reserved tags
job_object["tags"] = _remove_reserved_tags(job_object.get("tags", []))
if options.pipeline_type == PipelineType.PROTECTED_BRANCH:
job_object["tags"].extend(["protected"])
elif options.pipeline_type == PipelineType.PULL_REQUEST:
job_object["tags"].extend(["public"])
if "script" not in job_object:
raise AttributeError
job_object["script"] = unpack_script(job_object["script"], op=main_script_replacements)
if "before_script" in job_object:
job_object["before_script"] = unpack_script(job_object["before_script"])
if "after_script" in job_object:
job_object["after_script"] = unpack_script(job_object["after_script"])
build_group = options.cdash_handler.build_group if options.cdash_handler else None
job_name = get_job_name(release_spec, build_group)
dep_nodes = pipeline.get_dependencies(node)
job_object["needs"] = [
{"job": get_job_name(dep_node.spec, build_group), "artifacts": False}
for dep_node in dep_nodes
]
job_object["needs"].append(
{"job": generate_job_name, "pipeline": f"{generate_pipeline_id}"}
)
job_vars = job_object["variables"]
# Let downstream jobs know whether the spec needed rebuilding, regardless
# of whether DAG pruning was enabled or not.
already_built = bindist.get_mirrors_for_spec(spec=release_spec, index_only=True)
job_vars["SPACK_SPEC_NEEDS_REBUILD"] = "False" if already_built else "True"
if options.cdash_handler:
build_name = options.cdash_handler.build_name(release_spec)
job_vars["SPACK_CDASH_BUILD_NAME"] = build_name
build_stamp = options.cdash_handler.build_stamp
job_vars["SPACK_CDASH_BUILD_STAMP"] = build_stamp
job_object["artifacts"] = spack.schema.merge_yaml(
job_object.get("artifacts", {}),
{
"when": "always",
"paths": [
rel_job_log_dir,
rel_job_repro_dir,
rel_job_test_dir,
rel_user_artifacts_dir,
],
},
)
job_object["stage"] = stage_name
job_object["retry"] = {"max": 2, "when": JOB_RETRY_CONDITIONS}
job_object["interruptible"] = True
length_needs = len(job_object["needs"])
if length_needs > max_length_needs:
max_length_needs = length_needs
max_needs_job = job_name
output_object[job_name] = job_object
job_id += 1
tty.debug(f"{job_id} build jobs generated in {stage_id} stages")
if job_id > 0:
tty.debug(f"The max_needs_job is {max_needs_job}, with {max_length_needs} needs")
service_job_retries = {
"max": 2,
"when": ["runner_system_failure", "stuck_or_timeout_failure", "script_failure"],
}
# In some cases, pipeline generation should write a manifest. Currently
# the only purpose is to specify a list of sources and destinations for
# everything that should be copied.
distinguish_stack = options.stack_name if options.stack_name else "rebuilt"
manifest_path = os.path.join(
pipeline_artifacts_dir, "specs_to_copy", f"copy_{distinguish_stack}_specs.json"
)
maybe_generate_manifest(pipeline, options, manifest_path)
if options.pipeline_type == PipelineType.COPY_ONLY:
stage_names.append("copy")
sync_job = copy.deepcopy(spack_ci_ir["jobs"]["copy"]["attributes"])
sync_job["stage"] = "copy"
sync_job["needs"] = [{"job": generate_job_name, "pipeline": f"{generate_pipeline_id}"}]
if "variables" not in sync_job:
sync_job["variables"] = {}
sync_job["variables"][
"SPACK_COPY_ONLY_DESTINATION"
] = options.buildcache_destination.fetch_url
pipeline_mirrors = spack.mirrors.mirror.MirrorCollection(binary=True)
if "buildcache-source" not in pipeline_mirrors:
raise SpackCIError("Copy-only pipelines require a mirror named 'buildcache-source'")
buildcache_source = pipeline_mirrors["buildcache-source"].fetch_url
sync_job["variables"]["SPACK_BUILDCACHE_SOURCE"] = buildcache_source
sync_job["dependencies"] = []
output_object["copy"] = sync_job
job_id += 1
if job_id > 0:
if (
"script" in spack_ci_ir["jobs"]["signing"]["attributes"]
and options.pipeline_type == PipelineType.PROTECTED_BRANCH
):
# External signing: generate a job to check and sign binary pkgs
stage_names.append("stage-sign-pkgs")
signing_job = spack_ci_ir["jobs"]["signing"]["attributes"]
signing_job["script"] = unpack_script(signing_job["script"])
signing_job["stage"] = "stage-sign-pkgs"
signing_job["when"] = "always"
signing_job["retry"] = {"max": 2, "when": ["always"]}
signing_job["interruptible"] = True
if "variables" not in signing_job:
signing_job["variables"] = {}
signing_job["variables"][
"SPACK_BUILDCACHE_DESTINATION"
] = options.buildcache_destination.push_url
signing_job["dependencies"] = []
output_object["sign-pkgs"] = signing_job
if options.rebuild_index:
# Add a final job to regenerate the index
stage_names.append("stage-rebuild-index")
final_job = spack_ci_ir["jobs"]["reindex"]["attributes"]
final_job["stage"] = "stage-rebuild-index"
target_mirror = options.buildcache_destination.push_url
final_job["script"] = unpack_script(
final_job["script"],
op=lambda cmd: cmd.replace("{index_target_mirror}", target_mirror),
)
final_job["when"] = "always"
final_job["retry"] = service_job_retries
final_job["interruptible"] = True
final_job["dependencies"] = []
output_object["rebuild-index"] = final_job
output_object["stages"] = stage_names
# Capture the version of Spack used to generate the pipeline, so it can be
# passed to `git checkout` for version consistency. If we aren't in a Git
# repository, presume we are a Spack release and use the Git tag instead.
spack_version = spack.get_version()
version_to_clone = spack.get_spack_commit() or f"v{spack.spack_version}"
rebuild_everything = not options.prune_up_to_date and not options.prune_untouched
output_object["variables"] = {
"SPACK_ARTIFACTS_ROOT": artifacts_root,
"SPACK_CONCRETE_ENV_DIR": rel_concrete_env_dir,
"SPACK_VERSION": spack_version,
"SPACK_CHECKOUT_VERSION": version_to_clone,
"SPACK_JOB_LOG_DIR": rel_job_log_dir,
"SPACK_JOB_REPRO_DIR": rel_job_repro_dir,
"SPACK_JOB_TEST_DIR": rel_job_test_dir,
"SPACK_PIPELINE_TYPE": options.pipeline_type.name if options.pipeline_type else "None",
"SPACK_CI_STACK_NAME": os.environ.get("SPACK_CI_STACK_NAME", "None"),
"SPACK_REBUILD_CHECK_UP_TO_DATE": str(options.prune_up_to_date),
"SPACK_REBUILD_EVERYTHING": str(rebuild_everything),
"SPACK_REQUIRE_SIGNING": str(options.require_signing),
}
if options.stack_name:
output_object["variables"]["SPACK_CI_STACK_NAME"] = options.stack_name
output_vars = output_object["variables"]
for item, val in output_vars.items():
output_vars[item] = ensure_expected_target_path(val)
else:
# No jobs were generated
noop_job = spack_ci_ir["jobs"]["noop"]["attributes"]
# If this job fails, ignore the status and carry on
noop_job["retry"] = 0
noop_job["allow_failure"] = True
tty.debug("No specs to rebuild, generating no-op job")
output_object = {"no-specs-to-rebuild": noop_job}
# Ensure the child pipeline always runs
output_object["workflow"] = {"rules": [{"when": "always"}]}
sorted_output = {}
for output_key, output_value in sorted(output_object.items()):
sorted_output[output_key] = output_value
# Minimize yaml output size through use of anchors
syaml.anchorify(sorted_output)
with open(output_file, "w", encoding="utf-8") as f:
ruamel.yaml.YAML().dump(sorted_output, f)
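# A hedged illustration of the anchor minimization above (assuming
# syaml.anchorify replaces repeated, equal subtrees with YAML anchors):
# identical job bodies are serialized once and aliased elsewhere,
#
#   job-a: &id001 {retry: {max: 2}, interruptible: true}
#   job-b: *id001
#
# which can shrink a large generated pipeline file considerably.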


@@ -24,10 +24,10 @@
import spack.environment as ev
import spack.error
import spack.extensions
import spack.parser
import spack.paths
import spack.repo
import spack.spec
import spack.spec_parser
import spack.store
import spack.traverse as traverse
import spack.user_environment as uenv
@@ -163,12 +163,12 @@ def quote_kvp(string: str) -> str:
or ``name==``, and we assume the rest of the argument is the value. This covers the
common cases of passing flags, e.g., ``cflags="-O2 -g"`` on the command line.
"""
match = spack.parser.SPLIT_KVP.match(string)
match = spack.spec_parser.SPLIT_KVP.match(string)
if not match:
return string
key, delim, value = match.groups()
return f"{key}{delim}{spack.parser.quote_if_needed(value)}"
return f"{key}{delim}{spack.spec_parser.quote_if_needed(value)}"
def parse_specs(
@@ -180,7 +180,7 @@ def parse_specs(
args = [args] if isinstance(args, str) else args
arg_string = " ".join([quote_kvp(arg) for arg in args])
specs = spack.parser.parse(arg_string)
specs = spack.spec_parser.parse(arg_string)
if not concretize:
return specs
@@ -372,13 +372,8 @@ def iter_groups(specs, indent, all_headers):
index = index_by(specs, ("architecture", "compiler"))
ispace = indent * " "
def _key(item):
if item is None:
return ""
return str(item)
# Traverse the index and print out each package
for i, (architecture, compiler) in enumerate(sorted(index, key=_key)):
for i, (architecture, compiler) in enumerate(sorted(index)):
if i > 0:
print()
@@ -436,7 +431,6 @@ def display_specs(specs, args=None, **kwargs):
"""
# FIXME (compiler as nodes): remove the "show full compiler" arguments, and its use
def get_arg(name, default=None):
"""Prefer kwargs, then args, then default."""
if name in kwargs:
@@ -451,6 +445,7 @@ def get_arg(name, default=None):
hashes = get_arg("long", False)
namespaces = get_arg("namespaces", False)
flags = get_arg("show_flags", False)
full_compiler = get_arg("show_full_compiler", False)
variants = get_arg("variants", False)
groups = get_arg("groups", True)
all_headers = get_arg("all_headers", False)
@@ -472,7 +467,10 @@ def get_arg(name, default=None):
if format_string is None:
nfmt = "{fullname}" if namespaces else "{name}"
ffmt = ""
if flags:
if full_compiler or flags:
ffmt += "{%compiler.name}"
if full_compiler:
ffmt += "{@compiler.version}"
ffmt += " {compiler_flags}"
vfmt = "{variants}" if variants else ""
format_string = nfmt + "{@version}" + ffmt + vfmt
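# For illustration: with namespaces, flags, full_compiler, and variants all
# enabled, the pieces above compose into the format string
#
#   "{fullname}{@version}{%compiler.name}{@compiler.version} {compiler_flags}{variants}"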


@@ -419,7 +419,7 @@ def write_metadata(subdir, metadata):
metadata_rel_dir = os.path.join("metadata", subdir)
metadata_yaml = os.path.join(args.root_dir, metadata_rel_dir, "metadata.yaml")
llnl.util.filesystem.mkdirp(os.path.dirname(metadata_yaml))
with open(metadata_yaml, mode="w") as f:
with open(metadata_yaml, mode="w", encoding="utf-8") as f:
spack.util.spack_yaml.dump(metadata, stream=f)
return os.path.dirname(metadata_yaml), metadata_rel_dir
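# The encoding="utf-8" arguments added throughout these hunks pin the text
# encoding explicitly; without them, open() falls back to
# locale.getpreferredencoding(False), which varies across platforms and CI
# runners (e.g. cp1252 on some Windows hosts). A minimal check:
#
#   import locale
#   print(locale.getpreferredencoding(False))  # platform/locale dependent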


@@ -731,7 +731,7 @@ def manifest_copy(manifest_file_list, dest_mirror=None):
deduped_manifest = {}
for manifest_path in manifest_file_list:
with open(manifest_path) as fd:
with open(manifest_path, encoding="utf-8") as fd:
manifest = json.loads(fd.read())
for spec_hash, copy_list in manifest.items():
# Last duplicate hash wins


@@ -253,7 +253,7 @@ def add_versions_to_package(pkg: PackageBase, version_lines: str, is_batch: bool
if match:
new_versions.append((Version(match.group(1)), ver_line))
with open(filename, "r+") as f:
with open(filename, "r+", encoding="utf-8") as f:
contents = f.read()
split_contents = version_statement_re.split(contents)


@@ -6,7 +6,6 @@
import json
import os
import shutil
import warnings
from urllib.parse import urlparse, urlunparse
import llnl.util.filesystem as fs
@@ -17,6 +16,7 @@
import spack.ci as spack_ci
import spack.cmd
import spack.cmd.buildcache as buildcache
import spack.cmd.common.arguments
import spack.config as cfg
import spack.environment as ev
import spack.hash_types as ht
@@ -62,22 +62,8 @@ def setup_parser(subparser):
"path to the file where generated jobs file should be written. "
"default is .gitlab-ci.yml in the root of the repository",
)
generate.add_argument(
"--optimize",
action="store_true",
default=False,
help="(DEPRECATED) optimize the gitlab yaml file for size\n\n"
"run the generated document through a series of optimization passes "
"designed to reduce the size of the generated file",
)
generate.add_argument(
"--dependencies",
action="store_true",
default=False,
help="(DEPRECATED) disable DAG scheduling (use 'plain' dependencies)",
)
prune_group = generate.add_mutually_exclusive_group()
prune_group.add_argument(
prune_dag_group = generate.add_mutually_exclusive_group()
prune_dag_group.add_argument(
"--prune-dag",
action="store_true",
dest="prune_dag",
@@ -85,7 +71,7 @@ def setup_parser(subparser):
help="skip up-to-date specs\n\n"
"do not generate jobs for specs that are up-to-date on the mirror",
)
prune_group.add_argument(
prune_dag_group.add_argument(
"--no-prune-dag",
action="store_false",
dest="prune_dag",
@@ -93,6 +79,23 @@ def setup_parser(subparser):
help="process up-to-date specs\n\n"
"generate jobs for specs even when they are up-to-date on the mirror",
)
prune_ext_group = generate.add_mutually_exclusive_group()
prune_ext_group.add_argument(
"--prune-externals",
action="store_true",
dest="prune_externals",
default=True,
help="skip external specs\n\n"
"do not generate jobs for specs that are marked as external",
)
prune_ext_group.add_argument(
"--no-prune-externals",
action="store_false",
dest="prune_externals",
default=True,
help="process external specs\n\n"
"generate jobs for specs even when they are marked as external",
)
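# Illustrative invocations of the pruning controls above (hedged sketch):
#
#   spack ci generate                        # defaults: skip up-to-date and external specs
#   spack ci generate --no-prune-externals   # also generate jobs for external specs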
generate.add_argument(
"--check-index-only",
action="store_true",
@@ -108,14 +111,18 @@ def setup_parser(subparser):
)
generate.add_argument(
"--artifacts-root",
default=None,
default="jobs_scratch_dir",
help="path to the root of the artifacts directory\n\n"
"if provided, concrete environment files (spack.yaml, spack.lock) will be generated under "
"this directory. their location will be passed to generated child jobs through the "
"SPACK_CONCRETE_ENVIRONMENT_PATH variable",
"The spack ci module assumes it will normally be run from within your project "
"directory, wherever that is checked out to run your ci. The artifacts root directory "
"should specifiy a name that can safely be used for artifacts within your project "
"directory.",
)
generate.set_defaults(func=ci_generate)
spack.cmd.common.arguments.add_concretizer_args(generate)
spack.cmd.common.arguments.add_common_arguments(generate, ["jobs"])
# Rebuild the buildcache index associated with the mirror in the
# active, gitlab-enabled environment.
index = subparsers.add_parser(
@@ -145,6 +152,7 @@ def setup_parser(subparser):
help="stop stand-alone tests after the first failure",
)
rebuild.set_defaults(func=ci_rebuild)
spack.cmd.common.arguments.add_common_arguments(rebuild, ["jobs"])
# Facilitate reproduction of a failed CI build job
reproduce = subparsers.add_parser(
@@ -187,42 +195,8 @@ def ci_generate(args):
before invoking this command. the value must be the CDash authorization token needed to create
a build group and register all generated jobs under it
"""
if args.optimize:
warnings.warn(
"The --optimize option has been deprecated, and currently has no effect. "
"It will be removed in Spack v0.24."
)
if args.dependencies:
warnings.warn(
"The --dependencies option has been deprecated, and currently has no effect. "
"It will be removed in Spack v0.24."
)
env = spack.cmd.require_active_env(cmd_name="ci generate")
output_file = args.output_file
prune_dag = args.prune_dag
index_only = args.index_only
artifacts_root = args.artifacts_root
if not output_file:
output_file = os.path.abspath(".gitlab-ci.yml")
else:
output_file_path = os.path.abspath(output_file)
gen_ci_dir = os.path.dirname(output_file_path)
if not os.path.exists(gen_ci_dir):
os.makedirs(gen_ci_dir)
# Generate the jobs
spack_ci.generate_gitlab_ci_yaml(
env,
True,
output_file,
prune_dag=prune_dag,
check_index_only=index_only,
artifacts_root=artifacts_root,
)
spack_ci.generate_pipeline(env, args)
def ci_reindex(args):
@@ -387,7 +361,7 @@ def ci_rebuild(args):
# Write this job's spec json into the reproduction directory, and it will
# also be used in the generated "spack install" command to install the spec
tty.debug("job concrete spec path: {0}".format(job_spec_json_path))
with open(job_spec_json_path, "w") as fd:
with open(job_spec_json_path, "w", encoding="utf-8") as fd:
fd.write(job_spec.to_json(hash=ht.dag_hash))
# Write some other details to aid in reproduction into an artifact
@@ -397,7 +371,7 @@ def ci_rebuild(args):
"job_spec_json": job_spec_json_file,
"ci_project_dir": ci_project_dir,
}
with open(repro_file, "w") as fd:
with open(repro_file, "w", encoding="utf-8") as fd:
fd.write(json.dumps(repro_details))
# Write information about spack into an artifact in the repro dir
@@ -433,14 +407,19 @@ def ci_rebuild(args):
if not config["verify_ssl"]:
spack_cmd.append("-k")
install_args = [f'--use-buildcache={spack_ci.win_quote("package:never,dependencies:only")}']
install_args = [
f'--use-buildcache={spack_ci.common.win_quote("package:never,dependencies:only")}'
]
can_verify = spack_ci.can_verify_binaries()
verify_binaries = can_verify and spack_is_pr_pipeline is False
if not verify_binaries:
install_args.append("--no-check-signature")
slash_hash = spack_ci.win_quote("/" + job_spec.dag_hash())
if args.jobs:
install_args.append(f"-j{args.jobs}")
slash_hash = spack_ci.common.win_quote("/" + job_spec.dag_hash())
# Arguments when installing the root from sources
deps_install_args = install_args + ["--only=dependencies"]
@@ -605,7 +584,7 @@ def ci_rebuild(args):
rebuild_timer.stop()
try:
with open("install_timers.json", "w") as timelog:
with open("install_timers.json", "w", encoding="utf-8") as timelog:
extra_attributes = {"name": ".ci-rebuild"}
rebuild_timer.write_json(timelog, extra_attributes=extra_attributes)
except Exception as e:


@@ -743,7 +743,7 @@ def rst(args: Namespace, out: IO) -> None:
# extract cross-refs of the form `_cmd-spack-<cmd>:` from rst files
documented_commands: Set[str] = set()
for filename in args.rst_files:
with open(filename) as f:
with open(filename, encoding="utf-8") as f:
for line in f:
match = re.match(r"\.\. _cmd-(spack-.*):", line)
if match:
@@ -815,7 +815,7 @@ def prepend_header(args: Namespace, out: IO) -> None:
if not args.header:
return
with open(args.header) as header:
with open(args.header, encoding="utf-8") as header:
out.write(header.read())
@@ -836,7 +836,7 @@ def _commands(parser: ArgumentParser, args: Namespace) -> None:
if args.update:
tty.msg(f"Updating file: {args.update}")
with open(args.update, "w") as f:
with open(args.update, "w", encoding="utf-8") as f:
prepend_header(args, f)
formatter(args, f)


@@ -169,7 +169,7 @@ def installed_specs(args):
else:
packages = []
for file in args.specfiles:
with open(file, "r") as f:
with open(file, "r", encoding="utf-8") as f:
s = spack.spec.Spec.from_yaml(f)
packages.append(s.format())
return packages


@@ -5,14 +5,13 @@
import argparse
import sys
import warnings
import llnl.util.tty as tty
from llnl.util.lang import index_by
from llnl.util.tty.colify import colify
from llnl.util.tty.color import colorize
import spack.compilers.config
import spack.compilers
import spack.config
import spack.spec
from spack.cmd.common import arguments
@@ -36,13 +35,13 @@ def setup_parser(subparser):
"--mixed-toolchain",
action="store_true",
default=sys.platform == "darwin",
help="(DEPRECATED) Allow mixed toolchains (for example: clang, clang++, gfortran)",
help="Allow mixed toolchains (for example: clang, clang++, gfortran)",
)
mixed_toolchain_group.add_argument(
"--no-mixed-toolchain",
action="store_false",
dest="mixed_toolchain",
help="(DEPRECATED) Do not allow mixed toolchains (for example: clang, clang++, gfortran)",
help="Do not allow mixed toolchains (for example: clang, clang++, gfortran)",
)
find_parser.add_argument("add_paths", nargs=argparse.REMAINDER)
find_parser.add_argument(
@@ -81,97 +80,77 @@ def compiler_find(args):
"""Search either $PATH or a list of paths OR MODULES for compilers and
add them to Spack's configuration.
"""
if args.mixed_toolchain:
warnings.warn(
"The '--mixed-toolchain' option has been deprecated in Spack v0.23, and currently "
"has no effect. The option will be removed in Spack v0.25"
)
paths = args.add_paths or None
new_compilers = spack.compilers.config.find_compilers(
path_hints=paths, scope=args.scope, max_workers=args.jobs
new_compilers = spack.compilers.find_compilers(
path_hints=paths,
scope=args.scope,
mixed_toolchain=args.mixed_toolchain,
max_workers=args.jobs,
)
if new_compilers:
n = len(new_compilers)
s = "s" if n > 1 else ""
filename = spack.config.CONFIG.get_config_filename(args.scope, "packages")
filename = spack.config.CONFIG.get_config_filename(args.scope, "compilers")
tty.msg(f"Added {n:d} new compiler{s} to {filename}")
compiler_strs = sorted(f"{spec.name}@{spec.versions}" for spec in new_compilers)
compiler_strs = sorted(f"{c.spec.name}@{c.spec.version}" for c in new_compilers)
colify(reversed(compiler_strs), indent=4)
else:
tty.msg("Found no new compilers")
tty.msg("Compilers are defined in the following files:")
colify(spack.compilers.config.compiler_config_files(), indent=4)
colify(spack.compilers.compiler_config_files(), indent=4)
def compiler_remove(args):
remover = spack.compilers.config.CompilerRemover(spack.config.CONFIG)
candidates = remover.mark_compilers(match=args.compiler_spec, scope=args.scope)
if not candidates:
tty.die(f"No compiler matches '{args.compiler_spec}'")
compiler_spec = spack.spec.CompilerSpec(args.compiler_spec)
candidate_compilers = spack.compilers.compilers_for_spec(compiler_spec, scope=args.scope)
compiler_strs = reversed(sorted(f"{spec.name}@{spec.versions}" for spec in candidates))
if not candidate_compilers:
tty.die("No compilers match spec %s" % compiler_spec)
if not args.all and len(candidates) > 1:
tty.error(f"multiple compilers match the spec '{args.compiler_spec}':")
print()
colify(compiler_strs, indent=4)
print()
print(
"Either use a stricter spec to select only one, or use `spack compiler remove -a`"
" to remove all of them."
)
if not args.all and len(candidate_compilers) > 1:
tty.error(f"Multiple compilers match spec {compiler_spec}. Choose one:")
colify(reversed(sorted([c.spec.display_str for c in candidate_compilers])), indent=4)
tty.msg("Or, use `spack compiler remove -a` to remove all of them.")
sys.exit(1)
remover.flush()
tty.msg("The following compilers have been removed:")
print()
colify(compiler_strs, indent=4)
print()
for current_compiler in candidate_compilers:
spack.compilers.remove_compiler_from_config(current_compiler.spec, scope=args.scope)
tty.msg(f"{current_compiler.spec.display_str} has been removed")
def compiler_info(args):
"""Print info about all compilers matching a spec."""
query = spack.spec.Spec(args.compiler_spec)
all_compilers = spack.compilers.config.all_compilers(scope=args.scope, init_config=False)
compilers = [x for x in all_compilers if x.satisfies(query)]
cspec = spack.spec.CompilerSpec(args.compiler_spec)
compilers = spack.compilers.compilers_for_spec(cspec, scope=args.scope)
if not compilers:
tty.die(f"No compilers match spec {query.cformat()}")
tty.die("No compilers match spec %s" % cspec)
else:
for c in compilers:
print(f"{c.cformat()}:")
print(f" prefix: {c.external_path}")
extra_attributes = getattr(c, "extra_attributes", {})
if "compilers" in extra_attributes:
print(" compilers:")
for language, exe in extra_attributes.get("compilers", {}).items():
print(f" {language}: {exe}")
if "flags" in extra_attributes:
print(" flags:")
for flag, flag_value in extra_attributes["flags"].items():
print(f" {flag} = {flag_value}")
# FIXME (compiler as nodes): recover this printing
# if "environment" in extra_attributes:
# if len(c.environment.get("set", {})) != 0:
# print("\tenvironment:")
# print("\t set:")
# for key, value in c.environment["set"].items():
# print("\t %s = %s" % (key, value))
if "extra_rpaths" in extra_attributes:
print(" extra rpaths:")
for extra_rpath in extra_attributes["extra_rpaths"]:
print(f" {extra_rpath}")
if getattr(c, "external_modules", []):
print(" modules: ")
for module in c.external_modules:
print(f" {module}")
print()
print(c.spec.display_str + ":")
print("\tpaths:")
for cpath in ["cc", "cxx", "f77", "fc"]:
print("\t\t%s = %s" % (cpath, getattr(c, cpath, None)))
if c.flags:
print("\tflags:")
for flag, flag_value in c.flags.items():
print("\t\t%s = %s" % (flag, flag_value))
if len(c.environment) != 0:
if len(c.environment.get("set", {})) != 0:
print("\tenvironment:")
print("\t set:")
for key, value in c.environment["set"].items():
print("\t %s = %s" % (key, value))
if c.extra_rpaths:
print("\tExtra rpaths:")
for extra_rpath in c.extra_rpaths:
print("\t\t%s" % extra_rpath)
print("\tmodules = %s" % c.modules)
print("\toperating system = %s" % c.operating_system)
def compiler_list(args):
compilers = spack.compilers.config.all_compilers(scope=args.scope, init_config=False)
compilers = spack.compilers.all_compilers(scope=args.scope, init_config=False)
# If there are no compilers in any scope, and we're outputting to a tty, give a
# hint to the user.
@@ -184,7 +163,7 @@ def compiler_list(args):
tty.msg(msg)
return
index = index_by(compilers, spack.compilers.config.name_os_target)
index = index_by(compilers, lambda c: (c.spec.name, c.operating_system, c.target))
tty.msg("Available compilers")
@@ -203,10 +182,10 @@ def compiler_list(args):
name, os, target = key
os_str = os
if target:
os_str += f"-{target}"
cname = f"{spack.spec.COMPILER_COLOR}{{{name}}} {os_str}"
os_str += "-%s" % target
cname = "%s{%s} %s" % (spack.spec.COMPILER_COLOR, name, os_str)
tty.hline(colorize(cname), char="-")
colify(reversed(sorted(c.format("{name}@{version}") for c in compilers)))
colify(reversed(sorted(c.spec.display_str for c in compilers)))
def compiler(parser, args):


@@ -14,6 +14,7 @@
import spack.config
import spack.environment as ev
import spack.error
import spack.schema
import spack.schema.env
import spack.spec
import spack.store
@@ -518,6 +519,8 @@ def config_prefer_upstream(args):
for spec in pref_specs:
# Collect all the upstream compilers and versions for this package.
pkg = pkgs.get(spec.name, {"version": []})
all = pkgs.get("all", {"compiler": []})
pkgs["all"] = all
pkgs[spec.name] = pkg
# We have no existing variant if this is our first added version.
@@ -527,6 +530,10 @@ def config_prefer_upstream(args):
if version not in pkg["version"]:
pkg["version"].append(version)
compiler = str(spec.compiler)
if compiler not in all["compiler"]:
all["compiler"].append(compiler)
# Get and list all the variants that differ from the default.
variants = []
for var_name, variant in spec.variants.items():
@@ -560,7 +567,7 @@ def config_prefer_upstream(args):
# Simply write the config to the specified file.
existing = spack.config.get("packages", scope=scope)
new = spack.config.merge_yaml(existing, pkgs)
new = spack.schema.merge_yaml(existing, pkgs)
spack.config.set("packages", new, scope)
config_file = spack.config.CONFIG.get_config_filename(scope, section)


@@ -110,7 +110,7 @@ def write(self, pkg_path):
all_deps.append(self.dependencies)
# Write out a template for the file
with open(pkg_path, "w") as pkg_file:
with open(pkg_path, "w", encoding="utf-8") as pkg_file:
pkg_file.write(
package_template.format(
name=self.name,


@@ -76,7 +76,7 @@ def locate_package(name: str, repo: spack.repo.Repo) -> str:
path = repo.filename_for_package_name(name)
try:
with open(path, "r"):
with open(path, "r", encoding="utf-8"):
return path
except OSError as e:
if e.errno == errno.ENOENT:
@@ -93,7 +93,7 @@ def locate_file(name: str, path: str) -> str:
# Try to open direct match.
try:
with open(file_path, "r"):
with open(file_path, "r", encoding="utf-8"):
return file_path
except OSError as e:
if e.errno != errno.ENOENT:


@@ -865,7 +865,7 @@ def env_loads(args):
args.recurse_dependencies = False
loads_file = fs.join_path(env.path, "loads")
with open(loads_file, "w") as f:
with open(loads_file, "w", encoding="utf-8") as f:
specs = env._get_environment_specs(recurse_dependencies=recurse_dependencies)
spack.cmd.modules.loads(module_type, specs, args, f)
@@ -1053,7 +1053,7 @@ def env_depfile(args):
# Finally write to stdout/file.
if args.output:
with open(args.output, "w") as f:
with open(args.output, "w", encoding="utf-8") as f:
f.write(makefile)
else:
sys.stdout.write(makefile)


@@ -99,7 +99,7 @@ def setup_parser(subparser):
"--show-full-compiler",
action="store_true",
dest="show_full_compiler",
help="(DEPRECATED) show full compiler specs. Currently it's a no-op",
help="show full compiler specs",
)
implicit_explicit = subparser.add_mutually_exclusive_group()
implicit_explicit.add_argument(
@@ -279,6 +279,7 @@ def root_decorator(spec, string):
# these enforce details in the root specs to show what the user asked for
namespaces=True,
show_flags=True,
show_full_compiler=True,
decorator=root_decorator,
variants=True,
)
@@ -301,6 +302,7 @@ def root_decorator(spec, string):
decorator=lambda s, f: color.colorize("@*{%s}" % f),
namespace=True,
show_flags=True,
show_full_compiler=True,
variants=True,
)
print()


@@ -291,7 +291,7 @@ def _dump_log_on_error(e: InstallError):
tty.error("'spack install' created no log.")
else:
sys.stderr.write("Full build log:\n")
with open(e.pkg.log_path, errors="replace") as log:
with open(e.pkg.log_path, errors="replace", encoding="utf-8") as log:
shutil.copyfileobj(log, sys.stderr)
@@ -445,7 +445,7 @@ def concrete_specs_from_file(args):
"""Return the list of concrete specs read from files."""
result = []
for file in args.specfiles:
with open(file, "r") as f:
with open(file, "r", encoding="utf-8") as f:
if file.endswith("yaml") or file.endswith("yml"):
s = spack.spec.Spec.from_yaml(f)
else:


@@ -191,7 +191,7 @@ def verify(args):
for relpath in _licensed_files(args):
path = os.path.join(args.root, relpath)
with open(path) as f:
with open(path, encoding="utf-8") as f:
lines = [line for line in f][:license_lines]
error = _check_license(lines, path)


@@ -340,7 +340,7 @@ def list(parser, args):
return
tty.msg("Updating file: %s" % args.update)
with open(args.update, "w") as f:
with open(args.update, "w", encoding="utf-8") as f:
formatter(sorted_packages, f)
elif args.count:


@@ -31,7 +31,7 @@ def line_to_rtf(str):
return str.replace("\n", "\\par")
contents = ""
with open(file_path, "r+") as f:
with open(file_path, "r+", encoding="utf-8") as f:
for line in f.readlines():
contents += line_to_rtf(line)
return rtf_header.format(contents)
@@ -93,7 +93,7 @@ def make_installer(parser, args):
rtf_spack_license = txt_to_rtf(spack_license)
spack_license = posixpath.join(source_dir, "LICENSE.rtf")
with open(spack_license, "w") as rtf_license:
with open(spack_license, "w", encoding="utf-8") as rtf_license:
written = rtf_license.write(rtf_spack_license)
if written == 0:
raise RuntimeError("Failed to generate properly formatted license file")


@@ -468,7 +468,7 @@ def specs_from_text_file(filename, concretize=False):
concretize (bool): if True concretize the specs before returning
the list.
"""
with open(filename, "r") as f:
with open(filename, "r", encoding="utf-8") as f:
specs_in_file = f.readlines()
specs_in_file = [s.strip() for s in specs_in_file]
return spack.cmd.parse_specs(" ".join(specs_in_file), concretize=concretize)


@@ -150,7 +150,7 @@ def pkg_source(args):
content = ph.canonical_source(spec)
else:
message = "Source for %s:" % filename
with open(filename) as f:
with open(filename, encoding="utf-8") as f:
content = f.read()
if sys.stdout.isatty():


@@ -94,7 +94,7 @@ def ipython_interpreter(args):
if "PYTHONSTARTUP" in os.environ:
startup_file = os.environ["PYTHONSTARTUP"]
if os.path.isfile(startup_file):
with open(startup_file) as startup:
with open(startup_file, encoding="utf-8") as startup:
exec(startup.read())
# IPython can also support running a script OR command, not both
@@ -126,7 +126,7 @@ def python_interpreter(args):
if "PYTHONSTARTUP" in os.environ:
startup_file = os.environ["PYTHONSTARTUP"]
if os.path.isfile(startup_file):
with open(startup_file) as startup:
with open(startup_file, encoding="utf-8") as startup:
console.runsource(startup.read(), startup_file, "exec")
if args.python_command:
propagate_exceptions_from(console)


@@ -19,11 +19,48 @@
level = "long"
class StageFilter:
"""
Encapsulation of reasons to skip staging
"""
def __init__(self, exclusions, skip_installed):
"""
:param exclusions: A list of specs to skip if satisfied.
:param skip_installed: A boolean indicating whether to skip already installed specs.
"""
self.exclusions = exclusions
self.skip_installed = skip_installed
def __call__(self, spec):
"""filter action, true means spec should be filtered"""
if spec.external:
return True
if self.skip_installed and spec.installed:
return True
if any(spec.satisfies(exclude) for exclude in self.exclusions):
return True
return False
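# A hedged usage sketch, mirroring how stage() below applies the filter:
#
#   filter = StageFilter(exclusion_specs, args.skip_installed)
#   to_stage = [s for s in specs if not filter(s)]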
def setup_parser(subparser):
arguments.add_common_arguments(subparser, ["no_checksum", "specs"])
subparser.add_argument(
"-p", "--path", dest="path", help="path to stage package, does not add to spack tree"
)
subparser.add_argument(
"-e",
"--exclude",
action="append",
default=[],
help="exclude packages that satisfy the specified specs",
)
subparser.add_argument(
"-s", "--skip-installed", action="store_true", help="dont restage already installed specs"
)
arguments.add_concretizer_args(subparser)
@@ -31,11 +68,14 @@ def stage(parser, args):
if args.no_checksum:
spack.config.set("config:checksum", False, scope="command_line")
exclusion_specs = spack.cmd.parse_specs(args.exclude, concretize=False)
filter = StageFilter(exclusion_specs, args.skip_installed)
if not args.specs:
env = ev.active_environment()
if not env:
tty.die("`spack stage` requires a spec or an active environment")
return _stage_env(env)
return _stage_env(env, filter)
specs = spack.cmd.parse_specs(args.specs, concretize=False)
@@ -49,6 +89,11 @@ def stage(parser, args):
specs = spack.cmd.matching_specs_from_env(specs)
for spec in specs:
spec = spack.cmd.matching_spec_from_env(spec)
if filter(spec):
continue
pkg = spec.package
if custom_path:
@@ -57,9 +102,13 @@ def stage(parser, args):
_stage(pkg)
def _stage_env(env: ev.Environment):
def _stage_env(env: ev.Environment, filter):
tty.msg(f"Staging specs from environment {env.name}")
for spec in spack.traverse.traverse_nodes(env.concrete_roots()):
if filter(spec):
continue
_stage(spec.package)


@@ -415,8 +415,8 @@ def _run_import_check(
pretty_path = file if root_relative else cwd_relative(file, root, working_dir)
try:
with open(file, "r") as f:
contents = open(file, "r").read()
with open(file, "r", encoding="utf-8") as f:
contents = f.read()
parsed = ast.parse(contents)
except Exception:
exit_code = 1
@@ -448,7 +448,7 @@ def _run_import_check(
if not fix or not to_add and not to_remove:
continue
with open(file, "r") as f:
with open(file, "r", encoding="utf-8") as f:
lines = f.readlines()
if to_add:
@@ -468,7 +468,7 @@ def _run_import_check(
for statement in to_remove:
new_contents = new_contents.replace(f"{statement}\n", "")
with open(file, "w") as f:
with open(file, "w", encoding="utf-8") as f:
f.write(new_contents)
return exit_code


@@ -346,7 +346,7 @@ def _report_suite_results(test_suite, args, constraints):
tty.msg("{0} for test suite '{1}'{2}:".format(results_desc, test_suite.name, matching))
results = {}
with open(test_suite.results_file, "r") as f:
with open(test_suite.results_file, "r", encoding="utf-8") as f:
for line in f:
pkg_id, status = line.split()
results[pkg_id] = status
@@ -371,7 +371,7 @@ def _report_suite_results(test_suite, args, constraints):
spec = test_specs[pkg_id]
log_file = test_suite.log_file_for_spec(spec)
if os.path.isfile(log_file):
with open(log_file, "r") as f:
with open(log_file, "r", encoding="utf-8") as f:
msg += "\n{0}".format("".join(f.readlines()))
tty.msg(msg)


@@ -217,7 +217,7 @@ def unit_test(parser, args, unknown_args):
# Ensure clingo is available before switching to the
# mock configuration used by unit tests
with spack.bootstrap.ensure_bootstrap_configuration():
spack.bootstrap.ensure_clingo_importable_or_raise()
spack.bootstrap.ensure_core_dependencies()
if pytest is None:
spack.bootstrap.ensure_environment_dependencies()
import pytest


@@ -192,7 +192,7 @@ def view(parser, args):
if args.action in actions_link and args.projection_file:
# argparse confirms file exists
with open(args.projection_file, "r") as f:
with open(args.projection_file, "r", encoding="utf-8") as f:
projections_data = s_yaml.load(f)
validate(projections_data, spack.schema.projections.schema)
ordered_projections = projections_data["projections"]

lib/spack/spack/compiler.py

@@ -0,0 +1,850 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import contextlib
import hashlib
import itertools
import json
import os
import platform
import re
import shutil
import sys
import tempfile
from typing import Dict, List, Optional, Sequence
import llnl.path
import llnl.util.lang
import llnl.util.tty as tty
from llnl.util.filesystem import path_contains_subdirectory, paths_containing_libs
import spack.caches
import spack.error
import spack.schema.environment
import spack.spec
import spack.util.executable
import spack.util.libc
import spack.util.module_cmd
import spack.version
from spack.util.environment import filter_system_paths
from spack.util.file_cache import FileCache
__all__ = ["Compiler"]
PATH_INSTANCE_VARS = ["cc", "cxx", "f77", "fc"]
FLAG_INSTANCE_VARS = ["cflags", "cppflags", "cxxflags", "fflags"]
@llnl.util.lang.memoized
def _get_compiler_version_output(compiler_path, version_arg, ignore_errors=()) -> str:
"""Invokes the compiler at a given path passing a single
version argument and returns the output.
Args:
compiler_path (path): path of the compiler to be invoked
version_arg (str): the argument used to extract version information
"""
compiler = spack.util.executable.Executable(compiler_path)
compiler_invocation_args = {
"output": str,
"error": str,
"ignore_errors": ignore_errors,
"timeout": 120,
"fail_on_error": True,
}
if version_arg:
output = compiler(version_arg, **compiler_invocation_args)
else:
output = compiler(**compiler_invocation_args)
return output
def get_compiler_version_output(compiler_path, *args, **kwargs) -> str:
"""Wrapper for _get_compiler_version_output()."""
# This ensures that we memoize compiler output by *absolute path*,
# not just executable name. If we don't do this, and the path changes
# (e.g., during testing), we can get incorrect results.
if not os.path.isabs(compiler_path):
compiler_path = spack.util.executable.which_string(compiler_path, required=True)
return _get_compiler_version_output(compiler_path, *args, **kwargs)
def tokenize_flags(flags_values, propagate=False):
"""Given a compiler flag specification as a string, this returns a list
where the entries are the flags. For compiler options which set values
using the syntax "-flag value", this function groups flags and their
values together. Any token that does not start with "-" is treated as
part of the value of the preceding flag."""
tokens = flags_values.split()
if not tokens:
return []
flag = tokens[0]
flags_with_propagation = []
for token in tokens[1:]:
if not token.startswith("-"):
flag += " " + token
else:
flags_with_propagation.append((flag, propagate))
flag = token
flags_with_propagation.append((flag, propagate))
return flags_with_propagation
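# For illustration, the grouping rule above yields:
#
#   >>> tokenize_flags("-Wl,-rpath /opt/lib -O2")
#   [('-Wl,-rpath /opt/lib', False), ('-O2', False)]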
#: regex for parsing linker lines
_LINKER_LINE = re.compile(r"^( *|.*[/\\])" r"(link|ld|([^/\\]+-)?ld|collect2)" r"[^/\\]*( |$)")
#: components of linker lines to ignore
_LINKER_LINE_IGNORE = re.compile(r"(collect2 version|^[A-Za-z0-9_]+=|/ldfe )")
#: regex to match linker search paths
_LINK_DIR_ARG = re.compile(r"^-L(.:)?(?P<dir>[/\\].*)")
#: regex to match linker library path arguments
_LIBPATH_ARG = re.compile(r"^[-/](LIBPATH|libpath):(?P<dir>.*)")
def _parse_link_paths(string):
"""Parse implicit link paths from compiler debug output.
This gives the compiler runtime library paths that we need to add to
the RPATH of generated binaries and libraries. It allows us to
ensure, e.g., that codes load the right libstdc++ for their compiler.
"""
lib_search_paths = False
raw_link_dirs = []
for line in string.splitlines():
if lib_search_paths:
if line.startswith("\t"):
raw_link_dirs.append(line[1:])
continue
else:
lib_search_paths = False
elif line.startswith("Library search paths:"):
lib_search_paths = True
if not _LINKER_LINE.match(line):
continue
if _LINKER_LINE_IGNORE.match(line):
continue
tty.debug(f"implicit link dirs: link line: {line}")
next_arg = False
for arg in line.split():
if arg in ("-L", "-Y"):
next_arg = True
continue
if next_arg:
raw_link_dirs.append(arg)
next_arg = False
continue
link_dir_arg = _LINK_DIR_ARG.match(arg)
if link_dir_arg:
link_dir = link_dir_arg.group("dir")
raw_link_dirs.append(link_dir)
link_dir_arg = _LIBPATH_ARG.match(arg)
if link_dir_arg:
link_dir = link_dir_arg.group("dir")
raw_link_dirs.append(link_dir)
implicit_link_dirs = list()
visited = set()
for link_dir in raw_link_dirs:
normalized_path = os.path.abspath(link_dir)
if normalized_path not in visited:
implicit_link_dirs.append(normalized_path)
visited.add(normalized_path)
tty.debug(f"implicit link dirs: result: {', '.join(implicit_link_dirs)}")
return implicit_link_dirs
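# A hedged example: for a single matching linker line, -L arguments are
# collected, absolutized, and deduplicated in order:
#
#   >>> _parse_link_paths("/usr/libexec/gcc/x86_64/collect2 -L/opt/gcc/lib64 -L/usr/lib main.o")
#   ['/opt/gcc/lib64', '/usr/lib']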
@llnl.path.system_path_filter
def _parse_non_system_link_dirs(string: str) -> List[str]:
"""Parses link paths out of compiler debug output.
Args:
string: compiler debug output as a string
Returns:
Implicit link paths parsed from the compiler output
"""
link_dirs = _parse_link_paths(string)
# Remove directories that do not exist. Some versions of the Cray compiler
# report nonexistent directories
link_dirs = [d for d in link_dirs if os.path.isdir(d)]
# Return set of directories containing needed compiler libs, minus
# system paths. Note that 'filter_system_paths' only checks for an
# exact match, while 'in_system_subdirectory' checks if a path contains
# a system directory as a subdirectory
link_dirs = filter_system_paths(link_dirs)
return list(p for p in link_dirs if not in_system_subdirectory(p))
def in_system_subdirectory(path):
system_dirs = [
"/lib/",
"/lib64/",
"/usr/lib/",
"/usr/lib64/",
"/usr/local/lib/",
"/usr/local/lib64/",
]
return any(path_contains_subdirectory(path, x) for x in system_dirs)
class Compiler:
"""This class encapsulates a Spack "compiler", which includes C,
C++, and Fortran compilers. Subclasses should implement
support for specific compilers, their possible names, arguments,
and how to identify the particular type of compiler."""
# Optional prefix regexes for searching for this type of compiler.
# Prefixes are sometimes used for toolchains
prefixes: List[str] = []
# Optional suffix regexes for searching for this type of compiler.
# Suffixes are used by some frameworks, e.g. macports uses an '-mp-X.Y'
# version suffix for gcc.
suffixes = [r"-.*"]
#: Compiler argument that produces version information
version_argument = "-dumpversion"
#: Return values to ignore when invoking the compiler to get its version
ignore_version_errors: Sequence[int] = ()
#: Regex used to extract version from compiler's output
version_regex = "(.*)"
# These libraries are anticipated to be required by all executables built
# by any compiler
_all_compiler_rpath_libraries = ["libc", "libc++", "libstdc++"]
#: Platform matcher for Platform objects supported by compiler
is_supported_on_platform = lambda x: True
# Default flags used by a compiler to set an rpath
@property
def cc_rpath_arg(self):
return "-Wl,-rpath,"
@property
def cxx_rpath_arg(self):
return "-Wl,-rpath,"
@property
def f77_rpath_arg(self):
return "-Wl,-rpath,"
@property
def fc_rpath_arg(self):
return "-Wl,-rpath,"
@property
def linker_arg(self):
"""Flag that need to be used to pass an argument to the linker."""
return "-Wl,"
@property
def disable_new_dtags(self):
if platform.system() == "Darwin":
return ""
return "--disable-new-dtags"
@property
def enable_new_dtags(self):
if platform.system() == "Darwin":
return ""
return "--enable-new-dtags"
@property
def debug_flags(self):
return ["-g"]
@property
def opt_flags(self):
return ["-O", "-O0", "-O1", "-O2", "-O3"]
def __init__(
self,
cspec,
operating_system,
target,
paths,
modules: Optional[List[str]] = None,
alias=None,
environment=None,
extra_rpaths=None,
enable_implicit_rpaths=None,
**kwargs,
):
self.spec = cspec
self.operating_system = str(operating_system)
self.target = target
self.modules = modules or []
self.alias = alias
self.environment = environment or {}
self.extra_rpaths = extra_rpaths or []
self.enable_implicit_rpaths = enable_implicit_rpaths
self.cache = COMPILER_CACHE
self.cc = paths[0]
self.cxx = paths[1]
self.f77 = None
self.fc = None
if len(paths) > 2:
self.f77 = paths[2]
if len(paths) == 3:
self.fc = self.f77
else:
self.fc = paths[3]
# Unfortunately have to make sure these params are accepted
# in the same order they are returned by sorted(flags)
# in compilers/__init__.py
self.flags = spack.spec.FlagMap(self.spec)
for flag in self.flags.valid_compiler_flags():
value = kwargs.get(flag, None)
if value is not None:
values_with_propagation = tokenize_flags(value, False)
for value, propagation in values_with_propagation:
self.flags.add_flag(flag, value, propagation)
# caching value for compiler reported version
# used for version checks for API, e.g. C++11 flag
self._real_version = None
def __eq__(self, other):
return (
self.cc == other.cc
and self.cxx == other.cxx
and self.fc == other.fc
and self.f77 == other.f77
and self.spec == other.spec
and self.operating_system == other.operating_system
and self.target == other.target
and self.flags == other.flags
and self.modules == other.modules
and self.environment == other.environment
and self.extra_rpaths == other.extra_rpaths
and self.enable_implicit_rpaths == other.enable_implicit_rpaths
)
def __hash__(self):
return hash(
(
self.cc,
self.cxx,
self.fc,
self.f77,
self.spec,
self.operating_system,
self.target,
str(self.flags),
str(self.modules),
str(self.environment),
str(self.extra_rpaths),
self.enable_implicit_rpaths,
)
)
def verify_executables(self):
"""Raise an error if any of the compiler executables is not valid.
This method confirms that for all of the compilers (cc, cxx, f77, fc)
that have paths, those paths exist and are executable by the current
user.
Raises a CompilerAccessError if any of the non-null paths for the
compiler are not accessible.
"""
def accessible_exe(exe):
# compilers may contain executable names (on Cray or user edited)
if not os.path.isabs(exe):
exe = spack.util.executable.which_string(exe)
if not exe:
return False
return os.path.isfile(exe) and os.access(exe, os.X_OK)
# setup environment before verifying in case we have executable names
# instead of absolute paths
with self.compiler_environment():
missing = [
cmp
for cmp in (self.cc, self.cxx, self.f77, self.fc)
if cmp and not accessible_exe(cmp)
]
if missing:
raise CompilerAccessError(self, missing)
@property
def version(self):
return self.spec.version
@property
def real_version(self):
"""Executable reported compiler version used for API-determinations
E.g. C++11 flag checks.
"""
real_version_str = self.cache.get(self).real_version
if not real_version_str or real_version_str == "unknown":
return self.version
return spack.version.StandardVersion.from_string(real_version_str)
def implicit_rpaths(self) -> List[str]:
if self.enable_implicit_rpaths is False:
return []
output = self.compiler_verbose_output
if not output:
return []
link_dirs = _parse_non_system_link_dirs(output)
all_required_libs = list(self.required_libs) + Compiler._all_compiler_rpath_libraries
return list(paths_containing_libs(link_dirs, all_required_libs))
@property
def default_dynamic_linker(self) -> Optional[str]:
"""Determine default dynamic linker from compiler link line"""
output = self.compiler_verbose_output
if not output:
return None
return spack.util.libc.parse_dynamic_linker(output)
@property
def default_libc(self) -> Optional["spack.spec.Spec"]:
"""Determine libc targeted by the compiler from link line"""
# technically this should be testing the target platform of the compiler, but we don't have
# that, so stick to host platform for now.
if sys.platform in ("darwin", "win32"):
return None
dynamic_linker = self.default_dynamic_linker
if not dynamic_linker:
return None
return spack.util.libc.libc_from_dynamic_linker(dynamic_linker)
@property
def required_libs(self):
"""For executables created with this compiler, the compiler libraries
that would be generally required to run it.
"""
# By default every compiler returns the empty list
return []
@property
def compiler_verbose_output(self) -> Optional[str]:
"""Verbose output from compiling a dummy C source file. Output is cached."""
return self.cache.get(self).c_compiler_output
def _compile_dummy_c_source(self) -> Optional[str]:
if self.cc:
cc = self.cc
ext = "c"
else:
cc = self.cxx
ext = "cc"
if not cc or not self.verbose_flag:
return None
try:
tmpdir = tempfile.mkdtemp(prefix="spack-implicit-link-info")
fout = os.path.join(tmpdir, "output")
fin = os.path.join(tmpdir, f"main.{ext}")
with open(fin, "w", encoding="utf-8") as csource:
csource.write(
"int main(int argc, char* argv[]) { (void)argc; (void)argv; return 0; }\n"
)
cc_exe = spack.util.executable.Executable(cc)
for flag_type in ["cflags" if cc == self.cc else "cxxflags", "cppflags", "ldflags"]:
cc_exe.add_default_arg(*self.flags.get(flag_type, []))
with self.compiler_environment():
return cc_exe(self.verbose_flag, fin, "-o", fout, output=str, error=str)
except spack.util.executable.ProcessError as pe:
tty.debug("ProcessError: Command exited with non-zero status: " + pe.long_message)
return None
finally:
shutil.rmtree(tmpdir, ignore_errors=True)
@property
def verbose_flag(self) -> Optional[str]:
"""
This property should be overridden in the compiler subclass if a
verbose flag is available.
If it is not overridden, it is assumed to not be supported.
"""
# This property should be overridden in the compiler subclass if
# OpenMP is supported by that compiler
@property
def openmp_flag(self):
# If it is not overridden, assume it is not supported and warn the user
raise UnsupportedCompilerFlag(self, "OpenMP", "openmp_flag")
# This property should be overridden in the compiler subclass if
# C++98 is not the default standard for that compiler
@property
def cxx98_flag(self):
return ""
# This property should be overridden in the compiler subclass if
# C++11 is supported by that compiler
@property
def cxx11_flag(self):
# If it is not overridden, assume it is not supported and warn the user
raise UnsupportedCompilerFlag(self, "the C++11 standard", "cxx11_flag")
# This property should be overridden in the compiler subclass if
# C++14 is supported by that compiler
@property
def cxx14_flag(self):
# If it is not overridden, assume it is not supported and warn the user
raise UnsupportedCompilerFlag(self, "the C++14 standard", "cxx14_flag")
# This property should be overridden in the compiler subclass if
# C++17 is supported by that compiler
@property
def cxx17_flag(self):
# If it is not overridden, assume it is not supported and warn the user
raise UnsupportedCompilerFlag(self, "the C++17 standard", "cxx17_flag")
# This property should be overridden in the compiler subclass if
# C99 is supported by that compiler
@property
def c99_flag(self):
# If it is not overridden, assume it is not supported and warn the user
raise UnsupportedCompilerFlag(self, "the C99 standard", "c99_flag")
# This property should be overridden in the compiler subclass if
# C11 is supported by that compiler
@property
def c11_flag(self):
# If it is not overridden, assume it is not supported and warn the user
raise UnsupportedCompilerFlag(self, "the C11 standard", "c11_flag")
@property
def cc_pic_flag(self):
"""Returns the flag used by the C compiler to produce
Position Independent Code (PIC)."""
return "-fPIC"
@property
def cxx_pic_flag(self):
"""Returns the flag used by the C++ compiler to produce
Position Independent Code (PIC)."""
return "-fPIC"
@property
def f77_pic_flag(self):
"""Returns the flag used by the F77 compiler to produce
Position Independent Code (PIC)."""
return "-fPIC"
@property
def fc_pic_flag(self):
"""Returns the flag used by the FC compiler to produce
Position Independent Code (PIC)."""
return "-fPIC"
# Note: This is not a class method. The class methods are used to detect
# compilers on PATH based systems, and do not set up the run environment of
# the compiler. This method can be called on `module` based systems as well
def get_real_version(self) -> str:
"""Query the compiler for its version.
This is the "real" compiler version, regardless of what is in the
compilers.yaml file, which the user can change to name their compiler.
Use the runtime environment of the compiler (modules and environment
modifications) to enable the compiler to run properly on any platform.
"""
cc = spack.util.executable.Executable(self.cc)
try:
with self.compiler_environment():
output = cc(
self.version_argument,
output=str,
error=str,
ignore_errors=tuple(self.ignore_version_errors),
)
return self.extract_version_from_output(output)
except spack.util.executable.ProcessError:
return "unknown"
@property
def prefix(self):
"""Query the compiler for its install prefix. This is the install
path as reported by the compiler. Note that paths for cc, cxx, etc
are not enough to find the install prefix of the compiler, since
they can be symlinks, wrappers, or filenames instead of absolute paths."""
raise NotImplementedError("prefix is not implemented for this compiler")
#
# Compiler classes have methods for querying the version of
# specific compiler executables. This is used when discovering compilers.
#
# Compiler *instances* are just data objects, and can only be
# constructed from an actual set of executables.
#
@classmethod
def default_version(cls, cc):
"""Override just this to override all compiler version functions."""
output = get_compiler_version_output(
cc, cls.version_argument, tuple(cls.ignore_version_errors)
)
return cls.extract_version_from_output(output)
@classmethod
@llnl.util.lang.memoized
def extract_version_from_output(cls, output: str) -> str:
"""Extracts the version from compiler's output."""
match = re.search(cls.version_regex, output)
return match.group(1) if match else "unknown"
@classmethod
def cc_version(cls, cc):
return cls.default_version(cc)
@classmethod
def search_regexps(cls, language):
# Compile all the regular expressions used for files beforehand.
# This searches for any combination of <prefix><name><suffix>
# defined for the compiler
compiler_names = getattr(cls, "{0}_names".format(language))
prefixes = [""] + cls.prefixes
suffixes = [""]
if sys.platform == "win32":
ext = r"\.(?:exe|bat)"
cls_suf = [suf + ext for suf in cls.suffixes]
ext_suf = [ext]
suffixes = suffixes + cls.suffixes + cls_suf + ext_suf
else:
suffixes = suffixes + cls.suffixes
regexp_fmt = r"^({0}){1}({2})$"
return [
re.compile(regexp_fmt.format(prefix, re.escape(name), suffix))
for prefix, name, suffix in itertools.product(prefixes, compiler_names, suffixes)
]
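# Illustration (hedged): for a language with compiler names ["gcc"], no
# prefixes, and the default suffixes, the compiled patterns are
#
#   ^()gcc()$     matching "gcc"
#   ^()gcc(-.*)$  matching e.g. "gcc-12" or the macports-style "gcc-mp-13"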
def setup_custom_environment(self, pkg, env):
"""Set any environment variables necessary to use the compiler."""
pass
def __repr__(self):
"""Return a string representation of the compiler toolchain."""
return self.__str__()
def __str__(self):
"""Return a string representation of the compiler toolchain."""
return "%s(%s)" % (
self.name,
"\n ".join(
(
str(s)
for s in (
self.cc,
self.cxx,
self.f77,
self.fc,
self.modules,
str(self.operating_system),
)
)
),
)
@contextlib.contextmanager
def compiler_environment(self):
# Avoid modifying os.environ if possible.
if not self.modules and not self.environment:
yield
return
# store environment to replace later
backup_env = os.environ.copy()
try:
# load modules and set env variables
for module in self.modules:
spack.util.module_cmd.load_module(module)
# apply other compiler environment changes
spack.schema.environment.parse(self.environment).apply_modifications()
yield
finally:
# Restore environment regardless of whether inner code succeeded
os.environ.clear()
os.environ.update(backup_env)
def to_dict(self):
flags_dict = {fname: " ".join(fvals) for fname, fvals in self.flags.items()}
flags_dict.update(
{attr: getattr(self, attr, None) for attr in FLAG_INSTANCE_VARS if hasattr(self, attr)}
)
result = {
"spec": str(self.spec),
"paths": {attr: getattr(self, attr, None) for attr in PATH_INSTANCE_VARS},
"flags": flags_dict,
"operating_system": str(self.operating_system),
"target": str(self.target),
"modules": self.modules or [],
"environment": self.environment or {},
"extra_rpaths": self.extra_rpaths or [],
}
if self.enable_implicit_rpaths is not None:
result["implicit_rpaths"] = self.enable_implicit_rpaths
if self.alias:
result["alias"] = self.alias
return result
class CompilerAccessError(spack.error.SpackError):
def __init__(self, compiler, paths):
msg = "Compiler '%s' has executables that are missing" % compiler.spec
msg += " or are not executable: %s" % paths
super().__init__(msg)
class InvalidCompilerError(spack.error.SpackError):
def __init__(self):
super().__init__("Compiler has no executables.")
class UnsupportedCompilerFlag(spack.error.SpackError):
def __init__(self, compiler, feature, flag_name, ver_string=None):
super().__init__(
"{0} ({1}) does not support {2} (as compiler.{3}).".format(
compiler.name, ver_string if ver_string else compiler.version, feature, flag_name
),
"If you think it should, please edit the compiler.{0} subclass to".format(
compiler.name
)
+ " implement the {0} property and submit a pull request or issue.".format(flag_name),
)
class CompilerCacheEntry:
"""Deserialized cache entry for a compiler"""
__slots__ = ["c_compiler_output", "real_version"]
def __init__(self, c_compiler_output: Optional[str], real_version: str):
self.c_compiler_output = c_compiler_output
self.real_version = real_version
@classmethod
def from_dict(cls, data: Dict[str, Optional[str]]):
if not isinstance(data, dict):
raise ValueError(f"Invalid {cls.__name__} data")
c_compiler_output = data.get("c_compiler_output")
real_version = data.get("real_version")
if not isinstance(real_version, str) or not isinstance(
c_compiler_output, (str, type(None))
):
raise ValueError(f"Invalid {cls.__name__} data")
return cls(c_compiler_output, real_version)
class CompilerCache:
"""Base class for compiler output cache. Default implementation does not cache anything."""
def value(self, compiler: Compiler) -> Dict[str, Optional[str]]:
return {
"c_compiler_output": compiler._compile_dummy_c_source(),
"real_version": compiler.get_real_version(),
}
def get(self, compiler: Compiler) -> CompilerCacheEntry:
return CompilerCacheEntry.from_dict(self.value(compiler))
class FileCompilerCache(CompilerCache):
"""Cache for compiler output, which is used to determine implicit link paths, the default libc
version, and the compiler version."""
name = os.path.join("compilers", "compilers.json")
def __init__(self, cache: "FileCache") -> None:
self.cache = cache
self.cache.init_entry(self.name)
self._data: Dict[str, Dict[str, Optional[str]]] = {}
def _get_entry(self, key: str) -> Optional[CompilerCacheEntry]:
try:
return CompilerCacheEntry.from_dict(self._data[key])
except ValueError:
del self._data[key]
except KeyError:
pass
return None
def get(self, compiler: Compiler) -> CompilerCacheEntry:
# Cache hit
try:
with self.cache.read_transaction(self.name) as f:
assert f is not None
self._data = json.loads(f.read())
assert isinstance(self._data, dict)
except (json.JSONDecodeError, AssertionError):
self._data = {}
key = self._key(compiler)
value = self._get_entry(key)
if value is not None:
return value
# Cache miss
with self.cache.write_transaction(self.name) as (old, new):
try:
assert old is not None
self._data = json.loads(old.read())
assert isinstance(self._data, dict)
except (json.JSONDecodeError, AssertionError):
self._data = {}
# Use cache entry that may have been created by another process in the meantime.
entry = self._get_entry(key)
# Finally compute the cache entry
if entry is None:
self._data[key] = self.value(compiler)
entry = CompilerCacheEntry.from_dict(self._data[key])
new.write(json.dumps(self._data, separators=(",", ":")))
return entry
def _key(self, compiler: Compiler) -> str:
as_bytes = json.dumps(compiler.to_dict(), separators=(",", ":")).encode("utf-8")
return hashlib.sha256(as_bytes).hexdigest()
def _make_compiler_cache():
return FileCompilerCache(spack.caches.MISC_CACHE)
COMPILER_CACHE: CompilerCache = llnl.util.lang.Singleton(_make_compiler_cache) # type: ignore
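# Illustrative sketch of the cache-key scheme used by FileCompilerCache._key
# above: a compiler's dict form is serialized with a stable JSON encoding and
# hashed, so equal configurations map to the same cache entry. `compiler_dict`
# is a hypothetical stand-in for the output of Compiler.to_dict().
import hashlib
import json

def _sketch_cache_key(compiler_dict: dict) -> str:
    as_bytes = json.dumps(compiler_dict, separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(as_bytes).hexdigest()

assert _sketch_cache_key({"spec": "gcc@12.2.0"}) == _sketch_cache_key({"spec": "gcc@12.2.0"})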


@@ -2,3 +2,836 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""This module contains functions related to finding compilers on the
system and configuring Spack to use multiple compilers.
"""
import importlib
import os
import re
import sys
import warnings
from typing import Dict, List, Optional
import archspec.cpu
import llnl.util.filesystem as fs
import llnl.util.lang
import llnl.util.tty as tty
import spack.compiler
import spack.config
import spack.error
import spack.paths
import spack.platforms
import spack.repo
import spack.spec
from spack.operating_systems import windows_os
from spack.util.environment import get_path
from spack.util.naming import mod_to_class
_other_instance_vars = [
"modules",
"operating_system",
"environment",
"implicit_rpaths",
"extra_rpaths",
]
# TODO: Caches at module level make it difficult to mock configurations in
# TODO: unit tests. It might be worth reworking their implementation.
#: cache of compilers constructed from config data, keyed by config entry id.
_compiler_cache: Dict[str, "spack.compiler.Compiler"] = {}
_compiler_to_pkg = {
"clang": "llvm+clang",
"oneapi": "intel-oneapi-compilers",
"rocmcc": "llvm-amdgpu",
"intel@2020:": "intel-oneapi-compilers-classic",
"arm": "acfl",
}
# TODO: generating this from the previous dict causes docs errors
package_name_to_compiler_name = {
"llvm": "clang",
"intel-oneapi-compilers": "oneapi",
"llvm-amdgpu": "rocmcc",
"intel-oneapi-compilers-classic": "intel",
"acfl": "arm",
}
#: Tag used to identify packages providing a compiler
COMPILER_TAG = "compiler"
def pkg_spec_for_compiler(cspec):
"""Return the spec of the package that provides the compiler."""
for spec, package in _compiler_to_pkg.items():
if cspec.satisfies(spec):
spec_str = "%s@%s" % (package, cspec.versions)
break
else:
spec_str = str(cspec)
return spack.spec.parse_with_version_concrete(spec_str)
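# Illustrative sketch of the translation performed above, with the satisfies()
# check approximated by a plain name match (real Spack specs also match
# version-range keys such as "intel@2020:"):
def _sketch_pkg_for_compiler(name: str, version: str) -> str:
    mapping = {"clang": "llvm+clang", "oneapi": "intel-oneapi-compilers"}
    package = mapping.get(name, name)
    return f"{package}@{version}"

assert _sketch_pkg_for_compiler("clang", "15.0.0") == "llvm+clang@15.0.0"
assert _sketch_pkg_for_compiler("gcc", "12.2.0") == "gcc@12.2.0"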
def _auto_compiler_spec(function):
def converter(cspec_like, *args, **kwargs):
if not isinstance(cspec_like, spack.spec.CompilerSpec):
cspec_like = spack.spec.CompilerSpec(cspec_like)
return function(cspec_like, *args, **kwargs)
return converter
def _to_dict(compiler):
"""Return a dict version of compiler suitable to insert in YAML."""
return {"compiler": compiler.to_dict()}
def get_compiler_config(
configuration: "spack.config.Configuration",
*,
scope: Optional[str] = None,
init_config: bool = False,
) -> List[Dict]:
"""Return the compiler configuration for the specified architecture."""
config = configuration.get("compilers", scope=scope) or []
if config or not init_config:
return config
merged_config = configuration.get("compilers")
if merged_config:
# Config is empty for this scope, but some other scope defines compilers:
# do not initialize config, since a non-empty scope already exists
return config
find_compilers(scope=scope)
config = configuration.get("compilers", scope=scope)
return config
def get_compiler_config_from_packages(
configuration: "spack.config.Configuration", *, scope: Optional[str] = None
) -> List[Dict]:
"""Return the compiler configuration from packages.yaml"""
packages_yaml = configuration.get("packages", scope=scope)
return CompilerConfigFactory.from_packages_yaml(packages_yaml)
def compiler_config_files():
config_files = list()
config = spack.config.CONFIG
for scope in config.writable_scopes:
name = scope.name
compiler_config = config.get("compilers", scope=name)
if compiler_config:
config_files.append(config.get_config_filename(name, "compilers"))
compiler_config_from_packages = get_compiler_config_from_packages(config, scope=name)
if compiler_config_from_packages:
config_files.append(config.get_config_filename(name, "packages"))
return config_files
def add_compilers_to_config(compilers, scope=None):
"""Add compilers to the config for the specified architecture.
Arguments:
compilers: a list of Compiler objects.
scope: configuration scope to modify.
"""
compiler_config = get_compiler_config(configuration=spack.config.CONFIG, scope=scope)
for compiler in compilers:
if not compiler.cc:
tty.debug(f"{compiler.spec} does not have a C compiler")
if not compiler.cxx:
tty.debug(f"{compiler.spec} does not have a C++ compiler")
if not compiler.f77:
tty.debug(f"{compiler.spec} does not have a Fortran77 compiler")
if not compiler.fc:
tty.debug(f"{compiler.spec} does not have a Fortran compiler")
compiler_config.append(_to_dict(compiler))
spack.config.set("compilers", compiler_config, scope=scope)
@_auto_compiler_spec
def remove_compiler_from_config(compiler_spec, scope=None):
"""Remove compilers from configuration by spec.
If scope is None, all the scopes are searched for removal.
Arguments:
compiler_spec: compiler to be removed
scope: configuration scope to modify
"""
candidate_scopes = [scope]
if scope is None:
candidate_scopes = spack.config.CONFIG.scopes.keys()
removal_happened = False
for current_scope in candidate_scopes:
removal_happened |= _remove_compiler_from_scope(compiler_spec, scope=current_scope)
msg = "`spack compiler remove` will not remove compilers defined in packages.yaml"
msg += "\nTo remove these compilers, either edit the config or use `spack external remove`"
tty.debug(msg)
return removal_happened
def _remove_compiler_from_scope(compiler_spec, scope):
"""Removes a compiler from a specific configuration scope.
Args:
compiler_spec: compiler to be removed
scope: configuration scope under consideration
Returns:
True if one or more compiler entries were actually removed, False otherwise
"""
assert scope is not None, "a specific scope is needed when calling this function"
compiler_config = get_compiler_config(configuration=spack.config.CONFIG, scope=scope)
filtered_compiler_config = [
compiler_entry
for compiler_entry in compiler_config
if not spack.spec.parse_with_version_concrete(
compiler_entry["compiler"]["spec"], compiler=True
).satisfies(compiler_spec)
]
if len(filtered_compiler_config) == len(compiler_config):
return False
# We need to preserve the YAML type for comments, hence we are copying the
# items in the list that has just been retrieved
compiler_config[:] = filtered_compiler_config
spack.config.CONFIG.set("compilers", compiler_config, scope=scope)
return True
def all_compilers_config(
configuration: "spack.config.Configuration",
*,
scope: Optional[str] = None,
init_config: bool = True,
) -> List["spack.compiler.Compiler"]:
"""Return a set of specs for all the compiler versions currently
available to build with. These are instances of CompilerSpec.
"""
from_packages_yaml = get_compiler_config_from_packages(configuration, scope=scope)
if from_packages_yaml:
init_config = False
from_compilers_yaml = get_compiler_config(configuration, scope=scope, init_config=init_config)
result = from_compilers_yaml + from_packages_yaml
# Dedupe entries by the compiler they represent
# If the entry is invalid, treat it as unique for deduplication
key = lambda c: _compiler_from_config_entry(c["compiler"]) or id(c)
return list(llnl.util.lang.dedupe(result, key=key))
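# Sketch of the dedupe-by-key behavior relied on above: a simplified stand-in
# for llnl.util.lang.dedupe, assuming it keeps the first occurrence of each key.
def _sketch_dedupe(items, key):
    seen, result = set(), []
    for item in items:
        k = key(item)
        if k not in seen:
            seen.add(k)
            result.append(item)
    return result

assert _sketch_dedupe([1, 2, 1, 3], key=lambda x: x) == [1, 2, 3]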
def all_compiler_specs(scope=None, init_config=True):
# Return compiler specs from the merged config.
return [
spack.spec.parse_with_version_concrete(s["compiler"]["spec"], compiler=True)
for s in all_compilers_config(spack.config.CONFIG, scope=scope, init_config=init_config)
]
def find_compilers(
path_hints: Optional[List[str]] = None,
*,
scope: Optional[str] = None,
mixed_toolchain: bool = False,
max_workers: Optional[int] = None,
) -> List["spack.compiler.Compiler"]:
"""Searches for compiler in the paths given as argument. If any new compiler is found, the
configuration is updated, and the list of new compiler objects is returned.
Args:
path_hints: list of paths to search. A sensible default based on the ``PATH``
environment variable will be used if the value is None
scope: configuration scope to modify
mixed_toolchain: allow mixing compilers from different toolchains if otherwise missing for
a certain language
max_workers: number of processes used to search for compilers
"""
import spack.detection
known_compilers = set(all_compilers(init_config=False))
if path_hints is None:
path_hints = get_path("PATH")
default_paths = fs.search_paths_for_executables(*path_hints)
if sys.platform == "win32":
default_paths.extend(windows_os.WindowsOs().compiler_search_paths)
compiler_pkgs = spack.repo.PATH.packages_with_tags(COMPILER_TAG, full=True)
detected_packages = spack.detection.by_path(
compiler_pkgs, path_hints=default_paths, max_workers=max_workers
)
valid_compilers = {}
for name, detected in detected_packages.items():
compilers = [x for x in detected if CompilerConfigFactory.from_external_spec(x)]
if not compilers:
continue
valid_compilers[name] = compilers
def _has_fortran_compilers(x):
if "compilers" not in x.extra_attributes:
return False
return "fortran" in x.extra_attributes["compilers"]
if mixed_toolchain:
gccs = [x for x in valid_compilers.get("gcc", []) if _has_fortran_compilers(x)]
if gccs:
best_gcc = sorted(
gccs, key=lambda x: spack.spec.parse_with_version_concrete(x).version
)[-1]
gfortran = best_gcc.extra_attributes["compilers"]["fortran"]
for name in ("llvm", "apple-clang"):
if name not in valid_compilers:
continue
candidates = valid_compilers[name]
for candidate in candidates:
if _has_fortran_compilers(candidate):
continue
candidate.extra_attributes["compilers"]["fortran"] = gfortran
new_compilers = []
for name, detected in valid_compilers.items():
for config in CompilerConfigFactory.from_specs(detected):
c = _compiler_from_config_entry(config["compiler"])
if c in known_compilers:
continue
new_compilers.append(c)
add_compilers_to_config(new_compilers, scope=scope)
return new_compilers
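# Sketch of the "best gcc" selection used for mixed toolchains above: versions
# sort ascending, so the last element wins. Tuple comparison stands in for
# Spack's version ordering here.
_versions = ["9.4.0", "12.2.0", "10.1.0"]
_best = sorted(_versions, key=lambda v: tuple(int(p) for p in v.split(".")))[-1]
assert _best == "12.2.0"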
def select_new_compilers(compilers, scope=None):
"""Given a list of compilers, remove those that are already defined in
the configuration.
"""
compilers_not_in_config = []
for c in compilers:
arch_spec = spack.spec.ArchSpec((None, c.operating_system, c.target))
same_specs = compilers_for_spec(
c.spec, arch_spec=arch_spec, scope=scope, init_config=False
)
if not same_specs:
compilers_not_in_config.append(c)
return compilers_not_in_config
def supported_compilers() -> List[str]:
"""Return a set of names of compilers supported by Spack.
See available_compilers() to get a list of all the available
versions of supported compilers.
"""
# Hack to be able to call the compiler `apple-clang` while still
# using a valid python name for the module
return sorted(all_compiler_names())
def supported_compilers_for_host_platform() -> List[str]:
"""Return a set of compiler class objects supported by Spack
that are also supported by the current host platform
"""
host_plat = spack.platforms.real_host()
return supported_compilers_for_platform(host_plat)
def supported_compilers_for_platform(platform: "spack.platforms.Platform") -> List[str]:
"""Return a set of compiler class objects supported by Spack
that are also supported by the provided platform
Args:
platform (str): string representation of platform
for which compiler compatability should be determined
"""
return [
name
for name in supported_compilers()
if class_for_compiler_name(name).is_supported_on_platform(platform)
]
def all_compiler_names() -> List[str]:
def replace_apple_clang(name):
return name if name != "apple_clang" else "apple-clang"
return [replace_apple_clang(name) for name in all_compiler_module_names()]
@llnl.util.lang.memoized
def all_compiler_module_names() -> List[str]:
return list(llnl.util.lang.list_modules(spack.paths.compilers_path))
@_auto_compiler_spec
def supported(compiler_spec):
"""Test if a particular compiler is supported."""
return compiler_spec.name in supported_compilers()
@_auto_compiler_spec
def find(compiler_spec, scope=None, init_config=True):
"""Return specs of available compilers that match the supplied
compiler spec. Return an empty list if nothing found."""
return [c for c in all_compiler_specs(scope, init_config) if c.satisfies(compiler_spec)]
@_auto_compiler_spec
def find_specs_by_arch(compiler_spec, arch_spec, scope=None, init_config=True):
"""Return specs of available compilers that match the supplied
compiler spec. Return an empty list if nothing found."""
return [
c.spec
for c in compilers_for_spec(
compiler_spec, arch_spec=arch_spec, scope=scope, init_config=init_config
)
]
def all_compilers(scope=None, init_config=True):
return all_compilers_from(
configuration=spack.config.CONFIG, scope=scope, init_config=init_config
)
def all_compilers_from(configuration, scope=None, init_config=True):
compilers = []
for items in all_compilers_config(
configuration=configuration, scope=scope, init_config=init_config
):
items = items["compiler"]
compiler = _compiler_from_config_entry(items) # can be None in error case
if compiler:
compilers.append(compiler)
return compilers
@_auto_compiler_spec
def compilers_for_spec(compiler_spec, *, arch_spec=None, scope=None, init_config=True):
"""This gets all compilers that satisfy the supplied CompilerSpec.
Returns an empty list if none are found.
"""
config = all_compilers_config(spack.config.CONFIG, scope=scope, init_config=init_config)
matches = set(find(compiler_spec, scope, init_config))
compilers = []
for cspec in matches:
compilers.extend(get_compilers(config, cspec, arch_spec))
return compilers
def compilers_for_arch(arch_spec, scope=None):
config = all_compilers_config(spack.config.CONFIG, scope=scope, init_config=False)
return list(get_compilers(config, arch_spec=arch_spec))
def compiler_specs_for_arch(arch_spec, scope=None):
return [c.spec for c in compilers_for_arch(arch_spec, scope)]
class CacheReference:
"""This acts as a hashable reference to any object (regardless of whether
the object itself is hashable) and also prevents the object from being
garbage-collected (so if two CacheReference objects are equal, they
will refer to the same object, since it will not have been gc'ed since
the creation of the first CacheReference).
"""
def __init__(self, val):
self.val = val
self.id = id(val)
def __hash__(self):
return self.id
def __eq__(self, other):
return isinstance(other, CacheReference) and self.id == other.id
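# Quick demonstration of CacheReference semantics: equality and hashing follow
# object identity, so an unhashable dict can still be used as a cache key.
_cfg = {"spec": "gcc@12"}  # dicts are not hashable themselves
_ref1, _ref2 = CacheReference(_cfg), CacheReference(_cfg)
assert _ref1 == _ref2 and hash(_ref1) == hash(_ref2)
assert CacheReference({"spec": "gcc@12"}) != _ref1  # distinct object, distinct key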
def compiler_from_dict(items):
cspec = spack.spec.parse_with_version_concrete(items["spec"], compiler=True)
os = items.get("operating_system", None)
target = items.get("target", None)
if not (
"paths" in items and all(n in items["paths"] for n in spack.compiler.PATH_INSTANCE_VARS)
):
raise InvalidCompilerConfigurationError(cspec)
cls = class_for_compiler_name(cspec.name)
compiler_paths = []
for c in spack.compiler.PATH_INSTANCE_VARS:
compiler_path = items["paths"][c]
if compiler_path != "None":
compiler_paths.append(compiler_path)
else:
compiler_paths.append(None)
mods = items.get("modules")
if mods == "None":
mods = []
alias = items.get("alias", None)
compiler_flags = items.get("flags", {})
environment = items.get("environment", {})
extra_rpaths = items.get("extra_rpaths", [])
implicit_rpaths = items.get("implicit_rpaths", None)
# Starting with c22a145, 'implicit_rpaths' was a list. Now it is a
# boolean which can be set by the user to disable all automatic
# RPATH insertion of compiler libraries
if implicit_rpaths is not None and not isinstance(implicit_rpaths, bool):
implicit_rpaths = None
return cls(
cspec,
os,
target,
compiler_paths,
mods,
alias,
environment,
extra_rpaths,
enable_implicit_rpaths=implicit_rpaths,
**compiler_flags,
)
def _compiler_from_config_entry(items):
"""Note this is intended for internal use only. To avoid re-parsing
the same config dictionary this keeps track of its location in
memory. If you provide the same dictionary twice it will return
the same Compiler object (regardless of whether the dictionary
entries have changed).
"""
config_id = CacheReference(items)
compiler = _compiler_cache.get(config_id, None)
if compiler is None:
try:
compiler = compiler_from_dict(items)
except UnknownCompilerError as e:
warnings.warn(e.message)
_compiler_cache[config_id] = compiler
return compiler
def get_compilers(config, cspec=None, arch_spec=None):
compilers = []
for items in config:
items = items["compiler"]
# Note: strict equality might work here instead of satisfies().
if cspec and not spack.spec.parse_with_version_concrete(
items["spec"], compiler=True
).satisfies(cspec):
continue
# If an arch spec is given, confirm that this compiler
# is for the given operating system
os = items.get("operating_system", None)
if arch_spec and os != arch_spec.os:
continue
# If an arch spec is given, confirm that this compiler
# is for the given target. If the target is 'any', match
# any given arch spec. If the compiler has no assigned
# target this is an old compiler config file, skip this logic.
target = items.get("target", None)
try:
current_target = archspec.cpu.TARGETS[str(arch_spec.target)]
family = str(current_target.family)
except KeyError:
# TODO: Check if this exception handling makes sense, or if we
# TODO: need to change / refactor tests
family = str(arch_spec.target)
except AttributeError:
assert arch_spec is None
if arch_spec and target and (target != family and target != "any"):
# If the family of the target is the family we are seeking,
# there's an error in the underlying configuration
if archspec.cpu.TARGETS[target].family == family:
msg = (
'the "target" field in compilers.yaml accepts only '
'target families [replace "{0}" with "{1}"'
' in "{2}" specification]'
)
msg = msg.format(str(target), family, items.get("spec", "??"))
raise ValueError(msg)
continue
compiler = _compiler_from_config_entry(items)
if compiler:
compilers.append(compiler)
return compilers
@_auto_compiler_spec
def compiler_for_spec(compiler_spec, arch_spec):
"""Get the compiler that satisfies compiler_spec. compiler_spec must
be concrete."""
assert compiler_spec.concrete
assert arch_spec.concrete
compilers = compilers_for_spec(compiler_spec, arch_spec=arch_spec)
if len(compilers) < 1:
raise NoCompilerForSpecError(compiler_spec, arch_spec.os)
if len(compilers) > 1:
msg = "Multiple definitions of compiler %s " % compiler_spec
msg += "for architecture %s:\n %s" % (arch_spec, compilers)
tty.debug(msg)
return compilers[0]
@llnl.util.lang.memoized
def class_for_compiler_name(compiler_name):
"""Given a compiler module name, get the corresponding Compiler class."""
if not supported(compiler_name):
raise UnknownCompilerError(compiler_name)
# Hack to be able to call the compiler `apple-clang` while still
# using a valid python name for the module
submodule_name = compiler_name
if compiler_name == "apple-clang":
submodule_name = compiler_name.replace("-", "_")
module_name = ".".join(["spack", "compilers", submodule_name])
module_obj = importlib.import_module(module_name)
cls = getattr(module_obj, mod_to_class(compiler_name))
# make a note of the name in the module so we can get to it easily.
cls.name = compiler_name
return cls
def all_compiler_types():
return [class_for_compiler_name(c) for c in supported_compilers()]
def is_mixed_toolchain(compiler):
"""Returns True if the current compiler is a mixed toolchain,
False otherwise.
Args:
compiler (spack.compiler.Compiler): a valid compiler object
"""
import spack.detection.path
executables = [
os.path.basename(compiler.cc or ""),
os.path.basename(compiler.cxx or ""),
os.path.basename(compiler.f77 or ""),
os.path.basename(compiler.fc or ""),
]
toolchains = set()
finder = spack.detection.path.ExecutablesFinder()
for pkg_name in spack.repo.PATH.packages_with_tags(COMPILER_TAG):
pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
patterns = finder.search_patterns(pkg=pkg_cls)
if not patterns:
continue
joined_pattern = re.compile(r"|".join(patterns))
if any(joined_pattern.search(exe) for exe in executables):
tty.debug(f"[TOOLCHAIN] MATCH {pkg_name}")
toolchains.add(pkg_name)
if len(toolchains) > 1:
if (
toolchains == {"llvm", "apple-clang", "aocc"}
# Msvc toolchain uses Intel ifx
or toolchains == {"msvc", "intel-oneapi-compilers"}
):
return False
tty.debug("[TOOLCHAINS] {0}".format(toolchains))
return True
return False
_EXTRA_ATTRIBUTES_KEY = "extra_attributes"
_COMPILERS_KEY = "compilers"
_C_KEY = "c"
_CXX_KEY, _FORTRAN_KEY = "cxx", "fortran"
class CompilerConfigFactory:
"""Class aggregating all ways of constructing a list of compiler config entries."""
@staticmethod
def from_specs(specs: List["spack.spec.Spec"]) -> List[dict]:
result = []
compiler_package_names = supported_compilers() + list(package_name_to_compiler_name.keys())
for s in specs:
if s.name not in compiler_package_names:
continue
candidate = CompilerConfigFactory.from_external_spec(s)
if candidate is None:
continue
result.append(candidate)
return result
@staticmethod
def from_packages_yaml(packages_yaml) -> List[dict]:
compiler_specs = []
compiler_package_names = supported_compilers() + list(package_name_to_compiler_name.keys())
for name, entry in packages_yaml.items():
if name not in compiler_package_names:
continue
externals_config = entry.get("externals", None)
if not externals_config:
continue
current_specs = []
for current_external in externals_config:
compiler = CompilerConfigFactory._spec_from_external_config(current_external)
if compiler:
current_specs.append(compiler)
compiler_specs.extend(current_specs)
return CompilerConfigFactory.from_specs(compiler_specs)
@staticmethod
def _spec_from_external_config(config):
# Allow `@x.y.z` instead of `@=x.y.z`
err_header = f"The external spec '{config['spec']}' cannot be used as a compiler"
# If extra_attributes is missing, this entry may not be meant to be used as a
# compiler, so just leave a debug message rather than a loud warning.
if _EXTRA_ATTRIBUTES_KEY not in config:
tty.debug(f"[{__file__}] {err_header}: missing the '{_EXTRA_ATTRIBUTES_KEY}' key")
return None
extra_attributes = config[_EXTRA_ATTRIBUTES_KEY]
result = spack.spec.Spec(
str(spack.spec.parse_with_version_concrete(config["spec"])),
external_modules=config.get("modules"),
)
result.extra_attributes = extra_attributes
return result
@staticmethod
def from_external_spec(spec: "spack.spec.Spec") -> Optional[dict]:
spec = spack.spec.parse_with_version_concrete(spec)
extra_attributes = getattr(spec, _EXTRA_ATTRIBUTES_KEY, None)
if extra_attributes is None:
return None
paths = CompilerConfigFactory._extract_compiler_paths(spec)
if paths is None:
return None
compiler_spec = spack.spec.CompilerSpec(
package_name_to_compiler_name.get(spec.name, spec.name), spec.version
)
operating_system, target = CompilerConfigFactory._extract_os_and_target(spec)
compiler_entry = {
"compiler": {
"spec": str(compiler_spec),
"paths": paths,
"flags": extra_attributes.get("flags", {}),
"operating_system": str(operating_system),
"target": str(target.family),
"modules": getattr(spec, "external_modules", []),
"environment": extra_attributes.get("environment", {}),
"extra_rpaths": extra_attributes.get("extra_rpaths", []),
"implicit_rpaths": extra_attributes.get("implicit_rpaths", None),
}
}
return compiler_entry
@staticmethod
def _extract_compiler_paths(spec: "spack.spec.Spec") -> Optional[Dict[str, str]]:
err_header = f"The external spec '{spec}' cannot be used as a compiler"
extra_attributes = spec.extra_attributes
# If 'extra_attributes' is present, warn when 'compilers' is missing
# or no C compiler path is given
if _COMPILERS_KEY not in extra_attributes:
warnings.warn(
f"{err_header}: missing the '{_COMPILERS_KEY}' key under '{_EXTRA_ATTRIBUTES_KEY}'"
)
return None
attribute_compilers = extra_attributes[_COMPILERS_KEY]
if _C_KEY not in attribute_compilers:
warnings.warn(
f"{err_header}: missing the C compiler path under "
f"'{_EXTRA_ATTRIBUTES_KEY}:{_COMPILERS_KEY}'"
)
return None
c_compiler = attribute_compilers[_C_KEY]
# C++ and Fortran compilers are not mandatory, so let's just leave a debug trace
if _CXX_KEY not in attribute_compilers:
tty.debug(f"[{__file__}] The external spec {spec} does not have a C++ compiler")
if _FORTRAN_KEY not in attribute_compilers:
tty.debug(f"[{__file__}] The external spec {spec} does not have a Fortran compiler")
# compilers format has cc/fc/f77, externals format has "c/fortran"
return {
"cc": c_compiler,
"cxx": attribute_compilers.get(_CXX_KEY, None),
"fc": attribute_compilers.get(_FORTRAN_KEY, None),
"f77": attribute_compilers.get(_FORTRAN_KEY, None),
}
@staticmethod
def _extract_os_and_target(spec: "spack.spec.Spec"):
if not spec.architecture:
host_platform = spack.platforms.host()
operating_system = host_platform.operating_system("default_os")
target = host_platform.target("default_target")
else:
target = spec.architecture.target
if not target:
target = spack.platforms.host().target("default_target")
operating_system = spec.os
if not operating_system:
host_platform = spack.platforms.host()
operating_system = host_platform.operating_system("default_os")
return operating_system, target
class InvalidCompilerConfigurationError(spack.error.SpackError):
def __init__(self, compiler_spec):
super().__init__(
f'Invalid configuration for [compiler "{compiler_spec}"]: ',
f"Compiler configuration must contain entries for "
f"all compilers: {spack.compiler.PATH_INSTANCE_VARS}",
)
class UnknownCompilerError(spack.error.SpackError):
def __init__(self, compiler_name):
super().__init__("Spack doesn't support the requested compiler: {0}".format(compiler_name))
class NoCompilerForSpecError(spack.error.SpackError):
def __init__(self, compiler_spec, target):
super().__init__(
"No compilers for operating system %s satisfy spec %s" % (target, compiler_spec)
)
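# Illustrative shape of a packages.yaml external entry accepted by
# CompilerConfigFactory above (hypothetical paths and versions; at minimum a C
# compiler must appear under extra_attributes:compilers):
_example_external = {
    "spec": "gcc@12.2.0",
    "prefix": "/usr",
    "extra_attributes": {
        "compilers": {
            "c": "/usr/bin/gcc",
            "cxx": "/usr/bin/g++",
            "fortran": "/usr/bin/gfortran",
        }
    },
}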


@@ -1,212 +0,0 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import enum
import typing
from typing import Dict, List
from llnl.util import lang
from .libraries import CompilerPropertyDetector
if typing.TYPE_CHECKING:
import spack.spec
class Languages(enum.Enum):
C = "c"
CXX = "cxx"
FORTRAN = "fortran"
class CompilerAdaptor:
def __init__(
self, compiled_spec: "spack.spec.Spec", compilers: Dict[Languages, "spack.spec.Spec"]
) -> None:
if not compilers:
raise AttributeError(f"{compiled_spec} has no 'compiler' attribute")
self.compilers = compilers
self.compiled_spec = compiled_spec
def _lang_exists_or_raise(self, name: str, *, lang: Languages) -> None:
if lang not in self.compilers:
raise AttributeError(
f"'{self.compiled_spec}' has no {lang.value} compiler, so the "
f"'{name}' property cannot be retrieved"
)
def _maybe_return_attribute(self, name: str, *, lang: Languages) -> str:
self._lang_exists_or_raise(name, lang=lang)
return getattr(self.compilers[lang].package, name)
@property
def cc_rpath_arg(self) -> str:
self._lang_exists_or_raise("cc_rpath_arg", lang=Languages.C)
return self.compilers[Languages.C].package.rpath_arg
@property
def cxx_rpath_arg(self) -> str:
self._lang_exists_or_raise("cxx_rpath_arg", lang=Languages.CXX)
return self.compilers[Languages.CXX].package.rpath_arg
@property
def fc_rpath_arg(self) -> str:
self._lang_exists_or_raise("fc_rpath_arg", lang=Languages.FORTRAN)
return self.compilers[Languages.FORTRAN].package.rpath_arg
@property
def f77_rpath_arg(self) -> str:
self._lang_exists_or_raise("f77_rpath_arg", lang=Languages.FORTRAN)
return self.compilers[Languages.FORTRAN].package.rpath_arg
@property
def linker_arg(self) -> str:
return self._maybe_return_attribute("linker_arg", lang=Languages.C)
@property
def name(self):
return next(iter(self.compilers.values())).name
@property
def version(self):
return next(iter(self.compilers.values())).version
def implicit_rpaths(self) -> List[str]:
result, seen = [], set()
for compiler in self.compilers.values():
if compiler in seen:
continue
seen.add(compiler)
result.extend(CompilerPropertyDetector(compiler).implicit_rpaths())
return result
@property
def openmp_flag(self) -> str:
return next(iter(self.compilers.values())).package.openmp_flag
@property
def cxx98_flag(self) -> str:
return self.compilers[Languages.CXX].package.standard_flag(
language=Languages.CXX.value, standard="98"
)
@property
def cxx11_flag(self) -> str:
return self.compilers[Languages.CXX].package.standard_flag(
language=Languages.CXX.value, standard="11"
)
@property
def cxx14_flag(self) -> str:
return self.compilers[Languages.CXX].package.standard_flag(
language=Languages.CXX.value, standard="14"
)
@property
def cxx17_flag(self) -> str:
return self.compilers[Languages.CXX].package.standard_flag(
language=Languages.CXX.value, standard="17"
)
@property
def cxx20_flag(self) -> str:
return self.compilers[Languages.CXX].package.standard_flag(
language=Languages.CXX.value, standard="20"
)
@property
def cxx23_flag(self) -> str:
return self.compilers[Languages.CXX].package.standard_flag(
language=Languages.CXX.value, standard="23"
)
@property
def c99_flag(self) -> str:
return self.compilers[Languages.C].package.standard_flag(
language=Languages.C.value, standard="99"
)
@property
def c11_flag(self) -> str:
return self.compilers[Languages.C].package.standard_flag(
language=Languages.C.value, standard="11"
)
@property
def c17_flag(self) -> str:
return self.compilers[Languages.C].package.standard_flag(
language=Languages.C.value, standard="17"
)
@property
def c23_flag(self) -> str:
return self.compilers[Languages.C].package.standard_flag(
language=Languages.C.value, standard="17"
)
@property
def cc_pic_flag(self) -> str:
self._lang_exists_or_raise("cc_pic_flag", lang=Languages.C)
return self.compilers[Languages.C].package.pic_flag
@property
def cxx_pic_flag(self) -> str:
self._lang_exists_or_raise("cxx_pic_flag", lang=Languages.CXX)
return self.compilers[Languages.CXX].package.pic_flag
@property
def fc_pic_flag(self) -> str:
self._lang_exists_or_raise("fc_pic_flag", lang=Languages.FORTRAN)
return self.compilers[Languages.FORTRAN].package.pic_flag
@property
def f77_pic_flag(self) -> str:
self._lang_exists_or_raise("f77_pic_flag", lang=Languages.FORTRAN)
return self.compilers[Languages.FORTRAN].package.pic_flag
@property
def prefix(self) -> str:
return next(iter(self.compilers.values())).prefix
@property
def extra_rpaths(self) -> List[str]:
compiler = next(iter(self.compilers.values()))
return getattr(compiler, "extra_attributes", {}).get("extra_rpaths", [])
@property
def cc(self):
return self._maybe_return_attribute("cc", lang=Languages.C)
@property
def cxx(self):
return self._maybe_return_attribute("cxx", lang=Languages.CXX)
@property
def fc(self):
self._lang_exists_or_raise("fc", lang=Languages.FORTRAN)
return self.compilers[Languages.FORTRAN].package.fortran
@property
def f77(self):
self._lang_exists_or_raise("f77", lang=Languages.FORTRAN)
return self.compilers[Languages.FORTRAN].package.fortran
class DeprecatedCompiler(lang.DeprecatedProperty):
def __init__(self) -> None:
super().__init__(name="compiler")
def factory(self, instance, owner) -> CompilerAdaptor:
spec = instance.spec
if not spec.concrete:
raise ValueError("Can only get a compiler for a concrete package.")
compilers = {}
for language in Languages:
deps = spec.dependencies(virtuals=[language.value])
if deps:
compilers[language] = deps[0]
return CompilerAdaptor(instance, compilers)
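# Minimal usage sketch of the adaptor above, with stand-in objects replacing
# real concrete Spec instances (the names below are hypothetical):
class _FakePackage:
    openmp_flag = "-fopenmp"

class _FakeSpec:
    name = "gcc"
    version = "12.2.0"
    package = _FakePackage()

_adaptor = CompilerAdaptor("example-spec", {Languages.C: _FakeSpec()})
assert _adaptor.name == "gcc" and _adaptor.openmp_flag == "-fopenmp"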


@@ -0,0 +1,120 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import re
import llnl.util.lang
from spack.compiler import Compiler
from spack.version import ver
class Aocc(Compiler):
version_argument = "--version"
@property
def debug_flags(self):
return [
"-gcodeview",
"-gdwarf-2",
"-gdwarf-3",
"-gdwarf-4",
"-gdwarf-5",
"-gline-tables-only",
"-gmodules",
"-g",
]
@property
def opt_flags(self):
return ["-O0", "-O1", "-O2", "-O3", "-Ofast", "-Os", "-Oz", "-Og", "-O", "-O4"]
@property
def link_paths(self):
link_paths = {
"cc": os.path.join("aocc", "clang"),
"cxx": os.path.join("aocc", "clang++"),
"f77": os.path.join("aocc", "flang"),
"fc": os.path.join("aocc", "flang"),
}
return link_paths
@property
def verbose_flag(self):
return "-v"
@property
def openmp_flag(self):
return "-fopenmp"
@property
def cxx11_flag(self):
return "-std=c++11"
@property
def cxx14_flag(self):
return "-std=c++14"
@property
def cxx17_flag(self):
return "-std=c++17"
@property
def c99_flag(self):
return "-std=c99"
@property
def c11_flag(self):
return "-std=c11"
@property
def cc_pic_flag(self):
return "-fPIC"
@property
def cxx_pic_flag(self):
return "-fPIC"
@property
def f77_pic_flag(self):
return "-fPIC"
@property
def fc_pic_flag(self):
return "-fPIC"
required_libs = ["libclang"]
@classmethod
@llnl.util.lang.memoized
def extract_version_from_output(cls, output):
match = re.search(r"AOCC_(\d+)[._](\d+)[._](\d+)", output)
if match:
return ".".join(match.groups())
return "unknown"
@property
def stdcxx_libs(self):
return ("-lstdc++",)
@property
def cflags(self):
return self._handle_default_flag_additions()
@property
def cxxflags(self):
return self._handle_default_flag_additions()
@property
def fflags(self):
return self._handle_default_flag_additions()
def _handle_default_flag_additions(self):
# This is a known issue for AOCC 3.0 see:
# https://developer.amd.com/wp-content/resources/AOCC-3.0-Install-Guide.pdf
if self.version.satisfies(ver("3.0.0")):
return "-Wno-unused-command-line-argument " "-mllvm -eliminate-similar-expr=false"


@@ -0,0 +1,116 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import re
import llnl.util.lang
import spack.compiler
import spack.compilers.clang
from spack.version import Version
class AppleClang(spack.compilers.clang.Clang):
openmp_flag = "-Xpreprocessor -fopenmp"
@classmethod
@llnl.util.lang.memoized
def extract_version_from_output(cls, output):
ver = "unknown"
match = re.search(
# Apple's LLVM compiler has its own versioning scheme, distinct from vanilla LLVM.
r"^Apple (?:LLVM|clang) version ([^ )]+)",
output,
# Multi-line, since 'Apple clang' may not be on the first line
# in particular, when run as gcc, it seems to output
# "Configured with: --prefix=..." as the first line
re.M,
)
if match:
ver = match.group(match.lastindex)
return ver
# C++ flags based on CMake Modules/Compiler/AppleClang-CXX.cmake
@property
def cxx11_flag(self):
# Spack's AppleClang detection only valid from Xcode >= 4.6
if self.real_version < Version("4.0"):
raise spack.compiler.UnsupportedCompilerFlag(
self, "the C++11 standard", "cxx11_flag", "Xcode < 4.0"
)
return "-std=c++11"
@property
def cxx14_flag(self):
if self.real_version < Version("5.1"):
raise spack.compiler.UnsupportedCompilerFlag(
self, "the C++14 standard", "cxx14_flag", "Xcode < 5.1"
)
elif self.real_version < Version("6.1"):
return "-std=c++1y"
return "-std=c++14"
@property
def cxx17_flag(self):
if self.real_version < Version("6.1"):
raise spack.compiler.UnsupportedCompilerFlag(
self, "the C++17 standard", "cxx17_flag", "Xcode < 6.1"
)
elif self.real_version < Version("10.0"):
return "-std=c++1z"
return "-std=c++17"
@property
def cxx20_flag(self):
if self.real_version < Version("10.0"):
raise spack.compiler.UnsupportedCompilerFlag(
self, "the C++20 standard", "cxx20_flag", "Xcode < 10.0"
)
elif self.real_version < Version("13.0"):
return "-std=c++2a"
return "-std=c++20"
@property
def cxx23_flag(self):
if self.real_version < Version("13.0"):
raise spack.compiler.UnsupportedCompilerFlag(
self, "the C++23 standard", "cxx23_flag", "Xcode < 13.0"
)
return "-std=c++2b"
# C flags based on CMake Modules/Compiler/AppleClang-C.cmake
@property
def c99_flag(self):
if self.real_version < Version("4.0"):
raise spack.compiler.UnsupportedCompilerFlag(
self, "the C99 standard", "c99_flag", "< 4.0"
)
return "-std=c99"
@property
def c11_flag(self):
if self.real_version < Version("4.0"):
raise spack.compiler.UnsupportedCompilerFlag(
self, "the C11 standard", "c11_flag", "< 4.0"
)
return "-std=c11"
@property
def c17_flag(self):
if self.real_version < Version("11.0"):
raise spack.compiler.UnsupportedCompilerFlag(
self, "the C17 standard", "c17_flag", "< 11.0"
)
return "-std=c17"
@property
def c23_flag(self):
if self.real_version < Version("11.0.3"):
raise spack.compiler.UnsupportedCompilerFlag(
self, "the C23 standard", "c23_flag", "< 11.0.3"
)
return "-std=c2x"


@@ -0,0 +1,80 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import spack.compiler
class Arm(spack.compiler.Compiler):
# Named wrapper links within lib/spack/env
link_paths = {
"cc": os.path.join("arm", "armclang"),
"cxx": os.path.join("arm", "armclang++"),
"f77": os.path.join("arm", "armflang"),
"fc": os.path.join("arm", "armflang"),
}
# The ``--version`` option seems to be the most consistent one for
# arm compilers. Output looks like this:
#
# $ arm<c/f>lang --version
# Arm C/C++/Fortran Compiler version 19.0 (build number 73) (based on LLVM 7.0.2)
# Target: aarch64--linux-gnu
# Thread model: posix
# InstalledDir:
# /opt/arm/arm-hpc-compiler-19.0_Generic-AArch64_RHEL-7_aarch64-linux/bin
version_argument = "--version"
version_regex = r"Arm C\/C\+\+\/Fortran Compiler version ([\d\.]+) "
@property
def verbose_flag(self):
return "-v"
@property
def opt_flags(self):
return ["-O", "-O0", "-O1", "-O2", "-O3", "-Ofast"]
@property
def openmp_flag(self):
return "-fopenmp"
@property
def cxx11_flag(self):
return "-std=c++11"
@property
def cxx14_flag(self):
return "-std=c++14"
@property
def cxx17_flag(self):
return "-std=c++1z"
@property
def c99_flag(self):
return "-std=c99"
@property
def c11_flag(self):
return "-std=c11"
@property
def cc_pic_flag(self):
return "-fPIC"
@property
def cxx_pic_flag(self):
return "-fPIC"
@property
def f77_pic_flag(self):
return "-fPIC"
@property
def fc_pic_flag(self):
return "-fPIC"
required_libs = ["libclang", "libflang"]


@@ -0,0 +1,128 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
from spack.compiler import Compiler, UnsupportedCompilerFlag
from spack.version import Version
class Cce(Compiler):
"""Cray compiler environment compiler."""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# For old Cray compilers on module-based systems we replace
# ``version_argument`` with the old value. This cannot be a property,
# since the new value is used in classmethods for path-based detection.
if not self.is_clang_based:
self.version_argument = "-V"
# MacPorts builds gcc versions with prefixes and -mp-X.Y suffixes.
suffixes = [r"-mp-\d\.\d"]
@property
def link_paths(self):
if any("PrgEnv-cray" in m for m in self.modules):
# Old module-based interface to cray compilers
return {
"cc": os.path.join("cce", "cc"),
"cxx": os.path.join("case-insensitive", "CC"),
"f77": os.path.join("cce", "ftn"),
"fc": os.path.join("cce", "ftn"),
}
return {
"cc": os.path.join("cce", "craycc"),
"cxx": os.path.join("cce", "case-insensitive", "crayCC"),
"f77": os.path.join("cce", "crayftn"),
"fc": os.path.join("cce", "crayftn"),
}
@property
def is_clang_based(self):
version = self._real_version or self.version
return version >= Version("9.0") and "classic" not in str(version)
version_argument = "--version"
version_regex = r"[Cc]ray (?:clang|C :|C\+\+ :|Fortran :) [Vv]ersion.*?(\d+(\.\d+)+)"
@property
def verbose_flag(self):
return "-v"
@property
def debug_flags(self):
return ["-g", "-G0", "-G1", "-G2", "-Gfast"]
@property
def openmp_flag(self):
if self.is_clang_based:
return "-fopenmp"
return "-h omp"
@property
def cxx11_flag(self):
if self.is_clang_based:
return "-std=c++11"
return "-h std=c++11"
@property
def cxx14_flag(self):
if self.is_clang_based:
return "-std=c++14"
return "-h std=c++14"
@property
def cxx17_flag(self):
if self.is_clang_based:
return "-std=c++17"
@property
def c99_flag(self):
if self.is_clang_based:
return "-std=c99"
elif self.real_version >= Version("8.4"):
return "-h std=c99,noconform,gnu"
elif self.real_version >= Version("8.1"):
return "-h c99,noconform,gnu"
raise UnsupportedCompilerFlag(self, "the C99 standard", "c99_flag", "< 8.1")
@property
def c11_flag(self):
if self.is_clang_based:
return "-std=c11"
elif self.real_version >= Version("8.5"):
return "-h std=c11,noconform,gnu"
raise UnsupportedCompilerFlag(self, "the C11 standard", "c11_flag", "< 8.5")
@property
def cc_pic_flag(self):
if self.is_clang_based:
return "-fPIC"
return "-h PIC"
@property
def cxx_pic_flag(self):
if self.is_clang_based:
return "-fPIC"
return "-h PIC"
@property
def f77_pic_flag(self):
if self.is_clang_based:
return "-fPIC"
return "-h PIC"
@property
def fc_pic_flag(self):
if self.is_clang_based:
return "-fPIC"
return "-h PIC"
@property
def stdcxx_libs(self):
# Cray compiler wrappers link to the standard C++ library
# without additional flags.
return ()


@@ -0,0 +1,192 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import re
import llnl.util.lang
from spack.compiler import Compiler, UnsupportedCompilerFlag
from spack.version import Version
#: compiler symlink mappings for mixed f77 compilers
f77_mapping = [
("gfortran", os.path.join("clang", "gfortran")),
("xlf_r", os.path.join("xl_r", "xlf_r")),
("xlf", os.path.join("xl", "xlf")),
("ifort", os.path.join("intel", "ifort")),
]
#: compiler symlink mappings for mixed f90/fc compilers
fc_mapping = [
("gfortran", os.path.join("clang", "gfortran")),
("xlf90_r", os.path.join("xl_r", "xlf90_r")),
("xlf90", os.path.join("xl", "xlf90")),
("ifort", os.path.join("intel", "ifort")),
]
class Clang(Compiler):
version_argument = "--version"
@property
def debug_flags(self):
return [
"-gcodeview",
"-gdwarf-2",
"-gdwarf-3",
"-gdwarf-4",
"-gdwarf-5",
"-gline-tables-only",
"-gmodules",
"-g",
]
@property
def opt_flags(self):
return ["-O0", "-O1", "-O2", "-O3", "-Ofast", "-Os", "-Oz", "-Og", "-O", "-O4"]
# Clang has support for using different fortran compilers with the
# clang executable.
@property
def link_paths(self):
# clang links are always the same
link_paths = {
"cc": os.path.join("clang", "clang"),
"cxx": os.path.join("clang", "clang++"),
}
# fortran links need to look at the actual compiler names from
# compilers.yaml to figure out which named symlink to use
for compiler_name, link_path in f77_mapping:
if self.f77 and compiler_name in self.f77:
link_paths["f77"] = link_path
break
else:
link_paths["f77"] = os.path.join("clang", "flang")
for compiler_name, link_path in fc_mapping:
if self.fc and compiler_name in self.fc:
link_paths["fc"] = link_path
break
else:
link_paths["fc"] = os.path.join("clang", "flang")
return link_paths
@property
def verbose_flag(self):
return "-v"
openmp_flag = "-fopenmp"
# C++ flags based on CMake Modules/Compiler/Clang.cmake
@property
def cxx11_flag(self):
if self.real_version < Version("3.3"):
raise UnsupportedCompilerFlag(self, "the C++11 standard", "cxx11_flag", "< 3.3")
return "-std=c++11"
@property
def cxx14_flag(self):
if self.real_version < Version("3.4"):
raise UnsupportedCompilerFlag(self, "the C++14 standard", "cxx14_flag", "< 3.5")
elif self.real_version < Version("3.5"):
return "-std=c++1y"
return "-std=c++14"
@property
def cxx17_flag(self):
if self.real_version < Version("3.5"):
raise UnsupportedCompilerFlag(self, "the C++17 standard", "cxx17_flag", "< 3.5")
elif self.real_version < Version("5.0"):
return "-std=c++1z"
return "-std=c++17"
@property
def cxx20_flag(self):
if self.real_version < Version("5.0"):
raise UnsupportedCompilerFlag(self, "the C++20 standard", "cxx20_flag", "< 5.0")
elif self.real_version < Version("11.0"):
return "-std=c++2a"
else:
return "-std=c++20"
@property
def cxx23_flag(self):
if self.real_version < Version("12.0"):
raise UnsupportedCompilerFlag(self, "the C++23 standard", "cxx23_flag", "< 12.0")
elif self.real_version < Version("17.0"):
return "-std=c++2b"
else:
return "-std=c++23"
@property
def c99_flag(self):
return "-std=c99"
@property
def c11_flag(self):
if self.real_version < Version("3.0"):
raise UnsupportedCompilerFlag(self, "the C11 standard", "c11_flag", "< 3.0")
if self.real_version < Version("3.1"):
return "-std=c1x"
return "-std=c11"
@property
def c17_flag(self):
if self.real_version < Version("6.0"):
raise UnsupportedCompilerFlag(self, "the C17 standard", "c17_flag", "< 6.0")
return "-std=c17"
@property
def c23_flag(self):
if self.real_version < Version("9.0"):
raise UnsupportedCompilerFlag(self, "the C23 standard", "c23_flag", "< 9.0")
elif self.real_version < Version("18.0"):
return "-std=c2x"
else:
return "-std=c23"
@property
def cc_pic_flag(self):
return "-fPIC"
@property
def cxx_pic_flag(self):
return "-fPIC"
@property
def f77_pic_flag(self):
return "-fPIC"
@property
def fc_pic_flag(self):
return "-fPIC"
required_libs = ["libclang"]
@classmethod
@llnl.util.lang.memoized
def extract_version_from_output(cls, output):
ver = "unknown"
if ("Apple" in output) or ("AMD" in output):
return ver
match = re.search(
# Normal clang compiler versions are left as-is
r"(?:clang|flang-new) version ([^ )\n]+)-svn[~.\w\d-]*|"
# Don't include hyphenated patch numbers in the version
# (see https://github.com/spack/spack/pull/14365 for details)
r"(?:clang|flang-new) version ([^ )\n]+?)-[~.\w\d-]*|"
r"(?:clang|flang-new) version ([^ )\n]+)",
output,
)
if match:
ver = match.group(match.lastindex)
return ver
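# Spot-check of the clang version pattern above: hyphenated vendor suffixes
# are excluded from the captured version (the sample output is illustrative):
import re

_pattern = (
    r"(?:clang|flang-new) version ([^ )\n]+)-svn[~.\w\d-]*|"
    r"(?:clang|flang-new) version ([^ )\n]+?)-[~.\w\d-]*|"
    r"(?:clang|flang-new) version ([^ )\n]+)"
)
_m = re.search(_pattern, "clang version 16.0.6-1ubuntu1")
assert _m and _m.group(_m.lastindex) == "16.0.6"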


@@ -1,427 +0,0 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""This module contains functions related to finding compilers on the system,
and configuring Spack to use multiple compilers.
"""
import os
import re
import sys
import warnings
from typing import Any, Dict, List, Optional, Tuple
import archspec.cpu
import llnl.util.filesystem as fs
import llnl.util.lang
import llnl.util.tty as tty
import spack.config
import spack.detection
import spack.error
import spack.platforms
import spack.repo
import spack.spec
from spack.operating_systems import windows_os
from spack.util.environment import get_path
package_name_to_compiler_name = {
"llvm": "clang",
"intel-oneapi-compilers": "oneapi",
"llvm-amdgpu": "rocmcc",
"intel-oneapi-compilers-classic": "intel",
"acfl": "arm",
}
#: Tag used to identify packages providing a compiler
COMPILER_TAG = "compiler"
def compiler_config_files():
config_files = []
configuration = spack.config.CONFIG
for scope in configuration.writable_scopes:
name = scope.name
from_packages_yaml = CompilerFactory.from_packages_yaml(configuration, scope=name)
if from_packages_yaml:
config_files.append(configuration.get_config_filename(name, "packages"))
compiler_config = configuration.get("compilers", scope=name)
if compiler_config:
config_files.append(configuration.get_config_filename(name, "compilers"))
return config_files
def add_compiler_to_config(new_compilers, *, scope=None) -> None:
"""Add a Compiler object to the configuration, at the required scope."""
# FIXME (compiler as nodes): still needed to read Cray manifest
by_name: Dict[str, List["spack.spec.Spec"]] = {}
for x in new_compilers:
by_name.setdefault(x.name, []).append(x)
spack.detection.update_configuration(by_name, buildable=True, scope=scope)
def find_compilers(
path_hints: Optional[List[str]] = None,
*,
scope: Optional[str] = None,
max_workers: Optional[int] = None,
) -> List["spack.spec.Spec"]:
"""Searches for compiler in the paths given as argument. If any new compiler is found, the
configuration is updated, and the list of new compiler objects is returned.
Args:
path_hints: list of paths to search. A sensible default based on the ``PATH``
environment variable will be used if the value is None
scope: configuration scope to modify
max_workers: number of processes used to search for compilers
"""
if path_hints is None:
path_hints = get_path("PATH")
default_paths = fs.search_paths_for_executables(*path_hints)
if sys.platform == "win32":
default_paths.extend(windows_os.WindowsOs().compiler_search_paths)
compiler_pkgs = spack.repo.PATH.packages_with_tags(COMPILER_TAG, full=True)
detected_packages = spack.detection.by_path(
compiler_pkgs, path_hints=default_paths, max_workers=max_workers
)
new_compilers = spack.detection.update_configuration(
detected_packages, buildable=True, scope=scope
)
return new_compilers
def select_new_compilers(
candidates: List["spack.spec.Spec"], *, scope: Optional[str] = None
) -> List["spack.spec.Spec"]:
"""Given a list of compilers, remove those that are already defined in
the configuration.
"""
compilers_in_config = all_compilers_from(configuration=spack.config.CONFIG, scope=scope)
return [c for c in candidates if c not in compilers_in_config]
def supported_compilers() -> List[str]:
"""Returns all the currently supported compiler packages"""
return sorted(spack.repo.PATH.packages_with_tags(COMPILER_TAG))
def all_compilers(
scope: Optional[str] = None, init_config: bool = True
) -> List["spack.spec.Spec"]:
"""Returns all the compilers from the current global configuration.
Args:
scope: configuration scope from which to extract the compilers. If None, the merged
configuration is used.
init_config: if True, search for compilers if none is found in configuration.
"""
compilers = all_compilers_from(configuration=spack.config.CONFIG, scope=scope)
if not compilers and init_config:
find_compilers(scope=scope)
compilers = all_compilers_from(configuration=spack.config.CONFIG, scope=scope)
return compilers
def all_compilers_from(
configuration: "spack.config.ConfigurationType", scope: Optional[str] = None
) -> List["spack.spec.Spec"]:
"""Returns all the compilers from the current global configuration.
Args:
configuration: configuration to be queried
scope: configuration scope from which to extract the compilers. If None, the merged
configuration is used.
"""
compilers = CompilerFactory.from_packages_yaml(configuration, scope=scope)
if os.environ.get("SPACK_EXPERIMENTAL_DEPRECATE_COMPILERS_YAML") != "1":
legacy_compilers = CompilerFactory.from_compilers_yaml(configuration, scope=scope)
if legacy_compilers:
# FIXME (compiler as nodes): write how to update the file. Maybe an ad-hoc command
warnings.warn(
"Some compilers are still defined in 'compilers.yaml', which has been deprecated "
"in v0.23. Those configuration files will be ignored from Spack v0.25.\n"
)
for legacy in legacy_compilers:
if not any(c.satisfies(f"{legacy.name}@{legacy.versions}") for c in compilers):
compilers.append(legacy)
return compilers
class CompilerRemover:
"""Removes compiler from configuration."""
def __init__(self, configuration: "spack.config.ConfigurationType") -> None:
self.configuration = configuration
self.marked_packages_yaml: List[Tuple[str, Any]] = []
self.marked_compilers_yaml: List[Tuple[str, Any]] = []
def mark_compilers(
self, *, match: str, scope: Optional[str] = None
) -> List["spack.spec.Spec"]:
"""Marks compilers to be removed in configuration, and returns a corresponding list
of specs.
Args:
match: constraint that the compiler must match to be removed.
scope: scope where to remove the compiler. If None, all writeable scopes are checked.
"""
self.marked_packages_yaml = []
self.marked_compilers_yaml = []
candidate_scopes = [scope]
if scope is None:
candidate_scopes = [x.name for x in self.configuration.writable_scopes]
all_removals = self._mark_in_packages_yaml(match, candidate_scopes)
all_removals.extend(self._mark_in_compilers_yaml(match, candidate_scopes))
return all_removals
def _mark_in_packages_yaml(self, match, candidate_scopes):
compiler_package_names = supported_compilers()
all_removals = []
for current_scope in candidate_scopes:
packages_yaml = self.configuration.get("packages", scope=current_scope)
if not packages_yaml:
continue
removed_from_scope = []
for name, entry in packages_yaml.items():
if name not in compiler_package_names:
continue
externals_config = entry.get("externals", None)
if not externals_config:
continue
def _partition_match(external_yaml):
s = CompilerFactory.from_external_yaml(external_yaml)
return not s.satisfies(match)
to_keep, to_remove = llnl.util.lang.stable_partition(
externals_config, _partition_match
)
if not to_remove:
continue
removed_from_scope.extend(to_remove)
entry["externals"] = to_keep
if not removed_from_scope:
continue
self.marked_packages_yaml.append((current_scope, packages_yaml))
all_removals.extend(
[CompilerFactory.from_external_yaml(x) for x in removed_from_scope]
)
return all_removals
def _mark_in_compilers_yaml(self, match, candidate_scopes):
if os.environ.get("SPACK_EXPERIMENTAL_DEPRECATE_COMPILERS_YAML") == "1":
return []
all_removals = []
for current_scope in candidate_scopes:
compilers_yaml = self.configuration.get("compilers", scope=current_scope)
if not compilers_yaml:
continue
def _partition_match(entry):
external_specs = CompilerFactory.from_legacy_yaml(entry["compiler"])
return not any(x.satisfies(match) for x in external_specs)
to_keep, to_remove = llnl.util.lang.stable_partition(compilers_yaml, _partition_match)
if not to_remove:
continue
compilers_yaml[:] = to_keep
self.marked_compilers_yaml.append((current_scope, compilers_yaml))
for entry in to_remove:
all_removals.extend(CompilerFactory.from_legacy_yaml(entry["compiler"]))
return all_removals
def flush(self):
"""Removes from configuration the specs that have been marked by the previous call
of ``remove_compilers``.
"""
for scope, packages_yaml in self.marked_packages_yaml:
self.configuration.set("packages", packages_yaml, scope=scope)
for scope, compilers_yaml in self.marked_compilers_yaml:
self.configuration.set("compilers", compilers_yaml, scope=scope)
def compilers_for_spec(compiler_spec, *, arch_spec=None, scope=None, init_config=True):
"""This gets all compilers that satisfy the supplied CompilerSpec.
Returns an empty list if none are found.
"""
# FIXME (compiler as nodes): to be removed, or reimplemented
raise NotImplementedError("still to be implemented")
def compilers_for_arch(
arch_spec: "spack.spec.ArchSpec", *, scope: Optional[str] = None
) -> List["spack.spec.Spec"]:
"""Returns the compilers that can be used on the input architecture"""
compilers = all_compilers_from(spack.config.CONFIG, scope=scope)
query = f"platform={arch_spec.platform} target=:{arch_spec.target}"
return [x for x in compilers if x.satisfies(query)]
def class_for_compiler_name(compiler_name):
"""Given a compiler module name, get the corresponding Compiler class."""
# FIXME (compiler as nodes): to be removed, or reimplemented
raise NotImplementedError("still to be implemented")
_EXTRA_ATTRIBUTES_KEY = "extra_attributes"
_COMPILERS_KEY = "compilers"
_C_KEY = "c"
_CXX_KEY, _FORTRAN_KEY = "cxx", "fortran"
def name_os_target(spec: "spack.spec.Spec") -> Tuple[str, str, str]:
if not spec.architecture:
host_platform = spack.platforms.host()
operating_system = host_platform.operating_system("default_os")
target = host_platform.target("default_target")
else:
target = spec.architecture.target
if not target:
target = spack.platforms.host().target("default_target")
operating_system = spec.os
if not operating_system:
host_platform = spack.platforms.host()
operating_system = host_platform.operating_system("default_os")
return spec.name, str(operating_system), str(target)
class CompilerFactory:
"""Class aggregating all ways of constructing a list of compiler specs from config entries."""
_PACKAGES_YAML_CACHE: Dict[str, Optional["spack.spec.Spec"]] = {}
_COMPILERS_YAML_CACHE: Dict[str, List["spack.spec.Spec"]] = {}
_GENERIC_TARGET = None
@staticmethod
def from_packages_yaml(
configuration: "spack.config.ConfigurationType", *, scope: Optional[str] = None
) -> List["spack.spec.Spec"]:
"""Returns the compiler specs defined in the "packages" section of the configuration"""
compilers = []
compiler_package_names = supported_compilers()
packages_yaml = configuration.get("packages", scope=scope)
for name, entry in packages_yaml.items():
if name not in compiler_package_names:
continue
externals_config = entry.get("externals", None)
if not externals_config:
continue
compiler_specs = []
for current_external in externals_config:
key = str(current_external)
if key not in CompilerFactory._PACKAGES_YAML_CACHE:
CompilerFactory._PACKAGES_YAML_CACHE[key] = CompilerFactory.from_external_yaml(
current_external
)
compiler = CompilerFactory._PACKAGES_YAML_CACHE[key]
if compiler:
compiler_specs.append(compiler)
compilers.extend(compiler_specs)
return compilers
@staticmethod
def from_external_yaml(config: Dict[str, Any]) -> Optional["spack.spec.Spec"]:
"""Returns a compiler spec from an external definition from packages.yaml."""
        err_header = f"The external spec '{config['spec']}' cannot be used as a compiler"
        # If 'extra_attributes' is missing, this entry may not be meant to be used as
        # a compiler, so just leave a debug message rather than a loud warning.
if _EXTRA_ATTRIBUTES_KEY not in config:
tty.debug(f"[{__file__}] {err_header}: missing the '{_EXTRA_ATTRIBUTES_KEY}' key")
return None
extra_attributes = config[_EXTRA_ATTRIBUTES_KEY]
        result = spack.spec.Spec(
            # Allow `@x.y.z` instead of `@=x.y.z`
            str(spack.spec.parse_with_version_concrete(config["spec"])),
external_path=config.get("prefix"),
external_modules=config.get("modules"),
)
result.extra_attributes = extra_attributes
CompilerFactory._finalize_external_concretization(result)
return result
@staticmethod
def _finalize_external_concretization(abstract_spec):
if CompilerFactory._GENERIC_TARGET is None:
CompilerFactory._GENERIC_TARGET = archspec.cpu.host().family
if abstract_spec.architecture:
abstract_spec.architecture.complete_with_defaults()
else:
abstract_spec.constrain(spack.spec.Spec.default_arch())
abstract_spec.architecture.target = CompilerFactory._GENERIC_TARGET
abstract_spec._finalize_concretization()
@staticmethod
def from_legacy_yaml(compiler_dict: Dict[str, Any]) -> List["spack.spec.Spec"]:
"""Returns a list of external specs, corresponding to a compiler entry
from compilers.yaml.
"""
from spack.detection.path import ExecutablesFinder
# FIXME (compiler as nodes): should we look at targets too?
result = []
candidate_paths = [x for x in compiler_dict["paths"].values() if x is not None]
finder = ExecutablesFinder()
for pkg_name in spack.repo.PATH.packages_with_tags("compiler"):
pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
pattern = re.compile(r"|".join(finder.search_patterns(pkg=pkg_cls)))
filtered_paths = [x for x in candidate_paths if pattern.search(os.path.basename(x))]
detected = finder.detect_specs(pkg=pkg_cls, paths=filtered_paths)
result.extend(detected)
for item in result:
CompilerFactory._finalize_external_concretization(item)
return result
@staticmethod
def from_compilers_yaml(
configuration: "spack.config.ConfigurationType", *, scope: Optional[str] = None
) -> List["spack.spec.Spec"]:
"""Returns the compiler specs defined in the "compilers" section of the configuration"""
result: List["spack.spec.Spec"] = []
for item in configuration.get("compilers", scope=scope):
key = str(item)
if key not in CompilerFactory._COMPILERS_YAML_CACHE:
CompilerFactory._COMPILERS_YAML_CACHE[key] = CompilerFactory.from_legacy_yaml(
item["compiler"]
)
result.extend(CompilerFactory._COMPILERS_YAML_CACHE[key])
return result
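# A minimal sketch of feeding a packages.yaml external entry, expressed as a
# dict, to CompilerFactory.from_external_yaml (all values hypothetical):
def _example_external_compiler():
    entry = {
        "spec": "gcc@12.2.0",
        "prefix": "/usr",
        "extra_attributes": {
            "compilers": {"c": "/usr/bin/gcc", "cxx": "/usr/bin/g++"}
        },
    }
    # Returns a concrete compiler Spec; entries without 'extra_attributes'
    # yield None and are skipped by from_packages_yaml.
    return CompilerFactory.from_external_yaml(entry)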
class UnknownCompilerError(spack.error.SpackError):
def __init__(self, compiler_name):
super().__init__(f"Spack doesn't support the requested compiler: {compiler_name}")


@@ -1,19 +0,0 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from ..error import SpackError
class CompilerAccessError(SpackError):
def __init__(self, compiler, paths):
super().__init__(
f"Compiler '{compiler.spec}' has executables that are missing"
f" or are not executable: {paths}"
)
class UnsupportedCompilerFlag(SpackError):
"""Raised when a compiler does not support a flag type (e.g. a flag to enforce a
language standard).
"""


@@ -0,0 +1,79 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import spack.compiler
class Fj(spack.compiler.Compiler):
# Named wrapper links within build_env_path
link_paths = {
"cc": os.path.join("fj", "fcc"),
"cxx": os.path.join("fj", "case-insensitive", "FCC"),
"f77": os.path.join("fj", "frt"),
"fc": os.path.join("fj", "frt"),
}
version_argument = "--version"
version_regex = r"\((?:FCC|FRT)\) ([a-z\d.]+)"
required_libs = ["libfj90i", "libfj90f", "libfjsrcinfo"]
@property
def verbose_flag(self):
return "-v"
@property
def debug_flags(self):
return "-g"
@property
def opt_flags(self):
return ["-O0", "-O1", "-O2", "-O3", "-Ofast"]
@property
def openmp_flag(self):
return "-Kopenmp"
@property
def cxx98_flag(self):
return "-std=c++98"
@property
def cxx11_flag(self):
return "-std=c++11"
@property
def cxx14_flag(self):
return "-std=c++14"
@property
def cxx17_flag(self):
return "-std=c++17"
@property
def c99_flag(self):
return "-std=c99"
@property
def c11_flag(self):
return "-std=c11"
@property
def cc_pic_flag(self):
return "-KPIC"
@property
def cxx_pic_flag(self):
return "-KPIC"
@property
def f77_pic_flag(self):
return "-KPIC"
@property
def fc_pic_flag(self):
return "-KPIC"


@@ -1,26 +0,0 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from typing import List, Tuple
def tokenize_flags(flags_values: str, propagate: bool = False) -> List[Tuple[str, bool]]:
"""Given a compiler flag specification as a string, this returns a list
where the entries are the flags. For compiler options which set values
using the syntax "-flag value", this function groups flags and their
values together. Any token not preceded by a "-" is considered the
value of a prior flag."""
tokens = flags_values.split()
if not tokens:
return []
flag = tokens[0]
flags_with_propagation = []
for token in tokens[1:]:
if not token.startswith("-"):
flag += " " + token
else:
flags_with_propagation.append((flag, propagate))
flag = token
flags_with_propagation.append((flag, propagate))
return flags_with_propagation
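# A quick illustration of the grouping rule above:
#
#   tokenize_flags("-I /usr/include -L/lib -O2")
#   -> [("-I /usr/include", False), ("-L/lib", False), ("-O2", False)]
#
# "/usr/include" does not start with "-", so it is folded into the preceding
# "-I" flag; each entry carries the ``propagate`` boolean passed in.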


@@ -0,0 +1,191 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
from llnl.util.filesystem import ancestor
import spack.compiler
import spack.compilers.apple_clang as apple_clang
import spack.util.executable
from spack.version import Version
class Gcc(spack.compiler.Compiler):
# MacPorts builds gcc versions with prefixes and -mp-X or -mp-X.Y suffixes.
# Homebrew and Linuxbrew may build gcc with -X, -X.Y suffixes.
# Old compatibility versions may contain XY suffixes.
suffixes = [r"-mp-\d+(?:\.\d+)?", r"-\d+(?:\.\d+)?", r"\d\d"]
# Named wrapper links within build_env_path
link_paths = {
"cc": os.path.join("gcc", "gcc"),
"cxx": os.path.join("gcc", "g++"),
"f77": os.path.join("gcc", "gfortran"),
"fc": os.path.join("gcc", "gfortran"),
}
@property
def verbose_flag(self):
return "-v"
@property
def debug_flags(self):
return ["-g", "-gstabs+", "-gstabs", "-gxcoff+", "-gxcoff", "-gvms"]
@property
def opt_flags(self):
return ["-O", "-O0", "-O1", "-O2", "-O3", "-Os", "-Ofast", "-Og"]
@property
def openmp_flag(self):
return "-fopenmp"
@property
def cxx98_flag(self):
if self.real_version < Version("6.0"):
return ""
else:
return "-std=c++98"
@property
def cxx11_flag(self):
if self.real_version < Version("4.3"):
raise spack.compiler.UnsupportedCompilerFlag(
self, "the C++11 standard", "cxx11_flag", " < 4.3"
)
elif self.real_version < Version("4.7"):
return "-std=c++0x"
else:
return "-std=c++11"
@property
def cxx14_flag(self):
if self.real_version < Version("4.8"):
raise spack.compiler.UnsupportedCompilerFlag(
self, "the C++14 standard", "cxx14_flag", "< 4.8"
)
elif self.real_version < Version("4.9"):
return "-std=c++1y"
else:
return "-std=c++14"
@property
def cxx17_flag(self):
if self.real_version < Version("5.0"):
raise spack.compiler.UnsupportedCompilerFlag(
self, "the C++17 standard", "cxx17_flag", "< 5.0"
)
elif self.real_version < Version("6.0"):
return "-std=c++1z"
else:
return "-std=c++17"
@property
def cxx20_flag(self):
if self.real_version < Version("8.0"):
raise spack.compiler.UnsupportedCompilerFlag(
self, "the C++20 standard", "cxx20_flag", "< 8.0"
)
elif self.real_version < Version("11.0"):
return "-std=c++2a"
else:
return "-std=c++20"
@property
def cxx23_flag(self):
if self.real_version < Version("11.0"):
raise spack.compiler.UnsupportedCompilerFlag(
self, "the C++23 standard", "cxx23_flag", "< 11.0"
)
elif self.real_version < Version("14.0"):
return "-std=c++2b"
else:
return "-std=c++23"
@property
def c99_flag(self):
if self.real_version < Version("4.5"):
raise spack.compiler.UnsupportedCompilerFlag(
self, "the C99 standard", "c99_flag", "< 4.5"
)
return "-std=c99"
@property
def c11_flag(self):
if self.real_version < Version("4.7"):
raise spack.compiler.UnsupportedCompilerFlag(
self, "the C11 standard", "c11_flag", "< 4.7"
)
return "-std=c11"
@property
def cc_pic_flag(self):
return "-fPIC"
@property
def cxx_pic_flag(self):
return "-fPIC"
@property
def f77_pic_flag(self):
return "-fPIC"
@property
def fc_pic_flag(self):
return "-fPIC"
required_libs = ["libgcc", "libgfortran"]
@classmethod
def default_version(cls, cc):
"""Older versions of gcc use the ``-dumpversion`` option.
Output looks like this::
4.4.7
In GCC 7, this option was changed to only return the major
version of the compiler::
7
A new ``-dumpfullversion`` option was added that gives us
what we want::
7.2.0
"""
# Apple's gcc is actually apple clang, so skip it. Returning
# "unknown" ensures this compiler is not detected by default.
# Users can add it manually to compilers.yaml at their own risk.
if apple_clang.AppleClang.default_version(cc) != "unknown":
return "unknown"
version = super(Gcc, cls).default_version(cc)
if Version(version) >= Version("7"):
output = spack.compiler.get_compiler_version_output(cc, "-dumpfullversion")
version = cls.extract_version_from_output(output)
return version
@property
def stdcxx_libs(self):
return ("-lstdc++",)
@property
def prefix(self):
# GCC reports its install prefix when running ``-print-search-dirs``
# on the first line ``install: <prefix>``.
cc = spack.util.executable.Executable(self.cc)
with self.compiler_environment():
gcc_output = cc("-print-search-dirs", output=str, error=str)
for line in gcc_output.splitlines():
if line.startswith("install:"):
gcc_prefix = line.split(":")[1].strip()
# Go from <prefix>/lib/gcc/<triplet>/<version>/ to <prefix>
return ancestor(gcc_prefix, 4)
raise RuntimeError(
"could not find install prefix of GCC from output:\n\t{}".format(gcc_output)
)
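# A small illustration of the prefix logic above (output hypothetical): if
# ``gcc -print-search-dirs`` prints
#   install: /usr/lib/gcc/x86_64-linux-gnu/12/
# the path after "install:" is walked up four levels with ancestor(), giving
# the prefix "/usr".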


@@ -0,0 +1,131 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import sys
from spack.compiler import Compiler, UnsupportedCompilerFlag
from spack.version import Version
class Intel(Compiler):
# Named wrapper links within build_env_path
link_paths = {
"cc": os.path.join("intel", "icc"),
"cxx": os.path.join("intel", "icpc"),
"f77": os.path.join("intel", "ifort"),
"fc": os.path.join("intel", "ifort"),
}
if sys.platform == "win32":
version_argument = "/QV"
else:
version_argument = "--version"
if sys.platform == "win32":
version_regex = r"([1-9][0-9]*\.[0-9]*\.[0-9]*)"
else:
version_regex = r"\((?:IFORT|ICC)\) ([^ ]+)"
@property
def verbose_flag(self):
return "-v"
required_libs = ["libirc", "libifcore", "libifcoremt", "libirng"]
@property
def debug_flags(self):
return ["-debug", "-g", "-g0", "-g1", "-g2", "-g3"]
@property
def opt_flags(self):
return ["-O", "-O0", "-O1", "-O2", "-O3", "-Ofast", "-Os"]
@property
def openmp_flag(self):
if self.real_version < Version("16.0"):
return "-openmp"
else:
return "-qopenmp"
@property
def cxx11_flag(self):
if self.real_version < Version("11.1"):
raise UnsupportedCompilerFlag(self, "the C++11 standard", "cxx11_flag", "< 11.1")
elif self.real_version < Version("13"):
return "-std=c++0x"
else:
return "-std=c++11"
@property
def cxx14_flag(self):
# Adapted from CMake's Intel-CXX rules.
if self.real_version < Version("15"):
raise UnsupportedCompilerFlag(self, "the C++14 standard", "cxx14_flag", "< 15")
elif self.real_version < Version("15.0.2"):
return "-std=c++1y"
else:
return "-std=c++14"
@property
def cxx17_flag(self):
# https://www.intel.com/content/www/us/en/developer/articles/news/c17-features-supported-by-c-compiler.html
if self.real_version < Version("19"):
raise UnsupportedCompilerFlag(self, "the C++17 standard", "cxx17_flag", "< 19")
else:
return "-std=c++17"
@property
def c99_flag(self):
if self.real_version < Version("12"):
raise UnsupportedCompilerFlag(self, "the C99 standard", "c99_flag", "< 12")
else:
return "-std=c99"
@property
def c11_flag(self):
if self.real_version < Version("16"):
raise UnsupportedCompilerFlag(self, "the C11 standard", "c11_flag", "< 16")
else:
return "-std=c1x"
@property
def c18_flag(self):
# c18 supported since oneapi 2022, which is classic version 2021.5.0
if self.real_version < Version("21.5.0"):
raise UnsupportedCompilerFlag(self, "the C18 standard", "c18_flag", "< 21.5.0")
else:
return "-std=c18"
@property
def cc_pic_flag(self):
return "-fPIC"
@property
def cxx_pic_flag(self):
return "-fPIC"
@property
def f77_pic_flag(self):
return "-fPIC"
@property
def fc_pic_flag(self):
return "-fPIC"
@property
def stdcxx_libs(self):
return ("-cxxlib",)
def setup_custom_environment(self, pkg, env):
# Edge cases for Intel's oneAPI compilers when using the legacy classic compilers:
# Always pass flags to disable deprecation warnings, since these warnings can
# confuse tools that parse the output of compiler commands (e.g. version checks).
if self.real_version >= Version("2021") and self.real_version < Version("2024"):
env.append_flags("SPACK_ALWAYS_CFLAGS", "-diag-disable=10441")
env.append_flags("SPACK_ALWAYS_CXXFLAGS", "-diag-disable=10441")
if self.real_version >= Version("2021") and self.real_version < Version("2025"):
env.append_flags("SPACK_ALWAYS_FFLAGS", "-diag-disable=10448")


@@ -1,426 +0,0 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import contextlib
import hashlib
import json
import os
import re
import shutil
import stat
import sys
import tempfile
import typing
from typing import Dict, List, Optional, Set, Tuple
import llnl.path
import llnl.util.lang
from llnl.util import tty
from llnl.util.filesystem import path_contains_subdirectory, paths_containing_libs
import spack.caches
import spack.util.libc
from spack.util.environment import filter_system_paths
from spack.util.file_cache import FileCache
if typing.TYPE_CHECKING:
import spack.spec
#: regex for parsing linker lines
_LINKER_LINE = re.compile(r"^( *|.*[/\\])" r"(link|ld|([^/\\]+-)?ld|collect2)" r"[^/\\]*( |$)")
#: components of linker lines to ignore
_LINKER_LINE_IGNORE = re.compile(r"(collect2 version|^[A-Za-z0-9_]+=|/ldfe )")
#: regex to match linker search paths
_LINK_DIR_ARG = re.compile(r"^-L(.:)?(?P<dir>[/\\].*)")
#: regex to match linker library path arguments
_LIBPATH_ARG = re.compile(r"^[-/](LIBPATH|libpath):(?P<dir>.*)")
@llnl.path.system_path_filter
def parse_non_system_link_dirs(compiler_debug_output: str) -> List[str]:
"""Parses link paths out of compiler debug output.
Args:
compiler_debug_output: compiler debug output as a string
Returns:
Implicit link paths parsed from the compiler output
"""
link_dirs = _parse_link_paths(compiler_debug_output)
# Remove directories that do not exist. Some versions of the Cray compiler
# report nonexistent directories
link_dirs = filter_non_existing_dirs(link_dirs)
# Return set of directories containing needed compiler libs, minus
# system paths. Note that 'filter_system_paths' only checks for an
# exact match, while 'in_system_subdirectory' checks if a path contains
# a system directory as a subdirectory
link_dirs = filter_system_paths(link_dirs)
return list(p for p in link_dirs if not in_system_subdirectory(p))
def filter_non_existing_dirs(dirs):
return [d for d in dirs if os.path.isdir(d)]
def in_system_subdirectory(path):
system_dirs = [
"/lib/",
"/lib64/",
"/usr/lib/",
"/usr/lib64/",
"/usr/local/lib/",
"/usr/local/lib64/",
]
return any(path_contains_subdirectory(path, x) for x in system_dirs)
def _parse_link_paths(string):
"""Parse implicit link paths from compiler debug output.
This gives the compiler runtime library paths that we need to add to
the RPATH of generated binaries and libraries. It allows us to
ensure, e.g., that codes load the right libstdc++ for their compiler.
"""
lib_search_paths = False
raw_link_dirs = []
for line in string.splitlines():
if lib_search_paths:
if line.startswith("\t"):
raw_link_dirs.append(line[1:])
continue
else:
lib_search_paths = False
elif line.startswith("Library search paths:"):
lib_search_paths = True
if not _LINKER_LINE.match(line):
continue
if _LINKER_LINE_IGNORE.match(line):
continue
tty.debug(f"implicit link dirs: link line: {line}")
next_arg = False
for arg in line.split():
if arg in ("-L", "-Y"):
next_arg = True
continue
if next_arg:
raw_link_dirs.append(arg)
next_arg = False
continue
link_dir_arg = _LINK_DIR_ARG.match(arg)
if link_dir_arg:
link_dir = link_dir_arg.group("dir")
raw_link_dirs.append(link_dir)
link_dir_arg = _LIBPATH_ARG.match(arg)
if link_dir_arg:
link_dir = link_dir_arg.group("dir")
raw_link_dirs.append(link_dir)
implicit_link_dirs = list()
visited = set()
for link_dir in raw_link_dirs:
normalized_path = os.path.abspath(link_dir)
if normalized_path not in visited:
implicit_link_dirs.append(normalized_path)
visited.add(normalized_path)
tty.debug(f"implicit link dirs: result: {', '.join(implicit_link_dirs)}")
return implicit_link_dirs
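# A minimal illustration (verbose output hypothetical): a link line such as
#   /usr/libexec/gcc/x86_64-linux-gnu/12/collect2 ... -L/usr/lib/gcc/x86_64-linux-gnu/12 -L/usr/lib
# matches _LINKER_LINE, each -L directory is collected, and the result is
# normalized with os.path.abspath() and deduplicated in first-seen order.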
class CompilerPropertyDetector:
def __init__(self, compiler_spec: "spack.spec.Spec"):
assert compiler_spec.external, "only external compiler specs are allowed, so far"
assert compiler_spec.concrete, "only concrete compiler specs are allowed, so far"
self.spec = compiler_spec
self.cache = COMPILER_CACHE
@contextlib.contextmanager
def compiler_environment(self):
"""Sets the environment to run this compiler"""
import spack.schema.environment
import spack.util.module_cmd
# Avoid modifying os.environ if possible.
environment = self.spec.extra_attributes.get("environment", {})
modules = self.spec.external_modules or []
if not self.spec.external_modules and not environment:
yield
return
# store environment to replace later
backup_env = os.environ.copy()
try:
# load modules and set env variables
for module in modules:
spack.util.module_cmd.load_module(module)
# apply other compiler environment changes
spack.schema.environment.parse(environment).apply_modifications()
yield
finally:
# Restore environment regardless of whether inner code succeeded
os.environ.clear()
os.environ.update(backup_env)
def _compile_dummy_c_source(self) -> Optional[str]:
import spack.util.executable
assert self.spec.external, "only external compiler specs are allowed, so far"
compiler_pkg = self.spec.package
if getattr(compiler_pkg, "cc"):
cc = compiler_pkg.cc
ext = "c"
else:
cc = compiler_pkg.cxx
ext = "cc"
if not cc or not self.spec.package.verbose_flags:
return None
        # Create the temporary directory before entering the try block, so the
        # cleanup in the finally clause never references an unbound name.
        tmpdir = tempfile.mkdtemp(prefix="spack-implicit-link-info")
        try:
fout = os.path.join(tmpdir, "output")
fin = os.path.join(tmpdir, f"main.{ext}")
with open(fin, "w") as csource:
csource.write(
"int main(int argc, char* argv[]) { (void)argc; (void)argv; return 0; }\n"
)
cc_exe = spack.util.executable.Executable(cc)
# FIXME (compiler as nodes): this operation should be encapsulated somewhere else
compiler_flags = self.spec.extra_attributes.get("flags", {})
for flag_type in [
"cflags" if cc == compiler_pkg.cc else "cxxflags",
"cppflags",
"ldflags",
]:
current_flags = compiler_flags.get(flag_type, "").strip()
if current_flags:
cc_exe.add_default_arg(*current_flags.split(" "))
with self.compiler_environment():
return cc_exe("-v", fin, "-o", fout, output=str, error=str)
except spack.util.executable.ProcessError as pe:
tty.debug(f"ProcessError: Command exited with non-zero status: {pe.long_message}")
return None
finally:
shutil.rmtree(tmpdir, ignore_errors=True)
def compiler_verbose_output(self) -> Optional[str]:
return self.cache.get(self.spec).c_compiler_output
def default_dynamic_linker(self) -> Optional[str]:
output = self.compiler_verbose_output()
if not output:
return None
return spack.util.libc.parse_dynamic_linker(output)
def default_libc(self) -> Optional["spack.spec.Spec"]:
"""Determine libc targeted by the compiler from link line"""
# technically this should be testing the target platform of the compiler, but we don't have
# that, so stick to host platform for now.
if sys.platform in ("darwin", "win32"):
return None
dynamic_linker = self.default_dynamic_linker()
if dynamic_linker is None:
return None
return spack.util.libc.libc_from_dynamic_linker(dynamic_linker)
def implicit_rpaths(self) -> List[str]:
output = self.compiler_verbose_output()
if output is None:
return []
link_dirs = parse_non_system_link_dirs(output)
all_required_libs = list(self.spec.package.required_libs) + ["libc", "libc++", "libstdc++"]
dynamic_linker = self.default_dynamic_linker()
# FIXME (compiler as nodes): is this needed ?
# if dynamic_linker is None:
# return []
result = DefaultDynamicLinkerFilter(dynamic_linker)(
paths_containing_libs(link_dirs, all_required_libs)
)
return list(result)
class DefaultDynamicLinkerFilter:
"""Remove rpaths to directories that are default search paths of the dynamic linker."""
_CACHE: Dict[Optional[str], Set[Tuple[int, int]]] = {}
def __init__(self, dynamic_linker: Optional[str]) -> None:
if dynamic_linker not in DefaultDynamicLinkerFilter._CACHE:
# Identify directories by (inode, device) tuple, which handles symlinks too.
default_path_identifiers: Set[Tuple[int, int]] = set()
if not dynamic_linker:
self.default_path_identifiers = None
return
for path in spack.util.libc.default_search_paths_from_dynamic_linker(dynamic_linker):
try:
s = os.stat(path)
if stat.S_ISDIR(s.st_mode):
default_path_identifiers.add((s.st_ino, s.st_dev))
except OSError:
continue
DefaultDynamicLinkerFilter._CACHE[dynamic_linker] = default_path_identifiers
self.default_path_identifiers = DefaultDynamicLinkerFilter._CACHE[dynamic_linker]
def is_dynamic_loader_default_path(self, p: str) -> bool:
if self.default_path_identifiers is None:
return False
try:
s = os.stat(p)
return (s.st_ino, s.st_dev) in self.default_path_identifiers
except OSError:
return False
def __call__(self, dirs: List[str]) -> List[str]:
if not self.default_path_identifiers:
return dirs
return [p for p in dirs if not self.is_dynamic_loader_default_path(p)]
def dynamic_linker_filter_for(node: "spack.spec.Spec") -> Optional[DefaultDynamicLinkerFilter]:
compiler = compiler_spec(node)
if compiler is None:
return None
detector = CompilerPropertyDetector(compiler)
dynamic_linker = detector.default_dynamic_linker()
if dynamic_linker is None:
return None
return DefaultDynamicLinkerFilter(dynamic_linker)
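# A minimal usage sketch (the node and directories are hypothetical): drop
# rpath candidates that the dynamic linker already searches by default.
def _example_filter_rpaths(node, candidate_dirs):
    linker_filter = dynamic_linker_filter_for(node)
    if linker_filter is None:  # no compiler, or no detectable dynamic linker
        return candidate_dirs
    return linker_filter(candidate_dirs)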
def compiler_spec(node: "spack.spec.Spec") -> Optional["spack.spec.Spec"]:
"""Returns the compiler spec associated with the node passed as argument.
The function looks for a "c", "cxx", and "fortran" compiler in that order,
and returns the first found. If none is found, returns None.
"""
for language in ("c", "cxx", "fortran"):
candidates = node.dependencies(virtuals=[language])
if candidates:
break
else:
return None
return candidates[0]
class CompilerCacheEntry:
"""Deserialized cache entry for a compiler"""
__slots__ = ["c_compiler_output"]
def __init__(self, c_compiler_output: Optional[str]):
self.c_compiler_output = c_compiler_output
@classmethod
def from_dict(cls, data: Dict[str, Optional[str]]):
if not isinstance(data, dict):
raise ValueError(f"Invalid {cls.__name__} data")
c_compiler_output = data.get("c_compiler_output")
if not isinstance(c_compiler_output, (str, type(None))):
raise ValueError(f"Invalid {cls.__name__} data")
return cls(c_compiler_output)
class CompilerCache:
"""Base class for compiler output cache. Default implementation does not cache anything."""
def value(self, compiler: "spack.spec.Spec") -> Dict[str, Optional[str]]:
return {"c_compiler_output": CompilerPropertyDetector(compiler)._compile_dummy_c_source()}
def get(self, compiler: "spack.spec.Spec") -> CompilerCacheEntry:
return CompilerCacheEntry.from_dict(self.value(compiler))
class FileCompilerCache(CompilerCache):
"""Cache for compiler output, which is used to determine implicit link paths, the default libc
version, and the compiler version."""
name = os.path.join("compilers", "compilers.json")
def __init__(self, cache: "FileCache") -> None:
self.cache = cache
self.cache.init_entry(self.name)
self._data: Dict[str, Dict[str, Optional[str]]] = {}
def _get_entry(self, key: str) -> Optional[CompilerCacheEntry]:
try:
return CompilerCacheEntry.from_dict(self._data[key])
except ValueError:
del self._data[key]
except KeyError:
pass
return None
def get(self, compiler: "spack.spec.Spec") -> CompilerCacheEntry:
# Cache hit
try:
with self.cache.read_transaction(self.name) as f:
assert f is not None
self._data = json.loads(f.read())
assert isinstance(self._data, dict)
except (json.JSONDecodeError, AssertionError):
self._data = {}
key = self._key(compiler)
value = self._get_entry(key)
if value is not None:
return value
# Cache miss
with self.cache.write_transaction(self.name) as (old, new):
try:
assert old is not None
self._data = json.loads(old.read())
assert isinstance(self._data, dict)
except (json.JSONDecodeError, AssertionError):
self._data = {}
# Use cache entry that may have been created by another process in the meantime.
entry = self._get_entry(key)
# Finally compute the cache entry
if entry is None:
self._data[key] = self.value(compiler)
entry = CompilerCacheEntry.from_dict(self._data[key])
new.write(json.dumps(self._data, separators=(",", ":")))
return entry
def _key(self, compiler: "spack.spec.Spec") -> str:
as_bytes = json.dumps(compiler.to_dict(), separators=(",", ":")).encode("utf-8")
return hashlib.sha256(as_bytes).hexdigest()
def _make_compiler_cache():
return FileCompilerCache(spack.caches.MISC_CACHE)
COMPILER_CACHE: CompilerCache = llnl.util.lang.Singleton(_make_compiler_cache) # type: ignore
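# A minimal usage sketch (the compiler spec is hypothetical): the cache key is
# the sha256 of the spec's canonical JSON, and the entry memoizes the verbose
# compiler output used for implicit-rpath and libc detection.
def _example_cached_output(compiler_spec):
    entry = COMPILER_CACHE.get(compiler_spec)
    return entry.c_compiler_output  # None if the test compilation failed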


@@ -0,0 +1,394 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import re
import subprocess
import sys
import tempfile
from typing import Dict
import archspec.cpu
import spack.compiler
import spack.operating_systems.windows_os
import spack.platforms
import spack.util.executable
from spack.compiler import Compiler
from spack.error import SpackError
from spack.version import Version, VersionRange
FC_PATH: Dict[str, str] = dict()
class CmdCall:
"""Compose a call to `cmd` for an ordered series of cmd commands/scripts"""
def __init__(self, *cmds):
if not cmds:
raise RuntimeError(
"""Attempting to run commands from CMD without specifying commands.
Please add commands to be run."""
)
self._cmds = cmds
def __call__(self):
out = subprocess.check_output(self.cmd_line, stderr=subprocess.STDOUT) # novermin
return out.decode("utf-16le", errors="replace") # novermin
@property
def cmd_line(self):
base_call = "cmd /u /c "
commands = " && ".join([x.command_str() for x in self._cmds])
# If multiple commands are being invoked by a single subshell
# they must be encapsulated by a double quote. Always double
# quote to be sure of proper handling
# cmd will properly resolve nested double quotes as needed
#
        # `set` writes out the active env to the subshell stdout,
# and in this context we are always trying to obtain env
# state so it should always be appended
return base_call + f'"{commands} && set"'
class VarsInvocation:
def __init__(self, script):
self._script = script
def command_str(self):
return f'"{self._script}"'
@property
def script(self):
return self._script
class VCVarsInvocation(VarsInvocation):
def __init__(self, script, arch, msvc_version):
super(VCVarsInvocation, self).__init__(script)
self._arch = arch
self._msvc_version = msvc_version
@property
def sdk_ver(self):
"""Accessor for Windows SDK version property
Note: This property may not be set by
the calling context and as such this property will
return an empty string
This property will ONLY be set if the SDK package
is a dependency somewhere in the Spack DAG of the package
for which we are constructing an MSVC compiler env.
Otherwise this property should be unset to allow the VCVARS
script to use its internal heuristics to determine appropriate
SDK version
"""
if getattr(self, "_sdk_ver", None):
return self._sdk_ver + ".0"
return ""
@sdk_ver.setter
def sdk_ver(self, val):
self._sdk_ver = val
@property
def arch(self):
return self._arch
@property
def vcvars_ver(self):
return f"-vcvars_ver={self._msvc_version}"
def command_str(self):
script = super(VCVarsInvocation, self).command_str()
return f"{script} {self.arch} {self.sdk_ver} {self.vcvars_ver}"
def get_valid_fortran_pth():
"""Assign maximum available fortran compiler version"""
# TODO (johnwparent): validate compatibility w/ try compiler
# functionality when added
    sort_fc_ver = sorted(FC_PATH.keys(), key=Version)
    return FC_PATH[sort_fc_ver[-1]] if sort_fc_ver else None
class Msvc(Compiler):
# Named wrapper links within build_env_path
# Due to the challenges of supporting compiler wrappers
# in Windows, we leave these blank, and dynamically compute
# based on proper versions of MSVC from there
# pending acceptance of #28117 for full support using
# compiler wrappers
link_paths = {"cc": "", "cxx": "", "f77": "", "fc": ""}
#: Compiler argument that produces version information
version_argument = ""
# For getting ifx's version, call it with version_argument
# and ignore the error code
ignore_version_errors = [1]
#: Regex used to extract version from compiler's output
version_regex = r"([1-9][0-9]*\.[0-9]*\.[0-9]*)"
# The MSVC compiler class overrides this to prevent instances
# of erroneous matching on executable names that cannot be msvc
# compilers
suffixes = []
is_supported_on_platform = lambda x: isinstance(x, spack.platforms.Windows)
def __init__(self, *args, **kwargs):
# This positional argument "paths" is later parsed and process by the base class
# via the call to `super` later in this method
paths = args[3]
latest_fc = get_valid_fortran_pth()
new_pth = [pth if pth else latest_fc for pth in paths[2:]]
paths[2:] = new_pth
# Initialize, deferring to base class but then adding the vcvarsallfile
# file based on compiler executable path.
super().__init__(*args, **kwargs)
        # To use the MSVC compilers, VCVARS must be invoked.
        # VCVARS is located at a fixed location, referenceable
        # idiomatically by the following relative path from the
        # compiler.
        # Spack first finds the compilers via VSWHERE
        # and stores their path, but their respective VCVARS
        # file must be invoked before use.
env_cmds = []
compiler_root = os.path.join(os.path.dirname(self.cc), "../../../../../..")
vcvars_script_path = os.path.join(compiler_root, "Auxiliary", "Build", "vcvars64.bat")
# get current platform architecture and format for vcvars argument
arch = spack.platforms.real_host().default.lower()
arch = arch.replace("-", "_")
if str(archspec.cpu.host().family) == "x86_64":
arch = "amd64"
self.vcvars_call = VCVarsInvocation(vcvars_script_path, arch, self.msvc_version)
env_cmds.append(self.vcvars_call)
# Below is a check for a valid fortran path
# paths has c, cxx, fc, and f77 paths in that order
# paths[2] refers to the fc path and is a generic check
# for a fortran compiler
if paths[2]:
def get_oneapi_root(pth: str):
"""From within a prefix known to be a oneAPI path
determine the oneAPI root path from arbitrary point
under root
Args:
pth: path prefixed within oneAPI root
"""
if not pth:
return ""
while os.path.basename(pth) and os.path.basename(pth) != "oneAPI":
pth = os.path.dirname(pth)
return pth
            # If this is found, it sets all the vars
oneapi_root = get_oneapi_root(self.fc)
if not oneapi_root:
raise RuntimeError(f"Non-oneAPI Fortran compiler {self.fc} assigned to MSVC")
oneapi_root_setvars = os.path.join(oneapi_root, "setvars.bat")
            # some oneAPI exes report a version more precise than their
            # install paths specify, so we determine the version from
            # the install path rather than from the fc executable itself
numver = r"\d+\.\d+(?:\.\d+)?"
pattern = f"((?:{numver})|(?:latest))"
version_from_path = re.search(pattern, self.fc).group(1)
oneapi_version_setvars = os.path.join(
oneapi_root, "compiler", version_from_path, "env", "vars.bat"
)
# order matters here, the specific version env must be invoked first,
# otherwise it will be ignored if the root setvars sets up the oneapi
# env first
env_cmds.extend(
[VarsInvocation(oneapi_version_setvars), VarsInvocation(oneapi_root_setvars)]
)
self.msvc_compiler_environment = CmdCall(*env_cmds)
@property
def cxx11_flag(self):
return "/std:c++11"
@property
def cxx14_flag(self):
return "/std:c++14"
@property
def cxx17_flag(self):
return "/std:c++17"
@property
def cxx20_flag(self):
return "/std:c++20"
@property
def c11_flag(self):
return "/std:c11"
@property
def c17_flag(self):
return "/std:c17"
@property
def msvc_version(self):
"""This is the VCToolset version *NOT* the actual version of the cl compiler
For CL version, query `Msvc.cl_version`"""
return Version(re.search(Msvc.version_regex, self.cc).group(1))
@property
def short_msvc_version(self):
"""This is the shorthand VCToolset version of form
MSVC<short-ver>
"""
return "MSVC" + self.vc_toolset_ver
@property
def vc_toolset_ver(self):
"""
The toolset version is the version of the combined set of cl and link
This typically relates directly to VS version i.e. VS 2022 is v143
VS 19 is v142, etc.
This value is defined by the first three digits of the major + minor
version of the VS toolset (143 for 14.3x.bbbbb). Traditionally the
minor version has remained a static two digit number for a VS release
series, however, as of VS22, this is no longer true, both
14.4x.bbbbb and 14.3x.bbbbb are considered valid VS22 VC toolset
versions due to a change in toolset minor version sentiment.
This is *NOT* the full version, for that see
Msvc.msvc_version or MSVC.platform_toolset_ver for the
raw platform toolset version
"""
ver = self.msvc_version[:2].joined.string[:3]
return ver
@property
def platform_toolset_ver(self):
"""
        This is the platform toolset version of the current MSVC compiler,
        i.e. 142. The platform toolset is the MSVC library/compiler version
        targeted by compilation (this is different from the VC toolset).
        It differs from the VC toolset version as established by
        `short_msvc_version`, but the two are typically represented by the
        same three-digit value.
# Typically VS toolset version and platform toolset versions match
# VS22 introduces the first divergence of VS toolset version
# (144 for "recent" releases) and platform toolset version (143)
# so it needs additional handling until MS releases v144
# (assuming v144 is also for VS22)
# or adds better support for detection
# TODO: (johnwparent) Update this logic for the next platform toolset
# or VC toolset version update
toolset_ver = self.vc_toolset_ver
vs22_toolset = Version(toolset_ver) > Version("142")
return toolset_ver if not vs22_toolset else "143"
@property
def visual_studio_version(self):
"""The four digit Visual Studio version (i.e. 2019 or 2022)
Note: This differs from the msvc version or toolset version as
those properties track the compiler and build tools version
respectively, whereas this tracks the VS release associated
with a given MSVC compiler.
"""
return re.search(r"[0-9]{4}", self.cc).group(0)
def _compiler_version(self, compiler):
"""Returns version object for given compiler"""
        # ignore_errors below is true here due to ifx's
        # non-zero return code if it is not provided
        # an input file
return Version(
re.search(
Msvc.version_regex,
spack.compiler.get_compiler_version_output(
compiler, version_arg=None, ignore_errors=True
),
).group(1)
)
@property
def cl_version(self):
"""Cl toolset version"""
return self._compiler_version(self.cc)
@property
def ifx_version(self):
"""Ifx compiler version associated with this version of MSVC"""
return self._compiler_version(self.fc)
@property
def vs_root(self):
        # The MSVC install root is located at a fixed level above the compiler
        # and is referenceable idiomatically via the pattern below;
        # this should be consistent across versions
return os.path.abspath(os.path.join(self.cc, "../../../../../../../.."))
def setup_custom_environment(self, pkg, env):
"""Set environment variables for MSVC using the
Microsoft-provided script."""
# Set the build environment variables for spack. Just using
# subprocess.call() doesn't work since that operates in its own
# environment which is destroyed (along with the adjusted variables)
# once the process terminates. So go the long way around: examine
# output, sort into dictionary, use that to make the build
# environment.
# vcvars can target specific sdk versions, force it to pick up concretized sdk
# version, if needed by spec
if pkg.name != "win-sdk" and "win-sdk" in pkg.spec:
self.vcvars_call.sdk_ver = pkg.spec["win-sdk"].version.string
out = self.msvc_compiler_environment()
int_env = dict(
(key, value)
for key, _, value in (line.partition("=") for line in out.splitlines())
if key and value
)
for env_var in int_env:
if os.pathsep not in int_env[env_var]:
env.set(env_var, int_env[env_var])
else:
env.set_path(env_var, int_env[env_var].split(os.pathsep))
# certain versions of ifx (2021.3.0:2023.1.0) do not play well with env:TMP
# that has a "." character in the path
# Work around by pointing tmp to the stage for the duration of the build
if self.fc and Version(self.fc_version(self.fc)).satisfies(
VersionRange("2021.3.0", "2023.1.0")
):
new_tmp = tempfile.mkdtemp(dir=pkg.stage.path)
env.set("TMP", new_tmp)
env.set("CC", self.cc)
env.set("CXX", self.cxx)
env.set("FC", self.fc)
env.set("F77", self.f77)
@classmethod
def fc_version(cls, fc):
        if sys.platform != "win32":
return "unknown"
fc_ver = cls.default_version(fc)
FC_PATH[fc_ver] = fc
try:
sps = spack.operating_systems.windows_os.WindowsOs().compiler_search_paths
except AttributeError:
raise SpackError(
"Windows compiler search paths not established, "
"please report this behavior to github.com/spack/spack"
)
clp = spack.util.executable.which_string("cl", path=sps)
return cls.default_version(clp) if clp else fc_ver


@@ -0,0 +1,112 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import re
import llnl.util.lang
import spack.compiler
class Nag(spack.compiler.Compiler):
# Named wrapper links within build_env_path
# Use default wrappers for C and C++, in case provided in compilers.yaml
link_paths = {
"cc": "cc",
"cxx": "c++",
"f77": os.path.join("nag", "nagfor"),
"fc": os.path.join("nag", "nagfor"),
}
version_argument = "-V"
@classmethod
@llnl.util.lang.memoized
def extract_version_from_output(cls, output):
match = re.search(r"NAG Fortran Compiler Release (\d+).(\d+)\(.*\) Build (\d+)", output)
if match:
return ".".join(match.groups())
@property
def verbose_flag(self):
# NAG does not support a flag that would enable verbose output and
# compilation/linking at the same time (with either '-#' or '-dryrun'
# the compiler only prints the commands but does not run them).
# Therefore, the only thing we can do is to pass the '-v' argument to
# the underlying GCC. In order to get verbose output from the latter
# at both compile and linking stages, we need to call NAG with two
# additional flags: '-Wc,-v' and '-Wl,-v'. However, we return only
# '-Wl,-v' for the following reasons:
# 1) the interface of this method does not support multiple flags in
# the return value and, at least currently, verbose output at the
# linking stage has a higher priority for us;
# 2) NAG is usually mixed with GCC compiler, which also accepts
# '-Wl,-v' and produces meaningful result with it: '-v' is passed
# to the linker and the latter produces verbose output for the
# linking stage ('-Wc,-v', however, would break the compilation
# with a message from GCC that the flag is not recognized).
#
# This way, we at least enable the implicit rpath detection, which is
# based on compilation of a C file (see method
# spack.compiler._compile_dummy_c_source): in the case of a mixed
# NAG/GCC toolchain, the flag will be passed to g++ (e.g.
# 'g++ -Wl,-v ./main.c'), otherwise, the flag will be passed to nagfor
# (e.g. 'nagfor -Wl,-v ./main.c' - note that nagfor recognizes '.c'
# extension and treats the file accordingly). The list of detected
# rpaths will contain only GCC-related directories and rpaths to
# NAG-related directories are injected by nagfor anyway.
return "-Wl,-v"
@property
def openmp_flag(self):
return "-openmp"
@property
def debug_flags(self):
return ["-g", "-gline", "-g90"]
@property
def opt_flags(self):
return ["-O", "-O0", "-O1", "-O2", "-O3", "-O4"]
@property
def cxx11_flag(self):
# NAG does not have a C++ compiler
# However, it can be mixed with a compiler that does support it
return "-std=c++11"
@property
def f77_pic_flag(self):
return "-PIC"
@property
def fc_pic_flag(self):
return "-PIC"
# Unlike other compilers, the NAG compiler passes options to GCC, which
# then passes them to the linker. Therefore, we need to doubly wrap the
# options with '-Wl,-Wl,,'
@property
def f77_rpath_arg(self):
return "-Wl,-Wl,,-rpath,,"
@property
def fc_rpath_arg(self):
return "-Wl,-Wl,,-rpath,,"
@property
def linker_arg(self):
return "-Wl,-Wl,,"
@property
def disable_new_dtags(self):
# Disable RPATH/RUNPATH forcing for NAG/GCC mixed toolchains:
return ""
@property
def enable_new_dtags(self):
# Disable RPATH/RUNPATH forcing for NAG/GCC mixed toolchains:
return ""


@@ -0,0 +1,79 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
from spack.compiler import Compiler
class Nvhpc(Compiler):
# Named wrapper links within build_env_path
link_paths = {
"cc": os.path.join("nvhpc", "nvc"),
"cxx": os.path.join("nvhpc", "nvc++"),
"f77": os.path.join("nvhpc", "nvfortran"),
"fc": os.path.join("nvhpc", "nvfortran"),
}
version_argument = "--version"
version_regex = r"nv[^ ]* (?:[^ ]+ Dev-r)?([0-9.]+)(?:-[0-9]+)?"
@property
def verbose_flag(self):
return "-v"
@property
def debug_flags(self):
return ["-g", "-gopt"]
@property
def opt_flags(self):
return ["-O", "-O0", "-O1", "-O2", "-O3", "-O4"]
@property
def openmp_flag(self):
return "-mp"
@property
def cc_pic_flag(self):
return "-fpic"
@property
def cxx_pic_flag(self):
return "-fpic"
@property
def f77_pic_flag(self):
return "-fpic"
@property
def fc_pic_flag(self):
return "-fpic"
@property
def c99_flag(self):
return "-c99"
@property
def c11_flag(self):
return "-c11"
@property
def cxx11_flag(self):
return "--c++11"
@property
def cxx14_flag(self):
return "--c++14"
@property
def cxx17_flag(self):
return "--c++17"
@property
def stdcxx_libs(self):
return ("-c++libs",)
required_libs = ["libnvc", "libnvf"]


@@ -0,0 +1,172 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
from os.path import dirname, join
from llnl.util import tty
from llnl.util.filesystem import ancestor
import spack.util.executable
from spack.compiler import Compiler
from spack.version import Version
class Oneapi(Compiler):
# Named wrapper links within build_env_path
link_paths = {
"cc": os.path.join("oneapi", "icx"),
"cxx": os.path.join("oneapi", "icpx"),
"f77": os.path.join("oneapi", "ifx"),
"fc": os.path.join("oneapi", "ifx"),
}
version_argument = "--version"
version_regex = r"(?:(?:oneAPI DPC\+\+(?:\/C\+\+)? Compiler)|(?:\(IFORT\))|(?:\(IFX\))) (\S+)"
@property
def verbose_flag(self):
return "-v"
required_libs = [
"libirc",
"libifcore",
"libifcoremt",
"libirng",
"libsvml",
"libintlc",
"libimf",
"libsycl",
"libOpenCL",
]
@property
def debug_flags(self):
return ["-debug", "-g", "-g0", "-g1", "-g2", "-g3"]
@property
def opt_flags(self):
return ["-O", "-O0", "-O1", "-O2", "-O3", "-Ofast", "-Os"]
@property
def openmp_flag(self):
return "-fiopenmp"
# There may be some additional options here for offload, e.g. :
# -fopenmp-simd Emit OpenMP code only for SIMD-based constructs.
# -fopenmp-targets=<value>
# -fopenmp-version=<value>
# -fopenmp Parse OpenMP pragmas and generate parallel code.
# -qno-openmp Disable OpenMP support
# -qopenmp-link=<value> Choose whether to link with the static or
# dynamic OpenMP libraries. Default is dynamic.
# -qopenmp-simd Emit OpenMP code only for SIMD-based constructs.
# -qopenmp-stubs enables the user to compile OpenMP programs in
# sequential mode. The OpenMP directives are
# ignored and a stub OpenMP library is linked.
# -qopenmp-threadprivate=<value>
# -qopenmp Parse OpenMP pragmas and generate parallel code.
# -static-openmp Use the static host OpenMP runtime while
# linking.
# -Xopenmp-target=<triple> <arg>
# -Xopenmp-target <arg> Pass <arg> to the target offloading toolchain.
# Source: icx --help output
@property
def cxx11_flag(self):
return "-std=c++11"
@property
def cxx14_flag(self):
return "-std=c++14"
@property
def cxx17_flag(self):
return "-std=c++17"
@property
def cxx20_flag(self):
return "-std=c++20"
@property
def c99_flag(self):
return "-std=c99"
@property
def c11_flag(self):
return "-std=c1x"
@property
def cc_pic_flag(self):
return "-fPIC"
@property
def cxx_pic_flag(self):
return "-fPIC"
@property
def f77_pic_flag(self):
return "-fPIC"
@property
def fc_pic_flag(self):
return "-fPIC"
@property
def stdcxx_libs(self):
return ("-cxxlib",)
@property
def prefix(self):
# OneAPI reports its install prefix when running ``--version``
# on the line ``InstalledDir: <prefix>/bin/compiler``.
cc = spack.util.executable.Executable(self.cc)
with self.compiler_environment():
oneapi_output = cc("--version", output=str, error=str)
for line in oneapi_output.splitlines():
if line.startswith("InstalledDir:"):
oneapi_prefix = line.split(":")[1].strip()
# Go from <prefix>/bin/compiler to <prefix>
return ancestor(oneapi_prefix, 2)
raise RuntimeError(
"could not find install prefix of OneAPI from output:\n\t{}".format(oneapi_output)
)
def setup_custom_environment(self, pkg, env):
# workaround bug in icpx driver where it requires sycl-post-link is on the PATH
# It is located in the same directory as the driver. Error message:
# clang++: error: unable to execute command:
# Executable "sycl-post-link" doesn't exist!
# also ensures that shared objects and libraries required by the compiler,
        # e.g. libonnx, can be found successfully
# due to a fix, this is no longer required for OneAPI versions >= 2024.2
if self.cxx and pkg.spec.satisfies("%oneapi@:2024.1"):
env.prepend_path("PATH", dirname(self.cxx))
env.prepend_path("LD_LIBRARY_PATH", join(dirname(dirname(self.cxx)), "lib"))
# Edge cases for Intel's oneAPI compilers when using the legacy classic compilers:
# Always pass flags to disable deprecation warnings, since these warnings can
# confuse tools that parse the output of compiler commands (e.g. version checks).
# This is really only needed for Fortran, since oneapi@ should be using either
# icx+icpx+ifx or icx+icpx+ifort. But to be on the safe side (some users may
# want to try to swap icpx against icpc, for example), and since the Intel LLVM
# compilers accept these diag-disable flags, we apply them for all compilers.
if self.real_version >= Version("2021") and self.real_version < Version("2024"):
env.append_flags("SPACK_ALWAYS_CFLAGS", "-diag-disable=10441")
env.append_flags("SPACK_ALWAYS_CXXFLAGS", "-diag-disable=10441")
if self.real_version >= Version("2021") and self.real_version < Version("2025"):
env.append_flags("SPACK_ALWAYS_FFLAGS", "-diag-disable=10448")
# 2024 release bumped the libsycl version because of an ABI
# change, 2024 compilers are required. You will see this
# error:
#
# /usr/bin/ld: warning: libsycl.so.7, needed by ...., not found
if pkg.spec.satisfies("%oneapi@:2023"):
for c in ["dnn"]:
if pkg.spec.satisfies(f"^intel-oneapi-{c}@2024:"):
tty.warn(f"intel-oneapi-{c}@2024 SYCL APIs requires %oneapi@2024:")

Some files were not shown because too many files have changed in this diff.