Compare commits


199 Commits

Author SHA1 Message Date
Gregory Becker
2d46de5741 fix and test issue with copying by reference vs value 2023-05-25 01:43:55 +02:00
Greg Becker
b9cf63aa41 Merge branch 'develop' into bugfix/transactional-concretization 2023-05-24 16:25:15 +02:00
Stephen Sachs
2d77e44f6f Pcluster local buildcache (#37852)
* [pcluster pipeline] Use local buildcache instead of upstream spack

Spack currently does not relocate compiler references from upstream spack
installations. When using a buildcache we don't need an upstream spack.

* gcc needs to be installed via postinstall to get correct deps

* quantum-espresso %gcc@12.3.0 hits an internal compiler error (ICE) on neoverse_{n,v}1

* Force gitlab to pull the new container

* Revert "Force gitlab to pull the new container"

This reverts commit 3af5f4cd88.

It seems the GitLab version does not yet support "pull_policy" in .gitlab-ci.yml

* Gitlab keeps picking up the wrong container. Renaming it

* Update containers once more after failed build
2023-05-24 06:55:00 -07:00
Greg Becker
033599c4cd bugfix: env concretize after remove (#37877) 2023-05-24 15:41:57 +02:00
Harmen Stoppels
8096ed4b22 spack remove: fix traversal when user specs intersect (#37882)
Drop the unnecessary double loop over the matching user specs.
2023-05-24 09:23:46 -04:00
Simon Pintarelli
b49bfe25af update nlcglib package (#37578) 2023-05-24 11:17:15 +02:00
Houjun Tang
8b2f34d802 Add async vol v1.6 (#37875) 2023-05-24 01:47:46 -04:00
H. Joe Lee
3daed0d6a7 hdf5-vol-daos: add a new package (#35653)
* hdf5-vol-daos: add a new package
* hdf5-vol-daos: address @soumagne review

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2023-05-23 23:42:42 -04:00
Glenn Johnson
d6c1f75e8d julia: remove myself from maintainers list (#37868) 2023-05-23 16:22:56 -05:00
Gregory Becker
0ced62480d raise the error after fixing the transaction state 2023-05-23 23:14:33 +02:00
Laura Weber
c80a4c1ddc Updated hash for latest maintenance release (2022.2.1) (#37842) 2023-05-23 14:08:55 -07:00
Glenn Johnson
466abcb62d gate: remove myself as maintainer (#37862) 2023-05-23 14:03:52 -07:00
Glenn Johnson
69e99f0c16 Remove myself as maintainer of R packages (#37859)
* Remove myself as maintainer of R packages
  I will no longer have the time to properly maintain these packages.
* fix flake8 test for import
2023-05-23 15:35:32 -05:00
Glenn Johnson
bbee6dfc58 bart: remove myself as maintainer (#37860) 2023-05-23 16:08:07 -04:00
Glenn Johnson
2d60cf120b heasoft: remove myself as maintainer (#37866) 2023-05-23 14:37:57 -05:00
Glenn Johnson
db17fc2f33 opencv: remove myself from maintainers list (#37870) 2023-05-23 14:34:52 -05:00
eugeneswalker
c62080d498 e4s ci: add dealii (#32484) 2023-05-23 21:34:31 +02:00
Glenn Johnson
f9bbe549fa gatetools: remove myself as maintainer (#37863) 2023-05-23 15:32:54 -04:00
H. Joe Lee
55d7fec69c daos: add a new package (#35649) 2023-05-23 21:30:23 +02:00
Glenn Johnson
e938907150 reditools: remove myself as maintainer (#37871) 2023-05-23 15:28:16 -04:00
Glenn Johnson
0c40b86e96 itk: remove myself as maintainer (#37867) 2023-05-23 15:27:55 -04:00
Glenn Johnson
3d4cf0d8eb mumax: remove myself as maintainer (#37869) 2023-05-23 15:23:28 -04:00
Glenn Johnson
966e19d278 gurobi: remove myself as maintainer (#37865) 2023-05-23 15:23:05 -04:00
Glenn Johnson
8f930462bd fplo: remove myself as maintainer (#37861) 2023-05-23 15:17:50 -04:00
kjrstory
bf4fccee15 New package: FDS (#37850) 2023-05-23 11:59:05 -07:00
Manuela Kuhn
784771a008 py-bleach: add 6.0.0 (#37846) 2023-05-23 11:50:53 -07:00
Glenn Johnson
e4a9d9ae5b Bioc updates (#37297)
* add version 1.48.0 to bioconductor package r-a4
* add version 1.48.0 to bioconductor package r-a4base
* add version 1.48.0 to bioconductor package r-a4classif
* add version 1.48.0 to bioconductor package r-a4core
* add version 1.48.0 to bioconductor package r-a4preproc
* add version 1.48.0 to bioconductor package r-a4reporting
* add version 1.54.0 to bioconductor package r-absseq
* add version 1.30.0 to bioconductor package r-acde
* add version 1.78.0 to bioconductor package r-acgh
* add version 2.56.0 to bioconductor package r-acme
* add version 1.70.0 to bioconductor package r-adsplit
* add version 1.72.0 to bioconductor package r-affxparser
* add version 1.78.0 to bioconductor package r-affy
* add version 1.76.0 to bioconductor package r-affycomp
* add version 1.58.0 to bioconductor package r-affycontam
* add version 1.72.0 to bioconductor package r-affycoretools
* add version 1.48.0 to bioconductor package r-affydata
* add version 1.52.0 to bioconductor package r-affyilm
* add version 1.70.0 to bioconductor package r-affyio
* add version 1.76.0 to bioconductor package r-affyplm
* add version 1.46.0 to bioconductor package r-affyrnadegradation
* add version 1.48.0 to bioconductor package r-agdex
* add version 3.32.0 to bioconductor package r-agilp
* add version 2.50.0 to bioconductor package r-agimicrorna
* add version 1.32.0 to bioconductor package r-aims
* add version 1.32.0 to bioconductor package r-aldex2
* add version 1.38.0 to bioconductor package r-allelicimbalance
* add version 1.26.0 to bioconductor package r-alpine
* add version 2.62.0 to bioconductor package r-altcdfenvs
* add version 2.24.0 to bioconductor package r-anaquin
* add version 1.28.0 to bioconductor package r-aneufinder
* add version 1.28.0 to bioconductor package r-aneufinderdata
* add version 1.72.0 to bioconductor package r-annaffy
* add version 1.78.0 to bioconductor package r-annotate
* add version 1.62.0 to bioconductor package r-annotationdbi
* add version 1.24.0 to bioconductor package r-annotationfilter
* add version 1.42.0 to bioconductor package r-annotationforge
* add version 3.8.0 to bioconductor package r-annotationhub
* add version 3.30.0 to bioconductor package r-aroma-light
* add version 1.32.0 to bioconductor package r-bamsignals
* add version 2.16.0 to bioconductor package r-beachmat
* add version 2.60.0 to bioconductor package r-biobase
* add version 2.8.0 to bioconductor package r-biocfilecache
* add version 0.46.0 to bioconductor package r-biocgeneric
* add version 1.10.0 to bioconductor package r-biocio
* add version 1.18.0 to bioconductor package r-biocneighbors
* add version 1.34.0 to bioconductor package r-biocparallel
* add version 1.16.0 to bioconductor package r-biocsingular
* add version 2.28.0 to bioconductor package r-biocstyle
* add version 3.17.1 to bioconductor package r-biocversion
* add version 2.56.0 to bioconductor package r-biomart
* add version 1.28.0 to bioconductor package r-biomformat
* add version 2.68.0 to bioconductor package r-biostrings
* add version 1.48.0 to bioconductor package r-biovizbase
* add version 1.10.0 to bioconductor package r-bluster
* add version 1.68.0 to bioconductor package r-bsgenome
* add version 1.36.0 to bioconductor package r-bsseq
* add version 1.42.0 to bioconductor package r-bumphunter
* add version 2.66.0 to bioconductor package r-category
* add version 2.30.0 to bioconductor package r-champ
* add version 2.32.0 to bioconductor package r-champdata
* add version 1.50.0 to bioconductor package r-chipseq
* add version 4.8.0 to bioconductor package r-clusterprofiler
* add version 1.36.0 to bioconductor package r-cner
* add version 1.32.0 to bioconductor package r-codex
* add version 2.16.0 to bioconductor package r-complexheatmap
* add version 1.74.0 to bioconductor package r-ctc
* add version 2.28.0 to bioconductor package r-decipher
* add version 0.26.0 to bioconductor package r-delayedarray
* add version 1.22.0 to bioconductor package r-delayedmatrixstats
* add version 1.40.0 to bioconductor package r-deseq2
* add version 1.46.0 to bioconductor package r-dexseq
* add version 1.42.0 to bioconductor package r-dirichletmultinomial
* add version 2.14.0 to bioconductor package r-dmrcate
* add version 1.74.0 to bioconductor package r-dnacopy
* add version 3.26.0 to bioconductor package r-dose
* add version 2.48.0 to bioconductor package r-dss
* add version 3.42.0 to bioconductor package r-edger
* add version 1.20.0 to bioconductor package r-enrichplot
* add version 2.24.0 to bioconductor package r-ensembldb
* add version 1.46.0 to bioconductor package r-exomecopy
* add version 2.8.0 to bioconductor package r-experimenthub
* add version 1.26.0 to bioconductor package r-fgsea
* add version 2.72.0 to bioconductor package r-gcrma
* add version 1.36.0 to bioconductor package r-gdsfmt
* add version 1.82.0 to bioconductor package r-genefilter
* add version 1.36.0 to bioconductor package r-genelendatabase
* add version 1.72.0 to bioconductor package r-genemeta
* add version 1.78.0 to bioconductor package r-geneplotter
* add version 1.22.0 to bioconductor package r-genie3
* add version 1.36.0 to bioconductor package r-genomeinfodb
* update r-genomeinfodbdata
* add version 1.36.0 to bioconductor package r-genomicalignments
* add version 1.52.0 to bioconductor package r-genomicfeatures
* add version 1.52.0 to bioconductor package r-genomicranges
* add version 2.68.0 to bioconductor package r-geoquery
* add version 1.48.0 to bioconductor package r-ggbio
* add version 3.8.0 to bioconductor package r-ggtree
* add version 2.10.0 to bioconductor package r-glimma
* add version 1.12.0 to bioconductor package r-glmgampoi
* add version 5.54.0 to bioconductor package r-globaltest
* update r-go-db
* add version 1.20.0 to bioconductor package r-gofuncr
* add version 2.26.0 to bioconductor package r-gosemsim
* add version 1.52.0 to bioconductor package r-goseq
* add version 2.66.0 to bioconductor package r-gostats
* add version 1.78.0 to bioconductor package r-graph
* add version 1.62.0 to bioconductor package r-gseabase
* add version 1.32.0 to bioconductor package r-gtrellis
* add version 1.44.0 to bioconductor package r-gviz
* add version 1.28.0 to bioconductor package r-hdf5array
* add version 1.72.0 to bioconductor package r-hypergraph
* add version 1.36.0 to bioconductor package r-illumina450probevariants-db
* add version 0.42.0 to bioconductor package r-illuminaio
* add version 1.74.0 to bioconductor package r-impute
* add version 1.38.0 to bioconductor package r-interactivedisplaybase
* add version 2.34.0 to bioconductor package r-iranges
* add version 1.60.0 to bioconductor package r-kegggraph
* add version 1.40.0 to bioconductor package r-keggrest
* add version 3.56.0 to bioconductor package r-limma
* add version 2.52.0 to bioconductor package r-lumi
* add version 1.76.0 to bioconductor package r-makecdfenv
* add version 1.78.0 to bioconductor package r-marray
* add version 1.12.0 to bioconductor package r-matrixgenerics
* add version 1.8.0 to bioconductor package r-metapod
* add version 2.46.0 to bioconductor package r-methylumi
* add version 1.46.0 to bioconductor package r-minfi
* add version 1.34.0 to bioconductor package r-missmethyl
* add version 1.80.0 to bioconductor package r-mlinterfaces
* add version 1.12.0 to bioconductor package r-mscoreutils
* add version 2.26.0 to bioconductor package r-msnbase
* add version 2.56.0 to bioconductor package r-multtest
* add version 1.38.0 to bioconductor package r-mzid
* add version 2.34.0 to bioconductor package r-mzr
* add version 1.62.0 to bioconductor package r-oligoclasses
* update r-org-hs-eg-db
* add version 1.42.0 to bioconductor package r-organismdbi
* add version 1.40.0 to bioconductor package r-pathview
* add version 1.92.0 to bioconductor package r-pcamethods
* update r-pfam-db
* add version 1.44.0 to bioconductor package r-phyloseq
* add version 1.62.0 to bioconductor package r-preprocesscore
* add version 1.32.0 to bioconductor package r-protgenerics
* add version 1.34.0 to bioconductor package r-quantro
* add version 2.32.0 to bioconductor package r-qvalue
* add version 1.76.0 to bioconductor package r-rbgl
* add version 2.40.0 to bioconductor package r-reportingtools
* add version 2.44.0 to bioconductor package r-rgraphviz
* add version 2.44.0 to bioconductor package r-rhdf5
* add version 1.12.0 to bioconductor package r-rhdf5filters
* add version 1.22.0 to bioconductor package r-rhdf5lib
* add version 1.76.0 to bioconductor package r-roc
* add version 1.28.0 to bioconductor package r-rots
* add version 2.16.0 to bioconductor package r-rsamtools
* add version 1.60.0 to bioconductor package r-rtracklayer
* add version 0.38.0 to bioconductor package r-s4vectors
* add version 1.8.0 to bioconductor package r-scaledmatrix
* add version 1.28.0 to bioconductor package r-scater
* add version 1.14.0 to bioconductor package r-scdblfinder
* add version 1.28.0 to bioconductor package r-scran
* add version 1.10.0 to bioconductor package r-scuttle
* add version 1.66.0 to bioconductor package r-seqlogo
* add version 1.58.0 to bioconductor package r-shortread
* add version 1.74.0 to bioconductor package r-siggenes
* add version 1.22.0 to bioconductor package r-singlecellexperiment
* add version 1.34.0 to bioconductor package r-snprelate
* add version 1.50.0 to bioconductor package r-snpstats
* add version 2.36.0 to bioconductor package r-somaticsignatures
* add version 1.12.0 to bioconductor package r-sparsematrixstats
* add version 1.40.0 to bioconductor package r-spem
* add version 1.38.0 to bioconductor package r-sseq
* add version 1.30.0 to bioconductor package r-summarizedexperiment
* add version 3.48.0 to bioconductor package r-sva
* add version 1.38.0 to bioconductor package r-tfbstools
* add version 1.22.0 to bioconductor package r-tmixclust
* add version 2.52.0 to bioconductor package r-topgo
* add version 1.24.0 to bioconductor package r-treeio
* add version 1.28.0 to bioconductor package r-tximport
* add version 1.28.0 to bioconductor package r-tximportdata
* add version 1.46.0 to bioconductor package r-variantannotation
* add version 3.68.0 to bioconductor package r-vsn
* add version 2.6.0 to bioconductor package r-watermelon
* add version 2.46.0 to bioconductor package r-xde
* add version 1.58.0 to bioconductor package r-xmapbridge
* add version 0.40.0 to bioconductor package r-xvector
* add version 1.26.0 to bioconductor package r-yapsa
* add version 1.26.0 to bioconductor package r-yarn
* add version 1.46.0 to bioconductor package r-zlibbioc
* Revert "add version 1.82.0 to bioconductor package r-genefilter"
  This reverts commit 1702071c6d.
* Revert "add version 0.38.0 to bioconductor package r-s4vectors"
  This reverts commit 58a7df2387.
* add version 0.38.0 to bioconductor package r-s4vectors
* Revert "add version 1.28.0 to bioconductor package r-aneufinder"
  This reverts commit 0a1f59de6c.
* add version 1.28.0 to bioconductor package r-aneufinder
* Revert "add version 2.16.0 to bioconductor package r-beachmat"
  This reverts commit cd49fb8e4c.
* add version 2.16.0 to bioconductor package r-beachmat
* Revert "add version 4.8.0 to bioconductor package r-clusterprofiler"
  This reverts commit 6e9a951cbe.
* add version 4.8.0 to bioconductor package r-clusterprofiler
* Fix syntax error
* r-genefilter: add version 1.82.0
* new package: r-basilisk-utils
* new package: r-basilisk
* new package: r-densvis
* new package: r-dir-expiry
* r-affyplm: add zlib dependency
* r-cner: add zlib dependency
* r-mzr: add zlib dependency
* r-rhdf5filters: add zstd dependency
* r-shortread: add zlib dependency
* r-snpstats: add zlib dependency

---------

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
2023-05-23 11:40:00 -07:00
snehring
a6886983dc usalign: new package (#37646)
* usalign: adding new package
* usalign: updating shasum, adding note about distribution
2023-05-23 11:30:40 -07:00
Gregory Becker
265d80cee3 test for transactional concretization 2023-05-23 20:25:43 +02:00
Gregory Becker
0f1d36585e environments: transactional concretization 2023-05-23 20:23:54 +02:00
Andrey Parfenov
93a34a9635 hpcg: apply patch with openmp pragma changes for intel and oneapi compilers (#37856)
Signed-off-by: Andrey Parfenov <andrey.parfenov@intel.com>
2023-05-23 11:52:45 -04:00
Todd Gamblin
91a54029f9 libgcrypt: patch 1.10.2 on macos (#37844)
macOS doesn't have `getrandom`, and 1.10.2 fails to compile because of this.

There's an upstream fix at https://dev.gnupg.org/T6442 that will be in the next
`libgcrypt` release, but the patch is available now.
2023-05-23 06:03:13 -04:00
Juan Miguel Carceller
5400b49ed6 dd4hep: add LD_LIBRARY_PATH for plugins for Gaudi (#37824)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-05-23 10:28:26 +01:00
Juan Miguel Carceller
c17fc3c0c1 gaudi: add gaudi to LD_LIBRARY_PATH (#37821)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-05-23 10:27:55 +01:00
Juan Miguel Carceller
6f248836ea dd4hep: restrict podio versions (#37699)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-05-23 10:20:31 +01:00
snehring
693c1821b0 py-pastml: adding version for compatibility with py-topiary-asr (#37828) 2023-05-22 14:44:00 -05:00
Manuela Kuhn
62afe3bd5a py-asttokens: add 2.2.1 (#37816) 2023-05-22 14:40:08 -05:00
genric
53a756d045 py-dask: add v2023.4.1 (#37550)
* py-dask: add v2023.4.1

* address review comments
2023-05-22 14:29:23 -05:00
Adam J. Stewart
321b687ae6 py-huggingface-hub: add v0.14.1, cli variant (#37815) 2023-05-22 11:19:41 -07:00
Adam J. Stewart
c8617f0574 py-fiona: add v1.9.4 (#37780) 2023-05-22 13:17:13 -05:00
Adam J. Stewart
7843e2ead0 azcopy: add new package (#37693) 2023-05-22 11:09:06 -07:00
Juan Miguel Carceller
dca3d071d7 gaudi: fix issue with fmt::format (#37810)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2023-05-22 10:33:05 -07:00
eugeneswalker
436f077482 tau %oneapi: -Wno-error=implicit-function-declaration (#37829) 2023-05-22 13:13:02 -04:00
simonleary-umass-edu
ab3f705019 deleted package.py: better error message (#37814)
adds the namespace to the exception object's string representation
2023-05-22 09:59:07 -07:00
Tamara Dahlgren
d739989ec8 swig: convert to new stand-alone test process (#37786) 2023-05-22 09:39:30 -07:00
Jordan Galby
52ee1967d6 llvm: Fix hwloc@1 and hwloc@:2.3 compatibility (#35387) 2023-05-22 10:28:57 -05:00
Andrey Prokopenko
1af7284b5d arborx: new version 1.4 (#37809) 2023-05-21 12:25:53 -07:00
Todd Gamblin
e1bcefd805 Update CHANGELOG.md for v0.20.0 2023-05-21 01:48:34 +02:00
Manuela Kuhn
2159b0183d py-argcomplete: add 3.0.8 (#37797)
* py-argcomplete: add 3.0.8

* Update var/spack/repos/builtin/packages/py-argcomplete/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* [@spackbot] updating style on behalf of manuelakuhn

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-05-20 11:31:28 -05:00
kwryankrattiger
078fd225a9 Mochi-Margo: Add patch for pthreads detection (#36109) 2023-05-20 07:43:27 -07:00
Manuela Kuhn
83974828c7 libtiff: disable use of sphinx (#37803) 2023-05-19 21:37:19 -05:00
Manuela Kuhn
2412f74557 py-anyio: add 3.6.2 (#37796) 2023-05-19 21:36:39 -05:00
Manuela Kuhn
db06d3621d py-alabaster: add 0.7.13 (#37798) 2023-05-19 21:34:00 -05:00
Jose E. Roman
c25170d2f9 New patch release SLEPc 3.19.1 (#37675)
* New patch release SLEPc 3.19.1

* py-slepc4py: add explicit dependency on py-numpy
2023-05-19 21:33:21 -05:00
Vanessasaurus
b3dfe13670 Automated deployment to update package flux-security 2023-05-16 (#37696)
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
2023-05-19 12:07:57 -07:00
Harmen Stoppels
6358e84b48 fix binutils dep of spack itself (#37738) 2023-05-19 12:02:36 -07:00
Swann Perarnau
8e634d8e49 aml: v0.2.1 (#37621)
* aml: v0.2.1

* add version 0.2.1
* fix hip variant bug

* [fix] pkgconf required for all builds

On top of needing pkgconf for autoreconf builds, the release configure
script needs pkgconf to detect dependencies if any of the hwloc, ze, or
opencl variants are active.

* Remove deprecation for v0.2.0 based on PR advice.
2023-05-19 11:58:28 -07:00
Mark W. Krentel
1a21376515 intel-xed: add version 2023.04.16 (#37582)
* intel-xed: add version 2023.04.16
 1. add version 2023.04.16
 2. adjust the mbuild resource to better match the xed version at the time
 3. replace three conflicts() with one new requires() for x86_64 target
 4. add patch for libxed-ild for some new avx512 instructions
    * [@spackbot] updating style on behalf of mwkrentel
    * Fix the build for 2023.04.16.  XED requires its source directory to be exactly 'xed', so add a symlink.
 5. move the mbuild resource up one level, xed wants it to be in the same directory as the xed source dir
 6. deprecate 10.2019.03
    * semantic style fix: add OSError to except
    * [@spackbot] updating style on behalf of mwkrentel

---------

Co-authored-by: mwkrentel <mwkrentel@users.noreply.github.com>
2023-05-19 10:28:18 -07:00
Harmen Stoppels
bf45a2b6d3 spack env create: generate a view when newly created env has concrete specs (#37799) 2023-05-19 18:44:54 +02:00
Thomas-Ulrich
475ce955e7 hipsycl: add v0.9.4 (#37247) 2023-05-19 18:29:45 +02:00
Robert Underwood
5e44289787 updates for the libpressio ecosystem (#37764)
* updates for the libpressio ecosystem

* [@spackbot] updating style on behalf of robertu94

* style fix: remove FIXME

---------

Co-authored-by: Robert Underwood <runderwood@anl.gov>
Co-authored-by: eugeneswalker <eugenesunsetwalker@gmail.com>
2023-05-19 09:24:31 -07:00
Massimiliano Culpo
e66888511f archspec: fix entry in the JSON file (#37793) 2023-05-19 09:57:57 -04:00
Tamara Dahlgren
e9e5beee1f fortrilinos: convert to new stand-alone test process (#37783) 2023-05-19 08:06:33 -04:00
Tamara Dahlgren
ffd134c09d formetis: converted to new stand-alone test process (#37785) 2023-05-19 08:05:23 -04:00
Massimiliano Culpo
bfadd5c9a5 lmod: allow core compiler to be specified with a version range (#37789)
Use CompilerSpec with satisfies instead of string equality tests

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2023-05-19 13:21:40 +02:00
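The change above swaps string equality for a range-aware comparison. A generic sketch of why that matters, in plain Python with hypothetical helpers (not Spack's CompilerSpec API):

def version_key(v):
    return tuple(int(p) for p in v.split("."))

def in_range(version, rng):
    # assumes rng is written "lo:hi", with either end allowed to be open
    lo, _, hi = rng.partition(":")
    ok_lo = not lo or version_key(version) >= version_key(lo)
    ok_hi = not hi or version_key(version)[: len(version_key(hi))] <= version_key(hi)
    return ok_lo and ok_hi

print(in_range("4.8.5", "4.8:4.9"))  # True: the version falls inside the range
print("4.8.5" == "4.8:4.9")          # False: string equality cannot express this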
Greg Becker
16e9279420 compiler specs: do not print '@=' when clear from context (#37787)
Ensure that spack compiler add/find/list and lists of concrete specs
print the compiler effectively as {compiler.name}{@compiler.version}.

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2023-05-19 11:31:27 +02:00
Satish Balay
ac0903ef9f llvm: add version 16.0.3 (#37472) 2023-05-19 03:37:57 -04:00
Cyrus Harrison
648839dffd add conduit 0.8.8 release (#37776) 2023-05-19 00:34:19 -04:00
Pieter Ghysels
489a604920 Add STRUMPACK versions 7.1.2 and 7.1.3 (#37779) 2023-05-18 23:18:50 -04:00
eugeneswalker
2ac3435810 legion +rocm: apply patch for --offload-arch (#37775)
* legion +rocm: apply patch for --offload-arch

* constrain to latest version
2023-05-18 23:03:50 -04:00
Alec Scott
69ea180d26 fzf: add v0.40.0 and refactor package (#37569)
* fzf: add v0.40.0 and refactor package
* Remove unused imports
2023-05-18 15:23:20 -07:00
Alec Scott
f52f217df0 roctracer-dev-api: add v5.5.0 (#37484) 2023-05-18 15:11:36 -07:00
Alec Scott
df74aa5d7e amqp-cpp: add v4.3.24 (#37504) 2023-05-18 15:09:30 -07:00
Alec Scott
41932c53ae libjwt: add v1.15.3 (#37521) 2023-05-18 15:05:27 -07:00
Alec Scott
4296db794f rdkit: add v2023_03_1 (#37529) 2023-05-18 15:05:07 -07:00
H. Joe Lee
9ab9302409 py-jarvis-util: add a new package (#37729)
* py-jarvis-util: add a new package

* Apply suggestions from code review

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-05-18 17:28:59 -04:00
Benjamin Meyers
0187376e54 Update py-nltk (#37703)
* Update py-nltk

* [@spackbot] updating style on behalf of meyersbs

* Update var/spack/repos/builtin/packages/py-nltk/package.py

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2023-05-18 17:14:20 -04:00
Aditya Bhamidipati
7340d2cb83 update package nccl version 2.17, 2.18 (#37721) 2023-05-18 15:47:01 -05:00
Benjamin Meyers
641d4477d5 New package py-tensorly (#37705) 2023-05-18 15:38:23 -05:00
Benjamin Meyers
3ff2fb69af New package py-scikit-tensor-py3 (#37706) 2023-05-18 15:37:43 -05:00
vucoda
e3024b1bcb Update py-jedi 0.17.2 checksum in package.py (#37700)
The PyPI checksum for jedi-0.17.2.tar.gz does not match the one in the package.py file.

https://pypi.org/project/jedi/0.17.2/#copy-hash-modal-60e943e3-1192-4b12-90e2-4d639cb5b4f7
2023-05-18 15:15:47 -05:00
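For reference, a mismatch like the one above is typically confirmed by hashing the downloaded archive and comparing against the hash recorded in package.py; a minimal sketch (the filename here is only an example):

import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

# compare the result against the sha256 recorded for that version in package.py
# print(sha256_of("jedi-0.17.2.tar.gz"))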
Dom Heinzeller
e733b87865 Remove references to gmake executable, only use make (#37280) 2023-05-18 19:03:03 +00:00
Victor Lopez Herrero
919985dc1b dlb: add v3.3.1 (#37759) 2023-05-18 10:29:41 -07:00
Michael Kuhn
d746f7d427 harfbuzz: add 7.3.0 (#37753)
Do not prefer 5.3.1 anymore since it does not build with newer compilers. Linux distributions have also moved to 6.x and newer.
2023-05-18 10:27:48 -07:00
kaanolgu
b6deab515b [babelstream] FIX: maintainers list missing a comma (#37761) 2023-05-18 10:25:04 -07:00
Glenn Johnson
848220c4ba update R: add version 4.3.0 (#37090) 2023-05-18 10:13:28 -07:00
Glenn Johnson
98462bd27e Cran updates (#37296)
* add version 1.28.3 to r-hexbin
* add version 5.0-1 to r-hmisc
* add version 0.5.5 to r-htmltools
* add version 1.6.2 to r-htmlwidgets
* add version 1.4.2 to r-igraph
* add version 0.42.19 to r-imager
* add version 1.0-5 to r-inum
* add version 0.9-14 to r-ipred
* add version 1.3.2 to r-irkernel
* add version 2.2.0 to r-janitor
* add version 0.1-10 to r-jpeg
* add version 1.2.2 to r-jsonify
* add version 0.9-32 to r-kernlab
* add version 1.7-2 to r-klar
* add version 1.42 to r-knitr
* add version 1.14.0 to r-ks
* add version 2.11.0 to r-labelled
* add version 1.7.2.1 to r-lava
* add version 0.6-15 to r-lavaan
* add version 2.1.2 to r-leaflet
* add version 2.9-0 to r-lfe
* add version 1.1.6 to r-lhs
* add version 1.1-33 to r-lme4
* add version 1.5-9.7 to r-locfit
* add version 0.4.3 to r-log4r
* add version 5.6.18 to r-lpsolve
* add version 0.2-11 to r-lwgeom
* add version 2.7.4 to r-magick
* add version 1.22.1 to r-maldiquant
* add version 1.2.11 to r-mapproj
* add version 1.6 to r-markdown
* add version 7.3-59 to r-mass
* add version 1.5-4 to r-matrix
* add version 0.63.0 to r-matrixstats
* add version 4.2-3 to r-memuse
* add version 4.0-0 to r-metafor
* add version 1.8-42 to r-mgcv
* add version 3.15.0 to r-mice
* add version 0.4-5 to r-mitml
* add version 2.0.0 to r-mixtools
* add version 0.1.11 to r-modelr
* add version 1.4-23 to r-multcomp
* add version 0.1-9 to r-multcompview
* add version 0.1-13 to r-mutoss
* add version 1.18.1 to r-network
* add version 3.3.4 to r-nleqslv
* add version 3.1-162 to r-nlme
* add version 0.26 to r-nmf
* add version 0.60-17 to r-np
* add version 4.2.5.2 to r-openxlsx
* add version 2022.11-16 to r-ordinal
* add version 0.6.0.8 to r-osqp
* add version 0.9.1 to r-packrat
* add version 1.35.0 to r-parallelly
* add version 1.3-13 to r-party
* add version 1.2-20 to r-partykit
* add version 1.7-0 to r-pbapply
* add version 0.3-9 to r-pbdzmq
* add version 1.2 to r-pegas
* add version 1.5-1 to r-phytools
* add version 1.9.0 to r-pillar
* add version 1.4.0 to r-pkgbuild
* add version 2.1.0 to r-pkgcache
* add version 0.5.0 to r-pkgdepends
* add version 2.0.7 to r-pkgdown
* add version 1.3.2 to r-pkgload
* add version 0.1-8 to r-png
* add version 1.1.22 to r-polspline
* add version 1.0.1 to r-pool
* add version 1.4.1 to r-posterior
* add version 3.8.1 to r-processx
* add version 2023.03.31 to r-prodlim
* add version 1.0-12 to r-proj4
* add version 2.5.0 to r-projpred
* add version 0.1.6 to r-pryr
* add version 1.7.5 to r-ps
* add version 1.0.1 to r-purrr
* add version 1.3.2 to r-qqconf
* add version 0.25.5 to r-qs
* add version 1.60 to r-qtl
* add version 0.4.22 to r-quantmod
* add version 5.95 to r-quantreg
* add version 0.7.8 to r-questionr
* add version 1.2.5 to r-ragg
* add version 0.15.1 to r-ranger
* add version 3.6-20 to r-raster
* add version 2.2.13 to r-rbibutils
* add version 1.0.10 to r-rcpp
* add version 0.12.2.0.0 to r-rcpparmadillo
* add version 0.1.7 to r-rcppde
* add version 0.3.13 to r-rcppgsl
* add version 1.98-1.12 to r-rcurl
* add version 1.2-1 to r-rda
* add version 2.1.4 to r-readr
* add version 1.4.2 to r-readxl
* add version 1.0.6 to r-recipes
* add version 1.1.6 to r-repr
* add version 1.2.16 to r-reproducible
* add version 0.3.0 to r-require
* add version 1.28 to r-reticulate
* add version 2.0.7 to r-rfast
* add version 1.6-6 to r-rgdal
* add version 0.6-2 to r-rgeos
* add version 1.1.3 to r-rgl
* add version 0.2.18 to r-rinside
* add version 4-14 to r-rjags
* add version 1.3-1.8 to r-rjsonio
* add version 2.21 to r-rmarkdown
* add version 0.9-2 to r-rmpfr
* add version 0.7-1 to r-rmpi
* add version 6.6-0 to r-rms
* add version 0.10.25 to r-rmysql
* add version 0.8.7 to r-rncl
* add version 2.4.11 to r-rnexml
* add version 0.95-1 to r-robustbase
* add version 1.3-20 to r-rodbc
* add version 7.2.3 to r-roxygen2
* add version 1.4.5 to r-rpostgres
* add version 0.7-5 to r-rpostgresql
* add version 0.8.29 to r-rsconnect
* add version 0.4-15 to r-rsnns
* add version 2.3.1 to r-rsqlite
* add version 0.7.2 to r-rstatix
* add version 1.1.2 to r-s2
* add version 0.4.5 to r-sass
* add version 0.1.9 to r-scatterpie
* add version 0.3-43 to r-scatterplot3d
* add version 3.2.4 to r-scs
* add version 1.6-4 to r-segmented
* add version 4.2-30 to r-seqinr
* add version 0.26 to r-servr
* add version 4.3.0 to r-seurat
* add version 1.0-12 to r-sf
* add version 0.4.2 to r-sfheaders
* add version 1.1-15 to r-sfsmisc
* add version 1.7.4 to r-shiny
* add version 1.9.0 to r-signac
* add version 1.6.0.3 to r-smoof
* add version 0.1.7-1 to r-sourcetools
* add version 1.6-0 to r-sp
* add version 1.3-0 to r-spacetime
* add version 7.3-16 to r-spatial
* add version 2.0-0 to r-spatialeco
* add version 1.2-8 to r-spatialreg
* add version 3.0-5 to r-spatstat
* add version 3.0-1 to r-spatstat-data
* add version 3.1-0 to r-spatstat-explore
* add version 3.1-0 to r-spatstat-geom
* add version 3.1-0 to r-spatstat-linnet
* add version 3.1-4 to r-spatstat-random
* add version 3.0-1 to r-spatstat-sparse
* add version 3.0-2 to r-spatstat-utils
* add version 2.2.2 to r-spdata
* add version 1.2-8 to r-spdep
* add version 0.6-1 to r-stars
* add version 1.5.0 to r-statmod
* add version 4.8.0 to r-statnet-common
* add version 1.7.12 to r-stringi
* add version 1.5.0 to r-stringr
* add version 1.9.1 to r-styler
* add version 3.5-5 to r-survival
* add version 1.5-4 to r-tclust
* add version 1.7-29 to r-terra
* add version 3.1.7 to r-testthat
* add version 1.1-2 to r-th-data
* add version 1.2 to r-tictoc
* add version 1.3.2 to r-tidycensus
* add version 1.2.3 to r-tidygraph
* add version 1.3.0 to r-tidyr
* add version 2.0.0 to r-tidyverse
* add version 0.2.0 to r-timechange
* add version 0.45 to r-tinytex
* add version 0.4.1 to r-triebeard
* add version 1.0-9 to r-truncnorm
* add version 0.10-53 to r-tseries
* add version 0.8-1 to r-units
* add version 4.3.0 to r-v8
* add version 1.4-11 to r-vcd
* add version 1.14.0 to r-vcfr
* add version 0.6.2 to r-vctrs
* add version 1.1-8 to r-vgam
* add version 0.4.0 to r-vioplot
* add version 1.6.1 to r-vroom
* add version 1.72-1 to r-wgcna
* add version 0.4.1 to r-whisker
* add version 0.7.2 to r-wk
* add version 0.39 to r-xfun
* add version 1.7.5.1 to r-xgboost
* add version 1.0.7 to r-xlconnect
* add version 3.99-0.14 to r-xml
* add version 0.13.1 to r-xts
* add version 2.3.7 to r-yaml
* add version 2.3.0 to r-zip
* add version 1.8-12 to r-zoo
* r-bigmem: dependency on uuid
* r-bio3d: dependency on zlib
* r-devtools: dependency cleanup
* r-dose: dependency cleanup
* r-dss: dependency cleanup
* r-enrichplot: dependency cleanup
* r-fgsea: dependency cleanup
* r-geor: dependency cleanup
* r-ggridges: dependency cleanup
* r-lobstr: dependency cleanup
* r-lubridate: dependency cleanup
* r-mnormt: dependency cleanup
* r-sctransform: version format correction
* r-seuratobject: dependency cleanup
* r-tidyselect: dependency cleanup
* r-tweenr: dependency cleanup
* r-uwot: dependency cleanup
* new package: r-clock
* new package: r-conflicted
* new package: r-diagram
* new package: r-doby
* new package: r-httr2
* new package: r-kableextra
* new package: r-mclogit
* new package: r-memisc
* new package: r-spatstat-model
* r-rmysql: use mariadb-client
* r-snpstats: add zlib dependency
* r-qs: add zstd dependency
* r-rcppcnpy: add zlib dependency
* black reformatting
* Revert "r-dose: dependency cleanup"
  This reverts commit 4c8ae8f5615ee124fff01ce43eddd3bb5d06b9bc.
* Revert "r-dss: dependency cleanup"
  This reverts commit a6c5c15c617a9a688fdcfe2b70c501c3520d4706.
* Revert "r-enrichplot: dependency cleanup"
  This reverts commit 65e116c18a94d885bc1a0ae667c1ef07d1fe5231.
* Revert "r-fgsea: dependency cleanup"
  This reverts commit ffe2cdcd1f73f69d66167b941970ede0281b56d7.
* r-rda: this package is back in CRAN
* r-sctransform: fix copyright
* r-seurat: fix copyright
* r-seuratobject: fix copyright
* Revert "add version 6.0-94 to r-caret"
  This reverts commit 236260597de97a800bfc699aec1cd1d0e3d1ac60.
* add version 6.0-94 to r-caret
* Revert "add version 1.8.5 to r-emmeans"
  This reverts commit 64a129beb0bd88d5c88fab564cade16c03b956ec.
* add version 1.8.5 to r-emmeans
* Revert "add version 5.0-1 to r-hmisc"
  This reverts commit 517643f4fd8793747365dfcfc264b894d2f783bd.
* add version 5.0-1 to r-hmisc
* Revert "add version 1.42 to r-knitr"
  This reverts commit 2a0d9a4c1f0ba173f7423fed59ba725bac902c37.
* add version 1.42 to r-knitr
* Revert "add version 1.6 to r-markdown"
  This reverts commit 4b5565844b5704559b819d2e775fe8dec625af99.
* add version 1.6 to r-markdown
* Revert "add version 0.26 to r-nmf"
  This reverts commit 4c44a788b17848f2cda67b32312a342c0261caec.
* add version 0.26 to r-nmf
* Revert "add version 2.3.1 to r-rsqlite"
  This reverts commit 5722ee2297276e4db8beee461d39014b0b17e420.
* add version 2.3.1 to r-rsqlite
* Revert "add version 1.0-12 to r-sf"
  This reverts commit ee1734fd62cc02ca7a9359a87ed734f190575f69.
* add version 1.0-12 to r-sf
* fix syntax error
2023-05-18 09:57:43 -07:00
Cameron Stanavige
2e2515266d unifyfs: new v1.1 release (#37756)
Add v1.1 release
Update mochi-margo dependency compatible versions
Update version range of libfabric conflict
2023-05-18 09:42:27 -07:00
Chris Green
776ab13276 [xrootd] New variants, new version, improve build config (#37682)
* Add FNAL Spack team to maintainers

* New variants and configuration improvements

* Version dependent "no-systemd" patches.

* New variants `client_only`, and `davix`

* Better handling of `cxxstd` for different versions, including
  improved patching and CMake options.

* Version-specific CMake requirements.

* Better version-specific handling of `openssl` dependency.

* `py-setuptools` required for `+python` build.

* Specific enable/disable of CMake options and use of
  `-DFORCE_ENABLED=TRUE` to prevent unwanted/non-portable activation
  of features.

* Better handling of `+python` configuration.

* New version 5.5.5
2023-05-18 10:49:18 -05:00
Massimiliano Culpo
c2ce9a6d93 Bump Spack version on develop to 0.21.0.dev0 (#37760) 2023-05-18 12:47:55 +02:00
Peter Scheibel
4e3ed56dfa Bugfix: allow preferred new versions from externals (#37747) 2023-05-18 09:40:26 +02:00
Tamara Dahlgren
dcfcc03497 maintainers: switch from list to directive (#37752) 2023-05-17 22:25:57 +00:00
Stephen Sachs
125c20bc06 Add aws-pcluster[-aarch64] stacks (#37627)
Add aws-pcluster[-aarch64] stacks.  These stacks build packages defined in
https://github.com/spack/spack-configs/tree/main/AWS/parallelcluster

They use a custom container from https://github.com/spack/gitlab-runners which
includes the necessary ParallelCluster software to link and build, as well as an
upstream Spack installation with a current GCC and dependencies.

Intel and ARM software is installed and used during the build stage but removed
from the buildcache before the signing stage.

Files `configs/linux/{arch}/ci.yaml` select the necessary providers in order to
build for specific architectures (icelake, skylake, neoverse_{n,v}1).
2023-05-17 16:21:10 -06:00
Brian Van Essen
f7696a4480 Added version 1.3.1 (#37735) 2023-05-17 14:51:02 -07:00
Harmen Stoppels
a5d7667cb6 lmod: fix build, bump patch version (#37744) 2023-05-17 13:18:02 -04:00
Massimiliano Culpo
d45818ccff Limit deepcopy to just the initial "all" section (#37718)
Modifications:
- [x] Limit the scope of the deepcopy when initializing module file writers
2023-05-17 10:17:41 -07:00
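A minimal sketch of the optimization described above, in generic Python rather than Spack's code: deep-copy only the section that gets mutated instead of the whole configuration tree.

import copy

# hypothetical module configuration; only the "all" defaults are mutated per writer
config = {
    "all": {"autoload": "direct", "conflict": []},
    "gcc": {"environment": {"set": {"CC": "gcc"}}},
}

defaults = copy.deepcopy(config["all"])  # copy just the small "all" section...
# ...rather than copy.deepcopy(config), whose cost grows with the full tree
defaults["conflict"].append("gcc")       # safe: the original config is untouched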
Scott Wittenburg
bcb7af6eb3 gitlab ci: no copy-only pipelines w/ deprecated config (#37720)
Make it clear that copy-only pipelines are not supported while still
using the deprecated ci config format. Also ensure that the deprecated
stack does not fail on spack pipelines for tags.
2023-05-17 09:46:30 -06:00
Juan Miguel Carceller
f438fb6c79 whizard: build newer versions in parallel (#37422) 2023-05-17 17:15:50 +02:00
Harmen Stoppels
371a8a361a libxcb: depend on python, remove releases that need python 2 (#37698) 2023-05-17 17:05:30 +02:00
Tamara Dahlgren
86b9ce1c88 spack test: fix stand-alone test suite status reporting (#37602)
* Fix reporting of packageless specs as having no tests

* Add test_test_output_multiple_specs with update to simple-standalone-test (and tests)

* Refactored test status summary; added more tests or checks
2023-05-17 16:03:21 +02:00
Seth R. Johnson
05232034f5 celeritas: new version 0.2.2 (#37731)
* celeritas: new version 0.2.2

* [@spackbot] updating style on behalf of sethrj
2023-05-17 05:38:09 -04:00
Peter Scheibel
7a3da0f606 Tk/Tcl packages: speed up file search (#35902) 2023-05-17 09:27:05 +02:00
Yoshiaki Senda
d96406a161 Add recently added Spack Docker Images to documentation (#37732)
Signed-off-by: Yoshiaki Senda <yoshiaki@live.it>
2023-05-17 08:48:27 +02:00
Tamara Dahlgren
ffa5962356 emacs: convert to new stand-alone test process (#37725) 2023-05-17 00:25:35 -04:00
Massimiliano Culpo
67e74da3ba Fix spack find not able to display version ranges in compilers (#37715) 2023-05-17 00:24:38 -04:00
Chris Green
9ee2d79de1 libxpm package: fix RHEL8 build with libintl (#37713)
Set LDFLAGS rather than LDLIBS
2023-05-16 13:32:26 -05:00
John W. Parent
79e4a13eee Windows: fix MSVC version handling (#37711)
MSVC compiler logic was using string parsing to extract the version
from the compiler spec, which was fragile. This broke in #37572, so it has
been fixed and made more robust by using attribute access.
2023-05-16 11:00:55 -07:00
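The fragility being fixed, in miniature (a hypothetical stand-in class, not Spack's): parsing the version out of a spec's string rendering breaks whenever the rendering changes, while attribute access does not.

class CompilerSpec:  # stand-in for illustration only
    def __init__(self, name, version):
        self.name, self.version = name, version
    def __str__(self):
        return f"{self.name}@={self.version}"  # rendering gained '@=' in #37572

spec = CompilerSpec("msvc", "19.36")

fragile = str(spec).split("@")[-1]  # yields '=19.36' once the format changed
robust = spec.version               # '19.36' no matter how __str__ renders

print(fragile, robust)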
kwryankrattiger
4627438373 CI: Expand E4S ROCm stack to include missing DaV packages (#36843)
* CI: Expand E4S ROCm stack to include missing DaV packages

Ascent: Fixup for VTK-m with Kokkos backend

* DaV SDK: Removed duplicated openmp variant for ascent

* Drop visit and add conflict for Kokkos

* E4S: Drop ascent from CUDA builds
2023-05-16 09:34:52 -05:00
Harmen Stoppels
badaaf7092 gha rhel8-platform-python: configure git safe.directory (#37708) 2023-05-16 16:31:13 +02:00
Harmen Stoppels
815ac000cc Revert "hdf5: fix showconfig (#34920)" (#37707)
This reverts commit 192e564e26.
2023-05-16 15:57:15 +02:00
Peter Scheibel
7bc5b26c52 Requirements and preferences should not define (non-git) versions (#37687)
Ensure that requirements `packages:*:require:@x` and preferences `packages:*:version:[x]`
fail concretization when no version defined in the package satisfies `x`. This always holds
except for git versions -- they are defined on the fly.
2023-05-16 15:45:11 +02:00
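A toy model of that rule (not Spack's implementation): a required or preferred version must match some version the package defines, with git versions exempt because they are created on demand.

def version_ok(requested: str, defined: set) -> bool:
    if requested.startswith("git."):  # assumption: git versions spelled 'git.<ref>'
        return True                   # defined on the fly, so always allowed
    return requested in defined       # otherwise it must be a known version

defined = {"1.0", "1.1", "2.0"}
print(version_ok("1.1", defined))          # True
print(version_ok("git.develop", defined))  # True
print(version_ok("9.9", defined))          # False -> concretization should fail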
Harmen Stoppels
a0e7ca94b2 gha bootstrap-dev-rhel8: configure git safe.directory (#37702)
git has been updated to something more recent
2023-05-16 15:21:42 +02:00
Harmen Stoppels
e56c90d839 check_modules_set_name: do not check for "enable" key (#37701) 2023-05-16 11:51:52 +02:00
Ye Luo
54003d4d72 Update llvm recipe regarding libomptarget. (#36675) 2023-05-16 11:20:02 +02:00
QuellynSnead
c47b554fa1 libxcb/xcb-proto: Enable internal Python dependency (#37575)
In the past, Spack did not allow two different versions of the
same package within a DAG. That led to difficulties with packages
that still required Python 2 while other packages had already
switched to Python 3.

The libxcb and xcb-proto packages did not have Python 3 support
for a time. To get around this issue, Spack maintainers disabled
their dependency on an internal (i.e., Spack-provided) Python
(see #4145), forcing these packages to look for a system-provided
Python (see #7646).

This has worked for us all right, but with the arrival of our most
recent platform we seem to be missing the critical xcbgen Python
module on the system. Since most software has largely moved on to
Python 3 now, let's re-enable internal Spack dependencies for the
libxcb and xcb-proto packages.
2023-05-16 10:00:01 +02:00
Mikael Simberg
b027f64a7f Add conflict for pika with fmt@10 and +cuda/rocm (#37679) 2023-05-16 09:24:02 +02:00
Greg Becker
3765a5f7f8 unify: when_possible and unify: true -- Bugfix for error in #37438 (#37681)
Two bugs came in from #37438

1. `unify: when_possible` was broken because of an incorrect assertion: abstract/concrete
   spec pairs were compared against the results that were in the process of being computed,
   rather than against the previous results.
2. `unify: true` had an ordering bug that could mix the association between abstract and
   concrete specs

- [x] 1 is resolved by creating a lookup from old concrete specs to old abstract specs,
      and we use that to associate the "new" concrete specs that happen to be the old
      ones with their abstract specs (since those are stripped out for concretization).
- [x] 2 is resolved by combining the new and old abstract as lists instead of combining
      them as sets. This is important because `set() | set()` does not make any ordering
      promises, even though set ordering is otherwise guaranteed in `python@3.7:`
2023-05-16 01:08:34 -04:00
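The second fix rests on a general Python fact worth spelling out: combining lists preserves order, while set union does not. A small demonstration:

old_abstract = ["a", "b", "c"]
new_abstract = ["d", "e"]

# list concatenation: order (and thus abstract/concrete pairing) is stable
combined = old_abstract + [s for s in new_abstract if s not in old_abstract]
print(combined)  # always ['a', 'b', 'c', 'd', 'e']

# set union: same contents, but iteration order is unspecified, so pairing
# the result positionally against concrete specs can mix up associations
combined_set = set(old_abstract) | set(new_abstract)
print(list(combined_set))  # some arbitrary order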
Robert Blake
690661eadd Upgrading kosh to 3.0 (#37471)
* Upgrading kosh to 3.0.

* Accidentally regressed the package, changing back.

* Updating py-hdbscan versions for kosh.

* Fixing bug in patch.

* Adding 3.0.1

* Removing 3.0.

* Updating package deps for hdbscan to match requirements.txt.

* Version reqs for 3.0.*, need newer numpy and networkx

* spack style

* Reordering to match setup.py, adding "type" to python depends.
2023-05-16 01:08:20 -04:00
eugeneswalker
f7bbc326e4 trilinos: @develop fixes (#37615)
* trilinos@develop fixes

* Update var/spack/repos/builtin/packages/trilinos/package.py

Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>

---------

Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
2023-05-15 17:25:14 -07:00
Scott Wittenburg
a184bfc1a6 gitlab ci: reduce job name length of build_systems pipeline (#37686) 2023-05-16 00:26:37 +02:00
Alec Scott
81634440fb circos: add v0.69-9 (#37479) 2023-05-15 14:43:44 -07:00
Alec Scott
711d7683ac alluxio: add v2.9.3 (#37488) 2023-05-15 14:42:48 -07:00
Alec Scott
967356bcf5 codec2: add v1.1.0 (#37480) 2023-05-15 14:42:08 -07:00
Alec Scott
c006ed034a coinutils: add v2.11.9 (#37481) 2023-05-15 14:41:25 -07:00
Alec Scott
d065c65d94 g2c: add v1.7.0 (#37482) 2023-05-15 14:40:41 -07:00
Alec Scott
e23c372ff1 shadow: add v4.13 (#37485) 2023-05-15 14:38:32 -07:00
Alec Scott
25d2de5629 yoda: add v1.9.8 (#37487) 2023-05-15 14:37:31 -07:00
Alec Scott
d73a23ce35 cpp-httplib: add v0.12.3 (#37490) 2023-05-15 14:35:32 -07:00
Alec Scott
a62cb3c0f4 entt: add v3.11.1 (#37491) 2023-05-15 14:34:47 -07:00
Alec Scott
177da4595e harfbuzz: add v7.2.0 (#37492) 2023-05-15 14:34:06 -07:00
Alec Scott
e4f05129fe libconfuse: add v3.3 (#37493) 2023-05-15 14:33:19 -07:00
Alec Scott
c25b994917 libnsl: add v2.0.0 (#37494) 2023-05-15 14:32:49 -07:00
Alec Scott
95c4c5270a p11-kit: add v0.24.1 (#37495) 2023-05-15 14:31:43 -07:00
Alec Scott
1cf6a15a08 packmol: add v20.0.0 (#37496)
* packmol: add v20.0.0
* Fix zoltan homepage url
2023-05-15 14:29:01 -07:00
Alec Scott
47d206611a perl-module-build-tiny: add v0.044 (#37497) 2023-05-15 14:25:53 -07:00
Alec Scott
a6789cf653 zoltan: add v3.901 (#37498) 2023-05-15 14:25:00 -07:00
Alec Scott
933cd858e0 bdii: add v6.0.1 (#37499) 2023-05-15 14:24:14 -07:00
Alec Scott
8856361076 audit-userspace: add v3.1.1 (#37505) 2023-05-15 14:15:48 -07:00
Alec Scott
d826df7ef6 babl: add v0.1.106 (#37506) 2023-05-15 14:15:28 -07:00
Alec Scott
d8a9b42da6 actsvg: add v0.4.33 (#37503) 2023-05-15 14:14:45 -07:00
Alec Scott
7d926f86e8 bat: add v0.23.0 (#37507) 2023-05-15 14:09:51 -07:00
Alec Scott
1579544d57 beast-tracer: add v1.7.2 (#37508) 2023-05-15 14:09:18 -07:00
Alec Scott
1cee3fb4a5 cronie: add v1.6.1 (#37509) 2023-05-15 14:08:41 -07:00
Alec Scott
a8e2ad53dd cups: add v2.3.3 (#37510) 2023-05-15 14:08:09 -07:00
Alec Scott
6821fa7246 diamond: add v2.1.6 (#37511) 2023-05-15 14:07:31 -07:00
Alec Scott
09c68da1bd dust: add v0.8.6 (#37513) 2023-05-15 14:06:31 -07:00
Alec Scott
73064d62cf f3d: add v2.0.0 (#37514) 2023-05-15 14:05:37 -07:00
Alec Scott
168ed2a782 fullock: add v1.0.50 (#37515) 2023-05-15 14:02:36 -07:00
Alec Scott
9f60b29495 graphviz: add v8.0.5 (#37517) 2023-05-15 14:00:50 -07:00
Alec Scott
7abcd78426 krakenuniq: add v1.0.4 (#37519) 2023-05-15 13:59:15 -07:00
Alec Scott
d5295301de libfyaml: add v0.8 (#37520) 2023-05-15 13:58:13 -07:00
Alec Scott
beccc49b81 libluv: add v1.44.2-1 (#37522) 2023-05-15 13:55:57 -07:00
Alec Scott
037e7ffe33 libvterm: add v0.3.1 (#37524) 2023-05-15 13:54:15 -07:00
Alec Scott
293da8ed20 lighttpd: add v1.4.69 (#37525) 2023-05-15 13:53:30 -07:00
Alec Scott
2780ab2f6c mrchem: add v1.1.2 (#37526) 2023-05-15 13:51:54 -07:00
Alec Scott
1ed3c81b58 mutationpp: add v1.0.5 (#37527) 2023-05-15 13:50:48 -07:00
Alec Scott
50ce0a25b2 preseq: add v2.0.3 (#37528) 2023-05-15 13:49:47 -07:00
Alec Scott
d784227603 shtools: add v4.10.2 (#37530) 2023-05-15 13:47:15 -07:00
Alec Scott
ab9ed91539 tig: add v2.5.8 (#37531) 2023-05-15 13:46:03 -07:00
Alec Scott
421256063e trimgalore: add v0.6.9 (#37532) 2023-05-15 13:44:44 -07:00
Alec Scott
75459bc70c vdt: add v0.4.4 (#37533) 2023-05-15 13:43:40 -07:00
Carson Woods
33752eabb8 Improve package source code context display on error (#37655)
Spack displays package code context when it shouldn't (e.g., on `FetchError`s)
and doesn't display it when it should (e.g., when errors occur in builder classes).
The line attribution can sometimes be off by one, as well.

- [x] Display package context when errors occur in a subclass of `PackageBase`
- [x] Display package context when errors occur in a subclass of `BaseBuilder`
- [x] Do not display package context when errors occur in `PackageBase`,
      `BaseBuilder` or other core code that is not in a `package.py` file.
- [x] Fix off-by-one error for core code (don't subtract one from the line number *unless*
      it's in an actual `package.py` file).

---------

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2023-05-15 13:38:11 -07:00
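A rough sketch of the mechanism (plain Python, not Spack's actual code): filter traceback frames by filename to decide whether an error arose in a package.py file, and show source context only then.

import traceback

def package_frames(exc: BaseException):
    """Frames of the traceback that belong to a package.py file."""
    return [f for f in traceback.extract_tb(exc.__traceback__)
            if f.filename.endswith("package.py")]

try:
    raise RuntimeError("install failed")
except RuntimeError as err:
    frames = package_frames(err)
    if frames:                      # error came from package code: show context
        f = frames[-1]
        print(f"{f.filename}:{f.lineno}: {f.line}")
    else:                           # core code: no package context to display
        print("no package.py frame involved")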
Alec Scott
f1d1bb9167 gsl-lite: add v0.41.0 (#37483) 2023-05-15 13:36:03 -07:00
Alec Scott
68eaff24b0 crtm-fix: correct invalid checksum for v2.4.0 (#37500) 2023-05-15 13:34:32 -07:00
Alec Scott
862024cae1 dos2unix: add v7.4.4 (#37512) 2023-05-15 13:31:19 -07:00
Adam J. Stewart
9d6bcd67c3 Update PyTorch ecosystem (#37562) 2023-05-15 13:29:44 -07:00
Chris White
d97ecfe147 SUNDIALS: new version of sundials and guard against examples being installed (#37576)
* add new version of sundials and guard against examples not installing
* fix flipping of variant
* fix directory not being there when writing a file
2023-05-15 13:21:37 -07:00
Alec Scott
0d991de50a subversion: add v1.14.2 (#37543) 2023-05-15 13:16:04 -07:00
Alec Scott
4f278a0255 go: add v1.20.4 (#37660)
* go: add v1.20.4
* Deprecate v1.20.2 and v1.19.7 due to CVE-2023-24538
2023-05-15 13:10:02 -07:00
Chris Green
6e72a3cff1 [davix] Enable third party copies with gSOAP (#37648)
* [davix] Enable third party copies with gSOAP

* Add FNAL Spack team to maintainers
2023-05-15 14:46:52 -05:00
snehring
1532c77ce6 micromamba: adding version 1.4.2 (#37594)
* micromamba: adding version 1.4.2
* micromamba: change to micromamba-1.4.2 tag artifacts
2023-05-15 10:40:54 -07:00
Mikael Simberg
5ffbce275c Add ut (#37603) 2023-05-15 10:35:55 -07:00
Carsten Uphoff
0e2ff2dddb Add double-batched FFT library v0.4.0 (#37616)
Signed-off-by: Carsten Uphoff <carsten.uphoff@intel.com>
2023-05-15 10:28:52 -07:00
Mikael Simberg
c0c446a095 stdexec: Add 23.03 (#37638) 2023-05-15 10:20:12 -07:00
snehring
33dbd44449 tmscore: adding new package (#37644) 2023-05-15 10:17:50 -07:00
Sean Koyama
7b0979c1e9 hwloc: explicitly disable building netloc for ~netloc (#35604)
* hwloc: explicitly disable building netloc for ~netloc

* hwloc: update syntax for netloc variant configure argument

---------

Co-authored-by: Sean Koyama <skoyama@anl.gov>
2023-05-15 12:16:21 -05:00
snehring
c9849dd41d tmalign: new version 20220412 (#37645) 2023-05-15 10:14:58 -07:00
Chris Green
d44e97d3f2 [scitokens-cpp] New variant cxxstd, depend on standalone jwt-cpp (#37643)
* Add FNAL Spack team to maintainers
* New variant `cxxstd`
* Depend on `jwt-cpp`
* New versions: 0.7.2, 0.7.3
2023-05-15 13:08:00 -04:00
Adam J. Stewart
8713ab0f67 py-timm: add v0.9 (#37654)
* py-timm: add v0.9
* add v0.9.1 and v0.9.2
* add new package py-safetensors (v0.3.1)
2023-05-15 09:41:58 -07:00
Harmen Stoppels
6a47339bf8 oneapi: before script load modules (#37678) 2023-05-15 18:39:58 +02:00
Alec Scott
1c0fb6d641 amrfinder: add v3.11.8 (#37656) 2023-05-15 09:38:29 -07:00
Alec Scott
b45eee29eb canal: add v1.1.6 (#37657) 2023-05-15 09:36:17 -07:00
Alec Scott
6d26274459 code-server: add v4.12.0 (#37658) 2023-05-15 09:35:17 -07:00
Alec Scott
2fb07de7bc fplll: add v5.4.4 (#37659) 2023-05-15 09:34:13 -07:00
Alec Scott
7678dc6b49 iso-codes: add v4.15.0 (#37661) 2023-05-15 09:27:05 -07:00
Frank Willmore
1944dd55a7 Update package.py for maker (#37662) 2023-05-15 09:25:43 -07:00
Adam J. Stewart
0b6c724743 py-sphinx: add v7.0.1 (#37665) 2023-05-15 09:23:22 -07:00
eugeneswalker
fa98023375 new pkg: py-psana (#37666) 2023-05-15 09:19:54 -07:00
Todd Gamblin
e79a911bac bugfix: allow reuse of packages from foreign namespaces
We currently throw a nasty error if you try to reuse packages from some other namespace
(e.g., OLCF), but we should be able to reuse patched local versions of builtin packages.

Right now the only obstacle to that is that we try to look up virtual info for unknown
namespaces, and we can't get the package from the repo to do that. We *can* assume that
a package with a known namespace is similar, and that its virtual provider information
is reasonably accurate, so we now do that. This isn't 100% accurate, but neither is
relying on the package itself, as it may have gone out of date.

The real solution here is virtual edge information, but this is a stopgap until we have
that.
2023-05-15 09:15:49 -07:00
Todd Gamblin
fd3efc71fd bugfix: don't look up virtual information for unknown packages
`spec_clauses()` attempts to look up package information for concrete specs in order to
determine which virtuals they may provide. This fails for renamed/deleted dependencies
of buildcaches and installed packages.

This will eventually be fixed by #35258, which adds virtual information on edges, but we
need a workaround to make older buildcaches usable.

- [x] make an exception for renamed packages and omit their virtual constraints
- [x] add a note that this will be solved by adding virtuals to edges
2023-05-15 09:15:49 -07:00
Todd Gamblin
0458de18de bugfix: don't look up patches from packages for concrete specs
The concretizer can fail with `reuse:true` if a buildcache or installation contains a
package with a dependency that has been renamed or deleted in the main repo (e.g.,
`netcdf` was refactored to `netcdf-c`, `netcdf-fortran`, etc., but there are still
binary packages with dependencies called `netcdf`).

We should still be able to install things for which we are missing `package.py` files.

`Spec.inject_patches_variant()` was failing this requirement by attempting to look up
the package class for concrete specs.  This isn't needed -- we can skip it.

- [x] swap two conditions in `Spec.inject_patches_variant()`
2023-05-15 09:15:49 -07:00
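Schematically, the swap looks like this (names are hypothetical, not Spack's API): test concreteness before the package-class lookup, so concrete specs never need a package.py to exist.

class Spec:  # minimal stand-in for illustration
    def __init__(self, name, concrete):
        self.name, self.concrete = name, concrete

def lookup_package_class(spec):
    known = {"zlib"}  # pretend repo; 'netcdf' has been renamed away
    if spec.name not in known:
        raise KeyError(f"no package.py for {spec.name}")
    return object()

def inject_patches_variant(spec):
    if spec.concrete:               # checked first: patches are already recorded
        return
    lookup_package_class(spec)      # only abstract specs need the package class

inject_patches_variant(Spec("netcdf", concrete=True))  # no error after the swap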
Vanessasaurus
f94ac8c770 add new package flux-security (#37668)
I will follow this up with a variant to flux-core to add flux-security, and then automation in the flux-framework/spack repository.

Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Co-authored-by: vsoch <vsoch@users.noreply.github.com>
2023-05-15 09:14:44 -07:00
Andrew W Elble
a03c28a916 routinator: update, deprecate old version (#37676) 2023-05-15 09:10:40 -07:00
Victor Lopez Herrero
7b7fdf27f3 dlb: add v3.3 (#37677) 2023-05-15 09:08:57 -07:00
Sergey Kosukhin
192e564e26 hdf5: fix showconfig (#34920)
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
2023-05-15 11:03:03 -05:00
Chris Green
b8c5099cde [jwt-cpp] New package (#37641)
* [jwt-cpp] New package

* Update homepage

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

* [@spackbot] updating style on behalf of greenc-FNAL

---------

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
Co-authored-by: greenc-FNAL <greenc-FNAL@users.noreply.github.com>
2023-05-15 10:16:11 -05:00
Stephen Sachs
ea5bca9067 palace: add v0.11.1 and explicit BLAS support (#37605) 2023-05-15 16:11:50 +02:00
Harmen Stoppels
e33eafd34f Bump tutorial command (#37674) 2023-05-15 13:54:52 +02:00
Xavier Delaruelle
e1344b5497 environment-modules: add version 5.3.0 (#37671) 2023-05-15 09:32:53 +02:00
832 changed files with 6266 additions and 2941 deletions

View File

@@ -5,8 +5,3 @@ updates:
     directory: "/"
     schedule:
       interval: "daily"
-  # Requirements to build documentation
-  - package-ecosystem: "pip"
-    directory: "/lib/spack/docs"
-    schedule:
-      interval: "daily"

View File

@@ -183,7 +183,7 @@ jobs:
       - name: Bootstrap clingo
         run: |
           set -ex
-          for ver in '3.7' '3.8' '3.9' '3.10' '3.11' ; do
+          for ver in '3.6' '3.7' '3.8' '3.9' '3.10' ; do
             not_found=1
             ver_dir="$(find $RUNNER_TOOL_CACHE/Python -wholename "*/${ver}.*/*/bin" | grep . || true)"
             echo "Testing $ver_dir"
@@ -214,7 +214,7 @@ jobs:
       - name: Bootstrap clingo
         run: |
           set -ex
-          for ver in '3.7' '3.8' '3.9' '3.10' '3.11' ; do
+          for ver in '3.6' '3.7' '3.8' '3.9' '3.10' ; do
             not_found=1
             ver_dir="$(find $RUNNER_TOOL_CACHE/Python -wholename "*/${ver}.*/*/bin" | grep . || true)"
             echo "Testing $ver_dir"

View File

@@ -44,7 +44,7 @@ jobs:
           cache: 'pip'
       - name: Install Python packages
         run: |
-          python3 -m pip install --upgrade pip setuptools types-six black==23.1.0 mypy isort clingo flake8==6.0.0
+          python3 -m pip install --upgrade pip setuptools types-six black==23.1.0 mypy isort clingo flake8
       - name: Setup git configuration
         run: |
           # Need this for the git tests to succeed.

View File

@@ -1,16 +1,10 @@
 version: 2

-build:
-  os: "ubuntu-22.04"
-  apt_packages:
-    - graphviz
-  tools:
-    python: "3.11"
-
 sphinx:
   configuration: lib/spack/docs/conf.py
   fail_on_warning: true

 python:
+  version: 3.7
   install:
     - requirements: lib/spack/docs/requirements.txt

View File

@@ -1,50 +1,3 @@
-# v0.20.3 (2023-10-31)
-
-## Bugfixes
-
-- Fix a bug where `spack mirror set-url` would drop configured connection info (reverts #34210)
-- Fix a minor issue with package hash computation for Python 3.12 (#40328)
-
-# v0.20.2 (2023-10-03)
-
-## Features in this release
-
-Spack now supports Python 3.12 (#40155)
-
-## Bugfixes
-
-- Improve escaping in Tcl module files (#38375)
-- Make repo cache work on repositories with zero mtime (#39214)
-- Ignore errors for newer, incompatible buildcache version (#40279)
-- Print an error when git is required, but missing (#40254)
-- Ensure missing build dependencies get installed when using `spack install --overwrite` (#40252)
-- Fix an issue where Spack freezes when the build process unexpectedly exits (#39015)
-- Fix a bug where installation failures cause an unrelated `NameError` to be thrown (#39017)
-- Fix an issue where Spack package versions would be incorrectly derived from git tags (#39414)
-- Fix a bug triggered when file locking fails internally (#39188)
-- Prevent "spack external find" to error out when a directory cannot be accessed (#38755)
-- Fix multiple performance regressions in environments (#38771)
-- Add more ignored modules to `pyproject.toml` for `mypy` (#38769)
-
-# v0.20.1 (2023-07-10)
-
-## Spack Bugfixes
-
-- Spec removed from an environment where not actually removed if `--force` was not given (#37877)
-- Speed-up module file generation (#37739)
-- Hotfix for a few recipes that treat CMake as a link dependency (#35816)
-- Fix re-running stand-alone test a second time, which was getting a trailing spurious failure (#37840)
-- Fixed reading JSON manifest on Cray, reporting non-concrete specs (#37909)
-- Fixed a few bugs when generating Dockerfiles from Spack (#37766,#37769)
-- Fixed a few long-standing bugs when generating module files (#36678,#38347,#38465,#38455)
-- Fixed issues with building Python extensions using an external Python (#38186)
-- Fixed compiler removal from command line (#38057)
-- Show external status as [e] (#33792)
-- Backported `archspec` fixes (#37793)
-- Improved a few error messages (#37791)
-
 # v0.20.0 (2023-05-21)

 `v0.20.0` is a major feature release.

View File

@@ -1,16 +0,0 @@
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
# The name of the Pygments (syntax highlighting) style to use.
# We use our own extension of the default style with a few modifications
from pygments.styles.default import DefaultStyle
from pygments.token import Generic
class SpackStyle(DefaultStyle):
styles = DefaultStyle.styles.copy()
background_color = "#f4f4f8"
styles[Generic.Output] = "#355"
styles[Generic.Prompt] = "bold #346ec9"

View File

@@ -149,6 +149,7 @@ def setup(sphinx):
# Get nice vector graphics
graphviz_output_format = "svg"
# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]
@@ -232,8 +233,30 @@ def setup(sphinx):
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
sys.path.append("./_pygments")
pygments_style = "style.SpackStyle"
# The name of the Pygments (syntax highlighting) style to use.
# We use our own extension of the default style with a few modifications
from pygments.style import Style
from pygments.styles.default import DefaultStyle
from pygments.token import Comment, Generic, Text
class SpackStyle(DefaultStyle):
styles = DefaultStyle.styles.copy()
background_color = "#f4f4f8"
styles[Generic.Output] = "#355"
styles[Generic.Prompt] = "bold #346ec9"
import pkg_resources
dist = pkg_resources.Distribution(__file__)
sys.path.append(".") # make 'conf' module findable
ep = pkg_resources.EntryPoint.parse("spack = conf:SpackStyle", dist=dist)
dist._ep_map = {"pygments.styles": {"plugin1": ep}}
pkg_resources.working_set.add(dist)
pygments_style = "spack"
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
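
The conf.py hunk above swaps between a runtime `pkg_resources` entry-point registration and a `sys.path`-based `pygments_style = "style.SpackStyle"`. A self-contained sketch of the entry-point mechanism, assuming a module named `conf` and a Pygments release that still discovers plugins through `pkg_resources` (the style and entry-point names are illustrative):

```python
import sys

import pkg_resources
from pygments.styles.default import DefaultStyle
from pygments.token import Generic

class MyStyle(DefaultStyle):
    styles = DefaultStyle.styles.copy()
    styles[Generic.Prompt] = "bold #346ec9"

# Fabricate an in-memory distribution whose entry-point map advertises the
# style class, then add it to the working set so Pygments can resolve it.
dist = pkg_resources.Distribution(__file__)
sys.path.append(".")  # make this module ("conf") importable by name
ep = pkg_resources.EntryPoint.parse("mystyle = conf:MyStyle", dist=dist)
dist._ep_map = {"pygments.styles": {"mystyle": ep}}
pkg_resources.working_set.add(dist)
```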
@@ -318,15 +341,16 @@ def setup(sphinx):
# Output file base name for HTML help builder.
htmlhelp_basename = "Spackdoc"
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples

View File

@@ -143,6 +143,26 @@ The OS that are currently supported are summarized in the table below:
* - Amazon Linux 2
- ``amazonlinux:2``
- ``spack/amazon-linux``
* - AlmaLinux 8
- ``almalinux:8``
- ``spack/almalinux8``
* - AlmaLinux 9
- ``almalinux:9``
- ``spack/almalinux9``
* - Rocky Linux 8
- ``rockylinux:8``
- ``spack/rockylinux8``
* - Rocky Linux 9
- ``rockylinux:9``
- ``spack/rockylinux9``
* - Fedora Linux 37
- ``fedora:37``
- ``spack/fedora37``
* - Fedora Linux 38
- ``fedora:38``
- ``spack/fedora38``
All the images are tagged with the corresponding release of Spack:
@@ -616,7 +636,7 @@ to customize the generation of container recipes:
- No
* - ``os_packages:command``
- Tool used to manage system packages
- ``apt``, ``yum``, ``dnf``, ``dnf_epel``, ``zypper``, ``apk``, ``yum_amazon``
- ``apt``, ``yum``, ``zypper``, ``apk``, ``yum_amazon``
- Only with custom base images
* - ``os_packages:update``
- Whether or not to update the list of available packages

View File

@@ -916,9 +916,9 @@ function, as shown in the example below:
.. code-block:: yaml
projections:
zlib: "{name}-{version}"
^mpi: "{name}-{version}/{^mpi.name}-{^mpi.version}-{compiler.name}-{compiler.version}"
all: "{name}-{version}/{compiler.name}-{compiler.version}"
zlib: {name}-{version}
^mpi: {name}-{version}/{^mpi.name}-{^mpi.version}-{compiler.name}-{compiler.version}
all: {name}-{version}/{compiler.name}-{compiler.version}
The entries in the projections configuration file must all be either
specs or the keyword ``all``. For each spec, the projection used will
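
The projection strings above are ordinary `Spec.format` templates, so an expansion can be checked directly. A small sketch, assuming a working Spack checkout on `sys.path` (the spec and the printed result are illustrative):

```python
import spack.spec

# Expand the `all:` projection from the example above for a concrete spec.
spec = spack.spec.Spec("zlib %gcc@12.1.0").concretized()
print(spec.format("{name}-{version}/{compiler.name}-{compiler.version}"))
# prints something like: zlib-1.2.13/gcc-12.1.0
```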

View File

@@ -1,8 +1,13 @@
sphinx==6.2.1
sphinxcontrib-programoutput==0.17
sphinx_design==0.4.1
sphinx-rtd-theme==1.2.1
python-levenshtein==0.21.0
docutils==0.18.1
pygments==2.15.1
urllib3==2.0.2
# These dependencies should be installed using pip in order
# to build the documentation.
sphinx>=3.4,!=4.1.2,!=5.1.0
sphinxcontrib-programoutput
sphinx-design
sphinx-rtd-theme
python-levenshtein
# Restrict to docutils <0.17 to workaround a list rendering issue in sphinx.
# https://stackoverflow.com/questions/67542699
docutils <0.17
pygments <2.13
urllib3 <2

View File

@@ -139,7 +139,7 @@ def get_fh(self, path):
def release_by_stat(self, stat):
key = (stat.st_dev, stat.st_ino, os.getpid())
open_file = self._descriptors.get(key)
assert open_file, "Attempted to close non-existing inode: %s" % stat.st_ino
assert open_file, "Attempted to close non-existing inode: %s" % stat.st_inode
open_file.refs -= 1
if not open_file.refs:

View File

@@ -4,7 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
#: PEP440 canonical <major>.<minor>.<micro>.<devN> string
__version__ = "0.20.3"
__version__ = "0.21.0.dev0"
spack_version = __version__

View File

@@ -199,11 +199,11 @@ def _associate_built_specs_with_mirror(self, cache_key, mirror_url):
with self._index_file_cache.read_transaction(cache_key):
db._read_from_file(cache_path)
except spack_db.InvalidDatabaseVersionError as e:
tty.warn(
msg = (
f"you need a newer Spack version to read the buildcache index for the "
f"following mirror: '{mirror_url}'. {e.database_version_message}"
)
return
raise BuildcacheIndexError(msg) from e
spec_list = db.query_local(installed=False, in_buildcache=True)

View File

@@ -589,7 +589,6 @@ def set_module_variables_for_package(pkg):
# TODO: make these build deps that can be installed if not found.
m.make = MakeExecutable("make", jobs)
m.gmake = MakeExecutable("gmake", jobs)
m.ninja = MakeExecutable("ninja", jobs, supports_jobserver=False)
# TODO: johnwparent: add package or builder support to define these build tools
# for now there is no entrypoint for builders to define these on their
@@ -1028,7 +1027,7 @@ def get_cmake_prefix_path(pkg):
def _setup_pkg_and_run(
serialized_pkg, function, kwargs, write_pipe, input_multiprocess_fd, jsfd1, jsfd2
serialized_pkg, function, kwargs, child_pipe, input_multiprocess_fd, jsfd1, jsfd2
):
context = kwargs.get("context", "build")
@@ -1049,12 +1048,12 @@ def _setup_pkg_and_run(
pkg, dirty=kwargs.get("dirty", False), context=context
)
return_value = function(pkg, kwargs)
write_pipe.send(return_value)
child_pipe.send(return_value)
except StopPhase as e:
# Do not create a full ChildError from this, it's not an error
# it's a control statement.
write_pipe.send(e)
child_pipe.send(e)
except BaseException:
# catch ANYTHING that goes wrong in the child process
exc_type, exc, tb = sys.exc_info()
@@ -1103,10 +1102,10 @@ def _setup_pkg_and_run(
context,
package_context,
)
write_pipe.send(ce)
child_pipe.send(ce)
finally:
write_pipe.close()
child_pipe.close()
if input_multiprocess_fd is not None:
input_multiprocess_fd.close()
@@ -1150,7 +1149,7 @@ def child_fun():
For more information on `multiprocessing` child process creation
mechanisms, see https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
"""
read_pipe, write_pipe = multiprocessing.Pipe(duplex=False)
parent_pipe, child_pipe = multiprocessing.Pipe()
input_multiprocess_fd = None
jobserver_fd1 = None
jobserver_fd2 = None
@@ -1175,7 +1174,7 @@ def child_fun():
serialized_pkg,
function,
kwargs,
write_pipe,
child_pipe,
input_multiprocess_fd,
jobserver_fd1,
jobserver_fd2,
@@ -1184,12 +1183,6 @@ def child_fun():
p.start()
# We close the writable end of the pipe now to be sure that p is the
# only process which owns a handle for it. This ensures that when p
# closes its handle for the writable end, read_pipe.recv() will
# promptly report the readable end as being ready.
write_pipe.close()
except InstallError as e:
e.pkg = pkg
raise
@@ -1199,16 +1192,7 @@ def child_fun():
if input_multiprocess_fd is not None:
input_multiprocess_fd.close()
def exitcode_msg(p):
typ = "exit" if p.exitcode >= 0 else "signal"
return f"{typ} {abs(p.exitcode)}"
try:
child_result = read_pipe.recv()
except EOFError:
p.join()
raise InstallError(f"The process has stopped unexpectedly ({exitcode_msg(p)})")
child_result = parent_pipe.recv()
p.join()
# If returns a StopPhase, raise it
@@ -1228,10 +1212,6 @@ def exitcode_msg(p):
child_result.print_context()
raise child_result
# Fallback. Usually caught beforehand in EOFError above.
if p.exitcode != 0:
raise InstallError(f"The process failed unexpectedly ({exitcode_msg(p)})")
return child_result
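
The hunks above rename the two ends of the pipe (`read_pipe`/`write_pipe` versus `parent_pipe`/`child_pipe`); in the variant with the comment, the parent also closes its copy of the writable end so that a child that dies before sending shows up as a prompt `EOFError` rather than a hang. That pattern in a standalone sketch:

```python
import multiprocessing


def child(write_pipe):
    # Send a result; if this process dies first, the parent sees EOF instead.
    write_pipe.send("ok")
    write_pipe.close()


if __name__ == "__main__":
    read_pipe, write_pipe = multiprocessing.Pipe(duplex=False)
    p = multiprocessing.Process(target=child, args=(write_pipe,))
    p.start()
    # Close the parent's handle so the child owns the only writable end; when
    # it exits, recv() promptly raises EOFError instead of blocking forever.
    write_pipe.close()
    try:
        result = read_pipe.recv()
    except EOFError:
        p.join()
        raise RuntimeError(f"child stopped unexpectedly (exit code {p.exitcode})")
    p.join()
```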
@@ -1276,8 +1256,9 @@ def make_stack(tb, stack=None):
func = getattr(obj, tb.tb_frame.f_code.co_name, "")
if func:
typename, *_ = func.__qualname__.partition(".")
if isinstance(obj, CONTEXT_BASES) and typename not in basenames:
break
if isinstance(obj, CONTEXT_BASES) and typename not in basenames:
break
else:
return None
@@ -1392,7 +1373,7 @@ def long_message(self):
test_log = join_path(os.path.dirname(self.log_name), spack_install_test_log)
if os.path.isfile(test_log):
out.write("\nSee test log for details:\n")
out.write(" {0}\n".format(test_log))
out.write(" {0}n".format(test_log))
return out.getvalue()

View File

@@ -180,51 +180,6 @@ def test(self):
work_dir="spack-test",
)
def update_external_dependencies(self, extendee_spec=None):
"""
Ensure all external python packages have a python dependency
If another package in the DAG depends on python, we use that
python for the dependency of the external. If not, we assume
that the external PythonPackage is installed into the same
directory as the python it depends on.
"""
# TODO: Include this in the solve, rather than instantiating post-concretization
if "python" not in self.spec:
if extendee_spec:
python = extendee_spec
elif "python" in self.spec.root:
python = self.spec.root["python"]
else:
python = self.get_external_python_for_prefix()
if not python.concrete:
repo = spack.repo.path.repo_for_pkg(python)
python.namespace = repo.namespace
# Ensure architecture information is present
if not python.architecture:
host_platform = spack.platforms.host()
host_os = host_platform.operating_system("default_os")
host_target = host_platform.target("default_target")
python.architecture = spack.spec.ArchSpec(
(str(host_platform), str(host_os), str(host_target))
)
else:
if not python.architecture.platform:
python.architecture.platform = spack.platforms.host()
if not python.architecture.os:
python.architecture.os = "default_os"
if not python.architecture.target:
python.architecture.target = archspec.cpu.host().family.name
# Ensure compiler information is present
if not python.compiler:
python.compiler = self.spec.compiler
python.external_path = self.spec.external_path
python._mark_concrete()
self.spec.add_dependency_edge(python, deptypes=("build", "link", "run"))
class PythonPackage(PythonExtension):
"""Specialized class for packages that are built using pip."""
@@ -270,6 +225,51 @@ def list_url(cls):
name = cls.pypi.split("/")[0]
return "https://pypi.org/simple/" + name + "/"
def update_external_dependencies(self, extendee_spec=None):
"""
Ensure all external python packages have a python dependency
If another package in the DAG depends on python, we use that
python for the dependency of the external. If not, we assume
that the external PythonPackage is installed into the same
directory as the python it depends on.
"""
# TODO: Include this in the solve, rather than instantiating post-concretization
if "python" not in self.spec:
if extendee_spec:
python = extendee_spec
elif "python" in self.spec.root:
python = self.spec.root["python"]
else:
python = self.get_external_python_for_prefix()
if not python.concrete:
repo = spack.repo.path.repo_for_pkg(python)
python.namespace = repo.namespace
# Ensure architecture information is present
if not python.architecture:
host_platform = spack.platforms.host()
host_os = host_platform.operating_system("default_os")
host_target = host_platform.target("default_target")
python.architecture = spack.spec.ArchSpec(
(str(host_platform), str(host_os), str(host_target))
)
else:
if not python.architecture.platform:
python.architecture.platform = spack.platforms.host()
if not python.architecture.os:
python.architecture.os = "default_os"
if not python.architecture.target:
python.architecture.target = archspec.cpu.host().family.name
# Ensure compiler information is present
if not python.compiler:
python.compiler = self.spec.compiler
python.external_path = self.spec.external_path
python._mark_concrete()
self.spec.add_dependency_edge(python, deptypes=("build", "link", "run"))
def get_external_python_for_prefix(self):
"""
For an external package that extends python, find the most likely spec for the python

View File

@@ -7,7 +7,7 @@
import llnl.util.lang as lang
from spack.directives import extends, maintainers
from spack.directives import extends
from .generic import GenericBuilder, Package
@@ -71,8 +71,6 @@ class RPackage(Package):
GenericBuilder = RBuilder
maintainers("glennpj")
#: This attribute is used in UI queries that need to know the build
#: system base class
build_system_class = "RPackage"

View File

@@ -911,13 +911,19 @@ def generate_gitlab_ci_yaml(
# --check-index-only, then the override mirror needs to be added to
# the configured mirrors when bindist.update() is run, or else we
# won't fetch its index and include in our local cache.
spack.mirror.add("ci_pr_mirror", remote_mirror_override, cfg.default_modify_scope())
spack.mirror.add(
spack.mirror.Mirror(remote_mirror_override, name="ci_pr_mirror"),
cfg.default_modify_scope(),
)
shared_pr_mirror = None
if spack_pipeline_type == "spack_pull_request":
stack_name = os.environ.get("SPACK_CI_STACK_NAME", "")
shared_pr_mirror = url_util.join(SHARED_PR_MIRROR_URL, stack_name)
spack.mirror.add("ci_shared_pr_mirror", shared_pr_mirror, cfg.default_modify_scope())
spack.mirror.add(
spack.mirror.Mirror(shared_pr_mirror, name="ci_shared_pr_mirror"),
cfg.default_modify_scope(),
)
pipeline_artifacts_dir = artifacts_root
if not pipeline_artifacts_dir:

View File

@@ -449,9 +449,8 @@ def ci_rebuild(args):
# mirror now so it's used when we check for a hash match already
# built for this spec.
if pipeline_mirror_url:
spack.mirror.add(
spack_ci.TEMP_STORAGE_MIRROR_NAME, pipeline_mirror_url, cfg.default_modify_scope()
)
mirror = spack.mirror.Mirror(pipeline_mirror_url, name=spack_ci.TEMP_STORAGE_MIRROR_NAME)
spack.mirror.add(mirror, cfg.default_modify_scope())
pipeline_mirrors.append(pipeline_mirror_url)
# Check configured mirrors for a built spec with a matching hash
@@ -466,7 +465,10 @@ def ci_rebuild(args):
# could be installed from either the override mirror or any other configured
# mirror (e.g. remote_mirror_url which is defined in the environment or
# pipeline_mirror_url), which is also what we want.
spack.mirror.add("mirror_override", remote_mirror_override, cfg.default_modify_scope())
spack.mirror.add(
spack.mirror.Mirror(remote_mirror_override, name="mirror_override"),
cfg.default_modify_scope(),
)
pipeline_mirrors.append(remote_mirror_override)
if spack_pipeline_type == "spack_pull_request":

View File

@@ -53,7 +53,7 @@ def setup_parser(subparser):
"--scope",
choices=scopes,
metavar=scopes_metavar,
default=None,
default=spack.config.default_modify_scope("compilers"),
help="configuration scope to modify",
)
@@ -106,21 +106,19 @@ def compiler_find(args):
def compiler_remove(args):
compiler_spec = spack.spec.CompilerSpec(args.compiler_spec)
candidate_compilers = spack.compilers.compilers_for_spec(compiler_spec, scope=args.scope)
if not candidate_compilers:
tty.die("No compilers match spec %s" % compiler_spec)
if not args.all and len(candidate_compilers) > 1:
tty.error(f"Multiple compilers match spec {compiler_spec}. Choose one:")
colify(reversed(sorted([c.spec.display_str for c in candidate_compilers])), indent=4)
cspec = spack.spec.CompilerSpec(args.compiler_spec)
compilers = spack.compilers.compilers_for_spec(cspec, scope=args.scope)
if not compilers:
tty.die("No compilers match spec %s" % cspec)
elif not args.all and len(compilers) > 1:
tty.error("Multiple compilers match spec %s. Choose one:" % cspec)
colify(reversed(sorted([c.spec.display_str for c in compilers])), indent=4)
tty.msg("Or, use `spack compiler remove -a` to remove all of them.")
sys.exit(1)
for current_compiler in candidate_compilers:
spack.compilers.remove_compiler_from_config(current_compiler.spec, scope=args.scope)
tty.msg(f"{current_compiler.spec.display_str} has been removed")
for compiler in compilers:
spack.compilers.remove_compiler_from_config(compiler.spec, scope=args.scope)
tty.msg("Removed compiler %s" % compiler.spec.display_str)
def compiler_info(args):

View File

@@ -79,12 +79,6 @@ def setup_parser(subparser):
read_cray_manifest.add_argument(
"--directory", default=None, help="specify a directory storing a group of manifest files"
)
read_cray_manifest.add_argument(
"--ignore-default-dir",
action="store_true",
default=False,
help="ignore the default directory of manifest files",
)
read_cray_manifest.add_argument(
"--dry-run",
action="store_true",
@@ -183,16 +177,11 @@ def external_read_cray_manifest(args):
manifest_directory=args.directory,
dry_run=args.dry_run,
fail_on_error=args.fail_on_error,
ignore_default_dir=args.ignore_default_dir,
)
def _collect_and_consume_cray_manifest_files(
manifest_file=None,
manifest_directory=None,
dry_run=False,
fail_on_error=False,
ignore_default_dir=False,
manifest_file=None, manifest_directory=None, dry_run=False, fail_on_error=False
):
manifest_files = []
if manifest_file:
@@ -202,7 +191,7 @@ def _collect_and_consume_cray_manifest_files(
if manifest_directory:
manifest_dirs.append(manifest_directory)
if not ignore_default_dir and os.path.isdir(cray_manifest.default_path):
if os.path.isdir(cray_manifest.default_path):
tty.debug(
"Cray manifest path {0} exists: collecting all files to read.".format(
cray_manifest.default_path

View File

@@ -145,7 +145,26 @@ def setup_parser(subparser):
def mirror_add(args):
"""Add a mirror to Spack."""
spack.mirror.add(args.name, args.url, args.scope, args)
if (
args.s3_access_key_id
or args.s3_access_key_secret
or args.s3_access_token
or args.s3_profile
or args.s3_endpoint_url
):
connection = {"url": args.url}
if args.s3_access_key_id and args.s3_access_key_secret:
connection["access_pair"] = (args.s3_access_key_id, args.s3_access_key_secret)
if args.s3_access_token:
connection["access_token"] = args.s3_access_token
if args.s3_profile:
connection["profile"] = args.s3_profile
if args.s3_endpoint_url:
connection["endpoint_url"] = args.s3_endpoint_url
mirror = spack.mirror.Mirror(fetch_url=connection, push_url=connection, name=args.name)
else:
mirror = spack.mirror.Mirror(args.url, name=args.name)
spack.mirror.add(mirror, args.scope)
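
The rewritten command above builds a `spack.mirror.Mirror` first and hands it to `spack.mirror.add`, instead of passing raw name/URL arguments. A hedged usage sketch of that API (URLs, names, and scope are illustrative):

```python
import spack.mirror

# Plain URL mirror:
mirror = spack.mirror.Mirror("https://mirror.example.com/buildcache", name="example")
spack.mirror.add(mirror, scope="user")

# S3 mirror with connection info, following the branch above:
connection = {"url": "s3://my-bucket/cache", "profile": "default"}
s3_mirror = spack.mirror.Mirror(fetch_url=connection, push_url=connection, name="s3")
spack.mirror.add(s3_mirror, scope="user")
```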
def mirror_remove(args):

View File

@@ -37,6 +37,7 @@
"implicit_rpaths",
"extra_rpaths",
]
_cache_config_file = []
# TODO: Caches at module level make it difficult to mock configurations in
# TODO: unit tests. It might be worth reworking their implementation.
@@ -111,26 +112,36 @@ def _to_dict(compiler):
def get_compiler_config(scope=None, init_config=True):
"""Return the compiler configuration for the specified architecture."""
config = spack.config.get("compilers", scope=scope) or []
if config or not init_config:
return config
def init_compiler_config():
"""Compiler search used when Spack has no compilers."""
compilers = find_compilers()
compilers_dict = []
for compiler in compilers:
compilers_dict.append(_to_dict(compiler))
spack.config.set("compilers", compilers_dict, scope=scope)
merged_config = spack.config.get("compilers")
if merged_config:
return config
_init_compiler_config(scope=scope)
config = spack.config.get("compilers", scope=scope)
return config
def _init_compiler_config(*, scope):
"""Compiler search used when Spack has no compilers."""
compilers = find_compilers()
compilers_dict = []
for compiler in compilers:
compilers_dict.append(_to_dict(compiler))
spack.config.set("compilers", compilers_dict, scope=scope)
# Update the configuration if there are currently no compilers
# configured. Avoid updating automatically if there ARE site
# compilers configured but no user ones.
if not config and init_config:
if scope is None:
# We know no compilers were configured in any scope.
init_compiler_config()
config = spack.config.get("compilers", scope=scope)
elif scope == "user":
# Check the site config and update the user config if
# nothing is configured at the site level.
site_config = spack.config.get("compilers", scope="site")
sys_config = spack.config.get("compilers", scope="system")
if not site_config and not sys_config:
init_compiler_config()
config = spack.config.get("compilers", scope=scope)
return config
elif config:
return config
else:
return [] # Return empty list which we will later append to.
def compiler_config_files():
@@ -154,65 +165,52 @@ def add_compilers_to_config(compilers, scope=None, init_config=True):
compiler_config = get_compiler_config(scope, init_config)
for compiler in compilers:
compiler_config.append(_to_dict(compiler))
global _cache_config_file
_cache_config_file = compiler_config
spack.config.set("compilers", compiler_config, scope=scope)
@_auto_compiler_spec
def remove_compiler_from_config(compiler_spec, scope=None):
"""Remove compilers from configuration by spec.
If scope is None, all the scopes are searched for removal.
"""Remove compilers from the config, by spec.
Arguments:
compiler_spec: compiler to be removed
scope: configuration scope to modify
compiler_specs: a list of CompilerSpec objects.
scope: configuration scope to modify.
"""
candidate_scopes = [scope]
if scope is None:
candidate_scopes = spack.config.config.scopes.keys()
# Need a better way for this
global _cache_config_file
removal_happened = False
for current_scope in candidate_scopes:
removal_happened |= _remove_compiler_from_scope(compiler_spec, scope=current_scope)
return removal_happened
def _remove_compiler_from_scope(compiler_spec, scope):
"""Removes a compiler from a specific configuration scope.
Args:
compiler_spec: compiler to be removed
scope: configuration scope under consideration
Returns:
True if one or more compiler entries were actually removed, False otherwise
"""
assert scope is not None, "a specific scope is needed when calling this function"
compiler_config = get_compiler_config(scope)
config_length = len(compiler_config)
filtered_compiler_config = [
compiler_entry
for compiler_entry in compiler_config
comp
for comp in compiler_config
if not spack.spec.parse_with_version_concrete(
compiler_entry["compiler"]["spec"], compiler=True
comp["compiler"]["spec"], compiler=True
).satisfies(compiler_spec)
]
if len(filtered_compiler_config) == len(compiler_config):
return False
# We need to preserve the YAML type for comments, hence we are copying the
# items in the list that has just been retrieved
compiler_config[:] = filtered_compiler_config
spack.config.set("compilers", compiler_config, scope=scope)
return True
# Update the cache for changes
_cache_config_file = filtered_compiler_config
if len(filtered_compiler_config) == config_length: # No items removed
CompilerSpecInsufficientlySpecificError(compiler_spec)
spack.config.set("compilers", filtered_compiler_config, scope=scope)
def all_compilers_config(scope=None, init_config=True):
"""Return a set of specs for all the compiler versions currently
available to build with. These are instances of CompilerSpec.
"""
return get_compiler_config(scope, init_config)
# Get compilers for this architecture.
# Create a cache of the config file so we don't load all the time.
global _cache_config_file
if not _cache_config_file:
_cache_config_file = get_compiler_config(scope, init_config)
return _cache_config_file
else:
return _cache_config_file
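
After the refactor above, `remove_compiler_from_config` walks every configuration scope when `scope=None` and returns whether any entry was actually removed (the `@_auto_compiler_spec` decorator shown above converts a plain string argument). A short usage sketch with an illustrative compiler spec:

```python
import spack.compilers

removed = spack.compilers.remove_compiler_from_config("gcc@=4.5.0", scope=None)
if not removed:
    print("no compiler entries matched gcc@=4.5.0")
```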
def all_compiler_specs(scope=None, init_config=True):

View File

@@ -1353,11 +1353,17 @@ def use_configuration(*scopes_or_paths):
configuration = _config_from(scopes_or_paths)
config.clear_caches(), configuration.clear_caches()
# Save and clear the current compiler cache
saved_compiler_cache = spack.compilers._cache_config_file
spack.compilers._cache_config_file = []
saved_config, config = config, configuration
try:
yield configuration
finally:
# Restore previous config files
spack.compilers._cache_config_file = saved_compiler_cache
config = saved_config

View File

@@ -17,7 +17,7 @@
"template": "container/fedora_38.dockerfile",
"image": "docker.io/fedora:38"
},
"os_package_manager": "dnf",
"os_package_manager": "yum",
"build": "spack/fedora38",
"build_tags": {
"develop": "latest"
@@ -31,7 +31,7 @@
"template": "container/fedora_37.dockerfile",
"image": "docker.io/fedora:37"
},
"os_package_manager": "dnf",
"os_package_manager": "yum",
"build": "spack/fedora37",
"build_tags": {
"develop": "latest"
@@ -45,7 +45,7 @@
"template": "container/rockylinux_9.dockerfile",
"image": "docker.io/rockylinux:9"
},
"os_package_manager": "dnf_epel",
"os_package_manager": "yum",
"build": "spack/rockylinux9",
"build_tags": {
"develop": "latest"
@@ -59,7 +59,7 @@
"template": "container/rockylinux_8.dockerfile",
"image": "docker.io/rockylinux:8"
},
"os_package_manager": "dnf_epel",
"os_package_manager": "yum",
"build": "spack/rockylinux8",
"build_tags": {
"develop": "latest"
@@ -73,7 +73,7 @@
"template": "container/almalinux_9.dockerfile",
"image": "quay.io/almalinux/almalinux:9"
},
"os_package_manager": "dnf_epel",
"os_package_manager": "yum",
"build": "spack/almalinux9",
"build_tags": {
"develop": "latest"
@@ -87,7 +87,7 @@
"template": "container/almalinux_8.dockerfile",
"image": "quay.io/almalinux/almalinux:8"
},
"os_package_manager": "dnf_epel",
"os_package_manager": "yum",
"build": "spack/almalinux8",
"build_tags": {
"develop": "latest"
@@ -101,7 +101,7 @@
"template": "container/centos_stream.dockerfile",
"image": "quay.io/centos/centos:stream"
},
"os_package_manager": "dnf_epel",
"os_package_manager": "yum",
"build": "spack/centos-stream",
"final": {
"image": "quay.io/centos/centos:stream"
@@ -185,16 +185,6 @@
"install": "apt-get -yqq install",
"clean": "rm -rf /var/lib/apt/lists/*"
},
"dnf": {
"update": "dnf update -y",
"install": "dnf install -y",
"clean": "rm -rf /var/cache/dnf && dnf clean all"
},
"dnf_epel": {
"update": "dnf update -y && dnf install -y epel-release && dnf update -y",
"install": "dnf install -y",
"clean": "rm -rf /var/cache/dnf && dnf clean all"
},
"yum": {
"update": "yum update -y && yum install -y epel-release && yum update -y",
"install": "yum install -y",

View File

@@ -48,8 +48,7 @@ def translated_compiler_name(manifest_compiler_name):
def compiler_from_entry(entry):
compiler_name = translated_compiler_name(entry["name"])
paths = entry["executables"]
# to instantiate a compiler class we may need a concrete version:
version = "={}".format(entry["version"])
version = entry["version"]
arch = entry["arch"]
operating_system = arch["os"]
target = arch["target"]

View File

@@ -112,15 +112,10 @@ def path_to_dict(search_paths):
# Reverse order of search directories so that a lib in the first
# entry overrides later entries
for search_path in reversed(search_paths):
try:
for lib in os.listdir(search_path):
lib_path = os.path.join(search_path, lib)
if llnl.util.filesystem.is_readable_file(lib_path):
path_to_lib[lib_path] = lib
except OSError as e:
msg = f"cannot scan '{search_path}' for external software: {str(e)}"
llnl.util.tty.debug(msg)
for lib in os.listdir(search_path):
lib_path = os.path.join(search_path, lib)
if llnl.util.filesystem.is_readable_file(lib_path):
path_to_lib[lib_path] = lib
return path_to_lib

View File

@@ -39,6 +39,7 @@
import spack.stage
import spack.store
import spack.subprocess_context
import spack.traverse
import spack.user_environment as uenv
import spack.util.cpus
import spack.util.environment
@@ -50,7 +51,6 @@
import spack.util.spack_yaml as syaml
import spack.util.url
import spack.version
from spack import traverse
from spack.filesystem_view import SimpleFilesystemView, inverse_view_func_parser, view_func_parser
from spack.installer import PackageInstaller
from spack.spec import Spec
@@ -437,6 +437,32 @@ def _is_dev_spec_and_has_changed(spec):
return mtime > record.installation_time
def _spec_needs_overwrite(spec, changed_dev_specs):
"""Check whether the current spec needs to be overwritten because either it has
changed itself or one of its dependencies has changed
"""
# if it's not installed, we don't need to overwrite it
if not spec.installed:
return False
# If the spec itself has changed this is a trivial decision
if spec in changed_dev_specs:
return True
# if spec and all deps aren't dev builds, we don't need to overwrite it
if not any(spec.satisfies(c) for c in ("dev_path=*", "^dev_path=*")):
return False
# If any dep needs overwrite, or any dep is missing and is a dev build then
# overwrite this package
if any(
((not dep.installed) and dep.satisfies("dev_path=*"))
or _spec_needs_overwrite(dep, changed_dev_specs)
for dep in spec.traverse(root=False)
):
return True
def _error_on_nonempty_view_dir(new_root):
"""Defensively error when the target view path already exists and is not an
empty directory. This usually happens when the view symlink was removed, but
@@ -621,16 +647,18 @@ def specs_for_view(self, concretized_root_specs):
From the list of concretized user specs in the environment, flatten
the dags, and filter selected, installed specs, remove duplicates on dag hash.
"""
dag_hash = lambda spec: spec.dag_hash()
# With deps, requires traversal
if self.link == "all" or self.link == "run":
deptype = ("run") if self.link == "run" else ("link", "run")
specs = list(
traverse.traverse_nodes(
concretized_root_specs, deptype=deptype, key=traverse.by_dag_hash
spack.traverse.traverse_nodes(
concretized_root_specs, deptype=deptype, key=dag_hash
)
)
else:
specs = list(dedupe(concretized_root_specs, key=traverse.by_dag_hash))
specs = list(dedupe(concretized_root_specs, key=dag_hash))
# Filter selected, installed specs
with spack.store.db.read_transaction():
@@ -1317,28 +1345,38 @@ def concretize(self, force=False, tests=False):
List of specs that have been concretized. Each entry is a tuple of
the user spec and the corresponding concretized spec.
"""
if force:
# Clear previously concretized specs
self.concretized_user_specs = []
self.concretized_order = []
self.specs_by_hash = {}
old_concretized_user_specs = self.concretized_user_specs[:]
old_concretized_order = self.concretized_order[:]
old_specs_by_hash = self.specs_by_hash.copy()
# Remove concrete specs that no longer correlate to a user spec
for spec in set(self.concretized_user_specs) - set(self.user_specs):
self.deconcretize(spec)
try:
if force:
# Clear previously concretized specs
self.concretized_user_specs = []
self.concretized_order = []
self.specs_by_hash = {}
# Pick the right concretization strategy
if self.unify == "when_possible":
return self._concretize_together_where_possible(tests=tests)
# Remove concrete specs that no longer correlate to a user spec
for spec in set(self.concretized_user_specs) - set(self.user_specs):
self.deconcretize(spec)
if self.unify is True:
return self._concretize_together(tests=tests)
# Pick the right concretization strategy
if self.unify == "when_possible":
return self._concretize_together_where_possible(tests=tests)
if self.unify is False:
return self._concretize_separately(tests=tests)
if self.unify is True:
return self._concretize_together(tests=tests)
msg = "concretization strategy not implemented [{0}]"
raise SpackEnvironmentError(msg.format(self.unify))
if self.unify is False:
return self._concretize_separately(tests=tests)
msg = "concretization strategy not implemented [{0}]"
raise SpackEnvironmentError(msg.format(self.unify))
except Exception:
self.concretized_user_specs = old_concretized_user_specs
self.concretized_order = old_concretized_order
self.specs_by_hash = old_specs_by_hash
raise
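
The rewritten `concretize` above snapshots the three pieces of concretization state with slice copies and `dict.copy()`, copying by value rather than keeping references that later in-place edits would alias, and restores the snapshot if any strategy raises. The same pattern in miniature (`do_concretize` is a hypothetical stand-in for the strategy dispatch):

```python
def concretize_transactionally(env, do_concretize):
    # Snapshot by value: slices and .copy() so later mutation of the live
    # attributes cannot reach back into the saved state.
    saved_specs = env.concretized_user_specs[:]
    saved_order = env.concretized_order[:]
    saved_by_hash = env.specs_by_hash.copy()
    try:
        return do_concretize(env)
    except Exception:
        env.concretized_user_specs = saved_specs
        env.concretized_order = saved_order
        env.specs_by_hash = saved_by_hash
        raise
```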
def deconcretize(self, spec):
# spec has to be a root of the environment
@@ -1796,29 +1834,17 @@ def _add_concrete_spec(self, spec, concrete, new=True):
self.specs_by_hash[h] = concrete
def _get_overwrite_specs(self):
# Find all dev specs that were modified.
changed_dev_specs = [
s
for s in traverse.traverse_nodes(
self.concrete_roots(), order="breadth", key=traverse.by_dag_hash
)
if _is_dev_spec_and_has_changed(s)
]
# Collect all specs in the environment first before checking which ones
# to rebuild to avoid checking the same specs multiple times
specs_to_check = set()
for dag_hash in self.concretized_order:
root_spec = self.specs_by_hash[dag_hash]
specs_to_check.update(root_spec.traverse(root=True))
changed_dev_specs = set(s for s in specs_to_check if _is_dev_spec_and_has_changed(s))
# Collect their hashes, and the hashes of their installed parents.
# Notice: with order=breadth all changed dev specs are at depth 0,
# even if they occur as parents of one another.
return [
spec.dag_hash()
for depth, spec in traverse.traverse_nodes(
changed_dev_specs,
root=True,
order="breadth",
depth=True,
direction="parents",
key=traverse.by_dag_hash,
)
if depth == 0 or spec.installed
s.dag_hash() for s in specs_to_check if _spec_needs_overwrite(s, changed_dev_specs)
]
def _install_log_links(self, spec):
@@ -1925,7 +1951,7 @@ def install_specs(self, specs=None, **install_args):
def all_specs(self):
"""Return all specs, even those a user spec would shadow."""
roots = [self.specs_by_hash[h] for h in self.concretized_order]
specs = [s for s in traverse.traverse_nodes(roots, key=traverse.by_dag_hash)]
specs = [s for s in spack.traverse.traverse_nodes(roots, lambda s: s.dag_hash())]
specs.sort()
return specs
@@ -1971,18 +1997,13 @@ def concrete_roots(self):
roots *without* associated user spec"""
return [root for _, root in self.concretized_specs()]
def get_by_hash(self, dag_hash: str) -> List[Spec]:
# If it's not a partial hash prefix we can early exit
early_exit = len(dag_hash) == 32
matches = []
for spec in traverse.traverse_nodes(
self.concrete_roots(), key=traverse.by_dag_hash, order="breadth"
):
def get_by_hash(self, dag_hash):
matches = {}
roots = [self.specs_by_hash[h] for h in self.concretized_order]
for spec in spack.traverse.traverse_nodes(roots, key=lambda s: s.dag_hash()):
if spec.dag_hash().startswith(dag_hash):
matches.append(spec)
if early_exit:
break
return matches
matches[spec.dag_hash()] = spec
return list(matches.values())
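
A 32-character prefix is a complete dag hash and can therefore match at most one node, which is what the early-exit branch above exploits. The idea in isolation (a hypothetical `find_by_hash_prefix` over an iterable of concrete specs):

```python
def find_by_hash_prefix(specs, prefix):
    full_hash = len(prefix) == 32  # a complete dag hash matches at most once
    matches = []
    for spec in specs:
        if spec.dag_hash().startswith(prefix):
            matches.append(spec)
            if full_hash:
                break
    return matches
```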
def get_one_by_hash(self, dag_hash):
"""Returns the single spec from the environment which matches the
@@ -1994,14 +2015,11 @@ def get_one_by_hash(self, dag_hash):
def all_matching_specs(self, *specs: spack.spec.Spec) -> List[Spec]:
"""Returns all concretized specs in the environment satisfying any of the input specs"""
# Look up abstract hashes ahead of time, to avoid O(n^2) traversal.
specs = [s.lookup_hash() for s in specs]
# Avoid double lookup by directly calling _satisfies.
key = lambda s: s.dag_hash()
return [
s
for s in traverse.traverse_nodes(self.concrete_roots(), key=traverse.by_dag_hash)
if any(s._satisfies(t) for t in specs)
for s in spack.traverse.traverse_nodes(self.concrete_roots(), key=key)
if any(s.satisfies(t) for t in specs)
]
@spack.repo.autospec
@@ -2025,9 +2043,9 @@ def matching_spec(self, spec):
env_root_to_user = {root.dag_hash(): user for user, root in self.concretized_specs()}
root_matches, dep_matches = [], []
for env_spec in traverse.traverse_nodes(
for env_spec in spack.traverse.traverse_nodes(
specs=[root for _, root in self.concretized_specs()],
key=traverse.by_dag_hash,
key=lambda s: s.dag_hash(),
order="breadth",
):
if not env_spec.satisfies(spec):
@@ -2101,8 +2119,8 @@ def _get_environment_specs(self, recurse_dependencies=True):
if recurse_dependencies:
specs.extend(
traverse.traverse_nodes(
specs, root=False, deptype=("link", "run"), key=traverse.by_dag_hash
spack.traverse.traverse_nodes(
specs, root=False, deptype=("link", "run"), key=lambda s: s.dag_hash()
)
)
@@ -2111,7 +2129,9 @@ def _get_environment_specs(self, recurse_dependencies=True):
def _to_lockfile_dict(self):
"""Create a dictionary to store a lockfile for this environment."""
concrete_specs = {}
for s in traverse.traverse_nodes(self.specs_by_hash.values(), key=traverse.by_dag_hash):
for s in spack.traverse.traverse_nodes(
self.specs_by_hash.values(), key=lambda s: s.dag_hash()
):
spec_dict = s.node_dict_with_hashes(hash=ht.dag_hash)
# Assumes no legacy formats, since this was just created.
spec_dict[ht.dag_hash.name] = s.dag_hash()
@@ -2268,7 +2288,7 @@ def ensure_env_directory_exists(self, dot_env: bool = False) -> None:
def update_environment_repository(self) -> None:
"""Updates the repository associated with the environment."""
for spec in traverse.traverse_nodes(self.new_specs):
for spec in spack.traverse.traverse_nodes(self.new_specs):
if not spec.concrete:
raise ValueError("specs passed to environment.write() must be concrete!")

View File

@@ -763,11 +763,7 @@ def version_from_git(git_exe):
@property
def git(self):
if not self._git:
try:
self._git = spack.util.git.git(required=True)
except CommandNotFoundError as exc:
tty.error(str(exc))
raise
self._git = spack.util.git.git()
# Disable advice for a quieter fetch
# https://github.com/git/git/blob/master/Documentation/RelNotes/1.7.2.txt

View File

@@ -460,8 +460,7 @@ def write_tested_status(self):
elif self.counts[TestStatus.PASSED] > 0:
status = TestStatus.PASSED
with open(self.tested_file, "w") as f:
f.write(f"{status.value}\n")
_add_msg_to_file(self.tested_file, f"{status.value}")
@contextlib.contextmanager

View File

@@ -2436,11 +2436,10 @@ def get_deptypes(self, pkg):
else:
cache_only = self.install_args.get("dependencies_cache_only")
# Include build dependencies if pkg is going to be built from sources, or
# if build deps are explicitly requested.
if include_build_deps or not (
cache_only or pkg.spec.installed and not pkg.spec.dag_hash() in self.overwrite
):
# Include build dependencies if pkg is not installed and cache_only
# is False, or if build depdencies are explicitly called for
# by include_build_deps.
if include_build_deps or not (cache_only or pkg.spec.installed):
deptypes.append("build")
if self.run_tests(pkg):
deptypes.append("test")

View File

@@ -114,6 +114,10 @@ def from_url(url: str):
return Mirror(fetch_url=url)
def to_dict(self):
# Keep it a key-value pair <name>: <url> when possible.
if isinstance(self._fetch_url, str) and self._push_url is None:
return self._fetch_url
if self._push_url is None:
return syaml_dict([("fetch", self._fetch_url), ("push", self._fetch_url)])
else:
@@ -548,30 +552,17 @@ def mirror_cache_and_stats(path, skip_unstable_versions=False):
return mirror_cache, mirror_stats
def add(name, url, scope, args={}):
def add(mirror: Mirror, scope=None):
"""Add a named mirror in the given scope"""
mirrors = spack.config.get("mirrors", scope=scope)
if not mirrors:
mirrors = syaml_dict()
if name in mirrors:
tty.die("Mirror with name %s already exists." % name)
if mirror.name in mirrors:
tty.die("Mirror with name {} already exists.".format(mirror.name))
items = [(n, u) for n, u in mirrors.items()]
mirror_data = url
key_values = ["s3_access_key_id", "s3_access_token", "s3_profile"]
# On creation, assume connection data is set for both
if any(value for value in key_values if value in args):
url_dict = {
"url": url,
"access_pair": (args.s3_access_key_id, args.s3_access_key_secret),
"access_token": args.s3_access_token,
"profile": args.s3_profile,
"endpoint_url": args.s3_endpoint_url,
}
mirror_data = {"fetch": url_dict, "push": url_dict}
items.insert(0, (name, mirror_data))
items.insert(0, (mirror.name, mirror.to_dict()))
mirrors = syaml_dict(items)
spack.config.set("mirrors", mirrors, scope=scope)

View File

@@ -40,7 +40,7 @@
import llnl.util.filesystem
import llnl.util.tty as tty
from llnl.util.lang import dedupe, memoized
from llnl.util.lang import dedupe
import spack.build_environment
import spack.config
@@ -170,9 +170,11 @@ def merge_config_rules(configuration, spec):
Returns:
dict: actions to be taken on the spec passed as an argument
"""
# Construct a dictionary with the actions we need to perform on the spec passed as a parameter
spec_configuration = {}
# The keyword 'all' is always evaluated first, all the others are
# evaluated in order of appearance in the module file
spec_configuration = copy.deepcopy(configuration.get("all", {}))
spec_configuration.update(copy.deepcopy(configuration.get("all", {})))
for constraint, action in configuration.items():
if spec.satisfies(constraint):
if hasattr(constraint, "override") and constraint.override:
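
As the comment above says, the `all:` section seeds the result and every later matching section overlays it in order of appearance. A minimal sketch of that merge, with an injected predicate standing in for `spec.satisfies` (names are illustrative and the `override` handling is omitted):

```python
import copy

def merge_rules(configuration, satisfies):
    merged = copy.deepcopy(configuration.get("all", {}))
    for constraint, action in configuration.items():
        # 'all' was applied first; overlay the rest in file order.
        if constraint != "all" and satisfies(constraint):
            merged.update(copy.deepcopy(action))
    return merged
```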
@@ -671,14 +673,7 @@ def configure_options(self):
# the configure option section
return None
def modification_needs_formatting(self, modification):
"""Returns True if environment modification entry needs to be formatted."""
return (
not isinstance(modification, (spack.util.environment.SetEnv)) or not modification.raw
)
@tengine.context_property
@memoized
def environment_modifications(self):
"""List of environment modifications to be processed."""
# Modifications guessed by inspecting the spec prefix
@@ -740,29 +735,15 @@ def environment_modifications(self):
_check_tokens_are_valid(x.name, message=msg)
# Transform them
x.name = spec.format(x.name, transform=transform)
if self.modification_needs_formatting(x):
try:
# Not every command has a value
x.value = spec.format(x.value)
except AttributeError:
pass
try:
# Not every command has a value
x.value = spec.format(x.value)
except AttributeError:
pass
x.name = str(x.name).replace("-", "_")
return [(type(x).__name__, x) for x in env if x.name not in exclude]
@tengine.context_property
def has_manpath_modifications(self):
"""True if MANPATH environment variable is modified."""
for modification_type, cmd in self.environment_modifications:
if not isinstance(
cmd, (spack.util.environment.PrependPath, spack.util.environment.AppendPath)
):
continue
if cmd.name == "MANPATH":
return True
else:
return False
@tengine.context_property
def autoload(self):
"""List of modules that needs to be loaded automatically."""

View File

@@ -134,7 +134,6 @@ def filter_hierarchy_specs(self):
return configuration(self.name).get("filter_hierarchy_specs", {})
@property
@lang.memoized
def hierarchy_tokens(self):
"""Returns the list of tokens that are part of the modulefile
hierarchy. 'compiler' is always present.
@@ -159,7 +158,6 @@ def hierarchy_tokens(self):
return tokens
@property
@lang.memoized
def requires(self):
"""Returns a dictionary mapping all the requirements of this spec
to the actual provider. 'compiler' is always present among the
@@ -226,7 +224,6 @@ def available(self):
return available
@property
@lang.memoized
def missing(self):
"""Returns the list of tokens that are not available."""
return [x for x in self.hierarchy_tokens if x not in self.available]
@@ -320,7 +317,6 @@ def available_path_parts(self):
return parts
@property
@lang.memoized
def unlocked_paths(self):
"""Returns a dictionary mapping conditions to a list of unlocked
paths.
@@ -432,7 +428,6 @@ def missing(self):
return self.conf.missing
@tengine.context_property
@lang.memoized
def unlocked_paths(self):
"""Returns the list of paths that are unlocked unconditionally."""
layout = make_layout(self.spec, self.conf.name, self.conf.explicit)

View File

@@ -108,6 +108,5 @@
# These are just here for editor support; they will be replaced when the build env
# is set up.
make = MakeExecutable("make", jobs=1)
gmake = MakeExecutable("gmake", jobs=1)
ninja = MakeExecutable("ninja", jobs=1)
configure = Executable(join_path(".", "configure"))

View File

@@ -1231,7 +1231,6 @@ def dependencies_of_type(cls, *deptypes):
if any(dt in cls.dependencies[name][cond].type for cond in conds for dt in deptypes)
)
# TODO: allow more than one active extendee.
@property
def extendee_spec(self):
"""
@@ -1247,6 +1246,7 @@ def extendee_spec(self):
if dep.name in self.extendees:
deps.append(dep)
# TODO: allow more than one active extendee.
if deps:
assert len(deps) == 1
return deps[0]
@@ -1256,6 +1256,7 @@ def extendee_spec(self):
if self.spec._concrete:
return None
else:
# TODO: do something sane here with more than one extendee
# If it's not concrete, then return the spec from the
# extends() directive since that is all we know so far.
spec_str, kwargs = next(iter(self.extendees.items()))

View File

@@ -37,9 +37,7 @@
def slingshot_network():
return os.path.exists("/opt/cray/pe") and (
os.path.exists("/lib64/libcxi.so") or os.path.exists("/usr/lib64/libcxi.so")
)
return os.path.exists("/opt/cray/pe") and os.path.exists("/lib64/libcxi.so")
def _target_name_from_craype_target_name(name):

View File

@@ -686,7 +686,7 @@ def is_relocatable(spec):
Raises:
ValueError: if the spec is not installed
"""
if not spec.installed:
if not spec.install_status():
raise ValueError("spec is not installed [{0}]".format(str(spec)))
if spec.external or spec.virtual:

View File

@@ -24,7 +24,7 @@
import traceback
import types
import uuid
from typing import Any, Dict, List, Union
from typing import Dict, Union
import llnl.util.filesystem as fs
import llnl.util.lang
@@ -424,7 +424,7 @@ def _create_new_cache(self) -> Dict[str, os.stat_result]:
def last_mtime(self):
return max(sinfo.st_mtime for sinfo in self._packages_to_stats.values())
def modified_since(self, since: float) -> List[str]:
def modified_since(self, since):
return [name for name, sinfo in self._packages_to_stats.items() if sinfo.st_mtime > since]
def __getitem__(self, item):
@@ -550,34 +550,35 @@ class RepoIndex(object):
when they're needed.
``Indexers`` should be added to the ``RepoIndex`` using
``add_indexer(name, indexer)``, and they should support the interface
``add_index(name, indexer)``, and they should support the interface
defined by ``Indexer``, so that the ``RepoIndex`` can read, generate,
and update stored indices.
Generated indexes are accessed by name via ``__getitem__()``."""
Generated indexes are accessed by name via ``__getitem__()``.
def __init__(
self,
package_checker: FastPackageChecker,
namespace: str,
cache: spack.util.file_cache.FileCache,
):
"""
def __init__(self, package_checker, namespace, cache):
self.checker = package_checker
self.packages_path = self.checker.packages_path
if sys.platform == "win32":
self.packages_path = spack.util.path.convert_to_posix_path(self.packages_path)
self.namespace = namespace
self.indexers: Dict[str, Indexer] = {}
self.indexes: Dict[str, Any] = {}
self.indexers = {}
self.indexes = {}
self.cache = cache
def add_indexer(self, name: str, indexer: Indexer):
def add_indexer(self, name, indexer):
"""Add an indexer to the repo index.
Arguments:
name: name of this indexer
indexer: object implementing the ``Indexer`` interface"""
name (str): name of this indexer
indexer (object): an object that supports create(), read(),
write(), and get_index() operations
"""
self.indexers[name] = indexer
def __getitem__(self, name):
@@ -598,15 +599,17 @@ def _build_all_indexes(self):
because the main bottleneck here is loading all the packages. It
can take tens of seconds to regenerate sequentially, and we'd
rather only pay that cost once rather than on several
invocations."""
invocations.
"""
for name, indexer in self.indexers.items():
self.indexes[name] = self._build_index(name, indexer)
def _build_index(self, name: str, indexer: Indexer):
def _build_index(self, name, indexer):
"""Determine which packages need an update, and update indexes."""
# Filename of the provider index cache (we assume they're all json)
cache_filename = f"{name}/{self.namespace}-index.json"
cache_filename = "{0}/{1}-index.json".format(name, self.namespace)
# Compute which packages needs to be updated in the cache
index_mtime = self.cache.mtime(cache_filename)
@@ -630,7 +633,8 @@ def _build_index(self, name: str, indexer: Indexer):
needs_update = self.checker.modified_since(new_index_mtime)
for pkg_name in needs_update:
indexer.update(f"{self.namespace}.{pkg_name}")
namespaced_name = "%s.%s" % (self.namespace, pkg_name)
indexer.update(namespaced_name)
indexer.write(new)
@@ -1235,7 +1239,7 @@ def get_pkg_class(self, pkg_name):
try:
module = importlib.import_module(fullname)
except ImportError:
raise UnknownPackageError(pkg_name)
raise UnknownPackageError(fullname)
except Exception as e:
msg = f"cannot load package '{pkg_name}' from the '{self.namespace}' repository: {e}"
raise RepoError(msg) from e

View File

@@ -2836,13 +2836,12 @@ class InternalConcretizerError(spack.error.UnsatisfiableSpecError):
"""
def __init__(self, provided, conflicts):
msg = (
"Spack concretizer internal error. Please submit a bug report and include the "
"command, environment if applicable and the following error message."
f"\n {provided} is unsatisfiable, errors are:"
)
msg += "".join([f"\n {conflict}" for conflict in conflicts])
indented = [" %s\n" % conflict for conflict in conflicts]
error_msg = "".join(indented)
msg = "Spack concretizer internal error. Please submit a bug report"
msg += "\n Please include the command, environment if applicable,"
msg += "\n and the following error message."
msg = "\n %s is unsatisfiable, errors are:\n%s" % (provided, error_msg)
super(spack.error.UnsatisfiableSpecError, self).__init__(msg)

View File

@@ -50,7 +50,6 @@
"""
import collections
import collections.abc
import enum
import io
import itertools
import os
@@ -174,16 +173,6 @@
SPECFILE_FORMAT_VERSION = 3
# InstallStatus is used to map install statuses to symbols for display
# Options are artificially disjoint for display purposes
class InstallStatus(enum.Enum):
installed = "@g{[+]} "
upstream = "@g{[^]} "
external = "@g{[e]} "
absent = "@K{ - } "
missing = "@r{[-]} "
def colorize_spec(spec):
"""Returns a spec colorized according to the colors specified in
color_formats."""
@@ -4412,20 +4401,12 @@ def __str__(self):
def install_status(self):
"""Helper for tree to print DB install status."""
if not self.concrete:
return InstallStatus.absent
if self.external:
return InstallStatus.external
upstream, record = spack.store.db.query_by_spec_hash(self.dag_hash())
if not record:
return InstallStatus.absent
elif upstream and record.installed:
return InstallStatus.upstream
elif record.installed:
return InstallStatus.installed
else:
return InstallStatus.missing
return None
try:
record = spack.store.db.get_record(self)
return record.installed
except KeyError:
return None
def _installed_explicitly(self):
"""Helper for tree to print DB install status."""
@@ -4439,10 +4420,7 @@ def _installed_explicitly(self):
def tree(self, **kwargs):
"""Prints out this spec and its dependencies, tree-formatted
with indentation.
Status function may either output a boolean or an InstallStatus
"""
with indentation."""
color = kwargs.pop("color", clr.get_color_when())
depth = kwargs.pop("depth", False)
hashes = kwargs.pop("hashes", False)
@@ -4474,12 +4452,14 @@ def tree(self, **kwargs):
if status_fn:
status = status_fn(node)
if status in list(InstallStatus):
out += clr.colorize(status.value, color=color)
if node.installed_upstream:
out += clr.colorize("@g{[^]} ", color=color)
elif status is None:
out += clr.colorize("@K{ - } ", color=color) # !installed
elif status:
out += clr.colorize("@g{[+]} ", color=color)
out += clr.colorize("@g{[+]} ", color=color) # installed
else:
out += clr.colorize("@r{[-]} ", color=color)
out += clr.colorize("@r{[-]} ", color=color) # missing
if hashes:
out += clr.colorize("@K{%s} ", color=color) % node.dag_hash(hlen)

View File

@@ -4,7 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import itertools
import textwrap
from typing import List, Optional, Tuple
from typing import List
import llnl.util.lang
@@ -66,17 +66,17 @@ def to_dict(self):
return dict(d)
@llnl.util.lang.memoized
def make_environment(dirs: Optional[Tuple[str, ...]] = None):
"""Returns a configured environment for template rendering."""
# Import at this scope to avoid slowing Spack startup down
import jinja2
def make_environment(dirs=None):
"""Returns an configured environment for template rendering."""
if dirs is None:
# Default directories where to search for templates
builtins = spack.config.get("config:template_dirs", ["$spack/share/spack/templates"])
extensions = spack.extensions.get_template_dirs()
dirs = tuple(canonicalize_path(d) for d in itertools.chain(builtins, extensions))
dirs = [canonicalize_path(d) for d in itertools.chain(builtins, extensions)]
# avoid importing this at the top level as it's used infrequently and
# slows down startup a bit.
import jinja2
# Loader for the templates
loader = jinja2.FileSystemLoader(dirs)
@@ -100,15 +100,9 @@ def quote(text):
return ['"{0}"'.format(line) for line in text]
def curly_quote(text):
"""Encloses each line of text in curly braces"""
return ["{{{0}}}".format(line) for line in text]
def _set_filters(env):
"""Sets custom filters to the template engine environment"""
env.filters["textwrap"] = textwrap.wrap
env.filters["prepend_to_line"] = prepend_to_line
env.filters["join"] = "\n".join
env.filters["quote"] = quote
env.filters["curly_quote"] = curly_quote

View File

@@ -115,6 +115,9 @@ def default_config(tmpdir, config_directory, monkeypatch, install_mockery_mutabl
spack.config.config, old_config = cfg, spack.config.config
spack.config.config.set("repos", [spack.paths.mock_packages_path])
# This is essential, otherwise the cache will create weird side effects
# that will compromise subsequent tests if compilers.yaml is modified
monkeypatch.setattr(spack.compilers, "_cache_config_file", [])
njobs = spack.config.get("config:build_jobs")
if not njobs:
spack.config.set("config:build_jobs", 4, scope="user")

View File

@@ -8,6 +8,8 @@
import pytest
import llnl.util.filesystem
import spack.compilers
import spack.main
import spack.version
@@ -16,8 +18,124 @@
@pytest.fixture
def compilers_dir(mock_executable):
"""Create a directory with some mock compiler scripts in it.
def mock_compiler_version():
return "4.5.3"
@pytest.fixture()
def mock_compiler_dir(tmpdir, mock_compiler_version):
"""Return a directory containing a fake, but detectable compiler."""
tmpdir.ensure("bin", dir=True)
bin_dir = tmpdir.join("bin")
gcc_path = bin_dir.join("gcc")
gxx_path = bin_dir.join("g++")
gfortran_path = bin_dir.join("gfortran")
gcc_path.write(
"""\
#!/bin/sh
for arg in "$@"; do
if [ "$arg" = -dumpversion ]; then
echo '%s'
fi
done
"""
% mock_compiler_version
)
# Create some mock compilers in the temporary directory
llnl.util.filesystem.set_executable(str(gcc_path))
gcc_path.copy(gxx_path, mode=True)
gcc_path.copy(gfortran_path, mode=True)
return str(tmpdir)
@pytest.mark.skipif(
sys.platform == "win32",
reason="Cannot execute bash \
script on Windows",
)
@pytest.mark.regression("11678,13138")
def test_compiler_find_without_paths(no_compilers_yaml, working_env, tmpdir):
with tmpdir.as_cwd():
with open("gcc", "w") as f:
f.write(
"""\
#!/bin/sh
echo "0.0.0"
"""
)
os.chmod("gcc", 0o700)
os.environ["PATH"] = str(tmpdir)
output = compiler("find", "--scope=site")
assert "gcc" in output
@pytest.mark.regression("17589")
def test_compiler_find_no_apple_gcc(no_compilers_yaml, working_env, tmpdir):
with tmpdir.as_cwd():
# make a script to emulate apple gcc's version args
with open("gcc", "w") as f:
f.write(
"""\
#!/bin/sh
if [ "$1" = "-dumpversion" ]; then
echo "4.2.1"
elif [ "$1" = "--version" ]; then
echo "Configured with: --prefix=/dummy"
echo "Apple clang version 11.0.0 (clang-1100.0.33.16)"
echo "Target: x86_64-apple-darwin18.7.0"
echo "Thread model: posix"
echo "InstalledDir: /dummy"
else
echo "clang: error: no input files"
fi
"""
)
os.chmod("gcc", 0o700)
os.environ["PATH"] = str(tmpdir)
output = compiler("find", "--scope=site")
assert "gcc" not in output
def test_compiler_remove(mutable_config, mock_packages):
assert spack.spec.CompilerSpec("gcc@=4.5.0") in spack.compilers.all_compiler_specs()
args = spack.util.pattern.Bunch(all=True, compiler_spec="gcc@4.5.0", add_paths=[], scope=None)
spack.cmd.compiler.compiler_remove(args)
assert spack.spec.CompilerSpec("gcc@=4.5.0") not in spack.compilers.all_compiler_specs()
@pytest.mark.skipif(
sys.platform == "win32",
reason="Cannot execute bash \
script on Windows",
)
def test_compiler_add(mutable_config, mock_packages, mock_compiler_dir, mock_compiler_version):
# Compilers available by default.
old_compilers = set(spack.compilers.all_compiler_specs())
args = spack.util.pattern.Bunch(
all=None, compiler_spec=None, add_paths=[mock_compiler_dir], scope=None
)
spack.cmd.compiler.compiler_find(args)
# Ensure new compiler is in there
new_compilers = set(spack.compilers.all_compiler_specs())
new_compiler = new_compilers - old_compilers
assert any(c.version == spack.version.Version(mock_compiler_version) for c in new_compiler)
@pytest.fixture
def clangdir(tmpdir):
"""Create a directory with some dummy compiler scripts in it.
Scripts are:
- clang
@@ -27,9 +145,11 @@ def compilers_dir(mock_executable):
- gfortran-8
"""
clang_path = mock_executable(
"clang",
output="""
with tmpdir.as_cwd():
with open("clang", "w") as f:
f.write(
"""\
#!/bin/sh
if [ "$1" = "--version" ]; then
echo "clang version 11.0.0 (clang-1100.0.33.16)"
echo "Target: x86_64-apple-darwin18.7.0"
@@ -39,11 +159,12 @@ def compilers_dir(mock_executable):
echo "clang: error: no input files"
exit 1
fi
""",
)
shutil.copy(clang_path, clang_path.parent / "clang++")
"""
)
shutil.copy("clang", "clang++")
gcc_script = """
gcc_script = """\
#!/bin/sh
if [ "$1" = "-dumpversion" ]; then
echo "8"
elif [ "$1" = "-dumpfullversion" ]; then
@@ -57,111 +178,30 @@ def compilers_dir(mock_executable):
exit 1
fi
"""
mock_executable("gcc-8", output=gcc_script.format("gcc", "gcc-8"))
mock_executable("g++-8", output=gcc_script.format("g++", "g++-8"))
mock_executable("gfortran-8", output=gcc_script.format("GNU Fortran", "gfortran-8"))
with open("gcc-8", "w") as f:
f.write(gcc_script.format("gcc", "gcc-8"))
with open("g++-8", "w") as f:
f.write(gcc_script.format("g++", "g++-8"))
with open("gfortran-8", "w") as f:
f.write(gcc_script.format("GNU Fortran", "gfortran-8"))
os.chmod("clang", 0o700)
os.chmod("clang++", 0o700)
os.chmod("gcc-8", 0o700)
os.chmod("g++-8", 0o700)
os.chmod("gfortran-8", 0o700)
return clang_path.parent
yield tmpdir
@pytest.mark.skipif(sys.platform == "win32", reason="Cannot execute bash script on Windows")
@pytest.mark.regression("11678,13138")
def test_compiler_find_without_paths(no_compilers_yaml, working_env, mock_executable):
"""Tests that 'spack compiler find' looks into PATH by default, if no specific path
is given.
"""
gcc_path = mock_executable("gcc", output='echo "0.0.0"')
os.environ["PATH"] = str(gcc_path.parent)
output = compiler("find", "--scope=site")
assert "gcc" in output
@pytest.mark.regression("17589")
def test_compiler_find_no_apple_gcc(no_compilers_yaml, working_env, mock_executable):
"""Tests that Spack won't mistake Apple's GCC as a "real" GCC, since it's really
Clang with a few tweaks.
"""
gcc_path = mock_executable(
"gcc",
output="""
if [ "$1" = "-dumpversion" ]; then
echo "4.2.1"
elif [ "$1" = "--version" ]; then
echo "Configured with: --prefix=/dummy"
echo "Apple clang version 11.0.0 (clang-1100.0.33.16)"
echo "Target: x86_64-apple-darwin18.7.0"
echo "Thread model: posix"
echo "InstalledDir: /dummy"
else
echo "clang: error: no input files"
fi
""",
)
os.environ["PATH"] = str(gcc_path.parent)
output = compiler("find", "--scope=site")
assert "gcc" not in output
@pytest.mark.regression("37996")
def test_compiler_remove(mutable_config, mock_packages):
"""Tests that we can remove a compiler from configuration."""
assert spack.spec.CompilerSpec("gcc@=4.5.0") in spack.compilers.all_compiler_specs()
args = spack.util.pattern.Bunch(all=True, compiler_spec="gcc@4.5.0", add_paths=[], scope=None)
spack.cmd.compiler.compiler_remove(args)
assert spack.spec.CompilerSpec("gcc@=4.5.0") not in spack.compilers.all_compiler_specs()
@pytest.mark.regression("37996")
def test_removing_compilers_from_multiple_scopes(mutable_config, mock_packages):
# Duplicate "site" scope into "user" scope
site_config = spack.config.get("compilers", scope="site")
spack.config.set("compilers", site_config, scope="user")
assert spack.spec.CompilerSpec("gcc@=4.5.0") in spack.compilers.all_compiler_specs()
args = spack.util.pattern.Bunch(all=True, compiler_spec="gcc@4.5.0", add_paths=[], scope=None)
spack.cmd.compiler.compiler_remove(args)
assert spack.spec.CompilerSpec("gcc@=4.5.0") not in spack.compilers.all_compiler_specs()
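These command-level tests bypass argparse and hand the implementation a Bunch, which is essentially attribute-style access over keyword arguments; a minimal sketch of the idea (not the actual spack.util.pattern class):

class Bunch:
    """Attribute-style access over keyword arguments, standing in for argparse.Namespace."""

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

args = Bunch(all=True, compiler_spec="gcc@4.5.0", add_paths=[], scope=None)
assert args.compiler_spec == "gcc@4.5.0" and args.scope is None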
@pytest.mark.skipif(sys.platform == "win32", reason="Cannot execute bash script on Windows")
def test_compiler_add(mutable_config, mock_packages, mock_executable):
"""Tests that we can add a compiler to configuration."""
expected_version = "4.5.3"
gcc_path = mock_executable(
"gcc",
output=f"""\
for arg in "$@"; do
if [ "$arg" = -dumpversion ]; then
echo '{expected_version}'
fi
done
""",
)
bin_dir = gcc_path.parent
root_dir = bin_dir.parent
compilers_before_find = set(spack.compilers.all_compiler_specs())
args = spack.util.pattern.Bunch(
all=None, compiler_spec=None, add_paths=[str(root_dir)], scope=None
)
spack.cmd.compiler.compiler_find(args)
compilers_after_find = set(spack.compilers.all_compiler_specs())
compilers_added_by_find = compilers_after_find - compilers_before_find
assert len(compilers_added_by_find) == 1
new_compiler = compilers_added_by_find.pop()
assert new_compiler.version == spack.version.Version(expected_version)
@pytest.mark.skipif(sys.platform == "win32", reason="Cannot execute bash script on Windows")
@pytest.mark.skipif(
sys.platform == "win32",
reason="Cannot execute bash \
script on Windows",
)
@pytest.mark.regression("17590")
def test_compiler_find_mixed_suffixes(no_compilers_yaml, working_env, compilers_dir):
def test_compiler_find_mixed_suffixes(no_compilers_yaml, working_env, clangdir):
"""Ensure that we'll mix compilers with different suffixes when necessary."""
os.environ["PATH"] = str(compilers_dir)
os.environ["PATH"] = str(clangdir)
output = compiler("find", "--scope=site")
assert "clang@11.0.0" in output
@@ -171,33 +211,39 @@ def test_compiler_find_mixed_suffixes(no_compilers_yaml, working_env, compilers_
clang = next(c["compiler"] for c in config if c["compiler"]["spec"] == "clang@=11.0.0")
gcc = next(c["compiler"] for c in config if c["compiler"]["spec"] == "gcc@=8.4.0")
gfortran_path = str(compilers_dir / "gfortran-8")
gfortran_path = str(clangdir.join("gfortran-8"))
assert clang["paths"] == {
"cc": str(compilers_dir / "clang"),
"cxx": str(compilers_dir / "clang++"),
"cc": str(clangdir.join("clang")),
"cxx": str(clangdir.join("clang++")),
# we only auto-detect mixed clang on macos
"f77": gfortran_path if sys.platform == "darwin" else None,
"fc": gfortran_path if sys.platform == "darwin" else None,
}
assert gcc["paths"] == {
"cc": str(compilers_dir / "gcc-8"),
"cxx": str(compilers_dir / "g++-8"),
"cc": str(clangdir.join("gcc-8")),
"cxx": str(clangdir.join("g++-8")),
"f77": gfortran_path,
"fc": gfortran_path,
}
@pytest.mark.skipif(sys.platform == "win32", reason="Cannot execute bash script on Windows")
@pytest.mark.skipif(
sys.platform == "win32",
reason="Cannot execute bash \
script on Windows",
)
@pytest.mark.regression("17590")
def test_compiler_find_prefer_no_suffix(no_compilers_yaml, working_env, compilers_dir):
def test_compiler_find_prefer_no_suffix(no_compilers_yaml, working_env, clangdir):
"""Ensure that we'll pick 'clang' over 'clang-gpu' when there is a choice."""
clang_path = compilers_dir / "clang"
shutil.copy(clang_path, clang_path.parent / "clang-gpu")
shutil.copy(clang_path, clang_path.parent / "clang++-gpu")
with clangdir.as_cwd():
shutil.copy("clang", "clang-gpu")
shutil.copy("clang++", "clang++-gpu")
os.chmod("clang-gpu", 0o700)
os.chmod("clang++-gpu", 0o700)
os.environ["PATH"] = str(compilers_dir)
os.environ["PATH"] = str(clangdir)
output = compiler("find", "--scope=site")
assert "clang@11.0.0" in output
@@ -206,38 +252,46 @@ def test_compiler_find_prefer_no_suffix(no_compilers_yaml, working_env, compiler
config = spack.compilers.get_compiler_config("site", False)
clang = next(c["compiler"] for c in config if c["compiler"]["spec"] == "clang@=11.0.0")
assert clang["paths"]["cc"] == str(compilers_dir / "clang")
assert clang["paths"]["cxx"] == str(compilers_dir / "clang++")
assert clang["paths"]["cc"] == str(clangdir.join("clang"))
assert clang["paths"]["cxx"] == str(clangdir.join("clang++"))
@pytest.mark.skipif(sys.platform == "win32", reason="Cannot execute bash script on Windows")
def test_compiler_find_path_order(no_compilers_yaml, working_env, compilers_dir):
"""Ensure that we look for compilers in the same order as PATH, when there are duplicates"""
new_dir = compilers_dir / "first_in_path"
new_dir.mkdir()
for name in ("gcc-8", "g++-8", "gfortran-8"):
shutil.copy(compilers_dir / name, new_dir / name)
# Set PATH to have the new folder searched first
os.environ["PATH"] = "{}:{}".format(str(new_dir), str(compilers_dir))
@pytest.mark.skipif(
sys.platform == "win32",
reason="Cannot execute bash \
script on Windows",
)
def test_compiler_find_path_order(no_compilers_yaml, working_env, clangdir):
"""Ensure that we find compilers that come first in the PATH first"""
with clangdir.as_cwd():
os.mkdir("first_in_path")
shutil.copy("gcc-8", "first_in_path/gcc-8")
shutil.copy("g++-8", "first_in_path/g++-8")
shutil.copy("gfortran-8", "first_in_path/gfortran-8")
# the first_in_path folder should be searched first
os.environ["PATH"] = "{0}:{1}".format(str(clangdir.join("first_in_path")), str(clangdir))
compiler("find", "--scope=site")
config = spack.compilers.get_compiler_config("site", False)
gcc = next(c["compiler"] for c in config if c["compiler"]["spec"] == "gcc@=8.4.0")
assert gcc["paths"] == {
"cc": str(new_dir / "gcc-8"),
"cxx": str(new_dir / "g++-8"),
"f77": str(new_dir / "gfortran-8"),
"fc": str(new_dir / "gfortran-8"),
"cc": str(clangdir.join("first_in_path", "gcc-8")),
"cxx": str(clangdir.join("first_in_path", "g++-8")),
"f77": str(clangdir.join("first_in_path", "gfortran-8")),
"fc": str(clangdir.join("first_in_path", "gfortran-8")),
}
def test_compiler_list_empty(no_compilers_yaml, working_env, compilers_dir):
"""Spack should not automatically search for compilers when listing them and none are
available. And when stdout is not a tty like in tests, there should be no output and
no error exit code.
"""
os.environ["PATH"] = str(compilers_dir)
def test_compiler_list_empty(no_compilers_yaml, working_env, clangdir):
# Spack should not automatically search for compilers when listing them and none
# are available. And when stdout is not a tty like in tests, there should be no
# output and no error exit code.
os.environ["PATH"] = str(clangdir)
out = compiler("list")
assert not out
assert compiler.returncode == 0

View File

@@ -396,16 +396,17 @@ def reset_string():
with envdir.as_cwd():
with open("spack.yaml", "w") as f:
f.write(
f"""\
"""\
spack:
specs:
- {test_spec}@0.0.0
- %s@0.0.0
develop:
dev-build-test-install:
spec: dev-build-test-install@0.0.0
path: {build_dir}
path: %s
"""
% (test_spec, build_dir)
)
env("create", "test", "./spack.yaml")

View File

@@ -17,6 +17,7 @@
import llnl.util.link_tree
import spack.cmd.env
import spack.concretize
import spack.config
import spack.environment as ev
import spack.environment.environment
@@ -25,6 +26,7 @@
import spack.modules
import spack.paths
import spack.repo
import spack.solver.asp
import spack.util.spack_json as sjson
from spack.cmd.env import _env_create
from spack.main import SpackCommand, SpackCommandError
@@ -2759,6 +2761,51 @@ def test_virtual_spec_concretize_together(tmpdir):
assert any(s.package.provides("mpi") for _, s in e.concretized_specs())
@pytest.mark.parametrize(
"unify,method_to_fail",
[
(True, (spack.concretize, "concretize_specs_together")),
("when_possible", (spack.solver.asp.Solver, "solve_in_rounds")),
# An earlier failure so that we test the case where the internal state
# has been changed, but the pointer to the internal variables has not changed.
# This effectively tests that we are properly copying by value not by
# reference for the transactional concretization
(True, (spack.environment.Environment, "_get_specs_to_concretize")),
],
)
def test_concretize_transactional(unify, method_to_fail, monkeypatch):
e = ev.create("test")
e.unify = unify
e.add("mpi")
e.add("zlib")
e.concretize()
# remove one spec and add another to ensure we test with changes before
# and after the environment is cleared during concretization
e.remove("zlib")
e.add("libelf")
def fail(*args, **kwargs):
raise Exception("Test failures")
location, method = method_to_fail
monkeypatch.setattr(location, method, fail)
first_user_specs = e.concretized_user_specs[:]
first_order = e.concretized_order[:]
first_hash_dict = e.specs_by_hash.copy()
try:
e.concretize()
except Exception:
pass
assert e.concretized_user_specs == first_user_specs
assert e.concretized_order == first_order
assert e.specs_by_hash == first_hash_dict
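The third parametrization above guards against ordinary Python aliasing: if the environment snapshotted its internal lists by reference rather than by value, a failed concretization would mutate the "saved" state too and the assertions would be vacuous. An illustrative sketch (not Spack code):

state = ["mpi", "zlib"]        # stand-in for e.concretized_order
by_reference = state           # aliases the same list object
by_value = state[:]            # independent copy, as the test takes above
state.append("libelf")         # mutation during a failed concretization
assert by_reference == ["mpi", "zlib", "libelf"]  # changed with the original
assert by_value == ["mpi", "zlib"]                # preserves the pre-failure state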
def test_query_develop_specs():
"""Test whether a spec is develop'ed or not"""
env("create", "test")

View File

@@ -44,8 +44,9 @@ def define_plat_exe(exe):
def test_find_external_single_package(mock_executable, executables_found, _platform_executables):
pkgs_to_check = [spack.repo.path.get_pkg_class("cmake")]
cmake_path = mock_executable("cmake", output="echo cmake version 1.foo")
executables_found({str(cmake_path): define_plat_exe("cmake")})
executables_found(
{mock_executable("cmake", output="echo cmake version 1.foo"): define_plat_exe("cmake")}
)
pkg_to_entries = spack.detection.by_executable(pkgs_to_check)
@@ -70,7 +71,7 @@ def test_find_external_two_instances_same_package(
"cmake", output="echo cmake version 3.17.2", subdir=("base2", "bin")
)
cmake_exe = define_plat_exe("cmake")
executables_found({str(cmake_path1): cmake_exe, str(cmake_path2): cmake_exe})
executables_found({cmake_path1: cmake_exe, cmake_path2: cmake_exe})
pkg_to_entries = spack.detection.by_executable(pkgs_to_check)
@@ -106,7 +107,7 @@ def test_get_executables(working_env, mock_executable):
cmake_path1 = mock_executable("cmake", output="echo cmake version 1.foo")
path_to_exe = spack.detection.executables_in_path([os.path.dirname(cmake_path1)])
cmake_exe = define_plat_exe("cmake")
assert path_to_exe[str(cmake_path1)] == cmake_exe
assert path_to_exe[cmake_path1] == cmake_exe
external = SpackCommand("external")
@@ -333,7 +334,7 @@ def test_packages_yaml_format(mock_executable, mutable_config, monkeypatch, _pla
assert "extra_attributes" in external_gcc
extra_attributes = external_gcc["extra_attributes"]
assert "prefix" not in extra_attributes
assert extra_attributes["compilers"]["c"] == str(gcc_exe)
assert extra_attributes["compilers"]["c"] == gcc_exe
def test_overriding_prefix(mock_executable, mutable_config, monkeypatch, _platform_executables):
@@ -396,30 +397,3 @@ def test_use_tags_for_detection(command_args, mock_executable, mutable_config, m
assert "The following specs have been" in output
assert "cmake" in output
assert "openssl" not in output
@pytest.mark.regression("38733")
@pytest.mark.skipif(sys.platform == "win32", reason="the test uses bash scripts")
def test_failures_in_scanning_do_not_result_in_an_error(
mock_executable, monkeypatch, mutable_config
):
"""Tests that scanning paths with wrong permissions, won't cause `external find` to error."""
cmake_exe1 = mock_executable(
"cmake", output="echo cmake version 3.19.1", subdir=("first", "bin")
)
cmake_exe2 = mock_executable(
"cmake", output="echo cmake version 3.23.3", subdir=("second", "bin")
)
# Remove access from the first directory executable
cmake_exe1.parent.chmod(0o600)
value = os.pathsep.join([str(cmake_exe1.parent), str(cmake_exe2.parent)])
monkeypatch.setenv("PATH", value)
output = external("find", "cmake")
assert external.returncode == 0
assert "The following specs have been" in output
assert "cmake" in output
assert "3.23.3" in output
assert "3.19.1" not in output

View File

@@ -337,6 +337,8 @@ def test_compiler_flags_differ_identical_compilers(self):
# Get the compiler that matches the spec
compiler = spack.compilers.compiler_for_spec("clang@=12.2.0", spec.architecture)
# Clear cache for compiler config since it has its own cache mechanism outside of config
spack.compilers._cache_config_file = []
# Configure spack to have two identical compilers with different flags
default_dict = spack.compilers._to_dict(compiler)
@@ -2135,7 +2137,7 @@ def test_compiler_with_custom_non_numeric_version(self, mock_executable):
{
"compiler": {
"spec": "gcc@foo",
"paths": {"cc": str(gcc_path), "cxx": str(gcc_path), "f77": None, "fc": None},
"paths": {"cc": gcc_path, "cxx": gcc_path, "f77": None, "fc": None},
"operating_system": "debian6",
"modules": [],
}

View File

@@ -1669,21 +1669,22 @@ def clear_directive_functions():
@pytest.fixture
def mock_executable(tmp_path):
def mock_executable(tmpdir):
"""Factory to create a mock executable in a temporary directory that
outputs a custom string when run.
"""
import jinja2
shebang = "#!/bin/sh\n" if sys.platform != "win32" else "@ECHO OFF"
def _factory(name, output, subdir=("bin",)):
executable_dir = tmp_path.joinpath(*subdir)
executable_dir.mkdir(parents=True, exist_ok=True)
executable_path = executable_dir / name
f = tmpdir.ensure(*subdir, dir=True).join(name)
if sys.platform == "win32":
executable_path = executable_dir / (name + ".bat")
executable_path.write_text(f"{ shebang }{ output }\n")
executable_path.chmod(0o755)
return executable_path
f += ".bat"
t = jinja2.Template("{{ shebang }}{{ output }}\n")
f.write(t.render(shebang=shebang, output=output))
f.chmod(0o755)
return str(f)
return _factory
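A hedged usage sketch of the rewritten factory in a test (names and version string are illustrative; assumes the pathlib-based variant shown above):

def test_detects_mock_cmake(mock_executable):
    # Writes <tmp_path>/bin/cmake with a shebang plus the given output line.
    cmake_path = mock_executable("cmake", output='echo "cmake version 3.23.3"')
    assert cmake_path.name in ("cmake", "cmake.bat")  # .bat suffix on Windows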

View File

@@ -1,17 +0,0 @@
#!/usr/bin/env bash
#
# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
_module_raw() { return 1; };
module() { return 1; };
ml() { return 1; };
export -f _module_raw;
export -f module;
export -f ml;
export MODULES_AUTO_HANDLING=1
export __MODULES_LMCONFLICT='bar&foo'
export NEW_VAR=new

View File

@@ -31,194 +31,164 @@ class Amdfftw(FftwBase):
Example: spack install amdfftw precision=float
"""
_name = 'amdfftw'
_name = "amdfftw"
homepage = "https://developer.amd.com/amd-aocl/fftw/"
url = "https://github.com/amd/amd-fftw/archive/3.0.tar.gz"
git = "https://github.com/amd/amd-fftw.git"
maintainers = ['amd-toolchain-support']
maintainers("amd-toolchain-support")
version('3.1', sha256='3e777f3acef13fa1910db097e818b1d0d03a6a36ef41186247c6ab1ab0afc132')
version('3.0.1', sha256='87030c6bbb9c710f0a64f4f306ba6aa91dc4b182bb804c9022b35aef274d1a4c')
version('3.0', sha256='a69deaf45478a59a69f77c4f7e9872967f1cfe996592dd12beb6318f18ea0bcd')
version('2.2', sha256='de9d777236fb290c335860b458131678f75aa0799c641490c644c843f0e246f8')
version("3.1", sha256="3e777f3acef13fa1910db097e818b1d0d03a6a36ef41186247c6ab1ab0afc132")
version("3.0.1", sha256="87030c6bbb9c710f0a64f4f306ba6aa91dc4b182bb804c9022b35aef274d1a4c")
version("3.0", sha256="a69deaf45478a59a69f77c4f7e9872967f1cfe996592dd12beb6318f18ea0bcd")
version("2.2", sha256="de9d777236fb290c335860b458131678f75aa0799c641490c644c843f0e246f8")
variant('shared', default=True,
description='Builds a shared version of the library')
variant('openmp', default=True,
description='Enable OpenMP support')
variant('threads', default=False,
description='Enable SMP threads support')
variant('debug', default=False,
description='Builds a debug version of the library')
variant("shared", default=True, description="Builds a shared version of the library")
variant("openmp", default=True, description="Enable OpenMP support")
variant("threads", default=False, description="Enable SMP threads support")
variant("debug", default=False, description="Builds a debug version of the library")
variant(
'amd-fast-planner',
"amd-fast-planner",
default=False,
description='Option to reduce the planning time without much '
'tradeoff in the performance. It is supported for '
'float and double precisions only.')
description="Option to reduce the planning time without much"
"tradeoff in the performance. It is supported for"
"Float and double precisions only.",
)
variant("amd-top-n-planner", default=False, description="Build with amd-top-n-planner support")
variant(
'amd-top-n-planner',
default=False,
description='Build with amd-top-n-planner support')
variant(
'amd-mpi-vader-limit',
default=False,
description='Build with amd-mpi-vader-limit support')
variant(
'static',
default=False,
description='Build with static support')
variant(
'amd-trans',
default=False,
description='Build with amd-trans support')
variant(
'amd-app-opt',
default=False,
description='Build with amd-app-opt support')
"amd-mpi-vader-limit", default=False, description="Build with amd-mpi-vader-limit support"
)
variant("static", default=False, description="Build with static suppport")
variant("amd-trans", default=False, description="Build with amd-trans suppport")
variant("amd-app-opt", default=False, description="Build with amd-app-opt suppport")
depends_on('texinfo')
depends_on("texinfo")
provides('fftw-api@3', when='@2:')
provides("fftw-api@3", when="@2:")
conflicts(
'precision=quad',
when='@2.2 %aocc',
msg='Quad precision is not supported by AOCC clang version 2.2')
"precision=quad",
when="@2.2 %aocc",
msg="Quad precision is not supported by AOCC clang version 2.2",
)
conflicts(
'+debug',
when='@2.2 %aocc',
msg='debug mode is not supported by AOCC clang version 2.2')
"+debug", when="@2.2 %aocc", msg="debug mode is not supported by AOCC clang version 2.2"
)
conflicts("%gcc@:7.2", when="@2.2:", msg="GCC version above 7.2 is required for AMDFFTW")
conflicts(
'%gcc@:7.2',
when='@2.2:',
msg='GCC version above 7.2 is required for AMDFFTW')
"+amd-fast-planner ", when="+mpi", msg="mpi thread is not supported with amd-fast-planner"
)
conflicts(
'+amd-fast-planner ',
when='+mpi',
msg='mpi thread is not supported with amd-fast-planner')
"+amd-fast-planner", when="@2.2", msg="amd-fast-planner is supported from 3.0 onwards"
)
conflicts(
'+amd-fast-planner',
when='@2.2',
msg='amd-fast-planner is supported from 3.0 onwards')
"+amd-fast-planner",
when="precision=quad",
msg="Quad precision is not supported with amd-fast-planner",
)
conflicts(
'+amd-fast-planner',
when='precision=quad',
msg='Quad precision is not supported with amd-fast-planner')
"+amd-fast-planner",
when="precision=long_double",
msg="long_double precision is not supported with amd-fast-planner",
)
conflicts(
'+amd-fast-planner',
when='precision=long_double',
msg='long_double precision is not supported with amd-fast-planner')
"+amd-top-n-planner",
when="@:3.0.0",
msg="amd-top-n-planner is supported from 3.0.1 onwards",
)
conflicts(
'+amd-top-n-planner',
when='@:3.0.0',
msg='amd-top-n-planner is supported from 3.0.1 onwards')
"+amd-top-n-planner",
when="precision=long_double",
msg="long_double precision is not supported with amd-top-n-planner",
)
conflicts(
'+amd-top-n-planner',
when='precision=long_double',
msg='long_double precision is not supported with amd-top-n-planner')
"+amd-top-n-planner",
when="precision=quad",
msg="Quad precision is not supported with amd-top-n-planner",
)
conflicts(
'+amd-top-n-planner',
when='precision=quad',
msg='Quad precision is not supported with amd-top-n-planner')
"+amd-top-n-planner",
when="+amd-fast-planner",
msg="amd-top-n-planner cannot be used with amd-fast-planner",
)
conflicts(
'+amd-top-n-planner',
when='+amd-fast-planner',
msg='amd-top-n-planner cannot be used with amd-fast-planner')
"+amd-top-n-planner", when="+threads", msg="amd-top-n-planner works only for single thread"
)
conflicts(
'+amd-top-n-planner',
when='+threads',
msg='amd-top-n-planner works only for single thread')
"+amd-top-n-planner", when="+mpi", msg="mpi thread is not supported with amd-top-n-planner"
)
conflicts(
'+amd-top-n-planner',
when='+mpi',
msg='mpi thread is not supported with amd-top-n-planner')
"+amd-top-n-planner",
when="+openmp",
msg="openmp thread is not supported with amd-top-n-planner",
)
conflicts(
'+amd-top-n-planner',
when='+openmp',
msg='openmp thread is not supported with amd-top-n-planner')
"+amd-mpi-vader-limit",
when="@:3.0.0",
msg="amd-mpi-vader-limit is supported from 3.0.1 onwards",
)
conflicts(
'+amd-mpi-vader-limit',
when='@:3.0.0',
msg='amd-mpi-vader-limit is supported from 3.0.1 onwards')
"+amd-mpi-vader-limit",
when="precision=quad",
msg="Quad precision is not supported with amd-mpi-vader-limit",
)
conflicts("+amd-trans", when="+threads", msg="amd-trans works only for single thread")
conflicts("+amd-trans", when="+mpi", msg="mpi thread is not supported with amd-trans")
conflicts("+amd-trans", when="+openmp", msg="openmp thread is not supported with amd-trans")
conflicts(
'+amd-mpi-vader-limit',
when='precision=quad',
msg='Quad precision is not supported with amd-mpi-vader-limit')
"+amd-trans",
when="precision=long_double",
msg="long_double precision is not supported with amd-trans",
)
conflicts(
'+amd-trans',
when='+threads',
msg='amd-trans works only for single thread')
"+amd-trans", when="precision=quad", msg="Quad precision is not supported with amd-trans"
)
conflicts("+amd-app-opt", when="@:3.0.1", msg="amd-app-opt is supported from 3.1 onwards")
conflicts("+amd-app-opt", when="+mpi", msg="mpi thread is not supported with amd-app-opt")
conflicts(
'+amd-trans',
when='+mpi',
msg='mpi thread is not supported with amd-trans')
"+amd-app-opt",
when="precision=long_double",
msg="long_double precision is not supported with amd-app-opt",
)
conflicts(
'+amd-trans',
when='+openmp',
msg='openmp thread is not supported with amd-trans')
conflicts(
'+amd-trans',
when='precision=long_double',
msg='long_double precision is not supported with amd-trans')
conflicts(
'+amd-trans',
when='precision=quad',
msg='Quad precision is not supported with amd-trans')
conflicts(
'+amd-app-opt',
when='@:3.0.1',
msg='amd-app-opt is supported from 3.1 onwards')
conflicts(
'+amd-app-opt',
when='+mpi',
msg='mpi thread is not supported with amd-app-opt')
conflicts(
'+amd-app-opt',
when='precision=long_double',
msg='long_double precision is not supported with amd-app-opt')
conflicts(
'+amd-app-opt',
when='precision=quad',
msg='Quad precision is not supported with amd-app-opt')
"+amd-app-opt",
when="precision=quad",
msg="Quad precision is not supported with amd-app-opt",
)
def configure(self, spec, prefix):
"""Configure function"""
# Base options
options = [
'--prefix={0}'.format(prefix),
'--enable-amd-opt'
]
options = ["--prefix={0}".format(prefix), "--enable-amd-opt"]
# Check if compiler is AOCC
if '%aocc' in spec:
options.append('CC={0}'.format(os.path.basename(spack_cc)))
options.append('FC={0}'.format(os.path.basename(spack_fc)))
options.append('F77={0}'.format(os.path.basename(spack_fc)))
if "%aocc" in spec:
options.append("CC={0}".format(os.path.basename(spack_cc)))
options.append("FC={0}".format(os.path.basename(spack_fc)))
options.append("F77={0}".format(os.path.basename(spack_fc)))
if '+debug' in spec:
options.append('--enable-debug')
if "+debug" in spec:
options.append("--enable-debug")
if '+mpi' in spec:
options.append('--enable-mpi')
options.append('--enable-amd-mpifft')
if "+mpi" in spec:
options.append("--enable-mpi")
options.append("--enable-amd-mpifft")
else:
options.append('--disable-mpi')
options.append('--disable-amd-mpifft')
options.append("--disable-mpi")
options.append("--disable-amd-mpifft")
options.extend(self.enable_or_disable('shared'))
options.extend(self.enable_or_disable('openmp'))
options.extend(self.enable_or_disable('threads'))
options.extend(self.enable_or_disable('amd-fast-planner'))
options.extend(self.enable_or_disable('amd-top-n-planner'))
options.extend(self.enable_or_disable('amd-mpi-vader-limit'))
options.extend(self.enable_or_disable('static'))
options.extend(self.enable_or_disable('amd-trans'))
options.extend(self.enable_or_disable('amd-app-opt'))
options.extend(self.enable_or_disable("shared"))
options.extend(self.enable_or_disable("openmp"))
options.extend(self.enable_or_disable("threads"))
options.extend(self.enable_or_disable("amd-fast-planner"))
options.extend(self.enable_or_disable("amd-top-n-planner"))
options.extend(self.enable_or_disable("amd-mpi-vader-limit"))
options.extend(self.enable_or_disable("static"))
options.extend(self.enable_or_disable("amd-trans"))
options.extend(self.enable_or_disable("amd-app-opt"))
if not self.compiler.f77 or not self.compiler.fc:
options.append('--disable-fortran')
options.append("--disable-fortran")
# Cross compilation is supported in amd-fftw by making use of the target
# variable to set the AMD_ARCH configure option.
@@ -226,17 +196,16 @@ class Amdfftw(FftwBase):
# use target variable to set appropriate -march option in AMD_ARCH.
arch = spec.architecture
options.append(
'AMD_ARCH={0}'.format(
arch.target.optimization_flags(
spec.compiler).split('=')[-1]))
"AMD_ARCH={0}".format(arch.target.optimization_flags(spec.compiler).split("=")[-1])
)
# Specific SIMD support.
# float and double precisions are supported
simd_features = ['sse2', 'avx', 'avx2']
simd_features = ["sse2", "avx", "avx2"]
simd_options = []
for feature in simd_features:
msg = '--enable-{0}' if feature in spec.target else '--disable-{0}'
msg = "--enable-{0}" if feature in spec.target else "--disable-{0}"
simd_options.append(msg.format(feature))
# When enabling configure option "--enable-amd-opt", do not use the
@@ -246,20 +215,19 @@ class Amdfftw(FftwBase):
# Double is the default precision; for all the others we need
# to enable the corresponding option.
enable_precision = {
'float': ['--enable-float'],
'double': None,
'long_double': ['--enable-long-double'],
'quad': ['--enable-quad-precision']
"float": ["--enable-float"],
"double": None,
"long_double": ["--enable-long-double"],
"quad": ["--enable-quad-precision"],
}
# Different precisions must be configured and compiled one at a time
configure = Executable('../configure')
configure = Executable("../configure")
for precision in self.selected_precisions:
opts = (enable_precision[precision] or []) + options[:]
# SIMD optimizations are available only for float and double
if precision in ('float', 'double'):
if precision in ("float", "double"):
opts += simd_options
with working_dir(precision, create=True):
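For reference, enable_or_disable is Spack's autotools helper that turns a boolean variant into the matching configure switch; a minimal sketch of the behavior the block above relies on (the real Spack helper also handles multi-valued variants and activation parameters):

def enable_or_disable(name, enabled):
    # Map a boolean variant to --enable-<name> / --disable-<name>.
    template = "--enable-{0}" if enabled else "--disable-{0}"
    return [template.format(name)]

assert enable_or_disable("openmp", True) == ["--enable-openmp"]
assert enable_or_disable("static", False) == ["--disable-static"]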

View File

@@ -16,21 +16,21 @@ from spack.package import *
class Llvm(CMakePackage, CudaPackage):
"""The LLVM Project is a collection of modular and reusable compiler and
toolchain technologies. Despite its name, LLVM has little to do
with traditional virtual machines, though it does provide helpful
libraries that can be used to build them. The name "LLVM" itself
is not an acronym; it is the full name of the project.
toolchain technologies. Despite its name, LLVM has little to do
with traditional virtual machines, though it does provide helpful
libraries that can be used to build them. The name "LLVM" itself
is not an acronym; it is the full name of the project.
"""
homepage = "https://llvm.org/"
url = "https://github.com/llvm/llvm-project/archive/llvmorg-7.1.0.tar.gz"
list_url = "https://releases.llvm.org/download.html"
git = "https://github.com/llvm/llvm-project"
maintainers = ['trws', 'haampie']
maintainers("trws", "haampie")
tags = ['e4s']
tags = ["e4s"]
generator = 'Ninja'
generator = "Ninja"
family = "compiler" # Used by lmod
@@ -80,13 +80,12 @@ class Llvm(CMakePackage, CudaPackage):
# to save space, build with `build_type=Release`.
variant(
"clang",
default=True,
description="Build the LLVM C/C++/Objective-C compiler frontend",
"clang", default=True, description="Build the LLVM C/C++/Objective-C compiler frontend"
)
variant(
"flang",
default=False, when='@11: +clang',
default=False,
when="@11: +clang",
description="Build the LLVM Fortran compiler frontend "
"(experimental - parser only, needs GCC)",
)
@@ -95,27 +94,23 @@ class Llvm(CMakePackage, CudaPackage):
default=False,
description="Include debugging code in OpenMP runtime libraries",
)
variant("lldb", default=True, when='+clang', description="Build the LLVM debugger")
variant("lldb", default=True, when="+clang", description="Build the LLVM debugger")
variant("lld", default=True, description="Build the LLVM linker")
variant("mlir", default=False, when='@10:', description="Build with MLIR support")
variant("mlir", default=False, when="@10:", description="Build with MLIR support")
variant(
"internal_unwind",
default=True, when='+clang',
description="Build the libcxxabi libunwind",
"internal_unwind", default=True, when="+clang", description="Build the libcxxabi libunwind"
)
variant(
"polly",
default=True,
description="Build the LLVM polyhedral optimization plugin, "
"only builds for 3.7.0+",
description="Build the LLVM polyhedral optimization plugin, " "only builds for 3.7.0+",
)
variant(
"libcxx",
default=True, when='+clang',
description="Build the LLVM C++ standard library",
"libcxx", default=True, when="+clang", description="Build the LLVM C++ standard library"
)
variant(
"compiler-rt", when='+clang',
"compiler-rt",
when="+clang",
default=True,
description="Build LLVM compiler runtime, including sanitizers",
)
@@ -124,11 +119,7 @@ class Llvm(CMakePackage, CudaPackage):
default=(sys.platform != "darwin"),
description="Add support for LTO with the gold linker plugin",
)
variant(
"split_dwarf",
default=False,
description="Build with split dwarf information",
)
variant("split_dwarf", default=False, description="Build with split dwarf information")
variant(
"llvm_dylib",
default=True,
@@ -136,18 +127,40 @@ class Llvm(CMakePackage, CudaPackage):
)
variant(
"link_llvm_dylib",
default=False, when='+llvm_dylib',
default=False,
when="+llvm_dylib",
description="Link LLVM tools against the LLVM shared library",
)
variant(
"targets",
default="none",
description=("What targets to build. Spack's target family is always added "
"(e.g. X86 is automatically enabled when targeting znver2)."),
values=("all", "none", "aarch64", "amdgpu", "arm", "avr", "bpf", "cppbackend",
"hexagon", "lanai", "mips", "msp430", "nvptx", "powerpc", "riscv",
"sparc", "systemz", "webassembly", "x86", "xcore"),
multi=True
description=(
"What targets to build. Spack's target family is always added "
"(e.g. X86 is automatically enabled when targeting znver2)."
),
values=(
"all",
"none",
"aarch64",
"amdgpu",
"arm",
"avr",
"bpf",
"cppbackend",
"hexagon",
"lanai",
"mips",
"msp430",
"nvptx",
"powerpc",
"riscv",
"sparc",
"systemz",
"webassembly",
"x86",
"xcore",
),
multi=True,
)
variant(
"build_type",
@@ -157,51 +170,52 @@ class Llvm(CMakePackage, CudaPackage):
)
variant(
"omp_tsan",
default=False, when='@6:',
default=False,
when="@6:",
description="Build with OpenMP capable thread sanitizer",
)
variant(
"omp_as_runtime",
default=True,
when='+clang @12:',
when="+clang @12:",
description="Build OpenMP runtime via ENABLE_RUNTIME by just-built Clang",
)
variant('code_signing', default=False,
when='+lldb platform=darwin',
description="Enable code-signing on macOS")
variant("python", default=False, description="Install python bindings")
variant('version_suffix', default='none', description="Add a symbol suffix")
variant(
'shlib_symbol_version',
default='none',
"code_signing",
default=False,
when="+lldb platform=darwin",
description="Enable code-signing on macOS",
)
variant("python", default=False, description="Install python bindings")
variant("version_suffix", default="none", description="Add a symbol suffix")
variant(
"shlib_symbol_version",
default="none",
description="Add shared library symbol version",
when='@13:'
when="@13:",
)
variant(
'z3',
default=False,
when='+clang @8:',
description='Use Z3 for the clang static analyzer'
"z3", default=False, when="+clang @8:", description="Use Z3 for the clang static analyzer"
)
provides('libllvm@14', when='@14.0.0:14')
provides('libllvm@13', when='@13.0.0:13')
provides('libllvm@12', when='@12.0.0:12')
provides('libllvm@11', when='@11.0.0:11')
provides('libllvm@10', when='@10.0.0:10')
provides('libllvm@9', when='@9.0.0:9')
provides('libllvm@8', when='@8.0.0:8')
provides('libllvm@7', when='@7.0.0:7')
provides('libllvm@6', when='@6.0.0:6')
provides('libllvm@5', when='@5.0.0:5')
provides('libllvm@4', when='@4.0.0:4')
provides('libllvm@3', when='@3.0.0:3')
provides("libllvm@14", when="@14.0.0:14")
provides("libllvm@13", when="@13.0.0:13")
provides("libllvm@12", when="@12.0.0:12")
provides("libllvm@11", when="@11.0.0:11")
provides("libllvm@10", when="@10.0.0:10")
provides("libllvm@9", when="@9.0.0:9")
provides("libllvm@8", when="@8.0.0:8")
provides("libllvm@7", when="@7.0.0:7")
provides("libllvm@6", when="@6.0.0:6")
provides("libllvm@5", when="@5.0.0:5")
provides("libllvm@4", when="@4.0.0:4")
provides("libllvm@3", when="@3.0.0:3")
extends("python", when="+python")
# Build dependency
depends_on("cmake@3.4.3:", type="build")
depends_on('cmake@3.13.4:', type='build', when='@12:')
depends_on("cmake@3.13.4:", type="build", when="@12:")
depends_on("ninja", type="build")
depends_on("python@2.7:2.8", when="@:4 ~python", type="build")
depends_on("python", when="@5: ~python", type="build")
@@ -242,7 +256,7 @@ class Llvm(CMakePackage, CudaPackage):
# clang/lib: a lambda parameter cannot shadow an explicitly captured entity
conflicts("%clang@8:", when="@:4")
# Internal compiler error on gcc 8.4 on aarch64 https://bugzilla.redhat.com/show_bug.cgi?id=1958295
conflicts('%gcc@8.4:8.4.9', when='@12: target=aarch64:')
conflicts("%gcc@8.4:8.4.9", when="@12: target=aarch64:")
# When these versions are concretized, but not explicitly with +libcxx, these
# conflicts will enable clingo to set ~libcxx, making the build successful:
@@ -252,17 +266,17 @@ class Llvm(CMakePackage, CudaPackage):
# GCC 11 - latest stable release per GCC release page
# Clang: 11, 12 - latest two stable releases per LLVM release page
# AppleClang 12 - latest stable release per Xcode release page
conflicts("%gcc@:10", when="@13:+libcxx")
conflicts("%clang@:10", when="@13:+libcxx")
conflicts("%gcc@:10", when="@13:+libcxx")
conflicts("%clang@:10", when="@13:+libcxx")
conflicts("%apple-clang@:11", when="@13:+libcxx")
# libcxx-4 and compiler-rt-4 fail to build with "newer" clang and gcc versions:
conflicts('%gcc@7:', when='@:4+libcxx')
conflicts('%clang@6:', when='@:4+libcxx')
conflicts('%apple-clang@6:', when='@:4+libcxx')
conflicts('%gcc@7:', when='@:4+compiler-rt')
conflicts('%clang@6:', when='@:4+compiler-rt')
conflicts('%apple-clang@6:', when='@:4+compiler-rt')
conflicts("%gcc@7:", when="@:4+libcxx")
conflicts("%clang@6:", when="@:4+libcxx")
conflicts("%apple-clang@6:", when="@:4+libcxx")
conflicts("%gcc@7:", when="@:4+compiler-rt")
conflicts("%clang@6:", when="@:4+compiler-rt")
conflicts("%apple-clang@6:", when="@:4+compiler-rt")
# cuda_arch value must be specified
conflicts("cuda_arch=none", when="+cuda", msg="A value for cuda_arch must be specified.")
@@ -270,27 +284,27 @@ class Llvm(CMakePackage, CudaPackage):
# LLVM bug https://bugs.llvm.org/show_bug.cgi?id=48234
# CMake bug: https://gitlab.kitware.com/cmake/cmake/-/issues/21469
# Fixed in upstream versions of both
conflicts('^cmake@3.19.0', when='@6:11.0.0')
conflicts("^cmake@3.19.0", when="@6:11.0.0")
# Github issue #4986
patch("llvm_gcc7.patch", when="@4.0.0:4.0.1+lldb %gcc@7.0:")
# sys/ustat.h has been removed in favour of statfs from glibc-2.28. Use fixed sizes:
patch('llvm5-sanitizer-ustat.patch', when="@4:6.0.0+compiler-rt")
patch("llvm5-sanitizer-ustat.patch", when="@4:6.0.0+compiler-rt")
# Fix lld templates: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230463
patch('llvm4-lld-ELF-Symbols.patch', when="@4+lld%clang@6:")
patch('llvm5-lld-ELF-Symbols.patch', when="@5+lld%clang@7:")
patch("llvm4-lld-ELF-Symbols.patch", when="@4+lld%clang@6:")
patch("llvm5-lld-ELF-Symbols.patch", when="@5+lld%clang@7:")
# Fix missing std:size_t in 'llvm@4:5' when built with '%clang@7:'
patch('xray_buffer_queue-cstddef.patch', when="@4:5+compiler-rt%clang@7:")
patch("xray_buffer_queue-cstddef.patch", when="@4:5+compiler-rt%clang@7:")
# https://github.com/llvm/llvm-project/commit/947f9692440836dcb8d88b74b69dd379d85974ce
patch('sanitizer-ipc_perm_mode.patch', when="@5:7+compiler-rt%clang@11:")
patch('sanitizer-ipc_perm_mode.patch', when="@5:9+compiler-rt%gcc@9:")
patch("sanitizer-ipc_perm_mode.patch", when="@5:7+compiler-rt%clang@11:")
patch("sanitizer-ipc_perm_mode.patch", when="@5:9+compiler-rt%gcc@9:")
# github.com/spack/spack/issues/24270: MicrosoftDemangle for %gcc@10: and %clang@13:
patch('missing-includes.patch', when='@8')
patch("missing-includes.patch", when="@8")
# Backport from llvm master + additional fix
# see https://bugs.llvm.org/show_bug.cgi?id=39696
@@ -315,33 +329,33 @@ class Llvm(CMakePackage, CudaPackage):
patch("llvm_python_path.patch", when="@:11")
# Workaround for issue https://github.com/spack/spack/issues/18197
patch('llvm7_intel.patch', when='@7 %intel@18.0.2,19.0.0:19.1.99')
patch("llvm7_intel.patch", when="@7 %intel@18.0.2,19.0.0:19.1.99")
# Remove cyclades support to build against newer kernel headers
# https://reviews.llvm.org/D102059
patch('no_cyclades.patch', when='@10:12.0.0')
patch('no_cyclades9.patch', when='@6:9')
patch("no_cyclades.patch", when="@10:12.0.0")
patch("no_cyclades9.patch", when="@6:9")
patch('llvm-gcc11.patch', when='@9:11%gcc@11:')
patch("llvm-gcc11.patch", when="@9:11%gcc@11:")
# add -lpthread to build OpenMP libraries with Fujitsu compiler
patch('llvm12-thread.patch', when='@12 %fj')
patch('llvm13-thread.patch', when='@13 %fj')
patch("llvm12-thread.patch", when="@12 %fj")
patch("llvm13-thread.patch", when="@13 %fj")
# avoid build failed with Fujitsu compiler
patch('llvm13-fujitsu.patch', when='@13 %fj')
patch("llvm13-fujitsu.patch", when="@13 %fj")
# patch for missing hwloc.h include for libompd
patch('llvm14-hwloc-ompd.patch', when='@14')
patch("llvm14-hwloc-ompd.patch", when="@14")
# make libflags a list in openmp subproject when ~omp_as_runtime
patch('libomp-libflags-as-list.patch', when='@3.7:')
patch("libomp-libflags-as-list.patch", when="@3.7:")
# The functions and attributes below implement external package
# detection for LLVM. See:
#
# https://spack.readthedocs.io/en/latest/packaging_guide.html#making-a-package-discoverable-with-spack-external-find
executables = ['clang', 'flang', 'ld.lld', 'lldb']
executables = ["clang", "flang", "ld.lld", "lldb"]
@classmethod
def filter_detected_exes(cls, prefix, exes_in_prefix):
@@ -351,7 +365,7 @@ class Llvm(CMakePackage, CudaPackage):
# on some port and would hang Spack during detection.
# clang-cl and clang-cpp are dev tools that we don't
# need to test
if any(x in exe for x in ('vscode', 'cpp', '-cl', '-gpu')):
if any(x in exe for x in ("vscode", "cpp", "-cl", "-gpu")):
continue
result.append(exe)
return result
@@ -360,20 +374,20 @@ class Llvm(CMakePackage, CudaPackage):
def determine_version(cls, exe):
version_regex = re.compile(
# Normal clang compiler versions are left as-is
r'clang version ([^ )\n]+)-svn[~.\w\d-]*|'
r"clang version ([^ )\n]+)-svn[~.\w\d-]*|"
# Don't include hyphenated patch numbers in the version
# (see https://github.com/spack/spack/pull/14365 for details)
r'clang version ([^ )\n]+?)-[~.\w\d-]*|'
r'clang version ([^ )\n]+)|'
r"clang version ([^ )\n]+?)-[~.\w\d-]*|"
r"clang version ([^ )\n]+)|"
# LLDB
r'lldb version ([^ )\n]+)|'
r"lldb version ([^ )\n]+)|"
# LLD
r'LLD ([^ )\n]+) \(compatible with GNU linkers\)'
r"LLD ([^ )\n]+) \(compatible with GNU linkers\)"
)
try:
compiler = Executable(exe)
output = compiler('--version', output=str, error=str)
if 'Apple' in output:
output = compiler("--version", output=str, error=str)
if "Apple" in output:
return None
match = version_regex.search(output)
if match:
@@ -387,38 +401,39 @@ class Llvm(CMakePackage, CudaPackage):
@classmethod
def determine_variants(cls, exes, version_str):
variants, compilers = ['+clang'], {}
variants, compilers = ["+clang"], {}
lld_found, lldb_found = False, False
for exe in exes:
if 'clang++' in exe:
compilers['cxx'] = exe
elif 'clang' in exe:
compilers['c'] = exe
elif 'flang' in exe:
variants.append('+flang')
compilers['fc'] = exe
compilers['f77'] = exe
elif 'ld.lld' in exe:
if "clang++" in exe:
compilers["cxx"] = exe
elif "clang" in exe:
compilers["c"] = exe
elif "flang" in exe:
variants.append("+flang")
compilers["fc"] = exe
compilers["f77"] = exe
elif "ld.lld" in exe:
lld_found = True
compilers['ld'] = exe
elif 'lldb' in exe:
compilers["ld"] = exe
elif "lldb" in exe:
lldb_found = True
compilers['lldb'] = exe
compilers["lldb"] = exe
variants.append('+lld' if lld_found else '~lld')
variants.append('+lldb' if lldb_found else '~lldb')
variants.append("+lld" if lld_found else "~lld")
variants.append("+lldb" if lldb_found else "~lldb")
return ''.join(variants), {'compilers': compilers}
return "".join(variants), {"compilers": compilers}
@classmethod
def validate_detected_spec(cls, spec, extra_attributes):
# For LLVM 'compilers' is a mandatory attribute
msg = ('the extra attribute "compilers" must be set for '
'the detected spec "{0}"'.format(spec))
assert 'compilers' in extra_attributes, msg
compilers = extra_attributes['compilers']
for key in ('c', 'cxx'):
msg = '{0} compiler not found for {1}'
msg = 'the extra attribute "compilers" must be set for ' 'the detected spec "{0}"'.format(
spec
)
assert "compilers" in extra_attributes, msg
compilers = extra_attributes["compilers"]
for key in ("c", "cxx"):
msg = "{0} compiler not found for {1}"
assert key in compilers, msg.format(key, spec)
@property
@@ -426,10 +441,10 @@ class Llvm(CMakePackage, CudaPackage):
msg = "cannot retrieve C compiler [spec is not concrete]"
assert self.spec.concrete, msg
if self.spec.external:
return self.spec.extra_attributes['compilers'].get('c', None)
return self.spec.extra_attributes["compilers"].get("c", None)
result = None
if '+clang' in self.spec:
result = os.path.join(self.spec.prefix.bin, 'clang')
if "+clang" in self.spec:
result = os.path.join(self.spec.prefix.bin, "clang")
return result
@property
@@ -437,10 +452,10 @@ class Llvm(CMakePackage, CudaPackage):
msg = "cannot retrieve C++ compiler [spec is not concrete]"
assert self.spec.concrete, msg
if self.spec.external:
return self.spec.extra_attributes['compilers'].get('cxx', None)
return self.spec.extra_attributes["compilers"].get("cxx", None)
result = None
if '+clang' in self.spec:
result = os.path.join(self.spec.prefix.bin, 'clang++')
if "+clang" in self.spec:
result = os.path.join(self.spec.prefix.bin, "clang++")
return result
@property
@@ -448,10 +463,10 @@ class Llvm(CMakePackage, CudaPackage):
msg = "cannot retrieve Fortran compiler [spec is not concrete]"
assert self.spec.concrete, msg
if self.spec.external:
return self.spec.extra_attributes['compilers'].get('fc', None)
return self.spec.extra_attributes["compilers"].get("fc", None)
result = None
if '+flang' in self.spec:
result = os.path.join(self.spec.prefix.bin, 'flang')
if "+flang" in self.spec:
result = os.path.join(self.spec.prefix.bin, "flang")
return result
@property
@@ -459,27 +474,25 @@ class Llvm(CMakePackage, CudaPackage):
msg = "cannot retrieve Fortran 77 compiler [spec is not concrete]"
assert self.spec.concrete, msg
if self.spec.external:
return self.spec.extra_attributes['compilers'].get('f77', None)
return self.spec.extra_attributes["compilers"].get("f77", None)
result = None
if '+flang' in self.spec:
result = os.path.join(self.spec.prefix.bin, 'flang')
if "+flang" in self.spec:
result = os.path.join(self.spec.prefix.bin, "flang")
return result
@property
def libs(self):
return LibraryList(self.llvm_config("--libfiles", "all",
result="list"))
return LibraryList(self.llvm_config("--libfiles", "all", result="list"))
@run_before('cmake')
@run_before("cmake")
def codesign_check(self):
if self.spec.satisfies("+code_signing"):
codesign = which('codesign')
mkdir('tmp')
llvm_check_file = join_path('tmp', 'llvm_check')
copy('/usr/bin/false', llvm_check_file)
codesign = which("codesign")
mkdir("tmp")
llvm_check_file = join_path("tmp", "llvm_check")
copy("/usr/bin/false", llvm_check_file)
try:
codesign('-f', '-s', 'lldb_codesign', '--dryrun',
llvm_check_file)
codesign("-f", "-s", "lldb_codesign", "--dryrun", llvm_check_file)
except ProcessError:
# Newer LLVM versions have a simple script that sets up
@@ -489,32 +502,32 @@ class Llvm(CMakePackage, CudaPackage):
setup()
except Exception:
raise RuntimeError(
'spack was unable to either find or set up '
'code-signing on your system. Please refer to '
'https://lldb.llvm.org/resources/build.html#'
'code-signing-on-macos for details on how to '
'create this identity.'
"spack was unable to either find or set up"
"code-signing on your system. Please refer to"
"https://lldb.llvm.org/resources/build.html#"
"code-signing-on-macos for details on how to"
"create this identity."
)
def flag_handler(self, name, flags):
if name == 'cxxflags':
if name == "cxxflags":
flags.append(self.compiler.cxx11_flag)
return(None, flags, None)
elif name == 'ldflags' and self.spec.satisfies('%intel'):
flags.append('-shared-intel')
return(None, flags, None)
return(flags, None, None)
return (None, flags, None)
elif name == "ldflags" and self.spec.satisfies("%intel"):
flags.append("-shared-intel")
return (None, flags, None)
return (flags, None, None)
def setup_build_environment(self, env):
"""When using %clang, add only its ld.lld-$ver and/or ld.lld to our PATH"""
if self.compiler.name in ['clang', 'apple-clang']:
for lld in 'ld.lld-{0}'.format(self.compiler.version.version[0]), 'ld.lld':
if self.compiler.name in ["clang", "apple-clang"]:
for lld in "ld.lld-{0}".format(self.compiler.version.version[0]), "ld.lld":
bin = os.path.join(os.path.dirname(self.compiler.cc), lld)
sym = os.path.join(self.stage.path, 'ld.lld')
sym = os.path.join(self.stage.path, "ld.lld")
if os.path.exists(bin) and not os.path.exists(sym):
mkdirp(self.stage.path)
os.symlink(bin, sym)
env.prepend_path('PATH', self.stage.path)
env.prepend_path("PATH", self.stage.path)
def setup_run_environment(self, env):
if "+clang" in self.spec:
@@ -531,7 +544,7 @@ class Llvm(CMakePackage, CudaPackage):
define = CMakePackage.define
from_variant = self.define_from_variant
python = spec['python']
python = spec["python"]
cmake_args = [
define("LLVM_REQUIRES_RTTI", True),
define("LLVM_ENABLE_RTTI", True),
@@ -544,14 +557,13 @@ class Llvm(CMakePackage, CudaPackage):
define("LIBOMP_HWLOC_INSTALL_DIR", spec["hwloc"].prefix),
]
version_suffix = spec.variants['version_suffix'].value
if version_suffix != 'none':
cmake_args.append(define('LLVM_VERSION_SUFFIX', version_suffix))
version_suffix = spec.variants["version_suffix"].value
if version_suffix != "none":
cmake_args.append(define("LLVM_VERSION_SUFFIX", version_suffix))
shlib_symbol_version = spec.variants.get('shlib_symbol_version', None)
if shlib_symbol_version is not None and shlib_symbol_version.value != 'none':
cmake_args.append(define('LLVM_SHLIB_SYMBOL_VERSION',
shlib_symbol_version.value))
shlib_symbol_version = spec.variants.get("shlib_symbol_version", None)
if shlib_symbol_version is not None and shlib_symbol_version.value != "none":
cmake_args.append(define("LLVM_SHLIB_SYMBOL_VERSION", shlib_symbol_version.value))
if python.version >= Version("3"):
cmake_args.append(define("Python3_EXECUTABLE", python.command.path))
@@ -562,47 +574,56 @@ class Llvm(CMakePackage, CudaPackage):
runtimes = []
if "+cuda" in spec:
cmake_args.extend([
define("CUDA_TOOLKIT_ROOT_DIR", spec["cuda"].prefix),
define("LIBOMPTARGET_NVPTX_COMPUTE_CAPABILITIES",
",".join(spec.variants["cuda_arch"].value)),
define("CLANG_OPENMP_NVPTX_DEFAULT_ARCH",
"sm_{0}".format(spec.variants["cuda_arch"].value[-1])),
])
cmake_args.extend(
[
define("CUDA_TOOLKIT_ROOT_DIR", spec["cuda"].prefix),
define(
"LIBOMPTARGET_NVPTX_COMPUTE_CAPABILITIES",
",".join(spec.variants["cuda_arch"].value),
),
define(
"CLANG_OPENMP_NVPTX_DEFAULT_ARCH",
"sm_{0}".format(spec.variants["cuda_arch"].value[-1]),
),
]
)
if "+omp_as_runtime" in spec:
cmake_args.extend([
define("LIBOMPTARGET_NVPTX_ENABLE_BCLIB", True),
# work around bad libelf detection in libomptarget
define("LIBOMPTARGET_DEP_LIBELF_INCLUDE_DIR",
spec["libelf"].prefix.include),
])
cmake_args.extend(
[
define("LIBOMPTARGET_NVPTX_ENABLE_BCLIB", True),
# work around bad libelf detection in libomptarget
define(
"LIBOMPTARGET_DEP_LIBELF_INCLUDE_DIR", spec["libelf"].prefix.include
),
]
)
else:
# still build libomptarget but disable cuda
cmake_args.extend([
define("CUDA_TOOLKIT_ROOT_DIR", "IGNORE"),
define("CUDA_SDK_ROOT_DIR", "IGNORE"),
define("CUDA_NVCC_EXECUTABLE", "IGNORE"),
define("LIBOMPTARGET_DEP_CUDA_DRIVER_LIBRARIES", "IGNORE"),
])
cmake_args.extend(
[
define("CUDA_TOOLKIT_ROOT_DIR", "IGNORE"),
define("CUDA_SDK_ROOT_DIR", "IGNORE"),
define("CUDA_NVCC_EXECUTABLE", "IGNORE"),
define("LIBOMPTARGET_DEP_CUDA_DRIVER_LIBRARIES", "IGNORE"),
]
)
cmake_args.append(from_variant("LIBOMPTARGET_ENABLE_DEBUG", "omp_debug"))
if "+lldb" in spec:
projects.append("lldb")
cmake_args.append(define('LLDB_ENABLE_LIBEDIT', True))
cmake_args.append(define('LLDB_ENABLE_NCURSES', True))
cmake_args.append(define('LLDB_ENABLE_LIBXML2', False))
if spec.version >= Version('10'):
cmake_args.append(from_variant("LLDB_ENABLE_PYTHON", 'python'))
cmake_args.append(define("LLDB_ENABLE_LIBEDIT", True))
cmake_args.append(define("LLDB_ENABLE_NCURSES", True))
cmake_args.append(define("LLDB_ENABLE_LIBXML2", False))
if spec.version >= Version("10"):
cmake_args.append(from_variant("LLDB_ENABLE_PYTHON", "python"))
else:
cmake_args.append(define("LLDB_DISABLE_PYTHON", '~python' in spec))
cmake_args.append(define("LLDB_DISABLE_PYTHON", "~python" in spec))
if spec.satisfies("@5.0.0: +python"):
cmake_args.append(define("LLDB_USE_SYSTEM_SIX", True))
if "+gold" in spec:
cmake_args.append(
define("LLVM_BINUTILS_INCDIR", spec["binutils"].prefix.include)
)
cmake_args.append(define("LLVM_BINUTILS_INCDIR", spec["binutils"].prefix.include))
if "+clang" in spec:
projects.append("clang")
@@ -612,10 +633,10 @@ class Llvm(CMakePackage, CudaPackage):
else:
projects.append("openmp")
if '@8' in spec:
cmake_args.append(from_variant('CLANG_ANALYZER_ENABLE_Z3_SOLVER', 'z3'))
elif '@9:' in spec:
cmake_args.append(from_variant('LLVM_ENABLE_Z3_SOLVER', 'z3'))
if "@8" in spec:
cmake_args.append(from_variant("CLANG_ANALYZER_ENABLE_Z3_SOLVER", "z3"))
elif "@9:" in spec:
cmake_args.append(from_variant("LLVM_ENABLE_Z3_SOLVER", "z3"))
if "+flang" in spec:
projects.append("flang")
@@ -634,26 +655,26 @@ class Llvm(CMakePackage, CudaPackage):
projects.append("polly")
cmake_args.append(define("LINK_POLLY_INTO_TOOLS", True))
cmake_args.extend([
define("BUILD_SHARED_LIBS", False),
from_variant("LLVM_BUILD_LLVM_DYLIB", "llvm_dylib"),
from_variant("LLVM_LINK_LLVM_DYLIB", "link_llvm_dylib"),
from_variant("LLVM_USE_SPLIT_DWARF", "split_dwarf"),
# By default on Linux, libc++.so is an ldscript. CMake fails to add
# CMAKE_INSTALL_RPATH to it, which breaks the build. Statically link libc++abi.a
# into libc++.so; linking with -lc++ or -stdlib=libc++ is then enough.
define('LIBCXX_ENABLE_STATIC_ABI_LIBRARY', True)
])
cmake_args.extend(
[
define("BUILD_SHARED_LIBS", False),
from_variant("LLVM_BUILD_LLVM_DYLIB", "llvm_dylib"),
from_variant("LLVM_LINK_LLVM_DYLIB", "link_llvm_dylib"),
from_variant("LLVM_USE_SPLIT_DWARF", "split_dwarf"),
# By default on Linux, libc++.so is an ldscript. CMake fails to add
# CMAKE_INSTALL_RPATH to it, which breaks the build. Statically link libc++abi.a
# into libc++.so; linking with -lc++ or -stdlib=libc++ is then enough.
define("LIBCXX_ENABLE_STATIC_ABI_LIBRARY", True),
]
)
cmake_args.append(define(
"LLVM_TARGETS_TO_BUILD",
get_llvm_targets_to_build(spec)))
cmake_args.append(define("LLVM_TARGETS_TO_BUILD", get_llvm_targets_to_build(spec)))
cmake_args.append(from_variant("LIBOMP_TSAN_SUPPORT", "omp_tsan"))
if self.compiler.name == "gcc":
compiler = Executable(self.compiler.cc)
gcc_output = compiler('-print-search-dirs', output=str, error=str)
gcc_output = compiler("-print-search-dirs", output=str, error=str)
for line in gcc_output.splitlines():
if line.startswith("install:"):
@@ -665,7 +686,7 @@ class Llvm(CMakePackage, CudaPackage):
cmake_args.append(define("GCC_INSTALL_PREFIX", gcc_prefix))
if self.spec.satisfies("~code_signing platform=darwin"):
cmake_args.append(define('LLDB_USE_SYSTEM_DEBUGSERVER', True))
cmake_args.append(define("LLDB_USE_SYSTEM_DEBUGSERVER", True))
# Semicolon-separated list of projects to enable
cmake_args.append(define("LLVM_ENABLE_PROJECTS", projects))
@@ -689,20 +710,24 @@ class Llvm(CMakePackage, CudaPackage):
# rebuild libomptarget to get bytecode runtime library files
with working_dir(ompdir, create=True):
cmake_args = [
'-G', 'Ninja',
define('CMAKE_BUILD_TYPE', spec.variants['build_type'].value),
"-G",
"Ninja",
define("CMAKE_BUILD_TYPE", spec.variants["build_type"].value),
define("CMAKE_C_COMPILER", spec.prefix.bin + "/clang"),
define("CMAKE_CXX_COMPILER", spec.prefix.bin + "/clang++"),
define("CMAKE_INSTALL_PREFIX", spec.prefix),
define('CMAKE_PREFIX_PATH', prefix_paths)
define("CMAKE_PREFIX_PATH", prefix_paths),
]
cmake_args.extend(self.cmake_args())
cmake_args.extend([
define("LIBOMPTARGET_NVPTX_ENABLE_BCLIB", True),
define("LIBOMPTARGET_DEP_LIBELF_INCLUDE_DIR",
spec["libelf"].prefix.include),
self.stage.source_path + "/openmp",
])
cmake_args.extend(
[
define("LIBOMPTARGET_NVPTX_ENABLE_BCLIB", True),
define(
"LIBOMPTARGET_DEP_LIBELF_INCLUDE_DIR", spec["libelf"].prefix.include
),
self.stage.source_path + "/openmp",
]
)
cmake(*cmake_args)
ninja()
@@ -717,22 +742,22 @@ class Llvm(CMakePackage, CudaPackage):
install_tree("bin", join_path(self.prefix, "libexec", "llvm"))
def llvm_config(self, *args, **kwargs):
lc = Executable(self.prefix.bin.join('llvm-config'))
if not kwargs.get('output'):
kwargs['output'] = str
lc = Executable(self.prefix.bin.join("llvm-config"))
if not kwargs.get("output"):
kwargs["output"] = str
ret = lc(*args, **kwargs)
if kwargs.get('result') == "list":
if kwargs.get("result") == "list":
return ret.split()
else:
return ret
def get_llvm_targets_to_build(spec):
targets = spec.variants['targets'].value
targets = spec.variants["targets"].value
# Build everything?
if 'all' in targets:
return 'all'
if "all" in targets:
return "all"
# Convert targets variant values to CMake LLVM_TARGETS_TO_BUILD array.
spack_to_cmake = {
@@ -753,10 +778,10 @@ def get_llvm_targets_to_build(spec):
"systemz": "SystemZ",
"webassembly": "WebAssembly",
"x86": "X86",
"xcore": "XCore"
"xcore": "XCore",
}
if 'none' in targets:
if "none" in targets:
llvm_targets = set()
else:
llvm_targets = set(spack_to_cmake[target] for target in targets)
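To make the mapping above concrete, here is a minimal sketch of how the variant values turn into a CMake target list. The dict is truncated, and the final semicolon join is an assumption about how the list is ultimately handed to CMake, not code from the package:

spack_to_cmake = {"aarch64": "AArch64", "nvptx": "NVPTX", "x86": "X86"}  # truncated

def targets_to_build(targets):
    if "all" in targets:
        return "all"
    llvm_targets = set() if "none" in targets else {spack_to_cmake[t] for t in targets}
    # CMake list variables are semicolon separated.
    return ";".join(sorted(llvm_targets))

print(targets_to_build(["x86", "nvptx"]))  # -> "NVPTX;X86"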


@@ -22,127 +22,140 @@ class PyTorch(PythonPackage, CudaPackage):
with strong GPU acceleration."""
homepage = "https://pytorch.org/"
git = "https://github.com/pytorch/pytorch.git"
git = "https://github.com/pytorch/pytorch.git"
maintainers = ['adamjstewart']
maintainers("adamjstewart")
# Exact set of modules is version- and variant-specific; just attempt to import the
# core libraries to ensure that the package was successfully installed.
import_modules = ['torch', 'torch.autograd', 'torch.nn', 'torch.utils']
import_modules = ["torch", "torch.autograd", "torch.nn", "torch.utils"]
version('master', branch='master', submodules=True)
version('1.10.1', tag='v1.10.1', submodules=True)
version('1.10.0', tag='v1.10.0', submodules=True)
version('1.9.1', tag='v1.9.1', submodules=True)
version('1.9.0', tag='v1.9.0', submodules=True)
version('1.8.2', tag='v1.8.2', submodules=True)
version('1.8.1', tag='v1.8.1', submodules=True)
version('1.8.0', tag='v1.8.0', submodules=True)
version('1.7.1', tag='v1.7.1', submodules=True)
version('1.7.0', tag='v1.7.0', submodules=True)
version('1.6.0', tag='v1.6.0', submodules=True)
version('1.5.1', tag='v1.5.1', submodules=True)
version('1.5.0', tag='v1.5.0', submodules=True)
version('1.4.1', tag='v1.4.1', submodules=True)
version('1.4.0', tag='v1.4.0', submodules=True, deprecated=True,
submodules_delete=['third_party/fbgemm'])
version('1.3.1', tag='v1.3.1', submodules=True)
version('1.3.0', tag='v1.3.0', submodules=True)
version('1.2.0', tag='v1.2.0', submodules=True)
version('1.1.0', tag='v1.1.0', submodules=True)
version('1.0.1', tag='v1.0.1', submodules=True)
version('1.0.0', tag='v1.0.0', submodules=True)
version('0.4.1', tag='v0.4.1', submodules=True, deprecated=True,
submodules_delete=['third_party/nervanagpu'])
version('0.4.0', tag='v0.4.0', submodules=True, deprecated=True)
version('0.3.1', tag='v0.3.1', submodules=True, deprecated=True)
version("master", branch="master", submodules=True)
version("1.10.1", tag="v1.10.1", submodules=True)
version("1.10.0", tag="v1.10.0", submodules=True)
version("1.9.1", tag="v1.9.1", submodules=True)
version("1.9.0", tag="v1.9.0", submodules=True)
version("1.8.2", tag="v1.8.2", submodules=True)
version("1.8.1", tag="v1.8.1", submodules=True)
version("1.8.0", tag="v1.8.0", submodules=True)
version("1.7.1", tag="v1.7.1", submodules=True)
version("1.7.0", tag="v1.7.0", submodules=True)
version("1.6.0", tag="v1.6.0", submodules=True)
version("1.5.1", tag="v1.5.1", submodules=True)
version("1.5.0", tag="v1.5.0", submodules=True)
version("1.4.1", tag="v1.4.1", submodules=True)
version(
"1.4.0",
tag="v1.4.0",
submodules=True,
deprecated=True,
submodules_delete=["third_party/fbgemm"],
)
version("1.3.1", tag="v1.3.1", submodules=True)
version("1.3.0", tag="v1.3.0", submodules=True)
version("1.2.0", tag="v1.2.0", submodules=True)
version("1.1.0", tag="v1.1.0", submodules=True)
version("1.0.1", tag="v1.0.1", submodules=True)
version("1.0.0", tag="v1.0.0", submodules=True)
version(
"0.4.1",
tag="v0.4.1",
submodules=True,
deprecated=True,
submodules_delete=["third_party/nervanagpu"],
)
version("0.4.0", tag="v0.4.0", submodules=True, deprecated=True)
version("0.3.1", tag="v0.3.1", submodules=True, deprecated=True)
is_darwin = sys.platform == 'darwin'
is_darwin = sys.platform == "darwin"
# All options are defined in CMakeLists.txt.
# Some are listed in setup.py, but not all.
variant('caffe2', default=True, description='Build Caffe2')
variant('test', default=False, description='Build C++ test binaries')
variant('cuda', default=not is_darwin, description='Use CUDA')
variant('rocm', default=False, description='Use ROCm')
variant('cudnn', default=not is_darwin, description='Use cuDNN')
variant('fbgemm', default=True, description='Use FBGEMM (quantized 8-bit server operators)')
variant('kineto', default=True, description='Use Kineto profiling library')
variant('magma', default=not is_darwin, description='Use MAGMA')
variant('metal', default=is_darwin, description='Use Metal for Caffe2 iOS build')
variant('nccl', default=not is_darwin, description='Use NCCL')
variant('nnpack', default=True, description='Use NNPACK')
variant('numa', default=not is_darwin, description='Use NUMA')
variant('numpy', default=True, description='Use NumPy')
variant('openmp', default=True, description='Use OpenMP for parallel code')
variant('qnnpack', default=True, description='Use QNNPACK (quantized 8-bit operators)')
variant('valgrind', default=not is_darwin, description='Use Valgrind')
variant('xnnpack', default=True, description='Use XNNPACK')
variant('mkldnn', default=True, description='Use MKLDNN')
variant('distributed', default=not is_darwin, description='Use distributed')
variant('mpi', default=not is_darwin, description='Use MPI for Caffe2')
variant('gloo', default=not is_darwin, description='Use Gloo')
variant('tensorpipe', default=not is_darwin, description='Use TensorPipe')
variant('onnx_ml', default=True, description='Enable traditional ONNX ML API')
variant('breakpad', default=True, description='Enable breakpad crash dump library')
variant("caffe2", default=True, description="Build Caffe2")
variant("test", default=False, description="Build C++ test binaries")
variant("cuda", default=not is_darwin, description="Use CUDA")
variant("rocm", default=False, description="Use ROCm")
variant("cudnn", default=not is_darwin, description="Use cuDNN")
variant("fbgemm", default=True, description="Use FBGEMM (quantized 8-bit server operators)")
variant("kineto", default=True, description="Use Kineto profiling library")
variant("magma", default=not is_darwin, description="Use MAGMA")
variant("metal", default=is_darwin, description="Use Metal for Caffe2 iOS build")
variant("nccl", default=not is_darwin, description="Use NCCL")
variant("nnpack", default=True, description="Use NNPACK")
variant("numa", default=not is_darwin, description="Use NUMA")
variant("numpy", default=True, description="Use NumPy")
variant("openmp", default=True, description="Use OpenMP for parallel code")
variant("qnnpack", default=True, description="Use QNNPACK (quantized 8-bit operators)")
variant("valgrind", default=not is_darwin, description="Use Valgrind")
variant("xnnpack", default=True, description="Use XNNPACK")
variant("mkldnn", default=True, description="Use MKLDNN")
variant("distributed", default=not is_darwin, description="Use distributed")
variant("mpi", default=not is_darwin, description="Use MPI for Caffe2")
variant("gloo", default=not is_darwin, description="Use Gloo")
variant("tensorpipe", default=not is_darwin, description="Use TensorPipe")
variant("onnx_ml", default=True, description="Enable traditional ONNX ML API")
variant("breakpad", default=True, description="Enable breakpad crash dump library")
conflicts('+cuda', when='+rocm')
conflicts('+cudnn', when='~cuda')
conflicts('+magma', when='~cuda')
conflicts('+nccl', when='~cuda~rocm')
conflicts('+nccl', when='platform=darwin')
conflicts('+numa', when='platform=darwin', msg='Only available on Linux')
conflicts('+valgrind', when='platform=darwin', msg='Only available on Linux')
conflicts('+mpi', when='~distributed')
conflicts('+gloo', when='~distributed')
conflicts('+tensorpipe', when='~distributed')
conflicts('+kineto', when='@:1.7')
conflicts('+valgrind', when='@:1.7')
conflicts('~caffe2', when='@0.4.0:1.6') # no way to disable caffe2?
conflicts('+caffe2', when='@:0.3.1') # caffe2 did not yet exist?
conflicts('+tensorpipe', when='@:1.5')
conflicts('+xnnpack', when='@:1.4')
conflicts('~onnx_ml', when='@:1.4') # no way to disable ONNX?
conflicts('+rocm', when='@:0.4')
conflicts('+cudnn', when='@:0.4')
conflicts('+fbgemm', when='@:0.4,1.4.0')
conflicts('+qnnpack', when='@:0.4')
conflicts('+mkldnn', when='@:0.4')
conflicts('+breakpad', when='@:1.9') # Option appeared in 1.10.0
conflicts('+breakpad', when='target=ppc64:', msg='Unsupported')
conflicts('+breakpad', when='target=ppc64le:', msg='Unsupported')
conflicts("+cuda", when="+rocm")
conflicts("+cudnn", when="~cuda")
conflicts("+magma", when="~cuda")
conflicts("+nccl", when="~cuda~rocm")
conflicts("+nccl", when="platform=darwin")
conflicts("+numa", when="platform=darwin", msg="Only available on Linux")
conflicts("+valgrind", when="platform=darwin", msg="Only available on Linux")
conflicts("+mpi", when="~distributed")
conflicts("+gloo", when="~distributed")
conflicts("+tensorpipe", when="~distributed")
conflicts("+kineto", when="@:1.7")
conflicts("+valgrind", when="@:1.7")
conflicts("~caffe2", when="@0.4.0:1.6") # no way to disable caffe2?
conflicts("+caffe2", when="@:0.3.1") # caffe2 did not yet exist?
conflicts("+tensorpipe", when="@:1.5")
conflicts("+xnnpack", when="@:1.4")
conflicts("~onnx_ml", when="@:1.4") # no way to disable ONNX?
conflicts("+rocm", when="@:0.4")
conflicts("+cudnn", when="@:0.4")
conflicts("+fbgemm", when="@:0.4,1.4.0")
conflicts("+qnnpack", when="@:0.4")
conflicts("+mkldnn", when="@:0.4")
conflicts("+breakpad", when="@:1.9") # Option appeared in 1.10.0
conflicts("+breakpad", when="target=ppc64:", msg="Unsupported")
conflicts("+breakpad", when="target=ppc64le:", msg="Unsupported")
conflicts('cuda_arch=none', when='+cuda',
msg='Must specify CUDA compute capabilities of your GPU, see '
'https://developer.nvidia.com/cuda-gpus')
conflicts(
"cuda_arch=none",
when="+cuda",
msg="Must specify CUDA compute capabilities of your GPU, see "
"https://developer.nvidia.com/cuda-gpus",
)
# Required dependencies
depends_on('cmake@3.5:', type='build')
depends_on("cmake@3.5:", type="build")
# Use Ninja generator to speed up build times, automatically used if found
depends_on('ninja@1.5:', when='@1.1.0:', type='build')
depends_on("ninja@1.5:", when="@1.1.0:", type="build")
# See python_min_version in setup.py
depends_on('python@3.6.2:', when='@1.7.1:', type=('build', 'link', 'run'))
depends_on('python@3.6.1:', when='@1.6.0:1.7.0', type=('build', 'link', 'run'))
depends_on('python@3.5:', when='@1.5.0:1.5', type=('build', 'link', 'run'))
depends_on('python@2.7:2.8,3.5:', when='@1.4.0:1.4', type=('build', 'link', 'run'))
depends_on('python@2.7:2.8,3.5:3.7', when='@:1.3', type=('build', 'link', 'run'))
depends_on('py-setuptools', type=('build', 'run'))
depends_on('py-future', when='@1.5:', type=('build', 'run'))
depends_on('py-future', when='@1.1: ^python@:2', type=('build', 'run'))
depends_on('py-pyyaml', type=('build', 'run'))
depends_on('py-typing', when='@0.4: ^python@:3.4', type=('build', 'run'))
depends_on('py-typing-extensions', when='@1.7:', type=('build', 'run'))
depends_on('py-pybind11@2.6.2', when='@1.8.0:', type=('build', 'link', 'run'))
depends_on('py-pybind11@2.3.0', when='@1.1.0:1.7', type=('build', 'link', 'run'))
depends_on('py-pybind11@2.2.4', when='@1.0.0:1.0', type=('build', 'link', 'run'))
depends_on('py-pybind11@2.2.2', when='@0.4.0:0.4', type=('build', 'link', 'run'))
depends_on('py-dataclasses', when='@1.7: ^python@3.6.0:3.6', type=('build', 'run'))
depends_on('py-tqdm', type='run')
depends_on('py-protobuf', when='@0.4:', type=('build', 'run'))
depends_on('protobuf', when='@0.4:')
depends_on('blas')
depends_on('lapack')
depends_on('eigen', when='@0.4:')
depends_on("python@3.6.2:", when="@1.7.1:", type=("build", "link", "run"))
depends_on("python@3.6.1:", when="@1.6.0:1.7.0", type=("build", "link", "run"))
depends_on("python@3.5:", when="@1.5.0:1.5", type=("build", "link", "run"))
depends_on("python@2.7:2.8,3.5:", when="@1.4.0:1.4", type=("build", "link", "run"))
depends_on("python@2.7:2.8,3.5:3.7", when="@:1.3", type=("build", "link", "run"))
depends_on("py-setuptools", type=("build", "run"))
depends_on("py-future", when="@1.5:", type=("build", "run"))
depends_on("py-future", when="@1.1: ^python@:2", type=("build", "run"))
depends_on("py-pyyaml", type=("build", "run"))
depends_on("py-typing", when="@0.4: ^python@:3.4", type=("build", "run"))
depends_on("py-typing-extensions", when="@1.7:", type=("build", "run"))
depends_on("py-pybind11@2.6.2", when="@1.8.0:", type=("build", "link", "run"))
depends_on("py-pybind11@2.3.0", when="@1.1.0:1.7", type=("build", "link", "run"))
depends_on("py-pybind11@2.2.4", when="@1.0.0:1.0", type=("build", "link", "run"))
depends_on("py-pybind11@2.2.2", when="@0.4.0:0.4", type=("build", "link", "run"))
depends_on("py-dataclasses", when="@1.7: ^python@3.6.0:3.6", type=("build", "run"))
depends_on("py-tqdm", type="run")
depends_on("py-protobuf", when="@0.4:", type=("build", "run"))
depends_on("protobuf", when="@0.4:")
depends_on("blas")
depends_on("lapack")
depends_on("eigen", when="@0.4:")
# https://github.com/pytorch/pytorch/issues/60329
# depends_on('cpuinfo@2020-12-17', when='@1.8.0:')
# depends_on('cpuinfo@2020-06-11', when='@1.6.0:1.7')
@@ -152,30 +165,30 @@ class PyTorch(PythonPackage, CudaPackage):
# depends_on('sleef@3.4.0_2019-07-30', when='@1.6.0:1.7')
# https://github.com/Maratyszcza/FP16/issues/18
# depends_on('fp16@2020-05-14', when='@1.6.0:')
depends_on('pthreadpool@2021-04-13', when='@1.9.0:')
depends_on('pthreadpool@2020-10-05', when='@1.8.0:1.8')
depends_on('pthreadpool@2020-06-15', when='@1.6.0:1.7')
depends_on('psimd@2020-05-17', when='@1.6.0:')
depends_on('fxdiv@2020-04-17', when='@1.6.0:')
depends_on('benchmark', when='@1.6:+test')
depends_on("pthreadpool@2021-04-13", when="@1.9.0:")
depends_on("pthreadpool@2020-10-05", when="@1.8.0:1.8")
depends_on("pthreadpool@2020-06-15", when="@1.6.0:1.7")
depends_on("psimd@2020-05-17", when="@1.6.0:")
depends_on("fxdiv@2020-04-17", when="@1.6.0:")
depends_on("benchmark", when="@1.6:+test")
# Optional dependencies
depends_on('cuda@7.5:', when='+cuda', type=('build', 'link', 'run'))
depends_on('cuda@9:', when='@1.1:+cuda', type=('build', 'link', 'run'))
depends_on('cuda@9.2:', when='@1.6:+cuda', type=('build', 'link', 'run'))
depends_on('cudnn@6.0:7', when='@:1.0+cudnn')
depends_on('cudnn@7.0:7', when='@1.1.0:1.5+cudnn')
depends_on('cudnn@7.0:', when='@1.6.0:+cudnn')
depends_on('magma', when='+magma')
depends_on('nccl', when='+nccl')
depends_on('numactl', when='+numa')
depends_on('py-numpy', when='+numpy', type=('build', 'run'))
depends_on('llvm-openmp', when='%apple-clang +openmp')
depends_on('valgrind', when='+valgrind')
depends_on("cuda@7.5:", when="+cuda", type=("build", "link", "run"))
depends_on("cuda@9:", when="@1.1:+cuda", type=("build", "link", "run"))
depends_on("cuda@9.2:", when="@1.6:+cuda", type=("build", "link", "run"))
depends_on("cudnn@6.0:7", when="@:1.0+cudnn")
depends_on("cudnn@7.0:7", when="@1.1.0:1.5+cudnn")
depends_on("cudnn@7.0:", when="@1.6.0:+cudnn")
depends_on("magma", when="+magma")
depends_on("nccl", when="+nccl")
depends_on("numactl", when="+numa")
depends_on("py-numpy", when="+numpy", type=("build", "run"))
depends_on("llvm-openmp", when="%apple-clang +openmp")
depends_on("valgrind", when="+valgrind")
# https://github.com/pytorch/pytorch/issues/60332
# depends_on('xnnpack@2021-02-22', when='@1.8.0:+xnnpack')
# depends_on('xnnpack@2020-03-23', when='@1.6.0:1.7+xnnpack')
depends_on('mpi', when='+mpi')
depends_on("mpi", when="+mpi")
# https://github.com/pytorch/pytorch/issues/60270
# depends_on('gloo@2021-05-04', when='@1.9.0:+gloo')
# depends_on('gloo@2020-09-18', when='@1.7.0:1.8+gloo')
@@ -183,31 +196,35 @@ class PyTorch(PythonPackage, CudaPackage):
# https://github.com/pytorch/pytorch/issues/60331
# depends_on('onnx@1.8.0_2020-11-03', when='@1.8.0:+onnx_ml')
# depends_on('onnx@1.7.0_2020-05-31', when='@1.6.0:1.7+onnx_ml')
depends_on('mkl', when='+mkldnn')
depends_on("mkl", when="+mkldnn")
# Test dependencies
depends_on('py-hypothesis', type='test')
depends_on('py-six', type='test')
depends_on('py-psutil', type='test')
depends_on("py-hypothesis", type="test")
depends_on("py-six", type="test")
depends_on("py-psutil", type="test")
# Fix BLAS being overridden by MKL
# https://github.com/pytorch/pytorch/issues/60328
patch('https://patch-diff.githubusercontent.com/raw/pytorch/pytorch/pull/59220.patch',
sha256='e37afffe45cf7594c22050109942370e49983ad772d12ebccf508377dc9dcfc9',
when='@1.2.0:')
patch(
"https://patch-diff.githubusercontent.com/raw/pytorch/pytorch/pull/59220.patch",
sha256="e37afffe45cf7594c22050109942370e49983ad772d12ebccf508377dc9dcfc9",
when="@1.2.0:",
)
# Fixes build on older systems with glibc <2.12
patch('https://patch-diff.githubusercontent.com/raw/pytorch/pytorch/pull/55063.patch',
sha256='e17eaa42f5d7c18bf0d7c37d7b0910127a01ad53fdce3e226a92893356a70395',
when='@1.1.0:1.8.1')
patch(
"https://patch-diff.githubusercontent.com/raw/pytorch/pytorch/pull/55063.patch",
sha256="e17eaa42f5d7c18bf0d7c37d7b0910127a01ad53fdce3e226a92893356a70395",
when="@1.1.0:1.8.1",
)
# Fixes CMake configuration error when XNNPACK is disabled
# https://github.com/pytorch/pytorch/pull/35607
# https://github.com/pytorch/pytorch/pull/37865
patch('xnnpack.patch', when='@1.5.0:1.5')
patch("xnnpack.patch", when="@1.5.0:1.5")
# Fixes build error when ROCm is enabled for pytorch-1.5 release
patch('rocm.patch', when='@1.5.0:1.5+rocm')
patch("rocm.patch", when="@1.5.0:1.5+rocm")
# Fixes fatal error: sleef.h: No such file or directory
# https://github.com/pytorch/pytorch/pull/35359
@@ -216,47 +233,56 @@ class PyTorch(PythonPackage, CudaPackage):
# Fixes compilation with Clang 9.0.0 and Apple Clang 11.0.3
# https://github.com/pytorch/pytorch/pull/37086
patch('https://github.com/pytorch/pytorch/commit/e921cd222a8fbeabf5a3e74e83e0d8dfb01aa8b5.patch',
sha256='17561b16cd2db22f10c0fe1fdcb428aecb0ac3964ba022a41343a6bb8cba7049',
when='@1.1:1.5')
patch(
"https://github.com/pytorch/pytorch/commit/e921cd222a8fbeabf5a3e74e83e0d8dfb01aa8b5.patch",
sha256="17561b16cd2db22f10c0fe1fdcb428aecb0ac3964ba022a41343a6bb8cba7049",
when="@1.1:1.5",
)
# Removes duplicate definition of getCusparseErrorString
# https://github.com/pytorch/pytorch/issues/32083
patch('cusparseGetErrorString.patch', when='@0.4.1:1.0^cuda@10.1.243:')
patch("cusparseGetErrorString.patch", when="@0.4.1:1.0^cuda@10.1.243:")
# Fixes 'FindOpenMP.cmake'
# to detect openmp settings used by Fujitsu compiler.
patch('detect_omp_of_fujitsu_compiler.patch', when='%fj')
patch("detect_omp_of_fujitsu_compiler.patch", when="%fj")
# Fix compilation of +distributed~tensorpipe
# https://github.com/pytorch/pytorch/issues/68002
patch('https://github.com/pytorch/pytorch/commit/c075f0f633fa0136e68f0a455b5b74d7b500865c.patch',
sha256='e69e41b5c171bfb00d1b5d4ee55dd5e4c8975483230274af4ab461acd37e40b8', when='@1.10.0+distributed~tensorpipe')
patch(
"https://github.com/pytorch/pytorch/commit/c075f0f633fa0136e68f0a455b5b74d7b500865c.patch",
sha256="e69e41b5c171bfb00d1b5d4ee55dd5e4c8975483230274af4ab461acd37e40b8",
when="@1.10.0+distributed~tensorpipe",
)
# Both build and install run cmake/make/make install
# Only run once to speed up build times
phases = ['install']
phases = ["install"]
@property
def libs(self):
root = join_path(self.prefix, self.spec['python'].package.site_packages_dir,
'torch', 'lib')
return find_libraries('libtorch', root)
root = join_path(
self.prefix, self.spec["python"].package.site_packages_dir, "torch", "lib"
)
return find_libraries("libtorch", root)
@property
def headers(self):
root = join_path(self.prefix, self.spec['python'].package.site_packages_dir,
'torch', 'include')
root = join_path(
self.prefix, self.spec["python"].package.site_packages_dir, "torch", "include"
)
headers = find_all_headers(root)
headers.directories = [root]
return headers
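A rough stand-in for what these two properties compute, with glob replacing Spack's find_libraries/find_all_headers and a hypothetical site-packages layout:

import glob
import os

def torch_libs(prefix, site_packages="lib/python3.10/site-packages"):
    # Same shape as the libs property: <prefix>/<site-packages>/torch/lib
    root = os.path.join(prefix, site_packages, "torch", "lib")
    return glob.glob(os.path.join(root, "libtorch.*"))

print(torch_libs("/opt/spack/py-torch"))  # -> [] unless such a tree exists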
@when('@1.5.0:')
@when("@1.5.0:")
def patch(self):
# https://github.com/pytorch/pytorch/issues/52208
filter_file('torch_global_deps PROPERTIES LINKER_LANGUAGE C',
'torch_global_deps PROPERTIES LINKER_LANGUAGE CXX',
'caffe2/CMakeLists.txt')
filter_file(
"torch_global_deps PROPERTIES LINKER_LANGUAGE C",
"torch_global_deps PROPERTIES LINKER_LANGUAGE CXX",
"caffe2/CMakeLists.txt",
)
def setup_build_environment(self, env):
"""Set environment variables used to control the build.
@@ -269,7 +295,8 @@ class PyTorch(PythonPackage, CudaPackage):
most flags defined in ``CMakeLists.txt`` can be specified as
environment variables.
"""
def enable_or_disable(variant, keyword='USE', var=None, newer=False):
def enable_or_disable(variant, keyword="USE", var=None, newer=False):
"""Set environment variable to enable or disable support for a
particular variant.
@@ -284,137 +311,135 @@ class PyTorch(PythonPackage, CudaPackage):
# Version 1.1.0 switched from NO_* to USE_* or BUILD_*
# But some newer variants have always used USE_* or BUILD_*
if self.spec.satisfies('@1.1:') or newer:
if '+' + variant in self.spec:
env.set(keyword + '_' + var, 'ON')
if self.spec.satisfies("@1.1:") or newer:
if "+" + variant in self.spec:
env.set(keyword + "_" + var, "ON")
else:
env.set(keyword + '_' + var, 'OFF')
env.set(keyword + "_" + var, "OFF")
else:
if '+' + variant in self.spec:
env.unset('NO_' + var)
if "+" + variant in self.spec:
env.unset("NO_" + var)
else:
env.set('NO_' + var, 'ON')
env.set("NO_" + var, "ON")
# Build in parallel to speed up build times
env.set('MAX_JOBS', make_jobs)
env.set("MAX_JOBS", make_jobs)
# Spack logs have trouble handling colored output
env.set('COLORIZE_OUTPUT', 'OFF')
env.set("COLORIZE_OUTPUT", "OFF")
if self.spec.satisfies('@0.4:'):
enable_or_disable('test', keyword='BUILD')
if self.spec.satisfies("@0.4:"):
enable_or_disable("test", keyword="BUILD")
if self.spec.satisfies('@1.7:'):
enable_or_disable('caffe2', keyword='BUILD')
if self.spec.satisfies("@1.7:"):
enable_or_disable("caffe2", keyword="BUILD")
enable_or_disable('cuda')
if '+cuda' in self.spec:
enable_or_disable("cuda")
if "+cuda" in self.spec:
# cmake/public/cuda.cmake
# cmake/Modules_CUDA_fix/upstream/FindCUDA.cmake
env.unset('CUDA_ROOT')
torch_cuda_arch = ';'.join('{0:.1f}'.format(float(i) / 10.0)
for i in self.spec.variants['cuda_arch'].value)
env.set('TORCH_CUDA_ARCH_LIST', torch_cuda_arch)
env.unset("CUDA_ROOT")
torch_cuda_arch = ";".join(
"{0:.1f}".format(float(i) / 10.0) for i in self.spec.variants["cuda_arch"].value
)
env.set("TORCH_CUDA_ARCH_LIST", torch_cuda_arch)
enable_or_disable('rocm')
enable_or_disable("rocm")
enable_or_disable('cudnn')
if '+cudnn' in self.spec:
enable_or_disable("cudnn")
if "+cudnn" in self.spec:
# cmake/Modules_CUDA_fix/FindCUDNN.cmake
env.set('CUDNN_INCLUDE_DIR', self.spec['cudnn'].prefix.include)
env.set('CUDNN_LIBRARY', self.spec['cudnn'].libs[0])
env.set("CUDNN_INCLUDE_DIR", self.spec["cudnn"].prefix.include)
env.set("CUDNN_LIBRARY", self.spec["cudnn"].libs[0])
enable_or_disable('fbgemm')
if self.spec.satisfies('@1.8:'):
enable_or_disable('kineto')
enable_or_disable('magma')
enable_or_disable('metal')
if self.spec.satisfies('@1.10:'):
enable_or_disable('breakpad')
enable_or_disable("fbgemm")
if self.spec.satisfies("@1.8:"):
enable_or_disable("kineto")
enable_or_disable("magma")
enable_or_disable("metal")
if self.spec.satisfies("@1.10:"):
enable_or_disable("breakpad")
enable_or_disable('nccl')
if '+nccl' in self.spec:
env.set('NCCL_LIB_DIR', self.spec['nccl'].libs.directories[0])
env.set('NCCL_INCLUDE_DIR', self.spec['nccl'].prefix.include)
enable_or_disable("nccl")
if "+nccl" in self.spec:
env.set("NCCL_LIB_DIR", self.spec["nccl"].libs.directories[0])
env.set("NCCL_INCLUDE_DIR", self.spec["nccl"].prefix.include)
# cmake/External/nnpack.cmake
enable_or_disable('nnpack')
enable_or_disable("nnpack")
enable_or_disable('numa')
if '+numa' in self.spec:
enable_or_disable("numa")
if "+numa" in self.spec:
# cmake/Modules/FindNuma.cmake
env.set('NUMA_ROOT_DIR', self.spec['numactl'].prefix)
env.set("NUMA_ROOT_DIR", self.spec["numactl"].prefix)
# cmake/Modules/FindNumPy.cmake
enable_or_disable('numpy')
enable_or_disable("numpy")
# cmake/Modules/FindOpenMP.cmake
enable_or_disable('openmp', newer=True)
enable_or_disable('qnnpack')
if self.spec.satisfies('@1.3:'):
enable_or_disable('qnnpack', var='PYTORCH_QNNPACK')
if self.spec.satisfies('@1.8:'):
enable_or_disable('valgrind')
if self.spec.satisfies('@1.5:'):
enable_or_disable('xnnpack')
enable_or_disable('mkldnn')
enable_or_disable('distributed')
enable_or_disable('mpi')
enable_or_disable("openmp", newer=True)
enable_or_disable("qnnpack")
if self.spec.satisfies("@1.3:"):
enable_or_disable("qnnpack", var="PYTORCH_QNNPACK")
if self.spec.satisfies("@1.8:"):
enable_or_disable("valgrind")
if self.spec.satisfies("@1.5:"):
enable_or_disable("xnnpack")
enable_or_disable("mkldnn")
enable_or_disable("distributed")
enable_or_disable("mpi")
# cmake/Modules/FindGloo.cmake
enable_or_disable('gloo', newer=True)
if self.spec.satisfies('@1.6:'):
enable_or_disable('tensorpipe')
enable_or_disable("gloo", newer=True)
if self.spec.satisfies("@1.6:"):
enable_or_disable("tensorpipe")
if '+onnx_ml' in self.spec:
env.set('ONNX_ML', 'ON')
if "+onnx_ml" in self.spec:
env.set("ONNX_ML", "ON")
else:
env.set('ONNX_ML', 'OFF')
env.set("ONNX_ML", "OFF")
if not self.spec.satisfies('@master'):
env.set('PYTORCH_BUILD_VERSION', self.version)
env.set('PYTORCH_BUILD_NUMBER', 0)
if not self.spec.satisfies("@master"):
env.set("PYTORCH_BUILD_VERSION", self.version)
env.set("PYTORCH_BUILD_NUMBER", 0)
# BLAS to be used by Caffe2
# Options defined in cmake/Dependencies.cmake and cmake/Modules/FindBLAS.cmake
if self.spec['blas'].name == 'atlas':
env.set('BLAS', 'ATLAS')
env.set('WITH_BLAS', 'atlas')
elif self.spec['blas'].name in ['blis', 'amdblis']:
env.set('BLAS', 'BLIS')
env.set('WITH_BLAS', 'blis')
elif self.spec['blas'].name == 'eigen':
env.set('BLAS', 'Eigen')
elif self.spec['lapack'].name in ['libflame', 'amdlibflame']:
env.set('BLAS', 'FLAME')
env.set('WITH_BLAS', 'FLAME')
elif self.spec['blas'].name in [
'intel-mkl', 'intel-parallel-studio', 'intel-oneapi-mkl']:
env.set('BLAS', 'MKL')
env.set('WITH_BLAS', 'mkl')
elif self.spec['blas'].name == 'openblas':
env.set('BLAS', 'OpenBLAS')
env.set('WITH_BLAS', 'open')
elif self.spec['blas'].name == 'veclibfort':
env.set('BLAS', 'vecLib')
env.set('WITH_BLAS', 'veclib')
if self.spec["blas"].name == "atlas":
env.set("BLAS", "ATLAS")
env.set("WITH_BLAS", "atlas")
elif self.spec["blas"].name in ["blis", "amdblis"]:
env.set("BLAS", "BLIS")
env.set("WITH_BLAS", "blis")
elif self.spec["blas"].name == "eigen":
env.set("BLAS", "Eigen")
elif self.spec["lapack"].name in ["libflame", "amdlibflame"]:
env.set("BLAS", "FLAME")
env.set("WITH_BLAS", "FLAME")
elif self.spec["blas"].name in ["intel-mkl", "intel-parallel-studio", "intel-oneapi-mkl"]:
env.set("BLAS", "MKL")
env.set("WITH_BLAS", "mkl")
elif self.spec["blas"].name == "openblas":
env.set("BLAS", "OpenBLAS")
env.set("WITH_BLAS", "open")
elif self.spec["blas"].name == "veclibfort":
env.set("BLAS", "vecLib")
env.set("WITH_BLAS", "veclib")
else:
env.set('BLAS', 'Generic')
env.set('WITH_BLAS', 'generic')
env.set("BLAS", "Generic")
env.set("WITH_BLAS", "generic")
# Don't use vendored third-party libraries when possible
env.set('BUILD_CUSTOM_PROTOBUF', 'OFF')
env.set('USE_SYSTEM_NCCL', 'ON')
env.set('USE_SYSTEM_EIGEN_INSTALL', 'ON')
if self.spec.satisfies('@0.4:'):
env.set('pybind11_DIR', self.spec['py-pybind11'].prefix)
env.set('pybind11_INCLUDE_DIR',
self.spec['py-pybind11'].prefix.include)
if self.spec.satisfies('@1.10:'):
env.set('USE_SYSTEM_PYBIND11', 'ON')
env.set("BUILD_CUSTOM_PROTOBUF", "OFF")
env.set("USE_SYSTEM_NCCL", "ON")
env.set("USE_SYSTEM_EIGEN_INSTALL", "ON")
if self.spec.satisfies("@0.4:"):
env.set("pybind11_DIR", self.spec["py-pybind11"].prefix)
env.set("pybind11_INCLUDE_DIR", self.spec["py-pybind11"].prefix.include)
if self.spec.satisfies("@1.10:"):
env.set("USE_SYSTEM_PYBIND11", "ON")
# https://github.com/pytorch/pytorch/issues/60334
# if self.spec.satisfies('@1.8:'):
# env.set('USE_SYSTEM_SLEEF', 'ON')
if self.spec.satisfies('@1.6:'):
if self.spec.satisfies("@1.6:"):
# env.set('USE_SYSTEM_LIBS', 'ON')
# https://github.com/pytorch/pytorch/issues/60329
# env.set('USE_SYSTEM_CPUINFO', 'ON')
@@ -422,27 +447,26 @@ class PyTorch(PythonPackage, CudaPackage):
# env.set('USE_SYSTEM_GLOO', 'ON')
# https://github.com/Maratyszcza/FP16/issues/18
# env.set('USE_SYSTEM_FP16', 'ON')
env.set('USE_SYSTEM_PTHREADPOOL', 'ON')
env.set('USE_SYSTEM_PSIMD', 'ON')
env.set('USE_SYSTEM_FXDIV', 'ON')
env.set('USE_SYSTEM_BENCHMARK', 'ON')
env.set("USE_SYSTEM_PTHREADPOOL", "ON")
env.set("USE_SYSTEM_PSIMD", "ON")
env.set("USE_SYSTEM_FXDIV", "ON")
env.set("USE_SYSTEM_BENCHMARK", "ON")
# https://github.com/pytorch/pytorch/issues/60331
# env.set('USE_SYSTEM_ONNX', 'ON')
# https://github.com/pytorch/pytorch/issues/60332
# env.set('USE_SYSTEM_XNNPACK', 'ON')
@run_before('install')
@run_before("install")
def build_amd(self):
if '+rocm' in self.spec:
python(os.path.join('tools', 'amd_build', 'build_amd.py'))
if "+rocm" in self.spec:
python(os.path.join("tools", "amd_build", "build_amd.py"))
@run_after('install')
@run_after("install")
@on_package_attributes(run_tests=True)
def install_test(self):
with working_dir('test'):
python('run_test.py')
with working_dir("test"):
python("run_test.py")
# Tests need to be re-added since `phases` was overridden
run_after('install')(
PythonPackage._run_default_install_time_test_callbacks)
run_after('install')(PythonPackage.sanity_check_prefix)
run_after("install")(PythonPackage._run_default_install_time_test_callbacks)
run_after("install")(PythonPackage.sanity_check_prefix)

File diff suppressed because it is too large


@@ -400,7 +400,7 @@ def test_sanitize_literals(env, exclude, include):
({"SHLVL": "1"}, ["SH.*"], [], [], ["SHLVL"]),
# Check we can include using a regex
({"SHLVL": "1"}, ["SH.*"], ["SH.*"], ["SHLVL"], []),
# Check regex to exclude Environment Modules related vars
# Check regex to exclude Modules v4 related vars
(
{"MODULES_LMALTNAME": "1", "MODULES_LMCONFLICT": "2"},
["MODULES_(.*)"],
@@ -415,13 +415,6 @@ def test_sanitize_literals(env, exclude, include):
[],
["A_modquar", "b_modquar", "C_modshare"],
),
(
{"__MODULES_LMTAG": "1", "__MODULES_LMPREREQ": "2"},
["__MODULES_(.*)"],
[],
[],
["__MODULES_LMTAG", "__MODULES_LMPREREQ"],
),
],
)
def test_sanitize_regex(env, exclude, include, expected, deleted):
@@ -496,19 +489,3 @@ def test_exclude_lmod_variables():
# Check that variables related to lmod are not in there
modifications = env.group_by_name()
assert not any(x.startswith("LMOD_") for x in modifications)
@pytest.mark.skipif(sys.platform == "win32", reason="Not supported on Windows (yet)")
@pytest.mark.regression("13504")
def test_exclude_modules_variables():
# Construct the list of environment modifications
file = os.path.join(datadir, "sourceme_modules.sh")
env = EnvironmentModifications.from_sourcing_file(file)
# Check that variables related to modules are not in there
modifications = env.group_by_name()
assert not any(x.startswith("MODULES_") for x in modifications)
assert not any(x.startswith("__MODULES_") for x in modifications)
assert not any(x.startswith("BASH_FUNC_ml") for x in modifications)
assert not any(x.startswith("BASH_FUNC_module") for x in modifications)
assert not any(x.startswith("BASH_FUNC__module_raw") for x in modifications)


@@ -1386,30 +1386,6 @@ def test_single_external_implicit_install(install_mockery, explicit_args, is_exp
assert spack.store.db.get_record(pkg).explicit == is_explicit
@pytest.mark.skipif(
sys.platform == "win32",
reason="Windows breaks overwrite install due to prefix normalization inconsistencies",
)
def test_overwrite_install_does_install_build_deps(install_mockery, mock_fetch):
"""When overwrite installing something from sources, build deps should be installed."""
s = spack.spec.Spec("dtrun3").concretized()
create_installer([(s, {})]).install()
# Verify there is a pure build dep
edge = s.edges_to_dependencies(name="dtbuild3").pop()
assert edge.deptypes == ("build",)
build_dep = edge.spec
# Uninstall the build dep
build_dep.package.do_uninstall()
# Overwrite install the root dtrun3
create_installer([(s, {"overwrite": [s.dag_hash()]})]).install()
# Verify that the build dep was also installed.
assert build_dep.installed
@pytest.mark.parametrize("run_tests", [True, False])
def test_print_install_test_log_skipped(install_mockery, mock_packages, capfd, run_tests):
"""Confirm printing of install log skipped if not run/no failures."""


@@ -167,46 +167,6 @@ def test_prepend_path_separator(self, modulefile_content, module_configuration):
assert len([x for x in content if 'append_path("SPACE", "qux", " ")' in x]) == 1
assert len([x for x in content if 'remove_path("SPACE", "qux", " ")' in x]) == 1
@pytest.mark.regression("11355")
def test_manpath_setup(self, modulefile_content, module_configuration):
"""Tests specific setup of MANPATH environment variable."""
module_configuration("autoload_direct")
# no manpath set by module
content = modulefile_content("mpileaks")
assert len([x for x in content if 'append_path("MANPATH", "", ":")' in x]) == 0
# manpath set by module with prepend_path
content = modulefile_content("module-manpath-prepend")
assert (
len([x for x in content if 'prepend_path("MANPATH", "/path/to/man", ":")' in x]) == 1
)
assert (
len([x for x in content if 'prepend_path("MANPATH", "/path/to/share/man", ":")' in x])
== 1
)
assert len([x for x in content if 'append_path("MANPATH", "", ":")' in x]) == 1
# manpath set by module with append_path
content = modulefile_content("module-manpath-append")
assert len([x for x in content if 'append_path("MANPATH", "/path/to/man", ":")' in x]) == 1
assert len([x for x in content if 'append_path("MANPATH", "", ":")' in x]) == 1
# manpath set by module with setenv
content = modulefile_content("module-manpath-setenv")
assert len([x for x in content if 'setenv("MANPATH", "/path/to/man")' in x]) == 1
assert len([x for x in content if 'append_path("MANPATH", "", ":")' in x]) == 0
@pytest.mark.regression("29578")
def test_setenv_raw_value(self, modulefile_content, module_configuration):
"""Tests that we can set environment variable value without formatting it."""
module_configuration("autoload_direct")
content = modulefile_content("module-setenv-raw")
assert len([x for x in content if 'setenv("FOO", "{{name}}, {name}, {{}}, {}")' in x]) == 1
def test_help_message(self, modulefile_content, module_configuration):
"""Tests the generation of module help message."""
@@ -232,18 +192,6 @@ def test_help_message(self, modulefile_content, module_configuration):
)
assert help_msg in "".join(content)
content = modulefile_content("module-long-help target=core2")
help_msg = (
"help([[Name : module-long-help]])"
"help([[Version: 1.0]])"
"help([[Target : core2]])"
"help()"
"help([[Package to test long description message generated in modulefile."
"Message too long is wrapped over multiple lines.]])"
)
assert help_msg in "".join(content)
def test_exclude(self, modulefile_content, module_configuration):
"""Tests excluding the generation of selected modules."""
module_configuration("exclude")


@@ -29,7 +29,7 @@ def test_simple_case(self, modulefile_content, module_configuration):
module_configuration("autoload_direct")
content = modulefile_content(mpich_spec_string)
assert "module-whatis {mpich @3.0.4}" in content
assert 'module-whatis "mpich @3.0.4"' in content
def test_autoload_direct(self, modulefile_content, module_configuration):
"""Tests the automatic loading of direct dependencies."""
@@ -37,11 +37,6 @@ def test_autoload_direct(self, modulefile_content, module_configuration):
module_configuration("autoload_direct")
content = modulefile_content(mpileaks_spec_string)
assert (
len([x for x in content if "if {![info exists ::env(LMOD_VERSION_MAJOR)]} {" in x])
== 1
)
assert len([x for x in content if "depends-on " in x]) == 2
assert len([x for x in content if "module load " in x]) == 2
# dtbuild1 has
@@ -51,11 +46,6 @@ def test_autoload_direct(self, modulefile_content, module_configuration):
# Just make sure the 'build' dependency is not there
content = modulefile_content("dtbuild1")
assert (
len([x for x in content if "if {![info exists ::env(LMOD_VERSION_MAJOR)]} {" in x])
== 1
)
assert len([x for x in content if "depends-on " in x]) == 2
assert len([x for x in content if "module load " in x]) == 2
# The configuration file sets the verbose keyword to False
@@ -68,11 +58,6 @@ def test_autoload_all(self, modulefile_content, module_configuration):
module_configuration("autoload_all")
content = modulefile_content(mpileaks_spec_string)
assert (
len([x for x in content if "if {![info exists ::env(LMOD_VERSION_MAJOR)]} {" in x])
== 1
)
assert len([x for x in content if "depends-on " in x]) == 5
assert len([x for x in content if "module load " in x]) == 5
# dtbuild1 has
@@ -82,11 +67,6 @@ def test_autoload_all(self, modulefile_content, module_configuration):
# Just make sure the 'build' dependency is not there
content = modulefile_content("dtbuild1")
assert (
len([x for x in content if "if {![info exists ::env(LMOD_VERSION_MAJOR)]} {" in x])
== 1
)
assert len([x for x in content if "depends-on " in x]) == 2
assert len([x for x in content if "module load " in x]) == 2
def test_prerequisites_direct(self, modulefile_content, module_configuration):
@@ -112,18 +92,17 @@ def test_alter_environment(self, modulefile_content, module_configuration):
content = modulefile_content("mpileaks platform=test target=x86_64")
assert len([x for x in content if x.startswith("prepend-path CMAKE_PREFIX_PATH")]) == 0
assert len([x for x in content if "setenv FOO {foo}" in x]) == 1
assert len([x for x in content if "setenv OMPI_MCA_mpi_leave_pinned {1}" in x]) == 1
assert len([x for x in content if "setenv OMPI_MCA_MPI_LEAVE_PINNED {1}" in x]) == 0
assert len([x for x in content if 'setenv FOO "foo"' in x]) == 1
assert len([x for x in content if 'setenv OMPI_MCA_mpi_leave_pinned "1"' in x]) == 1
assert len([x for x in content if 'setenv OMPI_MCA_MPI_LEAVE_PINNED "1"' in x]) == 0
assert len([x for x in content if "unsetenv BAR" in x]) == 1
assert len([x for x in content if "setenv MPILEAKS_ROOT" in x]) == 1
content = modulefile_content("libdwarf platform=test target=core2")
assert len([x for x in content if x.startswith("prepend-path CMAKE_PREFIX_PATH")]) == 0
assert len([x for x in content if "setenv FOO {foo}" in x]) == 0
assert len([x for x in content if 'setenv FOO "foo"' in x]) == 0
assert len([x for x in content if "unsetenv BAR" in x]) == 0
assert len([x for x in content if "depends-on foo/bar" in x]) == 1
assert len([x for x in content if "module load foo/bar" in x]) == 1
assert len([x for x in content if "setenv LIBDWARF_ROOT" in x]) == 1
@@ -133,63 +112,14 @@ def test_prepend_path_separator(self, modulefile_content, module_configuration):
module_configuration("module_path_separator")
content = modulefile_content("module-path-separator")
assert len([x for x in content if "append-path --delim {:} COLON {foo}" in x]) == 1
assert len([x for x in content if "prepend-path --delim {:} COLON {foo}" in x]) == 1
assert len([x for x in content if "remove-path --delim {:} COLON {foo}" in x]) == 1
assert len([x for x in content if "append-path --delim {;} SEMICOLON {bar}" in x]) == 1
assert len([x for x in content if "prepend-path --delim {;} SEMICOLON {bar}" in x]) == 1
assert len([x for x in content if "remove-path --delim {;} SEMICOLON {bar}" in x]) == 1
assert len([x for x in content if "append-path --delim { } SPACE {qux}" in x]) == 1
assert len([x for x in content if "remove-path --delim { } SPACE {qux}" in x]) == 1
@pytest.mark.regression("11355")
def test_manpath_setup(self, modulefile_content, module_configuration):
"""Tests specific setup of MANPATH environment variable."""
module_configuration("autoload_direct")
# no manpath set by module
content = modulefile_content("mpileaks")
assert len([x for x in content if "append-path --delim {:} MANPATH {}" in x]) == 0
# manpath set by module with prepend-path
content = modulefile_content("module-manpath-prepend")
assert (
len([x for x in content if "prepend-path --delim {:} MANPATH {/path/to/man}" in x])
== 1
)
assert (
len(
[
x
for x in content
if "prepend-path --delim {:} MANPATH {/path/to/share/man}" in x
]
)
== 1
)
assert len([x for x in content if "append-path --delim {:} MANPATH {}" in x]) == 1
# manpath set by module with append-path
content = modulefile_content("module-manpath-append")
assert (
len([x for x in content if "append-path --delim {:} MANPATH {/path/to/man}" in x]) == 1
)
assert len([x for x in content if "append-path --delim {:} MANPATH {}" in x]) == 1
# manpath set by module with setenv
content = modulefile_content("module-manpath-setenv")
assert len([x for x in content if "setenv MANPATH {/path/to/man}" in x]) == 1
assert len([x for x in content if "append-path --delim {:} MANPATH {}" in x]) == 0
@pytest.mark.regression("29578")
def test_setenv_raw_value(self, modulefile_content, module_configuration):
"""Tests that we can set environment variable value without formatting it."""
module_configuration("autoload_direct")
content = modulefile_content("module-setenv-raw")
assert len([x for x in content if "setenv FOO {{{name}}, {name}, {{}}, {}}" in x]) == 1
assert len([x for x in content if 'append-path --delim ":" COLON "foo"' in x]) == 1
assert len([x for x in content if 'prepend-path --delim ":" COLON "foo"' in x]) == 1
assert len([x for x in content if 'remove-path --delim ":" COLON "foo"' in x]) == 1
assert len([x for x in content if 'append-path --delim ";" SEMICOLON "bar"' in x]) == 1
assert len([x for x in content if 'prepend-path --delim ";" SEMICOLON "bar"' in x]) == 1
assert len([x for x in content if 'remove-path --delim ";" SEMICOLON "bar"' in x]) == 1
assert len([x for x in content if 'append-path --delim " " SPACE "qux"' in x]) == 1
assert len([x for x in content if 'remove-path --delim " " SPACE "qux"' in x]) == 1
def test_help_message(self, modulefile_content, module_configuration):
"""Tests the generation of module help message."""
@@ -199,11 +129,11 @@ def test_help_message(self, modulefile_content, module_configuration):
help_msg = (
"proc ModulesHelp { } {"
" puts stderr {Name : mpileaks}"
" puts stderr {Version: 2.3}"
" puts stderr {Target : core2}"
" puts stderr {}"
" puts stderr {Mpileaks is a mock package that passes audits}"
' puts stderr "Name : mpileaks"'
' puts stderr "Version: 2.3"'
' puts stderr "Target : core2"'
' puts stderr ""'
' puts stderr "Mpileaks is a mock package that passes audits"'
"}"
)
assert help_msg in "".join(content)
@@ -212,23 +142,9 @@ def test_help_message(self, modulefile_content, module_configuration):
help_msg = (
"proc ModulesHelp { } {"
" puts stderr {Name : libdwarf}"
" puts stderr {Version: 20130729}"
" puts stderr {Target : core2}"
"}"
)
assert help_msg in "".join(content)
content = modulefile_content("module-long-help target=core2")
help_msg = (
"proc ModulesHelp { } {"
" puts stderr {Name : module-long-help}"
" puts stderr {Version: 1.0}"
" puts stderr {Target : core2}"
" puts stderr {}"
" puts stderr {Package to test long description message generated in modulefile.}"
" puts stderr {Message too long is wrapped over multiple lines.}"
' puts stderr "Name : libdwarf"'
' puts stderr "Version: 20130729"'
' puts stderr "Target : core2"'
"}"
)
assert help_msg in "".join(content)
@@ -383,14 +299,14 @@ def test_setup_environment(self, modulefile_content, module_configuration):
content = modulefile_content("mpileaks")
assert len([x for x in content if "setenv FOOBAR" in x]) == 1
assert len([x for x in content if "setenv FOOBAR {mpileaks}" in x]) == 1
assert len([x for x in content if 'setenv FOOBAR "mpileaks"' in x]) == 1
spec = spack.spec.Spec("mpileaks")
spec.concretize()
content = modulefile_content(str(spec["callpath"]))
assert len([x for x in content if "setenv FOOBAR" in x]) == 1
assert len([x for x in content if "setenv FOOBAR {callpath}" in x]) == 1
assert len([x for x in content if 'setenv FOOBAR "callpath"' in x]) == 1
def test_override_config(self, module_configuration, factory):
"""Tests overriding some sections of the configuration file."""
@@ -431,7 +347,7 @@ def test_extend_context(self, modulefile_content, module_configuration):
assert 'puts stderr "sentence from package"' in content
short_description = "module-whatis {This package updates the context for Tcl modulefiles.}"
short_description = 'module-whatis "This package updates the context for Tcl modulefiles."'
assert short_description in content
@pytest.mark.regression("4400")
@@ -478,16 +394,10 @@ def test_autoload_with_constraints(self, modulefile_content, module_configuratio
# Test the mpileaks that should have the autoloaded dependencies
content = modulefile_content("mpileaks ^mpich2")
assert len([x for x in content if "depends-on " in x]) == 2
assert len([x for x in content if "module load " in x]) == 2
# Test the mpileaks that should NOT have the autoloaded dependencies
content = modulefile_content("mpileaks ^mpich")
assert (
len([x for x in content if "if {![info exists ::env(LMOD_VERSION_MAJOR)]} {" in x])
== 0
)
assert len([x for x in content if "depends-on " in x]) == 0
assert len([x for x in content if "module load " in x]) == 0
def test_modules_no_arch(self, factory, module_configuration):


@@ -62,7 +62,7 @@ def source_file(tmpdir, is_relocatable):
src = tmpdir.join("relocatable.c")
shutil.copy(template_src, str(src))
else:
template_dirs = (os.path.join(spack.paths.test_path, "data", "templates"),)
template_dirs = [os.path.join(spack.paths.test_path, "data", "templates")]
env = spack.tengine.make_environment(template_dirs)
template = env.get_template("non_relocatable.c")
text = template.render({"prefix": spack.store.layout.root})
@@ -246,12 +246,12 @@ def test_set_elf_rpaths(mock_patchelf):
# the call made to patchelf itself
patchelf = mock_patchelf("echo $@")
rpaths = ["/usr/lib", "/usr/lib64", "/opt/local/lib"]
output = spack.relocate._set_elf_rpaths(str(patchelf), rpaths)
output = spack.relocate._set_elf_rpaths(patchelf, rpaths)
# Assert that the arguments of the call to patchelf are as expected
assert "--force-rpath" in output
assert "--set-rpath " + ":".join(rpaths) in output
assert str(patchelf) in output
assert patchelf in output
@skip_unless_linux
@@ -261,7 +261,7 @@ def test_set_elf_rpaths_warning(mock_patchelf):
rpaths = ["/usr/lib", "/usr/lib64", "/opt/local/lib"]
# To avoid using capfd in order to check if the warning was triggered
# here we just check that output is not set
output = spack.relocate._set_elf_rpaths(str(patchelf), rpaths)
output = spack.relocate._set_elf_rpaths(patchelf, rpaths)
assert output is None


@@ -71,7 +71,7 @@ def test_template_retrieval(self):
"""Tests the template retrieval mechanism hooked into config files"""
# Check the directories are correct
template_dirs = spack.config.get("config:template_dirs")
template_dirs = tuple([canonicalize_path(x) for x in template_dirs])
template_dirs = [canonicalize_path(x) for x in template_dirs]
assert len(template_dirs) == 3
env = tengine.make_environment(template_dirs)


@@ -441,30 +441,6 @@ def test_write_tested_status(
assert TestStatus(status) == expected
@pytest.mark.regression("37840")
def test_write_tested_status_no_repeats(
tmpdir, install_mockery_mutable_config, mock_fetch, mock_test_stage
):
"""Emulate re-running the same stand-alone tests a second time."""
s = spack.spec.Spec("trivial-smoke-test").concretized()
pkg = s.package
statuses = [TestStatus.PASSED, TestStatus.PASSED]
for i, status in enumerate(statuses):
pkg.tester.test_parts[f"test_{i}"] = status
pkg.tester.counts[status] += 1
pkg.tester.tested_file = tmpdir.join("test-log.txt")
pkg.tester.write_tested_status()
pkg.tester.write_tested_status()
# The test should NOT result in a ValueError: invalid literal for int()
# with base 10: '2\n2' (i.e., the results being appended instead of
# written to the file).
with open(pkg.tester.tested_file, "r") as f:
status = int(f.read().strip("\n"))
assert TestStatus(status) == TestStatus.PASSED
def test_check_special_outputs(tmpdir):
"""This test covers two related helper methods"""
contents = """CREATE TABLE packages (


@@ -244,7 +244,6 @@ def check_ast_roundtrip(code1, filename="internal", mode="exec"):
assert ast.dump(ast1) == ast.dump(ast2), error_msg
@pytest.mark.xfail(reason="https://github.com/spack/spack/pull/38424")
def test_core_lib_files():
"""Roundtrip source files from the Python core libs."""
test_directories = [


@@ -17,7 +17,6 @@
import spack.package_base
import spack.spec
from spack.version import (
SEMVER_REGEX,
GitVersion,
StandardVersion,
Version,
@@ -936,7 +935,7 @@ def test_inclusion_upperbound():
@pytest.mark.skipif(sys.platform == "win32", reason="Not supported on Windows (yet)")
def test_git_version_repo_attached_after_serialization(
mock_git_version_info, mock_packages, config, monkeypatch
mock_git_version_info, mock_packages, monkeypatch
):
"""Test that a GitVersion instance can be serialized and deserialized
without losing its repository reference.
@@ -955,9 +954,7 @@ def test_git_version_repo_attached_after_serialization(
@pytest.mark.skipif(sys.platform == "win32", reason="Not supported on Windows (yet)")
def test_resolved_git_version_is_shown_in_str(
mock_git_version_info, mock_packages, config, monkeypatch
):
def test_resolved_git_version_is_shown_in_str(mock_git_version_info, mock_packages, monkeypatch):
"""Test that a GitVersion from a commit without a user supplied version is printed
as <hash>=<version>, and not just <hash>."""
repo_path, _, commits = mock_git_version_info
@@ -971,31 +968,9 @@ def test_resolved_git_version_is_shown_in_str(
assert str(spec.version) == f"{commit}=1.0-git.1"
def test_unresolvable_git_versions_error(config, mock_packages):
def test_unresolvable_git_versions_error(mock_packages):
"""Test that VersionLookupError is raised when a git prop is not set on a package."""
with pytest.raises(VersionLookupError):
# The package exists, but does not have a git property set. When dereferencing
# the version, we should get VersionLookupError, not a generic AttributeError.
spack.spec.Spec(f"git-test-commit@{'a' * 40}").version.ref_version
@pytest.mark.parametrize(
"tag,expected",
[
("v100.2.3", "100.2.3"),
("v1.2.3", "1.2.3"),
("v1.2.3-pre.release+build.1", "1.2.3-pre.release+build.1"),
("v1.2.3+build.1", "1.2.3+build.1"),
("v1.2.3+build_1", None),
("v1.2.3-pre.release", "1.2.3-pre.release"),
("v1.2.3-pre_release", None),
("1.2.3", "1.2.3"),
("1.2.3.", None),
],
)
def test_semver_regex(tag, expected):
result = SEMVER_REGEX.search(tag)
if expected is None:
assert result is None
else:
assert result.group() == expected


@@ -552,8 +552,3 @@ def traverse_tree(specs, cover="nodes", deptype="all", key=id, depth_first=True)
return traverse_breadth_first_tree_nodes(None, edges)
return traverse_edges(specs, order="pre", cover=cover, deptype=deptype, key=key, depth=True)
def by_dag_hash(s: "spack.spec.Spec") -> str:
"""Used very often as a key function for traversals."""
return s.dag_hash()


@@ -340,20 +340,13 @@ def execute(self, env: MutableMapping[str, str]):
class SetEnv(NameValueModifier):
__slots__ = ("force", "raw")
__slots__ = ("force",)
def __init__(
self,
name: str,
value: str,
*,
trace: Optional[Trace] = None,
force: bool = False,
raw: bool = False,
self, name: str, value: str, *, trace: Optional[Trace] = None, force: bool = False
):
super().__init__(name, value, trace=trace)
self.force = force
self.raw = raw
def execute(self, env: MutableMapping[str, str]):
tty.debug(f"SetEnv: {self.name}={str(self.value)}", level=3)
@@ -497,16 +490,15 @@ def _trace(self) -> Optional[Trace]:
return Trace(filename=filename, lineno=lineno, context=current_context)
@system_env_normalize
def set(self, name: str, value: str, *, force: bool = False, raw: bool = False):
def set(self, name: str, value: str, *, force: bool = False):
"""Stores a request to set an environment variable.
Args:
name: name of the environment variable
value: value of the environment variable
force: if True, audit will not consider this modification a warning
raw: if True, format of value string is skipped
"""
item = SetEnv(name, value, trace=self._trace(), force=force, raw=raw)
item = SetEnv(name, value, trace=self._trace(), force=force)
self.env_modifications.append(item)
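For context, the pattern at work here is a deferred-modification object: set() only records a request, and nothing touches the environment until the modification's execute() runs. A stripped-down sketch of that pattern (simplified bodies, not the real classes):

from typing import MutableMapping

class SetEnv:
    __slots__ = ("name", "value", "force")

    def __init__(self, name: str, value: str, *, force: bool = False):
        self.name, self.value, self.force = name, value, force

    def execute(self, env: MutableMapping[str, str]) -> None:
        env[self.name] = self.value  # applied only when executed

pending = [SetEnv("FOO", "bar")]
env: MutableMapping[str, str] = {}
for mod in pending:
    mod.execute(env)
print(env)  # -> {'FOO': 'bar'}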
@system_env_normalize
@@ -765,21 +757,16 @@ def from_sourcing_file(
"PS1",
"PS2",
"ENV",
# Environment Modules or Lmod
# Environment modules v4
"LOADEDMODULES",
"_LMFILES_",
"MODULEPATH",
"MODULERCFILE",
"BASH_FUNC_ml()",
"BASH_FUNC_module()",
# Environment Modules-specific configuration
"MODULESHOME",
"BASH_FUNC__module_raw()",
r"MODULES_(.*)",
r"__MODULES_(.*)",
"MODULEPATH",
"MODULES_(.*)",
r"(\w*)_mod(quar|share)",
# Lmod-specific configuration
# Lmod configuration
r"LMOD_(.*)",
"MODULERCFILE",
]
)


@@ -4,7 +4,6 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import errno
import math
import os
import shutil
@@ -152,17 +151,18 @@ def __exit__(cm, type, value, traceback):
return WriteTransaction(self._get_lock(key), acquire=WriteContextManager)
def mtime(self, key) -> float:
"""Return modification time of cache file, or -inf if it does not exist.
def mtime(self, key):
"""Return modification time of cache file, or 0 if it does not exist.
Time is in units returned by os.stat in the mtime field, which is
platform-dependent.
"""
if not self.init_entry(key):
return -math.inf
return 0
else:
return os.stat(self.cache_path(key)).st_mtime
sinfo = os.stat(self.cache_path(key))
return sinfo.st_mtime
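A toy comparison of the two missing-file sentinels that appear in the versions above: both sort below any ordinary mtime, but only -inf also sorts below pre-epoch (negative) timestamps. This illustrates the comparison semantics only, not the motivation for the change:

import math

src_mtime = -5.0  # hypothetical pre-epoch timestamp
for missing in (0, -math.inf):
    print(missing < src_mtime)  # 0 -> False, -inf -> True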
def remove(self, key):
file = self.cache_path(key)


@@ -454,10 +454,6 @@ def visit_ClassDef(self, node):
self.fill("@")
self.dispatch(deco)
self.fill("class " + node.name)
if getattr(node, "type_params", False):
self.write("[")
interleave(lambda: self.write(", "), self.dispatch, node.type_params)
self.write("]")
with self.delimit_if("(", ")", condition=node.bases or node.keywords):
comma = False
for e in node.bases:
@@ -503,10 +499,6 @@ def __FunctionDef_helper(self, node, fill_suffix):
self.dispatch(deco)
def_str = fill_suffix + " " + node.name
self.fill(def_str)
if getattr(node, "type_params", False):
self.write("[")
interleave(lambda: self.write(", "), self.dispatch, node.type_params)
self.write("]")
with self.delimit("(", ")"):
self.dispatch(node.args)
if getattr(node, "returns", False):
@@ -1257,27 +1249,3 @@ def visit_MatchOr(self, node):
with self.require_parens(_Precedence.BOR, node):
self.set_precedence(pnext(_Precedence.BOR), *node.patterns)
interleave(lambda: self.write(" | "), self.dispatch, node.patterns)
def visit_TypeAlias(self, node):
self.fill("type ")
self.dispatch(node.name)
if node.type_params:
self.write("[")
interleave(lambda: self.write(", "), self.dispatch, node.type_params)
self.write("]")
self.write(" = ")
self.dispatch(node.value)
def visit_TypeVar(self, node):
self.write(node.name)
if node.bound:
self.write(": ")
self.dispatch(node.bound)
def visit_TypeVarTuple(self, node):
self.write("*")
self.write(node.name)
def visit_ParamSpec(self, node):
self.write("**")
self.write(node.name)


@@ -40,16 +40,11 @@
SEGMENT_REGEX = re.compile(r"(?:(?P<num>[0-9]+)|(?P<str>[a-zA-Z]+))(?P<sep>[_.-]*)")
# regular expression for semantic versioning
_VERSION_CORE = r"\d+\.\d+\.\d+"
_IDENT = r"[0-9A-Za-z-]+"
_SEPARATED_IDENT = rf"{_IDENT}(?:\.{_IDENT})*"
_PRERELEASE = rf"\-{_SEPARATED_IDENT}"
_BUILD = rf"\+{_SEPARATED_IDENT}"
_SEMVER = rf"{_VERSION_CORE}(?:{_PRERELEASE})?(?:{_BUILD})?"
# anchor on the end, so versions like v1.2.3-rc1 will match
# without the leading 'v'.
SEMVER_REGEX = re.compile(rf"{_SEMVER}$")
SEMVER_REGEX = re.compile(
".+(?P<semver>([0-9]+)[.]([0-9]+)[.]([0-9]+)"
"(?:-([0-9A-Za-z-]+(?:[.][0-9A-Za-z-]+)*))?"
"(?:[+][0-9A-Za-z-]+)?)"
)
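A quick check of how the monolithic pattern behaves when run standalone (observed behavior of the regex as written): because of the leading ".+" and the use of match(), a tag needs at least one character before the version core, so a bare "1.2.3" does not match:

import re

SEMVER_REGEX = re.compile(
    ".+(?P<semver>([0-9]+)[.]([0-9]+)[.]([0-9]+)"
    "(?:-([0-9A-Za-z-]+(?:[.][0-9A-Za-z-]+)*))?"
    "(?:[+][0-9A-Za-z-]+)?)"
)

m = SEMVER_REGEX.match("v1.2.3-rc.1")
print(m.groupdict()["semver"] if m else None)  # -> "1.2.3-rc.1"
print(SEMVER_REGEX.match("1.2.3"))             # -> None (no leading character)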
# Infinity-like versions. The order in the list implies the comparison rules
infinity_versions = ["stable", "trunk", "head", "master", "main", "develop"]
@@ -1324,10 +1319,11 @@ def lookup_ref(self, ref) -> Tuple[Optional[str], int]:
commit_to_version[tag_commit] = v
break
else:
# try to parse tag to compare versions spack does not know
match = SEMVER_REGEX.search(tag)
# try to parse tag to compare versions spack does not know
match = SEMVER_REGEX.match(tag)
if match:
commit_to_version[tag_commit] = match.group()
semver = match.groupdict()["semver"]
commit_to_version[tag_commit] = semver
ancestor_commits = []
for tag_commit in commit_to_version:


@@ -141,29 +141,12 @@ ignore_missing_imports = true
ignore_errors = true
ignore_missing_imports = true
# Spack imports a number of external packages, and they *may* require Python 3.8 or
# higher in recent versions. This can cause mypy to fail because we check for 3.7
# compatibility. We could restrict mypy to the oldest supported version (3.7),
# but then most developers could not run mypy locally, and failures would only
# surface in CI. Instead, we exclude these imported packages from mypy checking.
# pytest (which we depend on) optionally imports numpy, which requires Python 3.8 in
# recent versions. mypy still imports its .pyi file, which has positional-only
# arguments, which don't work in 3.7, which causes mypy to bail out early if you have
# numpy installed.
[[tool.mypy.overrides]]
module = [
'IPython',
'altgraph',
'attr',
'boto3',
'botocore',
'distro',
'jinja2',
'jsonschema',
'macholib',
'markupsafe',
'numpy',
'pyrsistent',
'pytest',
'ruamel.yaml',
'six',
]
module = 'numpy'
follow_imports = 'skip'
follow_imports_for_stubs = true


@@ -14,6 +14,26 @@ default:
SPACK_TARGET_PLATFORM: "linux"
SPACK_TARGET_ARCH: "x86_64_v3"
.linux_skylake:
variables:
SPACK_TARGET_PLATFORM: "linux"
SPACK_TARGET_ARCH: "skylake_avx512"
.linux_icelake:
variables:
SPACK_TARGET_PLATFORM: "linux"
SPACK_TARGET_ARCH: "icelake"
.linux_neoverse_n1:
variables:
SPACK_TARGET_PLATFORM: "linux"
SPACK_TARGET_ARCH: "neoverse_n1"
.linux_neoverse_v1:
variables:
SPACK_TARGET_PLATFORM: "linux"
SPACK_TARGET_ARCH: "neoverse_v1"
.linux_aarch64:
variables:
SPACK_TARGET_PLATFORM: "linux"
@@ -762,3 +782,100 @@ deprecated-ci-build:
needs:
- artifacts: True
job: deprecated-ci-generate
########################################
# AWS PCLUSTER
########################################
.aws-pcluster-generate-image:
image: { "name": "ghcr.io/spack/pcluster-amazonlinux-2:v2023-05-25", "entrypoint": [""] }
.aws-pcluster-generate:
before_script:
# Use gcc from local container buildcache
- - . "./share/spack/setup-env.sh"
- . /etc/profile.d/modules.sh
- spack mirror add local-cache /bootstrap/local-cache
- spack gpg trust /bootstrap/public-key
- cd "${CI_PROJECT_DIR}" && curl -sOL https://raw.githubusercontent.com/spack/spack-configs/main/AWS/parallelcluster/postinstall.sh
- sed -i -e "s/spack arch -t/echo ${SPACK_TARGET_ARCH}/g" postinstall.sh
- /bin/bash postinstall.sh -fg
- spack config --scope site add "packages:all:target:\"target=${SPACK_TARGET_ARCH}\""
after_script:
- - mv "${CI_PROJECT_DIR}/postinstall.sh" "${CI_PROJECT_DIR}/jobs_scratch_dir/"
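The sed line in the before_script above rewrites postinstall.sh so that it stops detecting the target architecture at runtime and uses the pipeline's pinned target instead. A rough Python sketch of that substitution (the target value here is a hypothetical example):

# What the sed command does to postinstall.sh, sketched with re.sub:
# every `spack arch -t` call (runtime target detection) is replaced by
# echoing the pipeline's pinned SPACK_TARGET_ARCH.
import re

script = "target=$(spack arch -t)\n"
pinned = re.sub(r"spack arch -t", "echo neoverse_v1", script)
print(pinned)  # target=$(echo neoverse_v1)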
# Icelake (one pipeline per target)
.aws-pcluster-icelake:
variables:
SPACK_CI_STACK_NAME: aws-pcluster-icelake
aws-pcluster-generate-icelake:
extends: [ ".linux_icelake", ".aws-pcluster-icelake", ".generate", ".tags-x86_64_v4", ".aws-pcluster-generate", ".aws-pcluster-generate-image" ]
aws-pcluster-build-icelake:
extends: [ ".linux_icelake", ".aws-pcluster-icelake", ".build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: aws-pcluster-generate-icelake
strategy: depend
needs:
- artifacts: True
job: aws-pcluster-generate-icelake
# Skylake_avx512 (one pipeline per target)
.aws-pcluster-skylake:
variables:
SPACK_CI_STACK_NAME: aws-pcluster-skylake
aws-pcluster-generate-skylake:
extends: [ ".linux_skylake", ".aws-pcluster-skylake", ".generate", ".tags-x86_64_v4", ".aws-pcluster-generate", ".aws-pcluster-generate-image" ]
aws-pcluster-build-skylake:
extends: [ ".linux_skylake", ".aws-pcluster-skylake", ".build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: aws-pcluster-generate-skylake
strategy: depend
needs:
- artifacts: True
job: aws-pcluster-generate-skylake
# Neoverse_n1 (one pipeline per target)
.aws-pcluster-neoverse_n1:
variables:
SPACK_CI_STACK_NAME: aws-pcluster-neoverse_n1
aws-pcluster-generate-neoverse_n1:
extends: [ ".linux_neoverse_n1", ".aws-pcluster-neoverse_n1", ".generate-aarch64", ".aws-pcluster-generate", ".aws-pcluster-generate-image" ]
aws-pcluster-build-neoverse_n1:
extends: [ ".linux_neoverse_n1", ".aws-pcluster-neoverse_n1", ".build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: aws-pcluster-generate-neoverse_n1
strategy: depend
needs:
- artifacts: True
job: aws-pcluster-generate-neoverse_n1
# Neoverse_v1 (one pipeline per target)
.aws-pcluster-neoverse_v1:
variables:
SPACK_CI_STACK_NAME: aws-pcluster-neoverse_v1
aws-pcluster-generate-neoverse_v1:
extends: [ ".linux_neoverse_v1", ".aws-pcluster-neoverse_v1", ".generate-aarch64", ".aws-pcluster-generate", ".aws-pcluster-generate-image" ]
aws-pcluster-build-neoverse_v1:
extends: [ ".linux_neoverse_v1", ".aws-pcluster-neoverse_v1", ".build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: aws-pcluster-generate-neoverse_v1
strategy: depend
needs:
- artifacts: True
job: aws-pcluster-generate-neoverse_v1


@@ -0,0 +1,11 @@
ci:
pipeline-gen:
- any-job:
variables:
SPACK_TARGET_ARCH: icelake
- build-job:
before_script:
- - curl -LfsS "https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.x86_64-linux-gnu.tar.gz" -o gmake.tar.gz
- printf "fef1f59e56d2d11e6d700ba22d3444b6e583c663d6883fd0a4f63ab8bd280f0f gmake.tar.gz" | sha256sum --check --strict --quiet
- tar -xzf gmake.tar.gz -C /usr bin/make 2> /dev/null
tags: ["x86_64_v4"]
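The before_script above fetches a static GNU make build and verifies it against a pinned digest before extracting a single file. A hedged Python equivalent of the same fetch-verify-extract sequence; the URL and sha256 are copied from the job, and writing to /usr normally requires root:

import hashlib
import tarfile
import urllib.request

URL = ("https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/"
       "download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.x86_64-linux-gnu.tar.gz")
SHA256 = "fef1f59e56d2d11e6d700ba22d3444b6e583c663d6883fd0a4f63ab8bd280f0f"

urllib.request.urlretrieve(URL, "gmake.tar.gz")
digest = hashlib.sha256(open("gmake.tar.gz", "rb").read()).hexdigest()
if digest != SHA256:
    raise SystemExit(f"checksum mismatch: {digest}")
with tarfile.open("gmake.tar.gz") as tar:
    # Mirrors `tar -xzf gmake.tar.gz -C /usr bin/make`.
    tar.extract("bin/make", path="/usr")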


@@ -0,0 +1,7 @@
ci:
pipeline-gen:
- any-job:
variables:
SPACK_TARGET_ARCH: neoverse_n1
- build-job:
tags: ["aarch64", "graviton2"]


@@ -0,0 +1,7 @@
ci:
pipeline-gen:
- any-job:
variables:
SPACK_TARGET_ARCH: neoverse_v1
- build-job:
tags: ["aarch64", "graviton3"]


@@ -0,0 +1,11 @@
ci:
pipeline-gen:
- any-job:
variables:
SPACK_TARGET_ARCH: skylake_avx512
- build-job:
before_script:
- - curl -LfsS "https://github.com/JuliaBinaryWrappers/GNUMake_jll.jl/releases/download/GNUMake-v4.3.0+1/GNUMake.v4.3.0.x86_64-linux-gnu.tar.gz" -o gmake.tar.gz
- printf "fef1f59e56d2d11e6d700ba22d3444b6e583c663d6883fd0a4f63ab8bd280f0f gmake.tar.gz" | sha256sum --check --strict --quiet
- tar -xzf gmake.tar.gz -C /usr bin/make 2> /dev/null
tags: ["x86_64_v4"]


@@ -225,7 +225,7 @@ spack:
- - $compiler
- - $target
mirrors: { "mirror": "s3://spack-binaries/releases/v0.20/aws-ahug-aarch64" }
mirrors: { "mirror": "s3://spack-binaries/develop/aws-ahug-aarch64" }
ci:
pipeline-gen:


@@ -222,7 +222,7 @@ spack:
- - $compiler
- - $target
mirrors: { "mirror": "s3://spack-binaries/releases/v0.20/aws-ahug" }
mirrors: { "mirror": "s3://spack-binaries/develop/aws-ahug" }
ci:
pipeline-gen:


@@ -132,7 +132,7 @@ spack:
- - $target
mirrors: { "mirror": "s3://spack-binaries/releases/v0.20/aws-isc-aarch64" }
mirrors: { "mirror": "s3://spack-binaries/develop/aws-isc-aarch64" }
ci:
pipeline-gen:


@@ -143,7 +143,7 @@ spack:
- - $target
mirrors: { "mirror": "s3://spack-binaries/releases/v0.20/aws-isc" }
mirrors: { "mirror": "s3://spack-binaries/develop/aws-isc" }
ci:
pipeline-gen:


@@ -0,0 +1,55 @@
spack:
view: false
definitions:
- compiler_specs:
- gcc
- gettext
- compiler_target:
- '%gcc@7.3.1 target=x86_64_v3'
- optimized_configs:
# - gromacs
- lammps
# - mpas-model
- openfoam
# - palace
# - py-devito
# - quantum-espresso
# - wrf
- optimized_libs:
- mpich
- openmpi
specs:
- matrix:
- - $compiler_specs
- - $compiler_target
- $optimized_configs
# - $optimized_libs
mirrors: { "mirror": "s3://spack-binaries/develop/aws-pcluster-icelake" }
ci:
pipeline-gen:
- build-job:
image: { "name": "ghcr.io/spack/pcluster-amazonlinux-2:v2023-05-25", "entrypoint": [""] }
before_script:
- - . "./share/spack/setup-env.sh"
- . /etc/profile.d/modules.sh
- spack --version
- spack arch
# Use gcc from local container buildcache
- - spack mirror add local-cache /bootstrap/local-cache
- spack gpg trust /bootstrap/public-key
- - /bin/bash "${SPACK_ARTIFACTS_ROOT}/postinstall.sh" -fg
- spack config --scope site add "packages:all:target:\"target=${SPACK_TARGET_ARCH}\""
- signing-job:
before_script:
# Do not distribute Intel & ARM binaries
- - for i in $(aws s3 ls --recursive ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache/ | grep intel-oneapi | awk '{print $4}' | sed -e 's?^.*build_cache/??g'); do aws s3 rm ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache/$i; done
- for i in $(aws s3 ls --recursive ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache/ | grep armpl | awk '{print $4}' | sed -e 's?^.*build_cache/??g'); do aws s3 rm ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache/$i; done
cdash:
build-group: AWS Packages
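The signing-job loop above shells out to the AWS CLI to delete any intel-oneapi or armpl tarballs from the build cache before the mirror is published. A minimal boto3 sketch of the same pruning; it assumes AWS credentials and delete permission come from the environment, with the bucket and prefix read off the mirror line above:

import boto3

s3 = boto3.client("s3")
bucket = "spack-binaries"
prefix = "develop/aws-pcluster-icelake/build_cache/"

# Walk every object under build_cache/ and drop the proprietary binaries.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        if "intel-oneapi" in obj["Key"] or "armpl" in obj["Key"]:
            s3.delete_object(Bucket=bucket, Key=obj["Key"])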


@@ -0,0 +1,58 @@
spack:
view: false
definitions:
- compiler_specs:
- gcc
- gettext
- compiler_target:
- '%gcc@7.3.1 target=aarch64'
- optimized_configs:
- gromacs
# - lammps
# - mpas-model
- openfoam
- palace
# - py-devito
# - quantum-espresso
# - wrf
- optimized_libs:
- mpich
- openmpi
specs:
- matrix:
- - $compiler_specs
- - $compiler_target
- $optimized_configs
- $optimized_libs
mirrors: { "mirror": "s3://spack-binaries/develop/aws-pcluster-neoverse_n1" }
ci:
pipeline-gen:
- build-job:
image: { "name": "ghcr.io/spack/pcluster-amazonlinux-2:v2023-05-25", "entrypoint": [""] }
tags: ["aarch64"]
before_script:
- - . "./share/spack/setup-env.sh"
- . /etc/profile.d/modules.sh
- spack --version
- spack arch
# Use gcc from local container buildcache
- - spack mirror add local-cache /bootstrap/local-cache
- spack gpg trust /bootstrap/public-key
- - /bin/bash "${SPACK_ARTIFACTS_ROOT}/postinstall.sh" -fg
- spack config --scope site add "packages:all:target:\"target=${SPACK_TARGET_ARCH}\""
- signing-job:
before_script:
# Do not distribute Intel & ARM binaries
- - for i in $(aws s3 ls --recursive ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache/ | grep intel-oneapi | awk '{print $4}' | sed -e 's?^.*build_cache/??g'); do aws s3 rm ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache/$i; done
- for i in $(aws s3 ls --recursive ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache/ | grep armpl | awk '{print $4}' | sed -e 's?^.*build_cache/??g'); do aws s3 rm ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache/$i; done
cdash:
build-group: AWS Packages
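Each of these stacks builds its spec list from a matrix: roughly the Cartesian product of the referenced sub-lists, so every entry of $compiler_specs gets every $compiler_target constraint appended. Spack performs the real constraint merging during concretization; this sketch only shows the shape of the expansion, with values copied from this file:

from itertools import product

compiler_specs = ["gcc", "gettext"]
compiler_target = ["%gcc@7.3.1 target=aarch64"]
print([" ".join(combo) for combo in product(compiler_specs, compiler_target)])
# ['gcc %gcc@7.3.1 target=aarch64', 'gettext %gcc@7.3.1 target=aarch64']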


@@ -0,0 +1,58 @@
spack:
view: false
definitions:
- compiler_specs:
- gcc
- gettext
- compiler_target:
- '%gcc@7.3.1 target=aarch64'
- optimized_configs:
- gromacs
# - lammps
# - mpas-model
- openfoam
- palace
# - py-devito
# - quantum-espresso
# - wrf
- optimized_libs:
- mpich
- openmpi
specs:
- matrix:
- - $compiler_specs
- - $compiler_target
- $optimized_configs
- $optimized_libs
mirrors: { "mirror": "s3://spack-binaries/develop/aws-pcluster-neoverse_v1" }
ci:
pipeline-gen:
- build-job:
image: { "name": "ghcr.io/spack/pcluster-amazonlinux-2:v2023-05-25", "entrypoint": [""] }
tags: ["aarch64"]
before_script:
- - . "./share/spack/setup-env.sh"
- . /etc/profile.d/modules.sh
- spack --version
- spack arch
# Use gcc from local container buildcache
- - spack mirror add local-cache /bootstrap/local-cache
- spack gpg trust /bootstrap/public-key
- - /bin/bash "${SPACK_ARTIFACTS_ROOT}/postinstall.sh" -fg
- spack config --scope site add "packages:all:target:\"target=${SPACK_TARGET_ARCH}\""
- signing-job:
before_script:
# Do not distribute Intel & ARM binaries
- - for i in $(aws s3 ls --recursive ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache/ | grep intel-oneapi | awk '{print $4}' | sed -e 's?^.*build_cache/??g'); do aws s3 rm ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache/$i; done
- for i in $(aws s3 ls --recursive ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache/ | grep armpl | awk '{print $4}' | sed -e 's?^.*build_cache/??g'); do aws s3 rm ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache/$i; done
cdash:
build-group: AWS Packages


@@ -0,0 +1,55 @@
spack:
view: false
definitions:
- compiler_specs:
- gcc
- gettext
- compiler_target:
- '%gcc@7.3.1 target=x86_64_v3'
- optimized_configs:
# - gromacs
- lammps
# - mpas-model
- openfoam
# - palace
# - py-devito
# - quantum-espresso
# - wrf
- optimized_libs:
- mpich
- openmpi
specs:
- matrix:
- - $compiler_specs
- - $compiler_target
- $optimized_configs
# - $optimized_libs
mirrors: { "mirror": "s3://spack-binaries/develop/aws-pcluster-skylake" }
ci:
pipeline-gen:
- build-job:
image: { "name": "ghcr.io/spack/pcluster-amazonlinux-2:v2023-05-25", "entrypoint": [""] }
before_script:
- - . "./share/spack/setup-env.sh"
- . /etc/profile.d/modules.sh
- spack --version
- spack arch
# Use gcc from local container buildcache
- - spack mirror add local-cache /bootstrap/local-cache
- spack gpg trust /bootstrap/public-key
- - /bin/bash "${SPACK_ARTIFACTS_ROOT}/postinstall.sh" -fg
- spack config --scope site add "packages:all:target:\"target=${SPACK_TARGET_ARCH}\""
- signing-job:
before_script:
# Do not distribute Intel & ARM binaries
- - for i in $(aws s3 ls --recursive ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache/ | grep intel-oneapi | awk '{print $4}' | sed -e 's?^.*build_cache/??g'); do aws s3 rm ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache/$i; done
- for i in $(aws s3 ls --recursive ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache/ | grep armpl | awk '{print $4}' | sed -e 's?^.*build_cache/??g'); do aws s3 rm ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache/$i; done
cdash:
build-group: AWS Packages


@@ -21,7 +21,7 @@ spack:
- - $default_specs
- - $arch
mirrors: { "mirror": "s3://spack-binaries/releases/v0.20/build_systems" }
mirrors: { "mirror": "s3://spack-binaries/develop/build_systems" }
cdash:
build-group: Build Systems


@@ -64,7 +64,7 @@ spack:
- [$sdk_base_spec]
- [$^visit_specs]
mirrors: { "mirror": "s3://spack-binaries/releases/v0.20/data-vis-sdk" }
mirrors: { "mirror": "s3://spack-binaries/develop/data-vis-sdk" }
ci:
pipeline-gen:


@@ -21,7 +21,7 @@ spack:
- readline
mirrors:
mirror: s3://spack-binaries/releases/v0.20/deprecated
mirror: s3://spack-binaries/develop/deprecated
gitlab-ci:
broken-tests-packages:
- gptune


@@ -24,7 +24,7 @@ spack:
- - $easy_specs
- - $arch
mirrors: { "mirror": "s3://spack-binaries/releases/v0.20/e4s-mac" }
mirrors: { "mirror": "s3://spack-binaries/develop/e4s-mac" }
ci:
pipeline-gen:


@@ -263,7 +263,7 @@ spack:
# SKIPPED
# - flecsi # dependency pfunit marks oneapi as an unsupported compiler
mirrors: { "mirror": "s3://spack-binaries/releases/v0.20/e4s-oneapi" }
mirrors: { "mirror": "s3://spack-binaries/develop/e4s-oneapi" }
ci:
pipeline-gen:


@@ -207,7 +207,7 @@ spack:
# bricks: VSBrick-7pt.py-Scalar-8x8x8-1:30:3: error: 'vfloat512' was not declared in this scope
mirrors: { "mirror": "s3://spack-binaries/releases/v0.20/e4s-power" }
mirrors: { "mirror": "s3://spack-binaries/develop/e4s-power" }
ci:
pipeline-gen:


@@ -70,8 +70,9 @@ spack:
- charliecloud
- conduit
- datatransferkit
- dealii
- dyninst
- ecp-data-vis-sdk ~cuda ~rocm +adios2 +ascent +cinema +darshan +faodel +hdf5 +paraview +pnetcdf +sz +unifyfs +veloc +visit +vtkm +zfp
- ecp-data-vis-sdk ~cuda ~rocm +adios2 +ascent +cinema +darshan +faodel +hdf5 +paraview +pnetcdf +sz +unifyfs +veloc ~visit +vtkm +zfp ^hdf5@1.14
- exaworks
- flecsi
- flit
@@ -165,7 +166,7 @@ spack:
- chai ~benchmarks ~tests +cuda ^umpire ~shared
- cusz +cuda
- dealii +cuda
- ecp-data-vis-sdk +cuda +adios2 +hdf5 +paraview +vtkm +zfp # Removing ascent because Dray is hung in CI. +ascent
- ecp-data-vis-sdk +cuda ~ascent +adios2 +hdf5 +paraview +sz +vtkm +zfp ^hdf5@1.14 # Removing ascent because RAJA build failure
- flecsi +cuda
- flux-core +cuda
- ginkgo +cuda
@@ -199,7 +200,7 @@ spack:
- cabana +rocm
- caliper +rocm
- chai ~benchmarks +rocm
- ecp-data-vis-sdk +paraview +vtkm +rocm
- ecp-data-vis-sdk +adios2 +hdf5 +paraview +pnetcdf +sz +vtkm +zfp +rocm ^hdf5@1.14 # Excludes ascent for now due to C++ standard issues
- gasnet +rocm
- ginkgo +rocm
- heffte +rocm
@@ -232,7 +233,7 @@ spack:
# CUDA failures
#- parsec +cuda # parsec/mca/device/cuda/transfer.c:168: multiple definition of `parsec_CUDA_d2h_max_flows';
mirrors: { "mirror": "s3://spack-binaries/releases/v0.20/e4s" }
mirrors: { "mirror": "s3://spack-binaries/develop/e4s" }
ci:
pipeline-gen:
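Several of the spec edits above pin a dependency inline, e.g. "^hdf5@1.14" on the ecp-data-vis-sdk lines. A small sketch of how such a spec string decomposes, meant to run inside `spack python` since spack.spec is spack's own module (output shapes are approximate):

from spack.spec import Spec

# '+x' / '~x' toggle variants; '^hdf5@1.14' constrains the hdf5 dependency.
s = Spec("ecp-data-vis-sdk ~cuda ~rocm +adios2 +hdf5 ^hdf5@1.14")
print(s.name)      # ecp-data-vis-sdk
print(s.variants)  # the four variant settings parsed from the string
print(s["hdf5"])   # the pinned dependency: hdf5@1.14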


@@ -51,7 +51,7 @@ spack:
# FAILURES
# - kokkos +wrapper +cuda cuda_arch=80 ^cuda@12.0.0 # https://github.com/spack/spack/issues/35378
mirrors: { "mirror": "s3://spack-binaries/releases/v0.20/gpu-tests" }
mirrors: { "mirror": "s3://spack-binaries/develop/gpu-tests" }
ci:
pipeline-gen:


@@ -77,7 +77,7 @@ spack:
- xgboost
mirrors:
mirror: s3://spack-binaries/releases/v0.20/ml-linux-x86_64-cpu
mirror: s3://spack-binaries/develop/ml-linux-x86_64-cpu
ci:
pipeline-gen:


@@ -80,7 +80,7 @@ spack:
- xgboost
mirrors:
mirror: s3://spack-binaries/releases/v0.20/ml-linux-x86_64-cuda
mirror: s3://spack-binaries/develop/ml-linux-x86_64-cuda
ci:
pipeline-gen:


@@ -83,7 +83,7 @@ spack:
- xgboost
mirrors:
mirror: s3://spack-binaries/releases/v0.20/ml-linux-x86_64-rocm
mirror: s3://spack-binaries/develop/ml-linux-x86_64-rocm
ci:
pipeline-gen:


@@ -38,7 +38,7 @@ spack:
- - $compiler
- - $target
mirrors: { "mirror": "s3://spack-binaries/releases/v0.20/radiuss-aws-aarch64" }
mirrors: { "mirror": "s3://spack-binaries/develop/radiuss-aws-aarch64" }
ci:
pipeline-gen:


@@ -44,7 +44,7 @@ spack:
- - $compiler
- - $target
mirrors: { "mirror": "s3://spack-binaries/releases/v0.20/radiuss-aws" }
mirrors: { "mirror": "s3://spack-binaries/develop/radiuss-aws" }
ci:
pipeline-gen:


@@ -41,7 +41,7 @@ spack:
- zfp
mirrors:
mirror: "s3://spack-binaries/releases/v0.20/radiuss"
mirror: "s3://spack-binaries/develop/radiuss"
specs:
- matrix:


@@ -50,7 +50,7 @@ spack:
- $gcc_spack_built_packages
mirrors:
mirror: s3://spack-binaries/releases/v0.20/tutorial
mirror: s3://spack-binaries/develop/tutorial
ci:
pipeline-gen:
- build-job:


@@ -1060,7 +1060,7 @@ _spack_external_list() {
}
_spack_external_read_cray_manifest() {
SPACK_COMPREPLY="-h --help --file --directory --ignore-default-dir --dry-run --fail-on-error"
SPACK_COMPREPLY="-h --help --file --directory --dry-run --fail-on-error"
}
_spack_fetch() {


@@ -37,7 +37,7 @@ RUN find -L {{ paths.view }}/* -type f -exec readlink -f '{}' \; | \
# Modifications to the environment that are necessary to run
RUN cd {{ paths.environment }} && \
spack env activate --sh -d . > activate.sh
spack env activate --sh -d . >> /etc/profile.d/z10_spack_environment.sh
{% if extra_instructions.build %}
{{ extra_instructions.build }}
@@ -53,13 +53,7 @@ COPY --from=builder {{ paths.environment }} {{ paths.environment }}
COPY --from=builder {{ paths.store }} {{ paths.store }}
COPY --from=builder {{ paths.hidden_view }} {{ paths.hidden_view }}
COPY --from=builder {{ paths.view }} {{ paths.view }}
RUN { \
echo '#!/bin/sh' \
&& echo '.' {{ paths.environment }}/activate.sh \
&& echo 'exec "$@"'; \
} > /entrypoint.sh \
&& chmod a+x /entrypoint.sh
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
{% block final_stage %}
@@ -76,6 +70,6 @@ RUN {% if os_package_update %}{{ os_packages_final.update }} \
{% for label, value in labels.items() %}
LABEL "{{ label }}"="{{ value }}"
{% endfor %}
ENTRYPOINT [ "/entrypoint.sh" ]
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l", "-c", "$*", "--" ]
CMD [ "/bin/bash" ]
{% endif %}


@@ -1,7 +1,7 @@
{% extends "container/bootstrap-base.dockerfile" %}
{% block install_os_packages %}
RUN dnf update -y \
&& dnf install -y \
RUN yum update -y \
&& yum install -y \
bzip2 \
curl \
file \
@@ -23,6 +23,6 @@ RUN dnf update -y \
unzip \
zstd \
&& pip3 install boto3 \
&& rm -rf /var/cache/dnf \
&& dnf clean all
&& rm -rf /var/cache/yum \
&& yum clean all
{% endblock %}


@@ -1,9 +1,9 @@
{% extends "container/bootstrap-base.dockerfile" %}
{% block install_os_packages %}
RUN dnf update -y \
&& dnf install -y epel-release \
&& dnf update -y \
&& dnf --enablerepo epel install -y \
RUN yum update -y \
&& yum install -y epel-release \
&& yum update -y \
&& yum --enablerepo epel install -y \
bzip2 \
curl-minimal \
file \
@@ -25,6 +25,6 @@ RUN dnf update -y \
unzip \
zstd \
&& pip3 install boto3 \
&& rm -rf /var/cache/dnf \
&& dnf clean all
&& rm -rf /var/cache/yum \
&& yum clean all
{% endblock %}

Some files were not shown because too many files have changed in this diff.