Set the minimum C++ standard for LBANN, Hydrogen, and DiHydrogen to
C++17. The minimum C++ standard for Aluminum is C++14. Added new
versions for Aluminum, Hydrogen, and DiHydrogen. Added support for
high-performance linkers (gold and lld) in the LBANN recipe. Added
variants to LBANN for enabling embedded Python support independently
of the Python front end.
* py-fenics-instant: new package for legacy fenics 2016 and 2017 versions
* Update var/spack/repos/builtin/packages/py-fenics-instant/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The signature for configure_args in the template for new
RPackage packages was incorrect (different than what is
defined and used in lib/spack/spack/build_systems/r.py)
See issue #21774
Keep spack.store.store and spack.store.db consistent in unit tests
* Remove calls to monkeypatch for spack.store.store and spack.store.db:
tests that used these called one or the other, which led to
inconsistencies (the tests passed regardless but were fragile as a
result)
* Fixtures making use of monkeypatch with mock_store now use the
updated use_store function, which sets store.store and store.db
consistently
* subprocess_context.TestState now serializes and restores
spack.store.store (without the monkeypatch changes this
would have created inconsistencies)
Since signals are fundamentally racy, we can't bound the amount of time
that the `test_foreground_background_output` test will take to get to
'on'; we can only observe that it transitions to 'on'. So instead of
using an arbitrary limit, just adjust the test to allow either 'on' or
'off' followed by 'on'.
This should eliminate the spurious errors we see in CI.
Follow-up to #17110
### Before
```bash
CC=/Users/Adam/spack/lib/spack/env/clang/clang; export CC
SPACK_CC=/usr/bin/clang; export SPACK_CC
PATH=...:/Users/Adam/spack/lib/spack/env/apple-clang:/Users/Adam/spack/lib/spack/env/case-insensitive:/Users/Adam/spack/lib/spack/env:...; export PATH
```
### After
```bash
CC=/Users/Adam/spack/lib/spack/env/clang/clang; export CC
SPACK_CC=/usr/bin/clang; export SPACK_CC
PATH=...:/Users/Adam/spack/lib/spack/env/clang:/Users/Adam/spack/lib/spack/env/case-insensitive:/Users/Adam/spack/lib/spack/env:...; export PATH
```
`CC` and `SPACK_CC` were being set correctly, but `PATH` was using the name of the compiler `apple-clang` instead of `clang`. For most packages, since `CC` was set correctly, nothing broke. But for packages using `Makefiles` that set `CC` based on `which clang`, it was using the system compilers instead of the compiler wrappers. Discovered when working on `py-xgboost@0.90`.
An alternative fix would be to copy the symlinks in `env/clang` to `env/apple-clang`. Let me know if you think there's a better way to do this, or to test this.
The dependencies needed a little cleanup, as several dependencies are
only needed for the +X variant. This PR consolidates all of the
dependencies that actually require +X and explicitly disables them when
~X to prevent accidentally picking up system libraries.
- modified the description of the +X variant
- arranged dependencies to group them
- added missing dependency on xz
- removed unneeded dependencies
- freetype
- glib
- set dependencies when +X
- cairo
- jpeg
- libpng
- libtiff
- tcl/tk
- R uses tcl/tk together, so only tk needs to be depended on, and only
when +X
- moved tcl/tk resources to with/without-x test
- added explicit with/without settings for
- cairo
- jpeglib
- libpng
- libtiff
- tcltk
The fixture was introduced in #19690, perhaps accidentally.
It's not used in unit tests, and though it should be
mutable it seems to be an exact copy of its immutable version.
Before this change, in pipeline environments where runners do not have access
to persistent shared file-system storage, the only way to pass buildcaches to
dependents in later stages was by using the "enable-artifacts-buildcache" flag
in the gitlab-ci section of the spack.yaml. This change supports a second
mechanism, named "temporary-storage-url-prefix", which can be provided instead
of the "enable-artifacts-buildcache" feature, but the two cannot be used at the
same time. If this prefix is provided (only "file://" and "s3://" urls are
supported), the gitlab "CI_PIPELINE_ID" will be appended to it to create a url
for a mirror where pipeline jobs will write buildcache entries for use by jobs
in subsequent stages. If this prefix is provided, a cleanup job will be
generated to run after all the rebuild jobs have finished that will delete the
contents of the temporary mirror. To support this behavior a new mirror
sub-command has been added: "spack mirror destroy" which can take either a
mirror name or url.
This change also fixes a bug in the generation of the "needs" list for each
job. Each job's "needs" list is supposed to contain only direct dependencies
for scheduling purposes, unless "enable-artifacts-buildcache" is specified;
only in that case are the needs lists supposed to contain all transitive
dependencies. This change fixes a bug that caused the needs lists to always
contain all transitive dependencies, regardless of whether or not
"enable-artifacts-buildcache" was specified.
* py-typing: new version, avoid issues with newer versions of python
https://pypi.org/project/typing/
"For package maintainers, it is preferred to use
typing;python_version<"3.5" if your package requires it to support
earlier Python versions."
* update conflict version / more message detail
* change the depends_on, leave a comment suggesting correct usage
The actual, documented minimum version of the cfitsio dependency,
v3.181, is now neither available for (easy) download from NASA, nor as
a Spack package. No upper bound on version number exists (at this time).
Pipelines: DAG pruning
During the pipeline generation staging process we check each spec against all configured mirrors to determine whether it is up to date on any of the mirrors. By default, and with the --prune-dag argument to "spack ci generate", any spec already up to date on at least one remote mirror is omitted from the generated pipeline. To generate jobs for up to date specs instead of omitting them, use the --no-prune-dag argument. To speed up the pipeline generation process, pass the --check-index-only argument. This will cause spack to check only remote buildcache indices and avoid directly fetching any spec.yaml files from mirrors. The drawback is that if the remote buildcache index is out of date, spec rebuild jobs may be scheduled unnecessarily.
This change removes the final-stage-rebuild-index block from gitlab-ci section of spack.yaml. Now rebuilding the buildcache index of the mirror specified in the spack.yaml is the default, unless "rebuild-index: False" is set. Spack assigns the generated rebuild-index job runner attributes from an optional new "service-job-attributes" block, which is also used as the source of runner attributes for another generated non-build job, a no-op job, which spack generates to avoid gitlab errors when DAG pruning results in empty pipelines.
Add `manual_download = True` to packages that need to do manual
downloads but do not have the `manual_download` attribute set. This
provides a message when installing these packages rather than a generic
fetch error.
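A minimal sketch of the pattern, using a hypothetical package (name, URL, and checksum are illustrative):

```python
import os

from spack import *


class MyLicensedCode(Package):  # hypothetical package, for illustration
    """Code whose sources must be downloaded by hand."""

    homepage = "https://example.com/my-licensed-code"
    url = "file://{0}/my-licensed-code-1.0.tar.gz".format(os.getcwd())

    # With this set, a missing archive produces download instructions
    # instead of a generic fetch error.
    manual_download = True

    version('1.0', sha256='0' * 64)  # placeholder checksum
```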
Add versions 2020.12 and 2021.01. The viewer and trace viewer are now
integrated into a single program and one tarball. Now available on
arm/aarch64 and now uses Java 11.
Update some things in hpctoolkit to prepare for a 2021.02.x release:
1. allow binutils to be built with +nls.
2. require libmonitor to be built with +dlopen.
3. allow rocm in more than just develop branch.
4. remove some conflicting setenv's in hpctoolkit module.
The SPACK_PYTHON environment variable can be set to a python interpreter to be
used by the spack command. This allows the spack command itself to use a
consistent and separate interpreter from whatever python might be used for package
building.
* add new flag when compiling mumps with %gcc@10.
* Fix style
* Try to fix formatting
* Use flag_handler approach suggested by @michaelkuhn
in the PR review.
* Delete former approach
* Another style issue
* Add another space
* More fixes
We still need mesa18 for some of our builds.
Those builds require python@2; normal mesa only works with
python@3.
* Remove the deprecation tag
* Add myself as a maintainer: I volunteer to help with this
package for the time being.
* There is only one version, no need to prefer it.
Modifications:
- Make use of SpackCommand objects wherever possible
- Deduplicated code when possible
- Moved cleaning of mirrors to fixtures
- Ensure mock configuration has a clear initialization order
* fixed install with ver 3 and python 3.0
* replaced @3 with @2.999
* [py-pyspark] added version requirements for py-py4j
* [py-pyspark] all versions require at least version 2.7 of python
* [py-pyspark] fixed comma syntax
Co-authored-by: Sid Pendelberry <sid@rit.edu>
The GROMACS package embeds references to its build tool chain.
Use the Spack utilities to make sure these references are correct
outside of the isolated Spack build environment.
`query()` calls `datetime.datetime.fromtimestamp` regardless of whether a
date query is being done. Guard this with an if statement to avoid the
unnecessary work.
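A sketch of the guard, with illustrative names (the real code lives in Spack's database module):

```python
import datetime


def in_date_window(rec, start_date=None, end_date=None):
    # No date query: skip the fromtimestamp() conversion entirely.
    if start_date is None and end_date is None:
        return True
    when = datetime.datetime.fromtimestamp(rec.installation_time)
    if start_date is not None and when < start_date:
        return False
    if end_date is not None and when > end_date:
        return False
    return True
```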
Constructing a spec from a name instead of setting name directly forces
from_node_dict to call Spec.parse(), which is slow. Avoid this by using a
zero-arg constructor and setting name directly.
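Roughly, the change amounts to this (package name illustrative):

```python
from spack.spec import Spec

# Spec("zlib") would go through Spec.parse(); a zero-arg Spec with the
# name assigned directly skips the parser entirely.
spec = Spec()
spec.name = "zlib"
```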
cmake was added as a runtime dependency to meson in #20449. This
introduces an unnecessary implicit cmake dependency, which increases
build time for meson considerably. cmake is only one of many methods for
finding dependencies (pkg-config, qmake etc.), which are also not
runtime dependencies of meson. Add cmake as a build dependency to mesa
instead.
This solves a few FIXMEs in conftest.py, where
we were manipulating globals and seeing side
effects prior to registering fixtures.
This commit solves the FIXMEs, but introduces
a performance regression on tests that may need
to be investigated
The method is now called "use_repositories" and
makes it clear in the docstring that it accepts
as arguments either Repo objects or paths.
Since there was some duplication between this
contextmanager and "use_repo" in the testing framework,
remove the latter and use spack.repo.use_repositories
across the entire code base.
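Usage now looks roughly like this (path is illustrative):

```python
import spack.repo

# use_repositories accepts Repo objects or filesystem paths and swaps
# them in for the duration of the block.
with spack.repo.use_repositories('/path/to/mock/repo'):
    pass  # code here sees only the repositories passed above
```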
Make a few adjustments to MockPackageMultiRepo, since its
docstring stated that it was supposed to mock spack.repo.Repo
while it was instead mocking spack.repo.RepoPath.
Some compilers, such as the NV compilers, do not recognize -isystem
dir when specified without a space.
Works: -isystem ../include
Does not work: -isystem../include
This PR updates the compiler wrapper to include the space with -isystem.
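The wrapper itself is shell code; the normalization it performs is, in effect, the following (illustrative Python, not the actual implementation):

```python
def normalize_isystem(args):
    """Split a fused '-isystem<dir>' into '-isystem <dir>'."""
    out = []
    for arg in args:
        if arg.startswith('-isystem') and arg != '-isystem':
            out.extend(['-isystem', arg[len('-isystem'):]])
        else:
            out.append(arg)
    return out


print(normalize_isystem(['-isystem../include']))  # ['-isystem', '../include']
```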
Environment views fail when the tmpdir used for view generation is
on a separate mount from the install_tree because the files cannot
be symlinked between the two. The fix is to use an alternative
tmpdir located alongside the view.
* [py-moviepy] created template
* [py-moviepy] added dependencies
* [py-moviepy] removed fixmes, added homepage and description
* [py-moviepy] updated to pypi and updated checksum
* [py-moviepy] added setuptools dependency
* [py-moviepy] more specific version limit
* [py-moviepy] added checksum for version 1.0.1
* [py-moviepy] numpy restriction not necessary here
* 3DTK: add new package
* Add missing opencv variants
Co-authored-by: Michael Kuhn <michael@ikkoku.de>
* Fix cmake version req, add eigen dep
* Prefer trunk version
* Tell 3dtk where to find eigen
* Fix installation
* Fix installation
Co-authored-by: Michael Kuhn <michael@ikkoku.de>
This PR fixes the case where groff fails to build if the spack install
path is really long. There are a couple of perl scripts that get built,
and used, during the build phase that will fail when the perl
interpreter line is too long. Filtering the lines will not work because
the files do not exist after the configure phase and patching after the
build phase is too late. This PR runs the scripts explicitly with the
spack perl via the $(PERL) variable in the call to the script.
* Procedure to deprecate old versions of software
* Add documentation
* Fix bug in logic
* Update tab completion
* Deprecate legacy packages
* Deprecate old mxnet as well
* More explicit docs
* py-dvc: new package
* Update var/spack/repos/builtin/packages/py-dvc/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-dvc: add version dependency for py-networkx
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Clarify relaxed double precision option
This is only intended for use on the Fujitsu PRIMEHPC platform
* Fix typo
* Shorten line to keep linter happy
Added conflict for macsio@1.1~mpi after investigating the source code. As
of the 1.1 tag, macsio does not properly guard out MPI commands. This is
verified as corrected in @develop.
* mxnet: convert to CMakePackage
* Package isn't installed yet, can't find libs
* Fix bug with GCC 8+ and CUDA 10 on PowerPC
* Add space
* Add patch to fix cmake cuda flags
* Space no longer needed
* Add patch to fix OpenBLAS linking
* Add missing CMake flag
* Fix env set, default to Distribution
* Add new version, patch
* added py-python-benedict recipe
* Update var/spack/repos/builtin/packages/py-python-benedict/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
+ Provide optional variant `pythontools` (default False) that adds a run-time dependency on
`py-matplotlib`.
+ Latest versions require `cmake@3.18:` to support cuda features.
+ Enable a cmake option that forcibly disables qt support. Previously, draco would enable qt
support if it was available in the local build environment (outside of spack).
* graphviz: Remove ghostscript requirement when ~ghostscript
* Add doc variant and patch for 2.44.1
* Patch does not apply
* Update graphviz versions, using archives rather than git hash
* Complete implementation of doc variant
* Fix typo
This commit adds an option to the `external find`
command that allows it to search by tags. In this
way, groups of executables with common purposes can
be grouped under a single name, and a simple command
can be used to detect all of them.
As an example, introduce the 'build-tools' tag to
search for common development tools on a system.
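A sketch of how a package might opt into such a group (package name, URL, detection pattern, and checksum are hypothetical):

```python
from spack import *


class MyBuildTool(Package):  # hypothetical package
    """Sketch: tag a package so grouped detection can find it."""

    homepage = "https://example.com"
    url = "https://example.com/my-build-tool-1.0.tar.gz"

    # Groups this package under the common 'build-tools' tag so that a
    # single tag-based search can detect it along with its peers.
    tags = ['build-tools']

    executables = ['^my-build-tool$']  # regex used during detection

    version('1.0', sha256='0' * 64)  # placeholder checksum
```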
* add to LD_LIBRARY_PATH so that it finds libimf.so
* amrex: fix handling of CUDA arch (#20786)
* amrex: fix handling of CUDA arch
* amrex: fix style
* amrex: fix bug
* Update var/spack/repos/builtin/packages/amrex/package.py
* Update var/spack/repos/builtin/packages/amrex/package.py
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
* ecp-data-vis-sdk: Combine the vis and io SDK packages (#20737)
This better enables the collective set to be deployed together, satisfying
each other's dependencies
* r-sf: fix dependency error (#20898)
* improve documentation for Rocm (hip amd builds) (#20812)
* improve documentation
* astyle: Fix makefile for install parameter (#20899)
* llvm-doe: added new package (#20719)
The package contains duplicated code from llvm/package.py;
the duplication will be resolved later.
* r-e1071: added v1.7-4 (#20891)
* r-diffusionmap: added v1.2.0 (#20881)
* r-covr: added v3.5.1 (#20868)
* r-class: added v7.3-17 (#20856)
* py-h5py: HDF5_DIR is needed for ~mpi too (#20905)
For the `~mpi` variant, the environment variable `HDF5_DIR` is still required. I moved this command out of the `+mpi` conditional.
* py-hovorod: fix typo on variant name in conflicts directive (#20906)
* fujitsu-fftw: Add new package (#20824)
* pocl: added v1.6 (#20932)
Made versions 1.5 and lower conflict with a64fx.
* PCL: add new package (#20933)
* r-rle: new package (#20916)
Common 'base' and 'stats' methods for 'rle' objects, aiming to make it
possible to treat them transparently as vectors.
* r-ellipsis: added v0.3.1 (#20913)
* libconfig: add build dependency on texinfo (#20930)
* r-flexmix: add v2.3-17 (#20924)
* r-fitdistrplus: add v1.1-3 (#20923)
* r-fit-models: add v0.64 (#20922)
* r-fields: add v11.6 (#20921)
* r-fftwtools: add v0.9-9 (#20920)
* r-farver: add v2.0.3 (#20919)
* r-expm: add v0.999-6 (#20918)
* cln: add build dependency on texinfo (#20928)
* r-expint: add v0.1-6 (#20917)
* r-envstats: add v2.4.0 (#20915)
* r-energy: add v1.7-7 (#20914)
* r-ellipse: add v0.4.2 (#20912)
* py-fiscalyear: add v0.3.0 (#20911)
* r-ecp: add v3.1.3 (#20910)
* r-plotmo: add v3.6.0 (#20909)
* Improve gcc detection in llvm. (#20189)
Co-authored-by: Tom Scogland <tom.scogland@gmail.com>
Co-authored-by: Thomas Green <ca-tgreen@gw4a64fxlogin00.head.gw4.metoffice.gov.uk>
* hatchet: updated urls (#20908)
* py-anuga: add new package (#20782)
* libvips: added v8.10.5 (#20902)
* libzmq: add platform conditions to libbsd dependency (#20893)
* r-dtw: add v1.22-3 (#20890)
* r-dt: add v0.17 (#20889)
* r-dosnow: add v1.0.19 (#20888)
* add version 1.0.16 to r-doparallel (#20886)
* add version 1.3.7 to r-domc (#20885)
* add version 0.9-15 to r-diversitree (#20884)
* add version 1.3-3 to r-dismo (#20883)
* add version 0.6.27 to r-digest (#20882)
* add version 1.5 to r-rngtools (#20887)
* add version 1.5.8 to r-dicekriging (#20877)
* add version 1.4.2 to r-httr (#20876)
* add version 1.28 to r-desolve (#20875)
* add version 2.2-5 to r-deoptim (#20874)
* add version 0.2-3 to r-deldir (#20873)
* add version 1.0.0 to r-crul (#20870)
* add version 1.1.0.1 to r-crosstalk (#20869)
* add version 1.0-1 to r-copula (#20867)
* add version 5.0.2 to r-rcppparallel (#20866)
* add version 2.0-1 to r-compositions (#20865)
* add version 0.4.10 to r-rlang (#20796)
* add version 0.3.6 to r-vctrs (#20878)
* amrex: add ROCm support (#20809)
* add version 2.0-0 to r-colorspace (#20864)
* add version 1.3-1 to r-coin (#20863)
* add version 0.19-4 to r-coda (#20862)
* add version 1.3.7 to r-clustergeneration (#20861)
* add version 0.3-58 to r-clue (#20860)
* add version 0.7.1 to r-clipr (#20859)
* add version 2.2.0 to r-cli (#20858)
* add version 0.4-3 to r-classint (#20857)
* add version 0.1.2 to r-globaloptions (#20855)
* add version 2.3-56 to r-chron (#20854)
* add version 0.4.10 to r-checkpoint (#20853)
* add version 2.0.0 to r-checkmate (#20852)
* add version 1.18.1 to r-catools (#20850)
* add version 1.2.2.2 to r-modelmetrics (#20849)
* add version 3.0-4 to r-cardata (#20847)
* add version 1.0.1 to r-caracas (#20846)
* r-lifecycle: new package at v0.2.0 (#20845)
* add version 3.0-10 to r-car (#20844)
* add version 3.4.5 to r-processx (#20843)
* add version 1.5-12.2 to r-cairo (#20842)
* add version 0.2.3 to r-cubist (#20841)
* add version 2.6 to r-rmarkdown (#20838)
* add version 1.2.1 to r-blob (#20819)
* add version 4.0.4 to r-bit (#20818)
* add version 2.4-1 to r-bio3d (#20816)
* add version 0.4.2.3 to r-bibtex (#20815)
* add version 3.1-4 to r-bayesm (#20807)
* add version 1.2.1 to r-backports (#20806)
* add version 2.0.3 to r-argparse (#20805)
* add version 5.4-1 to r-ape (#20804)
* add version 0.8-18 to r-amap (#20803)
* r-pixmap: added new package (#20795)
* zoltan: source code location change (#20787)
* refactor path logic
* added some paths to make compilers and libs discoverable
* add to LD_LIBRARY_PATH so that it finds libimf.so
and cleanup PEP8
* refactor path logic
* adding paths to LIBRARY_PATH so compiler wrappers will find -lmpi
* added vals for CC=icx, CXX=icpx, FC=ifx to generated module
* back out changes to intel-oneapi-mpi, save for separate PR
* Update var/spack/repos/builtin/packages/intel-oneapi-compilers/package.py
path is joined in _ld_library_path()
Co-authored-by: Robert Cohn <rscohn2@gmail.com>
* set absolute paths to icx,icpx,ifx
* dang close parenthesis
Co-authored-by: Robert Cohn <rscohn2@gmail.com>
Co-authored-by: mic84 <mrosso@lbl.gov>
Co-authored-by: Axel Huebl <axel.huebl@plasma.ninja>
Co-authored-by: Chuck Atkins <chuck.atkins@kitware.com>
Co-authored-by: darmac <xiaojun2@hisilicon.com>
Co-authored-by: Danny Taller <66029857+dtaller@users.noreply.github.com>
Co-authored-by: Tomoyasu Nojiri <68096132+t-nojiri@users.noreply.github.com>
Co-authored-by: Shintaro Iwasaki <siwasaki@anl.gov>
Co-authored-by: Glenn Johnson <glenn-johnson@uiowa.edu>
Co-authored-by: Kelly (KT) Thompson <KineticTheory@users.noreply.github.com>
Co-authored-by: Henrique Mendonça <henrique@users.noreply.github.com>
Co-authored-by: h-denpo <57649496+h-denpo@users.noreply.github.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Thomas Green <tomgreen66@hotmail.com>
Co-authored-by: Tom Scogland <tom.scogland@gmail.com>
Co-authored-by: Thomas Green <ca-tgreen@gw4a64fxlogin00.head.gw4.metoffice.gov.uk>
Co-authored-by: Abhinav Bhatele <bhatele@cs.umd.edu>
Co-authored-by: a-saitoh-fj <63334055+a-saitoh-fj@users.noreply.github.com>
Co-authored-by: QuellynSnead <quellyn@lanl.gov>
The "fact" method before was dealing with multiple facts
registered per call, which was used when we were emitting
grounded rules from knowledge of the problem instance.
Now that the encoding is changed we can simplify the method
to deal only with a single fact per call.
* py-dictdiffer: fix offline dependencies
* Update var/spack/repos/builtin/packages/py-dictdiffer/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-flatten-dict: new recipe
* Update var/spack/repos/builtin/packages/py-flatten-dict/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-flatten-dict: fix dependencies
* py-flatten-dict: fix dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* clingo/clingo-bootstrap: added a package with option for bootstrapping clingo
package builds in Release mode
uses GCC options to link libstdc++ and libgcc statically
* clingo-bootstrap: apple-clang options to bootstrap statically on darwin
* clingo: fix the path of the Python interpreter
In case multiple Python versions are in the same prefix
(e.g. when clingo is built against an external Python),
it may happen that the Python used by CMake does not
match the corresponding node in the current spec.
This is fixed here by defining "Python_EXECUTABLE"
properly as a hint to CMake (see the sketch after this list).
* clingo: the commit for "spack" version has been updated.
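A sketch of the CMake hint from the interpreter-path fix above (simplified; the real cmake_args sets other options too):

```python
from spack import *


class Clingo(CMakePackage):  # sketch; the real package defines much more
    def cmake_args(self):
        # Pin CMake's Python to the interpreter in this spec so a second
        # interpreter in the same prefix cannot be picked up.
        return [
            '-DPython_EXECUTABLE={0}'.format(
                self.spec['python'].command.path),
        ]
```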
- add variants for build targets, language bindings, backends
- ensure selected variants are compatible with zfp version
- point to GitHub (not LLNL) tar balls
- add dependencies
- update link to homepage
- add maintainers
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Patch provided by @Billae
Avoid the following error:
File "/home/danlipsa/projects/spack/lib/spack/llnl/util/tty/log.py", line 768, in _writer_daemon
line = _retry(in_pipe.readline)()
File "/home/danlipsa/projects/spack/lib/spack/llnl/util/tty/log.py", line 830, in wrapped
return function(*args, **kwargs)
File "/usr/lib/python3.8/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x97 in position 220: invalid start byte
This PR adds:
1. A patch that fixes a bug in version 2.70
(will be fixed upstream in the next release: https://savannah.gnu.org/support/?110396).
2. A fix for the way we patch shebang in bin/autom4te.in.
For 2, we need to keep the original modification timestamp of the file.
Otherwise, we either get an empty man page for autom4te (versions 2.69 and before)
or a failure at build time (versions 2.70 and after).
The difference has to do with the update of the missing script: https://git.savannah.gnu.org/cgit/automake.git/commit/lib/missing?id=a22717dffe37f30ef2ad2c355b68c9b3b5e4b8c7
It will take time until developers of Autotools-based packages adjust their scripts
to the new version; therefore, 2.69 is marked as preferred.
* New interface reconstruction package
* forgot to put in CMake option for Jali
* cleanup whitespace
* fix lines with more than 79 chars
* more long line cleanup
* fix typo WONTON_ENABLE_Kokkos ---> TANGRAM_ENABLE_Kokkos
* fix bugs in CMake section
* more compact cmake block
* update hash for 1.2.10 and add 1.2.11
* update recipe for Portage 3.0.0
* removing old versions - they won't build with the new recipe and the url specification doesn't work for them
* update version to 3.3.6
Co-authored-by: Rao Garimella <rao@abyzou.lanl.gov>
* added pytest-benchmark recipe
* Update var/spack/repos/builtin/packages/py-pytest-benchmark/package.py
Added Python2 dependence.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added package py-pytest-cpp
* Update var/spack/repos/builtin/packages/py-pytest-cpp/package.py
the package requires !=5.4.0, so use @:5.3.999
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added package py-pytest-timeout
* Update var/spack/repos/builtin/packages/py-pytest-timeout/package.py
Added Python2.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added package py-openmc
* Update var/spack/repos/builtin/packages/py-openmc/package.py
specify branch when using branch names for versions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-openmc/package.py
use run after fixture to install openmc lib
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-openmc/package.py
Simplify copying openmc library to py-openmc prefix using install
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-openmc/package.py
NumPy should be 1.9+
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fix paren missing
* Update var/spack/repos/builtin/packages/py-openmc/package.py
fixed parens
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-openmc/package.py
use v0.11.0 in URL
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Sometimes we need to patch a file that is a dependency for some other
automatically generated file that comes in a release tarball. As a
result, make tries to regenerate the dependent file using additional
tools (e.g. help2man), which would not be needed otherwise.
In some cases, it's preferable to avoid that (e.g. see #21255). A way
to do that is to save the modification timestamps before patching and
restoring them afterwards. This PR introduces a context wrapper that
does that.
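A minimal sketch of such a wrapper (the actual helper in Spack may differ in name and details):

```python
import contextlib
import os


@contextlib.contextmanager
def keep_modification_time(*filenames):
    # Record mtimes up front, run the body, then restore them.
    mtimes = {f: os.path.getmtime(f) for f in filenames if os.path.exists(f)}
    try:
        yield
    finally:
        for f, mtime in mtimes.items():
            os.utime(f, (os.path.getatime(f), mtime))
```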
Python extensions use CC and LDSHARED from the sysconfig module to
build. When Spack installs Python, it replaces the Spack compiler
wrappers in these values with the underlying compilers (since these
wrappers are not useful outside of the context of running Spack).
In order to use the Spack compiler wrappers when building Python
extensions with Spack, Spack sets the LDSHARED environment variable
when running `Python.setup_py` (which overrides sysconfig). However,
many Python extensions use an alternative method to build (namely
PythonPackage.setup_py), which meant that LDSHARED was not set (and
RPATHs were not inserted for dependencies).
This commit makes the following changes:
* Sets LDSHARED in the environment: this applies to all commands
executed during the build, rather than for a single command
invocation
* Updates the logic to set LDSHARED: this replaces the compiler
executable in LDSHARED with the Spack compiler wrapper. This
means that for some externally-built instances of Python,
Spack will now switch to using the Spack wrappers when building
extensions. The behavior is expected to be the same for Spack-
built instances of Python.
* Performs similar modifications for LDCXXSHARED (to ensure RPATHs
are included for C++ codes)
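A rough sketch of the LDSHARED rewrite, with simplified names (the real logic lives in Spack's Python package support):

```python
import sysconfig


def wrapped_ldshared(spack_cc_wrapper):
    # LDSHARED is typically "<compiler> -shared ..."; swap the leading
    # compiler executable for Spack's wrapper and keep the flags.
    ldshared = sysconfig.get_config_var('LDSHARED') or ''
    parts = ldshared.split()
    if parts:
        parts[0] = spack_cc_wrapper  # e.g. .../lib/spack/env/gcc/gcc
    return ' '.join(parts)
```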
On ppc64le and aarch64, Spack tries to execute any "config.guess" and
"config.sub" scripts it finds in the source package.
However, in the libsodium tarball, these files are present but not
executable. This causes the following error when trying to install
libsodium with spack:
Error: RuntimeError: Failed to find suitable substitutes for config.sub, config.guess
Fix this by chmod-ing the scripts in the patch() function of libsodium.
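A sketch of that patch() function (the script paths are assumptions for illustration):

```python
import os
import stat

from spack import *


class Libsodium(AutotoolsPackage):  # sketch; the real package has more
    def patch(self):
        # Restore the execute bit the tarball lacks on these scripts.
        for script in ('build-aux/config.guess', 'build-aux/config.sub'):
            mode = os.stat(script).st_mode
            os.chmod(script,
                     mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```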
* recipe: add version 6.1.1 for pytest
add recipe for new dependency py-iniconfig
* fix: 'SyntaxError: invalid syntax' during unittests
* requested changes on the pull request done
* requested changes on dep for py-pytest
* change constraint on python for importlib-metadata
* undo change on py-importlib-metadata as requested
* bug fix
* bug fix on py-wcwidth
* fix as requested
* forgot @ in when param
* forgot a colon
* add new versions py-pytest and py-py
* fix setuptools* version
* add rule for more-itertools
* [py-intel-openmp] created template
* [py-intel-openmp] is wheel
* [py-intel-openmp] fixed version for linux
* [py-intel-openmp] removed fixmes, added homepage and description
* [py-intel-openmp] added macos support
* [py-intel-openmp] style fix
* petsc: add a +mkl-pardiso variant
mkl_pardiso solver is distributed with intel-mkl
* petsc: depend on mkl instead of intel-mkl
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The first of my two upstream patches to mypy landed in the 0.800 tag that was released this morning, which lets us use module and package parameters with a .mypy.ini file that has a files key. This uses those parameters to check all of spack in `spack style`, but leaves the packages out for now since they are still very, very broken. If no package has been modified, the packages are not checked, but if one has been, they are. Includes some fixes for the log tests since they were not type checking.
Should also fix all failures related to "duplicate module named package" errors.
Hopefully the next drop of mypy will include my other patch so we can just specify the modules and packages in the config file to begin with, but for now we'll have to live with a bare mypy doing a check of the libs but not the packages.
* use module and package flags to check packages properly
* stop checking package files, use package flag for libs
The packages are not type checkable yet, need to finish out another PR
before they can be. The previous commit also didn't check the libraries
properly, this one does.
Add version 4.12.6, 5.0.3
I think the preferred version was there to keep version 4.
But that's why we have spack: people can install
whatever version they want.
And root has a properly versioned dependency.
* mumps: Fix for problematic src/makefile patch (#20590)
Minor change in src/Makefile between 5.2.0 and 5.3.3 caused the patch to
break. Split into 2 patchfiles.
* mumps: Additional patch for fixing #20590
This is to fix issue wherein build fails on Ubuntu due to undefined
symbols, despite symbols being included in other libraries referenced
on the compilation line. I believe the issue is that the inclusion
of libsmumps.so was (due to my original patch) causing
libmumps_common.so to be automatically loaded, but since libpords.so
was not also required, the error was occurring. I have added libpords.so
along with libmumps_common.so to be explicit dependencies of
libsmumps.so, etc., which seems to resolve the issue.
* ArrayFire: Add version 3.7.2.
* ArrayFire: Allow using MKL as the FFTW provider.
* ArrayFire: Ensure the libraries are properly found.
The required backend(s) can be specified in the library query.
* openssl: remove preprocessor flags incompatible with NVIDIA HPC SDK
* Update var/spack/repos/builtin/packages/openssl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Scott McMillan <smcmillan@nvidia.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [py-statsmodels] added version 0.12.1 and updated dependencies accordingly
* [py-statsmodels] added python requirements for new version and fixed formatting for readability
* added m4 dep to PVM recipe
* added libtirpc dep to PVM recipe
* decode str or bytestr string to unicode
* Resolved comments from @adamjstewart on setup_build_environment
* When the SCR spec specifies a resource_manager=SLURM or LSF flag, propagate the spec through to
the libyogrt scheduler=slurm or lsf
* Use libyogrt default scheduler option when the SCR spec does not specify LSF or SLURM
* updated relion for new versions
* Switched to checksum versions
* Enabled spack tracking for MKL and TBB when CPU optimizations are enabled
* Added variants to control MKL FFT and Ppatent feature
* Replaced tags with sha256 for older versions and switched to virtual packages
* py-funcy: new recipe
* Update var/spack/repos/builtin/packages/py-funcy/package.py
add build and run python dependencies
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* sbang pushed back to callers;
star moved to util.lang
* updated unit test
* sbang test moved; local tests pass
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
* fixing outdated metis link
* updated url to the official website since the previous url was a GitHub repo that is an unofficial mirror that only contains the latest version
* py-dictdiffer: new recipe
* Update var/spack/repos/builtin/packages/py-dictdiffer/package.py
add correct setuptools dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update NEURON simulator package
- update recipe to support autoconf as well as cmake
- new versions >=7.8 support cmake
- remove old variants
- added patch for latest bug fix release 7.8.2
Co-authored-by: Kumbhar Pramod Shivaji <kumbhar@bbpv1.epfl.ch>
Co-authored-by: Kumbhar Pramod Shivaji <kumbhar@bb-c02vf1h0hv2r.epfl.ch>
* NAMD: FIX build +cuda
Hi,
If I try to compile NAMD with CUDA support, it fails because it cannot find the file "{self.arch}.cuda", which is under the "arch" folder.
* NAMD: FIX mpi ~smp
Fix `spack install namd ^charmpp backend=mpi ~smp`
* ssht: New version 1.3.4
ssht changed its configuration mechanism from "home-grown" to CMake. The previously current version 1.2b1 (a beta release) is thus unfortunately not available any more.
* ssht: Don't set build type
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Don't use CUDA for hipblas
* old versions use TRY_CUDA
* Update var/spack/repos/builtin/packages/hipblas/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add version 2.2-2 to r-gwmodel
* Update var/spack/repos/builtin/packages/r-gwmodel/package.py
Fix comma, space issue.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add version 0.3.17 to r-inline
* Drop R version constraint
A really old version of R was specified in the 0.3.14 and 0.3.15
versions of r-inline. This constraint was dropped in the 0.3.17 version.
Drop it from the spack recipe as well.
* vasp: fix build with gfortran 10
Avoid Error: Type mismatch between actual argument at (1) and actual argument at (2)
* vasp: add version 6.1.1
* vasp 6: allow building without CUDA
* Adding PiP recipe
* pip@1 recipe (it seems working)
* change install dir hierarchy
* installing PiP man pages
* add pip-glibc & pip-gdb
* fix configure option designations, fix dependency types
* fix dependency type of pip
* use AutotoolsPackage in pip recipe
* add patch for pip-glibc & pip-gdb to enable 'disable-werror'
* change glibc install directory
* add linux distro check to pip-gdb
* create process-in-process package
* use flag_handler and join_path
* add gcc version constraint, change install-test to check-installed
* fix gcc version designations on conflicts()
* add constraint of target cpu, fix flake8 warnings
* add version constraint to resource()
* Some fixes to adapt the current version
not to execute 'piplnlibs'
change documentation install command
* Update
new branch name of PiP-gdb
adapting PiP-Testsuite
* update pip-gdb github urls
* The very first commit of Process-in-Process (PiP)
details can be found at https://github.com/RIKEN-SysSoft/PiP
* Fix comment style issues
* New Package: Process-in-Process (PiP) -- 2nd trial
* fix style issue
* change inline comments style (required to have two spaces)
Co-authored-by: Daiki Matsunaga <daikim@axe.bz>
ImageMagick-7.0.8 needs to link against libltdl. Otherwise, the build will fail with:
```
2 errors found in build log:
503 checking for libltdl...
504 checking ltdl.h usability... no
505 checking ltdl.h presence... no
506 checking for ltdl.h... no
507 checking for lt_dlinit in -lltdl... no
508 checking if libltdl package is complete... no
>> 509 configure: error: in `/tmp/gpjohnsn/spack-stage/spack-stage-imagemagick-7.0.8-7-4y44gaklhhciiwjzhfpxjfwdj5q
ltjp3/spack-src':
>> 510 configure: error: libltdl is required for modules and OpenCL builds
511 See `config.log' for more details
```
* add version 3.8.2 to r-gtools
* Improve formatting of description
In case the list gets formatted as a non-list:
- added semicolons to end of list items
- replaced dashes with [#]
* add version 1.30 to r-knitr
* Fix version constraints
- r-digest
- r-formatr
The version constraints on those packages should actually be in the `when`
clause.
'date' is a C++ header library offering extensive date and time
functionality for the C++11, C++14 and C++17 standards written by Howard
Hinnant and released under the MIT license. A slightly modified version
has been accepted (along with 'tz.h') as part of C++20. This package
regroups all header files from the upstream repository by Howard Hinnant
so that other R packages can use them in their C++ code. At present, few
of the types have explicit 'Rcpp' wrapper though these may be added as
needed.
Designed to ease the application and comparison of multiple hypothesis
testing procedures for FWER, gFWER, FDR and FDX. Methods are
standardized and usable by the accompanying 'mutossGUI'.
Utility functions that enhance the 'parallel' package and support the
built-in parallel backends of the 'future' package. For example,
availableCores() gives the number of CPU cores available to your R
process as given by the operating system, 'cgroups' and Linux
containers, R options, and environment variables, including those set by
job schedulers on high-performance compute clusters. If none is set, it
will fall back to parallel::detectCores(). Another example is
makeClusterPSOCK(), which is backward compatible with
parallel::makePSOCKcluster() while doing a better job in setting up
remote cluster workers without the need for configuring the firewall to
do port-forwarding to your local computer.
Contains third-party map tile provider information from 'Leaflet.js',
<https://github.com/leaflet-extras/leaflet-providers>, to be used with
the 'leaflet' R package. Additionally, 'leaflet.providers' enables users
to retrieve up-to-date provider information between package updates.
Provides a header only, C++11 interface to R's C interface. Compared to
other approaches 'cpp11' strives to be safe against long jumps from the
C API as well as C++ exceptions, conform to normal R function semantics
and supports interaction with 'ALTREP' vectors.
Query, set, delete credentials from the 'git' credential store. Manage
'GitHub' tokens and other 'git' credentials. This package is to be used
by other packages that need to authenticate to 'GitHub' and/or other
'git' repositories.
Importance sampling from the truncated multivariate normal using the GHK
(Geweke-Hajivassiliou-Keane) simulator. Unlike Gibbs sampling which can
get stuck in one truncation sub-region depending on initial values, this
package allows truncation based on disjoint regions that are created by
truncation of absolute values. The GHK algorithm uses simple Cholesky
transformation followed by recursive simulation of univariate truncated
normals hence there are also no convergence issues. Importance sample is
returned along with sampling weights, based on which, one can calculate
integrals over truncated regions for multivariate normals.
This release also includes the HDF5 VOL plugin, so I've added an additional
function to ensure the HDF5_PLUGIN_PATH env var gets updated with the adios
install prefix
* intel-xed: add version 12.0.1
Rework the version numbers for intel-xed, now that xed has actual
releases and tags. Add releases 11.2.0 and 12.0.1. Rename 2019.03.01
to 10.2019.03 as a legacy version that fits in the new order.
Add variant +pic to compile libxed.a with PIC code so that it can be
linked into another shared library.
Add conflict for aarch64.
Add mwkrentel as maintainer.
* py-pyfiglet: new recipe
* Update var/spack/repos/builtin/packages/py-pyfiglet/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-pyfiglet: use pypi url
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
fixes #20736
Before this one line fix we were erroneously deducing
that dependency conditions hold even if a package
was external.
This may result in answer sets that contain imposed
conditions on a node without the node being present
in the DAG, hence #20736.
fixes #20611
The conflict was triggered by an invalid value of the
'scheduler' variant. This causes Spack to error when libyogrt
facts are validated by the ASP-based concretizer.
At some point in the past, the skip_patch argument was removed
from the call to package.do_install(); this broke the --skip-patch
flag on the dev-build command.
There are two issues with hip where it tries to autodetect the patch
version number from git (when installed), but it does not check if it
even is inside of a git repo. The result is we end up with a shared lib
with a trailing dash in the library suffix: `libamd64.so.x.y.z-`, which
confuses GCC. The patch tries to check if the `.git` folder exists, and
if it does not, it handles version numbering the same as when git was
not installed previously.
* opencl-c-headers: add new version 2020.12.18
* opencl-clhpp: add new version 2.0.13
* opencl-headers: now supports OpenCL 3.0 with new versions of opencl-c-headers and opencl-clhpp
* ocl-icd: add new version 2.2.14 and can now provide OpenCL 3.0
PaRSEC: the Parallel Runtime Scheduler and Execution Controller for micro-tasks on distributed heterogeneous systems.
Signed-off-by: Aurelien Bouteiller <bouteill@icl.utk.edu>
* py-tensorflow: 2.4.0 and dependency updates
* minor version updates
* fix numpy dependency
* dependency rework: compatible release issues, start to clarify cuda versions
* --incompatible_no_support_tools_in_action_inputs was removed in bazel 3.6
* adjustment to versions of cuda dependency, also make sure that
patches/filters still apply to certain release trains.
* python 3.8 and tf < 2.2 have issues
* missed py-grpcio version bump
Set up environment and dependent packages properly when building
with intel-oneapi-mpi as a dependency MPI provider (e.g. point to
mpicc compiler wrapper).
* eospac: add version 6.4.2beta
* eospac: clarify EOSPAC "beta" versions
Compared to 6.4.1, EOSPAC 6.4.2beta contains only one change, a fix
for an inability to read some SESAME files in ASCII format. From the
release announcement,
EOSPAC 6.4.2beta has been released for general use as the latest
(i.e., eospac6-latest) versions. This is a small patch to the
previously-released version 6.4.1, which was requested by an
affected user.
But the "beta" label can cause confusion, especially when a beta
version is the new preferred version, as is the case here. As
suggested by reviewers, add a comment clarifying EOSPAC's use of
"beta".
This properly sets PATH/CPATH/LIBRARY_PATH etc. to make the
Spack-generated module file for intel-oneapi-compilers useful
(without this, 'icx' would not be found after loading the module
file for intel-oneapi-compilers).
The C library for the current compiler should already be used by the compiler, so there is no point in returning any libs for this package.
Without this patch, if one uses this as an external package (as intended), it can inject system library paths into the build process at the wrong place.
fixes#20679
In this refactor we have a single cardinality rule on the
provider, which triggers a rule transforming a dependency
on a virtual package into a dependency on the provider of
the virtual.
This adds a -i option to "spack python" which allows use of the
IPython interpreter; it can be used with "spack python -i ipython".
This assumes it is available in the Python instance used to run
Spack (i.e. that you can "import IPython").
* Update recipe for AOMP.
Reduced repetition with version hashes.
Expanded dependency versioning.
Reduced repetition with cmake args.
Added version 3.10.0
* Update dependency versions and remove unneeded quotes.
* Update var/spack/repos/builtin/packages/aomp/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update of Eccodes to 2.19.1
* PEP8
* PEP8
* PEP8-whitespace
* Update var/spack/repos/builtin/packages/eccodes/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Michael Blaschek <michael.blaschek@univie.ac.at>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Every other predicate in the concretizer uses a `_set` suffix to
implement user- or package-supplied settings, but compiler settings use a
`_hard` suffix for this. There's no difference in how they're used, so
make the names the same.
- [x] change `node_compiler_hard` to `node_compiler_set`
- [x] change `node_compiler_version_hard` to `node_compiler_version_set`
* OpenMPI: Depends on hwloc & libevent
Both hwloc & libevent are required dependencies of Open MPI.
While they are also shipped internally, newer releases (>=4.0)
will start looking for external packages by default.
This caused build issues of Open MPI 4.0.5 with Fortran on macOS
10.15.
* Open MPI 4.0: libevent external
Internally shipped libevent just works fine for prior releases.
#20076 moved Cray-specific MPICH support from the Spack MPICH package
to a new cray-mpich Package. This broke existing package installs
using external mpich on Cray systems. This PR keeps the cray-mpich
package but restores the Cray-specific MPICH support for older
installations.
In the future this support should be removed from the Spack mpich
package and users should be directed to use cray-mpich on Cray.
Previously, the concretizer handled version constraints by comparing all
pairs of constraints and ensuring they satisfied each other. This led to
INCONSISTENT results from clingo, due to ambiguous semantics like:
version_constraint_satisfies("mpi", ":1", ":3")
version_constraint_satisfies("mpi", ":3", ":1")
To get around this, we introduce possible (fake) versions for virtuals,
based on their constraints. Essentially, we add any Versions,
VersionRange endpoints, and all such Versions and endpoints from
VersionLists to the constraint. Virtuals will have one of these synthetic
versions "picked" by the solver. This also allows us to remove a special
case from handling of `version_satisfies/3` -- virtuals now work just
like regular packages.
This converts the virtual handling in the new concretizer from
already-ground rules to facts. This is the last thing that needs to be
refactored, and it converts the entire concretizer to just use facts.
The previous way of handling virtuals hinged on rules involving
`single_provider_for` facts that were tied to the virtual and a version
range. The new method uses the condition pattern we've been using for
dependencies, externals, and conflicts.
To handle virtuals as conditions, we impose constraints on "fake" virtual
specs in the logic program. i.e., `version_satisfies("mpi", "2.0:",
"2.0")` is legal whereas before we wouldn't have seen something like
this. Currently, constraints are only handled on versions -- we don't
handle variants or anything else yet, but the key change here is that we
*could*. For a long time, virtual handling in Spack has only dealt with
versions, and we'd like to be able to handle variants as well. We could
easily add an integrity constraint to handle variants like the one we use
for versions.
One issue with the implementation here is that virtual packages don't
actually declare possible versions like regular packages do. To get
around that, we implement an integrity constraint like this:
:- virtual_node(Virtual),
version_satisfies(Virtual, V1), version_satisfies(Virtual, V2),
not version_constraint_satisfies(Virtual, V1, V2).
This requires us to compare every version constraint to every other, both
in program generation and within the concretizer -- so there's a
potentially quadratic evaluation time on virtual constraints because we
don't have a real version to "anchor" things to. We just say that all the
constraints need to agree for the virtual constraint to hold.
We can investigate adding synthetic versions for virtuals in the future,
to speed this up.
This code in `SpecBuilder.build_specs()` introduced in #20203, can loop
seemingly interminably for very large specs:
```python
set([spec.root for spec in self._specs.values()])
```
It's deceptive, because it seems like there must be an issue with
`spec.root`, but that works fine. It's building the set afterwards that
takes forever, at least on `r-rminer`. Currently if you try running
`spack solve r-rminer`, it loops infinitely and spins up your fan.
The issue (I think) is that the spec is not yet complete when this is
run, and something is going wrong when constructing and comparing so many
values produced by `_cmp_key()`. We can investigate the efficiency of
`_cmp_key()` separately, but for now, the fix is:
```python
roots = [spec.root for spec in self._specs.values()]
roots = dict((id(r), r) for r in roots)
```
We know the specs in `self._specs` are distinct (they just came out of
the solver), so we can just use their `id()` to unique them here. This
gets rid of the infinite loop.
- [x] add `concretize.lp`, `spack.yaml`, etc. to licensed files
- [x] update all licensed files to say 2013-2021 using
`spack license update-copyright-year`
- [x] appease mypy with some additions to package.py that were needed
for oneapi.py
This adds a new subcommand to `spack license` that automatically updates
the copyright year in files that should have a license header.
- [x] add `spack license update-copyright-year` command
- [x] add test
This adds two lines to `.gitattributes`:
- [x] exclude vendored code from GitHub's language calculation
- [x] recognize `.lp` files as Prolog (closest language to ASP that
linguist supports)
It looks like there have been two attempts
(https://github.com/github/linguist/issues/3867,
https://github.com/github/linguist/issues/4860) to add ASP as a language
to Linguist, but it's not widespread enough to be standard yet (or at
least the people who submitted the PRs haven't been able to show enough
stats to prove it). We'll settle for calling ASP "Prolog" for now as
that'll get us some syntax highlighting for `concretize.lp`.
* hdf-eos5: new package (HDF for Earth Observing System using hdf v5)
* hdf-eos5: flake8 fixes
* hdf-eos5: trying to fix flake8 errors
* hdf-eos5: flake8 fix
* hdf-eos5: Fix to support Fortran codes
The -Df2cFortran compilation flag is needed to support Fortran
* hdf-eos2: new package (HDF for Earth Observing System using hdf v4)
* hdf-eos2: flake8 fixes
* hdf-eos2: fix to support Fortran
Need the compilation flag -Df2cFortran to allow support for Fortran
codes
libuuid is currently contained in util-linux, libuuid and uuid. This
change introduces a new virtual provider `uuid` and renames the existing
`uuid` package to `ossp-uuid`.
util-linux's libuuid is provided in the form of a separate package
util-linux-uuid to make sure that packages depending on uuid and
util-linux can use a separate uuid implementation, which the concretizer
does not allow if libuuid is contained in util-linux.
- added several patches
- added some missing dependencies
- remove unneeded dependencies
- add CUDA support
- disable queue support, which was limited, and broken anyway
- move package text that was specific to the package to a comment, so it
  does not show up in the environment module
- set conflicts for cuda and compilers
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* OpenMPI: Add version 4.1.0
* OpenMPI: Prefer version 4.0.5.
* OpenMPI: Update links
The download links changed, there is currently a redirection but it might not work forever. The website also switched to https.
Previously compiler-rt didn't correctly pass through cmake
variables for python when building the various sanitizers.
This patch passes these variables through.
This patch may also correctly apply to any version of LLVM
that uses the newer monorepo-style organization, and to any
older llvm newer than 7.0.0 as long as the paths were set
appropriately. However, this was not done because it was not
tested with older LLVM releases.
Fixes #19908
See also: https://bugs.llvm.org/show_bug.cgi?id=48180
This updates the UnifyFS packages to account for the latest v0.9.1
release.
Updates required and optional dependencies for the respective
releases.
Locks margo and mercury dependencies at specific versions while
integration with their latest versions is still in progress.
* PGI compiler has trouble with avx2 SIMD support
(https://github.com/FFTW/fftw3/issues/78)
* Hew to the project's preferred indentation standard.
* Expand '%nvhpc' logic to include '%pgi'.
* Exceeded the max line-length.
* Break up the long compound statement into nested if's.
* Inadvertently picked up an extraneous file.
* PGI compiler has trouble with avx2/avx-512 SIMD support, too.
* Add PGI runtime libs to LDFLAGS when '%pgi' in spec.
* Revert "Add PGI runtime libs to LDFLAGS when '%pgi' in spec."
This reverts commit 31c3ef8ea2.
* Add PGI runtime libs to LDFLAGS when '%pgi' in spec.
GCC looks for included files based on several env vars.
Remove C_INCLUDE_PATH, CPLUS_INCLUDE_PATH, and OBJC_INCLUDE_PATH
from the build environment to ensure it's clean and prevent
accidental clobbering.
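In package terms, the effect is equivalent to something like this sketch (the actual change is made in Spack's core build-environment setup, not in a package):

```python
from spack import *


class Gcc(Package):  # sketch only, for illustration
    def setup_build_environment(self, env):
        # Scrub the variables GCC consults implicitly for include paths.
        for var in ('C_INCLUDE_PATH', 'CPLUS_INCLUDE_PATH',
                    'OBJC_INCLUDE_PATH'):
            env.unset(var)
```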
* Adding support for the CMake flags in LBANN that are missing.
* Added new flag to OpenCV dependency and removed negative variants
since OpenCV no longer turns on everything by default. Removed CMake
flags in LBANN that have been deprecated.
* Removed type='build' flags from dependencies so that they get linked
into an environment's view.
* Removed type='build' flags from dependencies so that they get linked
into an environment's view. Fixed DiHydrogen variant to enable the
DistConv feature, renamed to +distconv from +legacy. Added a conflicts
line to indicate that DistConv and ROCm don't work with +half
support.
* Fixed Flake8 and cleaned up ordering of variants.
* Flake8
* Backed out changes to not mark and cmake and ninja as build
dependencies, which was introduced to make sure that they appear in
a spack environment.
* Backed out changes to not mark doc related packages as build
dependencies, which was introduced to make sure that they appear
in a spack environment.
* Fixed how recipe communicates the intent to build and run tests to the
package CMake.
This is to make sure that the build system doesn't pick up a library that
would happen to be available.
Co-authored-by: Baptiste Jonglez <git@bitsofnetworks.org>
Environment yaml files should not have default values written to them.
To accomplish this, we change the validator to not add the default values to yaml. We rely on the code to set defaults for all values (and use defaulting getters like dict.get(key, default)).
Includes regression test.
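A sketch of the defaulting-getter pattern (key names illustrative):

```python
env_yaml = {'gitlab-ci': {}}  # parsed spack.yaml contents (illustrative)

# Defaults live in the code, via defaulting getters, and are never
# written back into the YAML document itself.
ci_config = env_yaml.get('gitlab-ci', {})
rebuild_index = ci_config.get('rebuild-index', True)
```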
This creates a set of packages which all use the same script to install
components of Intel oneAPI. This includes:
* An inheritable IntelOneApiPackage which knows how to invoke the
installation script based on which components are requested
* For components which include headers/libraries, an inheritable
IntelOneApiLibraryPackage is provided to locate them
* Individual packages for DAL, DNN, TBB, etc.
* A package for the Intel oneAPI compilers (icx/ifx). This also includes
icc/ifortran but these are not currently detected in this PR
I lost my mind a bit after getting the completion stuff working and
decided to get Mypy working for spack as well. This adds a
`.mypy.ini` that checks all of the spack and llnl modules, though
not yet packages, and fixes all of the identified missing types and
type issues for the spack library.
In addition to these changes, this includes:
* rename `spack flake8` to `spack style`
Aliases flake8 to style, and just runs flake8 as before, but with
a warning. The style command runs both `flake8` and `mypy`,
in sequence. Added --no-<tool> options to turn off one or the
other, they are on by default. Fixed two issues caught by the tools.
* stub typing module for python2.x
We don't support typing in Spack for python 2.x. To allow 2.x to
support `import typing` and `from typing import ...` without a
try/except dance to support old versions, this adds a stub module
*just* for python 2.x. Doing it this way means we can only reliably
use all type hints in python3.7+, and mypy.ini has been updated to
reflect that.
* add non-default black check to spack style
This is a first step to requiring black. It doesn't enforce it by
default, but it will check it if requested. Currently enforcing the
line length of 79 since that's what flake8 requires, but it's a bit odd
for a black formatted project to be quite that narrow. All settings are
in the style command since spack has no pyproject.toml and I don't
want to add one until more discussion happens. Also re-format
`style.py` since it no longer passed the black style check
with the new length.
* use style check in github action
Update the style and docs action to use `spack style`, adding in mypy
and black to the action even if it isn't running black right now.
We have to repeat all the spec attributes in a number of places in
`concretize.lp`, and Spack has a fair number of spec attributes. If we
instead add some rules up front that establish equivalencies like this:
```
node(Package) :- attr("node", Package).
attr("node", Package) :- node(Package).
version(Package, Version) :- attr("version", Package, Version).
attr("version", Package, Version) :- version(Package, Version).
```
We can rewrite most of the repetitive conditions with `attr` and repeat
only for each arity (there are only 3 arities for spec attributes so far)
as opposed to each spec attribute. This makes the logic easier to read
and the rules easier to follow.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
This patch logic resolves a linking issue with ncurses in the mesa
package. This appears to be a recurring problem that was identified in
the mesa gitlab issues here:
https://gitlab.freedesktop.org/mesa/mesa/-/issues/2843
Using `_llvm_method = 'auto'` is broken. This patch replaces that with
`_llvm_method = 'config-tool'`, which is a hack, but makes it possible
to build.
I have commented on the closed issue (2843), referencing the original
author of the bug, and one of the mesa developers, so perhaps they will
fix the problem.
This PR does three related things to try to improve developer tooling quality of life:
1. Adds new options to `.flake8` so it applies the rules of both `.flake8` and `.flake_package` based on paths in the repository.
2. Adds a re-factoring of the `spack flake8` logic into a flake8 plugin so using flake8 directly, or through editor or language server integration, only reports errors that `spack flake8` would.
3. Allows star import of `spack.pkgkit` in packages; since this is now the thing that needs to be imported for completion to work correctly in package files, it's nice to be able to do that.
I'm sorely tempted to sed over the whole repository and put `from spack.pkgkit import *` in every package, but at least being allowed to do it on a per-package basis helps.
As an example of what the result of this is:
```
~/Workspace/Projects/spack/spack develop* ⇣
❯ flake8 --format=pylint ./var/spack/repos/builtin/packages/kripke/package.py
./var/spack/repos/builtin/packages/kripke/package.py:6: [F403] 'from spack.pkgkit import *' used; unable to detect undefined names
./var/spack/repos/builtin/packages/kripke/package.py:25: [E501] line too long (88 > 79 characters)
~/Workspace/Projects/spack/spack refactor-flake8*
1 ❯ flake8 --format=spack ./var/spack/repos/builtin/packages/kripke/package.py
~/Workspace/Projects/spack/spack refactor-flake8*
❯ flake8 ./var/spack/repos/builtin/packages/kripke/package.py
```
* qa/flake8: update .flake8, spack formatter plugin
Adds:
* Modern flake8 settings for per-path/glob error ignores, allows
packages to use the same `.flake8` as the rest of spack
* A spack formatter plugin to flake8 that implements the behavior of
`spack flake8` for direct invocations. Makes integration with
developer tooling nicer, linting with flake8 reports only errors that
`spack flake8` would report. Using pyls and pyls-flake8, or any other
non-format-dependent flake8 integration, now works with spack's rules.
* qa/flake8: allow star import of spack.pkgkit
To get working completion of directives and spack components it's
necessary to import the contents of spack.pkgkit. At the moment doing
this makes flake8 displeased. For now, allow spack.pkgkit and spack
both, next step is to ban spack * and require spack.pkgkit *.
* first cut at refactoring spack flake8
This version still copies all of the files to be checked as before, and
some other things that probably aren't necessary, but it relies on the
spack formatter plugin to implement the ignore logic.
* keep flake8 from rejecting itself
* remove separate packages flake8 config
* fix failures from too many files
I ran into this in the PR converting pkgkit to std. The solution in
that branch does not work in all cases as it turns out, and all the
workarounds I tried to use generated configs to get a single invocation
of flake8 with a filename option to work failed. It's an astonishingly
frustrating config option.
Regardless, this removes all temporary file creation from the command
and relies on the plugin instead. To work around the huge number of
files in spack and still allow the command to control what gets checked,
it scans files in batches of 100. This is a completely arbitrary number
but was chosen to stay safely under common command-line length limits. One
side-effect of this is that every 100 files the command will produce
output, rather than only at the end, which doesn't seem like a terrible
thing.
* Dependencies of Go will now correctly set the GOPATH for the
appropriate spec to avoid using the user's default path.
* Bumped version to the latest releases (1.15.6 & 1.14.13).
Most people installing `clingo` with Spack are going to be doing it to
use the new concretizer, and that requires the `master` branch.
- [x] make `master` the default so we don't have to keep telling people
to install `clingo@master`. We'll update the preferred version when
there's a new release.
Continuing to convert everything in `asp.py` into facts, make the
generation of ground rules for conditional dependencies use facts, and
move the semantics into `concretize.lp`.
This is probably the most complex logic in Spack, as dependencies can be
conditional on anything, and we need conditional ASP rules to accumulate
and map all the dependency conditions to spec attributes.
The logic looks complicated, but essentially it accumulates any
constraints associated with particular conditions into a fact associated
with the condition by id. Then, if *any* condition id's fact is True, we
trigger the dependency.
This simplifies the way `declared_dependency()` works -- the dependency
is now declared regardless of whether it is conditional, and the
conditions are handled by `dependency_condition()` facts.
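In plain Python terms, the accumulation works roughly like this (an
illustrative sketch, not the generated ASP):
```python
# Constraints grouped under condition ids, as accumulated for one
# conditional dependency.
condition_constraints = {
    0: ['variant_value("py-torch", "openmp", "True")'],
    1: ['node_compiler("py-torch", "apple-clang")'],
}

def dependency_triggered(satisfied_ids):
    # The dependency holds if *any* condition id is fully satisfied.
    return any(cid in satisfied_ids for cid in condition_constraints)

print(dependency_triggered({1}))  # True
```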
There are currently no places where we do not want to traverse
dependencies in `spec_clauses()`, so simplify the logic by consolidating
`spec_traverse_clauses()` with `spec_clauses()`.
`version_satisfies/2` and `node_compiler_version_satisfies/3` are
generated but need `#defined` directives to avoid "info: atom does not
occur in any rule head" warnings.
Since zsh can load bash completion files natively, it seems reasonable to just turn this on.
The only changes are to switch from `type -t` which zsh doesn't support to using `type`
with a regex and adding a new arm to the sourcing of the completions to allow it to work
for zsh as well as bash.
Could use more bash/dash/etc testing probably, but everything I've thought to try has
worked so far.
Notes:
* unit-test zsh support, fix issues
Specifically fixed word splitting in completion-test, use a different
method to apply sh emulation to zsh loaded bash completion, and fixed
an incompatibility in regex operator quoting requirements.
* compinit now ignores insecure directories
Completion isn't meant to be enabled in non-interactive environments, so
by default compinit will ask the user if they want to ignore insecure
directories or load them anyway. To pass the spack unit tests in GH
actions, this prompt must be disabled, so ignore explicitly until a
better solution can be found.
* debug functions test also requires bash emulation
COMP_WORDS is a bash-ism that zsh doesn't natively support, turn on
emulation for just that section of tests to allow the comparison to
work. Does not change the behavior of the functions themselves since
they are already pinned to sh emulation elsewhere.
* propagate change to .in file
* fix comment and update script based on .in
This PR addresses a number of issues related to compiler bootstrapping.
Specifically:
1. Collect compilers to be bootstrapped while queueing in installer
Compiler tasks currently have an incomplete list in their task.dependents,
making those packages fail to install, as they think they do not have all their
dependencies installed. This PR collects the dependents and sets them on
compiler tasks.
2. Allow bootstrapped compilers to back off target
Bootstrapped compilers may be built with a compiler that doesn't support
the target used by the rest of the spec. Allow them to build with less
aggressive target optimization settings.
3. Support for target ranges
Backing off the target necessitates computing target ranges, so make Spack
handle those properly. Notably, this adds an intersection method for target
ranges and fixes the way ranges are satisfied and constrained on Spec objects.
This PR also:
- adds testing
- improves concretizer handling of target ranges
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Cray's version of MPICH uses a different versioning system than
MPICH, so it has been split into its own package. It is an
external-only package (always provided by the system, never
installed by Spack).
* Kluge to get the gfortran linker to work correctly on Big Sur.
* Fixed formatting error; stetting the other.
* Removed spaces.
* Added comment, mainly to re-trigger Spack CI.
Currently, version range constraints, compiler version range constraints,
and target range constraints are implemented by generating ground rules
from `asp.py`, via `one_of_iff()`. The rules look like this:
```
version_satisfies("python", "2.6:") :- 1 { version("python", "2.4"); ... } 1.
1 { version("python", "2.4"); ... } 1. :- version_satisfies("python", "2.6:").
```
So, `version_satisfies(Package, Constraint)` is true if and only if the
package is assigned a version that satisfies the constraint. We
precompute the set of known versions that satisfy the constraint, and
generate the rule in `SpackSolverSetup`.
We shouldn't need to generate already-ground rules for this. Rather, we
should leave it to the grounder to do the grounding, and generate facts
so that the constraint semantics can be defined in `concretize.lp`.
We can replace rules like the ones above with facts like this:
```
version_satisfies("python", "2.6:", "2.4")
```
And ground them in `concretize.lp` with rules like this:
```
1 { version(Package, Version) : version_satisfies(Package, Constraint, Version) } 1
:- version_satisfies(Package, Constraint).
version_satisfies(Package, Constraint)
:- version(Package, Version), version_satisfies(Package, Constraint, Version).
```
The top rule is the same as before. It makes conditional dependencies and
other places where version constraints are used work properly. Note that
we do not need the cardinality constraint for the second rule -- we
already have rules saying there can be only one version assigned to a
package, so we can just infer `version_satisfies/3` from `version/2`.
This form is also safe for grounding -- If we used the original form we'd
have unsafe variables like `Constraint` and `Package` -- the original
form only really worked when specified as ground to begin with.
- [x] use facts instead of generating rules for package version constraints
- [x] use facts instead of generating rules for compiler version constraints
- [x] use facts instead of generating rules for target range constraints
- [x] remove `one_of_iff()` and `iff()` as they're no longer needed
* ParaView: add new ParaView-5.9.0-RC2 release
Signed-off-by: Vicente Adolfo Bolea Sanchez <vicente.bolea@kitware.com>
* Update var/spack/repos/builtin/packages/paraview/package.py
Indeed, I misunderstood the previous review. This looks good to me too.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
I was keeping the old `clingo` driver code around in case we had to run
using the command line tool instead of through the Python interface.
So far, the command line is faster than running through Python, but I'm
working on fixing that. I found that if I do this:
```python
control = clingo.Control()
control.load("concretize.lp")
control.load("hdf5.lp") # code from spack solve --show asp hdf5
control.load("display.lp")
control.ground([("base", [])])
control.solve(...)
```
It's just as fast as the command line tool. So we can always generate the
code and load it manually if we need to -- we don't need two drivers for
clingo. Given that the python interface is also the only way to get unsat
cores, I think we pretty much have to use it.
So, I'm removing the old command line driver and other unused code. We
can dig it up again from the history if it is needed.
This fixes a logging error observed on macOS 11.0.1 (Big Sur).
When performing a Spack install in debugging mode (e.g.
`spack -d install py-scipy`) Spack is supposed to write a log of
compiler wrapper command line invocations to the current working
directory.
Due to a regression introduced by #18205, these files were
no longer generated, and Spack was printing errors such as
"No such file or directory: None/." This is because the log file
directory gets set from `spack.main.spack_working_dir`, but that
variable is not set in the spawned process.
This PR ensures that the working directory (at the time of the
"spack install" invocation) is persisted to the subprocess.
Fixed a hard tab in the flux-sched edit and unbound hwloc in flux-core
after testing, to better support modern MPIs in spack environments.
Verified that flux-core@0.17 is when hwloc@2: became viable.
Track all the variant values mentioned when emitting constraints, validate them
and emit a fact that allows them as possible values.
This modification ensures that open-ended variants (variants accepting any string
or any integer) are projected to the finite set of values that are relevant for this
concretization.
2020.10.0 is the latest stable release, and the preferred version
for general use (when the user does not specify otherwise).
2020.11.0 is a prototype for the memory kinds feature that is also
available when requested.
Other parts of the concretizer code build up lists of things we can't
know without traversing all specs and packages, and they output these
lists at the very end.
The code for this for variant values from spec literals was intertwined
with the code for traversing the input specs. This only covers the input
specs and misses variant values that might come from directives in
packages.
- [x] move ad-hoc value handling code into spec_clauses so we do it in
one place for CLI and packages
- [x] move handling of `variant_possible_value`, etc. into
`concretize.lp`, where we can automatically infer variant existence
more concisely.
- [x] simplify/clarify some of the code for variants in `spec_clauses()`
* [cmd versions] add spack versions --new flag to only fetch new versions
format
[cmd versions] rename --latest to --newest and add --remote-only
[cmd versions] add tests for --remote-only and --new
format
[cmd versions] update shell tab completion
[cmd versions] remove test for --remote-only --new which gives empty output
[cmd versions] final rename
format
* add brillig mock package
* add test for spack versions --new
* [brillig] format
* [versions] increase test coverage
* Update lib/spack/spack/cmd/versions.py
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Update lib/spack/spack/cmd/versions.py
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Update geant4-data and individual datasets for Geant4 versions 10.6.3
and 10.7.0.
Update geant4 package with new versions 10.6.3 and 10.7.0. Update
dependencies on CLHEP and VecGeom with versions required for Geant4
10.7.
Add GEANT4_INSTALL_PACKAGE_CACHE=OFF to CMake args for 10.6 onwards.
Prevents install of the "package cache" file that contains hard-coded
paths for dependencies, improving relocatability. It relies on Spack
setting CMAKE_PREFIX_PATH correctly in build/use environments that
consume the geant4 package.
`cmake @3.17:` is necessary to handle `cuda @11:` correctly. Earlier versions of `cmake` do not know that `cuda @11:` does not support `compute_30` any more, and list that compute capability as supported. This is handled in `cmake`'s file `Modules/FindCUDA/select_compute_arch.cmake`.
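In package terms the constraint might read like this sketch
(hypothetical package, real directive syntax):
```python
from spack import *  # noqa: F403

class ExampleCudaApp(CMakePackage):  # hypothetical package
    # cmake@3.17: knows that cuda@11: dropped compute_30, so require
    # it whenever a CUDA 11 toolchain is in the DAG.
    depends_on('cmake@3.17:', type='build', when='^cuda@11:')
```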
The bowtie2 Makefile uses `prefix`, not `PREFIX`, for versions before v2.4.
Credit to @tkameyama
Co-authored-by: george.hartzell <george.hartzell@sana.com>
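One way to express the version split, sketched with Spack's satisfies
check inside the package's build method:
```python
def build(self, spec, prefix):
    # bowtie2 spells the install root 'prefix' before v2.4 and
    # 'PREFIX' from v2.4 onward.
    name = 'prefix' if spec.satisfies('@:2.3') else 'PREFIX'
    make('{0}={1}'.format(name, prefix))
```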
* allow install of build-deps from cache via --include-build-deps switch
* make clear that --include-build-deps is useful for CI pipeline troubleshooting
fixes #20055
Compilers with custom versions like gcc@foo are not currently
matched to the appropriate targets. This is because the
version of the spec doesn't match the "real" version of the
compiler.
This PR replicates the strategy used in the original
concretizer to deal with that and tries to detect the real
version of compilers if the version in the spec returns no
results.
* bump up version for rocm-3.10.0 release
* bump up version for rocm-3.10.0
* remove duplicate version addition for 3.9.0
* bump up version for rocm-3.10.0 release
* bump up version for rocm-3.10.0 release
* bump up version for rocm-debug-agent and rocm-dbgapi
* bump up version for rocm-bandwidth-test,rocm-gdb,rocprofiler,roctracer for rocm-3.10.0
* add smoke test
* remove whitespaces
* fix minimum version issue
* reorder decorators & replace make with cmake build
* merge cmake build into one line
* reorganize smoke test function
Co-authored-by: Jieyang Chen <chenj3@ornl.gov>
* added dockerfile for opensuse leap 15
* updated maintainer info
* Update share/spack/docker/leap-15.dockerfile
* move copies and symlinks after package install
also use ${SPACK_ROOT} for spack calls as
this works with buildah
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* New package: py-qsymm
* py-qsymm: Convert to using tarballs from PyPi instead of git checkouts
* py-qsymm: add missing dependencies
* Update var/spack/repos/builtin/packages/py-qsymm/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-qsymm: Fix url to use pypi hidden download interface
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* AOCC-2.3.0 is now added to spack
Change-Id: I18fd9606e6fd9a288cc7dc6c6ead11ea17839a7c
* Added flag and version tests for AOCC-2.3.0
* Addressed review comments
Co-authored-by: vkallesh <Vijay-teekinavar.Kallesh@amd.com>
fixes #20040
Matching compilers among nodes has been prioritized
in #20020. Selection of default variants has been
tuned in #20182. With this setup there is no need
to have an ad-hoc rule for external packages. On
the contrary it should be removed to prefer having
default variant values over more external nodes in
the DAG.
refers #20040
Before this PR optimization rules would have selected default
providers at a higher priority than default variants. Here we
swap this priority and we consider variants that are forced by
any means (root spec or spec in a depends_on clause) the same as
if they were set to a default value.
This prevents the solver from avoiding expected configurations
just because they contain directives like:
depends_on('pkg+foo')
and `+foo` is not the default variant value for pkg.
* OpenBLAS: More Precise GCC Conflicts
Add more precise GCC conflicts so e.g. GCC 6 and GCC 7.5 don't fail.
* Compact syntax
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
As part of pull request #19452, a patch method was added to the mfem
package to delete byte order marks from 3 mfem source files. These
files first appeared in a stable release of mfem as of version
4.1. Consequently, attempts to install mfem 3.4 or mfem 4.0 fail
because no files exist at the path arguments of the filter_file
commands used to execute this operation. Decorating the patch method
so it runs only on mfem versions 4.1 and later resolves the errors
that were thrown due to files not being found.
This commit adds that decorator.
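Sketched from the description above (hypothetical file names; `@when`
is the directive Spack uses to restrict a method by version):
```python
@when('@4.1:')
def patch(self):
    # Strip the UTF-8 byte order mark only on releases that actually
    # ship the affected files.
    for path in ('a.cpp', 'b.cpp', 'c.cpp'):  # hypothetical file names
        filter_file('\ufeff', '', path)
```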
* Qt: add options to disable docs and gui
- Add `~gui` option for minimal build
- Add `+doc` option to install docs, and attempt to disable the implicit
llvm dependency when docs are disabled
- Removes the 'freetype' option which hasn't worked reliably in qt5, as
many of the gui components implicitly rely on freetype.
- Add and test version 5.15 (and skip qtlocation if disabling opengl)
- Refactor some of the dependency logic
I've tested this on linux with 5.15.2 and 4.8.7 in a couple of different
configurations.
* Address reviewer feedback and correctly disable llvm
* Fix qt doc generation
* py-rosdep: add new package
* setuptools needed at run-time
Co-authored-by: Andrew W Elble <aweits@rit.edu>
Co-authored-by: Andrew W Elble <aweits@rit.edu>
* py-rospkg: add new package
* setuptools needed at run-time
Co-authored-by: Andrew W Elble <aweits@rit.edu>
Co-authored-by: Andrew W Elble <aweits@rit.edu>
* py-catkin-pkg: add new package
* setuptools is needed at run-time
Co-authored-by: Andrew W Elble <aweits@rit.edu>
Co-authored-by: Andrew W Elble <aweits@rit.edu>
fixes #19981
This commit adds support for target ranges in directives,
for instance:
conflicts('+foo', when='target=x86_64:,aarch64:')
If any target in a spec body is not a known target the
following clause will be emitted:
node_target_satisfies(Package, TargetConstraint)
when traversing the spec, and a definition of
the clause will then be printed at the end, similarly
to what is done for package and compiler versions.
* spack recipe for gromacs with aocc compiler support
Change-Id: I364aab4a0aa2dcd44bc47eb50c81b2d94c99cfbd
* Removed arch and other associated compilers flags
Added cycle_subcounters variant
Co-authored-by: vkallesh <Vijay-teekinavar.Kallesh@amd.com>
fixes #20019
Before this modification having a newer version of a node came
at higher priority in the optimization than having matching
compilers. This could result in unexpected configurations for
packages with conflict directives on compilers of the type:
conflicts('%gcc@X.Y:', when='@:A.B')
where changing the compiler for just that node is preferred to
lower the node version to less than 'A.B'. Now the priority has
been switched so the solver will try to lower the version of the
nodes in question before changing their compiler.
* llvm-amdgpu: fix the build for version 3.9.0
Adapt the fix-system-zlib-ncurses.patch for version 3.9.0. Without
the patch, llvm-amdgpu builds, but then rocm-device-libs fails with
"cannot find -ltinfo."
Tighten the version requirements for cmake according to the
llvm/CMakeLists.txt file.
* Add a conflict for cmake 3.19.0.
refers #20079
Added docstrings to 'concretize' and 'concretized' to
document the format for tests.
Added tests for the activation of test dependencies.
refers #20040
This modification emits rules like:
provides_virtual("netlib-lapack","blas") :- variant_value("netlib-lapack","external-blas","False").
for packages that provide virtual dependencies conditionally instead
of a fact that doesn't account for the condition.
* intel-tbb: patch for arm64 on macOS
as submitted upstream and used in homebrew
* intel-tbb: check patchable versions
* intel-tbb: avoid patch breakage when 2021.1 is released
2021.1-beta05 would be considered newer than 2021.1
* Add the 'exciting' package.
Version 14 (latest available) is defined.
An as-yet-unpublished patch (dfgather.patch) from the developers is also
included.
* fixed flake8 errors (I *thought* I had already gotten them! OOPS!)
* Update var/spack/repos/builtin/packages/exciting/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fixed install method to just do the install, and no build method is needed.
* *Actually* added the lapack dependency!
* removed variant from blas dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fix: leading . is not needed in extension kwarg
* mfem: add support for NVIDIA AmgX
fix: proper spacing
* mfem: use conflict to indicate that AmgX is expected to depend on CUDA
fixes #19966
Global arrays supports GCC 10 since version 5.7.1,
therefore a conflict has been added to prevent old
releases from erroring at build time.
Removed the 'blas' and 'lapack' variants since
BLAS and LAPACK are always a dependency, and
if not specified during configure, a version
of these APIs vendored with Global Arrays is
built.
Fixed a few options in configuration.
The point of this variant is to give the end user an option to use system
installed fabrics such as mofed instead of upstream fabrics such as rdma-core.
This was found to avoid run time errors on some systems.
Co-authored-by: nithintsk <nithintsk@github.com>
This PR fixes two problems with clang/llvm's version detection. clang's
version output looks like this:
```
clang version 11.0.0
Target: x86_64-unknown-linux-gnu
```
This caused clang's version to be misdetected as:
```
clang@11.0.0
Target:
```
This resulted in errors when trying to actually use it as a compiler.
When using `spack external find`, we couldn't determine the compiler
version, resulting in errors like this:
```
==> Warning: "llvm@11.0.0+clang+lld+lldb" has been detected on the system but will not be added to packages.yaml [reason=c compiler not found for llvm@11.0.0+clang+lld+lldb]
```
Changing the regex to only match until the end of the line fixes these
problems.
Fixes: #19473
* Updated the cuDNN recipe to generate the proper version names for only
the architecture that you are on. This prevents the concretizer from
selecting a source code version that is incompatible with your current
architecture. Additionally, add constraints to ensure that the
corresponding CUDA version is properly set as well.
* Added maintainer
* Fixed renaming for darwin systems
* Fixed flake8
* Fixed flake8
* Fixed range typo
* Update var/spack/repos/builtin/packages/cudnn/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fixed style issues
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* seems to have been introduced erroneously by users using gitk-based
workflows. This should be handled by the git package
* fixes build problems on OSX bigsur
* charmpp: various fixes
- change URLs to https
- address deprecated/renamed versions
- make it build with the cmake build system
* flake8
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This adds a new `mark` command that can be used to mark packages as either
explicitly or implicitly installed. Apart from fixing the package
database after installing a dependency manually, it can be used to
implement upgrade workflows as outlined in #13385.
The following commands demonstrate how the `mark` and `gc` commands can be
used to only keep the current version of a package installed:
```console
$ spack install pkgA
$ spack install pkgB
$ git pull # Imagine new versions for pkgA and/or pkgB are introduced
$ spack mark -i -a
$ spack install pkgA
$ spack install pkgB
$ spack gc
```
If there is no new version for a package, `install` will simply mark it as
explicitly installed and `gc` will not remove it.
Co-authored-by: Greg Becker <becker33@llnl.gov>
Users can add test() methods to their packages to run smoke tests on
installations with the new `spack test` command (the old `spack test` is
now `spack unit-test`). `spack test` is environment-aware, so you can
`spack install` an environment and then run `spack test run` to run smoke
tests on all of its packages. Historical test logs can be perused with
`spack test results`. Includes generic smoke tests for MPI
implementations, C, C++, and Fortran compilers, as well as specific
smoke tests for 18 packages.
Inside the test method, individual tests can be run separately (and
continue to run best-effort after a test failure) using the `run_test`
method. The `run_test` method encapsulates finding test executables,
running and checking return codes, checking output, and error handling.
This handles the following trickier aspects of testing with direct
support in Spack's package API:
- [x] Caching source or intermediate build files at build time for
use at test time.
- [x] Test dependencies.
- [x] packages that require a compiler for testing (such as library only
packages).
See the packaging guide for more details on using Spack testing support.
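A package-side sketch of the new hook (hypothetical executable and
output):
```python
def test(self):
    # run_test locates the executable, runs it, checks the return code
    # and output, and continues best-effort if this check fails.
    self.run_test('example',           # hypothetical executable name
                  options=['--version'],
                  expected=['1.0'],    # hypothetical output fragment
                  purpose='check that example reports its version')
```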
Included is support for package.py files for virtual packages. This does
not change the Spack interface, but is a major change in internals.
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Co-authored-by: wspear <wjspear@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added -level_zero -rocm -opencl flags and sha256 for TAU v2.30.
* Removed the depends_on clause for OpenCL and added a variant for OneAPI level_zero.
* remove depends_on rocm
* remove depends_on rocprofiler
Co-authored-by: eugeneswalker <eugenesunsetwalker@gmail.com>
The deprecatedProperties custom validator can now accept a function
to compute a better error message.
Improve the error/warning message for deprecated properties.
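The shape of the change, sketched with hypothetical names:
```python
def warn_deprecated(message_or_fn, properties):
    # The message may be a fixed string or a callable that computes a
    # better message from the offending properties.
    msg = message_or_fn(properties) if callable(message_or_fn) else message_or_fn
    print('Warning: ' + msg)

warn_deprecated(lambda ps: 'deprecated properties: ' + ', '.join(ps),
                ['version', 'variants'])
```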
As of #18260, `spack load` and `spack env activate` now use
`prefix_inspections` from the modules configuration to decide
how to modify environment variables.
This updates the modules configuration documentation to describe
how to update environment variables with the `prefix_inspections`
section. This also updates the `spack load` and environments
documentation to refer to the new `prefix_inspections` documentation.
`spack load` and `spack env activate` now use the prefix inspections
defined in `modules.yaml`. This allows users to customize/override
environment variable modifications if desired.
If no `prefix_inspections` configuration is present, Spack uses the
values in the default configuration.
This PR reworks a few attributes in the container subsection of
spack.yaml to permit the injection of custom base images when
generating containers with Spack. In more detail, users can still
specify the base operating system and Spack version they want to use:
spack:
container:
images:
os: ubuntu:18.04
spack: develop
in which case the generated recipe will use one of the Spack images
built on Docker Hub for the build stage and the base OS image in the
final stage. Alternatively, they can specify explicitly the two
base images:
spack:
container:
images:
build: spack/ubuntu-bionic:latest
final: ubuntu:18.04
and it will be up to them to ensure their consistency.
Additional changes:
* This commit adds documentation on the two approaches.
* Users can now specify OS packages to install (e.g. with apt or yum)
prior to the build (previously this was only available for the
finalized image).
* Handles to avoid an update of the available system packages have been
added to the configuration to facilitate the generation of recipes
permitting deterministic builds.
This commit addresses the case of concretizing a root spec with a
transitive conditional dependency on a virtual package, provided
by an external. Before these modifications default variant values
for the dependency bringing in the virtual package were not
respected, and the external package providing the virtual was added
to the DAG.
The issue stems from two facts:
- Selecting a provider has higher precedence than selecting default variants
- To ensure that an external is preferred, we used a negative weight
To solve it we shift all the providers weight so that:
- External providers have a weight of 0
- Non-external providers have a weight of 10 or more
Using a weight of zero for external providers means that having
an external provider, if present, or not having a provider at all
has the same effect on the higher-priority minimization.
Also fixed a few minor bugs in concretize.lp, that were causing
spurious entries in the final answer set.
Cleaned concretize.lp from leftover rules.
If the default of a multi-valued variant is set to
multiple values, either in package.py or in packages.yaml,
we need to ensure that all the values are present in the
concretized spec.
Since each default value has a weight of 0 and the
variant value is set implicitly by the concretizer
we need to add a rule to maximize on the number of
default values that are used.
This commit introduces a new rule:
real_node(Package) :- not external(Package), node(Package).
that makes it possible to distinguish between an external node and a
real node whose dependencies shouldn't be trimmed. It solves the
case of concretizing ninja with an external Python.
`node_compiler_hard()` means that something explicitly asked for a node's
compiler to be set -- i.e., it's not inherited, it's required. We're
generating this in spec_clauses even for specs in rule bodies, which
results in conditions like this for optional dependencies:
In py-torch/package.py:
depends_on('llvm-openmp', when='%apple-clang +openmp')
In the generated ASP:
declared_dependency("py-torch","llvm-openmp","build")
:- node("py-torch"),
variant_value("py-torch","openmp","True"),
node_compiler("py-torch","apple-clang"),
node_compiler_hard("py-torch","apple-clang"),
node_compiler_version_satisfies("py-torch","apple-clang",":").
The `node_compiler_hard` there means we would have to *explicitly* set
py-torch's compiler to trigger the llvm-openmp dependency, rather than
just letting it be set by preferences. This is wrong; the dependency
should be there regardless of how the compiler was set.
- [x] remove fn.node_compiler_hard() call from spec_clauses when
generating rule body clauses.
If the version list passed to one_of_iff is empty, it still generates a
rule like this:
node_compiler_version_satisfies("fujitsu-mpi", "arm", ":") :- 1 { } 1.
1 { } 1 :- node_compiler_version_satisfies("fujitsu-mpi", "arm", ":").
The cardinality rules on the right and left above are never
satisfiable, so these rules do nothing.
- [x] Skip generating any rules at all for empty version lists.
As reported, conflicts with compiler ranges were not treated
correctly. This commit adds tests to verify the expected behavior
for the new concretizer.
The new rules to enforce a correct behavior involve:
- Adding a rule to prefer the compiler selected for
the root package, if no other preference is set
- Give a strong negative weight to compiler preferences
expressed in packages.yaml
- Maximize on compiler AND compiler version match
Variants of this kind don't have a list of possible
values encoded in the ASP facts. Since all we have
is a validator, the list of possible values includes
just the default value and possibly the value passed
from packages.yaml or the CLI.
This is done after the builder has actually built
the specs, to respect the semantics used with the
old concretizer.
Later we could move this to the solver as
a multivalued variant.
This is done after the builder has actually built
the specs, to respect the semantics used with the
old concretizer.
A better approach is to substitute the spec
directly in concretization.
The "none" variant value cannot be combined with
other values.
The '*' wildcard matches anything, including "none".
It's thus relevant in queries, but disregarded in
concretization.
- The test on concretization of anonymous dependencies
has been fixed by raising the expected exception.
- The test on compiler bootstrap has been fixed by
updating the version of GCC used in the test.
Since gcc@2.0 does not support targets later than
x86_64, the new concretizer was looking for a
non-existing spec, i.e. it was correctly trying
to retrieve 'gcc target=x86_64' instead of
'gcc target=core2'.
- The test on gitlab CI needed an update of the target
This commit adds support for specifying rules in
packages.yaml that refer to virtual packages.
The approach is to normalize in memory each
configuration and turn it into an equivalent
configuration without rules on virtuals. This
is possible if the set of packages to be handled
is considered fixed.
The weight of the target used in concretization is, in order:
1. A specific per package weight, if set in packages.yaml
2. Inherited from the parent, if possible
3. The default target weight (always set)
Generate facts on externals by inspecting
packages.yaml. Added rules in concretize.lp
Added extra logic so that external specs
disregard any conflict encoded in the
package.
In ASP this would be a simple addition to
an integrity constraint:
:- c1, c2, c3, not external(pkg)
Using the the Backend API from Python it
requires some scaffolding to obtain a default
negated statement.
Conflict rules from packages are added as integrity
constraints in the ASP formulation. Most of the code
to generate them has been reused from PyclingoDriver.rules
The new concretizer and the old concretizer solve constraints
in a different way. Here we ensure that a SpackError is raised,
instead of a specific error that made sense in the old concretizer
but probably not in the new.
Instead of python callbacks, use cardinality constraints for package
versions. This is slightly faster and has the advantage that it can be
written to an ASP program to be executed *outside* of Spack. We can use
this in the future to unify the pyclingo driver and the clingo text
driver.
This makes use of add_weight_rule() to implement cardinality constraints.
add_weight_rule() only has a lower bound parameter, but you can implement
a strict "exactly one of" constraint using it. In particular, we want to
define:
1 {v1; v2; v3; ...} 1 :- version_satisfies(pkg, constraint).
version_satisfies(pkg, constraint) :- 1 {v1; v2; v3; ...} 1.
And we do that like this, for every version constraint:
atleast1(pkg, constr) :- 1 {version(pkg, v1); version(pkg, v2); ...}.
morethan1(pkg, constr) :- 2 {version(pkg, v1); version(pkg, v2); ...}.
version_satisfies(pkg, constr) :- atleast1, not morethan1(pkg, constr).
:- version_satisfies(pkg, constr), morethan1.
:- version_satisfies(pkg, constr), not atleast1.
v1, v2, v3, etc. are computed on the Python side by comparing every
possible package version with the constraint.
Computing things like this has the added advantage that if v1, v2, v3,
etc. comprise *all* possible versions of a package, we can just omit the
rules for the constraint under consideration. This happens pretty
frequently in the Spack mainline.
- [x] Solver now uses the Python interface to clingo
- [x] can extract unsatisfiable cores from problems when things go wrong
- [x] use Python callbacks for versions instead of choice rules (this may
ultimately hurt performance)
There are now three parts:
- `SpackSolverSetup`
- Spack-specific logic for generating constraints. Calls methods on
`AspTextGenerator` to set up the solver with a Spack problem. This
shouldn't change much from solver backend to solver backend.
- ClingoDriver
- The solver driver provides methods for SolverSetup to generate an ASP
program, send it to `clingo` (run as an external tool), and parse the
output into function tuples suitable for `SpecBuilder`.
- The interface is generic and should not have to change much for a
driver for, say, the Clingo Python interface.
- SpecBuilder
- Builds Spack specs from function tuples parsed by the solver driver.
The original implementation was difficult to read, as it only had
single-letter variable names. This converts all of them to descriptive
names, e.g., P -> Package, V -> Virtual/Version/Variant, etc.
To handle unknown compilers properly in tests (and elsewhere), we need to
add unknown compilers from the spec to the list of possible compilers.
Rework how the compiler list is generated and include compilers from
specs if the existence check is disabled.
Specs like hdf5 ^mpi were unsatisfiable because we added a requirement
for `node("mpi").`. This can't be resolved because "mpi" is not a
package.
- [x] Introduce `virtual_node()`, which says *some* provider must be in
the DAG.
This adds compiler flags to the ASP solve so that we can have conditions
based on them in the solve. But, it keeps order out of the solve to
avoid unneeded complexity and combinatorial explosions.
The solver determines which flags are on a spec, but the order is
determined by DAG precedence (children's flags take precedence over
parents' and are added on the right) and order (the order in which
flags were specified on the command line is respected).
The solver is responsible for determining when to propagate flags, when
to inheit them from other nodes, when to take them from compiler
preferences, etc.
Weight microarchitectures and prefer more recent ones. Also disallow
nodes where the compiler does not support the selected target.
We should revisit this at some point as it seems like if I play around
with the compiler support for different architectures, the solver runs
very slowly. See notes in comments -- the bad case was gcc supporting
broadwell and skylake with clang maxing out at haswell.
We didn't have a cardinality constraint for multi-valued variants, so the
solver wasn't filling them in.
- [x] add a requirement for at least one value for multi-valued variants
Variants like `cpu_target` on `openblas` don't have defined values, but
they have a default. Ensure that the default is always a possible value
for the solver.
Spack was generating the same dependency constraints twice in the output ASP:
```
declared_dependency("abinit", "hdf5", "link")
:- node("abinit"),
variant_value("abinit", "mpi", "True"),
variant_value("abinit", "mpi", "True").
```
This was because `AspFunction` was modifying itself when called.
- [x] fix `AspFunction` so that every call returns a new object
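The fix follows the usual immutable-call pattern, roughly (a simplified
sketch of the class, not the actual code):
```python
class AspFunction(object):
    """Simplified sketch: calling must not mutate the function head."""

    def __init__(self, name, args=()):
        self.name = name
        self.args = tuple(args)

    def __call__(self, *args):
        # Return a fresh object instead of appending to self.args, so
        # reusing one head cannot accumulate stale arguments.
        return AspFunction(self.name, self.args + args)

fn = AspFunction('node')
print(fn('abinit').args)  # ('abinit',)
print(fn('hdf5').args)    # ('hdf5',) -- unaffected by the first call
```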
- [x] Add support for packages.yaml and command-line compiler preferences.
- [x] Rework compiler version propagation to use optimization rather than
hard logic constraints
Technically the ASP output order does not matter, but it's hard to diff
two different solve formulations unless we order it.
- [x] make sure ASP output is emitted in a deterministic order (by
sorting all hash keys)
This needs more thought, as I am pretty sure the weights are not correct.
Or, at least, I'm not convinced that they do what we want in all cases.
See note in concretize.lp.
Solver now prefers newer versions like the old concretizer. Prefer
package preferences from packages.yaml, preferred=True, package
definition, and finally each version itself.
Competition output only prints out one model, so we do not have to
unnecessarily parse all the non-optimal models. We'll just look at the
best model and bring that in.
In practice, this saves a lot of JSON parsing and spec construction time.
Clingo actually has an option to output JSON -- use that instead of
parsing the raw output ourselves.
This also allows us to pick the best answer -- modify the parser to
*only* construct a spec for that one rather than building all of them
like we did before.
- Instead of using default logic, handle variant defaults by minimizing
the number of non-default variants in the solution.
- This actually seems to be pretty fast, and it fixes the long-standing
issue that writing this:
spack install hdf5 ^mpich
will fail if you don't specify hdf5+mpi. With optimization and
allowing enums to be enumerated, the solver seems to be able to quickly
discover that +mpi is the only way hdf5 can depend on mpich, and it
forces the switch to be thrown.
Use '1 { version(x); version(y); version(z) } 1.' instead of declaring
conflicts for non-matching versions. This keeps the sense of version
clauses positive, which will allow them to be used more easily in
conditionals later.
Also refactor `spec_clauses()` method to return clauses that can be used
in conditions, etc. instead of just printing out facts.
- This handles setting the compiler and falling back to a default
compiler, as well as providing default values for compilers/compiler
versions.
- Versions still aren't quite right -- you can't properly override
versions on compiler specs.
- Model architecture default settings and propagation off of variants
- Leverage ASP default logic to set architecture to default if it's not
set otherwise.
- Move logic out of Python and into concretize.lp as first-order rules.
We are relying on default logic in the variant handling in that we set a
default value if we never see `variant_set(P, V, X)`.
- Move the logic for this into `concretize.lp` instead of generating it
for every package.
- For programs that don't have explicit variant settings, clingo warns
that variant_set(P, V, X) doesn't appear in any rule head, because a
setting is never generated.
- Specifically suppress this warning.
- moving the dump logic into spack.solver.asp.solve() allows us to print
out useful debug info sooner
- the prior approach required a successful solve to print out anything.
According to the documentation for spack and pkg-config,
$view/share/pkgconfig should also be a valid place to look
for package config files. This commit ensures that when
`spack env activate $dir` is called, the environment has this
directory in PKG_CONFIG_PATH.
As of #13100, Spack installs the dependencies of a _single_ spec in parallel.
Environments, when installed, can only get parallelism from each individual
spec, as they're installed in order. This PR makes entire environments build
in parallel by extending Spack's package installer to accept multiple root
specs. The install command and Environment class have been updated to use
the new parallel install method.
The specs and kwargs for each *uninstalled* package (when not force-replacing
installations) of an environment are collected, passed to the `PackageInstaller`,
and processed using a single build queue.
This introduces a `BuildRequest` class to track install arguments, and it
significantly cleans up the code used to track package ids during installation.
Package ids in the build queue are now just DAG hashes, as you would expect.
Other tasks:
- [x] Finish updating the unit tests based on `PackageInstaller`'s use of
`BuildRequest` and the associated changes
- [x] Change `environment.py`'s `install_all` to use the `PackageInstaller` directly
- [x] Change the `install` command to leverage the new installation process for multiple specs
- [x] Change install output messages for external packages, e.g.:
`[+] /usr` -> `[+] /usr (external bzip2-1.0.8-<dag-hash>)`
- [x] Fix incomplete environment install's view setup/update and not confirming all
packages are installed (?)
- [x] Ensure externally installed package dependencies are properly accounted for in
remaining build tasks
- [x] Add tests for coverage (if insufficient and we can identify the appropriate, uncovered non-comment lines)
- [x] Add documentation
- [x] Resolve multi-compiler environment install issues
- [x] Fix issue with environment installation reporting (restore CDash/JUnit reports)
This change makes improvements to the `spack ci rebuild` command
which supports running gitlab pipelines on PRs from forks. Much
of this has to do with making sure we can run without the secrets
previously required for running gitlab pipelines (e.g signing key,
aws credentials, etc). Specific improvements in this PR:
Check if spack has precisely one signing key, and use that information
as an additional constraint on whether or not we should attempt to sign
the binary package we create.
Also, if spack does not have at least one public key, add the install
option "--no-check-signature"
If we are running a pipeline without any profile or environment
variables allowing us to push to S3, the pipeline could still
successfully create a buildcache in the artifacts and move on. So
just print a message and move on if pushing either the buildcache
entry or cdash id file to the remote mirror fails.
When we attempt to generate a package or gpg key index on an S3
mirror, and there is nothing to index, just print a warning and
exit gracefully rather than throw an exception.
Support the use of PR-specific mirrors for temporary binary pkg
storage. This will allow quality-of-life improvement for developers,
providing a place to store binaries over the lifetime of a PR, so
that they must only wait for packages to rebuild from source when
they push a new commit that causes it to be necessary.
Replace two-pass install with a single pass and the new option:
--require-full-hash-match. Doing this also removes the need to
save a copy of the spack.yaml to be copied over the one spack
rewrites in between the two spack install passes.
Work around a mirror configuration issue caused by using
spack.util.executable to do the package installation.
* Update pipeline trigger jobs for PRs from forks
Moving to PRs from forks relies on external synchronization script
pushing special branch names. Also secrets will only live on the
spack mirror project, and must be propagated to the E4S project via
variables on the trigger jobs.
When this change is merged, pipelines will not run until we update
the "Custom CI configuration path" in the Gitlab CI Settings, as the
name of the file has changed to better reflect its purpose.
* Arg to MirrorCollection is used exclusively, so add main remote mirror to it
* Compute full hash less frequently
* Add tests covering index generation error handling code
* Add WRF 3.9.1.1 and improve recipe robustness
* Include version 3.9.1.1 as common benchmarking workload
* Fix compilation against recent glibc (detect spack installed libtirpc)
* Detect and handle failed compilation (upstream uses make -i)
* WRF: PR changes round 1
fix build jobs
fix maintainers
fix pkgconfig dependency
use Executable to run compile stage
repair some overzealous autoformatting by black
* WRF: make recipe py26 compatible
* wrf: recipe review changes round 2
* more python 26 fixes
The unattended install using the pre-compiled binaries (tl-install)
needs a .profile file, or it goes into interactive mode and blocks the
install process forever.
* Added guard for setting CUB_DIR to only when cuda variant is true
* Added support for OpenMP on OSX platforms
* Updated the way that LBANN, Hydrogen, and DiHydrogen handle
apple-clang with OpenMP and Clang installed on OS X via brew.
* Fixed bug in spec resolution
* Fixed merge conflict
* Fixed typo
* Fixed flake8
* AMD - Bumped up version for hip-rocclr, rocm-opencl, rocm-smi-lib
* AMD ROCm - HIP update and bump up version to 3.9.0 for rccl,debug agent, hip-rocclr and atmi
* Update package.py
* Update package.py
* Update package.py
* Update var/spack/repos/builtin/packages/hip/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Since #11598 sbang has been installed within the install_tree. This doesn’t play
nicely with install_tree padding, since sbang can’t do its job if it is installed in a
long path (this is the whole point of sbang).
This PR changes the padding specification. Instead of $padding inside paths,
we now have a separate `padding:` field in the `install_tree` configuration.
Previously, the `install_tree` looked like this:
```
/path/to/opt/spack_padding_padding_padding_padding_padding/
    bin/
        sbang
    .spack-db/
    ...
    linux-rhel7-x86_64/
    ...
```
This PR updates things to look like this:
```
/path/to/opt/
    bin/
        sbang
    spack_padding_padding_padding_padding_padding/
        .spack-db/
        ...
        linux-rhel7-x86_64/
        ...
```
So padding is added at the start of all install prefixes *within* the unpadded
root. The database and all installations still go under the padded root.
This ensures that `sbang` is in the shortest possible path while also allowing
us to make long paths for relocatable binaries.
As of #18205, all packages must be pickle-able to be installed by
Spack.
This adds a test to check that each package can be pickled. If any
package fails to pickle, the test keeps going and collects the names
of all failed packages; it then takes the first one that failed and
attempts to re-pickle it, generating the full stack trace for the
failed pickle attempt.
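In outline, the check does something like this sketch (not the exact
test code):
```python
import pickle

def first_unpicklable(packages):
    # Keep going past failures, collecting every package that cannot
    # be pickled.
    failed = []
    for pkg in packages:
        try:
            pickle.dumps(pkg)
        except Exception:
            failed.append(pkg)
    if failed:
        # Re-pickle the first failure so the full stack trace surfaces.
        pickle.dumps(failed[0])
```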
Spack creates a separate process to do package installation. Different
operating systems and Python versions use different methods to create
it but up until Python 3.8 both Linux and Mac OS used "fork" (which
duplicates process memory, file descriptor table, etc.).
Python >= 3.8 on Mac OS prefers creating an entirely new process
(referred to as the "spawn" start method) because "fork" was found to
cause issues (in other words "spawn" is the default start method used
by multiprocessing.Process). Spack was dependent on the particular
behavior of fork to replicate process memory and transmit file
descriptors.
This PR refactors the Spack internals to support starting a child
process with the "spawn" method. To achieve this, it makes the
following changes:
- ensure that the package repository and other global state are
transmitted to the child process
- ensure that file descriptors are transmitted to the child process in
a way that works with multiprocessing and spawn
- make all the state needed for the build process and tests picklable
(package, stage, etc.)
- move a number of locally-defined functions into global scope so that
they can be pickled
- rework tests where needed to avoid using local functions
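The start-method difference is visible with the standard library alone
(a minimal sketch):
```python
import multiprocessing

def build(name):
    print('building', name)

if __name__ == '__main__':
    # 'spawn' starts a fresh interpreter: the child inherits no memory
    # or file descriptors, so everything it needs must be picklable
    # and explicitly transmitted -- the constraint this PR satisfies.
    ctx = multiprocessing.get_context('spawn')
    proc = ctx.Process(target=build, args=('zlib',))
    proc.start()
    proc.join()
```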
This PR also reworks sbang tests to work on macOS, where temporary
directories are deeper than the Linux sbang limit. We make the limit
platform-dependent (macOS supports 512-character shebangs)
See: #14102
In compiler bootstrapping pipelines, we add an artificial dependency
between jobs for packages to be built with a bootstrapped compiler
and the job building the compiler. To find the right bootstrapped
compiler for each spec, we compared not only the compiler spec to
that required by the package spec, but also the architectures of
the compiler and package spec.
But this prevented us from finding the bootstrapped compiler for a
spec in cases where the architecture of the compiler wasn't exactly
the same as the spec. For example, a gcc@4.8.5 might have
bootstrapped a compiler with haswell as the architecture, while the
spec had broadwell. By comparing the families instead of the architecture
itself, we know that we can build the zlib for broadwell with the gcc for
haswell.
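A sketch of the family comparison using the archspec library that
underpins Spack's target model:
```python
import archspec.cpu

haswell = archspec.cpu.TARGETS['haswell']
broadwell = archspec.cpu.TARGETS['broadwell']

# Exact equality is too strict a match for a bootstrapped compiler...
print(haswell == broadwell)                # False
# ...but both targets belong to the same x86_64 family.
print(haswell.family == broadwell.family)  # True
```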
* py-json-get: new package at 1.1.1
* py-json-get: new package at 1.1.1
* r-bigalgebra: new package at 0.8.4
* r-bigalgebra: new package at 0.8.4 with corrections
* Added an additional change to tarball and dependencies
* removing accidentally added file
* Added tarball that uses mirror and removed redundant dependencies
* Fixed version and added dep.
* Updated checksum
* Fixed urls
* Added list_url
Co-authored-by: las_djorton <las_djorton@build.las.iastate.edu>
* Add CUDA support to superlu-dist
* Use spec['cuda'].libs.directories[0] instead of spec['cuda'].prefix.lib
so it works for both lib and lib64
The suggested:
args.append('-DTPL_CUDA_LIBRARIES=' +
spec['cuda'].libs.ld_flags)
did not work because it does not link with cuBLAS.
Currently, full JSON output is the only machine readable option for `spack find`
in an environment.
`spack find --format` is also designed to be machine readable, but we print extra
headers in environments.
- [x] don't print headers in `spack find` output when in an environment
* No version of yaml-cpp in spack can build shared AND
static libraries at the same time. So drop the "static"
variant and let "shared" handle that alone.
Or in other words: No version handles the
BUILD_STATIC_LIBS flag.
* The flag for building shared libraries changed from
BUILD_SHARED_LIBS to YAML_BUILD_SHARED_LIBS at some
point. So just pass both flags.
* Use the newer define_from_variant.
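The resulting cmake_args might look roughly like this excerpt sketch:
```python
def cmake_args(self):
    shared = self.spec.satisfies('+shared')
    return [
        # Pass both spellings, since the flag was renamed upstream.
        self.define('BUILD_SHARED_LIBS', shared),
        self.define_from_variant('YAML_BUILD_SHARED_LIBS', 'shared'),
    ]
```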
* [py-cuml] created template
* [py-cuml] setup phases and added build_directory
* [py-cuml] added dependencies
* [py-cuml] depends on libcumlprims
* [py-cuml] requiring multigpu version
* [py-cuml] figuring out the best way to get concretization to happen cleanly
* [py-cuml] removed singlegpu variant from libcuml
* [py-cuml] depends on py-cudf
* [py-cuml] depends on cupy
* [py-cuml] fixed typo
* [py-cuml] depends on py-scipy
* [py-cuml] depends on py-treelite
* [py-cuml] py-treelite is now a variant of treelite
* [py-cuml] depends on joblib
* [py-cuml] depends on py-scikit-learn
* [py-cuml] flake8
* [py-cuml] added homepage and description. removed fixmes
* [py-cuml] updated checksum
* Enabling build of v1.9.x development branch.
* v1.8.1 is the preferred (stable) version.
* Fixing code style
Co-authored-by: Filippo Spiga <fspiga@nvidia.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [podio] put python dir in python path
* Update var/spack/repos/builtin/packages/podio/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
When invoking "buildcache list" multiple times, the command was
reporting no specs in the cache the second time around. The
presence of an up-to-date index was causing the internal
representation to be left uninitialized.
* tskit package
* Update var/spack/repos/builtin/packages/tskit/package.py
I can't see any hard requirement for 3.6:
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fixes following PR review
* Update var/spack/repos/builtin/packages/tskit/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Added a command to set up Spack for our tutorial at
https://spack-tutorial.readthedocs.io.
The command does some common operations we need first-time users to do.
Specifically:
- checks out a particular branch of Spack
- deletes spurious configuration in `~/.spack` that might be
left over from prior parts of the tutorial
- adds a mirror and trusts its public key
Version 5.32.0 has been out for quite a while and Linux distributions
are shipping it. I have also done a rebuild of some common packages with
the new version. Let's make it the preferred version.
* amrex: new option names for version > 20.11
* amrex: change option name DIM -> AMReX_SPACEDIM
* Update var/spack/repos/builtin/packages/amrex/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added code to help DiHydrogen find cuDNN and CUB
* Cleaning up dependencies on CUB and adding guards for when newer
versions of CUDA include CUB and it should be excluded.
* Changed Hydrogen to disable half support by default.
* Have LBANN force Hydrogen and DiHydrogen to build without half when the variant is disabled.
* Added explicit variants to ensure that if LBANN is built without CUDA,
Aluminum, or half support, it enforces those constraints for Hydrogen
and DiHydrogen. Cleaned up the use of Python extend versus append in
the LBANN and DiHydrogen recipes.
* Fixed Flake8
* [evtgen] add env var
* Update var/spack/repos/builtin/packages/evtgen/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
See #19784
virtualgl's CMake system looks for a specific libjpeg-turbo include
file that is not present in libjpeg (currently the only other jpeg provider)
* cget package
* Update var/spack/repos/builtin/packages/cget/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/cget/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/cget/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Updates in the LBANN and Aluminum code now allow working with HWLOC
versions 1.11.x and 2.x and up.
* Updating the minimum CMake version to address a pending PR in LBANN
that will require C++17 support and needs CMake to properly separate
the compiler flags from nvcc.
* Clarified the support for different versions of HWLOC in LBANN
Previously, we hardcoded a list of Spack versions which could be used by the containerize command.
This PR removes that list. It's a maintenance burden when cutting a release, and prevents older versions of Spack from creating containers to be used by newer versions.
* filtlong package
* Update var/spack/repos/builtin/packages/filtlong/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* bump up version for 3.9.0 release
* update version of rocminfo for rocm-3.9.0
* bump up rocm-cmake version for rocm-3.9.0
* bump up rocm-smi and rocm-device-libs for 3.9.0
* bump up comgr version for rocm-3.9.0
* bump rocm-clang-ocl for rocm-3.9.0
* bump hipify-clang for rocm-3.9.0
* Trilinos: Add STRUMPACK dependency
* break long lines, flake8 cleanup
* Use spec['strumpack'].libs.directories[0]
instead of spec['strumpack'].prefix.lib
because libraries may be in lib or lib64.
Likewise use headers.directories[0] instead of prefix.include.
Suggested by adamjstewart
* allow UCX since v1.7 to build with more recent versions of gdrcopy (v2.x)
* Update var/spack/repos/builtin/packages/ucx/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
There was an error introduced in #19209 where `full_hash()` and
`build_hash()` are called on older specs that we've read in from the DB;
older specs may not be able to compute these hashes (e.g. if they have
removed patches used in computing the full_hash).
When serializing a Spec, we want to generate the full/build hash when
possible, but we need a mechanism to skip it for Specs that have
themselves been read from YAML (and may not support this).
To get around this ambiguity and to fix the issue, we:
- Add an attribute to the spec called `_hashes_final`, that is `True`
if we can't lazily compute `build_hash` and `full_hash`.
- Set `_hashes_final` to `False` for new specs (i.e., lazily
computing hashes is ok)
- Set `_hashes_final` to `True` for concrete specs read in via
`from_node_dict`, as it may be too late to recompute hashes.
- Compute and write out all hashes in `node_dict_with_hashes` *if
possible*.
Effectively what this means is that we can round-trip specs that are
missing `_build_hash` and `_full_hash` without recomputing them, but for
all new specs, we'll compute them and store them. So Spack should work
fine with old DBs now.
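A condensed sketch of the rule, using the attribute and method names from the description above:
```python
def node_dict_with_hashes(spec):
    # Sketch only: write hashes when they can be computed or reused.
    node = spec.to_node_dict()
    if not spec._hashes_final:
        # New spec: lazily computing build/full hashes is safe.
        node['build_hash'] = spec.build_hash()
        node['full_hash'] = spec.full_hash()
    else:
        # Spec read back via from_node_dict: only reuse stored hashes.
        if spec._build_hash:
            node['build_hash'] = spec._build_hash
        if spec._full_hash:
            node['full_hash'] = spec._full_hash
    return node
```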
* hip: rocminfo is a runtime requirement
* hip: +setup_run_environment, +setup_dependent_run_environment
* hip: run environment: get lib dir using libs.directories[0], not prefix.lib
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This fixes sbang relocation when using old binary packages, and updates
code in `relocate.py`.
There are really two places where we would want to handle an `sbang`
relocation:
1. Installing an old package that uses `sbang` with shebang lines like
`#!/bin/bash $spack_prefix/sbang`
2. Installing a *new* package that uses `sbang` with shebang lines like
`#!/bin/sh $install_tree/sbang`
The second case is actually handled automatically by our text relocation;
we don't need any special relocation logic for new shebangs, as our
relocation logic already changes references to the build-time
`install_tree` to point to the `install_tree` at install-time.
Case 1 was not properly handled -- we would not take an old binary
package and point its shebangs at the new `sbang` location. This PR fixes
that and updates the code in `relocate.py` with some notes.
There is one more case we don't currently handle: if a binary package is
created from an installation in a short prefix that does *not* need
`sbang` and is installed to a long prefix that *does* need `sbang`, we
won't do anything. We should just patch the file as we would for a normal
install. In some upcoming PR we should probably change *all* `sbang`
relocation logic to be idempotent and to apply to any sort of shebang'd
file. Then we'd only have to worry about which files to `sbang`-ify at
install time and wouldn't need to care about these special cases.
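For illustration, an idempotent rewrite could look roughly like this (paths and helper name purely hypothetical):
```python
def retarget_sbang(shebang_line, new_sbang='/opt/install_tree/bin/sbang'):
    # Hypothetical sketch: any shebang already routed through some sbang,
    # old-style or new-style, is pointed at the current install tree's
    # copy; applying it twice changes nothing.
    if shebang_line.startswith('#!') and shebang_line.rstrip().endswith('/sbang'):
        return '#!/bin/sh {0}\n'.format(new_sbang)
    return shebang_line
```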
* add python-docutils dependency
* adds symlink to script for better compatibility of the py-docutils installation
* Improve post_install phase of py-docutils
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fix review of rdma-core package
* improve formatting of py-docutils package
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [NEW] Added amdfftw, amdlibflame and amdscalapack recipes
Updated base fftw, libflame and netlib-scalapack recipes
to accommodate the above listed AMD Optimizing CPU Libraries
which are a set of numerical routines optimized for AMD platforms.
Updated amdblis spack recipe
amdblis:
1. updated with amdblis 2.2 release
amdfftw:
1. "--enable-single" now work as synonym for "--enable-float"
amdlibflame:
1. Added enable_or_disable_threads() to set value for "--enable-multithreading" flag
Libflame:
1. Added enable_or_disable_threads() to set value for "--enable-multithreading" flag
2. Corrected invocation of "enable_or_disable('threads')"
* Added amd-toolchain-support as maintainers
Added team github account amd-toolchain-support
as maintainers for all the recipes owned by
AMD Optimizing CPU Libraries (AOCL) team
* Incorporated review comments
Updated packages.yaml with aocl components
Handled Flake8 test failures
* Readded accidental removal of stream recipe
amdfftw:
1. Updated the aocc clang selection as per spack standards
fftw:
1. The apple-clang section is currently redundant;
it is already handled by the conflict checks.
* Fix for style and docs/validate (pull_request) test
unnumbered format placeholders from {} to {0}
* changed compiler references to Spack's compiler wrapper:spack_cc, spack_cxx, spack_fc
* Removed 'single' variant from amdfftw recipe
Instead of a conflict for apple-clang + openmp, handled this scenario
via the available feature below:
depends_on('llvm-openmp', when='%apple-clang +openmp')
* Added relevant info for users who prefer to use single precision
* Minor changes on fftw, amdfftw and libflame
amdfftw:
1. Removed escape symbol to the single quotes
2. Rewording the conflict line from Recommended
to Required
fftw:
1. Reordered to follow the recommended sections:
versions, variants, dependencies, providers,
patches
libflame:
1. Added provides entry for 5.1.0 version
* Removed single quote from amdfftw docstring to fix style failures
* camp: changes to support hip build
* hip: add fallback path for external hip to detect other rocm components
Co-authored-by: Greg Becker <becker33@llnl.gov>
fixes #15183
- Moved the container related content from
workflows.rst into containers.rst
- Deleted the docker_for_developers.rst file,
since it describes an outdated procedure
Co-authored-by: Axel Huebl <a.huebl@hzdr.de>
Co-authored-by: Omar Padron <omar.padron@kitware.com>
`config.get_config` now caches the results and returns the same
configuration if called multiple times with the same arguments
(i.e. the same section and scope).
As a consequence, it is expected that users will always call
update methods provided in the `config` module after changing
the configuration (even if manipulating it as a Python nested
dictionary). The following two examples should cover most
scenarios:
* Most configuration update logic in the core (e.g., relating to
adding a new compiler) should call `Configuration.update_config`
* Tests that need to change the global configuration should use the
newly-provided `config.replace_config` function.
(if neither of these methods apply, then the essential requirement
is to use a method marked as `_config_mutator`)
Failure to call such a function after modifying the configuration
will lead to unexpected results (e.g. calling `get_config` after
changing the configuration will not reflect the changes since the
first call to get_config).
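A hedged sketch of the expected call pattern, using the names from the description above:
```python
import spack.config

# get_config results are now cached per (section, scope), so edits must
# flow back through a _config_mutator such as update_config for later
# reads to see them.
cfg = spack.config.config                        # global Configuration object
mirrors = cfg.get_config('mirrors', scope='user')
mirrors['foo'] = 'https://bar.com'
cfg.update_config('mirrors', mirrors, scope='user')
```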
* Patched hypre to better add flags based on compiler.
* Update package.py
This file seems to have lots of edits, so the patch may succeed with offsets. Has anyone checked with spack patch to be sure it'll work with versions 2.15 - 2.20?
* "spack install" now has a "--require-full-hash-match" option, which
forces Spack to skip an available binary package when the full hash
doesn't match. Normally only a DAG-hash match is required, which
ensures equivalent Specs, but does not account for changing logic
inside the associated package.
* Add a local binary cache index which tracks specs that have a binary
install available in a remote binary cache. It is updated with
"spack buildcache list" or for a given spec when a binary package
is retrieved for that Spec.
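A minimal usage sketch of the new option (package name illustrative):
```console
$ spack install --require-full-hash-match zlib
```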
In #18394 it was noted that this package should be changed
from a generic "Package" to a "CMakePackage".
This makes a number of things easier
and reuses the common CMake code.
* Added hash values for LBANN v0.101 and Hydrogen v1.5.0. Updated the
LBANN package to be more successful in resolving a legal configuration
of MPI and HWLOC packages. This required the removal of the MPI
virtual package since it is unable to resolve dependencies with
minimum version requirements. As a result, enabling a reasonable
install line for LBANN requires explicit forwarding of MPI
variants to Hydrogen and Aluminum. Due to the lack of variant
forwarding, there are many explicitly replicated dependencies for both
LBANN and Hydrogen. Fixed the error in LBANN where the gpu variant was
replaced by the cuda variant but not all dependencies were updated.
* Fixed the minimum cuDNN version for newer versions of LBANN.
* Added explicit versioning of the MPI libraries for DiHydrogen to avoid
all of the conflicts with minimum required versions of the OpenMPI library.
* Removed explicit MPI versions and went back to using the MPI virtual
dependency. Updated construction of variant forwarding to use
iterative construction of constraints and variants. This exacerbates
the challenges with backtracking in the current concretizer, but
should be fixed in the new concretizer.
* Added support for including the DiHydrogen library in LBANN as well as
support for the distributed convolution (DistConv) parallel
algorithms. Also include support for building with half precision.
* Moving dependencies around
* Added conflict statement to ensure that the variant dihydrogen is
required for distconv.
* Removed the preferred field
* Fixed Flake8 and cuDNN version bounds
* gemini dep py-cyordereddict +
* dep ipyparallel +
* py-ipython-cluster +
* py-cyordereddict URL+dep fix
* Update var/spack/repos/builtin/packages/py-cyordereddict/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-ipython-cluster-helper dep fix
* py-ipyparallel dep fix
* ipython-cluster-helper debug
* ipython-cluster-helper debug
* ipyparallel dep fix
* ipython-cluster-helper dep fix
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added initial version of package and patch for precice-bindings
* updated package name
* cleanup in script; added version requirement to cython
* Remove unnecessary part of patch
* cleanup package
* added initial version of package and patch for precice-bindings
* updated package name
* cleanup in script; added version requirement to cython
* Remove unnecessary part of patch
* cleanup package
* update style of package
* reformatting to fulfill style requirements
* reformatting again
* fixing some of the issues mentioned in the PR; working on fixing install stage
* readded py-wheel as dependency
Co-authored-by: Benjamin Rüth <benjamin.rueth@tum.de>
Spack has a fallback for hash checking with md5sums that may not be
supported in earlier versions of Python 3.x. The comments in the
Spack code acknowledge that this is best effort and may fail, but
recent vermin checks (running as part of our CI) reject this. This
disables vermin checks for that fallback.
* enable flatcc to be built with gcc/9.X.X
* add static option for building libyogrt
* cleanup
* Initial working version
* rework new oneapi wrappers
* tested and removed my initials from source
* cleanup
* Update __init__.py
* remove whitespace
* working now with mods for testing and detection. Detection for oneapi is working, but the entry needs to be modified to add the link path for libimf.so. Cleared cruft for old Intel versions
* fixed some formatting
* cleanup
* flake8 cleanup
* flake8
* fixed syntax of compiler version detection tests
* fixed syntax of compiler version detection tests
modified: detection.py
* fix typo
* fixes for compilers tests
* remove erroneous tests for outdated -std= flags, remove ifx version check (output won't parse)
Co-authored-by: Frank Willmore <willmore@anl.gov>
* Patch CMake version check in Umpire
* Update version constraint for cmake_version_check patch
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add maintainers to Umpire
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [py-torch-nvidia-apex] added version 201019 and added dependency on py-pybind11
* [py-torch-nvidia-apex] changed versioning format
* [py-torch-nvidia-apex] removed redundant version condition
* [py-torch-nvidia-apex] removed condition on dependency
* Adding AOCC support for M4
* combining 4 if-statements into a single if-statement with or conditions
* keeping parentheses around the or expressions
* fixing flake8 test failures
Co-authored-by: mohan babu <mohbabul@amd.com>
For some mysterious reason Qt4 stopped building the xmlpatterns
component, needed by some downstream packages. With this patch, the
component successfully builds with
```
qt@4.8.7~dbus~debug~examples~framework~gtk~opengl~phonon+shared~sql~ssl~tools~webkit freetype=none arch=linux-rhel7-haswell %gcc@10.2.0
```
`sbang` now lives at https://github.com/spack/sbang, and it has its own
test suite that's more extensive than what's in Spack. We'll leave sbang
tests to sbang from now on, and just vendor `bin/sbang` directly.
Remaining `sbang` tests have to do with patching files, not with
`sbang`'s functionality.
This update also fixes a bug with `sbang` and multiple command line
arguments that was introduced in #19529. See:
* https://github.com/spack/sbang/pull/1
* https://github.com/spack/sbang/pull/2
- [x] include latest `sbang` from https://github.com/spack/sbang
- [x] remove old `sbang` tests from Spack
- [x] update `COPYRIGHT` and `cmd/license.py`
* Update package.py
Remove breaking patch.
Patching the shebang is useless if the dependencies are properly loaded before execution. Furthermore, the long paths that can be generated when installing with Spack can exceed the maximum shebang length.
* Add newer versions of strelka.
* [libcudf] created template
* [libcudf] depends on cuda
* [libcudf] set cmake dir
* [libcudf] depends on boost
* [libcudf] depends on py-pyarrow
* [libcudf] depends on librmm
* [libcudf] depends on dlpack
* [libcudf] added more dependency information from https://github.com/rapidsai/libcudf/blob/v0.15.0/CONTRIBUTING.md#customizing-the-build
* [libcudf] removed python dependencies
* [libcudf] fixed url that got mangled in package renaming
* [libcudf] added default build options from build.sh
* [libcudf] added version 0.16.0a
* [libcudf] removed version 0.16.0a as it's an alpha version
* [libcudf] added homepage and description. removed fixmes
* [libcudf] flake8
* [libcudf] arrow requires +orc
* [libcudf] requires +parquet
* [libcudf] checksum changed
`sbang` was previously a bash script but did not need to be. This
converts it to a plain old POSIX shell script and adds some options. This
also allows us to simplify sbang shebangs to `#!/bin/sh /path/to/sbang`
instead of `#!/bin/bash /path/to/sbang`.
The new script passes shellcheck (with a few exceptions noted in the file)
- [x] `SBANG_DEBUG` env var enables printing what *would* be executed
- [x] `sbang` checks whether it has been passed an option and fails gracefully
- [x] `sbang` will now fail if it can't find a second shebang line, or if
the second line happens to be sbang (avoid infinite loops)
- [x] add more rigorous tests for `sbang` behavior using `SBANG_DEBUG`
On Cori (Cray-XC40), I need to pass the entire path to the compilers; this is what is saved in c_compiler, cpp_compiler, and f_compiler. Therefore, when only the binary name is provided for the MPI wrappers, I run into the same issue. There is no drawback to passing the entire path, since it is set by the user through the compiler path anyway.
* added -lpthread flag in kv/tests/CMakeLists.txt
* Update var/spack/repos/builtin/packages/papyrus/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
PHP supports an initial shebang, but its comment syntax can't handle our 2-line
shebangs. So, we need to embed the 2nd-line shebang comment to look like a
PHP comment:
<?php #!/path/to/php ?>
This adds patching support to the sbang hook and support for
instrumenting php shebangs.
This also patches `phar`, which is a tool used to create php packages.
`phar` itself has to add sbangs to those packages (as phar archives
apparently contain UTF-8, as well as binary blobs), and `phar` sets a
checksum based on the contents of the package.
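For illustration, the first two lines of a patched PHP script would look like this (paths illustrative):
```
#!/bin/sh /path/to/sbang
<?php #!/path/to/php ?>
```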
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* New package: py-minrpc
* Delete package.py.save
* Update var/spack/repos/builtin/packages/py-minrpc/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
`sbang` is not always accessible to users of packages, e.g., if Spack
is installed in someone's home directory and they deploy software
for others. Avoid this by:
1. Always installing the `sbang` script in the `install_tree`
2. Relocating binaries to point to the copy in the `install_tree`
and not the one in the Spack installation.
This PR also:
- ensures that `sbang` is reinstalled if it is modified in Spack
- adds tests
- updates the way `gobject-introspection` patches Makefiles
to support `sbang`
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Add BLT package
* Switch install function
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add type='run' to cmake dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add git attribute to BLT
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* cp2k: locate correct include dir when using intel-parallel-studio+mkl for fftw-api
* libxc: drop arch-specific intel opt. flags
fixes #17794
* libint: drop arch-specific intel opt. flags, always build Fortran example with FC
fixes #17509
* package/pmdk add variants, version 1.9
* add dependency
* Update var/spack/repos/builtin/packages/pmdk/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: sbulut <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The logic in `config.py` merges lists correctly so that list elements
from higher-precedence config files come first, but the way we merge
`dict` elements reverses the precedence.
Since `mirrors.yaml` relies on `OrderedDict` for precedence, this bug
causes mirrors in lower-precedence config scopes to be checked before
higher-precedence scopes.
We should probably convert `mirrors.yaml` to use a list at some point,
but in the meantime here's a fix for `OrderedDict`.
- [x] ensuring that keys are ordered correctly in `OrderedDict` by
re-inserting keys from the destination `dict` after adding the keys from
the source `dict`.
- [x] also simplify the logic in `merge_yaml` by always reinserting
common keys -- this preserves mark information without all the special
cases, and makes it simpler to preserve insertion order.
Assuming a default spack configuration, if we run this:
```console
$ spack mirror add foo https://bar.com
```
Results before this change:
```console
$ spack config blame mirrors
--- mirrors:
/Users/gamblin2/src/spack/etc/spack/defaults/mirrors.yaml:2 spack-public: https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/
/Users/gamblin2/.spack/mirrors.yaml:2 foo: https://bar.com
```
Results after:
```console
$ spack config blame mirrors
--- mirrors:
/Users/gamblin2/.spack/mirrors.yaml:2 foo: https://bar.com
/Users/gamblin2/src/spack/etc/spack/defaults/mirrors.yaml:2 spack-public: https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/
```
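A condensed sketch of the re-insertion described above (plain dicts for illustration; recursive value merging elided):
```python
def merge_ordered(source, dest):
    # Higher-precedence (source) keys are added first; the destination's
    # keys are then re-inserted so they come after, preserving precedence.
    merged = {}
    for key in source:
        merged[key] = source[key]
    for key in dest:
        if key not in merged:
            merged[key] = dest[key]
    return merged
```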
Shell integration no longer requires setting `SPACK_ROOT`, so we can
simplify the documentation on it. The docs on shell support and using
packages are getting a bit old, and information on `spack load` (which
seems to be everyone's most common way of using packages) is hard to
find.
This PR simplifies the shell documentation to remove SPACK_ROOT, and also
moves some sections around for clearer organization.
- [x] make docs on sourcing setup scripts clearer and simpler
- [x] introduce `spack load` early in the basic usage guide instead of
burying it in the module docs
- [x] clean up module docs so that spack module tcl loads comes later
- [x] be clear about the different ways to use packages so that the users
can find the docs better.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
GROMACS still requires a version of FFTW when compiling it to utilize
NVIDIA GPUs. In fact, the type of calculation that depends on FFTW --
Particle-Mesh Ewald (PME) -- is generally run on the host system's CPUs,
even when GPUs are available.
* New package: py-rise
* Fix URL and add description
* Update var/spack/repos/builtin/packages/py-rise/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* gaussian-src: initial commit to build from source
* do not install the source, to ensure it is not accidentally
distributed to users
* set required runtime env vars based on the login.profile
* gaussian-view: update to 6.1.1
PR #19482 updated gcc to only apply the zstd patch until @10.2 but the
releases/gcc-10 branch actually does not contain the patch yet, that is,
gcc@10.3 will most likely have the same problem. Apply the patch for all
10.x releases instead.
* gemini dep -py-bcolz +
* Update var/spack/repos/builtin/packages/py-bcolz/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-bcolz/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-bcolz URL fix
* Update var/spack/repos/builtin/packages/py-bcolz/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
fixes #19476
Module file content is written to file in a
temporary location and read back to be analyzed
by unit tests.
The approach of patching "open" and writing to a
StringIO in memory has been abandoned, since
over time other operations relying on the
filesystem have been added to the module file
generator.
* [py-pyarrow] telling setup.py that we want cuda support
* [py-pyarrow] added orc variant
* [py-pyarrow] passing the orc variant down the line
* [py-pyarrow] added variant description
* ocl-icd: fix build problems
* New package: opencl-c-headers
* New package: opencl-clhpp
* New bundled package: opencl-headers
- bundle C and C++ header files
* ocl-icd: Add +headers variant to use this as opencl provider
* ocl-icd: add new upstream release 2.2.13
* ocl-icd: add asciidoc-py3 and xmlto dependency needed for manpage generation
* ocl-icd and opencl-headers provides OpenCL 3.0
- also add more explicit version providing for older ocl-icd versions
* opencl-headers: add maximum of supported opencl versions for all versions
* opencl-headers: there are no final releases with OpenCL 3.0 yet
* [orc] created template
* [orc] depends on maven
* [orc] building with -fPIC
* [orc] fixed name of c flags option
* [orc] depends on openssl
* [orc] added dependencies and disabled installation of vendored libs
* [orc] disabling hdfs
* [orc] depending on specific versions of dependencies
* [orc] no building of third party libs
* [orc] helping cmake find the dependencies
* [orc] disabling features that would require static protobuf libraries
* [orc] dependency versions are ranges
* [orc] added homepage and description. removed fixmes
* [orc] flake8
* [orc] switching to compiler-independent code
* r-sf: fix build error
* Update var/spack/repos/builtin/packages/r-sf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f7386.
* Add herwig3
* Prepare fixes based on MR (needs checking)
* Set all dependencies (except python) as build-type
* OK now
* Move import to the top of the file
* Fix dependency name
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
Synchronization on GitHub macOS runners seems to be very slow, and
frequently the foreground/background tests fail due to the race this
causes. This increases the tolerance for slowness a bit more, to allow up
to 4 spurious output lines in the tests.
This should hopefully result in no more false negatives on these tests
for macOS on GitHub.
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f7386.
* Add qgraf
* Update package.py
Changes from review
* Changes from MR
* Fix for URLs containing @ symbol
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Zsh and newer versions of bash have a builtin `which` function that will
show you if a command is actually an alias or a function. For functions,
the entire function is printed, and our `spack()` function is quite long.
Instead of printing out all that, make the `spack()` function a wrapper
around `_spack_shell_wrapper()`, and include some no-ops in the
definition so that users can see where it was created and where Spack is
installed.
Here's what the new output looks like in zsh:
```console
$ which spack
spack () {
: this is a shell function from: /Users/gamblin2/src/spack/share/spack/setup-env.sh
: the real spack script is here: /Users/gamblin2/src/spack/bin/spack
_spack "$@"
return $?
}
```
Note that `:` is a no-op in Bourne shell; it just discards anything after
it on the line. We use it here to embed paths in the function definition
(as comments are stripped).
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f7386.
* Update madgraph to 2.8.1
* Changes from MR
* Changes from MR
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
* Updated blaspp package
* Modified lapackpp for newest release
* Formatting
* Updates to lapackpp package for new version
* Added dependency on cblas
* Removed cblas dependency
* updated to lapackpp
* Added new version for blaspp and lapackpp
* Removed debugging output
* Converted version matching logic to a for loop
* mpich: yaksa configure fix
modified: var/spack/repos/builtin/packages/mpich/package.py
* typo
* python is not needed when building from preconfigured tarballs
* add maintainers
* Added FFLAGS for apple-clang:11
* Added issue #
* Update var/spack/repos/builtin/packages/mpich/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* update version to avoid compile error
* Update var/spack/repos/builtin/packages/r-rgdal/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f7386.
* Add FORM
* Update package.py
Changes from review
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f7386.
* Fixes for thepeg
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
* mfem: specify PETSC_DIR, link correct sundials libraries
* fix: only use PETSC_DIR directly for static builds
* fix: only use sundials nvecmpiplusx for MFEM 4.2+
* Add recipe for qgraf
* Revert "Add recipe for qgraf"
This reverts commit 76783f7386.
* LHAPDF should extend Python to get env variables correct
Co-authored-by: Ivan Razumov <ivan.razumov@cern.ch>
* Adding AOCC compiler to the Spack community
The AOCC compiler system offers a high level of advanced optimizations, multi-threading and processor support that includes global optimization, vectorization, inter-procedural analyses, loop transformations, and code generation. AMD also provides highly optimized libraries, which extract the optimal performance from each x86 processor core when utilized. The AOCC Compiler Suite simplifies and accelerates development and tuning for x86 applications.
* Added unit tests for detection and flags for AOCC
* Addressed reviewer comments w.r.t. version checks and url/checksum-related line lengths
Co-authored-by: Test User <spack@example.com>
* add updated version of py-dnaio
* Add py-setuptools-scm build dependency
* Fine tune the py-xopen dependency constraint
The needed version of xopen does not become specific until v0.4 of
dnaio.
* Set constraint on py-setuptools-scm
The py-setuptools-scm dependency is needed beginning with v0.4.
* updated version of py-cutadapt
* Update dependency specs
* Add py-setuptools-scm build dependency
* More constraint fixes
* Fix version range for py-xopen
* Added tau version 2.29.1 hash
* Update var/spack/repos/builtin/packages/tau/package.py
Make version name match branch name (master)
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add updated version of py-xopen
* Update dependency constraints
* Further refine the python constraints
Also, put them all together.
* Put python constraints at top of list
* New package - Gate
This PR adds the Gate package as well as the ITK dependency.
* Fix flake8 errors
* Be more explicit with CMake options
Make sure CMake values related to variants are explicitly set to either
ON/OFF.
The ITK_USE_MKL flag will turn on the following:
- USE_FFTWD=ON
- USE_FFTWF=ON
- USE_SYSTEM_FFTW=ON
Since the package depends on fftw-api, those options will always be set.
* A collection of tensorflow fixes and updates
* tensorflow 2.3.1 requires the workaround for external protobuf as well
* Update tensorflow-estimator to 2.3.0
* Update tensorboard to 2.3.0
* Update tensorboard-plugin-wit to use actual releases
* Patch that potentially fixes #16073
* add myself to maintainer list
* Changed make command to support new slate build variable 'blas='
* Updated to use package's "make install" target
* Added variant 'blas' to support switching blas provider and removed legacy 'mkl' variant.
* Fixed problem caused by systems which use a non-bash /bin/sh
* Removed blas= variant in preference for setting blas provider via spec syntax (e.g., ^openblas).
* Fixed formatting
* Changed to MakefilePackage and cleaned up make argument generation
* Implemented "edit" method
* Removed blank line
* Switched to using MPI compiler wrapper variables
* Update var/spack/repos/builtin/packages/slate/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* ADD: testing to dev-build command
* RM: mutually exclusive group for testing in parser
* FIX: test option to subparser and not testing
* ADD: spack-completion.bash
* RM: local devbuildcosmo cmd
* FIX: bad merge --drop-in -b --before options forgotten
* FIX: --test place in spack-completion.bash
* FIX: typo
* FIX: blank line removing
* FIX: trailing white space
Co-authored-by: Elsa Germann <egermann@tsa-ln002.cm.cluster>
The package list at https://spack.readthedocs.io/en/latest/package_list.html claims "it is automatically generated based on the packages in the latest Spack release" but it is actually based on the develop branch. This leads to confusion when users find that e.g. herwigpp is included in the list, but it cannot be found when they install the latest release. That latest release has a package list at https://spack.readthedocs.io/en/stable/package_list.html which does indeed not include herwigpp.
Changing the language from "the latest Spack release" to "this Spack version" might make that clearer. Maybe.
* Update libensemble to v0.7.1
* Update var/spack/repos/builtin/packages/py-libensemble/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add nvhpc compiler definition: "spack compiler add" will now look
for instances of the NVIDIA HPC SDK compiler executables
(nvc, nvc++, nvfortran) in supplied paths
* Add the nvhpc package which installs the nvhpc compiler
* Add testing for nvhpc detection and C++-standard/pic flags
Co-authored-by: Scott McMillan <smcmillan@nvidia.com>
Output was, e.g., `Executables in /bin and /,u,s,r,/,b,i,n are both associated with the same spec xz@5.2.2`; it will now be `Executables in /bin and /usr/bin are both associated with the same spec xz@5.2.2`.
Previously config.guess and config.sub were patched only
in the root of the source path.
This modification extends the previous behavior to patch every
config.guess or config.sub file, even in subfolders, if need be.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* allow environments to specify dev-build packages
* spack develop and spack undevelop commands (usage sketched after this list)
* never pull dev-build packages from bincache
* reinstall dev_specs when code has changed; reinstall dependents too
* preserve dev info paths and versions in concretization as special variant
* move install overwrite transaction into installer
* move dev-build argument handling to package.do_install
now that specs are dev-aware, package.do_install can add
necessary args (keep_stage=True, use_cache=False) to dev
builds. This simplifies driving logic in cmd and env._install
* allow 'any' as wildcard for variants
* spec: allow anonymous dependencies
raise an error when constraining by or normalizing an anonymous dep
refactor concretize_develop to remove dev_build variant
refactor tests to check for ^dev_path=any instead of +dev_build
* fix variant class hierarchy
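A hedged usage sketch of the new workflow (environment and spec names illustrative):
```console
$ spack env activate myenv
$ spack develop zlib@1.2.11
$ spack install
```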
* Make release_90 preferred version.
* Be more explicity about CUDA dependencies.
* Remove duplicate CUDA dependency in Flang package and introduce nvptx variant.
* Fix nvptx variant message.
* Fixed wrong link to version 0.0.0 and add hash for version 0.1.4
* Fix failing build for neovim@master and neovim@stable and add hash for version 0.4.0
* Fix flake8 issues
* Removed unnecessary newline
* Dependency conditions restricted to neovim >= 0.2.0 as previous versions fail to compile
* Removed build dependency on git
* Removed master from all conditions
* autotools: add attribute to delete libtool archive (.la) files
According to Autotools Mythbuster (https://autotools.io/libtool/lafiles.html)
libtool archive files are mostly vestigial, but they might create issues
when relocating binary packages as shown in #18694.
For GCC specifically, most distributions remove these files with
explicit commands:
https://git.stg.centos.org/rpms/gcc/blob/master/f/gcc.spec#_1303
Considering all of that, this commit adds an easy way for each
AutotoolsPackage to remove every .la file that has been installed.
The default, for the time being, is to maintain them - to be consistent
with what Spack was doing previously.
* autotools: delete libtool archive files by default
Following review this commit changes the default for
libtool archive files deletion and adds test to verify
the behavior.
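A minimal sketch of the per-package opt-out, assuming the attribute name matches the description above:
```python
from spack import *

class Mylib(AutotoolsPackage):
    """Sketch only (hypothetical package)."""

    # Keep the vestigial .la files for this package even though the new
    # default is to delete them after installation.
    install_libtool_archives = True
```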
* Add new package: py-rbtools
* Update var/spack/repos/builtin/packages/py-rbtools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-rbtools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-rbtools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-rbtools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Original version added --no-gcc to CFLAGS when compiling with intel
compilers. This does not appear to be needed and indeed causes problems
(see #18894) with newer intel compilers; I have modified it so it is not
added for intel@19: (I confirmed it is needed/works for intel@20; based
on comments in #18854 it looks like this holds for intel@19 as well).
(Also fix old formatting issue flake8 was complaining about)
* Update of py-redis for merlin-1.7.5
* Add hiredis variant and python versions for 3.5.x versions.
* Update var/spack/repos/builtin/packages/py-redis/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Added slurm version 20-02-4-1 and support to build slurmrestd
* cleanup formatting
* cleanup libjwt to pass unittests
* missed one boilerplate
* hacking a pass in unittests
* defer to default install routine in libjwt
This commit refactors the computation of the search path
for aclocal into its own method, so that it's easier to reuse
for packages that need to have a custom autoreconf phase.
Co-authored-by: Toyohisa Kameyama <kameyama@riken.jp>
The r-devtools package was not installable due to a few issues.
- The rstudioapi spec was for 0.11.0 but the rstudioapi version is
actually 0.11. This caused an error during concretization.
- Set r-usethis to depend on rlang@0.4.3: rather than r-lang@0.4.3.
- Set r-usethis to depend on r-gh@1.1.0: rather than r-gh@1.1.0.
- Added version r-gh-1.1.0 as it is not currently present in spack.
* Added CUDAHOSTCXX variable needed to compile with cuda and mpi.
* Added guard for setting CUDAHOSTCXX with MPI.
* Acceptable working version of dealii+cuda+mpi.
By default Spack uses the latest (highest version) GCC
compiler available, which might change across updates
of the Github CI environment.
Since a C compiler is always installed and `mpich~fortran`
will result in faster build times, avoid building the FORTRAN
interface as part of the test.
* cpio: Fix issue compiling with newer intel compilers (#18854)
Do not add --no-gcc for recent intel compilers (e.g. 20.x)
* cpio: Remove --no-gcc flag for intel@19 as well as intel@20
Based on comments from @nrichart, removing --no-gcc option for intel@19
as well as intel@20
* Provide draco-7_8_0.
+ Also provide a patchfile for draco-7_6_0 to support CrayPE builds.
+ Version 7.8.0 has a new variant `+caliper`.
+ Sort dependencies alphabetically after grouping by required and optional.
* Remove patchfile that is no longer needed.
+ Newer versions of draco do not require this patch.
+ Older versions of draco are not supported for spectrum-mpi.
* Change new variant +caliper to default to False.
* pandoc: add variant for texlive
Modifies the pandoc package by adding a variant for texlive, which is only needed for PDF output. Enables this variant by default.
* Fix whitespace
Fix for #19095
When given +openmp, add the correct compiler openmp flag to the link
stage. This seems to be required for %intel compilers.
I do this for all compilers, not just %intel, because it does not seem
to harm anything and might be beneficial for others (and just seems
'correct').
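A hedged sketch of one way a package could inject the flag; flag_handler as the mechanism is an assumption here:
```python
def flag_handler(self, name, flags):
    # Add the compiler's own OpenMP flag to the link line when +openmp,
    # for every compiler rather than just %intel.
    if name == 'ldflags' and '+openmp' in self.spec:
        flags.append(self.compiler.openmp_flag)
    return (flags, None, None)
```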
* py-scikit-image: bump version
* address reviewer comments
* address reviewer comments
* address reviewer comments
* py-scikit-image : update dependencies : part 2
* cloudpickle is a docs-only dependency; enable it with a variant if necessary
* address reviewer comments
* cleanup build vs run deps
* address reviewer comments
* Initial cut at FLCL spackage. Works with GCC so far.
* Update spackage to list release which supports spack. Add @agaspar as a maintainer. Default unit tests to disabled when building with spack.
* Change url to 0.2.
* Nope, 0.3.
* add package py-lmodule version 0.1.0
Lmodule is tested with lmod >= 7.x. Lmod 6 has a different JSON
structure in spider, which is not supported by Lmodule.
* py-charm4py: new package
Charm++ for python
Installation notes:
1) charm4py ships with its own charm++ tarball. It really wants
to use the version it ships with. It also builds charm++ in a special way to
produce libcharm.so (but not charmc, etc.), so it does not seem
worthwhile to try to hack it to build against a Spack-installed charm++.
2) Originally, the installation was failing due to unresolved cuda
symbols when setup.py was doing a ctypes.CDLL of libcharm.so (in order
to verify version?). This appears to be due to the fact that
libcharm.so had undefined cuda symbols, but did not show libcudart.so as
a dependency (in e.g. ldd output). To fix this, I had to add
libcudart.so explicitly when linking libcharm.so, but since setup.py
untars a tarball to build libcharm, the solution was a tad convoluted:
2a) Add a patch in spack to py-charm4py which creates a patchfile
"spack-charm4py-setup.py.patch" which will modify a Makefile file (after it
is untarred) to add the flags in env var SPACK_CHARM4PY_EXTRALIBS to
the link command for libcharm.so
2b) The spack patch file also patches setup.py to run patch using the
aforementioned patchfile to patch the Makefile after it is untarred, and
sets the SPACK_CHARM4PY_EXTRALIBS variable appropriately in the setup
environment (see the sketch after this entry).
* Update var/spack/repos/builtin/packages/py-charm4py/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-charm4py/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-charm4py: flake8 fixes
remove useless import
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
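For the cuda-symbol fix described in the py-charm4py notes above, a hedged sketch of how the extra link flags could reach the patched Makefile (method and variant names assumed):
```python
def setup_build_environment(self, env):
    # The patch splices $SPACK_CHARM4PY_EXTRALIBS into the vendored
    # charm++ link command, so hand it the CUDA runtime explicitly.
    if '+cuda' in self.spec:
        env.set('SPACK_CHARM4PY_EXTRALIBS', self.spec['cuda'].libs.ld_flags)
```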
* Update Package, Pymol 2.4
* Fixed flake8 stuff
* more style fixes
* missing ( at EOF
* added py-pymol 2.3 back
* extra line removal
* white space in empty line removal
* added libpng and py-pyqt5 to prefix_path
* Fix 'unexpected product version' error for macOS 11.0
* Adjustment: add the minimum version that this macOS patch is necessary.
* Adding a keyword to prevent the patch being applied to systems other than darwin (macOS)
* Deleting quotation marks
* AMD ROCm 3.8.0 - roctracer-dev
* Update var/spack/repos/builtin/packages/roctracer-dev/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added py-ply dependency
* remove py-ply
* Update var/spack/repos/builtin/packages/roctracer-dev/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* aomp 3.7.0 and rccl 3.8.0 update
* Bump up to ROCm 3.8.0 support on AOMP
* Create 0001-Add-amdgcn-to-devicelibs-bitcode-names-3.8.patch
* Create 0001-Add-amdgcn-to-devicelibs-bitcode-names.patch
* Update var/spack/repos/builtin/packages/aomp/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This reverts #18359 and follow-on PRs intended to address issues with
#18359 because that PR changes the hash of all specs. A future PR will
reintroduce the changes.
* Revert "Fix location in spec.yaml where we look for full_hash (#19132)"
* Revert "Fix fetch of spec.yaml files from buildcache (#19101)"
* Revert "Merge pull request #18359 from scottwittenburg/add-binary-distribution-cache-manager"
When we attempt to determine whether a remote spec (in a binary mirror)
is up-to-date or needs to be rebuilt, we compare the full_hash stored in
the remote spec.yaml file against the full_hash computed from the local
concrete spec. Since the full_hash moved into the spec (and is no longer
at the top level of the spec.yaml), we need to look there for it. This
oversight from #18359 was causing all specs to get rebuilt when the
full_hash wasn't found at the expected location.
It looks like Intel compilers generate warnings for omp pragmas when the
openmp flag is not given; due to other flags that are set, these warnings
get promoted to errors.
This adds a flag to ignore the pragma omp warnings (icc diagnostic
number 3180 on %intel@14:).
This changes makes sure that when we run the pipeline job that updates
the buildcache package index on the remote mirror, we also update the
key index. The public keys corresponding to the signing keys used to
sign the packages were pushed to the mirror as a part of creating the
buildcache index, so this is just ensuring those keys are reflected
in the key index.
Also, this change makes sure the "spack buildcache update-index"
job runs even when there may have been pipeline failures, since we
would like the index always to reflect the true state of the mirror.
* Add rocblas 3.8.0 and add all Tensile deps
* Deploy rocm_smi to the bin/ folder so that it is in $PATH
* BUILD_WITH_TENSILE_HOST=ON on 3.7.0+ and fix flake8
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
[py-particle] format
[py-particle] switch to pypi downloads
[py-particle] specify dependencies in more details
[py-particle] format
Update var/spack/repos/builtin/packages/py-particle/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Update var/spack/repos/builtin/packages/py-particle/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Update var/spack/repos/builtin/packages/py-particle/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Since those files currently exist in buildcaches (in S3 buckets) with
potentially different content types, we should be less restrictive in
what content types we accept when attempting to fetch them. This PR
removes the content type constraint so any file with the matching
name will be found.
* Remove duplication of reconstructed RPATHs caused by multiple
identical entries in prefixes dictionary
* Don't rewrite RPATHs if relative RPATHs are unchanged because the
directory layout is unchanged
* Need to check that the binary is not a Mach-O binary in a Linux package or an ELF binary in a macOS package.
* use sys.platform
* Darwin -> darwin for sys.platform
* Created +python_deps variant
- the timemory python bindings can still be imported without these runtime packages and forcing a dependence by default significantly increases the spack install time
* Added conflict
- added conflicts('+python_deps', when='~python')
* rocm-3.8.0 updates for hipblas,rocsolver,rocm-opencl
* rocm-3.8.0 updates to rocalution and rename and change rocmvalidationsuite
* rocm-3.8.0 update to miopen-hip
* Revert "rocm-3.8.0 updates for hipblas,rocsolver,rocm-opencl"
This reverts commit 2542e8b1be.
* rocm-3.8.0 changes for rocsolver and hipblas
* new package: py-gitpython
* Update var/spack/repos/builtin/packages/py-gitpython/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: sbulut <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Sinan81 <Sinan81@earth>
* Add more updates for kokkos 3.2 release, particularly nvcc-wrapper
* Use an ordinary Package
Co-authored-by: Jeremiah J Wilke <jjwilke@kokkos-dev-2.sandia.gov>
* Rework spack.util.web.list_url()
list_url() now accepts an optional recursive argument (default: False)
for controlling whether to only return files within the prefix url or to
return all files whose path starts with the prefix url. This allows the
most efficient implementation for the given prefix url scheme. For
example, only recursive queries are supported for S3 prefixes, so the
returned list is trimmed down if recursive == False, but the native
search is returned as-is when recursive == True. Suitable
implementations for each case are also used for file system URLs. (See
the usage sketch after this entry.)
* Switch to using an explicit index for public keys
Switches to maintaining a build cache's keys under build_cache/_pgp.
Within this directory is an index.json file listing all the available
keys and a <fingerprint>.pub file for each such key.
- Adds spack.binary_distribution.generate_key_index()
- (re)generates a build cache's key index
- Modifies spack.binary_distribution.build_tarball()
- if tarball is signed, automatically pushes the key used for signing
along with the tarball
- if regenerate_index == True, automatically (re)generates the build
cache's key index along with the build cache's package index; as in
spack.binary_distribution.generate_key_index()
- Modifies spack.binary_distribution.get_keys()
- a build cache's key index is now used instead of programmatic
listing
- Adds spack.binary_distribution.push_keys()
- publishes keys from Spack's keyring to a given list of mirrors
- Adds new spack subcommand: spack gpg publish
- publishes keys from Spack's keyring to a given list of mirrors
- Modifies spack.util.gpg.Gpg.signing_keys()
- Accepts optional positional arguments for filtering the set of keys
returned
- Adds spack.util.gpg.Gpg.public_keys()
- As spack.util.gpg.Gpg.signing_keys(), except public keys are
returned
- Modifies spack.util.gpg.Gpg.export_keys()
- Fixes an issue where GnuPG would prompt for user input if trying to
overwrite an existing file
- Modifies spack.util.gpg.Gpg.untrust()
- Fixes an issue where GnuPG would fail for input that were not key
fingerprints
- Modifies spack.util.web.url_exists()
- Fixes an issue where url_exists() would throw instead of returning
False
* rework gpg module/fix error with very long GNUPGHOME dir
* add a shim for functools.cached_property
* handle permission denied error in gpg util
* fix tests/make gpgconf optional if no socket dir is available
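A hedged usage sketch of the reworked list_url described in the first bullet (URL illustrative):
```python
import spack.util.web as web

# recursive=False: only files directly within the prefix URL.
top_level = web.list_url('s3://my-mirror/build_cache', recursive=False)

# recursive=True: every file whose path starts with the prefix URL.
everything = web.list_url('s3://my-mirror/build_cache', recursive=True)
```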
* Flang master branch is now the preferred version.
* Flang master branch can now use LLVM 9
* Remove master as this was never used by Flang.
* Add LLVM-Flang release_90 and release_90.
Magma is not currently compatible with CUDA-11. While this is reflected
in the package, it is done with a comment in a `depends_on` directive,
which has the effect of trying to install a version of CUDA that may be
different from the one in the current environment, without any message
to the end user. A `conflicts` is a better way to handle this.
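A minimal sketch of the suggested directive (version range illustrative):
```python
from spack import *

class Magma(CMakePackage):
    """Sketch only: surface the incompatibility at concretization time."""

    # Fail loudly instead of silently concretizing a different CUDA.
    conflicts('^cuda@11.0:', when='+cuda')
```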
* Disable bash completion by default.
* flake8
* Adding explicit dependence on libuuid
* Adding explicit dependence on cryptsetup
This way we don't pick up host crypto packages by mistake.
* Fixing the completion directory.
* Update var/spack/repos/builtin/packages/util-linux/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* flake8
* Removing libuuid linkage according to @michaelkuhn on #18696
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* trigger ascent e4s pipeline on merge to spack develop
* change pipeline name: ecpcitest/e4s is the pipeline that will be triggered on merge to develop; it is the E4S use case.
This PR adds the current release version of mumax and tweaks the install
of the previous beta version.
- Set the url parameter to reflect the release version over the beta
version. Hopefully, this will be consistent going forward.
- Set an explicit url for the previous beta version.
- Accept values for `cuda_arch`. The previous version had its own list
but the release version does not.
- Replace the built-in cuda compute capabilities list with the one
provided by Spack for the 3.10beta version.
This PR fixes a couple of things with the libbeagle package.
- libbeagle can only be built for one GPU type. Add a test for that.
- version 2 had the arch statement in
libhmsbeagle/GPU/kernels/Makefile.am but version 3 has it in
configure.ac. Put the variant specified value in configure.ac for
consistency.
Due to recent changes in the `netcdf-c` package, it is now necessary to explicitly request a non-mpi-enabled hdf5 build if building a non-mpi-enabled seacas.
* libvdwxc: unbreak concretization, request fftw-api
mixing both fftw and fftw-api in a dependency tree can trigger the
following:
```
$ spack spec cp2k@master +sirius
==> [2020-09-16-12:36:06.552981] sirius applying constraint gsl
==> [2020-09-16-12:36:06.554270] sirius applying constraint openblas@0.3.10%gcc@7.5.0~consistent_fpcsr~ilp64+pic+shared threads=none arch=linux-opensuse_leap15-sandybridge
Traceback (most recent call last):
File "./bin/spack", line 64, in <module>
sys.exit(spack.main.main())
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/main.py", line 762, in main
return _invoke_command(command, parser, args, unknown)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/main.py", line 490, in _invoke_command
return_val = command(parser, args)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/cmd/spec.py", line 103, in spec
spec.concretize()
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2228, in concretize
user_spec_deps=user_spec_deps),
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2716, in normalize
visited, all_spec_deps, provider_index, tests)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2654, in _normalize_helper
dep, visited, spec_deps, provider_index, tests)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2613, in _merge_dependency
visited, spec_deps, provider_index, tests)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2654, in _normalize_helper
dep, visited, spec_deps, provider_index, tests)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2554, in _merge_dependency
provider = self._find_provider(dep, provider_index)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/spec.py", line 2489, in _find_provider
providers = provider_index.providers_for(vdep)
File "/data/tiziano/debug-spack/spack2/lib/spack/spack/provider_index.py", line 80, in providers_for
return sorted(s.copy() for s in result)
File "/data/tiziano/debug-spack/spack2/lib/spack/llnl/util/lang.py", line 249, in <lambda>
lambda s, o: o is not None and s._cmp_key() < o._cmp_key())
TypeError: '<' not supported between instances of 'str' and 'NoneType'
```
while at the same time disallowing MKL as a fftw provider.
Both issues are solved by depending on `fftw-api@3` instead and adding a
conflict on `^fftw~mpi` when using `+mpi` (thanks to alalazo); see the sketch below.
* cp2k: use conflicts instead of runtime checks for fftw/openblas variants
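A sketch of the libvdwxc directives described above (the conflict message text is illustrative):
```python
# Directives as they might appear in libvdwxc's package.py.
depends_on('fftw-api@3')          # accept any FFTW-API provider
conflicts('^fftw~mpi', when='+mpi',
          msg='libvdwxc+mpi requires an MPI-enabled FFTW')
```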
* Initial Draft of Cosmoflow Spackage
Need to add in logic to streamline cpu/gpu builds
* Added ~cuda logic to cosmoflow spackage
Added logic to support a ~cuda build for cosmoflow
* Requested Changes to Cosmoflow Spackage
Made requested changes to cosmoflow spackage
MatIO development has switched to github from sourceforge. Updated the `git` and `url` variables and added the four new versions (1.5.14 -- 1.5.17) that have been released since the last update of this package.
* qbox: install to correct directory structure
* qbox: Have qb executable put in bin rather than src subdir
* qbox: Fix python script shebangs to use python from path
* qbox: Add dependencies on gnuplot, python2 for utilities
* qbox: fix flake8 issue
* qbox: Add $prefix/util to PATH
* Initial CRADL Spackage Work
Currently resolving the `--single-version-externally-managed` error
* Fixed GPUtil Issues
Thanks to Vinay Ramakrishnaiah for overriding install
* Finished CRADL Install Function
Finished CRADL install function which is basically copying the scripts
to the install directory. Also resolved flake8 issues for PR purposes
Update pipelines documentation to describe how 'tags', 'variables',
'image', 'before_script', 'script', and 'after_script' can be
supplied at the top level, to be used by any of the runner mappings,
and also overridden by any of the runner mappings.
Also show an example of capturing the custom spack SHA at pipeline
generation time, so all jobs are sure to run with the same version
of spack, as a means to illustrate the $env:VARIABLE_NAME syntax.
* Add new package: webbench
* Update var/spack/repos/builtin/packages/webbench/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* cpio: add --rtlib=compiler-rt for %fj
* cpio: simplify if
* Update var/spack/repos/builtin/packages/cpio/package.py
This seems better.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add new package: hping
* Update var/spack/repos/builtin/packages/hping/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/hping/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Fix flake8 errors
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Use the config path instead of the basename
* Removing unused variables
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Test
Making sure that if there are two include config files with the same basename, both are applied
* Edit test assert
Co-authored-by: Greg Becker <becker33@llnl.gov>
* ncurses: adding external support.
* Update var/spack/repos/builtin/packages/ncurses/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/ncurses/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/ncurses/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fixing includes.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Fixes #18441
When writing an environment, there are cases where the lock file for
the environment may be removed. In this case there was a period
between removing the lock file and writing the new manifest file
where an exception could leave the manifest in its old state (in
which case the lock and manifest would be out of sync).
This adds a context manager which is used to restore the prior lock
file state in cases where the manifest file cannot be written.
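A minimal sketch of such a context manager (illustrative, not Spack's actual implementation):
```python
import contextlib
import os
import shutil


@contextlib.contextmanager
def preserve_file(path):
    """Restore `path` to its prior content if the body raises (sketch)."""
    backup = path + '.bak'
    existed = os.path.exists(path)
    if existed:
        shutil.copy2(path, backup)
    try:
        yield
    except Exception:
        # Writing failed: put the old file back so lock and manifest
        # stay in sync.
        if existed:
            shutil.move(backup, path)
        raise
    else:
        if existed:
            os.remove(backup)
```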
* Checksummed New Flux Versions
Checksummed new flux versions to let spack detect them
* Added CXXFlags to build Flux-sched
Added missing cxxflags to build flux-sched
* Adding Cuda Variant to SW4Lite
Added cuda variant of sw4lite as per guidance in README
* Updated SW4Lite+cuda to Current Header Conventions
Updated sw4lite+cuda to use current conventions for spackage include
dirs
* Fixing Flake8 Issue with Sw4lite+cuda Fix
Fixed overly long line and further underlined sticky note reminding me
to run flake8 BEFORE pushing
* Switching to Spack Compiler Wrapper
Switching to spack compiler wrapper for consistency
* Orca: Add new versions.
* Orca: Support OpenMPI without the legacy wrappers.
By default, Spack builds OpenMPI without the legacy wrappers when using the Slurm scheduler. This breaks Orca since its binaries are hardcoded to call "mpirun". To work around this issue, add an "mpirun" wrapper which calls "srun" when required.
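A sketch of how such a shim could be generated at install time (the bare `exec srun` forwarding is an assumption; a real wrapper must translate mpirun's flags):
```python
import os
import stat


def write_mpirun_shim(bindir):
    """Create a minimal `mpirun` shim that defers to srun (sketch only)."""
    path = os.path.join(bindir, 'mpirun')
    with open(path, 'w') as f:
        f.write('#!/bin/sh\n')
        # A real wrapper would translate mpirun flags such as -np into
        # their srun equivalents before forwarding.
        f.write('exec srun "$@"\n')
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```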
* cp2k: do not support ~openmp for v8+
* sirius: version bump
* cp2k: fix overlapping deps for elpa
fixes #18029
* cp2k: update SIRIUS dependency for v8+
* spfft: requires CMake 3.11+
* cp2k: fix build with +sirius
* darshan-util: remove return(-1) from void function
* Update var/spack/repos/builtin/packages/darshan-util/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/darshan-util/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* gamess-ri-mp2-miniapp: initial import
* flake8
* Update var/spack/repos/builtin/packages/gamess-ri-mp2-miniapp/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This is a special case of overriding, since each section is matched against the current spec.
The trailing ':' for sections with override is now removed when parsing the configuration, so the special handling for the modules configuration stopped working, but this went unnoticed.
Starting with OpenSceneGraph 3.5.5, support for windows managed by Qt
has been moved to the separate project osgQt. Hence, a dependency on Qt
is not needed any longer for version 3.5.5 or newer.
In order to still satisfy the dependency on OpenGL, a depends_on('gl')
has been added.
Without setting the build environment, the installation fails with
```
1 error found in build log:
35946 fmtutil [INFO]: /usr/local/pkg/Installs/linux-ubuntu18.04-skylake_avx512/gcc7.4.0/texlive/20190410/rgs2nakycorkgzno/t
exmf-var/web2c/pdftex/pdfcslatex.fmt installed.
35947 fmtutil [INFO]: Disabled formats: 6
35948 fmtutil [INFO]: Successfully rebuilt formats: 45
35949 fmtutil [INFO]: Total formats: 51
35950 fmtutil [INFO]: exiting with status 0
35951 ==> [2020-09-07-21:23:21.482745] '/usr/local/pkg/Installs/linux-ubuntu18.04-skylake_avx512/gcc7.4.0/texlive/20190410/
rgs2nakycorkgzno/bin/x86_64-linux/mtxrun' '--generate'
>> 35952 /usr/bin/env: 'texlua': No such file or directory
```
Maybe there is a better way...
Cython requires a library that is available in Python 3.8, or provided
by setuptools for earlier versions. This specifies setuptools as a run
dependency to allow running with Python < 3.8
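A sketch of the corresponding directive (the conditional `when=` clause is an assumption; the package may declare the dependency unconditionally):
```python
# setuptools is needed at run time only for Python < 3.8.
depends_on('py-setuptools', type=('build', 'run'), when='^python@:3.7')
```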
`spack install --yes-to-all` doesn't actually make the build non-interactive,
but that is why people typically use it. This documents that you must also
specify `--no-checksum` for a fully non-interactive build.
* Modules: Deduplicate suffixes but don't sort them.
The suffixes' order is defined by the order in which they appear in the configuration file.
* Modules: Modify tests to use spack_yaml.load_config.
spack_yaml.load_config ensures that the configuration is stored in an ordered manner. Without this change, the behavior of the tests did not match Spack's.
* Modules: Tweak the suffixes test to better catch ordering issues.
* new package: py-textblob
add variant to py-nltk to allow for data download/installation
add dependencies to py-nltk so that bin/nltk works
* add resources and resource generation script
* spack config: default modification scope can be an environment
The previous model was that environments are the highest priority config
scope for config reading operations, but were not considered for config
writing operations. Now, the active environment is the highest priority
config scope for both reading and writing operations.
Now `spack config add`, `spack external find` and `spack compiler find` write
configuration to the environment by default if an environment is active. This is a
change in default behavior for these routines, but better matches the mental
model of an environment taking precedence over the user's default config file.
* add scope argument to 'spack external find' to choose non-default scope
* Increase testing for config modifications on environments
Co-authored-by: Gregory Becker <becker33@llnl.gov>
At some point in the build phase a script
spack-src/scripts/convert-template
has a shebang looking for python in the path.
Currently this picks up the system python if it is in the invoker's path, but it should
be using python from spack, so add a build dependency on python.
Many system-installed binaries (at least in Debian) are built against a
libtinfo.so that has versioned symbols. If spack builds a version without this
functionality, and it winds up in the user's LD_LIBRARY_PATH via spack load,
system binaries will begin to complain.
```
$ less log.txt
less: /opt/spack/.../libtinfo.so.6: no version information available (required by less)
```
Co-authored-by: Luke D'Alessandro <ldalessa@uw.edu>
The 'external_modules' attribute on a Spec, when read from a YAML
configuration file, may contain extra formatting that is lost when
that Spec is written-to/read-from JSON format. This was resulting in
a hashing instability (when the Spec was read back, it would report a
different hash). This commit adds a function which removes the extra
formatting from 'external_modules' as it is passed to the Spec in
__init__ to ensure a consistent hash.
* Add rocm 3.7.0 libs
* Make 3.7.0-only dependency on numactl explicit
* Add rocm-device-libs dep to rocm-clang-ocl
* Update the cmakelists dir in rocm-debug-agent
* Make rocm-debug-agent work on 3.7.0
* Disable tensile host; following rocm-arch recommendations
* ldak: new package at 5.1
* flake8
* Re-run tests
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* new package: py-gitdb
* Update var/spack/repos/builtin/packages/py-gitdb/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Co-authored-by: sbulut <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Set GOPATH in build environment to avoid creating files in the user's
default GOPATH (e.g. ~/go).
* Support for external find.
* Added latest release 0.74.3.
Remove prior built-in Trilinos subrepository.
Added a Trilinos conflict discovered while documenting ForTrilinos:
```
***
*** ERROR: Setting Trilinos_ENABLE_SEACASExodus=OFF which was 'ON' because SEACASExodus has a required library dependence on disabled TPL Netcdf!
***
```
As detailed in https://bugs.python.org/issue33725, starting new
processes with 'fork' on Mac OS is not guaranteed to work in general.
As of Python 3.8 the default process spawning mechanism was changed
to avoid this issue.
Spack depends on the fork-based method to preserve file descriptors
transparently, to preserve global state, and to avoid pickling some
objects. An effort is underway to remove dependence on fork-based
process spawning (see #18205). In the meantime, this allows Spack to
run with Python 3.8 on Mac OS by explicitly choosing to use 'fork'.
Co-authored-by: Peter Josef Scheibel <scheibel1@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
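A minimal sketch of explicitly selecting the fork start method described above (illustrative, not Spack's actual code):
```python
import multiprocessing

# Explicitly request fork-based process creation so file descriptors
# and global state are inherited by the child ('fork' is only
# available on Unix; Python 3.8 on macOS defaults to 'spawn').
ctx = multiprocessing.get_context('fork')
proc = ctx.Process(target=print, args=('hello from the child',))
proc.start()
proc.join()
```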
* [py-torch-geometric] depends on py-torch-sparse
* [py-torch-geometric] setting TORCH_CUDA_ARCH_LIST
* [py-torch-geometric] added the rest of the dependencies
* [py-torch-geometric] added cuda variant and added more build env vars
* [py-torch-geometric] added variant info for dependencies
* [py-torch-geometric] flake8
* [py-torch-geometric] add variant description
* HPCC Benchmark: added HPC Challenge (HPCC) benchmark
* HPCC Benchmark: modified error message on lack of fftw2 interface in MKL
* hpcc: fixed styling add one more installation example
* hpcc: styling fix
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* hpcc: changed include and lib location setter
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* hpcc: fixed styling add one more installation example
* hpcc: removed readme.md
* hpcc: develop repo now is in github
* hpcc: march arguments are set explicitly in the case of intel compilers; added the -restrict flag, which is needed for older intel compilers (at least <=19.0.5.281)
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* wrf: new package
* wrf: fix install dir
* wrf: ndown location
* Add more compiler and nesting options to wrf package
* Fix configure that didn't find pgf90, use tempfile and compile in parallel
* WRF v4.2 with parallel I/O support through pnetcdf
Signed-off-by: michael laufer <michael.laufer@toganetworks.com>
* extend Package, compiler wrapper now used, small fixes
Signed-off-by: michael laufer <michael.laufer@toganetworks.com>
* Update var/spack/repos/builtin/packages/wrf/package.py
fixed typo
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Levi Baber <baberlevi@gmail.com>
Co-authored-by: eXact lab <info@exact-lab.it>
Co-authored-by: michael laufer <michael.laufer@toganetworks.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* new package: py-update-checker
* add test deps
* Update var/spack/repos/builtin/packages/py-update-checker/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-update-checker/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-update-checker/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* remove lint stuff.
Co-authored-by: Sinan81 <Sinan81@earth>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The `configure` script of Modules 4.5.2 is a bit too strict and breaks when
special options like `--disable-dependency-tracking` are set. This issue
will be fixed in the Modules project starting with version 4.5.3
(cea-hpc/modules#354).
This change adapts the `configure` options set when installing version 4.5.2
to avoid options unrecognized by this version.
Fix #18420
* libvterm: renumber version and add 1.0.3
neovim: build on aarch64
* Remove unneeded comment.
* libvterm: newer bazaar snapshot version is set to version 0.0.
neovim: adjust for the libvterm version change; the libtermkey version bug is fixed.
* update libvterm versions.
* Add new package: byte-unixbench
* refine install flow
* Update var/spack/repos/builtin/packages/byte-unixbench/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add new package: leptonica
* Update var/spack/repos/builtin/packages/leptonica/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-spacy: new version 2.3.2
update en-core-web-sm to @2.3.1
add en-vectors-web-lg@2.3.0
* update deps
* wasabi
Co-authored-by: Andrew Elble <aweits@localhost.localdomain>
* Update version for package Draco
+ Add support for `draco-7.7.0`.
+ Introduces new `+cuda` variant. This variant is only allowed in version
`7.7.0:`.
+ Restrict `random123` to compatible versions.
+ Restrict `libquo` to compatible versions.
+ Moving forward, require `python@3:`
+ Moving forward, the `+superlu_dist` variant is no longer supported.
+ Improve printed output for `--test` mode by adding `ctest` option
`--output-on-failure`
+ Provide a patch to support IBM Spectrum-MPI in version `7.7.0:`
+ Provide a patch to allow variant `~cuda` to actually disable GPU portions of
the code when a GPU is discovered on the local system.
* Remove unnecessary function decoration.
* Adding externals for bison and flex
Added because bison actually pulls in a ton of stuff.
* Need to escape parentheses.
* Need to add re package.
* Adding re package.
* spectrum-mpi: adding external support.
* Package is tested, works on LLNL lassen
* Spectrum external now detects the correct compiler
* Changing code to not output all compilers
Done per becker33's request on #18055
If Thyra isn't explicitly enabled at the package level, trilinos fails
to build.
```
/var/folders/gy/mrg1ffts2h945qj9k29s1l1dvvmbqb/T/s3j/spack-stage/spack-stage-trilinos-12.18.1-vfmemkls4ncta6qoptm5s7bcmrxnjhnd/spack-src/packages/muelu/adapters/stratimikos/Thyra_XpetraLinearOp_def.hpp:167:15: error:
no member named 'ThyraUtils' in namespace 'Xpetra'
Xpetra::ThyraUtils<Scalar,LocalOrdinal,GlobalOrdinal,Node>::toXpetra(rcpFromRef(X_in), comm);
~~~~~~~~^
```
* py-basemap
* Updated versions + URL attribute
* Update var/spack/repos/builtin/packages/py-basemap/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-basemap/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Removed unnecessary comment
* flake8
Co-authored-by: Elizabeth Fischer <elizabeth.fischer@alaska.edu>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Adding external support for mvapich2.
This picks up all the options that are currently settable by
the spack package. It also detects the compiler and sets it
appropriately.
* Removing debugging printing.
* Adding changes suggested by @nithintsk
+ Added version 2.1.9
+ Previously the SZ package incorrectly depended on CMake without a
version dependency, but actually version 3.13 or newer is required
+ Added myself as a maintainer for the SZ spack package
This is a bug release with some new features and bug fixes. Among them:
[Batch] Set number of MPI processes for SLURM. (Ben Tovar)
[General] Use the right signature when overriding gettimeofday. (Tim
Shaffer)
[Resource Monitor] Add context-switch count to final summary. (Ben
Tovar)
[Resource Monitor] Fix kbps to Mbps typo in final summary. (Ben Tovar)
[WorkQueue] Update example apps to python3. (Douglas Thain)
* samtools: Add version 0.1.8 for OSS soapdenovo-trans.
* Add dependencies on zlib and samtools to build on aarch64.
* soapdenovo-trans: Change the conditions of the zlib and samtools dependencies.
* New package: cxxopts
* Use +unicode instead of unicode=True
- Make the unicode option more explicit
* Add two new variants to spack for upcoming 1.5, stable and develop
* Add as maintainer
* Add depends_on clauses
* Remove unrelated change
I know that it's just an example, but I was trying to figure out what was going on and it wasn't making sense....
`tput sgr0` resets the terminal state (http://linuxcommand.org/lc3_adv_tput.php) and I can't see any reason to do it twice. Deleting the second occurrence doesn't seem to break the fancy prompt effect.
* qgis
* Update package.py
QGIS 3.12.1 can use PROJ >= 4.9.3. Therefore both version restrictions on PROJ were incorrect.
https://github.com/qgis/QGIS/blob/final-3_12_1/INSTALL
* Update package.py
Add explanation to (hopefully temporary) removal of hdf5 dependency.
* Remove overly restrictive GRASS version number.
* flake8
Co-authored-by: Elizabeth Fischer <elizabeth.fischer@alaska.edu>
Set the location for dependencies by specifying both the include and
lib paths explicitly. This makes it possible to handle cases where
the libraries are installed in lib64 instead of lib.
fixes #17556, fixes #10842, closes #18150
Compilers can have strange versions, as the version is provided by the user. We know the real version internally (by querying the compiler), so expose it as a property and use it in places where we don't trust the user. Eventually we'll refactor this with compilers as dependencies, but this is the best fix we've got for now.
- [x] Make `real_version` a property and cache the version returned by the compiler
- [x] Use `real_version` to make C++ language level flags work
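A minimal sketch of such a cached property (illustrative, not Spack's actual `Compiler` class):
```python
class Compiler(object):
    """Sketch only, not Spack's actual Compiler class."""

    def __init__(self, version):
        self.version = version          # user-provided, possibly wrong
        self._real_version = None

    def _query_version(self):
        # Placeholder: a real implementation would run e.g. `cc --version`
        # and parse the output.
        return self.version

    @property
    def real_version(self):
        """Version reported by the compiler itself, queried once and cached."""
        if self._real_version is None:
            self._real_version = self._query_version()
        return self._real_version
```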
Restores the fetching progress bar sans failure outputs; restores non-debug reporting of using fetch cache for installed packages; and adds a unit test.
* Add status bar check to test and fetch output when already installed
Some of the feature flags are named differently and clwb is missing on
my i7-1065G7. cascadelake and cannonlake might have similar problems but
I do not have access to those architectures to test.
- add cuda variant, enabled by default, but conflicting with
strumpack@:3.9.999
- add zfp variant, enabled by default, but conflicting with
strumpack@:3.9.999
- update minimum CMake version to 3.11
- for version 4.0.0:, do not use mpi wrappers. v4.0.0 uses CMake
MPI targets
- for version 4.0.0, add dependency on butterflypack@1.2.0:
- remove versions 3.1.0 and older
- make parmetis variant True by default
- add TODO for slate variant (spack package not ready yet)
While I believe there must have been a reason to restrict libtool to <=
2.4.2, adios compiles just fine with libtool 2.4.6 for me.
In fact, without this change, I'm getting this error:
libtool: Version mismatch error. This is libtool 2.4.6, but the
libtool: definition of this LT_INIT comes from libtool 2.4.2.
libtool: You should recreate aclocal.m4 with macros from libtool 2.4.6
This doesn't make much sense, since spack did build libtool@2.4.2 as a
dependency, and was supposedly trying to use it. My guess is that on
this system (NERSC's cori) the system libtool in /usr/bin, which is
2.4.6 somehow got picked up partially.
Semi-recently the lua spackage was updated to explicitly add libtinfow
to the lua build line. Ncurses provides this but only when the +termlib
variant is enabled
* New interface reconstruction package
* forgot to put in CMake option for Jali
* cleanup whitespace
* fix lines with more than 79 chars
* more long line cleanup
* fix typo WONTON_ENABLE_Kokkos ---> TANGRAM_ENABLE_Kokkos
Co-authored-by: Rao Garimella <rao@abyzou.lanl.gov>
* make_package_relative: relocate rpaths on cray
* relocate_package: relocate rpaths on cray
* platforms: add `binary_formats` property
We need to know which binary formats are supported on a platform so we
know which types of relocations to try. This adds a list of binary
formats to the platform and removes a bunch of special cases from
`binary_distribution.py`.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
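A minimal sketch of the `binary_formats` property described above (class and format names are illustrative):
```python
class Platform(object):
    """Sketch only; class and format names are illustrative."""

    @property
    def binary_formats(self):
        # Relocation code iterates over this list instead of
        # special-casing each platform.
        return []


class Linux(Platform):
    @property
    def binary_formats(self):
        return ['elf']


class Darwin(Platform):
    @property
    def binary_formats(self):
        return ['macho']
```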
See #18033
libssh seemed to detect and link against system krb5 libraries, if found,
to provide gssapi support, causing unwanted system dependencies.
We add a boolean variant gssapi.
If +gssapi, the spack krb5 package is added as a dependency.
If ~gssapi, the CMake flags are adjusted to not use gssapi so that libssh
does not link to any krb5 package.
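A sketch of the variant logic (the CMake option name WITH_GSSAPI is an assumption about libssh's build system):
```python
variant('gssapi', default=True, description='Build with gssapi (krb5) support')
depends_on('krb5', when='+gssapi')


def cmake_args(self):
    # Option name WITH_GSSAPI is an assumption about libssh's CMake.
    enabled = 'ON' if '+gssapi' in self.spec else 'OFF'
    return ['-DWITH_GSSAPI=' + enabled]
```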
xz-utils already builds a shared library. The +pic variant adds the
compiler pic flag to the static archive so that it can be linked into
another shared library.
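A sketch of how such a `+pic` variant can be wired up (illustrative; the real recipe may differ):
```python
variant('pic', default=False,
        description='Compile the static library with PIC so it can be '
                    'linked into a shared library')


def flag_handler(self, name, flags):
    # flag_handler is Spack's hook for adjusting compiler flags.
    if name == 'cflags' and '+pic' in self.spec:
        flags.append(self.compiler.cc_pic_flag)
    return (flags, None, None)
```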
* Add Collier and SysCalc recipes
* Remove extra syscalc version
* Build collier with -j1 for @:1.2.4
* Add recipe for gosam-contrib
* Update gosam-contrib recipe with 'provides'
* Madgraph recipe, first version
* Finalize madgraph recipe + flake8
* Make py2 version of madgraph default; fix hash for syscalc; fix patch
* Handle virtual packages (#3)
* Update package.py
* Update packages.yaml
* Remove virtual packages - pt. 1
* Remove virtual packages - pt. 2
* Changes from review - pt. 1
* Changes from code review - pt. 2
* Update var/spack/repos/builtin/packages/collier/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/madgraph5amc/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add hash for version 2.7.2 (available in our private mirror)
* Fixes for 2.7.3 family
* Patches for 2.7.3{.py3,}{.atlas,}
* Fix hash of syscalc
* Hack to fix concretization (2.7.3 matches 2.7.3.py3)
* Add conflict statement (reported to devs)
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Delete madgraph5amc-2.7.2.atlas.patch
* Delete madgraph5amc-2.7.2.patch
* Update package.py
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: iarspider <iarpsider@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Adding new packages Mvapich2x and Mvapich2-GDR which can be installed only via binary mirrors
* Added docstring descriptions to both packages
* Removed variant wrapper for cuda dependencies
* Fixed multiple flake8 errors
* Updated APIs to pass unit tests
* Updated APIs for MVAPICH2-X package and fixed flake8 warnings for MVAPICH2-GDR
* Changed url back to single line
* Removed extra parentheses around URL string
Co-authored-by: nithintsk <nithintsk@github.com>
* [root] add dataframe cmake option
@chissg @HadrienG2 @drbenmorgan
This has been a separate cmake option starting with v6-19, I believe: 31292b9082
It should default to true -- not sure why, but this recipe sets it to off.
I could add a variant too, but since it has become an integral part of root and doesn't introduce extra dependencies, I'd propose to just set it to true like I do here.
* Update package.py
Before this PR, packages.yaml files that contained an
empty "paths" or "modules" attribute were not updated
correctly, since the update function was not reporting
them as changed after the update.
This PR fixes that issue and adds a unit test to
avoid regression.
This commit adds output to the "spack external find"
command to inform users of the result of the operation.
It also fixes a bug introduced in #17804 due to the fact
that a function was not updated to conform to the new
packages.yaml format (_get_predefined_externals).
* Update the change to add gomp compatibility to llvm-openmp.
* Update the change to add gomp compatibility to llvm-openmp using append instead of extend.
* Fix flake8 issue.
Co-authored-by: Jim Galarowicz <jgalarowicz@newmexicoconsortium.org>
* pFUnit: Added support for version 4
pFUnit v4 uses submodules, so we must fetch from the repo rather
than grabbing the tarball (see #11642).
* pFUnit: Added conflicts
pFUnit 4 causes an internal compiler error with gcc 7.2.0, and
several pFUnit versions are incompatible with shared libraries.
* pFUnit: Added conflicts for version 4
Version 4 uses Fortran 2008 features and cannot be built with gcc
compilers prior to 8.4.
* pFUnit: Fixed conflicts/dependencies as suggested
* pFUnit: Version 4 no longer fetches from git
Checksummable files are fetched instead.
* pFUnit: Simplify major version check
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* pFUnit: Removed unnecessary patch for v4
The patch is still applied to v3.
* pFUnit: Modified MPI flag for v4
pFUnit v3 and v4 use different CMake flags to enable/disable MPI
support. Also added a conflict for v3 with MPI enabled using
gfortran 10, since newer gfortran is more finicky about datatypes.
* pFUnit: Rearranged mpi logic
* pFUnit: changed m4 to a build dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* pFUnit: Added URL back
I did not realize it was needed by "spack versions" and
"spack checksum". Thanks @adamjstewart!
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
When the user explicitly sets ~fortran, mpich builds without fortran
support. This will make building C/C++ libraries using clang easier,
since clang does not offer a fortran compiler by default (yet).
Since the user has to disable Fortran support explicitly, this change
is not breaking.
* Handle uninstalled rootspecs in buildcache
- Do not parse specs / find matching specs when in an environment and no
package string is provided
- Error only when a spec.yaml or spec string are not installed. In an
environment it is fine when the root spec does not exist.
- When iterating through the matched specs, simply skip uninstalled
packages
* Run Python2.6 unit tests on Github Actions
* Skip url tests on Python 2.6 to reduce waiting times
* Skip foreground background tests on Python 2.6 to reduce waiting times
* Removed references to Travis in the documentation
* Deleted install_patchelf.sh (can be installed from repo on CentOS 6)
* Buildcache:
* Try mocking an install of quux, corge and garply using prebuilt binaries
* Put patchelf install after ccache restore
* Add script to install patchelf from source so it can be used on Ubuntu:Trusty which does not have a patchelf package. The script will skip building on macOS.
* Remove mirror at end of bindist test
* Add patchelf to Ubuntu build env
* Revert mock patchelf package to allow other tests to run.
* Remove depends_on('patchelf', type='build'), relying instead on a
test fixture to ensure patchelf is available.
* Call g++ command to build libraries directly during test build
* Flake8
* Install patchelf in before_install stage using apt unless on Trusty where a build is done.
* Add some symbolic links between packages
* Flake8
* Flake8:
* Update mock packages to write their own source files
* Create the stage because spec search does not create it any longer
* updates after change of list command arguments
* cleanup after merge
* flake8
Style and documentation tests take just a few minutes
to run. Since in Github actions one can't restart a single
job but needs to restart an entire workflow, here we group
tests with similar duration together.
- [x] Remove references to `master` branch
- [x] Document how release branches are structured
- [x] Document how to make a major release
- [x] Document how to make a point release
- [x] Document how to do work in our release projects
* Move flake8 tests on Github Actions
* Move shell test to Github Actions
* Moved documentation build to Github Action
* Don't run coverage on Python 2.6
Since we get connection errors consistently on Travis
when trying to upload coverage results for Python 2.6,
avoid computing coverage entirely to speed-up tests.
This is needed because libcuda is used by the driver,
whereas libcudart is used by the runtime. CMake searches
for cudart instead of cuda.
On LLNL LC systems, libcuda is only found in compat and
stubs directories, meaning that the lookup of libraries
fails.
`spack -V` stopped working when we added the `releases/latest` tag to
track the most recent release. It started just reporting the version,
even on a `develop` checkout. We need to tell it to *only* search for
tags that start with `v`, so that it will ignore `releases/latest`.
`spack -V` also would print out unwanted git error output on a shallow
clone.
- [x] add `--match 'v*'` to `git describe` arguments
- [x] route error output to `os.devnull`
`spack buildcache list` was trying to construct an `Arch` object and
compare it to `arch_for_spec(<spec>)` for each spec in the buildcache.
`Arch` objects are only intended to be constructed for the machine they
describe. The `ArchSpec` object (part of the `Spec`) is the descriptor
that lets us talk about architectures anywhere.
- [x] Modify `spack buildcache list` and `spack buildcache install` to
filter with `Spec` matching instead of using `Arch`.
- [x] Make it easier to get a `Spec` with a proper `ArchSpec` from an
`Arch` object via new `Arch.to_spec()` method.
- [x] Pull `spack.architecture.default_arch()` out of
`spack.architecture.sys_type()` so we can get an `Arch` instead of
a string.
* Loosen Axom's variants, add shared variant for axom, fix clang/xlf rpath'ing problem on blueos
* Fix flake8
* Add main branch to list of known git branches
fixes #18028
Since external packages now support multiple modules,
the correct thing to do is to check if the name of the
*first* module to be loaded contains the string "cray".
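A minimal sketch of that check (assuming a list of module names as input):
```python
def uses_cray(modules):
    """Sketch: only the *first* module to be loaded matters."""
    return bool(modules) and 'cray' in modules[0]


# e.g. uses_cray(['PrgEnv-cray', 'cce/11.0']) -> True
```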
`cmake@3.16.3` is the version provided by Ubuntu 20.04. Adding this version here avoids the warning
```
==> Warning: Missing a source id for cmake@3.16.3
```
when using the system `cmake`.
* Spack recipes for ROCm Stage 1 Build components
* fix flake8 errors
* fixes for flake8 errors
* Add a patch for cmake 3.x support
* Fix rpath issue where hsa-rocr-dev does not allow it to be filled in by spack
* Remove inherited cmake args from comgr
* Make hsakmt-roct compile: no -Werror because of a const cast in numa, and actually add the numa dependency
* Remove redundant cmake args which is inherited
* Fix some dependencies
* Fix some python 2.x compatibilities
* Add amd gpu targets to rocfft
* Make comgr a link dep of rocm-dbgapi and remove redundant cmake args
* Remove redundant cmake args
* Remove more redundant cmake args
* Final redundant args
* Use cmake 3.x instead of a fixed version
* Remove random variable
* Use installed rocclr instead of nonexisting directory
* Don't build outside the staging folder
* Deploy some missing cmake target file
* Formatting
* Fix target list
* Properly handle the rocclr dependency
* Formatting
* Fix vermin test
* Make all 3.5.0 packages depend exactly on each other
* Add a few missing link dependencies
* Fix flake8
* Remove some other redundant flags
* Add gcc install prefix for gcc builds of llvm-amdgpu
* review changes for the spack recipes
* Do not hard-code versions
* Fix atmi install
- no more relative rpaths outside of install directory (required patch)
- fix build -> link dependencies
- remove unused build dependency
* Fix flake8 errors
* Remove unused variable and make things python 2.x compatible
* Fix flake8
* Move compiler config from rocfft -> hipcc
* Remove redundant dependency on fftw-api
* Remove redundant import
* Avoid hitting the ROCM_PATH variable altogether with a patch; also just fill in all variables
* Add missing deps z3, zlib and ncurses+termlib to llvm-amdgpu
* Fix perl shebang and add dep
* Fix typo and patch HIP_CLANG_ROOT detection in hip's cmake files
* fixing build failure due z3 and adding zlib for rocgdb
* new changes to add z3,curses dependency for llvm-amdgpu
* fix flake8 error
Co-authored-by: root <root@localhost.localdomain>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
* Libtool: add spack external find support
* Less specific regex
* match -> search
* Clarify that min returns first alphabetically, not shortest
* Simplify version determination
The modifications in 193e8333fa
introduced a bug in the loading of compiler modules, since a
function that was expecting a list of strings was just getting
a string.
This commit fixes the bug and adds an assertion to verify the
prerequisite of the function.
* add py-ufl package from fenics
* add py-fiat package from fenics
* add py-ffcx package from fenics
* add py-dijitso package from fenics
* add dolfinx library from fenics
* amend ffcx to use ufl and fiat master branches
* setup variants complex and int64 of dolfinx
* add dolfinx python library as package
* add test dependencies to py-dolfinx
* remove broken doc variant
* remove test dependencies from py-dolfinx
* flake8 fixes to dolfinx and py-dolfinx
* make sure dolfinx cmake picks up the correct python version
* list build phases in py-dolfinx package
* remove unnecessary package url
* make pkgconf a build dependency
* make all python dependencies build+run
* py-ffcx needs py-setuptools to be a build/run dependency to support ffcx executable
* remove unnecessary variants from dolfinx
* add missing dependencies to py-dijitso
* remove stray line from py-dolfinx
* simplify definition of build_directory in py-dolfinx
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* use depends_on("python") rather than extends("python") in py-ffcx
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* use depends_on("python") rather than extends("python") in py-fiat
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* use depends_on("python") rather than extends("python") in py-ufl
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* rename py-fiat to py-fenics-fiat
* rename py-ufl to py-fenics-ufl
* fix error in depends_on(petsc) definition
* add missing dep on numpy to py-fenics-fiat
* specify python@3.8: as requirement for all fenics components
* use tuples rather than list for depends_on type=
* specify eigen@3.3.7: as dependency for dolfinx
* add js947 and chrisrichardson as maintainers for the fenics packages
* remove scipy dependency from py-dolfinx
* rename package py-ffcx -> py-fenics-ffcx
* rename package dolfinx -> fenics-dolfinx
* rename package py-dolfinx -> py-fenics-dolfinx
* remove pointless URL from py-fenics-dolfinx package
* rename package py-dijitso -> py-fenics-dijitso
* formatting
* remove unnecessary cmake args from fenics-dolfinx
* revert py-fenics-fiat python version to 3:
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* revert py-fenics-ufl python version to 3.5:
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add conflict to fenics-dolfinx for C++17 support
* revert py-fenics-ffcx python version to 3.5:
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* pbbam: fix build error
* Update var/spack/repos/builtin/packages/pbbam/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Packages can implement `determine_version` to support detection
of external instances of a package. This is generally easier
than implementing `determine_spec_details`. The API for
`determine_version` is similar: for example you can return
`None` to indicate that an executable is not an instance
of a package.
Users may implement a `determine_variants` method for a package.
When doing external detection, executables are grouped by version
and each group results in a single invocation of `determine_variants`
for the associated spec. The method returns a string specifying
the variants for the package. The method may additionally return
a dictionary representing extra attributes for the package.
These will be stored in the spec yaml and can be retrieved
from `self.spec.extra_attributes`.
The Spack GCC package has been updated with an implementation
of `determine_variants` which adds the following extra
attributes to the package: c, cxx, fortran
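A sketch of what these hooks look like in a package (the package name, regex, variant, and extra attributes below are illustrative, not the real GCC implementation):
```python
import re

from spack.util.executable import Executable


class Mytool(Package):  # `Package` is provided by Spack's package API
    """Sketch of the detection hooks described above."""

    executables = [r'^mytool$']

    @classmethod
    def determine_version(cls, exe):
        out = Executable(exe)('--version', output=str, error=str)
        match = re.search(r'version (\S+)', out)
        # Returning None marks this executable as not an instance.
        return match.group(1) if match else None

    @classmethod
    def determine_variants(cls, exes, version_str):
        # Return the variant string, optionally with extra attributes.
        variants = '+mpi' if any('mpi' in e for e in exes) else '~mpi'
        extra_attributes = {'prefix_hint': exes[0]}
        return variants, extra_attributes
```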
The YAML config for paths and modules of external packages has
changed: the new format allows a single spec to load multiple
modules. Spack will automatically convert from the old format
when reading the configs (the updates do not add new essential
properties, so this change in Spack is backwards-compatible).
With this update, Spack cannot modify existing configs/environments
without updating them (e.g. `spack config add` will fail if the
configuration is in a format that predates this PR). The user is
prompted to do this explicitly and commands are provided. All
config scopes can be updated at once. Each environment must be
updated one at a time.
Older versions do not compile correctly. New users should use 2.004,
not any of the older versions.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
FlatCC has been removed from UnifyFS as a dependency on the develop
branch and for future releases.
spath is now an optional dependency for UnifyFS to normalize relative
paths provided by the user.
The tcl module for r-dorng will fail to load due to the [] characters in
the description. This happens at least for Tcl-formatted modules
loaded by Lmod.
```
module load r-dorng-1.7.1-gcc-9.2.0-wtq7bne
Lmod has detected the following error: .../spack/share/spack/modules/linux-centos7-broadwell/r-dorng-1.7.1-gcc-9.2.0-wtq7bne:(r-dorng-1.7.1-gcc-9.2.0-wtq7bne):
invalid command name "L'Ecuyer"
```
Split text for short and long descriptions.
* Add variants to petsc
This PR adds the following variants to the petsc package
- gmp
- jpeg
- libpng
- giflib
- mpfr
- netcdf
- pnetcdf (parallel-netcdf)
- moab
- eigen
- random123
- exodusii
- mstk
- cgns
- memkind
- muparser
- p4est
- saws
- libyaml
- zstd
* Fix flake8 errors
* Additional changes to Petsc recipe
This commit addresses the issues with dependencies that were brought up
in the comments. There are also a few other enhancements.
- the language of the new variant descriptions was changed to be more
consistent with what was already in the recipe
- an explicit '+mpi' was added to the depends_on('hypre...') directives
- an explicit '+mpi' was added to the depends_on('trilinos...')
directives
- the run time error checking for '~mpi' was replaced with 'conflicts()'
directives that will cause the install to fail sooner
- additional variants that were 'parallel only' were added to the '~mpi'
check
* Set the `~mpi` conflicts msg to a variable
* Changing raja, chai, and umpire packages so all will compile with each other.
* Need a CUDA version of CHAI when compiling with raja+cuda+chai
* Updating checks for commit.
* Adding comments explaining why chai+umpire tests were disabled
* Reactivating tests for CHAI and Umpire
* reordering versions
* Unified handling of Cuda Arch
* Adding latest versions
* Unused/Untested: removed
* Aesthetic and test mode in Chai
* Unified handling of Cuda Arch
* Using 'ON' consistently, instead of 'On'
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fix, suggestion and patch:
Chai depends on RAJA, not the other way.
Apply suggested master-main version mapping.
Add Umpire version 3.0.0 and patch.
Co-authored-by: Robert Blake <blake14@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package - REDItools
This PR adds the REDItools package, along with a new package dependency,
py-fisher. This contains a patch generated from the python 2to3 script
as well as some other fixes. I am not sure if the project is ready to
support python-3 yet but I submitted the other patches upstream.
* Update var/spack/repos/builtin/packages/reditools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* new version 2020.3; new variants nosuffix and fft; version selections for plumed
* fixed too long lines
* fixed whitespaces
* revised fft interface according to @haampie 's suggestions
Co-authored-by: lu64bag3 <gerald.mathias@lrz.de>
* new package Wonton
* remove the flecsi variant because flecsi-sp does not have a spackage
* fix url, clean up whitespaces
* formatting
* put in explicit else clauses for variants in CMake section because CMake's behavior is system-dependent
Co-authored-by: Rao Garimella <rao@abyzou.lanl.gov>
* update version: intel packages daal, ipp, mkl-dnn, mkl, mpi, parallel-studio, pin, tbb and makes url parameter consistent and always use single quote.
* Fixes a typo in one of the sha256 checksums.
* Adds version entries for new versions of Intel packages.
* Adds hashes for new versions of Intel packages.
* Adds missing hash of Intel compiler.
* Adds the newest version of Intel MPI 2019.8.
* Fixes hash for intel-parallel-studio and intel-tbb.
* Fixes version number of Intel MPI.
* Adds GPI-2 package.
* Fixes flake8 noticed issues.
* Second try to fix flake8 comment
* Fixes some issues adamjstewart noticed.
* Fixes package according to flake8 complaints.
* Fixes flake8 issue.
* Renames next version to master and removes master.
* Adds maintainer into gpi-2 and returns master branch for the git
repository.
Co-authored-by: Robert Mijakovic <robert.mijakovic@lrz.de>
* Dyninst: 10.2 release
* Use 'elf' instead of 'elfutils'
* Use v10.2.0 tag
* Change minimum elfutils to 0.173
* Move STERILE_BUILD option to correct cmake_args
* make a sacrifice to the flake8 gods
* Add maintainer
* Revert to using elf@1 for elfutils
* Allow all ParaView versions to depend on Python 2
* Keep conflict for 5.9 and up with python 2
* Fix line too long
* Don't use backslash
* Try fixing indent
* Clean logic for python cmake flags
* Try fixing indent
Previously the python package for vim used static linking, and depending
on what system libraries were available and linked against could cause
symbol conflicts for python leading to segfaults in loading c modules in
the standard library (i.e. heapq). This patch addresses the issue by
dynamically linking them.
If you use git to clone a repository over ssh, git transfers control to
the ssh binary available on your path. If that ssh binary was built
against a contradictory version of openssl/kerberos, your git commands
will fail.
* sirius, update versions, fixes, add missing options
- sirius/spfft: depend on fftw-api
- cleanup +shared option
- sirius add option for memory pool
- sirius add version 6.5.3 and 6.5.4
- sirius: add spfft dependency for @master, @develop
* add nlcglib package
Robust wave function optimization for SIRIUS.
* add q-e-sirius package
based on q-e package
* Update var/spack/repos/builtin/packages/q-e-sirius/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* nlcglib: pass nvcc_wrapper to cmake
* Add 6.5.6
* Make flake8 happy
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
* canu: fix depends issue & using java instead of jdk
* Update var/spack/repos/builtin/packages/canu/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* typo error correction
* Adding recipe for `colorspacious` (a python package)
* Copyright year changed
* revert last commit on basic_usage.rst
* better with a good description
* fix according to failed test
* Update var/spack/repos/builtin/packages/py-colorspacious/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-colorspacious/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Nightly builds with MacOS started failing again
due to an upgrade of the default virtual environment
that now uses Python 3.8
This makes us hit #14102 and every build fails. This
commit should be reverted along with the fix to #14102.
* Additional versions of py-jsonschema.
* Tweak to force Maestro to use jsonschema@3.2.0:
* Correction of whitespace (flake8 error).
* Merges importlib's Python version conditions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add new versions of spfft
* Extend CudaPackage and use virtual fftw package
Co-authored-by: Simon Pintarelli <simon.pintarelli@cscs.ch>
* Add CUDA 11 compatibility note
* Depend on older cuda <= 10 for spfft <= 0.9.11
Co-authored-by: Simon Pintarelli <simon.pintarelli@cscs.ch>
* introduce logic for boost+context dependency and generic_context variant
* fix OTF2 instrumentation minor problem
* default coroutine impl depends on platform
* fix flake8
* add reference to ~generic_coroutines conflict info
* Update var/spack/repos/builtin/packages/hpx/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Octopus: Add support for version 10.0.
Fix compilation when using the MKL as a provider for BLAS/LAPACK. Octopus will now detect that the MKL also provides the FFTW API and will refuse to compile when both the FFTW library and the MKL are given to the configure script.
* Octopus: Add supported version range for libxc.
* berkeley-db: add version 18.1.40, update build options in package
* combine adamjstewart's changes
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [kassiopeia] New package
* [kassiopeia] Remove master branch, update dependencies
* Update var/spack/repos/builtin/packages/kassiopeia/package.py
Unable to test since I do not have a license for intel-parallel-studio, but I see no reason why it would not work.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [kassiopeia] depends_on mpi
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [kassiopeia] cmake_args with self.spec.satisfies and elses
* [kassiopeia] args.extend -> args.append
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* h5py: explicitly specify version
hdf5@1.10.5 on Cray is wrongly detected as 1.8.4.
* Update var/spack/repos/builtin/packages/py-h5py/package.py
Thanks. Also had this first, then CI was complaining about line length ...
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Before:
```console
$ licensee diff --license mit LICENSE-MIT
Comparing to MIT License:
Input Length: 1092
License length: 1020
Similarity: 92.46%
diff --git a/LICENSE b/LICENSE
index 0ce42af..be0ff1c 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,3 +1,4 @@
{+spack project developers. see the top-level copyright file for details.+}
permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "software"), to deal in
the software without restriction, including without limitation the rights to
```
After:
```console
$ licensee diff --license mit LICENSE-MIT
Comparing to MIT License:
Input Length: 1020
License length: 1020
Similarity: 100.00%
Exact match!
```
This gets us a 100% license match from GitHub's `licensee` tool.
Ci is currently failing on brew update with the error:
```
Error: Cannot install bazelisk because conflicting formulae are installed.
bazel: because Bazelisk replaces the bazel binary
Please `brew unlink bazel` before continuing.
Unlinking removes a formula's symlinks from /usr/local. You can
link the formula again after the install finishes. You can --force this
install, but the build may fail or cause obscure side effects in the
resulting software.
```
Avoiding:
```
$ brew update
$ brew upgrade
```
solves the issue by preventing the risk of conflicting formulae
* Update LBANN, Hydrogen, Aluminum to inherit CudaPackage
* Update CMake constraints: LBANN, Hydrogen, and Aluminum now require
cmake@3.16.0: (better support for pthreads with nvcc)
* Aluminum: add variants for host-enabled MPI and RMA features in a
MPI-GPU RDMA-enabled library
* NCCL: add versions 2.7.5-1, 2.7.6-1, and 2.7.8-1
* Hydrogen: add version 1.4.0
* LBANN: add versions 0.99 and 0.100
* Aluminum: add versions 0.4.0 and 0.5.0
* new package(s): py-gql
and related dependencies:
py-aiohttp
py-async-timeout
py-graphql-core
py-idna-ssl
py-multidict
py-websockets
py-yarl
new versions:
py-requests
* fixes
Co-authored-by: Andrew W Elble <aweits@skl-a-00.rc.rit.edu>
* NWChem 7.0.0
* add python2 for 6.8.1. removed 6.8 https://github.com/spack/spack/pull/17779#discussion_r462700413
* nwchem 6.8.1 breaks with gcc 10 and later
* restored extra python bits for version 6.8.1. add env. definition of basis libraries
* changes for flake8
* url fixed
* prevent 6.8.1 being compiled with gcc 10
* Ferret: Add missing dependency with curl.
* Ferret: Don't force using the static version of libgfortran.
* Ferret: Ensure Spack's compiler wrappers are used.
This allows properly setting the rpaths.
* Ferret: Add support for versions 7.3 to 7.6.
* Ferret: Add a variant to install Ferret standard datasets.
* Ferret: Define some useful runtime environment variables.
* Ferret: Fix flake8.
Also add myself as a maintainer as suggested by @alalazo.
As discussed in issue #17638, kahip fails to build when
scons depends on python@3.
This converts the print statements in various SConstruct files
into python3 friendly print functions.
I found most of the affected SConstruct files in both @2.00 and
the later versions I found on web, but some files were only in @2.00.
I split the patches into two files for that reason, but have not
tried the later versions.
* LAMMPS: Use LATTE 1.2.2 starting with version 20200602.
Versions 20200602 and later require Latte 1.2.2. This caused the internal Latte distribution to be used instead of the Latte install provided by Spack.
* LAMMPS: Add new versions 20200630 and 20200721.
* dcmtk: fixed type error
* Update var/spack/repos/builtin/packages/dcmtk/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package: IDL
* Update var/spack/repos/builtin/packages/idl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/idl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* added license header and changed url_for_version to just url
* removed unused imports, addressed comments
* removed trailing whitespace on line 14
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
During configure lhapdf5 searches for python. On one system
I tested on (ubuntu 19.10) it finds a system installed python3
and fails to create the python extension.
The variant is named to make explicit that this is a python2-only extension.
* openfoam: use MPI 'headers' property (fixes#17730)
* openfoam: +spdp variant, usable for OpenFOAM 1906 and later
in contrast to +float32, which uses single-precision throughout, +spdp
uses the following:
- single-precision for most internals
- double-precision for linear solver
* openfoam: add m4 as build dependency
* scotch: update to 6.0.9 released Oct 2019
Co-authored-by: Mark Olesen <Mark.Olesen@esi-group.com>
Libunwind already builds a shared library. The +pic variant adds the
compiler pic flag to the static archive so that it can be linked into
another shared library.
Eospac's build breaks on gcc@10: because it depends on -fcommon
behavior and GCC changed the default to -fno-common. Added a
conditional argument to support bleeding-edge compilers.
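A hedged sketch of one way to express such a conditional argument in a package recipe, using Spack's `flag_handler` hook (the actual eospac change may differ):

```python
def flag_handler(self, name, flags):
    # gcc 10 switched the default from -fcommon to -fno-common;
    # restore the old behavior the build relies on
    if name == 'cflags' and self.spec.satisfies('%gcc@10:'):
        flags.append('-fcommon')
    # inject into the compiler wrappers; leave env and
    # build-system flags untouched
    return (flags, None, None)
```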
Style and documentation tests take just a few minutes
to run. Since GitHub Actions can't restart a single
job, only an entire workflow, here we group
tests with similar duration together.
Relative paths in views have been broken since #17608 or earlier.
- [x] Fix by passing base path of the environment into the `ViewDescriptor`.
Relative paths are calculated from this path.
This PR adds the r-dss package and the r-bsseq package, also new, as a
dependency. This includes the latest versions, which required updates to
the following dependencies:
- r-biocgenerics
- r-iranges
- r-s4vectors
- r-summarizedexperiment
Older versions of r-dss and r-bsseq are included as well to ensure
compatibility with older versions of the above dependencies.
* add tutorial setup script to share/spack
* Add check for Ubuntu 18, fix xvda check, fix apt-get errors
- now works on t2.micro, t2.small, and m instances
- apt-get needs retries around it to work
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
A bug was introduced in #13100 where ChildErrors would be redundantly
printed when raised during a build. We should eventually revisit error
handling in builds and figure out what the right separation of
responsibilities is for distributed builds, but for now just skip
printing.
- [x] SpackErrors were designed to be printed by the forked process, not
by the parent, so check if they've already been printed.
- [x] update tests
* WHIZARD: add versions 2.8.4 and 2.8.3
* New package: LCIO
* WHIZARD: add optional dependency on LCIO
* WHIZARD: add optional dependency on Openloops
* WHIZARD: allow building with either hepmc or hepmc3 dependencies
* Openloops: set process_lib_dir in configure
* Openloops: fix reference to variant
astropy 3.2.1 fails to build with python 3.8.3 with
errors similar to this:
astropy/stats/_stats.c:318:11: error: too many arguments to function 'PyCode_New'
PyCode_New(a, 0, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
These are files that are generated by cython, but are included in the
tarball. Since there's apparently been an API change to PyCode_New, they will
need to be re-cythonized to compile correctly.
Fixes #17299
Cray Shasta systems appear to use an unmodified SLES or other Linux operating system on the backend (like Cray "Cluster" systems and unlike Cray "XC40" systems that use CNL).
This updates the CNL version detection to properly note that this is the underlying OS instead of CNL and delegates to LinuxDistro.
* environment-views: fix bug where missing recipe/repo breaks env commands
When a recipe or a repo has been removed from Spack and an environment
is active, view activation crashes Spack before any
commands can be executed. Further, the error message is not at all clear
in explaining the issue.
This forces view regeneration to always start from scratch to avoid the
missing package recipes, and defaults add_view=False in main for views activated
by the `spack -e` option.
* add messages to env status and deactivate
Warn users that a view may be corrupt when deactivating an environment
or checking its status while active. Updated message for activate.
* tests for view checking
Co-authored-by: Gregory Becker <becker33@llnl.gov>
* switch from bool to int debug levels
* Added debug options and changed lock logging to use more detailed values
* Limit installer and timestamp PIDs to standard debug output
* Reduced verbosity of fetch/stage/install output, changing most to debug level 1
* Combine lock log methods; change build process install to debug
* Changed binary cache install messages to extraction messages
* bugfix: make compiler preferences slightly saner
This fixes two issues with the way we currently select compilers.
If multiple compilers have the same "id" (os/arch/compiler/version), we
currently prefer them by picking the one with the most supported
languages. This can have some surprising effects:
* If you have no `gfortran` but you have `gfortran-8`, you can detect
`clang` that has no configured C compiler -- just `f77` and `f90`. This
happens frequently on macOS with homebrew. The bug is due to some
kludginess about the way we detect mixed `clang`/`gfortran`.
* We can prefer suffixed versions of compilers to non-suffixed versions,
which means we may select `clang-gpu` over `clang` at LLNL. But,
`clang-gpu` is not actually clang, and it can break builds. We should
prefer `clang` if it's available.
- [x] prefer compilers that have C compilers and prefer no name variation
to variation (see the sketch below).
* tests: add test for which()
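A self-contained sketch of the preference ordering described above, with a toy stand-in for the compiler objects (not Spack's exact code):

```python
import os
import re
from collections import namedtuple

Compiler = namedtuple('Compiler', ['cc'])  # toy stand-in for this sketch

def compiler_sort_key(compiler):
    # prefer compilers that actually have a C compiler configured
    has_cc = compiler.cc is not None
    # prefer bare names ('clang') over suffixed ones ('clang-gpu')
    name = os.path.basename(compiler.cc or '')
    unsuffixed = re.search(r'-\w+$', name) is None
    # sort() is ascending and False < True, so negate what we prefer
    return (not has_cc, not unsuffixed)

candidates = [Compiler('/usr/bin/clang-gpu'), Compiler('/usr/bin/clang'),
              Compiler(None)]
candidates.sort(key=compiler_sort_key)
print(candidates[0].cc)  # -> /usr/bin/clang
```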
Apple's gcc is really clang. We previously ignored it by default but
there was a regression in #17110.
Originally we checked for all clang versions with this, but I know of
none other than `gcc` on macOS that actually does this, so limiting to
`apple-clang` should be ok.
- [x] Fix check for `apple-clang` in `gcc.py` to use version detection
from `spack.compilers.apple_clang`
The `spack-build-env.txt` file may contain many secrets, but the obvious one is the private signing key in `SPACK_SIGNING_KEY`. This file is nonetheless uploaded as a build artifact to gitlab. For anyone running CI on a public version of Gitlab this is a major security problem. Even for private Gitlab instances it can be very problematic.
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
* llvm-flang Only build offload code if cuda enabled
The current version always executes `cmake(*args)` as part of the post install. If device offload is not part of the build, this results in referencing `args` without it being set and the error:
```
==> Error: UnboundLocalError: local variable 'args' referenced before assignment
```
Looking at the previous version of `llvm-package.py`, this whole routine appears to be required only for offload, so indent `cmake/make/install` to be under the `if`.
* Update package.py
Add comment
* [M4] Add missing compiler flag on Cray Compiler
The new versions of the Cray compiler are based on Clang, which means we
need to add the same LDFLAGS as other clang environments.
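A sketch of the kind of change involved; the flag value mirrors what the m4 recipe already passes for clang, and treating cce@9: the same way is the point of the commit (exact details are assumptions):

```python
def configure_args(self):
    args = []
    # Cray's cce >= 9 is Clang-based and needs the same linker
    # workaround that the recipe applies for %clang
    if (self.spec.satisfies('%clang') or
            self.spec.satisfies('%cce@9:')):
        args.append('LDFLAGS=-rtlib=compiler-rt')
    return args
```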
* MacOS build tests
- Run on PRs that modify the YAML file of the workflow
- Don't clone Spack, since we are in the Spack repo now
* Try to add opengl to configuration to build jupyter
* fixup
Spack did not support usage of the `--config-scope` option in
combination with an environment: In `lib/spack/spack/main.py`,
`spack.config.command_line_scopes` is set equal to any config scopes
passed by the `--config-scope` option. However, this is done after
activating an environment. In the process of activating an environment,
the `spack.config.config` singleton is instantiated, so later setting of
`spack.config.command_line_scopes` is ignored.
This commit sets command line scopes before activating an environment to
ensure that they are included in the configuration.
Co-authored-by: Tim Fuller <tjfulle@sandia.gov>
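A minimal sketch of the ordering fix; the helper names here are illustrative, not Spack's exact code:

```python
import spack.config

def setup_main_options(args):
    # register --config-scope directories first, before anything can
    # instantiate the spack.config.config singleton
    spack.config.command_line_scopes = args.config_scopes or []

    # only then activate the environment; activation builds the config
    # singleton, which now includes the command-line scopes
    env = find_environment(args)  # hypothetical helper
    if env:
        activate(env)             # hypothetical helper
```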
* Add new versions of texlive and poppler.
* Add new versions of harfbuzz which also relocated source location to github.
* Update var/spack/repos/builtin/packages/harfbuzz/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Restore deleted url line in harfbuzz.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Addition of Chainmap to satisfy Maestro dependency.
* Additional versions and dependencies for Maestro.
* Updated URL to point to PyPI.
* Updates to chainmap hashes.
* Updates to pull the version from PyPI.
* Corrections to flake8 errors.
* Stricter restrictions on Python versioning.
Maestro actually supports Python 3.5 and later.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Only install chainmap for Python2 versions.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Removal of setuptools python cond.
* Removal of version constraints on setuptools.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
GCC 4.8.5 on rhel6:
```
utext.cpp:572:5: error: 'max_align_t' in namespace 'std' does not name a
type
std::max_align_t extension;
^
utext.cpp: In function 'UText* utext_setup_67(UText*, int32_t,
UErrorCode*)':
utext.cpp:587:73: error: 'max_align_t' is not a member of 'std'
spaceRequired = sizeof(ExtendedUText) + extraSpace -
sizeof(std::max_align_t);
^
utext.cpp:587:73: note: suggested alternative:
In file included from
/projects/spack/opt/spack/gcc-4.4.7/gcc/6ln2t7b/include/c++/4.8.5/cstddef:42:0,
from utext.cpp:19:
/projects/spack/opt/spack/gcc-4.4.7/gcc/6ln2t7b/lib/gcc/x86_64-unknown-linux-gnu/4.8.5/include/stddef.h:
425:3: note: 'max_align_t'
} max_align_t;
^
utext.cpp:598:57: error: 'struct ExtendedUText' has no member named
'extension'
ut->pExtra = &((ExtendedUText *)ut)->extension;
^
g++ ... loadednormalizer2impl.cpp
g++ ... chariter.cpp
```
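One way to encode this incompatibility in the recipe would be a conflict on old gcc. This is a hedged sketch only: the actual fix may instead patch the source, and the version ranges here are assumptions:

```python
# std::max_align_t requires the C++11 library support that arrived
# with gcc 4.9; older gcc cannot build recent ICU
conflicts('%gcc@:4.8', when='@60:',
          msg='ICU requires std::max_align_t, available from gcc 4.9')
```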
* Initial version of PySCF.
* Add master branch to xcfun library
* PySCF only compatible with specific commit of xcfun library
* Update var/spack/repos/builtin/packages/py-pyscf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pyscf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Revert "PySCF only compatible with specific commit of xcfun library"
This reverts commit 8296005400.
* Revert "Add master branch to xcfun library"
This reverts commit f2b6998931.
* Issue a conflict for the xcfun library version rather than relying on a random commit.
* Add version xcfun 2.0.0a2 which is needed by PySCF.
* Remove xcfun conflict and express the dependency more explicitly. Add a comment explaining why this is necessary.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* wannier90: add versions 3.0.0 and 3.1.0 and 'shared variant'
Added versions 3.0.0 and 3.1.0
Added shared variant
Added url_for_version function, as versions less than 3 are from the
wannier.org site and versions 3 and up are from github.com (see the sketch below)
Added the MPI libraries to the list of libs substituted into the make.sys file
in place of @LIBS
Made it possible to build a shared object version of the library for versions
< 3 by filtering the src/Makefile.2 file (based off of the patch from a src rpm
from RHEL for version 2.0.1)
Create a modules directory in the install prefix root directory and copy the
Fortran .mod files there.
Set the MPIFC variable to the Spack Fortran MPI compiler wrapper.
* abinit: added 'wannier90' variant which enables building abinit with wannier90
Added wannier90 variant
Made abinit depend on the shared object ('shared') variant of
wannier90 if the wannier90 variant is selected
Add configure args for wannier90 libs, includes, and binaries and to
set MPIFC
set the dft-flavor to wannier90 when wannier90 is enabled and only
set the dft flavor to 'atompaw+libxc' if wannier90 is not selected
* Update var/spack/repos/builtin/packages/abinit/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* incorporated bbecker's suggestion for making the strings less ugly!
* incorporated bbecker's suggestion to fix the logic for picking which
"DFT flavor" configure argument to pass.
If the wannier variant is enabled, it passes --with-dft-flavor=wannier90
to configure, otherwise it passes --with-dft-flavor=atompaw+libxc to configure
* Changed to using plain strings
* Fixed version tests
* incorporated @adamjstewart's fix for testing if the major version is > 2
* incorporated @adamjstewart's fix to check if mpi is enabled and
only set the MPIFC variable if it is.
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Only set MPIFC if '+mpi' is set
* incorporated fixes from @adamjstewart including:
- using the string=True argument to filter_file (and removed the unneeded
escapes)
- changing the url to the github location
- fixing the version checks
- building a libwannier.dylib on darwin
* incorporated fixes suggested by @adamjstewart including:
- using the string=True argument to filter_file and cleaned up the escapes
- only pass the MPIFC argument to configure when '+mpi' is set
- changed the url to the github site for Wannier90
- fixed the version checks
- build a 'libwannier.dylib' file when building the shared variant on darwin
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* moved a configure argument from its own '+mpi' check to under the lower one
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Cleaned up syntax as suggested by @adamjstewart
It looks *so much better* now! Thanks!
* removed unneeded import of 'find' from 'llnl.util.filesystem' package
as suggested by @adamjstewart
* Update var/spack/repos/builtin/packages/wannier90/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* incorporated changes from @adamjstewart
changed check to "if '@:2 +shared' in spec:" instead of a nested check of '@:2' and
'+shared'
removed unneeded joins used in filter_file and spliced the list of objs directly into
the filter_file call
used the dso_suffix instead of testing for darwin to determine the name of the
shared library
* removed whitespace from blank line
* fixed bug with '../../wannier90.x: .*' not being treated as a regexp. Thanks Adam!
* fixed missing whitespace when modifying Makefile.2
Co-authored-by: Greg Becker <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
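A minimal sketch of the url_for_version logic mentioned above, as a package method (the exact URL templates are assumptions):

```python
def url_for_version(self, version):
    # versions below 3 are distributed from wannier.org;
    # 3 and up moved to GitHub
    if version < Version('3.0.0'):
        return 'http://www.wannier.org/code/wannier90-{0}.tar.gz'.format(version)
    return ('https://github.com/wannier-developers/wannier90/'
            'archive/v{0}.tar.gz'.format(version))
```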
* New package: OpenLoops
* install() for openloops
* Working OpenLoops recipe
* Flake-8
* Only copy collection file if required; add clarification to num_jobs
* Add __future__ import just in case
* Fix missing space
* Remove __future__ import
* Changes from review, pt. 1
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Replace print() with write()
* Flake-8
Co-authored-by: iarspider <iarpsider@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* vasp: New package.
* Remove unneeded `#noqa`
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Removed a completely needless tty.debug()
* Add compiler conflicts() and minor fixes
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add apcomp package
* add maintainers
* flake8
* Update var/spack/repos/builtin/packages/apcomp/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* review suggestions
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* icu4c: Add new versions for older releases.
The old URLs for versions 60.1, 58.2 and 57.1 do not work anymore, so add the versions available on GitHub.
The old versions are kept for reference (cf. #15896).
* icu4c: Add versions 66.1 and 67.1.
* icu4c: Fix compilation of versions 58 and 59 with recent glibc.
This matches the current latest version of protobuf in Spack.
Generally the versions of py-protobuf and protobuf should match,
but this constraint is not currently recorded in py-protobuf.
For normal users, `-o` or `--no-same-owner` (GNU extension) is
the default behavior, but for the root user, `tar` attempts to preserve
the ownership from the tarball.
This makes `tar` use `-o` all the time. This should improve untarring
files owned by users that are not available in rootless Docker builds.
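The idea in Spack terms, as a hedged sketch (the function placement is illustrative):

```python
from spack.util.executable import which

def untar(archive_file):
    tar = which('tar', required=True)
    # -o (--no-same-owner): extract as the invoking user even when
    # running as root, instead of restoring ownership from the tarball
    tar('-oxf', archive_file)
```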
* gdk-pixbuf: Add new stable versions.
* gdk-pixbuf: Add a missing dependency on libx11.
Also add a variant disabled by default to make it optional since it is considered deprecated
(cf. 3362e94c25).
* Added new versions to magics and began to set not-so-optional netcdf dependency
* Added enforced netcdf dependency
* Fix also works for version 4.1.0
The error message was not updated when the behavior of Spack environments
was changed to not automatically activate the local environment in #17258.
The previous error message no longer makes sense.
* bbcp: Update the URLs to use HTTPS.
The HTTP URLs do not work anymore.
* bbcp: Add missing libnsl dependency.
* bbcp: Rename the git-based version to match the branch name.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-pysam: add LDFLAGS to curl
* Update var/spack/repos/builtin/packages/py-pysam/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* New package: vbfnlo
* Add new package: vbfnlo
* Add recipe for looptools
* Add patch for looptools
* LoopTools: patch not needed (fixed by developers without changing version)
* Remove patch file as well
* Update package.py
* Update package.py
* Fix vbfnlo recipe for old version
Co-authored-by: iarspider <iarpsider@gmail.com>
When Spack installs a package, it stores repository package.py files
for it and all of its dependencies - any package with a Spack metadata
directory in its installation prefix.
It turns out this was too broad: this ends up including external
packages installed by Spack (e.g. installed by another Spack instance).
Currently Spack doesn't store the namespace properly for such packages,
so even though the package file could be fetched from the external,
Spack is unable to locate it.
This commit avoids the issue by skipping any attempt to locate and copy
from the package repository of externals, regardless of whether they
have a Spack repo directory.
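A hedged sketch of the guard described above (helper names are illustrative, not the exact Spack code):

```python
def dump_packages(spec, path):
    for node in spec.traverse():
        if node.external:
            # externals (possibly installed by another Spack instance)
            # may carry a metadata dir but no locatable recipe: skip
            continue
        copy_package_repo(node, path)  # hypothetical helper
```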
* new package: ligra
* setup run environment
* tidy up
* Update var/spack/repos/builtin/packages/ligra/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/ligra/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/ligra/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* flake8
Co-authored-by: Sinan81 <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
`gcc` 9 and above emit more warnings, which break the `flatcc` build by default because `-Werror` is enabled. This loosens the build so that we can build with more compilers in Spack.
- [x] Add `-DFLATCC_ALLOW_WERROR=OFF` to `flatcc` CMake arguments
Co-authored-by: Frank Willmore <willmore@anl.gov>
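A sketch of the change in the flatcc recipe; only the relevant hook is shown:

```python
class Flatcc(CMakePackage):
    def cmake_args(self):
        # gcc >= 9 emits extra warnings; stop -Werror from
        # failing the build
        return ['-DFLATCC_ALLOW_WERROR=OFF']
```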
* Buildcache:
* Try mocking an install of quux, corge and garply using prebuilt binaries
* Put patchelf install after ccache restore
* Add script to install patchelf from source so it can be used on Ubuntu:Trusty, which does not have a patchelf package. The script will skip building on macOS
* Remove mirror at end of bindist test
* Add patchelf to Ubuntu build env
* Revert mock patchelf package to allow other tests to run.
* Remove depends_on('patchelf', type='build'), relying instead on a
test fixture to ensure patchelf is available.
* Call g++ command to build libraries directly during test build
* Flake8
* Install patchelf in before_install stage using apt unless on Trusty where a build is done.
* Add some symbolic links between packages
* Flake8
* Flake8:
* Update mock packages to write their own source files
* Create the stage because spec search does not create it any longer
* updates after change of list command arguments
* cleanup after merge
* flake8
fixes #17396
This prevents the class attribute from being inherited and
saves the current maintainers from becoming the default
maintainers of every CudaPackage subclass.
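The underlying Python behavior, shown as a self-contained sketch (the class names here are illustrative):

```python
class CudaPackageMixin:                 # stand-in for CudaPackage
    maintainers = ['cuda-maintainer']   # class attribute on the mixin

class SomeCudaApp(CudaPackageMixin):    # a hypothetical package
    pass

print(SomeCudaApp.maintainers)          # ['cuda-maintainer'] -- inherited!
# The fix removes the default from the mixin, so each package must
# declare its own maintainers explicitly.
```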
We got rid of `master` after #17377, but users still want a way to get
the latest stable release without knowing its number.
We've added a `releases/latest` tag to replace what was once `master`.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Fixes #16478
This allows an uninstall to proceed even when encountering pre-uninstall
hook failures if the user chooses the --force option for the uninstall.
This also prevents post-uninstall hook failures from raising an exception,
which would terminate a sequence of uninstalls. This isn't likely essential
for #16478, but I think overall it will improve the user experience: if
the post-uninstall hook fails, there isn't much point in terminating a
sequence of spec uninstalls because at the point where the post-uninstall
hook is run, the spec has already been removed from the database (so it
will never have another chance to run).
Notes:
* When doing spack uninstall -a, certain pre/post-uninstall hooks aren't
important to run, but this isn't easy to track with the current model.
For example: if you are uninstalling a package and its extension, you
do not have to do the activation check for the extension.
* This doesn't handle the uninstallation of specs that are not in the DB,
so it may leave "dangling" specs in the installation prefix
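A hedged sketch of the uninstall flow described above; the helper names are illustrative, not Spack's actual code:

```python
import llnl.util.tty as tty

def do_uninstall(spec, force=False):
    try:
        run_pre_uninstall_hooks(spec)        # hypothetical
    except Exception as e:
        if not force:
            raise
        tty.warn('pre-uninstall hook failed: {0}'.format(e))

    remove_from_database(spec)               # hypothetical

    try:
        run_post_uninstall_hooks(spec)       # hypothetical
    except Exception as e:
        # the spec is already gone from the DB; aborting the remaining
        # uninstalls here would serve no purpose
        tty.warn('post-uninstall hook failed: {0}'.format(e))
```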
This PR creates a new spack package for
mumax: GPU accelerated micromagnetic simulator.
This uses the current beta version because:
- the stable release is somewhat dated (~2018)
- the beta is the only version that supports recent GPU kernels
On Cray platforms, we rely heavily on the module system to figure out
what targets, compilers, etc. are available. This unfortunately means
that we shell out to the `module` command as part of platform
initialization.
Because we run subcommands in a shell, we can get infinite recursion if
`setup-env.sh` and friends are in some init script like `.bashrc`.
This fixes the infinite loop by adding guards around `setup-env.sh`,
`setup-env.csh`, and `setup-env.fish`, to prevent recursive
initializations of Spack. This is safe because Spack never shells out to
itself, so we do not need it to be initialized in subshells.
- [x] add recursion guard around `setup-env.sh`
- [x] add recursion guard around `setup-env.csh`
- [x] add recursion guard around `setup-env.fish`
- [x] Remove references to `master` branch
- [x] Document how release branches are structured
- [x] Document how to make a major release
- [x] Document how to make a point release
- [x] Document how to do work in our release projects
* fix binutils deptype for gcc
binutils needs to be a run dependency of gcc
* Fix gcc+binutils build on RHEL7+
static-libstdc++ is not available with system gcc.
Anyway, as it is for bootstrapping, we do not really mind depending on
a shared libstdc++.
Co-authored-by: Michael Kuhn <michael@ikkoku.de>
Spack was attempting to calculate abspath on the located config.guess
path even when it was not found (None); this commit skips the abspath
calculation when config.guess is not found.
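A minimal sketch of the guard, with a hypothetical search helper:

```python
import os

def locate_config_guess(search_dirs):
    path = search_for_config_guess(search_dirs)  # hypothetical; may be None
    if path is None:
        # nothing found: don't call abspath on None
        return None
    return os.path.abspath(path)
```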
* Add Rivet and YODA
* Add patches
* Flake-8
* Set level for Rivet patches
* Syntax fix
* Fix dependencies of Rivet
* Update var/spack/repos/builtin/packages/rivet/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The latest openblas version, 0.3.10, changed how Fortran libraries
are detected, and this broke Fujitsu compiler support.
This (new) openblas patch addresses that issue.
* dftbplus: New package.
* dftbplus: Addresses @adamjstewart's comments on PR #15191
* dftbplus: Fixes format() calls that slipped in previous commit.
* dftbplus: Appease flake8.
* dftbplus: Change 'url' and misc. fixes.
* Add a resource to do the job of './utils/get_opt_externals'
Also:
* Add url_for_version function
* Add Java to PATH for run environment
* Update `install` method to handle old and new version
Co-authored-by: lu64bag3 <gerald.mathias@lrz.de>
* examl +
* examl style fix
* examl flake8 fix
* Update var/spack/repos/builtin/packages/examl/package.py
using `working_dir`
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Move flake8 tests to GitHub Actions
* Move shell tests to GitHub Actions
* Moved documentation build to GitHub Actions
* Don't run coverage on Python 2.6
Since we get connection errors consistently on Travis
when trying to upload coverage results for Python 2.6,
avoid computing coverage entirely to speed-up tests.
* hbase: refine url, java and version
* Update var/spack/repos/builtin/packages/hbase/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>