If the dump file already existed, it was not truncated, resulting in
a file of unaltered size, with the new content at the beginning,
"padded" with the tail of the old content, since the new content was
not long enough to overwrite it.
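To illustrate (a minimal sketch with a hypothetical file name), writing without truncating leaves the old tail in place, while `truncate()` fixes it:
```python
path = "dump.json"  # hypothetical example file

with open(path, "w") as f:
    f.write("OLD" * 10)  # 30 bytes of old content

with open(path, "r+") as f:  # "r+" seeks to 0 but does NOT truncate
    f.write("new")  # only the first 3 bytes are overwritten

print(open(path).read())  # new content "padded" with the old tail

with open(path, "r+") as f:  # the fix: truncate after writing
    f.write("new")
    f.truncate()  # drop everything past the current position

print(open(path).read())  # "new"
```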
`colify` is an old module in Spack that still uses `**kwargs` liberally.
We should be more explicit. Doing this eliminates the need for many
checks (you can't pass the wrong argument if it isn't allowed) and makes
the function documentation clearer.
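For illustration (not colify's actual signature), the pattern being applied:
```python
# Before: any option can be passed; every one needs a manual check.
def colify(elts, **options):
    indent = options.pop("indent", 0)
    width = options.pop("width", None)
    if options:
        raise TypeError("unexpected options: " + ", ".join(options))
    return elts, indent, width

# After: keyword-only arguments. Python itself rejects unknown or
# misspelled options, and the signature documents what is accepted.
def colify(elts, *, indent=0, width=None):
    return elts, indent, width
```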
Fixes a bug introduced in 44ed0de8c0,
where the push method of binary_distribution now takes the named args
include_root and include_dependencies, to avoid the **kwargs hole.
But the call site wasn't updated, and we passed a dict of keys/values
instead of keyword arguments, which resulted in a call like this:
```
push(include_root={"include_root": True, "include_dependencies": False})
```
This commit fixes that, and adds a test to see if we push the correct packages.
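For contrast, a sketch of the keyword-only signature and the corrected call site (the stub and "mypkg" are hypothetical):
```python
def push(*specs, include_root=True, include_dependencies=True):
    """Stand-in for binary_distribution.push; prints what it would do."""
    print(specs, include_root, include_dependencies)

# The broken call silently stuffed a dict into a single argument:
push("mypkg", include_root={"include_root": True, "include_dependencies": False})

# The fix passes actual keyword arguments:
push("mypkg", include_root=True, include_dependencies=False)
```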
This error shows up a lot, and it's typically harmless because an error
happened before the source build even started, in which case we don't
have build logs to copy. So, warn instead of error, because it distracts
from the actual CI error.
Currently we attempt to set up the build environment even when
dependencies are not installed, which typically results in an error
while searching for libraries or executables in a dependency's prefix.
With this change, we get a more user-friendly error:
```
$ spack build-env perl
==> Error: Not all dependencies of perl are installed, cannot setup build environment:
- qpj6dw5 perl@5.36.0%apple-clang@14.0.0+cpanm+open+shared+threads build_system=generic arch=darwin-ventura-m1
- jq2plbe ^berkeley-db@18.1.40%apple-clang@14.0.0+cxx~docs+stl build_system=autotools patches=26090f4,b231fcc arch=darwin-ventura-m1
...
$ echo $?
1
```
* Allow users to specify root env dir
Environments managed by Spack have some advantages over anonymous
environments, but they are tucked away inside Spack's directory tree.
This PR gives users the ability to specify where environments should live.
See #32823
This is also taken as an opportunity to ensure that all references are to "managed environments",
rather than "named environments". Prior to this PR some references to the latter persisted.
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
* Update exago w/ 1.5.1 and small updates to hiop.
* Fix styling.
* Add RAJA back to ExaGO package.
* Update RAJA requirement for ExaGO and HiOp.
* Update last RAJA requirement in HiOp.
* Add new sphinx rtd theme release 1.2.0
The new release helps with supporting more recent versions of docutils
* set docutils officially supported version
* add jquery dependency for sphinx-rtd-theme
* add conflict with jquery version
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* correct dependency
* fix version dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* set sphinx version
* fix sha256
* add version for flit-core
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
---------
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The call:
```
x.satisfies(y[, strict=False])
```
is commutative, and tests non-empty intersection, whereas:
```
x.satisfies(y, strict=True)
```
is not commutative, and tests set-inclusion.
There are two fast paths: when strict=False, both self and other need
to be concrete; when strict=True, we can optimize when other is concrete.
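The two relations, illustrated with sets standing in for concrete version ranges:
```python
x = {"1.0", "1.1"}  # e.g. foo@1.0:1.1
y = {"1.1", "1.2"}  # e.g. foo@1.1:1.2

# strict=False: non-empty intersection, hence commutative.
assert bool(x & y) == bool(y & x) == True

# strict=True: set inclusion, which is not commutative.
z = {"1.1"}
assert z <= x            # z satisfies x strictly
assert not (x <= z)      # the converse does not hold
```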
a) It's used by site administrators, so it's niche
b) If it's used by site administrators, they likely need to modify the config anyhow, so the default config only serves as an example to get started
c) It's too arbitrary to enable tcl but disable lmod.
Spack generally ignores file-file projection clashes in environment
views, but would eventually error when linking the `.spack` directory
for two specs of the same package.
This leads to obscure errors where users have no clue what the issue is
or how to fix it. On top of that, the error comes very late, since it
happens when the .spack dir contents are linked (which happens after
everything else).
This PR improves that by doing a quick check ahead of time to see if
clashes are to be anticipated (by simply checking for clashes in the
projection of each spec's .spack metadir). If there are clashes, a
human-readable error is thrown which shows two of the conflicting specs
and tells users to use unify:true, view:false, or custom projections.
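In spirit, the early check boils down to detecting duplicate projected paths before any linking happens (hypothetical helper names):
```python
def find_metadata_clashes(specs, projection_for):
    """Return pairs of specs whose projected .spack dirs would collide.

    ``projection_for`` is a hypothetical callable mapping a spec to the
    relative path its metadata would occupy in the view.
    """
    seen, clashes = {}, []
    for spec in specs:
        path = projection_for(spec)
        if path in seen:
            clashes.append((seen[path], spec))
        else:
            seen[path] = spec
    return clashes

# Two installs of the same package projected onto the same prefix:
specs = ["zlib@1.2.13%gcc", "zlib@1.2.13%clang"]
print(find_metadata_clashes(specs, lambda s: "zlib-1.2.13/.spack"))
```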
* add pytng
* black
* add setuptools
* fix
* Update var/spack/repos/builtin/packages/py-pytng/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pytng/package.py
* Update var/spack/repos/builtin/packages/py-pytng/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
---------
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Kokkos, when compiled by Spack without +wrapper, could potentially capture the Spack compiler wrappers, resulting in CMake configs and kokkos_launch_compiler trying to run the Spack compiler wrapper after installation.
The checksum exception was not detailed enough and was not reraised when
using cache only, resulting in useless error messages. Now it dumps the
file path, expected hash, computed hash, and a summary of the downloaded
file.
Batch scripts in general will not function without carriage return line
endings on Windows. We rely on these scripts to support cmd, so we
should not allow these scripts to be converted to lf.
Note: Windows 11 supports lf line endings due to the use of Windows
Terminal. Once support for Windows 10 is dropped, this change can be
reverted.
When running many concurrent spack install processes that need to write
to the db, Spack regularly times out. This is because writing to the DB
after another process has written to it requires deserialization of the
db, mutating it in memory, and serializing it again, which takes some
time. On top of that, I believe there's a 1 second retry when a write
lock cannot be obtained, so I think this means only 3 processes can
really write to the DB at the same time before timing out.
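Schematically, each write is a full read-modify-write cycle under the lock (an illustrative sketch only; a threading lock stands in for Spack's inter-process file lock):
```python
import json, threading

_db_lock = threading.Lock()  # stand-in for the real inter-process lock

def write_to_db(db_path, mutate, timeout=1.0, attempts=3):
    """Illustrative cycle: every writer re-reads and re-writes the DB."""
    for _ in range(attempts):
        if not _db_lock.acquire(timeout=timeout):  # ~1s retry window
            continue
        try:
            with open(db_path) as f:
                data = json.load(f)   # deserialize the whole DB
            mutate(data)              # mutate it in memory
            with open(db_path, "w") as f:
                json.dump(data, f)    # serialize everything back
            return
        finally:
            _db_lock.release()
    raise TimeoutError("could not acquire DB write lock")
```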
* Style: black 23, skip magic trailing commas
* isort should use same line length as black
* Fix unused import
* Update version of black used in CI
* Update new packages
* Update new packages
* Update package.py
Initial new stuff
* Update package.py
* Update package.py
* Update package.py
* fix targets
* non-llvm backends
* ooops
* fix style
* Somehow that was not caught?
* style
* Last fix
make capitalization consistent with Halide not LLVM...
* py-cmake-format: new version, new variants
* Update var/spack/repos/builtin/packages/py-cmake-format/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
---------
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-cufflinks: new package version with 0.17.3
* Update var/spack/repos/builtin/packages/py-cufflinks/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
---------
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Specs that did not contribute any files to an env view caused a problem
where zip(specs, files grouped by prefix) got "out of sync", causing the
wrong merge map to be passed to a package's `add_files_to_view`, which
specifically caused an issue where *sometimes* bin/python ended up as a
symlink instead of a copy.
One such example is kokkos + kokkos-nvcc-wrapper, as the latter package
only provides the file bin/nvcc_wrapper, which is also added to view by
kokkos, causing kokkos-nvcc-wrapper to contribute 0 files.
The test feels a bit contrived, but it captures the problem... pkg a is
added first and has 0 files to contribute, pkg b adds a single file, and
we check if pkg b receives a merge map (and a does not).
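The desync is easy to reproduce in miniature (hypothetical data):
```python
specs = ["a", "b"]               # pkg a contributes no files to the view
files_by_prefix = [["bin/x"]]    # so only b's group exists

# zip pairs positionally: a is wrongly paired with b's files,
# and b gets no merge map at all.
print(list(zip(specs, files_by_prefix)))  # [('a', ['bin/x'])]

merge_maps = {"b": ["bin/x"]}    # fix: key groups by spec, not position
for spec in specs:
    print(spec, merge_maps.get(spec, []))  # a -> [], b -> ['bin/x']
```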
* pfunit: add v4.6.3
* pfunit: use CMakePackage methods to define arguments
* pfunit: deprecate v3.X, make a variant conditional
* pfunit: simplify setting up environment variables
Reading the docs it seems only v3
needs F90_VENDOR to be set
* pfunit: fix option names
The names set before were unused
* pfunit: shared libraries seem not to be supported
See https://github.com/Goddard-Fortran-Ecosystem/pFUnit/issues/308#issuecomment-874725759
* Add py-mlflow and its dependencies
* mlflow: fix syntax error in package.py
* py-mlflow: cleanup
Process review remarks, add missing dependencies, add skinny variant
* Apply suggestions from code review
* Fix flake8 issues
* More formatting fixes
* Fix py-waitress dependency version
* py-mlflow: platform-specific dependency
* Update var/spack/repos/builtin/packages/py-mlflow/package.py
* Update var/spack/repos/builtin/packages/py-mlflow/package.py
* Process review remarks
* Fix typo in dependency version
* py-shap: fix dependencies
* py-arrow: fix dependencies
* py-slicer: remove py-setuptools explicit version
* py-pyarrow: dataset variant and pass options through environment
It appears there are some issues when using `pip install` instead of
`python setup.py` - this setup_build_environment should fix that.
* py-pyarrow: review remark
* Decouple setup_build_environment from install_options
* py-pyarrow: style
* Bump licenses to 2023
---------
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Matthias Wolf <matthias.wolf@epfl.ch>
`spack gc` removes build deps of explicitly installed specs, but somehow
if you take one of the specs that `spack gc` would remove, and feed it
to `spack uninstall /<hash>` by hash, it complains about all the
dependents that still rely on it.
This resolves the inconsistency by only following run/link type deps in
spack uninstall.
That way you can finally do `spack uninstall cmake` without having to
remove all packages built with cmake.
Default package requirements might contain
variants that are not defined in each package,
so we shouldn't verify them when emitting facts
for the ASP solver.
Account for group when enforcing requirements
packages:all : don't emit facts for requirement conditions
that can't apply to current spec
* Update package.py
Several libraries need to be present at run time so that the code can run in parallel.
I have added them as dependencies and to LD_LIBRARY_PATH. Orca comes as a binary, so the libraries cannot be added as RPATHs at compilation time.
Also, orca 5.0.3 was compiled against 4.1.1, not 4.1.2.
* fortls
* Update var/spack/repos/builtin/packages/py-fortls/package.py
* review
* Update var/spack/repos/builtin/packages/py-fortls/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fixes
* review
---------
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* new py-amplpy package
* [@spackbot] updating style on behalf of sm2939
* Update package.py
* Rename var/spack/repos/builtin/py-amplpy/package.py to var/spack/repos/builtin/packages/py-amplpy/package.py
* Edited file to change copyright year/dependencies and changed the directory of the file
---------
Co-authored-by: Sangu Mbekelu <s.mbekelu9@gmail.com>
#35098 added the correct extraction of toolset version for the MSVC
compiler. This updates the associated method in MSBuilder to retrieve
the (now correct) property.
Meme 4.5.0 has the first occurrence of the string
```
use XML::Simple
```
I found this by doing a binary search manually extracting tarballs until `grep` came up empty.
* new ampltools package
* [@spackbot] updating style on behalf of sm2939
* Update and rename var/spack/repos/builtin/py-ampltools/package.py to var/spack/repos/builtin/packages/py-ampltools/package.py
Edited file to change copyright year/dependencies and edited directory
---------
Co-authored-by: Sangu Mbekelu <s.mbekelu9@gmail.com>
* Add Score-P 8.0 and Cube 4.7/4.8 packages.
* Score-P 8.0 requires the 4.8 (not 4.7) Cube packages
* Add maintainer
* Add CUDA and HIP variants. Add version checks for CUDA (Score-P 8 requires CUDA 7), ROCm (variant only valid as of Score-P 8), and MPI (Score-P 7 requires at least version 2.2 of the MPI standard).
* Deprecate everything pre-7.0.
* Fix HIP dependencies and enable CUDA and HIP variants for configure.
* Deprecate OTF2 pre-2.3 and Cube pre-4.6
* Add "fake" mpi compiler wrappers to msmpi: msmpi doesn't actually
provide wrappers, so this just assigns the wrappers to be whatever
compiler that a dependent is using. Packages referencing the
wrappers would otherwise break. This is assumed to be workable
because build scripts will need to assemble appropriate information
to pass to the compiler anyway
* Fix msmpi detection stanza ('executable' is not the correct name of
the property)
* Fix compiler pkg dereference
* add initial package.
* Update package.py
* Update var/spack/repos/builtin/packages/tiramisu/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/tiramisu/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/tiramisu/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Hopefully this will be fine.
* Update package.py
* Update package.py
* Update package.py
* Update var/spack/repos/builtin/packages/tiramisu/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
---------
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* cvise: new package
* cvise: colordiff as optional dependency
* cvise: remove old versions and correctly name master version
* cvise: update license date
* cvise: use maintainers directive
* Remove @olupton as maintainer
After live discussion: it's been too long since he did anything with this package.
* add halide package.
* some style changes.
* small fix
* Update var/spack/repos/builtin/packages/halide/package.py
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* Update var/spack/repos/builtin/packages/halide/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/halide/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
add comment to requirements.txt
* Update package.py
Fix version order.
* Update package.py
style
* Update package.py
Removed unneeded vars.
* Update var/spack/repos/builtin/packages/halide/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/halide/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Fix some deps
* Update package.py
* Fix finding llvm cmake info
* Update package.py
---------
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This PR enables the successful execution of the spack binary cache
tutorial on Windows. It assumes gnupg and file are available (they
can be installed with choco).
* Fix handling of args with quotes in spack.bat
* `file` utility can be installed on Windows (e.g. with choco): update
error message accordingly
I don't know if this is new in version 7.0, but to build `info`, which is a required executable at the end of the recipe, it is necessary to have a terminal library, otherwise you get
```
[...]
checking for tgetent in -ltinfo... no
checking for tgetent in -lncurses... no
checking for tgetent in -lcurses... no
checking for tgetent in -ltermlib... no
checking for tgetent in -ltermcap... no
checking for tgetent in -lterminfo... no
configure: WARNING: info needs a terminal library, one of: tinfo ncurses curses termlib termcap terminfo
[...]
configure: WARNING: Could not find a terminal library among tinfo ncurses curses termlib termcap terminfo
configure: WARNING: The programs from `info' directory will not be built.
```
Compilation then proceeds, `info` is not built, and installation fails according to Spack because the required executable is missing.
* Support packages for using scitokens on OSG
The Open Science Grid (OSG) encourages scitokens to provide
certain services (e.g. writing to xrootd). Spack already
supports this through scitokens-cpp and xrootd +scitokens-cpp.
This adds py-htgettoken, a python utility to get a scitoken
from a vault through web authentication. To support htgettoken,
this also adds py-gssapi.
This also adds the OSG CA cert collection which is typically
at /etc/grid-security but pointed to in user installations by
the X509_CERTS_DIR variable.
This allows userspace through spack for functionality that
otherwise depends on installing the RPMs provided by OSG.
* fine, I'll fix style myself then
* fix maintainers
* py-gssapi: version before depends_on
* remove list_url
* add documentation on reason for git describe version numbers
* Apply suggestions from code review
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* better BEARER_TOKEN definition
* import os
* remove older version that don't build with setuptools
---------
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
At least with ZSH, prefix inspections containing `./bin` result in a
`$PREFIX/./bin` entry and strange `$PATH` handling.
I.e., `module load git` will prepend `/path/to/git/./bin`, `which git`
will find the right executable, but `git --version` will print the
system one. Normalize the relative path to avoid this behavior.
See also spack/spack#31867.
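The normalization amounts to (sketch):
```python
import os

prefix = "/path/to/git"
raw = os.path.join(prefix, "./bin")
print(raw)                    # /path/to/git/./bin -- what ZSH choked on
print(os.path.normpath(raw))  # /path/to/git/bin
```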
* changes to enable LLVM_ENABLE_RUNTIMES for libcxx and libcxxabi
* remove version update for 5.3.0 as it is done thru PR #33320 to enable
ci and reviews
* initial commit for rocm-5.4.0 release
* update the versions for more packages for 5.4.0 release
* update the gallium patch for mesa for libllvm-15 for ROCm-5.4.0 release
* update rocm-openmp-extras and rocwmma recipes for 5.4.0 release
* fix build error for rocfft for 5.4.0
* address review comments for rocfft for 5.4.0 change
* undo the removal of the older patch file
* bump up the version for hipfft for 5.4.0
* fix the failure after the merge with develop
* add recipe updates for 5.4.0 for migraphx, miopen-hip, miopen-opencl
* address the review comments on the mesa patch.update the rdc package for
5.4.0 release
* fix style errors
* acts: new versions 21.1.1, 22.0.1, 23.0.0
New versions:
- [major 23.0.0](https://github.com/acts-project/acts/compare/v22.0.0...v23.0.0):
- new option `ACTS_BUILD_PLUGIN_GEANT4` -> enabled with existing variant `geant4`
- new option `ACTS_BUILD_EXAMPLES_BINARIES`:
- it is my understanding that the binaries for examples are deprecated (in favor of python examples); warnings to this effect have been printed for a few versions, and now the building of binaries is disabled by default,
- rather than introducing a variant to enable deprecated behavior for only one or two versions, I propose that we just follow the default and keep this disabled.
- [bugfix 22.0.1](https://github.com/acts-project/acts/compare/v22.0.0...v22.0.1) (no build system changes)
- [bugfix 21.1.1](https://github.com/acts-project/acts/compare/v21.1.0...v21.1.1) (no build system changes)
* acts: correct 23.0.0 sha
Co-authored-by: Hadrien G. <knights_of_ni@gmx.com>
As of 2.4.113, the flag for man-pages is now a feature,
so true/false is now enabled/disabled. Other similarly
changed options are not used in the spack recipe (i.e.
experimental kms drivers).
* qt: new versions 6.4.0, 6.4.1
- New libpsl vendored dependency in qt-base.
- New embree and tinyexr dependency in qt-quick3d.
We need to figure out a better way to deal with these vendored
dependencies in src/3rdparty. Removing them was a way to make sure
they are not used unintentionally. Many of these dependencies cannot
be overridden with a QT_FEATURE_system_* flag and are included directly
in cpp files. Many change versions from release to release, so even if
they use system (ie spack managed) versions we need to support this in
the depends_on lines.
What can we rely on?
- src/3rdparty is where vendored stuff is stored
- not much else...
Possible ways to deal with this:
- Change vendor_deps_to_keep to dict with versions, eg
```
vendor_deps_to_keep = {
"xatlas": "@6:",
"embree": "@6.4:",
"tinyexr": "@6.4:",
}
```
- Similarly introduce system_deps_to_use:
```
system_deps_to_use = {
"assimp@5.2:": "@6:",
}
```
and derive depends_on and QT_FEATURE_system_* from this dict.
* qt-*: new version 6.4.2, invert vendored pkgs logic
* qt-base: fix vendor_deps_to_avoid typo
* qt-*: move lots into QtPackage base layer
* py-minkowskiengine: new package (sparse tensor autodiff by Nvidia)
This python package (with cuda support) provides torch support for sparse
tensors. The `pybind11` headers are not found without the patch to `setup.py`.
* [@spackbot] updating style on behalf of wdconinc
* py-minkowskiengine: depends_on numpy, pybind11 type=link; no patch
* [@spackbot] updating style on behalf of wdconinc
---------
Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
The fastqc script was using the system perl. This PR sets the script to
use the spack built/provided perl. This PR also removes the code that
adds the java path. That should be handled by module loading as far as I
know.
* Add HDF5 version 1.13.3.
* Remove maintainers no longer with The HDFGroup.
* Add version hdf5-vol-async@1.4
* Add HDF5 version 1.14.0, develop-1.14, develop-1.15.
Add missing conflicts for api version and develop versions.
* Add conflicts statement to hdf5/package.py to avoid building hdf5 with
MPICH 4.0.x versions with bug that causes testphdf5 test to fail.
* Add patch to call find_package(MPI) for dependent packages not finding
it, not having called it themselves.
* Remove language components from find_package(MPI) in
hdf5_1_14_0_config_find_mpi.patch.
* Don't guard ParaView patch on HDF5 variant
ParaView always needs HDF5 and ignores the variant.
* py-h5py: Newer versions of HDF5 introduce breaking API changes
---------
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
* Add trilinos-solvers variant to nalu-wind package.
This allows nalu-wind to be built against a trilinos installation
which doesn't have amesos2, belos, ifpack2, or muelu enabled, if
the nalu-wind user provides the spec 'nalu-wind@master~trilinos-solvers'.
Support for these solver packages remains on by default.
* Fixed a style issue reported by CI.
* Incorporate change in wording suggested from review comments.
... to clarify that at least one, or both, of hypre and/or
trilinos-solvers must be enabled. The error condition is if
both are disabled.
* That style checker is picky...
* It really did want a trailing comma...
* py-jinja2-cli: new package
* Update var/spack/repos/builtin/packages/py-jinja2-cli/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
---------
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add py-docker@5:
* [@spackbot] updating style on behalf of spoutn1k
* Ignore `tls` variant
* Update var/spack/repos/builtin/packages/py-docker/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* `py-docker`: `py-paramiko` version fix
---------
Co-authored-by: spoutn1k <spoutn1k@users.noreply.github.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The gfx906:xnack- and gfx908:xnack- targets were introduced in ROCm 4.1
and replaced gfx906 and gfx908 as default build targets, but the library
can still be built for gfx906 and gfx908 if requested.
* e4s: restore builds
* gitlab ci: allow UO to build protected binaries for signing
* use newer image; comment out failing builds
* gitlab-ci: Some tweaks for e4s power builds
- fix tags (no longer require generate jobs to run on aws)
- fix resource requests for generation jobs
- remove SPACK_SIGNING_KEY from protected power build jobs
- update UO signing key path
- change the CDash build group to reflect stack name
- retry pipeline generation jobs *always*
* correct double packages: section
* gitlab-ci:script: modernize
* remove new gnu make, not for ppc64le
---------
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
* Added e4s-cl package
* Version order change
* Added e4s-cl dependencies
* Added python-sotools dependency
* [@spackbot] updating style on behalf of spoutn1k
* Add missing versions to py- packages
* Fix style
* [@spackbot] updating style on behalf of spoutn1k
* Update var/spack/repos/builtin/packages/e4s-cl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/e4s-cl/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-python-sotools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Add docker removing patch for e4s-cl
Co-authored-by: spoutn1k <spoutn1k@users.noreply.github.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-nexusforge: add with dependencies
* py-pyshacl, py-sseclient: more style
* py-hjson, py-nexus-sdk, py-nexusforge, py-puremagic: more style
* py-pyshacl: license update
* py-nexusforge, py-prettytable, py-pyshacl: review remarks
* py-nexusforge: make the variant mean something
Too hasty to commit...
* py-ipyparallel: add 8.4.1, which builds with py-hatchling
* py-ipyparallel: copyright and redundant py-setuptools dependency
* py-ipyparallel: py-packaging was dropped after 8.0.0
fixes #34879
This commit adds a new `maintainers` directive,
which by default extends the list of maintainers
for a given package.
The directive is backward compatible with the current
practice of having a "maintainers" list declared at
the class level.
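In a recipe, the two styles look roughly like this (hypothetical package; a sketch, not the exact directive semantics):
```python
from spack.package import *  # standard recipe preamble

class Example(Package):  # hypothetical package
    homepage = "https://example.com"

    # Old style, still supported: a plain class-level list.
    # maintainers = ["alice"]

    # New style: the directive extends maintainers inherited from base
    # classes rather than replacing them.
    maintainers("alice", "bob")
```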
Move the relocation of binary text into its own class.
Drop threaded text replacement, since the current bottleneck
is decompression. It would be better to parallelize over packages
instead of over files per package.
A small improvement with separate classes for text replacement is that we
now compile the regex in the constructor; previously it was compiled per
binary to be relocated.
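Schematically (a hypothetical class, not the actual relocation code):
```python
import re

class TextReplacer:
    """Compile the prefix-matching regex once, not per binary."""

    def __init__(self, old_prefix, new_prefix):
        # re.escape handles prefixes containing $ or ( ) safely
        self.regex = re.compile(re.escape(old_prefix).encode())
        self.new = new_prefix.encode()

    def apply(self, data: bytes) -> bytes:
        return self.regex.sub(self.new, data)

replacer = TextReplacer("/old/prefix", "/new/prefix")
print(replacer.apply(b"path=/old/prefix/bin"))  # b'path=/new/prefix/bin'
```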
The regex doesn't actually work because dollar signs and parentheses have to be
escaped. Also, compiling with OpenMPI requires defining the macro
`MPI2SUPPORT`.
This commit makes explicit the format version of the spec file
we are reading from.
Before there were different functions capable of reading some
part of the spec file at multiple format versions. The decision
was implicit, since checks were based on the structure of the
JSON without ever checking a format version number.
The refactor makes also explicit which spec file format is used
by which database and lockfile format, since the information is
stored in global mappings.
To ensure we don't change the hash of old specs, JSON representations
of specs have been added as data. A unit test checks that we read
the correct hash in, and that the hash stays the same when we
re-serialize the spec using the most recent format version.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
```
File ".../spack/var/spack/environments/scale-mpi/.spack-env/._view/4yiorsdd4pefrnwgrwlwt3yzo5i235il/lib/python3.10/site-packages/h5py/_hl/base.py", line 19, in <module>
from collections import (Mapping, MutableMapping, KeysView,
ImportError: cannot import name 'Mapping' from 'collections' (.../spack/var/spack/environments/scale-mpi/.spack-env/._view/4yiorsdd4pefrnwgrwlwt3yzo5i235il/lib/python3.10/collections/__init__.py)
```
Fixed in https://github.com/h5py/h5py/pull/1069 which was first merged
in v2.9.
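The fix upstream is the standard Python 3.10 migration, presumably along these lines:
```python
# Broken on Python 3.10+: the ABC aliases were removed from collections.
# from collections import Mapping, MutableMapping, KeysView

# Works on Python 3.3+ and is required on 3.10+:
from collections.abc import Mapping, MutableMapping, KeysView

print(issubclass(dict, Mapping))  # True
```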
* py-flatten-dict: require poetry to build.
The sources seem to contain a bundled, auto-generated `setup.py`.
Building with `pip` insists on using Poetry as mentioned in
`pyproject.toml`, so require it as a build dependency.
* Update var/spack/repos/builtin/packages/py-flatten-dict/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-flatten-dict/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Currently we print "sha256 checksum failed for [file]. Expected X but
got Y".
This PR extends that message with file size and contents info:
"... but got Y. File size = 123456 bytes. Contents = b'abc...def'"
That way we can immediately see if the file was downloaded only
partially, or if we downloaded a text page instead of a binary, etc.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
py-scipy 1.6 and older come with pre-cythonized files that
use the _PyGen_Send symbol that was removed from python 3.10.0.161,
so do not build these old versions with python 3.10.1 and later
Parts of libgcrypt should not be optimized with -O1/2/3, so it's best to
let the build system handle that; the build system cannot know that the
compiler wrapper would inject optimization flags.
When running unit tests, the test/ci.py module leaves
garbage (help.sh, test.sh files) in the current working
directory.
This commit changes the current working directory to a
temporary path before those files are created.
* freeimage: fails to compile with c++17, use c++14
Only `opencascade`, and only through a (non-default) variant, depends on `freeimage`, which seems to have gone unmaintained. There are C++17 standard violations in the code [[1]](https://en.cppreference.com/w/cpp/language/except_spec), so we can at most expect C++14. Since some compilers default to C++17 (gcc-12) we need to be explicit.
* freeimage: install directly in prefix
* freeimage: fix inverted patch
* environments: don't rewrite relative view path, expand path on cli ahead of time
Currently if you have a spack.yaml that specifies a view by relative
path, Spack expands it to an absolute path on `spack -e . install` and
persists that to disk.
This is rather annoying when you have a `spack.yaml` file inside a git
repo, because you want to use relative paths to make it relocatable, but
you constantly have to undo the changes Spack makes to spack.yaml.
So, as an alternative:
1. Always stick to paths as they are provided in spack.yaml, never
replace them with a canonicalized version
2. Turn relative paths on the command line into absolute paths before
storing to spack.yaml. This way you can do `spack env create --dir
./env --with-view ./view` and both `./env` and `./view` are resolved
to the current working dir, as expected (not `./env/view`). This
corresponds to the old behavior of `spack env create`.
* create --with-view always takes a value
All packages with explicit Windows support can be found with
`spack list --tags=windows`.
This also removes the documentation which explicitly lists
supported packages on Windows (which is currently out of date and
is now unnecessary with the added tags).
Note that if a package does not appear in this list, it *may*
still build on Windows, but it likely means that no explicit
attempt has been made to support it.
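A package advertises itself in that list simply by declaring the tag, e.g. (illustrative excerpt):
```python
from spack.package import *

class Example(Package):  # hypothetical package
    tags = ["windows"]  # surfaces the package in `spack list --tags=windows`
```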
* nextflow recipe: added latest stable version
* tower-cli recipe: added latest release
* recipes tower-agent and tower-cli renamed to nf-tower-agent and nf-tower-cli
* recipes nf-tower-agent and nf-tower-cli: small fix
* nf-core-tools recipe: added most py- dependencies
* nf-core-tools: recipe without galaxy-tool-util (for testing)
* fixed typos in py-yacman recipe
* fixed typos in py-pytest-workflow recipe
* fixed typo in nf-core-tools recipe
* fixed typos in py-yacman recipe
* fixes in recipes for py-questionary and py-url-normalize
* fixes to py-yacman recipe
* style fixes to py- packages that are dependencies to nf-core-tools
* fix in py-requests-cache recipe
* added missing dep in py-requests-cache recipe
* nf-core-tools deps: removed redundant python dep for py packages oyaml and piper
* nf-core-tools recipe: final, incl dep on py-galaxy-tool-util
* nf-core-tools: new version with extra dependency
* added py-galaxy-util, draft: added some required dep versions, still have to add 40+ deps
* nextflow and nf-core-tools packages: added my self as maintainer
* style fixes
* style fix for nf-core-tools recipe
* added license to py-logmuse recipe
* audit fixes
* style fix after audit fix
* py-galaxy-tool-util: added deps 1st bunch
* audit/style fixes, including adding missing dep package
* more audit/style fixes
* more more audit/style fixes
* moooore audit fixes
* py-galaxy-tool-util: dependencies 2nd chunk
* silly audit fix
* py-galaxy-util deps: 3rd bunch - first 20 done
* fixes
* style fix
* py-galaxy-tool-util: 4th bunch of deps
* stashing dep recipe backbones for py-galaxy-tool-util
* nf-core-tools: using pre-built wheel for dependency py-galaxy-tool-util
* nf-core-tools: adding also py-galaxy-util, as wheel
* fix
* nextflow: added latest bugfix version
* Update var/spack/repos/builtin/packages/nf-core-tools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/nf-core-tools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* nf-core-tools pr: 1st bunch of review edits
* nf-core-tools: 2nd bunch of review edits
* adding back tower-agent and tower-cli as deprecated
* nf-core-tools: 3rd bunch of review edits
* small style fix
* prepping py-galaxy-tool-util for further work
* nf-core-tools: last bunch of deps, except for galaxy-tool-util and pulsar
* audit fixes
* updates to py-galaxy-tool-util and its deps, still 2 to work on
* one style fix
* updated recipe for py-galaxy-util
* updated recipe for py-pulsar-galaxy-lib
* typo fix
* shasum fixes
* updated py-sqlalchemy from develop
* added newest versions (today) for nf-tower-agent and nf-tower-cli
* Update var/spack/repos/builtin/packages/py-requests-cache/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-requests-cache/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* adding 2nd bunch of nf-core deps from update/nextflow-tools
* adding 3rd bunch of nf-core deps from update/nextflow-tools
* 4th chunk of nf-core deps from update/nextflow-tools
* Update var/spack/repos/builtin/packages/py-dnspython/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-dnspython/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-dnspython/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-fastapi-utils/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pastedeploy/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pebble/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-fastapi/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-fastapi/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-gunicorn/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-starlette/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-starlette/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-starlette/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-parsley/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-paste/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-paste/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-gxformat2: added comment
* py-lagom: now using github tarballs
* fix for py-lagom
* adding missing deps to py-fastapi-utils
* another fix to py-lagom
* Update var/spack/repos/builtin/packages/py-dnspython/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-fastapi-utils/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-fastapi-utils/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-fastapi-utils/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-lagom/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-supervisor/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-social-auth-core/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fixes from PR review
* adding missing deps, from PR review
* py-galaxy2cwl from github tarball, as per PR review
* fix to py-tuswsgi, as per PR review
* nf-tools: edits from PR review
* adding 3x more galaxy deps
* fix
* fixing circular dep of py-poetry-plugin-export with py-poetry
* added newest nf-core-tools version
* Update var/spack/repos/builtin/packages/py-galaxy-util/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* fix in py-poetry-plugin-export
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Currently, the `python` package tries to set `CPATH` in `setup_run_environment()`.
We no longer set `CPATH` and other destructive environment variables (like
`LD_LIBRARY_PATH`) in modules, so we shouldn't do something special for Python.
Also, the way `python` sets `CPATH` causes issues. Because it does a header search to
find directories containing headers, if you bootstrap `mypy` or other style tools on a
fresh Ubuntu image with *no* python devel headers installed, you'll get an error like
this when trying to load the thing you just installed:
```console
[root@980de539843d /]# spack -b load py-mypy
==> Error: Unable to locate python headers in any of these locations:
/usr/include/python3.6m
/usr/include/3.6
/usr/Headers
```
The headers and includes aren't needed to get `mypy` in the path or for `mypy` to work,
so we're failing unnecessarily here.
- [x] remove `setup_run_environment()` from `python/package.py`
Since SPACK_PACKAGE_IDS is now also "namespaced" with <prefix>, it makes
more sense to call the flag `--make-prefix` and alias the old flag
`--make-target-prefix` to it.
1. add variant cray-static, older crays build hpcprof-mpi static,
newer ones build dynamic.
2. move URL patches from github to gitlab.
3. add workaround for a bug where a file is mistakenly overwritten.
4. add conflict for hpcprof-mpi at 2022.10.01.
* [@spackbot] updating style on behalf of mwkrentel
Co-authored-by: mwkrentel <mwkrentel@users.noreply.github.com>
Normally when using external packages in concretization, Spack ignores
all dependencies of the external. #33777 updated this logic to attach
a Python Spec to external Python extensions (most py-* packages), but
as implemented there were a couple issues:
* this did not account for concretization groups and could generate
multiple different python specs for a single DAG
* in some cases this created a fake Python spec with insufficient
details to be usable (concretization/installation of the
extension would fail)
This PR addresses both of these issues:
* For environment specs that are concretized together, external python
extensions in those specs will all be assigned the same Python spec
* If Spack needs to "invent" a Python spec, then it will have all the
needed details (e.g. compiler/architecture)
* npm: Add latest version, update build
The `npm` package had gotten a bit long in the tooth and only supported the last version
for which running `configure` / `make` / `make install` actually worked.
- [x] Update the package to support npm@9, in which `npm install .` works properly and
installation is easier.
- [x] Update the package so that `npm@6:8` also installs successfully. The incantation
that is *supposed* to work on these versions is `node bin/npm-cli.js install $(node
bin/npm-cli.js pack . | tail -1)`, but depending on the version one of `npm install`
or `npm pack` will fail when run straight from the install directory. So now we just
manually copy things over.
This seems to make the `npm` install much more reliable for all of `npm@6:9` (at least
for me).
updates the `npm` build to support versions 6-9 and fixes the install for all of
them on macos.
* update for review
With the new variable [prefix/]SPACK_PACKAGE_IDS you can conveniently execute
things after each successful install.
For example, push just-built packages to a buildcache:
```
SPACK ?= spack
export SPACK_COLOR = always
MAKEFLAGS += -Orecurse
MY_BUILDCACHE := $(CURDIR)/cache
.PHONY: all clean
all: push
ifeq (,$(filter clean,$(MAKECMDGOALS)))
include env.mk
endif
# the relevant part: push has *all* example/push/<pkg identifier> as prereqs
push: $(addprefix example/push/,$(example/SPACK_PACKAGE_IDS))
$(SPACK) -e . buildcache update-index --directory $(MY_BUILDCACHE)
$(info Pushed everything, yay!)
# and each example/push/<pkg identifier> has the install target as prereq,
# and the body can use target local $(HASH) and $(SPEC) variables to do
# things, such as pushing to a build cache
example/push/%: example/install/%
@mkdir -p $(dir $@)
$(SPACK) -e . buildcache create --allow-root --only=package --unsigned --directory $(MY_BUILDCACHE) /$(HASH) # push $(SPEC)
@touch $@
spack.lock: spack.yaml
$(SPACK) -e . concretize -f
env.mk: spack.lock
$(SPACK) -e . env depfile -o $@ --make-target-prefix example
clean:
rm -rf spack.lock env.mk example/
```
* [armpl-gcc] Make pkg-config files available
ARMpl pkgconfig files are located in a non-default location and do not have the
.pc extension. Changing both of those helps pkgconfig pick them up correctly, e.g.
in the meson build of `py-scipy`.
* Address @annop-w comments
* symlink instead of cp.
Co-authored-by: Stephen Sachs <stesachs@amazon.com>
In the past we checked remote binary mirrors for existence of a spec
before attempting to download it. That changed to only checking local
copies of index.jsons (if available) to prioritize certain mirrors where
we expect to find a tarball. That was faster for CI since fetching
index.json and loading it just to order mirrors takes more time than
just attempting to fetch tarballs -- and also if we have a direct hit
there's no point to look at other mirrors.
Long story short: the info message only makes sense in the old version
of Spack, so it's better to remove it.
* py-sphinx-immaterial: new package
This is a new-ish theme for Sphinx that's based on MkDocs's `immaterial` theme. More on
the theme here: https://jbms.github.io/sphinx-immaterial/, but it seems to be very clear
and readable. We *might* consider switching to it for Spack's docs.
* Update var/spack/repos/builtin/packages/py-sphinx-immaterial/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-sphinx-immaterial/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-sphinx-immaterial/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-sphinx-immaterial/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add note about node.js requirements
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Forward lookup of "test_log_file" and "test_failures"
refers to #34531, closes #34487, fixes #34440
* Add unit test
* py-libensemble: fix tests
* Support stand-alone tests with cached files as install-time tests
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Ensure `spack mirror add <name> <url/path>` without further arguments translates to `<name>: <url>` key value pairs in mirrors.yaml. If --s3-* flags are provided, only store the provided ones.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
This is only a work-around for the actual problem, namely that Python is used to install libraries instead of CMake, so we end up with BUILD_RPATHs, not INSTALL_RPATHs.
Signed-off-by: Loïc Pottier <pottier1@llnl.gov>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
* Add new file for MVAPICH 3.0a release
Creating this as a new package since it requires some new configuration
options and because we are moving to the name "MVAPICH" and dropping the
2 (following a similar move by MPICH).
Co-authored-by: Nat Shineman <shineman.5@osu.edu>
Co-authored-by: Matthew Lieber <lieber.31@osu.edu>
* OpenCV: checksum for 4.5.5, make contrib optional
* [@spackbot] updating style on behalf of iarspider
* Add conflicts for contrib modules
* Fix typo
* Implement changes from review
* Update var/spack/repos/builtin/packages/opencv/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: iarspider <iarspider@users.noreply.github.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Revert "Revert "4th chunk of nf-core deps from update/nextflow-tools (#34564)" (#34960)"
This reverts commit 891a63cae6.
* fix to py-python-multipart, as per PR review
* py-macs2: add version 2.2.7.1 and support python@3.10:
The tarball from PyPi includes the Cythonized C files. The tarball from
github does not. Remove the Cythonized C files from the source so that
they are rebuilt with the Spack Python/Cython combination. This is
necessary for python-3.10 but makes sense for other combinations as well.
* Edits based on review
- set python version constraint on version 2.2.4
- removed all python-2 versions and related constraints.
* Add package file for xtb and py-xtb
* Retain maintainers from PythonPackage
* Update package files
- use extends("python") instead of tampering with PYTHONPATH
- use PyPI for downloading sdist of py-xtb
- add simple-dftd3 0.7.0
- add dftd4 3.5.0
- remove --wrap-mode=nodownload from toml-f
* Remove logic for download URL
With this change we get the invariant that `mirror.fetch_url` and
`mirror.push_url` return valid URLs, even when the backing config
file is actually using (relative) paths with potentially `$spack` and
`$env` like variables.
Secondly it avoids expanding mirror path / URLs too early,
so if I say `spack mirror add name ./path`, it stays `./path` in my
config. When it's retrieved through MirrorCollection() we
expand it to `file://<env dir>/path` if `./path` was set in an
environment scope.
Thirdly, the interface is simplified for the relevant buildcache
commands, so it's more like `git push`:
```
spack buildcache create [mirror] [specs...]
```
`mirror` is either a mirror name, a path, or a URL.
Resolving the relevant mirror goes as follows:
- If it contains either / or \ it is used as an anonymous mirror with
path or url.
- Otherwise, it's interpreted as a named mirror, which must exist.
This helps to guard against typos, e.g. typing `my-mirror` when there
is no such named mirror now errors with:
```
$ spack -e . buildcache create my-mirror
==> Error: no mirror named "my-mirror". Did you mean ./my-mirror?
```
instead of creating a directory in the current working directory. I
think this is reasonable, as the alternative (requiring that a local dir
exists) feels a bit pedantic in the general case -- spack is happy to
create the build cache dir when needed, saving a `mkdir`.
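The resolution rule, sketched in Python (hypothetical function, not the exact implementation):
```python
def resolve_mirror(arg, named_mirrors):
    if "/" in arg or "\\" in arg:
        return arg  # anonymous mirror: used directly as a path or URL
    if arg in named_mirrors:
        return named_mirrors[arg]  # named mirror from mirrors.yaml
    raise ValueError(f'no mirror named "{arg}". Did you mean ./{arg}?')

mirrors = {"prod": "s3://my-bucket/cache"}  # hypothetical config
print(resolve_mirror("./my-mirror", mirrors))  # anonymous path
print(resolve_mirror("prod", mirrors))         # s3://my-bucket/cache
```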
The old (now deprecated) format will still be available in Spack 0.20,
but is scheduled to be removed in 0.21:
```
spack buildcache create (--directory | --mirror-url | --mirror-name) [specs...]
```
This PR also touches `tmp_scope` in tests, because it didn't really
work for me, since spack fixes the possible --scope values once and
for all across tests, so tests failed when run out of order.
Packages that use docbook-xml may specify a specific entity version.
When this is specified as a version constraint in the package recipe it
will cause problems when using `unify = True` in a Spack environment, as
there could be multiple versions of docbook-xml in the spec. In
practice, any entity version should work with any other version and
everything should work with the latest version. This PR maps all Spack
docbook-xml entity versions to the docbook-xml version in the spec.
Ideally, the version in the spec would be the latest version. With this
PR, even if a package specifies an older entity version, it will map
to the entity version (latest) in the spec. This means that there can be one
docbook-xml version in a Spack environment spec and packages requesting
older entity versions will still work.
To help facilitate this, docbook-xml version constraints for packages
that have them have been removed. Those packages are dbus and gtk-doc.
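Conceptually the mapping is many-to-one (a sketch with hypothetical values; the real change presumably rewrites entries in the installed XML catalog):
```python
# Every entity version a package might request resolves to the single
# docbook-xml version actually in the spec.
installed = "4.5"
requested_versions = ["4.1.2", "4.2", "4.3", "4.4", "4.5"]

catalog = {
    f"http://www.oasis-open.org/docbook/xml/{v}/": f"/prefix/docbook-xml-{installed}/"
    for v in requested_versions
}
for uri, path in sorted(catalog.items()):
    print(uri, "->", path)
```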
What's in AOCL 4.0:
1. amdblis
LPGEMM variants with post-ops support
AMD "Zen4" support for BLIS
2. amdlibflame
Upgrade to LAPACK 3.10.1 specification
Improvements in a few more variants of SVD and Eigen Value routines
Multithread support enabled for selected APIs
3. amdfftw
AVX-512 enablement of DFT kernels
AVX-512 optimization of copy and transpose routines
4. amdlibm
Black & Scholes support (logf, expf, erff, both scalar and vector)
AVX-512 variants of vector functions
5. aocl-sparse
New Iterative Solver APIs
AVX-512 support for SPMV API
6. amdscalapack
Upgrade to Netlib ScaLAPACK 2.2.0
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Sometimes I just want to know how many packages of a certain type there are.
- [x] add a `--count` option to `spack list` that outputs the number of packages that
*would* be listed.
```console
> spack list --count
6864
> spack list --count py-
2040
> spack list --count r-
1162
```
* Add packages
* Style
* Respond to comments
* Change conflict dep type
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* paraview: add `rocm` variant
This conflicts with CUDA and requires at least ParaView 5.11.0. More
dependencies are also needed.
* E4S: Add ParaView for ROCm and CUDA stacks
* DAV SDK: Update ParaView version and GPU variants
* Verify using hipcc vs amdclang++ for newer hip
Co-authored-by: Ben Boeckel <ben.boeckel@kitware.com>
* adding 1st version of py-strawberryfields
* py-strawberryfields: minor edit
* added backbone for 4x new SF dependencies
* edits to SF and its 4x new deps
* added all deps for SF
* added one version to py-lark-parser
* py-quantum-xir with tarball from github
* Update var/spack/repos/builtin/packages/py-strawberryfields/package.py
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* updated version for py-quantum-xir: pypi fixed
* py-pennylane: added pythonpackage.maintainers, too
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Currently, all of the replacements in `spack.util.path.replacements()` get evaluated for
each replacement. This makes it easy to get bootstrap issues, because config is used
very early on in Spack.
Right now, if I run `test_autotools_gnuconfig_replacement_no_gnuconfig` on my M1 mac, I
get the circular reference error below. This fixes the issue by making all of the path
replacements lazy lambdas.
As a bonus, this cleans up the way we do substitution for `$env` -- it's consistent with
other substitutions now.
- [x] make all path `replacements()` lazy
- [x] clean up handling of `$env`
```console
> spack unit-test -k test_autotools_gnuconfig_replacement_no_gnuconfig
...
==> [2022-12-31-15:44:21.771459] Error: AttributeError:
The 'autotools-config-replacement' package cannot find an attribute while trying to build from sources. This might be due to a change in Spack's package format to support multiple build-systems for a single package. You can fix this by updating the build recipe, and you can also report the issue as a bug. More information at https://spack.readthedocs.io/en/latest/packaging_guide.html#installation-procedure
/Users/gamblin2/src/spack/lib/spack/spack/package_base.py:1332, in prefix:
1330 @property
1331 def prefix(self):
>> 1332 """Get the prefix into which this package should be installed."""
1333 return self.spec.prefix
Traceback (most recent call last):
File "/Users/gamblin2/src/spack/lib/spack/spack/build_environment.py", line 1030, in _setup_pkg_and_run
kwargs["env_modifications"] = setup_package(
^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/build_environment.py", line 757, in setup_package
set_module_variables_for_package(pkg)
File "/Users/gamblin2/src/spack/lib/spack/spack/build_environment.py", line 596, in set_module_variables_for_package
m.std_cmake_args = spack.build_systems.cmake.CMakeBuilder.std_args(pkg)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/build_systems/cmake.py", line 241, in std_args
define("CMAKE_INSTALL_PREFIX", pkg.prefix),
^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/package_base.py", line 1333, in prefix
return self.spec.prefix
^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/spec.py", line 1710, in prefix
self.prefix = spack.store.layout.path_for_spec(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/directory_layout.py", line 336, in path_for_spec
path = self.relative_path_for_spec(spec)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/directory_layout.py", line 106, in relative_path_for_spec
projection = spack.projections.get_projection(self.projections, spec)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/projections.py", line 13, in get_projection
if spec.satisfies(spec_like, strict=True):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/spec.py", line 3642, in satisfies
if not self.virtual and other.virtual:
^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/spec.py", line 1622, in virtual
return spack.repo.path.is_virtual(self.name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/repo.py", line 890, in is_virtual
return have_name and pkg_name in self.provider_index
^^^^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/repo.py", line 770, in provider_index
self._provider_index.merge(repo.provider_index)
^^^^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/repo.py", line 1096, in provider_index
return self.index["providers"]
~~~~~~~~~~^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/repo.py", line 592, in __getitem__
self._build_all_indexes()
File "/Users/gamblin2/src/spack/lib/spack/spack/repo.py", line 607, in _build_all_indexes
self.indexes[name] = self._build_index(name, indexer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/spack/repo.py", line 616, in _build_index
index_mtime = self.cache.mtime(cache_filename)
^^^^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/llnl/util/lang.py", line 826, in __getattr__
return getattr(self.instance, name)
^^^^^^^^^^^^^
File "/Users/gamblin2/src/spack/lib/spack/llnl/util/lang.py", line 825, in __getattr__
raise AttributeError()
AttributeError
```
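The gist of the fix, in a simplified sketch:
```python
# Eager: every value is computed at dict construction time, so a single
# lookup can drag config/repo code in far too early during bootstrap.
# replacements = {"spack": spack_root(), "env": default_env_path()}

# Lazy: each replacement is a lambda, evaluated only when actually used.
replacements = {
    "spack": lambda: "/opt/spack",  # stand-in values
    "env": lambda: "/opt/env",
}

def substitute(path):
    for name, value in replacements.items():
        token = "$" + name
        if token in path:
            path = path.replace(token, value())  # evaluated on demand
    return path

print(substitute("$env/view"))  # /opt/env/view
```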
* Added support for libfabric to find an external installation and
identify variants supported.
* Change the fabrics definition to only include CXI when on a cray
system with a libfabric-based slingshot network.
* Added a conflict when trying to build the CXI fabric value since it is
only available as closed source at this time.
#33128 introduces a dependency on re2c into the Ninja build recipe.
This is problematic on Windows as we use CMake to build re2c, and
Ninja to drive the CMake build. This PR resolves this issue by
adding a variant to toggle the use of re2c with ninja.
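In recipe terms, the toggle presumably boils down to a conditional dependency (illustrative excerpt):
```python
from spack.package import *

class Ninja(Package):  # sketch of the relevant lines only
    variant("re2c", default=True, description="Use re2c to generate the lexer")
    depends_on("re2c", type="build", when="+re2c")
```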
* Added a more robust check for an external version of the library.
Included a guard to identify when the library gives no discernible
version information and to substitute the "unknown_ver" identifier in
that case.
* gaudi: new versions 36.8, 36.9
As of 36.8, the tests use catch2 ([commit](https://gitlab.cern.ch/gaudi/Gaudi/-/commit/f2cafb5c9d04c9d497d49182258aa3a0440622c0)).
* gaudi: still depends_on fmt@:8
* unifyfs: new release v1.0.1
Add 1.0.1 release
Add new variant for new configure time option
Co-authored-by: CamStan <CamStan@users.noreply.github.com>
Now that the `tix` variant is conditional, it should also be detected
conditionally, otherwise the spec is invalid and cannot be used during
concretization.
ShellCheck is installed with a downloaded binary instead of being
compiled from source, and there should be comments to point out this
unorthodox approach.
autoconf 2.70 uses `use warnings` instead of `-w` so that `PERL=/usr/bin/env perl` can be passed, but we want to fix absolute paths anyhow through sbang upon install. So, we stick to patching the one perl script that's used during the build.
Gitlab does not merge lists when a job extends two other definitions
that include the same list (e.g. tags). Also, it merges dictionaries
as long as the keys are distinct, but just takes the last mentioned
value when there are key collisions.
This change makes sure that when different tags are needed by a
pipeline, the ones we want are actually provided. It also changes
the example stack to better follow this pattern so we do not lead
developers astray in the future.
Since we dropped support for Python 2.7, we can embrace using keyword only arguments
for many functions in Spack that use **kwargs in the function signature. Here this is done
for the llnl.util.filesystem module.
There were a couple of bugs lurking in the code related to typo-like errors when retrieving
from kwargs. Those have been fixed as well.
* Update xsbench to version 20
XSBench version 20 has implementations for new
architectures and accelerators.
* Added CUDA support for XSBench
* Fixed style issues
The code in FileCache for write_transaction attempts to delete the temporary file when an exception occurs under the context, by calling shutil.rmtree. However, rmtree only operates on directories while the rest of FileCache uses normal files, so the cleanup fails and an empty file is left behind at the cache key.
Use os.remove instead, which operates on normal files.
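A minimal sketch of the corrected cleanup (hypothetical helper, not FileCache's actual API):
```python
import os


def write_transaction(tmp_file, write):
    # The temporary file is a regular file, so cleanup on failure must
    # use os.remove; shutil.rmtree only operates on directories and
    # would fail, leaving an empty file behind.
    try:
        with open(tmp_file, "w") as f:
            write(f)
    except BaseException:
        os.remove(tmp_file)  # was: shutil.rmtree(tmp_file)
        raise
```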
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
* cernlib: depends_on freetype, libnsl, libxcrypt, openssl; and patch
In addition to #34448, cernlib depends on these additional packages.
This also applies a patch to the current release in which crypto is
specified where libcrypt (in libxcrypt) is actually needed. Because
the upstream git repository is behind a CERN login, we cannot patch
by gitlab URL link.
* [@spackbot] updating style on behalf of wdconinc
Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
* update the version for 5.3.0 release
* update the rocwmma for 5.3.0 release
* fix the +hip variant
* update the version for rocm-openmp-extras package for 5.3.0 release
* update the hipsolver and hipfft as per review comments
* address review comments
* revert changes to mivisionx with regard to change added for clangrt
* fix for the petsc failure
Spack was running an external detection of Python during each invocation
of the setup script for Windows CMD/PWSH, which has dramatic performance
implications each time the script is invoked, and is completely
unnecessary. Remove this operation.
The Windows CMD prompt does not automatically support ANSI color control
characters on the console from Python. Enable this behavior by
accessing the current console and allowing the interpretation of ANSI
control characters from Python via the win32 API.
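For illustration, the Win32 call involved looks roughly like this (a hedged, Windows-only sketch, not the exact code added):
```python
import ctypes


def enable_ansi_escapes():
    # Windows-only sketch: turn on ENABLE_VIRTUAL_TERMINAL_PROCESSING
    # (0x0004) so the console interprets ANSI escape sequences.
    kernel32 = ctypes.windll.kernel32
    handle = kernel32.GetStdHandle(-11)  # STD_OUTPUT_HANDLE
    mode = ctypes.c_uint32()
    if kernel32.GetConsoleMode(handle, ctypes.byref(mode)):
        kernel32.SetConsoleMode(handle, mode.value | 0x0004)
```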
* adding 2nd bunch of nf-core deps from update/nextflow-tools
* adding 3rd bunch of nf-core deps from update/nextflow-tools
* Update var/spack/repos/builtin/packages/py-dnspython/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-dnspython/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-dnspython/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-fastapi-utils/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pastedeploy/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pebble/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-fastapi/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-fastapi/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-gunicorn/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-starlette/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-starlette/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-starlette/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-parsley/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-paste/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-paste/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-gxformat2: added comment
* py-lagom: now using github tarballs
* fix for py-lagom
* adding missing deps to py-fastapi-utils
* another fix to py-lagom
* Update var/spack/repos/builtin/packages/py-dnspython/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-fastapi-utils/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-fastapi-utils/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-fastapi-utils/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-lagom/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-supervisor/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This commit allows (remote) spec.json files to be clearsigned and gzipped.
The idea is to reduce the number of requests and number of bytes transferred
* added recipes for py-qutip and py-qutip-qip
* small fix
* updated qutip 2x versions
* py-qutip-qip: tarball url from github
* style fix in py-qutip-qip
* Update var/spack/repos/builtin/packages/py-qutip-qip/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-qutip-qip/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-qutip-qip/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-qutip/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-qutip/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* draft for py-pennylane recipe
* first draft for py-strawberryfields recipe
* minimal fix
* small fixes
* accounting for circular dep in py-pennylane and py-pennylane-lightning
* removing py-strawberryfields from this branch
* updated versions for py-pennylane 2x
* needs cmake
* py-pennylane-lightning using github tarball
* Update var/spack/repos/builtin/packages/py-autoray/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pennylane-lightning/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The Intel compiler isn't able to deal with noinline member functions of template classes defined in headers. As such it outputs
```
warning #2196: routine is both "inline" and "noinline"
```
cmake bootstrap will fail due to the word 'warning'.
See spack/var/spack/repos/builtin/packages/protobuf/intel-v2.patch for reference.
The issue does not appear with intel@2021.7.0 or later:
```
$~: compiler=/shared/spack/opt/spack/linux-amzn2-x86_64_v3/gcc-12.2.0/intel-oneapi-compilers-2022.2.0-uqvb2553zy5toeapvoopacndd27x6p5m/compiler/2022.2.0/linux/bin/intel64/icpc
$~: $compiler unique.c
icpc: remark #10441: The Intel(R) C++ Compiler Classic (ICC) is deprecated and will be removed from product release in the second half of 2023. The Intel(R) oneAPI DPC++/C++ Compiler (ICX) is the recommended compiler moving forward. Please transition to use this compiler. Use '-diag-disable=10441' to disable this message.
```
This is a clean version of https://github.com/spack/spack/pull/34167
Co-authored-by: Stephen Sachs <stesachs@amazon.com>
* package/libproxy: fix py3 install
* improve readability
* fix bug
* also add extend
* make flake happy
* [@spackbot] updating style on behalf of Sinan81
* Update var/spack/repos/builtin/packages/libproxy/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* python dependency implied by extends const.
* disable python variant by default
* add run_env, add py conflict
* Update var/spack/repos/builtin/packages/libproxy/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* set env for macos as well
* generalize lib dir detection
Co-authored-by: sbulut <sbulut@3vgeomatics.com>
Co-authored-by: Sinan81 <Sinan81@users.noreply.github.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* add_new_package: py-file-magic
* re-order depends...
* Update var/spack/repos/builtin/packages/py-file-magic/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [@spackbot] updating style on behalf of Sinan81
Co-authored-by: sbulut <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Sinan81 <Sinan81@users.noreply.github.com>
* Add py-svgpath and dependency
* Update copyright expiration
* [@spackbot] updating style on behalf of heerener
* Process review remarks
* Update var/spack/repos/builtin/packages/py-trimesh/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fix style issue
* py-trimesh: cleanup and optional dependencies
* Fix formatting issue
* py-trimesh: complete dependency list for easy variant
Two new packages: py-mapbox-earcut and py-pycollada
* Some more missing dependencies
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* new package: py-kb-python + dependencies
- py-loompy
- py-ngs-tools
- py-numpy-groupies
* Update var/spack/repos/builtin/packages/py-kb-python/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-shortuuid: add version 1.0.11
* Update var/spack/repos/builtin/packages/py-shortuuid/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update packages config to indicate that MSVC is the preferred compiler
* Update packages config to indicate that msmpi is the preferred MPI provider
* Fix msmpi external detection
Spack imports `pytest`, which *can* import `numpy`. Recent versions of `numpy` require
Python 3.8 or higher, and they use 3.8 type annotations in their type stubs (`.pyi`
files). At the same time, we tell `mypy` to target Python 3.7, as we still support older
versions of Python.
What all this means is that if you run `mypy` on `spack`, `mypy` will follow all the
static import statements, and it ends up giving you this error when it finds numpy stuff
that is newer than the target Python version:
```
==> Running mypy checks
src/spack/var/spack/environments/default/.spack-env/._view/4g7jd4ibkg4gopv4rosq3kn2vsxrxm2f/lib/python3.11/site-packages/numpy/__init__.pyi:638: error: Positional-only parameters are only supported in Python 3.8 and greater [syntax]
Found 1 error in 1 file (errors prevented further checking)
mypy found errors
```
We can fix this by telling `mypy` to skip all imports of `numpy` in `pyproject.toml`:
```toml
[[tool.mypy.overrides]]
module = 'numpy'
follow_imports = 'skip'
follow_imports_for_stubs = true
```
- [x] don't follow imports from `numpy` in `mypy`
- [x] get rid of old rule not to follow `jinja2` imports, as we now require Python 3
The code in Spack to generate install and test reports currently suffers from unneeded complexity. For
instance, we have classes in Spack core packages, like `spack.reporters.CDash`, that need an
`argparse.Namespace` to be initialized and have "hard-coded" string literals on which they branch to
change their behavior:
```python
if do_fn.__name__ == "do_test" and skip_externals:
package["result"] = "skipped"
else:
package["result"] = "success"
package["stdout"] = fetch_log(pkg, do_fn, self.dir)
package["installed_from_binary_cache"] = pkg.installed_from_binary_cache
if do_fn.__name__ == "_install_task" and installed_already:
return
```
This PR attempts to polish the major issues encountered in both `spack.report` and `spack.reporters`.
Details:
- [x] `spack.reporters` is now a package that contains both the base class `Reporter` and all
the derived classes (`JUnit` and `CDash`)
- [x] Classes derived from `spack.reporters.Reporter` don't take an `argparse.Namespace` anymore
as argument to `__init__`. The rationale is that code for commands should be built upon Spack
core classes, not vice-versa.
- [x] An `argparse.Action` has been coded to create the correct `Reporter` object based on command
line arguments
- [x] The context managers to generate reports from either `spack install` or from `spack test` have
been greatly simplified, and have been made less "dynamic" in nature. In particular, the `collect_info`
class has been deleted in favor of two more specific context managers. This allows for a simpler
structure of the code, and less knowledge required of client code (in particular on which method to patch)
- [x] The `InfoCollector` class has been turned into a simple hierarchy, so as to avoid conditional statements
within methods that assume a knowledge of the context in which the method is called.
On systems with remote groups, the primary user group may be remote and may not exist on
the local system (i.e., it might just be a number). On the CLI, it looks like this:
```console
> touch foo
> l foo
-rw-r--r-- 1 gamblin2 57095 0 Dec 29 22:24 foo
> chmod 2000 foo
chmod: changing permissions of 'foo': Operation not permitted
```
Here, the local machine doesn't know about per-user groups, so they appear as gids in
`ls` output. `57095` is also `gamblin2`'s uid, but the local machine doesn't know that
`gamblin2` is in the `57095` group.
Unfortunately, it seems that Python's `os.chmod()` just fails silently, setting
permissions to `0o0000` instead of `0o2000`. We can avoid this by ensuring that the file
has a group the user is known to be a member of.
- [x] Add `ensure_known_group()` in the permissions tests.
- [x] Call `ensure_known_group()` on tempfile in `test_chmod_real_entries_ignores_suid_sgid`.
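A minimal sketch of what such a helper can do (assumed implementation, Unix-only; not necessarily the test's exact code):
```python
import os


def ensure_known_group(path):
    # If the file's group is not one the user is known to belong to,
    # chgrp it to one that is, so os.chmod with setgid bits cannot
    # fail silently.
    gids = os.getgroups()
    if gids and os.stat(path).st_gid not in gids:
        os.chown(path, -1, gids[0])  # -1 leaves the owner unchanged
```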
There are a number of places in our docstrings where we write "list of X" as the type, even though napoleon doesn't actually support this. It ends up causing warnings when generating docs.
Now that we require Python 3, we don't have to rely on type hints in docs -- we can just use Python type hints and omit the types of args and return values from docstrings.
We should probably do this for all types in docstrings eventually, but this PR focuses on the ones that generate warnings during doc builds.
Some `mypy` annoyances we should consider in the future:
1. Adding some of these type annotations gets you:
```
note: By default the bodies of untyped functions are not checked, consider using --check-untyped-defs [annotation-unchecked]
```
because they are in unannotated functions (like constructors where we don't really need any annotations).
You can silence these with `disable_error_code = "annotation-unchecked"` in `pyproject.toml`
2. Right now we support running `mypy` in Python `3.6`. That means we have to support `mypy` `0.971`, which does not support `disable_error_code = "annotation-unchecked"`, so I just filter `[annotation-unchecked]` lines out in `spack style`.
3. I would rather just turn on `check_untyped_defs` and get more `mypy` coverage everywhere, but that will require about 1,000 fixes. We should probably do that eventually.
4. We could also consider only running `mypy` on newer python versions. This is not easy to do while supporting `3.6`, because you have to use `if TYPE_CHECKING` for a lot of things to ensure that 3.6 still parses correctly. If we only supported `3.7` and above we could use [`from __future__ import annotations`](https://mypy.readthedocs.io/en/stable/runtime_troubles.html#future-annotations-import-pep-563), but we have to support 3.6 for now. Sigh.
- [x] Convert a number of docstring types to Python type hints
- [x] Get rid of "list of" wherever it appears
Based on the following lines in the top level `CMakeLists.txt` (I can't deep link since gitlab.cern.ch not public), `cernlib` needs an explicit dependency on `libxaw` and `libxt`:
```cmake
find_package(X11 REQUIRED)
message(STATUS "CERNLIB: X11_Xt_LIB=${X11_Xt_LIB} X11_Xaw_LIB=${X11_Xaw_LIB} X11_LIBRARIES=${X11_LIBRARIES}")
```
Per https://github.com/spack/spack/issues/34192, apptainer does not support `--without-conmon`, so we introduce a base class `config_options` property that can be overridden in the `apptainer` package.
`texinfo` depends on `gettext`, and it builds a perl module that uses gettext via XS
module FFI. Unfortunately, the XS modules build asks perl to tell it what compiler to
use instead of respecting the one passed to configure.
Without this change, the build fails with this error:
```
parsetexi/api.c:33:10: fatal error: 'libintl.h' file not found
^~~~~~~~~~~
```
We need the gettext dependency and the spack wrappers to ensure XS builds properly.
- [x] Add needed `gettext` dependency to `texinfo`
- [x] Override XS compiler with `PERL_EXT_CC`
Co-authored-by: Paul Kuberry <pakuber@sandia.gov>
Local `git` tests will fail with `fatal: transport 'file' not allowed` when using git 2.38.1 or higher, due to a fix for `CVE-2022-39253`.
This was fixed in CI in #33429, but that doesn't help the issue for anyone's local environment. Instead of fixing this with git config in CI, we should ensure that the tests run anywhere.
- [x] Introduce `spack.util.git`.
- [x] Use `spack.util.git.get_git()` to get a git executable, instead of `which("git")` everywhere.
- [x] Make all `git` tests use a `git` fixture that goes through `spack.util.git.get_git()`.
- [x] Add `-c protocol.file.allow=always` to all `git` invocations under `pytest`.
- [x] Revert changes from #33429, which are no longer needed.
`spack graph` has been reworked to use:
- Jinja templates
- builder objects to construct the template context when DOT graphs are requested.
This allowed adding a new colored output for DOT graphs that highlights both
the dependency types and the nodes that are needed at runtime for a given spec.
`spack solve` is supposed to show you times you can compare: setup, ground, solve, etc.,
all in a list. You're also supposed to be able to compare easily across runs. With
`pretty_seconds()` (introduced in #33900), it's easy to miss the units, e.g., spot the
bottleneck here:
```console
> spack solve --timers tcl
setup 22.125ms
load 16.083ms
ground 8.298ms
solve 848.055us
total 58.615ms
```
It's easier to see what matters if these are all in the same units, e.g.:
```
> spack solve --timers tcl
setup 0.0147s
load 0.0130s
ground 0.0078s
solve 0.0008s
total 0.0463s
```
And the units won't fluctuate from run to run as you make changes.
- [x] make `spack solve` timings consistent like before
* py-pytest-datadir: Init at 1.4.1
* py-pytest-data-dir: Fix missing dep
Co-authored-by: "Adam J. Stewart" <ajstewart426@gmail.com>
Co-authored-by: "Adam J. Stewart" <ajstewart426@gmail.com>
* ML CI: Linux x86_64
* Update comments
* Rename again
* Rename comments
* Update to match other arches
* No compiler
* Compiler was wrong anyway
* Faster TF
Avoid text decoding and encoding when combining log files; instead,
combine in binary mode.
Also do a buffered copy, which is sometimes faster for large log files.
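A minimal sketch of the approach (illustrative, not the exact code):
```python
import shutil


def combine_logs(parts, combined):
    # Concatenate log files in binary mode: no decode/encode round
    # trips, and copyfileobj does buffered copies of large files.
    with open(combined, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out, 64 * 1024)
```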
Currently, the Spack docs show documentation for submodules *before* documentation for
submodules on package doc pages. This means that if you put docs in `__init__.py` in
some package, the docs in there will be shown *after* the docs for all submodules of the
package instead of at the top as an intro to the package. See, e.g.,
[the lockfile docs](https://spack.readthedocs.io/en/latest/spack.environment.html#module-spack.environment),
which should be at the
[top of that page](https://spack.readthedocs.io/en/latest/spack.environment.html).
- [x] add the `--module-first` option to sphinx so that it generates module docs at top of page.
* librsvg: add 2.40.21, which does not require rust and has some security backports
https://download.gnome.org/sources/librsvg/2.40/librsvg-2.40.21.news
* librsvg: prevent finding broken gtkdoc binaries when ~doc is selected.
On my CentOS7 hosts, ./configure finds e.g. /bin/gtkdoc-rebase even when
~doc is selected. These tools use Python2, and fail with an error:
"ImportError: No module named site"
So prevent ./configure from finding these broken tools when not building
the +doc variant.
without this patch, build of paraview has a meltdown when reaching 3rd party catalyst and other packages
with these types of errors:
```
335 /tmp/foo/spack-stage/spack-stage-paraview-5.10.1-gscoqxhhakjyyfirdefuhmi2bzw4scho/spack-src/VTK/ThirdParty/fmt/vtkfmt/vtkfmt/format.h:1732:11: error: cannot capture a bit-field by reference
336 if (sign) *it++ = static_cast<Char>(data::signs[sign]);
337 ^
```
Signed-off-by: Howard Pritchard <howardp@lanl.gov>
The sticky property will prevent clingo from changing the amdgpu_target
to work around conflicts. This is the same behaviour as was adopted for
cuda_arch in 055c9d125d.
Replace the filter_file for older configure with rocm 5.3 with an
upstream patch. Further, the patch is no longer needed for develop or
later releases.
Implement an alternative strategy to do index.json invalidation.
The current approach of pairs of index.json / index.json.hash is
problematic because it leads to races.
The standard solution for cache invalidation is etags, which are
supported by both http and s3 protocols, which allows one to do
conditional fetches.
This PR implements that for the http/https schemes. It should also work
for s3 schemes, but that requires other PRs to be merged.
It also improves the unit tests for index.json fetches.
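A minimal sketch of an etag-based conditional fetch (hypothetical helper, not Spack's actual code):
```python
import urllib.error
import urllib.request


def fetch_index_if_changed(url, etag):
    # Conditional GET: the server replies 304 Not Modified when the
    # cached index.json is still current, so nothing is re-downloaded.
    request = urllib.request.Request(url, headers={"If-None-Match": etag})
    try:
        response = urllib.request.urlopen(request)
    except urllib.error.HTTPError as e:
        if e.code == 304:
            return None  # local cache is still valid
        raise
    return response.read(), response.headers.get("ETag")
```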
* py-scikit-image: add 0.19.3
* Update var/spack/repos/builtin/packages/py-scikit-image/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* LLVM: replace libelf dependency with elf
I didn't test this extensively, but in CMS LLVM builds just fine with elfutils.
* [@spackbot] updating style on behalf of iarspider
Co-authored-by: iarspider <iarspider@users.noreply.github.com>
* exiv2: add new versions
* babl: new package required to build GIMP
* gegl: new package required to build GIMP
* gexiv2: new package required to build GIMP
* libmypaint: new package required to build GIMP
* mypaint-brushes: new package required to build GIMP
* vala: new package required to build GIMP
* GIMP: new package definition for building GIMP-2.10 from source
* libjxl: update for 0.7.0
* libwmf: a library for reading vector images in Windows Metafile Format (WMF)
* libde265: an open source implementation of the h.265 video codec
* libwebp: add new versions
* GIMP: additional variants for building GIMP-2.10 from source
* libde265: remove boilerplate
* fixes for style precheck
* updates based on feedback
* fixes for style precheck
Update the depends_on("perl") to depends_on("perl+threads").
This and #34074 are needed to properly handle e.g. the perl-Thread-Queue
RPM package:
It may not be installed on RedHat-based hosts, which can lead to automake
build failures when `spack external find perl` or `spack external find --all`
is used to register the system-provided perl install.
* nextflow recipe: added latest stable version
* tower-cli recipe: added latest release
* recipes tower-agent and tower-cli renamed to nf-tower-agent and nf-tower-cli
* recipes nf-tower-agent and nf-tower-cli: small fix
* nf-core-tools recipe: added most py- dependencies
* nf-core-tools: recipe without galaxy-tool-util (for testing)
* fixed typos in py-yacman recipe
* fixed typos in py-pytest-workflow recipe
* fixed typo in nf-core-tools recipe
* fixed typos in py-yacman recipe
* fixes in recipes for py-questionary and py-url-normalize
* fixes to py-yacman recipe
* style fixes to py- packages that are dependencies to nf-core-tools
* fix in py-requests-cache recipe
* added missing dep in py-requests-cache recipe
* nf-core-tools deps: removed redundant python dep for py packages oyaml and piper
* nf-core-tools recipe: final, incl dep on py-galaxy-tool-util
* nf-core-tools: new version with extra dependency
* commit to merge packages on focus from update/nextflow-tools
* nf-core: commenting galaxy dep for this pr
* Update var/spack/repos/builtin/packages/py-requests-cache/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-requests-cache/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* removed nf-core-tools from this branch, will be back at the end
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Interim fix for #34559
Spack's MSVC compiler definition uses ifx as the Fortran compiler.
Prior to #33385, the Spack MSVC compiler definition required the
executable to be called "ifx.exe"; #33385 replaced this with just
"ifx", which inadvertently led to ifx falsely indicating the
presence of MSVC on non-Windows systems (which leads to future
errors when attempting to query/use those compiler objects).
This commit applies a short-term fix by updating MSVC Fortran
version detection to always indicate a failure on non-Windows.
fixes #34518
Fix an issue due to the MRO chain of the package wrapper
during build. Before this PR we were always returning
False when the builder object was created before the
run_tests method was monkey patched.
openPMD, a metadata standard on top of backends like ADIOS2 and HDF5,
is implemented in ParaView 5.9+ via a Python3 module.
Simplify Conflicts & Variant
Add to ECP Data Vis SDK
This reverts commit 8035eeb36d.
And also removes logic around an additional HEAD request to prevent
a more expensive GET request on wrong content-type. Since large files
are typically an attachment and only downloaded when reading the
stream, it's not an optimization that helps much, and in fact the logic
was broken since the GET request was done unconditionally.
* Add py-bmtk and py-neurotools
* py-bmtk: version bump
* [@spackbot] updating style on behalf of heerener
* Maybe the copyright needs to be extended to 2022 for the check to pass
* Process review remarks
* Update var/spack/repos/builtin/packages/py-neurotools/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
The main issue that's fixed is that Spack passes paths (as strings) to
functions that require urls. That wasn't an issue on unix, since there
you can simply concatenate `file://` and `path` and all is good, but on
Windows that gives invalid file URLs. On Unix, Spack also would not deal with URI encoding like x%20y for file paths.
This PR also removes Spack's custom url.parse function, which had its own incorrect interpretation of file URLs, taking file://x/y to mean the relative path x/y instead of hostname=x and path=/y, and which automatically interpolated variables, surprising behavior for a function that parses URLs.
Instead of all sorts of ad-hoc `if windows: fix_broken_file_url` this PR
adds two helper functions around Python's own path2url and reverse.
Also fixes a bug where some `spack buildcache` commands
used `-d` as a flag to mean `--mirror-url` requiring a URL, and others
`--directory`, requiring a path. It is now the latter consistently.
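Conceptually, the two helpers wrap Python's `pathname2url`/`url2pathname` like this (hypothetical names, illustrative only):
```python
import os
import urllib.parse
import urllib.request


def path_to_file_url(path):
    # Produces a valid file:// URL on both Unix and Windows, including
    # percent-encoding (e.g. "x y" -> "x%20y").
    return urllib.parse.urljoin(
        "file:", urllib.request.pathname2url(os.path.abspath(path))
    )


def file_url_to_path(url):
    return urllib.request.url2pathname(urllib.parse.urlparse(url).path)
```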
* Add GFE packages, Update pFUnit
* Remove citibeth as maintainer per her request
* Version 3.3.0 is an odd duck. Needs a v
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
When installing binary tarballs, Spack has to download from its
binary mirrors.
Sometimes Spack has cache available for these mirrors.
That cache helps to order mirrors to increase the likelihood of
getting a direct hit.
However, currently, when Spack can't find a spec in any local cache
of mirrors, it's very dumb:
- A while ago it used to query each mirror to see if it had a spec,
and use that information to order the mirrors again, only to go
ahead and do exactly part of what it just did: fetch the spec
from that mirror.
- Recently, it was changed to download a full index.json, which
can be multiple dozens of MBs of data and may take a minute to
process thanks to the blazing fast performance you get with
Python.
In a typical use case of concretizing with reuse, the full index.json
is already available, and it is likely that the local cache gives a perfect
mirror ordering on install. (There's typically no need to update any
caches).
However, in the use case of Gitlab CI, the build jobs don't have cache,
and it would be smart to just do direct fetches instead of all the
redundant work of (1) and/or (2).
Also, direct fetches from mirrors will soon be fast enough to
prefer these direct fetches over the excruciating slowness of
index.json files.
* include Drishti
* fix syntax
* Update var/spack/repos/builtin/packages/drishti/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Update var/spack/repos/builtin/packages/drishti/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-sphinxcontrib-devhelp: add 1.0.2
* Update var/spack/repos/builtin/packages/py-sphinxcontrib-devhelp/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-sphinxcontrib-applehelp: add 1.0.2
* Update var/spack/repos/builtin/packages/py-sphinxcontrib-applehelp/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* ROCm 5.3.0 updates
* New patches for 5.3.0 on hip and hsakmt
* Adding additional build arguments in hip and llvm
* RVS updates for 5.3.0 release
* New patches and rocm-tensile, rocprofiler-dev, roctracer-dev recipe updates for 5.3.0
* Reverting OPENMP fix from rocm-tensile
* Removing the patch to compile without git and adding witout it
* Install library into lib directory instead of lib64 across all platforms
* Setting lib install directory to lib
* Disable gallivm coroutine for libllvm15
* Update llvm-amdgpu prefix path in hip-config.cmake.in
Removing libllvm15 from Mesa dependency removing
* hip-config.cmake.in update required from 5.2
* hip-config.cmake.in update required from 5.2 and above
* hip-config.cmake.in update required for all 5.2 release above
* Style check correction in hip update
* ginkgo: add missing include
* Patching hsa include path for rocm 5.3
* Restricting patch for llvm-15
* Style check error correction
* PIC flag required for the new test applications
* Passing -DCMAKE_POSITION_INDEPENDENT_CODE=ON in the cmake_args instead of setting -fPIC in CFLAGS
Co-authored-by: Cordell Bloor <Cordell.Bloor@amd.com>
gnulib/lib/malloca.c uses a single-value `static_assert()`, which is only available in C11
syntax. `gcc` seems to be fine, but `icc` needs an extra flag.
Co-authored-by: Stephen Sachs <stesachs@amazon.com>
Since it's a header-only library there's nothing to build. However, the
default targets include tests and examples and there's no option to turn
them off during configuration time.
Per https://geant4.web.cern.ch/node/1837 the correct dependency for 10.6 is on `g4emlow@7.9.1`, not on both `g4emlow@7.9` and `g4emlow@7.9.1`.
This is a minor cosmetic fix. The concretization for 10.6 works just fine here. But this removes the duplicate entry.
This PR patches the f_check script to detect the ifort compiler and
ensure that F_COMPILER is set to INTEL. This problem was introduced with
openblas-0.3.21. Without this patch, the value of F_COMPILER falls back
to G77, and icc rather than ifort is used for the linking stage. That
results in the openblas library missing libifcore, which in turn means
many Fortran programs cannot be compiled with ifort.
Writing a long dependency like:
```python
depends_on(
"llvm"
"targets=amdgpu,bpf,nvptx,webassembly"
"version_suffix=jl +link_llvm_dylib ~internal_unwind"
)
```
when it should be formatted like this:
```python
depends_on(
"llvm"
" targets=amdgpu,bpf,nvptx,webassembly"
" version_suffix=jl +link_llvm_dylib ~internal_unwind"
)
```
can cause really subtle errors. Specifically, you'll get something like this in
the package sanity tests:
```
AttributeError: 'NoneType' object has no attribute 'rpartition'
```
because Spack happily constructs a class that has a dependency with name `None`.
We can catch this earlier by banning anonymous dependency specs directly in
`depends_on()`. This causes the package itself to fail to parse, and emits
a much better error message:
```
==> Error: Invalid dependency specification in package 'julia':
llvmtargets=amdgpu,bpf,nvptx,webassemblyversion_suffix=jl +link_llvm_dylib ~internal_unwind
```
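For reference, the root cause is Python's implicit concatenation of adjacent string literals:
```python
# Without a leading space in the continuation strings, the tokens are
# silently glued together into one unparseable spec string.
spec = (
    "llvm"
    "targets=amdgpu,bpf,nvptx,webassembly"
)
print(spec)  # llvmtargets=amdgpu,bpf,nvptx,webassembly
```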
* Only restrict CMake version in umpire when examples and rocm are enabled
* Add CMAKE_HIP_ARCHITECTURES to Umpire and lift cmake version restriction
Co-authored-by: Tom Scogland <scogland1@llnl.gov>
This patch:
https://gcc.gnu.org/legacy-ml/gcc-patches/2018-01/msg01962.html
is actually in Amazon Linux GCC 7.3.1, which we use in CI.
So we should not hold openblas back because of it.
Old versions of OpenBLAS fail to detect the host arch of some of the
AVX512 cpus of build nodes, causing build failures.
Of course we should try to set ARCH properly in OpenBLAS to avoid that
it looks up the build arch, but that's quite some work.
* Debian like distros use multiarch implementation spec
https://wiki.ubuntu.com/MultiarchSpec
Instead of being limited to /usr/lib64, architecture based
lib directories are used. For instance, under ubuntu a library package
on x86_64 installs binaries under /usr/lib/x86_64-linux-gnu.
Building pmix with external dependencies like hwloc or libevent
fails: with prefix set to /usr, that prefix works for
headers and binaries but does not work for libraries, since the default
library location /usr/lib64 does not hold the installed libraries.
The pmix build options --with-libevent and --with-libhwloc allow us to
specify dependent library locations. This commit is an effort to
highlight and resolve such an issue when users want to use Debian-like
distro library packages and use spack to build pmix.
There may be other packages that are impacted in a similar way.
* Adding libs property to hwloc and libevent and some cleanups to pmix patch
* Fixing style and adding comment on Pmix' 32-bit hwloc version detection issue
* `url_exists` improvements (take 2)
Make `url_exists` do HEAD request for http/https/s3 protocols
Rework the opener: construct it once and only once, dynamically dispatch
to the right one based on config.
* geant4: version bumps for Geant4 11.1.0
- Version bumps for new data libraries
- g4ndl 4.7
- g4emlow 8.2
- Add geant4-data@11.1.0
- Checksum new Geant4 11.1.0 release
- Limit +python variant to maximum of :11.0 due to removal of
Geant4Py in 11.1
- Update CLHEP dependency to at least 2.4.6.0 for this release
- Update VecGeom dependency to at least 1.2.0 for this release,
closing version ranges for older releases to prevent multiple
versions satisfying requirement
* geant4: correct max version for python support
It's very common for us to tell users to grep through the existing Spack packages to
find examples of what they want, and it's also very common for package developers to do
it. Now, searching packages is even easier.
`spack pkg grep` runs grep on all `package.py` files in repos known to Spack. It has no
special options other than the search string; all options passed to it are forwarded
along to `grep`.
```console
> spack pkg grep --help
usage: spack pkg grep [--help] ...
positional arguments:
grep_args arguments for grep
options:
--help show this help message and exit
```
```console
> spack pkg grep CMakePackage | head -3
/Users/gamblin2/src/spack/var/spack/repos/builtin/packages/3dtk/package.py:class _3dtk(CMakePackage):
/Users/gamblin2/src/spack/var/spack/repos/builtin/packages/abseil-cpp/package.py:class AbseilCpp(CMakePackage):
/Users/gamblin2/src/spack/var/spack/repos/builtin/packages/accfft/package.py:class Accfft(CMakePackage, CudaPackage):
```
```console
> spack pkg grep -Eho '(\S*)\(PythonPackage\)' | head -3
AwsParallelcluster(PythonPackage)
Awscli(PythonPackage)
Bueno(PythonPackage)
```
## Return Value
This retains the return value semantics of `grep`:
* 0 for found
* 1 for not found
* >1 for error
## Choosing a `grep`
You can set the ``SPACK_GREP`` environment variable to choose the ``grep``
executable this command should use.
Unit tests on Windows are supposed to pass for any PR to pass CI.
However, the return code for the unit test command was not being
checked, which meant this check was always passing (effectively
disabled). This PR
* Properly checks the result of the unit tests and fails if the
unit tests fail
* Fixes (or disables on Windows) a number of tests which have
"drifted" out of support on Windows since this check was
effectively disabled
At some point the `a` mock package became an `AutotoolsPackage`, and that means it
depends on `gnuconfig` on macOS. This was causing one of our shell tests to fail on
macOS because it was testing for `{a.prefix.bin}:{b.prefix.bin}` in `PATH`, but
`gnuconfig` shows up between them.
- [x] simplify the test to check `spack load --sh a` and `spack load --sh b` separately
* Add 20 as a valid option for cxxstd to fmt
* Add pika 0.11.0
* Fix version constraint for p2300 variant in pika package
* Add pika-algorithms package
* py-reportlab: add 3.6.12
* Update var/spack/repos/builtin/packages/py-reportlab/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* adding recipe for singularity-hpc - 1st go
* typo in singularity-hpc recipe
* singularity-hpc, spython recipes: added platform variant
* singularity-hpc, spython recipes: platform variant renamed to runtime
* style fix
* another style fix
* yet another style fix (why are they not reported altogether)
* singularity-hpc recipe: added Vanessa as maintainer
* singularity-hpc recipe: add podman variant
* singularity-hpc recipe: added variant for module system
* shpc recipe: add version for py-semver dependency
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-spython recipe: no need to specify generic python dep for a python pkg
* py-spython: py-requests not needed
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
## Motivation
Our parser grew to be quite complex, with a 2-state lexer and logic in the parser
that has up to 5 levels of nested conditionals. In the future, to turn compilers into
proper dependencies, we'll have to increase the complexity further as we foresee
the need to add:
1. Edge attributes
2. Spec nesting
to the spec syntax (see https://github.com/spack/seps/pull/5 for an initial discussion of
those changes). The main attempt here is thus to _simplify the existing code_ before
we start extending it later. We try to do that by adopting a different token granularity,
and by using more complex regexes for tokenization. This allows us to have a "flatter"
encoding for the parser, i.e., it has fewer nested conditionals and a near-trivial lexer.
There are places, namely in `VERSION`, where we have to use negative lookahead judiciously
to avoid ambiguity. Specifically, the parse is ambiguous without `(?!\s*=)` in `VERSION_RANGE`
and an extra final `\b` in `VERSION`:
```
@ 1.2.3 : develop # This is a version range 1.2.3:develop
@ 1.2.3 : develop=foo # This is a version range 1.2.3: followed by a key-value pair
```
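To see the lookahead at work, here is a toy pattern in the same spirit (illustrative only, not the actual `VERSION_RANGE` from `spack/parser.py`):
```python
import re

# (?!\s*=) refuses to take "develop" as the range's upper bound when it
# is really the key of a key-value pair that follows the range.
VERSION_RANGE = re.compile(r"@\s*[\w.]+\s*:(?:\s*[\w.]+\b(?!\s*=))?")

print(VERSION_RANGE.match("@ 1.2.3 : develop").group())      # @ 1.2.3 : develop
print(VERSION_RANGE.match("@ 1.2.3 : develop=foo").group())  # @ 1.2.3 :
```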
## Differences with the previous parser
~There are currently 2 known differences with the previous parser, which have been added on purpose:~
- ~No spaces allowed after a sigil (e.g. `foo @ 1.2.3` is invalid while `foo @1.2.3` is valid)~
- ~`/<hash> @1.2.3` can be parsed as a concrete spec followed by an anonymous spec (before was invalid)~
~We can recover the previous behavior on both ones but, especially for the second one, it seems the current behavior in the PR is more consistent.~
The parser is currently 100% backward compatible.
## Error handling
Being based on more complex regexes, we can possibly improve error
handling by adding regexes for common issues and hint users on that.
I'll leave that for a following PR, but there's a stub for this approach in the PR.
## Performance
To be sure we don't add any performance penalty with this new encoding, I measured:
```console
$ spack python -m timeit -s "import spack.spec" -c "spack.spec.Spec(<spec>)"
```
for different specs on my machine:
* **Spack:** 0.20.0.dev0 (c9db4e50ba045f5697816187accaf2451cb1aae7)
* **Python:** 3.8.10
* **Platform:** linux-ubuntu20.04-icelake
* **Concretizer:** clingo
results are:
| Spec | develop | this PR |
| ------------- | ------------- | ------- |
| `trilinos` | 28.9 usec | 13.1 usec |
| `trilinos @1.2.10:1.4.20,2.0.1` | 131 usec | 120 usec |
| `trilinos %gcc` | 44.9 usec | 20.9 usec |
| `trilinos +foo` | 44.1 usec | 21.3 usec |
| `trilinos foo=bar` | 59.5 usec | 25.6 usec |
| `trilinos foo=bar ^ mpich foo=baz` | 120 usec | 82.1 usec |
so this new parser seems to be consistently faster than the previous one.
## Modifications
In this PR we just substituted the Spec parser, which means:
- [x] Deleted in `spec.py` the `SpecParser` and `SpecLexer` classes. deleted `spack/parse.py`
- [x] Added a new parser in `spack/parser.py`
- [x] Hooked the new parser in all the places the previous one was used
- [x] Adapted unit tests in `test/spec_syntax.py`
## Possible future improvements
Random thoughts while working on the PR:
- Currently we transform hashes and files into specs during parsing. I think
we might want to introduce an additional step and parse special objects like
a `FileSpec` etc. in-between parsing and concretization.
* py-pywavelets: add 1.4.1
* Update var/spack/repos/builtin/packages/py-pywavelets/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pywavelets/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* [quantum-espresso] Parallel make fails for 6.{6,7}
I run into a race condition in `make` with Intel compiler on icelake when building QE 6.6 and 6.7.
* Fix comment
Co-authored-by: Stephen Sachs <stesachs@amazon.com>
* fix location of input for darshan-util tests
Darshan log file used for test input was removed from the Darshan
repo after the 3.4.0 release. This commit adds logic to use a
different log file as test input for later Darshan versions.
* libctl: add new version
Change-Id: I16f91cfab198c66b60407ab5bb2cb3ebeac6bc19
* New package: libgdsii
Change-Id: I34b52260ab68ecc857ddf8cc63b124adc2689a51
* New package: mpb
Change-Id: I6fdf5321c33d6bdbcaa1569026139a8483a3bcf8
* meep: add new version and variants
Change-Id: I0b60a9a4d9a329f7bde9027514467e17376e6a39
* meep: use with_or_without
Change-Id: I05584cb13df8ee153ed385e77d367cb34e39777e
* LibCatalyst: Fix version of pre-release develop version
* ParaView: Requires libcatalyst@2:
* ParaView: Apply adios2 module no kit patch to 5.11
This patch is still pending in VTK and didn't make it into 5.11 as anticipated.
* bcache: support external gettext when `libintl` is in glibc
Many glibc-based Linux systems don't have gettext's libintl because
libintl is included in the standard system's glibc (libc) itself.
When using `spack external find gettext` on those, packages like
`bcache` which unconditionally link using `-lintl` fail to link.
Description of the fix:
The libs property of spack's gettext recipe returns the list of libs,
so when gettext provides libintl, use it. When it does not, there is no
separate libintl library and the libintl API is provided by glibc.
Tested with `spack external find gettext` on glibc-based Linux and
in musl-based Alpine Linux to make sure that when -lintl is really
needed, it is really used and nothing breaks.
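In package terms, the logic is roughly the following (approximate sketch, not the exact recipe code):
```python
def setup_build_environment(self, env):
    # gettext's libs property tells us whether a separate libintl
    # exists; when it doesn't, glibc itself provides the libintl API
    # and no -lintl is needed.
    gettext = self.spec["gettext"]
    if "intl" in gettext.libs.names:
        env.append_flags("LDFLAGS", gettext.libs.search_flags + " -lintl")
```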
* Fix mercurial print_str failure.
* Perform same fix on py-pybind11 for print_string missing method.
Co-authored-by: Nicholas Cameron Sly <sly1@llnl.gov>
* Add new root versions and associated dependency constraints
* Please style guide
* Avoid conflicts where possible
* Untested prototype of macOS version detection
* Fixes for macOS version prototype
* More logical ordering
* More correctness and style fixes
* Try to use spack's macos_version
* Add some forgotten @s
* Actually, Spack can't build Python 3.6 anymore, and thus no older PyROOT
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* Fix compile errors with latest HDF5 1.13.3
* format
* Update var/spack/repos/builtin/packages/hdf5-vol-async/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This commit reworks the bootstrapping procedure to use Spack environments
as much as possible.
The `spack.bootstrap` module has also been reorganized into a Python package.
A distinction is made among "core" Spack dependencies (clingo, GnuPG, patchelf)
and other dependencies. For a number of reasons, explained in the `spack.bootstrap.core`
module docstring, "core" dependencies are bootstrapped with the current ad-hoc
method.
All the other dependencies are instead bootstrapped using a Spack environment
that lives in a directory specific to the interpreter and the architecture being used.
The cray manifest shows dependency information for cray-mpich, which we never previously cared about
because it can only be used as an external. This updates Spack's dependency information to make cray-mpich
specs read in from the cray external manifest usable.
* initial changes for rocm recipes
* drop support for older releases
* drop support for older rocm releases - add more recipes
* drop support for older releases
* address style issues
* address style error
* fix errors
* address review comments
* Added plumed version 2.8.1 including gromacs compatibility
* Corrected ~mpi and +mpi variants in new depends
* Fixed regression logic plumed+gromacs@2020.6 support
All the vermin annotations we were using were for optional features introduced in early
Python 3 versions. We no longer need any of them, as we only support Python 3.6+. If we
start optionally using features from newer Pythons than 3.6, we will need more vermin
annotations.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
We no longer support Python <3.6, so we don't need to check whether Python supports SSL
verification in `spack.util.web`.
- [x] Remove a bunch of logic we needed to appease Python 2
We've stopped supporting Python 2, and contributors are noticing that our CI no longer
allows Python 2.7 comment type hints. They end up having to adapt them, but this adds
extra unrelated work to PRs.
- [x] Move to 3.6 type hints across the entire code base
* doxygen: add build-tool tag
This allows it to be included automatically as an external. No one links against doxygen so this should be ok.
* doxygen: add self as maintainer
* vecgeom: add new 1.2.1 version
* vecgeom: introduce conflict between gcc/cuda
Recent tests of vecgeom in Spack environments have shown that the build
with +cuda fails with GCC >= 11.3 and CUDA < 11.7 with error
```
...lib/gcc/x86_64-pc-linux-gnu/11.3.0/include/serializeintrin.h(41):
error: identifier "__builtin_ia32_serialize" is undefined
1 error detected in the compilation of
".../VecGeom/source/BVHManager.cu".
```
Other GCC/CUDA combinations appear o.k.
Avoid this error in spack, and document it for users, with a conflict
directive to express the restriction.
All Spec attributes are now represented as `attr(attribute_name, ... args ...)`, e.g.
`attr(node, "hdf5")` instead of `node("hdf5")`, as we *have* to maintain the `attr()`
form anyway, and it simplifies the encoding to just maintain one form of the Spec
information.
Background
----------
In #20644, we unified the way conditionals are done in the concretizer, but this
introduced a nasty aspect to the encoding: we have to maintain everything we want in
general conditions in two forms: `predicate(...)` and `attr("predicate", ...)`. For
example, here's the start of the table of spec attributes we had to maintain:
```prolog
node(Package) :- attr("node", Package).
virtual_node(Virtual) :- attr("virtual_node", Virtual).
hash(Package, Hash) :- attr("hash", Package, Hash).
version(Package, Version) :- attr("version", Package, Version).
...
```
```prolog
attr("node", Package) :- node(Package).
attr("virtual_node", Virtual) :- virtual_node(Virtual).
attr("hash", Package, Hash) :- hash(Package, Hash).
attr("version", Package, Version) :- version(Package, Version).
...
```
This adds cognitive load to understanding how the concretizer works, as you have to
understand the equivalence between the two forms of spec attributes. It also makes the
general condition logic in #20644 hard to explain, and it's easy to forget to add a new
equivalence to this list when adding new spec attributes (at least two people have been
bitten by this).
Solution
--------
- [x] remove the equivalence list from `concretize.lp`
- [x] simplify `spec_clauses()`, `condition()`, and other functions in `asp.py` that need
to deal with `Spec` attributes.
- [x] Convert all old-form spec attributes in `concretize.lp` to the `attr()` form
- [x] Simplify `display.lp`, where we also had to maintain a list of spec attributes. Now
we only need to show `attr/2`, `attr/3`, and `attr/4`.
- [x] Simplify model extraction logic in `asp.py`.
Performance
-----------
This seems to result in a smaller grounded problem (as there are no longer duplicated
`attr("foo", ...)` / `foo(...)` predicates in the program), but it also adds a slight
performance overhead vs. develop. Ultimately, simplifying the encoding will be a win,
particularly for improving error messages.
Notes
-----
This will simplify future node refactors in `concretize.lp` (e.g., not identifying nodes
by package name, which we need for separate build dependencies).
I'm still not entirely used to reading `attr()` notation, but I think it's ultimately
clearer than what we did before. We need more uniform naming, and it's now clear what is
part of a solution. We should probably continue making the encoding of `concretize.lp`
simpler and more self-explanatory. It may make sense to rename `attr` to something like
`node_attr` and to simplify the names of node attributes. It also might make sense to do
something similar for other types of predicates in `concretize.lp`.
* Add py-python-lsp-server and dependencies
* Update var/spack/repos/builtin/packages/py-python-lsp-server/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Relax version range constraints on py-python-lsp-jsonrpc and add missing dep
* Add runtime dependency flag to setuptools dependencies
* Remove unused python@3.6: dependency and move setuptools-scm to build dep only
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Fix recipe
* Update package.py
* Update recipe following review
* Update var/spack/repos/builtin/packages/py-onnx-runtime/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* remove unused imports
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* CI: Update Data and Vis SDK Stack
* Update image to match target deployments (E4S)
* Enable all packages
* Test supported variants of ParaView and VisIt
* Sensei: Update Python hint for newer cmake
* Sensei: add Python3 hint
* Add new version of snakemake
* Add myself as a maintainer
* py-retry -> py-reretry
* Added snakemake variants for storage systems
* Updated comments
* Responded to Adam's comments
* Fixed spack style
* Add build/run dependency types
* New package py-statmorph w/ dependencies. Add py-astropy@5.1
* [@spackbot] updating style on behalf of meyersbs
* [py-statmorph,py-astropy,py-pyerfa] minor fixes
* Add checksum for py-rsa 4.9
* Update package.py
* Update var/spack/repos/builtin/packages/py-rsa/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* py-fastpath: new package
* Update var/spack/repos/builtin/packages/py-fastpath/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Historically, development of the Octopus code was done on the "develop" branch
on https://gitlab.com/octopus-code/octopus but now development takes place on
"main" (since Q3 2022).
The suggestion in this PR to solve the issue is to keep the spack label
`octopus@develop` as this better indicates this is the development branch on git
than `octopus@main`, but of course to use the `main` branch (there is no choice
here - the `develop` branch is not touched anymore). Sticking to
`octopus@develop` as the version label also keeps backwards compatibility.
This reverts commit d06fd26c9a.
The problem is that Bitbucket's API forwards download requests to an S3 bucket using a temporary URL. This URL includes a signature for the request, which embeds the HTTP verb. That means only GET requests are allowed, and HEAD requests would fail verification, leading to 403 errors. The same is observed when using `curl -LI ...`
v0.5 does not build due to a change in setting `arch` introduced in v0.5.2, compatibility with which was not kept in `arbor/package.py`. Since v0.5.2 is compatible with `arbor/package.py`, and is API compatible with v0.5, any users relying on v0.5 can rely on v0.5.2.
Using `-Werror` is good practice for development and testing, but causes us a great
deal of heartburn supporting multiple compiler versions, especially as newer compiler
versions add warnings for released packages. This PR adds support for suppressing
`-Werror` through spack's compiler wrappers. There are currently three modes for
the `flags:keep_werror` setting:
* `none`: (default) cancel all `-Werror`, `-Werror=*` and `-Werror-*` flags by
converting them to `-Wno-error[=]*` flags
* `specific`: preserve explicitly selected warnings as errors, such as
`-Werror=format-truncation`, but reverse the blanket `-Werror`
* `all`: keeps all `-Werror` flags
These can be set globally in config.yaml, through the config command-line flags, or
overridden by a particular package (some packages use Werror as a proxy for determining
support for other compiler features). We chose to use this approach because:
1. removing `-Werror` flags entirely broke *many* build systems, especially autoconf
based ones, because of things like checking `-Werror=feature` and making the
assumption that if that did not error other flags related to that feature would also work
2. Attempting to preserve `-Werror` in some phases but not others caused similar issues
3. The per-package setting came about because some packages, even with all these
protections, still use `-Werror` unsafely. Currently there are roughly 3 such packages
known.
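A minimal sketch of the flag rewriting the wrappers perform (illustrative, not the wrappers' actual code):
```python
def filter_werror(flags, keep_werror="none"):
    # "none": cancel blanket and specific -Werror flags;
    # "specific": keep -Werror=<warning> but reverse blanket -Werror;
    # "all": pass everything through untouched.
    if keep_werror == "all":
        return list(flags)
    out = []
    for flag in flags:
        if flag == "-Werror":
            out.append("-Wno-error")
        elif flag.startswith(("-Werror=", "-Werror-")) and keep_werror == "none":
            out.append("-Wno-error=" + flag[len("-Werror="):])
        else:
            out.append(flag)
    return out
```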
* Working updates to py-antlr4-python3-runtime and py-omegaconf
* [py-antlr4-python3-runtime] added version 4.9.3
* [@spackbot] updating style on behalf of qwertos
Co-authored-by: Benjamin Meyers <bsmits@rit.edu>
Co-authored-by: qwertos <qwertos@users.noreply.github.com>
For reasons beyond me Python thinks it's a great idea to upgrade HEAD
requests to GET requests when following redirects. So, this PR adds a
better `HTTPRedirectHandler`, and also moves some ad-hoc logic around
for dealing with disabling SSL certs verification.
Also, I'm stumped by the fact that Spack's `url_exists` does not use
HEAD requests at all, so in certain cases Spack awkwardly downloads
something first to see if it can download it, and then downloads it
again because it knows it can download it. So, this PR ensures that both
urllib and botocore use HEAD requests.
Finally, it also removes some things that were there to support currently
unsupported Python versions.
Notice that the HTTP spec [section 10.3.2](https://datatracker.ietf.org/doc/html/rfc2616.html#section-10.3.2) only talks about how to deal
with POST requests on redirect (whether to follow or not):
> If the 301 status code is received in response to a request other
> than GET or HEAD, the user agent MUST NOT automatically redirect the
> request unless it can be confirmed by the user, since this might
> change the conditions under which the request was issued.
> Note: When automatically redirecting a POST request after
> receiving a 301 status code, some existing HTTP/1.0 user agents
> will erroneously change it into a GET request.
Python has a comment about this; they chose to go with the "erroneous change".
But they then mess up the HEAD request while following the redirect, probably
because they were too busy discussing how to deal with POST.
See https://github.com/python/cpython/pull/99731
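For reference, a minimal sketch of a handler that keeps the verb (this illustrates the idea, not necessarily Spack's exact implementation):
```python
import urllib.request

class HeadPreservingRedirectHandler(urllib.request.HTTPRedirectHandler):
    # urllib's default handler builds the redirected request without a
    # method, which silently turns a HEAD into a GET; keep the verb instead.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        new_req = super().redirect_request(req, fp, code, msg, headers, newurl)
        if new_req is not None:
            new_req.method = req.get_method()  # e.g. "HEAD"
        return new_req

opener = urllib.request.build_opener(HeadPreservingRedirectHandler())
response = opener.open(urllib.request.Request("https://example.com", method="HEAD"))
```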
The package.py assumed "+mpi" in many places, without checking for the variant.
This problem went undetected, as a hard dependency on scalapack pulled an mpi
implementation into the dependency chain (this is also fixed).
Also, the +mpi variant is used to select between serial and parallel mode: it has
to enable both MPI and ScaLAPACK, since they are inter-dependent, and the compile
fails on checks for one if the other is not enabled.
Co-authored-by: Bernhard Kaindl <contact@bernhard.kaindl.dev>
This adds super-lazy maintainer mode to `spack checksum`: Instead of
only printing the new checksums to the terminal, `-a` and
`--add-to-package` will add the new checksums to the `package.py` file
and open it in the editor afterwards for final checks.
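For example (a sketch; the versions found depend on the package):
```shell
$ spack checksum --add-to-package zlib
```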
This PR removes [end of life](https://endoflife.date/python) versions of Python from Spack. Specifically, this includes all versions of Python older than 3.7.
See https://github.com/spack/spack/discussions/31824 for the rationale. Deprecated in #32615 and #28003.
For anyone using software that relies on Python 2, you have a few options:
* Upgrade the software to support Python 3. The `3to2` tool may get you most of the way there, although more complex libraries may need manual tweaking.
* Add Python 2 as an [external package](https://spack.readthedocs.io/en/latest/build_settings.html#external-packages). Many Python libraries do not support Python 2, but you may be able to add older versions that did once upon a time.
* Use Spack 0.19. Spack 0.19 is the last release to officially support Python 3.6 and older.
* Create and maintain your own [custom repository](https://spack.readthedocs.io/en/latest/repositories.html). Basically, you would need a package for Python 2 and any other Python 2-specific libraries you need.
* Add a WindowsRegistryView class, which can query for existing
package installations on Windows. This is particularly important
because some Windows packages (including those added here)
do not allow two simultaneous installs, and this can be
queried in order to provide a clear error message.
* Consolidate external path detection logic for Windows into
WindowsKitExternalPaths and WindowsCompilerExternalPaths objects.
* Add external-only packages win-sdk and wgl
* Add win-wdk (including external detection) which depends on
win-sdk
* Replace prior msmpi implementation with a source-based install
(depends on win-wdk). This install can control the install
destination (unlike the binary installation).
* Update MSVC compiler to choose vcvars based on win-sdk dependency
* Provide "msbuild" module-level variable to packages during build
* When creating symlinks on Windows, need to explicitly specify when
a symlink target is a directory
* executables_in_path no-longer defaults to using PATH (this is
now expected to be taken care of by the caller)
Spec traversals can now specify a topological ordering. A topologically-
ordered traversal with input specs X1, X2... will
* include all of X1, X2... and their children
* be ordered such that a given node is guaranteed to appear before any
of its children in the traversal
Other notes:
* Input specs can be children of other input specs (this is useful if
a user specifies a set of specs to uninstall: some of those specs
might be children of others)
* `direction="parents"` will produce a reversed topological order
(children always come before parents).
* `cover="edges"` will generate a list of edges L such that (a) input
edges will always appear before output edges and (b) if you create
a list with the destination of each edge in L the result is
topologically ordered
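As a sketch of how a caller might request this ordering (assuming a `topo` value for the traversal's `order` keyword):
```python
import spack.traverse as traverse

input_specs = []  # fill with concrete specs, e.g. X1, X2... (illustrative)

# Each node appears before any of its children; direction="parents"
# would reverse this (children before parents).
for spec in traverse.traverse_nodes(input_specs, order="topo"):
    print(spec.name)
```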
* test_suite.py: speed up slow test by using mock packages
* Don't resolve the sha during unit-tests
* Skip long-running test that fails, instead of executing it
* uninstall: fix accidental cubic complexity
Currently spack uninstall runs in worst case cubic time complexity
thanks to traversal during traversal during traversal while collecting
the specs to be uninstalled.
Also brings down the number of error messages printed to something
linear in the amount of matching specs instead of quadratic.
* qt6: initial commit of several basic qt6 packages
* Qt6: fix style issues
* [qt6] fix style issues, trailing spaces
* [qt6] rename to qt-* ecosystem; remove imports
* [qt6] rename dependencies; change version strings
* [qt6] list_urls
* [qt6] homepage links
* [qt6] missing closing quotes failed style check
* qt-declarative: use private _versions
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* qt-quick3d, qt-quicktimeline, qt-shadertools: use private _versions
* qt-base: rework feature defines and use run_tests
* qt: new version 6.2.4
* flake8 whitespace before comma
* qt-base: variant opengl when +gui
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* qt6: rebase and apply new black style
* qt6: apply style isort fixes
* qt6: new version 6.3.0 and 6.3.1
* qt6: add 6.3.0 and 6.3.1 to versions list
* qt6: multi-argument join_path
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* qt-base: fix isort
* qt-shadertools: no cmake_args needed
* qt-declarative: imports up front
* qt-quick3d: fix import
* qt-declarative: remove useless cmake_args
* qt-shadertools: imports and join_path fixes
* qt-quick3d: join_path fixes
* qt-declarative: join_path fixes
* Update features based on gui usage
* Update dependencies, cmake args, mac support
* Update features based on linux
* More updates
* qt-base: fix style
* qt-base: archive_files join_path
* qt-base: new version 6.3.2
* qt-{declarative,quick3d,quicktimeline,shadertools}@6.3.2
* qt-base: require libxcb@1.13: and use system xcb_xinput when on linux
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update dask and related packages
* Update package dependency specs
* Run spack style
* Add new version of locket
* Respond to comments
* Added constraints
* Add version constraints for py-dask+distributed
* Run spack style
* Update var/spack/repos/builtin/packages/py-dask/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Deprecated dask versions
* Deprecated more dask and distirbuted
* spack style --fix
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* graphviz: remove cyclic dep to svg when pangocairo and poppler+glib failure
* Add a regression test for 33928
* PackageBase should not set `(build|install)_time_test_callbacks`
* Fix audits by preserving the current semantic
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
gcc@10: newer binutils than RHEL7/8's are required for guaranteed operation. Therefore, on RHEL7/8, reject ~binutils: you need to add +binutils to be sure the binutils in use are recent enough.
See this discussion with the OpenBLAS devs for reference:
https://github.com/xianyi/OpenBLAS/issues/3805#issuecomment-1319878852
Co-authored-by: Bernhard Kaindl <contact@bernhard.kaindl.dev>
Add a dependency on python versions less than 3.10 in order to work
around a bug in libxml2's configure script that fails to parse python
version strings with more than one character for the minor version.
The bug is present in v2.10.1, but has been fixed in 2.10.2.
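In package terms this is presumably a directive along these lines (a sketch; the version ranges follow from the description above):
```python
# Restrict to python < 3.10 for libxml2 versions still carrying the
# configure bug (present through 2.10.1, fixed in 2.10.2).
depends_on("python@:3.9", when="@:2.10.1")
```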
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
* add version 1.46.0 to bioconductor package r-a4
* add version 1.46.0 to bioconductor package r-a4base
* add version 1.46.0 to bioconductor package r-a4classif
* add version 1.46.0 to bioconductor package r-a4core
* add version 1.46.0 to bioconductor package r-a4preproc
* add version 1.46.0 to bioconductor package r-a4reporting
* add version 1.52.0 to bioconductor package r-absseq
* add version 1.28.0 to bioconductor package r-acde
* add version 1.76.0 to bioconductor package r-acgh
* add version 2.54.0 to bioconductor package r-acme
* add version 1.68.0 to bioconductor package r-adsplit
* add version 1.70.0 to bioconductor package r-affxparser
* add version 1.76.0 to bioconductor package r-affy
* add version 1.74.0 to bioconductor package r-affycomp
* add version 1.58.0 to bioconductor package r-affycompatible
* add version 1.56.0 to bioconductor package r-affycontam
* add version 1.70.0 to bioconductor package r-affycoretools
* add version 1.46.0 to bioconductor package r-affydata
* add version 1.50.0 to bioconductor package r-affyilm
* add version 1.68.0 to bioconductor package r-affyio
* add version 1.74.0 to bioconductor package r-affyplm
* add version 1.44.0 to bioconductor package r-affyrnadegradation
* add version 1.46.0 to bioconductor package r-agdex
* add version 3.30.0 to bioconductor package r-agilp
* add version 2.48.0 to bioconductor package r-agimicrorna
* add version 1.30.0 to bioconductor package r-aims
* add version 1.30.0 to bioconductor package r-aldex2
* add version 1.36.0 to bioconductor package r-allelicimbalance
* add version 1.24.0 to bioconductor package r-alpine
* add version 2.60.0 to bioconductor package r-altcdfenvs
* add version 2.22.0 to bioconductor package r-anaquin
* add version 1.26.0 to bioconductor package r-aneufinder
* add version 1.26.0 to bioconductor package r-aneufinderdata
* add version 1.70.0 to bioconductor package r-annaffy
* add version 1.76.0 to bioconductor package r-annotate
* add version 1.60.0 to bioconductor package r-annotationdbi
* add version 1.22.0 to bioconductor package r-annotationfilter
* add version 1.40.0 to bioconductor package r-annotationforge
* add version 3.6.0 to bioconductor package r-annotationhub
* add version 3.28.0 to bioconductor package r-aroma-light
* add version 1.30.0 to bioconductor package r-bamsignals
* add version 2.14.0 to bioconductor package r-beachmat
* add version 2.58.0 to bioconductor package r-biobase
* add version 2.6.0 to bioconductor package r-biocfilecache
* add version 0.44.0 to bioconductor package r-biocgenerics
* add version 1.8.0 to bioconductor package r-biocio
* add version 1.16.0 to bioconductor package r-biocneighbors
* add version 1.32.1 to bioconductor package r-biocparallel
* add version 1.14.0 to bioconductor package r-biocsingular
* add version 2.26.0 to bioconductor package r-biocstyle
* add version 3.16.0 to bioconductor package r-biocversion
* add version 2.54.0 to bioconductor package r-biomart
* add version 1.26.0 to bioconductor package r-biomformat
* add version 2.66.0 to bioconductor package r-biostrings
* add version 1.46.0 to bioconductor package r-biovizbase
* add version 1.8.0 to bioconductor package r-bluster
* add version 1.66.1 to bioconductor package r-bsgenome
* add version 1.34.0 to bioconductor package r-bsseq
* add version 1.40.0 to bioconductor package r-bumphunter
* add version 2.64.0 to bioconductor package r-category
* add version 2.28.0 to bioconductor package r-champ
* add version 2.30.0 to bioconductor package r-champdata
* add version 1.48.0 to bioconductor package r-chipseq
* add version 4.6.0 to bioconductor package r-clusterprofiler
* add version 1.34.0 to bioconductor package r-cner
* add version 1.30.0 to bioconductor package r-codex
* add version 2.14.0 to bioconductor package r-complexheatmap
* add version 1.72.0 to bioconductor package r-ctc
* add version 2.26.0 to bioconductor package r-decipher
* add version 0.24.0 to bioconductor package r-delayedarray
* add version 1.20.0 to bioconductor package r-delayedmatrixstats
* add version 1.38.0 to bioconductor package r-deseq2
* add version 1.44.0 to bioconductor package r-dexseq
* add version 1.40.0 to bioconductor package r-dirichletmultinomial
* add version 2.12.0 to bioconductor package r-dmrcate
* add version 1.72.0 to bioconductor package r-dnacopy
* add version 3.24.1 to bioconductor package r-dose
* add version 2.46.0 to bioconductor package r-dss
* add version 3.40.0 to bioconductor package r-edger
* add version 1.18.0 to bioconductor package r-enrichplot
* add version 2.22.0 to bioconductor package r-ensembldb
* add version 1.44.0 to bioconductor package r-exomecopy
* add version 2.6.0 to bioconductor package r-experimenthub
* add version 1.24.0 to bioconductor package r-fgsea
* add version 2.70.0 to bioconductor package r-gcrma
* add version 1.34.0 to bioconductor package r-gdsfmt
* add version 1.80.0 to bioconductor package r-genefilter
* add version 1.34.0 to bioconductor package r-genelendatabase
* add version 1.70.0 to bioconductor package r-genemeta
* add version 1.76.0 to bioconductor package r-geneplotter
* add version 1.20.0 to bioconductor package r-genie3
* add version 1.34.3 to bioconductor package r-genomeinfodb
* update r-genomeinfodbdata
* add version 1.34.0 to bioconductor package r-genomicalignments
* add version 1.50.2 to bioconductor package r-genomicfeatures
* add version 1.50.1 to bioconductor package r-genomicranges
* add version 2.66.0 to bioconductor package r-geoquery
* add version 1.46.0 to bioconductor package r-ggbio
* add version 3.6.2 to bioconductor package r-ggtree
* add version 2.8.0 to bioconductor package r-glimma
* add version 1.10.0 to bioconductor package r-glmgampoi
* add version 5.52.0 to bioconductor package r-globaltest
* update r-go-db
* add version 1.18.0 to bioconductor package r-gofuncr
* add version 2.24.0 to bioconductor package r-gosemsim
* add version 1.50.0 to bioconductor package r-goseq
* add version 2.64.0 to bioconductor package r-gostats
* add version 1.76.0 to bioconductor package r-graph
* add version 1.60.0 to bioconductor package r-gseabase
* add version 1.30.0 to bioconductor package r-gtrellis
* add version 1.42.0 to bioconductor package r-gviz
* add version 1.26.0 to bioconductor package r-hdf5array
* add version 1.70.0 to bioconductor package r-hypergraph
* add version 1.34.0 to bioconductor package r-illumina450probevariants-db
* add version 0.40.0 to bioconductor package r-illuminaio
* add version 1.72.0 to bioconductor package r-impute
* add version 1.36.0 to bioconductor package r-interactivedisplaybase
* add version 2.32.0 to bioconductor package r-iranges
* add version 1.58.0 to bioconductor package r-kegggraph
* add version 1.38.0 to bioconductor package r-keggrest
* add version 3.54.0 to bioconductor package r-limma
* add version 2.50.0 to bioconductor package r-lumi
* add version 1.74.0 to bioconductor package r-makecdfenv
* add version 1.76.0 to bioconductor package r-marray
* add version 1.10.0 to bioconductor package r-matrixgenerics
* add version 1.6.0 to bioconductor package r-metapod
* add version 2.44.0 to bioconductor package r-methylumi
* add version 1.44.0 to bioconductor package r-minfi
* add version 1.32.0 to bioconductor package r-missmethyl
* add version 1.78.0 to bioconductor package r-mlinterfaces
* add version 1.10.0 to bioconductor package r-mscoreutils
* add version 2.24.0 to bioconductor package r-msnbase
* add version 2.54.0 to bioconductor package r-multtest
* add version 1.36.0 to bioconductor package r-mzid
* add version 2.32.0 to bioconductor package r-mzr
* add version 1.60.0 to bioconductor package r-oligoclasses
* update r-org-hs-eg-db
* add version 1.40.0 to bioconductor package r-organismdbi
* add version 1.38.0 to bioconductor package r-pathview
* add version 1.90.0 to bioconductor package r-pcamethods
* update r-pfam-db
* add version 1.42.0 to bioconductor package r-phyloseq
* add version 1.60.0 to bioconductor package r-preprocesscore
* add version 1.30.0 to bioconductor package r-protgenerics
* add version 1.32.0 to bioconductor package r-quantro
* add version 2.30.0 to bioconductor package r-qvalue
* add version 1.74.0 to bioconductor package r-rbgl
* add version 2.38.0 to bioconductor package r-reportingtools
* add version 2.42.0 to bioconductor package r-rgraphviz
* add version 2.42.0 to bioconductor package r-rhdf5
* add version 1.10.0 to bioconductor package r-rhdf5filters
* add version 1.20.0 to bioconductor package r-rhdf5lib
* add version 2.0.0 to bioconductor package r-rhtslib
* add version 1.74.0 to bioconductor package r-roc
* add version 1.26.0 to bioconductor package r-rots
* add version 2.14.0 to bioconductor package r-rsamtools
* add version 1.58.0 to bioconductor package r-rtracklayer
* add version 0.36.0 to bioconductor package r-s4vectors
* add version 1.6.0 to bioconductor package r-scaledmatrix
* add version 1.26.0 to bioconductor package r-scater
* add version 1.12.0 to bioconductor package r-scdblfinder
* add version 1.26.0 to bioconductor package r-scran
* add version 1.8.0 to bioconductor package r-scuttle
* add version 1.64.0 to bioconductor package r-seqlogo
* add version 1.56.0 to bioconductor package r-shortread
* add version 1.72.0 to bioconductor package r-siggenes
* add version 1.20.0 to bioconductor package r-singlecellexperiment
* add version 1.32.0 to bioconductor package r-snprelate
* add version 1.48.0 to bioconductor package r-snpstats
* add version 2.34.0 to bioconductor package r-somaticsignatures
* add version 1.10.0 to bioconductor package r-sparsematrixstats
* add version 1.38.0 to bioconductor package r-spem
* add version 1.36.0 to bioconductor package r-sseq
* add version 1.28.0 to bioconductor package r-summarizedexperiment
* add version 3.46.0 to bioconductor package r-sva
* add version 1.36.0 to bioconductor package r-tfbstools
* add version 1.20.0 to bioconductor package r-tmixclust
* add version 2.50.0 to bioconductor package r-topgo
* add version 1.22.0 to bioconductor package r-treeio
* add version 1.26.0 to bioconductor package r-tximport
* add version 1.26.0 to bioconductor package r-tximportdata
* add version 1.44.0 to bioconductor package r-variantannotation
* add version 3.66.0 to bioconductor package r-vsn
* add version 2.4.0 to bioconductor package r-watermelon
* add version 2.44.0 to bioconductor package r-xde
* add version 1.56.0 to bioconductor package r-xmapbridge
* add version 0.38.0 to bioconductor package r-xvector
* add version 1.24.0 to bioconductor package r-yapsa
* add version 1.24.0 to bioconductor package r-yarn
* add version 1.44.0 to bioconductor package r-zlibbioc
* make version resource consistent for r-bsgenome-hsapiens-ucsc-hg19
* make version resource consistent for r-go-db
* make version resource consistent for r-kegg-db
* make version resource consistent for r-org-hs-eg-db
* make version resource consistent for r-pfam-db
* new package: r-ggrastr
* Patches not needed for new version
* new package: r-hdo-db
* new package: r-ggnewscale
* new package: r-gson
* Actually depends on ggplot2@3.4.0:
* Fix formatting of r-hdo-db
* Fix dependency version specifiers
* Clean up duplicate dependency references
* Enable hdf5 build (including +mpi) on Windows
* This includes updates to hdf5 dependencies openssl (minor edit) and
bzip2 (more-extensive edits)
* Add binary-based installation of msmpi (this is currently the only
supported MPI implementation in Spack for Windows). Note that this
does not install to the Spack-specified prefix. This implementation
will be replaced with a source-based implementation
Co-authored-by: John Parent <john.parent@kitware.com>
Setting PYTHONHOME is rarely needed (since each interpreter has
various ways of setting it automatically), and it is very often
difficult to get right manually.
For instance, the change that set PYTHONHOME to
sysconfig["base_prefix"] broke bootstrapping Spack's dev dependencies
for me when working inside a virtual environment on Linux.
* make mpi a variant instead of a dependency
* separate serial and MPI dependencies
* configure args depending on serial or mpi variant
* reformat with black
In #3113, `https` was removed to ensure that `curl` can be bootstrapped
without SSL being present. This was lost in #25672 which aimed to use
`https` where possible.
Co-authored-by: Bernhard Kaindl <43588962+bernhardkaindl@users.noreply.github.com>
#32942 fixed bootstrapping on Windows by having the core Spack
code explicitly add the Clingo package bin/ directory as a
DLL path.
Since then, #33400 has been merged, which ensures that the Python
module installed by the Spack `clingo` package can find the DLLs
in bin/.
Note that this only works for Spack instances which have been
bootstrapped after #33400: for installations bootstrapped before
then, you will need to run `spack clean -b` (this would only
be needed for Spack instances running on Windows).
Building neovim@0.8.0 complains (for me) about Lua's lpeg and mpack
packages not being available at build time. Removing the link-only
setting in the dependencies for these two packages fixes the build for
me.
Revamp the timer so we always have a designated begin and end.
Fix a bug where the phase timer was stopped before the phase started,
resulting in incorrect timing reports in timers.json.
Add spack.ld_so_conf.host_dynamic_linker_search_paths
Retrieve the current host runtime search paths for shared libraries;
for GNU and musl Linux we try to retrieve the dynamic linker from the
current Python interpreter and then find the corresponding config file
(e.g. ld.so.conf or ld-musl-<arch>.path). Similar can be done for
BSD and others, but this is not implemented yet. The default paths
are always returned. We don't check if the listed directories exist.
Use this in spack external find for libraries.
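A sketch of the new function in use (assuming it takes no arguments):
```python
import spack.ld_so_conf

# Host runtime search paths for shared libraries; default paths are
# always included, and existence of the directories is not checked.
paths = spack.ld_so_conf.host_dynamic_linker_search_paths()
print(paths)
```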
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
While binaries built for PRs that get merged must still be rebuilt
in develop pipelines, they can be used by other PRs that find they
would otherwise need to rebuild them. Now that spackbot is
managing copying PR binaries from merged PRs into a shared location,
keeping it pruned to a reasonable size, and making sure the indices
are up to date, spack can use these mirrors as a potential source
of binaries.
I'm finding I often want the date in my paths and it would be nice if spack had a config variable for this.
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* Remove CI jobs related to Python 2.7
* Remove Python 2.7 specific code from Spack core
* Remove externals for Python 2 only
* Remove llnl.util.compat
We added a hotfix to releases/v0.19 with a feature flag, but the flag
is incompatible with the config schema on `develop`.
- [x] Ensure schema is compatible on develop even though config option is unused.
* Speed-up bootstrap mirror unit test
The unit test doesn't need to concretize, since it checks
only metadata for the mirror.
* architecture.py: use "default_mock_concretization" for slow test
Environments and environment views have taken over the role of `spack activate/deactivate`, and we should deprecate these commands for several reasons:
- Global activation is a really poor idea:
- Install prefixes should be immutable, since they can have multiple, unrelated dependents; see below
- Added complexity elsewhere: verification of installations, tarballs for build caches, creation of environment views of packages with unrelated extensions "globally activated"... by removing the feature, it gets easier for people to contribute, and we'd end up with fewer bugs due to edge cases.
- Environments accomplish the same thing for non-global "activation", i.e. `spack view`, but better.
Also we write in the docs:
```
However, Spack global activations have two potential drawbacks:
#. Activated packages that involve compiled C extensions may still
need their dependencies to be loaded manually. For example,
``spack load openblas`` might be required to make ``py-numpy``
work.
#. Global activations "break" a core feature of Spack, which is that
multiple versions of a package can co-exist side-by-side. For example,
suppose you wish to run a Python package in two different
environments but the same basic Python --- one with
``py-numpy@1.7`` and one with ``py-numpy@1.8``. Spack extensions
will not support this potential debugging use case.
```
Now that environments are established and views can take over the role of activation
non-destructively, we can remove global activation/deactivation.
* add version 1.7-20 to r-ade4
* add version 1.1-13 to r-adephylo
* add version 0.3-20 to r-adespatial
* add version 1.2-0 to r-afex
* add version 0.8-19 to r-amap
* add version 0.1.8 to r-aplot
* add version 2.0.8 to r-biasedurn
* add version 2.4-4 to r-bio3d
* add version 1.30.19 to r-biocmanager
* add version 1.2-9 to r-brobdingnag
* add version 0.4.1 to r-bslib
* add version 3.7.3 to r-callr
* add version 3.1-1 to r-car
* add version 0.3-62 to r-clue
* add version 1.2.0 to r-colourpicker
* add version 1.8.1 to r-commonmark
* add version 0.4.3 to r-cpp11
* add version 1.14.4 to r-data-table
* add version 1.34 to r-desolve
* add version 2.4.5 to r-devtools
* add version 0.6.30 to r-digest
* add version 0.26 to r-dt
* add version 1.7-12 to r-e1071
* add version 1.8.2 to r-emmeans
* add version 1.1.16 to r-exomedepth
* add version 0.1-8 to r-expint
* add version 0.4.0 to r-fontawesome
* add version 1.5-2 to r-fracdiff
* add version 1.10.0 to r-future-apply
* add version 3.0.1 to r-ggmap
* add version 3.4.0 to r-ggplot2
* add version 2.1.0 to r-ggraph
* add version 0.6.4 to r-ggsignif
* add version 0.6-7 to r-gmp
* add version 2022.10-2 to r-gparotation
* add version 0.8.3 to r-graphlayouts
* add version 1.8.8 to r-grbase
* add version 2.1-0 to r-gstat
* add version 0.18.6 to r-insight
* add version 1.3.1 to r-irkernel
* add version 1.8.3 to r-jsonlite
* add version 1.7.0 to r-lava
* add version 1.1-31 to r-lme4
* add version 5.6.17 to r-lpsolve
* add version 5.5.2.0-17.9 to r-lpsolveapi
* add version 1.2.9 to r-mapproj
* add version 3.4.1 to r-maps
* add version 1.1-5 to r-maptools
* add version 1.3 to r-markdown
* add version 6.0.0 to r-mclust
* add version 4.2-2 to r-memuse
* add version 1.8-41 to r-mgcv
* add version 1.2.5 to r-minqa
* add version 0.3.7 to r-nanotime
* add version 2.4.1.1 to r-nfactors
* add version 3.1-160 to r-nlme
* add version 2.7-1 to r-nmof
* add version 0.60-16 to r-np
* add version 2.0.4 to r-openssl
* add version 4.2.5.1 to r-openxlsx
* add version 0.3-8 to r-pbdzmq
* add version 2.0-3 to r-pcapp
* add version 0.7.0 to r-philentropy
* add version 2.0.3 to r-pkgcache
* add version 1.3.1 to r-pkgload
* add version 1.10-4 to r-polyclip
* add version 3.8.0 to r-processx
* add version 1.7.2 to r-ps
* add version 2.12.1 to r-r-utils
* add version 1.2.4 to r-ragg
* add version 3.7 to r-rainbow
* add version 0.0.20 to r-rcppannoy
* add version 0.3.3.9.3 to r-rcppeigen
* add version 0.3.12 to r-rcppgsl
* add version 1.0.2 to r-recipes
* add version 1.1.10 to r-rmutil
* add version 0.10.24 to r-rmysql
* add version 2.4.8 to r-rnexml
* add version 4.1.19 to r-rpart
* add version 1.7-2 to r-rrcov
* add version 0.8.28 to r-rsconnect
* add version 0.25 to r-servr
* add version 1.7.3 to r-shiny
* add version 3.0-0 to r-spatstat-data
* add version 3.0-3 to r-spatstat-geom
* add version 3.0-1 to r-spatstat-random
* add version 3.0-0 to r-spatstat-sparse
* add version 3.0-1 to r-spatstat-utils
* add version 1.8.0 to r-styler
* add version 3.4.1 to r-sys
* add version 1.5-2 to r-tclust
* add version 0.0.9 to r-tfmpvalue
* add version 1.2.0 to r-tidyselect
* add version 0.10-52 to r-tseries
* add version 4.2.2 to r-v8
* add version 0.5.0 to r-vctrs
* add version 2.6-4 to r-vegan
* add version 0.7.0 to r-wk
* add version 0.34 to r-xfun
* add version 1.0.6 to r-xlconnect
* add version 3.99-0.12 to r-xml
* add version 0.12.2 to r-xts
* add version 1.0-33 to r-yaimpute
* add version 2.3.6 to r-yaml
* add version 2.2.2 to r-zip
* add version 2.2-7 to r-deoptim
* add version 4.3.1 to r-ergm
* add version 0.18 to r-evaluate
* add version 1.29.0 to r-future
* add version 0.0.8 to r-ggfun
* add version 0.9.2 to r-ggrepel
* add version 1.9.0 to r-lubridate
* add version 4.10.1 to r-plotly
* add version 0.2.12 to r-rcppcctz
* add version 1.2 to r-rook
* add version 1.6-1 to r-segmented
* add version 4.2.1 to r-seurat
* add version 4.1.3 to r-seuratobject
* add version 1.0-9 to r-sf
* add version 1.5-1 to r-sp
* add version 1.8.1 to r-styler
* new package: r-timechange
* new package: r-stars
* new package: r-sftime
* new package: r-spatstat-explore
Co-authored-by: glennpj <glennpj@users.noreply.github.com>
* updates and fixes for libpressio
* differentiate between standalone and build tests
* add e4s tags
Co-authored-by: Robert Underwood <runderwood@anl.gov>
* include py-darshan
* include requested changes
* fix required versions
* fix style
* fix style
* Update package.py
* Update var/spack/repos/builtin/packages/py-darshan/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Boost 1.64.0 has build errors when building the python and MPI modules. Previously this was just a comment in the package.py, which allowed broken specs to concretize. The comments are now expressed as conflicts to prevent this.
Currently, external `PythonPackage`s cause install failures because the logic in `PythonPackage` assumes that it can ask for `spec["python"]`. Because we chop off externals' dependencies, an external Python extension may not have a `python` dependency.
This PR resolves the issue by guaranteeing that a `python` node is present in one of two ways:
1. If there is already a `python` node in the DAG, we wire the external up to it.
2. If there is no existing `python` node, we wire up a synthetic external `python` node, and we assume that it has the same prefix as the external.
The assumption in (2) isn't always valid, but it's better than leaving the user with a non-working `PythonPackage`.
The logic here is specific to `python`, but other types of extensions could take advantage of it. Packages need only define `update_external_dependencies(self)`, and this method will be called on externals after concretization. This likely needs to be fleshed out in the future so that any added nodes are included in concretization, but for now we only bolt on dependencies post-concretization.
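A heavily hedged sketch of a package opting into the hook (the body and preamble are illustrative only; Spack's real python logic is more involved):
```python
from spack.package import *  # conventional package.py preamble


class PyExample(PythonPackage):
    # Called on external packages after concretization; a package can use
    # it to bolt on dependency nodes that external detection chopped off.
    def update_external_dependencies(self):
        if "python" not in self.spec:
            pass  # wire up an existing or synthetic external python node here
```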
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* [mfem] updates related to building with cuda
* [hypre] tweak to support building with external ROCm/HIP
* [mfem] more tweaks related to building with +rocm
* [mfem] temporary (?) workaround for issue #33684
* [mfem] fix style
* [mfem] fix +shared+miniapps install
* Add checksum for py-protobuf 4.21.7, protobuf 21.7
* Update var/spack/repos/builtin/packages/protobuf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Update var/spack/repos/builtin/packages/protobuf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/protobuf/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update package.py
* Update package.py
* Delete protoc2.5.0_aarch64.patch
* Update package.py
* Restore but deprecate py-protobuf 3.0.0a/b; deprecate py-tensorflow 0.x
* Fix audit
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Spack currently creates a temporary sbang that is moved "atomically" in place,
but this temporary causes races when multiple processes start installing sbang.
Let's just stick to an idempotent approach. Notice that we only re-install sbang
if Spack updates it (since we do file compare), and sbang was only touched
18 times in the past 6 years, whereas we hit the sbang tempfile issue
frequently with parallel install on a fresh spack instance in CI.
Also fixes a bug where permissions weren't updated if config changed but
the latest version of the sbang file was already installed.
The `intel` compiler at versions > 20 is provided by the `intel-oneapi-compilers-classic`
package (a thin wrapper around the `intel-oneapi-compilers` package), and the `oneapi`
compiler is provided by the `intel-oneapi-compilers` package.
Prior to this work, neither of these compilers could be bootstrapped by Spack as part of
an install with `install_missing_compilers: True`.
Changes made to make these two packages bootstrappable:
1. The `intel-oneapi-compilers-classic` package includes a bin directory and symlinks
to the compiler executables, not just logical pointers in Spack.
2. Spack can look for bootstrapped compilers in directories other than `$prefix/bin`,
defined on a per-package basis
3. `intel-oneapi-compilers` specifies a non-default search directory for the
compiler executables.
4. The `spack.compilers` module now can make more advanced associations between
packages and compilers, not just simple name translations
5. Spack support for lmod hierarchies accounts for differences between package
names and the associated compiler names for `intel-oneapi-compilers/oneapi`,
`intel-oneapi-compilers-classic/intel@20:`, `llvm+clang/clang`, and
`llvm-amdgpu/rocmcc`.
- [x] full end-to-end testing
- [x] add unit tests
* Add zstd support for elfutils
Not defining `+zstd` implies the `--without-zstd` flag to configure.
This avoids automatic library detection and thus makes the build
depend only on Spack-installed dependencies.
* Use autotools helper "with_or_without"
* Revert use of with_or_without
Using `with_or_without()` with `variant` keyword does not seem to work.
"spack install foo" no longer adds package "foo" to the environment
(i.e. to the list of root specs) by default: you must specify "--add".
Likewise "spack uninstall foo" no longer removes package "foo" from
the environment: you must specify --remove. Generally this means
that install/uninstall commands will no longer modify the users list
of root specs (which many users found problematic: they had to
deactivate an environment if they wanted to uninstall a spec without
changing their spack.yaml description).
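For example (a sketch of the new behavior inside an active environment):
```shell
$ spack install --add foo       # install foo and add it to the root specs
$ spack uninstall --remove foo  # uninstall foo and remove it from the root specs
```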
In more detail: if you have environments e1 and e2, and specs [P, Q, R]
such that P depends on R, Q depends on R, [P, R] are in e1, and [Q, R]
are in e2:
* `spack uninstall --dependents --remove r` in e1: removes R from e1
(but does not uninstall it) and uninstalls (and removes) P
* `spack uninstall -f --dependents r` in e1: will uninstall P, Q, and
R (i.e. e2 will have dependent specs uninstalled as a side effect)
* `spack uninstall -f --dependents --remove r` in e1: this uninstalls
P, Q, and R, and removes [P, R] from e1
* `spack uninstall -f --remove r` in e1: uninstalls R (so it is
"missing" in both environments) and removes R from e1 (note that e1
would still install R as a dependency of P, but it would no longer
be listed as a root spec)
* `spack uninstall --dependents r` in e1: will fail because e2 needs R
Individual unit tests were created for each of these scenarios.
Somehow a network error when cloning the repo for ci gets
categorized by gitlab as a script failure. To make sure we retry
jobs that failed for that reason or a similar one, include
"script_failure" as one of the reasons for retrying service jobs
(which include "no specs to rebuild" jobs, update buildcache
index jobs, and temp storage cleanup jobs.
Add a `project` block to the toml config along with development and CI
dependencies and a minimal `build-system` block, doing basically
nothing, so that spack can be bootstrapped to a full development
environment with:
```shell
$ hatch -e dev shell
```
or for a minimal environment without hatch:
```shell
$ python3 -m venv venv
$ source venv/bin/activate
$ python3 -m pip install --upgrade pip
$ python3 -m pip install -e '.[dev]'
```
This means we can re-use the requirements list throughout the workflow
yaml files and otherwise maintain this list in *one place* rather than
several disparate ones. We may be stuck with a couple more temporarily
to continue supporting python2.7, but aside from that it's fewer places
to get out of sync, plus a couple of new bootstrap options.
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
This change uses the aws cli, if available, to retrieve spec files
from the mirror to a local temp directory, then parallelizes the
reading of those files from disk using multiprocessing.ThreadPool.
If the aws cli is not available, then a ThreadPool is used to fetch
and read the spec files from the mirror.
Using the aws cli results in a ~16x speed-up when recreating the binary
mirror index, while just parallelizing the fetching and reading
results in a ~3x speed-up.
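A minimal sketch of the fallback path, with a plain `ThreadPool` fanned out over the spec file URLs (names are illustrative):
```python
from multiprocessing.pool import ThreadPool
from urllib.request import urlopen

spec_file_urls = []  # populated from the mirror listing (illustrative)

def fetch_and_read(url):
    # Fetch one spec file from the mirror and return its contents.
    with urlopen(url) as response:
        return response.read()

with ThreadPool(processes=16) as pool:
    spec_files = pool.map(fetch_and_read, spec_file_urls)
```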
The compiler bootstrapping logic currently does not add a task when the compiler package is already in the install task queue. This causes failures when the compiler package is added without the additional metadata telling the task to update the compilers list.
Solution: requeue compilers for bootstrapping when needed, to update `task.compiler` metadata.
Currently, develop specs that are not roots and are not explicitly listed dependencies
of the roots are not applied.
- [x] ensure dev specs are applied.
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
`spack env create` enables a view by default (in a weird hidden
directory, but well...). This is asking for trouble together with the
other default of `concretizer:unify:false`, since having different
flavors of the same spec in an environment leads to collision errors
when generating the view.
A change of defaults improves the user experience: `unify:true` makes
the most sense, since any time the issue is brought up in Slack, the
user ends up changing the concretization config anyway; the intention
was never to have different flavors of the same spec, and install times
are decreased.
Further, we improve the docs and drop the duplicate root spec limitation.
Dependencies specified by hash are unique in Spack in that the abstract
specs are created with internal structure. In this case, the constraint
generation for spec matrices fails due to flattening the structure.
It turns out that the dep_difference method for Spec.constrain does not
need to operate on transitive deps to ensure correctness. Removing transitive
deps from this method resolves the bug.
- [x] Includes regression test
Without this, Meson will use its Wraps to automatically download and
install dependencies. We want to manage dependencies explicitly,
therefore disable this functionality.
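Presumably this maps to Meson's wrap mode; as a sketch, the equivalent manual invocation with Meson's standard option would be:
```shell
$ meson setup builddir --wrap-mode=nodownload
```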
Currently, Spack can fail for a valid spec if the spec is constructed from overlapping, but not conflicting, concrete specs via the hash.
For example, if abcdef and ghijkl are the hashes of specs that both depend on zlib/mnopqr, then foo ^/abcdef ^/ghijkl will fail to construct a spec, with the error message "Cannot depend on zlib... twice".
This PR changes this behavior to check whether the specs are compatible before failing.
With this PR, foo ^/abcdef ^/ghijkl will concretize.
As a side-effect, so will foo ^zlib ^zlib and other specs that are redundant on their dependencies.
* ADD version 0.19.0 in py-gym recipe
* Fix py-gym download url and dependencies for v0.19.0
* Fix stupid error in previous commit: no change in py-cloudpickle dep
* Yes, I should've paid more attention! O:)
I think now it is right, thanks!
Argparse started raising ArgumentError exceptions
when the same parser is added twice. Therefore, we
perform the addition only if the parser is not there
already.
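A sketch of the guard using only the standard library (names are illustrative):
```python
import argparse

parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest="command")

def get_or_add_subparser(name):
    # Newer argparse raises ArgumentError when the same subcommand is
    # added twice, so only add it when it is not there already.
    if name in subparsers.choices:
        return subparsers.choices[name]
    return subparsers.add_parser(name)
```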
Port match syntax to our unparser
Compilers and linker optimize string constants for space by aliasing
them when one is a suffix of another. For gcc / binutils this happens
already at -O1, due to -fmerge-constants. This means that we have
to take care during relocation to always preserve a certain length
of the suffix of those prefixes that are C-strings.
In this commit we pick length 7 as a safe suffix length, assuming the
suffix is typically the 7 characters from the hash (i.e. random), so
it's unlikely to alias with any string constant used in the sources.
In general we now pad shortened strings from the left with leading
dir separators, but in the case of C-strings that are much shorter
and don't share a common suffix (due to projections), we do allow
shrinking the C-string, appending a null, and retaining the old part
of the prefix.
Also when rewiring, we ensure that the new hash preserves the last
7 bytes of the old hash.
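A toy sketch of the left-padding idea (pure illustration, not Spack's relocation code):
```python
def replace_prefix_padded(data: bytes, old: bytes, new: bytes) -> bytes:
    # Pad the shorter replacement with leading dir separators so the string
    # keeps its length and, crucially, its suffix bytes stay in place
    # ("//opt/new" and "/opt/new" name the same path).
    assert len(new) <= len(old)
    return data.replace(old, b"/" * (len(old) - len(new)) + new)
```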
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
A user may want to set some attributes on a package without actually modifying the package (e.g. if they want to git pull updates to the package without conflicts). This PR adds a per-package configuration section called `package_attributes`, which is a dictionary of attribute names to desired values. For example:
packages:
  openblas:
    package_attributes:
      submodules: true
      git: "https://github.com/myfork/openblas"
in this case, the package will always retrieve git submodules, and will use an alternate location for the git repo.
While git, url, and submodules are the attributes for which we envision the most usage, this allows any attribute to be overridden, and the acceptable values are any value parseable from yaml.
Newer versions of the CrayPE for EX systems have standalone compiler executables for CCE and compiler wrappers for Cray MPICH. With those, we can treat the cray systems as part of the linux platform rather than having a separate cray platform.
This PR:
- [x] Changes cray platform detection to ignore EX systems with Craype version 21.10 or later
- [x] Changes the cce compiler to be detectable via paths
- [x] Changes the spack compiler wrapper to understand the executable names for the standalone cce compiler (`craycc`, `crayCC`, `crayftn`).
For some instances of externally-provided Python (e.g. Homebrew),
the LDLIBRARY/LIBRARY config variables don't actually refer to
libraries and should therefore be excluded from ".libs".
Only enable the hdf5-vfd-gds package if it can compile.
- hdf5-vfd-gds needs cuda@11.7.1+ to be able to `find_library` for cuFile.
- Only enable hdf5-vfd-gds in the sdk if cuda@11.7.1+ is available.
If an earlier version of cuda is being used, do not depend on the
hdf5-vfd-gds package at all.
* take two
* Add missing import statement
* Group dependencies together
* Extract libtiff arguments
* Extract libpng arguments
* Push preamble variable into png_args and tiff_args
* Extract setting args associated with the screenshot variant
* Inlined a few variables
* Modify only build targets and install targets
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Whenever the rpath string actually _grows_, it falls back to patchelf;
when it stays the same length or gets shorter, we update it in-place,
padded with null bytes.
This PR only deals with absolute -> absolute rpath replacement. We don't
use `_build_tarball(relative=True)` in our CI. If `relative` then it falls
back to the old replacement code.
With this PR, relocation time goes down significantly, likely because patchelf
does some odd things with mmap, causing lots of overhead. Example:
- `binutils`: 700MB installed, goes from `1.91s` to `0.57s`, or `3.4x` faster.
Relocation time: 27% -> 10% of total install time
- `llvm`: 6.8GB installed, goes from `28.56s` to `5.38s`, or `5.3x` faster.
Relocation time: 44% -> 13% of total install time
The bottleneck is now decompression.
Note: I'm somewhat confused about the "relative rpath" code paths. Right
now this PR only deals with absolute -> absolute replacement. As far as
I understand, if you embrace relative rpaths when uploading to the
buildcache, the whole point is you _don't_ want to patch rpaths on
install? So it seems fine to not expand `$ORIGIN` again imho.
When a package asks for non-parallel make, we need to force `make -j1` because just doing `make` will run in parallel under jobserver (e.g. `spack env depfile`).
We now always add `-j1` when asked for a non-parallel execution (even if there is no jobserver).
And each `MakeExecutable` can now ask for jobserver support or not. For example: the default `ninja` does not support jobserver so spack applies the default `-j`, but `ninja@kitware` or `ninja-fortran` does, so spack doesn't add `-j`.
Tip: you can run `SPACK_INSTALL_FLAGS=-j1 make -f spack-env-depfile.make -j8` to avoid massive job-spawning caused by build tools that don't support the jobserver (ninja).
* testing ssh key
* test
* LR : Creating the package to install the gegelati app
* LR : Gegelati, a TPG C++ library added and fully tested
* LR : adjusting for fork
* LR: taking out the boilerplates
* LR: taking out the rest
We try to avoid non-default variant values in the concretizer, but this doesn't make
sense for variants forced to take some non-default value by variant propagation.
Counting this as a penalty effectively biases the concretizer for small specs dependency
graphs -- something we try very hard to avoid elsewhere because it can lead to very
strange decisions.
Example: with the penalty, `spack spec hdf5` will choose the default `openmpi` as its
`mpi` provider, but `spack spec hdf5 ~~shared` will choose `mpich` because it has to set
fewer non-default variant values because `mpich`'s DAG is smaller. That's not a good
reason to prefer a non-default virtual provider.
To fix this, if the user explicitly requests a non-default value to be propagated, there
shouldn't be a penalty. Variant values set on the CLI already don't count as default; we
just need to extend that to propagated values.
Adds another post-install hook that loops over the install prefix, looking for ELF files that are shared libraries, and sets each soname to the library's own absolute path.
The idea being, whenever somebody links against those libraries, the linker copies the soname (which is the absolute path to the library) as a "needed" library, so that at runtime the dynamic loader realizes the needed library is a path which should be loaded directly without searching.
As a result:
1. rpaths are not used for the fixed/static list of needed libraries in the dynamic section (only for _actually_ dynamically loaded libraries through `dlopen`), which largely solves the issue that Spack's rpaths are a heuristic (`<prefix>/lib` and `<prefix>/lib64` might not be where libraries really are...)
2. improved startup times (no library search required)
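Conceptually, the hook does something like the following for each shared library it finds (a sketch using the standard `patchelf` CLI, not necessarily Spack's exact mechanism):
```shell
$ patchelf --set-soname "$prefix/lib/libfoo.so" "$prefix/lib/libfoo.so"
```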
Untouched spec pruning was added to reduce the number of specs
developers see getting rebuilt in their PR pipelines that they
don't understand. Because the state of the develop mirror lags
quite far behind the tip of the develop branch, PRs often find
they need to rebuild things untouched by their PR.
Untouched spec pruning was previously implemented by finding all
specs in the environment with names of packages touched by the PR,
traversing the DAGs of those specs in both directions, and adding
all dependencies as well as dependents to a list of concrete specs
that should not be considered for pruning.
We found that this heuristic results in too many pruned specs, and
that dependents of touched specs must have all their dependencies
added to the list of specs that should not be considered for pruning.
* SEACAS: Update package.py to handle new SEACAS project name
The base project name for the SEACAS project has changed from
"SEACASProj" to "SEACAS" as of @2022-10-14, so the package
needed to be updated to use the new project name when needed.
The refactor also changes several lines of the form
`"-DSome_CMAKE_Option:BOOL=ON"`
to
`define("Some_CMAKE_Option", True)`.
* SEACAS: Additional refactorings
* Replaced all cmake "-Dsomething=other" lines with either `define`
or `define_from_variant` functions.
Consolidated the application (fortran, legacy, all) enabling lines
into loops over the code names. This makes it easier to see the
categorization of applications and also to add/move/remove an application.
Reordered some lines; general cleanup and restructuring.
* Address flake8 issues
* Remove trailing whitespace
* Reformat using black
* add new package: py-pylatex
* fix bugs
* add extras indicated in setup.py
* Update var/spack/repos/builtin/packages/py-pylatex/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-pylatex/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* improvements
* remove git merge related lines
* tidy
* Update var/spack/repos/builtin/packages/py-pylatex/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* remove variant
* [@spackbot] updating style on behalf of Sinan81
Co-authored-by: sbulut <sbulut@3vgeomatics.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Sinan81 <Sinan81@users.noreply.github.com>
This issue was introduced in #29761:
```
==> Installing ncurses-6.3-22hz6q6cvo3ep2uhrs3erpp2kogxncbn
==> No binary for ncurses-6.3-22hz6q6cvo3ep2uhrs3erpp2kogxncbn found: installing from source
==> Using cached archive: /spack/var/spack/cache/_source-cache/archive/97/97fc51ac2b085d4cde31ef4d2c3122c21abc217e9090a43a30fc5ec21684e059.tar.gz
==> No patches needed for ncurses
==> ncurses: Executing phase: 'autoreconf'
==> ncurses: Executing phase: 'configure'
==> ncurses: Executing phase: 'build'
==> ncurses: Executing phase: 'install'
==> Error: AttributeError: 'str' object has no attribute 'propagate'
The 'ncurses' package cannot find an attribute while trying to build from sources. This might be due to a change in Spack's package format to support multiple build-systems for a single package. You can fix this by updating the build recipe, and you can also report the issue as a bug. More information at https://spack.readthedocs.io/en/latest/packaging_guide.html#installation-procedure
/spack/lib/spack/spack/build_environment.py:1075, in _setup_pkg_and_run:
1072 tb_string = traceback.format_exc()
1073
1074 # build up some context from the offending package so we can
>> 1075 # show that, too.
1076 package_context = get_package_context(tb)
1077
1078 logfile = None
```
It turns out this was caused by a bug that had been around much longer, in which the flags were passed by reference to the flag_handler, and the flag_handler was modifying the spec object, not just the flags given to the build system. The scope of this bug was limited by the forking model in Spack, which is how it went under the radar for so long.
PR includes regression test.
* remove deptype_query remnants
* deptypes -> deptype
These arguments haven't existed since 2017, but `traverse` now fails on unknown **kwargs, so they have finally popped up.
This updates the propagation logic used in `concretize.lp` to avoid rules with `path()`
in the body and instead base propagation around `depends_on()`.
Currently, compiler flags and variants are inconsistent: compiler flags set for a
package are inherited by its dependencies, while variants are not. We should have these
be consistent by allowing for inheritance to be enabled or disabled for both variants
and compiler flags.
- [x] Make new (spec language) operators
- [x] Apply operators to variants and compiler flags
- [x] Conflicts currently result in an unsatisfiable spec
(i.e., you can't propagate two conflicting values)
What I propose is using two of the currently used sigils to signal that the variant
or compiler flag will be inherited:
Example syntax:
- `package ++variant`
enabled variant that will be propagated to dependencies
- `package +variant`
enabled variant that will NOT be propagated to dependencies
- `package ~~variant`
disabled variant that will be propagated to dependencies
- `package ~variant`
disabled variant that will NOT be propagated to dependencies
- `package cflags==True`
`cflags` will be propagated to dependencies
- `package cflags=True`
`cflags` will NOT be propagated to dependencies
Syntax for string-valued variants is similar to compiler flags.
Fixes an issue on the RHEL8 UBI container where this test would fail because `gr_mem`
was empty for every entry in the `grp` DB.
You have to check *both* the `pwd` database (which has primary groups) and `grp` (which
has other groups) to do this correctly.
- [x] update `llnl.util.filesystem.group_ids()` to do this
- [x] use it in the `sbang` test
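A sketch of the combined lookup using only the standard library:
```python
import grp
import pwd

def group_ids(uid):
    # The primary group lives in the pwd database; supplementary groups
    # live in grp. On the RHEL8 UBI image, gr_mem can be empty for every
    # grp entry, so checking grp alone is not enough.
    user = pwd.getpwuid(uid)
    gids = {user.pw_gid}
    gids.update(g.gr_gid for g in grp.getgrall() if user.pw_name in g.gr_mem)
    return sorted(gids)
```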
This PR introduces breadth-first traversal, and moves depth-first traversal
logic out of Spec's member functions, into `traverse.py`.
It introduces a high-level API with three main methods:
```python
spack.traverse.traverse_edges(specs, kwargs...)
spack.traverse.traverse_nodes(specs, kwargs...)
spack.traverse.traverse_tree(specs, kwargs...)
```
with the usual `root`, `order`, `cover`, `direction`, `deptype`, `depth`, `key`,
`visited` kwargs for the first two.
What's new is that `order="breadth"` is added for breadth-first traversal.
The lower level API is not exported, but is certainly useful for advanced use
cases. The lower level API includes visitor classes for direction reversal and
edge pruning, which can be used to create more advanced traversal methods,
especially useful when the `deptype` is not constant but depends on the node
or depth.
---
There's a couple nice use-cases for breadth-first traversal:
- Sometimes roots have to be handled differently (e.g. follow build edges of
roots but not of deps). BFS ensures that root nodes are always discovered at
depth 0, instead of at any depth > 1 as a dep of another root.
- When printing a tree, it would be nice to reduce indent levels so it fits in the
terminal, and ensure that e.g. `zlib` is not printed at indent level 10 as a
dependency of a build dep of a build dep -- rather if it's a direct dep of my
package, I wanna see it at depth 1. This basically requires one breadth-first
traversal to construct a tree, which can then be printed with depth-first traversal.
- In environments in general, it's sometimes inconvenient to have a double
loop: first over the roots then over each root's deps, and maintain your own
`visited` set outside. With BFS, you can simply init the queue with the
environment root specs and it Just Works. [Example here](3ec7304699/lib/spack/spack/environment/environment.py (L1815-L1816))
Currently, many tests hardcode to older versions of gcc for comparisons of
concretization among compiler versions. Those versions are too old to concretize for
`aarch64`-family targets, which leads to failing tests on `aarch64`.
This PR fixes those tests by updating the compiler versions used for testing.
Currently, many tests hardcode the expected architecture result in concretization to the
`x86_64` family of architectures.
This PR generalizes the tests that can be generalized, to cover multiple architecture
families. For those that test specific relationships among `x86_64`-family targets, it
ensures that concretization uses the `x86_64`-family targets in those cases.
Currently, many tests rely on the fact that `AutotoolsPackage` imposes no dependencies
on the inheriting package. That is not true on `aarch64`-family architectures.
This PR ensures that the fact that `AutotoolsPackage` on `aarch64` pulls in a dependency on
`gnuconfig` is ignored when testing for the appropriate relationships among dependencies.
Additionally, 5 tests currently prompt the user for input when `gpg` is available in the
user's path. This PR fixes that issue. And 7 tests fail currently when the user has a
yubikey available. This PR fixes the incorrect gpg argument causing those issues.
The `spack info <package>` command does not show the `Virtual Packages:` output unless the `--virtuals` option is passed. Before this change, the information that the command is supposed to be illustrating was not shown in the example, which was confusing.
* julia: don't look for the openlibm libraries when unneeded
Cause spack to *not* check for the existence of the openlibm libraries (by adding it to the pkgs list) when ~openlibm is specified.
* [@spackbot] updating style on behalf of downloadico
Co-authored-by: downloadico <downloadico@users.noreply.github.com>
- the updated OpenFOAM wmake rules now allow multiple locations for
compiler flags:
* wmake/General/common/c++Opt [central]
* wmake/linux64Gcc/c++Opt [traditional]
- match both '=' and ':=' make rule lines
Co-authored-by: Mark Olesen <Mark.Olesen@esi-group.com>
Changes to improve locating shared libraries on Windows, which in
turn enables the use of Clingo. This PR attempts to establish a
proper distinction between linking on Windows vs. Linux/Mac: on
Windows, linking is always done with .lib files (never .dll files).
This somewhat complicates the model since the Spec.libs method could
return libraries that were used for both linking and loading, but
since these are not always the same on Windows, it was decided to
treat Spec.libs as being for link-time libraries. Additional functions
are added to help dependents locate run-time libraries.
* Clingo is now the default concretizer on Windows
* Clingo is now the concretizer used for unit tests on Windows
* Fix a permissions issue that can occur while moving Git files during
fetching/staging
* Packages can now implement "win_add_library_dependent" to register
files/directories that include libraries that would need to link
to dependency dlls
* Packages can now implement "win_add_rpath" to register the locations
of dlls that dependents would want to load
* "Spec.libs" on Windows is updated to return link-time libraries
(i.e. .lib files, rather than .dll files)
* PackageBase.rpath on Windows is now updated to return the most-likely
locations where .dlls will be found (which is generally in the bin/
directory)
* Nalu-Wind: Allow for standard versions of trilinos
This will allow us to utilize custom numeric versions for trilinos in `spack-manager` while we continue to develop `nalu-wind`.
Pinging @eugeneswalker @jrood-nrel @tasmith4
* Update var/spack/repos/builtin/packages/nalu-wind/package.py
Currently there's a slow sequential step in binary relocation where all
strings of a binary are collected, with rpaths removed, and then
filtered for the old install root.
This is completely unnecessary, and also incorrect, since we replace
more than just the old install root in the prefix-to-prefix mapping. In
fact, the prefix-to-prefix replacement is already parallel and a single
pass, so even as an optimization this filter makes no sense anymore.
Therefore we remove it, and instead:
- single pass over the binary data matching all prefixes
- collect offsets and replacement strings
- do in-place updates with `fseek` / `fwrite` (sketched below), since
  typically our replacements touch O(few bytes) while the file is O(many megabytes)
- be nice: leave the file untouched if some string can't be
replaced
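A hedged sketch of the in-place update step, assuming the single matching pass has already produced `(offset, replacement)` pairs and each replacement is padded to exactly the length of the bytes it overwrites:
```
def patch_in_place(path, replacements):
    """Overwrite matched byte ranges in place instead of rewriting the file.

    replacements: iterable of (offset, new_bytes) pairs, where new_bytes
    has the same length as the span it replaces.
    """
    with open(path, "r+b") as f:
        for offset, new_bytes in replacements:
            f.seek(offset)
            f.write(new_bytes)
```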
* Added py-medaka and dependencies
* fixed py-parasail build error
* medaka still doesn't link against the correct libdeflate
* fixed pyspoa deps
* added htslib.patch, confirmed builds and runs
* fixed style
* Update var/spack/repos/builtin/packages/py-auditwheel/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* made requested changes
* added targets for pyspoa dep
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
```
/home/xsdk/spack.x/lib/spack/env/oneapi/icx -DAdd_ -Dscalapack_EXPORTS -I/opt/intel/oneapi/mpi/2021.7.0/include -O3 -DNDEBUG -fPIC -MD -MT CMakeFiles/scalapack.dir/BLACS/SRC/dgamx2d_.c.o -MF CMakeFiles/scalapack.dir/BLACS/SRC/dgamx2d_.c.o.d -o CMakeFiles/scalapack.dir/BLACS/SRC/dgamx2d_.c.o -c /home/xsdk/spack.x/spack-stage/spack-stage-netlib-scalapack-2.2.0-uj3jepiowz5is4hmdmjrzjltetgdr3lx/spack-src/BLACS/SRC/dgamx2d_.c
/home/xsdk/spack.x/spack-stage/spack-stage-netlib-scalapack-2.2.0-uj3jepiowz5is4hmdmjrzjltetgdr3lx/spack-src/BLACS/SRC/igsum2d_.c:154:7: error: call to undeclared function 'BI_imvcopy'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
      BI_imvcopy(Mpval(m), Mpval(n), A, tlda, bp->Buff);
      ^
```
Currently the vasp package always enables the use of shmem to reduce algorithm memory usage (see
https://www.vasp.at/wiki/index.php/Precompiler_options). This is great, but on some systems it
gives compile errors related to the interoperability of C and Fortran. This PR makes that shmem
flag optional, but retains the existing default-on behavior.
* Added support for building the DiHydrogen package and LBANN extensions
to DiHydrogen with ROCm libraries.
Fixed a bug on Cray systems where CMake didn't try hard enough to find
an MPI-compatible compiler wrapper. Made it look harder.
Added support for the roctracer package when using ROCm libraries.
* Fixed how ROCm support is defined for pre-v0.3 versions.
- hdf5-vfd-gds:
- Add new version 1.0.2 compatible with hdf5@1.13.
- CMake is a build dependency.
  - Set `HDF5_PLUGIN_PATH` in the runtime environment, since this plugin
    is loaded dynamically.
- SDK:
- The VFD GDS driver only has utility when CUDA is enabled.
- Require hdf5-vfd-gds@1.0.2+ (1.0.1 and earlier do not compile).
* Add patches for building clingo with MSVC
* Help python find clingo DLL
* If an executable is located in "C:\Program Files", `Executable` was
running into issues with the extra space. This quotes the exe path
to ensure that it is treated as a single value.
Signed-off-by: Kiruya Momochi <65301509+KiruyaMomochi@users.noreply.github.com>
This commit extends the DSL that can be used in packages
to allow declaring that a package uses different build-systems
under different conditions.
It requires each spec to have a `build_system` single-valued
variant. The variant can be used in many contexts to query, manipulate
or select the build system associated with a concrete spec.
The knowledge to build a package has been moved out of the
PackageBase hierarchy, into a new Builder hierarchy. Customization
of the default behavior for a given builder can be obtained by
coding a new derived builder in package.py.
The "run_after" and "run_before" decorators are now applied to
methods on the builder. They can also incorporate a "when="
argument to specify that a method is run only when certain
conditions apply.
For packages that do not define their own builder, forwarding logic
is added between the builder and package (methods not found in one
will be retrieved from the other); this PR is expected to be fully
backwards compatible with unmodified packages that use a single
build system.
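For illustration, a hypothetical `package.py` using this DSL. The shape (`build_system`, `conditional`, a `with when(...)` block, builder-level decorators) follows the description above, but the package name, version ranges, and dependency are invented, so treat this as a sketch rather than the definitive API:
```
from spack.build_systems import cmake
from spack.package import *


class Example(CMakePackage, AutotoolsPackage):
    """Hypothetical package that switched from Autotools to CMake at 1.0."""

    build_system(
        conditional("cmake", when="@1.0:"),
        conditional("autotools", when="@:0.9"),
        default="cmake",
    )

    # Dependency that only applies when the cmake build system is selected
    with when("build_system=cmake"):
        depends_on("cmake@3.18:", type="build")


class CMakeBuilder(cmake.CMakeBuilder):
    # run_after/run_before now decorate methods on the builder
    @run_after("install")
    def post_install_check(self):
        pass
```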
* Updating package file for osu-micro-benchmarks for the 6.2 release
* updating sha hash for 6.2 tarball
Co-authored-by: natshineman <shineman.5@osu.edu>
* Add netcdf-c 4.9.0 and netcdf-fortran 4.6.0
With v4.9.0, netcdf-c introduces a zstandard compression option, which is added as a variant.
* Fix when= in dependency
* Turn on variant zstd by default
Co-authored-by: kgerheiser <kgerheiser@icloud.com>
* alquimia, pflotran, plasma, py-mpi4py, strumpack - add in new versions
* Fix hip CI failure
Co-authored-by: eugeneswalker <eugenesunsetwalker@gmail.com>
Instead of looping over multiple regexes and the entire text file
contents, create a giant regex with all literal prefixes and do a single
pass over files to detect prefixes. Not only is a single pass faster,
it's also likely that the regex is compiled better, given that most
prefixes share a common ... prefix.
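A sketch of the idea, assuming a plain list of literal byte-string prefixes (illustrative, not the exact Spack code):
```
import re

def find_prefix_offsets(data, prefixes):
    """Single pass over `data`, matching any of the literal prefixes."""
    # Escape each literal and join into one alternation; the shared
    # leading bytes let one compiled regex do all the work at once.
    pattern = re.compile(b"|".join(re.escape(p) for p in prefixes))
    return [(m.start(), m.group()) for m in pattern.finditer(data)]

offsets = find_prefix_offsets(
    b"...\x00/old/root/gcc-12.2.0/lib\x00...",
    [b"/old/root/gcc-12.2.0", b"/old/root/zlib-1.2.13"],
)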
In the dfs code, flip edges so that `parent` means `from` and
`spec` means `to` in the direction of traversal. This makes it slightly
easier to write generic/composable code. For example when using visitors
where one visitor reverses direction, and another only cares about
accepting particular edges or not depending on whether the target node
is seen before, it would be good if the second visitor didn't have to
know whether the order was changed or not.
Use the same compression level as `gzip` (6) instead of what Python uses
(9).
The LLVM tarball takes 4m instead of 12m to create, and is <1% larger.
That's not worth the wait...
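For reference, a sketch of the difference in Python, where the `gzip` module defaults to `compresslevel=9` while the `gzip` CLI defaults to 6 (the path being archived is a placeholder):
```
import gzip
import tarfile

# Matching the gzip CLI default (6) instead of Python's default (9)
# trades <1% in size for a much faster tarball creation.
with gzip.GzipFile("output.tar.gz", mode="wb", compresslevel=6) as gz:
    with tarfile.open(fileobj=gz, mode="w") as tar:
        tar.add("some/install/prefix")
```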
* udunits: Update download URL
* udunits: Deprecate older versions
Unidata now only provides the latest version of each X.Y branch. Older 2.2 versions have been deprecated accordingly but are still available in the build cache.
Co-authored-by: RemiLacroix-IDRIS <RemiLacroix-IDRIS@users.noreply.github.com>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* Update wgrib2 from JCSDA/NOAA-EMC fork
* var/spack/repos/builtin/packages/wgrib2/package.py: fix typo in comment, add conflict for variants netcdf3, netcdf4
* wget hdf5/netcdf4 internal dependencies for wgrib2
* Black-format var/spack/repos/builtin/packages/wgrib2/package.py
* More format changes in var/spack/repos/builtin/packages/wgrib2/package.py
#32137 added an option to update() a BinaryCacheIndex with a
cooldown: repeated attempts within this cooldown would not
actually retry. However, the cooldown was not properly
tracked for failures (which is common when the mirror
does not store any binaries and therefore has no index.json).
This commit ensures that update(..., with_cooldown=True) will
also skip the update even if a failure has occurred within the
cooldown period.
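A minimal sketch of the fixed behavior, assuming a monotonic clock and a configured window; the point is that the attempt time is recorded whether or not the update succeeds:
```
import time

class CooldownTracker:
    def __init__(self, window_seconds):
        self.window = window_seconds
        self.last_attempt = None

    def should_attempt(self):
        """True if we are outside the cooldown window."""
        now = time.monotonic()
        if self.last_attempt is not None and now - self.last_attempt < self.window:
            return False
        # Record the attempt up front, so failures also start a cooldown.
        self.last_attempt = now
        return True
```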
Due to reuse concretization, we may generate DAGs with two occurrences
of the same package corresponding to distinct specs. This happens when
build dependencies are reused, since their dependencies are ignored in
concretization.
This caused a regression, for example: `spec['openssl']` would take the
'openssl' of the build dep `cmake`, instead of the direct `openssl`
dependency, simply because the edge to `cmake` was traversed first and
we do depth-first traversal.
One solution that was discussed is to limit `spec[name]` to just direct
deps, or direct deps + transitive link deps, but this is too breaking.
Instead, this PR simply prioritizes transitive link and direct
build/run/test deps, and then falls back to a full DAG traversal (see
the sketch below). So, it's just about the order of iteration.
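A sketch of that lookup order, where `traverse(spec, deptype=...)` is a stand-in for Spack's DAG traversal (only the iteration order matters here):
```
def find_dep(spec, name, traverse):
    """Return the first matching dep, preferring link/run edges."""
    for deptype in (("link", "run"), "all"):
        # First pass covers direct deps plus transitive link/run deps;
        # the second pass falls back to the full DAG.
        for dep in traverse(spec, deptype=deptype):
            if dep.name == name:
                return dep
    raise KeyError(name)
```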
* Update py-sphinxcontrib-mermaid
* Add py-myst-parser
* Fix py-mdit-py-plugins and py-myst-parser dependencies
* Add py-exhale package
* Update var/spack/repos/builtin/packages/py-mdit-py-plugins/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-myst-parser/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-myst-parser/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update var/spack/repos/builtin/packages/py-myst-parser/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
* Update py-exhale and py-myst-parser dependencies
* Add @svenevs as py-exhale maintainer
* Update var/spack/repos/builtin/packages/py-mdit-py-plugins/package.py
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Scan the text files for relocatable prefixes *before* creating a tarball,
to reduce the amount of work to be done during install from binary
cache.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
* py-drep: new package
* fixed file extension
* added darwin conflict
* py-checkm-genome and py-pysam: bumped version and updated deps (#10)
added checkm and pysam deps
* added dep documentation and fixed style
* changed checkm and pysam back to dev version for upstreaming
* added url and perl run dep
* fixed style
Instead of showing
```
==> Error: Timed out waiting for a write lock.
```
show
```
==> Error: Timed out waiting for a write lock after 1.200ms and 4 attempts on file: /some/file
```
so that we actually get to see where acquiring a lock failed even when not
running in debug mode.
Also use pretty time units everywhere, so we don't get 1.45e-9 seconds
but 1.450ns etc. (sketched below).
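A sketch of the pretty-printing, assuming a simple unit table; this is illustrative, not necessarily the helper Spack ships:
```
def pretty_seconds(seconds):
    """Format 1.45e-9 as '1.450ns', 0.0012 as '1.200ms', etc."""
    for unit, factor in (("s", 1.0), ("ms", 1e-3), ("us", 1e-6), ("ns", 1e-9)):
        if seconds >= factor:
            return f"{seconds / factor:.3f}{unit}"
    return f"{seconds / 1e-9:.3f}ns"

print(pretty_seconds(1.45e-9))  # 1.450ns
```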
* backtraces without --debug
Currently `--debug` is too verbose and not-`--debug` gives too little
context about where exceptions are coming from.
So, instead, it'd be nice to have `spack --backtrace` and
`SPACK_BACKTRACE=1` as methods to get something in between: no verbose
debug messages, but always a full backtrace (see the sketch below).
This is useful for CI, where we don't want to drown in debug messages
when installing deps, but we do want to get details where something goes
wrong if it goes wrong.
* completion
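A minimal, hypothetical sketch of that in-between behavior, with a stub `main` standing in for real work:
```
import os
import sys
import traceback

def main():
    raise RuntimeError("demo failure")  # stand-in for real work

show_backtrace = "--backtrace" in sys.argv or os.environ.get("SPACK_BACKTRACE") == "1"

try:
    main()
except Exception as exc:
    if show_backtrace:
        traceback.print_exc()  # full backtrace, but no debug chatter
    else:
        print(f"==> Error: {exc}")
```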
* acts: new versions
In the 20.x release line, these are the changes: https://github.com/acts-project/acts/compare/v20.0.0...v20.2.0
- `option(ACTS_SETUP_ACTSVG "Build ActSVG display plugin" OFF)` introduced in v20.1.0
- `option(ACTS_USE_SYSTEM_ACTSVG "Use the ActSVG system library" OFF)` introduced in v20.1.0
- `option(ACTS_BUILD_PLUGIN_ACTSVG "Build SVG display plugin" OFF)` introduced in v20.1.0
- `option(ACTS_USE_EXAMPLES_TBB "Use Threading Building Blocks library in examples" ON)` introduced in v20.1.0
- `option(ACTS_EXATRKX_ENABLE_ONNX "Build the Onnx backend for the exatrkx plugin" OFF)` introduced in v20.2.0
- `option(ACTS_EXATRKX_ENABLE_TORCH "Build the torchscript backend for the exatrkx plugin" ON)` introduced in v20.2.0
In the 19.x release line, these are the changes: https://github.com/acts-project/acts/compare/v19.7.0...v19.9.0
- `option(ACTS_USE_EXAMPLES_TBB "Use Threading Building Blocks library in examples" ON)` introduced in v19.8.0
The new build options have not been implemented in this commit but will be implemented next.
* acts: new variant svg
* actsvg: new package
* actsvg: style fixes
* acts: new versions 20.3.0 and 19.10.0
* actsvg: depends_on boost, googletest
* actsvg: new version 0.4.26 (and style fix)
Includes fix to build issue when +examples, https://github.com/acts-project/actsvg/pull/23
* acts: new variant tbb when +examples @19.8:19 @20.1:
* acts: set ACTS_USE_EXAMPLES_TBB
* acts: no need for ACTS_SETUP_ACTSVG
* acts: move tbb variant to examples block
* acts: ACTS_USE_SYSTEM_ACTSDD4HEP removed in 20.3
* acts: use new ACTS_USE_SYSTEM_LIBS
* acts-dd4hep: new version 1.0.1, maintainer handle fixed
* acts: simplify variant tbb condition
* py-checkm-genome and py-pysam: bumped version and updated deps
* updated setuptools dep type
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
When we lose a running pod (possibly due to the loss of a spot instance) or encounter
some other infrastructure-related failure of this job, we need to retry
it. This retries the job the maximum number of times in those cases.
Currently `relocate_text` and `relocate_text_bin` are unsafe in the
sense that they run in parallel, and lead to races when modifying
different items pointing to the same inode.
This leads to the issue observed in #33453.
This PR:
1. Renames those functions to `unsafe_*` so people are aware
2. Adds logic to deal with hardlinks in current binary packages
3. Adds logic to deal with hardlinks when creating new binary tarballs,
so the install side doesn't have to de-dupe hardlinks.
4. Adds a test for 3
The assumption is that all our relocation logic preserves inodes. That
is, we should never copy a file, modify it, and then move it back. I
quickly verified, and it seems like this is true for (binary) text
relocation, as well as rpath patching in patchelf (even when the file
grows) and mach-o fixes.
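A sketch of the hardlink handling described above, grouping paths by inode so each underlying file is relocated exactly once (illustrative, not the exact Spack code):
```
import os
from collections import defaultdict

def group_by_inode(paths):
    """Map (device, inode) -> all paths sharing that inode."""
    groups = defaultdict(list)
    for path in paths:
        st = os.lstat(path)
        groups[(st.st_dev, st.st_ino)].append(path)
    return groups

# Usage: relocate only one path per group; hardlinked siblings share
# the same bytes, so a single rewrite covers them without races.
```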
* axom@0.7.0: require cmake@3.21:
* Update var/spack/repos/builtin/packages/axom/package.py
Co-authored-by: Chris White <white238@llnl.gov>
Co-authored-by: Chris White <white238@llnl.gov>
`reuse` and `when_possible` concretization broke the invariant that
`spec[pkg_name]` has unique keys. This invariant is relied on in tons of
places, such as when setting up the build environment.
When using `when_possible` concretization, one may end up with two or
more `perl`s or `python`s among the transitive deps of a spec, because
concretization does not consider build-only deps of reusable specs.
Until the code base is fixed not to rely on this broken property of
`__getitem__`, we should disable reuse in CI.
* fixed version numbers to python 2 and old biopython
* changed shortbred package to pypi, removed python 2 version
* added package description
* re-added shortbred package with deprecated flag
* fixed style and removed unnecessary python dep (it can't build with python 2 anyway)
* removed whitespace and re-added the python2.7.9+ dep
* fixed style
* gitlab: Do not use root_spec['pkg_name'] anymore
For a long time it was fine to index a concrete root spec with the name
of a dependency in order to access the concrete dependency spec. Since
pipelines started using `--use-buildcache dependencies:only,package:never`
though, it has exposed a scheduling issue in how pipelines are
generated. If a concrete root spec depends on two different hashes of
`openssl` for example, indexing that root with just the package name
is ambiguous, so we should no longer depend on that approach when
scheduling jobs.
* env: make sure exactly one spec in env matches hash
When installing some/all specs from a buildcache, build edges are pruned
from those specs. This can result in a much smaller effective DAG. Until
now, `spack env depfile` would always generate a full DAG.
This PR adds the `spack env depfile --use-buildcache` flag that was
introduced for `spack install` before. This way, not only can we drop
build edges, but we can also automatically set the right
buildcache-related flags on the specific specs that are going to be installed.
This way we get parallel installs of binary deps without redundancy,
which is useful for Gitlab CI.
When downloading from a binary cache, not only replace RPATHs to dependencies, but
also text references to dependencies.
Example:
`autoconf@2.69` contains a text reference to the executable of its dependency
`perl`:
```
$ grep perl-5 /shared/spack/opt/spack/linux-amzn2-x86_64_v3/gcc-7.3.1/autoconf-2.69-q3lo/bin/autoreconf
eval 'case $# in 0) exec /shared/spack/opt/spack/linux-amzn2-x86_64_v3/gcc-7.3.1/perl-5.34.1-yphg/bin/perl -S "$0";; *) exec /shared/spack/opt/spack/linux-amzn2-x86_64_v3/gcc-7.3.1/perl-5.34.1-yphg/bin/perl -S "$0" "$@";; esac'
```
These references need to be replaced, or any package using `autoreconf` will fail
as it cannot find the installed `perl`.
Co-authored-by: Stephen Sachs <stesachs@amazon.com>
"spack install" will not update the binary index if given a concrete
spec, which causes it to fall back to direct fetches when a simple
index update would have helped. For S3 buckets in particular, this
significantly and needlessly slows down the install process.
This commit alters the logic so that the binary index is updated
whenever a by-hash lookup fails. The lookup is attempted again with
the updated index before falling back to direct fetches. To avoid
updating too frequently (potentially once for each spec being
installed), BinaryCacheIndex.update now includes a "cooldown"
option, and when this option is enabled it will not update more
than once in a cooldown window (set in config.yaml).
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Without this patch one hits this error trying to compile papi with Intel OneAPI:
```
icx: error: Note that use of '-g' without any optimization-level option will turn off most compiler optimizations similar to use of '-O0' [-Werror,-Wdebug-disables-optimization]
```
Signed-off-by: Howard Pritchard <howardp@lanl.gov>