`spack gc` has so far been a global or environment-specific thing.
This adds the ability to restrict garbage collection to specific specs,
e.g. if you *just* want to get rid of all your unused python installations,
you could write:
```console
spack gc python
```
- [x] add `constraint` arg to `spack gc`
- [x] add a simple test
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Avoid hdf5 searching all search paths for ZLIB.cmake config files (including /usr/lib) before it looks for zlib without CMake config files, which is how Spack installs it
* WarpX: 24.10
This updates WarpX and dependencies for the 24.10 release.
New features:
- EB runtime control: we can now compile with EB on by default,
  because it is no longer a binary-incompatible option
- Catalyst2 support: AMReX/WarpX 24.09+ support Catalyst2 through
the existing Conduit bindings
* Fix Typo in Variant
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* Improve Python Dep Version Ranges
* Add Missing `-DWarpX_CATALYST`
* AMReX: Missing CMake Options for Vis
---------
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
This allows us to keep the workflow file tidier, and avoid
using indirections to perform platform-specific operations.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* py-mpi4py: add v4.0.0
* sensei: update mpi4py dependency
build with py-mpi4py@4.0.0 due to a fatal 'no such file or directory' error
* petsc4py: update license, and remove C++/Fortran dependency
There was a bit of mystery surrounding the arguments for `_setup_pkg_and_run`. It passes
two file descriptors for handling `gmake`'s job server in child processes, but they are
unused in the method.
It turns out that there are good reasons to do this -- depending on the multiprocessing
backend, these file descriptors may be closed in the child if they're not passed
directly to it (sketched below).
- [x] Document all arguments to `_setup_pkg_and_run`.
- [x] Add type hints for `_setup_pkg_and_run`.
- [x] Refactor exception handling in `_setup_pkg_and_run` so it's easier to add type
hints. `exc_info()` was problematic because it *can* return `None` (just not
in the context where it's used). `mypy` was too dumb to notice this.
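For illustration, here is a minimal sketch (not Spack's actual code) of why explicit fd arguments matter, assuming the `fork` start method:
```python
import multiprocessing
import os

def child(jobserver_read: int, jobserver_write: int) -> None:
    # The fds arrive as plain ints; receiving them explicitly keeps them
    # referenced and usable here even if cleanup elsewhere would close them.
    os.write(jobserver_write, b"+")  # e.g. return a jobserver token

if __name__ == "__main__":
    read_fd, write_fd = os.pipe()  # stand-in for gmake's jobserver pipe
    ctx = multiprocessing.get_context("fork")
    proc = ctx.Process(target=child, args=(read_fd, write_fd))
    proc.start()
    proc.join()
```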
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* new builtin package: ambertools
* fixes for the style test
* yet more changes for the style test
* hope this is the last fix for the style test
* netlib-xblas is a dependency; it needs a `depends_on("m4", type="build")`
* ambertools: Add new setuptools dependency, limit python to <= 3.10 (does not build with 3.11+)
---------
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
We mostly use `spack style` and `spack style --fix`, but it's nice to also be able to
run plain old `black .` in the repo.
- [x] Fix includes and excludes in `pyproject.toml` so that we *only* cover files we expect
      to be blackened.
Note that `spack style` is still likely the better way to go, because it looks at `git
status` and tells black to only check files that changed from `develop`. `black` with
`pyproject.toml` won't do that. Of course, you can always manually specify which files
you want blackened.
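So both of these now behave sensibly (file path illustrative):
```console
$ black .                             # covers exactly the files we expect
$ black lib/spack/spack/cmd/style.py  # or name files explicitly
```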
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* postgresql: Add icu4c dependency for versions 16+
* postgresql: make ICU an option
* postgresql: ICU variant only needed for v16+
* postgresql: Check for negated option
Check for negated option instead of negating the test
Co-authored-by: Alec Scott <hi@alecbcs.com>
---------
Co-authored-by: Alec Scott <hi@alecbcs.com>
* [py-flash-attn] Add version 2.6.3
* Update dependencies according to the latest version
* Add max_jobs environment variable to avoid OOM error
---------
Co-authored-by: aurianer <8trash-can8@protonmail.ch>
* Revert "`cc`: ensure that RPATHs passed to linker are unique"
This reverts commit 2613a14c43.
* Revert "`cc`: simplify ordered list handling"
This reverts commit a76a48c42e.
Updated the terminology for the two types of environments to be
consistent with that used in the tutorial for the last three years.
Additionally:
* changed 'anonymous' to 'independent' in environment command+test for consistency.
* Update package.py
Update to pull TotalView tar files from AWS instead of requiring the user to download them ahead of time. Use the new license type (RLM license), and only allow installs of versions that use it: 2024.1 and 2024.2. The user selects the platform along with the version, as is done on the TotalView downloads website.
* Update package.py
Update to pass style test
* Update package.py
fixing style
* Updating to pass style check
removing more spaces to pass style check
* final style fixes
fixing the last 2 style errors
* Typo
Typo correction to pass style check
* Remove newline
removing newline character
* Ran black to reformat
Ran black to clear errors
* Changing to use sha256
Updating to use sha256 checksums for all TotalView files.
* acts dependencies: new versions as of 2024/09/30
This commit adds new versions of acts, actsvg, and detray.
* Add vecmem version, patch detray version
#45205 already removed the use of single-letter packages
from unit tests, in view of reserving `c` as a language (see #45191).
Some uses of them were accidentally re-introduced in #46382, and
are making unit tests fail in the feature branch #45189, since there
`c` is a virtual package.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Kokkos: adding some sanity checks
We can pretty much guarantee that if the bin, include, or lib directory
is missing, something is wrong; likewise for KokkosCore_config.h
and Kokkos_Core.hpp. I guess technically we could look for all
public headers at least, but that seems a bit overkill as well (see the sketch after this list).
* Kokkos Kernels: adding sanity checks
* Remove check for lib directory since it might end up being lib64
* Also remove lib from kokkos-kernels sanity check
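For reference, Spack's sanity-check hooks look roughly like this (a sketch; the exact Kokkos lists may differ):
```python
from spack.package import *


class Kokkos(CMakePackage):
    # the install is aborted if any of these are missing from the prefix
    sanity_check_is_dir = ["bin", "include"]
    sanity_check_is_file = [
        join_path("include", "KokkosCore_config.h"),
        join_path("include", "Kokkos_Core.hpp"),
    ]
```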
* Add latest releases of Camp, RAJA, Umpire, CHAI and CARE
* Address review comments + blt requirement in Umpire
* CARE @develop & @main: Submodules -> False
* Changes in Umpire
* Changes in RAJA
* Changes in CHAI
* Changes in RAJA: prefer 'spec.satisfies' to 'in spec'
This is due to a non-equivalence in Spack with providers like mpi.
See e.g. https://github.com/spack/spack/pull/46126
* Changes in Umpire: prefer 'spec.satisfies' to 'in spec'
This is due to a non-equivalence in Spack with providers like mpi.
See e.g. https://github.com/spack/spack/pull/46126
* Changes in CARE:
Still need to update to CachedCMakePackage based on RADIUSS Spack Configs version
* Missing change in RAJA + changes in fmt
* Fix syntax
* Changes in Camp
* Fix style
* CHAI: when ~raja, turn off RAJA in build system
* Fix: Ascent@0.9.3 does not support RAJA@2024.07.0
* Enforce same version constraint on Umpire as for RAJA
* Enforce preferred version of vtk-m in ascent 0.9.3
* Migrate CARE package to CachedCMakePackage
* Fix style in CARE package
* CARE: Apply changes for uniform implementation across RADIUSS projects
* Caliper: move to CachedCMakePackage, from RADIUSS Spack Configs
* Adapt RAJA Perf to spack CI
* Activate CHAI, CARE and RAJAPerf in Spack CI
* Fixes and diffs with RADIUSS Spack Configs
* Caliper: fix
* Caliper : fix + RAJAPerf : style
* RAJAPerf: fixes
* Update maintainers
* raja-perf: fix license header
* raja-perf: Fix variant naming openmp_target -> omptarget
* raja-perf: style and blt dependency versions
* CARE: benchmark and examples off by default (like tests)
* CARE: fix missing variable
* Update var/spack/repos/builtin/packages/raja-perf/package.py
* CARE: fix branch name
* Revert changes in MFEM to pass CI
* Fix CXX17 condition in RAJA + add sycl option in RAJAPerf
---------
Co-authored-by: Rich Hornung <hornung1@llnl.gov>
* cbindgen: new package
* Attempting to add rust dependencies for cbindgen
* adding rust-toml min rust version
* Removing dependencies that don't install with cargo
* cleanup broken packages
---------
Signed-off-by: Teague Sterling <teaguesterling@gmail.com>
On sysroot systems like Gentoo Prefix, as well as Nix/Guix, our "is
system path" logic is broken because it's static.
Talking about "the system paths" is not helpful; we have to talk
about the default search paths of the dynamic linker instead.
If glibc is recent enough, we can query the dynamic loader's default
search paths, which is a much more robust way to avoid registering
rpaths to system dirs, which can shadow Spack dirs.
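For example, with a recent glibc you can ask the loader directly (loader path and output vary by system; abbreviated here):
```console
$ /lib64/ld-linux-x86-64.so.2 --help | grep "system search path"
  /lib64 (system search path)
  /usr/lib64 (system search path)
```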
This PR adds an **additional** filter on rpaths the compiler wrapper
adds, dropping rpaths that are default search paths. The PR **does
not** remove any of the original `is_system_path` code yet.
This fixes issues where build systems run just-built executables
linked against their *not-yet-installed libraries*, typically:
```
LD_LIBRARY_PATH=. ./exe
```
which happens in `perl`, `python`, and other non-cmake packages.
If a default path is rpath'ed, it takes precedence over
`LD_LIBRARY_PATH`, and a system library gets loaded instead
of the just-built library in the stage dir, breaking the build. If
default paths are not rpath'ed, then LD_LIBRARY_PATH takes
precedence, as is desired.
This PR additionally fixes an inconsistency in rpaths between
cmake and non-cmake packages. The cmake build system
computed rpaths by itself, but used a different order than
computed for the compiler wrapper. In fact it's not necessary
to compute rpaths at all, since we let cmake do that thanks to
`CMAKE_INSTALL_RPATH_USE_LINK_PATH`. This covers rpaths
for all dependencies. The only install rpaths we need to set are
`<install prefix>/{lib,lib64}`, which cmake unfortunately omits,
although it could also know these. Also, cmake does *not*
delete rpaths added by the toolchain (i.e. Spack's compiler
wrapper), so I don't think it should be controversial to simplify
things.
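The resulting flags look roughly like this (a sketch, not the exact diff):
```python
def install_rpath_args(prefix: str) -> list:
    """Sketch of the only rpath flags the cmake build system needs to set."""
    return [
        # let CMake compute rpaths for all dependencies from the link line
        "-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=ON",
        # CMake omits the package's own install lib dirs; add them explicitly
        f"-DCMAKE_INSTALL_RPATH={prefix}/lib;{prefix}/lib64",
    ]
```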
From https://docs.sylabs.io/guides/main/admin-guide/configfiles.html#loop-devices:
shared loop devices allow containers running the same image
to share a single loop device. This minimizes loop device usage and
helps optimize kernel cache usage.
Enabling this feature can be particularly useful for large MPI jobs.
The current `Spec.splice` model is very limited by the inability to splice specs that
contain multiple nodes with the same name. This is an artifact of the original
algorithm design predating the separate concretization of build dependencies,
which was the first feature to allow multiple specs in a DAG to share a name.
This PR provides a complete reimplementation of `Spec.splice` to avoid that
limitation. At the same time, the new algorithm ensures that build dependencies
for spliced specs are not changed, since the splice by definition cannot change
the build-time information of the spec. This is handled by splitting edges that
carry both build and link/run dependency types into separate edges as needed.
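Usage stays the same (a sketch; both specs must be concrete, and `transitive` controls whether the replacement's own link/run dependencies are spliced in as well):
```python
spliced = spec.splice(replacement, transitive=True)
```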
Signed-off-by: Gregory Becker <becker33@llnl.gov>
* CI: Add documentation for adding new stacks and runners
* More docs for runner registration
---------
Co-authored-by: Zack Galbreath <zack.galbreath@kitware.com>
Co-authored-by: Bernhard Kaindl <contact@bernhard.kaindl.dev>
This PR shortens the string representation of concrete specs,
in order to make it more legible.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
macOS Sequoia's linker will complain if RPATHs on the CLI are specified more than once.
To avoid errors due to this, make `cc` only append unique RPATHs to the final args list.
This required a few improvements to the logic in `cc`:
1. List functions in `cc` didn't have any way to append unique elements to a list. Add a
`contains()` shell function that works like our other list functions. Use it to implement
an optional `"unique"` argument to `append()` and an `extend_unique()`. Use that to add
RPATHs to the `args_list`.
2. In the pure `ld` case, we weren't actually parsing `RPATH` arguments separately as we
do for `ccld`. Fix this by adding *another* nested case statement for raw `RPATH`
parsing. There are now 3 places where we deal with `-rpath` and friends, but I don't
see a great way to unify them, as `-Wl,`, `-Xlinker`, and raw `-rpath` arguments are
all ever so slightly different.
3. Fix ordering of assertions to make `pytest` diffs more intelligible. The meaning of
`+` and `-` in diffs changed in `pytest` 6.0 and the "preferred" order for assertions
became `assert actual == expected` instead of the other way around.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
`cc` divides most paths up into system paths, spack managed paths, and other paths.
This gets really repetitive and makes the code hard to read. Simplify the script
by adding some functions to do most of the redundant work for us.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* axom/stand-alone tests: build and run in test stage directory
* Removed unused glob
* axom/stand-alone tests: add example_stage_dir variable for clarity
SimpleFilesystemView was producing an error because it looked for a
<prefix>/lib/.spack folder. Also, view_destination had no effect and
wasn't called. Changed this by instead patching in the correct
installation prefix for dictionaries.
Since aspell is using the resolved path of the executable prefix, the
runtime environment variable ASPELL_CONF is set to correct the prefix
when in a view. With this change aspell can now find installed
dictionaries. Verified with:
```console
aspell dump config
aspell dump dicts
```
* shorten version number validations per reviewer feedback
* rename set_lib_path per reviewer feedback
* Add E4S tag
* Set CHPL_CUDA_PATH to ensure Chapel installer finds the right package
* Update ROCm dependency for Chapel 2.2
* Fix llvm-amdgpu and CHPL_TARGET_* for llvm=bundled
* Ensure CHPL_TARGET_COMPILER is set to "llvm" when required (llvm=spack
or +cuda or +rocm).
* Ensure CHPL_TARGET_{CC,CXX} are only set when using llvm=spack or llvm=none
* Use hip.prefix to set CHPL_ROCM_PATH
Since we might not directly depend on llvm-amdgpu, it might
not appear in our spec.
* limit m4 dependency to +gmp
* limit names of env vars created from variants
* Ensure that +cuda and +rocm variants are Sticky
The concretizer should never be permitted to select GPU support, because
it's only meaningful and functional when the appropriate hardware is actually
available, and the concretizer cannot reliably determine that.
Also: Chapel's GPU support includes a lot of complicated dependencies
and constraints, so leaving that choice free to the concretizer leads to a lot
of extraneous and confusing messages when failing to concretize a
non-GPU-enabled spec.
Co-authored-by: Dan Bonachea <dobonachea@lbl.gov>
Add pre-built sbcl for x86 and arm for various glibc versions, making
way for an actual sbcl built from source.
Also switch to using set_env as a context manager rather than setting the
environment variable for the build environment directly. I hit an issue with the
build system due to this in the sbcl package, pre-empting the same issue
here.
* dla-future: Add DLAF_ prefix to LAPACK_LIBRARY CMake variable in newer versions
* dla-future: Use spec.satisfies to check version constraint for LAPACK_LIBRARY variable prefix
Co-authored-by: Alberto Invernizzi <9337627+albestro@users.noreply.github.com>
---------
Co-authored-by: Alberto Invernizzi <9337627+albestro@users.noreply.github.com>
* py-sphinx-tabs: new version 3.4.5
* py-sphinx-design: new versions 0.5.0, 0.6.0, and 0.6.1
* py-requests: new version 2.32.3
* py-dnspython: new version 2.6.1
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* py-hatch-vcs: new version 0.4.0
---------
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
Python 3.12 removed the `distutils` module, which is required
by the build process of LLVM <= 14: conflict with it for +python.
Also fix the build so it does not pick up host tools like an incompatible Python from the host.
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
gcc on Ubuntu has fix-cortex-a53-843419 set by default; this causes linking
issues (symbol relocation errors) for TensorFlow, even when compiling for different
CPUs.
If `add_padding()` is allowed to return a path with a trailing path
separator, it will get collapsed elsewhere in Spack. This can lead to
buildcache entries that have RPATHS that are too short to be replaced by
other users whose install root happens to be padded to the correct
length. Detect this and replace the trailing path separator with a
concrete path character.
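A minimal sketch of the intended behavior (illustrative, not Spack's exact implementation):
```python
import os

def add_padding(path: str, length: int) -> str:
    """Pad `path` to `length` characters with dummy directory components."""
    while len(path) < length:
        path += os.sep + "_" * min(8, max(length - len(path) - 1, 0))
    # Never return a trailing separator: it would be collapsed elsewhere,
    # leaving the padded root one character too short for rpath replacement.
    if path.endswith(os.sep):
        path = path[:-1] + "_"
    return path
```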
Signed-off-by: Samuel E. Browne <sebrown@sandia.gov>
Also: set the build and install directories to the source directory
because the build system unfortunately expects the `src_ext` directory
to be under the current working directory when building the bundled
third-party libraries, even when the configure script is run from
another directory.
@scemama pointed out that 'make' just calls 'dune' which is already
parallel, so make itself should not have more than one job.
opam@:2.1 needs 'make lib-ext' for cmdliner; above that, it's obsolete.
Co-authored-by: Bernhard Kaindl <bernhard.kaindl@cloud.com>
The detection logic for the prefix used in py-pybind11 is broken for Spack,
resulting in an empty prefix. However, the package provides an escape
hatch in the form of `prefix_for_pc_file`. Use this escape hatch to
provide the correct path; Spack will always know better than pybind11's
CMake.
Co-authored-by: Robert Underwood <runderwood@anl.gov>
We've seen `getfqdn()` cause slowdowns on macOS in CI when added elsewhere. It's also
called by database.py every time we write the DB file.
- [x] replace the call with a memoized version so that it is only called once per process.
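The change is roughly:
```python
import socket

from llnl.util.lang import memoized

@memoized
def getfqdn() -> str:
    """Only call socket.getfqdn() once per process; it can be slow."""
    return socket.getfqdn()
```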
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
This PR introduces a new heuristic for the solver, which behaves better when
compilers are treated as nodes. Apparently, it performs better also on `develop`,
where compilers are still node attributes.
The new heuristic:
- Sets an initial priority for guessing a few attributes. The order is "nodes" (300),
"dependencies" (150), "virtual dependencies" (60), "version" and "variants" (30), and
"targets" and "compilers" (1). This initial priority decays over time during the solve, and
falls back to the defaults.
- By default, it considers most guessed facts as "false". For instance, by default a node
doesn't exist in the optimal answer set, or a version is not picked as a node version etc.
- There are certain conditions that override the default heuristic using the _priority_ of
a rule, which previously we didn't use. For instance, by default we guess that a
`attr("variant", Node, Variant, Value)` is false, but if we know that the node is already
in the answer set, and the value is the default one, then we guess it is true.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
This includes a test_linux699 variant which "activates" a version
that pulls from a repository other than the official repository.
This version is required to work with Linux kernel version
6.9.9 or later. Future official `msr-safe` versions are expected
to support later Linux kernel versions.
* opendatadetector: Add an env variable pointing to the share directory
* Rename the new variable to OPENDATADETECTOR_DATA and use join_path
---------
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
The `spack.target.Target` class is a weird entity that is just needed to:
1. Sort microarchitectures in lists deterministically
2. Use microarchitectures in hashed containers
This PR removes it, and uses `archspec.cpu.Microarchitecture` directly. To sort lists, we use a proper `key=` when needed. Being able to use `Microarchitecture` objects in sets is achieved by updating the external `archspec`.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Introduce the bufr_query library from NOAA-EMC (#461)
This PR adds in a new package.py script for the new bufr_query library from NOAA-EMC. This is being used by JEDI and other applications.
* Add explicit build dependency spec to the pybind11 depends_on spec
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* Convert patch file to the URL form which pulls the changes from github.
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* Added new version (0.0.3) and removed obsolete site-packages.patch file
---------
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
While the existing getting started guide does in fact reference the
powershell support, it's a footnote and easily missed. This PR adds
explicit, upfront mentions of the powershell support. Additionally
this PR adds notes about some of the issues with certain components
of the spec syntax when using CMD.
If the spec is external, it has extra attributes. If not, we know
which names are used. In both cases we don't need to search again
for executables.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Removes `spack.package_base.PackageBase.do_{install,deprecate}` in favor of
`spack.installer.PackageInstaller.install` and `spack.installer.deprecate`, respectively.
That drops a dependency of `spack.package_base` on `spack.installer`, which is
necessary to get rid of circular dependencies in Spack.
Also change the signature of `PackageInstaller.__init__` from taking a dict as
positional argument, to an explicit list of keyword arguments.
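Roughly, call sites change like this (a sketch; keyword names are illustrative):
```python
from spack.installer import PackageInstaller

# `pkg` is some PackageBase instance
# before: install options passed as a dict inside the positional argument
PackageInstaller([(pkg, {"explicit": True, "keep_stage": False})]).install()

# after: explicit keyword arguments
PackageInstaller([pkg], explicit=True, keep_stage=False).install()
```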
Continuing the work started in #40326, this changes the structure
of Variant metadata on Packages from a single variant definition
per name with a list of `when` specs:
```
name: (Variant, [when_spec, ...])
```
to a Variant definition per `when_spec` per name:
```
when_spec: { name: Variant }
```
With this change, everything on a package *except* versions is
keyed by `when` spec. This:
1. makes things consistent, in that conditional things are (nearly)
all modeled in the same way; and
2. fixes an issue where we would lose information about multiple
variant definitions in a package (see #38302). We can now have,
e.g., different defaults for the same variant in different
versions of a package.
Some notes:
1. This required some pretty deep changes to the solver. Previously,
the solver's job was to select value(s) for a single variant definition
per name per package. Now, the solver needs to:
a. Determine which variant definition should be used for a given node,
which can depend on the node's version, compiler, target, other variants, etc.
b. Select valid value(s) for variants for each node based on the selected
variant definition.
When multiple variant definitions are enabled via their `when=` clause, we will
always prefer the *last* matching definition, by declaration order in packages. This
is implemented by adding a `precedence` to each variant at definition time, and we
ensure they are added to the solver in order of precedence.
This has the effect that variant definitions from derived classes are preferred over
definitions from superclasses, and the last definition within the same class sticks.
This matches python semantics. Some examples:
```python
class ROCmPackage(PackageBase):
variant("amdgpu_target", ..., when="+rocm")
class Hipblas(ROCmPackage):
variant("amdgpu_target", ...)
```
The global variant in `hipblas` will always supersede the `when="+rocm"` variant in
`ROCmPackage`. If `hipblas`'s variant was also conditional on `+rocm` (as it probably
should be), we would again filter out the definition from `ROCmPackage` because it
could never be activated. If you instead have:
```python
class ROCmPackage(PackageBase):
variant("amdgpu_target", ..., when="+rocm")
class Hipblas(ROCmPackage):
variant("amdgpu_target", ..., when="+rocm+foo")
```
The variant on `hipblas` will win for `+rocm+foo`, but the one on `ROCmPackage` will
win for `+rocm~foo`.
So, *if* we can statically determine if a variant is overridden, we filter it out.
This isn't strictly necessary, as the solver can handle many definitions fine, but
this reduces the complexity of the problem instance presented to `clingo`, and
simplifies output in `spack info` for derived packages. e.g., `spack info hipblas`
now shows only one definition of `amdgpu_target` where before it showed two, one of
which would never be used.
2. Nearly all access to the `variants` dictionary on packages has been refactored to
use the following class methods on `PackageBase`:
* `variant_names(cls) -> List[str]`: get all variant names for a package
* `has_variant(cls, name) -> bool`: whether a package has a variant with a given name
* `variant_definitions(cls, name: str) -> List[Tuple[Spec, Variant]]`: all definitions
of variant `name` that are possible, along with their `when` specs.
* `variant_items(cls)`: iterate over `pkg.variants.items()`, with impossible variants
  filtered out.
Consolidating to these methods seems to simplify the code a lot (see the usage sketch after this list).
3. The solver does a lot more validation on variant values at setup time now. In
particular, it checks whether a variant value on a spec is valid given the other
constraints on that spec. This allowed us to remove the crufty logic in
`update_variant_validate`, which was needed because we previously didn't *know* after
a solve which variant definition had been used. Now, variant values from solves are
constructed strictly based on which variant definition was selected -- no more
heuristics.
4. The same prevalidation can now be done in package audits, and you can run:
```
spack audit packages --strict-variants
```
This turns up around 18 different places where a variant specification isn't valid
given the conditions on variant definitions in packages. I haven't fixed those here
but will open a separate PR to iterate on them. I plan to make strict checking the
default once all existing package issues are resolved. It's not clear to me that
strict checking should be the default for the prevalidation done at solve time.
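A hedged usage sketch of the new accessors:
```python
import spack.repo

pkg_cls = spack.repo.PATH.get_pkg_class("hipblas")
if pkg_cls.has_variant("amdgpu_target"):
    # every possible definition, with the `when` spec that enables it
    for when, vdef in pkg_cls.variant_definitions("amdgpu_target"):
        print(when, vdef)
```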
There are a few other changes here that might be of interest:
1. The `generator` variant in `CMakePackage` is now only defined when `build_system=cmake`.
2. `spack info` has been updated to support the new metadata layout.
3. Split out variant propagation into its own `.lp` file in the `solver` code.
4. Add better typing and clean up code for variant types in `variant.py`.
5. Add tests for new variant behavior.
Historically, every PR, push, etc. to Spack generates a bunch of jobs, each of which
uploads its coverage report to codecov independently. This means that we get annoying
partial coverage numbers when only a few of the jobs have finished, and frequently
codecov is bad at understanding when to merge reports for a given PR. The numbers of the
site can be weird as a result.
This restructures our coverage handling so that we do all the merging ourselves and
upload exactly one report per GitHub actions workflow. In practice, that means that
every push to every PR will get exactly one coverage report and exactly one coverage
number reported. I think this will at least partially restore peoples' faith in what
codecov is telling them, and it might even make codecov handle Spack a bit better, since
this reduces the report burden by ~7x.
- [x] test and audit jobs now upload artifacts for coverage
- [x] add a new job that downloads artifacts and merges coverage reports together
- [x] set `paths` section of `pyproject.toml` so that cross-platform clone locations are merged
- [x] upload to codecov once, at the end of the workflow
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* kokkos, kokkos-kernels, kokkos-nvcc-wrapper: add v4.4.01
* trilinos: update @[master,develop] dependency on kokkos
==> Error: InstallError: For Trilinos@[master,develop], ^kokkos version in spec must match version in Trilinos source code. Specify ^kokkos@4.4.01 for trilinos@[master,develop] instead of ^kokkos@4.4.00.
* petsc: configure requires rocm-core/rocm_version.h to detect ROCM_VERSION_MAJOR.ROCM_VERSION_MINOR.ROCM_VERSION_PATCH
* mfem: add dependency on rocprim (as needed via petsc dependency)
```
In file included from linalg/petsc.cpp:19:
In file included from linalg/linalg.hpp:65:
In file included from linalg/petsc.hpp:48:
In file included from /scratch/svcpetsc/spack.x/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/petsc-3.22.0-7dsxwizo24ycnqvwnsscupuh4i7yusrh/include/petscsystypes.h:530:
In file included from /scratch/svcpetsc/spack.x/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/rocthrust-6.1.2-ux5nmi4utw27oaqmz3sfjmhb6hyt5zed/include/thrust/complex.h:30:
/scratch/svcpetsc/spack.x/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/rocthrust-6.1.2-ux5nmi4utw27oaqmz3sfjmhb6hyt5zed/include/thrust/detail/type_traits.h:29:10: fatal error: 'rocprim/detail/match_result_type.hpp' file not found
29 | #include <rocprim/detail/match_result_type.hpp>
   |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
* Update seacas package.py
Adding libcatalyst variant to seacas package
When seacas is installed with "seacas +libcatalyst"
then a dependency on the spack package "libcatalyst"
(which is Catalyst API 2 from Kitware) is added, and
the appropriate CMake variable for the catalyst TPL
is set. The mpi variant option in catalyst (i.e. build
with mpi or build without mpi) is passed on to
libcatalyst. The default of the libcatalyst variant
is false/off, so if seacas is installed without the
"+libcatalyst" in the spec it will behave exactly as
it did before the introduction of this variant.
* shortened line 202 to comply with < 100 characters per line style requirement
* py-httpx: New version
* [py-httpx] fix when for dependencies
* [py-httpx] organized dependencies
* [py-httpx] added version 0.27.2
---------
Co-authored-by: Alex C Leute <aclrc@rit.edu>
* Automated deployment to update package flux-sched 2024-09-05
* flux-sched: add back check for run environment
* flux-sched: add conflict for gcc and clang above 0.37.0
---------
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
autotools packages with a configure script should generate the libtool
executable; there's no point in `depends_on("libtool", type="build")`:
the libtool executable in `<libtool prefix>/bin/libtool` is configured
for the wrong toolchain (libtool's %compiler instead of the package's
%compiler).
Some packages link to `libltdl.so`, which is fine, but had a wrong
dependency type.
See https://github.com/spack/spack/pull/46314#discussion_r1752940332.
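In package terms, the fix is roughly this sketch (exact dependency types vary per package):
```python
# dropped: depends_on("libtool", type="build")  # configure makes its own libtool
# for packages that actually link against libltdl.so:
depends_on("libtool", type="link")
```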
This further simplifies `cxxstd` variant handling in `acts` by removing superfluous
version constraints from dependencies for `geant4` and `root`.
The version constraints in the loop are redundant with the conditional variant
values here:
```python
_cxxstd_values = (
conditional("14", when="@:0.8.1"),
conditional("17", when="@:35"),
conditional("20", when="@24:"),
)
_cxxstd_common = {
"values": _cxxstd_values,
"multi": False,
"description": "Use the specified C++ standard when building.",
}
variant("cxxstd", default="17", when="@:35", **_cxxstd_common)
variant("cxxstd", default="20", when="@36:", **_cxxstd_common)
```
So we can simplify the dependencies in the loop to:
```python
for _cxxstd in _cxxstd_values:
for _v in _cxxstd:
depends_on(f"geant4 cxxstd={_v.value}", when=f"cxxstd={_v.value} +geant4")
depends_on(f"geant4 cxxstd={_v.value}", when=f"cxxstd={_v.value} +fatras_geant4")
depends_on(f"root cxxstd={_v.value}", when=f"cxxstd={_v.value} +tgeo")
```
And avoid the potential for impossible variant expressions.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Openmpi provider statements were changed in #46102. The package change
was fine in and of itself, but apparently one of our tests depends on
the precise constraints used in those statements. I updated the test
to remove the checks for constraints that were removed.
In #44425, we add stricter variant audits that catch expressions that can never match.
This fixes 13 packages that had this type of issue.
Most packages had lingering spec expressions from before conditional variants with
`when=` statements were added. For example:
* Given `conflicts("+a", when="~b")`, if the package has since added
`variant("a", when="+b")`, the conflict is no longer needed, because
`a` and `b` will never exist together.
* Similarly, two packages that depended on `py-torch` depended on
`py-torch~cuda~cudnn`, which can't match because the `cudnn` variant
doesn't exist when `cuda` is disabled. Note that neither `+foo` nor `~foo`
matches (intentionally) if the `foo` variant doesn't exist.
* Some packages referred to impossible version/variant combinations, e.g.,
`ceed@1.0.0+mfem~petsc` when the `petsc` variant only exists at version `2`
or higher.
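A sketch of the first case (hypothetical variant names):
```python
variant("a", default=False, when="+b")  # `a` only exists when +b is set
conflicts("+a", when="~b")              # can never trigger: safe to drop
```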
Some of these correct real issues (e.g. the `py-torch` dependencies would have never
worked). Others simply declutter old code in packages by making all constraints
consistent with version and variant updates.
The only one of these that I think is not all that useful is the one for `acts`,
where looping over `cxxstd` versions and package versions ends up adding some
constraints that are impossible. The additional dependencies could never have
happened and the code is more complicated with the needed extra constraint.
I think *probably* the best thing to do in `acts` is just not to use a loop
and to write out the constraints explicitly, but maybe the code is easier to
maintain as written.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* Update var/spack/repos/builtin/packages/fms/package.py: apply patch for fms@2023.03 to fix compiler options bug in cmake config, add variant shared and corresponding patch for fms@2024.02
* Fix fms package audit: use c9bba516ba.patch?full_index=1 instead of the plain c9bba516ba.patch URL
* Update checksum of patch for fms@2023.03
* CUDA: support Grace Hopper 9.0a compute capability
* Fix other packages
* Add type annotations
* Support ancient Python versions
* isort
* spec -> self.spec
Co-authored-by: Andrew W Elble <aweits@rit.edu>
* [@spackbot] updating style on behalf of adamjstewart
---------
Co-authored-by: Andrew W Elble <aweits@rit.edu>
Co-authored-by: adamjstewart <adamjstewart@users.noreply.github.com>
fixes #46295
A proper solution would be a tag directive that accumulates tags
with the ones defined in base classes.
For the time being, rewrite them explicitly.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Boost: Adjust bootstrapping/b2 options as needed for Windows (the
bootstrapping phase sufficiently differs between Windows/Unix
that it is handled entirely within its own branch).
* Boost: Paths in user-config.jam should be POSIX, including on Windows
* Python: `.libs` for the Python package should return link libraries
on Windows. The libraries are also stored in a different directory.
The option config:install_missing_compilers is currently buggy,
and has been for a while. Remove it, since it won't be needed
when compilers are treated as dependencies.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* fast-float: new package
* fast-float: add test dependency
* fast-float: fix doctest dependency type
* fast-float: convert deps to tuple
* fast-float: add v6.1.5 and v6.1.6
* fast-float: patch older versions to use find_package(doctest)
* py-your: new package
Spack package recipe for YOUR, Your Unified Reader. YOUR processes pulsar data in different formats.
Output below from `spack install py-your`:
```console
$ spack install py-your
==> Installing py-your-0.6.7-djfzsn2lutp24ik6wrk6tjx5f7hil76x [83/83]
==> No binary for py-your-0.6.7-djfzsn2lutp24ik6wrk6tjx5f7hil76x found: installing from source
==> Fetching https://github.com/thepetabyteproject/your/archive/refs/tags/0.6.7.tar.gz
==> No patches needed for py-your
==> py-your: Executing phase: 'install'
==> py-your: Successfully installed py-your-0.6.7-djfzsn2lutp24ik6wrk6tjx5f7hil76x
Stage: 1.43s. Install: 0.99s. Post-install: 0.12s. Total: 3.12s
```
* Removed setup_run_environment
After some testing, both `spack load` and `module load` for the package include the bin directory generated by py-your, as well as the path to the version of python the package was built with, without the need for the setup_run_environment function.
I removed that function (although, like Tamara, I thought it would be necessary based on other package setups I used as a basis for this package).
Note: I also updated the required version of py-astropy from py-astropy@4.0: to py-astropy@6.1.0:. In my test builds, the install was picking up py-astropy@4.0.1.post1 and numpy 1.26; when I tried to run some of the code, I got errors about py-astropy making numpy calls that have since been removed. The newer version of py-astropy corrects these. Ideally this would be handled in the py-astropy package to make sure numpy isn't too new.
* Changed software pull location
The original package pulled a tagged release version from GitHub. That tagged version was created in 2022 and has not been updated since. It no longer runs because newer versions of numpy have removed deprecation warnings for several of their calls. The main branch for this repository has addressed these numpy issues as well as some other important fixes but no new release has been generated. Because of this and the apparent minimal development that now appears to be going on, it is probably best to always pull from the main branch
* [@spackbot] updating style on behalf of aweaver1fandm
* py-your: Changed software pull location
1. Restored original URL and version (0.6.7) as requested
2. Updated py-numpy dependency versions to be constrained based on the version of your. Because of numpy deprecations related to your 0.6.7, we need to ensure that the numpy version used is below 1.24, because the deprecated calls were removed starting with that version.
* gptune: new test API
* gptune: cleanup; finish API changes; separate unrelated test parts
* gptune: standalone test cleanup with timeout constraints
* gptune: ensure stand-alone test bash failures terminate; enable in CI
* gptune: add directory to terminate_bash_failures
* gptune/stand-alone tests: use satisfies for checking variants
---------
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
* Add numactl 2.0.16-2.0.18
* Create link-with-latomic-if-needed-v2.0.16.patch
Add a link to libatomic, if needed, for numactl v2.0.16.
* Add some missing patches to v2.0.16
* Create numactl-2.0.18-syscall-NR-ppc64.patch
In short, we need numactl to set __NR_set_mempolicy_home_node on ppc64, if it's not already defined.
* Apply a necessary patch for v2.0.18 on PPC64
* Add libatomic patch for v2.0.16
`spack reindex` relies on projections from configuration to locate
installed specs and prefixes. This is problematic because config can
change over time, and we have reasons to do so when turning compilers
into dependencies (removing `{compiler.name}-{compiler.version}` from
projections).
This commit makes reindex recursively search for .spack/ metadirs.
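A minimal sketch of the idea (not Spack's exact code):
```python
import os

def find_prefixes(install_root: str):
    """Yield installed prefixes by locating their `.spack` metadata dirs."""
    for dirpath, dirnames, _files in os.walk(install_root):
        if ".spack" in dirnames:
            yield dirpath
            dirnames.clear()  # found a prefix; don't descend into it
```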
.. note:: Windows Spec Syntax Caveats
Windows has a few idiosyncrasies when it comes to the Spack spec syntax and the use of certain shells.
Spack's spec dependency syntax uses the caret (``^``) character; however, this is an escape character in CMD,
so it must be escaped with an additional caret (i.e. ``^^``).
CMD will also attempt to interpret strings with ``=`` characters in them. Any spec including this symbol
must be double quoted.
Note: all of these issues are unique to CMD; they can be avoided by using PowerShell.
For more context on these caveats see the related issues: `carat <https://github.com/spack/spack/issues/42833>`_ and `equals <https://github.com/spack/spack/issues/43348>`_
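For example, in CMD (illustrative specs):
```console
C:\> spack install hdf5^^mpich
C:\> spack install "zlib cflags=-O2"
```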
Below are more details about the specifiers that you can add to specs.
The first step to contributing new runners is to open an issue in the `spack infrastructure <https://github.com/spack/spack-infrastructure/issues/new?assignees=&labels=runner-registration&projects=&template=runner_registration.yml>`_
project. This will be routed to the Spack infrastructure team, who will guide users through the process
of registering new runners for Spack CI.
The information needed to register a runner is the motivation for the new resources, a semi-detailed description of
the runner, and finally the point of contact for maintaining the software on the runner.
The point of contact will then work with the infrastructure team to obtain runner registration token(s) for interacting
with Spack's GitLab instance. Once the runner is active, this point of contact will also be responsible for updating the
GitLab runner software to keep pace with Spack's GitLab.
Tagging
~~~~~~~
In the initial stages of runner registration it is important to **exclude** the special tag ``spack``. This will prevent
the new runner(s) from being picked up for production CI jobs while it is configured and evaluated. Once it is determined
that the runner is ready for production use the ``spack`` tag will be added.
Because GitLab has no concept of tag exclusion, runners that provide specialized resources also require specialized tags.
For example, a basic CPU-only x86_64 runner may have a tag ``x86_64`` associated with it. However, a runner containing a
CUDA-capable GPU may have the tag ``x86_64-cuda`` to denote that it should only be used for packages that will benefit from
a CUDA-capable resource.
OIDC
~~~~
Spack runners use OIDC authentication for connecting to the appropriate AWS bucket
which is used for coordinating the communication of binaries between build jobs. In
order to configure OIDC authentication, Spack CI runners use a python script with minimal
dependencies. This script can be configured for runners as seen here using the ``pre_build_script``.
An environment is used to group together a set of specs for the
purpose of building, rebuilding and deploying in a coherent fashion.
Environments provide a number of advantages over the *à la carte*
approach of building and loading individual Spack modules:
An environment is used to group a set of specs intended for some purpose
to be built, rebuilt, and deployed in a coherent fashion. Environments
define aspects of the installation of the software, such as:
#. Environments separate the steps of (a) choosing what to
install, (b) concretizing, and (c) installing. This allows
Environments to remain stable and repeatable, even if Spack packages
are upgraded: specs are only re-concretized when the user
explicitly asks for it. It is even possible to reliably
transport environments between different computers running
different versions of Spack!
#. Environments allow several specs to be built at once; a more robust
solution than ad-hoc scripts making multiple calls to ``spack
install``.
#. An Environment that is built as a whole can be loaded as a whole
into the user environment. An Environment can be built to maintain
a filesystem view of its packages, and the environment can load
that view into the user environment at activation time. Spack can
also generate a script to load all modules related to an
environment.
#. *which* specs to install;
#. *how* those specs are configured; and
#. *where* the concretized software will be installed.
Aggregating this information into an environment for processing has advantages
over the *à la carte* approach of building and loading individual Spack modules.
With environments, you concretize, install, or load (activate) all of the
specs with a single command. Concretization fully configures the specs
and dependencies of the environment in preparation for installing the
software. This is a more robust solution than ad-hoc installation scripts.
And you can share an environment or even re-use it on a different computer.
Environment definitions, especially *how* specs are configured, allow the
software to remain stable and repeatable even when Spack packages are upgraded. Changes are only picked up when the environment is explicitly re-concretized.
Defining *where* specs are installed supports a filesystem view of the
environment. Yet Spack maintains a single installation of the software that
can be re-used across multiple environments.
Activating an environment determines *when* all of the associated (and
installed) specs are loaded, and so limits the software loaded to the specs
actually needed by the environment. Spack can even generate a script to
load all modules related to an environment.
Other packaging systems also provide environments that are similar in
some ways to Spack environments; for example, `Conda environments
<https://conda.io/docs/user-guide/tasks/manage-environments.html>`_ or