This PR provides 2 complementary features:
1. An augmentation to the package language to express ABI compatibility relationships among packages.
2. An extension to the concretizer that can synthesize splices between ABI compatible packages.
## 1. The `can_splice` directive and ABI compatibility
We augment the package language with a single directive: `can_splice`. Here is an example of a package `Foo` exercising the `can_splice` directive:
```python
class Foo(Package):
    version("1.0")
    version("1.1")

    variant("compat", default=True)
    variant("json", default=False)
    variant("pic", default=False)

    can_splice("foo@1.0", when="@1.1")
    can_splice("bar@1.0", when="@1.0+compat")
    can_splice("baz@1.0+compat", when="@1.0+compat", match_variants="*")
    can_splice("quux@1.0", when="@1.1~compat", match_variants="json")
```
Explanations of each use of the directive:
- `can_splice("foo@1.0", when="@1.1")`: If `foo@1.0` is the dependency of an already installed spec and `foo@1.1` could be a valid dependency for the parent spec, then `foo@1.1` can be spliced in for `foo@1.0` in the parent spec.
- `can_splice("bar@1.0", when="@1.0+compat")`: If `bar@1.0` is the dependency of an already installed spec and `foo@1.0+compat` could be a valid dependency for the parent spec, then `foo@1.0+compat` can be spliced in for `bar@1.0` in the parent spec.
- `can_splice("baz@1.0+compat", when="@1.0+compat", match_variants="*")`: If `baz@1.0+compat` is the dependency of an already installed spec and `foo@1.0+compat` could be a valid dependency for the parent spec, then `foo@1.0+compat` can be spliced in for `baz@1.0+compat` in the parent spec, provided that they have the same value for all other variants (regardless of what those values are).
- `can_splice("quux@1.0", when="@1.1~compat", match_variants="json")`: If `quux@1.0` is the dependency of an already installed spec and `foo@1.1~compat` could be a valid dependency for the parent spec, then `foo@1.1~compat` can be spliced in for `quux@1.0` in the parent spec, provided that they have the same value for their `json` variant.
## 2. Augmenting the solver to synthesize splices
### Changes to the hash encoding in `asp.py`
Previously, when including concrete specs in the solve, they would have the following form:
```
installed_hash("foo", "xxxyyy")
imposed_constraint("xxxyyy", "foo", "attr1", ...)
imposed_constraint("xxxyyy", "foo", "attr2", ...)
% etc.
```
Concrete specs now have the following form:
```
installed_hash("foo", "xxxyyy")
hash_attr("xxxyyy", "foo", "attr1", ...)
hash_attr("xxxyyy", "foo", "attr2", ...)
```
This transformation allows us to control which constraints are imposed when we select a hash, to facilitate the splicing of dependencies.
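As a rough sketch (hypothetical helper, not the actual `asp.py` code), only the name of the emitted fact changes:
```python
from typing import Callable, Iterable, List, Tuple

Fact = Tuple[str, ...]

def emit_concrete_spec_facts(
    spec, attr_tuples: Callable[[object], Iterable[Tuple[str, ...]]]
) -> List[Fact]:
    """Sketch: emit reuse facts for an installed concrete spec.

    `attr_tuples` stands in for however solver setup enumerates a spec's
    attributes ("version", "node_os", "depends_on", ...).
    """
    h = spec.dag_hash()  # real Spack API on concrete specs
    facts: List[Fact] = [("installed_hash", spec.name, h)]
    for attr in attr_tuples(spec):
        # Previously: ("imposed_constraint", h, *attr), which imposed every
        # constraint whenever the hash was selected. hash_attr facts let ASP
        # rules decide which constraints to impose, enabling splices.
        facts.append(("hash_attr", h) + tuple(attr))
    return facts
```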
### 2.1 Compiling `can_splice` directives in `asp.py`
Consider the concrete spec:
```
foo@2.72%gcc@11.4 arch=linux-ubuntu22.04-icelake build_system=autotools ^bar ...
```
It will emit the following facts for reuse (below is a subset):
```
installed_hash("foo", "xxxyyy")
hash_attr("xxxyyy", "hash", "foo", "xxxyyy")
hash_attr("xxxyyy", "version", "foo", "2.72")
hash_attr("xxxyyy", "node_os", "ubuntu22.04")
hash_attr("xxxyyy", "hash", "bar", "zzzqqq")
hash_attr("xxxyyy", "depends_on", "foo", "bar", "link")
```
Rules that derive `abi_splice_conditions_hold` will be generated from uses of
the `can_splice` directive. They have the following form:
```
can_splice("foo@1.0.0+a", when="@1.0.1+a", match_variants=["b"]) --->
abi_splice_conditions_hold(0, node(SID, "foo"), "foo", BaseHash) :-
  installed_hash("foo", BaseHash),
  attr("node", node(SID, SpliceName)),
  attr("node_version_satisfies", node(SID, "foo"), "1.0.1"),
  hash_attr("hash", "node_version_satisfies", "foo", "1.0.1"),
  attr("variant_value", node(SID, "foo"), "a", "True"),
  hash_attr("hash", "variant_value", "foo", "a", "True"),
  attr("variant_value", node(SID, "foo"), "b", VariVar0),
  hash_attr("hash", "variant_value", "foo", "b", VariVar0).
```
### 2.2 Synthesizing splices in `concretize.lp` and `splices.lp`
The ASP solver generates `splice_at_hash` attrs to indicate that a particular node has a splice in one of its immediate dependencies.
Splices can be introduced in the dependencies of concrete specs when `splices.lp` is conditionally loaded (based on the config option `concretizer:splice:true`).
### 2.3 Constructing spliced specs in `asp.py`
The method `SpecBuilder._resolve_splices` is a top-down, memoized implementation of hybrid splicing. This is an optimization over the more general `Spec.splice`, since the solver gives a global view of exactly which specs can be shared, ensuring the minimal number of splicing operations.
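A minimal sketch of the idea (hypothetical shape; the real logic lives in `SpecBuilder._resolve_splices`):
```python
def resolve_splices(spec, replacements, memo=None):
    """Top-down, memoized splice resolution.

    `replacements` maps a spec's hash to the replacement specs chosen by
    the solver via splice_at_hash; `memo` ensures each shared spec is
    spliced at most once across the whole DAG.
    """
    if memo is None:
        memo = {}
    key = spec.dag_hash()
    if key in memo:
        return memo[key]
    result = spec
    for replacement in replacements.get(key, ()):
        resolved = resolve_splices(replacement, replacements, memo)
        result = result.splice(resolved, transitive=False)  # real Spec API
    memo[key] = result
    return result
```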
### Misc changes to facilitate configuration and benchmarking
- Added the method `Solver.solve_with_stats` to expose timers from the public interface for easier benchmarking
- Added the boolean config option `concretizer:splice` to conditionally load splicing behavior
Co-authored-by: Greg Becker <becker33@llnl.gov>
We added unification semantics for parsing specs from the CLI, but there are a couple
of special cases in which we can avoid calls to the concretizer for speed when the
specs can all be resolved by lookups.
- [x] special case 1: solving a single spec
- [x] special case 2: all specs are either concrete (come from a file) or have an abstract
  hash. In this case, if `concretizer:unify:true`, we need an additional check to confirm
  the specs are compatible.
- [x] add a parameterized test for unifying on the CI
---------
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* libunwind: Add 1.7.2 and 1.8.1
* libunwind: Remove deprecated 1.1 version
* libunwind: Add newer *-stable branches: Remove 1.5-stable branch as well as cleanup.
* libunwind: Use GitHub url for all versions
* libunwind: Add conflict for PPC and 1.8.*
* libunwind: Add conflict for aarch64 and 1.8:
Build fails with
```
aarch64/Gos-linux.c: In function '_ULaarch64_local_resume':
aarch64/Gos-linux.c:147:1: error: x29 cannot be used in asm here
 }
 ^
aarch64/Gos-linux.c:147:1: error: x29 cannot be used in asm here
make[2]: *** [Makefile:4795: aarch64/Los-linux.lo] Error 1
```
* added updated versions
* added pyhmmer
* updated infernal
* fix blast-plus for apple-clang
* fix py-biopython build on apple-clang
* remove erroneous biopython dep: build issue is with python 3.8, not biopython
* deepsig python 3.9: relaxing unnecessary python restrictions
* add pyrodigal
* fix unnecessarily strict diamond version
* builds and updates: blast-plus indexing broken, still need to test db download and bakta pipeline
* builds and runs
* revert blast-plus changes: remove my personal hacks to get blast-plus to build
* fix the build error during compilation of rocdecode; it was dependent on the libva-devel package
* address review comment
* address review changes; commit the changes
* Add two_level_namespace variant (default is disabled) for macOS to enable building
executables and libraries with two-level namespace enabled.
* Addressed reviewer comments.
* Moved the two_level_namespace variant ahead of the patch that uses that variant so
that concretization works properly.
* Removed extra print statements
* soqt: Add SoQt package
The geomodel package needs this if visualization is turned on.
* make qt versions explicit
* use virtual dependency for qt
* pr feedback:
  - Remove myself as maintainer
  - Remove v1.6.0
  - Remove unused qt variant
This addresses part [1] of #46345.

#44713 introduced a bug where all non-spec query parameters like date
ranges, `-x`, etc. were ignored when an env was active.
This fixes that issue and adds tests for it.
---------
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
`spack mirror add` and `set` now have flags `--oci-username-variable`, `--oci-password-variable`, `--s3-access-key-id-variable`, `--s3-access-key-secret-variable`, `--s3-access-token-variable`, which allow users to specify an environment variable in which a username or password is stored.
Storing plain text passwords in config files is considered deprecated.
The schema for mirrors.yaml has changed, notably the `access_pair` list is generally replaced with a dictionary of `{id: ..., secret_variable: ...}` or `{id_variable: ..., secret_variable: ...}`.
The py-oracledb package only has a single outdated version available in its recipe. This PR adds a much broader range of versions and their corresponding checksums.
* add more versions of py-oracledb
* update py-oracledb recipe
* add py-cython version dependencies
* tweak py-cython version dependencies
* remove older versions of py-oracledb
This filters any selected executable ending with `-ocl` from the list of executables being probed as candidate for external `llvm` installations.
I couldn't reproduce the entire issue, but with a simple script:
```
#!/bin/bash
touch foo.o
echo "clang version 10.0.0-4ubuntu1 "
echo "Target: x86_64-pc-linux-gnu"
echo "Thread model: posix"
echo "InstalledDir: /usr/bin"
exit 0
```
I noticed the executable was still probed:
```
$ spack -d compiler find /tmp/ocl
[ ... ]
==> [2024-11-11-08:38:41.933618] '/tmp/ocl/bin/clang-ocl' '--version'
```
and `foo.o` was left in the working directory. With this change, instead the executable is filtered out of the list on which we run `--version`, so `clang-ocl --version` is not run by Spack.
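The fix itself is essentially a name-based filter before probing, along these lines (sketch):
```python
def drop_ocl_wrappers(candidates):
    """Drop ROCm's clang wrapper scripts (e.g. clang-ocl) so that
    `<exe> --version` is never run on them during detection."""
    return [exe for exe in candidates if not exe.endswith("-ocl")]

print(drop_ocl_wrappers(["/tmp/ocl/bin/clang-ocl", "/usr/bin/clang"]))
# ['/usr/bin/clang']
```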
- [x] Get rid of a call to `parser.quote_if_needed()` during solver setup, which
introduces a circular import and also isn't necessary.
- [x] Rename `spack.variant.Value` to `spack.variant.ConditionalValue`, as it is *only*
used for conditional values. This makes it much easier to understand some of the
logic for variant definitions.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
`conditional()`, which defines conditional variant values, and the other ways to declare
variant values should probably be in a layer above `spack.variant`. This does the simple
thing and moves *just* `conditional()` to `spack.directives` to avoid a circular import.
We can revisit the public variant interface later, when we split packages from core.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* lua: add +pcfile support for @5.4: versions, without using a version-dependent patch
* lua: always generate pcfile, remove +pcfile variant from all packages
* lua: minor fixes
* rpm: minor fix
* Add 5.030 and remove the requirement to patch verilator; the problem has been fixed in this rev
* Update var/spack/repos/builtin/packages/verilator/package.py
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
---------
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* benchmark: enable shared libraries by default
The existing behaviour of Google Benchmark yields static objects, which
are of little use for most projects. This PR changes the spec to use
dynamic libraries instead.
* Add shared variant
* librdkafka: added missing dependency on curl
This PR adds a missing dependency on curl in librdkafka.
* librdkafka: added dependency on openssl and zlib
* Added support for Codeplay AMD Plugin for Intel OneAPI Compilers
* [@spackbot] updating style on behalf of kaanolgu
* Adding 2025.0.0
* removed HOME and XDG_RUNTIME_DIR
* [@spackbot] updating style on behalf of kaanolgu
---------
Co-authored-by: Kaan Olgu <kaan.olgu@bristol.ac.uk>
No ROOT `builtin` should ever be set to true if possible, because that
builds an existing library that spack may not know about.
Furthermore, using `builtin_glew` forces the package to be on, even when
not building x/gl/aqua on macos. This causes build failures.
Caused by https://github.com/spack/spack/pull/45632#issuecomment-2276311748.
Currently, if a package has a dependency from another repository and patches it,
generation of the patch cache will fail. Concretization succeeds if a fixed patch
cache is in place.
- [x] don't assume that patched dependencies are in the same repo when indexing
- [x] add some test fixtures to support multi-repo tests.
---------
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* spack.compiler: cache output
* compute libc from the dynamic linker at most once per spack process
* wrap compiler cache entry in class, add type hints
* test compiler caching
* ensure tests do not populate user cache, and fix 2 tests
* avoid recursion: cache lookup -> compute key -> cflags -> real_version -> cache lookup
* allow compiler execution in test that depends on get_real_version
If a package `foo` doesn't implement `libs`, the default was to search recursively for `libfoo` whenever asking for `spec[foo].libs` (this also happens automatically if a package includes `foo` as a link dependency).
This can lead to some strange behavior:
1. A package that is normally used as a build dependency (e.g. `cmake` at one point) is referenced like
`depends_on(cmake)` which leads to a fully-recursive search for `libcmake` (this can take
"forever" when CMake is registered as an external with a prefix like `/usr`, particularly on NFS mounts).
2. A similar hang can occur if a package is registered as an external with an incorrect prefix.
- [x] Update the default library search to stop after a maximum depth (by default, search
the root prefix and each directory in it, but no lower).
The following is a list of known changes to `find` compared to `develop`:
1. Matching directories are no longer returned -- `find` consistently only finds non-dirs,
even at `max_depth`
2. Symlinked directories are followed (needed to support max_depth)
3. `find(..., "dir/*.txt")` is allowed, for finding files inside certain dirs. These "complex"
patterns are delegated to `glob`, like they are on `develop`.
4. `root` and `files` arguments both support generic sequences, and `root`
allows both `str` and `path` types. This allows us to specify multiple entry points to `find`.
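For illustration, a call using the updated interface might look like this (hedged sketch; paths made up):
```python
from llnl.util.filesystem import find

# Search two (possibly overlapping) roots, descending at most one
# directory below each; the "lib/*.so" pattern has a directory component
# and is therefore delegated to glob.
libs = find(["/opt/view", "/opt/view/sub"], ["libfoo.*", "lib/*.so"], max_depth=1)
```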
---------
Co-authored-by: Peter Scheibel <scheibel1@llnl.gov>
This PR adds a sub-command to `spack env` (`track`) which allows users to add/link
anonymous environments into their installation as named environments. This allows
users to more easily track their installed packages and the environments they're
dependencies of. For example, with the addition of #41731 it's now easier to remove
all packages not required by any environments with:
```
spack gc -bE
```
#### Usage
```
spack env track /path/to/env
==> Linked environment in /path/to/env
==> You can activate this environment with:
==> spack env activate env
```
By default `track /path/to/env` will use the last directory in the path as the name of
the environment. However, users may customize the name of the linked environment
with `-n | --name`, as shown below.
```
spack env track /path/to/env --name foo
==> Tracking environment in /path/to/env
==> You can activate this environment with:
==> spack env activate foo
```
When removing a linked environment, Spack will remove the link to the environment
but will keep the structure of the environment within the directory. This will allow
users to remove a linked environment from their installation without deleting it from
a shared repository.
There is a `spack env untrack` command that can be used to *only* untrack a tracked
environment -- it will fail if it is used on a managed environment. Users can also use
`spack env remove` to untrack an environment.
This allows users to continue to share environments in git repositories while also having
the dependencies of those environments be remembered by Spack.
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
You can now provide multiple roots to a single `find()` call and all of
them will be searched. The roots can overlap (e.g. can be parents of one
another).
This also adds a library function for taking a set of regular expression
patterns and creating a single OR expression (and that library function
is used in `find` to improve its performance).
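The OR-combination helper boils down to this idea (sketch; the real function name and module may differ):
```python
import re
from typing import Iterable, Pattern

def or_pattern(patterns: Iterable[str]) -> Pattern:
    """Join several regexes into one alternation, so each filename is
    tested with a single C-level match instead of a Python-level loop."""
    return re.compile("|".join("(?:%s)" % p for p in patterns))

matcher = or_pattern([r"libfoo\.so(\.\d+)*$", r"libbar\.so(\.\d+)*$"])
assert matcher.match("libfoo.so.1.2")
assert not matcher.match("libbaz.so")
```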
- Set command line scopes last in `_main`, so they are higher scopes
- Restore the global configuration in a spawned process by inspecting
  the result of `ctx.get_start_method()`
- Add the ability to pass a mp.context to `PackageInstallContext`
- Add shell-tests to check overriding the configuration:
  - Using both -c and -C from command line
  - With and without an environment active
* edm4hep: Add json variant for newer versions
An explicit option has been added to EDM4hep, so we now expose it via a
variant as well. We keep the old behavior where we unconditionally
depended on nlohmann-json and implicitly built JSON support if we could
detect it at the CMake stage.
* Fix condition statement in when clause
* Use open version range to avoid fixing to single version
---------
Co-authored-by: Valentin Volkl <valentin.volkl@cern.ch>
Variants can now be propagated from a dependent package to (transitive) dependencies,
even if the source or transitive dependencies do not have the propagated variant.
For example, here `zlib` doesn't have a `guile` variant, but `gmake` does:
```
$ spack spec zlib++guile
- zlib@1.3%gcc@12.2.0+optimize+pic+shared build_system=makefile arch=linux-rhel8-broadwell
- ^gcc-runtime@12.2.0%gcc@12.2.0 build_system=generic arch=linux-rhel8-broadwell
- ^gmake@4.4.1%gcc@12.2.0+guile build_system=generic arch=linux-rhel8-broadwell
```
Adding this property has some strange ramifications for `satisfies()`. In particular:
* The abstract specs `pkg++variant` and `pkg+variant` do not intersect, because `+variant`
implies that `pkg` *has* the variant, but `++variant` does not.
* This means that `spec.satisfies("++foo")` is `True` if:
* for concrete specs: `spec` and its dependencies all have `foo` set if it exists
* for abstract specs: no dependency of `spec` has `~foo` (i.e. no dependency contradicts `++foo`).
* This also means that `Spec("++foo").satisfies("+foo")` is `False` -- we only know after concretization.
The `satisfies()` semantics may be surprising, but this is the cost of introducing non-subset
semantics (which are more useful than proper subsets here).
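In terms of the `Spec` API, the rules above play out like this (illustrative, per the semantics described in this PR):
```python
from spack.spec import Spec

# Abstract: +foo asserts that pkg itself has the variant, ++foo does not,
# so the two constraints neither intersect nor satisfy one another.
assert not Spec("pkg++foo").intersects(Spec("pkg+foo"))
assert not Spec("pkg++foo").satisfies("pkg+foo")
```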
- [x] Change checks for variants
- [x] Resolve conflicts
- [x] Add tests
- [x] Add documentation
---------
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* pythia8: Include patch for C++20 / Clang
Pythia8 vendors some FJCore sources that are as of Pythia8 312
incompatible with C++20 on clang. This adds a patch that makes it
compatible in these scenarios
* Add issue link
* rename setup_cxxstd function
* Remove an accidental printout
* Apply patch to all compilers, add lower bound
* `find(..., max_depth=...)` can be used to control how many directories at most to descend into below the starting point
* `find` now enters every unique (symlinked) directory once at the lowest depth
* `find` is now repeatable: it traverses the directory tree in a deterministic order
In the pure `ld` case, we weren't actually parsing `RPATH` arguments separately as we
do for `ccld`. Fix this by adding *another* nested case statement for raw `RPATH`
parsing.
There are now 3 places where we deal with `-rpath` and friends, but I don't see a great
way to unify them, as `-Wl,`, `-Xlinker`, and raw `-rpath` arguments are all ever so
slightly different.
Also, this fixes the ordering of assertions to make `pytest` diffs more intelligible.
The meaning of `+` and `-` in diffs changed in `pytest` 6.0 and the "preferred" order
for assertions became `assert actual == expected` instead of the other way around.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
`cc` divides most paths up into system paths, spack managed paths, and other paths.
This gets really repetitive and makes the code hard to read. Simplify the script
by adding some functions to do most of the redundant work for us.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
This PR has two small contributions:
- It adds another phase to the timer for concretization, "construct_specs", to actually see the time the concretizer spends interpreting the `clingo` output to build the Python object for a concretized spec.
- It adds the method `Solver.solve_with_stats` to expose the timers that were already in the concretizer to the public solver API. `Solver.solve` just becomes a special case of `Solver.solve_with_stats` that throws away the timing output (which is what it was already doing).
These changes will make it easier to benchmark concretizer performance and provide a more complete picture of the time spent in the concretizer by including the time spent interpreting clingo output.
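Benchmarking usage might then look like the following (a sketch; the exact return shape is assumed from the description above):
```python
import spack.solver.asp as asp
from spack.spec import Spec

solver = asp.Solver()
# Assumed: solve_with_stats returns the result together with the timer
# and statistics that Solver.solve throws away.
result, timer, stats = solver.solve_with_stats([Spec("zlib")])
timer.write_tty()  # now includes the "construct_specs" phase
```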
Currently, the `geant4-data` spec creates symlinks to all of its
dependencies, and it does so by globbing their `share/` directories.
This works very well for the way Spack installs these, but it doesn't
work for anybody wanting to use e.g. the Geant4 data on CVMFS. See pull
request #47298. This commit changes the way the `geant4-data` spec
works. It no longer blindly globs directories and makes symlinks, but it
asks its dependencies specifically for the name of their data directory.
This should allow us to use Spack to use the CVMFS installations as
externals.
* zabbix: add v5.0.44, v6.0.34, v7.0.4 (fix CVEs)
* [@spackbot] updating style on behalf of wdconinc
* zabbix: use f-string
* zabbix: fix f-string quoting
* zabbix: use mysql-client
* @wdconic, this fixes the mysql client virtual for me
---------
Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
The idea is to go from most to least used: backward compat -> forward compat -> pinning on major or major.minor version -> pinning specific, concrete versions.
Further, the following
```python
# backward compatibility with Python
depends_on("python@3.8:")
depends_on("python@3.9:", when="@1.2:")
depends_on("python@3.10:", when="@1.4:")
# forward compatibility with Python
depends_on("python@:3.12", when="@:1.10")
depends_on("python@:3.13", when="@:1.12")
depends_on("python@:3.14")
```
is better than disjoint `when` ranges, which cause repetition of the rules on dependencies and require frequent editing of existing lines after new releases:
```python
depends_on("python@3.8:3.12", when="@:1.1")
depends_on("python@3.9:3.12", when="@1.2:1.3")
depends_on("python@3.10:3.12", when="@1.4:1.10")
depends_on("python@3.10:3.13", when="@1.11:1.12")
depends_on("python@3.10:3.14", when="@1.13:")
```
When compiled without MPI support, a fake mpi header is autogenerated during configure/build. The header is missing one symbol in version 1.9.5. The problem has since been fixed upstream.
A similar problem also occurs for 1.9.4. Unfortunately, the patch does not work for 1.9.4 and I also don't know if further fixes would be required for 1.9.4. Therefore, only the newest version 1.9.5 is patched.
Update HDF5 version for develop branch to develop-2.0 to match the new
version in the develop branch.
Remove develop-1.16, as it has been decided to make the next release HDF5 2.0.0.
This PR changes the semantics of `==` for specs so that:
```
hdf5++mpi == hdf5+mpi
```
won't hold true anymore. It also changes the constrain semantics, so that a
non-propagating variant always overrides a propagating variant. This means:
```
(hdf5++mpi).constrain(hdf5+mpi) -> hdf5+mpi
```
Before, we had a very weird semantic that was supposed to be tested by unit tests:
```
(libelf++debug).constrain(libelf+debug+foo) -> libelf++debug++foo
```
This semantic has been dropped, as it was never really tested due to the `==` bug.
According to https://github.com/root-project/root/issues/7160, if
`-Dcocoa=ON`, the build must also be configured with `-Dopengl=ON`, since
otherwise the build encounters missing includes. This is/was a silent
failure in ROOT CMake, but I believe it has been made an explicit failure
some time this year.
Currently, the schema reads:
```
from:
- type:
    environment: path_or_name
```
but this can't be extended easily to other types, e.g. to buildcaches,
without duplicating the extension keys. Use instead:
```
from:
- type: environment
  path: path_or_name
```
Currently, the `concretizer:unify:` config option only affects environments.
With this PR, it now affects any group of specs given to a command using the `parse_specs(*, concretize=True)` interface.
- [x] implementation in `parse_specs`
- [x] tests
- [x] ensure all commands that accept multiple specs and concretize use `parse_specs` interface
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* don't concretize in CI if changed packages are not in stacks
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* Generate noop job when no specs to rebuild due to untouched pruning
* Add test to verify skipping generate creates a noop job
* Changed debug for early exit
---------
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
This changes `Spec` serialization to include information about propagation for abstract specs.
This was previously not included in the JSON representation for abstract specs, and couldn't be
stored.
Now, there is a separate `propagate` dictionary alongside the `parameters` dictionary. This isn't
beautiful, but when we bump the spec version for Spack `v0.24`, we can clean up this and other
aspects of the schema.
* geant4: make downloading data dependency optional
This PR makes downloading the data repository of the Geant4 spec
optional by adding a sticky, default-enabled variant which controls the
dependency on `geant4-data`. This should not change the default
behaviour, but should allow users to choose whether or not they want the
data directory.
* Add comment
* Update env variable
* Generic docs
* Buildable false
- Merging sycl2020usm and sycl2020acc into sycl2020 and the submodel=acc/usm variant is introduced
- implementation is renamed to option
- impl ( fortran implementation options) renamed to foption
- sycl_compiler_implementation and thrust_backend
- stddata, stdindices, stdranges merged into a single std model with std_submodel introduced
- std_use_tbb to be boolean; also changed model filtering algorithm to make sure that it only picks model names
- Modified comments to clear confusion with cuda_arch cc_ and sm_ prefix appends
- Deleted duplicate of cuda_arch definition from +omp
- CMAKE_CXX_COMPILER moved to be shared arg between all models except tbb and thrust
- Replaced sys.exit with InstallError and created a dictionary to simplify things and eliminate excess code lines doing same checks
- Replaced the -mcpu flags to -march since it is deprecated now
- Replaced platform.machine with spec.target
- Removing raja_backend, introducing openmp_flag, removing -march flags, clearing debugging print(), removing excess `if ___ in self.spec.variants`
- [FIX] Issue where Thrust couldn't find correct compiler (it requires nvcc)
- [FIX] Fortran unsupported check to match the full string
- [FIX] RAJA cuda_arch to be with sm_ not cc_
- dir= option is no longer needed for kokkos
- dir is no longer needed
- [omp] Adding clang support for nvidia offload
- SYCL2020 offload to nvidia GPU
- changing model dependency to be languages rather than build system
- removing hardcoded arch flags and replacing with archspec
- removing cpu_arch from acc model
---------
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Greg Becker <becker33@llnl.gov>
Co-authored-by: Kaan Olgu <kaan.olgu@bristol.ac.uk>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Introduce support for ArmPL and ACfL 24.10
This patch introduces the possibility of installing armpl-gcc
and acfl 24.10 through spack. It also addressed one issue observed
after PR https://github.com/spack/spack/pull/46594
* Fix Github action issues.
- Remove default URL
- Reinstate default OS for ACfL to RHEL.
Fixes an issue reported where `spack env depfile` + `make -j` would
non-deterministically refuse to mark all environment roots explicit.
`update_explicit` had the pattern
```python
rec = self._data[key]
with self.write_transaction():
rec.explicit = explicit
```
but `write_transaction` may reinitialize `self._data`, meaning that
mutating `rec` won't mutate `self._data`, and the changes won't be
persisted.
Instead, use `mark` which has a correct implementation.
Also avoids the essentially incorrect early return in `update_explicit`,
which is a pattern I don't think belongs in database.py: it branches on
possibly stale data to realize there is nothing to change, but in reality
it requires a write transaction to know that for a fact, and that would
defeat the purpose. So, leave this optimization to the call site.
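For contrast, a sketch of the broken versus corrected pattern:
```python
# Broken: `rec` may point into a stale `self._data`, because entering the
# write transaction can re-read the database and rebuild `self._data`.
rec = self._data[key]
with self.write_transaction():
    rec.explicit = explicit

# Correct (what `mark` does): look the record up only inside the
# transaction, after `self._data` has been (re)initialized.
with self.write_transaction():
    self._data[key].explicit = explicit
```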
The already concrete specs in an environment are now among the reusable specs for the concretizer.
This includes concrete specs from all include_concrete environments.
In addition to this change to the default reuse, `environment` is added as a reuse type for
the concretizer config. This allows users to specify:
```
spack:
  concretizer:
    # Reuse from this environment (including included concrete) but not elsewhere
    reuse:
      from:
      - type: environment

    # or reuse from only my_env included environment
    reuse:
      from:
      - type:
          environment: my_env

    # or reuse from everywhere
    reuse: true
```
If reuse is specified from a specific environment, only specs from that environment will be reused.
If the reused environment is not specified via include_concrete, the concrete specs will be retrieved
at concretization time to be reused.
Signed-off-by: Ryan Krattiger <ryan.krattiger@kitware.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Currently the order in which hooks are run is arbitrary.
This can be fixed by `sorted(list_modules(...))`, but I think it is much
more clear to just have a static list.
Hooks are not extensible other than modifying Spack code, which
means it's unlikely people maintain custom hooks since they'd have
to fork Spack. And if they fork Spack, they might as well add an entry
to the list when they're continuously rebasing.
The idea is that `spack -e env add ./concrete-spec.json` would list the
full hash in the specs, so that (a) it's not ambiguous and (b) it could
in principle result in constant-time lookup instead of a linear-time
substring match in large build caches.
* r-*: updates to latest versions
* r-*: add new dependencies
* r-proj: fix docstring line length
* r-list: add homepage
* r-*: add more dependencies
* r-rmpi: use virtual dependencies, conflict openmpi@5:
* r-cairo: require cairo +png; +pdf for some versions; cairo +fc when +ft
* r-proj: set LD_LIBRARY_PATH since rpath not respected
* Added packages for the intel-2025.0.0 release
* fix style
* pin mkl to 2024.2.2
until e4s can upgrade to the 2025 compiler and the ginkgo compatibility issue can be resolved.
---------
Co-authored-by: Robert Cohn <rscohn2@gmail.com>
* Add a descriptor to have a class level constant
This descriptor helps intercept places where we set a value on instances.
It does not really behave like "const" in C-like languages, but is the
simplest implementation that might still be useful.
* Add a descriptor to deprecate properties/attributes of an object
This descriptor is used as a base class. Derived classes may implement a
factory to return an adaptor to the attribute being deprecated. The
descriptor can either warn, or raise an error, when usage of the deprecated
attribute is intercepted.
---------
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
This adds the current latest version of py-uv. While working on this, I also
found that uv (including older versions) has a build dependency on cmake which
was not specified in the package, so I added it as a dependency.
I also found that on my machine, the build process had trouble finding cmake,
so I set the path to it explicitly as an environment variable.
* py-jupyter-core: set environment variables for extensions
* Changes committed by gh-spack-pr
---------
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
Currently, `spack solve` has different spec selection semantics than `spack spec`.
`spack solve` currently does not allow specifying a single spec when an environment is active.
This PR modifies `spack solve` to inherit the interface from `spack spec`, and to use
the same spec selection logic. This will allow for better use of `spack solve --show opt`
for debugging.
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* gdk-pixbuf: delete old versions, make mesonpackage
goal is to get rid of `std_meson_args` global, but clean up package
while at it.
`setup_dependent_run_environment` was removed because it did not depend
on the dependent spec, and would result in repeated env variable
changes.
* atk: idem
* fix a dependent
* ML CI: Linux aarch64
* Add config files
* No aarch64 tag
* Don't specify image
* Use amazonlinux image
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
* Update and require
* GCC is too old
* Fix some builds
* xgboost doesn't support old GCC + cuda
* Run on newer Ubuntu
* Remove mxnet
* Try aarch64 range
* Use main branch
* Conflict applies to all targets
* cuda only required when +cuda
* Use tagged version
* Comment out tf-estimator
* Add ROCm, use newer Ubuntu
* Remove ROCm
---------
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
Originally, concretization failed if the splice config points to an invalid replacement.
This PR defers the check until we know the splice is needed, so that irrelevant splices
with bad config cannot stop concretization.
While I was at it, I improved an error message from an assert to a ValueError.
* Normalize Spack Win entrypoints
Currently Spack has multiple entrypoints on Windows that, in addition to
differing from *nix implementations, differ from shell to shell on
Windows. This is a bit confusing for new users and in general
unnecessary.
This PR adds a normal setup script for the batch shell while preserving
the previous "click from file explorer for spack shell" behavior.
Additionally adds a shell title to both powershell and cmd letting users
know this is a Spack shell
* remove doskeys
In #44588 we added logic to suppress deprecation warnings for the
Intel classic compilers. This depended on matching against
* The compiler names (looking for icc, icpc, ifort)
* The compiler version
When using an Intel compiler with fortran wrappers, the first check
always fails. To support using the fortran wrappers (in combination
with the classic Intel compilers), we remove the first check and
suppress if just the version matches. This works because:
* The newer compilers like icx can handle (ignore) the flags that
suppress deprecation warnings
* The Cray wrappers pass the underlying compiler version (e.g. they
report what icc would report)
* new openjdk variant to symlink system certificate
* Update var/spack/repos/builtin/packages/openjdk/package.py
Co-authored-by: Alec Scott <hi@alecbcs.com>
---------
Co-authored-by: Alec Scott <hi@alecbcs.com>
Connection objects are Python version, platform and multiprocessing
start method independent, so better to use those than a mix of plain
file descriptors and inadequate guesses in the child process whether it
was forked or not.
This also allows us to delete the now redundant MultiProcessFd class,
hopefully making things a bit easier to follow.
This allows the following
```python
cache.init_entry("my/cache")
with cache.read_transaction("my/cache") as f:
data = f.read() if f is not None else None
```
mirroring `write_transaction`, which returns a tuple `(old, new)` where
`old` is `None` if the cache file did not exist yet.
The alternative that requires less defensive programming on the call
site would be to create the "old" file upon first read, but I did not
want to think about how to safely atomically create the file, and it's
not unthinkable that an empty file is an invalid format (for instance
the call site may expect a JSON file, which requires at least `{}` as content).
This PR is in response to a question in the `environments` slack channel (https://spackpm.slack.com/archives/CMHK7MF51/p1729200068557219) about inadequate CLI help/documentation for one specific subcommand.
This PR uses the approach I took for the descriptions and help for `spack test` subcommands. Namely, I use the first line of the relevant docstring as the description, which is shown per subcommand in `spack env -h`, and the entire docstring as the help. I then added help where it seemed appropriate. I also tweaked argument docstrings to tighten them up, make them consistent with similar arguments elsewhere in the command, and elaborate when it seemed important. (The only subcommand I didn't touch is `loads`.)
For example, before:
```
$ spack env update -h
usage: spack env update [-hy] env
positional arguments:
env name or directory of the environment to activate
optional arguments:
-h, --help show this help message and exit
-y, --yes-to-all assume "yes" is the answer to every confirmation request
```
After the changes in this PR:
```
$ spack env update -h
usage: spack env update [-hy] env
update the environment manifest to the latest schema format
update the environment to the latest schema format, which may not be
readable by older versions of spack
a backup copy of the manifest is retained in case there is a need to revert
this operation
positional arguments:
env name or directory of the environment
optional arguments:
-h, --help show this help message and exit
-y, --yes-to-all assume "yes" is the answer to every confirmation request
```
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Commit aa0825d642 accidentally added a semicolon
to the ANSI escape sequence even if the color code was `None` or unknown, breaking the
bold, uncolored font-face. This PR restores the old behavior.
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Add type hints to all query* methods
* Inline docstrings
* Change defaults from `any` to `None` so they can be type hinted in old Python
* Pre-filter on given hashes instead of iterating over all db specs
* Fix a bug where the `--origin` option of uninstall had no effect
* Fix a bug where query args were not applied when searching by concrete spec
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Fixes a change in behavior/bug in
70412612c7, where partial environment
installs would mark the selected spec as explicitly installed, even if
it was not a root of the environment.
The desired behavior is that roots by definition are the to be
explicitly installed specs. The specs on the `spack -e ... install x`
command line are just filters for partial installs, so leave them
implicitly installed if they aren't roots.
ci: Remove deprecated logic from the ci module
Remove the following from the ci module, schema, and tests:
- deprecated ci stack and handling of old ci config
- deprecated mirror handling logic
- support for artifacts buildcache
- support for temporary storage url
* ParaView: Explicitly set the ENABLE_MPI on/off
* Disallow MPI in the DAG when ~mpi
* @5.13 uses 'remove_children', use pugixml@1.11:, See #47098
* cloud_pipelines/stacks/data-vis-sdk: paraview +raytracing: add +adios2 +fides
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
When building ParaView with ADIOS2 and allowing VTK-m to be
built, also build Fides. This reads ADIOS2 files with a
particular JSON schema, but it requires VTK-m to read data.
* Libmng: Restore Autotools system
CMake, when building its Qt gui, depends on Qt, which in turn depends on libmng, a CMake-based build. To avoid this obvious cyclic dependency, we re-introduce libmng's autotools build into Spack and require that, when Qt is built as a CMake dependency, libmng is built with autotools.
* Ensure autotools constraint is limited to non-Windows
* refactor qt-libmng relation from CMake
This commit removes all the uses of spec.compiler that
can be easily substituted by a more idiomatic approach,
e.g. using spec.satisfies or directives
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* py-sphinxcontrib-spelling: new package
* Dependency enchant: Add missing dep on pkgconfig
---------
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
Both `multiprocessing.connection.Connection.__del__` and `io.IOBase.__del__` called `os.close` on the same file descriptor. As of Python 3.13, this is an explicit warning. Ensure we close once by using `os.fdopen(..., closefd=False)`.
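Illustratively (sketch; `conn` stands for an inherited `Connection`):
```python
import os

# The Connection owns the descriptor and will close it; wrap the fd for
# buffered I/O without creating a second owner of the same fd.
stream = os.fdopen(conn.fileno(), "rb", closefd=False)
data = stream.read()  # stream's destructor no longer calls os.close(fd)
```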
* stacks: add a stack for devtools on darwin
After getting this whole mess building on darwin, let's keep it that
way, and maybe make it so we have some non-ML darwin binaries in spack
as well.
* reuse: false for devtools
* dtc: fix darwin dylib name and id
On mac the convention is `lib<name>.<num>.dylib`, while the makefile
creates a num suffixed one by default. The id in the file is also a
local name rather than rewritten to the full path, this fixes both
problems.
* node-js: make whereis more deterministic
* relocation(darwin): catch Mach-O load failure
The MachO library can throw an exception rather than return no headers;
this happened in an ELF file in the test data of go-bootstrap. Try
catching the exception and moving on for now. May also need to look
into why we're trying to rewrite an elf file.
* qemu: add darwin flags to clear out warnings
There's a build failure for qemu in CI, but it's invisible because of
the immense mass of warning output. Explicitly specify the target macos
version and remove the extraneous unknown-warning-option flag.
* dtc: libyaml is also a link dependency
libyaml is required at runtime to run the dtc binary, lack of it caused
the ci for qemu to fail when the library wasn't found.
Fixes #47101

The bug was introduced in #33495, where `spack find` was not updated,
and wasn't caught by unit tests.
Now a Database can accept a custom predicate to select the installation
records. A unit test is added to prevent regressions. The weird convention
of having `any` as a default value has been replaced by the more commonly
used `None`.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* root: fix variant detection for external
A few fixes (possibly non-exhaustive) to `spack external find root`
Several variants have had `when=` clauses added that need to be
propagated to determine_variants. The previously used
Version.satifies("") method also has been removed, it seems. It's
slightly cumbersome that there is no self.spec to use in
determine_variants, but comparisons using Version(version_str) work at least
* remove debug printout
* sccache: add new package
* sccache: add older versions and minimum rust versions
* sccache: add more minimum rust versions
* sccache: add sccache executable and tag as build-tools
* sccache: add dist-server
* sccache: add determine_version and determine_variants
* sccache: add sccache-dist executable
* sccache: fix style
* Update var/spack/repos/builtin/packages/sccache/package.py
* In case building very old sccache <= 5 with these older rust versions is not needed, they can be omitted.
* sccache: drop older versions
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
* sccache: add openssl dependency
* sccache: openssl is a linux only dependency?
---------
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
Older builds of Boost were failing on Windows because they were
adding --without-... flags for libraries that did not exist in those
versions. So:
* lib variants are updated with version range info (current range
info for libs is not comprehensive, but represents changes over the
last few minor versions up to 1.85)
* On Windows, --without-... options are omitted for libraries when they
don't exist for the version of boost being built. Non-Windows uses
a different approach, which was not affected because the new libraries
were not activated by default. It would benefit from similar attention
though to avoid potential future issues.
#44327 made sure to always run `set_package_py_globals` on all
packages before running `setup_dependent_package` for any package,
so that packages implementing the latter could depend on variables
like `spack_cc` being defined.
This ran into an undocumented dependency: `std_cmake_args` is set in
`set_package_py_globals` and makes use of `cmake_prefix_paths` (if it
is defined in the package); `py-torch`'s implementation of
`cmake_prefix_paths` depends on a variable set by
`setup_dependent_package` (`python_platlib`).
This generally restores #44327, and corrects the resulting issue by
moving assignment of `std_cmake_args` to after both actions have been
run.
* py-clip-anytorch: new package
* py-clip-anytorch: ran black
py-langchain-core: ran black
py-pydantic: ran black
py-dalle2-pytorch: ran black
* [py-clip-anytorch] fixed license(checked_by)
* Apply suggestion from Wouter on fixing CI
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
---------
Co-authored-by: Alex C Leute <acl2809@rit.edu>
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* py-pytorch-warmup: new package
* py-clip-anytorch: ran black
py-langchain-core: ran black
py-pydantic: ran black
py-dalle2-pytorch: ran black
---------
Co-authored-by: Alex C Leute <acl2809@rit.edu>
Some unused methods in VTK-m resulted in compile errors. These were
not discovered because many compilers ignore unused methods in templated
classes, but the SYCL compiler for Aurora gave an error.
Turns out `os=...` of the spec and `MACOSX_DEPLOYMENT_TARGET` are kept
in sync, and the env variable is used to initialize
`CMAKE_MACOSX_DEPLOYMENT_TARGET`.
In bootstrap code we set the env variable, so these bits are redundant.
---------
Co-authored-by: haampie <haampie@users.noreply.github.com>
* Remove the implicit CORE-AVX512 since the CPU specific flags are added by the
compiler wrappers.
* Add `-i_use-path` to help `ifx` find `lld` even if `-gcc-name` is set in
`ifx.cfg`. This file is written by `intel-oneapi-compilers` package to find the
correct `gcc`. Not being able to find `lld` is a bug in `ifx`. @rschon2 found
this workaround.
* e4s external rocm ci: upgrade to v6.2.1
* use ghcr.io/spack/spack/ubuntu22.04-runner-amd64-gcc-11.4-rocm6.2.1:2024.10.08
* magma +rocm: add entry for v6.2.1
Extrae normally separates the C and MPI fortran interception libs, but
for mixed C/Fortran applications a combined lib is needed.
Co-authored-by: fpanichi <fpanichi@amd.com>
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
* [ftgl] Restrict GCC 14+ patch to apply only to GCC 14+
The patch added by #46927 should only be applied where it is needed:
with GCC 11 it causes a compilation failure where none previously
existed.
* Fix the constraint for applying the unsigned char patch to ^freetype@2.13.3:
---------
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
The purpose of this CI job is to ensure that we
can use a modern clingo to concretize specs, if
e.g. it was installed in a virtual environment
with pip.
Since there is no need to re-test unrelated parts
of Spack, reduce the number of tests we run to just
concretize.py
* lua: update luarocks resource to 3.11.1
We have kept an older 3.8 for some time, but that now uses an incorrect
value for the deployment target for macos, causing builds for bundles to
succeed but in such a way that they can't be linked to applications by
`ld` but only loaded by dlopen. This fixes that, and also generally
updates the tool.
* lua-luajit-openresty: add new version fix LUA_PATH
Adds a newer version of openresty's luajit, and adds the slightly odd
extra share path they use that contains the `jit.*` modules. Without
that, things that use bytecode-saving and other jit routines (like
neovim) fail.
* lua-lpeg: fix lpeg build to work for neovim on OSX
Normally luarocks builds all lua libraries as bundles on macos, this
makes sense, but means they can't then be linked by LD into executables
the way neovim expects to do. I'm not sure how this ever worked, if it
did. This patch adds the appropriate variables to have luarocks build
the library as a shared library, and subsequently fix the id with
install_name_tool (the built-in functionality for this does not
trigger).
This also adds a symlink from `liblpeg.dylib` to `lpeg.so` because
neovim will not build on macos without it. See corresponding upstream
pull request at https://github.com/neovim/neovim/pull/30749
Remove the constraint for concrete specs and simply take the
max(version) if a version is not given. This should default to the
highest infinity version which is also the logical best guess for
doing development.
* Remove concrete version constraint for develop, set docs
* Add unit-test
* Update lib/spack/docs/environments.rst
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
* Update lib/spack/spack/cmd/develop.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Consolidate env collection in cmd
* Style
---------
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
Co-authored-by: Greg Becker <becker33@llnl.gov>
Remove the `build-tools` tag of python, otherwise these types of
concretizations are possible:
```
py-root
^py-pip
^python@3.12
^python@3.13
```
So, a package would be configured with py-pip using python 3.12, but
installed for 3.13, which does not work.
* Add new version for master branch
Added new version for master branch. Also added additional functions to ensure tempo will actually run. Tempo assumes the stage directory sticks around and references numerous files and directories there. That has been corrected here, but only when using the master version. The LWA-10-2020 version will also have this problem, but they may have additional setup in their compute/Spack environment to address this issue already, so I did not modify anything when that's the version. An example of what happens in the LWA-10-17-2020 version regarding missing files is given below:
```
user@cs:~/spack/bin$ tempo
more: cannot open /tempo.hlp: No such file or directory
```
* Updated to fix format errors
Flake8 check found errors. Fixed those formatting issues
* Additional format change
Removed redundant setup_dependent_run_environment missed in previous update
* Update url to use https: https is the usual transport and is needed to support checkout behind some firewalls
---------
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
The CMake builder in Spack actually adds incorrect rpaths. They are
unfiltered and incorrectly ordered compared to what the compiler wrapper
adds.
There is no need to specify paths to dependencies in `CMAKE_INSTALL_RPATH`
because of two reasons:
1. CMake preserves "toolchain" rpaths, which includes the rpaths injected
by our compiler wrapper.
2. We use `CMAKE_INSTALL_RPATH_USE_LINK_PATH=ON`, so libraries we link
to are rpath'ed automatically.
However, CMake does not create install rpaths to directories in the package's
own install prefix, so we set `CMAKE_INSTALL_RPATH` to the educated guess
`<prefix>/{lib,lib64}`, but omit dependencies.
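The resulting arguments are roughly the following (sketch with a made-up prefix):
```python
prefix = "/opt/spack/opt/linux-x86_64/foo-1.0-abcdef"  # made-up prefix

cmake_args = [
    # Libraries we link to get rpath'ed automatically:
    "-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=ON",
    # Educated guess for the package's own libs; no dependency dirs:
    "-DCMAKE_INSTALL_RPATH=%s" % ";".join([prefix + "/lib", prefix + "/lib64"]),
]
```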
Some assertions are not testing DAG invariants, and they are passing only
because of the simple structure of the builtin.mock repository on develop.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* update versions of Neo4j and Redis deps
* deprecating older versions due to security vulnerabilities
* [@spackbot] updating style on behalf of kchilleri
* Update var/spack/repos/builtin/packages/redis/package.py
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* adding previous urls to use archives on project websites
* [@spackbot] updating style on behalf of kchilleri
* adding new required maven version
* label when to use specific maven versions
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
---------
Co-authored-by: Krishna Chilleri <krishnachilleri@lanl.gov>
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* Add versions 1.14.4-3, 1.14.5, develop-1.16, and update develop-1.15 to
develop-1.17.
* Remove unused list_url and list_depth, and fix style-disapproved spacing in `url = "..."`.
* One more style fix.
This PR allows users to configure explicit splicing replacement of an abstract spec in the concretizer.
```
concretizer:
  splice:
    explicit:
    - target: mpi
      replacement: mpich/abcdef
      transitive: true
```
This config block would mean "for any spec that concretizes to use mpi, splice in mpich/abcdef in place of the mpi it would naturally concretize to use." See #20262, #26873, #27919, and #46382 for PRs enabling splicing in the Spec object. This PR will be the first place the splice method is used in a user-facing manner. See https://spack.readthedocs.io/en/latest/spack.html#spack.spec.Spec.splice for more information on splicing.
This will allow users to reuse generic public binaries while splicing in the performant local mpi implementation on their system.
In the config file, the target may be any abstract spec. The replacement must be a spec that includes an abstract hash `/abcdef`. The transitive key is optional, defaulting to true if left out.
Two important items to note:
1. When writing explicit splice config, the user is in charge of ensuring that the replacement specs they use are binary compatible with whatever targets they replace. In practice, this will likely require either specific knowledge of what packages will be installed by the user's workflow, or somewhat more specific abstract "target" specs for splicing, to ensure binary compatibility.
2. Explicit splices can cause the output of the concretizer not to satisfy the input. For example, using the config above, consider a package in a binary cache `hdf5/xyzabc` that depends on mvapich2. Then the command `spack install hdf5/xyzabc` will instead install the result of splicing `mpich/abcdef` into `hdf5/xyzabc` in place of whatever mvapich2 spec it previously depended on. When this occurs, a warning message is printed: `Warning: explicit splice configuration has caused the concretized spec {concrete_spec} not to satisfy the input spec {input_spec}`.
Highlighted technical details of implementation:
1. This PR required modifying the installer to have two separate types of Tasks, `RewireTask` and `BuildTask`. Spliced specs are queued as `RewireTask` and standard specs are queued as `BuildTask`. Each spliced spec retains a pointer to its build_spec for provenance. If a RewireTask is dequeued and the associated `build_spec` is neither available in the install_tree nor from a binary cache, the RewireTask is requeued with a new dependency on a BuildTask for the build_spec, and BuildTasks are queued for the build spec and its dependencies (see the sketch after this list).
2. Relocation is modified so that a spack binary can be simultaneously installed and rewired. This ensures that installing the build_spec is not necessary when splicing from a binary cache.
3. The splicing model is modified to more accurately represent build dependencies -- that is, spliced specs do not have build dependencies, as spliced specs are never built. Their build_specs retain the build dependencies, as they may be built as part of installing the spliced spec.
4. There were vestiges of the compiler bootstrapping logic that were not removed in #46237 because I asked alalazo to leave them in to avoid making the rebase for this PR harder than it needed to be. Those last remains are removed in this PR.
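A sketch of the requeue logic from item 1 (hypothetical shapes; the real classes are the installer's `RewireTask` and `BuildTask`):
```python
def process_task(task, queue, install_tree, binary_cache):
    """Sketch: dequeue handling for spliced vs. standard specs."""
    if isinstance(task, RewireTask):
        build_spec = task.spec.build_spec  # provenance pointer
        if build_spec in install_tree or build_spec in binary_cache:
            rewire(task.spec)  # install and rewire in one pass
        else:
            # Requeue behind a real build of the build_spec and its deps.
            for dep in build_spec.traverse():
                queue.push(BuildTask(dep))
            task.dependencies.add(build_spec)
            queue.push(task)
    else:
        build(task.spec)
```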
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
* cuda: Add 12.6.2
* Update cuda build system
- Remove gcc@6 conflict that was only a deprecation (probably has to be added again with cuda@13)
- Update cuda_arch support by CUDA version
- Kepler support has ended with cuda@12
- Recently added 90a Hopper "experimental" features architecture was
missing the dependency on cuda@12:
adds the lima-vm project, in order to make that useful adds a newer
version of qemu so qemu VMs can work, and builds qemu with flags that
allow it to do things like give the VMs networking and virtfs
filesystems.
Also adds vde as a dependency of qemu.
* CMake: Improve incremental build speed.
CMake automatically embeds an updated configure step into make/ninja that will be called during the build phase. By default, if a `CMakeCache.txt` file exists in the build directory, CMake will use it, and this + `spec.is_develop` is sufficient evidence of an incremental build.
This PR removes duplicate work/expense from CMake packages when using `spack develop`.
* Update cmake.py
* [@spackbot] updating style on behalf of psakievich
* Update cmake.py
meant self not spec...
---------
Co-authored-by: psakievich <psakievich@users.noreply.github.com>
* fix typo in variable name in hepmc3 variant
* set cxx standard to 14 when using protobuf
* add myself to hepmc3 maintainer list
* hepmc3: Applied suggestion of @alecbcs for spec.satisfies("+protobuf") (agreed!)
Co-authored-by: Alec Scott <hi@alecbcs.com>
* hepmc3: cxx_standard for protobuf
only set the cxx standard to meet the protobuf minimum (14) if not also the rootio variant, as that sets the cxx standard to match the root public API standard requirements
* Environment.clear: ensure clearing is passed through to manifest
* test/cmd/env: make test_remove_commmand round-trip to disk
* cleanup vestigial variables
Some Windows Python installations may store the Python exe in Scripts/
rather than the base directory. Update `.command` to search in both
locations on Windows. On all systems, the search is now done
recursively from the search root: on Windows, that is the base install
directory, and on other systems it is bin/.
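A minimal sketch of the recursive search (the helper name and match pattern are assumptions for illustration):
```python
import fnmatch
import os
import sys


def find_python_command(search_root: str):
    """Recursively search for the python executable under the given root."""
    pattern = "python*.exe" if sys.platform == "win32" else "python*"
    for dirpath, _, filenames in os.walk(search_root):
        for name in fnmatch.filter(filenames, pattern):
            yield os.path.join(dirpath, name)
```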
* Add libALL support
* cabana: also require ALL
* cabana: Bugfix: Fix spec for cmake>=3.26 to be @3.26: and HDF5 support requires MPI
* cabana: MPI requires C: Add depends_on("c", type="build", when="+mpi")
* cabana: +mpi requires C, but at least for some CMake versions, Cabana's enable of C is too late. Patch it.
* cabana: simplify disabling of find_package's for disabled options and improve comment
* cabana: +grid of 0.6.0 does not compile with gcc-13: It misses iostream includes
* cabana: +test requires googletest at build time: gtest is a linked library (not a plugin or tool)
* cabana: 0.6.0+cuda requires kokkos@3.7:, see https://github.com/ECP-copa/Cabana/releases
* cabana: As 0.6.0+grid does not support gcc-13 and newer, I think it's good to add 0.6.1 and 0.7.0?
---------
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
* added py-evodiff and dependencies
* deleted the FIXME
* fixed style issues
* added versions for biotite dependencies; added hash to py-hatch-vcs
* added python version for py-hatch-cython
* updated biotraj dependencies
* - added versions for the packages and dependencies
- added more dependencies for py-hatch
- added rust versions
- added py-uv as a new package
* updated packages and their dependencies according to the PR review by @meyersbs
* typo fix for hatchling version; fix the minimum required setuptools version for evodiff
* added 1.9.0 and 1.7.0 userpath versions; required as a dependency
* added mlflow as a dependency
* changed biopython to an optional dependency according to review from @meyersbs; variant esmfold
* Updated Specs
- Pinned biotraj to 1:1 for py-biotite
- Added numpy and other dependencies for py-biotraj; which ones are needed depends on the version
- Excluded py-mlflow as a dependency of py-evodiff; it is not actually used in the package
- Removed versioned dependencies from py-fair-esm
- Added a version to py-packaging
- Added py-setuptools as a dependency in py-userpath
- Added sha256 as hashes for py-uv
* style changes
* Remove "modify_object_macholib"
According to the documentation, this function is used when installing
Mach-O binaries on Linux. The implementation seems questionable at
best, and the code appears to never be hit (Spack currently doesn't
support installing Mach-O binaries on Linux).
* Fix relocation on macOS, when store projection changes
---------
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Enable tests for symlink-based views (this works with almost no
modifications to the view logic). View logic is not yet robust
for hardlink/junction-based views, so those are disabled for now
(both in the tests and as subcommands to `spack view`).
* add new darshan-runtime variants
- `lustre` variant enables instrumentation of Lustre files
* requires Lustre headers, so Lustre is a proper dependency
for this variant
- `log_path` variant allows setting of a centralized log directory
for Darshan logs (commonly used for facility deployments)
* when this variant is used, the `DARSHAN_LOG_DIR_PATH` env var
is no longer used to set the log file path
- `group_readable_logs` variant sets Darshan log permissions to
allow reads from users in the same group
* add mmap_logs variant to enable usage of mmap logs
Also adds support for Paraview and CMake to build with Qt support on
Windows.
The remaining edits are to enable building of Qt itself on Windows:
* Several packages needed to update `.libs` to properly locate
libraries on Windows
* Qt needed a patch to allow it to build using a Python with a space
in the path
* Some Qt dependencies had not been ported to Windows yet
(e.g. `harfbuzz` and `lcms`)
This PR does not provide a GL implementation sufficient for Qt Quick2,
so Qt Quick2 is disabled on the Windows platform by this PR.
---------
Co-authored-by: Dan Lipsa <dan.lipsa@kitware.com>
* Bugfix/Installer: properly track task queueing
* Move ordinal() to llnl.string; change time to attempt
* Convert BuildTask to use kwargs (after pkg); convert STATUS_ to BuildStats enum
* BuildTask: instantiate with keyword only args after the request
* Installer: build request is required for initializing task
* Installer: only the initial BuildTask cannot have status REMOVED
* Change queueing check
* ordinal(): simplify suffix determination [tgamblin]
* BuildStatus: ADDED -> QUEUED [becker33]
* BuildTask: clarify TypeError for 'installed' argument
* madgraph5amc: add v3.5.6, add a preferred version and remove urls
* Fix format
---------
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
We run tests for more python versions on `develop` than we do for PRs, so codecov
project status is nearly always failing. There is about a 1% difference in max coverage
between `develop` tests and PR tests, so we should increase the project threshold to 2%
to allow for this difference.
The purpose of the project test on PRs is just to make sure that nothing done on the PR
massively affects coverage of code not covered by the PR. This is valuable, but rare. It
only really affects PRs that deal with test or coverage configuration.
- [x] change project coverage threshold from .2% to 2%
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* Add v3.0.0 version for libfirefly package
* [@spackbot] updating style on behalf of tbhaxor
---------
Co-authored-by: tbhaxor <tbhaxor@users.noreply.github.com>
`spack gc` has so far been a global or environment-specific thing.
This adds the ability to restrict garbage collection to specific specs,
e.g. if you *just* want to get rid of all your unused python installations,
you could write:
```console
spack gc python
```
- [x] add `constraint` arg to `spack gc`
- [x] add a simple test
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Prevent hdf5 from searching all search paths for ZLIB.cmake config files (including /usr/lib) before it looks for zlib without CMake config files, which is how Spack installs it
* WarpX: 24.10
This updates WarpX and dependencies for the 24.10 release.
New features:
- EB runtime control: we can now compile with EB on by default,
because it is not an incompatible binary option anymore
- Catalyst2 support: AMReX/WarpX 24.09+ support Catalyst2 through
the existing Conduit bindings
* Fix Typo in Variant
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* Improve Python Dep Version Ranges
* Add Missing `-DWarpX_CATALYST`
* AMReX: Missing CMake Options for Vis
---------
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
This allows us to keep the workflow file tidier, and avoid
using indirections to perform platform specific operations.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* py-mpi4py: add v4.0.0
* sensei: update mpi4py dependency
build with py-mpi4py@4.0.0 due to fatal no such file or directory error
* petsc4py: update license, and remove C++/Fortran dependency
There was a bit of mystery surrounding the arguments for `_setup_pkg_and_run`. It passes
two file descriptors for handling `gmake`'s job server in child processes, but they are
unused in the method.
It turns out that there are good reasons to do this -- depending on the multiprocessing
backend, these file descriptors may be closed in the child if they're not passed
directly to it.
- [x] Document all arguments to `_setup_pkg_and_run`.
- [x] Add type hints for `_setup_pkg_and_run`.
- [x] Refactor exception handling in `_setup_pkg_and_run` so it's easier to add type
hints. `exc_info()` was problematic because it *can* return `None` (just not
in the context where it's used). `mypy` was too dumb to notice this.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* new builtin package: ambertools
* fixes for the style test
* yet more changes for the style test
* hope this is the last fix for the style test
* netlib-xblas is a dependency, it needs a depends_on("m4", type="build")
* ambertools: Add new setuptools dependency, limit python to <= 3.10 (does not build with 3.11+)
---------
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
We mostly use `spack style` and `spack style --fix`, but it's nice to also be able to
run plain old `black .` in the repo.
- [x] Fix includes and excludes in `pyproject.toml` so that we *only* cover files we expect
to be blackened.
Note that `spack style` is still likely the better way to go, because it looks at `git
status` and tells black to only check files that changed from `develop`. `black` with
`pyproject.toml` won't do that. Of course, you can always manually specify which files
you want blackened.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* postgresql: Add icu4c dependency for versions 16+
* postgresql: make ICU an option
* postgresql: ICU variant only needed for v16+
* postgresql: Check for negated option
Check for negated option instead of negating the test
Co-authored-by: Alec Scott <hi@alecbcs.com>
---------
Co-authored-by: Alec Scott <hi@alecbcs.com>
* [py-flash-attn] Add version 2.6.3
* Update dependencies according to the latest version
* Add max_jobs environment variable to avoid OOM error
---------
Co-authored-by: aurianer <8trash-can8@protonmail.ch>
* Revert "`cc`: ensure that RPATHs passed to linker are unique"
This reverts commit 2613a14c43.
* Revert "`cc`: simplify ordered list handling"
This reverts commit a76a48c42e.
Updated the terminology for the two types of environments to be
consistent with that used in the tutorial for the last three years.
Additionally:
* changed 'anonymous' to 'independent' in the environment command and tests for consistency.
* Update package.py
Update to pull TotalView tar files from AWS instead of requiring the user to download them ahead of time. Use the new RLM license type, and only allow installs of the versions that use it: 2024.1 and 2024.2. The user selects the platform along with the version, as is done on the TotalView downloads website.
* Update package.py
Update to pass style test
* Update package.py
fixing style
* Updating to pass style check
removing more spaces to pass style check
* final style fixes
fixing the last 2 style errors
* Typo
Typo correction to pass style check
* Remove newline
removing newline character
* Ran black to reformat
Ran black to clear errors
* Changing to use sha256
Updating to use sha256 checksums for all TotalView files.
* acts dependencies: new versions as of 2024/09/30
This commit adds new versions of acts, actsvg, and detray.
* Add vecmem version, patch detray version
#45205 already removed previous use of single letter packages
from unit tests, in view of reserving `c` as a language (see #45191).
Some use of them has been re-introduced accidentally in #46382, and
is making unit-tests fail in the feature branch #45189 since there
`c` is a virtual package.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Kokkos: adding some sanity checks
We can pretty much guarantee that if the bin, include, or lib directory
is missing, something is wrong. Additionally, we check for KokkosCore_config.h
and Kokkos_Core.hpp. Technically we could look for all public headers,
but that seems a bit overkill; see the sketch after this list.
* Kokkos Kernels: adding sanity checks
* Remove check for lib directory since it might end up being lib64
* Also remove lib from kokkos-kernels sanity check
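A minimal sketch of what such checks look like in a Spack package; `sanity_check_is_dir` and `sanity_check_is_file` are Spack's standard install-time check attributes, though the exact lists here are illustrative:
```python
from spack.package import *


class Kokkos(CMakePackage):
    # directories that must exist after a successful install ("lib" is
    # deliberately omitted, since it may end up being lib64)
    sanity_check_is_dir = ["bin", "include"]

    # key headers that must exist after a successful install
    sanity_check_is_file = [
        "include/KokkosCore_config.h",
        "include/Kokkos_Core.hpp",
    ]
```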
* Add latest releases of Camp, RAJA, Umpire, CHAI and CARE
* Address review comments + blt requirement in Umpire
* CARE @develop & @main: Submodules -> False
* Changes in Umpire
* Changes in RAJA
* Changes in CHAI
* Changes in RAJA: prefer 'spec.satisfies' to 'in spec'
This is due to a non-equivalence in Spack with providers like mpi.
See e.g. https://github.com/spack/spack/pull/46126
* Changes in Umpire: prefer 'spec.satisfies' to 'in spec'
This is due to a non-equivalence in Spack with providers like mpi.
See e.g. https://github.com/spack/spack/pull/46126
* Changes in CARE:
Still need to update to CachedCMakePackage based on RADIUSS Spack Configs version
* Missing change in RAJA + changes in fmt
* Fix syntax
* Changes in Camp
* Fix style
* CHAI: when ~raja, turn off RAJA in build system
* Fix: Ascent@0.9.3 does not support RAJA@2024.07.0
* Enforce same version constraint on Umpire as for RAJA
* Enforce preferred version of vtk-m in ascent 0.9.3
* Migrate CARE package to CachedCMakePackage
* Fix style in CARE package
* CARE: Apply changes for uniform implementation across RADIUSS projects
* Caliper: move to CachedCMakePackage, from RADIUSS Spack Configs
* Adapt RAJA Perf to spack CI
* Activate CHAI, CARE and RAJAPerf in Spack CI
* Fixes and diffs with RADIUSS Spack Configs
* Caliper: fix
* Caliper : fix + RAJAPerf : style
* RAJAPerf: fixes
* Update maintainers
* raja-perf: fix license header
* raja-perf: Fix variant naming openmp_target -> omptarget
* raja-perf: style and blt dependency versions
* CARE: benchmark and examples off by default (like tests)
* CARE: fix missing variable
* Update var/spack/repos/builtin/packages/raja-perf/package.py
* CARE: fix branch name
* Revert changes in MFEM to pass CI
* Fix CXX17 condition in RAJA + add sycl option in RAJAPerf
---------
Co-authored-by: Rich Hornung <hornung1@llnl.gov>
* cbindgen: new package
* Attempting to add rust dependencies for cbindgen
* adding rust-toml min rust version
* Removing dependencies that don't install with cargo
* cleanup broken packages
---------
Signed-off-by: Teague Sterling <teaguesterling@gmail.com>
On sysroot systems like Gentoo Prefix, as well as Nix/Guix, our "is
system path" logic is broken because it's static.
Talking about "the system paths" is not helpful; we have to talk
about the default search paths of the dynamic linker instead.
If glibc is recent enough, we can query the dynamic loader's default
search paths, which is a much more robust way to avoid registering
rpaths to system dirs, which can shadow Spack dirs.
This PR adds an **additional** filter on rpaths the compiler wrapper
adds, dropping rpaths that are default search paths. The PR **does
not** remove any of the original `is_system_path` code yet.
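A sketch of that additional filter, assuming the defaults are obtained by asking the dynamic loader itself (glibc >= 2.33 prints its search directories via `--help`); the loader path and parsing here are illustrative, not the exact implementation:
```python
import re
import subprocess
from typing import List


def default_search_paths(loader: str = "/lib64/ld-linux-x86-64.so.2") -> List[str]:
    """Parse the loader's default search directories from its --help output."""
    out = subprocess.run([loader, "--help"], capture_output=True, text=True).stdout
    return re.findall(r"(\S+) \(system search path\)", out)


def drop_default_rpaths(rpaths: List[str]) -> List[str]:
    """Keep only rpaths the dynamic linker would not search anyway."""
    defaults = set(default_search_paths())
    return [p for p in rpaths if p not in defaults]
```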
This fixes issues where build systems run just-built executables
linked against their *not-yet-installed libraries*, typically:
```
LD_LIBRARY_PATH=. ./exe
```
which happens in `perl`, `python`, and other non-cmake packages.
If a default path is rpath'ed, it takes precedence over
`LD_LIBRARY_PATH`, and a system library gets loaded instead
of the just-built library in the stage dir, breaking the build. If
default paths are not rpath'ed, then LD_LIBRARY_PATH takes
precedence, as is desired.
This PR additionally fixes an inconsistency in rpaths between
cmake and non-cmake packages. The cmake build system
computed rpaths by itself, but used a different order than
computed for the compiler wrapper. In fact it's not necessary
to compute rpaths at all, since we let cmake do that thanks to
`CMAKE_INSTALL_RPATH_USE_LINK_PATH`. This covers rpaths
for all dependencies. The only install rpaths we need to set are
`<install prefix>/{lib,lib64}`, which cmake unfortunately omits,
although it could also know these. Also, cmake does *not*
delete rpaths added by the toolchain (i.e. Spack's compiler
wrapper), so I don't think it should be controversial to simplify
things.
https://docs.sylabs.io/guides/main/admin-guide/configfiles.html#loop-devices
shared loop devices: This allows containers running the same image
to share a single loop device. This minimizes loop device usage and
helps optimize kernel cache usage.
Enabling this feature can be particularly useful for large MPI jobs.
The current `Spec.splice` model is very limited by the inability to splice specs that
contain multiple nodes with the same name. This is an artifact of the original
algorithm design predating the separate concretization of build dependencies,
which was the first feature to allow multiple specs in a DAG to share a name.
This PR provides a complete reimplementation of `Spec.splice` to avoid that
limitation. At the same time, the new algorithm ensures that build dependencies
for spliced specs are not changed, since the splice by definition cannot change
the build-time information of the spec. This is handled by splitting the dependency
edges and link/run edges into separate dependencies as needed.
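A sketch of the edge-splitting idea, assuming dependency types are bit flags as in `spack.deptypes`:
```python
import spack.deptypes as dt

# a combined build+link edge from the original spec...
combined = dt.BUILD | dt.LINK

# ...is split so the build part stays attached to the build_spec, while
# only the link/run part follows the spliced dependency
build_part = combined & dt.BUILD
link_run_part = combined & ~dt.BUILD
```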
Signed-off-by: Gregory Becker <becker33@llnl.gov>
* CI: Add documentation for adding new stacks and runners
* More docs for runner registration
---------
Co-authored-by: Zack Galbreath <zack.galbreath@kitware.com>
Co-authored-by: Bernhard Kaindl <contact@bernhard.kaindl.dev>
This PR shortens the string representation of concrete specs,
in order to make it more legible.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
macOS Sequoia's linker will complain if RPATHs on the CLI are specified more than once.
To avoid errors due to this, make `cc` only append unique RPATHs to the final args list.
This required a few improvements to the logic in `cc`:
1. List functions in `cc` didn't have any way to append unique elements to a list. Add a
`contains()` shell function that works like our other list functions. Use it to implement
an optional `"unique"` argument to `append()` and an `extend_unique()`. Use that to add
RPATHs to the `args_list`.
2. In the pure `ld` case, we weren't actually parsing `RPATH` arguments separately as we
do for `ccld`. Fix this by adding *another* nested case statement for raw `RPATH`
parsing. There are now 3 places where we deal with `-rpath` and friends, but I don't
see a great way to unify them, as `-Wl,`, `-Xlinker`, and raw `-rpath` arguments are
all ever so slightly different.
3. Fix ordering of assertions to make `pytest` diffs more intelligible. The meaning of
`+` and `-` in diffs changed in `pytest` 6.0 and the "preferred" order for assertions
became `assert actual == expected` instead of the other way around.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
`cc` divides most paths up into system paths, spack managed paths, and other paths.
This gets really repetitive and makes the code hard to read. Simplify the script
by adding some functions to do most of the redundant work for us.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* axom/stand-alone tests: build and run in test stage directory
* Removed unused glob
* axom/stand-alone tests: add example_stage_dir variable for clarity
SimpleFilesystemView was producing an error due to looking for a
<prefix>/lib/.spack folder. Also, view_destination had no effect and
wasn't called. Changed this by instead patching in the correct
installation prefix for dictionaries.
Since aspell is using the resolved path of the executable prefix, the
runtime environment variable ASPELL_CONF is set to correct the prefix
when in a view. With this change aspell can now find installed
dictionaries. Verified with:
aspell dump config
aspell dump dicts
* shorten version number validations per reviewer feedback
* rename set_lib_path per reviewer feedback
* Add E4S tag
* Set CHPL_CUDA_PATH to ensure Chapel installer finds the right package
* Update ROCm dependency for Chapel 2.2
* Fix llvm-amdgpu and CHPL_TARGET_* for llvm=bundled
* Ensure CHPL_TARGET_COMPILER is set to "llvm" when required (llvm=spack
or +cuda or +rocm).
* Ensure CHPL_TARGET_{CC,CXX} are only set when using llvm=spack or llvm=none
* Use hip.prefix to set CHPL_ROCM_PATH
Since we might not directly depend on llvm-amdgpu, it might
not appear in our spec
* limit m4 dependency to +gmp
* limit names of env vars created from variants
* Ensure that +cuda and +rocm variants are Sticky
The concretizer should never be permitted to select GPU support, because
it's only meaningful and functional when the appropriate hardware is actually
available, and the concretizer cannot reliably determine that.
Also: Chapel's GPU support includes a lot of complicated dependencies
and constraints, so leaving that choice free to the concretizer leads to a lot
of extraneous and confusing messages when failing to concretize a
non-GPU-enabled spec.
Co-authored-by: Dan Bonachea <dobonachea@lbl.gov>
Add pre-built sbcl for x86 and arm for various glibc versions, making
way for an actual sbcl built from source.
Also switch to using set_env as a context manager instead of setting the
environment variable directly for the build environment. I hit an issue with the
build system due to this in the sbcl package, so this pre-empts the same issue
here.
* dla-future: Add DLAF_ prefix to LAPACK_LIBRARY CMake variable in newer versions
* dla-future: Use spec.satisfies to check version constraint for LAPACK_LIBRARY variable prefix
Co-authored-by: Alberto Invernizzi <9337627+albestro@users.noreply.github.com>
---------
Co-authored-by: Alberto Invernizzi <9337627+albestro@users.noreply.github.com>
* py-sphinx-tabs: new version 3.4.5
* py-sphinx-design: new versions 0.5.0, 0.6.0, and 0.6.1
* py-requests: new version 2.32.3
* py-dnspython: new version 2.6.1
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* py-hatch-vcs: new version 0.4.0
---------
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
Python 3.12 removed the `distutils` module, which is required
by the build process of LLVM <= 14: add a conflict with it for +python.
Also fix the build so that it does not pick up host tools, like an incompatible Python from the host.
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
gcc on Ubuntu has fix-cortex-a53-843419 set by default; this causes linking
issues (symbol relocation errors) for tf, even when compiling for different
CPUs.
If `add_padding()` is allowed to return a path with a trailing path
separator, it will get collapsed elsewhere in Spack. This can lead to
buildcache entries that have RPATHS that are too short to be replaced by
other users whose install root happens to be padded to the correct
length. Detect this and replace the trailing path separator with a
concrete path character.
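A minimal sketch of the guard, with a stand-in for the real padding helper and an assumed replacement character:
```python
import os


def add_padding(path: str, length: int, fill: str = "_") -> str:
    """Pad a path to the requested length, never ending in a separator."""
    padding = fill * max(0, length - len(path) - 1)
    padded = os.path.join(path, padding) if padding else path
    if padded.endswith(os.sep):
        # a trailing separator would be collapsed elsewhere in Spack,
        # silently shortening the path; use a concrete character instead
        padded = padded[:-1] + fill
    return padded
```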
Signed-off-by: Samuel E. Browne <sebrown@sandia.gov>
Also: set the build and install directories to the source directory
because the build system unfortunately expects the `src_ext` directory
to be under the current working directory when building the bundled
third-party libraries, even when the configure script is run from
another directory.
@scemama pointed out that 'make' just calls 'dune', which is already
parallel, so make itself should not have more than one job.
opam@:2.1 needs 'make lib-ext' for cmdliner; above that it's obsolete.
Co-authored-by: Bernhard Kaindl <bernhard.kaindl@cloud.com>
The detection logic for the prefix used in py-bind11 is broken for Spack,
resulting in an empty prefix. However, the package provides an escape
hatch in the form of `prefix_for_pc_file`. Use this escape hatch to
provide the correct path; spack will always know better than pybind11's
CMake.
Co-authored-by: Robert Underwood <runderwood@anl.gov>
We've seen `getfqdn()` cause slowdowns on macOS in CI when added elsewhere. It's also
called by database.py every time we write the DB file.
- [x] replace the call with a memoized version so that it is only called once per process.
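A minimal sketch of the change, using `llnl.util.lang.memoized` (Spack's cached-function decorator):
```python
import socket

import llnl.util.lang


@llnl.util.lang.memoized
def getfqdn() -> str:
    """Resolve the machine's FQDN once per process and cache the result."""
    return socket.getfqdn()
```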
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
This PR introduces a new heuristic for the solver, which behaves better when
compilers are treated as nodes. Apparently, it performs better also on `develop`,
where compilers are still node attributes.
The new heuristic:
- Sets an initial priority for guessing a few attributes. The order is "nodes" (300),
"dependencies" (150), "virtual dependencies" (60), "version" and "variants" (30), and
"targets" and "compilers" (1). This initial priority decays over time during the solve, and
falls back to the defaults.
- By default, it considers most guessed facts as "false". For instance, by default a node
doesn't exist in the optimal answer set, or a version is not picked as a node version etc.
- There are certain conditions that override the default heuristic using the _priority_ of
a rule, which previously we didn't use. For instance, by default we guess that a
`attr("variant", Node, Variant, Value)` is false, but if we know that the node is already
in the answer set, and the value is the default one, then we guess it is true.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
This includes a test_linux699 variant which "activates" a version
that pulls from a repository other than the official repository.
This version is required to work with Linux kernel version
6.9.9 or later. Future official `msr-safe` versions are expected
to support later Linux kernel versions.
* opendatadetector: Add an env variable pointing to the share directory
* Rename the new variable to OPENDATADETECTOR_DATA and use join_path
---------
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
The `spack.target.Target` class is a weird entity that is just needed to:
1. Sort microarchitectures in lists deterministically
2. Make microarchitectures usable in hashed containers
This PR removes it, and uses `archspec.cpu.Microarchitecture` directly. To sort lists, we use a proper `key=` when needed. Being able to use `Microarchitecture` objects in sets is achieved by updating the external `archspec`.
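A brief sketch of the replacement patterns, assuming `archspec.cpu.TARGETS` as the name-to-microarchitecture mapping:
```python
import archspec.cpu

# deterministic ordering via an explicit key= instead of a wrapper class
uarches = sorted(archspec.cpu.TARGETS.values(), key=lambda u: u.name)

# hashed containers work directly once Microarchitecture is hashable
seen = {archspec.cpu.TARGETS["icelake"], archspec.cpu.TARGETS["zen2"]}
```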
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Introduce the bufr_query library from NOAA-EMC (#461)
This PR adds a new package.py script for the new bufr_query library from NOAA-EMC. This is being used by JEDI and other applications.
* Add explicit build dependency spec to the pybind11 depends_on spec
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* Convert patch file to the URL form which pulls the changes from github.
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* Added new version (0.0.3) and removed obsolete site-packages.patch file
---------
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
While the existing getting started guide does in fact reference the
PowerShell support, it's a footnote and easily missed. This PR adds
explicit, upfront mentions of the PowerShell support. Additionally,
this PR adds notes about some of the issues with certain components
of the spec syntax when using CMD.
If the spec is external, it has extra attributes. If not, we know
which names are used. In both cases we don't need to search again
for executables.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Removes `spack.package_base.PackageBase.do_{install,deprecate}` in favor of
`spack.installer.PackageInstaller.install` and `spack.installer.deprecate` resp.
That drops a dependency of `spack.package_base` on `spack.installer`, which is
necessary to get rid of circular dependencies in Spack.
Also change the signature of `PackageInstaller.__init__` from taking a dict as
positional argument, to an explicit list of keyword arguments.
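A hypothetical sketch of the new calling convention; the keyword names shown are illustrative, not the exact API:
```python
from spack.installer import PackageInstaller

# pkg: a PackageBase instance for the spec being installed

# before: a dict of install arguments passed positionally
# PackageInstaller([(pkg, {"explicit": True, "fail_fast": False})]).install()

# after: explicit keyword arguments following the package list
PackageInstaller([pkg], explicit=True, fail_fast=False).install()
```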
Continuing the work started in #40326, this changes the structure
of Variant metadata on Packages from a single variant definition
per name with a list of `when` specs:
```
name: (Variant, [when_spec, ...])
```
to a Variant definition per `when_spec` per name:
```
when_spec: { name: Variant }
```
With this change, everything on a package *except* versions is
keyed by `when` spec. This:
1. makes things consistent, in that conditional things are (nearly)
all modeled in the same way; and
2. fixes an issue where we would lose information about multiple
variant definitions in a package (see #38302). We can now have,
e.g., different defaults for the same variant in different
versions of a package.
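For instance, a hypothetical package could now define:
```python
class Example(Package):
    # hypothetical: the default flipped in 2.0, older releases stay static
    variant("shared", default=False, when="@:1")
    variant("shared", default=True, when="@2:")
```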
Some notes:
1. This required some pretty deep changes to the solver. Previously,
the solver's job was to select value(s) for a single variant definition
per name per package. Now, the solver needs to:
a. Determine which variant definition should be used for a given node,
which can depend on the node's version, compiler, target, other variants, etc.
b. Select valid value(s) for variants for each node based on the selected
variant definition.
When multiple variant definitions are enabled via their `when=` clause, we will
always prefer the *last* matching definition, by declaration order in packages. This
is implemented by adding a `precedence` to each variant at definition time, and we
ensure they are added to the solver in order of precedence.
This has the effect that variant definitions from derived classes are preferred over
definitions from superclasses, and the last definition within the same class sticks.
This matches python semantics. Some examples:
```python
class ROCmPackage(PackageBase):
variant("amdgpu_target", ..., when="+rocm")
class Hipblas(ROCmPackage):
variant("amdgpu_target", ...)
```
The global variant in `hipblas` will always supersede the `when="+rocm"` variant in
`ROCmPackage`. If `hipblas`'s variant was also conditional on `+rocm` (as it probably
should be), we would again filter out the definition from `ROCmPackage` because it
could never be activated. If you instead have:
```python
class ROCmPackage(PackageBase):
variant("amdgpu_target", ..., when="+rocm")
class Hipblas(ROCmPackage):
variant("amdgpu_target", ..., when="+rocm+foo")
```
The variant on `hipblas` will win for `+rocm+foo` but the one on `ROCmPackage` will
win with `+rocm~foo`.
So, *if* we can statically determine if a variant is overridden, we filter it out.
This isn't strictly necessary, as the solver can handle many definitions fine, but
this reduces the complexity of the problem instance presented to `clingo`, and
simplifies output in `spack info` for derived packages. e.g., `spack info hipblas`
now shows only one definition of `amdgpu_target` where before it showed two, one of
which would never be used.
2. Nearly all access to the `variants` dictionary on packages has been refactored to
use the following class methods on `PackageBase`:
* `variant_names(cls) -> List[str]`: get all variant names for a package
* `has_variant(cls, name) -> bool`: whether a package has a variant with a given name
* `variant_definitions(cls, name: str) -> List[Tuple[Spec, Variant]]`: all definitions
of variant `name` that are possible, along with their `when` specs.
* `variant_items()`: iterate over `pkg.variants.items()`, with impossible variants
filtered out.
Consolidating to these methods seems to simplify the code a lot.
3. The solver does a lot more validation on variant values at setup time now. In
particular, it checks whether a variant value on a spec is valid given the other
constraints on that spec. This allowed us to remove the crufty logic in
`update_variant_validate`, which was needed because we previously didn't *know* after
a solve which variant definition had been used. Now, variant values from solves are
constructed strictly based on which variant definition was selected -- no more
heuristics.
4. The same prevalidation can now be done in package audits, and you can run:
```
spack audit packages --strict-variants
```
This turns up around 18 different places where a variant specification isn't valid
given the conditions on variant definitions in packages. I haven't fixed those here
but will open a separate PR to iterate on them. I plan to make strict checking the
defaults once all existing package issues are resolved. It's not clear to me that
strict checking should be the default for the prevalidation done at solve time.
There are a few other changes here that might be of interest:
1. The `generator` variant in `CMakePackage` is now only defined when `build_system=cmake`.
2. `spack info` has been updated to support the new metadata layout.
3. split out variant propagation into its own `.lp` file in the `solver` code.
4. Add better typing and clean up code for variant types in `variant.py`.
5. Add tests for new variant behavior.
Historically, every PR, push, etc. to Spack generates a bunch of jobs, each of which
uploads its coverage report to codecov independently. This means that we get annoying
partial coverage numbers when only a few of the jobs have finished, and frequently
codecov is bad at understanding when to merge reports for a given PR. The numbers on the
site can be weird as a result.
This restructures our coverage handling so that we do all the merging ourselves and
upload exactly one report per GitHub actions workflow. In practice, that means that
every push to every PR will get exactly one coverage report and exactly one coverage
number reported. I think this will at least partially restore peoples' faith in what
codecov is telling them, and it might even make codecov handle Spack a bit better, since
this reduces the report burden by ~7x.
- [x] test and audit jobs now upload artifacts for coverage
- [x] add a new job that downloads artifacts and merges coverage reports together
- [x] set `paths` section of `pyproject.toml` so that cross-platform clone locations are merged
- [x] upload to codecov once, at the end of the workflow
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* kokkos, kokkos-kernels, kokkos-nvcc-wrapper: add v4.4.01
* trilinos: update @[master,develop] dependency on kokkos
==> Error: InstallError: For Trilinos@[master,develop], ^kokkos version in spec must match version in Trilinos source code. Specify ^kokkos@4.4.01 for trilinos@[master,develop] instead of ^kokkos@4.4.00.
* petsc: configure requires rocm-core/rocm_version.h to detect ROCM_VERSION_MAJOR.ROCM_VERSION_MINOR.ROCM_VERSION_PATCH
* mfem: add dependency on rocprim (as needed via petsc dependency)
In file included from linalg/petsc.cpp:19:
In file included from linalg/linalg.hpp:65:
In file included from linalg/petsc.hpp:48:
In file included from /scratch/svcpetsc/spack.x/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/petsc-3.22.0-7dsxwizo24ycnqvwnsscupuh4i7yusrh/include/petscsystypes.h:530:
In file included from /scratch/svcpetsc/spack.x/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/rocthrust-6.1.2-ux5nmi4utw27oaqmz3sfjmhb6hyt5zed/include/thrust/complex.h:30:
/scratch/svcpetsc/spack.x/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/rocthrust-6.1.2-ux5nmi4utw27oaqmz3sfjmhb6hyt5zed/include/thrust/detail/type_traits.h:29:10: fatal error: 'rocprim/detail/match_result_type.hpp' file not found
29 | #include <rocprim/detail/match_result_type.hpp>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Update seacas package.py
Adding libcatalyst variant to seacas package
When seacas is installed with "seacas +libcatalyst", a dependency on the
spack package "libcatalyst" (which is catalyst api 2 from kitware) is added,
and the appropriate cmake variable for the catalyst TPL is set. The mpi
variant option in catalyst (i.e. build with mpi or build without mpi) is
passed on to libcatalyst. The default of the libcatalyst variant is
false/off, so if seacas is installed without "+libcatalyst" in the spec it
will behave exactly as it did before the introduction of this variant.
* shortened line 202 to comply with < 100 characters per line style requirement
* py-httpx: New version
* [py-httpx] fix when for dependencies
* [py-httpx] organized dependencies
* [py-httpx] added version 0.27.2
---------
Co-authored-by: Alex C Leute <aclrc@rit.edu>
* Automated deployment to update package flux-sched 2024-09-05
* flux-sched: add back check for run environment
* flux-sched: add conflict for gcc and clang above 0.37.0
---------
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
Autotools packages with a configure script should generate the libtool
executable themselves; there's no point in `depends_on("libtool", type="build")`.
The libtool executable in `<libtool prefix>/bin/libtool` is configured
for the wrong toolchain (libtool's %compiler instead of the package's
%compiler).
Some packages link to `libltdl.so`, which is fine, but had a wrong
dependency type.
See https://github.com/spack/spack/pull/46314#discussion_r1752940332.
This further simplifies `cxxstd` variant handling in `acts` by removing superfluous
version constraints from dependencies for `geant4` and `root`.
The version constraints in the loop are redundant with the conditional variant
values here:
```python
_cxxstd_values = (
conditional("14", when="@:0.8.1"),
conditional("17", when="@:35"),
conditional("20", when="@24:"),
)
_cxxstd_common = {
"values": _cxxstd_values,
"multi": False,
"description": "Use the specified C++ standard when building.",
}
variant("cxxstd", default="17", when="@:35", **_cxxstd_common)
variant("cxxstd", default="20", when="@36:", **_cxxstd_common)
```
So we can simplify the dependencies in the loop to:
```python
for _cxxstd in _cxxstd_values:
for _v in _cxxstd:
depends_on(f"geant4 cxxstd={_v.value}", when=f"cxxstd={_v.value} +geant4")
depends_on(f"geant4 cxxstd={_v.value}", when=f"cxxstd={_v.value} +fatras_geant4")
depends_on(f"root cxxstd={_v.value}", when=f"cxxstd={_v.value} +tgeo")
```
And avoid the potential for impossible variant expressions.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Openmpi provider statements were changed in #46102. The package change
was fine in and of itself, but apparently one of our tests depends on
the precise constraints used in those statements. I updated the test
to remove the checks for constraints that were removed.
In #44425, we add stricter variant audits that catch expressions that can never match.
This fixes 13 packages that had this type of issue.
Most packages had lingering spec expressions from before conditional variants with
`when=` statements were added. For example:
* Given `conflicts("+a", when="~b")`, if the package has since added
`variant("a", when="+b")`, the conflict is no longer needed, because
`a` and `b` will never exist together.
* Similarly, two packages that depended on `py-torch` depended on
`py-torch~cuda~cudnn`, which can't match because the `cudnn` variant
doesn't exist when `cuda` is disabled. Note that neither `+foo` nor `~foo`
matches (intentionally) if the `foo` variant doesn't exist.
* Some packages referred to impossible version/variant combinations, e.g.,
`ceed@1.0.0+mfem~petsc` when the `petsc` variant only exists at version `2`
or higher.
Some of these correct real issues (e.g. the `py-torch` dependencies would have never
worked). Others simply declutter old code in packages by making all constraints
consistent with version and variant updates.
The only one of these that I think is not all that useful is the one for `acts`,
where looping over `cxxstd` versions and package versions ends up adding some
constraints that are impossible. The additional dependencies could never have
happened, and the code is more complicated with the needed extra constraint.
I think *probably* the best thing to do in `acts` is to just not to use a loop
and to write out the constraints explicitly, but maybe the code is easier to
maintain as written.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* Update var/spack/repos/builtin/packages/fms/package.py: apply patch for fms@2023.03 to fix compiler options bug in cmake config, add variant shared and corresponding patch for fms@2024.02
* Fix fms package audit: use `c9bba516ba.patch?full_index=1` instead of `c9bba516ba.patch`
* Update checksum of patch for fms@2023.03
* CUDA: support Grace Hopper 9.0a compute capability
* Fix other packages
* Add type annotations
* Support ancient Python versions
* isort
* spec -> self.spec
Co-authored-by: Andrew W Elble <aweits@rit.edu>
* [@spackbot] updating style on behalf of adamjstewart
---------
Co-authored-by: Andrew W Elble <aweits@rit.edu>
Co-authored-by: adamjstewart <adamjstewart@users.noreply.github.com>
fixes #46295
A proper solution would be a tag directive that accumulates tags
with the ones defined in base classes.
For the time being, rewrite them explicitly.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Boost: Adjust bootstrapping/b2 options as needed for Windows (the
bootstrapping phase sufficiently differs between Windows/Unix
that it is handled entirely within its own branch).
* Boost: Paths in user-config.jam should be POSIX, including on Windows
* Python: `.libs` for the Python package should return link libraries
on Windows. The libraries are also stored in a different directory.
The option config:install_missing_compilers is currently buggy,
and has been for a while. Remove it, since it won't be needed
when compilers are treated as dependencies.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* fast-float: new package
* fast-float: add test dependency
* fast-float: fix doctest dependency type
* fast-float: convert deps to tuple
* fast-float: add v6.1.5 and v6.1.6
* fast-float: patch older versions to use find_package(doctest)
* py-your: new package
Spack package recipe for YOUR, Your Unified Reader. YOUR processes pulsar data in different formats.
Output below from spack install py-your
spack install py-your
==> Installing py-your-0.6.7-djfzsn2lutp24ik6wrk6tjx5f7hil76x [83/83]
==> No binary for py-your-0.6.7-djfzsn2lutp24ik6wrk6tjx5f7hil76x found: installing from source
==> Fetching https://github.com/thepetabyteproject/your/archive/refs/tags/0.6.7.tar.gz
==> No patches needed for py-your
==> py-your: Executing phase: 'install'
==> py-your: Successfully installed py-your-0.6.7-djfzsn2lutp24ik6wrk6tjx5f7hil76x
Stage: 1.43s. Install: 0.99s. Post-install: 0.12s. Total: 3.12s
* Removed setup_run_environment
After some testing, both spack load and module load for the package will include the bin directory generated by py-your, as well as the path to the version of python the package was built with, without the need for the setup_run_environment function.
I removed that function (although, like Tamara, I thought it would be necessary based on other package setups I used as a basis for this package).
Note: I also updated the required version of py-astropy from py-astropy@4.0: to py-astropy@6.1.0:. In my test builds, the install was picking up py-astropy@4.0.1.post1 and numpy 1.26. However, when I tried to run some of the code I was getting errors about py-astropy making numpy calls that have since been removed. The newer version of py-astropy corrects these. Ideally this would be handled in the py-astropy package, to make sure numpy isn't too new.
* Changed software pull location
The original package pulled a tagged release version from GitHub. That tagged version was created in 2022 and has not been updated since. It no longer runs because newer versions of numpy have removed several previously deprecated calls. The main branch of this repository has addressed these numpy issues, as well as some other important fixes, but no new release has been generated. Because of this, and the minimal development that now appears to be going on, it is probably best to always pull from the main branch.
* [@spackbot] updating style on behalf of aweaver1fandm
* py-your: Changed software pull location
1. Restored original URL and version (0.6.7) as requested
2. Updated py-numpy dependency versions to be constrained based on the version of your. Because of numpy deprecations related to your version 0.6.7, we need to ensure that the numpy version used is not 1.24 or greater, because the deprecations were removed starting with that version
* gptune: new test API
* gptune: cleanup; finish API changes; separate unrelated test parts
* gptune: standalone test cleanup with timeout constraints
* gptune: ensure stand-alone test bash failures terminate; enable in CI
* gptune: add directory to terminate_bash_failures
* gptune/stand-alone tests: use satisfies for checking variants
---------
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
* Add numactl 2.0.16-2.0.18
* Create link-with-latomic-if-needed-v2.0.16.patch
Add a link to libatomic, if needed, for numactl v2.0.16.
* Add some missing patches to v2.0.16
* Create numactl-2.0.18-syscall-NR-ppc64.patch
In short, we need numactl to set __NR_set_mempolicy_home_node on ppc64, if it's not already defined.
* Apply a necessary patch for v2.0.18 on PPC64
* Add libatomic patch for v2.0.16
`spack reindex` relies on projections from configuration to locate
installed specs and prefixes. This is problematic because config can
change over time, and we have reasons to do so when turning compilers
into dependencies (removing `{compiler.name}-{compiler.version}` from
projections).
This commit makes reindex recursively search for .spack/ metadirs.
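A minimal sketch of that recursive discovery (the helper name is illustrative):
```python
import os


def prefixes_with_metadata(install_root: str):
    """Yield installed prefixes under the root that contain a .spack metadir,
    without assuming anything about the projection used at install time."""
    for dirpath, dirnames, _ in os.walk(install_root):
        if ".spack" in dirnames:
            yield dirpath
            dirnames.clear()  # do not descend into the prefix itself
```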
* py-httpcore: Added new version
* [py-httpcore]
- added version 0.18.0
- restructured dependencies, since everything has a `when` and the
type/when ordering was all over the place
* [py-httpcore] ordered dependencies in the order listed in v1.0.5 pyproject.toml
---------
Co-authored-by: Alex C Leute <aclrc@rit.edu>
When a package is running `setup_dependent_package` on a parent, ensure
that module variables like `spack_cc` are available. This was often
true prior to this commit, but externals were an exception.
---------
Co-authored-by: John Parent <john.parent@kitware.com>
* New package: py-monai
* Fixed linked issues with py-monai
* [py-monai] removed extra line
* [py-monai]
- New version 1.3.2
- ran black
- update copyright
* [py-monai] added license
* [py-monai]
- moved build only dependencies
- specified newer python requirements for newer versions
---------
Co-authored-by: vehrc <vehrc@rit.edu>
* Replace if ... in spec with spec.satisfies in f* and g* packages
* gromacs: ^amdfftw -> ^[virtuals=fftw-api] amdfftw
* flamemaster: add virtuals lapack for the amdlibflame dependency
---------
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
Allow flags from different sources (compilers, `require:`, command-line
specs, and `depends_on`) to be merged together, and enforce a consistent
order among them.
The order is based on the sources, e.g. flags on specs from the command
line always come last. Some flag order consistency issues are fixed:
1. Flags from `compilers.yaml` and the command line were always intra- and
inter-source order consistent.
2. Flags from dependents and packages.yaml (introduced via `require:`)
were not: for `-a -b` from one source and `-c` from another, the final
result might rearrange `-a -b`, and would also be inconsistent in terms
of whether `-c` came before or after.
(1) is/was handled by going back to the original source, i.e., flags are
retrieved directly from the command line spec rather than the solver.
(2) is addressed by:
* Keeping track of grouped flags in the solver
* Keeping track of flag sources in the solver on a per-flag basis
The latter info is used in this PR to enforce DAG ordering on flags
applied from multiple dependents to the same package, e.g., for this
graph:
```
a
/|\
b | c
\|/
d
```
If `a`, `b`, and `c` impose flags on `d`, the combined flags on `d` will
contain the flags of `a`, `b`, and `c` -- in that order.
Conflicting flags are allowed (e.g. -O2 and -O3). `Spec.satisfies()` has
been updated such that X satisfies Y as long as X has *at least* all of
the flags that Y has. This is also true in the solver constraints.
`.satisfies` does not account for how order can change behavior (so
`-O2 -O3` can satisfy `-O3 -O2`); it is expected that this can be
addressed later (e.g. by prohibiting flag conflicts).
`Spec.constrain` and `.intersects` have been updated to be consistent
with this new definition of `.satisfies`.
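A sketch of the new semantics, with illustrative spec strings:
```python
from spack.spec import Spec

x = Spec('zlib cflags="-O2 -g"')
y = Spec('zlib cflags="-O2"')

assert x.satisfies(y)      # x has at least all of y's flags
assert not y.satisfies(x)  # y is missing -g
# order does not affect satisfaction
assert Spec('zlib cflags="-O2 -O3"').satisfies('zlib cflags="-O3 -O2"')
```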
Spack can now bootstrap two new dependencies on Windows: GnuPG, and file.
These dependencies are modeled as a separate package, and they install a cross-compiled binary.
Details on how the binaries are built are in https://github.com/spack/windows-bootstrap-resources
This PR adds py-pybind11 versions 2.13.0, 2.13.1, 2.13.2, 2.13.3, and
2.13.4. It also adds a new conflict between gcc 14 and pybind versions
up to and including 2.13.1.
* Allow deprecating more than one property in config
This internal change allows the customization of errors
and warnings to be printed when deprecating a property.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* fix
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Use a list comprehension for "issues"
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
---------
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
fixes #40791
Currently stacks behave differently if used in unify:false
environments, which leads to inconsistencies during concretization.
For instance, two abstract user specs that do not intersect with each
other might map to the same concrete spec in the environment. This is
clearly wrong.
This PR removes the best effort expansion, so that user specs
are always applied strictly.
* gaudi: Specify boost components and add +fiber for v39
* gaudi: Limit fmt version to allow building master branch
* Make boost dependencies a bit more readable
* Remove patches for no longer existing versions
* Replace if ... in spec with spec.satisfies in d* and e* packages
* Use virtuals for different mpi implementations in esmf
* esmf: ^[virtuals=mpi] mpt
* extrae: ^[virtuals=mpi] intel-oneapi-mpi
---------
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
This change aligns the build condition for parmetis with the
depends_on condition.
The current build condition of parmetis looks for "+parmetis" in
the spec, which is never satisfied, because the depends_on adds
"^parmetis" instead.
* Adding additional check for omptarget library for amdgpu in nvidia environment
* Avoiding registration of duplicate when built on cuda
* Adding hsa library path in LD_LIBRARY_PATH
* Correction in hsa prefix library path in LD_LIBRARY_PATH
==> Error: InstallError: For Trilinos@[master,develop], ^kokkos version in spec must match version in Trilinos source code. Specify ^kokkos@4.4.00 for trilinos@[master,develop] instead of ^kokkos@4.3.01.
* root: Add dependency on libglx
We have been trying to build the Acts package on MacOS, and in this
process we have been running into problems with the ROOT spec on that
operating system; the primary issue we are encountering is that the
compiler is unable to find the `GL/glx.h` header, which is part of glx.
It seems, therefore, that ROOT depends on libglx, but this is not
currently encoded in the spec. This commit ensures that ROOT depends on
the virtual libglx package when both the OpenGL and X11 variants are
enabled.
* Enable builtin glew on MacOS
* Allow `root+opengl+aqua~x` on macOS
dd4hep versions up to and including 1.27 had a conflict with root
versions starting from 6.31.1, as shown in
https://github.com/AIDASoft/DD4hep/issues/1210. This PR explicitly adds
that conflict to the spec.
* whizard: add a patch when using hepmc3 3.3.0 or newer
* whizard: comment with patch origin
---------
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* podio: apply patch for gcc 14 builds
Podio versions after 0.17.0 but before 1.0.0 are broken when using gcc
14 because of a missing include, which is addressed in the podio pull
request at https://github.com/AIDASoft/podio/pull/600. This commit
patches pre-1.0.0 versions of Podio so they can be compiled with gcc 14,
which is important for building Acts.
* Style
* Style 2
* Fixes
* Add comment:
* Add sha256
This should help avoid selecting, by default, some niche implementations that are supposed to be externals.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
This PR simplifies the code doing external spec detection by removing
the `DetectedPackage` class. Now, functions accepting or returning lists
of `DetectedPackage` will accept or return lists of specs.
Performance doesn't seem to change if we use `Spec.__reduce__` instead
of `DetectedPackage.__reduce__`.
if [ "${{ needs.prechecks.result }}" == "failure" ] || [ "${{ needs.prechecks.result }}" == "canceled" ]; then
echo "Unit tests failed."
exit 1
else
exit 0
fi
coverage:
needs:[unit-tests, prechecks ]
uses:./.github/workflows/coverage.yml
secrets:inherit
all:
needs:[unit-tests, coverage, bootstrap ]
if:${{ always() }}
runs-on:ubuntu-latest
# See https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/accessing-contextual-information-about-workflow-runs#needs-context
steps:
- name:Status summary
run:|
if [ "${{ needs.unit-tests.result }}" == "failure" ] || [ "${{ needs.unit-tests.result }}" == "canceled" ]; then
unspecified version, but packages can depend on other packages with
could depend on ``mpich@1.2:`` if it can only build with version
``1.2`` or higher of ``mpich``.
.. note:: Windows Spec Syntax Caveats

   Windows has a few idiosyncrasies when it comes to the Spack spec syntax and the use of certain shells.
   Spack's spec dependency syntax uses the caret (``^``) character; however, this is an escape character in CMD,
   so it must be escaped with an additional caret (i.e. ``^^``).
   CMD will also attempt to interpret strings with ``=`` characters in them, so any spec including this symbol
   must be double quoted.
   Note: All of these issues are unique to CMD; they can be avoided by using PowerShell.
   For more context on these caveats see the related issues: `caret <https://github.com/spack/spack/issues/42833>`_ and `equals <https://github.com/spack/spack/issues/43348>`_
Below are more details about the specifiers that you can add to specs.
.. _version-specifier:
For example, for the ``stackstart`` variant:
mpileaks stackstart==4  # variant will be propagated to dependencies
mpileaks stackstart=4  # only mpileaks will have this variant value
Spack also allows variants to be propagated from a package that does not have that variant.
The first step to contribute new runners is to open an issue in the `spack infrastructure <https://github.com/spack/spack-infrastructure/issues/new?assignees=&labels=runner-registration&projects=&template=runner_registration.yml>`_
project. This will be reported to the spack infrastructure team who will guide users through the process
of registering new runners for Spack CI.
The information needed to register a runner is the motivation for the new resources, a semi-detailed description of
the runner, and finally the point of contact for maintaining the software on the runner.
The point of contact will then work with the infrastructure team to obtain runner registration token(s) for interacting
with Spack's GitLab instance. Once the runner is active, this point of contact will also be responsible for updating the
GitLab runner software to keep pace with Spack's GitLab.
Tagging
~~~~~~~
In the initial stages of runner registration it is important to **exclude** the special tag ``spack``. This will prevent
the new runner(s) from being picked up for production CI jobs while they are configured and evaluated. Once it is determined
that a runner is ready for production use, the ``spack`` tag will be added.
Because GitLab has no concept of tag exclusion, runners that provide specialized resources also require specialized tags.
For example, a basic CPU-only x86_64 runner may have the tag ``x86_64`` associated with it, while a runner containing a
CUDA-capable GPU may have the tag ``x86_64-cuda`` to denote that it should only be used for packages that will benefit from
a CUDA-capable resource.
OIDC
~~~~
Spack runners use OIDC authentication for connecting to the appropriate AWS bucket
which is used for coordinating the communication of binaries between build jobs. In
order to configure OIDC authentication, Spack CI runners use a python script with minimal
dependencies. This script can be configured for runners as seen here using the ``pre_build_script``.
An environment is used to group together a set of specs for the
purpose of building, rebuilding and deploying in a coherent fashion.
Environments provide a number of advantages over the *à la carte*
approach of building and loading individual Spack modules:
An environment is used to group a set of specs intended for some purpose
to be built, rebuilt, and deployed in a coherent fashion. Environments
define aspects of the installation of the software, such as:
#. Environments separate the steps of (a) choosing what to
install, (b) concretizing, and (c) installing. This allows
Environments to remain stable and repeatable, even if Spack packages
are upgraded: specs are only re-concretized when the user
explicitly asks for it. It is even possible to reliably
transport environments between different computers running
different versions of Spack!
#. Environments allow several specs to be built at once; a more robust
solution than ad-hoc scripts making multiple calls to ``spack
install``.
#. An Environment that is built as a whole can be loaded as a whole
into the user environment. An Environment can be built to maintain
a filesystem view of its packages, and the environment can load
that view into the user environment at activation time. Spack can
also generate a script to load all modules related to an
environment.
#. *which* specs to install;
#. *how* those specs are configured; and
#. *where* the concretized software will be installed.
Aggregating this information into an environment for processing has advantages
over the *à la carte* approach of building and loading individual Spack modules.
With environments, you concretize, install, or load (activate) all of the
specs with a single command. Concretization fully configures the specs
and dependencies of the environment in preparation for installing the
software. This is a more robust solution than ad-hoc installation scripts.
And you can share an environment or even re-use it on a different computer.
Environment definitions, especially *how* specs are configured, allow the
software to remain stable and repeatable even when Spack packages are upgraded. Changes are only picked up when the environment is explicitly re-concretized.
Defining *where* specs are installed supports a filesystem view of the
environment. Yet Spack maintains a single installation of the software that
can be re-used across multiple environments.
Activating an environment determines *when* all of the associated (and
installed) specs are loaded so limits the software loaded to those specs
actually needed by the environment. Spack can even generate a script to
load all modules related to an environment.
Other packaging systems also provide environments that are similar in
some ways to Spack environments; for example, `Conda environments
<https://conda.io/docs/user-guide/tasks/manage-environments.html>`_ or