`--force` was previously available only on `spack concretize`
but not on `spack spec`, `spack solve`, and other commands that
do concretization.
This means you can now preview a forced concretization of an environment
or spec with `spack spec -f` or `spack solve -f`. You can also set
concretization to *always* force in config with:
```yaml
spack:
  concretizer:
    force: true
```
- [x] make `concretizer:force` a configuration option
- [x] add `--force` to common concretizer arguments
The import-check action now does a better job of presenting problematic
import statements introduced by a PR.
The idea is roughly:
* Let (V₁, E₁) be the graph of modules as vertices and import statements
as edges before the change
* Let (V₂, E₂) be the graph after the code change, which is typically a small
perturbation of (V₁, E₁).
* X₁ = FAS(V₁, E₁) is the feedback arc set before (a minimal set of edges to
delete to make it acyclic)
* X₂ = FAS(V₂, E₂ ∖ X₁) is the feedback arc set after deletion of the minimal
set of edges that made the old graph acyclic.
* X₃ = FAS(V₂, E₂) is the feedback arc set after the change, computed from scratch.
Previously I displayed X₁ and X₃ and users had to compute the diff themselves.
Now, I'm showing X₂, which is a small set, typically directly related to the
code changes.
However, a small code change that adds, say, 2 problematic imports can
produce a completely different solution X₃ that requires deleting just 1
different import. In that case the user is informed that they can potentially do
less work.
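As a rough sketch of these three computations (hedged; this is not the actual import-check code, and exact minimal FAS is NP-hard, so a simple greedy heuristic stands in here):
```python
# Hedged sketch, not the actual import-check implementation: compute
# X1, X2, X3 with a greedy feedback-arc-set heuristic (remove one edge
# of some cycle until the graph is acyclic).
from typing import Dict, List, Optional, Set, Tuple

Edge = Tuple[str, str]


def find_cycle_edge(nodes: Set[str], edges: Set[Edge]) -> Optional[Edge]:
    """Return an edge lying on a cycle, or None if the graph is acyclic."""
    adj: Dict[str, List[str]] = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
    color = {n: 0 for n in nodes}  # 0 = unseen, 1 = on stack, 2 = done

    def dfs(u: str) -> Optional[Edge]:
        color[u] = 1
        for v in adj[u]:
            if color[v] == 1:
                return (u, v)  # back edge: closes a cycle
            if color[v] == 0:
                hit = dfs(v)
                if hit:
                    return hit
        color[u] = 2
        return None

    for n in nodes:
        if color[n] == 0:
            hit = dfs(n)
            if hit:
                return hit
    return None


def greedy_fas(nodes: Set[str], edges: Set[Edge]) -> Set[Edge]:
    """Greedily delete one cycle edge at a time until acyclic."""
    removed: Set[Edge] = set()
    remaining = set(edges)
    while (e := find_cycle_edge(nodes, remaining)) is not None:
        remaining.discard(e)
        removed.add(e)
    return removed


# Toy module graphs standing in for (V1, E1) and (V2, E2):
v1 = {"a", "b", "c"}
e1 = {("a", "b"), ("b", "a"), ("b", "c")}
v2, e2 = v1, e1 | {("c", "a")}  # the "code change" adds one import

x1 = greedy_fas(v1, e1)       # FAS before the change
x2 = greedy_fas(v2, e2 - x1)  # small delta shown to the user
x3 = greedy_fas(v2, e2)       # independent solution after the change
```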
So for PR #48784 the output is now:
> The overall number of problematic import statements increased by 1 from 31 to 32.
> This is likely a direct consequence of the following import statements:
>
> ```
> spack/config imports: spack.spec, spack.util.path, spack.util.remote_file_cache
> ```
>
> However, instead of removing 3 import statements, it is sufficient to remove only 1
> import statement from the following list:
>
> ```
> spack/concretize imports: spack.bootstrap, spack.solver.asp
> spack/environment imports: spack.bootstrap, spack.environment
> spack/fetch_strategy imports: spack.version.git_ref_lookup
> spack/install_test imports: spack.build_environment, spack.package_base
> spack/modules imports: spack.modules
> spack/platforms imports: spack.config
> spack/relocate imports: spack.bootstrap
> spack/repo imports: spack.package_base, spack.patch, spack.tag
> spack/spec imports: spack.binary_distribution, spack.compiler, spack.compilers, spack.concretize, spack.environment, spack.hash_types, spack.provider_index, spack.repo, spack.spec_parser, spack.store, spack.traverse, spack.variant, spack.version.git_ref_lookup
> spack/subprocess_context imports: spack.environment
> spack/util/gpg imports: spack.bootstrap
> spack/util/package_hash imports: spack.package_base
> spack/util/path imports: spack.config, spack.environment
> spack/util/remote_file_cache imports: spack.util.web
> ```
from which the user can figure out that
`spack/util/remote_file_cache imports: spack.util.web` is the "bottleneck" now.
* Add versions 2 and 3 of py-sphinx-rtd-theme.
Allow for versions of py-sphinx greater than 6.
Fix the Python version for older versions that depend on distutils.
Get the py-docutils dependency from the py-sphinx recipe.
* Depend purely on the py-docutils dependency in py-sphinx.
* More refined dependency versioning.
* Fixed versioning for py-sphinx and py-docutils.
Currently, environments created from manifest files with relative includes result in broken
references to config files.
This PR modifies `spack env create` to create local copies in the new environment of any local
config files from relative paths in the environment manifest passed as an init file.
This PR does not change the behavior if the include is an absolute path or if the include is from
a relative path outside the environment directory, but it does warn about missing relative includes if
they are inside the environment directory.
Includes regression test and short blurb in docs.
* Added salt variant to tau
* Update package.py
* [@spackbot] updating style on behalf of wspear
---------
Co-authored-by: wspear <wspear@users.noreply.github.com>
* Create SALT package.py
Added a package for the SALT Source AnaLysis Toolkit
@zbeekman
* [@spackbot] updating style on behalf of wspear
* Update package.py
Line wrap
---------
Co-authored-by: wspear <wspear@users.noreply.github.com>
* py-mdit-py-plugins: Add new versions 0.3.5, 0.4.2
Signed-off-by: Jonathon Anderson <anderson.jonathonm@gmail.com>
* py-myst-parser: Add new versions 0.19.0 to 4.0.0
Signed-off-by: Jonathon Anderson <anderson.jonathonm@gmail.com>
* hpctoolkit: Add +docs variant and manpages
This commit unconditionally enables manpages for the HPCToolkit tools.
The new `+docs` variant enables additional documentation, specifically
the user's manual. Both require new build-time dependencies.
Signed-off-by: Jonathon Anderson <anderson.jonathonm@gmail.com>
---------
Signed-off-by: Jonathon Anderson <anderson.jonathonm@gmail.com>
* package api: drop wildcard re-export
To ensure package repos remain forward/backward compatible with Spack,
we should explicitly export all symbols we want to expose in the public
package API, and drop `from spack.something import *`, because otherwise
removals from and additions to the public API go unnoticed (see the
sketch below).
Also, `llnl.util.filesystem` has some methods that shouldn't be exposed
in the package API, so it is better to enumerate a subset explicitly.
* remove flatten_dependencies / install_dependency_symlinks
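A hedged sketch of the explicit-export pattern described above (illustrative; not the actual export list):
```python
# Illustrative only, not the actual Spack export list: replace a
# wildcard re-export such as
#     from llnl.util.filesystem import *
# with an explicit enumeration, so additions and removals in the
# underlying module cannot silently change the public package API.
from llnl.util.filesystem import mkdirp, working_dir

__all__ = ["mkdirp", "working_dir"]
```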
* JAX: add v0.4.34
* Disable search for clang
* Update CUDA flags
* Add py-jax 0.4.33, comment out until py-jaxlib 0.4.33 is also released
* Fix GCC build
* Try TF_NVCC_CLANG
* py-jax: add v0.4.34
* jax no longer has separate tags for jaxlib
* Install compiled wheel
* Join path before glob
* Wheel is in spack stage, not tmp path
* Add 0.4.35
* Add newer versions
* Build system has been refactored yet again
* Drop clang
* Fix build with source tarball, rocm support
* Support GCC
* Remove clang-specific compiler flags
* enable_cuda flag was removed
* Fix logic
* py-jax: add v0.4.38
* Add patch to fix GCC support
* Patch no longer needed
* Skip patching, directly pass flags
* New flags
* Remove unused import
* Patch changed
* Use older version of patch
* Newer patch
* Add CUDA symlink
* Symlink more directories
* Recursive symlink
* Import function
* Recursive search
* Undo cuda changes
* Add v0.5.0
* I quit
* Improve definition of a few placeholder packages
These packages are placeholders for vendor provided software,
that is not buildable, and should be declared as external.
* ibm-java: remove package, as asked by maintainer
Regressed in #47126
Spack was not interpreting mirrors that use a relative path with respect to the
metadata directory.
---------
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* test_no_matching_compiler_specs: does not need mock_low_high_config,
since mutable_config is already used at class level
* bindist.py: set up a configuration that doesn't superimpose builtin.mock
over builtin
* builder.py: use a mock configuration for the tests
This commit adds version 6.8.0 of GeoModel. As far as I can tell from
the change notes, there are no changes required to the build
configuration or dependencies.
This adds a new configuration section called `env_vars:` that can be set in an environment.
It looks very similar to the existing `environment:` section that can be added to `modules.yaml`,
but it is global for an entire spack environment. It's called `env_vars:` to avoid conflating it with spack
environments (the term was too overloaded).
The syntax looks like this:
```yaml
spack:
  specs:
  - cmake%gcc
  env_vars:
    set:
      ENVAR_SET_IN_ENV_LOAD: "True"
```
Any of our standard environment modifications can be added to the `env_vars` section, e.g.
`prepend_path:`, `unset:`, `append_path:`, etc. Operations in `env_vars:` are performed
on `spack env activate` and undone on `spack env deactivate`.
* Reduce the size of outputted go built binaries
* Remove unused import from go package
* go: remove comment from setup dependents build env
* Add back missing imports after rebase
* Backward compat with Python 3.9 for socket.timeout
* Forward compat with Python [unknown] as HTTPResponse.geturl is deprecated
* Catch timeout etc from .read()
* Some minor simplifications: json.load(...) takes a file object in binary mode.
* Fix CDash code which does error handling wrong: non-2XX responses raise.
The conflict is too strict and prevents usable combinations from being
concretized. Some packages depend on pika, but do not include pika
headers in HIP device code files, and can thus be compiled even with the
bad combination of HIP and GCC versions.
* py-awkward: add dependency on py-importlib-metadata and py-fsspec
* Implement suggested constraint and dependency type changes
---------
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
On macOS you cannot unconditionally pass `-rpath`, it can conflict with
`-r`.
mpicc is not the place to inject rpaths to compiler runtime
libraries. That's up to the compiler's config files (spec files in gcc,
config files in llvm).
Also remove some patches that are redundant in newer versions of
openmpi.
Setting CONFIG_SITE to /dev/null before running configure should be sufficient to
prevent any config site file from being loaded, which currently causes non-determinism in builds.
Setting the CONFIG_SITE variable stops configure scripts from checking $prefix/share/config.site,
$prefix/etc/config.site, $ac_default_prefix/share/config.site, and $ac_default_prefix/etc/config.site,
so we don't have to patch a block of code.
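A minimal sketch of the idea, assuming Spack's standard `setup_build_environment` hook (not the actual diff):
```python
# Minimal sketch, assuming the standard Spack build-environment hook;
# the package is hypothetical.
from spack.package import *


class Example(AutotoolsPackage):
    """Illustrative package showing the CONFIG_SITE trick."""

    def setup_build_environment(self, env):
        # /dev/null is a readable, empty "site file", so configure
        # loads nothing and builds stay deterministic.
        env.set("CONFIG_SITE", "/dev/null")
```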
* Address quoting issue that causes dev_paths containing @ symbols to parse as versions
* Fix additional location where dev_path was improperly constructed
The dev_path must be quoted to avoid parsing issues when
paths contain '@' symbols. Also add tests to catch
regression of this behavior
* Format to fix line length
* fix failing tests
* Fix whitespace error
* Add binary_compatibility fixture to test
* Change string formatting to avoid multiline f string
* Update lib/spack/spack/test/concretization/core.py
* Add tmate debug session
* Revert "Add tmate debug session"
This reverts commit 24e2f77e3c.
* Move test and refactor to use env methods
* Update lib/spack/spack/test/cmd/develop.py
---------
Co-authored-by: psakievich <psakiev@sandia.gov>
Since macOS 15 `ld -single_module` warns with a deprecation message,
which makes configure scripts believe the flag is unsupported. That
in turn triggers a code path where `archive_cmds` is set to
```
$CC -r -keep_private_externs -nostdlib ... -dynamiclib
```
instead of just
```
$CC -dynamiclib ...
```
This code path was meant to trigger only on ancient macOS where libtool
had to add `-single_module` itself; the flag has been the default since
Mac OS X 10.4, and is now apparently deprecated because it has been a
no-op for more than 15 years.
The wrong `archive_cmds` causes actual problems combined with a bug in
OpenMPI's compiler wrapper (`CC=mpicc`), which appends `-rpath` flags,
which cause an error when combined with the `-r` flag added by the
autotools.
Spack's compiler wrapper doesn't do this, but it's likely there are
other compiler wrappers out there that are not aware that `-r` and
`-rpath` cannot be combined.
The fix is to change defaults: `lt_cv_apple_cc_single_mod=yes`.
* BF+ENH: Add wxwidgets 3.2.6 and py-wxpython 4.2.2
Improves compat with newer Python (>3.9) and numpy.
Fix error during configure step (even on at least some older
versions) by including 'pkgconfig' as a build dep so gtk+ is found.
* BF: Make wxpython use spack built wxwidgets instead of rebuilding
* Update var/spack/repos/builtin/packages/py-wxpython/package.py
Avoid using too new version of py-setuptools as license file format is currently not compatible.
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
---------
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* for config: let `syaml_dict` inherit from `dict` instead of `OrderedDict`. `syaml_dict` now only exists as a mutable wrapper for yaml related metadata.
* for spec serialization / hashing: use `dict` directly
This is possible since we only support cpython 3.6+ in which dicts are ordered.
This improves performance of hash computation a bit. For a larger spec I'm getting 9.22ms instead of 11.9ms, so 22.5% reduction in runtime.
* OSError.errno apparently is int | None, but already part of __str__ anyway so drop
* fix disambiguate_spec_from_hashes
* List[Spec].sort() complains, ignore it
* fix typing of KnownCompiler, and use it in asp.py
Adds packages for the Arkouda server (`arkouda`) and Arkouda python client (`py-arkouda`).
Arkouda server and client are divided into separate packages to allow for them to be
installed independently of one another.
Future work remains to add a `+dev` variant to `py-arkouda`, but that will require additional
supporting packages made available through spack (e.g. for `py-pytest-env`).
Modifications:
* Fix a severe bug where a spliced spec has the same hash as the original build spec
* Removed CacheManager in favor of install_mockery. The latter sets up a temporary store, and removes it at the end
of the test.
* Deleted a couple of helper functions e.g. _has_dependencies
* Checked that SolverException is raised, rather than Exception (more specific)
* Extended, and renamed the splicing_setup fixture (now called install_specs)
* Added more specific assertions in each test
One test is currently flaky, due to some instability when sorting multiple specs. It's currently marked xfail
The methods spack.spec.Spec.concretize and spack.spec.Spec.concretized
are deprecated in favor of spack.concretize.concretize_one.
This will resolve a circular dependency between the spack.spec and
spack.concretize modules in the next Spack release.
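A hedged migration sketch based on the deprecation note (assuming `concretize_one` accepts a `Spec` or spec string):
```python
# Hedged sketch of the migration; assumes concretize_one accepts a Spec.
import spack.concretize
from spack.spec import Spec

# Deprecated:
#     concrete = Spec("zlib").concretized()
# Replacement:
concrete = spack.concretize.concretize_one(Spec("zlib"))
```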
* add python bindings variant to chapel
* fix chpl_home, update chapel frontend rpath in venv
* fix chpl frontend shared lib rpath hack
* patch chapel env var line length limit
* patch chapel python shared lib location by chapel version
* unhack chpl frontend lib rpath
* fix postinstall main version number file path
* update chapel version number to 2.3
* use chapel releases instead of source tarballs, remove dead code
* Chapel 2.3 adds support for LLVM 19
* Bundled LLVM always requires CMake 3.20:
* Apply 2.3 patch for LLVM include search path
Fixes build errors for `chapel@2.3+rocm` of the form:
```
compiler/llvm/llvmDebug.cpp:174:9: error: cannot convert 'const char*' to 'llvm::dwarf::MemorySpace'
compiler/llvm/llvmDebug.cpp:254:15: error: cannot convert 'const char*' to 'llvm::dwarf::MemorySpace'
```
* fix style
* Fix misreporting of test_hello failures
`test_part` was aliasing the enclosing test name, leading to confusing and incorrect reporting on test failure.
* Ensure `libxml2` optional dep of `hwloc` is also added to `PKG_CONFIG_PATH` in the run env
* patch chplCheck for GPU + multilocale configs, install docs
* Adjust patch for checkChplInstall
* Ensure `CHPL_DEVELOPER` is unset for `~developer`
---------
Co-authored-by: Dan Bonachea <dobonachea@lbl.gov>
* Improve support using external modules with zsh
Spack integrates with external modules by launching a python subprocess
to scrape the path from the module command. In zsh, subshells do not
inherit functions defined in the parent shell (only environment
variables). This breaks the module function in module_cmd.py.
As a workaround, source the module commands using $MODULESHOME prior to
running the module command.
* Fix formatting
* Fix flake error
* Add guard around sourcing module file
* Add improved unit testing to module src command
* Correct style
* Another attempt at style
* formatting again
* formatting again
---------
Co-authored-by: psakievich <psakiev@sandia.gov>
* bdsim: update to point to the new location in github and add 1.7.7
* Remove the C++ standard patch
* Remove the cmake_args and add a comment instead
---------
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
`spack find` and other commands that read the DB will fail if the database is in a
read-only location, because `FailureTracker` and `Database` can try to create their
parent directories when they are constructed, even if the DB root is read-only.
Instead, we should only write to the filesystem when needed.
- [x] don't create parent directories in constructors
- [x] have write operations create parents if needed
- [x] in tests, `chmod` databases to be read-only unless absolutely needed.
---------
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* Adding the PACE library to the LAMMPS build
* Adding the PACE library for ML-PACE in LAMMPS
---------
Co-authored-by: Richard Berger <rberger@lanl.gov>
* Add py-protobuf@4.24.4 (needed for py-cylc-flow@8.3.6).
* Add py-cylc-flow@8.3.6 and enable png output when creating graphs
by requesting variant pangocairo for graphviz dependency.
* Add corresponding versions of py-metomi-rose@2.3.2,
py-cylc-rose@1.4.2, py-cylc-uiserver@1.4.2.
* Add myself as maintainer to all the cylc-related packages.
This PR allows using the subscript notation directly on packages. The
intent is to reduce the boilerplate needed to retrieve package
properties from nodes other than root.
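A hedged illustration of the intent (the package and dependency are hypothetical, and the exact surface may differ from the PR):
```python
# Hedged illustration only: subscripting a package reaches a dependency
# node's package object without going through the spec.
from spack.package import *


class Example(Package):
    depends_on("openblas")

    def install(self, spec, prefix):
        via_spec = self.spec["openblas"].package  # boilerplate
        via_pkg = self["openblas"]  # subscript notation on the package
```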
New year, new versions of the ACTS dependencies. This time, we have a
new version of detray, a new version of algebra plugins, as well as a
new version of ACTS itself.
* Add type-hints to `spack.util.executable.Executable`
* Add type-hint to input
* Use overload, and remove assertions at calling sites
* Bump mypy to v1.11.2 (working locally), Python to 3.13
* root: Restrict patch range
* root: Set minimum gcc version for cxxstd=20
* root: fix gcc range when cxxstd 20
Co-authored-by: Paul Gessinger <hello@paulgessinger.com>
---------
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* add parse tool, json, and load json
bug fix
add variants and conflicts for g4tendl
* correct format
* correct format
* update version mapping
* remove 1.0
* Update var/spack/repos/builtin/packages/geant4-data/package.py
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* add options
* depends on it
* fix typo
---------
Co-authored-by: Seth R. Johnson <johnsonsr@ornl.gov>
* e4s cray rhel ci stack: re-enable and update for new cpe, should fix cray libsci issue
* only run e4s-cray-rhel stack
* Make Autotools build_system point at correct build_directory
* remove selective enable of cray-rhel stacks
* restore SPACK_CI_DISABLE_STACKS
* use dot prefix to hide cray-sles jobs instead of comment-out
---------
Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
* edm4hep: Add lower clang version bounds
EDM4hep 0.99.1 introduced usage of consteval. While this is technically
supported in clang versions below 17, the implementation seems to be
incomplete and fails to compile EDM4hep 0.99.1 and up.
* Update var/spack/repos/builtin/packages/edm4hep/package.py
Co-authored-by: Thomas Madlener <thomas.madlener@desy.de>
---------
Co-authored-by: Thomas Madlener <thomas.madlener@desy.de>
* py-py-spy: add 0.4.0
* py-py-spy: port from Package to CargoPackage
* CargoPackage: respect make_jobs
* rust: respect make_jobs during build and install
* spack style
* CargoBuilder: fix make_jobs syntax
* CargoBuilder: don't write to $HOME, use stage dir
* rivet: add version 3.1.11 with some back-port fixes
Some back-ported fixes from the 4.x series. Most importantly the
possibility to build against HepMC 3.3.0.
* yoda: Add version 1.9.11
* Add a conflict with hepmc=2
* Unify hepmc@3.3.0 conflicts into one statement
* Remove conflict and update conditional hepmc variant
* Use caret to signify conflict with dependency
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* Make conflict and msg more specific
* Add dedicated version dependency to yoda
---------
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* gromacs: remove dependency on Fortran
Fortran was removed from GROMACS core in 4.6. There are a few contrib
files around, but they are not built anyway.
Also fix a couple typos.
* [@spackbot] updating style on behalf of al42and
---------
Co-authored-by: al42and <al42and@users.noreply.github.com>
* zoltan: Ignore errors about incompatible pointer type with gcc@14
In gcc 14 -Werror=incompatible-pointer-types is now the default.
Silence it until it gets fixed in zoltan upstream.
* zoltan: Fix linking error when built with ifx
* mpich: gather in a single place env modifications needed by mpich derivatives
MPICH, and its derivatives, share a lot of copy/paste code to setup the
environment during the different stages of the package life-cycle.
This commit gathers the common modifications in a single place (a mixin class),
living in the Mpich package, and makes derivatives import, and reuse, it.
* Fix docs for Python < 3.13
* openloops: Add cmodel to user config file to override setting added in 2.1.2.
* openloops: Change cmodel from small to large following author recommendation.
A few changes to tarball creation (for build caches):
- Do not run `file` to distinguish binary from text:
  - `file` is slow, even when running it in a batched fashion -- it usually reads all bytes and has slow logic to categorize specific types.
  - We don't need a highly detailed file categorization; a crude categorization into elf, mach-o, and text suffices, and detecting elf and mach-o is straightforward and cheap.
  - Detecting utf-8 (and with that ascii) is highly accurate: the false positive rate decays exponentially as file size increases. Further, it's not only the most common encoding, but the most common file type in package prefixes. iso-8859-1 is cheaply (but heuristically) detected too, and is sufficiently accurate after binaries and utf-8 files are classified earlier.
- Remove `file` as a dependency of Spack in general, which makes Spack itself easier to install.
- Detect file type and the need to relocate as part of creating the tarball, which is more cache friendly and thus faster (see the sketch below).
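As a rough illustration of the crude categorization described above (a hedged sketch, not Spack's implementation; the magic numbers are the standard ELF/Mach-O ones):
```python
# Hedged sketch, not Spack's implementation: crude content
# categorization using magic bytes for ELF/Mach-O and a strict UTF-8
# decode for text.
MACHO_MAGICS = (
    b"\xfe\xed\xfa\xce",  # 32-bit big-endian
    b"\xfe\xed\xfa\xcf",  # 64-bit big-endian
    b"\xce\xfa\xed\xfe",  # 32-bit little-endian
    b"\xcf\xfa\xed\xfe",  # 64-bit little-endian
)


def categorize(data: bytes) -> str:
    if data.startswith(b"\x7fELF"):
        return "elf"
    if data[:4] in MACHO_MAGICS:
        return "mach-o"
    try:
        data.decode("utf-8")  # false positives decay rapidly with size
        return "text"
    except UnicodeDecodeError:
        return "other"


print(categorize(b"\x7fELF\x02\x01\x01"))       # elf
print(categorize("héllo".encode("utf-8")))      # text
```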
mpicxx_shared_libraries seems to be a relic of #1550, and is
not currently used by any builtin package.
Thus, clean up the recipes, and avoid monkey-patching
spec objects.
Python >= 3.13 does not have the crypt variant anymore. No matter
whether the test for crypt succeeds (which it can on Fedora, which provides its own
crypt module for Python 3.13) or fails, it will add +crypt or ~crypt,
both of which fail because the variant only exists up to Python 3.12.
Co-authored-by: Richard Berger <rberger@lanl.gov>
* silo: variant python needs python
* Depending on Python did not resolve the "Python.h" header not being found;
added the -I path to the Python headers to the compiler invocation.
Having silo depend on python was not sufficient to get the path to python.
Maybe there is a smarter way to do that, but this one works.
* Update var/spack/repos/builtin/packages/silo/package.py
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* Update var/spack/repos/builtin/packages/silo/package.py
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* Update var/spack/repos/builtin/packages/silo/package.py
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
---------
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* thepeg: Correctly specify rivet version for dependency
* herwig3: Add latest version 7.3.0
* thepeg: Make sure to have consistent hepmc version
rivet and thepeg need to have the same hepmc version otherwise things
will not compile
* libzip links with external libs found on host feelpp/spack#6
* add maintainers
* fix style
* use multi-build system
/cc @wdconinc
* fix style
* rm space and rename variant bz2 to bzip2
/cc @wdconinc
* fix variant name for bzip2
* zstd is supported in libzip@:1.8
* fix style
* fix style
* fix style
* rm deprecated version and versions that cannot be found easily
use only cmake from now on
* fix style
* fix style
* use variant when option for zstd
* Update var/spack/repos/builtin/packages/libzip/package.py
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* fix style
/cc @wdconinc
---------
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
`kcov` was removed in Ubuntu 24.04, and it is no longer
installable via `apt` in our CI images. Install it via
Linuxbrew instead, at least until it comes back to Ubuntu.
`subversion` is also not installed on Ubuntu 24 by default,
so we have to install it manually.
- [x] Add linuxbrew to linux tests
- [x] Install `kcov` with brew
- [x] Install subversion with `apt`
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* Fix silent error when reporting builds to CDash
CDash has a 191 char maximum for build names. When this
is exceeded, CDash silently fails to correctly process the
reported XML. This truncates CDash build names to 190 chars
and emits a warning indicating it is doing so, to prevent
such errors from occurring.
* test/reporters.py: add unittest for buildname len issue
* test/reporters.py: rename cdash buildname test
* ci/common.py: fix syntax causing breaking test
It appears that the CDash reporter is expecting a string
as the buildname.
* Update lib/spack/spack/reporters/cdash.py
Fix warning message to reflect actual issue.
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* ci/common.py: fix function call to actually call function
---------
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: psakievich <psakiev@sandia.gov>
* py-typer: add version 0.15.1 and "standard" optional dependencies
* py-typer: remove variant that only exists in source, not sdist. Remove trailing .0 from versions.
* Improve variant robustness for dd4hep and edm4hep
Now variants won't be "false" if there's a typo.
* Use libs instead of manual prefix paths
* Improve cmake for another hep package
* Fix variant use and style
* Use directories for ODD
* r-rlas is a dependency for r-lidr
* new package r-lidr w/ suggests to address masking issues
* fixed flake8 issues and added maintainers
* removed boost import statement for flake sake
* Score-P: Replace with-or-without, document options that are not currently explicitly mapped in package for mpi and shmem.
* trim long lines
---------
Co-authored-by: wrwilliams <wrwilliams@users.noreply.github.com>
Improve our typing by updating some todo locations in the code to use
`Literal` instead of a simple `str`.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* Fix issue reported by some users regarding some build dependencies.
* Remove invalid configure-time flag that was recently introduced.
Signed-off-by: Samuel K. Gutierrez <samuel@lanl.gov>
There are still more fix ups required for the missing libs to work as
expected in CI. Dropping the error requirement in favor of moving to a
log scraping method until we can verify all package issues have been
addressed correctly.
* Set the "build_jobs" on concretization/generate for CI
build_jobs also controls the concretization pool size. Set this
in the config section for CI generate.
This config is overwritten by build_job CI using the SPACK_BUILD_JOBS
environment variable. This implicitly will drop the default build
CPU request on all "default" grouped build jobs from (max) 16 to 8.
* Add default allocations for build jobs
* Add common jobs and concretize args to ci generate and rebuild
* CI: Specify parallel concretize and build jobs via argument
* Increase power and cray concretization limits
Lowering limits for these stacks creates timeout
* Increase default pool size to 8
intermittent timeouts with 4 CPU
* Add reduced requests for windows for now
This turns some variant-specific methods for dealing with when-keyed dictionaries into
more generic versions, in preparation for conditional version definitions.
`_by_name`, `_names`, etc. are replaced with generic methods for transforming
when-keyed dictionaries:
* `_by_subkey()`
* `_subkeys()`
* `_num_definitions()`
* `_definitions()`
* `_remove_overridden_defs()`
And the variant accessors are refactored to use these methods underneath.
To do this, types like `WhenDict` had to be generified, and some `TypeVars`
were added for sortable keys and values.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
We are using more and more typing features in Spack, and without features like
protocols, typing core is becoming harder and harder.
I think it's worth vendoring `typing_extensions` for this. It will get us a number of
useful capabilities:
* `Literal`
* `TypedDict`
* `Protocol`
among others.
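For illustration (hedged; not code from this change), the kind of typing these features enable on older Pythons:
```python
# Illustrative only: features from typing_extensions usable on Pythons
# whose stdlib `typing` lacks them.
from typing_extensions import Literal, Protocol


class Sortable(Protocol):
    def __lt__(self, other: "Sortable") -> bool: ...


def set_order(direction: Literal["asc", "desc"]) -> bool:
    # Literal confines callers to exactly these two strings.
    return direction == "asc"


print(set_order("asc"))
```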
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
This commit adds a config option `config:shared_linking:missing_library_policy:error/warn/ignore` which will cause installation errors or warnings when ELF executables or libraries need shared libraries which cannot be resolved from RPATH search paths. The default is to ignore.
This is a safeguard against accidentally linking to system libraries instead of Spack libraries. It makes it more likely that build cache installs work on different machines. It works only at the level of libraries, not at the level of symbols. Some system dependencies are allowed (e.g. kernel and libc).
Packages can (but are discouraged to) set `unresolved_libraries` to a list of patterns of sonames/library names that are known to be unresolvable in RPATHs. In the future this could be made more fine-grained in a non-breaking way by allowing a dictionary of patterns `lib => [deps]`.
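A hedged sketch of the package-side escape hatch (the package and soname pattern are illustrative, not from a real package):
```python
# Hedged sketch; the soname pattern is illustrative.
from spack.package import *


class Example(Package):
    # Libraries known to be unresolvable from RPATHs, e.g. resolved at
    # runtime from a system location; exempted from the new policy.
    unresolved_libraries = ["libcuda.so.*"]
```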
Extracted #45189
Common test setup has been extracted into fixtures. Some matrix
dimensions moved from "compiler" to "targets".
Use --fake install for packages in test.
The `_normal` attribute on specs is no longer used and has no meaning.
It's left over from part of the original concretizer.
The `concrete` constructor argument is also not used by any part of core.
- [x] remove `_normal` attribute from `Spec`
- [x] remove `concrete` argument from `Spec.__init__`
- [x] remove unused `check_diamond_normalized_dag` function in tests
- [x] simplify `Spec` constructor and docstrings
I tried to add typing to `Spec` here, but it creates a huge number of type issues
because *most* things on `Spec` are optional. We probably need separate `Spec` and
`ConcreteSpec` classes before attempting that.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* nim: fix deps, deprecate and patch old versions
* Fix runtime dependencies for produced binaries:
- Add -rpaths pointing at dependencies to
std wrapper modules
- Set version constraints for OpenSSL
- Added SQLite3 variant for <2.0
* Parallelize build with make
* Deprecate 1.0.10 due to CVEs
* Deprecate 1.9.3 as it's an old development version
* Backport patch for CVE-2021-21372 to <1.2.10/<1.4.4,
CVE-2021-21374 to 1.4.2 and CVE-2021-46872 to 1.4.*
* Avoid empty low ranges that include devel
* Add previously missing CVE comment
* Keep "link" type for dynamic libraries for MSVC
* Omit "run" type for library dependencies
* Disable SQLite variant by default
* Fix version ranges
Had assumed they were exclusive, but they're inclusive
* Correct version range for sqlite variant
Difference doesn't matter outside of development versions
* Move patches to use GitHub URLs instead of files
* Retry CI
* append ?full_index=1
Previously the pip setup would delete the visitmodule during the install
step. This was fixed by forcing the pip setup to only run once before
the dependents are created.
Add missing encoding=utf-8 to various open calls (see the sketch after
this list). This makes files like spec.json, spack.yaml, spack.lock, config.yaml etc. locale
independent w.r.t. text encoding. In practice this has not often been an
issue since Python 3.7, where the C locale is promoted to
C.UTF-8. But it's better to enforce UTF-8 explicitly, since there is
no guarantee text files are written in the right encoding.
Also avoid opening in text mode if it can be avoided.
* `f.tell` on a `TextIOWrapper` does not return the offset in bytes, but
an opaque integer that can only be used for `f.seek` on the same
object. Spack assumes it's a byte offset.
* Do not open in a locale dependent way, but assume utf-8 (and allow
users to override that)
* Use tempfile to generate a backup/temporary file in a safe way
* Comparison between None and str is valid and on purpose.
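A minimal sketch of the resulting pattern (the file names are illustrative):
```python
# Minimal sketch of the pattern; file names are illustrative.
# Always pass an explicit encoding instead of relying on the locale:
with open("spack.yaml", "r", encoding="utf-8") as f:
    manifest = f.read()

# For byte offsets, use binary mode: f.tell() on a text stream returns
# an opaque cookie, not a byte offset.
with open("spec.json", "rb") as f:
    raw = f.read()
```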
Follow-up to #47956
* Rename `token.py` -> `tokenize.py`
* Rename `parser.py` -> `spec_parser.py`
* Move common code related to iterating over tokens into `tokenize.py`
* Add "unexpected character token" (i.e. `.`) to `SpecTokens` by default instead of having a separate tokenizer / regex.
* Python: deprecate 3.8
* Remove preference for EOL Python versions
* Explicitly deprecate things requiring EOL Python
* More deprecations
* deprecate old versions of slepc, py-petsc4py, py-slepc4py in sync with old versions of petsc
---------
Co-authored-by: Satish Balay <balay@mcs.anl.gov>
The use of `^` in `depends_on` directives has never been allowed, since
the dawn of Spack.
Up to now, we used to have an audit to catch this kind of issue, mainly
because in that way we could easily collect all issues and report them
to packagers at once.
Due to implementation details, this audit doesn't work if a dependency
without a `^` is followed by the same dependency with a `^`.
This PR makes this pattern an error, which will be reported eagerly, and
removes the corresponding audit. It also fixes a package using the wrong
idiom.
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
So far, the ESMF package recipe in spack assumes that the spack
compilers clang and apple-clang are using gfortran as the Fortran
compiler. But with the latest improvements to the LLVM compilers,
we need to also support clang with flang.
Reorganize the pipeline generation aspect of the ci module,
mostly to separate the representation, generation, and
pruning of pipeline graphs from platform-specific output
formatting.
Introduce a pipeline generation registry to support generating
pipelines for other platforms, though gitlab is still the only
supported format currently.
Fix a long-standing bug in pipeline pruning where only direct
dependencies were added to any node's dependency list.
* Set the "build_jobs" on concretization/generate for CI
build_jobs also controls the concretization pool size. Set this
in the config section for CI generate.
This config is overwritten by build_job CI using the SPACK_BUILD_JOBS
environment variable. This implicitly will drop the default build
CPU request on all "default" grouped build jobs from (max) 16 to 8.
* Add default allocations for build jobs
* Add common jobs and concretize args to ci generate and rebuild
* CI: Specify parallel concretize and build jobs via argument
* Increase power and cray concretization limits
Lowering limits for these stacks creates timeout
* py-nbclassic: add v1.1
* py-nbclassic: reduce explicit dependencies for v1.1.0
Having all the 'excess' packages listed did not break anything, as
they were needed for `py-jupyter-server` (pulled in via `py-notebook-shim`)
anyway, but the change makes it clearer why things are being pulled in.
* acts dependencies: new versions as of 2024/12/08
This commit includes a new version of ACTS, as well as new versions of
the ACTS algebra plugins, covfie, detray, and geomodel.
* Fixes
* covfie: depends_on cmake@3.21: when @0.11:
---------
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* Add py-param@2.1.1
* Add py-panel@1.5.2
* Add py-bokeh@3.5.2
* New package py-datashader
* New package py-geoviews
* New package py-holoviews
* WIP: new package py-uxarray
* New package py-antimeridian
* New package py-dask-expr
* New package py-spatialpandas
* New package py-hvplot
* Add dependency on py-dask-expr for 'py-dask@2024.3: +dataframe'
* Added all dependencies for py-uxarray; still having problems with py-dask +dataframe / py-dask-expr
* Fix style errors in many packages
* Clean up comments and fix style errors in var/spack/repos/builtin/packages/py-dask-expr/package.py
* In var/spack/repos/builtin/packages/py-dask/package.py: since 2023.8, the dataframe variant requires the array variant
* Fix style errors in py-uxarray package
- [x] Clean up arguments on the `resource` directive.
- [x] Add type annotations
- [x] Add `resource` to type annotations on `PackageBase`
- [x] Fix up `resource` docstrings
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Some of the class-level annotations were wrong, and some were missing. Annotate all the
functions here and fix the class properties to match what's actually happening.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
The first argument to each Spack directive is not a `PackageBase` instance but a
`PackageBase` class object, so fix the type annotations to reflect this.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
`Optional` shouldn't be part of `PatchesType` -- it's clearer to specify `Optional`
in the methods that need their arguments to be optional.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* update py-numl and py-nugraph recipes
This commit adds the develop branch as a valid option for each of these two packages. In order to enable this, package tarballs are now retrieved from the GitHub source repository instead of PyPI, and their checksums and the build system have been updated accordingly.
* rename versions "develop" -> "main" to be consistent with branch name
* nim: add latest versions
In addition:
- Create separate build and install phases.
- Remove koch nimble call as it's redundant with koch tools.
- Install all additional tools bundled with Nim instead of only Nimble.
* Fix 1.6 version
* nim: add devel
In addition:
- Fix build accessing user config/cache
* Bump Kokkos and Kokkos-kernels to 4.5.00
* petsc@:3.22 add a conflict with this new version of kokkos
* Update kokkos/kokkos-kernel dependency
---------
Co-authored-by: Satish Balay <balay@mcs.anl.gov>
the nis module was removed in python 3.13
we had it default to ~nis
no package requires +nis
required dependencies for +nis were missing
so better to remove the nis module entirely.
* acts dependencies: new versions as of 2024/11/25
This commit adds a new version of detray and two new versions of vecmem.
* acts dependencies: new versions as of 2024/12/02
This commit adds version 38 of ACTS and a new version of detray.
In preparation for adding `when=` to `version()`, I'm cleaning up the types in
`version_types` and making sure the methods here pass `mypy` checks. This started as an
attempt to use `ConcreteVersion` outside of `spack.version` and grew into a larger type
refactor.
The hierarchy now looks like this:
* `VersionType`
  * `ConcreteVersion`
    * `StandardVersion`
    * `GitVersion`
  * `ClosedOpenRange`
  * `VersionList`
Note that the top-level thing can't easily be `Version` as that is a method and it
returns only `ConcreteVersion` right now. I *could* do something fancy with `__new__` to
make `Version` a synonym for the `ConcreteVersion` constructor, which would allow it to
be used as a type. I could also do something similar with `VersionRange` but not sure if
it's worth it just to make these into types.
There are still some places where I think `GitVersion` might not be handled properly,
but I have not attempted to fix those here.
- [x] Add a top-level `VersionType` class that all version types extend from
- [x] Define and document common methods and rich comparisons on `VersionType`
- [x] Replace complicated `Union` types with `VersionType` and `ConcreteVersion` as needed
- [x] Annotate most methods (skipping `__getitem__` and friends as the typing is a pain)
- [x] Fix up the `VersionList` constructor a bit
- [x] Add cases to methods that weren't handling all `VersionType`s
- [x] Rework some places to clarify typing for `mypy`
- [x] Simplify / optimize _next_version
- [x] Make StandardVersion.string a property to enable lazy comparison
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
This visitor accepts the sub-dag of all nodes and unique edges that have
deptype X directly from given roots, or deptype Y transitively for any
of the roots.
LLVM can be a transitive link dependency of hip through gl's dependency mesa, which uses it for software rendering.
In this case make sure llvm-amdgpu is found with find_package(LLVM) and
find_package(Clang) by setting LLVM_ROOT and Clang_ROOT.
That makes the patch of find_package's HINTS redundant, so remove it.
It did not work anyway, because CMAKE_PREFIX_PATH has higher precedence
than HINTS.
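A hedged sketch of the approach (not the exact diff), using the standard `CMakePackage` helpers:
```python
# Hedged sketch, not the exact change: point find_package(LLVM) and
# find_package(Clang) at llvm-amdgpu via the *_ROOT variables, which
# are consulted before CMAKE_PREFIX_PATH (HINTS is not).
from spack.package import *


class Example(CMakePackage):
    depends_on("llvm-amdgpu")

    def cmake_args(self):
        llvm = self.spec["llvm-amdgpu"].prefix
        return [
            self.define("LLVM_ROOT", llvm),
            self.define("Clang_ROOT", llvm),
        ]
```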
Explicitly sets the CMake variables Faodel_INCLUDE_DIRS and Faodel_LIBRARY_DIRS when +faodel.
This seems to be needed for recent versions of seacas (seacas@2021-04-02:), but should be safe
to do for all versions.
For Faodel_INCLUDE_DIRS, it looks like Faodel has header files under $(Faodel_Prefix)/include/faodel,
but seacas is not including the "faodel" part in #includes. So add both $(Faodel_Prefix)/include
and $(Faodel_Prefix)/include/faodel.
Co-authored-by: payerle <payerle@users.noreply.github.com>
* c/c++ flags should have been modified for all 2023.x.y versions, but
upper bound was too low
* Fortran flags should have been modified for all 2024.x.y versions, but
likewise the upper bound was too low
* gromacs: announce deprecation policy and start to implement
* Style it up
* [@spackbot] updating style on behalf of mabraham
* Bump versions used in CI
---------
Co-authored-by: mabraham <mabraham@users.noreply.github.com>
Permit configuring GROMACS with support for mdrun to trace its timing
regions by calling the ITT API. This permits tools like VTune and
unitrace to augment their analysis with GROMACS-specific annotation.
External googletest breaks dependents because they end up with
ITK_LIBRARIES set to `GTest::GTest;GTest::Main`, which then ends up
literally in a nonsensical link line `-lGTest::GtTest`.
The vendored googletest produces a cmake config file where
`ITKGoogleTest_LIBRARIES` is empty.
* gromacs: oneapi does not always require gcc
* Support intel_provided_gcc only with %intel classic compiler
Require gcc only when needed with %intel
* New approach depending on gcc-runtime directly
* Update var/spack/repos/builtin/packages/gromacs/package.py
Co-authored-by: Christoph Junghans <christoph.junghans@gmail.com>
---------
Co-authored-by: Christoph Junghans <christoph.junghans@gmail.com>
* Visit: Add new versions 3.4.0 and 3.4.1
* Adios2: Restrict python; 3.11 does not work for older Adios2
* VisIt: Set the VTK_VERSION for @3.4:
Older versions of VTK used the VTK_{MAJOR, MINOR}_VERSION variables for
VTK detection. VisIt >= 3.4 uses the full string VTK_VERSION.
* CI: Don't build llvm-amdgpu for non-HIP stack
* VisIt: v3.4.1 handles newer Adios2 correctly
* Visit: Add missing links in HDF5, set correct VTK version configuration parameter
* VisIt: Add py-pip requirement and patch visit with configuration changes
* HDF5 symlinks move when inside of callback
* VisIt ninja install fails with python module. Using make does not
* VisIt 3.4 has a high minimum cmake requirement
* HDF5: Early return when not mpi for mpi symlinks
* HDF5: Use platform agnostic method for creating legacy compatible MPI symlinks
* Fix VISIT_VTK_VERSION handling for 8.2.1a hack
* This extends PR #47285 to properly include some of the required version constraints of cgal 6, including the C++ standard. It also adds the new no-gmp backend as a variant.
* fix style
* disable cgal@6 +demo variant as the demos require qt6 which is not in spack
* disable the gmp variant until clarity on how its supposed to work is provided. bound shared and header_only variants to relevant versions
* Fix missing msvc compiler limit, fix variant left in
* Add more comments. Better describe the gmp variant. Remove testing code
* fix style
The default build of clang on darwin couldn't actually build anything
because it lacked a built-in sysroot. Also, several compilation
errors finding the system libc++ cropped up, much like those in GCC, and
have been fixed.
* packages: Update 'yambo'
* add call to 'resource' method to download Ydriver and iotk during fetch instead of during build
* air-gapped installation could be performed since version 5.2.1
* add versions 5.2.3 and 5.2.4
* remove some nonexistent configure options for versions "@5:"
* add a sanity_check on 'bin/yambo'
The gni libfabric provider works on some Cray systems, but not all. For
example, Slingshot-based machines use a different libfabric provider
(cxi). Therefore libfabric/gni should not be a dependency when using
Cray PMI.
* celeritas: remove deprecated versions through 0.3
* celeritas: deprecate old versions
* celeritas: add c++20 option
* Propagate vecgeom CUDA requirements
* Remove outdated conflicts and format it
Automatic splicing saw `Spec` grow a `__len__` method, but it's only used
in one place and it's not clear the semantics are useful elsewhere. It also
runs the risk of Specs one day being confused for other types of containers.
Rather than introduce a new function for one algorithm, let's use a more
specific method in the splice code.
- [x] Use topological ordering in `_resolve_automatic_splices` instead of
sorting by node count
- [x] delete `Spec.__len__()` and `Spec.__bool__()`
---------
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Greg Becker <becker33@llnl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
`spack spec` output has looked like this for a while:
```console
> spack spec /v5fn6xo /wd2p2v7
Input spec
--------------------------------
- /v5fn6xo
Concretized
--------------------------------
[+] openssl@3.3.1%apple-clang@16.0.0~docs+shared build_system=generic certs=mozilla arch=darwin-sequoia-m1
[+] ^ca-certificates-mozilla@2023-05-30%apple-clang@16.0.0 build_system=generic arch=darwin-sequoia-m1
...
Input spec
--------------------------------
- /wd2p2v7
Concretized
--------------------------------
[+] py-six@1.16.0%apple-clang@16.0.0 build_system=python_pip arch=darwin-sequoia-m1
[+] ^py-pip@23.1.2%apple-clang@16.0.0 build_system=generic arch=darwin-sequoia-m1
```
But the input spec is right there on the CLI, and it doesn't add anything to the output.
Also, since #44843, specs concretized in the CLI line can be unified, so it makes sense
to display them as we did in #44489 -- as one multi-root tree instead of as multiple
single-root trees.
With this PR, concretize output now looks like this:
```console
> spack spec /v5fn6xo /wd2p2v7
[+] openssl@3.3.1%apple-clang@16.0.0~docs+shared build_system=generic certs=mozilla arch=darwin-sequoia-m1
[+] ^ca-certificates-mozilla@2023-05-30%apple-clang@16.0.0 build_system=generic arch=darwin-sequoia-m1
[+] ^gmake@4.4.1%apple-clang@16.0.0~guile build_system=generic arch=darwin-sequoia-m1
[+] ^perl@5.40.0%apple-clang@16.0.0+cpanm+opcode+open+shared+threads build_system=generic arch=darwin-sequoia-m1
[+] ^berkeley-db@18.1.40%apple-clang@16.0.0+cxx~docs+stl build_system=autotools patches=26090f4,b231fcc arch=darwin-sequoia-m1
[+] ^bzip2@1.0.8%apple-clang@16.0.0~debug~pic+shared build_system=generic arch=darwin-sequoia-m1
[+] ^diffutils@3.10%apple-clang@16.0.0 build_system=autotools arch=darwin-sequoia-m1
[+] ^libiconv@1.17%apple-clang@16.0.0 build_system=autotools libs=shared,static arch=darwin-sequoia-m1
[+] ^gdbm@1.23%apple-clang@16.0.0 build_system=autotools arch=darwin-sequoia-m1
[+] ^readline@8.2%apple-clang@16.0.0 build_system=autotools patches=bbf97f1 arch=darwin-sequoia-m1
[+] ^ncurses@6.5%apple-clang@16.0.0~symlinks+termlib abi=none build_system=autotools patches=7a351bc arch=darwin-sequoia-m1
[+] ^pkgconf@2.2.0%apple-clang@16.0.0 build_system=autotools arch=darwin-sequoia-m1
[+] ^zlib-ng@2.2.1%apple-clang@16.0.0+compat+new_strategies+opt+pic+shared build_system=autotools arch=darwin-sequoia-m1
[+] ^gnuconfig@2022-09-17%apple-clang@16.0.0 build_system=generic arch=darwin-sequoia-m1
[+] py-six@1.16.0%apple-clang@16.0.0 build_system=python_pip arch=darwin-sequoia-m1
[+] ^py-pip@23.1.2%apple-clang@16.0.0 build_system=generic arch=darwin-sequoia-m1
[+] ^py-setuptools@69.2.0%apple-clang@16.0.0 build_system=generic arch=darwin-sequoia-m1
[-] ^py-wheel@0.41.2%apple-clang@16.0.0 build_system=generic arch=darwin-sequoia-m1
...
```
With no input spec displayed -- just the concretization output shown as one consolidated
tree and multiple roots.
- [x] remove "Input Spec" section and "Concretized" header from `spack spec` output
- [x] print concretized specs as one BFS tree instead of multiple
---------
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
This PR provides complementary 2 features:
1. An augmentation to the package language to express ABI compatibility relationships among packages.
2. An extension to the concretizer that can synthesize splices between ABI compatible packages.
1. The `can_splice` directive and ABI compatibility
We augment the package language with a single directive: `can_splice`. Here is an example of a package `Foo` exercising the `can_splice` directive:
```python
class Foo(Package):
    version("1.0")
    version("1.1")
    variant("compat", default=True)
    variant("json", default=False)
    variant("pic", default=False)
    can_splice("foo@1.0", when="@1.1")
    can_splice("bar@1.0", when="@1.0+compat")
    can_splice("baz@1.0+compat", when="@1.0+compat", match_variants="*")
    can_splice("quux@1.0", when="@1.1~compat", match_variants="json")
```
Explanations of the uses of each directive:
- `can_splice("foo@1.0", when="@1.1")`: If `foo@1.0` is the dependency of an already installed spec and `foo@1.1` could be a valid dependency for the parent spec, then `foo@1.1` can be spliced in for `foo@1.0` in the parent spec.
- `can_splice("bar@1.0", when="@1.0+compat")`: If `bar@1.0` is the dependency of an already installed spec and `foo@1.0+compat` could be a valid dependency for the parent spec, then `foo@1.0+compat` can be spliced in for `bar@1.0+compat` in the parent spec
- `can_splice("baz@1.0", when="@1.0+compat", match_variants="*")`: If `baz@1.0+compat` is the dependency of an already installed spec and `foo@1.0+compat` could be a valid dependency for the parent spec, then `foo@1.0+compat` can be spliced in for `baz@1.0+compat` in the parent spec, provided that they have the same value for all other variants (regardless of what those values are).
- `can_splice("quux@1.0", when=@1.1~compat", match_variants="json")`:If `quux@1.0` is the dependency of an already installed spec and `foo@1.1~compat` could be a valid dependency for the parent spec, then `foo@1.0~compat` can be spliced in for `quux@1.0` in the parent spec, provided that they have the same value for their `json` variant.
2. Augmenting the solver to synthesize splices
### Changes to the hash encoding in `asp.py`
Previously, when including concrete specs in the solve, they would have the following form:
installed_hash("foo", "xxxyyy")
imposed_constraint("xxxyyy", "foo", "attr1", ...)
imposed_constraint("xxxyyy", "foo", "attr2", ...)
% etc.
Concrete specs now have the following form:
installed_hash("foo", "xxxyyy")
hash_attr("xxxyyy", "foo", "attr1", ...)
hash_attr("xxxyyy", "foo", "attr2", ...)
This transformation allows us to control which constraints are imposed when we select a hash, to facilitate the splicing of dependencies.
2.1 Compiling `can_splice` directives in `asp.py`
Consider the concrete spec:
```
foo@2.72%gcc@11.4 arch=linux-ubuntu22.04-icelake build_system=autotools ^bar ...
```
It will emit the following facts for reuse (below is a subset)
installed_hash("foo", "xxxyyy")
hash_attr("xxxyyy", "hash", "foo", "xxxyyy")
hash_attr("xxxyyy", "version", "foo", "2.72")
hash_attr("xxxyyy", "node_os", "ubuntu22.04")
hash_attr("xxxyyy", "hash", "bar", "zzzqqq")
hash_attr("xxxyyy", "depends_on", "foo", "bar", "link")
Rules that derive abi_splice_conditions_hold will be generated from
use of the `can_splice` directive. They will have the following form:
can_splice("foo@1.0.0+a", when="@1.0.1+a", match_variants=["b"]) --->
abi_splice_conditions_hold(0, node(SID, "foo"), "foo", BaseHash) :-
installed_hash("foo", BaseHash),
attr("node", node(SID, SpliceName)),
attr("node_version_satisfies", node(SID, "foo"), "1.0.1"),
hash_attr("hash", "node_version_satisfies", "foo", "1.0.1"),
attr("variant_value", node(SID, "foo"), "a", "True"),
hash_attr("hash", "variant_value", "foo", "a", "True"),
attr("variant_value", node(SID, "foo"), "b", VariVar0),
hash_attr("hash", "variant_value", "foo", "b", VariVar0).
2.2 Synthesizing splices in `concretize.lp` and `splices.lp`
The ASP solver generates "splice_at_hash" attrs to indicate that a particular node has a splice in one of its immediate dependencies.
Splices can be introduced in the dependencies of concrete specs when `splices.lp` is conditionally loaded (based on the config option `concretizer:splice:True`).
2.3 Constructing spliced specs in `asp.py`
The method `SpecBuilder._resolve_splices` implements a top-down, memoized form of hybrid splicing. This is an optimization over the more general `Spec.splice`, since the solver gives a global view of exactly which specs can be shared, to ensure the minimal number of splicing operations.
Misc changes to facilitate configuration and benchmarking
- Added the method `Solver.solve_with_stats` to expose timers from the public interface for easier benchmarking
- Added the boolean config option `concretizer:splice` to conditionally load splicing behavior
Co-authored-by: Greg Becker <becker33@llnl.gov>
We added unification semantics for parsing specs from the CLI, but there are a couple
of special cases in which we can avoid calls to the concretizer for speed when the
specs can all be resolved by lookups.
- [x] special case 1: solving a single spec
- [x] special case 2: all specs are either concrete (come from a file) or have an abstract
hash. In this case, if concretizer:unify:true, we need an additional check to confirm
the specs are compatible.
- [x] add a parameterized test for unifying on the CI
---------
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* libunwind: Add 1.7.2 and 1.8.1
* libunwind: Remove deprecated 1.1 version
* libunwind: Add newer *-stable branches: Remove 1.5-stable branch as well as cleanup.
* libunwind: Use GitHub url for all versions
* libunwind: Add conflict for PPC and 1.8.*
* libunwind: Add conflict for aarch64 and 1.8:
Build fails with
```
aarch64/Gos-linux.c: In function '_ULaarch64_local_resume':
aarch64/Gos-linux.c:147:1: error: x29 cannot be used in asm here
}
^
aarch64/Gos-linux.c:147:1: error: x29 cannot be used in asm here
make[2]: *** [Makefile:4795: aarch64/Los-linux.lo] Error 1
```
* added updated versions
* added pyhmmer
* updated infernal
* fix blast-plus for apple-clang
* fix py-biopython build on apple-clang
* remove erroneous biopython dep: build issue is with python 3.8, not biopython
* deepsig python 3.9: expanding unnecessary python restrictions
* add pyrodigal
* fix unnecessarily strict diamond version
* builds and updates: blast-plus indexing broken, still need to test db download and bakta pipeline
* builds and runs
* revert blast-plus changes: remove my personal hacks to get blast-plus to build
* fix the build error during compilation of rocdecode, which was dependent on the libva-devel package
* address review comment
* address review changes; commit the changes
* Add two_level_namespace variant (default is disabled) for MacOS to enable building
executables and libraries with two level namespace enabled.
* Addressed reviewer comments.
* Moved two_level_namespace variant ahead of the patch that uses that variant to
get concretize to work properly.
* Removed extra print statements
* soqt: Add SoQt package
The geomodel package needs this if visualization is turned on.
* make qt versions explicit
* use virtual dependency for qt
* pr feedback
Remove myself as maintainer
Remove v1.6.0
Remove unused qt variant
This addresses part [1] of #46345.
#44713 introduced a bug where all non-spec query parameters like date
ranges, -x, etc. were ignored when an env was active.
This fixes that issue and adds tests for it.
---------
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
`spack mirror add` and `set` now have flags `--oci-username-variable`, `--oci-password-variable`, `--s3-access-key-id-variable`, `--s3-access-key-secret-variable`, `--s3-access-token-variable`, which allow users to specify an environment variable in which a username or password is stored.
Storing plain text passwords in config files is considered deprecated.
The schema for mirrors.yaml has changed, notably the `access_pair` list is generally replaced with a dictionary of `{id: ..., secret_variable: ...}` or `{id_variable: ..., secret_variable: ...}`.
The py-oracledb package only has a single outdated version available in its recipe. This PR adds a much broader range of versions and their corresponding checksums.
* add more versions of py-oracledb
* update py-oracledb recipe
* add py-cython version dependencies
* tweak py-cython version dependencies
* remove older versions of py-oracledb
This filters any selected executable ending with `-ocl` from the list of executables being probed as candidates for external `llvm` installations.
I couldn't reproduce the entire issue, but with a simple script:
```
#!/bin/bash
touch foo.o
echo "clang version 10.0.0-4ubuntu1 "
echo "Target: x86_64-pc-linux-gnu"
echo "Thread model: posix"
echo "InstalledDir: /usr/bin"
exit 0
```
I noticed the executable was still probed:
```
$ spack -d compiler find /tmp/ocl
[ ... ]
==> [2024-11-11-08:38:41.933618] '/tmp/ocl/bin/clang-ocl' '--version'
```
and `foo.o` was left in the working directory. With this change, instead the executable is filtered out of the list on which we run `--version`, so `clang-ocl --version` is not run by Spack.
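For illustration, a minimal sketch of this kind of filtering (the variable names are hypothetical, not Spack's actual code):
```python
# Hypothetical sketch: drop "-ocl" wrappers before any candidate is
# probed with `--version`; names here are illustrative only.
candidates = ["/usr/bin/clang", "/tmp/ocl/bin/clang-ocl"]
candidates = [exe for exe in candidates if not exe.endswith("-ocl")]
assert candidates == ["/usr/bin/clang"]
```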
- [x] Get rid of a call to `parser.quote_if_needed()` during solver setup, which
introduces a circular import and also isn't necessary.
- [x] Rename `spack.variant.Value` to `spack.variant.ConditionalValue`, as it is *only*
used for conditional values. This makes it much easier to understand some of the
logic for variant definitions.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
`conditional()`, which defines conditional variant values, and the other ways to declare
variant values should probably be in a layer above `spack.variant`. This does the simple
thing and moves *just* `conditional()` to `spack.directives` to avoid a circular import.
We can revisit the public variant interface later, when we split packages from core.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* lua: add +pcfile support for @5.4: versions, without using a version-dependent patch
* lua: always generate pcfile, remove +pcfile variant from all packages
* lua: minor fixes
* rpm: minor fix
* Add 5.030 and remove the requirement to patch verilator; the problem has been fixed in this rev
* Update var/spack/repos/builtin/packages/verilator/package.py
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
---------
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* benchmark: enable shared libraries by default
The existing behaviour of Google Benchmark yields static libraries, which
are of little use for most projects. This PR changes the spec to use
dynamic libraries instead.
* Add shared variant
* librdkafka: added missing dependency on curl
This PR adds a missing dependency on curl in librdkafka.
* librdkafka: added dependency on openssl and zlib
* Added support for Codeplay AMD Plugin for Intel OneAPI Compilers
* [@spackbot] updating style on behalf of kaanolgu
* Adding 2025.0.0
* removed HOME and XDG_RUNTIME_DIR
* [@spackbot] updating style on behalf of kaanolgu
---------
Co-authored-by: Kaan Olgu <kaan.olgu@bristol.ac.uk>
No ROOT `builtin` should ever be set to true if possible, because that
builds a bundled copy of an existing library that Spack may not know about.
Furthermore, using `builtin_glew` forces the package to be on, even when
not building x/gl/aqua on macOS. This causes build failures.
Caused by https://github.com/spack/spack/pull/45632#issuecomment-2276311748 .
Currently, if a package has a dependency from another repository and patches it,
generation of the patch cache will fail. Concretization succeeds if a fixed patch
cache is in place.
- [x] don't assume that patched dependencies are in the same repo when indexing
- [x] add some test fixtures to support multi-repo tests.
---------
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* spack.compiler: cache output
* compute libc from the dynamic linker at most once per spack process
* wrap compiler cache entry in class, add type hints
* test compiler caching
* ensure tests do not populate user cache, and fix 2 tests
* avoid recursion: cache lookup -> compute key -> cflags -> real_version -> cache lookup
* allow compiler execution in test that depends on get_real_version
If a package `foo` doesn't implement `libs`, the default was to search recursively for `libfoo` whenever asking for `spec[foo].libs` (this also happens automatically if a package includes `foo` as a link dependency).
This can lead to some strange behavior:
1. A package that is normally used as a build dependency (e.g. `cmake` at one point) is referenced like
`depends_on(cmake)` which leads to a fully-recursive search for `libcmake` (this can take
"forever" when CMake is registered as an external with a prefix like `/usr`, particularly on NFS mounts).
2. A similar hang can occur if a package is registered as an external with an incorrect prefix
- [x] Update the default library search to stop after a maximum depth (by default, search
the root prefix and each directory in it, but no lower); see the sketch below
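As a hedged sketch (the package and library names are made up), this is roughly how a package can sidestep the recursive default by implementing `libs` explicitly:
```python
from spack.package import *  # standard preamble in Spack package files

class Foo(Package):
    """Hypothetical package, used only to illustrate the point above."""

    @property
    def libs(self):
        # explicit, shallow search instead of the fully recursive default
        return find_libraries("libfoo", root=self.prefix, recursive=False)
```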
The following is a list of known changes to `find` compared to `develop`:
1. Matching directories are no longer returned -- `find` consistently only finds non-dirs,
even at `max_depth`
2. Symlinked directories are followed (needed to support max_depth)
3. `find(..., "dir/*.txt")` is allowed, for finding files inside certain dirs. These "complex"
patterns are delegated to `glob`, like they are on `develop`.
4. `root` and `files` arguments both support generic sequences, and `root`
allows both `str` and `path` types. This allows us to specify multiple entry points to `find`.
---------
Co-authored-by: Peter Scheibel <scheibel1@llnl.gov>
This PR adds a sub-command to `spack env` (`track`) which allows users to add/link
anonymous environments into their installation as named environments. This allows
users to more easily track their installed packages and the environments they're
dependencies of. For example, with the addition of #41731 it's now easier to remove
all packages not required by any environment with:
```
spack gc -bE
```
#### Usage
```
spack env track /path/to/env
==> Linked environment in /path/to/env
==> You can activate this environment with:
==> spack env activate env
```
By default `track /path/to/env` will use the last directory in the path as the name of
the environment. However users may customize the name of the linked environment
with `-n | --name`, as shown below.
```
spack env track /path/to/env --name foo
==> Tracking environment in /path/to/env
==> You can activate this environment with:
==> spack env activate foo
```
When removing a linked environment, Spack will remove the link to the environment
but will keep the structure of the environment within the directory. This will allow
users to remove a linked environment from their installation without deleting it from
a shared repository.
There is a `spack env untrack` command that can be used to *only* untrack a tracked
environment -- it will fail if it is used on a managed environment. Users can also use
`spack env remove` to untrack an environment.
This allows users to continue to share environments in git repositories while also having
the dependencies of those environments be remembered by Spack.
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
You can now provide multiple roots to a single `find()` call and all of
them will be searched. The roots can overlap (e.g. can be parents of one
another).
This also adds a library function for taking a set of regular expression
patterns and creating a single OR expression (and that library function
is used in `find` to improve its performance).
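A minimal sketch of that OR-combination idea (the helper name is made up; Spack's actual library function may differ in detail):
```python
import re

def or_pattern(patterns):
    # combine several regexes into one alternation; non-capturing groups
    # keep each sub-pattern self-contained
    return re.compile("|".join(f"(?:{p})" for p in patterns))

matcher = or_pattern([r"libfoo\.so(\.\d+)*$", r"libbar\.so(\.\d+)*$"])
assert matcher.match("libfoo.so.1.2")
```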
Set command line scopes last in _main, so they are higher scopes
Restore the global configuration in a spawned process by inspecting
the result of ctx.get_start_method()
Add the ability to pass a mp.context to PackageInstallContext.
Add shell-tests to check overriding the configuration:
- Using both -c and -C from command line
- With and without an environment active
* edm4hep: Add json variant for newer versions
Explicit option has been added to EDM4hep so we now expose it via a
variant as well. We keep the old behavior where we unconditionally
depended on nlohmann-json and implicitly built JSON support if we could
detect it at the cmake stage
* Fix condition statement in when clause
* Use open version range to avoid fixing to single version
---------
Co-authored-by: Valentin Volkl <valentin.volkl@cern.ch>
Variants can now be propagated from a dependent package to (transitive) dependencies,
even if the source or transitive dependencies do not have the propagated variant.
For example, here `zlib` doesn't have a `guile` variant, but `gmake` does:
```
$ spack spec zlib++guile
- zlib@1.3%gcc@12.2.0+optimize+pic+shared build_system=makefile arch=linux-rhel8-broadwell
- ^gcc-runtime@12.2.0%gcc@12.2.0 build_system=generic arch=linux-rhel8-broadwell
- ^gmake@4.4.1%gcc@12.2.0+guile build_system=generic arch=linux-rhel8-broadwell
```
Adding this property has some strange ramifications for `satisfies()`. In particular:
* The abstract specs `pkg++variant` and `pkg+variant` do not intersect, because `+variant`
implies that `pkg` *has* the variant, but `++variant` does not.
* This means that `spec.satisfies("++foo")` is `True` if:
* for concrete specs: `spec` and its dependencies all have `foo` set if it exists
* for abstract specs: no dependency of `spec` has `~foo` (i.e. no dependency contradicts `++foo`).
* This also means that `Spec("++foo").satisfies("+foo")` is `False` -- we only know after concretization.
The `satisfies()` semantics may be surprising, but this is the cost of introducing non-subset
semantics (which are more useful than proper subsets here).
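As an illustration of these semantics (a hedged sketch using Spack's `Spec` methods; the behavior shown is as described above):
```python
from spack.spec import Spec

# ++foo and +foo do not intersect: +foo implies pkg *has* the variant
assert not Spec("pkg++foo").intersects(Spec("pkg+foo"))

# and an abstract ++foo does not satisfy +foo -- we only know after
# concretization
assert not Spec("pkg++foo").satisfies("pkg+foo")
```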
- [x] Change checks for variants
- [x] Resolve conflicts
- [x] Add tests
- [x] Add documentation
---------
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* pythia8: Include patch for C++20 / Clang
Pythia8 vendors some FJCore sources that are as of Pythia8 312
incompatible with C++20 on clang. This adds a patch that makes it
compatible in these scenarios
* Add issue link
* rename setup_cxxstd function
* Remove an accidental printout
* Apply patch to all compilers, add lower bound
* `find(..., max_depth=...)` can be used to control how many directories at most to descend into below the starting point
* `find` now enters every unique (symlinked) directory once at the lowest depth
* `find` is now repeatable: it traverses the directory tree in a deterministic order
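A hedged usage sketch of the behavior described above (assuming `find` from `llnl.util.filesystem` with the new `max_depth` keyword):
```python
from llnl.util.filesystem import find

# descend at most two levels below each root; roots may overlap, and
# each unique directory is entered once, in a deterministic order
headers = find(["/opt/view", "/opt/view/include"], "*.h", max_depth=2)
```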
In the pure `ld` case, we weren't actually parsing `RPATH` arguments separately as we
do for `ccld`. Fix this by adding *another* nested case statement for raw `RPATH`
parsing.
There are now 3 places where we deal with `-rpath` and friends, but I don't see a great
way to unify them, as `-Wl,`, `-Xlinker`, and raw `-rpath` arguments are all ever so
slightly different.
Also, this fixes the ordering of assertions to make `pytest` diffs more intelligible.
The meaning of `+` and `-` in diffs changed in `pytest` 6.0, and the "preferred" order
for assertions became `assert actual == expected` instead of the other way around.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
`cc` divides most paths up into system paths, spack managed paths, and other paths.
This gets really repetitive and makes the code hard to read. Simplify the script
by adding some functions to do most of the redundant work for us.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
This PR has two small contributions:
- It adds another phase to the timer for concretization, "construct_specs", to actually see the time the concretizer spends interpreting the `clingo` output to build the Python object for a concretized spec.
- It adds the method `Solver.solve_with_stats` to expose the timers that were already in the concretizer to the public solver API. `Solver.solve` just becomes a special case of `Solver.solve_with_stats` that throws away the timing output (which is what it was already doing).
These changes will make it easier to benchmark concretizer performance and provide a more complete picture of the time spent in the concretizer by including the time spent interpreting clingo output.
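A hedged sketch of how this might be used (the exact argument and return shape of `solve_with_stats` is an assumption here, not confirmed by this changelog):
```python
import spack.solver.asp as asp
from spack.spec import Spec

solver = asp.Solver()
# assumption: solve_with_stats mirrors solve()'s arguments while also
# exposing the timer, whose phases now include "construct_specs"
output = solver.solve_with_stats([Spec("zlib")])
```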
Currently, the `geant4-data` spec creates symlinks to all of its
dependencies, and it does so by globbing their `share/` directories.
This works very well for the way Spack installs these, but it doesn't
work for anybody wanting to use e.g. the Geant4 data on CVMFS. See pull
request #47298. This commit changes the way the `geant4-data` spec
works. It no longer blindly globs directories and makes symlinks, but it
asks its dependencies specifically for the name of their data directory.
This should allow us to use the CVMFS installations as externals
via Spack.
* zabbix: add v5.0.44, v6.0.34, v7.0.4 (fix CVEs)
* [@spackbot] updating style on behalf of wdconinc
* zabbix: use f-string
* zabbix: fix f-string quoting
* zabbix: use mysql-client
* @wdconic, this fixes the mysql client virtual for me
---------
Co-authored-by: wdconinc <wdconinc@users.noreply.github.com>
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
The idea is to go from most to least used: backward compat -> forward compat -> pinning on major or major.minor version -> pinning specific, concrete versions.
Further, the following
```python
# backward compatibility with Python
depends_on("python@3.8:")
depends_on("python@3.9:", when="@1.2:")
depends_on("python@3.10:", when="@1.4:")
# forward compatibility with Python
depends_on("python@:3.12", when="@:1.10")
depends_on("python@:3.13", when="@:1.12")
depends_on("python@:3.14")
```
is better than disjoint `when` ranges, which cause repetition of the rules on dependencies and require frequent editing of existing lines after new releases are made:
```python
depends_on("python@3.8:3.12", when="@:1.1")
depends_on("python@3.9:3.12", when="@1.2:1.3")
depends_on("python@3.10:3.12", when="@1.4:1.10")
depends_on("python@3.10:3.13", when="@1.11:1.12")
depends_on("python@3.10:3.14", when="@1.13:")
When compiled without MPI support, a fake mpi header is autogenerated during configure/build. The header is missing one symbol in version 1.9.5. The problem has since been fixed upstream.
A similar problem also occurs for 1.9.4. Unfortunately, the patch does not work for 1.9.4, and I also don't know if further fixes would be required for 1.9.4. Therefore, only the newest version, 1.9.5, is patched.
Update the HDF5 version for the develop branch to develop-2.0 to match the new
version in the develop branch.
Remove develop-1.16, as it has been decided to make the next release HDF5 2.0.0.
This PR changes the semantics of == for specs so that:
hdf5++mpi == hdf5+mpi
won't hold true anymore. It also changes the constrain semantics, so that a
non-propagating variant always overrides a propagating variant. This means:
(hdf5++mpi).constrain(hdf5+mpi) -> hdf5+mpi
Before, we had a very weird semantic that was supposed to be tested by unit-tests:
(libelf++debug).constrain(libelf+debug+foo) -> libelf++debug++foo
This semantic has been dropped, as it was never really tested due to the == bug.
According to https://github.com/root-project/root/issues/7160, if
`-Dcocoa=ON` is set, the build must also be configured with `-Dopengl=ON`, since
otherwise the build encounters missing includes. This is/was a silent
failure in ROOT CMake, but I believe it has been made an explicit failure
some time this year.
Currently, the schema reads:
```yaml
from:
- type:
    environment: path_or_name
```
but this can't be extended easily to other types, e.g. to buildcaches,
without duplicating the extension keys. Use instead:
```yaml
from:
- type: environment
  path: path_or_name
```
Currently, the `concretizer:unify:` config option only affects environments.
With this PR, it now affects any group of specs given to a command using the `parse_specs(*, concretize=True)` interface.
- [x] implementation in `parse_specs`
- [x] tests
- [x] ensure all commands that accept multiple specs and concretize use `parse_specs` interface
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* don't concretize in CI if changed packages are not in stacks
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* Generate noop job when no specs to rebuild due to untouched pruning
* Add test to verify skipping generate creates a noop job
* Changed debug for early exit
---------
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
This changes `Spec` serialization to include information about propagation for abstract specs.
This was previously not included in the JSON representation for abstract specs, and couldn't be
stored.
Now, there is a separate `propagate` dictionary alongside the `parameters` dictionary. This isn't
beautiful, but when we bump the spec version for Spack `v0.24`, we can clean up this and other
aspects of the schema.
* geant4: make downloading data dependency optional
This PR makes downloading the data repository of the Geant4 spec
optional by adding a sticky, default-enabled variant which controls the
dependency on `geant4-data`. This should not change the default
behaviour, but should allow users to choose whether or not they want the
data directory.
* Add comment
* Update env variable
* Generic docs
* Buildable false
- Merged sycl2020usm and sycl2020acc into sycl2020, and introduced the submodel=acc/usm variant
- Renamed implementation to option
- Renamed impl (fortran implementation options) to foption
- sycl_compiler_implementation and thrust_backend
- Merged stddata, stdindices, stdranges into a single std model with a std_submodel variant
- Made std_use_tbb boolean; also changed the model filtering algorithm to make sure that it only picks model names
- Modified comments to clear up confusion with the cuda_arch cc_ and sm_ prefix appends
- Deleted a duplicate cuda_arch definition from +omp
- Moved CMAKE_CXX_COMPILER to be a shared arg between all models except tbb and thrust
- Replaced sys.exit with InstallError and created a dictionary to simplify things and eliminate excess code lines doing the same checks
- Replaced the -mcpu flags with -march, since -mcpu is deprecated now
- Replaced platform.machine with spec.target
- Removed raja_backend, introduced openmp_flag, removed -march flags, cleared debugging print(), removed excess `if ... in self.spec.variants` checks
- [FIX] Issue where Thrust couldn't find the correct compiler (it requires nvcc)
- [FIX] Fortran unsupported check to match the full string
- [FIX] RAJA cuda_arch to use sm_ rather than cc_
- The dir= option is no longer needed for kokkos
- [omp] Added clang support for nvidia offload
- Added SYCL2020 offload to nvidia GPUs
- Changed model dependencies to be languages rather than build systems
- Removed hardcoded arch flags, replacing them with archspec
- Removed cpu_arch from the acc model
---------
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Greg Becker <becker33@llnl.gov>
Co-authored-by: Kaan Olgu <kaan.olgu@bristol.ac.uk>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Introduce support for ArmPL and ACfL 24.10
This patch introduces the possibility of installing armpl-gcc
and acfl 24.10 through spack. It also addresses one issue observed
after PR https://github.com/spack/spack/pull/46594
* Fix Github action issues.
- Remove default URL
- Reinstate default OS for ACfL to RHEL.
Fixes an issue reported where `spack env depfile` + `make -j` would
non-deterministically refuse to mark all environment roots explicit.
`update_explicit` had the pattern
```python
rec = self._data[key]
with self.write_transaction():
rec.explicit = explicit
```
but `write_transaction` may reinitialize `self._data`, meaning that
mutating `rec` won't mutate `self._data`, and the changes won't be
persisted.
Instead, use `mark`, which has a correct implementation.
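For illustration, a self-contained toy version of the lost-update pattern described above (names mimic the description, not Spack's actual `Database`):
```python
import contextlib

class ToyDatabase:
    """Toy stand-in for the real database, to show the lost update."""

    def __init__(self):
        self._data = {"key": {"explicit": False}}

    def write_transaction(self):
        # simulate the reinitialization a real transaction may perform
        self._data = {"key": {"explicit": False}}
        return contextlib.nullcontext()

db = ToyDatabase()
rec = db._data["key"]          # reference taken *before* the transaction
with db.write_transaction():   # the transaction replaces self._data
    rec["explicit"] = True     # mutates the stale object only
assert db._data["key"]["explicit"] is False  # the update was lost
```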
Also avoid the essentially incorrect early return in `update_explicit`,
which is a pattern I don't think belongs in database.py: it branches on
possibly stale data to conclude there is nothing to change, but in reality
it would require a write transaction to know that for a fact, and that would
defeat the purpose. So, leave this optimization to the call site.
The already concrete specs in an environment are now among the reusable specs for the concretizer.
This includes concrete specs from all include_concrete environments.
In addition to this change to the default reuse, `environment` is added as a reuse type for
the concretizer config. This allows users to specify:
```yaml
spack:
  concretizer:
    # Reuse from this environment (including included concrete) but not elsewhere
    reuse:
      from:
      - type: environment
    # or reuse from only my_env included environment
    reuse:
      from:
      - type:
          environment: my_env
    # or reuse from everywhere
    reuse: true
```
If reuse is specified from a specific environment, only specs from that environment will be reused.
If the reused environment is not specified via include_concrete, the concrete specs will be retrieved
at concretization time to be reused.
Signed-off-by: Ryan Krattiger <ryan.krattiger@kitware.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Currently the order in which hooks are run is arbitrary.
This can be fixed by `sorted(list_modules(...))`, but I think it is much
clearer to just have a static list.
Hooks are not extensible other than modifying Spack code, which
means it's unlikely people maintain custom hooks since they'd have
to fork Spack. And if they fork Spack, they might as well add an entry
to the list when they're continuously rebasing.
The idea is that `spack -e env add ./concrete-spec.json` would list the
full hash in the specs, so that (a) it's not ambiguous and (b) it could
in principle result in constant time lookup instead of a linear time
substring match in large build caches.
* r-*: updates to latest versions
* r-*: add new dependencies
* r-proj: fix docstring line length
* r-list: add homepage
* r-*: add more dependencies
* r-rmpi: use virtual dependencies, conflict openmpi@5:
* r-cairo: require cairo +png; +pdf for some versions; cairo +fc when +ft
* r-proj: set LD_LIBRARY_PATH since rpath not respected
* Added packages for the intel-2025.0.0 release
* fix style
* pin mkl to 2024.2.2
until e4s can upgrade to 2025 compiler and ginkgo compatibility issue can be resolved.
---------
Co-authored-by: Robert Cohn <rscohn2@gmail.com>
* Add a descriptor to have a class level constant
This descriptor helps intercept places where we set a value on instances.
It does not really behave like "const" in C-like languages, but is the
simplest implementation that might still be useful.
* Add a descriptor to deprecate properties/attributes of an object
This descriptor is used as a base class. Derived classes may implement a
factory to return an adaptor to the attribute being deprecated. The
descriptor can either warn, or raise an error, when usage of the deprecated
attribute is intercepted.
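A minimal sketch of both descriptor ideas (implementation details here are illustrative, not the actual code):
```python
import warnings

class ClassLevelConstant:
    """Return a fixed value and intercept assignment on instances."""

    def __init__(self, value):
        self.value = value

    def __get__(self, instance, owner):
        return self.value

    def __set__(self, instance, value):
        raise AttributeError("this attribute is a class-level constant")

class DeprecatedAttribute:
    """Warn, or raise, when a deprecated attribute is accessed."""

    def __init__(self, value, error=False):
        self.value, self.error = value, error

    def __get__(self, instance, owner):
        if self.error:
            raise AttributeError("this attribute is deprecated")
        warnings.warn("this attribute is deprecated", DeprecationWarning)
        return self.value
```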
---------
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
This adds the current latest version of py-uv. While working on this, I also
found that uv (including older versions) has a build dependency on cmake which
was not specified in the package, so I added it as a dependency.
I also found that on my machine, the build process had trouble finding cmake,
so I set the path to it explicitly as an environment variable.
* py-jupyter-core: set environment variables for extensions
* Changes committed by gh-spack-pr
---------
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
Currently, `spack solve` has different spec selection semantics than `spack spec`.
`spack solve` currently does not allow specifying a single spec when an environment is active.
This PR modifies `spack solve` to inherit the interface from `spack spec`, and to use
the same spec selection logic. This will allow for better use of `spack solve --show opt`
for debugging.
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* gdk-pixbuf: delete old versions, make mesonpackage
The goal is to get rid of the `std_meson_args` global, but clean up the package
while at it.
`setup_dependent_run_environment` was removed because it did not depend
on the dependent spec, and would result in repeated env variable
changes.
* atk: idem
* fix a dependent
* ML CI: Linux aarch64
* Add config files
* No aarch64 tag
* Don't specify image
* Use amazonlinux image
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
* Update and require
* GCC is too old
* Fix some builds
* xgboost doesn't support old GCC + cuda
* Run on newer Ubuntu
* Remove mxnet
* Try aarch64 range
* Use main branch
* Conflict applies to all targets
* cuda only required when +cuda
* Use tagged version
* Comment out tf-estimator
* Add ROCm, use newer Ubuntu
* Remove ROCm
---------
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
Originally, concretization failed if the splice config pointed to an invalid replacement.
This PR defers the check until we know the splice is needed, so that irrelevant splices
with bad config cannot stop concretization.
While I was at it, I improved an error message from an assert to a ValueError.
* Normalize Spack Win entrypoints
Currently, Spack has multiple entrypoints on Windows that, in addition to
differing from *nix implementations, differ from shell to shell on
Windows. This is a bit confusing for new users and in general
unnecessary.
This PR adds a normal setup script for the batch shell while preserving
the previous "click from file explorer for spack shell" behavior.
Additionally adds a shell title to both powershell and cmd letting users
know this is a Spack shell
* remove doskeys
In #44588 we added logic to suppress deprecation warnings for the
Intel classic compilers. This depended on matching against
* The compiler names (looking for icc, icpc, ifort)
* The compiler version
When using an Intel compiler with fortran wrappers, the first check
always fails. To support using the fortran wrappers (in combination
with the classic Intel compilers), we remove the first check and
suppress if just the version matches. This works because:
* The newer compilers like icx can handle (ignore) the flags that
suppress deprecation warnings
* The Cray wrappers pass the underlying compiler version (e.g. they
report what icc would report)
* new openjdk variant to symlink system certificate
* Update var/spack/repos/builtin/packages/openjdk/package.py
Co-authored-by: Alec Scott <hi@alecbcs.com>
---------
Co-authored-by: Alec Scott <hi@alecbcs.com>
Connection objects are Python version, platform, and multiprocessing
start method independent, so it is better to use those than a mix of plain
file descriptors and inadequate guesses in the child process about whether
it was forked or not.
This also allows us to delete the now redundant MultiProcessFd class,
hopefully making things a bit easier to follow.
This allows the following
```python
cache.init_entry("my/cache")
with cache.read_transaction("my/cache") as f:
data = f.read() if f is not None else None
```
mirroring `write_transaction`, which returns a tuple `(old, new)` where
`old` is `None` if the cache file did not exist yet.
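For comparison, a hedged sketch of that existing `write_transaction` contract (reusing the `cache` object from the snippet above):
```python
with cache.write_transaction("my/cache") as (old, new):
    # `old` is None on first write, exactly as described above
    data = old.read() if old is not None else ""
    new.write(data + "appended line\n")
```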
The alternative that requires less defensive programming on the call
site would be to create the "old" file upon first read, but I did not
want to think about how to safely atomically create the file, and it's
not unthinkable that an empty file is an invalid format (for instance
the call site may expect a json file, which requires at least {} bytes).
This PR is in response to a question in the `environments` slack channel (https://spackpm.slack.com/archives/CMHK7MF51/p1729200068557219) about inadequate CLI help/documentation for one specific subcommand.
This PR uses the approach I took for the descriptions and help for `spack test` subcommands. Namely, I use the first line of the relevant docstring as the description, which is shown per subcommand in `spack env -h`, and the entire docstring as the help. I then added help text where it seemed appropriate. I also tweaked argument docstrings to tighten them up, make them consistent with similar arguments elsewhere in the command, and elaborate where it seemed important. (The only subcommand I didn't touch is `loads`.)
For example, before:
```
$ spack env update -h
usage: spack env update [-hy] env
positional arguments:
env name or directory of the environment to activate
optional arguments:
-h, --help show this help message and exit
-y, --yes-to-all assume "yes" is the answer to every confirmation request
```
After the changes in this PR:
```
$ spack env update -h
usage: spack env update [-hy] env
update the environment manifest to the latest schema format
update the environment to the latest schema format, which may not be
readable by older versions of spack
a backup copy of the manifest is retained in case there is a need to revert
this operation
positional arguments:
env name or directory of the environment
optional arguments:
-h, --help show this help message and exit
-y, --yes-to-all assume "yes" is the answer to every confirmation request
```
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Commit aa0825d642 accidentally added a semicolon
to the ANSI escape sequence even if the color code was `None` or unknown, breaking the
bold, uncolored font-face. This PR restores the old behavior.
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Add type hints to all query* methods
* Inline docstrings
* Change defaults from `any` to `None` so they can be type hinted in old Python
* Pre-filter on given hashes instead of iterating over all db specs
* Fix a bug where the `--origin` option of uninstall had no effect
* Fix a bug where query args were not applied when searching by concrete spec
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Fixes a change in behavior/bug in
70412612c7, where partial environment
installs would mark the selected spec as explicitly installed, even if
it was not a root of the environment.
The desired behavior is that roots are, by definition, the specs to be
explicitly installed. The specs on the `spack -e ... install x`
command line are just filters for partial installs, so leave them
implicitly installed if they aren't roots.
ci: Remove deprecated logic from the ci module
Remove the following from the ci module, schema, and tests:
- deprecated ci stack and handling of old ci config
- deprecated mirror handling logic
- support for artifacts buildcache
- support for temporary storage url
* ParaView: Explicitly set the ENABLE_MPI on/off
* Disallow MPI in the DAG when ~mpi
* @5.13 uses 'remove_children', use pugixml@1.11:, See #47098
* cloud_pipelines/stacks/data-vis-sdk: paraview +raytracing: add +adios2 +fides
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
When building ParaView with ADIOS2 and allowing VTK-m to be
built, also build Fides. This reads ADIOS2 files with a
particular JSON schema, but it requires VTK-m to read data.
* Libmng: Restore Autotools system
CMake, when building its Qt gui, depends on Qt, which in turn depends on libmng, a CMake-based build. To avoid this obvious cyclic dependency, we re-introduce libmng's autotools build into Spack and require that, when Qt is built as a CMake dependency, libmng is built with autotools.
* Ensure autotools constraint is limited to non-Windows
* refactor qt-libmng relation from CMake
This commit removes all the uses of spec.compiler that
can be easily substituted by a more idiomatic approach,
e.g. using spec.satisfies or directives.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* py-sphinxcontrib-spelling: new package
* Dependency enchant: Add missing dep on pkgconfig
---------
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
Both `multiprocessing.connection.Connection.__del__` and `io.IOBase.__del__` called `os.close` on the same file descriptor. As of Python 3.13, this is an explicit warning. Ensure we close once by using `os.fdopen(..., closefd=False)`.
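A runnable sketch of the close-exactly-once idea (standard library only; not Spack's actual code):
```python
import os

r, w = os.pipe()
# wrap the descriptor without transferring ownership: closing the
# stream will not close `w` a second time
stream = os.fdopen(w, "w", closefd=False)
stream.close()   # does not close the fd
os.close(w)      # single, explicit close by the owner
os.close(r)
```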
* stacks: add a stack for devtools on darwin
After getting this whole mess building on darwin, let's keep it that
way, and maybe make it so we have some non-ML darwin binaries in spack
as well.
* reuse: false for devtools
* dtc: fix darwin dylib name and id
On mac the convention is `lib<name>.<num>.dylib`, while the makefile
creates a num-suffixed one by default. The id in the file is also a
local name rather than rewritten to the full path; this fixes both
problems.
* node-js: make whereis more deterministic
* relocation(darwin): catch Mach-O load failure
The MachO library can throw an exception rather than return no headers;
this happened for an elf file in the test data of go-bootstrap. Try
catching the exception and moving on for now. We may also need to look
into why we're trying to rewrite an elf file.
* qemu: add darwin flags to clear out warnings
There's a build failure for qemu in CI, but it's invisible because of
the immense mass of warning output. Explicitly specify the target macos
version and remove the extraneous unknown-warning-option flag.
* dtc: libyaml is also a link dependency
libyaml is required at runtime to run the dtc binary, lack of it caused
the ci for qemu to fail when the library wasn't found.
fixes #47101
The bug was introduced in #33495, where `spack find` was not updated,
and wasn't caught by unit tests.
Now a Database can accept a custom predicate to select the installation
records. A unit test is added to prevent regressions. The weird convention
of having `any` as a default value has been replaced by the more commonly
used `None`.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* root: fix variant detection for external
A few fixes (possibly non-exhaustive) to `spack external find root`
Several variants have had `when=` clauses added that need to be
propagated to determine_variants. The previously used
Version.satifies("") method also has been removed, it seems. It's
slightly cumbersome that there is no self.spec to use in
determine_variants, but comparisons using Version(version_str) work at least
* remove debug printout
* sccache: add new package
* sccache: add older versions and minimum rust versions
* sccache: add more minimum rust versions
* sccache: add sccache executable and tag as build-tools
* sccache: add dist-server
* sccache: add determine_version and determin_variants
* sccache: add sccache-dist executable
* sccache: fix style
* Update var/spack/repos/builtin/packages/sccache/package.py
* In case building very old sccache (<= 5) with these older rust versions is not needed, they can be omitted.
* sccache: drop older versions
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
* sccache: add openssl dependency
* sccache: openssl is a linux only dependency?
---------
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
Older builds of Boost were failing on Windows because they were
adding --without-... flags for libraries that did not exist in those
versions. So:
* lib variants are updated with version range info (current range
info for libs is not comprehensive, but represents changes over the
last few minor versions up to 1.85)
* On Windows, --without-... options are omitted for libraries when they
don't exist for the version of boost being built. Non-Windows uses
a different approach, which was not affected because the new libraries
were not activated by default. It would benefit from similar attention
though to avoid potential future issues.
#44327 made sure to always run `set_package_py_globals` on all
packages before running `setup_dependent_package` for any package,
so that packages implementing the latter could depend on variables
like `spack_cc` being defined.
This ran into an undocumented dependency: `std_cmake_args` is set in
`set_package_py_globals` and makes use of `cmake_prefix_paths` (if it
is defined in the package); `py-torch`'s implementation of
`cmake_prefix_paths` depends on a variable set by
`setup_dependent_package` (`python_platlib`).
This generally restores #44327, and corrects the resulting issue by
moving assignment of `std_cmake_args` to after both actions have been
run.
* py-clip-anytorch: new package
* py-clip-anytorch: ran black
py-langchain-core: ran black
py-pydantic: ran black
py-dalle2-pytorch: ran black
* [py-clip-anytorch] fixed license(checked_by)
* Apply suggestion from Wouter on fixing CI
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
---------
Co-authored-by: Alex C Leute <acl2809@rit.edu>
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* py-pytorch-warmup: new package
* py-clip-anytorch: ran black
py-langchain-core: ran black
py-pydantic: ran black
py-dalle2-pytorch: ran black
---------
Co-authored-by: Alex C Leute <acl2809@rit.edu>
Some unused methods in VTK-m resulted in compile errors. These were
not discovered because many compilers ignore unused methods in templated
classes, but the SYCL compiler for Aurora gave an error.
Turns out `os=...` of the spec and `MACOSX_DEPLOYMENT_TARGET` are kept
in sync, and the env variable is used to initialize
`CMAKE_OSX_DEPLOYMENT_TARGET`.
In bootstrap code we set the env variable, so these bits are redundant.
---------
Co-authored-by: haampie <haampie@users.noreply.github.com>
* Remove the implicit CORE-AVX512 since the CPU specific flags are added by the
compiler wrappers.
* Add `-i_use-path` to help `ifx` find `lld` even if `-gcc-name` is set in
`ifx.cfg`. This file is written by the `intel-oneapi-compilers` package to find the
correct `gcc`. Not being able to find `lld` is a bug in `ifx`. @rschon2 found
this workaround.
* e4s external rocm ci: upgrade to v6.2.1
* use ghcr.io/spack/spack/ubuntu22.04-runner-amd64-gcc-11.4-rocm6.2.1:2024.10.08
* magma +rocm: add entry for v6.2.1
Extrae normally separates the C and MPI fortran interception libs, but
for mixed C/Fortran applications a combined lib is needed.
Co-authored-by: fpanichi <fpanichi@amd.com>
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
* [ftgl] Restrict GCC 14+ patch to apply only to GCC 14+
The patch added by #46927 should only be applied where it is needed:
with GCC 11 it causes a compilation failure where none previously
existed.
* Fix the constraint for applying the unsigned char patch to ^freetype@2.13.3:
---------
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
The purpose of this CI job is to ensure that we
can use a modern clingo to concretize specs, if
e.g. it was installed in a virtual environment
with pip.
Since there is no need to re-test unrelated parts
of Spack, reduce the number of tests we run to just
concretize.py
* lua: update luarocks resource to 3.11.1
We have kept an older 3.8 for some time, but that now uses an incorrect
value for the deployment target for macos, causing builds for bundles to
succeed but in such a way that they can't be linked to applications by
`ld` but only loaded by dlopen. This fixes that, and also generally
updates the tool.
* lua-luajit-openresty: add new version, fix LUA_PATH
Adds a newer version of openresty's luajit, and adds the slightly odd
extra share path they use that contains the `jit.*` modules. Without
that, things that use bytecode-saving and other jit routines (like
neovim) fail.
* lua-lpeg: fix lpeg build to work for neovim on OSX
Normally luarocks builds all lua libraries as bundles on macos, this
makes sense, but means they can't then be linked by LD into executables
the way neovim expects to do. I'm not sure how this ever worked, if it
did. This patch adds the appropriate variables to have luarocks build
the library as a shared library, and subsequently fixes the id with
install_name_tool (the built-in functionality for this does not
trigger).
This also adds a symlink from `liblpeg.dylib` to `lpeg.so` because
neovim will not build on macos without it. See corresponding upstream
pull request at https://github.com/neovim/neovim/pull/30749
Remove the constraint for concrete specs and simply take the
max(version) if a version is not given. This should default to the
highest infinity version which is also the logical best guess for
doing development.
* Remove the concrete version constraint for develop, set docs
* Add unit-test
* Update lib/spack/docs/environments.rst
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
* Update lib/spack/spack/cmd/develop.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Consolidate env collection in cmd
* Style
---------
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
Co-authored-by: Greg Becker <becker33@llnl.gov>
Remove the `build-tools` tag of python, otherwise these types of
concretizations are possible:
```
py-root
^py-pip
^python@3.12
^python@3.13
```
So, a package would be configured with py-pip using python 3.12, but
installed for 3.13, which does not work.
* Add new version for master branch
Added a new version for the master branch. Also added additional functions to ensure tempo will actually run: tempo assumes the stage directory sticks around and references numerous files and directories there. That has been corrected here, but only for the master version. The LWA-10-2020 version will also have this problem, but users may have additional setup in their compute/Spack environment to address this issue already, so I did not modify anything when that's the version. An example of what happens in the LWA-10-17-2020 version regarding missing files is given below:
```
user@cs:~/spack/bin$ tempo
more: cannot open /tempo.hlp: No such file or directory
```
* Updated to fix format errors
Flake8 check found errors. Fixed those formatting issues
* Additional format change
Removed redundant setup_dependent_run_environment missed in previous update
* Update url to use https: https is the usual transport and is needed to support checkout behind some firewalls
---------
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
The CMake builder in Spack actually adds incorrect rpaths. They are
unfiltered and incorrectly ordered compared to what the compiler wrapper
adds.
There is no need to specify paths to dependencies in `CMAKE_INSTALL_RPATH`
because of two reasons:
1. CMake preserves "toolchain" rpaths, which includes the rpaths injected
by our compiler wrapper.
2. We use `CMAKE_INSTALL_RPATH_USE_LINK_PATH=ON`, so libraries we link
to are rpath'ed automatically.
However, CMake does not create install rpaths to directories in the package's
own install prefix, so we set `CMAKE_INSTALL_RPATH` to the educated guess
`<prefix>/{lib,lib64}`, but omit dependencies.
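The effect is roughly as if every CMake package passed the following (a sketch using Spack's `self.define` helper; the real wiring lives in the builder, and the package name here is hypothetical):
```python
from spack.package import *

class Example(CMakePackage):
    """Hypothetical package, to show the effective settings."""

    def cmake_args(self):
        return [
            self.define("CMAKE_INSTALL_RPATH_USE_LINK_PATH", True),
            # the educated guess described above: only the package's own
            # prefix lib dirs, with no dependency directories
            self.define(
                "CMAKE_INSTALL_RPATH", [self.prefix.lib, self.prefix.lib64]
            ),
        ]
```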
Some assertions are not testing DAG invariants, and they pass only
because of the simple structure of the builtin.mock repository on develop.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* update versions of Neo4j and Redis deps
* deprecating older versions due to security vulnerabilities
* [@spackbot] updating style on behalf of kchilleri
* Update var/spack/repos/builtin/packages/redis/package.py
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* adding previous urls to use archives on project websites
* [@spackbot] updating style on behalf of kchilleri
* adding new required maven version
* label when to use specific maven versions
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
---------
Co-authored-by: Krishna Chilleri <krishnachilleri@lanl.gov>
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* Add versions 1.14.4-3, 1.14.5, develop-1.16, and update develop-1.15 to
develop-1.17.
* Remove unused list_url and list_depth, and fix style-disapproved spacing in
`url = "..."`.
* One more style fix.
This PR allows users to configure explicit splicing replacement of an abstract spec in the concretizer.
```yaml
concretizer:
  splice:
    explicit:
    - target: mpi
      replacement: mpich/abcdef
      transitive: true
```
This config block would mean "for any spec that concretizes to use mpi, splice in mpich/abcdef in place of the mpi it would naturally concretize to use." See #20262, #26873, #27919, and #46382 for PRs enabling splicing in the Spec object. This PR will be the first place the splice method is used in a user-facing manner. See https://spack.readthedocs.io/en/latest/spack.html#spack.spec.Spec.splice for more information on splicing.
This will allow users to reuse generic public binaries while splicing in the performant local mpi implementation on their system.
In the config file, the target may be any abstract spec. The replacement must be a spec that includes an abstract hash `/abcdef`. The transitive key is optional, defaulting to true if left out.
Two important items to note:
1. When writing explicit splice config, the user is in charge of ensuring that the replacement specs they use are binary compatible with whatever targets they replace. In practice, this will likely require either specific knowledge of what packages will be installed by the user's workflow, or somewhat more specific abstract "target" specs for splicing, to ensure binary compatibility.
2. Explicit splices can cause the output of the concretizer not to satisfy the input. For example, using the config above, consider a package in a binary cache `hdf5/xyzabc` that depends on mvapich2. Then the command `spack install hdf5/xyzabc` will instead install the result of splicing `mpich/abcdef` into `hdf5/xyzabc` in place of whatever mvapich2 spec it previously depended on. When this occurs, a warning message is printed: `Warning: explicit splice configuration has caused the concretized spec {concrete_spec} not to satisfy the input spec {input_spec}`.
Highlighted technical details of implementation:
1. This PR required modifying the installer to have two separate types of Tasks, `RewireTask` and `BuildTask`. Spliced specs are queued as `RewireTask` and standard specs are queued as `BuildTask`. Each spliced spec retains a pointer to its build_spec for provenance. If a RewireTask is dequeued and the associated `build_spec` is neither available in the install_tree nor from a binary cache, the RewireTask is requeued with a new dependency on a BuildTask for the build_spec, and BuildTasks are queued for the build spec and its dependencies.
2. Relocation is modified so that a spack binary can be simultaneously installed and rewired. This ensures that installing the build_spec is not necessary when splicing from a binary cache.
3. The splicing model is modified to more accurately represent build dependencies -- that is, spliced specs do not have build dependencies, as spliced specs are never built. Their build_specs retain the build dependencies, as they may be built as part of installing the spliced spec.
4. There were vestiges of the compiler bootstrapping logic that were not removed in #46237 because I asked alalazo to leave them in to avoid making the rebase for this PR harder than it needed to be. Those last remains are removed in this PR.
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
* cuda: Add 12.6.2
* Update cuda build system
- Remove the gcc@6 conflict that was only a deprecation (probably has to be added again with cuda@13)
- Update cuda_arch support by CUDA version
- Kepler support has ended with cuda@12
- Recently added 90a Hopper "experimental" features architecture was
missing the dependency on cuda@12:
Adds the lima-vm project. In order to make that useful, it adds a newer
version of qemu so qemu VMs can work, and builds qemu with flags that
allow it to do things like give the VMs networking and virtfs
filesystems.
Also adds vde as a dependency of qemu.
* CMake: Improve incremental build speed.
CMake automatically embeds an updated configure step into make/ninja that will be called during the build phase. By default, if a `CMakeCache.txt` file exists in the build directory, CMake will use it; this, together with `spec.is_develop`, is sufficient evidence of an incremental build.
This PR removes duplicate work/expense from CMake packages when using `spack develop`.
* Update cmake.py
* [@spackbot] updating style on behalf of psakievich
* Update cmake.py
meant self not spec...
---------
Co-authored-by: psakievich <psakievich@users.noreply.github.com>
* fix typo in variable name in hepmc3 variant
* set cxx standard to 14 when using protobuf
* add myself to hepmc3 maintainer list
* hepmc3: Applied suggestion of @alecbcs for spec.satisfies("+protobuf") (agreed!)
Co-authored-by: Alec Scott <hi@alecbcs.com>
* hepmc3: cxx_standard for protobuf
only set the cxx standard to meet the protobuf minimum (14) if the rootio variant is not also enabled, as that sets the cxx standard to match the root public API standard requirements
* Environment.clear: ensure clearing is passed through to manifest
* test/cmd/env: make test_remove_command round-trip to disk
* cleanup vestigial variables
Some Windows Python installations may store the Python exe in Scripts/
rather than the base directory. Update `.command` to search in both
locations on Windows. On all systems, the search is now done
recursively from the search root: on Windows, that is the base install
directory, and on other systems it is bin/.
* Add libALL support
* cabana: also require ALL
* cabana: Bugfix: Fix spec for cmake>=3.26 to be @3.26: and HDF5 support requires MPI
* cabana: MPI requires C: Add depends_on("c", type="build", when="+mpi")
* cabana: +mpi requires C, but at least for some CMake versions, Cabana's enable of C is too late. Patch it.
* cabana: simplify disabling of find_package calls for disabled options and improve comment
* cabana: +grid of 0.6.0 does not compile with gcc-13: It misses iostream includes
* cabana: +test requires googletest at build time: gtest is a linked library(not a plugin or tool)
* cabana: 0.6.0+cuda requires kokkos@3.7:, see https://github.com/ECP-copa/Cabana/releases
* cabana: As 0.6.0+grid does not support gcc-13 and newer, I think it's good to add 0.6.1 and 0.7.0?
---------
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
* added py-evodiff and dependencies
* deleted the FIXME
* fixed style issues
* added versions for biotite dependencies; added hash to py-hatch-vcs
* added python version for py-hatch-cython
* updated biotraj dependencies
* - added versions for the packages and dependencies
- added more dependencies for py-hatch
- added rust versions
- added py-uv as a new package
* updated packages and their dependencies according to the PR review by @meyersbs
* typo fix for hatchling version; fix the minimum required setuptools version for evodiff
* added 1.9.0 and 1.7.0 userpath versions; required as a dependency
* added mlflow as a dependency
* changed biopython to an optional dependency according to review from @meyersbs; variant esmfold
* Updated Specs
- Pinned biotraj to 1:1 for py-biotite
- Added numpy and other dependencies for py-biotraj; they are dependent
on the versions
- Excluded py-mlflow as a dependency for package py-evodiff; missing
usage in the package.
- Removed versioned dependencies from py-fair-esm
- Added a version to py-packaging
- Added py-setuptools as a dependency in py-userpath
- Added sha256 as hashes for py-uv
* style changes
* Remove "modify_object_macholib"
According to documentation, this function is used when installing
Mach-O binaries on linux. The implementation seems questionable at
least, and the code seems to be never hit (Spack currently doesn't
support installing Mach-O binaries on linux).
* Fix relocation on macOS, when store projection changes
---------
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Enable tests for symlink-based views (this works with almost no
modifications to the view logic). View logic is not yet robust
for hardlink/junction-based views, so those are disabled for now
(both in the tests and as subcommands to `spack view`).
* add new darshan-runtime variants
- `lustre` variant enables instrumentation of Lustre files
* requires Lustre headers, so Lustre is a proper dependency
for this variant
- `log_path` variant allows setting of a centralized log directory
for Darshan logs (commonly used for facility deployments)
* when this variant is used, the `DARSHAN_LOG_DIR_PATH` env var
is no longer used to set the log file path
- `group_readable_logs` variant sets Darshan log permissions to
allow reads from users in the same group
* add mmap_logs variant to enable usage of mmap logs
Also adds support for Paraview and CMake to build with Qt support on
Windows.
The remaining edits are to enable building of Qt itself on Windows:
* Several packages needed to update `.libs` to properly locate
libraries on Windows
* Qt needed a patch to allow it to build using a Python with a space
in the path
* Some Qt dependencies had not been ported to Windows yet
(e.g. `harfbuzz` and `lcms`)
This PR does not provide a sufficient GL for Qt to use Qt Quick2; as
such, Qt Quick2 is disabled on the Windows platform by this PR.
---------
Co-authored-by: Dan Lipsa <dan.lipsa@kitware.com>
* Bugfix/Installer: properly track task queueing
* Move ordinal() to llnl.string; change time to attempt
* Convert BuildTask to use kwargs (after pkg); convert STATUS_ to BuildStats enum
* BuildTask: instantiate with keyword only args after the request
* Installer: build request is required for initializing task
* Installer: only the initial BuildTask cannot have status REMOVED
* Change queueing check
* ordinal(): simplify suffix determination [tgamblin]
* BuildStatus: ADDED -> QUEUED [becker33]
* BuildTask: clarify TypeError for 'installed' argument
* madgraph5amc: add v3.5.6, add a preferred version and remove urls
* Fix format
---------
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
We run tests for more python versions on `develop` than we do for PRs, so codecov
project status is nearly always failing. There is about a 1% difference in max coverage
between `develop` tests and PR tests, so we should increase the project threshold to 2%
to allow for this difference.
The purpose of the project test on PRs is just to make sure that nothing done on the PR
massively affects coverage of code not covered by the PR. This is valuable, but rare. It
only really affects PRs that deal with test or coverage configuration.
- [x] change project coverage threshold from .2% to 2%
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* Add v3.0.0 version for libfirefly package
* [@spackbot] updating style on behalf of tbhaxor
---------
Co-authored-by: tbhaxor <tbhaxor@users.noreply.github.com>
`spack gc` has so far been a global or environment-specific thing.
This adds the ability to restrict garbage collection to specific specs,
e.g. if you *just* want to get rid of all your unused python installations,
you could write:
```console
spack gc python
```
- [x] add `constraint` arg to `spack gc`
- [x] add a simple test
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Avoid hdf5 searching all search paths for ZLIB.cmake config files (including /usr/lib) before it looks for zlib without cmake config files, which is how Spack installs it.
* WarpX: 24.10
This updates WarpX and dependencies for the 24.10 release.
New features:
- EB runtime control: we can now compile with EB on by default,
because it is not an incompatible binary option anymore
- Catalyst2 support: AMReX/WarpX 24.09+ support Catalyst2 through
the existing Conduit bindings
* Fix Typo in Variant
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
* Improve Python Dep Version Ranges
* Add Missing `-DWarpX_CATALYST`
* AMReX: Missing CMake Options for Vis
---------
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
This allows us to keep the workflow file tidier, and avoid
using indirections to perform platform specific operations.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* py-mpi4py: add v4.0.0
* sensei: update mpi4py dependency
build with py-mpi4py@4.0.0 due to fatal no such file or directory error
* petsc4py: update license, and remove C++/Fortran dependency
There was a bit of mystery surrounding the arguments for `_setup_pkg_and_run`. It passes
two file descriptors for handling `gmake`'s job server in child processes, but they are
unused in the method.
It turns out that there are good reasons to do this -- depending on the multiprocessing
backend, these file descriptors may be closed in the child if they're not passed
directly to it.
- [x] Document all arguments to `_setup_pkg_and_run`.
- [x] Add type hints for `_setup_pkg_and_run`.
- [x] Refactor exception handling in `_setup_pkg_and_run` so it's easier to add type
hints. `exc_info()` was problematic because it *can* return `None` (just not
in the context where it's used). `mypy` was too dumb to notice this.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* new builtin package: ambertools
* fixes for the style test
* yet more changes for the style test
* hope this is the last fix for the style test
* netlib-xblas is a dependency, it needs a depends_on("m4", type="build")
* ambertools: Add new setuptool dependency, limit python to <= 3.10 (does not build with 3.11+)
---------
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
We mostly use `spack style` and `spack style --fix`, but it's nice to also be able to
run plain old `black .` in the repo.
- [x] Fix includes and excludes in `pyproject.toml` so that we *only* cover files we expect
to be blackened.
Note that `spack style` is still likely the better way to go, because it looks at `git
status` and tells black to only check files that changed from `develop`. `black` with
`pyproject.toml` won't do that. Of course, you can always manually specify which files
you want blackened.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* postgresql: Add icu4c dependency for versions 16+
* postgresql: make ICU an option
* postgresql: ICU variant only needed for v16+
* postgresql: Check for negated option
Check for negated option instead of negating the test
Co-authored-by: Alec Scott <hi@alecbcs.com>
---------
Co-authored-by: Alec Scott <hi@alecbcs.com>
* [py-flash-attn] Add version 2.6.3
* Update dependencies according to the latest version
* Add max_jobs environment variable to avoid OOM errors
---------
Co-authored-by: aurianer <8trash-can8@protonmail.ch>
* Revert "`cc`: ensure that RPATHs passed to linker are unique"
This reverts commit 2613a14c43.
* Revert "`cc`: simplify ordered list handling"
This reverts commit a76a48c42e.
Updated the terminology for the two types of environments to be
consistent with that used in the tutorial for the last three years.
Additionally:
* changed 'anonymous' to 'independent' in environment command+test for consistency.
* Update package.py
Update to pull TotalView tar files from AWS instead of requiring the user to download them ahead of time. Use the new license type (RLM), and only allow installs of the versions that use it: 2024.1 and 2024.2. The user selects the platform along with the version, as is done on the TotalView downloads website.
* Update package.py
Update to pass style test
* Update package.py
fixing style
* Updating to pass style check
removing more spaces to pass style check
* final style fixes
fixing the last 2 style errors
* Typo
Typo correction to pass style check
* Remove newline
removing newline character
* Ran black to reformat
Ran black to clear errors
* Changing to use sha256
Updating to use sha256 checksums for all TotalView files.
* acts dependencies: new versions as of 2024/09/30
This commit adds new versions of acts, actsvg, and detray.
* Add vecmem version, patch detray version
#45205 already removed previous use of single letter packages
from unit tests, in view of reserving `c` as a language (see #45191).
Some use of them has been re-introduced accidentally in #46382, and
is making unit-tests fail in the feature branch #45189 since there
`c` is a virtual package.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Kokkos: adding some sanity checks
We can pretty much guarantee that if the bin, include, or lib directory
is missing, something is wrong; the same goes for KokkosCore_config.h
and Kokkos_Core.hpp. Technically we could look for all public headers,
but that seems a bit overkill (see the sketch after this list).
* Kokkos Kernels: adding sanity checks
* Remove check for lib directory since it might end up being lib64
* Also remove lib from kokkos-kernels sanity check
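A minimal sketch (not the actual `package.py`) of how such checks are declared in a Spack package; paths are relative to the install prefix:
```python
from spack.package import *

class Kokkos(CMakePackage):
    # Fail the install if these entries are missing from the prefix.
    # "lib" is deliberately omitted, since it may end up being "lib64".
    sanity_check_is_dir = ["bin", "include"]
    sanity_check_is_file = [
        "include/Kokkos_Core.hpp",
        "include/KokkosCore_config.h",
    ]
```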
* Add latest releases of Camp, RAJA, Umpire, CHAI and CARE
* Address review comments + blt requirement in Umpire
* CARE @develop & @main: Submodules -> False
* Changes in Umpire
* Changes in RAJA
* Changes in CHAI
* Changes in RAJA: prefer 'spec.satisfies' to 'in spec'
This is due to a non-equivalence in Spack with providers like mpi.
See e.g. https://github.com/spack/spack/pull/46126
* Changes in Umpire: prefer 'spec.satisfies' to 'in spec'
This is due to a non-equivalence in Spack with providers like mpi.
See e.g. https://github.com/spack/spack/pull/46126
* Changes in CARE:
Still need to update to CachedCMakePackage based on RADIUSS Spack Configs version
* Missing change in RAJA + changes in fmt
* Fix syntax
* Changes in Camp
* Fix style
* CHAI: when ~raja, turn off RAJA in build system
* Fix: Ascent@0.9.3 does not support RAJA@2024.07.0
* Enforce same version constraint on Umpire as for RAJA
* Enforce preferred version of vtk-m in ascent 0.9.3
* Migrate CARE package to CachedCMakePackage
* Fix style in CARE package
* CARE: Apply changes for uniform implementation across RADIUSS projects
* Caliper: move to CachedCMakePackage, from RADIUSS Spack Configs
* Adapt RAJA Perf to spack CI
* Activate CHAI, CARE and RAJAPerf in Spack CI
* Fixes and diffs with RADIUSS Spack Configs
* Caliper: fix
* Caliper : fix + RAJAPerf : style
* RAJAPerf: fixes
* Update maintainers
* raja-perf: fix license header
* raja-perf: Fix variant naming openmp_target -> omptarget
* raja-perf: style and blt dependency versions
* CARE: benchmark and examples off by default (like tests)
* CARE: fix missing variable
* Update var/spack/repos/builtin/packages/raja-perf/package.py
* CARE: fix branch name
* Revert changes in MFEM to pass CI
* Fix CXX17 condition in RAJA + add sycl option in RAJAPerf
---------
Co-authored-by: Rich Hornung <hornung1@llnl.gov>
* cbindgen: new package
* Attempting to add rust dependencies for cbindgen
* adding rust-toml min rust version
* Removing dependencies that don't install with cargo
* cleanup broken packages
---------
Signed-off-by: Teague Sterling <teaguesterling@gmail.com>
On sysroot systems like Gentoo Prefix, as well as Nix/Guix, our "is
system path" logic is broken because it's static.
Talking about "the system paths" is not helpful; we have to talk
about the default search paths of the dynamic linker instead.
If glibc is recent enough, we can query the dynamic loader's default
search paths, which is a much more robust way to avoid registering
rpaths to system dirs, which can shadow Spack dirs.
This PR adds an **additional** filter on rpaths the compiler wrapper
adds, dropping rpaths that are default search paths. The PR **does
not** remove any of the original `is_system_path` code yet.
This fixes issues where build systems run just-built executables
linked against their *not-yet-installed libraries*, typically:
```
LD_LIBRARY_PATH=. ./exe
```
which happens in `perl`, `python`, and other non-cmake packages.
If a default path is rpath'ed, it takes precedence over
`LD_LIBRARY_PATH`, and a system library gets loaded instead
of the just-built library in the stage dir, breaking the build. If
default paths are not rpath'ed, then LD_LIBRARY_PATH takes
precedence, as is desired.
This PR additionally fixes an inconsistency in rpaths between
cmake and non-cmake packages. The cmake build system
computed rpaths by itself, but used a different order than
computed for the compiler wrapper. In fact it's not necessary
to compute rpaths at all, since we let cmake do that thanks to
`CMAKE_INSTALL_RPATH_USE_LINK_PATH`. This covers rpaths
for all dependencies. The only install rpaths we need to set are
`<install prefix>/{lib,lib64}`, which cmake unfortunately omits,
although it could also know these. Also, cmake does *not*
delete rpaths added by the toolchain (i.e. Spack's compiler
wrapper), so I don't think it should be controversial to simplify
things.
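A hedged sketch of the query (the loader path is platform-specific and the output format may vary between glibc versions): on glibc >= 2.33, invoking the dynamic loader with `--help` prints its default search directories.
```python
import re
import subprocess

def linker_default_paths(ld_so="/lib64/ld-linux-x86-64.so.2"):
    # Lines of interest look like: "  /lib64 (system search path)"
    out = subprocess.run([ld_so, "--help"], capture_output=True, text=True).stdout
    return re.findall(r"^\s+(/\S+)\s+\(", out, flags=re.MULTILINE)

# RPATHs that fall within linker_default_paths() can then be dropped.
```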
https://docs.sylabs.io/guides/main/admin-guide/configfiles.html#loop-devices
shared loop devices: This allows containers running the same image
to share a single loop device. This minimizes loop device usage and
helps optimize kernel cache usage.
Enabling this feature can be particularly useful for large MPI jobs.
The current `Spec.splice` model is very limited by the inability to splice specs that
contain multiple nodes with the same name. This is an artifact of the original
algorithm design predating the separate concretization of build dependencies,
which was the first feature to allow multiple specs in a DAG to share a name.
This PR provides a complete reimplementation of `Spec.splice` to avoid that
limitation. At the same time, the new algorithm ensures that build dependencies
for spliced specs are not changed, since the splice by definition cannot change
the build-time information of the spec. This is handled by splitting the dependency
edges and link/run edges into separate dependencies as needed.
Signed-off-by: Gregory Becker <becker33@llnl.gov>
* CI: Add documentation for adding new stacks and runners
* More docs for runner registration
---------
Co-authored-by: Zack Galbreath <zack.galbreath@kitware.com>
Co-authored-by: Bernhard Kaindl <contact@bernhard.kaindl.dev>
This PR shortens the string representation of concrete specs,
in order to make it more legible.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
macOS Sequoia's linker will complain if RPATHs on the CLI are specified more than once.
To avoid errors due to this, make `cc` only append unique RPATHs to the final args list.
This required a few improvements to the logic in `cc`:
1. List functions in `cc` didn't have any way to append unique elements to a list. Add a
`contains()` shell function that works like our other list functions. Use it to implement
an optional `"unique"` argument to `append()` and an `extend_unique()`. Use that to add
RPATHs to the `args_list` (a Python sketch of the behavior follows this list).
2. In the pure `ld` case, we weren't actually parsing `RPATH` arguments separately as we
do for `ccld`. Fix this by adding *another* nested case statement for raw `RPATH`
parsing. There are now 3 places where we deal with `-rpath` and friends, but I don't
see a great way to unify them, as `-Wl,`, `-Xlinker`, and raw `-rpath` arguments are
all ever so slightly different.
3. Fix ordering of assertions to make `pytest` diffs more intelligible. The meaning of
`+` and `-` in diffs changed in `pytest` 6.0 and the "preferred" order for assertions
became `assert actual == expected` instead of the other way around.
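`cc` itself is POSIX shell, but in Python terms the new list handling from point 1 amounts to roughly:
```python
def extend_unique(args_list, new_args):
    # Append each element only if it is not already present,
    # preserving the order of first occurrence.
    for arg in new_args:
        if arg not in args_list:
            args_list.append(arg)

rpaths = ["-rpath=/opt/a", "-rpath=/opt/b"]
extend_unique(rpaths, ["-rpath=/opt/a", "-rpath=/opt/c"])
assert rpaths == ["-rpath=/opt/a", "-rpath=/opt/b", "-rpath=/opt/c"]
```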
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
`cc` divides most paths up into system paths, spack managed paths, and other paths.
This gets really repetitive and makes the code hard to read. Simplify the script
by adding some functions to do most of the redundant work for us.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* axom/stand-alone tests: build and run in test stage directory
* Removed unused glob
* axom/stand-alone tests: add example_stage_dir variable for clarity
SimpleFilesystemView was producing an error because it looked for a
`<prefix>/lib/.spack` folder. Also, `view_destination` had no effect and
wasn't called. Changed this by instead patching the correct
installation prefix into the dictionaries.
Since aspell uses the resolved path of the executable prefix, the
runtime environment variable `ASPELL_CONF` is set to correct the prefix
when in a view. With this change aspell can now find installed
dictionaries. Verified with:
```console
aspell dump config
aspell dump dicts
```
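A hedged sketch of the kind of hook involved (the exact value Spack sets may differ); `ASPELL_CONF` accepts `prefix <path>` overrides:
```python
def setup_run_environment(self, env):
    # Point aspell at the real install prefix so dictionaries resolve
    # even when the executable is reached through a view.
    env.set("ASPELL_CONF", f"prefix {self.prefix}")
```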
* shorten version number validations per reviewer feedback
* rename set_lib_path per reviewer feedback
* Add E4S tag
* Set CHPL_CUDA_PATH to ensure Chapel installer finds the right package
* Update ROCm dependency for Chapel 2.2
* Fix llvm-amdgpu and CHPL_TARGET_* for llvm=bundled
* Ensure CHPL_TARGET_COMPILER is set to "llvm" when required (llvm=spack
or +cuda or +rocm).
* Ensure CHPL_TARGET_{CC,CXX} are only set when using llvm=spack or llvm=none
* Use hip.prefix to set CHPL_ROCM_PATH
Since we might not directly depend on llvm-amdgpu, it might
not appear in our spec.
* limit m4 dependency to +gmp
* limit names of env vars created from variants
* Ensure that +cuda and +rocm variants are Sticky
The concretizer should never be permitted to select GPU support, because
it's only meaningful and functional when the appropriate hardware is actually
available, and the concretizer cannot reliably determine that.
Also: Chapel's GPU support includes a lot of complicated dependencies
and constraints, so leaving that choice free to the concretizer leads to a lot
of extraneous and confusing messages when failing to concretize a
non-GPU-enabled spec.
Co-authored-by: Dan Bonachea <dobonachea@lbl.gov>
Add pre-built sbcl for x86 and arm for various glibc versions, making
way for an actual sbcl built from source.
Also switch to use set_env in a context manager over setting the
environment variable for the build environment. I hit an issue with the
build system due to this in the sbcl package, pre-empting the same issue
here.
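A hedged sketch of the `set_env` pattern (the variable name and build step are hypothetical):
```python
from spack.util.environment import set_env

def install(self, spec, prefix):
    # The variable is set only for the duration of the block and
    # restored afterwards, instead of leaking into os.environ.
    with set_env(SBCL_HOME=str(prefix.lib)):
        make("install")
```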
* dla-future: Add DLAF_ prefix to LAPACK_LIBRARY CMake variable in newer versions
* dla-future: Use spec.satisfies to check version constraint for LAPACK_LIBRARY variable prefix
Co-authored-by: Alberto Invernizzi <9337627+albestro@users.noreply.github.com>
---------
Co-authored-by: Alberto Invernizzi <9337627+albestro@users.noreply.github.com>
* py-sphinx-tabs: new version 3.4.5
* py-sphinx-design: new versions 0.5.0, 0.6.0, and 0.6.1
* py-requests: new version 2.32.3
* py-dnspython: new version 2.6.1
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* py-hatch-vcs: new version 0.4.0
---------
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
Python 3.12 removed the `distutils` module, which the build process of
LLVM <= 14 requires: add a conflict with it for +python.
Fix the build so it does not pick up host tools, such as an incompatible Python from the host.
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
gcc on Ubuntu has fix-cortex-a53-843419 set by default; this causes linking
issues (symbol relocation errors) for tensorflow, even when compiling for different
CPUs.
If `add_padding()` is allowed to return a path with a trailing path
separator, it will get collapsed elsewhere in Spack. This can lead to
buildcache entries that have RPATHS that are too short to be replaced by
other users whose install root happens to be padded to the correct
length. Detect this and replace the trailing path separator with a
concrete path character.
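A sketch of the fix under stated assumptions (this is not Spack's actual `add_padding`, just the shape of the guard):
```python
import os

def pad_path(path, length, pad_char="_"):
    # Pad with separator-delimited filler up to `length` characters...
    padding = (os.sep + pad_char * 7) * ((length - len(path)) // 8 + 1)
    padded = (path + padding)[:length]
    # ...but never end on a path separator: a trailing separator would be
    # collapsed elsewhere, leaving RPATHs too short to relocate later.
    if padded.endswith(os.sep):
        padded = padded[:-1] + pad_char
    return padded

assert not pad_path("/opt/spack", 64).endswith(os.sep)
```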
Signed-off-by: Samuel E. Browne <sebrown@sandia.gov>
Also: set the build and install directories to the source directory
because the build system unfortunately expects the `src_ext` directory
to be under the current working directory when building the bundled
third-party libraries, even when the configure script is run from
another directory.
@scemama pointed out that 'make' just calls 'dune', which is already
parallel, so make itself should not use more than one job.
opam@:2.1 needs 'make lib-ext' for cmdliner; for later versions it's obsolete.
Co-authored-by: Bernhard Kaindl <bernhard.kaindl@cloud.com>
The prefix detection logic in py-pybind11 is broken for Spack,
resulting in an empty prefix. However, the package provides an escape
hatch in the form of `prefix_for_pc_file`. Use this escape hatch to
provide the correct path; Spack will always know better than pybind11's
CMake.
Co-authored-by: Robert Underwood <runderwood@anl.gov>
We've seen `getfqdn()` cause slowdowns on macOS in CI when added elsewhere. It's also
called by database.py every time we write the DB file.
- [x] replace the call with a memoized version so that it is only called once per process.
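A minimal sketch of the memoization, using `functools.lru_cache` in place of Spack's own helper:
```python
import functools
import socket

@functools.lru_cache(maxsize=None)
def getfqdn():
    # socket.getfqdn() can trigger a slow reverse DNS lookup; cache the
    # result so the cost is paid at most once per process.
    return socket.getfqdn()
```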
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
This PR introduces a new heuristic for the solver, which behaves better when
compilers are treated as nodes. It apparently also performs better on `develop`,
where compilers are still node attributes.
The new heuristic:
- Sets an initial priority for guessing a few attributes. The order is "nodes" (300),
"dependencies" (150), "virtual dependencies" (60), "version" and "variants" (30), and
"targets" and "compilers" (1). This initial priority decays over time during the solve, and
falls back to the defaults.
- By default, it considers most guessed facts as "false". For instance, by default a node
doesn't exist in the optimal answer set, or a version is not picked as a node version etc.
- There are certain conditions that override the default heuristic using the _priority_ of
a rule, which previously we didn't use. For instance, by default we guess that a
`attr("variant", Node, Variant, Value)` is false, but if we know that the node is already
in the answer set, and the value is the default one, then we guess it is true.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
This includes a test_linux699 variant which "activates" a version
that pulls from a repository other than the official repository.
This version is required to work with Linux kernel version
6.9.9 or later. Future official `msr-safe` versions are expected
to support later Linux kernel versions.
* opendatadetector: Add an env variable pointing to the share directory
* Rename the new variable to OPENDATADETECTOR_DATA and use join_path
---------
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
The `spack.target.Target` class is a weird entity, needed only to:
1. Sort microarchitectures in lists deterministically
2. Allow microarchitectures to be used in hashed containers
This PR removes it, and uses `archspec.cpu.Microarchitecture` directly. To sort lists, we use a proper `key=` when needed. Being able to use `Microarchitecture` objects in sets is achieved by updating the external `archspec`.
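For illustration (the sort key shown here is arbitrary; the key Spack actually uses may differ), sorting and set membership now work directly on `archspec` objects:
```python
import archspec.cpu

targets = [archspec.cpu.TARGETS[n] for n in ("zen2", "haswell", "icelake")]
ordered = sorted(targets, key=lambda t: t.name)  # explicit key=, no wrapper class
unique = set(targets)                            # hashable after the archspec update
```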
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Introduce the bufr_query library from NOAA-EMC (#461)
This PR adds a new package.py script for the new bufr_query library from NOAA-EMC. This is being used by JEDI and other applications.
* Add explicit build dependency spec to the pybind11 depends_on spec
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* Convert patch file to the URL form which pulls the changes from github.
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
* Added new version (0.0.3) and removed obsolete site-packages.patch file
---------
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
While the existing getting started guide does in fact reference the
powershell support, it's a footnote and easily missed. This PR adds
explicit, upfront mentions of the powershell support. Additionally
this PR adds notes about some of the issues with certain components
of the spec syntax when using CMD.
If the spec is external, it has extra attributes. If not, we know
which names are used. In both cases we don't need to search again
for executables.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Removes `spack.package_base.PackageBase.do_{install,deprecate}` in favor of
`spack.installer.PackageInstaller.install` and `spack.installer.deprecate`, respectively.
That drops a dependency of `spack.package_base` on `spack.installer`, which is
necessary to get rid of circular dependencies in Spack.
Also change the signature of `PackageInstaller.__init__` from taking a dict as
positional argument, to an explicit list of keyword arguments.
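A hedged sketch of a call site after the change (the keyword names are assumptions, and `spec` is assumed to be an already-concretized Spec):
```python
from spack.installer import PackageInstaller

# Previously: spec.package.do_install(explicit=True, ...)
PackageInstaller([spec.package], explicit=True, fail_fast=True).install()
```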
Continuing the work started in #40326, this changes the structure
of Variant metadata on Packages from a single variant definition
per name with a list of `when` specs:
```
name: (Variant, [when_spec, ...])
```
to a Variant definition per `when_spec` per name:
```
when_spec: { name: Variant }
```
With this change, everything on a package *except* versions is
keyed by `when` spec. This:
1. makes things consistent, in that conditional things are (nearly)
all modeled in the same way; and
2. fixes an issue where we would lose information about multiple
variant definitions in a package (see #38302). We can now have,
e.g., different defaults for the same variant in different
versions of a package, as in the sketch below.
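For example (a hypothetical package, not one from the repo):
```python
from spack.package import *

class Example(Package):
    # Same variant name, different default per version range:
    variant("shared", default=False, when="@:1", description="Build shared libraries")
    variant("shared", default=True, when="@2:", description="Build shared libraries")
```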
Some notes:
1. This required some pretty deep changes to the solver. Previously,
the solver's job was to select value(s) for a single variant definition
per name per package. Now, the solver needs to:
a. Determine which variant definition should be used for a given node,
which can depend on the node's version, compiler, target, other variants, etc.
b. Select valid value(s) for variants for each node based on the selected
variant definition.
When multiple variant definitions are enabled via their `when=` clause, we will
always prefer the *last* matching definition, by declaration order in packages. This
is implemented by adding a `precedence` to each variant at definition time, and we
ensure they are added to the solver in order of precedence.
This has the effect that variant definitions from derived classes are preferred over
definitions from superclasses, and the last definition within the same class sticks.
This matches python semantics. Some examples:
```python
class ROCmPackage(PackageBase):
variant("amdgpu_target", ..., when="+rocm")
class Hipblas(ROCmPackage):
variant("amdgpu_target", ...)
```
The global variant in `hipblas` will always supersede the `when="+rocm"` variant in
`ROCmPackage`. If `hipblas`'s variant was also conditional on `+rocm` (as it probably
should be), we would again filter out the definition from `ROCmPackage` because it
could never be activated. If you instead have:
```python
class ROCmPackage(PackageBase):
variant("amdgpu_target", ..., when="+rocm")
class Hipblas(ROCmPackage):
variant("amdgpu_target", ..., when="+rocm+foo")
```
The variant on `hipblas` will win for `+rocm+foo` but the one on `ROCmPackage` will
win with `rocm~foo`.
So, *if* we can statically determine if a variant is overridden, we filter it out.
This isn't strictly necessary, as the solver can handle many definitions fine, but
this reduces the complexity of the problem instance presented to `clingo`, and
simplifies output in `spack info` for derived packages. e.g., `spack info hipblas`
now shows only one definition of `amdgpu_target` where before it showed two, one of
which would never be used.
2. Nearly all access to the `variants` dictionary on packages has been refactored to
use the following class methods on `PackageBase`:
* `variant_names(cls) -> List[str]`: get all variant names for a package
* `has_variant(cls, name) -> bool`: whether a package has a variant with a given name
* `variant_definitions(cls, name: str) -> List[Tuple[Spec, Variant]]`: all definitions
of variant `name` that are possible, along with their `when` specs.
* `variant_items()`: iterate over `pkg.variants.items()`, with impossible variants
filtered out.
Consolidating to these methods seems to simplify the code a lot.
3. The solver does a lot more validation on variant values at setup time now. In
particular, it checks whether a variant value on a spec is valid given the other
constraints on that spec. This allowed us to remove the crufty logic in
`update_variant_validate`, which was needed because we previously didn't *know* after
a solve which variant definition had been used. Now, variant values from solves are
constructed strictly based on which variant definition was selected -- no more
heuristics.
4. The same prevalidation can now be done in package audits, and you can run:
```
spack audit packages --strict-variants
```
This turns up around 18 different places where a variant specification isn't valid
given the conditions on variant definitions in packages. I haven't fixed those here
but will open a separate PR to iterate on them. I plan to make strict checking the
default once all existing package issues are resolved. It's not clear to me that
strict checking should be the default for the prevalidation done at solve time.
There are a few other changes here that might be of interest:
1. The `generator` variant in `CMakePackage` is now only defined when `build_system=cmake`.
2. `spack info` has been updated to support the new metadata layout.
3. Split out variant propagation into its own `.lp` file in the `solver` code.
4. Add better typing and clean up code for variant types in `variant.py`.
5. Add tests for new variant behavior.
Historically, every PR, push, etc. to Spack generates a bunch of jobs, each of which
uploads its coverage report to codecov independently. This means that we get annoying
partial coverage numbers when only a few of the jobs have finished, and frequently
codecov is bad at understanding when to merge reports for a given PR. The numbers on the
site can be weird as a result.
This restructures our coverage handling so that we do all the merging ourselves and
upload exactly one report per GitHub actions workflow. In practice, that means that
every push to every PR will get exactly one coverage report and exactly one coverage
number reported. I think this will at least partially restore peoples' faith in what
codecov is telling them, and it might even make codecov handle Spack a bit better, since
this reduces the report burden by ~7x.
- [x] test and audit jobs now upload artifacts for coverage
- [x] add a new job that downloads artifacts and merges coverage reports together
- [x] set `paths` section of `pyproject.toml` so that cross-platform clone locations are merged
- [x] upload to codecov once, at the end of the workflow
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* kokkos, kokkos-kernels, kokkos-nvcc-wrapper: add v4.4.01
* trilinos: update @[master,develop] dependency on kokkos
==> Error: InstallError: For Trilinos@[master,develop], ^kokkos version in spec must match version in Trilinos source code. Specify ^kokkos@4.4.01 for trilinos@[master,develop] instead of ^kokkos@4.4.00.
* petsc: configure requires rocm-core/rocm_version.h to detect ROCM_VERSION_MAJOR.ROCM_VERSION_MINOR.ROCM_VERSION_PATCH
* mfem: add dependency on rocprim (as needed via petsc dependency)
```
In file included from linalg/petsc.cpp:19:
In file included from linalg/linalg.hpp:65:
In file included from linalg/petsc.hpp:48:
In file included from /scratch/svcpetsc/spack.x/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/petsc-3.22.0-7dsxwizo24ycnqvwnsscupuh4i7yusrh/include/petscsystypes.h:530:
In file included from /scratch/svcpetsc/spack.x/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/rocthrust-6.1.2-ux5nmi4utw27oaqmz3sfjmhb6hyt5zed/include/thrust/complex.h:30:
/scratch/svcpetsc/spack.x/opt/spack/linux-ubuntu22.04-x86_64/gcc-11.4.0/rocthrust-6.1.2-ux5nmi4utw27oaqmz3sfjmhb6hyt5zed/include/thrust/detail/type_traits.h:29:10: fatal error: 'rocprim/detail/match_result_type.hpp' file not found
29 | #include <rocprim/detail/match_result_type.hpp>
   |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```
* Update seacas package.py
Adding libcatalyst variant to seacas package
When seacas is installed with `seacas +libcatalyst`,
a dependency on the Spack package `libcatalyst`
(which is Catalyst API 2 from Kitware) is added, and
the appropriate CMake variable for the Catalyst TPL
is set. The mpi variant option in catalyst (i.e. build
with MPI or build without MPI) is passed on to
libcatalyst. The default of the libcatalyst variant
is false/off, so if seacas is installed without
`+libcatalyst` in the spec it behaves exactly as
it did before the introduction of this variant.
* shortened line 202 to comply with < 100 characters per line style requirement
* py-httpx: New version
* [py-httpx] fix when for dependencies
* [py-httpx] organized dependencies
* [py-httpx] added version 0.27.2
---------
Co-authored-by: Alex C Leute <aclrc@rit.edu>
* Automated deployment to update package flux-sched 2024-09-05
* flux-sched: add back check for run environment
* flux-sched: add conflict for gcc and clang above 0.37.0
---------
Co-authored-by: github-actions <github-actions@users.noreply.github.com>
autotools packages with a configure script should generate the libtool
executable; there's no point in `depends_on("libtool", type="build")`:
the libtool executable in `<libtool prefix>/bin/libtool` is configured
for the wrong toolchain (libtool's %compiler instead of the package's
%compiler).
Some packages link to `libltdl.so`, which is fine, but had a wrong
dependency type.
See https://github.com/spack/spack/pull/46314#discussion_r1752940332.
This further simplifies `cxxstd` variant handling in `acts` by removing superfluous
version constraints from dependencies for `geant4` and `root`.
The version constraints in the loop are redundant with the conditional variant
values here:
```python
_cxxstd_values = (
conditional("14", when="@:0.8.1"),
conditional("17", when="@:35"),
conditional("20", when="@24:"),
)
_cxxstd_common = {
"values": _cxxstd_values,
"multi": False,
"description": "Use the specified C++ standard when building.",
}
variant("cxxstd", default="17", when="@:35", **_cxxstd_common)
variant("cxxstd", default="20", when="@36:", **_cxxstd_common)
```
So we can simplify the dependencies in the loop to:
```python
for _cxxstd in _cxxstd_values:
for _v in _cxxstd:
depends_on(f"geant4 cxxstd={_v.value}", when=f"cxxstd={_v.value} +geant4")
depends_on(f"geant4 cxxstd={_v.value}", when=f"cxxstd={_v.value} +fatras_geant4")
depends_on(f"root cxxstd={_v.value}", when=f"cxxstd={_v.value} +tgeo")
```
And avoid the potential for impossible variant expressions.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Openmpi provider statements were changed in #46102. The package change
was fine in and of itself, but apparently one of our tests depends on
the precise constraints used in those statements. I updated the test
to remove the checks for constraints that were removed.
In #44425, we add stricter variant audits that catch expressions that can never match.
This fixes 13 packages that had this type of issue.
Most packages had lingering spec expressions from before conditional variants with
`when=` statements were added. For example:
* Given `conflicts("+a", when="~b")`, if the package has since added
`variant("a", when="+b")`, the conflict is no longer needed, because
`a` and `b` will never exist together.
* Similarly, two packages that depended on `py-torch` depended on
`py-torch~cuda~cudnn`, which can't match because the `cudnn` variant
doesn't exist when `cuda` is disabled. Note that neither `+foo` nor `~foo`
matches (intentionally) if the `foo` variant doesn't exist.
* Some packages referred to impossible version/variant combinations, e.g.,
`ceed@1.0.0+mfem~petsc` when the `petsc` variant only exists at version `2`
or higher.
Some of these correct real issues (e.g. the `py-torch` dependencies would have never
worked). Others simply declutter old code in packages by making all constraints
consistent with version and variant updates.
The only one of these that I think is not all that useful is the one for `acts`,
where looping over `cxxstd` versions and package versions ends up adding some
constraints that are impossible. The additional dependencies could never have
happened, and the code is more complicated with the needed extra constraint.
I think *probably* the best thing to do in `acts` is just not to use a loop
and to write out the constraints explicitly, but maybe the code is easier to
maintain as written.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* Update var/spack/repos/builtin/packages/fms/package.py: apply a patch for fms@2023.03 to fix a compiler-options bug in the CMake config; add variant `shared` and a corresponding patch for fms@2024.02
* Fix fms package audit: use c9bba516ba.patch?full_index=1 instead of c9bba516ba.patch
* Update checksum of patch for fms@2023.03
* CUDA: support Grace Hopper 9.0a compute capability
* Fix other packages
* Add type annotations
* Support ancient Python versions
* isort
* spec -> self.spec
Co-authored-by: Andrew W Elble <aweits@rit.edu>
* [@spackbot] updating style on behalf of adamjstewart
---------
Co-authored-by: Andrew W Elble <aweits@rit.edu>
Co-authored-by: adamjstewart <adamjstewart@users.noreply.github.com>
fixes #46295
A proper solution would be a tag directive that accumulates tags
with the ones defined in base classes.
For the time being, rewrite them explicitly.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Boost: Adjust bootstrapping/b2 options as needed for Windows (the
bootstrapping phase sufficiently differs between Windows/Unix
that it is handled entirely within its own branch).
* Boost: Paths in user-config.jam should be POSIX, including on Windows
* Python: `.libs` for the Python package should return link libraries
on Windows. The libraries are also stored in a different directory.
The option config:install_missing_compilers is currently buggy,
and has been for a while. Remove it, since it won't be needed
when compilers are treated as dependencies.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* fast-float: new package
* fast-float: add test dependency
* fast-float: fix doctest dependency type
* fast-float: convert deps to tuple
* fast-float: add v6.1.5 and v6.1.6
* fast-float: patch older versions to use find_package(doctest)
* py-your: new package
Spack package recipe for YOUR, Your Unified Reader. YOUR processes pulsar data in different formats.
Output below is from `spack install py-your`:
```console
spack install py-your
==> Installing py-your-0.6.7-djfzsn2lutp24ik6wrk6tjx5f7hil76x [83/83]
==> No binary for py-your-0.6.7-djfzsn2lutp24ik6wrk6tjx5f7hil76x found: installing from source
==> Fetching https://github.com/thepetabyteproject/your/archive/refs/tags/0.6.7.tar.gz
==> No patches needed for py-your
==> py-your: Executing phase: 'install'
==> py-your: Successfully installed py-your-0.6.7-djfzsn2lutp24ik6wrk6tjx5f7hil76x
Stage: 1.43s. Install: 0.99s. Post-install: 0.12s. Total: 3.12s
```
* Removed setup_run_environment
After some testing, both `spack load` and `module load` for the package will include the bin directory generated by py-your, as well as the path to the version of Python the package was built with, without the need for the `setup_run_environment` function.
I removed that function (although, like Tamara, I thought it would be necessary based on other package setups I used as a basis for this package).
Note: I also updated the required version of py-astropy from py-astropy@4.0: to py-astropy@6.1.0:. In my test builds, the install was picking up py-astropy@4.0.1.post1 and numpy 1.26. However, when I tried to run some of the code I was getting errors about py-astropy making numpy calls that have since been removed. The newer version of py-astropy corrects these. Ideally this would be handled in the py-astropy package to make sure numpy isn't too new.
* Changed software pull location
The original package pulled a tagged release version from GitHub. That tagged version was created in 2022 and has not been updated since. It no longer runs because newer versions of numpy have removed deprecation warnings for several of their calls. The main branch of this repository has addressed these numpy issues as well as some other important fixes, but no new release has been generated. Because of this, and the apparently minimal development now going on, it is probably best to always pull from the main branch.
* [@spackbot] updating style on behalf of aweaver1fandm
* py-your: Changed software pull location
1. Restored original URL and version (0.6.7) as requested
2. Updated py-numpy dependency versions to be constrained based on the version of your. Because of numpy deprecations related to your version 0.6.7, we need to ensure that the numpy version used is not 1.24 or greater, because the deprecations were removed starting with that version.
* gptune: new test API
* gptune: cleanup; finish API changes; separate unrelated test parts
* gptune: standalone test cleanup with timeout constraints
* gptune: ensure stand-alone test bash failures terminate; enable in CI
* gptune: add directory to terminate_bash_failures
* gptune/stand-alone tests: use satisfies for checking variants
---------
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
* Add numactl 2.0.16-2.0.18
* Create link-with-latomic-if-needed-v2.0.16.patch
Add a link to libatomic, if needed, for numactl v2.0.16.
* Add some missing patches to v2.0.16
* Create numactl-2.0.18-syscall-NR-ppc64.patch
In short, we need numactl to set __NR_set_mempolicy_home_node on ppc64, if it's not already defined.
* Apply a necessary patch for v2.0.18 on PPC64
* Add libatomic patch for v2.0.16
`spack reindex` relies on projections from configuration to locate
installed specs and prefixes. This is problematic because config can
change over time, and we have reasons to do so when turning compilers
into dependencies (removing `{compiler.name}-{compiler.version}` from
projections).
This commit makes reindex recursively search for `.spack/` metadata directories.
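A hedged sketch of the search (the function name is illustrative, not Spack's actual implementation):
```python
import os

def find_install_prefixes(root):
    # A directory containing a .spack metadata dir is an install prefix,
    # wherever the configured projection happened to place it.
    for dirpath, dirnames, _ in os.walk(root):
        if ".spack" in dirnames:
            yield dirpath
            dirnames.clear()  # don't descend into the prefix itself
```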
* py-httpcore: Added new version
* [py-httpcore]
- added version 0.18.0
- restructured dependencies, as everything has a `when` and the
type/when ordering was all over the place
* [py-httpcore] ordered dependencies in the order listed in v1.0.5 pyproject.toml
---------
Co-authored-by: Alex C Leute <aclrc@rit.edu>
When a package is running `setup_dependent_package` on a parent, ensure
that module variables like `spack_cc` are available. This was often
true prior to this commit, but externals were an exception.
---------
Co-authored-by: John Parent <john.parent@kitware.com>
* New package: py-monai
* Fixed linked issues with py-monai
* [py-monai] removed extra line
* [py-monai]
- New version 1.3.2
- ran black
- update copyright
* [py-monai] added license
* [py-monai]
- moved build only dependencies
- specified newer python requirements for newer versions
---------
Co-authored-by: vehrc <vehrc@rit.edu>
* Replace if ... in spec with spec.satisfies in f* and g* packages
* gromacs: ^amdfftw -> ^[virtuals=fftw-api] amdfftw
* flamemaster: add virtuals lapack for the amdlibflame dependency
---------
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
Allow flags from different sources (compilers, `require:`, command-line
specs, and `depends_on`) to be merged together, and enforce a consistent
order among them.
The order is based on the sources, e.g. flags on specs from the command
line always come last. Some flag order consistency issues are fixed:
1. Flags from `compilers.yaml` and the command line were always intra- and
inter-source order consistent.
2. Flags from dependents and packages.yaml (introduced via `require:`)
were not: for `-a -b` from one source and `-c` from another, the final
result might rearrange `-a -b`, and would also be inconsistent in terms
of whether `-c` came before or after.
(1) is/was handled by going back to the original source, i.e., flags are
retrieved directly from the command line spec rather than the solver.
(2) is addressed by:
* Keeping track of grouped flags in the solver
* Keeping track of flag sources in the solver on a per-flag basis
The latter info is used in this PR to enforce DAG ordering on flags
applied from multiple dependents to the same package, e.g., for this
graph:
```
a
/|\
b | c
\|/
d
```
If `a`, `b`, and `c` impose flags on `d`, the combined flags on `d` will
contain the flags of `a`, `b`, and `c` -- in that order.
Conflicting flags are allowed (e.g. -O2 and -O3). `Spec.satisfies()` has
been updated such that X satisfies Y as long as X has *at least* all of
the flags that Y has. This is also true in the solver constraints.
`.satisfies` does not account for how order can change behavior (so
`-O2 -O3` can satisfy `-O3 -O2`); it is expected that this can be
addressed later (e.g. by prohibiting flag conflicts).
`Spec.constrain` and `.intersects` have been updated to be consistent
with this new definition of `.satisfies`.
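A hedged sketch of the new semantics (the package name is chosen arbitrarily):
```python
import spack.spec

x = spack.spec.Spec('zlib cflags="-O2 -g"')
y = spack.spec.Spec('zlib cflags="-O2"')
assert x.satisfies(y)      # x has at least all of y's flags
assert not y.satisfies(x)  # y is missing -g
```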
Spack can now bootstrap two new dependencies on Windows: GnuPG and file.
These dependencies are modeled as a separate package, and they install a cross-compiled binary.
Details on how the binaries are built are in https://github.com/spack/windows-bootstrap-resources
This PR adds py-pybind11 versions 2.13.0, 2.13.1, 2.13.2, 2.13.3, and
2.13.4. It also adds a new conflict between gcc 14 and pybind versions
up to and including 2.13.1.
* Allow deprecating more than one property in config
This internal change allows the customization of errors
and warnings to be printed when deprecating a property.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* fix
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
* Use a list comprehension for "issues"
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
---------
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
fixes #40791
Currently stacks behave differently if used in unify:false
environments, which leads to inconsistencies during concretization.
For instance, two abstract user specs that do not intersect with
each other might map to the same concrete spec in the
environment. This is clearly wrong.
This PR removes the best effort expansion, so that user specs
are always applied strictly.
* gaudi: Specify boost components and add +fiber for v39
* gaudi: Limit fmt version to allow building master branch
* Make boost dependencies a bit more readable
* Remove patches for no longer existing versions
* Replace if ... in spec with spec.satisfies in d* and e* packages
* Use virtuals for different mpi implementations in esmf
* esmf: ^[virtuals=mpi] mpt
* extrae: ^[virtuals=mpi] intel-oneapi-mpi
---------
Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
This change aligns the build condition for parmetis with the
`depends_on` condition.
The current build condition looks for `+parmetis` in
the spec, which is not added by the `depends_on`, as that adds
`^parmetis` instead.
* Adding additional check for omptarget library for amdgpu in nvidia environment
* Avoiding registration of duplicate when built on cuda
* Adding hsa library path in LD_LIBRARY_PATH
* Correction in hsa prefix library path in LD_LIBRARY_PATH
==> Error: InstallError: For Trilinos@[master,develop], ^kokkos version in spec must match version in Trilinos source code. Specify ^kokkos@4.4.00 for trilinos@[master,develop] instead of ^kokkos@4.3.01.
* root: Add dependency on libglx
We have been trying to build the Acts package on MacOS, and in this
process we have been running into problems with the ROOT spec on that
operating system; the primary issue we are encountering is that the
compiler is unable to find the `GL/glx.h` header, which is part of glx.
It seems, therefore, that ROOT depends on libglx, but this is not
currently encoded in the spec. This commit ensures that ROOT depends on
the virtual libglx package when both the OpenCL and X11 variants are
enabled.
* Enable builtin glew on MacOS
* Allow `root+opengl+aqua~x` on macOS
dd4hep versions up to and including 1.27 had a conflict with root
versions starting from 6.31.1, as shown in
https://github.com/AIDASoft/DD4hep/issues/1210. This PR explicitly adds
that conflict to the spec.
if [ "${{ needs.prechecks.result }}" == "failure" ] || [ "${{ needs.prechecks.result }}" == "canceled" ]; then
echo "Unit tests failed."
exit 1
else
exit 0
fi
coverage:
needs:[unit-tests, prechecks ]
uses:./.github/workflows/coverage.yml
secrets:inherit
all:
needs:[unit-tests, coverage, bootstrap ]
if:${{ always() }}
runs-on:ubuntu-latest
# See https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/accessing-contextual-information-about-workflow-runs#needs-context
steps:
- name:Status summary
run:|
if [ "${{ needs.unit-tests.result }}" == "failure" ] || [ "${{ needs.unit-tests.result }}" == "canceled" ]; then
.. note:: Windows Spec Syntax Caveats

   Windows has a few idiosyncrasies when it comes to the Spack spec syntax and the use of certain shells.
   Spack's spec dependency syntax uses the caret (``^``) character; however, this is an escape character in CMD,
   so it must be escaped with an additional caret (i.e. ``^^``).
   CMD will also attempt to interpret strings with ``=`` characters in them, so any spec including this symbol
   must be double quoted.
   Note: all of these issues are unique to CMD; they can be avoided by using PowerShell.
   For more context on these caveats see the related issues: `caret <https://github.com/spack/spack/issues/42833>`_ and `equals <https://github.com/spack/spack/issues/43348>`_
Below are more details about the specifiers that you can add to specs.
.. _version-specifier:
For example, for the ``stackstart`` variant::

   mpileaks stackstart==4  # variant will be propagated to dependencies
   mpileaks stackstart=4   # only mpileaks will have this variant value
Spack also allows variants to be propagated from a package that does
The first step to contribute new runners is to open an issue in the `spack infrastructure <https://github.com/spack/spack-infrastructure/issues/new?assignees=&labels=runner-registration&projects=&template=runner_registration.yml>`_
project. This will be reported to the Spack infrastructure team, who will guide users through the process
of registering new runners for Spack CI.
The information needed to register a runner is the motivation for the new resources, a semi-detailed description of
the runner, and finally a point of contact for maintaining the software on the runner.
The point of contact will then work with the infrastructure team to obtain runner registration token(s) for interacting
with Spack's GitLab instance. Once the runner is active, this point of contact will also be responsible for updating the
GitLab runner software to keep pace with Spack's GitLab.
Tagging
~~~~~~~
In the initial stages of runner registration it is important to **exclude** the special tag ``spack``. This will prevent
the new runner(s) from being picked up for production CI jobs while they are configured and evaluated. Once it is determined
that a runner is ready for production use, the ``spack`` tag will be added.
Because GitLab has no concept of tag exclusion, runners that provide specialized resources also require specialized tags.
For example, a basic CPU-only x86_64 runner may have the tag ``x86_64`` associated with it, while a runner containing a
CUDA-capable GPU may have the tag ``x86_64-cuda`` to denote that it should only be used for packages that will benefit from
a CUDA-capable resource.
OIDC
~~~~
Spack runners use OIDC authentication for connecting to the appropriate AWS bucket
which is used for coordinating the communication of binaries between build jobs. In
order to configure OIDC authentication, Spack CI runners use a python script with minimal
dependencies. This script can be configured for runners as seen here using the ``pre_build_script``.
.. _environments:
=====================================
Environments (spack.yaml, spack.lock)
=====================================
An environment is used to group a set of specs intended for some purpose
to be built, rebuilt, and deployed in a coherent fashion. Environments
define aspects of the installation of the software, such as:
#. *which* specs to install;
#. *how* those specs are configured; and
#. *where* the concretized software will be installed.
Aggregating this information into an environment for processing has advantages
over the *à la carte* approach of building and loading individual Spack modules.
With environments, you concretize, install, or load (activate) all of the
specs with a single command. Concretization fully configures the specs
and dependencies of the environment in preparation for installing the
software. This is a more robust solution than ad-hoc installation scripts.
And you can share an environment or even re-use it on a different computer.
Environment definitions, especially *how* specs are configured, allow the
software to remain stable and repeatable even when Spack packages are upgraded. Changes are only picked up when the environment is explicitly re-concretized.
Defining *where* specs are installed supports a filesystem view of the
environment. Yet Spack maintains a single installation of the software that
can be re-used across multiple environments.
Activating an environment determines *when* all of the associated (and
installed) specs are loaded, which limits the software loaded to the specs
actually needed by the environment. Spack can even generate a script to
load all modules related to an environment.
Other packaging systems also provide environments that are similar in
some ways to Spack environments; for example, `Conda environments
<https://conda.io/docs/user-guide/tasks/manage-environments.html>`_ or
"""Strip Windows extended path prefix from strings
Returnssanitizedstring.
Some files were not shown because too many files have changed in this diff
Show More
Reference in New Issue
Block a user
Blocking a user prevents them from interacting with repositories, such as opening or commenting on pull requests or issues. Learn more about blocking a user.