Add a CI check to automatically verify the checksums of newly added
package versions:
- [x] a new command, `spack ci verify-versions`
- [x] a GitHub actions check to run the command
- [x] tests for the new command
This also eliminates the suggestion for maintainers to manually verify added
checksums in the case of accidental version <--> checksum mismatches.
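For illustration only, the general shape of such a check might look like the sketch below; `checksum_matches` is a hypothetical helper, not the actual implementation of `spack ci verify-versions`:

```python
import hashlib
import urllib.request


def checksum_matches(url: str, expected_sha256: str) -> bool:
    """Download the tarball for a newly added version and compare its sha256."""
    digest = hashlib.sha256()
    with urllib.request.urlopen(url) as response:
        # hash the archive in 1 MiB chunks to avoid holding it all in memory
        for chunk in iter(lambda: response.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```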
----
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Codecov needs to see the token secret when uploading, so we have to
add this line to the workflow YAML:
```yaml
with:
  token: ${{ secrets.CODECOV_TOKEN }}
```
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
The import-check action now does a better job of presenting problematic import
statements introduced by a PR.
The idea is roughly:
* Let (V₁, E₁) be the graph of modules as vertices and import statements
as edges before the change
* Let (V₂, E₂) be the graph after the code change, which is typically a small
perturbation of (V₁, E₁).
* X₁ = FAS(V₁, E₁) is the feedback arc set before (a minimal set of edges to
delete to make it acyclic)
* X₂ = FAS(V₂, E₂ ∖ X₁) is the feedback arc set of the new graph after first removing X₁,
  the minimal set of edges whose deletion made the old graph acyclic
* X₃ = FAS(V₂, E₂) is the feedback arc set after the change, computed from scratch
Previously I displayed X₁ and X₃, and users had to compute the diff themselves.
Now, I'm showing X₂, which is a small set that is typically directly related to the
code changes.
However, a small code change that adds, say, 2 problematic imports can produce a
completely different solution X₃ that requires deleting just 1 other import. In that
case the user is informed that they can potentially do less work.
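A minimal sketch of this comparison, assuming a hypothetical `fas` solver passed in by the caller (none of these names come from the actual import-check action):

```python
from typing import Callable, Set, Tuple

Edge = Tuple[str, str]  # (importing module, imported module)
FasSolver = Callable[[Set[Edge]], Set[Edge]]  # minimal edge set whose removal breaks all cycles


def compare_import_graphs(before: Set[Edge], after: Set[Edge], fas: FasSolver):
    x1 = fas(before)      # X1: edges whose deletion makes the old graph acyclic
    x2 = fas(after - x1)  # X2: problems attributable to the change itself
    x3 = fas(after)       # X3: a minimal solution computed from scratch on the new graph

    print("likely caused by the change:", sorted(x2))
    if len(x3 - x1) < len(x2):
        # a different solution exists that needs fewer new deletions than X2 suggests
        print("however, it would suffice to remove:", sorted(x3 - x1))
    return x2, x3
```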
So for PR #48784 the output is now:
> The overall number of problematic import statements increased by 1 from 31 to 32.
> This is likely a direct consequence of the following import statements:
>
> ```
> spack/config imports: spack.spec, spack.util.path, spack.util.remote_file_cache
> ```
>
> However, instead of removing 3 import statements, it is sufficient to remove only 1
> import statement from the following list:
>
> ```
> spack/concretize imports: spack.bootstrap, spack.solver.asp
> spack/environment imports: spack.bootstrap, spack.environment
> spack/fetch_strategy imports: spack.version.git_ref_lookup
> spack/install_test imports: spack.build_environment, spack.package_base
> spack/modules imports: spack.modules
> spack/platforms imports: spack.config
> spack/relocate imports: spack.bootstrap
> spack/repo imports: spack.package_base, spack.patch, spack.tag
> spack/spec imports: spack.binary_distribution, spack.compiler, spack.compilers, spack.concretize, spack.environment, spack.hash_types, spack.provider_index, spack.repo, spack.spec_parser, spack.store, spack.traverse, spack.variant, spack.version.git_ref_lookup
> spack/subprocess_context imports: spack.environment
> spack/util/gpg imports: spack.bootstrap
> spack/util/package_hash imports: spack.package_base
> spack/util/path imports: spack.config, spack.environment
> spack/util/remote_file_cache imports: spack.util.web
> ```
from which the user can figure out that
`spack/util/remote_file_cache imports: spack.util.web` is the "bottleneck" now.
* Add type-hints to `spack.util.executable.Executable`
* Add a type-hint to `input`
* Use `typing.overload`, and remove assertions at calling sites (see the sketch after this list)
* Bump mypy to v1.11.2 (working locally) and Python to 3.13
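A minimal sketch of the overload pattern described above (illustrative only; not the actual `spack.util.executable.Executable` signature):

```python
import subprocess
from typing import Optional, Type, overload


class Executable:
    """Illustrative: the return type follows the `output` argument."""

    def __init__(self, name: str) -> None:
        self.name = name

    @overload
    def __call__(self, *args: str, output: Type[str]) -> str: ...

    @overload
    def __call__(self, *args: str, output: None = None) -> None: ...

    def __call__(self, *args: str, output: Optional[Type[str]] = None) -> Optional[str]:
        # capture and return stdout only when the caller asks for it with output=str
        result = subprocess.run([self.name, *args], capture_output=output is str, text=True)
        return result.stdout if output is str else None


# At the call site mypy now infers `str`, so no `assert out is not None` is needed:
git = Executable("git")
version: str = git("--version", output=str)
```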
A few changes to tarball creation (for build caches):
- do not run `file` to distinguish binary from text:
  - `file` is slow, even when run in a batched fashion; it usually reads all bytes and has slow logic to categorize specific types
  - we don't need a highly detailed categorization; a crude split into ELF, Mach-O, and text suffices, and detecting ELF and Mach-O is straightforward and cheap
  - detecting UTF-8 (and with that ASCII) is highly accurate: the false-positive rate decays exponentially as file size increases. Furthermore, it is not only the most common encoding, but also the most common file type in package prefixes
  - ISO-8859-1 is detected cheaply (but heuristically) too, and is sufficiently accurate once binaries and UTF-8 files have been classified (a rough sketch of this classification follows this list)
- remove `file` as a dependency of Spack in general, which makes Spack itself easier to install
- detect the file type and the need to relocate as part of creating the tarball, which is more cache-friendly and thus faster
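A rough sketch of this kind of crude classification, under the assumptions above (illustrative only, not Spack's actual implementation):

```python
# Magic numbers for the two binary formats we care about; everything else is
# classified with a cheap text heuristic.
ELF_MAGIC = b"\x7fELF"
MACHO_MAGICS = {
    b"\xfe\xed\xfa\xce", b"\xfe\xed\xfa\xcf",  # 32- and 64-bit, big endian
    b"\xce\xfa\xed\xfe", b"\xcf\xfa\xed\xfe",  # 32- and 64-bit, little endian
}


def classify(data: bytes) -> str:
    if data.startswith(ELF_MAGIC):
        return "elf"
    if data[:4] in MACHO_MAGICS:
        return "mach-o"
    try:
        data.decode("utf-8")  # false-positive rate decays rapidly with file size
        return "text (utf-8)"
    except UnicodeDecodeError:
        pass
    # crude ISO-8859-1 heuristic: printable bytes plus tab/newline/carriage return
    if all(b >= 0x20 or b in (0x09, 0x0A, 0x0D) for b in data):
        return "text (iso-8859-1)"
    return "binary (other)"
```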
`kcov` was removed in Ubuntu 24.04, and it is no longer
installable via `apt` in our CI images. Install it via
Linuxbrew instead, at least until it comes back to Ubuntu.
`subversion` is also not installed by default on Ubuntu 24.04,
so we have to install it manually.
- [x] Add linuxbrew to linux tests
- [x] Install `kcov` with brew
- [x] Install subversion with `apt`
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
The purpose of this CI job is to ensure that we
can use a modern clingo to concretize specs, e.g.
when it has been installed in a virtual environment
with pip.
Since there is no need to re-test unrelated parts
of Spack, reduce the number of tests we run to just
`concretize.py`.
This PR allows users to configure explicit splicing replacement of an abstract spec in the concretizer:

```yaml
concretizer:
  splice:
    explicit:
    - target: mpi
      replacement: mpich/abcdef
      transitive: true
```
This config block would mean "for any spec that concretizes to use mpi, splice in mpich/abcdef in place of the mpi it would naturally concretize to use." See #20262, #26873, #27919, and #46382 for PRs enabling splicing in the Spec object. This PR will be the first place the splice method is used in a user-facing manner. See https://spack.readthedocs.io/en/latest/spack.html#spack.spec.Spec.splice for more information on splicing.
This will allow users to reuse generic public binaries while splicing in the performant local mpi implementation on their system.
In the config file, the target may be any abstract spec. The replacement must be a spec that includes an abstract hash (`/abcdef`). The `transitive` key is optional, defaulting to `true` if left out.
Two important items to note:
1. When writing explicit splice config, the user is in charge of ensuring that the replacement specs they use are binary compatible with whatever targets they replace. In practice, this will likely require either specific knowledge of what packages will be installed by the user's workflow, or somewhat more specific abstract "target" specs for splicing, to ensure binary compatibility.
2. Explicit splices can cause the output of the concretizer not to satisfy the input. For example, using the config above, consider a package `hdf5/xyzabc` in a binary cache that depends on mvapich2. The command `spack install hdf5/xyzabc` will then install the result of splicing `mpich/abcdef` into `hdf5/xyzabc` in place of whatever mvapich2 spec it previously depended on. When this occurs, a warning message is printed: `Warning: explicit splice configuration has caused the concretized spec {concrete_spec} not to satisfy the input spec {input_spec}`.
Highlighted technical details of implementation:
1. This PR required modifying the installer to have two separate types of tasks, `RewireTask` and `BuildTask`. Spliced specs are queued as `RewireTask`s and standard specs are queued as `BuildTask`s. Each spliced spec retains a pointer to its `build_spec` for provenance. If a `RewireTask` is dequeued and the associated `build_spec` is neither available in the install tree nor from a binary cache, the `RewireTask` is requeued with a new dependency on a `BuildTask` for the `build_spec`, and `BuildTask`s are queued for the build spec and its dependencies (a rough sketch of this requeue logic follows this list).
2. Relocation is modified so that a spack binary can be simultaneously installed and rewired. This ensures that installing the build_spec is not necessary when splicing from a binary cache.
3. The splicing model is modified to more accurately represent build dependencies -- that is, spliced specs do not have build dependencies, as spliced specs are never built. Their build_specs retain the build dependencies, as they may be built as part of installing the spliced spec.
4. There were vestiges of the compiler bootstrapping logic that were not removed in #46237 because I asked alalazo to leave them in to avoid making the rebase for this PR harder than it needed to be. Those last remains are removed in this PR.
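A rough, hypothetical sketch of the requeue behavior described in point 1; the names `FakeSpec` and `process` are made up, and the real `RewireTask`/`BuildTask` classes in Spack's installer differ in detail:

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass(frozen=True)
class FakeSpec:
    name: str
    build_spec: Optional["FakeSpec"] = None
    dependencies: Tuple["FakeSpec", ...] = ()


@dataclass
class BuildTask:
    spec: FakeSpec


@dataclass
class RewireTask:
    spec: FakeSpec  # spliced spec; spec.build_spec records provenance


def process(queue: deque, installed: set, binary_cache: set) -> None:
    while queue:
        task = queue.popleft()
        if isinstance(task, RewireTask):
            build_spec = task.spec.build_spec
            available = build_spec.name in installed or build_spec.name in binary_cache
            if not available:
                # build_spec is not installed and not in a cache: queue BuildTasks for
                # it and its dependencies, then requeue the RewireTask behind them
                for dep in (*build_spec.dependencies, build_spec):
                    queue.append(BuildTask(dep))
                queue.append(task)
                continue
            installed.add(task.spec.name)  # rewire/relocate the existing binary
        else:
            installed.add(task.spec.name)  # standard build-and-install path
```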
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>