* `find(..., max_depth=...)` can be used to control how many directories at most to descend into below the starting point
* `find` now enters every unique (symlinked) directory once at the lowest depth
* `find` is now repeatable: it traverses the directory tree in a deterministic order
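A quick sketch of the new keyword (a hypothetical call; the exact signature of `llnl.util.filesystem.find` may differ):
```python
from llnl.util.filesystem import find

# Descend at most two directory levels below the starting point; results
# come back in a deterministic order, and each symlinked directory is
# entered only once, at its lowest depth.
headers = find("/opt/prefix", "*.h", max_depth=2)
```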
In the pure `ld` case, we weren't actually parsing `RPATH` arguments separately as we
do for `ccld`. Fix this by adding *another* nested case statement for raw `RPATH`
parsing.
There are now 3 places where we deal with `-rpath` and friends, but I don't see a great
way to unify them, as `-Wl,`, `-Xlinker`, and raw `-rpath` arguments are all ever so
slightly different.
Also, this fixes the ordering of assertions to make `pytest` diffs more intelligible.
The meaning of `+` and `-` in diffs changed in `pytest` 6.0 and the "preferred" order
for assertions became `assert actual == expected` instead of the other way around.
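For example (an illustrative test, not one from this change):
```python
def test_preferred_assertion_order():
    actual = sorted(["b", "a"])  # value produced by the code under test
    expected = ["a", "b"]        # value the test expects
    assert actual == expected    # actual first, expected second
```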
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
`cc` divides most paths up into system paths, spack managed paths, and other paths.
This gets really repetitive and makes the code hard to read. Simplify the script
by adding some functions to do most of the redundant work for us.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
This PR has two small contributions:
- It adds another phase to the timer for concretization, "construct_specs", to actually see the time the concretizer spends interpreting the `clingo` output to build the Python object for a concretized spec.
- It adds the method `Solver.solve_with_stats` to expose the timers that were already in the concretizer to the public solver API. `Solver.solve` just becomes a special case of `Solver.solve_with_stats` that throws away the timing output (which is what it was already doing).
These changes will make it easier to benchmark concretizer performance and provide a more complete picture of the time spent in the concretizer by including the time spent interpreting clingo output.
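A hedged usage sketch (the exact return shape of `solve_with_stats` is an assumption here):
```python
import spack.solver.asp as asp
from spack.spec import Spec

solver = asp.Solver()

# Assumption: solve_with_stats() returns the result together with the
# timer that solve() would otherwise throw away.
result, timer = solver.solve_with_stats([Spec("hdf5+mpi")])
timer.write_tty()  # phase timings, including "construct_specs"
```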
The idea is to go from most to least used: backward compat -> forward compat -> pinning on major or major.minor version -> pinning specific, concrete versions.
Further, the following
```python
# backward compatibility with Python
depends_on("python@3.8:")
depends_on("python@3.9:", when="@1.2:")
depends_on("python@3.10:", when="@1.4:")
# forward compatibility with Python
depends_on("python@:3.12", when="@:1.10")
depends_on("python@:3.13", when="@:1.12")
depends_on("python@:3.14")
```
is better than disjoint `when` ranges, which cause repetition of the rules on dependencies and require frequent editing of existing lines after new releases:
```python
depends_on("python@3.8:3.12", when="@:1.1")
depends_on("python@3.9:3.12", when="@1.2:1.3")
depends_on("python@3.10:3.12", when="@1.4:1.10")
depends_on("python@3.10:3.13", when="@1.11:1.12")
depends_on("python@3.10:3.14", when="@1.13:")
This PR changes the semantics of `==` for specs so that
`hdf5++mpi == hdf5+mpi`
no longer holds. It also changes the `constrain` semantics, so that a
non-propagating variant always overrides a propagating variant. This means:
`(hdf5++mpi).constrain(hdf5+mpi) -> hdf5+mpi`
Before, we had a rather odd semantic that was supposed to be covered by unit tests:
`(libelf++debug).constrain(libelf+debug+foo) -> libelf++debug++foo`
This semantic has been dropped, as it was never really exercised due to the `==` bug.
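A sketch of the new behavior in terms of `spack.spec.Spec`:
```python
from spack.spec import Spec

# New == semantics: a propagated variant is not equal to a
# non-propagated one.
assert Spec("hdf5++mpi") != Spec("hdf5+mpi")

# New constrain semantics: the non-propagating variant wins.
spec = Spec("hdf5++mpi")
spec.constrain(Spec("hdf5+mpi"))
assert spec.satisfies("hdf5+mpi")
```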
Currently, the schema reads:
```yaml
from:
- type:
    environment: path_or_name
```
but this can't be extended easily to other types, e.g. to buildcaches,
without duplicating the extension keys. Use instead:
```yaml
from:
- type: environment
  path: path_or_name
```
Currently, the `concretizer:unify:` config option only affects environments.
With this PR, it now affects any group of specs given to a command using the `parse_specs(*, concretize=True)` interface.
- [x] implementation in `parse_specs`
- [x] tests
- [x] ensure all commands that accept multiple specs and concretize use `parse_specs` interface
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* don't concretize in CI if changed packages are not in stacks
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* Generate noop job when no specs to rebuild due to untouched pruning
* Add test to verify skipping generate creates a noop job
* Changed debug for early exit
---------
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
This changes `Spec` serialization to include information about propagation for abstract specs.
This was previously not included in the JSON representation for abstract specs, and couldn't be
stored.
Now, there is a separate `propagate` dictionary alongside the `parameters` dictionary. This isn't
beautiful, but when we bump the spec version for Spack `v0.24`, we can clean up this and other
aspects of the schema.
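Illustratively, an abstract spec node might now serialize along these lines (the layout below is an assumption for illustration, not the actual schema):
```python
# Hypothetical node entry, for illustration only:
node = {
    "name": "hdf5",
    "parameters": {"mpi": True},   # ordinary variant values
    "propagate": {"debug": True},  # variants requested with ++/--
}
```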
Fixes an issue reported where `spack env depfile` + `make -j` would
non-deterministically refuse to mark all environment roots explicit.
`update_explicit` had the pattern
```python
rec = self._data[key]           # reference into the current in-memory dict
with self.write_transaction():  # may re-read the db and replace self._data
    rec.explicit = explicit     # mutates a record the db no longer holds
```
but `write_transaction` may reinitialize `self._data`, meaning that
mutating `rec` won't mutate `self._data`, and the changes won't be
persisted.
Instead, use `mark` which has a correct implementation.
Also avoids the essentially incorrect early return in `update_explicit`,
a pattern I don't think belongs in database.py: it branches on
possibly stale data to conclude there is nothing to change, when in reality
a write transaction is required to know that for a fact; but that would
defeat the purpose. So, leave this optimization to the call site.
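A sketch of the corrected pattern (assuming the `Database.mark` call shape below):
```python
import spack.store

db = spack.store.STORE.db  # the active installation database

# Hedged sketch: mark() performs the update on fresh data within a
# write transaction, so the change is actually persisted.
db.mark(spec, "explicit", True)  # spec: a concrete Spec already in the db
```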
The already concrete specs in an environment are now among the reusable specs for the concretizer.
This includes concrete specs from all include_concrete environments.
In addition to this change to the default reuse, `environment` is added as a reuse type for
the concretizer config. This allows users to specify:
```yaml
spack:
  concretizer:
    # Reuse from this environment (including included concrete) but not elsewhere
    reuse:
      from:
      - type: environment
    # or reuse from only the my_env included environment
    reuse:
      from:
      - type:
          environment: my_env
    # or reuse from everywhere
    reuse: true
```
If reuse is specified from a specific environment, only specs from that environment will be reused.
If the reused environment is not specified via include_concrete, the concrete
specs will be read at concretization time to be reused.
Signed-off-by: Ryan Krattiger <ryan.krattiger@kitware.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Currently, the order in which hooks are run is arbitrary.
This could be fixed with `sorted(list_modules(...))`, but I think it is much
clearer to just have a static list (sketched below).
Hooks are not extensible other than by modifying Spack code, which
means it's unlikely people maintain custom hooks, since they'd have
to fork Spack. And if they fork Spack, they might as well add an entry
to the list while they're continuously rebasing.
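A sketch of what the static list might look like (module names illustrative, not the exact set):
```python
# Hooks now run in this fixed, explicit order (illustrative names):
_hooks = [
    "spack.hooks.module_file_generation",
    "spack.hooks.licensing",
    "spack.hooks.sbang",
]
```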
The idea is that `spack -e env add ./concrete-spec.json` would list the
full hash in the specs, so that (a) it's not ambiguous and (b) it could
in principle result in a constant-time lookup instead of a linear-time
substring match in large build caches.
* Add a descriptor to have a class level constant
This descriptor helps intercept places where we set a value on instances.
It does not really behave like "const" in C-like languages, but it is the
simplest implementation that might still be useful; a sketch follows this list.
* Add a descriptor to deprecate properties/attributes of an object
This descriptor is used as a base class. Derived classes may implement a
factory to return an adaptor to the attribute being deprecated. The
descriptor can either warn, or raise an error, when usage of the deprecated
attribute is intercepted.
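A minimal sketch of the first descriptor (not Spack's exact implementation):
```python
class ClassProperty:
    """Return a fixed value at class level and forbid per-instance writes."""

    def __init__(self, value):
        self.value = value

    def __get__(self, instance, owner=None):
        return self.value

    def __set__(self, instance, value):
        raise AttributeError("cannot set a constant attribute")


class Example:
    build_system = ClassProperty("cmake")


Example().build_system = "make"  # raises AttributeError
```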
---------
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Currently, `spack solve` has different spec selection semantics than `spack spec`.
`spack solve` currently does not allow specifying a single spec when an environment is active.
This PR modifies `spack solve` to inherit the interface from `spack spec`, and to use
the same spec selection logic. This will allow for better use of `spack solve --show opt`
for debugging.
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Originally, concretization failed if the splice config pointed to an invalid replacement.
This PR defers the check until we know the splice is needed, so that irrelevant splices
with bad config cannot stop concretization.
While I was at it, I improved an error message from an assert to a ValueError.
* Normalize Spack Win entrypoints
Currently Spack has multiple entrypoints on Windows that in addition to
differing from *nix implementations, differ from shell to shell on
Windows. This is a bit confusing for new users and in general
unnecessary.
This PR adds a normal setup script for the batch shell while preserving
the previous "click from file explorer for spack shell" behavior.
Additionally adds a shell title to both powershell and cmd, letting
users know they are in a Spack shell.
* remove doskeys
In #44588 we added logic to suppress deprecation warnings for the
Intel classic compilers. This depended on matching against
* The compiler names (looking for icc, icpc, ifort)
* The compiler version
When using an Intel compiler with fortran wrappers, the first check
always fails. To support using the fortran wrappers (in combination
with the classic Intel compilers), we remove the first check and
suppress the warnings if just the version matches. This works because:
* The newer compilers like icx can handle (ignore) the flags that
suppress deprecation warnings
* The Cray wrappers pass the underlying compiler version (e.g. they
report what icc would report)
Connection objects are Python version, platform, and multiprocessing
start method independent, so it is better to use them than a mix of plain
file descriptors and inadequate guesses in the child process about whether
it was forked or not.
This also allows us to delete the now redundant MultiProcessFd class,
hopefully making things a bit easier to follow.
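For reference, the property being relied on (a generic sketch, not Spack code):
```python
import multiprocessing


def worker(conn):
    conn.send("ready")
    conn.close()


if __name__ == "__main__":
    # A Connection can be handed to the child regardless of whether the
    # start method is fork, forkserver, or spawn.
    parent, child = multiprocessing.Pipe()
    proc = multiprocessing.Process(target=worker, args=(child,))
    proc.start()
    print(parent.recv())  # "ready"
    proc.join()
```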
This allows the following
```python
cache.init_entry("my/cache")
with cache.read_transaction("my/cache") as f:
data = f.read() if f is not None else None
```
mirroring `write_transaction`, which returns a tuple `(old, new)` where
`old` is `None` if the cache file did not exist yet.
The alternative that requires less defensive programming on the call
site would be to create the "old" file upon first read, but I did not
want to think about how to safely atomically create the file, and it's
not unthinkable that an empty file is an invalid format (for instance,
the call site may expect a JSON file, which requires at least the two
bytes `{}`).
This PR is in response to a question in the `environments` slack channel (https://spackpm.slack.com/archives/CMHK7MF51/p1729200068557219) about inadequate CLI help/documentation for one specific subcommand.
This PR uses the approach I took for the descriptions and help of the `spack test` subcommands. Namely, I use the first line of the relevant docstring as the description, which is shown per subcommand in `spack env -h`, and the entire docstring as the help. I then added help where it seemed appropriate. I also tweaked argument docstrings to tighten them up, make them consistent with similar arguments elsewhere in the command, and elaborate where it seemed important. (The only subcommand I didn't touch is `loads`.)
For example, before:
```
$ spack env update -h
usage: spack env update [-hy] env

positional arguments:
  env               name or directory of the environment to activate

optional arguments:
  -h, --help        show this help message and exit
  -y, --yes-to-all  assume "yes" is the answer to every confirmation request
```
After the changes in this PR:
```
$ spack env update -h
usage: spack env update [-hy] env

update the environment manifest to the latest schema format

update the environment to the latest schema format, which may not be
readable by older versions of spack

a backup copy of the manifest is retained in case there is a need to revert
this operation

positional arguments:
  env               name or directory of the environment

optional arguments:
  -h, --help        show this help message and exit
  -y, --yes-to-all  assume "yes" is the answer to every confirmation request
```
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Commit aa0825d642 accidentally added a semicolon
to the ANSI escape sequence even if the color code was `None` or unknown, breaking the
bold, uncolored font-face. This PR restores the old behavior.
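A sketch of the failure mode (simplified, not the actual implementation in `llnl.util.tty.color`):
```python
def escape(styles):
    # styles: a list of ANSI codes, e.g. ["1"] for bold or ["1", "31"]
    # for bold red; ";" must only separate codes that are present.
    return "\033[" + ";".join(styles) + "m"

escape(["1", "31"])  # "\x1b[1;31m" -> bold red
escape(["1"])        # "\x1b[1m"    -> bold; the bug emitted "\x1b[1;m"
```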
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* Add type hints to all query* methods
* Inline docstrings
* Change defaults from `any` to `None` so they can be type hinted in old Python
* Pre-filter on given hashes instead of iterating over all db specs
* Fix a bug where the `--origin` option of uninstall had no effect
* Fix a bug where query args were not applied when searching by concrete spec
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Fixes a change in behavior/bug in
70412612c7, where partial environment
installs would mark the selected spec as explicitly installed, even if
it was not a root of the environment.
The desired behavior is that roots are, by definition, the specs to be
explicitly installed. The specs on the `spack -e ... install x`
command line are just filters for partial installs, so leave them
implicitly installed if they aren't roots.
ci: Remove deprecated logic from the ci module
Remove the following from the ci module, schema, and tests:
- deprecated ci stack and handling of old ci config
- deprecated mirror handling logic
- support for artifacts buildcache
- support for temporary storage url
Both `multiprocessing.connection.Connection.__del__` and `io.IOBase.__del__` called `os.close` on the same file descriptor. As of Python 3.13, this triggers an explicit warning. Ensure we close only once by using `os.fdopen(..., closefd=False)`.
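A sketch of the pattern (not the exact call site):
```python
import multiprocessing
import os

parent, child = multiprocessing.Pipe()

# closefd=False: the wrapping stream never closes the descriptor, so
# only the Connection's __del__ does, and the fd is closed exactly once.
stream = os.fdopen(child.fileno(), "wb", buffering=0, closefd=False)
```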
* stacks: add a stack for devtools on darwin
After getting this whole mess building on darwin, let's keep it that
way, and maybe make it so we have some non-ML darwin binaries in spack
as well.
* reuse: false for devtools
* dtc: fix darwin dylib name and id
On mac the convention is `lib<name>.<num>.dylib`, while the makefile
creates one with the number as a suffix by default. The id in the file
is also a local name rather than being rewritten to the full path; this
fixes both problems.
* node-js: make whereis more deterministic
* relocation(darwin): catch Mach-O load failure
The MachO library can throw an exception rather than return no headers;
this happened with an ELF file in the test data of go-bootstrap. Try
catching the exception and moving on for now. We may also need to look
into why we're trying to rewrite an ELF file.
* qemu: add darwin flags to clear out warnings
There's a build failure for qemu in CI, but it's invisible because of
the immense mass of warning output. Explicitly specify the target macos
version and remove the extraneous unknown-warning-option flag.
* dtc: libyaml is also a link dependency
libyaml is required at runtime to run the dtc binary; lack of it caused
the CI for qemu to fail when the library wasn't found.
fixes #47101
The bug was introduced in #33495, where `spack find` was not updated,
and wasn't caught by unit tests.
Now a Database can accept a custom predicate to select the installation
records. A unit test is added to prevent regressions. The weird convention
of having `any` as a default value has been replaced by the more commonly
used `None`.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
#44327 made sure to always run `set_package_py_globals` on all
packages before running `setup_dependent_package` for any package,
so that packages implementing the latter could depend on variables
like `spack_cc` being defined.
This ran into an undocumented dependency: `std_cmake_args` is set in
`set_package_py_globals` and makes use of `cmake_prefix_paths` (if it
is defined in the package); `py-torch`'s implementation of
`cmake_prefix_paths` depends on a variable set by
`setup_dependent_package` (`python_platlib`).
This generally restores #44327, and corrects the resulting issue by
moving assignment of `std_cmake_args` to after both actions have been
run.
Remove the constraint for concrete specs and simply take max(version)
if a version is not given. This should default to the highest infinity
version, which is also the logical best guess for doing development.
* Remove concrete version constraint for develop, set docs
* Add unit-test
* Update lib/spack/docs/environments.rst
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
* Update lib/spack/spack/cmd/develop.py
Co-authored-by: Greg Becker <becker33@llnl.gov>
* Consolidate env collection in cmd
* Style
---------
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
Co-authored-by: Greg Becker <becker33@llnl.gov>
The CMake builder in Spack actually adds incorrect rpaths. They are
unfiltered and incorrectly ordered compared to what the compiler wrapper
adds.
There is no need to specify paths to dependencies in `CMAKE_INSTALL_RPATH`
because of two reasons:
1. CMake preserves "toolchain" rpaths, which includes the rpaths injected
by our compiler wrapper.
2. We use `CMAKE_INSTALL_RPATH_USE_LINK_PATH=ON`, so libraries we link
to are rpath'ed automatically.
However, CMake does not create install rpaths to directories in the package's
own install prefix, so we set `CMAKE_INSTALL_RPATH` to the educated guess
`<prefix>/{lib,lib64}`, but omit dependencies.
Some assertions are not testing DAG invariants, and they pass only
because of the simple structure of the builtin.mock repository on develop.
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
This PR allows users to configure explicit splicing replacement of an abstract spec in the concretizer.
```yaml
concretizer:
  splice:
    explicit:
    - target: mpi
      replacement: mpich/abcdef
      transitive: true
```
This config block would mean "for any spec that concretizes to use mpi, splice in mpich/abcdef in place of the mpi it would naturally concretize to use". See #20262, #26873, #27919, and #46382 for the PRs enabling splicing in the Spec object. This PR is the first place the splice method is used in a user-facing manner. See https://spack.readthedocs.io/en/latest/spack.html#spack.spec.Spec.splice for more information on splicing.
This will allow users to reuse generic public binaries while splicing in the performant local mpi implementation on their system.
In the config file, the target may be any abstract spec. The replacement must be a spec that includes an abstract hash `/abcdef`. The transitive key is optional, defaulting to true if left out.
Two important items to note:
1. When writing explicit splice config, the user is in charge of ensuring that the replacement specs they use are binary compatible with whatever targets they replace. In practice, this will likely require either specific knowledge of what packages will be installed by the user's workflow, or somewhat more specific abstract "target" specs for splicing, to ensure binary compatibility.
2. Explicit splices can cause the output of the concretizer not to satisfy the input. For example, using the config above, consider a package in a binary cache `hdf5/xyzabc` that depends on mvapich2. Then the command `spack install hdf5/xyzabc` will instead install the result of splicing `mpich/abcdef` into `hdf5/xyzabc` in place of whatever mvapich2 spec it previously depended on. When this occurs, a warning message is printed: `Warning: explicit splice configuration has caused the concretized spec {concrete_spec} not to satisfy the input spec {input_spec}`.
Highlighted technical details of implementation:
1. This PR required modifying the installer to have two separate types of Tasks, `RewireTask` and `BuildTask`. Spliced specs are queued as `RewireTask` and standard specs are queued as `BuildTask`. Each spliced spec retains a pointer to its build_spec for provenance. If a RewireTask is dequeued and the associated `build_spec` is neither available in the install_tree nor from a binary cache, the RewireTask is requeued with a new dependency on a BuildTask for the build_spec, and BuildTasks are queued for the build spec and its dependencies.
2. Relocation is modified so that a spack binary can be simultaneously installed and rewired. This ensures that installing the build_spec is not necessary when splicing from a binary cache.
3. The splicing model is modified to more accurately represent build dependencies -- that is, spliced specs do not have build dependencies, as spliced specs are never built. Their build_specs retain the build dependencies, as they may be built as part of installing the spliced spec.
4. There were vestiges of the compiler bootstrapping logic that were not removed in #46237 because I asked alalazo to leave them in to avoid making the rebase for this PR harder than it needed to be. Those last remains are removed in this PR.
Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
* cuda: Add 12.6.2
* Update cuda build system
- Remove gcc@6 conflict that was only a deprecation (probably has to be added again with cuda@13)
- Update cuda_arch support by CUDA version
- Kepler support has ended with cuda@12
- The recently added 90a Hopper "experimental" features architecture was
  missing the dependency on `cuda@12:`
* CMake: Improve incremental build speed.
CMake automatically embeds an updated configure step into make/ninja, which is called during the build phase. By default, if a `CMakeCache.txt` file exists in the build directory, CMake will use it; this plus `spec.is_develop` is sufficient evidence of an incremental build (see the sketch below).
This PR removes duplicate work/expense from CMake packages when using `spack develop`.
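Roughly, the check amounts to the following (attribute and helper names assumed):
```python
import os


def is_incremental_build(spec, build_directory):
    # Hypothetical helper: an existing CMakeCache.txt in a develop build
    # is taken as sufficient evidence of an incremental build.
    cache = os.path.join(build_directory, "CMakeCache.txt")
    return spec.is_develop and os.path.exists(cache)
```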
* Update cmake.py
* [@spackbot] updating style on behalf of psakievich
* Update cmake.py
meant self not spec...
---------
Co-authored-by: psakievich <psakievich@users.noreply.github.com>
* Environment.clear: ensure clearing is passed through to manifest
* test/cmd/env: make test_remove_command round-trip to disk
* cleanup vestigial variables
Some Windows Python installations may store the Python exe in Scripts/
rather than the base directory. Update `.command` to search in both
locations on Windows. On all systems, the search is now done
recursively from the search root: on Windows, that is the base install
directory, and on other systems it is bin/.
* Remove "modify_object_macholib"
According to the documentation, this function is used when installing
Mach-O binaries on Linux. The implementation seems questionable at
best, and the code seems never to be hit (Spack currently doesn't
support installing Mach-O binaries on Linux).
* Fix relocation on macOS, when store projection changes
---------
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Enable tests for symlink-based views (this works with almost no
modifications to the view logic). View logic is not yet robust
for hardlink/junction-based views, so those are disabled for now
(both in the tests and as subcommands to `spack view`).
Also adds support for ParaView and CMake to build with Qt support on
Windows.
The remaining edits are to enable building of Qt itself on Windows:
* Several packages needed to update `.libs` to properly locate
libraries on Windows
* Qt needed a patch to allow it to build using a Python with a space
in the path
* Some Qt dependencies had not been ported to Windows yet
(e.g. `harfbuzz` and `lcms`)
This PR does not provide a GL implementation sufficient for Qt to use Qt
Quick2; as such, Qt Quick2 is disabled on the Windows platform by this PR.
---------
Co-authored-by: Dan Lipsa <dan.lipsa@kitware.com>
* Bugfix/Installer: properly track task queueing
* Move ordinal() to llnl.string; change time to attempt
* Convert BuildTask to use kwargs (after pkg); convert STATUS_ to BuildStats enum
* BuildTask: instantiate with keyword only args after the request
* Installer: build request is required for initializing task
* Installer: only the initial BuildTask cannot have status REMOVED
* Change queueing check
* ordinal(): simplify suffix determination [tgamblin]
* BuildStatus: ADDED -> QUEUED [becker33]
* BuildTask: clarify TypeError for 'installed' argument
`spack gc` has so far been a global or environment-specific thing.
This adds the ability to restrict garbage collection to specific specs,
e.g. if you *just* want to get rid of all your unused python installations,
you could write:
```console
spack gc python
```
- [x] add `constraint` arg to `spack gc`
- [x] add a simple test
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
There was a bit of mystery surrounding the arguments to `_setup_pkg_and_run`. It passes
two file descriptors for handling `gmake`'s job server in child processes, but they are
unused in the method.
It turns out that there are good reasons to do this -- depending on the multiprocessing
backend, these file descriptors may be closed in the child if they're not passed
directly to it.
- [x] Document all arguments to `_setup_pkg_and_run`.
- [x] Add type hints for `_setup_pkg_and_run`.
- [x] Refactor exception handling in `_setup_pkg_and_run` so it's easier to add type
hints. `exc_info()` was problematic because it *can* return `None` (just not
in the context where it's used). `mypy` was too dumb to notice this.
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* Revert "`cc`: ensure that RPATHs passed to linker are unique"
This reverts commit 2613a14c43.
* Revert "`cc`: simplify ordered list handling"
This reverts commit a76a48c42e.
Updated the terminology for the two types of environments to be
consistent with that used in the tutorial for the last three years.
Additionally:
* changed 'anonymous' to 'independent' in the environment command+test for consistency.