This adds some improvements to `spack find` output when in environments, based
on some thoughts about what users want to know when they're in an env.
If you're working in an environment, you mostly care about:
* What are the roots
* Which ones are installed / not installed
* What's been added that still needs to be concretized
So, this PR adds a couple tweaks to display that information more clearly:
- [x] We now display install status next to every root. You can easily see
which are installed and which aren't.
- [x] When you run `spack find -l` in an env, the roots now show their concrete
hash (if they've been concretized). They previously would show `-------`
(b/c the root spec itself is abstract), but showing the concretized root's
hash is a lot more useful.
- [x] Newly added/unconcretized specs still show `-------`, which now makes more
sense, b/c they are not concretized.
- [x] There is a new option, `-r` / `--only-roots` to *only* show env roots if
you don't want to look at all the installed specs.
- [x] Roots in the installed spec list are now highlighted as bold. This is
actually an old feature from the first env implementation, but various
refactors had disabled it inadvertently.
Reduce incidence of spurious errors by:
* Ensuring we're passing the buffer by reference
* Getting the correct short string size from the Windows API instead of computing it ourselves
* Ensuring sufficient space for the null terminator character
Add test for `windows_sfn`
Currently, if you request `pkg+example` where `example` is a conditional
variant, and you have a `pkg` in the database for which the condition
did not hold (so neither +example nor ~example), the solver would reuse it
regardless, without imposing +example.
The change rules out exactly one thing: `variant_set` without `variant_value`,
which in practice could only happen when not `node_has_variant` (i.e. when,
under the current `package.py` rules, the variant's `when=` condition did not
trigger).
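For illustration, a hypothetical package with a conditional variant could look like the sketch below (names and versions are made up):
```python
from spack.package import *

class Pkg(Package):
    """Hypothetical sketch of a package with a conditional variant."""

    # 'example' exists only for @2.0:, so an installed pkg@1.0 carries
    # neither +example nor ~example and must not be reused to satisfy
    # a request for pkg+example.
    variant("example", default=False, when="@2.0:", description="example feature")
```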
Currently, some of the tests in `spec_format` and `spec_semantics` fetch
the actual zlib repository when run, because they call `str()` on specs
like `zlib@foo/bar`, which at least currently requires a remote git clone
to resolve.
This doesn't change the behavior of git versions, but it uses our mock git
repo infrastructure and clones the `git-test` package instead of the *real*
URL from the mock `zlib` package.
This should speed up tests. We could probably refactor more so that the git
tests *all* use such a fixture, but the `checks` field, which unfortunately
tightly couples the mock git repository to the `git_fetch` tests, complicates
this. We could also consider *not* making `str()` resolve git versions, but
I did not dig into that here.
- [x] add a mock_git_test_package fixture that sets up a mock git repo *and*
monkeypatches the `git-test` package (like our git test packages do)
- [x] use fixture in `test_spec_format_path`
- [x] use fixture in `test_spec_format_path_posix`
- [x] use fixture in `test_spec_format_path_windows`
- [x] use fixture in `test_parse_single_spec`
Upon close inspection of clingo answer sets, in some cases we have "equivalent" duplicates (i.e. the same hash for the concrete spec) that differ only because of virtual nodes added to the answer set without any edge using them.
This commit adds a property `autopush` to mirrors. When true, every source build is immediately followed by a push to the build cache. This is useful in ephemeral environments such as CI / containers.
To enable autopush on existing build caches, use `spack mirror set --autopush <name>`. The same flag can be used in `spack mirror add`.
Allow reuse of specs that were built with compilers not in the current configuration. This means that specs from build caches don't need a matching compiler locally to be reused; the same applies when updating a distro. If a node needs to be built, only available compilers will be considered as candidates.
* Generally use os.replace on Windows and Linux
* Windows behavior for os.replace differs when the destination exists
and is a symlink to a directory: on Linux the dst is replaced and
on Windows this fails - this PR makes Windows behave like Linux
(by deleting the dst before doing the rename unless src and dst
are the same)
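A minimal sketch of the resulting semantics (assuming a hypothetical `rename` helper, not the exact implementation):
```python
import os

def rename(src: str, dst: str) -> None:
    # Hypothetical sketch: give os.replace Linux-like semantics on Windows.
    # On Windows, os.replace fails when dst is a symlink to a directory,
    # so delete dst first, unless src and dst are already the same file.
    if os.name == "nt" and os.path.islink(dst) and os.path.isdir(dst):
        if not (os.path.exists(src) and os.path.samefile(src, dst)):
            os.unlink(dst)
    os.replace(src, dst)
```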
* Relax compiler and target mismatches
The mismatch occurs on an edge. Previously it was assigned
the parent priority; now it is assigned the child priority.
This should make reuse from buildcaches or store more likely,
since most mismatches will be counted with "reused" priority.
* Optimize version badness for runtimes at very low priority
We don't want to e.g. switch other attributes because we
cannot reuse an old installed runtime.
* Optimize runtime attributes at very low priority
This is such that the version of the runtime would
not influence whether we should reuse a spec.
Compiler mismatches are considered for runtimes,
to avoid situations where compiling `foo%gcc@9`
brings in `gcc-runtime%gcc@13` if gcc@13 is among
the available compilers.
* Exclude specs without runtimes from reuse
This should ensure that we do not reuse specs that
could be broken, as they expect the compiler to be
installed in a specific place.
The installer runs `get_dependent_ids`, which follows edges outside the
subdag that's being installed, so it returns a superset of the actual
dependents.
That's generally fine, except that it calls `s.package` on every
dependent, which triggers a package class to be instantiated, which is a
lot of work.
Instead, compute the package id from the spec, since that's all that's
used anyway, and doing so does not trigger *lots* of slow and redundant
instantiations of package objects.
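A sketch of the idea (the id format and helper names here are illustrative, not the exact implementation):
```python
def package_id(spec) -> str:
    # Hypothetical sketch: derive the installer's package id from the spec
    # alone, so no package class has to be instantiated.
    return f"{spec.name}-{spec.version}-{spec.dag_hash()}"

def get_dependent_ids(spec) -> list:
    # Dependents may lie outside the subdag being installed; all we need
    # are their ids, not their (expensive) package objects.
    return [package_id(dep) for dep in spec.dependents()]
```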
If ONEAPI_ROOT is not set as an environment variable, the current approach will raise an error.
Instead, we can compute ONEAPI_ROOT from the compiler paths, as we do with vcvarsall.
`dpcpp` is deprecated by Intel and has long been superseded by the `oneapi` compilers.
---------
Co-authored-by: becker33 <becker33@users.noreply.github.com>
This PR allows the user to specify a path to a custom cert file (or directory) in
Spack's config:
```yaml
# This is where custom certs for proxy/firewall are stored.
# It can be a path or environment variable. To match ssl env configuration
# the default is the environment variable SSL_CERT_FILE
ssl_certs: $SSL_CERT_FILE
```
`config:ssl_certs` can be a path to a file or a directory, or it can be an environment
variable that resolves to one of those. When it points to something valid, Spack will
update the SSL context to include the custom certs, and fetching via `urllib` and `curl`
will trust the provided certs.
This should resolve many issues with fetching behind corporate firewalls.
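Under the hood this amounts to loading the certs into the SSL context; a simplified sketch, assuming the config value has already been read:
```python
import os
import ssl

def ssl_context_with_certs(ssl_certs: str) -> ssl.SSLContext:
    # Simplified sketch: resolve config:ssl_certs (a path or an environment
    # variable) and trust it as a cert file or cert directory.
    resolved = os.path.expandvars(ssl_certs)
    context = ssl.create_default_context()
    if os.path.isdir(resolved):
        context.load_verify_locations(capath=resolved)
    elif os.path.isfile(resolved):
        context.load_verify_locations(cafile=resolved)
    return context
```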
---------
Co-authored-by: psakievich <psakievich@users.noreply.github.com>
Co-authored-by: Alec Scott <alec@bcs.sh>
After #41373, where we stopped considering the source directory to be the stage for develop builds,
we resumed *deleting* the stage even after a successful build.
We don't want this for develop builds because developers need to iterate; we should keep the artifacts
unless they explicitly run `spack clean`.
Now:
- [x] Build artifacts for develop packages are not removed after a successful install
- [x] They are also not removed before an install starts, i.e. develop packages always
reuse prior artifacts, if available.
- [x] They can be deleted in any other context, e.g. by running `spack clean --stage`
Users requested an option to filter between local/upstream results in `spack find` output.
```
# default behavior, same as without --install-tree argument
$ spack find --install-tree all
# show only local results
$ spack find --install-tree local
# show results from all upstreams
$ spack find --install-tree upstream
# show results from a particular upstream or the local install_tree
$ spack find --install-tree /path/to/install/tree/root
```
---------
Co-authored-by: becker33 <becker33@users.noreply.github.com>
* Allow compilers to function across compatible OS's
* Add documentation in the default yaml
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
* Add macos-14 as a runner (Apple M1)
* Mark a test xfail
We need to check later if this test needs modifications
on Apple Silicon chips.
---------
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Co-authored-by: alalazo <alalazo@users.noreply.github.com>
* buildcache sync: manifest-glob with arbitrary destination
The current implementation of --manifest-glob is a bit restrictive,
requiring the destination to be known by the generation stage of CI.
This allows specifying an arbitrary destination mirror URL.
* Add unit test for buildcache sync with manifest
* Fix test and arguments for manifest-glob with override destination
* Add testing path for unused mirror argument
* Remove a few compilers from static test data
These compilers were used only in a handful of tests, so
they are now added only there.
* Remove clang@3.3 from unit test configuration
* Parametrize compilers.yaml
* Remove specially named gcc from static data
The compilers are used in two tests
* Remove apple-clang and macOS compilers from static data
The compiler was used only in multimethod tests
* Remove clang@3.5 (compiler seems to be unused)
* Remove gcc@4.4.0 (compiler seems to be unused)
* Exclude x86_64 tests on other architectures
* Mark two tests as for clingo only
* Update version syntax in compilers.yaml
* Parametrize tcl tests on architectures
* Parametrize lmod tests on architectures
* Substitute gcc@4.5.0 with gcc@4.8.0 so it can be used on aarch64
* Fix a few issues with aarch64 and unit-tests
It's now possible to add config on the command line with `spack -c <CONFIG_VARS> ...`, but the new `command_line` scope isn't reflected in the help output for `--scope`:
```bash
> spack help config
...
--scope {defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT
configuration scope to read/modify
...
```
This PR adds:
- A new runtime for `%oneapi` compilers, called `intel-oneapi-runtime`
- Information to both `gcc-runtime` and `intel-oneapi-runtime`, to ensure
that we don't mix compilers using different soname for either `libgfortran`
or `libifcore`
To do so, the following internal mechanisms have been implemented:
- Possibility to inject virtual dependencies from the `runtime_constraints`
callback on packages
Information has been added to `gcc-runtime` to provide the correct soname
under different conditions, depending on its `%gcc`.
Rules injected into the solver look like:
```prolog
% Add a dependency on 'gfortran@5' for nodes compiled with gcc@=13.2.0 and using the 'fortran' language
attr("dependency_holds", node(ID, Package), "gfortran", "link") :-
attr("node", node(ID, Package)),
attr("node_compiler", node(ID, Package), "gcc"),
attr("node_compiler_version", node(ID, Package), "gcc", "13.2.0"),
not external(node(ID, Package)),
not runtime(Package),
attr("language", node(ID, Package), "fortran").
attr("virtual_node", node(RuntimeID, "gfortran")) :-
attr("depends_on", node(ID, Package), ProviderNode, "link"),
provider(ProviderNode, node(RuntimeID, "gfortran")),
attr("node", node(ID, Package)),
attr("node_compiler", node(ID, Package), "gcc"),
attr("node_compiler_version", node(ID, Package), "gcc", "13.2.0"),
not external(node(ID, Package)),
not runtime(Package),
attr("language", node(ID, Package), "fortran").
attr("node_version_satisfies", node(RuntimeID, "gfortran"), "5") :-
attr("depends_on", node(ID, Package), ProviderNode, "link"),
provider(ProviderNode, node(RuntimeID, "gfortran")),
attr("node", node(ID, Package)),
attr("node_compiler", node(ID, Package), "gcc"),
attr("node_compiler_version", node(ID, Package), "gcc", "13.2.0"),
not external(node(ID, Package)),
not runtime(Package),
attr("language", node(ID, Package), "fortran").
```
This adds support for prereleases. Alpha, beta and release candidate
suffixes are ordered in the intuitive way:
```
1.2.0-alpha < 1.2.0-alpha.1 < 1.2.0-beta.2 < 1.2.0-rc.3 < 1.2.0 < 1.2.0-xyz
```
Alpha, beta and rc prereleases are defined as follows: split the version
string into components like before (on delimiters and string boundaries).
If there's a string component `alpha`, `beta` or `rc` followed by an optional
numeric component at the end, then the version is prerelease.
So `1.2.0-alpha.1 == 1.2.0alpha1 == 1.2.0.alpha1` are all the same, as usual.
The strings `alpha`, `beta` and `rc` are chosen because they match semver,
they are sufficiently long to be unambiguous, and they all contain at least
one non-hex character to distinguish them from shasum/digest type suffixes.
The comparison key is now stored as `(release_tuple, prerelease_tuple)`, so in
the above example:
```
((1,2,0),(ALPHA,)) < ((1,2,0),(ALPHA,1)) < ((1,2,0),(BETA,2)) < ((1,2,0),(RC,3)) < ((1,2,0),(FINAL,)) < ((1,2,0,"xyz"), (FINAL,))
```
The version ranges `@1.2.0:` and `@:1.1` do *not* include prereleases of
`1.2.0`.
So for packaging, if the `1.2.0alpha` and `1.2.0` versions have the same constraints on
dependencies, it's best to write
```python
depends_on("x@1:", when="@1.2.0alpha:")
```
However, `@1.2:` does include `1.2.0alpha`. This is because Spack considers
`1.2 < 1.2.0` as distinct versions, with `1.2 < 1.2.0alpha < 1.2.0` as a consequence.
Alternatively, the above `depends_on` statement can thus be written
```python
depends_on("x@1:", when="@1.2:")
```
which can be useful too: it's a short-hand that includes prereleases, while
you can still exclude them explicitly by specifying the patch version
number.
### Concretization
Concretization uses a different version order than `<`. Prereleases are ordered
between final releases and develop versions. That way, users should not
have to set `preferred=True` on every final release if they add just one
prerelease to a package. The concretizer is unlikely to pick a prerelease when
final releases are possible.
### Limitations
1. You can't express a range that includes all alpha releases but excludes all beta
releases. The only alternative is good old repeated nines: `@:1.2.0alpha99`.
2. The Python ecosystem defaults to `a`, `b`, `rc` strings, so translation of Python versions to
Spack versions requires expansion to `alpha`, `beta`, `rc`. It's mildly annoying, because
this means we may need to compute URLs differently (not done in this commit).
### Hash
Care is taken not to break hashes of versions that do not have a prerelease
suffix.
Generate CI scripts as PowerShell on Windows. This is intended to
output exactly the same bash scripts as before on Linux.
Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
Running a `spack-python` script like this:
```python
import spack
import multiprocessing
def echo(args):
    print(args)

if __name__ == "__main__":
    pool = multiprocessing.Pool(2)
    pool.map(echo, range(10))
```
will fail in `develop` with an error like this:
```console
_pickle.PicklingError: Can't pickle <function echo at 0x104865820>: attribute lookup echo on __main__ failed
```
Python expects to be able to look up the method `echo` in `sys.modules["__main__"]` in
subprocesses spawned by `multiprocessing`, but because we use `InteractiveConsole` to
run `spack python`, the executed file isn't considered to be the `__main__` module, and
lookups in subprocesses fail. We tried to fake this by setting `__name__` to `__main__`
in the `spack python` command, but that doesn't fix the fact that no `__main__` module
exists.
Another annoyance with `InteractiveConsole` is that `__file__` is not defined in the
main script scope, so you can't use it in your scripts.
We can use the [runpy.run_path()](https://docs.python.org/3/library/runpy.html#runpy.run_path) function,
which has been around since Python 3.2, to fix this.
- [x] Use `runpy` module to launch non-interactive `spack python` invocations
- [x] Only use `InteractiveConsole` for interactive `spack python`
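Conceptually, the non-interactive path now boils down to something like this simplified sketch:
```python
import runpy
import sys

# Simplified sketch: execute the script as a real __main__ module, so that
# multiprocessing can pickle its functions and __file__ is defined.
runpy.run_path(sys.argv[1], run_name="__main__")
```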
Often in containers, the files we use to detect whether a cray system supports new features are not available.
Given that the cray containers only support the newer versions, that these versions have been
around for a while at this point, and that few sites don't support them, this PR changes the
logic for detecting cray systems so that we:
1. Don't even consider whether something is the `cray` platform if `opt/cray` is not in `MODULEPATH`
2. Only use the `cray` platform if we can read files in `/opt/cray/pe` and positively detect an older version
3. Otherwise, assume we're *not* on a cray (this includes newer Cray PEs, which we treat as Linux)
`jinja2` can be a costly import, and right now it happens at startup every time we run
Spack. This slows down `spack --print-shell-vars` a bit, which is needed by `setup-env.*sh`.
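The fix is to defer the import to first use; a minimal sketch of the pattern (the function name is hypothetical):
```python
def render_template(template_text: str, **context) -> str:
    # Hypothetical sketch: import jinja2 lazily, so commands that never
    # render templates (like `spack --print-shell-vars`) don't pay for it.
    import jinja2

    return jinja2.Environment().from_string(template_text).render(**context)
```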
Patch allowing Clingo to build with VS22 has landed both in Spack
and Clingo upstream; update Spack's bootstrap constraints to handle
this.
Additionally, properly scope the patch application in the clingo
package to handle the upstream patch.
Currently (outside of this PR) when you `spack develop` a path, this path is treated as the staging
directory (this means that for example all build artifacts are placed in the develop path).
This PR creates a separate staging directory for all `spack develop`ed builds. It looks like
```
# the stage root
/the-stage-root-for-all-spack-builds/
spack-stage-<hash>
# Spack packages inheriting CMakePackage put their build artifacts here
spack-build-<hash>/
```
Unlike non-develop builds, there is no `spack-src` directory; `source_path` is the provided `dev_path`.
Instead, separately, in the `dev_path`, we have:
```
/dev/path/for/foo/
build-{arch}-<hash> -> /the-stage-root-for-all-spack-builds/spack-stage-<hash>/
```
The main benefit of this is that build artifacts for out-of-source builds that are relative to
`Stage.path` are easily identified (and you can delete them with `spack clean`).
Other behavior added here:
- [x] A symlink is made from the `dev_path` to the stage directory. This symlink name incorporates
spec details, so that multiple Spack environments that develop the same path will not conflict
with one another
- [x] `spack cd` and `spack location` have added a `-c` shorthand for `--source-dir`
Spack builds can still change the develop path (in particular to keep track of applied patches),
and for in-source builds, this doesn't change much (although logs would not be written into
the develop path). Packages inheriting from `CMakePackage` should get this benefit
automatically though.
The `patch()` directive can now be invoked with `reverse=True` to apply a patch in reverse.
This is useful for reverting commits that caused errors in projects, even if only the forward
patch is available, e.g. via a GitHub commit patch URL.
`patch(..., reverse=True)` runs `patch -R` behind the scenes. This is a POSIX option so we
can expect it to be available on the `patch` command.
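A hypothetical usage sketch in a package recipe (the URL and checksum below are placeholders):
```python
from spack.package import *

class Foo(Package):
    """Hypothetical package sketch."""

    # Revert an upstream commit using only the forward patch, e.g. one
    # fetched from a GitHub commit patch URL (placeholder values).
    patch(
        "https://github.com/example/foo/commit/abcdef.patch",
        sha256="0000000000000000000000000000000000000000000000000000000000000000",
        reverse=True,
        when="@1.2.0",
    )
```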
---------
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
fixes #43097
Before this PR, the behavior of mixins used together with
builders was to completely mask the callbacks defined in
the class coming later in the MRO.
Here we fix the behavior by accumulating all callbacks,
and de-duplicating them later.
Remove dependency on `importlib_metadata` and `pkg_resources`, which can be problematic if the version in PYTHONPATH is incompatible with the interpreter Spack is running under.
Closes #43052.
Maybe moving the argument to the `find` subcommand is a good idea, but I
just wanted to get the docs fix out.
Co-authored-by: Patrice Peterson <patrice.peterson@itz.uni-halle.de>
This PR adds the ability to load spack extensions through `importlib.metadata` entry
points, in addition to the regular configuration variable.
It requires Python 3.8 or greater to be properly supported.
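A sketch of what entry-point discovery can look like (the group name `spack.extensions` is an assumption here, not necessarily the one used):
```python
from importlib.metadata import entry_points

def discover_extensions(group: str = "spack.extensions"):
    # Hypothetical sketch: collect extension entry points, handling both the
    # dict-style API (Python 3.8/3.9) and the select() API (Python 3.10+).
    eps = entry_points()
    found = eps.select(group=group) if hasattr(eps, "select") else eps.get(group, [])
    return [ep.load() for ep in found]
```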
* ASP-based solver: improve reusing nodes with gcc-runtime
This PR skips emitting dependency constraints on "gcc-runtime",
for concrete specs that are considered for reuse.
Instead, an appropriate version of gcc-runtime is recomputed
considering also the concrete nodes from reused specs.
This ensures that root nodes in a DAG always have a runtime
at a version greater than or equal to that of their dependencies.
* Add unit-test for view with multiple runtimes
* Select latest version of runtimes in views
* Construct result keeping track of latest
* Keep ordering stable, just in case
* Execute `args.help` after setting main options so that extension commands will show with `spack -h`
---------
Co-authored-by: psakievich <psakiev@sandia.gov>
Spack merges ranges and concrete versions if they have a non-empty
intersection. That is not enough for adjacent version ranges.
This commit ensures that disjoint ranges in version lists are simplified
when their union forms a single contiguous range:
```python
"@1.0:2.0,2.1,2.2:3,4:6" # simplifies to "@1.0:6"
```
Refactoring `SpackSolverSetup` is a bit easier with type annotations, so I started
adding some. This adds annotations for the (many) instance variables on
`SpackSolverSetup` as well as a few other places.
This also refactors `condition()` to reduce redundancy and to allow
`_get_condition_id()` to be called independently of the larger condition
function.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Some builds on Windows break when encountering paths with spaces. This
reencodes some paths in Windows 8.3 filename format (when on Windows):
this serves as an equivalent identifier for the file, but in a form that
does not have spaces.
8.3 filenames are also truncated in length, which could be helpful, but
that is not the primary intended purpose of using this format.
Overall:
* nmake/msbuild packages do this generally for the install prefix
* curl/perl require additional modifications (as written now, each package
may require calls to `windows_sfn` to work when the Spack
root/install/staging prefixes contain spaces)
Some items for follow-up:
* Spack itself does not create paths with spaces "on top" of whatever
the user configures or where it is placed (e.g. the Spack root, the
staging directory, etc.), so it might be possible to edit some of these
paths once and avoid a proliferation of individual `windows_sfn`
calls in individual packages.
* This approach may result in the insertion of 8.3-style paths into
build artifacts (on Windows), handling this may require additional
bookkeeping (e.g. when relocating).
* Move spec_list into its own file, instead of __init__.py
* Remove spack.schema.spack
This module was introduced in #33960. It's almost an exact duplicate of
spack.schema.env, and is not used anywhere.
* Fix typo
* Add support for clang in oneapi packages with OpenMP
* Add fallback search for libomp in OneApi package with OpenMP threading
* Add requires for the compiler when using threads=openmp in intel-oneapi-mkl
* Cosmetic changes to messages in oneapi.py
* Update error message in oneapi.py
Co-authored-by: Robert Cohn <rscohn2@gmail.com>
* Update another error message in oneapi.py
Co-authored-by: Robert Cohn <rscohn2@gmail.com>
* Inline helper error function in oneapi.py
* Update one more error message in oneapi.py
* Wrap long line in oneapi.py
---------
Co-authored-by: Robert Cohn <rscohn2@gmail.com>
* Allow packages to request that no submodules be updated when `self.submodules` is a
callable function
* Extend the test added in "Allow more fine-grained control over what submodules are
updated: part 2" (#27293) to include this case
* Update the type signature for the submodules arg of version() in directives.py
---------
Co-authored-by: tjfulle <tjfulle@users.noreply.github.com>
* cmake: Enable CMAKE_EXPORT_COMPILE_COMMANDS
Enabling this option causes CMake to generate a compile_commands.json file
containing a compilation database that can be used to drive third-party tools.
CMAKE_EXPORT_COMPILE_COMMANDS only exists for CMake >= 3.5
Exporting compilation databases is only supported for Makefile and Ninja
generators, so check these conditions as well.
CMAKE_EXPORT_COMPILE_COMMANDS is only enabled in supported configurations
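In a package recipe, the effect is roughly equivalent to this hedged sketch (the builder applies it automatically in supported configurations):
```python
from spack.package import *

class Example(CMakePackage):
    """Hypothetical sketch: the effect of the builder change."""

    def cmake_args(self):
        # Only meaningful with CMake >= 3.5 and the Makefile or Ninja
        # generators; the builder checks those conditions itself.
        return [self.define("CMAKE_EXPORT_COMPILE_COMMANDS", True)]
```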
This fixes bugs, performance issues, and removes no longer necessary code.
Short version:
1. Creating views from Python extensions would error if the Spack `opt` dir itself was in some symlinked directory. Use of `realpath` would expand those, and keying into `merge_map` would fail.
2. Creating views from Python extensions (and Python itself, potentially) could fail if the `bin/` dir contains symlinks pointing outside the package prefix -- Spack keyed into `merge_map[target_of_symlink]` incorrectly.
3. In the `python` package the `remove_files_from_view` function was broken after a breaking API change two years ago (#24355). However, the entire function body was redundant anyways, so solved it by removing it.
4. Notions of "global view" (i.e. python extensions being linked into Python's own prefix instead of into a view) are completely outdated, and removed. It used to be supported but was removed years ago.
5. Views for Python extension would _always_ copy non-symlinks in `./bin/*`, which is a big mistake, since all we care about is rewriting shebangs of scripts; we don't want to copy binaries. Now we first check if the file is executable, and then read two bytes to check if it has a shebang, and only if so, copy the entire file and patch up shebangs.
The bug fixes for (1) and (2) basically consist of getting rid of `realpath` entirely, and instead simply keep track of file identifiers of files that are copied/modified in the view. Only after patching up regular files do we iterate over symlinks and check if they target one of those. If so, retarget it to the modified file in the view.
These 7 hooks were not used.
- Six of them, related to install phases, were unused after `spack monitor`
was removed, and the code seems to have bit-rotted, as there were
reports they were not (always?) triggered when they should have been.
- The post-environment one was made redundant after `spack install` for
environments started following the common code path for generating
module files in #42147.
It should not be a breaking change to remove them, since users cannot define
hooks in extensions; they would have to fork Spack.
If we ever _were_ to make those hooks extendable outside of core Spack,
it would also be better to start with fewer rather than more, because
everything you expose gets relied upon...
Removing those also allows us to rethink what hooks we really need, and
in particular it seems like we need a hook that runs post install also when
the spec is inserted into the database.
The lack of a rule to avoid enforcing requirements on multi-valued variants when the condition activating the requirement was not met resulted in multiple optimal solutions. The fix is to prevent imposing a requirement if the `when=` rule activating it is not met.
The section was highly outdated as it referred to old defaults, and
failed to mention `hide_implicits: true`.
This commit restructures it, moves some deeply nested sections a level
up, and promotes `hide_implicits: true` + `autoload: direct` before
talking about `exclude`.
* Registry queries can fail due to simultaneous access from other
processes. Wrap some of these accesses and retry when this may
be the cause (the generated exceptions don't allow pinpointing
this as the reason, but we add logic to identify cases which
are definitely unrecoverable, and retry if it is not one of
these).
* Make recursion optional for most registry search functions;
disable recursive search in the case where it was originally always
recursive.
Fix two separate problems:
1. We want to always visit parents before children while creating views
(when it comes to ignoring conflicts, the first instance generated in
the view is chosen, and we want the parent instance to have precedence).
Our preorder traversal does not guarantee that, but our topological-
order traversal does.
2. For copy style views with packages x depending on y, where
<x-prefix>/foo is a symlink to <y-prefix>/foo, we want to guarantee
that:
* A conflict is not registered
* <y-prefix>/foo is chosen (otherwise, the "foo" symlink would become
self-referential if relocated relative to the view root)
Note that
* This is an exception to [1] (in this case the dependency instance
overrides the dependent)
* Prior to this change, if "foo" was ignored as a conflict, it was
possible to create this self-referential symlink
Add tests for each of these cases
Sometimes the logs are too long and the copy & paste command is not
shown. In that case I'd like to just copy the failing GitLab job URL from
my browser and pass it to `spack reproduce-build <url>`.
Currently, the `SpackSolverSetup` and the `PyclingoDriver` are more coupled than necessary:
1. The driver object needs a setup object to be injected during a solve,
2. And the setup object will get a reference back to the driver
This design is necessary because we use the low-level `clingo.backend` interface to set up our problem. This interface, though, is meant to bypass the grounder and add symbols directly to the grounded table, which is a feature we don't currently use.
The PR simplifies the encoding by having the setup object return the problem-specific facts / rules as a list of strings, and the driver ingest them using the [clingo.Control.add](https://potassco.org/clingo/python-api/5.6/clingo/control.html#clingo.control.Control.add) method. This removes any use of the low-level interface.
Using this encoding makes it easy to hash the output of the setup phase, since it is returned as a string.
This "breaks" the deprecated schema by allowing unknown attributes
to the attributes section of the job types. The breaking change here is
that deprecated stacks will no longer ignore attributes that are unknown
but rather assume the new CI schema behavior of injecting them into the
generated CI configuration. This change is required to secure
authentication in Spack CI.
Improve naming, so it's clear that file "extensions" are not taken in the
`PurePath(path).suffix` sense as the original function name suggests,
but rather that the files are opened and their magic bytes are
classified.
Add type hints.
Fix a bug where `stream.read(num_bytes)` was run on the compressed
stream instead of the uncompressed stream, which can potentially break
detection of tar.bz2 files.
Ensure that when peeking into streams for magic bytes, they are reset to
their original position upon return.
Use new API in `spack logs`.
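The reset behavior amounts to roughly this simplified sketch:
```python
import io

def peek_magic_bytes(stream: io.BufferedIOBase, num_bytes: int) -> bytes:
    # Simplified sketch: read magic bytes from the (possibly decompressed)
    # stream and restore its original position upon return.
    offset = stream.tell()
    try:
        return stream.read(num_bytes)
    finally:
        stream.seek(offset)
```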
Relocation of `PT_INTERP` in ELF files already happens to work from long to short path, thanks to generic binary relocation (i.e. find and replace). This PR improves it:
1. Adds logic to grow `PT_INTERP` strings through patchelf (which is only useful if the interpreter and rpath paths are the _only_ paths in the binary that need to be relocated)
2. Makes shrinking `PT_INTERP` cleaner. Before this PR, when you used a Spack-built glibc as a link dep and relocated
executables using its dynamic linker, you'd end up with
```
$ file exe
exe: ELF 64-bit LSB pie executable, ..., interpreter /////////////////////////////////////////////////path/to/glibc/lib/ld-linux.so
```
With this PR you get something sensible:
```
$ file exe
exe: ELF 64-bit LSB pie executable, ..., interpreter /path/to/glibc/lib/ld-linux.so
```
When Spack cannot modify the interpreter or rpath strings in-place, it errors out without modifying the file, and leaves both tasks to patchelf instead.
Also add type hints to `elf.py`.
Certain versions of ifx (the majority of those available) have an issue
where they are not compatible with TMP directories containing dot characters.
This precludes their use with CMake.
Remap TMP to point to the stage directory rather than whatever the TMP
default is.
Add the empty deptype `spack.deptypes.NONE`.
Test that `traverse_nodes(deptype=spack.deptypes.NONE)` does not
traverse dependencies and only de-duplicates.
Use the construct in environment views that otherwise would branch on
whether deps are enabled or not.
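A sketch of the construct (assuming the usual traversal API):
```python
import spack.deptypes as dt
import spack.traverse as traverse

def unique_nodes(specs):
    # Hypothetical sketch: with the empty deptype, traversal visits only the
    # given specs, de-duplicated, and never follows dependency edges.
    return list(traverse.traverse_nodes(specs, deptype=dt.NONE))
```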
Previously, for abstract specs like:
```
foo ^[virtuals=a] bar ^[virtuals=b] bar
```
the second requirement was silently discarded on concretization. Now they're merged, and the abstract spec is equivalent to:
```
foo ^[virtuals=a,b] bar
```
CMake may write to and read from `~/.cmake` through `export(...)` and `find_package(...)` respectively. We don't want this, as it may influence the build in a non-deterministic way, so disable it for all versions of `cmake`.
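In terms of standard CMake variables, the effect is roughly what this hedged sketch shows (how Spack injects them is not shown here):
```python
from spack.package import *

class Example(CMakePackage):
    """Hypothetical sketch: disable the user package registry (~/.cmake)."""

    def cmake_args(self):
        return [
            # Don't write to ~/.cmake from export(...)
            self.define("CMAKE_EXPORT_NO_PACKAGE_REGISTRY", True),
            # Don't consult ~/.cmake in find_package(...)
            self.define("CMAKE_FIND_PACKAGE_NO_PACKAGE_REGISTRY", True),
        ]
```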