When Spack concretizes environments, it prints every (newly concretized) root spec
individually with all of its dependencies. For most reasonably sized environments, this
is too much output. This is true for three commands:
* `spack concretize` when concretizing an environment with newly added specs
* `spack install` when installing an environment with newly added specs
* `spack spec` with no arguments in an environment
The output dates back to before we had unified environments or nicer spec traversal
routines, and we can improve it.
This PR makes environment concretization output analogous to what we do for regular
specs. Just like `spack spec` for a single spec, we show all root specs with no
indentation, so you can easily see the specs you explicitly requested. Dependencies are
shown:
1. With indentation according to their depth in a breadth-first traversal starting at
the roots;
2. Only once if they appear on paths from multiple roots.
So the default is now consistent with `spack spec` for a single spec: it uses
`--cover=nodes`. That is, if there are 100 specs in your environment, you'll
get 100 lines of output.
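That rule amounts to a breadth-first walk over the DAG with a "seen" set. A minimal sketch of the idea (not Spack's implementation; the tiny dependency dict and helper name below are made up for illustration):
```python
from collections import deque

def show_tree(roots, deps):
    """roots: list of spec names; deps: dict mapping a name to its dependencies."""
    seen = set()
    queue = deque((root, 0) for root in roots)
    while queue:
        name, depth = queue.popleft()
        if name in seen:
            continue  # a dependency reachable from several roots prints only once
        seen.add(name)
        print("    " * depth + name)  # indent by breadth-first depth
        queue.extend((child, depth + 1) for child in deps.get(name, []))

# Two roots sharing a dependency: "zlib" appears only once in the output.
show_tree(["hdf5", "libpng"], {"hdf5": ["zlib"], "libpng": ["zlib"]})
```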
If you want to see more details, you can do that with `spack spec` using the arguments
you're already familiar with. For example, if you wanted to see dependency types and
*all* dependencies, you could use `spack spec -l --cover=edges`. Or you could add
deptypes and namespaces with, e.g. `spack spec -ltN`.
With no arguments in an environment, `spack spec` concretizes (if necessary) and shows
the concretized environment. If you run `spack concretize` *first*, inspecting the
environment repeatedly with `spack spec` will be fast, as everything is already in the
`spack.lock` file.
- [x] factor most of the logic of `Spec.tree()` out of the `Spec` class into `spack.spec.tree()`,
which can take multiple specs as roots.
- [x] make `Spec.tree()` call `spack.spec.tree()`
- [x] `spack.environment.display_specs()` now uses `spack.spec.tree()`
- [x] Update `spack concretize`
- [x] Update `spack install`
- [x] Update `spack spec` to call `spack.spec.tree()` for environments.
- [x] Continue to output specs individually for `spack spec` when using
`--yaml` or `--json`
`BuildcacheBootstrapper` uses `Spec.intersects` to match specs needed
for bootstrapping against the binary cache. The specs were not
sufficiently detailed to prevent matching, e.g., cached macOS binaries on
Windows; this commit adds the platform to each requested
bootstrap spec to prevent that.
Remove support for `cray` as a separate platform.
Any platform previously detected as `cray` is now detected as `linux`.
Users who still need `platform=cray` have to stick to Spack 0.22.
Fixes a bug in the concretizer where specs depending on a host-incompatible
libc would be used. This bug triggers when nothing is built.
In the case where everything is reused, there is no libc provider from
the perspective of the solver; there is only `compatible_libc`. This
commit ensures that we require a host-compatible libc on any reused
spec, in addition to requiring compatibility with the chosen libc provider.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Change the installer to take `([pkg], args)` in the constructor instead
of `[(pkg, args)]`. The reason is that certain arguments are global
settings, and the new API ensures that those arguments cannot be
different across different "build requests".
The `explicit` install arg is now a list of hashes, and the installer is
no longer responsible for determining what package is installed
explicitly. This way environment installs can simply pass the list of
environment roots, without them necessarily being explicit build
requests. For example, an env with two roots `[a, b]`, where `b` depends on
`a`, would not always cause `spack install` to mark `b` as explicit.
Notice that `overwrite` already took a list of hashes; this makes
`explicit` consistent with it.
`package.do_install(explicit=True)` continues to take a boolean.
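As a toy illustration of the shape of the change (not Spack's actual `PackageInstaller` API; the class and argument names below are stand-ins): global arguments are passed once for the whole request, and `explicit` is a list of hashes rather than a per-package boolean.
```python
from typing import Dict, List

class ToyInstaller:
    """Stand-in installer: one package list plus one shared args dict."""

    def __init__(self, pkgs: List[str], install_args: Dict):
        self.pkgs = pkgs
        # `explicit` is a list of identifiers (hashes in Spack), not a boolean
        self.explicit = set(install_args.get("explicit", []))

    def install(self):
        for pkg in self.pkgs:
            print(f"installing {pkg} (explicit: {pkg in self.explicit})")

# An environment with roots [a, b] passes just the root "hashes" as explicit;
# dependency c is installed but not marked explicit.
ToyInstaller(["c", "a", "b"], {"explicit": ["a", "b"]}).install()
```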
The Windows wrappers for basic functions like `os.symlink`,
`os.readlink` and `os.path.islink` in the `llnl.util.symlink` module
have bugs, and trigger more file system operations on non-Windows than
they should.
This commit just binds `llnl.util.symlink.symlink = os.symlink` etc., so
the built-in functions are used on non-Windows platforms.
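A minimal sketch of that idea (the real module is `llnl.util.symlink`; the exact conditional below is an assumption about how the binding could look):
```python
import os
import sys

if sys.platform != "win32":
    # On non-Windows, just re-export the built-ins: no wrapper overhead,
    # no extra file system calls.
    symlink = os.symlink
    readlink = os.readlink
    islink = os.path.islink
else:
    # Windows keeps wrapper implementations (junctions, hardlink fallbacks,
    # longpath handling, ...).
    pass
```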
`spack clean <spec>` will now resolve specs against the active environment, if one is active.
If an env is active but no matching spec is found, this will fall back on fully concretizing the spec.
Before this PR, if Spack could see a possibility to reuse a spec that
doesn't match a strong preference, it would do so. After the PR, a
strong preference would take precedence.
Avoid calling `spec.target` when it is None.
When an external compiler package has an `os` set but no `target` set, Spack
currently falls into a codepath that calls `spec.target` (which itself calls
`spec.architecture.target.Microarchitecture`) when `spec.architecture.target`
is None, throwing an error.
e.g.:
```yaml
packages:
  gcc:
    externals:
    - spec: gcc@12.3.1 os=rhel7
      prefix: /usr
```
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
This fixes a bug occurring when two root specs need to select
old versions, and these versions have the same penalty in the
optimization. This sometimes caused an older version to be
preferred to a more recent one.
The issue was the omission of `PackageNode` in the optimization
tuple.
Use the correct path separator in `get_all_package_diffs` for all platforms.
Ensures correct package change computation on Windows when pruning unchanged specs in GitLab CI.
This fixes an issue where ghcr, gitlab, and possibly other container registries paginate tags by default, which violates the OCI spec v1.0 but is common practice (the spec itself was broken). After this commit, you can create build cache indices of more than 100 specs on ghcr.
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Add support for GitLab CI on Windows
This PR adds the config changes required to configure and execute
GitLab pipelines running Windows builds on Windows runners using
the existing GitLab CI infrastructure (and newly added Windows
infrastructure).
* Adds support for generating child pipelines dispatched to Windows runners
* Refactors the relevant pre-scripts, scripts, and post scripts to be compatible with Windows
* Adds Windows config section describing Windows jobs
* Adds VTK as Windows build stack (to be expanded later)
* Modifies proj to build on Windows
* Refactors Windows rpath symlinking to avoid system libs and externals
---------
Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
Co-authored-by: Mike VanDenburgh <michael.vandenburgh@kitware.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
Symlinks on Windows can use longpath prefixes (\\?\); these are fine
in the context of win32 API interactions but break numerous facets of
Spack behavior that rely on string parsing/matching (archiving,
binary distributions, tarball extraction, view regeneration, etc.).
Spack's internal readlink method (llnl.util.symlink.readlink)
gracefully handles this by removing the prefix and otherwise behaving
exactly as os.readlink does, so we should prefer that in all cases.
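A hedged sketch of that prefix handling (illustrative only; `llnl.util.symlink.readlink` is the real implementation and does more than this):
```python
import os

def readlink_without_longpath_prefix(path: str) -> str:
    r"""Like os.readlink, but strip the Windows long-path prefix (\\?\) if present."""
    target = os.readlink(path)
    # "\\\\?\\" is the escaped form of the 4-character prefix \\?\
    return target[4:] if target.startswith("\\\\?\\") else target
```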
Apparently urllib can throw a range of different exceptions:
1. `HTTPError`
2. `URLError` with `e.reason` set to the actual exception
3. `TimeoutError` from `getresponse`, which is not wrapped
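A sketch of handling all three failure modes when fetching a URL with urllib (the helper name and error messages are illustrative, not Spack's code):
```python
import urllib.error
import urllib.request

def fetch(url: str) -> bytes:
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.read()
    except urllib.error.HTTPError as e:
        # subclass of URLError, so it must be caught first
        raise RuntimeError(f"server returned {e.code} for {url}") from e
    except urllib.error.URLError as e:
        # e.reason holds the underlying exception (e.g. a socket error)
        raise RuntimeError(f"cannot reach {url}: {e.reason}") from e
    except TimeoutError as e:
        # raised from getresponse(); not wrapped in URLError
        raise RuntimeError(f"timed out fetching {url}") from e
```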
* archive: relative links only
Ensure all links written into tarfiles generated from Spack prefixes do not contain symlinks pointing outside the prefix
* binary_distribution: limit extraction to prefix
Ensure files extracted from spackballs are not links pointing outside of the prefix
* Ensure rpaths are properly set on Windows
* hard error on extraction of absolute links
* refactor for non link-modifying approach
* Restore tarball extraction to original impl
* use custom readlink
* cleanup symlink module
* make lstrip
Add the ability to include any number of (potentially nested) concrete environments, e.g.:
```yaml
spack:
  specs: []
  concretizer:
    unify: true
  include_concrete:
  - /path/to/environment1
  - /path/to/environment2
```
or, from the CLI:
```console
$ spack env create myenv
$ spack -e myenv add python
$ spack -e myenv concretize
$ spack env create --include-concrete myenv included_env
```
The contents of included concrete environments' spack.lock files are
included in the environment's lock file at creation time. Any changes
to included concrete environments are only reflected after the environment
is re-concretized from the re-concretized included environments.
- [x] Concretize included envs
- [x] Save concrete specs in memory by hash
- [x] Add included envs to combined env's lock file
- [x] Add test
- [x] Update documentation
Co-authored-by: Kayla Butler <butler59@llnl.gov>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Currently SPACK_COLOR=always is not respected in the build process on
macOS, because the global `_force_color` is re-evaluated in global scope
during module setup, where it is always `None`.
So, move the global init bits from `main.py` to the module itself.
Some specs that were excluded from reuse are currently added back to the
solve when we traverse dependencies of other reusable specs.
This fixes the issue by keeping track of what we can explicitly reuse.
This commit adds a layer of indirection to improve build isolation with
and without external Python, as well as usability of environment views.
It adds `python-venv` as a dependency to all packages that `extends("python")`,
which has the following advantages:
1. Build isolation: only `PYTHONPATH` is considered in builds, not
user / system packages
2. Stable install layout: fixes the problem on Debian, RHEL and Fedora where
external / system python produces `bin/local` subdirs in Spack install prefixes.
3. Environment views are Python virtual environments (and if you add
`py-pip` things like `pip list` work)
Views work whether they're symlink, hardlink or copy type.
This commit additionally makes `spec["python"].command` return
`spec["python-venv"].command`. The rationale is that packages in repos we do
not own do not pass the underlying python to the build system, which could still
result in incorrectly computed install layouts.
Other attributes like `libs` and `headers` should be on `python` anyway and need no change.
Currently bootstrapping from source fails because clingo requires gnupg,
which in turn requires clingo.
This commit stops eager bootstrapping. We don't need `patchelf` nor `gnupg`
generally. They're bootstrapped when needed.
This creates shared infrastructure for compiler packages to implement the
detailed search capabilities from the `spack compiler find` command for the
`spack external find` command.
After this commit, `spack compiler find` can be replaced with
`spack external find --tag compiler`, with the exception of mixed toolchains.
A named env cannot contain `.` or `/`.
So when a user runs `spack env create ./here`, do not error but treat it
as `spack env create -d ./here`.
Also fix the help string of `spack env create`, which seems to have been
copied incorrectly from `activate`.
Since reuse is the default now, `--reuse-deps` can be confusing, as it
technically does not imply roots are fresh.
So add `--fresh-roots`, which is also easier to discover when running
`spack concretize --fre<tab>`.
We recently switched to using the new ReadTheDocs with "addons". That includes its own
analytics, which is nice, but we also want to continue using our GA4 analytics.
Adding GA4 is no longer supported by RTD, so we have to add it manually.
- [x] re-add the gtag to all pages, manually
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Adds a pre-concretization check for the Windows SDK and WGL (Windows
GL) packages as non-buildable externals.
This is a redo of https://github.com/spack/spack/pull/43459, but makes
sure to modify the configuration scope outside of the bootstrap scope:
whichever is highest-precedence in the user's environment at the time
the concretization runs, which should be either an env scope or the `~`
scope.
Adds a pytest fixture that mocks the check for WGL and WSDK as if they
were present.
This PR gives users finer control over which specs are reused during concretization.
The value of the `concretizer:reuse` config option can now take an object with the following properties:
- `roots`: true if reusing roots, false if reusing just dependencies
- `exclude`: list of constraints used to select reusable specs
- `include`: list of constraints used to select reusable specs
- `from`: allows selecting the sources of reused specs
### Examples
#### Reuse only specs compiled with GCC
```yaml
concretizer:
  reuse:
    roots: true
    include:
    - "%gcc"
```
#### `openmpi` must be used from externals, and it must be the only external used
```yaml
concretizer:
  reuse:
    roots: true
    from:
    - type: local
      exclude:
      - "openmpi"
    - type: buildcache
      exclude:
      - "openmpi"
    - type: external
      include:
      - "openmpi"
```
* PackageStillNeededError: add pkg that needs spec to exception msg
* PackageStillNeededError: f-string with short fmt and hash
* PackageStillNeededError: split long string
The old concretizer creates a cyclic graph when expanding virtuals for
`iconv`, which is a bug. This hack drops glibc and musl as possible
providers for `iconv` in the old concretizer to work around it.
Add a debug log for external detection tests. The debug log
prints which test is being executed.
Skip version audit on Windows where appropriate
We run `extend spack_flags_list SPACK_LDFLAGS` for `$mode in ld|ccld`.
That's problematic, because `ccld` needs `-Wl,--flag` whereas `ld` needs
`--flag` directly. Only `-L` and `-l` are common to the compiler and linker.
In all build systems `LDFLAGS` is for the compiler, not the linker, because
any linker flag `-x` can be passed as a compiler flag `-Wl,-x`, and there
are many compiler flags that affect the linker invocation, like `-fopenmp`,
`-fuse-ld=`, `-fsanitize=`, etc.
So don't pass `LDFLAGS` to the linker directly.
This way users can set `ldflags: -Wl,--allow-shlib-undefined` in compilers.yaml
to work around an issue where the linker tries to resolve the `libcuda.so.1`
stub lib which cannot be located by design in `cuda`.
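A toy illustration of the resulting rule (not the actual compiler wrapper code; `flags_for` is a made-up helper): `LDFLAGS` is only injected when the compiler drives the link.
```python
from typing import List

def flags_for(mode: str, ldflags: List[str]) -> List[str]:
    """Return the LDFLAGS to inject for a given wrapper invocation mode."""
    assert mode in ("cc", "ccld", "ld")
    # Only the compiler-driven link ("ccld") gets LDFLAGS; a direct ld call
    # would misinterpret flags like -Wl,--allow-shlib-undefined.
    return list(ldflags) if mode == "ccld" else []

print(flags_for("ccld", ["-Wl,--allow-shlib-undefined"]))  # passed through
print(flags_for("ld", ["-Wl,--allow-shlib-undefined"]))    # dropped
```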
Some packages can't be redistributed in source or binary form. We need an explicit way to say that in a package.
This adds a `redistribute()` directive so that package authors can write, e.g.:
```python
redistribute(source=False, binary=False)
```
You can also do this conditionally with `when=`, as with other directives, e.g.:
```python
# 12.0 and higher are proprietary
redistribute(source=False, binary=False, when="@12.0:")
# can't redistribute when we depend on some proprietary dependency
redistribute(source=False, binary=False, when="^proprietary-dependency")
```
The `redistribute()` directive prevents Spack from adding either the package's sources or binaries to public mirrors and build caches. You can still unconditionally add things *if* you run either:
* `spack mirror create --private`
* `spack buildcache push --private`
But the default behavior is not to include non-redistributable packages in mirrors or build caches. We have previously done this manually for our public build cache, but with this we can start maintaining redistributability directly in packages.
Caveats: currently the default for `redistribute()` is `True` for both `source` and `binary`, and you can only set either of them to `False` via this directive.
- [x] add `redistribute()` directive
- [x] add `redistribute_source` and `redistribute_binary` class methods to `PackageBase`
- [x] add `--private` option to `spack mirror`
- [x] add `--private` option to `spack buildcache push`
- [x] test exclusion of packages from source mirror (both as a root and as a dependency)
- [x] test exclusion of packages from binary mirror (both as a root and as a dependency)
If there's no compiler, we currently don't have any external libc for the solver.
This commit adds a fallback to the libc of the current Python process, which works as long as the interpreter is dynamically linked.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
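For illustration, Python itself can report the libc it is linked against; this is just a sketch of the idea, not the code added by this commit:
```python
import platform

# platform.libc_ver() inspects the running interpreter's executable; it
# returns ("", "") when no (g)libc can be detected, e.g. for static builds.
name, version = platform.libc_ver()
if name:
    print(f"fallback libc candidate: {name}@{version}")
else:
    print("no dynamically linked libc detected")
```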
When looking at where we spend our time in solver setup, I noticed a fair bit of time is spent
in `Spec.format()`, and `Spec.format()` is a pretty old, slow, convoluted method.
This PR does a number of things:
- [x] Consolidate most of what was being done manually with a character loop and several
regexes into a single regex.
- [x] Precompile regexes where we keep them
- [x] Remove the `transform=` argument to `Spec.format()` which was only used in one
place in the code (modules) to uppercase env var names, but added a lot of complexity
- [x] Avoid escaping and colorizing specs unless necessary
- [x] Refactor a lot of the colorization logic to avoid unnecessary object construction
- [x] Add type hints and remove some spots in the code where we were using nonexistent
arguments to `format()`.
- [x] Add trivial cases to `__str__` in `VariantMap` and `VersionList` to avoid sorting
- [x] Avoid calling `isinstance()` in the main loop of `Spec.format()`
- [x] Don't bother constructing a `string` representation for the result of `_prev_version`
as it is only used for comparisons.
In my timings (on all the specs formatted in a solve of `hdf5`), this is over 2.67x faster than the
original `format()`, and it seems to reduce setup time by around a second (for `hdf5`).
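As a rough illustration of the first two checklist items (not the actual `Spec.format()` code; the pattern and helper below are made up), a single precompiled regex can replace a per-call character scan plus several regexes:
```python
import re

# Compiled once at module load, reused on every call.
FORMAT_FIELD = re.compile(r"\{([^{}]*)\}")

def expand(template: str, values: dict) -> str:
    """Replace every {field} in one pass using the precompiled regex."""
    return FORMAT_FIELD.sub(lambda m: str(values.get(m.group(1), "")), template)

print(expand("{name}@{version}", {"name": "hdf5", "version": "1.14.3"}))
```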
The reverse provider lookup may have stale entries for deleted packages, which used to cause errors. It's hard to invalidate those cache entries, so this commit simply drops such entries without invalidating the cache.
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
This commit differentiates Linux from other platforms by
using libc compatibility as the criterion for deciding
which buildcaches / binaries can be reused. Other
platforms still use OS compatibility.
On Linux a libc is injected by all compilers as an implicit
external, and the compatibility criterion is that a libc is
compatible with all other libcs with the same name and a
version that is less than or equal to its own.
Some concretization unit tests use libc when run on Linux.
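As a toy illustration of that criterion (not Spack's solver logic; the tuple encoding is made up):
```python
from typing import Tuple

def libc_compatible(host: Tuple[str, Tuple[int, int]],
                    needed: Tuple[str, Tuple[int, int]]) -> bool:
    """A host libc satisfies a needed libc with the same name and an
    equal or older version."""
    host_name, host_version = host
    needed_name, needed_version = needed
    return host_name == needed_name and needed_version <= host_version

print(libc_compatible(("glibc", (2, 35)), ("glibc", (2, 28))))  # True
print(libc_compatible(("glibc", (2, 28)), ("glibc", (2, 35))))  # False
```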
Adds logic to detect which libc the C / C++ compilers use by default,
based on `-dynamic-linker`.
The function `compiler.default_libc()` returns a `Spec` of the form
`glibc@x.y` or `musl@x.y` with the `external_path` property set.
The idea is that this can be injected as a dependency.
If we can't run the dynamic linker directly, fall back to `ldd` relative
to the prefix computed from `ld.so`.
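A hedged sketch of the detection idea (not the actual `compiler.default_libc()` implementation; the helper, the banner parsing, and the example path are assumptions):
```python
import re
import subprocess
from typing import Optional

def libc_version_from_dynamic_linker(dynamic_linker: str) -> Optional[str]:
    """Run the dynamic linker and try to parse a version out of its banner."""
    try:
        proc = subprocess.run(
            [dynamic_linker, "--version"],
            capture_output=True, text=True, check=False,
        )
    except OSError:
        return None
    match = re.search(r"(\d+\.\d+(\.\d+)?)", proc.stdout + proc.stderr)
    return match.group(1) if match else None

# e.g. libc_version_from_dynamic_linker("/lib64/ld-linux-x86-64.so.2")
```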
In the future we may transform the database from a single JSON object to
a stream of JSON objects.
This paves the way for constant time writes and constant time rereads
when only O(1) changes are made. Currently both are linear time.
This commit adds just enough forward compatibility for Spack to produce a
friendly error if we later move to a stream of JSON objects, in which a db
would look like this:
```json
{"database": {"version": "<something newer>"}}
```
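A hedged sketch of what such a check could look like (the supported version value and helper name are illustrative, not Spack's actual database code):
```python
import json

SUPPORTED_DB_VERSIONS = {"7"}  # illustrative, not Spack's real version set

def check_db_version(text: str) -> None:
    """Read only the version field and fail with a friendly message if unknown."""
    version = json.loads(text)["database"]["version"]
    if version not in SUPPORTED_DB_VERSIONS:
        raise RuntimeError(
            f"database version {version} is newer than this version of Spack "
            "understands; please upgrade Spack"
        )

check_db_version('{"database": {"version": "7"}}')  # ok
```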