Commit Graph

198 Commits

Author SHA1 Message Date
Massimiliano Culpo
5b3942a489
Turn compilers into nodes (#45189)
## Summary

Compilers stop being a *node attribute*, and become a *build-only* dependency. 

Packages may declare a dependency on the `c`, `cxx`, or `fortran` languages, which
are now treated as virtuals, and compilers are *providers* for one or more of
those languages. Compilers can also inject runtime dependencies on the nodes being
compiled. An example graph for something as simple as `zlib-ng` is the following:

<p align="center">
<img src="https://github.com/user-attachments/assets/ee6471cb-09fd-4127-9f16-b9fe6d1338ac" alt="zlib-ng DAG" width="80%" height="auto">
</p>

Here `gcc` is used for both the `c` and `cxx` languages. Edges are annotated with
the virtuals they satisfy (`c`, `cxx`, `libc`). `gcc` injects `gcc-runtime` on the nodes
being compiled. `glibc` is also injected for packages that require `c`. The
`compiler-wrapper` is explicitly represented as a node in the DAG, and is included in
the hash.

This change in the model has implications on the semantics of the `%` sigil, as
discussed in #44379, and requires a version bump for our `Specfile`, `Database`,
and `Lockfile` formats.

## Breaking changes

Breaking changes below may impact users of this branch.

### 1. Custom, non-numeric versions of compilers are not supported

Currently, users can assign to compilers any custom version they want, and Spack
will try to recover the "real version" whenever the custom version fails some operation.
To deduce the "real version" Spack must run the compiler, which can add needless
overhead to common operations.

Since any information that a version like `gcc@foo` might give to the user can also
be expressed as a suffix on the correct numeric version, e.g. `gcc@10.5.0-foo`, Spack
will **no longer try** to deduce real versions for compilers.

In other words, users should have no expectation that `gcc@foo` behaves like
`gcc@X.Y.Z` internally.
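
For example, a custom label can be kept as a version suffix when registering the compiler
as an external package (a minimal sketch; the prefix below is a placeholder):

```yaml
packages:
  gcc:
    externals:
    - spec: gcc@10.5.0-foo    # numeric part stays correct, "-foo" carries the custom label
      prefix: /opt/custom-gcc
```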

### 2. The `%` sigil in the spec syntax means "direct build dependency"

The `%` sigil in the spec syntax means *"direct build dependency"*, and is not a node
attribute anymore. This means that:

```python
node.satisfies("%gcc")
``` 
is true only if `gcc` is a direct build dependency of the node. *Nodes without a compiler
dependency are allowed.*

### `parent["child"]` and `node in spec` now only inspect the link/run sub-DAG and direct build dependencies

The subscript notation for `Spec`:

```python
parent["child"]
```

will look for a `child` node only in the link/run transitive graph of `parent`, and in its
direct build dependencies. This means that to reach a transitive build dependency,
we must first pass through the node it is associated with. 

Assuming `parent` does not depend on `cmake`, but depends on a `CMakePackage`,
e.g. `hdf5`, then we have the following situation:

```python
# This one raises an Exception, since "parent" does not depend on cmake
parent["cmake"]
# This one is ok
cmake = parent["hdf5"]["cmake"]
```

### 3. Externals differing by just the compiler attribute

Externals are nodes whose dependencies are trimmed, and that _is not planned to
change_ in this branch. Currently, on `develop`, it is ok to write:

```yaml
packages:
  hdf5:
    externals:
    - spec: hdf5@1.12 %gcc
      prefix: /prefix/gcc
    - spec: hdf5@1.12 %clang
      prefix: /prefix/clang
```
and Spack will account for the compiler node attribute when computing the optimal
spec. In this branch, using externals with a compiler specified is allowed only if some
compiler in the DAG matches the constraints specified on the external. _The external
will still be represented as a single node without dependencies_.

### 4. Spec matrices enforcing a compiler

Currently we can have matrices of the form:

```yaml
matrix:
- [x, y, z]
- [%gcc, %clang]
```
to get the cross-product of specs and compilers. We can disregard the nature of the
packages in the first row, since the compiler is a node attribute required on each node.

In this branch, instead, we require a spec to depend on `c`, `cxx`, or `fortran` for the
`%` to have any meaning. If any spec in the first row doesn't depend on these
languages, there will be a concretization error (one way to work around this is sketched below).
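
One way to keep such a matrix working is to exclude the entries that would not depend on a
compiled language. A sketch, assuming `z` is the spec that does not depend on `c`, `cxx`, or
`fortran`:

```yaml
matrix:
- [x, y, z]
- ['%gcc', '%clang']
exclude:
- 'z%gcc'
- 'z%clang'
```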

## Deprecations

* The entire `compilers` section in the configuration (i.e., `compilers.yaml`) has been
  deprecated, and current entries will be removed in v1.2.0. For the time being, if Spack
  finds any `compilers` configuration, it will try to convert it automatically to a set of
  external packages (a rough sketch of this conversion follows the list below).
* The `packages:compiler` soft-preference has been deprecated. It will be removed
  in v1.1.0.
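
As a rough sketch of what the automatic conversion amounts to (the paths below are
placeholders, and the exact shape of the generated entry may differ), an old
`compilers.yaml` entry such as:

```yaml
compilers:
- compiler:
    spec: gcc@10.5.0
    paths:
      cc: /usr/bin/gcc
      cxx: /usr/bin/g++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    operating_system: ubuntu22.04
    modules: []
```

roughly corresponds to an external package entry in `packages.yaml`:

```yaml
packages:
  gcc:
    externals:
    - spec: gcc@10.5.0
      prefix: /usr
```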

## Other notable changes

* The tokens `{compiler}`, `{compiler.version}`, and `{compiler.name}` in `Spec.format`
  expand to `"none"` if a Spec does not depend on C, C++, or Fortran.
* The default install tree layout is now
  `"{architecture.platform}-{architecture.target}/{name}-{version}-{hash}"`
  (a configuration sketch follows).

## Known limitations

The major known limitation of this branch, which we intend to fix before v1.0, is that compilers
cannot be bootstrapped directly.

In this branch we can build a new compiler using an existing external compiler, for instance:

```
$ spack install gcc@14 %gcc@10.5.0
```

where `gcc@10.5.0` is external, and `gcc@14` is to be built.

What we can't do at the moment is use a yet-to-be-built compiler and expect it to be
bootstrapped, e.g.:

```
spack install hdf5 %gcc@14
```

We plan to tackle this issue in a follow-up PR.

---------

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Signed-off-by: Harmen Stoppels <me@harmenstoppels.nl>
Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2025-03-25 22:32:49 -06:00
snehring
9c6f0392d5
Revert "Add package libglvnd (#49214)" (#49478)
This reverts commit 682e4bf4d4.
2025-03-14 09:50:29 -04:00
snehring
682e4bf4d4
Add package libglvnd (#49214)
* Add package libglvnd
  Signed-off-by: Shane Nehring <snehring@iastate.edu>
* libglvnd: add virtual defaults
  Signed-off-by: Shane Nehring <snehring@iastate.edu>

---------

Signed-off-by: Shane Nehring <snehring@iastate.edu>
2025-03-12 18:53:43 -07:00
Massimiliano Culpo
3caa3132aa
python: allow it as a build-tool again (#49201)
Python was removed from being a build tool in #46980, due to issues
when reusing specs. This PR adds a new clingo rule to match the
interpreter among different Python packages.

It also adds a bunch of new "build-tools", so that specs like:
```
py-matplotlib backend=tkagg
```
can be concretized in one go.

Modifications:
- [x] Make `py-matplotlib backend=tkagg` concretizable
- [x] Add unit-tests to ensure situations like in #46980 do not happen

---------

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2025-02-26 15:04:31 -08:00
Massimiliano Culpo
ccd205bfeb
Allow tuning max_dupes for build dependencies (#48948)
Up to now, Spack was allowing all build-tools that
may appear in the DAG to have 2 max_dupes.

This is not needed in practice for most of them,
and adding them out of caution just increases
grounding and concretization time.

This PR makes the value of max_dupes configurable
per package, and sets only a few known packages to
2 max_dupes by default.

In case a user needs different values, they can
tune the configuration for their use case, as sketched below.
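
A sketch of what such a tweak could look like; the exact key layout
(`concretizer:duplicates:max_dupes`) is an assumption based on this description:

```yaml
concretizer:
  duplicates:
    # assumed per-package override: allow up to 2 "cmake" nodes in a DAG
    max_dupes:
      cmake: 2
```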
2025-02-14 14:25:12 +01:00
Massimiliano Culpo
54210270c8
concretizer: reduce search space with static analysis (#48729)
Currently, when we setup the ASP problem for `clingo`, we don't take into account the configuration. This results in setting up ASP problems that are larger than necessary, with possibly redundant information, and higher concretization times. 

This PR tries to improve things by adding an opt-in feature that computes the _possible dependencies_ of a solve taking also into account the current configuration, and avoids adding possible dependencies that we are certain can't be in the final solution.

The feature can be activated with:
```yaml
concretizer:
  static_analysis: true
```

Examples of simple rules to discard dependencies are:
- Dependencies that are not buildable, and for which no binary is present (e.g. `cray-mpich` etc. on non-Cray systems)
- Dependencies that are not for the current platform (e.g. `msmpi` on non-Windows platforms)
- Conditional dependencies that cannot be activated, because of some user requirement (e.g. `cuda` etc. if the user requires `~cuda` in configuration)
- Virtual providers that cannot be used, because of a requirement on a virtual

The speed-up these rules give depends on the use case at hand, but if the configuration is updated properly, it is noticeable.

In cases where there is no rule to exclude packages upfront, reuse is active, and this option is turned on, a minor slowdown is possible; for this reason the feature is opt-in and turned off by default.
2025-02-11 08:44:20 +01:00
John W. Parent
2c3f2c5733
Windows: Update default config for stage location (#48511)
The current location is within the Spack prefix, which causes builds to
pollute the VCS checkout with stage artifacts and generally inflates the Spack
install prefix.

This PR moves it to the user cache location now that we can
consistently support paths with spaces on Windows.
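
For users who want to pick the stage location themselves, a minimal `config.yaml` sketch
(the exact default chosen for Windows may differ):

```yaml
config:
  build_stage:
  - $user_cache_path/stage        # per-user cache location
  - $tempdir/$user/spack-stage    # fallback in the system temporary directory
```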
2025-02-06 23:25:11 -08:00
Massimiliano Culpo
e47a6059a7
Improve definition of a few placeholder packages (#48730)
* Improve definition of a few placeholder packages
   These packages are placeholders for vendor-provided software,
   which is not buildable and should be declared as external.
* ibm-java: remove package, as asked by maintainer
2025-01-27 15:34:34 -08:00
Teague Sterling
9cccdc5424
xkeyboard-config: add xkbdata provider (#45571)
Signed-off-by: Teague Sterling <teaguesterling@gmail.com>
2025-01-04 10:39:45 +01:00
Harmen Stoppels
06eae96ef9
config:shared_linking:missing_library_policy to error/warn about accidental use of system libraries on linux/freebsd (#47365)
This commit adds a config option `config:shared_linking:missing_library_policy` (values `error`, `warn`, or `ignore`) which causes installation errors or warnings when ELF executables or libraries need shared libraries that cannot be resolved from RPATH search paths. The default is `ignore`.

This is a safeguard against accidentally linking to system libraries instead of Spack libraries. It makes it more likely that build cache installs work on different machines. It works only at the level of libraries, not at the level of symbols. Some system dependencies are allowed (e.g. kernel and libc).

Packages can (but are discouraged to) set `unresolved_libraries` to a list of patterns of sonames/library names that are known to be unresolvable in RPATHs. In the future this could be made more fine-grained in a non-breaking way by allowing a dictionary of patterns `lib => [deps]`.
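
A minimal configuration sketch enabling the strict behavior described above:

```yaml
config:
  shared_linking:
    # one of: error, warn, ignore (the default is ignore)
    missing_library_policy: error
```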
2024-12-16 12:32:36 +01:00
Massimiliano Culpo
ea5ffe35f5
configuration: set egl as buildable:false (#47865) 2024-12-02 11:33:01 -08:00
Massimiliano Culpo
68aa712a3e
solver: add a timeout handle for users (#47661)
This PR adds a configuration setting to allow setting time limits for concretization.

For backward compatibility, the default is to set no time limit.
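
A sketch of what the setting could look like; the key names below are assumptions, not
confirmed by this message:

```yaml
concretizer:
  # assumed key: stop the solve after 60 seconds
  timeout: 60
  # assumed key: treat hitting the limit as an error instead of using the best model found
  error_on_timeout: true
```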
2024-11-19 15:00:26 +01:00
John Gouwar
bf16f0bf74
Add solver capability for synthesizing splices of ABI compatible packages. (#46729)
This PR provides 2 complementary features:
1. An augmentation to the package language to express ABI compatibility relationships among packages. 
2. An extension to the concretizer that can synthesize splices between ABI compatible packages.

1.  The `can_splice` directive and ABI compatibility 
We augment the package language with a single directive: `can_splice`. Here is an example of a package `Foo` exercising the `can_splice` directive:

```python
class Foo(Package):
    version("1.0")
    version("1.1")
    variant("compat", default=True)
    variant("json", default=False)
    variant("pic", default=False)
    can_splice("foo@1.0", when="@1.1")
    can_splice("bar@1.0", when="@1.0+compat")
    can_splice("baz@1.0+compat", when="@1.0+compat", match_variants="*")
    can_splice("quux@1.0", when="@1.1~compat", match_variants="json")
```

Explanations of the uses of each directive: 
- `can_splice("foo@1.0", when="@1.1")`:  If `foo@1.0` is the dependency of an already installed spec and `foo@1.1` could be a valid dependency for the parent spec, then `foo@1.1` can be spliced in for `foo@1.0` in the parent spec.
- `can_splice("bar@1.0", when="@1.0+compat")`: If `bar@1.0` is the dependency of an already installed spec and `foo@1.0+compat` could be a valid dependency for the parent spec, then `foo@1.0+compat` can be spliced in for `bar@1.0+compat` in the parent spec
-  `can_splice("baz@1.0", when="@1.0+compat", match_variants="*")`: If `baz@1.0+compat` is the dependency of an already installed spec and `foo@1.0+compat` could be a valid dependency for the parent spec, then `foo@1.0+compat` can be spliced in for `baz@1.0+compat` in the parent spec, provided that they have the same value for all other variants (regardless of what those values are). 
-  `can_splice("quux@1.0", when=@1.1~compat", match_variants="json")`:If `quux@1.0` is the dependency of an already installed spec and `foo@1.1~compat` could be a valid dependency for the parent spec, then `foo@1.0~compat` can be spliced in for `quux@1.0` in the parent spec, provided that they have the same value for their `json` variant. 

2. Augmenting the solver to synthesize splices
### Changes to the hash encoding in `asp.py`
Previously, when including concrete specs in the solve, they would have the following form:

installed_hash("foo", "xxxyyy")
imposed_constraint("xxxyyy", "foo", "attr1", ...)
imposed_constraint("xxxyyy", "foo", "attr2", ...)
% etc. 

Concrete specs now have the following form:

```prolog
installed_hash("foo", "xxxyyy")
hash_attr("xxxyyy", "foo", "attr1", ...)
hash_attr("xxxyyy", "foo", "attr2", ...)
```

This transformation allows us to control which constraints are imposed when we select a hash, to facilitate the splicing of dependencies. 

2.1 Compiling `can_splice` directives in `asp.py`
Consider the concrete spec:

```
foo@2.72%gcc@11.4 arch=linux-ubuntu22.04-icelake build_system=autotools ^bar ...
```
It will emit the following facts for reuse (below is a subset)

installed_hash("foo", "xxxyyy")
hash_attr("xxxyyy", "hash", "foo", "xxxyyy")
hash_attr("xxxyyy", "version", "foo", "2.72")
hash_attr("xxxyyy", "node_os", "ubuntu22.04")
hash_attr("xxxyyy", "hash", "bar", "zzzqqq")
hash_attr("xxxyyy", "depends_on", "foo", "bar", "link")

Rules that derive `abi_splice_conditions_hold` will be generated from
use of the `can_splice` directive. They will have the following form:
can_splice("foo@1.0.0+a", when="@1.0.1+a", match_variants=["b"]) --->

abi_splice_conditions_hold(0, node(SID, "foo"), "foo", BaseHash) :-
  installed_hash("foo", BaseHash),
  attr("node", node(SID, SpliceName)),
  attr("node_version_satisfies", node(SID, "foo"), "1.0.1"),
  hash_attr("hash", "node_version_satisfies", "foo", "1.0.1"),
  attr("variant_value", node(SID, "foo"), "a", "True"),
  hash_attr("hash", "variant_value", "foo", "a", "True"),
  attr("variant_value", node(SID, "foo"), "b", VariVar0),
  hash_attr("hash", "variant_value", "foo", "b", VariVar0).


2.2 Synthesizing splices in `concretize.lp` and `splices.lp`

The ASP solver generates `splice_at_hash` attrs to indicate that a particular node has a splice in one of its immediate dependencies.

Splices can be introduced in the dependencies of concrete specs when `splices.lp` is conditionally loaded (based on the config option `concretizer:splice`).

2.3 Constructing spliced specs in `asp.py`

The method `SpecBuilder._resolve_splices` is a top-down, memoized implementation of hybrid splicing. This is an optimization over the more general `Spec.splice`, since the solver gives a global view of exactly which specs can be shared, ensuring the minimal number of splicing operations.

Misc changes to facilitate configuration and benchmarking 
- Added the method `Solver.solve_with_stats` to expose timers from the public interface for easier benchmarking 
- Added the boolean config option `concretizer:splice` to conditionally load splicing behavior (see the sketch below)
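
A minimal sketch of opting in, following the option name used in this description:

```yaml
concretizer:
  splice: true    # opt in to synthesizing splices during reuse
```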

Co-authored-by: Greg Becker <becker33@llnl.gov>
2024-11-12 20:51:19 -08:00
Alex Hedges
ec058556ad
Remove trailing spaces in default YAML files (#47328)
caught by `prettier`
2024-10-30 22:32:54 +01:00
Alex Hedges
44d09f2b2b
Fix typo in default concretizer.yaml (#47307)
This was caught by `codespell` when I copied the config file into an
internal repository.
2024-10-30 09:35:30 +01:00
Harmen Stoppels
d8c8074762
bootstrap: add clingo 3.13 binaries and more (#47126) 2024-10-24 08:55:14 +02:00
Massimiliano Culpo
ffdfa498bf
Deprecate config:install_missing_compilers (#46237)
The option config:install_missing_compilers is currently buggy,
and has been for a while. Remove it, since it won't be needed
when compilers are treated as dependencies.

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-09-10 20:02:05 +02:00
Massimiliano Culpo
1c1970e727
Put some more constraint on a few mpi providers (#46132)
This should help avoid selecting, by default, some niche implementations that are supposed to be externals.

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-08-30 11:16:35 +02:00
AcriusWinter
63e680e4f9
c: new test API (#45469)
* c: new test API
* gcc:  provides('c')
* c: bugfix and simplification of the new stand-alone test method

---------

Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
2024-08-12 15:58:09 -06:00
AcriusWinter
7ce5ac1e6e
fortran: new test API (#45470)
* fortran: new test API
* fortran: add provides to gcc package
* fortran: simplify stand-alone test processing

---------

Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
2024-08-11 14:32:39 -06:00
Massimiliano Culpo
2079b888c8
Remove the old concretizer (#45215)
The old concretizer is still used to bootstrap clingo from source. If we switch to a DAG model
where compilers are treated as nodes, we need to either:

1. fix the old concretizer to support this (which is a lot of work and possibly research), or
2. bootstrap `clingo` without the old concretizer.

This PR takes the second approach and gets rid of the old concretizer code. To bootstrap
`clingo`, we store some concrete spec prototypes as JSON, select one according to the
coarse-grained system architecture, and tweak them according to the current host.

The old concretizer and related dead code are removed.  In particular, this removes
`Spec.normalize()` and related methods, which were used in many unit-tests to set
up the test context. The tests have been updated not to use `normalize()`.

- [x] Bootstrap clingo concretization based on a JSON file
- [x] Bootstrap clingo *before* patchelf
- [x] Remove any use of the old concretizer, including:
      * Remove only_clingo and only_original fixtures
      * Remove _old_concretize and _new_concretize
      * Remove _concretize_together_old
      * Remove _concretize_together_new
      * Remove any use of `SPACK_TEST_SOLVER`
      * Simplify CI jobs
- [x] ensure bootstrapping `clingo` works on Darwin and Windows
- [x] Raise an intelligible error when a compiler is missing
- [x] Ensure bootstrapping works on FreeBSD
- [x] remove normalize and related methods

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2024-08-10 16:12:27 -07:00
AcriusWinter
899ac78887
cxx: new test API (#45462)
* cxx: new test API
* gcc: provide cxx
* default providers:  cxx provided by gcc
* cxx: cleanup stand-alone test
  - test_c -> test_cxx
  - simplify compilation and execution
  - corrected output checks

---------

Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
2024-08-08 23:23:05 -06:00
Todd Gamblin
4a35dec206 wasi-sdk: add default provider
This was missed in #45394 because we don't run unit tests for package PRs, and
`test_all_virtual_packages_have_default_providers`, which would've caught it, is a unit
test, not an audit.

- [x] add a default provider for `wasi-sdk` in `etc/spack/defaults/packages.yaml` (which
      we require for all virtuals)
- [x] rework `test_all_virtual_packages_have_default_providers` as an audit called
      `_ensure_all_virtual_packages_have_default_providers`

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2024-07-29 01:30:14 -07:00
Harmen Stoppels
1b967a9d98
iconv: remove requirement (#45206)
no longer necessary after 5c53973220
2024-07-15 17:45:32 +02:00
Harmen Stoppels
36d64fcbd4
iconv: require libiconv on linux (#45026)
otherwise it is still picked up from glibc as it is external
2024-07-04 08:20:27 +02:00
Massimiliano Culpo
f242e0fd0c
remove platform=cray (#43796)
Remove support for `cray` as a separate platform.

Any platform previously detected as `cray` is now detected as `linux`.

Users who still need platform=cray have to stick to Spack 0.22
2024-05-30 14:21:32 +02:00
Harmen Stoppels
eef6a79b35
Prefer libiconv for iconv (#44335)
`glibc` and `musl` provide a basic implementation of `iconv` (`iconv`,
`iconv_open`, `iconv_close`), but in practice the installation may be
missing the character encoding methods to make them usable. On Fedora
for example, users need to

```yum install glibc-gconv-extra```

to get the character encodings that `gettext` requires during configure,
namely EUC-JP. Users may not have permissions to install the missing
parts of glibc.

Since Spack can install `libiconv`, it is simpler to use that by
default.
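
A sketch of how a user could state the same preference explicitly with a requirement on the
`iconv` virtual in `packages.yaml`:

```yaml
packages:
  iconv:
    require: [libiconv]    # always satisfy iconv with libiconv
```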
2024-05-24 13:25:59 +02:00
Massimiliano Culpo
9c88a48a73
Remove mesa18 and libosmesa (#44264)
* Remove mesa18 and libosmesa

mesa18 was introduced in #19528 as a way to maintain the old
autotools build of mesa separate from the new meson build.

We could add a second build system to mesa, but since mesa18 has
been deprecated for a long time, we'll just remove it.

libosmesa was used to multiplex the gl provider between mesa18
and mesa, and is thus unnecessary. Remove it to reduce complexity
in the graphical stack.

* Remove references to mesa18 and libosmesa

* vtk: rework dependency on gl and osmesa

* memsurfer: rework dependency on vtk

* visit: minimal fix to avoid having both osmesa and glx
2024-05-21 10:18:14 -05:00
Massimiliano Culpo
47051c3690
Add a default provider for armci (#43997) 2024-05-03 23:48:58 +02:00
Harmen Stoppels
d47951a1e3
glibc: provides iconv (#43897)
`iconv` is a bit of a weird virtual because the only shared API between
`glibc` and `libiconv` is:

```
iconv
iconv_open
iconv_close
```

whereas `libiconv` has further symbols [iconvctl](https://www.gnu.org/software/libiconv/documentation/libiconv-1.17/iconvctl.3.html), [iconv_open_into](https://www.gnu.org/software/libiconv/documentation/libiconv-1.17/iconv_open_into.3.html), and an `iconv` executable and `libcharset.so`. Packages that need those will have to do `depends_on("[virtuals=iconv] libiconv")`.
2024-04-30 07:40:00 +02:00
Massimiliano Culpo
0dbe4d54b6
Tune default compiler preferences (#43805)
Add `%oneapi`, remove compilers that have been discontinued upstream
2024-04-24 18:00:40 +02:00
Greg Becker
978c20f35a
concretizer: update reuse: default to True (#41302) 2024-04-23 17:42:14 +02:00
Harmen Stoppels
209a3bf302 Compiler.default_libc
Some logic to detect what libc the c / cxx compilers use by default,
based on `-dynamic-linker`.

The function `compiler.default_libc()` returns a `Spec` of the form
`glibc@x.y` or `musl@x.y` with the `external_path` property set.

The idea is this can be injected as a dependency.

If we can't run the dynamic linker directly, fall back to `ldd` relative
to the prefix computed from `ld.so`.
2024-04-22 15:18:06 +02:00
psakievich
7afa949da1
Add handling of custom ssl certs in urllib ops (#42953)
This PR allows the user to specify a path to a custom cert file (or directory) in
Spack's config:

```yaml
  # This is where custom certs for proxy/firewall are stored.
  # It can be a path or environment variable. To match ssl env configuration
  # the default is the environment variable SSL_CERT_FILE
  ssl_certs: $SSL_CERT_FILE
```

`config:ssl_certs` can be a path to a file or a directory, or it can be an environment
variable that resolves to one of those. When it points to something valid, Spack will
update the SSL context to include the custom certs, and fetching via `urllib` and `curl`
will trust the provided certs.

This should resolve many issues with fetching behind corporate firewalls.


---------

Co-authored-by: psakievich <psakievich@users.noreply.github.com>
Co-authored-by: Alec Scott <alec@bcs.sh>
2024-04-01 11:11:13 -07:00
psakievich
27a8eb0f68
Add config option and compiler support to reuse across OS's (#42693)
* Allow compilers to function across compatible OS's
* Add documentation in the default yaml

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
2024-03-27 15:39:07 +00:00
Massimiliano Culpo
0c9a53ba3a
Add intel-oneapi-runtime, allow injecting virtual dependencies (#42062)
This PR adds:
- A new runtime for `%oneapi` compilers, called `intel-oneapi-runtime`
- Information to both `gcc-runtime`  and `intel-oneapi-runtime`, to ensure
  that we don't mix compilers using different soname for either `libgfortran`
  or `libifcore`

To do so, the following internal mechanisms have been implemented:
- Possibility to inject virtual dependencies from the `runtime_constraints`
  callback on packages

Information has been added to `gcc-runtime` to provide the correct soname
under different conditions on its `%gcc`.

Rules injected into the solver look like:

```prolog
% Add a dependency on 'gfortran@5' for nodes compiled with gcc@=13.2.0 and using the 'fortran' language
attr("dependency_holds", node(ID, Package), "gfortran", "link") :-
  attr("node", node(ID, Package)),
  attr("node_compiler", node(ID, Package), "gcc"),
  attr("node_compiler_version", node(ID, Package), "gcc", "13.2.0"),
  not external(node(ID, Package)),
  not runtime(Package),
  attr("language", node(ID, Package), "fortran").

attr("virtual_node", node(RuntimeID, "gfortran")) :-
  attr("depends_on", node(ID, Package), ProviderNode, "link"),
  provider(ProviderNode, node(RuntimeID, "gfortran")),
  attr("node", node(ID, Package)),
  attr("node_compiler", node(ID, Package), "gcc"),
  attr("node_compiler_version", node(ID, Package), "gcc", "13.2.0"),
  not external(node(ID, Package)),
  not runtime(Package),
  attr("language", node(ID, Package), "fortran").

attr("node_version_satisfies", node(RuntimeID, "gfortran"), "5") :-
  attr("depends_on", node(ID, Package), ProviderNode, "link"),
  provider(ProviderNode, node(RuntimeID, "gfortran")),
  attr("node", node(ID, Package)),
  attr("node_compiler", node(ID, Package), "gcc"),
  attr("node_compiler_version", node(ID, Package), "gcc", "13.2.0"),
  not external(node(ID, Package)),
  not runtime(Package),
  attr("language", node(ID, Package), "fortran").
```
2024-03-24 22:59:21 -07:00
Adam J. Stewart
b719c905f1
apple-libuuid: update installation directory (#40416)
* apple-libuuid: update installation directory

Copy design of Apple GL
2023-11-28 10:13:55 -08:00
Harmen Stoppels
b2840acd52
Revert "defaults/modules.yaml: hide implicits (#40906)" (#40955)
This reverts commit a2f00886e9.
2023-11-08 14:33:50 -07:00
Michael Kuhn
5074b7e922
Add support for aliases (#17229)
Add a new config section: `config:aliases`, which is a dictionary mapping aliases
to commands.

For instance:


```yaml
config:
    aliases:
        sp: spec -I
```

will define a new command `sp` that will execute `spec` with the `-I`
argument. 

Aliases cannot override existing commands, and this is ensured with a test.

We cannot currently alias subcommands. Spack will warn about any aliases
containing a space, but will not error, which leaves room for subcommand
aliases in the future.

---------

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2023-11-06 14:37:46 -08:00
Harmen Stoppels
a2f00886e9
defaults/modules.yaml: hide implicits (#40906) 2023-11-06 10:37:29 -08:00
Massimiliano Culpo
861bb4d35a
Update bootstrap buildcache to support Python 3.12 (#40404)
* Add support for Python 3.12
* Use optimized build of clingo
2023-10-11 19:03:17 +02:00
Massimiliano Culpo
e20c05fcdf
Make "minimal" the default duplicate strategy (#39621)
* Allow branching out of the "generic build" unification set

For cases like the one in https://github.com/spack/spack/pull/39661
we need to relax rules on unification sets.

The issue is that, right now, nodes in the "generic build" unification
set are unified together with their build dependencies. This was done
out of caution to avoid the risk of circular dependencies, which would
ultimately cause a very slow solve.

For build tools like Cython, however, the build dependencies are masked
by a long chain of "build, run" dependencies that belong in the
"generic build" unification space.

To allow splitting on cases like this, we relax the rule disallowing
branching out of the "generic build" unification set.

* Fix issue with pure build virtual dependencies

Pure build virtual dependencies were not accounted for properly in the
list of possible virtuals. This caused some facts connecting virtuals
to the corresponding providers to not be emitted, and in the end
led to unsat problems.

* Fixed a few issues in packages

py-gevent: restore dependency on py-cython@3
jsoncpp: fix typo in build dependency
ecp-data-vis-sdk: update spack.yaml and cmake recipe
py-statsmodels: add v0.13.5

* Make dependency on "blt" of type "build"
2023-10-06 10:24:21 +02:00
Harmen Stoppels
ec500adb50
zlib-api: use zlib-ng +compat by default (#39358)
In the HPC package manager, we want the fastest `zlib`  implementation by default.  `zlib-ng` is up to 4x faster than stock `zlib`, and it can do things like take advantage of AVX-512 instructions.  This PR makes `zlib-ng` the default `zlib-api` provider (`zlib-api` was introduced earlier, in #37372).

As far as I can see, the only issues you can encounter are:

1. Build issues with packages that heavily rely on `zlib` internals. In Gitlab CI only one out of hundreds of packages had that issue (it extended zlib with deflate stuff, and used its own copy of zlib sources).
2. Packages that like to detect `zlib-ng` separately and rely on `zlib-ng` internals. The only issue I've found with this among the hundreds of packages built in CI is `perl` trying to report more specific zlib-ng version details, and relied on some internals that got refactored. But yeah... that warrants a patch / conflict and is nothing special.

At runtime, you cannot really have any issues, given that zlib and zlib-ng export the exact same symbols (and zlib-ng tests this in their CI).

You can't really have issues with externals when using zlib-ng either. The only type of issue is when system zlib is rather new, and not marked as external; if another external uses new symbols, and Spack builds an older zlib/zlib-ng, then the external might not find the new symbols. But this is a configuration issue, and it's not an issue caused by zlib-ng, as the same would happen with older Spack zlib.

* zlib-api: use zlib-ng +compat by default
* make a trivial change to zlib-ng to trigger a rebuild
* add `haampie` as maintainer
2023-08-17 14:03:14 -07:00
Massimiliano Culpo
9f8edbf6bf Add a new configuration option to select among different concretization modes
The "concretizer" section has been extended with a "duplicates:strategy"
attribute, that can take three values:

- "none": only 1 node per package
- "minimal": allow multiple nodes opf specific packages
- "full": allow full duplication for a build tool
2023-08-15 15:54:37 -07:00
Harmen Stoppels
e51748ee8f
zlib-api: new virtual with zlib/zlib-ng as providers (#37372)
Introduces a new virtual zlib-api, which replaces zlib in most packages.

This allows users to switch to zlib-ng by default for better performance.
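
A sketch of how a user could pick the provider explicitly in `packages.yaml`:

```yaml
packages:
  all:
    providers:
      zlib-api: [zlib-ng, zlib]    # prefer zlib-ng, fall back to stock zlib
```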
2023-08-09 09:22:58 -04:00
Adam J. Stewart
edbf12cfa8
Add qmake virtual provider (#38848) 2023-08-02 13:50:37 -05:00
Harmen Stoppels
522d9e260b
mirrors: distinguish between source/binary mirror; simplify schema (#34523)
Allow the following formats:

```yaml
mirrors:
  name: <url>
```

```yaml
mirrors:
  name:
    url: s3://xyz
    access_pair: [x, y]
```

```yaml
mirrors:
  name:
    fetch: http://xyz
    push:
      url: s3://xyz
      access_pair: [x, y]
```

And reserve two new properties to indicate the mirror type (e.g.
mirror.spack.io is a source mirror, not a binary cache)

```yaml
mirrors:
  spack-public:
    source: true
    binary: false
    url: https://mirror.spack.io
```
2023-07-13 11:29:17 +00:00
Michael Kuhn
d5d0b8821c
installer: Improve status reporting (#37903)
Refactor `TermTitle` into `InstallStatus` and use it to show progress
information both in the terminal title as well as inline. This also
turns on the terminal title status by default.

The inline output will look like the following after this change:
```
==> Installing m4-1.4.19-w2fxrpuz64zdq63woprqfxxzc3tzu7p3 [4/4]
```
2023-07-12 08:54:45 +02:00
Nisarg Patel
f090b05346
Update providers of virtual packages related to Intel OneAPI (#37412)
* add a virtual dependency name instead of complete package name

* add OneAPI components as providers of virtual packages

* Revert the default of tbb

---------

Co-authored-by: Nisarg Patel <nisarg.patel@lrz.de>
2023-05-11 05:58:24 -04:00
Massimiliano Culpo
a92f1e37aa
Disable module file generation by default (#37258)
* Disable module generation by default (#35564)

a) It's used by site administrators, so it's niche
b) If it's used by site administrators, they likely need to modify the config anyhow, so the default config only serves as an example to get started
c) it's too arbitrary to enable tcl, but disable lmod

* Remove leftover from old module file schema

* Warn if module file config is detected and generation is disabled (a sketch for re-enabling generation follows below)
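
For sites that still want module files, a minimal sketch of turning generation back on
(assuming the `modules:default:enable` layout):

```yaml
modules:
  default:
    enable: [tcl]    # or lmod
```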

---------

Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2023-05-02 10:28:27 +02:00