Compare commits

..

133 Commits

Author SHA1 Message Date
Gregory Becker
8c37ed4534 Set version to 0.22.5 2025-02-21 12:20:45 -08:00
Harmen Stoppels
1f55b59171 Set version to 0.22.4 2025-02-19 11:13:42 +01:00
Massimiliano Culpo
6a46a1c27d Temporarily pin Ubuntu to v22.04, where we use kcov (#48152)
Ubuntu doesn't package kcov in v24.04. Since GitHub
started upgrading their runner images, this makes
our CI fail; see e.g.

https://github.com/spack/spack/actions/runs/12366970840/job/34518012887?pr=47854

This is a temporary workaround, while we prepare a
more stable fix.

* Don't run too many unit tests
2025-02-19 11:13:42 +01:00
Harmen Stoppels
c78545d5f1 fix year dependent license verification check 2025-02-19 11:13:42 +01:00
Harmen Stoppels
05dcd1d2b7 env: continue to mark non-roots as implicitly installed on partial env installs (#47183)
Fixes a change in behavior/bug in
70412612c7, where partial environment
installs would mark the selected spec as explicitly installed, even if
it was not a root of the environment.

The desired behavior is that roots are, by definition, the specs to be
explicitly installed. The specs on the `spack -e ... install x`
command line are just filters for partial installs, so leave them
implicitly installed if they aren't roots.
2025-02-19 11:13:42 +01:00
Harmen Stoppels
67e5b4fecf Set version to 0.22.4.dev0 2025-02-19 11:13:42 +01:00
Harmen Stoppels
ba6cb62df8 Set version to v0.22.3 2024-11-18 12:47:11 +01:00
Harmen Stoppels
e1437349d1 getting_started.rst: fix list of spack deps (#47557) 2024-11-18 12:47:11 +01:00
Harmen Stoppels
c1eb3f965b libc.py: detect glibc also in chinese locale (#47434) 2024-11-18 12:47:11 +01:00
Massimiliano Culpo
e0d246210f Fix pickle round-trip of specs propagating variants (#47351)
This changes `Spec` serialization to include information about propagation for abstract specs.
This was previously not included in the JSON representation for abstract specs, and couldn't be
stored.

Now, there is a separate `propagate` dictionary alongside the `parameters` dictionary. This isn't
beautiful, but when we bump the spec version for Spack `v0.24`, we can clean up this and other
aspects of the schema.
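
As a rough illustration (the split into `parameters` and `propagate` is from this change; the surrounding keys and values are assumptions), a serialized abstract spec node might look like:

```python
# Hypothetical node dictionary for an abstract spec like "hdf5 +shared ++debug";
# only the parameters/propagate split reflects this change, the rest is illustrative.
node = {
    "name": "hdf5",
    "parameters": {"shared": True},  # ordinary (non-propagated) variant values
    "propagate": {"debug": True},    # values requested with propagation
}
```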
2024-11-18 12:47:11 +01:00
Harmen Stoppels
88b47d2714 spack_yaml: add anchorify function (#44995)
This adds spack.util.spack_yaml.anchorify, which takes a non-cyclic
dict/list structure, and replaces identical values with (back)
references to the first instance, so that yaml serialization will use
anchors.

`repr` is used to identify sub-dags, which in principle is quadratic
complexity in depth of the graph, but in practice the depth is O(1) so
this should not matter.

Then this is used in CI to reduce the size of generated YAML files to
30% of their original size.
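
A minimal sketch of the idea (the function name and module are from this change; the body below is an illustrative assumption, not the actual implementation):

```python
def anchorify(data, identified=None):
    """Replace identical dict/list values with references to their first
    occurrence, so a YAML dump can emit anchors and aliases (sketch only)."""
    if identified is None:
        identified = {}
    items = data.items() if isinstance(data, dict) else enumerate(data)
    for key, value in items:
        if not isinstance(value, (dict, list)):
            continue
        fingerprint = repr(value)  # repr() identifies identical sub-dags
        if fingerprint in identified:
            data[key] = identified[fingerprint]  # back-reference the first instance
        else:
            identified[fingerprint] = value
            anchorify(value, identified)
```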
2024-11-18 12:47:11 +01:00
Harmen Stoppels
342520175d database.py: remove process unsafe update_explicit (#47358)
Fixes an issue reported where `spack env depfile` + `make -j` would
non-deterministically refuse to mark all environment roots explicit.

`update_explicit` had the pattern

```python
rec = self._data[key]
with self.write_transaction():
    rec.explicit = explicit
```

but `write_transaction` may reinitialize `self._data`, meaning that
mutating `rec` won't mutate `self._data`, and the changes won't be
persisted.

Instead, use `mark` which has a correct implementation.

Also avoids the essentially incorrect early return in `update_explicit`
which is a pattern I don't think belongs in database.py: it branches on
possibly stale data to realize there is nothing to change, but in reality
it requires a write transaction to know that for a fact, but that would
defeat the purpose. So, leave this optimization to the call site.
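
Schematically, the fix moves the lookup inside the transaction (a sketch of the pattern; the real `mark` implementation differs in detail):

```python
# Buggy: 'rec' can point at an object that write_transaction() throws away.
rec = self._data[key]
with self.write_transaction():
    rec.explicit = explicit  # may mutate a stale object

# Correct pattern: re-read the record inside the transaction, so the
# mutation lands in the data that actually gets persisted.
with self.write_transaction():
    self._data[key].explicit = explicit
```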
2024-11-18 12:47:11 +01:00
Harmen Stoppels
8a55299214 packaging_guide.rst: explain forward and backward compat before the less common cases (#47402)
The idea is to go from most to least used: backward compat -> forward compat -> pinning on major or major.minor version -> pinning specific, concrete versions.

Further, the following

```python
   # backward compatibility with Python
   depends_on("python@3.8:")
   depends_on("python@3.9:", when="@1.2:")
   depends_on("python@3.10:", when="@1.4:")

   # forward compatibility with Python
   depends_on("python@:3.12", when="@:1.10")
   depends_on("python@:3.13", when="@:1.12")
   depends_on("python@:3.14")
```

is better than disjoint when ranges causing repetition of the rules on dependencies, and requiring frequent editing of existing lines after new releases are done:

```python
   depends_on("python@3.8:3.12", when="@:1.1")
   depends_on("python@3.9:3.12", when="@1.2:1.3")
   depends_on("python@3.10:3.12", when="@1.4:1.10")
   depends_on("python@3.10:3.13", when="@1.11:1.12")
   depends_on("python@3.10:3.14", when="@1.13:")
2024-11-18 12:47:11 +01:00
Wouter Deconinck
174308ca3d ffmpeg: update patch hashes for addition of the X-Git-Tag (#45574) 2024-11-18 12:47:11 +01:00
Harmen Stoppels
506c176d25 remove xfail marked hanging foreground/background tests 2024-11-18 12:47:11 +01:00
Adam J. Stewart
88171ff353 Docs: remove reference to pyspack (#47346) 2024-11-18 12:47:11 +01:00
Harmen Stoppels
fb8f2d8301 autopush: run after install manifest is written 2024-11-18 12:47:11 +01:00
Todd Gamblin
27beb100e1 mypy: work around typing issues with functools.partial (#47160) 2024-11-18 12:47:11 +01:00
Alex Hedges
02613d778d Remove trailing spaces in default YAML files (#47328)
caught by `prettier`
2024-11-18 12:47:11 +01:00
Alex Hedges
a6065334ad Fix malformed RST link in documentation (#47309) 2024-11-18 12:47:11 +01:00
Alex Hedges
243ee7203a Fix errors found by running spack audit externals (#47308)
The problem was that `+` is part of the regex grammar, so it needs to be
escaped.
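
For example (an illustrative snippet, not the actual external detection code in question):

```python
import re

version_output = "g++ (GCC) 12.3.0"

# '+' is a regex quantifier, so an unescaped "g++" is not even a valid
# pattern ("multiple repeat"); the literal characters must be escaped.
match = re.search(r"g\+\+ \(GCC\) ([\d.]+)", version_output)
assert match and match.group(1) == "12.3.0"
```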
2024-11-18 12:47:11 +01:00
Alex Hedges
271106e5dd Fix typo in default concretizer.yaml (#47307)
This was caught by `codespell` when I copied the config file into an
internal repository.
2024-11-18 12:47:11 +01:00
Harmen Stoppels
9fb9156e82 bootstrap: do not consider source when metadata file missing (#47278) 2024-11-18 12:47:11 +01:00
Harmen Stoppels
d88cfbd839 compilers.yaml: require list of strings for modules (#47197) 2024-11-18 12:47:11 +01:00
Harmen Stoppels
4127a93a91 ensure write_fd.close() isn't called when sys.std* cannot be redirected 2024-11-18 12:47:11 +01:00
Harmen Stoppels
5df2189e43 Avoid a socket to communicate effectively a bit 2024-11-18 12:47:11 +01:00
Harmen Stoppels
667c1960d0 Replace MultiProcessFd with Connection objects
Connection objects are Python version, platform and multiprocessing
start method independent, so better to use those than a mix of plain
file descriptors and inadequate guesses in the child process whether it
was forked or not.

This also allows us to delete the now redundant MultiProcessFd class,
hopefully making things a bit easier to follow.
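
A minimal standard-library sketch of why `Connection` objects are the more portable currency here (not Spack code):

```python
import multiprocessing as mp

def child(conn):
    # A Connection can be handed to a child process regardless of whether it
    # was forked or spawned, and offers send()/recv() without guessing about
    # the underlying file descriptor.
    conn.send("hello from the child")
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = mp.Pipe()
    proc = mp.Process(target=child, args=(child_conn,))
    proc.start()
    print(parent_conn.recv())  # -> "hello from the child"
    proc.join()
```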
2024-11-18 12:47:11 +01:00
Harmen Stoppels
4449058257 fix use of traceback.format_exception (#47080)
Co-authored-by: Peter Scheibel <scheibel1@llnl.gov>
2024-11-18 12:47:11 +01:00
Todd Gamblin
1b03f1d5bc Better docs and typing for _setup_pkg_and_run (#46709)
There was a bit of mystery surrounding the arguments for `_setup_pkg_and_run`. It passes
two file descriptors for handling `gmake`'s job server in child processes, but they are
unused in the method.

It turns out that there are good reasons to do this -- depending on the multiprocessing
backend, these file descriptors may be closed in the child if they're not passed
directly to it.

- [x] Document all arguments to `_setup_pkg_and_run`.
- [x] Add type hints for `_setup_pkg_and_run`.
- [x] Refactor exception handling in `_setup_pkg_and_run` so it's easier to add type
      hints. `exc_info()` was problematic because it *can* return `None` (just not
      in the context where it's used).  `mypy` was too dumb to notice this.

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2024-11-18 12:47:11 +01:00
Harmen Stoppels
cf4e9cb3b3 installer.py: handle external roots the same (#44917)
There was logic not to enqueue build requests for externals if they occur as
roots. That's unnecessary.
2024-11-18 12:47:11 +01:00
Harmen Stoppels
d189b12050 installer: improve init signature and explicits (#44374)
Change the installer to take `([pkg], args)` in the constructor instead
of `[(pkg, args)]`. The reason is that certain arguments are global
settings, and the new API ensures that those arguments cannot be
different across different "build requests".

The `explicit` install arg is now a list of hashes, and the installer is
no longer responsible for determining what package is installed
explicitly. This way environment installs can simply pass the list of
environment roots, without them necessarily being explicit build
requests. For example an env with two roots [a, b], where b depends on
a, would not always cause spack install to mark b as explicit.

Notice that `overwrite` already took a list of hashes; this makes
`explicit` consistent.

`package.do_install(explicit=True)` continues to take a boolean.
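
A rough sketch of the shape of the new interface (an illustrative stand-in; argument names and the helper method are assumptions, not Spack's actual code):

```python
class PackageInstaller:
    def __init__(self, packages, explicit=(), overwrite=(), **install_args):
        self.packages = list(packages)      # one list of packages
        self.explicit = set(explicit)       # DAG hashes to mark explicit
        self.overwrite = set(overwrite)     # hashes to overwrite in place
        self.install_args = install_args    # global settings shared by all requests

    def is_explicit(self, pkg):
        # The installer no longer decides what is explicit; callers pass hashes,
        # e.g. an environment passes the hashes of its roots.
        return pkg.spec.dag_hash() in self.explicit
```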
2024-11-18 12:47:11 +01:00
Massimiliano Culpo
5134504fd8 Fix broken spack find -u (#47102)
fixes #47101

The bug was introduced in #33495, where `spack find` was not updated,
and wasn't caught by unit tests.

Now a Database can accept a custom predicate to select the installation
records. A unit test is added to prevent regressions. The weird convention
of having `any` as a default value has been replaced by the more commonly
used `None`.

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-11-18 12:47:11 +01:00
Harmen Stoppels
ae5018ee09 python: drop build-tools tag (#46980)
Remove the `build-tools` tag of python, otherwise these types of
concretizations are possible:

```
py-root
  ^py-pip
    ^python@3.12
  ^python@3.13
```

So, a package would be configured with py-pip using python 3.12, but
installed for 3.13, which does not work.
2024-11-18 12:47:11 +01:00
Harmen Stoppels
106ebeb502 Show underlying errors on fetch failure (#45714)
- unwrap/flatten nested exceptions
- improve tests
- unify curl lookup
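
A small sketch of what unwrapping/flattening a chained exception can look like (illustrative helper only):

```python
def flatten_exception(exc):
    """Render an exception and its chained causes as one message (sketch)."""
    parts = []
    while exc is not None:
        parts.append(f"{type(exc).__name__}: {exc}")
        exc = exc.__cause__ or exc.__context__
    return " -> ".join(parts)

try:
    try:
        raise OSError("connection refused")
    except OSError as inner:
        raise RuntimeError("fetch of mirror index failed") from inner
except RuntimeError as outer:
    print(flatten_exception(outer))
    # RuntimeError: fetch of mirror index failed -> OSError: connection refused
```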
2024-11-18 12:47:11 +01:00
Harmen Stoppels
ba89754ee1 buildcache: recognize . and .. as paths instead of names (#47105) 2024-11-18 12:47:11 +01:00
Harmen Stoppels
e733eb0fd9 docs: do not promote build_systems/* at all (#47111) 2024-11-18 12:47:11 +01:00
Harmen Stoppels
07b344bf10 docs: tune ranking further (#47110)
promote hand-written docs, demote generated "docs" for sources, modules, packages.
2024-11-18 12:47:11 +01:00
Harmen Stoppels
a71c65399e docs search: rank api lowest and generated commands low (#47107) 2024-11-18 12:47:11 +01:00
Harmen Stoppels
d8b73331f6 bootstrap: remove all system gnupg/patchelf executables (#47165) 2024-11-18 12:47:11 +01:00
Peter Scheibel
4741ea683c avoid double closing of fd in sub-processes (#47035)
Both `multiprocessing.connection.Connection.__del__` and `io.IOBase.__del__` called `os.close` on the same file descriptor. As of Python 3.13, this is an explicit warning. Ensure we close once by using `os.fdopen(..., closefd=False)`.
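
The standard-library behaviour being relied on, in isolation:

```python
import os

read_fd, write_fd = os.pipe()

# closefd=False: the wrapper will not call os.close() on the descriptor,
# so only one explicit close happens and no double-close warning is raised.
stream = os.fdopen(read_fd, "rb", closefd=False)
stream.close()       # closes the wrapper only
os.close(read_fd)    # the single close of the underlying descriptor
os.close(write_fd)
```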
2024-11-18 12:47:11 +01:00
Harmen Stoppels
f33c18290b tests: fix wrong install name (#46711) 2024-11-18 12:47:11 +01:00
Massimiliano Culpo
5ea67e8882 archspec: update to v0.2.5 (#46958)
Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-11-18 12:47:11 +01:00
Massimiliano Culpo
8ade071253 Bump archspec 0.2.5-dev (#46503)
Use commit bceb39528ac49dd0c876b2e9bf3e7482e9c2be4a
2024-11-18 12:47:11 +01:00
Tobias Ribizel
dca09e6e0f env depfile: generate Makefile with absolute script path (#46966)
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2024-11-18 12:47:11 +01:00
Harmen Stoppels
2776402c90 Update release documentation (#46991) 2024-11-18 12:47:11 +01:00
Harmen Stoppels
3961a86f86 support python 3.13 bootstrapping from sources (#46983) 2024-11-18 12:47:11 +01:00
Harmen Stoppels
39e594d096 clingo: fix build with Python 3.13+ (#46775)
Python 3.9 deprecated `PyEval_InitThreads` and kept it as a no-op, and
Python 3.13 deleted the function.

The patch removes calls to it.
2024-11-18 12:47:11 +01:00
Harmen Stoppels
7a91bed5c9 Set version to v0.22.3.dev0 2024-11-18 12:47:11 +01:00
Harmen Stoppels
594a376c52 Set version to v0.22.2 2024-09-21 12:39:55 +02:00
Harmen Stoppels
1538c48616 run-unit-tests: no xdist if coverage (#46480)
xdist only slows down unit tests under coverage
2024-09-21 12:39:55 +02:00
Massimiliano Culpo
683e50b8d9 Run unit test in parallel again in CI (#45793)
The --trace-config option was failing for linux unit-tests,
so we were running serial.
2024-09-21 12:39:55 +02:00
Harmen Stoppels
9b32fb0beb Revert "Change environment modifications to escape with double quotes (#36789)" (#42780)
This reverts commit 690394fabc, as it causes arbitrary code execution.
2024-09-21 12:39:55 +02:00
Harmen Stoppels
2c6df0d491 deal with TimeoutError from ssl.py (#45683) 2024-09-21 12:39:55 +02:00
Harmen Stoppels
ce7218acae buildcache: fix hard-coded, outdated layout version (#45645) 2024-09-21 12:39:55 +02:00
Dominic Hofer
246eeb2b69 Remove execution permission from setup-env.sh (#45641)
`setup-env.sh` is meant to be sourced, not executed directly.
By revoking execution permissions, users who accidentally execute
the script will receive an error instead of seeing no effect.

* Remove execution permission from `setup-env.sh` and friends
* Don't make output file executable in `spack commands --update-completion`

---------

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2024-09-21 12:39:55 +02:00
Harmen Stoppels
cc47ee3984 unparser.py: remove print statements (#45235) 2024-09-21 12:39:55 +02:00
Harmen Stoppels
7b644719c1 Avoid duplicate detectable tag (#45160)
In case of inheritance the static `tags` property may be updated multiple
times; it turns out builder classes magically inherit from
traditional package classes.
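
The underlying gotcha is the classic shared mutable class attribute (a generic illustration, not the actual builder code):

```python
class BasePackage:
    tags = ["build-tools"]

class MyBuilder(BasePackage):   # builder classes inherit the same list object
    pass

def add_detectable_tag(cls):
    # Without the membership check, running this for both the package class
    # and an inheriting builder class appends to the *same* list twice.
    if "detectable" not in cls.tags:
        cls.tags.append("detectable")

add_detectable_tag(BasePackage)
add_detectable_tag(MyBuilder)
print(BasePackage.tags)         # ['build-tools', 'detectable'] -- no duplicate
```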
2024-09-21 12:39:55 +02:00
Harmen Stoppels
d8a6aa551e build_environment: explicitly disable ccache if disabled (#45275) 2024-09-21 12:39:55 +02:00
Massimiliano Culpo
ac7b18483a Bump archspec to latest commit (#46445)
This should fix an issue with Neoverse XX detection

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-09-21 12:39:55 +02:00
Massimiliano Culpo
39f37de4ce Update archspec to v0.2.5-dev (7e6740012b897ae4a950f0bba7e9726b767e921f) (#45721) 2024-09-21 12:39:55 +02:00
Harmen Stoppels
703e153404 require spec in develop entry (#46485) 2024-09-21 12:39:55 +02:00
Harmen Stoppels
aa013611bc url join: fix oci scheme (#46483)
* url.py: also special case oci scheme in join

* avoid fetching keys from oci mirror
2024-09-21 12:39:55 +02:00
Harmen Stoppels
6a7ccd4e46 docs: refer to upstreams.yaml in chain.rst title (#46475) 2024-09-21 12:39:55 +02:00
Harmen Stoppels
1c6c4b4690 spack.util.url: fix join breakage in python 3.12.6 (#46453) 2024-09-21 12:39:55 +02:00
arezaii
68558b3dd0 Chapel package: updates post release (#45304)
* Fix +rocm variant, to ensure correct dependencies on ROCm packages
  and use of AMD LLVM
* Add a +pshm variant for comm=gasnet to enable fast shared-memory
  comms between co-locales
* Add logic to ensure we get the native CXI libfabric network provider
  on Cray EX
* Expand dependency type for package modules to encompass runtime
  dependencies
* Factor logic for setting (LD_)LIBRARY_PATH and PKG_CONFIG_PATH of
  runtime dependencies
* Workaround issue #44746 that causes a transitive dependency on lua
  to break SLURM
* Disable nonfunctional checkChplDoc test
* Annotate some variants as conditional, to improve spack info output
  and reduce confusion

---------

Co-authored-by: Dan Bonachea <dobonachea@lbl.gov>
2024-09-21 12:39:55 +02:00
arezaii
5440fe09cd update chapel package for v2.1 (#44931) 2024-09-21 12:39:55 +02:00
arezaii
03c22f403f Chapel package: major update (#42197)
* add cray detection taken from upcxx
* add CUDA/ROCm support
* add numerous pass-through options to Chapel build,
  like gpu_mem_strategy, comm_substrate, etc.; all variants are
  translated to analogous CHPL_* environment variables. As a side
  effect, this defines a number of environment variables that are
  not actually used by Chapel.
* Define LD_LIBRARY_PATH, LIBRARY_PATH, and PKG_CONFIG_PATH to
  help programs built with Chapel properly locate needed runtime
  dependencies

---------

Co-authored-by: bonachea <dobonachea@lbl.gov>
2024-09-21 12:39:55 +02:00
Greg Becker
f339225d22 include_concrete: read from older env formats properly (#45766)
* include_concrete: read from older env formats properly
* spack env rm: fix logic for checking env includes
* regression test

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2024-09-21 12:39:55 +02:00
Massimiliano Culpo
22c815f3d4 Do not halt concretization on unknown variants in externals (#45326)
* Do not halt concretization on unknown variants in externals
2024-09-21 12:39:55 +02:00
Massimiliano Culpo
4354288e44 Run minimization of weights only on known targets (#45269)
This prevents excessive output from clingo of the kind:

.../spack/lib/spack/spack/solver/concretize.lp:1640:5-11: info: tuple ignored:
  #sup@2
2024-09-21 12:39:55 +02:00
Massimiliano Culpo
ea2d43b4a6 Do not initialize previous store state in "use_store" (#45268)
The "use_store" context manager is used to swap the value
of a global variable (spack.store.STORE), while keeping
another global variable consistent (spack.config.CONFIG).

When doing that it tries to evaluate the previous value
of the store, if that was not done already. This is wrong,
since the configuration might be in an "intermediate" state
that was never meant to trigger side effects.

Remove that operation, and add a unit test to
prevent regressions.
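
Schematically, the context manager swaps one global while keeping a second one consistent, and after this change it only keeps a reference to the previous store instead of forcing its evaluation (an illustrative sketch with stand-in globals, not the real implementation):

```python
import contextlib

STORE = None   # stand-in for spack.store.STORE (may be a lazy singleton)
CONFIG = {}    # stand-in for spack.config.CONFIG

@contextlib.contextmanager
def use_store(new_store):
    global STORE
    previous_store = STORE          # keep a reference; do not force evaluation
    previous_root = CONFIG.get("store_root")
    STORE = new_store
    CONFIG["store_root"] = getattr(new_store, "root", None)
    try:
        yield new_store
    finally:                        # restore both globals consistently
        STORE = previous_store
        CONFIG["store_root"] = previous_root
```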
2024-09-21 12:39:55 +02:00
Massimiliano Culpo
85e67d60a0 Add compatibility of sequoia with previous macOS versions (#45127)
* Add compatibility of sequoia with previous macOS versions

* Add compatibility of sequoia with previous macOS versions
2024-09-21 12:39:55 +02:00
Adam J. Stewart
bf6a9ff5ed Add support for macOS Sequoia (#45018) 2024-09-21 12:39:55 +02:00
Jordan Galby
1bdc30979d Fix regression in spec format string for indiviual variants (#46206)
Fix a regression in {variants.X} and {variants.X.value} spec format strings.
2024-09-21 12:39:55 +02:00
Harmen Stoppels
ef1eabe5b3 Add c to the list of languages (#45191) 2024-09-21 12:39:55 +02:00
Harmen Stoppels
43d673f915 Add pkg- prefix to builtin.mock a b c d ... (#45205) 2024-09-21 12:39:55 +02:00
Harmen Stoppels
8a9c501030 spec.py: fix __getitem__ looking outside of dag (#45090)
`Spec.__getitem__` queries dependent edges, which almost always point to
nodes outside the sub-dag considered. It should only ever look at edges
being traversed.
2024-09-21 12:39:55 +02:00
Harmen Stoppels
9f035ca030 Set version to v0.22.2.dev0 2024-09-21 12:39:55 +02:00
Harmen Stoppels
d66dce2d66 Set version to v0.22.1 2024-07-04 15:14:09 +02:00
Jordan Galby
ef2aa2f5f5 spack audit packages: Fix message (#45045)
Fix message formatting of the "virtual dependency cannot have variants" error.
2024-07-04 15:13:31 +02:00
Harmen Stoppels
41f5f6eaab iconv: require libiconv on linux (#45026)
otherwise it is still picked up from glibc as it is external
2024-07-04 15:07:05 +02:00
Massimiliano Culpo
cba347e0b7 Heuristic decays to default over time (#45023)
This modifies the heuristic to decay to the clingo default
over time. The hope is that this helps with specs
that have an optimal solution with a high penalty.

Let the target and compiler heuristics decay too; do not
guess the compiler.
2024-07-04 15:07:05 +02:00
Harmen Stoppels
a3cef0f02e netlib-lapack: provide blas and lapack together (#44981)
If netlib-lapack is built with ~external-blas, it internally links
liblapack.so with libblas.so, meaning that whenever netlib-lapack is
used as a lapack provider, the package must also be a blas provider.

Conversely, using netlib-lapack as a blas provider does not imply that it
also must provide lapack, but nothing is lost by disallowing that...
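
In `package.py` terms this amounts to something along these lines (a sketch; the exact variant conditions in the real package may differ):

```python
# With the internal BLAS, netlib-lapack must provide both virtuals together,
# since its liblapack.so links against its own libblas.so.
provides("blas", "lapack", when="~external-blas")

# With an external BLAS it only acts as the LAPACK provider.
provides("lapack", when="+external-blas")
```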
2024-07-01 16:56:31 +02:00
Harmen Stoppels
45fca040c3 Use composite stage also for develop specs (#44950) 2024-07-01 16:56:31 +02:00
Harmen Stoppels
eb2b5739b2 Remove DIYStage (#44949) 2024-07-01 16:56:31 +02:00
Massimiliano Culpo
d299e17d43 neoverse-v1: restore py-cinemasci (#44976)
Use a different tactic for determining conflicts.

Give more priority to setting very old versions to False.
2024-07-01 16:56:31 +02:00
Massimiliano Culpo
d883883be0 Ensure parent runtime version >= child (#44834)
Fixes a bug where old gcc-runtime libraries would be loaded at runtime, but newer are required by dependencies, breaking the binaries.
2024-07-01 16:56:31 +02:00
Massimiliano Culpo
249dcb49e2 ASP-based solver: add a generic rule for propagation (#44870)
This adds a generic propagate/2 rule to propagate any
fact to children in the DAG.
2024-07-01 16:56:31 +02:00
Massimiliano Culpo
8628add66b Simplify and improve solver heuristic (#44893)
When we changed how to deal with errors in November,
we didn't realize that for an unconstrained choice
rule it is more important in the heuristic to guess
what is NOT in the answer set, since it will be the
majority of options.

Previously this was following automatically from what
was in the answer set, via `1 { ... } 1` cardinality
constraints.

Here we improve the heuristic and the solve time for specs.
2024-07-01 16:56:31 +02:00
Harmen Stoppels
aeccba8bc0 build_environment: fix ccache error handling (#44740) 2024-07-01 16:56:31 +02:00
Todd Gamblin
d94e8ab36f python: make every view a venv (#44382)
#40773 introduced python-venv, which improved build isolation and avoids issues with,
e.g., `ubuntu`'s system python modifying `sysconfig` to include a (very unwanted)
`local` directory within the default install layout.

This addresses a few cases where #40773 removed functionality, without harming the
default cases where we use `python-venv`.

Traditionally, *every* view with `python` in it was essentially a virtual environment,
because we would copy the `python` interpreter and `os.py` into every view when linking.
We now rely on `python-venv` to do that, but only when it's used (i.e. new builds) and
only for packages that have an `extends("python")` directive.

This again makes every view with `python` in it a virtual environment, but only
if we're not already using a package like `python-venv`. This uses a different
mechanism from before -- instead of using the `virtualenv` trick of copying `python`
into the prefix, we instead create a `pyvenv.cfg` like `venv` (the more modern way
to do it).

This fixes two things:
1. If you already had an environment before Spack `v0.22` that worked, it would
   stop working without a reconcretize and rebuild in `v0.22`, because we no longer
   copy the python interpreter on link. Adding `pyvenv.cfg` fixes this in a more
   modern way, so old views will keep working.

2. If you have an env that only includes python packages that use `depends_on("python")`
   instead of `extends("python")`, those packages will now be importable as before,
   though they won't have the same level of build isolation you'd get with `extends`
   and `python-venv`.

* views: avoid making client code deal with link functions

Users of views and ViewDescriptors shouldn't have to deal with link functions -- they
should just say what type of linking they want.

- [x] views take a link_type, not a link function
- [x] views work out the link function from the link type
- [x] view descriptors and commands now just tell the view what they want.

* python: simplify logic for avoiding pyvenv.cfg in copy views

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2024-07-01 16:56:31 +02:00
Massimiliano Culpo
e66c26871f Move unit tests into the same file, simplify main workflow 2024-07-01 16:56:31 +02:00
kwryankrattiger
2db4ff7061 Generate jobs should use x86_64_v3 runners only (#44582) 2024-07-01 16:56:31 +02:00
Tom Bradford
c248932a94 protobuf: fix 3.4:3.21 patch checksum (#44443) 2024-07-01 16:56:31 +02:00
dmagdavector
f15d302fc7 protobuf: update hash for patch needed when="@3.4:3.21" (#44210)
* protobuf: update hash for patch needed when="@3.4:3.21"

* Update var/spack/repos/builtin/packages/protobuf/package.py

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

* Update var/spack/repos/builtin/packages/protobuf/package.py

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>

---------

Co-authored-by: Wouter Deconinck <wdconinc@gmail.com>
2024-07-01 16:56:31 +02:00
John W. Parent
74ef630241 Windows: Non config changes to support Gitlab CI (#43965)
* Quote python for shlex

* Remove python path quoting patch

* spack env: Allow `C` "protocol" for config_path

When running spack on windows, a path beginning with `C://...` is a valid path.

* Remove makefile from ci rebuild

* GPG use llnl.util.filesystem.getuid

* Cleanup process_command

* Remove unused lines

* Fix tyop in encode_path

* Double quote arguments

* Cleanup process_command

* Pass cdash args with =

* Escape parens in CMD script

* escape parens doesn't only apply to paths

* Install deps

* sfn prefix

* use sfn with libxml2

* Add hash to dep install

* WIP

* REview

* Changes missed in prior review commit

* Style

* Ensure we handle Windows paths with config scopes

* clarify docstring

* No more MAKE_COMMAND

* syntax cleanup

* Actually correct is_path_url

* Correct call

* raise on other errors

* url2path behaves differently on unix

* Ensure proper quoting

* actually prepend slash in slash_hash

---------

Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
Co-authored-by: Mike VanDenburgh <michael.vandenburgh@kitware.com>
2024-07-01 16:56:31 +02:00
John W. Parent
a70ea11e69 Gitlab CI: Windows Configs (#43967)
Add support for Gitlab CI on Windows

This PR adds the config changes required to configure and execute
Gitlab pipelines running Windows builds on Windows runners using
the existing Gitlab CI infrastructure (and newly added Windows 
infrastructure).

* Adds support for generating child pipelines dispatched to Windows runners
* Refactors the relevant pre-scripts, scripts, and post scripts to be compatible with Windows
* Adds Windows config section describing Windows jobs
* Adds VTK as Windows build stack (to be expanded later)
* Modifies proj to build on Windows
* Refactors Windows rpath symlinking to avoid system libs and externals

---------

Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
Co-authored-by: Mike VanDenburgh <michael.vandenburgh@kitware.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
2024-07-01 16:56:31 +02:00
John W. Parent
a79b1bd9af Buildcache/ensure symlinks proper prefix (#43851)
* archive: relative links only

Ensure all links written into tarfiles generated from Spack prefixes do not contain symlinks pointing outside the prefix

* binary_distribution: limit extraction to prefix

Ensure files extracted from spackballs are not links pointing outside of the prefix

* Ensure rpaths are properly set on Windows

* hard error on extraction of absolute links

* refactor for non link-modifying approach

* Restore tarball extraction to original impl

* use custom readlink

* cleanup symlink module

* make lstrip
2024-07-01 16:56:31 +02:00
John W. Parent
ac5d5485b9 Cdash reporting timeout (#44213)
* Add timeout to cdash reporter PUT request

Add cdash timeout everywhere
Correct mock responder api

* Style

* brief doc
2024-07-01 16:56:31 +02:00
John W. Parent
04258f9cce Prefer llnl.util.symlink.readlink to os.readlink (#44126)
Symlinks on Windows can use longpath prefixes (\\?\); these are fine
in the context of win32 API interactions but break numerous facets of
Spack behavior that rely on string parsing/matching (archiving,
binary distributions, tarball extraction, view regen, etc).

Spack's internal readlink method (llnl.util.symlink.readlink)
gracefully handles this by removing the prefix and otherwise behaving
exactly as os.readlink does, so we should prefer that in all cases.
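
The idea, in isolation (a sketch; the real helper is `llnl.util.symlink.readlink`):

```python
import os

def readlink(path):
    """os.readlink, but with the Windows long-path prefix stripped so the
    result can be parsed and compared like an ordinary path (sketch)."""
    target = os.readlink(path)
    prefix = "\\\\?\\"            # the literal \\?\ prefix
    if target.startswith(prefix):
        target = target[len(prefix):]
    return target
```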
2024-07-01 16:56:31 +02:00
Scott Wittenburg
1b14170bd1 gitlab ci: fix untouched spec pruning on windows (#44279)
Use correct path separator in get_all_package_diffs for all platforms.
Ensures correct package change computation on Windows when pruning unchanged specs in Gitlab CI
2024-07-01 16:56:31 +02:00
Massimiliano Culpo
a3bc9dbfe8 Make strong preferences even stronger (#44373)
Before this PR, if Spack could see a possibility to reuse a spec that
doesn't match a strong preference, it would do so. After the PR, a
strong preference would take precedence.
2024-07-01 16:56:31 +02:00
Greg Becker
e7c86259bd bugfix: external detection for compilers with os but not target (#44156)
avoid calling `spec.target` when None.

When an external compiler package has an `os` set but no `target` set, Spack
currently falls into a codepath that calls `spec.target` (which itself calls
`spec.architecture.target.Microarchitecture`) when `spec.architecture.target`
is None, throwing an error.

e.g.

```
packages:
  gcc:
    externals:
    - spec: gcc@12.3.1 os=rhel7
      prefix: /usr
```

---------

Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2024-07-01 16:56:31 +02:00
Massimiliano Culpo
2605aeb072 ASP-based solver: fix reusing externals on linux (#44316)
We need to tell clingo the libc compatibility of external nodes
in buildcaches or stores, to allow reuse.
2024-07-01 16:56:31 +02:00
Massimiliano Culpo
94536d2b66 Enforce consistency of gl providers (#44307)
* glew: rework dependency on gl

This simplifies the package and ensures a single gl implementation is
pulled in. Before we were adding direct dependencies, and those are
not unified through the virtual.

* mesa-demos: rework dependency on gl

This simplifies the package and ensures a single gl implementation is
pulled in. Before we were adding direct dependencies, and those are
not unified through the virtual.

* mesa-glu: rework dependency on gl

This simplifies the package and ensures a single gl implementation is
pulled in. Before we were adding direct dependencies, and those are
not unified through the virtual.

* paraview: fix dependency on glew

* mesa: group dependency on when("+glx")

* Add missing dependency on libxml2

* paraview: remove the "osmesa" and "egl" variant

Instead, enforce consistency using the "gl" virtual that allows
only one provider.

* visit: remove osmesa variant

* Disable paraview in the aws-isc stacks

* data-vis-sdk: rework constrains to enforce front-ends

* e4s-power: remove redundant paraview

* Pipelines: update osmesa variants

* trilinos-catalyst-ioss-adapter: make gl a run dependency
2024-07-01 16:56:31 +02:00
Massimiliano Culpo
5e580fc82e Remove mesa18 and libosmesa (#44264)
* Remove mesa18 and libosmesa

mesa18 was introduced in #19528 as a way to maintain the old
autotools build of mesa separate from the new meson build.

We could add a second build system to mesa, but since mesa18 has
been deprecated for a long time, we'll just remove it.

libosmesa was used to multiplex the gl provider between mesa18
and mesa, and is thus unnecessary. Remove it to reduce complexity
in the graphical stack.

* Remove references to mesa18 and libosmesa

* vtk: rework dependency on gl and osmesa

* memsurfer: rework dependency on vtk

* visit: minimal fix to avoid having both osmesa and glx
2024-07-01 16:56:31 +02:00
Harmen Stoppels
195bad8675 Prefer libiconv for iconv (#44335)
`glibc` and `musl` provide a basic implementation of `iconv` (`iconv`,
`iconv_open`, `iconv_close`), but in practice the installation may be
missing the character encoding methods to make them usable. On Fedora
for example, users need to

```yum install glibc-gconv-extra```

to get the character encodings that `gettext` requires during configure,
namely EUC-JP. Users may not have permissions to install the missing
parts of glibc.

Since Spack can install `libiconv`, it is simpler to use that by
default.
2024-07-01 16:56:31 +02:00
Harmen Stoppels
bd9f3f100a gcc: use -rpath {rpath_dir} not -rpath={rpath dir} (#44315)
to make macOS's linker happy.
2024-07-01 16:56:31 +02:00
Mosè Giordano
b5962613a0 suite-sparse: improve setting of the libs property (#44214)
on some distros it is in lib64/
2024-07-01 16:56:31 +02:00
Massimiliano Culpo
cbcfc7e10a Demote a warning to debug message, if C compiler is not there (#44182) 2024-07-01 16:56:31 +02:00
Massimiliano Culpo
579fadacd0 ASP-based solver: fix version optimization for roots (#44272)
This fixes a bug occurring when two root specs need to select
old versions, and these versions have the same penalty in the
optimization. This sometimes caused an older version to be
preferred to a more recent one.

The issue was the omission of `PackageNode` in the optimization
tuple.
2024-07-01 16:56:31 +02:00
Chris Green
b86d08b022 git: bump v2.39 to 2.45; deprecate unsafe versions (#44248) 2024-07-01 16:56:31 +02:00
Scott Wittenburg
02d62cf40f oci buildcache: handle pagination of tags (#43136)
This fixes an issue where ghcr, gitlab and possibly other container registries paginate tags by default, which violates the OCI spec v1.0, but is common practice (the spec was broken itself). After this commit, you can create build cache indices of > 100 specs on ghcr.

Co-authored-by: Harmen Stoppels <me@harmenstoppels.nl>
2024-07-01 16:56:31 +02:00
Harmen Stoppels
97369776f0 build_environment.py: deal with rpathing identical packages (#44219)
When multiple gcc-runtime packages exist in the same link sub-dag, only rpath
the latest.
2024-07-01 16:56:31 +02:00
Howard Pritchard
47af0159dc py-matplotlib: qualify when to do a post install (#44191)
* py-matplotlib: qualify when to do a post install

Older versions of py-matplotlib don't seem to have some of the
files that the post install step is trying to install.
Looks like the files first appeared in 3.6.0 and later.

Signed-off-by: Howard Pritchard <hppritcha@gmail.com>

* Change install paths for older matplotlib

---------

Signed-off-by: Howard Pritchard <hppritcha@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2024-07-01 16:56:31 +02:00
Alec Scott
db6ead6fc1 rust: fix v1.78.0 instructions (#44127) 2024-07-01 16:56:31 +02:00
Harmen Stoppels
b4aa2c3cab glibc: detect from "Free Software Foundation" not "gnu" (#44154)
which should be more generic
2024-07-01 16:56:31 +02:00
Harmen Stoppels
4108de1ce4 Set version to v0.22.1.dev0 2024-07-01 16:56:31 +02:00
Todd Gamblin
5fe93fee1e Update CHANGELOG.md for v0.22.0 2024-05-12 02:06:28 +02:00
Todd Gamblin
8207f11333 Bump version to v0.22 2024-05-11 17:54:12 +02:00
Todd Gamblin
5bb5d2696f changelog: add changes from 0.21.1 and 0.21.2 (#44136)
These changes were added to the release branch but did not make it onto `develop`.
2024-05-11 17:48:27 +02:00
Harmen Stoppels
55f37dffe5 oci: improve default_retry (#44132)
Apparently urllib can throw a range of different exceptions:

1. HTTPError
2. URLError with e.reason set to the actual exception
3. TimeoutError from getresponse, which is not wrapped
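
A hedged sketch of handling all three cases uniformly (`default_retry` is named in the title; the body below is illustrative, not the actual implementation):

```python
import urllib.error

def default_retry(fn, attempts=3):
    """Call fn(), retrying on the urllib failure modes listed above (sketch)."""
    last_error = None
    for _ in range(attempts):
        try:
            return fn()
        except urllib.error.HTTPError as e:   # 1. plain HTTP errors
            last_error = e
        except urllib.error.URLError as e:    # 2. e.reason holds the real cause
            last_error = e.reason if isinstance(e.reason, Exception) else e
        except TimeoutError as e:             # 3. unwrapped timeout from getresponse
            last_error = e
    raise last_error
```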
2024-05-11 15:44:40 +02:00
Harmen Stoppels
252a5bd71b PythonExtension: fix issue where package does not extend python (#44109) 2024-05-10 10:48:06 +02:00
Massimiliano Culpo
f55224f161 Fix filtering external specs (#44093)
When an include filter on externals is present, implicitly
include libcs.

Also, do not penalize deprecated versions if they come
from externals.
2024-05-09 20:48:43 +02:00
Massimiliano Culpo
189ae4b06e CI/Update macOS runners: macos-latest switched to macos-14 (#44094)
macos-latest switched to macos-14, so now we are running
two identical jobs.
2024-05-09 20:48:43 +02:00
Harmen Stoppels
5e9c702fa7 gcc: use -idirafter for libc headers (#44081)
GCC C++ headers like cstdlib use `#include_next <stdlib.h>` to wrap libc
headers. We're using `-isystem` for libc, which puts those headers too
early in the search path. `-idirafter` fixes this so `include_next`
works.
2024-05-08 20:45:39 +02:00
Harmen Stoppels
965bb4d3c0 gitlab ci: tutorial: add julia and vim (#44073) 2024-05-08 14:19:22 +02:00
Todd Gamblin
354f98c94a r: patch R-CVE-2024-27322 for r@3.5:4.3.3 (#44050)
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2024-05-08 14:19:22 +02:00
Tamara Dahlgren
5dce480154 Remove dead environment creation code (#44065) 2024-05-08 14:19:22 +02:00
Richarda Butler
f634d48b7c Include concrete environments with include_concrete (#33768)
Add the ability to include any number of (potentially nested) concrete environments, e.g.:

```yaml
   spack:
     specs: []
     concretizer:
         unify: true
     include_concrete:
     - /path/to/environment1
     - /path/to/environment2
```

or, from the CLI:

```console
   $ spack env create myenv
   $ spack -e myenv add python
   $ spack -e myenv concretize
   $ spack env create --include-concrete myenv included_env
```

The contents of included concrete environments' spack.lock files are
included in the environment's lock file at creation time. Any changes
to included concrete environments are only reflected after the environment
is re-concretized from the re-concretized included environments.

- [x] Concretize included envs
- [x] Save concrete specs in memory by hash
- [x] Add included envs to combined env's lock file
- [x] Add test
- [x] Update documentation

    Co-authored-by: Kayla Butler <butler59@llnl.gov>
    Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
    Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
    Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2024-05-08 14:19:22 +02:00
Massimiliano Culpo
4daee565ae Update the tutorial command to point to releases/v0.22 (#44056) 2024-05-08 14:19:22 +02:00
Massimiliano Culpo
8e4dbdc2d7 Bump removal version in deprecation messages (#44064) 2024-05-08 14:19:22 +02:00
Harmen Stoppels
4f6adc03cd gitlab: dont build paraview for neoverse v2 (#44060) 2024-05-08 14:19:22 +02:00
9891 changed files with 81254 additions and 129876 deletions

View File

@@ -5,7 +5,7 @@ coverage:
status:
project:
default:
threshold: 2.0%
threshold: 0.2%
ignore:
- lib/spack/spack/test/.*

View File

@@ -1,5 +1,4 @@
{
"name": "Ubuntu 20.04",
"image": "ghcr.io/spack/ubuntu20.04-runner-amd64-gcc-11.4:2023.08.01",
"postCreateCommand": "./.devcontainer/postCreateCommand.sh"
}

View File

@@ -1,5 +0,0 @@
{
"name": "Ubuntu 22.04",
"image": "ghcr.io/spack/ubuntu-22.04:v2024-05-07",
"postCreateCommand": "./.devcontainer/postCreateCommand.sh"
}

View File

@@ -5,10 +5,13 @@ updates:
directory: "/"
schedule:
interval: "daily"
# Requirements to run style checks and build documentation
# Requirements to build documentation
- package-ecosystem: "pip"
directories:
- "/.github/workflows/requirements/style/*"
- "/lib/spack/docs"
directory: "/lib/spack/docs"
schedule:
interval: "daily"
# Requirements to run style checks
- package-ecosystem: "pip"
directory: "/.github/workflows/style"
schedule:
interval: "daily"

View File

@@ -28,8 +28,8 @@ jobs:
run:
shell: ${{ matrix.system.shell }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: ${{inputs.python_version}}
- name: Install Python packages
@@ -40,35 +40,30 @@ jobs:
run: |
python -m pip install --upgrade pywin32
- name: Package audits (with coverage)
env:
COVERAGE_FILE: coverage/.coverage-audits-${{ matrix.system.os }}
if: ${{ inputs.with_coverage == 'true' && runner.os != 'Windows' }}
run: |
. share/spack/setup-env.sh
coverage run $(which spack) audit packages
coverage run $(which spack) audit configs
coverage run $(which spack) -d audit externals
coverage combine
coverage xml
- name: Package audits (without coverage)
if: ${{ inputs.with_coverage == 'false' && runner.os != 'Windows' }}
run: |
. share/spack/setup-env.sh
. share/spack/setup-env.sh
spack -d audit packages
spack -d audit configs
spack -d audit externals
- name: Package audits (without coverage)
if: ${{ runner.os == 'Windows' }}
run: |
. share/spack/setup-env.sh
. share/spack/setup-env.sh
spack -d audit packages
./share/spack/qa/validate_last_exit.ps1
spack -d audit configs
./share/spack/qa/validate_last_exit.ps1
spack -d audit externals
./share/spack/qa/validate_last_exit.ps1
- uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
if: ${{ inputs.with_coverage == 'true' && runner.os != 'Windows' }}
- uses: codecov/codecov-action@5ecb98a3c6b747ed38dc09f787459979aebb39be
if: ${{ inputs.with_coverage == 'true' }}
with:
name: coverage-audits-${{ matrix.system.os }}
path: coverage
include-hidden-files: true
flags: unittests,audits
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true

View File

@@ -1,7 +1,7 @@
#!/bin/bash
set -e
source share/spack/setup-env.sh
$PYTHON bin/spack bootstrap disable github-actions-v0.5
$PYTHON bin/spack bootstrap disable github-actions-v0.4
$PYTHON bin/spack bootstrap disable spack-install
$PYTHON bin/spack $SPACK_FLAGS solve zlib
tree $BOOTSTRAP/store

View File

@@ -37,14 +37,14 @@ jobs:
make patch unzip which xz python3 python3-devel tree \
cmake bison
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- name: Bootstrap clingo
run: |
source share/spack/setup-env.sh
spack bootstrap disable github-actions-v0.6
spack bootstrap disable github-actions-v0.5
spack bootstrap disable github-actions-v0.4
spack external find cmake bison
spack -d solve zlib
tree ~/.spack/bootstrap/store/
@@ -60,20 +60,20 @@ jobs:
run: |
brew install cmake bison tree
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: "3.12"
- name: Bootstrap clingo
run: |
source share/spack/setup-env.sh
spack bootstrap disable github-actions-v0.6
spack bootstrap disable github-actions-v0.5
spack bootstrap disable github-actions-v0.4
spack external find --not-buildable cmake bison
spack -d solve zlib
tree $HOME/.spack/bootstrap/store/
tree ~/.spack/bootstrap/store/
gnupg-sources:
runs-on: ${{ matrix.runner }}
@@ -90,15 +90,15 @@ jobs:
sudo rm $(command -v gpg gpg2 patchelf)
done
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- name: Bootstrap GnuPG
run: |
source share/spack/setup-env.sh
spack solve zlib
spack bootstrap disable github-actions-v0.6
spack bootstrap disable github-actions-v0.5
spack bootstrap disable github-actions-v0.4
spack -d gpg list
tree ~/.spack/bootstrap/store/
@@ -117,10 +117,10 @@ jobs:
sudo rm $(command -v gpg gpg2 patchelf)
done
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: |
3.8
@@ -128,16 +128,15 @@ jobs:
3.10
3.11
3.12
3.13
- name: Set bootstrap sources
run: |
source share/spack/setup-env.sh
spack bootstrap disable github-actions-v0.5
spack bootstrap disable github-actions-v0.4
spack bootstrap disable spack-install
- name: Bootstrap clingo
run: |
set -e
for ver in '3.8' '3.9' '3.10' '3.11' '3.12' '3.13'; do
for ver in '3.8' '3.9' '3.10' '3.11' '3.12' ; do
not_found=1
ver_dir="$(find $RUNNER_TOOL_CACHE/Python -wholename "*/${ver}.*/*/bin" | grep . || true)"
if [[ -d "$ver_dir" ]] ; then
@@ -147,7 +146,7 @@ jobs:
not_found=0
old_path="$PATH"
export PATH="$ver_dir:$PATH"
./bin/spack-tmpconfig -b ./.github/workflows/bin/bootstrap-test.sh
./bin/spack-tmpconfig -b ./.github/workflows/bootstrap-test.sh
export PATH="$old_path"
fi
fi
@@ -160,35 +159,5 @@ jobs:
run: |
source share/spack/setup-env.sh
spack -d gpg list
tree $HOME/.spack/bootstrap/store/
tree ~/.spack/bootstrap/store/
windows:
runs-on: "windows-latest"
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: "3.12"
- name: Setup Windows
run: |
Remove-Item -Path (Get-Command gpg).Path
Remove-Item -Path (Get-Command file).Path
- name: Bootstrap clingo
run: |
./share/spack/setup-env.ps1
spack bootstrap disable github-actions-v0.6
spack bootstrap disable github-actions-v0.5
spack external find --not-buildable cmake bison
spack -d solve zlib
./share/spack/qa/validate_last_exit.ps1
tree $env:userprofile/.spack/bootstrap/store/
- name: Bootstrap GnuPG
run: |
./share/spack/setup-env.ps1
spack -d gpg list
./share/spack/qa/validate_last_exit.ps1
tree $env:userprofile/.spack/bootstrap/store/

View File

@@ -40,30 +40,25 @@ jobs:
# 1: Platforms to build for
# 2: Base image (e.g. ubuntu:22.04)
dockerfile: [[amazon-linux, 'linux/amd64,linux/arm64', 'amazonlinux:2'],
[centos-stream9, 'linux/amd64,linux/arm64', 'centos:stream9'],
[leap15, 'linux/amd64,linux/arm64', 'opensuse/leap:15'],
[ubuntu-focal, 'linux/amd64,linux/arm64', 'ubuntu:20.04'],
[ubuntu-jammy, 'linux/amd64,linux/arm64', 'ubuntu:22.04'],
[ubuntu-noble, 'linux/amd64,linux/arm64', 'ubuntu:24.04'],
[almalinux8, 'linux/amd64,linux/arm64', 'almalinux:8'],
[almalinux9, 'linux/amd64,linux/arm64', 'almalinux:9'],
[centos7, 'linux/amd64,linux/arm64,linux/ppc64le', 'centos:7'],
[centos-stream, 'linux/amd64,linux/arm64,linux/ppc64le', 'centos:stream'],
[leap15, 'linux/amd64,linux/arm64,linux/ppc64le', 'opensuse/leap:15'],
[ubuntu-focal, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:20.04'],
[ubuntu-jammy, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:22.04'],
[ubuntu-noble, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:24.04'],
[almalinux8, 'linux/amd64,linux/arm64,linux/ppc64le', 'almalinux:8'],
[almalinux9, 'linux/amd64,linux/arm64,linux/ppc64le', 'almalinux:9'],
[rockylinux8, 'linux/amd64,linux/arm64', 'rockylinux:8'],
[rockylinux9, 'linux/amd64,linux/arm64', 'rockylinux:9'],
[fedora39, 'linux/amd64,linux/arm64', 'fedora:39'],
[fedora40, 'linux/amd64,linux/arm64', 'fedora:40']]
[fedora39, 'linux/amd64,linux/arm64,linux/ppc64le', 'fedora:39'],
[fedora40, 'linux/amd64,linux/arm64,linux/ppc64le', 'fedora:40']]
name: Build ${{ matrix.dockerfile[0] }}
if: github.repository == 'spack/spack'
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
- name: Determine latest release tag
id: latest
run: |
git fetch --quiet --tags
echo "tag=$(git tag --list --sort=-v:refname | grep -E '^v[0-9]+\.[0-9]+\.[0-9]+$' | head -n 1)" | tee -a $GITHUB_OUTPUT
- uses: docker/metadata-action@369eb591f429131d6889c46b94e711f089e6ca96
- uses: docker/metadata-action@8e5442c4ef9f78752691e2d8f8d19755c6f78e81
id: docker_meta
with:
images: |
@@ -77,13 +72,12 @@ jobs:
type=semver,pattern={{major}}
type=ref,event=branch
type=ref,event=pr
type=raw,value=latest,enable=${{ github.ref == format('refs/tags/{0}', steps.latest.outputs.tag) }}
- name: Generate the Dockerfile
env:
SPACK_YAML_OS: "${{ matrix.dockerfile[2] }}"
run: |
.github/workflows/bin/generate_spack_yaml_containerize.sh
.github/workflows/generate_spack_yaml_containerize.sh
. share/spack/setup-env.sh
mkdir -p dockerfiles/${{ matrix.dockerfile[0] }}
spack containerize --last-stage=bootstrap | tee dockerfiles/${{ matrix.dockerfile[0] }}/Dockerfile
@@ -94,19 +88,19 @@ jobs:
fi
- name: Upload Dockerfile
uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808
with:
name: dockerfiles_${{ matrix.dockerfile[0] }}
path: dockerfiles
- name: Set up QEMU
uses: docker/setup-qemu-action@49b3bc8e6bdd4a60e6116a5414239cba5943d3cf
uses: docker/setup-qemu-action@68827325e0b33c7199eb31dd4e31fbe9023e06e3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@6524bf65af31da8d45b59e8c27de4bd072b392f5
uses: docker/setup-buildx-action@d70bba72b1f3fd22344832f00baa16ece964efeb
- name: Log in to GitHub Container Registry
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567
uses: docker/login-action@e92390c5fb421da1463c202d546fed0ec5c39f20
with:
registry: ghcr.io
username: ${{ github.actor }}
@@ -114,13 +108,13 @@ jobs:
- name: Log in to DockerHub
if: github.event_name != 'pull_request'
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567
uses: docker/login-action@e92390c5fb421da1463c202d546fed0ec5c39f20
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build & Deploy ${{ matrix.dockerfile[0] }}
uses: docker/build-push-action@48aba3b46d1b1fec4febb7c5d0c644b249a11355
uses: docker/build-push-action@2cdde995de11925a030ce8070c3d77a52ffcf1c0
with:
context: dockerfiles/${{ matrix.dockerfile[0] }}
platforms: ${{ matrix.dockerfile[1] }}
@@ -133,7 +127,7 @@ jobs:
needs: deploy-images
steps:
- name: Merge Artifacts
uses: actions/upload-artifact/merge@6f51ac03b9356f520e9adb1b1b7802705f340c2b
uses: actions/upload-artifact/merge@65462800fd760344b1a7b4382951275a0abb4808
with:
name: dockerfiles
pattern: dockerfiles_*

View File

@@ -15,6 +15,18 @@ concurrency:
cancel-in-progress: true
jobs:
prechecks:
needs: [ changes ]
uses: ./.github/workflows/valid-style.yml
secrets: inherit
with:
with_coverage: ${{ needs.changes.outputs.core }}
all-prechecks:
needs: [ prechecks ]
runs-on: ubuntu-latest
steps:
- name: Success
run: "true"
# Check which files have been updated by the PR
changes:
runs-on: ubuntu-latest
@@ -24,7 +36,7 @@ jobs:
core: ${{ steps.filter.outputs.core }}
packages: ${{ steps.filter.outputs.packages }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
if: ${{ github.event_name == 'push' }}
with:
fetch-depth: 0
@@ -41,13 +53,6 @@ jobs:
- 'var/spack/repos/builtin/packages/clingo/**'
- 'var/spack/repos/builtin/packages/python/**'
- 'var/spack/repos/builtin/packages/re2c/**'
- 'var/spack/repos/builtin/packages/gnupg/**'
- 'var/spack/repos/builtin/packages/libassuan/**'
- 'var/spack/repos/builtin/packages/libgcrypt/**'
- 'var/spack/repos/builtin/packages/libgpg-error/**'
- 'var/spack/repos/builtin/packages/libksba/**'
- 'var/spack/repos/builtin/packages/npth/**'
- 'var/spack/repos/builtin/packages/pinentry/**'
- 'lib/spack/**'
- 'share/spack/**'
- '.github/workflows/bootstrap.yml'
@@ -67,57 +72,14 @@ jobs:
needs: [ prechecks, changes ]
uses: ./.github/workflows/bootstrap.yml
secrets: inherit
unit-tests:
if: ${{ github.repository == 'spack/spack' && needs.changes.outputs.core == 'true' }}
needs: [ prechecks, changes ]
uses: ./.github/workflows/unit_tests.yaml
secrets: inherit
prechecks:
needs: [ changes ]
uses: ./.github/workflows/valid-style.yml
secrets: inherit
with:
with_coverage: ${{ needs.changes.outputs.core }}
import-check:
needs: [ changes ]
uses: ./.github/workflows/import-check.yaml
all-prechecks:
needs: [ prechecks ]
if: ${{ always() }}
all:
needs: [ unit-tests, bootstrap ]
runs-on: ubuntu-latest
steps:
- name: Success
run: |
if [ "${{ needs.prechecks.result }}" == "failure" ] || [ "${{ needs.prechecks.result }}" == "canceled" ]; then
echo "Unit tests failed."
exit 1
else
exit 0
fi
coverage:
needs: [ unit-tests, prechecks ]
uses: ./.github/workflows/coverage.yml
secrets: inherit
all:
needs: [ unit-tests, coverage, bootstrap ]
if: ${{ always() }}
runs-on: ubuntu-latest
# See https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/accessing-contextual-information-about-workflow-runs#needs-context
steps:
- name: Status summary
run: |
if [ "${{ needs.unit-tests.result }}" == "failure" ] || [ "${{ needs.unit-tests.result }}" == "canceled" ]; then
echo "Unit tests failed."
exit 1
elif [ "${{ needs.bootstrap.result }}" == "failure" ] || [ "${{ needs.bootstrap.result }}" == "canceled" ]; then
echo "Bootstrap tests failed."
exit 1
else
exit 0
fi
run: "true"

View File

@@ -1,36 +0,0 @@
name: coverage
on:
workflow_call:
jobs:
# Upload coverage reports to codecov once as a single bundle
upload:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: '3.11'
cache: 'pip'
- name: Install python dependencies
run: pip install -r .github/workflows/requirements/coverage/requirements.txt
- name: Download coverage artifact files
uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16
with:
pattern: coverage-*
path: coverage
merge-multiple: true
- run: ls -la coverage
- run: coverage combine -a coverage/.coverage*
- run: coverage xml
- name: "Upload coverage report to CodeCov"
uses: codecov/codecov-action@1e68e06f1dbfde0e4cefc87efeba9e4643565303
with:
verbose: true
fail_ci_if_error: false
token: ${{ secrets.CODECOV_TOKEN }}

View File

@@ -1,49 +0,0 @@
name: import-check
on:
workflow_call:
jobs:
# Check we don't make the situation with circular imports worse
import-check:
runs-on: ubuntu-latest
steps:
- uses: julia-actions/setup-julia@v2
with:
version: '1.10'
- uses: julia-actions/cache@v2
# PR: use the base of the PR as the old commit
- name: Checkout PR base commit
if: github.event_name == 'pull_request'
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
ref: ${{ github.event.pull_request.base.sha }}
path: old
# not a PR: use the previous commit as the old commit
- name: Checkout previous commit
if: github.event_name != 'pull_request'
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 2
path: old
- name: Checkout previous commit
if: github.event_name != 'pull_request'
run: git -C old reset --hard HEAD^
- name: Checkout new commit
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
path: new
- name: Install circular import checker
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
repository: haampie/circular-import-fighter
ref: 4cdb0bf15f04ab6b49041d5ef1bfd9644cce7f33
path: circular-import-fighter
- name: Install dependencies
working-directory: circular-import-fighter
run: make -j dependencies
- name: Circular import check
working-directory: circular-import-fighter
run: make -j compare "SPACK_ROOT=../old ../new"

8
.github/workflows/install_spack.sh vendored Executable file
View File

@@ -0,0 +1,8 @@
#!/usr/bin/env sh
. share/spack/setup-env.sh
echo -e "config:\n build_jobs: 2" > etc/spack/config.yaml
spack config add "packages:all:target:[x86_64]"
spack compiler find
spack compiler info apple-clang
spack debug report
spack solve zlib

View File

@@ -14,10 +14,10 @@ jobs:
build-paraview-deps:
runs-on: windows-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: 3.9
- name: Install Python packages

View File

@@ -1 +0,0 @@
coverage==7.6.1

View File

@@ -1,7 +0,0 @@
black==25.1.0
clingo==5.7.1
flake8==7.1.2
isort==6.0.1
mypy==1.15.0
types-six==1.17.0.20250304
vermin==1.6.0

View File

@@ -1,3 +1,5 @@
# (c) 2022 Lawrence Livermore National Laboratory
git config --global user.email "spack@example.com"
git config --global user.name "Test User"
git config --global core.longpaths true

View File

@@ -0,0 +1,7 @@
black==24.4.2
clingo==5.7.1
flake8==7.0.0
isort==5.13.2
mypy==1.8.0
types-six==1.16.21.9
vermin==1.6.0

View File

@@ -14,36 +14,47 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest]
python-version: ['3.8', '3.9', '3.10', '3.11', '3.12']
os: [ubuntu-22.04]
python-version: ['3.7', '3.8', '3.9', '3.10', '3.11', '3.12']
concretizer: ['clingo']
on_develop:
- ${{ github.ref == 'refs/heads/develop' }}
include:
- python-version: '3.11'
os: ubuntu-20.04
concretizer: original
on_develop: ${{ github.ref == 'refs/heads/develop' }}
- python-version: '3.6'
os: ubuntu-20.04
on_develop: ${{ github.ref == 'refs/heads/develop' }}
- python-version: '3.7'
os: ubuntu-22.04
concretizer: clingo
on_develop: ${{ github.ref == 'refs/heads/develop' }}
exclude:
- python-version: '3.7'
concretizer: 'clingo'
os: ubuntu-22.04
on_develop: false
- python-version: '3.8'
os: ubuntu-latest
concretizer: 'clingo'
os: ubuntu-22.04
on_develop: false
- python-version: '3.9'
os: ubuntu-latest
concretizer: 'clingo'
os: ubuntu-22.04
on_develop: false
- python-version: '3.10'
os: ubuntu-latest
concretizer: 'clingo'
os: ubuntu-22.04
on_develop: false
- python-version: '3.11'
os: ubuntu-latest
concretizer: 'clingo'
os: ubuntu-22.04
on_develop: false
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: ${{ matrix.python-version }}
- name: Install System packages
@@ -52,13 +63,7 @@ jobs:
# Needed for unit tests
sudo apt-get -y install \
coreutils cvs gfortran graphviz gnupg2 mercurial ninja-build \
cmake bison libbison-dev subversion
# On ubuntu 24.04, kcov was removed. It may come back in some future Ubuntu
- name: Set up Homebrew
id: set-up-homebrew
uses: Homebrew/actions/setup-homebrew@40e9946c182a64b3db1bf51be0dcb915f7802aa9
- name: Install kcov with brew
run: "brew install kcov"
cmake bison libbison-dev kcov
- name: Install Python packages
run: |
pip install --upgrade pip setuptools pytest pytest-xdist pytest-cov
@@ -67,7 +72,7 @@ jobs:
run: |
# Need this for the git tests to succeed.
git --version
. .github/workflows/bin/setup_git.sh
. .github/workflows/setup_git.sh
- name: Bootstrap clingo
if: ${{ matrix.concretizer == 'clingo' }}
env:
@@ -80,38 +85,32 @@ jobs:
- name: Run unit tests
env:
SPACK_PYTHON: python
SPACK_TEST_SOLVER: ${{ matrix.concretizer }}
SPACK_TEST_PARALLEL: 2
COVERAGE: true
COVERAGE_FILE: coverage/.coverage-${{ matrix.os }}-python${{ matrix.python-version }}
UNIT_TEST_COVERAGE: ${{ matrix.python-version == '3.11' }}
run: |
share/spack/qa/run-unit-tests
- uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
- uses: codecov/codecov-action@5ecb98a3c6b747ed38dc09f787459979aebb39be
with:
name: coverage-${{ matrix.os }}-python${{ matrix.python-version }}
path: coverage
include-hidden-files: true
flags: unittests,linux,${{ matrix.concretizer }}
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
# Test shell integration
shell:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: '3.11'
- name: Install System packages
run: |
sudo apt-get -y update
# Needed for shell tests
sudo apt-get install -y coreutils csh zsh tcsh fish dash bash subversion
# On ubuntu 24.04, kcov was removed. It may come back in some future Ubuntu
- name: Set up Homebrew
id: set-up-homebrew
uses: Homebrew/actions/setup-homebrew@40e9946c182a64b3db1bf51be0dcb915f7802aa9
- name: Install kcov with brew
run: "brew install kcov"
sudo apt-get install -y coreutils kcov csh zsh tcsh fish dash bash
- name: Install Python packages
run: |
pip install --upgrade pip setuptools pytest coverage[toml] pytest-xdist
@@ -119,17 +118,17 @@ jobs:
run: |
# Need this for the git tests to succeed.
git --version
. .github/workflows/bin/setup_git.sh
. .github/workflows/setup_git.sh
- name: Run shell tests
env:
COVERAGE: true
run: |
share/spack/qa/run-shell-tests
- uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
- uses: codecov/codecov-action@5ecb98a3c6b747ed38dc09f787459979aebb39be
with:
name: coverage-shell
path: coverage
include-hidden-files: true
flags: shelltests,linux
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
# Test RHEL8 UBI with platform Python. This job is run
# only on PRs modifying core Spack
@@ -140,15 +139,15 @@ jobs:
- name: Install dependencies
run: |
dnf install -y \
bzip2 curl gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch tcl unzip which xz
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
- name: Setup repo and non-root user
run: |
git --version
git config --global --add safe.directory '*'
git config --global --add safe.directory /__w/spack/spack
git fetch --unshallow
. .github/workflows/bin/setup_git.sh
. .github/workflows/setup_git.sh
useradd spack-test
chown -R spack-test .
- name: Run unit tests
@@ -159,39 +158,38 @@ jobs:
spack unit-test -k 'not cvs and not svn and not hg' -x --verbose
# Test for the clingo based solver (using clingo-cffi)
clingo-cffi:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: '3.13'
python-version: '3.11'
- name: Install System packages
run: |
sudo apt-get -y update
sudo apt-get -y install coreutils gfortran graphviz gnupg2
sudo apt-get -y install coreutils cvs gfortran graphviz gnupg2 mercurial ninja-build kcov
- name: Install Python packages
run: |
pip install --upgrade pip setuptools pytest coverage[toml] pytest-cov clingo
pip install --upgrade pip setuptools pytest coverage[toml] pytest-cov clingo pytest-xdist
pip install --upgrade flake8 "isort>=4.3.5" "mypy>=0.900" "click" "black"
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
git --version
. .github/workflows/setup_git.sh
- name: Run unit tests (full suite with coverage)
env:
COVERAGE: true
COVERAGE_FILE: coverage/.coverage-clingo-cffi
SPACK_TEST_SOLVER: clingo
run: |
. share/spack/setup-env.sh
spack bootstrap disable spack-install
spack bootstrap disable github-actions-v0.5
spack bootstrap disable github-actions-v0.6
spack bootstrap status
spack solve zlib
spack unit-test --verbose --cov --cov-config=pyproject.toml --cov-report=xml:coverage.xml lib/spack/spack/test/concretization/core.py
- uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
share/spack/qa/run-unit-tests
- uses: codecov/codecov-action@5ecb98a3c6b747ed38dc09f787459979aebb39be
with:
name: coverage-clingo-cffi
path: coverage
include-hidden-files: true
flags: unittests,linux,clingo
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
# Run unit tests on MacOS
macos:
runs-on: ${{ matrix.os }}
@@ -200,10 +198,10 @@ jobs:
os: [macos-13, macos-14]
python-version: ["3.11"]
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: ${{ matrix.python-version }}
- name: Install Python packages
@@ -212,24 +210,24 @@ jobs:
pip install --upgrade pytest coverage[toml] pytest-xdist pytest-cov
- name: Setup Homebrew packages
run: |
brew install dash fish gcc gnupg kcov
brew install dash fish gcc gnupg2 kcov
- name: Run unit tests
env:
SPACK_TEST_SOLVER: clingo
SPACK_TEST_PARALLEL: 4
COVERAGE_FILE: coverage/.coverage-${{ matrix.os }}-python${{ matrix.python-version }}
run: |
git --version
. .github/workflows/bin/setup_git.sh
. .github/workflows/setup_git.sh
. share/spack/setup-env.sh
$(which spack) bootstrap disable spack-install
$(which spack) solve zlib
common_args=(--dist loadfile --tx '4*popen//python=./bin/spack-tmpconfig python -u ./bin/spack python' -x)
$(which spack) unit-test --verbose --cov --cov-config=pyproject.toml --cov-report=xml:coverage.xml "${common_args[@]}"
- uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
- uses: codecov/codecov-action@5ecb98a3c6b747ed38dc09f787459979aebb39be
with:
name: coverage-${{ matrix.os }}-python${{ matrix.python-version }}
path: coverage
include-hidden-files: true
flags: unittests,macos
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true
# Run unit tests on Windows
windows:
defaults:
@@ -238,10 +236,10 @@ jobs:
powershell Invoke-Expression -Command "./share/spack/qa/windows_test_setup.ps1"; {0}
runs-on: windows-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/checkout@a5ac7e51b41094c92402da3b24376905380afc29
with:
fetch-depth: 0
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: 3.9
- name: Install Python packages
@@ -249,15 +247,15 @@ jobs:
python -m pip install --upgrade pip pywin32 setuptools pytest-cov clingo
- name: Create local develop
run: |
./.github/workflows/bin/setup_git.ps1
./.github/workflows/setup_git.ps1
- name: Unit Test
env:
COVERAGE_FILE: coverage/.coverage-windows
run: |
spack unit-test -x --verbose --cov --cov-config=pyproject.toml
./share/spack/qa/validate_last_exit.ps1
- uses: actions/upload-artifact@6f51ac03b9356f520e9adb1b1b7802705f340c2b
coverage combine -a
coverage xml
- uses: codecov/codecov-action@125fc84a9a348dbcf27191600683ec096ec9021c
with:
name: coverage-windows
path: coverage
include-hidden-files: true
flags: unittests,windows
token: ${{ secrets.CODECOV_TOKEN }}
verbose: true

View File

@@ -13,19 +13,20 @@ concurrency:
jobs:
# Validate that the code can be run on all the Python versions supported by Spack
# Validate that the code can be run on all the Python versions
# supported by Spack
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: '3.13'
python-version: '3.11'
cache: 'pip'
- name: Install Python Packages
run: |
pip install --upgrade pip setuptools
pip install -r .github/workflows/requirements/style/requirements.txt
pip install -r .github/workflows/style/requirements.txt
- name: vermin (Spack's Core)
run: vermin --backport importlib --backport argparse --violations --backport typing -t=3.6- -vvv lib/spack/spack/ lib/spack/llnl/ bin/
- name: vermin (Repositories)
@@ -34,22 +35,22 @@ jobs:
style:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
with:
fetch-depth: 0
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
- uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d
with:
python-version: '3.13'
python-version: '3.11'
cache: 'pip'
- name: Install Python packages
run: |
pip install --upgrade pip setuptools
pip install -r .github/workflows/requirements/style/requirements.txt
pip install -r .github/workflows/style/requirements.txt
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.
git --version
. .github/workflows/bin/setup_git.sh
. .github/workflows/setup_git.sh
- name: Run style tests
run: |
share/spack/qa/run-style-tests
@@ -58,7 +59,7 @@ jobs:
secrets: inherit
with:
with_coverage: ${{ inputs.with_coverage }}
python_version: '3.13'
python_version: '3.11'
# Check that spack can bootstrap the development environment on Python 3.6 - RHEL8
bootstrap-dev-rhel8:
runs-on: ubuntu-latest
@@ -69,13 +70,13 @@ jobs:
dnf install -y \
bzip2 curl file gcc-c++ gcc gcc-gfortran git gnupg2 gzip \
make patch tcl unzip which xz
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b
- name: Setup repo and non-root user
run: |
git --version
git config --global --add safe.directory '*'
git config --global --add safe.directory /__w/spack/spack
git fetch --unshallow
. .github/workflows/bin/setup_git.sh
. .github/workflows/setup_git.sh
useradd spack-test
chown -R spack-test .
- name: Bootstrap Spack development environment
@@ -84,23 +85,5 @@ jobs:
source share/spack/setup-env.sh
spack debug report
spack -d bootstrap now --dev
spack -d style -t black
spack style -t black
spack unit-test -V
# Further style checks from pylint
pylint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: '3.13'
cache: 'pip'
- name: Install Python packages
run: |
pip install --upgrade pip setuptools pylint
- name: Pylint (Spack Core)
run: |
pylint -j 4 --disable=all --enable=unspecified-encoding --ignore-paths=lib/spack/external lib

.gitignore vendored
View File

@@ -201,6 +201,7 @@ tramp
# Org-mode
.org-id-locations
*_archive
# flymake-mode
*_flymake.*

View File

@@ -1,3 +1,40 @@
# v0.22.5 (2025-02-21)
## Bugfixes
- Continue to mark non-roots as implicitly installed on partial env installs (#47183)
## Notes
- v0.22.4 is skipped due to an error in the release process
# v0.22.3 (2024-11-18)
## Bugfixes
- Forward compatibility with Python 3.13 (#46775, #46983, #47035, #47175)
- `archspec` was updated to v0.2.5 (#46503, #46958)
- Fix path to Spack in `spack env depfile` makefile (#46966)
- Fix `glibc` detection in Chinese locales (#47434)
- Fix pickle round-trip of specs propagating variants (#47351)
- Fix a bug where concurrent spack install commands would not always update explicits correctly
(#47358)
- Fix a bug where autopush would run before all post install hooks modifying the install prefix
had run (#47329)
- Fix `spack find -u` (#47102)
- Fix a bug where sometimes the wrong Python interpreter was used for build dependencies such as
`py-setuptools` (#46980)
- Fix default config errors found by `spack audit externals` (#47308)
- Fix duplicate printing of external roots in installer (#44917)
- Fix modules schema in `compilers.yaml` (#47197)
- Reduce the size of generated YAML for Gitlab CI (#44995)
- Handle missing metadata file gracefully in bootstrap (#47278)
- Show underlying errors on fetch failure (#45714)
- Recognize `.` and `..` as paths instead of names in buildcache commands (#47105)
- Documentation and style (#46991, #47107, #47110, #47111, #47346, #47307, #47309, #47328, #47160,
#47402, #47557, #46709, #47080)
- Tests and CI fixes (#47165, #46711)
## Package updates
- ffmpeg: fix hash of patch (#45574)
# v0.22.2 (2024-09-21)
## Bugfixes
@@ -60,6 +97,7 @@
- suite-sparse: improve setting of the `libs` property (#44214)
- netlib-lapack: provide blas and lapack together (#44981)
# v0.22.0 (2024-05-12)
`v0.22.0` is a major feature release.
@@ -380,15 +418,6 @@
* 344 committers to packages
* 45 committers to core
# v0.21.3 (2024-10-02)
## Bugfixes
- Forward compatibility with Spack 0.23 packages with language dependencies (#45205, #45191)
- Forward compatibility with `urllib` from Python 3.12.6+ (#46453, #46483)
- Bump `archspec` to 0.2.5-dev for better aarch64 and Windows support (#42854, #44005,
#45721, #46445)
- Support macOS Sequoia (#45018, #45127, #43862)
- CI and test maintenance (#42909, #42728, #46711, #41943, #43363)
# v0.21.2 (2024-03-01)
@@ -419,7 +448,7 @@
- spack graph: fix coloring with environments (#41240)
- spack info: sort variants in --variants-by-name (#41389)
- Spec.format: error on old style format strings (#41934)
- ASP-based solver:
- ASP-based solver:
- fix infinite recursion when computing concretization errors (#41061)
- don't error for type mismatch on preferences (#41138)
- don't emit spurious debug output (#41218)

View File

@@ -8,9 +8,8 @@ or http://www.apache.org/licenses/LICENSE-2.0) or the MIT license,
Copyrights and patents in the Spack project are retained by contributors.
No copyright assignment is required to contribute to Spack.
Spack was originally developed in 2013 by Lawrence Livermore National
Security, LLC. It was originally distributed under the LGPL-2.1 license.
Consent from contributors to relicense to Apache-2.0/MIT is documented at
Spack was originally distributed under the LGPL-2.1 license. Consent from
contributors to relicense to Apache-2.0/MIT is documented at
https://github.com/spack/spack/issues/9137.
@@ -103,6 +102,6 @@ PackageName: sbang
PackageHomePage: https://github.com/spack/sbang
PackageLicenseDeclared: Apache-2.0 OR MIT
PackageName: typing_extensions
PackageHomePage: https://pypi.org/project/typing-extensions/
PackageLicenseDeclared: Python-2.0
PackageName: six
PackageHomePage: https://pypi.python.org/pypi/six
PackageLicenseDeclared: MIT

View File

@@ -1,6 +1,6 @@
MIT License
Copyright (c) Spack Project Developers.
Copyright (c) 2013-2024 LLNS, LLC and other Spack Project Developers.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -32,7 +32,7 @@
Spack is a multi-platform package manager that builds and installs
multiple versions and configurations of software. It works on Linux,
macOS, Windows, and many supercomputers. Spack is non-destructive: installing a
macOS, and many supercomputers. Spack is non-destructive: installing a
new version of a package does not break existing installations, so many
configurations of the same package can coexist.
@@ -46,18 +46,13 @@ See the
[Feature Overview](https://spack.readthedocs.io/en/latest/features.html)
for examples and highlights.
To install spack and your first package, make sure you have Python & Git.
To install spack and your first package, make sure you have Python.
Then:
$ git clone -c feature.manyFiles=true --depth=2 https://github.com/spack/spack.git
$ git clone -c feature.manyFiles=true https://github.com/spack/spack.git
$ cd spack/bin
$ ./spack install zlib
> [!TIP]
> `-c feature.manyFiles=true` improves git's performance on repositories with 1,000+ files.
>
> `--depth=2` prunes the git history to reduce the size of the Spack installation.
Documentation
----------------
@@ -70,7 +65,7 @@ Tutorial
----------------
We maintain a
[**hands-on tutorial**](https://spack-tutorial.readthedocs.io/).
[**hands-on tutorial**](https://spack.readthedocs.io/en/latest/tutorial.html).
It covers basic to advanced usage, packaging, developer features, and large HPC
deployments. You can do all of the exercises on your own laptop using a
Docker container.

View File

@@ -1,4 +1,5 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import subprocess

View File

@@ -1,6 +1,7 @@
#!/bin/sh
#
# Copyright sbang project developers. See COPYRIGHT file for details.
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# sbang project developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,7 +1,8 @@
#!/bin/sh
# -*- python -*-
#
# Copyright Spack Project Developers. See COPYRIGHT file for details.
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -25,6 +26,7 @@ exit 1
# The code above runs this file with our preferred python interpreter.
import os
import os.path
import sys
min_python3 = (3, 6)

View File

@@ -1,6 +1,7 @@
#!/bin/sh
#
# Copyright Spack Project Developers. See COPYRIGHT file for details.
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -21,4 +22,4 @@
#
# This is compatible across platforms.
#
exec spack python "$@"
exec /usr/bin/env spack python "$@"

View File

@@ -1,4 +1,5 @@
:: Copyright Spack Project Developers. See COPYRIGHT file for details.
:: Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
:: Spack Project Developers. See the top-level COPYRIGHT file for details.
::
:: SPDX-License-Identifier: (Apache-2.0 OR MIT)
::#######################################################################
@@ -187,27 +188,25 @@ if NOT "%_sp_args%"=="%_sp_args:--help=%" (
goto :end_switch
:case_load
if NOT defined _sp_args (
exit /B 0
)
:: If args contain --bat, or -h/--help: just execute.
if NOT "%_sp_args%"=="%_sp_args:--help=%" (
goto :default_case
) else if NOT "%_sp_args%"=="%_sp_args:-h=%" (
goto :default_case
) else if NOT "%_sp_args%"=="%_sp_args:--bat=%" (
goto :default_case
) else if NOT "%_sp_args%"=="%_sp_args:--list=%" (
goto :default_case
:: If args contain --sh, --csh, or -h/--help: just execute.
if defined _sp_args (
if NOT "%_sp_args%"=="%_sp_args:--help=%" (
goto :default_case
) else if NOT "%_sp_args%"=="%_sp_args:-h=%" (
goto :default_case
) else if NOT "%_sp_args%"=="%_sp_args:--bat=%" (
goto :default_case
)
)
for /f "tokens=* USEBACKQ" %%I in (
`python "%spack%" %_sp_flags% %_sp_subcommand% --bat %_sp_args%`
) do %%I
`python "%spack%" %_sp_flags% %_sp_subcommand% --bat %_sp_args%`) do %%I
goto :end_switch
:case_unload
goto :case_load
:default_case
python "%spack%" %_sp_flags% %_sp_subcommand% %_sp_args%
goto :end_switch

View File

@@ -1,4 +1,5 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
# #######################################################################

View File

@@ -1,11 +1,71 @@
@ECHO OFF
setlocal EnableDelayedExpansion
:: (c) 2021 Lawrence Livermore National Laboratory
:: To use this file independently of Spack's installer, execute this script in its directory, or add the
:: associated bin directory to your PATH. Invoke to launch Spack Shell.
::
:: source_dir/spack/bin/spack_cmd.bat
::
pushd %~dp0..
set SPACK_ROOT=%CD%
pushd %CD%\..
set spackinstdir=%CD%
popd
call "%~dp0..\share\spack\setup-env.bat"
pushd %SPACK_ROOT%
%comspec% /K
:: Check if Python is on the PATH
if not defined python_pf_ver (
(for /f "delims=" %%F in ('where python.exe') do (
set "python_pf_ver=%%F"
goto :found_python
) ) 2> NUL
)
:found_python
if not defined python_pf_ver (
:: If not, look for Python from the Spack installer
:get_builtin
(for /f "tokens=*" %%g in ('dir /b /a:d "!spackinstdir!\Python*"') do (
set "python_ver=%%g")) 2> NUL
if not defined python_ver (
echo Python was not found on your system.
echo Please install Python or add Python to your PATH.
) else (
set "py_path=!spackinstdir!\!python_ver!"
set "py_exe=!py_path!\python.exe"
)
goto :exitpoint
) else (
:: Python is already on the path
set "py_exe=!python_pf_ver!"
(for /F "tokens=* USEBACKQ" %%F in (
`"!py_exe!" --version`) do (set "output=%%F")) 2>NUL
if not "!output:Microsoft Store=!"=="!output!" goto :get_builtin
goto :exitpoint
)
:exitpoint
set "PATH=%SPACK_ROOT%\bin\;%PATH%"
if defined py_path (
set "PATH=%py_path%;%PATH%"
)
if defined py_exe (
"%py_exe%" "%SPACK_ROOT%\bin\haspywin.py"
)
set "EDITOR=notepad"
DOSKEY spacktivate=spack env activate $*
@echo **********************************************************************
@echo ** Spack Package Manager
@echo **********************************************************************
IF "%1"=="" GOTO CONTINUE
set
GOTO:EOF
:continue
set PROMPT=[spack] %PROMPT%
%comspec% /k

View File

@@ -1,4 +1,5 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -9,15 +9,15 @@ bootstrap:
# may not be able to bootstrap all the software that Spack needs,
# depending on its type.
sources:
- name: github-actions-v0.6
metadata: $spack/share/spack/bootstrap/github-actions-v0.6
- name: github-actions-v0.5
- name: 'github-actions-v0.5'
metadata: $spack/share/spack/bootstrap/github-actions-v0.5
- name: spack-install
- name: 'github-actions-v0.4'
metadata: $spack/share/spack/bootstrap/github-actions-v0.4
- name: 'spack-install'
metadata: $spack/share/spack/bootstrap/spack-install
trusted:
# By default we trust bootstrapping from sources and from binaries
# produced on Github via the workflow
github-actions-v0.6: true
github-actions-v0.5: true
github-actions-v0.4: true
spack-install: true
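# A minimal sketch, assuming the source names listed above: a user who prefers
# to build the bootstrap tools from sources can untrust a binary source in
# their own configuration scope, for example:
bootstrap:
  trusted:
    github-actions-v0.6: false   # skip pre-built bootstrap binaries
    spack-install: true          # still allow bootstrapping from sources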

View File

@@ -39,53 +39,11 @@ concretizer:
# Option to deal with possible duplicate nodes (i.e. different nodes from the same package) in the DAG.
duplicates:
# "none": allows a single node for any package in the DAG.
# "minimal": allows the duplication of 'build-tools' nodes only
# (e.g. py-setuptools, cmake etc.)
# "minimal": allows the duplication of 'build-tools' nodes only (e.g. py-setuptools, cmake etc.)
# "full" (experimental): allows separation of the entire build-tool stack (e.g. the entire "cmake" subDAG)
strategy: minimal
# Maximum number of duplicates in a DAG, when using a strategy that allows duplicates. "default" is the
# number used if there isn't a more specific alternative
max_dupes:
default: 1
# Virtuals
c: 2
cxx: 2
fortran: 1
# Regular packages
cmake: 2
gmake: 2
python: 2
python-venv: 2
py-cython: 2
py-flit-core: 2
py-pip: 2
py-setuptools: 2
py-wheel: 2
xcb-proto: 2
# Compilers
gcc: 2
llvm: 2
# Option to specify compatibility between operating systems for reuse of compilers and packages
# Specified as a key: [list] where the key is the os that is being targeted, and the list contains the OS's
# it can reuse. Note this is a directional compatibility so mutual compatibility between two OS's
# requires two entries i.e. os_compatible: {sonoma: [monterey], monterey: [sonoma]}
os_compatible: {}
# Option to specify whether to support splicing. Splicing allows for
# the relinking of concrete package dependencies in order to better
# reuse already built packages with ABI compatible dependencies
splice:
explicit: []
automatic: false
# Maximum time, in seconds, allowed for the 'solve' phase. If set to 0, there is no time limit.
timeout: 0
# If set to true, exceeding the timeout will always result in a concretization error. If false,
# the best (suboptimal) model computed before the timeout is used.
#
# Setting this to false yields unreproducible results, so we advise to use that value only
# for debugging purposes (e.g. check which constraints can help Spack concretize faster).
error_on_timeout: true
# Static analysis may reduce the concretization time by generating smaller ASP problems, in
# cases where there are requirements that prevent part of the search space to be explored.
static_analysis: false

View File

@@ -115,6 +115,12 @@ config:
suppress_gpg_warnings: false
# If set to true, Spack will attempt to build any compiler on the spec
# that is not already available. If set to False, Spack will only use
# compilers already configured in compilers.yaml
install_missing_compilers: false
# If set to true, Spack will always check checksums after downloading
# archives. If false, Spack skips the checksum step.
checksum: true
@@ -164,6 +170,23 @@ config:
# If set to true, Spack will use ccache to cache C compiles.
ccache: false
# The concretization algorithm to use in Spack. Options are:
#
# 'clingo': Uses a logic solver under the hood to solve DAGs with full
# backtracking and optimization for user preferences. Spack will
# try to bootstrap the logic solver, if not already available.
#
# 'original': Spack's original greedy, fixed-point concretizer. This
# algorithm can make decisions too early and will not backtrack
# sufficiently for many specs. This will soon be deprecated in
# favor of clingo.
#
# See `concretizer.yaml` for more settings you can fine-tune when
# using clingo.
concretizer: clingo
# How long to wait to lock the Spack installation database. This lock is used
# when Spack needs to manage its own package metadata and all operations are
# expected to complete within the default time limit. The timeout should
@@ -194,12 +217,6 @@ config:
# executables with many dependencies, in particular on slow filesystems.
bind: false
# Controls the handling of missing dynamic libraries after installation.
# Options are ignore (default), warn, or error. If set to error, the
# installation fails if installed binaries reference dynamic libraries that
# are not found in their specified rpaths.
missing_library_policy: ignore
# Set to 'false' to allow installation on filesystems that doesn't allow setgid bit
# manipulation by unprivileged user (e.g. AFS)

View File

@@ -0,0 +1,16 @@
# -------------------------------------------------------------------------
# This is the default configuration for Spack's module file generation.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
# $SPACK_ROOT/etc/spack/modules.yaml
#
# Per-user settings (overrides default and site settings):
# ~/.spack/modules.yaml
# -------------------------------------------------------------------------
modules: {}
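# A hedged sketch of a per-user override in ~/.spack/modules.yaml replacing the
# empty default above (the set name "default" and the options shown are
# illustrative):
modules:
  default:
    enable:
      - tcl            # generate Tcl module files
    tcl:
      hash_length: 7   # shorten the hash suffix in module file names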

View File

@@ -0,0 +1,3 @@
packages:
iconv:
require: [libiconv]

View File

@@ -20,14 +20,11 @@ packages:
awk: [gawk]
armci: [armcimpi]
blas: [openblas, amdblis]
c: [gcc]
cxx: [gcc]
D: [ldc]
daal: [intel-oneapi-daal]
elf: [elfutils]
fftw-api: [fftw, amdfftw]
flame: [libflame, amdlibflame]
fortran: [gcc]
fortran-rt: [gcc-runtime, intel-oneapi-runtime]
fuse: [libfuse]
gl: [glx, osmesa]
@@ -36,7 +33,7 @@ packages:
go-or-gccgo-bootstrap: [go-bootstrap, gcc]
iconv: [libiconv]
ipp: [intel-oneapi-ipp]
java: [openjdk, jdk]
java: [openjdk, jdk, ibm-java]
jpeg: [libjpeg-turbo, libjpeg]
lapack: [openblas, amdlibflame]
libc: [glibc, musl]
@@ -64,8 +61,6 @@ packages:
tbb: [intel-tbb]
unwind: [libunwind]
uuid: [util-linux-uuid, libuuid]
wasi-sdk: [wasi-sdk-prebuilt]
xkbdata-api: [xkeyboard-config, xkbdata]
xxd: [xxd-standalone, vim]
yacc: [bison, byacc]
ziglang: [zig]
@@ -73,27 +68,3 @@ packages:
permissions:
read: world
write: user
cray-fftw:
buildable: false
cray-libsci:
buildable: false
cray-mpich:
buildable: false
cray-mvapich2:
buildable: false
cray-pmi:
buildable: false
egl:
buildable: false
essl:
buildable: false
fujitsu-mpi:
buildable: false
fujitsu-ssl2:
buildable: false
hpcx-mpi:
buildable: false
mpt:
buildable: false
spectrum-mpi:
buildable: false

View File

@@ -1,5 +1,6 @@
config:
locks: false
concretizer: clingo
build_stage::
- '$user_cache_path/stage'
- '$spack/.staging'
stage_name: '{name}-{version}-{hash:7}'

View File

@@ -1,4 +1,5 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -1174,17 +1175,6 @@ unspecified version, but packages can depend on other packages with
could depend on ``mpich@1.2:`` if it can only build with version
``1.2`` or higher of ``mpich``.
.. note:: Windows Spec Syntax Caveats
Windows has a few idiosyncrasies when it comes to the Spack spec syntax and the use of certain shells
Spack's spec dependency syntax uses the carat (``^``) character, however this is an escape string in CMD
so it must be escaped with an additional carat (i.e. ``^^``).
CMD also will attempt to interpret strings with ``=`` characters in them. Any spec including this symbol
must double quote the string.
Note: All of these issues are unique to CMD, they can be avoided by using Powershell.
For more context on these caveats see the related issues: `carat <https://github.com/spack/spack/issues/42833>`_ and `equals <https://github.com/spack/spack/issues/43348>`_
Below are more details about the specifiers that you can add to specs.
.. _version-specifier:
@@ -1358,10 +1348,6 @@ For example, for the ``stackstart`` variant:
mpileaks stackstart==4 # variant will be propagated to dependencies
mpileaks stackstart=4 # only mpileaks will have this variant value
Spack also allows variants to be propagated from a package that does
not have that variant.
^^^^^^^^^^^^^^
Compiler Flags
^^^^^^^^^^^^^^
@@ -1447,12 +1433,22 @@ the reserved keywords ``platform``, ``os`` and ``target``:
$ spack install libelf os=ubuntu18.04
$ spack install libelf target=broadwell
or together by using the reserved keyword ``arch``:
.. code-block:: console
$ spack install libelf arch=cray-CNL10-haswell
Normally users don't have to bother specifying the architecture if they
are installing software for their current host, as in that case the
values will be detected automatically. If you need fine-grained control
over which packages use which targets (or over *all* packages' default
target), see :ref:`package-preferences`.
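For instance, a minimal preferences sketch that pins every package to the
generic ``x86_64`` target (the same ``packages:all:target`` setting used by the
CI script in this diff) would be:

.. code-block:: yaml

   packages:
     all:
       target: [x86_64]   # prefer the generic x86_64 target for every package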
.. admonition:: Cray machines
The situation is a little bit different for Cray machines and a detailed
explanation on how the architecture can be set on them can be found at :ref:`cray-support`
.. _support-for-microarchitectures:
@@ -1761,24 +1757,19 @@ Verifying installations
The ``spack verify`` command can be used to verify the validity of
Spack-installed packages any time after installation.
^^^^^^^^^^^^^^^^^^^^^^^^^
``spack verify manifest``
^^^^^^^^^^^^^^^^^^^^^^^^^
At installation time, Spack creates a manifest of every file in the
installation prefix. For links, Spack tracks the mode, ownership, and
destination. For directories, Spack tracks the mode, and
ownership. For files, Spack tracks the mode, ownership, modification
time, hash, and size. The ``spack verify manifest`` command will check,
for every file in each package, whether any of those attributes have
changed. It will also check for newly added files or deleted files from
the installation prefix. Spack can either check all installed packages
time, hash, and size. The Spack verify command will check, for every
file in each package, whether any of those attributes have changed. It
will also check for newly added files or deleted files from the
installation prefix. Spack can either check all installed packages
using the `-a,--all` or accept specs listed on the command line to
verify.
The ``spack verify manifest`` command can also verify for individual files
that they haven't been altered since installation time. If the given file
The ``spack verify`` command can also verify for individual files that
they haven't been altered since installation time. If the given file
is not in a Spack installation prefix, Spack will report that it is
not owned by any package. To check individual files instead of specs,
use the ``-f,--files`` option.
@@ -1793,22 +1784,6 @@ check only local packages (as opposed to those used transparently from
``upstream`` spack instances) and the ``-j,--json`` option to output
machine-readable json data for any errors.
^^^^^^^^^^^^^^^^^^^^^^^^^^
``spack verify libraries``
^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``spack verify libraries`` command can be used to verify that packages
do not have accidental system dependencies. This command scans the install
prefixes of packages for executables and shared libraries, and resolves
their needed libraries in their RPATHs. When needed libraries cannot be
located, an error is reported. This typically indicates that a package
was linked against a system library, instead of a library provided by
a Spack package.
This verification can also be enabled as a post-install hook by setting
``config:shared_linking:missing_library_policy`` to ``error`` or ``warn``
in :ref:`config.yaml <config-yaml>`.
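A minimal sketch of that setting in ``config.yaml``, using the values
documented above:

.. code-block:: yaml

   config:
     shared_linking:
       # fail the installation when installed binaries reference libraries
       # that cannot be resolved from their rpaths
       missing_library_policy: error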
-----------------------
Filesystem requirements
-----------------------

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -264,30 +265,25 @@ infrastructure, or to cache Spack built binaries in Github Actions and
GitLab CI.
To get started, configure an OCI mirror using ``oci://`` as the scheme,
and optionally specify variables that hold the username and password (or
personal access token) for the registry:
and optionally specify a username and password (or personal access token):
.. code-block:: console
$ spack mirror add --oci-username-variable REGISTRY_USER \
--oci-password-variable REGISTRY_TOKEN \
my_registry oci://example.com/my_image
$ spack mirror add --oci-username username --oci-password password my_registry oci://example.com/my_image
Spack follows the naming conventions of Docker, with Dockerhub as the default
registry. To use Dockerhub, you can omit the registry domain:
.. code-block:: console
$ spack mirror add ... my_registry oci://username/my_image
$ spack mirror add --oci-username username --oci-password password my_registry oci://username/my_image
From here, you can use the mirror as any other build cache:
.. code-block:: console
$ export REGISTRY_USER=...
$ export REGISTRY_TOKEN=...
$ spack buildcache push my_registry <specs...> # push to the registry
$ spack install <specs...> # or install from the registry
$ spack install <specs...> # install from the registry
A unique feature of buildcaches on top of OCI registries is that it's incredibly
easy to generate a runnable container image with the binaries installed. This

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -170,7 +171,7 @@ bootstrapping.
To register the mirror on the platform where it's supposed to be used run the following command(s):
% spack bootstrap add --trust local-sources /opt/bootstrap/metadata/sources
% spack bootstrap add --trust local-binaries /opt/bootstrap/metadata/binaries
% spack buildcache update-index /opt/bootstrap/bootstrap_cache
This command needs to be run on a machine with internet access and the resulting folder
has to be moved over to the air-gapped system. Once the local sources are added using the

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -165,106 +166,3 @@ while `py-numpy` still needs an older version:
Up to Spack v0.20 ``duplicates:strategy:none`` was the default (and only) behavior. From Spack v0.21 the
default behavior is ``duplicates:strategy:minimal``.
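To restore the pre-v0.21 behavior, the strategy can be set back to ``none`` in
``concretizer.yaml``; a minimal sketch:

.. code-block:: yaml

   concretizer:
     duplicates:
       # allow at most one node per package in the DAG, as in Spack <= v0.20
       strategy: none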
--------
Splicing
--------
The ``splice`` key covers config attributes for splicing specs in the solver.
"Splicing" is a method for replacing a dependency with another spec
that provides the same package or virtual. There are two types of
splices, referring to different behaviors for shared dependencies
between the root spec and the new spec replacing a dependency:
"transitive" and "intransitive". A "transitive" splice is one that
resolves all conflicts by taking the dependency from the new node. An
"intransitive" splice is one that resolves all conflicts by taking the
dependency from the original root. From a theory perspective, hybrid
splices are possible but are not modeled by Spack.
All spliced specs retain a ``build_spec`` attribute that points to the
original Spec before any splice occurred. The ``build_spec`` for a
non-spliced spec is itself.
The figure below shows examples of transitive and intransitive splices:
.. figure:: images/splices.png
:align: center
The concretizer can be configured to explicitly splice particular
replacements for a target spec. Splicing will allow the user to make
use of generically built public binary caches, while swapping in
highly optimized local builds for performance critical components
and/or components that interact closely with the specific hardware
details of the system. The most prominent candidate for splicing is
MPI providers. MPI packages have relatively well-understood ABI
characteristics, and most High Performance Computing facilities deploy
highly optimized MPI packages tailored to their particular
hardware. The following config block configures Spack to replace
whatever MPI provider each spec was concretized to use with the
particular package of ``mpich`` with the hash that begins ``abcdef``.
.. code-block:: yaml
concretizer:
splice:
explicit:
- target: mpi
replacement: mpich/abcdef
transitive: false
.. warning::
When configuring an explicit splice, you as the user take on the
responsibility for ensuring ABI compatibility between the specs
matched by the target and the replacement you provide. If they are
not compatible, Spack will not warn you and your application will
fail to run.
The ``target`` field of an explicit splice can be any abstract
spec. The ``replacement`` field must be a spec that includes the hash
of a concrete spec, and the replacement must either be the same
package as the target, provide the virtual that is the target, or
provide a virtual that the target provides. The ``transitive`` field
is optional -- by default, splices will be transitive.
.. note::
With explicit splices configured, it is possible for Spack to
concretize to a spec that does not satisfy the input. For example,
with the config above ``hdf5 ^mvapich2`` will concretize to use
``mpich/abcdef`` instead of ``mvapich2`` as the MPI provider. Spack
will warn the user in this case, but will not fail the
concretization.
.. _automatic_splicing:
^^^^^^^^^^^^^^^^^^
Automatic Splicing
^^^^^^^^^^^^^^^^^^
The Spack solver can be configured to do automatic splicing for
ABI-compatible packages. Automatic splices are enabled in the concretizer
config section
.. code-block:: yaml
concretizer:
splice:
automatic: True
Packages can include ABI-compatibility information using the
``can_splice`` directive. See :ref:`the packaging
guide<abi_compatibility>` for instructions on specifying ABI
compatibility using the ``can_splice`` directive.
.. note::
The ``can_splice`` directive is experimental and may be changed in
future versions.
When automatic splicing is enabled, the concretizer will combine any
number of ABI-compatible specs if possible to reuse installed packages
and packages available from binary caches. The end result of these
specs is equivalent to a series of transitive/intransitive splices,
but the series may be non-obvious.

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -146,15 +147,6 @@ example, the ``bash`` shell is used to run the ``autogen.sh`` script.
def autoreconf(self, spec, prefix):
which("bash")("autogen.sh")
If the ``package.py`` has build instructions in a separate
:ref:`builder class <multiple_build_systems>`, the signature for a phase changes slightly:
.. code-block:: python
class AutotoolsBuilder(AutotoolsBuilder):
def autoreconf(self, pkg, spec, prefix):
which("bash")("autogen.sh")
"""""""""""""""""""""""""""""""""""""""
patching configure or Makefile.in files
"""""""""""""""""""""""""""""""""""""""
@@ -272,9 +264,9 @@ often lists dependencies and the flags needed to locate them. The
"environment variables" section lists environment variables that the
build system uses to pass flags to the compiler and linker.
^^^^^^^^^^^^^^^^^^^^^^^^^
Adding flags to configure
^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^
Addings flags to configure
^^^^^^^^^^^^^^^^^^^^^^^^^^
For most of the flags you encounter, you will want a variant to
optionally enable/disable them. You can then optionally pass these
@@ -285,7 +277,7 @@ function like so:
def configure_args(self):
args = []
...
if self.spec.satisfies("+mpi"):
args.append("--enable-mpi")
else:
@@ -299,10 +291,7 @@ Alternatively, you can use the :ref:`enable_or_disable <autotools_enable_or_dis
.. code-block:: python
def configure_args(self):
args = []
...
args.extend(self.enable_or_disable("mpi"))
return args
return [self.enable_or_disable("mpi")]
Note that we are explicitly disabling MPI support if it is not
@@ -347,14 +336,7 @@ typically used to enable or disable some feature within the package.
default=False,
description="Memchecker support for debugging [degrades performance]"
)
...
def configure_args(self):
args = []
...
args.extend(self.enable_or_disable("memchecker"))
return args
config_args.extend(self.enable_or_disable("memchecker"))
In this example, specifying the variant ``+memchecker`` will generate
the following configuration options:

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -56,13 +57,13 @@ If you look at the ``perl`` package, you'll see:
.. code-block:: python
phases = ("configure", "build", "install")
phases = ["configure", "build", "install"]
Similarly, ``cmake`` defines:
.. code-block:: python
phases = ("bootstrap", "build", "install")
phases = ["bootstrap", "build", "install"]
If we look at the ``cmake`` example, this tells Spack's ``PackageBase``
class to run the ``bootstrap``, ``build``, and ``install`` functions
@@ -129,19 +130,14 @@ before or after a particular phase. For example, in ``perl``, we see:
@run_after("install")
def install_cpanm(self):
spec = self.spec
maker = make
cpan_dir = join_path("cpanm", "cpanm")
if sys.platform == "win32":
maker = nmake
cpan_dir = join_path(self.stage.source_path, cpan_dir)
cpan_dir = windows_sfn(cpan_dir)
if "+cpanm" in spec:
with working_dir(cpan_dir):
perl = spec["perl"].command
perl("Makefile.PL")
maker()
maker("install")
spec = self.spec
if spec.satisfies("+cpanm"):
with working_dir(join_path("cpanm", "cpanm")):
perl = spec["perl"].command
perl("Makefile.PL")
make()
make("install")
This extra step automatically installs ``cpanm`` in addition to the
base Perl installation.
@@ -180,14 +176,8 @@ In the ``perl`` package, we can see:
@run_after("build")
@on_package_attributes(run_tests=True)
def build_test(self):
if sys.platform == "win32":
win32_dir = os.path.join(self.stage.source_path, "win32")
win32_dir = windows_sfn(win32_dir)
with working_dir(win32_dir):
nmake("test", ignore_quotes=True)
else:
make("test")
def test(self):
make("test")
As you can guess, this runs ``make test`` *after* building the package,
if and only if testing is requested. Again, this is not specific to

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -24,7 +25,7 @@ use Spack to build packages with the tools.
The Spack Python class ``IntelOneapiPackage`` is a base class that is
used by ``IntelOneapiCompilers``, ``IntelOneapiMkl``,
``IntelOneapiTbb`` and other classes to implement the oneAPI
packages. Search for ``oneAPI`` at `packages.spack.io <https://packages.spack.io>`_ for the full
packages. Search for ``oneAPI`` at `<packages.spack.io>`_ for the full
list of available oneAPI packages, or use::
spack list -d oneAPI

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -24,14 +25,6 @@ QMake does not appear to have a standardized way of specifying
the installation directory, so you may have to set environment
variables or edit ``*.pro`` files to get things working properly.
QMake packages will depend on the virtual ``qmake`` package which
is provided by multiple versions of Qt: ``qt`` provides Qt up to
Qt5, and ``qt-base`` provides Qt from version Qt6 onwards. This
split was motivated by the desire to split the single Qt package
into its components to allow for more fine-grained installation.
To depend on a specific version, refer to the documentation on
:ref:`virtual-dependencies`.
^^^^^^
Phases
^^^^^^

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -48,14 +49,14 @@ following phases:
#. ``install`` - install the package
Package developers often add unit tests that can be invoked with
``scons test`` or ``scons check``. Spack provides a ``build_test`` method
``scons test`` or ``scons check``. Spack provides a ``test`` method
to handle this. Since we don't know which one the package developer
chose, the ``build_test`` method does nothing by default, but can be easily
chose, the ``test`` method does nothing by default, but can be easily
overridden like so:
.. code-block:: python
def build_test(self):
def test(self):
scons("check")

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -10,8 +11,7 @@ Chaining Spack Installations (upstreams.yaml)
You can point your Spack installation to another installation to use any
packages that are installed there. To register the other Spack instance,
you can add it as an entry to ``upstreams.yaml`` at any of the
:ref:`configuration-scopes`:
you can add it as an entry to ``upstreams.yaml``:
.. code-block:: yaml
@@ -22,8 +22,7 @@ you can add it as an entry to ``upstreams.yaml`` at any of the
install_tree: /path/to/another/spack/opt/spack
``install_tree`` must point to the ``opt/spack`` directory inside of the
Spack base directory, or the location of the ``install_tree`` defined
in :ref:`config.yaml <config-yaml>`.
Spack base directory.
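Put together, a complete entry might look like the following (the instance
name ``spack-instance-1`` is illustrative):

.. code-block:: yaml

   upstreams:
     spack-instance-1:
       install_tree: /path/to/another/spack/opt/spack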
Once the upstream Spack instance has been added, ``spack find`` will
automatically check the upstream instance when querying installed packages,

View File

@@ -1,4 +1,5 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -205,28 +206,17 @@ def setup(sphinx):
("py:class", "six.moves.urllib.parse.ParseResult"),
("py:class", "TextIO"),
("py:class", "hashlib._Hash"),
("py:class", "concurrent.futures._base.Executor"),
# Spack classes that are private and we don't want to expose
("py:class", "spack.provider_index._IndexBase"),
("py:class", "spack.repo._PrependFileLoader"),
("py:class", "spack.build_systems._checks.BuilderWithDefaults"),
("py:class", "spack.build_systems._checks.BaseBuilder"),
# Spack classes that intersphinx is unable to resolve
("py:class", "spack.version.StandardVersion"),
("py:class", "spack.spec.DependencySpec"),
("py:class", "spack.spec.ArchSpec"),
("py:class", "spack.spec.InstallStatus"),
("py:class", "spack.spec.SpecfileReaderBase"),
("py:class", "spack.install_test.Pb"),
("py:class", "spack.filesystem_view.SimpleFilesystemView"),
("py:class", "spack.traverse.EdgeAndDepth"),
("py:class", "archspec.cpu.microarchitecture.Microarchitecture"),
("py:class", "spack.compiler.CompilerCache"),
# TypeVar that is not handled correctly
("py:class", "llnl.util.lang.T"),
("py:class", "llnl.util.lang.KT"),
("py:class", "llnl.util.lang.VT"),
("py:obj", "llnl.util.lang.KT"),
("py:obj", "llnl.util.lang.VT"),
]
# The reST default role (used for this markup: `text`) to use for all documents.

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -25,23 +26,14 @@ These settings can be overridden in ``etc/spack/config.yaml`` or
The location where Spack will install packages and their dependencies.
Default is ``$spack/opt/spack``.
---------------
``projections``
---------------
---------------------------------------------------
``install_hash_length`` and ``install_path_scheme``
---------------------------------------------------
.. warning::
Modifying projections of the install tree is strongly discouraged.
By default Spack installs all packages into a unique directory relative to the install
tree root with the following layout:
.. code-block::
{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}
In very rare cases, it may be necessary to reduce the length of this path. For example,
very old versions of the Intel compiler are known to segfault when input paths are too long:
The default Spack installation path can be very long and can create problems
for scripts with hardcoded shebangs. Additionally, when using the Intel
compiler, and if there is also a long list of dependencies, the compiler may
segfault. If you see the following:
.. code-block:: console
@@ -49,25 +41,36 @@ very old versions of the Intel compiler are known to segfault when input paths a
** Segmentation violation signal raised. **
Access violation or stack overflow. Please contact Intel Support for assistance.
Another case is Python and R packages with many runtime dependencies, which can result
in very large ``PYTHONPATH`` and ``R_LIBS`` environment variables. This can cause the
``execve`` system call to fail with ``E2BIG``, preventing processes from starting.
it may be because variables containing dependency specs may be too long. There
are two parameters to help with long path names. Firstly, the
``install_hash_length`` parameter can set the length of the hash in the
installation path from 1 to 32. The default path uses the full 32 characters.
For this reason, Spack allows users to modify the installation layout through custom
projections. For example
Secondly, it is also possible to modify the entire installation
scheme. By default Spack uses
``{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}``
where the tokens that are available for use in this directive are the
same as those understood by the :meth:`~spack.spec.Spec.format`
method. Using this parameter it is possible to use a different package
layout or reduce the depth of the installation paths. For example
.. code-block:: yaml
config:
install_tree:
root: $spack/opt/spack
projections:
all: "{name}/{version}/{hash:16}"
install_path_scheme: '{name}/{version}/{hash:7}'
would install packages into sub-directories using only the package name, version and a
hash length of 16 characters.
would install packages into sub-directories using only the package
name, version and a hash length of 7 characters.
Notice that reducing the hash length increases the likelihood of hash collisions.
When using either parameter to set the hash length it only affects the
representation of the hash in the installation directory. You
should be aware that the smaller the hash length the more likely
naming conflicts will occur. These parameters are independent of those
used to configure module names.
.. warning:: Modifying the installation hash length or path scheme after
packages have been installed will prevent Spack from being
able to find the old installation directories.
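As a hedged sketch of the two parameters described above (the values are illustrative only, not defaults), a ``config.yaml`` that shortens install paths might look like:

.. code-block:: yaml

   config:
     # keep the default layout, but shorten the hash to 7 characters
     install_hash_length: 7
     # or replace the entire path scheme
     install_path_scheme: '{name}/{version}/{hash:7}'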
--------------------
``build_stage``
@@ -125,8 +128,6 @@ are stored in ``$spack/var/spack/cache``. These are stored indefinitely
by default. Can be purged with :ref:`spack clean --downloads
<cmd-spack-clean>`.
.. _Misc Cache:
--------------------
``misc_cache``
--------------------
@@ -336,52 +337,3 @@ create a new alias called ``inst`` that will always call ``install -v``:
aliases:
inst: install -v
-------------------------------
``concretization_cache:enable``
-------------------------------
When set to ``true``, Spack will utilize a cache of solver outputs from
successful concretization runs. When enabled, Spack will check the concretization
cache prior to running the solver. If a previous request to solve a given
problem is present in the cache, Spack will load the concrete specs and other
solver data from the cache rather than running the solver. Specs not previously
concretized will be added to the cache on a successful solve. The cache additionally
holds solver statistics, so commands like ``spack solve`` will still return information
about the run that produced a given solver result.
This cache is a subcache of the :ref:`Misc Cache` and as such will be cleaned when the Misc
Cache is cleaned.
When ``false`` or omitted, all concretization requests will be performed from scratch.
----------------------------
``concretization_cache:url``
----------------------------
Path to the location where Spack will root the concretization cache. Currently this only supports
paths on the local filesystem.
Default location is under the :ref:`Misc Cache` at: ``$misc_cache/concretization``
------------------------------------
``concretization_cache:entry_limit``
------------------------------------
Sets a limit on the number of concretization results that Spack will cache. The limit is evaluated
after each concretization run; if Spack has stored more results than the limit allows, the
oldest concretization results are pruned until 10% of the limit has been removed.
Setting this value to 0 disables automatic pruning; users are then expected to maintain
this cache themselves.
-----------------------------------
``concretization_cache:size_limit``
-----------------------------------
Sets a limit on the size of the concretization cache in bytes. The limit is evaluated
after each concretization run; if the cache has grown beyond this size, the
oldest concretization results are pruned until 10% of the limit has been removed.
Setting this value to 0 disables automatic pruning; users are then expected to maintain
this cache themselves.
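Taken together, a hedged sketch of these settings in ``config.yaml`` (every value below is illustrative, not a default) might look like:

.. code-block:: yaml

   config:
     concretization_cache:
       enable: true
       url: /path/to/concretization_cache  # placeholder; defaults to a subdirectory of the misc cache
       entry_limit: 500                     # 0 disables entry-count pruning
       size_limit: 1073741824               # bytes; 0 disables size-based pruning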

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -280,7 +281,7 @@ When spack queries for configuration parameters, it searches in
higher-precedence scopes first. So, settings in a higher-precedence file
can override those with the same key in a lower-precedence one. For
list-valued settings, Spack *prepends* higher-precedence settings to
lower-precedence settings. Completely ignoring lower-precedence configuration
lower-precedence settings. Completely ignoring higher-level configuration
options is supported with the ``::`` notation for keys (see
:ref:`config-overrides` below).
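For instance, a sketch of a higher-precedence ``packages.yaml`` that uses the ``::`` suffix to replace, rather than merge with, lower-precedence settings (the preference shown is arbitrary):

.. code-block:: yaml

   packages::        # double colon: 'packages' sections from lower-precedence scopes are ignored
     all:
       compiler: [gcc]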
@@ -510,7 +511,6 @@ Spack understands over a dozen special variables. These are:
* ``$target_family``. The target family for the current host, as
detected by ArchSpec. E.g. ``x86_64`` or ``aarch64``.
* ``$date``: the current date in the format YYYY-MM-DD
* ``$spack_short_version``: the Spack version truncated to the first components.
Note that, as with shell variables, you can write these as ``$varname``

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -37,11 +38,9 @@ just have to configure an OCI registry and run ``spack buildcache push``.
spack -e . install
# Configure the registry
spack -e . mirror add --oci-username-variable REGISTRY_USER \
--oci-password-variable REGISTRY_TOKEN \
container-registry oci://example.com/name/image
spack -e . mirror add --oci-username ... --oci-password ... container-registry oci://example.com/name/image
# Push the image (do set REGISTRY_USER and REGISTRY_TOKEN)
# Push the image
spack -e . buildcache push --update-index --base-image ubuntu:22.04 --tag my_env container-registry
The resulting container image can then be run as follows:
@@ -204,9 +203,12 @@ The OS that are currently supported are summarized in the table below:
* - Ubuntu 24.04
- ``ubuntu:24.04``
- ``spack/ubuntu-noble``
* - CentOS Stream9
- ``quay.io/centos/centos:stream9``
- ``spack/centos-stream9``
* - CentOS 7
- ``centos:7``
- ``spack/centos7``
* - CentOS Stream
- ``quay.io/centos/centos:stream``
- ``spack/centos-stream``
* - openSUSE Leap
- ``opensuse/leap``
- ``spack/leap15``

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -315,214 +316,6 @@ documentation tests to make sure there are no errors. Documentation changes can
in some obfuscated warning messages. If you don't understand what they mean, feel free
to ask when you submit your PR.
.. _spack-builders-and-pipelines:
^^^^^^^^^
GitLab CI
^^^^^^^^^
""""""""""""""""""
Build Cache Stacks
""""""""""""""""""
Spack welcomes the contribution of software stacks of interest to the community. These
stacks are used to test package recipes and generate publicly available build caches.
Spack uses GitLab CI for managing the orchestration of build jobs.
GitLab Entry Point
~~~~~~~~~~~~~~~~~~
Add the stack entry point to ``share/spack/gitlab/cloud_pipelines/.gitlab-ci.yml``. There
are two stages required for each new stack: the generation stage and the build stage.
The generate stage is defined using the job template ``.generate`` configured with
environment variables defining the name of the stack in ``SPACK_CI_STACK_NAME`` and the
platform (``SPACK_TARGET_PLATFORM``) and architecture (``SPACK_TARGET_ARCH``) configuration,
and the tags associated with the class of runners to build on.
.. note::
The ``SPACK_CI_STACK_NAME`` must match the name of the directory containing the
stack's ``spack.yaml``.
.. note::
The platform and architecture variables are specified in order to select the
correct configurations from the generic configurations used in Spack CI. The
configurations currently available are:
* ``.cray_rhel_zen4``
* ``.cray_sles_zen4``
* ``.darwin_aarch64``
* ``.darwin_x86_64``
* ``.linux_aarch64``
* ``.linux_icelake``
* ``.linux_neoverse_n1``
* ``.linux_neoverse_v1``
* ``.linux_neoverse_v2``
* ``.linux_skylake``
* ``.linux_x86_64``
* ``.linux_x86_64_v4``
New configurations can be added to accommodate new platforms and architectures.
The build stage is defined as a trigger job that consumes the GitLab CI pipeline generated in
the generate stage for this stack. Build stage jobs use the ``.build`` job template which
handles the basic configuration.
An example entry point for a new stack called ``my-super-cool-stack``
.. code-block:: yaml
.my-super-cool-stack:
extends: [ ".linux_x86_64_v3" ]
variables:
SPACK_CI_STACK_NAME: my-super-cool-stack
tags: [ "all", "tags", "your", "job", "needs"]
my-super-cool-stack-generate:
extends: [ ".generate", ".my-super-cool-stack" ]
image: my-super-cool-stack-image:0.0.1
my-super-cool-stack-build:
extends: [ ".build", ".my-super-cool-stack" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: my-super-cool-stack-generate
strategy: depend
needs:
- artifacts: True
job: my-super-cool-stack-generate
Stack Configuration
~~~~~~~~~~~~~~~~~~~
The stack configuration is a spack environment file with two additional sections added.
Stack configurations should be located in ``share/spack/gitlab/cloud_pipelines/stacks/<stack_name>/spack.yaml``.
The ``ci`` section is generally used to define stack specific mappings such as image or tags.
For more information on what can go into the ``ci`` section refer to the docs on pipelines.
The ``cdash`` section is used for defining where to upload the results of builds. Spack configures
most of the details for posting pipeline results to
`cdash.spack.io <https://cdash.spack.io/index.php?project=Spack+Testing>`_. The only
requirement in the stack configuration is to define a ``build-group`` that is unique;
this is usually the long name of the stack.
An example stack that builds ``zlib``.
.. code-block:: yaml
spack:
view: false
packages:
all:
require: ["%gcc", "target=x86_64_v3"]
specs:
- zlib
ci:
pipeline-gen:
- build-job:
image: my-super-cool-stack-image:0.0.1
cdash:
build-group: My Super Cool Stack
.. note::
The ``image`` used in the ``*-generate`` job must match exactly the ``image`` used in the ``build-job``.
When the images do not match, the build job may fail.
"""""""""""""""""""
Registering Runners
"""""""""""""""""""
Contributing computational resources to Spack's CI build farm is one way to help expand the
capabilities and offerings of the public Spack build caches. Currently, Spack utilizes linux runners
from AWS, Google, and the University of Oregon (UO).
Runners require four key pieces:
* Runner Registration Token
* Accurate tags
* OIDC Authentication script
* GPG keys
Minimum GitLab Runner Version: ``16.1.0``
`Installation instructions <https://docs.gitlab.com/runner/install/>`_
Registration Token
~~~~~~~~~~~~~~~~~~
The first step to contribute new runners is to open an issue in the `spack infrastructure <https://github.com/spack/spack-infrastructure/issues/new?assignees=&labels=runner-registration&projects=&template=runner_registration.yml>`_
project. This will be reported to the spack infrastructure team who will guide users through the process
of registering new runners for Spack CI.
The information needed to register a runner is the motivation for the new resources, a semi-detailed description of
the runner, and finally the point of contact for maintaining the software on the runner.
The point of contact will then work with the infrastructure team to obtain runner registration token(s) for interacting
with Spack's GitLab instance. Once the runner is active, this point of contact will also be responsible for updating the
GitLab runner software to keep pace with Spack's GitLab.
Tagging
~~~~~~~
In the initial stages of runner registration it is important to **exclude** the special tag ``spack``. This will prevent
the new runner(s) from being picked up for production CI jobs while they are configured and evaluated. Once it is determined
that the runner is ready for production use, the ``spack`` tag will be added.
Because GitLab has no concept of tag exclusion, runners that provide specialized resources also require specialized tags.
For example, a basic CPU-only x86_64 runner may have a tag ``x86_64`` associated with it. However, a runner containing a
CUDA-capable GPU may have the tag ``x86_64-cuda`` to denote that it should only be used for packages that will benefit from
a CUDA-capable resource.
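As a purely illustrative sketch (the URL, token, executor, and image are placeholders supplied during onboarding), a runner could be registered with a specialized tag and without the ``spack`` tag like so:

.. code-block:: console

   $ gitlab-runner register \
       --non-interactive \
       --url "https://gitlab.example.com" \
       --registration-token "<token from the infrastructure team>" \
       --executor "docker" \
       --docker-image "ubuntu:22.04" \
       --tag-list "x86_64-cuda"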
OIDC
~~~~
Spack runners use OIDC authentication for connecting to the appropriate AWS bucket
which is used for coordinating the communication of binaries between build jobs. In
order to configure OIDC authentication, Spack CI runners use a python script with minimal
dependencies. This script can be configured for runners as shown below, using the ``pre_build_script``.
.. code-block:: toml
[[runners]]
pre_build_script = """
echo 'Executing Spack pre-build setup script'
for cmd in "${PY3:-}" python3 python; do
if command -v > /dev/null "$cmd"; then
export PY3="$(command -v "$cmd")"
break
fi
done
if [ -z "${PY3:-}" ]; then
echo "Unable to find python3 executable"
exit 1
fi
$PY3 -c "import urllib.request;urllib.request.urlretrieve('https://raw.githubusercontent.com/spack/spack-infrastructure/main/scripts/gitlab_runner_pre_build/pre_build.py', 'pre_build.py')"
$PY3 pre_build.py > envvars
. ./envvars
rm -f envvars
unset GITLAB_OIDC_TOKEN
"""
GPG Keys
~~~~~~~~
Runners that may be utilized for ``protected`` CI require the registration of an intermediate signing key that
can be used to sign packages. For more information on package signing read :ref:`key_architecture`.
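A hedged sketch of creating such a key locally with Spack (the key name and email are placeholders; the actual registration of the key is coordinated with the infrastructure team):

.. code-block:: console

   $ spack gpg create "My Runner Intermediate Key" runner-keys@example.com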
--------
Coverage
--------

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -177,8 +178,12 @@ Spec-related modules
Contains :class:`~spack.spec.Spec`. Also implements most of the logic for concretization
of specs.
:mod:`spack.spec_parser`
Contains :class:`~spack.spec_parser.SpecParser` and functions related to parsing specs.
:mod:`spack.parser`
Contains :class:`~spack.parser.SpecParser` and functions related to parsing specs.
:mod:`spack.concretize`
Contains :class:`~spack.concretize.Concretizer` implementation,
which allows site administrators to change Spack's :ref:`concretization-policies`.
:mod:`spack.version`
Implements a simple :class:`~spack.version.Version` class with simple
@@ -332,9 +337,13 @@ inserting them at different places in the spack code base. Whenever a hook
type triggers by way of a function call, we find all the hooks of that type,
and run them.
Spack defines hooks by way of a module in the ``lib/spack/spack/hooks`` directory.
This module has to be registered in ``__init__.py`` so that Spack is aware of it.
This section will cover the basic kind of hooks, and how to write them.
Spack defines hooks by way of a module at ``lib/spack/spack/hooks`` where we can define
types of hooks in the ``__init__.py``, and then python files in that folder
can use hook functions. The files are automatically parsed, so if you write
a new file for some integration (e.g., ``lib/spack/spack/hooks/myintegration.py``),
you can then write hook functions in that file that will be automatically detected,
and run whenever your hook is called. This section will cover the basic kind
of hooks, and how to write them.
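A minimal sketch of such a file (the file name is hypothetical, and ``post_install`` stands in for whichever hook type is needed):

.. code-block:: python

   # lib/spack/spack/hooks/myintegration.py -- hypothetical example
   import llnl.util.tty as tty

   def post_install(spec, explicit):
       """Log every package right after it is installed."""
       tty.msg(f"myintegration: installed {spec.name}@{spec.version} (explicit={explicit})")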
^^^^^^^^^^^^^^
Types of Hooks
@@ -543,10 +552,10 @@ With either interpreter you can run a single command:
.. code-block:: console
$ spack python -c 'from spack.concretize import concretize_one; concretize_one("python")'
$ spack python -c 'from spack.spec import Spec; Spec("python").concretized()'
...
$ spack python -i ipython -c 'from spack.concretize import concretize_one; concretize_one("python")'
$ spack python -i ipython -c 'from spack.spec import Spec; Spec("python").concretized()'
Out[1]: ...
or a file:

View File

@@ -1,59 +1,53 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _environments:
=====================================
Environments (spack.yaml, spack.lock)
=====================================
=========================
Environments (spack.yaml)
=========================
An environment is used to group a set of specs intended for some purpose
to be built, rebuilt, and deployed in a coherent fashion. Environments
define aspects of the installation of the software, such as:
An environment is used to group together a set of specs for the
purpose of building, rebuilding and deploying in a coherent fashion.
Environments provide a number of advantages over the *à la carte*
approach of building and loading individual Spack modules:
#. *which* specs to install;
#. *how* those specs are configured; and
#. *where* the concretized software will be installed.
Aggregating this information into an environment for processing has advantages
over the *à la carte* approach of building and loading individual Spack modules.
With environments, you concretize, install, or load (activate) all of the
specs with a single command. Concretization fully configures the specs
and dependencies of the environment in preparation for installing the
software. This is a more robust solution than ad-hoc installation scripts.
And you can share an environment or even re-use it on a different computer.
Environment definitions, especially *how* specs are configured, allow the
software to remain stable and repeatable even when Spack packages are upgraded. Changes are only picked up when the environment is explicitly re-concretized.
Defining *where* specs are installed supports a filesystem view of the
environment. Yet Spack maintains a single installation of the software that
can be re-used across multiple environments.
Activating an environment determines *when* all of the associated (and
installed) specs are loaded, limiting the software loaded to those specs
actually needed by the environment. Spack can even generate a script to
load all modules related to an environment.
#. Environments separate the steps of (a) choosing what to
install, (b) concretizing, and (c) installing. This allows
Environments to remain stable and repeatable, even if Spack packages
are upgraded: specs are only re-concretized when the user
explicitly asks for it. It is even possible to reliably
transport environments between different computers running
different versions of Spack!
#. Environments allow several specs to be built at once; a more robust
solution than ad-hoc scripts making multiple calls to ``spack
install``.
#. An Environment that is built as a whole can be loaded as a whole
into the user environment. An Environment can be built to maintain
a filesystem view of its packages, and the environment can load
that view into the user environment at activation time. Spack can
also generate a script to load all modules related to an
environment.
Other packaging systems also provide environments that are similar in
some ways to Spack environments; for example, `Conda environments
<https://conda.io/docs/user-guide/tasks/manage-environments.html>`_ or
`Python Virtual Environments
<https://docs.python.org/3/tutorial/venv.html>`_. Spack environments
provide some distinctive features though:
provide some distinctive features:
#. A spec installed "in" an environment is no different from the same
spec installed anywhere else in Spack.
#. Spack environments may contain more than one spec of the same
spec installed anywhere else in Spack. Environments are assembled
simply by collecting together a set of specs.
#. Spack Environments may contain more than one spec of the same
package.
Spack uses a "manifest and lock" model similar to `Bundler gemfiles
<https://bundler.io/man/gemfile.5.html>`_ and other package managers.
The environment's user input file (or manifest), is named ``spack.yaml``.
The lock file, which contains the fully configured and concretized specs,
is named ``spack.lock``.
<https://bundler.io/man/gemfile.5.html>`_ and other package
managers. The user input file is named ``spack.yaml`` and the lock
file is named ``spack.lock``
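As a minimal sketch, a manifest may contain nothing more than a list of root specs (the specs below are arbitrary examples):

.. code-block:: yaml

   # spack.yaml
   spack:
     specs:
     - zlib
     - hdf5 +mpi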
.. _environments-using:
@@ -74,73 +68,55 @@ An environment is created by:
$ spack env create myenv
The directory ``$SPACK_ROOT/var/spack/environments/myenv`` is created
to manage the environment.
Spack then creates the directory ``var/spack/environments/myenv``.
.. note::
All managed environments by default are stored in the
``$SPACK_ROOT/var/spack/environments`` folder. This location can be changed
by setting the ``environments_root`` variable in ``config.yaml``.
All managed environments by default are stored in the ``var/spack/environments`` folder.
This location can be changed by setting the ``environments_root`` variable in ``config.yaml``.
Spack creates the file ``spack.yaml``, hidden directory ``.spack-env``, and
``spack.lock`` file under ``$SPACK_ROOT/var/spack/environments/myenv``. User
interaction occurs through the ``spack.yaml`` file and the Spack commands
that affect it. Metadata and, by default, the view are stored in the
``.spack-env`` directory. When the environment is concretized, Spack creates
the ``spack.lock`` file with the fully configured specs and dependencies for
In the ``var/spack/environments/myenv`` directory, Spack creates the
file ``spack.yaml`` and the hidden directory ``.spack-env``.
Spack stores metadata in the ``.spack-env`` directory. User
interaction will occur through the ``spack.yaml`` file and the Spack
commands that affect it. When the environment is concretized, Spack
will create a file ``spack.lock`` with the concrete information for
the environment.
The ``.spack-env`` subdirectory also contains:
In addition to being the default location for the view associated with
an Environment, the ``.spack-env`` directory also contains:
* ``repo/``: A subdirectory acting as the repo consisting of the Spack
packages used in the environment. It allows the environment to build
the same, in theory, even on different versions of Spack with different
* ``repo/``: A repo consisting of the Spack packages used in this
environment. This allows the environment to build the same, in
theory, even on different versions of Spack with different
packages!
* ``logs/``: A subdirectory containing the build logs for the packages
in this environment.
* ``logs/``: A directory containing the build logs for the packages
in this Environment.
Spack Environments can also be created from either the user input (manifest)
file or the lockfile. Create an environment from a manifest using:
Spack Environments can also be created from either a manifest file
(usually, but not necessarily, named ``spack.yaml``) or a lockfile.
To create an Environment from a manifest:
.. code-block:: console
$ spack env create myenv spack.yaml
The resulting environment is guaranteed to have the same root specs as
the original but may concretize differently in the presence of different
explicit or default configuration settings (e.g., a different version of
Spack or for a different user account).
Environments created from a manifest will copy any included configs
from relative paths inside the environment. Relative paths from
outside the environment will cause errors, and absolute paths will be
kept absolute. For example, if ``spack.yaml`` includes:
.. code-block:: yaml
spack:
include: [./config.yaml]
then the created environment will have its own copy of the file
``config.yaml`` copied from the location in the original environment.
Create an environment from a ``spack.lock`` file using:
To create an Environment from a ``spack.lock`` lockfile:
.. code-block:: console
$ spack env create myenv spack.lock
The resulting environment, when on the same or a compatible machine, is
guaranteed to initially have the same concrete specs as the original.
Either of these commands can also take a full path to the
initialization file.
.. note::
Environment creation also accepts a full path to the file.
If the path is not under the ``$SPACK_ROOT/var/spack/environments``
directory then the source is referred to as an
:ref:`independent environment <independent_environments>`.
A Spack Environment created from a ``spack.yaml`` manifest is
guaranteed to have the same root specs as the original Environment,
but may concretize differently. A Spack Environment created from a
``spack.lock`` lockfile is guaranteed to have the same concrete specs
as the original Environment. Either may obviously then differ as the
user modifies it.
^^^^^^^^^^^^^^^^^^^^^^^^^
Activating an Environment
@@ -153,7 +129,7 @@ To activate an environment, use the following command:
$ spack env activate myenv
By default, the ``spack env activate`` will load the view associated
with the environment into the user environment. The ``-v,
with the Environment into the user environment. The ``-v,
--with-view`` argument ensures this behavior, and the ``-V,
--without-view`` argument activates the environment without changing
the user environment variables.
@@ -166,14 +142,11 @@ user's prompt to begin with the environment name in brackets.
$ spack env activate -p myenv
[myenv] $ ...
The ``activate`` command can also be used to create a new environment, if it is
not already defined, by adding the ``--create`` flag. Managed and independent
environments can both be created using the same flags that ``spack env create``
accepts. If an environment already exists, then Spack will simply activate it
and ignore the create-specific flags.
The ``activate`` command can also be used to create a new environment if it does not already
exist.
.. code-block:: console
$ spack env activate --create -p myenv
# ...
# [creates if myenv does not exist yet]
@@ -195,50 +168,49 @@ or the shortcut alias
If the environment was activated with its view, deactivating the
environment will remove the view from the user environment.
.. _independent_environments:
^^^^^^^^^^^^^^^^^^^^^^
Anonymous Environments
^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^
Independent Environments
^^^^^^^^^^^^^^^^^^^^^^^^
Apart from managed environments, Spack also supports anonymous environments.
Independent environments can be located in any directory outside of Spack.
Anonymous environments can be placed in any directory of choice.
.. note::
When uninstalling packages, Spack asks the user to confirm the removal of packages
that are still used in a managed environment. This is not the case for independent
that are still used in a managed environment. This is not the case for anonymous
environments.
To create an independent environment, use one of the following commands:
To create an anonymous environment, use one of the following commands:
.. code-block:: console
$ spack env create --dir my_env
$ spack env create ./my_env
As a shorthand, you can also create an independent environment upon activation if it does not
As a shorthand, you can also create an anonymous environment upon activation if it does not
already exist:
.. code-block:: console
$ spack env activate --create ./my_env
For convenience, Spack can also place an independent environment in a temporary directory for you:
For convenience, Spack can also place an anonymous environment in a temporary directory for you:
.. code-block:: console
$ spack env activate --temp
^^^^^^^^^^^^^^^^^^^^^^^^^^
Environment-Aware Commands
^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Environment Sensitive Commands
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Spack commands are environment-aware. For example, the ``find``
command shows only the specs in the active environment if an
environment has been activated. Otherwise it shows all specs in
the Spack instance. The same rule applies to the ``install`` and
``uninstall`` commands.
Spack commands are environment sensitive. For example, the ``find``
command shows only the specs in the active Environment if an
Environment has been activated. Similarly, the ``install`` and
``uninstall`` commands act on the active environment.
.. code-block:: console
@@ -283,33 +255,32 @@ the Spack instance. The same rule applies to the ``install`` and
Note that when we installed the abstract spec ``zlib@1.2.8``, it was
presented as a root of the environment. All explicitly installed
packages will be listed as roots of the environment.
presented as a root of the Environment. All explicitly installed
packages will be listed as roots of the Environment.
All of the Spack commands that act on the list of installed specs are
environment-aware in this way, including ``install``,
``uninstall``, ``find``, ``extensions``, etcetera. In the
Environment-sensitive in this way, including ``install``,
``uninstall``, ``find``, ``extensions``, and more. In the
:ref:`environment-configuration` section we will discuss
environment-aware commands further.
Environment-sensitive commands further.
^^^^^^^^^^^^^^^^^^^^^
Adding Abstract Specs
^^^^^^^^^^^^^^^^^^^^^
An abstract spec is the user-specified spec before Spack applies
defaults or dependency information.
An abstract spec is the user-specified spec before Spack has applied
any defaults or dependency information.
Users can add abstract specs to an environment using the ``spack add``
command. The most important component of an environment is a list of
Users can add abstract specs to an Environment using the ``spack add``
command. The most important component of an Environment is a list of
abstract specs.
Adding a spec adds it as a root spec of the environment in the user
input file (``spack.yaml``). It does not affect the concrete specs
in the lock file (``spack.lock``) and it does not install the spec.
Adding a spec adds to the manifest (the ``spack.yaml`` file), which is
used to define the roots of the Environment, but does not affect the
concrete specs in the lockfile, nor does it install the spec.
The ``spack add`` command is environment-aware. It adds the spec to the
currently active environment. An error is generated if there isn't an
active environment. All environment-aware commands can also
The ``spack add`` command is environment aware. It adds to the
currently active environment. All environment aware commands can also
be called using the ``spack -e`` flag to specify the environment.
.. code-block:: console
@@ -329,11 +300,11 @@ or
Concretizing
^^^^^^^^^^^^
Once user specs have been added to an environment, they can be concretized.
There are three different modes of operation to concretize an environment,
explained in detail in :ref:`environments_concretization_config`.
Regardless of which mode of operation is chosen, the following
command will ensure all of the root specs are concretized according to the
Once some user specs have been added to an environment, they can be concretized.
There are at the moment three different modes of operation to concretize an environment,
which are explained in details in :ref:`environments_concretization_config`.
Regardless of which mode of operation has been chosen, the following
command will ensure all the root specs are concretized according to the
constraints that are prescribed in the configuration:
.. code-block:: console
@@ -342,15 +313,16 @@ constraints that are prescribed in the configuration:
In the case of specs that are not concretized together, the command
above will concretize only the specs that were added and not yet
concretized. Forcing a re-concretization of all of the specs can be done
by adding the ``-f`` option:
concretized. Forcing a re-concretization of all the specs can be done
instead with this command:
.. code-block:: console
[myenv]$ spack concretize -f
Without the option, Spack guarantees that already concretized specs are
unchanged in the environment.
When the ``-f`` flag is not used to reconcretize all specs, Spack
guarantees that already concretized specs are unchanged in the
environment.
The ``concretize`` command does not install any packages. For packages
that have already been installed outside of the environment, the
@@ -383,16 +355,16 @@ installed specs using the ``-c`` (``--concretized``) flag.
Installing an Environment
^^^^^^^^^^^^^^^^^^^^^^^^^
In addition to adding individual specs to an environment, one
can install the entire environment at once using the command
In addition to installing individual specs into an Environment, one
can install the entire Environment at once using the command
.. code-block:: console
[myenv]$ spack install
If the environment has been concretized, Spack will install the
concretized specs. Otherwise, ``spack install`` will concretize
the environment before installing the concretized specs.
If the Environment has been concretized, Spack will install the
concretized specs. Otherwise, ``spack install`` will first concretize
the Environment and then install the concretized specs.
.. note::
@@ -413,17 +385,17 @@ the environment before installing the concretized specs.
As it installs, ``spack install`` creates symbolic links in the
``logs/`` directory in the environment, allowing for easy inspection
``logs/`` directory in the Environment, allowing for easy inspection
of build logs related to that environment. The ``spack install``
command also stores a Spack repo containing the ``package.py`` file
used at install time for each package in the ``repos/`` directory in
the environment.
the Environment.
The ``--no-add`` option can be used in a concrete environment to tell
spack to install specs already present in the environment but not to
add any new root specs to the environment. For root specs provided
to ``spack install`` on the command line, ``--no-add`` is the default,
while for dependency specs, it is optional. In other
while for dependency specs on the other hand, it is optional. In other
words, if there is an unambiguous match in the active concrete environment
for a root spec provided to ``spack install`` on the command line, spack
does not require you to specify the ``--no-add`` option to prevent the spec
@@ -437,22 +409,12 @@ Developing Packages in a Spack Environment
The ``spack develop`` command allows one to develop Spack packages in
an environment. It requires a spec containing a concrete version, and
will configure Spack to install the package from local source.
If a version is not provided from the command line interface then spack
will automatically pick the highest version the package has defined.
This means any infinity versions (``develop``, ``main``, ``stable``) will be
preferred in this selection process.
By default, ``spack develop`` will also clone the package to a subdirectory in the
environment for the local source. This package will have a special variant ``dev_path``
will configure Spack to install the package from local source. By
default, it will also clone the package to a subdirectory in the
environment. This package will have a special variant ``dev_path``
set, and Spack will ensure the package and its dependents are rebuilt
any time the environment is installed if the package's local source
code has been modified. Spack's native implementation to check for modifications
is to check if ``mtime`` is newer than the installation.
A custom check can be created by overriding the ``detect_dev_src_change`` method
in your package class. This is particularly useful for projects that use custom Spack repos
to drive development and want to optimize performance.
Spack ensures that all instances of a
code has been modified. Spack ensures that all instances of a
developed package in the environment are concretized to match the
version (and other constraints) passed as the spec argument to the
``spack develop`` command.
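Where a custom modification check is wanted, a rough sketch of overriding ``detect_dev_src_change`` might look like the following (the git-based check and the package are illustrative only; consult your Spack version for the exact signature):

.. code-block:: python

   import subprocess

   from spack.package import *

   class Foo(Package):  # hypothetical package
       def detect_dev_src_change(self) -> bool:
           # Treat the source as changed only when git reports local
           # modifications, instead of Spack's default mtime comparison.
           result = subprocess.run(
               ["git", "-C", self.stage.source_path, "status", "--porcelain"],
               capture_output=True, text=True, check=False,
           )
           return bool(result.stdout.strip())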
@@ -462,11 +424,11 @@ also be used as valid concrete versions (see :ref:`version-specifier`).
This means that for a package ``foo``, ``spack develop foo@git.main`` will clone
the ``main`` branch of the package, and ``spack install`` will install from
that git clone if ``foo`` is in the environment.
Further development on ``foo`` can be tested by re-installing the environment,
Further development on ``foo`` can be tested by reinstalling the environment,
and eventually committed and pushed to the upstream git repo.
If the package being developed supports out-of-source builds then users can use the
``--build_directory`` flag to control the location and name of the build directory.
``--build_directory`` flag to control the location and name of the build directory.
This is a shortcut to set the ``package_attributes:build_directory`` in the
``packages`` configuration (see :ref:`assigning-package-attributes`).
The supplied location will become the build-directory for that package in all future builds.
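The equivalent ``packages`` configuration, sketched for a hypothetical package ``foo`` and an arbitrary path:

.. code-block:: yaml

   packages:
     foo:
       package_attributes:
         build_directory: /scratch/foo-build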
@@ -647,7 +609,7 @@ manipulate configuration inline in the ``spack.yaml`` file.
Inline configurations
^^^^^^^^^^^^^^^^^^^^^
Inline environment-scope configuration is done using the same yaml
Inline Environment-scope configuration is done using the same yaml
format as standard Spack configuration scopes, covered in the
:ref:`configuration` section. Each section is contained under a
top-level yaml object with its name. For example, a ``spack.yaml``
@@ -672,7 +634,7 @@ Included configurations
Spack environments allow an ``include`` heading in their yaml
schema. This heading pulls in external configuration files and applies
them to the environment.
them to the Environment.
.. code-block:: yaml
@@ -685,9 +647,6 @@ them to the environment.
Environments can include files or URLs. File paths can be relative or
absolute. URLs include the path to the text for individual files or
can be the path to a directory containing configuration files.
Spack supports ``file``, ``http``, ``https`` and ``ftp`` protocols (or
schemes). Spack-specific, environment and user path variables may be
used in these paths. See :ref:`config-file-variables` for more information.
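For example (the paths and URL below are placeholders), an ``include`` list mixing these forms might look like:

.. code-block:: yaml

   spack:
     include:
     - ./relative/config.yaml              # relative to the environment directory
     - /absolute/path/to/packages.yaml
     - https://example.com/spack/configs/  # a directory of configuration files
     specs: []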
^^^^^^^^^^^^^^^^^^^^^^^^
Configuration precedence
@@ -702,7 +661,7 @@ have higher precedence, as the included configs are applied in reverse order.
Manually Editing the Specs List
-------------------------------
The list of abstract/root specs in the environment is maintained in
The list of abstract/root specs in the Environment is maintained in
the ``spack.yaml`` manifest under the heading ``specs``.
.. code-block:: yaml
@@ -810,7 +769,7 @@ evaluates to the cross-product of those specs. Spec matrices also
contain an ``excludes`` directive, which eliminates certain
combinations from the evaluated result.
The following two environment manifests are identical:
The following two Environment manifests are identical:
.. code-block:: yaml
@@ -885,7 +844,7 @@ files are identical.
In short files like the example, it may be easier to simply list the
included specs. However for more complicated examples involving many
packages across many toolchains, separately factored lists make
environments substantially more manageable.
Environments substantially more manageable.
Additionally, the ``-l`` option to the ``spack add`` command allows
one to add to named lists in the definitions section of the manifest
@@ -904,7 +863,7 @@ named list ``compilers`` is ``['%gcc', '%clang', '%intel']`` on
spack:
definitions:
- compilers: ['%gcc', '%clang']
- when: arch.satisfies('target=x86_64:')
- when: arch.satisfies('x86_64:')
compilers: ['%intel']
.. note::
@@ -972,89 +931,37 @@ This allows for a much-needed reduction in redundancy between packages
and constraints.
-----------------
Environment Views
-----------------
----------------
Filesystem Views
----------------
Spack Environments can have an associated filesystem view, which is a directory
with a more traditional structure ``<view>/bin``, ``<view>/lib``, ``<view>/include``
in which all files of the installed packages are linked.
By default a view is created for each environment, thanks to the ``view: true``
option in the ``spack.yaml`` manifest file:
.. code-block:: yaml
spack:
specs: [perl, python]
view: true
The view is created in a hidden directory ``.spack-env/view`` relative to the environment.
If you've used ``spack env activate``, you may have already interacted with this view. Spack
prepends its ``<view>/bin`` dir to ``PATH`` when the environment is activated, so that
you can directly run executables from all installed packages in the environment.
Views are highly customizable: you can control where they are put, modify their structure,
include and exclude specs, change how files are linked, and you can even generate multiple
views for a single environment.
Spack Environments can define filesystem views, which provide a direct access point
for software similar to the directory hierarchy that might exist under ``/usr/local``.
Filesystem views are updated every time the environment is written out to the lock
file ``spack.lock``, so the concrete environment and the view are always compatible.
The files of the view's installed packages are brought into the view by symbolic or
hard links, referencing the original Spack installation, or by copy.
.. _configuring_environment_views:
^^^^^^^^^^^^^^^^^^^^^^^^^^
Minimal view configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Configuration in ``spack.yaml``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The minimal configuration
.. code-block:: yaml
spack:
# ...
view: true
lets Spack generate a single view with default settings under the
``.spack-env/view`` directory of the environment.
Another short way to configure a view is to specify just where to put it:
.. code-block:: yaml
spack:
# ...
view: /path/to/view
Views can also be disabled by setting ``view: false``.
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Advanced view configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^
One or more **view descriptors** can be defined under ``view``, keyed by a name.
The example from the previous section with ``view: /path/to/view`` is equivalent
to defining a view descriptor named ``default`` with a ``root`` attribute:
.. code-block:: yaml
spack:
# ...
view:
default: # name of the view
root: /path/to/view # view descriptor attribute
The ``default`` view descriptor name is special: when you ``spack env activate`` your
environment, this view will be used to update (among other things) your ``PATH``
variable.
View descriptors must contain the root of the view, and optionally projections,
``select`` and ``exclude`` lists and link information via ``link`` and
The Spack Environment manifest file has a top-level keyword
``view``. Each entry under that heading is a **view descriptor**, headed
by a name. Any number of views may be defined under the ``view`` heading.
The view descriptor contains the root of the view, and
optionally the projections for the view, ``select`` and
``exclude`` lists for the view and link information via ``link`` and
``link_type``.
As a more advanced example, in the following manifest
For example, in the following manifest
file snippet we define a view named ``mpis``, rooted at
``/path/to/view`` in which all projections use the package name,
version, and compiler name to determine the path for a given
package. This view selects all packages that depend on MPI, and
excludes those built with the GCC compiler at version 18.5.
excludes those built with the PGI compiler at version 18.5.
The root specs with their (transitive) link and run type dependencies
will be put in the view due to the ``link: all`` option,
and the files in the view will be symlinks to the spack install
@@ -1068,7 +975,7 @@ directories.
mpis:
root: /path/to/view
select: [^mpi]
exclude: ['%gcc@18.5']
exclude: ['%pgi@18.5']
projections:
all: '{name}/{version}-{compiler.name}'
link: all
@@ -1094,14 +1001,63 @@ of ``hardlink`` or ``copy``.
when the environment is not activated, and linked libraries will be located
*outside* of the view thanks to rpaths.
There are two shorthands for environments with a single view. If the
environment at ``/path/to/env`` has a single view, with a root at
``/path/to/env/.spack-env/view``, with default selection and exclusion
and the default projection, we can put ``view: True`` in the
environment manifest. Similarly, if the environment has a view with a
different root, but default selection, exclusion, and projections, the
manifest can say ``view: /path/to/view``. These views are
automatically named ``default``, so that
.. code-block:: yaml
spack:
# ...
view: True
is equivalent to
.. code-block:: yaml
spack:
# ...
view:
default:
root: .spack-env/view
and
.. code-block:: yaml
spack:
# ...
view: /path/to/view
is equivalent to
.. code-block:: yaml
spack:
# ...
view:
default:
root: /path/to/view
By default, Spack environments are configured with ``view: True`` in
the manifest. Environments can be configured without views using
``view: False``. For backwards compatibility reasons, environments
with no ``view`` key are treated the same as ``view: True``.
From the command line, the ``spack env create`` command takes an
argument ``--with-view [PATH]`` that sets the path for a single, default
view. If no path is specified, the default path is used (``view:
true``). The argument ``--without-view`` can be used to create an
True``). The argument ``--without-view`` can be used to create an
environment without any view configured.
The ``spack env view`` command can be used to manage the views
of an environment. The subcommand ``spack env view enable`` will add a
of an Environment. The subcommand ``spack env view enable`` will add a
view named ``default`` to an environment. It takes an optional
argument to specify the path for the new default view. The subcommand
``spack env view disable`` will remove the view named ``default`` from
@@ -1163,18 +1119,11 @@ the projection under ``all`` before reaching those entries.
Activating environment views
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``spack env activate <env>`` has two effects:
1. It activates the environment so that further Spack commands such
as ``spack install`` will run in the context of the environment.
2. It activates the view so that environment variables such as
``PATH`` are updated to include the view.
Without further arguments, the ``default`` view of the environment is
activated. If a view with a different name has to be activated,
``spack env activate --with-view <name> <env>`` can be
used instead. You can also activate the environment without modifying
further environment variables using ``--without-view``.
The ``spack env activate`` command will put the default view for the
environment into the user's path, in addition to activating the
environment for Spack commands. The arguments ``-v,--with-view`` and
``-V,--without-view`` can be used to tune this behavior. The default
behavior is to activate with the environment view if there is one.
The environment variables affected by the ``spack env activate``
command and the paths that are used to update them are determined by
@@ -1197,8 +1146,8 @@ relevant variable if the path exists. For this reason, it is not
recommended to use non-default projections with the default view of an
environment.
The ``spack env deactivate`` command will remove the active view of
the Spack environment from the user's environment variables.
The ``spack env deactivate`` command will remove the default view of
the environment from the user's path.
.. _env-generate-depfile:
@@ -1269,7 +1218,7 @@ gets installed and is available for use in the ``env`` target.
$(SPACK) -e . env depfile -o $@ --make-prefix spack
env: spack/env
$(info environment installed!)
$(info Environment installed!)
clean:
rm -rf spack.lock env.mk spack/
@@ -1357,7 +1306,7 @@ index once every package is pushed. Note how this target uses the generated
example/push/%: example/install/%
@mkdir -p $(dir $@)
$(info About to push $(SPEC) to a buildcache)
$(SPACK) -e . buildcache push --only=package $(BUILDCACHE_DIR) /$(HASH)
$(SPACK) -e . buildcache push --allow-root --only=package $(BUILDCACHE_DIR) /$(HASH)
@touch $@
push: $(addprefix example/push/,$(example/SPACK_PACKAGE_IDS))

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -34,7 +35,7 @@ A build matrix showing which packages are working on which systems is shown belo
.. code-block:: console
apt update
apt install bzip2 ca-certificates g++ gcc gfortran git gzip lsb-release patch python3 tar unzip xz-utils zstd
apt install bzip2 ca-certificates file g++ gcc gfortran git gzip lsb-release patch python3 tar unzip xz-utils zstd
.. tab-item:: RHEL
@@ -60,15 +61,10 @@ Getting Spack is easy. You can clone it from the `github repository
.. code-block:: console
$ git clone -c feature.manyFiles=true --depth=2 https://github.com/spack/spack.git
$ git clone -c feature.manyFiles=true https://github.com/spack/spack.git
This will create a directory called ``spack``.
.. note::
``-c feature.manyFiles=true`` improves git's performance on repositories with 1,000+ files.
``--depth=2`` prunes the git history to reduce the size of the Spack installation.
.. _shell-support:
^^^^^^^^^^^^^
@@ -147,22 +143,20 @@ The first time you concretize a spec, Spack will bootstrap automatically:
--------------------------------
zlib@1.2.13%gcc@9.4.0+optimize+pic+shared build_system=makefile arch=linux-ubuntu20.04-icelake
The default bootstrap behavior is to use pre-built binaries. You can verify the
active bootstrap repositories with:
.. command-output:: spack bootstrap list
If for security concerns you cannot bootstrap ``clingo`` from pre-built
binaries, you have to disable fetching the binaries we generated with Github Actions.
.. code-block:: console
$ spack bootstrap disable github-actions-v0.6
==> "github-actions-v0.6" is now disabled and will not be used for bootstrapping
$ spack bootstrap disable github-actions-v0.5
==> "github-actions-v0.5" is now disabled and will not be used for bootstrapping
$ spack bootstrap disable github-actions-v0.4
==> "github-actions-v0.4" is now disabled and will not be used for bootstrapping
$ spack bootstrap disable github-actions-v0.3
==> "github-actions-v0.3" is now disabled and will not be used for bootstrapping
You can verify that the new settings are effective with:
.. command-output:: spack bootstrap list
You can verify that the new settings are effective with ``spack bootstrap list``.
.. note::
@@ -284,6 +278,10 @@ compilers`` or ``spack compiler list``:
intel@14.0.1 intel@13.0.1 intel@12.1.2 intel@10.1
-- clang -------------------------------------------------------
clang@3.4 clang@3.3 clang@3.2 clang@3.1
-- pgi ---------------------------------------------------------
pgi@14.3-0 pgi@13.2-0 pgi@12.1-0 pgi@10.9-0 pgi@8.0-1
pgi@13.10-0 pgi@13.1-1 pgi@11.10-0 pgi@10.2-0 pgi@7.1-3
pgi@13.6-0 pgi@12.8-0 pgi@11.1-0 pgi@9.0-4 pgi@7.0-6
Any of these compilers can be used to build Spack packages. More on
how this is done is in :ref:`sec-specs`.
@@ -803,6 +801,65 @@ flags to the ``icc`` command:
spec: intel@15.0.24.4.9.3
^^^
PGI
^^^
PGI comes with two sets of compilers for C++ and Fortran,
distinguishable by their names. "Old" compilers:
.. code-block:: yaml
cc: /soft/pgi/15.10/linux86-64/15.10/bin/pgcc
cxx: /soft/pgi/15.10/linux86-64/15.10/bin/pgCC
f77: /soft/pgi/15.10/linux86-64/15.10/bin/pgf77
fc: /soft/pgi/15.10/linux86-64/15.10/bin/pgf90
"New" compilers:
.. code-block:: yaml
cc: /soft/pgi/15.10/linux86-64/15.10/bin/pgcc
cxx: /soft/pgi/15.10/linux86-64/15.10/bin/pgc++
f77: /soft/pgi/15.10/linux86-64/15.10/bin/pgfortran
fc: /soft/pgi/15.10/linux86-64/15.10/bin/pgfortran
Older installations of PGI contain just the old compilers, whereas
newer installations contain both the old and the new. The new compiler is
considered preferable, as some packages
(``hdf``) will not build with the old compiler.
When auto-detecting a PGI compiler, there are cases where Spack finds
the old compilers when you really want it to find the new ones. It is
best to check ``compilers.yaml``, and if the old compilers are being
used, change ``pgf77`` and ``pgf90`` to ``pgfortran``.
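As a minimal sketch (paths reused from the example above; the ``spec`` and any other detected fields are illustrative and should match what Spack found on your system), the corrected entry uses ``pgc++`` and ``pgfortran``:
.. code-block:: yaml
   compilers:
   - compiler:
       spec: pgi@15.10
       paths:
         cc: /soft/pgi/15.10/linux86-64/15.10/bin/pgcc
         cxx: /soft/pgi/15.10/linux86-64/15.10/bin/pgc++
         f77: /soft/pgi/15.10/linux86-64/15.10/bin/pgfortran
         fc: /soft/pgi/15.10/linux86-64/15.10/bin/pgfortran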
Other issues:
* There are reports that some packages will not build with PGI,
including ``libpciaccess`` and ``openssl``. A workaround is to
build these packages with another compiler and then use them as
dependencies for PGI-built packages. For example:
.. code-block:: console
$ spack install openmpi%pgi ^libpciaccess%gcc
* PGI requires a license to use; see :ref:`licensed-compilers` for more
information on installation.
.. note::
It is believed the problem with HDF 4 is that everything is
compiled with the ``F77`` compiler, but at some point some Fortran
90 code slipped in there. So compilers that can handle both FORTRAN
77 and Fortran 90 (``gfortran``, ``pgfortran``, etc.) are fine, but
compilers specific to one or the other (``pgf77``, ``pgf90``) won't
work.
^^^
NAG
^^^
@@ -1307,6 +1364,187 @@ This will write the private key to the file `dinosaur.priv`.
or for help on an issue or the Spack slack.
.. _cray-support:
-------------
Spack on Cray
-------------
Spack differs slightly when used on a Cray system. The architecture spec
can differentiate between the front-end and back-end processor and operating system.
For example, on Edison at NERSC, the back-end target processor
is "Ivy Bridge", so you can specify to use the back-end this way:
.. code-block:: console
$ spack install zlib target=ivybridge
You can also use the operating system to build against the back-end:
.. code-block:: console
$ spack install zlib os=CNL10
Notice that the name includes both the operating system name and the major
version number concatenated together.
Alternatively, if you want to build something for the front-end,
you can specify the front-end target processor. The processor for a login node
on Edison is "Sandy Bridge", so we specify it on the command line like so:
.. code-block:: console
$ spack install zlib target=sandybridge
And the front-end operating system is:
.. code-block:: console
$ spack install zlib os=SuSE11
^^^^^^^^^^^^^^^^^^^^^^^
Cray compiler detection
^^^^^^^^^^^^^^^^^^^^^^^
Spack can detect compilers using two methods. For the front-end, we treat
everything the same. The difference lies in back-end compiler detection.
Back-end compiler detection is done via the Tcl ``module avail`` command.
Once Spack detects the compiler, it writes the appropriate PrgEnv and compiler
module name to ``compilers.yaml`` and sets the paths to each compiler with Cray's
compiler wrapper names (i.e. cc, CC, ftn). At build time, Spack will load
the correct PrgEnv and compiler module and call the appropriate wrapper.
The ``compilers.yaml`` config file will also differ. There is a
``modules`` section that is filled with the compiler's Programming Environment
and module name. On other systems, this field is an empty list (``[]``):
.. code-block:: yaml
- compiler:
modules:
- PrgEnv-intel
- intel/15.0.109
As mentioned earlier, the compiler paths will look different on a Cray system.
Since most compilers are invoked using cc, CC and ftn, the paths for each
compiler are replaced with their respective Cray compiler wrapper names:
.. code-block:: yaml
paths:
cc: cc
cxx: CC
f77: ftn
fc: ftn
These wrapper names are used instead of explicit paths to the compiler
executables; this allows Spack to call the Cray compiler wrappers at build time.
For more on compiler configuration, check out :ref:`compiler-config`.
Spack sets the default Cray link type to dynamic, to better match
other platforms. Individual packages can enable static linking (which is the
default outside of Spack on Cray systems) using the ``-static`` flag.
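One way to pass the flag from the command line is through the spec's ``ldflags`` (a sketch only; whether a given package honors the flag depends on its build system):
.. code-block:: console
   $ spack install zlib ldflags="-static"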
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Setting defaults and using Cray modules
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you want to use default compilers for each PrgEnv and also be able
to load cray external modules, you will need to set up a ``packages.yaml``.
Here's an example of an external configuration for cray modules:
.. code-block:: yaml
packages:
mpich:
externals:
- spec: "mpich@7.3.1%gcc@5.2.0 arch=cray_xc-haswell-CNL10"
modules:
- cray-mpich
- spec: "mpich@7.3.1%intel@16.0.0.109 arch=cray_xc-haswell-CNL10"
modules:
- cray-mpich
all:
providers:
mpi: [mpich]
This tells Spack to load the ``cray-mpich`` module into the environment for
any package that depends on MPI. You can then use whatever
environment variables, libraries, etc., are brought into the environment
via ``module load``.
.. note::
For Cray-provided packages, it is best to use ``modules:`` instead of ``prefix:``
in ``packages.yaml``, because the Cray Programming Environment heavily relies on
modules (e.g., loading the ``cray-mpich`` module adds MPI libraries to the
compiler wrapper link line).
You can set the default compiler that Spack uses for each compiler type.
If you want to use the Cray defaults, set them under ``all:`` in ``packages.yaml``.
In the compiler field, list the compiler specs in your order of preference.
Whenever you build with that compiler type, Spack will concretize to that version.
Here is an example of a full ``packages.yaml`` used at NERSC:
.. code-block:: yaml
packages:
mpich:
externals:
- spec: "mpich@7.3.1%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge"
modules:
- cray-mpich
- spec: "mpich@7.3.1%intel@16.0.0.109 arch=cray_xc-SuSE11-ivybridge"
modules:
- cray-mpich
buildable: False
netcdf:
externals:
- spec: "netcdf@4.3.3.1%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge"
modules:
- cray-netcdf
- spec: "netcdf@4.3.3.1%intel@16.0.0.109 arch=cray_xc-CNL10-ivybridge"
modules:
- cray-netcdf
buildable: False
hdf5:
externals:
- spec: "hdf5@1.8.14%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge"
modules:
- cray-hdf5
- spec: "hdf5@1.8.14%intel@16.0.0.109 arch=cray_xc-CNL10-ivybridge"
modules:
- cray-hdf5
buildable: False
all:
compiler: [gcc@5.2.0, intel@16.0.0.109]
providers:
mpi: [mpich]
Here we tell Spack that whenever we want to build with gcc, use version 5.2.0, and
whenever we want to build with Intel compilers, use version 16.0.0.109. We add a spec
for each compiler type for each of the Cray modules. This ensures that for each
compiler on our system we can use that external module.
For more on external packages, check out the section :ref:`sec-external-packages`.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using Linux containers on Cray machines
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Spack uses environment variables particular to the Cray programming
environment to determine which systems are Cray platforms. These
environment variables may be propagated into containers that are not
using the Cray programming environment.
To ensure that Spack does not autodetect the Cray programming
environment, unset the environment variable ``MODULEPATH``. This
will cause Spack to treat a Linux container on a Cray system as a base
Linux distro.
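For example, inside the container shell (a minimal sketch):
.. code-block:: console
   $ unset MODULEPATH
   $ spack arch   # should now report a linux platform rather than cray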
.. _windows_support:
----------------
@@ -1327,7 +1565,6 @@ Required:
* Microsoft Visual Studio
* Python
* Git
* 7z
Optional:
* Intel Fortran (needed for some packages)
@@ -1393,13 +1630,6 @@ as the project providing Git support on Windows. This is additionally the recomm
for installing Git on Windows, a link to which can be found above. Spack requires the
utilities vendored by this project.
"""
7zip
"""
A tool for extracting ``.xz`` files is required for extracting source tarballs. The latest 7zip
can be located at https://sourceforge.net/projects/sevenzip/.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Step 2: Install and setup Spack
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -1426,14 +1656,16 @@ in a Windows CMD prompt.
Step 3: Run and configure Spack
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
On Windows, Spack supports both primary native shells: PowerShell and the traditional command prompt.
To use Spack, pick your favorite shell, and run ``bin\spack_cmd.bat`` or ``share/spack/setup-env.ps1``
(you may need to Run as Administrator) from the top-level spack
directory. This will provide a Spack enabled shell. If you receive a warning message that Python is not in your ``PATH``
To use Spack, run ``bin\spack_cmd.bat`` (you may need to Run as Administrator) from the top-level spack
directory. This will provide a Windows command prompt with an environment properly set up with Spack
and its prerequisites. If you receive a warning message that Python is not in your ``PATH``
(which may happen if you installed Python from the website and not the Windows Store) add the location
of the Python executable to your ``PATH`` now. You can permanently add Python to your ``PATH`` variable
by using the ``Edit the system environment variables`` utility in Windows Control Panel.
.. note::
Alternatively, Powershell can be used in place of CMD
To configure Spack, first run the following command inside the Spack console:
.. code-block:: console
@@ -1498,7 +1730,7 @@ and not tabs, so ensure that this is the case when editing one directly.
.. note:: Cygwin
The use of Cygwin is not officially supported by Spack and is not tested.
However, Spack will not prevent this; if you choose to use Spack
However Spack will not throw an error, so use if choosing to use Spack
with Cygwin, know that no functionality is guaranteed.
^^^^^^^^^^^^^^^^^
@@ -1512,12 +1744,21 @@ Spack console via:
spack install cpuinfo
If, in the previous step, you did not have CMake or Ninja installed, running the command above should install both packages.
If in the previous step, you did not have CMake or Ninja installed, running the command above should bootstrap both packages
.. note:: Spec Syntax Caveats
Windows has a few idiosyncrasies when it comes to the Spack spec syntax and the use of certain shells.
See the Spack spec syntax doc for more information.
"""""""""""""""""""""""""""
Windows Compatible Packages
"""""""""""""""""""""""""""
Not all Spack packages currently have Windows support. Some are inherently incompatible with the
platform, and others simply have yet to be ported. To view the current set of packages with Windows
support, use the list command: ``spack list -t windows``. If there's a package you'd like
to install on Windows that is not in that list, feel free to reach out to request the port or contribute
the port yourself.
.. note::
This is by no means a comprehensive list; some packages may have ports that were not tagged,
while others may just work out of the box on Windows and have not been tagged as such.
^^^^^^^^^^^^^^
For developers
@@ -1527,3 +1768,6 @@ The intent is to provide a Windows installer that will automatically set up
Python, Git, and Spack, instead of requiring the user to do so manually.
Instructions for creating the installer are at
https://github.com/spack/spack/blob/develop/lib/spack/spack/cmd/installer/README.md
Alternatively, a pre-built copy of the Windows installer is available as an artifact of Spack's Windows CI,
available from each run of the CI on develop or any PR.

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

Binary file not shown.

Before

Width:  |  Height:  |  Size: 358 KiB

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -34,15 +35,10 @@ package:
.. code-block:: console
$ git clone -c feature.manyFiles=true --depth=2 https://github.com/spack/spack.git
$ git clone -c feature.manyFiles=true https://github.com/spack/spack.git
$ cd spack/bin
$ ./spack install libelf
.. note::
``-c feature.manyFiles=true`` improves git's performance on repositories with 1,000+ files.
``--depth=2`` prunes the git history to reduce the size of the Spack installation.
If you're new to spack and want to start using it, see :doc:`getting_started`,
or refer to the full manual below.

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -456,13 +457,14 @@ For instance, the following config options,
tcl:
all:
suffixes:
^python@3: 'python{^python.version.up_to_2}'
^python@3.12: 'python-3.12'
^openblas: 'openblas'
will add a ``python3.12`` suffix to module names of packages compiled with Python 3.12, and similarly for
all specs depending on ``python@3``. This is useful to know which version of Python a set of Python
extensions is associated with. Likewise, the ``openblas`` string is attached to any program that
has openblas in the spec, most likely via the ``+blas`` variant specification.
will add a ``python-3.12`` version string to any packages compiled with
Python matching the spec, ``python@3.12``. This is useful to know which
version of Python a set of Python extensions is associated with. Likewise, the
``openblas`` string is attached to any program that has openblas in the spec,
most likely via the ``+blas`` variant specification.
The most heavyweight solution to module naming is to change the entire
naming convention for module files. This uses the projections format

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

File diff suppressed because it is too large Load Diff

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -58,7 +59,7 @@ Functional Example
------------------
The simplest fully functional standalone example of a working pipeline can be
examined live at this example `project <https://gitlab.com/spack/pipeline-quickstart>`_
examined live at this example `project <https://gitlab.com/scott.wittenburg/spack-pipeline-demo>`_
on gitlab.com.
Here's the ``.gitlab-ci.yml`` file from that example that builds and runs the
@@ -66,46 +67,39 @@ pipeline:
.. code-block:: yaml
stages: [ "generate", "build" ]
stages: [generate, build]
variables:
SPACK_REPOSITORY: "https://github.com/spack/spack.git"
SPACK_REF: "develop-2024-10-06"
SPACK_USER_CONFIG_PATH: ${CI_PROJECT_DIR}
SPACK_BACKTRACE: 1
SPACK_REPO: https://github.com/scottwittenburg/spack.git
SPACK_REF: pipelines-reproducible-builds
generate-pipeline:
tags:
- saas-linux-small-amd64
stage: generate
tags:
- docker
image:
name: ghcr.io/spack/ubuntu20.04-runner-x86_64:2023-01-01
script:
- git clone ${SPACK_REPOSITORY}
- cd spack && git checkout ${SPACK_REF} && cd ../
name: ghcr.io/scottwittenburg/ecpe4s-ubuntu18.04-runner-x86_64:2020-09-01
entrypoint: [""]
before_script:
- git clone ${SPACK_REPO}
- pushd spack && git checkout ${SPACK_REF} && popd
- . "./spack/share/spack/setup-env.sh"
- spack --version
script:
- spack env activate --without-view .
- spack -d -v --color=always
ci generate
--check-index-only
- spack -d ci generate
--artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
--output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
--output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml"
artifacts:
paths:
- "${CI_PROJECT_DIR}/jobs_scratch_dir"
build-pipeline:
build-jobs:
stage: build
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
- artifact: "jobs_scratch_dir/pipeline.yml"
job: generate-pipeline
strategy: depend
needs:
- artifacts: True
job: generate-pipeline
The key thing to note above is that there are two jobs: The first job to run,
``generate-pipeline``, runs the ``spack ci generate`` command to generate a
@@ -120,93 +114,82 @@ And here's the spack environment built by the pipeline represented as a
spack:
view: false
concretizer:
unify: true
reuse: false
unify: false
definitions:
- pkgs:
- zlib
- bzip2 ~debug
- compiler:
- '%gcc'
- bzip2
- arch:
- '%gcc@7.5.0 arch=linux-ubuntu18.04-x86_64'
specs:
- matrix:
- - $pkgs
- - $compiler
- - $arch
mirrors: { "mirror": "s3://spack-public/mirror" }
ci:
target: gitlab
enable-artifacts-buildcache: True
rebuild-index: False
pipeline-gen:
- any-job:
tags:
- saas-linux-small-amd64
image:
name: ghcr.io/spack/ubuntu20.04-runner-x86_64:2023-01-01
before_script:
- git clone ${SPACK_REPOSITORY}
- cd spack && git checkout ${SPACK_REF} && cd ../
- . "./spack/share/spack/setup-env.sh"
- spack --version
- export SPACK_USER_CONFIG_PATH=${CI_PROJECT_DIR}
- spack config blame mirrors
- git clone ${SPACK_REPO}
- pushd spack && git checkout ${SPACK_CHECKOUT_VERSION} && popd
- . "./spack/share/spack/setup-env.sh"
- build-job:
tags: [docker]
image:
name: ghcr.io/scottwittenburg/ecpe4s-ubuntu18.04-runner-x86_64:2020-09-01
entrypoint: [""]
The elements of this file important to spack ci pipelines are described in more
detail below, but there are a couple of things to note about the above working
example:
.. note::
The use of ``reuse: false`` in spack environments used for pipelines is
almost always what you want, as without it your pipelines will not rebuild
packages even if package hashes have changed. This is due to the concretizer
strongly preferring known hashes when ``reuse: true``.
There is no ``script`` attribute specified here. The reason for this is that
Spack CI will automatically generate reasonable default scripts. More
detail on what is in these scripts can be found below.
The ``ci`` section in the above environment file contains the bare minimum
configuration required for ``spack ci generate`` to create a working pipeline.
The ``target: gitlab`` tells spack that the desired pipeline output is for
gitlab. However, this isn't strictly required, as currently gitlab is the
only possible output format for pipelines. The ``pipeline-gen`` section
contains the key information needed to specify attributes for the generated
jobs. Notice that it contains a list which has only a single element in
this case. In real pipelines it will almost certainly have more elements,
and in those cases, order is important: spack starts at the bottom of the
list and works upwards when applying attributes.
Also notice the ``before_script`` section. It is required when using any of the
default scripts to source the ``setup-env.sh`` script in order to inform
the default scripts where to find the ``spack`` executable.
But in this simple case, we use only the special key ``any-job`` to
indicate that spack should apply the specified attributes (``tags``, ``image``,
and ``before_script``) to any job it generates. This includes jobs for
building/pushing all packages, a ``rebuild-index`` job at the end of the
pipeline, as well as any ``noop`` jobs that might be needed by gitlab when
no rebuilds are required.
Normally ``enable-artifacts-buildcache`` is not recommended in production as it
results in large binary artifacts getting transferred back and forth between
gitlab and the runners. But in this example on gitlab.com where there is no
shared, persistent file system, and where no secrets are stored for giving
permission to write to an S3 bucket, ``enable-artifacts-buildcache`` is the only
way to propagate binaries from jobs to their dependents.
Something to note is that in this simple case, we rely on spack to
generate a reasonable script for the package build jobs (it just creates
a script that invokes ``spack ci rebuild``).
Also, it is usually a good idea to let the pipeline generate a final "rebuild the
buildcache index" job, so that subsequent pipeline generation can quickly determine
which specs are up to date and which need to be rebuilt (it's a good idea for other
reasons as well, but those are out of scope for this discussion). In this case we
have disabled it (using ``rebuild-index: False``) because the index would only be
generated in the artifacts mirror anyway, and consequently would not be available
during subsequent pipeline runs.
Another thing to note is the use of the ``SPACK_USER_CONFIG_PATH`` environment
variable in any generated jobs. The purpose of this is to make spack
aware of one final file in the example, the one that contains the mirror
configuration. This file, ``mirrors.yaml``, looks like this:
.. note::
With the addition of reproducible builds (#22887) a previously working
pipeline will require some changes:
.. code-block:: yaml
* In the build-jobs, the environment location changed.
This will typically show as a ``KeyError`` in the failing job. Be sure to
point to ``${SPACK_CONCRETE_ENV_DIR}``.
mirrors:
buildcache-destination:
url: oci://registry.gitlab.com/spack/pipeline-quickstart
binary: true
access_pair:
id_variable: CI_REGISTRY_USER
secret_variable: CI_REGISTRY_PASSWORD
* When using ``include`` in your environment, be sure to make the included
files available in the build jobs. This means adding those files to the
artifact directory. Those files will also be missing in the reproducibility
artifact.
Note the name of the mirror is ``buildcache-destination``, which is required
as of Spack 0.23 (see below for more information). The mirror url simply
points to the container registry associated with the project, while
``id_variable`` and ``secret_variable`` refer to environment variables
containing the access credentials for the mirror.
When spack builds packages for this example project, they will be pushed to
the project container registry, where they will be available for subsequent
jobs to install as dependencies, or for other pipelines to use to build runnable
container images.
* Because the location of the environment changed, including files with
relative path may have to be adapted to work both in the project context
(generation job) and in the concrete env dir context (build job).
-----------------------------------
Spack commands supporting pipelines
@@ -270,6 +253,17 @@ can easily happen if it is not updated frequently, this behavior ensures that
spack has a way to know for certain about the status of any concrete spec on
the remote mirror, but can slow down pipeline generation significantly.
The ``--optimize`` argument is experimental and runs the generated pipeline
document through a series of optimization passes designed to reduce the size
of the generated file.
The ``--dependencies`` argument is also experimental and disables what in Gitlab is
referred to as DAG scheduling, internally using the ``dependencies`` keyword
rather than ``needs`` to list dependency jobs. The drawback of using this option
is that before any job can begin, all jobs in previous stages must first
complete. The benefit is that Gitlab allows more dependencies to be listed
when using ``dependencies`` instead of ``needs``.
The optional ``--output-file`` argument should be an absolute path (including
file name) to the generated pipeline, and if not given, the default is
``./.gitlab-ci.yml``.
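For example, a generation command combining these options might look like the following (the paths are borrowed from the example pipeline above and are purely illustrative):
.. code-block:: console
   $ spack -e . ci generate --optimize \
       --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir" \
       --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml"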
@@ -434,6 +428,15 @@ configuration with a ``script`` attribute. Specifying a signing job without a sc
does not create a signing job and the job configuration attributes will be ignored.
Signing jobs are always assigned the runner tags ``aws``, ``protected``, and ``notary``.
^^^^^^^^^^^^^^^^^
Cleanup (cleanup)
^^^^^^^^^^^^^^^^^
When using ``temporary-storage-url-prefix`` the cleanup job will destroy the mirror
created for the associated Gitlab pipeline. Cleanup jobs do not allow modifying the
script, but do expect that the spack command is in the path and require a
``before_script`` to be specified that sources the ``setup-env.sh`` script.
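A minimal sketch of such a configuration, assuming a ``cleanup-job`` entry under ``pipeline-gen`` and an illustrative bucket URL:
.. code-block:: yaml
   ci:
     temporary-storage-url-prefix: "s3://my-bucket/temporary-storage"
     pipeline-gen:
     - cleanup-job:
         before_script:
           - git clone ${SPACK_REPOSITORY}
           - . "./spack/share/spack/setup-env.sh"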
.. _noop_jobs:
^^^^^^^^^^^^
@@ -600,77 +603,6 @@ the attributes will be merged starting from the bottom match going up to the top
In the case that no match is found in a submapping section, no additional attributes will be applied.
^^^^^^^^^^^^^^^^^^^^^^^^
Dynamic Mapping Sections
^^^^^^^^^^^^^^^^^^^^^^^^
For large scale CI where cost optimization is required, dynamic mapping allows for the use of real-time
mapping schemes served by a web service. This type of mapping does not support the ``-remove`` type
behavior, but it does follow the rest of the merge rules for configurations.
The dynamic mapping service needs to implement a single REST API interface for GET
requests of the form ``GET <URL>[:PORT][/PATH]?spec=<pkg_name@pkg_version +variant1+variant2%compiler@compiler_version>``.
An example request:
.. code-block::
https://my-dyn-mapping.spack.io/allocation?spec=zlib-ng@2.1.6 +compat+opt+shared+pic+new_strategies arch=linux-ubuntu20.04-x86_64_v3%gcc@12.0.0
An example response updates the Kubernetes request variables, overrides the max retries for Gitlab,
and prepends a note about the modifications made by the my-dyn-mapping.spack.io service:
.. code-block::
200 OK
{
"variables":
{
"KUBERNETES_CPU_REQUEST": "500m",
"KUBERNETES_MEMORY_REQUEST": "2G",
},
"retry": { "max:": "1"}
"script+:":
[
"echo \"Job modified by my-dyn-mapping.spack.io\""
]
}
The ci.yaml configuration section takes the URL endpoint as well as a number of options to configure how responses are handled.
It is possible to specify a list of allowed and ignored configuration attributes under ``allow`` and ``ignore``
respectively. It is also possible to configure required attributes under the ``required`` section.
The client timeout and SSL verification can be configured using the ``timeout`` and ``verify_ssl`` options.
By default, ``timeout`` is set to the option in ``config:timeout`` and ``verify_ssl`` is set to the option in ``config:verify_ssl``.
Passing header parameters to the request can be achieved through the ``header`` section. The values of the variables passed to the
header may be environment variables that are expanded at runtime, such as a private token configured on the runner.
Here is an example configuration pointing to ``my-dyn-mapping.spack.io/allocation``.
.. code-block:: yaml
ci:
- dynamic-mapping:
endpoint: my-dyn-mapping.spack.io/allocation
timeout: 10
verify_ssl: True
header:
PRIVATE_TOKEN: ${MY_PRIVATE_TOKEN}
MY_CONFIG: "fuzz_allocation:false"
allow:
- variables
ignore:
- script
require: []
^^^^^^^^^^^^^
Bootstrapping
^^^^^^^^^^^^^
@@ -742,13 +674,26 @@ build the package.
When including a bootstrapping phase as in the example above, the result is that
the bootstrapped compiler packages will be pushed to the binary mirror (and the
local artifacts mirror) before the actual release specs are built.
local artifacts mirror) before the actual release specs are built. In this case,
the jobs corresponding to subsequent release specs are configured to
``install_missing_compilers``, so that if spack is asked to install a package
with a compiler it doesn't know about, it can be quickly installed from the
binary mirror first.
Since bootstrapping compilers is optional, those items can be left out of the
environment/stack file, and in that case no bootstrapping will be done (only the
specs will be staged for building) and the runners will be expected to already
have all needed compilers installed and configured for spack to use.
^^^^^^^^^^^^^^^^^^^
Pipeline Buildcache
^^^^^^^^^^^^^^^^^^^
The ``enable-artifacts-buildcache`` key
takes a boolean and determines whether the pipeline uses artifacts to store and
pass along the buildcaches from one stage to the next (the default if you don't
provide this option is ``False``).
^^^^^^^^^^^^^^^^
Broken Specs URL
^^^^^^^^^^^^^^^^
@@ -820,69 +765,6 @@ presence of a ``SPACK_CDASH_AUTH_TOKEN`` environment variable during the
build group on CDash called "Release Testing" (that group will be created if
it didn't already exist).
.. _ci_artifacts:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
CI Artifacts Directory Layout
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When running the CI build using the command ``spack ci rebuild`` a number of directories are created for
storing data generated during the CI job. The default root directory for artifacts is ``job_scratch_root``.
This can be overridden by passing the argument ``--artifacts-root`` to the ``spack ci generate`` command
or by setting the ``SPACK_ARTIFACTS_ROOT`` environment variable in the build job scripts.
The top level directories under the artifact root are ``concrete_environment``, ``logs``, ``reproduction``,
``tests``, and ``user_data``. Spack does not restrict what is written to any of these directories, nor does
it require user-specified files to be written to any specific directory.
------------------------
``concrete_environment``
------------------------
The directory ``concrete_environment`` is used to communicate the ``spack.yaml`` processed by ``spack ci generate``
and the concrete ``spack.lock`` for the CI environment.
--------
``logs``
--------
The directory ``logs`` contains the spack build log, ``spack-build-out.txt``, and the spack build environment
modification file, ``spack-build-mod-env.txt``. Additionally, all files specified by the package's ``Builder``
property ``archive_files`` are also copied here (e.g. ``CMakeCache.txt`` for ``CMakeBuilder``).
----------------
``reproduction``
----------------
The directory ``reproduction`` is used to store the files needed by the ``spack reproduce-build`` command.
This includes ``repro.json``, copies of all of the files in ``concrete_environment``, the concrete spec
JSON file for the current spec being built, and all of the files written in the artifacts root directory.
The ``repro.json`` file is not versioned and is only designed to work with the version of Spack that CI was run with.
An example of what a ``repro.json`` may look like:
.. code:: json
{
"job_name": "adios2@2.9.2 /feaevuj %gcc@11.4.0 arch=linux-ubuntu20.04-x86_64_v3 E4S ROCm External",
"job_spec_json": "adios2.json",
"ci_project_dir": "/builds/spack/spack"
}
---------
``tests``
---------
The directory ``tests`` is used to store output from running ``spack test <job spec>``. This may or may not have
data in it depending on the package that was built and the availability of tests.
-------------
``user_data``
-------------
The directory ``user_data`` is used to store everything else that shouldn't be copied to the ``reproduction`` directory.
Users may use this to store additional logs or metrics or other types of files generated by the build job.
-------------------------------------
Using a custom spack in your pipeline
-------------------------------------

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -475,3 +476,9 @@ implemented using Python's built-in `sys.path
:py:mod:`spack.repo` module implements a custom `Python importer
<https://docs.python.org/2/library/imp.html>`_.
.. warning::
The mechanism for extending packages is not yet extensively tested,
and extending packages across repositories imposes inter-repo
dependencies, which may be hard to manage. Use this feature at your
own risk, but let us know if you have a use case for it.

View File

@@ -1,13 +1,13 @@
sphinx==8.2.3
sphinxcontrib-programoutput==0.18
sphinx_design==0.6.1
sphinx-rtd-theme==3.0.2
python-levenshtein==0.27.1
docutils==0.21.2
pygments==2.19.1
urllib3==2.3.0
pytest==8.3.5
isort==6.0.1
black==25.1.0
flake8==7.1.2
mypy==1.11.1
sphinx==7.2.6
sphinxcontrib-programoutput==0.17
sphinx_design==0.5.0
sphinx-rtd-theme==2.0.0
python-levenshtein==0.25.1
docutils==0.20.1
pygments==2.17.2
urllib3==2.2.1
pytest==8.2.0
isort==5.13.2
black==24.4.2
flake8==7.0.0
mypy==1.10.0

View File

@@ -1,4 +1,5 @@
.. Copyright Spack Project Developers. See COPYRIGHT file for details.
.. Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -1,4 +1,5 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

View File

@@ -8,6 +8,7 @@ unzip, , , Compress/Decompress archives
bzip2, , , Compress/Decompress archives
xz, , , Compress/Decompress archives
zstd, , Optional, Compress/Decompress archives
file, , , Create/Use Buildcaches
lsb-release, , , Linux: identify operating system version
gnupg2, , , Sign/Verify Buildcaches
git, , , Manage Software Repositories

337
lib/spack/env/cc vendored
View File

@@ -1,7 +1,8 @@
#!/bin/sh -f
# shellcheck disable=SC2034 # evals in this script fool shellcheck
#
# Copyright Spack Project Developers. See COPYRIGHT file for details.
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
@@ -100,9 +101,10 @@ setsep() {
esac
}
# prepend LISTNAME ELEMENT
# prepend LISTNAME ELEMENT [SEP]
#
# Prepend ELEMENT to the list stored in the variable LISTNAME.
# Prepend ELEMENT to the list stored in the variable LISTNAME,
# assuming the list is separated by SEP.
# Handles empty lists and single-element lists.
prepend() {
varname="$1"
@@ -172,46 +174,6 @@ preextend() {
unset IFS
}
execute() {
# dump the full command if the caller supplies SPACK_TEST_COMMAND=dump-args
if [ -n "${SPACK_TEST_COMMAND=}" ]; then
case "$SPACK_TEST_COMMAND" in
dump-args)
IFS="$lsep"
for arg in $full_command_list; do
echo "$arg"
done
unset IFS
exit
;;
dump-env-*)
var=${SPACK_TEST_COMMAND#dump-env-}
eval "printf '%s\n' \"\$0: \$var: \$$var\""
;;
*)
die "Unknown test command: '$SPACK_TEST_COMMAND'"
;;
esac
fi
#
# Write the input and output commands to debug logs if it's asked for.
#
if [ "$SPACK_DEBUG" = TRUE ]; then
input_log="$SPACK_DEBUG_LOG_DIR/spack-cc-$SPACK_DEBUG_LOG_ID.in.log"
output_log="$SPACK_DEBUG_LOG_DIR/spack-cc-$SPACK_DEBUG_LOG_ID.out.log"
echo "[$mode] $command $input_command" >> "$input_log"
IFS="$lsep"
echo "[$mode] "$full_command_list >> "$output_log"
unset IFS
fi
# Execute the full command, preserving spaces with IFS set
# to the alarm bell separator.
IFS="$lsep"; exec $full_command_list
exit
}
# Fail with a clear message if the input contains any bell characters.
if eval "[ \"\${*#*${lsep}}\" != \"\$*\" ]"; then
die "Compiler command line contains our separator ('${lsep}'). Cannot parse."
@@ -236,36 +198,6 @@ esac
}
"
# path_list functions. Path_lists have 3 parts: spack_store_<list>, <list> and system_<list>,
# which are used to prioritize paths when assembling the final command line.
# init_path_lists LISTNAME
# Set <LISTNAME>, spack_store_<LISTNAME>, and system_<LISTNAME> to "".
init_path_lists() {
eval "spack_store_$1=\"\""
eval "$1=\"\""
eval "system_$1=\"\""
}
# assign_path_lists LISTNAME1 LISTNAME2
# Copy contents of LISTNAME2 into LISTNAME1, for each path_list prefix.
assign_path_lists() {
eval "spack_store_$1=\"\${spack_store_$2}\""
eval "$1=\"\${$2}\""
eval "system_$1=\"\${system_$2}\""
}
# append_path_lists LISTNAME ELT
# Append the provided ELT to the appropriate list, based on the result of path_order().
append_path_lists() {
path_order "$2"
case $? in
0) eval "append spack_store_$1 \"\$2\"" ;;
1) eval "append $1 \"\$2\"" ;;
2) eval "append system_$1 \"\$2\"" ;;
esac
}
# Check if optional parameters are defined
# If we aren't asking for debug flags, don't add them
if [ -z "${SPACK_ADD_DEBUG_FLAGS:-}" ]; then
@@ -299,17 +231,12 @@ fi
# ld link
# ccld compile & link
# Note. SPACK_ALWAYS_XFLAGS are applied for all compiler invocations,
# including version checks (SPACK_XFLAGS variants are not applied
# for version checks).
command="${0##*/}"
comp="CC"
vcheck_flags=""
case "$command" in
cpp)
mode=cpp
debug_flags="-g"
vcheck_flags="${SPACK_ALWAYS_CPPFLAGS}"
;;
cc|c89|c99|gcc|clang|armclang|icc|icx|pgcc|nvc|xlc|xlc_r|fcc|amdclang|cl.exe|craycc)
command="$SPACK_CC"
@@ -317,7 +244,6 @@ case "$command" in
comp="CC"
lang_flags=C
debug_flags="-g"
vcheck_flags="${SPACK_ALWAYS_CFLAGS}"
;;
c++|CC|g++|clang++|armclang++|icpc|icpx|pgc++|nvc++|xlc++|xlc++_r|FCC|amdclang++|crayCC)
command="$SPACK_CXX"
@@ -325,7 +251,6 @@ case "$command" in
comp="CXX"
lang_flags=CXX
debug_flags="-g"
vcheck_flags="${SPACK_ALWAYS_CXXFLAGS}"
;;
ftn|f90|fc|f95|gfortran|flang|armflang|ifort|ifx|pgfortran|nvfortran|xlf90|xlf90_r|nagfor|frt|amdflang|crayftn)
command="$SPACK_FC"
@@ -333,7 +258,6 @@ case "$command" in
comp="FC"
lang_flags=F
debug_flags="-g"
vcheck_flags="${SPACK_ALWAYS_FFLAGS}"
;;
f77|xlf|xlf_r|pgf77)
command="$SPACK_F77"
@@ -341,7 +265,6 @@ case "$command" in
comp="F77"
lang_flags=F
debug_flags="-g"
vcheck_flags="${SPACK_ALWAYS_FFLAGS}"
;;
ld|ld.gold|ld.lld)
mode=ld
@@ -442,11 +365,7 @@ unset IFS
export PATH="$new_dirs"
if [ "$mode" = vcheck ]; then
full_command_list="$command"
args="$@"
extend full_command_list vcheck_flags
extend full_command_list args
execute
exec "${command}" "$@"
fi
# Darwin's linker has a -r argument that merges object files together.
@@ -498,7 +417,12 @@ input_command="$*"
parse_Wl() {
while [ $# -ne 0 ]; do
if [ "$wl_expect_rpath" = yes ]; then
append_path_lists return_rpath_dirs_list "$1"
path_order "$1"
case $? in
0) append return_spack_store_rpath_dirs_list "$1" ;;
1) append return_rpath_dirs_list "$1" ;;
2) append return_system_rpath_dirs_list "$1" ;;
esac
wl_expect_rpath=no
else
case "$1" in
@@ -507,14 +431,24 @@ parse_Wl() {
if [ -z "$arg" ]; then
shift; continue
fi
append_path_lists return_rpath_dirs_list "$arg"
path_order "$arg"
case $? in
0) append return_spack_store_rpath_dirs_list "$arg" ;;
1) append return_rpath_dirs_list "$arg" ;;
2) append return_system_rpath_dirs_list "$arg" ;;
esac
;;
--rpath=*)
arg="${1#--rpath=}"
if [ -z "$arg" ]; then
shift; continue
fi
append_path_lists return_rpath_dirs_list "$arg"
path_order "$arg"
case $? in
0) append return_spack_store_rpath_dirs_list "$arg" ;;
1) append return_rpath_dirs_list "$arg" ;;
2) append return_system_rpath_dirs_list "$arg" ;;
esac
;;
-rpath|--rpath)
wl_expect_rpath=yes
@@ -522,7 +456,8 @@ parse_Wl() {
"$dtags_to_strip")
;;
-Wl)
# Nested -Wl,-Wl means we're in NAG compiler territory. We don't support it.
# Nested -Wl,-Wl means we're in NAG compiler territory, we don't support
# it.
return 1
;;
*)
@@ -541,10 +476,21 @@ categorize_arguments() {
return_other_args_list=""
return_isystem_was_used=""
init_path_lists return_isystem_include_dirs_list
init_path_lists return_include_dirs_list
init_path_lists return_lib_dirs_list
init_path_lists return_rpath_dirs_list
return_isystem_spack_store_include_dirs_list=""
return_isystem_system_include_dirs_list=""
return_isystem_include_dirs_list=""
return_spack_store_include_dirs_list=""
return_system_include_dirs_list=""
return_include_dirs_list=""
return_spack_store_lib_dirs_list=""
return_system_lib_dirs_list=""
return_lib_dirs_list=""
return_spack_store_rpath_dirs_list=""
return_system_rpath_dirs_list=""
return_rpath_dirs_list=""
# Global state for keeping track of -Wl,-rpath -Wl,/path
wl_expect_rpath=no
@@ -610,17 +556,32 @@ categorize_arguments() {
arg="${1#-isystem}"
return_isystem_was_used=true
if [ -z "$arg" ]; then shift; arg="$1"; fi
append_path_lists return_isystem_include_dirs_list "$arg"
path_order "$arg"
case $? in
0) append return_isystem_spack_store_include_dirs_list "$arg" ;;
1) append return_isystem_include_dirs_list "$arg" ;;
2) append return_isystem_system_include_dirs_list "$arg" ;;
esac
;;
-I*)
arg="${1#-I}"
if [ -z "$arg" ]; then shift; arg="$1"; fi
append_path_lists return_include_dirs_list "$arg"
path_order "$arg"
case $? in
0) append return_spack_store_include_dirs_list "$arg" ;;
1) append return_include_dirs_list "$arg" ;;
2) append return_system_include_dirs_list "$arg" ;;
esac
;;
-L*)
arg="${1#-L}"
if [ -z "$arg" ]; then shift; arg="$1"; fi
append_path_lists return_lib_dirs_list "$arg"
path_order "$arg"
case $? in
0) append return_spack_store_lib_dirs_list "$arg" ;;
1) append return_lib_dirs_list "$arg" ;;
2) append return_system_lib_dirs_list "$arg" ;;
esac
;;
-l*)
# -loopopt=0 is generated erroneously in autoconf <= 2.69,
@@ -653,17 +614,32 @@ categorize_arguments() {
break
elif [ "$xlinker_expect_rpath" = yes ]; then
# Register the path of -Xlinker -rpath <other args> -Xlinker <path>
append_path_lists return_rpath_dirs_list "$1"
path_order "$1"
case $? in
0) append return_spack_store_rpath_dirs_list "$1" ;;
1) append return_rpath_dirs_list "$1" ;;
2) append return_system_rpath_dirs_list "$1" ;;
esac
xlinker_expect_rpath=no
else
case "$1" in
-rpath=*)
arg="${1#-rpath=}"
append_path_lists return_rpath_dirs_list "$arg"
path_order "$arg"
case $? in
0) append return_spack_store_rpath_dirs_list "$arg" ;;
1) append return_rpath_dirs_list "$arg" ;;
2) append return_system_rpath_dirs_list "$arg" ;;
esac
;;
--rpath=*)
arg="${1#--rpath=}"
append_path_lists return_rpath_dirs_list "$arg"
path_order "$arg"
case $? in
0) append return_spack_store_rpath_dirs_list "$arg" ;;
1) append return_rpath_dirs_list "$arg" ;;
2) append return_system_rpath_dirs_list "$arg" ;;
esac
;;
-rpath|--rpath)
xlinker_expect_rpath=yes
@@ -680,36 +656,7 @@ categorize_arguments() {
"$dtags_to_strip")
;;
*)
# if mode is not ld, we can just add to other args
if [ "$mode" != "ld" ]; then
append return_other_args_list "$1"
shift
continue
fi
# if we're in linker mode, we need to parse raw RPATH args
case "$1" in
-rpath=*)
arg="${1#-rpath=}"
append_path_lists return_rpath_dirs_list "$arg"
;;
--rpath=*)
arg="${1#--rpath=}"
append_path_lists return_rpath_dirs_list "$arg"
;;
-rpath|--rpath)
if [ $# -eq 1 ]; then
# -rpath without value: let the linker raise an error.
append return_other_args_list "$1"
break
fi
shift
append_path_lists return_rpath_dirs_list "$1"
;;
*)
append return_other_args_list "$1"
;;
esac
append return_other_args_list "$1"
;;
esac
shift
@@ -731,10 +678,21 @@ categorize_arguments() {
categorize_arguments "$@"
assign_path_lists isystem_include_dirs_list return_isystem_include_dirs_list
assign_path_lists include_dirs_list return_include_dirs_list
assign_path_lists lib_dirs_list return_lib_dirs_list
assign_path_lists rpath_dirs_list return_rpath_dirs_list
spack_store_include_dirs_list="$return_spack_store_include_dirs_list"
system_include_dirs_list="$return_system_include_dirs_list"
include_dirs_list="$return_include_dirs_list"
spack_store_lib_dirs_list="$return_spack_store_lib_dirs_list"
system_lib_dirs_list="$return_system_lib_dirs_list"
lib_dirs_list="$return_lib_dirs_list"
spack_store_rpath_dirs_list="$return_spack_store_rpath_dirs_list"
system_rpath_dirs_list="$return_system_rpath_dirs_list"
rpath_dirs_list="$return_rpath_dirs_list"
isystem_spack_store_include_dirs_list="$return_isystem_spack_store_include_dirs_list"
isystem_system_include_dirs_list="$return_isystem_system_include_dirs_list"
isystem_include_dirs_list="$return_isystem_include_dirs_list"
isystem_was_used="$return_isystem_was_used"
other_args_list="$return_other_args_list"
@@ -764,7 +722,6 @@ case "$mode" in
cc|ccld)
case $lang_flags in
F)
extend spack_flags_list SPACK_ALWAYS_FFLAGS
extend spack_flags_list SPACK_FFLAGS
;;
esac
@@ -774,7 +731,6 @@ esac
# C preprocessor flags come before any C/CXX flags
case "$mode" in
cpp|as|cc|ccld)
extend spack_flags_list SPACK_ALWAYS_CPPFLAGS
extend spack_flags_list SPACK_CPPFLAGS
;;
esac
@@ -785,11 +741,9 @@ case "$mode" in
cc|ccld)
case $lang_flags in
C)
extend spack_flags_list SPACK_ALWAYS_CFLAGS
extend spack_flags_list SPACK_CFLAGS
;;
CXX)
extend spack_flags_list SPACK_ALWAYS_CXXFLAGS
extend spack_flags_list SPACK_CXXFLAGS
;;
esac
@@ -810,10 +764,21 @@ IFS="$lsep"
categorize_arguments $spack_flags_list
unset IFS
assign_path_lists spack_flags_isystem_include_dirs_list return_isystem_include_dirs_list
assign_path_lists spack_flags_include_dirs_list return_include_dirs_list
assign_path_lists spack_flags_lib_dirs_list return_lib_dirs_list
assign_path_lists spack_flags_rpath_dirs_list return_rpath_dirs_list
spack_flags_isystem_spack_store_include_dirs_list="$return_isystem_spack_store_include_dirs_list"
spack_flags_isystem_system_include_dirs_list="$return_isystem_system_include_dirs_list"
spack_flags_isystem_include_dirs_list="$return_isystem_include_dirs_list"
spack_flags_spack_store_include_dirs_list="$return_spack_store_include_dirs_list"
spack_flags_system_include_dirs_list="$return_system_include_dirs_list"
spack_flags_include_dirs_list="$return_include_dirs_list"
spack_flags_spack_store_lib_dirs_list="$return_spack_store_lib_dirs_list"
spack_flags_system_lib_dirs_list="$return_system_lib_dirs_list"
spack_flags_lib_dirs_list="$return_lib_dirs_list"
spack_flags_spack_store_rpath_dirs_list="$return_spack_store_rpath_dirs_list"
spack_flags_system_rpath_dirs_list="$return_system_rpath_dirs_list"
spack_flags_rpath_dirs_list="$return_rpath_dirs_list"
spack_flags_isystem_was_used="$return_isystem_was_used"
spack_flags_other_args_list="$return_other_args_list"
@@ -872,7 +837,7 @@ esac
case "$mode" in
cpp|cc|as|ccld)
if [ "$spack_flags_isystem_was_used" = "true" ] || [ "$isystem_was_used" = "true" ]; then
extend spack_store_isystem_include_dirs_list SPACK_STORE_INCLUDE_DIRS
extend isystem_spack_store_include_dirs_list SPACK_STORE_INCLUDE_DIRS
extend isystem_include_dirs_list SPACK_INCLUDE_DIRS
else
extend spack_store_include_dirs_list SPACK_STORE_INCLUDE_DIRS
@@ -888,63 +853,64 @@ args_list="$flags_list"
# Include search paths partitioned by (in store, non-system, system)
# NOTE: adding ${lsep} to the prefix here turns every added element into two
extend args_list spack_store_spack_flags_include_dirs_list -I
extend args_list spack_flags_spack_store_include_dirs_list -I
extend args_list spack_store_include_dirs_list -I
extend args_list spack_flags_include_dirs_list -I
extend args_list include_dirs_list -I
extend args_list spack_store_spack_flags_isystem_include_dirs_list "-isystem${lsep}"
extend args_list spack_store_isystem_include_dirs_list "-isystem${lsep}"
extend args_list spack_flags_isystem_spack_store_include_dirs_list "-isystem${lsep}"
extend args_list isystem_spack_store_include_dirs_list "-isystem${lsep}"
extend args_list spack_flags_isystem_include_dirs_list "-isystem${lsep}"
extend args_list isystem_include_dirs_list "-isystem${lsep}"
extend args_list system_spack_flags_include_dirs_list -I
extend args_list spack_flags_system_include_dirs_list -I
extend args_list system_include_dirs_list -I
extend args_list system_spack_flags_isystem_include_dirs_list "-isystem${lsep}"
extend args_list system_isystem_include_dirs_list "-isystem${lsep}"
extend args_list spack_flags_isystem_system_include_dirs_list "-isystem${lsep}"
extend args_list isystem_system_include_dirs_list "-isystem${lsep}"
# Library search paths partitioned by (in store, non-system, system)
extend args_list spack_store_spack_flags_lib_dirs_list "-L"
extend args_list spack_flags_spack_store_lib_dirs_list "-L"
extend args_list spack_store_lib_dirs_list "-L"
extend args_list spack_flags_lib_dirs_list "-L"
extend args_list lib_dirs_list "-L"
extend args_list system_spack_flags_lib_dirs_list "-L"
extend args_list spack_flags_system_lib_dirs_list "-L"
extend args_list system_lib_dirs_list "-L"
# RPATHs arguments
rpath_prefix=""
case "$mode" in
ccld)
if [ -n "$dtags_to_add" ] ; then
append args_list "$linker_arg$dtags_to_add"
fi
rpath_prefix="$rpath"
extend args_list spack_flags_spack_store_rpath_dirs_list "$rpath"
extend args_list spack_store_rpath_dirs_list "$rpath"
extend args_list spack_flags_rpath_dirs_list "$rpath"
extend args_list rpath_dirs_list "$rpath"
extend args_list spack_flags_system_rpath_dirs_list "$rpath"
extend args_list system_rpath_dirs_list "$rpath"
;;
ld)
if [ -n "$dtags_to_add" ] ; then
append args_list "$dtags_to_add"
fi
rpath_prefix="-rpath${lsep}"
extend args_list spack_flags_spack_store_rpath_dirs_list "-rpath${lsep}"
extend args_list spack_store_rpath_dirs_list "-rpath${lsep}"
extend args_list spack_flags_rpath_dirs_list "-rpath${lsep}"
extend args_list rpath_dirs_list "-rpath${lsep}"
extend args_list spack_flags_system_rpath_dirs_list "-rpath${lsep}"
extend args_list system_rpath_dirs_list "-rpath${lsep}"
;;
esac
# if mode is ccld or ld, extend RPATH lists with the prefix determined above
if [ -n "$rpath_prefix" ]; then
extend args_list spack_store_spack_flags_rpath_dirs_list "$rpath_prefix"
extend args_list spack_store_rpath_dirs_list "$rpath_prefix"
extend args_list spack_flags_rpath_dirs_list "$rpath_prefix"
extend args_list rpath_dirs_list "$rpath_prefix"
extend args_list system_spack_flags_rpath_dirs_list "$rpath_prefix"
extend args_list system_rpath_dirs_list "$rpath_prefix"
fi
# Other arguments from the input command
extend args_list other_args_list
extend args_list spack_flags_other_args_list
@@ -967,4 +933,39 @@ if [ -n "$SPACK_CCACHE_BINARY" ]; then
esac
fi
execute
# dump the full command if the caller supplies SPACK_TEST_COMMAND=dump-args
if [ -n "${SPACK_TEST_COMMAND=}" ]; then
case "$SPACK_TEST_COMMAND" in
dump-args)
IFS="$lsep"
for arg in $full_command_list; do
echo "$arg"
done
unset IFS
exit
;;
dump-env-*)
var=${SPACK_TEST_COMMAND#dump-env-}
eval "printf '%s\n' \"\$0: \$var: \$$var\""
;;
*)
die "Unknown test command: '$SPACK_TEST_COMMAND'"
;;
esac
fi
#
# Write the input and output commands to debug logs if it's asked for.
#
if [ "$SPACK_DEBUG" = TRUE ]; then
input_log="$SPACK_DEBUG_LOG_DIR/spack-cc-$SPACK_DEBUG_LOG_ID.in.log"
output_log="$SPACK_DEBUG_LOG_DIR/spack-cc-$SPACK_DEBUG_LOG_ID.out.log"
echo "[$mode] $command $input_command" >> "$input_log"
IFS="$lsep"
echo "[$mode] "$full_command_list >> "$output_log"
unset IFS
fi
# Execute the full command, preserving spaces with IFS set
# to the alarm bell separator.
IFS="$lsep"; exec $full_command_list

Some files were not shown because too many files have changed in this diff Show More