Set command line scopes last in _main, so they are higher scopes
Restore the global configuration in a spawned process by inspecting
the result of ctx.get_start_method()
Add the ability to pass a mp.context to PackageInstallContext.
Add shell tests to check overriding the configuration:
- Using both -c and -C from command line
- With and without an environment active
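A rough sketch of the start-method check described above (the restore helper is hypothetical, not Spack's actual API):
```python
import multiprocessing

def restore_global_configuration():
    """Hypothetical stand-in for rebuilding Spack's global config scopes."""

def maybe_restore_config(ctx):
    # ctx is a multiprocessing context. "fork" children inherit the parent's
    # memory, so the configuration is already in place; "spawn" children start
    # from a fresh interpreter and must rebuild it explicitly.
    if ctx.get_start_method() == "spawn":
        restore_global_configuration()
```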
Currently, `spack solve` has different spec selection semantics than `spack spec`.
`spack solve` currently does not allow specifying a single spec when an environment is active.
This PR modifies `spack solve` to inherit the interface from `spack spec`, and to use
the same spec selection logic. This will allow for better use of `spack solve --show opt`
for debugging.
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
* ML CI: Linux aarch64
* Add config files
* No aarch64 tag
* Don't specify image
* Use amazonlinux image
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
* Update and require
* GCC is too old
* Fix some builds
* xgboost doesn't support old GCC + cuda
* Run on newer Ubuntu
* Remove mxnet
* Try aarch64 range
* Use main branch
* Conflict applies to all targets
* cuda only required when +cuda
* Use tagged version
* Comment out tf-estimator
* Add ROCm, use newer Ubuntu
* Remove ROCm
---------
Co-authored-by: kwryankrattiger <80296582+kwryankrattiger@users.noreply.github.com>
* Normalize Spack Win entrypoints
Currently Spack has multiple entrypoints on Windows that, in addition to
differing from the *nix implementations, differ from shell to shell on
Windows. This is confusing for new users and generally unnecessary.
This PR adds a normal setup script for the batch shell while preserving
the previous "click from file explorer for spack shell" behavior.
Additionally, this adds a shell title to both PowerShell and cmd, letting
users know this is a Spack shell.
* remove doskeys
This PR is in response to a question in the `environments` slack channel (https://spackpm.slack.com/archives/CMHK7MF51/p1729200068557219) about inadequate CLI help/documentation for one specific subcommand.
This PR uses the approach I took for the descriptions and help for `spack test` subcommands. Namely, I use the first line of the relevant docstring as the description, which is shown per subcommand in `spack env -h`, and the entire docstring as the help. I then added help where it seemed appropriate. I also tweaked argument docstrings to tighten them up, make them consistent with similar arguments elsewhere in the command, and elaborate where it seemed important. (The only subcommand I didn't touch is `loads`.)
For example, before:
```
$ spack env update -h
usage: spack env update [-hy] env
positional arguments:
  env               name or directory of the environment to activate

optional arguments:
  -h, --help        show this help message and exit
  -y, --yes-to-all  assume "yes" is the answer to every confirmation request
```
After the changes in this PR:
```
$ spack env update -h
usage: spack env update [-hy] env
update the environment manifest to the latest schema format
update the environment to the latest schema format, which may not be
readable by older versions of spack
a backup copy of the manifest is retained in case there is a need to revert
this operation
positional arguments:
  env               name or directory of the environment

optional arguments:
  -h, --help        show this help message and exit
  -y, --yes-to-all  assume "yes" is the answer to every confirmation request
```
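The underlying pattern is roughly the following (a sketch, not the exact Spack helper):
```python
import argparse

def env_update():
    """update the environment manifest to the latest schema format

    update the environment to the latest schema format, which may not be
    readable by older versions of spack
    """

parser = argparse.ArgumentParser(prog="spack env")
subparsers = parser.add_subparsers()
doc = env_update.__doc__
subparsers.add_parser(
    "update",
    help=doc.splitlines()[0],  # first docstring line, shown in `spack env -h`
    description=doc,           # full docstring, shown in `spack env update -h`
    formatter_class=argparse.RawDescriptionHelpFormatter,
)
```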
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
ci: Remove deprecated logic from the ci module
Remove the following from the ci module, schema, and tests:
- deprecated ci stack and handling of old ci config
- deprecated mirror handling logic
- support for artifacts buildcache
- support for temporary storage url
* ParaView: Explicitly set the ENABLE_MPI on/off
* Disallow MPI in the DAG when ~mpi
* @5.13 uses 'remove_children'; use pugixml@1.11: (see #47098)
* cloud_pipelines/stacks/data-vis-sdk: paraview +raytracing: add +adios2 +fides
Co-authored-by: Bernhard Kaindl <bernhardkaindl7@gmail.com>
* stacks: add a stack for devtools on darwin
After getting this whole mess building on darwin, let's keep it that
way, and maybe make it so we have some non-ML darwin binaries in spack
as well.
* reuse: false for devtools
* dtc: fix darwin dylib name and id
On mac the convention is `lib<name>.<num>.dylib`, while the makefile
creates a num-suffixed one by default. The id in the file is also a
local name rather than being rewritten to the full path; this fixes both
problems.
* node-js: make whereis more deterministic
* relocation(darwin): catch Mach-O load failure
The MachO library can throw an exception rather than return no headers;
this happened on an ELF file in the test data of go-bootstrap. Try
catching the exception and moving on for now. We may also need to look
into why we're trying to rewrite an ELF file.
* qemu: add darwin flags to clear out warnings
There's a build failure for qemu in CI, but it's invisible because of
the immense mass of warning output. Explicitly specify the target macos
version and remove the extraneous unknown-warning-option flag.
* dtc: libyaml is also a link dependency
libyaml is required at runtime to run the dtc binary; its absence caused
the CI for qemu to fail when the library wasn't found.
* e4s external rocm ci: upgrade to v6.2.1
* use ghcr.io/spack/spack/ubuntu22.04-runner-amd64-gcc-11.4-rocm6.2.1:2024.10.08
* magma +rocm: add entry for v6.2.1
`spack gc` has so far been a global or environment-specific thing.
This adds the ability to restrict garbage collection to specific specs,
e.g. if you *just* want to get rid of all your unused python installations,
you could write:
```console
spack gc python
```
- [x] add `constraint` arg to `spack gc`
- [x] add a simple test
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* Add latest releases of Camp, RAJA, Umpire, CHAI and CARE
* Address review comments + blt requirement in Umpire
* CARE @develop & @main: Submodules -> False
* Changes in Umpire
* Changes in RAJA
* Changes in CHAI
* Changes in RAJA: prefer 'spec.satisfies' to 'in spec'
This is due to a non-equivalence in Spack with providers like mpi.
See e.g. https://github.com/spack/spack/pull/46126
* Changes in Umpire: prefer 'spec.satisfies' to 'in spec'
This is due to a non-equivalence in Spack with providers like mpi.
See e.g. https://github.com/spack/spack/pull/46126
* Changes in CARE:
Still need to update to CachedCMakePackage based on RADIUSS Spack Configs version
* Missing change in RAJA + changes in fmt
* Fix syntax
* Changes in Camp
* Fix style
* CHAI: when ~raja, turn off RAJA in build system
* Fix: Ascent@0.9.3 does not support RAJA@2024.07.0
* Enforce same version constraint on Umpire as for RAJA
* Enforce preferred version of vtk-m in ascent 0.9.3
* Migrate CARE package to CachedCMakePackage
* Fix style in CARE package
* CARE: Apply changes for uniform implementation across RADIUSS projects
* Caliper: move to CachedCMakePackage, from RADIUSS Spack Configs
* Adapt RAJA Perf to spack CI
* Activate CHAI, CARE and RAJAPerf in Spack CI
* Fixes and diffs with RADIUSS Spack Configs
* Caliper: fix
* Caliper : fix + RAJAPerf : style
* RAJAPerf: fixes
* Update maintainers
* raja-perf: fix license header
* raja-perf: Fix variant naming openmp_target -> omptarget
* raja-perf: style and blt dependency versions
* CARE: benchmark and examples off by default (like tests)
* CARE: fix missing variable
* Update var/spack/repos/builtin/packages/raja-perf/package.py
* CARE: fix branch name
* Revert changes in MFEM to pass CI
* Fix CXX17 condition in RAJA + add sycl option in RAJAPerf
---------
Co-authored-by: Rich Hornung <hornung1@llnl.gov>
* gptune: new test API
* gptune: cleanup; finish API changes; separate unrelated test parts
* gptune: standalone test cleanup with timeout constraints
* gptune: ensure stand-alone test bash failures terminate; enable in CI
* gptune: add directory to terminate_bash_failures
* gptune/stand-alone tests: use satisfies for checking variants
---------
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Spack can now bootstrap two new dependencies on Windows: GnuPG and file.
These dependencies are modeled as a separate package, and they install a cross-compiled binary.
Details on how the binaries are built are in https://github.com/spack/windows-bootstrap-resources
* WarpX: Python on pyAMReX
Long overdue update for WarpX: in 2024, we updated our Python
bindings to rely on the new pyAMReX package. This deprecates the old
`py-warpx` package and adds a new dependency and variant to WarpX.
Also deprecates old versions that we will not continue to support.
* Update Cloud/E4S Pipelines for WarpX
`py-warpx` is replaced by `warpx +python`
oneAPI does not support IPO/LTO: disable it for `py-amrex` even though
pybind11 strongly encourages it.
"spack buildcache push" for partially installed environments pushes all it
can by default, and only dumps errors towards the end.
If --fail-fast is provided, error out before pushing anything if any
of the packages is uninstalled.
OCI build caches using parallel push now use futures to ensure pushing
proceeds in a best-effort style.
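A sketch of that best-effort pattern (`push_one` stands in for the per-spec push; this is not Spack's actual code):
```python
import concurrent.futures

def push_all(specs, push_one):
    """Push every spec we can; collect failures and report them at the end."""
    errors = []
    with concurrent.futures.ThreadPoolExecutor() as executor:
        futures = {executor.submit(push_one, spec): spec for spec in specs}
        for future in concurrent.futures.as_completed(futures):
            try:
                future.result()
            except Exception as exc:
                errors.append((futures[future], exc))
    if errors:
        raise RuntimeError(f"failed to push {len(errors)} spec(s): {errors}")
```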
The old concretizer is still used to bootstrap clingo from source. If we switch to a DAG model
where compilers are treated as nodes, we need to either:
1. fix the old concretizer to support this (which is a lot of work and possibly research), or
2. bootstrap `clingo` without the old concretizer.
This PR takes the second approach and gets rid of the old concretizer code. To bootstrap
`clingo`, we store some concrete spec prototypes as JSON, select one according to the
coarse-grained system architecture, and tweak them according to the current host.
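In rough pseudo-Python, the selection step looks something like this (the file layout and field names are illustrative, not the actual format):
```python
import json
import platform
from pathlib import Path

# Illustrative layout: one JSON prototype per coarse-grained architecture,
# e.g. prototypes/linux-x86_64.json, prototypes/darwin-aarch64.json
PROTOTYPE_DIR = Path("prototypes")

def select_clingo_prototype() -> dict:
    key = f"{platform.system().lower()}-{platform.machine()}"
    with open(PROTOTYPE_DIR / f"{key}.json") as f:
        prototype = json.load(f)
    # tweak the coarse prototype to match the current host
    prototype["target"] = platform.machine()
    return prototype
```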
The old concretizer and related dead code are removed. In particular, this removes
`Spec.normalize()` and related methods, which were used in many unit-tests to set
up the test context. The tests have been updated not to use `normalize()`.
- [x] Bootstrap clingo concretization based on a JSON file
- [x] Bootstrap clingo *before* patchelf
- [x] Remove any use of the old concretizer, including:
* Remove only_clingo and only_original fixtures
* Remove _old_concretize and _new_concretize
* Remove _concretize_together_old
* Remove _concretize_together_new
* Remove any use of `SPACK_TEST_SOLVER`
* Simplify CI jobs
- [x] ensure bootstrapping `clingo` works on Darwin and Windows
- [x] Raise an intelligible error when a compiler is missing
- [x] Ensure bootstrapping works on FreeBSD
- [x] remove normalize and related methods
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
`setup-env.sh` is meant to be sourced, not executed directly.
By revoking execution permissions, users who accidentally execute
the script will receive an error instead of seeing no effect.
* Remove execution permission from `setup-env.sh` and friends
* Don't make output file executable in `spack commands --update-completion`
---------
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
* e4s ci: enable some disabled specs
* comment out cp2k +cuda due to unsupported cuda_arch
* comment out dealii+cuda due to concretize error
* work through concretize errors
* e4s: comment out failing builds
* e4s stack: disabled non-building specs
* comment out failing specs
* comment out failing specs
* cleanup comments
Add language dependencies `c`, `cxx`, and `fortran`.
These `depends_on` statements are auto-generated based on file extensions found
in source tarballs/zipfiles.
The `# generated` comment can be removed by package maintainers after
validating correctness.
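For example, a package whose tarball contains C and C++ sources would gain directives of roughly this form:
```python
from spack.package import *

class Example(Package):
    depends_on("c", type="build")  # generated
    depends_on("cxx", type="build")  # generated
```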
Originally if you had `x -> y -> z`, and an env with `x` in its speclist that is concretized but not installed, then `spack find -c y` would not show anything. This was intended: `spack find` has up until now only ever listed installed specs (and `-c` was for adding a preamble section about roots).
This changes `spack find` so:
* `-c` makes it search through all concretized specs in the env (in a sense it is anticipated that a concretized environment would serve as a "speculative" DB and users may want to query it like they query the DB outside of envs)
* Adds a `-i/--install-status` option, equivalent to `-I` from `spack spec`
* Shows install status for either `-c` or `-i`
* As a side effect of the prior point, `spack find -i` can now distinguish different installation states (upstream/external)
Examples:
```
$ spack find -r
==> In environment findtest
==> 1 root specs
- raja
==> 6 installed packages (not shown)
==> 12 concretized packages to be installed (not shown)
```
```
$ spack find
==> In environment findtest
==> 1 root specs
- raja
-- darwin-ventura-m1 / apple-clang@14.0.3 -----------------------
berkeley-db@18.1.40 bzip2@1.0.8 diffutils@3.10 gmake@4.4.1 gnuconfig@2022-09-17 libiconv@1.17
==> 6 installed packages
==> 12 concretized packages to be installed (show with `spack find -c`)
```
```
$ spack find -c
==> In environment findtest
==> 1 root specs
- raja
-- darwin-ventura-m1 / apple-clang@14.0.3 -----------------------
[+] berkeley-db@18.1.40 [+] bzip2@1.0.8 - cmake@3.29.4 [+] diffutils@3.10 [+] gmake@4.4.1 [+] libiconv@1.17 - nghttp2@1.62.0 - pkgconf@2.2.0 - readline@8.2
- blt@0.6.2 - camp@2024.02.1 - curl@8.7.1 - gdbm@1.23 [+] gnuconfig@2022-09-17 - ncurses@6.5 - perl@5.38.2 - raja@2024.02.2 - zlib-ng@2.1.6
==> 6 installed packages
==> 12 concretized packages to be installed
```
```
$ spack -E find
...
==> 82 installed packages
```
* update e4s stacks
* adios2 +rocm: disable kokkos due to spack issue #44832
* comment out mgard+cuda due to spack issue #44833
* comment out cabana, legion, arborx due to kokkos spack issue #44832
* comment out slepc, petsc due to petsc spack issue #44600
* comment out adios2+rocm due to kokkos rocm spack issue #44832
* comment out kokkos due to spack issue #44832
* e4s external rocm ci: bump rocm stack to v6.1.1
* comment out exago+rocm due to issue with raja@0.14.0 see spack issue #44593
* comment out adios2+rocm due to spack issue #44594
* comment out petsc+rocm due to spack issue #44600
* comment out sundials+rocm due to spack issue #44601
* comment out slepc+rocm due to petsc spack issue #44600
* comment out tau+rocm due to spack issue #44659
* comment out ecp-data-vis-sdk due to spack issue #44745
* packages: register rocm-core as external
* re-enable tau due to issue #44659 having been resolved
* use latest ci image: ecpe4s/ubuntu22.04-runner-amd64-gcc-11.4-rocm6.1.1:2024.06.17
* comment out paraview due to spack issue #44745
* comment out ecp-data-vis-sdk +vtkm due to issue https://gitlab.spack.io/spack/spack/-/jobs/11632511
* glew: rework dependency on gl
This simplifies the package and ensures a single gl implementation is
pulled in. Before we were adding direct dependencies, and those are
not unified through the virtual.
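Schematically, the rework replaces the direct provider dependencies with the virtual (a sketch, not the full package):
```python
from spack.package import *

class Glew(Package):
    # before: direct depends_on for specific gl implementations, which the
    # solver does not unify; after: depend on the virtual so that exactly
    # one provider is chosen for the whole DAG
    depends_on("gl")
```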
* mesa-demos: rework dependency on gl
This simplifies the package and ensures a single gl implementation is
pulled in. Before we were adding direct dependencies, and those are
not unified through the virtual.
* mesa-glu: rework dependency on gl
This simplifies the package and ensures a single gl implementation is
pulled in. Before we were adding direct dependencies, and those are
not unified through the virtual.
* paraview: fix dependency on glew
* mesa: group dependency on when("+glx")
* Add missing dependency on libxml2
* paraview: remove the "osmesa" and "egl" variant
Instead, enforce consistency using the "gl" virtual that allows
only one provider.
* visit: remove osmesa variant
* Disable paraview in the aws-isc stacks
* data-vis-sdk: rework constrains to enforce front-ends
* e4s-power: remove redundant paraview
* Pipelines: update osmesa variants
* trilinos-catalyst-ioss-adapter: make gl a run dependency
* Remove mesa18 and libosmesa
mesa18 was introduced in #19528 as a way to maintain the old
autotools build of mesa separate from the new meson build.
We could add a second build system to mesa, but since mesa18 has
been deprecated for a long time, we'll just remove it.
libosmesa was used to multiplex the gl provider between mesa18
and mesa, and is thus unnecessary. Remove it to reduce complexity
in the graphical stack.
* Remove references to mesa18 and libosmesa
* vtk: rework dependency on gl and osmesa
* memsurfer: rework dependency on vtk
* visit: minimal fix to avoid having both osmesa and glx
Add support for Gitlab CI on Windows
This PR adds the config changes required to configure and execute
Gitlab pipelines running Windows builds on Windows runners using
the existing Gitlab CI infrastructure (and newly added Windows
infrastructure).
* Adds support for generating child pipelines dispatched to Windows runners
* Refactors the relevant pre-scripts, scripts, and post scripts to be compatible with Windows
* Adds Windows config section describing Windows jobs
* Adds VTK as Windows build stack (to be expanded later)
* Modifies proj to build on Windows
* Refactors Windows rpath symlinking to avoid system libs and externals
---------
Co-authored-by: Ryan Krattiger <ryan.krattiger@kitware.com>
Co-authored-by: Mike VanDenburgh <michael.vandenburgh@kitware.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Co-authored-by: Scott Wittenburg <scott.wittenburg@kitware.com>
* Add openfoam to aws-pcluster-neoverse_v1 stack
* Add more apps to aws-pcluster-x86_64_v4 stack
* Remove WRF while hdf5 cannot build in buildcache at the moment
* Update comment
* Add more apps for aws-pcluster-neoverse_v1 stack
* Remove apps that currently do not build
* Disable those packages that won't build
* Modify syntax such that correct cflags are used
* Changing syntax again to what works with other packages
* Fix overriding packages.yaml entry for gettext
* Use new `prefer` and `require:when` clauses to clarify intent
* Use newer spack version to install intel compiler
This removes the need for patches and makes sure the `prefer` directives in
`packages.yaml` are understood.
* `prefer` not strong enough, need to set compilers
* Revert "Use newer spack version to install intel compiler"
This reverts commit ecb25a192c.
Cannot update the spack version used to install the intel compiler, as this changes the
compiler hash but not the version. This leads to incompatible compiler paths. If
we update this spack version in the future, make sure the compiler version also updates.
Tested-by: Stephen Sachs <stesachs@amazon.com>
* Remove `prefer` clause as it is not strong enough for our needs
This way we can safely go back to installing the intel compiler with an older
spack version.
* Prefer gcc or oneapi to build gettext
* Pin gettext version compatible with system glibc-headers
* relax gettext version requirement to enable later versions
* oneapi cannot build older gettext version
Add the ability to include any number of (potentially nested) concrete environments, e.g.:
```yaml
spack:
specs: []
concretizer:
unify: true
include_concrete:
- /path/to/environment1
- /path/to/environment2
```
or, from the CLI:
```console
$ spack env create myenv
$ spack -e myenv add python
$ spack -e myenv concretize
$ spack env create --include-concrete myenv included_env
```
The contents of included concrete environments' spack.lock files are
included in the environment's lock file at creation time. Any changes
to included concrete environments are only reflected after the environment
is re-concretized from the re-concretized included environments.
- [x] Concretize included envs
- [x] Save concrete specs in memory by hash
- [x] Add included envs to combined env's lock file
- [x] Add test
- [x] Update documentation
Co-authored-by: Kayla Butler <butler59@llnl.gov>
Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
Since reuse is the default now, `--reuse-deps` can be confusing, as it
technically does not imply roots are fresh.
So add `--fresh-roots`, which is also easier to discover when running
`spack concretize --fre<tab>`
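For example:
```console
# reuse installed dependencies, but choose the latest versions for the roots
$ spack concretize --fresh-roots
```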
Some packages can't be redistributed in source or binary form. We need an explicit way to say that in a package.
This adds a `redistribute()` directive so that package authors can write, e.g.:
```python
redistribute(source=False, binary=False)
```
You can also do this conditionally with `when=`, as with other directives, e.g.:
```python
# 12.0 and higher are proprietary
redistribute(source=False, binary=False, when="@12.0:")
# can't redistribute when we depend on some proprietary dependency
redistribute(source=False, binary=False, when="^proprietary-dependency")
```
This prevents Spack from adding their sources or binaries to public mirrors and build caches. You can still unconditionally add things *if* you run either:
* `spack mirror create --private`
* `spack buildcache push --private`
But the default behavior is not to include non-redistributable packages in mirrors or build caches. We have previously done this manually for our public build cache, but with this we can start maintaining redistributability directly in packages.
Caveats: currently the default for `redistribute()` is `True` for both `source` and `binary`, and you can only set either of them to `False` via this directive.
- [x] add `redistribute()` directive
- [x] add `redistribute_source` and `redistribute_binary` class methods to `PackageBase`
- [x] add `--private` option to `spack mirror`
- [x] add `--private` option to `spack buildcache push`
- [x] test exclusion of packages from source mirror (both as a root and as a dependency)
- [x] test exclusion of packages from binary mirror (both as a root and as a dependency)
This adds some improvements to `spack find` output when in environments based
around some thoughts about what users want to know when they're in an env.
If you're working in an environment, you mostly care about:
* What are the roots
* Which ones are installed / not installed
* What's been added that still needs to be concretized
So, this PR adds a couple tweaks to display that information more clearly:
- [x] We now display install status next to every root. You can easily see
which are installed and which aren't.
- [x] When you run `spack find -l` in an env, the roots now show their concrete
hash (if they've been concretized). They previously would show `-------`
(b/c the root spec itself is abstract), but showing the concretized root's
hash is a lot more useful.
- [x] Newly added/unconcretized specs still show `-------`, which now makes more
sense, b/c they are not concretized.
- [x] There is a new option, `-r` / `--only-roots` to *only* show env roots if
you don't want to look at all the installed specs.
- [x] Roots in the installed spec list are now highlighted as bold. This is
      actually an old feature from the first env implementation, but various
      refactors had disabled it inadvertently.
This commit adds a property `autopush` to mirrors. When true, every source build is immediately followed by a push to the build cache. This is useful in ephemeral environments such as CI / containers.
To enable autopush on existing build caches, use `spack mirror set --autopush <name>`. The same flag can be used in `spack mirror add`.
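For example (the mirror name and URL are illustrative):
```console
# enable autopush on an existing mirror
$ spack mirror set --autopush mymirror
# or create a new mirror with autopush enabled
$ spack mirror add --autopush mymirror file:///path/to/buildcache
```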
* ParaView: Update version 5.12.0
Add 5.12.0 release
Update default to 5.12.0
* Add patch for building ParaView 5.12 with kits
* Drop VTKm from neoverse
Users requested an option to filter between local/upstream results in `spack find` output.
```
# default behavior, same as without --install-tree argument
$ spack find --install-tree all
# show only local results
$ spack find --install-tree local
# show results from all upstreams
$ spack find --install-tree upstream
# show results from a particular upstream or the local install_tree
$ spack find --install-tree /path/to/install/tree/root
```
---------
Co-authored-by: becker33 <becker33@users.noreply.github.com>
* buildcache sync: manifest-glob with arbitrary destination
The current implementation of --manifest-glob is a bit restrictive,
requiring the destination to be known by the generation stage of CI.
This allows specifying an arbitrary destination mirror URL.
* Add unit test for buildcache sync with manifest
* Fix test and arguments for manifest-glob with override destination
* Add testing path for unused mirror argument
* Changes to re-enable aws-pcluster pipelines
- Use compilers from pre-installed spack store such that compiler path relocation works when downloading from buildcache.
- Install gcc from hash so there is no risk of building gcc from source in the pipeline.
- `packages.yaml` files are now part of the pipelines.
- No more external `postinstall.sh`. The necessary steps are in `setup-pcluster.sh` and will be version controlled within this repo.
- Re-enable pipelines.
* Add and
* Debugging output & mv skylake -> skylake_avx512
* Explicitly check for packages
* Handle case with no intel compiler
* compatibility when using setup-pcluster.sh on a pre-installed cluster.
* Disable palace as parser cannot read require clause at the moment
* ifort cannot build superlu in buildcache
`ifort` is unable to handle file names as long as those used when cmake compiles
test programs inside the build cache.
* Fix spack commit for intel compiler installation
* Need to fetch other commits before using them
* fix style
* Add TODO
* Update packages.yaml to not use 'compiler:', 'target:' or 'provider:'
Synchronize with changes in https://github.com/spack/spack-configs/blob/main/AWS/parallelcluster/
* Use Intel compiler from later version (orig commit no longer found)
* Use envsubst to deal with quoted newlines
This is cleaner than the `eval` command used previously.
* Need to fetch tags for checkout on version number
* Intel compiler needs to be from version that has compatible DB
* Install intel compiler with commit that has DB ver 7
* Decouple the intel compiler installation from current commit
- Use a completely different spack installation such that this current pipeline
commit remains untouched.
- Make the script succeed even if the compiler installation fails (e.g. because
the database version has been updated)
- Make the install targets fall back to gcc in case the compiler did not install
correctly.
* Use generic target for x86_64_vX
There is no way to provision a skylake/icelake/zen runner. They are all in the
same pools under x86_64_v3 and x86_64_v4.
* Find the intel compiler in the current spack installation
* Remove SPACK_TARGET_ARCH
* Fix virtual package index & use package.yaml for intel compiler
* Use only one stack & pipeline per generic architecture
* Fix yaml format
* Cleanup typos
* Include fix for ifx.cfg to get the right gcc toolchain when linking
* [removeme] Adding timeout to debug hang in make (palace)
* Revert "[removeme] Adding timeout to debug hang in make (palace)"
This reverts commit fee8a01580489a4ea364368459e9353b46d0d7e2.
* palace x86_64_v4 gets stuck when compiling; try newer oneapi
* Update comment
* Use the latest container image
* Update gcc_hashes to match new container
* Use only one tag providing tags per extends call
Also removed an unnecessary tag.
* Move generic setup script out of individual stack
* Cleanup from last commit
* Enable checking signature for packages available on the container
* Remove commented packages / Add comment for palace
* Enable openmpi@5 which needs pmix>3
* don't look for intel compiler on aarch64
Running a `spack-python` script like this:
```python
import spack
import multiprocessing

def echo(args):
    print(args)

if __name__ == "__main__":
    pool = multiprocessing.Pool(2)
    pool.map(echo, range(10))
```
will fail in `develop` with an error like this:
```console
_pickle.PicklingError: Can't pickle <function echo at 0x104865820>: attribute lookup echo on __main__ failed
```
Python expects to be able to look up the method `echo` in `sys.modules["__main__"]` in
subprocesses spawned by `multiprocessing`, but because we use `InteractiveConsole` to
run `spack python`, the executed file isn't considered to be the `__main__` module, and
lookups in subprocesses fail. We tried to fake this by setting `__name__` to `__main__`
in the `spack python` command, but that doesn't fix the fact that no `__main__` module
exists.
Another annoyance with `InteractiveConsole` is that `__file__` is not defined in the
main script scope, so you can't use it in your scripts.
We can use the [runpy.run_path()](https://docs.python.org/3/library/runpy.html#runpy.run_path) function,
which has been around since Python 3.2, to fix this.
- [x] Use `runpy` module to launch non-interactive `spack python` invocations
- [x] Only use `InteractiveConsole` for interactive `spack python`
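The core of the non-interactive path is a call like this (a sketch of the `runpy`-based launch):
```python
import runpy
import sys

# Run the script as a real __main__ module so that multiprocessing's pickling
# lookups on __main__ succeed and __file__ is defined in the script's scope.
script = sys.argv[1]
runpy.run_path(script, run_name="__main__")
```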
Currently (outside of this PR) when you `spack develop` a path, this path is treated as the staging
directory (this means that for example all build artifacts are placed in the develop path).
This PR creates a separate staging directory for all `spack develop`ed builds. It looks like
```
# the stage root
/the-stage-root-for-all-spack-builds/
    spack-stage-<hash>
        # Spack packages inheriting CMakePackage put their build artifacts here
        spack-build-<hash>/
```
Unlike non-develop builds, there is no `spack-src` directory; `source_path` is the provided `dev_path`.
Instead, separately, in the `dev_path`, we have:
```
/dev/path/for/foo/
    build-{arch}-<hash> -> /the-stage-root-for-all-spack-builds/spack-stage-<hash>/
```
The main benefit of this is that build artifacts for out-of-source builds that are relative to
`Stage.path` are easily identified (and you can delete them with `spack clean`).
Other behavior added here:
- [x] A symlink is made from the `dev_path` to the stage directory. This symlink name incorporates
spec details, so that multiple Spack environments that develop the same path will not conflict
with one another
- [x] `spack cd` and `spack location` have added a `-c` shorthand for `--source-dir`
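For example, with the new shorthand (package name illustrative):
```console
# these are now equivalent for a developed package
$ spack cd --source-dir mypackage
$ spack cd -c mypackage
```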
Spack builds can still change the develop path (in particular to keep track of applied patches),
and for in-source builds, this doesn't change much (although logs would not be written into
the develop path). Packages inheriting from `CMakePackage` should get this benefit
automatically though.
* e4s: new packages: glvis, laghos
* gl: require: osmesa
* be explicit: glvis ^llvm so that llvm-amdgpu not chosen
* glvis fails on oneapi stack due to issue 42839
* gitlab: remove requests for unreferenced packages
The packages removed in this commit are not built by any of
our current GitLab CI stacks.
* gitlab: update memory requests for "huge" packages
* gitlab: reduce memory requests for overprovisioned packages
* gitlab: more memory for py-torch (again)
* gitlab: update memory but keep CPU the same
It is useful to be able to enable/disable stacks based on runner
availability, stack stability, testing requirements, etc.
The disabled stack list takes precedence over the enable stack list. The
assumption is that stacks that are disabled are so due to some
functionality missing or broken for that stack.
The enable stack list implicitly disables all stacks not listed in the
enable list.
The main script body is overwritten for the power stacks. Putting the timing
aggregation in the after-script allows it to be called on all of the
current pipelines.
This "breaks" the deprecated schema by allowing unknown attributes
in the attributes section of the job types. The breaking change here is
that deprecated stacks will no longer ignore unknown attributes, but
rather assume the new CI schema behavior of injecting them into the
generated CI configuration. This change is required to secure
authentication in Spack CI.