Compare commits

54 commits, `develop` ... `releases/v`:

45accfac15, 320a974016, b653ce09c8, 23bf0a316c, 030bce9978, ba9c8d4407,
16052f9d1d, b32105f4da, 9c1c5c2936, c8f7c78e73, da50816127, 19186a5e44,
de4cf49e95, f79928d7d1, 187f8e9f4a, 2536dd57a7, 06a2c36a5a, 5e0d210734,
e3d4531663, 9e8e72592d, 2d9fa60f53, f3149a6c35, 403ba23632, d62c10ff76,
3aa24e5b13, c7200b4327, 5b02b7003a, f83972ddc4, fffca98a02, 390112fc76,
2f3f4ad4da, 0f9e07321f, 7593b18626, e964a396c9, 8d45404b5b, 7055061635,
5e9799db4a, 4258fbbed3, db8fcbbee4, d33c990278, 59dd405626, dbbf7dc969,
8a71aa874f, 0766f63182, 380fedb7bc, 33cc47f6d3, 5935f9c8a0, a86911246a,
cd94827c5f, bb8b4f9979, fc7a16e77e, e633e57297, 7b74fab12f, 005c7cd353
`.github/workflows/unit_tests.yaml` (2 changes, vendored)

```diff
@@ -11,7 +11,7 @@ concurrency:
 jobs:
   # Run unit tests with different configurations on linux
   ubuntu:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-20.04
    strategy:
      matrix:
        python-version: ['2.7', '3.6', '3.7', '3.8', '3.9', '3.10', '3.11']
```
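The pin from `ubuntu-latest` to `ubuntu-20.04` follows the usual pattern for matrices that still test interpreters the newest runner images no longer ship. A minimal standalone sketch of the same pattern (workflow and job names hypothetical):

```yaml
# Hypothetical minimal workflow: pin the runner image explicitly so the
# older interpreters in the matrix (2.7, 3.6) remain available.
name: unit-tests-sketch
on: [push]
jobs:
  ubuntu:
    runs-on: ubuntu-20.04   # not ubuntu-latest
    strategy:
      matrix:
        python-version: ['2.7', '3.6', '3.11']
    steps:
      - uses: actions/checkout@v3
```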
`CHANGELOG.md` (314 additions)

The hunk `@@ -1,3 +1,317 @@` adds the following release notes above the existing entries:

# v0.19.2 (2023-04-04)

### Spack Bugfixes

* Ignore global variant requirement for packages that do not define it (#35037)
* Compiler wrapper: improved parsing of linker arguments (#35929, #35912)
* Do not detect apple-clang as cce on macOS (#35974)
* Views: fix support for optional Python extensions (#35489)
* Views: fix issue where Python executable gets symlinked instead of copied (#34661)
* Fix a bug where tests were not added when concretizing together (#35290)
* Compiler flags: fix clang/apple-clang c/c++ standard flags (#35062)
* Increase db timeout from 3s to 60s to improve stability of parallel installs (#35517)
* Buildcache: improve error handling in downloads (#35568)
* Module files for packages installed from buildcache have long placeholder paths abbreviated in configure args section (#36611)
* Reduce verbosity of error messages regarding non-existing module files (#35502)
* Ensure file with build environment variables is truncated when writing to it (#35673)
* `spack config update` now works on active environments (#36542)
* Fix an issue where spack.yaml got reformatted incorrectly (#36698)
* Packages UPC++ and GASNet-EX were updated (#36629)


# v0.19.1 (2023-02-07)

### Spack Bugfixes

* `buildcache create`: make "file exists" less verbose (#35019)
* `spack mirror create`: don't change paths to urls (#34992)
* Improve error message for requirements (#33988)
* uninstall: fix accidental cubic complexity (#34005)
* scons: fix signature for `install_args` (#34481)
* Fix `combine_phase_logs` text encoding issues (#34657)
* Use a module-like object to propagate changes in the MRO, when setting build env (#34059)
* PackageBase should not define builder legacy attributes (#33942)
* Forward lookup of the "run_tests" attribute (#34531)
* Bugfix for timers (#33917, #33900)
* Fix path handling in prefix inspections (#35318)
* Fix libtool filter for Fujitsu compilers (#34916)
* Bug fix for duplicate rpath errors on macOS when creating build caches (#34375)
* FileCache: delete the new cache file on exception (#34623)
* Propagate exceptions from Spack python console (#34547)
* Tests: Fix a bug/typo in a `config_values.py` fixture (#33886)
* Various CI fixes (#33953, #34560, #34560, #34828)
* Docs: remove monitors and analyzers, typos (#34358, #33926)
* bump release version for tutorial command (#33859)


# v0.19.0 (2022-11-11)

`v0.19.0` is a major feature release.

## Major features in this release

1. **Package requirements**

   Spack's traditional [package preferences](https://spack.readthedocs.io/en/latest/build_settings.html#package-preferences)
   are soft, but we've added hard requirements to `packages.yaml` and `spack.yaml`
   (#32528, #32369). Package requirements use the same syntax as specs:

   ```yaml
   packages:
     libfabric:
       require: "@1.13.2"
     mpich:
       require:
       - one_of: ["+cuda", "+rocm"]
   ```

   More details in [the docs](https://spack.readthedocs.io/en/latest/build_settings.html#package-requirements).

2. **Environment UI Improvements**

   * Fewer surprising modifications to `spack.yaml` (#33711):

     * `spack install` in an environment will no longer add to the `specs:` list; you'll
       need to either use `spack add <spec>` or `spack install --add <spec>`.

     * Similarly, `spack uninstall` will not remove from your environment's `specs:`
       list; you'll need to use `spack remove` or `spack uninstall --remove`.

     This will make it easier to manage an environment, as there is a clear separation
     between the stack to be installed (`spack.yaml`/`spack.lock`) and which parts of
     it should be installed (`spack install` / `spack uninstall`).

   * `concretizer:unify:true` is now the default mode for new environments (#31787)

     We see more users creating `unify:true` environments now. Users who need
     `unify:false` can add it to their environment to get the old behavior. This will
     concretize every spec in the environment independently.

   * Include environment configuration from URLs (#29026, [docs](https://spack.readthedocs.io/en/latest/environments.html#included-configurations))

     You can now include configuration in your environment directly from a URL:

     ```yaml
     spack:
       include:
       - https://github.com/path/to/raw/config/compilers.yaml
     ```

3. **Multiple Build Systems**

   An increasing number of packages in the ecosystem need the ability to support
   multiple build systems (#30738, [docs](https://spack.readthedocs.io/en/latest/packaging_guide.html#multiple-build-systems)),
   either across versions, across platforms, or within the same version of the software.
   This has been hard to support through multiple inheritance, as methods from different
   build system superclasses would conflict. `package.py` files can now define separate
   builder classes with installation logic for different build systems, e.g.:

   ```python
   class ArpackNg(CMakePackage, AutotoolsPackage):

       build_system(
           conditional("cmake", when="@0.64:"),
           conditional("autotools", when="@:0.63"),
           default="cmake",
       )

   class CMakeBuilder(spack.build_systems.cmake.CMakeBuilder):
       def cmake_args(self):
           pass

   class AutotoolsBuilder(spack.build_systems.autotools.AutotoolsBuilder):
       def configure_args(self):
           pass
   ```

4. **Compiler and variant propagation**

   Currently, compiler flags and variants are inconsistent: compiler flags set for a
   package are inherited by its dependencies, while variants are not. We should have
   these be consistent by allowing for inheritance to be enabled or disabled for both
   variants and compiler flags.

   Example syntax:
   - `package ++variant`: enabled variant that will be propagated to dependencies
   - `package +variant`: enabled variant that will NOT be propagated to dependencies
   - `package ~~variant`: disabled variant that will be propagated to dependencies
   - `package ~variant`: disabled variant that will NOT be propagated to dependencies
   - `package cflags==-g`: `cflags` will be propagated to dependencies
   - `package cflags=-g`: `cflags` will NOT be propagated to dependencies

   Syntax for non-boolean variants is similar to compiler flags. More in the docs for
   [variants](https://spack.readthedocs.io/en/latest/basic_usage.html#variants) and
   [compiler flags](https://spack.readthedocs.io/en/latest/basic_usage.html#compiler-flags).

5. **Enhancements to git version specifiers**

   * `v0.18.0` added the ability to use git commits as versions. You can now use the
     `git.` prefix to specify git tags or branches as versions. All of these are valid git
     versions in `v0.19` (#31200):

     ```console
     foo@abcdef1234abcdef1234abcdef1234abcdef1234      # raw commit
     foo@git.abcdef1234abcdef1234abcdef1234abcdef1234  # commit with git prefix
     foo@git.develop                                   # the develop branch
     foo@git.0.19                                      # use the 0.19 tag
     ```

   * `v0.19` also gives you more control over how Spack interprets git versions, in case
     Spack cannot detect the version from the git repository. You can suffix a git
     version with `=<version>` to force Spack to concretize it as a particular version
     (#30998, #31914, #32257):

     ```console
     # use mybranch, but treat it as version 3.2 for version comparison
     foo@git.mybranch=3.2

     # use the given commit, but treat it as develop for version comparison
     foo@git.abcdef1234abcdef1234abcdef1234abcdef1234=develop
     ```

   More in [the docs](https://spack.readthedocs.io/en/latest/basic_usage.html#version-specifier).

6. **Changes to Cray EX Support**

   Cray machines have historically had their own "platform" within Spack, because we
   needed to go through the module system to leverage compilers and MPI installations on
   these machines. The Cray EX programming environment now provides standalone `craycc`
   executables and proper `mpicc` wrappers, so Spack can treat EX machines like Linux
   with extra packages (#29392).

   We expect this to greatly reduce bugs, as external packages and compilers can now be
   used by prefix instead of through modules. We will also no longer be subject to
   reproducibility issues when modules change from Cray PE release to release and from
   site to site. This also simplifies dealing with the underlying Linux OS on Cray
   systems, as Spack will properly model the machine's OS as either SuSE or RHEL.

7. **Improvements to tests and testing in CI**

   * `spack ci generate --tests` will generate a `.gitlab-ci.yml` file that not only does
     builds but also runs tests for built packages (#27877). Public GitHub pipelines now
     also run tests in CI.

   * `spack test run --explicit` will only run tests for packages that are explicitly
     installed, instead of all packages.

8. **Experimental binding link model**

   You can add a new option to `config.yaml` to make Spack embed absolute paths to
   needed shared libraries in ELF executables and shared libraries on Linux (#31948,
   [docs](https://spack.readthedocs.io/en/latest/config_yaml.html#shared-linking-bind)):

   ```yaml
   config:
     shared_linking:
       type: rpath
       bind: true
   ```

   This can improve launch time at scale for parallel applications, and it can make
   installations less susceptible to environment variables like `LD_LIBRARY_PATH`,
   especially when dealing with external libraries that use `RUNPATH`. You can think of
   this as a faster, even higher-precedence version of `RPATH`.

## Other new features of note

* `spack spec` prints dependencies more legibly. Dependencies in the output now appear
  at the *earliest* level of indentation possible (#33406)
* You can override `package.py` attributes like `url`, directly in `packages.yaml`
  (#33275, [docs](https://spack.readthedocs.io/en/latest/build_settings.html#assigning-package-attributes))
* There are a number of new architecture-related format strings you can use in Spack
  configuration files to specify paths (#29810, [docs](https://spack.readthedocs.io/en/latest/configuration.html#config-file-variables))
* Spack now supports bootstrapping Clingo on Windows (#33400)
* There is now support for an `RPATH`-like library model on Windows (#31930)

## Performance Improvements

* Major performance improvements for installation from binary caches (#27610, #33628,
  #33636, #33608, #33590, #33496)
* Test suite can now be parallelized using `xdist` (used in GitHub Actions) (#32361)
* Reduce lock contention for parallel builds in environments (#31643)

## New binary caches and stacks

* We now build nearly all of E4S with `oneapi` in our buildcache (#31781, #31804,
  #31804, #31803, #31840, #31991, #32117, #32107, #32239)
* Added 3 new machine learning-centric stacks to binary cache: `x86_64_v3`, CUDA, ROCm
  (#31592, #33463)

## Removals and Deprecations

* Support for Python 3.5 is dropped (#31908). Only Python 2.7 and 3.6+ are officially
  supported.

* This is the last Spack release that will support Python 2 (#32615). Spack `v0.19`
  will emit a deprecation warning if you run it with Python 2, and Python 2 support will
  soon be removed from the `develop` branch.

* `LD_LIBRARY_PATH` is no longer set by default by `spack load` or module loads.

  Setting `LD_LIBRARY_PATH` in Spack environments/modules can cause binaries from
  outside of Spack to crash, and Spack's own builds use `RPATH` and do not need
  `LD_LIBRARY_PATH` set in order to run. If you still want the old behavior, you
  can run these commands to configure Spack to set `LD_LIBRARY_PATH`:

  ```console
  spack config add modules:prefix_inspections:lib64:[LD_LIBRARY_PATH]
  spack config add modules:prefix_inspections:lib:[LD_LIBRARY_PATH]
  ```

* The `spack:concretization:[together|separately]` option has been removed after being
  deprecated in `v0.18`. Use `concretizer:unify:[true|false]`.
* `config:module_roots` is no longer supported after being deprecated in `v0.18`. Use
  configuration in module sets instead (#28659, [docs](https://spack.readthedocs.io/en/latest/module_file_support.html)).
* `spack activate` and `spack deactivate` are no longer supported, having been
  deprecated in `v0.18`. Use an environment with a view instead of
  activating/deactivating ([docs](https://spack.readthedocs.io/en/latest/environments.html#configuration-in-spack-yaml)).
* The old YAML format for buildcaches is now deprecated (#33707). If you are using an
  old buildcache with YAML metadata you will need to regenerate it with JSON metadata.
* `spack bootstrap trust` and `spack bootstrap untrust` are deprecated in favor of
  `spack bootstrap enable` and `spack bootstrap disable` and will be removed in `v0.20`
  (#33600).
* The `graviton2` architecture has been renamed to `neoverse_n1`, and `graviton3`
  is now `neoverse_v1`. Buildcaches using the old architecture names will need to be rebuilt.
* The terms `blacklist` and `whitelist` have been replaced with `include` and `exclude`
  in all configuration files (#31569). You can use `spack config update` to
  automatically fix your configuration files.

## Notable Bugfixes

* Permission setting on installation now handles effective uid properly (#19980)
* `buildable:true` for an MPI implementation now overrides `buildable:false` for `mpi` (#18269)
* Improved error messages when attempting to use an unconfigured compiler (#32084)
* Do not punish explicitly requested compiler mismatches in the solver (#30074)
* `spack stage`: add missing --fresh and --reuse (#31626)
* Fixes for adding build system executables like `cmake` to package scope (#31739)
* Bugfix for binary relocation with aliased strings produced by newer `binutils` (#32253)

## Spack community stats

* 6,751 total packages, 335 new since `v0.18.0`
  * 141 new Python packages
  * 89 new R packages
* 303 people contributed to this release
  * 287 committers to packages
  * 57 committers to core

The additions sit directly above the unchanged existing entries, which begin with:

# v0.18.1 (2022-07-19)

### Spack Bugfixes
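One of the bigger behavior changes in the notes above is `concretizer:unify:true` becoming the default for new environments. A minimal `spack.yaml` sketch for opting back into the old per-spec behavior (spec chosen arbitrarily):

```yaml
# Sketch: concretize each root spec independently, as before v0.19.
spack:
  concretizer:
    unify: false
  specs:
  - zlib
```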
```diff
@@ -10,8 +10,8 @@ For more on Spack's release structure, see
 | Version | Supported          |
 | ------- | ------------------ |
 | develop | :white_check_mark: |
-| 0.17.x  | :white_check_mark: |
-| 0.16.x  | :white_check_mark: |
+| 0.19.x  | :white_check_mark: |
+| 0.18.x  | :white_check_mark: |
 
 ## Reporting a Vulnerability
```
```diff
@@ -176,7 +176,7 @@ config:
   # when Spack needs to manage its own package metadata and all operations are
   # expected to complete within the default time limit. The timeout should
   # therefore generally be left untouched.
-  db_lock_timeout: 3
+  db_lock_timeout: 60
 
 
   # How long to wait when attempting to modify a package (e.g. to install it).
```
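This is the 3s-to-60s database timeout bump from #35517 above. If parallel installs on a slow shared filesystem still hit lock timeouts, the same key can be raised further in a user configuration scope; a sketch, with the value picked arbitrarily:

```yaml
# ~/.spack/config.yaml (user scope): hypothetical override of the default
config:
  db_lock_timeout: 120
```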
A documentation page on the `spack analyze` command is removed outright (`@@ -1,162 +0,0 @@`). The deleted content:

```rst
.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)

.. _analyze:

=======
Analyze
=======

The analyze command is a front-end to various tools that let us analyze
package installations. Each analyzer is a module for a different kind
of analysis that can be done on a package installation, including (but not
limited to) binary, log, or text analysis. Thus, the analyze command group
allows you to take an existing package install, choose an analyzer,
and extract some output for the package using it.

-----------------
Analyzer Metadata
-----------------

For all analyzers, we write to an ``analyzers`` folder in ``~/.spack``, or the
value that you specify in your spack config at ``config:analyzers_dir``.
For example, here we see the results of running an analysis on zlib:

.. code-block:: console

   $ tree ~/.spack/analyzers/
   └── linux-ubuntu20.04-skylake
       └── gcc-9.3.0
           └── zlib-1.2.11-sl7m27mzkbejtkrajigj3a3m37ygv4u2
               ├── environment_variables
               │   └── spack-analyzer-environment-variables.json
               ├── install_files
               │   └── spack-analyzer-install-files.json
               └── libabigail
                   └── spack-analyzer-libabigail-libz.so.1.2.11.xml

This means that you can always find analyzer output in this folder, and it
is organized with the same logic as the package install it was run for.
If you want to customize this top level folder, simply provide the ``--path``
argument to ``spack analyze run``. The nested organization will be maintained
within your custom root.

-----------------
Listing Analyzers
-----------------

If you aren't familiar with Spack's analyzers, you can quickly list those that
are available:

.. code-block:: console

   $ spack analyze list-analyzers
   install_files         : install file listing read from install_manifest.json
   environment_variables : environment variables parsed from spack-build-env.txt
   config_args           : config args loaded from spack-configure-args.txt
   libabigail            : Application Binary Interface (ABI) features for objects

In the above, the first three are fairly simple, parsing metadata files from
a package install directory and saving the result.

-------------------
Analyzing a Package
-------------------

The analyze command, akin to install, will accept a package spec to perform
an analysis for. The package must be installed. Let's walk through an example
with zlib. We first ask to analyze it. However, since we have more than one
install, we are asked to disambiguate:

.. code-block:: console

   $ spack analyze run zlib
   ==> Error: zlib matches multiple packages.
     Matching packages:
       fz2bs56 zlib@1.2.11%gcc@7.5.0 arch=linux-ubuntu18.04-skylake
       sl7m27m zlib@1.2.11%gcc@9.3.0 arch=linux-ubuntu20.04-skylake
     Use a more specific spec.

We can then specify the spec version that we want to analyze:

.. code-block:: console

   $ spack analyze run zlib/fz2bs56

If you don't provide any specific analyzer names, by default all analyzers
(shown in the ``list-analyzers`` subcommand list) will be run. If an analyzer does not
have any result, it will be skipped. For example, here is a result running for
zlib:

.. code-block:: console

   $ ls ~/.spack/analyzers/linux-ubuntu20.04-skylake/gcc-9.3.0/zlib-1.2.11-sl7m27mzkbejtkrajigj3a3m37ygv4u2/
   spack-analyzer-environment-variables.json
   spack-analyzer-install-files.json
   spack-analyzer-libabigail-libz.so.1.2.11.xml

If you want to run a specific analyzer, ask for it with ``--analyzer``. Here we run
spack analyze on libabigail (already installed) *using* libabigail:

.. code-block:: console

   $ spack analyze run --analyzer abigail libabigail

.. _analyze_monitoring:

----------------------
Monitoring An Analysis
----------------------

For any kind of analysis, you can
use a `spack monitor <https://github.com/spack/spack-monitor>`_ "Spackmon"
as a server to upload the same run metadata to. You can
follow the instructions in the `spack monitor documentation <https://spack-monitor.readthedocs.org>`_
to first create a server along with a username and token for yourself.
You can then use this guide to interact with the server.

You should first export your spack monitor token and username to the environment:

.. code-block:: console

   $ export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438
   $ export SPACKMON_USER=spacky

By default, the host for your server is expected to be at ``http://127.0.0.1``
with a prefix of ``ms1``, and if this is the case, you can simply add the
``--monitor`` flag to the install command:

.. code-block:: console

   $ spack analyze run --monitor wget

If you need to customize the host or the prefix, you can do that as well:

.. code-block:: console

   $ spack analyze run --monitor --monitor-prefix monitor --monitor-host https://monitor-service.io wget

If your server doesn't have authentication, you can skip it:

.. code-block:: console

   $ spack analyze run --monitor --monitor-disable-auth wget

Regardless of your choice, when you run analyze on an installed package (whether
it was installed with ``--monitor`` or not), you'll see the results generating as they did
before, and a message that the monitor server was pinged:

.. code-block:: console

   $ spack analyze --monitor wget
   ...
   ==> Sending result for wget bin/wget to monitor.
```
```diff
@@ -1244,8 +1244,8 @@ For example, for the ``stackstart`` variant:
 
 .. code-block:: sh
 
-   mpileaks stackstart=4   # variant will be propagated to dependencies
-   mpileaks stackstart==4  # only mpileaks will have this variant value
+   mpileaks stackstart==4  # variant will be propagated to dependencies
+   mpileaks stackstart=4   # only mpileaks will have this variant value
 
 ^^^^^^^^^^^^^^
 Compiler Flags
```
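After this fix the comments match the operators: `==` propagates a variant value to dependencies, `=` does not. A quick way to check the behavior is to concretize both forms and compare the variants on the dependencies; a console sketch:

```console
$ spack spec mpileaks stackstart==4   # dependencies also carry stackstart=4
$ spack spec mpileaks stackstart=4    # only mpileaks carries stackstart=4
```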
```diff
@@ -1672,9 +1672,13 @@ own install prefix. However, certain packages are typically installed
 `Python <https://www.python.org>`_ packages are typically installed in the
 ``$prefix/lib/python-2.7/site-packages`` directory.
 
-Spack has support for this type of installation as well. In Spack,
-a package that can live inside the prefix of another package is called
-an *extension*. Suppose you have Python installed like so:
+In Spack, installation prefixes are immutable, so this type of installation
+is not directly supported. However, it is possible to create views that
+allow you to merge install prefixes of multiple packages into a single new prefix.
+Views are a convenient way to get a more traditional filesystem structure.
+Using *extensions*, you can ensure that Python packages always share the
+same prefix in the view as Python itself. Suppose you have
+Python installed like so:
 
 .. code-block:: console
 
@@ -1712,8 +1716,6 @@ You can find extensions for your Python installation like this:
    py-ipython@2.3.1     py-pygments@2.0.1    py-setuptools@11.3.1
    py-matplotlib@1.4.2  py-pyparsing@2.0.3   py-six@1.9.0
 
-==> None activated.
-
 The extensions are a subset of what's returned by ``spack list``, and
 they are packages like any other. They are installed into their own
 prefixes, and you can see this with ``spack find --paths``:
@@ -1741,32 +1743,72 @@ directly when you run ``python``:
    ImportError: No module named numpy
    >>>
 
-^^^^^^^^^^^^^^^^
-Using Extensions
-^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Using Extensions in Environments
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-There are multiple ways to get ``numpy`` working in Python. The first is
-to use :ref:`shell-support`. You can simply ``load`` the extension,
-and it will be added to the ``PYTHONPATH`` in your current shell, and
-Python itself will be available in the ``PATH``:
+The recommended way of working with extensions such as ``py-numpy``
+above is through :ref:`Environments <environments>`. For example,
+the following creates an environment in the current working directory
+with a filesystem view in the ``./view`` directory:
+
+.. code-block:: console
+
+   $ spack env create --with-view view --dir .
+   $ spack -e . add py-numpy
+   $ spack -e . concretize
+   $ spack -e . install
+
+We recommend environments for two reasons. Firstly, environments
+can be activated (requires :ref:`shell-support`):
+
+.. code-block:: console
+
+   $ spack env activate .
+
+which sets all the right environment variables such as ``PATH`` and
+``PYTHONPATH``. This ensures that
+
+.. code-block:: console
+
+   $ python
+   >>> import numpy
+
+works. Secondly, even without shell support, the view ensures
+that Python can locate its extensions:
+
+.. code-block:: console
+
+   $ ./view/bin/python
+   >>> import numpy
+
+See :ref:`environments` for a more in-depth description of Spack
+environments and customizations to views.
+
+^^^^^^^^^^^^^^^^^^^^
+Using ``spack load``
+^^^^^^^^^^^^^^^^^^^^
+
+A more traditional way of using Spack and extensions is ``spack load``
+(requires :ref:`shell-support`). This will add the extension to ``PYTHONPATH``
+in your current shell, and Python itself will be available in the ``PATH``:
 
 .. code-block:: console
 
    $ spack load py-numpy
+   $ python
+   >>> import numpy
 
-Now ``import numpy`` will succeed for as long as you keep your current
-session open.
 The loaded packages can be checked using ``spack find --loaded``
 
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 Loading Extensions via Modules
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Instead of using Spack's environment modification capabilities through
-the ``spack load`` command, you can load numpy through your
-environment modules (using ``environment-modules`` or ``lmod``). This
-will also add the extension to the ``PYTHONPATH`` in your current
-shell.
+Apart from ``spack env activate`` and ``spack load``, you can load numpy
+through your environment modules (using ``environment-modules`` or
+``lmod``). This will also add the extension to the ``PYTHONPATH`` in
+your current shell.
 
 .. code-block:: console
 
@@ -1776,15 +1818,6 @@ If you do not know the name of the specific numpy module you wish to
 load, you can use the ``spack module tcl|lmod loads`` command to get
 the name of the module from the Spack spec.
 
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Extensions in an Environment
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Another way to use extensions is to create a view, which merges the
-python installation along with the extensions into a single prefix.
-See :ref:`environments` for a more in-depth description
-of environment views.
-
 -----------------------
 Filesystem requirements
 -----------------------
```
```diff
@@ -724,10 +724,9 @@ extends vs. depends_on
 
 This is very similar to the naming dilemma above, with a slight twist.
 As mentioned in the :ref:`Packaging Guide <packaging_extensions>`,
-``extends`` and ``depends_on`` are very similar, but ``extends`` adds
-the ability to *activate* the package. Activation involves symlinking
-everything in the installation prefix of the package to the installation
-prefix of Python. This allows the user to import a Python module without
+``extends`` and ``depends_on`` are very similar, but ``extends`` ensures
+that the extension and extendee share the same prefix in views.
+This allows the user to import a Python module without
 having to add that module to ``PYTHONPATH``.
 
 When deciding between ``extends`` and ``depends_on``, the best rule of
@@ -735,7 +734,7 @@ thumb is to check the installation prefix. If Python libraries are
 installed to ``<prefix>/lib/pythonX.Y/site-packages``, then you
 should use ``extends``. If Python libraries are installed elsewhere
 or the only files that get installed reside in ``<prefix>/bin``, then
-don't use ``extends``, as symlinking the package wouldn't be useful.
+don't use ``extends``.
 
 ^^^^^^^^^^^^^^^^^^^^^
 Alternatives to Spack
```
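A compact illustration of the rule of thumb above, with hypothetical package names: the first installs into `site-packages` and therefore extends Python, the second only ships an executable and merely depends on it.

```python
# Hypothetical contrast between extends and depends_on.
from spack.package import *

class PyFoo(Package):
    """Installs libraries into <prefix>/lib/pythonX.Y/site-packages."""
    extends("python")

class Bar(Package):
    """Only installs <prefix>/bin/bar; a plain dependency suffices."""
    depends_on("python")
```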
```diff
@@ -193,10 +193,10 @@ Build system dependencies
 
 As an extension of the R ecosystem, your package will obviously depend
 on R to build and run. Normally, we would use ``depends_on`` to express
-this, but for R packages, we use ``extends``. ``extends`` is similar to
-``depends_on``, but adds an additional feature: the ability to "activate"
-the package by symlinking it to the R installation directory. Since
-every R package needs this, the ``RPackage`` base class contains:
+this, but for R packages, we use ``extends``. This implies a special
+dependency on R, which is used to set environment variables such as
+``R_LIBS`` uniformly. Since every R package needs this, the ``RPackage``
+base class contains:
 
 .. code-block:: python
```
```diff
@@ -67,7 +67,6 @@ or refer to the full manual below.
    build_settings
    environments
    containers
-   monitoring
    mirrors
    module_file_support
    repositories
@@ -78,12 +77,6 @@ or refer to the full manual below.
    extensions
    pipelines
 
-.. toctree::
-   :maxdepth: 2
-   :caption: Research
-
-   analyze
-
 .. toctree::
    :maxdepth: 2
    :caption: Contributing
```
The monitoring documentation page is also removed (`@@ -1,265 +0,0 @@`). The deleted content:

```rst
.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)

.. _monitoring:

==========
Monitoring
==========

You can use a `spack monitor <https://github.com/spack/spack-monitor>`_ "Spackmon"
server to store a database of your packages, builds, and associated metadata
for provenance, research, or some other kind of development. You should
follow the instructions in the `spack monitor documentation <https://spack-monitor.readthedocs.org>`_
to first create a server along with a username and token for yourself.
You can then use this guide to interact with the server.

-------------------
Analysis Monitoring
-------------------

To read about how to monitor an analysis (meaning you want to send analysis results
to a server) see :ref:`analyze_monitoring`.

---------------------
Monitoring An Install
---------------------

Since an install is typically when you build packages, we logically want
to tell spack to monitor during this step. Let's start with an example
where we want to monitor the install of hdf5. Unless you have disabled authentication
for the server, you first want to export your spack monitor token and username to the environment:

.. code-block:: console

   $ export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438
   $ export SPACKMON_USER=spacky

By default, the host for your server is expected to be at ``http://127.0.0.1``
with a prefix of ``ms1``, and if this is the case, you can simply add the
``--monitor`` flag to the install command:

.. code-block:: console

   $ spack install --monitor hdf5

If you need to customize the host or the prefix, you can do that as well:

.. code-block:: console

   $ spack install --monitor --monitor-prefix monitor --monitor-host https://monitor-service.io hdf5

As a precaution, we cut out early in the spack client if you have not provided
authentication credentials. For example, if you run the command above without
exporting your username or token, you'll see:

.. code-block:: console

   ==> Error: You are required to export SPACKMON_TOKEN and SPACKMON_USER

This extra check is to ensure that we don't start any builds,
and then discover that you forgot to export your token. However, if
your monitoring server has authentication disabled, you can tell this to
the client to skip this step:

.. code-block:: console

   $ spack install --monitor --monitor-disable-auth hdf5

If the service is not running, you'll cleanly exit early; the install will
not continue if you've asked it to monitor and there is no service.
For example, here is what you'll see if the monitoring service is not running:

.. code-block:: console

   [Errno 111] Connection refused

If you want to continue builds (and stop monitoring), you can set the ``--monitor-keep-going``
flag.

.. code-block:: console

   $ spack install --monitor --monitor-keep-going hdf5

This could mean that if a request fails, you only have partial or no data
added to your monitoring database. This setting will not be applied to the
first request to check if the server is running, but to subsequent requests.
If you don't have a monitor server running and you want to build, simply
don't provide the ``--monitor`` flag! Finally, if you want to provide one or
more tags to your build, you can do:

.. code-block:: console

   # Add one tag, "pizza"
   $ spack install --monitor --monitor-tags pizza hdf5

   # Add two tags, "pizza" and "pasta"
   $ spack install --monitor --monitor-tags pizza,pasta hdf5

----------------------------
Monitoring with Containerize
----------------------------

The same argument group is available to add to a containerize command.

^^^^^^
Docker
^^^^^^

To add monitoring to a Docker container recipe generation using the defaults,
and assuming a monitor server running on localhost, you would
start with a spack.yaml in your present working directory:

.. code-block:: yaml

   spack:
     specs:
       - samtools

And then do:

.. code-block:: console

   # preview first
   spack containerize --monitor

   # and then write to a Dockerfile
   spack containerize --monitor > Dockerfile

The install command will be edited to include commands for enabling monitoring.
However, getting secrets into the container for your monitor server is something
that should be done carefully. Specifically you should:

- Never try to define secrets as ENV, ARG, or using ``--build-arg``
- Do not try to get the secret into the container via a "temporary" file that
  you remove (it in fact will still exist in a layer)

Instead, it's recommended to use buildkit `as explained here <https://pythonspeed.com/articles/docker-build-secrets/>`_.
You'll need to again export environment variables for your spack monitor server:

.. code-block:: console

   $ export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438
   $ export SPACKMON_USER=spacky

And then use buildkit along with your build and identifying the name of the secret:

.. code-block:: console

   $ DOCKER_BUILDKIT=1 docker build --secret id=st,env=SPACKMON_TOKEN --secret id=su,env=SPACKMON_USER -t spack/container .

The secrets are expected to come from your environment, and then will be temporarily mounted and available
at ``/run/secrets/<name>``. If you forget to supply them (and authentication is required) the build
will fail. If you need to build on your host (and interact with a spack monitor at localhost) you'll
need to tell Docker to use the host network:

.. code-block:: console

   $ DOCKER_BUILDKIT=1 docker build --network="host" --secret id=st,env=SPACKMON_TOKEN --secret id=su,env=SPACKMON_USER -t spack/container .

^^^^^^^^^^^
Singularity
^^^^^^^^^^^

To add monitoring to a Singularity container build, the spack.yaml needs to
be modified slightly to specify wanting a different format:

.. code-block:: yaml

   spack:
     specs:
       - samtools
     container:
       format: singularity

Again, generate the recipe:

.. code-block:: console

   # preview first
   $ spack containerize --monitor

   # then write to a Singularity recipe
   $ spack containerize --monitor > Singularity

Singularity doesn't have a direct way to define secrets at build time, so we have
to do a bit of a manual command to add a file, source secrets in it, and remove it.
Since Singularity doesn't have layers like Docker, deleting a file will truly
remove it from the container and history. So let's say we have this file,
``secrets.sh``:

.. code-block:: console

   # secrets.sh
   export SPACKMON_USER=spack
   export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438

We would then generate the Singularity recipe, and add a files section,
a source of that file at the start of ``%post``, and **importantly**
a removal of the file at the end of that same section.

.. code-block::

   Bootstrap: docker
   From: spack/ubuntu-bionic:latest
   Stage: build

   %files
       secrets.sh /opt/secrets.sh

   %post
       . /opt/secrets.sh

       # spack install commands are here
       ...

       # Don't forget to remove here!
       rm /opt/secrets.sh

You can then build the container as you normally would.

.. code-block:: console

   $ sudo singularity build container.sif Singularity

------------------
Monitoring Offline
------------------

In the case that you want to save monitor results to your filesystem
and then upload them later (perhaps you are in an environment where you don't
have credentials or it isn't safe to use them) you can use the ``--monitor-save-local``
flag.

.. code-block:: console

   $ spack install --monitor --monitor-save-local hdf5

This will save results in a subfolder, "monitor" in your designated spack
reports folder, which defaults to ``$HOME/.spack/reports/monitor``. When
you are ready to upload them to a spack monitor server:

.. code-block:: console

   $ spack monitor upload ~/.spack/reports/monitor

You can choose the root directory of results as shown above, or a specific
subdirectory. The command accepts other arguments to specify configuration
for the monitor.
```
```diff
@@ -2634,9 +2634,12 @@ extendable package:
    extends('python')
    ...
 
-Now, the ``py-numpy`` package can be used as an argument to ``spack
-activate``. When it is activated, all the files in its prefix will be
-symbolically linked into the prefix of the python package.
+This accomplishes a few things. Firstly, the Python package can set special
+variables such as ``PYTHONPATH`` for all extensions when the run or build
+environment is set up. Secondly, filesystem views can ensure that extensions
+are put in the same prefix as their extendee. This ensures that Python in
+a view can always locate its Python packages, even without environment
+variables set.
 
 A package can only extend one other package at a time. To support packages
 that may extend one of a list of other packages, Spack supports multiple
@@ -2684,9 +2687,8 @@ variant(s) are selected. This may be accomplished with conditional
    ...
 
 Sometimes, certain files in one package will conflict with those in
-another, which means they cannot both be activated (symlinked) at the
-same time. In this case, you can tell Spack to ignore those files
-when it does the activation:
+another, which means they cannot both be used in a view at the
+same time. In this case, you can tell Spack to ignore those files:
 
 .. code-block:: python
@@ -2698,7 +2700,7 @@ when it does the activation:
    ...
 
 The code above will prevent everything in the ``$prefix/bin/`` directory
-from being linked in at activation time.
+from being linked in a view.
 
 .. note::
@@ -3523,7 +3525,7 @@ will likely contain some overriding of default builder methods:
     def cmake_args(self):
         pass
 
-class Autotoolsbuilder(spack.build_systems.autotools.AutotoolsBuilder):
+class AutotoolsBuilder(spack.build_systems.autotools.AutotoolsBuilder):
     def configure_args(self):
         pass
```
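The Python code blocks inside the hunks above are elided in this compare view. For context, the ignore pattern being described looks roughly like this sketch (class name and regex hypothetical):

```python
# Hypothetical extension that excludes its bin/ directory from views,
# so its scripts never collide with another package's files.
from spack.package import *

class PyExample(Package):
    extends("python", ignore=r"bin/.*")
```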
`lib/spack/env/cc` (176 changes, vendored)

```diff
@@ -427,6 +427,55 @@ isystem_include_dirs_list=""
 libs_list=""
 other_args_list=""
 
+# Global state for keeping track of -Wl,-rpath -Wl,/path
+wl_expect_rpath=no
+
+# Same, but for -Xlinker -rpath -Xlinker /path
+xlinker_expect_rpath=no
+
+parse_Wl() {
+    # drop -Wl
+    shift
+    while [ $# -ne 0 ]; do
+        if [ "$wl_expect_rpath" = yes ]; then
+            if system_dir "$1"; then
+                append system_rpath_dirs_list "$1"
+            else
+                append rpath_dirs_list "$1"
+            fi
+            wl_expect_rpath=no
+        else
+            case "$1" in
+                -rpath=*)
+                    arg="${1#-rpath=}"
+                    if system_dir "$arg"; then
+                        append system_rpath_dirs_list "$arg"
+                    else
+                        append rpath_dirs_list "$arg"
+                    fi
+                    ;;
+                --rpath=*)
+                    arg="${1#--rpath=}"
+                    if system_dir "$arg"; then
+                        append system_rpath_dirs_list "$arg"
+                    else
+                        append rpath_dirs_list "$arg"
+                    fi
+                    ;;
+                -rpath|--rpath)
+                    wl_expect_rpath=yes
+                    ;;
+                "$dtags_to_strip")
+                    ;;
+                *)
+                    append other_args_list "-Wl,$1"
+                    ;;
+            esac
+        fi
+        shift
+    done
+}
+
 
 while [ $# -ne 0 ]; do
 
@@ -485,88 +534,77 @@ while [ $# -ne 0 ]; do
             append other_args_list "-l$arg"
             ;;
         -Wl,*)
-            arg="${1#-Wl,}"
-            if [ -z "$arg" ]; then shift; arg="$1"; fi
-            case "$arg" in
-                -rpath=*) rp="${arg#-rpath=}" ;;
-                --rpath=*) rp="${arg#--rpath=}" ;;
-                -rpath,*) rp="${arg#-rpath,}" ;;
-                --rpath,*) rp="${arg#--rpath,}" ;;
-                -rpath|--rpath)
-                    shift; arg="$1"
-                    case "$arg" in
-                        -Wl,*)
-                            rp="${arg#-Wl,}"
-                            ;;
-                        *)
-                            die "-Wl,-rpath was not followed by -Wl,*"
-                            ;;
-                    esac
-                    ;;
-                "$dtags_to_strip")
-                    : # We want to remove explicitly this flag
-                    ;;
-                *)
-                    append other_args_list "-Wl,$arg"
-                    ;;
-            esac
-            ;;
-        -Xlinker,*)
-            arg="${1#-Xlinker,}"
-            if [ -z "$arg" ]; then shift; arg="$1"; fi
-            case "$arg" in
-                -rpath=*) rp="${arg#-rpath=}" ;;
-                --rpath=*) rp="${arg#--rpath=}" ;;
-                -rpath|--rpath)
-                    shift; arg="$1"
-                    case "$arg" in
-                        -Xlinker,*)
-                            rp="${arg#-Xlinker,}"
-                            ;;
-                        *)
-                            die "-Xlinker,-rpath was not followed by -Xlinker,*"
-                            ;;
-                    esac
-                    ;;
-                *)
-                    append other_args_list "-Xlinker,$arg"
-                    ;;
-            esac
+            IFS=,
+            parse_Wl $1
+            unset IFS
             ;;
         -Xlinker)
-            if [ "$2" = "-rpath" ]; then
-                if [ "$3" != "-Xlinker" ]; then
-                    die "-Xlinker,-rpath was not followed by -Xlinker,*"
-                fi
-                shift 3;
-                rp="$1"
-            elif [ "$2" = "$dtags_to_strip" ]; then
-                shift # We want to remove explicitly this flag
-            else
-                append other_args_list "$1"
+            shift
+            if [ $# -eq 0 ]; then
+                # -Xlinker without value: let the compiler error about it.
+                append other_args_list -Xlinker
+                xlinker_expect_rpath=no
+                break
+            elif [ "$xlinker_expect_rpath" = yes ]; then
+                # Register the path of -Xlinker -rpath <other args> -Xlinker <path>
+                if system_dir "$1"; then
+                    append system_rpath_dirs_list "$1"
+                else
+                    append rpath_dirs_list "$1"
+                fi
+                xlinker_expect_rpath=no
+            else
+                case "$1" in
+                    -rpath=*)
+                        arg="${1#-rpath=}"
+                        if system_dir "$arg"; then
+                            append system_rpath_dirs_list "$arg"
+                        else
+                            append rpath_dirs_list "$arg"
+                        fi
+                        ;;
+                    --rpath=*)
+                        arg="${1#--rpath=}"
+                        if system_dir "$arg"; then
+                            append system_rpath_dirs_list "$arg"
+                        else
+                            append rpath_dirs_list "$arg"
+                        fi
+                        ;;
+                    -rpath|--rpath)
+                        xlinker_expect_rpath=yes
+                        ;;
+                    "$dtags_to_strip")
+                        ;;
+                    *)
+                        append other_args_list -Xlinker
+                        append other_args_list "$1"
+                        ;;
+                esac
             fi
             ;;
+        "$dtags_to_strip")
+            ;;
         *)
-            if [ "$1" = "$dtags_to_strip" ]; then
-                : # We want to remove explicitly this flag
-            else
-                append other_args_list "$1"
-            fi
+            append other_args_list "$1"
             ;;
     esac
-
-    # test rpaths against system directories in one place.
-    if [ -n "$rp" ]; then
-        if system_dir "$rp"; then
-            append system_rpath_dirs_list "$rp"
-        else
-            append rpath_dirs_list "$rp"
-        fi
-    fi
     shift
 done
 
+# We found `-Xlinker -rpath` but no matching value `-Xlinker /path`. Just append
+# `-Xlinker -rpath` again and let the compiler or linker handle the error during arg
+# parsing.
+if [ "$xlinker_expect_rpath" = yes ]; then
+    append other_args_list -Xlinker
+    append other_args_list -rpath
+fi
+
+# Same, but for -Wl flags.
+if [ "$wl_expect_rpath" = yes ]; then
+    append other_args_list -Wl,-rpath
+fi
+
 #
 # Add flags from Spack's cppflags, cflags, cxxflags, fcflags, fflags, and
 # ldflags. We stick to the order that gmake puts the flags in by default.
```
|
@@ -2589,3 +2589,28 @@ def temporary_dir(*args, **kwargs):
         yield tmp_dir
     finally:
         remove_directory_contents(tmp_dir)
+
+
+def filesummary(path, print_bytes=16):
+    """Create a small summary of the given file. Does not error
+    when file does not exist.
+
+    Args:
+        print_bytes (int): Number of bytes to print from start/end of file
+
+    Returns:
+        Tuple of size and byte string containing first n .. last n bytes.
+        Size is 0 if file cannot be read."""
+    try:
+        n = print_bytes
+        with open(path, "rb") as f:
+            size = os.fstat(f.fileno()).st_size
+            if size <= 2 * n:
+                short_contents = f.read(2 * n)
+            else:
+                short_contents = f.read(n)
+                f.seek(-n, 2)
+                short_contents += b"..." + f.read(n)
+        return size, short_contents
+    except OSError:
+        return 0, b""
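A quick usage sketch for the helper just added (path and output are illustrative, not from the diff):

import llnl.util.filesystem as fsys

size, head_tail = fsys.filesummary("/path/to/archive.tar.gz", print_bytes=16)
# A readable file yields its size plus the first and last 16 bytes joined by
# b"..."; an unreadable or missing file yields (0, b"").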
@@ -75,7 +75,7 @@ def __init__(self, ignore=None):
         # so that we have a fast lookup and can run mkdir in order.
         self.directories = OrderedDict()

-        # Files to link. Maps dst_rel to (src_rel, src_root)
+        # Files to link. Maps dst_rel to (src_root, src_rel)
         self.files = OrderedDict()

     def before_visit_dir(self, root, rel_path, depth):
@@ -4,7 +4,7 @@
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)

 #: PEP440 canonical <major>.<minor>.<micro>.<devN> string
-__version__ = "0.19.0.dev0"
+__version__ = "0.19.2"
 spack_version = __version__

@@ -288,7 +288,7 @@ def _check_build_test_callbacks(pkgs, error_cls):
     errors = []
     for pkg_name in pkgs:
         pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
-        test_callbacks = pkg_cls.build_time_test_callbacks
+        test_callbacks = getattr(pkg_cls, "build_time_test_callbacks", None)

         if test_callbacks and "test" in test_callbacks:
             msg = '{0} package contains "test" method in ' "build_time_test_callbacks"
@@ -36,6 +36,7 @@
 import spack.relocate as relocate
 import spack.repo
 import spack.store
+import spack.util.crypto
 import spack.util.file_cache as file_cache
 import spack.util.gpg
 import spack.util.spack_json as sjson
@@ -293,10 +294,12 @@ def update_spec(self, spec, found_list):
                 cur_entry["spec"] = new_entry["spec"]
                 break
         else:
-            current_list.append = {
-                "mirror_url": new_entry["mirror_url"],
-                "spec": new_entry["spec"],
-            }
+            current_list.append(
+                {
+                    "mirror_url": new_entry["mirror_url"],
+                    "spec": new_entry["spec"],
+                }
+            )

     def update(self, with_cooldown=False):
         """Make sure local cache of buildcache index files is up to date.
@@ -554,9 +557,9 @@ class NoOverwriteException(spack.error.SpackError):
     """

     def __init__(self, file_path):
-        err_msg = "\n%s\nexists\n" % file_path
-        err_msg += "Use -f option to overwrite."
-        super(NoOverwriteException, self).__init__(err_msg)
+        super(NoOverwriteException, self).__init__(
+            '"{}" exists in buildcache. Use --force flag to overwrite.'.format(file_path)
+        )


 class NoGpgException(spack.error.SpackError):
@@ -601,7 +604,12 @@ class NoChecksumException(spack.error.SpackError):
     Raised if file fails checksum verification.
     """

-    pass
+    def __init__(self, path, size, contents, algorithm, expected, computed):
+        super(NoChecksumException, self).__init__(
+            "{} checksum failed for {}".format(algorithm, path),
+            "Expected {} but got {}. "
+            "File size = {} bytes. Contents = {!r}".format(expected, computed, size, contents),
+        )


 class NewLayoutException(spack.error.SpackError):
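With the new constructor, callers pass raw measurements and the exception formats the two-part message itself. A standalone illustration of the same idea (not Spack's class; names and values are made up):

class ChecksumMismatch(Exception):
    """Illustrative only: assembles the same kind of message as the diff above."""

    def __init__(self, path, size, contents, algorithm, expected, computed):
        super(ChecksumMismatch, self).__init__(
            "{} checksum failed for {}".format(algorithm, path)
            + "\nExpected {} but got {}. "
            "File size = {} bytes. Contents = {!r}".format(expected, computed, size, contents)
        )

try:
    raise ChecksumMismatch("/tmp/pkg.tar.gz", 4, b"oops", "sha256", "deadbeef", "baadf00d")
except ChecksumMismatch as e:
    print(e)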
@@ -1859,14 +1867,15 @@ def _extract_inner_tarball(spec, filename, extract_to, unsigned, remote_checksum
         raise UnsignedPackageException(
             "To install unsigned packages, use the --no-check-signature option."
         )
-    # get the sha256 checksum of the tarball
+
+    # compute the sha256 checksum of the tarball
     local_checksum = checksum_tarball(tarfile_path)
+    expected = remote_checksum["hash"]

     # if the checksums don't match don't install
-    if local_checksum != remote_checksum["hash"]:
-        raise NoChecksumException(
-            "Package tarball failed checksum verification.\n" "It cannot be installed."
-        )
+    if local_checksum != expected:
+        size, contents = fsys.filesummary(tarfile_path)
+        raise NoChecksumException(tarfile_path, size, contents, "sha256", expected, local_checksum)

     return tarfile_path

@@ -1926,12 +1935,14 @@ def extract_tarball(spec, download_result, allow_root=False, unsigned=False, for

     # compute the sha256 checksum of the tarball
     local_checksum = checksum_tarball(tarfile_path)
+    expected = bchecksum["hash"]

     # if the checksums don't match don't install
-    if local_checksum != bchecksum["hash"]:
+    if local_checksum != expected:
+        size, contents = fsys.filesummary(tarfile_path)
         _delete_staged_downloads(download_result)
-        raise NoChecksumException(
-            "Package tarball failed checksum verification.\n" "It cannot be installed."
-        )
+        raise NoChecksumException(
+            tarfile_path, size, contents, "sha256", expected, local_checksum
+        )

     new_relative_prefix = str(os.path.relpath(spec.prefix, spack.store.layout.root))
@@ -2022,8 +2033,11 @@ def install_root_node(spec, allow_root, unsigned=False, force=False, sha256=None
         tarball_path = download_result["tarball_stage"].save_filename
         msg = msg.format(tarball_path, sha256)
         if not checker.check(tarball_path):
+            size, contents = fsys.filesummary(tarball_path)
             _delete_staged_downloads(download_result)
-            raise spack.binary_distribution.NoChecksumException(msg)
+            raise NoChecksumException(
+                tarball_path, size, contents, checker.hash_name, sha256, checker.sum
+            )
         tty.debug("Verified SHA256 checksum of the build cache")

     # don't print long padded paths while extracting/relocating binaries
@@ -978,22 +978,9 @@ def add_modifications_for_dep(dep):
         if set_package_py_globals:
             set_module_variables_for_package(dpkg)

-        # Allow dependencies to modify the module
-        # Get list of modules that may need updating
-        modules = []
-        for cls in inspect.getmro(type(spec.package)):
-            module = cls.module
-            if module == spack.package_base:
-                break
-            modules.append(module)
-
-        # Execute changes as if on a single module
-        # copy dict to ensure prior changes are available
-        changes = spack.util.pattern.Bunch()
-        dpkg.setup_dependent_package(changes, spec)
-
-        for module in modules:
-            module.__dict__.update(changes.__dict__)
+        current_module = ModuleChangePropagator(spec.package)
+        dpkg.setup_dependent_package(current_module, spec)
+        current_module.propagate_changes_to_mro()

         if context == "build":
             builder = spack.builder.create(dpkg)
@@ -1437,3 +1424,51 @@ def write_log_summary(out, log_type, log, last=None):
         # If no errors are found but warnings are, display warnings
         out.write("\n%s found in %s log:\n" % (plural(nwar, "warning"), log_type))
         out.write(make_log_context(warnings))
+
+
+class ModuleChangePropagator(object):
+    """Wrapper class to accept changes to a package.py Python module, and propagate them in the
+    MRO of the package.
+
+    It is mainly used as a substitute of the ``package.py`` module, when calling the
+    "setup_dependent_package" function during build environment setup.
+    """
+
+    _PROTECTED_NAMES = ("package", "current_module", "modules_in_mro", "_set_attributes")
+
+    def __init__(self, package):
+        self._set_self_attributes("package", package)
+        self._set_self_attributes("current_module", package.module)
+
+        #: Modules for the classes in the MRO up to PackageBase
+        modules_in_mro = []
+        for cls in inspect.getmro(type(package)):
+            module = cls.module
+
+            if module == self.current_module:
+                continue
+
+            if module == spack.package_base:
+                break
+
+            modules_in_mro.append(module)
+        self._set_self_attributes("modules_in_mro", modules_in_mro)
+        self._set_self_attributes("_set_attributes", {})
+
+    def _set_self_attributes(self, key, value):
+        super(ModuleChangePropagator, self).__setattr__(key, value)
+
+    def __getattr__(self, item):
+        return getattr(self.current_module, item)
+
+    def __setattr__(self, key, value):
+        if key in ModuleChangePropagator._PROTECTED_NAMES:
+            msg = 'Cannot set attribute "{}" in ModuleMonkeyPatcher'.format(key)
+            return AttributeError(msg)
+
+        setattr(self.current_module, key, value)
+        self._set_attributes[key] = value
+
+    def propagate_changes_to_mro(self):
+        for module_in_mro in self.modules_in_mro:
+            module_in_mro.__dict__.update(self._set_attributes)
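A hedged usage sketch for the new class, mirroring the call sequence in the build-environment hunk further up (`pkg` is a hypothetical package instance):

propagator = ModuleChangePropagator(pkg)   # stands in for pkg's package.py module
propagator.some_global = "value"           # recorded, and set on the module itself
propagator.propagate_changes_to_mro()      # replayed on every base-class module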
@@ -7,7 +7,7 @@
 import os.path
 import stat
 import subprocess
-from typing import List  # novm
+from typing import List  # novm # noqa: F401

 import llnl.util.filesystem as fs
 import llnl.util.tty as tty
@@ -427,15 +427,15 @@ def _do_patch_libtool(self):
             x.filter(regex="-nostdlib", repl="", string=True)
             rehead = r"/\S*/"
             for o in [
-                "fjhpctag.o",
-                "fjcrt0.o",
-                "fjlang08.o",
-                "fjomp.o",
-                "crti.o",
-                "crtbeginS.o",
-                "crtendS.o",
+                r"fjhpctag\.o",
+                r"fjcrt0\.o",
+                r"fjlang08\.o",
+                r"fjomp\.o",
+                r"crti\.o",
+                r"crtbeginS\.o",
+                r"crtendS\.o",
             ]:
-                x.filter(regex=(rehead + o), repl="", string=True)
+                x.filter(regex=(rehead + o), repl="")
         elif self.pkg.compiler.name == "dpcpp":
             # Hack to filter out spurious predep_objects when building with Intel dpcpp
             # (see https://github.com/spack/spack/issues/32863):
@@ -6,7 +6,7 @@
 import os
 import re
 import shutil
-from typing import Optional
+from typing import Optional  # noqa: F401

 import llnl.util.filesystem as fs
 import llnl.util.lang as lang
@@ -108,6 +108,9 @@ def view_file_conflicts(self, view, merge_map):
         return conflicts

     def add_files_to_view(self, view, merge_map, skip_if_exists=True):
+        if not self.extendee_spec:
+            return super(PythonExtension, self).add_files_to_view(view, merge_map, skip_if_exists)
+
         bin_dir = self.spec.prefix.bin
         python_prefix = self.extendee_spec.prefix
         python_is_external = self.extendee_spec.external
@@ -46,10 +46,10 @@ class SConsBuilder(BaseBuilder):
     phases = ("build", "install")

     #: Names associated with package methods in the old build-system format
-    legacy_methods = ("install_args", "build_test")
+    legacy_methods = ("build_test",)

     #: Same as legacy_methods, but the signature is different
-    legacy_long_methods = ("build_args",)
+    legacy_long_methods = ("build_args", "install_args")

     #: Names associated with package attributes in the old build-system format
     legacy_attributes = ("build_time_test_callbacks",)
@@ -66,13 +66,13 @@ def build(self, pkg, spec, prefix):
         args = self.build_args(spec, prefix)
         inspect.getmodule(self.pkg).scons(*args)

-    def install_args(self):
+    def install_args(self, spec, prefix):
         """Arguments to pass to install."""
         return []

     def install(self, pkg, spec, prefix):
         """Install the package."""
-        args = self.install_args()
+        args = self.install_args(spec, prefix)

         inspect.getmodule(self.pkg).scons("install", *args)

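For package authors, the practical effect is that `install_args` overrides now receive `spec` and `prefix`, just like `build_args`. A sketch with a hypothetical package (assuming the usual `from spack.package import *` prelude in a package.py):

class Example(SConsPackage):
    """Hypothetical package using the updated builder signatures."""

    def build_args(self, spec, prefix):
        return ["PREFIX={0}".format(prefix)]

    def install_args(self, spec, prefix):
        # Same arguments as build_args under the new-style builder.
        return ["PREFIX={0}".format(prefix)]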
@@ -6,7 +6,7 @@
 import copy
 import functools
 import inspect
-from typing import List, Optional, Tuple
+from typing import List, Optional, Tuple  # noqa: F401

 import six

@@ -127,7 +127,12 @@ def __init__(self, wrapped_pkg_object, root_builder):
         wrapper_cls = type(self)
         bases = (package_cls, wrapper_cls)
         new_cls_name = package_cls.__name__ + "Wrapper"
-        new_cls = type(new_cls_name, bases, {})
+        # Forward attributes that might be monkey patched later
+        new_cls = type(
+            new_cls_name,
+            bases,
+            {"run_tests": property(lambda x: x.wrapped_package_object.run_tests)},
+        )
         new_cls.__module__ = package_cls.__module__
         self.__class__ = new_cls
         self.__dict__.update(wrapped_pkg_object.__dict__)
@@ -1769,9 +1769,9 @@ def reproduce_ci_job(url, work_dir):
     download_and_extract_artifacts(url, work_dir)

     lock_file = fs.find(work_dir, "spack.lock")[0]
-    concrete_env_dir = os.path.dirname(lock_file)
+    repro_lock_dir = os.path.dirname(lock_file)

-    tty.debug("Concrete environment directory: {0}".format(concrete_env_dir))
+    tty.debug("Found lock file in: {0}".format(repro_lock_dir))

     yaml_files = fs.find(work_dir, ["*.yaml", "*.yml"])

@@ -1794,6 +1794,21 @@ def reproduce_ci_job(url, work_dir):
         if pipeline_yaml:
             tty.debug("\n{0} is likely your pipeline file".format(yf))

+    relative_concrete_env_dir = pipeline_yaml["variables"]["SPACK_CONCRETE_ENV_DIR"]
+    tty.debug("Relative environment path used by cloud job: {0}".format(relative_concrete_env_dir))
+
+    # Using the relative concrete environment path found in the generated
+    # pipeline variable above, copy the spack environment files so they'll
+    # be found in the same location as when the job ran in the cloud.
+    concrete_env_dir = os.path.join(work_dir, relative_concrete_env_dir)
+    if not os.path.isdir(concrete_env_dir):
+        fs.mkdirp(concrete_env_dir)
+    copy_lock_path = os.path.join(concrete_env_dir, "spack.lock")
+    orig_yaml_path = os.path.join(repro_lock_dir, "spack.yaml")
+    copy_yaml_path = os.path.join(concrete_env_dir, "spack.yaml")
+    shutil.copyfile(lock_file, copy_lock_path)
+    shutil.copyfile(orig_yaml_path, copy_yaml_path)
+
     # Find the install script in the unzipped artifacts and make it executable
     install_script = fs.find(work_dir, "install.sh")[0]
     st = os.stat(install_script)
@@ -1849,6 +1864,7 @@ def reproduce_ci_job(url, work_dir):
     if repro_details:
         mount_as_dir = repro_details["ci_project_dir"]
         mounted_repro_dir = os.path.join(mount_as_dir, rel_repro_dir)
+        mounted_env_dir = os.path.join(mount_as_dir, relative_concrete_env_dir)

     # We will also try to clone spack from your local checkout and
     # reproduce the state present during the CI build, and put that into
@@ -1932,7 +1948,7 @@ def reproduce_ci_job(url, work_dir):
     inst_list.append(" $ source {0}/share/spack/setup-env.sh\n".format(spack_root))
     inst_list.append(
         " $ spack env activate --without-view {0}\n\n".format(
-            mounted_repro_dir if job_image else repro_dir
+            mounted_env_dir if job_image else repro_dir
         )
     )
     inst_list.append(" - Run the install script\n\n")

@@ -244,30 +244,35 @@ def config_remove(args):
     spack.config.set(path, existing, scope)


-def _can_update_config_file(scope_dir, cfg_file):
-    dir_ok = fs.can_write_to_dir(scope_dir)
-    cfg_ok = fs.can_access(cfg_file)
-    return dir_ok and cfg_ok
+def _can_update_config_file(scope, cfg_file):
+    if isinstance(scope, spack.config.SingleFileScope):
+        return fs.can_access(cfg_file)
+    return fs.can_write_to_dir(scope.path) and fs.can_access(cfg_file)


 def config_update(args):
     # Read the configuration files
     spack.config.config.get_config(args.section, scope=args.scope)
-    updates = spack.config.config.format_updates[args.section]
+    updates = list(
+        filter(
+            lambda s: not isinstance(
+                s, (spack.config.InternalConfigScope, spack.config.ImmutableConfigScope)
+            ),
+            spack.config.config.format_updates[args.section],
+        )
+    )

     cannot_overwrite, skip_system_scope = [], False
     for scope in updates:
         cfg_file = spack.config.config.get_config_filename(scope.name, args.section)
-        scope_dir = scope.path
-        can_be_updated = _can_update_config_file(scope_dir, cfg_file)
+        can_be_updated = _can_update_config_file(scope, cfg_file)
         if not can_be_updated:
             if scope.name == "system":
                 skip_system_scope = True
-                msg = (
+                tty.warn(
                     'Not enough permissions to write to "system" scope. '
-                    "Skipping update at that location [cfg={0}]"
+                    "Skipping update at that location [cfg={0}]".format(cfg_file)
                 )
-                tty.warn(msg.format(cfg_file))
                 continue
             cannot_overwrite.append((scope, cfg_file))

@@ -315,18 +320,14 @@ def config_update(args):
     # Get a function to update the format
     update_fn = spack.config.ensure_latest_format_fn(args.section)
     for scope in updates:
-        cfg_file = spack.config.config.get_config_filename(scope.name, args.section)
-        with open(cfg_file) as f:
-            data = syaml.load_config(f) or {}
-            data = data.pop(args.section, {})
+        data = scope.get_section(args.section).pop(args.section)
         update_fn(data)

         # Make a backup copy and rewrite the file
         bkp_file = cfg_file + ".bkp"
         shutil.copy(cfg_file, bkp_file)
         spack.config.config.update_config(args.section, data, scope=scope.name, force=True)
-        msg = 'File "{0}" updated [backup={1}]'
-        tty.msg(msg.format(cfg_file, bkp_file))
+        tty.msg('File "{}" update [backup={}]'.format(cfg_file, bkp_file))


 def _can_revert_update(scope_dir, cfg_file, bkp_file):
@@ -242,8 +242,8 @@ def print_tests(pkg):
     # So the presence of a callback in Spack does not necessarily correspond
     # to the actual presence of built-time tests for a package.
     for callbacks, phase in [
-        (pkg.build_time_test_callbacks, "Build"),
-        (pkg.install_time_test_callbacks, "Install"),
+        (getattr(pkg, "build_time_test_callbacks", None), "Build"),
+        (getattr(pkg, "install_time_test_callbacks", None), "Install"),
     ]:
         color.cprint("")
         color.cprint(section_title("Available {0} Phase Test Methods:".format(phase)))
@@ -9,6 +9,7 @@
 import llnl.util.tty as tty
 import llnl.util.tty.colify as colify

+import spack.caches
 import spack.cmd
 import spack.cmd.common.arguments as arguments
 import spack.concretize
@@ -356,12 +357,9 @@ def versions_per_spec(args):
     return num_versions


-def create_mirror_for_individual_specs(mirror_specs, directory_hint, skip_unstable_versions):
-    local_push_url = local_mirror_url_from_user(directory_hint)
-    present, mirrored, error = spack.mirror.create(
-        local_push_url, mirror_specs, skip_unstable_versions
-    )
-    tty.msg("Summary for mirror in {}".format(local_push_url))
+def create_mirror_for_individual_specs(mirror_specs, path, skip_unstable_versions):
+    present, mirrored, error = spack.mirror.create(path, mirror_specs, skip_unstable_versions)
+    tty.msg("Summary for mirror in {}".format(path))
     process_mirror_stats(present, mirrored, error)


@@ -379,21 +377,6 @@ def process_mirror_stats(present, mirrored, error):
     sys.exit(1)


-def local_mirror_url_from_user(directory_hint):
-    """Return a file:// url pointing to the local mirror to be used.
-
-    Args:
-        directory_hint (str or None): directory where to create the mirror. If None,
-            defaults to "config:source_cache".
-    """
-    mirror_directory = spack.util.path.canonicalize_path(
-        directory_hint or spack.config.get("config:source_cache")
-    )
-    tmp_mirror = spack.mirror.Mirror(mirror_directory)
-    local_url = url_util.format(tmp_mirror.push_url)
-    return local_url
-
-
 def mirror_create(args):
     """Create a directory to be used as a spack mirror, and fill it with
     package archives.
@@ -424,9 +407,12 @@ def mirror_create(args):
             "The option '--all' already implies mirroring all versions for each package.",
         )

+    # When no directory is provided, the source dir is used
+    path = args.directory or spack.caches.fetch_cache_location()
+
     if args.all and not ev.active_environment():
         create_mirror_for_all_specs(
-            directory_hint=args.directory,
+            path=path,
             skip_unstable_versions=args.skip_unstable_versions,
             selection_fn=not_excluded_fn(args),
         )
@@ -434,7 +420,7 @@ def mirror_create(args):

     if args.all and ev.active_environment():
         create_mirror_for_all_specs_inside_environment(
-            directory_hint=args.directory,
+            path=path,
             skip_unstable_versions=args.skip_unstable_versions,
             selection_fn=not_excluded_fn(args),
         )
@@ -443,16 +429,15 @@ def mirror_create(args):
     mirror_specs = concrete_specs_from_user(args)
     create_mirror_for_individual_specs(
         mirror_specs,
-        directory_hint=args.directory,
+        path=path,
         skip_unstable_versions=args.skip_unstable_versions,
     )


-def create_mirror_for_all_specs(directory_hint, skip_unstable_versions, selection_fn):
+def create_mirror_for_all_specs(path, skip_unstable_versions, selection_fn):
     mirror_specs = all_specs_with_all_versions(selection_fn=selection_fn)
-    local_push_url = local_mirror_url_from_user(directory_hint=directory_hint)
     mirror_cache, mirror_stats = spack.mirror.mirror_cache_and_stats(
-        local_push_url, skip_unstable_versions=skip_unstable_versions
+        path, skip_unstable_versions=skip_unstable_versions
     )
     for candidate in mirror_specs:
         pkg_cls = spack.repo.path.get_pkg_class(candidate.name)
@@ -462,13 +447,11 @@ def create_mirror_for_all_specs(path, skip_unstable_versions, selection_fn):
     process_mirror_stats(*mirror_stats.stats())


-def create_mirror_for_all_specs_inside_environment(
-    directory_hint, skip_unstable_versions, selection_fn
-):
+def create_mirror_for_all_specs_inside_environment(path, skip_unstable_versions, selection_fn):
     mirror_specs = concrete_specs_from_environment(selection_fn=selection_fn)
     create_mirror_for_individual_specs(
         mirror_specs,
-        directory_hint=directory_hint,
+        path=path,
         skip_unstable_versions=skip_unstable_versions,
     )

@@ -127,8 +127,10 @@ def python_interpreter(args):
             console.runsource(startup.read(), startup_file, "exec")

     if args.python_command:
+        propagate_exceptions_from(console)
         console.runsource(args.python_command)
     elif args.python_args:
+        propagate_exceptions_from(console)
         sys.argv = args.python_args
         with open(args.python_args[0]) as file:
             console.runsource(file.read(), args.python_args[0], "exec")
@@ -149,3 +151,18 @@ def python_interpreter(args):
                 platform.machine(),
             )
         )
+
+
+def propagate_exceptions_from(console):
+    """Set sys.excepthook to let uncaught exceptions return 1 to the shell.
+
+    Args:
+        console (code.InteractiveConsole): the console that needs a change in sys.excepthook
+    """
+    console.push("import sys")
+    console.push("_wrapped_hook = sys.excepthook")
+    console.push("def _hook(exc_type, exc_value, exc_tb):")
+    console.push("    _wrapped_hook(exc_type, exc_value, exc_tb)")
+    console.push("    sys.exit(1)")
+    console.push("")
+    console.push("sys.excepthook = _hook")
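The pushed lines amount to the following plain Python, shown standalone for clarity: the previous excepthook still prints the traceback, then the process exits nonzero so shell callers of `spack python -c ...` see a failure.

import sys

_wrapped_hook = sys.excepthook

def _hook(exc_type, exc_value, exc_tb):
    _wrapped_hook(exc_type, exc_value, exc_tb)  # print the traceback as before
    sys.exit(1)                                 # then fail the process

sys.excepthook = _hook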
@@ -11,6 +11,7 @@
 import llnl.util.tty as tty
 from llnl.util.filesystem import working_dir

+import spack
 import spack.cmd.common.arguments as arguments
 import spack.config
 import spack.paths
@@ -24,7 +25,7 @@


 # tutorial configuration parameters
-tutorial_branch = "releases/v0.18"
+tutorial_branch = "releases/v%s" % ".".join(str(v) for v in spack.spack_version_info[:2])
 tutorial_mirror = "file:///mirror"
 tutorial_key = os.path.join(spack.paths.share_path, "keys", "tutorial.pub")

@@ -17,6 +17,7 @@
 import spack.package_base
 import spack.repo
 import spack.store
+import spack.traverse as traverse
 from spack.database import InstallStatuses

 description = "remove installed packages"
@@ -144,11 +145,7 @@ def installed_dependents(specs, env):
     active environment, and one from specs to dependent installs outside of
     the active environment.

-    Any of the input specs may appear in both mappings (if there are
-    dependents both inside and outside the current environment).
-
-    If a dependent spec is used both by the active environment and by
-    an inactive environment, it will only appear in the first mapping.
+    Every installed dependent spec is listed once.

     If there is not current active environment, the first mapping will be
     empty.
@@ -158,16 +155,24 @@ def installed_dependents(specs, env):

     env_hashes = set(env.all_hashes()) if env else set()

-    all_specs_in_db = spack.store.db.query()
+    # Ensure we stop traversal at input specs.
+    visited = set(s.dag_hash() for s in specs)

     for spec in specs:
-        installed = [x for x in all_specs_in_db if spec in x]
-
-        # separate installed dependents into dpts in this environment and
-        # dpts that are outside this environment
-        for dpt in installed:
-            if dpt not in specs:
-                if dpt.dag_hash() in env_hashes:
+        for dpt in traverse.traverse_nodes(
+            spec.dependents(deptype="all"),
+            direction="parents",
+            visited=visited,
+            deptype="all",
+            root=True,
+            key=lambda s: s.dag_hash(),
+        ):
+            hash = dpt.dag_hash()
+            # Ensure that all the specs we get are installed
+            record = spack.store.db.query_local_by_spec_hash(hash)
+            if record is None or not record.installed:
+                continue
+            if hash in env_hashes:
                 active_dpts.setdefault(spec, set()).add(dpt)
             else:
                 outside_dpts.setdefault(spec, set()).add(dpt)
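A toy illustration of the traversal idea above, with a plain dict instead of Spack's spec graph: seeding the visited set with the input hashes stops the parent-direction walk at other roots, and each dependent is yielded once even when reachable from several inputs.

def walk_parents(parents_of, roots):
    """Yield each transitive parent of `roots` exactly once."""
    visited = set(roots)  # seed with the inputs, as the diff seeds `visited`
    stack = [p for r in roots for p in parents_of.get(r, [])]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        yield node
        stack.extend(parents_of.get(node, []))

deps = {"zlib": ["curl", "cmake"], "curl": ["cmake"]}
print(sorted(walk_parents(deps, ["zlib"])))  # ['cmake', 'curl']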
@@ -250,7 +255,7 @@ def is_ready(dag_hash):
     if force:
         return True

-    _, record = spack.store.db.query_by_spec_hash(dag_hash)
+    record = spack.store.db.query_local_by_spec_hash(dag_hash)
     if not record.ref_count:
         return True

@@ -36,36 +36,89 @@ def extract_version_from_output(cls, output):
             ver = match.group(match.lastindex)
         return ver

+    # C++ flags based on CMake Modules/Compiler/AppleClang-CXX.cmake
+
     @property
     def cxx11_flag(self):
-        # Adapted from CMake's AppleClang-CXX rules
         # Spack's AppleClang detection only valid from Xcode >= 4.6
-        if self.real_version < spack.version.ver("4.0.0"):
+        if self.real_version < spack.version.ver("4.0"):
             raise spack.compiler.UnsupportedCompilerFlag(
-                self, "the C++11 standard", "cxx11_flag", "Xcode < 4.0.0"
+                self, "the C++11 standard", "cxx11_flag", "Xcode < 4.0"
             )
         return "-std=c++11"

     @property
     def cxx14_flag(self):
-        # Adapted from CMake's rules for AppleClang
-        if self.real_version < spack.version.ver("5.1.0"):
+        if self.real_version < spack.version.ver("5.1"):
             raise spack.compiler.UnsupportedCompilerFlag(
-                self, "the C++14 standard", "cxx14_flag", "Xcode < 5.1.0"
+                self, "the C++14 standard", "cxx14_flag", "Xcode < 5.1"
             )
-        elif self.real_version < spack.version.ver("6.1.0"):
+        elif self.real_version < spack.version.ver("6.1"):
             return "-std=c++1y"

         return "-std=c++14"

     @property
     def cxx17_flag(self):
-        # Adapted from CMake's rules for AppleClang
-        if self.real_version < spack.version.ver("6.1.0"):
+        if self.real_version < spack.version.ver("6.1"):
             raise spack.compiler.UnsupportedCompilerFlag(
-                self, "the C++17 standard", "cxx17_flag", "Xcode < 6.1.0"
+                self, "the C++17 standard", "cxx17_flag", "Xcode < 6.1"
             )
+        elif self.real_version < spack.version.ver("10.0"):
             return "-std=c++1z"
+        return "-std=c++17"
+
+    @property
+    def cxx20_flag(self):
+        if self.real_version < spack.version.ver("10.0"):
+            raise spack.compiler.UnsupportedCompilerFlag(
+                self, "the C++20 standard", "cxx20_flag", "Xcode < 10.0"
+            )
+        elif self.real_version < spack.version.ver("13.0"):
+            return "-std=c++2a"
+        return "-std=c++20"
+
+    @property
+    def cxx23_flag(self):
+        if self.real_version < spack.version.ver("13.0"):
+            raise spack.compiler.UnsupportedCompilerFlag(
+                self, "the C++23 standard", "cxx23_flag", "Xcode < 13.0"
+            )
+        return "-std=c++2b"
+
+    # C flags based on CMake Modules/Compiler/AppleClang-C.cmake
+
+    @property
+    def c99_flag(self):
+        if self.real_version < spack.version.ver("4.0"):
+            raise spack.compiler.UnsupportedCompilerFlag(
+                self, "the C99 standard", "c99_flag", "< 4.0"
+            )
+        return "-std=c99"
+
+    @property
+    def c11_flag(self):
+        if self.real_version < spack.version.ver("4.0"):
+            raise spack.compiler.UnsupportedCompilerFlag(
+                self, "the C11 standard", "c11_flag", "< 4.0"
+            )
+        return "-std=c11"
+
+    @property
+    def c17_flag(self):
+        if self.real_version < spack.version.ver("11.0"):
+            raise spack.compiler.UnsupportedCompilerFlag(
+                self, "the C17 standard", "c17_flag", "< 11.0"
+            )
+        return "-std=c17"
+
+    @property
+    def c23_flag(self):
+        if self.real_version < spack.version.ver("11.0.3"):
+            raise spack.compiler.UnsupportedCompilerFlag(
+                self, "the C23 standard", "c23_flag", "< 11.0.3"
+            )
+        return "-std=c2x"

     def setup_custom_environment(self, pkg, env):
         """Set the DEVELOPER_DIR environment for the Xcode toolchain.
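The net effect of the hunk above is a version-gated table of standard flags. A hedged sketch of how such a property is consumed (`compiler` is a configured apple-clang compiler object; the pattern, not the exact call site, is the point):

try:
    flag = compiler.cxx17_flag  # "-std=c++1z" on Xcode 6.1-10, "-std=c++17" later
except spack.compiler.UnsupportedCompilerFlag:
    flag = None  # toolchain predates the standard; the caller must cope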
@@ -61,7 +61,7 @@ def is_clang_based(self):
         return version >= ver("9.0") and "classic" not in str(version)

     version_argument = "--version"
-    version_regex = r"[Vv]ersion.*?(\d+(\.\d+)+)"
+    version_regex = r"[Cc]ray (?:clang|C :|C\+\+ :|Fortran :) [Vv]ersion.*?(\d+(\.\d+)+)"

     @property
     def verbose_flag(self):
@@ -128,11 +128,24 @@ def c99_flag(self):

     @property
     def c11_flag(self):
-        if self.real_version < ver("6.1.0"):
-            raise UnsupportedCompilerFlag(self, "the C11 standard", "c11_flag", "< 6.1.0")
-        else:
-            return "-std=c11"
+        if self.real_version < ver("3.0"):
+            raise UnsupportedCompilerFlag(self, "the C11 standard", "c11_flag", "< 3.0")
+        if self.real_version < ver("3.1"):
+            return "-std=c1x"
+        return "-std=c11"
+
+    @property
+    def c17_flag(self):
+        if self.real_version < ver("6.0"):
+            raise UnsupportedCompilerFlag(self, "the C17 standard", "c17_flag", "< 6.0")
+        return "-std=c17"
+
+    @property
+    def c23_flag(self):
+        if self.real_version < ver("9.0"):
+            raise UnsupportedCompilerFlag(self, "the C23 standard", "c23_flag", "< 9.0")
+        return "-std=c2x"

     @property
     def cc_pic_flag(self):
         return "-fPIC"
@@ -743,9 +743,7 @@ def _concretize_specs_together_new(*abstract_specs, **kwargs):
     import spack.solver.asp

     solver = spack.solver.asp.Solver()
-    solver.tests = kwargs.get("tests", False)
-
-    result = solver.solve(abstract_specs)
+    result = solver.solve(abstract_specs, tests=kwargs.get("tests", False))
     result.raise_if_unsat()
     return [s.copy() for s in result.specs]

@@ -36,10 +36,11 @@
 import re
 import sys
 from contextlib import contextmanager
-from typing import List  # novm
+from typing import List  # novm # noqa: F401

 import ruamel.yaml as yaml
 import six
+from ruamel.yaml.comments import Comment
 from ruamel.yaml.error import MarkedYAMLError
 from six import iteritems

@@ -532,16 +533,14 @@ def update_config(self, section, update_data, scope=None, force=False):
         scope = self._validate_scope(scope)  # get ConfigScope object

         # manually preserve comments
-        need_comment_copy = section in scope.sections and scope.sections[section] is not None
+        need_comment_copy = section in scope.sections and scope.sections[section]
         if need_comment_copy:
-            comments = getattr(
-                scope.sections[section][section], yaml.comments.Comment.attrib, None
-            )
+            comments = getattr(scope.sections[section][section], Comment.attrib, None)

         # read only the requested section's data.
         scope.sections[section] = syaml.syaml_dict({section: update_data})
         if need_comment_copy and comments:
-            setattr(scope.sections[section][section], yaml.comments.Comment.attrib, comments)
+            setattr(scope.sections[section][section], Comment.attrib, comments)

         scope._write_section(section)

@@ -26,7 +26,7 @@
 import socket
 import sys
 import time
-from typing import Dict  # novm
+from typing import Dict  # novm # noqa: F401

 import six

@@ -725,6 +725,15 @@ def query_by_spec_hash(self, hash_key, data=None):
             return True, db._data[hash_key]
         return False, None

+    def query_local_by_spec_hash(self, hash_key):
+        """Get a spec by hash in the local database
+
+        Return:
+            (InstallRecord or None): InstallRecord when installed
+            locally, otherwise None."""
+        with self.read_transaction():
+            return self._data.get(hash_key, None)
+
     def _assign_dependencies(self, hash_key, installs, data):
         # Add dependencies from other records in the install DB to
         # form a full spec.
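The new method gives a cheap, read-locked lookup that returns None instead of a (found, record) pair; the uninstall hunks earlier use it exactly this way. A sketch (hash string is illustrative):

record = spack.store.db.query_local_by_spec_hash("abcdef1234567890")
if record is not None and record.installed:
    print("installed:", record.spec.short_spec)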
@@ -493,9 +493,14 @@ def get_projection_for_spec(self, spec):
         Relies on the ordering of projections to avoid ambiguity.
         """
         spec = spack.spec.Spec(spec)
-        proj = spack.projections.get_projection(self.projections, spec)
+        locator_spec = spec
+
+        if spec.package.extendee_spec:
+            locator_spec = spec.package.extendee_spec
+
+        proj = spack.projections.get_projection(self.projections, locator_spec)
         if proj:
-            return os.path.join(self._root, spec.format(proj))
+            return os.path.join(self._root, locator_spec.format(proj))
         return self._root

     def get_all_specs(self):
@@ -682,22 +687,34 @@ def skip_list(file):
         for dst in visitor.directories:
             os.mkdir(os.path.join(self._root, dst))

-        # Then group the files to be linked by spec...
-        # For compatibility, we have to create a merge_map dict mapping
-        # full_src => full_dst
-        files_per_spec = itertools.groupby(visitor.files.items(), key=lambda item: item[1][0])
-
-        for (spec, (src_root, rel_paths)) in zip(specs, files_per_spec):
-            merge_map = dict()
-            for dst_rel, (_, src_rel) in rel_paths:
-                full_src = os.path.join(src_root, src_rel)
-                full_dst = os.path.join(self._root, dst_rel)
-                merge_map[full_src] = full_dst
+        # Link the files using a "merge map": full src => full dst
+        merge_map_per_prefix = self._source_merge_visitor_to_merge_map(visitor)
+        for spec in specs:
+            merge_map = merge_map_per_prefix.get(spec.package.view_source(), None)
+            if not merge_map:
+                # Not every spec may have files to contribute.
+                continue
             spec.package.add_files_to_view(self, merge_map, skip_if_exists=False)

         # Finally create the metadata dirs.
         self.link_metadata(specs)

+    def _source_merge_visitor_to_merge_map(self, visitor):
+        # For compatibility with add_files_to_view, we have to create a
+        # merge_map of the form join(src_root, src_rel) => join(dst_root, dst_rel),
+        # but our visitor.files format is dst_rel => (src_root, src_rel).
+        # We exploit that visitor.files is an ordered dict, and files per source
+        # prefix are contiguous.
+        source_root = lambda item: item[1][0]
+        per_source = itertools.groupby(visitor.files.items(), key=source_root)
+        return {
+            src_root: {
+                os.path.join(src_root, src_rel): os.path.join(self._root, dst_rel)
+                for dst_rel, (_, src_rel) in group
+            }
+            for src_root, group in per_source
+        }
+
     def link_metadata(self, specs):
         metadata_visitor = SourceMergeVisitor()

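Note that `itertools.groupby` only groups contiguous runs, which is exactly why the comment in the new helper stresses that `visitor.files` keeps files of one source prefix contiguous. A standalone illustration with made-up paths:

import itertools
import os

# dst_rel => (src_root, src_rel), contiguous per src_root, like visitor.files
files = {
    "bin/python": ("/prefix/python", "bin/python"),
    "bin/pip": ("/prefix/py-pip", "bin/pip"),
    "bin/pip3": ("/prefix/py-pip", "bin/pip3"),
}
root = "/view"
per_source = itertools.groupby(files.items(), key=lambda item: item[1][0])
merge_maps = {
    src_root: {
        os.path.join(src_root, src_rel): os.path.join(root, dst_rel)
        for dst_rel, (_, src_rel) in group
    }
    for src_root, group in per_source
}
print(merge_maps["/prefix/py-pip"])
# {'/prefix/py-pip/bin/pip': '/view/bin/pip', '/prefix/py-pip/bin/pip3': '/view/bin/pip3'}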
@@ -743,6 +760,10 @@ def get_projection_for_spec(self, spec):
         Relies on the ordering of projections to avoid ambiguity.
         """
         spec = spack.spec.Spec(spec)
+
+        if spec.package.extendee_spec:
+            spec = spec.package.extendee_spec
+
         proj = spack.projections.get_projection(self.projections, spec)
         if proj:
             return os.path.join(self._root, spec.format(proj))
@@ -56,9 +56,9 @@
 import spack.store
 import spack.util.executable
 import spack.util.path
+import spack.util.timer as timer
 from spack.util.environment import EnvironmentModifications, dump_environment
 from spack.util.executable import which
-from spack.util.timer import Timer

 #: Counter to support unique spec sequencing that is used to ensure packages
 #: with the same priority are (initially) processed in the order in which they
@@ -304,9 +304,9 @@ def _install_from_cache(pkg, cache_only, explicit, unsigned=False):
         bool: ``True`` if the package was extract from binary cache,
         ``False`` otherwise
     """
-    timer = Timer()
+    t = timer.Timer()
     installed_from_cache = _try_install_from_binary_cache(
-        pkg, explicit, unsigned=unsigned, timer=timer
+        pkg, explicit, unsigned=unsigned, timer=t
     )
     pkg_id = package_id(pkg)
     if not installed_from_cache:
@@ -316,14 +316,14 @@ def _install_from_cache(pkg, cache_only, explicit, unsigned=False):

         tty.msg("{0}: installing from source".format(pre))
         return False
-    timer.stop()
+    t.stop()
     tty.debug("Successfully extracted {0} from binary cache".format(pkg_id))
     _print_timer(
         pre=_log_prefix(pkg.name),
         pkg_id=pkg_id,
-        fetch=timer.phases.get("search", 0) + timer.phases.get("fetch", 0),
-        build=timer.phases.get("install", 0),
-        total=timer.total,
+        fetch=t.duration("search") + t.duration("fetch"),
+        build=t.duration("install"),
+        total=t.duration(),
     )
     _print_installed_pkg(pkg.spec.prefix)
     spack.hooks.post_install(pkg.spec)
@@ -372,7 +372,7 @@ def _process_external_package(pkg, explicit):


 def _process_binary_cache_tarball(
-    pkg, binary_spec, explicit, unsigned, mirrors_for_spec=None, timer=None
+    pkg, binary_spec, explicit, unsigned, mirrors_for_spec=None, timer=timer.NULL_TIMER
 ):
     """
     Process the binary cache tarball.
@@ -391,11 +391,11 @@ def _process_binary_cache_tarball(
         bool: ``True`` if the package was extracted from binary cache,
         else ``False``
     """
+    timer.start("fetch")
     download_result = binary_distribution.download_tarball(
         binary_spec, unsigned, mirrors_for_spec=mirrors_for_spec
     )
-    if timer:
-        timer.phase("fetch")
+    timer.stop("fetch")
     # see #10063 : install from source if tarball doesn't exist
     if download_result is None:
         tty.msg("{0} exists in binary cache but with different hash".format(pkg.name))
@@ -405,6 +405,7 @@ def _process_binary_cache_tarball(
     tty.msg("Extracting {0} from binary cache".format(pkg_id))

     # don't print long padded paths while extracting/relocating binaries
+    timer.start("install")
     with spack.util.path.filter_padding():
         binary_distribution.extract_tarball(
             binary_spec, download_result, allow_root=False, unsigned=unsigned, force=False
@@ -412,12 +413,11 @@ def _process_binary_cache_tarball(

     pkg.installed_from_binary_cache = True
     spack.store.db.add(pkg.spec, spack.store.layout, explicit=explicit)
-    if timer:
-        timer.phase("install")
+    timer.stop("install")
     return True


-def _try_install_from_binary_cache(pkg, explicit, unsigned=False, timer=None):
+def _try_install_from_binary_cache(pkg, explicit, unsigned=False, timer=timer.NULL_TIMER):
     """
     Try to extract the package from binary cache.

@ -430,10 +430,10 @@ def _try_install_from_binary_cache(pkg, explicit, unsigned=False, timer=None):
|
|||||||
"""
|
"""
|
||||||
pkg_id = package_id(pkg)
|
pkg_id = package_id(pkg)
|
||||||
tty.debug("Searching for binary cache of {0}".format(pkg_id))
|
tty.debug("Searching for binary cache of {0}".format(pkg_id))
|
||||||
matches = binary_distribution.get_mirrors_for_spec(pkg.spec)
|
|
||||||
|
|
||||||
if timer:
|
timer.start("search")
|
||||||
timer.phase("search")
|
matches = binary_distribution.get_mirrors_for_spec(pkg.spec)
|
||||||
|
timer.stop("search")
|
||||||
|
|
||||||
if not matches:
|
if not matches:
|
||||||
return False
|
return False
|
||||||
@ -462,11 +462,10 @@ def combine_phase_logs(phase_log_files, log_path):
|
|||||||
phase_log_files (list): a list or iterator of logs to combine
|
phase_log_files (list): a list or iterator of logs to combine
|
||||||
log_path (str): the path to combine them to
|
log_path (str): the path to combine them to
|
||||||
"""
|
"""
|
||||||
|
with open(log_path, "wb") as log_file:
|
||||||
with open(log_path, "w") as log_file:
|
|
||||||
for phase_log_file in phase_log_files:
|
for phase_log_file in phase_log_files:
|
||||||
with open(phase_log_file, "r") as phase_log:
|
with open(phase_log_file, "rb") as phase_log:
|
||||||
log_file.write(phase_log.read())
|
shutil.copyfileobj(phase_log, log_file)
|
||||||
|
|
||||||
|
|
||||||
def dump_packages(spec, path):
|
def dump_packages(spec, path):
|
||||||
@ -1774,7 +1773,9 @@ def install(self):
|
|||||||
raise
|
raise
|
||||||
|
|
||||||
except binary_distribution.NoChecksumException as exc:
|
except binary_distribution.NoChecksumException as exc:
|
||||||
if not task.cache_only:
|
if task.cache_only:
|
||||||
|
raise
|
||||||
|
|
||||||
# Checking hash on downloaded binary failed.
|
# Checking hash on downloaded binary failed.
|
||||||
err = "Failed to install {0} from binary cache due to {1}:"
|
err = "Failed to install {0} from binary cache due to {1}:"
|
||||||
err += " Requeueing to install from source."
|
err += " Requeueing to install from source."
|
||||||
@ -1906,7 +1907,7 @@ def __init__(self, pkg, install_args):
|
|||||||
self.env_mods = install_args.get("env_modifications", EnvironmentModifications())
|
self.env_mods = install_args.get("env_modifications", EnvironmentModifications())
|
||||||
|
|
||||||
# timer for build phases
|
# timer for build phases
|
||||||
self.timer = Timer()
|
self.timer = timer.Timer()
|
||||||
|
|
||||||
# If we are using a padded path, filter the output to compress padded paths
|
# If we are using a padded path, filter the output to compress padded paths
|
||||||
# The real log still has full-length paths.
|
# The real log still has full-length paths.
|
||||||
@ -1961,8 +1962,8 @@ def run(self):
|
|||||||
pre=self.pre,
|
pre=self.pre,
|
||||||
pkg_id=self.pkg_id,
|
pkg_id=self.pkg_id,
|
||||||
fetch=self.pkg._fetch_time,
|
fetch=self.pkg._fetch_time,
|
||||||
build=self.timer.total - self.pkg._fetch_time,
|
build=self.timer.duration() - self.pkg._fetch_time,
|
||||||
total=self.timer.total,
|
total=self.timer.duration(),
|
||||||
)
|
)
|
||||||
_print_installed_pkg(self.pkg.prefix)
|
_print_installed_pkg(self.pkg.prefix)
|
||||||
|
|
||||||
@ -2035,6 +2036,7 @@ def _real_install(self):
|
|||||||
)
|
)
|
||||||
|
|
||||||
with log_contextmanager as logger:
|
with log_contextmanager as logger:
|
||||||
|
# Redirect stdout and stderr to daemon pipe
|
||||||
with logger.force_echo():
|
with logger.force_echo():
|
||||||
inner_debug_level = tty.debug_level()
|
inner_debug_level = tty.debug_level()
|
||||||
tty.set_debug(debug_level)
|
tty.set_debug(debug_level)
|
||||||
@ -2042,12 +2044,11 @@ def _real_install(self):
|
|||||||
tty.msg(msg.format(self.pre, phase_fn.name))
|
tty.msg(msg.format(self.pre, phase_fn.name))
|
||||||
tty.set_debug(inner_debug_level)
|
tty.set_debug(inner_debug_level)
|
||||||
|
|
||||||
# Redirect stdout and stderr to daemon pipe
|
|
||||||
self.timer.phase(phase_fn.name)
|
|
||||||
|
|
||||||
# Catch any errors to report to logging
|
# Catch any errors to report to logging
|
||||||
|
self.timer.start(phase_fn.name)
|
||||||
phase_fn.execute()
|
phase_fn.execute()
|
||||||
spack.hooks.on_phase_success(pkg, phase_fn.name, log_file)
|
spack.hooks.on_phase_success(pkg, phase_fn.name, log_file)
|
||||||
|
self.timer.stop(phase_fn.name)
|
||||||
|
|
||||||
except BaseException:
|
except BaseException:
|
||||||
combine_phase_logs(pkg.phase_log_files, pkg.log_path)
|
combine_phase_logs(pkg.phase_log_files, pkg.log_path)
|
||||||
|
@ -34,7 +34,7 @@
|
|||||||
import inspect
|
import inspect
|
||||||
import os.path
|
import os.path
|
||||||
import re
|
import re
|
||||||
from typing import Optional # novm
|
from typing import Optional # novm # noqa: F401
|
||||||
|
|
||||||
import llnl.util.filesystem
|
import llnl.util.filesystem
|
||||||
import llnl.util.tty as tty
|
import llnl.util.tty as tty
|
||||||
@ -402,13 +402,19 @@ def get_module(module_type, spec, get_full_path, module_set_name="default", requ
|
|||||||
else:
|
else:
|
||||||
writer = spack.modules.module_types[module_type](spec, module_set_name)
|
writer = spack.modules.module_types[module_type](spec, module_set_name)
|
||||||
if not os.path.isfile(writer.layout.filename):
|
if not os.path.isfile(writer.layout.filename):
|
||||||
|
fmt_str = "{name}{@version}{/hash:7}"
|
||||||
if not writer.conf.excluded:
|
if not writer.conf.excluded:
|
||||||
err_msg = "No module available for package {0} at {1}".format(
|
raise ModuleNotFoundError(
|
||||||
spec, writer.layout.filename
|
"The module for package {} should be at {}, but it does not exist".format(
|
||||||
|
spec.format(fmt_str), writer.layout.filename
|
||||||
|
)
|
||||||
)
|
)
|
||||||
raise ModuleNotFoundError(err_msg)
|
|
||||||
elif required:
|
elif required:
|
||||||
tty.debug("The module configuration has excluded {0}: " "omitting it".format(spec))
|
tty.debug(
|
||||||
|
"The module configuration has excluded {}: omitting it".format(
|
||||||
|
spec.format(fmt_str)
|
||||||
|
)
|
||||||
|
)
|
||||||
else:
|
else:
|
||||||
return None
|
return None
|
||||||
|
|
||||||
@ -696,7 +702,7 @@ def configure_options(self):
|
|||||||
|
|
||||||
if os.path.exists(pkg.install_configure_args_path):
|
if os.path.exists(pkg.install_configure_args_path):
|
||||||
with open(pkg.install_configure_args_path, "r") as args_file:
|
with open(pkg.install_configure_args_path, "r") as args_file:
|
||||||
return args_file.read()
|
return spack.util.path.padding_filter(args_file.read())
|
||||||
|
|
||||||
# Returning a false-like value makes the default templates skip
|
# Returning a false-like value makes the default templates skip
|
||||||
# the configure option section
|
# the configure option section
|
||||||
|
@ -27,7 +27,16 @@
|
|||||||
import traceback
|
import traceback
|
||||||
import types
|
import types
|
||||||
import warnings
|
import warnings
|
||||||
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type # novm
|
from typing import ( # novm # noqa: F401
|
||||||
|
Any,
|
||||||
|
Callable,
|
||||||
|
Dict,
|
||||||
|
Iterable,
|
||||||
|
List,
|
||||||
|
Optional,
|
||||||
|
Tuple,
|
||||||
|
Type,
|
||||||
|
)
|
||||||
|
|
||||||
import six
|
import six
|
||||||
|
|
||||||
@ -531,10 +540,6 @@ class PackageBase(six.with_metaclass(PackageMeta, WindowsRPathMeta, PackageViewM
|
|||||||
# These are default values for instance variables.
|
# These are default values for instance variables.
|
||||||
#
|
#
|
||||||
|
|
||||||
#: A list or set of build time test functions to be called when tests
|
|
||||||
#: are executed or 'None' if there are no such test functions.
|
|
||||||
build_time_test_callbacks = None # type: Optional[List[str]]
|
|
||||||
|
|
||||||
#: By default, packages are not virtual
|
#: By default, packages are not virtual
|
||||||
#: Virtual packages override this attribute
|
#: Virtual packages override this attribute
|
||||||
virtual = False
|
virtual = False
|
||||||
@ -543,10 +548,6 @@ class PackageBase(six.with_metaclass(PackageMeta, WindowsRPathMeta, PackageViewM
|
|||||||
#: those that do not can be used to install a set of other Spack packages.
|
#: those that do not can be used to install a set of other Spack packages.
|
||||||
has_code = True
|
has_code = True
|
||||||
|
|
||||||
#: A list or set of install time test functions to be called when tests
|
|
||||||
#: are executed or 'None' if there are no such test functions.
|
|
||||||
install_time_test_callbacks = None # type: Optional[List[str]]
|
|
||||||
|
|
||||||
#: By default we build in parallel. Subclasses can override this.
|
#: By default we build in parallel. Subclasses can override this.
|
||||||
parallel = True
|
parallel = True
|
||||||
|
|
||||||
|
@ -3,6 +3,7 @@
|
|||||||
#
|
#
|
||||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||||
import collections
|
import collections
|
||||||
|
import itertools
|
||||||
import multiprocessing.pool
|
import multiprocessing.pool
|
||||||
import os
|
import os
|
||||||
import re
|
import re
|
||||||
@ -296,17 +297,24 @@ def modify_macho_object(cur_path, rpaths, deps, idpath, paths_to_paths):
|
|||||||
if idpath:
|
if idpath:
|
||||||
new_idpath = paths_to_paths.get(idpath, None)
|
new_idpath = paths_to_paths.get(idpath, None)
|
||||||
if new_idpath and not idpath == new_idpath:
|
if new_idpath and not idpath == new_idpath:
|
||||||
args += ["-id", new_idpath]
|
args += [("-id", new_idpath)]
|
||||||
|
|
||||||
for dep in deps:
|
for dep in deps:
|
||||||
new_dep = paths_to_paths.get(dep)
|
new_dep = paths_to_paths.get(dep)
|
||||||
if new_dep and dep != new_dep:
|
if new_dep and dep != new_dep:
|
||||||
args += ["-change", dep, new_dep]
|
args += [("-change", dep, new_dep)]
|
||||||
|
|
||||||
|
new_rpaths = []
|
||||||
for orig_rpath in rpaths:
|
for orig_rpath in rpaths:
|
||||||
new_rpath = paths_to_paths.get(orig_rpath)
|
new_rpath = paths_to_paths.get(orig_rpath)
|
||||||
if new_rpath and not orig_rpath == new_rpath:
|
if new_rpath and not orig_rpath == new_rpath:
|
||||||
args += ["-rpath", orig_rpath, new_rpath]
|
args_to_add = ("-rpath", orig_rpath, new_rpath)
|
||||||
|
if args_to_add not in args and new_rpath not in new_rpaths:
|
||||||
|
args += [args_to_add]
|
||||||
|
new_rpaths.append(new_rpath)
|
||||||
|
|
||||||
|
# Deduplicate and flatten
|
||||||
|
args = list(itertools.chain.from_iterable(llnl.util.lang.dedupe(args)))
|
||||||
if args:
|
if args:
|
||||||
args.append(str(cur_path))
|
args.append(str(cur_path))
|
||||||
install_name_tool = executable.Executable("install_name_tool")
|
install_name_tool = executable.Executable("install_name_tool")
|
||||||
|
@ -622,11 +622,13 @@ def solve(self, setup, specs, reuse=None, output=None, control=None):
|
|||||||
self.control = control or default_clingo_control()
|
self.control = control or default_clingo_control()
|
||||||
# set up the problem -- this generates facts and rules
|
# set up the problem -- this generates facts and rules
|
||||||
self.assumptions = []
|
self.assumptions = []
|
||||||
|
timer.start("setup")
|
||||||
with self.control.backend() as backend:
|
with self.control.backend() as backend:
|
||||||
self.backend = backend
|
self.backend = backend
|
||||||
setup.setup(self, specs, reuse=reuse)
|
setup.setup(self, specs, reuse=reuse)
|
||||||
timer.phase("setup")
|
timer.stop("setup")
|
||||||
|
|
||||||
|
timer.start("load")
|
||||||
# read in the main ASP program and display logic -- these are
|
# read in the main ASP program and display logic -- these are
|
||||||
# handwritten, not generated, so we load them as resources
|
# handwritten, not generated, so we load them as resources
|
||||||
parent_dir = os.path.dirname(__file__)
|
parent_dir = os.path.dirname(__file__)
|
||||||
@ -656,12 +658,13 @@ def visit(node):
|
|||||||
self.control.load(os.path.join(parent_dir, "concretize.lp"))
|
self.control.load(os.path.join(parent_dir, "concretize.lp"))
|
||||||
self.control.load(os.path.join(parent_dir, "os_compatibility.lp"))
|
self.control.load(os.path.join(parent_dir, "os_compatibility.lp"))
|
||||||
self.control.load(os.path.join(parent_dir, "display.lp"))
|
self.control.load(os.path.join(parent_dir, "display.lp"))
|
||||||
timer.phase("load")
|
timer.stop("load")
|
||||||
|
|
||||||
# Grounding is the first step in the solve -- it turns our facts
|
# Grounding is the first step in the solve -- it turns our facts
|
||||||
# and first-order logic rules into propositional logic.
|
# and first-order logic rules into propositional logic.
|
||||||
|
timer.start("ground")
|
||||||
self.control.ground([("base", [])])
|
self.control.ground([("base", [])])
|
||||||
timer.phase("ground")
|
timer.stop("ground")
|
||||||
|
|
||||||
# With a grounded program, we can run the solve.
|
# With a grounded program, we can run the solve.
|
||||||
result = Result(specs)
|
result = Result(specs)
|
||||||
@ -679,8 +682,10 @@ def on_model(model):
|
|||||||
|
|
||||||
if clingo_cffi:
|
if clingo_cffi:
|
||||||
solve_kwargs["on_unsat"] = cores.append
|
solve_kwargs["on_unsat"] = cores.append
|
||||||
|
|
||||||
|
timer.start("solve")
|
||||||
solve_result = self.control.solve(**solve_kwargs)
|
solve_result = self.control.solve(**solve_kwargs)
|
||||||
timer.phase("solve")
|
timer.stop("solve")
|
||||||
|
|
||||||
# once done, construct the solve result
|
# once done, construct the solve result
|
||||||
result.satisfiable = solve_result.satisfiable
|
result.satisfiable = solve_result.satisfiable
|
||||||
@ -940,11 +945,13 @@ def package_compiler_defaults(self, pkg):
|
|||||||
def package_requirement_rules(self, pkg):
|
def package_requirement_rules(self, pkg):
|
||||||
pkg_name = pkg.name
|
pkg_name = pkg.name
|
||||||
config = spack.config.get("packages")
|
config = spack.config.get("packages")
|
||||||
requirements = config.get(pkg_name, {}).get("require", []) or config.get("all", {}).get(
|
requirements, raise_on_failure = config.get(pkg_name, {}).get("require", []), True
|
||||||
"require", []
|
if not requirements:
|
||||||
)
|
requirements, raise_on_failure = config.get("all", {}).get("require", []), False
|
||||||
rules = self._rules_from_requirements(pkg_name, requirements)
|
rules = self._rules_from_requirements(pkg_name, requirements)
|
||||||
self.emit_facts_from_requirement_rules(rules, virtual=False)
|
self.emit_facts_from_requirement_rules(
|
||||||
|
rules, virtual=False, raise_on_failure=raise_on_failure
|
||||||
|
)
|
||||||
|
|
||||||
def _rules_from_requirements(self, pkg_name, requirements):
|
def _rules_from_requirements(self, pkg_name, requirements):
|
||||||
"""Manipulate requirements from packages.yaml, and return a list of tuples
|
"""Manipulate requirements from packages.yaml, and return a list of tuples
|
||||||
@ -1071,11 +1078,13 @@ def condition(self, required_spec, imposed_spec=None, name=None, msg=None, node=
|
|||||||
named_cond.name = named_cond.name or name
|
named_cond.name = named_cond.name or name
|
||||||
assert named_cond.name, "must provide name for anonymous condtions!"
|
assert named_cond.name, "must provide name for anonymous condtions!"
|
||||||
|
|
||||||
|
# Check if we can emit the requirements before updating the condition ID counter.
|
||||||
|
# In this way, if a condition can't be emitted but the exception is handled in the caller,
|
||||||
|
# we won't emit partial facts.
|
||||||
|
requirements = self.spec_clauses(named_cond, body=True, required_from=name)
|
||||||
|
|
||||||
condition_id = next(self._condition_id_counter)
|
condition_id = next(self._condition_id_counter)
|
||||||
self.gen.fact(fn.condition(condition_id, msg))
|
self.gen.fact(fn.condition(condition_id, msg))
|
||||||
|
|
||||||
# requirements trigger the condition
|
|
||||||
requirements = self.spec_clauses(named_cond, body=True, required_from=name)
|
|
||||||
for pred in requirements:
|
for pred in requirements:
|
||||||
self.gen.fact(fn.condition_requirement(condition_id, pred.name, *pred.args))
|
self.gen.fact(fn.condition_requirement(condition_id, pred.name, *pred.args))
|
||||||
|
|
||||||
@ -1171,23 +1180,39 @@ def provider_requirements(self):
|
|||||||
rules = self._rules_from_requirements(virtual_str, requirements)
|
rules = self._rules_from_requirements(virtual_str, requirements)
|
||||||
self.emit_facts_from_requirement_rules(rules, virtual=True)
|
self.emit_facts_from_requirement_rules(rules, virtual=True)
|
||||||
|
|
||||||
def emit_facts_from_requirement_rules(self, rules, virtual=False):
|
def emit_facts_from_requirement_rules(self, rules, virtual=False, raise_on_failure=True):
|
||||||
"""Generate facts to enforce requirements from packages.yaml."""
|
"""Generate facts to enforce requirements from packages.yaml.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
rules: rules for which we want facts to be emitted
|
||||||
|
virtual: if True the requirements are on a virtual spec
|
||||||
|
raise_on_failure: if True raise an exception when a requirement condition is invalid
|
||||||
|
for the current spec. If False, just skip that condition
|
||||||
|
"""
|
||||||
for requirement_grp_id, (pkg_name, policy, requirement_grp) in enumerate(rules):
|
for requirement_grp_id, (pkg_name, policy, requirement_grp) in enumerate(rules):
|
||||||
self.gen.fact(fn.requirement_group(pkg_name, requirement_grp_id))
|
self.gen.fact(fn.requirement_group(pkg_name, requirement_grp_id))
|
||||||
self.gen.fact(fn.requirement_policy(pkg_name, requirement_grp_id, policy))
|
self.gen.fact(fn.requirement_policy(pkg_name, requirement_grp_id, policy))
|
||||||
for requirement_weight, spec_str in enumerate(requirement_grp):
|
requirement_weight = 0
|
||||||
|
for spec_str in requirement_grp:
|
||||||
spec = spack.spec.Spec(spec_str)
|
spec = spack.spec.Spec(spec_str)
|
||||||
if not spec.name:
|
if not spec.name:
|
||||||
spec.name = pkg_name
|
spec.name = pkg_name
|
||||||
when_spec = spec
|
when_spec = spec
|
||||||
if virtual:
|
if virtual:
|
||||||
when_spec = spack.spec.Spec(pkg_name)
|
when_spec = spack.spec.Spec(pkg_name)
|
||||||
|
|
||||||
|
try:
|
||||||
member_id = self.condition(
|
member_id = self.condition(
|
||||||
required_spec=when_spec, imposed_spec=spec, name=pkg_name, node=virtual
|
required_spec=when_spec, imposed_spec=spec, name=pkg_name, node=virtual
|
||||||
)
|
)
|
||||||
|
except Exception:
|
||||||
|
if raise_on_failure:
|
||||||
|
raise RuntimeError("cannot emit requirements for the solver")
|
||||||
|
continue
|
||||||
|
|
||||||
self.gen.fact(fn.requirement_group_member(member_id, pkg_name, requirement_grp_id))
|
self.gen.fact(fn.requirement_group_member(member_id, pkg_name, requirement_grp_id))
|
||||||
self.gen.fact(fn.requirement_has_weight(member_id, requirement_weight))
|
self.gen.fact(fn.requirement_has_weight(member_id, requirement_weight))
|
||||||
|
requirement_weight += 1
|
||||||
|
|
||||||
def external_packages(self):
|
def external_packages(self):
|
||||||
"""Facts on external packages, as read from packages.yaml"""
|
"""Facts on external packages, as read from packages.yaml"""
|
||||||
|
@ -539,12 +539,12 @@ requirement_group_satisfied(Package, X) :-
|
|||||||
requirement_policy(Package, X, "one_of"),
|
requirement_policy(Package, X, "one_of"),
|
||||||
requirement_group(Package, X).
|
requirement_group(Package, X).
|
||||||
|
|
||||||
requirement_weight(Package, W) :-
|
requirement_weight(Package, Group, W) :-
|
||||||
condition_holds(Y),
|
condition_holds(Y),
|
||||||
requirement_has_weight(Y, W),
|
requirement_has_weight(Y, W),
|
||||||
requirement_group_member(Y, Package, X),
|
requirement_group_member(Y, Package, Group),
|
||||||
requirement_policy(Package, X, "one_of"),
|
requirement_policy(Package, Group, "one_of"),
|
||||||
requirement_group_satisfied(Package, X).
|
requirement_group_satisfied(Package, Group).
|
||||||
|
|
||||||
requirement_group_satisfied(Package, X) :-
|
requirement_group_satisfied(Package, X) :-
|
||||||
1 { condition_holds(Y) : requirement_group_member(Y, Package, X) } ,
|
1 { condition_holds(Y) : requirement_group_member(Y, Package, X) } ,
|
||||||
@ -552,18 +552,18 @@ requirement_group_satisfied(Package, X) :-
|
|||||||
requirement_policy(Package, X, "any_of"),
|
requirement_policy(Package, X, "any_of"),
|
||||||
requirement_group(Package, X).
|
requirement_group(Package, X).
|
||||||
|
|
||||||
requirement_weight(Package, W) :-
|
requirement_weight(Package, Group, W) :-
|
||||||
W = #min {
|
W = #min {
|
||||||
Z : requirement_has_weight(Y, Z), condition_holds(Y), requirement_group_member(Y, Package, X);
|
Z : requirement_has_weight(Y, Z), condition_holds(Y), requirement_group_member(Y, Package, Group);
|
||||||
% We need this to avoid an annoying warning during the solve
|
% We need this to avoid an annoying warning during the solve
|
||||||
% concretize.lp:1151:5-11: info: tuple ignored:
|
% concretize.lp:1151:5-11: info: tuple ignored:
|
||||||
% #sup@73
|
% #sup@73
|
||||||
10000
|
10000
|
||||||
},
|
},
|
||||||
requirement_policy(Package, X, "any_of"),
|
requirement_policy(Package, Group, "any_of"),
|
||||||
requirement_group_satisfied(Package, X).
|
requirement_group_satisfied(Package, Group).
|
||||||
|
|
||||||
error(2, "Cannot satisfy requirement group for package '{0}'", Package) :-
|
error(2, "Cannot satisfy the requirements in packages.yaml for the '{0}' package. You may want to delete them to proceed with concretization. To check where the requirements are defined run 'spack config blame packages'", Package) :-
|
||||||
activate_requirement_rules(Package),
|
activate_requirement_rules(Package),
|
||||||
requirement_group(Package, X),
|
requirement_group(Package, X),
|
||||||
not requirement_group_satisfied(Package, X).
|
not requirement_group_satisfied(Package, X).
|
||||||
@ -1222,8 +1222,8 @@ opt_criterion(75, "requirement weight").
|
|||||||
#minimize{ 0@275: #true }.
|
#minimize{ 0@275: #true }.
|
||||||
#minimize{ 0@75: #true }.
|
#minimize{ 0@75: #true }.
|
||||||
#minimize {
|
#minimize {
|
||||||
Weight@75+Priority
|
Weight@75+Priority,Package,Group
|
||||||
: requirement_weight(Package, Weight),
|
: requirement_weight(Package, Group, Weight),
|
||||||
build_priority(Package, Priority)
|
build_priority(Package, Priority)
|
||||||
}.
|
}.
|
||||||
|
|
||||||
|
@ -2,7 +2,7 @@
|
|||||||
# Spack Project Developers. See the top-level COPYRIGHT file for details.
|
# Spack Project Developers. See the top-level COPYRIGHT file for details.
|
||||||
#
|
#
|
||||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||||
|
import inspect
|
||||||
import os
|
import os
|
||||||
import platform
|
import platform
|
||||||
import posixpath
|
import posixpath
|
||||||
@ -14,6 +14,7 @@
|
|||||||
|
|
||||||
import spack.build_environment
|
import spack.build_environment
|
||||||
import spack.config
|
import spack.config
|
||||||
|
import spack.package_base
|
||||||
import spack.spec
|
import spack.spec
|
||||||
import spack.util.spack_yaml as syaml
|
import spack.util.spack_yaml as syaml
|
||||||
from spack.build_environment import (
|
from spack.build_environment import (
|
||||||
@ -130,13 +131,13 @@ def test_static_to_shared_library(build_environment):
|
|||||||
"linux": (
|
"linux": (
|
||||||
"/bin/mycc -shared"
|
"/bin/mycc -shared"
|
||||||
" -Wl,--disable-new-dtags"
|
" -Wl,--disable-new-dtags"
|
||||||
" -Wl,-soname,{2} -Wl,--whole-archive {0}"
|
" -Wl,-soname -Wl,{2} -Wl,--whole-archive {0}"
|
||||||
" -Wl,--no-whole-archive -o {1}"
|
" -Wl,--no-whole-archive -o {1}"
|
||||||
),
|
),
|
||||||
"darwin": (
|
"darwin": (
|
||||||
"/bin/mycc -dynamiclib"
|
"/bin/mycc -dynamiclib"
|
||||||
" -Wl,--disable-new-dtags"
|
" -Wl,--disable-new-dtags"
|
||||||
" -install_name {1} -Wl,-force_load,{0} -o {1}"
|
" -install_name {1} -Wl,-force_load -Wl,{0} -o {1}"
|
||||||
),
|
),
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -521,3 +522,27 @@ def test_dirty_disable_module_unload(config, mock_packages, working_env, mock_mo
|
|||||||
assert mock_module_cmd.calls
|
assert mock_module_cmd.calls
|
||||||
assert any(("unload", "cray-libsci") == item[0] for item in mock_module_cmd.calls)
|
assert any(("unload", "cray-libsci") == item[0] for item in mock_module_cmd.calls)
|
||||||
assert any(("unload", "cray-mpich") == item[0] for item in mock_module_cmd.calls)
|
assert any(("unload", "cray-mpich") == item[0] for item in mock_module_cmd.calls)
|
||||||
|
|
||||||
|
|
||||||
|
class TestModuleMonkeyPatcher:
|
||||||
|
def test_getting_attributes(self, config, mock_packages):
|
||||||
|
s = spack.spec.Spec("libelf").concretized()
|
||||||
|
module_wrapper = spack.build_environment.ModuleChangePropagator(s.package)
|
||||||
|
assert module_wrapper.Libelf == s.package.module.Libelf
|
||||||
|
|
||||||
|
def test_setting_attributes(self, config, mock_packages):
|
||||||
|
s = spack.spec.Spec("libelf").concretized()
|
||||||
|
module = s.package.module
|
||||||
|
module_wrapper = spack.build_environment.ModuleChangePropagator(s.package)
|
||||||
|
|
||||||
|
# Setting an attribute has an immediate effect
|
||||||
|
module_wrapper.SOME_ATTRIBUTE = 1
|
||||||
|
assert module.SOME_ATTRIBUTE == 1
|
||||||
|
|
||||||
|
# We can also propagate the settings to classes in the MRO
|
||||||
|
module_wrapper.propagate_changes_to_mro()
|
||||||
|
for cls in inspect.getmro(type(s.package)):
|
||||||
|
current_module = cls.module
|
||||||
|
if current_module == spack.package_base:
|
||||||
|
break
|
||||||
|
assert current_module.SOME_ATTRIBUTE == 1
|
||||||
|
@ -121,3 +121,31 @@ def test_old_style_compatibility_with_super(spec_str, method_name, expected):
|
|||||||
builder = spack.builder.create(s.package)
|
builder = spack.builder.create(s.package)
|
||||||
value = getattr(builder, method_name)()
|
value = getattr(builder, method_name)()
|
||||||
assert value == expected
|
assert value == expected
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.regression("33928")
|
||||||
|
@pytest.mark.usefixtures("builder_test_repository", "config", "working_env")
|
||||||
|
@pytest.mark.disable_clean_stage_check
|
||||||
|
def test_build_time_tests_are_executed_from_default_builder():
|
||||||
|
s = spack.spec.Spec("old-style-autotools").concretized()
|
||||||
|
builder = spack.builder.create(s.package)
|
||||||
|
builder.pkg.run_tests = True
|
||||||
|
for phase_fn in builder:
|
||||||
|
phase_fn.execute()
|
||||||
|
|
||||||
|
assert os.environ.get("CHECK_CALLED") == "1", "Build time tests not executed"
|
||||||
|
assert os.environ.get("INSTALLCHECK_CALLED") == "1", "Install time tests not executed"
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.regression("34518")
|
||||||
|
@pytest.mark.usefixtures("builder_test_repository", "config", "working_env")
|
||||||
|
def test_monkey_patching_wrapped_pkg():
|
||||||
|
s = spack.spec.Spec("old-style-autotools").concretized()
|
||||||
|
builder = spack.builder.create(s.package)
|
||||||
|
assert s.package.run_tests is False
|
||||||
|
assert builder.pkg.run_tests is False
|
||||||
|
assert builder.pkg_with_dispatcher.run_tests is False
|
||||||
|
|
||||||
|
s.package.run_tests = True
|
||||||
|
assert builder.pkg.run_tests is True
|
||||||
|
assert builder.pkg_with_dispatcher.run_tests is True
|
||||||
|
@ -319,6 +319,63 @@ def test_fc_flags(wrapper_environment, wrapper_flags):
|
|||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def test_Wl_parsing(wrapper_environment):
|
||||||
|
check_args(
|
||||||
|
cc,
|
||||||
|
["-Wl,-rpath,/a,--enable-new-dtags,-rpath=/b,--rpath", "-Wl,/c"],
|
||||||
|
[real_cc]
|
||||||
|
+ target_args
|
||||||
|
+ ["-Wl,--disable-new-dtags", "-Wl,-rpath,/a", "-Wl,-rpath,/b", "-Wl,-rpath,/c"],
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def test_Xlinker_parsing(wrapper_environment):
|
||||||
|
# -Xlinker <x> ... -Xlinker <y> may have compiler flags inbetween, like -O3 in this
|
||||||
|
# example. Also check that a trailing -Xlinker (which is a compiler error) is not
|
||||||
|
# dropped or given an empty argument.
|
||||||
|
check_args(
|
||||||
|
cc,
|
||||||
|
[
|
||||||
|
"-Xlinker",
|
||||||
|
"-rpath",
|
||||||
|
"-O3",
|
||||||
|
"-Xlinker",
|
||||||
|
"/a",
|
||||||
|
"-Xlinker",
|
||||||
|
"--flag",
|
||||||
|
"-Xlinker",
|
||||||
|
"-rpath=/b",
|
||||||
|
"-Xlinker",
|
||||||
|
],
|
||||||
|
[real_cc]
|
||||||
|
+ target_args
|
||||||
|
+ [
|
||||||
|
"-Wl,--disable-new-dtags",
|
||||||
|
"-Wl,-rpath,/a",
|
||||||
|
"-Wl,-rpath,/b",
|
||||||
|
"-O3",
|
||||||
|
"-Xlinker",
|
||||||
|
"--flag",
|
||||||
|
"-Xlinker",
|
||||||
|
],
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
def test_rpath_without_value(wrapper_environment):
|
||||||
|
# cc -Wl,-rpath without a value shouldn't drop -Wl,-rpath;
|
||||||
|
# same for -Xlinker
|
||||||
|
check_args(
|
||||||
|
cc,
|
||||||
|
["-Wl,-rpath", "-O3", "-g"],
|
||||||
|
[real_cc] + target_args + ["-Wl,--disable-new-dtags", "-O3", "-g", "-Wl,-rpath"],
|
||||||
|
)
|
||||||
|
check_args(
|
||||||
|
cc,
|
||||||
|
["-Xlinker", "-rpath", "-O3", "-g"],
|
||||||
|
[real_cc] + target_args + ["-Wl,--disable-new-dtags", "-O3", "-g", "-Xlinker", "-rpath"],
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
def test_dep_rpath(wrapper_environment):
|
def test_dep_rpath(wrapper_environment):
|
||||||
"""Ensure RPATHs for root package are added."""
|
"""Ensure RPATHs for root package are added."""
|
||||||
check_args(cc, test_args, [real_cc] + target_args + common_compile_args)
|
check_args(cc, test_args, [real_cc] + target_args + common_compile_args)
|
||||||
|
@ -13,6 +13,7 @@
|
|||||||
import spack.database
|
import spack.database
|
||||||
import spack.environment as ev
|
import spack.environment as ev
|
||||||
import spack.main
|
import spack.main
|
||||||
|
import spack.schema.config
|
||||||
import spack.spec
|
import spack.spec
|
||||||
import spack.store
|
import spack.store
|
||||||
import spack.util.spack_yaml as syaml
|
import spack.util.spack_yaml as syaml
|
||||||
@ -652,3 +653,26 @@ def test_config_prefer_upstream(
|
|||||||
|
|
||||||
# Make sure a message about the conflicting hdf5's was given.
|
# Make sure a message about the conflicting hdf5's was given.
|
||||||
assert "- hdf5" in output
|
assert "- hdf5" in output
|
||||||
|
|
||||||
|
|
||||||
|
def test_environment_config_update(tmpdir, mutable_config, monkeypatch):
|
||||||
|
with open(str(tmpdir.join("spack.yaml")), "w") as f:
|
||||||
|
f.write(
|
||||||
|
"""\
|
||||||
|
spack:
|
||||||
|
config:
|
||||||
|
ccache: true
|
||||||
|
"""
|
||||||
|
)
|
||||||
|
|
||||||
|
def update_config(data):
|
||||||
|
data["ccache"] = False
|
||||||
|
return True
|
||||||
|
|
||||||
|
monkeypatch.setattr(spack.schema.config, "update", update_config)
|
||||||
|
|
||||||
|
with ev.Environment(str(tmpdir)):
|
||||||
|
config("update", "-y", "config")
|
||||||
|
|
||||||
|
with ev.Environment(str(tmpdir)) as e:
|
||||||
|
assert not e.raw_yaml["spack"]["config"]["ccache"]
|
||||||
|
@ -333,20 +333,6 @@ def test_error_conditions(self, cli_args, error_str):
|
|||||||
with pytest.raises(spack.error.SpackError, match=error_str):
|
with pytest.raises(spack.error.SpackError, match=error_str):
|
||||||
spack.cmd.mirror.mirror_create(args)
|
spack.cmd.mirror.mirror_create(args)
|
||||||
|
|
||||||
@pytest.mark.parametrize(
|
|
||||||
"cli_args,expected_end",
|
|
||||||
[
|
|
||||||
({"directory": None}, os.path.join("source")),
|
|
||||||
({"directory": os.path.join("foo", "bar")}, os.path.join("foo", "bar")),
|
|
||||||
],
|
|
||||||
)
|
|
||||||
def test_mirror_path_is_valid(self, cli_args, expected_end, config):
|
|
||||||
args = MockMirrorArgs(**cli_args)
|
|
||||||
local_push_url = spack.cmd.mirror.local_mirror_url_from_user(args.directory)
|
|
||||||
assert local_push_url.startswith("file:")
|
|
||||||
assert os.path.isabs(local_push_url.replace("file://", ""))
|
|
||||||
assert local_push_url.endswith(expected_end)
|
|
||||||
|
|
||||||
@pytest.mark.parametrize(
|
@pytest.mark.parametrize(
|
||||||
"cli_args,not_expected",
|
"cli_args,not_expected",
|
||||||
[
|
[
|
||||||
|
@ -3,12 +3,14 @@
|
|||||||
#
|
#
|
||||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||||
|
|
||||||
|
import itertools
|
||||||
import sys
|
import sys
|
||||||
|
|
||||||
import pytest
|
import pytest
|
||||||
|
|
||||||
import llnl.util.tty as tty
|
import llnl.util.tty as tty
|
||||||
|
|
||||||
|
import spack.cmd.uninstall
|
||||||
import spack.environment
|
import spack.environment
|
||||||
import spack.store
|
import spack.store
|
||||||
from spack.main import SpackCommand, SpackCommandError
|
from spack.main import SpackCommand, SpackCommandError
|
||||||
@ -40,6 +42,39 @@ def test_installed_dependents(mutable_database):
|
|||||||
uninstall("-y", "libelf")
|
uninstall("-y", "libelf")
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.db
|
||||||
|
def test_correct_installed_dependents(mutable_database):
|
||||||
|
# Test whether we return the right dependents.
|
||||||
|
|
||||||
|
# Take callpath from the database
|
||||||
|
callpath = spack.store.db.query_local("callpath")[0]
|
||||||
|
|
||||||
|
# Ensure it still has dependents and dependencies
|
||||||
|
dependents = callpath.dependents(deptype="all")
|
||||||
|
dependencies = callpath.dependencies(deptype="all")
|
||||||
|
assert dependents and dependencies
|
||||||
|
|
||||||
|
# Uninstall it, so it's missing.
|
||||||
|
callpath.package.do_uninstall(force=True)
|
||||||
|
|
||||||
|
# Retrieve all dependent hashes
|
||||||
|
inside_dpts, outside_dpts = spack.cmd.uninstall.installed_dependents(dependencies, None)
|
||||||
|
dependent_hashes = [s.dag_hash() for s in itertools.chain(*outside_dpts.values())]
|
||||||
|
set_dependent_hashes = set(dependent_hashes)
|
||||||
|
|
||||||
|
# We dont have an env, so this should be empty.
|
||||||
|
assert not inside_dpts
|
||||||
|
|
||||||
|
# Assert uniqueness
|
||||||
|
assert len(dependent_hashes) == len(set_dependent_hashes)
|
||||||
|
|
||||||
|
# Ensure parents of callpath are listed
|
||||||
|
assert all(s.dag_hash() in set_dependent_hashes for s in dependents)
|
||||||
|
|
||||||
|
# Ensure callpath itself is not, since it was missing.
|
||||||
|
assert callpath.dag_hash() not in set_dependent_hashes
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.db
|
@pytest.mark.db
|
||||||
def test_recursive_uninstall(mutable_database):
|
def test_recursive_uninstall(mutable_database):
|
||||||
"""Test recursive uninstall."""
|
"""Test recursive uninstall."""
|
||||||
|
@ -391,7 +391,7 @@ def test_apple_clang_flags():
|
|||||||
unsupported_flag_test("cxx17_flag", "apple-clang@6.0.0")
|
unsupported_flag_test("cxx17_flag", "apple-clang@6.0.0")
|
||||||
supported_flag_test("cxx17_flag", "-std=c++1z", "apple-clang@6.1.0")
|
supported_flag_test("cxx17_flag", "-std=c++1z", "apple-clang@6.1.0")
|
||||||
supported_flag_test("c99_flag", "-std=c99", "apple-clang@6.1.0")
|
supported_flag_test("c99_flag", "-std=c99", "apple-clang@6.1.0")
|
||||||
unsupported_flag_test("c11_flag", "apple-clang@6.0.0")
|
unsupported_flag_test("c11_flag", "apple-clang@3.0.0")
|
||||||
supported_flag_test("c11_flag", "-std=c11", "apple-clang@6.1.0")
|
supported_flag_test("c11_flag", "-std=c11", "apple-clang@6.1.0")
|
||||||
supported_flag_test("cc_pic_flag", "-fPIC", "apple-clang@2.0.0")
|
supported_flag_test("cc_pic_flag", "-fPIC", "apple-clang@2.0.0")
|
||||||
supported_flag_test("cxx_pic_flag", "-fPIC", "apple-clang@2.0.0")
|
supported_flag_test("cxx_pic_flag", "-fPIC", "apple-clang@2.0.0")
|
||||||
@ -411,7 +411,7 @@ def test_clang_flags():
|
|||||||
supported_flag_test("cxx17_flag", "-std=c++1z", "clang@3.5")
|
supported_flag_test("cxx17_flag", "-std=c++1z", "clang@3.5")
|
||||||
supported_flag_test("cxx17_flag", "-std=c++17", "clang@5.0")
|
supported_flag_test("cxx17_flag", "-std=c++17", "clang@5.0")
|
||||||
supported_flag_test("c99_flag", "-std=c99", "clang@3.3")
|
supported_flag_test("c99_flag", "-std=c99", "clang@3.3")
|
||||||
unsupported_flag_test("c11_flag", "clang@6.0.0")
|
unsupported_flag_test("c11_flag", "clang@2.0")
|
||||||
supported_flag_test("c11_flag", "-std=c11", "clang@6.1.0")
|
supported_flag_test("c11_flag", "-std=c11", "clang@6.1.0")
|
||||||
supported_flag_test("cc_pic_flag", "-fPIC", "clang@3.3")
|
supported_flag_test("cc_pic_flag", "-fPIC", "clang@3.3")
|
||||||
supported_flag_test("cxx_pic_flag", "-fPIC", "clang@3.3")
|
supported_flag_test("cxx_pic_flag", "-fPIC", "clang@3.3")
|
||||||
|
@ -58,6 +58,7 @@ def test_arm_version_detection(version_str, expected_version):
|
|||||||
[
|
[
|
||||||
("Cray C : Version 8.4.6 Mon Apr 15, 2019 12:13:39\n", "8.4.6"),
|
("Cray C : Version 8.4.6 Mon Apr 15, 2019 12:13:39\n", "8.4.6"),
|
||||||
("Cray C++ : Version 8.4.6 Mon Apr 15, 2019 12:13:45\n", "8.4.6"),
|
("Cray C++ : Version 8.4.6 Mon Apr 15, 2019 12:13:45\n", "8.4.6"),
|
||||||
|
("Cray clang Version 8.4.6 Mon Apr 15, 2019 12:13:45\n", "8.4.6"),
|
||||||
("Cray Fortran : Version 8.4.6 Mon Apr 15, 2019 12:13:55\n", "8.4.6"),
|
("Cray Fortran : Version 8.4.6 Mon Apr 15, 2019 12:13:55\n", "8.4.6"),
|
||||||
],
|
],
|
||||||
)
|
)
|
||||||
@ -487,3 +488,27 @@ def _module(cmd, *args):
|
|||||||
def test_aocc_version_detection(version_str, expected_version):
|
def test_aocc_version_detection(version_str, expected_version):
|
||||||
version = spack.compilers.aocc.Aocc.extract_version_from_output(version_str)
|
version = spack.compilers.aocc.Aocc.extract_version_from_output(version_str)
|
||||||
assert version == expected_version
|
assert version == expected_version
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.regression("33901")
|
||||||
|
@pytest.mark.parametrize(
|
||||||
|
"version_str",
|
||||||
|
[
|
||||||
|
(
|
||||||
|
"Apple clang version 11.0.0 (clang-1100.0.33.8)\n"
|
||||||
|
"Target: x86_64-apple-darwin18.7.0\n"
|
||||||
|
"Thread model: posix\n"
|
||||||
|
"InstalledDir: "
|
||||||
|
"/Applications/Xcode.app/Contents/Developer/Toolchains/"
|
||||||
|
"XcodeDefault.xctoolchain/usr/bin\n"
|
||||||
|
),
|
||||||
|
(
|
||||||
|
"Apple LLVM version 7.0.2 (clang-700.1.81)\n"
|
||||||
|
"Target: x86_64-apple-darwin15.2.0\n"
|
||||||
|
"Thread model: posix\n"
|
||||||
|
),
|
||||||
|
],
|
||||||
|
)
|
||||||
|
def test_apple_clang_not_detected_as_cce(version_str):
|
||||||
|
version = spack.compilers.cce.Cce.extract_version_from_output(version_str)
|
||||||
|
assert version == "unknown"
|
||||||
|
@ -413,3 +413,18 @@ def test_incompatible_virtual_requirements_raise(concretize_scope, mock_packages
|
|||||||
spec = Spec("callpath ^zmpi")
|
spec = Spec("callpath ^zmpi")
|
||||||
with pytest.raises(UnsatisfiableSpecError):
|
with pytest.raises(UnsatisfiableSpecError):
|
||||||
spec.concretize()
|
spec.concretize()
|
||||||
|
|
||||||
|
|
||||||
|
def test_non_existing_variants_under_all(concretize_scope, mock_packages):
|
||||||
|
if spack.config.get("config:concretizer") == "original":
|
||||||
|
pytest.skip("Original concretizer does not support configuration" " requirements")
|
||||||
|
conf_str = """\
|
||||||
|
packages:
|
||||||
|
all:
|
||||||
|
require:
|
||||||
|
- any_of: ["~foo", "@:"]
|
||||||
|
"""
|
||||||
|
update_packages_config(conf_str)
|
||||||
|
|
||||||
|
spec = Spec("callpath ^zmpi").concretized()
|
||||||
|
assert "~foo" not in spec
|
||||||
|
@ -28,7 +28,7 @@ def test_set_install_hash_length(hash_length, mutable_config, tmpdir):
|
|||||||
assert len(hash_str) == hash_length
|
assert len(hash_str) == hash_length
|
||||||
|
|
||||||
|
|
||||||
@pytest.mark.use_fixtures("mock_packages")
|
@pytest.mark.usefixtures("mock_packages")
|
||||||
def test_set_install_hash_length_upper_case(mutable_config, tmpdir):
|
def test_set_install_hash_length_upper_case(mutable_config, tmpdir):
|
||||||
mutable_config.set("config:install_hash_length", 5)
|
mutable_config.set("config:install_hash_length", 5)
|
||||||
mutable_config.set(
|
mutable_config.set(
|
||||||
|
@ -252,12 +252,8 @@ def test_install_times(install_mockery, mock_fetch, mutable_mock_repo):
|
|||||||
|
|
||||||
# The order should be maintained
|
# The order should be maintained
|
||||||
phases = [x["name"] for x in times["phases"]]
|
phases = [x["name"] for x in times["phases"]]
|
||||||
total = sum([x["seconds"] for x in times["phases"]])
|
assert phases == ["one", "two", "three", "install"]
|
||||||
for name in ["one", "two", "three", "install"]:
|
assert all(isinstance(x["seconds"], float) for x in times["phases"])
|
||||||
assert name in phases
|
|
||||||
|
|
||||||
# Give a generous difference threshold
|
|
||||||
assert abs(total - times["total"]["seconds"]) < 5
|
|
||||||
|
|
||||||
|
|
||||||
def test_flatten_deps(install_mockery, mock_fetch, mutable_mock_repo):
|
def test_flatten_deps(install_mockery, mock_fetch, mutable_mock_repo):
|
||||||
|
@ -622,7 +622,7 @@ def test_combine_phase_logs(tmpdir):
|
|||||||
|
|
||||||
# This is the output log we will combine them into
|
# This is the output log we will combine them into
|
||||||
combined_log = os.path.join(str(tmpdir), "combined-out.txt")
|
combined_log = os.path.join(str(tmpdir), "combined-out.txt")
|
||||||
spack.installer.combine_phase_logs(phase_log_files, combined_log)
|
inst.combine_phase_logs(phase_log_files, combined_log)
|
||||||
with open(combined_log, "r") as log_file:
|
with open(combined_log, "r") as log_file:
|
||||||
out = log_file.read()
|
out = log_file.read()
|
||||||
|
|
||||||
@ -631,6 +631,22 @@ def test_combine_phase_logs(tmpdir):
|
|||||||
assert "Output from %s\n" % log_file in out
|
assert "Output from %s\n" % log_file in out
|
||||||
|
|
||||||
|
|
||||||
|
def test_combine_phase_logs_does_not_care_about_encoding(tmpdir):
|
||||||
|
# this is invalid utf-8 at a minimum
|
||||||
|
data = b"\x00\xF4\xBF\x00\xBF\xBF"
|
||||||
|
input = [str(tmpdir.join("a")), str(tmpdir.join("b"))]
|
||||||
|
output = str(tmpdir.join("c"))
|
||||||
|
|
||||||
|
for path in input:
|
||||||
|
with open(path, "wb") as f:
|
||||||
|
f.write(data)
|
||||||
|
|
||||||
|
inst.combine_phase_logs(input, output)
|
||||||
|
|
||||||
|
with open(output, "rb") as f:
|
||||||
|
assert f.read() == data * 2
|
||||||
|
|
||||||
|
|
||||||
def test_check_deps_status_install_failure(install_mockery, monkeypatch):
|
def test_check_deps_status_install_failure(install_mockery, monkeypatch):
|
||||||
const_arg = installer_args(["a"], {})
|
const_arg = installer_args(["a"], {})
|
||||||
installer = create_installer(const_arg)
|
installer = create_installer(const_arg)
|
||||||
|
@ -903,3 +903,13 @@ def test_remove_linked_tree_doesnt_change_file_permission(tmpdir, initial_mode):
|
|||||||
fs.remove_linked_tree(str(file_instead_of_dir))
|
fs.remove_linked_tree(str(file_instead_of_dir))
|
||||||
final_stat = os.stat(str(file_instead_of_dir))
|
final_stat = os.stat(str(file_instead_of_dir))
|
||||||
assert final_stat == initial_stat
|
assert final_stat == initial_stat
|
||||||
|
|
||||||
|
|
||||||
|
def test_filesummary(tmpdir):
|
||||||
|
p = str(tmpdir.join("xyz"))
|
||||||
|
with open(p, "wb") as f:
|
||||||
|
f.write(b"abcdefghijklmnopqrstuvwxyz")
|
||||||
|
|
||||||
|
assert fs.filesummary(p, print_bytes=8) == (26, b"abcdefgh...stuvwxyz")
|
||||||
|
assert fs.filesummary(p, print_bytes=13) == (26, b"abcdefghijklmnopqrstuvwxyz")
|
||||||
|
assert fs.filesummary(p, print_bytes=100) == (26, b"abcdefghijklmnopqrstuvwxyz")
|
||||||
|
@ -32,6 +32,27 @@ def test_write_and_read_cache_file(file_cache):
|
|||||||
assert text == "foobar\n"
|
assert text == "foobar\n"
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.skipif(sys.platform == "win32", reason="Locks not supported on Windows")
|
||||||
|
def test_failed_write_and_read_cache_file(file_cache):
|
||||||
|
"""Test failing to write then attempting to read a cached file."""
|
||||||
|
with pytest.raises(RuntimeError, match=r"^foobar$"):
|
||||||
|
with file_cache.write_transaction("test.yaml") as (old, new):
|
||||||
|
assert old is None
|
||||||
|
assert new is not None
|
||||||
|
raise RuntimeError("foobar")
|
||||||
|
|
||||||
|
# Cache dir should have exactly one (lock) file
|
||||||
|
assert os.listdir(file_cache.root) == [".test.yaml.lock"]
|
||||||
|
|
||||||
|
# File does not exist
|
||||||
|
assert not file_cache.init_entry("test.yaml")
|
||||||
|
|
||||||
|
# Attempting to read will cause a file not found error
|
||||||
|
with pytest.raises((IOError, OSError), match=r"test\.yaml"):
|
||||||
|
with file_cache.read_transaction("test.yaml"):
|
||||||
|
pass
|
||||||
|
|
||||||
|
|
||||||
def test_write_and_remove_cache_file(file_cache):
|
def test_write_and_remove_cache_file(file_cache):
|
||||||
"""Test two write transactions on a cached file. Then try to remove an
|
"""Test two write transactions on a cached file. Then try to remove an
|
||||||
entry from it.
|
entry from it.
|
||||||
|
150
lib/spack/spack/test/util/timer.py
Normal file
150
lib/spack/spack/test/util/timer.py
Normal file
@ -0,0 +1,150 @@
|
|||||||
|
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
|
||||||
|
# Spack Project Developers. See the top-level COPYRIGHT file for details.
|
||||||
|
#
|
||||||
|
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||||
|
|
||||||
|
import json
|
||||||
|
|
||||||
|
from six import StringIO
|
||||||
|
|
||||||
|
import spack.util.timer as timer
|
||||||
|
|
||||||
|
|
||||||
|
class Tick(object):
|
||||||
|
"""Timer that increments the seconds passed by 1
|
||||||
|
everytime tick is called."""
|
||||||
|
|
||||||
|
def __init__(self):
|
||||||
|
self.time = 0.0
|
||||||
|
|
||||||
|
def tick(self):
|
||||||
|
self.time += 1
|
||||||
|
return self.time
|
||||||
|
|
||||||
|
|
||||||
|
def test_timer():
|
||||||
|
# 0
|
||||||
|
t = timer.Timer(now=Tick().tick)
|
||||||
|
|
||||||
|
# 1 (restart)
|
||||||
|
t.start()
|
||||||
|
|
||||||
|
# 2
|
||||||
|
t.start("wrapped")
|
||||||
|
|
||||||
|
# 3
|
||||||
|
t.start("first")
|
||||||
|
|
||||||
|
# 4
|
||||||
|
t.stop("first")
|
||||||
|
assert t.duration("first") == 1.0
|
||||||
|
|
||||||
|
# 5
|
||||||
|
t.start("second")
|
||||||
|
|
||||||
|
# 6
|
||||||
|
t.stop("second")
|
||||||
|
assert t.duration("second") == 1.0
|
||||||
|
|
||||||
|
# 7-8
|
||||||
|
with t.measure("third"):
|
||||||
|
pass
|
||||||
|
assert t.duration("third") == 1.0
|
||||||
|
|
||||||
|
# 9
|
||||||
|
t.stop("wrapped")
|
||||||
|
assert t.duration("wrapped") == 7.0
|
||||||
|
|
||||||
|
# tick 10-13
|
||||||
|
t.start("not-stopped")
|
||||||
|
assert t.duration("not-stopped") == 1.0
|
||||||
|
assert t.duration("not-stopped") == 2.0
|
||||||
|
assert t.duration("not-stopped") == 3.0
|
||||||
|
|
||||||
|
# 14
|
||||||
|
assert t.duration() == 13.0
|
||||||
|
|
||||||
|
# 15
|
||||||
|
t.stop()
|
||||||
|
assert t.duration() == 14.0
|
||||||
|
|
||||||
|
|
||||||
|
def test_timer_stop_stops_all():
|
||||||
|
# Ensure that timer.stop() effectively stops all timers.
|
||||||
|
|
||||||
|
# 0
|
||||||
|
t = timer.Timer(now=Tick().tick)
|
||||||
|
|
||||||
|
# 1
|
||||||
|
t.start("first")
|
||||||
|
|
||||||
|
# 2
|
||||||
|
t.start("second")
|
||||||
|
|
||||||
|
# 3
|
||||||
|
t.start("third")
|
||||||
|
|
||||||
|
# 4
|
||||||
|
t.stop()
|
||||||
|
|
||||||
|
assert t.duration("first") == 3.0
|
||||||
|
assert t.duration("second") == 2.0
|
||||||
|
assert t.duration("third") == 1.0
|
||||||
|
assert t.duration() == 4.0
|
||||||
|
|
||||||
|
|
||||||
|
def test_stopping_unstarted_timer_is_no_error():
|
||||||
|
t = timer.Timer(now=Tick().tick)
|
||||||
|
assert t.duration("hello") == 0.0
|
||||||
|
t.stop("hello")
|
||||||
|
assert t.duration("hello") == 0.0
|
||||||
|
|
||||||
|
|
||||||
|
def test_timer_write():
|
||||||
|
text_buffer = StringIO()
|
||||||
|
json_buffer = StringIO()
|
||||||
|
|
||||||
|
# 0
|
||||||
|
t = timer.Timer(now=Tick().tick)
|
||||||
|
|
||||||
|
# 1
|
||||||
|
t.start("timer")
|
||||||
|
|
||||||
|
# 2
|
||||||
|
t.stop("timer")
|
||||||
|
|
||||||
|
# 3
|
||||||
|
t.stop()
|
||||||
|
|
||||||
|
t.write_tty(text_buffer)
|
||||||
|
t.write_json(json_buffer)
|
||||||
|
|
||||||
|
output = text_buffer.getvalue().splitlines()
|
||||||
|
assert "timer" in output[0]
|
||||||
|
assert "1.000s" in output[0]
|
||||||
|
assert "total" in output[1]
|
||||||
|
assert "3.000s" in output[1]
|
||||||
|
|
||||||
|
deserialized = json.loads(json_buffer.getvalue())
|
||||||
|
assert deserialized == {
|
||||||
|
"phases": [{"name": "timer", "seconds": 1.0}],
|
||||||
|
"total": {"seconds": 3.0},
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
def test_null_timer():
|
||||||
|
# Just ensure that the interface of the noop-timer doesn't break at some point
|
||||||
|
buffer = StringIO()
|
||||||
|
t = timer.NullTimer()
|
||||||
|
t.start()
|
||||||
|
t.start("first")
|
||||||
|
t.stop("first")
|
||||||
|
with t.measure("second"):
|
||||||
|
pass
|
||||||
|
t.stop()
|
||||||
|
assert t.duration("first") == 0.0
|
||||||
|
assert t.duration() == 0.0
|
||||||
|
assert not t.phases
|
||||||
|
t.write_json(buffer)
|
||||||
|
t.write_tty(buffer)
|
||||||
|
assert not buffer.getvalue()
|
@ -3,12 +3,13 @@
|
|||||||
#
|
#
|
||||||
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
|
||||||
|
|
||||||
|
import os
|
||||||
import sys
|
import sys
|
||||||
|
|
||||||
import pytest
|
import pytest
|
||||||
|
|
||||||
from spack.directory_layout import DirectoryLayout
|
from spack.directory_layout import DirectoryLayout
|
||||||
from spack.filesystem_view import YamlFilesystemView
|
from spack.filesystem_view import SimpleFilesystemView, YamlFilesystemView
|
||||||
from spack.spec import Spec
|
from spack.spec import Spec
|
||||||
|
|
||||||
|
|
||||||
@ -23,3 +24,46 @@ def test_remove_extensions_ordered(install_mockery, mock_fetch, tmpdir):
|
|||||||
|
|
||||||
e1 = e2["extension1"]
|
e1 = e2["extension1"]
|
||||||
view.remove_specs(e1, e2)
|
view.remove_specs(e1, e2)
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.regression("32456")
|
||||||
|
def test_view_with_spec_not_contributing_files(mock_packages, tmpdir):
|
||||||
|
tmpdir = str(tmpdir)
|
||||||
|
view_dir = os.path.join(tmpdir, "view")
|
||||||
|
os.mkdir(view_dir)
|
||||||
|
|
||||||
|
layout = DirectoryLayout(view_dir)
|
||||||
|
view = SimpleFilesystemView(view_dir, layout)
|
||||||
|
|
||||||
|
a = Spec("a")
|
||||||
|
b = Spec("b")
|
||||||
|
a.prefix = os.path.join(tmpdir, "a")
|
||||||
|
b.prefix = os.path.join(tmpdir, "b")
|
||||||
|
a._mark_concrete()
|
||||||
|
b._mark_concrete()
|
||||||
|
|
||||||
|
# Create directory structure for a and b, and view
|
||||||
|
os.makedirs(a.prefix.subdir)
|
||||||
|
os.makedirs(b.prefix.subdir)
|
||||||
|
os.makedirs(os.path.join(a.prefix, ".spack"))
|
||||||
|
os.makedirs(os.path.join(b.prefix, ".spack"))
|
||||||
|
|
||||||
|
# Add files to b's prefix, but not to a's
|
||||||
|
with open(b.prefix.file, "w") as f:
|
||||||
|
f.write("file 1")
|
||||||
|
|
||||||
|
with open(b.prefix.subdir.file, "w") as f:
|
||||||
|
f.write("file 2")
|
||||||
|
|
||||||
|
# In previous versions of Spack we incorrectly called add_files_to_view
|
||||||
|
# with b's merge map. It shouldn't be called at all, since a has no
|
||||||
|
# files to add to the view.
|
||||||
|
def pkg_a_add_files_to_view(view, merge_map, skip_if_exists=True):
|
||||||
|
assert False, "There shouldn't be files to add"
|
||||||
|
|
||||||
|
a.package.add_files_to_view = pkg_a_add_files_to_view
|
||||||
|
|
||||||
|
# Create view and see if files are linked.
|
||||||
|
view.add_specs(a, b)
|
||||||
|
assert os.path.lexists(os.path.join(view_dir, "file"))
|
||||||
|
assert os.path.lexists(os.path.join(view_dir, "subdir", "file"))
|
||||||
|
@ -141,7 +141,7 @@ def dump_environment(path, environment=None):
|
|||||||
use_env = environment or os.environ
|
use_env = environment or os.environ
|
||||||
hidden_vars = set(["PS1", "PWD", "OLDPWD", "TERM_SESSION_ID"])
|
hidden_vars = set(["PS1", "PWD", "OLDPWD", "TERM_SESSION_ID"])
|
||||||
|
|
||||||
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
|
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
|
||||||
with os.fdopen(fd, "w") as env_file:
|
with os.fdopen(fd, "w") as env_file:
|
||||||
for var, val in sorted(use_env.items()):
|
for var, val in sorted(use_env.items()):
|
||||||
env_file.write(
|
env_file.write(
|
||||||
@ -915,7 +915,7 @@ def inspect_path(root, inspections, exclude=None):
|
|||||||
env = EnvironmentModifications()
|
env = EnvironmentModifications()
|
||||||
# Inspect the prefix to check for the existence of common directories
|
# Inspect the prefix to check for the existence of common directories
|
||||||
for relative_path, variables in inspections.items():
|
for relative_path, variables in inspections.items():
|
||||||
expected = os.path.join(root, relative_path)
|
expected = os.path.join(root, os.path.normpath(relative_path))
|
||||||
|
|
||||||
if os.path.isdir(expected) and not exclude(expected):
|
if os.path.isdir(expected) and not exclude(expected):
|
||||||
for variable in variables:
|
for variable in variables:
|
||||||
|
@ -144,8 +144,7 @@ def __exit__(cm, type, value, traceback):
|
|||||||
cm.tmp_file.close()
|
cm.tmp_file.close()
|
||||||
|
|
||||||
if value:
|
if value:
|
||||||
# remove tmp on exception & raise it
|
os.remove(cm.tmp_filename)
|
||||||
shutil.rmtree(cm.tmp_filename, True)
|
|
||||||
|
|
||||||
else:
|
else:
|
||||||
rename(cm.tmp_filename, cm.orig_filename)
|
rename(cm.tmp_filename, cm.orig_filename)
|
||||||
|
@ -444,7 +444,7 @@ def padding_filter(string):
|
|||||||
r"(/{pad})+" # the padding string repeated one or more times
|
r"(/{pad})+" # the padding string repeated one or more times
|
||||||
r"(/{longest_prefix})?(?=/)" # trailing prefix of padding as path component
|
r"(/{longest_prefix})?(?=/)" # trailing prefix of padding as path component
|
||||||
)
|
)
|
||||||
regex = regex.replace("/", os.sep)
|
regex = regex.replace("/", re.escape(os.sep))
|
||||||
regex = regex.format(pad=pad, longest_prefix=longest_prefix)
|
regex = regex.format(pad=pad, longest_prefix=longest_prefix)
|
||||||
_filter_re = re.compile(regex)
|
_filter_re = re.compile(regex)
|
||||||
|
|
||||||
|
@@ -11,51 +11,140 @@
 """
 import sys
 import time
+from collections import OrderedDict, namedtuple
+from contextlib import contextmanager
 
+from llnl.util.lang import pretty_seconds
 
 import spack.util.spack_json as sjson
 
+Interval = namedtuple("Interval", ("begin", "end"))
 
-class Timer(object):
-    """
-    Simple timer for timing phases of a solve or install
-    """
+#: name for the global timer (used in start(), stop(), duration() without arguments)
+global_timer_name = "_global"
 
-    def __init__(self):
-        self.start = time.time()
-        self.last = self.start
-        self.phases = {}
-        self.end = None
 
-    def phase(self, name):
-        last = self.last
-        now = time.time()
-        self.phases[name] = now - last
-        self.last = now
+class NullTimer(object):
+    """Timer interface that does nothing, useful in for "tell
+    don't ask" style code when timers are optional."""
+
+    def start(self, name=global_timer_name):
+        pass
+
+    def stop(self, name=global_timer_name):
+        pass
+
+    def duration(self, name=global_timer_name):
+        return 0.0
+
+    @contextmanager
+    def measure(self, name):
+        yield
 
     @property
-    def total(self):
-        """Return the total time"""
-        if self.end:
-            return self.end - self.start
-        return time.time() - self.start
+    def phases(self):
+        return []
 
-    def stop(self):
-        """
-        Stop the timer to record a total time, if desired.
-        """
-        self.end = time.time()
+    def write_json(self, out=sys.stdout):
+        pass
 
-    def write_json(self, out=sys.stdout):
-        """
-        Write a json object with times to file
-        """
-        phases = [{"name": p, "seconds": s} for p, s in self.phases.items()]
-        times = {"phases": phases, "total": {"seconds": self.total}}
-        out.write(sjson.dump(times))
+    def write_tty(self, out=sys.stdout):
+        pass
+
+
+#: instance of a do-nothing timer
+NULL_TIMER = NullTimer()
+
+
+class Timer(object):
+    """Simple interval timer"""
+
+    def __init__(self, now=time.time):
+        """
+        Arguments:
+            now: function that gives the seconds since e.g. epoch
+        """
+        self._now = now
+        self._timers = OrderedDict()  # type: OrderedDict[str,Interval]
+
+        # _global is the overal timer since the instance was created
+        self._timers[global_timer_name] = Interval(self._now(), end=None)
+
+    def start(self, name=global_timer_name):
+        """
+        Start or restart a named timer, or the global timer when no name is given.
+
+        Arguments:
+            name (str): Optional name of the timer. When no name is passed, the
+                global timer is started.
+        """
+        self._timers[name] = Interval(self._now(), None)
+
+    def stop(self, name=global_timer_name):
+        """
+        Stop a named timer, or all timers when no name is given. Stopping a
+        timer that has not started has no effect.
+
+        Arguments:
+            name (str): Optional name of the timer. When no name is passed, all
+                timers are stopped.
+        """
+        interval = self._timers.get(name, None)
+        if not interval:
+            return
+        self._timers[name] = Interval(interval.begin, self._now())
+
+    def duration(self, name=global_timer_name):
+        """
+        Get the time in seconds of a named timer, or the total time if no
+        name is passed. The duration is always 0 for timers that have not been
+        started, no error is raised.
+
+        Arguments:
+            name (str): (Optional) name of the timer
+
+        Returns:
+            float: duration of timer.
+        """
+        try:
+            interval = self._timers[name]
+        except KeyError:
+            return 0.0
+        # Take either the interval end, the global timer, or now.
+        end = interval.end or self._timers[global_timer_name].end or self._now()
+        return end - interval.begin
+
+    @contextmanager
+    def measure(self, name):
+        """
+        Context manager that allows you to time a block of code.
+
+        Arguments:
+            name (str): Name of the timer
+        """
+        begin = self._now()
+        yield
+        self._timers[name] = Interval(begin, self._now())
+
+    @property
+    def phases(self):
+        """Get all named timers (excluding the global/total timer)"""
+        return [k for k in self._timers.keys() if k != global_timer_name]
+
+    def write_json(self, out=sys.stdout):
+        """Write a json object with times to file"""
+        phases = [{"name": p, "seconds": self.duration(p)} for p in self.phases]
+        times = {"phases": phases, "total": {"seconds": self.duration()}}
+        out.write(sjson.dump(times))
 
     def write_tty(self, out=sys.stdout):
-        now = time.time()
-        out.write("Time:\n")
-        for phase, t in self.phases.items():
-            out.write("    %-15s%.4f\n" % (phase + ":", t))
-        out.write("Total: %.4f\n" % (now - self.start))
+        """Write a human-readable summary of timings"""
+        # Individual timers ordered by registration
+        formatted = [(p, pretty_seconds(self.duration(p))) for p in self.phases]
+
+        # Total time
+        formatted.append(("total", pretty_seconds(self.duration())))
+
+        # Write to out
+        for name, duration in formatted:
+            out.write("    {:10s} {:>10s}\n".format(name, duration))
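Taken together, the rewrite replaces the old `phase()` bookkeeping with named, restartable interval timers plus a do-nothing `NullTimer` for callers where timing is optional. A short usage sketch of the API introduced above (assuming the module is importable as `spack.util.timer`, as the `sjson` import suggests):

```python
from spack.util.timer import NULL_TIMER, Timer

timer = Timer()

timer.start("concretize")        # explicit named timer
# ... do work ...
timer.stop("concretize")

with timer.measure("install"):   # or time a block via the context manager
    pass  # ... do work ...

timer.stop()                     # stop the global "_global" timer
timer.write_tty()                # concretize / install / total summary


# "Tell, don't ask": callers accept a timer defaulting to NULL_TIMER, so the
# timing calls become no-ops when no real timer is supplied.
def solve(specs, timer=NULL_TIMER):
    timer.start("setup")
    # ... work ...
    timer.stop("setup")
```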
@@ -153,113 +153,113 @@ protected-publish:
 # still run on UO runners and be signed
 # using the previous approach.
 ########################################
-.e4s-mac:
-  variables:
-    SPACK_CI_STACK_NAME: e4s-mac
-  allow_failure: True
+# .e4s-mac:
+#   variables:
+#     SPACK_CI_STACK_NAME: e4s-mac
+#   allow_failure: True
 
-.mac-pr:
-  only:
-  - /^pr[\d]+_.*$/
-  - /^github\/pr[\d]+_.*$/
-  variables:
-    SPACK_BUILDCACHE_DESTINATION: "s3://spack-binaries-prs/${CI_COMMIT_REF_NAME}"
-    SPACK_PRUNE_UNTOUCHED: "True"
+# .mac-pr:
+#   only:
+#   - /^pr[\d]+_.*$/
+#   - /^github\/pr[\d]+_.*$/
+#   variables:
+#     SPACK_BUILDCACHE_DESTINATION: "s3://spack-binaries-prs/${CI_COMMIT_REF_NAME}"
+#     SPACK_PRUNE_UNTOUCHED: "True"
 
-.mac-protected:
-  only:
-  - /^develop$/
-  - /^releases\/v.*/
-  - /^v.*/
-  - /^github\/develop$/
-  variables:
-    SPACK_BUILDCACHE_DESTINATION: "s3://spack-binaries/${CI_COMMIT_REF_NAME}/${SPACK_CI_STACK_NAME}"
+# .mac-protected:
+#   only:
+#   - /^develop$/
+#   - /^releases\/v.*/
+#   - /^v.*/
+#   - /^github\/develop$/
+#   variables:
+#     SPACK_BUILDCACHE_DESTINATION: "s3://spack-binaries/${CI_COMMIT_REF_NAME}/${SPACK_CI_STACK_NAME}"
 
-.mac-pr-build:
-  extends: [ ".mac-pr", ".build" ]
-  variables:
-    AWS_ACCESS_KEY_ID: ${PR_MIRRORS_AWS_ACCESS_KEY_ID}
-    AWS_SECRET_ACCESS_KEY: ${PR_MIRRORS_AWS_SECRET_ACCESS_KEY}
+# .mac-pr-build:
+#   extends: [ ".mac-pr", ".build" ]
+#   variables:
+#     AWS_ACCESS_KEY_ID: ${PR_MIRRORS_AWS_ACCESS_KEY_ID}
+#     AWS_SECRET_ACCESS_KEY: ${PR_MIRRORS_AWS_SECRET_ACCESS_KEY}
 
-.mac-protected-build:
-  extends: [ ".mac-protected", ".build" ]
-  variables:
-    AWS_ACCESS_KEY_ID: ${PROTECTED_MIRRORS_AWS_ACCESS_KEY_ID}
-    AWS_SECRET_ACCESS_KEY: ${PROTECTED_MIRRORS_AWS_SECRET_ACCESS_KEY}
-    SPACK_SIGNING_KEY: ${PACKAGE_SIGNING_KEY}
+# .mac-protected-build:
+#   extends: [ ".mac-protected", ".build" ]
+#   variables:
+#     AWS_ACCESS_KEY_ID: ${PROTECTED_MIRRORS_AWS_ACCESS_KEY_ID}
+#     AWS_SECRET_ACCESS_KEY: ${PROTECTED_MIRRORS_AWS_SECRET_ACCESS_KEY}
+#     SPACK_SIGNING_KEY: ${PACKAGE_SIGNING_KEY}
 
-e4s-mac-pr-generate:
-  extends: [".e4s-mac", ".mac-pr"]
-  stage: generate
-  script:
-    - tmp="$(mktemp -d)"; export SPACK_USER_CONFIG_PATH="$tmp"; export SPACK_USER_CACHE_PATH="$tmp"
-    - . "./share/spack/setup-env.sh"
-    - spack --version
-    - cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
-    - spack env activate --without-view .
-    - spack ci generate --check-index-only
-      --buildcache-destination "${SPACK_BUILDCACHE_DESTINATION}"
-      --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
-      --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
-  artifacts:
-    paths:
-    - "${CI_PROJECT_DIR}/jobs_scratch_dir"
-  tags:
-  - lambda
-  interruptible: true
-  retry:
-    max: 2
-    when:
-      - runner_system_failure
-      - stuck_or_timeout_failure
-  timeout: 60 minutes
+# e4s-mac-pr-generate:
+#   extends: [".e4s-mac", ".mac-pr"]
+#   stage: generate
+#   script:
+#     - tmp="$(mktemp -d)"; export SPACK_USER_CONFIG_PATH="$tmp"; export SPACK_USER_CACHE_PATH="$tmp"
+#     - . "./share/spack/setup-env.sh"
+#     - spack --version
+#     - cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
+#     - spack env activate --without-view .
+#     - spack ci generate --check-index-only
+#       --buildcache-destination "${SPACK_BUILDCACHE_DESTINATION}"
+#       --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
+#       --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
+#   artifacts:
+#     paths:
+#     - "${CI_PROJECT_DIR}/jobs_scratch_dir"
+#   tags:
+#   - lambda
+#   interruptible: true
+#   retry:
+#     max: 2
+#     when:
+#       - runner_system_failure
+#       - stuck_or_timeout_failure
+#   timeout: 60 minutes
 
-e4s-mac-protected-generate:
-  extends: [".e4s-mac", ".mac-protected"]
-  stage: generate
-  script:
-    - tmp="$(mktemp -d)"; export SPACK_USER_CONFIG_PATH="$tmp"; export SPACK_USER_CACHE_PATH="$tmp"
-    - . "./share/spack/setup-env.sh"
-    - spack --version
-    - cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
-    - spack env activate --without-view .
-    - spack ci generate --check-index-only
-      --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
-      --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
-  artifacts:
-    paths:
-    - "${CI_PROJECT_DIR}/jobs_scratch_dir"
-  tags:
-  - omicron
-  interruptible: true
-  retry:
-    max: 2
-    when:
-      - runner_system_failure
-      - stuck_or_timeout_failure
-  timeout: 60 minutes
+# e4s-mac-protected-generate:
+#   extends: [".e4s-mac", ".mac-protected"]
+#   stage: generate
+#   script:
+#     - tmp="$(mktemp -d)"; export SPACK_USER_CONFIG_PATH="$tmp"; export SPACK_USER_CACHE_PATH="$tmp"
+#     - . "./share/spack/setup-env.sh"
+#     - spack --version
+#     - cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
+#     - spack env activate --without-view .
+#     - spack ci generate --check-index-only
+#       --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
+#       --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
+#   artifacts:
+#     paths:
+#     - "${CI_PROJECT_DIR}/jobs_scratch_dir"
+#   tags:
+#   - omicron
+#   interruptible: true
+#   retry:
+#     max: 2
+#     when:
+#       - runner_system_failure
+#       - stuck_or_timeout_failure
+#   timeout: 60 minutes
 
-e4s-mac-pr-build:
-  extends: [ ".e4s-mac", ".mac-pr-build" ]
-  trigger:
-    include:
-      - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
-        job: e4s-mac-pr-generate
-    strategy: depend
-  needs:
-    - artifacts: True
-      job: e4s-mac-pr-generate
+# e4s-mac-pr-build:
+#   extends: [ ".e4s-mac", ".mac-pr-build" ]
+#   trigger:
+#     include:
+#       - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
+#         job: e4s-mac-pr-generate
+#     strategy: depend
+#   needs:
+#     - artifacts: True
+#       job: e4s-mac-pr-generate
 
-e4s-mac-protected-build:
-  extends: [ ".e4s-mac", ".mac-protected-build" ]
-  trigger:
-    include:
-      - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
-        job: e4s-mac-protected-generate
-    strategy: depend
-  needs:
-    - artifacts: True
-      job: e4s-mac-protected-generate
+# e4s-mac-protected-build:
+#   extends: [ ".e4s-mac", ".mac-protected-build" ]
+#   trigger:
+#     include:
+#       - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
+#         job: e4s-mac-protected-generate
+#     strategy: depend
+#   needs:
+#     - artifacts: True
+#       job: e4s-mac-protected-generate
 
 ########################################
 # E4S pipeline
@@ -254,6 +254,9 @@ spack:
     - spack env activate --without-view .
     - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
     - mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
+    # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
+    - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+    # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
     - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
     - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
     - spack --color=always --backtrace ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)

@@ -175,6 +175,9 @@ spack:
    - spack env activate --without-view .
     - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
     - mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
+    # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
+    - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+    # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
     - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
     - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
     - spack --color=always --backtrace ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)

@@ -47,6 +47,9 @@ spack:
    - cd ${SPACK_CONCRETE_ENV_DIR}
     - spack env activate --without-view .
     - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
+    # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
+    - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+    # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
     - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
     - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
     - spack --color=always --backtrace ci rebuild

@@ -60,6 +60,9 @@ spack:
    - cd ${SPACK_CONCRETE_ENV_DIR}
     - spack env activate --without-view .
     - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
+    # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
+    - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+    # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
     - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
     - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
     - spack --color=always --backtrace ci rebuild

@@ -280,6 +280,9 @@ spack:
    - spack env activate --without-view .
     - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
     - mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
+    # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
+    - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+    # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
     - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
     - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
     - export PATH=/bootstrap/runner/view/bin:${PATH}

@@ -270,6 +270,9 @@ spack:
    - spack env activate --without-view .
     - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
     - mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
+    # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
+    - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+    # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
     - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
     - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
     - spack --color=always --backtrace ci rebuild --tests > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)

@@ -104,6 +104,9 @@ spack:
    - spack env activate --without-view .
     - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
     - mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
+    # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
+    - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+    # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
     - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
     - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
     - spack --color=always --backtrace ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)

@@ -107,6 +107,9 @@ spack:
    - spack env activate --without-view .
     - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
     - mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
+    # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
+    - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+    # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
     - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
     - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
     - spack --color=always --backtrace ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)

@@ -110,6 +110,9 @@ spack:
    - spack env activate --without-view .
     - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
     - mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
+    # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
+    - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+    # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
     - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
     - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
     - spack --color=always --backtrace ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)

@@ -75,6 +75,9 @@ spack:
    - spack env activate --without-view .
     - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
     - mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
+    # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
+    - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+    # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
     - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
     - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
     - spack --color=always --backtrace ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)

@@ -77,6 +77,9 @@ spack:
    - cd ${SPACK_CONCRETE_ENV_DIR}
     - spack env activate --without-view .
     - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
+    # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
+    - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+    # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
     - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
     - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
     - spack --color=always --backtrace ci rebuild

@@ -79,6 +79,9 @@ spack:
    - cd ${SPACK_CONCRETE_ENV_DIR}
     - spack env activate --without-view .
     - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
+    # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
+    - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+    # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
     - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
     - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
     - spack --color=always --backtrace ci rebuild
@@ -48,3 +48,9 @@ def after_autoreconf_1(self):
     @run_after("autoreconf", when="@2.0")
     def after_autoreconf_2(self):
         os.environ["AFTER_AUTORECONF_2_CALLED"] = "1"
+
+    def check(self):
+        os.environ["CHECK_CALLED"] = "1"
+
+    def installcheck(self):
+        os.environ["INSTALLCHECK_CALLED"] = "1"
@@ -3,8 +3,6 @@
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
 
-from time import sleep
-
 from spack.package import *
 
 

@@ -17,15 +15,12 @@ class DevBuildTestInstallPhases(Package):
     phases = ["one", "two", "three", "install"]
 
     def one(self, spec, prefix):
-        sleep(1)
         print("One locomoco")
 
     def two(self, spec, prefix):
-        sleep(2)
         print("Two locomoco")
 
     def three(self, spec, prefix):
-        sleep(3)
         print("Three locomoco")
 
     def install(self, spec, prefix):
@@ -37,6 +37,8 @@ class Gasnet(Package, CudaPackage, ROCmPackage):
     version("main", branch="stable")
     version("master", branch="master")
 
+    version("2023.3.0", sha256="e1fa783d38a503cf2efa7662be591ca5c2bb98d19ac72a9bc6da457329a9a14f")
+    version("2022.9.2", sha256="2352d52f395a9aa14cc57d82957d9f1ebd928d0a0021fd26c5f1382a06cd6f1d")
     version("2022.9.0", sha256="6873ff4ad8ebee49da4378f2d78095a6ccc31333d6ae4cd739b9f772af11f936")
     version("2022.3.0", sha256="91b59aa84c0680c807e00d3d1d8fa7c33c1aed50b86d1616f93e499620a9ba09")
     version("2021.9.0", sha256="1b6ff6cdad5ecf76b92032ef9507e8a0876c9fc3ee0ab008de847c1fad0359ee")
@@ -21,6 +21,13 @@ class OneapiLevelZero(CMakePackage):
 
     maintainers = ["rscohn2"]
 
+    version("1.9.9", sha256="3d1784e790bbaae5f160b920c07e7dc2941640d9c631aaa668ccfd57aafc7b56")
+    version("1.9.4", sha256="7f91ed993be1e643c752cf95a319a0fc64113d91ec481fbb8a2f478f433d3380")
+    version("1.8.12", sha256="9c5d3dd912882abe8e2e3ba72f8c27e2a2d86759ac48f6318a0df091204985eb")
+    version("1.8.8", sha256="3553ae8fa0d2d69c4210a8f3428bd6612bd8bb8a627faf52c3658a01851e66d2")
+    version("1.8.5", sha256="b6e9663bbcc53c148d32376998298bec6f7c434ef2218c61fa708963e3a09394")
+    version("1.8.1", sha256="de9582ca075dbd207113d432c4d70a2daaf9d6904672c707e340d43cf4e114a5")
+    version("1.8.0", sha256="d4089820ed6338ce1616746498bff9383cd9485568190b7977d7c5bf0bf8297b")
     version("1.7.15", sha256="c39bb05a8e5898aa6c444e1704105b93d3f1888b9c333f8e7e73825ffbfb2617")
     version("1.7.9", sha256="b430a7f833a689c899b32172a31c3bca1d16adcad8ff866f240a3a8968433de7")
     version("1.7.4", sha256="23a3f393f6e8f7ed694e0d3248d1ac1b92f2b6964cdb4d747abc23328050513b")
@@ -26,7 +26,7 @@ class Tau(Package):
     tags = ["e4s"]
 
     version("master", branch="master")
-    version("2.32", sha256="fc8f5cdbdae999e98e9e97b0d8d66d282cb8bb41c19d5486d48a2d2d11b4b475")
+    version("2.32", sha256="ee774a06e30ce0ef0f053635a52229152c39aba4f4933bed92da55e5e13466f3")
     version("2.31.1", sha256="bf445b9d4fe40a5672a7b175044d2133791c4dfb36a214c1a55a931aebc06b9d")
     version("2.31", sha256="27e73c395dd2a42b91591ce4a76b88b1f67663ef13aa19ef4297c68f45d946c2")
     version("2.30.2", sha256="43f84a15b71a226f8a64d966f0cb46022bcfbaefb341295ecc6fa80bb82bbfb4")
@@ -16,7 +16,7 @@ def is_CrayXC():
 
 
 def is_CrayEX():
-    return (spack.platforms.host().name == "cray") and (
+    return (spack.platforms.host().name in ["linux", "cray"]) and (
         os.environ.get("CRAYPE_NETWORK_TARGET") in ["ofi", "ucx"]
     )
 
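The detection is loosened because newer Spack versions can report the host platform of a Cray EX machine as plain `linux`, so the Cray PE environment variable remains the distinguishing signal. A standalone sketch of the predicate (illustrative, not the package's exact code):

```python
import os


def looks_like_cray_ex(platform_name):
    """True when the host resembles an HPE Cray EX programming environment."""
    return platform_name in ["linux", "cray"] and os.environ.get(
        "CRAYPE_NETWORK_TARGET"
    ) in ["ofi", "ucx"]


# e.g. looks_like_cray_ex("linux") is True only when CRAYPE_NETWORK_TARGET
# is set to "ofi" or "ucx" in the environment.
```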
@@ -47,6 +47,7 @@ class Upcxx(Package, CudaPackage, ROCmPackage):
     version("develop", branch="develop")
     version("master", branch="master")
 
+    version("2023.3.0", sha256="382af3c093decdb51f0533e19efb4cc7536b6617067b2dd89431e323704a1009")
     version("2022.9.0", sha256="dbf15fd9ba38bfe2491f556b55640343d6303048a117c4e84877ceddb64e4c7c")
     version("2022.3.0", sha256="72bccfc9dfab5c2351ee964232b3754957ecfdbe6b4de640e1b1387d45019496")
     version("2021.9.0", sha256="9299e17602bcc8c05542cdc339897a9c2dba5b5c3838d6ef2df7a02250f42177")

@@ -67,12 +68,23 @@ class Upcxx(Package, CudaPackage, ROCmPackage):
     variant(
         "cuda",
         default=False,
-        description="Enables UPC++ support for the CUDA memory kind.\n"
+        description="Enables UPC++ support for the CUDA memory kind on NVIDIA GPUs.\n"
         + "NOTE: Requires CUDA Driver library be present on the build system",
+        when="@2019.3.0:",
     )
 
     variant(
-        "rocm", default=False, description="Enables UPC++ support for the ROCm/HIP memory kind"
+        "rocm",
+        default=False,
+        description="Enables UPC++ support for the ROCm/HIP memory kind on AMD GPUs",
+        when="@2022.3.0:",
+    )
+
+    variant(
+        "level_zero",
+        default=False,
+        description="Enables UPC++ support for the Level Zero memory kind on Intel GPUs",
+        when="@2023.3.0:",
     )
 
     variant(

@@ -100,6 +112,8 @@ class Upcxx(Package, CudaPackage, ROCmPackage):
 
     conflicts("hip@:4.4.0", when="+rocm")
 
+    depends_on("oneapi-level-zero@1.8.0:", when="+level_zero")
+
     # All flags should be passed to the build-env in autoconf-like vars
     flag_handler = env_flags
 

@@ -202,6 +216,10 @@ def install(self, spec, prefix):
                 "--with-ld-flags=" + self.compiler.cc_rpath_arg + spec["hip"].prefix.lib
             )
 
+        if "+level_zero" in spec:
+            options.append("--enable-ze")
+            options.append("--with-ze-home=" + spec["oneapi-level-zero"].prefix)
+
         env["GASNET_CONFIGURE_ARGS"] = "--enable-rpath " + env["GASNET_CONFIGURE_ARGS"]
 
         configure(*options)
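A usage note on the variant gating above: because each memory-kind variant carries a `when=` clause, the variant simply does not exist on older releases, so the concretizer rejects, for example, `+level_zero` on anything before `upcxx@2023.3.0`. A user would request the new support as `spack install upcxx+level_zero`, which pulls in `oneapi-level-zero@1.8.0:` via the new `depends_on` and is translated by the install step into the `--enable-ze` and `--with-ze-home` configure options shown in the last hunk.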