Compare commits

95 commits: `bugfix/ext...releases/v`
| SHA1 |
| --- |
| 45accfac15 |
| 320a974016 |
| b653ce09c8 |
| 23bf0a316c |
| 030bce9978 |
| ba9c8d4407 |
| 16052f9d1d |
| b32105f4da |
| 9c1c5c2936 |
| c8f7c78e73 |
| da50816127 |
| 19186a5e44 |
| de4cf49e95 |
| f79928d7d1 |
| 187f8e9f4a |
| 2536dd57a7 |
| 06a2c36a5a |
| 5e0d210734 |
| e3d4531663 |
| 9e8e72592d |
| 2d9fa60f53 |
| f3149a6c35 |
| 403ba23632 |
| d62c10ff76 |
| 3aa24e5b13 |
| c7200b4327 |
| 5b02b7003a |
| f83972ddc4 |
| fffca98a02 |
| 390112fc76 |
| 2f3f4ad4da |
| 0f9e07321f |
| 7593b18626 |
| e964a396c9 |
| 8d45404b5b |
| 7055061635 |
| 5e9799db4a |
| 4258fbbed3 |
| db8fcbbee4 |
| d33c990278 |
| 59dd405626 |
| dbbf7dc969 |
| 8a71aa874f |
| 0766f63182 |
| 380fedb7bc |
| 33cc47f6d3 |
| 5935f9c8a0 |
| a86911246a |
| cd94827c5f |
| bb8b4f9979 |
| fc7a16e77e |
| e633e57297 |
| 7b74fab12f |
| 005c7cd353 |
| 0f54a63dfd |
| f11778bb02 |
| 3437926cde |
| d25375da55 |
| 0b302034df |
| b9f69a8dfa |
| c3e9aeeed0 |
| 277234c044 |
| 0077a25639 |
| 6a3e20023e |
| f92987b11f |
| 61f198e8af |
| 4d90d663a3 |
| 7a7e9eb04f |
| 3ea4b53bf6 |
| ad0d908d8d |
| 9a793fe01b |
| 6dd3c78924 |
| 5b080d63fb |
| ea8e3c27a4 |
| 30ffd6d33e |
| c1aec72f60 |
| cfd0dc6d89 |
| 60b3d32072 |
| 5142ebdd57 |
| 6b782e6d7e |
| 168bced888 |
| 489de38890 |
| 2a20520cc8 |
| ae6213b193 |
| bb1cd430c0 |
| 36877abd02 |
| 62db008e42 |
| b10d75b1c6 |
| 078767946c |
| 9ca7165ef0 |
| d1d668a9d5 |
| 284c3a3fd8 |
| ec89c47aee |
| 49114ffff7 |
| 05fd39477e |

`.github/workflows/unit_tests.yaml` (vendored, 2 changes)

```diff
@@ -11,7 +11,7 @@ concurrency:
 jobs:
   # Run unit tests with different configurations on linux
   ubuntu:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-20.04
     strategy:
       matrix:
         python-version: ['2.7', '3.6', '3.7', '3.8', '3.9', '3.10', '3.11']
```

`CHANGELOG.md` (320 changes)

@@ -1,16 +1,330 @@

# v0.19.2 (2023-04-04)

### Spack Bugfixes

* Ignore global variant requirement for packages that do not define it (#35037)
* Compiler wrapper: improved parsing of linker arguments (#35929, #35912)
* Do not detect apple-clang as cce on macOS (#35974)
* Views: fix support for optional Python extensions (#35489)
* Views: fix issue where Python executable gets symlinked instead of copied (#34661)
* Fix a bug where tests were not added when concretizing together (#35290)
* Compiler flags: fix clang/apple-clang c/c++ standard flags (#35062)
* Increase db timeout from 3s to 60s to improve stability of parallel installs (#35517)
* Buildcache: improve error handling in downloads (#35568)
* Module files for packages installed from buildcache have long placeholder paths abbreviated in configure args section (#36611)
* Reduce verbosity of error messages regarding non-existing module files (#35502)
* Ensure file with build environment variables is truncated when writing to it (#35673)
* `spack config update` now works on active environments (#36542)
* Fix an issue where spack.yaml got reformatted incorrectly (#36698)
* Packages UPC++ and GASNet-EX were updated (#36629)


# v0.19.1 (2023-02-07)

### Spack Bugfixes

* `buildcache create`: make "file exists" less verbose (#35019)
* `spack mirror create`: don't change paths to urls (#34992)
* Improve error message for requirements (#33988)
* uninstall: fix accidental cubic complexity (#34005)
* scons: fix signature for `install_args` (#34481)
* Fix `combine_phase_logs` text encoding issues (#34657)
* Use a module-like object to propagate changes in the MRO when setting the build env (#34059)
* PackageBase should not define builder legacy attributes (#33942)
* Forward lookup of the "run_tests" attribute (#34531)
* Bugfix for timers (#33917, #33900)
* Fix path handling in prefix inspections (#35318)
* Fix libtool filter for Fujitsu compilers (#34916)
* Bugfix for duplicate rpath errors on macOS when creating build caches (#34375)
* FileCache: delete the new cache file on exception (#34623)
* Propagate exceptions from Spack python console (#34547)
* Tests: fix a bug/typo in a `config_values.py` fixture (#33886)
* Various CI fixes (#33953, #34560, #34828)
* Docs: remove monitors and analyzers, fix typos (#34358, #33926)
* Bump release version for tutorial command (#33859)


# v0.19.0 (2022-11-11)

`v0.19.0` is a major feature release.

## Major features in this release

1. **Package requirements**

   Spack's traditional [package preferences](
   https://spack.readthedocs.io/en/latest/build_settings.html#package-preferences)
   are soft, but we've added hard requirements to `packages.yaml` and `spack.yaml`
   (#32528, #32369). Package requirements use the same syntax as specs:

   ```yaml
   packages:
     libfabric:
       require: "@1.13.2"
     mpich:
       require:
       - one_of: ["+cuda", "+rocm"]
   ```

   More details in [the docs](
   https://spack.readthedocs.io/en/latest/build_settings.html#package-requirements).

2. **Environment UI Improvements**

   * Fewer surprising modifications to `spack.yaml` (#33711):

     * `spack install` in an environment will no longer add to the `specs:` list; you'll
       need to either use `spack add <spec>` or `spack install --add <spec>`.

     * Similarly, `spack uninstall` will not remove from your environment's `specs:`
       list; you'll need to use `spack remove` or `spack uninstall --remove`.

     This will make it easier to manage an environment, as there is clear separation
     between the stack to be installed (`spack.yaml`/`spack.lock`) and which parts of
     it should be installed (`spack install` / `spack uninstall`), as sketched below.

   * `concretizer:unify:true` is now the default mode for new environments (#31787)

     We see more users creating `unify:true` environments now. Users who need
     `unify:false` can add it to their environment to get the old behavior, which
     concretizes every spec in the environment independently.

   * Include environment configuration from URLs (#29026, [docs](
     https://spack.readthedocs.io/en/latest/environments.html#included-configurations))

     You can now include configuration in your environment directly from a URL:

     ```yaml
     spack:
       include:
       - https://github.com/path/to/raw/config/compilers.yaml
     ```
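
   As a sketch of the resulting workflow (the environment and package names here
   are illustrative, not from the release notes):

   ```console
   $ spack env activate myenv
   $ spack add hdf5                 # edits the specs: list in spack.yaml
   $ spack install                  # installs everything the environment lists
   $ spack install --add zlib       # one-step add-and-install, now explicit
   $ spack uninstall --remove zlib  # uninstall and drop it from specs:
   ```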

3. **Multiple Build Systems**

   An increasing number of packages in the ecosystem need the ability to support
   multiple build systems (#30738, [docs](
   https://spack.readthedocs.io/en/latest/packaging_guide.html#multiple-build-systems)),
   either across versions, across platforms, or within the same version of the software.
   This has been hard to support through multiple inheritance, as methods from different
   build system superclasses would conflict. `package.py` files can now define separate
   builder classes with installation logic for different build systems, e.g.:

   ```python
   class ArpackNg(CMakePackage, AutotoolsPackage):

       build_system(
           conditional("cmake", when="@0.64:"),
           conditional("autotools", when="@:0.63"),
           default="cmake",
       )

   class CMakeBuilder(spack.build_systems.cmake.CMakeBuilder):
       def cmake_args(self):
           pass

   class AutotoolsBuilder(spack.build_systems.autotools.AutotoolsBuilder):
       def configure_args(self):
           pass
   ```
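
   Since the build system is modeled as an ordinary variant here, it can also be
   picked on the command line; a sketch, assuming the `arpack-ng` example above:

   ```console
   $ spack spec arpack-ng build_system=autotools  # force the autotools builder
   $ spack spec arpack-ng@0.65                    # selects cmake via the conditionals
   ```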

4. **Compiler and variant propagation**

   Currently, compiler flags and variants are inconsistent: compiler flags set for a
   package are inherited by its dependencies, while variants are not. We should have
   these be consistent by allowing for inheritance to be enabled or disabled for both
   variants and compiler flags.

   Example syntax:

   - `package ++variant`:
     enabled variant that will be propagated to dependencies
   - `package +variant`:
     enabled variant that will NOT be propagated to dependencies
   - `package ~~variant`:
     disabled variant that will be propagated to dependencies
   - `package ~variant`:
     disabled variant that will NOT be propagated to dependencies
   - `package cflags==-g`:
     `cflags` will be propagated to dependencies
   - `package cflags=-g`:
     `cflags` will NOT be propagated to dependencies

   Syntax for non-boolean variants is similar to compiler flags. More in the docs for
   [variants](
   https://spack.readthedocs.io/en/latest/basic_usage.html#variants) and [compiler flags](
   https://spack.readthedocs.io/en/latest/basic_usage.html#compiler-flags).
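
   A sketch of propagation on the command line (`hdf5` and its `+mpi` variant are
   placeholders for illustration):

   ```console
   $ spack install hdf5 ++mpi       # +mpi applies to hdf5 and its dependencies
   $ spack install hdf5 cflags==-g  # -g propagates to every dependency's cflags
   ```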

5. **Enhancements to git version specifiers**

   * `v0.18.0` added the ability to use git commits as versions. You can now use the
     `git.` prefix to specify git tags or branches as versions. All of these are valid git
     versions in `v0.19` (#31200):

     ```console
     foo@abcdef1234abcdef1234abcdef1234abcdef1234      # raw commit
     foo@git.abcdef1234abcdef1234abcdef1234abcdef1234  # commit with git prefix
     foo@git.develop                                   # the develop branch
     foo@git.0.19                                      # use the 0.19 tag
     ```

   * `v0.19` also gives you more control over how Spack interprets git versions, in case
     Spack cannot detect the version from the git repository. You can suffix a git
     version with `=<version>` to force Spack to concretize it as a particular version
     (#30998, #31914, #32257):

     ```console
     # use mybranch, but treat it as version 3.2 for version comparison
     foo@git.mybranch=3.2

     # use the given commit, but treat it as develop for version comparison
     foo@git.abcdef1234abcdef1234abcdef1234abcdef1234=develop
     ```

   More in [the docs](
   https://spack.readthedocs.io/en/latest/basic_usage.html#version-specifier)

6. **Changes to Cray EX Support**

   Cray machines have historically had their own "platform" within Spack, because we
   needed to go through the module system to leverage compilers and MPI installations on
   these machines. The Cray EX programming environment now provides standalone `craycc`
   executables and proper `mpicc` wrappers, so Spack can treat EX machines like Linux
   with extra packages (#29392).

   We expect this to greatly reduce bugs, as external packages and compilers can now be
   used by prefix instead of through modules. We will also no longer be subject to
   reproducibility issues when modules change from Cray PE release to release and from
   site to site. This also simplifies dealing with the underlying Linux OS on Cray
   systems, as Spack will properly model the machine's OS as either SuSE or RHEL.

7. **Improvements to tests and testing in CI**

   * `spack ci generate --tests` will generate a `.gitlab-ci.yml` file that not only does
     builds but also runs tests for built packages (#27877). Public GitHub pipelines now
     also run tests in CI.

   * `spack test run --explicit` will only run tests for packages that are explicitly
     installed, instead of all packages.
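
   As a sketch, the corresponding commands:

   ```console
   $ spack ci generate --tests  # emit a .gitlab-ci.yml that also runs package tests
   $ spack test run --explicit  # test only explicitly installed packages
   ```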

8. **Experimental binding link model**

   You can add a new option to `config.yaml` to make Spack embed absolute paths to
   needed shared libraries in ELF executables and shared libraries on Linux (#31948, [docs](
   https://spack.readthedocs.io/en/latest/config_yaml.html#shared-linking-bind)):

   ```yaml
   config:
     shared_linking:
       type: rpath
       bind: true
   ```

   This can improve launch time at scale for parallel applications, and it can make
   installations less susceptible to environment variables like `LD_LIBRARY_PATH`,
   especially when dealing with external libraries that use `RUNPATH`. You can think of
   this as a faster, even higher-precedence version of `RPATH`.
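
   One hedged way to inspect the effect is the dynamic section of an installed
   library; the `zlib` spec below is illustrative:

   ```console
   $ readelf -d $(spack location -i zlib)/lib/libz.so | grep NEEDED
   ```

   With `bind: true`, the `NEEDED` entries should carry absolute paths rather than
   bare sonames.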

## Other new features of note

* `spack spec` prints dependencies more legibly. Dependencies in the output now appear
  at the *earliest* level of indentation possible (#33406)
* You can override `package.py` attributes like `url`, directly in `packages.yaml`
  (#33275, [docs](
  https://spack.readthedocs.io/en/latest/build_settings.html#assigning-package-attributes))
* There are a number of new architecture-related format strings you can use in Spack
  configuration files to specify paths (#29810, [docs](
  https://spack.readthedocs.io/en/latest/configuration.html#config-file-variables))
* Spack now supports bootstrapping Clingo on Windows (#33400)
* There is now support for an `RPATH`-like library model on Windows (#31930)

## Performance Improvements

* Major performance improvements for installation from binary caches (#27610, #33628,
  #33636, #33608, #33590, #33496)
* Test suite can now be parallelized using `xdist` (used in GitHub Actions) (#32361)
* Reduce lock contention for parallel builds in environments (#31643)

## New binary caches and stacks

* We now build nearly all of E4S with `oneapi` in our buildcache (#31781, #31804,
  #31803, #31840, #31991, #32117, #32107, #32239)
* Added 3 new machine learning-centric stacks to the binary cache: `x86_64_v3`, CUDA, ROCm
  (#31592, #33463)

## Removals and Deprecations

* Support for Python 3.5 is dropped (#31908). Only Python 2.7 and 3.6+ are officially
  supported.

* This is the last Spack release that will support Python 2 (#32615). Spack `v0.19`
  will emit a deprecation warning if you run it with Python 2, and Python 2 support will
  soon be removed from the `develop` branch.

* `LD_LIBRARY_PATH` is no longer set by default by `spack load` or module loads.

  Setting `LD_LIBRARY_PATH` in Spack environments/modules can cause binaries from
  outside of Spack to crash, and Spack's own builds use `RPATH` and do not need
  `LD_LIBRARY_PATH` set in order to run. If you still want the old behavior, you
  can run these commands to configure Spack to set `LD_LIBRARY_PATH`:

  ```console
  spack config add modules:prefix_inspections:lib64:[LD_LIBRARY_PATH]
  spack config add modules:prefix_inspections:lib:[LD_LIBRARY_PATH]
  ```

* The `spack:concretization:[together|separately]` option has been removed after being
  deprecated in `v0.18`. Use `concretizer:unify:[true|false]` instead; see the sketch
  after this list.
* `config:module_roots` is no longer supported after being deprecated in `v0.18`. Use
  configuration in module sets instead (#28659, [docs](
  https://spack.readthedocs.io/en/latest/module_file_support.html)).
* `spack activate` and `spack deactivate` are no longer supported, having been
  deprecated in `v0.18`. Use an environment with a view instead of
  activating/deactivating ([docs](
  https://spack.readthedocs.io/en/latest/environments.html#configuration-in-spack-yaml)).
* The old YAML format for buildcaches is now deprecated (#33707). If you are using an
  old buildcache with YAML metadata you will need to regenerate it with JSON metadata.
* `spack bootstrap trust` and `spack bootstrap untrust` are deprecated in favor of
  `spack bootstrap enable` and `spack bootstrap disable` and will be removed in `v0.20`
  (#33600).
* The `graviton2` architecture has been renamed to `neoverse_n1`, and `graviton3`
  is now `neoverse_v1`. Buildcaches using the old architecture names will need to be rebuilt.
* The terms `blacklist` and `whitelist` have been replaced with `include` and `exclude`
  in all configuration files (#31569). You can use `spack config update` to
  automatically fix your configuration files.
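
A hedged migration sketch for the removals above (the section name and value are
illustrative):

```console
# rewrite deprecated keys (such as blacklist/whitelist) in place
$ spack config update modules

# replace the removed spack:concretization option
$ spack config add concretizer:unify:true
```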

## Notable Bugfixes

* Permission setting on installation now handles effective uid properly (#19980)
* `buildable:true` for an MPI implementation now overrides `buildable:false` for `mpi` (#18269)
* Improved error messages when attempting to use an unconfigured compiler (#32084)
* Do not punish explicitly requested compiler mismatches in the solver (#30074)
* `spack stage`: add missing --fresh and --reuse (#31626)
* Fixes for adding build system executables like `cmake` to package scope (#31739)
* Bugfix for binary relocation with aliased strings produced by newer `binutils` (#32253)

## Spack community stats

* 6,751 total packages, 335 new since `v0.18.0`
  * 141 new Python packages
  * 89 new R packages
* 303 people contributed to this release
  * 287 committers to packages
  * 57 committers to core


# v0.18.1 (2022-07-19)

### Spack Bugfixes

* Fix several bugs related to bootstrapping (#30834, #31042, #31180)
* Fix a regression that was causing spec hashes to differ between
  Python 2 and Python 3 (#31092)
* Fixed compiler flags for oneAPI and DPC++ (#30856)
* Fixed several issues related to concretization (#31142, #31153, #31170, #31226)
* Improved support for Cray manifest file and `spack external find` (#31144, #31201, #31173, #31186)
* Assign a version to openSUSE Tumbleweed according to the GLIBC version
  in the system (#19895)
* Improved Dockerfile generation for `spack containerize` (#29741, #31321)
* Fixed a few bugs related to concurrent execution of commands (#31509, #31493, #31477)

### Package updates

* WarpX: add v22.06, fixed libs property (#30866, #31102)

```diff
@@ -10,8 +10,8 @@ For more on Spack's release structure, see
 | Version | Supported |
 | ------- | ------------------ |
 | develop | :white_check_mark: |
-| 0.17.x  | :white_check_mark: |
-| 0.16.x  | :white_check_mark: |
+| 0.19.x  | :white_check_mark: |
+| 0.18.x  | :white_check_mark: |

 ## Reporting a Vulnerability
```

```diff
@@ -176,7 +176,7 @@ config:
   # when Spack needs to manage its own package metadata and all operations are
   # expected to complete within the default time limit. The timeout should
   # therefore generally be left untouched.
-  db_lock_timeout: 3
+  db_lock_timeout: 60


   # How long to wait when attempting to modify a package (e.g. to install it).
```

@@ -1,162 +0,0 @@

.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)

.. _analyze:

=======
Analyze
=======


The analyze command is a front-end to various tools that let us analyze
package installations. Each analyzer is a module for a different kind
of analysis that can be done on a package installation, including (but not
limited to) binary, log, or text analysis. Thus, the analyze command group
allows you to take an existing package install, choose an analyzer,
and extract some output for the package using it.


-----------------
Analyzer Metadata
-----------------

For all analyzers, we write to an ``analyzers`` folder in ``~/.spack``, or the
value that you specify in your spack config at ``config:analyzers_dir``.
For example, here we see the results of running an analysis on zlib:

.. code-block:: console

   $ tree ~/.spack/analyzers/
   └── linux-ubuntu20.04-skylake
       └── gcc-9.3.0
           └── zlib-1.2.11-sl7m27mzkbejtkrajigj3a3m37ygv4u2
               ├── environment_variables
               │   └── spack-analyzer-environment-variables.json
               ├── install_files
               │   └── spack-analyzer-install-files.json
               └── libabigail
                   └── spack-analyzer-libabigail-libz.so.1.2.11.xml


This means that you can always find analyzer output in this folder, and it
is organized with the same logic as the package install it was run for.
If you want to customize this top-level folder, simply provide the ``--path``
argument to ``spack analyze run``. The nested organization will be maintained
within your custom root.
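
A hedged example of redirecting the output root (the path here is illustrative):

.. code-block:: console

   $ spack analyze run --path /tmp/spack-analysis zlib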

-----------------
Listing Analyzers
-----------------

If you aren't familiar with Spack's analyzers, you can quickly list those that
are available:

.. code-block:: console

   $ spack analyze list-analyzers
   install_files         : install file listing read from install_manifest.json
   environment_variables : environment variables parsed from spack-build-env.txt
   config_args           : config args loaded from spack-configure-args.txt
   libabigail            : Application Binary Interface (ABI) features for objects


In the above, the first three are fairly simple: they parse metadata files from
a package install directory and save the results.

-------------------
Analyzing a Package
-------------------

The analyze command, akin to install, will accept a package spec to perform
an analysis for. The package must be installed. Let's walk through an example
with zlib. We first ask to analyze it. However, since we have more than one
install, we are asked to disambiguate:

.. code-block:: console

   $ spack analyze run zlib
   ==> Error: zlib matches multiple packages.
     Matching packages:
       fz2bs56 zlib@1.2.11%gcc@7.5.0 arch=linux-ubuntu18.04-skylake
       sl7m27m zlib@1.2.11%gcc@9.3.0 arch=linux-ubuntu20.04-skylake
     Use a more specific spec.


We can then specify the spec version that we want to analyze:

.. code-block:: console

   $ spack analyze run zlib/fz2bs56

If you don't provide any specific analyzer names, by default all analyzers
(shown in the ``list-analyzers`` subcommand list) will be run. If an analyzer does not
have any result, it will be skipped. For example, here is a result running for
zlib:

.. code-block:: console

   $ ls ~/.spack/analyzers/linux-ubuntu20.04-skylake/gcc-9.3.0/zlib-1.2.11-sl7m27mzkbejtkrajigj3a3m37ygv4u2/
   spack-analyzer-environment-variables.json
   spack-analyzer-install-files.json
   spack-analyzer-libabigail-libz.so.1.2.11.xml

If you want to run a specific analyzer, ask for it with ``--analyzer``. Here we run
the ``libabigail`` analyzer on libabigail (already installed):

.. code-block:: console

   $ spack analyze run --analyzer abigail libabigail


.. _analyze_monitoring:

----------------------
Monitoring An Analysis
----------------------

For any kind of analysis, you can use a
`spack monitor <https://github.com/spack/spack-monitor>`_ ("Spackmon")
server to upload the same run metadata to. You can
follow the instructions in the `spack monitor documentation <https://spack-monitor.readthedocs.org>`_
to first create a server along with a username and token for yourself.
You can then use this guide to interact with the server.

You should first export your spack monitor token and username to the environment:

.. code-block:: console

   $ export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438
   $ export SPACKMON_USER=spacky


By default, the host for your server is expected to be at ``http://127.0.0.1``
with a prefix of ``ms1``, and if this is the case, you can simply add the
``--monitor`` flag to the ``spack analyze run`` command:

.. code-block:: console

   $ spack analyze run --monitor wget

If you need to customize the host or the prefix, you can do that as well:

.. code-block:: console

   $ spack analyze run --monitor --monitor-prefix monitor --monitor-host https://monitor-service.io wget

If your server doesn't have authentication, you can skip it:

.. code-block:: console

   $ spack analyze run --monitor --monitor-disable-auth wget

Regardless of your choice, when you run analyze on an installed package (whether
it was installed with ``--monitor`` or not), you'll see the results generating as they did
before, and a message that the monitor server was pinged:

.. code-block:: console

   $ spack analyze run --monitor wget
   ...
   ==> Sending result for wget bin/wget to monitor.

```diff
@@ -1114,21 +1114,21 @@ set of arbitrary versions, such as ``@1.0,1.5,1.7`` (``1.0``, ``1.5``,
 or ``1.7``). When you supply such a specifier to ``spack install``,
 it constrains the set of versions that Spack will install.

 For packages with a ``git`` attribute, ``git`` references
 may be specified instead of a numerical version i.e. branches, tags
 and commits. Spack will stage and build based off the ``git``
 reference provided. Acceptable syntaxes for this are:

 .. code-block:: sh

    # branches and tags
    foo@git.develop  # use the develop branch
    foo@git.0.19     # use the 0.19 tag

    # commit hashes
    foo@abcdef1234abcdef1234abcdef1234abcdef1234       # 40 character hashes are automatically treated as git commits
    foo@git.abcdef1234abcdef1234abcdef1234abcdef1234

 Spack versions from git reference either have an associated version supplied by the user,
 or infer a relationship to known versions from the structure of the git repository. If an
 associated version is supplied by the user, Spack treats the git version as equivalent to that
```

```diff
@@ -1244,8 +1244,8 @@ For example, for the ``stackstart`` variant:

 .. code-block:: sh

-   mpileaks stackstart=4   # variant will be propagated to dependencies
-   mpileaks stackstart==4  # only mpileaks will have this variant value
+   mpileaks stackstart==4  # variant will be propagated to dependencies
+   mpileaks stackstart=4   # only mpileaks will have this variant value

 ^^^^^^^^^^^^^^
 Compiler Flags
```

```diff
@@ -1672,9 +1672,13 @@ own install prefix. However, certain packages are typically installed
 `Python <https://www.python.org>`_ packages are typically installed in the
 ``$prefix/lib/python-2.7/site-packages`` directory.

-Spack has support for this type of installation as well. In Spack,
-a package that can live inside the prefix of another package is called
-an *extension*. Suppose you have Python installed like so:
+In Spack, installation prefixes are immutable, so this type of installation
+is not directly supported. However, it is possible to create views that
+allow you to merge install prefixes of multiple packages into a single new prefix.
+Views are a convenient way to get a more traditional filesystem structure.
+Using *extensions*, you can ensure that Python packages always share the
+same prefix in the view as Python itself. Suppose you have
+Python installed like so:

 .. code-block:: console
@@ -1712,8 +1716,6 @@ You can find extensions for your Python installation like this:
     py-ipython@2.3.1     py-pygments@2.0.1    py-setuptools@11.3.1
     py-matplotlib@1.4.2  py-pyparsing@2.0.3   py-six@1.9.0

-==> None activated.
-
 The extensions are a subset of what's returned by ``spack list``, and
 they are packages like any other. They are installed into their own
 prefixes, and you can see this with ``spack find --paths``:
@@ -1741,32 +1743,72 @@ directly when you run ``python``:
     ImportError: No module named numpy
     >>>

-^^^^^^^^^^^^^^^^
-Using Extensions
-^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Using Extensions in Environments
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-There are four ways to get ``numpy`` working in Python. The first is
-to use :ref:`shell-support`. You can simply ``load`` the extension,
-and it will be added to the ``PYTHONPATH`` in your current shell:
+The recommended way of working with extensions such as ``py-numpy``
+above is through :ref:`Environments <environments>`. For example,
+the following creates an environment in the current working directory
+with a filesystem view in the ``./view`` directory:

 .. code-block:: console

-   $ spack load python
-   $ spack load py-numpy
+   $ spack env create --with-view view --dir .
+   $ spack -e . add py-numpy
+   $ spack -e . concretize
+   $ spack -e . install

+We recommend environments for two reasons. Firstly, environments
+can be activated (requires :ref:`shell-support`):
+
+.. code-block:: console
+
+   $ spack env activate .
+
+which sets all the right environment variables such as ``PATH`` and
+``PYTHONPATH``. This ensures that
+
+.. code-block:: console
+
+   $ python
+   >>> import numpy
+
+works. Secondly, even without shell support, the view ensures
+that Python can locate its extensions:
+
+.. code-block:: console
+
+   $ ./view/bin/python
+   >>> import numpy
+
+See :ref:`environments` for a more in-depth description of Spack
+environments and customizations to views.
+
+^^^^^^^^^^^^^^^^^^^^
+Using ``spack load``
+^^^^^^^^^^^^^^^^^^^^
+
+A more traditional way of using Spack and extensions is ``spack load``
+(requires :ref:`shell-support`). This will add the extension to ``PYTHONPATH``
+in your current shell, and Python itself will be available in the ``PATH``:
+
+.. code-block:: console
+
+   $ spack load py-numpy
+   $ python
+   >>> import numpy
+
+Now ``import numpy`` will succeed for as long as you keep your current
+session open.
+The loaded packages can be checked using ``spack find --loaded``.

 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 Loading Extensions via Modules
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Instead of using Spack's environment modification capabilities through
-the ``spack load`` command, you can load numpy through your
-environment modules (using ``environment-modules`` or ``lmod``). This
-will also add the extension to the ``PYTHONPATH`` in your current
-shell.
+Apart from ``spack env activate`` and ``spack load``, you can load numpy
+through your environment modules (using ``environment-modules`` or
+``lmod``). This will also add the extension to the ``PYTHONPATH`` in
+your current shell.

 .. code-block:: console
@@ -1776,130 +1818,6 @@ If you do not know the name of the specific numpy module you wish to
 load, you can use the ``spack module tcl|lmod loads`` command to get
 the name of the module from the Spack spec.

-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Activating Extensions in a View
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Another way to use extensions is to create a view, which merges the
-python installation along with the extensions into a single prefix.
-See :ref:`configuring_environment_views` for a more in-depth description
-of views.
-
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Activating Extensions Globally
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-As an alternative to creating a merged prefix with Python and its extensions,
-and prior to support for views, Spack has provided a means to install the
-extension into the Spack installation prefix for the extendee. This has
-typically been useful since extendable packages typically search their own
-installation path for addons by default.
-
-Global activations are performed with the ``spack activate`` command:
-
-.. _cmd-spack-activate:
-
-^^^^^^^^^^^^^^^^^^
-``spack activate``
-^^^^^^^^^^^^^^^^^^
-
-.. code-block:: console
-
-   $ spack activate py-numpy
-   ==> Activated extension py-setuptools@11.3.1%gcc@4.4.7 arch=linux-debian7-x86_64-3c74eb69 for python@2.7.8%gcc@4.4.7.
-   ==> Activated extension py-nose@1.3.4%gcc@4.4.7 arch=linux-debian7-x86_64-5f70f816 for python@2.7.8%gcc@4.4.7.
-   ==> Activated extension py-numpy@1.9.1%gcc@4.4.7 arch=linux-debian7-x86_64-66733244 for python@2.7.8%gcc@4.4.7.
-
-Several things have happened here. The user requested that
-``py-numpy`` be activated in the ``python`` installation it was built
-with. Spack knows that ``py-numpy`` depends on ``py-nose`` and
-``py-setuptools``, so it activated those packages first. Finally,
-once all dependencies were activated in the ``python`` installation,
-``py-numpy`` was activated as well.
-
-If we run ``spack extensions`` again, we now see the three new
-packages listed as activated:
-
-.. code-block:: console
-
-   $ spack extensions python
-   ==> python@2.7.8%gcc@4.4.7 arch=linux-debian7-x86_64-703c7a96
-   ==> 36 extensions:
-      geos          py-ipython     py-pexpect    py-pyside            py-sip
-      py-basemap    py-libxml2     py-pil        py-pytz              py-six
-      py-biopython  py-mako        py-pmw        py-rpy2              py-sympy
-      py-cython     py-matplotlib  py-pychecker  py-scientificpython  py-virtualenv
-      py-dateutil   py-mpi4py      py-pygments   py-scikit-learn
-      py-epydoc     py-mx          py-pylint     py-scipy
-      py-gnuplot    py-nose        py-pyparsing  py-setuptools
-      py-h5py       py-numpy       py-pyqt       py-shiboken
-
-   ==> 12 installed:
-   -- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
-   py-dateutil@2.4.0    py-nose@1.3.4       py-pyside@1.2.2
-   py-dateutil@2.4.0    py-numpy@1.9.1      py-pytz@2014.10
-   py-ipython@2.3.1     py-pygments@2.0.1   py-setuptools@11.3.1
-   py-matplotlib@1.4.2  py-pyparsing@2.0.3  py-six@1.9.0
-
-   ==> 3 currently activated:
-   -- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
-   py-nose@1.3.4  py-numpy@1.9.1  py-setuptools@11.3.1
-
-Now, when a user runs python, ``numpy`` will be available for import
-*without* the user having to explicitly load it. ``python@2.7.8`` now
-acts like a system Python installation with ``numpy`` installed inside
-of it.
-
-Spack accomplishes this by symbolically linking the *entire* prefix of
-the ``py-numpy`` package into the prefix of the ``python`` package. To the
-python interpreter, it looks like ``numpy`` is installed in the
-``site-packages`` directory.
-
-The only limitation of global activation is that you can only have a *single*
-version of an extension activated at a time. This is because multiple
-versions of the same extension would conflict if symbolically linked
-into the same prefix. Users who want a different version of a package
-can still get it by using environment modules or views, but they will have to
-explicitly load their preferred version.
-
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-``spack activate --force``
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-If, for some reason, you want to activate a package *without* its
-dependencies, you can use ``spack activate --force``:
-
-.. code-block:: console
-
-   $ spack activate --force py-numpy
-   ==> Activated extension py-numpy@1.9.1%gcc@4.4.7 arch=linux-debian7-x86_64-66733244 for python@2.7.8%gcc@4.4.7.
-
-.. _cmd-spack-deactivate:
-
-^^^^^^^^^^^^^^^^^^^^
-``spack deactivate``
-^^^^^^^^^^^^^^^^^^^^
-
-We've seen how activating an extension can be used to set up a default
-version of a Python module. Obviously, you may want to change that at
-some point. ``spack deactivate`` is the command for this. There are
-several variants:
-
-* ``spack deactivate <extension>`` will deactivate a single
-  extension. If another activated extension depends on this one,
-  Spack will warn you and exit with an error.
-* ``spack deactivate --force <extension>`` deactivates an extension
-  regardless of packages that depend on it.
-* ``spack deactivate --all <extension>`` deactivates an extension and
-  all of its dependencies. Use ``--force`` to disregard dependents.
-* ``spack deactivate --all <extendee>`` deactivates *all* activated
-  extensions of a package. For example, to deactivate *all* python
-  extensions, use:
-
-.. code-block:: console
-
-   $ spack deactivate --all python

 -----------------------
 Filesystem requirements
 -----------------------
```

```diff
@@ -724,10 +724,9 @@ extends vs. depends_on

 This is very similar to the naming dilemma above, with a slight twist.
 As mentioned in the :ref:`Packaging Guide <packaging_extensions>`,
-``extends`` and ``depends_on`` are very similar, but ``extends`` adds
-the ability to *activate* the package. Activation involves symlinking
-everything in the installation prefix of the package to the installation
-prefix of Python. This allows the user to import a Python module without
+``extends`` and ``depends_on`` are very similar, but ``extends`` ensures
+that the extension and extendee share the same prefix in views.
+This allows the user to import a Python module without
 having to add that module to ``PYTHONPATH``.

 When deciding between ``extends`` and ``depends_on``, the best rule of
@@ -735,7 +734,7 @@ thumb is to check the installation prefix. If Python libraries are
 installed to ``<prefix>/lib/pythonX.Y/site-packages``, then you
 should use ``extends``. If Python libraries are installed elsewhere
 or the only files that get installed reside in ``<prefix>/bin``, then
-don't use ``extends``, as symlinking the package wouldn't be useful.
+don't use ``extends``.

 ^^^^^^^^^^^^^^^^^^^^^
 Alternatives to Spack
```

```diff
@@ -193,10 +193,10 @@ Build system dependencies

 As an extension of the R ecosystem, your package will obviously depend
 on R to build and run. Normally, we would use ``depends_on`` to express
-this, but for R packages, we use ``extends``. ``extends`` is similar to
-``depends_on``, but adds an additional feature: the ability to "activate"
-the package by symlinking it to the R installation directory. Since
-every R package needs this, the ``RPackage`` base class contains:
+this, but for R packages, we use ``extends``. This implies a special
+dependency on R, which is used to set environment variables such as
+``R_LIBS`` uniformly. Since every R package needs this, the ``RPackage``
+base class contains:

 .. code-block:: python
```

```diff
@@ -253,27 +253,6 @@ to update them.
 multiple runs of ``spack style`` just to re-compute line numbers and
 makes it much easier to fix errors directly off of the CI output.

-.. warning::
-
-   Flake8 and ``pep8-naming`` require a number of dependencies in order
-   to run. If you installed ``py-flake8`` and ``py-pep8-naming``, the
-   easiest way to ensure the right packages are on your ``PYTHONPATH`` is
-   to run::
-
-     spack activate py-flake8
-     spack activate pep8-naming
-
-   so that all of the dependencies are symlinked to a central
-   location. If you see an error message like:
-
-   .. code-block:: console
-
-      Traceback (most recent call last):
-        File: "/usr/bin/flake8", line 5, in <module>
-          from pkg_resources import load_entry_point
-      ImportError: No module named pkg_resources
-
-   that means Flake8 couldn't find setuptools in your ``PYTHONPATH``.
-
 ^^^^^^^^^^^^^^^^^^^
 Documentation Tests
@@ -309,13 +288,9 @@ All of these can be installed with Spack, e.g.

 .. code-block:: console

-   $ spack activate py-sphinx
-   $ spack activate py-sphinx-rtd-theme
-   $ spack activate py-sphinxcontrib-programoutput
+   $ spack load py-sphinx py-sphinx-rtd-theme py-sphinxcontrib-programoutput

-so that all of the dependencies are symlinked into that Python's
-tree. Alternatively, you could arrange for their library
-directories to be added to PYTHONPATH. If you see an error message
+so that all of the dependencies are added to PYTHONPATH. If you see an error message
 like:

 .. code-block:: console
```

```diff
@@ -233,8 +233,8 @@ packages will be listed as roots of the Environment.

 All of the Spack commands that act on the list of installed specs are
 Environment-sensitive in this way, including ``install``,
-``uninstall``, ``activate``, ``deactivate``, ``find``, ``extensions``,
-and more. In the :ref:`environment-configuration` section we will discuss
+``uninstall``, ``find``, ``extensions``, and more. In the
+:ref:`environment-configuration` section we will discuss
 Environment-sensitive commands further.

 ^^^^^^^^^^^^^^^^^^^^^
```

```diff
@@ -67,7 +67,6 @@ or refer to the full manual below.
    build_settings
    environments
    containers
-   monitoring
    mirrors
    module_file_support
    repositories
@@ -78,12 +77,6 @@ or refer to the full manual below.
    extensions
    pipelines

-.. toctree::
-   :maxdepth: 2
-   :caption: Research
-
-   analyze
-
 .. toctree::
    :maxdepth: 2
    :caption: Contributing
```

@@ -1,265 +0,0 @@

.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
   Spack Project Developers. See the top-level COPYRIGHT file for details.

   SPDX-License-Identifier: (Apache-2.0 OR MIT)

.. _monitoring:

==========
Monitoring
==========

You can use a `spack monitor <https://github.com/spack/spack-monitor>`_ ("Spackmon")
server to store a database of your packages, builds, and associated metadata
for provenance, research, or some other kind of development. You should
follow the instructions in the `spack monitor documentation <https://spack-monitor.readthedocs.org>`_
to first create a server along with a username and token for yourself.
You can then use this guide to interact with the server.

-------------------
Analysis Monitoring
-------------------

To read about how to monitor an analysis (meaning you want to send analysis results
to a server), see :ref:`analyze_monitoring`.

---------------------
Monitoring An Install
---------------------

Since an install is typically when you build packages, we logically want
to tell spack to monitor during this step. Let's start with an example
where we want to monitor the install of hdf5. Unless you have disabled authentication
for the server, we first want to export our spack monitor token and username to the environment:

.. code-block:: console

   $ export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438
   $ export SPACKMON_USER=spacky


By default, the host for your server is expected to be at ``http://127.0.0.1``
with a prefix of ``ms1``, and if this is the case, you can simply add the
``--monitor`` flag to the install command:

.. code-block:: console

   $ spack install --monitor hdf5


If you need to customize the host or the prefix, you can do that as well:

.. code-block:: console

   $ spack install --monitor --monitor-prefix monitor --monitor-host https://monitor-service.io hdf5


As a precaution, we cut out early in the spack client if you have not provided
authentication credentials. For example, if you run the command above without
exporting your username or token, you'll see:

.. code-block:: console

   ==> Error: You are required to export SPACKMON_TOKEN and SPACKMON_USER

This extra check is to ensure that we don't start any builds,
and then discover that you forgot to export your token. However, if
your monitoring server has authentication disabled, you can tell this to
the client to skip this step:

.. code-block:: console

   $ spack install --monitor --monitor-disable-auth hdf5

If the service is not running, you'll cleanly exit early; the install will
not continue if you've asked it to monitor and there is no service.
For example, here is what you'll see if the monitoring service is not running:

.. code-block:: console

   [Errno 111] Connection refused


If you want to continue builds (and stop monitoring) you can set the ``--monitor-keep-going``
flag.

.. code-block:: console

   $ spack install --monitor --monitor-keep-going hdf5

This could mean that if a request fails, you only have partial or no data
added to your monitoring database. This setting will not be applied to the
first request to check if the server is running, but to subsequent requests.
If you don't have a monitor server running and you want to build, simply
don't provide the ``--monitor`` flag! Finally, if you want to provide one or
more tags to your build, you can do:

.. code-block:: console

   # Add one tag, "pizza"
   $ spack install --monitor --monitor-tags pizza hdf5

   # Add two tags, "pizza" and "pasta"
   $ spack install --monitor --monitor-tags pizza,pasta hdf5


----------------------------
Monitoring with Containerize
----------------------------

The same argument group is available to add to a containerize command.

^^^^^^
Docker
^^^^^^

To add monitoring to a Docker container recipe generation using the defaults,
and assuming a monitor server running on localhost, you would
start with a spack.yaml in your present working directory:

.. code-block:: yaml

   spack:
     specs:
       - samtools

And then do:

.. code-block:: console

   # preview first
   spack containerize --monitor

   # and then write to a Dockerfile
   spack containerize --monitor > Dockerfile


The install command will be edited to include commands for enabling monitoring.
However, getting secrets into the container for your monitor server is something
that should be done carefully. Specifically you should:

- Never try to define secrets as ENV, ARG, or using ``--build-arg``
- Do not try to get the secret into the container via a "temporary" file that you remove (it in fact will still exist in a layer)

Instead, it's recommended to use buildkit `as explained here <https://pythonspeed.com/articles/docker-build-secrets/>`_.
You'll need to again export environment variables for your spack monitor server:

.. code-block:: console

   $ export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438
   $ export SPACKMON_USER=spacky

And then use buildkit along with your build and identifying the name of the secret:

.. code-block:: console

   $ DOCKER_BUILDKIT=1 docker build --secret id=st,env=SPACKMON_TOKEN --secret id=su,env=SPACKMON_USER -t spack/container .

The secrets are expected to come from your environment, and then will be temporarily mounted and available
at ``/run/secrets/<name>``. If you forget to supply them (and authentication is required) the build
will fail. If you need to build on your host (and interact with a spack monitor at localhost) you'll
need to tell Docker to use the host network:

.. code-block:: console

   $ DOCKER_BUILDKIT=1 docker build --network="host" --secret id=st,env=SPACKMON_TOKEN --secret id=su,env=SPACKMON_USER -t spack/container .


^^^^^^^^^^^
Singularity
^^^^^^^^^^^

To add monitoring to a Singularity container build, the spack.yaml needs to
be modified slightly to specify wanting a different format:

.. code-block:: yaml

   spack:
     specs:
       - samtools
     container:
       format: singularity


Again, generate the recipe:

.. code-block:: console

   # preview first
   $ spack containerize --monitor

   # then write to a Singularity recipe
   $ spack containerize --monitor > Singularity


Singularity doesn't have a direct way to define secrets at build time, so we have
to do a bit of a manual command to add a file, source secrets in it, and remove it.
Since Singularity doesn't have layers like Docker, deleting a file will truly
remove it from the container and history. So let's say we have this file,
``secrets.sh``:

.. code-block:: console

   # secrets.sh
   export SPACKMON_USER=spack
   export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438


We would then generate the Singularity recipe, and add a files section,
a source of that file at the start of ``%post``, and **importantly**
a removal of the file at the end of that same section.

.. code-block::

   Bootstrap: docker
   From: spack/ubuntu-bionic:latest
   Stage: build

   %files
     secrets.sh /opt/secrets.sh

   %post
     . /opt/secrets.sh

     # spack install commands are here
     ...

     # Don't forget to remove here!
     rm /opt/secrets.sh


You can then build the container as you normally would.

.. code-block:: console

   $ sudo singularity build container.sif Singularity


------------------
Monitoring Offline
------------------

In the case that you want to save monitor results to your filesystem
and then upload them later (perhaps you are in an environment where you don't
have credentials or it isn't safe to use them) you can use the ``--monitor-save-local``
flag.

.. code-block:: console

   $ spack install --monitor --monitor-save-local hdf5

This will save results in a subfolder, "monitor", in your designated spack
reports folder, which defaults to ``$HOME/.spack/reports/monitor``. When
you are ready to upload them to a spack monitor server:

.. code-block:: console

   $ spack monitor upload ~/.spack/reports/monitor


You can choose the root directory of results as shown above, or a specific
subdirectory. The command accepts other arguments to specify configuration
for the monitor.
@@ -2634,9 +2634,12 @@ extendable package:
|
||||
extends('python')
|
||||
...
|
||||
|
||||
Now, the ``py-numpy`` package can be used as an argument to ``spack
|
||||
activate``. When it is activated, all the files in its prefix will be
|
||||
symbolically linked into the prefix of the python package.
|
||||
This accomplishes a few things. Firstly, the Python package can set special
|
||||
variables such as ``PYTHONPATH`` for all extensions when the run or build
|
||||
environment is set up. Secondly, filesystem views can ensure that extensions
|
||||
are put in the same prefix as their extendee. This ensures that Python in
|
||||
a view can always locate its Python packages, even without environment
|
||||
variables set.
|
||||
|
||||
A package can only extend one other package at a time. To support packages
|
||||
that may extend one of a list of other packages, Spack supports multiple
|
||||
@@ -2684,9 +2687,8 @@ variant(s) are selected. This may be accomplished with conditional
|
||||
...
|
||||
|
||||
Sometimes, certain files in one package will conflict with those in
|
||||
another, which means they cannot both be activated (symlinked) at the
|
||||
same time. In this case, you can tell Spack to ignore those files
|
||||
when it does the activation:
|
||||
another, which means they cannot both be used in a view at the
|
||||
same time. In this case, you can tell Spack to ignore those files:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
@@ -2698,7 +2700,7 @@ when it does the activation:
|
||||
...
|
||||
|
||||
The code above will prevent everything in the ``$prefix/bin/`` directory
|
||||
from being linked in at activation time.
|
||||
from being linked in a view.

 .. note::

@@ -2722,67 +2724,6 @@ extensions; as a consequence python extension packages (those inheriting from
 ``PythonPackage``) likewise override ``add_files_to_view`` in order to rewrite
 shebang lines which point to the Python interpreter.

-^^^^^^^^^^^^^^^^^^^^^^^^^
-Activation & deactivation
-^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Adding an extension to a view is referred to as an activation. If the view is
-maintained in the Spack installation prefix of the extendee this is called a
-global activation. Activations may involve updating some centralized state
-that is maintained by the extendee package, so there can be additional work
-for adding extensions compared with non-extension packages.
-
-Spack's ``Package`` class has default ``activate`` and ``deactivate``
-implementations that handle symbolically linking extensions' prefixes
-into a specified view. Extendable packages can override these methods
-to add custom activate/deactivate logic of their own. For example,
-the ``activate`` and ``deactivate`` methods in the Python class handle
-symbolic linking of extensions, but they also handle details surrounding
-Python's ``.pth`` files, and other aspects of Python packaging.
-
-Spack's extensions mechanism is designed to be extensible, so that
-other packages (like Ruby, R, Perl, etc.) can provide their own
-custom extension management logic, as they may not handle modules the
-same way that Python does.
-
-Let's look at Python's activate function:
-
-.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/python/package.py
-   :pyobject: Python.activate
-   :linenos:
-
-This function is called on the *extendee* (Python). It first calls
-``activate`` in the superclass, which handles symlinking the
-extension package's prefix into the specified view. It then does
-some special handling of the ``easy-install.pth`` file, part of
-Python's setuptools.
-
-Deactivate behaves similarly to activate, but it unlinks files:
-
-.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/python/package.py
-   :pyobject: Python.deactivate
-   :linenos:
-
-Both of these methods call some custom functions in the Python
-package. See the source for Spack's Python package for details.
-
-^^^^^^^^^^^^^^^^^^^^
-Activation arguments
-^^^^^^^^^^^^^^^^^^^^
-
-You may have noticed that the ``activate`` function defined above
-takes keyword arguments. These are the keyword arguments from
-``extends()``, and they are passed to both activate and deactivate.
-
-This capability allows an extension to customize its own activation by
-passing arguments to the extendee. Extendees can likewise implement
-custom ``activate()`` and ``deactivate()`` functions to suit their
-needs.
-
-The only keyword argument supported by default is the ``ignore``
-argument, which can take a regex, list of regexes, or a predicate to
-determine which files *not* to symlink during activation.

 .. _virtual-dependencies:

 --------------------

@@ -3584,7 +3525,7 @@ will likely contain some overriding of default builder methods:
        def cmake_args(self):
            pass

-    class Autotoolsbuilder(spack.build_systems.autotools.AutotoolsBuilder):
+    class AutotoolsBuilder(spack.build_systems.autotools.AutotoolsBuilder):
        def configure_args(self):
            pass

178  lib/spack/env/cc  vendored
@@ -427,6 +427,55 @@ isystem_include_dirs_list=""
 libs_list=""
 other_args_list=""

+# Global state for keeping track of -Wl,-rpath -Wl,/path
+wl_expect_rpath=no
+
+# Same, but for -Xlinker -rpath -Xlinker /path
+xlinker_expect_rpath=no
+
+parse_Wl() {
+    # drop -Wl
+    shift
+    while [ $# -ne 0 ]; do
+        if [ "$wl_expect_rpath" = yes ]; then
+            if system_dir "$1"; then
+                append system_rpath_dirs_list "$1"
+            else
+                append rpath_dirs_list "$1"
+            fi
+            wl_expect_rpath=no
+        else
+            case "$1" in
+                -rpath=*)
+                    arg="${1#-rpath=}"
+                    if system_dir "$arg"; then
+                        append system_rpath_dirs_list "$arg"
+                    else
+                        append rpath_dirs_list "$arg"
+                    fi
+                    ;;
+                --rpath=*)
+                    arg="${1#--rpath=}"
+                    if system_dir "$arg"; then
+                        append system_rpath_dirs_list "$arg"
+                    else
+                        append rpath_dirs_list "$arg"
+                    fi
+                    ;;
+                -rpath|--rpath)
+                    wl_expect_rpath=yes
+                    ;;
+                "$dtags_to_strip")
+                    ;;
+                *)
+                    append other_args_list "-Wl,$1"
+                    ;;
+            esac
+        fi
+        shift
+    done
+}
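
A minimal sketch of how this function is driven (the call site appears in the
next hunk; the input string below is hypothetical):

    set -- "-Wl,-rpath,/opt/example/lib"
    IFS=,
    parse_Wl $1   # word-splits on commas: parse_Wl -Wl -rpath /opt/example/lib
    unset IFS
    # parse_Wl drops the leading -Wl, sees -rpath, and records
    # /opt/example/lib in rpath_dirs_list (or system_rpath_dirs_list)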


 while [ $# -ne 0 ]; do

@@ -485,88 +534,77 @@ while [ $# -ne 0 ]; do
            append other_args_list "-l$arg"
            ;;
        -Wl,*)
-            arg="${1#-Wl,}"
-            if [ -z "$arg" ]; then shift; arg="$1"; fi
-            case "$arg" in
-                -rpath=*) rp="${arg#-rpath=}" ;;
-                --rpath=*) rp="${arg#--rpath=}" ;;
-                -rpath,*) rp="${arg#-rpath,}" ;;
-                --rpath,*) rp="${arg#--rpath,}" ;;
-                -rpath|--rpath)
-                    shift; arg="$1"
-                    case "$arg" in
-                        -Wl,*)
-                            rp="${arg#-Wl,}"
-                            ;;
-                        *)
-                            die "-Wl,-rpath was not followed by -Wl,*"
-                            ;;
-                    esac
-                    ;;
-                "$dtags_to_strip")
-                    : # We want to remove explicitly this flag
-                    ;;
-                *)
-                    append other_args_list "-Wl,$arg"
-                    ;;
-            esac
-            ;;
-        -Xlinker,*)
-            arg="${1#-Xlinker,}"
-            if [ -z "$arg" ]; then shift; arg="$1"; fi
-
-            case "$arg" in
-                -rpath=*) rp="${arg#-rpath=}" ;;
-                --rpath=*) rp="${arg#--rpath=}" ;;
-                -rpath|--rpath)
-                    shift; arg="$1"
-                    case "$arg" in
-                        -Xlinker,*)
-                            rp="${arg#-Xlinker,}"
-                            ;;
-                        *)
-                            die "-Xlinker,-rpath was not followed by -Xlinker,*"
-                            ;;
-                    esac
-                    ;;
-                *)
-                    append other_args_list "-Xlinker,$arg"
-                    ;;
-            esac
+            IFS=,
+            parse_Wl $1
+            unset IFS
            ;;
        -Xlinker)
-            if [ "$2" = "-rpath" ]; then
-                if [ "$3" != "-Xlinker" ]; then
-                    die "-Xlinker,-rpath was not followed by -Xlinker,*"
-                fi
-                shift 3;
-                rp="$1"
-            elif [ "$2" = "$dtags_to_strip" ]; then
-                shift # We want to remove explicitly this flag
+            shift
+            if [ $# -eq 0 ]; then
+                # -Xlinker without value: let the compiler error about it.
+                append other_args_list -Xlinker
+                xlinker_expect_rpath=no
+                break
+            elif [ "$xlinker_expect_rpath" = yes ]; then
+                # Register the path of -Xlinker -rpath <other args> -Xlinker <path>
+                if system_dir "$1"; then
+                    append system_rpath_dirs_list "$1"
+                else
+                    append rpath_dirs_list "$1"
+                fi
+                xlinker_expect_rpath=no
            else
-                append other_args_list "$1"
+                case "$1" in
+                    -rpath=*)
+                        arg="${1#-rpath=}"
+                        if system_dir "$arg"; then
+                            append system_rpath_dirs_list "$arg"
+                        else
+                            append rpath_dirs_list "$arg"
+                        fi
+                        ;;
+                    --rpath=*)
+                        arg="${1#--rpath=}"
+                        if system_dir "$arg"; then
+                            append system_rpath_dirs_list "$arg"
+                        else
+                            append rpath_dirs_list "$arg"
+                        fi
+                        ;;
+                    -rpath|--rpath)
+                        xlinker_expect_rpath=yes
+                        ;;
+                    "$dtags_to_strip")
+                        ;;
+                    *)
+                        append other_args_list -Xlinker
+                        append other_args_list "$1"
+                        ;;
+                esac
            fi
            ;;
+        "$dtags_to_strip")
+            ;;
        *)
-            if [ "$1" = "$dtags_to_strip" ]; then
-                : # We want to remove explicitly this flag
-            else
-                append other_args_list "$1"
-            fi
+            append other_args_list "$1"
            ;;
    esac

-    # test rpaths against system directories in one place.
-    if [ -n "$rp" ]; then
-        if system_dir "$rp"; then
-            append system_rpath_dirs_list "$rp"
-        else
-            append rpath_dirs_list "$rp"
-        fi
-    fi
    shift
 done

+# We found `-Xlinker -rpath` but no matching value `-Xlinker /path`. Just append
+# `-Xlinker -rpath` again and let the compiler or linker handle the error during arg
+# parsing.
+if [ "$xlinker_expect_rpath" = yes ]; then
+    append other_args_list -Xlinker
+    append other_args_list -rpath
+fi
+
+# Same, but for -Wl flags.
+if [ "$wl_expect_rpath" = yes ]; then
+    append other_args_list -Wl,-rpath
+fi

 #
 # Add flags from Spack's cppflags, cflags, cxxflags, fcflags, fflags, and
 # ldflags. We stick to the order that gmake puts the flags in by default.

@@ -1000,45 +1000,16 @@ def hash_directory(directory, ignore=[]):
     return md5_hash.hexdigest()


-def _try_unlink(path):
-    try:
-        os.unlink(path)
-    except (IOError, OSError):
-        # But if that fails, that's OK.
-        pass
-
-
 @contextmanager
-@system_path_filter
-def write_tmp_and_move(path, mode="w"):
-    """Write to a temporary file in the same directory, then move into place."""
-    # Rely on NamedTemporaryFile to give a unique file without races
-    # in the directory of the target file.
-    file = tempfile.NamedTemporaryFile(
-        prefix="." + os.path.basename(path),
-        suffix=".tmp",
-        dir=os.path.dirname(path),
-        mode=mode,
-        delete=False,  # we delete it ourselves
-    )
-    tmp_path = file.name
-
-    try:
-        yield file
-    except BaseException:
-        # On any failure, try to remove the temporary file.
-        _try_unlink(tmp_path)
-        raise
-    finally:
-        # Always close the file descriptor
-        file.close()
-
-    # Atomically move into existence.
-    try:
-        os.rename(tmp_path, path)
-    except (IOError, OSError):
-        _try_unlink(tmp_path)
-        raise
+def write_tmp_and_move(filename):
+    """Write to a temporary file, then move into place."""
+    dirname = os.path.dirname(filename)
+    basename = os.path.basename(filename)
+    tmp = os.path.join(dirname, ".%s.tmp" % basename)
+    with open(tmp, "w") as f:
+        yield f
+    shutil.move(tmp, filename)
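
A minimal usage sketch (the path is hypothetical; the function is a context
manager, per the ``@contextmanager`` decorator above):

    with write_tmp_and_move("/tmp/example/config.yaml") as f:  # hypothetical path
        f.write("config: {}\n")
    # the file appears at its final path only after the block exits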


 @contextmanager

@@ -2618,3 +2589,28 @@ def temporary_dir(*args, **kwargs):
         yield tmp_dir
     finally:
         remove_directory_contents(tmp_dir)
+
+
+def filesummary(path, print_bytes=16):
+    """Create a small summary of the given file. Does not error
+    when file does not exist.
+
+    Args:
+        print_bytes (int): Number of bytes to print from start/end of file
+
+    Returns:
+        Tuple of size and byte string containing first n .. last n bytes.
+        Size is 0 if file cannot be read."""
+    try:
+        n = print_bytes
+        with open(path, "rb") as f:
+            size = os.fstat(f.fileno()).st_size
+            if size <= 2 * n:
+                short_contents = f.read(2 * n)
+            else:
+                short_contents = f.read(n)
+                f.seek(-n, 2)
+                short_contents += b"..." + f.read(n)
+        return size, short_contents
+    except OSError:
+        return 0, b""
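
A quick sketch of what a caller sees (the path is hypothetical):

    size, snippet = filesummary("/tmp/example/archive.tar.gz")  # hypothetical path
    # snippet holds the first and last 16 bytes joined by b"..." when the
    # file is larger than 32 bytes; size is 0 if the file is unreadable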

@@ -75,7 +75,7 @@ def __init__(self, ignore=None):
         # so that we have a fast lookup and can run mkdir in order.
         self.directories = OrderedDict()

-        # Files to link. Maps dst_rel to (src_rel, src_root)
+        # Files to link. Maps dst_rel to (src_root, src_rel)
         self.files = OrderedDict()

     def before_visit_dir(self, root, rel_path, depth):

@@ -4,7 +4,7 @@
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)

 #: PEP440 canonical <major>.<minor>.<micro>.<devN> string
-__version__ = "0.19.0.dev0"
+__version__ = "0.19.2"
 spack_version = __version__

@@ -288,7 +288,7 @@ def _check_build_test_callbacks(pkgs, error_cls):
     errors = []
     for pkg_name in pkgs:
         pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
-        test_callbacks = pkg_cls.build_time_test_callbacks
+        test_callbacks = getattr(pkg_cls, "build_time_test_callbacks", None)

         if test_callbacks and "test" in test_callbacks:
             msg = '{0} package contains "test" method in ' "build_time_test_callbacks"
@@ -36,6 +36,7 @@
 import spack.relocate as relocate
 import spack.repo
 import spack.store
+import spack.util.crypto
 import spack.util.file_cache as file_cache
 import spack.util.gpg
 import spack.util.spack_json as sjson

@@ -293,10 +294,12 @@ def update_spec(self, spec, found_list):
                 cur_entry["spec"] = new_entry["spec"]
                 break
         else:
-            current_list.append = {
-                "mirror_url": new_entry["mirror_url"],
-                "spec": new_entry["spec"],
-            }
+            current_list.append(
+                {
+                    "mirror_url": new_entry["mirror_url"],
+                    "spec": new_entry["spec"],
+                }
+            )

     def update(self, with_cooldown=False):
         """Make sure local cache of buildcache index files is up to date.

@@ -554,9 +557,9 @@ class NoOverwriteException(spack.error.SpackError):
     """

     def __init__(self, file_path):
-        err_msg = "\n%s\nexists\n" % file_path
-        err_msg += "Use -f option to overwrite."
-        super(NoOverwriteException, self).__init__(err_msg)
+        super(NoOverwriteException, self).__init__(
+            '"{}" exists in buildcache. Use --force flag to overwrite.'.format(file_path)
+        )


 class NoGpgException(spack.error.SpackError):

@@ -601,7 +604,12 @@ class NoChecksumException(spack.error.SpackError):
     Raised if file fails checksum verification.
     """

-    pass
+    def __init__(self, path, size, contents, algorithm, expected, computed):
+        super(NoChecksumException, self).__init__(
+            "{} checksum failed for {}".format(algorithm, path),
+            "Expected {} but got {}. "
+            "File size = {} bytes. Contents = {!r}".format(expected, computed, size, contents),
+        )


 class NewLayoutException(spack.error.SpackError):

@@ -1859,14 +1867,15 @@ def _extract_inner_tarball(spec, filename, extract_to, unsigned, remote_checksum
         raise UnsignedPackageException(
             "To install unsigned packages, use the --no-check-signature option."
         )
-    # get the sha256 checksum of the tarball
+
+    # compute the sha256 checksum of the tarball
     local_checksum = checksum_tarball(tarfile_path)
+    expected = remote_checksum["hash"]

     # if the checksums don't match don't install
-    if local_checksum != remote_checksum["hash"]:
-        raise NoChecksumException(
-            "Package tarball failed checksum verification.\n" "It cannot be installed."
-        )
+    if local_checksum != expected:
+        size, contents = fsys.filesummary(tarfile_path)
+        raise NoChecksumException(tarfile_path, size, contents, "sha256", expected, local_checksum)

     return tarfile_path

@@ -1926,12 +1935,14 @@ def extract_tarball(spec, download_result, allow_root=False, unsigned=False, for

     # compute the sha256 checksum of the tarball
     local_checksum = checksum_tarball(tarfile_path)
+    expected = bchecksum["hash"]

     # if the checksums don't match don't install
-    if local_checksum != bchecksum["hash"]:
+    if local_checksum != expected:
+        size, contents = fsys.filesummary(tarfile_path)
         _delete_staged_downloads(download_result)
         raise NoChecksumException(
-            "Package tarball failed checksum verification.\n" "It cannot be installed."
+            tarfile_path, size, contents, "sha256", expected, local_checksum
         )

     new_relative_prefix = str(os.path.relpath(spec.prefix, spack.store.layout.root))

@@ -2022,8 +2033,11 @@ def install_root_node(spec, allow_root, unsigned=False, force=False, sha256=None
         tarball_path = download_result["tarball_stage"].save_filename
         msg = msg.format(tarball_path, sha256)
         if not checker.check(tarball_path):
+            size, contents = fsys.filesummary(tarball_path)
             _delete_staged_downloads(download_result)
-            raise spack.binary_distribution.NoChecksumException(msg)
+            raise NoChecksumException(
+                tarball_path, size, contents, checker.hash_name, sha256, checker.sum
+            )
         tty.debug("Verified SHA256 checksum of the build cache")

     # don't print long padded paths while extracting/relocating binaries

@@ -978,22 +978,9 @@ def add_modifications_for_dep(dep):
         if set_package_py_globals:
             set_module_variables_for_package(dpkg)

-        # Allow dependencies to modify the module
-        # Get list of modules that may need updating
-        modules = []
-        for cls in inspect.getmro(type(spec.package)):
-            module = cls.module
-            if module == spack.package_base:
-                break
-            modules.append(module)
-
-        # Execute changes as if on a single module
-        # copy dict to ensure prior changes are available
-        changes = spack.util.pattern.Bunch()
-        dpkg.setup_dependent_package(changes, spec)
-
-        for module in modules:
-            module.__dict__.update(changes.__dict__)
+        current_module = ModuleChangePropagator(spec.package)
+        dpkg.setup_dependent_package(current_module, spec)
+        current_module.propagate_changes_to_mro()

         if context == "build":
             builder = spack.builder.create(dpkg)

@@ -1437,3 +1424,51 @@ def write_log_summary(out, log_type, log, last=None):
         # If no errors are found but warnings are, display warnings
         out.write("\n%s found in %s log:\n" % (plural(nwar, "warning"), log_type))
         out.write(make_log_context(warnings))
+
+
+class ModuleChangePropagator(object):
+    """Wrapper class to accept changes to a package.py Python module, and propagate them in the
+    MRO of the package.
+
+    It is mainly used as a substitute of the ``package.py`` module, when calling the
+    "setup_dependent_package" function during build environment setup.
+    """
+
+    _PROTECTED_NAMES = ("package", "current_module", "modules_in_mro", "_set_attributes")
+
+    def __init__(self, package):
+        self._set_self_attributes("package", package)
+        self._set_self_attributes("current_module", package.module)
+
+        #: Modules for the classes in the MRO up to PackageBase
+        modules_in_mro = []
+        for cls in inspect.getmro(type(package)):
+            module = cls.module
+
+            if module == self.current_module:
+                continue
+
+            if module == spack.package_base:
+                break
+
+            modules_in_mro.append(module)
+        self._set_self_attributes("modules_in_mro", modules_in_mro)
+        self._set_self_attributes("_set_attributes", {})
+
+    def _set_self_attributes(self, key, value):
+        super(ModuleChangePropagator, self).__setattr__(key, value)
+
+    def __getattr__(self, item):
+        return getattr(self.current_module, item)
+
+    def __setattr__(self, key, value):
+        if key in ModuleChangePropagator._PROTECTED_NAMES:
+            msg = 'Cannot set attribute "{}" in ModuleMonkeyPatcher'.format(key)
+            return AttributeError(msg)
+
+        setattr(self.current_module, key, value)
+        self._set_attributes[key] = value
+
+    def propagate_changes_to_mro(self):
+        for module_in_mro in self.modules_in_mro:
+            module_in_mro.__dict__.update(self._set_attributes)
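
A minimal sketch of the intended flow (``some_global`` is a hypothetical
attribute, not a real Spack name):

    propagator = ModuleChangePropagator(spec.package)
    propagator.some_global = 42            # recorded, and set on the package module
    propagator.propagate_changes_to_mro()  # replays the change on every module in the MRO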

@@ -7,7 +7,7 @@
 import os.path
 import stat
 import subprocess
-from typing import List  # novm
+from typing import List  # novm # noqa: F401

 import llnl.util.filesystem as fs
 import llnl.util.tty as tty

@@ -427,15 +427,15 @@ def _do_patch_libtool(self):
             x.filter(regex="-nostdlib", repl="", string=True)
             rehead = r"/\S*/"
             for o in [
-                "fjhpctag.o",
-                "fjcrt0.o",
-                "fjlang08.o",
-                "fjomp.o",
-                "crti.o",
-                "crtbeginS.o",
-                "crtendS.o",
+                r"fjhpctag\.o",
+                r"fjcrt0\.o",
+                r"fjlang08\.o",
+                r"fjomp\.o",
+                r"crti\.o",
+                r"crtbeginS\.o",
+                r"crtendS\.o",
             ]:
-                x.filter(regex=(rehead + o), repl="", string=True)
+                x.filter(regex=(rehead + o), repl="")
         elif self.pkg.compiler.name == "dpcpp":
             # Hack to filter out spurious predep_objects when building with Intel dpcpp
             # (see https://github.com/spack/spack/issues/32863):

@@ -205,13 +205,7 @@ def initconfig_hardware_entries(self):
             entries.append(cmake_cache_path("CUDA_TOOLKIT_ROOT_DIR", cudatoolkitdir))
             cudacompiler = "${CUDA_TOOLKIT_ROOT_DIR}/bin/nvcc"
             entries.append(cmake_cache_path("CMAKE_CUDA_COMPILER", cudacompiler))
-
-            if spec.satisfies("^mpi"):
-                entries.append(cmake_cache_path("CMAKE_CUDA_HOST_COMPILER", "${MPI_CXX_COMPILER}"))
-            else:
-                entries.append(
-                    cmake_cache_path("CMAKE_CUDA_HOST_COMPILER", "${CMAKE_CXX_COMPILER}")
-                )
+            entries.append(cmake_cache_path("CMAKE_CUDA_HOST_COMPILER", "${CMAKE_CXX_COMPILER}"))

         return entries

@@ -6,7 +6,7 @@
 import os
 import re
 import shutil
-from typing import Optional
+from typing import Optional  # noqa: F401

 import llnl.util.filesystem as fs
 import llnl.util.lang as lang

@@ -15,6 +15,7 @@
 import spack.builder
 import spack.multimethod
 import spack.package_base
+import spack.spec
 from spack.directives import build_system, depends_on, extends
 from spack.error import NoHeadersError, NoLibrariesError, SpecError
 from spack.version import Version

@@ -107,6 +108,9 @@ def view_file_conflicts(self, view, merge_map):
         return conflicts

     def add_files_to_view(self, view, merge_map, skip_if_exists=True):
+        if not self.extendee_spec:
+            return super(PythonExtension, self).add_files_to_view(view, merge_map, skip_if_exists)
+
         bin_dir = self.spec.prefix.bin
         python_prefix = self.extendee_spec.prefix
         python_is_external = self.extendee_spec.external

@@ -218,6 +222,27 @@ def list_url(cls):
         name = cls.pypi.split("/")[0]
         return "https://pypi.org/simple/" + name + "/"

+    def update_external_dependencies(self):
+        """
+        Ensure all external python packages have a python dependency
+
+        If another package in the DAG depends on python, we use that
+        python for the dependency of the external. If not, we assume
+        that the external PythonPackage is installed into the same
+        directory as the python it depends on.
+        """
+        # TODO: Include this in the solve, rather than instantiating post-concretization
+        if "python" not in self.spec:
+            if "python" in self.spec.root:
+                python = self.spec.root["python"]
+            else:
+                python = spack.spec.Spec("python")
+                repo = spack.repo.path.repo_for_pkg(python)
+                python.namespace = repo.namespace
+                python._mark_concrete()
+                python.external_path = self.prefix
+            self.spec.add_dependency_edge(python, ("build", "link", "run"))
+
     @property
     def headers(self):
         """Discover header files in platlib."""

@@ -46,10 +46,10 @@ class SConsBuilder(BaseBuilder):
     phases = ("build", "install")

     #: Names associated with package methods in the old build-system format
-    legacy_methods = ("install_args", "build_test")
+    legacy_methods = ("build_test",)

     #: Same as legacy_methods, but the signature is different
-    legacy_long_methods = ("build_args",)
+    legacy_long_methods = ("build_args", "install_args")

     #: Names associated with package attributes in the old build-system format
     legacy_attributes = ("build_time_test_callbacks",)

@@ -66,13 +66,13 @@ def build(self, pkg, spec, prefix):
         args = self.build_args(spec, prefix)
         inspect.getmodule(self.pkg).scons(*args)

-    def install_args(self):
+    def install_args(self, spec, prefix):
         """Arguments to pass to install."""
         return []

     def install(self, pkg, spec, prefix):
         """Install the package."""
-        args = self.install_args()
+        args = self.install_args(spec, prefix)

         inspect.getmodule(self.pkg).scons("install", *args)
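
With the new signature, a package needing custom install arguments would look
roughly like this (the package name and argument are hypothetical):

    class Example(SConsPackage):
        def install_args(self, spec, prefix):
            # spec and prefix are now passed explicitly
            return ["PREFIX={0}".format(prefix)]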


@@ -6,7 +6,7 @@
 import copy
 import functools
 import inspect
-from typing import List, Optional, Tuple
+from typing import List, Optional, Tuple  # noqa: F401

 import six

@@ -127,7 +132,12 @@ def __init__(self, wrapped_pkg_object, root_builder):
         wrapper_cls = type(self)
         bases = (package_cls, wrapper_cls)
         new_cls_name = package_cls.__name__ + "Wrapper"
-        new_cls = type(new_cls_name, bases, {})
+        # Forward attributes that might be monkey patched later
+        new_cls = type(
+            new_cls_name,
+            bases,
+            {"run_tests": property(lambda x: x.wrapped_package_object.run_tests)},
+        )
         new_cls.__module__ = package_cls.__module__
         self.__class__ = new_cls
         self.__dict__.update(wrapped_pkg_object.__dict__)

@@ -1769,9 +1769,9 @@ def reproduce_ci_job(url, work_dir):
     download_and_extract_artifacts(url, work_dir)

     lock_file = fs.find(work_dir, "spack.lock")[0]
-    concrete_env_dir = os.path.dirname(lock_file)
+    repro_lock_dir = os.path.dirname(lock_file)

-    tty.debug("Concrete environment directory: {0}".format(concrete_env_dir))
+    tty.debug("Found lock file in: {0}".format(repro_lock_dir))

     yaml_files = fs.find(work_dir, ["*.yaml", "*.yml"])

@@ -1794,6 +1794,21 @@ def reproduce_ci_job(url, work_dir):
     if pipeline_yaml:
         tty.debug("\n{0} is likely your pipeline file".format(yf))

+    relative_concrete_env_dir = pipeline_yaml["variables"]["SPACK_CONCRETE_ENV_DIR"]
+    tty.debug("Relative environment path used by cloud job: {0}".format(relative_concrete_env_dir))
+
+    # Using the relative concrete environment path found in the generated
+    # pipeline variable above, copy the spack environment files so they'll
+    # be found in the same location as when the job ran in the cloud.
+    concrete_env_dir = os.path.join(work_dir, relative_concrete_env_dir)
+    if not os.path.isdir(concrete_env_dir):
+        fs.mkdirp(concrete_env_dir)
+    copy_lock_path = os.path.join(concrete_env_dir, "spack.lock")
+    orig_yaml_path = os.path.join(repro_lock_dir, "spack.yaml")
+    copy_yaml_path = os.path.join(concrete_env_dir, "spack.yaml")
+    shutil.copyfile(lock_file, copy_lock_path)
+    shutil.copyfile(orig_yaml_path, copy_yaml_path)
+
     # Find the install script in the unzipped artifacts and make it executable
     install_script = fs.find(work_dir, "install.sh")[0]
     st = os.stat(install_script)

@@ -1849,6 +1864,7 @@ def reproduce_ci_job(url, work_dir):
     if repro_details:
         mount_as_dir = repro_details["ci_project_dir"]
         mounted_repro_dir = os.path.join(mount_as_dir, rel_repro_dir)
+        mounted_env_dir = os.path.join(mount_as_dir, relative_concrete_env_dir)

     # We will also try to clone spack from your local checkout and
     # reproduce the state present during the CI build, and put that into

@@ -1932,7 +1948,7 @@ def reproduce_ci_job(url, work_dir):
     inst_list.append("    $ source {0}/share/spack/setup-env.sh\n".format(spack_root))
     inst_list.append(
         "    $ spack env activate --without-view {0}\n\n".format(
-            mounted_repro_dir if job_image else repro_dir
+            mounted_env_dir if job_image else repro_dir
         )
     )
     inst_list.append("    - Run the install script\n\n")

@@ -30,6 +30,7 @@
 import spack.paths
 import spack.spec
 import spack.store
+import spack.traverse as traverse
 import spack.user_environment as uenv
 import spack.util.spack_json as sjson
 import spack.util.string

@@ -464,11 +465,12 @@ def format_list(specs):
     # create the final, formatted versions of all specs
     formatted = []
     for spec in specs:
-        formatted.append((fmt(spec), spec))
         if deps:
-            for depth, dep in spec.traverse(root=False, depth=True):
-                formatted.append((fmt(dep, depth), dep))
+            for depth, dep in traverse.traverse_tree([spec], depth_first=False):
+                formatted.append((fmt(dep.spec, depth), dep.spec))
             formatted.append(("", None))  # mark newlines
+        else:
+            formatted.append((fmt(spec), spec))

     # unless any of these are set, we can just colify and be done.
     if not any((deps, paths)):

@@ -1,53 +0,0 @@
-# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
-# Spack Project Developers. See the top-level COPYRIGHT file for details.
-#
-# SPDX-License-Identifier: (Apache-2.0 OR MIT)
-
-import llnl.util.tty as tty
-
-import spack.cmd
-import spack.cmd.common.arguments as arguments
-import spack.environment as ev
-from spack.filesystem_view import YamlFilesystemView
-
-description = "activate a package extension"
-section = "extensions"
-level = "long"
-
-
-def setup_parser(subparser):
-    subparser.add_argument(
-        "-f", "--force", action="store_true", help="activate without first activating dependencies"
-    )
-    subparser.add_argument("-v", "--view", metavar="VIEW", type=str, help="the view to operate on")
-    arguments.add_common_arguments(subparser, ["installed_spec"])
-
-
-def activate(parser, args):
-
-    tty.warn(
-        "spack activate is deprecated in favor of " "environments and will be removed in v0.19.0"
-    )
-
-    specs = spack.cmd.parse_specs(args.spec)
-    if len(specs) != 1:
-        tty.die("activate requires one spec. %d given." % len(specs))
-
-    spec = spack.cmd.disambiguate_spec(specs[0], ev.active_environment())
-    if not spec.package.is_extension:
-        tty.die("%s is not an extension." % spec.name)
-
-    if args.view:
-        target = args.view
-    else:
-        target = spec.package.extendee_spec.prefix
-
-    view = YamlFilesystemView(target, spack.store.layout)
-
-    if spec.package.is_activated(view):
-        tty.msg("Package %s is already activated." % specs[0].short_spec)
-        return
-
-    # TODO: refactor FilesystemView.add_extension and use that here (so there
-    # aren't two ways of activating extensions)
-    spec.package.do_activate(view, with_dependencies=not args.force)
@@ -52,6 +52,7 @@

 CLINGO_JSON = "$spack/share/spack/bootstrap/github-actions-v0.4/clingo.json"
 GNUPG_JSON = "$spack/share/spack/bootstrap/github-actions-v0.4/gnupg.json"
+PATCHELF_JSON = "$spack/share/spack/bootstrap/github-actions-v0.4/patchelf.json"

 # Metadata for a generated source mirror
 SOURCE_METADATA = {

@@ -443,6 +444,7 @@ def write_metadata(subdir, metadata):
     abs_directory, rel_directory = write_metadata(subdir="binaries", metadata=BINARY_METADATA)
     shutil.copy(spack.util.path.canonicalize_path(CLINGO_JSON), abs_directory)
     shutil.copy(spack.util.path.canonicalize_path(GNUPG_JSON), abs_directory)
+    shutil.copy(spack.util.path.canonicalize_path(PATCHELF_JSON), abs_directory)
     instructions += cmd.format("local-binaries", rel_directory)
     print(instructions)

@@ -244,30 +244,35 @@ def config_remove(args):
     spack.config.set(path, existing, scope)


-def _can_update_config_file(scope_dir, cfg_file):
-    dir_ok = fs.can_write_to_dir(scope_dir)
-    cfg_ok = fs.can_access(cfg_file)
-    return dir_ok and cfg_ok
+def _can_update_config_file(scope, cfg_file):
+    if isinstance(scope, spack.config.SingleFileScope):
+        return fs.can_access(cfg_file)
+    return fs.can_write_to_dir(scope.path) and fs.can_access(cfg_file)


 def config_update(args):
     # Read the configuration files
     spack.config.config.get_config(args.section, scope=args.scope)
-    updates = spack.config.config.format_updates[args.section]
+    updates = list(
+        filter(
+            lambda s: not isinstance(
+                s, (spack.config.InternalConfigScope, spack.config.ImmutableConfigScope)
+            ),
+            spack.config.config.format_updates[args.section],
+        )
+    )

     cannot_overwrite, skip_system_scope = [], False
     for scope in updates:
         cfg_file = spack.config.config.get_config_filename(scope.name, args.section)
-        scope_dir = scope.path
-        can_be_updated = _can_update_config_file(scope_dir, cfg_file)
+        can_be_updated = _can_update_config_file(scope, cfg_file)
         if not can_be_updated:
             if scope.name == "system":
                 skip_system_scope = True
-                msg = (
+                tty.warn(
                     'Not enough permissions to write to "system" scope. '
-                    "Skipping update at that location [cfg={0}]"
+                    "Skipping update at that location [cfg={0}]".format(cfg_file)
                 )
-                tty.warn(msg.format(cfg_file))
                 continue
             cannot_overwrite.append((scope, cfg_file))

@@ -315,18 +320,14 @@ def config_update(args):
     # Get a function to update the format
     update_fn = spack.config.ensure_latest_format_fn(args.section)
     for scope in updates:
         cfg_file = spack.config.config.get_config_filename(scope.name, args.section)
-        with open(cfg_file) as f:
-            data = syaml.load_config(f) or {}
-            data = data.pop(args.section, {})
+        data = scope.get_section(args.section).pop(args.section)
         update_fn(data)

         # Make a backup copy and rewrite the file
         bkp_file = cfg_file + ".bkp"
         shutil.copy(cfg_file, bkp_file)
         spack.config.config.update_config(args.section, data, scope=scope.name, force=True)
-        msg = 'File "{0}" updated [backup={1}]'
-        tty.msg(msg.format(cfg_file, bkp_file))
+        tty.msg('File "{}" updated [backup={}]'.format(cfg_file, bkp_file))


 def _can_revert_update(scope_dir, cfg_file, bkp_file):

@@ -1,96 +0,0 @@
-# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
-# Spack Project Developers. See the top-level COPYRIGHT file for details.
-#
-# SPDX-License-Identifier: (Apache-2.0 OR MIT)
-
-import llnl.util.tty as tty
-
-import spack.cmd
-import spack.cmd.common.arguments as arguments
-import spack.environment as ev
-import spack.graph
-import spack.store
-from spack.filesystem_view import YamlFilesystemView
-
-description = "deactivate a package extension"
-section = "extensions"
-level = "long"
-
-
-def setup_parser(subparser):
-    subparser.add_argument(
-        "-f",
-        "--force",
-        action="store_true",
-        help="run deactivation even if spec is NOT currently activated",
-    )
-    subparser.add_argument("-v", "--view", metavar="VIEW", type=str, help="the view to operate on")
-    subparser.add_argument(
-        "-a",
-        "--all",
-        action="store_true",
-        help="deactivate all extensions of an extendable package, or "
-        "deactivate an extension AND its dependencies",
-    )
-    arguments.add_common_arguments(subparser, ["installed_spec"])
-
-
-def deactivate(parser, args):
-
-    tty.warn(
-        "spack deactivate is deprecated in favor of " "environments and will be removed in v0.19.0"
-    )
-
-    specs = spack.cmd.parse_specs(args.spec)
-    if len(specs) != 1:
-        tty.die("deactivate requires one spec. %d given." % len(specs))
-
-    env = ev.active_environment()
-    spec = spack.cmd.disambiguate_spec(specs[0], env)
-    pkg = spec.package
-
-    if args.view:
-        target = args.view
-    elif pkg.is_extension:
-        target = pkg.extendee_spec.prefix
-    elif pkg.extendable:
-        target = spec.prefix
-
-    view = YamlFilesystemView(target, spack.store.layout)
-
-    if args.all:
-        if pkg.extendable:
-            tty.msg("Deactivating all extensions of %s" % pkg.spec.short_spec)
-            ext_pkgs = spack.store.db.activated_extensions_for(spec, view.extensions_layout)
-
-            for ext_pkg in ext_pkgs:
-                ext_pkg.spec.normalize()
-                if ext_pkg.is_activated(view):
-                    ext_pkg.do_deactivate(view, force=True)
-
-        elif pkg.is_extension:
-            if not args.force and not spec.package.is_activated(view):
-                tty.die("%s is not activated." % pkg.spec.short_spec)
-
-            tty.msg("Deactivating %s and all dependencies." % pkg.spec.short_spec)
-
-            nodes_in_topological_order = spack.graph.topological_sort(spec)
-            for espec in reversed(nodes_in_topological_order):
-                epkg = espec.package
-                if epkg.extends(pkg.extendee_spec):
-                    if epkg.is_activated(view) or args.force:
-                        epkg.do_deactivate(view, force=args.force)
-
-        else:
-            tty.die("spack deactivate --all requires an extendable package " "or an extension.")
-
-    else:
-        if not pkg.is_extension:
-            tty.die(
-                "spack deactivate requires an extension.", "Did you mean 'spack deactivate --all'?"
-            )
-
-        if not args.force and not spec.package.is_activated(view):
-            tty.die("Package %s is not activated." % spec.short_spec)
-
-        spec.package.do_deactivate(view, force=args.force)
@@ -14,7 +14,6 @@
 import spack.environment as ev
 import spack.repo
 import spack.store
-from spack.filesystem_view import YamlFilesystemView

 description = "list extensions for package"
 section = "extensions"

@@ -38,10 +37,9 @@ def setup_parser(subparser):
         "--show",
         action="store",
         default="all",
-        choices=("packages", "installed", "activated", "all"),
+        choices=("packages", "installed", "all"),
         help="show only part of output",
     )
-    subparser.add_argument("-v", "--view", metavar="VIEW", type=str, help="the view to operate on")

     subparser.add_argument(
         "spec",

@@ -91,13 +89,6 @@ def extensions(parser, args):
         tty.msg("%d extensions:" % len(extensions))
         colify(ext.name for ext in extensions)

-    if args.view:
-        target = args.view
-    else:
-        target = spec.prefix
-
-    view = YamlFilesystemView(target, spack.store.layout)
-
     if args.show in ("installed", "all"):
         # List specs of installed extensions.
         installed = [s.spec for s in spack.store.db.installed_extensions_for(spec)]

@@ -109,14 +100,3 @@ def extensions(parser, args):
         else:
             tty.msg("%d installed:" % len(installed))
             cmd.display_specs(installed, args)
-
-    if args.show in ("activated", "all"):
-        # List specs of activated extensions.
-        activated = view.extensions_layout.extension_map(spec)
-        if args.show == "all":
-            print
-        if not activated:
-            tty.msg("None activated.")
-        else:
-            tty.msg("%d activated:" % len(activated))
-            cmd.display_specs(activated.values(), args)

@@ -242,8 +242,8 @@ def print_tests(pkg):
     # So the presence of a callback in Spack does not necessarily correspond
     # to the actual presence of build-time tests for a package.
     for callbacks, phase in [
-        (pkg.build_time_test_callbacks, "Build"),
-        (pkg.install_time_test_callbacks, "Install"),
+        (getattr(pkg, "build_time_test_callbacks", None), "Build"),
+        (getattr(pkg, "install_time_test_callbacks", None), "Install"),
     ]:
         color.cprint("")
         color.cprint(section_title("Available {0} Phase Test Methods:".format(phase)))

@@ -9,6 +9,7 @@

 import llnl.util.tty as tty

+import spack.builder
 import spack.cmd
 import spack.cmd.common.arguments as arguments
 import spack.environment as ev

@@ -134,6 +135,7 @@ def location(parser, args):
     # Either concretize or filter from already concretized environment
     spec = spack.cmd.matching_spec_from_env(spec)
     pkg = spec.package
+    builder = spack.builder.create(pkg)

     if args.stage_dir:
         print(pkg.stage.path)

@@ -141,10 +143,10 @@ def location(parser, args):

     if args.build_dir:
         # Out of source builds have build_directory defined
-        if hasattr(pkg, "build_directory"):
+        if hasattr(builder, "build_directory"):
             # build_directory can be either absolute or relative to the stage path
             # in either case os.path.join makes it absolute
-            print(os.path.normpath(os.path.join(pkg.stage.path, pkg.build_directory)))
+            print(os.path.normpath(os.path.join(pkg.stage.path, builder.build_directory)))
             return

     # Otherwise assume in-source builds

@@ -9,6 +9,7 @@
 import llnl.util.tty as tty
 import llnl.util.tty.colify as colify

+import spack.caches
 import spack.cmd
 import spack.cmd.common.arguments as arguments
 import spack.concretize

@@ -356,12 +357,9 @@ def versions_per_spec(args):
     return num_versions


-def create_mirror_for_individual_specs(mirror_specs, directory_hint, skip_unstable_versions):
-    local_push_url = local_mirror_url_from_user(directory_hint)
-    present, mirrored, error = spack.mirror.create(
-        local_push_url, mirror_specs, skip_unstable_versions
-    )
-    tty.msg("Summary for mirror in {}".format(local_push_url))
+def create_mirror_for_individual_specs(mirror_specs, path, skip_unstable_versions):
+    present, mirrored, error = spack.mirror.create(path, mirror_specs, skip_unstable_versions)
+    tty.msg("Summary for mirror in {}".format(path))
     process_mirror_stats(present, mirrored, error)


@@ -379,21 +377,6 @@ def process_mirror_stats(present, mirrored, error):
         sys.exit(1)


-def local_mirror_url_from_user(directory_hint):
-    """Return a file:// url pointing to the local mirror to be used.
-
-    Args:
-        directory_hint (str or None): directory where to create the mirror. If None,
-            defaults to "config:source_cache".
-    """
-    mirror_directory = spack.util.path.canonicalize_path(
-        directory_hint or spack.config.get("config:source_cache")
-    )
-    tmp_mirror = spack.mirror.Mirror(mirror_directory)
-    local_url = url_util.format(tmp_mirror.push_url)
-    return local_url
-
-
 def mirror_create(args):
     """Create a directory to be used as a spack mirror, and fill it with
     package archives.

@@ -424,9 +407,12 @@ def mirror_create(args):
             "The option '--all' already implies mirroring all versions for each package.",
         )

+    # When no directory is provided, the source dir is used
+    path = args.directory or spack.caches.fetch_cache_location()
+
     if args.all and not ev.active_environment():
         create_mirror_for_all_specs(
-            directory_hint=args.directory,
+            path=path,
             skip_unstable_versions=args.skip_unstable_versions,
             selection_fn=not_excluded_fn(args),
         )

@@ -434,7 +420,7 @@ def mirror_create(args):

     if args.all and ev.active_environment():
         create_mirror_for_all_specs_inside_environment(
-            directory_hint=args.directory,
+            path=path,
             skip_unstable_versions=args.skip_unstable_versions,
             selection_fn=not_excluded_fn(args),
         )

@@ -443,16 +429,15 @@ def mirror_create(args):
     mirror_specs = concrete_specs_from_user(args)
     create_mirror_for_individual_specs(
         mirror_specs,
-        directory_hint=args.directory,
+        path=path,
         skip_unstable_versions=args.skip_unstable_versions,
     )


-def create_mirror_for_all_specs(directory_hint, skip_unstable_versions, selection_fn):
+def create_mirror_for_all_specs(path, skip_unstable_versions, selection_fn):
     mirror_specs = all_specs_with_all_versions(selection_fn=selection_fn)
-    local_push_url = local_mirror_url_from_user(directory_hint=directory_hint)
     mirror_cache, mirror_stats = spack.mirror.mirror_cache_and_stats(
-        local_push_url, skip_unstable_versions=skip_unstable_versions
+        path, skip_unstable_versions=skip_unstable_versions
     )
     for candidate in mirror_specs:
         pkg_cls = spack.repo.path.get_pkg_class(candidate.name)

@@ -462,13 +447,11 @@ def create_mirror_for_all_specs(directory_hint, skip_unstable_versions, selectio
     process_mirror_stats(*mirror_stats.stats())


-def create_mirror_for_all_specs_inside_environment(
-    directory_hint, skip_unstable_versions, selection_fn
-):
+def create_mirror_for_all_specs_inside_environment(path, skip_unstable_versions, selection_fn):
     mirror_specs = concrete_specs_from_environment(selection_fn=selection_fn)
     create_mirror_for_individual_specs(
         mirror_specs,
-        directory_hint=directory_hint,
+        path=path,
         skip_unstable_versions=skip_unstable_versions,
     )

@@ -127,8 +127,10 @@ def python_interpreter(args):
             console.runsource(startup.read(), startup_file, "exec")

     if args.python_command:
+        propagate_exceptions_from(console)
        console.runsource(args.python_command)
     elif args.python_args:
+        propagate_exceptions_from(console)
         sys.argv = args.python_args
         with open(args.python_args[0]) as file:
             console.runsource(file.read(), args.python_args[0], "exec")

@@ -149,3 +151,18 @@ def python_interpreter(args):
                 platform.machine(),
             )
         )
+
+
+def propagate_exceptions_from(console):
+    """Set sys.excepthook to let uncaught exceptions return 1 to the shell.
+
+    Args:
+        console (code.InteractiveConsole): the console that needs a change in sys.excepthook
+    """
+    console.push("import sys")
+    console.push("_wrapped_hook = sys.excepthook")
+    console.push("def _hook(exc_type, exc_value, exc_tb):")
+    console.push("    _wrapped_hook(exc_type, exc_value, exc_tb)")
+    console.push("    sys.exit(1)")
+    console.push("")
+    console.push("sys.excepthook = _hook")

@@ -11,6 +11,7 @@
 import llnl.util.tty as tty
 from llnl.util.filesystem import working_dir

+import spack
 import spack.cmd.common.arguments as arguments
 import spack.config
 import spack.paths

@@ -24,7 +25,7 @@


 # tutorial configuration parameters
-tutorial_branch = "releases/v0.18"
+tutorial_branch = "releases/v%s" % ".".join(str(v) for v in spack.spack_version_info[:2])
 tutorial_mirror = "file:///mirror"
 tutorial_key = os.path.join(spack.paths.share_path, "keys", "tutorial.pub")

@@ -17,6 +17,7 @@
 import spack.package_base
 import spack.repo
 import spack.store
+import spack.traverse as traverse
 from spack.database import InstallStatuses

 description = "remove installed packages"

@@ -144,11 +145,7 @@ def installed_dependents(specs, env):
     active environment, and one from specs to dependent installs outside of
     the active environment.

-    Any of the input specs may appear in both mappings (if there are
-    dependents both inside and outside the current environment).
-
-    If a dependent spec is used both by the active environment and by
-    an inactive environment, it will only appear in the first mapping.
+    Every installed dependent spec is listed once.

     If there is not current active environment, the first mapping will be
     empty.

@@ -158,19 +155,27 @@ def installed_dependents(specs, env):

     env_hashes = set(env.all_hashes()) if env else set()

-    all_specs_in_db = spack.store.db.query()
+    # Ensure we stop traversal at input specs.
+    visited = set(s.dag_hash() for s in specs)

     for spec in specs:
-        installed = [x for x in all_specs_in_db if spec in x]
-
-        # separate installed dependents into dpts in this environment and
-        # dpts that are outside this environment
-        for dpt in installed:
-            if dpt not in specs:
-                if dpt.dag_hash() in env_hashes:
-                    active_dpts.setdefault(spec, set()).add(dpt)
-                else:
-                    outside_dpts.setdefault(spec, set()).add(dpt)
+        for dpt in traverse.traverse_nodes(
+            spec.dependents(deptype="all"),
+            direction="parents",
+            visited=visited,
+            deptype="all",
+            root=True,
+            key=lambda s: s.dag_hash(),
+        ):
+            hash = dpt.dag_hash()
+            # Ensure that all the specs we get are installed
+            record = spack.store.db.query_local_by_spec_hash(hash)
+            if record is None or not record.installed:
+                continue
+            if hash in env_hashes:
+                active_dpts.setdefault(spec, set()).add(dpt)
+            else:
+                outside_dpts.setdefault(spec, set()).add(dpt)

     return active_dpts, outside_dpts

@@ -250,7 +255,7 @@ def is_ready(dag_hash):
     if force:
         return True

-    _, record = spack.store.db.query_by_spec_hash(dag_hash)
+    record = spack.store.db.query_local_by_spec_hash(dag_hash)
     if not record.ref_count:
         return True

@@ -36,36 +36,89 @@ def extract_version_from_output(cls, output):
             ver = match.group(match.lastindex)
         return ver

+    # C++ flags based on CMake Modules/Compiler/AppleClang-CXX.cmake
+
     @property
     def cxx11_flag(self):
-        # Adapted from CMake's AppleClang-CXX rules
-        # Spack's AppleClang detection only valid from Xcode >= 4.6
-        if self.real_version < spack.version.ver("4.0.0"):
+        if self.real_version < spack.version.ver("4.0"):
             raise spack.compiler.UnsupportedCompilerFlag(
-                self, "the C++11 standard", "cxx11_flag", "Xcode < 4.0.0"
+                self, "the C++11 standard", "cxx11_flag", "Xcode < 4.0"
             )
         return "-std=c++11"

     @property
     def cxx14_flag(self):
-        # Adapted from CMake's rules for AppleClang
-        if self.real_version < spack.version.ver("5.1.0"):
+        if self.real_version < spack.version.ver("5.1"):
             raise spack.compiler.UnsupportedCompilerFlag(
-                self, "the C++14 standard", "cxx14_flag", "Xcode < 5.1.0"
+                self, "the C++14 standard", "cxx14_flag", "Xcode < 5.1"
             )
-        elif self.real_version < spack.version.ver("6.1.0"):
+        elif self.real_version < spack.version.ver("6.1"):
            return "-std=c++1y"

        return "-std=c++14"

     @property
     def cxx17_flag(self):
-        # Adapted from CMake's rules for AppleClang
-        if self.real_version < spack.version.ver("6.1.0"):
+        if self.real_version < spack.version.ver("6.1"):
             raise spack.compiler.UnsupportedCompilerFlag(
-                self, "the C++17 standard", "cxx17_flag", "Xcode < 6.1.0"
+                self, "the C++17 standard", "cxx17_flag", "Xcode < 6.1"
             )
-        return "-std=c++1z"
+        elif self.real_version < spack.version.ver("10.0"):
+            return "-std=c++1z"
+        return "-std=c++17"
+
+    @property
+    def cxx20_flag(self):
+        if self.real_version < spack.version.ver("10.0"):
+            raise spack.compiler.UnsupportedCompilerFlag(
+                self, "the C++20 standard", "cxx20_flag", "Xcode < 10.0"
+            )
+        elif self.real_version < spack.version.ver("13.0"):
+            return "-std=c++2a"
+        return "-std=c++20"
+
+    @property
+    def cxx23_flag(self):
+        if self.real_version < spack.version.ver("13.0"):
+            raise spack.compiler.UnsupportedCompilerFlag(
+                self, "the C++23 standard", "cxx23_flag", "Xcode < 13.0"
+            )
+        return "-std=c++2b"
+
+    # C flags based on CMake Modules/Compiler/AppleClang-C.cmake
+
+    @property
+    def c99_flag(self):
+        if self.real_version < spack.version.ver("4.0"):
+            raise spack.compiler.UnsupportedCompilerFlag(
+                self, "the C99 standard", "c99_flag", "< 4.0"
+            )
+        return "-std=c99"
+
+    @property
+    def c11_flag(self):
+        if self.real_version < spack.version.ver("4.0"):
+            raise spack.compiler.UnsupportedCompilerFlag(
+                self, "the C11 standard", "c11_flag", "< 4.0"
+            )
+        return "-std=c11"
+
+    @property
+    def c17_flag(self):
+        if self.real_version < spack.version.ver("11.0"):
+            raise spack.compiler.UnsupportedCompilerFlag(
+                self, "the C17 standard", "c17_flag", "< 11.0"
+            )
+        return "-std=c17"
+
+    @property
+    def c23_flag(self):
+        if self.real_version < spack.version.ver("11.0.3"):
+            raise spack.compiler.UnsupportedCompilerFlag(
+                self, "the C23 standard", "c23_flag", "< 11.0.3"
+            )
+        return "-std=c2x"

     def setup_custom_environment(self, pkg, env):
         """Set the DEVELOPER_DIR environment for the Xcode toolchain.

@@ -61,7 +61,7 @@ def is_clang_based(self):
         return version >= ver("9.0") and "classic" not in str(version)

     version_argument = "--version"
-    version_regex = r"[Vv]ersion.*?(\d+(\.\d+)+)"
+    version_regex = r"[Cc]ray (?:clang|C :|C\+\+ :|Fortran :) [Vv]ersion.*?(\d+(\.\d+)+)"
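
A sketch of what the tightened pattern matches (the tool output below is
hypothetical):

    import re

    regex = r"[Cc]ray (?:clang|C :|C\+\+ :|Fortran :) [Vv]ersion.*?(\d+(\.\d+)+)"
    m = re.search(regex, "Cray clang version 14.0.1")  # hypothetical output
    print(m.group(1))  # -> 14.0.1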

     @property
     def verbose_flag(self):

@@ -128,10 +128,23 @@ def c99_flag(self):

     @property
     def c11_flag(self):
-        if self.real_version < ver("6.1.0"):
-            raise UnsupportedCompilerFlag(self, "the C11 standard", "c11_flag", "< 6.1.0")
-        else:
-            return "-std=c11"
+        if self.real_version < ver("3.0"):
+            raise UnsupportedCompilerFlag(self, "the C11 standard", "c11_flag", "< 3.0")
+        if self.real_version < ver("3.1"):
+            return "-std=c1x"
+        return "-std=c11"
+
+    @property
+    def c17_flag(self):
+        if self.real_version < ver("6.0"):
+            raise UnsupportedCompilerFlag(self, "the C17 standard", "c17_flag", "< 6.0")
+        return "-std=c17"
+
+    @property
+    def c23_flag(self):
+        if self.real_version < ver("9.0"):
+            raise UnsupportedCompilerFlag(self, "the C23 standard", "c23_flag", "< 9.0")
+        return "-std=c2x"

     @property
     def cc_pic_flag(self):

@@ -743,9 +743,7 @@ def _concretize_specs_together_new(*abstract_specs, **kwargs):
     import spack.solver.asp

     solver = spack.solver.asp.Solver()
-    solver.tests = kwargs.get("tests", False)
-
-    result = solver.solve(abstract_specs)
+    result = solver.solve(abstract_specs, tests=kwargs.get("tests", False))
     result.raise_if_unsat()
     return [s.copy() for s in result.specs]

@@ -36,16 +36,17 @@
 import re
 import sys
 from contextlib import contextmanager
-from typing import List  # novm
+from typing import List  # novm # noqa: F401

 import ruamel.yaml as yaml
 import six
+from ruamel.yaml.comments import Comment
 from ruamel.yaml.error import MarkedYAMLError
 from six import iteritems

 import llnl.util.lang
 import llnl.util.tty as tty
-from llnl.util.filesystem import mkdirp, write_tmp_and_move
+from llnl.util.filesystem import mkdirp, rename

 import spack.compilers
 import spack.paths

@@ -287,8 +288,10 @@ def _write_section(self, section):
             parent = os.path.dirname(self.path)
             mkdirp(parent)

-            with write_tmp_and_move(self.path) as f:
+            tmp = os.path.join(parent, ".%s.tmp" % os.path.basename(self.path))
+            with open(tmp, "w") as f:
                 syaml.dump_config(data_to_write, stream=f, default_flow_style=False)
+            rename(tmp, self.path)

         except (yaml.YAMLError, IOError) as e:
             raise ConfigFileError("Error writing to config file: '%s'" % str(e))

@@ -530,16 +533,14 @@ def update_config(self, section, update_data, scope=None, force=False):
         scope = self._validate_scope(scope)  # get ConfigScope object

         # manually preserve comments
-        need_comment_copy = section in scope.sections and scope.sections[section] is not None
+        need_comment_copy = section in scope.sections and scope.sections[section]
         if need_comment_copy:
-            comments = getattr(
-                scope.sections[section][section], yaml.comments.Comment.attrib, None
-            )
+            comments = getattr(scope.sections[section][section], Comment.attrib, None)

         # read only the requested section's data.
         scope.sections[section] = syaml.syaml_dict({section: update_data})
         if need_comment_copy and comments:
-            setattr(scope.sections[section][section], yaml.comments.Comment.attrib, comments)
+            setattr(scope.sections[section][section], Comment.attrib, comments)

         scope._write_section(section)

@@ -26,7 +26,7 @@
 import socket
 import sys
 import time
-from typing import Dict  # novm
+from typing import Dict  # novm # noqa: F401

 import six

@@ -53,7 +53,6 @@
     InconsistentInstallDirectoryError,
 )
 from spack.error import SpackError
-from spack.filesystem_view import YamlFilesystemView
 from spack.util.crypto import bit_length
 from spack.version import Version

@@ -726,6 +725,15 @@ def query_by_spec_hash(self, hash_key, data=None):
             return True, db._data[hash_key]
         return False, None

+    def query_local_by_spec_hash(self, hash_key):
+        """Get a spec by hash in the local database
+
+        Return:
+            (InstallRecord or None): InstallRecord when installed
+            locally, otherwise None."""
+        with self.read_transaction():
+            return self._data.get(hash_key, None)
|
||||
def _assign_dependencies(self, hash_key, installs, data):
|
||||
# Add dependencies from other records in the install DB to
|
||||
# form a full spec.
|
||||
@@ -1379,23 +1387,6 @@ def installed_extensions_for(self, extendee_spec):
|
||||
if spec.package.extends(extendee_spec):
|
||||
yield spec.package
|
||||
|
||||
@_autospec
|
||||
def activated_extensions_for(self, extendee_spec, extensions_layout=None):
|
||||
"""
|
||||
Return the specs of all packages that extend
|
||||
the given spec
|
||||
"""
|
||||
if extensions_layout is None:
|
||||
view = YamlFilesystemView(extendee_spec.prefix, spack.store.layout)
|
||||
extensions_layout = view.extensions_layout
|
||||
for spec in self.query():
|
||||
try:
|
||||
extensions_layout.check_activated(extendee_spec, spec)
|
||||
yield spec.package
|
||||
except spack.directory_layout.NoSuchExtensionError:
|
||||
continue
|
||||
# TODO: conditional way to do this instead of catching exceptions
|
||||
|
||||
def _get_by_hash_local(self, dag_hash, default=None, installed=any):
|
||||
# hash is a full hash and is in the data somewhere
|
||||
if dag_hash in self._data:
|
||||
|
@@ -468,14 +468,7 @@ def _execute_depends_on(pkg):

@directive(("extendees", "dependencies"))
def extends(spec, type=("build", "run"), **kwargs):
    """Same as depends_on, but allows symlinking into dependency's
    prefix tree.

    This is for Python and other language modules where the module
    needs to be installed into the prefix of the Python installation.
    Spack handles this by installing modules into their own prefix,
    but allowing ONE module version to be symlinked into a parent
    Python install at a time, using ``spack activate``.
    """Same as depends_on, but also adds this package to the extendee list.

    keyword arguments can be passed to extends() so that extension
    packages can pass parameters to the extendee's extension
@@ -10,10 +10,8 @@
import re
import shutil
import sys
import tempfile
from contextlib import contextmanager

import ruamel.yaml as yaml
import six

import llnl.util.filesystem as fs
@@ -389,205 +387,6 @@ def remove_install_directory(self, spec, deprecated=False):
            path = os.path.dirname(path)


class ExtensionsLayout(object):
    """A directory layout is used to associate unique paths with specs for
    package extensions.
    Keeps track of which extensions are activated for what package.
    Depending on the use case, this can mean globally activated extensions
    directly in the installation folder - or extensions activated in
    filesystem views.
    """

    def __init__(self, view, **kwargs):
        self.view = view

    def add_extension(self, spec, ext_spec):
        """Add to the list of currently installed extensions."""
        raise NotImplementedError()

    def check_activated(self, spec, ext_spec):
        """Ensure that ext_spec can be removed from spec.

        If not, raise NoSuchExtensionError.
        """
        raise NotImplementedError()

    def check_extension_conflict(self, spec, ext_spec):
        """Ensure that ext_spec can be activated in spec.

        If not, raise ExtensionAlreadyInstalledError or
        ExtensionConflictError.
        """
        raise NotImplementedError()

    def extension_map(self, spec):
        """Get a dict of currently installed extension packages for a spec.

        Dict maps { name : extension_spec }
        Modifying dict does not affect internals of this layout.
        """
        raise NotImplementedError()

    def extendee_target_directory(self, extendee):
        """Specify to which full path extendee should link all files
        from extensions."""
        raise NotImplementedError

    def remove_extension(self, spec, ext_spec):
        """Remove from the list of currently installed extensions."""
        raise NotImplementedError()


class YamlViewExtensionsLayout(ExtensionsLayout):
    """Maintain extensions within a view."""

    def __init__(self, view, layout):
        """layout is the corresponding YamlDirectoryLayout object for which
        we implement extensions.
        """
        super(YamlViewExtensionsLayout, self).__init__(view)
        self.layout = layout
        self.extension_file_name = "extensions.yaml"

        # Cache of already written/read extension maps.
        self._extension_maps = {}

    def add_extension(self, spec, ext_spec):
        _check_concrete(spec)
        _check_concrete(ext_spec)

        # Check whether it's already installed or if it's a conflict.
        exts = self._extension_map(spec)
        self.check_extension_conflict(spec, ext_spec)

        # do the actual adding.
        exts[ext_spec.name] = ext_spec
        self._write_extensions(spec, exts)

    def check_extension_conflict(self, spec, ext_spec):
        exts = self._extension_map(spec)
        if ext_spec.name in exts:
            installed_spec = exts[ext_spec.name]
            if ext_spec.dag_hash() == installed_spec.dag_hash():
                raise ExtensionAlreadyInstalledError(spec, ext_spec)
            else:
                raise ExtensionConflictError(spec, ext_spec, installed_spec)

    def check_activated(self, spec, ext_spec):
        exts = self._extension_map(spec)
        if (ext_spec.name not in exts) or (ext_spec != exts[ext_spec.name]):
            raise NoSuchExtensionError(spec, ext_spec)

    def extension_file_path(self, spec):
        """Gets full path to an installed package's extension file, which
        keeps track of all the extensions for that package which have been
        added to this view.
        """
        _check_concrete(spec)
        normalize_path = lambda p: (os.path.abspath(p).rstrip(os.path.sep))

        view_prefix = self.view.get_projection_for_spec(spec)
        if normalize_path(spec.prefix) == normalize_path(view_prefix):
            # For backwards compatibility, when the view is the extended
            # package's installation directory, do not include the spec name
            # as a subdirectory.
            components = [view_prefix, self.layout.metadata_dir, self.extension_file_name]
        else:
            components = [
                view_prefix,
                self.layout.metadata_dir,
                spec.name,
                self.extension_file_name,
            ]

        return os.path.join(*components)

    def extension_map(self, spec):
        """Defensive copying version of _extension_map() for external API."""
        _check_concrete(spec)
        return self._extension_map(spec).copy()

    def remove_extension(self, spec, ext_spec):
        _check_concrete(spec)
        _check_concrete(ext_spec)

        # Make sure it's installed before removing.
        exts = self._extension_map(spec)
        self.check_activated(spec, ext_spec)

        # do the actual removing.
        del exts[ext_spec.name]
        self._write_extensions(spec, exts)

    def _extension_map(self, spec):
        """Get a dict<name -> spec> for all extensions currently
        installed for this package."""
        _check_concrete(spec)

        if spec not in self._extension_maps:
            path = self.extension_file_path(spec)
            if not os.path.exists(path):
                self._extension_maps[spec] = {}

            else:
                by_hash = self.layout.specs_by_hash()
                exts = {}
                with open(path) as ext_file:
                    yaml_file = yaml.load(ext_file)
                    for entry in yaml_file["extensions"]:
                        name = next(iter(entry))
                        dag_hash = entry[name]["hash"]
                        prefix = entry[name]["path"]

                        if dag_hash not in by_hash:
                            raise InvalidExtensionSpecError(
                                "Spec %s not found in %s" % (dag_hash, prefix)
                            )

                        ext_spec = by_hash[dag_hash]
                        if prefix != ext_spec.prefix:
                            raise InvalidExtensionSpecError(
                                "Prefix %s does not match spec hash %s: %s"
                                % (prefix, dag_hash, ext_spec)
                            )

                        exts[ext_spec.name] = ext_spec
                self._extension_maps[spec] = exts

        return self._extension_maps[spec]

    def _write_extensions(self, spec, extensions):
        path = self.extension_file_path(spec)

        if not extensions:
            # Remove the empty extensions file
            os.remove(path)
            return

        # Create a temp file in the same directory as the actual file.
        dirname, basename = os.path.split(path)
        fs.mkdirp(dirname)

        tmp = tempfile.NamedTemporaryFile(prefix=basename, dir=dirname, delete=False)

        # write tmp file
        with tmp:
            yaml.dump(
                {
                    "extensions": [
                        {ext.name: {"hash": ext.dag_hash(), "path": str(ext.prefix)}}
                        for ext in sorted(extensions.values())
                    ]
                },
                tmp,
                default_flow_style=False,
                encoding="utf-8",
            )

        # Atomic update by moving tmpfile on top of old one.
        fs.rename(tmp.name, path)


class DirectoryLayoutError(SpackError):
    """Superclass for directory layout errors."""
@@ -644,13 +443,3 @@ def __init__(self, spec, ext_spec, conflict):
            "%s cannot be installed in %s because it conflicts with %s"
            % (ext_spec.short_spec, spec.short_spec, conflict.short_spec)
        )


class NoSuchExtensionError(DirectoryLayoutError):
    """Raised when an extension isn't there on deactivate."""

    def __init__(self, spec, ext_spec):
        super(NoSuchExtensionError, self).__init__(
            "%s cannot be removed from %s because it's not activated."
            % (ext_spec.short_spec, spec.short_spec)
        )
@@ -786,17 +786,12 @@ def _read_manifest(self, f, raw_yaml=None):
            )
        else:
            self.views = {}

        # Retrieve the current concretization strategy
        configuration = config_dict(self.yaml)

        # Let `concretization` overrule `concretizer:unify` config for now,
        # but use a translation table to have internally a representation
        # as if we were using the new configuration
        translation = {"separately": False, "together": True}
        try:
            self.unify = translation[configuration["concretization"]]
        except KeyError:
            self.unify = spack.config.get("concretizer:unify", False)
        # Retrieve unification scheme for the concretizer
        self.unify = spack.config.get("concretizer:unify", False)

        # Retrieve dev-build packages:
        self.dev_specs = configuration.get("develop", {})
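The removed lines above were a compatibility shim: the legacy string-valued `concretization` key was mapped onto the boolean `concretizer:unify` setting via a translation table. A minimal sketch of that pattern, using a plain dict in place of Spack's real config objects (the `resolve_unify` name is invented):

    def resolve_unify(manifest, global_default=False):
        # Legacy enum value wins when present; otherwise fall back to the
        # new boolean option (mirrors the removed try/except KeyError).
        translation = {"separately": False, "together": True}
        try:
            return translation[manifest["concretization"]]
        except KeyError:
            return global_default

    # resolve_unify({"concretization": "together"})  -> True
    # resolve_unify({})                               -> global_default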
@@ -37,10 +37,6 @@
import spack.store
import spack.util.spack_json as s_json
import spack.util.spack_yaml as s_yaml
from spack.directory_layout import (
    ExtensionAlreadyInstalledError,
    YamlViewExtensionsLayout,
)
from spack.error import SpackError

__all__ = ["FilesystemView", "YamlFilesystemView"]
@@ -166,9 +162,6 @@ def add_specs(self, *specs, **kwargs):
        """
        Add given specs to view.

        The supplied specs might be standalone packages or extensions of
        other packages.

        Should accept `with_dependencies` as keyword argument (default
        True) to indicate whether or not dependencies should be activated as
        well.
@@ -176,13 +169,7 @@ def add_specs(self, *specs, **kwargs):
        Should accept an `exclude` keyword argument containing a list of
        regexps that filter out matching spec names.

        This method should make use of `activate_{extension,standalone}`.
        """
        raise NotImplementedError

    def add_extension(self, spec):
        """
        Add (link) an extension in this view. Does not add dependencies.
        This method should make use of `activate_standalone`.
        """
        raise NotImplementedError

@@ -202,9 +189,6 @@ def remove_specs(self, *specs, **kwargs):
        """
        Removes given specs from view.

        The supplied spec might be a standalone package or an extension of
        another package.

        Should accept `with_dependencies` as keyword argument (default
        True) to indicate whether or not dependencies should be deactivated
        as well.
@@ -216,13 +200,7 @@ def remove_specs(self, *specs, **kwargs):
        Should accept an `exclude` keyword argument containing a list of
        regexps that filter out matching spec names.

        This method should make use of `deactivate_{extension,standalone}`.
        """
        raise NotImplementedError

    def remove_extension(self, spec):
        """
        Remove (unlink) an extension from this view.
        This method should make use of `deactivate_standalone`.
        """
        raise NotImplementedError
@@ -296,8 +274,6 @@ def __init__(self, root, layout, **kwargs):
            msg += " which does not match projections passed manually."
            raise ConflictingProjectionsError(msg)

        self.extensions_layout = YamlViewExtensionsLayout(self, layout)

        self._croot = colorize_root(self._root) + " "

    def write_projections(self):
@@ -332,38 +308,10 @@ def add_specs(self, *specs, **kwargs):
                self.print_conflict(v, s)
                return

        extensions = set(filter(lambda s: s.package.is_extension, specs))
        standalones = specs - extensions

        set(map(self._check_no_ext_conflicts, extensions))
        # fail on first error, otherwise link extensions as well
        if all(map(self.add_standalone, standalones)):
            all(map(self.add_extension, extensions))

    def add_extension(self, spec):
        if not spec.package.is_extension:
            tty.error(self._croot + "Package %s is not an extension." % spec.name)
            return False

        if spec.external:
            tty.warn(self._croot + "Skipping external package: %s" % colorize_spec(spec))
            return True

        if not spec.package.is_activated(self):
            spec.package.do_activate(self, verbose=self.verbose, with_dependencies=False)

        # make sure the meta folder is linked as well (this is not done by the
        # extension-activation mechanism)
        if not self.check_added(spec):
            self.link_meta_folder(spec)

        return True
        for s in specs:
            self.add_standalone(s)

    def add_standalone(self, spec):
        if spec.package.is_extension:
            tty.error(self._croot + "Package %s is an extension." % spec.name)
            return False

        if spec.external:
            tty.warn(self._croot + "Skipping external package: %s" % colorize_spec(spec))
            return True
@@ -372,19 +320,6 @@ def add_standalone(self, spec):
            tty.warn(self._croot + "Skipping already linked package: %s" % colorize_spec(spec))
            return True

        if spec.package.extendable:
            # Check for globally activated extensions in the extendee that
            # we're looking at.
            activated = [p.spec for p in spack.store.db.activated_extensions_for(spec)]
            if activated:
                tty.error(
                    "Globally activated extensions cannot be used in "
                    "conjunction with filesystem views. "
                    "Please deactivate the following specs: "
                )
                spack.cmd.display_specs(activated, flags=True, variants=True, long=False)
                return False

        self.merge(spec)

        self.link_meta_folder(spec)
@@ -533,27 +468,10 @@ def remove_specs(self, *specs, **kwargs):

        # Remove the packages from the view
        for spec in to_deactivate_sorted:
            if spec.package.is_extension:
                self.remove_extension(spec, with_dependents=with_dependents)
            else:
                self.remove_standalone(spec)
            self.remove_standalone(spec)

        self._purge_empty_directories()

    def remove_extension(self, spec, with_dependents=True):
        """
        Remove (unlink) an extension from this view.
        """
        if not self.check_added(spec):
            tty.warn(self._croot + "Skipping package not linked in view: %s" % spec.name)
            return

        if spec.package.is_activated(self):
            spec.package.do_deactivate(
                self, verbose=self.verbose, remove_dependents=with_dependents
            )
        self.unlink_meta_folder(spec)

    def remove_standalone(self, spec):
        """
        Remove (unlink) a standalone package from this view.
@@ -575,8 +493,8 @@ def get_projection_for_spec(self, spec):
        Relies on the ordering of projections to avoid ambiguity.
        """
        spec = spack.spec.Spec(spec)
        # Extensions are placed by their extendee, not by their own spec
        locator_spec = spec

        if spec.package.extendee_spec:
            locator_spec = spec.package.extendee_spec

@@ -712,18 +630,6 @@ def unlink_meta_folder(self, spec):
        assert os.path.exists(path)
        shutil.rmtree(path)

    def _check_no_ext_conflicts(self, spec):
        """
        Check that there is no extension conflict for specs.
        """
        extendee = spec.package.extendee_spec
        try:
            self.extensions_layout.check_extension_conflict(extendee, spec)
        except ExtensionAlreadyInstalledError:
            # we print the warning here because later on the order in which
            # packages get activated is not clear (set-sorting)
            tty.warn(self._croot + "Skipping already activated package: %s" % spec.name)


class SimpleFilesystemView(FilesystemView):
    """A simple and partial implementation of FilesystemView focused on
@@ -781,22 +687,34 @@ def skip_list(file):
        for dst in visitor.directories:
            os.mkdir(os.path.join(self._root, dst))

        # Then group the files to be linked by spec...
        # For compatibility, we have to create a merge_map dict mapping
        # full_src => full_dst
        files_per_spec = itertools.groupby(visitor.files.items(), key=lambda item: item[1][0])

        for (spec, (src_root, rel_paths)) in zip(specs, files_per_spec):
            merge_map = dict()
            for dst_rel, (_, src_rel) in rel_paths:
                full_src = os.path.join(src_root, src_rel)
                full_dst = os.path.join(self._root, dst_rel)
                merge_map[full_src] = full_dst
        # Link the files using a "merge map": full src => full dst
        merge_map_per_prefix = self._source_merge_visitor_to_merge_map(visitor)
        for spec in specs:
            merge_map = merge_map_per_prefix.get(spec.package.view_source(), None)
            if not merge_map:
                # Not every spec may have files to contribute.
                continue
            spec.package.add_files_to_view(self, merge_map, skip_if_exists=False)

        # Finally create the metadata dirs.
        self.link_metadata(specs)

    def _source_merge_visitor_to_merge_map(self, visitor):
        # For compatibility with add_files_to_view, we have to create a
        # merge_map of the form join(src_root, src_rel) => join(dst_root, dst_rel),
        # but our visitor.files format is dst_rel => (src_root, src_rel).
        # We exploit that visitor.files is an ordered dict, and files per source
        # prefix are contiguous.
        source_root = lambda item: item[1][0]
        per_source = itertools.groupby(visitor.files.items(), key=source_root)
        return {
            src_root: {
                os.path.join(src_root, src_rel): os.path.join(self._root, dst_rel)
                for dst_rel, (_, src_rel) in group
            }
            for src_root, group in per_source
        }

    def link_metadata(self, specs):
        metadata_visitor = SourceMergeVisitor()
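The new helper above leans on two properties: dicts preserve insertion order, and `itertools.groupby` only merges adjacent items with equal keys. A standalone sketch with made-up paths (`/prefix/...` and `/view` are illustrative only):

    import itertools
    import os

    # visitor.files maps dst_rel -> (src_root, src_rel), with entries for
    # the same source prefix stored contiguously; sample data:
    files = {
        "bin/python": ("/prefix/python", "bin/python"),
        "bin/pip": ("/prefix/pip", "bin/pip"),
        "bin/pip3": ("/prefix/pip", "bin/pip3"),
    }
    root = "/view"

    # groupby merges only adjacent equal keys, which is why the
    # contiguity of visitor.files matters.
    per_source = itertools.groupby(files.items(), key=lambda item: item[1][0])
    merge_map_per_prefix = {
        src_root: {
            os.path.join(src_root, src_rel): os.path.join(root, dst_rel)
            for dst_rel, (_, src_rel) in group
        }
        for src_root, group in per_source
    }
    # merge_map_per_prefix["/prefix/pip"] now maps two full source paths
    # to their destinations under /view.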
@@ -842,14 +760,13 @@ def get_projection_for_spec(self, spec):
        Relies on the ordering of projections to avoid ambiguity.
        """
        spec = spack.spec.Spec(spec)
        # Extensions are placed by their extendee, not by their own spec
        locator_spec = spec
        if spec.package.extendee_spec:
            locator_spec = spec.package.extendee_spec

        proj = spack.projections.get_projection(self.projections, locator_spec)
        if spec.package.extendee_spec:
            spec = spec.package.extendee_spec

        proj = spack.projections.get_projection(self.projections, spec)
        if proj:
            return os.path.join(self._root, locator_spec.format(proj))
            return os.path.join(self._root, spec.format(proj))
        return self._root
@@ -1,20 +0,0 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

import spack
from spack.filesystem_view import YamlFilesystemView


def pre_uninstall(spec):
    pkg = spec.package
    assert spec.concrete

    if pkg.is_extension:
        target = pkg.extendee_spec.prefix
        view = YamlFilesystemView(target, spack.store.layout)

        if pkg.is_activated(view):
            # deactivate globally
            pkg.do_deactivate(force=True)
@@ -186,39 +186,44 @@ def install_sbang():
    ``sbang`` here ensures that users can access the script and that
    ``sbang`` itself is in a short path.
    """
    # copy in a new version of sbang if it differs from what's in spack
    sbang_path = sbang_install_path()
    if os.path.exists(sbang_path) and filecmp.cmp(spack.paths.sbang_script, sbang_path):
        return

    all = spack.spec.Spec("all")
    group_name = spack.package_prefs.get_package_group(all)
    config_mode = spack.package_prefs.get_package_dir_permissions(all)
    group_id = grp.getgrnam(group_name).gr_gid if group_name else None

    # First setup the bin dir correctly.
    # make $install_tree/bin
    sbang_bin_dir = os.path.dirname(sbang_path)
    if not os.path.isdir(sbang_bin_dir):
        fs.mkdirp(sbang_bin_dir)
    fs.mkdirp(sbang_bin_dir)

    # Set group and ownership like we do on package directories
    if group_id:
        os.chown(sbang_bin_dir, os.stat(sbang_bin_dir).st_uid, group_id)
    os.chmod(sbang_bin_dir, config_mode)
    # get permissions for bin dir from configuration files
    group_name = spack.package_prefs.get_package_group(spack.spec.Spec("all"))
    config_mode = spack.package_prefs.get_package_dir_permissions(spack.spec.Spec("all"))

    if group_name:
        os.chmod(sbang_bin_dir, config_mode)  # Use package directory permissions
    else:
        fs.set_install_permissions(sbang_bin_dir)

    # Then check if we need to install sbang itself.
    try:
        already_installed = filecmp.cmp(spack.paths.sbang_script, sbang_path)
    except (IOError, OSError):
        already_installed = False
    # set group on sbang_bin_dir if not already set (only if set in configuration)
    # TODO: after we drop python2 support, use shutil.chown to avoid gid lookups that
    # can fail for remote groups
    if group_name and os.stat(sbang_bin_dir).st_gid != grp.getgrnam(group_name).gr_gid:
        os.chown(sbang_bin_dir, os.stat(sbang_bin_dir).st_uid, grp.getgrnam(group_name).gr_gid)

    if not already_installed:
        with fs.write_tmp_and_move(sbang_path) as f:
            shutil.copy(spack.paths.sbang_script, f.name)
    # copy over the fresh copy of `sbang`
    sbang_tmp_path = os.path.join(
        os.path.dirname(sbang_path),
        ".%s.tmp" % os.path.basename(sbang_path),
    )
    shutil.copy(spack.paths.sbang_script, sbang_tmp_path)

    # Set permissions on `sbang` (including group if set in configuration)
    os.chmod(sbang_path, config_mode)
    if group_id:
        os.chown(sbang_path, os.stat(sbang_path).st_uid, group_id)
    # set permissions on `sbang` (including group if set in configuration)
    os.chmod(sbang_tmp_path, config_mode)
    if group_name:
        os.chown(sbang_tmp_path, os.stat(sbang_tmp_path).st_uid, grp.getgrnam(group_name).gr_gid)

    # Finally, move the new `sbang` into place atomically
    os.rename(sbang_tmp_path, sbang_path)


def post_install(spec):
@@ -56,9 +56,9 @@
import spack.store
import spack.util.executable
import spack.util.path
import spack.util.timer as timer
from spack.util.environment import EnvironmentModifications, dump_environment
from spack.util.executable import which
from spack.util.timer import Timer

#: Counter to support unique spec sequencing that is used to ensure packages
#: with the same priority are (initially) processed in the order in which they
@@ -304,9 +304,9 @@ def _install_from_cache(pkg, cache_only, explicit, unsigned=False):
        bool: ``True`` if the package was extracted from binary cache,
        ``False`` otherwise
    """
    timer = Timer()
    t = timer.Timer()
    installed_from_cache = _try_install_from_binary_cache(
        pkg, explicit, unsigned=unsigned, timer=timer
        pkg, explicit, unsigned=unsigned, timer=t
    )
    pkg_id = package_id(pkg)
    if not installed_from_cache:
@@ -316,14 +316,14 @@ def _install_from_cache(pkg, cache_only, explicit, unsigned=False):

        tty.msg("{0}: installing from source".format(pre))
        return False
    timer.stop()
    t.stop()
    tty.debug("Successfully extracted {0} from binary cache".format(pkg_id))
    _print_timer(
        pre=_log_prefix(pkg.name),
        pkg_id=pkg_id,
        fetch=timer.phases.get("search", 0) + timer.phases.get("fetch", 0),
        build=timer.phases.get("install", 0),
        total=timer.total,
        fetch=t.duration("search") + t.duration("fetch"),
        build=t.duration("install"),
        total=t.duration(),
    )
    _print_installed_pkg(pkg.spec.prefix)
    spack.hooks.post_install(pkg.spec)
@@ -372,7 +372,7 @@ def _process_external_package(pkg, explicit):


def _process_binary_cache_tarball(
    pkg, binary_spec, explicit, unsigned, mirrors_for_spec=None, timer=None
    pkg, binary_spec, explicit, unsigned, mirrors_for_spec=None, timer=timer.NULL_TIMER
):
    """
    Process the binary cache tarball.
@@ -391,11 +391,11 @@ def _process_binary_cache_tarball(
        bool: ``True`` if the package was extracted from binary cache,
        else ``False``
    """
    timer.start("fetch")
    download_result = binary_distribution.download_tarball(
        binary_spec, unsigned, mirrors_for_spec=mirrors_for_spec
    )
    if timer:
        timer.phase("fetch")
    timer.stop("fetch")
    # see #10063 : install from source if tarball doesn't exist
    if download_result is None:
        tty.msg("{0} exists in binary cache but with different hash".format(pkg.name))
@@ -405,6 +405,7 @@ def _process_binary_cache_tarball(
    tty.msg("Extracting {0} from binary cache".format(pkg_id))

    # don't print long padded paths while extracting/relocating binaries
    timer.start("install")
    with spack.util.path.filter_padding():
        binary_distribution.extract_tarball(
            binary_spec, download_result, allow_root=False, unsigned=unsigned, force=False
@@ -412,12 +413,11 @@ def _process_binary_cache_tarball(

    pkg.installed_from_binary_cache = True
    spack.store.db.add(pkg.spec, spack.store.layout, explicit=explicit)
    if timer:
        timer.phase("install")
    timer.stop("install")
    return True


def _try_install_from_binary_cache(pkg, explicit, unsigned=False, timer=None):
def _try_install_from_binary_cache(pkg, explicit, unsigned=False, timer=timer.NULL_TIMER):
    """
    Try to extract the package from binary cache.
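The hunks above replace the optional `timer=None` parameter and its `if timer:` guards with a `NULL_TIMER` default and unconditional `start`/`stop` calls. A minimal sketch of the interface these call sites rely on, assuming nothing beyond the standard library (Spack's real `spack.util.timer` differs in detail):

    import time

    class Timer:
        """Tracks named phases; a global phase runs from construction."""

        def __init__(self):
            self._start = {"_global": time.time()}
            self._done = {}

        def start(self, name="_global"):
            self._start[name] = time.time()

        def stop(self, name="_global"):
            begin = self._start.pop(name, None)
            if begin is not None:
                self._done[name] = self._done.get(name, 0.0) + time.time() - begin

        def duration(self, name="_global"):
            # a still-running phase reports its elapsed time so far
            if name in self._start:
                return self._done.get(name, 0.0) + time.time() - self._start[name]
            return self._done.get(name, 0.0)

    class NullTimer(Timer):
        """Null object: every operation is a no-op, so callers can drop
        the `if timer:` checks entirely."""

        def __init__(self):
            pass

        def start(self, name="_global"):
            pass

        def stop(self, name="_global"):
            pass

        def duration(self, name="_global"):
            return 0.0

    NULL_TIMER = NullTimer()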
@@ -430,10 +430,10 @@ def _try_install_from_binary_cache(pkg, explicit, unsigned=False, timer=None):
    """
    pkg_id = package_id(pkg)
    tty.debug("Searching for binary cache of {0}".format(pkg_id))
    matches = binary_distribution.get_mirrors_for_spec(pkg.spec)

    if timer:
        timer.phase("search")
    timer.start("search")
    matches = binary_distribution.get_mirrors_for_spec(pkg.spec)
    timer.stop("search")

    if not matches:
        return False
@@ -462,11 +462,10 @@ def combine_phase_logs(phase_log_files, log_path):
        phase_log_files (list): a list or iterator of logs to combine
        log_path (str): the path to combine them to
    """

    with open(log_path, "w") as log_file:
    with open(log_path, "wb") as log_file:
        for phase_log_file in phase_log_files:
            with open(phase_log_file, "r") as phase_log:
                log_file.write(phase_log.read())
            with open(phase_log_file, "rb") as phase_log:
                shutil.copyfileobj(phase_log, log_file)


def dump_packages(spec, path):
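The `combine_phase_logs` change above switches to binary mode (so undecodable bytes in build logs cannot raise) and to `shutil.copyfileobj`, which streams in fixed-size chunks rather than loading each log fully into memory. A hedged standalone version of the same pattern (the `concatenate` name is invented):

    import shutil

    def concatenate(paths, out_path, chunk_size=64 * 1024):
        # stream each source file into the output in chunks, byte-for-byte
        with open(out_path, "wb") as out:
            for path in paths:
                with open(path, "rb") as src:
                    shutil.copyfileobj(src, out, chunk_size)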
@@ -1774,14 +1773,16 @@ def install(self):
                raise

            except binary_distribution.NoChecksumException as exc:
                if not task.cache_only:
                    # Checking hash on downloaded binary failed.
                    err = "Failed to install {0} from binary cache due to {1}:"
                    err += " Requeueing to install from source."
                    tty.error(err.format(pkg.name, str(exc)))
                    task.use_cache = False
                    self._requeue_task(task)
                    continue
                if task.cache_only:
                    raise

                # Checking hash on downloaded binary failed.
                err = "Failed to install {0} from binary cache due to {1}:"
                err += " Requeueing to install from source."
                tty.error(err.format(pkg.name, str(exc)))
                task.use_cache = False
                self._requeue_task(task)
                continue

            except (Exception, SystemExit) as exc:
                self._update_failed(task, True, exc)
@@ -1906,7 +1907,7 @@ def __init__(self, pkg, install_args):
        self.env_mods = install_args.get("env_modifications", EnvironmentModifications())

        # timer for build phases
        self.timer = Timer()
        self.timer = timer.Timer()

        # If we are using a padded path, filter the output to compress padded paths
        # The real log still has full-length paths.
@@ -1961,8 +1962,8 @@ def run(self):
            pre=self.pre,
            pkg_id=self.pkg_id,
            fetch=self.pkg._fetch_time,
            build=self.timer.total - self.pkg._fetch_time,
            total=self.timer.total,
            build=self.timer.duration() - self.pkg._fetch_time,
            total=self.timer.duration(),
        )
        _print_installed_pkg(self.pkg.prefix)

@@ -2035,6 +2036,7 @@ def _real_install(self):
        )

        with log_contextmanager as logger:
            # Redirect stdout and stderr to daemon pipe
            with logger.force_echo():
                inner_debug_level = tty.debug_level()
                tty.set_debug(debug_level)
@@ -2042,12 +2044,11 @@ def _real_install(self):
                tty.msg(msg.format(self.pre, phase_fn.name))
                tty.set_debug(inner_debug_level)

                # Redirect stdout and stderr to daemon pipe
                self.timer.phase(phase_fn.name)

                # Catch any errors to report to logging
                self.timer.start(phase_fn.name)
                phase_fn.execute()
                spack.hooks.on_phase_success(pkg, phase_fn.name, log_file)
                self.timer.stop(phase_fn.name)

            except BaseException:
                combine_phase_logs(pkg.phase_log_files, pkg.log_path)
@@ -34,7 +34,7 @@
import inspect
import os.path
import re
from typing import Optional  # novm
from typing import Optional  # novm # noqa: F401

import llnl.util.filesystem
import llnl.util.tty as tty
@@ -402,13 +402,19 @@ def get_module(module_type, spec, get_full_path, module_set_name="default", requ
    else:
        writer = spack.modules.module_types[module_type](spec, module_set_name)
        if not os.path.isfile(writer.layout.filename):
            fmt_str = "{name}{@version}{/hash:7}"
            if not writer.conf.excluded:
                err_msg = "No module available for package {0} at {1}".format(
                    spec, writer.layout.filename
                raise ModuleNotFoundError(
                    "The module for package {} should be at {}, but it does not exist".format(
                        spec.format(fmt_str), writer.layout.filename
                    )
                )
                raise ModuleNotFoundError(err_msg)
            elif required:
                tty.debug("The module configuration has excluded {0}: " "omitting it".format(spec))
                tty.debug(
                    "The module configuration has excluded {}: omitting it".format(
                        spec.format(fmt_str)
                    )
                )
            else:
                return None

@@ -696,7 +702,7 @@ def configure_options(self):

        if os.path.exists(pkg.install_configure_args_path):
            with open(pkg.install_configure_args_path, "r") as args_file:
                return args_file.read()
                return spack.util.path.padding_filter(args_file.read())

        # Returning a false-like value makes the default templates skip
        # the configure option section
@@ -26,9 +26,6 @@
def configuration(module_set_name):
    config_path = "modules:%s:lmod" % module_set_name
    config = spack.config.get(config_path, {})
    if not config and module_set_name == "default":
        # return old format for backward compatibility
        return spack.config.get("modules:lmod", {})
    return config
@@ -23,9 +23,6 @@
def configuration(module_set_name):
    config_path = "modules:%s:tcl" % module_set_name
    config = spack.config.get(config_path, {})
    if not config and module_set_name == "default":
        # return old format for backward compatibility
        return spack.config.get("modules:tcl", {})
    return config
@@ -27,7 +27,16 @@
import traceback
import types
import warnings
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type  # novm
from typing import (  # novm # noqa: F401
    Any,
    Callable,
    Dict,
    Iterable,
    List,
    Optional,
    Tuple,
    Type,
)

import six

@@ -531,10 +540,6 @@ class PackageBase(six.with_metaclass(PackageMeta, WindowsRPathMeta, PackageViewM
    # These are default values for instance variables.
    #

    #: A list or set of build time test functions to be called when tests
    #: are executed or 'None' if there are no such test functions.
    build_time_test_callbacks = None  # type: Optional[List[str]]

    #: By default, packages are not virtual
    #: Virtual packages override this attribute
    virtual = False
@@ -543,10 +548,6 @@ class PackageBase(six.with_metaclass(PackageMeta, WindowsRPathMeta, PackageViewM
    #: those that do not can be used to install a set of other Spack packages.
    has_code = True

    #: A list or set of install time test functions to be called when tests
    #: are executed or 'None' if there are no such test functions.
    install_time_test_callbacks = None  # type: Optional[List[str]]

    #: By default we build in parallel. Subclasses can override this.
    parallel = True
@@ -919,6 +920,12 @@ def url_for_version(self, version):
        """
        return self._implement_all_urls_for_version(version)[0]

    def update_external_dependencies(self):
        """
        Method to override in package classes to handle external dependencies
        """
        pass

    def all_urls_for_version(self, version):
        """Return all URLs derived from version_urls(), url, urls, and
        list_url (if it contains a version) in a package in that order.
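The new `update_external_dependencies` hook above is a no-op by default. A hypothetical sketch of a recipe overriding it — the class name and body are invented; only the hook itself comes from the diff:

    class MyExternalAwarePackage(Package):  # hypothetical recipe
        def update_external_dependencies(self):
            # Invoked by the solver for external nodes (see the build_specs
            # hunk later in this diff); a recipe could inspect self.spec here
            # and attach or adjust dependency metadata for the external install.
            pass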
@@ -1307,19 +1314,6 @@ def extends(self, spec):
        s = self.extendee_spec
        return s and spec.satisfies(s)

    def is_activated(self, view):
        """Return True if package is activated."""
        if not self.is_extension:
            raise ValueError("is_activated called on package that is not an extension.")
        if self.extendee_spec.installed_upstream:
            # If this extends an upstream package, it cannot be activated for
            # it. This bypasses construction of the extension map, which
            # can fail when run in the context of a downstream Spack instance
            return False
        extensions_layout = view.extensions_layout
        exts = extensions_layout.extension_map(self.extendee_spec)
        return (self.name in exts) and (exts[self.name] == self.spec)

    def provides(self, vpkg_name):
        """
        True if this package provides a virtual package with the specified name
@@ -2319,30 +2313,6 @@ def do_deprecate(self, deprecator, link_fn):
        """Deprecate this package in favor of deprecator spec"""
        spec = self.spec

        # Check whether package to deprecate has active extensions
        if self.extendable:
            view = spack.filesystem_view.YamlFilesystemView(spec.prefix, spack.store.layout)
            active_exts = view.extensions_layout.extension_map(spec).values()
            if active_exts:
                short = spec.format("{name}/{hash:7}")
                m = "Spec %s has active extensions\n" % short
                for active in active_exts:
                    m += "    %s\n" % active.format("{name}/{hash:7}")
                m += "Deactivate extensions before deprecating %s" % short
                tty.die(m)

        # Check whether package to deprecate is an active extension
        if self.is_extension:
            extendee = self.extendee_spec
            view = spack.filesystem_view.YamlFilesystemView(extendee.prefix, spack.store.layout)

            if self.is_activated(view):
                short = spec.format("{name}/{hash:7}")
                short_ext = extendee.format("{name}/{hash:7}")
                msg = "Spec %s is an active extension of %s\n" % (short, short_ext)
                msg += "Deactivate %s to be able to deprecate it" % short
                tty.die(msg)

        # Install deprecator if it isn't installed already
        if not spack.store.db.query(deprecator):
            deprecator.package.do_install()
@@ -2372,155 +2342,6 @@ def _check_extendable(self):
        if not self.extendable:
            raise ValueError("Package %s is not extendable!" % self.name)

    def _sanity_check_extension(self):
        if not self.is_extension:
            raise ActivationError("This package is not an extension.")

        extendee_package = self.extendee_spec.package
        extendee_package._check_extendable()

        if not self.extendee_spec.installed:
            raise ActivationError("Can only (de)activate extensions for installed packages.")
        if not self.spec.installed:
            raise ActivationError("Extensions must first be installed.")
        if self.extendee_spec.name not in self.extendees:
            raise ActivationError("%s does not extend %s!" % (self.name, self.extendee.name))

    def do_activate(self, view=None, with_dependencies=True, verbose=True):
        """Called on an extension to invoke the extendee's activate method.

        Commands should call this routine, and should not call
        activate() directly.
        """
        if verbose:
            tty.msg(
                "Activating extension {0} for {1}".format(
                    self.spec.cshort_spec, self.extendee_spec.cshort_spec
                )
            )

        self._sanity_check_extension()
        if not view:
            view = YamlFilesystemView(self.extendee_spec.prefix, spack.store.layout)

        extensions_layout = view.extensions_layout

        try:
            extensions_layout.check_extension_conflict(self.extendee_spec, self.spec)
        except spack.directory_layout.ExtensionAlreadyInstalledError as e:
            # already installed, let caller know
            tty.msg(e.message)
            return

        # Activate any package dependencies that are also extensions.
        if with_dependencies:
            for spec in self.dependency_activations():
                if not spec.package.is_activated(view):
                    spec.package.do_activate(
                        view, with_dependencies=with_dependencies, verbose=verbose
                    )

        self.extendee_spec.package.activate(self, view, **self.extendee_args)

        extensions_layout.add_extension(self.extendee_spec, self.spec)

        if verbose:
            tty.debug(
                "Activated extension {0} for {1}".format(
                    self.spec.cshort_spec, self.extendee_spec.cshort_spec
                )
            )

    def dependency_activations(self):
        return (
            spec
            for spec in self.spec.traverse(root=False, deptype="run")
            if spec.package.extends(self.extendee_spec)
        )

    def activate(self, extension, view, **kwargs):
        """
        Add the extension to the specified view.

        Package authors can override this function to maintain some
        centralized state related to the set of activated extensions
        for a package.

        Spack internals (commands, hooks, etc.) should call
        do_activate() method so that proper checks are always executed.
        """
        view.merge(extension.spec, ignore=kwargs.get("ignore", None))

    def do_deactivate(self, view=None, **kwargs):
        """Remove this extension package from the specified view. Called
        on the extension to invoke extendee's deactivate() method.

        `remove_dependents=True` deactivates extensions depending on this
        package instead of raising an error.
        """
        self._sanity_check_extension()
        force = kwargs.get("force", False)
        verbose = kwargs.get("verbose", True)
        remove_dependents = kwargs.get("remove_dependents", False)

        if verbose:
            tty.msg(
                "Deactivating extension {0} for {1}".format(
                    self.spec.cshort_spec, self.extendee_spec.cshort_spec
                )
            )

        if not view:
            view = YamlFilesystemView(self.extendee_spec.prefix, spack.store.layout)
        extensions_layout = view.extensions_layout

        # Allow a force deactivate to happen. This can unlink
        # spurious files if something was corrupted.
        if not force:
            extensions_layout.check_activated(self.extendee_spec, self.spec)

            activated = extensions_layout.extension_map(self.extendee_spec)
            for name, aspec in activated.items():
                if aspec == self.spec:
                    continue
                for dep in aspec.traverse(deptype="run"):
                    if self.spec == dep:
                        if remove_dependents:
                            aspec.package.do_deactivate(**kwargs)
                        else:
                            msg = (
                                "Cannot deactivate {0} because {1} is "
                                "activated and depends on it"
                            )
                            raise ActivationError(
                                msg.format(self.spec.cshort_spec, aspec.cshort_spec)
                            )

        self.extendee_spec.package.deactivate(self, view, **self.extendee_args)

        # redundant activation check -- makes SURE the spec is not
        # still activated even if something was wrong above.
        if self.is_activated(view):
            extensions_layout.remove_extension(self.extendee_spec, self.spec)

        if verbose:
            tty.debug(
                "Deactivated extension {0} for {1}".format(
                    self.spec.cshort_spec, self.extendee_spec.cshort_spec
                )
            )

    def deactivate(self, extension, view, **kwargs):
        """
        Remove all extension files from the specified view.

        Package authors can override this method to support other
        extension mechanisms. Spack internals (commands, hooks, etc.)
        should call do_deactivate() method so that proper checks are
        always executed.
        """
        view.unmerge(extension.spec, ignore=kwargs.get("ignore", None))

    def view(self):
        """Create a view with the prefix of this package as the root.
        Extensions added to this view will modify the installation prefix of
@@ -3,6 +3,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import collections
import itertools
import multiprocessing.pool
import os
import re
@@ -296,17 +297,24 @@ def modify_macho_object(cur_path, rpaths, deps, idpath, paths_to_paths):
    if idpath:
        new_idpath = paths_to_paths.get(idpath, None)
        if new_idpath and not idpath == new_idpath:
            args += ["-id", new_idpath]
            args += [("-id", new_idpath)]

    for dep in deps:
        new_dep = paths_to_paths.get(dep)
        if new_dep and dep != new_dep:
            args += ["-change", dep, new_dep]
            args += [("-change", dep, new_dep)]

    new_rpaths = []
    for orig_rpath in rpaths:
        new_rpath = paths_to_paths.get(orig_rpath)
        if new_rpath and not orig_rpath == new_rpath:
            args += ["-rpath", orig_rpath, new_rpath]
            args_to_add = ("-rpath", orig_rpath, new_rpath)
            if args_to_add not in args and new_rpath not in new_rpaths:
                args += [args_to_add]
                new_rpaths.append(new_rpath)

    # Deduplicate and flatten
    args = list(itertools.chain.from_iterable(llnl.util.lang.dedupe(args)))
    if args:
        args.append(str(cur_path))
        install_name_tool = executable.Executable("install_name_tool")
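The hunk above groups each `install_name_tool` option with its operands into a tuple precisely so the groups become hashable and can be deduplicated as units before being flattened back into a flat argument list. A standalone sketch, with a simple stand-in for `llnl.util.lang.dedupe` and invented paths:

    import itertools

    def dedupe(seq):
        # stand-in for llnl.util.lang.dedupe: drop duplicates, keep order
        seen = set()
        return [x for x in seq if not (x in seen or seen.add(x))]

    args = [
        ("-id", "/new/lib.dylib"),
        ("-change", "/old", "/new"),
        ("-change", "/old", "/new"),  # duplicate group, removed below
    ]
    flat = list(itertools.chain.from_iterable(dedupe(args)))
    # flat == ["-id", "/new/lib.dylib", "-change", "/old", "/new"]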
@@ -8,32 +8,12 @@
.. literalinclude:: _spack_root/lib/spack/spack/schema/env.py
   :lines: 36-
"""
import warnings

from llnl.util.lang import union_dicts

import spack.schema.merged
import spack.schema.packages
import spack.schema.projections

warned_about_concretization = False


def deprecate_concretization(instance, props):
    global warned_about_concretization
    if warned_about_concretization:
        return None
    # Deprecate `spack:concretization` in favor of `spack:concretizer:unify`.
    concretization_to_unify = {"together": "true", "separately": "false"}
    concretization = instance["concretization"]
    unify = concretization_to_unify[concretization]

    return (
        "concretization:{} is deprecated and will be removed in Spack 0.19 in favor of "
        "the new concretizer:unify:{} config option.".format(concretization, unify)
    )


#: legal first keys in the schema
keys = ("spack", "env")

@@ -76,11 +56,6 @@ def deprecate_concretization(instance, props):
        "type": "object",
        "default": {},
        "additionalProperties": False,
        "deprecatedProperties": {
            "properties": ["concretization"],
            "message": deprecate_concretization,
            "error": False,
        },
        "properties": union_dicts(
            # merged configuration scope schemas
            spack.schema.merged.properties,
@@ -148,11 +123,6 @@ def deprecate_concretization(instance, props):
                    },
                ]
            },
            "concretization": {
                "type": "string",
                "enum": ["together", "separately"],
                "default": "separately",
            },
        },
    ),
}
@@ -169,31 +139,6 @@ def update(data):
    Returns:
        True if data was changed, False otherwise
    """
    updated = False
    if "include" in data:
        msg = "included configuration files should be updated manually" " [files={0}]"
        warnings.warn(msg.format(", ".join(data["include"])))

    # Spack 0.19 drops support for `spack:concretization` in favor of
    # `spack:concretizer:unify`. Here we provide an upgrade path that changes the former
    # into the latter, or warns when there's an ambiguity. Note that Spack 0.17 is not
    # forward compatible with `spack:concretizer:unify`.
    if "concretization" in data:
        has_unify = "unify" in data.get("concretizer", {})
        to_unify = {"together": True, "separately": False}
        unify = to_unify[data["concretization"]]

        if has_unify and data["concretizer"]["unify"] != unify:
            warnings.warn(
                "The following configuration conflicts: "
                "`spack:concretization:{}` and `spack:concretizer:unify:{}`"
                ". Please update manually.".format(
                    data["concretization"], data["concretizer"]["unify"]
                )
            )
        else:
            data.update({"concretizer": {"unify": unify}})
            data.pop("concretization")
            updated = True

    return updated
    # There are not currently any deprecated attributes in this section
    # that have not been removed
    return False
@@ -8,8 +8,6 @@
.. literalinclude:: _spack_root/lib/spack/spack/schema/modules.py
   :lines: 13-
"""
import warnings

import spack.schema.environment
import spack.schema.projections

@@ -26,9 +24,7 @@
)

#: Matches a valid name for a module set
valid_module_set_name = (
    r"^(?!arch_folder$|lmod$|roots$|enable$|prefix_inspections$|" r"tcl$|use_view$)\w[\w-]*$"
)
valid_module_set_name = r"^(?!prefix_inspections$)\w[\w-]*$"

#: Matches an anonymous spec, i.e. a spec without a root name
anonymous_spec_regex = r"^[\^@%+~]"
@@ -156,15 +152,6 @@
}


def deprecation_msg_default_module_set(instance, props):
    return (
        'Top-level properties "{0}" in module config are ignored as of Spack v0.18. '
        'They should be set on the "default" module set. Run\n\n'
        "\t$ spack config update modules\n\n"
        "to update the file to the new format".format('", "'.join(instance))
    )


# Properties for inclusion into other schemas (requires definitions)
properties = {
    "modules": {
@@ -187,13 +174,6 @@ def deprecation_msg_default_module_set(instance, props):
                "additionalProperties": False,
                "properties": module_config_properties,
            },
            # Deprecated top-level keys (ignored in 0.18 with a warning)
            "^(arch_folder|lmod|roots|enable|tcl|use_view)$": {},
        },
        "deprecatedProperties": {
            "properties": ["arch_folder", "lmod", "roots", "enable", "tcl", "use_view"],
            "message": deprecation_msg_default_module_set,
            "error": False,
        },
    }
}
@@ -249,39 +229,6 @@ def update_keys(data, key_translations):
    return changed


def update_default_module_set(data):
    """Update module configuration to move top-level keys inside default module set.

    This change was introduced in v0.18 (see 99083f1706 or #28659).
    """
    changed = False

    deprecated_top_level_keys = ("arch_folder", "lmod", "roots", "enable", "tcl", "use_view")

    # Don't update when we already have a default module set
    if "default" in data:
        if any(key in data for key in deprecated_top_level_keys):
            warnings.warn(
                'Did not move top-level module properties into "default" '
                'module set, because the "default" module set is already '
                "defined"
            )
        return changed

    default = {}

    # Move deprecated top-level keys under "default" module set.
    for key in deprecated_top_level_keys:
        if key in data:
            default[key] = data.pop(key)

    if default:
        changed = True
        data["default"] = default

    return changed


def update(data):
    """Update the data in place to remove deprecated properties.

@@ -291,10 +238,5 @@ def update(data):
    Returns:
        True if data was changed, False otherwise
    """
    # deprecated top-level module config (everything in default module set)
    changed = update_default_module_set(data)

    # translate blacklist/whitelist to exclude/include
    changed |= update_keys(data, exclude_include_translations)

    return changed
    return update_keys(data, exclude_include_translations)
@@ -622,11 +622,13 @@ def solve(self, setup, specs, reuse=None, output=None, control=None):
        self.control = control or default_clingo_control()
        # set up the problem -- this generates facts and rules
        self.assumptions = []
        timer.start("setup")
        with self.control.backend() as backend:
            self.backend = backend
            setup.setup(self, specs, reuse=reuse)
        timer.phase("setup")
        timer.stop("setup")

        timer.start("load")
        # read in the main ASP program and display logic -- these are
        # handwritten, not generated, so we load them as resources
        parent_dir = os.path.dirname(__file__)
@@ -656,12 +658,13 @@ def visit(node):
        self.control.load(os.path.join(parent_dir, "concretize.lp"))
        self.control.load(os.path.join(parent_dir, "os_compatibility.lp"))
        self.control.load(os.path.join(parent_dir, "display.lp"))
        timer.phase("load")
        timer.stop("load")

        # Grounding is the first step in the solve -- it turns our facts
        # and first-order logic rules into propositional logic.
        timer.start("ground")
        self.control.ground([("base", [])])
        timer.phase("ground")
        timer.stop("ground")

        # With a grounded program, we can run the solve.
        result = Result(specs)
@@ -679,8 +682,10 @@ def on_model(model):

        if clingo_cffi:
            solve_kwargs["on_unsat"] = cores.append

        timer.start("solve")
        solve_result = self.control.solve(**solve_kwargs)
        timer.phase("solve")
        timer.stop("solve")

        # once done, construct the solve result
        result.satisfiable = solve_result.satisfiable
@@ -940,11 +945,13 @@ def package_compiler_defaults(self, pkg):
    def package_requirement_rules(self, pkg):
        pkg_name = pkg.name
        config = spack.config.get("packages")
        requirements = config.get(pkg_name, {}).get("require", []) or config.get("all", {}).get(
            "require", []
        )
        requirements, raise_on_failure = config.get(pkg_name, {}).get("require", []), True
        if not requirements:
            requirements, raise_on_failure = config.get("all", {}).get("require", []), False
        rules = self._rules_from_requirements(pkg_name, requirements)
        self.emit_facts_from_requirement_rules(rules, virtual=False)
        self.emit_facts_from_requirement_rules(
            rules, virtual=False, raise_on_failure=raise_on_failure
        )

    def _rules_from_requirements(self, pkg_name, requirements):
        """Manipulate requirements from packages.yaml, and return a list of tuples
@@ -1071,11 +1078,13 @@ def condition(self, required_spec, imposed_spec=None, name=None, msg=None, node=
        named_cond.name = named_cond.name or name
        assert named_cond.name, "must provide name for anonymous conditions!"

        # Check if we can emit the requirements before updating the condition ID counter.
        # In this way, if a condition can't be emitted but the exception is handled in the caller,
        # we won't emit partial facts.
        requirements = self.spec_clauses(named_cond, body=True, required_from=name)

        condition_id = next(self._condition_id_counter)
        self.gen.fact(fn.condition(condition_id, msg))

        # requirements trigger the condition
        requirements = self.spec_clauses(named_cond, body=True, required_from=name)
        for pred in requirements:
            self.gen.fact(fn.condition_requirement(condition_id, pred.name, *pred.args))
@@ -1171,23 +1180,39 @@ def provider_requirements(self):
            rules = self._rules_from_requirements(virtual_str, requirements)
            self.emit_facts_from_requirement_rules(rules, virtual=True)

-   def emit_facts_from_requirement_rules(self, rules, virtual=False):
-       """Generate facts to enforce requirements from packages.yaml."""
+   def emit_facts_from_requirement_rules(self, rules, virtual=False, raise_on_failure=True):
+       """Generate facts to enforce requirements from packages.yaml.
+
+       Args:
+           rules: rules for which we want facts to be emitted
+           virtual: if True the requirements are on a virtual spec
+           raise_on_failure: if True raise an exception when a requirement condition is invalid
+               for the current spec. If False, just skip that condition
+       """
        for requirement_grp_id, (pkg_name, policy, requirement_grp) in enumerate(rules):
            self.gen.fact(fn.requirement_group(pkg_name, requirement_grp_id))
            self.gen.fact(fn.requirement_policy(pkg_name, requirement_grp_id, policy))
-           for requirement_weight, spec_str in enumerate(requirement_grp):
+           requirement_weight = 0
+           for spec_str in requirement_grp:
                spec = spack.spec.Spec(spec_str)
                if not spec.name:
                    spec.name = pkg_name
                when_spec = spec
                if virtual:
                    when_spec = spack.spec.Spec(pkg_name)
-               member_id = self.condition(
-                   required_spec=when_spec, imposed_spec=spec, name=pkg_name, node=virtual
-               )

+               try:
+                   member_id = self.condition(
+                       required_spec=when_spec, imposed_spec=spec, name=pkg_name, node=virtual
+                   )
+               except Exception:
+                   if raise_on_failure:
+                       raise RuntimeError("cannot emit requirements for the solver")
+                   continue

                self.gen.fact(fn.requirement_group_member(member_id, pkg_name, requirement_grp_id))
                self.gen.fact(fn.requirement_has_weight(member_id, requirement_weight))
+               requirement_weight += 1
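Note that the weight counter now advances only when a group member was actually emitted, so skipped members leave no gaps in the `one_of` preference order. A tiny sketch of that invariant, with `emit` standing in for the condition-emitting call above:

    def weighted_members(group, emit):
        """Yield (member_id, weight); weight advances only for members
        that were actually emitted, keeping preference order gapless."""
        weight = 0
        for member in group:
            try:
                member_id = emit(member)
            except ValueError:
                continue  # mirrors raise_on_failure=False: skip, don't count
            yield member_id, weight
            weight += 1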
    def external_packages(self):
        """Facts on external packages, as read from packages.yaml"""
@@ -2320,6 +2345,12 @@ def build_specs(self, function_tuples):
            if isinstance(spec.version, spack.version.GitVersion):
                spec.version.generate_git_lookup(spec.fullname)

+       # Add synthetic edges for externals that are extensions
+       for root in self._specs.values():
+           for dep in root.traverse():
+               if dep.external:
+                   dep.package.update_external_dependencies()

        return self._specs
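What the synthetic edges buy you is asserted by the concretizer test further down in this diff: an external Python extension still gets a `python` node to query. A sketch of that behavior, using the same mock configuration as the test (the `/fake` prefix is a placeholder):

    import spack.config
    import spack.spec

    spack.config.set("packages", {
        "py-extension1": {
            "buildable": False,
            "externals": [{"spec": "py-extension1@2.0", "prefix": "/fake"}],
        },
    })

    spec = spack.spec.Spec("py-extension2").concretized()
    # Without the synthetic edge the external extension would have no
    # python dependency; with it, both specs share a single python node.
    assert spec["python"] == spec["py-extension1"]["python"]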
@@ -539,12 +539,12 @@ requirement_group_satisfied(Package, X) :-
  requirement_policy(Package, X, "one_of"),
  requirement_group(Package, X).

-requirement_weight(Package, W) :-
+requirement_weight(Package, Group, W) :-
  condition_holds(Y),
  requirement_has_weight(Y, W),
- requirement_group_member(Y, Package, X),
- requirement_policy(Package, X, "one_of"),
- requirement_group_satisfied(Package, X).
+ requirement_group_member(Y, Package, Group),
+ requirement_policy(Package, Group, "one_of"),
+ requirement_group_satisfied(Package, Group).

requirement_group_satisfied(Package, X) :-
  1 { condition_holds(Y) : requirement_group_member(Y, Package, X) } ,
@@ -552,18 +552,18 @@ requirement_group_satisfied(Package, X) :-
  requirement_policy(Package, X, "any_of"),
  requirement_group(Package, X).

-requirement_weight(Package, W) :-
+requirement_weight(Package, Group, W) :-
  W = #min {
-   Z : requirement_has_weight(Y, Z), condition_holds(Y), requirement_group_member(Y, Package, X);
+   Z : requirement_has_weight(Y, Z), condition_holds(Y), requirement_group_member(Y, Package, Group);
    % We need this to avoid an annoying warning during the solve
    % concretize.lp:1151:5-11: info: tuple ignored:
    %   #sup@73
    10000
  },
- requirement_policy(Package, X, "any_of"),
- requirement_group_satisfied(Package, X).
+ requirement_policy(Package, Group, "any_of"),
+ requirement_group_satisfied(Package, Group).

-error(2, "Cannot satisfy requirement group for package '{0}'", Package) :-
+error(2, "Cannot satisfy the requirements in packages.yaml for the '{0}' package. You may want to delete them to proceed with concretization. To check where the requirements are defined run 'spack config blame packages'", Package) :-
  activate_requirement_rules(Package),
  requirement_group(Package, X),
  not requirement_group_satisfied(Package, X).
@@ -1222,8 +1222,8 @@ opt_criterion(75, "requirement weight").
#minimize{ 0@275: #true }.
#minimize{ 0@75: #true }.
#minimize {
-   Weight@75+Priority
-   : requirement_weight(Package, Weight),
+   Weight@75+Priority,Package,Group
+   : requirement_weight(Package, Group, Weight),
    build_priority(Package, Priority)
}.
@@ -2751,6 +2751,11 @@ def _old_concretize(self, tests=False, deprecation_warning=True):
        # If any spec in the DAG is deprecated, throw an error
        Spec.ensure_no_deprecated(self)

+       # Update externals as needed
+       for dep in self.traverse():
+           if dep.external:
+               dep.package.update_external_dependencies()

        # Now that the spec is concrete we should check if
        # there are declared conflicts
        #
@@ -2,7 +2,7 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

+import inspect
import os
import platform
import posixpath
@@ -14,6 +14,7 @@

import spack.build_environment
import spack.config
+import spack.package_base
import spack.spec
import spack.util.spack_yaml as syaml
from spack.build_environment import (
@@ -130,13 +131,13 @@ def test_static_to_shared_library(build_environment):
    "linux": (
        "/bin/mycc -shared"
        " -Wl,--disable-new-dtags"
-       " -Wl,-soname,{2} -Wl,--whole-archive {0}"
+       " -Wl,-soname -Wl,{2} -Wl,--whole-archive {0}"
        " -Wl,--no-whole-archive -o {1}"
    ),
    "darwin": (
        "/bin/mycc -dynamiclib"
        " -Wl,--disable-new-dtags"
-       " -install_name {1} -Wl,-force_load,{0} -o {1}"
+       " -install_name {1} -Wl,-force_load -Wl,{0} -o {1}"
    ),
}

@@ -521,3 +522,27 @@ def test_dirty_disable_module_unload(config, mock_packages, working_env, mock_mo
    assert mock_module_cmd.calls
    assert any(("unload", "cray-libsci") == item[0] for item in mock_module_cmd.calls)
    assert any(("unload", "cray-mpich") == item[0] for item in mock_module_cmd.calls)


+class TestModuleMonkeyPatcher:
+    def test_getting_attributes(self, config, mock_packages):
+        s = spack.spec.Spec("libelf").concretized()
+        module_wrapper = spack.build_environment.ModuleChangePropagator(s.package)
+        assert module_wrapper.Libelf == s.package.module.Libelf
+
+    def test_setting_attributes(self, config, mock_packages):
+        s = spack.spec.Spec("libelf").concretized()
+        module = s.package.module
+        module_wrapper = spack.build_environment.ModuleChangePropagator(s.package)
+
+        # Setting an attribute has an immediate effect
+        module_wrapper.SOME_ATTRIBUTE = 1
+        assert module.SOME_ATTRIBUTE == 1
+
+        # We can also propagate the settings to classes in the MRO
+        module_wrapper.propagate_changes_to_mro()
+        for cls in inspect.getmro(type(s.package)):
+            current_module = cls.module
+            if current_module == spack.package_base:
+                break
+            assert current_module.SOME_ATTRIBUTE == 1
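The tests above pin down the observable behavior of `ModuleChangePropagator`: reads are forwarded to the wrapped package module, writes land immediately and are remembered for later MRO-wide propagation. A minimal illustration of that wrapper shape (not Spack's implementation):

    class ModulePropagator(object):
        """Forward attribute reads to a wrapped module; apply writes
        immediately and record them for propagation elsewhere."""

        def __init__(self, module):
            object.__setattr__(self, "_module", module)
            object.__setattr__(self, "_changes", {})

        def __getattr__(self, name):
            return getattr(self._module, name)

        def __setattr__(self, name, value):
            self._changes[name] = value
            setattr(self._module, name, value)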
@@ -121,3 +121,31 @@ def test_old_style_compatibility_with_super(spec_str, method_name, expected):
    builder = spack.builder.create(s.package)
    value = getattr(builder, method_name)()
    assert value == expected


+@pytest.mark.regression("33928")
+@pytest.mark.usefixtures("builder_test_repository", "config", "working_env")
+@pytest.mark.disable_clean_stage_check
+def test_build_time_tests_are_executed_from_default_builder():
+    s = spack.spec.Spec("old-style-autotools").concretized()
+    builder = spack.builder.create(s.package)
+    builder.pkg.run_tests = True
+    for phase_fn in builder:
+        phase_fn.execute()
+
+    assert os.environ.get("CHECK_CALLED") == "1", "Build time tests not executed"
+    assert os.environ.get("INSTALLCHECK_CALLED") == "1", "Install time tests not executed"
+
+
+@pytest.mark.regression("34518")
+@pytest.mark.usefixtures("builder_test_repository", "config", "working_env")
+def test_monkey_patching_wrapped_pkg():
+    s = spack.spec.Spec("old-style-autotools").concretized()
+    builder = spack.builder.create(s.package)
+    assert s.package.run_tests is False
+    assert builder.pkg.run_tests is False
+    assert builder.pkg_with_dispatcher.run_tests is False
+
+    s.package.run_tests = True
+    assert builder.pkg.run_tests is True
+    assert builder.pkg_with_dispatcher.run_tests is True
@@ -319,6 +319,63 @@ def test_fc_flags(wrapper_environment, wrapper_flags):
    )


+def test_Wl_parsing(wrapper_environment):
+    check_args(
+        cc,
+        ["-Wl,-rpath,/a,--enable-new-dtags,-rpath=/b,--rpath", "-Wl,/c"],
+        [real_cc]
+        + target_args
+        + ["-Wl,--disable-new-dtags", "-Wl,-rpath,/a", "-Wl,-rpath,/b", "-Wl,-rpath,/c"],
+    )
+
+
+def test_Xlinker_parsing(wrapper_environment):
+    # -Xlinker <x> ... -Xlinker <y> may have compiler flags inbetween, like -O3 in this
+    # example. Also check that a trailing -Xlinker (which is a compiler error) is not
+    # dropped or given an empty argument.
+    check_args(
+        cc,
+        [
+            "-Xlinker",
+            "-rpath",
+            "-O3",
+            "-Xlinker",
+            "/a",
+            "-Xlinker",
+            "--flag",
+            "-Xlinker",
+            "-rpath=/b",
+            "-Xlinker",
+        ],
+        [real_cc]
+        + target_args
+        + [
+            "-Wl,--disable-new-dtags",
+            "-Wl,-rpath,/a",
+            "-Wl,-rpath,/b",
+            "-O3",
+            "-Xlinker",
+            "--flag",
+            "-Xlinker",
+        ],
+    )
+
+
+def test_rpath_without_value(wrapper_environment):
+    # cc -Wl,-rpath without a value shouldn't drop -Wl,-rpath;
+    # same for -Xlinker
+    check_args(
+        cc,
+        ["-Wl,-rpath", "-O3", "-g"],
+        [real_cc] + target_args + ["-Wl,--disable-new-dtags", "-O3", "-g", "-Wl,-rpath"],
+    )
+    check_args(
+        cc,
+        ["-Xlinker", "-rpath", "-O3", "-g"],
+        [real_cc] + target_args + ["-Wl,--disable-new-dtags", "-O3", "-g", "-Xlinker", "-rpath"],
+    )
+
+
def test_dep_rpath(wrapper_environment):
    """Ensure RPATHs for root package are added."""
    check_args(cc, test_args, [real_cc] + target_args + common_compile_args)
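These tests cover the improved linker-argument parsing from #35929/#35912. A simplified sketch of the comma-joined `-Wl,...` case only; the real wrapper also pairs flags across separate `-Wl`/`-Xlinker` arguments, which this sketch does not attempt:

    def parse_wl_rpaths(args):
        """Collect rpaths from -Wl,-rpath,<p> and -Wl,-rpath=<p> forms,
        passing everything else through untouched."""
        rpaths, rest = [], []
        for arg in args:
            if not arg.startswith("-Wl,"):
                rest.append(arg)
                continue
            parts, i = arg[4:].split(","), 0
            while i < len(parts):
                if parts[i] == "-rpath" and i + 1 < len(parts):
                    rpaths.append(parts[i + 1]); i += 2
                elif parts[i].startswith("-rpath="):
                    rpaths.append(parts[i].split("=", 1)[1]); i += 1
                else:
                    rest.append("-Wl," + parts[i]); i += 1
        return rpaths, rest

    assert parse_wl_rpaths(["-Wl,-rpath,/a,--enable-new-dtags,-rpath=/b"]) == (
        ["/a", "/b"], ["-Wl,--enable-new-dtags"]
    )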
@@ -1,41 +0,0 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import sys

import pytest

from spack.main import SpackCommand

activate = SpackCommand("activate")
deactivate = SpackCommand("deactivate")
install = SpackCommand("install")
extensions = SpackCommand("extensions")

pytestmark = pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows")


def test_activate(mock_packages, mock_archive, mock_fetch, config, install_mockery):
    install("extension1")
    activate("extension1")
    output = extensions("--show", "activated", "extendee")
    assert "extension1" in output


def test_deactivate(mock_packages, mock_archive, mock_fetch, config, install_mockery):
    install("extension1")
    activate("extension1")
    deactivate("extension1")
    output = extensions("--show", "activated", "extendee")
    assert "extension1" not in output


def test_deactivate_all(mock_packages, mock_archive, mock_fetch, config, install_mockery):
    install("extension1")
    install("extension2")
    activate("extension1")
    activate("extension2")
    deactivate("--all", "extendee")
    output = extensions("--show", "activated", "extendee")
    assert "extension1" not in output
@@ -13,6 +13,7 @@
import spack.database
import spack.environment as ev
import spack.main
+import spack.schema.config
import spack.spec
import spack.store
import spack.util.spack_yaml as syaml
@@ -652,3 +653,26 @@ def test_config_prefer_upstream(

    # Make sure a message about the conflicting hdf5's was given.
    assert "- hdf5" in output


+def test_environment_config_update(tmpdir, mutable_config, monkeypatch):
+    with open(str(tmpdir.join("spack.yaml")), "w") as f:
+        f.write(
+            """\
+spack:
+  config:
+    ccache: true
+"""
+        )
+
+    def update_config(data):
+        data["ccache"] = False
+        return True
+
+    monkeypatch.setattr(spack.schema.config, "update", update_config)
+
+    with ev.Environment(str(tmpdir)):
+        config("update", "-y", "config")
+
+    with ev.Environment(str(tmpdir)) as e:
+        assert not e.raw_yaml["spack"]["config"]["ccache"]
@@ -15,7 +15,6 @@
uninstall = SpackCommand("uninstall")
deprecate = SpackCommand("deprecate")
find = SpackCommand("find")
-activate = SpackCommand("activate")

pytestmark = pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows")

@@ -89,24 +88,6 @@ def test_deprecate_deps(mock_packages, mock_archive, mock_fetch, install_mockery
    assert sorted(deprecated) == sorted(list(old_spec.traverse()))


-def test_deprecate_fails_active_extensions(
-    mock_packages, mock_archive, mock_fetch, install_mockery
-):
-    """Tests that active extensions and their extendees cannot be
-    deprecated."""
-    install("extendee")
-    install("extension1")
-    activate("extension1")
-
-    output = deprecate("-yi", "extendee", "extendee@nonexistent", fail_on_error=False)
-    assert "extension1" in output
-    assert "Deactivate extensions before deprecating" in output
-
-    output = deprecate("-yiD", "extension1", "extension1@notaversion", fail_on_error=False)
-    assert "extendee" in output
-    assert "is an active extension of" in output


def test_uninstall_deprecated(mock_packages, mock_archive, mock_fetch, install_mockery):
    """Tests that we can still uninstall deprecated packages."""
    install("libelf@0.8.13")
@@ -2476,30 +2476,6 @@ def test_env_write_only_non_default_nested(tmpdir):
    assert manifest == contents


-@pytest.mark.parametrize("concretization,unify", [("together", "true"), ("separately", "false")])
-def test_update_concretization_to_concretizer_unify(concretization, unify, tmpdir):
-    spack_yaml = """\
-spack:
-  concretization: {}
-""".format(
-        concretization
-    )
-    tmpdir.join("spack.yaml").write(spack_yaml)
-    # Update the environment
-    env("update", "-y", str(tmpdir))
-    with open(str(tmpdir.join("spack.yaml"))) as f:
-        assert (
-            f.read()
-            == """\
-spack:
-  concretizer:
-    unify: {}
-""".format(
-                unify
-            )
-        )


@pytest.mark.regression("18147")
def test_can_update_attributes_with_override(tmpdir):
    spack_yaml = """
@@ -35,12 +35,11 @@ def python_database(mock_packages, mutable_database):
def test_extensions(mock_packages, python_database, config, capsys):
    ext2 = Spec("py-extension2").concretized()

-   def check_output(ni, na):
+   def check_output(ni):
        with capsys.disabled():
            output = extensions("python")
            packages = extensions("-s", "packages", "python")
            installed = extensions("-s", "installed", "python")
-           activated = extensions("-s", "activated", "python")
        assert "==> python@2.7.11" in output
        assert "==> 2 extensions" in output
        assert "py-extension1" in output
@@ -50,26 +49,13 @@ def check_output(ni, na):
        assert "py-extension1" in packages
        assert "py-extension2" in packages
        assert "installed" not in packages
-       assert "activated" not in packages

        assert ("%s installed" % (ni if ni else "None")) in output
-       assert ("%s activated" % (na if na else "None")) in output
        assert ("%s installed" % (ni if ni else "None")) in installed
-       assert ("%s activated" % (na if na else "None")) in activated

-   check_output(2, 0)
-
-   ext2.package.do_activate()
-   check_output(2, 2)
-
-   ext2.package.do_deactivate(force=True)
-   check_output(2, 1)
-
-   ext2.package.do_activate()
-   check_output(2, 2)
-
+   check_output(2)
    ext2.package.do_uninstall(force=True)
-   check_output(1, 1)
+   check_output(1)


def test_extensions_no_arguments(mock_packages):
@@ -269,9 +269,9 @@ def test_find_format_deps(database, config):
        callpath-1.0
            dyninst-8.2
                libdwarf-20130729
-                   libelf-0.8.13
-           zmpi-1.0
-               fake-1.0
+               libelf-0.8.13
+       zmpi-1.0
+           fake-1.0

"""
    )
@@ -291,9 +291,9 @@ def test_find_format_deps_paths(database, config):
        callpath-1.0 {1}
            dyninst-8.2 {2}
                libdwarf-20130729 {3}
-                   libelf-0.8.13 {4}
-           zmpi-1.0 {5}
-               fake-1.0 {6}
+               libelf-0.8.13 {4}
+       zmpi-1.0 {5}
+           fake-1.0 {6}

""".format(
        *prefixes
@@ -333,20 +333,6 @@ def test_error_conditions(self, cli_args, error_str):
        with pytest.raises(spack.error.SpackError, match=error_str):
            spack.cmd.mirror.mirror_create(args)

-   @pytest.mark.parametrize(
-       "cli_args,expected_end",
-       [
-           ({"directory": None}, os.path.join("source")),
-           ({"directory": os.path.join("foo", "bar")}, os.path.join("foo", "bar")),
-       ],
-   )
-   def test_mirror_path_is_valid(self, cli_args, expected_end, config):
-       args = MockMirrorArgs(**cli_args)
-       local_push_url = spack.cmd.mirror.local_mirror_url_from_user(args.directory)
-       assert local_push_url.startswith("file:")
-       assert os.path.isabs(local_push_url.replace("file://", ""))
-       assert local_push_url.endswith(expected_end)

    @pytest.mark.parametrize(
        "cli_args,not_expected",
        [
@@ -3,12 +3,14 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

+import itertools
import sys

import pytest

import llnl.util.tty as tty

+import spack.cmd.uninstall
import spack.environment
import spack.store
from spack.main import SpackCommand, SpackCommandError
@@ -40,6 +42,39 @@ def test_installed_dependents(mutable_database):
    uninstall("-y", "libelf")


+@pytest.mark.db
+def test_correct_installed_dependents(mutable_database):
+    # Test whether we return the right dependents.
+
+    # Take callpath from the database
+    callpath = spack.store.db.query_local("callpath")[0]
+
+    # Ensure it still has dependents and dependencies
+    dependents = callpath.dependents(deptype="all")
+    dependencies = callpath.dependencies(deptype="all")
+    assert dependents and dependencies
+
+    # Uninstall it, so it's missing.
+    callpath.package.do_uninstall(force=True)
+
+    # Retrieve all dependent hashes
+    inside_dpts, outside_dpts = spack.cmd.uninstall.installed_dependents(dependencies, None)
+    dependent_hashes = [s.dag_hash() for s in itertools.chain(*outside_dpts.values())]
+    set_dependent_hashes = set(dependent_hashes)
+
+    # We dont have an env, so this should be empty.
+    assert not inside_dpts
+
+    # Assert uniqueness
+    assert len(dependent_hashes) == len(set_dependent_hashes)
+
+    # Ensure parents of callpath are listed
+    assert all(s.dag_hash() in set_dependent_hashes for s in dependents)
+
+    # Ensure callpath itself is not, since it was missing.
+    assert callpath.dag_hash() not in set_dependent_hashes


@pytest.mark.db
def test_recursive_uninstall(mutable_database):
    """Test recursive uninstall."""
@@ -12,7 +12,6 @@
from spack.main import SpackCommand
from spack.spec import Spec

-activate = SpackCommand("activate")
extensions = SpackCommand("extensions")
install = SpackCommand("install")
view = SpackCommand("view")
@@ -135,46 +134,9 @@ def test_view_extension(tmpdir, mock_packages, mock_archive, mock_fetch, config,
    assert "extension1@1.0" in all_installed
    assert "extension1@2.0" in all_installed
    assert "extension2@1.0" in all_installed
-   global_activated = extensions("--show", "activated", "extendee")
-   assert "extension1@1.0" not in global_activated
-   assert "extension1@2.0" not in global_activated
-   assert "extension2@1.0" not in global_activated
-   view_activated = extensions("--show", "activated", "-v", viewpath, "extendee")
-   assert "extension1@1.0" in view_activated
-   assert "extension1@2.0" not in view_activated
-   assert "extension2@1.0" not in view_activated
    assert os.path.exists(os.path.join(viewpath, "bin", "extension1"))


def test_view_extension_projection(
    tmpdir, mock_packages, mock_archive, mock_fetch, config, install_mockery
):
    install("extendee@1.0")
    install("extension1@1.0")
    install("extension1@2.0")
    install("extension2@1.0")

    viewpath = str(tmpdir.mkdir("view"))
    view_projection = {"all": "{name}-{version}"}
    projection_file = create_projection_file(tmpdir, view_projection)
    view("symlink", viewpath, "--projection-file={0}".format(projection_file), "extension1@1.0")

    all_installed = extensions("--show", "installed", "extendee")
    assert "extension1@1.0" in all_installed
    assert "extension1@2.0" in all_installed
    assert "extension2@1.0" in all_installed
-   global_activated = extensions("--show", "activated", "extendee")
-   assert "extension1@1.0" not in global_activated
-   assert "extension1@2.0" not in global_activated
-   assert "extension2@1.0" not in global_activated
-   view_activated = extensions("--show", "activated", "-v", viewpath, "extendee")
-   assert "extension1@1.0" in view_activated
-   assert "extension1@2.0" not in view_activated
-   assert "extension2@1.0" not in view_activated

    assert os.path.exists(os.path.join(viewpath, "extendee-1.0", "bin", "extension1"))


def test_view_extension_remove(
    tmpdir, mock_packages, mock_archive, mock_fetch, config, install_mockery
):
@@ -185,10 +147,6 @@ def test_view_extension_remove(
    view("remove", viewpath, "extension1@1.0")
    all_installed = extensions("--show", "installed", "extendee")
    assert "extension1@1.0" in all_installed
-   global_activated = extensions("--show", "activated", "extendee")
-   assert "extension1@1.0" not in global_activated
-   view_activated = extensions("--show", "activated", "-v", viewpath, "extendee")
-   assert "extension1@1.0" not in view_activated
    assert not os.path.exists(os.path.join(viewpath, "bin", "extension1"))


@@ -217,46 +175,6 @@ def test_view_extension_conflict_ignored(
    assert fin.read() == "1.0"


-def test_view_extension_global_activation(
-    tmpdir, mock_packages, mock_archive, mock_fetch, config, install_mockery
-):
-    install("extendee")
-    install("extension1@1.0")
-    install("extension1@2.0")
-    install("extension2@1.0")
-    viewpath = str(tmpdir.mkdir("view"))
-    view("symlink", viewpath, "extension1@1.0")
-    activate("extension1@2.0")
-    activate("extension2@1.0")
-    all_installed = extensions("--show", "installed", "extendee")
-    assert "extension1@1.0" in all_installed
-    assert "extension1@2.0" in all_installed
-    assert "extension2@1.0" in all_installed
-    global_activated = extensions("--show", "activated", "extendee")
-    assert "extension1@1.0" not in global_activated
-    assert "extension1@2.0" in global_activated
-    assert "extension2@1.0" in global_activated
-    view_activated = extensions("--show", "activated", "-v", viewpath, "extendee")
-    assert "extension1@1.0" in view_activated
-    assert "extension1@2.0" not in view_activated
-    assert "extension2@1.0" not in view_activated
-    assert os.path.exists(os.path.join(viewpath, "bin", "extension1"))
-    assert not os.path.exists(os.path.join(viewpath, "bin", "extension2"))
-
-
-def test_view_extendee_with_global_activations(
-    tmpdir, mock_packages, mock_archive, mock_fetch, config, install_mockery
-):
-    install("extendee")
-    install("extension1@1.0")
-    install("extension1@2.0")
-    install("extension2@1.0")
-    viewpath = str(tmpdir.mkdir("view"))
-    activate("extension1@2.0")
-    output = view("symlink", viewpath, "extension1@1.0")
-    assert "Error: Globally activated extensions cannot be used" in output


def test_view_fails_with_missing_projections_file(tmpdir):
    viewpath = str(tmpdir.mkdir("view"))
    projection_file = os.path.join(str(tmpdir), "nonexistent")
@@ -391,7 +391,7 @@ def test_apple_clang_flags():
    unsupported_flag_test("cxx17_flag", "apple-clang@6.0.0")
    supported_flag_test("cxx17_flag", "-std=c++1z", "apple-clang@6.1.0")
    supported_flag_test("c99_flag", "-std=c99", "apple-clang@6.1.0")
-   unsupported_flag_test("c11_flag", "apple-clang@6.0.0")
+   unsupported_flag_test("c11_flag", "apple-clang@3.0.0")
    supported_flag_test("c11_flag", "-std=c11", "apple-clang@6.1.0")
    supported_flag_test("cc_pic_flag", "-fPIC", "apple-clang@2.0.0")
    supported_flag_test("cxx_pic_flag", "-fPIC", "apple-clang@2.0.0")
@@ -411,7 +411,7 @@ def test_clang_flags():
    supported_flag_test("cxx17_flag", "-std=c++1z", "clang@3.5")
    supported_flag_test("cxx17_flag", "-std=c++17", "clang@5.0")
    supported_flag_test("c99_flag", "-std=c99", "clang@3.3")
-   unsupported_flag_test("c11_flag", "clang@6.0.0")
+   unsupported_flag_test("c11_flag", "clang@2.0")
    supported_flag_test("c11_flag", "-std=c11", "clang@6.1.0")
    supported_flag_test("cc_pic_flag", "-fPIC", "clang@3.3")
    supported_flag_test("cxx_pic_flag", "-fPIC", "clang@3.3")
@@ -58,6 +58,7 @@ def test_arm_version_detection(version_str, expected_version):
    [
        ("Cray C : Version 8.4.6 Mon Apr 15, 2019 12:13:39\n", "8.4.6"),
        ("Cray C++ : Version 8.4.6 Mon Apr 15, 2019 12:13:45\n", "8.4.6"),
+       ("Cray clang Version 8.4.6 Mon Apr 15, 2019 12:13:45\n", "8.4.6"),
        ("Cray Fortran : Version 8.4.6 Mon Apr 15, 2019 12:13:55\n", "8.4.6"),
    ],
)
@@ -487,3 +488,27 @@ def _module(cmd, *args):
def test_aocc_version_detection(version_str, expected_version):
    version = spack.compilers.aocc.Aocc.extract_version_from_output(version_str)
    assert version == expected_version


+@pytest.mark.regression("33901")
+@pytest.mark.parametrize(
+    "version_str",
+    [
+        (
+            "Apple clang version 11.0.0 (clang-1100.0.33.8)\n"
+            "Target: x86_64-apple-darwin18.7.0\n"
+            "Thread model: posix\n"
+            "InstalledDir: "
+            "/Applications/Xcode.app/Contents/Developer/Toolchains/"
+            "XcodeDefault.xctoolchain/usr/bin\n"
+        ),
+        (
+            "Apple LLVM version 7.0.2 (clang-700.1.81)\n"
+            "Target: x86_64-apple-darwin15.2.0\n"
+            "Thread model: posix\n"
+        ),
+    ],
+)
+def test_apple_clang_not_detected_as_cce(version_str):
+    version = spack.compilers.cce.Cce.extract_version_from_output(version_str)
+    assert version == "unknown"
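The regression test above pins the fix from #35974: an Apple clang banner must not be mistaken for Cray's cce. The actual pattern lives in Spack's cce compiler class; as an illustration only, a hypothetical version-extraction regex anchored on "Cray" has the same property:

    import re

    def extract_cray_version(output):
        """Illustrative, not Spack's regex: anchoring on 'Cray' lets
        Apple clang banners fall through to 'unknown'."""
        match = re.search(r"Cray (?:C|C\+\+|clang|Fortran)[^V]*Version\s+(\S+)", output)
        return match.group(1) if match else "unknown"

    assert extract_cray_version("Cray C : Version 8.4.6 Mon Apr 15, 2019") == "8.4.6"
    assert extract_cray_version("Apple clang version 11.0.0 (clang-1100.0.33.8)") == "unknown"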
@@ -1945,3 +1945,18 @@ def test_require_targets_are_allowed(self, mutable_database):

        for s in spec.traverse():
            assert s.satisfies("target=%s" % spack.platforms.test.Test.front_end)

+   def test_external_python_extensions_have_dependency(self):
+       """Test that python extensions have access to a python dependency"""
+       external_conf = {
+           "py-extension1": {
+               "buildable": False,
+               "externals": [{"spec": "py-extension1@2.0", "prefix": "/fake"}],
+           }
+       }
+       spack.config.set("packages", external_conf)
+
+       spec = Spec("py-extension2").concretized()
+
+       assert "python" in spec["py-extension1"]
+       assert spec["python"] == spec["py-extension1"]["python"]
@@ -413,3 +413,18 @@ def test_incompatible_virtual_requirements_raise(concretize_scope, mock_packages
    spec = Spec("callpath ^zmpi")
    with pytest.raises(UnsatisfiableSpecError):
        spec.concretize()


+def test_non_existing_variants_under_all(concretize_scope, mock_packages):
+    if spack.config.get("config:concretizer") == "original":
+        pytest.skip("Original concretizer does not support configuration" " requirements")
+    conf_str = """\
+packages:
+  all:
+    require:
+    - any_of: ["~foo", "@:"]
+"""
+    update_packages_config(conf_str)
+
+    spec = Spec("callpath ^zmpi").concretized()
+    assert "~foo" not in spec
@@ -28,7 +28,7 @@ def test_set_install_hash_length(hash_length, mutable_config, tmpdir):
    assert len(hash_str) == hash_length


-@pytest.mark.use_fixtures("mock_packages")
+@pytest.mark.usefixtures("mock_packages")
def test_set_install_hash_length_upper_case(mutable_config, tmpdir):
    mutable_config.set("config:install_hash_length", 5)
    mutable_config.set(
@@ -252,12 +252,8 @@ def test_install_times(install_mockery, mock_fetch, mutable_mock_repo):

    # The order should be maintained
    phases = [x["name"] for x in times["phases"]]
-   total = sum([x["seconds"] for x in times["phases"]])
-   for name in ["one", "two", "three", "install"]:
-       assert name in phases
-
-   # Give a generous difference threshold
-   assert abs(total - times["total"]["seconds"]) < 5
+   assert phases == ["one", "two", "three", "install"]
+   assert all(isinstance(x["seconds"], float) for x in times["phases"])


def test_flatten_deps(install_mockery, mock_fetch, mutable_mock_repo):
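The tightened assertions reflect that the timing report now preserves phase order exactly. A sketch of consuming that report; the exact filename and location under the install prefix are an assumption here, only the JSON shape is taken from the test:

    import json

    with open("install_times.json") as f:  # hypothetical path
        times = json.load(f)

    for phase in times["phases"]:  # order is preserved
        print(phase["name"], phase["seconds"])
    print("total", times["total"]["seconds"])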
@@ -622,7 +622,7 @@ def test_combine_phase_logs(tmpdir):

    # This is the output log we will combine them into
    combined_log = os.path.join(str(tmpdir), "combined-out.txt")
-   spack.installer.combine_phase_logs(phase_log_files, combined_log)
+   inst.combine_phase_logs(phase_log_files, combined_log)
    with open(combined_log, "r") as log_file:
        out = log_file.read()

@@ -631,6 +631,22 @@ def test_combine_phase_logs(tmpdir):
    assert "Output from %s\n" % log_file in out


+def test_combine_phase_logs_does_not_care_about_encoding(tmpdir):
+    # this is invalid utf-8 at a minimum
+    data = b"\x00\xF4\xBF\x00\xBF\xBF"
+    input = [str(tmpdir.join("a")), str(tmpdir.join("b"))]
+    output = str(tmpdir.join("c"))
+
+    for path in input:
+        with open(path, "wb") as f:
+            f.write(data)
+
+    inst.combine_phase_logs(input, output)
+
+    with open(output, "rb") as f:
+        assert f.read() == data * 2


def test_check_deps_status_install_failure(install_mockery, monkeypatch):
    const_arg = installer_args(["a"], {})
    installer = create_installer(const_arg)
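The encoding test works because log combination is byte-for-byte: opening in binary mode means invalid UTF-8 in build output cannot raise decode errors. A minimal sketch of that approach (not Spack's implementation):

    def combine_logs(log_paths, combined_path):
        """Concatenate logs as raw bytes; encoding never enters the picture."""
        with open(combined_path, "wb") as out:
            for path in log_paths:
                with open(path, "rb") as f:
                    out.write(f.read())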
@@ -903,3 +903,13 @@ def test_remove_linked_tree_doesnt_change_file_permission(tmpdir, initial_mode):
    fs.remove_linked_tree(str(file_instead_of_dir))
    final_stat = os.stat(str(file_instead_of_dir))
    assert final_stat == initial_stat


+def test_filesummary(tmpdir):
+    p = str(tmpdir.join("xyz"))
+    with open(p, "wb") as f:
+        f.write(b"abcdefghijklmnopqrstuvwxyz")
+
+    assert fs.filesummary(p, print_bytes=8) == (26, b"abcdefgh...stuvwxyz")
+    assert fs.filesummary(p, print_bytes=13) == (26, b"abcdefghijklmnopqrstuvwxyz")
+    assert fs.filesummary(p, print_bytes=100) == (26, b"abcdefghijklmnopqrstuvwxyz")
@@ -84,12 +84,6 @@ def test_inheritance_of_patches(self):
        # Will error if inheritor package cannot find inherited patch files
        s.concretize()

-   def test_dependency_extensions(self):
-       s = Spec("extension2")
-       s.concretize()
-       deps = set(x.name for x in s.package.dependency_activations())
-       assert deps == set(["extension1"])

    def test_import_class_from_package(self):
        from spack.pkg.builtin.mock.mpich import Mpich  # noqa: F401
@@ -1,402 +0,0 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

"""This includes tests for customized activation logic for specific packages
(e.g. python and perl).
"""

import os
import sys

import pytest

from llnl.util.link_tree import MergeConflictError

import spack.package_base
import spack.spec
from spack.directory_layout import DirectoryLayout
from spack.filesystem_view import YamlFilesystemView

pytestmark = pytest.mark.skipif(
    sys.platform == "win32",
    reason="Python activation not currently supported on Windows",
)


def create_ext_pkg(name, prefix, extendee_spec, monkeypatch):
    ext_spec = spack.spec.Spec(name)
    ext_spec._concrete = True

    ext_spec.package.spec.prefix = prefix
    ext_pkg = ext_spec.package

    # temporarily override extendee_spec property on the package
    monkeypatch.setattr(ext_pkg.__class__, "extendee_spec", extendee_spec)

    return ext_pkg


def create_python_ext_pkg(name, prefix, python_spec, monkeypatch, namespace=None):
    ext_pkg = create_ext_pkg(name, prefix, python_spec, monkeypatch)
    ext_pkg.py_namespace = namespace
    return ext_pkg


def create_dir_structure(tmpdir, dir_structure):
    for fname, children in dir_structure.items():
        tmpdir.ensure(fname, dir=fname.endswith("/"))
        if children:
            create_dir_structure(tmpdir.join(fname), children)


@pytest.fixture()
def builtin_and_mock_packages():
    # These tests use mock_repo packages to test functionality of builtin
    # packages for python and perl. To test this we put the mock repo at lower
    # precedence than the builtin repo, so we test builtin.perl against
    # builtin.mock.perl-extension.
    repo_dirs = [spack.paths.packages_path, spack.paths.mock_packages_path]
    with spack.repo.use_repositories(*repo_dirs):
        yield


@pytest.fixture()
def python_and_extension_dirs(tmpdir, builtin_and_mock_packages):
    python_dirs = {"bin/": {"python": None}, "lib/": {"python2.7/": {"site-packages/": None}}}

    python_name = "python"
    python_prefix = tmpdir.join(python_name)
    create_dir_structure(python_prefix, python_dirs)

    python_spec = spack.spec.Spec("python@2.7.12")
    python_spec._concrete = True
    python_spec.package.spec.prefix = str(python_prefix)

    ext_dirs = {
        "bin/": {"py-ext-tool": None},
        "lib/": {"python2.7/": {"site-packages/": {"py-extension1/": {"sample.py": None}}}},
    }

    ext_name = "py-extension1"
    ext_prefix = tmpdir.join(ext_name)
    create_dir_structure(ext_prefix, ext_dirs)

    easy_install_location = "lib/python2.7/site-packages/easy-install.pth"
    with open(str(ext_prefix.join(easy_install_location)), "w") as f:
        f.write(
            """path/to/ext1.egg
path/to/setuptools.egg"""
        )

    return str(python_prefix), str(ext_prefix)


@pytest.fixture()
def namespace_extensions(tmpdir, builtin_and_mock_packages):
    ext1_dirs = {
        "bin/": {"py-ext-tool1": None},
        "lib/": {
            "python2.7/": {
                "site-packages/": {
                    "examplenamespace/": {"__init__.py": None, "ext1_sample.py": None}
                }
            }
        },
    }

    ext2_dirs = {
        "bin/": {"py-ext-tool2": None},
        "lib/": {
            "python2.7/": {
                "site-packages/": {
                    "examplenamespace/": {"__init__.py": None, "ext2_sample.py": None}
                }
            }
        },
    }

    ext1_name = "py-extension1"
    ext1_prefix = tmpdir.join(ext1_name)
    create_dir_structure(ext1_prefix, ext1_dirs)

    ext2_name = "py-extension2"
    ext2_prefix = tmpdir.join(ext2_name)
    create_dir_structure(ext2_prefix, ext2_dirs)

    return str(ext1_prefix), str(ext2_prefix), "examplenamespace"


def test_python_activation_with_files(
    tmpdir, python_and_extension_dirs, monkeypatch, builtin_and_mock_packages
):
    python_prefix, ext_prefix = python_and_extension_dirs

    python_spec = spack.spec.Spec("python@2.7.12")
    python_spec._concrete = True
    python_spec.package.spec.prefix = python_prefix

    ext_pkg = create_python_ext_pkg("py-extension1", ext_prefix, python_spec, monkeypatch)

    python_pkg = python_spec.package
    python_pkg.activate(ext_pkg, python_pkg.view())

    assert os.path.exists(os.path.join(python_prefix, "bin/py-ext-tool"))

    easy_install_location = "lib/python2.7/site-packages/easy-install.pth"
    with open(os.path.join(python_prefix, easy_install_location), "r") as f:
        easy_install_contents = f.read()

    assert "ext1.egg" in easy_install_contents
    assert "setuptools.egg" not in easy_install_contents


def test_python_activation_view(
    tmpdir, python_and_extension_dirs, builtin_and_mock_packages, monkeypatch
):
    python_prefix, ext_prefix = python_and_extension_dirs

    python_spec = spack.spec.Spec("python@2.7.12")
    python_spec._concrete = True
    python_spec.package.spec.prefix = python_prefix

    ext_pkg = create_python_ext_pkg("py-extension1", ext_prefix, python_spec, monkeypatch)

    view_dir = str(tmpdir.join("view"))
    layout = DirectoryLayout(view_dir)
    view = YamlFilesystemView(view_dir, layout)

    python_pkg = python_spec.package
    python_pkg.activate(ext_pkg, view)

    assert not os.path.exists(os.path.join(python_prefix, "bin/py-ext-tool"))

    assert os.path.exists(os.path.join(view_dir, "bin/py-ext-tool"))


def test_python_ignore_namespace_init_conflict(
    tmpdir, namespace_extensions, builtin_and_mock_packages, monkeypatch
):
    """Test the view update logic in PythonPackage ignores conflicting
    instances of __init__ for packages which are in the same namespace.
    """
    ext1_prefix, ext2_prefix, py_namespace = namespace_extensions

    python_spec = spack.spec.Spec("python@2.7.12")
    python_spec._concrete = True

    ext1_pkg = create_python_ext_pkg(
        "py-extension1", ext1_prefix, python_spec, monkeypatch, py_namespace
    )
    ext2_pkg = create_python_ext_pkg(
        "py-extension2", ext2_prefix, python_spec, monkeypatch, py_namespace
    )

    view_dir = str(tmpdir.join("view"))
    layout = DirectoryLayout(view_dir)
    view = YamlFilesystemView(view_dir, layout)

    python_pkg = python_spec.package
    python_pkg.activate(ext1_pkg, view)
    # Normally handled by Package.do_activate, but here we activate directly
    view.extensions_layout.add_extension(python_spec, ext1_pkg.spec)
    python_pkg.activate(ext2_pkg, view)

    f1 = "lib/python2.7/site-packages/examplenamespace/ext1_sample.py"
    f2 = "lib/python2.7/site-packages/examplenamespace/ext2_sample.py"
    init_file = "lib/python2.7/site-packages/examplenamespace/__init__.py"

    assert os.path.exists(os.path.join(view_dir, f1))
    assert os.path.exists(os.path.join(view_dir, f2))
    assert os.path.exists(os.path.join(view_dir, init_file))


def test_python_keep_namespace_init(
    tmpdir, namespace_extensions, builtin_and_mock_packages, monkeypatch
):
    """Test the view update logic in PythonPackage keeps the namespace
    __init__ file as long as one package in the namespace still
    exists.
    """
    ext1_prefix, ext2_prefix, py_namespace = namespace_extensions

    python_spec = spack.spec.Spec("python@2.7.12")
    python_spec._concrete = True

    ext1_pkg = create_python_ext_pkg(
        "py-extension1", ext1_prefix, python_spec, monkeypatch, py_namespace
    )
    ext2_pkg = create_python_ext_pkg(
        "py-extension2", ext2_prefix, python_spec, monkeypatch, py_namespace
    )

    view_dir = str(tmpdir.join("view"))
    layout = DirectoryLayout(view_dir)
    view = YamlFilesystemView(view_dir, layout)

    python_pkg = python_spec.package
    python_pkg.activate(ext1_pkg, view)
    # Normally handled by Package.do_activate, but here we activate directly
    view.extensions_layout.add_extension(python_spec, ext1_pkg.spec)
    python_pkg.activate(ext2_pkg, view)
    view.extensions_layout.add_extension(python_spec, ext2_pkg.spec)

    f1 = "lib/python2.7/site-packages/examplenamespace/ext1_sample.py"
    init_file = "lib/python2.7/site-packages/examplenamespace/__init__.py"

    python_pkg.deactivate(ext1_pkg, view)
    view.extensions_layout.remove_extension(python_spec, ext1_pkg.spec)

    assert not os.path.exists(os.path.join(view_dir, f1))
    assert os.path.exists(os.path.join(view_dir, init_file))

    python_pkg.deactivate(ext2_pkg, view)
    view.extensions_layout.remove_extension(python_spec, ext2_pkg.spec)

    assert not os.path.exists(os.path.join(view_dir, init_file))


def test_python_namespace_conflict(
    tmpdir, namespace_extensions, monkeypatch, builtin_and_mock_packages
):
    """Test the view update logic in PythonPackage reports an error when two
    python extensions with different namespaces have a conflicting __init__
    file.
    """
    ext1_prefix, ext2_prefix, py_namespace = namespace_extensions
    other_namespace = py_namespace + "other"

    python_spec = spack.spec.Spec("python@2.7.12")
    python_spec._concrete = True

    ext1_pkg = create_python_ext_pkg(
        "py-extension1", ext1_prefix, python_spec, monkeypatch, py_namespace
    )
    ext2_pkg = create_python_ext_pkg(
        "py-extension2", ext2_prefix, python_spec, monkeypatch, other_namespace
    )

    view_dir = str(tmpdir.join("view"))
    layout = DirectoryLayout(view_dir)
    view = YamlFilesystemView(view_dir, layout)

    python_pkg = python_spec.package
    python_pkg.activate(ext1_pkg, view)
    view.extensions_layout.add_extension(python_spec, ext1_pkg.spec)
    with pytest.raises(MergeConflictError):
        python_pkg.activate(ext2_pkg, view)


@pytest.fixture()
def perl_and_extension_dirs(tmpdir, builtin_and_mock_packages):
    perl_dirs = {
        "bin/": {"perl": None},
        "lib/": {"site_perl/": {"5.24.1/": {"x86_64-linux/": None}}},
    }

    perl_name = "perl"
    perl_prefix = tmpdir.join(perl_name)
    create_dir_structure(perl_prefix, perl_dirs)

    perl_spec = spack.spec.Spec("perl@5.24.1")
    perl_spec._concrete = True
    perl_spec.package.spec.prefix = str(perl_prefix)

    ext_dirs = {
        "bin/": {"perl-ext-tool": None},
        "lib/": {"site_perl/": {"5.24.1/": {"x86_64-linux/": {"TestExt/": {}}}}},
    }

    ext_name = "perl-extension"
    ext_prefix = tmpdir.join(ext_name)
    create_dir_structure(ext_prefix, ext_dirs)

    return str(perl_prefix), str(ext_prefix)


def test_perl_activation(tmpdir, builtin_and_mock_packages, monkeypatch):
    # Note the lib directory is based partly on the perl version
    perl_spec = spack.spec.Spec("perl@5.24.1")
    perl_spec._concrete = True

    perl_name = "perl"
    tmpdir.ensure(perl_name, dir=True)

    perl_prefix = str(tmpdir.join(perl_name))
    # Set the prefix on the package's spec reference because that is a copy of
    # the original spec
    perl_spec.package.spec.prefix = perl_prefix

    ext_name = "perl-extension"
    tmpdir.ensure(ext_name, dir=True)
    ext_pkg = create_ext_pkg(ext_name, str(tmpdir.join(ext_name)), perl_spec, monkeypatch)

    perl_pkg = perl_spec.package
    perl_pkg.activate(ext_pkg, perl_pkg.view())


def test_perl_activation_with_files(
    tmpdir, perl_and_extension_dirs, monkeypatch, builtin_and_mock_packages
):
    perl_prefix, ext_prefix = perl_and_extension_dirs

    perl_spec = spack.spec.Spec("perl@5.24.1")
    perl_spec._concrete = True
    perl_spec.package.spec.prefix = perl_prefix

    ext_pkg = create_ext_pkg("perl-extension", ext_prefix, perl_spec, monkeypatch)

    perl_pkg = perl_spec.package
    perl_pkg.activate(ext_pkg, perl_pkg.view())

    assert os.path.exists(os.path.join(perl_prefix, "bin/perl-ext-tool"))


def test_perl_activation_view(
    tmpdir, perl_and_extension_dirs, monkeypatch, builtin_and_mock_packages
):
    perl_prefix, ext_prefix = perl_and_extension_dirs

    perl_spec = spack.spec.Spec("perl@5.24.1")
    perl_spec._concrete = True
    perl_spec.package.spec.prefix = perl_prefix

    ext_pkg = create_ext_pkg("perl-extension", ext_prefix, perl_spec, monkeypatch)

    view_dir = str(tmpdir.join("view"))
    layout = DirectoryLayout(view_dir)
    view = YamlFilesystemView(view_dir, layout)

    perl_pkg = perl_spec.package
    perl_pkg.activate(ext_pkg, view)

    assert not os.path.exists(os.path.join(perl_prefix, "bin/perl-ext-tool"))

    assert os.path.exists(os.path.join(view_dir, "bin/perl-ext-tool"))


def test_is_activated_upstream_extendee(tmpdir, builtin_and_mock_packages, monkeypatch):
    """When an extendee is installed upstream, make sure that the extension
    spec is never considered to be globally activated for it.
    """
    extendee_spec = spack.spec.Spec("python")
    extendee_spec._concrete = True

    python_name = "python"
    tmpdir.ensure(python_name, dir=True)

    python_prefix = str(tmpdir.join(python_name))
    # Set the prefix on the package's spec reference because that is a copy of
    # the original spec
    extendee_spec.package.spec.prefix = python_prefix
    monkeypatch.setattr(extendee_spec.__class__, "installed_upstream", True)

    ext_name = "py-extension1"
    tmpdir.ensure(ext_name, dir=True)
    ext_pkg = create_ext_pkg(ext_name, str(tmpdir.join(ext_name)), extendee_spec, monkeypatch)

    # The view should not be checked at all if the extendee is installed
    # upstream, so use 'None' here
    mock_view = None
    assert not ext_pkg.is_activated(mock_view)
@@ -32,6 +32,27 @@ def test_write_and_read_cache_file(file_cache):
    assert text == "foobar\n"


+@pytest.mark.skipif(sys.platform == "win32", reason="Locks not supported on Windows")
+def test_failed_write_and_read_cache_file(file_cache):
+    """Test failing to write then attempting to read a cached file."""
+    with pytest.raises(RuntimeError, match=r"^foobar$"):
+        with file_cache.write_transaction("test.yaml") as (old, new):
+            assert old is None
+            assert new is not None
+            raise RuntimeError("foobar")
+
+    # Cache dir should have exactly one (lock) file
+    assert os.listdir(file_cache.root) == [".test.yaml.lock"]
+
+    # File does not exist
+    assert not file_cache.init_entry("test.yaml")
+
+    # Attempting to read will cause a file not found error
+    with pytest.raises((IOError, OSError), match=r"test\.yaml"):
+        with file_cache.read_transaction("test.yaml"):
+            pass


def test_write_and_remove_cache_file(file_cache):
    """Test two write transactions on a cached file. Then try to remove an
    entry from it.
150 lib/spack/spack/test/util/timer.py (new file)
@@ -0,0 +1,150 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

import json

from six import StringIO

import spack.util.timer as timer


class Tick(object):
    """Timer that increments the seconds passed by 1
    everytime tick is called."""

    def __init__(self):
        self.time = 0.0

    def tick(self):
        self.time += 1
        return self.time


def test_timer():
    # 0
    t = timer.Timer(now=Tick().tick)

    # 1 (restart)
    t.start()

    # 2
    t.start("wrapped")

    # 3
    t.start("first")

    # 4
    t.stop("first")
    assert t.duration("first") == 1.0

    # 5
    t.start("second")

    # 6
    t.stop("second")
    assert t.duration("second") == 1.0

    # 7-8
    with t.measure("third"):
        pass
    assert t.duration("third") == 1.0

    # 9
    t.stop("wrapped")
    assert t.duration("wrapped") == 7.0

    # tick 10-13
    t.start("not-stopped")
    assert t.duration("not-stopped") == 1.0
    assert t.duration("not-stopped") == 2.0
    assert t.duration("not-stopped") == 3.0

    # 14
    assert t.duration() == 13.0

    # 15
    t.stop()
    assert t.duration() == 14.0


def test_timer_stop_stops_all():
    # Ensure that timer.stop() effectively stops all timers.

    # 0
    t = timer.Timer(now=Tick().tick)

    # 1
    t.start("first")

    # 2
    t.start("second")

    # 3
    t.start("third")

    # 4
    t.stop()

    assert t.duration("first") == 3.0
    assert t.duration("second") == 2.0
    assert t.duration("third") == 1.0
    assert t.duration() == 4.0


def test_stopping_unstarted_timer_is_no_error():
    t = timer.Timer(now=Tick().tick)
    assert t.duration("hello") == 0.0
    t.stop("hello")
    assert t.duration("hello") == 0.0


def test_timer_write():
    text_buffer = StringIO()
    json_buffer = StringIO()

    # 0
    t = timer.Timer(now=Tick().tick)

    # 1
    t.start("timer")

    # 2
    t.stop("timer")

    # 3
    t.stop()

    t.write_tty(text_buffer)
    t.write_json(json_buffer)

    output = text_buffer.getvalue().splitlines()
    assert "timer" in output[0]
    assert "1.000s" in output[0]
    assert "total" in output[1]
    assert "3.000s" in output[1]

    deserialized = json.loads(json_buffer.getvalue())
    assert deserialized == {
        "phases": [{"name": "timer", "seconds": 1.0}],
        "total": {"seconds": 3.0},
    }


def test_null_timer():
    # Just ensure that the interface of the noop-timer doesn't break at some point
    buffer = StringIO()
    t = timer.NullTimer()
    t.start()
    t.start("first")
    t.stop("first")
    with t.measure("second"):
        pass
    t.stop()
    assert t.duration("first") == 0.0
    assert t.duration() == 0.0
    assert not t.phases
    t.write_json(buffer)
    t.write_tty(buffer)
    assert not buffer.getvalue()
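The `Tick` fixture above is the injectable-clock pattern that makes these tests deterministic: any zero-argument callable can stand in for `time.time`. A compact variant of the same idea:

    import spack.util.timer as timer

    class FakeClock(object):
        """Deterministic clock: every call advances time by one second."""
        def __init__(self):
            self.now = 0.0
        def __call__(self):
            self.now += 1.0
            return self.now

    t = timer.Timer(now=FakeClock())  # call 1: timer creation
    t.start("phase")                  # call 2
    t.stop("phase")                   # call 3
    assert t.duration("phase") == 1.0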
@@ -9,30 +9,10 @@
import pytest

from spack.directory_layout import DirectoryLayout
-from spack.filesystem_view import YamlFilesystemView
+from spack.filesystem_view import SimpleFilesystemView, YamlFilesystemView
from spack.spec import Spec


-@pytest.mark.skipif(sys.platform == "win32", reason="Not supported on Windows (yet)")
-def test_global_activation(install_mockery, mock_fetch):
-    """This test ensures that views which are maintained inside of an extendee
-    package's prefix are maintained as expected and are compatible with
-    global activations prior to #7152.
-    """
-    spec = Spec("extension1").concretized()
-    pkg = spec.package
-    pkg.do_install()
-    pkg.do_activate()
-
-    extendee_spec = spec["extendee"]
-    extendee_pkg = spec["extendee"].package
-    view = extendee_pkg.view()
-    assert pkg.is_activated(view)
-
-    expected_path = os.path.join(extendee_spec.prefix, ".spack", "extensions.yaml")
-    assert view.extensions_layout.extension_file_path(extendee_spec) == expected_path


@pytest.mark.skipif(sys.platform == "win32", reason="Not supported on Windows (yet)")
def test_remove_extensions_ordered(install_mockery, mock_fetch, tmpdir):
    view_dir = str(tmpdir.join("view"))
@@ -44,3 +24,46 @@ def test_remove_extensions_ordered(install_mockery, mock_fetch, tmpdir):

    e1 = e2["extension1"]
    view.remove_specs(e1, e2)


+@pytest.mark.regression("32456")
+def test_view_with_spec_not_contributing_files(mock_packages, tmpdir):
+    tmpdir = str(tmpdir)
+    view_dir = os.path.join(tmpdir, "view")
+    os.mkdir(view_dir)
+
+    layout = DirectoryLayout(view_dir)
+    view = SimpleFilesystemView(view_dir, layout)
+
+    a = Spec("a")
+    b = Spec("b")
+    a.prefix = os.path.join(tmpdir, "a")
+    b.prefix = os.path.join(tmpdir, "b")
+    a._mark_concrete()
+    b._mark_concrete()
+
+    # Create directory structure for a and b, and view
+    os.makedirs(a.prefix.subdir)
+    os.makedirs(b.prefix.subdir)
+    os.makedirs(os.path.join(a.prefix, ".spack"))
+    os.makedirs(os.path.join(b.prefix, ".spack"))
+
+    # Add files to b's prefix, but not to a's
+    with open(b.prefix.file, "w") as f:
+        f.write("file 1")
+
+    with open(b.prefix.subdir.file, "w") as f:
+        f.write("file 2")
+
+    # In previous versions of Spack we incorrectly called add_files_to_view
+    # with b's merge map. It shouldn't be called at all, since a has no
+    # files to add to the view.
+    def pkg_a_add_files_to_view(view, merge_map, skip_if_exists=True):
+        assert False, "There shouldn't be files to add"
+
+    a.package.add_files_to_view = pkg_a_add_files_to_view
+
+    # Create view and see if files are linked.
+    view.add_specs(a, b)
+    assert os.path.lexists(os.path.join(view_dir, "file"))
+    assert os.path.lexists(os.path.join(view_dir, "subdir", "file"))
@@ -141,7 +141,7 @@ def dump_environment(path, environment=None):
    use_env = environment or os.environ
    hidden_vars = set(["PS1", "PWD", "OLDPWD", "TERM_SESSION_ID"])

-   fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
+   fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as env_file:
        for var, val in sorted(use_env.items()):
            env_file.write(
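Why `O_TRUNC` matters here: `O_CREAT` alone reuses an existing file without truncating it, so rewriting a shorter environment dump would leave stale bytes at the end. A self-contained demonstration of the failure mode:

    import os
    import tempfile

    path = os.path.join(tempfile.mkdtemp(), "env-demo.txt")

    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    os.write(fd, b"LONG_OLD_CONTENT")
    os.close(fd)

    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)  # no O_TRUNC
    os.write(fd, b"short")
    os.close(fd)

    with open(path, "rb") as f:
        assert f.read() == b"shortOLD_CONTENT"  # corrupted mix of old and new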
@@ -915,7 +915,7 @@ def inspect_path(root, inspections, exclude=None):
     env = EnvironmentModifications()
     # Inspect the prefix to check for the existence of common directories
     for relative_path, variables in inspections.items():
-        expected = os.path.join(root, relative_path)
+        expected = os.path.join(root, os.path.normpath(relative_path))
 
         if os.path.isdir(expected) and not exclude(expected):
             for variable in variables:
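
`os.path.normpath` collapses redundant separators and, on Windows, converts forward slashes to the native separator, so an inspection key written as e.g. `"lib/pkgconfig"` joins into a path that `os.path.isdir` can actually find. A quick illustration:

```python
import os.path

# POSIX:   "lib/pkgconfig" -> "lib/pkgconfig"
# Windows: "lib/pkgconfig" -> "lib\\pkgconfig"
print(os.path.normpath("lib/pkgconfig"))

# Redundant components are collapsed as well:
print(os.path.normpath("bin//../lib/./pkgconfig"))  # -> "lib/pkgconfig" on POSIX
```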
@@ -144,8 +144,7 @@ def __exit__(cm, type, value, traceback):
         cm.tmp_file.close()
 
         if value:
-            # remove tmp on exception & raise it
-            shutil.rmtree(cm.tmp_filename, True)
+            os.remove(cm.tmp_filename)
 
         else:
             rename(cm.tmp_filename, cm.orig_filename)
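
`cm.tmp_filename` names a file, not a directory, so `shutil.rmtree(..., True)` could not remove it, and `ignore_errors=True` silently swallowed the failure, leaking the temporary file whenever the block raised. A minimal sketch of the difference:

```python
import os
import shutil
import tempfile

fd, tmp = tempfile.mkstemp()
os.close(fd)

shutil.rmtree(tmp, True)    # wrong tool for a file; the error is suppressed
print(os.path.exists(tmp))  # True -- the temp file leaked

os.remove(tmp)              # the correct call for a single file
print(os.path.exists(tmp))  # False
```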
@@ -444,7 +444,7 @@ def padding_filter(string):
         r"(/{pad})+"  # the padding string repeated one or more times
         r"(/{longest_prefix})?(?=/)"  # trailing prefix of padding as path component
     )
-    regex = regex.replace("/", os.sep)
+    regex = regex.replace("/", re.escape(os.sep))
     regex = regex.format(pad=pad, longest_prefix=longest_prefix)
     _filter_re = re.compile(regex)
 
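
Substituting a raw `os.sep` breaks on Windows, where the separator is a backslash: interpolated into a pattern, it escapes whatever character follows instead of matching a literal separator. `re.escape(os.sep)` doubles it so the regex matches the backslash itself. A small demonstration of the difference:

```python
import re

sep = "\\"  # os.sep on Windows
template = "(/{pad})+"

bad = template.replace("/", sep).format(pad="XX")              # "(\XX)+"
good = template.replace("/", re.escape(sep)).format(pad="XX")  # "(\\XX)+"

try:
    re.compile(bad)
except re.error as err:
    print("unescaped separator breaks the pattern:", err)  # bad escape \X

print(re.compile(good).findall(r"\XX\XX"))  # ['\\XX'] -- literal backslash matched
```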
@@ -11,51 +11,140 @@
 """
 import sys
 import time
+from collections import OrderedDict, namedtuple
+from contextlib import contextmanager
+
+from llnl.util.lang import pretty_seconds
 
 import spack.util.spack_json as sjson
 
+Interval = namedtuple("Interval", ("begin", "end"))
 
-class Timer(object):
-    """
-    Simple timer for timing phases of a solve or install
-    """
+#: name for the global timer (used in start(), stop(), duration() without arguments)
+global_timer_name = "_global"
 
-    def __init__(self):
-        self.start = time.time()
-        self.last = self.start
-        self.phases = {}
-        self.end = None
-
-    def phase(self, name):
-        last = self.last
-        now = time.time()
-        self.phases[name] = now - last
-        self.last = now
+
+class NullTimer(object):
+    """Timer interface that does nothing, useful for "tell
+    don't ask" style code when timers are optional."""
+
+    def start(self, name=global_timer_name):
+        pass
+
+    def stop(self, name=global_timer_name):
+        pass
+
+    def duration(self, name=global_timer_name):
+        return 0.0
+
+    @contextmanager
+    def measure(self, name):
+        yield
 
     @property
-    def total(self):
-        """Return the total time"""
-        if self.end:
-            return self.end - self.start
-        return time.time() - self.start
-
-    def stop(self):
-        """
-        Stop the timer to record a total time, if desired.
-        """
-        self.end = time.time()
+    def phases(self):
+        return []
 
     def write_json(self, out=sys.stdout):
+        pass
+
+    def write_tty(self, out=sys.stdout):
+        pass
+
+
+#: instance of a do-nothing timer
+NULL_TIMER = NullTimer()
+
+
+class Timer(object):
+    """Simple interval timer"""
+
+    def __init__(self, now=time.time):
         """
-        Write a json object with times to file
+        Arguments:
+            now: function that gives the seconds since e.g. epoch
         """
-        phases = [{"name": p, "seconds": s} for p, s in self.phases.items()]
-        times = {"phases": phases, "total": {"seconds": self.total}}
+        self._now = now
+        self._timers = OrderedDict()  # type: OrderedDict[str,Interval]
+
+        # _global is the overall timer since the instance was created
+        self._timers[global_timer_name] = Interval(self._now(), end=None)
+
+    def start(self, name=global_timer_name):
+        """
+        Start or restart a named timer, or the global timer when no name is given.
+
+        Arguments:
+            name (str): Optional name of the timer. When no name is passed, the
+                global timer is started.
+        """
+        self._timers[name] = Interval(self._now(), None)
+
+    def stop(self, name=global_timer_name):
+        """
+        Stop a named timer, or all timers when no name is given. Stopping a
+        timer that has not started has no effect.
+
+        Arguments:
+            name (str): Optional name of the timer. When no name is passed, all
+                timers are stopped.
+        """
+        interval = self._timers.get(name, None)
+        if not interval:
+            return
+        self._timers[name] = Interval(interval.begin, self._now())
+
+    def duration(self, name=global_timer_name):
+        """
+        Get the time in seconds of a named timer, or the total time if no
+        name is passed. The duration is always 0 for timers that have not been
+        started, no error is raised.
+
+        Arguments:
+            name (str): (Optional) name of the timer
+
+        Returns:
+            float: duration of timer.
+        """
+        try:
+            interval = self._timers[name]
+        except KeyError:
+            return 0.0
+        # Take either the interval end, the global timer, or now.
+        end = interval.end or self._timers[global_timer_name].end or self._now()
+        return end - interval.begin
+
+    @contextmanager
+    def measure(self, name):
+        """
+        Context manager that allows you to time a block of code.
+
+        Arguments:
+            name (str): Name of the timer
+        """
+        begin = self._now()
+        yield
+        self._timers[name] = Interval(begin, self._now())
+
+    @property
+    def phases(self):
+        """Get all named timers (excluding the global/total timer)"""
+        return [k for k in self._timers.keys() if k != global_timer_name]
+
+    def write_json(self, out=sys.stdout):
+        """Write a json object with times to file"""
+        phases = [{"name": p, "seconds": self.duration(p)} for p in self.phases]
+        times = {"phases": phases, "total": {"seconds": self.duration()}}
         out.write(sjson.dump(times))
 
     def write_tty(self, out=sys.stdout):
-        now = time.time()
-        out.write("Time:\n")
-        for phase, t in self.phases.items():
-            out.write("    %-15s%.4f\n" % (phase + ":", t))
-        out.write("Total: %.4f\n" % (now - self.start))
+        """Write a human-readable summary of timings"""
+        # Individual timers ordered by registration
+        formatted = [(p, pretty_seconds(self.duration(p))) for p in self.phases]
+
+        # Total time
+        formatted.append(("total", pretty_seconds(self.duration())))
+
+        # Write to out
+        for name, duration in formatted:
+            out.write("    {:10s} {:>10s}\n".format(name, duration))
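
The rewrite replaces the single phase-based `Timer` with named, restartable interval timers plus a do-nothing stand-in. A usage sketch based only on the interface shown in the diff above (assuming `spack.util.timer` is importable):

```python
from spack.util.timer import NULL_TIMER, Timer

t = Timer()
t.start("solve")
# ... do some work ...
t.stop("solve")

with t.measure("install"):  # context-manager form of the same thing
    pass

print(t.phases)             # ["solve", "install"]
print(t.duration("solve"))  # seconds as a float; 0.0 if a timer never started
t.write_tty()               # human-readable summary, plus a "total" line

# "Tell, don't ask": APIs accept a timer and default to the no-op instance,
# so call sites never need an `if timer is not None:` check.
def solve(specs, timer=NULL_TIMER):
    with timer.measure("setup"):
        pass
```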
@@ -162,39 +162,12 @@ def check_spec_manifest(spec):
         results.add_error(prefix, "manifest corrupted")
         return results
 
-    # Get extensions active in spec
-    view = spack.filesystem_view.YamlFilesystemView(prefix, spack.store.layout)
-    active_exts = view.extensions_layout.extension_map(spec).values()
-
-    ext_file = ""
-    if active_exts:
-        # No point checking contents of this file as it is the only source of
-        # truth for that information.
-        ext_file = view.extensions_layout.extension_file_path(spec)
-
-    def is_extension_artifact(p):
-        if os.path.islink(p):
-            if any(os.readlink(p).startswith(e.prefix) for e in active_exts):
-                # This file is linked in by an extension. Belongs to extension
-                return True
-        elif os.path.isdir(p) and p not in manifest:
-            if all(is_extension_artifact(os.path.join(p, f)) for f in os.listdir(p)):
-                return True
-        return False
-
     for root, dirs, files in os.walk(prefix):
         for entry in list(dirs + files):
             path = os.path.join(root, entry)
 
-            # Do not check links from prefix to active extension
-            # TODO: make this stricter for non-linux systems that use symlink
-            # permissions
-            # Do not check directories that only exist for extensions
-            if is_extension_artifact(path):
-                continue
-
             # Do not check manifest file. Can't store your own hash
-            # Nothing to check for ext_file
-            if path == manifest_file or path == ext_file:
+            if path == manifest_file:
                 continue
 
             data = manifest.pop(path, {})
@@ -153,113 +153,113 @@ protected-publish:
   # still run on UO runners and be signed
   # using the previous approach.
   ########################################
-.e4s-mac:
-  variables:
-    SPACK_CI_STACK_NAME: e4s-mac
-  allow_failure: True
+# .e4s-mac:
+#   variables:
+#     SPACK_CI_STACK_NAME: e4s-mac
+#   allow_failure: True
 
-.mac-pr:
-  only:
-  - /^pr[\d]+_.*$/
-  - /^github\/pr[\d]+_.*$/
-  variables:
-    SPACK_BUILDCACHE_DESTINATION: "s3://spack-binaries-prs/${CI_COMMIT_REF_NAME}"
-    SPACK_PRUNE_UNTOUCHED: "True"
+# .mac-pr:
+#   only:
+#   - /^pr[\d]+_.*$/
+#   - /^github\/pr[\d]+_.*$/
+#   variables:
+#     SPACK_BUILDCACHE_DESTINATION: "s3://spack-binaries-prs/${CI_COMMIT_REF_NAME}"
+#     SPACK_PRUNE_UNTOUCHED: "True"
 
-.mac-protected:
-  only:
-  - /^develop$/
-  - /^releases\/v.*/
-  - /^v.*/
-  - /^github\/develop$/
-  variables:
-    SPACK_BUILDCACHE_DESTINATION: "s3://spack-binaries/${CI_COMMIT_REF_NAME}/${SPACK_CI_STACK_NAME}"
+# .mac-protected:
+#   only:
+#   - /^develop$/
+#   - /^releases\/v.*/
+#   - /^v.*/
+#   - /^github\/develop$/
+#   variables:
+#     SPACK_BUILDCACHE_DESTINATION: "s3://spack-binaries/${CI_COMMIT_REF_NAME}/${SPACK_CI_STACK_NAME}"
 
-.mac-pr-build:
-  extends: [ ".mac-pr", ".build" ]
-  variables:
-    AWS_ACCESS_KEY_ID: ${PR_MIRRORS_AWS_ACCESS_KEY_ID}
-    AWS_SECRET_ACCESS_KEY: ${PR_MIRRORS_AWS_SECRET_ACCESS_KEY}
+# .mac-pr-build:
+#   extends: [ ".mac-pr", ".build" ]
+#   variables:
+#     AWS_ACCESS_KEY_ID: ${PR_MIRRORS_AWS_ACCESS_KEY_ID}
+#     AWS_SECRET_ACCESS_KEY: ${PR_MIRRORS_AWS_SECRET_ACCESS_KEY}
 
-.mac-protected-build:
-  extends: [ ".mac-protected", ".build" ]
-  variables:
-    AWS_ACCESS_KEY_ID: ${PROTECTED_MIRRORS_AWS_ACCESS_KEY_ID}
-    AWS_SECRET_ACCESS_KEY: ${PROTECTED_MIRRORS_AWS_SECRET_ACCESS_KEY}
-    SPACK_SIGNING_KEY: ${PACKAGE_SIGNING_KEY}
+# .mac-protected-build:
+#   extends: [ ".mac-protected", ".build" ]
+#   variables:
+#     AWS_ACCESS_KEY_ID: ${PROTECTED_MIRRORS_AWS_ACCESS_KEY_ID}
+#     AWS_SECRET_ACCESS_KEY: ${PROTECTED_MIRRORS_AWS_SECRET_ACCESS_KEY}
+#     SPACK_SIGNING_KEY: ${PACKAGE_SIGNING_KEY}
 
-e4s-mac-pr-generate:
-  extends: [".e4s-mac", ".mac-pr"]
-  stage: generate
-  script:
-    - tmp="$(mktemp -d)"; export SPACK_USER_CONFIG_PATH="$tmp"; export SPACK_USER_CACHE_PATH="$tmp"
-    - . "./share/spack/setup-env.sh"
-    - spack --version
-    - cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
-    - spack env activate --without-view .
-    - spack ci generate --check-index-only
-      --buildcache-destination "${SPACK_BUILDCACHE_DESTINATION}"
-      --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
-      --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
-  artifacts:
-    paths:
-      - "${CI_PROJECT_DIR}/jobs_scratch_dir"
-  tags:
-    - lambda
-  interruptible: true
-  retry:
-    max: 2
-    when:
-      - runner_system_failure
-      - stuck_or_timeout_failure
-  timeout: 60 minutes
+# e4s-mac-pr-generate:
+#   extends: [".e4s-mac", ".mac-pr"]
+#   stage: generate
+#   script:
+#     - tmp="$(mktemp -d)"; export SPACK_USER_CONFIG_PATH="$tmp"; export SPACK_USER_CACHE_PATH="$tmp"
+#     - . "./share/spack/setup-env.sh"
+#     - spack --version
+#     - cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
+#     - spack env activate --without-view .
+#     - spack ci generate --check-index-only
+#       --buildcache-destination "${SPACK_BUILDCACHE_DESTINATION}"
+#       --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
+#       --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
+#   artifacts:
+#     paths:
+#       - "${CI_PROJECT_DIR}/jobs_scratch_dir"
+#   tags:
+#     - lambda
+#   interruptible: true
+#   retry:
+#     max: 2
+#     when:
+#       - runner_system_failure
+#       - stuck_or_timeout_failure
+#   timeout: 60 minutes
 
-e4s-mac-protected-generate:
-  extends: [".e4s-mac", ".mac-protected"]
-  stage: generate
-  script:
-    - tmp="$(mktemp -d)"; export SPACK_USER_CONFIG_PATH="$tmp"; export SPACK_USER_CACHE_PATH="$tmp"
-    - . "./share/spack/setup-env.sh"
-    - spack --version
-    - cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
-    - spack env activate --without-view .
-    - spack ci generate --check-index-only
-      --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
-      --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
-  artifacts:
-    paths:
-      - "${CI_PROJECT_DIR}/jobs_scratch_dir"
-  tags:
-    - omicron
-  interruptible: true
-  retry:
-    max: 2
-    when:
-      - runner_system_failure
-      - stuck_or_timeout_failure
-  timeout: 60 minutes
+# e4s-mac-protected-generate:
+#   extends: [".e4s-mac", ".mac-protected"]
+#   stage: generate
+#   script:
+#     - tmp="$(mktemp -d)"; export SPACK_USER_CONFIG_PATH="$tmp"; export SPACK_USER_CACHE_PATH="$tmp"
+#     - . "./share/spack/setup-env.sh"
+#     - spack --version
+#     - cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
+#     - spack env activate --without-view .
+#     - spack ci generate --check-index-only
+#       --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
+#       --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
+#   artifacts:
+#     paths:
+#       - "${CI_PROJECT_DIR}/jobs_scratch_dir"
+#   tags:
+#     - omicron
+#   interruptible: true
+#   retry:
+#     max: 2
+#     when:
+#       - runner_system_failure
+#       - stuck_or_timeout_failure
+#   timeout: 60 minutes
 
-e4s-mac-pr-build:
-  extends: [ ".e4s-mac", ".mac-pr-build" ]
-  trigger:
-    include:
-      - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
-        job: e4s-mac-pr-generate
-    strategy: depend
-  needs:
-    - artifacts: True
-      job: e4s-mac-pr-generate
+# e4s-mac-pr-build:
+#   extends: [ ".e4s-mac", ".mac-pr-build" ]
+#   trigger:
+#     include:
+#       - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
+#         job: e4s-mac-pr-generate
+#     strategy: depend
+#   needs:
+#     - artifacts: True
+#       job: e4s-mac-pr-generate
 
-e4s-mac-protected-build:
-  extends: [ ".e4s-mac", ".mac-protected-build" ]
-  trigger:
-    include:
-      - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
-        job: e4s-mac-protected-generate
-    strategy: depend
-  needs:
-    - artifacts: True
-      job: e4s-mac-protected-generate
+# e4s-mac-protected-build:
+#   extends: [ ".e4s-mac", ".mac-protected-build" ]
+#   trigger:
+#     include:
+#       - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
+#         job: e4s-mac-protected-generate
+#     strategy: depend
+#   needs:
+#     - artifacts: True
+#       job: e4s-mac-protected-generate
 
 ########################################
 # E4S pipeline
@@ -254,6 +254,9 @@ spack:
       - spack env activate --without-view .
      - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
       - mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
       # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
       - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+      # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
+      - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
+      - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
       - spack --color=always --backtrace ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
@@ -175,6 +175,9 @@ spack:
       - spack env activate --without-view .
       - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
       - mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
       # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
       - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+      # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
+      - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
+      - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
       - spack --color=always --backtrace ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
@@ -47,6 +47,9 @@ spack:
       - cd ${SPACK_CONCRETE_ENV_DIR}
       - spack env activate --without-view .
       - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
       # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
       - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+      # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
+      - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
+      - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
       - spack --color=always --backtrace ci rebuild
@@ -60,6 +60,9 @@ spack:
       - cd ${SPACK_CONCRETE_ENV_DIR}
       - spack env activate --without-view .
       - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
       # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
       - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+      # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
+      - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
+      - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
       - spack --color=always --backtrace ci rebuild
@@ -280,6 +280,9 @@ spack:
       - spack env activate --without-view .
       - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
       - mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
       # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
       - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+      # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
+      - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
+      - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
       - export PATH=/bootstrap/runner/view/bin:${PATH}
@@ -270,6 +270,9 @@ spack:
       - spack env activate --without-view .
       - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
       - mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
       # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
      - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+      # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
+      - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
+      - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
       - spack --color=always --backtrace ci rebuild --tests > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
@@ -104,6 +104,9 @@ spack:
       - spack env activate --without-view .
       - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
       - mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
       # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
       - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+      # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
+      - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
+      - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
       - spack --color=always --backtrace ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
@@ -107,6 +107,9 @@ spack:
       - spack env activate --without-view .
       - spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
       - mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
       # AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
       - if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
+      # UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
+      - if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
+      - if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
       - spack --color=always --backtrace ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
Some files were not shown because too many files have changed in this diff.