Compare commits


1 Commit

Author          SHA1        Message   Date
Gregory Becker  47a1ed8d91  wip       2022-11-08 17:03:45 -08:00
262 changed files with 3136 additions and 2167 deletions

View File

@@ -11,7 +11,7 @@ concurrency:
jobs:
  # Run unit tests with different configurations on linux
  ubuntu:
    runs-on: ubuntu-20.04
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['2.7', '3.6', '3.7', '3.8', '3.9', '3.10', '3.11']

View File

@@ -1,309 +1,16 @@
# v0.19.1 (2023-02-07)
### Spack Bugfixes
* `buildcache create`: make "file exists" less verbose (#35019)
* `spack mirror create`: don't change paths to urls (#34992)
* Improve error message for requirements (#33988)
* uninstall: fix accidental cubic complexity (#34005)
* scons: fix signature for `install_args` (#34481)
* Fix `combine_phase_logs` text encoding issues (#34657)
* Use a module-like object to propagate changes in the MRO, when setting build env (#34059)
* PackageBase should not define builder legacy attributes (#33942)
* Forward lookup of the "run_tests" attribute (#34531)
* Bugfix for timers (#33917, #33900)
* Fix path handling in prefix inspections (#35318)
* Fix libtool filter for Fujitsu compilers (#34916)
* Bug fix for duplicate rpath errors on macOS when creating build caches (#34375)
* FileCache: delete the new cache file on exception (#34623)
* Propagate exceptions from Spack python console (#34547)
* Tests: Fix a bug/typo in a `config_values.py` fixture (#33886)
* Various CI fixes (#33953, #34560, #34828)
* Docs: remove monitors and analyzers, typos (#34358, #33926)
* Bump release version for tutorial command (#33859)
# v0.19.0 (2022-11-11)
`v0.19.0` is a major feature release.
## Major features in this release
1. **Package requirements**
Spack's traditional [package preferences](
https://spack.readthedocs.io/en/latest/build_settings.html#package-preferences)
are soft, but we've added hard requirements to `packages.yaml` and `spack.yaml`
(#32528, #32369). Package requirements use the same syntax as specs:
```yaml
packages:
  libfabric:
    require: "@1.13.2"
  mpich:
    require:
    - one_of: ["+cuda", "+rocm"]
```
More details in [the docs](
https://spack.readthedocs.io/en/latest/build_settings.html#package-requirements).
2. **Environment UI Improvements**
* Fewer surprising modifications to `spack.yaml` (#33711):
* `spack install` in an environment will no longer add to the `specs:` list; you'll
need to either use `spack add <spec>` or `spack install --add <spec>`.
* Similarly, `spack uninstall` will not remove from your environment's `specs:`
list; you'll need to use `spack remove` or `spack uninstall --remove`.
This will make it easier to manage an environment, as there is clear separation
between the stack to be installed (`spack.yaml`/`spack.lock`) and which parts of
it should be installed (`spack install` / `spack uninstall`); a workflow sketch follows at the end of this item.
* `concretizer:unify:true` is now the default mode for new environments (#31787)
We see more users creating `unify:true` environments now. Users who need
`unify:false` can add it to their environment to get the old behavior, which
concretizes every spec in the environment independently.
* Include environment configuration from URLs (#29026, [docs](
https://spack.readthedocs.io/en/latest/environments.html#included-configurations))
You can now include configuration in your environment directly from a URL:
```yaml
spack:
  include:
  - https://github.com/path/to/raw/config/compilers.yaml
```
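A sketch of the new add/remove workflow described in the first bullet above (`zlib` is just an arbitrary example spec):
```console
$ spack add zlib                  # append zlib to the specs: list in spack.yaml
$ spack install --add zlib        # or: add it and install it in one step
$ spack uninstall --remove zlib   # uninstall it and drop it from the specs: list
```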
3. **Multiple Build Systems**
An increasing number of packages in the ecosystem need the ability to support
multiple build systems (#30738, [docs](
https://spack.readthedocs.io/en/latest/packaging_guide.html#multiple-build-systems)),
either across versions, across platforms, or within the same version of the software.
This has been hard to support through multiple inheritance, as methods from different
build system superclasses would conflict. `package.py` files can now define separate
builder classes with installation logic for different build systems, e.g.:
```python
class ArpackNg(CMakePackage, AutotoolsPackage):
    build_system(
        conditional("cmake", when="@0.64:"),
        conditional("autotools", when="@:0.63"),
        default="cmake",
    )

    class CMakeBuilder(spack.build_systems.cmake.CMakeBuilder):
        def cmake_args(self):
            pass

    class AutotoolsBuilder(spack.build_systems.autotools.AutotoolsBuilder):
        def configure_args(self):
            pass
```
4. **Compiler and variant propagation**
Currently, compiler flags and variants are inconsistent: compiler flags set for a
package are inherited by its dependencies, while variants are not. We should have
these be consistent by allowing for inheritance to be enabled or disabled for both
variants and compiler flags.
Example syntax:
- `package ++variant`:
enabled variant that will be propagated to dependencies
- `package +variant`:
enabled variant that will NOT be propagated to dependencies
- `package ~~variant`:
disabled variant that will be propagated to dependencies
- `package ~variant`:
disabled variant that will NOT be propagated to dependencies
- `package cflags==-g`:
`cflags` will be propagated to dependencies
- `package cflags=-g`:
`cflags` will NOT be propagated to dependencies
Syntax for non-boolean variants is similar to compiler flags. More in the docs for
[variants](
https://spack.readthedocs.io/en/latest/basic_usage.html#variants) and [compiler flags](
https://spack.readthedocs.io/en/latest/basic_usage.html#compiler-flags).
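For instance, a hypothetical install of `mpileaks` (an arbitrary example package and variant) that pushes a variant and a flag down to its dependencies:
```console
# ++debug and cflags== propagate; +debug and cflags= would apply to mpileaks only
$ spack install mpileaks ++debug cflags==-g
```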
5. **Enhancements to git version specifiers**
* `v0.18.0` added the ability to use git commits as versions. You can now use the
`git.` prefix to specify git tags or branches as versions. All of these are valid git
versions in `v0.19` (#31200):
```console
foo@abcdef1234abcdef1234abcdef1234abcdef1234 # raw commit
foo@git.abcdef1234abcdef1234abcdef1234abcdef1234 # commit with git prefix
foo@git.develop # the develop branch
foo@git.0.19 # use the 0.19 tag
```
* `v0.19` also gives you more control over how Spack interprets git versions, in case
Spack cannot detect the version from the git repository. You can suffix a git
version with `=<version>` to force Spack to concretize it as a particular version
(#30998, #31914, #32257):
```console
# use mybranch, but treat it as version 3.2 for version comparison
foo@git.mybranch=3.2
# use the given commit, but treat it as develop for version comparison
foo@git.abcdef1234abcdef1234abcdef1234abcdef1234=develop
```
More in [the docs](
https://spack.readthedocs.io/en/latest/basic_usage.html#version-specifier)
6. **Changes to Cray EX Support**
Cray machines have historically had their own "platform" within Spack, because we
needed to go through the module system to leverage compilers and MPI installations on
these machines. The Cray EX programming environment now provides standalone `craycc`
executables and proper `mpicc` wrappers, so Spack can treat EX machines like Linux
with extra packages (#29392).
We expect this to greatly reduce bugs, as external packages and compilers can now be
used by prefix instead of through modules. We will also no longer be subject to
reproducibility issues when modules change from Cray PE release to release and from
site to site. This also simplifies dealing with the underlying Linux OS on Cray
systems, as Spack will properly model the machine's OS as either SuSE or RHEL.
7. **Improvements to tests and testing in CI**
* `spack ci generate --tests` will generate a `.gitlab-ci.yml` file that not only does
builds but also runs tests for built packages (#27877). Public GitHub pipelines now
also run tests in CI.
* `spack test run --explicit` will only run tests for packages that are explicitly
installed, instead of all packages.
8. **Experimental binding link model**
You can add a new option to `config.yaml` to make Spack embed absolute paths to
needed shared libraries in ELF executables and shared libraries on Linux (#31948, [docs](
https://spack.readthedocs.io/en/latest/config_yaml.html#shared-linking-bind)):
```yaml
config:
  shared_linking:
    type: rpath
    bind: true
```
This can improve launch time at scale for parallel applications, and it can make
installations less susceptible to environment variables like `LD_LIBRARY_PATH`,
especially when dealing with external libraries that use `RUNPATH`. You can think of
this as a faster, even higher-precedence version of `RPATH`.
## Other new features of note
* `spack spec` prints dependencies more legibly. Dependencies in the output now appear
at the *earliest* level of indentation possible (#33406)
* You can override `package.py` attributes like `url`, directly in `packages.yaml`
(#33275, [docs](
https://spack.readthedocs.io/en/latest/build_settings.html#assigning-package-attributes))
* There are a number of new architecture-related format strings you can use in Spack
configuration files to specify paths (#29810, [docs](
https://spack.readthedocs.io/en/latest/configuration.html#config-file-variables))
* Spack now supports bootstrapping Clingo on Windows (#33400)
* There is now support for an `RPATH`-like library model on Windows (#31930)
## Performance Improvements
* Major performance improvements for installation from binary caches (#27610, #33628,
#33636, #33608, #33590, #33496)
* Test suite can now be parallelized using `xdist` (used in GitHub Actions) (#32361)
* Reduce lock contention for parallel builds in environments (#31643)
## New binary caches and stacks
* We now build nearly all of E4S with `oneapi` in our buildcache (#31781, #31804,
#31803, #31840, #31991, #32117, #32107, #32239)
* Added 3 new machine learning-centric stacks to binary cache: `x86_64_v3`, CUDA, ROCm
(#31592, #33463)
## Removals and Deprecations
* Support for Python 3.5 is dropped (#31908). Only Python 2.7 and 3.6+ are officially
supported.
* This is the last Spack release that will support Python 2 (#32615). Spack `v0.19`
will emit a deprecation warning if you run it with Python 2, and Python 2 support will
soon be removed from the `develop` branch.
* `LD_LIBRARY_PATH` is no longer set by default by `spack load` or module loads.
Setting `LD_LIBRARY_PATH` in Spack environments/modules can cause binaries from
outside of Spack to crash, and Spack's own builds use `RPATH` and do not need
`LD_LIBRARY_PATH` set in order to run. If you still want the old behavior, you
can run these commands to configure Spack to set `LD_LIBRARY_PATH`:
```console
spack config add modules:prefix_inspections:lib64:[LD_LIBRARY_PATH]
spack config add modules:prefix_inspections:lib:[LD_LIBRARY_PATH]
```
* The `spack:concretization:[together|separately]` option has been removed after being
deprecated in `v0.18`. Use `concretizer:unify:[true|false]` instead.
* `config:module_roots` is no longer supported after being deprecated in `v0.18`. Use
configuration in module sets instead (#28659, [docs](
https://spack.readthedocs.io/en/latest/module_file_support.html)).
* `spack activate` and `spack deactivate` are no longer supported, having been
deprecated in `v0.18`. Use an environment with a view instead of
activating/deactivating ([docs](
https://spack.readthedocs.io/en/latest/environments.html#configuration-in-spack-yaml)).
* The old YAML format for buildcaches is now deprecated (#33707). If you are using an
old buildcache with YAML metadata you will need to regenerate it with JSON metadata.
* `spack bootstrap trust` and `spack bootstrap untrust` are deprecated in favor of
`spack bootstrap enable` and `spack bootstrap disable` and will be removed in `v0.20`.
(#33600)
* The `graviton2` architecture has been renamed to `neoverse_n1`, and `graviton3`
is now `neoverse_v1`. Buildcaches using the old architecture names will need to be rebuilt.
* The terms `blacklist` and `whitelist` have been replaced with `include` and `exclude`
in all configuration files (#31569). You can use `spack config update` to
automatically fix your configuration files.
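For example, to rewrite the `modules` section of your configuration in place (a sketch; any configuration section name can be given):
```console
$ spack config update modules
```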
## Notable Bugfixes
* Permission setting on installation now handles effective uid properly (#19980)
* `buildable:true` for an MPI implementation now overrides `buildable:false` for `mpi` (#18269)
* Improved error messages when attempting to use an unconfigured compiler (#32084)
* Do not punish explicitly requested compiler mismatches in the solver (#30074)
* `spack stage`: add missing --fresh and --reuse (#31626)
* Fixes for adding build system executables like `cmake` to package scope (#31739)
* Bugfix for binary relocation with aliased strings produced by newer `binutils` (#32253)
## Spack community stats
* 6,751 total packages, 335 new since `v0.18.0`
* 141 new Python packages
* 89 new R packages
* 303 people contributed to this release
* 287 committers to packages
* 57 committers to core
# v0.18.1 (2022-07-19)
### Spack Bugfixes
* Fix several bugs related to bootstrapping (#30834,#31042,#31180)
* Fix a regression that was causing spec hashes to differ between
Python 2 and Python 3 (#31092)
* Fixed compiler flags for oneAPI and DPC++ (#30856)
* Fixed several issues related to concretization (#31142,#31153,#31170,#31226)
* Improved support for Cray manifest file and `spack external find` (#31144,#31201,#31173,#31186)
* Assign a version to openSUSE Tumbleweed according to the GLIBC version
in the system (#19895)
* Improved Dockerfile generation for `spack containerize` (#29741,#31321)
* Fixed a few bugs related to concurrent execution of commands (#31509,#31493,#31477)
### Package updates
* WarpX: add v22.06, fixed libs property (#30866,#31102)

View File

@@ -10,8 +10,8 @@ For more on Spack's release structure, see
| Version | Supported |
| ------- | ------------------ |
| develop | :white_check_mark: |
| 0.19.x | :white_check_mark: |
| 0.18.x | :white_check_mark: |
| 0.17.x | :white_check_mark: |
| 0.16.x | :white_check_mark: |
## Reporting a Vulnerability

lib/spack/docs/analyze.rst (new file, 162 lines)
View File

@@ -0,0 +1,162 @@
.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _analyze:
=======
Analyze
=======
The analyze command is a front-end to various tools that let us analyze
package installations. Each analyzer is a module for a different kind
of analysis that can be done on a package installation, including (but not
limited to) binary, log, or text analysis. Thus, the analyze command group
allows you to take an existing package install, choose an analyzer,
and extract some output for the package using it.
-----------------
Analyzer Metadata
-----------------
For all analyzers, we write to an ``analyzers`` folder in ``~/.spack``, or the
value that you specify in your spack config at ``config:analyzers_dir``.
For example, here we see the results of running an analysis on zlib:
.. code-block:: console
$ tree ~/.spack/analyzers/
└── linux-ubuntu20.04-skylake
└── gcc-9.3.0
└── zlib-1.2.11-sl7m27mzkbejtkrajigj3a3m37ygv4u2
├── environment_variables
│   └── spack-analyzer-environment-variables.json
├── install_files
│   └── spack-analyzer-install-files.json
└── libabigail
└── spack-analyzer-libabigail-libz.so.1.2.11.xml
This means that you can always find analyzer output in this folder, and it
is organized with the same logic as the package install it was run for.
If you want to customize this top-level folder, simply provide the ``--path``
argument to ``spack analyze run``. The nested organization will be maintained
within your custom root.
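For example, to put results under a custom root (the path here is purely illustrative):

.. code-block:: console

    $ spack analyze run --path /tmp/my-analyzer-results zlib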
-----------------
Listing Analyzers
-----------------
If you aren't familiar with Spack's analyzers, you can quickly list those that
are available:
.. code-block:: console
$ spack analyze list-analyzers
install_files : install file listing read from install_manifest.json
environment_variables : environment variables parsed from spack-build-env.txt
config_args : config args loaded from spack-configure-args.txt
libabigail : Application Binary Interface (ABI) features for objects
In the above, the first three are fairly simple - parsing metadata files from
a package install directory and saving them as JSON.
-------------------
Analyzing a Package
-------------------
The analyze command, akin to install, will accept a package spec to perform
an analysis for. The package must be installed. Let's walk through an example
with zlib. We first ask to analyze it. However, since we have more than one
install, we are asked to disambiguate:
.. code-block:: console
$ spack analyze run zlib
==> Error: zlib matches multiple packages.
Matching packages:
fz2bs56 zlib@1.2.11%gcc@7.5.0 arch=linux-ubuntu18.04-skylake
sl7m27m zlib@1.2.11%gcc@9.3.0 arch=linux-ubuntu20.04-skylake
Use a more specific spec.
We can then disambiguate by specifying the hash of the exact spec we want to analyze:
.. code-block:: console
$ spack analyze run zlib/fz2bs56
If you don't provide any specific analyzer names, by default all analyzers
(shown in the ``list-analyzers`` subcommand list) will be run. If an analyzer does not
have any result, it will be skipped. For example, here is a result running for
zlib:
.. code-block:: console
$ ls ~/.spack/analyzers/linux-ubuntu20.04-skylake/gcc-9.3.0/zlib-1.2.11-sl7m27mzkbejtkrajigj3a3m37ygv4u2/
spack-analyzer-environment-variables.json
spack-analyzer-install-files.json
spack-analyzer-libabigail-libz.so.1.2.11.xml
If you want to run a specific analyzer, ask for it with ``--analyzer``. Here we run
spack analyze on the (already installed) ``libabigail`` package *using* the ``libabigail`` analyzer:
.. code-block:: console
$ spack analyze run --analyzer abigail libabigail
.. _analyze_monitoring:
----------------------
Monitoring An Analysis
----------------------
For any kind of analysis, you can use a
`spack monitor <https://github.com/spack/spack-monitor>`_ ("Spackmon") server
to upload the same run metadata to. You can
follow the instructions in the `spack monitor documentation <https://spack-monitor.readthedocs.org>`_
to first create a server along with a username and token for yourself.
You can then use this guide to interact with the server.
You should first export your spack monitor token and username to the environment:
.. code-block:: console
$ export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438
$ export SPACKMON_USER=spacky
By default, the host for your server is expected to be at ``http://127.0.0.1``
with a prefix of ``ms1``, and if this is the case, you can simply add the
``--monitor`` flag to the analyze command:
.. code-block:: console
$ spack analyze run --monitor wget
If you need to customize the host or the prefix, you can do that as well:
.. code-block:: console
$ spack analyze run --monitor --monitor-prefix monitor --monitor-host https://monitor-service.io wget
If your server doesn't have authentication, you can skip it:
.. code-block:: console
$ spack analyze run --monitor --monitor-disable-auth wget
Regardless of your choice, when you run analyze on an installed package (whether
it was installed with ``--monitor`` or not), you'll see the results generated as
before, and a message that the monitor server was pinged:
.. code-block:: console
$ spack analyze run --monitor wget
...
==> Sending result for wget bin/wget to monitor.

View File

@@ -1114,21 +1114,21 @@ set of arbitrary versions, such as ``@1.0,1.5,1.7`` (``1.0``, ``1.5``,
or ``1.7``). When you supply such a specifier to ``spack install``,
it constrains the set of versions that Spack will install.
For packages with a ``git`` attribute, ``git`` references
may be specified instead of a numerical version i.e. branches, tags
and commits. Spack will stage and build based off the ``git``
reference provided. Acceptable syntaxes for this are:
.. code-block:: sh
# branches and tags
foo@git.develop # use the develop branch
foo@git.0.19 # use the 0.19 tag
# commit hashes
foo@abcdef1234abcdef1234abcdef1234abcdef1234 # 40-character hashes are automatically treated as git commits
foo@git.abcdef1234abcdef1234abcdef1234abcdef1234
Spack versions from git reference either have an associated version supplied by the user,
or infer a relationship to known versions from the structure of the git repository. If an
associated version is supplied by the user, Spack treats the git version as equivalent to that
@@ -1244,8 +1244,8 @@ For example, for the ``stackstart`` variant:
.. code-block:: sh
mpileaks stackstart==4 # variant will be propagated to dependencies
mpileaks stackstart=4 # only mpileaks will have this variant value
mpileaks stackstart=4 # variant will be propagated to dependencies
mpileaks stackstart==4 # only mpileaks will have this variant value
^^^^^^^^^^^^^^
Compiler Flags
@@ -1672,13 +1672,9 @@ own install prefix. However, certain packages are typically installed
`Python <https://www.python.org>`_ packages are typically installed in the
``$prefix/lib/python-2.7/site-packages`` directory.
In Spack, installation prefixes are immutable, so this type of installation
is not directly supported. However, it is possible to create views that
allow you to merge install prefixes of multiple packages into a single new prefix.
Views are a convenient way to get a more traditional filesystem structure.
Using *extensions*, you can ensure that Python packages always share the
same prefix in the view as Python itself. Suppose you have
Python installed like so:
Spack has support for this type of installation as well. In Spack,
a package that can live inside the prefix of another package is called
an *extension*. Suppose you have Python installed like so:
.. code-block:: console
@@ -1716,6 +1712,8 @@ You can find extensions for your Python installation like this:
py-ipython@2.3.1 py-pygments@2.0.1 py-setuptools@11.3.1
py-matplotlib@1.4.2 py-pyparsing@2.0.3 py-six@1.9.0
==> None activated.
The extensions are a subset of what's returned by ``spack list``, and
they are packages like any other. They are installed into their own
prefixes, and you can see this with ``spack find --paths``:
@@ -1743,72 +1741,32 @@ directly when you run ``python``:
ImportError: No module named numpy
>>>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using Extensions in Environments
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^
Using Extensions
^^^^^^^^^^^^^^^^
The recommended way of working with extensions such as ``py-numpy``
above is through :ref:`Environments <environments>`. For example,
the following creates an environment in the current working directory
with a filesystem view in the ``./view`` directory:
.. code-block:: console
$ spack env create --with-view view --dir .
$ spack -e . add py-numpy
$ spack -e . concretize
$ spack -e . install
We recommend environments for two reasons. Firstly, environments
can be activated (requires :ref:`shell-support`):
.. code-block:: console
$ spack env activate .
which sets all the right environment variables such as ``PATH`` and
``PYTHONPATH``. This ensures that
.. code-block:: console
$ python
>>> import numpy
works. Secondly, even without shell support, the view ensures
that Python can locate its extensions:
.. code-block:: console
$ ./view/bin/python
>>> import numpy
See :ref:`environments` for a more in-depth description of Spack
environments and customizations to views.
^^^^^^^^^^^^^^^^^^^^
Using ``spack load``
^^^^^^^^^^^^^^^^^^^^
A more traditional way of using Spack and extensions is ``spack load``
(requires :ref:`shell-support`). This will add the extension to ``PYTHONPATH``
in your current shell, and Python itself will be available in the ``PATH``:
There are four ways to get ``numpy`` working in Python. The first is
to use :ref:`shell-support`. You can simply ``load`` the extension,
and it will be added to the ``PYTHONPATH`` in your current shell:
.. code-block:: console
$ spack load python
$ spack load py-numpy
$ python
>>> import numpy
Now ``import numpy`` will succeed for as long as you keep your current
session open.
The loaded packages can be checked using ``spack find --loaded``.
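For example (the output lists whatever specs are currently loaded in your shell):

.. code-block:: console

    $ spack load py-numpy
    $ spack find --loaded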
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Loading Extensions via Modules
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Apart from ``spack env activate`` and ``spack load``, you can load numpy
through your environment modules (using ``environment-modules`` or
``lmod``). This will also add the extension to the ``PYTHONPATH`` in
your current shell.
Instead of using Spack's environment modification capabilities through
the ``spack load`` command, you can load numpy through your
environment modules (using ``environment-modules`` or ``lmod``). This
will also add the extension to the ``PYTHONPATH`` in your current
shell.
.. code-block:: console
@@ -1818,6 +1776,130 @@ If you do not know the name of the specific numpy module you wish to
load, you can use the ``spack module tcl|lmod loads`` command to get
the name of the module from the Spack spec.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Activating Extensions in a View
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Another way to use extensions is to create a view, which merges the
python installation along with the extensions into a single prefix.
See :ref:`configuring_environment_views` for a more in-depth description
of views.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Activating Extensions Globally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
As an alternative to creating a merged prefix with Python and its extensions,
and prior to support for views, Spack has provided a means to install the
extension into the Spack installation prefix for the extendee. This has
been useful since extendable packages typically search their own
installation path for add-ons by default.
Global activations are performed with the ``spack activate`` command:
.. _cmd-spack-activate:
^^^^^^^^^^^^^^^^^^
``spack activate``
^^^^^^^^^^^^^^^^^^
.. code-block:: console
$ spack activate py-numpy
==> Activated extension py-setuptools@11.3.1%gcc@4.4.7 arch=linux-debian7-x86_64-3c74eb69 for python@2.7.8%gcc@4.4.7.
==> Activated extension py-nose@1.3.4%gcc@4.4.7 arch=linux-debian7-x86_64-5f70f816 for python@2.7.8%gcc@4.4.7.
==> Activated extension py-numpy@1.9.1%gcc@4.4.7 arch=linux-debian7-x86_64-66733244 for python@2.7.8%gcc@4.4.7.
Several things have happened here. The user requested that
``py-numpy`` be activated in the ``python`` installation it was built
with. Spack knows that ``py-numpy`` depends on ``py-nose`` and
``py-setuptools``, so it activated those packages first. Finally,
once all dependencies were activated in the ``python`` installation,
``py-numpy`` was activated as well.
If we run ``spack extensions`` again, we now see the three new
packages listed as activated:
.. code-block:: console
$ spack extensions python
==> python@2.7.8%gcc@4.4.7 arch=linux-debian7-x86_64-703c7a96
==> 36 extensions:
geos py-ipython py-pexpect py-pyside py-sip
py-basemap py-libxml2 py-pil py-pytz py-six
py-biopython py-mako py-pmw py-rpy2 py-sympy
py-cython py-matplotlib py-pychecker py-scientificpython py-virtualenv
py-dateutil py-mpi4py py-pygments py-scikit-learn
py-epydoc py-mx py-pylint py-scipy
py-gnuplot py-nose py-pyparsing py-setuptools
py-h5py py-numpy py-pyqt py-shiboken
==> 12 installed:
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
py-dateutil@2.4.0 py-nose@1.3.4 py-pyside@1.2.2
py-dateutil@2.4.0 py-numpy@1.9.1 py-pytz@2014.10
py-ipython@2.3.1 py-pygments@2.0.1 py-setuptools@11.3.1
py-matplotlib@1.4.2 py-pyparsing@2.0.3 py-six@1.9.0
==> 3 currently activated:
-- linux-debian7-x86_64 / gcc@4.4.7 --------------------------------
py-nose@1.3.4 py-numpy@1.9.1 py-setuptools@11.3.1
Now, when a user runs python, ``numpy`` will be available for import
*without* the user having to explicitly load it. ``python@2.7.8`` now
acts like a system Python installation with ``numpy`` installed inside
of it.
Spack accomplishes this by symbolically linking the *entire* prefix of
the ``py-numpy`` package into the prefix of the ``python`` package. To the
python interpreter, it looks like ``numpy`` is installed in the
``site-packages`` directory.
The only limitation of global activation is that you can only have a *single*
version of an extension activated at a time. This is because multiple
versions of the same extension would conflict if symbolically linked
into the same prefix. Users who want a different version of a package
can still get it by using environment modules or views, but they will have to
explicitly load their preferred version.
^^^^^^^^^^^^^^^^^^^^^^^^^^
``spack activate --force``
^^^^^^^^^^^^^^^^^^^^^^^^^^
If, for some reason, you want to activate a package *without* its
dependencies, you can use ``spack activate --force``:
.. code-block:: console
$ spack activate --force py-numpy
==> Activated extension py-numpy@1.9.1%gcc@4.4.7 arch=linux-debian7-x86_64-66733244 for python@2.7.8%gcc@4.4.7.
.. _cmd-spack-deactivate:
^^^^^^^^^^^^^^^^^^^^
``spack deactivate``
^^^^^^^^^^^^^^^^^^^^
We've seen how activating an extension can be used to set up a default
version of a Python module. Obviously, you may want to change that at
some point. ``spack deactivate`` is the command for this. There are
several variants:
* ``spack deactivate <extension>`` will deactivate a single
extension. If another activated extension depends on this one,
Spack will warn you and exit with an error.
* ``spack deactivate --force <extension>`` deactivates an extension
regardless of packages that depend on it.
* ``spack deactivate --all <extension>`` deactivates an extension and
all of its dependencies. Use ``--force`` to disregard dependents.
* ``spack deactivate --all <extendee>`` deactivates *all* activated
extensions of a package. For example, to deactivate *all* python
extensions, use:
.. code-block:: console
$ spack deactivate --all python
-----------------------
Filesystem requirements
-----------------------

View File

@@ -724,9 +724,10 @@ extends vs. depends_on
This is very similar to the naming dilemma above, with a slight twist.
As mentioned in the :ref:`Packaging Guide <packaging_extensions>`,
``extends`` and ``depends_on`` are very similar, but ``extends`` ensures
that the extension and extendee share the same prefix in views.
This allows the user to import a Python module without
``extends`` and ``depends_on`` are very similar, but ``extends`` adds
the ability to *activate* the package. Activation involves symlinking
everything in the installation prefix of the package to the installation
prefix of Python. This allows the user to import a Python module without
having to add that module to ``PYTHONPATH``.
When deciding between ``extends`` and ``depends_on``, the best rule of
@@ -734,7 +735,7 @@ thumb is to check the installation prefix. If Python libraries are
installed to ``<prefix>/lib/pythonX.Y/site-packages``, then you
should use ``extends``. If Python libraries are installed elsewhere
or the only files that get installed reside in ``<prefix>/bin``, then
don't use ``extends``.
don't use ``extends``, as symlinking the package wouldn't be useful.
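As a hypothetical illustration of this rule of thumb (both package names are invented):

.. code-block:: python

    class PyExample(PythonPackage):
        # Installs libraries into <prefix>/lib/pythonX.Y/site-packages,
        # so it should share a prefix with Python.
        extends("python")


    class ExampleTool(Package):
        # Only installs executables into <prefix>/bin, so a plain
        # dependency is enough.
        depends_on("python", type=("build", "run"))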
^^^^^^^^^^^^^^^^^^^^^
Alternatives to Spack

View File

@@ -193,10 +193,10 @@ Build system dependencies
As an extension of the R ecosystem, your package will obviously depend
on R to build and run. Normally, we would use ``depends_on`` to express
this, but for R packages, we use ``extends``. This implies a special
dependency on R, which is used to set environment variables such as
``R_LIBS`` uniformly. Since every R package needs this, the ``RPackage``
base class contains:
this, but for R packages, we use ``extends``. ``extends`` is similar to
``depends_on``, but adds an additional feature: the ability to "activate"
the package by symlinking it to the R installation directory. Since
every R package needs this, the ``RPackage`` base class contains:
.. code-block:: python

   extends('r')

View File

@@ -253,6 +253,27 @@ to update them.
multiple runs of ``spack style`` just to re-compute line numbers and
makes it much easier to fix errors directly off of the CI output.
.. warning::
Flake8 and ``pep8-naming`` require a number of dependencies in order
to run. If you installed ``py-flake8`` and ``py-pep8-naming``, the
easiest way to ensure the right packages are on your ``PYTHONPATH`` is
to run::
spack activate py-flake8
spack activate py-pep8-naming
so that all of the dependencies are symlinked to a central
location. If you see an error message like:
.. code-block:: console
Traceback (most recent call last):
File "/usr/bin/flake8", line 5, in <module>
from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
that means Flake8 couldn't find setuptools in your ``PYTHONPATH``.
^^^^^^^^^^^^^^^^^^^
Documentation Tests
@@ -288,9 +309,13 @@ All of these can be installed with Spack, e.g.
.. code-block:: console
$ spack load py-sphinx py-sphinx-rtd-theme py-sphinxcontrib-programoutput
$ spack activate py-sphinx
$ spack activate py-sphinx-rtd-theme
$ spack activate py-sphinxcontrib-programoutput
so that all of the dependencies are added to PYTHONPATH. If you see an error message
so that all of the dependencies are symlinked into that Python's
tree. Alternatively, you could arrange for their library
directories to be added to PYTHONPATH. If you see an error message
like:
.. code-block:: console

View File

@@ -233,8 +233,8 @@ packages will be listed as roots of the Environment.
All of the Spack commands that act on the list of installed specs are
Environment-sensitive in this way, including ``install``,
``uninstall``, ``find``, ``extensions``, and more. In the
:ref:`environment-configuration` section we will discuss
``uninstall``, ``activate``, ``deactivate``, ``find``, ``extensions``,
and more. In the :ref:`environment-configuration` section we will discuss
Environment-sensitive commands further.
^^^^^^^^^^^^^^^^^^^^^

View File

@@ -67,6 +67,7 @@ or refer to the full manual below.
build_settings
environments
containers
monitoring
mirrors
module_file_support
repositories
@@ -77,6 +78,12 @@ or refer to the full manual below.
extensions
pipelines
.. toctree::
:maxdepth: 2
:caption: Research
analyze
.. toctree::
:maxdepth: 2
:caption: Contributing

View File

@@ -0,0 +1,265 @@
.. Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _monitoring:
==========
Monitoring
==========
You can use a `spack monitor <https://github.com/spack/spack-monitor>`_ ("Spackmon")
server to store a database of your packages, builds, and associated metadata
for provenance, research, or some other kind of development. You should
follow the instructions in the `spack monitor documentation <https://spack-monitor.readthedocs.org>`_
to first create a server along with a username and token for yourself.
You can then use this guide to interact with the server.
-------------------
Analysis Monitoring
-------------------
To read about how to monitor an analysis (meaning you want to send analysis results
to a server) see :ref:`analyze_monitoring`.
---------------------
Monitoring An Install
---------------------
Since an install is typically when you build packages, this is the step we
logically want spack to monitor. Let's start with an example
where we want to monitor the install of hdf5. Unless you have disabled authentication
for the server, first export your spack monitor token and username to the environment:
.. code-block:: console
$ export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438
$ export SPACKMON_USER=spacky
By default, the host for your server is expected to be at ``http://127.0.0.1``
with a prefix of ``ms1``, and if this is the case, you can simply add the
``--monitor`` flag to the install command:
.. code-block:: console
$ spack install --monitor hdf5
If you need to customize the host or the prefix, you can do that as well:
.. code-block:: console
$ spack install --monitor --monitor-prefix monitor --monitor-host https://monitor-service.io hdf5
As a precaution, the spack client exits early if you have not provided
authentication credentials. For example, if you run the command above without
exporting your username or token, you'll see:
.. code-block:: console
==> Error: You are required to export SPACKMON_TOKEN and SPACKMON_USER
This extra check is to ensure that we don't start any builds,
and then discover that you forgot to export your token. However, if
your monitoring server has authentication disabled, you can tell the client
to skip this step:
.. code-block:: console
$ spack install --monitor --monitor-disable-auth hdf5
If the service is not running, the client cleanly exits early - the install will
not continue if you've asked it to monitor and there is no service.
For example, here is what you'll see if the monitoring service is not running:
.. code-block:: console
[Errno 111] Connection refused
If you want to continue builds (and stop monitoring) you can set the ``--monitor-keep-going``
flag.
.. code-block:: console
$ spack install --monitor --monitor-keep-going hdf5
This could mean that if a request fails, you only have partial or no data
added to your monitoring database. This setting will not be applied to the
first request to check if the server is running, but to subsequent requests.
If you don't have a monitor server running and you want to build, simply
don't provide the ``--monitor`` flag! Finally, if you want to provide one or
more tags to your build, you can do:
.. code-block:: console
# Add one tag, "pizza"
$ spack install --monitor --monitor-tags pizza hdf5
# Add two tags, "pizza" and "pasta"
$ spack install --monitor --monitor-tags pizza,pasta hdf5
----------------------------
Monitoring with Containerize
----------------------------
The same argument group is available to add to a containerize command.
^^^^^^
Docker
^^^^^^
To add monitoring to Docker container recipe generation using the defaults,
and assuming a monitor server running on localhost, you would
start with a spack.yaml in your present working directory:
.. code-block:: yaml
spack:
  specs:
  - samtools
And then do:
.. code-block:: console
# preview first
spack containerize --monitor
# and then write to a Dockerfile
spack containerize --monitor > Dockerfile
The install command will be edited to include commands for enabling monitoring.
However, getting secrets into the container for your monitor server is something
that should be done carefully. Specifically you should:
- Never try to define secrets as ENV, ARG, or using ``--build-arg``
- Do not try to get the secret into the container via a "temporary" file that you remove (it in fact will still exist in a layer)
Instead, it's recommended to use buildkit `as explained here <https://pythonspeed.com/articles/docker-build-secrets/>`_.
You'll need to again export environment variables for your spack monitor server:
.. code-block:: console
$ export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438
$ export SPACKMON_USER=spacky
And then use buildkit for your build, identifying the names of the secrets:
.. code-block:: console
$ DOCKER_BUILDKIT=1 docker build --secret id=st,env=SPACKMON_TOKEN --secret id=su,env=SPACKMON_USER -t spack/container .
The secrets are expected to come from your environment, and then will be temporarily mounted and available
at ``/run/secrets/<name>``. If you forget to supply them (and authentication is required) the build
will fail. If you need to build on your host (and interact with a spack monitor at localhost) you'll
need to tell Docker to use the host network:
.. code-block:: console
$ DOCKER_BUILDKIT=1 docker build --network="host" --secret id=st,env=SPACKMON_TOKEN --secret id=su,env=SPACKMON_USER -t spack/container .
^^^^^^^^^^^
Singularity
^^^^^^^^^^^
To add monitoring to a Singularity container build, the spack.yaml needs to
be modified slightly to specify wanting a different format:
.. code-block:: yaml
spack:
  specs:
  - samtools
  container:
    format: singularity
Again, generate the recipe:
.. code-block:: console
# preview first
$ spack containerize --monitor
# then write to a Singularity recipe
$ spack containerize --monitor > Singularity
Singularity doesn't have a direct way to define secrets at build time, so we have
to manually add a file, source the secrets in it, and then remove it.
Since Singularity doesn't have layers like Docker, deleting a file will truly
remove it from the container and history. So let's say we have this file,
``secrets.sh``:
.. code-block:: console
# secrets.sh
export SPACKMON_USER=spack
export SPACKMON_TOKEN=50445263afd8f67e59bd79bff597836ee6c05438
We would then generate the Singularity recipe, and add a files section,
a source of that file at the start of ``%post``, and **importantly**
a removal of the file at the end of that same section.
.. code-block::
Bootstrap: docker
From: spack/ubuntu-bionic:latest
Stage: build

%files
    secrets.sh /opt/secrets.sh

%post
    . /opt/secrets.sh

    # spack install commands are here
    ...

    # Don't forget to remove here!
    rm /opt/secrets.sh
You can then build the container as you normally would.
.. code-block:: console
$ sudo singularity build container.sif Singularity
------------------
Monitoring Offline
------------------
In the case that you want to save monitor results to your filesystem
and then upload them later (perhaps you are in an environment where you don't
have credentials or it isn't safe to use them) you can use the ``--monitor-save-local``
flag.
.. code-block:: console
$ spack install --monitor --monitor-save-local hdf5
This will save results in a "monitor" subfolder of your designated spack
reports folder, which defaults to ``$HOME/.spack/reports/monitor``. When
you are ready to upload them to a spack monitor server:
.. code-block:: console
$ spack monitor upload ~/.spack/reports/monitor
You can choose the root directory of results as shown above, or a specific
subdirectory. The command accepts other arguments to specify configuration
for the monitor.

View File

@@ -2634,12 +2634,9 @@ extendable package:
    extends('python')
    ...
This accomplishes a few things. Firstly, the Python package can set special
variables such as ``PYTHONPATH`` for all extensions when the run or build
environment is set up. Secondly, filesystem views can ensure that extensions
are put in the same prefix as their extendee. This ensures that Python in
a view can always locate its Python packages, even without environment
variables set.
Now, the ``py-numpy`` package can be used as an argument to ``spack
activate``. When it is activated, all the files in its prefix will be
symbolically linked into the prefix of the python package.
A package can only extend one other package at a time. To support packages
that may extend one of a list of other packages, Spack supports multiple
@@ -2687,8 +2684,9 @@ variant(s) are selected. This may be accomplished with conditional
...
Sometimes, certain files in one package will conflict with those in
another, which means they cannot both be used in a view at the
same time. In this case, you can tell Spack to ignore those files:
another, which means they cannot both be activated (symlinked) at the
same time. In this case, you can tell Spack to ignore those files
when it does the activation:
.. code-block:: python
@@ -2700,7 +2698,7 @@ same time. In this case, you can tell Spack to ignore those files:
...
The code above will prevent everything in the ``$prefix/bin/`` directory
from being linked in a view.
from being linked in at activation time.
.. note::
@@ -2724,6 +2722,67 @@ extensions; as a consequence python extension packages (those inheriting from
``PythonPackage``) likewise override ``add_files_to_view`` in order to rewrite
shebang lines which point to the Python interpreter.
^^^^^^^^^^^^^^^^^^^^^^^^^
Activation & deactivation
^^^^^^^^^^^^^^^^^^^^^^^^^
Adding an extension to a view is referred to as an activation. If the view is
maintained in the Spack installation prefix of the extendee this is called a
global activation. Activations may involve updating some centralized state
that is maintained by the extendee package, so there can be additional work
for adding extensions compared with non-extension packages.
Spack's ``Package`` class has default ``activate`` and ``deactivate``
implementations that handle symbolically linking extensions' prefixes
into a specified view. Extendable packages can override these methods
to add custom activate/deactivate logic of their own. For example,
the ``activate`` and ``deactivate`` methods in the Python class handle
symbolic linking of extensions, but they also handle details surrounding
Python's ``.pth`` files, and other aspects of Python packaging.
Spack's extensions mechanism is designed to be extensible, so that
other packages (like Ruby, R, Perl, etc.) can provide their own
custom extension management logic, as they may not handle modules the
same way that Python does.
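As a rough sketch of such an override (the method body is invented; the signature mirrors the default behavior described above):

.. code-block:: python

    class Ruby(Package):
        extendable = True

        def activate(self, extension, view, **kwargs):
            # Default behavior: symlink the extension's prefix into the view.
            super(Ruby, self).activate(extension, view, **kwargs)
            # Then do any Ruby-specific bookkeeping for the new gem here.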
Let's look at Python's activate function:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/python/package.py
:pyobject: Python.activate
:linenos:
This function is called on the *extendee* (Python). It first calls
``activate`` in the superclass, which handles symlinking the
extension package's prefix into the specified view. It then does
some special handling of the ``easy-install.pth`` file, part of
Python's setuptools.
Deactivate behaves similarly to activate, but it unlinks files:
.. literalinclude:: _spack_root/var/spack/repos/builtin/packages/python/package.py
:pyobject: Python.deactivate
:linenos:
Both of these methods call some custom functions in the Python
package. See the source for Spack's Python package for details.
^^^^^^^^^^^^^^^^^^^^
Activation arguments
^^^^^^^^^^^^^^^^^^^^
You may have noticed that the ``activate`` function defined above
takes keyword arguments. These are the keyword arguments from
``extends()``, and they are passed to both activate and deactivate.
This capability allows an extension to customize its own activation by
passing arguments to the extendee. Extendees can likewise implement
custom ``activate()`` and ``deactivate()`` functions to suit their
needs.
The only keyword argument supported by default is the ``ignore``
argument, which can take a regex, list of regexes, or a predicate to
determine which files *not* to symlink during activation.
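For example, a hypothetical extension that never wants its ``bin`` directory symlinked could pass a regex to ``ignore``:

.. code-block:: python

    class PyExample(PythonPackage):
        extends("python", ignore=r"bin/.*")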
.. _virtual-dependencies:
--------------------
@@ -3525,7 +3584,7 @@ will likely contain some overriding of default builder methods:
        def cmake_args(self):
            pass

    class AutotoolsBuilder(spack.build_systems.autotools.AutotoolsBuilder):
    class Autotoolsbuilder(spack.build_systems.autotools.AutotoolsBuilder):
        def configure_args(self):
            pass

View File

@@ -1000,16 +1000,45 @@ def hash_directory(directory, ignore=[]):
    return md5_hash.hexdigest()


def _try_unlink(path):
    try:
        os.unlink(path)
    except (IOError, OSError):
        # But if that fails, that's OK.
        pass


@contextmanager
@system_path_filter
def write_tmp_and_move(filename):
    """Write to a temporary file, then move into place."""
    dirname = os.path.dirname(filename)
    basename = os.path.basename(filename)
    tmp = os.path.join(dirname, ".%s.tmp" % basename)
    with open(tmp, "w") as f:
        yield f
    shutil.move(tmp, filename)


def write_tmp_and_move(path, mode="w"):
    """Write to a temporary file in the same directory, then move into place."""
    # Rely on NamedTemporaryFile to give a unique file without races
    # in the directory of the target file.
    file = tempfile.NamedTemporaryFile(
        prefix="." + os.path.basename(path),
        suffix=".tmp",
        dir=os.path.dirname(path),
        mode=mode,
        delete=False,  # we delete it ourselves
    )
    tmp_path = file.name

    try:
        yield file
    except BaseException:
        # On any failure, try to remove the temporary file.
        _try_unlink(tmp_path)
        raise
    finally:
        # Always close the file descriptor
        file.close()

    # Atomically move into existence.
    try:
        os.rename(tmp_path, path)
    except (IOError, OSError):
        _try_unlink(tmp_path)
        raise
@contextmanager

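For context, a minimal sketch of how this context manager is used (the path and contents are invented):

```python
from llnl.util.filesystem import write_tmp_and_move

# Writes go to a hidden .tmp file next to the target, which is renamed
# over the target only if the block succeeds; on an exception the
# temporary file is deleted instead.
with write_tmp_and_move("/tmp/cache/index.json") as f:
    f.write('{"database": {}}')
```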
View File

@@ -4,7 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
#: PEP440 canonical <major>.<minor>.<micro>.<devN> string
__version__ = "0.19.1"
__version__ = "0.19.0.dev0"
spack_version = __version__

View File

@@ -288,7 +288,7 @@ def _check_build_test_callbacks(pkgs, error_cls):
    errors = []
    for pkg_name in pkgs:
        pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
        test_callbacks = getattr(pkg_cls, "build_time_test_callbacks", None)
        test_callbacks = pkg_cls.build_time_test_callbacks

        if test_callbacks and "test" in test_callbacks:
            msg = '{0} package contains "test" method in ' "build_time_test_callbacks"

View File

@@ -293,12 +293,10 @@ def update_spec(self, spec, found_list):
                cur_entry["spec"] = new_entry["spec"]
                break
        else:
            current_list.append(
                {
                    "mirror_url": new_entry["mirror_url"],
                    "spec": new_entry["spec"],
                }
            )
            current_list.append = {
                "mirror_url": new_entry["mirror_url"],
                "spec": new_entry["spec"],
            }

    def update(self, with_cooldown=False):
        """Make sure local cache of buildcache index files is up to date.
@@ -556,9 +554,9 @@ class NoOverwriteException(spack.error.SpackError):
"""
    def __init__(self, file_path):
        super(NoOverwriteException, self).__init__(
            '"{}" exists in buildcache. Use --force flag to overwrite.'.format(file_path)
        )
        err_msg = "\n%s\nexists\n" % file_path
        err_msg += "Use -f option to overwrite."
        super(NoOverwriteException, self).__init__(err_msg)
class NoGpgException(spack.error.SpackError):

View File

@@ -978,9 +978,22 @@ def add_modifications_for_dep(dep):
    if set_package_py_globals:
        set_module_variables_for_package(dpkg)

    current_module = ModuleChangePropagator(spec.package)
    dpkg.setup_dependent_package(current_module, spec)
    current_module.propagate_changes_to_mro()

    # Allow dependencies to modify the module
    # Get list of modules that may need updating
    modules = []
    for cls in inspect.getmro(type(spec.package)):
        module = cls.module
        if module == spack.package_base:
            break
        modules.append(module)

    # Execute changes as if on a single module
    # copy dict to ensure prior changes are available
    changes = spack.util.pattern.Bunch()
    dpkg.setup_dependent_package(changes, spec)

    for module in modules:
        module.__dict__.update(changes.__dict__)

    if context == "build":
        builder = spack.builder.create(dpkg)
@@ -1424,51 +1437,3 @@ def write_log_summary(out, log_type, log, last=None):
        # If no errors are found but warnings are, display warnings
        out.write("\n%s found in %s log:\n" % (plural(nwar, "warning"), log_type))
        out.write(make_log_context(warnings))
class ModuleChangePropagator(object):
    """Wrapper class to accept changes to a package.py Python module, and propagate them in the
    MRO of the package.

    It is mainly used as a substitute of the ``package.py`` module, when calling the
    "setup_dependent_package" function during build environment setup.
    """

    _PROTECTED_NAMES = ("package", "current_module", "modules_in_mro", "_set_attributes")

    def __init__(self, package):
        self._set_self_attributes("package", package)
        self._set_self_attributes("current_module", package.module)

        #: Modules for the classes in the MRO up to PackageBase
        modules_in_mro = []
        for cls in inspect.getmro(type(package)):
            module = cls.module
            if module == self.current_module:
                continue
            if module == spack.package_base:
                break
            modules_in_mro.append(module)
        self._set_self_attributes("modules_in_mro", modules_in_mro)
        self._set_self_attributes("_set_attributes", {})

    def _set_self_attributes(self, key, value):
        super(ModuleChangePropagator, self).__setattr__(key, value)

    def __getattr__(self, item):
        return getattr(self.current_module, item)

    def __setattr__(self, key, value):
        if key in ModuleChangePropagator._PROTECTED_NAMES:
            msg = 'Cannot set attribute "{}" in ModuleMonkeyPatcher'.format(key)
            return AttributeError(msg)

        setattr(self.current_module, key, value)
        self._set_attributes[key] = value

    def propagate_changes_to_mro(self):
        for module_in_mro in self.modules_in_mro:
            module_in_mro.__dict__.update(self._set_attributes)
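For intuition, a stripped-down sketch of the propagation idea that both versions above implement (class names invented):

```python
import inspect
import sys

class Base: pass
class Mid(Base): pass
class Leaf(Mid): pass

# Apply the same attribute changes to the module that defines each class
# in Leaf's MRO, stopping before Base -- mirroring how changes made in
# setup_dependent_package propagate up to (but not including) PackageBase.
changes = {"tool_path": "/usr/bin/tool"}
for cls in inspect.getmro(Leaf):
    if cls is Base:
        break
    sys.modules[cls.__module__].__dict__.update(changes)
```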

View File

@@ -7,7 +7,7 @@
import os.path
import stat
import subprocess
from typing import List # novm # noqa: F401
from typing import List # novm
import llnl.util.filesystem as fs
import llnl.util.tty as tty
@@ -427,15 +427,15 @@ def _do_patch_libtool(self):
            x.filter(regex="-nostdlib", repl="", string=True)
            rehead = r"/\S*/"
            for o in [
                r"fjhpctag\.o",
                r"fjcrt0\.o",
                r"fjlang08\.o",
                r"fjomp\.o",
                r"crti\.o",
                r"crtbeginS\.o",
                r"crtendS\.o",
                "fjhpctag.o",
                "fjcrt0.o",
                "fjlang08.o",
                "fjomp.o",
                "crti.o",
                "crtbeginS.o",
                "crtendS.o",
            ]:
                x.filter(regex=(rehead + o), repl="")
                x.filter(regex=(rehead + o), repl="", string=True)
        elif self.pkg.compiler.name == "dpcpp":
            # Hack to filter out spurious predep_objects when building with Intel dpcpp
            # (see https://github.com/spack/spack/issues/32863):

View File

@@ -205,7 +205,13 @@ def initconfig_hardware_entries(self):
            entries.append(cmake_cache_path("CUDA_TOOLKIT_ROOT_DIR", cudatoolkitdir))
            cudacompiler = "${CUDA_TOOLKIT_ROOT_DIR}/bin/nvcc"
            entries.append(cmake_cache_path("CMAKE_CUDA_COMPILER", cudacompiler))
            entries.append(cmake_cache_path("CMAKE_CUDA_HOST_COMPILER", "${CMAKE_CXX_COMPILER}"))
            if spec.satisfies("^mpi"):
                entries.append(cmake_cache_path("CMAKE_CUDA_HOST_COMPILER", "${MPI_CXX_COMPILER}"))
            else:
                entries.append(
                    cmake_cache_path("CMAKE_CUDA_HOST_COMPILER", "${CMAKE_CXX_COMPILER}")
                )

        return entries

View File

@@ -2,6 +2,7 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import glob
import inspect
import os
import re
@@ -15,7 +16,6 @@
import spack.builder
import spack.multimethod
import spack.package_base
import spack.spec
from spack.directives import build_system, depends_on, extends
from spack.error import NoHeadersError, NoLibrariesError, SpecError
from spack.version import Version
@@ -219,34 +219,13 @@ def list_url(cls):
        name = cls.pypi.split("/")[0]
        return "https://pypi.org/simple/" + name + "/"

    def update_external_dependencies(self):
        """
        Ensure all external python packages have a python dependency

        If another package in the DAG depends on python, we use that
        python for the dependency of the external. If not, we assume
        that the external PythonPackage is installed into the same
        directory as the python it depends on.
        """
        # TODO: Include this in the solve, rather than instantiating post-concretization
        if "python" not in self.spec:
            if "python" in self.spec.root:
                python = self.spec.root["python"]
            else:
                python = spack.spec.Spec("python")
                repo = spack.repo.path.repo_for_pkg(python)
                python.namespace = repo.namespace
                python._mark_concrete()
                python.external_path = self.prefix
            self.spec.add_dependency_edge(python, ("build", "link", "run"))

    @property
    def headers(self):
        """Discover header files in platlib."""
        # Headers may be in either location
        include = self.prefix.join(self.spec["python"].package.include)
        platlib = self.prefix.join(self.spec["python"].package.platlib)
        include = self.prefix.join(self.include)
        platlib = self.prefix.join(self.platlib)
        headers = fs.find_all_headers(include) + fs.find_all_headers(platlib)

        if headers:
@@ -255,13 +234,29 @@ def headers(self):
            msg = "Unable to locate {} headers in {} or {}"
            raise NoHeadersError(msg.format(self.spec.name, include, platlib))

    @property
    def include(self):
        include = glob.glob(self.prefix.include.join("python*"))
        if include:
            return include[0]
        return self.spec["python"].package.include

    @property
    def platlib(self):
        for libname in ("lib", "lib64"):
            platlib = glob.glob(self.prefix.join(libname).join("python*").join("site-packages"))
            if platlib:
                return platlib[0]
        return self.spec["python"].package.platlib

    @property
    def libs(self):
        """Discover libraries in platlib."""
        # Remove py- prefix in package name
        library = "lib" + self.spec.name[3:].replace("-", "?")
        root = self.prefix.join(self.spec["python"].package.platlib)
        root = self.prefix.join(self.platlib)

        for shared in [True, False]:
            libs = fs.find_libraries(library, root, shared=shared, recursive=True)

View File

@@ -46,10 +46,10 @@ class SConsBuilder(BaseBuilder):
    phases = ("build", "install")

    #: Names associated with package methods in the old build-system format
    legacy_methods = ("build_test",)
    legacy_methods = ("install_args", "build_test")

    #: Same as legacy_methods, but the signature is different
    legacy_long_methods = ("build_args", "install_args")
    legacy_long_methods = ("build_args",)

    #: Names associated with package attributes in the old build-system format
    legacy_attributes = ("build_time_test_callbacks",)
@@ -66,13 +66,13 @@ def build(self, pkg, spec, prefix):
        args = self.build_args(spec, prefix)
        inspect.getmodule(self.pkg).scons(*args)

    def install_args(self, spec, prefix):
    def install_args(self):
        """Arguments to pass to install."""
        return []

    def install(self, pkg, spec, prefix):
        """Install the package."""
        args = self.install_args(spec, prefix)
        args = self.install_args()
        inspect.getmodule(self.pkg).scons("install", *args)

View File

@@ -6,7 +6,7 @@
import copy
import functools
import inspect
from typing import List, Optional, Tuple # noqa: F401
from typing import List, Optional, Tuple
import six
@@ -127,12 +127,7 @@ def __init__(self, wrapped_pkg_object, root_builder):
wrapper_cls = type(self)
bases = (package_cls, wrapper_cls)
new_cls_name = package_cls.__name__ + "Wrapper"
# Forward attributes that might be monkey patched later
new_cls = type(
new_cls_name,
bases,
{"run_tests": property(lambda x: x.wrapped_package_object.run_tests)},
)
new_cls = type(new_cls_name, bases, {})
new_cls.__module__ = package_cls.__module__
self.__class__ = new_cls
self.__dict__.update(wrapped_pkg_object.__dict__)
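One side of this hunk forwards `run_tests` through a property so that monkey patches applied to the wrapped package object later remain visible on the dynamically created wrapper class. A self-contained illustration of that pattern, with made-up class names:

```python
class Pkg:
    run_tests = False

wrapped = Pkg()
PkgWrapper = type(
    "PkgWrapper",
    (Pkg,),
    {"run_tests": property(lambda self: wrapped.run_tests)},
)
w = PkgWrapper()
wrapped.run_tests = True  # monkey patch the wrapped object...
print(w.run_tests)        # ...and the wrapper sees it: True
```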

View File

@@ -1769,9 +1769,9 @@ def reproduce_ci_job(url, work_dir):
download_and_extract_artifacts(url, work_dir)
lock_file = fs.find(work_dir, "spack.lock")[0]
repro_lock_dir = os.path.dirname(lock_file)
concrete_env_dir = os.path.dirname(lock_file)
tty.debug("Found lock file in: {0}".format(repro_lock_dir))
tty.debug("Concrete environment directory: {0}".format(concrete_env_dir))
yaml_files = fs.find(work_dir, ["*.yaml", "*.yml"])
@@ -1794,21 +1794,6 @@ def reproduce_ci_job(url, work_dir):
if pipeline_yaml:
tty.debug("\n{0} is likely your pipeline file".format(yf))
relative_concrete_env_dir = pipeline_yaml["variables"]["SPACK_CONCRETE_ENV_DIR"]
tty.debug("Relative environment path used by cloud job: {0}".format(relative_concrete_env_dir))
# Using the relative concrete environment path found in the generated
# pipeline variable above, copy the spack environment files so they'll
# be found in the same location as when the job ran in the cloud.
concrete_env_dir = os.path.join(work_dir, relative_concrete_env_dir)
if not os.path.isdir(concrete_env_dir):
fs.mkdirp(concrete_env_dir)
copy_lock_path = os.path.join(concrete_env_dir, "spack.lock")
orig_yaml_path = os.path.join(repro_lock_dir, "spack.yaml")
copy_yaml_path = os.path.join(concrete_env_dir, "spack.yaml")
shutil.copyfile(lock_file, copy_lock_path)
shutil.copyfile(orig_yaml_path, copy_yaml_path)
# Find the install script in the unzipped artifacts and make it executable
install_script = fs.find(work_dir, "install.sh")[0]
st = os.stat(install_script)
@@ -1864,7 +1849,6 @@ def reproduce_ci_job(url, work_dir):
if repro_details:
mount_as_dir = repro_details["ci_project_dir"]
mounted_repro_dir = os.path.join(mount_as_dir, rel_repro_dir)
mounted_env_dir = os.path.join(mount_as_dir, relative_concrete_env_dir)
# We will also try to clone spack from your local checkout and
# reproduce the state present during the CI build, and put that into
@@ -1948,7 +1932,7 @@ def reproduce_ci_job(url, work_dir):
inst_list.append(" $ source {0}/share/spack/setup-env.sh\n".format(spack_root))
inst_list.append(
" $ spack env activate --without-view {0}\n\n".format(
mounted_env_dir if job_image else repro_dir
mounted_repro_dir if job_image else repro_dir
)
)
inst_list.append(" - Run the install script\n\n")

View File

@@ -30,7 +30,6 @@
import spack.paths
import spack.spec
import spack.store
import spack.traverse as traverse
import spack.user_environment as uenv
import spack.util.spack_json as sjson
import spack.util.string
@@ -465,12 +464,11 @@ def format_list(specs):
# create the final, formatted versions of all specs
formatted = []
for spec in specs:
formatted.append((fmt(spec), spec))
if deps:
for depth, dep in traverse.traverse_tree([spec], depth_first=False):
formatted.append((fmt(dep.spec, depth), dep.spec))
for depth, dep in spec.traverse(root=False, depth=True):
formatted.append((fmt(dep, depth), dep))
formatted.append(("", None)) # mark newlines
else:
formatted.append((fmt(spec), spec))
# unless any of these are set, we can just colify and be done.
if not any((deps, paths)):
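Both sides of this hunk produce `(depth, node)` pairs so the formatter can indent dependencies under their root spec. A toy, Spack-free version of that depth-indexed traversal:

```python
def walk(node, depth=0):
    # yield each node with its depth, depth-first
    yield depth, node["name"]
    for child in node.get("deps", []):
        for pair in walk(child, depth + 1):
            yield pair

tree = {"name": "root", "deps": [{"name": "dep-a", "deps": [{"name": "dep-b"}]}]}
for depth, name in walk(tree):
    print("    " * depth + name)
```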

View File

@@ -0,0 +1,53 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import llnl.util.tty as tty
import spack.cmd
import spack.cmd.common.arguments as arguments
import spack.environment as ev
from spack.filesystem_view import YamlFilesystemView
description = "activate a package extension"
section = "extensions"
level = "long"
def setup_parser(subparser):
subparser.add_argument(
"-f", "--force", action="store_true", help="activate without first activating dependencies"
)
subparser.add_argument("-v", "--view", metavar="VIEW", type=str, help="the view to operate on")
arguments.add_common_arguments(subparser, ["installed_spec"])
def activate(parser, args):
tty.warn(
"spack activate is deprecated in favor of " "environments and will be removed in v0.19.0"
)
specs = spack.cmd.parse_specs(args.spec)
if len(specs) != 1:
tty.die("activate requires one spec. %d given." % len(specs))
spec = spack.cmd.disambiguate_spec(specs[0], ev.active_environment())
if not spec.package.is_extension:
tty.die("%s is not an extension." % spec.name)
if args.view:
target = args.view
else:
target = spec.package.extendee_spec.prefix
view = YamlFilesystemView(target, spack.store.layout)
if spec.package.is_activated(view):
tty.msg("Package %s is already activated." % specs[0].short_spec)
return
# TODO: refactor FilesystemView.add_extension and use that here (so there
# aren't two ways of activating extensions)
spec.package.do_activate(view, with_dependencies=not args.force)

View File

@@ -52,7 +52,6 @@
CLINGO_JSON = "$spack/share/spack/bootstrap/github-actions-v0.4/clingo.json"
GNUPG_JSON = "$spack/share/spack/bootstrap/github-actions-v0.4/gnupg.json"
PATCHELF_JSON = "$spack/share/spack/bootstrap/github-actions-v0.4/patchelf.json"
# Metadata for a generated source mirror
SOURCE_METADATA = {
@@ -444,7 +443,6 @@ def write_metadata(subdir, metadata):
abs_directory, rel_directory = write_metadata(subdir="binaries", metadata=BINARY_METADATA)
shutil.copy(spack.util.path.canonicalize_path(CLINGO_JSON), abs_directory)
shutil.copy(spack.util.path.canonicalize_path(GNUPG_JSON), abs_directory)
shutil.copy(spack.util.path.canonicalize_path(PATCHELF_JSON), abs_directory)
instructions += cmd.format("local-binaries", rel_directory)
print(instructions)

View File

@@ -0,0 +1,96 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import llnl.util.tty as tty
import spack.cmd
import spack.cmd.common.arguments as arguments
import spack.environment as ev
import spack.graph
import spack.store
from spack.filesystem_view import YamlFilesystemView
description = "deactivate a package extension"
section = "extensions"
level = "long"
def setup_parser(subparser):
subparser.add_argument(
"-f",
"--force",
action="store_true",
help="run deactivation even if spec is NOT currently activated",
)
subparser.add_argument("-v", "--view", metavar="VIEW", type=str, help="the view to operate on")
subparser.add_argument(
"-a",
"--all",
action="store_true",
help="deactivate all extensions of an extendable package, or "
"deactivate an extension AND its dependencies",
)
arguments.add_common_arguments(subparser, ["installed_spec"])
def deactivate(parser, args):
tty.warn(
"spack deactivate is deprecated in favor of " "environments and will be removed in v0.19.0"
)
specs = spack.cmd.parse_specs(args.spec)
if len(specs) != 1:
tty.die("deactivate requires one spec. %d given." % len(specs))
env = ev.active_environment()
spec = spack.cmd.disambiguate_spec(specs[0], env)
pkg = spec.package
if args.view:
target = args.view
elif pkg.is_extension:
target = pkg.extendee_spec.prefix
elif pkg.extendable:
target = spec.prefix
view = YamlFilesystemView(target, spack.store.layout)
if args.all:
if pkg.extendable:
tty.msg("Deactivating all extensions of %s" % pkg.spec.short_spec)
ext_pkgs = spack.store.db.activated_extensions_for(spec, view.extensions_layout)
for ext_pkg in ext_pkgs:
ext_pkg.spec.normalize()
if ext_pkg.is_activated(view):
ext_pkg.do_deactivate(view, force=True)
elif pkg.is_extension:
if not args.force and not spec.package.is_activated(view):
tty.die("%s is not activated." % pkg.spec.short_spec)
tty.msg("Deactivating %s and all dependencies." % pkg.spec.short_spec)
nodes_in_topological_order = spack.graph.topological_sort(spec)
for espec in reversed(nodes_in_topological_order):
epkg = espec.package
if epkg.extends(pkg.extendee_spec):
if epkg.is_activated(view) or args.force:
epkg.do_deactivate(view, force=args.force)
else:
tty.die("spack deactivate --all requires an extendable package " "or an extension.")
else:
if not pkg.is_extension:
tty.die(
"spack deactivate requires an extension.", "Did you mean 'spack deactivate --all'?"
)
if not args.force and not spec.package.is_activated(view):
tty.die("Package %s is not activated." % spec.short_spec)
spec.package.do_deactivate(view, force=args.force)

View File

@@ -14,6 +14,7 @@
import spack.environment as ev
import spack.repo
import spack.store
from spack.filesystem_view import YamlFilesystemView
description = "list extensions for package"
section = "extensions"
@@ -37,9 +38,10 @@ def setup_parser(subparser):
"--show",
action="store",
default="all",
choices=("packages", "installed", "all"),
choices=("packages", "installed", "activated", "all"),
help="show only part of output",
)
subparser.add_argument("-v", "--view", metavar="VIEW", type=str, help="the view to operate on")
subparser.add_argument(
"spec",
@@ -89,6 +91,13 @@ def extensions(parser, args):
tty.msg("%d extensions:" % len(extensions))
colify(ext.name for ext in extensions)
if args.view:
target = args.view
else:
target = spec.prefix
view = YamlFilesystemView(target, spack.store.layout)
if args.show in ("installed", "all"):
# List specs of installed extensions.
installed = [s.spec for s in spack.store.db.installed_extensions_for(spec)]
@@ -100,3 +109,14 @@ def extensions(parser, args):
else:
tty.msg("%d installed:" % len(installed))
cmd.display_specs(installed, args)
if args.show in ("activated", "all"):
# List specs of activated extensions.
activated = view.extensions_layout.extension_map(spec)
if args.show == "all":
print()
if not activated:
tty.msg("None activated.")
else:
tty.msg("%d activated:" % len(activated))
cmd.display_specs(activated.values(), args)

View File

@@ -242,8 +242,8 @@ def print_tests(pkg):
# So the presence of a callback in Spack does not necessarily correspond
# to the actual presence of build-time tests for a package.
for callbacks, phase in [
(getattr(pkg, "build_time_test_callbacks", None), "Build"),
(getattr(pkg, "install_time_test_callbacks", None), "Install"),
(pkg.build_time_test_callbacks, "Build"),
(pkg.install_time_test_callbacks, "Install"),
]:
color.cprint("")
color.cprint(section_title("Available {0} Phase Test Methods:".format(phase)))

View File

@@ -9,7 +9,6 @@
import llnl.util.tty as tty
import spack.builder
import spack.cmd
import spack.cmd.common.arguments as arguments
import spack.environment as ev
@@ -135,7 +134,6 @@ def location(parser, args):
# Either concretize or filter from already concretized environment
spec = spack.cmd.matching_spec_from_env(spec)
pkg = spec.package
builder = spack.builder.create(pkg)
if args.stage_dir:
print(pkg.stage.path)
@@ -143,10 +141,10 @@ def location(parser, args):
if args.build_dir:
# Out of source builds have build_directory defined
if hasattr(builder, "build_directory"):
if hasattr(pkg, "build_directory"):
# build_directory can be either absolute or relative to the stage path
# in either case os.path.join makes it absolute
print(os.path.normpath(os.path.join(pkg.stage.path, builder.build_directory)))
print(os.path.normpath(os.path.join(pkg.stage.path, pkg.build_directory)))
return
# Otherwise assume in-source builds

View File

@@ -9,7 +9,6 @@
import llnl.util.tty as tty
import llnl.util.tty.colify as colify
import spack.caches
import spack.cmd
import spack.cmd.common.arguments as arguments
import spack.concretize
@@ -357,9 +356,12 @@ def versions_per_spec(args):
return num_versions
def create_mirror_for_individual_specs(mirror_specs, path, skip_unstable_versions):
present, mirrored, error = spack.mirror.create(path, mirror_specs, skip_unstable_versions)
tty.msg("Summary for mirror in {}".format(path))
def create_mirror_for_individual_specs(mirror_specs, directory_hint, skip_unstable_versions):
local_push_url = local_mirror_url_from_user(directory_hint)
present, mirrored, error = spack.mirror.create(
local_push_url, mirror_specs, skip_unstable_versions
)
tty.msg("Summary for mirror in {}".format(local_push_url))
process_mirror_stats(present, mirrored, error)
@@ -377,6 +379,21 @@ def process_mirror_stats(present, mirrored, error):
sys.exit(1)
def local_mirror_url_from_user(directory_hint):
"""Return a file:// url pointing to the local mirror to be used.
Args:
directory_hint (str or None): directory in which to create the mirror. If None,
defaults to "config:source_cache".
"""
mirror_directory = spack.util.path.canonicalize_path(
directory_hint or spack.config.get("config:source_cache")
)
tmp_mirror = spack.mirror.Mirror(mirror_directory)
local_url = url_util.format(tmp_mirror.push_url)
return local_url
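In rough terms, `local_mirror_url_from_user` canonicalizes the directory hint (or the configured source cache) and formats it as a `file://` push URL. A simplified stand-in, ignoring Spack's Mirror and url_util machinery (the default path below is invented for illustration):

```python
import os

def local_mirror_url(directory_hint, default="~/.spack/var/cache"):
    # canonicalize the hint, or fall back to the (hypothetical) default
    path = os.path.abspath(os.path.expanduser(directory_hint or default))
    return "file://" + path

print(local_mirror_url("./mirror"))  # file:///<cwd>/mirror
print(local_mirror_url(None))        # file:///home/<user>/.spack/var/cache
```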
def mirror_create(args):
"""Create a directory to be used as a spack mirror, and fill it with
package archives.
@@ -407,12 +424,9 @@ def mirror_create(args):
"The option '--all' already implies mirroring all versions for each package.",
)
# When no directory is provided, the source dir is used
path = args.directory or spack.caches.fetch_cache_location()
if args.all and not ev.active_environment():
create_mirror_for_all_specs(
path=path,
directory_hint=args.directory,
skip_unstable_versions=args.skip_unstable_versions,
selection_fn=not_excluded_fn(args),
)
@@ -420,7 +434,7 @@ def mirror_create(args):
if args.all and ev.active_environment():
create_mirror_for_all_specs_inside_environment(
path=path,
directory_hint=args.directory,
skip_unstable_versions=args.skip_unstable_versions,
selection_fn=not_excluded_fn(args),
)
@@ -429,15 +443,16 @@ def mirror_create(args):
mirror_specs = concrete_specs_from_user(args)
create_mirror_for_individual_specs(
mirror_specs,
path=path,
directory_hint=args.directory,
skip_unstable_versions=args.skip_unstable_versions,
)
def create_mirror_for_all_specs(path, skip_unstable_versions, selection_fn):
def create_mirror_for_all_specs(directory_hint, skip_unstable_versions, selection_fn):
mirror_specs = all_specs_with_all_versions(selection_fn=selection_fn)
local_push_url = local_mirror_url_from_user(directory_hint=directory_hint)
mirror_cache, mirror_stats = spack.mirror.mirror_cache_and_stats(
path, skip_unstable_versions=skip_unstable_versions
local_push_url, skip_unstable_versions=skip_unstable_versions
)
for candidate in mirror_specs:
pkg_cls = spack.repo.path.get_pkg_class(candidate.name)
@@ -447,11 +462,13 @@ def create_mirror_for_all_specs(path, skip_unstable_versions, selection_fn):
process_mirror_stats(*mirror_stats.stats())
def create_mirror_for_all_specs_inside_environment(path, skip_unstable_versions, selection_fn):
def create_mirror_for_all_specs_inside_environment(
directory_hint, skip_unstable_versions, selection_fn
):
mirror_specs = concrete_specs_from_environment(selection_fn=selection_fn)
create_mirror_for_individual_specs(
mirror_specs,
path=path,
directory_hint=directory_hint,
skip_unstable_versions=skip_unstable_versions,
)

View File

@@ -127,10 +127,8 @@ def python_interpreter(args):
console.runsource(startup.read(), startup_file, "exec")
if args.python_command:
propagate_exceptions_from(console)
console.runsource(args.python_command)
elif args.python_args:
propagate_exceptions_from(console)
sys.argv = args.python_args
with open(args.python_args[0]) as file:
console.runsource(file.read(), args.python_args[0], "exec")
@@ -151,18 +149,3 @@ def python_interpreter(args):
platform.machine(),
)
)
def propagate_exceptions_from(console):
"""Set sys.excepthook to let uncaught exceptions return 1 to the shell.
Args:
console (code.InteractiveConsole): the console that needs a change in sys.excepthook
"""
console.push("import sys")
console.push("_wrapped_hook = sys.excepthook")
console.push("def _hook(exc_type, exc_value, exc_tb):")
console.push(" _wrapped_hook(exc_type, exc_value, exc_tb)")
console.push(" sys.exit(1)")
console.push("")
console.push("sys.excepthook = _hook")

View File

@@ -11,7 +11,6 @@
import llnl.util.tty as tty
from llnl.util.filesystem import working_dir
import spack
import spack.cmd.common.arguments as arguments
import spack.config
import spack.paths
@@ -25,7 +24,7 @@
# tutorial configuration parameters
tutorial_branch = "releases/v%s" % ".".join(str(v) for v in spack.spack_version_info[:2])
tutorial_branch = "releases/v0.18"
tutorial_mirror = "file:///mirror"
tutorial_key = os.path.join(spack.paths.share_path, "keys", "tutorial.pub")

View File

@@ -17,7 +17,6 @@
import spack.package_base
import spack.repo
import spack.store
import spack.traverse as traverse
from spack.database import InstallStatuses
description = "remove installed packages"
@@ -145,7 +144,11 @@ def installed_dependents(specs, env):
active environment, and one from specs to dependent installs outside of
the active environment.
Every installed dependent spec is listed once.
Any of the input specs may appear in both mappings (if there are
dependents both inside and outside the current environment).
If a dependent spec is used both by the active environment and by
an inactive environment, it will only appear in the first mapping.
If there is no currently active environment, the first mapping will be
empty.
@@ -155,27 +158,19 @@ def installed_dependents(specs, env):
env_hashes = set(env.all_hashes()) if env else set()
# Ensure we stop traversal at input specs.
visited = set(s.dag_hash() for s in specs)
all_specs_in_db = spack.store.db.query()
for spec in specs:
for dpt in traverse.traverse_nodes(
spec.dependents(deptype="all"),
direction="parents",
visited=visited,
deptype="all",
root=True,
key=lambda s: s.dag_hash(),
):
hash = dpt.dag_hash()
# Ensure that all the specs we get are installed
record = spack.store.db.query_local_by_spec_hash(hash)
if record is None or not record.installed:
continue
if hash in env_hashes:
active_dpts.setdefault(spec, set()).add(dpt)
else:
outside_dpts.setdefault(spec, set()).add(dpt)
installed = [x for x in all_specs_in_db if spec in x]
# separate installed dependents into dpts in this environment and
# dpts that are outside this environment
for dpt in installed:
if dpt not in specs:
if dpt.dag_hash() in env_hashes:
active_dpts.setdefault(spec, set()).add(dpt)
else:
outside_dpts.setdefault(spec, set()).add(dpt)
return active_dpts, outside_dpts
@@ -255,7 +250,7 @@ def is_ready(dag_hash):
if force:
return True
record = spack.store.db.query_local_by_spec_hash(dag_hash)
_, record = spack.store.db.query_by_spec_hash(dag_hash)
if not record.ref_count:
return True

View File

@@ -45,7 +45,7 @@
import llnl.util.lang
import llnl.util.tty as tty
from llnl.util.filesystem import mkdirp, rename
from llnl.util.filesystem import mkdirp, write_tmp_and_move
import spack.compilers
import spack.paths
@@ -287,10 +287,8 @@ def _write_section(self, section):
parent = os.path.dirname(self.path)
mkdirp(parent)
tmp = os.path.join(parent, ".%s.tmp" % os.path.basename(self.path))
with open(tmp, "w") as f:
with write_tmp_and_move(self.path) as f:
syaml.dump_config(data_to_write, stream=f, default_flow_style=False)
rename(tmp, self.path)
except (yaml.YAMLError, IOError) as e:
raise ConfigFileError("Error writing to config file: '%s'" % str(e))
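A minimal sketch of what a `write_tmp_and_move` helper plausibly does, assuming the usual write-to-hidden-temp-then-rename pattern that the replaced lines spell out by hand; this is an illustration, not `llnl.util.filesystem`'s actual implementation:

```python
import os
from contextlib import contextmanager

@contextmanager
def write_tmp_and_move(path):
    parent = os.path.dirname(path) or "."
    tmp = os.path.join(parent, ".%s.tmp" % os.path.basename(path))
    with open(tmp, "w") as f:
        yield f               # caller writes to the hidden temp file
    os.rename(tmp, path)      # atomic replacement on POSIX filesystems

with write_tmp_and_move("example.yaml") as f:  # made-up filename
    f.write("config: {}\n")
```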

View File

@@ -26,7 +26,7 @@
import socket
import sys
import time
from typing import Dict # novm # noqa: F401
from typing import Dict # novm
import six
@@ -53,6 +53,7 @@
InconsistentInstallDirectoryError,
)
from spack.error import SpackError
from spack.filesystem_view import YamlFilesystemView
from spack.util.crypto import bit_length
from spack.version import Version
@@ -725,15 +726,6 @@ def query_by_spec_hash(self, hash_key, data=None):
return True, db._data[hash_key]
return False, None
def query_local_by_spec_hash(self, hash_key):
"""Get a spec by hash in the local database
Return:
(InstallRecord or None): InstallRecord when installed
locally, otherwise None."""
with self.read_transaction():
return self._data.get(hash_key, None)
def _assign_dependencies(self, hash_key, installs, data):
# Add dependencies from other records in the install DB to
# form a full spec.
@@ -1387,6 +1379,23 @@ def installed_extensions_for(self, extendee_spec):
if spec.package.extends(extendee_spec):
yield spec.package
@_autospec
def activated_extensions_for(self, extendee_spec, extensions_layout=None):
"""
Return the specs of all packages that extend
the given spec
"""
if extensions_layout is None:
view = YamlFilesystemView(extendee_spec.prefix, spack.store.layout)
extensions_layout = view.extensions_layout
for spec in self.query():
try:
extensions_layout.check_activated(extendee_spec, spec)
yield spec.package
except spack.directory_layout.NoSuchExtensionError:
continue
# TODO: conditional way to do this instead of catching exceptions
def _get_by_hash_local(self, dag_hash, default=None, installed=any):
# hash is a full hash and is in the data somewhere
if dag_hash in self._data:

View File

@@ -468,7 +468,14 @@ def _execute_depends_on(pkg):
@directive(("extendees", "dependencies"))
def extends(spec, type=("build", "run"), **kwargs):
"""Same as depends_on, but also adds this package to the extendee list.
"""Same as depends_on, but allows symlinking into dependency's
prefix tree.
This is for Python and other language modules where the module
needs to be installed into the prefix of the Python installation.
Spack handles this by installing modules into their own prefix,
but allowing ONE module version to be symlinked into a parent
Python install at a time, using ``spack activate``.
Keyword arguments can be passed to extends() so that extension
packages can pass parameters to the extendee's extension

View File

@@ -10,8 +10,10 @@
import re
import shutil
import sys
import tempfile
from contextlib import contextmanager
import ruamel.yaml as yaml
import six
import llnl.util.filesystem as fs
@@ -387,6 +389,205 @@ def remove_install_directory(self, spec, deprecated=False):
path = os.path.dirname(path)
class ExtensionsLayout(object):
"""A directory layout is used to associate unique paths with specs for
package extensions.
Keeps track of which extensions are activated for what package.
Depending on the use case, this can mean globally activated extensions
directly in the installation folder - or extensions activated in
filesystem views.
"""
def __init__(self, view, **kwargs):
self.view = view
def add_extension(self, spec, ext_spec):
"""Add to the list of currently installed extensions."""
raise NotImplementedError()
def check_activated(self, spec, ext_spec):
"""Ensure that ext_spec can be removed from spec.
If not, raise NoSuchExtensionError.
"""
raise NotImplementedError()
def check_extension_conflict(self, spec, ext_spec):
"""Ensure that ext_spec can be activated in spec.
If not, raise ExtensionAlreadyInstalledError or
ExtensionConflictError.
"""
raise NotImplementedError()
def extension_map(self, spec):
"""Get a dict of currently installed extension packages for a spec.
Dict maps { name : extension_spec }
Modifying dict does not affect internals of this layout.
"""
raise NotImplementedError()
def extendee_target_directory(self, extendee):
"""Return the full path to which the extendee should link all files
from extensions."""
raise NotImplementedError()
def remove_extension(self, spec, ext_spec):
"""Remove from the list of currently installed extensions."""
raise NotImplementedError()
class YamlViewExtensionsLayout(ExtensionsLayout):
"""Maintain extensions within a view."""
def __init__(self, view, layout):
"""layout is the corresponding YamlDirectoryLayout object for which
we implement extensions.
"""
super(YamlViewExtensionsLayout, self).__init__(view)
self.layout = layout
self.extension_file_name = "extensions.yaml"
# Cache of already written/read extension maps.
self._extension_maps = {}
def add_extension(self, spec, ext_spec):
_check_concrete(spec)
_check_concrete(ext_spec)
# Check whether it's already installed or if it's a conflict.
exts = self._extension_map(spec)
self.check_extension_conflict(spec, ext_spec)
# do the actual adding.
exts[ext_spec.name] = ext_spec
self._write_extensions(spec, exts)
def check_extension_conflict(self, spec, ext_spec):
exts = self._extension_map(spec)
if ext_spec.name in exts:
installed_spec = exts[ext_spec.name]
if ext_spec.dag_hash() == installed_spec.dag_hash():
raise ExtensionAlreadyInstalledError(spec, ext_spec)
else:
raise ExtensionConflictError(spec, ext_spec, installed_spec)
def check_activated(self, spec, ext_spec):
exts = self._extension_map(spec)
if (ext_spec.name not in exts) or (ext_spec != exts[ext_spec.name]):
raise NoSuchExtensionError(spec, ext_spec)
def extension_file_path(self, spec):
"""Gets full path to an installed package's extension file, which
keeps track of all the extensions for that package which have been
added to this view.
"""
_check_concrete(spec)
normalize_path = lambda p: (os.path.abspath(p).rstrip(os.path.sep))
view_prefix = self.view.get_projection_for_spec(spec)
if normalize_path(spec.prefix) == normalize_path(view_prefix):
# For backwards compatibility, when the view is the extended
# package's installation directory, do not include the spec name
# as a subdirectory.
components = [view_prefix, self.layout.metadata_dir, self.extension_file_name]
else:
components = [
view_prefix,
self.layout.metadata_dir,
spec.name,
self.extension_file_name,
]
return os.path.join(*components)
def extension_map(self, spec):
"""Defensive copying version of _extension_map() for external API."""
_check_concrete(spec)
return self._extension_map(spec).copy()
def remove_extension(self, spec, ext_spec):
_check_concrete(spec)
_check_concrete(ext_spec)
# Make sure it's installed before removing.
exts = self._extension_map(spec)
self.check_activated(spec, ext_spec)
# do the actual removing.
del exts[ext_spec.name]
self._write_extensions(spec, exts)
def _extension_map(self, spec):
"""Get a dict<name -> spec> for all extensions currently
installed for this package."""
_check_concrete(spec)
if spec not in self._extension_maps:
path = self.extension_file_path(spec)
if not os.path.exists(path):
self._extension_maps[spec] = {}
else:
by_hash = self.layout.specs_by_hash()
exts = {}
with open(path) as ext_file:
yaml_file = yaml.load(ext_file)
for entry in yaml_file["extensions"]:
name = next(iter(entry))
dag_hash = entry[name]["hash"]
prefix = entry[name]["path"]
if dag_hash not in by_hash:
raise InvalidExtensionSpecError(
"Spec %s not found in %s" % (dag_hash, prefix)
)
ext_spec = by_hash[dag_hash]
if prefix != ext_spec.prefix:
raise InvalidExtensionSpecError(
"Prefix %s does not match spec hash %s: %s"
% (prefix, dag_hash, ext_spec)
)
exts[ext_spec.name] = ext_spec
self._extension_maps[spec] = exts
return self._extension_maps[spec]
def _write_extensions(self, spec, extensions):
path = self.extension_file_path(spec)
if not extensions:
# Remove the empty extensions file
os.remove(path)
return
# Create a temp file in the same directory as the actual file.
dirname, basename = os.path.split(path)
fs.mkdirp(dirname)
tmp = tempfile.NamedTemporaryFile(prefix=basename, dir=dirname, delete=False)
# write tmp file
with tmp:
yaml.dump(
{
"extensions": [
{ext.name: {"hash": ext.dag_hash(), "path": str(ext.prefix)}}
for ext in sorted(extensions.values())
]
},
tmp,
default_flow_style=False,
encoding="utf-8",
)
# Atomic update by moving tmpfile on top of old one.
fs.rename(tmp.name, path)
class DirectoryLayoutError(SpackError):
"""Superclass for directory layout errors."""
@@ -443,3 +644,13 @@ def __init__(self, spec, ext_spec, conflict):
"%s cannot be installed in %s because it conflicts with %s"
% (ext_spec.short_spec, spec.short_spec, conflict.short_spec)
)
class NoSuchExtensionError(DirectoryLayoutError):
"""Raised when an extension isn't there on deactivate."""
def __init__(self, spec, ext_spec):
super(NoSuchExtensionError, self).__init__(
"%s cannot be removed from %s because it's not activated."
% (ext_spec.short_spec, spec.short_spec)
)

View File

@@ -786,12 +786,17 @@ def _read_manifest(self, f, raw_yaml=None):
)
else:
self.views = {}
# Retrieve the current concretization strategy
configuration = config_dict(self.yaml)
# Retrieve unification scheme for the concretizer
self.unify = spack.config.get("concretizer:unify", False)
# Let `concretization` overrule `concretize:unify` config for now,
# but use a translation table to have internally a representation
# as if we were using the new configuration
translation = {"separately": False, "together": True}
try:
self.unify = translation[configuration["concretization"]]
except KeyError:
self.unify = spack.config.get("concretizer:unify", False)
# Retrieve dev-build packages:
self.dev_specs = configuration.get("develop", {})
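The fallback above reduces to a small lookup: the deprecated `concretization` value wins when present, otherwise the `concretizer:unify` config (default `False`) applies. A toy version:

```python
translation = {"separately": False, "together": True}

def unify_setting(env_yaml, config_default=False):
    # deprecated key takes precedence; otherwise use the config default
    try:
        return translation[env_yaml["concretization"]]
    except KeyError:
        return config_default

print(unify_setting({"concretization": "together"}))  # True
print(unify_setting({}))                              # False
```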

View File

@@ -37,6 +37,10 @@
import spack.store
import spack.util.spack_json as s_json
import spack.util.spack_yaml as s_yaml
from spack.directory_layout import (
ExtensionAlreadyInstalledError,
YamlViewExtensionsLayout,
)
from spack.error import SpackError
__all__ = ["FilesystemView", "YamlFilesystemView"]
@@ -162,6 +166,9 @@ def add_specs(self, *specs, **kwargs):
"""
Add given specs to view.
The supplied specs might be standalone packages or extensions of
other packages.
Should accept `with_dependencies` as keyword argument (default
True) to indicate whether or not dependencies should be activated as
well.
@@ -169,7 +176,13 @@ def add_specs(self, *specs, **kwargs):
Should accept an `exclude` keyword argument containing a list of
regexps that filter out matching spec names.
This method should make use of `activate_standalone`.
This method should make use of `activate_{extension,standalone}`.
"""
raise NotImplementedError
def add_extension(self, spec):
"""
Add (link) an extension in this view. Does not add dependencies.
"""
raise NotImplementedError
@@ -189,6 +202,9 @@ def remove_specs(self, *specs, **kwargs):
"""
Removes given specs from view.
The supplied spec might be a standalone package or an extension of
another package.
Should accept `with_dependencies` as keyword argument (default
True) to indicate whether or not dependencies should be deactivated
as well.
@@ -200,7 +216,13 @@ def remove_specs(self, *specs, **kwargs):
Should accept an `exclude` keyword argument containing a list of
regexps that filter out matching spec names.
This method should make use of `deactivate_standalone`.
This method should make use of `deactivate_{extension,standalone}`.
"""
raise NotImplementedError
def remove_extension(self, spec):
"""
Remove (unlink) an extension from this view.
"""
raise NotImplementedError
@@ -274,6 +296,8 @@ def __init__(self, root, layout, **kwargs):
msg += " which does not match projections passed manually."
raise ConflictingProjectionsError(msg)
self.extensions_layout = YamlViewExtensionsLayout(self, layout)
self._croot = colorize_root(self._root) + " "
def write_projections(self):
@@ -308,10 +332,38 @@ def add_specs(self, *specs, **kwargs):
self.print_conflict(v, s)
return
for s in specs:
self.add_standalone(s)
extensions = set(filter(lambda s: s.package.is_extension, specs))
standalones = specs - extensions
set(map(self._check_no_ext_conflicts, extensions))
# fail on first error, otherwise link extensions as well
if all(map(self.add_standalone, standalones)):
all(map(self.add_extension, extensions))
def add_extension(self, spec):
if not spec.package.is_extension:
tty.error(self._croot + "Package %s is not an extension." % spec.name)
return False
if spec.external:
tty.warn(self._croot + "Skipping external package: %s" % colorize_spec(spec))
return True
if not spec.package.is_activated(self):
spec.package.do_activate(self, verbose=self.verbose, with_dependencies=False)
# make sure the meta folder is linked as well (this is not done by the
# extension-activation mechanism)
if not self.check_added(spec):
self.link_meta_folder(spec)
return True
def add_standalone(self, spec):
if spec.package.is_extension:
tty.error(self._croot + "Package %s is an extension." % spec.name)
return False
if spec.external:
tty.warn(self._croot + "Skipping external package: %s" % colorize_spec(spec))
return True
@@ -320,6 +372,19 @@ def add_standalone(self, spec):
tty.warn(self._croot + "Skipping already linked package: %s" % colorize_spec(spec))
return True
if spec.package.extendable:
# Check for globally activated extensions in the extendee that
# we're looking at.
activated = [p.spec for p in spack.store.db.activated_extensions_for(spec)]
if activated:
tty.error(
"Globally activated extensions cannot be used in "
"conjunction with filesystem views. "
"Please deactivate the following specs: "
)
spack.cmd.display_specs(activated, flags=True, variants=True, long=False)
return False
self.merge(spec)
self.link_meta_folder(spec)
@@ -468,10 +533,27 @@ def remove_specs(self, *specs, **kwargs):
# Remove the packages from the view
for spec in to_deactivate_sorted:
self.remove_standalone(spec)
if spec.package.is_extension:
self.remove_extension(spec, with_dependents=with_dependents)
else:
self.remove_standalone(spec)
self._purge_empty_directories()
def remove_extension(self, spec, with_dependents=True):
"""
Remove (unlink) an extension from this view.
"""
if not self.check_added(spec):
tty.warn(self._croot + "Skipping package not linked in view: %s" % spec.name)
return
if spec.package.is_activated(self):
spec.package.do_deactivate(
self, verbose=self.verbose, remove_dependents=with_dependents
)
self.unlink_meta_folder(spec)
def remove_standalone(self, spec):
"""
Remove (unlink) a standalone package from this view.
@@ -493,8 +575,8 @@ def get_projection_for_spec(self, spec):
Relies on the ordering of projections to avoid ambiguity.
"""
spec = spack.spec.Spec(spec)
# Extensions are placed by their extendee, not by their own spec
locator_spec = spec
if spec.package.extendee_spec:
locator_spec = spec.package.extendee_spec
@@ -630,6 +712,18 @@ def unlink_meta_folder(self, spec):
assert os.path.exists(path)
shutil.rmtree(path)
def _check_no_ext_conflicts(self, spec):
"""
Check that there is no extension conflict for specs.
"""
extendee = spec.package.extendee_spec
try:
self.extensions_layout.check_extension_conflict(extendee, spec)
except ExtensionAlreadyInstalledError:
# we print the warning here because later on the order in which
# packages get activated is not clear (set-sorting)
tty.warn(self._croot + "Skipping already activated package: %s" % spec.name)
class SimpleFilesystemView(FilesystemView):
"""A simple and partial implementation of FilesystemView focused on
@@ -748,13 +842,14 @@ def get_projection_for_spec(self, spec):
Relies on the ordering of projections to avoid ambiguity.
"""
spec = spack.spec.Spec(spec)
# Extensions are placed by their extendee, not by their own spec
locator_spec = spec
if spec.package.extendee_spec:
spec = spec.package.extendee_spec
locator_spec = spec.package.extendee_spec
proj = spack.projections.get_projection(self.projections, spec)
proj = spack.projections.get_projection(self.projections, locator_spec)
if proj:
return os.path.join(self._root, spec.format(proj))
return os.path.join(self._root, locator_spec.format(proj))
return self._root

View File

@@ -0,0 +1,20 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack
from spack.filesystem_view import YamlFilesystemView
def pre_uninstall(spec):
pkg = spec.package
assert spec.concrete
if pkg.is_extension:
target = pkg.extendee_spec.prefix
view = YamlFilesystemView(target, spack.store.layout)
if pkg.is_activated(view):
# deactivate globally
pkg.do_deactivate(force=True)

View File

@@ -186,44 +186,39 @@ def install_sbang():
``sbang`` here ensures that users can access the script and that
``sbang`` itself is in a short path.
"""
# copy in a new version of sbang if it differs from what's in spack
sbang_path = sbang_install_path()
if os.path.exists(sbang_path) and filecmp.cmp(spack.paths.sbang_script, sbang_path):
return
# make $install_tree/bin
all = spack.spec.Spec("all")
group_name = spack.package_prefs.get_package_group(all)
config_mode = spack.package_prefs.get_package_dir_permissions(all)
group_id = grp.getgrnam(group_name).gr_gid if group_name else None
# First setup the bin dir correctly.
sbang_bin_dir = os.path.dirname(sbang_path)
fs.mkdirp(sbang_bin_dir)
if not os.path.isdir(sbang_bin_dir):
fs.mkdirp(sbang_bin_dir)
# get permissions for bin dir from configuration files
group_name = spack.package_prefs.get_package_group(spack.spec.Spec("all"))
config_mode = spack.package_prefs.get_package_dir_permissions(spack.spec.Spec("all"))
if group_name:
os.chmod(sbang_bin_dir, config_mode) # Use package directory permissions
# Set group and ownership like we do on package directories
if group_id:
os.chown(sbang_bin_dir, os.stat(sbang_bin_dir).st_uid, group_id)
os.chmod(sbang_bin_dir, config_mode)
else:
fs.set_install_permissions(sbang_bin_dir)
# set group on sbang_bin_dir if not already set (only if set in configuration)
# TODO: after we drop python2 support, use shutil.chown to avoid gid lookups that
# can fail for remote groups
if group_name and os.stat(sbang_bin_dir).st_gid != grp.getgrnam(group_name).gr_gid:
os.chown(sbang_bin_dir, os.stat(sbang_bin_dir).st_uid, grp.getgrnam(group_name).gr_gid)
# Then check if we need to install sbang itself.
try:
already_installed = filecmp.cmp(spack.paths.sbang_script, sbang_path)
except (IOError, OSError):
already_installed = False
# copy over the fresh copy of `sbang`
sbang_tmp_path = os.path.join(
os.path.dirname(sbang_path),
".%s.tmp" % os.path.basename(sbang_path),
)
shutil.copy(spack.paths.sbang_script, sbang_tmp_path)
if not already_installed:
with fs.write_tmp_and_move(sbang_path) as f:
shutil.copy(spack.paths.sbang_script, f.name)
# set permissions on `sbang` (including group if set in configuration)
os.chmod(sbang_tmp_path, config_mode)
if group_name:
os.chown(sbang_tmp_path, os.stat(sbang_tmp_path).st_uid, grp.getgrnam(group_name).gr_gid)
# Finally, move the new `sbang` into place atomically
os.rename(sbang_tmp_path, sbang_path)
# Set permissions on `sbang` (including group if set in configuration)
os.chmod(sbang_path, config_mode)
if group_id:
os.chown(sbang_path, os.stat(sbang_path).st_uid, group_id)
def post_install(spec):

View File

@@ -56,9 +56,9 @@
import spack.store
import spack.util.executable
import spack.util.path
import spack.util.timer as timer
from spack.util.environment import EnvironmentModifications, dump_environment
from spack.util.executable import which
from spack.util.timer import Timer
#: Counter to support unique spec sequencing that is used to ensure packages
#: with the same priority are (initially) processed in the order in which they
@@ -304,9 +304,9 @@ def _install_from_cache(pkg, cache_only, explicit, unsigned=False):
bool: ``True`` if the package was extracted from binary cache,
``False`` otherwise
"""
t = timer.Timer()
timer = Timer()
installed_from_cache = _try_install_from_binary_cache(
pkg, explicit, unsigned=unsigned, timer=t
pkg, explicit, unsigned=unsigned, timer=timer
)
pkg_id = package_id(pkg)
if not installed_from_cache:
@@ -316,14 +316,14 @@ def _install_from_cache(pkg, cache_only, explicit, unsigned=False):
tty.msg("{0}: installing from source".format(pre))
return False
t.stop()
timer.stop()
tty.debug("Successfully extracted {0} from binary cache".format(pkg_id))
_print_timer(
pre=_log_prefix(pkg.name),
pkg_id=pkg_id,
fetch=t.duration("search") + t.duration("fetch"),
build=t.duration("install"),
total=t.duration(),
fetch=timer.phases.get("search", 0) + timer.phases.get("fetch", 0),
build=timer.phases.get("install", 0),
total=timer.total,
)
_print_installed_pkg(pkg.spec.prefix)
spack.hooks.post_install(pkg.spec)
@@ -372,7 +372,7 @@ def _process_external_package(pkg, explicit):
def _process_binary_cache_tarball(
pkg, binary_spec, explicit, unsigned, mirrors_for_spec=None, timer=timer.NULL_TIMER
pkg, binary_spec, explicit, unsigned, mirrors_for_spec=None, timer=None
):
"""
Process the binary cache tarball.
@@ -391,11 +391,11 @@ def _process_binary_cache_tarball(
bool: ``True`` if the package was extracted from binary cache,
else ``False``
"""
timer.start("fetch")
download_result = binary_distribution.download_tarball(
binary_spec, unsigned, mirrors_for_spec=mirrors_for_spec
)
timer.stop("fetch")
if timer:
timer.phase("fetch")
# see #10063 : install from source if tarball doesn't exist
if download_result is None:
tty.msg("{0} exists in binary cache but with different hash".format(pkg.name))
@@ -405,7 +405,6 @@ def _process_binary_cache_tarball(
tty.msg("Extracting {0} from binary cache".format(pkg_id))
# don't print long padded paths while extracting/relocating binaries
timer.start("install")
with spack.util.path.filter_padding():
binary_distribution.extract_tarball(
binary_spec, download_result, allow_root=False, unsigned=unsigned, force=False
@@ -413,11 +412,12 @@ def _process_binary_cache_tarball(
pkg.installed_from_binary_cache = True
spack.store.db.add(pkg.spec, spack.store.layout, explicit=explicit)
timer.stop("install")
if timer:
timer.phase("install")
return True
def _try_install_from_binary_cache(pkg, explicit, unsigned=False, timer=timer.NULL_TIMER):
def _try_install_from_binary_cache(pkg, explicit, unsigned=False, timer=None):
"""
Try to extract the package from binary cache.
@@ -430,10 +430,10 @@ def _try_install_from_binary_cache(pkg, explicit, unsigned=False, timer=timer.NU
"""
pkg_id = package_id(pkg)
tty.debug("Searching for binary cache of {0}".format(pkg_id))
timer.start("search")
matches = binary_distribution.get_mirrors_for_spec(pkg.spec)
timer.stop("search")
if timer:
timer.phase("search")
if not matches:
return False
@@ -462,10 +462,11 @@ def combine_phase_logs(phase_log_files, log_path):
phase_log_files (list): a list or iterator of logs to combine
log_path (str): the path to combine them to
"""
with open(log_path, "wb") as log_file:
with open(log_path, "w") as log_file:
for phase_log_file in phase_log_files:
with open(phase_log_file, "rb") as phase_log:
shutil.copyfileobj(phase_log, log_file)
with open(phase_log_file, "r") as phase_log:
log_file.write(phase_log.read())
def dump_packages(spec, path):
@@ -1905,7 +1906,7 @@ def __init__(self, pkg, install_args):
self.env_mods = install_args.get("env_modifications", EnvironmentModifications())
# timer for build phases
self.timer = timer.Timer()
self.timer = Timer()
# If we are using a padded path, filter the output to compress padded paths
# The real log still has full-length paths.
@@ -1960,8 +1961,8 @@ def run(self):
pre=self.pre,
pkg_id=self.pkg_id,
fetch=self.pkg._fetch_time,
build=self.timer.duration() - self.pkg._fetch_time,
total=self.timer.duration(),
build=self.timer.total - self.pkg._fetch_time,
total=self.timer.total,
)
_print_installed_pkg(self.pkg.prefix)
@@ -2034,7 +2035,6 @@ def _real_install(self):
)
with log_contextmanager as logger:
# Redirect stdout and stderr to daemon pipe
with logger.force_echo():
inner_debug_level = tty.debug_level()
tty.set_debug(debug_level)
@@ -2042,11 +2042,12 @@ def _real_install(self):
tty.msg(msg.format(self.pre, phase_fn.name))
tty.set_debug(inner_debug_level)
# Redirect stdout and stderr to daemon pipe
self.timer.phase(phase_fn.name)
# Catch any errors to report to logging
self.timer.start(phase_fn.name)
phase_fn.execute()
spack.hooks.on_phase_success(pkg, phase_fn.name, log_file)
self.timer.stop(phase_fn.name)
except BaseException:
combine_phase_logs(pkg.phase_log_files, pkg.log_path)
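The two timer APIs mixed throughout this hunk differ mainly in bookkeeping: one exposes `start(name)`/`stop(name)`/`duration(name)`, the older one a `phase(name)` call plus a `phases` dict and a `total` property. A minimal sketch of the `phase`-style timer, as an illustration rather than `spack.util.timer`:

```python
import time

class PhaseTimer:
    def __init__(self):
        self._start = self._last = time.time()
        self._end = None
        self.phases = {}  # phase name -> elapsed seconds

    def phase(self, name):
        # record the time elapsed since the last mark under `name`
        now = time.time()
        self.phases[name] = now - self._last
        self._last = now

    def stop(self):
        self._end = time.time()

    @property
    def total(self):
        return (self._end or time.time()) - self._start

t = PhaseTimer()
t.phase("fetch")
t.phase("install")
t.stop()
print(t.phases, t.total)
```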

View File

@@ -26,6 +26,9 @@
def configuration(module_set_name):
config_path = "modules:%s:lmod" % module_set_name
config = spack.config.get(config_path, {})
if not config and module_set_name == "default":
# return old format for backward compatibility
return spack.config.get("modules:lmod", {})
return config

View File

@@ -23,6 +23,9 @@
def configuration(module_set_name):
config_path = "modules:%s:tcl" % module_set_name
config = spack.config.get(config_path, {})
if not config and module_set_name == "default":
# return old format for backward compatibility
return spack.config.get("modules:tcl", {})
return config

View File

@@ -27,16 +27,7 @@
import traceback
import types
import warnings
from typing import ( # novm # noqa: F401
Any,
Callable,
Dict,
Iterable,
List,
Optional,
Tuple,
Type,
)
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type # novm
import six
@@ -540,6 +531,10 @@ class PackageBase(six.with_metaclass(PackageMeta, WindowsRPathMeta, PackageViewM
# These are default values for instance variables.
#
#: A list or set of build time test functions to be called when tests
#: are executed or 'None' if there are no such test functions.
build_time_test_callbacks = None # type: Optional[List[str]]
#: By default, packages are not virtual
#: Virtual packages override this attribute
virtual = False
@@ -548,6 +543,10 @@ class PackageBase(six.with_metaclass(PackageMeta, WindowsRPathMeta, PackageViewM
#: those that do not can be used to install a set of other Spack packages.
has_code = True
#: A list or set of install time test functions to be called when tests
#: are executed or 'None' if there are no such test functions.
install_time_test_callbacks = None # type: Optional[List[str]]
#: By default we build in parallel. Subclasses can override this.
parallel = True
@@ -920,12 +919,6 @@ def url_for_version(self, version):
"""
return self._implement_all_urls_for_version(version)[0]
def update_external_dependencies(self):
"""
Method to override in package classes to handle external dependencies
"""
pass
def all_urls_for_version(self, version):
"""Return all URLs derived from version_urls(), url, urls, and
list_url (if it contains a version) in a package in that order.
@@ -1314,6 +1307,19 @@ def extends(self, spec):
s = self.extendee_spec
return s and spec.satisfies(s)
def is_activated(self, view):
"""Return True if package is activated."""
if not self.is_extension:
raise ValueError("is_activated called on package that is not an extension.")
if self.extendee_spec.installed_upstream:
# If this extends an upstream package, it cannot be activated for
# it. This bypasses construction of the extension map, which can
# fail when run in the context of a downstream Spack instance
return False
extensions_layout = view.extensions_layout
exts = extensions_layout.extension_map(self.extendee_spec)
return (self.name in exts) and (exts[self.name] == self.spec)
def provides(self, vpkg_name):
"""
True if this package provides a virtual package with the specified name
@@ -2313,6 +2319,30 @@ def do_deprecate(self, deprecator, link_fn):
"""Deprecate this package in favor of deprecator spec"""
spec = self.spec
# Check whether package to deprecate has active extensions
if self.extendable:
view = spack.filesystem_view.YamlFilesystemView(spec.prefix, spack.store.layout)
active_exts = view.extensions_layout.extension_map(spec).values()
if active_exts:
short = spec.format("{name}/{hash:7}")
m = "Spec %s has active extensions\n" % short
for active in active_exts:
m += " %s\n" % active.format("{name}/{hash:7}")
m += "Deactivate extensions before deprecating %s" % short
tty.die(m)
# Check whether package to deprecate is an active extension
if self.is_extension:
extendee = self.extendee_spec
view = spack.filesystem_view.YamlFilesystemView(extendee.prefix, spack.store.layout)
if self.is_activated(view):
short = spec.format("{name}/{hash:7}")
short_ext = extendee.format("{name}/{hash:7}")
msg = "Spec %s is an active extension of %s\n" % (short, short_ext)
msg += "Deactivate %s to be able to deprecate it" % short
tty.die(msg)
# Install deprecator if it isn't installed already
if not spack.store.db.query(deprecator):
deprecator.package.do_install()
@@ -2342,6 +2372,155 @@ def _check_extendable(self):
if not self.extendable:
raise ValueError("Package %s is not extendable!" % self.name)
def _sanity_check_extension(self):
if not self.is_extension:
raise ActivationError("This package is not an extension.")
extendee_package = self.extendee_spec.package
extendee_package._check_extendable()
if not self.extendee_spec.installed:
raise ActivationError("Can only (de)activate extensions for installed packages.")
if not self.spec.installed:
raise ActivationError("Extensions must first be installed.")
if self.extendee_spec.name not in self.extendees:
raise ActivationError("%s does not extend %s!" % (self.name, self.extendee.name))
def do_activate(self, view=None, with_dependencies=True, verbose=True):
"""Called on an extension to invoke the extendee's activate method.
Commands should call this routine, and should not call
activate() directly.
"""
if verbose:
tty.msg(
"Activating extension {0} for {1}".format(
self.spec.cshort_spec, self.extendee_spec.cshort_spec
)
)
self._sanity_check_extension()
if not view:
view = YamlFilesystemView(self.extendee_spec.prefix, spack.store.layout)
extensions_layout = view.extensions_layout
try:
extensions_layout.check_extension_conflict(self.extendee_spec, self.spec)
except spack.directory_layout.ExtensionAlreadyInstalledError as e:
# already installed, let caller know
tty.msg(e.message)
return
# Activate any package dependencies that are also extensions.
if with_dependencies:
for spec in self.dependency_activations():
if not spec.package.is_activated(view):
spec.package.do_activate(
view, with_dependencies=with_dependencies, verbose=verbose
)
self.extendee_spec.package.activate(self, view, **self.extendee_args)
extensions_layout.add_extension(self.extendee_spec, self.spec)
if verbose:
tty.debug(
"Activated extension {0} for {1}".format(
self.spec.cshort_spec, self.extendee_spec.cshort_spec
)
)
def dependency_activations(self):
return (
spec
for spec in self.spec.traverse(root=False, deptype="run")
if spec.package.extends(self.extendee_spec)
)
def activate(self, extension, view, **kwargs):
"""
Add the extension to the specified view.
Package authors can override this function to maintain some
centralized state related to the set of activated extensions
for a package.
Spack internals (commands, hooks, etc.) should call
do_activate() method so that proper checks are always executed.
"""
view.merge(extension.spec, ignore=kwargs.get("ignore", None))
def do_deactivate(self, view=None, **kwargs):
"""Remove this extension package from the specified view. Called
on the extension to invoke extendee's deactivate() method.
`remove_dependents=True` deactivates extensions depending on this
package instead of raising an error.
"""
self._sanity_check_extension()
force = kwargs.get("force", False)
verbose = kwargs.get("verbose", True)
remove_dependents = kwargs.get("remove_dependents", False)
if verbose:
tty.msg(
"Deactivating extension {0} for {1}".format(
self.spec.cshort_spec, self.extendee_spec.cshort_spec
)
)
if not view:
view = YamlFilesystemView(self.extendee_spec.prefix, spack.store.layout)
extensions_layout = view.extensions_layout
# Allow a force deactivate to happen. This can unlink
# spurious files if something was corrupted.
if not force:
extensions_layout.check_activated(self.extendee_spec, self.spec)
activated = extensions_layout.extension_map(self.extendee_spec)
for name, aspec in activated.items():
if aspec == self.spec:
continue
for dep in aspec.traverse(deptype="run"):
if self.spec == dep:
if remove_dependents:
aspec.package.do_deactivate(**kwargs)
else:
msg = (
"Cannot deactivate {0} because {1} is "
"activated and depends on it"
)
raise ActivationError(
msg.format(self.spec.cshort_spec, aspec.cshort_spec)
)
self.extendee_spec.package.deactivate(self, view, **self.extendee_args)
# redundant activation check -- makes SURE the spec is not
# still activated even if something was wrong above.
if self.is_activated(view):
extensions_layout.remove_extension(self.extendee_spec, self.spec)
if verbose:
tty.debug(
"Deactivated extension {0} for {1}".format(
self.spec.cshort_spec, self.extendee_spec.cshort_spec
)
)
def deactivate(self, extension, view, **kwargs):
"""
Remove all extension files from the specified view.
Package authors can override this method to support other
extension mechanisms. Spack internals (commands, hooks, etc.)
should call do_deactivate() method so that proper checks are
always executed.
"""
view.unmerge(extension.spec, ignore=kwargs.get("ignore", None))
def view(self):
"""Create a view with the prefix of this package as the root.
Extensions added to this view will modify the installation prefix of

View File

@@ -3,7 +3,6 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import collections
import itertools
import multiprocessing.pool
import os
import re
@@ -297,24 +296,17 @@ def modify_macho_object(cur_path, rpaths, deps, idpath, paths_to_paths):
if idpath:
new_idpath = paths_to_paths.get(idpath, None)
if new_idpath and not idpath == new_idpath:
args += [("-id", new_idpath)]
args += ["-id", new_idpath]
for dep in deps:
new_dep = paths_to_paths.get(dep)
if new_dep and dep != new_dep:
args += [("-change", dep, new_dep)]
args += ["-change", dep, new_dep]
new_rpaths = []
for orig_rpath in rpaths:
new_rpath = paths_to_paths.get(orig_rpath)
if new_rpath and not orig_rpath == new_rpath:
args_to_add = ("-rpath", orig_rpath, new_rpath)
if args_to_add not in args and new_rpath not in new_rpaths:
args += [args_to_add]
new_rpaths.append(new_rpath)
args += ["-rpath", orig_rpath, new_rpath]
# Deduplicate and flatten
args = list(itertools.chain.from_iterable(llnl.util.lang.dedupe(args)))
if args:
args.append(str(cur_path))
install_name_tool = executable.Executable("install_name_tool")
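One side of this hunk groups `install_name_tool` arguments into tuples precisely so duplicates can be dropped before flattening back into a single argv list. A standalone demonstration, with a plain generator standing in for `llnl.util.lang.dedupe`:

```python
import itertools

def dedupe(seq):
    # order-preserving duplicate removal (stand-in for llnl.util.lang.dedupe)
    seen = set()
    for item in seq:
        if item not in seen:
            seen.add(item)
            yield item

args = [("-rpath", "/old", "/new"), ("-rpath", "/old", "/new"), ("-id", "/new/libfoo.dylib")]
flat = list(itertools.chain.from_iterable(dedupe(args)))
print(flat)  # ['-rpath', '/old', '/new', '-id', '/new/libfoo.dylib']
```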

View File

@@ -8,12 +8,32 @@
.. literalinclude:: _spack_root/lib/spack/spack/schema/env.py
:lines: 36-
"""
import warnings
from llnl.util.lang import union_dicts
import spack.schema.merged
import spack.schema.packages
import spack.schema.projections
warned_about_concretization = False
def deprecate_concretization(instance, props):
global warned_about_concretization
if warned_about_concretization:
return None
# Deprecate `spack:concretization` in favor of `spack:concretizer:unify`.
concretization_to_unify = {"together": "true", "separately": "false"}
concretization = instance["concretization"]
unify = concretization_to_unify[concretization]
return (
"concretization:{} is deprecated and will be removed in Spack 0.19 in favor of "
"the new concretizer:unify:{} config option.".format(concretization, unify)
)
#: legal first keys in the schema
keys = ("spack", "env")
@@ -56,6 +76,11 @@
"type": "object",
"default": {},
"additionalProperties": False,
"deprecatedProperties": {
"properties": ["concretization"],
"message": deprecate_concretization,
"error": False,
},
"properties": union_dicts(
# merged configuration scope schemas
spack.schema.merged.properties,
@@ -123,6 +148,11 @@
},
]
},
"concretization": {
"type": "string",
"enum": ["together", "separately"],
"default": "separately",
},
},
),
}
@@ -139,6 +169,31 @@ def update(data):
Returns:
True if data was changed, False otherwise
"""
# There are not currently any deprecated attributes in this section
# that have not been removed
return False
updated = False
if "include" in data:
msg = "included configuration files should be updated manually" " [files={0}]"
warnings.warn(msg.format(", ".join(data["include"])))
# Spack 0.19 drops support for `spack:concretization` in favor of
# `spack:concretizer:unify`. Here we provide an upgrade path that changes the former
# into the latter, or warns when there's an ambiguity. Note that Spack 0.17 is not
# forward compatible with `spack:concretizer:unify`.
if "concretization" in data:
has_unify = "unify" in data.get("concretizer", {})
to_unify = {"together": True, "separately": False}
unify = to_unify[data["concretization"]]
if has_unify and data["concretizer"]["unify"] != unify:
warnings.warn(
"The following configuration conflicts: "
"`spack:concretization:{}` and `spack:concretizer:unify:{}`"
". Please update manually.".format(
data["concretization"], data["concretizer"]["unify"]
)
)
else:
data.update({"concretizer": {"unify": unify}})
data.pop("concretization")
updated = True
return updated
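A worked, self-contained toy of that upgrade path (standing in for `update()` run on a parsed environment dict, and ignoring the conflict-warning branch):

```python
def migrate(data):
    to_unify = {"together": True, "separately": False}
    if "concretization" not in data:
        return False
    unify = to_unify[data.pop("concretization")]
    # keep an existing concretizer:unify value if one is already set
    data.setdefault("concretizer", {}).setdefault("unify", unify)
    return True

data = {"concretization": "together"}
print(migrate(data), data)  # True {'concretizer': {'unify': True}}
```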

View File

@@ -8,6 +8,8 @@
.. literalinclude:: _spack_root/lib/spack/spack/schema/modules.py
:lines: 13-
"""
import warnings
import spack.schema.environment
import spack.schema.projections
@@ -24,7 +26,9 @@
)
#: Matches a valid name for a module set
valid_module_set_name = r"^(?!prefix_inspections$)\w[\w-]*$"
valid_module_set_name = (
r"^(?!arch_folder$|lmod$|roots$|enable$|prefix_inspections$|" r"tcl$|use_view$)\w[\w-]*$"
)
#: Matches an anonymous spec, i.e. a spec without a root name
anonymous_spec_regex = r"^[\^@%+~]"
@@ -152,6 +156,15 @@
}
def deprecation_msg_default_module_set(instance, props):
return (
'Top-level properties "{0}" in module config are ignored as of Spack v0.18. '
'They should be set on the "default" module set. Run\n\n'
"\t$ spack config update modules\n\n"
"to update the file to the new format".format('", "'.join(instance))
)
# Properties for inclusion into other schemas (requires definitions)
properties = {
"modules": {
@@ -174,6 +187,13 @@
"additionalProperties": False,
"properties": module_config_properties,
},
# Deprecated top-level keys (ignored in 0.18 with a warning)
"^(arch_folder|lmod|roots|enable|tcl|use_view)$": {},
},
"deprecatedProperties": {
"properties": ["arch_folder", "lmod", "roots", "enable", "tcl", "use_view"],
"message": deprecation_msg_default_module_set,
"error": False,
},
}
}
@@ -229,6 +249,39 @@ def update_keys(data, key_translations):
return changed
def update_default_module_set(data):
"""Update module configuration to move top-level keys inside default module set.
This change was introduced in v0.18 (see 99083f1706 or #28659).
"""
changed = False
deprecated_top_level_keys = ("arch_folder", "lmod", "roots", "enable", "tcl", "use_view")
# Don't update when we already have a default module set
if "default" in data:
if any(key in data for key in deprecated_top_level_keys):
warnings.warn(
'Did not move top-level module properties into "default" '
'module set, because the "default" module set is already '
"defined"
)
return changed
default = {}
# Move deprecated top-level keys under "default" module set.
for key in deprecated_top_level_keys:
if key in data:
default[key] = data.pop(key)
if default:
changed = True
data["default"] = default
return changed
def update(data):
"""Update the data in place to remove deprecated properties.
@@ -238,5 +291,10 @@ def update(data):
Returns:
True if data was changed, False otherwise
"""
# deprecated top-level module config (everything in default module set)
changed = update_default_module_set(data)
# translate blacklist/whitelist to exclude/include
return update_keys(data, exclude_include_translations)
changed |= update_keys(data, exclude_include_translations)
return changed
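
A self-contained sketch of what `update_default_module_set` above does: deprecated top-level module keys are relocated under a `default` module set unless one is already defined (plain-dict approximation of the real config data):

```python
DEPRECATED_TOP_LEVEL = ("arch_folder", "lmod", "roots", "enable", "tcl", "use_view")

def move_into_default(data):
    """Move deprecated top-level module keys under the 'default' set."""
    if "default" in data:
        return False  # never clobber an explicit default module set
    default = {k: data.pop(k) for k in list(data) if k in DEPRECATED_TOP_LEVEL}
    if default:
        data["default"] = default
    return bool(default)

cfg = {"tcl": {"hash_length": 7}, "prefix_inspections": {"bin": ["PATH"]}}
assert move_into_default(cfg)
assert cfg == {
    "prefix_inspections": {"bin": ["PATH"]},
    "default": {"tcl": {"hash_length": 7}},
}
```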

View File

@@ -622,13 +622,11 @@ def solve(self, setup, specs, reuse=None, output=None, control=None):
self.control = control or default_clingo_control()
# set up the problem -- this generates facts and rules
self.assumptions = []
timer.start("setup")
with self.control.backend() as backend:
self.backend = backend
setup.setup(self, specs, reuse=reuse)
timer.stop("setup")
timer.phase("setup")
timer.start("load")
# read in the main ASP program and display logic -- these are
# handwritten, not generated, so we load them as resources
parent_dir = os.path.dirname(__file__)
@@ -658,13 +656,12 @@ def visit(node):
self.control.load(os.path.join(parent_dir, "concretize.lp"))
self.control.load(os.path.join(parent_dir, "os_compatibility.lp"))
self.control.load(os.path.join(parent_dir, "display.lp"))
timer.stop("load")
timer.phase("load")
# Grounding is the first step in the solve -- it turns our facts
# and first-order logic rules into propositional logic.
timer.start("ground")
self.control.ground([("base", [])])
timer.stop("ground")
timer.phase("ground")
# With a grounded program, we can run the solve.
result = Result(specs)
@@ -682,10 +679,8 @@ def on_model(model):
if clingo_cffi:
solve_kwargs["on_unsat"] = cores.append
timer.start("solve")
solve_result = self.control.solve(**solve_kwargs)
timer.stop("solve")
timer.phase("solve")
# once done, construct the solve result
result.satisfiable = solve_result.satisfiable
@@ -2325,12 +2320,6 @@ def build_specs(self, function_tuples):
if isinstance(spec.version, spack.version.GitVersion):
spec.version.generate_git_lookup(spec.fullname)
# Add synthetic edges for externals that are extensions
for root in self._specs.values():
for dep in root.traverse():
if dep.external:
dep.package.update_external_dependencies()
return self._specs
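
The solver hunks above swap paired `timer.start(name)`/`timer.stop(name)` calls back to single `timer.phase(name)` marks around the setup, load, ground, and solve stages. For readers unfamiliar with the underlying pipeline, here is a minimal ground-then-solve example with the clingo Python API (assuming the `clingo` package is installed; the program is a toy, not Spack's concretize.lp):

```python
import clingo

ctl = clingo.Control()
# Setup stage: add facts and rules (Spack generates these from specs).
ctl.add("base", [], "node(a). node(b). edge(a, b). reach(X) :- node(X), edge(_, X).")
# Grounding turns the first-order rules into propositional logic.
ctl.ground([("base", [])])
# Solving searches the grounded program for stable models.
result = ctl.solve(on_model=lambda model: print("model:", model))
print("satisfiable:", result.satisfiable)
```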

View File

@@ -563,7 +563,7 @@ requirement_weight(Package, W) :-
requirement_policy(Package, X, "any_of"),
requirement_group_satisfied(Package, X).
error(2, "Cannot satisfy the requirements in packages.yaml for the '{0}' package. You may want to delete them to proceed with concretization. To check where the requirements are defined run 'spack config blame packages'", Package) :-
error(2, "Cannot satisfy requirement group for package '{0}'", Package) :-
activate_requirement_rules(Package),
requirement_group(Package, X),
not requirement_group_satisfied(Package, X).

View File

@@ -2751,11 +2751,6 @@ def _old_concretize(self, tests=False, deprecation_warning=True):
# If any spec in the DAG is deprecated, throw an error
Spec.ensure_no_deprecated(self)
# Update externals as needed
for dep in self.traverse():
if dep.external:
dep.package.update_external_dependencies()
# Now that the spec is concrete we should check if
# there are declared conflicts
#

View File

@@ -2,7 +2,7 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import inspect
import os
import platform
import posixpath
@@ -14,7 +14,6 @@
import spack.build_environment
import spack.config
import spack.package_base
import spack.spec
import spack.util.spack_yaml as syaml
from spack.build_environment import (
@@ -522,27 +521,3 @@ def test_dirty_disable_module_unload(config, mock_packages, working_env, mock_mo
assert mock_module_cmd.calls
assert any(("unload", "cray-libsci") == item[0] for item in mock_module_cmd.calls)
assert any(("unload", "cray-mpich") == item[0] for item in mock_module_cmd.calls)
class TestModuleMonkeyPatcher:
def test_getting_attributes(self, config, mock_packages):
s = spack.spec.Spec("libelf").concretized()
module_wrapper = spack.build_environment.ModuleChangePropagator(s.package)
assert module_wrapper.Libelf == s.package.module.Libelf
def test_setting_attributes(self, config, mock_packages):
s = spack.spec.Spec("libelf").concretized()
module = s.package.module
module_wrapper = spack.build_environment.ModuleChangePropagator(s.package)
# Setting an attribute has an immediate effect
module_wrapper.SOME_ATTRIBUTE = 1
assert module.SOME_ATTRIBUTE == 1
# We can also propagate the settings to classes in the MRO
module_wrapper.propagate_changes_to_mro()
for cls in inspect.getmro(type(s.package)):
current_module = cls.module
if current_module == spack.package_base:
break
assert current_module.SOME_ATTRIBUTE == 1
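
The test above exercises attribute propagation through a package's MRO. A standalone sketch of that technique, with plain classes and `types.SimpleNamespace` standing in for the per-class `module` objects (names here are illustrative, not Spack's):

```python
import inspect
import types

class BasePkg:
    module = types.SimpleNamespace()

class LibelfLike(BasePkg):
    module = types.SimpleNamespace()

def propagate_to_mro(pkg_cls, sentinel, **attrs):
    """Set attributes on each class's 'module' up the MRO, stopping at sentinel."""
    for cls in inspect.getmro(pkg_cls):
        module = getattr(cls, "module", None)
        if module is None or module is sentinel:
            break
        for key, value in attrs.items():
            setattr(module, key, value)

SENTINEL = object()
propagate_to_mro(LibelfLike, SENTINEL, SOME_ATTRIBUTE=1)
assert LibelfLike.module.SOME_ATTRIBUTE == 1
assert BasePkg.module.SOME_ATTRIBUTE == 1
```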

View File

@@ -121,31 +121,3 @@ def test_old_style_compatibility_with_super(spec_str, method_name, expected):
builder = spack.builder.create(s.package)
value = getattr(builder, method_name)()
assert value == expected
@pytest.mark.regression("33928")
@pytest.mark.usefixtures("builder_test_repository", "config", "working_env")
@pytest.mark.disable_clean_stage_check
def test_build_time_tests_are_executed_from_default_builder():
s = spack.spec.Spec("old-style-autotools").concretized()
builder = spack.builder.create(s.package)
builder.pkg.run_tests = True
for phase_fn in builder:
phase_fn.execute()
assert os.environ.get("CHECK_CALLED") == "1", "Build time tests not executed"
assert os.environ.get("INSTALLCHECK_CALLED") == "1", "Install time tests not executed"
@pytest.mark.regression("34518")
@pytest.mark.usefixtures("builder_test_repository", "config", "working_env")
def test_monkey_patching_wrapped_pkg():
s = spack.spec.Spec("old-style-autotools").concretized()
builder = spack.builder.create(s.package)
assert s.package.run_tests is False
assert builder.pkg.run_tests is False
assert builder.pkg_with_dispatcher.run_tests is False
s.package.run_tests = True
assert builder.pkg.run_tests is True
assert builder.pkg_with_dispatcher.run_tests is True

View File

@@ -0,0 +1,41 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import sys
import pytest
from spack.main import SpackCommand
activate = SpackCommand("activate")
deactivate = SpackCommand("deactivate")
install = SpackCommand("install")
extensions = SpackCommand("extensions")
pytestmark = pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows")
def test_activate(mock_packages, mock_archive, mock_fetch, config, install_mockery):
install("extension1")
activate("extension1")
output = extensions("--show", "activated", "extendee")
assert "extension1" in output
def test_deactivate(mock_packages, mock_archive, mock_fetch, config, install_mockery):
install("extension1")
activate("extension1")
deactivate("extension1")
output = extensions("--show", "activated", "extendee")
assert "extension1" not in output
def test_deactivate_all(mock_packages, mock_archive, mock_fetch, config, install_mockery):
install("extension1")
install("extension2")
activate("extension1")
activate("extension2")
deactivate("--all", "extendee")
output = extensions("--show", "activated", "extendee")
assert "extension1" not in output

View File

@@ -15,6 +15,7 @@
uninstall = SpackCommand("uninstall")
deprecate = SpackCommand("deprecate")
find = SpackCommand("find")
activate = SpackCommand("activate")
pytestmark = pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows")
@@ -88,6 +89,24 @@ def test_deprecate_deps(mock_packages, mock_archive, mock_fetch, install_mockery
assert sorted(deprecated) == sorted(list(old_spec.traverse()))
def test_deprecate_fails_active_extensions(
mock_packages, mock_archive, mock_fetch, install_mockery
):
"""Tests that active extensions and their extendees cannot be
deprecated."""
install("extendee")
install("extension1")
activate("extension1")
output = deprecate("-yi", "extendee", "extendee@nonexistent", fail_on_error=False)
assert "extension1" in output
assert "Deactivate extensions before deprecating" in output
output = deprecate("-yiD", "extension1", "extension1@notaversion", fail_on_error=False)
assert "extendee" in output
assert "is an active extension of" in output
def test_uninstall_deprecated(mock_packages, mock_archive, mock_fetch, install_mockery):
"""Tests that we can still uninstall deprecated packages."""
install("libelf@0.8.13")

View File

@@ -2476,6 +2476,30 @@ def test_env_write_only_non_default_nested(tmpdir):
assert manifest == contents
@pytest.mark.parametrize("concretization,unify", [("together", "true"), ("separately", "false")])
def test_update_concretization_to_concretizer_unify(concretization, unify, tmpdir):
spack_yaml = """\
spack:
concretization: {}
""".format(
concretization
)
tmpdir.join("spack.yaml").write(spack_yaml)
# Update the environment
env("update", "-y", str(tmpdir))
with open(str(tmpdir.join("spack.yaml"))) as f:
assert (
f.read()
== """\
spack:
concretizer:
unify: {}
""".format(
unify
)
)
@pytest.mark.regression("18147")
def test_can_update_attributes_with_override(tmpdir):
spack_yaml = """

View File

@@ -35,11 +35,12 @@ def python_database(mock_packages, mutable_database):
def test_extensions(mock_packages, python_database, config, capsys):
ext2 = Spec("py-extension2").concretized()
def check_output(ni):
def check_output(ni, na):
with capsys.disabled():
output = extensions("python")
packages = extensions("-s", "packages", "python")
installed = extensions("-s", "installed", "python")
activated = extensions("-s", "activated", "python")
assert "==> python@2.7.11" in output
assert "==> 2 extensions" in output
assert "py-extension1" in output
@@ -49,13 +50,26 @@ def check_output(ni):
assert "py-extension1" in packages
assert "py-extension2" in packages
assert "installed" not in packages
assert "activated" not in packages
assert ("%s installed" % (ni if ni else "None")) in output
assert ("%s activated" % (na if na else "None")) in output
assert ("%s installed" % (ni if ni else "None")) in installed
assert ("%s activated" % (na if na else "None")) in activated
check_output(2, 0)
ext2.package.do_activate()
check_output(2, 2)
ext2.package.do_deactivate(force=True)
check_output(2, 1)
ext2.package.do_activate()
check_output(2, 2)
check_output(2)
ext2.package.do_uninstall(force=True)
check_output(1)
check_output(1, 1)
def test_extensions_no_arguments(mock_packages):

View File

@@ -269,9 +269,9 @@ def test_find_format_deps(database, config):
callpath-1.0
dyninst-8.2
libdwarf-20130729
libelf-0.8.13
zmpi-1.0
fake-1.0
libelf-0.8.13
zmpi-1.0
fake-1.0
"""
)
@@ -291,9 +291,9 @@ def test_find_format_deps_paths(database, config):
callpath-1.0 {1}
dyninst-8.2 {2}
libdwarf-20130729 {3}
libelf-0.8.13 {4}
zmpi-1.0 {5}
fake-1.0 {6}
libelf-0.8.13 {4}
zmpi-1.0 {5}
fake-1.0 {6}
""".format(
*prefixes

View File

@@ -333,6 +333,20 @@ def test_error_conditions(self, cli_args, error_str):
with pytest.raises(spack.error.SpackError, match=error_str):
spack.cmd.mirror.mirror_create(args)
@pytest.mark.parametrize(
"cli_args,expected_end",
[
({"directory": None}, os.path.join("source")),
({"directory": os.path.join("foo", "bar")}, os.path.join("foo", "bar")),
],
)
def test_mirror_path_is_valid(self, cli_args, expected_end, config):
args = MockMirrorArgs(**cli_args)
local_push_url = spack.cmd.mirror.local_mirror_url_from_user(args.directory)
assert local_push_url.startswith("file:")
assert os.path.isabs(local_push_url.replace("file://", ""))
assert local_push_url.endswith(expected_end)
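
A sketch of the behavior this test asserts: a user-supplied mirror directory (or a default) is resolved to an absolute `file://` push URL. This is a hypothetical standalone helper, not Spack's actual `local_mirror_url_from_user`:

```python
import os

def local_mirror_url(directory=None, default=os.path.join("mirrors", "source")):
    """Hypothetical: resolve a mirror directory to an absolute file:// URL."""
    return "file://" + os.path.abspath(directory or default)

url = local_mirror_url(os.path.join("foo", "bar"))
assert url.startswith("file:")
assert os.path.isabs(url.replace("file://", ""))
assert url.endswith(os.path.join("foo", "bar"))
```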
@pytest.mark.parametrize(
"cli_args,not_expected",
[

View File

@@ -3,14 +3,12 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import itertools
import sys
import pytest
import llnl.util.tty as tty
import spack.cmd.uninstall
import spack.environment
import spack.store
from spack.main import SpackCommand, SpackCommandError
@@ -42,39 +40,6 @@ def test_installed_dependents(mutable_database):
uninstall("-y", "libelf")
@pytest.mark.db
def test_correct_installed_dependents(mutable_database):
# Test whether we return the right dependents.
# Take callpath from the database
callpath = spack.store.db.query_local("callpath")[0]
# Ensure it still has dependents and dependencies
dependents = callpath.dependents(deptype="all")
dependencies = callpath.dependencies(deptype="all")
assert dependents and dependencies
# Uninstall it, so it's missing.
callpath.package.do_uninstall(force=True)
# Retrieve all dependent hashes
inside_dpts, outside_dpts = spack.cmd.uninstall.installed_dependents(dependencies, None)
dependent_hashes = [s.dag_hash() for s in itertools.chain(*outside_dpts.values())]
set_dependent_hashes = set(dependent_hashes)
# We don't have an env, so this should be empty.
assert not inside_dpts
# Assert uniqueness
assert len(dependent_hashes) == len(set_dependent_hashes)
# Ensure parents of callpath are listed
assert all(s.dag_hash() in set_dependent_hashes for s in dependents)
# Ensure callpath itself is not, since it was missing.
assert callpath.dag_hash() not in set_dependent_hashes
@pytest.mark.db
def test_recursive_uninstall(mutable_database):
"""Test recursive uninstall."""

View File

@@ -12,6 +12,7 @@
from spack.main import SpackCommand
from spack.spec import Spec
activate = SpackCommand("activate")
extensions = SpackCommand("extensions")
install = SpackCommand("install")
view = SpackCommand("view")
@@ -134,9 +135,46 @@ def test_view_extension(tmpdir, mock_packages, mock_archive, mock_fetch, config,
assert "extension1@1.0" in all_installed
assert "extension1@2.0" in all_installed
assert "extension2@1.0" in all_installed
global_activated = extensions("--show", "activated", "extendee")
assert "extension1@1.0" not in global_activated
assert "extension1@2.0" not in global_activated
assert "extension2@1.0" not in global_activated
view_activated = extensions("--show", "activated", "-v", viewpath, "extendee")
assert "extension1@1.0" in view_activated
assert "extension1@2.0" not in view_activated
assert "extension2@1.0" not in view_activated
assert os.path.exists(os.path.join(viewpath, "bin", "extension1"))
def test_view_extension_projection(
tmpdir, mock_packages, mock_archive, mock_fetch, config, install_mockery
):
install("extendee@1.0")
install("extension1@1.0")
install("extension1@2.0")
install("extension2@1.0")
viewpath = str(tmpdir.mkdir("view"))
view_projection = {"all": "{name}-{version}"}
projection_file = create_projection_file(tmpdir, view_projection)
view("symlink", viewpath, "--projection-file={0}".format(projection_file), "extension1@1.0")
all_installed = extensions("--show", "installed", "extendee")
assert "extension1@1.0" in all_installed
assert "extension1@2.0" in all_installed
assert "extension2@1.0" in all_installed
global_activated = extensions("--show", "activated", "extendee")
assert "extension1@1.0" not in global_activated
assert "extension1@2.0" not in global_activated
assert "extension2@1.0" not in global_activated
view_activated = extensions("--show", "activated", "-v", viewpath, "extendee")
assert "extension1@1.0" in view_activated
assert "extension1@2.0" not in view_activated
assert "extension2@1.0" not in view_activated
assert os.path.exists(os.path.join(viewpath, "extendee-1.0", "bin", "extension1"))
def test_view_extension_remove(
tmpdir, mock_packages, mock_archive, mock_fetch, config, install_mockery
):
@@ -147,6 +185,10 @@ def test_view_extension_remove(
view("remove", viewpath, "extension1@1.0")
all_installed = extensions("--show", "installed", "extendee")
assert "extension1@1.0" in all_installed
global_activated = extensions("--show", "activated", "extendee")
assert "extension1@1.0" not in global_activated
view_activated = extensions("--show", "activated", "-v", viewpath, "extendee")
assert "extension1@1.0" not in view_activated
assert not os.path.exists(os.path.join(viewpath, "bin", "extension1"))
@@ -175,6 +217,46 @@ def test_view_extension_conflict_ignored(
assert fin.read() == "1.0"
def test_view_extension_global_activation(
tmpdir, mock_packages, mock_archive, mock_fetch, config, install_mockery
):
install("extendee")
install("extension1@1.0")
install("extension1@2.0")
install("extension2@1.0")
viewpath = str(tmpdir.mkdir("view"))
view("symlink", viewpath, "extension1@1.0")
activate("extension1@2.0")
activate("extension2@1.0")
all_installed = extensions("--show", "installed", "extendee")
assert "extension1@1.0" in all_installed
assert "extension1@2.0" in all_installed
assert "extension2@1.0" in all_installed
global_activated = extensions("--show", "activated", "extendee")
assert "extension1@1.0" not in global_activated
assert "extension1@2.0" in global_activated
assert "extension2@1.0" in global_activated
view_activated = extensions("--show", "activated", "-v", viewpath, "extendee")
assert "extension1@1.0" in view_activated
assert "extension1@2.0" not in view_activated
assert "extension2@1.0" not in view_activated
assert os.path.exists(os.path.join(viewpath, "bin", "extension1"))
assert not os.path.exists(os.path.join(viewpath, "bin", "extension2"))
def test_view_extendee_with_global_activations(
tmpdir, mock_packages, mock_archive, mock_fetch, config, install_mockery
):
install("extendee")
install("extension1@1.0")
install("extension1@2.0")
install("extension2@1.0")
viewpath = str(tmpdir.mkdir("view"))
activate("extension1@2.0")
output = view("symlink", viewpath, "extension1@1.0")
assert "Error: Globally activated extensions cannot be used" in output
def test_view_fails_with_missing_projections_file(tmpdir):
viewpath = str(tmpdir.mkdir("view"))
projection_file = os.path.join(str(tmpdir), "nonexistent")

View File

@@ -1945,18 +1945,3 @@ def test_require_targets_are_allowed(self, mutable_database):
for s in spec.traverse():
assert s.satisfies("target=%s" % spack.platforms.test.Test.front_end)
def test_external_python_extensions_have_dependency(self):
"""Test that python extensions have access to a python dependency"""
external_conf = {
"py-extension1": {
"buildable": False,
"externals": [{"spec": "py-extension1@2.0", "prefix": "/fake"}],
}
}
spack.config.set("packages", external_conf)
spec = Spec("py-extension2").concretized()
assert "python" in spec["py-extension1"]
assert spec["python"] == spec["py-extension1"]["python"]

View File

@@ -28,7 +28,7 @@ def test_set_install_hash_length(hash_length, mutable_config, tmpdir):
assert len(hash_str) == hash_length
@pytest.mark.usefixtures("mock_packages")
@pytest.mark.use_fixtures("mock_packages")
def test_set_install_hash_length_upper_case(mutable_config, tmpdir):
mutable_config.set("config:install_hash_length", 5)
mutable_config.set(

View File

@@ -252,8 +252,12 @@ def test_install_times(install_mockery, mock_fetch, mutable_mock_repo):
# The order should be maintained
phases = [x["name"] for x in times["phases"]]
assert phases == ["one", "two", "three", "install"]
assert all(isinstance(x["seconds"], float) for x in times["phases"])
total = sum([x["seconds"] for x in times["phases"]])
for name in ["one", "two", "three", "install"]:
assert name in phases
# Give a generous difference threshold
assert abs(total - times["total"]["seconds"]) < 5
def test_flatten_deps(install_mockery, mock_fetch, mutable_mock_repo):

View File

@@ -622,7 +622,7 @@ def test_combine_phase_logs(tmpdir):
# This is the output log we will combine them into
combined_log = os.path.join(str(tmpdir), "combined-out.txt")
inst.combine_phase_logs(phase_log_files, combined_log)
spack.installer.combine_phase_logs(phase_log_files, combined_log)
with open(combined_log, "r") as log_file:
out = log_file.read()
@@ -631,22 +631,6 @@ def test_combine_phase_logs(tmpdir):
assert "Output from %s\n" % log_file in out
def test_combine_phase_logs_does_not_care_about_encoding(tmpdir):
# this is invalid utf-8 at a minimum
data = b"\x00\xF4\xBF\x00\xBF\xBF"
input = [str(tmpdir.join("a")), str(tmpdir.join("b"))]
output = str(tmpdir.join("c"))
for path in input:
with open(path, "wb") as f:
f.write(data)
inst.combine_phase_logs(input, output)
with open(output, "rb") as f:
assert f.read() == data * 2
def test_check_deps_status_install_failure(install_mockery, monkeypatch):
const_arg = installer_args(["a"], {})
installer = create_installer(const_arg)

View File

@@ -84,6 +84,12 @@ def test_inheritance_of_patches(self):
# Will error if inheritor package cannot find inherited patch files
s.concretize()
def test_dependency_extensions(self):
s = Spec("extension2")
s.concretize()
deps = set(x.name for x in s.package.dependency_activations())
assert deps == set(["extension1"])
def test_import_class_from_package(self):
from spack.pkg.builtin.mock.mpich import Mpich # noqa: F401

View File

@@ -0,0 +1,402 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""This includes tests for customized activation logic for specific packages
(e.g. python and perl).
"""
import os
import sys
import pytest
from llnl.util.link_tree import MergeConflictError
import spack.package_base
import spack.spec
from spack.directory_layout import DirectoryLayout
from spack.filesystem_view import YamlFilesystemView
pytestmark = pytest.mark.skipif(
sys.platform == "win32",
reason="Python activation not currently supported on Windows",
)
def create_ext_pkg(name, prefix, extendee_spec, monkeypatch):
ext_spec = spack.spec.Spec(name)
ext_spec._concrete = True
ext_spec.package.spec.prefix = prefix
ext_pkg = ext_spec.package
# temporarily override extendee_spec property on the package
monkeypatch.setattr(ext_pkg.__class__, "extendee_spec", extendee_spec)
return ext_pkg
def create_python_ext_pkg(name, prefix, python_spec, monkeypatch, namespace=None):
ext_pkg = create_ext_pkg(name, prefix, python_spec, monkeypatch)
ext_pkg.py_namespace = namespace
return ext_pkg
def create_dir_structure(tmpdir, dir_structure):
for fname, children in dir_structure.items():
tmpdir.ensure(fname, dir=fname.endswith("/"))
if children:
create_dir_structure(tmpdir.join(fname), children)
@pytest.fixture()
def builtin_and_mock_packages():
# These tests use mock_repo packages to test functionality of builtin
# packages for python and perl. To test this we put the mock repo at lower
# precedence than the builtin repo, so we test builtin.perl against
# builtin.mock.perl-extension.
repo_dirs = [spack.paths.packages_path, spack.paths.mock_packages_path]
with spack.repo.use_repositories(*repo_dirs):
yield
@pytest.fixture()
def python_and_extension_dirs(tmpdir, builtin_and_mock_packages):
python_dirs = {"bin/": {"python": None}, "lib/": {"python2.7/": {"site-packages/": None}}}
python_name = "python"
python_prefix = tmpdir.join(python_name)
create_dir_structure(python_prefix, python_dirs)
python_spec = spack.spec.Spec("python@2.7.12")
python_spec._concrete = True
python_spec.package.spec.prefix = str(python_prefix)
ext_dirs = {
"bin/": {"py-ext-tool": None},
"lib/": {"python2.7/": {"site-packages/": {"py-extension1/": {"sample.py": None}}}},
}
ext_name = "py-extension1"
ext_prefix = tmpdir.join(ext_name)
create_dir_structure(ext_prefix, ext_dirs)
easy_install_location = "lib/python2.7/site-packages/easy-install.pth"
with open(str(ext_prefix.join(easy_install_location)), "w") as f:
f.write(
"""path/to/ext1.egg
path/to/setuptools.egg"""
)
return str(python_prefix), str(ext_prefix)
@pytest.fixture()
def namespace_extensions(tmpdir, builtin_and_mock_packages):
ext1_dirs = {
"bin/": {"py-ext-tool1": None},
"lib/": {
"python2.7/": {
"site-packages/": {
"examplenamespace/": {"__init__.py": None, "ext1_sample.py": None}
}
}
},
}
ext2_dirs = {
"bin/": {"py-ext-tool2": None},
"lib/": {
"python2.7/": {
"site-packages/": {
"examplenamespace/": {"__init__.py": None, "ext2_sample.py": None}
}
}
},
}
ext1_name = "py-extension1"
ext1_prefix = tmpdir.join(ext1_name)
create_dir_structure(ext1_prefix, ext1_dirs)
ext2_name = "py-extension2"
ext2_prefix = tmpdir.join(ext2_name)
create_dir_structure(ext2_prefix, ext2_dirs)
return str(ext1_prefix), str(ext2_prefix), "examplenamespace"
def test_python_activation_with_files(
tmpdir, python_and_extension_dirs, monkeypatch, builtin_and_mock_packages
):
python_prefix, ext_prefix = python_and_extension_dirs
python_spec = spack.spec.Spec("python@2.7.12")
python_spec._concrete = True
python_spec.package.spec.prefix = python_prefix
ext_pkg = create_python_ext_pkg("py-extension1", ext_prefix, python_spec, monkeypatch)
python_pkg = python_spec.package
python_pkg.activate(ext_pkg, python_pkg.view())
assert os.path.exists(os.path.join(python_prefix, "bin/py-ext-tool"))
easy_install_location = "lib/python2.7/site-packages/easy-install.pth"
with open(os.path.join(python_prefix, easy_install_location), "r") as f:
easy_install_contents = f.read()
assert "ext1.egg" in easy_install_contents
assert "setuptools.egg" not in easy_install_contents
def test_python_activation_view(
tmpdir, python_and_extension_dirs, builtin_and_mock_packages, monkeypatch
):
python_prefix, ext_prefix = python_and_extension_dirs
python_spec = spack.spec.Spec("python@2.7.12")
python_spec._concrete = True
python_spec.package.spec.prefix = python_prefix
ext_pkg = create_python_ext_pkg("py-extension1", ext_prefix, python_spec, monkeypatch)
view_dir = str(tmpdir.join("view"))
layout = DirectoryLayout(view_dir)
view = YamlFilesystemView(view_dir, layout)
python_pkg = python_spec.package
python_pkg.activate(ext_pkg, view)
assert not os.path.exists(os.path.join(python_prefix, "bin/py-ext-tool"))
assert os.path.exists(os.path.join(view_dir, "bin/py-ext-tool"))
def test_python_ignore_namespace_init_conflict(
tmpdir, namespace_extensions, builtin_and_mock_packages, monkeypatch
):
"""Test the view update logic in PythonPackage ignores conflicting
instances of __init__ for packages which are in the same namespace.
"""
ext1_prefix, ext2_prefix, py_namespace = namespace_extensions
python_spec = spack.spec.Spec("python@2.7.12")
python_spec._concrete = True
ext1_pkg = create_python_ext_pkg(
"py-extension1", ext1_prefix, python_spec, monkeypatch, py_namespace
)
ext2_pkg = create_python_ext_pkg(
"py-extension2", ext2_prefix, python_spec, monkeypatch, py_namespace
)
view_dir = str(tmpdir.join("view"))
layout = DirectoryLayout(view_dir)
view = YamlFilesystemView(view_dir, layout)
python_pkg = python_spec.package
python_pkg.activate(ext1_pkg, view)
# Normally handled by Package.do_activate, but here we activate directly
view.extensions_layout.add_extension(python_spec, ext1_pkg.spec)
python_pkg.activate(ext2_pkg, view)
f1 = "lib/python2.7/site-packages/examplenamespace/ext1_sample.py"
f2 = "lib/python2.7/site-packages/examplenamespace/ext2_sample.py"
init_file = "lib/python2.7/site-packages/examplenamespace/__init__.py"
assert os.path.exists(os.path.join(view_dir, f1))
assert os.path.exists(os.path.join(view_dir, f2))
assert os.path.exists(os.path.join(view_dir, init_file))
def test_python_keep_namespace_init(
tmpdir, namespace_extensions, builtin_and_mock_packages, monkeypatch
):
"""Test the view update logic in PythonPackage keeps the namespace
__init__ file as long as one package in the namespace still
exists.
"""
ext1_prefix, ext2_prefix, py_namespace = namespace_extensions
python_spec = spack.spec.Spec("python@2.7.12")
python_spec._concrete = True
ext1_pkg = create_python_ext_pkg(
"py-extension1", ext1_prefix, python_spec, monkeypatch, py_namespace
)
ext2_pkg = create_python_ext_pkg(
"py-extension2", ext2_prefix, python_spec, monkeypatch, py_namespace
)
view_dir = str(tmpdir.join("view"))
layout = DirectoryLayout(view_dir)
view = YamlFilesystemView(view_dir, layout)
python_pkg = python_spec.package
python_pkg.activate(ext1_pkg, view)
# Normally handled by Package.do_activate, but here we activate directly
view.extensions_layout.add_extension(python_spec, ext1_pkg.spec)
python_pkg.activate(ext2_pkg, view)
view.extensions_layout.add_extension(python_spec, ext2_pkg.spec)
f1 = "lib/python2.7/site-packages/examplenamespace/ext1_sample.py"
init_file = "lib/python2.7/site-packages/examplenamespace/__init__.py"
python_pkg.deactivate(ext1_pkg, view)
view.extensions_layout.remove_extension(python_spec, ext1_pkg.spec)
assert not os.path.exists(os.path.join(view_dir, f1))
assert os.path.exists(os.path.join(view_dir, init_file))
python_pkg.deactivate(ext2_pkg, view)
view.extensions_layout.remove_extension(python_spec, ext2_pkg.spec)
assert not os.path.exists(os.path.join(view_dir, init_file))
def test_python_namespace_conflict(
tmpdir, namespace_extensions, monkeypatch, builtin_and_mock_packages
):
"""Test the view update logic in PythonPackage reports an error when two
python extensions with different namespaces have a conflicting __init__
file.
"""
ext1_prefix, ext2_prefix, py_namespace = namespace_extensions
other_namespace = py_namespace + "other"
python_spec = spack.spec.Spec("python@2.7.12")
python_spec._concrete = True
ext1_pkg = create_python_ext_pkg(
"py-extension1", ext1_prefix, python_spec, monkeypatch, py_namespace
)
ext2_pkg = create_python_ext_pkg(
"py-extension2", ext2_prefix, python_spec, monkeypatch, other_namespace
)
view_dir = str(tmpdir.join("view"))
layout = DirectoryLayout(view_dir)
view = YamlFilesystemView(view_dir, layout)
python_pkg = python_spec.package
python_pkg.activate(ext1_pkg, view)
view.extensions_layout.add_extension(python_spec, ext1_pkg.spec)
with pytest.raises(MergeConflictError):
python_pkg.activate(ext2_pkg, view)
@pytest.fixture()
def perl_and_extension_dirs(tmpdir, builtin_and_mock_packages):
perl_dirs = {
"bin/": {"perl": None},
"lib/": {"site_perl/": {"5.24.1/": {"x86_64-linux/": None}}},
}
perl_name = "perl"
perl_prefix = tmpdir.join(perl_name)
create_dir_structure(perl_prefix, perl_dirs)
perl_spec = spack.spec.Spec("perl@5.24.1")
perl_spec._concrete = True
perl_spec.package.spec.prefix = str(perl_prefix)
ext_dirs = {
"bin/": {"perl-ext-tool": None},
"lib/": {"site_perl/": {"5.24.1/": {"x86_64-linux/": {"TestExt/": {}}}}},
}
ext_name = "perl-extension"
ext_prefix = tmpdir.join(ext_name)
create_dir_structure(ext_prefix, ext_dirs)
return str(perl_prefix), str(ext_prefix)
def test_perl_activation(tmpdir, builtin_and_mock_packages, monkeypatch):
# Note the lib directory is based partly on the perl version
perl_spec = spack.spec.Spec("perl@5.24.1")
perl_spec._concrete = True
perl_name = "perl"
tmpdir.ensure(perl_name, dir=True)
perl_prefix = str(tmpdir.join(perl_name))
# Set the prefix on the package's spec reference because that is a copy of
# the original spec
perl_spec.package.spec.prefix = perl_prefix
ext_name = "perl-extension"
tmpdir.ensure(ext_name, dir=True)
ext_pkg = create_ext_pkg(ext_name, str(tmpdir.join(ext_name)), perl_spec, monkeypatch)
perl_pkg = perl_spec.package
perl_pkg.activate(ext_pkg, perl_pkg.view())
def test_perl_activation_with_files(
tmpdir, perl_and_extension_dirs, monkeypatch, builtin_and_mock_packages
):
perl_prefix, ext_prefix = perl_and_extension_dirs
perl_spec = spack.spec.Spec("perl@5.24.1")
perl_spec._concrete = True
perl_spec.package.spec.prefix = perl_prefix
ext_pkg = create_ext_pkg("perl-extension", ext_prefix, perl_spec, monkeypatch)
perl_pkg = perl_spec.package
perl_pkg.activate(ext_pkg, perl_pkg.view())
assert os.path.exists(os.path.join(perl_prefix, "bin/perl-ext-tool"))
def test_perl_activation_view(
tmpdir, perl_and_extension_dirs, monkeypatch, builtin_and_mock_packages
):
perl_prefix, ext_prefix = perl_and_extension_dirs
perl_spec = spack.spec.Spec("perl@5.24.1")
perl_spec._concrete = True
perl_spec.package.spec.prefix = perl_prefix
ext_pkg = create_ext_pkg("perl-extension", ext_prefix, perl_spec, monkeypatch)
view_dir = str(tmpdir.join("view"))
layout = DirectoryLayout(view_dir)
view = YamlFilesystemView(view_dir, layout)
perl_pkg = perl_spec.package
perl_pkg.activate(ext_pkg, view)
assert not os.path.exists(os.path.join(perl_prefix, "bin/perl-ext-tool"))
assert os.path.exists(os.path.join(view_dir, "bin/perl-ext-tool"))
def test_is_activated_upstream_extendee(tmpdir, builtin_and_mock_packages, monkeypatch):
"""When an extendee is installed upstream, make sure that the extension
spec is never considered to be globally activated for it.
"""
extendee_spec = spack.spec.Spec("python")
extendee_spec._concrete = True
python_name = "python"
tmpdir.ensure(python_name, dir=True)
python_prefix = str(tmpdir.join(python_name))
# Set the prefix on the package's spec reference because that is a copy of
# the original spec
extendee_spec.package.spec.prefix = python_prefix
monkeypatch.setattr(extendee_spec.__class__, "installed_upstream", True)
ext_name = "py-extension1"
tmpdir.ensure(ext_name, dir=True)
ext_pkg = create_ext_pkg(ext_name, str(tmpdir.join(ext_name)), extendee_spec, monkeypatch)
# The view should not be checked at all if the extendee is installed
# upstream, so use 'None' here
mock_view = None
assert not ext_pkg.is_activated(mock_view)

View File

@@ -32,27 +32,6 @@ def test_write_and_read_cache_file(file_cache):
assert text == "foobar\n"
@pytest.mark.skipif(sys.platform == "win32", reason="Locks not supported on Windows")
def test_failed_write_and_read_cache_file(file_cache):
"""Test failing to write then attempting to read a cached file."""
with pytest.raises(RuntimeError, match=r"^foobar$"):
with file_cache.write_transaction("test.yaml") as (old, new):
assert old is None
assert new is not None
raise RuntimeError("foobar")
# Cache dir should have exactly one (lock) file
assert os.listdir(file_cache.root) == [".test.yaml.lock"]
# File does not exist
assert not file_cache.init_entry("test.yaml")
# Attempting to read will cause a file not found error
with pytest.raises((IOError, OSError), match=r"test\.yaml"):
with file_cache.read_transaction("test.yaml"):
pass
def test_write_and_remove_cache_file(file_cache):
"""Test two write transactions on a cached file. Then try to remove an
entry from it.

View File

@@ -1,150 +0,0 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import json
from six import StringIO
import spack.util.timer as timer
class Tick(object):
"""Timer that increments the seconds passed by 1
everytime tick is called."""
def __init__(self):
self.time = 0.0
def tick(self):
self.time += 1
return self.time
def test_timer():
# 0
t = timer.Timer(now=Tick().tick)
# 1 (restart)
t.start()
# 2
t.start("wrapped")
# 3
t.start("first")
# 4
t.stop("first")
assert t.duration("first") == 1.0
# 5
t.start("second")
# 6
t.stop("second")
assert t.duration("second") == 1.0
# 7-8
with t.measure("third"):
pass
assert t.duration("third") == 1.0
# 9
t.stop("wrapped")
assert t.duration("wrapped") == 7.0
# tick 10-13
t.start("not-stopped")
assert t.duration("not-stopped") == 1.0
assert t.duration("not-stopped") == 2.0
assert t.duration("not-stopped") == 3.0
# 14
assert t.duration() == 13.0
# 15
t.stop()
assert t.duration() == 14.0
def test_timer_stop_stops_all():
# Ensure that timer.stop() effectively stops all timers.
# 0
t = timer.Timer(now=Tick().tick)
# 1
t.start("first")
# 2
t.start("second")
# 3
t.start("third")
# 4
t.stop()
assert t.duration("first") == 3.0
assert t.duration("second") == 2.0
assert t.duration("third") == 1.0
assert t.duration() == 4.0
def test_stopping_unstarted_timer_is_no_error():
t = timer.Timer(now=Tick().tick)
assert t.duration("hello") == 0.0
t.stop("hello")
assert t.duration("hello") == 0.0
def test_timer_write():
text_buffer = StringIO()
json_buffer = StringIO()
# 0
t = timer.Timer(now=Tick().tick)
# 1
t.start("timer")
# 2
t.stop("timer")
# 3
t.stop()
t.write_tty(text_buffer)
t.write_json(json_buffer)
output = text_buffer.getvalue().splitlines()
assert "timer" in output[0]
assert "1.000s" in output[0]
assert "total" in output[1]
assert "3.000s" in output[1]
deserialized = json.loads(json_buffer.getvalue())
assert deserialized == {
"phases": [{"name": "timer", "seconds": 1.0}],
"total": {"seconds": 3.0},
}
def test_null_timer():
# Just ensure that the interface of the noop-timer doesn't break at some point
buffer = StringIO()
t = timer.NullTimer()
t.start()
t.start("first")
t.stop("first")
with t.measure("second"):
pass
t.stop()
assert t.duration("first") == 0.0
assert t.duration() == 0.0
assert not t.phases
t.write_json(buffer)
t.write_tty(buffer)
assert not buffer.getvalue()

View File

@@ -3,6 +3,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import sys
import pytest
@@ -12,6 +13,26 @@
from spack.spec import Spec
@pytest.mark.skipif(sys.platform == "win32", reason="Not supported on Windows (yet)")
def test_global_activation(install_mockery, mock_fetch):
"""This test ensures that views which are maintained inside of an extendee
package's prefix are maintained as expected and are compatible with
global activations prior to #7152.
"""
spec = Spec("extension1").concretized()
pkg = spec.package
pkg.do_install()
pkg.do_activate()
extendee_spec = spec["extendee"]
extendee_pkg = spec["extendee"].package
view = extendee_pkg.view()
assert pkg.is_activated(view)
expected_path = os.path.join(extendee_spec.prefix, ".spack", "extensions.yaml")
assert view.extensions_layout.extension_file_path(extendee_spec) == expected_path
@pytest.mark.skipif(sys.platform == "win32", reason="Not supported on Windows (yet)")
def test_remove_extensions_ordered(install_mockery, mock_fetch, tmpdir):
view_dir = str(tmpdir.join("view"))

View File

@@ -915,7 +915,7 @@ def inspect_path(root, inspections, exclude=None):
env = EnvironmentModifications()
# Inspect the prefix to check for the existence of common directories
for relative_path, variables in inspections.items():
expected = os.path.join(root, os.path.normpath(relative_path))
expected = os.path.join(root, relative_path)
if os.path.isdir(expected) and not exclude(expected):
for variable in variables:

View File

@@ -144,7 +144,8 @@ def __exit__(cm, type, value, traceback):
cm.tmp_file.close()
if value:
os.remove(cm.tmp_filename)
# remove tmp on exception & raise it
shutil.rmtree(cm.tmp_filename, True)
else:
rename(cm.tmp_filename, cm.orig_filename)

View File

@@ -11,140 +11,51 @@
"""
import sys
import time
from collections import OrderedDict, namedtuple
from contextlib import contextmanager
from llnl.util.lang import pretty_seconds
import spack.util.spack_json as sjson
Interval = namedtuple("Interval", ("begin", "end"))
#: name for the global timer (used in start(), stop(), duration() without arguments)
global_timer_name = "_global"
class NullTimer(object):
"""Timer interface that does nothing, useful in for "tell
don't ask" style code when timers are optional."""
def start(self, name=global_timer_name):
pass
def stop(self, name=global_timer_name):
pass
def duration(self, name=global_timer_name):
return 0.0
@contextmanager
def measure(self, name):
yield
@property
def phases(self):
return []
def write_json(self, out=sys.stdout):
pass
def write_tty(self, out=sys.stdout):
pass
#: instance of a do-nothing timer
NULL_TIMER = NullTimer()
class Timer(object):
"""Simple interval timer"""
"""
Simple timer for timing phases of a solve or install
"""
def __init__(self, now=time.time):
"""
Arguments:
now: function that gives the seconds since e.g. epoch
"""
self._now = now
self._timers = OrderedDict() # type: OrderedDict[str,Interval]
def __init__(self):
self.start = time.time()
self.last = self.start
self.phases = {}
self.end = None
# _global is the overall timer since the instance was created
self._timers[global_timer_name] = Interval(self._now(), end=None)
def start(self, name=global_timer_name):
"""
Start or restart a named timer, or the global timer when no name is given.
Arguments:
name (str): Optional name of the timer. When no name is passed, the
global timer is started.
"""
self._timers[name] = Interval(self._now(), None)
def stop(self, name=global_timer_name):
"""
Stop a named timer, or all timers when no name is given. Stopping a
timer that has not started has no effect.
Arguments:
name (str): Optional name of the timer. When no name is passed, all
timers are stopped.
"""
interval = self._timers.get(name, None)
if not interval:
return
self._timers[name] = Interval(interval.begin, self._now())
def duration(self, name=global_timer_name):
"""
Get the time in seconds of a named timer, or the total time if no
name is passed. The duration is always 0 for timers that have not been
started, no error is raised.
Arguments:
name (str): (Optional) name of the timer
Returns:
float: duration of timer.
"""
try:
interval = self._timers[name]
except KeyError:
return 0.0
# Take either the interval end, the global timer, or now.
end = interval.end or self._timers[global_timer_name].end or self._now()
return end - interval.begin
@contextmanager
def measure(self, name):
"""
Context manager that allows you to time a block of code.
Arguments:
name (str): Name of the timer
"""
begin = self._now()
yield
self._timers[name] = Interval(begin, self._now())
def phase(self, name):
last = self.last
now = time.time()
self.phases[name] = now - last
self.last = now
@property
def phases(self):
"""Get all named timers (excluding the global/total timer)"""
return [k for k in self._timers.keys() if k != global_timer_name]
@property
def total(self):
"""Return the total time"""
if self.end:
return self.end - self.start
return time.time() - self.start
def stop(self):
"""
Stop the timer to record a total time, if desired.
"""
self.end = time.time()
def write_json(self, out=sys.stdout):
"""Write a json object with times to file"""
phases = [{"name": p, "seconds": self.duration(p)} for p in self.phases]
times = {"phases": phases, "total": {"seconds": self.duration()}}
"""
Write a json object with times to file
"""
phases = [{"name": p, "seconds": s} for p, s in self.phases.items()]
times = {"phases": phases, "total": {"seconds": self.total}}
out.write(sjson.dump(times))
def write_tty(self, out=sys.stdout):
"""Write a human-readable summary of timings"""
# Individual timers ordered by registration
formatted = [(p, pretty_seconds(self.duration(p))) for p in self.phases]
# Total time
formatted.append(("total", pretty_seconds(self.duration())))
# Write to out
for name, duration in formatted:
out.write(" {:10s} {:>10s}\n".format(name, duration))
now = time.time()
out.write("Time:\n")
for phase, t in self.phases.items():
out.write(" %-15s%.4f\n" % (phase + ":", t))
out.write("Total: %.4f\n" % (now - self.start))

View File

@@ -162,12 +162,39 @@ def check_spec_manifest(spec):
results.add_error(prefix, "manifest corrupted")
return results
# Get extensions active in spec
view = spack.filesystem_view.YamlFilesystemView(prefix, spack.store.layout)
active_exts = view.extensions_layout.extension_map(spec).values()
ext_file = ""
if active_exts:
# No point checking contents of this file as it is the only source of
# truth for that information.
ext_file = view.extensions_layout.extension_file_path(spec)
def is_extension_artifact(p):
if os.path.islink(p):
if any(os.readlink(p).startswith(e.prefix) for e in active_exts):
# This file is linked in by an extension. Belongs to extension
return True
elif os.path.isdir(p) and p not in manifest:
if all(is_extension_artifact(os.path.join(p, f)) for f in os.listdir(p)):
return True
return False
for root, dirs, files in os.walk(prefix):
for entry in list(dirs + files):
path = os.path.join(root, entry)
# Do not check links from prefix to active extension
# TODO: make this stricter for non-linux systems that use symlink
# permissions
# Do not check directories that only exist for extensions
if is_extension_artifact(path):
continue
# Do not check manifest file. Can't store your own hash
if path == manifest_file:
# Nothing to check for ext_file
if path == manifest_file or path == ext_file:
continue
data = manifest.pop(path, {})

View File

@@ -153,113 +153,113 @@ protected-publish:
# still run on UO runners and be signed
# using the previous approach.
########################################
# .e4s-mac:
# variables:
# SPACK_CI_STACK_NAME: e4s-mac
# allow_failure: True
.e4s-mac:
variables:
SPACK_CI_STACK_NAME: e4s-mac
allow_failure: True
# .mac-pr:
# only:
# - /^pr[\d]+_.*$/
# - /^github\/pr[\d]+_.*$/
# variables:
# SPACK_BUILDCACHE_DESTINATION: "s3://spack-binaries-prs/${CI_COMMIT_REF_NAME}"
# SPACK_PRUNE_UNTOUCHED: "True"
.mac-pr:
only:
- /^pr[\d]+_.*$/
- /^github\/pr[\d]+_.*$/
variables:
SPACK_BUILDCACHE_DESTINATION: "s3://spack-binaries-prs/${CI_COMMIT_REF_NAME}"
SPACK_PRUNE_UNTOUCHED: "True"
# .mac-protected:
# only:
# - /^develop$/
# - /^releases\/v.*/
# - /^v.*/
# - /^github\/develop$/
# variables:
# SPACK_BUILDCACHE_DESTINATION: "s3://spack-binaries/${CI_COMMIT_REF_NAME}/${SPACK_CI_STACK_NAME}"
.mac-protected:
only:
- /^develop$/
- /^releases\/v.*/
- /^v.*/
- /^github\/develop$/
variables:
SPACK_BUILDCACHE_DESTINATION: "s3://spack-binaries/${CI_COMMIT_REF_NAME}/${SPACK_CI_STACK_NAME}"
# .mac-pr-build:
# extends: [ ".mac-pr", ".build" ]
# variables:
# AWS_ACCESS_KEY_ID: ${PR_MIRRORS_AWS_ACCESS_KEY_ID}
# AWS_SECRET_ACCESS_KEY: ${PR_MIRRORS_AWS_SECRET_ACCESS_KEY}
.mac-pr-build:
extends: [ ".mac-pr", ".build" ]
variables:
AWS_ACCESS_KEY_ID: ${PR_MIRRORS_AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${PR_MIRRORS_AWS_SECRET_ACCESS_KEY}
# .mac-protected-build:
# extends: [ ".mac-protected", ".build" ]
# variables:
# AWS_ACCESS_KEY_ID: ${PROTECTED_MIRRORS_AWS_ACCESS_KEY_ID}
# AWS_SECRET_ACCESS_KEY: ${PROTECTED_MIRRORS_AWS_SECRET_ACCESS_KEY}
# SPACK_SIGNING_KEY: ${PACKAGE_SIGNING_KEY}
.mac-protected-build:
extends: [ ".mac-protected", ".build" ]
variables:
AWS_ACCESS_KEY_ID: ${PROTECTED_MIRRORS_AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${PROTECTED_MIRRORS_AWS_SECRET_ACCESS_KEY}
SPACK_SIGNING_KEY: ${PACKAGE_SIGNING_KEY}
# e4s-mac-pr-generate:
# extends: [".e4s-mac", ".mac-pr"]
# stage: generate
# script:
# - tmp="$(mktemp -d)"; export SPACK_USER_CONFIG_PATH="$tmp"; export SPACK_USER_CACHE_PATH="$tmp"
# - . "./share/spack/setup-env.sh"
# - spack --version
# - cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
# - spack env activate --without-view .
# - spack ci generate --check-index-only
# --buildcache-destination "${SPACK_BUILDCACHE_DESTINATION}"
# --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
# --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
# artifacts:
# paths:
# - "${CI_PROJECT_DIR}/jobs_scratch_dir"
# tags:
# - lambda
# interruptible: true
# retry:
# max: 2
# when:
# - runner_system_failure
# - stuck_or_timeout_failure
# timeout: 60 minutes
e4s-mac-pr-generate:
extends: [".e4s-mac", ".mac-pr"]
stage: generate
script:
- tmp="$(mktemp -d)"; export SPACK_USER_CONFIG_PATH="$tmp"; export SPACK_USER_CACHE_PATH="$tmp"
- . "./share/spack/setup-env.sh"
- spack --version
- cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
- spack env activate --without-view .
- spack ci generate --check-index-only
--buildcache-destination "${SPACK_BUILDCACHE_DESTINATION}"
--artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
--output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
artifacts:
paths:
- "${CI_PROJECT_DIR}/jobs_scratch_dir"
tags:
- lambda
interruptible: true
retry:
max: 2
when:
- runner_system_failure
- stuck_or_timeout_failure
timeout: 60 minutes
# e4s-mac-protected-generate:
# extends: [".e4s-mac", ".mac-protected"]
# stage: generate
# script:
# - tmp="$(mktemp -d)"; export SPACK_USER_CONFIG_PATH="$tmp"; export SPACK_USER_CACHE_PATH="$tmp"
# - . "./share/spack/setup-env.sh"
# - spack --version
# - cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
# - spack env activate --without-view .
# - spack ci generate --check-index-only
# --artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
# --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
# artifacts:
# paths:
# - "${CI_PROJECT_DIR}/jobs_scratch_dir"
# tags:
# - omicron
# interruptible: true
# retry:
# max: 2
# when:
# - runner_system_failure
# - stuck_or_timeout_failure
# timeout: 60 minutes
e4s-mac-protected-generate:
extends: [".e4s-mac", ".mac-protected"]
stage: generate
script:
- tmp="$(mktemp -d)"; export SPACK_USER_CONFIG_PATH="$tmp"; export SPACK_USER_CACHE_PATH="$tmp"
- . "./share/spack/setup-env.sh"
- spack --version
- cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
- spack env activate --without-view .
- spack ci generate --check-index-only
--artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
--output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
artifacts:
paths:
- "${CI_PROJECT_DIR}/jobs_scratch_dir"
tags:
- omicron
interruptible: true
retry:
max: 2
when:
- runner_system_failure
- stuck_or_timeout_failure
timeout: 60 minutes
# e4s-mac-pr-build:
# extends: [ ".e4s-mac", ".mac-pr-build" ]
# trigger:
# include:
# - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
# job: e4s-mac-pr-generate
# strategy: depend
# needs:
# - artifacts: True
# job: e4s-mac-pr-generate
e4s-mac-pr-build:
extends: [ ".e4s-mac", ".mac-pr-build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: e4s-mac-pr-generate
strategy: depend
needs:
- artifacts: True
job: e4s-mac-pr-generate
# e4s-mac-protected-build:
# extends: [ ".e4s-mac", ".mac-protected-build" ]
# trigger:
# include:
# - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
# job: e4s-mac-protected-generate
# strategy: depend
# needs:
# - artifacts: True
# job: e4s-mac-protected-generate
e4s-mac-protected-build:
extends: [ ".e4s-mac", ".mac-protected-build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: e4s-mac-protected-generate
strategy: depend
needs:
- artifacts: True
job: e4s-mac-protected-generate
########################################
# E4S pipeline

View File

@@ -254,9 +254,6 @@ spack:
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
# AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
- if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
# UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack --color=always --backtrace ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)

View File

@@ -175,9 +175,6 @@ spack:
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
# AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
- if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
# UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack --color=always --backtrace ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)

View File

@@ -47,9 +47,6 @@ spack:
- cd ${SPACK_CONCRETE_ENV_DIR}
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
# AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
- if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
# UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack --color=always --backtrace ci rebuild

View File

@@ -60,9 +60,6 @@ spack:
- cd ${SPACK_CONCRETE_ENV_DIR}
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
# AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
- if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
# UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack --color=always --backtrace ci rebuild

View File

@@ -280,9 +280,6 @@ spack:
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
# AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
- if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
# UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- export PATH=/bootstrap/runner/view/bin:${PATH}

View File

@@ -270,9 +270,6 @@ spack:
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
# AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
- if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
# UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack --color=always --backtrace ci rebuild --tests > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)

View File

@@ -104,9 +104,6 @@ spack:
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
# AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
- if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
# UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack --color=always --backtrace ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)

View File

@@ -107,9 +107,6 @@ spack:
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
# AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
- if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
# UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack --color=always --backtrace ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)

View File

@@ -110,9 +110,6 @@ spack:
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
# AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
- if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
# UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack --color=always --backtrace ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)

View File

@@ -75,9 +75,6 @@ spack:
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
# AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
- if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
# UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack --color=always --backtrace ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)

View File

@@ -77,9 +77,6 @@ spack:
- cd ${SPACK_CONCRETE_ENV_DIR}
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
# AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
- if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
# UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack --color=always --backtrace ci rebuild

View File

@@ -79,9 +79,6 @@ spack:
- cd ${SPACK_CONCRETE_ENV_DIR}
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
# AWS runners mount E4S public key (verification), UO runners mount public/private (signing/verification)
- if [[ -r /mnt/key/e4s.gpg ]]; then spack gpg trust /mnt/key/e4s.gpg; fi
# UO runners mount intermediate ci public key (verification), AWS runners mount public/private (signing/verification)
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack --color=always --backtrace ci rebuild

View File

@@ -337,7 +337,16 @@ _spack() {
then
SPACK_COMPREPLY="-h --help -H --all-help --color -c --config -C --config-scope -d --debug --timestamp --pdb -e --env -D --env-dir -E --no-env --use-env-repo -k --insecure -l --enable-locks -L --disable-locks -m --mock -b --bootstrap -p --profile --sorted-profile --lines -v --verbose --stacktrace --backtrace -V --version --print-shell-vars"
else
SPACK_COMPREPLY="add arch audit blame bootstrap build-env buildcache cd change checksum ci clean clone commands compiler compilers concretize config containerize create debug dependencies dependents deprecate dev-build develop diff docs edit env extensions external fetch find gc gpg graph help info install license list load location log-parse maintainers make-installer mark mirror module patch pkg providers pydoc python reindex remove rm repo resource restage solve spec stage style tags test test-env tutorial undevelop uninstall unit-test unload url verify versions view"
SPACK_COMPREPLY="activate add arch audit blame bootstrap build-env buildcache cd change checksum ci clean clone commands compiler compilers concretize config containerize create deactivate debug dependencies dependents deprecate dev-build develop diff docs edit env extensions external fetch find gc gpg graph help info install license list load location log-parse maintainers make-installer mark mirror module patch pkg providers pydoc python reindex remove rm repo resource restage solve spec stage style tags test test-env tutorial undevelop uninstall unit-test unload url verify versions view"
fi
}
_spack_activate() {
if $list_options
then
SPACK_COMPREPLY="-h --help -f --force -v --view"
else
_installed_packages
fi
}
@@ -829,6 +838,15 @@ _spack_create() {
fi
}
_spack_deactivate() {
if $list_options
then
SPACK_COMPREPLY="-h --help -f --force -v --view -a --all"
else
_installed_packages
fi
}
_spack_debug() {
if $list_options
then
@@ -1021,7 +1039,7 @@ _spack_env_depfile() {
_spack_extensions() {
if $list_options
then
SPACK_COMPREPLY="-h --help -l --long -L --very-long -d --deps -p --paths -s --show"
SPACK_COMPREPLY="-h --help -l --long -L --very-long -d --deps -p --paths -s --show -v --view"
else
_extensions
fi

View File
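
The completion script above is generated from Spack's argparse command definitions: each command's option strings become the space-separated `SPACK_COMPREPLY` list. A rough sketch of that mapping, using a hypothetical stand-in for the `spack activate` parser (not Spack's actual `setup_parser`; note `_actions` is a private argparse attribute):

```python
import argparse

# Hypothetical stand-in for the real `spack activate` parser.
parser = argparse.ArgumentParser(prog="spack activate")
parser.add_argument("-f", "--force", action="store_true")
parser.add_argument("-v", "--view")

# Collect every option string, as a completion generator would.
options = [opt for action in parser._actions for opt in action.option_strings]
print(" ".join(options))  # -h --help -f --force -v --view
```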

@@ -0,0 +1,40 @@
compilers:
all:
clang@3.3:
cc: /path/to/clang
cxx: /path/to/clang++
f77: None
fc: None
modules: None
strategy: PATH
gcc@4.5.0:
cc: /path/to/gcc
cxx: /path/to/g++
f77: /path/to/gfortran
fc: /path/to/gfortran
modules: None
strategy: PATH
gcc@5.2.0:
cc: cc
cxx: CC
f77: ftn
fc: ftn
modules:
- PrgEnv-gnu
- gcc/5.2.0
strategy: MODULES
intel@15.0.1:
cc: cc
cxx: CC
f77: ftn
fc: ftn
modules:
- PrgEnv-intel
- intel/15.0.1
strategy: MODULES
intel@15.1.2:
cc: /path/to/icc
cxx: /path/to/icpc
f77: /path/to/ifort
fc: /path/to/ifort
strategy: PATH

View File
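
The new fixture mixes two detection strategies: `PATH` entries point at concrete compiler binaries, while `MODULES` entries name modulefiles to load and fall back on the Cray wrappers (`cc`/`CC`/`ftn`). A small sketch of filtering such a file by strategy, assuming PyYAML is installed and the fixture is saved locally as `compilers.yaml` (illustrative only, not Spack's config loader):

```python
import yaml

with open("compilers.yaml") as f:
    config = yaml.safe_load(f)

# Group compiler specs by how they are expected to be detected.
by_strategy = {}
for spec, entry in config["compilers"]["all"].items():
    by_strategy.setdefault(entry["strategy"], []).append(spec)

print(by_strategy["PATH"])     # ['clang@3.3', 'gcc@4.5.0', 'intel@15.1.2']
print(by_strategy["MODULES"])  # ['gcc@5.2.0', 'intel@15.0.1']
```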

@@ -48,9 +48,3 @@ def after_autoreconf_1(self):
@run_after("autoreconf", when="@2.0")
def after_autoreconf_2(self):
os.environ["AFTER_AUTORECONF_2_CALLED"] = "1"
def check(self):
os.environ["CHECK_CALLED"] = "1"
def installcheck(self):
os.environ["INSTALLCHECK_CALLED"] = "1"

View File

@@ -3,6 +3,8 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from time import sleep
from spack.package import *
@@ -15,12 +17,15 @@ class DevBuildTestInstallPhases(Package):
phases = ["one", "two", "three", "install"]
def one(self, spec, prefix):
sleep(1)
print("One locomoco")
def two(self, spec, prefix):
sleep(2)
print("Two locomoco")
def three(self, spec, prefix):
sleep(3)
print("Three locomoco")
def install(self, spec, prefix):

View File
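
This test package declares a custom `phases` list, and the installer invokes the correspondingly named methods in order. A toy sketch of that dispatch (it mimics, but is not, Spack's installer loop):

```python
class FakePackage:
    phases = ["one", "two", "install"]

    def one(self, spec, prefix):
        print("phase one")

    def two(self, spec, prefix):
        print("phase two")

    def install(self, spec, prefix):
        print("phase install")


def run_phases(pkg, spec=None, prefix=None):
    # Look up and invoke each declared phase, in declaration order.
    for name in pkg.phases:
        getattr(pkg, name)(spec, prefix)


run_phases(FakePackage())
```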

@@ -44,6 +44,10 @@ class Boost(Package):
version("1.66.0", sha256="5721818253e6a0989583192f96782c4a98eb6204965316df9f5ad75819225ca9")
version("1.65.1", sha256="9807a5d16566c57fd74fb522764e0b134a8bbe6b6e8967b83afefd30dcd3be81")
version("1.65.0", sha256="ea26712742e2fb079c2a566a31f3266973b76e38222b9f88b387e3c8b2f9902c")
# NOTE: 1.64.0 seems fine for *most* applications, but if you need
# +python and +mpi, there seem to be errors with out-of-date
# API calls from mpi/python.
# See: https://github.com/spack/spack/issues/3963
version("1.64.0", sha256="7bcc5caace97baa948931d712ea5f37038dbb1c5d89b43ad4def4ed7cb683332")
version("1.63.0", sha256="beae2529f759f6b3bf3f4969a19c2e9d6f0c503edcb2de4a61d1428519fcb3b0")
version("1.62.0", sha256="36c96b0f6155c98404091d8ceb48319a28279ca0333fba1ad8611eb90afb2ca0")
@@ -241,13 +245,6 @@ def libs(self):
conflicts("cxxstd=98", when="+fiber") # Fiber requires >=C++11.
conflicts("~context", when="+fiber") # Fiber requires Context.
# NOTE: 1.64.0 seems fine for *most* applications, but if you need
# +python and +mpi, there seem to be errors with out-of-date
# API calls from mpi/python.
# See: https://github.com/spack/spack/issues/3963
conflicts("@1.64.0", when="+python", msg="Errors with out-of-date API calls from Python")
conflicts("@1.64.0", when="+mpi", msg="Errors with out-of-date API calls from MPI")
conflicts("+taggedlayout", when="+versionedlayout")
conflicts("+numpy", when="~python")

View File
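
The Boost change turns the long-standing 1.64.0 caveat from a comment into machine-checkable `conflicts()` declarations with explanatory messages. A toy illustration of what such declarations amount to, using naive substring matching rather than Spack's real spec semantics:

```python
# (version constraint, variant, message); data copied from the hunk.
conflicts = [
    ("@1.64.0", "+python", "Errors with out-of-date API calls from Python"),
    ("@1.64.0", "+mpi", "Errors with out-of-date API calls from MPI"),
]

spec = "boost@1.64.0+python"  # hypothetical spec under test
for version, variant, msg in conflicts:
    if version in spec and variant in spec:
        print("conflict:", msg)
```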

@@ -67,10 +67,8 @@ class Conduit(CMakePackage):
# package variants
###########################################################################
variant("examples", default=True, description="Build Conduit examples")
variant("shared", default=True, description="Build Conduit as shared libs")
variant("test", default=True, description="Enable Conduit unit tests")
variant("utilities", default=True, description="Build Conduit utilities")
# variants for python support
variant("python", default=False, description="Build Conduit Python support")
@@ -377,19 +375,6 @@ def hostconfig(self):
if flags:
cfg.write(cmake_cache_entry("BLT_EXE_LINKER_FLAGS", flags, description))
#######################
# Examples/Utilities
#######################
if "+examples" in spec:
cfg.write(cmake_cache_entry("ENABLE_EXAMPLES", "ON"))
else:
cfg.write(cmake_cache_entry("ENABLE_EXAMPLES", "OFF"))
if "+utilities" in spec:
cfg.write(cmake_cache_entry("ENABLE_UTILS", "ON"))
else:
cfg.write(cmake_cache_entry("ENABLE_UTILS", "OFF"))
#######################
# Unit Tests
#######################

View File
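
Conduit's `hostconfig()` builds a CMake initial-cache file one `cmake_cache_entry()` at a time; the hunk drops the examples/utilities toggles from it. A rough sketch of what such a helper emits, with the signature assumed from its usage above:

```python
def cmake_cache_entry(name, value, comment=""):
    # Render one set(... CACHE ...) line for a CMake initial-cache file.
    return 'set({0} "{1}" CACHE STRING "{2}")\n'.format(name, value, comment)


# Mirrors the removed toggle: variant enabled -> ON, disabled -> OFF.
spec_has_examples = True
print(cmake_cache_entry("ENABLE_EXAMPLES", "ON" if spec_has_examples else "OFF"))
```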

@@ -17,11 +17,8 @@ class CpuFeatures(CMakePackage):
version("0.7.0", tag="v0.7.0")
variant("shared", description="Build shared libraries", default=False)
depends_on("cmake@3.0.0:", type="build")
def cmake_args(self):
args = ["-DBUILD_TESTING:BOOL=OFF"]
args += self.enable_or_disable("BUILD_SHARED_LIBS", variant="shared")
return args

View File
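
The `cpu-features` recipe derives its shared-library setting from the `shared` variant in one call. A standalone sketch of the variant-to-flag translation such helpers perform (simplified, not Spack's implementation):

```python
def define_bool(cmake_var, enabled):
    # Translate a boolean variant into a CMake cache definition,
    # e.g. shared=True -> "-DBUILD_SHARED_LIBS:BOOL=ON".
    return "-D{0}:BOOL={1}".format(cmake_var, "ON" if enabled else "OFF")


print(define_bool("BUILD_SHARED_LIBS", True))   # -DBUILD_SHARED_LIBS:BOOL=ON
print(define_bool("BUILD_SHARED_LIBS", False))  # -DBUILD_SHARED_LIBS:BOOL=OFF
```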

@@ -14,7 +14,6 @@ class Cusz(CMakePackage, CudaPackage):
url = "https://github.com/szcompressor/cuSZ/archive/refs/tags/v0.3.tar.gz"
maintainers = ["jtian0", "dingwentao"]
tags = ["e4s"]
conflicts("~cuda")
conflicts("cuda_arch=none", when="+cuda")
@@ -22,9 +21,6 @@ class Cusz(CMakePackage, CudaPackage):
version("develop", branch="develop")
version("0.3", sha256="0feb4f7fd64879fe147624dd5ad164adf3983f79b2e0383d35724f8d185dcb11")
# these version of Cuda provide the CUB headers, but not CUB cmake configuration that we use.
conflicts("cuda@11.0.2:11.2.2")
depends_on("cub", when="^ cuda@:10.2.89")
def cmake_args(self):

View File

@@ -19,12 +19,6 @@ class Dcap(AutotoolsPackage):
depends_on("libtool", type="build")
depends_on("m4", type="build")
variant("plugins", default=True, description="Build plugins")
def patch(self):
if self.spec.satisfies("~plugins"):
filter_file("SUBDIRS = .*", "SUBDIRS = src", "Makefile.am")
def autoreconf(self, spec, prefix):
bash = which("bash")
bash("./bootstrap.sh")

View File
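
The `dcap` recipe's `patch()` step rewrites `Makefile.am` with `filter_file`, an in-place regex find-and-replace. A minimal `re`-based sketch of the same operation, assuming a `Makefile.am` exists in the working directory (illustrative, not Spack's `filter_file`):

```python
import re


def filter_file_sketch(pattern, replacement, path):
    # In-place regex substitution over a whole file.
    with open(path) as f:
        text = f.read()
    with open(path, "w") as f:
        f.write(re.sub(pattern, replacement, text))


# Drop every subdirectory except src/ from the automake build,
# as the ~plugins branch in the hunk above does.
filter_file_sketch(r"SUBDIRS = .*", "SUBDIRS = src", "Makefile.am")
```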

@@ -135,15 +135,13 @@ def configure_args(self):
options += self.with_or_without("mpi")
# TODO: --disable-sse-assembly, --enable-sparc64, --enable-neon-arch64
# Don't include vsx; as of 2022.05 it fails (reported upstream).
# Altivec SSE intrinsics are used anyway.
simd_features = ["sse", "avx", "avx2", "avx512", "sve128", "sve256", "sve512"]
simd_features = ["vsx", "sse", "avx", "avx2", "avx512", "sve128", "sve256", "sve512"]
for feature in simd_features:
msg = "--enable-{0}" if feature in spec.target else "--disable-{0}"
options.append(msg.format(feature))
if spec.target.family != "x86_64":
if spec.target.family == "aarch64":
options.append("--disable-sse-assembly")
if "%aocc" in spec:

View File
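
The elpa hunk re-adds `vsx` and keys each `--enable-X`/`--disable-X` flag off whether the target advertises the feature. The flag construction, extracted into a runnable sketch where the feature set is a made-up stand-in for `spec.target`:

```python
simd_features = ["vsx", "sse", "avx", "avx2", "avx512", "sve128", "sve256", "sve512"]

# Stand-in for spec.target: features a hypothetical CPU supports.
target_features = {"sse", "avx", "avx2"}

options = []
for feature in simd_features:
    msg = "--enable-{0}" if feature in target_features else "--disable-{0}"
    options.append(msg.format(feature))

print(options)
# ['--disable-vsx', '--enable-sse', '--enable-avx', '--enable-avx2',
#  '--disable-avx512', '--disable-sve128', '--disable-sve256', '--disable-sve512']
```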

@@ -16,7 +16,6 @@ class Emacs(AutotoolsPackage, GNUMirrorPackage):
gnu_mirror_path = "emacs/emacs-24.5.tar.gz"
version("master", branch="master")
version("28.2", sha256="a6912b14ef4abb1edab7f88191bfd61c3edd7085e084de960a4f86485cb7cad8")
version("28.1", sha256="1439bf7f24e5769f35601dbf332e74dfc07634da6b1e9500af67188a92340a28")
version("27.2", sha256="80ff6118fb730a6d8c704dccd6915a6c0e0a166ab1daeef9fe68afa9073ddb73")
version("27.1", sha256="ffbfa61dc951b92cf31ebe3efc86c5a9d4411a1222b8a4ae6716cfd0e2a584db")

View File

@@ -681,9 +681,7 @@ def build_optimization_config(self):
# concretization, so we'll stick to that. The other way around however can
# result in compilation errors, when gcc@7 is built with gcc@11, and znver3
# is taken as the target, which gcc@7 doesn't support.
# Note we're not adding this for aarch64 because of
# https://github.com/spack/spack/issues/31184
if "+bootstrap %gcc" in self.spec and self.spec.target.family != "aarch64":
if "+bootstrap %gcc" in self.spec:
flags += " " + self.get_common_target_flags(self.spec)
if "+bootstrap" in self.spec:

Some files were not shown because too many files have changed in this diff.