Compare commits


1 Commit

Author SHA1 Message Date
Todd Gamblin
3985b307a1 concretizer: add --minimal configuration option
The reusing concretizer minimizes builds, but it still preserves
defaults from packages and preferences while doing that. We can be
more aggressive by making minimization the top priority, at the
expense of "weird" concretizations.

This can be advantageous: if you write your packages as explicitly
as possible, then you can use that with `--minimal` to get the
smallest possible package configuration (at least in terms of the
number of packages in the build).  Conversely, you can use minimal
concretization as a kind of worst case to ensure that you have the
"right" constraints on your dependencies.

Example for intuition: `cmake` can optionally build without openssl, but
it's enabled by default because many builds use that functionality. Using
`minimal: true` will build `cmake~openssl` unless the user asks for
`cmake+openssl` explicitly.
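
A minimal sketch of what this looks like in practice (the YAML mirrors the
`concretizer.yaml` change in this commit; the `spack spec` invocation below is
just one illustrative command that accepts concretizer arguments):

```yaml
concretizer:
  # make minimizing builds the top priority (defaults to false)
  minimal: true
```

```console
spack spec --minimal hdf5
```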

- [x] add `minimal` option to `concretizer.yaml`
- [x] add `--minimal` CLI option to concretizer arguments
- [x] wire everything up
- [x] add some tests
2022-05-24 14:52:17 -04:00
135 changed files with 1161 additions and 5234 deletions


@@ -4,7 +4,6 @@ Set-Location spack
git config --global user.email "spack@example.com"
git config --global user.name "Test User"
git config --global core.longpaths true
if ($(git branch --show-current) -ne "develop")
{


@@ -39,7 +39,7 @@ jobs:
python-version: '3.10'
- name: Install Python packages
run: |
pip install --upgrade pip six setuptools==62.3.4 types-six
pip install --upgrade pip six setuptools types-six
- name: Setup git configuration
run: |
# Need this for the git tests to succeed.


@@ -41,7 +41,7 @@ jobs:
python-version: 3.9
- name: Install Python packages
run: |
python -m pip install --upgrade pip six setuptools==62.3.4 flake8 isort>=4.3.5 mypy>=0.800 black pywin32 types-python-dateutil
python -m pip install --upgrade pip six setuptools flake8 isort>=4.3.5 mypy>=0.800 black pywin32 types-python-dateutil
- name: Create local develop
run: |
.\spack\.github\workflows\setup_git.ps1


@@ -1,223 +1,3 @@
# v0.18.1 (2022-07-19)
### Spack Bugfixes
* Fix several bugs related to bootstrapping (#30834,#31042,#31180)
* Fix a regression that was causing spec hashes to differ between
Python 2 and Python 3 (#31092)
* Fixed compiler flags for oneAPI and DPC++ (#30856)
* Fixed several issues related to concretization (#31142,#31153,#31170,#31226)
* Improved support for Cray manifest file and `spack external find` (#31144,#31201,#31173,#31186)
* Assign a version to openSUSE Tumbleweed according to the GLIBC version
in the system (#19895)
* Improved Dockerfile generation for `spack containerize` (#29741,#31321)
* Fixed a few bugs related to concurrent execution of commands (#31509,#31493,#31477)
### Package updates
* WarpX: add v22.06, fixed libs property (#30866,#31102)
* openPMD: add v0.14.5, update recipe for @develop (#29484,#31023)
# v0.18.0 (2022-05-28)
`v0.18.0` is a major feature release.
## Major features in this release
1. **Concretizer now reuses by default**
`spack install --reuse` was introduced in `v0.17.0`, and `--reuse`
is now the default concretization mode. Spack will try hard to
resolve dependencies using installed packages or binaries (#30396).
To avoid reuse and to use the latest package configurations (the
old default), you can use `spack install --fresh`, or add
configuration like this to your environment or `concretizer.yaml`:
```yaml
concretizer:
reuse: false
```
2. **Finer-grained hashes**
Spack hashes now include `link`, `run`, *and* `build` dependencies,
as well as a canonical hash of package recipes. Previously, hashes
only included `link` and `run` dependencies (though `build`
dependencies were stored by environments). We coarsened the hash to
reduce churn in user installations, but the new default concretizer
behavior mitigates this concern and gets us reuse *and* provenance.
You will be able to see the build dependencies of new installations
with `spack find`. Old installations will not change and their
hashes will not be affected. (#28156, #28504, #30717, #30861)
3. **Improved error messages**
Error handling with the new concretizer is now done with
optimization criteria rather than with unsatisfiable cores, and
Spack reports many more details about conflicting constraints.
(#30669)
4. **Unify environments when possible**
Environments have thus far supported `concretization: together` or
`concretization: separately`. These have been replaced by a new
preference in `concretizer.yaml`:
```yaml
concretizer:
unify: [true|false|when_possible]
```
`concretizer:unify:when_possible` will *try* to resolve a fully
unified environment, but if it cannot, it will create multiple
configurations of some packages where it has to. For large
environments that previously had to be concretized separately, this
can result in a huge speedup (40-50x). (#28941)
5. **Automatically find externals on Cray machines**
Spack can now automatically discover installed packages in the Cray
Programming Environment by running `spack external find` (or `spack
external read-cray-manifest` to *only* query the PE). Packages from
the PE (e.g., `cray-mpich`) are added to the database with full
dependency information, and compilers from the PE are added to
`compilers.yaml`. Available with the June 2022 release of the Cray
Programming Environment. (#24894, #30428)
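As a sketch, querying only the PE and then running general detection uses
the two commands named above:
```console
spack external read-cray-manifest   # only query the PE manifest
spack external find                 # general external detection
```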
6. **New binary format and hardened signing**
Spack now has an updated binary format, with improvements for
security. The new format has a detached signature file, and Spack
verifies the signature before untarring or decompressing the binary
package. The previous format embedded the signature in a `tar`
file, which required the client to run `tar` *before* verifying
(#30750). Spack can still install from build caches using the old
format, but we encourage users to switch to the new format going
forward.
Production GitLab pipelines have been hardened to securely sign
binaries. There is now a separate signing stage so that signing
keys are never exposed to build system code, and signing keys are
ephemeral and only live as long as the signing pipeline stage.
(#30753)
7. **Bootstrap mirror generation**
The `spack bootstrap mirror` command can automatically create a
mirror for bootstrapping the concretizer and other needed
dependencies in an air-gapped environment. (#28556)
8. **Nascent Windows support**
Spack now has initial support for Windows. Spack core has been
refactored to run in the Windows environment, and a small number of
packages can now build for Windows. More details are
[in the documentation](https://spack.rtfd.io/en/latest/getting_started.html#spack-on-windows)
(#27021, #28385, many more)
9. **Makefile generation**
`spack env depfile` can be used to generate a `Makefile` from an
environment, which can be used to build the packages in the environment
in parallel on a single node, e.g.:
```console
spack -e myenv env depfile > Makefile
make
```
Spack propagates `gmake` jobserver information to builds so that
their jobs can share cores. (#30039, #30254, #30302, #30526)
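To benefit from the jobserver, run `make` in parallel (a sketch; the job
count is arbitrary):
```console
make -j16
```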
10. **New variant features**
In addition to being conditional themselves, variants can now have
[conditional *values*](https://spack.readthedocs.io/en/latest/packaging_guide.html#conditional-possible-values)
that are only possible for certain configurations of a package. (#29530)
Variants can be
[declared "sticky"](https://spack.readthedocs.io/en/latest/packaging_guide.html#sticky-variants),
which prevents them from being enabled or disabled by the
concretizer. Sticky variants must be set explicitly by users
on the command line or in `packages.yaml`. (#28630)
* Allow conditional possible values in variants
* Add a "sticky" property to variants
## Other new features of note
* Environment views can optionally link only `run` dependencies
with `link:run` (#29336)
* `spack external find --all` finds library-only packages in
addition to build dependencies (#28005)
* Customizable `config:license_dir` option (#30135)
* `spack external find --path PATH` takes a custom search path (#30479)
* `spack spec` has a new `--format` argument like `spack find` (#27908)
* `spack concretize --quiet` skips printing concretized specs (#30272)
* `spack info` now has cleaner output and displays test info (#22097)
* Package-level submodule option for git commit versions (#30085, #30037)
* Using `/hash` syntax to refer to concrete specs in an environment
now works even if `/hash` is not installed. (#30276)
## Major internal refactors
* full hash (see above)
* new develop versioning scheme `0.19.0-dev0`
* Allow for multiple dependencies/dependents from the same package (#28673)
* Splice differing virtual packages (#27919)
## Performance Improvements
* Concretization of large environments with `unify: when_possible` is
much faster than concretizing separately (#28941, see above)
* Single-pass view generation algorithm is 2.6x faster (#29443)
## Archspec improvements
* `oneapi` and `dpcpp` flag support (#30783)
* better support for `M1` and `a64fx` (#30683)
## Removals and Deprecations
* Spack no longer supports Python `2.6` (#27256)
* Removed deprecated `--run-tests` option of `spack install`;
use `spack test` (#30461)
* Removed deprecated `spack flake8`; use `spack style` (#27290)
* Deprecate `spack:concretization` config option; use
`concretizer:unify` (#30038)
* Deprecate top-level module configuration; use module sets (#28659)
* `spack activate` and `spack deactivate` are deprecated in favor of
environments; will be removed in `0.19.0` (#29430; see also `link:run`
in #29336 above)
## Notable Bugfixes
* Fix bug that broke locks with many parallel builds (#27846)
* Many bugfixes and consistency improvements for the new concretizer
and `--reuse` (#30357, #30092, #29835, #29933, #28605, #29694, #28848)
## Packages
* `CMakePackage` uses `CMAKE_INSTALL_RPATH_USE_LINK_PATH` (#29703)
* Refactored `lua` support: `lua-lang` virtual supports both
`lua` and `luajit` via the new `LuaPackage` build system (#28854)
* PythonPackage: now installs packages with `pip` (#27798)
* Python: improve site_packages_dir handling (#28346)
* Extends: support spec, not just package name (#27754)
* `find_libraries`: search for both .so and .dylib on macOS (#28924)
* Use stable URLs and `?full_index=1` for all github patches (#29239)
## Spack community stats
* 6,416 total packages, 458 new since `v0.17.0`
* 219 new Python packages
* 60 new R packages
* 377 people contributed to this release
* 337 committers to packages
* 85 committers to core
# v0.17.2 (2022-04-13)
### Spack bugfixes
@@ -231,7 +11,7 @@
* Fixed a few bugs affecting the spack ci command (#29518, #29419)
* Fix handling of Intel compiler environment (#29439)
* Fix a few edge cases when reindexing the DB (#28764)
* Remove "Known issues" from documentation (#29664)
* Remove "Known issues" from documentation (#29664)
* Other miscellaneous bugfixes (0b72e070583fc5bcd016f5adc8a84c99f2b7805f, #28403, #29261)
# v0.17.1 (2021-12-23)


@@ -6,15 +6,34 @@ bootstrap:
# by Spack is installed in a "store" subfolder of this root directory
root: $user_cache_path/bootstrap
# Methods that can be used to bootstrap software. Each method may or
# may not be able to bootstrap all the software that Spack needs,
# may not be able to bootstrap all of the software that Spack needs,
# depending on its type.
sources:
- name: 'github-actions-v0.2'
metadata: $spack/share/spack/bootstrap/github-actions-v0.2
type: buildcache
description: |
Buildcache generated from a public workflow using Github Actions.
The sha256 checksum of binaries is checked before installation.
info:
url: https://mirror.spack.io/bootstrap/github-actions/v0.2
homepage: https://github.com/spack/spack-bootstrap-mirrors
releases: https://github.com/spack/spack-bootstrap-mirrors/releases
- name: 'github-actions-v0.1'
metadata: $spack/share/spack/bootstrap/github-actions-v0.1
- name: 'spack-install'
metadata: $spack/share/spack/bootstrap/spack-install
type: buildcache
description: |
Buildcache generated from a public workflow using Github Actions.
The sha256 checksum of binaries is checked before installation.
info:
url: https://mirror.spack.io/bootstrap/github-actions/v0.1
homepage: https://github.com/spack/spack-bootstrap-mirrors
releases: https://github.com/spack/spack-bootstrap-mirrors/releases
# This method is just Spack bootstrapping the software it needs from sources.
# It has been added here so that users can selectively disable bootstrapping
# from sources by "untrusting" it.
- name: spack-install
type: install
description: |
Specs built from sources by Spack. May take a long time.
trusted:
# By default we trust bootstrapping from sources and from binaries
# produced on Github via the workflow
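
For example, to keep binary bootstrapping but selectively skip building from
sources, the source method can be untrusted (a sketch; the `untrust` command
is shown in the bootstrapping docs later in this diff):

```console
% spack bootstrap untrust spack-install
```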


@@ -15,22 +15,38 @@ concretizer:
# as possible, rather than building. If `false`, we'll always give you a fresh
# concretization.
reuse: true
# If `true`, Spack will consider minimizing builds its *topmost* priority.
# Note that this can result in weird package configurations. In particular,
# Spack will disable variants and might downgrade versions to avoid building
# new packages for an install. By default, Spack respects defaults from
# packages and preferences *before* minimizing the number of builds.
#
# Example for intuition: `cmake` can optionally build without openssl, but
# it's enabled by default because many builds use that functionality. Using
# `minimal: true` will build `cmake~openssl` unless the user asks for
# `cmake+openssl` explicitly.
minimal: false
# Options that tune which targets are considered for concretization. The
# concretization process is very sensitive to the number of targets, and the time
# needed to reach a solution increases noticeably with the number of targets
# considered.
targets:
# Determine whether we want to target specific or generic microarchitectures.
# An example of the first kind might be for instance "skylake" or "bulldozer",
# while generic microarchitectures are for instance "aarch64" or "x86_64_v4".
granularity: microarchitectures
# If "false" allow targets that are incompatible with the current host (for
# instance concretize with target "icelake" while running on "haswell").
# If "true" only allow targets that are compatible with the host.
host_compatible: true
# When "true" concretize root specs of environments together, so that each unique
# package in an environment corresponds to one concrete spec. This ensures
# environments can always be activated. When "false" perform concretization separately
# on each root spec, allowing different versions and variants of the same package in
# an environment.
unify: false


@@ -50,13 +50,6 @@ build cache files for the "ninja" spec:
Note that the targeted spec must already be installed. Once you have a build cache,
you can add it as a mirror, discussed next.
.. warning::
Spack improved the format used for binary caches in v0.18. The entire v0.18 series
will be able to verify and install binary caches both in the new and in the old format.
Support for using the old format is expected to end in v0.19, so we advise users to
recreate relevant buildcaches using Spack v0.18 or higher.
---------------------------------------
Finding or installing build cache files
---------------------------------------


@@ -1,160 +0,0 @@
.. Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
Spack Project Developers. See the top-level COPYRIGHT file for details.
SPDX-License-Identifier: (Apache-2.0 OR MIT)
.. _bootstrapping:
=============
Bootstrapping
=============
In the :ref:`Getting started <getting_started>` Section we already mentioned that
Spack can bootstrap some of its dependencies, including ``clingo``. In fact, there
is an entire command dedicated to the management of every aspect of bootstrapping:
.. command-output:: spack bootstrap --help
The first thing to know to understand bootstrapping in Spack is that each of
Spack's dependencies is bootstrapped lazily; i.e. the first time it is needed and
can't be found. You can readily check if any prerequisite for using Spack
is missing by running:
.. code-block:: console
% spack bootstrap status
Spack v0.17.1 - python@3.8
[FAIL] Core Functionalities
[B] MISSING "clingo": required to concretize specs
[FAIL] Binary packages
[B] MISSING "gpg2": required to sign/verify buildcaches
Spack will take care of bootstrapping any missing dependency marked as [B]. Dependencies marked as [-] are instead required to be found on the system.
In the case of the output shown above, Spack detected that both ``clingo`` and ``gnupg``
are missing, and it gives detailed information on why they are needed and whether
they can be bootstrapped. Running a command that concretizes a spec, like:
.. code-block:: console
% spack solve zlib
==> Bootstrapping clingo from pre-built binaries
==> Fetching https://mirror.spack.io/bootstrap/github-actions/v0.1/build_cache/darwin-catalina-x86_64/apple-clang-12.0.0/clingo-bootstrap-spack/darwin-catalina-x86_64-apple-clang-12.0.0-clingo-bootstrap-spack-p5on7i4hejl775ezndzfdkhvwra3hatn.spack
==> Installing "clingo-bootstrap@spack%apple-clang@12.0.0~docs~ipo+python build_type=Release arch=darwin-catalina-x86_64" from a buildcache
[ ... ]
triggers the bootstrapping of clingo from pre-built binaries as expected.
-----------------------
The Bootstrapping store
-----------------------
The software installed for bootstrapping purposes is deployed in a separate store.
Its location can be checked with the following command:
.. code-block:: console
% spack bootstrap root
It can also be changed with the same command by specifying the new desired path:
.. code-block:: console
% spack bootstrap root /opt/spack/bootstrap
You can check what is installed in the bootstrapping store at any time using:
.. code-block:: console
% spack find -b
==> Showing internal bootstrap store at "/Users/spack/.spack/bootstrap/store"
==> 11 installed packages
-- darwin-catalina-x86_64 / apple-clang@12.0.0 ------------------
clingo-bootstrap@spack libassuan@2.5.5 libgpg-error@1.42 libksba@1.5.1 pinentry@1.1.1 zlib@1.2.11
gnupg@2.3.1 libgcrypt@1.9.3 libiconv@1.16 npth@1.6 python@3.8
If needed, you can remove all of the software in the current bootstrapping store with:
.. code-block:: console
% spack clean -b
==> Removing bootstrapped software and configuration in "/Users/spack/.spack/bootstrap"
% spack find -b
==> Showing internal bootstrap store at "/Users/spack/.spack/bootstrap/store"
==> 0 installed packages
--------------------------------------------
Enabling and disabling bootstrapping methods
--------------------------------------------
Bootstrapping is always performed by trying the methods listed by:
.. command-output:: spack bootstrap list
in the order they appear, from top to bottom. By default Spack is
configured to first try bootstrapping from pre-built binaries, falling
back to bootstrapping from sources if that fails.
If need be, you can disable bootstrapping altogether by running:
.. code-block:: console
% spack bootstrap disable
in which case it's your responsibility to ensure Spack runs in an
environment where all its prerequisites are installed. You can
also configure Spack to skip certain bootstrapping methods by *untrusting*
them. For instance:
.. code-block:: console
% spack bootstrap untrust github-actions
==> "github-actions" is now untrusted and will not be used for bootstrapping
tells Spack to skip trying to bootstrap from binaries. To add the "github-actions" method back, you can run:
.. code-block:: console
% spack bootstrap trust github-actions
There is also an option to reset the bootstrapping configuration to Spack's defaults:
.. code-block:: console
% spack bootstrap reset
==> Bootstrapping configuration is being reset to Spack's defaults. Current configuration will be lost.
Do you want to continue? [Y/n]
%
----------------------------------------
Creating a mirror for air-gapped systems
----------------------------------------
Spack's default configuration for bootstrapping relies on the user having
access to the internet, either to fetch pre-compiled binaries or source tarballs.
Sometimes, though, Spack is deployed on air-gapped systems where such access is denied.
To help in these situations, Spack has a command that recreates, in a local folder
of choice, a mirror containing the source tarballs and/or binary packages needed for
bootstrapping.
.. code-block:: console
% spack bootstrap mirror --binary-packages /opt/bootstrap
==> Adding "clingo-bootstrap@spack+python %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding "gnupg@2.3: %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding "patchelf@0.13.1:0.13.99 %apple-clang target=x86_64" and dependencies to the mirror at /opt/bootstrap/local-mirror
==> Adding binary packages from "https://github.com/alalazo/spack-bootstrap-mirrors/releases/download/v0.1-rc.2/bootstrap-buildcache.tar.gz" to the mirror at /opt/bootstrap/local-mirror
To register the mirror on the platform where it's supposed to be used, run the following command(s):
% spack bootstrap add --trust local-sources /opt/bootstrap/metadata/sources
% spack bootstrap add --trust local-binaries /opt/bootstrap/metadata/binaries
This command needs to be run on a machine with internet access and the resulting folder
has to be moved over to the air-gapped system. Once the local sources are added using the
commands suggested at the prompt, they can be used to bootstrap Spack.


@@ -273,9 +273,19 @@ or
Concretizing
^^^^^^^^^^^^
Once some user specs have been added to an environment, they can be concretized.
There are at the moment three different modes of operation to concretize an environment,
which are explained in detail in :ref:`environments_concretization_config`.
Once some user specs have been added to an environment, they can be
concretized. *By default specs are concretized separately*, one after
the other. This mode of operation makes it possible to deploy a full
software stack where multiple configurations of the same package
need to be installed alongside each other. Central installations done
at HPC centers by system administrators or user support groups
are a common case that fits this mode of operation.
Environments *can also be configured to concretize all
the root specs in a unified way* to ensure that
each package in the environment corresponds to a single concrete spec. This
mode of operation is usually what is required by software developers who
want to deploy their development environment.
Regardless of which mode of operation has been chosen, the following
command will ensure all the root specs are concretized according to the
constraints that are prescribed in the configuration:
@@ -483,76 +493,37 @@ Appending to this list in the yaml is identical to using the ``spack
add`` command from the command line. However, there is more power
available from the yaml file.
.. _environments_concretization_config:
^^^^^^^^^^^^^^^^^^^
Spec concretization
^^^^^^^^^^^^^^^^^^^
An environment can be concretized in three different modes and the behavior active under any environment
is determined by the ``concretizer:unify`` property. By default specs are concretized *separately*, one after the other:
Specs can be concretized separately or together, as already
explained in :ref:`environments_concretization`. The behavior active
under any environment is determined by the ``concretizer:unify`` property:
.. code-block:: yaml
spack:
specs:
- hdf5~mpi
- hdf5+mpi
- zlib@1.2.8
concretizer:
unify: false
This mode of operation makes it possible to deploy a full software stack where multiple configurations of the same package
need to be installed alongside each other using the best possible selection of transitive dependencies. The downside
is that redundancy of installations is disregarded completely, and thus environments might be more bloated than
strictly needed. In the example above, for instance, if a version of ``zlib`` newer than ``1.2.8`` is known to Spack,
then it will be used for both ``hdf5`` installations.
If redundancy of the environment is a concern, Spack provides a way to install it *together where possible*,
i.e. trying to maximize reuse of dependencies across different specs:
.. code-block:: yaml
spack:
specs:
- hdf5~mpi
- hdf5+mpi
- zlib@1.2.8
concretizer:
unify: when_possible
In this case too, Spack allows multiple configurations of the same package, but prioritizes the reuse of
specs over other factors. Going back to our example, this means that both ``hdf5`` installations will use
``zlib@1.2.8`` as a dependency even if newer versions of that library are available.
Central installations done at HPC centers by system administrators or user support groups are a common case
that fits either of these two modes.
Environments can also be configured to concretize all the root specs *together*, in a self-consistent way, to
ensure that each package in the environment comes with a single configuration:
.. code-block:: yaml
spack:
specs:
- hdf5+mpi
- zlib@1.2.8
- ncview
- netcdf
- nco
- py-sphinx
concretizer:
unify: true
This mode of operation is usually what is required by software developers who want to deploy their development
environment and have a single view of it in the filesystem.
.. note::
The ``concretizer:unify`` config option was introduced in Spack 0.18 to
replace the ``concretization`` property. For reference,
``concretization: together`` is replaced by ``concretizer:unify:true``,
and ``concretization: separately`` is replaced by ``concretizer:unify:false``.
``concretization: separately`` is replaced by ``concretizer:unify:true``,
and ``concretization: together`` is replaced by ``concretizer:unify:false``.
.. admonition:: Re-concretization of user specs
When concretizing specs *together* or *together where possible* the entire set of specs will be
When concretizing specs together the entire set of specs will be
re-concretized after any addition of new user specs, to ensure that
the environment remains consistent / minimal. When instead the specs are concretized
the environment remains consistent. When instead the specs are concretized
separately only the new specs will be re-concretized after any addition.
^^^^^^^^^^^^^
@@ -799,7 +770,7 @@ directories.
select: [^mpi]
exclude: ['%pgi@18.5']
projections:
all: '{name}/{version}-{compiler.name}'
all: {name}/{version}-{compiler.name}
link: all
link_type: symlink
@@ -1051,4 +1022,4 @@ the include is conditional.
the ``--make-target-prefix`` flag and use the non-phony targets
``<target-prefix>/env`` and ``<target-prefix>/fetch`` as
prerequisites, instead of the phony targets ``<target-prefix>/all``
and ``<target-prefix>/fetch-all`` respectively.
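As a sketch of that setup (the prefix ``spack_env``, the generated file
name, and the ``my-app`` target are made-up names):
.. code-block:: make

   # generated via: spack -e myenv env depfile --make-target-prefix spack_env > spack.mk
   include spack.mk

   # depend on the non-phony env target so the environment is installed first
   my-app: spack_env/env
   	./build-my-app.sh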


@@ -63,7 +63,6 @@ or refer to the full manual below.
configuration
config_yaml
bootstrapping
build_settings
environments
containers

lib/spack/env/cc

@@ -401,7 +401,8 @@ input_command="$*"
# command line and recombine them with Spack arguments later. We
# parse these out so that we can make sure that system paths come
# last, that package arguments come first, and that Spack arguments
# are injected properly.
# are injected properly. Based on configuration, we also strip -Werror
# arguments.
#
# All other arguments, including -l arguments, are treated as
# 'other_args' and left in their original order. This ensures that
@@ -440,6 +441,29 @@ while [ $# -ne 0 ]; do
continue
fi
if [ -n "${SPACK_COMPILER_FLAGS_KEEP}" ] ; then
# NOTE: the eval is required to allow `|` alternatives inside the variable
eval "\
case '$1' in
$SPACK_COMPILER_FLAGS_KEEP)
append other_args_list "$1"
shift
continue
;;
esac
"
fi
if [ -n "${SPACK_COMPILER_FLAGS_REMOVE}" ] ; then
eval "\
case '$1' in
$SPACK_COMPILER_FLAGS_REMOVE)
shift
continue
;;
esac
"
fi
case "$1" in
-isystem*)
arg="${1#-isystem}"


@@ -18,7 +18,7 @@
* Homepage: https://pypi.python.org/pypi/archspec
* Usage: Labeling, comparison and detection of microarchitectures
* Version: 0.1.4 (commit b8eea9df2b4204ff27d204452cd46f5199a0b423)
* Version: 0.1.4 (commit 53fc4ac91e9b4c5e4079f15772503a80bece72ad)
argparse
--------


@@ -85,21 +85,21 @@
"intel": [
{
"versions": ":",
"name": "x86-64",
"name": "pentium4",
"flags": "-march={name} -mtune=generic"
}
],
"oneapi": [
{
"versions": ":",
"name": "x86-64",
"name": "pentium4",
"flags": "-march={name} -mtune=generic"
}
],
"dpcpp": [
{
"versions": ":",
"name": "x86-64",
"name": "pentium4",
"flags": "-march={name} -mtune=generic"
}
]
@@ -143,20 +143,6 @@
"name": "x86-64",
"flags": "-march={name} -mtune=generic -mcx16 -msahf -mpopcnt -msse3 -msse4.1 -msse4.2 -mssse3"
}
],
"oneapi": [
{
"versions": "2021.2.0:",
"name": "x86-64-v2",
"flags": "-march={name} -mtune=generic"
}
],
"dpcpp": [
{
"versions": "2021.2.0:",
"name": "x86-64-v2",
"flags": "-march={name} -mtune=generic"
}
]
}
},
@@ -214,20 +200,6 @@
"name": "x86-64",
"flags": "-march={name} -mtune=generic -mcx16 -msahf -mpopcnt -msse3 -msse4.1 -msse4.2 -mssse3 -mavx -mavx2 -mbmi -mbmi2 -mf16c -mfma -mlzcnt -mmovbe -mxsave"
}
],
"oneapi": [
{
"versions": "2021.2.0:",
"name": "x86-64-v3",
"flags": "-march={name} -mtune=generic"
}
],
"dpcpp": [
{
"versions": "2021.2.0:",
"name": "x86-64-v3",
"flags": "-march={name} -mtune=generic"
}
]
}
},
@@ -290,20 +262,6 @@
"name": "x86-64",
"flags": "-march={name} -mtune=generic -mcx16 -msahf -mpopcnt -msse3 -msse4.1 -msse4.2 -mssse3 -mavx -mavx2 -mbmi -mbmi2 -mf16c -mfma -mlzcnt -mmovbe -mxsave -mavx512f -mavx512bw -mavx512cd -mavx512dq -mavx512vl"
}
],
"oneapi": [
{
"versions": "2021.2.0:",
"name": "x86-64-v4",
"flags": "-march={name} -mtune=generic"
}
],
"dpcpp": [
{
"versions": "2021.2.0:",
"name": "x86-64-v4",
"flags": "-march={name} -mtune=generic"
}
]
}
},
@@ -344,19 +302,22 @@
"intel": [
{
"versions": "16.0:",
"flags": "-march={name} -mtune={name}"
"name": "pentium4",
"flags": "-march={name} -mtune=generic"
}
],
"oneapi": [
{
"versions": ":",
"flags": "-march={name} -mtune={name}"
"name": "pentium4",
"flags": "-march={name} -mtune=generic"
}
],
"dpcpp": [
{
"versions": ":",
"flags": "-march={name} -mtune={name}"
"name": "pentium4",
"flags": "-march={name} -mtune=generic"
}
]
}


@@ -4,7 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
#: (major, minor, micro, dev release) tuple
spack_version_info = (0, 18, 1)
spack_version_info = (0, 18, 0, 'dev0')
#: PEP440 canonical <major>.<minor>.<micro>.<devN> string
spack_version = '.'.join(str(s) for s in spack_version_info)


@@ -298,8 +298,7 @@ def _check_build_test_callbacks(pkgs, error_cls):
def _check_patch_urls(pkgs, error_cls):
"""Ensure that patches fetched from GitHub have stable sha256 hashes."""
github_patch_url_re = (
r"^https?://(?:patch-diff\.)?github(?:usercontent)?\.com/"
".+/.+/(?:commit|pull)/[a-fA-F0-9]*.(?:patch|diff)"
r"^https?://github\.com/.+/.+/(?:commit|pull)/[a-fA-F0-9]*.(?:patch|diff)"
)
errors = []
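
A hypothetical package snippet showing the kind of URL this audit accepts
(per the stable-URL and `?full_index=1` convention from #29239; the project
and hash below are made up):

```python
patch(
    "https://github.com/org/project/commit/1a2b3c4d.patch?full_index=1",
    sha256="0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
    when="@1.0",
)
```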


@@ -210,7 +210,7 @@ def get_all_built_specs(self):
return spec_list
def find_built_spec(self, spec, mirrors_to_check=None):
def find_built_spec(self, spec):
"""Look in our cache for the built spec corresponding to ``spec``.
If the spec can be found among the configured binary mirrors, a
@@ -225,8 +225,6 @@ def find_built_spec(self, spec, mirrors_to_check=None):
Args:
spec (spack.spec.Spec): Concrete spec to find
mirrors_to_check: Optional mapping containing mirrors to check. If
None, just assumes all configured mirrors.
Returns:
A list of objects containing the found specs and mirror url where
@@ -242,23 +240,17 @@ def find_built_spec(self, spec, mirrors_to_check=None):
]
"""
self.regenerate_spec_cache()
return self.find_by_hash(spec.dag_hash(), mirrors_to_check=mirrors_to_check)
return self.find_by_hash(spec.dag_hash())
def find_by_hash(self, find_hash, mirrors_to_check=None):
def find_by_hash(self, find_hash):
"""Same as find_built_spec but uses the hash of a spec.
Args:
find_hash (str): hash of the spec to search
mirrors_to_check: Optional mapping containing mirrors to check. If
None, just assumes all configured mirrors.
"""
if find_hash not in self._mirrors_for_spec:
return None
results = self._mirrors_for_spec[find_hash]
if not mirrors_to_check:
return results
mirror_urls = mirrors_to_check.values()
return [r for r in results if r['mirror_url'] in mirror_urls]
return self._mirrors_for_spec[find_hash]
def update_spec(self, spec, found_list):
"""
@@ -571,13 +563,6 @@ def __init__(self, msg):
super(NewLayoutException, self).__init__(msg)
class UnsignedPackageException(spack.error.SpackError):
"""
Raised if installation of unsigned package is attempted without
the use of ``--no-check-signature``.
"""
def compute_hash(data):
return hashlib.sha256(data.encode('utf-8')).hexdigest()
@@ -766,16 +751,15 @@ def select_signing_key(key=None):
return key
def sign_specfile(key, force, specfile_path):
signed_specfile_path = '%s.sig' % specfile_path
if os.path.exists(signed_specfile_path):
def sign_tarball(key, force, specfile_path):
if os.path.exists('%s.asc' % specfile_path):
if force:
os.remove(signed_specfile_path)
os.remove('%s.asc' % specfile_path)
else:
raise NoOverwriteException(signed_specfile_path)
raise NoOverwriteException('%s.asc' % specfile_path)
key = select_signing_key(key)
spack.util.gpg.sign(key, specfile_path, signed_specfile_path, clearsign=True)
spack.util.gpg.sign(key, specfile_path, '%s.asc' % specfile_path)
def _fetch_spec_from_mirror(spec_url):
@@ -784,10 +768,7 @@ def _fetch_spec_from_mirror(spec_url):
_, _, spec_file = web_util.read_from_url(spec_url)
spec_file_contents = codecs.getreader('utf-8')(spec_file).read()
# Need full spec.json name or this gets confused with index.json.
if spec_url.endswith('.json.sig'):
specfile_json = Spec.extract_json_from_clearsig(spec_file_contents)
s = Spec.from_dict(specfile_json)
elif spec_url.endswith('.json'):
if spec_url.endswith('.json'):
s = Spec.from_json(spec_file_contents)
elif spec_url.endswith('.yaml'):
s = Spec.from_yaml(spec_file_contents)
@@ -848,9 +829,7 @@ def generate_package_index(cache_prefix):
file_list = (
entry
for entry in web_util.list_url(cache_prefix)
if entry.endswith('.yaml') or
entry.endswith('spec.json') or
entry.endswith('spec.json.sig'))
if entry.endswith('.yaml') or entry.endswith('spec.json'))
except KeyError as inst:
msg = 'No packages at {0}: {1}'.format(cache_prefix, inst)
tty.warn(msg)
@@ -965,7 +944,7 @@ def _build_tarball(
tmpdir = tempfile.mkdtemp()
cache_prefix = build_cache_prefix(tmpdir)
tarfile_name = tarball_name(spec, '.spack')
tarfile_name = tarball_name(spec, '.tar.gz')
tarfile_dir = os.path.join(cache_prefix, tarball_directory_name(spec))
tarfile_path = os.path.join(tarfile_dir, tarfile_name)
spackfile_path = os.path.join(
@@ -988,12 +967,10 @@ def _build_tarball(
spec_file = spack.store.layout.spec_file_path(spec)
specfile_name = tarball_name(spec, '.spec.json')
specfile_path = os.path.realpath(os.path.join(cache_prefix, specfile_name))
signed_specfile_path = '{0}.sig'.format(specfile_path)
deprecated_specfile_path = specfile_path.replace('.spec.json', '.spec.yaml')
remote_specfile_path = url_util.join(
outdir, os.path.relpath(specfile_path, os.path.realpath(tmpdir)))
remote_signed_specfile_path = '{0}.sig'.format(remote_specfile_path)
remote_specfile_path_deprecated = url_util.join(
outdir, os.path.relpath(deprecated_specfile_path,
os.path.realpath(tmpdir)))
@@ -1002,12 +979,9 @@ def _build_tarball(
if force:
if web_util.url_exists(remote_specfile_path):
web_util.remove_url(remote_specfile_path)
if web_util.url_exists(remote_signed_specfile_path):
web_util.remove_url(remote_signed_specfile_path)
if web_util.url_exists(remote_specfile_path_deprecated):
web_util.remove_url(remote_specfile_path_deprecated)
elif (web_util.url_exists(remote_specfile_path) or
web_util.url_exists(remote_signed_specfile_path) or
web_util.url_exists(remote_specfile_path_deprecated)):
raise NoOverwriteException(url_util.format(remote_specfile_path))
@@ -1069,7 +1043,6 @@ def _build_tarball(
raise ValueError(
'{0} not a valid spec file type (json or yaml)'.format(
spec_file))
spec_dict['buildcache_layout_version'] = 1
bchecksum = {}
bchecksum['hash_algorithm'] = 'sha256'
bchecksum['hash'] = checksum
@@ -1088,15 +1061,25 @@ def _build_tarball(
# sign the tarball and spec file with gpg
if not unsigned:
key = select_signing_key(key)
sign_specfile(key, force, specfile_path)
sign_tarball(key, force, specfile_path)
# put tarball, spec and signature files in .spack archive
with closing(tarfile.open(spackfile_path, 'w')) as tar:
tar.add(name=tarfile_path, arcname='%s' % tarfile_name)
tar.add(name=specfile_path, arcname='%s' % specfile_name)
if not unsigned:
tar.add(name='%s.asc' % specfile_path,
arcname='%s.asc' % specfile_name)
# cleanup file moved to archive
os.remove(tarfile_path)
if not unsigned:
os.remove('%s.asc' % specfile_path)
# push tarball and signed spec json to remote mirror
web_util.push_to_url(
spackfile_path, remote_spackfile_path, keep_original=False)
web_util.push_to_url(
signed_specfile_path if not unsigned else specfile_path,
remote_signed_specfile_path if not unsigned else remote_specfile_path,
keep_original=False)
specfile_path, remote_specfile_path, keep_original=False)
tty.debug('Buildcache for "{0}" written to \n {1}'
.format(spec, remote_spackfile_path))
@@ -1179,174 +1162,48 @@ def push(specs, push_url, specs_kwargs=None, **kwargs):
warnings.warn(str(e))
def try_verify(specfile_path):
"""Utility function to attempt to verify a local file. Assumes the
file is a clearsigned signature file.
Args:
specfile_path (str): Path to file to be verified.
Returns:
``True`` if the signature could be verified, ``False`` otherwise.
"""
suppress = config.get('config:suppress_gpg_warnings', False)
try:
spack.util.gpg.verify(specfile_path, suppress_warnings=suppress)
except Exception:
return False
return True
def try_fetch(url_to_fetch):
"""Utility function to try and fetch a file from a url, stage it
locally, and return the path to the staged file.
Args:
url_to_fetch (str): Url pointing to remote resource to fetch
Returns:
Path to locally staged resource or ``None`` if it could not be fetched.
"""
stage = Stage(url_to_fetch, keep=True)
stage.create()
try:
stage.fetch()
except fs.FetchError:
stage.destroy()
return None
return stage
def _delete_staged_downloads(download_result):
"""Clean up stages used to download tarball and specfile"""
download_result['tarball_stage'].destroy()
download_result['specfile_stage'].destroy()
def download_tarball(spec, unsigned=False, mirrors_for_spec=None):
def download_tarball(spec, preferred_mirrors=None):
"""
Download binary tarball for given package into stage area, returning
path to downloaded tarball if successful, None otherwise.
Args:
spec (spack.spec.Spec): Concrete spec
unsigned (bool): Whether or not to require signed binaries
mirrors_for_spec (list): Optional list of concrete specs and mirrors
obtained by calling binary_distribution.get_mirrors_for_spec().
These will be checked in order first before looking in other
configured mirrors.
preferred_mirrors (list): If provided, this is a list of preferred
mirror urls. Other configured mirrors will only be used if the
tarball can't be retrieved from one of these.
Returns:
``None`` if the tarball could not be downloaded (maybe also verified,
depending on whether new-style signed binary packages were found).
Otherwise, return an object indicating the path to the downloaded
tarball, the path to the downloaded specfile (in the case of new-style
buildcache), and whether or not the tarball is already verified.
.. code-block:: JSON
{
"tarball_path": "path-to-locally-saved-tarfile",
"specfile_path": "none-or-path-to-locally-saved-specfile",
"signature_verified": "true-if-binary-pkg-was-already-verified"
}
Path to the downloaded tarball, or ``None`` if the tarball could not
be downloaded from any configured mirrors.
"""
if not spack.mirror.MirrorCollection():
tty.die("Please add a spack mirror to allow " +
"download of pre-compiled packages.")
tarball = tarball_path_name(spec, '.spack')
specfile_prefix = tarball_name(spec, '.spec')
mirrors_to_try = []
urls_to_try = []
# Note on try_first and try_next:
# mirrors_for_spec most likely came from spack caching remote
# mirror indices locally and adding their specs to a local data
# structure supporting quick lookup of concrete specs. Those
# mirrors are likely a subset of all configured mirrors, and
# we'll probably find what we need in one of them. But we'll
# look in all configured mirrors if needed, as maybe the spec
# we need was in an un-indexed mirror. No need to check any
# mirror for the spec twice though.
try_first = [i['mirror_url'] for i in mirrors_for_spec] if mirrors_for_spec else []
try_next = [
i.fetch_url for i in spack.mirror.MirrorCollection().values()
if i.fetch_url not in try_first
]
if preferred_mirrors:
for preferred_url in preferred_mirrors:
urls_to_try.append(url_util.join(
preferred_url, _build_cache_relative_path, tarball))
for url in try_first + try_next:
mirrors_to_try.append({
'specfile': url_util.join(url,
_build_cache_relative_path, specfile_prefix),
'spackfile': url_util.join(url,
_build_cache_relative_path, tarball)
})
for mirror in spack.mirror.MirrorCollection().values():
if not preferred_mirrors or mirror.fetch_url not in preferred_mirrors:
urls_to_try.append(url_util.join(
mirror.fetch_url, _build_cache_relative_path, tarball))
tried_to_verify_sigs = []
# Assumes we care more about finding a spec file by preferred ext
# than by mirror priority. This can be made less complicated as
# we remove support for deprecated spec formats and buildcache layouts.
for ext in ['json.sig', 'json', 'yaml']:
for mirror_to_try in mirrors_to_try:
specfile_url = '{0}.{1}'.format(mirror_to_try['specfile'], ext)
spackfile_url = mirror_to_try['spackfile']
local_specfile_stage = try_fetch(specfile_url)
if local_specfile_stage:
local_specfile_path = local_specfile_stage.save_filename
signature_verified = False
if ext.endswith('.sig') and not unsigned:
# If we found a signed specfile at the root, try to verify
# the signature immediately. We will not download the
# tarball if we could not verify the signature.
tried_to_verify_sigs.append(specfile_url)
signature_verified = try_verify(local_specfile_path)
if not signature_verified:
tty.warn("Failed to verify: {0}".format(specfile_url))
if unsigned or signature_verified or not ext.endswith('.sig'):
# We will download the tarball in one of three cases:
# 1. user asked for --no-check-signature
# 2. user didn't ask for --no-check-signature, but we
# found a spec.json.sig and verified the signature already
# 3. neither of the first two cases are true, but this file
# is *not* a signed json (not a spec.json.sig file). That
# means we already looked at all the mirrors and either didn't
# find any .sig files or couldn't verify any of them. But it
# is still possible to find an old style binary package where
# the signature is a detached .asc file in the outer archive
# of the tarball, and in that case, the only way to know is to
# download the tarball. This is a deprecated use case, so if
# something goes wrong during the extraction process (can't
# verify signature, checksum doesn't match) we will fail at
# that point instead of trying to download more tarballs from
# the remaining mirrors, looking for one we can use.
tarball_stage = try_fetch(spackfile_url)
if tarball_stage:
return {
'tarball_stage': tarball_stage,
'specfile_stage': local_specfile_stage,
'signature_verified': signature_verified,
}
local_specfile_stage.destroy()
# Falling through the nested loops means we exhaustively searched
# for all known kinds of spec files on all mirrors and did not find
# an acceptable one for which we could download a tarball.
if tried_to_verify_sigs:
raise NoVerifyException(("Spack found new style signed binary packages, "
"but was unable to verify any of them. Please "
"obtain and trust the correct public key. If "
"these are public spack binaries, please see the "
"spack docs for locations where keys can be found."))
for try_url in urls_to_try:
# stage the tarball into standard place
stage = Stage(try_url, name="build_cache", keep=True)
stage.create()
try:
stage.fetch()
return stage.save_filename
except fs.FetchError:
continue
tty.warn("download_tarball() was unable to download " +
"{0} from any configured mirrors".format(spec))
@@ -1520,55 +1377,7 @@ def is_backup_file(file):
relocate.relocate_text(text_names, prefix_to_prefix_text)
def _extract_inner_tarball(spec, filename, extract_to, unsigned, remote_checksum):
stagepath = os.path.dirname(filename)
spackfile_name = tarball_name(spec, '.spack')
spackfile_path = os.path.join(stagepath, spackfile_name)
tarfile_name = tarball_name(spec, '.tar.gz')
tarfile_path = os.path.join(extract_to, tarfile_name)
deprecated_yaml_name = tarball_name(spec, '.spec.yaml')
deprecated_yaml_path = os.path.join(extract_to, deprecated_yaml_name)
json_name = tarball_name(spec, '.spec.json')
json_path = os.path.join(extract_to, json_name)
with closing(tarfile.open(spackfile_path, 'r')) as tar:
tar.extractall(extract_to)
# some buildcache tarfiles use bzip2 compression
if not os.path.exists(tarfile_path):
tarfile_name = tarball_name(spec, '.tar.bz2')
tarfile_path = os.path.join(extract_to, tarfile_name)
if os.path.exists(json_path):
specfile_path = json_path
elif os.path.exists(deprecated_yaml_path):
specfile_path = deprecated_yaml_path
else:
raise ValueError('Cannot find spec file for {0}.'.format(extract_to))
if not unsigned:
if os.path.exists('%s.asc' % specfile_path):
suppress = config.get('config:suppress_gpg_warnings', False)
try:
spack.util.gpg.verify('%s.asc' % specfile_path, specfile_path, suppress)
except Exception:
raise NoVerifyException("Spack was unable to verify package "
"signature, please obtain and trust the "
"correct public key.")
else:
raise UnsignedPackageException(
"To install unsigned packages, use the --no-check-signature option.")
# get the sha256 checksum of the tarball
local_checksum = checksum_tarball(tarfile_path)
# if the checksums don't match don't install
if local_checksum != remote_checksum['hash']:
raise NoChecksumException(
"Package tarball failed checksum verification.\n"
"It cannot be installed.")
return tarfile_path
def extract_tarball(spec, download_result, allow_root=False, unsigned=False,
def extract_tarball(spec, filename, allow_root=False, unsigned=False,
force=False):
"""
extract binary tarball for given package into install area
@@ -1579,56 +1388,66 @@ def extract_tarball(spec, download_result, allow_root=False, unsigned=False,
else:
raise NoOverwriteException(str(spec.prefix))
specfile_path = download_result['specfile_stage'].save_filename
tmpdir = tempfile.mkdtemp()
stagepath = os.path.dirname(filename)
spackfile_name = tarball_name(spec, '.spack')
spackfile_path = os.path.join(stagepath, spackfile_name)
tarfile_name = tarball_name(spec, '.tar.gz')
tarfile_path = os.path.join(tmpdir, tarfile_name)
specfile_is_json = True
deprecated_yaml_name = tarball_name(spec, '.spec.yaml')
deprecated_yaml_path = os.path.join(tmpdir, deprecated_yaml_name)
json_name = tarball_name(spec, '.spec.json')
json_path = os.path.join(tmpdir, json_name)
with closing(tarfile.open(spackfile_path, 'r')) as tar:
tar.extractall(tmpdir)
# some buildcache tarfiles use bzip2 compression
if not os.path.exists(tarfile_path):
tarfile_name = tarball_name(spec, '.tar.bz2')
tarfile_path = os.path.join(tmpdir, tarfile_name)
if os.path.exists(json_path):
specfile_path = json_path
elif os.path.exists(deprecated_yaml_path):
specfile_is_json = False
specfile_path = deprecated_yaml_path
else:
raise ValueError('Cannot find spec file for {0}.'.format(tmpdir))
if not unsigned:
if os.path.exists('%s.asc' % specfile_path):
try:
suppress = config.get('config:suppress_gpg_warnings', False)
spack.util.gpg.verify(
'%s.asc' % specfile_path, specfile_path, suppress)
except Exception as e:
shutil.rmtree(tmpdir)
raise e
else:
shutil.rmtree(tmpdir)
raise NoVerifyException(
"Package spec file failed signature verification.\n"
"Use spack buildcache keys to download "
"and install a key for verification from the mirror.")
# get the sha256 checksum of the tarball
checksum = checksum_tarball(tarfile_path)
# get the sha256 checksum recorded at creation
spec_dict = {}
with open(specfile_path, 'r') as inputfile:
content = inputfile.read()
if specfile_path.endswith('.json.sig'):
spec_dict = Spec.extract_json_from_clearsig(content)
elif specfile_path.endswith('.json'):
if specfile_is_json:
spec_dict = sjson.load(content)
else:
spec_dict = syaml.load(content)
bchecksum = spec_dict['binary_cache_checksum']
filename = download_result['tarball_stage'].save_filename
signature_verified = download_result['signature_verified']
tmpdir = None
if ('buildcache_layout_version' not in spec_dict or
int(spec_dict['buildcache_layout_version']) < 1):
# Handle the older buildcache layout where the .spack file
# contains a spec json/yaml, maybe an .asc file (signature),
# and another tarball containing the actual install tree.
tmpdir = tempfile.mkdtemp()
try:
tarfile_path = _extract_inner_tarball(
spec, filename, tmpdir, unsigned, bchecksum)
except Exception as e:
_delete_staged_downloads(download_result)
shutil.rmtree(tmpdir)
raise e
else:
# Newer buildcache layout: the .spack file contains just
# the install tree; the signature, if it exists, is
# wrapped around the spec.json at the root. If sig verify
# was required, it was already done before downloading
# the tarball.
tarfile_path = filename
if not unsigned and not signature_verified:
raise UnsignedPackageException(
"To install unsigned packages, use the --no-check-signature option.")
# compute the sha256 checksum of the tarball
local_checksum = checksum_tarball(tarfile_path)
# if the checksums don't match don't install
if local_checksum != bchecksum['hash']:
_delete_staged_downloads(download_result)
raise NoChecksumException(
"Package tarball failed checksum verification.\n"
"It cannot be installed.")
# if the checksums don't match don't install
if bchecksum['hash'] != checksum:
shutil.rmtree(tmpdir)
raise NoChecksumException(
"Package tarball failed checksum verification.\n"
"It cannot be installed.")
new_relative_prefix = str(os.path.relpath(spec.prefix,
spack.store.layout.root))
@@ -1653,13 +1472,11 @@ def extract_tarball(spec, download_result, allow_root=False, unsigned=False,
try:
tar.extractall(path=extract_tmp)
except Exception as e:
_delete_staged_downloads(download_result)
shutil.rmtree(extracted_dir)
raise e
try:
shutil.move(extracted_dir, spec.prefix)
except Exception as e:
_delete_staged_downloads(download_result)
shutil.rmtree(extracted_dir)
raise e
os.remove(tarfile_path)
@@ -1678,11 +1495,9 @@ def extract_tarball(spec, download_result, allow_root=False, unsigned=False,
spec_id = spec.format('{name}/{hash:7}')
tty.warn('No manifest file in tarball for spec %s' % spec_id)
finally:
if tmpdir:
shutil.rmtree(tmpdir)
shutil.rmtree(tmpdir)
if os.path.exists(filename):
os.remove(filename)
_delete_staged_downloads(download_result)
def install_root_node(spec, allow_root, unsigned=False, force=False, sha256=None):
@@ -1710,23 +1525,21 @@ def install_root_node(spec, allow_root, unsigned=False, force=False, sha256=None
warnings.warn("Package for spec {0} already installed.".format(spec.format()))
return
download_result = download_tarball(spec, unsigned)
if not download_result:
tarball = download_tarball(spec)
if not tarball:
msg = 'download of binary cache file for spec "{0}" failed'
raise RuntimeError(msg.format(spec.format()))
if sha256:
checker = spack.util.crypto.Checker(sha256)
msg = 'cannot verify checksum for "{0}" [expected={1}]'
tarball_path = download_result['tarball_stage'].save_filename
msg = msg.format(tarball_path, sha256)
if not checker.check(tarball_path):
_delete_staged_downloads(download_result)
msg = msg.format(tarball, sha256)
if not checker.check(tarball):
raise spack.binary_distribution.NoChecksumException(msg)
tty.debug('Verified SHA256 checksum of the build cache')
tty.msg('Installing "{0}" from a buildcache'.format(spec.format()))
extract_tarball(spec, download_result, allow_root, unsigned, force)
extract_tarball(spec, tarball, allow_root, unsigned, force)
spack.hooks.post_install(spec)
spack.store.db.add(spec, spack.store.layout)
@@ -1752,8 +1565,6 @@ def try_direct_fetch(spec, mirrors=None):
"""
deprecated_specfile_name = tarball_name(spec, '.spec.yaml')
specfile_name = tarball_name(spec, '.spec.json')
signed_specfile_name = tarball_name(spec, '.spec.json.sig')
specfile_is_signed = False
specfile_is_json = True
found_specs = []
@@ -1762,35 +1573,24 @@ def try_direct_fetch(spec, mirrors=None):
mirror.fetch_url, _build_cache_relative_path, deprecated_specfile_name)
buildcache_fetch_url_json = url_util.join(
mirror.fetch_url, _build_cache_relative_path, specfile_name)
buildcache_fetch_url_signed_json = url_util.join(
mirror.fetch_url, _build_cache_relative_path, signed_specfile_name)
try:
_, _, fs = web_util.read_from_url(buildcache_fetch_url_signed_json)
specfile_is_signed = True
_, _, fs = web_util.read_from_url(buildcache_fetch_url_json)
except (URLError, web_util.SpackWebError, HTTPError) as url_err:
try:
_, _, fs = web_util.read_from_url(buildcache_fetch_url_json)
except (URLError, web_util.SpackWebError, HTTPError) as url_err_x:
try:
_, _, fs = web_util.read_from_url(buildcache_fetch_url_yaml)
specfile_is_json = False
except (URLError, web_util.SpackWebError, HTTPError) as url_err_y:
tty.debug('Did not find {0} on {1}'.format(
specfile_name, buildcache_fetch_url_signed_json), url_err)
tty.debug('Did not find {0} on {1}'.format(
specfile_name, buildcache_fetch_url_json), url_err_x)
tty.debug('Did not find {0} on {1}'.format(
specfile_name, buildcache_fetch_url_yaml), url_err_y)
continue
_, _, fs = web_util.read_from_url(buildcache_fetch_url_yaml)
specfile_is_json = False
except (URLError, web_util.SpackWebError, HTTPError) as url_err_y:
tty.debug('Did not find {0} on {1}'.format(
specfile_name, buildcache_fetch_url_json), url_err)
tty.debug('Did not find {0} on {1}'.format(
specfile_name, buildcache_fetch_url_yaml), url_err_y)
continue
specfile_contents = codecs.getreader('utf-8')(fs).read()
# read the spec from the build cache file. All specs in build caches
# are concrete (as they are built) so we need to mark this spec
# concrete on read-in.
if specfile_is_signed:
specfile_json = Spec.extract_json_from_clearsig(specfile_contents)
fetched_spec = Spec.from_dict(specfile_json)
elif specfile_is_json:
if specfile_is_json:
fetched_spec = Spec.from_json(specfile_contents)
else:
fetched_spec = Spec.from_yaml(specfile_contents)
@@ -1827,7 +1627,7 @@ def get_mirrors_for_spec(spec=None, mirrors_to_check=None, index_only=False):
tty.debug("No Spack mirrors are currently configured")
return {}
results = binary_index.find_built_spec(spec, mirrors_to_check=mirrors_to_check)
results = binary_index.find_built_spec(spec)
# Maybe we just didn't have the latest information from the mirror, so
# try to fetch directly, unless we are only considering the indices.
@@ -2117,8 +1917,7 @@ def download_single_spec(
'path': local_tarball_path,
'required': True,
}, {
'url': [tarball_name(concrete_spec, '.spec.json.sig'),
tarball_name(concrete_spec, '.spec.json'),
'url': [tarball_name(concrete_spec, '.spec.json'),
tarball_name(concrete_spec, '.spec.yaml')],
'path': destination,
'required': True,


@@ -5,7 +5,6 @@
from __future__ import print_function
import contextlib
import copy
import fnmatch
import functools
import json
@@ -38,11 +37,6 @@
import spack.util.environment
import spack.util.executable
import spack.util.path
import spack.util.spack_yaml
import spack.util.url
#: Name of the file containing metadata about the bootstrapping source
METADATA_YAML_FILENAME = 'metadata.yaml'
#: Map a bootstrapper type to the corresponding class
_bootstrap_methods = {}
@@ -210,43 +204,12 @@ def _executables_in_store(executables, query_spec, query_info=None):
return False
class _BootstrapperBase(object):
"""Base class to derive types that can bootstrap software for Spack"""
config_scope_name = ''
@_bootstrapper(type='buildcache')
class _BuildcacheBootstrapper(object):
"""Install the software needed during bootstrapping from a buildcache."""
def __init__(self, conf):
self.name = conf['name']
self.url = conf['info']['url']
@property
def mirror_url(self):
# Absolute paths
if os.path.isabs(self.url):
return spack.util.url.format(self.url)
# Check for :// and assume it's an url if we find it
if '://' in self.url:
return self.url
# Otherwise, it's a relative path
return spack.util.url.format(os.path.join(self.metadata_dir, self.url))
@property
def mirror_scope(self):
return spack.config.InternalConfigScope(
self.config_scope_name, {'mirrors:': {self.name: self.mirror_url}}
)
@_bootstrapper(type='buildcache')
class _BuildcacheBootstrapper(_BootstrapperBase):
"""Install the software needed during bootstrapping from a buildcache."""
config_scope_name = 'bootstrap_buildcache'
def __init__(self, conf):
super(_BuildcacheBootstrapper, self).__init__(conf)
self.metadata_dir = spack.util.path.canonicalize_path(conf['metadata'])
self.last_search = None
@staticmethod
@@ -269,8 +232,9 @@ def _spec_and_platform(abstract_spec_str):
def _read_metadata(self, package_name):
"""Return metadata about the given package."""
json_filename = '{0}.json'.format(package_name)
json_dir = self.metadata_dir
json_path = os.path.join(json_dir, json_filename)
json_path = os.path.join(
spack.paths.share_path, 'bootstrap', self.name, json_filename
)
with open(json_path) as f:
data = json.load(f)
return data
@@ -344,6 +308,12 @@ def _install_and_test(
return True
return False
@property
def mirror_scope(self):
return spack.config.InternalConfigScope(
'bootstrap_buildcache', {'mirrors:': {self.name: self.url}}
)
def try_import(self, module, abstract_spec_str):
test_fn, info = functools.partial(_try_import_from_store, module), {}
if test_fn(query_spec=abstract_spec_str, query_info=info):
@@ -373,13 +343,9 @@ def try_search_path(self, executables, abstract_spec_str):
@_bootstrapper(type='install')
class _SourceBootstrapper(_BootstrapperBase):
class _SourceBootstrapper(object):
"""Install the software needed during bootstrapping from sources."""
config_scope_name = 'bootstrap_source'
def __init__(self, conf):
super(_SourceBootstrapper, self).__init__(conf)
self.metadata_dir = spack.util.path.canonicalize_path(conf['metadata'])
self.conf = conf
self.last_search = None
@@ -412,8 +378,7 @@ def try_import(self, module, abstract_spec_str):
tty.debug(msg.format(module, abstract_spec_str))
# Install the spec that should make the module importable
with spack.config.override(self.mirror_scope):
concrete_spec.package.do_install(fail_fast=True)
concrete_spec.package.do_install(fail_fast=True)
if _try_import_from_store(module, query_spec=concrete_spec, query_info=info):
self.last_search = info
@@ -426,8 +391,6 @@ def try_search_path(self, executables, abstract_spec_str):
self.last_search = info
return True
tty.info("Bootstrapping {0} from sources".format(abstract_spec_str))
# If we compile code from sources detecting a few build tools
# might reduce compilation time by a fair amount
_add_externals_if_missing()
@@ -440,8 +403,7 @@ def try_search_path(self, executables, abstract_spec_str):
msg = "[BOOTSTRAP] Try installing '{0}' from sources"
tty.debug(msg.format(abstract_spec_str))
with spack.config.override(self.mirror_scope):
concrete_spec.package.do_install()
concrete_spec.package.do_install()
if _executables_in_store(executables, concrete_spec, query_info=info):
self.last_search = info
return True
@@ -456,10 +418,9 @@ def _make_bootstrapper(conf):
return _bootstrap_methods[btype](conf)
def source_is_enabled_or_raise(conf):
"""Raise ValueError if the source is not enabled for bootstrapping"""
def _validate_source_is_trusted(conf):
trusted, name = spack.config.get('bootstrap:trusted'), conf['name']
if not trusted.get(name, False):
if name not in trusted:
raise ValueError('source is not trusted')
@@ -525,12 +486,13 @@ def ensure_module_importable_or_raise(module, abstract_spec=None):
return
abstract_spec = abstract_spec or module
source_configs = spack.config.get('bootstrap:sources', [])
h = GroupedExceptionHandler()
for current_config in bootstrapping_sources():
for current_config in source_configs:
with h.forward(current_config['name']):
source_is_enabled_or_raise(current_config)
_validate_source_is_trusted(current_config)
b = _make_bootstrapper(current_config)
if b.try_import(module, abstract_spec):
@@ -567,12 +529,13 @@ def ensure_executables_in_path_or_raise(executables, abstract_spec):
return cmd
executables_str = ', '.join(executables)
source_configs = spack.config.get('bootstrap:sources', [])
h = GroupedExceptionHandler()
for current_config in bootstrapping_sources():
for current_config in source_configs:
with h.forward(current_config['name']):
source_is_enabled_or_raise(current_config)
_validate_source_is_trusted(current_config)
b = _make_bootstrapper(current_config)
if b.try_search_path(executables, abstract_spec):
@@ -855,19 +818,6 @@ def ensure_flake8_in_path_or_raise():
return ensure_executables_in_path_or_raise([executable], abstract_spec=root_spec)
def all_root_specs(development=False):
"""Return a list of all the root specs that may be used to bootstrap Spack.
Args:
development (bool): if True include dev dependencies
"""
specs = [clingo_root_spec(), gnupg_root_spec(), patchelf_root_spec()]
if development:
specs += [isort_root_spec(), mypy_root_spec(),
black_root_spec(), flake8_root_spec()]
return specs
def _missing(name, purpose, system_only=True):
"""Message to be printed if an executable is not found"""
msg = '[{2}] MISSING "{0}": {1}'
@@ -1005,23 +955,3 @@ def status_message(section):
msg += '\n'
msg = msg.format(pass_token if not missing_software else fail_token)
return msg, missing_software
def bootstrapping_sources(scope=None):
"""Return the list of configured sources of software for bootstrapping Spack
Args:
scope (str or None): if a valid configuration scope is given, return the
list only from that scope
"""
source_configs = spack.config.get('bootstrap:sources', default=None, scope=scope)
source_configs = source_configs or []
list_of_sources = []
for entry in source_configs:
current = copy.copy(entry)
metadata_dir = spack.util.path.canonicalize_path(entry['metadata'])
metadata_yaml = os.path.join(metadata_dir, METADATA_YAML_FILENAME)
with open(metadata_yaml) as f:
current.update(spack.util.spack_yaml.load(f))
list_of_sources.append(current)
return list_of_sources
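The loops in `ensure_module_importable_or_raise` and `ensure_executables_in_path_or_raise` above collect per-source failures instead of aborting on the first one. A self-contained sketch of that pattern, with a toy handler standing in for Spack's `GroupedExceptionHandler`:

```python
import contextlib

class ToyGroupedHandler(object):
    """Collect (name, exception) pairs instead of propagating immediately."""
    def __init__(self):
        self.errors = []

    @contextlib.contextmanager
    def forward(self, name):
        try:
            yield
        except Exception as e:  # record this source's failure, try the next
            self.errors.append((name, e))

def bootstrap_from_first_working_source(sources, try_source):
    handler = ToyGroupedHandler()
    for conf in sources:
        with handler.forward(conf['name']):
            if try_source(conf):
                return conf['name']
    raise RuntimeError('all bootstrapping sources failed: {0}'
                       .format(handler.errors))
```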

View File

@@ -242,6 +242,17 @@ def clean_environment():
# show useful matches.
env.set('LC_ALL', build_lang)
remove_flags = set()
keep_flags = set()
if spack.config.get('config:flags:keep_werror') == 'all':
keep_flags.add('-Werror*')
else:
if spack.config.get('config:flags:keep_werror') == 'specific':
keep_flags.add('-Werror=*')
remove_flags.add('-Werror*')
env.set('SPACK_COMPILER_FLAGS_KEEP', '|'.join(keep_flags))
env.set('SPACK_COMPILER_FLAGS_REMOVE', '|'.join(remove_flags))
# Remove any macports installs from the PATH. The macports ld can
# cause conflicts with the built-in linker on el capitan. Solves
# assembler issues, e.g.:
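The two pattern lists set above are consumed later by the compiler wrappers: conceptually, a flag survives if it matches a keep pattern or matches no remove pattern. A sketch of that filtering (the real matching happens in the shell wrapper and may differ in detail):

```python
import fnmatch

def filter_flags(flags, keep_patterns, remove_patterns):
    """Keep a flag if a keep pattern matches, or if no remove pattern does."""
    return [
        flag for flag in flags
        if any(fnmatch.fnmatch(flag, p) for p in keep_patterns)
        or not any(fnmatch.fnmatch(flag, p) for p in remove_patterns)
    ]

# keep_werror: specific -> '-Werror=format' survives, bare '-Werror' is dropped
assert filter_flags(['-O2', '-Werror', '-Werror=format'],
                    ['-Werror=*'], ['-Werror*']) == ['-O2', '-Werror=format']
```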

View File

@@ -33,6 +33,7 @@
import spack.util.executable as exe
import spack.util.gpg as gpg_util
import spack.util.spack_yaml as syaml
import spack.util.url as url_util
import spack.util.web as web_util
from spack.error import SpackError
from spack.spec import Spec
@@ -41,8 +42,10 @@
'always',
]
SPACK_PR_MIRRORS_ROOT_URL = 's3://spack-binaries-prs'
SPACK_SHARED_PR_MIRROR_URL = url_util.join(SPACK_PR_MIRRORS_ROOT_URL,
'shared_pr_mirror')
TEMP_STORAGE_MIRROR_NAME = 'ci_temporary_mirror'
SPACK_RESERVED_TAGS = ["public", "protected", "notary"]
spack_gpg = spack.main.SpackCommand('gpg')
spack_compiler = spack.main.SpackCommand('compiler')
@@ -196,11 +199,6 @@ def _get_cdash_build_name(spec, build_group):
spec.name, spec.version, spec.compiler, spec.architecture, build_group)
def _remove_reserved_tags(tags):
"""Convenience function to strip reserved tags from jobs"""
return [tag for tag in tags if tag not in SPACK_RESERVED_TAGS]
def _get_spec_string(spec):
format_elements = [
'{name}{@version}',
@@ -233,10 +231,8 @@ def _add_dependency(spec_label, dep_label, deps):
deps[spec_label].add(dep_label)
def _get_spec_dependencies(specs, deps, spec_labels, check_index_only=False,
mirrors_to_check=None):
spec_deps_obj = _compute_spec_deps(specs, check_index_only=check_index_only,
mirrors_to_check=mirrors_to_check)
def _get_spec_dependencies(specs, deps, spec_labels, check_index_only=False):
spec_deps_obj = _compute_spec_deps(specs, check_index_only=check_index_only)
if spec_deps_obj:
dependencies = spec_deps_obj['dependencies']
@@ -253,7 +249,7 @@ def _get_spec_dependencies(specs, deps, spec_labels, check_index_only=False,
_add_dependency(entry['spec'], entry['depends'], deps)
def stage_spec_jobs(specs, check_index_only=False, mirrors_to_check=None):
def stage_spec_jobs(specs, check_index_only=False):
"""Take a set of release specs and generate a list of "stages", where the
jobs in any stage are dependent only on jobs in previous stages. This
allows us to maximize build parallelism within the gitlab-ci framework.
@@ -265,8 +261,6 @@ def stage_spec_jobs(specs, check_index_only=False, mirrors_to_check=None):
are up to date on those mirrors. This flag limits that search to
the binary cache indices on those mirrors to speed the process up,
even though there is no guarantee the index is up to date.
mirrors_to_check: Optional mapping giving mirrors to check instead of
any configured mirrors.
Returns: A tuple of information objects describing the specs, dependencies
and stages:
@@ -303,8 +297,8 @@ def _remove_satisfied_deps(deps, satisfied_list):
deps = {}
spec_labels = {}
_get_spec_dependencies(specs, deps, spec_labels, check_index_only=check_index_only,
mirrors_to_check=mirrors_to_check)
_get_spec_dependencies(
specs, deps, spec_labels, check_index_only=check_index_only)
# Save the original deps, as we need to return them at the end of the
# function. In the while loop below, the "dependencies" variable is
@@ -346,7 +340,7 @@ def _print_staging_summary(spec_labels, dependencies, stages):
_get_spec_string(s)))
def _compute_spec_deps(spec_list, check_index_only=False, mirrors_to_check=None):
def _compute_spec_deps(spec_list, check_index_only=False):
"""
Computes all the dependencies for the spec(s) and generates a JSON
object which provides both a list of unique spec names as well as a
@@ -419,7 +413,7 @@ def append_dep(s, d):
continue
up_to_date_mirrors = bindist.get_mirrors_for_spec(
spec=s, mirrors_to_check=mirrors_to_check, index_only=check_index_only)
spec=s, index_only=check_index_only)
skey = _spec_deps_key(s)
spec_labels[skey] = {
@@ -608,8 +602,8 @@ def get_spec_filter_list(env, affected_pkgs, dependencies=True, dependents=True)
def generate_gitlab_ci_yaml(env, print_summary, output_file,
prune_dag=False, check_index_only=False,
run_optimizer=False, use_dependencies=False,
artifacts_root=None, remote_mirror_override=None):
""" Generate a gitlab yaml file to run a dynamic child pipeline from
artifacts_root=None):
""" Generate a gitlab yaml file to run a dynamic chile pipeline from
the spec matrix in the active environment.
Arguments:
@@ -635,10 +629,6 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file,
artifacts_root (str): Path where artifacts like logs, environment
files (spack.yaml, spack.lock), etc should be written. GitLab
requires this to be within the project directory.
remote_mirror_override (str): Typically only needed when one spack.yaml
is used to populate several mirrors with binaries, based on some
criteria. Spack protected pipelines populate different mirrors based
on branch name, facilitated by this option.
"""
with spack.concretize.disable_compiler_existence_check():
with env.write_transaction():
@@ -688,19 +678,17 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file,
for s in affected_specs:
tty.debug(' {0}'.format(s.name))
# Downstream jobs will "need" (depend on, for both scheduling and
# artifacts, which include spack.lock file) this pipeline generation
# job by both name and pipeline id. If those environment variables
# do not exist, then maybe this is just running in a shell, in which
# case, there is no expectation gitlab will ever run the generated
# pipeline and those environment variables do not matter.
generate_job_name = os.environ.get('CI_JOB_NAME', 'job-does-not-exist')
parent_pipeline_id = os.environ.get('CI_PIPELINE_ID', 'pipeline-does-not-exist')
generate_job_name = os.environ.get('CI_JOB_NAME', None)
parent_pipeline_id = os.environ.get('CI_PIPELINE_ID', None)
# Values: "spack_pull_request", "spack_protected_branch", or not set
spack_pipeline_type = os.environ.get('SPACK_PIPELINE_TYPE', None)
is_pr_pipeline = spack_pipeline_type == 'spack_pull_request'
spack_buildcache_copy = os.environ.get('SPACK_COPY_BUILDCACHE', None)
spack_pr_branch = os.environ.get('SPACK_PR_BRANCH', None)
pr_mirror_url = None
if spack_pr_branch:
pr_mirror_url = url_util.join(SPACK_PR_MIRRORS_ROOT_URL,
spack_pr_branch)
if 'mirrors' not in yaml_root or len(yaml_root['mirrors'].values()) < 1:
tty.die('spack ci generate requires an env containing a mirror')
@@ -755,25 +743,14 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file,
'strip-compilers': False,
})
# If a remote mirror override (alternate buildcache destination) was
# specified, add it here in case it has already built hashes we might
# generate.
mirrors_to_check = None
if remote_mirror_override:
if spack_pipeline_type == 'spack_protected_branch':
# Overriding the main mirror in this case might result
# in skipping jobs on a release pipeline because specs are
# up to date in develop. Eventually we want to notice and take
# advantage of this by scheduling a job to copy the spec from
# develop to the release, but until we have that, this makes
# sure we schedule a rebuild job if the spec isn't already in
# override mirror.
mirrors_to_check = {
'override': remote_mirror_override
}
else:
spack.mirror.add(
'ci_pr_mirror', remote_mirror_override, cfg.default_modify_scope())
# Add per-PR mirror (and shared PR mirror) if enabled, as some specs might
# be up to date in one of those and thus not need to be rebuilt.
if pr_mirror_url:
spack.mirror.add(
'ci_pr_mirror', pr_mirror_url, cfg.default_modify_scope())
spack.mirror.add('ci_shared_pr_mirror',
SPACK_SHARED_PR_MIRROR_URL,
cfg.default_modify_scope())
pipeline_artifacts_dir = artifacts_root
if not pipeline_artifacts_dir:
@@ -848,13 +825,11 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file,
phase_spec.concretize()
staged_phases[phase_name] = stage_spec_jobs(
concrete_phase_specs,
check_index_only=check_index_only,
mirrors_to_check=mirrors_to_check)
check_index_only=check_index_only)
finally:
# Clean up remote mirror override if enabled
if remote_mirror_override:
if spack_pipeline_type != 'spack_protected_branch':
spack.mirror.remove('ci_pr_mirror', cfg.default_modify_scope())
# Clean up PR mirror if enabled
if pr_mirror_url:
spack.mirror.remove('ci_pr_mirror', cfg.default_modify_scope())
all_job_names = []
output_object = {}
@@ -914,14 +889,6 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file,
tags = [tag for tag in runner_attribs['tags']]
if spack_pipeline_type is not None:
# For spack pipelines "public" and "protected" are reserved tags
tags = _remove_reserved_tags(tags)
if spack_pipeline_type == 'spack_protected_branch':
tags.extend(['aws', 'protected'])
elif spack_pipeline_type == 'spack_pull_request':
tags.extend(['public'])
variables = {}
if 'variables' in runner_attribs:
variables.update(runner_attribs['variables'])
@@ -1207,10 +1174,6 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file,
service_job_config,
cleanup_job)
if 'tags' in cleanup_job:
service_tags = _remove_reserved_tags(cleanup_job['tags'])
cleanup_job['tags'] = service_tags
cleanup_job['stage'] = 'cleanup-temp-storage'
cleanup_job['script'] = [
'spack -d mirror destroy --mirror-url {0}/$CI_PIPELINE_ID'.format(
@@ -1218,74 +1181,9 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file,
]
cleanup_job['when'] = 'always'
cleanup_job['retry'] = service_job_retries
cleanup_job['interruptible'] = True
output_object['cleanup'] = cleanup_job
if ('signing-job-attributes' in gitlab_ci and
spack_pipeline_type == 'spack_protected_branch'):
# External signing: generate a job to check and sign binary pkgs
stage_names.append('stage-sign-pkgs')
signing_job_config = gitlab_ci['signing-job-attributes']
signing_job = {}
signing_job_attrs_to_copy = [
'image',
'tags',
'variables',
'before_script',
'script',
'after_script',
]
_copy_attributes(signing_job_attrs_to_copy,
signing_job_config,
signing_job)
signing_job_tags = []
if 'tags' in signing_job:
signing_job_tags = _remove_reserved_tags(signing_job['tags'])
for tag in ['aws', 'protected', 'notary']:
if tag not in signing_job_tags:
signing_job_tags.append(tag)
signing_job['tags'] = signing_job_tags
signing_job['stage'] = 'stage-sign-pkgs'
signing_job['when'] = 'always'
signing_job['retry'] = {
'max': 2,
'when': ['always']
}
signing_job['interruptible'] = True
output_object['sign-pkgs'] = signing_job
if spack_buildcache_copy:
# Generate a job to copy the contents from wherever the builds are getting
# pushed to the url specified in the "SPACK_BUILDCACHE_COPY" environment
# variable.
src_url = remote_mirror_override or remote_mirror_url
dest_url = spack_buildcache_copy
stage_names.append('stage-copy-buildcache')
copy_job = {
'stage': 'stage-copy-buildcache',
'tags': ['spack', 'public', 'medium', 'aws', 'x86_64'],
'image': 'ghcr.io/spack/python-aws-bash:0.0.1',
'when': 'on_success',
'interruptible': True,
'retry': service_job_retries,
'script': [
'. ./share/spack/setup-env.sh',
'spack --version',
'aws s3 sync --exclude *index.json* --exclude *pgp* {0} {1}'.format(
src_url, dest_url)
]
}
output_object['copy-mirror'] = copy_job
if rebuild_index_enabled:
# Add a final job to regenerate the index
stage_names.append('stage-rebuild-index')
@@ -1296,13 +1194,9 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file,
service_job_config,
final_job)
if 'tags' in final_job:
service_tags = _remove_reserved_tags(final_job['tags'])
final_job['tags'] = service_tags
index_target_mirror = mirror_urls[0]
if remote_mirror_override:
index_target_mirror = remote_mirror_override
if is_pr_pipeline:
index_target_mirror = pr_mirror_url
final_job['stage'] = 'stage-rebuild-index'
final_job['script'] = [
@@ -1311,7 +1205,6 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file,
]
final_job['when'] = 'always'
final_job['retry'] = service_job_retries
final_job['interruptible'] = True
output_object['rebuild-index'] = final_job
@@ -1344,9 +1237,8 @@ def generate_gitlab_ci_yaml(env, print_summary, output_file,
'SPACK_PIPELINE_TYPE': str(spack_pipeline_type)
}
if remote_mirror_override:
(output_object['variables']
['SPACK_REMOTE_MIRROR_OVERRIDE']) = remote_mirror_override
if pr_mirror_url:
output_object['variables']['SPACK_PR_MIRROR_URL'] = pr_mirror_url
spack_stack_name = os.environ.get('SPACK_CI_STACK_NAME', None)
if spack_stack_name:
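The staging described in the `stage_spec_jobs` docstring above is a plain topological layering: repeatedly peel off the jobs whose dependencies are all satisfied and emit them as one stage. A minimal sketch over a label-to-dependency-set mapping:

```python
def stage_jobs(deps):
    """Layer a DAG into stages; jobs in a stage depend only on earlier stages."""
    remaining = {job: set(d) for job, d in deps.items()}
    stages = []
    while remaining:
        ready = sorted(job for job, d in remaining.items() if not d)
        if not ready:
            raise ValueError('dependency cycle detected')
        stages.append(ready)
        for job in ready:
            del remaining[job]
        for d in remaining.values():
            d.difference_update(ready)
    return stages

# 'lib' builds first, then 'app' and 'tool' in parallel, then 'bundle'
assert stage_jobs({'lib': set(), 'app': {'lib'}, 'tool': {'lib'},
                   'bundle': {'app', 'tool'}}) == [['lib'], ['app', 'tool'],
                                                   ['bundle']]
```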

View File

@@ -8,10 +8,7 @@
import argparse
import os
import re
import shlex
import sys
from textwrap import dedent
from typing import List, Tuple
import ruamel.yaml as yaml
import six
@@ -150,58 +147,6 @@ def get_command(cmd_name):
return getattr(get_module(cmd_name), pname)
class _UnquotedFlags(object):
"""Use a heuristic in `.extract()` to detect whether the user is trying to set
multiple flags like the docker ENV attribute allows (e.g. 'cflags=-Os -pipe').
If the heuristic finds a match (which can be checked with `__bool__()`), a warning
message explaining how to quote multiple flags correctly can be generated with
`.report()`.
"""
flags_arg_pattern = re.compile(
r'^({0})=([^\'"].*)$'.format(
'|'.join(spack.spec.FlagMap.valid_compiler_flags()),
))
def __init__(self, all_unquoted_flag_pairs):
# type: (List[Tuple[re.Match, str]]) -> None
self._flag_pairs = all_unquoted_flag_pairs
def __bool__(self):
# type: () -> bool
return bool(self._flag_pairs)
@classmethod
def extract(cls, sargs):
# type: (str) -> _UnquotedFlags
all_unquoted_flag_pairs = [] # type: List[Tuple[re.Match, str]]
prev_flags_arg = None
for arg in shlex.split(sargs):
if prev_flags_arg is not None:
all_unquoted_flag_pairs.append((prev_flags_arg, arg))
prev_flags_arg = cls.flags_arg_pattern.match(arg)
return cls(all_unquoted_flag_pairs)
def report(self):
# type: () -> str
single_errors = [
'({0}) {1} {2} => {3}'.format(
i + 1, match.group(0), next_arg,
'{0}="{1} {2}"'.format(match.group(1), match.group(2), next_arg),
)
for i, (match, next_arg) in enumerate(self._flag_pairs)
]
return dedent("""\
Some compiler or linker flags were provided without quoting their arguments,
which now causes spack to try to parse the *next* argument as a spec component
such as a variant instead of an additional compiler or linker flag. If the
intent was to set multiple flags, try quoting them together as described below.
Possible flag quotation errors (with the correctly-quoted version after the =>):
{0}""").format('\n'.join(single_errors))
def parse_specs(args, **kwargs):
"""Convenience function for parsing arguments from specs. Handles common
exceptions and dies if there are errors.
@@ -212,28 +157,15 @@ def parse_specs(args, **kwargs):
sargs = args
if not isinstance(args, six.string_types):
sargs = ' '.join(args)
unquoted_flags = _UnquotedFlags.extract(sargs)
sargs = ' '.join(spack.util.string.quote(args))
specs = spack.spec.parse(sargs)
for spec in specs:
if concretize:
spec.concretize(tests=tests) # implies normalize
elif normalize:
spec.normalize(tests=tests)
try:
specs = spack.spec.parse(sargs)
for spec in specs:
if concretize:
spec.concretize(tests=tests) # implies normalize
elif normalize:
spec.normalize(tests=tests)
return specs
except spack.error.SpecError as e:
msg = e.message
if e.long_message:
msg += e.long_message
if unquoted_flags:
msg += '\n\n'
msg += unquoted_flags.report()
raise spack.error.SpackError(msg)
return specs
def matching_spec_from_env(spec):
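A short usage sketch of the `_UnquotedFlags` heuristic above, reduced to one function with a fixed flag list (the real code takes the list from `spack.spec.FlagMap.valid_compiler_flags()`):

```python
import re
import shlex

FLAGS = ('cflags', 'cxxflags', 'fflags', 'cppflags', 'ldflags', 'ldlibs')
PATTERN = re.compile(r'^({0})=([^\'"].*)$'.format('|'.join(FLAGS)))

def suspect_pairs(command_line):
    """Return (flag_assignment, following_token) pairs that look unquoted."""
    pairs, prev = [], None
    for arg in shlex.split(command_line):
        if prev is not None:
            pairs.append((prev.group(0), arg))
        prev = PATTERN.match(arg)
    return pairs

# 'cflags=-Os -pipe' splits into two tokens; '-pipe' would be parsed as a spec
assert suspect_pairs('pkg cflags=-Os -pipe') == [('cflags=-Os', '-pipe')]
```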

View File

@@ -6,9 +6,7 @@
import os.path
import shutil
import tempfile
import llnl.util.filesystem
import llnl.util.tty
import llnl.util.tty.color
@@ -17,9 +15,6 @@
import spack.cmd.common.arguments
import spack.config
import spack.main
import spack.mirror
import spack.spec
import spack.stage
import spack.util.path
description = "manage bootstrap configuration"
@@ -27,38 +22,6 @@
level = "long"
# Tarball to be downloaded if binary packages are requested in a local mirror
BINARY_TARBALL = 'https://github.com/spack/spack-bootstrap-mirrors/releases/download/v0.2/bootstrap-buildcache.tar.gz'
#: Subdirectory where to create the mirror
LOCAL_MIRROR_DIR = 'bootstrap_cache'
# Metadata for a generated binary mirror
BINARY_METADATA = {
'type': 'buildcache',
'description': ('Buildcache copied from a public tarball available on Github. '
'The sha256 checksum of binaries is checked before installation.'),
'info': {
'url': os.path.join('..', '..', LOCAL_MIRROR_DIR),
'homepage': 'https://github.com/spack/spack-bootstrap-mirrors',
'releases': 'https://github.com/spack/spack-bootstrap-mirrors/releases',
'tarball': BINARY_TARBALL
}
}
CLINGO_JSON = '$spack/share/spack/bootstrap/github-actions-v0.2/clingo.json'
GNUPG_JSON = '$spack/share/spack/bootstrap/github-actions-v0.2/gnupg.json'
# Metadata for a generated source mirror
SOURCE_METADATA = {
'type': 'install',
'description': 'Mirror with software needed to bootstrap Spack',
'info': {
'url': os.path.join('..', '..', LOCAL_MIRROR_DIR)
}
}
def _add_scope_option(parser):
scopes = spack.config.scopes()
scopes_metavar = spack.config.scopes_metavar
@@ -104,61 +67,24 @@ def setup_parser(subparser):
)
list = sp.add_parser(
'list', help='list all the sources of software to bootstrap Spack'
'list', help='list the methods available for bootstrapping'
)
_add_scope_option(list)
trust = sp.add_parser(
'trust', help='trust a bootstrapping source'
'trust', help='trust a bootstrapping method'
)
_add_scope_option(trust)
trust.add_argument(
'name', help='name of the source to be trusted'
'name', help='name of the method to be trusted'
)
untrust = sp.add_parser(
'untrust', help='untrust a bootstrapping source'
'untrust', help='untrust a bootstrapping method'
)
_add_scope_option(untrust)
untrust.add_argument(
'name', help='name of the source to be untrusted'
)
add = sp.add_parser(
'add', help='add a new source for bootstrapping'
)
_add_scope_option(add)
add.add_argument(
'--trust', action='store_true',
help='trust the source immediately upon addition')
add.add_argument(
'name', help='name of the new source of software'
)
add.add_argument(
'metadata_dir', help='directory where to find metadata files'
)
remove = sp.add_parser(
'remove', help='remove a bootstrapping source'
)
remove.add_argument(
'name', help='name of the source to be removed'
)
mirror = sp.add_parser(
'mirror', help='create a local mirror to bootstrap Spack'
)
mirror.add_argument(
'--binary-packages', action='store_true',
help='download public binaries in the mirror'
)
mirror.add_argument(
'--dev', action='store_true',
help='download dev dependencies too'
)
mirror.add_argument(
metavar='DIRECTORY', dest='root_dir',
help='root directory in which to create the mirror and metadata'
'name', help='name of the method to be untrusted'
)
@@ -211,7 +137,10 @@ def _root(args):
def _list(args):
sources = spack.bootstrap.bootstrapping_sources(scope=args.scope)
sources = spack.config.get(
'bootstrap:sources', default=None, scope=args.scope
)
if not sources:
llnl.util.tty.msg(
"No method available for bootstrapping Spack's dependencies"
@@ -320,121 +249,6 @@ def _status(args):
print()
def _add(args):
initial_sources = spack.bootstrap.bootstrapping_sources()
names = [s['name'] for s in initial_sources]
# If the name is already used error out
if args.name in names:
msg = 'a source named "{0}" already exists. Please choose a different name'
raise RuntimeError(msg.format(args.name))
# Check that the metadata file exists
metadata_dir = spack.util.path.canonicalize_path(args.metadata_dir)
if not os.path.exists(metadata_dir) or not os.path.isdir(metadata_dir):
raise RuntimeError(
'the directory "{0}" does not exist'.format(args.metadata_dir)
)
file = os.path.join(metadata_dir, 'metadata.yaml')
if not os.path.exists(file):
raise RuntimeError('the file "{0}" does not exist'.format(file))
# Insert the new source as the highest priority one
write_scope = args.scope or spack.config.default_modify_scope(section='bootstrap')
sources = spack.config.get('bootstrap:sources', scope=write_scope) or []
sources = [
{'name': args.name, 'metadata': args.metadata_dir}
] + sources
spack.config.set('bootstrap:sources', sources, scope=write_scope)
msg = 'New bootstrapping source "{0}" added in the "{1}" configuration scope'
llnl.util.tty.msg(msg.format(args.name, write_scope))
if args.trust:
_trust(args)
def _remove(args):
initial_sources = spack.bootstrap.bootstrapping_sources()
names = [s['name'] for s in initial_sources]
if args.name not in names:
msg = ('cannot find any bootstrapping source named "{0}". '
'Run `spack bootstrap list` to see available sources.')
raise RuntimeError(msg.format(args.name))
for current_scope in spack.config.scopes():
sources = spack.config.get('bootstrap:sources', scope=current_scope) or []
if args.name in [s['name'] for s in sources]:
sources = [s for s in sources if s['name'] != args.name]
spack.config.set('bootstrap:sources', sources, scope=current_scope)
msg = ('Removed the bootstrapping source named "{0}" from the '
'"{1}" configuration scope.')
llnl.util.tty.msg(msg.format(args.name, current_scope))
trusted = spack.config.get('bootstrap:trusted', scope=current_scope) or []
if args.name in trusted:
trusted.pop(args.name)
spack.config.set('bootstrap:trusted', trusted, scope=current_scope)
msg = 'Deleting information on "{0}" from list of trusted sources'
llnl.util.tty.msg(msg.format(args.name))
def _mirror(args):
mirror_dir = spack.util.path.canonicalize_path(
os.path.join(args.root_dir, LOCAL_MIRROR_DIR)
)
# TODO: Here we are adding gnuconfig manually, but this can be fixed
# TODO: as soon as we have an option to add to a mirror all the possible
# TODO: dependencies of a spec
root_specs = spack.bootstrap.all_root_specs(development=args.dev) + ['gnuconfig']
for spec_str in root_specs:
msg = 'Adding "{0}" and dependencies to the mirror at {1}'
llnl.util.tty.msg(msg.format(spec_str, mirror_dir))
# Suppress tty from the call below for terser messages
llnl.util.tty.set_msg_enabled(False)
spec = spack.spec.Spec(spec_str).concretized()
for node in spec.traverse():
spack.mirror.create(mirror_dir, [node])
llnl.util.tty.set_msg_enabled(True)
if args.binary_packages:
msg = 'Adding binary packages from "{0}" to the mirror at {1}'
llnl.util.tty.msg(msg.format(BINARY_TARBALL, mirror_dir))
llnl.util.tty.set_msg_enabled(False)
stage = spack.stage.Stage(BINARY_TARBALL, path=tempfile.mkdtemp())
stage.create()
stage.fetch()
stage.expand_archive()
build_cache_dir = os.path.join(stage.source_path, 'build_cache')
shutil.move(build_cache_dir, mirror_dir)
llnl.util.tty.set_msg_enabled(True)
def write_metadata(subdir, metadata):
metadata_rel_dir = os.path.join('metadata', subdir)
metadata_yaml = os.path.join(
args.root_dir, metadata_rel_dir, 'metadata.yaml'
)
llnl.util.filesystem.mkdirp(os.path.dirname(metadata_yaml))
with open(metadata_yaml, mode='w') as f:
spack.util.spack_yaml.dump(metadata, stream=f)
return os.path.dirname(metadata_yaml), metadata_rel_dir
instructions = ('\nTo register the mirror on the platform where it\'s supposed '
'to be used, move "{0}" to its final location and run the '
'following command(s):\n\n').format(args.root_dir)
cmd = ' % spack bootstrap add --trust {0} <final-path>/{1}\n'
_, rel_directory = write_metadata(subdir='sources', metadata=SOURCE_METADATA)
instructions += cmd.format('local-sources', rel_directory)
if args.binary_packages:
abs_directory, rel_directory = write_metadata(
subdir='binaries', metadata=BINARY_METADATA
)
shutil.copy(spack.util.path.canonicalize_path(CLINGO_JSON), abs_directory)
shutil.copy(spack.util.path.canonicalize_path(GNUPG_JSON), abs_directory)
instructions += cmd.format('local-binaries', rel_directory)
print(instructions)
def bootstrap(parser, args):
callbacks = {
'status': _status,
@@ -444,9 +258,6 @@ def bootstrap(parser, args):
'root': _root,
'list': _list,
'trust': _trust,
'untrust': _untrust,
'add': _add,
'remove': _remove,
'mirror': _mirror
'untrust': _untrust
}
callbacks[args.subcommand](args)

View File

@@ -64,11 +64,6 @@ def setup_parser(subparser):
'--dependencies', action='store_true', default=False,
help="(Experimental) disable DAG scheduling; use "
' "plain" dependencies.')
generate.add_argument(
'--buildcache-destination', default=None,
help="Override the mirror configured in the environment (spack.yaml) " +
"in order to push binaries from the generated pipeline to a " +
"different location.")
prune_group = generate.add_mutually_exclusive_group()
prune_group.add_argument(
'--prune-dag', action='store_true', dest='prune_dag',
@@ -132,7 +127,6 @@ def ci_generate(args):
prune_dag = args.prune_dag
index_only = args.index_only
artifacts_root = args.artifacts_root
buildcache_destination = args.buildcache_destination
if not output_file:
output_file = os.path.abspath(".gitlab-ci.yml")
@@ -146,8 +140,7 @@ def ci_generate(args):
spack_ci.generate_gitlab_ci_yaml(
env, True, output_file, prune_dag=prune_dag,
check_index_only=index_only, run_optimizer=run_optimizer,
use_dependencies=use_dependencies, artifacts_root=artifacts_root,
remote_mirror_override=buildcache_destination)
use_dependencies=use_dependencies, artifacts_root=artifacts_root)
if copy_yaml_to:
copy_to_dir = os.path.dirname(copy_yaml_to)
@@ -187,9 +180,6 @@ def ci_rebuild(args):
if not gitlab_ci:
tty.die('spack ci rebuild requires an env containing gitlab-ci cfg')
tty.msg('SPACK_BUILDCACHE_DESTINATION={0}'.format(
os.environ.get('SPACK_BUILDCACHE_DESTINATION', None)))
# Grab the environment variables we need. These either come from the
# pipeline generation step ("spack ci generate"), where they were written
# out as variables, or else provided by GitLab itself.
@@ -206,7 +196,7 @@ def ci_rebuild(args):
compiler_action = get_env_var('SPACK_COMPILER_ACTION')
cdash_build_name = get_env_var('SPACK_CDASH_BUILD_NAME')
spack_pipeline_type = get_env_var('SPACK_PIPELINE_TYPE')
remote_mirror_override = get_env_var('SPACK_REMOTE_MIRROR_OVERRIDE')
pr_mirror_url = get_env_var('SPACK_PR_MIRROR_URL')
remote_mirror_url = get_env_var('SPACK_REMOTE_MIRROR_URL')
# Construct absolute paths relative to current $CI_PROJECT_DIR
@@ -254,10 +244,6 @@ def ci_rebuild(args):
tty.debug('Pipeline type - PR: {0}, develop: {1}'.format(
spack_is_pr_pipeline, spack_is_develop_pipeline))
# If no override url exists, then just push binary package to the
# normal remote mirror url.
buildcache_mirror_url = remote_mirror_override or remote_mirror_url
# Figure out what is our temporary storage mirror: Is it artifacts
# buildcache? Or temporary-storage-url-prefix? In some cases we need to
# force something or pipelines might not have a way to propagate build
@@ -387,24 +373,7 @@ def ci_rebuild(args):
cfg.default_modify_scope())
# Check configured mirrors for a built spec with a matching hash
mirrors_to_check = None
if remote_mirror_override and spack_pipeline_type == 'spack_protected_branch':
# Passing "mirrors_to_check" below means we *only* look in the override
# mirror to see if we should skip building, which is what we want.
mirrors_to_check = {
'override': remote_mirror_override
}
# Adding this mirror to the list of configured mirrors means dependencies
# could be installed from either the override mirror or any other configured
# mirror (e.g. remote_mirror_url which is defined in the environment or
# pipeline_mirror_url), which is also what we want.
spack.mirror.add('mirror_override',
remote_mirror_override,
cfg.default_modify_scope())
matches = bindist.get_mirrors_for_spec(
job_spec, mirrors_to_check=mirrors_to_check, index_only=False)
matches = bindist.get_mirrors_for_spec(job_spec, index_only=False)
if matches:
# Got a hash match on at least one configured mirror. All
@@ -548,6 +517,13 @@ def ci_rebuild(args):
# any logs from the staging directory to artifacts now
spack_ci.copy_stage_logs_to_artifacts(job_spec, job_log_dir)
# Create buildcache on remote mirror, either on pr-specific mirror or
# on the main mirror defined in the gitlab-enabled spack environment
if spack_is_pr_pipeline:
buildcache_mirror_url = pr_mirror_url
else:
buildcache_mirror_url = remote_mirror_url
# If the install succeeded, create a buildcache entry for this job spec
# and push it to one or more mirrors. If the install did not succeed,
# print out some instructions on how to reproduce this build failure

View File

@@ -380,6 +380,11 @@ def add_concretizer_args(subparser):
const=False, default=None,
help='do not reuse installed deps; build newest configuration'
)
subgroup.add_argument(
'--minimal', action=ConfigSetAction, dest="concretizer:minimal",
const=True, default=None,
help='minimize builds (disables default variants, may choose older versions)'
)
subgroup.add_argument(
'--reuse', action=ConfigSetAction, dest="concretizer:reuse",
const=True, default=None,
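`ConfigSetAction` is defined earlier in this module; the convention that matters here is that `dest` names a colon-separated configuration path, so `--minimal` sets `concretizer:minimal`. A toy argparse action illustrating the idea (the real action writes into Spack's configuration, not a namespace attribute):

```python
import argparse

class ToyConfigSet(argparse.Action):
    """Store `const` under the 'section:key' path given as `dest`."""
    def __call__(self, parser, namespace, values, option_string=None):
        config = getattr(namespace, 'config', None) or {}
        section, _, key = self.dest.partition(':')
        config.setdefault(section, {})[key] = self.const
        namespace.config = config

parser = argparse.ArgumentParser()
parser.add_argument('--minimal', action=ToyConfigSet, nargs=0,
                    dest='concretizer:minimal', const=True, default=None)
args = parser.parse_args(['--minimal'])
assert args.config == {'concretizer': {'minimal': True}}
```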

View File

@@ -5,7 +5,6 @@
from __future__ import print_function
import argparse
import errno
import os
import sys
@@ -94,21 +93,6 @@ def external_find(args):
# It's fine to not find any manifest file if we are doing the
# search implicitly (i.e. as part of 'spack external find')
pass
except Exception as e:
# For most exceptions, just print a warning and continue.
# Note that KeyboardInterrupt does not subclass Exception
# (so CTRL-C will terminate the program as expected).
skip_msg = ("Skipping manifest and continuing with other external "
"checks")
if ((isinstance(e, IOError) or isinstance(e, OSError)) and
e.errno in [errno.EPERM, errno.EACCES]):
# The manifest file does not have sufficient permissions enabled:
# print a warning and keep going
tty.warn("Unable to read manifest due to insufficient "
"permissions.", skip_msg)
else:
tty.warn("Unable to read manifest, unexpected error: {0}"
.format(str(e)), skip_msg)
# If the user didn't specify anything, search for build tools by default
if not args.tags and not args.all and not args.packages:
@@ -141,7 +125,7 @@ def external_find(args):
# If the list of packages is empty, search for every possible package
if not args.tags and not packages_to_check:
packages_to_check = list(spack.repo.path.all_packages())
packages_to_check = spack.repo.path.all_packages()
detected_packages = spack.detection.by_executable(
packages_to_check, path_hints=args.path)
@@ -193,10 +177,7 @@ def _collect_and_consume_cray_manifest_files(
for directory in manifest_dirs:
for fname in os.listdir(directory):
if fname.endswith('.json'):
fpath = os.path.join(directory, fname)
tty.debug("Adding manifest file: {0}".format(fpath))
manifest_files.append(os.path.join(directory, fpath))
manifest_files.append(os.path.join(directory, fname))
if not manifest_files:
raise NoManifestFileError(
@@ -204,7 +185,6 @@ def _collect_and_consume_cray_manifest_files(
.format(cray_manifest.default_path))
for path in manifest_files:
tty.debug("Reading manifest file: " + path)
try:
cray_manifest.read(path, not dry_run)
except (spack.compilers.UnknownCompilerError, spack.error.SpackError) as e:
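The exception handler above distinguishes permission errors from everything else but always continues, since `spack external find` should carry on with its other checks. A condensed sketch (`read_one` and `warn` are hypothetical stand-ins for `cray_manifest.read` and `tty.warn`):

```python
import errno

def scan_manifests(paths, read_one, warn):
    """Read each manifest, warning and continuing on failure."""
    for path in paths:
        try:
            read_one(path)
        except (IOError, OSError) as e:
            if e.errno in (errno.EPERM, errno.EACCES):
                warn('Unable to read manifest due to insufficient '
                     'permissions: {0}'.format(path))
            else:
                warn('Unable to read manifest, unexpected error: {0}'.format(e))
```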

View File

@@ -15,8 +15,6 @@
import spack
import spack.cmd
import spack.cmd.common.arguments as arguments
import spack.config
import spack.environment
import spack.hash_types as ht
import spack.package
import spack.solver.asp as asp
@@ -76,51 +74,6 @@ def setup_parser(subparser):
spack.cmd.common.arguments.add_concretizer_args(subparser)
def _process_result(result, show, required_format, kwargs):
result.raise_if_unsat()
opt, _, _ = min(result.answers)
if ("opt" in show) and (not required_format):
tty.msg("Best of %d considered solutions." % result.nmodels)
tty.msg("Optimization Criteria:")
maxlen = max(len(s[2]) for s in result.criteria)
color.cprint(
"@*{ Priority Criterion %sInstalled ToBuild}" % ((maxlen - 10) * " ")
)
fmt = " @K{%%-8d} %%-%ds%%9s %%7s" % maxlen
for i, (installed_cost, build_cost, name) in enumerate(result.criteria, 1):
color.cprint(
fmt % (
i,
name,
"-" if build_cost is None else installed_cost,
installed_cost if build_cost is None else build_cost,
)
)
print()
# dump the solutions as concretized specs
if 'solutions' in show:
for spec in result.specs:
# With -y, just print YAML to output.
if required_format == 'yaml':
# use write because to_yaml already has a newline.
sys.stdout.write(spec.to_yaml(hash=ht.dag_hash))
elif required_format == 'json':
sys.stdout.write(spec.to_json(hash=ht.dag_hash))
else:
sys.stdout.write(
spec.tree(color=sys.stdout.isatty(), **kwargs))
print()
if result.unsolved_specs and "solutions" in show:
tty.msg("Unsolved specs")
for spec in result.unsolved_specs:
print(spec)
print()
def solve(parser, args):
# these are the same options as `spack spec`
name_fmt = '{namespace}.{name}' if args.namespaces else '{name}'
@@ -149,42 +102,58 @@ def solve(parser, args):
if models < 0:
tty.die("model count must be non-negative: %d")
# Format required for the output (JSON, YAML or None)
required_format = args.format
# If we have an active environment, pick the specs from there
env = spack.environment.active_environment()
if env and args.specs:
msg = "cannot give explicit specs when an environment is active"
raise RuntimeError(msg)
specs = list(env.user_specs) if env else spack.cmd.parse_specs(args.specs)
specs = spack.cmd.parse_specs(args.specs)
# set up solver parameters
# Note: reuse and other concretizer prefs are passed as configuration
solver = asp.Solver()
output = sys.stdout if "asp" in show else None
setup_only = set(show) == {'asp'}
unify = spack.config.get('concretizer:unify')
if unify != 'when_possible':
# set up solver parameters
# Note: reuse and other concretizer prefs are passed as configuration
result = solver.solve(
specs,
out=output,
models=models,
timers=args.timers,
stats=args.stats,
setup_only=setup_only
)
if not setup_only:
_process_result(result, show, required_format, kwargs)
else:
for idx, result in enumerate(solver.solve_in_rounds(
specs, out=output, models=models, timers=args.timers, stats=args.stats
)):
if "solutions" in show:
tty.msg("ROUND {0}".format(idx))
tty.msg("")
result = solver.solve(
specs,
out=output,
models=models,
timers=args.timers,
stats=args.stats,
setup_only=(set(show) == {'asp'})
)
if 'solutions' not in show:
return
# die if no solution was found
result.raise_if_unsat()
# show the solutions as concretized specs
if 'solutions' in show:
opt, _, _ = min(result.answers)
if ("opt" in show) and (not args.format):
tty.msg("Best of %d considered solutions." % result.nmodels)
tty.msg("Optimization Criteria:")
maxlen = max(len(s[2]) for s in result.criteria)
color.cprint(
"@*{ Priority Criterion %sInstalled ToBuild}" % ((maxlen - 10) * " ")
)
fmt = " @K{%%-8d} %%-%ds%%9s %%7s" % maxlen
for i, (installed_cost, build_cost, name) in enumerate(result.criteria, 1):
color.cprint(
fmt % (
i,
name,
"-" if build_cost is None else installed_cost,
installed_cost if build_cost is None else build_cost,
)
)
print()
for spec in result.specs:
# With -y, just print YAML to output.
if args.format == 'yaml':
# use write because to_yaml already has a newline.
sys.stdout.write(spec.to_yaml(hash=ht.dag_hash))
elif args.format == 'json':
sys.stdout.write(spec.to_json(hash=ht.dag_hash))
else:
print("% END ROUND {0}\n".format(idx))
if not setup_only:
_process_result(result, show, required_format, kwargs)
sys.stdout.write(
spec.tree(color=sys.stdout.isatty(), **kwargs))

View File

@@ -80,8 +80,7 @@ def spec(parser, args):
# Use command line specified specs, otherwise try to use environment specs.
if args.specs:
input_specs = spack.cmd.parse_specs(args.specs)
concretized_specs = spack.cmd.parse_specs(args.specs, concretize=True)
specs = list(zip(input_specs, concretized_specs))
specs = [(s, s.concretized()) for s in input_specs]
else:
env = ev.active_environment()
if env:

View File

@@ -24,7 +24,7 @@
# tutorial configuration parameters
tutorial_branch = "releases/v0.18"
tutorial_branch = "releases/v0.17"
tutorial_mirror = "file:///mirror"
tutorial_key = os.path.join(spack.paths.share_path, "keys", "tutorial.pub")

View File

@@ -252,13 +252,6 @@ def find_new_compilers(path_hints=None, scope=None):
merged configuration.
"""
compilers = find_compilers(path_hints)
return select_new_compilers(compilers, scope)
def select_new_compilers(compilers, scope=None):
"""Given a list of compilers, remove those that are already defined in
the configuration.
"""
compilers_not_in_config = []
for c in compilers:
arch_spec = spack.spec.ArchSpec((None, c.operating_system, c.target))

View File

@@ -105,12 +105,15 @@
'build_stage': '$tempdir/spack-stage',
'concretizer': 'clingo',
'license_dir': spack.paths.default_license_dir,
'flags': {
'keep_werror': 'none',
},
}
}
#: metavar to use for commands that accept scopes
#: this is shorter and more readable than listing all choices
scopes_metavar = '{defaults,system,site,user}[/PLATFORM] or env:ENVIRONMENT'
scopes_metavar = '{defaults,system,site,user}[/PLATFORM]'
#: Base name for the (internal) overrides scope.
overrides_base_name = 'overrides-'

View File

@@ -171,12 +171,11 @@ def strip(self):
def paths(self):
"""Important paths in the image"""
Paths = collections.namedtuple('Paths', [
'environment', 'store', 'hidden_view', 'view'
'environment', 'store', 'view'
])
return Paths(
environment='/opt/spack-environment',
store='/opt/software',
hidden_view='/opt/._view',
view='/opt/view'
)

View File

@@ -20,7 +20,6 @@
compiler_name_translation = {
'nvidia': 'nvhpc',
'rocm': 'rocmcc',
}
@@ -39,6 +38,10 @@ def translated_compiler_name(manifest_compiler_name):
elif manifest_compiler_name in spack.compilers.supported_compilers():
return manifest_compiler_name
else:
# Try to fail quickly. This can occur in two cases: (1) the compiler
# definition is invalid, or (2) a spec specifies a compiler that doesn't
# exist. The first is caught when creating the compiler definition; the
# second results in Specs with associated undefined compilers.
raise spack.compilers.UnknownCompilerError(
"Manifest parsing - unknown compiler: {0}"
.format(manifest_compiler_name))
@@ -182,8 +185,6 @@ def read(path, apply_updates):
tty.debug("{0}: {1} compilers read from manifest".format(
path,
str(len(compilers))))
# Filter out the compilers that already appear in the configuration
compilers = spack.compilers.select_new_compilers(compilers)
if apply_updates and compilers:
spack.compilers.add_compilers_to_config(
compilers, init_config=False)

View File

@@ -628,7 +628,7 @@ def __init__(self, path, init_file=None, with_view=None, keep_relative=False):
# This attribute will be set properly from configuration
# during concretization
self.unify = None
self.concretization = None
self.clear()
if init_file:
@@ -772,14 +772,10 @@ def _read_manifest(self, f, raw_yaml=None):
# Retrieve the current concretization strategy
configuration = config_dict(self.yaml)
# Let `concretization` overrule `concretizer:unify` config for now,
# but use a translation table to have internally a representation
# as if we were using the new configuration
translation = {'separately': False, 'together': True}
try:
self.unify = translation[configuration['concretization']]
except KeyError:
self.unify = spack.config.get('concretizer:unify', False)
# Let `concretization` overrule `concretizer:unify` config for now.
unify = spack.config.get('concretizer:unify')
self.concretization = configuration.get(
'concretization', 'together' if unify else 'separately')
# Retrieve dev-build packages:
self.dev_specs = configuration.get('develop', {})
@@ -1160,44 +1156,14 @@ def concretize(self, force=False, tests=False):
self.specs_by_hash = {}
# Pick the right concretization strategy
if self.unify == 'when_possible':
return self._concretize_together_where_possible(tests=tests)
if self.unify is True:
if self.concretization == 'together':
return self._concretize_together(tests=tests)
if self.unify is False:
if self.concretization == 'separately':
return self._concretize_separately(tests=tests)
msg = 'concretization strategy not implemented [{0}]'
raise SpackEnvironmentError(msg.format(self.unify))
def _concretize_together_where_possible(self, tests=False):
# Avoid cyclic dependency
import spack.solver.asp
# Exit early if the set of concretized specs is the set of user specs
user_specs_did_not_change = not bool(
set(self.user_specs) - set(self.concretized_user_specs)
)
if user_specs_did_not_change:
return []
# Proceed with concretization
self.concretized_user_specs = []
self.concretized_order = []
self.specs_by_hash = {}
result_by_user_spec = {}
solver = spack.solver.asp.Solver()
for result in solver.solve_in_rounds(self.user_specs, tests=tests):
result_by_user_spec.update(result.specs_by_input)
result = []
for abstract, concrete in sorted(result_by_user_spec.items()):
self._add_concrete_spec(abstract, concrete)
result.append((abstract, concrete))
return result
raise SpackEnvironmentError(msg.format(self.concretization))
def _concretize_together(self, tests=False):
"""Concretization strategy that concretizes all the specs
@@ -1350,7 +1316,7 @@ def concretize_and_add(self, user_spec, concrete_spec=None, tests=False):
concrete_spec: if provided, then it is assumed that it is the
result of concretizing the provided ``user_spec``
"""
if self.unify is True:
if self.concretization == 'together':
msg = 'cannot install a single spec in an environment that is ' \
'configured to be concretized together. Run instead:\n\n' \
' $ spack add <spec>\n' \
@@ -1612,10 +1578,9 @@ def install_specs(self, specs=None, **install_args):
# ensure specs already installed are marked explicit
all_specs = specs or [cs for _, cs in self.concretized_specs()]
specs_installed = [s for s in all_specs if s.installed]
if specs_installed:
with spack.store.db.write_transaction(): # do all in one transaction
for spec in specs_installed:
spack.store.db.update_explicit(spec, True)
with spack.store.db.write_transaction(): # do all in one transaction
for spec in specs_installed:
spack.store.db.update_explicit(spec, True)
if not specs_to_install:
tty.msg('All of the packages are already installed')
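The translation table in `_read_manifest` above maps the legacy per-environment `concretization` key onto the newer `unify` setting, with the global config as fallback. A sketch of that precedence:

```python
def effective_unify(manifest, global_unify):
    """Legacy 'concretization' in the manifest overrules concretizer:unify."""
    translation = {'separately': False, 'together': True}
    try:
        return translation[manifest['concretization']]
    except KeyError:
        return global_unify

assert effective_unify({'concretization': 'together'}, False) is True
assert effective_unify({}, 'when_possible') == 'when_possible'
```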

View File

@@ -327,7 +327,9 @@ def fetch(self):
continue
try:
self._fetch_from_url(url)
partial_file, save_file = self._fetch_from_url(url)
if save_file and (partial_file is not None):
llnl.util.filesystem.rename(partial_file, save_file)
break
except FailedDownloadError as e:
errors.append(str(e))
@@ -377,7 +379,9 @@ def _check_headers(self, headers):
@_needs_stage
def _fetch_urllib(self, url):
save_file = self.stage.save_filename
save_file = None
if self.stage.save_filename:
save_file = self.stage.save_filename
tty.msg('Fetching {0}'.format(url))
# Run urllib but grab the mime type from the http headers
@@ -387,18 +391,16 @@ def _fetch_urllib(self, url):
# clean up archive on failure.
if self.archive_file:
os.remove(self.archive_file)
if os.path.lexists(save_file):
if save_file and os.path.exists(save_file):
os.remove(save_file)
msg = 'urllib failed to fetch with error {0}'.format(e)
raise FailedDownloadError(url, msg)
if os.path.lexists(save_file):
os.remove(save_file)
with open(save_file, 'wb') as _open_file:
shutil.copyfileobj(response, _open_file)
self._check_headers(str(headers))
return None, save_file
@_needs_stage
def _fetch_curl(self, url):
@@ -459,7 +461,7 @@ def _fetch_curl(self, url):
if self.archive_file:
os.remove(self.archive_file)
if partial_file and os.path.lexists(partial_file):
if partial_file and os.path.exists(partial_file):
os.remove(partial_file)
if curl.returncode == 22:
@@ -486,9 +488,7 @@ def _fetch_curl(self, url):
"Curl failed with error %d" % curl.returncode)
self._check_headers(headers)
if save_file and (partial_file is not None):
os.rename(partial_file, save_file)
return partial_file, save_file
@property # type: ignore # decorated properties unsupported in mypy
@_needs_stage
@@ -642,7 +642,7 @@ def fetch(self):
# remove old symlink if one is there.
filename = self.stage.save_filename
if os.path.lexists(filename):
if os.path.exists(filename):
os.remove(filename)
# Symlink to local cached archive.
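The `partial_file`/`save_file` pair threaded through the fetchers above implements download-then-rename: write to a temporary name and move it into place only on success, so an interrupted transfer never leaves a truncated file at the final path. A sketch of the pattern (`download` is a hypothetical callable):

```python
import os

def fetch_to(url, save_file, download):
    """Download to '<save_file>.part', renaming into place only on success."""
    partial = save_file + '.part'
    try:
        download(url, partial)
    except Exception:
        if os.path.lexists(partial):
            os.remove(partial)     # never leave a truncated partial file behind
        raise
    os.rename(partial, save_file)  # atomic within one filesystem on POSIX
```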

View File

@@ -350,7 +350,7 @@ def _process_external_package(pkg, explicit):
def _process_binary_cache_tarball(pkg, binary_spec, explicit, unsigned,
mirrors_for_spec=None):
preferred_mirrors=None):
"""
Process the binary cache tarball.
@@ -360,17 +360,17 @@ def _process_binary_cache_tarball(pkg, binary_spec, explicit, unsigned,
explicit (bool): the package was explicitly requested by the user
unsigned (bool): ``True`` if binary package signatures are not to be
checked, otherwise ``False``
mirrors_for_spec (list): Optional list of concrete specs and mirrors
obtained by calling binary_distribution.get_mirrors_for_spec().
preferred_mirrors (list): Optional list of URLs to prefer when
attempting to download the tarball
Return:
bool: ``True`` if the package was extracted from binary cache,
else ``False``
"""
download_result = binary_distribution.download_tarball(
binary_spec, unsigned, mirrors_for_spec=mirrors_for_spec)
tarball = binary_distribution.download_tarball(
binary_spec, preferred_mirrors=preferred_mirrors)
# see #10063 : install from source if tarball doesn't exist
if download_result is None:
if tarball is None:
tty.msg('{0} exists in binary cache but with different hash'
.format(pkg.name))
return False
@@ -380,9 +380,9 @@ def _process_binary_cache_tarball(pkg, binary_spec, explicit, unsigned,
# don't print long padded paths while extracting/relocating binaries
with spack.util.path.filter_padding():
binary_distribution.extract_tarball(binary_spec, download_result,
allow_root=False, unsigned=unsigned,
force=False)
binary_distribution.extract_tarball(
binary_spec, tarball, allow_root=False, unsigned=unsigned, force=False
)
pkg.installed_from_binary_cache = True
spack.store.db.add(pkg.spec, spack.store.layout, explicit=explicit)
@@ -406,8 +406,12 @@ def _try_install_from_binary_cache(pkg, explicit, unsigned=False):
if not matches:
return False
return _process_binary_cache_tarball(pkg, pkg.spec, explicit, unsigned,
mirrors_for_spec=matches)
# In the absence of guidance from user or some other reason to prefer one
# mirror over another, any match will suffice, so just pick the first one.
preferred_mirrors = [match['mirror_url'] for match in matches]
binary_spec = matches[0]['spec']
return _process_binary_cache_tarball(pkg, binary_spec, explicit, unsigned,
preferred_mirrors=preferred_mirrors)
def clear_failures():
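A condensed sketch of the selection above: with no reason to prefer one mirror over another, every matching mirror URL is handed to the downloader in order, and a missing tarball (e.g. a hash mismatch, see #10063) falls back to a source build. `download` and `extract` are hypothetical stand-ins:

```python
def try_binary_install(spec, matches, download, extract):
    """Try every matching mirror before falling back to a source build."""
    if not matches:
        return False
    preferred_mirrors = [m['mirror_url'] for m in matches]
    tarball = download(spec, preferred_mirrors)
    if tarball is None:   # no usable tarball on any mirror: see #10063
        return False      # caller falls back to building from source
    extract(spec, tarball)
    return True
```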

View File

@@ -4,7 +4,6 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import platform as py_platform
import re
from subprocess import check_output
from spack.version import Version
@@ -52,17 +51,6 @@ def __init__(self):
if 'ubuntu' in distname:
version = '.'.join(version[0:2])
# openSUSE Tumbleweed is a rolling release which can change
# more than once in a week, so set version to tumbleweed$GLIBVERS
elif 'opensuse-tumbleweed' in distname or 'opensusetumbleweed' in distname:
distname = 'opensuse'
output = check_output(["ldd", "--version"]).decode()
libcvers = re.findall(r'ldd \(GNU libc\) (.*)', output)
if len(libcvers) == 1:
version = 'tumbleweed' + libcvers[0]
else:
version = 'tumbleweed' + version[0]
else:
version = version[0]

View File

@@ -3,7 +3,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from io import BufferedReader, IOBase
from io import BufferedReader
import six
import six.moves.urllib.error as urllib_error
@@ -23,15 +23,11 @@
# https://github.com/python/cpython/pull/3249
class WrapStream(BufferedReader):
def __init__(self, raw):
# In botocore >=1.23.47, StreamingBody inherits from IOBase, so we
# only add missing attributes in older versions.
# https://github.com/boto/botocore/commit/a624815eabac50442ed7404f3c4f2664cd0aa784
if not isinstance(raw, IOBase):
raw.readable = lambda: True
raw.writable = lambda: False
raw.seekable = lambda: False
raw.closed = False
raw.flush = lambda: None
raw.readable = lambda: True
raw.writable = lambda: False
raw.seekable = lambda: False
raw.closed = False
raw.flush = lambda: None
super(WrapStream, self).__init__(raw)
def detach(self):

View File

@@ -9,10 +9,12 @@
'type': 'object',
'properties': {
'name': {'type': 'string'},
'metadata': {'type': 'string'}
'description': {'type': 'string'},
'type': {'type': 'string'},
'info': {'type': 'object'}
},
'additionalProperties': False,
'required': ['name', 'metadata']
'required': ['name', 'description', 'type']
}
properties = {

View File

@@ -37,6 +37,5 @@
'hash': {'type': 'string'},
},
},
'buildcache_layout_version': {'type': 'number'}
},
}

View File

@@ -15,6 +15,7 @@
'additionalProperties': False,
'properties': {
'reuse': {'type': 'boolean'},
'minimal': {'type': 'boolean'},
'targets': {
'type': 'object',
'properties': {
@@ -26,10 +27,12 @@
}
},
'unify': {
'oneOf': [
{'type': 'boolean'},
{'type': 'string', 'enum': ['when_possible']}
]
'type': 'boolean'
# Todo: add when_possible.
# 'oneOf': [
# {'type': 'boolean'},
# {'type': 'string', 'enum': ['when_possible']}
# ]
}
}
}

View File

@@ -91,7 +91,16 @@
'additional_external_search_paths': {
'type': 'array',
'items': {'type': 'string'}
}
},
'flags': {
'type': 'object',
'properties': {
'keep_werror': {
'type': 'string',
'enum': ['all', 'specific', 'none'],
},
},
},
},
'deprecatedProperties': {
'properties': ['module_roots'],

View File

@@ -26,9 +26,6 @@
"cpe-version": {"type": "string", "minLength": 1},
"system-type": {"type": "string", "minLength": 1},
"schema-version": {"type": "string", "minLength": 1},
# Older schemas did not have "cpe-version", just the
# schema version; in that case it was just called "version"
"version": {"type": "string", "minLength": 1},
}
},
"compilers": {

View File

@@ -110,7 +110,6 @@
},
},
'service-job-attributes': runner_selector_schema,
'signing-job-attributes': runner_selector_schema,
'rebuild-index': {'type': 'boolean'},
'broken-specs-url': {'type': 'string'},
},

View File

@@ -98,20 +98,6 @@ def getter(node):
# Below numbers are used to map names of criteria to the order
# they appear in the solution. See concretize.lp
# The space of possible priorities for optimization targets
# is partitioned in the following ranges:
#
# [0-100) Optimization criteria for software being reused
# [100-200) Fixed criteria that are higher priority than reuse, but lower than build
# [200-300) Optimization criteria for software being built
# [300-1000) High-priority fixed criteria
# [1000-inf) Error conditions
#
# Each optimization target is a minimization with optimal value 0.
#: High fixed priority offset for criteria that supersede all build criteria
high_fixed_priority_offset = 300
#: Priority offset for "build" criteria (regular criteria shifted to
#: higher priority for specs we have to build)
build_priority_offset = 200
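The comment block above documents how `concretize.lp` partitions the optimization priority space. A tiny classifier restating those ranges (boundaries taken from the comment, not from the ASP program itself):

```python
def classify_priority(priority):
    """Name the optimization band a concretize.lp priority falls in."""
    if priority >= 1000:
        return 'error condition'
    if priority >= 300:
        return 'high-priority fixed'
    if priority >= 200:
        return 'build criterion'
    if priority >= 100:
        return 'fixed (above reuse, below build)'
    return 'reuse criterion'

assert classify_priority(250) == 'build criterion'
assert classify_priority(50) == 'reuse criterion'
```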
@@ -126,7 +112,6 @@ def build_criteria_names(costs, tuples):
priorities_names = []
num_fixed = 0
num_high_fixed = 0
for pred, args in tuples:
if pred != "opt_criterion":
continue
@@ -143,8 +128,6 @@ def build_criteria_names(costs, tuples):
if priority < fixed_priority_offset:
build_priority = priority + build_priority_offset
priorities_names.append((build_priority, name))
elif priority >= high_fixed_priority_offset:
num_high_fixed += 1
else:
num_fixed += 1
@@ -158,26 +141,19 @@ def build_criteria_names(costs, tuples):
# split list into three parts: build criteria, fixed criteria, non-build criteria
num_criteria = len(priorities_names)
num_build = (num_criteria - num_fixed - num_high_fixed) // 2
num_build = (num_criteria - num_fixed) // 2
build_start_idx = num_high_fixed
fixed_start_idx = num_high_fixed + num_build
installed_start_idx = num_high_fixed + num_build + num_fixed
high_fixed = priorities_names[:build_start_idx]
build = priorities_names[build_start_idx:fixed_start_idx]
fixed = priorities_names[fixed_start_idx:installed_start_idx]
installed = priorities_names[installed_start_idx:]
build = priorities_names[:num_build]
fixed = priorities_names[num_build:num_build + num_fixed]
installed = priorities_names[num_build + num_fixed:]
# mapping from priority to index in cost list
indices = dict((p, i) for i, (p, n) in enumerate(priorities_names))
# make a list that has each name with its build and non-build costs
criteria = [(cost, None, name) for cost, (p, name) in
zip(costs[:build_start_idx], high_fixed)]
criteria += [(cost, None, name) for cost, (p, name) in
zip(costs[fixed_start_idx:installed_start_idx], fixed)]
criteria = [
(costs[p - fixed_priority_offset + num_build], None, name) for p, name in fixed
]
for (i, name), (b, _) in zip(installed, build):
criteria.append((costs[indices[i]], costs[indices[b]], name))
@@ -330,9 +306,7 @@ def __init__(self, specs, asp=None):
self.abstract_specs = specs
# Concrete specs
self._concrete_specs_by_input = None
self._concrete_specs = None
self._unsolved_specs = None
def format_core(self, core):
"""
@@ -429,32 +403,15 @@ def specs(self):
"""List of concretized specs satisfying the initial
abstract request.
"""
if self._concrete_specs is None:
self._compute_specs_from_answer_set()
return self._concrete_specs
# The specs were already computed, return them
if self._concrete_specs:
return self._concrete_specs
@property
def unsolved_specs(self):
"""List of abstract input specs that were not solved."""
if self._unsolved_specs is None:
self._compute_specs_from_answer_set()
return self._unsolved_specs
# Assert prerequisite
msg = 'cannot compute specs ["satisfiable" is not True ]'
assert self.satisfiable, msg
@property
def specs_by_input(self):
if self._concrete_specs_by_input is None:
self._compute_specs_from_answer_set()
return self._concrete_specs_by_input
def _compute_specs_from_answer_set(self):
if not self.satisfiable:
self._concrete_specs = []
self._unsolved_specs = self.abstract_specs
self._concrete_specs_by_input = {}
return
self._concrete_specs, self._unsolved_specs = [], []
self._concrete_specs_by_input = {}
self._concrete_specs = []
best = min(self.answers)
opt, _, answer = best
for input_spec in self.abstract_specs:
@@ -463,13 +420,10 @@ def _compute_specs_from_answer_set(self):
providers = [spec.name for spec in answer.values()
if spec.package.provides(key)]
key = providers[0]
candidate = answer.get(key)
if candidate and candidate.satisfies(input_spec):
self._concrete_specs.append(answer[key])
self._concrete_specs_by_input[input_spec] = answer[key]
else:
self._unsolved_specs.append(input_spec)
self._concrete_specs.append(answer[key])
return self._concrete_specs
def _normalize_packages_yaml(packages_yaml):
@@ -566,7 +520,6 @@ def solve(
setup,
specs,
nmodels=0,
reuse=None,
timers=False,
stats=False,
out=None,
@@ -577,8 +530,7 @@ def solve(
Arguments:
setup (SpackSolverSetup): An object to set up the ASP problem.
specs (list): List of ``Spec`` objects to solve for.
nmodels (int): Number of models to consider (default 0 for unlimited).
reuse (None or list): list of concrete specs that can be reused
timers (bool): Print out coarse timers for different solve phases.
stats (bool): Whether to output Clingo's internal solver statistics.
out: Optional output stream for the generated ASP program.
@@ -602,7 +554,7 @@ def solve(
self.assumptions = []
with self.control.backend() as backend:
self.backend = backend
setup.setup(self, specs, reuse=reuse)
setup.setup(self, specs)
timer.phase("setup")
# read in the main ASP program and display logic -- these are
@@ -621,7 +573,6 @@ def visit(node):
arg = ast_sym(ast_sym(term.atom).arguments[0])
self.fact(AspFunction(name)(arg.string))
self.h1("Error messages")
path = os.path.join(parent_dir, 'concretize.lp')
parse_files([path], visit)
@@ -631,7 +582,6 @@ def visit(node):
# Load the file itself
self.control.load(os.path.join(parent_dir, 'concretize.lp'))
self.control.load(os.path.join(parent_dir, "os_facts.lp"))
self.control.load(os.path.join(parent_dir, "display.lp"))
timer.phase("load")
@@ -672,7 +622,7 @@ def stringify(x):
if result.satisfiable:
# build spec from the best model
builder = SpecBuilder(specs, reuse=reuse)
builder = SpecBuilder(specs)
min_cost, best_model = min(models)
tuples = [
(sym.name, [stringify(a) for a in sym.arguments])
@@ -704,7 +654,7 @@ def stringify(x):
class SpackSolverSetup(object):
"""Class to set up and run a Spack concretization solve."""
def __init__(self, tests=False):
def __init__(self, reuse=None, minimal=None, tests=False):
self.gen = None # set by setup()
self.declared_versions = {}
@@ -717,7 +667,6 @@ def __init__(self, tests=False):
self.variant_values_from_specs = set()
self.version_constraints = set()
self.target_constraints = set()
self.default_targets = []
self.compiler_version_constraints = set()
self.post_facts = []
@@ -730,12 +679,13 @@ def __init__(self, tests=False):
# Caches to optimize the setup phase of the solver
self.target_specs_cache = None
# whether to add installed/binary hashes to the solve
# Solver parameters that affect setup -- see Solver documentation
self.reuse = spack.config.get(
"concretizer:reuse", False) if reuse is None else reuse
self.minimal = spack.config.get(
"concretizer:minimal", False) if minimal is None else minimal
self.tests = tests
# If False, input specs are allowed to remain unsolved
self.concretize_everything = True
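The two `spack.config.get` calls above implement a simple precedence: an explicit constructor argument wins over `concretizer.yaml`, which wins over the built-in default. A minimal sketch of that pattern (the helper is illustrative, not Spack's API):

```python
import spack.config


def resolved(explicit, key, default=False):
    """Explicit argument > configuration value > built-in default."""
    if explicit is not None:
        return explicit
    return spack.config.get(key, default)

# resolved(None, "concretizer:minimal") reads the configuration;
# resolved(True, "concretizer:minimal") ignores it.
```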
def pkg_version_rules(self, pkg):
"""Output declared versions of a package.
@@ -749,13 +699,7 @@ def key_fn(version):
pkg = packagize(pkg)
declared_versions = self.declared_versions[pkg.name]
partially_sorted_versions = sorted(set(declared_versions), key=key_fn)
most_to_least_preferred = []
for _, group in itertools.groupby(partially_sorted_versions, key=key_fn):
most_to_least_preferred.extend(list(sorted(
group, reverse=True, key=lambda x: spack.version.ver(x.version)
)))
most_to_least_preferred = sorted(declared_versions, key=key_fn)
for weight, declared_version in enumerate(most_to_least_preferred):
self.gen.fact(fn.version_declared(
@@ -884,7 +828,7 @@ def package_compiler_defaults(self, pkg):
pkg.name, cspec.name, cspec.version, -i * 100
))
def pkg_rules(self, pkg, tests):
def pkg_rules(self, pkg):
pkg = packagize(pkg)
# versions
@@ -1172,26 +1116,24 @@ def preferred_variants(self, pkg_name):
pkg_name, variant.name, value
))
def target_preferences(self, pkg_name):
def preferred_targets(self, pkg_name):
key_fn = spack.package_prefs.PackagePrefs(pkg_name, 'target')
if not self.target_specs_cache:
self.target_specs_cache = [
spack.spec.Spec('target={0}'.format(target_name))
for _, target_name in self.default_targets
for target_name in archspec.cpu.TARGETS
]
package_targets = self.target_specs_cache[:]
package_targets.sort(key=key_fn)
target_specs = self.target_specs_cache
preferred_targets = [x for x in target_specs if key_fn(x) < 0]
if not preferred_targets:
return
offset = 0
best_default = self.default_targets[0][1]
for i, preferred in enumerate(package_targets):
if str(preferred.architecture.target) == best_default and i != 0:
offset = 100
self.gen.fact(fn.target_weight(
pkg_name, str(preferred.architecture.target), i + offset
))
preferred = preferred_targets[0]
self.gen.fact(fn.package_target_weight(
str(preferred.architecture.target), pkg_name, -30
))
def flag_defaults(self):
self.gen.h2("Compiler flag defaults")
@@ -1582,7 +1524,7 @@ def target_defaults(self, specs):
compiler.name, compiler.version, uarch.family.name
))
i = 0 # TODO compute per-target offset?
i = 0
for target in candidate_targets:
self.gen.fact(fn.target(target.name))
self.gen.fact(fn.target_family(target.name, target.family.name))
@@ -1591,15 +1533,12 @@ def target_defaults(self, specs):
# prefer best possible targets; weight others poorly so
# they're not used unless set explicitly
# these are stored to be generated as facts later, offset by the
# number of preferred targets
if target.name in best_targets:
self.default_targets.append((i, target.name))
self.gen.fact(fn.default_target_weight(target.name, i))
i += 1
else:
self.default_targets.append((100, target.name))
self.gen.fact(fn.default_target_weight(target.name, 100))
self.default_targets = list(sorted(set(self.default_targets)))
self.gen.newline()
def virtual_providers(self):
@@ -1799,7 +1738,32 @@ def define_concrete_input_specs(self, specs, possible):
if spec.concrete:
self._facts_from_concrete_spec(spec, possible)
def setup(self, driver, specs, reuse=None):
def define_installed_packages(self, specs, possible):
"""Add facts about all specs already in the database.
Arguments:
possible (dict): result of Package.possible_dependencies() for
specs in this solve.
"""
# Specs from local store
with spack.store.db.read_transaction():
for spec in spack.store.db.query(installed=True):
if not spec.satisfies('dev_path=*'):
self._facts_from_concrete_spec(spec, possible)
# Specs from configured buildcaches
try:
index = spack.binary_distribution.update_cache_and_get_specs()
for spec in index:
if not spec.satisfies('dev_path=*'):
self._facts_from_concrete_spec(spec, possible)
except (spack.binary_distribution.FetchCacheError, IndexError):
# this is raised when no mirrors had indices.
# TODO: update mirror configuration so it can indicate that the source cache
# TODO: (or any mirror really) doesn't have binaries.
pass
def setup(self, driver, specs):
"""Generate an ASP program with relevant constraints for specs.
This calls methods on the solve driver to set up the problem with
@@ -1807,9 +1771,7 @@ def setup(self, driver, specs, reuse=None):
specs, as well as constraints from the specs themselves.
Arguments:
driver (PyclingoDriver): driver instance of this solve
specs (list): list of Specs to solve
reuse (None or list): list of concrete specs that can be reused
"""
self._condition_id_counter = itertools.count()
@@ -1848,11 +1810,15 @@ def setup(self, driver, specs, reuse=None):
self.gen.h1("Concrete input spec definitions")
self.define_concrete_input_specs(specs, possible)
if reuse:
self.gen.h1("Reusable specs")
self.gen.h1("Concretizer options")
if self.reuse:
self.gen.fact(fn.optimize_for_reuse())
for reusable_spec in reuse:
self._facts_from_concrete_spec(reusable_spec, possible)
if self.minimal:
self.gen.fact(fn.minimal_installs())
if self.reuse:
self.gen.h1("Installed packages")
self.define_installed_packages(specs, possible)
self.gen.h1('General Constraints')
self.available_compilers()
@@ -1872,25 +1838,32 @@ def setup(self, driver, specs, reuse=None):
self.gen.h1('Package Constraints')
for pkg in sorted(pkgs):
self.gen.h2('Package rules: %s' % pkg)
self.pkg_rules(pkg, tests=self.tests)
self.pkg_rules(pkg)
self.gen.h2('Package preferences: %s' % pkg)
self.preferred_variants(pkg)
self.target_preferences(pkg)
self.preferred_targets(pkg)
# Inject dev_path from environment
env = ev.active_environment()
if env:
for name, info in env.dev_specs.items():
dev_spec = spack.spec.Spec(info['spec'])
dev_spec.constrain(
'dev_path=%s' % spack.util.path.canonicalize_path(info['path'])
)
self.condition(spack.spec.Spec(name), dev_spec,
msg="%s is a develop spec" % name)
for spec in sorted(specs):
for dep in spec.traverse():
_develop_specs_from_env(dep, env)
self.gen.h1('Spec Constraints')
self.literal_specs(specs)
for spec in sorted(specs):
self.gen.h2('Spec: %s' % str(spec))
self.gen.fact(
fn.virtual_root(spec.name) if spec.virtual
else fn.root(spec.name)
)
for clause in self.spec_clauses(spec):
self.gen.fact(clause)
if clause.name == 'variant_set':
self.gen.fact(
fn.variant_default_value_from_cli(*clause.args)
)
self.gen.h1("Variant Values defined in specs")
self.define_variant_values()
@@ -1907,47 +1880,45 @@ def setup(self, driver, specs, reuse=None):
self.gen.h1("Target Constraints")
self.define_target_constraints()
def literal_specs(self, specs):
for idx, spec in enumerate(specs):
self.gen.h2('Spec: %s' % str(spec))
self.gen.fact(fn.literal(idx))
root_fn = fn.virtual_root(spec.name) if spec.virtual else fn.root(spec.name)
self.gen.fact(fn.literal(idx, root_fn.name, *root_fn.args))
for clause in self.spec_clauses(spec):
self.gen.fact(fn.literal(idx, clause.name, *clause.args))
if clause.name == 'variant_set':
self.gen.fact(fn.literal(
idx, "variant_default_value_from_cli", *clause.args
))
if self.concretize_everything:
self.gen.fact(fn.concretize_everything())
class SpecBuilder(object):
"""Class with actions to rebuild a spec from ASP results."""
#: Attributes that don't need actions
ignored_attributes = ["opt_criterion"]
def __init__(self, specs, reuse=None):
def __init__(self, specs):
self._specs = {}
self._result = None
self._command_line_specs = specs
self._flag_sources = collections.defaultdict(lambda: set())
self._flag_compiler_defaults = set()
# Reusable specs are passed in as arguments; during reconstruction
# they are plugged back in from this dictionary
self._hash_lookup = {}
if reuse is not None:
for spec in reuse:
for node in spec.traverse():
self._hash_lookup.setdefault(node.dag_hash(), node)
def hash(self, pkg, h):
if pkg not in self._specs:
self._specs[pkg] = self._hash_lookup[h]
try:
# try to get the candidate from the store
concrete_spec = spack.store.db.get_by_hash(h)[0]
except TypeError:
# the dag hash was not in the DB, try buildcache
s = spack.binary_distribution.binary_index.find_by_hash(h)
if s:
concrete_spec = s[0]['spec']
else:
# last attempt: maybe the hash comes from a particular input spec
# this only occurs in tests (so far)
for clspec in self._command_line_specs:
for spec in clspec.traverse():
if spec.concrete and spec.dag_hash() == h:
concrete_spec = spec
assert concrete_spec, "Unable to look up concrete spec with hash %s" % h
self._specs[pkg] = concrete_spec
else:
# TODO: remove this code -- it's dead unless we decide that node() clauses
# should come before hashes.
# ensure that if it's already there, it's correct
spec = self._specs[pkg]
assert spec.dag_hash() == h
def node(self, pkg):
if pkg not in self._specs:
@@ -2217,7 +2188,7 @@ def _develop_specs_from_env(spec, env):
class Solver(object):
"""This is the main external interface class for solving.
It manages solver configuration and preferences in one place. It sets up the solve
and passes the setup method to the driver, as well.
Properties of interest:
@@ -2225,6 +2196,10 @@ class Solver(object):
``reuse (bool)``
Whether to try to reuse existing installs/binaries
``minimal (bool)``
If ``True`` make minimizing nodes the top priority, even higher
than defaults from packages and preferences.
"""
def __init__(self):
self.driver = PyclingoDriver()
@@ -2232,42 +2207,7 @@ def __init__(self):
# These properties are settable via spack configuration, and overridable
# by setting them directly as properties.
self.reuse = spack.config.get("concretizer:reuse", False)
@staticmethod
def _check_input_and_extract_concrete_specs(specs):
reusable = []
for root in specs:
for s in root.traverse():
if s.virtual:
continue
if s.concrete:
reusable.append(s)
spack.spec.Spec.ensure_valid_variants(s)
return reusable
def _reusable_specs(self):
reusable_specs = []
if self.reuse:
# Specs from the local Database
with spack.store.db.read_transaction():
reusable_specs.extend([
s for s in spack.store.db.query(installed=True)
if not s.satisfies('dev_path=*')
])
# Specs from buildcaches
try:
index = spack.binary_distribution.update_cache_and_get_specs()
reusable_specs.extend([
s for s in index if not s.satisfies('dev_path=*')
])
except (spack.binary_distribution.FetchCacheError, IndexError):
# this is raised when no mirrors had indices.
# TODO: update mirror configuration so it can indicate that the
# TODO: source cache (or any mirror really) doesn't have binaries.
pass
return reusable_specs
self.minimal = spack.config.get("concretizer:minimal", False)
def solve(
self,
@@ -2292,78 +2232,23 @@ def solve(
setup_only (bool): if True, stop after setup and don't solve (default False).
"""
# Check upfront that the variants are admissible
reusable_specs = self._check_input_and_extract_concrete_specs(specs)
reusable_specs.extend(self._reusable_specs())
setup = SpackSolverSetup(tests=tests)
for root in specs:
for s in root.traverse():
if s.virtual:
continue
spack.spec.Spec.ensure_valid_variants(s)
setup = SpackSolverSetup(reuse=self.reuse, minimal=self.minimal, tests=tests)
return self.driver.solve(
setup,
specs,
nmodels=models,
reuse=reusable_specs,
timers=timers,
stats=stats,
out=out,
setup_only=setup_only,
)
def solve_in_rounds(
self,
specs,
out=None,
models=0,
timers=False,
stats=False,
tests=False,
):
"""Solve for a stable model of specs in multiple rounds.
This relaxes the assumption of solve that everything must be consistent and
solvable in a single round. Each round tries to maximize the reuse of specs
from previous rounds.
The function is a generator that yields the result of each round.
Arguments:
specs (list): list of Specs to solve.
models (int): number of models to search (default: 0)
out: Optionally write the generate ASP program to a file-like object.
timers (bool): print timing if set to True
stats (bool): print internal statistics if set to True
tests (bool): add test dependencies to the solve
"""
reusable_specs = self._check_input_and_extract_concrete_specs(specs)
reusable_specs.extend(self._reusable_specs())
setup = SpackSolverSetup(tests=tests)
# Tell clingo that we don't have to solve all the inputs at once
setup.concretize_everything = False
input_specs = specs
while True:
result = self.driver.solve(
setup,
input_specs,
nmodels=models,
reuse=reusable_specs,
timers=timers,
stats=stats,
out=out,
setup_only=False
)
yield result
# If we don't have unsolved specs we are done
if not result.unsolved_specs:
break
# This means we cannot progress with solving the input
if not result.satisfiable or not result.specs:
break
input_specs = result.unsolved_specs
for spec in result.specs:
reusable_specs.extend(spec.traverse())
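A sketch of consuming the generator, following the docstring above (illustrative; assumes a `solver` constructed as elsewhere in this file):

```python
for result in solver.solve_in_rounds([spack.spec.Spec("hdf5")]):
    if not result.satisfiable:
        break  # no further progress is possible
    for spec in result.specs:
        print(spec.format("{name}{@version}"))
```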
class UnsatisfiableSpecError(spack.error.UnsatisfiableSpecError):
"""


@@ -7,52 +7,6 @@
% This logic program implements Spack's concretizer
%=============================================================================
%-----------------------------------------------------------------------------
% Map literal input specs to facts that drive the solve
%-----------------------------------------------------------------------------
% Give clingo the choice to solve an input spec or not
{ literal_solved(ID) } :- literal(ID).
literal_not_solved(ID) :- not literal_solved(ID), literal(ID).
% If concretize_everything() is a fact, then we cannot have unsolved specs
:- literal_not_solved(ID), concretize_everything.
% Make a problem with "zero literals solved" unsat. This is to trigger
% looking for solutions to the ASP problem with "errors", which results
% in better reporting for users. See #30669 for details.
1 { literal_solved(ID) : literal(ID) }.
opt_criterion(300, "number of input specs not concretized").
#minimize{ 0@300: #true }.
#minimize { 1@300,ID : literal_not_solved(ID) }.
% Map constraint on the literal ID to the correct PSID
attr(Name, A1) :- literal(LiteralID, Name, A1), literal_solved(LiteralID).
attr(Name, A1, A2) :- literal(LiteralID, Name, A1, A2), literal_solved(LiteralID).
attr(Name, A1, A2, A3) :- literal(LiteralID, Name, A1, A2, A3), literal_solved(LiteralID).
% For these two atoms we only need implications in one direction
root(Package) :- attr("root", Package).
virtual_root(Package) :- attr("virtual_root", Package).
node_platform_set(Package, Platform) :- attr("node_platform_set", Package, Platform).
node_os_set(Package, OS) :- attr("node_os_set", Package, OS).
node_target_set(Package, Target) :- attr("node_target_set", Package, Target).
node_flag_set(Package, Flag, Value) :- attr("node_flag_set", Package, Flag, Value).
node_compiler_version_set(Package, Compiler, Version)
:- attr("node_compiler_version_set", Package, Compiler, Version).
variant_default_value_from_cli(Package, Variant, Value)
:- attr("variant_default_value_from_cli", Package, Variant, Value).
#defined concretize_everything/0.
#defined literal/1.
#defined literal/3.
#defined literal/4.
#defined literal/5.
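For intuition, the facts this mapping consumes look roughly like the following for a hypothetical input spec `zlib@1.2:` (IDs and attribute names illustrative):

```
literal(0)                                            % one ID per input spec
literal(0, "root", "zlib")                            % solved -> attr("root", "zlib")
literal(0, "node_version_satisfies", "zlib", "1.2:")  % solved -> attr(..., "zlib", "1.2:")
```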
%-----------------------------------------------------------------------------
% Version semantics
%-----------------------------------------------------------------------------
@@ -107,28 +61,10 @@ possible_version_weight(Package, Weight)
:- version(Package, Version),
version_declared(Package, Version, Weight).
% we can't use the weight for an external version if we don't use the
% corresponding external spec.
:- version(Package, Version),
version_weight(Package, Weight),
version_declared(Package, Version, Weight, "external"),
not external(Package).
% we can't use a weight from an installed spec if we are building it
% and vice-versa
:- version(Package, Version),
version_weight(Package, Weight),
version_declared(Package, Version, Weight, "installed"),
build(Package).
:- version(Package, Version),
version_weight(Package, Weight),
not version_declared(Package, Version, Weight, "installed"),
not build(Package).
1 { version_weight(Package, Weight) : version_declared(Package, Version, Weight) } 1
version_weight(Package, Weight)
:- version(Package, Version),
node(Package).
node(Package),
Weight = #min{W : version_declared(Package, Version, W)}.
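A worked example of the `#min` aggregate above, with hypothetical declared weights:

```python
# version_declared(zlib, V, W) facts, as (version, weight) pairs:
declared = [("1.2.12", 0), ("1.2.11", 1), ("1.2.8", 2)]
# version_weight(zlib, W) picks the smallest weight among declared versions:
assert min(w for _, w in declared) == 0
```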
% node_version_satisfies implies that exactly one of the satisfying versions
% is the package's version, and vice versa.
@@ -138,11 +74,6 @@ possible_version_weight(Package, Weight)
{ version(Package, Version) : version_satisfies(Package, Constraint, Version) }
:- node_version_satisfies(Package, Constraint).
% If there is at least one version that satisfies the constraint, impose a lower
% bound on the choice rule to avoid false positives with the error below
1 { version(Package, Version) : version_satisfies(Package, Constraint, Version) }
:- node_version_satisfies(Package, Constraint), version_satisfies(Package, Constraint, _).
% More specific error message if the version cannot satisfy some constraint
% Otherwise covered by `no_version_error` and `versions_conflict_error`.
error(1, "No valid version for '{0}' satisfies '@{1}'", Package, Constraint)
@@ -504,13 +435,13 @@ variant(Package, Variant) :- variant_condition(ID, Package, Variant),
condition_holds(ID).
% a variant cannot be set if it is not a variant on the package
error(2, "Cannot set variant '{0}' for package '{1}' because the variant condition cannot be satisfied for the given spec", Variant, Package)
error(2, "Cannot set variant '{0}' for package '{1}' because the variant condition cannot be satisfied for the given spec", Package, Variant)
:- variant_set(Package, Variant),
not variant(Package, Variant),
build(Package).
% a variant cannot take on a value if it is not a variant of the package
error(2, "Cannot set variant '{0}' for package '{1}' because the variant condition cannot be satisfied for the given spec", Variant, Package)
error(2, "Cannot set variant '{0}' for package '{1}' because the variant condition cannot be satisfied for the given spec", Package, Variant)
:- variant_value(Package, Variant, _),
not variant(Package, Variant),
build(Package).
@@ -737,14 +668,9 @@ node_os_mismatch(Package, Dependency) :-
% every OS is compatible with itself. We can use `os_compatible` to declare
os_compatible(OS, OS) :- os(OS).
% Transitive compatibility among operating systems
os_compatible(OS1, OS3) :- os_compatible(OS1, OS2), os_compatible(OS2, OS3).
% We can select only operating systems compatible with the ones
% for which we can build software. We need a cardinality constraint
% since we might have more than one "buildable_os(OS)" fact.
:- not 1 { os_compatible(CurrentOS, ReusedOS) : buildable_os(CurrentOS) },
node_os(Package, ReusedOS).
% OS compatibility rules for reusing solves.
% catalina binaries can be used on bigsur. Direction is package -> dependency.
os_compatible("bigsur", "catalina").
% If an OS is set explicitly respect the value
node_os(Package, OS) :- node_os_set(Package, OS), node(Package).
@@ -807,6 +733,24 @@ target_compatible(Descendent, Ancestor)
#defined target_satisfies/2.
#defined target_parent/2.
% The target weight is either the default target weight
% or a more specific per-package weight if set
target_weight(Target, Package, Weight)
:- default_target_weight(Target, Weight),
node(Package),
not derive_target_from_parent(_, Package),
not package_target_weight(Target, Package, _).
% TODO: Need to account for the case of more than one parent
% TODO: each of which sets different targets
target_weight(Target, Dependency, Weight)
:- depends_on(Package, Dependency),
derive_target_from_parent(Package, Dependency),
target_weight(Target, Package, Weight).
target_weight(Target, Package, Weight)
:- package_target_weight(Target, Package, Weight).
% can't use targets on node if the compiler for the node doesn't support them
error(2, "{0} compiler '{2}@{3}' incompatible with 'target={1}'", Package, Target, Compiler, Version)
:- node_target(Package, Target),
@@ -823,7 +767,11 @@ node_target(Package, Target)
node_target_weight(Package, Weight)
:- node(Package),
node_target(Package, Target),
target_weight(Package, Target, Weight).
target_weight(Target, Package, Weight).
derive_target_from_parent(Parent, Package)
:- depends_on(Parent, Package),
not package_target_weight(_, Package, _).
% compatibility rules for targets among nodes
node_target_match(Parent, Dependency)
@@ -941,9 +889,6 @@ compiler_weight(Package, 100)
not node_compiler_preference(Package, Compiler, Version, _),
not default_compiler_preference(Compiler, Version, _).
% For the time being, be strict and reuse only if the compiler matches one we have on the system
:- node_compiler_version(Package, Compiler, Version), not compiler_version(Compiler, Version).
#defined node_compiler_preference/4.
#defined default_compiler_preference/3.
@@ -1017,26 +962,36 @@ impose(Hash) :- hash(Package, Hash).
% if we haven't selected a hash for a package, we'll be building it
build(Package) :- not hash(Package, _), node(Package).
% Minimizing builds is tricky. We want a minimizing criterion
% because we want to reuse what is available, but
% we also want things that are built to stick to *default preferences* from
% the package and from the user. We therefore treat built specs differently and apply
% a different set of optimization criteria to them. Spack's *first* priority is to
% reuse what it *can*, but if it builds something, the built specs will respect
% defaults and preferences. This is implemented by bumping the priority of optimization
% criteria for built specs -- so that they take precedence over the otherwise
% topmost-priority criterion to reuse what is installed.
% Minimizing builds is tricky. We want a minimizing criterion because we want to reuse
% what is available, but we also want things that are built to stick to *default
% preferences* from the package and from the user. We therefore treat built specs
% differently and apply a different set of optimization criteria to them. Spack's first
% priority is to reuse what it can, but if it builds something, the built specs will
% respect defaults and preferences.
%
% This is implemented by bumping the priority of optimization criteria for built specs
% -- so that they take precedence over the otherwise topmost-priority criterion to reuse
% what is installed.
%
% If the user explicitly asks for *minimal* installs, we don't differentiate between
% built and reused specs - the top priority is just minimizing builds.
%
% The priority ranges are:
% 200+ Shifted priorities for build nodes; correspond to priorities 0 - 99.
% 100 - 199 Unshifted priorities. Currently only includes minimizing #builds.
% 0 - 99 Priorities for non-built nodes.
build_priority(Package, 200) :- build(Package), node(Package), optimize_for_reuse().
build_priority(Package, 0) :- not build(Package), node(Package), optimize_for_reuse().
build_priority(Package, 200) :- node(Package), build(Package), optimize_for_reuse(),
not minimal_installs().
build_priority(Package, 0) :- node(Package), not build(Package), optimize_for_reuse().
% don't adjust build priorities if reuse is not enabled
% Don't adjust build priorities if not reusing, or if doing minimal installs
% With minimal, minimizing builds is the TOP priority
build_priority(Package, 0) :- node(Package), not optimize_for_reuse().
build_priority(Package, 0) :- node(Package), minimal_installs().
% Minimize builds with both --reuse and with --minimal
minimize_builds() :- optimize_for_reuse().
minimize_builds() :- minimal_installs().
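A small sketch of the priority handling these rules implement (illustrative Python, not Spack's API):

```python
def effective_priority(base, is_build, reuse, minimal):
    """Shift build criteria only when reusing without --minimal."""
    if minimal or not reuse:
        return base                 # no shift: minimizing builds stays on top
    return base + 200 if is_build else base
```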
% don't assign versions from installed packages unless reuse is enabled
% NOTE: that "installed" means the declared version was only included because
@@ -1055,6 +1010,7 @@ build_priority(Package, 0) :- node(Package), not optimize_for_reuse().
not optimize_for_reuse().
#defined installed_hash/2.
#defined minimal_installs/0.
%-----------------------------------------------------------------
% Optimization to avoid errors
@@ -1084,7 +1040,7 @@ build_priority(Package, 0) :- node(Package), not optimize_for_reuse().
% Try hard to reuse installed packages (i.e., minimize the number built)
opt_criterion(100, "number of packages to build (vs. reuse)").
#minimize { 0@100: #true }.
#minimize { 1@100,Package : build(Package), optimize_for_reuse() }.
#minimize { 1@100,Package : build(Package), minimize_builds() }.
#defined optimize_for_reuse/0.
% Minimize the number of deprecated versions being used
@@ -1247,12 +1203,11 @@ opt_criterion(1, "non-preferred targets").
%-----------------
#heuristic version(Package, Version) : version_declared(Package, Version, 0), node(Package). [10, true]
#heuristic version_weight(Package, 0) : version_declared(Package, Version, 0), node(Package). [10, true]
#heuristic node_target(Package, Target) : package_target_weight(Target, Package, 0), node(Package). [10, true]
#heuristic node_target(Package, Target) : default_target_weight(Target, 0), node(Package). [10, true]
#heuristic node_target_weight(Package, 0) : node(Package). [10, true]
#heuristic variant_value(Package, Variant, Value) : variant_default_value(Package, Variant, Value), node(Package). [10, true]
#heuristic provider(Package, Virtual) : possible_provider_weight(Package, Virtual, 0, _), virtual_node(Virtual). [10, true]
#heuristic node(Package) : possible_provider_weight(Package, Virtual, 0, _), virtual_node(Virtual). [10, true]
#heuristic node_os(Package, OS) : buildable_os(OS). [10, true]
%-----------
% Notes


@@ -1,22 +0,0 @@
% Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
% Spack Project Developers. See the top-level COPYRIGHT file for details.
%
% SPDX-License-Identifier: (Apache-2.0 OR MIT)
% OS compatibility rules for reusing solves.
% os_compatible(RecentOS, OlderOS)
% OlderOS binaries can be used on RecentOS
% macOS
os_compatible("monterey", "bigsur").
os_compatible("bigsur", "catalina").
% Ubuntu
os_compatible("ubuntu22.04", "ubuntu21.10").
os_compatible("ubuntu21.10", "ubuntu21.04").
os_compatible("ubuntu21.04", "ubuntu20.10").
os_compatible("ubuntu20.10", "ubuntu20.04").
os_compatible("ubuntu20.04", "ubuntu19.10").
os_compatible("ubuntu19.10", "ubuntu19.04").
os_compatible("ubuntu19.04", "ubuntu18.10").
os_compatible("ubuntu18.10", "ubuntu18.04").


@@ -183,13 +183,6 @@
default_format += '{%compiler.name}{@compiler.version}{compiler_flags}'
default_format += '{variants}{arch=architecture}'
#: Regular expression to pull spec contents out of clearsigned signature
#: file.
CLEARSIGN_FILE_REGEX = re.compile(
(r"^-----BEGIN PGP SIGNED MESSAGE-----"
r"\s+Hash:\s+[^\s]+\s+(.+)-----BEGIN PGP SIGNATURE-----"),
re.MULTILINE | re.DOTALL)
#: specfile format version. Must increase monotonically
specfile_format_version = 3
@@ -2402,8 +2395,8 @@ def spec_and_dependency_types(s):
def from_dict(data):
"""Construct a spec from JSON/YAML.
Args:
data: a nested dict/list data structure read from YAML or JSON.
Parameters:
data -- a nested dict/list data structure read from YAML or JSON.
"""
return _spec_from_dict(data)
@@ -2412,8 +2405,8 @@ def from_dict(data):
def from_yaml(stream):
"""Construct a spec from YAML.
Args:
stream: string or file object to read from.
Parameters:
stream -- string or file object to read from.
"""
try:
data = yaml.load(stream)
@@ -2428,8 +2421,8 @@ def from_yaml(stream):
def from_json(stream):
"""Construct a spec from JSON.
Args:
stream: string or file object to read from.
Parameters:
stream -- string or file object to read from.
"""
try:
data = sjson.load(stream)
@@ -2440,27 +2433,6 @@ def from_json(stream):
e,
)
@staticmethod
def extract_json_from_clearsig(data):
m = CLEARSIGN_FILE_REGEX.search(data)
if m:
return sjson.load(m.group(1))
return sjson.load(data)
@staticmethod
def from_signed_json(stream):
"""Construct a spec from clearsigned json spec file.
Args:
stream: string or file object to read from.
"""
data = stream
if hasattr(stream, 'read'):
data = stream.read()
extracted_json = Spec.extract_json_from_clearsig(data)
return Spec.from_dict(extracted_json)
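For reference, the clearsigned layout the regex above matches looks like this (contents abridged):

```
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

{ ... spec.json contents ... }
-----BEGIN PGP SIGNATURE-----
...
-----END PGP SIGNATURE-----
```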
@staticmethod
def from_detection(spec_str, extra_attributes=None):
"""Construct a spec from a spec string determined during external
@@ -2952,18 +2924,28 @@ def _mark_concrete(self, value=True):
s.clear_cached_hashes()
s._mark_root_concrete(value)
def _finalize_concretization(self):
"""Assign hashes to this spec, and mark it concrete.
def _assign_hash(self, hash):
"""Compute and cache the provided hash type for this spec and its dependencies.
There are special semantics to consider for `package_hash`, because we can't
call it on *already* concrete specs, but we need to assign it *at concretization
time* to just-concretized specs. So, the concretizer must assign the package
hash *before* marking their specs concrete (so that we know which specs were
already concrete before this latest concretization).
Arguments:
hash (spack.hash_types.SpecHashDescriptor): the hash to assign to nodes
in the spec.
`dag_hash` is also tricky, since it cannot compute `package_hash()` lazily.
Because `package_hash` needs to be assigned *at concretization time*,
`to_node_dict()` can't just assume that it can compute `package_hash` itself
There are special semantics to consider for `package_hash`.
This should be called:
1. for `package_hash`, immediately after concretization, but *before* marking
concrete, and
2. for `dag_hash`, immediately after marking concrete.
`package_hash` is tricky, because we can't call it on *already* concrete specs,
but we need to assign it *at concretization time* to just-concretized specs. So,
the concretizer must assign the package hash *before* marking their specs
concrete (so that the only concrete specs are the ones already marked concrete).
`dag_hash` is also tricky, since it cannot compute `package_hash()` lazily for
the same reason. `package_hash` needs to be assigned *at concretization time*,
so, `to_node_dict()` can't just assume that it can compute `package_hash` itself
-- it needs to either see or not see a `_package_hash` attribute.
Rules of thumb for `package_hash`:
@@ -2977,26 +2959,28 @@ def _finalize_concretization(self):
for spec in self.traverse():
# Already concrete specs either already have a package hash (new dag_hash())
# or they never will b/c we can't know it (old dag_hash()). Skip them.
#
# We only assign package hash to not-yet-concrete specs, for which we know
# we can compute the hash.
if not spec.concrete:
# we need force=True here because package hash assignment has to happen
# before we mark concrete, so that we know what was *already* concrete.
spec._cached_hash(ht.package_hash, force=True)
if hash is ht.package_hash and not spec.concrete:
spec._cached_hash(hash, force=True)
# keep this check here to ensure package hash is saved
assert getattr(spec, ht.package_hash.attr)
assert getattr(spec, hash.attr)
else:
spec._cached_hash(hash)
def _finalize_concretization(self):
"""Assign hashes to this spec, and mark it concrete.
This is called at the end of concretization.
"""
# See docs for in _assign_hash for why package_hash needs to happen here.
self._assign_hash(ht.package_hash)
# Mark everything in the spec as concrete
self._mark_concrete()
# Assign dag_hash (this *could* be done lazily, but it's assigned anyway in
# ensure_no_deprecated, and it's clearer to see explicitly where it happens).
# Any specs that were concrete before finalization will already have a cached
# DAG hash.
for spec in self.traverse():
self._cached_hash(ht.dag_hash)
# ensure_no_deprecated, and it's clearer to see explicitly where it happens)
self._assign_hash(ht.dag_hash)
def concretized(self, tests=False):
"""This is a non-destructive version of concretize().


@@ -602,25 +602,6 @@ def test_install_legacy_yaml(test_legacy_mirror, install_mockery_mutable_config,
uninstall_cmd('-y', '/t5mczux3tfqpxwmg7egp7axy2jvyulqk')
def test_install_legacy_buildcache_layout(install_mockery_mutable_config):
"""Legacy buildcache layout involved a nested archive structure
where the .spack file contained a repeated spec.json and another
compressed archive file containing the install tree. This test
makes sure we can still read that layout."""
legacy_layout_dir = os.path.join(test_path, 'data', 'mirrors', 'legacy_layout')
mirror_url = "file://{0}".format(legacy_layout_dir)
filename = ("test-debian6-core2-gcc-4.5.0-archive-files-2.0-"
"l3vdiqvbobmspwyb4q2b62fz6nitd4hk.spec.json")
spec_json_path = os.path.join(legacy_layout_dir, 'build_cache', filename)
mirror_cmd('add', '--scope', 'site', 'test-legacy-layout', mirror_url)
output = install_cmd(
'--no-check-signature', '--cache-only', '-f', spec_json_path, output=str)
mirror_cmd('rm', '--scope=site', 'test-legacy-layout')
expect_line = ("Extracting archive-files-2.0-"
"l3vdiqvbobmspwyb4q2b62fz6nitd4hk from binary cache")
assert(expect_line in output)
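The nested layout the docstring describes looks roughly like this (file names abridged; the hash comes from the test above):

```
archive-files-2.0-l3vdiqvbobmspwyb4q2b62fz6nitd4hk.spack
├── ...spec.json        # repeated copy of the spec
└── ...tar.gz           # compressed archive containing the install tree
```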
def test_FetchCacheError_only_accepts_lists_of_errors():
with pytest.raises(TypeError, match="list"):
bindist.FetchCacheError("error")


@@ -180,20 +180,3 @@ def test_status_function_find_files(
_, missing = spack.bootstrap.status_message('optional')
assert missing is expected_missing
@pytest.mark.regression('31042')
def test_source_is_disabled(mutable_config):
# Get the configuration dictionary of the current bootstrapping source
conf = next(iter(spack.bootstrap.bootstrapping_sources()))
# The source is not explicitly enabled or disabled, so the following
# call should raise to skip using it for bootstrapping
with pytest.raises(ValueError):
spack.bootstrap.source_is_enabled_or_raise(conf)
# Try to explicitly disable the source and verify that the behavior
# is the same as above
spack.config.add('bootstrap:trusted:{0}:{1}'.format(conf['name'], False))
with pytest.raises(ValueError):
spack.bootstrap.source_is_enabled_or_raise(conf)


@@ -12,6 +12,9 @@
import pytest
import spack.build_environment
import spack.config
import spack.spec
from spack.paths import build_env_path
from spack.util.environment import set_env, system_dirs
from spack.util.executable import Executable, ProcessError
@@ -129,7 +132,9 @@ def wrapper_environment():
SPACK_TARGET_ARGS="-march=znver2 -mtune=znver2",
SPACK_LINKER_ARG='-Wl,',
SPACK_DTAGS_TO_ADD='--disable-new-dtags',
SPACK_DTAGS_TO_STRIP='--enable-new-dtags'):
SPACK_DTAGS_TO_STRIP='--enable-new-dtags',
SPACK_COMPILER_FLAGS_KEEP='',
SPACK_COMPILER_FLAGS_REMOVE='-Werror*',):
yield
@@ -157,6 +162,21 @@ def check_args(cc, args, expected):
assert expected == cc_modified_args
def check_args_contents(cc, args, must_contain, must_not_contain):
"""Check output arguments that cc produces when called with args.
This assumes that cc will print debug command output with one element
per line, so that we see whether arguments that should (or shouldn't)
contain spaces are parsed correctly.
"""
with set_env(SPACK_TEST_COMMAND='dump-args'):
cc_modified_args = cc(*args, output=str).strip().split('\n')
for a in must_contain:
assert a in cc_modified_args
for a in must_not_contain:
assert a not in cc_modified_args
def check_env_var(executable, var, expected):
"""Check environment variables updated by the passed compiler wrapper
@@ -642,6 +662,63 @@ def test_no_ccache_prepend_for_fc(wrapper_environment):
common_compile_args)
def test_keep_and_remove(wrapper_environment):
werror_specific = ['-Werror=meh']
werror = ['-Werror']
werror_all = werror_specific + werror
with set_env(
SPACK_COMPILER_FLAGS_KEEP='',
SPACK_COMPILER_FLAGS_REMOVE='-Werror*',
):
check_args_contents(cc, test_args + werror_all, ['-Wl,--end-group'], werror_all)
with set_env(
SPACK_COMPILER_FLAGS_KEEP='-Werror=*',
SPACK_COMPILER_FLAGS_REMOVE='-Werror*',
):
check_args_contents(cc, test_args + werror_all, werror_specific, werror)
with set_env(
SPACK_COMPILER_FLAGS_KEEP='-Werror=*',
SPACK_COMPILER_FLAGS_REMOVE='-Werror*|-llib1|-Wl*',
):
check_args_contents(
cc,
test_args + werror_all,
werror_specific,
werror + ["-llib1", "-Wl,--rpath"]
)
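A minimal sketch of the keep/remove semantics this test exercises, assuming shell-style globs in `|`-separated patterns and that keep patterns win over remove patterns (illustrative, not the wrapper's actual implementation):

```python
import fnmatch


def filter_flags(args, keep="", remove=""):
    kept = []
    keep_pats = [p for p in keep.split("|") if p]
    remove_pats = [p for p in remove.split("|") if p]
    for arg in args:
        if any(fnmatch.fnmatch(arg, p) for p in keep_pats):
            kept.append(arg)            # explicitly kept, e.g. -Werror=meh
        elif any(fnmatch.fnmatch(arg, p) for p in remove_pats):
            continue                    # dropped, e.g. bare -Werror
        else:
            kept.append(arg)
    return kept


assert filter_flags(["-Werror", "-Werror=meh", "-O2"],
                    keep="-Werror=*", remove="-Werror*") == ["-Werror=meh", "-O2"]
```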
@pytest.mark.parametrize('cfg_override,initial,expected,must_be_gone', [
# Set and unset variables
('config:flags:keep_werror:all',
['-Werror', '-Werror=specific', '-bah'],
['-Werror', '-Werror=specific', '-bah'],
[],
),
('config:flags:keep_werror:specific',
['-Werror', '-Werror=specific', '-bah'],
['-Werror=specific', '-bah'],
['-Werror'],
),
('config:flags:keep_werror:none',
['-Werror', '-Werror=specific', '-bah'],
['-bah'],
['-Werror', '-Werror=specific'],
),
])
@pytest.mark.usefixtures('wrapper_environment', 'mutable_config')
def test_flag_modification(cfg_override, initial, expected, must_be_gone):
spack.config.add(cfg_override)
env = spack.build_environment.clean_environment()
env.apply_modifications()
check_args_contents(
cc,
test_args + initial,
expected,
must_be_gone
)
@pytest.mark.regression('9160')
def test_disable_new_dtags(wrapper_environment, wrapper_flags):
with set_env(SPACK_TEST_COMMAND='dump-args'):


@@ -10,7 +10,6 @@
import spack.config
import spack.environment as ev
import spack.main
import spack.mirror
from spack.util.path import convert_to_posix_path
_bootstrap = spack.main.SpackCommand('bootstrap')
@@ -140,82 +139,13 @@ def test_trust_or_untrust_fails_with_no_method(mutable_config):
def test_trust_or_untrust_fails_with_more_than_one_method(mutable_config):
wrong_config = {'sources': [
{'name': 'github-actions',
'metadata': '$spack/share/spack/bootstrap/github-actions'},
'type': 'buildcache',
'description': ''},
{'name': 'github-actions',
'metadata': '$spack/share/spack/bootstrap/github-actions'}],
'type': 'buildcache',
'description': 'Another entry'}],
'trusted': {}
}
with spack.config.override('bootstrap', wrong_config):
with pytest.raises(RuntimeError, match='more than one'):
_bootstrap('trust', 'github-actions')
@pytest.mark.parametrize('use_existing_dir', [True, False])
def test_add_failures_for_non_existing_files(mutable_config, tmpdir, use_existing_dir):
metadata_dir = str(tmpdir) if use_existing_dir else '/foo/doesnotexist'
with pytest.raises(RuntimeError, match='does not exist'):
_bootstrap('add', 'mock-mirror', metadata_dir)
def test_add_failures_for_already_existing_name(mutable_config):
with pytest.raises(RuntimeError, match='already exist'):
_bootstrap('add', 'github-actions', 'some-place')
def test_remove_failure_for_non_existing_names(mutable_config):
with pytest.raises(RuntimeError, match='cannot find'):
_bootstrap('remove', 'mock-mirror')
def test_remove_and_add_a_source(mutable_config):
# Check we start with a single bootstrapping source
sources = spack.bootstrap.bootstrapping_sources()
assert len(sources) == 1
# Remove it and check the result
_bootstrap('remove', 'github-actions')
sources = spack.bootstrap.bootstrapping_sources()
assert not sources
# Add it back and check we restored the initial state
_bootstrap(
'add', 'github-actions', '$spack/share/spack/bootstrap/github-actions-v0.2'
)
sources = spack.bootstrap.bootstrapping_sources()
assert len(sources) == 1
@pytest.mark.maybeslow
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows (yet)")
def test_bootstrap_mirror_metadata(mutable_config, linux_os, monkeypatch, tmpdir):
"""Test that `spack bootstrap mirror` creates a folder that can be ingested by
`spack bootstrap add`. Here we don't download data, since that would be an
expensive operation for a unit test.
"""
old_create = spack.mirror.create
monkeypatch.setattr(spack.mirror, 'create', lambda p, s: old_create(p, []))
# Create the mirror in a temporary folder
compilers = [{
'compiler': {
'spec': 'gcc@12.0.1',
'operating_system': '{0.name}{0.version}'.format(linux_os),
'modules': [],
'paths': {
'cc': '/usr/bin',
'cxx': '/usr/bin',
'fc': '/usr/bin',
'f77': '/usr/bin'
}
}
}]
with spack.config.override('compilers', compilers):
_bootstrap('mirror', str(tmpdir))
# Register the mirror
metadata_dir = tmpdir.join('metadata', 'sources')
_bootstrap('add', '--trust', 'test-mirror', str(metadata_dir))
assert _bootstrap.returncode == 0
assert any(m['name'] == 'test-mirror'
for m in spack.bootstrap.bootstrapping_sources())


@@ -25,6 +25,7 @@
import spack.paths as spack_paths
import spack.repo as repo
import spack.util.gpg
import spack.util.spack_json as sjson
import spack.util.spack_yaml as syaml
import spack.util.url as url_util
from spack.schema.buildcache_spec import schema as specfile_schema
@@ -635,6 +636,10 @@ def test_ci_generate_for_pr_pipeline(tmpdir, mutable_mock_env_path,
outputfile = str(tmpdir.join('.gitlab-ci.yml'))
with ev.read('test'):
monkeypatch.setattr(
ci, 'SPACK_PR_MIRRORS_ROOT_URL', r"file:///fake/mirror")
monkeypatch.setattr(
ci, 'SPACK_SHARED_PR_MIRROR_URL', r"file:///fake/mirror_two")
ci_cmd('generate', '--output-file', outputfile)
with open(outputfile) as f:
@@ -679,6 +684,10 @@ def test_ci_generate_with_external_pkg(tmpdir, mutable_mock_env_path,
outputfile = str(tmpdir.join('.gitlab-ci.yml'))
with ev.read('test'):
monkeypatch.setattr(
ci, 'SPACK_PR_MIRRORS_ROOT_URL', r"file:///fake/mirror")
monkeypatch.setattr(
ci, 'SPACK_SHARED_PR_MIRROR_URL', r"file:///fake/mirror_two")
ci_cmd('generate', '--output-file', outputfile)
with open(outputfile) as f:
@@ -912,82 +921,11 @@ def fake_dl_method(spec, *args, **kwargs):
env_cmd('deactivate')
def test_ci_generate_mirror_override(tmpdir, mutable_mock_env_path,
install_mockery_mutable_config, mock_packages,
mock_fetch, mock_stage, mock_binary_index,
ci_base_environment):
"""Ensure that protected pipelines using --buildcache-destination do not
skip building specs that are not in the override mirror when they are
found in the main mirror."""
os.environ.update({
'SPACK_PIPELINE_TYPE': 'spack_protected_branch',
})
working_dir = tmpdir.join('working_dir')
mirror_dir = working_dir.join('mirror')
mirror_url = 'file://{0}'.format(mirror_dir.strpath)
spack_yaml_contents = """
spack:
definitions:
- packages: [patchelf]
specs:
- $packages
mirrors:
test-mirror: {0}
gitlab-ci:
mappings:
- match:
- patchelf
runner-attributes:
tags:
- donotcare
image: donotcare
service-job-attributes:
tags:
- nonbuildtag
image: basicimage
""".format(mirror_url)
filename = str(tmpdir.join('spack.yaml'))
with open(filename, 'w') as f:
f.write(spack_yaml_contents)
with tmpdir.as_cwd():
env_cmd('create', 'test', './spack.yaml')
first_ci_yaml = str(tmpdir.join('.gitlab-ci-1.yml'))
second_ci_yaml = str(tmpdir.join('.gitlab-ci-2.yml'))
with ev.read('test'):
install_cmd()
buildcache_cmd('create', '-u', '--mirror-url', mirror_url, 'patchelf')
buildcache_cmd('update-index', '--mirror-url', mirror_url, output=str)
# This generate should not trigger a rebuild of patchelf, since it's in
# the main mirror referenced in the environment.
ci_cmd('generate', '--check-index-only', '--output-file', first_ci_yaml)
# Because we used a mirror override (--buildcache-destination) on a
# spack protected pipeline, we expect to only look in the override
# mirror for the spec, and thus the patchelf job should be generated in
# this pipeline
ci_cmd('generate', '--check-index-only', '--output-file', second_ci_yaml,
'--buildcache-destination', 'file:///mirror/not/exist')
with open(first_ci_yaml) as fd1:
first_yaml = fd1.read()
assert 'no-specs-to-rebuild' in first_yaml
with open(second_ci_yaml) as fd2:
second_yaml = fd2.read()
assert 'no-specs-to-rebuild' not in second_yaml
@pytest.mark.disable_clean_stage_check
def test_push_mirror_contents(tmpdir, mutable_mock_env_path,
install_mockery_mutable_config, mock_packages,
mock_fetch, mock_stage, mock_gnupghome,
ci_base_environment, mock_binary_index):
ci_base_environment):
working_dir = tmpdir.join('working_dir')
mirror_dir = working_dir.join('mirror')
@@ -1100,10 +1038,10 @@ def test_push_mirror_contents(tmpdir, mutable_mock_env_path,
# Also test buildcache_spec schema
bc_files_list = os.listdir(buildcache_path)
for file_name in bc_files_list:
if file_name.endswith('.spec.json.sig'):
if file_name.endswith('.spec.json'):
spec_json_path = os.path.join(buildcache_path, file_name)
with open(spec_json_path) as json_fd:
json_object = Spec.extract_json_from_clearsig(json_fd.read())
json_object = sjson.load(json_fd)
validate(json_object, specfile_schema)
logs_dir = working_dir.join('logs_dir')
@@ -1214,6 +1152,10 @@ def test_ci_generate_override_runner_attrs(tmpdir, mutable_mock_env_path,
with ev.read('test'):
monkeypatch.setattr(
spack.main, 'get_version', lambda: '0.15.3-416-12ad69eb1')
monkeypatch.setattr(
ci, 'SPACK_PR_MIRRORS_ROOT_URL', r"file:///fake/mirror")
monkeypatch.setattr(
ci, 'SPACK_SHARED_PR_MIRROR_URL', r"file:///fake/mirror_two")
ci_cmd('generate', '--output-file', outputfile)
with open(outputfile) as f:
@@ -1315,6 +1257,10 @@ def test_ci_generate_with_workarounds(tmpdir, mutable_mock_env_path,
outputfile = str(tmpdir.join('.gitlab-ci.yml'))
with ev.read('test'):
monkeypatch.setattr(
ci, 'SPACK_PR_MIRRORS_ROOT_URL', r"file:///fake/mirror")
monkeypatch.setattr(
ci, 'SPACK_SHARED_PR_MIRROR_URL', r"file:///fake/mirror_two")
ci_cmd('generate', '--output-file', outputfile, '--dependencies')
with open(outputfile) as f:
@@ -1472,6 +1418,11 @@ def fake_get_mirrors_for_spec(spec=None, mirrors_to_check=None,
outputfile = str(tmpdir.join('.gitlab-ci.yml'))
with ev.read('test'):
monkeypatch.setattr(
ci, 'SPACK_PR_MIRRORS_ROOT_URL', r"file:///fake/mirror")
monkeypatch.setattr(
ci, 'SPACK_SHARED_PR_MIRROR_URL', r"file:///fake/mirror_two")
ci_cmd('generate', '--output-file', outputfile)
with open(outputfile) as of:
@@ -1680,6 +1631,11 @@ def test_ci_generate_temp_storage_url(tmpdir, mutable_mock_env_path,
env_cmd('create', 'test', './spack.yaml')
outputfile = str(tmpdir.join('.gitlab-ci.yml'))
monkeypatch.setattr(
ci, 'SPACK_PR_MIRRORS_ROOT_URL', r"file:///fake/mirror")
monkeypatch.setattr(
ci, 'SPACK_SHARED_PR_MIRROR_URL', r"file:///fake/mirror_two")
with ev.read('test'):
ci_cmd('generate', '--output-file', outputfile)
@@ -1760,64 +1716,6 @@ def test_ci_generate_read_broken_specs_url(tmpdir, mutable_mock_env_path,
assert(ex not in output)
def test_ci_generate_external_signing_job(tmpdir, mutable_mock_env_path,
install_mockery,
mock_packages, monkeypatch,
ci_base_environment):
"""Verify that in external signing mode: 1) each rebuild jobs includes
the location where the binary hash information is written and 2) we
properly generate a final signing job in the pipeline."""
os.environ.update({
'SPACK_PIPELINE_TYPE': 'spack_protected_branch'
})
filename = str(tmpdir.join('spack.yaml'))
with open(filename, 'w') as f:
f.write("""\
spack:
specs:
- archive-files
mirrors:
some-mirror: https://my.fake.mirror
gitlab-ci:
temporary-storage-url-prefix: file:///work/temp/mirror
mappings:
- match:
- archive-files
runner-attributes:
tags:
- donotcare
image: donotcare
signing-job-attributes:
tags:
- nonbuildtag
- secretrunner
image:
name: customdockerimage
entrypoint: []
variables:
IMPORTANT_INFO: avalue
script:
- echo hello
""")
with tmpdir.as_cwd():
env_cmd('create', 'test', './spack.yaml')
outputfile = str(tmpdir.join('.gitlab-ci.yml'))
with ev.read('test'):
ci_cmd('generate', '--output-file', outputfile)
with open(outputfile) as of:
pipeline_doc = syaml.load(of.read())
assert 'sign-pkgs' in pipeline_doc
signing_job = pipeline_doc['sign-pkgs']
assert 'tags' in signing_job
signing_job_tags = signing_job['tags']
for expected_tag in ['notary', 'protected', 'aws']:
assert expected_tag in signing_job_tags
def test_ci_reproduce(tmpdir, mutable_mock_env_path,
install_mockery, mock_packages, monkeypatch,
last_two_git_commits, ci_base_environment, mock_binary_index):


@@ -45,22 +45,21 @@ def test_negative_integers_not_allowed_for_parallel_jobs(job_parser):
assert 'expected a positive integer' in str(exc_info.value)
@pytest.mark.parametrize('specs,cflags,negated_variants', [
(['coreutils cflags="-O3 -g"'], ['-O3', '-g'], []),
(['coreutils', 'cflags=-O3 -g'], ['-O3'], ['g']),
(['coreutils', 'cflags=-O3', '-g'], ['-O3'], ['g']),
@pytest.mark.parametrize('specs,expected_variants,unexpected_variants', [
(['coreutils', 'cflags=-O3 -g'], [], ['g']),
(['coreutils', 'cflags=-O3', '-g'], ['g'], []),
])
@pytest.mark.regression('12951')
def test_parse_spec_flags_with_spaces(specs, cflags, negated_variants):
def test_parse_spec_flags_with_spaces(
specs, expected_variants, unexpected_variants
):
spec_list = spack.cmd.parse_specs(specs)
assert len(spec_list) == 1
s = spec_list.pop()
assert s.compiler_flags['cflags'] == cflags
assert list(s.variants.keys()) == negated_variants
for v in negated_variants:
assert '~{0}'.format(v) in s
assert all(x not in s.variants for x in unexpected_variants)
assert all(x in s.variants for x in expected_variants)
@pytest.mark.usefixtures('config')
@@ -121,11 +120,19 @@ def test_concretizer_arguments(mutable_config, mock_packages):
spec = spack.main.SpackCommand("spec")
assert spack.config.get("concretizer:reuse", None) is None
assert spack.config.get("concretizer:minimal", None) is None
spec("--reuse", "zlib")
assert spack.config.get("concretizer:reuse", None) is True
assert spack.config.get("concretizer:minimal", None) is None
spec("--fresh", "zlib")
assert spack.config.get("concretizer:reuse", None) is False
assert spack.config.get("concretizer:minimal", None) is None
spec("--minimal", "zlib")
assert spack.config.get("concretizer:reuse", None) is False
assert spack.config.get("concretizer:minimal", None) is True


@@ -18,40 +18,37 @@
concretize = SpackCommand('concretize')
unification_strategies = [False, True, 'when_possible']
@pytest.mark.parametrize('unify', unification_strategies)
def test_concretize_all_test_dependencies(unify):
@pytest.mark.parametrize('concretization', ['separately', 'together'])
def test_concretize_all_test_dependencies(concretization):
"""Check all test dependencies are concretized."""
env('create', 'test')
with ev.read('test') as e:
e.unify = unify
e.concretization = concretization
add('depb')
concretize('--test', 'all')
assert e.matching_spec('test-dependency')
@pytest.mark.parametrize('unify', unification_strategies)
def test_concretize_root_test_dependencies_not_recursive(unify):
@pytest.mark.parametrize('concretization', ['separately', 'together'])
def test_concretize_root_test_dependencies_not_recursive(concretization):
"""Check that test dependencies are not concretized recursively."""
env('create', 'test')
with ev.read('test') as e:
e.unify = unify
e.concretization = concretization
add('depb')
concretize('--test', 'root')
assert e.matching_spec('test-dependency') is None
@pytest.mark.parametrize('unify', unification_strategies)
def test_concretize_root_test_dependencies_are_concretized(unify):
@pytest.mark.parametrize('concretization', ['separately', 'together'])
def test_concretize_root_test_dependencies_are_concretized(concretization):
"""Check that root test dependencies are concretized."""
env('create', 'test')
with ev.read('test') as e:
e.unify = unify
e.concretization = concretization
add('a')
add('b')
concretize('--test', 'root')


@@ -2197,7 +2197,7 @@ def test_env_activate_default_view_root_unconditional(mutable_mock_env_path):
def test_concretize_user_specs_together():
e = ev.create('coconcretization')
e.unify = True
e.concretization = 'together'
# Concretize a first time using 'mpich' as the MPI provider
e.add('mpileaks')
@@ -2225,7 +2225,7 @@ def test_concretize_user_specs_together():
def test_cant_install_single_spec_when_concretizing_together():
e = ev.create('coconcretization')
e.unify = True
e.concretization = 'together'
with pytest.raises(ev.SpackEnvironmentError, match=r'cannot install'):
e.concretize_and_add('zlib')
@@ -2234,7 +2234,7 @@ def test_cant_install_single_spec_when_concretizing_together():
def test_duplicate_packages_raise_when_concretizing_together():
e = ev.create('coconcretization')
e.unify = True
e.concretization = 'together'
e.add('mpileaks+opt')
e.add('mpileaks~opt')
@@ -2556,7 +2556,7 @@ def test_custom_version_concretize_together(tmpdir):
# Custom versions should be permitted in specs when
# concretizing together
e = ev.create('custom_version')
e.unify = True
e.concretization = 'together'
# Concretize a first time using 'mpich' as the MPI provider
e.add('hdf5@myversion')
@@ -2647,7 +2647,7 @@ def test_multiple_modules_post_env_hook(tmpdir, install_mockery, mock_fetch):
def test_virtual_spec_concretize_together(tmpdir):
# An environment should permit concretizing "mpi"
e = ev.create('virtual_spec')
e.unify = True
e.concretization = 'together'
e.add('mpi')
e.concretize()
@@ -2989,19 +2989,3 @@ def test_environment_depfile_out(tmpdir, mock_packages):
stdout = env('depfile', '-G', 'make')
with open(makefile_path, 'r') as f:
assert stdout == f.read()
def test_unify_when_possible_works_around_conflicts():
e = ev.create('coconcretization')
e.unify = 'when_possible'
e.add('mpileaks+opt')
e.add('mpileaks~opt')
e.add('mpich')
e.concretize()
assert len([x for x in e.all_specs() if x.satisfies('mpileaks')]) == 2
assert len([x for x in e.all_specs() if x.satisfies('mpileaks+opt')]) == 1
assert len([x for x in e.all_specs() if x.satisfies('mpileaks~opt')]) == 1
assert len([x for x in e.all_specs() if x.satisfies('mpich')]) == 1

View File

@@ -8,8 +8,6 @@
import pytest
from llnl.util.filesystem import getuid, touch
import spack
import spack.detection
import spack.detection.path
@@ -196,53 +194,6 @@ def test_find_external_empty_default_manifest_dir(
external('find')
@pytest.mark.skipif(sys.platform == 'win32',
reason="Can't chmod on Windows")
@pytest.mark.skipif(getuid() == 0, reason='user is root')
def test_find_external_manifest_with_bad_permissions(
mutable_config, working_env, mock_executable, mutable_mock_repo,
_platform_executables, tmpdir, monkeypatch):
"""The user runs 'spack external find'; the default path for storing
manifest files exists but has insufficient permissions. Check that
the command does not fail.
"""
test_manifest_dir = str(tmpdir.mkdir('manifest_dir'))
test_manifest_file_path = os.path.join(test_manifest_dir, 'badperms.json')
touch(test_manifest_file_path)
monkeypatch.setenv('PATH', '')
monkeypatch.setattr(spack.cray_manifest, 'default_path',
test_manifest_dir)
try:
os.chmod(test_manifest_file_path, 0)
output = external('find')
assert 'insufficient permissions' in output
assert 'Skipping manifest and continuing' in output
finally:
os.chmod(test_manifest_file_path, 0o700)
def test_find_external_manifest_failure(
mutable_config, mutable_mock_repo, tmpdir, monkeypatch):
"""The user runs 'spack external find'; the manifest parsing fails with
some exception. Ensure that the command still succeeds (i.e. moves on
to other external detection mechanisms).
"""
# First, create an empty manifest file (without a file to read, the
# manifest parsing is skipped)
test_manifest_dir = str(tmpdir.mkdir('manifest_dir'))
test_manifest_file_path = os.path.join(test_manifest_dir, 'test.json')
touch(test_manifest_file_path)
def fail():
raise Exception()
monkeypatch.setattr(
spack.cmd.external, '_collect_and_consume_cray_manifest_files', fail)
monkeypatch.setenv('PATH', '')
output = external('find')
assert 'Skipping manifest and continuing' in output
def test_find_external_nonempty_default_manifest_dir(
mutable_database, mutable_mock_repo,
_platform_executables, tmpdir, monkeypatch,

View File

@@ -4,12 +4,10 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import re
from textwrap import dedent
import pytest
import spack.environment as ev
import spack.error
import spack.spec
import spack.store
from spack.main import SpackCommand, SpackCommandError
@@ -57,54 +55,6 @@ def test_spec_concretizer_args(mutable_config, mutable_database):
assert h in output
def test_spec_parse_dependency_variant_value():
"""Verify that we can provide multiple key=value variants to multiple separate
packages within a spec string."""
output = spec('multivalue-variant fee=barbaz ^ a foobar=baz')
assert 'fee=barbaz' in output
assert 'foobar=baz' in output
def test_spec_parse_cflags_quoting():
"""Verify that compiler flags can be provided to a spec from the command line."""
output = spec('--yaml', 'gcc cflags="-Os -pipe" cxxflags="-flto -Os"')
gh_flagged = spack.spec.Spec.from_yaml(output)
assert ['-Os', '-pipe'] == gh_flagged.compiler_flags['cflags']
assert ['-flto', '-Os'] == gh_flagged.compiler_flags['cxxflags']
def test_spec_parse_unquoted_flags_report():
"""Verify that a useful error message is produced if unquoted compiler flags are
provided."""
# This should fail during parsing, since /usr/include is interpreted as a spec hash.
with pytest.raises(spack.error.SpackError) as cm:
# We don't try to figure out how many of the following args were intended to be
# part of cflags; we just explain how to fix it for the next argument.
spec('gcc cflags=-Os -pipe -other-arg-that-gets-ignored cflags=-I /usr/include')
# Verify that the generated error message is nicely formatted.
assert str(cm.value) == dedent('''\
No installed spec matches the hash: 'usr'
Some compiler or linker flags were provided without quoting their arguments,
which now causes spack to try to parse the *next* argument as a spec component
such as a variant instead of an additional compiler or linker flag. If the
intent was to set multiple flags, try quoting them together as described below.
Possible flag quotation errors (with the correctly-quoted version after the =>):
(1) cflags=-Os -pipe => cflags="-Os -pipe"
(2) cflags=-I /usr/include => cflags="-I /usr/include"''')
# Verify that the same unquoted cflags report is generated in the error message even
# if it fails during concretization, not just during parsing.
with pytest.raises(spack.error.SpackError) as cm:
spec('gcc cflags=-Os -pipe')
cm = str(cm.value)
assert cm.startswith('trying to set variant "pipe" in package "gcc", but the package has no such variant [happened during concretization of gcc cflags="-Os" ~pipe]') # noqa: E501
assert cm.endswith('(1) cflags=-Os -pipe => cflags="-Os -pipe"')
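For contrast with the failure cases above, the quoted form parses cleanly; a usage sketch mirroring `test_spec_parse_cflags_quoting` earlier in this file (`spec` is the module-level SpackCommand handle these tests already use):

```python
# Quoting keeps both tokens attached to cflags instead of spilling
# into the spec parse as variants or hashes.
output = spec('--yaml', 'gcc cflags="-Os -pipe"')
parsed = spack.spec.Spec.from_yaml(output)
assert parsed.compiler_flags['cflags'] == ['-Os', '-pipe']
```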
def test_spec_yaml():
output = spec('--yaml', 'mpileaks')
@@ -175,14 +125,14 @@ def test_spec_returncode():
def test_spec_parse_error():
with pytest.raises(spack.error.SpackError) as e:
with pytest.raises(spack.spec.SpecParseError) as e:
spec("1.15:")
# make sure the error is formatted properly
error_msg = """\
1.15:
^"""
assert error_msg in str(e.value)
assert error_msg in e.value.long_message
def test_env_aware_spec(mutable_mock_env_path):

View File

@@ -1668,143 +1668,3 @@ def test_reuse_with_unknown_package_dont_raise(
with spack.config.override("concretizer:reuse", True):
s = Spec('c').concretized()
assert s.namespace == 'builtin.mock'
@pytest.mark.parametrize('specs,expected', [
(['libelf', 'libelf@0.8.10'], 1),
(['libdwarf%gcc', 'libelf%clang'], 2),
(['libdwarf%gcc', 'libdwarf%clang'], 4),
(['libdwarf^libelf@0.8.12', 'libdwarf^libelf@0.8.13'], 4),
(['hdf5', 'zmpi'], 3),
(['hdf5', 'mpich'], 2),
(['hdf5^zmpi', 'mpich'], 4),
(['mpi', 'zmpi'], 2),
(['mpi', 'mpich'], 1),
])
def test_best_effort_coconcretize(self, specs, expected):
import spack.solver.asp
if spack.config.get('config:concretizer') == 'original':
pytest.skip('Original concretizer cannot concretize in rounds')
specs = [spack.spec.Spec(s) for s in specs]
solver = spack.solver.asp.Solver()
solver.reuse = False
concrete_specs = set()
for result in solver.solve_in_rounds(specs):
for s in result.specs:
concrete_specs.update(s.traverse())
assert len(concrete_specs) == expected
@pytest.mark.parametrize('specs,expected_spec,occurrences', [
# The algorithm is greedy, and it might decide to solve the "best"
# spec early in which case reuse is suboptimal. In this case the most
# recent version of libdwarf is selected and concretized to libelf@0.8.13
(['libdwarf@20111030^libelf@0.8.10',
'libdwarf@20130207^libelf@0.8.12',
'libdwarf@20130729'], 'libelf@0.8.12', 1),
# Check we reuse the best libelf in the environment
(['libdwarf@20130729^libelf@0.8.10',
'libdwarf@20130207^libelf@0.8.12',
'libdwarf@20111030'], 'libelf@0.8.12', 2),
(['libdwarf@20130729',
'libdwarf@20130207',
'libdwarf@20111030'], 'libelf@0.8.13', 3),
# We need to solve in 2 rounds and we expect mpich to be preferred to zmpi
(['hdf5+mpi', 'zmpi', 'mpich'], 'mpich', 2)
])
def test_best_effort_coconcretize_preferences(
self, specs, expected_spec, occurrences
):
"""Test package preferences during coconcretization."""
import spack.solver.asp
if spack.config.get('config:concretizer') == 'original':
pytest.skip('Original concretizer cannot concretize in rounds')
specs = [spack.spec.Spec(s) for s in specs]
solver = spack.solver.asp.Solver()
solver.reuse = False
concrete_specs = {}
for result in solver.solve_in_rounds(specs):
concrete_specs.update(result.specs_by_input)
counter = 0
for spec in concrete_specs.values():
if expected_spec in spec:
counter += 1
assert counter == occurrences, concrete_specs
@pytest.mark.regression('30864')
def test_misleading_error_message_on_version(self, mutable_database):
# For this bug to be triggered we need a reusable dependency
# that is not optimal in terms of optimization scores.
# We pick an old version of "b"
import spack.solver.asp
if spack.config.get('config:concretizer') == 'original':
pytest.skip('Original concretizer cannot reuse')
reusable_specs = [
spack.spec.Spec('non-existing-conditional-dep@1.0').concretized()
]
root_spec = spack.spec.Spec('non-existing-conditional-dep@2.0')
with spack.config.override("concretizer:reuse", True):
solver = spack.solver.asp.Solver()
setup = spack.solver.asp.SpackSolverSetup()
with pytest.raises(spack.solver.asp.UnsatisfiableSpecError,
match="'dep-with-variants' satisfies '@999'"):
solver.driver.solve(setup, [root_spec], reuse=reusable_specs)
@pytest.mark.regression('31148')
def test_version_weight_and_provenance(self):
"""Test package preferences during coconcretization."""
import spack.solver.asp
if spack.config.get('config:concretizer') == 'original':
pytest.skip('Original concretizer cannot reuse')
reusable_specs = [
spack.spec.Spec(spec_str).concretized()
for spec_str in ('b@0.9', 'b@1.0')
]
root_spec = spack.spec.Spec('a foobar=bar')
with spack.config.override("concretizer:reuse", True):
solver = spack.solver.asp.Solver()
setup = spack.solver.asp.SpackSolverSetup()
result = solver.driver.solve(
setup, [root_spec], reuse=reusable_specs, out=sys.stdout
)
# The result here should have a single spec to build ('a')
# and it should be using b@1.0 with a version badness of 2
# The provenance is:
# version_declared("b","1.0",0,"package_py").
# version_declared("b","0.9",1,"package_py").
# version_declared("b","1.0",2,"installed").
# version_declared("b","0.9",3,"installed").
for criterion in [
(1, None, 'number of packages to build (vs. reuse)'),
(2, 0, 'version badness')
]:
assert criterion in result.criteria
assert result.specs[0].satisfies('^b@1.0')
@pytest.mark.regression('31169')
def test_not_reusing_incompatible_os_or_compiler(self):
import spack.solver.asp
if spack.config.get('config:concretizer') == 'original':
pytest.skip('Original concretizer cannot reuse')
root_spec = spack.spec.Spec('b')
s = root_spec.concretized()
wrong_compiler, wrong_os = s.copy(), s.copy()
wrong_compiler.compiler = spack.spec.CompilerSpec('gcc@12.1.0')
wrong_os.architecture = spack.spec.ArchSpec('test-ubuntu2204-x86_64')
reusable_specs = [wrong_compiler, wrong_os]
with spack.config.override("concretizer:reuse", True):
solver = spack.solver.asp.Solver()
setup = spack.solver.asp.SpackSolverSetup()
result = solver.driver.solve(
setup, [root_spec], reuse=reusable_specs, out=sys.stdout
)
concrete_spec = result.specs[0]
assert concrete_spec.satisfies('%gcc@4.5.0')
assert concrete_spec.satisfies('os=debian6')
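Every reuse test above shares one call pattern; distilled into a self-contained sketch (all calls appear verbatim in the tests, with spec strings borrowed from `test_version_weight_and_provenance`):

```python
import spack.config
import spack.solver.asp
import spack.spec

# Concretize some installed candidates, then solve a root against them.
reusable_specs = [spack.spec.Spec('b@1.0').concretized()]
root_spec = spack.spec.Spec('a foobar=bar')
with spack.config.override("concretizer:reuse", True):
    solver = spack.solver.asp.Solver()
    setup = spack.solver.asp.SpackSolverSetup()
    result = solver.driver.solve(setup, [root_spec], reuse=reusable_specs)
```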

View File

@@ -144,7 +144,7 @@ def test_preferred_target(self, mutable_mock_repo):
update_packages('mpileaks', 'target', [default])
spec = concretize('mpileaks')
assert str(spec['mpileaks'].target) == default
assert str(spec['mpich'].target) == default
def test_preferred_versions(self):

View File

@@ -10,7 +10,6 @@
logic needs to consume all related specs in a single pass).
"""
import json
import os
import pytest
@@ -33,7 +32,7 @@
},
"compiler": {
"name": "gcc",
"version": "10.2.0.cray"
"version": "10.2.0"
},
"dependencies": {
"packagey": {
@@ -157,7 +156,7 @@ def spec_json(self):
# Intended to match example_compiler_entry above
_common_compiler = JsonCompilerEntry(
name='gcc',
version='10.2.0.cray',
version='10.2.0',
arch={
"os": "centos8",
"target": "x86_64"
@@ -310,16 +309,8 @@ def test_failed_translate_compiler_name():
def create_manifest_content():
return {
# Note: the cray_manifest module doesn't use the _meta section right
# now, but it is anticipated to be useful
'_meta': {
"file-type": "cray-pe-json",
"system-type": "test",
"schema-version": "1.3",
"cpe-version": "22.06"
},
'specs': list(x.to_dict() for x in generate_openmpi_entries()),
'compilers': [_common_compiler.compiler_json()]
'compilers': []
}
@@ -345,45 +336,3 @@ def test_read_cray_manifest(
' ^/openmpifakehasha'.split(),
concretize=True)
assert concretized_specs[0]['hwloc'].dag_hash() == 'hwlocfakehashaaa'
def test_read_cray_manifest_twice_no_compiler_duplicates(
tmpdir, mutable_config, mock_packages, mutable_database):
if spack.config.get('config:concretizer') == 'clingo':
pytest.skip("The ASP-based concretizer is currently picky about "
" OS matching and will fail.")
with tmpdir.as_cwd():
test_db_fname = 'external-db.json'
with open(test_db_fname, 'w') as db_file:
json.dump(create_manifest_content(), db_file)
# Read the manifest twice
cray_manifest.read(test_db_fname, True)
cray_manifest.read(test_db_fname, True)
compilers = spack.compilers.all_compilers()
filtered = list(c for c in compilers if
c.spec == spack.spec.CompilerSpec('gcc@10.2.0.cray'))
assert(len(filtered) == 1)
def test_read_old_manifest_v1_2(
tmpdir, mutable_config, mock_packages, mutable_database):
"""Test reading a file using the older format
('version' instead of 'schema-version').
"""
manifest_dir = str(tmpdir.mkdir('manifest_dir'))
manifest_file_path = os.path.join(manifest_dir, 'test.json')
with open(manifest_file_path, 'w') as manifest_file:
manifest_file.write("""\
{
"_meta": {
"file-type": "cray-pe-json",
"system-type": "EX",
"version": "1.3"
},
"specs": []
}
""")
cray_manifest.read(manifest_file_path, True)
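Both removed manifest tests follow the same write-then-read flow; condensed into one sketch (reusing the `create_manifest_content` helper shown earlier; the tests pass `True` as `read`'s second argument throughout):

```python
import json

# Dump a manifest to disk, then hand it to the cray_manifest reader.
with open('external-db.json', 'w') as db_file:
    json.dump(create_manifest_content(), db_file)
cray_manifest.read('external-db.json', True)
```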

View File

@@ -1,5 +1,12 @@
bootstrap:
sources:
- name: 'github-actions'
metadata: $spack/share/spack/bootstrap/github-actions-v0.2
type: buildcache
description: |
Buildcache generated from a public workflow using Github Actions.
The sha256 checksum of binaries is checked before installation.
info:
url: file:///home/spack/production/spack/mirrors/clingo
homepage: https://github.com/alalazo/spack-bootstrap-mirrors
releases: https://github.com/alalazo/spack-bootstrap-mirrors/releases
trusted: {}

View File

@@ -1,54 +0,0 @@
{
"spec": {
"_meta": {
"version": 3
},
"nodes": [
{
"name": "archive-files",
"version": "2.0",
"arch": {
"platform": "test",
"platform_os": "debian6",
"target": {
"name": "core2",
"vendor": "GenuineIntel",
"features": [
"mmx",
"sse",
"sse2",
"ssse3"
],
"generation": 0,
"parents": [
"nocona"
]
}
},
"compiler": {
"name": "gcc",
"version": "4.5.0"
},
"namespace": "builtin.mock",
"parameters": {
"cflags": [],
"cppflags": [],
"cxxflags": [],
"fflags": [],
"ldflags": [],
"ldlibs": []
},
"package_hash": "ncv2pr4o2yemepsa4h7u4p4dsgieul5fxvh6s5am5fsb65ebugaa====",
"hash": "l3vdiqvbobmspwyb4q2b62fz6nitd4hk"
}
]
},
"binary_cache_checksum": {
"hash_algorithm": "sha256",
"hash": "c226b51d88876746efd6f9737cc6dfdd349870b6c0b9c045d9bad0f2764a40b9"
},
"buildinfo": {
"relative_prefix": "test-debian6-core2/gcc-4.5.0/archive-files-2.0-l3vdiqvbobmspwyb4q2b62fz6nitd4hk",
"relative_rpaths": false
}
}
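Even though this fixture is deleted by the diff, its shape is representative of a `*.spec.json` buildcache entry; a short sketch of pulling the key fields out of such a file (the filename is hypothetical):

```python
import json

# 'archive-files.spec.json' stands in for a fixture like the one above.
with open('archive-files.spec.json') as f:
    data = json.load(f)

node = data['spec']['nodes'][0]
print(node['name'], node['version'], node['hash'])
print(data['binary_cache_checksum']['hash_algorithm'])
```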

View File

@@ -1,93 +0,0 @@
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
{
"spec": {
"_meta": {
"version": 2
},
"nodes": [
{
"name": "zlib",
"version": "1.2.12",
"arch": {
"platform": "linux",
"platform_os": "ubuntu18.04",
"target": {
"name": "haswell",
"vendor": "GenuineIntel",
"features": [
"aes",
"avx",
"avx2",
"bmi1",
"bmi2",
"f16c",
"fma",
"mmx",
"movbe",
"pclmulqdq",
"popcnt",
"rdrand",
"sse",
"sse2",
"sse4_1",
"sse4_2",
"ssse3"
],
"generation": 0,
"parents": [
"ivybridge",
"x86_64_v3"
]
}
},
"compiler": {
"name": "gcc",
"version": "8.4.0"
},
"namespace": "builtin",
"parameters": {
"optimize": true,
"patches": [
"0d38234384870bfd34dfcb738a9083952656f0c766a0f5990b1893076b084b76"
],
"pic": true,
"shared": true,
"cflags": [],
"cppflags": [],
"cxxflags": [],
"fflags": [],
"ldflags": [],
"ldlibs": []
},
"patches": [
"0d38234384870bfd34dfcb738a9083952656f0c766a0f5990b1893076b084b76"
],
"package_hash": "bm7rut622h3yt5mpm4kvf7pmh7tnmueezgk5yquhr2orbmixwxuq====",
"hash": "g7otk5dra3hifqxej36m5qzm7uyghqgb",
"full_hash": "fx2fyri7bv3vpz2rhke6g3l3dwxda4t6",
"build_hash": "g7otk5dra3hifqxej36m5qzm7uyghqgb"
}
]
},
"binary_cache_checksum": {
"hash_algorithm": "sha256",
"hash": "5b9a180f14e0d04b17b1b0c2a26cf3beae448d77d1bda4279283ca4567d0be90"
},
"buildinfo": {
"relative_prefix": "linux-ubuntu18.04-haswell/gcc-8.4.0/zlib-1.2.12-g7otk5dra3hifqxej36m5qzm7uyghqgb",
"relative_rpaths": false
}
}
-----BEGIN PGP SIGNATURE-----
iQEzBAEBCgAdFiEEz8AGj4zHZe4OaI2OQ0Tg92UAr50FAmJ5lD4ACgkQQ0Tg92UA
r52LEggAl/wXlOlHDnjWvqBlqAn3gaJEZ5PDVPczk6k0w+SNfDGrHfWJnL2c23Oq
CssbHylSgAFvaPT1frbiLfZj6L4j4Ym1qsxIlGNsVfW7Pbc4yNF0flqYMdWKXbgY
2sQoPegIKK7EBtpjDf0+VRYfJTMqjsSgjT/o+nTkg9oAnvU23EqXI5uiY84Z5z6l
CKLBm0GWg7MzI0u8NdiQMVNYVatvvZ8EQpblEUQ7jD4Bo0yoSr33Qdq8uvu4ZdlW
bvbIgeY3pTPF13g9uNznHLxW4j9BWQnOtHFI5UKQnYRner504Yoz9k+YxhwuDlaY
TrOxvHe9hG1ox1AP4tqQc+HsNpm5Kg==
=gi2R
-----END PGP SIGNATURE-----

View File

@@ -215,7 +215,7 @@ def test_process_binary_cache_tarball_none(install_mockery, monkeypatch,
def test_process_binary_cache_tarball_tar(install_mockery, monkeypatch, capfd):
"""Tests of _process_binary_cache_tarball with a tar file."""
def _spec(spec, unsigned=False, mirrors_for_spec=None):
def _spec(spec, preferred_mirrors=None):
return spec
# Skip binary distribution functionality since assume tested elsewhere

View File

@@ -135,6 +135,8 @@ def _mock_installed(self):
assert spack.version.Version('3') == a_spec[b][d].version
assert spack.version.Version('3') == a_spec[d].version
# TODO: with reuse, this will be different -- verify the reuse case
@pytest.mark.usefixtures('config')
def test_specify_preinstalled_dep():

View File

@@ -15,7 +15,6 @@
SpecFormatSigilError,
SpecFormatStringError,
UnconstrainableDependencySpecError,
UnsupportedCompilerError,
)
from spack.variant import (
InvalidVariantValueError,
@@ -1294,35 +1293,3 @@ def test_call_dag_hash_on_old_dag_hash_spec(mock_packages, config):
with pytest.raises(ValueError, match='Cannot call package_hash()'):
spec.package_hash()
@pytest.mark.regression('30861')
def test_concretize_partial_old_dag_hash_spec(mock_packages, config):
# create an "old" spec with no package hash
bottom = Spec("dt-diamond-bottom").concretized()
delattr(bottom, "_package_hash")
dummy_hash = "zd4m26eis2wwbvtyfiliar27wkcv3ehk"
bottom._hash = dummy_hash
# add it to an abstract spec as a dependency
top = Spec("dt-diamond")
top.add_dependency_edge(bottom, ())
# concretize with the already-concrete dependency
top.concretize()
for spec in top.traverse():
assert spec.concrete
# make sure dag_hash is untouched
assert spec["dt-diamond-bottom"].dag_hash() == dummy_hash
assert spec["dt-diamond-bottom"]._hash == dummy_hash
# make sure package hash is NOT recomputed
assert not getattr(spec["dt-diamond-bottom"], '_package_hash', None)
def test_unsupported_compiler():
with pytest.raises(UnsupportedCompilerError):
Spec('gcc%fake-compiler').validate_or_raise()

View File

@@ -8,10 +8,7 @@
The YAML and JSON formats preserve DAG information in the spec.
"""
from __future__ import print_function
import ast
import collections
import inspect
import os
@@ -20,7 +17,6 @@
from llnl.util.compat import Iterable, Mapping
import spack.hash_types as ht
import spack.paths
import spack.spec
import spack.util.spack_json as sjson
import spack.util.spack_yaml as syaml
@@ -49,27 +45,6 @@ def test_simple_spec():
check_json_round_trip(spec)
def test_read_spec_from_signed_json():
spec_dir = os.path.join(
spack.paths.test_path, 'data', 'mirrors', 'signed_json')
file_name = (
'linux-ubuntu18.04-haswell-gcc-8.4.0-'
'zlib-1.2.12-g7otk5dra3hifqxej36m5qzm7uyghqgb.spec.json.sig')
spec_path = os.path.join(spec_dir, file_name)
def check_spec(spec_to_check):
assert(spec_to_check.name == 'zlib')
assert(spec_to_check._hash == 'g7otk5dra3hifqxej36m5qzm7uyghqgb')
with open(spec_path) as fd:
s = Spec.from_signed_json(fd)
check_spec(s)
with open(spec_path) as fd:
s = Spec.from_signed_json(fd.read())
check_spec(s)
def test_normal_spec(mock_packages):
spec = Spec('mpileaks+debug~opt')
spec.normalize()
@@ -436,75 +411,3 @@ def test_legacy_yaml(tmpdir, install_mockery, mock_packages):
spec = Spec.from_yaml(yaml)
concrete_spec = spec.concretized()
assert concrete_spec.eq_dag(spec)
#: A well-ordered Spec dictionary, using ``OrderedDict``.
#: Any operation that transforms Spec dictionaries should
#: preserve this order.
ordered_spec = collections.OrderedDict([
("arch", collections.OrderedDict([
("platform", "darwin"),
("platform_os", "bigsur"),
("target", collections.OrderedDict([
("features", [
"adx",
"aes",
"avx",
"avx2",
"bmi1",
"bmi2",
"clflushopt",
"f16c",
"fma",
"mmx",
"movbe",
"pclmulqdq",
"popcnt",
"rdrand",
"rdseed",
"sse",
"sse2",
"sse4_1",
"sse4_2",
"ssse3",
"xsavec",
"xsaveopt"
]),
("generation", 0),
("name", "skylake"),
("parents", ["broadwell"]),
("vendor", "GenuineIntel"),
])),
])),
("compiler", collections.OrderedDict([
("name", "apple-clang"),
("version", "13.0.0"),
])),
("name", "zlib"),
("namespace", "builtin"),
("parameters", collections.OrderedDict([
("cflags", []),
("cppflags", []),
("cxxflags", []),
("fflags", []),
("ldflags", []),
("ldlibs", []),
("optimize", True),
("pic", True),
("shared", True),
])),
("version", "1.2.11"),
])
@pytest.mark.regression("31092")
def test_strify_preserves_order():
"""Ensure that ``spack_json._strify()`` dumps dictionaries in the right order.
``_strify()`` is used in ``spack_json.dump()``, which is used in
``Spec.dag_hash()``, so if this goes wrong, ``Spec`` hashes can vary between python
versions.
"""
strified = sjson._strify(ordered_spec)
assert list(ordered_spec.items()) == list(strified.items())
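The property exercised above is easy to see on a toy input; a minimal illustration using the same `sjson` module imported at the top of this file:

```python
import collections

d = collections.OrderedDict([("b", 1), ("a", 2)])
# Insertion order survives strification (guaranteed for OrderedDict;
# also true for plain dict on Python 3.7+).
assert list(sjson._strify(d).keys()) == ["b", "a"]
```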

View File

@@ -55,12 +55,9 @@ def test_write_and_remove_cache_file(file_cache):
file_cache.remove('test.yaml')
# After removal the file should not exist
# After removal both the file and the lock file should not exist
assert not os.path.exists(file_cache.cache_path('test.yaml'))
# Whether the lock file exists is more of an implementation detail; on Linux
# lock files continue to exist, while on Windows they don't.
# assert os.path.exists(file_cache._lock_path('test.yaml'))
assert not os.path.exists(file_cache._lock_path('test.yaml'))
@pytest.mark.skipif(sys.platform == 'win32',
@@ -97,10 +94,3 @@ def test_cache_write_readonly_cache_fails(file_cache):
with pytest.raises(CacheError, match='Insufficient permissions to write'):
file_cache.write_transaction(filename)
@pytest.mark.regression('31475')
def test_delete_is_idempotent(file_cache):
"""Deleting a non-existent key should be idempotent, to simplify life when
running delete with multiple processes"""
file_cache.remove('test.yaml')

View File

@@ -3,7 +3,6 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import errno
import os
import shutil
@@ -171,17 +170,13 @@ def mtime(self, key):
return sinfo.st_mtime
def remove(self, key):
file = self.cache_path(key)
lock = self._get_lock(key)
try:
lock.acquire_write()
os.unlink(file)
except OSError as e:
# File not found is OK, so remove is idempotent.
if e.errno != errno.ENOENT:
raise
os.unlink(self.cache_path(key))
finally:
lock.release_write()
lock.cleanup()
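The errno check on one side of this hunk is a generally useful idiom; distilled to the standard library:

```python
import errno
import os

def unlink_if_exists(path):
    """Idempotent unlink: a missing file is fine; other errors propagate."""
    try:
        os.unlink(path)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise
```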
class CacheError(SpackError):

View File

@@ -292,21 +292,17 @@ def sign(key, file, output, clearsign=False):
@_autoinit
def verify(signature, file=None, suppress_warnings=False):
def verify(signature, file, suppress_warnings=False):
"""Verify the signature on a file.
Args:
signature (str): signature of the file (or clearsigned file)
file (str): file to be verified. If None, then signature is
assumed to be a clearsigned file.
signature (str): signature of the file
file (str): file to be verified
suppress_warnings (bool): whether or not to suppress warnings
from GnuPG
"""
args = [signature]
if file:
args.append(file)
kwargs = {'error': str} if suppress_warnings else {}
GPG('--verify', *args, **kwargs)
GPG('--verify', signature, file, **kwargs)
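The two `verify` signatures above differ in whether `file` may be omitted for a clearsigned input; a usage sketch of both call shapes (filenames are hypothetical):

```python
# With the signature taking file=None, a clearsigned file verifies alone:
#     verify('package.spec.json.sig')
# With the two-argument signature, a detached signature needs its payload:
verify('archive.sig', 'archive.tar.gz')
```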
@_autoinit

View File

@@ -4,7 +4,6 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Simple wrapper around JSON to guarantee consistent use of load/dump. """
import collections
import json
from typing import Any, Dict, Optional # novm
@@ -73,10 +72,9 @@ def _strify(data, ignore_dicts=False):
# if this is a dictionary, return dictionary of byteified keys and values
# but only if we haven't already byteified it
if isinstance(data, dict) and not ignore_dicts:
return collections.OrderedDict(
(_strify(key, ignore_dicts=True), _strify(value, ignore_dicts=True))
for key, value in iteritems(data)
)
return dict((_strify(key, ignore_dicts=True),
_strify(value, ignore_dicts=True)) for key, value in
iteritems(data))
# if it's anything else, return it in its original form
return data

View File

@@ -1,8 +0,0 @@
type: buildcache
description: |
Buildcache generated from a public workflow using Github Actions.
The sha256 checksum of binaries is checked before installation.
info:
url: https://mirror.spack.io/bootstrap/github-actions/v0.1
homepage: https://github.com/spack/spack-bootstrap-mirrors
releases: https://github.com/spack/spack-bootstrap-mirrors/releases

View File

@@ -1,8 +0,0 @@
type: buildcache
description: |
Buildcache generated from a public workflow using Github Actions.
The sha256 checksum of binaries is checked before installation.
info:
url: https://mirror.spack.io/bootstrap/github-actions/v0.2
homepage: https://github.com/spack/spack-bootstrap-mirrors
releases: https://github.com/spack/spack-bootstrap-mirrors/releases

View File

@@ -1,8 +0,0 @@
# This method is just Spack bootstrapping the software it needs from sources.
# It has been added here so that users can selectively disable bootstrapping
# from sources by "untrusting" it.
type: install
description: |
Specs built from sources downloaded from the Spack public mirror.
info:
url: https://mirror.spack.io

View File

@@ -1,4 +1,4 @@
stages: [ "generate", "build", "publish" ]
stages: [ "generate", "build" ]
default:
image: { "name": "ghcr.io/spack/e4s-ubuntu-18.04:v2021-10-18", "entrypoint": [""] }
@@ -9,25 +9,16 @@ default:
.pr:
only:
- /^pr[\d]+_.*$/
- /^github\/pr[\d]+_.*$/
variables:
SPACK_BUILDCACHE_DESTINATION: "s3://spack-binaries-prs/${CI_COMMIT_REF_NAME}"
SPACK_PR_BRANCH: ${CI_COMMIT_REF_NAME}
SPACK_PIPELINE_TYPE: "spack_pull_request"
SPACK_PRUNE_UNTOUCHED: "True"
.protected-refs:
.develop:
only:
- /^develop$/
- /^releases\/v.*/
- /^v.*/
- /^github\/develop$/
.protected:
extends: [ ".protected-refs" ]
variables:
SPACK_BUILDCACHE_DESTINATION: "s3://spack-binaries/${CI_COMMIT_REF_NAME}/${SPACK_CI_STACK_NAME}"
SPACK_COPY_BUILDCACHE: "s3://spack-binaries/${CI_COMMIT_REF_NAME}"
SPACK_PIPELINE_TYPE: "spack_protected_branch"
.generate:
@@ -38,13 +29,12 @@ default:
- cd share/spack/gitlab/cloud_pipelines/stacks/${SPACK_CI_STACK_NAME}
- spack env activate --without-view .
- spack ci generate --check-index-only
--buildcache-destination "${SPACK_BUILDCACHE_DESTINATION}"
--artifacts-root "${CI_PROJECT_DIR}/jobs_scratch_dir"
--output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/cloud-ci-pipeline.yml"
artifacts:
paths:
- "${CI_PROJECT_DIR}/jobs_scratch_dir"
tags: ["spack", "aws", "public", "medium", "x86_64"]
tags: ["spack", "public", "medium", "x86_64"]
interruptible: true
retry:
max: 2
@@ -52,21 +42,11 @@ default:
- runner_system_failure
- stuck_or_timeout_failure
.generate-aarch64:
extends: [ ".generate" ]
tags: ["spack", "aws", "public", "medium", "aarch64"]
.pr-generate:
extends: [ ".pr", ".generate" ]
.pr-generate-aarch64:
extends: [ ".pr", ".generate-aarch64" ]
.protected-generate:
extends: [ ".protected", ".generate" ]
.protected-generate-aarch64:
extends: [ ".protected", ".generate-aarch64" ]
.develop-generate:
extends: [ ".develop", ".generate" ]
.build:
stage: build
@@ -77,24 +57,12 @@ default:
AWS_ACCESS_KEY_ID: ${PR_MIRRORS_AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${PR_MIRRORS_AWS_SECRET_ACCESS_KEY}
.protected-build:
extends: [ ".protected", ".build" ]
.develop-build:
extends: [ ".develop", ".build" ]
variables:
AWS_ACCESS_KEY_ID: ${PROTECTED_MIRRORS_AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${PROTECTED_MIRRORS_AWS_SECRET_ACCESS_KEY}
protected-publish:
stage: publish
extends: [ ".protected-refs" ]
image: "ghcr.io/spack/python-aws-bash:0.0.1"
tags: ["spack", "public", "medium", "aws", "x86_64"]
variables:
AWS_ACCESS_KEY_ID: ${PROTECTED_MIRRORS_AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${PROTECTED_MIRRORS_AWS_SECRET_ACCESS_KEY}
script:
- . "./share/spack/setup-env.sh"
- spack --version
- spack buildcache update-index --mirror-url "s3://spack-binaries/${CI_COMMIT_REF_NAME}"
SPACK_SIGNING_KEY: ${PACKAGE_SIGNING_KEY}
########################################
# TEMPLATE FOR ADDING ANOTHER PIPELINE
@@ -115,8 +83,8 @@ protected-publish:
# my-super-cool-stack-pr-generate:
# extends: [ ".my-super-cool-stack", ".pr-generate"]
#
# my-super-cool-stack-protected-generate:
# extends: [ ".my-super-cool-stack", ".protected-generate"]
# my-super-cool-stack-develop-generate:
# extends: [ ".my-super-cool-stack", ".develop-generate"]
#
# my-super-cool-stack-pr-build:
# extends: [ ".my-super-cool-stack", ".pr-build" ]
@@ -126,62 +94,24 @@ protected-publish:
# job: my-super-cool-stack-pr-generate
# strategy: depend
#
# my-super-cool-stack-protected-build:
# extends: [ ".my-super-cool-stack", ".protected-build" ]
# my-super-cool-stack-develop-build:
# extends: [ ".my-super-cool-stack", ".develop-build" ]
# trigger:
# include:
# - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
# job: my-super-cool-stack-protected-generate
# job: my-super-cool-stack-develop-generate
# strategy: depend
########################################
# E4S Mac Stack
#
# With no near-future plans to have
# protected aws runners running mac
# builds, it seems best to decouple
# them from the rest of the stacks for
# the time being. This way they can
# still run on UO runners and be signed
# using the previous approach.
# E4S Mac Stack
########################################
.e4s-mac:
variables:
SPACK_CI_STACK_NAME: e4s-mac
allow_failure: True
.mac-pr:
only:
- /^pr[\d]+_.*$/
- /^github\/pr[\d]+_.*$/
variables:
SPACK_BUILDCACHE_DESTINATION: "s3://spack-binaries-prs/${CI_COMMIT_REF_NAME}"
SPACK_PRUNE_UNTOUCHED: "True"
.mac-protected:
only:
- /^develop$/
- /^releases\/v.*/
- /^v.*/
- /^github\/develop$/
variables:
SPACK_BUILDCACHE_DESTINATION: "s3://spack-binaries/${CI_COMMIT_REF_NAME}/${SPACK_CI_STACK_NAME}"
.mac-pr-build:
extends: [ ".mac-pr", ".build" ]
variables:
AWS_ACCESS_KEY_ID: ${PR_MIRRORS_AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${PR_MIRRORS_AWS_SECRET_ACCESS_KEY}
.mac-protected-build:
extends: [ ".mac-protected", ".build" ]
variables:
AWS_ACCESS_KEY_ID: ${PROTECTED_MIRRORS_AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${PROTECTED_MIRRORS_AWS_SECRET_ACCESS_KEY}
SPACK_SIGNING_KEY: ${PACKAGE_SIGNING_KEY}
e4s-mac-pr-generate:
extends: [".e4s-mac", ".mac-pr"]
extends: [".e4s-mac", ".pr"]
stage: generate
script:
- tmp="$(mktemp -d)"; export SPACK_USER_CONFIG_PATH="$tmp"; export SPACK_USER_CACHE_PATH="$tmp"
@@ -205,8 +135,8 @@ e4s-mac-pr-generate:
- stuck_or_timeout_failure
timeout: 60 minutes
e4s-mac-protected-generate:
extends: [".e4s-mac", ".mac-protected"]
e4s-mac-develop-generate:
extends: [".e4s-mac", ".develop"]
stage: generate
script:
- tmp="$(mktemp -d)"; export SPACK_USER_CONFIG_PATH="$tmp"; export SPACK_USER_CACHE_PATH="$tmp"
@@ -231,7 +161,7 @@ e4s-mac-protected-generate:
timeout: 60 minutes
e4s-mac-pr-build:
extends: [ ".e4s-mac", ".mac-pr-build" ]
extends: [ ".e4s-mac", ".pr-build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
@@ -241,16 +171,16 @@ e4s-mac-pr-build:
- artifacts: True
job: e4s-mac-pr-generate
e4s-mac-protected-build:
extends: [ ".e4s-mac", ".mac-protected-build" ]
e4s-mac-develop-build:
extends: [ ".e4s-mac", ".develop-build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: e4s-mac-protected-generate
job: e4s-mac-develop-generate
strategy: depend
needs:
- artifacts: True
job: e4s-mac-protected-generate
job: e4s-mac-develop-generate
########################################
# E4S pipeline
@@ -262,8 +192,8 @@ e4s-mac-protected-build:
e4s-pr-generate:
extends: [ ".e4s", ".pr-generate"]
e4s-protected-generate:
extends: [ ".e4s", ".protected-generate"]
e4s-develop-generate:
extends: [ ".e4s", ".develop-generate"]
e4s-pr-build:
extends: [ ".e4s", ".pr-build" ]
@@ -276,16 +206,16 @@ e4s-pr-build:
- artifacts: True
job: e4s-pr-generate
e4s-protected-build:
extends: [ ".e4s", ".protected-build" ]
e4s-develop-build:
extends: [ ".e4s", ".develop-build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: e4s-protected-generate
job: e4s-develop-generate
strategy: depend
needs:
- artifacts: True
job: e4s-protected-generate
job: e4s-develop-generate
########################################
# E4S on Power
@@ -301,8 +231,8 @@ e4s-protected-build:
# e4s-on-power-pr-generate:
# extends: [ ".e4s-on-power", ".pr-generate", ".power-e4s-generate-tags-and-image"]
# e4s-on-power-protected-generate:
# extends: [ ".e4s-on-power", ".protected-generate", ".power-e4s-generate-tags-and-image"]
# e4s-on-power-develop-generate:
# extends: [ ".e4s-on-power", ".develop-generate", ".power-e4s-generate-tags-and-image"]
# e4s-on-power-pr-build:
# extends: [ ".e4s-on-power", ".pr-build" ]
@@ -315,16 +245,16 @@ e4s-protected-build:
# - artifacts: True
# job: e4s-on-power-pr-generate
# e4s-on-power-protected-build:
# extends: [ ".e4s-on-power", ".protected-build" ]
# e4s-on-power-develop-build:
# extends: [ ".e4s-on-power", ".develop-build" ]
# trigger:
# include:
# - artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
# job: e4s-on-power-protected-generate
# job: e4s-on-power-develop-generate
# strategy: depend
# needs:
# - artifacts: True
# job: e4s-on-power-protected-generate
# job: e4s-on-power-develop-generate
#########################################
# Build tests for different build-systems
@@ -336,8 +266,8 @@ e4s-protected-build:
build_systems-pr-generate:
extends: [ ".build_systems", ".pr-generate"]
build_systems-protected-generate:
extends: [ ".build_systems", ".protected-generate"]
build_systems-develop-generate:
extends: [ ".build_systems", ".develop-generate"]
build_systems-pr-build:
extends: [ ".build_systems", ".pr-build" ]
@@ -350,16 +280,16 @@ build_systems-pr-build:
- artifacts: True
job: build_systems-pr-generate
build_systems-protected-build:
extends: [ ".build_systems", ".protected-build" ]
build_systems-develop-build:
extends: [ ".build_systems", ".develop-build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: build_systems-protected-generate
job: build_systems-develop-generate
strategy: depend
needs:
- artifacts: True
job: build_systems-protected-generate
job: build_systems-develop-generate
#########################################
# RADIUSS
@@ -383,20 +313,20 @@ radiuss-pr-build:
- artifacts: True
job: radiuss-pr-generate
# --------- Protected ---------
radiuss-protected-generate:
extends: [ ".radiuss", ".protected-generate" ]
# --------- Develop ---------
radiuss-develop-generate:
extends: [ ".radiuss", ".develop-generate" ]
radiuss-protected-build:
extends: [ ".radiuss", ".protected-build" ]
radiuss-develop-build:
extends: [ ".radiuss", ".develop-build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: radiuss-protected-generate
job: radiuss-develop-generate
strategy: depend
needs:
- artifacts: True
job: radiuss-protected-generate
job: radiuss-develop-generate
########################################
# ECP Data & Vis SDK
@@ -408,8 +338,8 @@ radiuss-protected-build:
data-vis-sdk-pr-generate:
extends: [ ".data-vis-sdk", ".pr-generate"]
data-vis-sdk-protected-generate:
extends: [ ".data-vis-sdk", ".protected-generate"]
data-vis-sdk-develop-generate:
extends: [ ".data-vis-sdk", ".develop-generate"]
data-vis-sdk-pr-build:
extends: [ ".data-vis-sdk", ".pr-build" ]
@@ -422,174 +352,16 @@ data-vis-sdk-pr-build:
- artifacts: True
job: data-vis-sdk-pr-generate
data-vis-sdk-protected-build:
extends: [ ".data-vis-sdk", ".protected-build" ]
data-vis-sdk-develop-build:
extends: [ ".data-vis-sdk", ".develop-build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: data-vis-sdk-protected-generate
job: data-vis-sdk-develop-generate
strategy: depend
needs:
- artifacts: True
job: data-vis-sdk-protected-generate
########################################
# AWS AHUG Applications (x86_64)
########################################
# Call this AFTER .*-generate
.aws-ahug-overrides:
# This controls the image for the generate step; the build step's image is controlled by spack.yaml.
# Note that the generator emits OS info for the build, so the two should be the same.
image: { "name": "ghcr.io/spack/e4s-amazonlinux-2:v2022-03-21", "entrypoint": [""] }
.aws-ahug:
variables:
SPACK_CI_STACK_NAME: aws-ahug
aws-ahug-pr-generate:
extends: [ ".aws-ahug", ".pr-generate", ".aws-ahug-overrides" ]
tags: ["spack", "public", "medium", "x86_64_v4"]
aws-ahug-protected-generate:
extends: [ ".aws-ahug", ".protected-generate", ".aws-ahug-overrides" ]
tags: ["spack", "public", "medium", "x86_64_v4"]
aws-ahug-pr-build:
extends: [ ".aws-ahug", ".pr-build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: aws-ahug-pr-generate
strategy: depend
needs:
- artifacts: True
job: aws-ahug-pr-generate
aws-ahug-protected-build:
extends: [ ".aws-ahug", ".protected-build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: aws-ahug-protected-generate
strategy: depend
needs:
- artifacts: True
job: aws-ahug-protected-generate
# Parallel Pipeline for aarch64 (reuses override image, but generates and builds on aarch64)
.aws-ahug-aarch64:
variables:
SPACK_CI_STACK_NAME: aws-ahug-aarch64
aws-ahug-aarch64-pr-generate:
extends: [ ".aws-ahug-aarch64", ".pr-generate-aarch64", ".aws-ahug-overrides" ]
aws-ahug-aarch64-protected-generate:
extends: [ ".aws-ahug-aarch64", ".protected-generate-aarch64", ".aws-ahug-overrides" ]
aws-ahug-aarch64-pr-build:
extends: [ ".aws-ahug-aarch64", ".pr-build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: aws-ahug-aarch64-pr-generate
strategy: depend
needs:
- artifacts: True
job: aws-ahug-aarch64-pr-generate
aws-ahug-aarch64-protected-build:
extends: [ ".aws-ahug-aarch64", ".protected-build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: aws-ahug-aarch64-protected-generate
strategy: depend
needs:
- artifacts: True
job: aws-ahug-aarch64-protected-generate
########################################
# AWS ISC Applications (x86_64)
########################################
# Call this AFTER .*-generate
.aws-isc-overrides:
# This controls the image for the generate step; the build step's image is controlled by spack.yaml.
# Note that the generator emits OS info for the build, so the two should be the same.
image: { "name": "ghcr.io/spack/e4s-amazonlinux-2:v2022-03-21", "entrypoint": [""] }
.aws-isc:
variables:
SPACK_CI_STACK_NAME: aws-isc
aws-isc-pr-generate:
extends: [ ".aws-isc", ".pr-generate", ".aws-isc-overrides" ]
tags: ["spack", "public", "medium", "x86_64_v4"]
aws-isc-protected-generate:
extends: [ ".aws-isc", ".protected-generate", ".aws-isc-overrides" ]
tags: ["spack", "public", "medium", "x86_64_v4"]
aws-isc-pr-build:
extends: [ ".aws-isc", ".pr-build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: aws-isc-pr-generate
strategy: depend
needs:
- artifacts: True
job: aws-isc-pr-generate
aws-isc-protected-build:
extends: [ ".aws-isc", ".protected-build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: aws-isc-protected-generate
strategy: depend
needs:
- artifacts: True
job: aws-isc-protected-generate
# Parallel Pipeline for aarch64 (reuses override image, but generates and builds on aarch64)
.aws-isc-aarch64:
variables:
SPACK_CI_STACK_NAME: aws-isc-aarch64
aws-isc-aarch64-pr-generate:
extends: [ ".aws-isc-aarch64", ".pr-generate-aarch64", ".aws-isc-overrides" ]
aws-isc-aarch64-protected-generate:
extends: [ ".aws-isc-aarch64", ".protected-generate-aarch64", ".aws-isc-overrides" ]
aws-isc-aarch64-pr-build:
extends: [ ".aws-isc-aarch64", ".pr-build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: aws-isc-aarch64-pr-generate
strategy: depend
needs:
- artifacts: True
job: aws-isc-aarch64-pr-generate
aws-isc-aarch64-protected-build:
extends: [ ".aws-isc-aarch64", ".protected-build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: aws-isc-aarch64-protected-generate
strategy: depend
needs:
- artifacts: True
job: aws-isc-aarch64-protected-generate
job: data-vis-sdk-develop-generate
########################################
# Spack Tutorial
@@ -601,8 +373,8 @@ aws-isc-aarch64-protected-build:
tutorial-pr-generate:
extends: [ ".tutorial", ".pr-generate"]
tutorial-protected-generate:
extends: [ ".tutorial", ".protected-generate"]
tutorial-develop-generate:
extends: [ ".tutorial", ".develop-generate"]
tutorial-pr-build:
extends: [ ".tutorial", ".pr-build" ]
@@ -615,13 +387,13 @@ tutorial-pr-build:
- artifacts: True
job: tutorial-pr-generate
tutorial-protected-build:
extends: [ ".tutorial", ".protected-build" ]
tutorial-develop-build:
extends: [ ".tutorial", ".develop-build" ]
trigger:
include:
- artifact: jobs_scratch_dir/cloud-ci-pipeline.yml
job: tutorial-protected-generate
job: tutorial-develop-generate
strategy: depend
needs:
- artifacts: True
job: tutorial-protected-generate
job: tutorial-develop-generate

View File

@@ -1,329 +0,0 @@
spack:
view: false
concretizer:
reuse: false
unify: false
config:
concretizer: clingo
install_tree:
root: /home/software/spack
padded_length: 384
projections:
all: '{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'
packages:
all:
providers:
blas:
- openblas
mkl:
- intel-oneapi-mkl
mpi:
- openmpi
- mpich
variants: +mpi
binutils:
variants: +ld +gold +headers +libiberty ~nls
version:
- 2.36.1
doxygen:
version:
- 1.8.20
elfutils:
variants: +bzip2 ~nls +xz
hdf5:
variants: +fortran +hl +shared
libfabric:
variants: fabrics=efa,tcp,udp,sockets,verbs,shm,mrail,rxd,rxm
libunwind:
variants: +pic +xz
#m4:
# version:
# - 1.4.18
mesa:
variants: ~llvm
mesa18:
variants: ~llvm
mpich:
#variants: ~wrapperrpath pmi=pmi netmod=ofi device=ch4
variants: ~wrapperrpath netmod=ofi device=ch4
#munge:
# variants: localstatedir=/var
ncurses:
variants: +termlib
openblas:
variants: threads=openmp
openmpi:
#variants: +pmi +internal-hwloc fabrics=ofi schedulers=slurm
variants: fabrics=ofi
openturns:
version: [1.18]
#slurm:
# variants: +pmix sysconfdir=/opt/slurm/etc
trilinos:
variants: +amesos +amesos2 +anasazi +aztec +belos +boost +epetra +epetraext +ifpack +ifpack2 +intrepid +intrepid2 +isorropia +kokkos +ml +minitensor +muelu +nox +piro +phalanx +rol +rythmos +sacado +stk +shards +shylu +stokhos +stratimikos +teko +tempus +tpetra +trilinoscouplings +zoltan +zoltan2 +superlu-dist gotype=long_long
xz:
variants: +pic
definitions:
- compiler_specs:
- gcc
#- nvhpc@22.1
# - compilers:
# - '%gcc@7.5.0'
# - '%arm@21.0.0.879'
# - '%nvhpc@21.2'
# - 'arm@21.0.0.879'
#- when: arch.satisfies('os=ubuntu18.04')
# compilers: ['gcc@7.5.0', $bootstrap-compilers]
#- when: arch.satisfies('os=amzn2')
# compilers: ['gcc@7.3.1', $bootstrap-compilers]
# Note skipping spot since no spack package for it
- ahug_miniapps:
- cloverleaf
# - coevp
# Bad code pushed to github; needs specific versioning
# - cohmm
# - examinimd
# - exampm
# depends on openblas version conflicting on gcc@7.3.1
# - exasp2
# depends on openblas version conflicting on gcc@7.3.1
# - gamess-ri-mp2-miniapp
- hpcg
# depends on openblas version conflicting on gcc@7.3.1
# - laghos
- lulesh
# - miniaero
- miniamr
- minife
- minighost
# fails due to x86 specific timer (asm instructions)
# - minigmg
- minimd
# depends on openblas version conflicting on gcc@7.3.1
# - miniqmc
- minismac2d
- minitri
- minivite
- minixyce
- pennant
- picsarlite
- quicksilver
# - remhos
- rsbench
- simplemoc
- snap
- snappy
- tealeaf
# depends on openblas version conflicting on gcc@7.3.1
# - thornado-mini
- tycho2
- xsbench
- ahug_fullapps:
# depends on openblas version conflicting on gcc@7.3.1
# - abinit
- abyss
# conflicts on trilinos
# - albany
# - amber
- amg2013
# Bad variant fftw
# - aoflagger
# - athena
# - bowtie2
- branson
# - camx
# Bad variant gpu
# - candle-benchmarks
- cbench
- cgm
- chatterbug
# - cistem
- comd
# old version of openmpi
# - converge
# bad variant tensor ops mpi
# - cosmoflow-benchmark
# - cosmomc
- cosp2
# libxsmm not avail on arm
# - cp2k
# - dock
# - elk
- elmerfem
# - exabayes
# - examl
# - flecsph
# trilinos variant mumps
# - frontistr
- gatk
- graph500
- hpgmg
- lammps
- latte
- macsio
# - meep
- meme
# - modylas
# - mrbayes
# - mrchem
# cudnn dependency
# - mxnet
# trilinos variant mumps
# - nalu
# - nalu-wind
# - namd
- nek5000
- nekbone
# - nektar
# - nest
- nut
# - nwchem
- octopus
- openmm
- pathfinder
# - picsar
- pism
# meson version
# - qbox
- qmcpack
- quantum-espresso
# - relion
# - siesta
- snbone
- star
- su2
- swfft
- tinker
# gfortran lt 9 unsupported
# - vasp
- vpfft
- vpic
- warpx
# - yambo
- compiler:
- '%gcc@7.3.1'
- target:
- 'target=aarch64'
- 'target=graviton2'
specs:
- matrix:
- - $ahug_miniapps
- - $compiler
- - $target
- matrix:
- - $ahug_fullapps
- - $compiler
- - $target
# Build compilers to stage in binary cache
- matrix:
- - $compiler_specs
- - $compiler
- - $target
mirrors: { "mirror": "s3://spack-binaries/develop/aws-ahug-aarch64" }
gitlab-ci:
script:
- . "./share/spack/setup-env.sh"
- spack --version
- cd ${SPACK_CONCRETE_ENV_DIR}
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack -d ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
image: { "name": "ghcr.io/spack/e4s-amazonlinux-2:v2022-03-21", "entrypoint": [""] }
mappings:
- match:
- llvm
- llvm-amdgpu
- paraview
runner-attributes:
tags: [ "spack", "huge", "aarch64" ]
variables:
CI_JOB_SIZE: huge
KUBERNETES_CPU_REQUEST: 11000m
KUBERNETES_MEMORY_REQUEST: 42G
- match:
- ascent
- axom
- cuda
- dyninst
- gcc
- ginkgo
- hpx
- kokkos-kernels
- kokkos-nvcc-wrapper
- magma
- mfem
- mpich
- openturns
- precice
- raja
- rocblas
- rocsolver
- rust
- slate
- strumpack
- sundials
- trilinos
- umpire
- vtk-h
- vtk-m
- warpx
runner-attributes:
tags: [ "spack", "large", "aarch64" ]
variables:
CI_JOB_SIZE: large
KUBERNETES_CPU_REQUEST: 8000m
KUBERNETES_MEMORY_REQUEST: 12G
- match: ['os=amzn2']
runner-attributes:
tags: ["spack", "aarch64"]
variables:
CI_JOB_SIZE: "default"
broken-specs-url: "s3://spack-binaries/broken-specs"
service-job-attributes:
before_script:
- . "./share/spack/setup-env.sh"
- spack --version
image: { "name": "ghcr.io/spack/e4s-amazonlinux-2:v2022-03-21", "entrypoint": [""] }
tags: ["spack", "public", "aarch64"]
signing-job-attributes:
image: { "name": "ghcr.io/spack/notary:latest", "entrypoint": [""] }
tags: ["spack", "aws"]
script:
- aws s3 sync --exclude "*" --include "*spec.json*" ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache /tmp
- /sign.sh
- aws s3 sync --exclude "*" --include "*spec.json.sig*" /tmp ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache
cdash:
build-group: AHUG ARM HPC User Group
url: https://cdash.spack.io
project: Spack Testing
site: Cloud Gitlab Infrastructure
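The `specs` matrices above expand as cross-products of their rows; a tiny sketch of the expansion rule (illustrative, using itertools rather than Spack's implementation, with a few values taken from the definitions above):

```python
import itertools

miniapps = ['cloverleaf', 'hpcg', 'lulesh']
compilers = ['%gcc@7.3.1']
targets = ['target=aarch64', 'target=graviton2']

# Each concrete spec string takes one element from every row of the matrix.
specs = [' '.join(combo)
         for combo in itertools.product(miniapps, compilers, targets)]
assert 'cloverleaf %gcc@7.3.1 target=aarch64' in specs
```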

View File

@@ -1,330 +0,0 @@
spack:
view: false
concretizer:
reuse: false
unify: false
config:
concretizer: clingo
install_tree:
root: /home/software/spack
padded_length: 384
projections:
all: '{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'
packages:
all:
providers:
blas:
- openblas
mkl:
- intel-oneapi-mkl
mpi:
- openmpi
- mpich
variants: +mpi
binutils:
variants: +ld +gold +headers +libiberty ~nls
version:
- 2.36.1
doxygen:
version:
- 1.8.20
elfutils:
variants: +bzip2 ~nls +xz
hdf5:
variants: +fortran +hl +shared
libfabric:
variants: fabrics=efa,tcp,udp,sockets,verbs,shm,mrail,rxd,rxm
libunwind:
variants: +pic +xz
#m4:
# version:
# - 1.4.18
mesa:
variants: ~llvm
mesa18:
variants: ~llvm
mpich:
#variants: ~wrapperrpath pmi=pmi netmod=ofi device=ch4
variants: ~wrapperrpath netmod=ofi device=ch4
#munge:
# variants: localstatedir=/var
ncurses:
variants: +termlib
openblas:
variants: threads=openmp
openmpi:
#variants: +pmi +internal-hwloc fabrics=ofi schedulers=slurm
variants: fabrics=ofi
openturns:
version: [1.18]
#slurm:
# variants: +pmix sysconfdir=/opt/slurm/etc
trilinos:
variants: +amesos +amesos2 +anasazi +aztec +belos +boost +epetra +epetraext +ifpack +ifpack2 +intrepid +intrepid2 +isorropia +kokkos +ml +minitensor +muelu +nox +piro +phalanx +rol +rythmos +sacado +stk +shards +shylu +stokhos +stratimikos +teko +tempus +tpetra +trilinoscouplings +zoltan +zoltan2 +superlu-dist gotype=long_long
xz:
variants: +pic
definitions:
- compiler_specs:
- gcc
#- nvhpc@22.1
# - compilers:
# - '%gcc@7.5.0'
# - '%arm@21.0.0.879'
# - '%nvhpc@21.2'
# - 'arm@21.0.0.879'
#- when: arch.satisfies('os=ubuntu18.04')
# compilers: ['gcc@7.5.0', $bootstrap-compilers]
#- when: arch.satisfies('os=amzn2')
# compilers: ['gcc@7.3.1', $bootstrap-compilers]
# Note skipping spot since no spack package for it
- ahug_miniapps:
- cloverleaf
# - coevp
# Bad code pushed to github; needs specific versioning
# - cohmm
# - examinimd
# - exampm
# depends on openblas version conflicting on gcc@7.3.1
# - exasp2
# depends on openblas version conflicting on gcc@7.3.1
# - gamess-ri-mp2-miniapp
- hpcg
# depends on openblas version conflicting on gcc@7.3.1
# - laghos
- lulesh
# - miniaero
- miniamr
- minife
- minighost
# fails due to x86 specific timer (asm instructions)
# - minigmg
- minimd
# depends on openblas version conflicting on gcc@7.3.1
# - miniqmc
- minismac2d
- minitri
- minivite
- minixyce
- pennant
- picsarlite
- quicksilver
# - remhos
- rsbench
- simplemoc
- snap
- snappy
- tealeaf
# depends on openblas version conflicting on gcc@7.3.1
# - thornado-mini
- tycho2
- xsbench
- ahug_fullapps:
# depends on openblas version conflicting on gcc@7.3.1
# - abinit
- abyss
# conflicts on trilinos
# - albany
# - amber
- amg2013
# Bad variant fftw
# - aoflagger
# - athena
# - bowtie2
- branson
# - camx
# Bad variant gpu
# - candle-benchmarks
- cbench
- cgm
- chatterbug
# - cistem
- comd
# old version of openmpi
# - converge
# bad variant tensor ops mpi
# - cosmoflow-benchmark
# - cosmomc
- cosp2
# libxsmm not avail on arm
# - cp2k
# - dock
# - elk
- elmerfem
# - exabayes
# - examl
# - flecsph
# trilinos variant mumps
# - frontistr
- gatk
- graph500
- hpgmg
- lammps
- latte
- macsio
# - meep
- meme
# - modylas
# - mrbayes
# - mrchem
# cudnn dependency
# - mxnet
# trilinos variant mumps
# - nalu
# - nalu-wind
# - namd
- nek5000
- nekbone
# - nektar
# - nest
- nut
# - nwchem
- octopus
- openmm
- pathfinder
# - picsar
- pism
# meson version
# - qbox
- qmcpack
- quantum-espresso
# - relion
# - siesta
- snbone
- star
- su2
- swfft
- tinker
# gfortran lt 9 unsupported
# - vasp
- vpfft
- vpic
- warpx
# - yambo
- compiler:
- '%gcc@7.3.1'
- target:
#- 'target=x86_64'
- 'target=x86_64_v3'
- 'target=x86_64_v4'
specs:
- matrix:
- - $ahug_miniapps
- - $compiler
- - $target
- matrix:
- - $ahug_fullapps
- - $compiler
- - $target
# Build compilers to stage in binary cache
- matrix:
- - $compiler_specs
- - $compiler
- - $target
mirrors: { "mirror": "s3://spack-binaries/develop/aws-ahug" }
gitlab-ci:
script:
- . "./share/spack/setup-env.sh"
- spack --version
- cd ${SPACK_CONCRETE_ENV_DIR}
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack -d ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
image: { "name": "ghcr.io/spack/e4s-amazonlinux-2:v2022-03-21", "entrypoint": [""] }
mappings:
- match:
- llvm
- llvm-amdgpu
- paraview
runner-attributes:
tags: [ "spack", "huge", "x86_64_v4" ]
variables:
CI_JOB_SIZE: huge
KUBERNETES_CPU_REQUEST: 11000m
KUBERNETES_MEMORY_REQUEST: 42G
- match:
- ascent
- axom
- cuda
- dyninst
- gcc
- ginkgo
- hpx
- kokkos-kernels
- kokkos-nvcc-wrapper
- magma
- mfem
- mpich
- openturns
- precice
- raja
- rocblas
- rocsolver
- rust
- slate
- strumpack
- sundials
- trilinos
- umpire
- vtk-h
- vtk-m
- warpx
runner-attributes:
tags: [ "spack", "large", "x86_64_v4" ]
variables:
CI_JOB_SIZE: large
KUBERNETES_CPU_REQUEST: 8000m
KUBERNETES_MEMORY_REQUEST: 12G
- match: ['os=amzn2']
runner-attributes:
tags: ["spack", "x86_64_v4"]
variables:
CI_JOB_SIZE: "default"
broken-specs-url: "s3://spack-binaries/broken-specs"
service-job-attributes:
before_script:
- . "./share/spack/setup-env.sh"
- spack --version
image: { "name": "ghcr.io/spack/e4s-amazonlinux-2:v2022-03-21", "entrypoint": [""] }
tags: ["spack", "public", "x86_64_v4"]
signing-job-attributes:
image: { "name": "ghcr.io/spack/notary:latest", "entrypoint": [""] }
tags: ["spack", "aws"]
script:
- aws s3 sync --exclude "*" --include "*spec.json*" ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache /tmp
- /sign.sh
- aws s3 sync --exclude "*" --include "*spec.json.sig*" /tmp ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache
cdash:
build-group: AHUG ARM HPC User Group
url: https://cdash.spack.io
project: Spack Testing
site: Cloud Gitlab Infrastructure

View File

@@ -1,256 +0,0 @@
spack:
view: false
concretizer:
reuse: false
unify: false
config:
concretizer: clingo
install_tree:
root: /home/software/spack
padded_length: 384
projections:
all: '{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'
packages:
all:
providers:
blas:
- openblas
mkl:
- intel-oneapi-mkl
mpi:
- openmpi
- mpich
variants: +mpi
binutils:
variants: +ld +gold +headers +libiberty ~nls
version:
- 2.36.1
doxygen:
version:
- 1.8.20
elfutils:
variants: +bzip2 ~nls +xz
hdf5:
variants: +fortran +hl +shared
libfabric:
variants: fabrics=efa,tcp,udp,sockets,verbs,shm,mrail,rxd,rxm
libunwind:
variants: +pic +xz
mesa:
variants: ~llvm
mesa18:
variants: ~llvm
mpich:
variants: ~wrapperrpath netmod=ofi device=ch4
ncurses:
variants: +termlib
openblas:
variants: threads=openmp
openmpi:
variants: fabrics=ofi +legacylaunchers
openturns:
version: [1.18]
relion:
variants: ~mklfft
# texlive:
# version: [20210325]
trilinos:
variants: +amesos +amesos2 +anasazi +aztec +belos +boost +epetra +epetraext +ifpack +ifpack2 +intrepid +intrepid2 +isorropia +kokkos +ml +minitensor +muelu +nox +piro +phalanx +rol +rythmos +sacado +stk +shards +shylu +stokhos +stratimikos +teko +tempus +tpetra +trilinoscouplings +zoltan +zoltan2 +superlu-dist gotype=long_long
xz:
variants: +pic
definitions:
- compiler_specs:
- gcc@11.2
# Licensing OK?
# - intel-oneapi-compilers@2022.1
# - nvhpc
- cuda_specs:
# Depends on ctffind which embeds fsincos (x86-specific asm) within code. Will not build on ARM
#- relion +cuda cuda_arch=70
- raja +cuda cuda_arch=70
- mfem +cuda cuda_arch=70
- app_specs:
- bwa
# Depends on simde which requires newer compiler?
#- bowtie2
# Requires x86_64 specific ASM
#- cistem
- cromwell
- fastqc
- flux-sched
- flux-core
- flux-pmix
- gatk
- gromacs
- lammps
- wrf build_type=dm+sm
- mfem
- mpas-model ^parallelio+pnetcdf
- nextflow
- octave
- openfoam
- osu-micro-benchmarks
- parallel
- paraview
- picard
- quantum-espresso
- raja
# Depends on bowtie2 -> simde which requires newer compiler?
#- rsem
# Errors on texlive
#- rstudio
- salmon
- samtools
- seqtk
- snakemake
- star
# Requires gcc@9:
#- ufs-weather-model
# requires LLVM which fails without constraint
#- visit
- lib_specs:
- openmpi fabrics=ofi
- openmpi fabrics=ofi +legacylaunchers
- openmpi fabrics=auto
- mpich
- libfabric
- compiler:
- '%gcc@7.3.1'
- target:
- 'target=aarch64'
- 'target=graviton2'
specs:
- matrix:
- - $cuda_specs
- - $compiler
- - $target
- matrix:
- - $app_specs
- - $compiler
- - $target
- matrix:
- - $lib_specs
- - $compiler
- - $target
- matrix:
- - $compiler_specs
- - $compiler
- - $target
mirrors: { "mirror": "s3://spack-binaries/develop/aws-isc-aarch64" }
gitlab-ci:
script:
- . "./share/spack/setup-env.sh"
- spack --version
- cd ${SPACK_CONCRETE_ENV_DIR}
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack -d ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
image: { "name": "ghcr.io/spack/e4s-amazonlinux-2:v2022-03-21", "entrypoint": [""] }
mappings:
- match:
- llvm
- llvm-amdgpu
- paraview
runner-attributes:
tags: [ "spack", "huge", "aarch64" ]
variables:
CI_JOB_SIZE: huge
KUBERNETES_CPU_REQUEST: 15000m
KUBERNETES_MEMORY_REQUEST: 62G
- match:
- ascent
- atk
- axom
- cistem
- ctffind
- cuda
- dyninst
- gcc
- ginkgo
- hdf5
- hpx
- kokkos-kernels
- kokkos-nvcc-wrapper
- magma
- mfem
- mpich
- openturns
- parallelio
- precice
- raja
- relion
- rocblas
- rocsolver
- rust
- slate
- strumpack
- sundials
- trilinos
- umpire
- vtk
- vtk-h
- vtk-m
- warpx
- wrf
- wxwidgets
runner-attributes:
tags: [ "spack", "large", "aarch64" ]
variables:
CI_JOB_SIZE: large
KUBERNETES_CPU_REQUEST: 8000m
KUBERNETES_MEMORY_REQUEST: 12G
- match: ['os=amzn2']
runner-attributes:
tags: ["spack", "aarch64"]
variables:
CI_JOB_SIZE: "default"
broken-specs-url: "s3://spack-binaries/broken-specs"
service-job-attributes:
before_script:
- . "./share/spack/setup-env.sh"
- spack --version
image: { "name": "ghcr.io/spack/e4s-amazonlinux-2:v2022-03-21", "entrypoint": [""] }
tags: ["spack", "public", "aarch64"]
signing-job-attributes:
image: { "name": "ghcr.io/spack/notary:latest", "entrypoint": [""] }
tags: ["spack", "aws"]
script:
- aws s3 sync --exclude "*" --include "*spec.json*" ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache /tmp
- /sign.sh
- aws s3 sync --exclude "*" --include "*spec.json.sig*" /tmp ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache
cdash:
build-group: AWS Packages
url: https://cdash.spack.io
project: Spack Testing
site: Cloud Gitlab Infrastructure
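In the environment above, `definitions` only names lists of specs; each `matrix` entry under `specs` expands to the cross product of its rows, so every CUDA, app, and lib spec is built once per compiler-target combination. A quick sketch of that expansion for the CUDA matrix (plain Python, mirroring the lists above):

```python
from itertools import product

cuda_specs = ["raja +cuda cuda_arch=70", "mfem +cuda cuda_arch=70"]
compilers = ["%gcc@7.3.1"]
targets = ["target=aarch64", "target=graviton2"]

# 2 specs x 1 compiler x 2 targets -> 4 concrete builds
for spec, compiler, target in product(cuda_specs, compilers, targets):
    print(f"{spec} {compiler} {target}")
```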

View File

@@ -1,258 +0,0 @@
spack:
view: false
concretizer:
reuse: false
unify: false
config:
concretizer: clingo
install_tree:
root: /home/software/spack
padded_length: 384
projections:
all: '{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'
packages:
all:
providers:
blas:
- openblas
mkl:
- intel-oneapi-mkl
mpi:
- openmpi
- mpich
variants: +mpi
binutils:
variants: +ld +gold +headers +libiberty ~nls
version:
- 2.36.1
doxygen:
version:
- 1.8.20
elfutils:
variants: +bzip2 ~nls +xz
hdf5:
variants: +fortran +hl +shared
libfabric:
variants: fabrics=efa,tcp,udp,sockets,verbs,shm,mrail,rxd,rxm
libunwind:
variants: +pic +xz
mesa:
variants: ~llvm
mesa18:
variants: ~llvm
mpich:
variants: ~wrapperrpath netmod=ofi device=ch4
ncurses:
variants: +termlib
openblas:
variants: threads=openmp
openmpi:
variants: fabrics=ofi +legacylaunchers
openturns:
version: [1.18]
relion:
variants: ~mklfft
# texlive:
# version: [20210325]
trilinos:
variants: +amesos +amesos2 +anasazi +aztec +belos +boost +epetra +epetraext +ifpack +ifpack2 +intrepid +intrepid2 +isorropia +kokkos +ml +minitensor +muelu +nox +piro +phalanx +rol +rythmos +sacado +stk +shards +shylu +stokhos +stratimikos +teko +tempus +tpetra +trilinoscouplings +zoltan +zoltan2 +superlu-dist gotype=long_long
xz:
variants: +pic
definitions:
- compiler_specs:
- gcc@11.2
# Licensing OK?
# - intel-oneapi-compilers@2022.1
# - nvhpc
- cuda_specs:
# Disabled for consistency with aarch64
#- relion +cuda cuda_arch=70
- raja +cuda cuda_arch=70
- mfem +cuda cuda_arch=70
- app_specs:
- bwa
# Disabled for consistency with aarch64
#- bowtie2
# Disabled for consistency with aarch64
#- cistem
- cromwell
- fastqc
- flux-sched
- flux-core
- flux-pmix
- gatk
- gromacs
- lammps
- wrf build_type=dm+sm
- mfem
- mpas-model ^parallelio+pnetcdf
- nextflow
- octave
- openfoam
- osu-micro-benchmarks
- parallel
- paraview
- picard
- quantum-espresso
# Build broken for gcc@7.3.1 x86_64_v4 (error: '_mm512_loadu_epi32' was not declared in this scope)
#- raja
# Disabled for consistency with aarch64
#- rsem
# Errors on texlive
#- rstudio
- salmon
- samtools
- seqtk
- snakemake
- star
# Requires gcc@9:
#- ufs-weather-model
# Disabled for consistency with aarch64
#- visit
- lib_specs:
- openmpi fabrics=ofi
- openmpi fabrics=ofi +legacylaunchers
- openmpi fabrics=auto
- mpich
- libfabric
- compiler:
- '%gcc@7.3.1'
- target:
- 'target=x86_64_v3'
- 'target=x86_64_v4'
specs:
- matrix:
- - $cuda_specs
- - $compiler
- - $target
- matrix:
- - $app_specs
- - $compiler
- - $target
- matrix:
- - $lib_specs
- - $compiler
- - $target
- matrix:
- - $compiler_specs
- - $compiler
- - $target
mirrors: { "mirror": "s3://spack-binaries/develop/aws-isc" }
gitlab-ci:
script:
- . "./share/spack/setup-env.sh"
- spack --version
- cd ${SPACK_CONCRETE_ENV_DIR}
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack -d ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
image: { "name": "ghcr.io/spack/e4s-amazonlinux-2:v2022-03-21", "entrypoint": [""] }
mappings:
- match:
- llvm
- llvm-amdgpu
- pango
- paraview
runner-attributes:
tags: [ "spack", "huge", "x86_64_v4" ]
variables:
CI_JOB_SIZE: huge
KUBERNETES_CPU_REQUEST: 11000m
KUBERNETES_MEMORY_REQUEST: 42G
- match:
- ascent
- atk
- axom
- cistem
- ctffind
- cuda
- dyninst
- gcc
- ginkgo
- hdf5
- hpx
- kokkos-kernels
- kokkos-nvcc-wrapper
- magma
- mfem
- mpich
- openturns
- parallelio
- precice
- raja
- relion
- rocblas
- rocsolver
- rust
- slate
- strumpack
- sundials
- trilinos
- umpire
- vtk
- vtk-h
- vtk-m
- warpx
- wrf
- wxwidgets
runner-attributes:
tags: [ "spack", "large", "x86_64_v4" ]
variables:
CI_JOB_SIZE: large
KUBERNETES_CPU_REQUEST: 8000m
KUBERNETES_MEMORY_REQUEST: 12G
- match: ['os=amzn2']
runner-attributes:
tags: ["spack", "x86_64_v4"]
variables:
CI_JOB_SIZE: "default"
broken-specs-url: "s3://spack-binaries/broken-specs"
service-job-attributes:
before_script:
- . "./share/spack/setup-env.sh"
- spack --version
image: { "name": "ghcr.io/spack/e4s-amazonlinux-2:v2022-03-21", "entrypoint": [""] }
tags: ["spack", "public", "x86_64_v4"]
signing-job-attributes:
image: { "name": "ghcr.io/spack/notary:latest", "entrypoint": [""] }
tags: ["spack", "aws"]
script:
- aws s3 sync --exclude "*" --include "*spec.json*" ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache /tmp
- /sign.sh
- aws s3 sync --exclude "*" --include "*spec.json.sig*" /tmp ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache
cdash:
build-group: AWS Packages
url: https://cdash.spack.io
project: Spack Testing
site: Cloud Gitlab Infrastructure

View File

@@ -3,7 +3,7 @@ spack:
concretizer:
reuse: false
unify: when_possible
unify: false
config:
install_tree:
@@ -29,7 +29,7 @@ spack:
- - $default_specs
- - $arch
mirrors: { "mirror": "s3://spack-binaries/develop/build_systems" }
mirrors: { "mirror": "s3://spack-binaries/build_systems" }
gitlab-ci:
script:
@@ -38,8 +38,6 @@ spack:
- cd ${SPACK_CONCRETE_ENV_DIR}
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack -d ci rebuild
image:
@@ -50,7 +48,7 @@ spack:
- match:
- cmake
runner-attributes:
tags: [ "spack", "large", "x86_64"]
tags: [ "spack", "public", "large", "x86_64"]
variables:
CI_JOB_SIZE: large
KUBERNETES_CPU_REQUEST: 8000m
@@ -63,7 +61,7 @@ spack:
- openjpeg
- sqlite
runner-attributes:
tags: [ "spack", "medium", "x86_64" ]
tags: [ "spack", "public", "medium", "x86_64" ]
variables:
CI_JOB_SIZE: "medium"
KUBERNETES_CPU_REQUEST: "2000m"
@@ -87,7 +85,7 @@ spack:
- xz
- zlib
runner-attributes:
tags: [ "spack", "medium", "x86_64" ]
tags: [ "spack", "public", "medium", "x86_64" ]
variables:
CI_JOB_SIZE: "small"
KUBERNETES_CPU_REQUEST: "500m"
@@ -96,27 +94,18 @@ spack:
- match:
- 'os=ubuntu18.04'
runner-attributes:
tags: ["spack", "x86_64"]
tags: ["spack", "public", "x86_64"]
variables:
CI_JOB_SIZE: "default"
broken-specs-url: "s3://spack-binaries/broken-specs"
broken-specs-url: "s3://spack-binaries-develop/broken-specs"
service-job-attributes:
before_script:
- . "./share/spack/setup-env.sh"
- spack --version
image: { "name": "ghcr.io/spack/e4s-ubuntu-18.04:v2021-10-18", "entrypoint": [""] }
tags: ["spack", "public", "x86_64"]
signing-job-attributes:
image: { "name": "ghcr.io/spack/notary:latest", "entrypoint": [""] }
tags: ["spack", "aws"]
script:
- aws s3 sync --exclude "*" --include "*spec.json*" ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache /tmp
- /sign.sh
- aws s3 sync --exclude "*" --include "*spec.json.sig*" /tmp ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache
cdash:
build-group: Build tests for different build systems
url: https://cdash.spack.io

View File

@@ -42,7 +42,7 @@ spack:
+zfp
+visit
mirrors: { "mirror": "s3://spack-binaries/develop/data-vis-sdk" }
mirrors: { "mirror": "s3://spack-binaries/data-vis-sdk" }
gitlab-ci:
image: { "name": "ghcr.io/spack/e4s-ubuntu-18.04:v2021-10-18", "entrypoint": [""] }
@@ -52,15 +52,13 @@ spack:
- cd ${SPACK_CONCRETE_ENV_DIR}
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack -d ci rebuild
mappings:
- match:
- llvm
- qt
runner-attributes:
tags: [ "spack", "huge", "x86_64" ]
tags: [ "spack", "public", "huge", "x86_64" ]
variables:
CI_JOB_SIZE: huge
KUBERNETES_CPU_REQUEST: 11000m
@@ -74,7 +72,7 @@ spack:
- visit
- vtk-m
runner-attributes:
tags: [ "spack", "large", "x86_64" ]
tags: [ "spack", "public", "large", "x86_64" ]
variables:
CI_JOB_SIZE: large
KUBERNETES_CPU_REQUEST: 8000m
@@ -100,7 +98,7 @@ spack:
- raja
- vtk-h
runner-attributes:
tags: [ "spack", "medium", "x86_64" ]
tags: [ "spack", "public", "medium", "x86_64" ]
variables:
CI_JOB_SIZE: "medium"
KUBERNETES_CPU_REQUEST: "2000m"
@@ -135,7 +133,7 @@ spack:
- util-linux-uuid
runner-attributes:
tags: [ "spack", "small", "x86_64" ]
tags: [ "spack", "public", "small", "x86_64" ]
variables:
CI_JOB_SIZE: "small"
KUBERNETES_CPU_REQUEST: "500m"
@@ -143,12 +141,11 @@ spack:
- match: ['@:']
runner-attributes:
tags: ["spack", "x86_64"]
tags: ["spack", "public", "x86_64"]
variables:
CI_JOB_SIZE: "default"
broken-specs-url: "s3://spack-binaries/broken-specs"
broken-specs-url: "s3://spack-binaries-develop/broken-specs"
service-job-attributes:
image: { "name": "ghcr.io/spack/e4s-ubuntu-18.04:v2021-10-18", "entrypoint": [""] }
before_script:
@@ -156,14 +153,6 @@ spack:
- spack --version
tags: ["spack", "public", "medium", "x86_64"]
signing-job-attributes:
image: { "name": "ghcr.io/spack/notary:latest", "entrypoint": [""] }
tags: ["spack", "aws"]
script:
- aws s3 sync --exclude "*" --include "*spec.json*" ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache /tmp
- /sign.sh
- aws s3 sync --exclude "*" --include "*spec.json.sig*" /tmp ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache
cdash:
build-group: Data and Vis SDK
url: https://cdash.spack.io

View File

@@ -32,7 +32,7 @@ spack:
- - $easy_specs
- - $arch
mirrors: { "mirror": "s3://spack-binaries/develop/e4s-mac" }
mirrors: { "mirror": "s3://spack-binaries/e4s-mac" }
gitlab-ci:
@@ -51,9 +51,7 @@ spack:
runner-attributes:
tags:
- omicron
broken-specs-url: "s3://spack-binaries/broken-specs"
broken-specs-url: "s3://spack-binaries-develop/broken-specs"
service-job-attributes:
before_script:
- . "./share/spack/setup-env.sh"

View File

@@ -222,7 +222,7 @@ spack:
- - $cuda_specs
- - $arch
mirrors: { "mirror": "s3://spack-binaries/develop/e4s" }
mirrors: { "mirror": "s3://spack-binaries/e4s" }
gitlab-ci:
@@ -233,8 +233,6 @@ spack:
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- mkdir -p ${SPACK_ARTIFACTS_ROOT}/user_data
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack -d ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
image: { "name": "ghcr.io/spack/e4s-ubuntu-18.04:v2021-10-18", "entrypoint": [""] }
@@ -242,7 +240,7 @@ spack:
- match:
- llvm
runner-attributes:
tags: [ "spack", "huge", "x86_64" ]
tags: [ "spack", "public", "huge", "x86_64" ]
variables:
CI_JOB_SIZE: huge
KUBERNETES_CPU_REQUEST: 11000m
@@ -267,7 +265,7 @@ spack:
- vtk-m
- warpx
runner-attributes:
tags: [ "spack", "large", "x86_64" ]
tags: [ "spack", "public", "large", "x86_64" ]
variables:
CI_JOB_SIZE: large
KUBERNETES_CPU_REQUEST: 8000m
@@ -335,7 +333,7 @@ spack:
- vtk-h
- zfp
runner-attributes:
tags: [ "spack", "medium", "x86_64" ]
tags: [ "spack", "public", "medium", "x86_64" ]
variables:
CI_JOB_SIZE: "medium"
KUBERNETES_CPU_REQUEST: "2000m"
@@ -396,7 +394,7 @@ spack:
- zlib
- zstd
runner-attributes:
tags: [ "spack", "small", "x86_64" ]
tags: [ "spack", "public", "small", "x86_64" ]
variables:
CI_JOB_SIZE: "small"
KUBERNETES_CPU_REQUEST: "500m"
@@ -404,12 +402,11 @@ spack:
- match: ['os=ubuntu18.04']
runner-attributes:
tags: ["spack", "x86_64"]
tags: ["spack", "public", "x86_64"]
variables:
CI_JOB_SIZE: "default"
broken-specs-url: "s3://spack-binaries/broken-specs"
broken-specs-url: "s3://spack-binaries-develop/broken-specs"
service-job-attributes:
before_script:
- . "./share/spack/setup-env.sh"
@@ -417,14 +414,6 @@ spack:
image: { "name": "ghcr.io/spack/e4s-ubuntu-18.04:v2021-10-18", "entrypoint": [""] }
tags: ["spack", "public", "x86_64"]
signing-job-attributes:
image: { "name": "ghcr.io/spack/notary:latest", "entrypoint": [""] }
tags: ["spack", "aws"]
script:
- aws s3 sync --exclude "*" --include "*spec.json*" ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache /tmp
- /sign.sh
- aws s3 sync --exclude "*" --include "*spec.json.sig*" /tmp ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache
cdash:
build-group: New PR testing workflow
url: https://cdash.spack.io

View File

@@ -54,7 +54,7 @@ spack:
- zfp
mirrors:
mirror: "s3://spack-binaries/develop/radiuss"
mirror: "s3://spack-binaries/radiuss"
specs:
- matrix:
@@ -69,8 +69,6 @@ spack:
- cd ${SPACK_CONCRETE_ENV_DIR}
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack -d ci rebuild
mappings:
- match:
@@ -78,7 +76,7 @@ spack:
- openblas
- rust
runner-attributes:
tags: ["spack", "large", "x86_64"]
tags: ["spack", "public", "large", "x86_64"]
variables:
CI_JOB_SIZE: large
KUBERNETES_CPU_REQUEST: 8000m
@@ -98,7 +96,7 @@ spack:
- vtk-h
- vtk-m
runner-attributes:
tags: ["spack", "medium", "x86_64"]
tags: ["spack", "public", "medium", "x86_64"]
variables:
CI_JOB_SIZE: "medium"
KUBERNETES_CPU_REQUEST: "2000m"
@@ -152,7 +150,7 @@ spack:
- zfp
- zlib
runner-attributes:
tags: ["spack", "small", "x86_64"]
tags: ["spack", "public", "small", "x86_64"]
variables:
CI_JOB_SIZE: "small"
KUBERNETES_CPU_REQUEST: "500m"
@@ -160,12 +158,10 @@ spack:
- match: ['os=ubuntu18.04']
runner-attributes:
tags: ["spack", "x86_64"]
tags: ["spack", "public", "x86_64"]
variables:
CI_JOB_SIZE: "default"
broken-specs-url: "s3://spack-binaries/broken-specs"
service-job-attributes:
before_script:
- . "./share/spack/setup-env.sh"
@@ -173,14 +169,6 @@ spack:
image: { "name": "ghcr.io/spack/e4s-ubuntu-18.04:v2021-10-18", "entrypoint": [""] }
tags: ["spack", "public", "x86_64"]
signing-job-attributes:
image: { "name": "ghcr.io/spack/notary:latest", "entrypoint": [""] }
tags: ["spack", "aws"]
script:
- aws s3 sync --exclude "*" --include "*spec.json*" ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache /tmp
- /sign.sh
- aws s3 sync --exclude "*" --include "*spec.json.sig*" /tmp ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache
cdash:
build-group: RADIUSS
url: https://cdash.spack.io

View File

@@ -59,7 +59,7 @@ spack:
- $gcc_spack_built_packages
mirrors:
mirror: 's3://spack-binaries/develop/tutorial'
mirror: 's3://spack-binaries/tutorial'
gitlab-ci:
script:
@@ -69,8 +69,6 @@ spack:
- cd ${SPACK_CONCRETE_ENV_DIR}
- spack env activate --without-view .
- spack config add "config:install_tree:projections:${SPACK_JOB_SPEC_PKG_NAME}:'morepadding/{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}'"
- if [[ -r /mnt/key/intermediate_ci_signing_key.gpg ]]; then spack gpg trust /mnt/key/intermediate_ci_signing_key.gpg; fi
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack -d ci rebuild
image: { "name": "ghcr.io/spack/tutorial-ubuntu-18.04:v2021-11-02", "entrypoint": [""] }
@@ -83,7 +81,7 @@ spack:
- netlib-lapack
- trilinos
runner-attributes:
tags: ["spack", "large", "x86_64"]
tags: ["spack", "public", "large", "x86_64"]
variables:
CI_JOB_SIZE: large
KUBERNETES_CPU_REQUEST: 8000m
@@ -101,7 +99,7 @@ spack:
- py-scipy
- slurm
runner-attributes:
tags: ["spack", "medium", "x86_64"]
tags: ["spack", "public", "medium", "x86_64"]
variables:
CI_JOB_SIZE: "medium"
KUBERNETES_CPU_REQUEST: "2000m"
@@ -131,7 +129,7 @@ spack:
- tar
- util-linux-uuid
runner-attributes:
tags: ["spack", "small", "x86_64"]
tags: ["spack", "public", "small", "x86_64"]
variables:
CI_JOB_SIZE: "small"
KUBERNETES_CPU_REQUEST: "500m"
@@ -139,12 +137,11 @@ spack:
- match: ['@:']
runner-attributes:
tags: ["spack", "x86_64"]
tags: ["spack", "public", "x86_64"]
variables:
CI_JOB_SIZE: default
broken-specs-url: "s3://spack-binaries/broken-specs"
broken-specs-url: "s3://spack-binaries-develop/broken-specs"
service-job-attributes:
image: { "name": "ghcr.io/spack/tutorial-ubuntu-18.04:v2021-11-02", "entrypoint": [""] }
before_script:
@@ -152,14 +149,6 @@ spack:
- spack --version
tags: ["spack", "public", "x86_64"]
signing-job-attributes:
image: { "name": "ghcr.io/spack/notary:latest", "entrypoint": [""] }
tags: ["spack", "aws"]
script:
- aws s3 sync --exclude "*" --include "*spec.json*" ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache /tmp
- /sign.sh
- aws s3 sync --exclude "*" --include "*spec.json.sig*" /tmp ${SPACK_REMOTE_MIRROR_OVERRIDE}/build_cache
cdash:
build-group: Spack Tutorial
url: https://cdash.spack.io

View File

@@ -1,164 +1,37 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGKOOXIBEACy2KAwhV/qjObbfLc0StT0u8SaSUcubHE3DXpQOo69qyGxWM+e
2cfOt7cDuyw8yEYUzpKmRhUgVbUchCtyDQZlzx2nZ0IfuAudsUl0RANk7nbZdjeG
F4C21NvA69gtT4sGrDqwTGpKCLxcAUwwpYw2WUcyyz5e7mlGdxA4DmJ8uThDFHhd
Yoq2X8YHHvBRIIS8q+T1de2NeFsSIEV2DqYx/L+z6IWkgE60mJy+5mfcuT/+mRpX
iZ+w0JAUJDbATndp24TahLo60B+S/G2oIWN5WbKYfsJmHbU5EgjbbC/H8cITt8wS
ZTGm+ZnSH6QMPGc8A1w/n/77JAAyVpQW907gLnUOX8qRypkmpNUGVw5gmQj7jvR0
JyCO0z3V2W8DCvxzQR+Mci13ZTGw53pNiHQNn0K1iMT2IJ6XmTcKXrBy37yEQClx
06h3DxSWfNQlQXBO5lnvwetMrU3OuwztYfsrnlqM3byLW21ZRCft7OCSzwiNbWu/
qg8eyA9xiH/ElduuA1Z5dKcRY4dywHUy3GvbqkqJBlRCyySZlzeXkm3aBelFDaQ8
rviKZ9Bc5AIAgjUG6Sz5rZu2VgHkxPo8ZzVJXAR5BPzRcY9UFmGH5c2jja/Hf2kd
zP43wWAtXLM8Oci0fb5nizohTmgQq0JJNYRtZOEW0fSRd3tzh/eGTqQWOQARAQAB
tDZTcGFjayBQcm9qZWN0IE9mZmljaWFsIEJpbmFyaWVzIDxtYWludGFpbmVyc0Bz
cGFjay5pbz6JAk4EEwEKADgWIQQsjdMiTvNXOkK9Ih+o4Mo8HCraLwUCYo45cgIb
AwULCQgHAgYVCgkICwIEFgIDAQIeAQIXgAAKCRCo4Mo8HCraL4vfD/9EQ5sotTYj
83YmyJP/MdRB358QzmKV7+ewcE2+f184kx7z4IK/gBFYz9t32rnBBgm6/aQASHD+
scm/mqV5+26B1nLUr5GbAEAm5yFDJulTBXX/TZpc5FyC+AgSh4j47kc9ZDM+YHFU
uPLNqhJI19LVMSyleifqd8Xbqo9/aCPw6eRZMGmw8estV+QgHCT5oD1Y4SSRm2th
/CUspVEWr8Dg0j3n7N/3y+pIBlf5lQ3wBeRgvH2c2ty28f8HauHiXAi0tkdCTU1W
509avGE8a5XWYL0zjBrcintEfeaf7e4zOW+YQdVmIp1HDvBkBrpuPSUwecZ5jtCX
1w74wa0XQOYy3J19Vhc/t2DN63wYCKV9BxznvOFOWDn9foo2kUkx7E5gnYcV/P6n
cGG5iikikRvK0IKGGmaYa4W0VNDK819npsTM0EFxE+6j9VWsGBXHSSdPk1vIRYuw
tWJi79/v5wpnsqwTycYv1v4V0PGeIi+dtCpFWj/4BFpqPLVVT4SswmHmEikAuFaC
APczWR8H62Y+qrLyMlk66GdyU0j0+SkWmLZVhqAinskcpnnwXzHS6cQtIePjygXy
09whvSRq1BZtiNqcND/hz6cJR/v29brxXhQplPLCu942OixvG+DUjTPn/GhMfA8R
mXvx0MFYcw2T0m8JfqYOXra7Tbi8UOulP4kCMwQQAQgAHRYhBFZuUukEjpZald+h
cYfpC6CmUwNPBQJijnY4AAoJEIfpC6CmUwNPcKYP/22pYBPIIUZwBeD8q/DileiX
L8YgHU+ziy1qxHPiVXuwWEvQUwnCG5BeyvsCViQrs1670OCqtsKk/sBJJvp17kkt
tKmBjUoZDS+YbD6d4q9QgHVN8jW1HzotWPGKrVV4oDdnNBBKoS5h6i4tpMca3kXq
d7Ow3qzxSYkHCmJqqXNCBQzgIlha2FijMNWe1cnJr+IpG6eJ5wfeQm3llIVHbeCj
YZyAorjRSzsopfWwXrQJXLTBJzgj4JDUWMmk6IdKcF41NOWB6f04Eky0zEHX02Xl
eRxJygan9bSayELz3vjWQu2aBR4NxDdsvRy2E35wfHHruqLKTzaC1VU3nYGDPcby
W+A/fA6+sY7Gh5y41xiD6hyDJxKkhEn9SfuF47YghFtW0r5fPe4ktwjI5e0sHWI6
WFdNLF51F6B7xSbzNwRPw+XSWNvLvXvOu94FSPV1aoT5Vgq79377q/wlqmv/SSk/
FcZFmglmIwJiLj/qUt2TlAkXb9UxQCfP3Sn98wGMOtZ6Hm0Uga3zjmw9dCTY/anL
n/RZYTJSXlEDAtnT/aU708zsz5skKHfLWTk4/1wLg/mUqImLJGRQ26J1C1njl9RK
p5SRAObb7qvoyM730avvGkiom5/GwxAzXiRs2c9gbuTEeCYhcBrwMSK1QiHZbG8y
IAe8fagtFxvtN5oJVmqpiQIzBBABCgAdFiEEE6CQhKr7XHIVX7vh1XGr4uJd+5wF
AmKOeGUACgkQ1XGr4uJd+5xphRAAmEINmSJVDApYyhlDRCZAE39oD+FUlbOyOkr+
yGXsCSz2cnMXdypBms3hVhs5ut6zmiekFmGJ1X2SrHWcm/NoZZAZDu+8LT0HF90W
MG+u4hP66lsglwlVYhzX8XjQpoSCunFHUb7HAShNVPSC30aWlVJZTyowLIuXiMmV
pwB5lhtHLoMjjAEpsy3l7lEey/+clL03fW3FzZWAwY2RgORNlz6ctaIPlN6Tjoya
iO6iFWE8/DiiATJ4fass3FijmfERD8o4H6nKKwEQzYTG5MoMuKC6fbokCL58wQ5k
jJ8wFpiajFKFsKfuk5+0q+LZ3FuLY0TUAOR7AJUELqTsBMUM3qf9EFU7UHN4KHoK
+3FrqCouT90pUjq8KoYXi0nfOqROuXJBAcx+g9G7H6x8yEPISE4la6T9aNujavip
jxMP7D3fY5leqxRozVtUZ5rHAvN1s6DtzPnrQVsn9RiwRJO3lYj64wvSRPHkRAVL
U9RtZXonlHsx1PVbx4XlAuSFOxLHApBWM+CqemyhXdtC1MCpTgqOpt7rTendZnqc
T2tDcaZIHw3+KAdXrU9zvvEqkk/dfnckdDQZTSd96r16HGLtYj6ILzd17/j/qTnq
BYeZl9CbYtyF117zyIjzaq8oTZxj97Tu5a9WXNOyaeB4e6A9AQeSJVA9vFM9vUCM
5pJRJzqJAjMEEAEKAB0WIQR802Mbz07k2wBqg1zqYglGK5OFuQUCYo54fAAKCRDq
YglGK5OFufWXEACH06ybO/LwTX4I+aoj19d5OSe8xaSZBHu2WVh2KfbPxInXNIhQ
1/W44hnQNGS0B6a/fN80xPD+tWdBLF1fAl7tgz+KBECc9fTsebeXMmY/FJZP60uL
1Da1RMEOd3lg7DgNfLjgIiXi4eqqYHHRwCSFww4VhZJ3lxnXrcuFwLXDAvGzB/5x
mK23fhCQ0tK9I/jcKyzrN3a/tcU7bXuQ0ewRDtVvfCnimGAjcayIpb3bhn0CByxJ
B4fH3mIw1/nMzBZr6PtNEPwSRLEnVsWwmUs9sCHWftgAqDpF3dnC5nk5XZdr+auo
3MeP47gbMW6xQcGY6XJgoWNIeNUpVJg12e0AsaOdEJJO4NU+IEmb5d5M889Kr3e2
sor5/6kbRAjh5RUShOLrae15Gkzd13K5oOlhBcyTTEqf4QSnOtvnvpKghHjaBkSh
em2N+HfZ8hmahRv3DI79rVx/vjLSFwc9Y/GYw56Xu8bBFmHP4rdFZP4VC87EjOwa
zNb/0XfpUJkFsGXyUdrvd+Ma4z8PA3Got9ZO2ZV/yAMnCpyTX5pUvF6RBA2zlv2Z
8EqafabD/iGJJFubHuoDLzTCSgD0hmO6l8r/lR5TmjHH2fw8kYXOCpRe2rbtyS1f
ttwP1IvoFb7mVQ6/Q7ghMxPiOxg/cGtW5fK+TYGU5cyYFu0Ad7dL11rgI4kCMwQQ
AQoAHRYhBBmw8mZl6mDaDDnZvOI98LWuJhGfBQJijnd2AAoJEOI98LWuJhGfXIEP
/0Nk2P2rMrX4MrfxACkHnRFS35GKbs2EqQGy2mxok5s+VDE/neKLozzBU/2x4ub8
P8UrXKBHAyW2MwZ1etx27ARoWcbGaOICbIMUCgmGSCqMlfo18SJVyssRPDvKxy/+
S7PIwgdFlRb79UxEMYi7L5Ig0H5nHYaHPAqvzTOssy+OXub0oU+sCK4s9WD9OPBf
vA9dfGJMnLyi4wTs0/6LXKAf5BwGOzeXhWL4GQmpRqb8Kw40BgBXhye9xUwr72BI
iAVVfUG5LTY9K8b9eK6DB78fdaZsvtfgY85Ou+OiMjEPitYCQF1mIt0qb9GcaC1b
VujdvM/ifpxXpyTdC95KUf773kTrv+v8842U99gccBNQp6rYUHkDT3bZXISAQEpd
c22iclcr6dCKRTRnaQpEkfDcidTEOnpadEDjl0EeZOeAS333awNe4ABP7pR7AvRW
2vg1cY7z52FEoLG20SkXbLb3Iaf5t3AOqS5z0kS59ALy6hxuuDJG8gb1KfzmJgMn
k9PJE8LdBVwsz346GLNUxLKzqBVFRX4N6WKDYURq30h/68p+U24K9/IeA5BTWsUQ
p7q11dk/JpkbyrK74V2hThwEyTv9hjMQuALTNr9sh3aUqT5XbhCgMnJcb1mkxhU9
ADKN+4h+tfuz6C0+wf3nFE08sUkStlN3Vbh/w+nJaII1iQEzBBABCgAdFiEEuNcs
eEKe1WcgoStyNsSqUuTK0GUFAmKOXFQACgkQNsSqUuTK0GWOmAf/RNODdqbdXkyP
8J4ePZDcPbzEhGNW6piSCbyJ7cVOQHLYuwV7i6gJ3L+h9AGii2fI4YBQkEzjiGJm
8o0HazR76R2iT7QcFQ4vLkX8wYgO5q4SFGlzOfhHr+OOrd2r2D6Pid/oCADUfa+x
NJt9V9naVCjkQq6rAFUK3cpwVCC8pB4at5+sL503u718Ce4u0uzuKwGXqmhsRruF
9ZIn1hkoneKKFDb7C1zT08oegy484LcjTfbzKuBHZOXW0cNkqtSCuL9lBmrD2t+r
KZWwNX8DLIOZtfxux+jCxks2dp903Zd9H5PlaZOldMbXGIEINfcUurl5H79vAXqW
EUOtsYYX74kCMwQQAQgAHRYhBPMiP8Tk+ZH+vALLH8FnKfGqz2bGBQJijpCCAAoJ
EMFnKfGqz2bGrZoP/jobYML0JDIQT22TyNfz2Q38WhLtdaEnKeEMU5HDq9uEOjjz
BZewMB4LLTgE3uvXngL7K/2R6i6fu1YAOX2RaySG4VfxNd7z1TTHnTWmeKd+rtB/
o5/iH4hp3uLYFPvWqKjr7PPuXzi1JE99lEThuyqM88GcKfuNvldJtjhALZL548St
es6D82tGumbWzFEeyDbCxJRBOWfX6vkVBR7w3Q2NRxEOtvc58mhXiHOs2/vXMMzr
1RMYzzYvq8jXi8uaa5Esmeo6r1Md67oaNfPNulhYUe4mKYwPuphcBSNCfRGQnRlU
8oToURyRcXI6Bd9dJSvtznMHrsWO+Zm4O752cvfi/GKHUPVJ/FvO5L0qo2546+tn
nIDPVhvAnhWO5+95ooRIXsxa2mzYtaudAu6pcI6OaMANjJ8fUTxFmedN9HqlkKPF
ghvcwqJdpmpRs05nAuNzHmnKkMVI8R7uBavB6F5cAfonNVgoCKCsFHpG2jGViMRj
/OtovngNpYrweyIiPxRYGhKiP3rNzNjMT2HfVz8xiTttIeMXU+JrBem40CagxBFa
JYsZjATkKDJ3tXlRs1JGqKiI3Veia4elCCLv6uzfKVGAg4xMWjtKbdxpi3ku8JSh
Ntzc928pJkHprP1WPGVbZ3xPPJ3N+WTawGYnblcLlRFeVErdKSJeQOknbH/EiQIz
BBABCAAdFiEEikTOi7BILoKkJagWI2JUH20U7YQFAmKOvW0ACgkQI2JUH20U7YRW
Aw//dBsV+CCqb0i9s8O6l/L7kZG///jxqobv6SLlUOKuFIckGMKBVi7QSLC11Wgv
Z8ETswrSDSP9YzP46TP4Ad2tZQoulhB+sEfNIsRu1doYXPmr23T68Jof4dinCTVO
rgoU8XboKjzQzy27ziziJ4OZxRl9c4zIaSw4FyEzt4BLKAByi9NT5CtJN5Sr3v9X
CncuKnekqpTpLltbLJYYK+DT+Vy8+FT9XehQbndKtM9i4FXvp6xzM61GfL3s3MA8
Xosol+8OrwYLKhUM5mbg0sqreqVRcmeiRCBO5MfCfrpukCbqBmwi/E7qyw+S4IAl
HdU8JVRQvCoJCCy8pZqfMAdgx35E3CM7/GIlb5Dk2teKPlmSBXu7ckMhhFauiDFK
ImX6ThCe/uExvK5npiowKvQsEjhDeUU4zt9N8UxgaRpPYr2tyHnqqRTeEtk9/K9j
O8WH825DeKhjwU6Eg4Qtb8HmlA0fnZ//L826KC8mTkFSbkdKtIMvlq8u6nAgHsMF
GoUbBvtDbvenZhkndQpuDd2tXpSob+9f1TqZGfWv2nbOtfEfXEf9BwayX+iJOH2F
cC6bJbIG5UTirbjxDmVmKn71CxgJTHRqSUULKE4rimpnDpN6S2qw4ZEK4EIkSH6b
qmlDWCsiprIc2KwiTOz8wCCsKNwXnc3PfFtM8bGO6N9Yz965Ag0EYo45cgEQAOnf
WNZhXdeapCMfs+YxuySSfCw8X2ucHegKV0tKCg1ozKY9Z/CQ4apcNxJD1ZS++0y8
38OjNo7JayAp4joCT+/lFN+OzFuMf5xc5E5pQeF1UAsi27FJFJWX9NIvdZ0BmTzy
E0GJGg9CSUPOfImV1fW7uVWkkzi+UE09pe8llmkY3JCX5ViIH0bTFzF52BZL06np
0MxNFwBVm4sZXyPOxInqOm66gICrbxLDriz3EYa2bJm17I6Kclvw/X/ohCeGU9WW
KEzWTE03OxRMqLlPfxgqVshIz2dO77u47yehI6BOsOhpp4Ag7rRgLpRs1Iemg02/
Oa82FiiLw0S4g/4UXyi4cZKF4vCefNs2z+IaSEe3l9z9Gg19gPsanrYP+CfZRsXk
2EYxwt0Ffmma2rQ24zQ9yJ5NvqiINX9xd/LjOv8pmkNArKXsbtY4KwtH7zVkiS4Q
9FsU5C/9BaM9z/fMpQNUq6mx9FCw0C+NntWYvfXn4PFNPC7klYgM/2VFvxq+vBk6
CVpbcSoYy1+7uZZ+hskyQek8Dbtnk7fLBRC4gHK9T2gbroo/eS0u9b1PFlaka0HG
1zKwU1u6Iq19r2qapKoMf3SGStEilh8x0eyCdEqqCHEi/HKYU4zGa54zBGlpkmMy
Q7VZmeozNpY/F02KZByMWIstjKZGEINXhaI/2F5HABEBAAGJAjYEGAEKACAWIQQs
jdMiTvNXOkK9Ih+o4Mo8HCraLwUCYo45cgIbDAAKCRCo4Mo8HCraL5bAD/9bS+7u
0Xq+kt8/sQpWnmwevgcnsSxVwENf1In9l/ZSShtaLvBUaj2Ot80bnfTl9dVruH8v
+Nh1HrzklGwwcqNP5jQZEVFaqQO/jA/c84gKvXoQPUA555rcTonZYquEBMqEMlJe
jil+Lb+pfII+BVD0wvQDCnpzni8u7r0cEjPEMevLoTJdcgNhn1WuvDRFp8+HtlTx
198wcZbAPgFHRpE1NQjrP2CBket/ZIxuvuAEf9TpYifsjG0NZcdxeN0JZ3HOKZUT
kKgNanm+PxqXRynnrdEEH6I2vPR+KMr5+ZqFcXbamvDy64Xewi0EVYecQk1SllfC
lCuDih5ZqcjmZqdcqoFxc+tc7gcb509Fo/+mBCq6nXEVorKPJqdoW3IGbz29Nkqc
ZczFyeOpCk3UaPCz3kxebVfaDydiRkFnWlFEZNkAidZGOKs5ykEmEvq8c9+dSyaS
3Y7xcx/SaGyF/a4+9cdd974/HcPKcRHRi7nXrn+yEVQq8CZAvKWVYyme461isPkz
loWb1AKXK5kHR0CFq4HTXMZrrNsdWoU2lP+BNVg0dQG4Z0QpcOrgwbLjXrqTVrbB
PITOx6cYB7RafdBmhBF+8qOHmr7wwI92DV0vYeEjlGT9FazAquCAMVqBnqz/52GM
Vzr8QG9fSxBmTeoMQkkdXah/sex7zWVfUqoJLrkCDQRijjtHARAAmPVZ6gtFeFcI
NzMD3iZVkFglIM91SfdglmgjxdC4mAjSGQk5dBxrqFJrT6Tn5b/i8JLGgrOpaDvb
O50chDmys+VtWEmoxa9kl4BOjzEbr7e0UymoK1GTK6ZAIIrHytFkSjcP3LSi5Am9
aCwVhZ3NoH3LKBzj+ast7I0QJT/mt5ti2h/4qEw3qJ6ImKUmjfLTOkTjt5NfWgT9
McdnV74XpOi3OdIL8vTp+1S5Dm3pByVYJdR0mfD0uRg9+OHN04t4D3k2hPovBxJN
E1uKE5IPt6RJ+E1POCfA1dM1PsaSf1S4zyyIlUK7HM99oXUg00JXHBUD5z12hXuy
5sQZE+lFaMRej+uwO/uO2YiebansrQc+MMnrkAKElCzb0esuRNWff5CPnQ2K0ZL/
x+xNfziRNvTAICz6ir4bPONa201V6rDFoe4HNLxL0u+mLnag5i/4xiE/eWtzEBSp
F1HNn+LSrNn65JDjU1y3o0iDwZZo1hVzG2zgx8f/7gXDJpMcVHpLWz5Why77643l
NoR+qdUwoofzC5Soz+m1SoOOoEfCTiZZaukaOSFDeh/ZQ7M/MvQ0ytd8HZwFm6po
QJYQiwJUV9Es4szqndr8bxHxoy55mJqewe+cTvqB2Nqy7OjXNFZD37TuLLw9clJK
MLKFrz2VitRyRADhg11oCmGp6GAZBLEAEQEAAYkEbAQYAQoAIBYhBCyN0yJO81c6
Qr0iH6jgyjwcKtovBQJijjtHAhsCAkAJEKjgyjwcKtovwXQgBBkBCgAdFiEE0sfr
PysF+oZZDSk8BAAbLj2wxyMFAmKOO0cACgkQBAAbLj2wxyNbCw//ah/m1jrdV254
WAEt800l9brvONRU5YwjZE9HuHaTXxaRU5jd55lreFGWqzYJDe1vKkR0BdCGHIB8
KERNGXq6oQUJ4+oVrLWeV11ojen7xmbSYwEvd5VEK2ihYHyq5n0n2mwOG3+9HPkm
5N1QqjykePr7xqkBn3xgMJgB4KGydZi/zk5mNM3R23T061gn0G3TntnGWjppzRzx
a4CUz4b+ys6yz6I6LQ1pIG3pYeXgb6p9McdWP3+gec1xYPTgR001AbcbuMAXzjRI
GNGblsy0CAXTPju1451129wTx9l5x4sLLscmHv/TDRT9/YpEPfhA0xtL/XdNgG9o
lndhi6UsC8dg68sKI6MZzbFJBUmzwvThZi9DjvS7tI4IynQEENB0rEpNwBgNpE3w
OvoJBB+Ykr7Lyg7M/AdymBu4sHTW6nUuLlDo45gHAaFkKdM+WCRllvdRDI6/CnDh
dqSnqrfcyFFgzPgrA3fqoQ1TX8pgoMWasnBShaZn2vmXBUcfImxKCCVSpqhzfSIx
lmkneo3WC2hqkMDTfcx8z77u37pJYPWMiHidGqkJCRr7P4K112SoH5SVa0R94yNu
5zOolbyvt1lgKYmS/UDpxfHkUHL1WVJo0Ki/LTADNYCMYUKz/4E3Em5T9DIBRrkq
F7BWxpCF3kogEnpOQ5Qj9cTOZsfBEqM4jA/9H233pFPKvgzYcYst3l65Ootx+dsh
5gHIbp0aWBGGxLMbwkjSXdIbGxZrmxTng+9CWgpAX9j5lkjCNJxEpxzYGiRwJE2N
p7O+dGnfO9VTsVWqcCc73T4s6HVnmxX8ZxGSW2LhoI1na+rqnz6rzz1Rdj+RnG57
HHDzGKvzjbFfbQREweduG+M4JbtOMLCCooojwzxyCRTbNsQEh0uleMyse1PjmnYz
dII0l+arVaOg73i+1KkMyCKvxd+zPw5/gPM62LcUxkqMfnFgBHyzh/W199b6ukZP
DODeXOzKhiJiIVtxztl6L+hpXY+yE60iPgcbIiP4qMZxFGGolM3LfDzzD57evJWH
6SuDy3yv6vmUcZgmEpDSU/wbByNN7FNTHblrImRDGHg9Xs/9NQV0ngA96jPvILsu
BUw4y1ybVwu5GgNct3VbDGzlaSpUpYnSyp04id3iZJRIYKBln+wJ7q73oo8vltlX
QpOENRJHJoDECq0R/UDg1j+mwEw/p1A9xcQ8caw9p0Y2YyIlcSboqeKbjXrOv4oi
rEUyfGBlSvk+8Deg7TIW4fGQ/uW4sthkRNLpSxWgV+t8VRTEtQ6+ONL+u2ehQkbF
7+kvlN1LTQNITLpJ+8DGlYke8qlloGY2ROPR+INyNbJCiJOLba2CjRBu48w30iXy
cHFsNYXiYw5O7lA=
=cjRy
mQENBGATaEABCAC//LQBgVfSPb46EwV+JLwxdfGhbk6EwJFoxiYh5lACqJple+Hg
8louI+h5KFGxTGflhmVNCx3VfHS2QsPySp+qu3uCsmanQdkHyQkaji4YO5XzJEhI
IWt6kKfUcttvBSlfXHhtPNvO0PPeRTQypqMlHx4QfJqmuH7jDMZUf3Kg6eLWOuEb
cW49KeAPfXdGXdq91a7XCNsurvyz8xFxgH/CywXF8pG8LVLB4dfN8b7M+RWxA023
UZR1f1tYg9eSPRwh7V4E69PcjF7WqvWRw+Uhkes7rUdDR2ZWXhFSn2BvL6p1MdI8
ZHCFHO6l6QwuYmSIhsCyh21YSwOb/GC0Z8bxABEBAAG0MFNwYWNrIEJ1aWxkIFBp
cGVsaW5lIChEZW1vIEtleSkgPGtleUBzcGFjay5kZW1vPokBTgQTAQoAOBYhBImQ
4GrBi1LDHJkUx5Mo0B3n+1WHBQJgE2hAAhsvBQsJCAcCBhUKCQgLAgQWAgMBAh4B
AheAAAoJEJMo0B3n+1WH7cMH/1Ay1GUB5V9/K+LJGFDLXzCVTqwJbJUB2IeXNpef
qQfhZWQzOi8qXFCYAIqRlJH3c+rYoQTpR+l7uPS87Q24MLHT/mN6ZP+mI4JLM00T
CUhs18wN5owNBM7FOs4FTlRmvWhlTCjicCXa0UuH6pB/T35Z/OQVisAUQY82kNu0
CUkWLfmfNfm9lVOWWMbceJ49sDGWsApYv4ihzzIvnDSS6n5Fg1p8+BEoDbzk2+f5
+Jr0lNXZmQvTx7kGUnwRfuUxJifB8SNbABWL0En2scaE/QACQXkbaNTPMdI8+l59
ucmvDDsQHlBRXPGRM1ut+1DHSkdkKqjor3TnLkJDz+rOL+K5AQ0EYBNoQAEIAMin
weK4wrqZhWOotTD3FS6IUh46Jd57jxd9dWBl5YkjvTwHQJQ54csneY/jWaMSoxhJ
CEuEnb4P/6P0g5lCVYflkXLrCPLrYPJazW0EtTXQ5YRxFT7ISytsQDNgfSQO6irs
rJlD+OWUGQYeIpa58hB+N9GnM8eka7lxKfay9lM3rn5Nz3E4x10mdgxYY9RzrFHv
1MTGvNe/wRO67e9s0yJT+JEJ5No5h/c6J0dcrAiegiOvbhUtAYaygCpaxryTz9Bt
CsSrBOXadzxIEnyp2pJE4vyxCVyHWve2EBk7Fagh45Z+JdA5QhGmNS3tHQmZ6Nyu
CPAEjzn4k3jjHgoDfTUAEQEAAYkCbAQYAQoAIBYhBImQ4GrBi1LDHJkUx5Mo0B3n
+1WHBQJgE2hAAhsuAUAJEJMo0B3n+1WHwHQgBBkBCgAdFiEEwbGjdbck4hC1jzBi
DDW/ourdPjAFAmATaEAACgkQDDW/ourdPjDaqgf+Oav1oC+TfOvPyIJgfuZK3Of6
hvTPW81udyKgmZ/pbwJ4rSAkX2BRe0k11OXSc0V4TEkUPG62lfyNbrb54FsZAaPk
s2C8G5LW2xlJ91JXIxsFQJcGlWTTrd7IFMe+YtcBHYSBJNmtRXodXO0sYUXCcaMk
Au/6y6x3m9nhfpqCsc3XZ6C0QxVMMhgrdSfnEPhHV/84m8mqDobU34+eDjnY/l6V
7ZYycH7Ihtv7z4Ed5Ahasr2FmrMOA9y6VFHeFxmUPhRi2QwFl2TZJ0z8sMosUUg0
0X2yMfkxnMUrym12NdLYrIMGCPo8vm6UhqY7TZis7N5esBEqmMMsqSH0xuGp8d+e
B/99W1lQjtdhtE/UEW/wRMQHFoDC2Hyd9jA+NpFK0l7ryft0Jq284b/H4reFffSj
ctQL123KtOLNFQsG5w2Theo2XtC9tvhYTAK8736bf7CWJFw3oW5OSpvfXntzJpmw
qcISIERXJPMENLSwUwg7YfpgmSKdrafWSaQEr/e5t2fjf0O3rJfagWH9s0+BetlY
NhAwpSf7Tm1X+rcep/8rKAsxwhgEQpfn88+2NTzVJDrTSt7CjcbV7nVIdUcK5ki9
2+262W2wWFNnZ3ofWutFl9yTKEY3RVbxpkzYAIM7Q1vUEpP+rYSCYlUwYudl5Dwb
BX2VXKOmi9HIn475ykM23BDR
=magh
-----END PGP PUBLIC KEY BLOCK-----

View File

@@ -434,7 +434,7 @@ _spack_bootstrap() {
then
SPACK_COMPREPLY="-h --help"
else
SPACK_COMPREPLY="status enable disable reset root list trust untrust add remove mirror"
SPACK_COMPREPLY="status enable disable reset root list trust untrust"
fi
}
@@ -485,33 +485,6 @@ _spack_bootstrap_untrust() {
fi
}
_spack_bootstrap_add() {
if $list_options
then
SPACK_COMPREPLY="-h --help --scope --trust"
else
SPACK_COMPREPLY=""
fi
}
_spack_bootstrap_remove() {
if $list_options
then
SPACK_COMPREPLY="-h --help"
else
SPACK_COMPREPLY=""
fi
}
_spack_bootstrap_mirror() {
if $list_options
then
SPACK_COMPREPLY="-h --help --binary-packages --dev"
else
SPACK_COMPREPLY=""
fi
}
_spack_build_env() {
if $list_options
then
@@ -626,7 +599,7 @@ _spack_ci() {
}
_spack_ci_generate() {
SPACK_COMPREPLY="-h --help --output-file --copy-to --optimize --dependencies --buildcache-destination --prune-dag --no-prune-dag --check-index-only --artifacts-root"
SPACK_COMPREPLY="-h --help --output-file --copy-to --optimize --dependencies --prune-dag --no-prune-dag --check-index-only --artifacts-root"
}
_spack_ci_rebuild_index() {

View File

@@ -47,7 +47,6 @@ FROM {{ run.image }}
COPY --from=builder {{ paths.environment }} {{ paths.environment }}
COPY --from=builder {{ paths.store }} {{ paths.store }}
COPY --from=builder {{ paths.hidden_view }} {{ paths.hidden_view }}
COPY --from=builder {{ paths.view }} {{ paths.view }}
COPY --from=builder /etc/profile.d/z10_spack_environment.sh /etc/profile.d/z10_spack_environment.sh
@@ -63,6 +62,5 @@ RUN {% if os_package_update %}{{ os_packages_final.update }} \
{% for label, value in labels.items() %}
LABEL "{{ label }}"="{{ value }}"
{% endfor %}
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l", "-c", "$*", "--" ]
CMD [ "/bin/bash" ]
ENTRYPOINT ["/bin/bash", "--rcfile", "/etc/profile", "-l"]
{% endif %}

View File

@@ -56,7 +56,6 @@ Stage: final
%files from build
{{ paths.environment }} /opt
{{ paths.store }} /opt
{{ paths.hidden_view }} /opt
{{ paths.view }} /opt
{{ paths.environment }}/environment_modifications.sh {{ paths.environment }}/environment_modifications.sh

View File

@@ -22,5 +22,5 @@ class InstalledDepsB(Package):
version("2", "abcdef0123456789abcdef0123456789")
version("3", "def0123456789abcdef0123456789abc")
depends_on("installed-deps-d@3:", type=("build", "link"))
depends_on("installed-deps-d", type=("build", "link"))
depends_on("installed-deps-e", type=("build", "link"))

View File

@@ -1,16 +0,0 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class NonExistingConditionalDep(Package):
"""Simple package with no source and one dependency"""
homepage = "http://www.example.com"
version('2.0')
version('1.0')
depends_on('dep-with-variants@999', when='@2.0')
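This deleted fixture exercises conditional dependencies: with `when='@2.0'`, the unsatisfiable `dep-with-variants@999` requirement applies only when the package itself concretizes at 2.0, so version 1.0 remains solvable. A toy model of that gating (plain Python, not Spack internals):

```python
def deps_for(version):
    deps = []
    if version == "2.0":                       # when='@2.0'
        deps.append("dep-with-variants@999")   # no such version exists
    return deps

print(deps_for("1.0"))  # [] -> concretizable
print(deps_for("2.0"))  # ['dep-with-variants@999'] -> unsatisfiable
```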

View File

@@ -11,9 +11,8 @@ class Aria2(AutotoolsPackage):
"""An ultra fast download utility"""
homepage = "https://aria2.github.io"
url = "https://github.com/aria2/aria2/releases/download/release-1.36.0/aria2-1.36.0.tar.gz"
url = "https://github.com/aria2/aria2/releases/download/release-1.35.0/aria2-1.35.0.tar.gz"
version('1.36.0', sha256='b593b2fd382489909c96c62c6e180054c3332b950be3d73e0cb0d21ea8afb3c5')
version('1.35.0', sha256='fd85589416f8246cefc4e6ba2fa52da54fdf11fd5602a2db4b6749f7c33b5b2d')
version('1.34.0', sha256='ec4866985760b506aa36dc9021dbdc69551c1a647823cae328c30a4f3affaa6c')
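A package keeps a single `url`; Spack extrapolates the download location for the other `version()` entries by substituting each version into that pattern, which is why only the 1.36.0/1.35.0 line changes here. A naive illustration with plain string substitution (Spack's actual parser in `spack.url` is more general):

```python
url = "https://github.com/aria2/aria2/releases/download/release-1.36.0/aria2-1.36.0.tar.gz"
for v in ("1.35.0", "1.34.0"):
    # Works whenever the version string appears verbatim in the URL.
    print(url.replace("1.36.0", v))
```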

View File

@@ -25,8 +25,8 @@ class Assimp(CMakePackage):
version('5.0.1', sha256='11310ec1f2ad2cd46b95ba88faca8f7aaa1efe9aa12605c55e3de2b977b3dbfc')
version('4.0.1', sha256='60080d8ab4daaab309f65b3cffd99f19eb1af8d05623fff469b9b652818e286e')
patch('https://patch-diff.githubusercontent.com/raw/assimp/assimp/pull/4203.patch?full_index=1',
sha256='24135e88bcef205e118f7a3f99948851c78d3f3e16684104dc603439dd790d74',
patch('https://patch-diff.githubusercontent.com/raw/assimp/assimp/pull/4203.patch',
sha256='a227714a215023184536e38b4bc7f8341f635e16bfb3b0ea029d420c29aacd2d',
when='@5.1:5.2.2')
variant('shared', default=True,
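The two patch lines differ in more than the checksum: appending `?full_index=1` asks GitHub to render the pull-request patch with full-length object hashes, so the downloaded bytes, and therefore the `sha256`, stay stable as the repository grows. One way to see what a patch URL currently hashes to, using only the standard library (network access assumed; run outside Spack):

```python
import hashlib
import urllib.request

url = ("https://patch-diff.githubusercontent.com/raw/assimp/assimp/"
       "pull/4203.patch?full_index=1")
data = urllib.request.urlopen(url).read()
print(hashlib.sha256(data).hexdigest())
```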

View File

@@ -41,14 +41,12 @@ class Ccache(CMakePackage):
version('3.3', sha256='b220fce435fe3d86b8b90097e986a17f6c1f971e0841283dd816adb238c5fd6a')
version('3.2.9', sha256='1e13961b83a3d215c4013469c149414a79312a22d3c7bf9f946abac9ee33e63f')
depends_on('gperf', when='@:3')
depends_on('libxslt', when='@:3')
depends_on('zlib', when='@:3')
depends_on('zstd', when='@4.0:')
depends_on('gperf', when='@:3')
depends_on('hiredis@0.13.3:', when='@4.4:')
depends_on('pkgconfig', type='build', when='@4.4:')
depends_on('libxslt', when='@:3')
depends_on('zlib', when='@:3')
conflicts('%gcc@:5', when='@4.4:')
conflicts('%clang@:4', when='@4.4:')

View File

@@ -26,7 +26,7 @@ def libs(self):
return find_libraries('libdSFMT', root=self.prefix, recursive=True)
def build(self, spec, prefix):
make('build-library', 'CC=cc')
make('build-library')
def install(self, spec, prefix):
make('PREFIX={0}'.format(prefix), 'install')

View File

@@ -11,9 +11,7 @@ class Eagle(MakefilePackage):
homepage = "https://github.com/tony-kuo/eagle"
url = "https://github.com/tony-kuo/eagle/archive/v1.1.2.tar.gz"
maintainers = ['snehring']
version('1.1.3', sha256='bd510b8eef2de14898cbf417e1c7a30b97ddaba24e5e2834da7b02767362fe3c')
version('1.1.2', sha256='afe967560d1f8fdbd0caf4b93b5f2a86830e9e4d399fee4a526140431343045e')
depends_on('curl')
@@ -38,8 +36,9 @@ def edit(self, spec, prefix):
# use spack C compiler
filter_file('CC=.*', 'CC={0}'.format(spack_cc), 'Makefile')
# let the user inject march if they want
filter_file('-march=native', '', 'Makefile', string=True)
# remove march=native %fj
if self.spec.satisfies('%fj'):
filter_file('-march=native', '', 'Makefile', string=True)
def install(self, spec, prefix):
mkdirp(prefix.bin)
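Both hunks in `edit()` lean on `filter_file`, which rewrites a file in place. By default the pattern is a regular expression, so `CC=.*` replaces the whole assignment; `string=True` switches to literal matching. A short sketch, assuming Spack's `llnl.util.filesystem` helper is importable (i.e. running under Spack's Python) and a `Makefile` exists in the working directory:

```python
from llnl.util.filesystem import filter_file

# Regex mode: 'CC=.*' consumes the rest of the line, pinning the compiler.
filter_file("CC=.*", "CC=cc", "Makefile")
# Literal mode: match '-march=native' exactly as written, no regex rules.
filter_file("-march=native", "", "Makefile", string=True)
```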

View File

@@ -21,7 +21,6 @@ class Geant4(CMakePackage):
maintainers = ['drbenmorgan']
version('11.0.2', sha256='661e1ab6f42e58910472d771e76ffd16a2b411398eed70f39808762db707799e')
version('11.0.1', sha256='fa76d0774346b7347b1fb1424e1c1e0502264a83e185995f3c462372994f84fa')
version('11.0.0', sha256='04d11d4d9041507e7f86f48eb45c36430f2b6544a74c0ccaff632ac51d9644f1')
version('10.7.3', sha256='8615d93bd4178d34f31e19d67bc81720af67cdab1c8425af8523858dcddcf65b', preferred=True)

View File

@@ -1,25 +0,0 @@
From 3193c8f2596711c1feb8e655ec391050e0e78ed0 Mon Sep 17 00:00:00 2001
From: Harmen Stoppels <harmenstoppels@gmail.com>
Date: Mon, 23 May 2022 16:58:23 +0200
Subject: [PATCH] Guard GCC-specific macros with _COMPILER_GCC_
---
src/julia_internal.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/julia_internal.h b/src/julia_internal.h
index fbded19..32d038b 100644
--- a/src/julia_internal.h
+++ b/src/julia_internal.h
@@ -392,7 +392,7 @@ jl_svec_t *jl_perm_symsvec(size_t n, ...);
// this sizeof(__VA_ARGS__) trick can't be computed until C11, but that only matters to Clang in some situations
#if !defined(__clang_analyzer__) && !(defined(JL_ASAN_ENABLED) || defined(JL_TSAN_ENABLED))
-#ifdef __GNUC__
+#ifdef _COMPILER_GCC_
#define jl_perm_symsvec(n, ...) \
(jl_perm_symsvec)(__extension__({ \
static_assert( \
--
2.25.1

View File

@@ -1,30 +0,0 @@
From 4ec1970178606127fd8bbffa763f135ca0f12ee3 Mon Sep 17 00:00:00 2001
From: Harmen Stoppels <harmenstoppels@gmail.com>
Date: Tue, 24 May 2022 14:03:48 +0200
Subject: [PATCH] llvm: add NDEBUG when assertion mode is off
`llvm-config --cxxflags` unfortunately does not set `-DNDEBUG`, which
Julia needs to set correctly when including LLVM header files.
---
src/Makefile | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/src/Makefile b/src/Makefile
index 1a9af2fa7c..766fd2945f 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -98,6 +98,11 @@ PUBLIC_HEADER_TARGETS := $(addprefix $(build_includedir)/julia/,$(notdir $(PUBLI
LLVM_LDFLAGS := $(shell $(LLVM_CONFIG_HOST) --ldflags)
LLVM_CXXFLAGS := $(shell $(LLVM_CONFIG_HOST) --cxxflags)
+# llvm-config --cxxflags does not return -DNDEBUG
+ifeq ($(shell $(LLVM_CONFIG_HOST) --assertion-mode),OFF)
+LLVM_CXXFLAGS += -DNDEBUG
+endif
+
ifeq ($(JULIACODEGEN),LLVM)
ifneq ($(USE_SYSTEM_LLVM),0)
LLVMLINK += $(LLVM_LDFLAGS) $(shell $(LLVM_CONFIG_HOST) --libs --system-libs)
--
2.25.1
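The commit message states the problem precisely: `llvm-config --cxxflags` omits `-DNDEBUG`, but Julia must compile with the same assertion setting as the LLVM it includes. A hedged sketch of the same decision in plain Python (assumes an `llvm-config` binary on `PATH`; `--assertion-mode` prints `ON` or `OFF`):

```python
import subprocess

def extra_llvm_cxxflags(llvm_config="llvm-config"):
    # Mirror the Makefile logic: add -DNDEBUG when LLVM was built
    # without assertions, so headers are parsed consistently.
    mode = subprocess.run([llvm_config, "--assertion-mode"],
                          capture_output=True, text=True).stdout.strip()
    return ["-DNDEBUG"] if mode.upper() == "OFF" else []

print(extra_llvm_cxxflags())
```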

View File

@@ -130,12 +130,6 @@ class Julia(MakefilePackage):
# only applied to libllvm when it's vendored by julia).
patch('revert-fix-rpath-of-libllvm.patch', when='@1.7.0:1.7')
# Allow build with clang.
patch('gcc-ifdef.patch', when='@1.7.0:1.7')
# Make sure Julia sets -DNDEBUG when including LLVM header files.
patch('llvm-NDEBUG.patch', when='@1.7.0:1.7')
def patch(self):
# The system-libwhich-libblastrampoline.patch causes a rebuild of docs as it
# touches the main Makefile, so we reset the a/m-time to doc/_build's.
@@ -221,9 +215,6 @@ def edit(self, spec, prefix):
'1' if spec.variants['precompile'].value else '0'),
]
options.append('USEGCC:={}'.format('1' if '%gcc' in spec else '0'))
options.append('USECLANG:={}'.format('1' if '%clang' in spec else '0'))
# libm or openlibm?
if spec.variants['openlibm'].value:
options.append('USE_SYSTEM_LIBM=0')

Some files were not shown because too many files have changed in this diff.