Compare commits


1 commit

Author SHA1 Message Date
Todd Gamblin
111501b583 concretizer: move remove_node transform into spec_clauses
The `remove_node` transformation is used in almost all calls to `condition()`,
but it removes `attr("node", ...)` and `attr("virtual_node", ...)` attributes
that we could avoid adding in the first place. Some use cases need the `node()`
clause; others do not.

- [x] Add a `node` argument to `spec_clauses` that defaults to `True`.
- [x] Remove `remove_node` transform.
- [x] Update calls to `spec_clauses` to use `node=False` where we would have
      used `remove_node`.
- [x] Remove the now unused `impose()` function (missed in #46729)

This is part of a larger effort to simplify the use of transformations
in the concretizer, but it's an easy first step.

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2025-01-04 16:11:41 -08:00
1307 changed files with 13475 additions and 20928 deletions
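In code terms, the change described in the commit message is small. A minimal runnable sketch (the clause strings and signatures below are illustrative stand-ins, not Spack's actual API):

.. code-block:: python

   # Before: spec_clauses() always emitted node facts, and most callers then
   # stripped them back out with the remove_node transform.
   def remove_node(clauses):
       return [c for c in clauses
               if not c.startswith(('attr("node"', 'attr("virtual_node"'))]

   # After: callers that don't want node facts opt out at generation time.
   def spec_clauses(spec, node=True):
       clauses = [f'attr("version", "{spec}", "1.0")']
       if node:
           clauses.insert(0, f'attr("node", "{spec}")')
       return clauses

   # A call site that used remove_node(spec_clauses(spec)) becomes:
   clauses = spec_clauses("zlib", node=False)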


@@ -40,17 +40,17 @@ jobs:
         # 1: Platforms to build for
         # 2: Base image (e.g. ubuntu:22.04)
         dockerfile: [[amazon-linux, 'linux/amd64,linux/arm64', 'amazonlinux:2'],
-                     [centos-stream9, 'linux/amd64,linux/arm64', 'centos:stream9'],
-                     [leap15, 'linux/amd64,linux/arm64', 'opensuse/leap:15'],
-                     [ubuntu-focal, 'linux/amd64,linux/arm64', 'ubuntu:20.04'],
-                     [ubuntu-jammy, 'linux/amd64,linux/arm64', 'ubuntu:22.04'],
-                     [ubuntu-noble, 'linux/amd64,linux/arm64', 'ubuntu:24.04'],
-                     [almalinux8, 'linux/amd64,linux/arm64', 'almalinux:8'],
-                     [almalinux9, 'linux/amd64,linux/arm64', 'almalinux:9'],
+                     [centos-stream9, 'linux/amd64,linux/arm64,linux/ppc64le', 'centos:stream9'],
+                     [leap15, 'linux/amd64,linux/arm64,linux/ppc64le', 'opensuse/leap:15'],
+                     [ubuntu-focal, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:20.04'],
+                     [ubuntu-jammy, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:22.04'],
+                     [ubuntu-noble, 'linux/amd64,linux/arm64,linux/ppc64le', 'ubuntu:24.04'],
+                     [almalinux8, 'linux/amd64,linux/arm64,linux/ppc64le', 'almalinux:8'],
+                     [almalinux9, 'linux/amd64,linux/arm64,linux/ppc64le', 'almalinux:9'],
                      [rockylinux8, 'linux/amd64,linux/arm64', 'rockylinux:8'],
                      [rockylinux9, 'linux/amd64,linux/arm64', 'rockylinux:9'],
-                     [fedora39, 'linux/amd64,linux/arm64', 'fedora:39'],
-                     [fedora40, 'linux/amd64,linux/arm64', 'fedora:40']]
+                     [fedora39, 'linux/amd64,linux/arm64,linux/ppc64le', 'fedora:39'],
+                     [fedora40, 'linux/amd64,linux/arm64,linux/ppc64le', 'fedora:40']]
     name: Build ${{ matrix.dockerfile[0] }}
     if: github.repository == 'spack/spack'
     steps:


@@ -81,10 +81,6 @@ jobs:
     with:
       with_coverage: ${{ needs.changes.outputs.core }}
-  import-check:
-    needs: [ changes ]
-    uses: ./.github/workflows/import-check.yaml
   all-prechecks:
     needs: [ prechecks ]
     if: ${{ always() }}


@@ -29,8 +29,7 @@ jobs:
     - run: coverage xml
     - name: "Upload coverage report to CodeCov"
-      uses: codecov/codecov-action@1e68e06f1dbfde0e4cefc87efeba9e4643565303
+      uses: codecov/codecov-action@05f5a9cfad807516dbbef9929c4a42df3eb78766
       with:
         verbose: true
         fail_ci_if_error: false
-        token: ${{ secrets.CODECOV_TOKEN }}


@@ -1,49 +0,0 @@
-name: import-check
-on:
-  workflow_call:
-jobs:
-  # Check we don't make the situation with circular imports worse
-  import-check:
-    runs-on: ubuntu-latest
-    steps:
-    - uses: julia-actions/setup-julia@v2
-      with:
-        version: '1.10'
-    - uses: julia-actions/cache@v2
-    # PR: use the base of the PR as the old commit
-    - name: Checkout PR base commit
-      if: github.event_name == 'pull_request'
-      uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
-      with:
-        ref: ${{ github.event.pull_request.base.sha }}
-        path: old
-    # not a PR: use the previous commit as the old commit
-    - name: Checkout previous commit
-      if: github.event_name != 'pull_request'
-      uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
-      with:
-        fetch-depth: 2
-        path: old
-    - name: Checkout previous commit
-      if: github.event_name != 'pull_request'
-      run: git -C old reset --hard HEAD^
-    - name: Checkout new commit
-      uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
-      with:
-        path: new
-    - name: Install circular import checker
-      uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
-      with:
-        repository: haampie/circular-import-fighter
-        ref: 4cdb0bf15f04ab6b49041d5ef1bfd9644cce7f33
-        path: circular-import-fighter
-    - name: Install dependencies
-      working-directory: circular-import-fighter
-      run: make -j dependencies
-    - name: Circular import check
-      working-directory: circular-import-fighter
-      run: make -j compare "SPACK_ROOT=../old ../new"


@@ -1,7 +1,7 @@
-black==25.1.0
+black==24.10.0
 clingo==5.7.1
-flake8==7.1.2
-isort==6.0.1
-mypy==1.15.0
-types-six==1.17.0.20250304
+flake8==7.1.1
+isort==5.13.2
+mypy==1.8.0
+types-six==1.17.0.20241205
 vermin==1.6.0


@@ -20,7 +20,7 @@ jobs:
     - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
     - uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
       with:
-        python-version: '3.13'
+        python-version: '3.11'
        cache: 'pip'
    - name: Install Python Packages
      run: |
@@ -39,7 +39,7 @@ jobs:
        fetch-depth: 0
    - uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
      with:
-       python-version: '3.13'
+       python-version: '3.11'
       cache: 'pip'
    - name: Install Python packages
      run: |
@@ -58,7 +58,7 @@ jobs:
     secrets: inherit
     with:
       with_coverage: ${{ inputs.with_coverage }}
-      python_version: '3.13'
+      python_version: '3.11'
   # Check that spack can bootstrap the development environment on Python 3.6 - RHEL8
   bootstrap-dev-rhel8:
     runs-on: ubuntu-latest
@@ -86,6 +86,66 @@ jobs:
         spack -d bootstrap now --dev
         spack -d style -t black
         spack unit-test -V
+  # Check we don't make the situation with circular imports worse
+  import-check:
+    runs-on: ubuntu-latest
+    steps:
+    - uses: julia-actions/setup-julia@v2
+      with:
+        version: '1.10'
+    - uses: julia-actions/cache@v2
+    # PR: use the base of the PR as the old commit
+    - name: Checkout PR base commit
+      if: github.event_name == 'pull_request'
+      uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+      with:
+        ref: ${{ github.event.pull_request.base.sha }}
+        path: old
+    # not a PR: use the previous commit as the old commit
+    - name: Checkout previous commit
+      if: github.event_name != 'pull_request'
+      uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+      with:
+        fetch-depth: 2
+        path: old
+    - name: Checkout previous commit
+      if: github.event_name != 'pull_request'
+      run: git -C old reset --hard HEAD^
+    - name: Checkout new commit
+      uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+      with:
+        path: new
+    - name: Install circular import checker
+      uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+      with:
+        repository: haampie/circular-import-fighter
+        ref: b5d6ce9be35f602cca7d5a6aa0259fca10639cca
+        path: circular-import-fighter
+    - name: Install dependencies
+      working-directory: circular-import-fighter
+      run: make -j dependencies
+    - name: Problematic imports before
+      working-directory: circular-import-fighter
+      run: make SPACK_ROOT=../old SUFFIX=.old
+    - name: Problematic imports after
+      working-directory: circular-import-fighter
+      run: make SPACK_ROOT=../new SUFFIX=.new
+    - name: Compare import cycles
+      working-directory: circular-import-fighter
+      run: |
+        edges_before="$(head -n1 solution.old)"
+        edges_after="$(head -n1 solution.new)"
+        if [ "$edges_after" -gt "$edges_before" ]; then
+          printf '\033[1;31mImport check failed: %s imports need to be deleted, ' "$edges_after"
+          printf 'previously this was %s\033[0m\n' "$edges_before"
+          printf 'Compare \033[1;97m"Problematic imports before"\033[0m and '
+          printf '\033[1;97m"Problematic imports after"\033[0m.\n'
+          exit 1
+        else
+          printf '\033[1;32mImport check passed: %s <= %s\033[0m\n' "$edges_after" "$edges_before"
+        fi
   # Further style checks from pylint
   pylint:

.gitignore

@@ -201,6 +201,7 @@ tramp
 # Org-mode
 .org-id-locations
+*_archive
 # flymake-mode
 *_flymake.*


@@ -25,6 +25,7 @@ exit 1
 # The code above runs this file with our preferred python interpreter.
 import os
+import os.path
 import sys

 min_python3 = (3, 6)


@@ -43,28 +43,6 @@ concretizer:
     # (e.g. py-setuptools, cmake etc.)
     # "full" (experimental): allows separation of the entire build-tool stack (e.g. the entire "cmake" subDAG)
     strategy: minimal
-    # Maximum number of duplicates in a DAG, when using a strategy that allows duplicates. "default" is the
-    # number used if there isn't a more specific alternative
-    max_dupes:
-      default: 1
-      # Virtuals
-      c: 2
-      cxx: 2
-      fortran: 1
-      # Regular packages
-      cmake: 2
-      gmake: 2
-      python: 2
-      python-venv: 2
-      py-cython: 2
-      py-flit-core: 2
-      py-pip: 2
-      py-setuptools: 2
-      py-wheel: 2
-      xcb-proto: 2
-      # Compilers
-      gcc: 2
-      llvm: 2
   # Option to specify compatibility between operating systems for reuse of compilers and packages
   # Specified as a key: [list] where the key is the os that is being targeted, and the list contains the OS's
   # it can reuse. Note this is a directional compatibility so mutual compatibility between two OS's
@@ -85,7 +63,3 @@ concretizer:
   # Setting this to false yields unreproducible results, so we advise to use that value only
   # for debugging purposes (e.g. check which constraints can help Spack concretize faster).
   error_on_timeout: true
-  # Static analysis may reduce the concretization time by generating smaller ASP problems, in
-  # cases where there are requirements that prevent part of the search space to be explored.
-  static_analysis: false


@@ -36,7 +36,7 @@ packages:
   go-or-gccgo-bootstrap: [go-bootstrap, gcc]
   iconv: [libiconv]
   ipp: [intel-oneapi-ipp]
-  java: [openjdk, jdk]
+  java: [openjdk, jdk, ibm-java]
   jpeg: [libjpeg-turbo, libjpeg]
   lapack: [openblas, amdlibflame]
   libc: [glibc, musl]
@@ -73,27 +73,15 @@ packages:
   permissions:
     read: world
     write: user
-  cray-fftw:
-    buildable: false
-  cray-libsci:
-    buildable: false
   cray-mpich:
     buildable: false
   cray-mvapich2:
     buildable: false
-  cray-pmi:
-    buildable: false
   egl:
     buildable: false
-  essl:
-    buildable: false
   fujitsu-mpi:
     buildable: false
-  fujitsu-ssl2:
-    buildable: false
   hpcx-mpi:
     buildable: false
-  mpt:
-    buildable: false
   spectrum-mpi:
     buildable: false


@@ -1,5 +1,5 @@
 config:
   locks: false
   build_stage::
-  - '$user_cache_path/stage'
+  - '$spack/.staging'
   stage_name: '{name}-{version}-{hash:7}'


@@ -1761,24 +1761,19 @@ Verifying installations
 The ``spack verify`` command can be used to verify the validity of
 Spack-installed packages any time after installation.

-^^^^^^^^^^^^^^^^^^^^^^^^^
-``spack verify manifest``
-^^^^^^^^^^^^^^^^^^^^^^^^^

 At installation time, Spack creates a manifest of every file in the
 installation prefix. For links, Spack tracks the mode, ownership, and
 destination. For directories, Spack tracks the mode, and
 ownership. For files, Spack tracks the mode, ownership, modification
-time, hash, and size. The ``spack verify manifest`` command will check,
-for every file in each package, whether any of those attributes have
-changed. It will also check for newly added files or deleted files from
-the installation prefix. Spack can either check all installed packages
+time, hash, and size. The Spack verify command will check, for every
+file in each package, whether any of those attributes have changed. It
+will also check for newly added files or deleted files from the
+installation prefix. Spack can either check all installed packages
 using the `-a,--all` or accept specs listed on the command line to
 verify.

-The ``spack verify manifest`` command can also verify for individual files
-that they haven't been altered since installation time. If the given file
+The ``spack verify`` command can also verify for individual files that
+they haven't been altered since installation time. If the given file
 is not in a Spack installation prefix, Spack will report that it is
 not owned by any package. To check individual files instead of specs,
 use the ``-f,--files`` option.
@@ -1793,22 +1788,6 @@ check only local packages (as opposed to those used transparently from
 ``upstream`` spack instances) and the ``-j,--json`` option to output
 machine-readable json data for any errors.

-^^^^^^^^^^^^^^^^^^^^^^^^^^
-``spack verify libraries``
-^^^^^^^^^^^^^^^^^^^^^^^^^^

-The ``spack verify libraries`` command can be used to verify that packages
-do not have accidental system dependencies. This command scans the install
-prefixes of packages for executables and shared libraries, and resolves
-their needed libraries in their RPATHs. When needed libraries cannot be
-located, an error is reported. This typically indicates that a package
-was linked against a system library, instead of a library provided by
-a Spack package.

-This verification can also be enabled as a post-install hook by setting
-``config:shared_linking:missing_library_policy`` to ``error`` or ``warn``
-in :ref:`config.yaml <config-yaml>`.

 -----------------------
 Filesystem requirements
 -----------------------
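Conceptually, the manifest check described above compares attributes recorded at install time against the current state of each file. A toy sketch of that idea (not Spack's implementation; the manifest layout here is invented):

.. code-block:: python

   import hashlib
   import os

   def verify_file(path, recorded):
       """Return the attributes of ``path`` that no longer match the manifest."""
       st = os.lstat(path)
       errors = [attr for attr in ("mode", "size", "mtime")
                 if getattr(st, "st_" + attr) != recorded[attr]]
       with open(path, "rb") as f:
           if hashlib.sha256(f.read()).hexdigest() != recorded["hash"]:
               errors.append("hash")
       return errors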


@@ -170,7 +170,7 @@ bootstrapping.
 To register the mirror on the platform where it's supposed to be used run the following command(s):

    % spack bootstrap add --trust local-sources /opt/bootstrap/metadata/sources
    % spack bootstrap add --trust local-binaries /opt/bootstrap/metadata/binaries
-   % spack buildcache update-index /opt/bootstrap/bootstrap_cache

 This command needs to be run on a machine with internet access and the resulting folder
 has to be moved over to the air-gapped system. Once the local sources are added using the


@@ -272,9 +272,9 @@ often lists dependencies and the flags needed to locate them. The
 "environment variables" section lists environment variables that the
 build system uses to pass flags to the compiler and linker.

-^^^^^^^^^^^^^^^^^^^^^^^^^
-Adding flags to configure
-^^^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+Addings flags to configure
+^^^^^^^^^^^^^^^^^^^^^^^^^^

 For most of the flags you encounter, you will want a variant to
 optionally enable/disable them. You can then optionally pass these
@@ -285,7 +285,7 @@ function like so:

    def configure_args(self):
        args = []
-       ...
        if self.spec.satisfies("+mpi"):
            args.append("--enable-mpi")
        else:
@@ -299,10 +299,7 @@ Alternatively, you can use the :ref:`enable_or_disable <autotools_enable_or_dis

 .. code-block:: python

    def configure_args(self):
-       args = []
-       ...
-       args.extend(self.enable_or_disable("mpi"))
-       return args
+       return [self.enable_or_disable("mpi")]

 Note that we are explicitly disabling MPI support if it is not
@@ -347,14 +344,7 @@ typically used to enable or disable some feature within the package.
        default=False,
        description="Memchecker support for debugging [degrades performance]"
    )
-   ...
-
-   def configure_args(self):
-       args = []
-       ...
-       args.extend(self.enable_or_disable("memchecker"))
-       return args
+   config_args.extend(self.enable_or_disable("memchecker"))

 In this example, specifying the variant ``+memchecker`` will generate
 the following configuration options:


@@ -56,13 +56,13 @@ If you look at the ``perl`` package, you'll see:

 .. code-block:: python

-   phases = ("configure", "build", "install")
+   phases = ["configure", "build", "install"]

 Similarly, ``cmake`` defines:

 .. code-block:: python

-   phases = ("bootstrap", "build", "install")
+   phases = ["bootstrap", "build", "install"]

 If we look at the ``cmake`` example, this tells Spack's ``PackageBase``
 class to run the ``bootstrap``, ``build``, and ``install`` functions


@@ -223,10 +223,6 @@ def setup(sphinx):
     ("py:class", "spack.compiler.CompilerCache"),
     # TypeVar that is not handled correctly
     ("py:class", "llnl.util.lang.T"),
-    ("py:class", "llnl.util.lang.KT"),
-    ("py:class", "llnl.util.lang.VT"),
-    ("py:obj", "llnl.util.lang.KT"),
-    ("py:obj", "llnl.util.lang.VT"),
 ]

 # The reST default role (used for this markup: `text`) to use for all documents.


@@ -25,23 +25,14 @@ These settings can be overridden in ``etc/spack/config.yaml`` or
 The location where Spack will install packages and their dependencies.
 Default is ``$spack/opt/spack``.

----------------
-``projections``
----------------

-.. warning::

-   Modifying projections of the install tree is strongly discouraged.

-By default Spack installs all packages into a unique directory relative to the install
-tree root with the following layout:

-.. code-block::

-   {architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}

-In very rare cases, it may be necessary to reduce the length of this path. For example,
-very old versions of the Intel compiler are known to segfault when input paths are too long:
+---------------------------------------------------
+``install_hash_length`` and ``install_path_scheme``
+---------------------------------------------------

+The default Spack installation path can be very long and can create problems
+for scripts with hardcoded shebangs. Additionally, when using the Intel
+compiler, and if there is also a long list of dependencies, the compiler may
+segfault. If you see the following:

 .. code-block:: console
@@ -49,25 +40,36 @@ very old versions of the Intel compiler are known to segfault when input paths a
    ** Segmentation violation signal raised. **
    Access violation or stack overflow. Please contact Intel Support for assistance.

-Another case is Python and R packages with many runtime dependencies, which can result
-in very large ``PYTHONPATH`` and ``R_LIBS`` environment variables. This can cause the
-``execve`` system call to fail with ``E2BIG``, preventing processes from starting.

-For this reason, Spack allows users to modify the installation layout through custom
-projections. For example
+it may be because variables containing dependency specs may be too long. There
+are two parameters to help with long path names. Firstly, the
+``install_hash_length`` parameter can set the length of the hash in the
+installation path from 1 to 32. The default path uses the full 32 characters.

+Secondly, it is also possible to modify the entire installation
+scheme. By default Spack uses
+``{architecture}/{compiler.name}-{compiler.version}/{name}-{version}-{hash}``
+where the tokens that are available for use in this directive are the
+same as those understood by the :meth:`~spack.spec.Spec.format`
+method. Using this parameter it is possible to use a different package
+layout or reduce the depth of the installation paths. For example

 .. code-block:: yaml

    config:
-     install_tree:
-       root: $spack/opt/spack
-       projections:
-         all: "{name}/{version}/{hash:16}"
+     install_path_scheme: '{name}/{version}/{hash:7}'

-would install packages into sub-directories using only the package name, version and a
-hash length of 16 characters.
+would install packages into sub-directories using only the package
+name, version and a hash length of 7 characters.

-Notice that reducing the hash length increases the likelihood of hash collisions.
+When using either parameter to set the hash length it only affects the
+representation of the hash in the installation directory. You
+should be aware that the smaller the hash length the more likely
+naming conflicts will occur. These parameters are independent of those
+used to configure module names.

+.. warning:: Modifying the installation hash length or path scheme after
+             packages have been installed will prevent Spack from being
+             able to find the old installation directories.

 --------------------
 ``build_stage``
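Both versions of this section warn that shorter hashes make collisions more likely; a standard birthday-bound estimate makes that concrete (assuming Spack's 32-symbol base32 hash alphabet):

.. code-block:: python

   import math

   def collision_probability(installs, hash_chars, alphabet=32):
       """Probability that any two of ``installs`` packages share a truncated hash."""
       space = alphabet ** hash_chars
       return 1 - math.exp(-installs * (installs - 1) / (2 * space))

   print(f"{collision_probability(10_000, 7):.4f}")   # ~0.0015 for {hash:7}
   print(f"{collision_probability(10_000, 16):.1e}")  # ~4.1e-17 for {hash:16}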
@@ -125,8 +127,6 @@ are stored in ``$spack/var/spack/cache``. These are stored indefinitely
 by default. Can be purged with :ref:`spack clean --downloads
 <cmd-spack-clean>`.

-.. _Misc Cache:

 --------------------
 ``misc_cache``
 --------------------
@@ -336,52 +336,3 @@ create a new alias called ``inst`` that will always call ``install -v``:

    aliases:
      inst: install -v

--------------------------------
-``concretization_cache:enable``
--------------------------------

-When set to ``true``, Spack will utilize a cache of solver outputs from
-successful concretization runs. When enabled, Spack will check the concretization
-cache prior to running the solver. If a previous request to solve a given
-problem is present in the cache, Spack will load the concrete specs and other
-solver data from the cache rather than running the solver. Specs not previously
-concretized will be added to the cache on a successful solve. The cache additionally
-holds solver statistics, so commands like ``spack solve`` will still return information
-about the run that produced a given solver result.

-This cache is a subcache of the :ref:`Misc Cache` and as such will be cleaned when the Misc
-Cache is cleaned.

-When ``false`` or ommitted, all concretization requests will be performed from scatch

-----------------------------
-``concretization_cache:url``
-----------------------------

-Path to the location where Spack will root the concretization cache. Currently this only supports
-paths on the local filesystem.

-Default location is under the :ref:`Misc Cache` at: ``$misc_cache/concretization``

-------------------------------------
-``concretization_cache:entry_limit``
-------------------------------------

-Sets a limit on the number of concretization results that Spack will cache. The limit is evaluated
-after each concretization run; if Spack has stored more results than the limit allows, the
-oldest concretization results are pruned until 10% of the limit has been removed.

-Setting this value to 0 disables the automatic pruning. It is expected users will be
-responsible for maintaining this cache.

------------------------------------
-``concretization_cache:size_limit``
------------------------------------

-Sets a limit on the size of the concretization cache in bytes. The limit is evaluated
-after each concretization run; if Spack has stored more results than the limit allows, the
-oldest concretization results are pruned until 10% of the limit has been removed.

-Setting this value to 0 disables the automatic pruning. It is expected users will be
-responsible for maintaining this cache.
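The pruning policy described for ``entry_limit`` and ``size_limit`` (drop the oldest results until 10% of the limit is freed; 0 disables pruning) can be modeled in a few lines. This is a toy restatement of the removed text, not Spack's code:

.. code-block:: python

   def prune_oldest(entries, limit):
       """``entries`` is ordered oldest-first; returns the pruned list."""
       if limit == 0 or len(entries) <= limit:
           return entries
       drop = max(1, int(0.1 * limit))  # free 10% of the limit
       return entries[drop:]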


@@ -361,6 +361,7 @@ and the tags associated with the class of runners to build on.
 * ``.linux_neoverse_n1``
 * ``.linux_neoverse_v1``
 * ``.linux_neoverse_v2``
+* ``.linux_power``
 * ``.linux_skylake``
 * ``.linux_x86_64``
 * ``.linux_x86_64_v4``


@@ -543,10 +543,10 @@ With either interpreter you can run a single command:

 .. code-block:: console

-   $ spack python -c 'from spack.concretize import concretize_one; concretize_one("python")'
+   $ spack python -c 'from spack.spec import Spec; Spec("python").concretized()'
    ...
-   $ spack python -i ipython -c 'from spack.concretize import concretize_one; concretize_one("python")'
+   $ spack python -i ipython -c 'from spack.spec import Spec; Spec("python").concretized()'
    Out[1]: ...

 or a file:


@@ -112,19 +112,6 @@ the original but may concretize differently in the presence of different
 explicit or default configuration settings (e.g., a different version of
 Spack or for a different user account).

-Environments created from a manifest will copy any included configs
-from relative paths inside the environment. Relative paths from
-outside the environment will cause errors, and absolute paths will be
-kept absolute. For example, if ``spack.yaml`` includes:

-.. code-block:: yaml

-   spack:
-     include: [./config.yaml]

-then the created environment will have its own copy of the file
-``config.yaml`` copied from the location in the original environment.

 Create an environment from a ``spack.lock`` file using:

 .. code-block:: console
@@ -173,7 +160,7 @@ accepts. If an environment already exists then spack will simply activate it
 and ignore the create-specific flags.

 .. code-block:: console

    $ spack env activate --create -p myenv
    # ...
    # [creates if myenv does not exist yet]
@@ -437,8 +424,8 @@ Developing Packages in a Spack Environment
 The ``spack develop`` command allows one to develop Spack packages in
 an environment. It requires a spec containing a concrete version, and
 will configure Spack to install the package from local source.
 If a version is not provided from the command line interface then spack
 will automatically pick the highest version the package has defined.
 This means any infinity versions (``develop``, ``main``, ``stable``) will be
 preferred in this selection process.
@@ -448,9 +435,9 @@ set, and Spack will ensure the package and its dependents are rebuilt
 any time the environment is installed if the package's local source
 code has been modified. Spack's native implementation to check for modifications
 is to check if ``mtime`` is newer than the installation.
 A custom check can be created by overriding the ``detect_dev_src_change`` method
 in your package class. This is particularly useful for projects using custom spack repo's
 to drive development and want to optimize performance.

 Spack ensures that all instances of a
 developed package in the environment are concretized to match the
@@ -466,7 +453,7 @@ Further development on ``foo`` can be tested by re-installing the environment,
 and eventually committed and pushed to the upstream git repo.

 If the package being developed supports out-of-source builds then users can use the
 ``--build_directory`` flag to control the location and name of the build directory.
 This is a shortcut to set the ``package_attributes:build_directory`` in the
 ``packages`` configuration (see :ref:`assigning-package-attributes`).
 The supplied location will become the build-directory for that package in all future builds.


@@ -456,13 +456,14 @@ For instance, the following config options,
     tcl:
       all:
         suffixes:
-          ^python@3: 'python{^python.version.up_to_2}'
+          ^python@3: 'python{^python.version}'
           ^openblas: 'openblas'

-will add a ``python3.12`` to module names of packages compiled with Python 3.12, and similarly for
-all specs depending on ``python@3``. This is useful to know which version of Python a set of Python
-extensions is associated with. Likewise, the ``openblas`` string is attached to any program that
-has openblas in the spec, most likely via the ``+blas`` variant specification.
+will add a ``python-3.12.1`` version string to any packages compiled with
+Python matching the spec, ``python@3``. This is useful to know which
+version of Python a set of Python extensions is associated with. Likewise, the
+``openblas`` string is attached to any program that has openblas in the spec,
+most likely via the ``+blas`` variant specification.

 The most heavyweight solution to module naming is to change the entire
 naming convention for module files. This uses the projections format
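The two suffix tokens in this hunk differ only in how much of the dependency version they keep. A hypothetical helper illustrates the ``up_to_2`` truncation that Spack's ``Spec.format`` tokens perform:

.. code-block:: python

   def up_to(version, n):
       """Keep the first ``n`` components of a dotted version string."""
       return ".".join(version.split(".")[:n])

   print("python" + up_to("3.12.1", 2))  # python3.12    <- {^python.version.up_to_2}
   print("python-" + "3.12.1")           # python-3.12.1 <- {^python.version}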


@@ -820,69 +820,6 @@ presence of a ``SPACK_CDASH_AUTH_TOKEN`` environment variable during the
 build group on CDash called "Release Testing" (that group will be created if
 it didn't already exist).

-.. _ci_artifacts:

-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-CI Artifacts Directory Layout
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-When running the CI build using the command ``spack ci rebuild`` a number of directories are created for
-storing data generated during the CI job. The default root directory for artifacts is ``job_scratch_root``.
-This can be overridden by passing the argument ``--artifacts-root`` to the ``spack ci generate`` command
-or by setting the ``SPACK_ARTIFACTS_ROOT`` environment variable in the build job scripts.

-The top level directories under the artifact root are ``concrete_environment``, ``logs``, ``reproduction``,
-``tests``, and ``user_data``. Spack does not restrict what is written to any of these directories nor does
-it require user specified files be written to any specific directory.

-------------------------
-``concrete_environment``
-------------------------

-The directory ``concrete_environment`` is used to communicate the ci generate processed ``spack.yaml`` and
-the concrete ``spack.lock`` for the CI environment.

---------
-``logs``
---------

-The directory ``logs`` contains the spack build log, ``spack-build-out.txt``, and the spack build environment
-modification file, ``spack-build-mod-env.txt``. Additionally all files specified by the packages ``Builder``
-property ``archive_files`` are also copied here (ie. ``CMakeCache.txt`` in ``CMakeBuilder``).

-----------------
-``reproduction``
-----------------

-The directory ``reproduction`` is used to store the files needed by the ``spack reproduce-build`` command.
-This includes ``repro.json``, copies of all of the files in ``concrete_environment``, the concrete spec
-JSON file for the current spec being built, and all of the files written in the artifacts root directory.

-The ``repro.json`` file is not versioned and is only designed to work with the version of spack CI was run with.
-An example of what a ``repro.json`` may look like is here.

-.. code:: json

-   {
-     "job_name": "adios2@2.9.2 /feaevuj %gcc@11.4.0 arch=linux-ubuntu20.04-x86_64_v3 E4S ROCm External",
-     "job_spec_json": "adios2.json",
-     "ci_project_dir": "/builds/spack/spack"
-   }

----------
-``tests``
----------

-The directory ``tests`` is used to store output from running ``spack test <job spec>``. This may or may not have
-data in it depending on the package that was built and the availability of tests.

--------------
-``user_data``
--------------

-The directory ``user_data`` is used to store everything else that shouldn't be copied to the ``reproduction`` direcotory.
-Users may use this to store additional logs or metrics or other types of files generated by the build job.

 -------------------------------------
 Using a custom spack in your pipeline
 -------------------------------------


@@ -1,13 +1,13 @@
-sphinx==8.2.3
+sphinx==8.1.3
 sphinxcontrib-programoutput==0.18
 sphinx_design==0.6.1
 sphinx-rtd-theme==3.0.2
-python-levenshtein==0.27.1
+python-levenshtein==0.26.1
 docutils==0.21.2
-pygments==2.19.1
+pygments==2.18.0
 urllib3==2.3.0
-pytest==8.3.5
-isort==6.0.1
-black==25.1.0
-flake8==7.1.2
+pytest==8.3.4
+isort==5.13.2
+black==24.10.0
+flake8==7.1.1
 mypy==1.11.1


@@ -3,7 +3,7 @@
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)

 """URL primitives that just require Python standard library."""
 import itertools
-import os
+import os.path
 import re
 from typing import Optional, Set, Tuple
 from urllib.parse import urlsplit, urlunsplit


@@ -7,7 +7,6 @@
 import fnmatch
 import glob
 import hashlib
-import io
 import itertools
 import numbers
 import os
@@ -21,7 +20,6 @@
 from contextlib import contextmanager
 from itertools import accumulate
 from typing import (
-    IO,
     Callable,
     Deque,
     Dict,
@@ -77,6 +75,7 @@
     "install_tree",
     "is_exe",
     "join_path",
+    "last_modification_time_recursive",
     "library_extensions",
     "mkdirp",
     "partition_path",
@@ -670,7 +669,7 @@ def copy(src, dest, _permissions=False):
         _permissions (bool): for internal use only

     Raises:
-        OSError: if *src* does not match any files or directories
+        IOError: if *src* does not match any files or directories
         ValueError: if *src* matches multiple files but *dest* is
             not a directory
     """
@@ -681,7 +680,7 @@ def copy(src, dest, _permissions=False):
     files = glob.glob(src)
     if not files:
-        raise OSError("No such file or directory: '{0}'".format(src))
+        raise IOError("No such file or directory: '{0}'".format(src))
     if len(files) > 1 and not os.path.isdir(dest):
         raise ValueError(
             "'{0}' matches multiple files but '{1}' is not a directory".format(src, dest)
@@ -712,7 +711,7 @@ def install(src, dest):
         dest (str): the destination file or directory

     Raises:
-        OSError: if *src* does not match any files or directories
+        IOError: if *src* does not match any files or directories
         ValueError: if *src* matches multiple files but *dest* is
             not a directory
     """
@@ -750,7 +749,7 @@ def copy_tree(
         _permissions (bool): for internal use only

     Raises:
-        OSError: if *src* does not match any files or directories
+        IOError: if *src* does not match any files or directories
         ValueError: if *src* is a parent directory of *dest*
     """
     if _permissions:
@@ -764,7 +763,7 @@ def copy_tree(
     files = glob.glob(src)
     if not files:
-        raise OSError("No such file or directory: '{0}'".format(src))
+        raise IOError("No such file or directory: '{0}'".format(src))

     # For Windows hard-links and junctions, the source path must exist to make a symlink. Add
     # all symlinks to this list while traversing the tree, then when finished, make all
@@ -845,7 +844,7 @@ def install_tree(src, dest, symlinks=True, ignore=None):
         ignore (typing.Callable): function indicating which files to ignore

     Raises:
-        OSError: if *src* does not match any files or directories
+        IOError: if *src* does not match any files or directories
         ValueError: if *src* is a parent directory of *dest*
     """
     copy_tree(src, dest, symlinks=symlinks, ignore=ignore, _permissions=True)
@@ -1471,36 +1470,15 @@ def set_executable(path):
 @system_path_filter
-def recursive_mtime_greater_than(path: str, time: float) -> bool:
-    """Returns true if any file or dir recursively under `path` has mtime greater than `time`."""
-    # use bfs order to increase likelihood of early return
-    queue: Deque[str] = collections.deque([path])
-
-    if os.stat(path).st_mtime > time:
-        return True
-
-    while queue:
-        current = queue.popleft()
-        try:
-            entries = os.scandir(current)
-        except OSError:
-            continue
-        with entries:
-            for entry in entries:
-                try:
-                    st = entry.stat(follow_symlinks=False)
-                except OSError:
-                    continue
-                if st.st_mtime > time:
-                    return True
-                if entry.is_dir(follow_symlinks=False):
-                    queue.append(entry.path)
-    return False
+def last_modification_time_recursive(path):
+    path = os.path.abspath(path)
+    times = [os.stat(path).st_mtime]
+    times.extend(
+        os.lstat(os.path.join(root, name)).st_mtime
+        for root, dirs, files in os.walk(path)
+        for name in dirs + files
+    )
+    return max(times)

 @system_path_filter
@@ -1762,7 +1740,8 @@ def find(
 def _log_file_access_issue(e: OSError, path: str) -> None:
-    tty.debug(f"find must skip {path}: {e}")
+    errno_name = errno.errorcode.get(e.errno, "UNKNOWN")
+    tty.debug(f"find must skip {path}: {errno_name} {e}")

 def _file_id(s: os.stat_result) -> Tuple[int, int]:
@@ -2456,69 +2435,26 @@
     and vis versa.
     """

-    def __init__(
-        self,
-        package,
-        base_modification_prefix: Optional[Union[str, pathlib.Path]] = None,
-        link_install_prefix: bool = True,
-    ):
+    def __init__(self, package, link_install_prefix=True):
         """
         Args:
             package (spack.package_base.PackageBase): Package requiring links
-            base_modification_prefix (str|pathlib.Path): Path representation indicating
-                the root directory in which to establish the simulated rpath, ie where the
-                symlinks that comprise the "rpath" behavior will be installed.
-                Note: This is a mutually exclusive option with `link_install_prefix` using
-                both is an error.
-                Default: None
             link_install_prefix (bool): Link against package's own install or stage root.
                 Packages that run their own executables during build and require rpaths to
-                the build directory during build time require this option.
-                Default: install
+                the build directory during build time require this option. Default: install
                 root
-                Note: This is a mutually exclusive option with `base_modification_prefix`, using
-                both is an error.
         """
         self.pkg = package
-        self._addl_rpaths: set[str] = set()
-        if link_install_prefix and base_modification_prefix:
-            raise RuntimeError(
-                "Invalid combination of arguments given to WindowsSimulated RPath.\n"
-                "Select either `link_install_prefix` to create an install prefix rpath"
-                " or specify a `base_modification_prefix` for any other link type. "
-                "Specifying both arguments is invalid."
-            )
-        if not (link_install_prefix or base_modification_prefix):
-            raise RuntimeError(
-                "Insufficient arguments given to WindowsSimulatedRpath.\n"
-                "WindowsSimulatedRPath requires one of link_install_prefix"
-                " or base_modification_prefix to be specified."
-                " Neither was provided."
-            )
+        self._addl_rpaths = set()
         self.link_install_prefix = link_install_prefix
-        if base_modification_prefix:
-            self.base_modification_prefix = pathlib.Path(base_modification_prefix)
-        else:
-            self.base_modification_prefix = pathlib.Path(self.pkg.prefix)
-        self._additional_library_dependents: set[pathlib.Path] = set()
-        if not self.link_install_prefix:
-            tty.debug(f"Generating rpath for non install context: {base_modification_prefix}")
+        self._additional_library_dependents = set()

     @property
     def library_dependents(self):
         """
         Set of directories where package binaries/libraries are located.
         """
-        base_pths = set()
-        if self.link_install_prefix:
-            base_pths.add(pathlib.Path(self.pkg.prefix.bin))
-        base_pths |= self._additional_library_dependents
-        return base_pths
+        return set([pathlib.Path(self.pkg.prefix.bin)]) | self._additional_library_dependents

     def add_library_dependent(self, *dest):
         """
@@ -2534,12 +2470,6 @@ def add_library_dependent(self, *dest):
                 new_pth = pathlib.Path(pth).parent
             else:
                 new_pth = pathlib.Path(pth)
-            path_is_in_prefix = new_pth.is_relative_to(self.base_modification_prefix)
-            if not path_is_in_prefix:
-                raise RuntimeError(
-                    f"Attempting to generate rpath symlink out of rpath context:\
-{str(self.base_modification_prefix)}"
-                )
             self._additional_library_dependents.add(new_pth)

     @property
@@ -2628,33 +2558,6 @@ def establish_link(self):
                 self._link(library, lib_dir)

-def make_package_test_rpath(pkg, test_dir: Union[str, pathlib.Path]):
-    """Establishes a temp Windows simulated rpath for the pkg in the testing directory
-    so an executable can test the libraries/executables with proper access
-    to dependent dlls
-    Note: this is a no-op on all other platforms besides Windows
-
-    Args:
-        pkg (spack.package_base.PackageBase): the package for which the rpath should be computed
-        test_dir: the testing directory in which we should construct an rpath
-    """
-    # link_install_prefix as false ensures we're not linking into the install prefix
-    mini_rpath = WindowsSimulatedRPath(pkg, link_install_prefix=False)
-    # add the testing directory as a location to install rpath symlinks
-    mini_rpath.add_library_dependent(test_dir)
-    # check for whether build_directory is available, if not
-    # assume the stage root is the build dir
-    build_dir_attr = getattr(pkg, "build_directory", None)
-    build_directory = build_dir_attr if build_dir_attr else pkg.stage.path
-    # add the build dir & build dir bin
-    mini_rpath.add_rpath(os.path.join(build_directory, "bin"))
-    mini_rpath.add_rpath(os.path.join(build_directory))
-    # construct rpath
-    mini_rpath.establish_link()

 @system_path_filter
 @memoized
 def can_access_dir(path):
@@ -2883,20 +2786,6 @@ def keep_modification_time(*filenames):
         os.utime(f, (os.path.getatime(f), mtime))

-@contextmanager
-def temporary_file_position(stream):
-    orig_pos = stream.tell()
-    yield
-    stream.seek(orig_pos)

-@contextmanager
-def current_file_position(stream: IO[str], loc: int, relative_to=io.SEEK_CUR):
-    with temporary_file_position(stream):
-        stream.seek(loc, relative_to)
-        yield

 @contextmanager
 def temporary_dir(
     suffix: Optional[str] = None, prefix: Optional[str] = None, dir: Optional[str] = None


@@ -14,7 +14,7 @@
 import typing
 import warnings
 from datetime import datetime, timedelta
-from typing import Callable, Dict, Iterable, List, Mapping, Optional, Tuple, TypeVar
+from typing import Callable, Dict, Iterable, List, Tuple, TypeVar

 # Ignore emacs backups when listing modules
 ignore_modules = r"^\.#|~$"
@@ -1080,88 +1080,3 @@ def __set__(self, instance, value):
     def factory(self, instance, owner):
         raise NotImplementedError("must be implemented by derived classes")

-KT = TypeVar("KT")
-VT = TypeVar("VT")

-class PriorityOrderedMapping(Mapping[KT, VT]):
-    """Mapping that iterates over key according to an integer priority. If the priority is
-    the same for two keys, insertion order is what matters.
-
-    The priority is set when the key/value pair is added. If not set, the highest current priority
-    is used.
-    """
-
-    _data: Dict[KT, VT]
-    _priorities: List[Tuple[int, KT]]
-
-    def __init__(self) -> None:
-        self._data = {}
-        # Tuple of (priority, key)
-        self._priorities = []
-
-    def __getitem__(self, key: KT) -> VT:
-        return self._data[key]
-
-    def __len__(self) -> int:
-        return len(self._data)
-
-    def __iter__(self):
-        yield from (key for _, key in self._priorities)
-
-    def __reversed__(self):
-        yield from (key for _, key in reversed(self._priorities))
-
-    def reversed_keys(self):
-        """Iterates over keys from the highest priority, to the lowest."""
-        return reversed(self)
-
-    def reversed_values(self):
-        """Iterates over values from the highest priority, to the lowest."""
-        yield from (self._data[key] for _, key in reversed(self._priorities))
-
-    def _highest_priority(self) -> int:
-        if not self._priorities:
-            return 0
-        result, _ = self._priorities[-1]
-        return result
-
-    def add(self, key: KT, *, value: VT, priority: Optional[int] = None) -> None:
-        """Adds a key/value pair to the mapping, with a specific priority.
-
-        If the priority is None, then it is assumed to be the highest priority value currently
-        in the container.
-
-        Raises:
-            ValueError: when the same priority is already in the mapping
-        """
-        if priority is None:
-            priority = self._highest_priority()
-
-        if key in self._data:
-            self.remove(key)
-
-        self._priorities.append((priority, key))
-        # We rely on sort being stable
-        self._priorities.sort(key=lambda x: x[0])
-        self._data[key] = value
-        assert len(self._data) == len(self._priorities)
-
-    def remove(self, key: KT) -> VT:
-        """Removes a key from the mapping.
-
-        Returns:
-            The value associated with the key being removed
-
-        Raises:
-            KeyError: if the key is not in the mapping
-        """
-        if key not in self._data:
-            raise KeyError(f"cannot find {key}")
-        popped_item = self._data.pop(key)
-        self._priorities = [(p, k) for p, k in self._priorities if k != key]
-        assert len(self._data) == len(self._priorities)
-        return popped_item

@@ -41,16 +41,6 @@ def __init__(self, dst, src_a=None, src_b=None):
         self.src_a = src_a
         self.src_b = src_b

-    def __repr__(self) -> str:
-        return f"MergeConflict(dst={self.dst!r}, src_a={self.src_a!r}, src_b={self.src_b!r})"

-def _samefile(a: str, b: str):
-    try:
-        return os.path.samefile(a, b)
-    except OSError:
-        return False

 class SourceMergeVisitor(BaseDirectoryVisitor):
     """
@@ -60,14 +50,9 @@ class SourceMergeVisitor(BaseDirectoryVisitor):
     - A list of merge conflicts in dst/
     """

-    def __init__(
-        self, ignore: Optional[Callable[[str], bool]] = None, normalize_paths: bool = False
-    ):
+    def __init__(self, ignore: Optional[Callable[[str], bool]] = None):
         self.ignore = ignore if ignore is not None else lambda f: False
-        # On case-insensitive filesystems, normalize paths to detect duplications
-        self.normalize_paths = normalize_paths

         # When mapping <src root> to <dst root>/<projection>, we need to prepend the <projection>
         # bit to the relative path in the destination dir.
         self.projection: str = ""
@@ -86,88 +71,10 @@ def __init__(
         # and can run mkdir in order.
         self.directories: Dict[str, Tuple[str, str]] = {}

-        # If the visitor is configured to normalize paths, keep a map of
-        # normalized path to: original path, root directory + relative path
-        self._directories_normalized: Dict[str, Tuple[str, str, str]] = {}

         # Files to link. Maps dst_rel to (src_root, src_rel). This is an ordered dict, where files
         # are guaranteed to be grouped by src_root in the order they were visited.
         self.files: Dict[str, Tuple[str, str]] = {}

-        # If the visitor is configured to normalize paths, keep a map of
-        # normalized path to: original path, root directory + relative path
-        self._files_normalized: Dict[str, Tuple[str, str, str]] = {}

-    def _in_directories(self, proj_rel_path: str) -> bool:
-        """
-        Check if a path is already in the directory list
-        """
-        if self.normalize_paths:
-            return proj_rel_path.lower() in self._directories_normalized
-        else:
-            return proj_rel_path in self.directories

-    def _directory(self, proj_rel_path: str) -> Tuple[str, str, str]:
-        """
-        Get the directory that is mapped to a path
-        """
-        if self.normalize_paths:
-            return self._directories_normalized[proj_rel_path.lower()]
-        else:
-            return (proj_rel_path, *self.directories[proj_rel_path])

-    def _del_directory(self, proj_rel_path: str):
-        """
-        Remove a directory from the list of directories
-        """
-        del self.directories[proj_rel_path]
-        if self.normalize_paths:
-            del self._directories_normalized[proj_rel_path.lower()]

-    def _add_directory(self, proj_rel_path: str, root: str, rel_path: str):
-        """
-        Add a directory to the list of directories.
-        Also stores the normalized version for later lookups
-        """
-        self.directories[proj_rel_path] = (root, rel_path)
-        if self.normalize_paths:
-            self._directories_normalized[proj_rel_path.lower()] = (proj_rel_path, root, rel_path)

-    def _in_files(self, proj_rel_path: str) -> bool:
-        """
-        Check if a path is already in the files list
-        """
-        if self.normalize_paths:
-            return proj_rel_path.lower() in self._files_normalized
-        else:
-            return proj_rel_path in self.files

-    def _file(self, proj_rel_path: str) -> Tuple[str, str, str]:
-        """
-        Get the file that is mapped to a path
-        """
-        if self.normalize_paths:
-            return self._files_normalized[proj_rel_path.lower()]
-        else:
-            return (proj_rel_path, *self.files[proj_rel_path])

-    def _del_file(self, proj_rel_path: str):
-        """
-        Remove a file from the list of files
-        """
-        del self.files[proj_rel_path]
-        if self.normalize_paths:
-            del self._files_normalized[proj_rel_path.lower()]

-    def _add_file(self, proj_rel_path: str, root: str, rel_path: str):
-        """
-        Add a file to the list of files
-        Also stores the normalized version for later lookups
-        """
-        self.files[proj_rel_path] = (root, rel_path)
-        if self.normalize_paths:
-            self._files_normalized[proj_rel_path.lower()] = (proj_rel_path, root, rel_path)

     def before_visit_dir(self, root: str, rel_path: str, depth: int) -> bool:
         """
         Register a directory if dst / rel_path is not blocked by a file or ignored.
@@ -177,28 +84,23 @@ def before_visit_dir(self, root: str, rel_path: str, depth: int) -> bool:
         if self.ignore(rel_path):
             # Don't recurse when dir is ignored.
             return False
-        elif self._in_files(proj_rel_path):
-            # A file-dir conflict is fatal except if they're the same file (symlinked dir).
-            src_a = os.path.join(*self._file(proj_rel_path))
-            src_b = os.path.join(root, rel_path)
-
-            if not _samefile(src_a, src_b):
-                self.fatal_conflicts.append(
-                    MergeConflict(dst=proj_rel_path, src_a=src_a, src_b=src_b)
-                )
-                return False
-
-            # Remove the link in favor of the dir.
-            existing_proj_rel_path, _, _ = self._file(proj_rel_path)
-            self._del_file(existing_proj_rel_path)
-            self._add_directory(proj_rel_path, root, rel_path)
-            return True
-        elif self._in_directories(proj_rel_path):
+        elif proj_rel_path in self.files:
+            # Can't create a dir where a file is.
+            src_a_root, src_a_relpath = self.files[proj_rel_path]
+            self.fatal_conflicts.append(
+                MergeConflict(
+                    dst=proj_rel_path,
+                    src_a=os.path.join(src_a_root, src_a_relpath),
+                    src_b=os.path.join(root, rel_path),
+                )
+            )
+            return False
+        elif proj_rel_path in self.directories:
             # No new directory, carry on.
             return True
         else:
             # Register new directory.
-            self._add_directory(proj_rel_path, root, rel_path)
+            self.directories[proj_rel_path] = (root, rel_path)
             return True

     def before_visit_symlinked_dir(self, root: str, rel_path: str, depth: int) -> bool:
@@ -230,7 +132,7 @@ def before_visit_symlinked_dir(self, root: str, rel_path: str, depth: int) -> bo
         if handle_as_dir:
             return self.before_visit_dir(root, rel_path, depth)

-        self.visit_file(root, rel_path, depth, symlink=True)
+        self.visit_file(root, rel_path, depth)
         return False

     def visit_file(self, root: str, rel_path: str, depth: int, *, symlink: bool = False) -> None:
@@ -238,23 +140,30 @@ def visit_file(self, root: str, rel_path: str, depth: int, *, symlink: bool = Fa
         if self.ignore(rel_path):
             pass
-        elif self._in_directories(proj_rel_path):
-            # Can't create a file where a dir is, unless they are the same file (symlinked dir),
-            # in which case we simply drop the symlink in favor of the actual dir.
-            src_a = os.path.join(*self._directory(proj_rel_path))
-            src_b = os.path.join(root, rel_path)
-            if not symlink or not _samefile(src_a, src_b):
-                self.fatal_conflicts.append(
-                    MergeConflict(dst=proj_rel_path, src_a=src_a, src_b=src_b)
-                )
-        elif self._in_files(proj_rel_path):
+        elif proj_rel_path in self.directories:
+            # Can't create a file where a dir is; fatal error
+            self.fatal_conflicts.append(
+                MergeConflict(
+                    dst=proj_rel_path,
+                    src_a=os.path.join(*self.directories[proj_rel_path]),
+                    src_b=os.path.join(root, rel_path),
+                )
+            )
+        elif proj_rel_path in self.files:
             # When two files project to the same path, they conflict iff they are distinct.
             # If they are the same (i.e. one links to the other), register regular files rather
             # than symlinks. The reason is that in copy-type views, we need a copy of the actual
             # file, not the symlink.
-            src_a = os.path.join(*self._file(proj_rel_path))
+            src_a = os.path.join(*self.files[proj_rel_path])
             src_b = os.path.join(root, rel_path)
-            if not _samefile(src_a, src_b):
+            try:
+                samefile = os.path.samefile(src_a, src_b)
+            except OSError:
+                samefile = False
+            if not samefile:
                 # Distinct files produce a conflict.
                 self.file_conflicts.append(
                     MergeConflict(dst=proj_rel_path, src_a=src_a, src_b=src_b)
@@ -264,12 +173,12 @@ def visit_file(self, root: str, rel_path: str, depth: int, *, symlink: bool = Fa
         if not symlink:
             # Remove the link in favor of the actual file. The del is necessary to maintain the
# order of the files dict, which is grouped by root. # order of the files dict, which is grouped by root.
existing_proj_rel_path, _, _ = self._file(proj_rel_path) del self.files[proj_rel_path]
self._del_file(existing_proj_rel_path) self.files[proj_rel_path] = (root, rel_path)
self._add_file(proj_rel_path, root, rel_path)
else: else:
# Otherwise register this file to be linked. # Otherwise register this file to be linked.
self._add_file(proj_rel_path, root, rel_path) self.files[proj_rel_path] = (root, rel_path)
def visit_symlinked_file(self, root: str, rel_path: str, depth: int) -> None: def visit_symlinked_file(self, root: str, rel_path: str, depth: int) -> None:
# Treat symlinked files as ordinary files (without "dereferencing") # Treat symlinked files as ordinary files (without "dereferencing")
@@ -288,11 +197,11 @@ def set_projection(self, projection: str) -> None:
path = "" path = ""
for part in self.projection.split(os.sep): for part in self.projection.split(os.sep):
path = os.path.join(path, part) path = os.path.join(path, part)
if not self._in_files(path): if path not in self.files:
self._add_directory(path, "<projection>", path) self.directories[path] = ("<projection>", path)
else: else:
# Can't create a dir where a file is. # Can't create a dir where a file is.
_, src_a_root, src_a_relpath = self._file(path) src_a_root, src_a_relpath = self.files[path]
self.fatal_conflicts.append( self.fatal_conflicts.append(
MergeConflict( MergeConflict(
dst=path, dst=path,
@@ -318,8 +227,8 @@ def __init__(self, source_merge_visitor: SourceMergeVisitor):
def before_visit_dir(self, root: str, rel_path: str, depth: int) -> bool: def before_visit_dir(self, root: str, rel_path: str, depth: int) -> bool:
# If destination dir is a file in a src dir, add a conflict, # If destination dir is a file in a src dir, add a conflict,
# and don't traverse deeper # and don't traverse deeper
if self.src._in_files(rel_path): if rel_path in self.src.files:
_, src_a_root, src_a_relpath = self.src._file(rel_path) src_a_root, src_a_relpath = self.src.files[rel_path]
self.src.fatal_conflicts.append( self.src.fatal_conflicts.append(
MergeConflict( MergeConflict(
rel_path, os.path.join(src_a_root, src_a_relpath), os.path.join(root, rel_path) rel_path, os.path.join(src_a_root, src_a_relpath), os.path.join(root, rel_path)
@@ -329,9 +238,8 @@ def before_visit_dir(self, root: str, rel_path: str, depth: int) -> bool:
# If destination dir was also a src dir, remove the mkdir # If destination dir was also a src dir, remove the mkdir
# action, and traverse deeper. # action, and traverse deeper.
if self.src._in_directories(rel_path): if rel_path in self.src.directories:
existing_proj_rel_path, _, _ = self.src._directory(rel_path) del self.src.directories[rel_path]
self.src._del_directory(existing_proj_rel_path)
return True return True
# If the destination dir does not appear in the src dir, # If the destination dir does not appear in the src dir,
@@ -344,24 +252,38 @@ def before_visit_symlinked_dir(self, root: str, rel_path: str, depth: int) -> bo
be seen as files; we should not accidentally merge be seen as files; we should not accidentally merge
source dir with a symlinked dest dir. source dir with a symlinked dest dir.
""" """
# Always conflict
self.visit_file(root, rel_path, depth) if rel_path in self.src.directories:
src_a_root, src_a_relpath = self.src.directories[rel_path]
# Never descend into symlinked target dirs.
return False
def visit_file(self, root: str, rel_path: str, depth: int) -> None:
# Can't merge a file if target already exists
if self.src._in_directories(rel_path):
_, src_a_root, src_a_relpath = self.src._directory(rel_path)
self.src.fatal_conflicts.append( self.src.fatal_conflicts.append(
MergeConflict( MergeConflict(
rel_path, os.path.join(src_a_root, src_a_relpath), os.path.join(root, rel_path) rel_path, os.path.join(src_a_root, src_a_relpath), os.path.join(root, rel_path)
) )
) )
elif self.src._in_files(rel_path): if rel_path in self.src.files:
_, src_a_root, src_a_relpath = self.src._file(rel_path) src_a_root, src_a_relpath = self.src.files[rel_path]
self.src.fatal_conflicts.append(
MergeConflict(
rel_path, os.path.join(src_a_root, src_a_relpath), os.path.join(root, rel_path)
)
)
# Never descend into symlinked target dirs.
return False
def visit_file(self, root: str, rel_path: str, depth: int) -> None:
# Can't merge a file if target already exists
if rel_path in self.src.directories:
src_a_root, src_a_relpath = self.src.directories[rel_path]
self.src.fatal_conflicts.append(
MergeConflict(
rel_path, os.path.join(src_a_root, src_a_relpath), os.path.join(root, rel_path)
)
)
elif rel_path in self.src.files:
src_a_root, src_a_relpath = self.src.files[rel_path]
self.src.fatal_conflicts.append( self.src.fatal_conflicts.append(
MergeConflict( MergeConflict(
rel_path, os.path.join(src_a_root, src_a_relpath), os.path.join(root, rel_path) rel_path, os.path.join(src_a_root, src_a_relpath), os.path.join(root, rel_path)
@@ -386,7 +308,7 @@ class LinkTree:
def __init__(self, source_root): def __init__(self, source_root):
if not os.path.exists(source_root): if not os.path.exists(source_root):
raise OSError("No such file or directory: '%s'", source_root) raise IOError("No such file or directory: '%s'", source_root)
self._root = source_root self._root = source_root
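
The normalized-path bookkeeping removed above follows one pattern throughout: lookups are keyed on a case-folded path, while the stored tuple remembers the original spelling so the entry can be deleted or reported later. A standalone sketch of that pattern, assuming nothing from Spack (the class and names here are illustrative):

from typing import Dict, Tuple

class CaseInsensitiveRegistry:
    """Map project-relative paths to (root, rel_path), optionally case-insensitively."""

    def __init__(self, normalize_paths: bool = False):
        self.normalize_paths = normalize_paths
        self.files: Dict[str, Tuple[str, str]] = {}
        # normalized key -> (original key, root, rel_path)
        self._normalized: Dict[str, Tuple[str, str, str]] = {}

    def add(self, path: str, root: str, rel_path: str) -> None:
        self.files[path] = (root, rel_path)
        if self.normalize_paths:
            self._normalized[path.lower()] = (path, root, rel_path)

    def __contains__(self, path: str) -> bool:
        if self.normalize_paths:
            return path.lower() in self._normalized
        return path in self.files

# On a case-insensitive filesystem "bin/foo" and "Bin/FOO" collide:
reg = CaseInsensitiveRegistry(normalize_paths=True)
reg.add("bin/foo", "/src1", "bin/foo")
assert "Bin/FOO" in reg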

View File

@@ -269,7 +269,7 @@ def __init__(
     @staticmethod
     def _poll_interval_generator(
-        _wait_times: Optional[Tuple[float, float, float]] = None,
+        _wait_times: Optional[Tuple[float, float, float]] = None
     ) -> Generator[float, None, None]:
         """This implements a backoff scheme for polling a contended resource
         by suggesting a succession of wait times between polls.
@@ -391,7 +391,7 @@ def _poll_lock(self, op: int) -> bool:
                 return True
-            except OSError as e:
+            except IOError as e:
                 # EAGAIN and EACCES == locked by another process (so try again)
                 if e.errno not in (errno.EAGAIN, errno.EACCES):
                     raise
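
The docstring of `_poll_interval_generator` describes a backoff scheme: poll a contended lock eagerly at first, then progressively less often. A minimal sketch of such a generator, assuming made-up phase lengths and wait times rather than Spack's actual constants:

from typing import Generator, Optional, Tuple

def poll_interval_generator(
    wait_times: Optional[Tuple[float, float, float]] = None
) -> Generator[float, None, None]:
    """Yield short waits at first, then progressively longer ones."""
    short, medium, longest = wait_times or (0.01, 0.1, 0.5)  # hypothetical defaults
    for _ in range(100):  # poll eagerly at first
        yield short
    for _ in range(100):  # then back off
        yield medium
    while True:  # finally settle on the longest interval
        yield longest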

View File

@@ -2,7 +2,8 @@
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
-"""Utility classes for logging the output of blocks of code."""
+"""Utility classes for logging the output of blocks of code.
+"""
 import atexit
 import ctypes
 import errno
@@ -343,6 +344,26 @@ def close(self):
         self.file.close()

+
+@contextmanager
+def replace_environment(env):
+    """Replace the current environment (`os.environ`) with `env`.
+
+    If `env` is empty (or None), this unsets all current environment
+    variables.
+    """
+    env = env or {}
+    old_env = os.environ.copy()
+    try:
+        os.environ.clear()
+        for name, val in env.items():
+            os.environ[name] = val
+        yield
+    finally:
+        os.environ.clear()
+        for name, val in old_env.items():
+            os.environ[name] = val
+

 def log_output(*args, **kwargs):
     """Context manager that logs its output to a file.
@@ -426,6 +447,7 @@ def __init__(
         self.echo = echo
         self.debug = debug
         self.buffer = buffer
+        self.env = env  # the environment to use for _writer_daemon
         self.filter_fn = filter_fn

         self._active = False  # used to prevent re-entry
@@ -497,20 +519,21 @@ def __enter__(self):
                 # just don't forward input if this fails
                 pass

-            self.process = multiprocessing.Process(
-                target=_writer_daemon,
-                args=(
-                    input_fd,
-                    read_fd,
-                    self.write_fd,
-                    self.echo,
-                    self.log_file,
-                    child_pipe,
-                    self.filter_fn,
-                ),
-            )
-            self.process.daemon = True  # must set before start()
-            self.process.start()
+            with replace_environment(self.env):
+                self.process = multiprocessing.Process(
+                    target=_writer_daemon,
+                    args=(
+                        input_fd,
+                        read_fd,
+                        self.write_fd,
+                        self.echo,
+                        self.log_file,
+                        child_pipe,
+                        self.filter_fn,
+                    ),
+                )
+                self.process.daemon = True  # must set before start()
+                self.process.start()

         finally:
             if input_fd:
@@ -706,7 +729,10 @@ class winlog:
     Does not support the use of 'v' toggling as nixlog does.
     """

-    def __init__(self, file_like=None, echo=False, debug=0, buffer=False, filter_fn=None):
+    def __init__(
+        self, file_like=None, echo=False, debug=0, buffer=False, env=None, filter_fn=None
+    ):
+        self.env = env
         self.debug = debug
         self.echo = echo
         self.logfile = file_like
@@ -763,10 +789,11 @@ def background_reader(reader, echo_writer, _kill):
                 reader.close()

         self._active = True
-        self._thread = Thread(
-            target=background_reader, args=(self.reader, self.echo_writer, self._kill)
-        )
-        self._thread.start()
+        with replace_environment(self.env):
+            self._thread = Thread(
+                target=background_reader, args=(self.reader, self.echo_writer, self._kill)
+            )
+            self._thread.start()
         return self

     def __exit__(self, exc_type, exc_val, exc_tb):
@@ -891,7 +918,7 @@ def _writer_daemon(
             try:
                 if stdin_file.read(1) == "v":
                     echo = not echo
-            except OSError as e:
+            except IOError as e:
                 # If SIGTTIN is ignored, the system gives EIO
                 # to let the caller know the read failed b/c it
                 # was in the bg. Ignore that too.
@@ -986,7 +1013,7 @@ def wrapped(*args, **kwargs):
         while True:
             try:
                 return function(*args, **kwargs)
-            except OSError as e:
+            except IOError as e:
                 if e.errno == errno.EINTR:
                     continue
                 raise
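
Since the `replace_environment` context manager added above is self-contained, its contract is easy to demonstrate: inside the block only the given variables exist, and on exit the previous environment is restored in full. A small usage sketch, assuming the function as defined in this diff:

import os

os.environ["DEMO_VAR"] = "outer"
with replace_environment({"PATH": "/usr/bin"}):
    # Inside the block only PATH is set; DEMO_VAR (and everything else) is
    # unset, so a process or thread started here inherits exactly this env.
    assert "DEMO_VAR" not in os.environ
    assert os.environ["PATH"] == "/usr/bin"
# On exit the original environment is restored.
assert os.environ["DEMO_VAR"] == "outer"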

View File

@@ -10,21 +10,9 @@
 import spack.util.git

 #: PEP440 canonical <major>.<minor>.<micro>.<devN> string
-__version__ = "1.0.0.dev0"
+__version__ = "0.24.0.dev0"
 spack_version = __version__

-#: The current Package API version implemented by this version of Spack. The Package API defines
-#: the Python interface for packages as well as the layout of package repositories. The minor
-#: version is incremented when the package API is extended in a backwards-compatible way. The major
-#: version is incremented upon breaking changes. This version is changed independently from the
-#: Spack version.
-package_api_version = (1, 0)
-
-#: The minimum Package API version that this version of Spack is compatible with. This should
-#: always be a tuple of the form ``(major, 0)``, since compatibility with vX.Y implies
-#: compatibility with vX.0.
-min_package_api_version = (1, 0)

 def __try_int(v):
     try:
@@ -91,6 +79,4 @@ def get_short_version() -> str:
     "get_version",
     "get_spack_commit",
     "get_short_version",
-    "package_api_version",
-    "min_package_api_version",
 ]

View File

@@ -1010,7 +1010,7 @@ def _issues_in_depends_on_directive(pkgs, error_cls):
     for dep_name, dep in deps_by_name.items():

         def check_virtual_with_variants(spec, msg):
-            if not spack.repo.PATH.is_virtual(spec.name) or not spec.variants:
+            if not spec.virtual or not spec.variants:
                 return
             error = error_cls(
                 f"{pkg_name}: {msg}",
@@ -1356,8 +1356,14 @@ def _test_detection_by_executable(pkgs, debug_log, error_cls):
     def _compare_extra_attribute(_expected, _detected, *, _spec):
         result = []
+        # Check items are of the same type
+        if not isinstance(_detected, type(_expected)):
+            _summary = f'{pkg_name}: error when trying to detect "{_expected}"'
+            _details = [f"{_detected} was detected instead"]
+            return [error_cls(summary=_summary, details=_details)]
+
         # If they are string expected is a regex
-        if isinstance(_expected, str) and isinstance(_detected, str):
+        if isinstance(_expected, str):
             try:
                 _regex = re.compile(_expected)
             except re.error:
@@ -1373,7 +1379,7 @@ def _compare_extra_attribute(_expected, _detected, *, _spec):
                 _details = [f"{_detected} does not match the regex"]
                 return [error_cls(summary=_summary, details=_details)]

-        elif isinstance(_expected, dict) and isinstance(_detected, dict):
+        if isinstance(_expected, dict):
             _not_detected = set(_expected.keys()) - set(_detected.keys())
             if _not_detected:
                 _summary = f"{pkg_name}: cannot detect some attributes for spec {_spec}"
@@ -1388,10 +1394,6 @@ def _compare_extra_attribute(_expected, _detected, *, _spec):
                 result.extend(
                     _compare_extra_attribute(_expected[_key], _detected[_key], _spec=_spec)
                 )
-        else:
-            _summary = f'{pkg_name}: error when trying to detect "{_expected}"'
-            _details = [f"{_detected} was detected instead"]
-            return [error_cls(summary=_summary, details=_details)]

         return result
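
Both sides implement the same comparison semantics: expected strings are treated as regexes, expected dicts are recursed into, and a type mismatch is an immediate error. A standalone sketch of that logic with error reporting reduced to plain strings (names are illustrative, not Spack's API):

import re
from typing import Any, List

def compare_extra_attribute(expected: Any, detected: Any) -> List[str]:
    """Return a list of mismatch descriptions (empty list == match)."""
    if not isinstance(detected, type(expected)):
        return [f"expected {expected!r}, detected {detected!r}"]
    if isinstance(expected, str):
        # Strings are interpreted as regular expressions
        if not re.match(expected, detected):
            return [f"{detected!r} does not match regex {expected!r}"]
        return []
    if isinstance(expected, dict):
        errors = []
        missing = set(expected) - set(detected)
        if missing:
            errors.append(f"missing keys: {sorted(missing)}")
        for key in set(expected) & set(detected):
            errors.extend(compare_extra_attribute(expected[key], detected[key]))
        return errors
    return []

# e.g. compare_extra_attribute({"version": r"1\.\d+"}, {"version": "1.2"}) == []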

View File

@@ -5,7 +5,6 @@
 import codecs
 import collections
 import concurrent.futures
-import contextlib
 import copy
 import hashlib
 import io
@@ -24,7 +23,7 @@
 import urllib.request
 import warnings
 from contextlib import closing
-from typing import IO, Callable, Dict, Iterable, List, NamedTuple, Optional, Set, Tuple, Union
+from typing import IO, Dict, Iterable, List, NamedTuple, Optional, Set, Tuple, Union

 import llnl.util.filesystem as fsys
 import llnl.util.lang
@@ -92,9 +91,6 @@
 CURRENT_BUILD_CACHE_LAYOUT_VERSION = 2

-INDEX_HASH_FILE = "index.json.hash"
-

 class BuildCacheDatabase(spack_db.Database):
     """A database for binary buildcaches.
@@ -506,7 +502,7 @@ def _fetch_and_cache_index(self, mirror_url, cache_entry={}):
         scheme = urllib.parse.urlparse(mirror_url).scheme

         if scheme != "oci" and not web_util.url_exists(
-            url_util.join(mirror_url, BUILD_CACHE_RELATIVE_PATH, spack_db.INDEX_JSON_FILE)
+            url_util.join(mirror_url, BUILD_CACHE_RELATIVE_PATH, "index.json")
         ):
             return False
@@ -595,18 +591,32 @@ def file_matches(f: IO[bytes], regex: llnl.util.lang.PatternBytes) -> bool:
         f.seek(0)

-def specs_to_relocate(spec: spack.spec.Spec) -> List[spack.spec.Spec]:
-    """Return the set of specs that may be referenced in the install prefix of the provided spec.
-    We currently include non-external transitive link and direct run dependencies."""
-    specs = [
-        s
-        for s in itertools.chain(
-            spec.traverse(root=True, deptype="link", order="breadth", key=traverse.by_dag_hash),
-            spec.dependencies(deptype="run"),
-        )
-        if not s.external
-    ]
-    return list(llnl.util.lang.dedupe(specs, key=lambda s: s.dag_hash()))
+def deps_to_relocate(spec):
+    """Return the transitive link and direct run dependencies of the spec.
+
+    This is a special traversal for dependencies we need to consider when relocating a package.
+
+    Package binaries, scripts, and other files may refer to the prefixes of dependencies, so
+    we need to rewrite those locations when dependencies are in a different place at install time
+    than they were at build time.
+
+    This traversal covers transitive link dependencies and direct run dependencies because:
+
+    1. Spack adds RPATHs for transitive link dependencies so that packages can find needed
+       dependency libraries.
+    2. Packages may call any of their *direct* run dependencies (and may bake their paths into
+       binaries or scripts), so we also need to search for run dependency prefixes when relocating.
+
+    This returns a deduplicated list of transitive link dependencies and direct run dependencies.
+    """
+    deps = [
+        s
+        for s in itertools.chain(
+            spec.traverse(root=True, deptype="link"), spec.dependencies(deptype="run")
+        )
+        if not s.external
+    ]
+    return llnl.util.lang.dedupe(deps, key=lambda s: s.dag_hash())

 def get_buildinfo_dict(spec):
@@ -620,7 +630,7 @@ def get_buildinfo_dict(spec):
         # "relocate_binaries": [],
        # "relocate_links": [],
         "hardlinks_deduped": True,
-        "hash_to_prefix": {d.dag_hash(): str(d.prefix) for d in specs_to_relocate(spec)},
+        "hash_to_prefix": {d.dag_hash(): str(d.prefix) for d in deps_to_relocate(spec)},
     }
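
Both variants lean on `llnl.util.lang.dedupe` keyed by DAG hash to drop duplicate nodes while preserving traversal order. An order-preserving dedupe is small enough to sketch in full (a generic stand-in, not Spack's actual implementation):

from typing import Callable, Iterable, Iterator, Optional, TypeVar

T = TypeVar("T")

def dedupe(iterable: Iterable[T], key: Optional[Callable[[T], object]] = None) -> Iterator[T]:
    """Yield items in order, skipping any whose key was already seen."""
    seen = set()
    for item in iterable:
        k = item if key is None else key(item)
        if k not in seen:
            seen.add(k)
            yield item

# e.g. list(dedupe([3, 1, 3, 2, 1])) == [3, 1, 2]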
@@ -673,24 +683,19 @@ def sign_specfile(key: str, specfile_path: str) -> str:

 def _read_specs_and_push_index(
-    file_list: List[str],
-    read_method: Callable,
-    cache_prefix: str,
-    db: BuildCacheDatabase,
-    temp_dir: str,
-    concurrency: int,
+    file_list, read_method, cache_prefix, db: BuildCacheDatabase, temp_dir, concurrency
 ):
     """Read all the specs listed in the provided list, using thread given thread parallelism,
     generate the index, and push it to the mirror.

     Args:
-        file_list: List of urls or file paths pointing at spec files to read
+        file_list (list(str)): List of urls or file paths pointing at spec files to read
         read_method: A function taking a single argument, either a url or a file path,
             and which reads the spec file at that location, and returns the spec.
-        cache_prefix: prefix of the build cache on s3 where index should be pushed.
+        cache_prefix (str): prefix of the build cache on s3 where index should be pushed.
         db: A spack database used for adding specs and then writing the index.
-        temp_dir: Location to write index.json and hash for pushing
-        concurrency: Number of parallel processes to use when fetching
+        temp_dir (str): Location to write index.json and hash for pushing
+        concurrency (int): Number of parallel processes to use when fetching
     """
     for file in file_list:
         contents = read_method(file)
@@ -708,7 +713,7 @@ def _read_specs_and_push_index(
     # Now generate the index, compute its hash, and push the two files to
     # the mirror.
-    index_json_path = os.path.join(temp_dir, spack_db.INDEX_JSON_FILE)
+    index_json_path = os.path.join(temp_dir, "index.json")
     with open(index_json_path, "w", encoding="utf-8") as f:
         db._write_to_file(f)
@@ -718,14 +723,14 @@ def _read_specs_and_push_index(
     index_hash = compute_hash(index_string)

     # Write the hash out to a local file
-    index_hash_path = os.path.join(temp_dir, INDEX_HASH_FILE)
+    index_hash_path = os.path.join(temp_dir, "index.json.hash")
     with open(index_hash_path, "w", encoding="utf-8") as f:
         f.write(index_hash)

     # Push the index itself
     web_util.push_to_url(
         index_json_path,
-        url_util.join(cache_prefix, spack_db.INDEX_JSON_FILE),
+        url_util.join(cache_prefix, "index.json"),
         keep_original=False,
         extra_args={"ContentType": "application/json", "CacheControl": "no-cache"},
     )
@@ -733,7 +738,7 @@ def _read_specs_and_push_index(
     # Push the hash
     web_util.push_to_url(
         index_hash_path,
-        url_util.join(cache_prefix, INDEX_HASH_FILE),
+        url_util.join(cache_prefix, "index.json.hash"),
         keep_original=False,
         extra_args={"ContentType": "text/plain", "CacheControl": "no-cache"},
     )
@@ -802,7 +807,7 @@ def url_read_method(url):
         try:
             _, _, spec_file = web_util.read_from_url(url)
             contents = codecs.getreader("utf-8")(spec_file).read()
-        except (web_util.SpackWebError, OSError) as e:
+        except web_util.SpackWebError as e:
             tty.error(f"Error reading specfile: {url}: {e}")
         return contents
@@ -870,12 +875,9 @@ def _url_generate_package_index(url: str, tmpdir: str, concurrency: int = 32):
     tty.debug(f"Retrieving spec descriptor files from {url} to build index")

     db = BuildCacheDatabase(tmpdir)
-    db._write()

     try:
-        _read_specs_and_push_index(
-            file_list, read_fn, url, db, str(db.database_directory), concurrency
-        )
+        _read_specs_and_push_index(file_list, read_fn, url, db, db.database_directory, concurrency)
     except Exception as e:
         raise GenerateIndexError(f"Encountered problem pushing package index to {url}: {e}") from e
@@ -923,7 +925,7 @@ class FileTypes:
     UNKNOWN = 2

-NOT_ISO8859_1_TEXT = re.compile(b"[\x00\x7f-\x9f]")
+NOT_ISO8859_1_TEXT = re.compile(b"[\x00\x7F-\x9F]")

 def file_type(f: IO[bytes]) -> int:
@@ -1110,7 +1112,7 @@ def _exists_in_buildcache(spec: spack.spec.Spec, tmpdir: str, out_url: str) -> E

 def prefixes_to_relocate(spec):
-    prefixes = [s.prefix for s in specs_to_relocate(spec)]
+    prefixes = [s.prefix for s in deps_to_relocate(spec)]
     prefixes.append(spack.hooks.sbang.sbang_install_path())
     prefixes.append(str(spack.store.STORE.layout.root))
     return prefixes
@@ -1789,7 +1791,7 @@ def _oci_update_index(
         db.mark(spec, "in_buildcache", True)

     # Create the index.json file
-    index_json_path = os.path.join(tmpdir, spack_db.INDEX_JSON_FILE)
+    index_json_path = os.path.join(tmpdir, "index.json")
     with open(index_json_path, "w", encoding="utf-8") as f:
         db._write_to_file(f)
@@ -2010,7 +2012,7 @@ def fetch_url_to_mirror(url):
             # Download the config = spec.json and the relevant tarball
             try:
-                manifest = json.load(response)
+                manifest = json.loads(response.read())
                 spec_digest = spack.oci.image.Digest.from_string(manifest["config"]["digest"])
                 tarball_digest = spack.oci.image.Digest.from_string(
                     manifest["layers"][-1]["digest"]
@@ -2137,9 +2139,10 @@ def fetch_url_to_mirror(url):

 def dedupe_hardlinks_if_necessary(root, buildinfo):
-    """Updates a buildinfo dict for old archives that did not dedupe hardlinks. De-duping hardlinks
-    is necessary when relocating files in parallel and in-place. This means we must preserve inodes
-    when relocating."""
+    """Updates a buildinfo dict for old archives that did
+    not dedupe hardlinks. De-duping hardlinks is necessary
+    when relocating files in parallel and in-place. This
+    means we must preserve inodes when relocating."""

     # New archives don't need this.
     if buildinfo.get("hardlinks_deduped", False):
@@ -2168,48 +2171,65 @@ def dedupe_hardlinks_if_necessary(root, buildinfo):
         buildinfo[key] = new_list
-def relocate_package(spec: spack.spec.Spec) -> None:
-    """Relocate binaries and text files in the given spec prefix, based on its buildinfo file."""
-    spec_prefix = str(spec.prefix)
-    buildinfo = read_buildinfo_file(spec_prefix)
+def relocate_package(spec):
+    """
+    Relocate the given package
+    """
+    workdir = str(spec.prefix)
+    buildinfo = read_buildinfo_file(workdir)
+    new_layout_root = str(spack.store.STORE.layout.root)
+    new_prefix = str(spec.prefix)
+    new_rel_prefix = str(os.path.relpath(new_prefix, new_layout_root))
+    new_spack_prefix = str(spack.paths.prefix)
+
+    old_sbang_install_path = None
+    if "sbang_install_path" in buildinfo:
+        old_sbang_install_path = str(buildinfo["sbang_install_path"])
     old_layout_root = str(buildinfo["buildpath"])
+    old_spack_prefix = str(buildinfo.get("spackprefix"))
+    old_rel_prefix = buildinfo.get("relative_prefix")
+    old_prefix = os.path.join(old_layout_root, old_rel_prefix)
+    rel = buildinfo.get("relative_rpaths", False)

-    # Warn about old style tarballs created with the --rel flag (removed in Spack v0.20)
-    if buildinfo.get("relative_rpaths", False):
-        tty.warn(
-            f"Tarball for {spec} uses relative rpaths, which can cause library loading issues."
-        )
-
-    # In Spack 0.19 and older prefix_to_hash was the default and externals were not dropped, so
-    # prefixes were not unique.
+    # In the past prefix_to_hash was the default and externals were not dropped, so prefixes
+    # were not unique.
     if "hash_to_prefix" in buildinfo:
         hash_to_old_prefix = buildinfo["hash_to_prefix"]
     elif "prefix_to_hash" in buildinfo:
-        hash_to_old_prefix = {v: k for (k, v) in buildinfo["prefix_to_hash"].items()}
+        hash_to_old_prefix = dict((v, k) for (k, v) in buildinfo["prefix_to_hash"].items())
     else:
-        raise NewLayoutException(
-            "Package tarball was created from an install prefix with a different directory layout "
-            "and an older buildcache create implementation. It cannot be relocated."
-        )
+        hash_to_old_prefix = dict()

-    prefix_to_prefix: Dict[str, str] = {}
+    if old_rel_prefix != new_rel_prefix and not hash_to_old_prefix:
+        msg = "Package tarball was created from an install "
+        msg += "prefix with a different directory layout and an older "
+        msg += "buildcache create implementation. It cannot be relocated."
+        raise NewLayoutException(msg)

-    if "sbang_install_path" in buildinfo:
-        old_sbang_install_path = str(buildinfo["sbang_install_path"])
-        prefix_to_prefix[old_sbang_install_path] = spack.hooks.sbang.sbang_install_path()
+    # Spurious replacements (e.g. sbang) will cause issues with binaries
+    # For example, the new sbang can be longer than the old one.
+    # Hence 2 dictionaries are maintained here.
+    prefix_to_prefix_text = collections.OrderedDict()
+    prefix_to_prefix_bin = collections.OrderedDict()
+
+    if old_sbang_install_path:
+        install_path = spack.hooks.sbang.sbang_install_path()
+        prefix_to_prefix_text[old_sbang_install_path] = install_path

-    # First match specific prefix paths. Possibly the *local* install prefix of some dependency is
-    # in an upstream, so we cannot assume the original spack store root can be mapped uniformly to
-    # the new spack store root.
-    # If the spec is spliced, we need to handle the simultaneous mapping from the old install_tree
-    # to the new install_tree and from the build_spec to the spliced spec. Because foo.build_spec
-    # is foo for any non-spliced spec, we can simplify by checking for spliced-in nodes by checking
-    # for nodes not in the build_spec without any explicit check for whether the spec is spliced.
-    # An analog in this algorithm is any spec that shares a name or provides the same virtuals in
-    # the context of the relevant root spec. This ensures that the analog for a spec s is the spec
-    # that s replaced when we spliced.
-    relocation_specs = specs_to_relocate(spec)
+    # First match specific prefix paths. Possibly the *local* install prefix
+    # of some dependency is in an upstream, so we cannot assume the original
+    # spack store root can be mapped uniformly to the new spack store root.
+    #
+    # If the spec is spliced, we need to handle the simultaneous mapping
+    # from the old install_tree to the new install_tree and from the build_spec
+    # to the spliced spec.
+    # Because foo.build_spec is foo for any non-spliced spec, we can simplify
+    # by checking for spliced-in nodes by checking for nodes not in the build_spec
+    # without any explicit check for whether the spec is spliced.
+    # An analog in this algorithm is any spec that shares a name or provides the same virtuals
+    # in the context of the relevant root spec. This ensures that the analog for a spec s
+    # is the spec that s replaced when we spliced.
+    relocation_specs = deps_to_relocate(spec)
     build_spec_ids = set(id(s) for s in spec.build_spec.traverse(deptype=dt.ALL & ~dt.BUILD))
     for s in relocation_specs:
         analog = s
@@ -2228,66 +2248,98 @@ def relocate_package(spec: spack.spec.Spec) -> None:
         lookup_dag_hash = analog.dag_hash()
         if lookup_dag_hash in hash_to_old_prefix:
             old_dep_prefix = hash_to_old_prefix[lookup_dag_hash]
-            prefix_to_prefix[old_dep_prefix] = str(s.prefix)
+            prefix_to_prefix_bin[old_dep_prefix] = str(s.prefix)
+            prefix_to_prefix_text[old_dep_prefix] = str(s.prefix)

     # Only then add the generic fallback of install prefix -> install prefix.
-    prefix_to_prefix[old_layout_root] = str(spack.store.STORE.layout.root)
+    prefix_to_prefix_text[old_prefix] = new_prefix
+    prefix_to_prefix_bin[old_prefix] = new_prefix
+    prefix_to_prefix_text[old_layout_root] = new_layout_root
+    prefix_to_prefix_bin[old_layout_root] = new_layout_root

-    # Delete identity mappings from prefix_to_prefix
-    prefix_to_prefix = {k: v for k, v in prefix_to_prefix.items() if k != v}
+    # This is vestigial code for the *old* location of sbang. Previously,
+    # sbang was a bash script, and it lived in the spack prefix. It is
+    # now a POSIX script that lives in the install prefix. Old packages
+    # will have the old sbang location in their shebangs.
+    orig_sbang = "#!/bin/bash {0}/bin/sbang".format(old_spack_prefix)
+    new_sbang = spack.hooks.sbang.sbang_shebang_line()
+    prefix_to_prefix_text[orig_sbang] = new_sbang

-    # If there's nothing to relocate, we're done.
-    if not prefix_to_prefix:
-        return
+    tty.debug("Relocating package from", "%s to %s." % (old_layout_root, new_layout_root))

-    for old, new in prefix_to_prefix.items():
-        tty.debug(f"Relocating: {old} => {new}.")
+    # Old archives maybe have hardlinks repeated.
+    dedupe_hardlinks_if_necessary(workdir, buildinfo)

-    # Old archives may have hardlinks repeated.
-    dedupe_hardlinks_if_necessary(spec_prefix, buildinfo)
+    def is_backup_file(file):
+        return file.endswith("~")

     # Text files containing the prefix text
-    textfiles = [os.path.join(spec_prefix, f) for f in buildinfo["relocate_textfiles"]]
-    binaries = [os.path.join(spec_prefix, f) for f in buildinfo.get("relocate_binaries")]
-    links = [os.path.join(spec_prefix, f) for f in buildinfo.get("relocate_links", [])]
+    text_names = list()
+    for filename in buildinfo["relocate_textfiles"]:
+        text_name = os.path.join(workdir, filename)
+        # Don't add backup files generated by filter_file during install step.
+        if not is_backup_file(text_name):
+            text_names.append(text_name)

-    platform = spack.platforms.by_name(spec.platform)
-    if "macho" in platform.binary_formats:
-        relocate.relocate_macho_binaries(binaries, prefix_to_prefix)
-    elif "elf" in platform.binary_formats:
-        relocate.relocate_elf_binaries(binaries, prefix_to_prefix)
+    # If we are not installing back to the same install tree do the relocation
+    if old_prefix != new_prefix:
+        files_to_relocate = [
+            os.path.join(workdir, filename) for filename in buildinfo.get("relocate_binaries")
+        ]
+        # If the buildcache was not created with relativized rpaths
+        # do the relocation of path in binaries
+        platform = spack.platforms.by_name(spec.platform)
+        if "macho" in platform.binary_formats:
+            relocate.relocate_macho_binaries(
+                files_to_relocate,
+                old_layout_root,
+                new_layout_root,
+                prefix_to_prefix_bin,
+                rel,
+                old_prefix,
+                new_prefix,
+            )
+        elif "elf" in platform.binary_formats and not rel:
+            # The new ELF dynamic section relocation logic only handles absolute to
+            # absolute relocation.
+            relocate.new_relocate_elf_binaries(files_to_relocate, prefix_to_prefix_bin)
+        elif "elf" in platform.binary_formats and rel:
+            relocate.relocate_elf_binaries(
+                files_to_relocate,
+                old_layout_root,
+                new_layout_root,
+                prefix_to_prefix_bin,
+                rel,
+                old_prefix,
+                new_prefix,
+            )

-    relocate.relocate_links(links, prefix_to_prefix)
-    relocate.relocate_text(textfiles, prefix_to_prefix)
-    changed_files = relocate.relocate_text_bin(binaries, prefix_to_prefix)
+        # Relocate links to the new install prefix
+        links = [os.path.join(workdir, f) for f in buildinfo.get("relocate_links", [])]
+        relocate.relocate_links(links, prefix_to_prefix_bin)

-    # Add ad-hoc signatures to patched macho files when on macOS.
-    if "macho" in platform.binary_formats and sys.platform == "darwin":
-        codesign = which("codesign")
-        if not codesign:
-            return
-        for binary in changed_files:
-            # preserve the original inode by running codesign on a copy
-            with fsys.edit_in_place_through_temporary_file(binary) as tmp_binary:
-                codesign("-fs-", tmp_binary)
+        # For all buildcaches
+        # relocate the install prefixes in text files including dependencies
+        relocate.relocate_text(text_names, prefix_to_prefix_text)

-    install_manifest = os.path.join(
-        spec.prefix,
-        spack.store.STORE.layout.metadata_dir,
-        spack.store.STORE.layout.manifest_file_name,
-    )
-    if not os.path.exists(install_manifest):
-        spec_id = spec.format("{name}/{hash:7}")
-        tty.warn("No manifest file in tarball for spec %s" % spec_id)
+        # relocate the install prefixes in binary files including dependencies
+        changed_files = relocate.relocate_text_bin(files_to_relocate, prefix_to_prefix_bin)

-    # overwrite old metadata with new
-    if spec.spliced:
-        # rewrite spec on disk
-        spack.store.STORE.layout.write_spec(spec, spack.store.STORE.layout.spec_file_path(spec))
+        # Add ad-hoc signatures to patched macho files when on macOS.
+        if "macho" in platform.binary_formats and sys.platform == "darwin":
+            codesign = which("codesign")
+            if not codesign:
+                return
+            for binary in changed_files:
+                # preserve the original inode by running codesign on a copy
+                with fsys.edit_in_place_through_temporary_file(binary) as tmp_binary:
+                    codesign("-fs-", tmp_binary)

-    # de-cache the install manifest
-    with contextlib.suppress(FileNotFoundError):
-        os.unlink(install_manifest)
+    # If we are installing back to the same location
+    # relocate the sbang location if the spack directory changed
+    else:
+        if old_spack_prefix != new_spack_prefix:
+            relocate.relocate_text(text_names, prefix_to_prefix_text)

 def _extract_inner_tarball(spec, filename, extract_to, signature_required: bool, remote_checksum):
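
At its core, the text relocation performed by both versions is a longest-prefix-first substitution over the files recorded in the buildinfo, so that a specific dependency mapping is applied before the store root that contains it. A self-contained sketch of that idea (not `spack.relocate`'s actual code; the function name is made up):

def relocate_text_file(path: str, prefix_to_prefix: dict) -> None:
    """Rewrite old install prefixes to new ones inside a text file."""
    with open(path, "rb") as f:
        data = f.read()
    # Replace longer prefixes first so a generic store-root mapping does not
    # clobber a more specific mapping whose key contains it.
    for old, new in sorted(prefix_to_prefix.items(), key=lambda kv: -len(kv[0])):
        data = data.replace(old.encode(), new.encode())
    with open(path, "wb") as f:
        f.write(data)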
@@ -2455,6 +2507,15 @@ def extract_tarball(spec, download_result, force=False, timer=timer.NULL_TIMER):
     except Exception as e:
         shutil.rmtree(spec.prefix, ignore_errors=True)
         raise e
+    else:
+        manifest_file = os.path.join(
+            spec.prefix,
+            spack.store.STORE.layout.metadata_dir,
+            spack.store.STORE.layout.manifest_file_name,
+        )
+        if not os.path.exists(manifest_file):
+            spec_id = spec.format("{name}/{hash:7}")
+            tty.warn("No manifest file in tarball for spec %s" % spec_id)
     finally:
         if tmpdir:
             shutil.rmtree(tmpdir, ignore_errors=True)
@@ -2529,10 +2590,10 @@ def install_root_node(
         allow_missing: when true, allows installing a node with missing dependencies
     """
     # Early termination
-    if spec.external or not spec.concrete:
-        warnings.warn("Skipping external or abstract spec {0}".format(spec.format()))
+    if spec.external or spec.virtual:
+        warnings.warn("Skipping external or virtual package {0}".format(spec.format()))
         return
-    elif spec.installed and not force:
+    elif spec.concrete and spec.installed and not force:
         warnings.warn("Package for spec {0} already installed.".format(spec.format()))
         return
@@ -2559,6 +2620,10 @@ def install_root_node(
     tty.msg('Installing "{0}" from a buildcache'.format(spec.format()))
     extract_tarball(spec, download_result, force)
     spec.package.windows_establish_runtime_linkage()
-    if spec.spliced:  # overwrite old metadata with new
-        spack.store.STORE.layout.write_spec(
-            spec, spack.store.STORE.layout.spec_file_path(spec)
-        )
     spack.hooks.post_install(spec, False)
     spack.store.STORE.db.add(spec, allow_missing=allow_missing)
@@ -2596,14 +2661,11 @@ def try_direct_fetch(spec, mirrors=None):
         )
         try:
             _, _, fs = web_util.read_from_url(buildcache_fetch_url_signed_json)
-            specfile_contents = codecs.getreader("utf-8")(fs).read()
             specfile_is_signed = True
-        except (web_util.SpackWebError, OSError) as e1:
+        except web_util.SpackWebError as e1:
             try:
                 _, _, fs = web_util.read_from_url(buildcache_fetch_url_json)
-                specfile_contents = codecs.getreader("utf-8")(fs).read()
-                specfile_is_signed = False
-            except (web_util.SpackWebError, OSError) as e2:
+            except web_util.SpackWebError as e2:
                 tty.debug(
                     f"Did not find {specfile_name} on {buildcache_fetch_url_signed_json}",
                     e1,
@@ -2613,6 +2675,7 @@ def try_direct_fetch(spec, mirrors=None):
                     f"Did not find {specfile_name} on {buildcache_fetch_url_json}", e2, level=2
                 )
                 continue
+        specfile_contents = codecs.getreader("utf-8")(fs).read()

         # read the spec from the build cache file. All specs in build caches
         # are concrete (as they are built) so we need to mark this spec
@@ -2706,9 +2769,8 @@ def get_keys(install=False, trust=False, force=False, mirrors=None):
         try:
             _, _, json_file = web_util.read_from_url(keys_index)
-            json_index = sjson.load(json_file)
-        except (web_util.SpackWebError, OSError, ValueError) as url_err:
-            # TODO: avoid repeated request
+            json_index = sjson.load(codecs.getreader("utf-8")(json_file))
+        except web_util.SpackWebError as url_err:
             if web_util.url_exists(keys_index):
                 tty.error(
                     f"Unable to find public keys in {url_util.format(fetch_url)},"
@@ -2955,14 +3017,14 @@ def __init__(self, url, local_hash, urlopen=web_util.urlopen):

     def get_remote_hash(self):
         # Failure to fetch index.json.hash is not fatal
-        url_index_hash = url_util.join(self.url, BUILD_CACHE_RELATIVE_PATH, INDEX_HASH_FILE)
+        url_index_hash = url_util.join(self.url, BUILD_CACHE_RELATIVE_PATH, "index.json.hash")
         try:
             response = self.urlopen(urllib.request.Request(url_index_hash, headers=self.headers))
-            remote_hash = response.read(64)
-        except OSError:
+        except (TimeoutError, urllib.error.URLError):
             return None

         # Validate the hash
+        remote_hash = response.read(64)
         if not re.match(rb"[a-f\d]{64}$", remote_hash):
             return None
         return remote_hash.decode("utf-8")
@@ -2976,17 +3038,17 @@ def conditional_fetch(self) -> FetchIndexResult:
             return FetchIndexResult(etag=None, hash=None, data=None, fresh=True)

         # Otherwise, download index.json
-        url_index = url_util.join(self.url, BUILD_CACHE_RELATIVE_PATH, spack_db.INDEX_JSON_FILE)
+        url_index = url_util.join(self.url, BUILD_CACHE_RELATIVE_PATH, "index.json")

         try:
             response = self.urlopen(urllib.request.Request(url_index, headers=self.headers))
-        except OSError as e:
-            raise FetchIndexError(f"Could not fetch index from {url_index}", e) from e
+        except (TimeoutError, urllib.error.URLError) as e:
+            raise FetchIndexError("Could not fetch index from {}".format(url_index), e) from e

         try:
             result = codecs.getreader("utf-8")(response).read()
-        except (ValueError, OSError) as e:
-            raise FetchIndexError(f"Remote index {url_index} is invalid") from e
+        except ValueError as e:
+            raise FetchIndexError("Remote index {} is invalid".format(url_index), e) from e

         computed_hash = compute_hash(result)
@@ -3020,7 +3082,7 @@ def __init__(self, url, etag, urlopen=web_util.urlopen):

     def conditional_fetch(self) -> FetchIndexResult:
         # Just do a conditional fetch immediately
-        url = url_util.join(self.url, BUILD_CACHE_RELATIVE_PATH, spack_db.INDEX_JSON_FILE)
+        url = url_util.join(self.url, BUILD_CACHE_RELATIVE_PATH, "index.json")
         headers = {"User-Agent": web_util.SPACK_USER_AGENT, "If-None-Match": f'"{self.etag}"'}

         try:
@@ -3030,12 +3092,12 @@ def conditional_fetch(self) -> FetchIndexResult:
                 # Not modified; that means fresh.
                 return FetchIndexResult(etag=None, hash=None, data=None, fresh=True)
             raise FetchIndexError(f"Could not fetch index {url}", e) from e
-        except OSError as e:  # URLError, socket.timeout, etc.
+        except (TimeoutError, urllib.error.URLError) as e:
             raise FetchIndexError(f"Could not fetch index {url}", e) from e

         try:
             result = codecs.getreader("utf-8")(response).read()
-        except (ValueError, OSError) as e:
+        except ValueError as e:
             raise FetchIndexError(f"Remote index {url} is invalid", e) from e

         headers = response.headers
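
The ETag-based strategy above is plain HTTP conditional fetching: send the cached validator in `If-None-Match` and treat a 304 response as "still fresh". The same pattern with only the standard library looks like this (the URL and etag in the usage line are made up):

import urllib.error
import urllib.request

def fetch_if_changed(url: str, etag: str):
    """Return (body, new_etag), or None if the server says 304 Not Modified."""
    request = urllib.request.Request(url, headers={"If-None-Match": f'"{etag}"'})
    try:
        response = urllib.request.urlopen(request)
    except urllib.error.HTTPError as e:
        if e.code == 304:
            return None  # our cached copy is still fresh
        raise
    return response.read(), response.headers.get("Etag")

# e.g. fetch_if_changed("https://mirror.example.com/build_cache/index.json", "abc123")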
@@ -3067,11 +3129,11 @@ def conditional_fetch(self) -> FetchIndexResult:
                     headers={"Accept": "application/vnd.oci.image.manifest.v1+json"},
                 )
             )
-        except OSError as e:
+        except (TimeoutError, urllib.error.URLError) as e:
             raise FetchIndexError(f"Could not fetch manifest from {url_manifest}", e) from e

         try:
-            manifest = json.load(response)
+            manifest = json.loads(response.read())
         except Exception as e:
             raise FetchIndexError(f"Remote index {url_manifest} is invalid", e) from e
@@ -3086,16 +3148,14 @@ def conditional_fetch(self) -> FetchIndexResult:
             return FetchIndexResult(etag=None, hash=None, data=None, fresh=True)

         # Otherwise fetch the blob / index.json
-        try:
-            response = self.urlopen(
-                urllib.request.Request(
-                    url=self.ref.blob_url(index_digest),
-                    headers={"Accept": "application/vnd.oci.image.layer.v1.tar+gzip"},
-                )
-            )
-            result = codecs.getreader("utf-8")(response).read()
-        except (OSError, ValueError) as e:
-            raise FetchIndexError(f"Remote index {url_manifest} is invalid", e) from e
+        response = self.urlopen(
+            urllib.request.Request(
+                url=self.ref.blob_url(index_digest),
+                headers={"Accept": "application/vnd.oci.image.layer.v1.tar+gzip"},
+            )
+        )
+
+        result = codecs.getreader("utf-8")(response).read()

         # Make sure the blob we download has the advertised hash
         if compute_hash(result) != index_digest.digest:

View File

@@ -5,14 +5,12 @@
 import fnmatch
 import glob
 import importlib
-import os
+import os.path
 import re
 import sys
 import sysconfig
 import warnings
-from typing import Optional, Sequence, Union
-
-from typing_extensions import TypedDict
+from typing import Dict, Optional, Sequence, Union

 import archspec.cpu
@@ -20,17 +18,13 @@
 from llnl.util import tty

 import spack.platforms
-import spack.spec
 import spack.store
 import spack.util.environment
 import spack.util.executable

 from .config import spec_for_current_python

-
-class QueryInfo(TypedDict, total=False):
-    spec: spack.spec.Spec
-    command: spack.util.executable.Executable
+QueryInfo = Dict[str, "spack.spec.Spec"]

 def _python_import(module: str) -> bool:
@@ -217,9 +211,7 @@ def _executables_in_store(
     ):
         spack.util.environment.path_put_first("PATH", [bin_dir])
         if query_info is not None:
-            query_info["command"] = spack.util.executable.which(
-                *executables, path=bin_dir, required=True
-            )
+            query_info["command"] = spack.util.executable.which(*executables, path=bin_dir)
             query_info["spec"] = concrete_spec
         return True
     return False
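
The left-hand side replaces a plain `Dict` alias with a `TypedDict`, which tells a type checker exactly which keys a query-info mapping may carry and that both are optional. A minimal sketch of the difference, with string stand-ins for the Spack types:

from typing import TypedDict  # or typing_extensions on older Pythons

class QueryInfo(TypedDict, total=False):
    spec: str     # stand-in for spack.spec.Spec
    command: str  # stand-in for spack.util.executable.Executable

info: QueryInfo = {}        # total=False: keys may be absent
info["spec"] = "gnupg@2.4"  # OK: known key, right type
# info["version"] = "2.4"   # a type checker flags this unknown key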

View File

@@ -27,9 +27,9 @@
 class ClingoBootstrapConcretizer:
     def __init__(self, configuration):
         self.host_platform = spack.platforms.host()
-        self.host_os = self.host_platform.default_operating_system()
+        self.host_os = self.host_platform.operating_system("frontend")
         self.host_target = archspec.cpu.host().family
-        self.host_architecture = spack.spec.ArchSpec.default_arch()
+        self.host_architecture = spack.spec.ArchSpec.frontend_arch()
         self.host_architecture.target = str(self.host_target)
         self.host_compiler = self._valid_compiler_or_raise()
         self.host_python = self.python_external_spec()

View File

@@ -4,7 +4,7 @@
 """Manage configuration swapping for bootstrapping purposes"""
 import contextlib
-import os
+import os.path
 import sys
 from typing import Any, Dict, Generator, MutableSequence, Sequence
@@ -141,7 +141,7 @@ def _bootstrap_config_scopes() -> Sequence["spack.config.ConfigScope"]:

 def _add_compilers_if_missing() -> None:
-    arch = spack.spec.ArchSpec.default_arch()
+    arch = spack.spec.ArchSpec.frontend_arch()
     if not spack.compilers.compilers_for_arch(arch):
         spack.compilers.find_compilers()

View File

@@ -25,6 +25,7 @@
import functools import functools
import json import json
import os import os
import os.path
import sys import sys
import uuid import uuid
from typing import Any, Callable, Dict, List, Optional, Tuple from typing import Any, Callable, Dict, List, Optional, Tuple
@@ -33,10 +34,8 @@
from llnl.util.lang import GroupedExceptionHandler from llnl.util.lang import GroupedExceptionHandler
import spack.binary_distribution import spack.binary_distribution
import spack.concretize
import spack.config import spack.config
import spack.detection import spack.detection
import spack.error
import spack.mirrors.mirror import spack.mirrors.mirror
import spack.platforms import spack.platforms
import spack.spec import spack.spec
@@ -45,17 +44,10 @@
import spack.util.executable import spack.util.executable
import spack.util.path import spack.util.path
import spack.util.spack_yaml import spack.util.spack_yaml
import spack.util.url
import spack.version import spack.version
from spack.installer import PackageInstaller from spack.installer import PackageInstaller
from ._common import ( from ._common import _executables_in_store, _python_import, _root_spec, _try_import_from_store
QueryInfo,
_executables_in_store,
_python_import,
_root_spec,
_try_import_from_store,
)
from .clingo import ClingoBootstrapConcretizer from .clingo import ClingoBootstrapConcretizer
from .config import spack_python_interpreter, spec_for_current_python from .config import spack_python_interpreter, spec_for_current_python
@@ -97,12 +89,8 @@ def __init__(self, conf: ConfigDictionary) -> None:
self.name = conf["name"] self.name = conf["name"]
self.metadata_dir = spack.util.path.canonicalize_path(conf["metadata"]) self.metadata_dir = spack.util.path.canonicalize_path(conf["metadata"])
# Check for relative paths, and turn them into absolute paths # Promote (relative) paths to file urls
# root is the metadata_dir self.url = spack.mirrors.mirror.Mirror(conf["info"]["url"]).fetch_url
maybe_url = conf["info"]["url"]
if spack.util.url.is_path_instead_of_url(maybe_url) and not os.path.isabs(maybe_url):
maybe_url = os.path.join(self.metadata_dir, maybe_url)
self.url = spack.mirrors.mirror.Mirror(maybe_url).fetch_url
@property @property
def mirror_scope(self) -> spack.config.InternalConfigScope: def mirror_scope(self) -> spack.config.InternalConfigScope:
@@ -146,7 +134,7 @@ class BuildcacheBootstrapper(Bootstrapper):
def __init__(self, conf) -> None: def __init__(self, conf) -> None:
super().__init__(conf) super().__init__(conf)
self.last_search: Optional[QueryInfo] = None self.last_search: Optional[ConfigDictionary] = None
self.config_scope_name = f"bootstrap_buildcache-{uuid.uuid4()}" self.config_scope_name = f"bootstrap_buildcache-{uuid.uuid4()}"
@staticmethod @staticmethod
@@ -223,14 +211,14 @@ def _install_and_test(
for _, pkg_hash, pkg_sha256 in item["binaries"]: for _, pkg_hash, pkg_sha256 in item["binaries"]:
self._install_by_hash(pkg_hash, pkg_sha256, bincache_platform) self._install_by_hash(pkg_hash, pkg_sha256, bincache_platform)
info: QueryInfo = {} info: ConfigDictionary = {}
if test_fn(query_spec=abstract_spec, query_info=info): if test_fn(query_spec=abstract_spec, query_info=info):
self.last_search = info self.last_search = info
return True return True
return False return False
def try_import(self, module: str, abstract_spec_str: str) -> bool: def try_import(self, module: str, abstract_spec_str: str) -> bool:
info: QueryInfo info: ConfigDictionary
test_fn, info = functools.partial(_try_import_from_store, module), {} test_fn, info = functools.partial(_try_import_from_store, module), {}
if test_fn(query_spec=abstract_spec_str, query_info=info): if test_fn(query_spec=abstract_spec_str, query_info=info):
return True return True
@@ -243,7 +231,7 @@ def try_import(self, module: str, abstract_spec_str: str) -> bool:
return self._install_and_test(abstract_spec, bincache_platform, data, test_fn) return self._install_and_test(abstract_spec, bincache_platform, data, test_fn)
def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bool: def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bool:
info: QueryInfo info: ConfigDictionary
test_fn, info = functools.partial(_executables_in_store, executables), {} test_fn, info = functools.partial(_executables_in_store, executables), {}
if test_fn(query_spec=abstract_spec_str, query_info=info): if test_fn(query_spec=abstract_spec_str, query_info=info):
self.last_search = info self.last_search = info
@@ -261,11 +249,11 @@ class SourceBootstrapper(Bootstrapper):
def __init__(self, conf) -> None: def __init__(self, conf) -> None:
super().__init__(conf) super().__init__(conf)
self.last_search: Optional[QueryInfo] = None self.last_search: Optional[ConfigDictionary] = None
self.config_scope_name = f"bootstrap_source-{uuid.uuid4()}" self.config_scope_name = f"bootstrap_source-{uuid.uuid4()}"
def try_import(self, module: str, abstract_spec_str: str) -> bool: def try_import(self, module: str, abstract_spec_str: str) -> bool:
info: QueryInfo = {} info: ConfigDictionary = {}
if _try_import_from_store(module, abstract_spec_str, query_info=info): if _try_import_from_store(module, abstract_spec_str, query_info=info):
self.last_search = info self.last_search = info
return True return True
@@ -282,22 +270,17 @@ def try_import(self, module: str, abstract_spec_str: str) -> bool:
bootstrapper = ClingoBootstrapConcretizer(configuration=spack.config.CONFIG) bootstrapper = ClingoBootstrapConcretizer(configuration=spack.config.CONFIG)
concrete_spec = bootstrapper.concretize() concrete_spec = bootstrapper.concretize()
else: else:
abstract_spec = spack.spec.Spec( concrete_spec = spack.spec.Spec(
abstract_spec_str + " ^" + spec_for_current_python() abstract_spec_str + " ^" + spec_for_current_python()
) )
concrete_spec = spack.concretize.concretize_one(abstract_spec) concrete_spec.concretize()
msg = "[BOOTSTRAP MODULE {0}] Try installing '{1}' from sources" msg = "[BOOTSTRAP MODULE {0}] Try installing '{1}' from sources"
tty.debug(msg.format(module, abstract_spec_str)) tty.debug(msg.format(module, abstract_spec_str))
# Install the spec that should make the module importable # Install the spec that should make the module importable
with spack.config.override(self.mirror_scope): with spack.config.override(self.mirror_scope):
PackageInstaller( PackageInstaller([concrete_spec.package], fail_fast=True).install()
[concrete_spec.package],
fail_fast=True,
package_use_cache=False,
dependencies_use_cache=False,
).install()
if _try_import_from_store(module, query_spec=concrete_spec, query_info=info): if _try_import_from_store(module, query_spec=concrete_spec, query_info=info):
self.last_search = info self.last_search = info
@@ -305,7 +288,7 @@ def try_import(self, module: str, abstract_spec_str: str) -> bool:
return False return False
def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bool: def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bool:
info: QueryInfo = {} info: ConfigDictionary = {}
if _executables_in_store(executables, abstract_spec_str, query_info=info): if _executables_in_store(executables, abstract_spec_str, query_info=info):
self.last_search = info self.last_search = info
return True return True
@@ -316,7 +299,7 @@ def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bo
# might reduce compilation time by a fair amount # might reduce compilation time by a fair amount
_add_externals_if_missing() _add_externals_if_missing()
concrete_spec = spack.concretize.concretize_one(abstract_spec_str) concrete_spec = spack.spec.Spec(abstract_spec_str).concretized()
msg = "[BOOTSTRAP] Try installing '{0}' from sources" msg = "[BOOTSTRAP] Try installing '{0}' from sources"
tty.debug(msg.format(abstract_spec_str)) tty.debug(msg.format(abstract_spec_str))
with spack.config.override(self.mirror_scope): with spack.config.override(self.mirror_scope):
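Both source-bootstrap hunks above switch from the removed Spec methods to the free function; a short sketch of the two styles (spec string illustrative):

import spack.concretize
import spack.spec

abstract = spack.spec.Spec("ninja@1.11:")              # illustrative abstract spec
concrete = spack.concretize.concretize_one(abstract)   # new style
# removed styles: abstract.concretize() / abstract.concretized()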
@@ -333,9 +316,11 @@ def create_bootstrapper(conf: ConfigDictionary):
return _bootstrap_methods[btype](conf) return _bootstrap_methods[btype](conf)
def source_is_enabled(conf: ConfigDictionary) -> bool: def source_is_enabled_or_raise(conf: ConfigDictionary):
"""Returns true if the source is not enabled for bootstrapping""" """Raise ValueError if the source is not enabled for bootstrapping"""
return spack.config.get("bootstrap:trusted").get(conf["name"], False) trusted, name = spack.config.get("bootstrap:trusted"), conf["name"]
if not trusted.get(name, False):
raise ValueError("source is not trusted")
def ensure_module_importable_or_raise(module: str, abstract_spec: Optional[str] = None): def ensure_module_importable_or_raise(module: str, abstract_spec: Optional[str] = None):
@@ -365,24 +350,24 @@ def ensure_module_importable_or_raise(module: str, abstract_spec: Optional[str]
exception_handler = GroupedExceptionHandler() exception_handler = GroupedExceptionHandler()
for current_config in bootstrapping_sources(): for current_config in bootstrapping_sources():
if not source_is_enabled(current_config):
continue
with exception_handler.forward(current_config["name"], Exception): with exception_handler.forward(current_config["name"], Exception):
if create_bootstrapper(current_config).try_import(module, abstract_spec): source_is_enabled_or_raise(current_config)
current_bootstrapper = create_bootstrapper(current_config)
if current_bootstrapper.try_import(module, abstract_spec):
return return
assert exception_handler, (
f"expected at least one exception to have been raised at this point: "
f"while bootstrapping {module}"
)
msg = f'cannot bootstrap the "{module}" Python module ' msg = f'cannot bootstrap the "{module}" Python module '
if abstract_spec: if abstract_spec:
msg += f'from spec "{abstract_spec}" ' msg += f'from spec "{abstract_spec}" '
if tty.is_debug():
if not exception_handler:
msg += ": no bootstrapping sources are enabled"
elif spack.error.debug or spack.error.SHOW_BACKTRACE:
msg += exception_handler.grouped_message(with_tracebacks=True) msg += exception_handler.grouped_message(with_tracebacks=True)
else: else:
msg += exception_handler.grouped_message(with_tracebacks=False) msg += exception_handler.grouped_message(with_tracebacks=False)
msg += "\nRun `spack --backtrace ...` for more detailed errors" msg += "\nRun `spack --debug ...` for more detailed errors"
raise ImportError(msg) raise ImportError(msg)
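A self-contained sketch of the new control flow: disabled sources are skipped before the attempt, so an empty handler now means "nothing was enabled" rather than "nothing failed". The handler below is a simplified stand-in for GroupedExceptionHandler, not Spack's implementation:

from contextlib import contextmanager

class Handler:
    def __init__(self):
        self.failures = []

    def __bool__(self):
        return bool(self.failures)

    @contextmanager
    def forward(self, name, exc_type):
        try:
            yield
        except exc_type as e:
            self.failures.append(f"{name}: {e}")

def bootstrap(sources, attempt):
    handler = Handler()
    for conf in sources:
        if not conf.get("enabled", False):  # mirrors source_is_enabled()
            continue
        with handler.forward(conf["name"], Exception):
            if attempt(conf):
                return
    if not handler:
        raise ImportError("no bootstrapping sources are enabled")
    raise ImportError("all enabled sources failed:\n" + "\n".join(handler.failures))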
@@ -420,9 +405,8 @@ def ensure_executables_in_path_or_raise(
exception_handler = GroupedExceptionHandler() exception_handler = GroupedExceptionHandler()
for current_config in bootstrapping_sources(): for current_config in bootstrapping_sources():
if not source_is_enabled(current_config):
continue
with exception_handler.forward(current_config["name"], Exception): with exception_handler.forward(current_config["name"], Exception):
source_is_enabled_or_raise(current_config)
current_bootstrapper = create_bootstrapper(current_config) current_bootstrapper = create_bootstrapper(current_config)
if current_bootstrapper.try_search_path(executables, abstract_spec): if current_bootstrapper.try_search_path(executables, abstract_spec):
# Additional environment variables needed # Additional environment variables needed
@@ -430,7 +414,6 @@ def ensure_executables_in_path_or_raise(
current_bootstrapper.last_search["spec"], current_bootstrapper.last_search["spec"],
current_bootstrapper.last_search["command"], current_bootstrapper.last_search["command"],
) )
assert cmd is not None, "expected an Executable"
cmd.add_default_envmod( cmd.add_default_envmod(
spack.user_environment.environment_modifications_for_specs( spack.user_environment.environment_modifications_for_specs(
concrete_spec, set_package_py_globals=False concrete_spec, set_package_py_globals=False
@@ -438,17 +421,18 @@ def ensure_executables_in_path_or_raise(
) )
return cmd return cmd
assert exception_handler, (
f"expected at least one exception to have been raised at this point: "
f"while bootstrapping {executables_str}"
)
msg = f"cannot bootstrap any of the {executables_str} executables " msg = f"cannot bootstrap any of the {executables_str} executables "
if abstract_spec: if abstract_spec:
msg += f'from spec "{abstract_spec}" ' msg += f'from spec "{abstract_spec}" '
if tty.is_debug():
if not exception_handler:
msg += ": no bootstrapping sources are enabled"
elif spack.error.debug or spack.error.SHOW_BACKTRACE:
msg += exception_handler.grouped_message(with_tracebacks=True) msg += exception_handler.grouped_message(with_tracebacks=True)
else: else:
msg += exception_handler.grouped_message(with_tracebacks=False) msg += exception_handler.grouped_message(with_tracebacks=False)
msg += "\nRun `spack --backtrace ...` for more detailed errors" msg += "\nRun `spack --debug ...` for more detailed errors"
raise RuntimeError(msg) raise RuntimeError(msg)

View File

@@ -63,6 +63,7 @@ def _missing(name: str, purpose: str, system_only: bool = True) -> str:
def _core_requirements() -> List[RequiredResponseType]: def _core_requirements() -> List[RequiredResponseType]:
_core_system_exes = { _core_system_exes = {
"make": _missing("make", "required to build software from sources"),
"patch": _missing("patch", "required to patch source code before building"), "patch": _missing("patch", "required to patch source code before building"),
"tar": _missing("tar", "required to manage code archives"), "tar": _missing("tar", "required to manage code archives"),
"gzip": _missing("gzip", "required to compress/decompress code archives"), "gzip": _missing("gzip", "required to compress/decompress code archives"),

View File

@@ -44,19 +44,7 @@
from enum import Flag, auto from enum import Flag, auto
from itertools import chain from itertools import chain
from multiprocessing.connection import Connection from multiprocessing.connection import Connection
from typing import ( from typing import Callable, Dict, List, Optional, Set, Tuple
Callable,
Dict,
List,
Optional,
Sequence,
Set,
TextIO,
Tuple,
Type,
Union,
overload,
)
import archspec.cpu import archspec.cpu
@@ -158,128 +146,48 @@ def get_effective_jobs(jobs, parallel=True, supports_jobserver=False):
class MakeExecutable(Executable): class MakeExecutable(Executable):
"""Special callable executable object for make so the user can specify parallelism options """Special callable executable object for make so the user can specify
on a per-invocation basis. parallelism options on a per-invocation basis. Specifying
'parallel' to the call will override whatever the package's
global setting is, so you can either default to true or false and
override particular calls. Specifying 'jobs_env' to a particular
call will name an environment variable which will be set to the
parallelism level (without affecting the normal invocation with
-j).
""" """
def __init__(self, name: str, *, jobs: int, supports_jobserver: bool = True) -> None: def __init__(self, name, jobs, **kwargs):
super().__init__(name) supports_jobserver = kwargs.pop("supports_jobserver", True)
super().__init__(name, **kwargs)
self.supports_jobserver = supports_jobserver self.supports_jobserver = supports_jobserver
self.jobs = jobs self.jobs = jobs
@overload def __call__(self, *args, **kwargs):
def __call__( """parallel, and jobs_env from kwargs are swallowed and used here;
self, remaining arguments are passed through to the superclass.
*args: str,
parallel: bool = ...,
jobs_env: Optional[str] = ...,
jobs_env_supports_jobserver: bool = ...,
fail_on_error: bool = ...,
ignore_errors: Union[int, Sequence[int]] = ...,
ignore_quotes: Optional[bool] = ...,
timeout: Optional[int] = ...,
env: Optional[Union[Dict[str, str], EnvironmentModifications]] = ...,
extra_env: Optional[Union[Dict[str, str], EnvironmentModifications]] = ...,
input: Optional[TextIO] = ...,
output: Union[Optional[TextIO], str] = ...,
error: Union[Optional[TextIO], str] = ...,
_dump_env: Optional[Dict[str, str]] = ...,
) -> None: ...
@overload
def __call__(
self,
*args: str,
parallel: bool = ...,
jobs_env: Optional[str] = ...,
jobs_env_supports_jobserver: bool = ...,
fail_on_error: bool = ...,
ignore_errors: Union[int, Sequence[int]] = ...,
ignore_quotes: Optional[bool] = ...,
timeout: Optional[int] = ...,
env: Optional[Union[Dict[str, str], EnvironmentModifications]] = ...,
extra_env: Optional[Union[Dict[str, str], EnvironmentModifications]] = ...,
input: Optional[TextIO] = ...,
output: Union[Type[str], Callable] = ...,
error: Union[Optional[TextIO], str, Type[str], Callable] = ...,
_dump_env: Optional[Dict[str, str]] = ...,
) -> str: ...
@overload
def __call__(
self,
*args: str,
parallel: bool = ...,
jobs_env: Optional[str] = ...,
jobs_env_supports_jobserver: bool = ...,
fail_on_error: bool = ...,
ignore_errors: Union[int, Sequence[int]] = ...,
ignore_quotes: Optional[bool] = ...,
timeout: Optional[int] = ...,
env: Optional[Union[Dict[str, str], EnvironmentModifications]] = ...,
extra_env: Optional[Union[Dict[str, str], EnvironmentModifications]] = ...,
input: Optional[TextIO] = ...,
output: Union[Optional[TextIO], str, Type[str], Callable] = ...,
error: Union[Type[str], Callable] = ...,
_dump_env: Optional[Dict[str, str]] = ...,
) -> str: ...
def __call__(
self,
*args: str,
parallel: bool = True,
jobs_env: Optional[str] = None,
jobs_env_supports_jobserver: bool = False,
**kwargs,
) -> Optional[str]:
"""Runs this "make" executable in a subprocess.
Args:
parallel: if False, parallelism is disabled
jobs_env: environment variable that will be set to the current level of parallelism
jobs_env_supports_jobserver: whether the jobs env supports a job server
For all the other **kwargs, refer to the base class.
""" """
parallel = kwargs.pop("parallel", True)
jobs_env = kwargs.pop("jobs_env", None)
jobs_env_supports_jobserver = kwargs.pop("jobs_env_supports_jobserver", False)
jobs = get_effective_jobs( jobs = get_effective_jobs(
self.jobs, parallel=parallel, supports_jobserver=self.supports_jobserver self.jobs, parallel=parallel, supports_jobserver=self.supports_jobserver
) )
if jobs is not None: if jobs is not None:
args = (f"-j{jobs}",) + args args = ("-j{0}".format(jobs),) + args
if jobs_env: if jobs_env:
# Caller wants us to set an environment variable to control the parallelism # Caller wants us to set an environment variable to
# control the parallelism.
jobs_env_jobs = get_effective_jobs( jobs_env_jobs = get_effective_jobs(
self.jobs, parallel=parallel, supports_jobserver=jobs_env_supports_jobserver self.jobs, parallel=parallel, supports_jobserver=jobs_env_supports_jobserver
) )
if jobs_env_jobs is not None: if jobs_env_jobs is not None:
extra_env = kwargs.setdefault("extra_env", {}) kwargs["extra_env"] = {jobs_env: str(jobs_env_jobs)}
extra_env.update({jobs_env: str(jobs_env_jobs)})
return super().__call__(*args, **kwargs) return super().__call__(*args, **kwargs)
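A simplified sketch of the -j computation feeding the keyword-only __call__ above (jobserver detection stubbed out; Spack's get_effective_jobs also honors environment overrides):

def effective_jobs(jobs, parallel=True, supports_jobserver=False, jobserver_active=False):
    if not parallel or jobs <= 1:
        return 1                 # serial build
    if supports_jobserver and jobserver_active:
        return None              # inherit parallelism from MAKEFLAGS; emit no -j
    return jobs

args = ("install",)
jobs = effective_jobs(8)
if jobs is not None:
    args = (f"-j{jobs}",) + args  # mirrors the f-string in __call__
print(args)                       # ('-j8', 'install')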
class UndeclaredDependencyError(spack.error.SpackError):
"""Raised if a dependency is invoking an executable through a module global, without
declaring a dependency on it.
"""
class DeprecatedExecutable:
def __init__(self, pkg: str, exe: str, exe_pkg: str) -> None:
self.pkg = pkg
self.exe = exe
self.exe_pkg = exe_pkg
def __call__(self, *args, **kwargs):
raise UndeclaredDependencyError(
f"{self.pkg} is using {self.exe} without declaring a dependency on {self.exe_pkg}"
)
def add_default_env(self, key: str, value: str):
self.__call__()
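A runnable sketch of the guard above: the stand-in raises instead of silently shelling out, pointing packagers at the missing depends_on (package names illustrative):

class UndeclaredDependencyError(Exception):
    pass

class DeprecatedExecutable:
    def __init__(self, pkg, exe, exe_pkg):
        self.pkg, self.exe, self.exe_pkg = pkg, exe, exe_pkg

    def __call__(self, *args, **kwargs):
        raise UndeclaredDependencyError(
            f"{self.pkg} is using {self.exe} without declaring a dependency on {self.exe_pkg}"
        )

make = DeprecatedExecutable("zlib", "make", "gmake")
try:
    make("-j8")
except UndeclaredDependencyError as e:
    print(e)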
def clean_environment(): def clean_environment():
# Stuff in here sanitizes the build environment to eliminate # Stuff in here sanitizes the build environment to eliminate
# anything the user has set that may interfere. We apply it immediately # anything the user has set that may interfere. We apply it immediately
@@ -301,13 +209,11 @@ def clean_environment():
env.unset("CPLUS_INCLUDE_PATH") env.unset("CPLUS_INCLUDE_PATH")
env.unset("OBJC_INCLUDE_PATH") env.unset("OBJC_INCLUDE_PATH")
# prevent configure scripts from sourcing variables from config site file (AC_SITE_LOAD).
env.set("CONFIG_SITE", os.devnull)
env.unset("CMAKE_PREFIX_PATH") env.unset("CMAKE_PREFIX_PATH")
env.unset("PYTHONPATH") env.unset("PYTHONPATH")
env.unset("R_HOME") env.unset("R_HOME")
env.unset("R_ENVIRON") env.unset("R_ENVIRON")
env.unset("LUA_PATH") env.unset("LUA_PATH")
env.unset("LUA_CPATH") env.unset("LUA_CPATH")
@@ -715,9 +621,10 @@ def set_package_py_globals(pkg, context: Context = Context.BUILD):
module.std_meson_args = spack.build_systems.meson.MesonBuilder.std_args(pkg) module.std_meson_args = spack.build_systems.meson.MesonBuilder.std_args(pkg)
module.std_pip_args = spack.build_systems.python.PythonPipBuilder.std_args(pkg) module.std_pip_args = spack.build_systems.python.PythonPipBuilder.std_args(pkg)
module.make = DeprecatedExecutable(pkg.name, "make", "gmake") # TODO: make these build deps that can be installed if not found.
module.gmake = DeprecatedExecutable(pkg.name, "gmake", "gmake") module.make = MakeExecutable("make", jobs)
module.ninja = DeprecatedExecutable(pkg.name, "ninja", "ninja") module.gmake = MakeExecutable("gmake", jobs)
module.ninja = MakeExecutable("ninja", jobs, supports_jobserver=False)
# TODO: johnwparent: add package or builder support to define these build tools # TODO: johnwparent: add package or builder support to define these build tools
# for now there is no entrypoint for builders to define these on their # for now there is no entrypoint for builders to define these on their
# own # own

View File

@@ -6,9 +6,7 @@
import llnl.util.filesystem as fs import llnl.util.filesystem as fs
import spack.directives import spack.directives
import spack.spec
import spack.util.executable import spack.util.executable
import spack.util.prefix
from .autotools import AutotoolsBuilder, AutotoolsPackage from .autotools import AutotoolsBuilder, AutotoolsPackage
@@ -19,18 +17,19 @@ class AspellBuilder(AutotoolsBuilder):
to the Aspell extensions. to the Aspell extensions.
""" """
def configure( def configure(self, pkg, spec, prefix):
self,
pkg: "AspellDictPackage", # type: ignore[override]
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
):
aspell = spec["aspell"].prefix.bin.aspell aspell = spec["aspell"].prefix.bin.aspell
prezip = spec["aspell"].prefix.bin.prezip prezip = spec["aspell"].prefix.bin.prezip
destdir = prefix destdir = prefix
sh = spack.util.executable.Executable("/bin/sh") sh = spack.util.executable.which("sh")
sh("./configure", "--vars", f"ASPELL={aspell}", f"PREZIP={prezip}", f"DESTDIR={destdir}") sh(
"./configure",
"--vars",
"ASPELL={0}".format(aspell),
"PREZIP={0}".format(prezip),
"DESTDIR={0}".format(destdir),
)
# Aspell dictionaries install their bits into their prefix.lib # Aspell dictionaries install their bits into their prefix.lib

View File

@@ -2,6 +2,7 @@
# #
# SPDX-License-Identifier: (Apache-2.0 OR MIT) # SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os import os
import os.path
import stat import stat
import subprocess import subprocess
from typing import Callable, List, Optional, Set, Tuple, Union from typing import Callable, List, Optional, Set, Tuple, Union
@@ -355,13 +356,6 @@ def _do_patch_libtool_configure(self) -> None:
) )
# Support Libtool 2.4.2 and older: # Support Libtool 2.4.2 and older:
x.filter(regex=r'^(\s*test \$p = "-R")(; then\s*)$', repl=r'\1 || test x-l = x"$p"\2') x.filter(regex=r'^(\s*test \$p = "-R")(; then\s*)$', repl=r'\1 || test x-l = x"$p"\2')
# Configure scripts generated with libtool < 2.5.4 have a faulty test for the
# -single_module linker flag. A deprecation warning makes it think the default is
# -multi_module, triggering it to use problematic linker flags (such as ld -r). The
# linker default is `-single_module` from (ancient) macOS 10.4, so override by setting
# `lt_cv_apple_cc_single_mod=yes`. See the fix in libtool commit
# 82f7f52123e4e7e50721049f7fa6f9b870e09c9d.
x.filter("lt_cv_apple_cc_single_mod=no", "lt_cv_apple_cc_single_mod=yes", string=True)
@spack.phase_callbacks.run_after("configure") @spack.phase_callbacks.run_after("configure")
def _do_patch_libtool(self) -> None: def _do_patch_libtool(self) -> None:
@@ -533,7 +527,7 @@ def build_directory(self) -> str:
return build_dir return build_dir
@spack.phase_callbacks.run_before("autoreconf") @spack.phase_callbacks.run_before("autoreconf")
def _delete_configure_to_force_update(self) -> None: def delete_configure_to_force_update(self) -> None:
if self.force_autoreconf: if self.force_autoreconf:
fs.force_remove(self.configure_abs_path) fs.force_remove(self.configure_abs_path)
@@ -546,7 +540,7 @@ def autoreconf_search_path_args(self) -> List[str]:
return _autoreconf_search_path_args(self.spec) return _autoreconf_search_path_args(self.spec)
@spack.phase_callbacks.run_after("autoreconf") @spack.phase_callbacks.run_after("autoreconf")
def _set_configure_or_die(self) -> None: def set_configure_or_die(self) -> None:
"""Ensure the presence of a "configure" script, or raise. If the "configure" """Ensure the presence of a "configure" script, or raise. If the "configure"
is found, a module level attribute is set. is found, a module level attribute is set.
@@ -570,7 +564,10 @@ def configure_args(self) -> List[str]:
return [] return []
def autoreconf( def autoreconf(
self, pkg: AutotoolsPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None: ) -> None:
"""Not needed usually, configure should be already there""" """Not needed usually, configure should be already there"""
@@ -599,7 +596,10 @@ def autoreconf(
self.pkg.module.autoreconf(*autoreconf_args) self.pkg.module.autoreconf(*autoreconf_args)
def configure( def configure(
self, pkg: AutotoolsPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None: ) -> None:
"""Run "configure", with the arguments specified by the builder and an """Run "configure", with the arguments specified by the builder and an
appropriately set prefix. appropriately set prefix.
@@ -612,7 +612,10 @@ def configure(
pkg.module.configure(*options) pkg.module.configure(*options)
def build( def build(
self, pkg: AutotoolsPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None: ) -> None:
"""Run "make" on the build targets specified by the builder.""" """Run "make" on the build targets specified by the builder."""
# See https://autotools.io/automake/silent.html # See https://autotools.io/automake/silent.html
@@ -622,7 +625,10 @@ def build(
pkg.module.make(*params) pkg.module.make(*params)
def install( def install(
self, pkg: AutotoolsPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None: ) -> None:
"""Run "make" on the install targets specified by the builder.""" """Run "make" on the install targets specified by the builder."""
with fs.working_dir(self.build_directory): with fs.working_dir(self.build_directory):
@@ -819,7 +825,7 @@ def installcheck(self) -> None:
self.pkg._if_make_target_execute("installcheck") self.pkg._if_make_target_execute("installcheck")
@spack.phase_callbacks.run_after("install") @spack.phase_callbacks.run_after("install")
def _remove_libtool_archives(self) -> None: def remove_libtool_archives(self) -> None:
"""Remove all .la files in prefix sub-folders if the package sets """Remove all .la files in prefix sub-folders if the package sets
``install_libtool_archives`` to be False. ``install_libtool_archives`` to be False.
""" """

View File

@@ -10,9 +10,6 @@
import llnl.util.tty as tty import llnl.util.tty as tty
import spack.phase_callbacks import spack.phase_callbacks
import spack.spec
import spack.util.prefix
from spack.directives import depends_on
from .cmake import CMakeBuilder, CMakePackage from .cmake import CMakeBuilder, CMakePackage
@@ -296,26 +293,12 @@ def initconfig_hardware_entries(self):
entries.append(cmake_cache_string("AMDGPU_TARGETS", arch_str)) entries.append(cmake_cache_string("AMDGPU_TARGETS", arch_str))
entries.append(cmake_cache_string("GPU_TARGETS", arch_str)) entries.append(cmake_cache_string("GPU_TARGETS", arch_str))
if spec.satisfies("%gcc"):
entries.append(
cmake_cache_string(
"CMAKE_HIP_FLAGS", f"--gcc-toolchain={self.pkg.compiler.prefix}"
)
)
return entries return entries
def std_initconfig_entries(self): def std_initconfig_entries(self):
cmake_prefix_path_env = os.environ["CMAKE_PREFIX_PATH"] cmake_prefix_path_env = os.environ["CMAKE_PREFIX_PATH"]
cmake_prefix_path = cmake_prefix_path_env.replace(os.pathsep, ";") cmake_prefix_path = cmake_prefix_path_env.replace(os.pathsep, ";")
complete_rpath_list = ";".join(
[
self.pkg.spec.prefix.lib,
self.pkg.spec.prefix.lib64,
*os.environ.get("SPACK_COMPILER_EXTRA_RPATHS", "").split(":"),
*os.environ.get("SPACK_COMPILER_IMPLICIT_RPATHS", "").split(":"),
]
)
return [ return [
"#------------------{0}".format("-" * 60), "#------------------{0}".format("-" * 60),
"# !!!! This is a generated file, edit at own risk !!!!", "# !!!! This is a generated file, edit at own risk !!!!",
@@ -324,8 +307,6 @@ def std_initconfig_entries(self):
"#------------------{0}\n".format("-" * 60), "#------------------{0}\n".format("-" * 60),
cmake_cache_string("CMAKE_PREFIX_PATH", cmake_prefix_path), cmake_cache_string("CMAKE_PREFIX_PATH", cmake_prefix_path),
cmake_cache_string("CMAKE_INSTALL_RPATH_USE_LINK_PATH", "ON"), cmake_cache_string("CMAKE_INSTALL_RPATH_USE_LINK_PATH", "ON"),
cmake_cache_string("CMAKE_BUILD_RPATH", complete_rpath_list),
cmake_cache_string("CMAKE_INSTALL_RPATH", complete_rpath_list),
self.define_cmake_cache_from_variant("CMAKE_BUILD_TYPE", "build_type"), self.define_cmake_cache_from_variant("CMAKE_BUILD_TYPE", "build_type"),
] ]
@@ -333,9 +314,7 @@ def initconfig_package_entries(self):
"""This method is to be overwritten by the package""" """This method is to be overwritten by the package"""
return [] return []
def initconfig( def initconfig(self, pkg, spec, prefix):
self, pkg: "CachedCMakePackage", spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
cache_entries = ( cache_entries = (
self.std_initconfig_entries() self.std_initconfig_entries()
+ self.initconfig_compiler_entries() + self.initconfig_compiler_entries()
@@ -372,10 +351,6 @@ class CachedCMakePackage(CMakePackage):
CMakeBuilder = CachedCMakeBuilder CMakeBuilder = CachedCMakeBuilder
# These dependencies are assumed in the builder
depends_on("c", type="build")
depends_on("cxx", type="build")
def flag_handler(self, name, flags): def flag_handler(self, name, flags):
if name in ("cflags", "cxxflags", "cppflags", "fflags"): if name in ("cflags", "cxxflags", "cppflags", "fflags"):
return None, None, None # handled in the cmake cache return None, None, None # handled in the cmake cache

View File

@@ -7,8 +7,6 @@
import spack.builder import spack.builder
import spack.package_base import spack.package_base
import spack.phase_callbacks import spack.phase_callbacks
import spack.spec
import spack.util.prefix
from spack.directives import build_system, depends_on from spack.directives import build_system, depends_on
from spack.multimethod import when from spack.multimethod import when
@@ -70,12 +68,6 @@ def build_directory(self):
"""Return the directory containing the main Cargo.toml.""" """Return the directory containing the main Cargo.toml."""
return self.pkg.stage.source_path return self.pkg.stage.source_path
@property
def std_build_args(self):
"""Standard arguments for ``cargo build`` provided as a property for
convenience of package writers."""
return ["-j", str(self.pkg.module.make_jobs)]
@property @property
def build_args(self): def build_args(self):
"""Arguments for ``cargo build``.""" """Arguments for ``cargo build``."""
@@ -86,21 +78,12 @@ def check_args(self):
"""Argument for ``cargo test`` during check phase""" """Argument for ``cargo test`` during check phase"""
return [] return []
def setup_build_environment(self, env): def build(self, pkg, spec, prefix):
env.set("CARGO_HOME", self.stage.path)
def build(
self, pkg: CargoPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Runs ``cargo install`` in the source directory""" """Runs ``cargo install`` in the source directory"""
with fs.working_dir(self.build_directory): with fs.working_dir(self.build_directory):
pkg.module.cargo( pkg.module.cargo("install", "--root", "out", "--path", ".", *self.build_args)
"install", "--root", "out", "--path", ".", *self.std_build_args, *self.build_args
)
def install( def install(self, pkg, spec, prefix):
self, pkg: CargoPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Copy build files into package prefix.""" """Copy build files into package prefix."""
with fs.working_dir(self.build_directory): with fs.working_dir(self.build_directory):
fs.install_tree("out", prefix) fs.install_tree("out", prefix)

View File

@@ -11,7 +11,6 @@
from typing import Any, List, Optional, Tuple from typing import Any, List, Optional, Tuple
import llnl.util.filesystem as fs import llnl.util.filesystem as fs
from llnl.util import tty
from llnl.util.lang import stable_partition from llnl.util.lang import stable_partition
import spack.builder import spack.builder
@@ -455,27 +454,18 @@ def cmake_args(self) -> List[str]:
return [] return []
def cmake( def cmake(
self, pkg: CMakePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None: ) -> None:
"""Runs ``cmake`` in the build directory""" """Runs ``cmake`` in the build directory"""
if spec.is_develop: # skip cmake phase if it is an incremental develop build
# skip cmake phase if it is an incremental develop build if spec.is_develop and os.path.isfile(
os.path.join(self.build_directory, "CMakeCache.txt")
# Determine the files that will re-run CMake that are generated from a successful ):
# configure step based on state return
primary_generator = _extract_primary_generator(self.generator)
configure_artifact = "Makefile"
if primary_generator == "Ninja":
configure_artifact = "ninja.build"
if os.path.isfile(os.path.join(self.build_directory, configure_artifact)):
tty.msg(
"Incremental build criteria satisfied."
"Skipping CMake configure step. To force configuration run"
f" `spack clean {pkg.name}`"
)
return
options = self.std_cmake_args options = self.std_cmake_args
options += self.cmake_args() options += self.cmake_args()
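A self-contained sketch of the new configure-skip test for develop builds: a generator artifact left by a previous successful configure, rather than CMakeCache.txt, decides whether cmake re-runs (generator extraction simplified; `spack clean <pkg>` forces reconfiguration):

import os

def should_skip_configure(build_dir: str, primary_generator: str) -> bool:
    # a successful configure step leaves a Makefile (or build.ninja) behind
    artifact = "build.ninja" if primary_generator == "Ninja" else "Makefile"
    return os.path.isfile(os.path.join(build_dir, artifact))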
@@ -484,7 +474,10 @@ def cmake(
pkg.module.cmake(*options) pkg.module.cmake(*options)
def build( def build(
self, pkg: CMakePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None: ) -> None:
"""Make the build targets""" """Make the build targets"""
with fs.working_dir(self.build_directory): with fs.working_dir(self.build_directory):
@@ -495,7 +488,10 @@ def build(
pkg.module.ninja(*self.build_targets) pkg.module.ninja(*self.build_targets)
def install( def install(
self, pkg: CMakePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None: ) -> None:
"""Make the install targets""" """Make the install targets"""
with fs.working_dir(self.build_directory): with fs.working_dir(self.build_directory):

View File

@@ -15,7 +15,7 @@ class CudaPackage(PackageBase):
"""Auxiliary class which contains CUDA variant, dependencies and conflicts """Auxiliary class which contains CUDA variant, dependencies and conflicts
and is meant to unify and facilitate its usage. and is meant to unify and facilitate its usage.
Maintainers: ax3l, Rombur, davidbeckingsale, pauleonix Maintainers: ax3l, Rombur, davidbeckingsale
""" """
# https://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html#gpu-feature-list # https://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html#gpu-feature-list
@@ -47,12 +47,6 @@ class CudaPackage(PackageBase):
"89", "89",
"90", "90",
"90a", "90a",
"100",
"100a",
"101",
"101a",
"120",
"120a",
) )
# FIXME: keep cuda and cuda_arch separate to make usage easier until # FIXME: keep cuda and cuda_arch separate to make usage easier until
@@ -105,56 +99,39 @@ def compute_capabilities(arch_list: Iterable[str]) -> List[str]:
# CUDA version vs Architecture # CUDA version vs Architecture
# https://en.wikipedia.org/wiki/CUDA#GPUs_supported # https://en.wikipedia.org/wiki/CUDA#GPUs_supported
# https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#deprecated-features # https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#deprecated-features
# Tesla support:
depends_on("cuda@:6.0", when="cuda_arch=10") depends_on("cuda@:6.0", when="cuda_arch=10")
depends_on("cuda@:6.5", when="cuda_arch=11") depends_on("cuda@:6.5", when="cuda_arch=11")
depends_on("cuda@2.1:6.5", when="cuda_arch=12") depends_on("cuda@2.1:6.5", when="cuda_arch=12")
depends_on("cuda@2.1:6.5", when="cuda_arch=13") depends_on("cuda@2.1:6.5", when="cuda_arch=13")
# Fermi support:
depends_on("cuda@3.0:8.0", when="cuda_arch=20") depends_on("cuda@3.0:8.0", when="cuda_arch=20")
depends_on("cuda@3.2:8.0", when="cuda_arch=21") depends_on("cuda@3.2:8.0", when="cuda_arch=21")
# Kepler support:
depends_on("cuda@5.0:10.2", when="cuda_arch=30") depends_on("cuda@5.0:10.2", when="cuda_arch=30")
depends_on("cuda@5.0:10.2", when="cuda_arch=32") depends_on("cuda@5.0:10.2", when="cuda_arch=32")
depends_on("cuda@5.0:11.8", when="cuda_arch=35") depends_on("cuda@5.0:11.8", when="cuda_arch=35")
depends_on("cuda@6.5:11.8", when="cuda_arch=37") depends_on("cuda@6.5:11.8", when="cuda_arch=37")
# Maxwell support:
depends_on("cuda@6.0:", when="cuda_arch=50") depends_on("cuda@6.0:", when="cuda_arch=50")
depends_on("cuda@6.5:", when="cuda_arch=52") depends_on("cuda@6.5:", when="cuda_arch=52")
depends_on("cuda@6.5:", when="cuda_arch=53") depends_on("cuda@6.5:", when="cuda_arch=53")
# Pascal support:
depends_on("cuda@8.0:", when="cuda_arch=60") depends_on("cuda@8.0:", when="cuda_arch=60")
depends_on("cuda@8.0:", when="cuda_arch=61") depends_on("cuda@8.0:", when="cuda_arch=61")
depends_on("cuda@8.0:", when="cuda_arch=62") depends_on("cuda@8.0:", when="cuda_arch=62")
# Volta support:
depends_on("cuda@9.0:", when="cuda_arch=70") depends_on("cuda@9.0:", when="cuda_arch=70")
# Turing support:
depends_on("cuda@9.0:", when="cuda_arch=72") depends_on("cuda@9.0:", when="cuda_arch=72")
depends_on("cuda@10.0:", when="cuda_arch=75") depends_on("cuda@10.0:", when="cuda_arch=75")
# Ampere support:
depends_on("cuda@11.0:", when="cuda_arch=80") depends_on("cuda@11.0:", when="cuda_arch=80")
depends_on("cuda@11.1:", when="cuda_arch=86") depends_on("cuda@11.1:", when="cuda_arch=86")
depends_on("cuda@11.4:", when="cuda_arch=87") depends_on("cuda@11.4:", when="cuda_arch=87")
# Ada support:
depends_on("cuda@11.8:", when="cuda_arch=89") depends_on("cuda@11.8:", when="cuda_arch=89")
# Hopper support:
depends_on("cuda@12.0:", when="cuda_arch=90") depends_on("cuda@12.0:", when="cuda_arch=90")
depends_on("cuda@12.0:", when="cuda_arch=90a") depends_on("cuda@12.0:", when="cuda_arch=90a")
# Blackwell support:
depends_on("cuda@12.8:", when="cuda_arch=100")
depends_on("cuda@12.8:", when="cuda_arch=100a")
depends_on("cuda@12.8:", when="cuda_arch=101")
depends_on("cuda@12.8:", when="cuda_arch=101a")
depends_on("cuda@12.8:", when="cuda_arch=120")
depends_on("cuda@12.8:", when="cuda_arch=120a")
# From the NVIDIA install guide we know of conflicts for particular # From the NVIDIA install guide we know of conflicts for particular
# platforms (linux, darwin), architectures (x86, powerpc) and compilers # platforms (linux, darwin), architectures (x86, powerpc) and compilers
# (gcc, clang). We don't restrict %gcc and %clang conflicts to # (gcc, clang). We don't restrict %gcc and %clang conflicts to
@@ -186,7 +163,6 @@ def compute_capabilities(arch_list: Iterable[str]) -> List[str]:
conflicts("%gcc@12:", when="+cuda ^cuda@:11.8") conflicts("%gcc@12:", when="+cuda ^cuda@:11.8")
conflicts("%gcc@13:", when="+cuda ^cuda@:12.3") conflicts("%gcc@13:", when="+cuda ^cuda@:12.3")
conflicts("%gcc@14:", when="+cuda ^cuda@:12.6") conflicts("%gcc@14:", when="+cuda ^cuda@:12.6")
conflicts("%gcc@15:", when="+cuda ^cuda@:12.8")
conflicts("%clang@12:", when="+cuda ^cuda@:11.4.0") conflicts("%clang@12:", when="+cuda ^cuda@:11.4.0")
conflicts("%clang@13:", when="+cuda ^cuda@:11.5") conflicts("%clang@13:", when="+cuda ^cuda@:11.5")
conflicts("%clang@14:", when="+cuda ^cuda@:11.7") conflicts("%clang@14:", when="+cuda ^cuda@:11.7")
@@ -195,7 +171,6 @@ def compute_capabilities(arch_list: Iterable[str]) -> List[str]:
conflicts("%clang@17:", when="+cuda ^cuda@:12.3") conflicts("%clang@17:", when="+cuda ^cuda@:12.3")
conflicts("%clang@18:", when="+cuda ^cuda@:12.5") conflicts("%clang@18:", when="+cuda ^cuda@:12.5")
conflicts("%clang@19:", when="+cuda ^cuda@:12.6") conflicts("%clang@19:", when="+cuda ^cuda@:12.6")
conflicts("%clang@20:", when="+cuda ^cuda@:12.8")
# https://gist.github.com/ax3l/9489132#gistcomment-3860114 # https://gist.github.com/ax3l/9489132#gistcomment-3860114
conflicts("%gcc@10", when="+cuda ^cuda@:11.4.0") conflicts("%gcc@10", when="+cuda ^cuda@:11.4.0")

View File

@@ -7,8 +7,6 @@
import spack.directives import spack.directives
import spack.package_base import spack.package_base
import spack.phase_callbacks import spack.phase_callbacks
import spack.spec
import spack.util.prefix
from ._checks import BuilderWithDefaults, apply_macos_rpath_fixups, execute_install_time_tests from ._checks import BuilderWithDefaults, apply_macos_rpath_fixups, execute_install_time_tests
@@ -50,8 +48,3 @@ class GenericBuilder(BuilderWithDefaults):
# unconditionally perform any post-install phase tests # unconditionally perform any post-install phase tests
spack.phase_callbacks.run_after("install")(execute_install_time_tests) spack.phase_callbacks.run_after("install")(execute_install_time_tests)
def install(
self, pkg: Package, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
raise NotImplementedError

View File

@@ -7,9 +7,7 @@
import spack.builder import spack.builder
import spack.package_base import spack.package_base
import spack.phase_callbacks import spack.phase_callbacks
import spack.spec from spack.directives import build_system, extends
import spack.util.prefix
from spack.directives import build_system, depends_on
from spack.multimethod import when from spack.multimethod import when
from ._checks import BuilderWithDefaults, execute_install_time_tests from ._checks import BuilderWithDefaults, execute_install_time_tests
@@ -28,7 +26,9 @@ class GoPackage(spack.package_base.PackageBase):
build_system("go") build_system("go")
with when("build_system=go"): with when("build_system=go"):
depends_on("go", type="build") # TODO: this seems like it should be depends_on, see
# setup_dependent_build_environment in go for why I kept it like this
extends("go@1.14:", type="build")
@spack.builder.builder("go") @spack.builder.builder("go")
@@ -71,7 +71,6 @@ class GoBuilder(BuilderWithDefaults):
def setup_build_environment(self, env): def setup_build_environment(self, env):
env.set("GO111MODULE", "on") env.set("GO111MODULE", "on")
env.set("GOTOOLCHAIN", "local") env.set("GOTOOLCHAIN", "local")
env.set("GOPATH", fs.join_path(self.pkg.stage.path, "go"))
@property @property
def build_directory(self): def build_directory(self):
@@ -82,31 +81,19 @@ def build_directory(self):
def build_args(self): def build_args(self):
"""Arguments for ``go build``.""" """Arguments for ``go build``."""
# Pass ldflags -s = --strip-all and -w = --no-warnings by default # Pass ldflags -s = --strip-all and -w = --no-warnings by default
return [ return ["-modcacherw", "-ldflags", "-s -w", "-o", f"{self.pkg.name}"]
"-p",
str(self.pkg.module.make_jobs),
"-modcacherw",
"-ldflags",
"-s -w",
"-o",
f"{self.pkg.name}",
]
@property @property
def check_args(self): def check_args(self):
"""Argument for ``go test`` during check phase""" """Argument for ``go test`` during check phase"""
return [] return []
def build( def build(self, pkg, spec, prefix):
self, pkg: GoPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Runs ``go build`` in the source directory""" """Runs ``go build`` in the source directory"""
with fs.working_dir(self.build_directory): with fs.working_dir(self.build_directory):
pkg.module.go("build", *self.build_args) pkg.module.go("build", *self.build_args)
def install( def install(self, pkg, spec, prefix):
self, pkg: GoPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Install built binaries into prefix bin.""" """Install built binaries into prefix bin."""
with fs.working_dir(self.build_directory): with fs.working_dir(self.build_directory):
fs.mkdirp(prefix.bin) fs.mkdirp(prefix.bin)
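A hedged reconstruction of the go invocation from build_args above, with the new -p parallelism cap (job count and binary name illustrative):

make_jobs = 16  # illustrative; taken from the package module in Spack
argv = ["go", "build", "-p", str(make_jobs), "-modcacherw",
        "-ldflags", "-s -w", "-o", "mypkg"]
print(" ".join(argv))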

View File

@@ -7,9 +7,7 @@
import spack.builder import spack.builder
import spack.package_base import spack.package_base
import spack.spec
import spack.util.executable import spack.util.executable
import spack.util.prefix
from spack.directives import build_system, depends_on, extends from spack.directives import build_system, depends_on, extends
from spack.multimethod import when from spack.multimethod import when
@@ -57,9 +55,7 @@ class LuaBuilder(spack.builder.Builder):
#: Names associated with package attributes in the old build-system format #: Names associated with package attributes in the old build-system format
legacy_attributes = () legacy_attributes = ()
def unpack( def unpack(self, pkg, spec, prefix):
self, pkg: LuaPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
if os.path.splitext(pkg.stage.archive_file)[1] == ".rock": if os.path.splitext(pkg.stage.archive_file)[1] == ".rock":
directory = pkg.luarocks("unpack", pkg.stage.archive_file, output=str) directory = pkg.luarocks("unpack", pkg.stage.archive_file, output=str)
dirlines = directory.split("\n") dirlines = directory.split("\n")
@@ -70,16 +66,15 @@ def unpack(
def _generate_tree_line(name, prefix): def _generate_tree_line(name, prefix):
return """{{ name = "{name}", root = "{prefix}" }};""".format(name=name, prefix=prefix) return """{{ name = "{name}", root = "{prefix}" }};""".format(name=name, prefix=prefix)
def generate_luarocks_config( def generate_luarocks_config(self, pkg, spec, prefix):
self, pkg: LuaPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
spec = self.pkg.spec spec = self.pkg.spec
table_entries = [] table_entries = []
for d in spec.traverse(deptype=("build", "run")): for d in spec.traverse(deptype=("build", "run")):
if d.package.extends(self.pkg.extendee_spec): if d.package.extends(self.pkg.extendee_spec):
table_entries.append(self._generate_tree_line(d.name, d.prefix)) table_entries.append(self._generate_tree_line(d.name, d.prefix))
with open(self._luarocks_config_path(), "w", encoding="utf-8") as config: path = self._luarocks_config_path()
with open(path, "w", encoding="utf-8") as config:
config.write( config.write(
""" """
deps_mode="all" deps_mode="all"
@@ -90,26 +85,23 @@ def generate_luarocks_config(
"\n".join(table_entries) "\n".join(table_entries)
) )
) )
return path
def preprocess( def preprocess(self, pkg, spec, prefix):
self, pkg: LuaPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Override this to preprocess source before building with luarocks""" """Override this to preprocess source before building with luarocks"""
pass pass
def luarocks_args(self): def luarocks_args(self):
return [] return []
def install( def install(self, pkg, spec, prefix):
self, pkg: LuaPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
rock = "." rock = "."
specs = find(".", "*.rockspec", recursive=False) specs = find(".", "*.rockspec", recursive=False)
if specs: if specs:
rock = specs[0] rock = specs[0]
rocks_args = self.luarocks_args() rocks_args = self.luarocks_args()
rocks_args.append(rock) rocks_args.append(rock)
pkg.luarocks("--tree=" + prefix, "make", *rocks_args) self.pkg.luarocks("--tree=" + prefix, "make", *rocks_args)
def _luarocks_config_path(self): def _luarocks_config_path(self):
return os.path.join(self.pkg.stage.source_path, "spack_luarocks.lua") return os.path.join(self.pkg.stage.source_path, "spack_luarocks.lua")
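A sketch of the spack_luarocks.lua content that generate_luarocks_config writes, assuming the template above (package names and prefixes illustrative):

def tree_line(name, prefix):
    return '{{ name = "{0}", root = "{1}" }};'.format(name, prefix)

entries = [tree_line("lua-lpeg", "/opt/spack/lua-lpeg"),
           tree_line("lua-luafilesystem", "/opt/spack/lua-luafilesystem")]
print('deps_mode="all"\nrocks_trees={\n%s\n}' % "\n".join(entries))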

View File

@@ -98,20 +98,29 @@ def build_directory(self) -> str:
return self.pkg.stage.source_path return self.pkg.stage.source_path
def edit( def edit(
self, pkg: MakefilePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None: ) -> None:
"""Edit the Makefile before calling make. The default is a no-op.""" """Edit the Makefile before calling make. The default is a no-op."""
pass pass
def build( def build(
self, pkg: MakefilePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None: ) -> None:
"""Run "make" on the build targets specified by the builder.""" """Run "make" on the build targets specified by the builder."""
with fs.working_dir(self.build_directory): with fs.working_dir(self.build_directory):
pkg.module.make(*self.build_targets) pkg.module.make(*self.build_targets)
def install( def install(
self, pkg: MakefilePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None: ) -> None:
"""Run "make" on the install targets specified by the builder.""" """Run "make" on the install targets specified by the builder."""
with fs.working_dir(self.build_directory): with fs.working_dir(self.build_directory):

View File

@@ -5,8 +5,6 @@
import spack.builder import spack.builder
import spack.package_base import spack.package_base
import spack.spec
import spack.util.prefix
from spack.directives import build_system, depends_on from spack.directives import build_system, depends_on
from spack.multimethod import when from spack.multimethod import when
from spack.util.executable import which from spack.util.executable import which
@@ -60,20 +58,16 @@ def build_args(self):
"""List of args to pass to build phase.""" """List of args to pass to build phase."""
return [] return []
def build( def build(self, pkg, spec, prefix):
self, pkg: MavenPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Compile code and package into a JAR file.""" """Compile code and package into a JAR file."""
with fs.working_dir(self.build_directory): with fs.working_dir(self.build_directory):
mvn = which("mvn", required=True) mvn = which("mvn")
if self.pkg.run_tests: if self.pkg.run_tests:
mvn("verify", *self.build_args()) mvn("verify", *self.build_args())
else: else:
mvn("package", "-DskipTests", *self.build_args()) mvn("package", "-DskipTests", *self.build_args())
def install( def install(self, pkg, spec, prefix):
self, pkg: MavenPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Copy to installation prefix.""" """Copy to installation prefix."""
with fs.working_dir(self.build_directory): with fs.working_dir(self.build_directory):
fs.install_tree(".", prefix) fs.install_tree(".", prefix)

View File

@@ -48,9 +48,6 @@ class MesonPackage(spack.package_base.PackageBase):
variant("strip", default=False, description="Strip targets on install") variant("strip", default=False, description="Strip targets on install")
depends_on("meson", type="build") depends_on("meson", type="build")
depends_on("ninja", type="build") depends_on("ninja", type="build")
# Meson uses pkg-config for dependency detection, and this dependency is
# often overlooked by packages that use meson as a build system.
depends_on("pkgconfig", type="build")
# Python detection in meson requires distutils to be importable, but distutils no longer # Python detection in meson requires distutils to be importable, but distutils no longer
# exists in Python 3.12. In Spack, we can't use setuptools as distutils replacement, # exists in Python 3.12. In Spack, we can't use setuptools as distutils replacement,
# because the distutils-precedence.pth startup file that setuptools ships with is not run # because the distutils-precedence.pth startup file that setuptools ships with is not run
@@ -191,7 +188,10 @@ def meson_args(self) -> List[str]:
return [] return []
def meson( def meson(
self, pkg: MesonPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None: ) -> None:
"""Run ``meson`` in the build directory""" """Run ``meson`` in the build directory"""
options = [] options = []
@@ -204,7 +204,10 @@ def meson(
pkg.module.meson(*options) pkg.module.meson(*options)
def build( def build(
self, pkg: MesonPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None: ) -> None:
"""Make the build targets""" """Make the build targets"""
options = ["-v"] options = ["-v"]
@@ -213,7 +216,10 @@ def build(
pkg.module.ninja(*options) pkg.module.ninja(*options)
def install( def install(
self, pkg: MesonPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None: ) -> None:
"""Make the install targets""" """Make the install targets"""
with fs.working_dir(self.build_directory): with fs.working_dir(self.build_directory):

View File

@@ -7,8 +7,6 @@
import spack.builder import spack.builder
import spack.package_base import spack.package_base
import spack.spec
import spack.util.prefix
from spack.directives import build_system, conflicts from spack.directives import build_system, conflicts
from ._checks import BuilderWithDefaults from ._checks import BuilderWithDefaults
@@ -101,9 +99,7 @@ def msbuild_install_args(self):
as `msbuild_args` by default.""" as `msbuild_args` by default."""
return self.msbuild_args() return self.msbuild_args()
def build( def build(self, pkg, spec, prefix):
self, pkg: MSBuildPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Run "msbuild" on the build targets specified by the builder.""" """Run "msbuild" on the build targets specified by the builder."""
with fs.working_dir(self.build_directory): with fs.working_dir(self.build_directory):
pkg.module.msbuild( pkg.module.msbuild(
@@ -112,9 +108,7 @@ def build(
self.define_targets(*self.build_targets), self.define_targets(*self.build_targets),
) )
def install( def install(self, pkg, spec, prefix):
self, pkg: MSBuildPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Run "msbuild" on the install targets specified by the builder. """Run "msbuild" on the install targets specified by the builder.
This is INSTALL by default""" This is INSTALL by default"""
with fs.working_dir(self.build_directory): with fs.working_dir(self.build_directory):

View File

@@ -7,8 +7,6 @@
import spack.builder import spack.builder
import spack.package_base import spack.package_base
import spack.spec
import spack.util.prefix
from spack.directives import build_system, conflicts from spack.directives import build_system, conflicts
from ._checks import BuilderWithDefaults from ._checks import BuilderWithDefaults
@@ -125,9 +123,7 @@ def nmake_install_args(self):
Individual packages should override to specify NMake args to command line""" Individual packages should override to specify NMake args to command line"""
return [] return []
def build( def build(self, pkg, spec, prefix):
self, pkg: NMakePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Run "nmake" on the build targets specified by the builder.""" """Run "nmake" on the build targets specified by the builder."""
opts = self.std_nmake_args opts = self.std_nmake_args
opts += self.nmake_args() opts += self.nmake_args()
@@ -136,9 +132,7 @@ def build(
with fs.working_dir(self.build_directory): with fs.working_dir(self.build_directory):
pkg.module.nmake(*opts, *self.build_targets, ignore_quotes=self.ignore_quotes) pkg.module.nmake(*opts, *self.build_targets, ignore_quotes=self.ignore_quotes)
def install( def install(self, pkg, spec, prefix):
self, pkg: NMakePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Run "nmake" on the install targets specified by the builder. """Run "nmake" on the install targets specified by the builder.
This is INSTALL by default""" This is INSTALL by default"""
opts = self.std_nmake_args opts = self.std_nmake_args

View File

@@ -3,8 +3,6 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT) # SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.builder import spack.builder
import spack.package_base import spack.package_base
import spack.spec
import spack.util.prefix
from spack.directives import build_system, extends from spack.directives import build_system, extends
from spack.multimethod import when from spack.multimethod import when
@@ -44,9 +42,7 @@ class OctaveBuilder(BuilderWithDefaults):
#: Names associated with package attributes in the old build-system format #: Names associated with package attributes in the old build-system format
legacy_attributes = () legacy_attributes = ()
def install( def install(self, pkg, spec, prefix):
self, pkg: OctavePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Install the package from the archive file""" """Install the package from the archive file"""
pkg.module.octave( pkg.module.octave(
"--quiet", "--quiet",

View File

@@ -142,7 +142,7 @@ def setup_run_environment(self, env):
$ source {prefix}/{component}/{version}/env/vars.sh $ source {prefix}/{component}/{version}/env/vars.sh
""" """
# Only if environment modifications are desired (default is +envmods) # Only if environment modifications are desired (default is +envmods)
if "+envmods" in self.spec: if "~envmods" not in self.spec:
env.extend( env.extend(
EnvironmentModifications.from_sourcing_file( EnvironmentModifications.from_sourcing_file(
self.component_prefix.env.join("vars.sh"), *self.env_script_args self.component_prefix.env.join("vars.sh"), *self.env_script_args
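The two predicates differ only for abstract specs, where the variant may be unset; for concrete specs the variant always has a value and the checks agree. A sketch assuming standard Spec containment semantics:

from spack.spec import Spec

abstract = Spec("intel-oneapi-mkl")    # envmods not set on an abstract spec
print("+envmods" in abstract)          # False -> new code skips sourcing vars.sh
print("~envmods" not in abstract)      # True  -> old code sourced vars.sh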

View File

@@ -10,11 +10,8 @@
import spack.builder import spack.builder
import spack.package_base import spack.package_base
import spack.phase_callbacks import spack.phase_callbacks
import spack.spec from spack.directives import build_system, extends
import spack.util.prefix
from spack.directives import build_system, depends_on, extends
from spack.install_test import SkipTest, test_part from spack.install_test import SkipTest, test_part
from spack.multimethod import when
from spack.util.executable import Executable from spack.util.executable import Executable
from ._checks import BuilderWithDefaults, execute_build_time_tests from ._checks import BuilderWithDefaults, execute_build_time_tests
@@ -31,9 +28,7 @@ class PerlPackage(spack.package_base.PackageBase):
build_system("perl") build_system("perl")
with when("build_system=perl"): extends("perl", when="build_system=perl")
extends("perl")
depends_on("gmake", type="build")
@property @property
@memoized @memoized
@@ -151,9 +146,7 @@ def configure_args(self):
""" """
return [] return []
def configure( def configure(self, pkg, spec, prefix):
self, pkg: PerlPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Run Makefile.PL or Build.PL with arguments consisting of """Run Makefile.PL or Build.PL with arguments consisting of
an appropriate installation base directory followed by the an appropriate installation base directory followed by the
list returned by :py:meth:`~.PerlBuilder.configure_args`. list returned by :py:meth:`~.PerlBuilder.configure_args`.
@@ -177,9 +170,7 @@ def fix_shebang(self):
repl = "#!/usr/bin/env perl" repl = "#!/usr/bin/env perl"
filter_file(pattern, repl, "Build", backup=False) filter_file(pattern, repl, "Build", backup=False)
def build( def build(self, pkg, spec, prefix):
self, pkg: PerlPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Builds a Perl package.""" """Builds a Perl package."""
self.build_executable() self.build_executable()
@@ -190,8 +181,6 @@ def check(self):
"""Runs built-in tests of a Perl package.""" """Runs built-in tests of a Perl package."""
self.build_executable("test") self.build_executable("test")
def install( def install(self, pkg, spec, prefix):
self, pkg: PerlPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Installs a Perl package.""" """Installs a Perl package."""
self.build_executable("install") self.build_executable("install")

View File

@@ -28,7 +28,6 @@
import spack.repo import spack.repo
import spack.spec import spack.spec
import spack.store import spack.store
import spack.util.prefix
from spack.directives import build_system, depends_on, extends from spack.directives import build_system, depends_on, extends
from spack.error import NoHeadersError, NoLibrariesError from spack.error import NoHeadersError, NoLibrariesError
from spack.install_test import test_part from spack.install_test import test_part
@@ -264,17 +263,16 @@ def update_external_dependencies(self, extendee_spec=None):
# Ensure architecture information is present # Ensure architecture information is present
if not python.architecture: if not python.architecture:
host_platform = spack.platforms.host() host_platform = spack.platforms.host()
host_os = host_platform.default_operating_system() host_os = host_platform.operating_system("default_os")
host_target = host_platform.default_target() host_target = host_platform.target("default_target")
python.architecture = spack.spec.ArchSpec( python.architecture = spack.spec.ArchSpec(
(str(host_platform), str(host_os), str(host_target)) (str(host_platform), str(host_os), str(host_target))
) )
else: else:
if not python.architecture.platform: if not python.architecture.platform:
python.architecture.platform = spack.platforms.host() python.architecture.platform = spack.platforms.host()
platform = spack.platforms.by_name(python.architecture.platform)
if not python.architecture.os: if not python.architecture.os:
python.architecture.os = platform.default_operating_system() python.architecture.os = "default_os"
if not python.architecture.target: if not python.architecture.target:
python.architecture.target = archspec.cpu.host().family.name python.architecture.target = archspec.cpu.host().family.name
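The `archspec` calls above come from the stand-alone archspec library, so the target-filling fallback can be tried outside Spack. A small sketch (requires `pip install archspec`):

```python
# Detect the host microarchitecture; its generic family name (e.g.
# "x86_64" for a "zen3" host) is the value assigned above when the
# external python's target is missing.
import archspec.cpu

host = archspec.cpu.host()
print(host.name)         # e.g. "zen3"
print(host.family.name)  # e.g. "x86_64"
```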
View File
@@ -6,8 +6,6 @@
import spack.builder import spack.builder
import spack.package_base import spack.package_base
import spack.phase_callbacks import spack.phase_callbacks
import spack.spec
import spack.util.prefix
from spack.directives import build_system, depends_on from spack.directives import build_system, depends_on
from ._checks import BuilderWithDefaults, execute_build_time_tests from ._checks import BuilderWithDefaults, execute_build_time_tests
@@ -29,7 +27,6 @@ class QMakePackage(spack.package_base.PackageBase):
build_system("qmake") build_system("qmake")
depends_on("qmake", type="build", when="build_system=qmake") depends_on("qmake", type="build", when="build_system=qmake")
depends_on("gmake", type="build")
@spack.builder.builder("qmake") @spack.builder.builder("qmake")
@@ -64,23 +61,17 @@ def qmake_args(self):
"""List of arguments passed to qmake.""" """List of arguments passed to qmake."""
return [] return []
def qmake( def qmake(self, pkg, spec, prefix):
self, pkg: QMakePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Run ``qmake`` to configure the project and generate a Makefile.""" """Run ``qmake`` to configure the project and generate a Makefile."""
with working_dir(self.build_directory): with working_dir(self.build_directory):
pkg.module.qmake(*self.qmake_args()) pkg.module.qmake(*self.qmake_args())
def build( def build(self, pkg, spec, prefix):
self, pkg: QMakePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Make the build targets""" """Make the build targets"""
with working_dir(self.build_directory): with working_dir(self.build_directory):
pkg.module.make() pkg.module.make()
def install( def install(self, pkg, spec, prefix):
self, pkg: QMakePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Make the install targets""" """Make the install targets"""
with working_dir(self.build_directory): with working_dir(self.build_directory):
pkg.module.make("install") pkg.module.make("install")
View File
@@ -94,7 +94,7 @@ def list_url(cls):
if cls.cran: if cls.cran:
return f"https://cloud.r-project.org/src/contrib/Archive/{cls.cran}/" return f"https://cloud.r-project.org/src/contrib/Archive/{cls.cran}/"
@lang.classproperty @property
def git(cls): def git(self):
if cls.bioc: if self.bioc:
return f"https://git.bioconductor.org/packages/{cls.bioc}" return f"https://git.bioconductor.org/packages/{self.bioc}"
View File
@@ -9,8 +9,6 @@
import llnl.util.tty as tty import llnl.util.tty as tty
import spack.builder import spack.builder
import spack.spec
import spack.util.prefix
from spack.build_environment import SPACK_NO_PARALLEL_MAKE from spack.build_environment import SPACK_NO_PARALLEL_MAKE
from spack.config import determine_number_of_jobs from spack.config import determine_number_of_jobs
from spack.directives import build_system, extends, maintainers from spack.directives import build_system, extends, maintainers
@@ -76,22 +74,18 @@ def build_directory(self):
ret = os.path.join(ret, self.subdirectory) ret = os.path.join(ret, self.subdirectory)
return ret return ret
def install( def install(self, pkg, spec, prefix):
self, pkg: RacketPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Install everything from build directory.""" """Install everything from build directory."""
raco = Executable("raco") raco = Executable("raco")
with fs.working_dir(self.build_directory): with fs.working_dir(self.build_directory):
parallel = pkg.parallel and (not env_flag(SPACK_NO_PARALLEL_MAKE)) parallel = self.pkg.parallel and (not env_flag(SPACK_NO_PARALLEL_MAKE))
name = pkg.racket_name
assert name is not None, "Racket package name is not set"
args = [ args = [
"pkg", "pkg",
"install", "install",
"-t", "-t",
"dir", "dir",
"-n", "-n",
name, self.pkg.racket_name,
"--deps", "--deps",
"fail", "fail",
"--ignore-implies", "--ignore-implies",
@@ -107,7 +101,8 @@ def install(
except ProcessError: except ProcessError:
args.insert(-2, "--skip-installed") args.insert(-2, "--skip-installed")
raco(*args) raco(*args)
tty.warn( msg = (
f"Racket package {name} was already installed, uninstalling via " "Racket package {0} was already installed, uninstalling via "
"Spack may make someone unhappy!" "Spack may make someone unhappy!"
) )
tty.warn(msg.format(self.pkg.racket_name))
View File
@@ -140,7 +140,7 @@ class ROCmPackage(PackageBase):
when="+rocm", when="+rocm",
) )
depends_on("llvm-amdgpu", type="build", when="+rocm") depends_on("llvm-amdgpu", when="+rocm")
depends_on("hsa-rocr-dev", when="+rocm") depends_on("hsa-rocr-dev", when="+rocm")
depends_on("hip +rocm", when="+rocm") depends_on("hip +rocm", when="+rocm")
View File
@@ -5,8 +5,6 @@
import spack.builder import spack.builder
import spack.package_base import spack.package_base
import spack.spec
import spack.util.prefix
from spack.directives import build_system, extends, maintainers from spack.directives import build_system, extends, maintainers
from ._checks import BuilderWithDefaults from ._checks import BuilderWithDefaults
@@ -44,9 +42,7 @@ class RubyBuilder(BuilderWithDefaults):
#: Names associated with package attributes in the old build-system format #: Names associated with package attributes in the old build-system format
legacy_attributes = () legacy_attributes = ()
def build( def build(self, pkg, spec, prefix):
self, pkg: RubyPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Build a Ruby gem.""" """Build a Ruby gem."""
# ruby-rake provides both rake.gemspec and Rakefile, but only # ruby-rake provides both rake.gemspec and Rakefile, but only
@@ -62,9 +58,7 @@ def build(
# Some Ruby packages only ship `*.gem` files, so nothing to build # Some Ruby packages only ship `*.gem` files, so nothing to build
pass pass
def install( def install(self, pkg, spec, prefix):
self, pkg: RubyPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Install a Ruby gem. """Install a Ruby gem.
The ruby package sets ``GEM_HOME`` to tell gem where to install to.""" The ruby package sets ``GEM_HOME`` to tell gem where to install to."""
View File
@@ -4,8 +4,6 @@
import spack.builder import spack.builder
import spack.package_base import spack.package_base
import spack.phase_callbacks import spack.phase_callbacks
import spack.spec
import spack.util.prefix
from spack.directives import build_system, depends_on from spack.directives import build_system, depends_on
from ._checks import BuilderWithDefaults, execute_build_time_tests from ._checks import BuilderWithDefaults, execute_build_time_tests
@@ -61,9 +59,7 @@ def build_args(self, spec, prefix):
"""Arguments to pass to build.""" """Arguments to pass to build."""
return [] return []
def build( def build(self, pkg, spec, prefix):
self, pkg: SConsPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Build the package.""" """Build the package."""
pkg.module.scons(*self.build_args(spec, prefix)) pkg.module.scons(*self.build_args(spec, prefix))
@@ -71,9 +67,7 @@ def install_args(self, spec, prefix):
"""Arguments to pass to install.""" """Arguments to pass to install."""
return [] return []
def install( def install(self, pkg, spec, prefix):
self, pkg: SConsPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Install the package.""" """Install the package."""
pkg.module.scons("install", *self.install_args(spec, prefix)) pkg.module.scons("install", *self.install_args(spec, prefix))
View File
@@ -11,8 +11,6 @@
import spack.install_test import spack.install_test
import spack.package_base import spack.package_base
import spack.phase_callbacks import spack.phase_callbacks
import spack.spec
import spack.util.prefix
from spack.directives import build_system, depends_on, extends from spack.directives import build_system, depends_on, extends
from spack.multimethod import when from spack.multimethod import when
from spack.util.executable import Executable from spack.util.executable import Executable
@@ -43,7 +41,6 @@ class SIPPackage(spack.package_base.PackageBase):
with when("build_system=sip"): with when("build_system=sip"):
extends("python", type=("build", "link", "run")) extends("python", type=("build", "link", "run"))
depends_on("py-sip", type="build") depends_on("py-sip", type="build")
depends_on("gmake", type="build")
@property @property
def import_modules(self): def import_modules(self):
@@ -133,9 +130,7 @@ class SIPBuilder(BuilderWithDefaults):
build_directory = "build" build_directory = "build"
def configure( def configure(self, pkg, spec, prefix):
self, pkg: SIPPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Configure the package.""" """Configure the package."""
# https://www.riverbankcomputing.com/static/Docs/sip/command_line_tools.html # https://www.riverbankcomputing.com/static/Docs/sip/command_line_tools.html
@@ -153,9 +148,7 @@ def configure_args(self):
"""Arguments to pass to configure.""" """Arguments to pass to configure."""
return [] return []
def build( def build(self, pkg, spec, prefix):
self, pkg: SIPPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Build the package.""" """Build the package."""
args = self.build_args() args = self.build_args()
@@ -166,9 +159,7 @@ def build_args(self):
"""Arguments to pass to build.""" """Arguments to pass to build."""
return [] return []
def install( def install(self, pkg, spec, prefix):
self, pkg: SIPPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Install the package.""" """Install the package."""
args = self.install_args() args = self.install_args()
View File
@@ -6,8 +6,6 @@
import spack.builder import spack.builder
import spack.package_base import spack.package_base
import spack.phase_callbacks import spack.phase_callbacks
import spack.spec
import spack.util.prefix
from spack.directives import build_system, depends_on from spack.directives import build_system, depends_on
from ._checks import BuilderWithDefaults, execute_build_time_tests, execute_install_time_tests from ._checks import BuilderWithDefaults, execute_build_time_tests, execute_install_time_tests
@@ -99,9 +97,7 @@ def waf(self, *args, **kwargs):
with working_dir(self.build_directory): with working_dir(self.build_directory):
self.python("waf", "-j{0}".format(jobs), *args, **kwargs) self.python("waf", "-j{0}".format(jobs), *args, **kwargs)
def configure( def configure(self, pkg, spec, prefix):
self, pkg: WafPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Configures the project.""" """Configures the project."""
args = ["--prefix={0}".format(self.pkg.prefix)] args = ["--prefix={0}".format(self.pkg.prefix)]
args += self.configure_args() args += self.configure_args()
@@ -112,9 +108,7 @@ def configure_args(self):
"""Arguments to pass to configure.""" """Arguments to pass to configure."""
return [] return []
def build( def build(self, pkg, spec, prefix):
self, pkg: WafPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Executes the build.""" """Executes the build."""
args = self.build_args() args = self.build_args()
@@ -124,9 +118,7 @@ def build_args(self):
"""Arguments to pass to build.""" """Arguments to pass to build."""
return [] return []
def install( def install(self, pkg, spec, prefix):
self, pkg: WafPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
"""Installs the targets on the system.""" """Installs the targets on the system."""
args = self.install_args() args = self.install_args()
View File
@@ -14,9 +14,9 @@
import zipfile import zipfile
from collections import namedtuple from collections import namedtuple
from typing import Callable, Dict, List, Set from typing import Callable, Dict, List, Set
from urllib.request import Request from urllib.error import HTTPError, URLError
from urllib.request import HTTPHandler, Request, build_opener
import llnl.path
import llnl.util.filesystem as fs import llnl.util.filesystem as fs
import llnl.util.tty as tty import llnl.util.tty as tty
from llnl.util.tty.color import cescape, colorize from llnl.util.tty.color import cescape, colorize
@@ -63,8 +63,6 @@
PushResult = namedtuple("PushResult", "success url") PushResult = namedtuple("PushResult", "success url")
urlopen = web_util.urlopen # alias for mocking in tests
def get_change_revisions(): def get_change_revisions():
"""If this is a git repo get the revisions to use when checking """If this is a git repo get the revisions to use when checking
@@ -84,9 +82,6 @@ def get_stack_changed(env_path, rev1="HEAD^", rev2="HEAD"):
whether or not the stack was changed. Returns True if the environment whether or not the stack was changed. Returns True if the environment
manifest changed between the provided revisions (or additionally if the manifest changed between the provided revisions (or additionally if the
`.gitlab-ci.yml` file itself changed). Returns False otherwise.""" `.gitlab-ci.yml` file itself changed). Returns False otherwise."""
# git returns posix paths always, normalize input to be compatible
# with that
env_path = llnl.path.convert_to_posix_path(env_path)
git = spack.util.git.git() git = spack.util.git.git()
if git: if git:
with fs.working_dir(spack.paths.prefix): with fs.working_dir(spack.paths.prefix):
@@ -477,9 +472,12 @@ def generate_pipeline(env: ev.Environment, args) -> None:
# Use all unpruned specs to populate the build group for this set # Use all unpruned specs to populate the build group for this set
cdash_config = cfg.get("cdash") cdash_config = cfg.get("cdash")
if options.cdash_handler and options.cdash_handler.auth_token: if options.cdash_handler and options.cdash_handler.auth_token:
options.cdash_handler.populate_buildgroup( try:
[options.cdash_handler.build_name(s) for s in pipeline_specs] options.cdash_handler.populate_buildgroup(
) [options.cdash_handler.build_name(s) for s in pipeline_specs]
)
except (SpackError, HTTPError, URLError, TimeoutError) as err:
tty.warn(f"Problem populating buildgroup: {err}")
elif cdash_config: elif cdash_config:
# warn only if there was actually a CDash configuration. # warn only if there was actually a CDash configuration.
tty.warn("Unable to populate buildgroup without CDash credentials") tty.warn("Unable to populate buildgroup without CDash credentials")
@@ -616,7 +614,7 @@ def copy_test_logs_to_artifacts(test_stage, job_test_dir):
copy_files_to_artifacts(os.path.join(test_stage, "*", "*.txt"), job_test_dir) copy_files_to_artifacts(os.path.join(test_stage, "*", "*.txt"), job_test_dir)
def download_and_extract_artifacts(url, work_dir) -> str: def download_and_extract_artifacts(url, work_dir):
"""Look for gitlab artifacts.zip at the given url, and attempt to download """Look for gitlab artifacts.zip at the given url, and attempt to download
and extract the contents into the given work_dir and extract the contents into the given work_dir
@@ -624,10 +622,6 @@ def download_and_extract_artifacts(url, work_dir) -> str:
url (str): Complete url to artifacts.zip file url (str): Complete url to artifacts.zip file
work_dir (str): Path to destination where artifacts should be extracted work_dir (str): Path to destination where artifacts should be extracted
Output:
Artifacts root path relative to the archive root
""" """
tty.msg(f"Fetching artifacts from: {url}") tty.msg(f"Fetching artifacts from: {url}")
@@ -637,33 +631,31 @@ def download_and_extract_artifacts(url, work_dir) -> str:
if token: if token:
headers["PRIVATE-TOKEN"] = token headers["PRIVATE-TOKEN"] = token
request = Request(url, headers=headers, method="GET") opener = build_opener(HTTPHandler)
request = Request(url, headers=headers)
request.get_method = lambda: "GET"
response = opener.open(request, timeout=SPACK_CDASH_TIMEOUT)
response_code = response.getcode()
if response_code != 200:
msg = f"Error response code ({response_code}) in reproduce_ci_job"
raise SpackError(msg)
artifacts_zip_path = os.path.join(work_dir, "artifacts.zip") artifacts_zip_path = os.path.join(work_dir, "artifacts.zip")
os.makedirs(work_dir, exist_ok=True)
try: if not os.path.exists(work_dir):
response = urlopen(request, timeout=SPACK_CDASH_TIMEOUT) os.makedirs(work_dir)
with open(artifacts_zip_path, "wb") as out_file:
shutil.copyfileobj(response, out_file)
with zipfile.ZipFile(artifacts_zip_path) as zip_file: with open(artifacts_zip_path, "wb") as out_file:
zip_file.extractall(work_dir) shutil.copyfileobj(response, out_file)
# Get the artifact root
artifact_root = ""
for f in zip_file.filelist:
if "spack.lock" in f.filename:
artifact_root = os.path.dirname(os.path.dirname(f.filename))
break
except OSError as e:
raise SpackError(f"Error fetching artifacts: {e}")
finally:
try:
os.remove(artifacts_zip_path)
except FileNotFoundError:
# If the file doesn't exist we are already raising
pass
return artifact_root zip_file = zipfile.ZipFile(artifacts_zip_path)
zip_file.extractall(work_dir)
zip_file.close()
os.remove(artifacts_zip_path)
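The rewritten left-hand version folds download, extraction, artifact-root detection, and cleanup into one guarded flow. A stand-alone approximation using only the standard library (the function name and timeout are illustrative; the `spack.lock` marker mirrors the code above):

```python
import os
import shutil
import zipfile
from urllib.request import Request, urlopen


def fetch_and_extract(url: str, work_dir: str, marker: str = "spack.lock") -> str:
    """Download a zip, extract it, and return the directory two levels
    above the first entry containing ``marker`` (the artifact root)."""
    os.makedirs(work_dir, exist_ok=True)
    zip_path = os.path.join(work_dir, "artifacts.zip")
    try:
        with urlopen(Request(url, method="GET"), timeout=60) as response:
            with open(zip_path, "wb") as out:
                shutil.copyfileobj(response, out)
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(work_dir)
            for entry in zf.filelist:
                if marker in entry.filename:
                    return os.path.dirname(os.path.dirname(entry.filename))
        return ""
    finally:
        try:
            os.remove(zip_path)
        except FileNotFoundError:
            pass
```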
def get_spack_info(): def get_spack_info():
@@ -777,7 +769,7 @@ def setup_spack_repro_version(repro_dir, checkout_commit, merge_commit=None):
return True return True
def reproduce_ci_job(url, work_dir, autostart, gpg_url, runtime, use_local_head): def reproduce_ci_job(url, work_dir, autostart, gpg_url, runtime):
"""Given a url to gitlab artifacts.zip from a failed 'spack ci rebuild' job, """Given a url to gitlab artifacts.zip from a failed 'spack ci rebuild' job,
attempt to setup an environment in which the failure can be reproduced attempt to setup an environment in which the failure can be reproduced
locally. This entails the following: locally. This entails the following:
@@ -791,11 +783,8 @@ def reproduce_ci_job(url, work_dir, autostart, gpg_url, runtime, use_local_head)
commands to run to reproduce the build once inside the container. commands to run to reproduce the build once inside the container.
""" """
work_dir = os.path.realpath(work_dir) work_dir = os.path.realpath(work_dir)
if os.path.exists(work_dir) and os.listdir(work_dir):
raise SpackError(f"Cannot run reproducer in non-emptry working dir:\n {work_dir}")
platform_script_ext = "ps1" if IS_WINDOWS else "sh" platform_script_ext = "ps1" if IS_WINDOWS else "sh"
artifact_root = download_and_extract_artifacts(url, work_dir) download_and_extract_artifacts(url, work_dir)
gpg_path = None gpg_path = None
if gpg_url: if gpg_url:
@@ -857,9 +846,6 @@ def reproduce_ci_job(url, work_dir, autostart, gpg_url, runtime, use_local_head)
with open(repro_file, encoding="utf-8") as fd: with open(repro_file, encoding="utf-8") as fd:
repro_details = json.load(fd) repro_details = json.load(fd)
spec_file = fs.find(work_dir, repro_details["job_spec_json"])[0]
reproducer_spec = spack.spec.Spec.from_specfile(spec_file)
repro_dir = os.path.dirname(repro_file) repro_dir = os.path.dirname(repro_file)
rel_repro_dir = repro_dir.replace(work_dir, "").lstrip(os.path.sep) rel_repro_dir = repro_dir.replace(work_dir, "").lstrip(os.path.sep)
@@ -920,20 +906,17 @@ def reproduce_ci_job(url, work_dir, autostart, gpg_url, runtime, use_local_head)
commit_regex = re.compile(r"commit\s+([^\s]+)") commit_regex = re.compile(r"commit\s+([^\s]+)")
merge_commit_regex = re.compile(r"Merge\s+([^\s]+)\s+into\s+([^\s]+)") merge_commit_regex = re.compile(r"Merge\s+([^\s]+)\s+into\s+([^\s]+)")
if use_local_head: # Try the more specific merge commit regex first
commit_1 = "HEAD" m = merge_commit_regex.search(spack_info)
if m:
# This was a merge commit and we captured the parents
commit_1 = m.group(1)
commit_2 = m.group(2)
else: else:
# Try the more specific merge commit regex first # Not a merge commit, just get the commit sha
m = merge_commit_regex.search(spack_info) m = commit_regex.search(spack_info)
if m: if m:
# This was a merge commit and we captured the parents
commit_1 = m.group(1) commit_1 = m.group(1)
commit_2 = m.group(2)
else:
# Not a merge commit, just get the commit sha
m = commit_regex.search(spack_info)
if m:
commit_1 = m.group(1)
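The fallback chain above (try the merge-commit pattern first, then the plain commit pattern) is easy to check in isolation; here the two regexes run against made-up `git log`-style strings (sample inputs invented for illustration):

```python
import re

commit_regex = re.compile(r"commit\s+([^\s]+)")
merge_commit_regex = re.compile(r"Merge\s+([^\s]+)\s+into\s+([^\s]+)")

merge_info = "Merge 1a2b3c4 into 9f8e7d6"
plain_info = "commit 1a2b3c4"

m = merge_commit_regex.search(merge_info)
print(m.group(1), m.group(2))  # both parents of the merge commit
m = commit_regex.search(plain_info)
print(m.group(1))              # just the commit sha
```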
setup_result = False setup_result = False
if commit_1: if commit_1:
@@ -1008,8 +991,6 @@ def reproduce_ci_job(url, work_dir, autostart, gpg_url, runtime, use_local_head)
"entrypoint", entrypoint_script, work_dir, run=False, exit_on_failure=False "entrypoint", entrypoint_script, work_dir, run=False, exit_on_failure=False
) )
# Attempt to create a unique name for the reproducer container
container_suffix = "_" + reproducer_spec.dag_hash() if reproducer_spec else ""
docker_command = [ docker_command = [
runtime, runtime,
"run", "run",
@@ -1017,14 +998,14 @@ def reproduce_ci_job(url, work_dir, autostart, gpg_url, runtime, use_local_head)
"-t", "-t",
"--rm", "--rm",
"--name", "--name",
f"spack_reproducer{container_suffix}", "spack_reproducer",
"-v", "-v",
":".join([work_dir, mounted_workdir, "Z"]), ":".join([work_dir, mounted_workdir, "Z"]),
"-v", "-v",
":".join( ":".join(
[ [
os.path.join(work_dir, artifact_root), os.path.join(work_dir, "jobs_scratch_dir"),
os.path.join(mount_as_dir, artifact_root), os.path.join(mount_as_dir, "jobs_scratch_dir"),
"Z", "Z",
] ]
), ),
View File
@@ -1,21 +1,23 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details. # Copyright Spack Project Developers. See COPYRIGHT file for details.
# #
# SPDX-License-Identifier: (Apache-2.0 OR MIT) # SPDX-License-Identifier: (Apache-2.0 OR MIT)
import codecs
import copy import copy
import json import json
import os import os
import re import re
import ssl
import sys import sys
import time import time
from collections import deque from collections import deque
from enum import Enum from enum import Enum
from typing import Dict, Generator, List, Optional, Set, Tuple from typing import Dict, Generator, List, Optional, Set, Tuple
from urllib.parse import quote, urlencode, urlparse from urllib.parse import quote, urlencode, urlparse
from urllib.request import Request from urllib.request import HTTPHandler, HTTPSHandler, Request, build_opener
import llnl.util.filesystem as fs import llnl.util.filesystem as fs
import llnl.util.tty as tty import llnl.util.tty as tty
from llnl.util.lang import memoized from llnl.util.lang import Singleton, memoized
import spack.binary_distribution as bindist import spack.binary_distribution as bindist
import spack.config as cfg import spack.config as cfg
@@ -33,11 +35,32 @@
from spack.reporters.cdash import SPACK_CDASH_TIMEOUT from spack.reporters.cdash import SPACK_CDASH_TIMEOUT
from spack.reporters.cdash import build_stamp as cdash_build_stamp from spack.reporters.cdash import build_stamp as cdash_build_stamp
def _urlopen():
error_handler = web_util.SpackHTTPDefaultErrorHandler()
# One opener with HTTPS ssl enabled
with_ssl = build_opener(
HTTPHandler(), HTTPSHandler(context=web_util.ssl_create_default_context()), error_handler
)
# One opener with HTTPS ssl disabled
without_ssl = build_opener(
HTTPHandler(), HTTPSHandler(context=ssl._create_unverified_context()), error_handler
)
# And dynamically dispatch based on the config:verify_ssl.
def dispatch_open(fullurl, data=None, timeout=None, verify_ssl=True):
opener = with_ssl if verify_ssl else without_ssl
timeout = timeout or cfg.get("config:connect_timeout", 1)
return opener.open(fullurl, data, timeout)
return dispatch_open
IS_WINDOWS = sys.platform == "win32" IS_WINDOWS = sys.platform == "win32"
SPACK_RESERVED_TAGS = ["public", "protected", "notary"] SPACK_RESERVED_TAGS = ["public", "protected", "notary"]
_dyn_mapping_urlopener = Singleton(_urlopen)
# this exists purely for testing purposes
_urlopen = web_util.urlopen
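Both sides of this hunk revolve around the same standard-library pattern: build one opener that verifies certificates and one that does not, then pick per request. A minimal sketch of that pattern outside Spack:

```python
import ssl
from urllib.request import HTTPHandler, HTTPSHandler, build_opener

# One opener with certificate verification, one deliberately without.
verified = build_opener(HTTPHandler(), HTTPSHandler(context=ssl.create_default_context()))
unverified = build_opener(HTTPHandler(), HTTPSHandler(context=ssl._create_unverified_context()))


def dispatch_open(url, verify_ssl=True, timeout=10):
    opener = verified if verify_ssl else unverified
    return opener.open(url, timeout=timeout)
```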
def copy_files_to_artifacts(src, artifacts_dir): def copy_files_to_artifacts(src, artifacts_dir):
@@ -256,25 +279,26 @@ def copy_test_results(self, source, dest):
reports = fs.join_path(source, "*_Test*.xml") reports = fs.join_path(source, "*_Test*.xml")
copy_files_to_artifacts(reports, dest) copy_files_to_artifacts(reports, dest)
def create_buildgroup(self, headers, url, group_name, group_type): def create_buildgroup(self, opener, headers, url, group_name, group_type):
data = {"newbuildgroup": group_name, "project": self.project, "type": group_type} data = {"newbuildgroup": group_name, "project": self.project, "type": group_type}
enc_data = json.dumps(data).encode("utf-8") enc_data = json.dumps(data).encode("utf-8")
request = Request(url, data=enc_data, headers=headers) request = Request(url, data=enc_data, headers=headers)
try: response = opener.open(request, timeout=SPACK_CDASH_TIMEOUT)
response_text = _urlopen(request, timeout=SPACK_CDASH_TIMEOUT).read() response_code = response.getcode()
except OSError as e:
tty.warn(f"Failed to create CDash buildgroup: {e}") if response_code not in [200, 201]:
msg = f"Creating buildgroup failed (response code = {response_code})"
tty.warn(msg)
return None return None
try: response_text = response.read()
response_json = json.loads(response_text) response_json = json.loads(response_text)
return response_json["id"] build_group_id = response_json["id"]
except (json.JSONDecodeError, KeyError) as e:
tty.warn(f"Failed to parse CDash response: {e}") return build_group_id
return None
def populate_buildgroup(self, job_names): def populate_buildgroup(self, job_names):
url = f"{self.url}/api/v1/buildgroup.php" url = f"{self.url}/api/v1/buildgroup.php"
@@ -284,11 +308,16 @@ def populate_buildgroup(self, job_names):
"Content-Type": "application/json", "Content-Type": "application/json",
} }
parent_group_id = self.create_buildgroup(headers, url, self.build_group, "Daily") opener = build_opener(HTTPHandler)
group_id = self.create_buildgroup(headers, url, f"Latest {self.build_group}", "Latest")
parent_group_id = self.create_buildgroup(opener, headers, url, self.build_group, "Daily")
group_id = self.create_buildgroup(
opener, headers, url, f"Latest {self.build_group}", "Latest"
)
if not parent_group_id or not group_id: if not parent_group_id or not group_id:
tty.warn(f"Failed to create or retrieve buildgroups for {self.build_group}") msg = f"Failed to create or retrieve buildgroups for {self.build_group}"
tty.warn(msg)
return return
data = { data = {
@@ -300,12 +329,15 @@ def populate_buildgroup(self, job_names):
enc_data = json.dumps(data).encode("utf-8") enc_data = json.dumps(data).encode("utf-8")
request = Request(url, data=enc_data, headers=headers, method="PUT") request = Request(url, data=enc_data, headers=headers)
request.get_method = lambda: "PUT"
try: response = opener.open(request, timeout=SPACK_CDASH_TIMEOUT)
_urlopen(request, timeout=SPACK_CDASH_TIMEOUT) response_code = response.getcode()
except OSError as e:
tty.warn(f"Failed to populate CDash buildgroup: {e}") if response_code != 200:
msg = f"Error response code ({response_code}) in populate_buildgroup"
tty.warn(msg)
def report_skipped(self, spec: spack.spec.Spec, report_dir: str, reason: Optional[str]): def report_skipped(self, spec: spack.spec.Spec, report_dir: str, reason: Optional[str]):
"""Explicitly report skipping testing of a spec (e.g., it's CI """Explicitly report skipping testing of a spec (e.g., it's CI
@@ -703,6 +735,9 @@ def _apply_section(dest, src):
for value in header.values(): for value in header.values():
value = os.path.expandvars(value) value = os.path.expandvars(value)
verify_ssl = mapping.get("verify_ssl", spack.config.get("config:verify_ssl", True))
timeout = mapping.get("timeout", spack.config.get("config:connect_timeout", 1))
required = mapping.get("require", []) required = mapping.get("require", [])
allowed = mapping.get("allow", []) allowed = mapping.get("allow", [])
ignored = mapping.get("ignore", []) ignored = mapping.get("ignore", [])
@@ -736,15 +771,19 @@ def job_query(job):
endpoint_url._replace(query=query).geturl(), headers=header, method="GET" endpoint_url._replace(query=query).geturl(), headers=header, method="GET"
) )
try: try:
response = _urlopen(request) response = _dyn_mapping_urlopener(
config = json.load(response) request, verify_ssl=verify_ssl, timeout=timeout
)
except Exception as e: except Exception as e:
# For now just ignore any errors from dynamic mapping and continue # For now just ignore any errors from dynamic mapping and continue
# This is still experimental, and failures should not stop CI # This is still experimental, and failures should not stop CI
# from running normally # from running normally
tty.warn(f"Failed to fetch dynamic mapping for query:\n\t{query}: {e}") tty.warn(f"Failed to fetch dynamic mapping for query:\n\t{query}")
tty.warn(f"{e}")
continue continue
config = json.load(codecs.getreader("utf-8")(response))
# Strip ignore keys # Strip ignore keys
if ignored: if ignored:
for key in ignored: for key in ignored:
View File
@@ -171,9 +171,7 @@ def quote_kvp(string: str) -> str:
def parse_specs( def parse_specs(
args: Union[str, List[str]], args: Union[str, List[str]], concretize: bool = False, tests: bool = False
concretize: bool = False,
tests: spack.concretize.TestsType = False,
) -> List[spack.spec.Spec]: ) -> List[spack.spec.Spec]:
"""Convenience function for parsing arguments from specs. Handles common """Convenience function for parsing arguments from specs. Handles common
exceptions and dies if there are errors. exceptions and dies if there are errors.
@@ -185,13 +183,11 @@ def parse_specs(
if not concretize: if not concretize:
return specs return specs
to_concretize: List[spack.concretize.SpecPairInput] = [(s, None) for s in specs] to_concretize = [(s, None) for s in specs]
return _concretize_spec_pairs(to_concretize, tests=tests) return _concretize_spec_pairs(to_concretize, tests=tests)
def _concretize_spec_pairs( def _concretize_spec_pairs(to_concretize, tests=False):
to_concretize: List[spack.concretize.SpecPairInput], tests: spack.concretize.TestsType = False
) -> List[spack.spec.Spec]:
"""Helper method that concretizes abstract specs from a list of abstract,concrete pairs. """Helper method that concretizes abstract specs from a list of abstract,concrete pairs.
Any spec with a concrete spec associated with it will concretize to that spec. Any spec Any spec with a concrete spec associated with it will concretize to that spec. Any spec
@@ -202,7 +198,7 @@ def _concretize_spec_pairs(
# Special case for concretizing a single spec # Special case for concretizing a single spec
if len(to_concretize) == 1: if len(to_concretize) == 1:
abstract, concrete = to_concretize[0] abstract, concrete = to_concretize[0]
return [concrete or spack.concretize.concretize_one(abstract, tests=tests)] return [concrete or abstract.concretized()]
# Special case if every spec is either concrete or has an abstract hash # Special case if every spec is either concrete or has an abstract hash
if all( if all(
@@ -254,9 +250,9 @@ def matching_spec_from_env(spec):
""" """
env = ev.active_environment() env = ev.active_environment()
if env: if env:
return env.matching_spec(spec) or spack.concretize.concretize_one(spec) return env.matching_spec(spec) or spec.concretized()
else: else:
return spack.concretize.concretize_one(spec) return spec.concretized()
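Several hunks in this file replace the `Spec.concretized()` method with the module-level `spack.concretize.concretize_one`, which the surrounding calls pass either a `Spec` or a plain spec string. A sketch of the new call style (only runnable inside a Spack checkout; the package name is just an example):

```python
import spack.concretize

concrete = spack.concretize.concretize_one("zlib")  # accepts a Spec or a spec string
print(concrete.dag_hash())
```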
def matching_specs_from_env(specs): def matching_specs_from_env(specs):
@@ -297,7 +293,7 @@ def disambiguate_spec(
def disambiguate_spec_from_hashes( def disambiguate_spec_from_hashes(
spec: spack.spec.Spec, spec: spack.spec.Spec,
hashes: Optional[List[str]], hashes: List[str],
local: bool = False, local: bool = False,
installed: Union[bool, InstallRecordStatus] = True, installed: Union[bool, InstallRecordStatus] = True,
first: bool = False, first: bool = False,
View File
@@ -3,7 +3,6 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT) # SPDX-License-Identifier: (Apache-2.0 OR MIT)
import collections import collections
import warnings
import archspec.cpu import archspec.cpu
@@ -52,10 +51,10 @@ def setup_parser(subparser):
"-t", "--target", action="store_true", default=False, help="print only the target" "-t", "--target", action="store_true", default=False, help="print only the target"
) )
parts2.add_argument( parts2.add_argument(
"-f", "--frontend", action="store_true", default=False, help="print frontend (DEPRECATED)" "-f", "--frontend", action="store_true", default=False, help="print frontend"
) )
parts2.add_argument( parts2.add_argument(
"-b", "--backend", action="store_true", default=False, help="print backend (DEPRECATED)" "-b", "--backend", action="store_true", default=False, help="print backend"
) )
@@ -99,14 +98,15 @@ def arch(parser, args):
display_targets(archspec.cpu.TARGETS) display_targets(archspec.cpu.TARGETS)
return return
os_args, target_args = "default_os", "default_target"
if args.frontend: if args.frontend:
warnings.warn("the argument --frontend is deprecated, and will be removed in Spack v1.0") os_args, target_args = "frontend", "frontend"
elif args.backend: elif args.backend:
warnings.warn("the argument --backend is deprecated, and will be removed in Spack v1.0") os_args, target_args = "backend", "backend"
host_platform = spack.platforms.host() host_platform = spack.platforms.host()
host_os = host_platform.default_operating_system() host_os = host_platform.operating_system(os_args)
host_target = host_platform.default_target() host_target = host_platform.target(target_args)
if args.family: if args.family:
host_target = host_target.family host_target = host_target.family
elif args.generic: elif args.generic:
View File
@@ -1,7 +1,7 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details. # Copyright Spack Project Developers. See COPYRIGHT file for details.
# #
# SPDX-License-Identifier: (Apache-2.0 OR MIT) # SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os import os.path
import shutil import shutil
import sys import sys
import tempfile import tempfile
@@ -14,9 +14,9 @@
import spack.bootstrap import spack.bootstrap
import spack.bootstrap.config import spack.bootstrap.config
import spack.bootstrap.core import spack.bootstrap.core
import spack.concretize
import spack.config import spack.config
import spack.mirrors.utils import spack.mirrors.utils
import spack.spec
import spack.stage import spack.stage
import spack.util.path import spack.util.path
import spack.util.spack_yaml import spack.util.spack_yaml
@@ -397,7 +397,7 @@ def _mirror(args):
llnl.util.tty.msg(msg.format(spec_str, mirror_dir)) llnl.util.tty.msg(msg.format(spec_str, mirror_dir))
# Suppress tty from the call below for terser messages # Suppress tty from the call below for terser messages
llnl.util.tty.set_msg_enabled(False) llnl.util.tty.set_msg_enabled(False)
spec = spack.concretize.concretize_one(spec_str) spec = spack.spec.Spec(spec_str).concretized()
for node in spec.traverse(): for node in spec.traverse():
spack.mirrors.utils.create(mirror_dir, [node]) spack.mirrors.utils.create(mirror_dir, [node])
llnl.util.tty.set_msg_enabled(True) llnl.util.tty.set_msg_enabled(True)
@@ -436,7 +436,6 @@ def write_metadata(subdir, metadata):
shutil.copy(spack.util.path.canonicalize_path(GNUPG_JSON), abs_directory) shutil.copy(spack.util.path.canonicalize_path(GNUPG_JSON), abs_directory)
shutil.copy(spack.util.path.canonicalize_path(PATCHELF_JSON), abs_directory) shutil.copy(spack.util.path.canonicalize_path(PATCHELF_JSON), abs_directory)
instructions += cmd.format("local-binaries", rel_directory) instructions += cmd.format("local-binaries", rel_directory)
instructions += " % spack buildcache update-index <final-path>/bootstrap_cache\n"
print(instructions) print(instructions)
View File
@@ -16,7 +16,6 @@
import spack.binary_distribution as bindist import spack.binary_distribution as bindist
import spack.cmd import spack.cmd
import spack.concretize
import spack.config import spack.config
import spack.deptypes as dt import spack.deptypes as dt
import spack.environment as ev import spack.environment as ev
@@ -555,7 +554,8 @@ def check_fn(args: argparse.Namespace):
tty.msg("No specs provided, exiting.") tty.msg("No specs provided, exiting.")
return return
specs = [spack.concretize.concretize_one(s) for s in specs] for spec in specs:
spec.concretize()
# Next see if there are any configured binary mirrors # Next see if there are any configured binary mirrors
configured_mirrors = spack.config.get("mirrors", scope=args.scope) configured_mirrors = spack.config.get("mirrors", scope=args.scope)
@@ -623,7 +623,7 @@ def save_specfile_fn(args):
root = specs[0] root = specs[0]
if not root.concrete: if not root.concrete:
root = spack.concretize.concretize_one(root) root.concretize()
save_dependency_specfiles( save_dependency_specfiles(
root, args.specfile_dir, dependencies=spack.cmd.parse_specs(args.specs) root, args.specfile_dir, dependencies=spack.cmd.parse_specs(args.specs)
View File
@@ -4,7 +4,7 @@
import re import re
import sys import sys
from typing import Dict, Optional, Tuple from typing import Dict, Optional
import llnl.string import llnl.string
import llnl.util.lang import llnl.util.lang
@@ -181,11 +181,7 @@ def checksum(parser, args):
print() print()
if args.add_to_package: if args.add_to_package:
path = spack.repo.PATH.filename_for_package_name(pkg.name) add_versions_to_package(pkg, version_lines, args.batch)
num_versions_added = add_versions_to_pkg(path, version_lines)
tty.msg(f"Added {num_versions_added} new versions to {pkg.name} in {path}")
if not args.batch and sys.stdin.isatty():
editor(path)
def print_checksum_status(pkg: PackageBase, version_hashes: dict): def print_checksum_status(pkg: PackageBase, version_hashes: dict):
@@ -231,9 +227,20 @@ def print_checksum_status(pkg: PackageBase, version_hashes: dict):
tty.die("Invalid checksums found.") tty.die("Invalid checksums found.")
def _update_version_statements(package_src: str, version_lines: str) -> Tuple[int, str]: def add_versions_to_package(pkg: PackageBase, version_lines: str, is_batch: bool):
"""Returns a tuple of number of versions added and the package's modified contents.""" """
Add checksummed versions to a package's instructions and open a user's
editor so they may double-check the work of the function.
Args:
pkg (spack.package_base.PackageBase): A package class for a given package in Spack.
version_lines (str): A string of rendered version lines.
"""
# Get filename and path for package
filename = spack.repo.PATH.filename_for_package_name(pkg.name)
num_versions_added = 0 num_versions_added = 0
version_statement_re = re.compile(r"([\t ]+version\([^\)]*\))") version_statement_re = re.compile(r"([\t ]+version\([^\)]*\))")
version_re = re.compile(r'[\t ]+version\(\s*"([^"]+)"[^\)]*\)') version_re = re.compile(r'[\t ]+version\(\s*"([^"]+)"[^\)]*\)')
@@ -245,34 +252,33 @@ def _update_version_statements(package_src: str, version_lines: str) -> Tuple[in
if match: if match:
new_versions.append((Version(match.group(1)), ver_line)) new_versions.append((Version(match.group(1)), ver_line))
split_contents = version_statement_re.split(package_src) with open(filename, "r+", encoding="utf-8") as f:
contents = f.read()
split_contents = version_statement_re.split(contents)
for i, subsection in enumerate(split_contents): for i, subsection in enumerate(split_contents):
# If there are no more versions to add we should exit # If there are no more versions to add we should exit
if len(new_versions) <= 0: if len(new_versions) <= 0:
break break
# Check if the section contains a version # Check if the section contains a version
contents_version = version_re.match(subsection) contents_version = version_re.match(subsection)
if contents_version is not None: if contents_version is not None:
parsed_version = Version(contents_version.group(1)) parsed_version = Version(contents_version.group(1))
if parsed_version < new_versions[0][0]: if parsed_version < new_versions[0][0]:
split_contents[i:i] = [new_versions.pop(0)[1], " # FIXME", "\n"] split_contents[i:i] = [new_versions.pop(0)[1], " # FIXME", "\n"]
num_versions_added += 1 num_versions_added += 1
elif parsed_version == new_versions[0][0]: elif parsed_version == new_versions[0][0]:
new_versions.pop(0) new_versions.pop(0)
return num_versions_added, "".join(split_contents) # Seek back to the start of the file so we can rewrite the file contents.
f.seek(0)
f.writelines("".join(split_contents))
tty.msg(f"Added {num_versions_added} new versions to {pkg.name}")
tty.msg(f"Open {filename} to review the additions.")
def add_versions_to_pkg(path: str, version_lines: str) -> int: if sys.stdout.isatty() and not is_batch:
"""Add new versions to a package.py file. Returns the number of versions added.""" editor(filename)
with open(path, "r", encoding="utf-8") as f:
package_src = f.read()
num_versions_added, package_src = _update_version_statements(package_src, version_lines)
if num_versions_added > 0:
with open(path, "w", encoding="utf-8") as f:
f.write(package_src)
return num_versions_added
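Separating the pure text transformation (`_update_version_statements`) from file I/O makes the splicing logic testable on a plain string. A compressed sketch of that logic, with plain string comparison standing in for `spack.version.Version`:

```python
import re

version_statement_re = re.compile(r"([\t ]+version\([^\)]*\))")
version_re = re.compile(r'[\t ]+version\(\s*"([^"]+)"[^\)]*\)')

package_src = '\n    version("1.2")\n    version("1.0")\n'
# New versions, newest first, as (version, rendered line) pairs.
new_versions = [("1.3", '    version("1.3", sha256="...")')]

parts = version_statement_re.split(package_src)
for i, part in enumerate(parts):
    if not new_versions:
        break
    m = version_re.match(part)
    if m and m.group(1) < new_versions[0][0]:
        # Splice the newer version in just before the first older one.
        parts[i:i] = [new_versions.pop(0)[1], "  # FIXME", "\n"]
print("".join(parts))
```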
View File
@@ -176,11 +176,6 @@ def setup_parser(subparser):
reproduce.add_argument( reproduce.add_argument(
"-s", "--autostart", help="Run docker reproducer automatically", action="store_true" "-s", "--autostart", help="Run docker reproducer automatically", action="store_true"
) )
reproduce.add_argument(
"--use-local-head",
help="Use the HEAD of the local Spack instead of reproducing a commit",
action="store_true",
)
gpg_group = reproduce.add_mutually_exclusive_group(required=False) gpg_group = reproduce.add_mutually_exclusive_group(required=False)
gpg_group.add_argument( gpg_group.add_argument(
"--gpg-file", help="Path to public GPG key for validating binary cache installs" "--gpg-file", help="Path to public GPG key for validating binary cache installs"
@@ -613,12 +608,7 @@ def ci_reproduce(args):
gpg_key_url = None gpg_key_url = None
return spack_ci.reproduce_ci_job( return spack_ci.reproduce_ci_job(
args.job_url, args.job_url, args.working_dir, args.autostart, gpg_key_url, args.runtime
args.working_dir,
args.autostart,
gpg_key_url,
args.runtime,
args.use_local_head,
) )
View File
@@ -4,7 +4,7 @@
import argparse import argparse
import os import os.path
import textwrap import textwrap
from llnl.util.lang import stable_partition from llnl.util.lang import stable_partition
@@ -528,6 +528,7 @@ def __call__(self, parser, namespace, values, option_string):
# the const from the constructor or a value from the CLI. # the const from the constructor or a value from the CLI.
# Note that this is only called if the argument is actually # Note that this is only called if the argument is actually
# specified on the command line. # specified on the command line.
spack.config.CONFIG.ensure_scope_ordering()
spack.config.set(self.config_path, self.const, scope="command_line") spack.config.set(self.config_path, self.const, scope="command_line")
View File
@@ -2,6 +2,7 @@
# #
# SPDX-License-Identifier: (Apache-2.0 OR MIT) # SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os import os
import os.path
import llnl.util.tty import llnl.util.tty

View File
# #
# SPDX-License-Identifier: (Apache-2.0 OR MIT) # SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import platform import platform
import re
import sys
from datetime import datetime
from glob import glob
import llnl.util.tty as tty
from llnl.util.filesystem import working_dir
import spack import spack
import spack.paths
import spack.platforms import spack.platforms
import spack.spec import spack.spec
import spack.store
import spack.util.git
from spack.util.executable import which
description = "debugging commands for troubleshooting Spack" description = "debugging commands for troubleshooting Spack"
section = "developer" section = "developer"
@@ -15,13 +27,67 @@
def setup_parser(subparser): def setup_parser(subparser):
sp = subparser.add_subparsers(metavar="SUBCOMMAND", dest="debug_command") sp = subparser.add_subparsers(metavar="SUBCOMMAND", dest="debug_command")
sp.add_parser("create-db-tarball", help="create a tarball of Spack's installation metadata")
sp.add_parser("report", help="print information useful for bug reports") sp.add_parser("report", help="print information useful for bug reports")
def _debug_tarball_suffix():
now = datetime.now()
suffix = now.strftime("%Y-%m-%d-%H%M%S")
git = spack.util.git.git()
if not git:
return "nobranch-nogit-%s" % suffix
with working_dir(spack.paths.prefix):
if not os.path.isdir(".git"):
return "nobranch.nogit.%s" % suffix
# Get symbolic branch name and strip any special chars (mainly '/')
symbolic = git("rev-parse", "--abbrev-ref", "--short", "HEAD", output=str).strip()
symbolic = re.sub(r"[^\w.-]", "-", symbolic)
# Get the commit hash too.
commit = git("rev-parse", "--short", "HEAD", output=str).strip()
if symbolic == commit:
return "nobranch.%s.%s" % (commit, suffix)
else:
return "%s.%s.%s" % (symbolic, commit, suffix)
def create_db_tarball(args):
tar = which("tar")
tarball_name = "spack-db.%s.tar.gz" % _debug_tarball_suffix()
tarball_path = os.path.abspath(tarball_name)
base = os.path.basename(str(spack.store.STORE.root))
transform_args = []
# Currently --transform and -s are not supported by Windows native tar
if "GNU" in tar("--version", output=str):
transform_args = ["--transform", "s/^%s/%s/" % (base, tarball_name)]
elif sys.platform != "win32":
transform_args = ["-s", "/^%s/%s/" % (base, tarball_name)]
wd = os.path.dirname(str(spack.store.STORE.root))
with working_dir(wd):
files = [spack.store.STORE.db._index_path]
files += glob("%s/*/*/*/.spack/spec.json" % base)
files += glob("%s/*/*/*/.spack/spec.yaml" % base)
files = [os.path.relpath(f) for f in files]
args = ["-czf", tarball_path]
args += transform_args
args += files
tar(*args)
tty.msg("Created %s" % tarball_name)
def report(args): def report(args):
host_platform = spack.platforms.host() host_platform = spack.platforms.host()
host_os = host_platform.default_operating_system() host_os = host_platform.operating_system("frontend")
host_target = host_platform.default_target() host_target = host_platform.target("frontend")
architecture = spack.spec.ArchSpec((str(host_platform), str(host_os), str(host_target))) architecture = spack.spec.ArchSpec((str(host_platform), str(host_os), str(host_target)))
print("* **Spack:**", spack.get_version()) print("* **Spack:**", spack.get_version())
print("* **Python:**", platform.python_version()) print("* **Python:**", platform.python_version())
@@ -29,5 +95,5 @@ def report(args):
def debug(parser, args): def debug(parser, args):
if args.debug_command == "report": action = {"create-db-tarball": create_db_tarball, "report": report}
report(args) action[args.debug_command](args)
View File
@@ -9,9 +9,9 @@
import spack.cmd import spack.cmd
import spack.environment as ev import spack.environment as ev
import spack.package_base
import spack.store import spack.store
from spack.cmd.common import arguments from spack.cmd.common import arguments
from spack.solver.input_analysis import create_graph_analyzer
description = "show dependencies of a package" description = "show dependencies of a package"
section = "basic" section = "basic"
@@ -68,17 +68,15 @@ def dependencies(parser, args):
else: else:
spec = specs[0] spec = specs[0]
dependencies, virtuals, _ = create_graph_analyzer().possible_dependencies( dependencies = spack.package_base.possible_dependencies(
spec, spec,
transitive=args.transitive, transitive=args.transitive,
expand_virtuals=args.expand_virtuals, expand_virtuals=args.expand_virtuals,
allowed_deps=args.deptype, depflag=args.deptype,
) )
if not args.expand_virtuals:
dependencies.update(virtuals)
if spec.name in dependencies: if spec.name in dependencies:
dependencies.remove(spec.name) del dependencies[spec.name]
if dependencies: if dependencies:
colify(sorted(dependencies)) colify(sorted(dependencies))
View File
@@ -18,7 +18,6 @@
from llnl.util.symlink import symlink from llnl.util.symlink import symlink
import spack.cmd import spack.cmd
import spack.concretize
import spack.environment as ev import spack.environment as ev
import spack.installer import spack.installer
import spack.store import spack.store
@@ -104,7 +103,7 @@ def deprecate(parser, args):
) )
if args.install: if args.install:
deprecator = spack.concretize.concretize_one(specs[1]) deprecator = specs[1].concretized()
else: else:
deprecator = spack.cmd.disambiguate_spec(specs[1], env, local=True) deprecator = spack.cmd.disambiguate_spec(specs[1], env, local=True)
View File
@@ -10,7 +10,6 @@
import spack.build_environment import spack.build_environment
import spack.cmd import spack.cmd
import spack.cmd.common.arguments import spack.cmd.common.arguments
import spack.concretize
import spack.config import spack.config
import spack.repo import spack.repo
from spack.cmd.common import arguments from spack.cmd.common import arguments
@@ -114,8 +113,8 @@ def dev_build(self, args):
source_path = os.path.abspath(source_path) source_path = os.path.abspath(source_path)
# Forces the build to run out of the source directory. # Forces the build to run out of the source directory.
spec.constrain(f'dev_path="{source_path}"') spec.constrain("dev_path=%s" % source_path)
spec = spack.concretize.concretize_one(spec) spec.concretize()
if spec.installed: if spec.installed:
tty.error("Already installed in %s" % spec.prefix) tty.error("Already installed in %s" % spec.prefix)
View File
@@ -125,7 +125,7 @@ def develop(parser, args):
version = spec.versions.concrete_range_as_version version = spec.versions.concrete_range_as_version
if not version: if not version:
# look up the maximum version so infinity versions are preferred for develop # look up the maximum version so infinity versions are preferred for develop
version = max(spack.repo.PATH.get_pkg_class(spec.fullname).versions.keys()) version = max(spec.package_class.versions.keys())
tty.msg(f"Defaulting to highest version: {spec.name}@{version}") tty.msg(f"Defaulting to highest version: {spec.name}@{version}")
spec.versions = spack.version.VersionList([version]) spec.versions = spack.version.VersionList([version])
View File
@@ -110,7 +110,10 @@ def external_find(args):
# Note that KeyboardInterrupt does not subclass Exception # Note that KeyboardInterrupt does not subclass Exception
# (so CTRL-C will terminate the program as expected). # (so CTRL-C will terminate the program as expected).
skip_msg = "Skipping manifest and continuing with other external checks" skip_msg = "Skipping manifest and continuing with other external checks"
if isinstance(e, OSError) and e.errno in (errno.EPERM, errno.EACCES): if (isinstance(e, IOError) or isinstance(e, OSError)) and e.errno in [
errno.EPERM,
errno.EACCES,
]:
# The manifest file does not have sufficient permissions enabled: # The manifest file does not have sufficient permissions enabled:
# print a warning and keep going # print a warning and keep going
tty.warn("Unable to read manifest due to insufficient permissions.", skip_msg) tty.warn("Unable to read manifest due to insufficient permissions.", skip_msg)
View File
@@ -54,6 +54,10 @@
@m{target=target} specific <target> processor @m{target=target} specific <target> processor
@m{arch=platform-os-target} shortcut for all three above @m{arch=platform-os-target} shortcut for all three above
cross-compiling:
@m{os=backend} or @m{os=be} build for compute node (backend)
@m{os=frontend} or @m{os=fe} build for login node (frontend)
dependencies: dependencies:
^dependency [constraints] specify constraints on dependencies ^dependency [constraints] specify constraints on dependencies
^@K{/hash} build with a specific installed ^@K{/hash} build with a specific installed
View File
@@ -13,7 +13,6 @@
from llnl.util import lang, tty from llnl.util import lang, tty
import spack.cmd import spack.cmd
import spack.concretize
import spack.config import spack.config
import spack.environment as ev import spack.environment as ev
import spack.paths import spack.paths
@@ -451,7 +450,7 @@ def concrete_specs_from_file(args):
else: else:
s = spack.spec.Spec.from_json(f) s = spack.spec.Spec.from_json(f)
concretized = spack.concretize.concretize_one(s) concretized = s.concretized()
if concretized.dag_hash() != s.dag_hash(): if concretized.dag_hash() != s.dag_hash():
msg = 'skipped invalid file "{0}". ' msg = 'skipped invalid file "{0}". '
msg += "The file does not contain a concrete spec." msg += "The file does not contain a concrete spec."
View File
@@ -7,9 +7,9 @@
from llnl.path import convert_to_posix_path from llnl.path import convert_to_posix_path
import spack.concretize
import spack.paths import spack.paths
import spack.util.executable import spack.util.executable
from spack.spec import Spec
description = "generate Windows installer" description = "generate Windows installer"
section = "admin" section = "admin"
@@ -65,7 +65,8 @@ def make_installer(parser, args):
""" """
if sys.platform == "win32": if sys.platform == "win32":
output_dir = args.output_dir output_dir = args.output_dir
cmake_spec = spack.concretize.concretize_one("cmake") cmake_spec = Spec("cmake")
cmake_spec.concretize()
cmake_path = os.path.join(cmake_spec.prefix, "bin", "cmake.exe") cmake_path = os.path.join(cmake_spec.prefix, "bin", "cmake.exe")
cpack_path = os.path.join(cmake_spec.prefix, "bin", "cpack.exe") cpack_path = os.path.join(cmake_spec.prefix, "bin", "cpack.exe")
spack_source = args.spack_source spack_source = args.spack_source
View File
@@ -492,7 +492,7 @@ def extend_with_additional_versions(specs, num_versions):
mirror_specs = spack.mirrors.utils.get_all_versions(specs) mirror_specs = spack.mirrors.utils.get_all_versions(specs)
else: else:
mirror_specs = spack.mirrors.utils.get_matching_versions(specs, num_versions=num_versions) mirror_specs = spack.mirrors.utils.get_matching_versions(specs, num_versions=num_versions)
mirror_specs = [spack.concretize.concretize_one(x) for x in mirror_specs] mirror_specs = [x.concretized() for x in mirror_specs]
return mirror_specs return mirror_specs
@@ -545,7 +545,7 @@ def _not_license_excluded(self, x):
package does not explicitly forbid redistributing source.""" package does not explicitly forbid redistributing source."""
if self.private: if self.private:
return True return True
elif spack.repo.PATH.get_pkg_class(x.fullname).redistribute_source(x): elif x.package_class.redistribute_source(x):
return True return True
else: else:
tty.debug( tty.debug(
View File
@@ -5,7 +5,7 @@
"""Implementation details of the ``spack module`` command.""" """Implementation details of the ``spack module`` command."""
import collections import collections
import os import os.path
import shutil import shutil
import sys import sys
View File
@@ -41,11 +41,7 @@ def providers(parser, args):
specs = spack.cmd.parse_specs(args.virtual_package) specs = spack.cmd.parse_specs(args.virtual_package)
# Check prerequisites # Check prerequisites
non_virtual = [ non_virtual = [str(s) for s in specs if not s.virtual or s.name not in valid_virtuals]
str(s)
for s in specs
if not spack.repo.PATH.is_virtual(s.name) or s.name not in valid_virtuals
]
if non_virtual: if non_virtual:
msg = "non-virtual specs cannot be part of the query " msg = "non-virtual specs cannot be part of the query "
msg += "[{0}]\n".format(", ".join(non_virtual)) msg += "[{0}]\n".format(", ".join(non_virtual))
View File
@@ -6,7 +6,7 @@
import os import os
import re import re
import sys import sys
from itertools import islice, zip_longest from itertools import zip_longest
from typing import Dict, List, Optional from typing import Dict, List, Optional
import llnl.util.tty as tty import llnl.util.tty as tty
@@ -423,8 +423,7 @@ def _run_import_check(
continue continue
for m in is_abs_import.finditer(contents): for m in is_abs_import.finditer(contents):
# Find at most two occurrences: the first is the import itself, the second is its usage. if contents.count(m.group(1)) == 1:
if len(list(islice(re.finditer(rf"{re.escape(m.group(1))}(?!\w)", contents), 2))) == 1:
to_remove.append(m.group(0)) to_remove.append(m.group(0))
exit_code = 1 exit_code = 1
print(f"{pretty_path}: redundant import: {m.group(1)}", file=out) print(f"{pretty_path}: redundant import: {m.group(1)}", file=out)
@@ -439,7 +438,7 @@ def _run_import_check(
module = _module_part(root, m.group(0)) module = _module_part(root, m.group(0))
if not module or module in to_add: if not module or module in to_add:
continue continue
if re.search(rf"import {re.escape(module)}(?!\w|\.)", contents): if re.search(rf"import {re.escape(module)}\b(?!\.)", contents):
continue continue
to_add.add(module) to_add.add(module)
exit_code = 1 exit_code = 1
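The difference between contents.count() and the regex with islice() is behavior-relevant: plain substring counting treats an occurrence of a longer identifier as a use of the import, while the (?!\w) lookahead does not, and islice() stops scanning after the second match. A small self-contained sketch of the difference (module name illustrative):

    import re
    from itertools import islice

    contents = "import spack.repo\nx = spack.repo_utils.thing()\n"
    name = "spack.repo"

    # substring counting sees 2 occurrences, so the import looks used
    print(contents.count(name))  # 2

    # the lookahead rejects "spack.repo_utils"; only the import line matches,
    # which is exactly the "redundant import" condition
    matches = list(islice(re.finditer(rf"{re.escape(name)}(?!\w)", contents), 2))
    print(len(matches))  # 1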

View File

@@ -177,15 +177,16 @@ def test_run(args):
         matching = spack.store.STORE.db.query_local(spec, hashes=hashes, explicit=explicit)
         if spec and not matching:
             tty.warn("No {0}installed packages match spec {1}".format(explicit_str, spec))
+            """
+            TODO: Need to write out a log message and/or CDASH Testing
+            output that package not installed IF continue to process
+            these issues here.

-            # TODO: Need to write out a log message and/or CDASH Testing
-            # output that package not installed IF continue to process
-            # these issues here.
-
-            # if args.log_format:
-            #     # Proceed with the spec assuming the test process
-            #     # to ensure report package as skipped (e.g., for CI)
-            #     specs_to_test.append(spec)
+            if args.log_format:
+                # Proceed with the spec assuming the test process
+                # to ensure report package as skipped (e.g., for CI)
+                specs_to_test.append(spec)
+            """

         specs_to_test.extend(matching)

@@ -252,9 +253,7 @@ def has_test_and_tags(pkg_class):
     hashes = env.all_hashes() if env else None
     specs = spack.store.STORE.db.query(hashes=hashes)
-    specs = list(
-        filter(lambda s: has_test_and_tags(spack.repo.PATH.get_pkg_class(s.fullname)), specs)
-    )
+    specs = list(filter(lambda s: has_test_and_tags(s.package_class), specs))

     spack.cmd.display_specs(specs, long=True)

View File

@@ -2,7 +2,7 @@
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
-import os
+import os.path
 import shutil

 import llnl.util.tty as tty

View File

@@ -144,7 +144,7 @@ def is_installed(spec):
         record = spack.store.STORE.db.query_local_by_spec_hash(spec.dag_hash())
         return record and record.installed

-    all_specs = traverse.traverse_nodes(
+    specs = traverse.traverse_nodes(
         specs,
         root=False,
         order="breadth",

@@ -155,7 +155,7 @@ def is_installed(spec):
     )

     with spack.store.STORE.db.read_transaction():
-        return [spec for spec in all_specs if is_installed(spec)]
+        return [spec for spec in specs if is_installed(spec)]


 def dependent_environments(

View File

@@ -5,7 +5,7 @@
 import argparse
 import collections
 import io
-import os
+import os.path
 import re
 import sys

@@ -216,7 +216,7 @@ def unit_test(parser, args, unknown_args):
     # Ensure clingo is available before switching to the
     # mock configuration used by unit tests
     with spack.bootstrap.ensure_bootstrap_configuration():
-        spack.bootstrap.ensure_clingo_importable_or_raise()
+        spack.bootstrap.ensure_core_dependencies()
         if pytest is None:
             spack.bootstrap.ensure_environment_dependencies()
             import pytest

View File

@@ -2,48 +2,35 @@
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
 import argparse
-import io
-from typing import List, Optional

 import llnl.util.tty as tty
-from llnl.string import plural
-from llnl.util.filesystem import visit_directory_tree

 import spack.cmd
 import spack.environment as ev
-import spack.spec
 import spack.store
 import spack.verify
-import spack.verify_libraries
-from spack.cmd.common import arguments

-description = "verify spack installations on disk"
+description = "check that all spack packages are on disk as installed"
 section = "admin"
 level = "long"

-MANIFEST_SUBPARSER: Optional[argparse.ArgumentParser] = None
-
-
-def setup_parser(subparser: argparse.ArgumentParser):
-    global MANIFEST_SUBPARSER
-    sp = subparser.add_subparsers(metavar="SUBCOMMAND", dest="verify_command")
-
-    MANIFEST_SUBPARSER = sp.add_parser(
-        "manifest", help=verify_manifest.__doc__, description=verify_manifest.__doc__
-    )
-    MANIFEST_SUBPARSER.add_argument(
+
+def setup_parser(subparser):
+    setup_parser.parser = subparser
+    subparser.add_argument(
         "-l", "--local", action="store_true", help="verify only locally installed packages"
     )
-    MANIFEST_SUBPARSER.add_argument(
+    subparser.add_argument(
         "-j", "--json", action="store_true", help="ouptut json-formatted errors"
     )
-    MANIFEST_SUBPARSER.add_argument("-a", "--all", action="store_true", help="verify all packages")
-    MANIFEST_SUBPARSER.add_argument(
+    subparser.add_argument("-a", "--all", action="store_true", help="verify all packages")
+    subparser.add_argument(
         "specs_or_files", nargs=argparse.REMAINDER, help="specs or files to verify"
     )

-    manifest_sp_type = MANIFEST_SUBPARSER.add_mutually_exclusive_group()
-    manifest_sp_type.add_argument(
+    type = subparser.add_mutually_exclusive_group()
+    type.add_argument(
         "-s",
         "--specs",
         action="store_const",

@@ -52,7 +39,7 @@ def setup_parser(subparser: argparse.ArgumentParser):
         default="specs",
         help="treat entries as specs (default)",
     )
-    manifest_sp_type.add_argument(
+    type.add_argument(
         "-f",
         "--files",
         action="store_const",

@@ -62,67 +49,14 @@ def setup_parser(subparser: argparse.ArgumentParser):
         help="treat entries as absolute filenames\n\ncannot be used with '-a'",
     )

-    libraries_subparser = sp.add_parser(
-        "libraries", help=verify_libraries.__doc__, description=verify_libraries.__doc__
-    )
-
-    arguments.add_common_arguments(libraries_subparser, ["constraint"])
-

 def verify(parser, args):
-    cmd = args.verify_command
-    if cmd == "libraries":
-        return verify_libraries(args)
-    elif cmd == "manifest":
-        return verify_manifest(args)
-    parser.error("invalid verify subcommand")
-
-
-def verify_libraries(args):
-    """verify that shared libraries of install packages can be located in rpaths (Linux only)"""
-    specs_from_db = [s for s in args.specs(installed=True) if not s.external]
-
-    tty.info(f"Checking {len(specs_from_db)} packages for shared library resolution")
-
-    errors = 0
-    for spec in specs_from_db:
-        try:
-            pkg = spec.package
-        except Exception:
-            tty.warn(f"Skipping {spec.cformat('{name}{@version}{/hash}')} due to missing package")
-        error_msg = _verify_libraries(spec, pkg.unresolved_libraries)
-        if error_msg is not None:
-            errors += 1
-            tty.error(error_msg)
-
-    if errors:
-        tty.error(f"Cannot resolve shared libraries in {plural(errors, 'package')}")
-        return 1
-
-
-def _verify_libraries(spec: spack.spec.Spec, unresolved_libraries: List[str]) -> Optional[str]:
-    """Go over the prefix of the installed spec and verify its shared libraries can be resolved."""
-    visitor = spack.verify_libraries.ResolveSharedElfLibDepsVisitor(
-        [*spack.verify_libraries.ALLOW_UNRESOLVED, *unresolved_libraries]
-    )
-    visit_directory_tree(spec.prefix, visitor)
-
-    if not visitor.problems:
-        return None
-
-    output = io.StringIO()
-    visitor.write(output, indent=4, brief=True)
-    message = output.getvalue().rstrip()
-    return f"{spec.cformat('{name}{@version}{/hash}')}: {spec.prefix}:\n{message}"
-
-
-def verify_manifest(args):
-    """verify that install directories have not been modified since installation"""
     local = args.local

     if args.type == "files":
         if args.all:
-            MANIFEST_SUBPARSER.error("cannot use --all with --files")
+            setup_parser.parser.print_help()
+            return 1

         for file in args.specs_or_files:
             results = spack.verify.check_file_manifest(file)

@@ -153,7 +87,8 @@ def verify_manifest(args):
         env = ev.active_environment()
         specs = list(map(lambda x: spack.cmd.disambiguate_spec(x, env, local=local), spec_args))
     else:
-        MANIFEST_SUBPARSER.error("use --all or specify specs to verify")
+        setup_parser.parser.print_help()
+        return 1

     for spec in specs:
         tty.debug("Verifying package %s")

View File

@@ -749,18 +749,12 @@ def __init__(self, compiler, feature, flag_name, ver_string=None):
 class CompilerCacheEntry:
     """Deserialized cache entry for a compiler"""

-    __slots__ = ("c_compiler_output", "real_version")
+    __slots__ = ["c_compiler_output", "real_version"]

     def __init__(self, c_compiler_output: Optional[str], real_version: str):
         self.c_compiler_output = c_compiler_output
         self.real_version = real_version

-    @property
-    def empty(self) -> bool:
-        """Sometimes the compiler is temporarily broken, preventing us from getting output. The
-        call site determines if that is a problem."""
-        return self.c_compiler_output is None
-
     @classmethod
     def from_dict(cls, data: Dict[str, Optional[str]]):
         if not isinstance(data, dict):

@@ -798,10 +792,9 @@ def __init__(self, cache: "FileCache") -> None:
         self.cache.init_entry(self.name)
         self._data: Dict[str, Dict[str, Optional[str]]] = {}

-    def _get_entry(self, key: str, *, allow_empty: bool) -> Optional[CompilerCacheEntry]:
+    def _get_entry(self, key: str) -> Optional[CompilerCacheEntry]:
         try:
-            entry = CompilerCacheEntry.from_dict(self._data[key])
-            return entry if allow_empty or not entry.empty else None
+            return CompilerCacheEntry.from_dict(self._data[key])
         except ValueError:
             del self._data[key]
         except KeyError:

@@ -819,7 +812,7 @@ def get(self, compiler: Compiler) -> CompilerCacheEntry:
             self._data = {}

         key = self._key(compiler)
-        value = self._get_entry(key, allow_empty=False)
+        value = self._get_entry(key)
         if value is not None:
             return value

@@ -833,7 +826,7 @@ def get(self, compiler: Compiler) -> CompilerCacheEntry:
                 self._data = {}

             # Use cache entry that may have been created by another process in the meantime.
-            entry = self._get_entry(key, allow_empty=True)
+            entry = self._get_entry(key)

             # Finally compute the cache entry
             if entry is None:
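On the removed side above, _get_entry() takes an allow_empty keyword that pairs with the empty property: an entry whose c_compiler_output is None records a compiler that produced no output. The lock-free first probe passes allow_empty=False so such a compiler is retried, while the probe taken under the write lock passes allow_empty=True to accept whatever another process stored. A minimal sketch of the pattern, reduced to the essentials:

    from typing import Dict, Optional

    class Entry:
        __slots__ = ("output",)

        def __init__(self, output: Optional[str]) -> None:
            self.output = output

        @property
        def empty(self) -> bool:
            return self.output is None

    def get_entry(data: Dict[str, Entry], key: str, *, allow_empty: bool) -> Optional[Entry]:
        entry = data.get(key)
        if entry is None:
            return None
        # reject "broken compiler" entries unless the caller tolerates them
        return entry if allow_empty or not entry.empty else None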

View File

@@ -801,17 +801,17 @@ def _extract_compiler_paths(spec: "spack.spec.Spec") -> Optional[Dict[str, str]]
 def _extract_os_and_target(spec: "spack.spec.Spec"):
     if not spec.architecture:
         host_platform = spack.platforms.host()
-        operating_system = host_platform.default_operating_system()
-        target = host_platform.default_target()
+        operating_system = host_platform.operating_system("default_os")
+        target = host_platform.target("default_target")
     else:
         target = spec.architecture.target
         if not target:
-            target = spack.platforms.host().default_target()
+            target = spack.platforms.host().target("default_target")

         operating_system = spec.os
         if not operating_system:
             host_platform = spack.platforms.host()
-            operating_system = host_platform.default_operating_system()
+            operating_system = host_platform.operating_system("default_os")

     return operating_system, target

View File

@@ -5,7 +5,7 @@
 import sys
 import time
 from contextlib import contextmanager
-from typing import Iterable, List, Optional, Sequence, Tuple, Union
+from typing import Iterable, Optional, Sequence, Tuple, Union

 import llnl.util.tty as tty

@@ -35,14 +35,14 @@ def enable_compiler_existence_check():
         CHECK_COMPILER_EXISTENCE = saved


-SpecPairInput = Tuple[Spec, Optional[Spec]]
 SpecPair = Tuple[Spec, Spec]
+SpecLike = Union[Spec, str]
 TestsType = Union[bool, Iterable[str]]


-def _concretize_specs_together(
-    abstract_specs: Sequence[Spec], tests: TestsType = False
-) -> List[Spec]:
+def concretize_specs_together(
+    abstract_specs: Sequence[SpecLike], tests: TestsType = False
+) -> Sequence[Spec]:
     """Given a number of specs as input, tries to concretize them together.

     Args:

@@ -50,16 +50,17 @@ def _concretize_specs_together(
         tests: list of package names for which to consider tests dependencies. If True, all nodes
             will have test dependencies. If False, test dependencies will be disregarded.
     """
-    from spack.solver.asp import Solver
+    import spack.solver.asp

     allow_deprecated = spack.config.get("config:deprecated", False)
-    result = Solver().solve(abstract_specs, tests=tests, allow_deprecated=allow_deprecated)
+    solver = spack.solver.asp.Solver()
+    result = solver.solve(abstract_specs, tests=tests, allow_deprecated=allow_deprecated)
     return [s.copy() for s in result.specs]


 def concretize_together(
-    spec_list: Sequence[SpecPairInput], tests: TestsType = False
-) -> List[SpecPair]:
+    spec_list: Sequence[SpecPair], tests: TestsType = False
+) -> Sequence[SpecPair]:
     """Given a number of specs as input, tries to concretize them together.

     Args:

@@ -70,13 +71,13 @@ def concretize_together(
     """
     to_concretize = [concrete if concrete else abstract for abstract, concrete in spec_list]
     abstract_specs = [abstract for abstract, _ in spec_list]
-    concrete_specs = _concretize_specs_together(to_concretize, tests=tests)
+    concrete_specs = concretize_specs_together(to_concretize, tests=tests)
     return list(zip(abstract_specs, concrete_specs))


 def concretize_together_when_possible(
-    spec_list: Sequence[SpecPairInput], tests: TestsType = False
-) -> List[SpecPair]:
+    spec_list: Sequence[SpecPair], tests: TestsType = False
+) -> Sequence[SpecPair]:
     """Given a number of specs as input, tries to concretize them together to the extent possible.

     See documentation for ``unify: when_possible`` concretization for the precise definition of

@@ -88,7 +89,7 @@ def concretize_together_when_possible(
         tests: list of package names for which to consider tests dependencies. If True, all nodes
             will have test dependencies. If False, test dependencies will be disregarded.
     """
-    from spack.solver.asp import Solver
+    import spack.solver.asp

     to_concretize = [concrete if concrete else abstract for abstract, concrete in spec_list]
     old_concrete_to_abstract = {

@@ -96,8 +97,9 @@ def concretize_together_when_possible(
     }

     result_by_user_spec = {}
+    solver = spack.solver.asp.Solver()
     allow_deprecated = spack.config.get("config:deprecated", False)
-    for result in Solver().solve_in_rounds(
+    for result in solver.solve_in_rounds(
         to_concretize, tests=tests, allow_deprecated=allow_deprecated
     ):
         result_by_user_spec.update(result.specs_by_input)

@@ -111,8 +113,8 @@ def concretize_together_when_possible(
 def concretize_separately(
-    spec_list: Sequence[SpecPairInput], tests: TestsType = False
-) -> List[SpecPair]:
+    spec_list: Sequence[SpecPair], tests: TestsType = False
+) -> Sequence[SpecPair]:
     """Concretizes the input specs separately from each other.

     Args:

@@ -121,7 +123,7 @@ def concretize_separately(
         tests: list of package names for which to consider tests dependencies. If True, all nodes
             will have test dependencies. If False, test dependencies will be disregarded.
     """
-    from spack.bootstrap import ensure_bootstrap_configuration, ensure_clingo_importable_or_raise
+    import spack.bootstrap

     to_concretize = [abstract for abstract, concrete in spec_list if not concrete]
     args = [

@@ -131,8 +133,8 @@ def concretize_separately(
     ]
     ret = [(i, abstract) for i, abstract in enumerate(to_concretize) if abstract.concrete]

     # Ensure we don't try to bootstrap clingo in parallel
-    with ensure_bootstrap_configuration():
-        ensure_clingo_importable_or_raise()
+    with spack.bootstrap.ensure_bootstrap_configuration():
+        spack.bootstrap.ensure_clingo_importable_or_raise()

     # Ensure all the indexes have been built or updated, since
     # otherwise the processes in the pool may timeout on waiting

@@ -187,52 +189,10 @@ def _concretize_task(packed_arguments: Tuple[int, str, TestsType]) -> Tuple[int,
     index, spec_str, tests = packed_arguments
     with tty.SuppressOutput(msg_enabled=False):
         start = time.time()
-        spec = concretize_one(Spec(spec_str), tests=tests)
+        spec = Spec(spec_str).concretized(tests=tests)
         return index, spec, time.time() - start


-def concretize_one(spec: Union[str, Spec], tests: TestsType = False) -> Spec:
-    """Return a concretized copy of the given spec.
-
-    Args:
-        tests: if False disregard 'test' dependencies, if a list of names activate them for
-            the packages in the list, if True activate 'test' dependencies for all packages.
-    """
-    from spack.solver.asp import Solver, SpecBuilder
-
-    if isinstance(spec, str):
-        spec = Spec(spec)
-    spec = spec.lookup_hash()
-
-    if spec.concrete:
-        return spec.copy()
-
-    for node in spec.traverse():
-        if not node.name:
-            raise spack.error.SpecError(
-                f"Spec {node} has no name; cannot concretize an anonymous spec"
-            )
-
-    allow_deprecated = spack.config.get("config:deprecated", False)
-    result = Solver().solve([spec], tests=tests, allow_deprecated=allow_deprecated)
-
-    # take the best answer
-    opt, i, answer = min(result.answers)
-    name = spec.name
-    # TODO: Consolidate this code with similar code in solve.py
-    if spack.repo.PATH.is_virtual(spec.name):
-        providers = [s.name for s in answer.values() if s.package.provides(name)]
-        name = providers[0]
-
-    node = SpecBuilder.make_node(pkg=name)
-    assert (
-        node in answer
-    ), f"cannot find {name} in the list of specs {','.join([n.pkg for n in answer.keys()])}"
-
-    concretized = answer[node]
-    return concretized
-
-
 class UnavailableCompilerVersionError(spack.error.SpackError):
     """Raised when there is no available compiler that satisfies a
     compiler spec."""

View File

@@ -36,8 +36,6 @@
 import sys
 from typing import Any, Callable, Dict, Generator, List, Optional, Tuple, Union

-import jsonschema
-
 from llnl.util import filesystem, lang, tty

 import spack.error

@@ -53,7 +51,6 @@
 import spack.schema.definitions
 import spack.schema.develop
 import spack.schema.env
-import spack.schema.env_vars
 import spack.schema.mirrors
 import spack.schema.modules
 import spack.schema.packages

@@ -66,14 +63,11 @@
 import spack.util.web as web_util
 from spack.util.cpus import cpus_available

-from .enums import ConfigScopePriority
-
 #: Dict from section names -> schema for that section
 SECTION_SCHEMAS: Dict[str, Any] = {
     "compilers": spack.schema.compilers.schema,
     "concretizer": spack.schema.concretizer.schema,
     "definitions": spack.schema.definitions.schema,
-    "env_vars": spack.schema.env_vars.schema,
     "view": spack.schema.view.schema,
     "develop": spack.schema.develop.schema,
     "mirrors": spack.schema.mirrors.schema,

@@ -410,18 +404,26 @@ def _method(self, *args, **kwargs):
     return _method


-ScopeWithOptionalPriority = Union[ConfigScope, Tuple[int, ConfigScope]]
-ScopeWithPriority = Tuple[int, ConfigScope]
-
-
 class Configuration:
-    """A hierarchical configuration, merging a number of scopes at different priorities."""
+    """A full Spack configuration, from a hierarchy of config files.
+
+    This class makes it easy to add a new scope on top of an existing one.
+    """

     # convert to typing.OrderedDict when we drop 3.6, or OrderedDict when we reach 3.9
-    scopes: lang.PriorityOrderedMapping[str, ConfigScope]
+    scopes: Dict[str, ConfigScope]

-    def __init__(self) -> None:
-        self.scopes = lang.PriorityOrderedMapping()
+    def __init__(self, *scopes: ConfigScope) -> None:
+        """Initialize a configuration with an initial list of scopes.
+
+        Args:
+            scopes: list of scopes to add to this
+                Configuration, ordered from lowest to highest precedence
+        """
+        self.scopes = collections.OrderedDict()
+        for scope in scopes:
+            self.push_scope(scope)
         self.format_updates: Dict[str, List[ConfigScope]] = collections.defaultdict(list)

     def ensure_unwrapped(self) -> "Configuration":

@@ -429,31 +431,36 @@ def ensure_unwrapped(self) -> "Configuration":
         return self

     def highest(self) -> ConfigScope:
-        """Scope with the highest precedence"""
-        return next(self.scopes.reversed_values())  # type: ignore
+        """Scope with highest precedence"""
+        return next(reversed(self.scopes.values()))  # type: ignore

     @_config_mutator
-    def push_scope(self, scope: ConfigScope, priority: Optional[int] = None) -> None:
-        """Adds a scope to the Configuration, at a given priority.
-
-        If a priority is not given, it is assumed to be the current highest priority.
-
-        Args:
-            scope: scope to be added
-            priority: priority of the scope
-        """
-        tty.debug(f"[CONFIGURATION: PUSH SCOPE]: {str(scope)}, priority={priority}", level=2)
-        self.scopes.add(scope.name, value=scope, priority=priority)
+    def ensure_scope_ordering(self):
+        """Ensure that scope order matches documented precedent"""
+        # FIXME: We also need to consider that custom configurations and other orderings
+        # may not be preserved correctly
+        if "command_line" in self.scopes:
+            # TODO (when dropping python 3.6): self.scopes.move_to_end
+            self.scopes["command_line"] = self.remove_scope("command_line")
+
+    @_config_mutator
+    def push_scope(self, scope: ConfigScope) -> None:
+        """Add a higher precedence scope to the Configuration."""
+        tty.debug(f"[CONFIGURATION: PUSH SCOPE]: {str(scope)}", level=2)
+        self.scopes[scope.name] = scope
+
+    @_config_mutator
+    def pop_scope(self) -> ConfigScope:
+        """Remove the highest precedence scope and return it."""
+        name, scope = self.scopes.popitem(last=True)  # type: ignore[call-arg]
+        tty.debug(f"[CONFIGURATION: POP SCOPE]: {str(scope)}", level=2)
+        return scope

     @_config_mutator
     def remove_scope(self, scope_name: str) -> Optional[ConfigScope]:
-        """Removes a scope by name, and returns it. If the scope does not exist, returns None."""
-        try:
-            scope = self.scopes.remove(scope_name)
-            tty.debug(f"[CONFIGURATION: POP SCOPE]: {str(scope)}", level=2)
-        except KeyError as e:
-            tty.debug(f"[CONFIGURATION: POP SCOPE]: {e}", level=2)
-            return None
+        """Remove scope by name; has no effect when ``scope_name`` does not exist"""
+        scope = self.scopes.pop(scope_name, None)
+        tty.debug(f"[CONFIGURATION: POP SCOPE]: {str(scope)}", level=2)
         return scope

     @property

@@ -462,13 +469,15 @@ def writable_scopes(self) -> Generator[ConfigScope, None, None]:
         return (s for s in self.scopes.values() if s.writable)

     def highest_precedence_scope(self) -> ConfigScope:
-        """Writable scope with the highest precedence."""
-        return next(s for s in self.scopes.reversed_values() if s.writable)
+        """Writable scope with highest precedence."""
+        return next(s for s in reversed(self.scopes.values()) if s.writable)  # type: ignore

     def highest_precedence_non_platform_scope(self) -> ConfigScope:
-        """Writable non-platform scope with the highest precedence"""
+        """Writable non-platform scope with highest precedence"""
         return next(
-            s for s in self.scopes.reversed_values() if s.writable and not s.is_platform_dependent
+            s
+            for s in reversed(self.scopes.values())  # type: ignore
+            if s.writable and not s.is_platform_dependent
         )

     def matching_scopes(self, reg_expr) -> List[ConfigScope]:

@@ -735,7 +744,7 @@ def override(
     """
     if isinstance(path_or_scope, ConfigScope):
         overrides = path_or_scope
-        CONFIG.push_scope(path_or_scope, priority=None)
+        CONFIG.push_scope(path_or_scope)
     else:
         base_name = _OVERRIDES_BASE_NAME
         # Ensure the new override gets a unique scope name

@@ -749,7 +758,7 @@ def override(
                 break

         overrides = InternalConfigScope(scope_name)
-        CONFIG.push_scope(overrides, priority=None)
+        CONFIG.push_scope(overrides)
         CONFIG.set(path_or_scope, value, scope=scope_name)

     try:

@@ -759,15 +768,13 @@ def override(
         assert scope is overrides


-def _add_platform_scope(
-    cfg: Configuration, name: str, path: str, priority: ConfigScopePriority, writable: bool = True
-) -> None:
+def _add_platform_scope(cfg: Configuration, name: str, path: str, writable: bool = True) -> None:
     """Add a platform-specific subdirectory for the current platform."""
     platform = spack.platforms.host().name
     scope = DirectoryConfigScope(
         f"{name}/{platform}", os.path.join(path, platform), writable=writable
     )
-    cfg.push_scope(scope, priority=priority)
+    cfg.push_scope(scope)


 def config_paths_from_entry_points() -> List[Tuple[str, str]]:

@@ -802,10 +809,11 @@ def create() -> Configuration:
     it. It is bundled inside a function so that configuration can be
     initialized lazily.
     """
+    cfg = Configuration()
+
     # first do the builtin, hardcoded defaults
-    cfg = create_from(
-        (ConfigScopePriority.BUILTIN, InternalConfigScope("_builtin", CONFIG_DEFAULTS))
-    )
+    builtin = InternalConfigScope("_builtin", CONFIG_DEFAULTS)
+    cfg.push_scope(builtin)

     # Builtin paths to configuration files in Spack
     configuration_paths = [

@@ -835,9 +843,10 @@ def create() -> Configuration:
     # add each scope and its platform-specific directory
     for name, path in configuration_paths:
-        cfg.push_scope(DirectoryConfigScope(name, path), priority=ConfigScopePriority.CONFIG_FILES)
-        # Each scope can have per-platform overrides in subdirectories
-        _add_platform_scope(cfg, name, path, priority=ConfigScopePriority.CONFIG_FILES)
+        cfg.push_scope(DirectoryConfigScope(name, path))
+
+        # Each scope can have per-platfom overrides in subdirectories
+        _add_platform_scope(cfg, name, path)

     return cfg

@@ -942,7 +951,13 @@ def set(path: str, value: Any, scope: Optional[str] = None) -> None:
     return CONFIG.set(path, value, scope)


-def scopes() -> lang.PriorityOrderedMapping[str, ConfigScope]:
+def add_default_platform_scope(platform: str) -> None:
+    plat_name = os.path.join("defaults", platform)
+    plat_path = os.path.join(CONFIGURATION_DEFAULTS_PATH[1], platform)
+    CONFIG.push_scope(DirectoryConfigScope(plat_name, plat_path))
+
+
+def scopes() -> Dict[str, ConfigScope]:
     """Convenience function to get list of configuration scopes."""
     return CONFIG.scopes

@@ -1039,6 +1054,8 @@ def validate(
     This leverages the line information (start_mark, end_mark) stored
     on Spack YAML structures.
     """
+    import jsonschema
+
     try:
         spack.schema.Validator(schema).validate(data)
     except jsonschema.ValidationError as e:

@@ -1396,7 +1413,7 @@ def ensure_latest_format_fn(section: str) -> Callable[[YamlConfigDict], bool]:
 @contextlib.contextmanager
 def use_configuration(
-    *scopes_or_paths: Union[ScopeWithOptionalPriority, str]
+    *scopes_or_paths: Union[ConfigScope, str]
 ) -> Generator[Configuration, None, None]:
     """Use the configuration scopes passed as arguments within the context manager.

@@ -1411,7 +1428,7 @@ def use_configuration(
     global CONFIG

     # Normalize input and construct a Configuration object
-    configuration = create_from(*scopes_or_paths)
+    configuration = _config_from(scopes_or_paths)
     CONFIG.clear_caches(), configuration.clear_caches()

     saved_config, CONFIG = CONFIG, configuration

@@ -1422,44 +1439,23 @@ def use_configuration(
         CONFIG = saved_config


-def _normalize_input(entry: Union[ScopeWithOptionalPriority, str]) -> ScopeWithPriority:
-    if isinstance(entry, tuple):
-        return entry
-
-    default_priority = ConfigScopePriority.CONFIG_FILES
-    if isinstance(entry, ConfigScope):
-        return default_priority, entry
-
-    # Otherwise we need to construct it
-    path = os.path.normpath(entry)
-    assert os.path.isdir(path), f'"{path}" must be a directory'
-    name = os.path.basename(path)
-    return default_priority, DirectoryConfigScope(name, path)
-
-
 @lang.memoized
-def create_from(*scopes_or_paths: Union[ScopeWithOptionalPriority, str]) -> Configuration:
-    """Creates a configuration object from the scopes passed in input.
-
-    Args:
-        *scopes_or_paths: either a tuple of (priority, ConfigScope), or a ConfigScope, or a string
-            If priority is not given, it is assumed to be ConfigScopePriority.CONFIG_FILES. If a
-            string is given, a DirectoryConfigScope is created from it.
-
-    Examples:
-
-        >>> builtin_scope = InternalConfigScope("_builtin", {"config": {"build_jobs": 1}})
-        >>> cl_scope = InternalConfigScope("command_line", {"config": {"build_jobs": 10}})
-        >>> cfg = create_from(
-        ...     (ConfigScopePriority.COMMAND_LINE, cl_scope),
-        ...     (ConfigScopePriority.BUILTIN, builtin_scope)
-        ... )
-    """
-    scopes_with_priority = [_normalize_input(x) for x in scopes_or_paths]
-    result = Configuration()
-    for priority, scope in scopes_with_priority:
-        result.push_scope(scope, priority=priority)
-    return result
+def _config_from(scopes_or_paths: List[Union[ConfigScope, str]]) -> Configuration:
+    scopes = []
+    for scope_or_path in scopes_or_paths:
+        # If we have a config scope we are already done
+        if isinstance(scope_or_path, ConfigScope):
+            scopes.append(scope_or_path)
+            continue
+
+        # Otherwise we need to construct it
+        path = os.path.normpath(scope_or_path)
+        assert os.path.isdir(path), f'"{path}" must be a directory'
+        name = os.path.basename(path)
+        scopes.append(DirectoryConfigScope(name, path))
+
+    configuration = Configuration(*scopes)
+    return configuration


 def raw_github_gitlab_url(url: str) -> str:
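The OrderedDict-based scopes mapping on one side relies on insertion order plus the ensure_scope_ordering() fixup to keep command_line on top, while the other side pushes each scope with an explicit priority into a lang.PriorityOrderedMapping. A toy model of what a priority-ordered mapping buys (this is an illustration, not the actual llnl.util.lang implementation):

    import itertools

    class ToyPriorityMapping:
        def __init__(self):
            self._entries = []
            self._tie = itertools.count()  # keeps insertion order within a priority

        def add(self, name, value, priority=0):
            self._entries.append((priority, next(self._tie), name, value))

        def values(self):
            # iterate from lowest to highest precedence, whatever the push order
            return [v for _, _, _, v in sorted(self._entries)]

    cfg = ToyPriorityMapping()
    cfg.add("command_line", "<command line scope>", priority=10)  # pushed first...
    cfg.add("user", "<user config scope>", priority=0)            # ...but stays below
    print(cfg.values())  # ['<user config scope>', '<command line scope>']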

View File

@@ -6,8 +6,6 @@
 """
 import warnings

-import jsonschema
-
 import spack.environment as ev
 import spack.schema.env as env
 import spack.util.spack_yaml as syaml

@@ -32,6 +30,8 @@ def validate(configuration_file):
     Returns:
         A sanitized copy of the configuration stored in the input file
     """
+    import jsonschema
+
     with open(configuration_file, encoding="utf-8") as f:
         config = syaml.load(f)

@@ -57,7 +57,7 @@ def validate(configuration_file):
     # Set the default value of the concretization strategy to unify and
     # warn if the user explicitly set another value
     env_dict.setdefault("concretizer", {"unify": True})
-    if env_dict["concretizer"]["unify"] is not True:
+    if not env_dict["concretizer"]["unify"] is True:
         warnings.warn(
             '"concretizer:unify" is not set to "true", which means the '
             "generated image may contain different variants of the same "

View File

@@ -3,7 +3,7 @@
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
 """Manages the details on the images used in the various stages."""
 import json
-import os
+import os.path
 import shlex
 import sys

View File

@@ -9,8 +9,6 @@
 from collections import namedtuple
 from typing import Optional

-import jsonschema
-
 import spack.environment as ev
 import spack.error
 import spack.schema.env

@@ -190,6 +188,8 @@ def paths(self):
     @tengine.context_property
     def manifest(self):
         """The spack.yaml file that should be used in the image"""
+        import jsonschema
+
         # Copy in the part of spack.yaml prescribed in the configuration file
         manifest = copy.deepcopy(self.config)
         manifest.pop("container")

View File

@@ -41,8 +41,6 @@
     Union,
 )

-import spack.repo
-
 try:
     import uuid

@@ -125,15 +123,6 @@
     "deprecated_for",
 )

-#: File where the database is written
-INDEX_JSON_FILE = "index.json"
-
-# Verifier file to check last modification of the DB
-_INDEX_VERIFIER_FILE = "index_verifier"
-
-# Lockfile for the database
-_LOCK_FILE = "lock"
-

 @llnl.util.lang.memoized
 def _getfqdn():

@@ -271,7 +260,7 @@ class ForbiddenLockError(SpackError):
 class ForbiddenLock:
     def __getattr__(self, name):
-        raise ForbiddenLockError(f"Cannot access attribute '{name}' of lock")
+        raise ForbiddenLockError("Cannot access attribute '{0}' of lock".format(name))

     def __reduce__(self):
         return ForbiddenLock, tuple()

@@ -430,25 +419,14 @@ class FailureTracker:
     the likelihood of collision very low with no cleanup required.
     """

-    #: root directory of the failure tracker
-    dir: pathlib.Path
-
-    #: File for locking particular concrete spec hashes
-    locker: SpecLocker
-
     def __init__(self, root_dir: Union[str, pathlib.Path], default_timeout: Optional[float]):
         #: Ensure a persistent location for dealing with parallel installation
         #: failures (e.g., across near-concurrent processes).
         self.dir = pathlib.Path(root_dir) / _DB_DIRNAME / "failures"
-        self.locker = SpecLocker(failures_lock_path(root_dir), default_timeout=default_timeout)
-
-    def _ensure_parent_directories(self) -> None:
-        """Ensure that parent directories of the FailureTracker exist.
-
-        Accesses the filesystem only once, the first time it's called on a given FailureTracker.
-        """
         self.dir.mkdir(parents=True, exist_ok=True)

+        self.locker = SpecLocker(failures_lock_path(root_dir), default_timeout=default_timeout)
+
     def clear(self, spec: "spack.spec.Spec", force: bool = False) -> None:
         """Removes any persistent and cached failure tracking for the spec.

@@ -491,18 +469,13 @@ def clear_all(self) -> None:
         tty.debug("Removing prefix failure tracking files")
         try:
-            marks = os.listdir(str(self.dir))
-        except FileNotFoundError:
-            return  # directory doesn't exist yet
+            for fail_mark in os.listdir(str(self.dir)):
+                try:
+                    (self.dir / fail_mark).unlink()
+                except OSError as exc:
+                    tty.warn(f"Unable to remove failure marking file {fail_mark}: {str(exc)}")
         except OSError as exc:
             tty.warn(f"Unable to remove failure marking files: {str(exc)}")
-            return
-
-        for fail_mark in marks:
-            try:
-                (self.dir / fail_mark).unlink()
-            except OSError as exc:
-                tty.warn(f"Unable to remove failure marking file {fail_mark}: {str(exc)}")

     def mark(self, spec: "spack.spec.Spec") -> lk.Lock:
         """Marks a spec as failing to install.

@@ -510,8 +483,6 @@ def mark(self, spec: "spack.spec.Spec") -> lk.Lock:
         Args:
             spec: spec that failed to install
         """
-        self._ensure_parent_directories()
-
         # Dump the spec to the failure file for (manual) debugging purposes
         path = self._path(spec)
         path.write_text(spec.to_json())

@@ -596,13 +567,17 @@ def __init__(
             Relevant only if the repository is not an upstream.
         """
         self.root = root
-        self.database_directory = pathlib.Path(self.root) / _DB_DIRNAME
+        self.database_directory = os.path.join(self.root, _DB_DIRNAME)
         self.layout = layout

         # Set up layout of database files within the db dir
-        self._index_path = self.database_directory / INDEX_JSON_FILE
-        self._verifier_path = self.database_directory / _INDEX_VERIFIER_FILE
-        self._lock_path = self.database_directory / _LOCK_FILE
+        self._index_path = os.path.join(self.database_directory, "index.json")
+        self._verifier_path = os.path.join(self.database_directory, "index_verifier")
+        self._lock_path = os.path.join(self.database_directory, "lock")
+
+        # Create needed directories and files
+        if not is_upstream and not os.path.exists(self.database_directory):
+            fs.mkdirp(self.database_directory)

         self.is_upstream = is_upstream
         self.last_seen_verifier = ""

@@ -617,14 +592,14 @@ def __init__(
         # initialize rest of state.
         self.db_lock_timeout = lock_cfg.database_timeout
-        tty.debug(f"DATABASE LOCK TIMEOUT: {str(self.db_lock_timeout)}s")
+        tty.debug("DATABASE LOCK TIMEOUT: {0}s".format(str(self.db_lock_timeout)))

         self.lock: Union[ForbiddenLock, lk.Lock]
         if self.is_upstream:
             self.lock = ForbiddenLock()
         else:
             self.lock = lk.Lock(
-                str(self._lock_path),
+                self._lock_path,
                 default_timeout=self.db_lock_timeout,
                 desc="database",
                 enable=lock_cfg.enable,

@@ -641,11 +616,6 @@ def __init__(
         self._write_transaction_impl = lk.WriteTransaction
         self._read_transaction_impl = lk.ReadTransaction

-    def _ensure_parent_directories(self):
-        """Create the parent directory for the DB, if necessary."""
-        if not self.is_upstream:
-            self.database_directory.mkdir(parents=True, exist_ok=True)
-
     def write_transaction(self):
         """Get a write lock context manager for use in a `with` block."""
         return self._write_transaction_impl(self.lock, acquire=self._read, release=self._write)

@@ -660,8 +630,6 @@ def _write_to_file(self, stream):
         This function does not do any locking or transactions.
         """
-        self._ensure_parent_directories()
-
         # map from per-spec hash code to installation record.
         installs = dict(
             (k, v.to_dict(include_fields=self.record_fields)) for k, v in self._data.items()

@@ -791,7 +759,7 @@ def _read_from_file(self, filename):
         Does not do any locking.
         """
         try:
-            with open(str(filename), "r", encoding="utf-8") as f:
+            with open(filename, "r", encoding="utf-8") as f:
                 # In the future we may use a stream of JSON objects, hence `raw_decode` for compat.
                 fdata, _ = JSONDecoder().raw_decode(f.read())
         except Exception as e:

@@ -892,13 +860,11 @@ def reindex(self):
         if self.is_upstream:
             raise UpstreamDatabaseLockingError("Cannot reindex an upstream database")

-        self._ensure_parent_directories()
-
         # Special transaction to avoid recursive reindex calls and to
         # ignore errors if we need to rebuild a corrupt database.
         def _read_suppress_error():
             try:
-                if self._index_path.is_file():
+                if os.path.isfile(self._index_path):
                     self._read_from_file(self._index_path)
             except CorruptDatabaseError as e:
                 tty.warn(f"Reindexing corrupt database, error was: {e}")

@@ -1041,7 +1007,7 @@ def _check_ref_counts(self):
                 % (key, found, expected, self._index_path)
             )

-    def _write(self, type=None, value=None, traceback=None):
+    def _write(self, type, value, traceback):
         """Write the in-memory database index to its file path.

         This is a helper function called by the WriteTransaction context

@@ -1052,8 +1018,6 @@ def _write(self, type=None, value=None, traceback=None):
         This routine does no locking.
         """
-        self._ensure_parent_directories()
-
         # Do not write if exceptions were raised
         if type is not None:
             # A failure interrupted a transaction, so we should record that

@@ -1062,16 +1026,16 @@
             self._state_is_inconsistent = True
             return

-        temp_file = str(self._index_path) + (".%s.%s.temp" % (_getfqdn(), os.getpid()))
+        temp_file = self._index_path + (".%s.%s.temp" % (_getfqdn(), os.getpid()))

         # Write a temporary database file them move it into place
         try:
             with open(temp_file, "w", encoding="utf-8") as f:
                 self._write_to_file(f)
-            fs.rename(temp_file, str(self._index_path))
+            fs.rename(temp_file, self._index_path)

             if _use_uuid:
-                with self._verifier_path.open("w", encoding="utf-8") as f:
+                with open(self._verifier_path, "w", encoding="utf-8") as f:
                     new_verifier = str(uuid.uuid4())
                     f.write(new_verifier)
                 self.last_seen_verifier = new_verifier

@@ -1084,11 +1048,11 @@
     def _read(self):
         """Re-read Database from the data in the set location. This does no locking."""
-        if self._index_path.is_file():
+        if os.path.isfile(self._index_path):
             current_verifier = ""
             if _use_uuid:
                 try:
-                    with self._verifier_path.open("r", encoding="utf-8") as f:
+                    with open(self._verifier_path, "r", encoding="utf-8") as f:
                         current_verifier = f.read()
                 except BaseException:
                     pass

@@ -1101,7 +1065,7 @@ def _read(self):
             self._state_is_inconsistent = False
             return
         elif self.is_upstream:
-            tty.warn(f"upstream not found: {self._index_path}")
+            tty.warn("upstream not found: {0}".format(self._index_path))

     def _add(
         self,

@@ -1366,7 +1330,7 @@ def deprecate(self, spec: "spack.spec.Spec", deprecator: "spack.spec.Spec") -> N
     def installed_relatives(
         self,
         spec: "spack.spec.Spec",
-        direction: tr.DirectionType = "children",
+        direction: str = "children",
         transitive: bool = True,
         deptype: Union[dt.DepFlag, dt.DepTypes] = dt.ALL,
     ) -> Set["spack.spec.Spec"]:

@@ -1558,12 +1522,7 @@ def _query(
         # If we did fine something, the query spec can't be virtual b/c we matched an actual
         # package installation, so skip the virtual check entirely. If we *didn't* find anything,
        # check all the deferred specs *if* the query is virtual.
-        if (
-            not results
-            and query_spec is not None
-            and deferred
-            and spack.repo.PATH.is_virtual(query_spec.name)
-        ):
+        if not results and query_spec is not None and deferred and query_spec.virtual:
            results = [spec for spec in deferred if spec.satisfies(query_spec)]

         return results

@@ -1722,7 +1681,7 @@ def query(
         )
         results = list(local_results) + list(x for x in upstream_results if x not in local_results)

-        results.sort()  # type: ignore[call-overload]
+        results.sort()

         return results

     def query_one(
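One side of this file creates the database directory eagerly in __init__, while the other defers it to the _ensure_parent_directories() helper that write paths call first, so merely constructing a Database does not touch the filesystem. A stripped-down sketch of the deferred pattern (class reduced to the essentials):

    import pathlib

    class ToyDB:
        def __init__(self, root: str, is_upstream: bool = False) -> None:
            self.database_directory = pathlib.Path(root) / ".spack-db"
            self.is_upstream = is_upstream
            # note: no mkdir here; construction stays filesystem-free

        def _ensure_parent_directories(self) -> None:
            if not self.is_upstream:
                self.database_directory.mkdir(parents=True, exist_ok=True)

        def write_index(self, text: str) -> None:
            self._ensure_parent_directories()  # idempotent, safe on every write
            (self.database_directory / "index.json").write_text(text)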

Some files were not shown because too many files have changed in this diff.