Installer: rewire spliced specs via RewireTask (#39136)

This PR allows users to configure explicit splicing replacement of an abstract spec in the concretizer.

```yaml
concretizer:
  splice:
    explicit:
    - target: mpi
      replacement: mpich/abcdef
      transitive: true
```

This config block would mean "for any spec that concretizes to use mpi, splice in mpich/abcdef in place of the mpi it would naturally concretize to use." See #20262, #26873, #27919, and #46382 for the PRs that enabled splicing in the Spec object. This PR is the first place the splice method is used in a user-facing manner. See https://spack.readthedocs.io/en/latest/spack.html#spack.spec.Spec.splice for more information on splicing.

This will allow users to reuse generic public binaries while splicing in the performant MPI implementation local to their system.

In the config file, the target may be any abstract spec. The replacement must be a spec that includes an abstract hash (e.g. `/abcdef`) resolving to a concrete spec. The `transitive` key is optional, defaulting to false if left out.
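
For illustration, a minimal sketch of how such a replacement resolves (the `mpich/abcdef` hash is the hypothetical one from the config above; `Spec.replace_hash` is the real API used by the solver changes below):

```python
import spack.spec

# "/abcdef" is an abstract hash prefix, so this spec is not yet concrete
replacement = spack.spec.Spec("mpich/abcdef")
assert replacement.abstract_hash == "abcdef"

# replace_hash() resolves the prefix against the local store and turns the
# spec into the matching installed concrete spec; it is expected to fail if
# no installed spec matches
replacement.replace_hash()
assert replacement.concrete
```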

Two important items to note:

1. When writing explicit splice config, the user is in charge of ensuring that the replacement specs they use are binary compatible with whatever targets they replace. In practice, this will likely require either specific knowledge of what packages will be installed by the user's workflow, or somewhat more specific abstract "target" specs for splicing, to ensure binary compatibility.
2. Explicit splices can cause the output of the concretizer not to satisfy the input. For example, using the config above, consider a package `hdf5/xyzabc` in a binary cache that depends on mvapich2. The command `spack install hdf5/xyzabc` will then install the result of splicing `mpich/abcdef` into `hdf5/xyzabc` in place of whatever mvapich2 spec it previously depended on. When this occurs, a warning is printed: `Warning: explicit splice configuration has caused the concretized spec {concrete_spec} not to satisfy the input spec {input_spec}`. The check behind this warning is sketched just below.
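
Condensed from the `_compute_specs_from_answer_set` change in the solver hunk below (the wrapper function is a paraphrase, not the actual code):

```python
import llnl.util.tty as tty

def accept_candidate(candidate, input_spec):
    # a spliced candidate is accepted when its build_spec satisfies the
    # input; if the spliced result itself no longer satisfies it, warn only
    if not (candidate and candidate.build_spec.satisfies(input_spec)):
        return False
    if not candidate.satisfies(input_spec):
        tty.warn(
            "explicit splice configuration has caused the concretized spec"
            f" {candidate} not to satisfy the input spec {input_spec}"
        )
    return True
```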

Highlighted technical details of implementation:

1. This PR required modifying the installer to have two separate task types, `RewireTask` and `BuildTask` (sketched after this list). Spliced specs are queued as `RewireTask`s and standard specs are queued as `BuildTask`s. Each spliced spec retains a pointer to its `build_spec` for provenance. If a `RewireTask` is dequeued and the associated `build_spec` is available neither in the install tree nor from a binary cache, the `RewireTask` is requeued with a new dependency on a `BuildTask` for the `build_spec`, and `BuildTask`s are queued for the build spec and its dependencies.
2. Relocation is modified so that a spack binary can be simultaneously installed and rewired. This ensures that installing the build_spec is not necessary when splicing from a binary cache.
3. The splicing model is modified to more accurately represent build dependencies -- that is, spliced specs do not have build dependencies, as spliced specs are never built. Their build_specs retain the build dependencies, as they may be built as part of installing the spliced spec.
4. There were vestiges of the compiler bootstrapping logic that were not removed in #46237 because I asked alalazo to leave them in, to avoid making the rebase for this PR harder than it needed to be. Those remnants are removed in this PR.
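
In sketch form, the dispatch and requeue flow looks like this (paraphrased from the installer changes below; `RewireTask`, `BuildTask`, `BuildStatus`, and `ExecuteResult` are the real names, while the two functions are illustrative wrappers):

```python
def init_task(installer, pkg, request):
    # spliced specs are queued as RewireTasks, everything else as BuildTasks
    cls = RewireTask if pkg.spec.spliced else BuildTask
    return cls(pkg, request=request, status=BuildStatus.QUEUED, installed=installer.installed)

def run_task(installer, task, install_status):
    rc = task.execute(install_status)
    if rc == ExecuteResult.MISSING_BUILD_SPEC:
        # the build_spec is neither installed nor in a binary cache: queue
        # BuildTasks for it and its dependencies, then requeue this task with
        # a dependency on the build_spec's task
        installer._requeue_with_build_spec_tasks(task)
    else:
        installer._update_installed(task)
```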

Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Nathan Hanford 2024-10-10 15:48:58 -07:00 committed by GitHub
parent e6114f544d
commit af62a062cc
18 changed files with 875 additions and 374 deletions


@ -85,7 +85,7 @@ jobs:
source share/spack/setup-env.sh
spack debug report
spack -d bootstrap now --dev
- spack style -t black
+ spack -d style -t black
spack unit-test -V
import-check:
runs-on: ubuntu-latest


@ -166,3 +166,74 @@ while `py-numpy` still needs an older version:
Up to Spack v0.20 ``duplicates:strategy:none`` was the default (and only) behavior. From Spack v0.21 the
default behavior is ``duplicates:strategy:minimal``.
--------
Splicing
--------

The ``splice`` key covers config attributes for splicing specs in the solver.
"Splicing" is a method for replacing a dependency with another spec
that provides the same package or virtual. There are two types of
splices, referring to different behaviors for shared dependencies
between the root spec and the new spec replacing a dependency:
"transitive" and "intransitive". A "transitive" splice is one that
resolves all conflicts by taking the dependency from the new node. An
"intransitive" splice is one that resolves all conflicts by taking the
dependency from the original root. From a theory perspective, hybrid
splices are possible but are not modeled by Spack.

All spliced specs retain a ``build_spec`` attribute that points to the
original Spec before any splice occurred. The ``build_spec`` for a
non-spliced spec is itself.
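
For a concrete feel for the underlying API, here is a minimal sketch in a
``spack python`` session (the ``hdf5`` and ``mpich`` specs are placeholders;
both sides of a splice must be concrete, and this paraphrases
``spack.spec.Spec.splice`` usage rather than quoting Spack code):

.. code-block:: python

   import spack.spec

   root = spack.spec.Spec("hdf5+mpi").concretized()
   replacement = spack.spec.Spec("mpich").concretized()

   spliced = root.splice(replacement, transitive=True)
   assert spliced.build_spec is not spliced  # original spec retained for provenance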

The figure below shows examples of transitive and intransitive splices:

.. figure:: images/splices.png
   :align: center

The concretizer can be configured to explicitly splice particular
replacements for a target spec. Splicing will allow the user to make
use of generically built public binary caches, while swapping in
highly optimized local builds for performance critical components
and/or components that interact closely with the specific hardware
details of the system. The most prominent candidate for splicing is
MPI providers. MPI packages have relatively well-understood ABI
characteristics, and most High Performance Computing facilities deploy
highly optimized MPI packages tailored to their particular
hardware. The following config block configures Spack to replace
whatever MPI provider each spec was concretized to use with the
particular package of ``mpich`` with the hash that begins ``abcdef``.

.. code-block:: yaml

   concretizer:
     splice:
       explicit:
       - target: mpi
         replacement: mpich/abcdef
         transitive: false

.. warning::

   When configuring an explicit splice, you as the user take on the
   responsibility for ensuring ABI compatibility between the specs
   matched by the target and the replacement you provide. If they are
   not compatible, Spack will not warn you and your application will
   fail to run.

The ``target`` field of an explicit splice can be any abstract
spec. The ``replacement`` field must be a spec that includes the hash
of a concrete spec, and the replacement must either be the same
package as the target, provide the virtual that is the target, or
provide a virtual that the target provides. The ``transitive`` field
is optional; splices are intransitive (``transitive: false``) unless
it is set to true.
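
For reference, the solver consumes these entries roughly as follows (a
condensed paraphrase of ``execute_explicit_splices`` in the solver, not the
exact code):

.. code-block:: python

   import spack.config
   import spack.spec

   def splice_triples():
       triples = []
       for entry in spack.config.CONFIG.get("concretizer:splice:explicit", []):
           replacement = spack.spec.Spec(entry["replacement"])
           # the schema requires an abstract hash such as /abcdef here
           replacement.replace_hash()
           triples.append(
               (entry["target"], replacement, entry.get("transitive", False))
           )
       return triples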

.. note::

   With explicit splices configured, it is possible for Spack to
   concretize to a spec that does not satisfy the input. For example,
   with the config above ``hdf5 ^mvapich2`` will concretize to use
   ``mpich/abcdef`` instead of ``mvapich2`` as the MPI provider. Spack
   will warn the user in this case, but will not fail the
   concretization.
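
Whether an installed spec was spliced can be checked after the fact via its
``spliced`` and ``build_spec`` properties, e.g. in a ``spack python``
session (the ``hdf5`` query is a placeholder):

.. code-block:: python

   import spack.store

   spec = spack.store.STORE.db.query_one("hdf5")
   if spec is not None and spec.spliced:
       # print the spec this installation was originally built as
       print(spec.build_spec.format("{name}{/hash:7}"))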

(Binary file added: images/splices.png, 358 KiB; the splice diagram referenced in the docs above.)


@ -35,6 +35,7 @@
import spack.caches
import spack.config as config
import spack.database as spack_db
+ import spack.deptypes as dt
import spack.error
import spack.hash_types as ht
import spack.hooks
@ -712,15 +713,32 @@ def get_buildfile_manifest(spec):
return data
- def hashes_to_prefixes(spec):
- """Return a dictionary of hashes to prefixes for a spec and its deps, excluding externals"""
- return {
- s.dag_hash(): str(s.prefix)
+ def deps_to_relocate(spec):
+ """Return the transitive link and direct run dependencies of the spec.
+
+ This is a special traversal for dependencies we need to consider when relocating a package.
+
+ Package binaries, scripts, and other files may refer to the prefixes of dependencies, so
+ we need to rewrite those locations when dependencies are in a different place at install time
+ than they were at build time.
+
+ This traversal covers transitive link dependencies and direct run dependencies because:
+
+ 1. Spack adds RPATHs for transitive link dependencies so that packages can find needed
+ dependency libraries.
+ 2. Packages may call any of their *direct* run dependencies (and may bake their paths into
+ binaries or scripts), so we also need to search for run dependency prefixes when relocating.
+
+ This returns a deduplicated list of transitive link dependencies and direct run dependencies.
+ """
+ deps = [
+ s
for s in itertools.chain(
spec.traverse(root=True, deptype="link"), spec.dependencies(deptype="run")
)
if not s.external
- }
+ ]
+ return llnl.util.lang.dedupe(deps, key=lambda s: s.dag_hash())
def get_buildinfo_dict(spec):
@ -736,7 +754,7 @@ def get_buildinfo_dict(spec):
"relocate_binaries": manifest["binary_to_relocate"],
"relocate_links": manifest["link_to_relocate"],
"hardlinks_deduped": manifest["hardlinks_deduped"],
- "hash_to_prefix": hashes_to_prefixes(spec),
+ "hash_to_prefix": {d.dag_hash(): str(d.prefix) for d in deps_to_relocate(spec)},
}
@ -1631,7 +1649,6 @@ def _oci_push(
Dict[str, spack.oci.oci.Blob],
List[Tuple[Spec, BaseException]],
]:
-
# Spec dag hash -> blob
checksums: Dict[str, spack.oci.oci.Blob] = {}
@ -2201,11 +2218,36 @@ def relocate_package(spec):
# First match specific prefix paths. Possibly the *local* install prefix
# of some dependency is in an upstream, so we cannot assume the original
# spack store root can be mapped uniformly to the new spack store root.
- for dag_hash, new_dep_prefix in hashes_to_prefixes(spec).items():
- if dag_hash in hash_to_old_prefix:
- old_dep_prefix = hash_to_old_prefix[dag_hash]
- prefix_to_prefix_bin[old_dep_prefix] = new_dep_prefix
- prefix_to_prefix_text[old_dep_prefix] = new_dep_prefix
+ #
+ # If the spec is spliced, we need to handle the simultaneous mapping
+ # from the old install_tree to the new install_tree and from the build_spec
+ # to the spliced spec.
+ # Because foo.build_spec is foo for any non-spliced spec, we can simplify
+ # by checking for spliced-in nodes by checking for nodes not in the build_spec
+ # without any explicit check for whether the spec is spliced.
+ # An analog in this algorithm is any spec that shares a name or provides the same virtuals
+ # in the context of the relevant root spec. This ensures that the analog for a spec s
+ # is the spec that s replaced when we spliced.
+ relocation_specs = deps_to_relocate(spec)
+ build_spec_ids = set(id(s) for s in spec.build_spec.traverse(deptype=dt.ALL & ~dt.BUILD))
+ for s in relocation_specs:
+ analog = s
+ if id(s) not in build_spec_ids:
+ analogs = [
+ d
+ for d in spec.build_spec.traverse(deptype=dt.ALL & ~dt.BUILD)
+ if s._splice_match(d, self_root=spec, other_root=spec.build_spec)
+ ]
+ if analogs:
+ # Prefer same-name analogs and prefer higher versions
+ # This matches the preferences in Spec.splice, so we will find same node
+ analog = max(analogs, key=lambda a: (a.name == s.name, a.version))
+ lookup_dag_hash = analog.dag_hash()
+ if lookup_dag_hash in hash_to_old_prefix:
+ old_dep_prefix = hash_to_old_prefix[lookup_dag_hash]
+ prefix_to_prefix_bin[old_dep_prefix] = str(s.prefix)
+ prefix_to_prefix_text[old_dep_prefix] = str(s.prefix)
# Only then add the generic fallback of install prefix -> install prefix.
prefix_to_prefix_text[old_prefix] = new_prefix
@ -2543,10 +2585,10 @@ def install_root_node(spec, unsigned=False, force=False, sha256=None):
warnings.warn("Package for spec {0} already installed.".format(spec.format())) warnings.warn("Package for spec {0} already installed.".format(spec.format()))
return return
download_result = download_tarball(spec, unsigned) download_result = download_tarball(spec.build_spec, unsigned)
if not download_result: if not download_result:
msg = 'download of binary cache file for spec "{0}" failed' msg = 'download of binary cache file for spec "{0}" failed'
raise RuntimeError(msg.format(spec.format())) raise RuntimeError(msg.format(spec.build_spec.format()))
if sha256: if sha256:
checker = spack.util.crypto.Checker(sha256) checker = spack.util.crypto.Checker(sha256)
@ -2565,6 +2607,11 @@ def install_root_node(spec, unsigned=False, force=False, sha256=None):
with spack.util.path.filter_padding(): with spack.util.path.filter_padding():
tty.msg('Installing "{0}" from a buildcache'.format(spec.format())) tty.msg('Installing "{0}" from a buildcache'.format(spec.format()))
extract_tarball(spec, download_result, force) extract_tarball(spec, download_result, force)
spec.package.windows_establish_runtime_linkage()
if spec.spliced: # overwrite old metadata with new
spack.store.STORE.layout.write_spec(
spec, spack.store.STORE.layout.spec_file_path(spec)
)
spack.hooks.post_install(spec, False) spack.hooks.post_install(spec, False)
spack.store.STORE.db.add(spec) spack.store.STORE.db.add(spec)


@ -2165,6 +2165,13 @@ def _concrete_specs_dict(self):
# Assumes no legacy formats, since this was just created.
spec_dict[ht.dag_hash.name] = s.dag_hash()
concrete_specs[s.dag_hash()] = spec_dict
+ if s.build_spec is not s:
+ for d in s.build_spec.traverse():
+ build_spec_dict = d.node_dict_with_hashes(hash=ht.dag_hash)
+ build_spec_dict[ht.dag_hash.name] = d.dag_hash()
+ concrete_specs[d.dag_hash()] = build_spec_dict
return concrete_specs
def _concrete_roots_dict(self):
@ -2324,7 +2331,7 @@ def filter_specs(self, reader, json_specs_by_hash, order_concretized):
specs_by_hash[lockfile_key] = spec
# Second pass: For each spec, get its dependencies from the node dict
- # and add them to the spec
+ # and add them to the spec, including build specs
for lockfile_key, node_dict in json_specs_by_hash.items():
name, data = reader.name_and_data(node_dict)
for _, dep_hash, deptypes, _, virtuals in reader.dependencies_from_node_dict(data):
@ -2332,6 +2339,10 @@ def filter_specs(self, reader, json_specs_by_hash, order_concretized):
specs_by_hash[dep_hash], depflag=dt.canonicalize(deptypes), virtuals=virtuals
)
+ if "build_spec" in node_dict:
+ _, bhash, _ = reader.extract_build_spec_info_from_node_dict(node_dict)
+ specs_by_hash[lockfile_key]._build_spec = specs_by_hash[bhash]
# Traverse the root specs one at a time in the order they appear.
# The first time we see each DAG hash, that's the one we want to
# keep. This is only required as long as we support older lockfile


@ -2,8 +2,7 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
- """
- This module encapsulates package installation functionality.
+ """This module encapsulates package installation functionality.
The PackageInstaller coordinates concurrent builds of packages for the same
Spack instance by leveraging the dependency DAG and file system locks. It
@ -17,13 +16,14 @@
File system locks enable coordination such that no two processes attempt to
build the same or a failed dependency package.
- Failures to install dependency packages result in removal of their dependents'
- build tasks from the current process. A failure file is also written (and
- locked) so that other processes can detect the failure and adjust their build
- tasks accordingly.
+ If a dependency package fails to install, its dependents' tasks will be
+ removed from the installing process's queue. A failure file is also written
+ and locked. Other processes use this file to detect the failure and dequeue
+ its dependents.
This module supports the coordination of local and distributed concurrent
installations of packages in a Spack instance.
"""
import copy
@ -59,6 +59,7 @@
import spack.package_base
import spack.package_prefs as prefs
import spack.repo
+ import spack.rewiring
import spack.spec
import spack.store
import spack.util.executable
@ -110,13 +111,22 @@ def _write_timer_json(pkg, timer, cache):
return
- class InstallAction:
+ class ExecuteResult(enum.Enum):
+ # Task succeeded
+ SUCCESS = enum.auto()
+ # Task failed
+ FAILED = enum.auto()
+ # Task is missing build spec and will be requeued
+ MISSING_BUILD_SPEC = enum.auto()
+
+ class InstallAction(enum.Enum):
#: Don't perform an install
- NONE = 0
+ NONE = enum.auto()
#: Do a standard install
- INSTALL = 1
+ INSTALL = enum.auto()
#: Do an overwrite install
- OVERWRITE = 2
+ OVERWRITE = enum.auto()
class InstallStatus:
@ -440,7 +450,7 @@ def _process_binary_cache_tarball(
""" """
with timer.measure("fetch"): with timer.measure("fetch"):
download_result = binary_distribution.download_tarball( download_result = binary_distribution.download_tarball(
pkg.spec, unsigned, mirrors_for_spec pkg.spec.build_spec, unsigned, mirrors_for_spec
) )
if download_result is None: if download_result is None:
@ -451,6 +461,11 @@ def _process_binary_cache_tarball(
with timer.measure("install"), spack.util.path.filter_padding(): with timer.measure("install"), spack.util.path.filter_padding():
binary_distribution.extract_tarball(pkg.spec, download_result, force=False, timer=timer) binary_distribution.extract_tarball(pkg.spec, download_result, force=False, timer=timer)
if pkg.spec.spliced: # overwrite old metadata with new
spack.store.STORE.layout.write_spec(
pkg.spec, spack.store.STORE.layout.spec_file_path(pkg.spec)
)
if hasattr(pkg, "_post_buildcache_install_hook"): if hasattr(pkg, "_post_buildcache_install_hook"):
pkg._post_buildcache_install_hook() pkg._post_buildcache_install_hook()
@ -686,7 +701,7 @@ def log(pkg: "spack.package_base.PackageBase") -> None:
def package_id(spec: "spack.spec.Spec") -> str:
"""A "unique" package identifier for installation purposes
- The identifier is used to track build tasks, locks, install, and
+ The identifier is used to track tasks, locks, install, and
failure statuses.
The identifier needs to distinguish between combinations of compilers
@ -745,14 +760,14 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
)
def __repr__(self) -> str:
- """Returns a formal representation of the build request."""
+ """Return a formal representation of the build request."""
rep = f"{self.__class__.__name__}("
for attr, value in self.__dict__.items():
rep += f"{attr}={value.__repr__()}, "
return f"{rep.strip(', ')})"
def __str__(self) -> str:
- """Returns a printable version of the build request."""
+ """Return a printable version of the build request."""
return f"package={self.pkg.name}, install_args={self.install_args}"
def _add_default_args(self) -> None:
@ -849,8 +864,8 @@ def traverse_dependencies(self, spec=None, visited=None) -> Iterator["spack.spec
yield dep
- class BuildTask:
- """Class for representing the build task for a package."""
+ class Task:
+ """Base class for representing a task for a package."""
def __init__(
self,
@ -864,13 +879,11 @@ def __init__(
installed: Set[str] = set(),
):
"""
- Instantiate a build task for a package.
+ Instantiate a task for a package.
Args:
- pkg: the package to be built and installed and whose spec is
- concrete
+ pkg: the package to be built and installed
request: the associated install request
- compiler: whether task is for a bootstrap compiler
start: the initial start time for the package, in seconds
attempts: the number of attempts to install the package, which
should be 0 when the task is initially instantiated
@ -909,12 +922,12 @@ def __init__(
# The initial build task cannot have status "removed".
if attempts == 0 and status == BuildStatus.REMOVED:
raise spack.error.InstallError(
- f"Cannot create a build task for {self.pkg_id} with status '{status}'", pkg=pkg
+ f"Cannot create a task for {self.pkg_id} with status '{status}'", pkg=pkg
)
self.status = status
- # Package is associated with a bootstrap compiler
- self.compiler = compiler
+ # cache the PID, which is used for distributed build messages in self.execute
+ self.pid = os.getpid()
# The initial start time for processing the spec
self.start = start
@ -945,7 +958,7 @@ def __init__(
)
# List of uninstalled dependencies, which is used to establish
- # the priority of the build task.
+ # the priority of the task.
self.uninstalled_deps = set(
pkg_id for pkg_id in self.dependencies if pkg_id not in installed
)
@ -954,6 +967,13 @@ def __init__(
self.attempts = attempts
self._update()
def execute(self, install_status: InstallStatus) -> ExecuteResult:
"""Execute the work of this task.
The ``install_status`` is an ``InstallStatus`` object used to format progress reporting for
this task in the context of the full ``BuildRequest``."""
raise NotImplementedError
def __eq__(self, other):
return self.key == other.key
@ -973,14 +993,14 @@ def __ne__(self, other):
return self.key != other.key
def __repr__(self) -> str:
- """Returns a formal representation of the build task."""
+ """Returns a formal representation of the task."""
rep = f"{self.__class__.__name__}("
for attr, value in self.__dict__.items():
rep += f"{attr}={value.__repr__()}, "
return f"{rep.strip(', ')})"
def __str__(self) -> str:
- """Returns a printable version of the build task."""
+ """Returns a printable version of the task."""
dependencies = f"#dependencies={len(self.dependencies)}"
return "priority={0}, status={1}, start={2}, {3}".format(
self.priority, self.status, self.start, dependencies
@ -997,8 +1017,7 @@ def _update(self) -> None:
def add_dependent(self, pkg_id: str) -> None:
"""
- Ensure the dependent package id is in the task's list so it will be
- properly updated when this package is installed.
+ Ensure the package is in this task's ``dependents`` list.
Args:
pkg_id: package identifier of the dependent package
@ -1007,6 +1026,20 @@ def add_dependent(self, pkg_id: str) -> None:
tty.debug(f"Adding {pkg_id} as a dependent of {self.pkg_id}")
self.dependents.add(pkg_id)
def add_dependency(self, pkg_id, installed=False):
"""
Ensure the package is in this task's ``dependencies`` list.
Args:
pkg_id (str): package identifier of the dependency package
installed (bool): install status of the dependency package
"""
if pkg_id != self.pkg_id and pkg_id not in self.dependencies:
tty.debug(f"Adding {pkg_id} as a depencency of {self.pkg_id}")
self.dependencies.add(pkg_id)
if not installed:
self.uninstalled_deps.add(pkg_id)
def flag_installed(self, installed: List[str]) -> None:
"""
Ensure the dependency is not considered to still be uninstalled.
@ -1023,6 +1056,39 @@ def flag_installed(self, installed: List[str]) -> None:
level=2,
)
def _setup_install_dir(self, pkg: "spack.package_base.PackageBase") -> None:
"""
Create and ensure proper access controls for the install directory.
Write a small metadata file with the current spack environment.
Args:
pkg: the package to be built and installed
"""
# Move to a module level method.
if not os.path.exists(pkg.spec.prefix):
path = spack.util.path.debug_padded_filter(pkg.spec.prefix)
tty.debug(f"Creating the installation directory {path}")
spack.store.STORE.layout.create_install_directory(pkg.spec)
else:
# Set the proper group for the prefix
group = prefs.get_package_group(pkg.spec)
if group:
fs.chgrp(pkg.spec.prefix, group)
# Set the proper permissions.
# This has to be done after group because changing groups blows
# away the sticky group bit on the directory
mode = os.stat(pkg.spec.prefix).st_mode
perms = prefs.get_package_dir_permissions(pkg.spec)
if mode != perms:
os.chmod(pkg.spec.prefix, perms)
# Ensure the metadata path exists as well
fs.mkdirp(spack.store.STORE.layout.metadata_path(pkg.spec), mode=perms)
# Always write host environment - we assume this can change
spack.store.STORE.layout.write_host_environment(pkg.spec)
@property
def explicit(self) -> bool:
return self.pkg.spec.dag_hash() in self.request.install_args.get("explicit", [])
@ -1053,7 +1119,7 @@ def key(self) -> Tuple[int, int]:
"""The key is the tuple (# uninstalled dependencies, sequence).""" """The key is the tuple (# uninstalled dependencies, sequence)."""
return (self.priority, self.sequence) return (self.priority, self.sequence)
def next_attempt(self, installed) -> "BuildTask": def next_attempt(self, installed) -> "Task":
"""Create a new, updated task for the next installation attempt.""" """Create a new, updated task for the next installation attempt."""
task = copy.copy(self) task = copy.copy(self)
task._update() task._update()
@ -1067,6 +1133,100 @@ def priority(self):
return len(self.uninstalled_deps)
class BuildTask(Task):
"""Class for representing a build task for a package."""
def execute(self, install_status):
"""
Perform the installation of the requested spec and/or dependency
represented by the build task.
"""
install_args = self.request.install_args
tests = install_args.get("tests")
unsigned = install_args.get("unsigned")
pkg, pkg_id = self.pkg, self.pkg_id
tty.msg(install_msg(pkg_id, self.pid, install_status))
self.start = self.start or time.time()
self.status = BuildStatus.INSTALLING
# Use the binary cache if requested
if self.use_cache:
if _install_from_cache(pkg, self.explicit, unsigned):
return ExecuteResult.SUCCESS
elif self.cache_only:
raise spack.error.InstallError(
"No binary found when cache-only was specified", pkg=pkg
)
else:
tty.msg(f"No binary for {pkg_id} found: installing from source")
pkg.run_tests = tests is True or tests and pkg.name in tests
# hook that allows tests to inspect the Package before installation
# see unit_test_check() docs.
if not pkg.unit_test_check():
return ExecuteResult.FAILED
try:
# Create stage object now and let it be serialized for the child process. That
# way monkeypatch in tests works correctly.
pkg.stage
self._setup_install_dir(pkg)
# Create a child process to do the actual installation.
# Preserve verbosity settings across installs.
spack.package_base.PackageBase._verbose = spack.build_environment.start_build_process(
pkg, build_process, install_args
)
# Note: PARENT of the build process adds the new package to
# the database, so that we don't need to re-read from file.
spack.store.STORE.db.add(pkg.spec, explicit=self.explicit)
except spack.error.StopPhase as e:
# A StopPhase exception means that do_install was asked to
# stop early from clients, and is not an error at this point
pid = f"{self.pid}: " if tty.show_pid() else ""
tty.debug(f"{pid}{str(e)}")
tty.debug(f"Package stage directory: {pkg.stage.source_path}")
return ExecuteResult.SUCCESS
class RewireTask(Task):
"""Class for representing a rewire task for a package."""
def execute(self, install_status):
"""Execute rewire task
Rewire tasks are executed either by rewiring self.pkg.spec.build_spec, if it is already
installed, or by downloading and rewiring a binary for it.
If the build spec is available neither installed nor as a binary, return
ExecuteResult.MISSING_BUILD_SPEC. This will prompt the Installer to requeue the task
with a dependency on a BuildTask to install self.pkg.spec.build_spec.
"""
oldstatus = self.status
self.status = BuildStatus.INSTALLING
tty.msg(install_msg(self.pkg_id, self.pid, install_status))
self.start = self.start or time.time()
if not self.pkg.spec.build_spec.installed:
try:
install_args = self.request.install_args
unsigned = install_args.get("unsigned")
_process_binary_cache_tarball(self.pkg, explicit=self.explicit, unsigned=unsigned)
_print_installed_pkg(self.pkg.prefix)
return ExecuteResult.SUCCESS
except BaseException as e:
tty.error(f"Failed to rewire {self.pkg.spec} from binary. {e}")
self.status = oldstatus
return ExecuteResult.MISSING_BUILD_SPEC
spack.rewiring.rewire_node(self.pkg.spec, self.explicit)
_print_installed_pkg(self.pkg.prefix)
return ExecuteResult.SUCCESS
class PackageInstaller:
"""
Class for managing the install process for a Spack instance based on a bottom-up DAG approach.
@ -1160,11 +1320,11 @@ def __init__(
# List of build requests
self.build_requests = [BuildRequest(pkg, install_args) for pkg in packages]
- # Priority queue of build tasks
- self.build_pq: List[Tuple[Tuple[int, int], BuildTask]] = []
+ # Priority queue of tasks
+ self.build_pq: List[Tuple[Tuple[int, int], Task]] = []
- # Mapping of unique package ids to build task
- self.build_tasks: Dict[str, BuildTask] = {}
+ # Mapping of unique package ids to task
+ self.build_tasks: Dict[str, Task] = {}
# Cache of package locks for failed packages, keyed on package's ids
self.failed: Dict[str, Optional[lk.Lock]] = {}
@ -1185,6 +1345,9 @@ def __init__(
# fast then that option applies to all build requests.
self.fail_fast = False
+ # Initializing all_dependencies to empty. This will be set later in _init_queue.
+ self.all_dependencies: Dict[str, Set[str]] = {}
def __repr__(self) -> str:
"""Returns a formal representation of the package installer."""
rep = f"{self.__class__.__name__}("
@ -1204,25 +1367,18 @@ def _add_init_task(
self,
pkg: "spack.package_base.PackageBase",
request: BuildRequest,
- is_compiler: bool,
all_deps: Dict[str, Set[str]],
) -> None:
"""
- Creates and queus the initial build task for the package.
+ Creates and queues the initial task for the package.
Args:
pkg: the package to be built and installed
request: the associated install request
- is_compiler: whether task is for a bootstrap compiler
all_deps: dictionary of all dependencies and associated dependents
"""
- task = BuildTask(
- pkg,
- request=request,
- compiler=is_compiler,
- status=BuildStatus.QUEUED,
- installed=self.installed,
- )
+ cls = RewireTask if pkg.spec.spliced else BuildTask
+ task = cls(pkg, request=request, status=BuildStatus.QUEUED, installed=self.installed)
for dep_id in task.dependencies:
all_deps[dep_id].add(package_id(pkg.spec))
@ -1296,7 +1452,7 @@ def _check_deps_status(self, request: BuildRequest) -> None:
else:
lock.release_read()
- def _prepare_for_install(self, task: BuildTask) -> None:
+ def _prepare_for_install(self, task: Task) -> None:
"""
Check the database and leftover installation directories/files and
prepare for a new install attempt for an uninstalled package.
@ -1304,7 +1460,7 @@ def _prepare_for_install(self, task: BuildTask) -> None:
and ensuring the database is up-to-date.
Args:
- task (BuildTask): the build task whose associated package is
+ task: the task whose associated package is
being checked
"""
install_args = task.request.install_args
@ -1355,7 +1511,7 @@ def _prepare_for_install(self, task: BuildTask) -> None:
spack.store.STORE.db.update_explicit(task.pkg.spec, True)
def _cleanup_all_tasks(self) -> None:
- """Cleanup all build tasks to include releasing their locks."""
+ """Cleanup all tasks to include releasing their locks."""
for pkg_id in self.locks:
self._release_lock(pkg_id)
@ -1387,7 +1543,7 @@ def _cleanup_failed(self, pkg_id: str) -> None:
def _cleanup_task(self, pkg: "spack.package_base.PackageBase") -> None:
"""
- Cleanup the build task for the spec
+ Cleanup the task for the spec
Args:
pkg: the package being installed
@ -1459,7 +1615,7 @@ def _ensure_locked(
if lock_type == "read": if lock_type == "read":
# Wait until the other process finishes if there are no more # Wait until the other process finishes if there are no more
# build tasks with priority 0 (i.e., with no uninstalled # tasks with priority 0 (i.e., with no uninstalled
# dependencies). # dependencies).
no_p0 = len(self.build_tasks) == 0 or not self._next_is_pri0() no_p0 = len(self.build_tasks) == 0 or not self._next_is_pri0()
timeout = None if no_p0 else 3.0 timeout = None if no_p0 else 3.0
@ -1511,6 +1667,33 @@ def _ensure_locked(
self.locks[pkg_id] = (lock_type, lock) self.locks[pkg_id] = (lock_type, lock)
return self.locks[pkg_id] return self.locks[pkg_id]
def _requeue_with_build_spec_tasks(self, task):
"""Requeue the task and its missing build spec dependencies"""
# Full install of the build_spec is necessary because it didn't already exist somewhere
spec = task.pkg.spec
for dep in spec.build_spec.traverse():
dep_pkg = dep.package
dep_id = package_id(dep)
if dep_id not in self.build_tasks:
self._add_init_task(dep_pkg, task.request, self.all_dependencies)
# Clear any persistent failure markings _unless_ they are
# associated with another process in this parallel build
# of the spec.
spack.store.STORE.failure_tracker.clear(dep, force=False)
# Queue the build spec.
build_pkg_id = package_id(spec.build_spec)
build_spec_task = self.build_tasks[build_pkg_id]
spec_pkg_id = package_id(spec)
spec_task = task.next_attempt(self.installed)
spec_task.status = BuildStatus.QUEUED
# Convey a build spec as a dependency of a deployed spec.
build_spec_task.add_dependent(spec_pkg_id)
spec_task.add_dependency(build_pkg_id)
self._push_task(spec_task)
def _add_tasks(self, request: BuildRequest, all_deps):
"""Add tasks to the priority queue for the given build request.
@ -1540,7 +1723,7 @@ def _add_tasks(self, request: BuildRequest, all_deps):
dep_id = package_id(dep)
if dep_id not in self.build_tasks:
- self._add_init_task(dep_pkg, request, is_compiler=False, all_deps=all_deps)
+ self._add_init_task(dep_pkg, request, all_deps=all_deps)
# Clear any persistent failure markings _unless_ they are
# associated with another process in this parallel build
@ -1558,80 +1741,29 @@ def _add_tasks(self, request: BuildRequest, all_deps):
self._check_deps_status(request)
# Now add the package itself, if appropriate
- self._add_init_task(request.pkg, request, is_compiler=False, all_deps=all_deps)
+ self._add_init_task(request.pkg, request, all_deps=all_deps)
# Ensure if one request is to fail fast then all requests will.
fail_fast = bool(request.install_args.get("fail_fast"))
self.fail_fast = self.fail_fast or fail_fast
- def _install_task(self, task: BuildTask, install_status: InstallStatus) -> None:
+ def _install_task(self, task: Task, install_status: InstallStatus) -> None:
"""
Perform the installation of the requested spec and/or dependency
- represented by the build task.
+ represented by the task.
Args:
- task: the installation build task for a package
+ task: the installation task for a package
install_status: the installation status for the package"""
+ rc = task.execute(install_status)
+ if rc == ExecuteResult.MISSING_BUILD_SPEC:
+ self._requeue_with_build_spec_tasks(task)
+ else:  # if rc == ExecuteResult.SUCCESS or rc == ExecuteResult.FAILED
+ self._update_installed(task)
- explicit = task.explicit
- install_args = task.request.install_args
- cache_only = task.cache_only
- use_cache = task.use_cache
- tests = install_args.get("tests", False)
- assert isinstance(tests, (bool, list))  # make mypy happy.
- unsigned: Optional[bool] = install_args.get("unsigned")
- pkg, pkg_id = task.pkg, task.pkg_id
- tty.msg(install_msg(pkg_id, self.pid, install_status))
- task.start = task.start or time.time()
- task.status = BuildStatus.INSTALLING
- # Use the binary cache if requested
- if use_cache:
- if _install_from_cache(pkg, explicit, unsigned):
- self._update_installed(task)
- return
- elif cache_only:
- raise spack.error.InstallError(
- "No binary found when cache-only was specified", pkg=pkg
- )
- else:
- tty.msg(f"No binary for {pkg_id} found: installing from source")
- pkg.run_tests = tests if isinstance(tests, bool) else pkg.name in tests
- # hook that allows tests to inspect the Package before installation
- # see unit_test_check() docs.
- if not pkg.unit_test_check():
- return
- try:
- self._setup_install_dir(pkg)
- # Create stage object now and let it be serialized for the child process. That
- # way monkeypatch in tests works correctly.
- pkg.stage
- # Create a child process to do the actual installation.
- # Preserve verbosity settings across installs.
- spack.package_base.PackageBase._verbose = spack.build_environment.start_build_process(
- pkg, build_process, install_args
- )
- # Note: PARENT of the build process adds the new package to
- # the database, so that we don't need to re-read from file.
- spack.store.STORE.db.add(pkg.spec, explicit=explicit)
- except spack.error.StopPhase as e:
- # A StopPhase exception means that the installer was asked to stop early from clients,
- # and is not an error at this point
- pid = f"{self.pid}: " if tty.show_pid() else ""
- tty.debug(f"{pid}{str(e)}")
- tty.debug(f"Package stage directory: {pkg.stage.source_path}")
def _next_is_pri0(self) -> bool:
"""
- Determine if the next build task has priority 0
+ Determine if the next task has priority 0
Return:
True if it does, False otherwise
@ -1641,9 +1773,9 @@ def _next_is_pri0(self) -> bool:
task = self.build_pq[0][1]
return task.priority == 0
- def _pop_task(self) -> Optional[BuildTask]:
+ def _pop_task(self) -> Optional[Task]:
"""
- Remove and return the lowest priority build task.
+ Remove and return the lowest priority task.
Source: Variant of function at docs.python.org/2/library/heapq.html
"""
@ -1655,17 +1787,17 @@ def _pop_task(self) -> Optional[BuildTask]:
return task
return None
- def _push_task(self, task: BuildTask) -> None:
+ def _push_task(self, task: Task) -> None:
"""
- Push (or queue) the specified build task for the package.
+ Push (or queue) the specified task for the package.
Source: Customization of "add_task" function at
docs.python.org/2/library/heapq.html
Args:
- task: the installation build task for a package
+ task: the installation task for a package
"""
- msg = "{0} a build task for {1} with status '{2}'"
+ msg = "{0} a task for {1} with status '{2}'"
skip = "Skipping requeue of task for {0}: {1}"
# Ensure do not (re-)queue installed or failed packages whose status
@ -1678,7 +1810,7 @@ def _push_task(self, task: BuildTask) -> None:
tty.debug(skip.format(task.pkg_id, "failed"))
return
- # Remove any associated build task since its sequence will change
+ # Remove any associated task since its sequence will change
self._remove_task(task.pkg_id)
desc = (
"Queueing" if task.attempts == 1 else f"Requeueing ({ordinal(task.attempts)} attempt)"
@ -1713,9 +1845,9 @@ def _release_lock(self, pkg_id: str) -> None:
except Exception as exc:
tty.warn(err.format(exc.__class__.__name__, ltype, pkg_id, str(exc)))
- def _remove_task(self, pkg_id: str) -> Optional[BuildTask]:
+ def _remove_task(self, pkg_id: str) -> Optional[Task]:
"""
- Mark the existing package build task as being removed and return it.
+ Mark the existing package task as being removed and return it.
Raises KeyError if not found.
Source: Variant of function at docs.python.org/2/library/heapq.html
@ -1724,19 +1856,19 @@ def _remove_task(self, pkg_id: str) -> Optional[BuildTask]:
pkg_id: identifier for the package to be removed
"""
if pkg_id in self.build_tasks:
- tty.debug(f"Removing build task for {pkg_id} from list")
+ tty.debug(f"Removing task for {pkg_id} from list")
task = self.build_tasks.pop(pkg_id)
task.status = BuildStatus.REMOVED
return task
else:
return None
- def _requeue_task(self, task: BuildTask, install_status: InstallStatus) -> None:
+ def _requeue_task(self, task: Task, install_status: InstallStatus) -> None:
"""
Requeues a task that appears to be in progress by another process.
Args:
- task (BuildTask): the installation build task for a package
+ task (Task): the installation task for a package
"""
if task.status not in [BuildStatus.INSTALLED, BuildStatus.INSTALLING]:
tty.debug(
@ -1748,47 +1880,15 @@ def _requeue_task(self, task: BuildTask, install_status: InstallStatus) -> None:
new_task.status = BuildStatus.INSTALLING
self._push_task(new_task)
def _setup_install_dir(self, pkg: "spack.package_base.PackageBase") -> None:
"""
Create and ensure proper access controls for the install directory.
Write a small metadata file with the current spack environment.
Args:
pkg: the package to be built and installed
"""
if not os.path.exists(pkg.spec.prefix):
path = spack.util.path.debug_padded_filter(pkg.spec.prefix)
tty.debug(f"Creating the installation directory {path}")
spack.store.STORE.layout.create_install_directory(pkg.spec)
else:
# Set the proper group for the prefix
group = prefs.get_package_group(pkg.spec)
if group:
fs.chgrp(pkg.spec.prefix, group)
# Set the proper permissions.
# This has to be done after group because changing groups blows
# away the sticky group bit on the directory
mode = os.stat(pkg.spec.prefix).st_mode
perms = prefs.get_package_dir_permissions(pkg.spec)
if mode != perms:
os.chmod(pkg.spec.prefix, perms)
# Ensure the metadata path exists as well
fs.mkdirp(spack.store.STORE.layout.metadata_path(pkg.spec), mode=perms)
# Always write host environment - we assume this can change
spack.store.STORE.layout.write_host_environment(pkg.spec)
def _update_failed(
- self, task: BuildTask, mark: bool = False, exc: Optional[BaseException] = None
+ self, task: Task, mark: bool = False, exc: Optional[BaseException] = None
) -> None:
"""
Update the task and transitive dependents as failed; optionally mark
- externally as failed; and remove associated build tasks.
+ externally as failed; and remove associated tasks.
Args:
- task: the build task for the failed package
+ task: the task for the failed package
mark: ``True`` if the package and its dependencies are to
be marked as "failed", otherwise, ``False``
exc: optional exception if associated with the failure
@ -1806,19 +1906,19 @@ def _update_failed(
if dep_id in self.build_tasks:
tty.warn(f"Skipping build of {dep_id} since {pkg_id} failed")
# Ensure the dependent's uninstalled dependents are
- # up-to-date and their build tasks removed.
+ # up-to-date and their tasks removed.
dep_task = self.build_tasks[dep_id]
self._update_failed(dep_task, mark)
self._remove_task(dep_id)
else:
- tty.debug(f"No build task for {dep_id} to skip since {pkg_id} failed")
+ tty.debug(f"No task for {dep_id} to skip since {pkg_id} failed")
- def _update_installed(self, task: BuildTask) -> None:
+ def _update_installed(self, task: Task) -> None:
"""
- Mark the task as installed and ensure dependent build tasks are aware.
+ Mark the task as installed and ensure dependent tasks are aware.
Args:
- task (BuildTask): the build task for the installed package
+ task: the task for the installed package
"""
task.status = BuildStatus.INSTALLED
self._flag_installed(task.pkg, task.dependents)
@ -1827,7 +1927,7 @@ def _flag_installed(
self, pkg: "spack.package_base.PackageBase", dependent_ids: Optional[Set[str]] = None self, pkg: "spack.package_base.PackageBase", dependent_ids: Optional[Set[str]] = None
) -> None: ) -> None:
""" """
Flag the package as installed and ensure known by all build tasks of Flag the package as installed and ensure known by all tasks of
known dependents. known dependents.
Args: Args:
@ -1855,7 +1955,7 @@ def _flag_installed(
dep_task = self.build_tasks[dep_id] dep_task = self.build_tasks[dep_id]
self._push_task(dep_task.next_attempt(self.installed)) self._push_task(dep_task.next_attempt(self.installed))
else: else:
tty.debug(f"{dep_id} has no build task to update for {pkg_id}'s success") tty.debug(f"{dep_id} has no task to update for {pkg_id}'s success")
def _init_queue(self) -> None:
"""Initialize the build queue from the list of build requests."""
@ -1874,8 +1974,9 @@ def _init_queue(self) -> None:
task = self.build_tasks[dep_id]
for dependent_id in dependents.difference(task.dependents):
task.add_dependent(dependent_id)
+ self.all_dependencies = all_dependencies
- def _install_action(self, task: BuildTask) -> int:
+ def _install_action(self, task: Task) -> InstallAction:
"""
Determine whether the installation should be overwritten (if it already
exists) or skipped (if has been handled by another process).
@ -2023,7 +2124,6 @@ def install(self) -> None:
self._update_installed(task)
path = spack.util.path.debug_padded_filter(pkg.prefix)
_print_installed_pkg(path)
-
else:
# At this point we've failed to get a write or a read
# lock, which means another process has taken a write
@ -2063,8 +2163,6 @@ def install(self) -> None:
# wrapper -- silence mypy
OverwriteInstall(self, spack.store.STORE.db, task, install_status).install()  # type: ignore[arg-type] # noqa: E501
- self._update_installed(task)
# If we installed then we should keep the prefix
stop_before_phase = getattr(pkg, "stop_before_phase", None)
last_phase = getattr(pkg, "last_phase", None)
@ -2108,7 +2206,9 @@ def install(self) -> None:
)
# Terminate if requested to do so on the first failure.
if self.fail_fast:
- raise spack.error.InstallError(f"{fail_fast_err}: {str(exc)}", pkg=pkg)
+ raise spack.error.InstallError(
+ f"{fail_fast_err}: {str(exc)}", pkg=pkg
+ ) from exc
# Terminate when a single build request has failed, or summarize errors later.
if task.is_build_request:
@ -2124,6 +2224,7 @@ def install(self) -> None:
# Perform basic task cleanup for the installed spec to
# include downgrading the write to a read lock
+ if pkg.spec.installed:
self._cleanup_task(pkg)
# Cleanup, which includes releasing all of the read locks
@ -2423,7 +2524,7 @@ def __init__(
self,
installer: PackageInstaller,
database: spack.database.Database,
- task: BuildTask,
+ task: Task,
install_status: InstallStatus,
):
self.installer = installer


@ -12,6 +12,7 @@
from llnl.util.symlink import readlink, symlink
import spack.binary_distribution as bindist
+ import spack.deptypes as dt
import spack.error
import spack.hooks
import spack.platforms
@ -52,6 +53,7 @@ def rewire_node(spec, explicit):
its subgraph. Binaries, text, and links are all changed in accordance with
the splice. The resulting package is then 'installed.'"""
tempdir = tempfile.mkdtemp()
# copy anything installed to a temporary directory
shutil.copytree(spec.build_spec.prefix, os.path.join(tempdir, spec.dag_hash()))
@ -59,8 +61,21 @@ def rewire_node(spec, explicit):
# compute prefix-to-prefix for every node from the build spec to the spliced
# spec
prefix_to_prefix = OrderedDict({spec.build_spec.prefix: spec.prefix})
- for build_dep in spec.build_spec.traverse(root=False):
- prefix_to_prefix[build_dep.prefix] = spec[build_dep.name].prefix
+ build_spec_ids = set(id(s) for s in spec.build_spec.traverse(deptype=dt.ALL & ~dt.BUILD))
+ for s in bindist.deps_to_relocate(spec):
+ analog = s
+ if id(s) not in build_spec_ids:
+ analogs = [
+ d
+ for d in spec.build_spec.traverse(deptype=dt.ALL & ~dt.BUILD)
+ if s._splice_match(d, self_root=spec, other_root=spec.build_spec)
+ ]
+ if analogs:
+ # Prefer same-name analogs and prefer higher versions
+ # This matches the preferences in Spec.splice, so we will find same node
+ analog = max(analogs, key=lambda a: (a.name == s.name, a.version))
+ prefix_to_prefix[analog.prefix] = s.prefix
manifest = bindist.get_buildfile_manifest(spec.build_spec)
platform = spack.platforms.by_name(spec.platform)


@ -55,6 +55,26 @@
"unify": { "unify": {
"oneOf": [{"type": "boolean"}, {"type": "string", "enum": ["when_possible"]}] "oneOf": [{"type": "boolean"}, {"type": "string", "enum": ["when_possible"]}]
}, },
"splice": {
"type": "object",
"additionalProperties": False,
"properties": {
"explicit": {
"type": "array",
"default": [],
"items": {
"type": "object",
"required": ["target", "replacement"],
"additionalProperties": False,
"properties": {
"target": {"type": "string"},
"replacement": {"type": "string"},
"transitive": {"type": "boolean", "default": False},
},
},
}
},
},
"duplicates": { "duplicates": {
"type": "object", "type": "object",
"properties": { "properties": {


@@ -523,7 +523,12 @@ def _compute_specs_from_answer_set(self):
node = SpecBuilder.make_node(pkg=providers[0])
candidate = answer.get(node)
-if candidate and candidate.satisfies(input_spec):
+if candidate and candidate.build_spec.satisfies(input_spec):
+if not candidate.satisfies(input_spec):
+tty.warn(
+"explicit splice configuration has caused the concretized spec"
+f" {candidate} not to satisfy the input spec {input_spec}"
+)
self._concrete_specs.append(answer[node])
self._concrete_specs_by_input[input_spec] = answer[node]
else:
@@ -3814,7 +3819,33 @@ def build_specs(self, function_tuples):
spack.version.git_ref_lookup.GitRefLookup(spec.fullname)
)
-return self._specs
+specs = self.execute_explicit_splices()
+return specs
def execute_explicit_splices(self):
splice_config = spack.config.CONFIG.get("concretizer:splice:explicit", [])
splice_triples = []
for splice_set in splice_config:
target = splice_set["target"]
replacement = spack.spec.Spec(splice_set["replacement"])
assert replacement.abstract_hash
replacement.replace_hash()
transitive = splice_set.get("transitive", False)
splice_triples.append((target, replacement, transitive))
specs = {}
for key, spec in self._specs.items():
current_spec = spec
for target, replacement, transitive in splice_triples:
if target in current_spec:
# matches root or non-root
# e.g. mvapich2%gcc
current_spec = current_spec.splice(replacement, transitive)
new_key = NodeArgument(id=key.id, pkg=current_spec.name)
specs[new_key] = current_spec
return specs
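A condensed sketch of the behavior this method enables, mirroring test_explicit_splices later in this diff (the abstract hash is hypothetical and must resolve to an installed mpich):

import spack.config
import spack.spec

splice_info = {"target": "mpi", "replacement": "mpich/abc123", "transitive": False}
spack.config.CONFIG.set("concretizer", {"splice": {"explicit": [splice_info]}})

spec = spack.spec.Spec("hdf5 ^zmpi").concretized()
# The spliced result uses the replacement, while build_spec preserves zmpi.
assert spec.satisfies("^mpich")
assert spec.build_spec.dependencies(name="zmpi", deptype="link")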
def _develop_specs_from_env(spec, env):


@@ -4183,7 +4183,7 @@ def _virtuals_provided(self, root):
"""Return set of virtuals provided by self in the context of root"""
if root is self:
# Could be using any virtual the package can provide
-return set(self.package.virtuals_provided)
+return set(v.name for v in self.package.virtuals_provided)
hashes = [s.dag_hash() for s in root.traverse()]
in_edges = set(
@@ -4206,7 +4206,7 @@ def _splice_match(self, other, self_root, other_root):
return True
return bool(
-self._virtuals_provided(self_root)
+bool(self._virtuals_provided(self_root))
and self._virtuals_provided(self_root) <= other._virtuals_provided(other_root)
)
@@ -4226,29 +4226,24 @@ def _splice_detach_and_add_dependents(self, replacement, context):
# Only set it if it hasn't been spliced before
ancestor._build_spec = ancestor._build_spec or ancestor.copy()
ancestor.clear_cached_hashes(ignore=(ht.package_hash.attr,))
+for edge in ancestor.edges_to_dependencies(depflag=dt.BUILD):
+if edge.depflag & ~dt.BUILD:
+edge.depflag &= ~dt.BUILD
+else:
+ancestor._dependencies[edge.spec.name].remove(edge)
+edge.spec._dependents[ancestor.name].remove(edge)
# For each direct dependent in the link/run graph, replace the dependency on
# node with one on replacement
-# For each build dependent, restrict the edge to build-only
for edge in self.edges_from_dependents():
if edge.parent not in ancestors_in_context:
continue
-build_dep = edge.depflag & dt.BUILD
-other_dep = edge.depflag & ~dt.BUILD
-if build_dep:
-parent_edge = [e for e in edge.parent._dependencies[self.name] if e.spec is self]
-assert len(parent_edge) == 1
-edge.depflag = dt.BUILD
-parent_edge[0].depflag = dt.BUILD
-else:
edge.parent._dependencies.edges[self.name].remove(edge)
self._dependents.edges[edge.parent.name].remove(edge)
+edge.parent._add_dependency(replacement, depflag=edge.depflag, virtuals=edge.virtuals)
-if other_dep:
-edge.parent._add_dependency(replacement, depflag=other_dep, virtuals=edge.virtuals)
-def _splice_helper(self, replacement, self_root, other_root):
+def _splice_helper(self, replacement):
"""Main loop of a transitive splice. """Main loop of a transitive splice.
The while loop around a traversal of self ensures that changes to self from previous The while loop around a traversal of self ensures that changes to self from previous
@ -4276,8 +4271,7 @@ def _splice_helper(self, replacement, self_root, other_root):
replacements_by_name[node.name].append(node) replacements_by_name[node.name].append(node)
virtuals = node._virtuals_provided(root=replacement) virtuals = node._virtuals_provided(root=replacement)
for virtual in virtuals: for virtual in virtuals:
# Virtual may be spec or str, get name or return str replacements_by_name[virtual].append(node)
replacements_by_name[getattr(virtual, "name", virtual)].append(node)
changed = True changed = True
while changed: while changed:
@ -4298,8 +4292,8 @@ def _splice_helper(self, replacement, self_root, other_root):
for virtual in node._virtuals_provided(root=self): for virtual in node._virtuals_provided(root=self):
analogs += [ analogs += [
r r
for r in replacements_by_name[getattr(virtual, "name", virtual)] for r in replacements_by_name[virtual]
if r._splice_match(node, self_root=self_root, other_root=other_root) if node._splice_match(r, self_root=self, other_root=replacement)
] ]
# No match, keep iterating over self # No match, keep iterating over self
@ -4313,34 +4307,56 @@ def _splice_helper(self, replacement, self_root, other_root):
# No splice needed here, keep checking # No splice needed here, keep checking
if analog == node: if analog == node:
continue continue
node._splice_detach_and_add_dependents(analog, context=self) node._splice_detach_and_add_dependents(analog, context=self)
changed = True changed = True
break break
-def splice(self, other, transitive):
-"""Splices dependency "other" into this ("target") Spec, and return the
-result as a concrete Spec.
-If transitive, then other and its dependencies will be extrapolated to
-a list of Specs and spliced in accordingly.
-For example, let there exist a dependency graph as follows:
-T
-| \
-Z<-H
-In this example, Spec T depends on H and Z, and H also depends on Z.
-Suppose, however, that we wish to use a different H, known as H'. This
-function will splice in the new H' in one of two ways:
-1. transitively, where H' depends on the Z' it was built with, and the
-new T* also directly depends on this new Z', or
-2. intransitively, where the new T* and H' both depend on the original
-Z.
-Since the Spec returned by this splicing function is no longer deployed
-the same way it was built, any such changes are tracked by setting the
-build_spec to point to the corresponding dependency from the original
-Spec.
-"""
+def splice(self, other: "Spec", transitive: bool = True) -> "Spec":
+"""Returns a new, spliced concrete Spec with the "other" dependency and,
+optionally, its dependencies.
+Args:
+other: alternate dependency
+transitive: include other's dependencies
+Returns: a concrete, spliced version of the current Spec
+When transitive is "True", use the dependencies from "other" to reconcile
+conflicting dependencies. When transitive is "False", use dependencies from self.
+For example, suppose we have the following dependency graph:
+T
+| \
+Z<-H
+Spec T depends on H and Z, and H also depends on Z. Now we want to use
+a different H, called H'. This function can be used to splice in H' to
+create a new spec, called T*. If H' was built with Z', then transitive
+"True" will ensure H' and T* both depend on Z':
+T*
+| \
+Z'<-H'
+If transitive is "False", then H' and T* will both depend on
+the original Z, resulting in a new H'*
+T*
+| \
+Z<-H'*
+Provenance of the build is tracked through the "build_spec" property
+of the spliced spec and any correspondingly modified dependency specs.
+The build specs are set to that of the original spec, so the original
+spec's provenance is preserved unchanged."""
assert self.concrete
assert other.concrete
+if self._splice_match(other, self_root=self, other_root=other):
+return other.copy()
if not any(
node._splice_match(other, self_root=self, other_root=other)
for node in self.traverse(root=False, deptype=dt.LINK | dt.RUN)
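A usage sketch for the signature above, assuming a Spack session with the splice-t and splice-h packages from the mock repository used by the tests later in this diff:

import spack.spec

t = spack.spec.Spec("splice-t").concretized()
h_prime = spack.spec.Spec("splice-h+foo").concretized()

spliced = t.splice(h_prime, transitive=True)
assert spliced.spliced  # the result is marked as spliced
assert spliced.build_spec == t  # provenance points at the original spec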
@@ -4379,12 +4395,12 @@ def mask_build_deps(in_spec):
# Transitively splice any relevant nodes from new into base
# This handles all shared dependencies between self and other
-spec._splice_helper(replacement, self_root=self, other_root=other)
+spec._splice_helper(replacement)
else:
# Do the same thing as the transitive splice, but reversed
node_pairs = make_node_pairs(other, replacement)
mask_build_deps(replacement)
-replacement._splice_helper(spec, self_root=other, other_root=self)
+replacement._splice_helper(spec)
# Intransitively splice replacement into spec
# This is very simple now that all shared dependencies have been handled
@@ -4392,13 +4408,14 @@ def mask_build_deps(in_spec):
if node._splice_match(other, self_root=spec, other_root=other):
node._splice_detach_and_add_dependents(replacement, context=spec)
-# Set up build dependencies for modified nodes
-# Also modify build_spec because the existing ones had build deps removed
+# For nodes that were spliced, modify the build spec to ensure build deps are preserved
+# For nodes that were not spliced, replace the build deps on the spec itself
for orig, copy in node_pairs:
-for edge in orig.edges_to_dependencies(depflag=dt.BUILD):
-copy._add_dependency(edge.spec, depflag=dt.BUILD, virtuals=edge.virtuals)
if copy._build_spec:
copy._build_spec = orig.build_spec.copy()
+else:
+for edge in orig.edges_to_dependencies(depflag=dt.BUILD):
+copy._add_dependency(edge.spec, depflag=dt.BUILD, virtuals=edge.virtuals)
return spec
@@ -4797,7 +4814,7 @@ def _load(cls, data):
virtuals=virtuals,
)
if "build_spec" in node.keys():
-_, bhash, _ = cls.build_spec_from_node_dict(node, hash_type=hash_type)
+_, bhash, _ = cls.extract_build_spec_info_from_node_dict(node, hash_type=hash_type)
node_spec._build_spec = hash_dict[bhash]["node_spec"]
return hash_dict[root_spec_hash]["node_spec"]
@@ -4925,7 +4942,7 @@ def extract_info_from_dep(cls, elt, hash):
return dep_hash, deptypes, hash_type, virtuals
@classmethod
-def build_spec_from_node_dict(cls, node, hash_type=ht.dag_hash.name):
+def extract_build_spec_info_from_node_dict(cls, node, hash_type=ht.dag_hash.name):
build_spec_dict = node["build_spec"]
return build_spec_dict["name"], build_spec_dict[hash_type], hash_type


@@ -68,22 +68,6 @@ def cache_directory(tmpdir):
spack.config.caches = old_cache_path
-@pytest.fixture(scope="module")
-def mirror_dir(tmpdir_factory):
-dir = tmpdir_factory.mktemp("mirror")
-dir.ensure("build_cache", dir=True)
-yield str(dir)
-dir.join("build_cache").remove()
-@pytest.fixture(scope="function")
-def test_mirror(mirror_dir):
-mirror_url = url_util.path_to_file_url(mirror_dir)
-mirror_cmd("add", "--scope", "site", "test-mirror-func", mirror_url)
-yield mirror_dir
-mirror_cmd("rm", "--scope=site", "test-mirror-func")
@pytest.fixture(scope="module")
def config_directory(tmp_path_factory):
# Copy defaults to a temporary "site" scope
@@ -222,9 +206,9 @@ def dummy_prefix(tmpdir):
@pytest.mark.requires_executables(*args)
@pytest.mark.maybeslow
@pytest.mark.usefixtures(
-"default_config", "cache_directory", "install_dir_default_layout", "test_mirror"
+"default_config", "cache_directory", "install_dir_default_layout", "temporary_mirror"
)
-def test_default_rpaths_create_install_default_layout(mirror_dir):
+def test_default_rpaths_create_install_default_layout(temporary_mirror_dir):
"""
Test the creation and installation of buildcaches with default rpaths
into the default directory layout scheme.
@@ -237,13 +221,12 @@ def test_default_rpaths_create_install_default_layout(mirror_dir):
install_cmd("--no-cache", sy_spec.name)
# Create a buildcache
-buildcache_cmd("push", "-u", mirror_dir, cspec.name, sy_spec.name)
+buildcache_cmd("push", "-u", temporary_mirror_dir, cspec.name, sy_spec.name)
# Test force overwrite create buildcache (-f option)
-buildcache_cmd("push", "-uf", mirror_dir, cspec.name)
+buildcache_cmd("push", "-uf", temporary_mirror_dir, cspec.name)
# Create mirror index
-buildcache_cmd("update-index", mirror_dir)
+buildcache_cmd("update-index", temporary_mirror_dir)
# List the buildcaches in the mirror
buildcache_cmd("list", "-alv")
@@ -271,9 +254,9 @@ def test_default_rpaths_create_install_default_layout(mirror_dir):
@pytest.mark.maybeslow
@pytest.mark.nomockstage
@pytest.mark.usefixtures(
-"default_config", "cache_directory", "install_dir_non_default_layout", "test_mirror"
+"default_config", "cache_directory", "install_dir_non_default_layout", "temporary_mirror"
)
-def test_default_rpaths_install_nondefault_layout(mirror_dir):
+def test_default_rpaths_install_nondefault_layout(temporary_mirror_dir):
"""
Test the creation and installation of buildcaches with default rpaths
into the non-default directory layout scheme.
@@ -294,9 +277,9 @@ def test_default_rpaths_install_nondefault_layout(mirror_dir):
@pytest.mark.maybeslow
@pytest.mark.nomockstage
@pytest.mark.usefixtures(
-"default_config", "cache_directory", "install_dir_default_layout", "test_mirror"
+"default_config", "cache_directory", "install_dir_default_layout", "temporary_mirror"
)
-def test_relative_rpaths_install_default_layout(mirror_dir):
+def test_relative_rpaths_install_default_layout(temporary_mirror_dir):
"""
Test the creation and installation of buildcaches with relative
rpaths into the default directory layout scheme.
@@ -323,9 +306,9 @@ def test_relative_rpaths_install_default_layout(mirror_dir):
@pytest.mark.maybeslow
@pytest.mark.nomockstage
@pytest.mark.usefixtures(
-"default_config", "cache_directory", "install_dir_non_default_layout", "test_mirror"
+"default_config", "cache_directory", "install_dir_non_default_layout", "temporary_mirror"
)
-def test_relative_rpaths_install_nondefault(mirror_dir):
+def test_relative_rpaths_install_nondefault(temporary_mirror_dir):
"""
Test the installation of buildcaches with relativized rpaths
into the non-default directory layout scheme.
@@ -374,9 +357,9 @@ def test_push_and_fetch_keys(mock_gnupghome, tmp_path):
@pytest.mark.maybeslow
@pytest.mark.nomockstage
@pytest.mark.usefixtures(
-"default_config", "cache_directory", "install_dir_non_default_layout", "test_mirror"
+"default_config", "cache_directory", "install_dir_non_default_layout", "temporary_mirror"
)
-def test_built_spec_cache(mirror_dir):
+def test_built_spec_cache(temporary_mirror_dir):
"""Because the buildcache list command fetches the buildcache index
and uses it to populate the binary_distribution built spec cache, when
this test calls get_mirrors_for_spec, it is testing the population of
@@ -397,7 +380,7 @@ def fake_dag_hash(spec, length=None):
return "tal4c7h4z0gqmixb1eqa92mjoybxn5l6"[:length]
-@pytest.mark.usefixtures("install_mockery", "mock_packages", "mock_fetch", "test_mirror")
+@pytest.mark.usefixtures("install_mockery", "mock_packages", "mock_fetch", "temporary_mirror")
def test_spec_needs_rebuild(monkeypatch, tmpdir):
"""Make sure needs_rebuild properly compares remote hash
against locally computed one, avoiding unnecessary rebuilds"""
@@ -518,7 +501,7 @@ def mock_list_url(url, recursive=False):
@pytest.mark.usefixtures("mock_fetch", "install_mockery")
-def test_update_sbang(tmpdir, test_mirror):
+def test_update_sbang(tmpdir, temporary_mirror):
"""Test the creation and installation of buildcaches with default rpaths
into the non-default directory layout scheme, triggering an update of the
sbang.
@@ -529,7 +512,7 @@ def test_update_sbang(tmpdir, test_mirror):
old_spec_hash_str = "/{0}".format(old_spec.dag_hash())
# Need a fake mirror with *function* scope.
-mirror_dir = test_mirror
+mirror_dir = temporary_mirror
# Assume all commands will concretize old_spec the same way.
install_cmd("--no-cache", old_spec.name)


@@ -27,17 +27,19 @@ def test_build_task_errors(install_mockery):
# Using a concretized package now means the request argument is checked.
spec.concretize()
assert spec.concrete
with pytest.raises(TypeError, match="is not a valid build request"):
inst.BuildTask(spec.package, None)
# Using a valid package and spec, the next check is the status argument.
request = inst.BuildRequest(spec.package, {})
with pytest.raises(TypeError, match="is not a valid build status"):
inst.BuildTask(spec.package, request, status="queued")
# Now we can check that build tasks cannot be created when the status
# indicates the task is/should've been removed.
-with pytest.raises(spack.error.InstallError, match="Cannot create a build task"):
+with pytest.raises(spack.error.InstallError, match="Cannot create a task"):
inst.BuildTask(spec.package, request, status=inst.BuildStatus.REMOVED)
# Also make sure to not accept an incompatible installed argument value.
# Also make sure to not accept an incompatible installed argument value. # Also make sure to not accept an incompatible installed argument value.


@@ -38,6 +38,7 @@
import spack.util.spack_json as sjson
import spack.util.spack_yaml
from spack.cmd.env import _env_create
+from spack.installer import PackageInstaller
from spack.main import SpackCommand, SpackCommandError
from spack.spec import Spec
from spack.stage import stage_prefix
@@ -803,6 +804,38 @@ def test_user_removed_spec(environment_from_manifest):
assert not any(x.name == "hypre" for x in env_specs)
def test_lockfile_spliced_specs(environment_from_manifest, install_mockery):
"""Test that an environment can round-trip a spliced spec."""
# Create a local install for zmpi to splice in
# Default concretization is not using zmpi
zmpi = spack.spec.Spec("zmpi").concretized()
PackageInstaller([zmpi.package], fake=True).install()
e1 = environment_from_manifest(
f"""
spack:
specs:
- mpileaks
concretizer:
splice:
explicit:
- target: mpi
replacement: zmpi/{zmpi.dag_hash()}
"""
)
with e1:
e1.concretize()
e1.write()
# By reading into a second environment, we force a round trip to json
e2 = _env_create("test2", init_file=e1.lock_path)
# The one spec is mpileaks
for _, spec in e2.concretized_specs():
assert spec.spliced
assert spec["mpi"].satisfies(zmpi)
def test_init_from_lockfile(environment_from_manifest):
"""Test that an environment can be instantiated from a lockfile."""
e1 = environment_from_manifest(


@@ -2281,6 +2281,30 @@ def test_virtuals_are_annotated_on_edges(self, spec_str):
edges = spec.edges_to_dependencies(name="callpath")
assert len(edges) == 1 and edges[0].virtuals == ()
@pytest.mark.parametrize("transitive", [True, False])
def test_explicit_splices(
self, mutable_config, database_mutable_config, mock_packages, transitive, capfd
):
mpich_spec = database_mutable_config.query("mpich")[0]
splice_info = {
"target": "mpi",
"replacement": f"/{mpich_spec.dag_hash()}",
"transitive": transitive,
}
spack.config.CONFIG.set("concretizer", {"splice": {"explicit": [splice_info]}})
spec = spack.spec.Spec("hdf5 ^zmpi").concretized()
assert spec.satisfies(f"^mpich/{mpich_spec.dag_hash()}")
assert spec.build_spec.dependencies(name="zmpi", deptype="link")
assert not spec.build_spec.satisfies(f"^mpich/{mpich_spec.dag_hash()}")
assert not spec.dependencies(name="zmpi", deptype="link")
captured = capfd.readouterr()
assert "Warning: explicit splice configuration has caused" in captured.err
assert "hdf5 ^zmpi" in captured.err
assert str(spec) in captured.err
@pytest.mark.db
@pytest.mark.parametrize(
"spec_str,mpi_name",


@@ -62,8 +62,11 @@
import spack.version
from spack.fetch_strategy import URLFetchStrategy
from spack.installer import PackageInstaller
+from spack.main import SpackCommand
from spack.util.pattern import Bunch
+mirror_cmd = SpackCommand("mirror")
@pytest.fixture(autouse=True)
def check_config_fixture(request):
@@ -989,6 +992,38 @@ def install_mockery(temporary_store: spack.store.Store, mutable_config, mock_pac
temporary_store.failure_tracker.clear_all()
@pytest.fixture(scope="module")
def temporary_mirror_dir(tmpdir_factory):
dir = tmpdir_factory.mktemp("mirror")
dir.ensure("build_cache", dir=True)
yield str(dir)
dir.join("build_cache").remove()
@pytest.fixture(scope="function")
def temporary_mirror(temporary_mirror_dir):
mirror_url = url_util.path_to_file_url(temporary_mirror_dir)
mirror_cmd("add", "--scope", "site", "test-mirror-func", mirror_url)
yield temporary_mirror_dir
mirror_cmd("rm", "--scope=site", "test-mirror-func")
@pytest.fixture(scope="function")
def mutable_temporary_mirror_dir(tmpdir_factory):
dir = tmpdir_factory.mktemp("mirror")
dir.ensure("build_cache", dir=True)
yield str(dir)
dir.join("build_cache").remove()
@pytest.fixture(scope="function")
def mutable_temporary_mirror(mutable_temporary_mirror_dir):
mirror_url = url_util.path_to_file_url(mutable_temporary_mirror_dir)
mirror_cmd("add", "--scope", "site", "test-mirror-func", mirror_url)
yield mutable_temporary_mirror_dir
mirror_cmd("rm", "--scope=site", "test-mirror-func")
@pytest.fixture(scope="function") @pytest.fixture(scope="function")
def temporary_store(tmpdir, request): def temporary_store(tmpdir, request):
"""Hooks a temporary empty store for the test function.""" """Hooks a temporary empty store for the test function."""


@@ -29,6 +29,7 @@
import spack.store
import spack.util.lock as lk
from spack.installer import PackageInstaller
+from spack.main import SpackCommand
def _mock_repo(root, namespace):
@@ -640,6 +641,88 @@ def test_prepare_for_install_on_installed(install_mockery, monkeypatch):
installer._prepare_for_install(task)
def test_installer_init_requests(install_mockery):
"""Test of installer initial requests."""
spec_name = "dependent-install"
with spack.config.override("config:install_missing_compilers", True):
installer = create_installer([spec_name], {})
# There is only one explicit request in this case
assert len(installer.build_requests) == 1
request = installer.build_requests[0]
assert request.pkg.name == spec_name
@pytest.mark.parametrize("transitive", [True, False])
def test_install_spliced(install_mockery, mock_fetch, monkeypatch, capsys, transitive):
"""Test installing a spliced spec"""
spec = spack.spec.Spec("splice-t").concretized()
dep = spack.spec.Spec("splice-h+foo").concretized()
# Do the splice.
out = spec.splice(dep, transitive)
installer = create_installer([out], {"verbose": True, "fail_fast": True})
installer.install()
for node in out.traverse():
assert node.installed
assert node.build_spec.installed
@pytest.mark.parametrize("transitive", [True, False])
def test_install_spliced_build_spec_installed(install_mockery, capfd, mock_fetch, transitive):
"""Test installing a spliced spec with the build spec already installed"""
spec = spack.spec.Spec("splice-t").concretized()
dep = spack.spec.Spec("splice-h+foo").concretized()
# Do the splice.
out = spec.splice(dep, transitive)
PackageInstaller([out.build_spec.package]).install()
installer = create_installer([out], {"verbose": True, "fail_fast": True})
installer._init_queue()
for _, task in installer.build_pq:
assert isinstance(task, inst.RewireTask if task.pkg.spec.spliced else inst.BuildTask)
installer.install()
for node in out.traverse():
assert node.installed
assert node.build_spec.installed
@pytest.mark.not_on_windows("lacking windows support for binary installs")
@pytest.mark.parametrize("transitive", [True, False])
@pytest.mark.parametrize(
"root_str", ["splice-t^splice-h~foo", "splice-h~foo", "splice-vt^splice-a"]
)
def test_install_splice_root_from_binary(
install_mockery, mock_fetch, mutable_temporary_mirror, transitive, root_str
):
"""Test installing a spliced spec with the root available in binary cache"""
# Test splicing and rewiring a spec with the same name, different hash.
original_spec = spack.spec.Spec(root_str).concretized()
spec_to_splice = spack.spec.Spec("splice-h+foo").concretized()
PackageInstaller([original_spec.package, spec_to_splice.package]).install()
out = original_spec.splice(spec_to_splice, transitive)
buildcache = SpackCommand("buildcache")
buildcache(
"push",
"--unsigned",
"--update-index",
mutable_temporary_mirror,
str(original_spec),
str(spec_to_splice),
)
uninstall = SpackCommand("uninstall")
uninstall("-ay")
PackageInstaller([out.package], unsigned=True).install()
assert len(spack.store.STORE.db.query()) == len(list(out.traverse()))
def test_install_task_use_cache(install_mockery, monkeypatch):
installer = create_installer(["trivial-install-test-package"], {})
request = installer.build_requests[0]
@@ -650,6 +733,33 @@ def test_install_task_use_cache(install_mockery, monkeypatch):
assert request.pkg_id in installer.installed
def test_install_task_requeue_build_specs(install_mockery, monkeypatch, capfd):
"""Check that a missing build_spec spec is added by _install_task."""
# This test also ensures coverage of most of the new
# _requeue_with_build_spec_tasks method.
def _missing(*args, **kwargs):
return inst.ExecuteResult.MISSING_BUILD_SPEC
# Set the configuration to ensure _requeue_with_build_spec_tasks actually
# does something.
with spack.config.override("config:install_missing_compilers", True):
installer = create_installer(["depb"], {})
installer._init_queue()
request = installer.build_requests[0]
task = create_build_task(request.pkg)
# Drop one of the specs so its task is missing before _install_task
popped_task = installer._pop_task()
assert inst.package_id(popped_task.pkg.spec) not in installer.build_tasks
monkeypatch.setattr(task, "execute", _missing)
installer._install_task(task, None)
# Ensure the dropped task/spec was added back by _install_task
assert inst.package_id(popped_task.pkg.spec) in installer.build_tasks
def test_release_lock_write_n_exception(install_mockery, tmpdir, capsys):
"""Test _release_lock for supposed write lock with exception."""
installer = create_installer(["trivial-install-test-package"], {})
@@ -745,8 +855,10 @@ def _chgrp(path, group, follow_symlinks=True):
monkeypatch.setattr(prefs, "get_package_group", _get_group)
monkeypatch.setattr(fs, "chgrp", _chgrp)
-installer = create_installer(["trivial-install-test-package"], {})
-spec = installer.build_requests[0].pkg.spec
+build_task = create_build_task(
+spack.spec.Spec("trivial-install-test-package").concretized().package
+)
+spec = build_task.request.pkg.spec
fs.touchp(spec.prefix)
metadatadir = spack.store.STORE.layout.metadata_path(spec)
@@ -756,7 +868,7 @@ def _chgrp(path, group, follow_symlinks=True):
metadatadir = None
# Should fail with a "not a directory" error
with pytest.raises(OSError, match=metadatadir):
-installer._setup_install_dir(spec.package)
+build_task._setup_install_dir(spec.package)
out = str(capfd.readouterr()[0])
@@ -843,79 +955,74 @@ def test_install_failed_not_fast(install_mockery, monkeypatch, capsys):
assert "Skipping build of pkg-a" in out
+def _interrupt(installer, task, install_status, **kwargs):
+if task.pkg.name == "pkg-a":
+raise KeyboardInterrupt("mock keyboard interrupt for pkg-a")
+else:
+return installer._real_install_task(task, None)
+# installer.installed.add(task.pkg.name)
-def test_install_fail_on_interrupt(install_mockery, monkeypatch):
+def test_install_fail_on_interrupt(install_mockery, mock_fetch, monkeypatch):
"""Test ctrl-c interrupted install."""
spec_name = "pkg-a"
err_msg = "mock keyboard interrupt for {0}".format(spec_name)
-def _interrupt(installer, task, install_status, **kwargs):
-if task.pkg.name == spec_name:
-raise KeyboardInterrupt(err_msg)
-else:
-installer.installed.add(task.pkg.name)
installer = create_installer([spec_name], {})
+setattr(inst.PackageInstaller, "_real_install_task", inst.PackageInstaller._install_task)
# Raise a KeyboardInterrupt error to trigger early termination
monkeypatch.setattr(inst.PackageInstaller, "_install_task", _interrupt)
with pytest.raises(KeyboardInterrupt, match=err_msg):
installer.install()
-assert "pkg-b" in installer.installed # ensure dependency of pkg-a is 'installed'
-assert spec_name not in installer.installed
+assert not any(i.startswith("pkg-a-") for i in installer.installed)
+assert any(
+i.startswith("pkg-b-") for i in installer.installed
+) # ensure dependency of a is 'installed'
-def test_install_fail_single(install_mockery, monkeypatch):
-"""Test expected results for failure of single package."""
-spec_name = "pkg-a"
-err_msg = "mock internal package build error for {0}".format(spec_name)
-class MyBuildException(Exception):
+class MyBuildException(Exception):
pass
-def _install(installer, task, install_status, **kwargs):
-if task.pkg.name == spec_name:
-raise MyBuildException(err_msg)
-else:
-installer.installed.add(task.pkg.name)
-installer = create_installer([spec_name], {})
+def _install_fail_my_build_exception(installer, task, install_status, **kwargs):
+print(task, task.pkg.name)
+if task.pkg.name == "pkg-a":
+raise MyBuildException("mock internal package build error for pkg-a")
+else:
+# No need for more complex logic here because no splices
+task.execute(install_status)
+installer._update_installed(task)
+def test_install_fail_single(install_mockery, mock_fetch, monkeypatch):
+"""Test expected results for failure of single package."""
+installer = create_installer(["pkg-a"], {})
# Raise a KeyboardInterrupt error to trigger early termination
-monkeypatch.setattr(inst.PackageInstaller, "_install_task", _install)
-with pytest.raises(MyBuildException, match=err_msg):
+monkeypatch.setattr(inst.PackageInstaller, "_install_task", _install_fail_my_build_exception)
+with pytest.raises(MyBuildException, match="mock internal package build error for pkg-a"):
installer.install()
-assert "pkg-b" in installer.installed # ensure dependency of a is 'installed'
-assert spec_name not in installer.installed
+# ensure dependency of a is 'installed' and a is not
+assert any(pkg_id.startswith("pkg-b-") for pkg_id in installer.installed)
+assert not any(pkg_id.startswith("pkg-a-") for pkg_id in installer.installed)
-def test_install_fail_multi(install_mockery, monkeypatch):
+def test_install_fail_multi(install_mockery, mock_fetch, monkeypatch):
"""Test expected results for failure of multiple packages."""
-spec_name = "pkg-c"
-err_msg = "mock internal package build error"
-class MyBuildException(Exception):
-pass
-def _install(installer, task, install_status, **kwargs):
-if task.pkg.name == spec_name:
-raise MyBuildException(err_msg)
-else:
-installer.installed.add(task.pkg.name)
-installer = create_installer([spec_name, "pkg-a"], {})
+installer = create_installer(["pkg-a", "pkg-c"], {})
# Raise a KeyboardInterrupt error to trigger early termination
-monkeypatch.setattr(inst.PackageInstaller, "_install_task", _install)
+monkeypatch.setattr(inst.PackageInstaller, "_install_task", _install_fail_my_build_exception)
with pytest.raises(spack.error.InstallError, match="Installation request failed"):
installer.install()
-assert "pkg-a" in installer.installed # ensure the second spec installed
-assert spec_name not in installer.installed
+# ensure the second spec installed but not the first
+assert any(pkg_id.startswith("pkg-c-") for pkg_id in installer.installed)
+assert not any(pkg_id.startswith("pkg-a-") for pkg_id in installer.installed)
def test_install_fail_fast_on_detect(install_mockery, monkeypatch, capsys):


@@ -9,6 +9,7 @@
import pytest
+import spack.deptypes as dt
import spack.rewiring
import spack.store
from spack.installer import PackageInstaller
@@ -22,6 +23,18 @@
args.extend(["g++", "patchelf"])
def check_spliced_spec_prefixes(spliced_spec):
"""check the file in the prefix has the correct paths"""
for node in spliced_spec.traverse(root=True):
text_file_path = os.path.join(node.prefix, node.name)
with open(text_file_path, "r") as f:
text = f.read()
print(text)
for modded_spec in node.traverse(root=True, deptype=dt.ALL & ~dt.BUILD):
print(modded_spec)
assert modded_spec.prefix in text
@pytest.mark.requires_executables(*args)
@pytest.mark.parametrize("transitive", [True, False])
def test_rewire_db(mock_fetch, install_mockery, transitive):
@@ -42,13 +55,8 @@ def test_rewire_db(mock_fetch, install_mockery, transitive):
installed_in_db = rec.installed if rec else False
assert installed_in_db
-# check the file in the prefix has the correct paths
-for node in spliced_spec.traverse(root=True):
-text_file_path = os.path.join(node.prefix, node.name)
-with open(text_file_path, "r") as f:
-text = f.read()
-for modded_spec in node.traverse(root=True, deptype=("link", "run")):
-assert modded_spec.prefix in text
+# check for correct prefix paths
+check_spliced_spec_prefixes(spliced_spec)
@pytest.mark.requires_executables(*args)
@@ -150,3 +158,26 @@ def test_rewire_not_installed_fails(mock_fetch, install_mockery):
match="failed due to missing install of build spec",
):
spack.rewiring.rewire(spliced_spec)
def test_rewire_virtual(mock_fetch, install_mockery):
"""Check installed package can successfully splice an alternate virtual implementation"""
dep = "splice-a"
alt_dep = "splice-h"
spec = Spec(f"splice-vt^{dep}").concretized()
alt_spec = Spec(alt_dep).concretized()
PackageInstaller([spec.package, alt_spec.package]).install()
spliced_spec = spec.splice(alt_spec, True)
spack.rewiring.rewire(spliced_spec)
# Confirm the original spec still has the original virtual implementation.
assert spec.satisfies(f"^{dep}")
# Confirm the spliced spec uses the new virtual implementation.
assert spliced_spec.satisfies(f"^{alt_dep}")
# check for correct prefix paths
check_spliced_spec_prefixes(spliced_spec)


@@ -1056,12 +1056,11 @@ def test_splice_intransitive_complex(self, setup_complex_splice):
spliced = a_red.splice(c_blue, transitive=False)
assert spliced.satisfies(
"pkg-a color=red ^pkg-b color=red ^pkg-c color=blue "
-"^pkg-d color=red ^pkg-e color=red ^pkg-f color=blue ^pkg-g@3 color=blue"
+"^pkg-d color=red ^pkg-e color=red ^pkg-f color=blue ^pkg-g@2 color=red"
)
-assert set(spliced.dependencies(deptype=dt.BUILD)) == set(
-a_red.dependencies(deptype=dt.BUILD)
-)
+assert set(spliced.dependencies(deptype=dt.BUILD)) == set()
assert spliced.build_spec == a_red
# We cannot check spliced["b"].build_spec is spliced["b"] because Spec.__getitem__ creates
# a new wrapper object on each invocation. So we select once and check on that object
# For the rest of the unchanged specs we will just check the s._build_spec is None.
@@ -1072,11 +1071,9 @@ def test_splice_intransitive_complex(self, setup_complex_splice):
assert spliced["pkg-c"].satisfies(
"pkg-c color=blue ^pkg-d color=red ^pkg-e color=red "
-"^pkg-f color=blue ^pkg-g@3 color=blue"
+"^pkg-f color=blue ^pkg-g@2 color=red"
)
-assert set(spliced["pkg-c"].dependencies(deptype=dt.BUILD)) == set(
-c_blue.dependencies(deptype=dt.BUILD)
-)
+assert set(spliced["pkg-c"].dependencies(deptype=dt.BUILD)) == set()
assert spliced["pkg-c"].build_spec == c_blue
assert set(spliced["pkg-c"].dependents()) == {spliced}
@@ -1101,14 +1098,12 @@ def test_splice_intransitive_complex(self, setup_complex_splice):
# Build dependent edge to f because f originally depended on the e this was copied from
assert set(spliced["pkg-e"].dependents(deptype=dt.BUILD)) == {spliced["pkg-b"]}
-assert spliced["pkg-f"].satisfies("pkg-f color=blue ^pkg-e color=red ^pkg-g@3 color=blue")
-assert set(spliced["pkg-f"].dependencies(deptype=dt.BUILD)) == set(
-c_blue["pkg-f"].dependencies(deptype=dt.BUILD)
-)
+assert spliced["pkg-f"].satisfies("pkg-f color=blue ^pkg-e color=red ^pkg-g@2 color=red")
+assert set(spliced["pkg-f"].dependencies(deptype=dt.BUILD)) == set()
assert spliced["pkg-f"].build_spec == c_blue["pkg-f"]
assert set(spliced["pkg-f"].dependents()) == {spliced["pkg-c"]}
-# spliced["g"] is g3, but spliced["b"]["g"] is g1
+# spliced["pkg-g"] is g2, but spliced["pkg-b"]["pkg-g"] is g1
assert spliced["pkg-g"] == a_red["pkg-g"]
assert spliced["pkg-g"]._build_spec is None
assert set(spliced["pkg-g"].dependents(deptype=dt.LINK)) == {
@@ -1117,7 +1112,6 @@ def test_splice_intransitive_complex(self, setup_complex_splice):
spliced["pkg-f"],
a_red["pkg-c"],
}
-assert set(spliced["pkg-g"].dependents(deptype=dt.BUILD)) == {spliced, a_red["pkg-c"]}
assert spliced["pkg-b"]["pkg-g"] == a_red["pkg-b"]["pkg-g"]
assert spliced["pkg-b"]["pkg-g"]._build_spec is None
@@ -1131,14 +1125,7 @@ def test_splice_intransitive_complex(self, setup_complex_splice):
# traverse_edges creates a synthetic edge with no deptypes to the root
if edge.depflag:
depflag = dt.LINK
-if (edge.parent.name, edge.spec.name) not in [
-("pkg-a", "pkg-c"), # These are the spliced edges
-("pkg-c", "pkg-d"),
-("pkg-f", "pkg-e"),
-("pkg-c", "pkg-g"),
-("pkg-f", "pkg-g"),
-("pkg-c", "pkg-f"), # ancestor to spliced edge
-]:
+if not edge.parent.spliced:
depflag |= dt.BUILD
assert edge.depflag == depflag
@@ -1150,21 +1137,17 @@ def test_splice_transitive_complex(self, setup_complex_splice):
"pkg-a color=red ^pkg-b color=red ^pkg-c color=blue ^pkg-d color=blue "
"^pkg-e color=blue ^pkg-f color=blue ^pkg-g@3 color=blue"
)
-assert set(spliced.dependencies(deptype=dt.BUILD)) == set(
-a_red.dependencies(deptype=dt.BUILD)
-)
+assert set(spliced.dependencies(deptype=dt.BUILD)) == set()
assert spliced.build_spec == a_red
assert spliced["pkg-b"].satisfies(
"pkg-b color=red ^pkg-d color=blue ^pkg-e color=blue ^pkg-g@2 color=blue"
)
-assert set(spliced["pkg-b"].dependencies(deptype=dt.BUILD)) == set(
-a_red["pkg-b"].dependencies(deptype=dt.BUILD)
-)
+assert set(spliced["pkg-b"].dependencies(deptype=dt.BUILD)) == set()
assert spliced["pkg-b"].build_spec == a_red["pkg-b"]
assert set(spliced["pkg-b"].dependents()) == {spliced}
-# We cannot check spliced["b"].build_spec is spliced["b"] because Spec.__getitem__ creates
+# We cannot check spliced["c"].build_spec is spliced["c"] because Spec.__getitem__ creates
# a new wrapper object on each invocation. So we select once and check on that object
# For the rest of the unchanged specs we will just check the s._build_spec is None.
c = spliced["pkg-c"]
@@ -1211,17 +1194,7 @@ def test_splice_transitive_complex(self, setup_complex_splice):
# traverse_edges creates a synthetic edge with no deptypes to the root
if edge.depflag:
depflag = dt.LINK
-if (edge.parent.name, edge.spec.name) not in [
-("pkg-a", "pkg-c"), # These are the spliced edges
-("pkg-a", "pkg-g"),
-("pkg-b", "pkg-d"),
-("pkg-b", "pkg-e"),
-("pkg-b", "pkg-g"),
-(
-"pkg-a",
-"pkg-b",
-), # This edge not spliced, but b was spliced invalidating edge
-]:
+if not edge.parent.spliced:
depflag |= dt.BUILD
assert edge.depflag == depflag
@@ -1365,10 +1338,10 @@ def test_splice_swap_names(self, default_mock_concretization, transitive):
@pytest.mark.parametrize("transitive", [True, False])
def test_splice_swap_names_mismatch_virtuals(self, default_mock_concretization, transitive):
-spec = default_mock_concretization("splice-t")
-dep = default_mock_concretization("splice-vh+foo")
+vt = default_mock_concretization("splice-vt")
+vh = default_mock_concretization("splice-vh+foo")
with pytest.raises(spack.spec.SpliceError, match="virtual"):
-spec.splice(dep, transitive)
+vt.splice(vh, transitive)
def test_spec_override(self):
init_spec = Spec("pkg-a foo=baz foobar=baz cflags=-O3 cxxflags=-O1")