Best effort co-concretization (iterative algorithm) (#28941)

Currently, environments can either be concretized fully together or fully separately. This works well for users who create environments for interoperable software and can use `concretizer:unify:true`. It does not allow environments with conflicting software to be concretized for maximal interoperability.

The primary use-case for this is facilities providing system software. Facilities provide multiple MPI implementations, but all software built against a given MPI ought to be interoperable.

This PR adds a concretization option `concretizer:unify:when_possible`. When this option is used, Spack will concretize specs in the environment separately, but will optimize for minimal differences in overlapping packages.
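For reference, the accepted values of the option side by side, as a minimal `spack.yaml` sketch (the `hdf5` specs are illustrative):

```yaml
spack:
  specs:
  - hdf5~mpi
  - hdf5+mpi
  concretizer:
    # false         -> concretize each root spec separately
    # true          -> concretize all root specs together
    # when_possible -> concretize separately, minimizing differences (this PR)
    unify: when_possible
```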

* Add a level of indirection to root specs

This commit introduces the "literal" atom, which comes with
a few different "arities". The unary "literal" contains an
integer that is the ID of a spec literal. Other "literals"
contain information on the requests made by that literal ID.
For instance zlib@1.2.11 generates the following facts:

literal(0,"root","zlib").
literal(0,"node","zlib").
literal(0,"node_version_satisfies","zlib","1.2.11").

This should help with solving large environments "together
where possible", since later literals can now be solved
together in batches.
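As a sketch of the encoding, assuming a hypothetical helper (this is not Spack's actual fact generator, which works on parsed `Spec` objects):

```python
def literal_facts(idx, name, version=None):
    """Emit the 'literal' facts for one root spec request (simplified sketch)."""
    facts = [
        'literal(%d,"root","%s").' % (idx, name),
        'literal(%d,"node","%s").' % (idx, name),
    ]
    if version is not None:
        # a version constraint on the root becomes one more "literal" fact
        facts.append(
            'literal(%d,"node_version_satisfies","%s","%s").' % (idx, name, version)
        )
    return facts

for fact in literal_facts(0, "zlib", "1.2.11"):
    print(fact)
```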

* Add a mechanism to relax the number of literals being solved

* Modify spack solve to display the new criteria

Since the new criteria rank above all the build criteria,
we need to modify the way we display the output.

Originally done by Greg in #27964 and cherry-picked
to this branch by the co-author of the commit.

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>

* Inject reusable specs into the solve

Instead of coupling the PyclingoDriver() object with
spack.config, inject the concrete specs that can be
reused.

A method takes care of reading from
the store and the buildcache.

* spack solve: show output of multi-rounds

* add tests for best-effort co-concretization

* Enforce having at least one literal being solved

Co-authored-by: Greg Becker <becker33@llnl.gov>
Commit f2a81af70e (parent 494e567fe5)
Massimiliano Culpo
2022-05-24 21:13:28 +02:00
committed by GitHub
12 changed files with 533 additions and 207 deletions


@@ -273,19 +273,9 @@ or
 Concretizing
 ^^^^^^^^^^^^
-Once some user specs have been added to an environment, they can be
-concretized. *By default specs are concretized separately*, one after
-the other. This mode of operation permits to deploy a full
-software stack where multiple configurations of the same package
-need to be installed alongside each other. Central installations done
-at HPC centers by system administrators or user support groups
-are a common case that fits in this behavior.
-
-Environments *can also be configured to concretize all
-the root specs in a unified way* to ensure that
-each package in the environment corresponds to a single concrete spec. This
-mode of operation is usually what is required by software developers that
-want to deploy their development environment.
+Once some user specs have been added to an environment, they can be concretized.
+There are at the moment three different modes of operation to concretize an environment,
+which are explained in details in :ref:`environments_concretization_config`.
 Regardless of which mode of operation has been chosen, the following
 command will ensure all the root specs are concretized according to the
 constraints that are prescribed in the configuration:
@@ -493,25 +483,64 @@
 add`` command from the command line. However, there is more power
 available from the yaml file.
 
+.. _environments_concretization_config:
+
 ^^^^^^^^^^^^^^^^^^^
 Spec concretization
 ^^^^^^^^^^^^^^^^^^^
-Specs can be concretized separately or together, as already
-explained in :ref:`environments_concretization`. The behavior active
-under any environment is determined by the ``concretizer:unify`` property:
+An environment can be concretized in three different modes and the behavior active under any environment
+is determined by the ``concretizer:unify`` property. By default specs are concretized *separately*, one after the other:
 
 .. code-block:: yaml
 
    spack:
     specs:
-    - ncview
-    - netcdf
-    - nco
-    - py-sphinx
+    - hdf5~mpi
+    - hdf5+mpi
+    - zlib@1.2.8
+   concretizer:
+    unify: false
+
+This mode of operation permits to deploy a full software stack where multiple configurations of the same package
+need to be installed alongside each other using the best possible selection of transitive dependencies. The downside
+is that redundancy of installations is disregarded completely, and thus environments might be more bloated than
+strictly needed. In the example above, for instance, if a version of ``zlib`` newer than ``1.2.8`` is known to Spack,
+then it will be used for both ``hdf5`` installations.
+
+If redundancy of the environment is a concern, Spack provides a way to install it *together where possible*,
+i.e. trying to maximize reuse of dependencies across different specs:
+
+.. code-block:: yaml
+
+   spack:
+    specs:
+    - hdf5~mpi
+    - hdf5+mpi
+    - zlib@1.2.8
+   concretizer:
+    unify: when_possible
+
+Also in this case Spack allows having multiple configurations of the same package, but privileges the reuse of
+specs over other factors. Going back to our example, this means that both ``hdf5`` installations will use
+``zlib@1.2.8`` as a dependency even if newer versions of that library are available.
+Central installations done at HPC centers by system administrators or user support groups are a common case
+that fits either of these two modes.
+
+Environments can also be configured to concretize all the root specs *together*, in a self-consistent way, to
+ensure that each package in the environment comes with a single configuration:
+
+.. code-block:: yaml
+
+   spack:
+    specs:
+    - hdf5+mpi
+    - zlib@1.2.8
   concretizer:
    unify: true
+
+This mode of operation is usually what is required by software developers that want to deploy their development
+environment and have a single view of it in the filesystem.
 
 .. note::
    The ``concretizer:unify`` config option was introduced in Spack 0.18 to
@@ -521,9 +550,9 @@ under any environment is determined by the ``concretizer:unify`` property:
 .. admonition:: Re-concretization of user specs
 
-   When concretizing specs together the entire set of specs will be
+   When concretizing specs *together* or *together where possible* the entire set of specs will be
    re-concretized after any addition of new user specs, to ensure that
-   the environment remains consistent. When instead the specs are concretized
+   the environment remains consistent / minimal. When instead the specs are concretized
    separately only the new specs will be re-concretized after any addition.
 
 ^^^^^^^^^^^^^


@@ -15,6 +15,8 @@
 import spack
 import spack.cmd
 import spack.cmd.common.arguments as arguments
+import spack.config
+import spack.environment
 import spack.hash_types as ht
 import spack.package
 import spack.solver.asp as asp
@@ -74,6 +76,51 @@ def setup_parser(subparser):
     spack.cmd.common.arguments.add_concretizer_args(subparser)
 
 
+def _process_result(result, show, required_format, kwargs):
+    result.raise_if_unsat()
+    opt, _, _ = min(result.answers)
+    if ("opt" in show) and (not required_format):
+        tty.msg("Best of %d considered solutions." % result.nmodels)
+        tty.msg("Optimization Criteria:")
+
+        maxlen = max(len(s[2]) for s in result.criteria)
+        color.cprint(
+            "@*{ Priority Criterion %sInstalled ToBuild}" % ((maxlen - 10) * " ")
+        )
+
+        fmt = " @K{%%-8d} %%-%ds%%9s %%7s" % maxlen
+        for i, (installed_cost, build_cost, name) in enumerate(result.criteria, 1):
+            color.cprint(
+                fmt % (
+                    i,
+                    name,
+                    "-" if build_cost is None else installed_cost,
+                    installed_cost if build_cost is None else build_cost,
+                )
+            )
+        print()
+
+    # dump the solutions as concretized specs
+    if 'solutions' in show:
+        for spec in result.specs:
+            # With -y, just print YAML to output.
+            if required_format == 'yaml':
+                # use write because to_yaml already has a newline.
+                sys.stdout.write(spec.to_yaml(hash=ht.dag_hash))
+            elif required_format == 'json':
+                sys.stdout.write(spec.to_json(hash=ht.dag_hash))
+            else:
+                sys.stdout.write(
+                    spec.tree(color=sys.stdout.isatty(), **kwargs))
+        print()
+
+    if result.unsolved_specs and "solutions" in show:
+        tty.msg("Unsolved specs")
+        for spec in result.unsolved_specs:
+            print(spec)
+        print()
+
+
 def solve(parser, args):
     # these are the same options as `spack spec`
     name_fmt = '{namespace}.{name}' if args.namespaces else '{name}'
@@ -102,58 +149,42 @@ def solve(parser, args):
     if models < 0:
         tty.die("model count must be non-negative: %d")
 
-    specs = spack.cmd.parse_specs(args.specs)
+    # Format required for the output (JSON, YAML or None)
+    required_format = args.format
+
+    # If we have an active environment, pick the specs from there
+    env = spack.environment.active_environment()
+    if env and args.specs:
+        msg = "cannot give explicit specs when an environment is active"
+        raise RuntimeError(msg)
+    specs = list(env.user_specs) if env else spack.cmd.parse_specs(args.specs)
 
-    # set up solver parameters
-    # Note: reuse and other concretizer prefs are passed as configuration
     solver = asp.Solver()
     output = sys.stdout if "asp" in show else None
-    result = solver.solve(
-        specs,
-        out=output,
-        models=models,
-        timers=args.timers,
-        stats=args.stats,
-        setup_only=(set(show) == {'asp'})
-    )
-    if 'solutions' not in show:
-        return
-
-    # die if no solution was found
-    result.raise_if_unsat()
-
-    # show the solutions as concretized specs
-    if 'solutions' in show:
-        opt, _, _ = min(result.answers)
-
-        if ("opt" in show) and (not args.format):
-            tty.msg("Best of %d considered solutions." % result.nmodels)
-            tty.msg("Optimization Criteria:")
-
-            maxlen = max(len(s[2]) for s in result.criteria)
-            color.cprint(
-                "@*{ Priority Criterion %sInstalled ToBuild}" % ((maxlen - 10) * " ")
-            )
-
-            fmt = " @K{%%-8d} %%-%ds%%9s %%7s" % maxlen
-            for i, (installed_cost, build_cost, name) in enumerate(result.criteria, 1):
-                color.cprint(
-                    fmt % (
-                        i,
-                        name,
-                        "-" if build_cost is None else installed_cost,
-                        installed_cost if build_cost is None else build_cost,
-                    )
-                )
-            print()
-
-        for spec in result.specs:
-            # With -y, just print YAML to output.
-            if args.format == 'yaml':
-                # use write because to_yaml already has a newline.
-                sys.stdout.write(spec.to_yaml(hash=ht.dag_hash))
-            elif args.format == 'json':
-                sys.stdout.write(spec.to_json(hash=ht.dag_hash))
-            else:
-                sys.stdout.write(
-                    spec.tree(color=sys.stdout.isatty(), **kwargs))
+    setup_only = set(show) == {'asp'}
+    unify = spack.config.get('concretizer:unify')
+    if unify != 'when_possible':
+        # set up solver parameters
+        # Note: reuse and other concretizer prefs are passed as configuration
+        result = solver.solve(
+            specs,
+            out=output,
+            models=models,
+            timers=args.timers,
+            stats=args.stats,
+            setup_only=setup_only
+        )
+        if not setup_only:
+            _process_result(result, show, required_format, kwargs)
+    else:
+        for idx, result in enumerate(solver.solve_in_rounds(
+                specs, out=output, models=models, timers=args.timers, stats=args.stats
+        )):
+            if "solutions" in show:
+                tty.msg("ROUND {0}".format(idx))
+                tty.msg("")
+            else:
+                print("% END ROUND {0}\n".format(idx))
+            if not setup_only:
+                _process_result(result, show, required_format, kwargs)


@@ -628,7 +628,7 @@ def __init__(self, path, init_file=None, with_view=None, keep_relative=False):
         # This attribute will be set properly from configuration
         # during concretization
-        self.concretization = None
+        self.unify = None
         self.clear()
 
         if init_file:
@@ -772,10 +772,14 @@ def _read_manifest(self, f, raw_yaml=None):
         # Retrieve the current concretization strategy
         configuration = config_dict(self.yaml)
 
-        # Let `concretization` overrule `concretize:unify` config for now.
-        unify = spack.config.get('concretizer:unify')
-        self.concretization = configuration.get(
-            'concretization', 'together' if unify else 'separately')
+        # Let `concretization` overrule `concretize:unify` config for now,
+        # but use a translation table to have internally a representation
+        # as if we were using the new configuration
+        translation = {'separately': False, 'together': True}
+        try:
+            self.unify = translation[configuration['concretization']]
+        except KeyError:
+            self.unify = spack.config.get('concretizer:unify', False)
 
         # Retrieve dev-build packages:
         self.dev_specs = configuration.get('develop', {})
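The fallback logic above can be sketched in isolation; plain dicts stand in for the manifest and for Spack's configuration object, and the function name is hypothetical:

```python
def resolve_unify(manifest, global_config):
    """Legacy 'concretization' keys overrule 'concretizer:unify' (sketch)."""
    translation = {'separately': False, 'together': True}
    try:
        # legacy per-environment key wins when present
        return translation[manifest['concretization']]
    except KeyError:
        # otherwise fall back to the new-style config, defaulting to False
        return global_config.get('concretizer:unify', False)

print(resolve_unify({'concretization': 'together'}, {}))
print(resolve_unify({}, {'concretizer:unify': 'when_possible'}))
```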
@@ -1156,14 +1160,44 @@ def concretize(self, force=False, tests=False):
         self.specs_by_hash = {}
 
         # Pick the right concretization strategy
-        if self.concretization == 'together':
+        if self.unify == 'when_possible':
+            return self._concretize_together_where_possible(tests=tests)
+
+        if self.unify is True:
             return self._concretize_together(tests=tests)
-        if self.concretization == 'separately':
+
+        if self.unify is False:
             return self._concretize_separately(tests=tests)
 
         msg = 'concretization strategy not implemented [{0}]'
-        raise SpackEnvironmentError(msg.format(self.concretization))
+        raise SpackEnvironmentError(msg.format(self.unify))
+
+    def _concretize_together_where_possible(self, tests=False):
+        # Avoid cyclic dependency
+        import spack.solver.asp
+
+        # Exit early if the set of concretized specs is the set of user specs
+        user_specs_did_not_change = not bool(
+            set(self.user_specs) - set(self.concretized_user_specs)
+        )
+        if user_specs_did_not_change:
+            return []
+
+        # Proceed with concretization
+        self.concretized_user_specs = []
+        self.concretized_order = []
+        self.specs_by_hash = {}
+
+        result_by_user_spec = {}
+        solver = spack.solver.asp.Solver()
+        for result in solver.solve_in_rounds(self.user_specs, tests=tests):
+            result_by_user_spec.update(result.specs_by_input)
+
+        result = []
+        for abstract, concrete in sorted(result_by_user_spec.items()):
+            self._add_concrete_spec(abstract, concrete)
+            result.append((abstract, concrete))
+        return result
 
     def _concretize_together(self, tests=False):
         """Concretization strategy that concretizes all the specs
@@ -1316,7 +1350,7 @@ def concretize_and_add(self, user_spec, concrete_spec=None, tests=False):
             concrete_spec: if provided, then it is assumed that it is the
                 result of concretizing the provided ``user_spec``
         """
-        if self.concretization == 'together':
+        if self.unify is True:
             msg = 'cannot install a single spec in an environment that is ' \
                   'configured to be concretized together. Run instead:\n\n' \
                   '    $ spack add <spec>\n' \


@@ -26,12 +26,10 @@
             }
         },
         'unify': {
-            'type': 'boolean'
-            # Todo: add when_possible.
-            # 'oneOf': [
-            #     {'type': 'boolean'},
-            #     {'type': 'string', 'enum': ['when_possible']}
-            # ]
+            'oneOf': [
+                {'type': 'boolean'},
+                {'type': 'string', 'enum': ['when_possible']}
+            ]
         }
     }
 }


@@ -98,6 +98,20 @@ def getter(node):
 # Below numbers are used to map names of criteria to the order
 # they appear in the solution. See concretize.lp
 
+# The space of possible priorities for optimization targets
+# is partitioned in the following ranges:
+#
+# [0-100) Optimization criteria for software being reused
+# [100-200) Fixed criteria that are higher priority than reuse, but lower than build
+# [200-300) Optimization criteria for software being built
+# [300-1000) High-priority fixed criteria
+# [1000-inf) Error conditions
+#
+# Each optimization target is a minimization with optimal value 0.
+
+#: High fixed priority offset for criteria that supersede all build criteria
+high_fixed_priority_offset = 300
+
 #: Priority offset for "build" criteria (regular criterio shifted to
 #: higher priority for specs we have to build)
 build_priority_offset = 200
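The documented ranges can be illustrated with a small classifier (a sketch; the real partitioning happens in `build_criteria_names` and in `concretize.lp`, and `fixed_priority_offset = 100` is assumed from the range table above):

```python
# offsets assumed from the priority-range comment above
fixed_priority_offset = 100
build_priority_offset = 200
high_fixed_priority_offset = 300

def classify_priority(priority):
    """Map an optimization priority to its documented range (sketch)."""
    if priority < fixed_priority_offset:
        return "reused"          # [0-100)
    if priority < build_priority_offset:
        return "fixed"           # [100-200)
    if priority < high_fixed_priority_offset:
        return "build"           # [200-300)
    if priority < 1000:
        return "high-fixed"      # [300-1000)
    return "error"               # [1000-inf)
```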
@@ -112,6 +126,7 @@ def build_criteria_names(costs, tuples):
     priorities_names = []
 
     num_fixed = 0
+    num_high_fixed = 0
     for pred, args in tuples:
         if pred != "opt_criterion":
             continue
@@ -128,6 +143,8 @@ def build_criteria_names(costs, tuples):
         if priority < fixed_priority_offset:
             build_priority = priority + build_priority_offset
             priorities_names.append((build_priority, name))
+        elif priority >= high_fixed_priority_offset:
+            num_high_fixed += 1
         else:
             num_fixed += 1
@@ -141,19 +158,26 @@
     # split list into three parts: build criteria, fixed criteria, non-build criteria
     num_criteria = len(priorities_names)
-    num_build = (num_criteria - num_fixed) // 2
+    num_build = (num_criteria - num_fixed - num_high_fixed) // 2
 
-    build = priorities_names[:num_build]
-    fixed = priorities_names[num_build:num_build + num_fixed]
-    installed = priorities_names[num_build + num_fixed:]
+    build_start_idx = num_high_fixed
+    fixed_start_idx = num_high_fixed + num_build
+    installed_start_idx = num_high_fixed + num_build + num_fixed
+
+    high_fixed = priorities_names[:build_start_idx]
+    build = priorities_names[build_start_idx:fixed_start_idx]
+    fixed = priorities_names[fixed_start_idx:installed_start_idx]
+    installed = priorities_names[installed_start_idx:]
 
     # mapping from priority to index in cost list
     indices = dict((p, i) for i, (p, n) in enumerate(priorities_names))
 
     # make a list that has each name with its build and non-build costs
-    criteria = [
-        (costs[p - fixed_priority_offset + num_build], None, name) for p, name in fixed
-    ]
+    criteria = [(cost, None, name) for cost, (p, name) in
+                zip(costs[:build_start_idx], high_fixed)]
+    criteria += [(cost, None, name) for cost, (p, name) in
+                 zip(costs[fixed_start_idx:installed_start_idx], fixed)]
 
     for (i, name), (b, _) in zip(installed, build):
         criteria.append((costs[indices[i]], costs[indices[b]], name))
@@ -306,7 +330,9 @@ def __init__(self, specs, asp=None):
         self.abstract_specs = specs
 
         # Concrete specs
+        self._concrete_specs_by_input = None
         self._concrete_specs = None
+        self._unsolved_specs = None
 
     def format_core(self, core):
         """
@@ -403,15 +429,32 @@ def specs(self):
         """List of concretized specs satisfying the initial
         abstract request.
         """
-        # The specs were already computed, return them
-        if self._concrete_specs:
-            return self._concrete_specs
+        if self._concrete_specs is None:
+            self._compute_specs_from_answer_set()
+        return self._concrete_specs
 
-        # Assert prerequisite
-        msg = 'cannot compute specs ["satisfiable" is not True ]'
-        assert self.satisfiable, msg
+    @property
+    def unsolved_specs(self):
+        """List of abstract input specs that were not solved."""
+        if self._unsolved_specs is None:
+            self._compute_specs_from_answer_set()
+        return self._unsolved_specs
 
-        self._concrete_specs = []
+    @property
+    def specs_by_input(self):
+        if self._concrete_specs_by_input is None:
+            self._compute_specs_from_answer_set()
+        return self._concrete_specs_by_input
+
+    def _compute_specs_from_answer_set(self):
+        if not self.satisfiable:
+            self._concrete_specs = []
+            self._unsolved_specs = self.abstract_specs
+            self._concrete_specs_by_input = {}
+            return
+
+        self._concrete_specs, self._unsolved_specs = [], []
+        self._concrete_specs_by_input = {}
         best = min(self.answers)
         opt, _, answer = best
         for input_spec in self.abstract_specs:
@@ -420,10 +463,13 @@ def specs(self):
                 providers = [spec.name for spec in answer.values()
                              if spec.package.provides(key)]
                 key = providers[0]
+            candidate = answer.get(key)
 
-            self._concrete_specs.append(answer[key])
-
-        return self._concrete_specs
+            if candidate and candidate.satisfies(input_spec):
+                self._concrete_specs.append(answer[key])
+                self._concrete_specs_by_input[input_spec] = answer[key]
+            else:
+                self._unsolved_specs.append(input_spec)
 
 def _normalize_packages_yaml(packages_yaml):
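The solved/unsolved partitioning above can be sketched with plain strings, using a prefix match as a stand-in for `Spec.satisfies` (the helper and data are hypothetical, not Spack's API):

```python
def split_results(abstract_specs, answer):
    """Partition inputs into (solved-by-input, unsolved) — a sketch of
    how _compute_specs_from_answer_set keeps only candidates that
    satisfy their input spec."""
    solved_by_input, unsolved = {}, []
    for input_spec in abstract_specs:
        candidate = answer.get(input_spec.split('@')[0])
        # stand-in for `candidate.satisfies(input_spec)`
        if candidate and candidate.startswith(input_spec):
            solved_by_input[input_spec] = candidate
        else:
            unsolved.append(input_spec)
    return solved_by_input, unsolved

answer = {"zlib": "zlib@1.2.11", "hdf5": "hdf5@1.10.7"}
solved, unsolved = split_results(["zlib@1.2", "openmpi"], answer)
```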
@@ -520,6 +566,7 @@ def solve(
         setup,
         specs,
         nmodels=0,
+        reuse=None,
         timers=False,
         stats=False,
         out=None,
@@ -530,7 +577,8 @@
         Arguments:
             setup (SpackSolverSetup): An object to set up the ASP problem.
             specs (list): List of ``Spec`` objects to solve for.
-            nmodels (list): Number of models to consider (default 0 for unlimited).
+            nmodels (int): Number of models to consider (default 0 for unlimited).
+            reuse (None or list): list of concrete specs that can be reused
             timers (bool): Print out coarse timers for different solve phases.
             stats (bool): Whether to output Clingo's internal solver statistics.
             out: Optional output stream for the generated ASP program.
@@ -554,7 +602,7 @@ def solve(
         self.assumptions = []
         with self.control.backend() as backend:
             self.backend = backend
-            setup.setup(self, specs)
+            setup.setup(self, specs, reuse=reuse)
         timer.phase("setup")
 
         # read in the main ASP program and display logic -- these are
@@ -573,6 +621,7 @@ def visit(node):
                 arg = ast_sym(ast_sym(term.atom).arguments[0])
                 self.fact(AspFunction(name)(arg.string))
 
+        self.h1("Error messages")
         path = os.path.join(parent_dir, 'concretize.lp')
         parse_files([path], visit)
@@ -622,7 +671,7 @@ def stringify(x):
         if result.satisfiable:
             # build spec from the best model
-            builder = SpecBuilder(specs)
+            builder = SpecBuilder(specs, reuse=reuse)
             min_cost, best_model = min(models)
             tuples = [
                 (sym.name, [stringify(a) for a in sym.arguments])
@@ -654,7 +703,7 @@ def stringify(x):
 class SpackSolverSetup(object):
     """Class to set up and run a Spack concretization solve."""
 
-    def __init__(self, reuse=False, tests=False):
+    def __init__(self, tests=False):
         self.gen = None  # set by setup()
 
         self.declared_versions = {}
@@ -679,12 +728,12 @@ def __init__(self, reuse=False, tests=False):
         # Caches to optimize the setup phase of the solver
         self.target_specs_cache = None
 
-        # whether to add installed/binary hashes to the solve
-        self.reuse = reuse
-
         # whether to add installed/binary hashes to the solve
         self.tests = tests
 
+        # If False allows for input specs that are not solved
+        self.concretize_everything = True
+
     def pkg_version_rules(self, pkg):
         """Output declared versions of a package.
@@ -1737,32 +1786,7 @@ def define_concrete_input_specs(self, specs, possible):
             if spec.concrete:
                 self._facts_from_concrete_spec(spec, possible)
 
-    def define_installed_packages(self, specs, possible):
-        """Add facts about all specs already in the database.
-
-        Arguments:
-            possible (dict): result of Package.possible_dependencies() for
-                specs in this solve.
-        """
-        # Specs from local store
-        with spack.store.db.read_transaction():
-            for spec in spack.store.db.query(installed=True):
-                if not spec.satisfies('dev_path=*'):
-                    self._facts_from_concrete_spec(spec, possible)
-
-        # Specs from configured buildcaches
-        try:
-            index = spack.binary_distribution.update_cache_and_get_specs()
-            for spec in index:
-                if not spec.satisfies('dev_path=*'):
-                    self._facts_from_concrete_spec(spec, possible)
-        except (spack.binary_distribution.FetchCacheError, IndexError):
-            # this is raised when no mirrors had indices.
-            # TODO: update mirror configuration so it can indicate that the source cache
-            # TODO: (or any mirror really) doesn't have binaries.
-            pass
-
-    def setup(self, driver, specs):
+    def setup(self, driver, specs, reuse=None):
         """Generate an ASP program with relevant constraints for specs.
 
         This calls methods on the solve driver to set up the problem with
@@ -1770,7 +1794,9 @@ def setup(self, driver, specs):
         specs, as well as constraints from the specs themselves.
 
         Arguments:
+            driver (PyclingoDriver): driver instance of this solve
             specs (list): list of Specs to solve
+            reuse (None or list): list of concrete specs that can be reused
         """
         self._condition_id_counter = itertools.count()
@@ -1809,11 +1835,11 @@ def setup(self, driver, specs):
         self.gen.h1("Concrete input spec definitions")
         self.define_concrete_input_specs(specs, possible)
 
-        if self.reuse:
-            self.gen.h1("Installed packages")
-            self.gen.fact(fn.optimize_for_reuse())
-            self.gen.newline()
-            self.define_installed_packages(specs, possible)
+        if reuse:
+            self.gen.h1("Reusable specs")
+            self.gen.fact(fn.optimize_for_reuse())
+            for reusable_spec in reuse:
+                self._facts_from_concrete_spec(reusable_spec, possible)
 
         self.gen.h1('General Constraints')
         self.available_compilers()
@@ -1846,19 +1872,7 @@ def setup(self, driver, specs):
             _develop_specs_from_env(dep, env)
 
         self.gen.h1('Spec Constraints')
-        for spec in sorted(specs):
-            self.gen.h2('Spec: %s' % str(spec))
-            self.gen.fact(
-                fn.virtual_root(spec.name) if spec.virtual
-                else fn.root(spec.name)
-            )
-
-            for clause in self.spec_clauses(spec):
-                self.gen.fact(clause)
-                if clause.name == 'variant_set':
-                    self.gen.fact(
-                        fn.variant_default_value_from_cli(*clause.args)
-                    )
+        self.literal_specs(specs)
 
         self.gen.h1("Variant Values defined in specs")
         self.define_variant_values()
@@ -1875,45 +1889,47 @@ def setup(self, driver, specs):
         self.gen.h1("Target Constraints")
         self.define_target_constraints()
 
+    def literal_specs(self, specs):
+        for idx, spec in enumerate(specs):
+            self.gen.h2('Spec: %s' % str(spec))
+            self.gen.fact(fn.literal(idx))
+
+            root_fn = fn.virtual_root(spec.name) if spec.virtual else fn.root(spec.name)
+            self.gen.fact(fn.literal(idx, root_fn.name, *root_fn.args))
+            for clause in self.spec_clauses(spec):
+                self.gen.fact(fn.literal(idx, clause.name, *clause.args))
+                if clause.name == 'variant_set':
+                    self.gen.fact(fn.literal(
+                        idx, "variant_default_value_from_cli", *clause.args
+                    ))
+
+        if self.concretize_everything:
+            self.gen.fact(fn.concretize_everything())
 class SpecBuilder(object):
     """Class with actions to rebuild a spec from ASP results."""
 
     #: Attributes that don't need actions
     ignored_attributes = ["opt_criterion"]
 
-    def __init__(self, specs):
+    def __init__(self, specs, reuse=None):
         self._specs = {}
         self._result = None
         self._command_line_specs = specs
         self._flag_sources = collections.defaultdict(lambda: set())
         self._flag_compiler_defaults = set()
 
+        # Pass in as arguments reusable specs and plug them in
+        # from this dictionary during reconstruction
+        self._hash_lookup = {}
+        if reuse is not None:
+            for spec in reuse:
+                for node in spec.traverse():
+                    self._hash_lookup.setdefault(node.dag_hash(), node)
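The `_hash_lookup` table indexes every node of every reusable spec by its DAG hash, which turns `hash()` into a plain dictionary lookup; a sketch with a hypothetical stand-in for concrete spec nodes:

```python
class Node:
    """Hypothetical stand-in for a concrete Spec node."""
    def __init__(self, name, dag_hash):
        self.name = name
        self._hash = dag_hash

    def dag_hash(self):
        return self._hash

def build_hash_lookup(reuse):
    """Index every node of every reusable spec by its DAG hash (sketch).
    Each entry of `reuse` stands in for the list from spec.traverse()."""
    lookup = {}
    for nodes in reuse:
        for node in nodes:
            # first spec providing a hash wins, as with dict.setdefault
            lookup.setdefault(node.dag_hash(), node)
    return lookup

reuse = [[Node("hdf5", "abc123"), Node("zlib", "def456")]]
lookup = build_hash_lookup(reuse)
```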
     def hash(self, pkg, h):
         if pkg not in self._specs:
-            try:
-                # try to get the candidate from the store
-                concrete_spec = spack.store.db.get_by_hash(h)[0]
-            except TypeError:
-                # the dag hash was not in the DB, try buildcache
-                s = spack.binary_distribution.binary_index.find_by_hash(h)
-                if s:
-                    concrete_spec = s[0]['spec']
-                else:
-                    # last attempt: maybe the hash comes from a particular input spec
-                    # this only occurs in tests (so far)
-                    for clspec in self._command_line_specs:
-                        for spec in clspec.traverse():
-                            if spec.concrete and spec.dag_hash() == h:
-                                concrete_spec = spec
-
-            assert concrete_spec, "Unable to look up concrete spec with hash %s" % h
-            self._specs[pkg] = concrete_spec
-        else:
-            # TODO: remove this code -- it's dead unless we decide that node() clauses
-            # should come before hashes.
-            # ensure that if it's already there, it's correct
-            spec = self._specs[pkg]
-            assert spec.dag_hash() == h
+            self._specs[pkg] = self._hash_lookup[h]
 
     def node(self, pkg):
         if pkg not in self._specs:
@@ -2183,7 +2199,7 @@ def _develop_specs_from_env(spec, env):
 class Solver(object):
     """This is the main external interface class for solving.
 
-    It manages solver configuration and preferences in once place. It sets up the solve
+    It manages solver configuration and preferences in one place. It sets up the solve
     and passes the setup method to the driver, as well.
 
     Properties of interest:
@@ -2199,6 +2215,42 @@ def __init__(self):
         # by setting them directly as properties.
         self.reuse = spack.config.get("concretizer:reuse", False)

+    @staticmethod
+    def _check_input_and_extract_concrete_specs(specs):
+        reusable = []
+        for root in specs:
+            for s in root.traverse():
+                if s.virtual:
+                    continue
+                if s.concrete:
+                    reusable.append(s)
+                spack.spec.Spec.ensure_valid_variants(s)
+        return reusable
+
+    def _reusable_specs(self):
+        reusable_specs = []
+        if self.reuse:
+            # Specs from the local Database
+            with spack.store.db.read_transaction():
+                reusable_specs.extend([
+                    s for s in spack.store.db.query(installed=True)
+                    if not s.satisfies('dev_path=*')
+                ])
+
+            # Specs from buildcaches
+            try:
+                index = spack.binary_distribution.update_cache_and_get_specs()
+                reusable_specs.extend([
+                    s for s in index if not s.satisfies('dev_path=*')
+                ])
+            except (spack.binary_distribution.FetchCacheError, IndexError):
+                # this is raised when no mirrors had indices.
+                # TODO: update mirror configuration so it can indicate that the
+                # TODO: source cache (or any mirror really) doesn't have binaries.
+                pass
+        return reusable_specs
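For context, the user-facing switch that this reuse machinery serves is the environment's `unify` setting. A minimal `spack.yaml` sketch (the surrounding keys follow the standard environment schema; the example specs and comment wording are illustrative, not from this PR):

```yaml
spack:
  concretizer:
    # false: concretize each root separately
    # true: concretize all roots together
    # when_possible: separate solves, minimizing differences in shared packages
    unify: when_possible
  specs:
  - hdf5 ^mpich
  - hdf5 ^openmpi
```

With `when_possible`, the two `hdf5` roots keep their conflicting MPI providers, but any packages that can be shared between the two solves are.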
     def solve(
         self,
         specs,
@@ -2222,23 +2274,78 @@ def solve(
             setup_only (bool): if True, stop after setup and don't solve (default False).
         """
         # Check upfront that the variants are admissible
-        for root in specs:
-            for s in root.traverse():
-                if s.virtual:
-                    continue
-                spack.spec.Spec.ensure_valid_variants(s)
-        setup = SpackSolverSetup(reuse=self.reuse, tests=tests)
+        reusable_specs = self._check_input_and_extract_concrete_specs(specs)
+        reusable_specs.extend(self._reusable_specs())
+        setup = SpackSolverSetup(tests=tests)
         return self.driver.solve(
             setup,
             specs,
             nmodels=models,
+            reuse=reusable_specs,
             timers=timers,
             stats=stats,
             out=out,
             setup_only=setup_only,
         )

+    def solve_in_rounds(
+        self,
+        specs,
+        out=None,
+        models=0,
+        timers=False,
+        stats=False,
+        tests=False,
+    ):
+        """Solve for a stable model of specs in multiple rounds.
+
+        This relaxes the assumption of solve that everything must be consistent and
+        solvable in a single round. Each round tries to maximize the reuse of specs
+        from previous rounds.
+
+        The function is a generator that yields the result of each round.
+
+        Arguments:
+            specs (list): list of Specs to solve.
+            models (int): number of models to search (default: 0)
+            out: Optionally write the generated ASP program to a file-like object.
+            timers (bool): print timing if set to True
+            stats (bool): print internal statistics if set to True
+            tests (bool): add test dependencies to the solve
+        """
+        reusable_specs = self._check_input_and_extract_concrete_specs(specs)
+        reusable_specs.extend(self._reusable_specs())
+        setup = SpackSolverSetup(tests=tests)
+
+        # Tell clingo that we don't have to solve all the inputs at once
+        setup.concretize_everything = False
+        input_specs = specs
+        while True:
+            result = self.driver.solve(
+                setup,
+                input_specs,
+                nmodels=models,
+                reuse=reusable_specs,
+                timers=timers,
+                stats=stats,
+                out=out,
+                setup_only=False
+            )
+            yield result
+
+            # If we don't have unsolved specs we are done
+            if not result.unsolved_specs:
+                break
+
+            # This means we cannot progress with solving the input
+            if not result.satisfiable or not result.specs:
+                break
+
+            input_specs = result.unsolved_specs
+            for spec in result.specs:
+                reusable_specs.extend(spec.traverse())
 class UnsatisfiableSpecError(spack.error.UnsatisfiableSpecError):
     """

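To make the control flow of `solve_in_rounds` easy to eyeball, here is a stdlib-only toy of the same round structure (`solve_in_rounds`, `one_flavor_per_round`, and the `(name, flavor)` pair specs are all invented stand-ins, not Spack APIs): each round solves what it can, stops when nothing is left or no progress was made, and feeds the unsolved inputs into the next round.

```python
def solve_in_rounds(specs, solve_round):
    """Run solve_round repeatedly, feeding unsolved specs back in."""
    results = []
    input_specs = list(specs)
    while True:
        solved, unsolved = solve_round(input_specs)
        results.append(solved)
        if not unsolved:   # everything concretized: done
            break
        if not solved:     # no progress is possible: give up
            break
        input_specs = unsolved
    return results


def one_flavor_per_round(specs):
    """Toy round: specs sharing the first spec's flavor are compatible."""
    flavor = specs[0][1]
    solved = [s for s in specs if s[1] == flavor]
    unsolved = [s for s in specs if s[1] != flavor]
    return solved, unsolved


# Two conflicting "MPI flavors" force two rounds
rounds = solve_in_rounds(
    [("mpileaks", "mpich"), ("hdf5", "mpich"), ("mpileaks", "zmpi")],
    one_flavor_per_round,
)
```

The two `mpich` specs land in the first round; the `zmpi` spec is unsolved and gets its own second round, mirroring how the real generator yields one result per round.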
View File

@@ -7,6 +7,52 @@
 % This logic program implements Spack's concretizer
 %=============================================================================

+%-----------------------------------------------------------------------------
+% Map literal input specs to facts that drive the solve
+%-----------------------------------------------------------------------------
+
+% Give clingo the choice to solve an input spec or not
+{ literal_solved(ID) } :- literal(ID).
+literal_not_solved(ID) :- not literal_solved(ID), literal(ID).
+
+% If concretize_everything() is a fact, then we cannot have unsolved specs
+:- literal_not_solved(ID), concretize_everything.
+
+% Make a problem with "zero literals solved" unsat. This is to trigger
+% looking for solutions to the ASP problem with "errors", which results
+% in better reporting for users. See #30669 for details.
+1 { literal_solved(ID) : literal(ID) }.
+
+opt_criterion(300, "number of input specs not concretized").
+#minimize{ 0@300: #true }.
+#minimize { 1@300,ID : literal_not_solved(ID) }.
+
+% Map constraint on the literal ID to the correct PSID
+attr(Name, A1) :- literal(LiteralID, Name, A1), literal_solved(LiteralID).
+attr(Name, A1, A2) :- literal(LiteralID, Name, A1, A2), literal_solved(LiteralID).
+attr(Name, A1, A2, A3) :- literal(LiteralID, Name, A1, A2, A3), literal_solved(LiteralID).
+
+% For these two atoms we only need implications in one direction
+root(Package) :- attr("root", Package).
+virtual_root(Package) :- attr("virtual_root", Package).
+
+node_platform_set(Package, Platform) :- attr("node_platform_set", Package, Platform).
+node_os_set(Package, OS) :- attr("node_os_set", Package, OS).
+node_target_set(Package, Target) :- attr("node_target_set", Package, Target).
+node_flag_set(Package, Flag, Value) :- attr("node_flag_set", Package, Flag, Value).
+node_compiler_version_set(Package, Compiler, Version)
+  :- attr("node_compiler_version_set", Package, Compiler, Version).
+variant_default_value_from_cli(Package, Variant, Value)
+  :- attr("variant_default_value_from_cli", Package, Variant, Value).
+
+#defined concretize_everything/0.
+#defined literal/1.
+#defined literal/3.
+#defined literal/4.
+#defined literal/5.

 %-----------------------------------------------------------------------------
 % Version semantics
 %-----------------------------------------------------------------------------
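As a worked example of the fact schema these rules consume, here is a small generator (a simplification for illustration only; the real facts are emitted by Spack's solver setup, and this helper's name and signature are invented) that reproduces the literals the commit message shows for `zlib@1.2.11`, plus the unary `literal(ID)` atom that registers the ID itself:

```python
def literal_facts(spec_id, name, version=None):
    """Emit the ASP literal facts for one root spec request."""
    facts = [
        # unary literal: registers the spec ID for the choice rule
        'literal({0}).'.format(spec_id),
        # the requests tied to that ID
        'literal({0},"root","{1}").'.format(spec_id, name),
        'literal({0},"node","{1}").'.format(spec_id, name),
    ]
    if version is not None:
        facts.append(
            'literal({0},"node_version_satisfies","{1}","{2}").'.format(
                spec_id, name, version))
    return facts


facts = literal_facts(0, "zlib", "1.2.11")
```

When `literal_solved(0)` holds, the `attr/2..4` rules above turn each of these into an active constraint; when it does not, the whole group is inert, which is exactly the lever the rounds-based solve uses.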

View File

@@ -18,37 +18,40 @@
 concretize = SpackCommand('concretize')

-@pytest.mark.parametrize('concretization', ['separately', 'together'])
-def test_concretize_all_test_dependencies(concretization):
+unification_strategies = [False, True, 'when_possible']
+
+
+@pytest.mark.parametrize('unify', unification_strategies)
+def test_concretize_all_test_dependencies(unify):
     """Check all test dependencies are concretized."""
     env('create', 'test')

     with ev.read('test') as e:
-        e.concretization = concretization
+        e.unify = unify
         add('depb')
         concretize('--test', 'all')
         assert e.matching_spec('test-dependency')

-@pytest.mark.parametrize('concretization', ['separately', 'together'])
-def test_concretize_root_test_dependencies_not_recursive(concretization):
+@pytest.mark.parametrize('unify', unification_strategies)
+def test_concretize_root_test_dependencies_not_recursive(unify):
     """Check that test dependencies are not concretized recursively."""
     env('create', 'test')

     with ev.read('test') as e:
-        e.concretization = concretization
+        e.unify = unify
         add('depb')
         concretize('--test', 'root')
         assert e.matching_spec('test-dependency') is None

-@pytest.mark.parametrize('concretization', ['separately', 'together'])
-def test_concretize_root_test_dependencies_are_concretized(concretization):
+@pytest.mark.parametrize('unify', unification_strategies)
+def test_concretize_root_test_dependencies_are_concretized(unify):
     """Check that root test dependencies are concretized."""
     env('create', 'test')

     with ev.read('test') as e:
-        e.concretization = concretization
+        e.unify = unify
         add('a')
         add('b')
         concretize('--test', 'root')

View File

@@ -2197,7 +2197,7 @@ def test_env_activate_default_view_root_unconditional(mutable_mock_env_path):
 def test_concretize_user_specs_together():
     e = ev.create('coconcretization')
-    e.concretization = 'together'
+    e.unify = True

     # Concretize a first time using 'mpich' as the MPI provider
     e.add('mpileaks')
@@ -2225,7 +2225,7 @@ def test_concretize_user_specs_together():
 def test_cant_install_single_spec_when_concretizing_together():
     e = ev.create('coconcretization')
-    e.concretization = 'together'
+    e.unify = True

     with pytest.raises(ev.SpackEnvironmentError, match=r'cannot install'):
         e.concretize_and_add('zlib')
@@ -2234,7 +2234,7 @@ def test_cant_install_single_spec_when_concretizing_together():
 def test_duplicate_packages_raise_when_concretizing_together():
     e = ev.create('coconcretization')
-    e.concretization = 'together'
+    e.unify = True

     e.add('mpileaks+opt')
     e.add('mpileaks~opt')
@@ -2556,7 +2556,7 @@ def test_custom_version_concretize_together(tmpdir):
     # Custom versions should be permitted in specs when
     # concretizing together
     e = ev.create('custom_version')
-    e.concretization = 'together'
+    e.unify = True

     # Concretize a first time using 'mpich' as the MPI provider
     e.add('hdf5@myversion')
@@ -2647,7 +2647,7 @@ def test_multiple_modules_post_env_hook(tmpdir, install_mockery, mock_fetch):
 def test_virtual_spec_concretize_together(tmpdir):
     # An environment should permit to concretize "mpi"
     e = ev.create('virtual_spec')
-    e.concretization = 'together'
+    e.unify = True
     e.add('mpi')

     e.concretize()
@@ -2989,3 +2989,19 @@ def test_environment_depfile_out(tmpdir, mock_packages):
     stdout = env('depfile', '-G', 'make')
     with open(makefile_path, 'r') as f:
         assert stdout == f.read()

+def test_unify_when_possible_works_around_conflicts():
+    e = ev.create('coconcretization')
+    e.unify = 'when_possible'
+
+    e.add('mpileaks+opt')
+    e.add('mpileaks~opt')
+    e.add('mpich')
+
+    e.concretize()
+
+    assert len([x for x in e.all_specs() if x.satisfies('mpileaks')]) == 2
+    assert len([x for x in e.all_specs() if x.satisfies('mpileaks+opt')]) == 1
+    assert len([x for x in e.all_specs() if x.satisfies('mpileaks~opt')]) == 1
+    assert len([x for x in e.all_specs() if x.satisfies('mpich')]) == 1

View File

@@ -1668,3 +1668,67 @@ def test_reuse_with_unknown_package_dont_raise(
 with spack.config.override("concretizer:reuse", True):
     s = Spec('c').concretized()
 assert s.namespace == 'builtin.mock'

+@pytest.mark.parametrize('specs,expected', [
+    (['libelf', 'libelf@0.8.10'], 1),
+    (['libdwarf%gcc', 'libelf%clang'], 2),
+    (['libdwarf%gcc', 'libdwarf%clang'], 4),
+    (['libdwarf^libelf@0.8.12', 'libdwarf^libelf@0.8.13'], 4),
+    (['hdf5', 'zmpi'], 3),
+    (['hdf5', 'mpich'], 2),
+    (['hdf5^zmpi', 'mpich'], 4),
+    (['mpi', 'zmpi'], 2),
+    (['mpi', 'mpich'], 1),
+])
+def test_best_effort_coconcretize(self, specs, expected):
+    import spack.solver.asp
+
+    if spack.config.get('config:concretizer') == 'original':
+        pytest.skip('Original concretizer cannot concretize in rounds')
+
+    specs = [spack.spec.Spec(s) for s in specs]
+    solver = spack.solver.asp.Solver()
+    solver.reuse = False
+    concrete_specs = set()
+    for result in solver.solve_in_rounds(specs):
+        for s in result.specs:
+            concrete_specs.update(s.traverse())
+
+    assert len(concrete_specs) == expected
+
+@pytest.mark.parametrize('specs,expected_spec,occurances', [
+    # The algorithm is greedy, and it might decide to solve the "best"
+    # spec early, in which case reuse is suboptimal. In this case the most
+    # recent version of libdwarf is selected and concretized to libelf@0.8.13
+    (['libdwarf@20111030^libelf@0.8.10',
+      'libdwarf@20130207^libelf@0.8.12',
+      'libdwarf@20130729'], 'libelf@0.8.12', 1),
+    # Check we reuse the best libelf in the environment
+    (['libdwarf@20130729^libelf@0.8.10',
+      'libdwarf@20130207^libelf@0.8.12',
+      'libdwarf@20111030'], 'libelf@0.8.12', 2),
+    (['libdwarf@20130729',
+      'libdwarf@20130207',
+      'libdwarf@20111030'], 'libelf@0.8.13', 3),
+    # We need to solve in 2 rounds and we expect mpich to be preferred to zmpi
+    (['hdf5+mpi', 'zmpi', 'mpich'], 'mpich', 2)
+])
+def test_best_effort_coconcretize_preferences(
+        self, specs, expected_spec, occurances
+):
+    """Test package preferences during coconcretization."""
+    import spack.solver.asp
+
+    if spack.config.get('config:concretizer') == 'original':
+        pytest.skip('Original concretizer cannot concretize in rounds')
+
+    specs = [spack.spec.Spec(s) for s in specs]
+    solver = spack.solver.asp.Solver()
+    solver.reuse = False
+    concrete_specs = {}
+    for result in solver.solve_in_rounds(specs):
+        concrete_specs.update(result.specs_by_input)
+
+    counter = 0
+    for spec in concrete_specs.values():
+        if expected_spec in spec:
+            counter += 1
+    assert counter == occurances, concrete_specs

View File

@@ -135,8 +135,6 @@ def _mock_installed(self):
     assert spack.version.Version('3') == a_spec[b][d].version
     assert spack.version.Version('3') == a_spec[d].version

-    # TODO: with reuse, this will be different -- verify the reuse case

 @pytest.mark.usefixtures('config')
 def test_specify_preinstalled_dep():

View File

@@ -3,7 +3,7 @@ spack:
   concretizer:
     reuse: false
-    unify: false
+    unify: when_possible
   config:
     install_tree:

View File

@@ -22,5 +22,5 @@ class InstalledDepsB(Package):
     version("2", "abcdef0123456789abcdef0123456789")
     version("3", "def0123456789abcdef0123456789abc")

-    depends_on("installed-deps-d", type=("build", "link"))
+    depends_on("installed-deps-d@3:", type=("build", "link"))
     depends_on("installed-deps-e", type=("build", "link"))