Compare commits: develop-20...win/port-g
106 commits (SHA1 only; the author and date columns of the original table are not recoverable):

b424388d6d
a67455707a
09ca71dbe0
ea082539e4
143146f4f3
ee6ae402aa
0b26b26821
c764f9b1ab
db19d83ea7
24256be6d6
633723236e
381f31e69e
9438cac219
85cf66f650
f3c080e546
37634f8b08
2ae8bbce9e
b8bfaf65bf
7968cb7fa2
ebc2efdfd2
ff07fd5ccb
3f83ef6566
554ce7f063
23963779f4
45c5af10c3
532a37e7ba
aeb9a92845
a3c7ad7669
b99288dcae
01b7cc5106
f5888d8127
77c838ca93
11e538d962
7d444038ee
c24471834b
b1e33ae37b
c36617f9da
deadb64206
9eaa88e467
bd58801415
548a9de671
8e7c53a8ba
5e630174a1
175a65dfba
39d4c402d5
e51748ee8f
f9457fa80b
4cc2ca3e2e
3843001004
e24bb5dd1c
f6013114eb
bdca875eb3
af8c392de2
9aa3b4619b
3d733da70a
cda99b792c
9834bad82e
3453259c98
ee243b84eb
5080e2cb45
f42ef7aea7
41793673d9
8b6a6982ee
ee74ca6391
7165e70186
97d632a161
571919992d
99112ad2ad
75c70c395d
960bdfe612
97892bda18
cead6ef98d
5d70c0f100
361632fc4b
6576655137
feb26efecd
4752d1cde3
a07afa6e1a
7327d2913a
8f8a1f7f52
eb8d836e76
bad8495e16
43de7f4881
84585ac575
86f9d3865b
834e7b2b0a
c14f23ddaa
49f3681a12
19e1d10cdf
7caf2a512d
1f6e3cc8cb
169c4245e0
ee1982010f
dd396c4a76
235802013d
e1f07e98ae
60aee6f535
2510dc9e6e
5607dd259b
7bd5d1fd3c
f6104cc3cb
b54d286b4a
ea9c488897
ba1d295023
27f04b3544
8cd9497522
.github/workflows/unit_tests.yaml

@@ -165,6 +165,7 @@ jobs:
      - name: Install Python packages
        run: |
          pip install --upgrade pip setuptools pytest coverage[toml] pytest-cov clingo pytest-xdist
+          pip install --upgrade flake8 "isort>=4.3.5" "mypy>=0.900" "click" "black"
      - name: Setup git configuration
        run: |
          # Need this for the git tests to succeed.

@@ -60,6 +60,7 @@ packages:
    xxd: [xxd-standalone, vim]
    yacc: [bison, byacc]
+    ziglang: [zig]
    zlib-api: [zlib, zlib-ng+compat]
  permissions:
    read: world
    write: user

@@ -32,9 +32,14 @@ can't be found. You can readily check if any prerequisite for using Spack is mis

Spack will take care of bootstrapping any missing dependency marked as [B]. Dependencies marked as [-] are instead required to be found on the system.

+   % echo $?
+   1

In the case of the output shown above Spack detected that both ``clingo`` and ``gnupg``
are missing and it's giving detailed information on why they are needed and whether
-they can be bootstrapped. Running a command that concretize a spec, like:
+they can be bootstrapped. The return code of this command summarizes the results, if any
+dependencies are missing the return code is ``1``, otherwise ``0``. Running a command that
+concretizes a spec, like:

.. code-block:: console

@@ -44,7 +49,7 @@ they can be bootstrapped. Running a command that concretize a spec, like:
   ==> Installing "clingo-bootstrap@spack%apple-clang@12.0.0~docs~ipo+python build_type=Release arch=darwin-catalina-x86_64" from a buildcache
   [ ... ]

-triggers the bootstrapping of clingo from pre-built binaries as expected.
+automatically triggers the bootstrapping of clingo from pre-built binaries as expected.

Users can also bootstrap all the dependencies needed by Spack in a single command, which
might be useful to setup containers or other similar environments:
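The command that follows that colon is cut off by the hunk boundary; a plausible sketch, assuming the ``spack bootstrap now`` subcommand of recent Spack releases (the subcommand name is an assumption, not confirmed by this diff):

.. code-block:: console

   % spack bootstrap now
   % echo $?
   0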
@@ -104,11 +104,13 @@ Clone `spack-configs <https://github.com/spack/spack-configs>`_ repo and activat

`Intel oneAPI CPU environment <https://github.com/spack/spack-configs/blob/main/INTEL/CPU/spack.yaml>`_ contains applications tested and validated by Intel, this list is constantly extended. And currently it supports:

- `Devito <https://www.devitoproject.org/>`_
- `GROMACS <https://www.gromacs.org/>`_
- `HPCG <https://www.hpcg-benchmark.org/>`_
- `HPL <https://netlib.org/benchmark/hpl/>`_
- `LAMMPS <https://www.lammps.org/#gsc.tab=0>`_
- `OpenFOAM <https://www.openfoam.com/>`_
- `Quantum Espresso <https://www.quantum-espresso.org/>`_
- `STREAM <https://www.cs.virginia.edu/stream/>`_
- `WRF <https://github.com/wrf-model/WRF>`_

@@ -2243,7 +2243,7 @@ looks like this:
    url = "http://www.openssl.org/source/openssl-1.0.1h.tar.gz"

    version("1.0.1h", md5="8d6d684a9430d5cc98a62a5d8fbda8cf")
-    depends_on("zlib")
+    depends_on("zlib-api")

    parallel = False

@@ -1,13 +1,13 @@
sphinx==6.2.1
sphinxcontrib-programoutput==0.17
-sphinx_design==0.4.1
+sphinx_design==0.5.0
sphinx-rtd-theme==1.2.2
python-levenshtein==0.21.1
docutils==0.18.1
-pygments==2.15.1
-urllib3==2.0.3
+pygments==2.16.1
+urllib3==2.0.4
pytest==7.4.0
isort==5.12.0
-black==23.1.0
-flake8==6.0.0
-mypy==1.4.1
+black==23.7.0
+flake8==6.1.0
+mypy==1.5.0

@@ -143,7 +143,7 @@ def get_fh(self, path: str) -> IO:
    def release_by_stat(self, stat):
        key = (stat.st_dev, stat.st_ino, os.getpid())
        open_file = self._descriptors.get(key)
-        assert open_file, "Attempted to close non-existing inode: %s" % stat.st_inode
+        assert open_file, "Attempted to close non-existing inode: %s" % stat.st_ino

        open_file.refs -= 1
        if not open_file.refs:
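The assertion fix above works because ``os.stat_result`` exposes the inode number as ``st_ino``; there is no ``st_inode`` attribute, so the old message would itself raise ``AttributeError`` whenever the assertion actually fired. A minimal standard-library sketch (the file path is arbitrary):

.. code-block:: python

   import os

   st = os.stat("/etc/hosts")                 # any existing path works
   key = (st.st_dev, st.st_ino, os.getpid())  # same key shape as release_by_stat
   print(key)
   # getattr(st, "st_inode") would raise AttributeError: the attribute does
   # not exist, which is why the assert message had to switch to st.st_ino.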
@@ -286,7 +286,7 @@ def _check_build_test_callbacks(pkgs, error_cls):
    """Ensure stand-alone test method is not included in build-time callbacks"""
    errors = []
    for pkg_name in pkgs:
-        pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
+        pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
        test_callbacks = getattr(pkg_cls, "build_time_test_callbacks", None)

        # TODO (post-34236): "test*"->"test_*" once remove deprecated methods

@@ -312,7 +312,7 @@ def _check_patch_urls(pkgs, error_cls):

    errors = []
    for pkg_name in pkgs:
-        pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
+        pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
        for condition, patches in pkg_cls.patches.items():
            for patch in patches:
                if not isinstance(patch, spack.patch.UrlPatch):

@@ -342,7 +342,7 @@ def _search_for_reserved_attributes_names_in_packages(pkgs, error_cls):
    errors = []
    for pkg_name in pkgs:
        name_definitions = collections.defaultdict(list)
-        pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
+        pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)

        for cls_item in inspect.getmro(pkg_cls):
            for name in RESERVED_NAMES:

@@ -383,7 +383,7 @@ def _ensure_packages_are_pickeleable(pkgs, error_cls):
    """Ensure that package objects are pickleable"""
    errors = []
    for pkg_name in pkgs:
-        pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
+        pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
        pkg = pkg_cls(spack.spec.Spec(pkg_name))
        try:
            pickle.dumps(pkg)

@@ -424,7 +424,7 @@ def _ensure_all_versions_can_produce_a_fetcher(pkgs, error_cls):
    """Ensure all versions in a package can produce a fetcher"""
    errors = []
    for pkg_name in pkgs:
-        pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
+        pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
        pkg = pkg_cls(spack.spec.Spec(pkg_name))
        try:
            spack.fetch_strategy.check_pkg_attributes(pkg)

@@ -449,7 +449,7 @@ def _ensure_docstring_and_no_fixme(pkgs, error_cls):
    ]
    for pkg_name in pkgs:
        details = []
-        filename = spack.repo.path.filename_for_package_name(pkg_name)
+        filename = spack.repo.PATH.filename_for_package_name(pkg_name)
        with open(filename, "r") as package_file:
            for i, line in enumerate(package_file):
                pattern = next((r for r in fixme_regexes if r.search(line)), None)

@@ -461,7 +461,7 @@ def _ensure_docstring_and_no_fixme(pkgs, error_cls):
                error_msg = "Package '{}' contains boilerplate that need to be removed"
                errors.append(error_cls(error_msg.format(pkg_name), details))

-        pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
+        pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
        if not pkg_cls.__doc__:
            error_msg = "Package '{}' miss a docstring"
            errors.append(error_cls(error_msg.format(pkg_name), []))

@@ -474,7 +474,7 @@ def _ensure_all_packages_use_sha256_checksums(pkgs, error_cls):
    """Ensure no packages use md5 checksums"""
    errors = []
    for pkg_name in pkgs:
-        pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
+        pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
        if pkg_cls.manual_download:
            continue

@@ -511,7 +511,7 @@ def _ensure_env_methods_are_ported_to_builders(pkgs, error_cls):
    """Ensure that methods modifying the build environment are ported to builder classes."""
    errors = []
    for pkg_name in pkgs:
-        pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
+        pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
        buildsystem_variant, _ = pkg_cls.variants["build_system"]
        buildsystem_names = [getattr(x, "value", x) for x in buildsystem_variant.values]
        builder_cls_names = [spack.builder.BUILDER_CLS[x].__name__ for x in buildsystem_names]

@@ -538,7 +538,7 @@ def _linting_package_file(pkgs, error_cls):
    """Check for correctness of links"""
    errors = []
    for pkg_name in pkgs:
-        pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
+        pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)

        # Does the homepage have http, and if so, does https work?
        if pkg_cls.homepage.startswith("http://"):

@@ -562,7 +562,7 @@ def _unknown_variants_in_directives(pkgs, error_cls):
    """Report unknown or wrong variants in directives for this package"""
    errors = []
    for pkg_name in pkgs:
-        pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
+        pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)

        # Check "conflicts" directive
        for conflict, triggers in pkg_cls.conflicts.items():

@@ -628,15 +628,15 @@ def _unknown_variants_in_dependencies(pkgs, error_cls):
    """Report unknown dependencies and wrong variants for dependencies"""
    errors = []
    for pkg_name in pkgs:
-        pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
-        filename = spack.repo.path.filename_for_package_name(pkg_name)
+        pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
+        filename = spack.repo.PATH.filename_for_package_name(pkg_name)
        for dependency_name, dependency_data in pkg_cls.dependencies.items():
            # No need to analyze virtual packages
-            if spack.repo.path.is_virtual(dependency_name):
+            if spack.repo.PATH.is_virtual(dependency_name):
                continue

            try:
-                dependency_pkg_cls = spack.repo.path.get_pkg_class(dependency_name)
+                dependency_pkg_cls = spack.repo.PATH.get_pkg_class(dependency_name)
            except spack.repo.UnknownPackageError:
                # This dependency is completely missing, so report
                # and continue the analysis

@@ -675,7 +675,7 @@ def _ensure_variant_defaults_are_parsable(pkgs, error_cls):
    """Ensures that variant defaults are present and parsable from cli"""
    errors = []
    for pkg_name in pkgs:
-        pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
+        pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
        for variant_name, entry in pkg_cls.variants.items():
            variant, _ = entry
            default_is_parsable = (

@@ -709,18 +709,33 @@ def _ensure_variant_defaults_are_parsable(pkgs, error_cls):
    return errors


+@package_directives
+def _ensure_variants_have_descriptions(pkgs, error_cls):
+    """Ensures that all variants have a description."""
+    errors = []
+    for pkg_name in pkgs:
+        pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
+        for variant_name, entry in pkg_cls.variants.items():
+            variant, _ = entry
+            if not variant.description:
+                error_msg = "Variant '{}' in package '{}' is missing a description"
+                errors.append(error_cls(error_msg.format(variant_name, pkg_name), []))
+
+    return errors
+
+
@package_directives
def _version_constraints_are_satisfiable_by_some_version_in_repo(pkgs, error_cls):
    """Report if version constraints used in directives are not satisfiable"""
    errors = []
    for pkg_name in pkgs:
-        pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
-        filename = spack.repo.path.filename_for_package_name(pkg_name)
+        pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
+        filename = spack.repo.PATH.filename_for_package_name(pkg_name)
        dependencies_to_check = []
        for dependency_name, dependency_data in pkg_cls.dependencies.items():
            # Skip virtual dependencies for the time being, check on
            # their versions can be added later
-            if spack.repo.path.is_virtual(dependency_name):
+            if spack.repo.PATH.is_virtual(dependency_name):
                continue

            dependencies_to_check.extend([edge.spec for edge in dependency_data.values()])

@@ -729,7 +744,7 @@ def _version_constraints_are_satisfiable_by_some_version_in_repo(pkgs, error_cls
        for s in dependencies_to_check:
            dependency_pkg_cls = None
            try:
-                dependency_pkg_cls = spack.repo.path.get_pkg_class(s.name)
+                dependency_pkg_cls = spack.repo.PATH.get_pkg_class(s.name)
                # Some packages have hacks that might cause failures on some platform
                # Allow to explicitly set conditions to skip version checks in that case
                skip_conditions = getattr(dependency_pkg_cls, "skip_version_audit", [])

@@ -772,7 +787,7 @@ def _analyze_variants_in_directive(pkg, constraint, directive, error_cls):
    except variant_exceptions as e:
        summary = pkg.name + ': wrong variant in "{0}" directive'
        summary = summary.format(directive)
-        filename = spack.repo.path.filename_for_package_name(pkg.name)
+        filename = spack.repo.PATH.filename_for_package_name(pkg.name)

        error_msg = str(e).strip()
        if isinstance(e, KeyError):
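Every audit hunk above is the same mechanical rename: the module-level repository singleton ``spack.repo.path`` becomes ``spack.repo.PATH``, following the PEP 8 convention that module-level constants are upper-case (and freeing the heavily-used name ``path`` from being shadowed). A self-contained sketch of the pattern, with hypothetical names:

.. code-block:: python

   # registry.py -- hypothetical stand-in for the spack.repo module
   class Registry:
       def get_pkg_class(self, name: str) -> type:
           return type(name.capitalize(), (), {})

   PATH = Registry()  # was: path = Registry()  (lower-case, easy to shadow)

   # call sites change mechanically, exactly as in the hunks above
   def lookup(name: str) -> type:
       return PATH.get_pkg_class(name)

   print(lookup("zlib"))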
@@ -476,13 +476,13 @@ def ensure_executables_in_path_or_raise(
def _add_externals_if_missing() -> None:
    search_list = [
        # clingo
-        spack.repo.path.get_pkg_class("cmake"),
-        spack.repo.path.get_pkg_class("bison"),
+        spack.repo.PATH.get_pkg_class("cmake"),
+        spack.repo.PATH.get_pkg_class("bison"),
        # GnuPG
-        spack.repo.path.get_pkg_class("gawk"),
+        spack.repo.PATH.get_pkg_class("gawk"),
    ]
    if IS_WINDOWS:
-        search_list.append(spack.repo.path.get_pkg_class("winbison"))
+        search_list.append(spack.repo.PATH.get_pkg_class("winbison"))
    detected_packages = spack.detection.by_executable(search_list)
    spack.detection.update_configuration(detected_packages, scope="bootstrap")

@@ -1256,9 +1256,8 @@ def make_stack(tb, stack=None):
        func = getattr(obj, tb.tb_frame.f_code.co_name, "")
        if func:
            typename, *_ = func.__qualname__.partition(".")
-
-            if isinstance(obj, CONTEXT_BASES) and typename not in basenames:
-                break
+        if isinstance(obj, CONTEXT_BASES) and typename not in basenames:
+            break
    else:
        return None

@@ -248,7 +248,8 @@ def std_cmake_args(self):
    @staticmethod
    def std_args(pkg, generator=None):
        """Computes the standard cmake arguments for a generic package"""
-        generator = generator or "Unix Makefiles"
+        default_generator = "Ninja" if sys.platform == "win32" else "Unix Makefiles"
+        generator = generator or default_generator
        valid_primary_generators = ["Unix Makefiles", "Ninja"]
        primary_generator = _extract_primary_generator(generator)
        if primary_generator not in valid_primary_generators:
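The ``std_args`` change above picks Ninja on Windows, where ``make`` is usually absent, and keeps Unix Makefiles everywhere else. The same platform-dependent default in isolation (function name hypothetical):

.. code-block:: python

   import sys

   VALID_PRIMARY_GENERATORS = ["Unix Makefiles", "Ninja"]

   def cmake_generator_args(generator=None):
       # An explicit generator wins; otherwise fall back per platform.
       default = "Ninja" if sys.platform == "win32" else "Unix Makefiles"
       generator = generator or default
       if generator not in VALID_PRIMARY_GENERATORS:
           raise ValueError(f"unsupported CMake generator: {generator}")
       return ["-G", generator]

   print(cmake_generator_args())  # ['-G', 'Unix Makefiles'] off Windows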
@@ -209,5 +209,5 @@ def install(self, pkg, spec, prefix):
    def check(self):
        """Search Meson-generated files for the target ``test`` and run it if found."""
        with fs.working_dir(self.build_directory):
-            self._if_ninja_target_execute("test")
-            self._if_ninja_target_execute("check")
+            self.pkg._if_ninja_target_execute("test")
+            self.pkg._if_ninja_target_execute("check")

@@ -201,7 +201,7 @@ def update_external_dependencies(self, extendee_spec=None):
        else:
            python = self.get_external_python_for_prefix()
            if not python.concrete:
-                repo = spack.repo.path.repo_for_pkg(python)
+                repo = spack.repo.PATH.repo_for_pkg(python)
                python.namespace = repo.namespace

                # Ensure architecture information is present

@@ -301,7 +301,7 @@ def get_external_python_for_prefix(self):
            return python_externals_configured[0]

        python_externals_detection = spack.detection.by_executable(
-            [spack.repo.path.get_pkg_class("python")], path_hints=[self.spec.external_path]
+            [spack.repo.PATH.get_pkg_class("python")], path_hints=[self.spec.external_path]
        )

        python_externals_detected = [

@@ -140,8 +140,6 @@ class ROCmPackage(PackageBase):
    depends_on("hsa-rocr-dev", when="+rocm")
    depends_on("hip +rocm", when="+rocm")

-    conflicts("^blt@:0.3.6", when="+rocm")
-
    # need amd gpu type for rocm builds
    conflicts("amdgpu_target=none", when="+rocm")

@@ -535,7 +535,7 @@ def __job_name(name, suffix=""):
    """Compute the name of a named job with appropriate suffix.
    Valid suffixes are either '-remove' or empty string or None
    """
-    assert type(name) == str
+    assert isinstance(name, str)

    jname = name
    if suffix:

@@ -885,7 +885,7 @@ def generate_gitlab_ci_yaml(
    cli_scopes = [
        os.path.relpath(s.path, concrete_env_dir)
        for s in cfg.scopes().values()
-        if type(s) == cfg.ImmutableConfigScope
+        if isinstance(s, cfg.ImmutableConfigScope)
        and s.path not in env_includes
        and os.path.exists(s.path)
    ]

@@ -1504,7 +1504,7 @@ def copy_stage_logs_to_artifacts(job_spec: spack.spec.Spec, job_log_dir: str) ->
        return

    try:
-        pkg_cls = spack.repo.path.get_pkg_class(job_spec.name)
+        pkg_cls = spack.repo.PATH.get_pkg_class(job_spec.name)
        job_pkg = pkg_cls(job_spec)
        tty.debug("job package: {0}".format(job_pkg))
    except AssertionError:
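Both ci.py hunks above swap exact-type comparison for ``isinstance``, which also accepts subclasses and is what flake8's E721 check recommends. The behavioral difference in a few lines:

.. code-block:: python

   class ConfigScope: ...
   class ImmutableConfigScope(ConfigScope): ...

   s = ImmutableConfigScope()
   print(type(s) == ConfigScope)      # False: exact-type check ignores inheritance
   print(isinstance(s, ConfigScope))  # True: isinstance respects subclasses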
@@ -291,7 +291,7 @@ def ensure_single_spec_or_die(spec, matching_specs):
    if len(matching_specs) <= 1:
        return

-    format_string = "{name}{@version}{%compiler}{arch=architecture}"
+    format_string = "{name}{@version}{%compiler.name}{@compiler.version}{arch=architecture}"
    args = ["%s matches multiple packages." % spec, "Matching packages:"]
    args += [
        colorize(" @K{%s} " % s.dag_hash(7)) + s.cformat(format_string) for s in matching_specs

@@ -47,7 +47,7 @@ def configs(parser, args):


def packages(parser, args):
-    pkgs = args.name or spack.repo.path.all_package_names()
+    pkgs = args.name or spack.repo.PATH.all_package_names()
    reports = spack.audit.run_group(args.subcommand, pkgs=pkgs)
    _process_reports(reports)

@@ -57,7 +57,7 @@ def packages_https(parser, args):
    if not args.check_all and not args.name:
        tty.die("Please specify one or more packages to audit, or --all.")

-    pkgs = args.name or spack.repo.path.all_package_names()
+    pkgs = args.name or spack.repo.PATH.all_package_names()
    reports = spack.audit.run_group(args.subcommand, pkgs=pkgs)
    _process_reports(reports)

@@ -126,7 +126,7 @@ def blame(parser, args):
        blame_file = path

    if not blame_file:
-        pkg_cls = spack.repo.path.get_pkg_class(args.package_or_file)
+        pkg_cls = spack.repo.PATH.get_pkg_class(args.package_or_file)
        blame_file = pkg_cls.module.__file__.rstrip("c")  # .pyc -> .py

    # get git blame for the package

@@ -4,6 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os.path
import shutil
+import sys
import tempfile

import llnl.util.filesystem

@@ -326,6 +327,7 @@ def _status(args):
    if missing:
        print(llnl.util.tty.color.colorize(legend))
        print()
+        sys.exit(1)


def _add(args):

@@ -83,7 +83,7 @@ def checksum(parser, args):
        tty.die("`spack checksum` accepts package names, not URLs.")

    # Get the package we're going to generate checksums for
-    pkg_cls = spack.repo.path.get_pkg_class(args.package)
+    pkg_cls = spack.repo.PATH.get_pkg_class(args.package)
    pkg = pkg_cls(spack.spec.Spec(args.package))

    # Build a list of versions to checksum

@@ -210,7 +210,7 @@ def add_versions_to_package(pkg: PackageBase, version_lines: str):

    """
    # Get filename and path for package
-    filename = spack.repo.path.filename_for_package_name(pkg.name)
+    filename = spack.repo.PATH.filename_for_package_name(pkg.name)
    num_versions_added = 0

    version_statement_re = re.compile(r"([\t ]+version\([^\)]*\))")

@@ -17,6 +17,7 @@
import spack.config
import spack.repo
import spack.stage
+import spack.store
import spack.util.path
from spack.paths import lib_path, var_path

@@ -121,7 +122,7 @@ def clean(parser, args):

    if args.failures:
        tty.msg("Removing install failure marks")
-        spack.installer.clear_failures()
+        spack.store.STORE.failure_tracker.clear_all()

    if args.misc_cache:
        tty.msg("Removing cached information on repositories")

@@ -36,13 +36,13 @@
    "bash": {
        "aliases": True,
        "format": "bash",
-        "header": os.path.join(spack.paths.share_path, "bash", "spack-completion.in"),
+        "header": os.path.join(spack.paths.share_path, "bash", "spack-completion.bash"),
        "update": os.path.join(spack.paths.share_path, "spack-completion.bash"),
    },
    "fish": {
        "aliases": True,
        "format": "fish",
-        "header": os.path.join(spack.paths.share_path, "fish", "spack-completion.in"),
+        "header": os.path.join(spack.paths.share_path, "fish", "spack-completion.fish"),
        "update": os.path.join(spack.paths.share_path, "spack-completion.fish"),
    },
}

@@ -915,11 +915,11 @@ def get_repository(args, name):
        )
    else:
        if spec.namespace:
-            repo = spack.repo.path.get_repo(spec.namespace, None)
+            repo = spack.repo.PATH.get_repo(spec.namespace, None)
            if not repo:
                tty.die("Unknown namespace: '{0}'".format(spec.namespace))
        else:
-            repo = spack.repo.path.first_repo()
+            repo = spack.repo.PATH.first_repo()

    # Set the namespace on the spec if it's not there already
    if not spec.namespace:

@@ -47,14 +47,14 @@ def inverted_dependencies():
    actual dependents.
    """
    dag = {}
-    for pkg_cls in spack.repo.path.all_package_classes():
+    for pkg_cls in spack.repo.PATH.all_package_classes():
        dag.setdefault(pkg_cls.name, set())
        for dep in pkg_cls.dependencies:
            deps = [dep]

            # expand virtuals if necessary
-            if spack.repo.path.is_virtual(dep):
-                deps += [s.name for s in spack.repo.path.providers_for(dep)]
+            if spack.repo.PATH.is_virtual(dep):
+                deps += [s.name for s in spack.repo.PATH.providers_for(dep)]

            for d in deps:
                dag.setdefault(d, set()).add(pkg_cls.name)
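``inverted_dependencies`` flips the package-to-dependency edges into a dependents map, first expanding each virtual dependency into its providers. The same inversion over plain dictionaries (all data hypothetical):

.. code-block:: python

   # forward edges: package -> direct dependencies (virtuals included)
   dependencies = {"hdf5": {"zlib-api", "mpi"}, "openmpi": set(), "zlib": set()}
   # virtual -> providers, standing in for spack.repo.PATH.providers_for()
   providers = {"mpi": ["openmpi"], "zlib-api": ["zlib"]}

   dag = {}
   for pkg, deps in dependencies.items():
       dag.setdefault(pkg, set())
       for dep in deps:
           names = [dep] + providers.get(dep, [])  # keep the virtual, add providers
           for d in names:
               dag.setdefault(d, set()).add(pkg)

   print(dag["zlib"])  # {'hdf5'}: zlib provides zlib-api, so it inherits the dependent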
@@ -98,7 +98,7 @@ def dev_build(self, args):
        tty.die("spack dev-build only takes one spec.")

    spec = specs[0]
-    if not spack.repo.path.exists(spec.name):
+    if not spack.repo.PATH.exists(spec.name):
        tty.die(
            "No package for '{0}' was found.".format(spec.name),
            " Use `spack create` to create a new package",

@@ -31,9 +31,9 @@ def edit_package(name, repo_path, namespace):
    if repo_path:
        repo = spack.repo.Repo(repo_path)
    elif namespace:
-        repo = spack.repo.path.get_repo(namespace)
+        repo = spack.repo.PATH.get_repo(namespace)
    else:
-        repo = spack.repo.path
+        repo = spack.repo.PATH
    path = repo.filename_for_package_name(name)

    spec = Spec(name)

@@ -58,7 +58,7 @@ def extensions(parser, args):

    extendable_pkgs = []
    for name in spack.repo.all_package_names():
-        pkg_cls = spack.repo.path.get_pkg_class(name)
+        pkg_cls = spack.repo.PATH.get_pkg_class(name)
        if pkg_cls.extendable:
            extendable_pkgs.append(name)

@@ -81,7 +81,7 @@ def extensions(parser, args):

    if args.show in ("packages", "all"):
        # List package names of extensions
-        extensions = spack.repo.path.extensions_for(spec)
+        extensions = spack.repo.PATH.extensions_for(spec)
        if not extensions:
            tty.msg("%s has no extensions." % spec.cshort_spec)
        else:

@@ -133,9 +133,9 @@ def external_find(args):

    # Add the packages that have been required explicitly
    if args.packages:
-        pkg_cls_to_check = [spack.repo.path.get_pkg_class(pkg) for pkg in args.packages]
+        pkg_cls_to_check = [spack.repo.PATH.get_pkg_class(pkg) for pkg in args.packages]
        if args.tags:
-            allowed = set(spack.repo.path.packages_with_tags(*args.tags))
+            allowed = set(spack.repo.PATH.packages_with_tags(*args.tags))
            pkg_cls_to_check = [x for x in pkg_cls_to_check if x.name in allowed]

    if args.tags and not pkg_cls_to_check:

@@ -144,15 +144,15 @@ def external_find(args):
        # Since tags are cached it's much faster to construct what we need
        # to search directly, rather than filtering after the fact
        pkg_cls_to_check = [
-            spack.repo.path.get_pkg_class(pkg_name)
+            spack.repo.PATH.get_pkg_class(pkg_name)
            for tag in args.tags
-            for pkg_name in spack.repo.path.packages_with_tags(tag)
+            for pkg_name in spack.repo.PATH.packages_with_tags(tag)
        ]
        pkg_cls_to_check = list(set(pkg_cls_to_check))

    # If the list of packages is empty, search for every possible package
    if not args.tags and not pkg_cls_to_check:
-        pkg_cls_to_check = list(spack.repo.path.all_package_classes())
+        pkg_cls_to_check = list(spack.repo.PATH.all_package_classes())

    # If the user specified any packages to exclude from external find, add them here
    if args.exclude:

@@ -239,7 +239,7 @@ def _collect_and_consume_cray_manifest_files(

def external_list(args):
    # Trigger a read of all packages, might take a long time.
-    list(spack.repo.path.all_package_classes())
+    list(spack.repo.PATH.all_package_classes())
    # Print all the detectable packages
    tty.msg("Detectable packages per repository")
    for namespace, pkgs in sorted(spack.package_base.detectable_packages.items()):

@@ -268,7 +268,7 @@ def find(parser, args):

    # If tags have been specified on the command line, filter by tags
    if args.tags:
-        packages_with_tags = spack.repo.path.packages_with_tags(*args.tags)
+        packages_with_tags = spack.repo.PATH.packages_with_tags(*args.tags)
        results = [x for x in results if x.name in packages_with_tags]

    if args.loaded:

@@ -349,7 +349,7 @@ def print_virtuals(pkg):

def info(parser, args):
    spec = spack.spec.Spec(args.package)
-    pkg_cls = spack.repo.path.get_pkg_class(spec.name)
+    pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
    pkg = pkg_cls(spec)

    # Output core package information

@@ -107,7 +107,7 @@ def match(p, f):
        if f.match(p):
            return True

-        pkg_cls = spack.repo.path.get_pkg_class(p)
+        pkg_cls = spack.repo.PATH.get_pkg_class(p)
        if pkg_cls.__doc__:
            return f.match(pkg_cls.__doc__)
        return False

@@ -159,7 +159,7 @@ def get_dependencies(pkg):
@formatter
def version_json(pkg_names, out):
    """Print all packages with their latest versions."""
-    pkg_classes = [spack.repo.path.get_pkg_class(name) for name in pkg_names]
+    pkg_classes = [spack.repo.PATH.get_pkg_class(name) for name in pkg_names]

    out.write("[\n")

@@ -201,7 +201,7 @@ def html(pkg_names, out):
    """

    # Read in all packages
-    pkg_classes = [spack.repo.path.get_pkg_class(name) for name in pkg_names]
+    pkg_classes = [spack.repo.PATH.get_pkg_class(name) for name in pkg_names]

    # Start at 2 because the title of the page from Sphinx is id1.
    span_id = 2

@@ -313,13 +313,13 @@ def list(parser, args):

    # If tags have been specified on the command line, filter by tags
    if args.tags:
-        packages_with_tags = spack.repo.path.packages_with_tags(*args.tags)
+        packages_with_tags = spack.repo.PATH.packages_with_tags(*args.tags)
        sorted_packages = [p for p in sorted_packages if p in packages_with_tags]

    if args.update:
        # change output stream if user asked for update
        if os.path.exists(args.update):
-            if os.path.getmtime(args.update) > spack.repo.path.last_mtime():
+            if os.path.getmtime(args.update) > spack.repo.PATH.last_mtime():
                tty.msg("File is up to date: %s" % args.update)
                return

@@ -109,7 +109,7 @@ def location(parser, args):
        return

    if args.packages:
-        print(spack.repo.path.first_repo().root)
+        print(spack.repo.PATH.first_repo().root)
        return

    if args.stages:

@@ -135,7 +135,7 @@ def location(parser, args):

    # Package dir just needs the spec name
    if args.package_dir:
-        print(spack.repo.path.dirname_for_package_name(spec.name))
+        print(spack.repo.PATH.dirname_for_package_name(spec.name))
        return

    # Either concretize or filter from already concretized environment

@@ -54,11 +54,11 @@ def setup_parser(subparser):

def packages_to_maintainers(package_names=None):
    if not package_names:
-        package_names = spack.repo.path.all_package_names()
+        package_names = spack.repo.PATH.all_package_names()

    pkg_to_users = defaultdict(lambda: set())
    for name in package_names:
-        cls = spack.repo.path.get_pkg_class(name)
+        cls = spack.repo.PATH.get_pkg_class(name)
        for user in cls.maintainers:
            pkg_to_users[name].add(user)

@@ -67,8 +67,8 @@ def packages_to_maintainers(package_names=None):

def maintainers_to_packages(users=None):
    user_to_pkgs = defaultdict(lambda: [])
-    for name in spack.repo.path.all_package_names():
-        cls = spack.repo.path.get_pkg_class(name)
+    for name in spack.repo.PATH.all_package_names():
+        cls = spack.repo.PATH.get_pkg_class(name)
        for user in cls.maintainers:
            lower_users = [u.lower() for u in users]
            if not users or user.lower() in lower_users:

@@ -80,8 +80,8 @@ def maintainers_to_packages(users=None):
def maintained_packages():
    maintained = []
    unmaintained = []
-    for name in spack.repo.path.all_package_names():
-        cls = spack.repo.path.get_pkg_class(name)
+    for name in spack.repo.PATH.all_package_names():
+        cls = spack.repo.PATH.get_pkg_class(name)
        if cls.maintainers:
            maintained.append(name)
        else:

@@ -474,7 +474,7 @@ def create_mirror_for_all_specs(path, skip_unstable_versions, selection_fn):
        path, skip_unstable_versions=skip_unstable_versions
    )
    for candidate in mirror_specs:
-        pkg_cls = spack.repo.path.get_pkg_class(candidate.name)
+        pkg_cls = spack.repo.PATH.get_pkg_class(candidate.name)
        pkg_obj = pkg_cls(spack.spec.Spec(candidate))
        mirror_stats.next_spec(pkg_obj.spec)
        spack.mirror.create_mirror_from_package_object(pkg_obj, mirror_cache, mirror_stats)

@@ -309,7 +309,7 @@ def refresh(module_type, specs, args):

    # Skip unknown packages.
    writers = [
-        cls(spec, args.module_set_name) for spec in specs if spack.repo.path.exists(spec.name)
+        cls(spec, args.module_set_name) for spec in specs if spack.repo.PATH.exists(spec.name)
    ]

    # Filter excluded packages early

@@ -321,12 +321,13 @@ def refresh(module_type, specs, args):
        file2writer[item.layout.filename].append(item)

    if len(file2writer) != len(writers):
+        spec_fmt_str = "{name}@={version}%{compiler}/{hash:7} {variants} arch={arch}"
        message = "Name clashes detected in module files:\n"
        for filename, writer_list in file2writer.items():
            if len(writer_list) > 1:
                message += "\nfile: {0}\n".format(filename)
                for x in writer_list:
-                    message += "spec: {0}\n".format(x.spec.format())
+                    message += "spec: {0}\n".format(x.spec.format(spec_fmt_str))
        tty.error(message)
        tty.error("Operation aborted")
        raise SystemExit(1)

@@ -376,7 +377,7 @@ def refresh(module_type, specs, args):
def modules_cmd(parser, args, module_type, callbacks=callbacks):
    # Qualifiers to be used when querying the db for specs
    constraint_qualifiers = {
-        "refresh": {"installed": True, "known": lambda x: not spack.repo.path.exists(x)}
+        "refresh": {"installed": True, "known": lambda x: not spack.repo.PATH.exists(x)}
    }
    query_args = constraint_qualifiers.get(args.subparser_name, {})

@@ -143,7 +143,7 @@ def pkg_source(args):
        tty.die("spack pkg source requires exactly one spec")

    spec = specs[0]
-    filename = spack.repo.path.filename_for_package_name(spec.name)
+    filename = spack.repo.PATH.filename_for_package_name(spec.name)

    # regular source dump -- just get the package and print its contents
    if args.canonical:

@@ -184,7 +184,7 @@ def pkg_grep(args, unknown_args):
    grouper = lambda e: e[0] // 500

    # set up iterator and save the first group to ensure we don't end up with a group of size 1
-    groups = itertools.groupby(enumerate(spack.repo.path.all_package_paths()), grouper)
+    groups = itertools.groupby(enumerate(spack.repo.PATH.all_package_paths()), grouper)
    if not groups:
        return 0  # no packages to search

@@ -24,7 +24,7 @@ def setup_parser(subparser):


def providers(parser, args):
-    valid_virtuals = sorted(spack.repo.path.provider_index.providers.keys())
+    valid_virtuals = sorted(spack.repo.PATH.provider_index.providers.keys())

    buffer = io.StringIO()
    isatty = sys.stdout.isatty()

@@ -53,5 +53,5 @@ def providers(parser, args):
    for spec in specs:
        if sys.stdout.isatty():
            print("{0}:".format(spec))
-        spack.cmd.display_specs(sorted(spack.repo.path.providers_for(spec)))
+        spack.cmd.display_specs(sorted(spack.repo.PATH.providers_for(spec)))
        print("")

@@ -29,7 +29,7 @@ def setup_parser(subparser):

def _show_patch(sha256):
    """Show a record from the patch index."""
-    patches = spack.repo.path.patch_index.index
+    patches = spack.repo.PATH.patch_index.index
    data = patches.get(sha256)

    if not data:

@@ -47,7 +47,7 @@ def _show_patch(sha256):
        owner = rec["owner"]

        if "relative_path" in rec:
-            pkg_dir = spack.repo.path.get_pkg_class(owner).package_dir
+            pkg_dir = spack.repo.PATH.get_pkg_class(owner).package_dir
            path = os.path.join(pkg_dir, rec["relative_path"])
            print("  path: %s" % path)
        else:

@@ -60,7 +60,7 @@ def _show_patch(sha256):

def resource_list(args):
    """list all resources known to spack (currently just patches)"""
-    patches = spack.repo.path.patch_index.index
+    patches = spack.repo.PATH.patch_index.index
    for sha256 in patches:
        if args.only_hashes:
            print(sha256)

@@ -68,7 +68,7 @@ def tags(parser, args):
        return

    # unique list of available tags
-    available_tags = sorted(spack.repo.path.tag_index.keys())
+    available_tags = sorted(spack.repo.PATH.tag_index.keys())
    if not available_tags:
        tty.msg("No tagged packages")
        return

@@ -228,7 +228,7 @@ def create_reporter(args, specs_to_test, test_suite):

def test_list(args):
    """list installed packages with available tests"""
-    tagged = set(spack.repo.path.packages_with_tags(*args.tag)) if args.tag else set()
+    tagged = set(spack.repo.PATH.packages_with_tags(*args.tag)) if args.tag else set()

    def has_test_and_tags(pkg_class):
        tests = spack.install_test.test_functions(pkg_class)

@@ -237,7 +237,7 @@ def has_test_and_tags(pkg_class):
    if args.list_all:
        report_packages = [
            pkg_class.name
-            for pkg_class in spack.repo.path.all_package_classes()
+            for pkg_class in spack.repo.PATH.all_package_classes()
            if has_test_and_tags(pkg_class)
        ]

@@ -155,7 +155,7 @@ def url_list(args):
    urls = set()

    # Gather set of URLs from all packages
-    for pkg_cls in spack.repo.path.all_package_classes():
+    for pkg_cls in spack.repo.PATH.all_package_classes():
        url = getattr(pkg_cls, "url", None)
        urls = url_list_parsing(args, urls, url, pkg_cls)

@@ -192,7 +192,7 @@ def url_summary(args):
    tty.msg("Generating a summary of URL parsing in Spack...")

    # Loop through all packages
-    for pkg_cls in spack.repo.path.all_package_classes():
+    for pkg_cls in spack.repo.PATH.all_package_classes():
        urls = set()
        pkg = pkg_cls(spack.spec.Spec(pkg_cls.name))

@@ -336,7 +336,7 @@ def add(self, pkg_name, fetcher):
    version_stats = UrlStats()
    resource_stats = UrlStats()

-    for pkg_cls in spack.repo.path.all_package_classes():
+    for pkg_cls in spack.repo.PATH.all_package_classes():
        npkgs += 1

        for v in pkg_cls.versions:

@@ -45,7 +45,7 @@ def setup_parser(subparser):

def versions(parser, args):
    spec = spack.spec.Spec(args.package)
-    pkg_cls = spack.repo.path.get_pkg_class(spec.name)
+    pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
    pkg = pkg_cls(spec)

    safe_versions = pkg.versions

@@ -113,7 +113,7 @@ def _valid_virtuals_and_externals(self, spec):
        pref_key = lambda spec: 0  # no-op pref key

        if spec.virtual:
-            candidates = spack.repo.path.providers_for(spec)
+            candidates = spack.repo.PATH.providers_for(spec)
            if not candidates:
                raise spack.error.UnsatisfiableProviderSpecError(candidates[0], spec)

@@ -90,7 +90,7 @@ def spec_from_entry(entry):
        name=entry["name"], version=entry["version"], compiler=compiler_str, arch=arch_str
    )

-    pkg_cls = spack.repo.path.get_pkg_class(entry["name"])
+    pkg_cls = spack.repo.PATH.get_pkg_class(entry["name"])

    if "parameters" in entry:
        variant_strs = list()

@@ -21,10 +21,11 @@
import contextlib
import datetime
import os
+import pathlib
import socket
import sys
import time
-from typing import Dict, List, NamedTuple, Set, Type, Union
+from typing import Any, Callable, Dict, Generator, List, NamedTuple, Set, Type, Union

try:
    import uuid

@@ -141,22 +142,23 @@ class InstallStatuses:
    def canonicalize(cls, query_arg):
        if query_arg is True:
            return [cls.INSTALLED]
-        elif query_arg is False:
+        if query_arg is False:
            return [cls.MISSING]
-        elif query_arg is any:
+        if query_arg is any:
            return [cls.INSTALLED, cls.DEPRECATED, cls.MISSING]
-        elif isinstance(query_arg, InstallStatus):
+        if isinstance(query_arg, InstallStatus):
            return [query_arg]
-        else:
-            try:  # Try block catches if it is not an iterable at all
-                if any(type(x) != InstallStatus for x in query_arg):
-                    raise TypeError
-            except TypeError:
-                raise TypeError(
-                    "installation query must be `any`, boolean, "
-                    "InstallStatus, or iterable of InstallStatus"
-                )
-            return query_arg
+        try:
+            statuses = list(query_arg)
+            if all(isinstance(x, InstallStatus) for x in statuses):
+                return statuses
+        except TypeError:
+            pass
+
+        raise TypeError(
+            "installation query must be `any`, boolean, "
+            "InstallStatus, or iterable of InstallStatus"
+        )


class InstallRecord:
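The ``canonicalize`` rewrite above replaces an ``elif`` ladder with guard clauses and validates iterables positively with ``all(isinstance(...))`` instead of raising inside the ``try``. The same shape in isolation (hypothetical enum, not Spack's):

.. code-block:: python

   import enum

   class Status(enum.Enum):
       INSTALLED = "installed"
       MISSING = "missing"

   def canonicalize(query):
       if query is True:               # each simple case returns immediately
           return [Status.INSTALLED]
       if query is False:
           return [Status.MISSING]
       if isinstance(query, Status):
           return [query]
       try:
           statuses = list(query)      # TypeError here if not iterable
           if all(isinstance(x, Status) for x in statuses):
               return statuses
       except TypeError:
           pass
       raise TypeError("query must be a boolean, Status, or iterable of Status")

   print(canonicalize(True), canonicalize([Status.MISSING]))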
@@ -306,15 +308,16 @@ def __reduce__(self):

    """

-#: Data class to configure locks in Database objects
-#:
-#: Args:
-#:     enable (bool): whether to enable locks or not.
-#:     database_timeout (int or None): timeout for the database lock
-#:     package_timeout (int or None): timeout for the package lock
-
-
class LockConfiguration(NamedTuple):
+    """Data class to configure locks in Database objects
+
+    Args:
+        enable: whether to enable locks or not.
+        database_timeout: timeout for the database lock
+        package_timeout: timeout for the package lock
+    """

    enable: bool
    database_timeout: Optional[int]
    package_timeout: Optional[int]

@@ -348,13 +351,230 @@ def lock_configuration(configuration):
    )


+def prefix_lock_path(root_dir: Union[str, pathlib.Path]) -> pathlib.Path:
+    """Returns the path of the prefix lock file, given the root directory.
+
+    Args:
+        root_dir: root directory containing the database directory
+    """
+    return pathlib.Path(root_dir) / _DB_DIRNAME / "prefix_lock"
+
+
+def failures_lock_path(root_dir: Union[str, pathlib.Path]) -> pathlib.Path:
+    """Returns the path of the failures lock file, given the root directory.
+
+    Args:
+        root_dir: root directory containing the database directory
+    """
+    return pathlib.Path(root_dir) / _DB_DIRNAME / "prefix_failures"
+
+
+class SpecLocker:
+    """Manages acquiring and releasing read or write locks on concrete specs."""
+
+    def __init__(self, lock_path: Union[str, pathlib.Path], default_timeout: Optional[float]):
+        self.lock_path = pathlib.Path(lock_path)
+        self.default_timeout = default_timeout
+
+        # Maps (spec.dag_hash(), spec.name) to the corresponding lock object
+        self.locks: Dict[Tuple[str, str], lk.Lock] = {}
+
+    def lock(self, spec: "spack.spec.Spec", timeout: Optional[float] = None) -> lk.Lock:
+        """Returns a lock on a concrete spec.
+
+        The lock is a byte range lock on the nth byte of a file.
+
+        The lock file is ``self.lock_path``.
+
+        n is the sys.maxsize-bit prefix of the DAG hash. This makes likelihood of collision is
+        very low AND it gives us readers-writer lock semantics with just a single lockfile, so
+        no cleanup required.
+        """
+        assert spec.concrete, "cannot lock a non-concrete spec"
+        timeout = timeout or self.default_timeout
+        key = self._lock_key(spec)
+
+        if key not in self.locks:
+            self.locks[key] = self.raw_lock(spec, timeout=timeout)
+        else:
+            self.locks[key].default_timeout = timeout
+
+        return self.locks[key]
+
+    def raw_lock(self, spec: "spack.spec.Spec", timeout: Optional[float] = None) -> lk.Lock:
+        """Returns a raw lock for a Spec, but doesn't keep track of it."""
+        return lk.Lock(
+            str(self.lock_path),
+            start=spec.dag_hash_bit_prefix(bit_length(sys.maxsize)),
+            length=1,
+            default_timeout=timeout,
+            desc=spec.name,
+        )
+
+    def has_lock(self, spec: "spack.spec.Spec") -> bool:
+        """Returns True if the spec is already managed by this spec locker"""
+        return self._lock_key(spec) in self.locks
+
+    def _lock_key(self, spec: "spack.spec.Spec") -> Tuple[str, str]:
+        return (spec.dag_hash(), spec.name)
+
+    @contextlib.contextmanager
+    def write_lock(self, spec: "spack.spec.Spec") -> Generator["SpecLocker", None, None]:
+        lock = self.lock(spec)
+        lock.acquire_write()
+
+        try:
+            yield self
+        except lk.LockError:
+            # This addresses the case where a nested lock attempt fails inside
+            # of this context manager
+            raise
+        except (Exception, KeyboardInterrupt):
+            lock.release_write()
+            raise
+        else:
+            lock.release_write()
+
+    def clear(self, spec: "spack.spec.Spec") -> Tuple[bool, Optional[lk.Lock]]:
+        key = self._lock_key(spec)
+        lock = self.locks.pop(key, None)
+        return bool(lock), lock
+
+    def clear_all(self, clear_fn: Optional[Callable[[lk.Lock], Any]] = None) -> None:
+        if clear_fn is not None:
+            for lock in self.locks.values():
+                clear_fn(lock)
+        self.locks.clear()
+
+
+class FailureTracker:
+    """Tracks installation failures.
+
+    Prefix failure marking takes the form of a byte range lock on the nth
+    byte of a file for coordinating between concurrent parallel build
+    processes and a persistent file, named with the full hash and
+    containing the spec, in a subdirectory of the database to enable
+    persistence across overlapping but separate related build processes.
+
+    The failure lock file lives alongside the install DB.
+
+    ``n`` is the sys.maxsize-bit prefix of the associated DAG hash to make
+    the likelihood of collision very low with no cleanup required.
+    """
+
+    def __init__(self, root_dir: Union[str, pathlib.Path], default_timeout: Optional[float]):
+        #: Ensure a persistent location for dealing with parallel installation
+        #: failures (e.g., across near-concurrent processes).
+        self.dir = pathlib.Path(root_dir) / _DB_DIRNAME / "failures"
+        self.dir.mkdir(parents=True, exist_ok=True)
+
+        self.locker = SpecLocker(failures_lock_path(root_dir), default_timeout=default_timeout)
+
+    def clear(self, spec: "spack.spec.Spec", force: bool = False) -> None:
+        """Removes any persistent and cached failure tracking for the spec.
+
+        see `mark()`.
+
+        Args:
+            spec: the spec whose failure indicators are being removed
+            force: True if the failure information should be cleared when a failure lock
+                exists for the file, or False if the failure should not be cleared (e.g.,
+                it may be associated with a concurrent build)
+        """
+        locked = self.lock_taken(spec)
+        if locked and not force:
+            tty.msg(f"Retaining failure marking for {spec.name} due to lock")
+            return
+
+        if locked:
+            tty.warn(f"Removing failure marking despite lock for {spec.name}")
+
+        succeeded, lock = self.locker.clear(spec)
+        if succeeded and lock is not None:
+            lock.release_write()
+
+        if self.persistent_mark(spec):
+            path = self._path(spec)
+            tty.debug(f"Removing failure marking for {spec.name}")
+            try:
+                path.unlink()
+            except OSError as err:
+                tty.warn(
+                    f"Unable to remove failure marking for {spec.name} ({str(path)}): {str(err)}"
+                )
+
+    def clear_all(self) -> None:
+        """Force remove install failure tracking files."""
+        tty.debug("Releasing prefix failure locks")
+        self.locker.clear_all(
+            clear_fn=lambda x: x.release_write() if x.is_write_locked() else True
+        )
+
+        tty.debug("Removing prefix failure tracking files")
+        try:
+            for fail_mark in os.listdir(str(self.dir)):
+                try:
+                    (self.dir / fail_mark).unlink()
+                except OSError as exc:
+                    tty.warn(f"Unable to remove failure marking file {fail_mark}: {str(exc)}")
+        except OSError as exc:
+            tty.warn(f"Unable to remove failure marking files: {str(exc)}")
+
+    def mark(self, spec: "spack.spec.Spec") -> lk.Lock:
+        """Marks a spec as failing to install.
+
+        Args:
+            spec: spec that failed to install
+        """
+        # Dump the spec to the failure file for (manual) debugging purposes
+        path = self._path(spec)
+        path.write_text(spec.to_json())
+
+        # Also ensure a failure lock is taken to prevent cleanup removal
+        # of failure status information during a concurrent parallel build.
+        if not self.locker.has_lock(spec):
+            try:
+                mark = self.locker.lock(spec)
+                mark.acquire_write()
+            except lk.LockTimeoutError:
+                # Unlikely that another process failed to install at the same
+                # time but log it anyway.
+                tty.debug(f"PID {os.getpid()} failed to mark install failure for {spec.name}")
+                tty.warn(f"Unable to mark {spec.name} as failed.")
+
+        return self.locker.lock(spec)
+
+    def has_failed(self, spec: "spack.spec.Spec") -> bool:
+        """Return True if the spec is marked as failed."""
+        # The failure was detected in this process.
+        if self.locker.has_lock(spec):
+            return True
+
+        # The failure was detected by a concurrent process (e.g., an srun),
+        # which is expected to be holding a write lock if that is the case.
+        if self.lock_taken(spec):
+            return True
+
+        # Determine if the spec may have been marked as failed by a separate
+        # spack build process running concurrently.
+        return self.persistent_mark(spec)
+
+    def lock_taken(self, spec: "spack.spec.Spec") -> bool:
+        """Return True if another process has a failure lock on the spec."""
+        check = self.locker.raw_lock(spec)
+        return check.is_write_locked()
+
+    def persistent_mark(self, spec: "spack.spec.Spec") -> bool:
+        """Determine if the spec has a persistent failure marking."""
+        return self._path(spec).exists()
+
+    def _path(self, spec: "spack.spec.Spec") -> pathlib.Path:
+        """Return the path to the spec's failure file, which may not exist."""
+        assert spec.concrete, "concrete spec required for failure path"
+        return self.dir / f"{spec.name}-{spec.dag_hash()}"
+
+
class Database:
    #: Per-process lock objects for each install prefix
    _prefix_locks: Dict[str, lk.Lock] = {}

    #: Per-process failure (lock) objects for each install prefix
    _prefix_failures: Dict[str, lk.Lock] = {}

    #: Fields written for each install record
    record_fields: Tuple[str, ...] = DEFAULT_INSTALL_RECORD_FIELDS
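Taken together, ``SpecLocker`` and ``FailureTracker`` lift the prefix-failure bookkeeping out of ``Database`` (the old inline methods are deleted below). A hedged usage sketch, assuming a Spack checkout on ``sys.path`` and an arbitrary store root (both paths hypothetical):

.. code-block:: python

   import spack.spec
   from spack.database import FailureTracker  # as introduced above

   tracker = FailureTracker(root_dir="/tmp/spack-store", default_timeout=3.0)
   spec = spack.spec.Spec("zlib").concretized()  # failure paths need concrete specs

   tracker.mark(spec)               # write the failure file, take the byte-range lock
   assert tracker.has_failed(spec)  # visible here and to concurrent processes
   tracker.clear(spec, force=True)  # drop both the lock and the persistent marking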
@@ -392,24 +612,10 @@ def __init__(
|
||||
self._verifier_path = os.path.join(self.database_directory, "index_verifier")
|
||||
self._lock_path = os.path.join(self.database_directory, "lock")
|
||||
|
||||
# This is for other classes to use to lock prefix directories.
|
||||
self.prefix_lock_path = os.path.join(self.database_directory, "prefix_lock")
|
||||
|
||||
# Ensure a persistent location for dealing with parallel installation
|
||||
# failures (e.g., across near-concurrent processes).
|
||||
self._failure_dir = os.path.join(self.database_directory, "failures")
|
||||
|
||||
# Support special locks for handling parallel installation failures
|
||||
# of a spec.
|
||||
self.prefix_fail_path = os.path.join(self.database_directory, "prefix_failures")
|
||||
|
||||
# Create needed directories and files
|
||||
if not is_upstream and not os.path.exists(self.database_directory):
|
||||
fs.mkdirp(self.database_directory)
|
||||
|
||||
if not is_upstream and not os.path.exists(self._failure_dir):
|
||||
fs.mkdirp(self._failure_dir)
|
||||
|
||||
self.is_upstream = is_upstream
|
||||
self.last_seen_verifier = ""
|
||||
# Failed write transactions (interrupted by exceptions) will alert
|
||||
@@ -423,15 +629,7 @@ def __init__(
|
||||
|
||||
# initialize rest of state.
|
||||
self.db_lock_timeout = lock_cfg.database_timeout
|
||||
self.package_lock_timeout = lock_cfg.package_timeout
|
||||
|
||||
tty.debug("DATABASE LOCK TIMEOUT: {0}s".format(str(self.db_lock_timeout)))
|
||||
timeout_format_str = (
|
||||
"{0}s".format(str(self.package_lock_timeout))
|
||||
if self.package_lock_timeout
|
||||
else "No timeout"
|
||||
)
|
||||
tty.debug("PACKAGE LOCK TIMEOUT: {0}".format(str(timeout_format_str)))
|
||||
|
||||
self.lock: Union[ForbiddenLock, lk.Lock]
|
||||
if self.is_upstream:
|
||||
@@ -471,212 +669,6 @@ def read_transaction(self):
        """Get a read lock context manager for use in a `with` block."""
        return self._read_transaction_impl(self.lock, acquire=self._read)

    def _failed_spec_path(self, spec):
        """Return the path to the spec's failure file, which may not exist."""
        if not spec.concrete:
            raise ValueError("Concrete spec required for failure path for {0}".format(spec.name))

        return os.path.join(self._failure_dir, "{0}-{1}".format(spec.name, spec.dag_hash()))

    def clear_all_failures(self) -> None:
        """Force remove install failure tracking files."""
        tty.debug("Releasing prefix failure locks")
        for pkg_id in list(self._prefix_failures.keys()):
            lock = self._prefix_failures.pop(pkg_id, None)
            if lock:
                lock.release_write()

        # Remove all failure markings (aka files)
        tty.debug("Removing prefix failure tracking files")
        for fail_mark in os.listdir(self._failure_dir):
            try:
                os.remove(os.path.join(self._failure_dir, fail_mark))
            except OSError as exc:
                tty.warn(
                    "Unable to remove failure marking file {0}: {1}".format(fail_mark, str(exc))
                )

    def clear_failure(self, spec: "spack.spec.Spec", force: bool = False) -> None:
        """
        Remove any persistent and cached failure tracking for the spec.

        See `mark_failed()`.

        Args:
            spec: the spec whose failure indicators are being removed
            force: True if the failure information should be cleared when a prefix failure
                lock exists for the file, or False if the failure should not be cleared (e.g.,
                it may be associated with a concurrent build)
        """
        failure_locked = self.prefix_failure_locked(spec)
        if failure_locked and not force:
            tty.msg("Retaining failure marking for {0} due to lock".format(spec.name))
            return

        if failure_locked:
            tty.warn("Removing failure marking despite lock for {0}".format(spec.name))

        lock = self._prefix_failures.pop(spec.prefix, None)
        if lock:
            lock.release_write()

        if self.prefix_failure_marked(spec):
            try:
                path = self._failed_spec_path(spec)
                tty.debug("Removing failure marking for {0}".format(spec.name))
                os.remove(path)
            except OSError as err:
                tty.warn(
                    "Unable to remove failure marking for {0} ({1}): {2}".format(
                        spec.name, path, str(err)
                    )
                )

    def mark_failed(self, spec: "spack.spec.Spec") -> lk.Lock:
        """
        Mark a spec as failing to install.

        Prefix failure marking takes the form of a byte range lock on the nth
        byte of a file for coordinating between concurrent parallel build
        processes and a persistent file, named with the full hash and
        containing the spec, in a subdirectory of the database to enable
        persistence across overlapping but separate related build processes.

        The failure lock file, ``spack.store.STORE.db.prefix_failures``, lives
        alongside the install DB. ``n`` is the sys.maxsize-bit prefix of the
        associated DAG hash to make the likelihood of collision very low with
        no cleanup required.
        """
        # Dump the spec to the failure file for (manual) debugging purposes
        path = self._failed_spec_path(spec)
        with open(path, "w") as f:
            spec.to_json(f)

        # Also ensure a failure lock is taken to prevent cleanup removal
        # of failure status information during a concurrent parallel build.
        err = "Unable to mark {0.name} as failed."

        prefix = spec.prefix
        if prefix not in self._prefix_failures:
            mark = lk.Lock(
                self.prefix_fail_path,
                start=spec.dag_hash_bit_prefix(bit_length(sys.maxsize)),
                length=1,
                default_timeout=self.package_lock_timeout,
                desc=spec.name,
            )

            try:
                mark.acquire_write()
            except lk.LockTimeoutError:
                # Unlikely that another process failed to install at the same
                # time but log it anyway.
                tty.debug(
                    "PID {0} failed to mark install failure for {1}".format(os.getpid(), spec.name)
                )
                tty.warn(err.format(spec))

            # Whether we or another process marked it as a failure, track it
            # as such locally.
            self._prefix_failures[prefix] = mark

        return self._prefix_failures[prefix]

    def prefix_failed(self, spec: "spack.spec.Spec") -> bool:
        """Return True if the prefix (installation) is marked as failed."""
        # The failure was detected in this process.
        if spec.prefix in self._prefix_failures:
            return True

        # The failure was detected by a concurrent process (e.g., an srun),
        # which is expected to be holding a write lock if that is the case.
        if self.prefix_failure_locked(spec):
            return True

        # Determine if the spec may have been marked as failed by a separate
        # spack build process running concurrently.
        return self.prefix_failure_marked(spec)

    def prefix_failure_locked(self, spec: "spack.spec.Spec") -> bool:
        """Return True if a process has a failure lock on the spec."""
        check = lk.Lock(
            self.prefix_fail_path,
            start=spec.dag_hash_bit_prefix(bit_length(sys.maxsize)),
            length=1,
            default_timeout=self.package_lock_timeout,
            desc=spec.name,
        )

        return check.is_write_locked()

    def prefix_failure_marked(self, spec: "spack.spec.Spec") -> bool:
        """Determine if the spec has a persistent failure marking."""
        return os.path.exists(self._failed_spec_path(spec))

    def prefix_lock(self, spec: "spack.spec.Spec", timeout: Optional[float] = None) -> lk.Lock:
        """Get a lock on a particular spec's installation directory.

        NOTE: The installation directory **does not** need to exist.

        Prefix lock is a byte range lock on the nth byte of a file.

        The lock file is ``spack.store.STORE.db.prefix_lock`` -- the DB
        tells us what to call it and it lives alongside the install DB.

        n is the sys.maxsize-bit prefix of the DAG hash. This makes the
        likelihood of collision very low AND it gives us
        readers-writer lock semantics with just a single lockfile, so no
        cleanup required.
        """
        timeout = timeout or self.package_lock_timeout
        prefix = spec.prefix
        if prefix not in self._prefix_locks:
            self._prefix_locks[prefix] = lk.Lock(
                self.prefix_lock_path,
                start=spec.dag_hash_bit_prefix(bit_length(sys.maxsize)),
                length=1,
                default_timeout=timeout,
                desc=spec.name,
            )
        elif timeout != self._prefix_locks[prefix].default_timeout:
            self._prefix_locks[prefix].default_timeout = timeout

        return self._prefix_locks[prefix]

    @contextlib.contextmanager
    def prefix_read_lock(self, spec):
        prefix_lock = self.prefix_lock(spec)
        prefix_lock.acquire_read()

        try:
            yield self
        except lk.LockError:
            # This addresses the case where a nested lock attempt fails inside
            # of this context manager
            raise
        except (Exception, KeyboardInterrupt):
            prefix_lock.release_read()
            raise
        else:
            prefix_lock.release_read()

    @contextlib.contextmanager
    def prefix_write_lock(self, spec):
        prefix_lock = self.prefix_lock(spec)
        prefix_lock.acquire_write()

        try:
            yield self
        except lk.LockError:
            # This addresses the case where a nested lock attempt fails inside
            # of this context manager
            raise
        except (Exception, KeyboardInterrupt):
            prefix_lock.release_write()
            raise
        else:
            prefix_lock.release_write()

    def _write_to_file(self, stream):
        """Write out the database in JSON format to the stream passed
        as argument.
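The byte-range scheme these methods describe is worth a concrete illustration. Below is a minimal sketch of the same idea using POSIX fcntl directly; it is not Spack's llnl.util.lock API, and the helper names and the hash-to-offset mapping are illustrative stand-ins for dag_hash_bit_prefix(bit_length(sys.maxsize)):

import fcntl
import hashlib
import os
import sys


def failure_lock_offset(dag_hash: str) -> int:
    # Map the spec's DAG hash onto a byte offset so each spec locks its own
    # byte of the shared file; collisions are unlikely because the offset
    # space is sys.maxsize wide.
    digest = hashlib.sha256(dag_hash.encode()).digest()
    return int.from_bytes(digest[:8], "big") % sys.maxsize


def try_mark_failed(lock_file: str, dag_hash: str) -> bool:
    # Take an exclusive, non-blocking lock on a single byte of the shared
    # failure file; one lockfile therefore serves every spec and no cleanup
    # is needed afterwards.
    fd = os.open(lock_file, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB, 1, failure_lock_offset(dag_hash))
        return True  # held until this build process exits or unlocks
    except OSError:
        os.close(fd)
        return False  # a concurrent build process already holds the byte

The write lock held by the failing process is what distinguishes an in-progress concurrent failure (prefix_failure_locked) from a stale marking left behind by a finished build (prefix_failure_marked).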
@@ -33,7 +33,7 @@ class OpenMpi(Package):
import functools
import os.path
import re
from typing import List, Optional, Set, Union
from typing import Any, Callable, List, Optional, Set, Tuple, Union

import llnl.util.lang
import llnl.util.tty.color
@@ -520,7 +520,8 @@ def _execute_conflicts(pkg):

        # Save in a list the conflicts and the associated custom messages
        when_spec_list = pkg.conflicts.setdefault(conflict_spec, [])
        when_spec_list.append((when_spec, msg))
        msg_with_name = f"{pkg.name}: {msg}" if msg is not None else msg
        when_spec_list.append((when_spec, msg_with_name))

    return _execute_conflicts
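The practical effect of the msg_with_name change is that conflict messages are now prefixed with the offending package's name. A hypothetical directive makes this concrete (the package, variant, and message below are illustrative, not taken from this diff):

from spack.package import *


class Libfoo(Package):
    # With the change above, a failed concretization reports
    # "libfoo: CUDA support requires version 1.1 or newer"
    # rather than the bare message.
    conflicts("+cuda", when="@:1.0", msg="CUDA support requires version 1.1 or newer")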
@@ -663,39 +664,35 @@ def _execute_patch(pkg_or_dep):

@directive("variants")
def variant(
    name,
    default=None,
    description="",
    values=None,
    multi=None,
    validator=None,
    when=None,
    sticky=False,
    name: str,
    default: Optional[Any] = None,
    description: str = "",
    values: Optional[Union[collections.abc.Sequence, Callable[[Any], bool]]] = None,
    multi: Optional[bool] = None,
    validator: Optional[Callable[[str, str, Tuple[Any, ...]], None]] = None,
    when: Optional[Union[str, bool]] = None,
    sticky: bool = False,
):
    """Define a variant for the package. Packager can specify a default
    value as well as a text description.
    """Define a variant for the package.

    Packager can specify a default value as well as a text description.

    Args:
        name (str): name of the variant
        default (str or bool): default value for the variant, if not
            specified otherwise the default will be False for a boolean
            variant and 'nothing' for a multi-valued variant
        description (str): description of the purpose of the variant
        values (tuple or typing.Callable): either a tuple of strings containing the
            allowed values, or a callable accepting one value and returning
            True if it is valid
        multi (bool): if False only one value per spec is allowed for
            this variant
        validator (typing.Callable): optional group validator to enforce additional
            logic. It receives the package name, the variant name and a tuple
            of values and should raise an instance of SpackError if the group
            doesn't meet the additional constraints
        when (spack.spec.Spec, bool): optional condition on which the
            variant applies
        sticky (bool): the variant should not be changed by the concretizer to
            find a valid concrete spec.
        name: Name of the variant
        default: Default value for the variant, if not specified otherwise the default will be
            False for a boolean variant and 'nothing' for a multi-valued variant
        description: Description of the purpose of the variant
        values: Either a tuple of strings containing the allowed values, or a callable accepting
            one value and returning True if it is valid
        multi: If False only one value per spec is allowed for this variant
        validator: Optional group validator to enforce additional logic. It receives the package
            name, the variant name and a tuple of values and should raise an instance of
            SpackError if the group doesn't meet the additional constraints
        when: Optional condition on which the variant applies
        sticky: The variant should not be changed by the concretizer to find a valid concrete spec

    Raises:
        DirectiveError: if arguments passed to the directive are invalid
        DirectiveError: If arguments passed to the directive are invalid
    """

    def format_error(msg, pkg):
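For context on the retyped signature, here is how a package would typically call the directive; the package and variants below are hypothetical:

from spack.package import *


class Libexample(Package):
    # A plain boolean variant.
    variant("shared", default=True, description="Build shared libraries")

    # A constrained single-valued variant, only meaningful with +shared.
    variant(
        "opt",
        default="2",
        values=("0", "1", "2", "3"),
        multi=False,
        description="Optimization level passed to the build",
        when="+shared",
    )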
@@ -900,7 +897,8 @@ def _execute_requires(pkg):

        # Save in a list the requirements and the associated custom messages
        when_spec_list = pkg.requirements.setdefault(tuple(requirement_specs), [])
        when_spec_list.append((when_spec, policy, msg))
        msg_with_name = f"{pkg.name}: {msg}" if msg is not None else msg
        when_spec_list.append((when_spec, policy, msg_with_name))

    return _execute_requires

@@ -203,7 +203,7 @@ def activate(env, use_env_repo=False):
    env.store_token = spack.store.reinitialize()

    if use_env_repo:
        spack.repo.path.put_first(env.repo)
        spack.repo.PATH.put_first(env.repo)

    tty.debug("Using environment '%s'" % env.name)

@@ -227,7 +227,7 @@ def deactivate():

    # use _repo so we only remove if a repo was actually constructed
    if _active_environment._repo:
        spack.repo.path.remove(_active_environment._repo)
        spack.repo.PATH.remove(_active_environment._repo)

    tty.debug("Deactivated environment '%s'" % _active_environment.name)

@@ -1084,8 +1084,8 @@ def add(self, user_spec, list_name=user_speclist_name):
        if list_name == user_speclist_name:
            if spec.anonymous:
                raise SpackEnvironmentError("cannot add anonymous specs to an environment")
            elif not spack.repo.path.exists(spec.name) and not spec.abstract_hash:
                virtuals = spack.repo.path.provider_index.providers.keys()
            elif not spack.repo.PATH.exists(spec.name) and not spec.abstract_hash:
                virtuals = spack.repo.PATH.provider_index.providers.keys()
                if spec.name not in virtuals:
                    msg = "no such package: %s" % spec.name
                    raise SpackEnvironmentError(msg)
@@ -1262,7 +1262,7 @@ def develop(self, spec: Spec, path: str, clone: bool = False) -> bool:
        # better if we can create the `source_path` directly into its final
        # destination.
        abspath = spack.util.path.canonicalize_path(path, default_wd=self.path)
        pkg_cls = spack.repo.path.get_pkg_class(spec.name)
        pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
        # We construct a package class ourselves, rather than asking for
        # Spec.package, since Spec only allows this when it is concrete
        package = pkg_cls(spec)
@@ -1490,7 +1490,7 @@ def _concretize_separately(self, tests=False):
        # for a write lock. We do this indirectly by retrieving the
        # provider index, which should in turn trigger the update of
        # all the indexes if there's any need for that.
        _ = spack.repo.path.provider_index
        _ = spack.repo.PATH.provider_index

        # Ensure we have compilers in compilers.yaml to avoid that
        # processes try to write the config file in parallel
@@ -2280,7 +2280,7 @@ def _add_to_environment_repository(self, spec_node: Spec) -> None:
        repository = spack.repo.create_or_construct(repository_dir, spec_node.namespace)
        pkg_dir = repository.dirname_for_package_name(spec_node.name)
        fs.mkdirp(pkg_dir)
        spack.repo.path.dump_provenance(spec_node, pkg_dir)
        spack.repo.PATH.dump_provenance(spec_node, pkg_dir)

    def manifest_uptodate_or_warn(self):
        """Emits a warning if the manifest file is not up-to-date."""
@@ -535,7 +535,7 @@ def edge_entry(self, edge):

def _static_edges(specs, deptype):
    for spec in specs:
        pkg_cls = spack.repo.path.get_pkg_class(spec.name)
        pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
        possible = pkg_cls.possible_dependencies(expand_virtuals=True, deptype=deptype)

        for parent_name, dependencies in possible.items():

@@ -49,7 +49,7 @@ def __call__(self, spec):


def _content_hash_override(spec):
    pkg_cls = spack.repo.path.get_pkg_class(spec.name)
    pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
    pkg = pkg_cls(spec)
    return pkg.content_hash()


@@ -1147,12 +1147,12 @@ def write_test_result(self, spec, result):
    def write_reproducibility_data(self):
        for spec in self.specs:
            repo_cache_path = self.stage.repo.join(spec.name)
            spack.repo.path.dump_provenance(spec, repo_cache_path)
            spack.repo.PATH.dump_provenance(spec, repo_cache_path)
            for vspec in spec.package.virtuals_provided:
                repo_cache_path = self.stage.repo.join(vspec.name)
                if not os.path.exists(repo_cache_path):
                    try:
                        spack.repo.path.dump_provenance(vspec, repo_cache_path)
                        spack.repo.PATH.dump_provenance(vspec, repo_cache_path)
                    except spack.repo.UnknownPackageError:
                        pass  # not all virtuals have package files

@@ -519,13 +519,6 @@ def _try_install_from_binary_cache(
    )


def clear_failures() -> None:
    """
    Remove all failure tracking markers for the Spack instance.
    """
    spack.store.STORE.db.clear_all_failures()


def combine_phase_logs(phase_log_files: List[str], log_path: str) -> None:
    """
    Read set or list of logs and combine them into one file.
@@ -597,7 +590,7 @@ def dump_packages(spec: "spack.spec.Spec", path: str) -> None:
        # Get the location of the package in the dest repo.
        dest_pkg_dir = repo.dirname_for_package_name(node.name)
        if node is spec:
            spack.repo.path.dump_provenance(node, dest_pkg_dir)
            spack.repo.PATH.dump_provenance(node, dest_pkg_dir)
        elif source_pkg_dir:
            fs.install_tree(source_pkg_dir, dest_pkg_dir)

@@ -1126,15 +1119,13 @@ class PackageInstaller:
    instance.
    """

    def __init__(self, installs: List[Tuple["spack.package_base.PackageBase", dict]] = []):
    def __init__(self, installs: List[Tuple["spack.package_base.PackageBase", dict]] = []) -> None:
        """Initialize the installer.

        Args:
            installs (list): list of tuples, where each
                tuple consists of a package (PackageBase) and its associated
                install arguments (dict)
        Return:
            PackageInstaller: instance
        """
        # List of build requests
        self.build_requests = [BuildRequest(pkg, install_args) for pkg, install_args in installs]
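A short sketch of how the installer is driven with this signature (the package object and the keep_prefix argument are illustrative):

def install_one(pkg):
    # Each entry pairs a PackageBase instance with a dict of install
    # arguments; the installer fans these out into BuildRequests.
    installer = PackageInstaller([(pkg, {"keep_prefix": True})])
    installer.install()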
@@ -1287,7 +1278,7 @@ def _check_deps_status(self, request: BuildRequest) -> None:
            dep_id = package_id(dep_pkg)

            # Check for failure since a prefix lock is not required
            if spack.store.STORE.db.prefix_failed(dep):
            if spack.store.STORE.failure_tracker.has_failed(dep):
                action = "'spack install' the dependency"
                msg = "{0} is marked as an install failure: {1}".format(dep_id, action)
                raise InstallError(err.format(request.pkg_id, msg), pkg=dep_pkg)
@@ -1502,7 +1493,7 @@ def _ensure_locked(
        if lock is None:
            tty.debug(msg.format("Acquiring", desc, pkg_id, pretty_seconds(timeout or 0)))
            op = "acquire"
            lock = spack.store.STORE.db.prefix_lock(pkg.spec, timeout)
            lock = spack.store.STORE.prefix_locker.lock(pkg.spec, timeout)
            if timeout != lock.default_timeout:
                tty.warn(
                    "Expected prefix lock timeout {0}, not {1}".format(
@@ -1627,12 +1618,12 @@ def _add_tasks(self, request: BuildRequest, all_deps):
                # Clear any persistent failure markings _unless_ they are
                # associated with another process in this parallel build
                # of the spec.
                spack.store.STORE.db.clear_failure(dep, force=False)
                spack.store.STORE.failure_tracker.clear(dep, force=False)

        install_package = request.install_args.get("install_package")
        if install_package and request.pkg_id not in self.build_tasks:
            # Be sure to clear any previous failure
            spack.store.STORE.db.clear_failure(request.spec, force=True)
            spack.store.STORE.failure_tracker.clear(request.spec, force=True)

            # If not installing dependencies, then determine their
            # installation status before proceeding
@@ -1888,7 +1879,7 @@ def _update_failed(
        err = "" if exc is None else ": {0}".format(str(exc))
        tty.debug("Flagging {0} as failed{1}".format(pkg_id, err))
        if mark:
            self.failed[pkg_id] = spack.store.STORE.db.mark_failed(task.pkg.spec)
            self.failed[pkg_id] = spack.store.STORE.failure_tracker.mark(task.pkg.spec)
        else:
            self.failed[pkg_id] = None
        task.status = STATUS_FAILED
@@ -2074,7 +2065,7 @@ def install(self) -> None:

            # Flag a failed spec. Do not need an (install) prefix lock since
            # assume using a separate (failed) prefix lock file.
            if pkg_id in self.failed or spack.store.STORE.db.prefix_failed(spec):
            if pkg_id in self.failed or spack.store.STORE.failure_tracker.has_failed(spec):
                term_status.clear()
                tty.warn("{0} failed to install".format(pkg_id))
                self._update_failed(task)
@@ -605,7 +605,7 @@ def setup_main_options(args):
        spack.config.config.scopes["command_line"].sections["repos"] = syaml.syaml_dict(
            [(key, [spack.paths.mock_packages_path])]
        )
        spack.repo.path = spack.repo.create(spack.config.config)
        spack.repo.PATH = spack.repo.create(spack.config.config)

    # If the user asked for it, don't check ssl certs.
    if args.insecure:

@@ -442,7 +442,7 @@ def mirror_archive_paths(fetcher, per_package_ref, spec=None):
    storage path of the resource associated with the specified ``fetcher``."""
    ext = None
    if spec:
        pkg_cls = spack.repo.path.get_pkg_class(spec.name)
        pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
        versions = pkg_cls.versions.get(spec.version, {})
        ext = versions.get("extension", None)
    # If the spec does not explicitly specify an extension (the default case),
@@ -474,7 +474,7 @@ def get_all_versions(specs):
    """
    version_specs = []
    for spec in specs:
        pkg_cls = spack.repo.path.get_pkg_class(spec.name)
        pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
        # Skip any package that has no known versions.
        if not pkg_cls.versions:
            tty.msg("No safe (checksummed) versions for package %s" % pkg_cls.name)

@@ -143,7 +143,7 @@ def hierarchy_tokens(self):

        # Check if all the tokens in the hierarchy are virtual specs.
        # If not warn the user and raise an error.
        not_virtual = [t for t in tokens if t != "compiler" and not spack.repo.path.is_virtual(t)]
        not_virtual = [t for t in tokens if t != "compiler" and not spack.repo.PATH.is_virtual(t)]
        if not_virtual:
            msg = "Non-virtual specs in 'hierarchy' list for lmod: {0}\n"
            msg += "Please check the 'modules.yaml' configuration files"
@@ -665,7 +665,7 @@ def __init__(self, spec):
        self.win_rpath = fsys.WindowsSimulatedRPath(self)

        if self.is_extension:
            pkg_cls = spack.repo.path.get_pkg_class(self.extendee_spec.name)
            pkg_cls = spack.repo.PATH.get_pkg_class(self.extendee_spec.name)
            pkg_cls(self.extendee_spec)._check_extendable()

        super().__init__()
@@ -728,11 +728,11 @@ def possible_dependencies(
                continue

            # expand virtuals if enabled, otherwise just stop at virtuals
            if spack.repo.path.is_virtual(name):
            if spack.repo.PATH.is_virtual(name):
                if virtuals is not None:
                    virtuals.add(name)
                if expand_virtuals:
                    providers = spack.repo.path.providers_for(name)
                    providers = spack.repo.PATH.providers_for(name)
                    dep_names = [spec.name for spec in providers]
                else:
                    visited.setdefault(cls.name, set()).add(name)
@@ -756,7 +756,7 @@ def possible_dependencies(
                continue

            try:
                dep_cls = spack.repo.path.get_pkg_class(dep_name)
                dep_cls = spack.repo.PATH.get_pkg_class(dep_name)
            except spack.repo.UnknownPackageError:
                # log unknown packages
                missing.setdefault(cls.name, set()).add(dep_name)
@@ -2209,7 +2209,7 @@ def uninstall_by_spec(spec, force=False, deprecator=None):
        pkg = None

    # Pre-uninstall hook runs first.
    with spack.store.STORE.db.prefix_write_lock(spec):
    with spack.store.STORE.prefix_locker.write_lock(spec):
        if pkg is not None:
            try:
                spack.hooks.pre_uninstall(spec)
@@ -2459,8 +2459,8 @@ def possible_dependencies(*pkg_or_spec, **kwargs):
        if not isinstance(pos, spack.spec.Spec):
            pos = spack.spec.Spec(pos)

        if spack.repo.path.is_virtual(pos.name):
            packages.extend(p.package_class for p in spack.repo.path.providers_for(pos.name))
        if spack.repo.PATH.is_virtual(pos.name):
            packages.extend(p.package_class for p in spack.repo.PATH.providers_for(pos.name))
            continue
        else:
            packages.append(pos.package_class)
@@ -147,7 +147,7 @@ def preferred_variants(cls, pkg_name):
        variants = " ".join(variants)

        # Only return variants that are actually supported by the package
        pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
        pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
        spec = spack.spec.Spec("%s %s" % (pkg_name, variants))
        return dict(
            (name, variant) for name, variant in spec.variants.items() if name in pkg_cls.variants
@@ -162,7 +162,7 @@ def spec_externals(spec):
    from spack.util.module_cmd import path_from_modules  # noqa: F401

    def _package(maybe_abstract_spec):
        pkg_cls = spack.repo.path.get_pkg_class(spec.name)
        pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
        return pkg_cls(maybe_abstract_spec)

    allpkgs = spack.config.get("packages")
@@ -199,7 +199,7 @@ def is_spec_buildable(spec):
    so_far = all_buildable  # the default "so far"

    def _package(s):
        pkg_cls = spack.repo.path.get_pkg_class(s.name)
        pkg_cls = spack.repo.PATH.get_pkg_class(s.name)
        return pkg_cls(s)

    # check whether any providers for this package override the default
@@ -238,7 +238,7 @@ def to_dict(self):

def from_dict(dictionary, repository=None):
    """Create a patch from json dictionary."""
    repository = repository or spack.repo.path
    repository = repository or spack.repo.PATH
    owner = dictionary.get("owner")
    if "owner" not in dictionary:
        raise ValueError("Invalid patch dictionary: %s" % dictionary)
@@ -149,7 +149,7 @@ def compute_loader(self, fullname):

        # If it's a module in some repo, or if it is the repo's
        # namespace, let the repo handle it.
        for repo in path.repos:
        for repo in PATH.repos:
            # We are using the namespace of the repo and the repo contains the package
            if namespace == repo.full_namespace:
                # With 2 nested conditionals we can call "repo.real_name" only once
@@ -163,7 +163,7 @@ def compute_loader(self, fullname):

        # No repo provides the namespace, but it is a valid prefix of
        # something in the RepoPath.
        if path.by_namespace.is_prefix(fullname):
        if PATH.by_namespace.is_prefix(fullname):
            return SpackNamespaceLoader()

        return None
@@ -184,9 +184,9 @@ def compute_loader(self, fullname):
def packages_path():
    """Get the test repo if it is active, otherwise the builtin repo."""
    try:
        return spack.repo.path.get_repo("builtin.mock").packages_path
        return spack.repo.PATH.get_repo("builtin.mock").packages_path
    except spack.repo.UnknownNamespaceError:
        return spack.repo.path.get_repo("builtin").packages_path
        return spack.repo.PATH.get_repo("builtin").packages_path


class GitExe:
@@ -282,7 +282,7 @@ def add_package_to_git_stage(packages):
    git = GitExe()

    for pkg_name in packages:
        filename = spack.repo.path.filename_for_package_name(pkg_name)
        filename = spack.repo.PATH.filename_for_package_name(pkg_name)
        if not os.path.isfile(filename):
            tty.die("No such package: %s. Path does not exist:" % pkg_name, filename)

@@ -1374,7 +1374,7 @@ def create(configuration):


#: Singleton repo path instance
path: Union[RepoPath, llnl.util.lang.Singleton] = llnl.util.lang.Singleton(_path)
PATH: Union[RepoPath, llnl.util.lang.Singleton] = llnl.util.lang.Singleton(_path)

# Add the finder to sys.meta_path
REPOS_FINDER = ReposFinder()
@@ -1383,7 +1383,7 @@ def create(configuration):

def all_package_names(include_virtuals=False):
    """Convenience wrapper around ``spack.repo.all_package_names()``."""
    return path.all_package_names(include_virtuals)
    return PATH.all_package_names(include_virtuals)


@contextlib.contextmanager
@@ -1398,7 +1398,7 @@ def use_repositories(*paths_and_repos, **kwargs):
    Returns:
        Corresponding RepoPath object
    """
    global path
    global PATH
    # TODO (Python 2.7): remove this kwargs on deprecation of Python 2.7 support
    override = kwargs.get("override", True)
    paths = [getattr(x, "root", x) for x in paths_and_repos]
@@ -1407,12 +1407,12 @@ def use_repositories(*paths_and_repos, **kwargs):
    spack.config.config.push_scope(
        spack.config.InternalConfigScope(name=scope_name, data={repos_key: paths})
    )
    path, saved = create(configuration=spack.config.config), path
    PATH, saved = create(configuration=spack.config.config), PATH
    try:
        yield path
        yield PATH
    finally:
        spack.config.config.remove_scope(scope_name=scope_name)
        path = saved
        PATH = saved


class MockRepositoryBuilder:
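Since use_repositories now swaps the module-level PATH singleton in and out, typical usage looks like the following sketch (the repository location and package name are illustrative):

import spack.repo

# Temporarily make a custom repository take precedence; the previous
# PATH singleton is restored when the context exits.
with spack.repo.use_repositories("/path/to/custom/repo", override=False) as repo_path:
    pkg_cls = repo_path.get_pkg_class("example-pkg")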
@@ -237,7 +237,7 @@ def listify(args):

def packagize(pkg):
    if isinstance(pkg, str):
        return spack.repo.path.get_pkg_class(pkg)
        return spack.repo.PATH.get_pkg_class(pkg)
    else:
        return pkg

@@ -342,7 +342,7 @@ def extend_flag_list(flag_list, new_flags):

def check_packages_exist(specs):
    """Ensure all packages mentioned in specs exist."""
    repo = spack.repo.path
    repo = spack.repo.PATH
    for spec in specs:
        for s in spec.traverse():
            try:
@@ -529,7 +529,7 @@ def _compute_specs_from_answer_set(self):
def _normalize_packages_yaml(packages_yaml):
    normalized_yaml = copy.copy(packages_yaml)
    for pkg_name in packages_yaml:
        is_virtual = spack.repo.path.is_virtual(pkg_name)
        is_virtual = spack.repo.PATH.is_virtual(pkg_name)
        if pkg_name == "all" or not is_virtual:
            continue

@@ -537,7 +537,7 @@ def _normalize_packages_yaml(packages_yaml):
        data = normalized_yaml.pop(pkg_name)
        is_buildable = data.get("buildable", True)
        if not is_buildable:
            for provider in spack.repo.path.providers_for(pkg_name):
            for provider in spack.repo.PATH.providers_for(pkg_name):
                entry = normalized_yaml.setdefault(provider.name, {})
                entry["buildable"] = False

@@ -956,8 +956,8 @@ def target_ranges(self, spec, single_target_fn):
        return [fn.attr("node_target_satisfies", spec.name, target)]

    def conflict_rules(self, pkg):
        default_msg = "{0} '{1}' conflicts with '{2}'"
        no_constraint_msg = "{0} conflicts with '{1}'"
        default_msg = "{0}: '{1}' conflicts with '{2}'"
        no_constraint_msg = "{0}: conflicts with '{1}'"
        for trigger, constraints in pkg.conflicts.items():
            trigger_msg = "conflict trigger %s" % str(trigger)
            trigger_id = self.condition(spack.spec.Spec(trigger), name=pkg.name, msg=trigger_msg)
@@ -1416,7 +1416,7 @@ def external_packages(self):
                continue

            # This package does not appear in any repository
            if pkg_name not in spack.repo.path:
            if pkg_name not in spack.repo.PATH:
                continue

            self.gen.h2("External package: {0}".format(pkg_name))
@@ -1598,7 +1598,7 @@ class Body:
            if not spec.concrete:
                reserved_names = spack.directives.reserved_names
                if not spec.virtual and vname not in reserved_names:
                    pkg_cls = spack.repo.path.get_pkg_class(spec.name)
                    pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
                    try:
                        variant_def, _ = pkg_cls.variants[vname]
                    except KeyError:
@@ -1606,7 +1606,7 @@ class Body:
                        raise RuntimeError(msg.format(vname, spec.name))
                    else:
                        variant_def.validate_or_raise(
                            variant, spack.repo.path.get_pkg_class(spec.name)
                            variant, spack.repo.PATH.get_pkg_class(spec.name)
                        )

            clauses.append(f.variant_value(spec.name, vname, value))
@@ -1678,7 +1678,7 @@ class Body:
                except spack.repo.UnknownNamespaceError:
                    # Try to look up the package of the same name and use its
                    # providers. This is as good as we can do without edge info.
                    pkg_class = spack.repo.path.get_pkg_class(dep.name)
                    pkg_class = spack.repo.PATH.get_pkg_class(dep.name)
                    spec = spack.spec.Spec(f"{dep.name}@{dep.version}")
                    pkg = pkg_class(spec)

@@ -1724,7 +1724,7 @@ def build_version_dict(self, possible_pkgs):
        packages_yaml = spack.config.get("packages")
        packages_yaml = _normalize_packages_yaml(packages_yaml)
        for pkg_name in possible_pkgs:
            pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
            pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)

            # All the versions from the corresponding package.py file. Since concepts
            # like being a "develop" version or being preferred exist only at a
@@ -1755,7 +1755,7 @@ def key_fn(item):
            # specs will be computed later
            version_preferences = packages_yaml.get(pkg_name, {}).get("version", [])
            version_defs = []
            pkg_class = spack.repo.path.get_pkg_class(pkg_name)
            pkg_class = spack.repo.PATH.get_pkg_class(pkg_name)
            for vstr in version_preferences:
                v = vn.ver(vstr)
                if isinstance(v, vn.GitVersion):
@@ -2053,7 +2053,7 @@ def define_virtual_constraints(self):
        # aggregate constraints into per-virtual sets
        constraint_map = collections.defaultdict(lambda: set())
        for pkg_name, versions in self.version_constraints:
            if not spack.repo.path.is_virtual(pkg_name):
            if not spack.repo.PATH.is_virtual(pkg_name):
                continue
            constraint_map[pkg_name].add(versions)

@@ -2141,7 +2141,7 @@ def _facts_from_concrete_spec(self, spec, possible):
            self.reusable_and_possible[h] = spec
            try:
                # Only consider installed packages for repo we know
                spack.repo.path.get(spec)
                spack.repo.PATH.get(spec)
            except (spack.repo.UnknownNamespaceError, spack.repo.UnknownPackageError):
                return

@@ -2366,7 +2366,7 @@ def _specs_from_requires(self, pkg_name, section):
            # Prefer spec's name if it exists, in case the spec is
            # requiring a specific implementation inside of a virtual section
            # e.g. packages:mpi:require:openmpi@4.0.1
            pkg_class = spack.repo.path.get_pkg_class(spec.name or pkg_name)
            pkg_class = spack.repo.PATH.get_pkg_class(spec.name or pkg_name)
            satisfying_versions = self._check_for_defined_matching_versions(
                pkg_class, spec.versions
            )
@@ -2621,7 +2621,7 @@ def build_specs(self, function_tuples):
            # predicates on virtual packages.
            if name != "error":
                pkg = args[0]
                if spack.repo.path.is_virtual(pkg):
                if spack.repo.PATH.is_virtual(pkg):
                    continue

            # if we've already gotten a concrete spec for this pkg,
@@ -2639,7 +2639,7 @@ def build_specs(self, function_tuples):
        for spec in self._specs.values():
            if spec.namespace:
                continue
            repo = spack.repo.path.repo_for_pkg(spec)
            repo = spack.repo.PATH.repo_for_pkg(spec)
            spec.namespace = repo.namespace

        # fix flags after all specs are constructed
@@ -1299,7 +1299,7 @@ def __init__(self, spec, name, query_parameters):
        original_spec = getattr(spec, "wrapped_obj", spec)
        self.wrapped_obj = original_spec
        self.token = original_spec, name, query_parameters
        is_virtual = spack.repo.path.is_virtual(name)
        is_virtual = spack.repo.PATH.is_virtual(name)
        self.last_query = QueryState(
            name=name, extra_parameters=query_parameters, isvirtual=is_virtual
        )
@@ -1733,7 +1733,7 @@ def package(self):
                self.name
            )
        if not self._package:
            self._package = spack.repo.path.get(self)
            self._package = spack.repo.PATH.get(self)
        return self._package

    @property
@@ -1741,11 +1741,11 @@ def package_class(self):
        """Internal package call gets only the class object for a package.
        Use this to just get package metadata.
        """
        return spack.repo.path.get_pkg_class(self.fullname)
        return spack.repo.PATH.get_pkg_class(self.fullname)

    @property
    def virtual(self):
        return spack.repo.path.is_virtual(self.name)
        return spack.repo.PATH.is_virtual(self.name)

    @property
    def concrete(self):
@@ -2272,7 +2272,7 @@ def override(init_spec, change_spec):
    # TODO: this doesn't account for the case where the changed spec
    # (and the user spec) have dependencies
    new_spec = init_spec.copy()
    package_cls = spack.repo.path.get_pkg_class(new_spec.name)
    package_cls = spack.repo.PATH.get_pkg_class(new_spec.name)
    if change_spec.versions and not change_spec.versions == vn.any_version:
        new_spec.versions = change_spec.versions
    for variant, value in change_spec.variants.items():
@@ -2546,7 +2546,7 @@ def validate_detection(self):
        assert isinstance(self.extra_attributes, collections.abc.Mapping), msg

        # Validate the spec calling a package specific method
        pkg_cls = spack.repo.path.get_pkg_class(self.name)
        pkg_cls = spack.repo.PATH.get_pkg_class(self.name)
        validate_fn = getattr(pkg_cls, "validate_detected_spec", lambda x, y: None)
        validate_fn(self, self.extra_attributes)

@@ -2645,7 +2645,7 @@ def _expand_virtual_packages(self, concretizer):
        """
        # Make an index of stuff this spec already provides
        self_index = spack.provider_index.ProviderIndex(
            repository=spack.repo.path, specs=self.traverse(), restrict=True
            repository=spack.repo.PATH, specs=self.traverse(), restrict=True
        )
        changed = False
        done = False
@@ -2785,7 +2785,7 @@ def _old_concretize(self, tests=False, deprecation_warning=True):
            visited_user_specs = set()
            for dep in self.traverse():
                visited_user_specs.add(dep.name)
                pkg_cls = spack.repo.path.get_pkg_class(dep.name)
                pkg_cls = spack.repo.PATH.get_pkg_class(dep.name)
                visited_user_specs.update(x.name for x in pkg_cls(dep).provided)

            extra = set(user_spec_deps.keys()).difference(visited_user_specs)
@@ -2868,7 +2868,7 @@ def inject_patches_variant(root):
            # we can do it as late as possible to allow as much
            # compatibility across repositories as possible.
            if s.namespace is None:
                s.namespace = spack.repo.path.repo_for_pkg(s.name).namespace
                s.namespace = spack.repo.PATH.repo_for_pkg(s.name).namespace

            if s.concrete:
                continue
@@ -2926,7 +2926,7 @@ def ensure_external_path_if_external(external_spec):

        # Get the path from the module the package can override the default
        # (this is mostly needed for Cray)
        pkg_cls = spack.repo.path.get_pkg_class(external_spec.name)
        pkg_cls = spack.repo.PATH.get_pkg_class(external_spec.name)
        package = pkg_cls(external_spec)
        external_spec.external_path = getattr(
            package, "external_prefix", md.path_from_modules(external_spec.external_modules)
@@ -3200,7 +3200,7 @@ def _find_provider(self, vdep, provider_index):
        Raise an exception if there is a conflicting virtual
        dependency already in this spec.
        """
        assert spack.repo.path.is_virtual_safe(vdep.name), vdep
        assert spack.repo.PATH.is_virtual_safe(vdep.name), vdep

        # note that this defensively copies.
        providers = provider_index.providers_for(vdep)
@@ -3266,7 +3266,7 @@ def _merge_dependency(self, dependency, visited, spec_deps, provider_index, test
        # If it's a virtual dependency, try to find an existing
        # provider in the spec, and merge that.
        virtuals = ()
        if spack.repo.path.is_virtual_safe(dep.name):
        if spack.repo.PATH.is_virtual_safe(dep.name):
            virtuals = (dep.name,)
            visited.add(dep.name)
            provider = self._find_provider(dep, provider_index)
@@ -3274,11 +3274,11 @@ def _merge_dependency(self, dependency, visited, spec_deps, provider_index, test
            dep = provider
        else:
            index = spack.provider_index.ProviderIndex(
                repository=spack.repo.path, specs=[dep], restrict=True
                repository=spack.repo.PATH, specs=[dep], restrict=True
            )
            items = list(spec_deps.items())
            for name, vspec in items:
                if not spack.repo.path.is_virtual_safe(vspec.name):
                if not spack.repo.PATH.is_virtual_safe(vspec.name):
                    continue

                if index.providers_for(vspec):
@@ -3428,7 +3428,7 @@ def normalize(self, force=False, tests=False, user_spec_deps=None):
        # Initialize index of virtual dependency providers if
        # concretize didn't pass us one already
        provider_index = spack.provider_index.ProviderIndex(
            repository=spack.repo.path, specs=[s for s in all_spec_deps.values()], restrict=True
            repository=spack.repo.PATH, specs=[s for s in all_spec_deps.values()], restrict=True
        )

        # traverse the package DAG and fill out dependencies according
@@ -3459,7 +3459,7 @@ def validate_or_raise(self):
        for spec in self.traverse():
            # raise an UnknownPackageError if the spec's package isn't real.
            if (not spec.virtual) and spec.name:
                spack.repo.path.get_pkg_class(spec.fullname)
                spack.repo.PATH.get_pkg_class(spec.fullname)

            # validate compiler in addition to the package name.
            if spec.compiler:
@@ -3527,7 +3527,7 @@ def update_variant_validate(self, variant_name, values):
            variant = pkg_variant.make_variant(value)
            self.variants[variant_name] = variant

        pkg_cls = spack.repo.path.get_pkg_class(self.name)
        pkg_cls = spack.repo.PATH.get_pkg_class(self.name)
        pkg_variant.validate_or_raise(self.variants[variant_name], pkg_cls)

    def constrain(self, other, deps=True):
@@ -3732,8 +3732,8 @@ def _intersects(self, other: "Spec", deps: bool = True) -> bool:
        if self.name != other.name and self.name and other.name:
            if self.virtual and other.virtual:
                # Two virtual specs intersect only if there are providers for both
                lhs = spack.repo.path.providers_for(str(self))
                rhs = spack.repo.path.providers_for(str(other))
                lhs = spack.repo.PATH.providers_for(str(self))
                rhs = spack.repo.PATH.providers_for(str(other))
                intersection = [s for s in lhs if any(s.intersects(z) for z in rhs)]
                return bool(intersection)

@@ -3742,7 +3742,7 @@ def _intersects(self, other: "Spec", deps: bool = True) -> bool:
            virtual_spec, non_virtual_spec = (self, other) if self.virtual else (other, self)
            try:
                # Here we might get an abstract spec
                pkg_cls = spack.repo.path.get_pkg_class(non_virtual_spec.fullname)
                pkg_cls = spack.repo.PATH.get_pkg_class(non_virtual_spec.fullname)
                pkg = pkg_cls(non_virtual_spec)
            except spack.repo.UnknownEntityError:
                # If we can't get package info on this spec, don't treat
@@ -3813,10 +3813,10 @@ def _intersects_dependencies(self, other):

        # For virtual dependencies, we need to dig a little deeper.
        self_index = spack.provider_index.ProviderIndex(
            repository=spack.repo.path, specs=self.traverse(), restrict=True
            repository=spack.repo.PATH, specs=self.traverse(), restrict=True
        )
        other_index = spack.provider_index.ProviderIndex(
            repository=spack.repo.path, specs=other.traverse(), restrict=True
            repository=spack.repo.PATH, specs=other.traverse(), restrict=True
        )

        # This handles cases where there are already providers for both vpkgs
@@ -3857,7 +3857,7 @@ def _satisfies(self, other: "Spec", deps: bool = True) -> bool:
        if not self.virtual and other.virtual:
            try:
                # Here we might get an abstract spec
                pkg_cls = spack.repo.path.get_pkg_class(self.fullname)
                pkg_cls = spack.repo.PATH.get_pkg_class(self.fullname)
                pkg = pkg_cls(self)
            except spack.repo.UnknownEntityError:
                # If we can't get package info on this spec, don't treat
@@ -3939,8 +3939,8 @@ def patches(self):
        # translate patch sha256sums to patch objects by consulting the index
        if self._patches_assigned():
            for sha256 in self.variants["patches"]._patches_in_order_of_appearance:
                index = spack.repo.path.patch_index
                pkg_cls = spack.repo.path.get_pkg_class(self.name)
                index = spack.repo.PATH.patch_index
                pkg_cls = spack.repo.PATH.get_pkg_class(self.name)
                patch = index.patch_for_package(sha256, pkg_cls)
                self._patches.append(patch)

@@ -326,7 +326,7 @@ def __init__(
        self.keep = keep

        # File lock for the stage directory. We use one file for all
        # stage locks. See spack.database.Database.prefix_lock for
        # stage locks. See spack.database.Database.prefix_locker.lock for
        # details on this approach.
        self._lock = None
        if lock:
@@ -25,13 +25,14 @@
from typing import Any, Callable, Dict, Generator, List, Optional, Union

import llnl.util.lang
import llnl.util.tty as tty
from llnl.util import tty

import spack.config
import spack.database
import spack.directory_layout
import spack.error
import spack.paths
import spack.spec
import spack.util.path

#: default installation root, relative to the Spack install path
@@ -134,18 +135,21 @@ def parse_install_tree(config_dict):
class Store:
    """A store is a path full of installed Spack packages.

    Stores consist of packages installed according to a
    ``DirectoryLayout``, along with an index, or _database_ of their
    contents. The directory layout controls what paths look like and how
    Spack ensures that each unique spec gets its own unique directory (or
    not, though we don't recommend that). The database is a single file
    that caches metadata for the entire Spack installation. It prevents
    us from having to spider the install tree to figure out what's there.
    Stores consist of packages installed according to a ``DirectoryLayout``, along with a database
    of their contents.

    The directory layout controls what paths look like and how Spack ensures that each unique spec
    gets its own unique directory (or not, though we don't recommend that).

    The database is a single file that caches metadata for the entire Spack installation. It
    prevents us from having to spider the install tree to figure out what's there.

    The store is also able to lock installation prefixes, and to mark installation failures.

    Args:
        root: path to the root of the install tree
        unpadded_root: path to the root of the install tree without padding.
            The sbang script has to be installed here to work with padded roots
        unpadded_root: path to the root of the install tree without padding. The sbang script has
            to be installed here to work with padded roots
        projections: expression according to guidelines that describes how to construct a path to
            a package prefix in this store
        hash_length: length of the hashes used in the directory layout. Spec hash suffixes will be
@@ -170,6 +174,19 @@ def __init__(
        self.upstreams = upstreams
        self.lock_cfg = lock_cfg
        self.db = spack.database.Database(root, upstream_dbs=upstreams, lock_cfg=lock_cfg)

        timeout_format_str = (
            f"{str(lock_cfg.package_timeout)}s" if lock_cfg.package_timeout else "No timeout"
        )
        tty.debug("PACKAGE LOCK TIMEOUT: {0}".format(str(timeout_format_str)))

        self.prefix_locker = spack.database.SpecLocker(
            spack.database.prefix_lock_path(root), default_timeout=lock_cfg.package_timeout
        )
        self.failure_tracker = spack.database.FailureTracker(
            self.root, default_timeout=lock_cfg.package_timeout
        )

        self.layout = spack.directory_layout.DirectoryLayout(
            root, projections=projections, hash_length=hash_length
        )
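Taken together with the installer hunks above, the two new Store attributes are exercised roughly as in this sketch (the spec argument is assumed to be a concrete spec; the function itself is illustrative):

import spack.store


def retry_failed_install(spec):
    store = spack.store.STORE

    # Failure bookkeeping moved from store.db to the failure tracker.
    if store.failure_tracker.has_failed(spec):
        store.failure_tracker.clear(spec, force=True)

    # Prefix locking moved from store.db.prefix_write_lock to the locker.
    with store.prefix_locker.write_lock(spec):
        ...  # (re)install under the prefix lock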
@@ -102,7 +102,7 @@ def __init__(self):
    def restore(self):
        if _SERIALIZE:
            spack.config.config = self.config
            spack.repo.path = spack.repo.create(self.config)
            spack.repo.PATH = spack.repo.create(self.config)
            spack.platforms.host = self.platform
            spack.store.STORE = self.store
            self.test_patches.restore()
@@ -32,10 +32,10 @@ def packages_with_tags(tags, installed, skip_empty):
    """
    tag_pkgs = collections.defaultdict(lambda: list)
    spec_names = _get_installed_package_names() if installed else []
    keys = spack.repo.path.tag_index if tags is None else tags
    keys = spack.repo.PATH.tag_index if tags is None else tags
    for tag in keys:
        packages = [
            name for name in spack.repo.path.tag_index[tag] if not installed or name in spec_names
            name for name in spack.repo.PATH.tag_index[tag] if not installed or name in spec_names
        ]
        if packages or not skip_empty:
            tag_pkgs[tag] = packages
@@ -23,8 +23,6 @@

DATA_PATH = os.path.join(spack.paths.test_path, "data")

pytestmark = pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows")


@pytest.fixture()
def concretize_and_setup(default_mock_concretization):
@@ -45,6 +43,7 @@ def _func(dir_str):
    return _func


@pytest.mark.skipif(sys.platform == "win32", reason="make not available on Windows")
@pytest.mark.usefixtures("config", "mock_packages", "working_env")
class TestTargets:
    @pytest.mark.parametrize(
@@ -93,6 +92,7 @@ def test_negative_ninja_check(self, input_dir, test_dir, concretize_and_setup):
        s.package._if_ninja_target_execute("check")


@pytest.mark.skipif(sys.platform == "win32", reason="autotools not available on windows")
@pytest.mark.usefixtures("config", "mock_packages")
class TestAutotoolsPackage:
    def test_with_or_without(self, default_mock_concretization):
@@ -15,7 +15,7 @@ def test_build_request_errors(install_mockery):
        inst.BuildRequest("abc", {})

    spec = spack.spec.Spec("trivial-install-test-package")
    pkg_cls = spack.repo.path.get_pkg_class(spec.name)
    pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
    with pytest.raises(ValueError, match="must have a concrete spec"):
        inst.BuildRequest(pkg_cls(spec), {})


@@ -15,7 +15,7 @@ def test_build_task_errors(install_mockery):
        inst.BuildTask("abc", None, False, 0, 0, 0, [])

    spec = spack.spec.Spec("trivial-install-test-package")
    pkg_cls = spack.repo.path.get_pkg_class(spec.name)
    pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
    with pytest.raises(ValueError, match="must have a concrete spec"):
        inst.BuildTask(pkg_cls(spec), None, False, 0, 0, 0, [])

@@ -72,7 +72,7 @@ def _get_number(*args, **kwargs):


def test_checksum_versions(mock_packages, mock_clone_repo, mock_fetch, mock_stage):
    pkg_cls = spack.repo.path.get_pkg_class("zlib")
    pkg_cls = spack.repo.PATH.get_pkg_class("zlib")
    versions = [str(v) for v in pkg_cls.versions]
    output = spack_checksum("zlib", *versions)
    assert "Found 3 versions" in output
@@ -101,14 +101,14 @@ def test_checksum_deprecated_version(mock_packages, mock_clone_repo, mock_fetch,


def test_checksum_at(mock_packages):
    pkg_cls = spack.repo.path.get_pkg_class("zlib")
    pkg_cls = spack.repo.PATH.get_pkg_class("zlib")
    versions = [str(v) for v in pkg_cls.versions]
    output = spack_checksum(f"zlib@{versions[0]}")
    assert "Found 1 version" in output


def test_checksum_url(mock_packages):
    pkg_cls = spack.repo.path.get_pkg_class("zlib")
    pkg_cls = spack.repo.PATH.get_pkg_class("zlib")
    output = spack_checksum(f"{pkg_cls.url}", fail_on_error=False)
    assert "accepts package names" in output

@@ -10,9 +10,11 @@
import llnl.util.filesystem as fs

import spack.caches
import spack.cmd.clean
import spack.main
import spack.package_base
import spack.stage
import spack.store

clean = spack.main.SpackCommand("clean")

@@ -33,7 +35,7 @@ def __call__(self, *args, **kwargs):
    monkeypatch.setattr(spack.stage, "purge", Counter("stages"))
    monkeypatch.setattr(spack.caches.fetch_cache, "destroy", Counter("downloads"), raising=False)
    monkeypatch.setattr(spack.caches.misc_cache, "destroy", Counter("caches"))
    monkeypatch.setattr(spack.installer, "clear_failures", Counter("failures"))
    monkeypatch.setattr(spack.store.STORE.failure_tracker, "clear_all", Counter("failures"))
    monkeypatch.setattr(spack.cmd.clean, "remove_python_cache", Counter("python_cache"))

    yield counts
@@ -219,7 +219,8 @@ def test_fish_completion():
def test_update_completion_arg(shell, tmpdir, monkeypatch):
    """Test the update completion flag."""

    mock_infile = tmpdir.join("spack-completion.in")
    tmpdir.join(shell).mkdir()
    mock_infile = tmpdir.join(shell).join(f"spack-completion.{shell}")
    mock_outfile = tmpdir.join(f"spack-completion.{shell}")

    mock_args = {
@@ -267,7 +268,7 @@ def test_updated_completion_scripts(shell, tmpdir):
        "and adding the changed files to your pull request."
    )

    header = os.path.join(spack.paths.share_path, shell, "spack-completion.in")
    header = os.path.join(spack.paths.share_path, shell, f"spack-completion.{shell}")
    script = "spack-completion.{0}".format(shell)
    old_script = os.path.join(spack.paths.share_path, script)
    new_script = str(tmpdir.join(script))
@@ -266,7 +266,7 @@ def test_dev_build_multiple(
    # root and dependency if they wanted a dev build for both.
    leaf_dir = tmpdir.mkdir("leaf")
    leaf_spec = spack.spec.Spec("dev-build-test-install@=1.0.0")  # non-existing version
    leaf_pkg_cls = spack.repo.path.get_pkg_class(leaf_spec.name)
    leaf_pkg_cls = spack.repo.PATH.get_pkg_class(leaf_spec.name)
    with leaf_dir.as_cwd():
        with open(leaf_pkg_cls.filename, "w") as f:
            f.write(leaf_pkg_cls.original_string)
@@ -275,7 +275,7 @@ def test_dev_build_multiple(
    # don't concretize outside environment -- dev info will be wrong
    root_dir = tmpdir.mkdir("root")
    root_spec = spack.spec.Spec("dev-build-test-dependent@0.0.0")
    root_pkg_cls = spack.repo.path.get_pkg_class(root_spec.name)
    root_pkg_cls = spack.repo.PATH.get_pkg_class(root_spec.name)
    with root_dir.as_cwd():
        with open(root_pkg_cls.filename, "w") as f:
            f.write(root_pkg_cls.original_string)
@@ -329,7 +329,7 @@ def test_dev_build_env_dependency(
    dep_spec = spack.spec.Spec("dev-build-test-install")

    with build_dir.as_cwd():
        dep_pkg_cls = spack.repo.path.get_pkg_class(dep_spec.name)
        dep_pkg_cls = spack.repo.PATH.get_pkg_class(dep_spec.name)
        with open(dep_pkg_cls.filename, "w") as f:
            f.write(dep_pkg_cls.original_string)

@@ -277,7 +277,7 @@ def test_env_modifications_error_on_activate(install_mockery, mock_fetch, monkey
    def setup_error(pkg, env):
        raise RuntimeError("cmake-client had issues!")

    pkg = spack.repo.path.get_pkg_class("cmake-client")
    pkg = spack.repo.PATH.get_pkg_class("cmake-client")
    monkeypatch.setattr(pkg, "setup_run_environment", setup_error)

    spack.environment.shell.activate(e)
@@ -43,7 +43,7 @@ def define_plat_exe(exe):


def test_find_external_single_package(mock_executable, executables_found, _platform_executables):
    pkgs_to_check = [spack.repo.path.get_pkg_class("cmake")]
    pkgs_to_check = [spack.repo.PATH.get_pkg_class("cmake")]
    cmake_path = mock_executable("cmake", output="echo cmake version 1.foo")
    executables_found({str(cmake_path): define_plat_exe("cmake")})

@@ -58,7 +58,7 @@ def test_find_external_single_package(mock_executable, executables_found, _platf
def test_find_external_two_instances_same_package(
    mock_executable, executables_found, _platform_executables
):
    pkgs_to_check = [spack.repo.path.get_pkg_class("cmake")]
    pkgs_to_check = [spack.repo.PATH.get_pkg_class("cmake")]

    # Each of these cmake instances is created in a different prefix
    # In Windows, quoted strings are echo'd with quotes includes
@@ -347,7 +347,7 @@ def test_overriding_prefix(mock_executable, mutable_config, monkeypatch, _platfo
    def _determine_variants(cls, exes, version_str):
        return "languages=c", {"prefix": "/opt/gcc/bin", "compilers": {"c": exes[0]}}

    gcc_cls = spack.repo.path.get_pkg_class("gcc")
    gcc_cls = spack.repo.PATH.get_pkg_class("gcc")
    monkeypatch.setattr(gcc_cls, "determine_variants", _determine_variants)

    # Find the external spec
@@ -23,6 +23,7 @@
import spack.environment as ev
import spack.hash_types as ht
import spack.package_base
import spack.store
import spack.util.executable
from spack.error import SpackError
from spack.main import SpackCommand
@@ -705,9 +706,11 @@ def test_cache_only_fails(tmpdir, mock_fetch, install_mockery, capfd):
    assert "was not installed" in out

    # Check that failure prefix locks are still cached
    failure_lock_prefixes = ",".join(spack.store.STORE.db._prefix_failures.keys())
    assert "libelf" in failure_lock_prefixes
    assert "libdwarf" in failure_lock_prefixes
    failed_packages = [
        pkg_name for dag_hash, pkg_name in spack.store.STORE.failure_tracker.locker.locks.keys()
    ]
    assert "libelf" in failed_packages
    assert "libdwarf" in failed_packages


def test_install_only_dependencies(tmpdir, mock_fetch, install_mockery):
@@ -82,7 +82,7 @@ def mock_pkg_git_repo(git, tmpdir_factory):

 @pytest.fixture(scope="module")
 def mock_pkg_names():
-    repo = spack.repo.path.get_repo("builtin.mock")
+    repo = spack.repo.PATH.get_repo("builtin.mock")

     # Be sure to include virtual packages since packages with stand-alone
     # tests may inherit additional tests from the virtuals they provide,
@@ -105,11 +105,11 @@ def split(output):


 def test_packages_path():
-    assert spack.repo.packages_path() == spack.repo.path.get_repo("builtin").packages_path
+    assert spack.repo.packages_path() == spack.repo.PATH.get_repo("builtin").packages_path


 def test_mock_packages_path(mock_packages):
-    assert spack.repo.packages_path() == spack.repo.path.get_repo("builtin.mock").packages_path
+    assert spack.repo.packages_path() == spack.repo.PATH.get_repo("builtin.mock").packages_path


 def test_pkg_add(git, mock_pkg_git_repo):
@@ -126,8 +126,8 @@ def test_pkg_add(git, mock_pkg_git_repo):
     finally:
         shutil.rmtree("pkg-e")
         # Removing a package mid-run disrupts Spack's caching
-        if spack.repo.path.repos[0]._fast_package_checker:
-            spack.repo.path.repos[0]._fast_package_checker.invalidate()
+        if spack.repo.PATH.repos[0]._fast_package_checker:
+            spack.repo.PATH.repos[0]._fast_package_checker.invalidate()

     with pytest.raises(spack.main.SpackCommandError):
         pkg("add", "does-not-exist")
@@ -248,7 +248,7 @@ def test_pkg_source_requires_one_arg(mock_packages):
 def test_pkg_source(mock_packages):
     fake_source = pkg("source", "fake")

-    fake_file = spack.repo.path.filename_for_package_name("fake")
+    fake_file = spack.repo.PATH.filename_for_package_name("fake")
     with open(fake_file) as f:
         contents = f.read()
         assert fake_source == contents
@@ -303,7 +303,7 @@ def test_pkg_grep(mock_packages, capfd):
     pkg("grep", "-l", "splice", output=str)
     output, _ = capfd.readouterr()
     assert output.strip() == "\n".join(
-        spack.repo.path.get_pkg_class(name).module.__file__
+        spack.repo.PATH.get_pkg_class(name).module.__file__
         for name in ["splice-a", "splice-h", "splice-t", "splice-vh", "splice-z"]
     )
@@ -120,7 +120,7 @@ def test_changed_files_all_files():
     assert len(files) > 6000

     # a builtin package
-    zlib = spack.repo.path.get_pkg_class("zlib")
+    zlib = spack.repo.PATH.get_pkg_class("zlib")
     zlib_file = zlib.module.__file__
     if zlib_file.endswith("pyc"):
         zlib_file = zlib_file[:-1]
@@ -41,7 +41,7 @@ def test_tags_no_tags(monkeypatch):
     class tag_path:
         tag_index = dict()

-    monkeypatch.setattr(spack.repo, "path", tag_path)
+    monkeypatch.setattr(spack.repo, "PATH", tag_path)
     out = tags()
     assert "No tagged" in out
@@ -43,7 +43,7 @@ def check_spec(abstract, concrete):
         cflag = concrete.compiler_flags[flag]
         assert set(aflag) <= set(cflag)

-    for name in spack.repo.path.get_pkg_class(abstract.name).variants:
+    for name in spack.repo.PATH.get_pkg_class(abstract.name).variants:
         assert name in concrete.variants

     for flag in concrete.compiler_flags.valid_compiler_flags():
@@ -292,7 +292,7 @@ def test_concretize_with_provides_when(self):
         """Make sure insufficient versions of MPI are not in providers list when
        we ask for some advanced version.
        """
-        repo = spack.repo.path
+        repo = spack.repo.PATH
         assert not any(s.intersects("mpich2@:1.0") for s in repo.providers_for("mpi@2.1"))
         assert not any(s.intersects("mpich2@:1.1") for s in repo.providers_for("mpi@2.2"))
         assert not any(s.intersects("mpich@:1") for s in repo.providers_for("mpi@2"))
@@ -301,7 +301,7 @@ def test_concretize_with_provides_when(self):

     def test_provides_handles_multiple_providers_of_same_version(self):
         """ """
-        providers = spack.repo.path.providers_for("mpi@3.0")
+        providers = spack.repo.PATH.providers_for("mpi@3.0")

         # Note that providers are repo-specific, so we don't misinterpret
         # providers, but vdeps are not namespace-specific, so we can
@@ -1446,7 +1446,7 @@ def test_non_default_provider_of_multiple_virtuals(self):
         assert s["lapack"].name == "low-priority-provider"

         for virtual_pkg in ("mpi", "lapack"):
-            for pkg in spack.repo.path.providers_for(virtual_pkg):
+            for pkg in spack.repo.PATH.providers_for(virtual_pkg):
                 if pkg.name == "low-priority-provider":
                     continue
                 assert pkg not in s
@@ -1919,7 +1919,7 @@ def test_installed_specs_disregard_conflicts(self, mutable_database, monkeypatch
             pytest.xfail("Use case not supported by the original concretizer")

         # Add a conflict to "mpich" that match an already installed "mpich~debug"
-        pkg_cls = spack.repo.path.get_pkg_class("mpich")
+        pkg_cls = spack.repo.PATH.get_pkg_class("mpich")
         monkeypatch.setitem(pkg_cls.conflicts, "~debug", [(Spec(), None)])

         # If we concretize with --fresh the conflict is taken into account
@@ -1158,13 +1158,13 @@ def test_license_dir_config(mutable_config, mock_packages):
     expected_dir = spack.paths.default_license_dir
     assert spack.config.get("config:license_dir") == expected_dir
     assert spack.package_base.PackageBase.global_license_dir == expected_dir
-    assert spack.repo.path.get_pkg_class("a").global_license_dir == expected_dir
+    assert spack.repo.PATH.get_pkg_class("a").global_license_dir == expected_dir

     rel_path = os.path.join(os.path.sep, "foo", "bar", "baz")
     spack.config.set("config:license_dir", rel_path)
     assert spack.config.get("config:license_dir") == rel_path
     assert spack.package_base.PackageBase.global_license_dir == rel_path
-    assert spack.repo.path.get_pkg_class("a").global_license_dir == rel_path
+    assert spack.repo.PATH.get_pkg_class("a").global_license_dir == rel_path


 @pytest.mark.regression("22547")
@@ -773,7 +773,7 @@ def concretize_scope(mutable_config, tmpdir):
     yield str(tmpdir.join("concretize"))

     mutable_config.pop_scope()
-    spack.repo.path._provider_index = None
+    spack.repo.PATH._provider_index = None


 @pytest.fixture
@@ -950,21 +950,14 @@ def disable_compiler_execution(monkeypatch, request):


 @pytest.fixture(scope="function")
-def install_mockery(temporary_store, mutable_config, mock_packages):
+def install_mockery(temporary_store: spack.store.Store, mutable_config, mock_packages):
     """Hooks a fake install directory, DB, and stage directory into Spack."""
     # We use a fake package, so temporarily disable checksumming
     with spack.config.override("config:checksum", False):
         yield

-    # Also wipe out any cached prefix failure locks (associated with
-    # the session-scoped mock archive).
-    for pkg_id in list(temporary_store.db._prefix_failures.keys()):
-        lock = spack.store.STORE.db._prefix_failures.pop(pkg_id, None)
-        if lock:
-            try:
-                lock.release_write()
-            except Exception:
-                pass
+    # Wipe out any cached prefix failure locks (associated with the session-scoped mock archive)
+    temporary_store.failure_tracker.clear_all()


 @pytest.fixture(scope="function")
@@ -1944,5 +1937,5 @@ def nullify_globals(request, monkeypatch):
     ensure_configuration_fixture_run_before(request)
     monkeypatch.setattr(spack.config, "config", None)
     monkeypatch.setattr(spack.caches, "misc_cache", None)
-    monkeypatch.setattr(spack.repo, "path", None)
+    monkeypatch.setattr(spack.repo, "PATH", None)
     monkeypatch.setattr(spack.store, "STORE", None)
@@ -235,7 +235,7 @@ class Llvm(CMakePackage, CudaPackage):
     depends_on("libffi", when="+cuda")  # libomptarget

     # llvm-config --system-libs libraries.
-    depends_on("zlib")
+    depends_on("zlib-api")

     # lldb dependencies
     depends_on("swig", when="+lldb")
@@ -807,22 +807,22 @@ def test_query_spec_with_non_conditional_virtual_dependency(database):
 def test_failed_spec_path_error(database):
     """Ensure spec not concrete check is covered."""
     s = spack.spec.Spec("a")
-    with pytest.raises(ValueError, match="Concrete spec required"):
-        spack.store.STORE.db._failed_spec_path(s)
+    with pytest.raises(AssertionError, match="concrete spec required"):
+        spack.store.STORE.failure_tracker.mark(s)


 @pytest.mark.db
 def test_clear_failure_keep(mutable_database, monkeypatch, capfd):
     """Add test coverage for clear_failure operation when to be retained."""

-    def _is(db, spec):
+    def _is(self, spec):
         return True

     # Pretend the spec has been failure locked
-    monkeypatch.setattr(spack.database.Database, "prefix_failure_locked", _is)
+    monkeypatch.setattr(spack.database.FailureTracker, "lock_taken", _is)

-    s = spack.spec.Spec("a")
-    spack.store.STORE.db.clear_failure(s)
+    s = spack.spec.Spec("a").concretized()
+    spack.store.STORE.failure_tracker.clear(s)
     out = capfd.readouterr()[0]
     assert "Retaining failure marking" in out

@@ -831,16 +831,16 @@ def _is(db, spec):
 def test_clear_failure_forced(default_mock_concretization, mutable_database, monkeypatch, capfd):
     """Add test coverage for clear_failure operation when force."""

-    def _is(db, spec):
+    def _is(self, spec):
         return True

     # Pretend the spec has been failure locked
-    monkeypatch.setattr(spack.database.Database, "prefix_failure_locked", _is)
+    monkeypatch.setattr(spack.database.FailureTracker, "lock_taken", _is)
     # Ensure raise OSError when try to remove the non-existent marking
-    monkeypatch.setattr(spack.database.Database, "prefix_failure_marked", _is)
+    monkeypatch.setattr(spack.database.FailureTracker, "persistent_mark", _is)

     s = default_mock_concretization("a")
-    spack.store.STORE.db.clear_failure(s, force=True)
+    spack.store.STORE.failure_tracker.clear(s, force=True)
     out = capfd.readouterr()[1]
     assert "Removing failure marking despite lock" in out
     assert "Unable to remove failure marking" in out
@@ -858,55 +858,34 @@ def _raise_exc(lock):

     with tmpdir.as_cwd():
         s = default_mock_concretization("a")
-        spack.store.STORE.db.mark_failed(s)
+        spack.store.STORE.failure_tracker.mark(s)

     out = str(capsys.readouterr()[1])
     assert "Unable to mark a as failed" in out

     # Clean up the failure mark to ensure it does not interfere with other
     # tests using the same spec.
-    del spack.store.STORE.db._prefix_failures[s.prefix]
+    spack.store.STORE.failure_tracker.clear_all()


 @pytest.mark.db
 def test_prefix_failed(default_mock_concretization, mutable_database, monkeypatch):
-    """Add coverage to prefix_failed operation."""
-
-    def _is(db, spec):
-        return True
+    """Add coverage to failed operation."""

     s = default_mock_concretization("a")

     # Confirm the spec is not already marked as failed
-    assert not spack.store.STORE.db.prefix_failed(s)
+    assert not spack.store.STORE.failure_tracker.has_failed(s)

     # Check that a failure entry is sufficient
-    spack.store.STORE.db._prefix_failures[s.prefix] = None
-    assert spack.store.STORE.db.prefix_failed(s)
+    spack.store.STORE.failure_tracker.mark(s)
+    assert spack.store.STORE.failure_tracker.has_failed(s)

     # Remove the entry and check again
-    del spack.store.STORE.db._prefix_failures[s.prefix]
-    assert not spack.store.STORE.db.prefix_failed(s)
+    spack.store.STORE.failure_tracker.clear(s)
+    assert not spack.store.STORE.failure_tracker.has_failed(s)

     # Now pretend that the prefix failure is locked
-    monkeypatch.setattr(spack.database.Database, "prefix_failure_locked", _is)
-    assert spack.store.STORE.db.prefix_failed(s)
-
-
-def test_prefix_read_lock_error(default_mock_concretization, mutable_database, monkeypatch):
-    """Cover the prefix read lock exception."""
-
-    def _raise(db, spec):
-        raise lk.LockError("Mock lock error")
-
-    s = default_mock_concretization("a")
-
-    # Ensure subsequent lock operations fail
-    monkeypatch.setattr(lk.Lock, "acquire_read", _raise)
-
-    with pytest.raises(Exception):
-        with spack.store.STORE.db.prefix_read_lock(s):
-            assert False
+    monkeypatch.setattr(spack.database.FailureTracker, "lock_taken", lambda self, spec: True)
+    assert spack.store.STORE.failure_tracker.has_failed(s)


 def test_prefix_write_lock_error(default_mock_concretization, mutable_database, monkeypatch):
@@ -921,7 +900,7 @@ def _raise(db, spec):
     monkeypatch.setattr(lk.Lock, "acquire_write", _raise)

     with pytest.raises(Exception):
-        with spack.store.STORE.db.prefix_write_lock(s):
+        with spack.store.STORE.prefix_locker.write_lock(s):
             assert False
@@ -16,7 +16,7 @@ def test_false_directives_do_not_exist(mock_packages):
     """Ensure directives that evaluate to False at import time are added to
     dicts on packages.
     """
-    cls = spack.repo.path.get_pkg_class("when-directives-false")
+    cls = spack.repo.PATH.get_pkg_class("when-directives-false")
     assert not cls.dependencies
     assert not cls.resources
     assert not cls.patches
@@ -26,7 +26,7 @@ def test_true_directives_exist(mock_packages):
     """Ensure directives that evaluate to True at import time are added to
     dicts on packages.
     """
-    cls = spack.repo.path.get_pkg_class("when-directives-true")
+    cls = spack.repo.PATH.get_pkg_class("when-directives-true")

     assert cls.dependencies
     assert spack.spec.Spec() in cls.dependencies["extendee"]
@@ -40,7 +40,7 @@ def test_true_directives_exist(mock_packages):


 def test_constraints_from_context(mock_packages):
-    pkg_cls = spack.repo.path.get_pkg_class("with-constraint-met")
+    pkg_cls = spack.repo.PATH.get_pkg_class("with-constraint-met")

     assert pkg_cls.dependencies
     assert spack.spec.Spec("@1.0") in pkg_cls.dependencies["b"]
@@ -51,7 +51,7 @@ def test_constraints_from_context(mock_packages):

 @pytest.mark.regression("26656")
 def test_constraints_from_context_are_merged(mock_packages):
-    pkg_cls = spack.repo.path.get_pkg_class("with-constraint-met")
+    pkg_cls = spack.repo.PATH.get_pkg_class("with-constraint-met")

     assert pkg_cls.dependencies
     assert spack.spec.Spec("@0.14:15 ^b@3.8:4.0") in pkg_cls.dependencies["c"]
@@ -68,7 +68,7 @@ def test_extends_spec(config, mock_packages):

 @pytest.mark.regression("34368")
 def test_error_on_anonymous_dependency(config, mock_packages):
-    pkg = spack.repo.path.get_pkg_class("a")
+    pkg = spack.repo.PATH.get_pkg_class("a")
     with pytest.raises(spack.directives.DependencyError):
         spack.directives._depends_on(pkg, "@4.5")

@@ -85,7 +85,7 @@ def test_error_on_anonymous_dependency(config, mock_packages):
     ],
 )
 def test_maintainer_directive(config, mock_packages, package_name, expected_maintainers):
-    pkg_cls = spack.repo.path.get_pkg_class(package_name)
+    pkg_cls = spack.repo.PATH.get_pkg_class(package_name)
     assert pkg_cls.maintainers == expected_maintainers
@@ -79,7 +79,7 @@ def test_read_and_write_spec(temporary_store, config, mock_packages):
     layout.
     """
     layout = temporary_store.layout
-    pkg_names = list(spack.repo.path.all_package_names())[:max_packages]
+    pkg_names = list(spack.repo.PATH.all_package_names())[:max_packages]

     for name in pkg_names:
         if name.startswith("external"):
@@ -191,7 +191,7 @@ def test_handle_unknown_package(temporary_store, config, mock_packages):
 def test_find(temporary_store, config, mock_packages):
     """Test that finding specs within an install layout works."""
     layout = temporary_store.layout
-    package_names = list(spack.repo.path.all_package_names())[:max_packages]
+    package_names = list(spack.repo.PATH.all_package_names())[:max_packages]

     # Create install prefixes for all packages in the list
     installed_specs = {}
@@ -100,7 +100,7 @@ def test_fetch(
     t = mock_git_repository.checks[type_of_test]
     h = mock_git_repository.hash

-    pkg_class = spack.repo.path.get_pkg_class("git-test")
+    pkg_class = spack.repo.PATH.get_pkg_class("git-test")
     # This would fail using the default-no-per-version-git check but that
     # isn't included in this test
     monkeypatch.delattr(pkg_class, "git")
@@ -147,7 +147,7 @@ def test_fetch_pkg_attr_submodule_init(
     """

     t = mock_git_repository.checks["default-no-per-version-git"]
-    pkg_class = spack.repo.path.get_pkg_class("git-test")
+    pkg_class = spack.repo.PATH.get_pkg_class("git-test")
     # For this test, the version args don't specify 'git' (which is
     # the majority of version specifications)
     monkeypatch.setattr(pkg_class, "git", mock_git_repository.url)
@@ -179,7 +179,7 @@ def test_adhoc_version_submodules(
 ):
     t = mock_git_repository.checks["tag"]
     # Construct the package under test
-    pkg_class = spack.repo.path.get_pkg_class("git-test")
+    pkg_class = spack.repo.PATH.get_pkg_class("git-test")
     monkeypatch.setitem(pkg_class.versions, Version("git"), t.args)
     monkeypatch.setattr(pkg_class, "git", "file://%s" % mock_git_repository.path, raising=False)
@@ -24,7 +24,7 @@ def test_static_graph_mpileaks(config, mock_packages):
     assert ' "libelf" [label="libelf"]\n' in dot
     assert ' "libdwarf" [label="libdwarf"]\n' in dot

-    mpi_providers = spack.repo.path.providers_for("mpi")
+    mpi_providers = spack.repo.PATH.providers_for("mpi")
     for spec in mpi_providers:
         assert ('"mpileaks" -> "%s"' % spec.name) in dot
         assert ('"callpath" -> "%s"' % spec.name) in dot
@@ -52,7 +52,7 @@ def test_uninstall_non_existing_package(install_mockery, mock_fetch, monkeypatch

     # Mock deletion of the package
     spec._package = None
-    monkeypatch.setattr(spack.repo.path, "get", find_nothing)
+    monkeypatch.setattr(spack.repo.PATH, "get", find_nothing)
     with pytest.raises(spack.repo.UnknownPackageError):
         spec.package

@@ -159,7 +159,7 @@ def test_partial_install_delete_prefix_and_stage(install_mockery, mock_fetch, wo
     s.package.remove_prefix = rm_prefix_checker.remove_prefix

     # must clear failure markings for the package before re-installing it
-    spack.store.STORE.db.clear_failure(s, True)
+    spack.store.STORE.failure_tracker.clear(s, True)

     s.package.set_install_succeed()
     s.package.stage = MockStage(s.package.stage)
@@ -354,7 +354,7 @@ def test_partial_install_keep_prefix(install_mockery, mock_fetch, monkeypatch, w
     assert os.path.exists(s.package.prefix)

     # must clear failure markings for the package before re-installing it
-    spack.store.STORE.db.clear_failure(s, True)
+    spack.store.STORE.failure_tracker.clear(s, True)

     s.package.set_install_succeed()
     s.package.stage = MockStage(s.package.stage)
@@ -616,7 +616,7 @@ def _install(src, dest):
 def test_unconcretized_install(install_mockery, mock_fetch, mock_packages):
     """Test attempts to perform install phases with unconcretized spec."""
     spec = Spec("trivial-install-test-package")
-    pkg_cls = spack.repo.path.get_pkg_class(spec.name)
+    pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)

     with pytest.raises(ValueError, match="must have a concrete spec"):
         pkg_cls(spec).do_install()
@@ -19,6 +19,7 @@
 import spack.compilers
 import spack.concretize
 import spack.config
 import spack.database
 import spack.installer as inst
 import spack.package_base
 import spack.package_prefs as prefs
@@ -364,7 +365,7 @@ def test_ensure_locked_err(install_mockery, monkeypatch, tmpdir, capsys):
     """Test _ensure_locked when a non-lock exception is raised."""
     mock_err_msg = "Mock exception error"

-    def _raise(lock, timeout):
+    def _raise(lock, timeout=None):
         raise RuntimeError(mock_err_msg)

     const_arg = installer_args(["trivial-install-test-package"], {})
@@ -432,7 +433,7 @@ def test_ensure_locked_new_lock(install_mockery, tmpdir, lock_type, reads, write


 def test_ensure_locked_new_warn(install_mockery, monkeypatch, tmpdir, capsys):
-    orig_pl = spack.database.Database.prefix_lock
+    orig_pl = spack.database.SpecLocker.lock

     def _pl(db, spec, timeout):
         lock = orig_pl(db, spec, timeout)
@@ -444,7 +445,7 @@ def _pl(db, spec, timeout):
     installer = create_installer(const_arg)
     spec = installer.build_requests[0].pkg.spec

-    monkeypatch.setattr(spack.database.Database, "prefix_lock", _pl)
+    monkeypatch.setattr(spack.database.SpecLocker, "lock", _pl)

     lock_type = "read"
     ltype, lock = installer._ensure_locked(lock_type, spec.package)
@@ -457,7 +458,7 @@ def _pl(db, spec, timeout):

 def test_package_id_err(install_mockery):
     s = spack.spec.Spec("trivial-install-test-package")
-    pkg_cls = spack.repo.path.get_pkg_class(s.name)
+    pkg_cls = spack.repo.PATH.get_pkg_class(s.name)
     with pytest.raises(ValueError, match="spec is not concretized"):
         inst.package_id(pkg_cls(s))

@@ -597,59 +598,50 @@ def _repoerr(repo, name):
     assert "Couldn't copy in provenance for cmake" in out


-def test_clear_failures_success(install_mockery):
+def test_clear_failures_success(tmpdir):
     """Test the clear_failures happy path."""
+    failures = spack.database.FailureTracker(str(tmpdir), default_timeout=0.1)

-    # Set up a test prefix failure lock
-    lock = lk.Lock(
-        spack.store.STORE.db.prefix_fail_path, start=1, length=1, default_timeout=1e-9, desc="test"
-    )
-    try:
-        lock.acquire_write()
-    except lk.LockTimeoutError:
-        tty.warn("Failed to write lock the test install failure")
-    spack.store.STORE.db._prefix_failures["test"] = lock
+    spec = spack.spec.Spec("a")
+    spec._mark_concrete()

-    # Set up a fake failure mark (or file)
-    fs.touch(os.path.join(spack.store.STORE.db._failure_dir, "test"))
+    failures.mark(spec)
+    assert failures.has_failed(spec)

     # Now clear failure tracking
-    inst.clear_failures()
+    failures.clear_all()

     # Ensure there are no cached failure locks or failure marks
-    assert len(spack.store.STORE.db._prefix_failures) == 0
-    assert len(os.listdir(spack.store.STORE.db._failure_dir)) == 0
+    assert len(failures.locker.locks) == 0
+    assert len(os.listdir(failures.dir)) == 0

     # Ensure the core directory and failure lock file still exist
-    assert os.path.isdir(spack.store.STORE.db._failure_dir)
+    assert os.path.isdir(failures.dir)

     # Locks on windows are a no-op
     if sys.platform != "win32":
-        assert os.path.isfile(spack.store.STORE.db.prefix_fail_path)
+        assert os.path.isfile(failures.locker.lock_path)


-def test_clear_failures_errs(install_mockery, monkeypatch, capsys):
+@pytest.mark.xfail(sys.platform == "win32", reason="chmod does not prevent removal on Win")
+def test_clear_failures_errs(tmpdir, capsys):
     """Test the clear_failures exception paths."""
-    orig_fn = os.remove
-    err_msg = "Mock os remove"
+    failures = spack.database.FailureTracker(str(tmpdir), default_timeout=0.1)
+    spec = spack.spec.Spec("a")
+    spec._mark_concrete()
+    failures.mark(spec)

-    def _raise_except(path):
-        raise OSError(err_msg)
-
-    # Set up a fake failure mark (or file)
-    fs.touch(os.path.join(spack.store.STORE.db._failure_dir, "test"))
-
-    monkeypatch.setattr(os, "remove", _raise_except)
+    # Make the file marker not writeable, so that clearing_failures fails
+    failures.dir.chmod(0o000)

     # Clear failure tracking
-    inst.clear_failures()
+    failures.clear_all()

     # Ensure expected warning generated
     out = str(capsys.readouterr()[1])
     assert "Unable to remove failure" in out
-    assert err_msg in out

-    # Restore remove for teardown
-    monkeypatch.setattr(os, "remove", orig_fn)
+    failures.dir.chmod(0o750)


 def test_combine_phase_logs(tmpdir):
@@ -694,14 +686,18 @@ def test_combine_phase_logs_does_not_care_about_encoding(tmpdir):
     assert f.read() == data * 2


-def test_check_deps_status_install_failure(install_mockery, monkeypatch):
+def test_check_deps_status_install_failure(install_mockery):
     """Tests that checking the dependency status on a request to install
     'a' fails, if we mark the dependency as failed.
     """
+    s = spack.spec.Spec("a").concretized()
+    for dep in s.traverse(root=False):
+        spack.store.STORE.failure_tracker.mark(dep)
+
     const_arg = installer_args(["a"], {})
     installer = create_installer(const_arg)
     request = installer.build_requests[0]

-    # Make sure the package is identified as failed
-    monkeypatch.setattr(spack.database.Database, "prefix_failed", _true)
-
     with pytest.raises(inst.InstallError, match="install failure"):
         installer._check_deps_status(request)
@@ -1006,7 +1002,7 @@ def test_install_failed(install_mockery, monkeypatch, capsys):
     installer = create_installer(const_arg)

     # Make sure the package is identified as failed
-    monkeypatch.setattr(spack.database.Database, "prefix_failed", _true)
+    monkeypatch.setattr(spack.database.FailureTracker, "has_failed", _true)

     with pytest.raises(inst.InstallError, match="request failed"):
         installer.install()
@@ -1022,7 +1018,7 @@ def test_install_failed_not_fast(install_mockery, monkeypatch, capsys):
     installer = create_installer(const_arg)

     # Make sure the package is identified as failed
-    monkeypatch.setattr(spack.database.Database, "prefix_failed", _true)
+    monkeypatch.setattr(spack.database.FailureTracker, "has_failed", _true)

     with pytest.raises(inst.InstallError, match="request failed"):
         installer.install()
@@ -1121,7 +1117,7 @@ def test_install_fail_fast_on_detect(install_mockery, monkeypatch, capsys):
     #
     # This will prevent b from installing, which will cause the build of a
     # to be skipped.
-    monkeypatch.setattr(spack.database.Database, "prefix_failed", _true)
+    monkeypatch.setattr(spack.database.FailureTracker, "has_failed", _true)

     with pytest.raises(inst.InstallError, match="after first install failure"):
         installer.install()
@@ -181,7 +181,7 @@ def find_nothing(*args):

     # Mock deletion of the package
     spec._package = None
-    monkeypatch.setattr(spack.repo.path, "get", find_nothing)
+    monkeypatch.setattr(spack.repo.PATH, "get", find_nothing)
     with pytest.raises(spack.repo.UnknownPackageError):
         spec.package
@@ -48,7 +48,7 @@ def mpileaks_possible_deps(mock_packages, mpi_names):


 def test_possible_dependencies(mock_packages, mpileaks_possible_deps):
-    pkg_cls = spack.repo.path.get_pkg_class("mpileaks")
+    pkg_cls = spack.repo.PATH.get_pkg_class("mpileaks")
     expanded_possible_deps = pkg_cls.possible_dependencies(expand_virtuals=True)
     assert mpileaks_possible_deps == expanded_possible_deps
     assert {
@@ -62,14 +62,14 @@ def test_possible_dependencies(mock_packages, mpileaks_possible_deps):


 def test_possible_direct_dependencies(mock_packages, mpileaks_possible_deps):
-    pkg_cls = spack.repo.path.get_pkg_class("mpileaks")
+    pkg_cls = spack.repo.PATH.get_pkg_class("mpileaks")
     deps = pkg_cls.possible_dependencies(transitive=False, expand_virtuals=False)
     assert {"callpath": set(), "mpi": set(), "mpileaks": {"callpath", "mpi"}} == deps


 def test_possible_dependencies_virtual(mock_packages, mpi_names):
     expected = dict(
-        (name, set(spack.repo.path.get_pkg_class(name).dependencies)) for name in mpi_names
+        (name, set(spack.repo.PATH.get_pkg_class(name).dependencies)) for name in mpi_names
     )

     # only one mock MPI has a dependency
@@ -79,14 +79,14 @@ def test_possible_dependencies_virtual(mock_packages, mpi_names):


 def test_possible_dependencies_missing(mock_packages):
-    pkg_cls = spack.repo.path.get_pkg_class("missing-dependency")
+    pkg_cls = spack.repo.PATH.get_pkg_class("missing-dependency")
     missing = {}
     pkg_cls.possible_dependencies(transitive=True, missing=missing)
     assert {"this-is-a-missing-dependency"} == missing["missing-dependency"]


 def test_possible_dependencies_with_deptypes(mock_packages):
-    dtbuild1 = spack.repo.path.get_pkg_class("dtbuild1")
+    dtbuild1 = spack.repo.PATH.get_pkg_class("dtbuild1")

     assert {
         "dtbuild1": {"dtrun2", "dtlink2"},
@@ -18,17 +18,17 @@

 def pkg_factory(name):
     """Return a package object tied to an abstract spec"""
-    pkg_cls = spack.repo.path.get_pkg_class(name)
+    pkg_cls = spack.repo.PATH.get_pkg_class(name)
     return pkg_cls(Spec(name))


 @pytest.mark.usefixtures("config", "mock_packages")
 class TestPackage:
     def test_load_package(self):
-        spack.repo.path.get_pkg_class("mpich")
+        spack.repo.PATH.get_pkg_class("mpich")

     def test_package_name(self):
-        pkg_cls = spack.repo.path.get_pkg_class("mpich")
+        pkg_cls = spack.repo.PATH.get_pkg_class("mpich")
         assert pkg_cls.name == "mpich"

     def test_package_filename(self):
@@ -62,7 +62,7 @@ def test_import_package_as(self):
         from spack.pkg.builtin import mock  # noqa: F401

     def test_inheritance_of_diretives(self):
-        pkg_cls = spack.repo.path.get_pkg_class("simple-inheritance")
+        pkg_cls = spack.repo.PATH.get_pkg_class("simple-inheritance")

         # Check dictionaries that should have been filled by directives
         assert len(pkg_cls.dependencies) == 3
@@ -125,7 +125,7 @@ def test_urls_for_versions(mock_packages, config):

 def test_url_for_version_with_no_urls(mock_packages, config):
     spec = Spec("git-test")
-    pkg_cls = spack.repo.path.get_pkg_class(spec.name)
+    pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)
     with pytest.raises(spack.package_base.NoURLError):
         pkg_cls(spec).url_for_version("1.0")

@@ -314,7 +314,7 @@ def test_fetch_options(version_str, digest_end, extra_options):

 def test_package_deprecated_version(mock_packages, mock_fetch, mock_stage):
     spec = Spec("deprecated-versions")
-    pkg_cls = spack.repo.path.get_pkg_class(spec.name)
+    pkg_cls = spack.repo.PATH.get_pkg_class(spec.name)

     assert spack.package_base.deprecated_version(pkg_cls, "1.1.0")
     assert not spack.package_base.deprecated_version(pkg_cls, "1.0.0")
@@ -191,7 +191,7 @@ def test_nested_directives(mock_packages):
     """Ensure pkg data structures are set up properly by nested directives."""
     # this ensures that the patch() directive results were removed
     # properly from the DirectiveMeta._directives_to_be_executed list
-    patcher = spack.repo.path.get_pkg_class("patch-several-dependencies")
+    patcher = spack.repo.PATH.get_pkg_class("patch-several-dependencies")
     assert len(patcher.patches) == 0

     # this ensures that results of dependency patches were properly added
@@ -26,19 +26,19 @@


 def test_provider_index_round_trip(mock_packages):
-    p = ProviderIndex(specs=spack.repo.all_package_names(), repository=spack.repo.path)
+    p = ProviderIndex(specs=spack.repo.all_package_names(), repository=spack.repo.PATH)

     ostream = io.StringIO()
     p.to_json(ostream)

     istream = io.StringIO(ostream.getvalue())
-    q = ProviderIndex.from_json(istream, repository=spack.repo.path)
+    q = ProviderIndex.from_json(istream, repository=spack.repo.PATH)

     assert p == q


 def test_providers_for_simple(mock_packages):
-    p = ProviderIndex(specs=spack.repo.all_package_names(), repository=spack.repo.path)
+    p = ProviderIndex(specs=spack.repo.all_package_names(), repository=spack.repo.PATH)

     blas_providers = p.providers_for("blas")
     assert Spec("netlib-blas") in blas_providers
@@ -51,7 +51,7 @@ def test_providers_for_simple(mock_packages):


 def test_mpi_providers(mock_packages):
-    p = ProviderIndex(specs=spack.repo.all_package_names(), repository=spack.repo.path)
+    p = ProviderIndex(specs=spack.repo.all_package_names(), repository=spack.repo.PATH)

     mpi_2_providers = p.providers_for("mpi@2")
     assert Spec("mpich2") in mpi_2_providers
@@ -64,12 +64,12 @@ def test_mpi_providers(mock_packages):


 def test_equal(mock_packages):
-    p = ProviderIndex(specs=spack.repo.all_package_names(), repository=spack.repo.path)
-    q = ProviderIndex(specs=spack.repo.all_package_names(), repository=spack.repo.path)
+    p = ProviderIndex(specs=spack.repo.all_package_names(), repository=spack.repo.PATH)
+    q = ProviderIndex(specs=spack.repo.all_package_names(), repository=spack.repo.PATH)
     assert p == q


 def test_copy(mock_packages):
-    p = ProviderIndex(specs=spack.repo.all_package_names(), repository=spack.repo.path)
+    p = ProviderIndex(specs=spack.repo.all_package_names(), repository=spack.repo.PATH)
     q = p.copy()
     assert p == q
@@ -59,9 +59,9 @@ def test_repo_unknown_pkg(mutable_mock_repo):
 @pytest.mark.maybeslow
 def test_repo_last_mtime():
     latest_mtime = max(
-        os.path.getmtime(p.module.__file__) for p in spack.repo.path.all_package_classes()
+        os.path.getmtime(p.module.__file__) for p in spack.repo.PATH.all_package_classes()
     )
-    assert spack.repo.path.last_mtime() == latest_mtime
+    assert spack.repo.PATH.last_mtime() == latest_mtime


 def test_repo_invisibles(mutable_mock_repo, extra_repo):
@@ -90,10 +90,10 @@ def test_use_repositories_doesnt_change_class():
     """Test that we don't create the same package module and class multiple times
     when swapping repositories.
     """
-    zlib_cls_outer = spack.repo.path.get_pkg_class("zlib")
-    current_paths = [r.root for r in spack.repo.path.repos]
+    zlib_cls_outer = spack.repo.PATH.get_pkg_class("zlib")
+    current_paths = [r.root for r in spack.repo.PATH.repos]
     with spack.repo.use_repositories(*current_paths):
-        zlib_cls_inner = spack.repo.path.get_pkg_class("zlib")
+        zlib_cls_inner = spack.repo.PATH.get_pkg_class("zlib")
     assert id(zlib_cls_inner) == id(zlib_cls_outer)


@@ -126,7 +126,7 @@ def test_all_virtual_packages_have_default_providers():
     configuration = spack.config.create()
     defaults = configuration.get("packages", scope="defaults")
     default_providers = defaults["all"]["providers"]
-    providers = spack.repo.path.provider_index.providers
+    providers = spack.repo.PATH.provider_index.providers
     default_providers_filename = configuration.scopes["defaults"].get_section_filename("packages")
     for provider in providers:
         assert provider in default_providers, (
@@ -44,7 +44,7 @@ def _mock(pkg_name, spec, deptypes=all_deptypes):
     """
     spec = Spec(spec)
     # Save original dependencies before making any changes.
-    pkg_cls = spack.repo.path.get_pkg_class(pkg_name)
+    pkg_cls = spack.repo.PATH.get_pkg_class(pkg_name)
     if pkg_name not in saved_deps:
         saved_deps[pkg_name] = (pkg_cls, pkg_cls.dependencies.copy())
@@ -659,7 +659,9 @@ def test_source_path_available(self, mock_stage_archive):
         assert source_path.endswith(spack.stage._source_path_subdir)
         assert not os.path.exists(source_path)

-    @pytest.mark.skipif(sys.platform == "win32", reason="Not supported on Windows (yet)")
+    @pytest.mark.skipif(
+        sys.platform == "win32", reason="Windows file permission erroring is not yet supported"
+    )
     @pytest.mark.skipif(getuid() == 0, reason="user is root")
     def test_first_accessible_path(self, tmpdir):
         """Test _first_accessible_path names."""
@@ -691,7 +693,6 @@ def test_first_accessible_path(self, tmpdir):
             # Cleanup
             shutil.rmtree(str(name))

-    @pytest.mark.skipif(sys.platform == "win32", reason="Not supported on Windows (yet)")
     def test_create_stage_root(self, tmpdir, no_path_access):
         """Test create_stage_root permissions."""
         test_dir = tmpdir.join("path")
@@ -755,7 +756,9 @@ def test_resolve_paths(self):

         assert spack.stage._resolve_paths(paths) == res_paths

-    @pytest.mark.skipif(sys.platform == "win32", reason="Not supported on Windows (yet)")
+    @pytest.mark.skipif(
+        sys.platform == "win32", reason="Windows file permission erroring is not yet supported"
+    )
     @pytest.mark.skipif(getuid() == 0, reason="user is root")
     def test_get_stage_root_bad_path(self, clear_stage_root):
         """Ensure an invalid stage path root raises a StageError."""
@@ -864,7 +867,6 @@ def test_diystage_preserve_file(self, tmpdir):
         _file.read() == _readme_contents


-@pytest.mark.skipif(sys.platform == "win32", reason="Not supported on Windows (yet)")
 def test_stage_create_replace_path(tmp_build_stage_dir):
     """Ensure stage creation replaces a non-directory path."""
     _, test_stage_path = tmp_build_stage_dir
@@ -872,16 +874,15 @@ def test_stage_create_replace_path(tmp_build_stage_dir):

     nondir = os.path.join(test_stage_path, "afile")
     touch(nondir)
-    path = str(nondir)
+    path = url_util.path_to_file_url(str(nondir))

-    stage = Stage(path, name="")
+    stage = Stage(path, name="afile")
     stage.create()

     # Ensure the stage path is "converted" to a directory
     assert os.path.isdir(stage.path)
     assert os.path.isdir(nondir)


-@pytest.mark.skipif(sys.platform == "win32", reason="Not supported on Windows (yet)")
 def test_cannot_access(capsys):
     """Ensure can_access dies with the expected error."""
     with pytest.raises(SystemExit):
@@ -97,7 +97,7 @@ def test_tag_get_installed_packages(mock_packages, mock_archive, mock_fetch, ins

 def test_tag_index_round_trip(mock_packages):
     # Assumes at least two packages -- mpich and mpich2 -- have tags
-    mock_index = spack.repo.path.tag_index
+    mock_index = spack.repo.PATH.tag_index
     assert mock_index.tags

     ostream = io.StringIO()
@@ -153,7 +153,7 @@ def test_tag_no_tags(mock_packages):


 def test_tag_update_package(mock_packages):
-    mock_index = spack.repo.path.tag_index
+    mock_index = spack.repo.PATH.tag_index
     index = spack.tag.TagIndex(repository=mock_packages)
     for name in spack.repo.all_package_names():
         index.update_package(name)
@@ -20,9 +20,9 @@

 def compare_sans_name(eq, spec1, spec2):
     content1 = ph.canonical_source(spec1)
-    content1 = content1.replace(spack.repo.path.get_pkg_class(spec1.name).__name__, "TestPackage")
+    content1 = content1.replace(spack.repo.PATH.get_pkg_class(spec1.name).__name__, "TestPackage")
     content2 = ph.canonical_source(spec2)
-    content2 = content2.replace(spack.repo.path.get_pkg_class(spec2.name).__name__, "TestPackage")
+    content2 = content2.replace(spack.repo.PATH.get_pkg_class(spec2.name).__name__, "TestPackage")
     if eq:
         assert content1 == content2
     else:
@@ -31,12 +31,12 @@ def compare_sans_name(eq, spec1, spec2):

 def compare_hash_sans_name(eq, spec1, spec2):
     content1 = ph.canonical_source(spec1)
-    pkg_cls1 = spack.repo.path.get_pkg_class(spec1.name)
+    pkg_cls1 = spack.repo.PATH.get_pkg_class(spec1.name)
     content1 = content1.replace(pkg_cls1.__name__, "TestPackage")
     hash1 = pkg_cls1(spec1).content_hash(content=content1)

     content2 = ph.canonical_source(spec2)
-    pkg_cls2 = spack.repo.path.get_pkg_class(spec2.name)
+    pkg_cls2 = spack.repo.PATH.get_pkg_class(spec2.name)
     content2 = content2.replace(pkg_cls2.__name__, "TestPackage")
     hash2 = pkg_cls2(spec2).content_hash(content=content2)
@@ -3,7 +3,10 @@
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
 import collections
+import email.message
 import os
+import pickle
+import urllib.request

 import pytest

@@ -339,3 +342,25 @@ def get_s3_session(url, method="fetch"):
 def test_s3_url_parsing():
     assert spack.util.s3._parse_s3_endpoint_url("example.com") == "https://example.com"
     assert spack.util.s3._parse_s3_endpoint_url("http://example.com") == "http://example.com"
+
+
+def test_detailed_http_error_pickle(tmpdir):
+    tmpdir.join("response").write("response")
+
+    headers = email.message.Message()
+    headers.add_header("Content-Type", "text/plain")
+
+    # Use a temporary file object as a response body
+    with open(str(tmpdir.join("response")), "rb") as f:
+        error = spack.util.web.DetailedHTTPError(
+            urllib.request.Request("http://example.com"), 404, "Not Found", headers, f
+        )
+
+    deserialized = pickle.loads(pickle.dumps(error))
+
+    assert isinstance(deserialized, spack.util.web.DetailedHTTPError)
+    assert deserialized.code == 404
+    assert deserialized.filename == "http://example.com"
+    assert deserialized.reason == "Not Found"
+    assert str(deserialized.info()) == str(headers)
+    assert str(deserialized) == str(error)
@@ -337,7 +337,7 @@ def package_ast(spec, filter_multimethods=True, source=None):
     spec = spack.spec.Spec(spec)

     if source is None:
-        filename = spack.repo.path.filename_for_package_name(spec.name)
+        filename = spack.repo.PATH.filename_for_package_name(spec.name)
         with open(filename) as f:
             source = f.read()
Some files were not shown because too many files have changed in this diff.