Compare commits


9 Commits

Author            SHA1        Message                                              Date
Wouter Deconinck  026521132f  cairo: import spack.build_systems                    2025-01-08 12:23:03 -06:00
Wouter Deconinck  649d6f901c  cairo: depends_on pixman@0.40: when @1.18.2:         2025-01-08 12:23:03 -06:00
Wouter Deconinck  ff9d763de5  cairo: add v1.18.2                                   2025-01-08 12:23:03 -06:00
Wouter Deconinck  3e466a5d66  cairo: depends_on lzo                                2025-01-08 12:23:03 -06:00
Wouter Deconinck  ec9d3f799e  cairo: depends_on freetype@2.10: for FT_Color        2025-01-08 12:23:03 -06:00
Wouter Deconinck  4928acf3a4  cairo: add 1.17.6 for continuity; doc lines          2025-01-08 12:23:03 -06:00
wdconinc          248b99c1d4  [@spackbot] updating style on behalf of wdconinc     2025-01-08 12:23:03 -06:00
Wouter Deconinck  340a7a564c  cairo: fix variant in spec syntax                    2025-01-08 12:23:03 -06:00
Wouter Deconinck  384afd559b  cairo: "new" version 1.18.0 (now also MesonPackage)  2025-01-08 12:23:03 -06:00
730 changed files with 5180 additions and 7718 deletions

View File

@@ -29,7 +29,7 @@ jobs:
- run: coverage xml
- name: "Upload coverage report to CodeCov"
uses: codecov/codecov-action@1e68e06f1dbfde0e4cefc87efeba9e4643565303
uses: codecov/codecov-action@05f5a9cfad807516dbbef9929c4a42df3eb78766
with:
verbose: true
fail_ci_if_error: false

View File

@@ -2,6 +2,6 @@ black==24.10.0
clingo==5.7.1
flake8==7.1.1
isort==5.13.2
mypy==1.11.2
mypy==1.8.0
types-six==1.17.0.20241205
vermin==1.6.0

View File

@@ -20,7 +20,7 @@ jobs:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: '3.13'
python-version: '3.11'
cache: 'pip'
- name: Install Python Packages
run: |
@@ -39,7 +39,7 @@ jobs:
fetch-depth: 0
- uses: actions/setup-python@0b93645e9fea7318ecaed2b359559ac225c90a2b
with:
python-version: '3.13'
python-version: '3.11'
cache: 'pip'
- name: Install Python packages
run: |
@@ -58,7 +58,7 @@ jobs:
secrets: inherit
with:
with_coverage: ${{ inputs.with_coverage }}
python_version: '3.13'
python_version: '3.11'
# Check that spack can bootstrap the development environment on Python 3.6 - RHEL8
bootstrap-dev-rhel8:
runs-on: ubuntu-latest

View File

@@ -25,6 +25,7 @@ exit 1
# The code above runs this file with our preferred python interpreter.
import os
import os.path
import sys
min_python3 = (3, 6)

View File

@@ -36,7 +36,7 @@ packages:
go-or-gccgo-bootstrap: [go-bootstrap, gcc]
iconv: [libiconv]
ipp: [intel-oneapi-ipp]
java: [openjdk, jdk]
java: [openjdk, jdk, ibm-java]
jpeg: [libjpeg-turbo, libjpeg]
lapack: [openblas, amdlibflame]
libc: [glibc, musl]
@@ -73,27 +73,15 @@ packages:
permissions:
read: world
write: user
cray-fftw:
buildable: false
cray-libsci:
buildable: false
cray-mpich:
buildable: false
cray-mvapich2:
buildable: false
cray-pmi:
buildable: false
egl:
buildable: false
essl:
buildable: false
fujitsu-mpi:
buildable: false
fujitsu-ssl2:
buildable: false
hpcx-mpi:
buildable: false
mpt:
buildable: false
spectrum-mpi:
buildable: false

View File

@@ -170,7 +170,7 @@ bootstrapping.
To register the mirror on the platform where it's supposed to be used, run the following command(s):
% spack bootstrap add --trust local-sources /opt/bootstrap/metadata/sources
% spack bootstrap add --trust local-binaries /opt/bootstrap/metadata/binaries
% spack buildcache update-index /opt/bootstrap/bootstrap_cache
This command needs to be run on a machine with internet access and the resulting folder
has to be moved over to the air-gapped system. Once the local sources are added using the

View File

@@ -56,13 +56,13 @@ If you look at the ``perl`` package, you'll see:
.. code-block:: python
phases = ("configure", "build", "install")
phases = ["configure", "build", "install"]
Similarly, ``cmake`` defines:
.. code-block:: python
phases = ("bootstrap", "build", "install")
phases = ["bootstrap", "build", "install"]
If we look at the ``cmake`` example, this tells Spack's ``PackageBase``
class to run the ``bootstrap``, ``build``, and ``install`` functions
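For a concrete picture of how a ``phases`` sequence drives an installation, here is a toy sketch; the class and driver below are illustrative stand-ins, not Spack's actual ``PackageBase`` machinery:

.. code-block:: python

    class ToyBuilder:
        # Phases run in the order they are declared.
        phases = ("bootstrap", "build", "install")

        def bootstrap(self, pkg, spec, prefix):
            print(f"bootstrapping {pkg}")

        def build(self, pkg, spec, prefix):
            print(f"building {pkg}")

        def install(self, pkg, spec, prefix):
            print(f"installing {pkg} into {prefix}")

    def run_install(builder, pkg, spec, prefix):
        # Look up and invoke each phase function by name, in order.
        for phase in builder.phases:
            getattr(builder, phase)(pkg, spec, prefix)

    run_install(ToyBuilder(), "cmake", None, "/opt/prefix")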

View File

@@ -543,10 +543,10 @@ With either interpreter you can run a single command:
.. code-block:: console
$ spack python -c 'from spack.concretize import concretize_one; concretize_one("python")'
$ spack python -c 'from spack.spec import Spec; Spec("python").concretized()'
...
$ spack python -i ipython -c 'from spack.concretize import concretize_one; concretize_one("python")'
$ spack python -i ipython -c 'from spack.spec import Spec; Spec("python").concretized()'
Out[1]: ...
or a file:

View File

@@ -456,13 +456,14 @@ For instance, the following config options,
tcl:
all:
suffixes:
^python@3: 'python{^python.version.up_to_2}'
^python@3: 'python{^python.version}'
^openblas: 'openblas'
will add a ``python3.12`` suffix to the module names of packages compiled with Python 3.12, and similarly for
all specs depending on ``python@3``. This is useful to know which version of Python a set of Python
extensions is associated with. Likewise, the ``openblas`` string is attached to any program that
has openblas in the spec, most likely via the ``+blas`` variant specification.
will add a ``python-3.12.1`` version string to any packages compiled with
Python matching the spec, ``python@3``. This is useful to know which
version of Python a set of Python extensions is associated with. Likewise, the
``openblas`` string is attached to any program that has openblas in the spec,
most likely via the ``+blas`` variant specification.
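As a rough illustration of how such a suffix template expands, here is a toy expansion of ``'python{^python.version.up_to_2}'`` for ``python@3.12.1``; Spack's real spec-format logic is far more general than this sketch:

.. code-block:: python

    version = "3.12.1"
    up_to_2 = ".".join(version.split(".")[:2])  # "3.12"
    print(f"python{up_to_2}")  # -> python3.12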
The most heavyweight solution to module naming is to change the entire
naming convention for module files. This uses the projections format

View File

@@ -4,7 +4,7 @@ sphinx_design==0.6.1
sphinx-rtd-theme==3.0.2
python-levenshtein==0.26.1
docutils==0.21.2
pygments==2.19.1
pygments==2.18.0
urllib3==2.3.0
pytest==8.3.4
isort==5.13.2

View File

@@ -3,7 +3,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""URL primitives that just require Python standard library."""
import itertools
import os
import os.path
import re
from typing import Optional, Set, Tuple
from urllib.parse import urlsplit, urlunsplit

View File

@@ -75,6 +75,7 @@
"install_tree",
"is_exe",
"join_path",
"last_modification_time_recursive",
"library_extensions",
"mkdirp",
"partition_path",
@@ -1469,36 +1470,15 @@ def set_executable(path):
@system_path_filter
def recursive_mtime_greater_than(path: str, time: float) -> bool:
"""Returns true if any file or dir recursively under `path` has mtime greater than `time`."""
# use bfs order to increase likelihood of early return
queue: Deque[str] = collections.deque([path])
if os.stat(path).st_mtime > time:
return True
while queue:
current = queue.popleft()
try:
entries = os.scandir(current)
except OSError:
continue
with entries:
for entry in entries:
try:
st = entry.stat(follow_symlinks=False)
except OSError:
continue
if st.st_mtime > time:
return True
if entry.is_dir(follow_symlinks=False):
queue.append(entry.path)
return False
def last_modification_time_recursive(path):
path = os.path.abspath(path)
times = [os.stat(path).st_mtime]
times.extend(
os.lstat(os.path.join(root, name)).st_mtime
for root, dirs, files in os.walk(path)
for name in dirs + files
)
return max(times)
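A small usage sketch contrasting the two functions above; the path and cutoff are illustrative, and the import assumes this runs inside a Spack checkout:

.. code-block:: python

    import time

    import llnl.util.filesystem as fsys

    cutoff = time.time() - 3600
    # New API: a boolean question with an early exit on the first hit.
    changed = fsys.recursive_mtime_greater_than("/opt/spack-cache", cutoff)
    # Old pattern: compute the max mtime of the whole tree, then compare.
    # changed = fsys.last_modification_time_recursive("/opt/spack-cache") > cutoff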
@system_path_filter
@@ -1760,7 +1740,8 @@ def find(
def _log_file_access_issue(e: OSError, path: str) -> None:
tty.debug(f"find must skip {path}: {e}")
errno_name = errno.errorcode.get(e.errno, "UNKNOWN")
tty.debug(f"find must skip {path}: {errno_name} {e}")
def _file_id(s: os.stat_result) -> Tuple[int, int]:

View File

@@ -1356,8 +1356,14 @@ def _test_detection_by_executable(pkgs, debug_log, error_cls):
def _compare_extra_attribute(_expected, _detected, *, _spec):
result = []
# Check items are of the same type
if not isinstance(_detected, type(_expected)):
_summary = f'{pkg_name}: error when trying to detect "{_expected}"'
_details = [f"{_detected} was detected instead"]
return [error_cls(summary=_summary, details=_details)]
# If they are string expected is a regex
if isinstance(_expected, str) and isinstance(_detected, str):
if isinstance(_expected, str):
try:
_regex = re.compile(_expected)
except re.error:
@@ -1373,7 +1379,7 @@ def _compare_extra_attribute(_expected, _detected, *, _spec):
_details = [f"{_detected} does not match the regex"]
return [error_cls(summary=_summary, details=_details)]
elif isinstance(_expected, dict) and isinstance(_detected, dict):
if isinstance(_expected, dict):
_not_detected = set(_expected.keys()) - set(_detected.keys())
if _not_detected:
_summary = f"{pkg_name}: cannot detect some attributes for spec {_spec}"
@@ -1388,10 +1394,6 @@ def _compare_extra_attribute(_expected, _detected, *, _spec):
result.extend(
_compare_extra_attribute(_expected[_key], _detected[_key], _spec=_spec)
)
else:
_summary = f'{pkg_name}: error when trying to detect "{_expected}"'
_details = [f"{_detected} was detected instead"]
return [error_cls(summary=_summary, details=_details)]
return result
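The comparison rule in this hunk (strings are treated as regexes, dictionaries are compared key by key, and mismatched types are an error) can be restated as a self-contained toy:

.. code-block:: python

    import re

    def compare(expected, detected):
        # Toy restatement only; the real check also accumulates error details.
        if isinstance(expected, str) and isinstance(detected, str):
            return re.search(expected, detected) is not None
        if isinstance(expected, dict) and isinstance(detected, dict):
            return all(
                key in detected and compare(value, detected[key])
                for key, value in expected.items()
            )
        return False  # type mismatch

    print(compare({"version": r"^1\.\d+"}, {"version": "1.18.2"}))  # True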

View File

@@ -5,7 +5,6 @@
import codecs
import collections
import concurrent.futures
import contextlib
import copy
import hashlib
import io
@@ -24,7 +23,7 @@
import urllib.request
import warnings
from contextlib import closing
from typing import IO, Callable, Dict, Iterable, List, NamedTuple, Optional, Set, Tuple, Union
from typing import IO, Dict, Iterable, List, NamedTuple, Optional, Set, Tuple, Union
import llnl.util.filesystem as fsys
import llnl.util.lang
@@ -92,9 +91,6 @@
CURRENT_BUILD_CACHE_LAYOUT_VERSION = 2
INDEX_HASH_FILE = "index.json.hash"
class BuildCacheDatabase(spack_db.Database):
"""A database for binary buildcaches.
@@ -506,7 +502,7 @@ def _fetch_and_cache_index(self, mirror_url, cache_entry={}):
scheme = urllib.parse.urlparse(mirror_url).scheme
if scheme != "oci" and not web_util.url_exists(
url_util.join(mirror_url, BUILD_CACHE_RELATIVE_PATH, spack_db.INDEX_JSON_FILE)
url_util.join(mirror_url, BUILD_CACHE_RELATIVE_PATH, "index.json")
):
return False
@@ -595,18 +591,32 @@ def file_matches(f: IO[bytes], regex: llnl.util.lang.PatternBytes) -> bool:
f.seek(0)
def specs_to_relocate(spec: spack.spec.Spec) -> List[spack.spec.Spec]:
"""Return the set of specs that may be referenced in the install prefix of the provided spec.
We currently include non-external transitive link and direct run dependencies."""
specs = [
def deps_to_relocate(spec):
"""Return the transitive link and direct run dependencies of the spec.
This is a special traversal for dependencies we need to consider when relocating a package.
Package binaries, scripts, and other files may refer to the prefixes of dependencies, so
we need to rewrite those locations when dependencies are in a different place at install time
than they were at build time.
This traversal covers transitive link dependencies and direct run dependencies because:
1. Spack adds RPATHs for transitive link dependencies so that packages can find needed
dependency libraries.
2. Packages may call any of their *direct* run dependencies (and may bake their paths into
binaries or scripts), so we also need to search for run dependency prefixes when relocating.
This returns a deduplicated list of transitive link dependencies and direct run dependencies.
"""
deps = [
s
for s in itertools.chain(
spec.traverse(root=True, deptype="link", order="breadth", key=traverse.by_dag_hash),
spec.dependencies(deptype="run"),
spec.traverse(root=True, deptype="link"), spec.dependencies(deptype="run")
)
if not s.external
]
return list(llnl.util.lang.dedupe(specs, key=lambda s: s.dag_hash()))
return llnl.util.lang.dedupe(deps, key=lambda s: s.dag_hash())
def get_buildinfo_dict(spec):
@@ -620,7 +630,7 @@ def get_buildinfo_dict(spec):
# "relocate_binaries": [],
# "relocate_links": [],
"hardlinks_deduped": True,
"hash_to_prefix": {d.dag_hash(): str(d.prefix) for d in specs_to_relocate(spec)},
"hash_to_prefix": {d.dag_hash(): str(d.prefix) for d in deps_to_relocate(spec)},
}
@@ -673,24 +683,19 @@ def sign_specfile(key: str, specfile_path: str) -> str:
def _read_specs_and_push_index(
file_list: List[str],
read_method: Callable,
cache_prefix: str,
db: BuildCacheDatabase,
temp_dir: str,
concurrency: int,
file_list, read_method, cache_prefix, db: BuildCacheDatabase, temp_dir, concurrency
):
"""Read all the specs listed in the provided list, using thread given thread parallelism,
generate the index, and push it to the mirror.
Args:
file_list: List of urls or file paths pointing at spec files to read
file_list (list(str)): List of urls or file paths pointing at spec files to read
read_method: A function taking a single argument, either a url or a file path,
and which reads the spec file at that location, and returns the spec.
cache_prefix: prefix of the build cache on s3 where index should be pushed.
cache_prefix (str): prefix of the build cache on s3 where index should be pushed.
db: A spack database used for adding specs and then writing the index.
temp_dir: Location to write index.json and hash for pushing
concurrency: Number of parallel processes to use when fetching
temp_dir (str): Location to write index.json and hash for pushing
concurrency (int): Number of parallel processes to use when fetching
"""
for file in file_list:
contents = read_method(file)
@@ -708,7 +713,7 @@ def _read_specs_and_push_index(
# Now generate the index, compute its hash, and push the two files to
# the mirror.
index_json_path = os.path.join(temp_dir, spack_db.INDEX_JSON_FILE)
index_json_path = os.path.join(temp_dir, "index.json")
with open(index_json_path, "w", encoding="utf-8") as f:
db._write_to_file(f)
@@ -718,14 +723,14 @@ def _read_specs_and_push_index(
index_hash = compute_hash(index_string)
# Write the hash out to a local file
index_hash_path = os.path.join(temp_dir, INDEX_HASH_FILE)
index_hash_path = os.path.join(temp_dir, "index.json.hash")
with open(index_hash_path, "w", encoding="utf-8") as f:
f.write(index_hash)
# Push the index itself
web_util.push_to_url(
index_json_path,
url_util.join(cache_prefix, spack_db.INDEX_JSON_FILE),
url_util.join(cache_prefix, "index.json"),
keep_original=False,
extra_args={"ContentType": "application/json", "CacheControl": "no-cache"},
)
@@ -733,7 +738,7 @@ def _read_specs_and_push_index(
# Push the hash
web_util.push_to_url(
index_hash_path,
url_util.join(cache_prefix, INDEX_HASH_FILE),
url_util.join(cache_prefix, "index.json.hash"),
keep_original=False,
extra_args={"ContentType": "text/plain", "CacheControl": "no-cache"},
)
@@ -802,7 +807,7 @@ def url_read_method(url):
try:
_, _, spec_file = web_util.read_from_url(url)
contents = codecs.getreader("utf-8")(spec_file).read()
except (web_util.SpackWebError, OSError) as e:
except web_util.SpackWebError as e:
tty.error(f"Error reading specfile: {url}: {e}")
return contents
@@ -870,12 +875,9 @@ def _url_generate_package_index(url: str, tmpdir: str, concurrency: int = 32):
tty.debug(f"Retrieving spec descriptor files from {url} to build index")
db = BuildCacheDatabase(tmpdir)
db._write()
try:
_read_specs_and_push_index(
file_list, read_fn, url, db, str(db.database_directory), concurrency
)
_read_specs_and_push_index(file_list, read_fn, url, db, db.database_directory, concurrency)
except Exception as e:
raise GenerateIndexError(f"Encountered problem pushing package index to {url}: {e}") from e
@@ -1110,7 +1112,7 @@ def _exists_in_buildcache(spec: spack.spec.Spec, tmpdir: str, out_url: str) -> E
def prefixes_to_relocate(spec):
prefixes = [s.prefix for s in specs_to_relocate(spec)]
prefixes = [s.prefix for s in deps_to_relocate(spec)]
prefixes.append(spack.hooks.sbang.sbang_install_path())
prefixes.append(str(spack.store.STORE.layout.root))
return prefixes
@@ -1789,7 +1791,7 @@ def _oci_update_index(
db.mark(spec, "in_buildcache", True)
# Create the index.json file
index_json_path = os.path.join(tmpdir, spack_db.INDEX_JSON_FILE)
index_json_path = os.path.join(tmpdir, "index.json")
with open(index_json_path, "w", encoding="utf-8") as f:
db._write_to_file(f)
@@ -2010,7 +2012,7 @@ def fetch_url_to_mirror(url):
# Download the config = spec.json and the relevant tarball
try:
manifest = json.load(response)
manifest = json.loads(response.read())
spec_digest = spack.oci.image.Digest.from_string(manifest["config"]["digest"])
tarball_digest = spack.oci.image.Digest.from_string(
manifest["layers"][-1]["digest"]
@@ -2137,9 +2139,10 @@ def fetch_url_to_mirror(url):
def dedupe_hardlinks_if_necessary(root, buildinfo):
"""Updates a buildinfo dict for old archives that did not dedupe hardlinks. De-duping hardlinks
is necessary when relocating files in parallel and in-place. This means we must preserve inodes
when relocating."""
"""Updates a buildinfo dict for old archives that did
not dedupe hardlinks. De-duping hardlinks is necessary
when relocating files in parallel and in-place. This
means we must preserve inodes when relocating."""
# New archives don't need this.
if buildinfo.get("hardlinks_deduped", False):
@@ -2168,48 +2171,65 @@ def dedupe_hardlinks_if_necessary(root, buildinfo):
buildinfo[key] = new_list
def relocate_package(spec: spack.spec.Spec) -> None:
"""Relocate binaries and text files in the given spec prefix, based on its buildinfo file."""
spec_prefix = str(spec.prefix)
buildinfo = read_buildinfo_file(spec_prefix)
def relocate_package(spec):
"""
Relocate the given package
"""
workdir = str(spec.prefix)
buildinfo = read_buildinfo_file(workdir)
new_layout_root = str(spack.store.STORE.layout.root)
new_prefix = str(spec.prefix)
new_rel_prefix = str(os.path.relpath(new_prefix, new_layout_root))
new_spack_prefix = str(spack.paths.prefix)
old_sbang_install_path = None
if "sbang_install_path" in buildinfo:
old_sbang_install_path = str(buildinfo["sbang_install_path"])
old_layout_root = str(buildinfo["buildpath"])
old_spack_prefix = str(buildinfo.get("spackprefix"))
old_rel_prefix = buildinfo.get("relative_prefix")
old_prefix = os.path.join(old_layout_root, old_rel_prefix)
rel = buildinfo.get("relative_rpaths", False)
# Warn about old style tarballs created with the --rel flag (removed in Spack v0.20)
if buildinfo.get("relative_rpaths", False):
tty.warn(
f"Tarball for {spec} uses relative rpaths, which can cause library loading issues."
)
# In Spack 0.19 and older prefix_to_hash was the default and externals were not dropped, so
# prefixes were not unique.
# In the past prefix_to_hash was the default and externals were not dropped, so prefixes
# were not unique.
if "hash_to_prefix" in buildinfo:
hash_to_old_prefix = buildinfo["hash_to_prefix"]
elif "prefix_to_hash" in buildinfo:
hash_to_old_prefix = {v: k for (k, v) in buildinfo["prefix_to_hash"].items()}
hash_to_old_prefix = dict((v, k) for (k, v) in buildinfo["prefix_to_hash"].items())
else:
raise NewLayoutException(
"Package tarball was created from an install prefix with a different directory layout "
"and an older buildcache create implementation. It cannot be relocated."
)
hash_to_old_prefix = dict()
prefix_to_prefix: Dict[str, str] = {}
if old_rel_prefix != new_rel_prefix and not hash_to_old_prefix:
msg = "Package tarball was created from an install "
msg += "prefix with a different directory layout and an older "
msg += "buildcache create implementation. It cannot be relocated."
raise NewLayoutException(msg)
if "sbang_install_path" in buildinfo:
old_sbang_install_path = str(buildinfo["sbang_install_path"])
prefix_to_prefix[old_sbang_install_path] = spack.hooks.sbang.sbang_install_path()
# Spurious replacements (e.g. sbang) will cause issues with binaries
# For example, the new sbang can be longer than the old one.
# Hence 2 dictionaries are maintained here.
prefix_to_prefix_text = collections.OrderedDict()
prefix_to_prefix_bin = collections.OrderedDict()
# First match specific prefix paths. Possibly the *local* install prefix of some dependency is
# in an upstream, so we cannot assume the original spack store root can be mapped uniformly to
# the new spack store root.
if old_sbang_install_path:
install_path = spack.hooks.sbang.sbang_install_path()
prefix_to_prefix_text[old_sbang_install_path] = install_path
# If the spec is spliced, we need to handle the simultaneous mapping from the old install_tree
# to the new install_tree and from the build_spec to the spliced spec. Because foo.build_spec
# is foo for any non-spliced spec, we can simplify by checking for spliced-in nodes by checking
# for nodes not in the build_spec without any explicit check for whether the spec is spliced.
# An analog in this algorithm is any spec that shares a name or provides the same virtuals in
# the context of the relevant root spec. This ensures that the analog for a spec s is the spec
# that s replaced when we spliced.
relocation_specs = specs_to_relocate(spec)
# First match specific prefix paths. Possibly the *local* install prefix
# of some dependency is in an upstream, so we cannot assume the original
# spack store root can be mapped uniformly to the new spack store root.
#
# If the spec is spliced, we need to handle the simultaneous mapping
# from the old install_tree to the new install_tree and from the build_spec
# to the spliced spec.
# Because foo.build_spec is foo for any non-spliced spec, we can simplify
# by checking for spliced-in nodes by checking for nodes not in the build_spec
# without any explicit check for whether the spec is spliced.
# An analog in this algorithm is any spec that shares a name or provides the same virtuals
# in the context of the relevant root spec. This ensures that the analog for a spec s
# is the spec that s replaced when we spliced.
relocation_specs = deps_to_relocate(spec)
build_spec_ids = set(id(s) for s in spec.build_spec.traverse(deptype=dt.ALL & ~dt.BUILD))
for s in relocation_specs:
analog = s
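To illustrate the ordering constraint described in the comments above, where specific (possibly upstream) dependency prefixes must be registered before the generic store-root fallback, consider a toy text relocation with made-up paths:

.. code-block:: python

    prefix_to_prefix = {
        "/old/store/zlib-abc1234": "/upstream/store/zlib-abc1234",  # specific first
        "/old/store": "/new/store",  # generic fallback last
    }

    def relocate(text: str) -> str:
        # dicts preserve insertion order, so the specific mappings win.
        for old, new in prefix_to_prefix.items():
            text = text.replace(old, new)
        return text

    print(relocate("rpath=/old/store/zlib-abc1234/lib:/old/store/foo-def5678/lib"))
    # rpath=/upstream/store/zlib-abc1234/lib:/new/store/foo-def5678/lib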
@@ -2228,66 +2248,98 @@ def relocate_package(spec: spack.spec.Spec) -> None:
lookup_dag_hash = analog.dag_hash()
if lookup_dag_hash in hash_to_old_prefix:
old_dep_prefix = hash_to_old_prefix[lookup_dag_hash]
prefix_to_prefix[old_dep_prefix] = str(s.prefix)
prefix_to_prefix_bin[old_dep_prefix] = str(s.prefix)
prefix_to_prefix_text[old_dep_prefix] = str(s.prefix)
# Only then add the generic fallback of install prefix -> install prefix.
prefix_to_prefix[old_layout_root] = str(spack.store.STORE.layout.root)
prefix_to_prefix_text[old_prefix] = new_prefix
prefix_to_prefix_bin[old_prefix] = new_prefix
prefix_to_prefix_text[old_layout_root] = new_layout_root
prefix_to_prefix_bin[old_layout_root] = new_layout_root
# Delete identity mappings from prefix_to_prefix
prefix_to_prefix = {k: v for k, v in prefix_to_prefix.items() if k != v}
# This is vestigial code for the *old* location of sbang. Previously,
# sbang was a bash script, and it lived in the spack prefix. It is
# now a POSIX script that lives in the install prefix. Old packages
# will have the old sbang location in their shebangs.
orig_sbang = "#!/bin/bash {0}/bin/sbang".format(old_spack_prefix)
new_sbang = spack.hooks.sbang.sbang_shebang_line()
prefix_to_prefix_text[orig_sbang] = new_sbang
# If there's nothing to relocate, we're done.
if not prefix_to_prefix:
return
tty.debug("Relocating package from", "%s to %s." % (old_layout_root, new_layout_root))
for old, new in prefix_to_prefix.items():
tty.debug(f"Relocating: {old} => {new}.")
# Old archives maybe have hardlinks repeated.
dedupe_hardlinks_if_necessary(workdir, buildinfo)
# Old archives may have hardlinks repeated.
dedupe_hardlinks_if_necessary(spec_prefix, buildinfo)
def is_backup_file(file):
return file.endswith("~")
# Text files containing the prefix text
textfiles = [os.path.join(spec_prefix, f) for f in buildinfo["relocate_textfiles"]]
binaries = [os.path.join(spec_prefix, f) for f in buildinfo.get("relocate_binaries")]
links = [os.path.join(spec_prefix, f) for f in buildinfo.get("relocate_links", [])]
text_names = list()
for filename in buildinfo["relocate_textfiles"]:
text_name = os.path.join(workdir, filename)
# Don't add backup files generated by filter_file during install step.
if not is_backup_file(text_name):
text_names.append(text_name)
platform = spack.platforms.by_name(spec.platform)
if "macho" in platform.binary_formats:
relocate.relocate_macho_binaries(binaries, prefix_to_prefix)
elif "elf" in platform.binary_formats:
relocate.relocate_elf_binaries(binaries, prefix_to_prefix)
# If we are not installing back to the same install tree do the relocation
if old_prefix != new_prefix:
files_to_relocate = [
os.path.join(workdir, filename) for filename in buildinfo.get("relocate_binaries")
]
# If the buildcache was not created with relativized rpaths
# do the relocation of path in binaries
platform = spack.platforms.by_name(spec.platform)
if "macho" in platform.binary_formats:
relocate.relocate_macho_binaries(
files_to_relocate,
old_layout_root,
new_layout_root,
prefix_to_prefix_bin,
rel,
old_prefix,
new_prefix,
)
elif "elf" in platform.binary_formats and not rel:
# The new ELF dynamic section relocation logic only handles absolute to
# absolute relocation.
relocate.new_relocate_elf_binaries(files_to_relocate, prefix_to_prefix_bin)
elif "elf" in platform.binary_formats and rel:
relocate.relocate_elf_binaries(
files_to_relocate,
old_layout_root,
new_layout_root,
prefix_to_prefix_bin,
rel,
old_prefix,
new_prefix,
)
relocate.relocate_links(links, prefix_to_prefix)
relocate.relocate_text(textfiles, prefix_to_prefix)
changed_files = relocate.relocate_text_bin(binaries, prefix_to_prefix)
# Relocate links to the new install prefix
links = [os.path.join(workdir, f) for f in buildinfo.get("relocate_links", [])]
relocate.relocate_links(links, prefix_to_prefix_bin)
# Add ad-hoc signatures to patched macho files when on macOS.
if "macho" in platform.binary_formats and sys.platform == "darwin":
codesign = which("codesign")
if not codesign:
return
for binary in changed_files:
# preserve the original inode by running codesign on a copy
with fsys.edit_in_place_through_temporary_file(binary) as tmp_binary:
codesign("-fs-", tmp_binary)
# For all buildcaches
# relocate the install prefixes in text files including dependencies
relocate.relocate_text(text_names, prefix_to_prefix_text)
install_manifest = os.path.join(
spec.prefix,
spack.store.STORE.layout.metadata_dir,
spack.store.STORE.layout.manifest_file_name,
)
if not os.path.exists(install_manifest):
spec_id = spec.format("{name}/{hash:7}")
tty.warn("No manifest file in tarball for spec %s" % spec_id)
# relocate the install prefixes in binary files including dependencies
changed_files = relocate.relocate_text_bin(files_to_relocate, prefix_to_prefix_bin)
# overwrite old metadata with new
if spec.spliced:
# rewrite spec on disk
spack.store.STORE.layout.write_spec(spec, spack.store.STORE.layout.spec_file_path(spec))
# Add ad-hoc signatures to patched macho files when on macOS.
if "macho" in platform.binary_formats and sys.platform == "darwin":
codesign = which("codesign")
if not codesign:
return
for binary in changed_files:
# preserve the original inode by running codesign on a copy
with fsys.edit_in_place_through_temporary_file(binary) as tmp_binary:
codesign("-fs-", tmp_binary)
# de-cache the install manifest
with contextlib.suppress(FileNotFoundError):
os.unlink(install_manifest)
# If we are installing back to the same location
# relocate the sbang location if the spack directory changed
else:
if old_spack_prefix != new_spack_prefix:
relocate.relocate_text(text_names, prefix_to_prefix_text)
def _extract_inner_tarball(spec, filename, extract_to, signature_required: bool, remote_checksum):
@@ -2455,6 +2507,15 @@ def extract_tarball(spec, download_result, force=False, timer=timer.NULL_TIMER):
except Exception as e:
shutil.rmtree(spec.prefix, ignore_errors=True)
raise e
else:
manifest_file = os.path.join(
spec.prefix,
spack.store.STORE.layout.metadata_dir,
spack.store.STORE.layout.manifest_file_name,
)
if not os.path.exists(manifest_file):
spec_id = spec.format("{name}/{hash:7}")
tty.warn("No manifest file in tarball for spec %s" % spec_id)
finally:
if tmpdir:
shutil.rmtree(tmpdir, ignore_errors=True)
@@ -2559,6 +2620,10 @@ def install_root_node(
tty.msg('Installing "{0}" from a buildcache'.format(spec.format()))
extract_tarball(spec, download_result, force)
spec.package.windows_establish_runtime_linkage()
if spec.spliced: # overwrite old metadata with new
spack.store.STORE.layout.write_spec(
spec, spack.store.STORE.layout.spec_file_path(spec)
)
spack.hooks.post_install(spec, False)
spack.store.STORE.db.add(spec, allow_missing=allow_missing)
@@ -2596,14 +2661,11 @@ def try_direct_fetch(spec, mirrors=None):
)
try:
_, _, fs = web_util.read_from_url(buildcache_fetch_url_signed_json)
specfile_contents = codecs.getreader("utf-8")(fs).read()
specfile_is_signed = True
except (web_util.SpackWebError, OSError) as e1:
except web_util.SpackWebError as e1:
try:
_, _, fs = web_util.read_from_url(buildcache_fetch_url_json)
specfile_contents = codecs.getreader("utf-8")(fs).read()
specfile_is_signed = False
except (web_util.SpackWebError, OSError) as e2:
except web_util.SpackWebError as e2:
tty.debug(
f"Did not find {specfile_name} on {buildcache_fetch_url_signed_json}",
e1,
@@ -2613,6 +2675,7 @@ def try_direct_fetch(spec, mirrors=None):
f"Did not find {specfile_name} on {buildcache_fetch_url_json}", e2, level=2
)
continue
specfile_contents = codecs.getreader("utf-8")(fs).read()
# read the spec from the build cache file. All specs in build caches
# are concrete (as they are built) so we need to mark this spec
@@ -2706,9 +2769,8 @@ def get_keys(install=False, trust=False, force=False, mirrors=None):
try:
_, _, json_file = web_util.read_from_url(keys_index)
json_index = sjson.load(json_file)
except (web_util.SpackWebError, OSError, ValueError) as url_err:
# TODO: avoid repeated request
json_index = sjson.load(codecs.getreader("utf-8")(json_file))
except web_util.SpackWebError as url_err:
if web_util.url_exists(keys_index):
tty.error(
f"Unable to find public keys in {url_util.format(fetch_url)},"
@@ -2955,14 +3017,14 @@ def __init__(self, url, local_hash, urlopen=web_util.urlopen):
def get_remote_hash(self):
# Failure to fetch index.json.hash is not fatal
url_index_hash = url_util.join(self.url, BUILD_CACHE_RELATIVE_PATH, INDEX_HASH_FILE)
url_index_hash = url_util.join(self.url, BUILD_CACHE_RELATIVE_PATH, "index.json.hash")
try:
response = self.urlopen(urllib.request.Request(url_index_hash, headers=self.headers))
remote_hash = response.read(64)
except OSError:
except (TimeoutError, urllib.error.URLError):
return None
# Validate the hash
remote_hash = response.read(64)
if not re.match(rb"[a-f\d]{64}$", remote_hash):
return None
return remote_hash.decode("utf-8")
@@ -2976,17 +3038,17 @@ def conditional_fetch(self) -> FetchIndexResult:
return FetchIndexResult(etag=None, hash=None, data=None, fresh=True)
# Otherwise, download index.json
url_index = url_util.join(self.url, BUILD_CACHE_RELATIVE_PATH, spack_db.INDEX_JSON_FILE)
url_index = url_util.join(self.url, BUILD_CACHE_RELATIVE_PATH, "index.json")
try:
response = self.urlopen(urllib.request.Request(url_index, headers=self.headers))
except OSError as e:
raise FetchIndexError(f"Could not fetch index from {url_index}", e) from e
except (TimeoutError, urllib.error.URLError) as e:
raise FetchIndexError("Could not fetch index from {}".format(url_index), e) from e
try:
result = codecs.getreader("utf-8")(response).read()
except (ValueError, OSError) as e:
raise FetchIndexError(f"Remote index {url_index} is invalid") from e
except ValueError as e:
raise FetchIndexError("Remote index {} is invalid".format(url_index), e) from e
computed_hash = compute_hash(result)
@@ -3020,7 +3082,7 @@ def __init__(self, url, etag, urlopen=web_util.urlopen):
def conditional_fetch(self) -> FetchIndexResult:
# Just do a conditional fetch immediately
url = url_util.join(self.url, BUILD_CACHE_RELATIVE_PATH, spack_db.INDEX_JSON_FILE)
url = url_util.join(self.url, BUILD_CACHE_RELATIVE_PATH, "index.json")
headers = {"User-Agent": web_util.SPACK_USER_AGENT, "If-None-Match": f'"{self.etag}"'}
try:
@@ -3030,12 +3092,12 @@ def conditional_fetch(self) -> FetchIndexResult:
# Not modified; that means fresh.
return FetchIndexResult(etag=None, hash=None, data=None, fresh=True)
raise FetchIndexError(f"Could not fetch index {url}", e) from e
except OSError as e: # URLError, socket.timeout, etc.
except (TimeoutError, urllib.error.URLError) as e:
raise FetchIndexError(f"Could not fetch index {url}", e) from e
try:
result = codecs.getreader("utf-8")(response).read()
except (ValueError, OSError) as e:
except ValueError as e:
raise FetchIndexError(f"Remote index {url} is invalid", e) from e
headers = response.headers
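For reference, the ETag handshake these index fetchers rely on, shown as a minimal standalone sketch (URL handling and names are illustrative, not Spack's):

.. code-block:: python

    import urllib.error
    import urllib.request

    def fetch_if_changed(url: str, etag: str):
        """Return (body, new_etag), or None on 304 Not Modified."""
        request = urllib.request.Request(url, headers={"If-None-Match": f'"{etag}"'})
        try:
            response = urllib.request.urlopen(request)
        except urllib.error.HTTPError as e:
            if e.code == 304:
                return None  # the cached index.json is still fresh
            raise
        return response.read(), response.headers.get("Etag")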
@@ -3067,11 +3129,11 @@ def conditional_fetch(self) -> FetchIndexResult:
headers={"Accept": "application/vnd.oci.image.manifest.v1+json"},
)
)
except OSError as e:
except (TimeoutError, urllib.error.URLError) as e:
raise FetchIndexError(f"Could not fetch manifest from {url_manifest}", e) from e
try:
manifest = json.load(response)
manifest = json.loads(response.read())
except Exception as e:
raise FetchIndexError(f"Remote index {url_manifest} is invalid", e) from e
@@ -3086,16 +3148,14 @@ def conditional_fetch(self) -> FetchIndexResult:
return FetchIndexResult(etag=None, hash=None, data=None, fresh=True)
# Otherwise fetch the blob / index.json
try:
response = self.urlopen(
urllib.request.Request(
url=self.ref.blob_url(index_digest),
headers={"Accept": "application/vnd.oci.image.layer.v1.tar+gzip"},
)
response = self.urlopen(
urllib.request.Request(
url=self.ref.blob_url(index_digest),
headers={"Accept": "application/vnd.oci.image.layer.v1.tar+gzip"},
)
result = codecs.getreader("utf-8")(response).read()
except (OSError, ValueError) as e:
raise FetchIndexError(f"Remote index {url_manifest} is invalid", e) from e
)
result = codecs.getreader("utf-8")(response).read()
# Make sure the blob we download has the advertised hash
if compute_hash(result) != index_digest.digest:

View File

@@ -5,14 +5,12 @@
import fnmatch
import glob
import importlib
import os
import os.path
import re
import sys
import sysconfig
import warnings
from typing import Optional, Sequence, Union
from typing_extensions import TypedDict
from typing import Dict, Optional, Sequence, Union
import archspec.cpu
@@ -20,17 +18,13 @@
from llnl.util import tty
import spack.platforms
import spack.spec
import spack.store
import spack.util.environment
import spack.util.executable
from .config import spec_for_current_python
class QueryInfo(TypedDict, total=False):
spec: spack.spec.Spec
command: spack.util.executable.Executable
QueryInfo = Dict[str, "spack.spec.Spec"]
def _python_import(module: str) -> bool:
@@ -217,9 +211,7 @@ def _executables_in_store(
):
spack.util.environment.path_put_first("PATH", [bin_dir])
if query_info is not None:
query_info["command"] = spack.util.executable.which(
*executables, path=bin_dir, required=True
)
query_info["command"] = spack.util.executable.which(*executables, path=bin_dir)
query_info["spec"] = concrete_spec
return True
return False

View File

@@ -27,9 +27,9 @@
class ClingoBootstrapConcretizer:
def __init__(self, configuration):
self.host_platform = spack.platforms.host()
self.host_os = self.host_platform.default_operating_system()
self.host_os = self.host_platform.operating_system("frontend")
self.host_target = archspec.cpu.host().family
self.host_architecture = spack.spec.ArchSpec.default_arch()
self.host_architecture = spack.spec.ArchSpec.frontend_arch()
self.host_architecture.target = str(self.host_target)
self.host_compiler = self._valid_compiler_or_raise()
self.host_python = self.python_external_spec()

View File

@@ -4,7 +4,7 @@
"""Manage configuration swapping for bootstrapping purposes"""
import contextlib
import os
import os.path
import sys
from typing import Any, Dict, Generator, MutableSequence, Sequence
@@ -141,7 +141,7 @@ def _bootstrap_config_scopes() -> Sequence["spack.config.ConfigScope"]:
def _add_compilers_if_missing() -> None:
arch = spack.spec.ArchSpec.default_arch()
arch = spack.spec.ArchSpec.frontend_arch()
if not spack.compilers.compilers_for_arch(arch):
spack.compilers.find_compilers()

View File

@@ -25,6 +25,7 @@
import functools
import json
import os
import os.path
import sys
import uuid
from typing import Any, Callable, Dict, List, Optional, Tuple
@@ -33,10 +34,8 @@
from llnl.util.lang import GroupedExceptionHandler
import spack.binary_distribution
import spack.concretize
import spack.config
import spack.detection
import spack.error
import spack.mirrors.mirror
import spack.platforms
import spack.spec
@@ -45,17 +44,10 @@
import spack.util.executable
import spack.util.path
import spack.util.spack_yaml
import spack.util.url
import spack.version
from spack.installer import PackageInstaller
from ._common import (
QueryInfo,
_executables_in_store,
_python_import,
_root_spec,
_try_import_from_store,
)
from ._common import _executables_in_store, _python_import, _root_spec, _try_import_from_store
from .clingo import ClingoBootstrapConcretizer
from .config import spack_python_interpreter, spec_for_current_python
@@ -97,12 +89,8 @@ def __init__(self, conf: ConfigDictionary) -> None:
self.name = conf["name"]
self.metadata_dir = spack.util.path.canonicalize_path(conf["metadata"])
# Check for relative paths, and turn them into absolute paths
# root is the metadata_dir
maybe_url = conf["info"]["url"]
if spack.util.url.is_path_instead_of_url(maybe_url) and not os.path.isabs(maybe_url):
maybe_url = os.path.join(self.metadata_dir, maybe_url)
self.url = spack.mirrors.mirror.Mirror(maybe_url).fetch_url
# Promote (relative) paths to file urls
self.url = spack.mirrors.mirror.Mirror(conf["info"]["url"]).fetch_url
@property
def mirror_scope(self) -> spack.config.InternalConfigScope:
@@ -146,7 +134,7 @@ class BuildcacheBootstrapper(Bootstrapper):
def __init__(self, conf) -> None:
super().__init__(conf)
self.last_search: Optional[QueryInfo] = None
self.last_search: Optional[ConfigDictionary] = None
self.config_scope_name = f"bootstrap_buildcache-{uuid.uuid4()}"
@staticmethod
@@ -223,14 +211,14 @@ def _install_and_test(
for _, pkg_hash, pkg_sha256 in item["binaries"]:
self._install_by_hash(pkg_hash, pkg_sha256, bincache_platform)
info: QueryInfo = {}
info: ConfigDictionary = {}
if test_fn(query_spec=abstract_spec, query_info=info):
self.last_search = info
return True
return False
def try_import(self, module: str, abstract_spec_str: str) -> bool:
info: QueryInfo
info: ConfigDictionary
test_fn, info = functools.partial(_try_import_from_store, module), {}
if test_fn(query_spec=abstract_spec_str, query_info=info):
return True
@@ -243,7 +231,7 @@ def try_import(self, module: str, abstract_spec_str: str) -> bool:
return self._install_and_test(abstract_spec, bincache_platform, data, test_fn)
def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bool:
info: QueryInfo
info: ConfigDictionary
test_fn, info = functools.partial(_executables_in_store, executables), {}
if test_fn(query_spec=abstract_spec_str, query_info=info):
self.last_search = info
@@ -261,11 +249,11 @@ class SourceBootstrapper(Bootstrapper):
def __init__(self, conf) -> None:
super().__init__(conf)
self.last_search: Optional[QueryInfo] = None
self.last_search: Optional[ConfigDictionary] = None
self.config_scope_name = f"bootstrap_source-{uuid.uuid4()}"
def try_import(self, module: str, abstract_spec_str: str) -> bool:
info: QueryInfo = {}
info: ConfigDictionary = {}
if _try_import_from_store(module, abstract_spec_str, query_info=info):
self.last_search = info
return True
@@ -282,10 +270,10 @@ def try_import(self, module: str, abstract_spec_str: str) -> bool:
bootstrapper = ClingoBootstrapConcretizer(configuration=spack.config.CONFIG)
concrete_spec = bootstrapper.concretize()
else:
abstract_spec = spack.spec.Spec(
concrete_spec = spack.spec.Spec(
abstract_spec_str + " ^" + spec_for_current_python()
)
concrete_spec = spack.concretize.concretize_one(abstract_spec)
concrete_spec.concretize()
msg = "[BOOTSTRAP MODULE {0}] Try installing '{1}' from sources"
tty.debug(msg.format(module, abstract_spec_str))
@@ -300,7 +288,7 @@ def try_import(self, module: str, abstract_spec_str: str) -> bool:
return False
def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bool:
info: QueryInfo = {}
info: ConfigDictionary = {}
if _executables_in_store(executables, abstract_spec_str, query_info=info):
self.last_search = info
return True
@@ -311,7 +299,7 @@ def try_search_path(self, executables: Tuple[str], abstract_spec_str: str) -> bo
# might reduce compilation time by a fair amount
_add_externals_if_missing()
concrete_spec = spack.concretize.concretize_one(abstract_spec_str)
concrete_spec = spack.spec.Spec(abstract_spec_str).concretized()
msg = "[BOOTSTRAP] Try installing '{0}' from sources"
tty.debug(msg.format(abstract_spec_str))
with spack.config.override(self.mirror_scope):
@@ -328,9 +316,11 @@ def create_bootstrapper(conf: ConfigDictionary):
return _bootstrap_methods[btype](conf)
def source_is_enabled(conf: ConfigDictionary) -> bool:
"""Returns true if the source is not enabled for bootstrapping"""
return spack.config.get("bootstrap:trusted").get(conf["name"], False)
def source_is_enabled_or_raise(conf: ConfigDictionary):
"""Raise ValueError if the source is not enabled for bootstrapping"""
trusted, name = spack.config.get("bootstrap:trusted"), conf["name"]
if not trusted.get(name, False):
raise ValueError("source is not trusted")
def ensure_module_importable_or_raise(module: str, abstract_spec: Optional[str] = None):
@@ -360,23 +350,24 @@ def ensure_module_importable_or_raise(module: str, abstract_spec: Optional[str]
exception_handler = GroupedExceptionHandler()
for current_config in bootstrapping_sources():
if not source_is_enabled(current_config):
continue
with exception_handler.forward(current_config["name"], Exception):
if create_bootstrapper(current_config).try_import(module, abstract_spec):
source_is_enabled_or_raise(current_config)
current_bootstrapper = create_bootstrapper(current_config)
if current_bootstrapper.try_import(module, abstract_spec):
return
assert exception_handler, (
f"expected at least one exception to have been raised at this point: "
f"while bootstrapping {module}"
)
msg = f'cannot bootstrap the "{module}" Python module '
if abstract_spec:
msg += f'from spec "{abstract_spec}" '
if not exception_handler:
msg += ": no bootstrapping sources are enabled"
elif spack.error.debug or spack.error.SHOW_BACKTRACE:
if tty.is_debug():
msg += exception_handler.grouped_message(with_tracebacks=True)
else:
msg += exception_handler.grouped_message(with_tracebacks=False)
msg += "\nRun `spack --backtrace ...` for more detailed errors"
msg += "\nRun `spack --debug ...` for more detailed errors"
raise ImportError(msg)
@@ -414,9 +405,8 @@ def ensure_executables_in_path_or_raise(
exception_handler = GroupedExceptionHandler()
for current_config in bootstrapping_sources():
if not source_is_enabled(current_config):
continue
with exception_handler.forward(current_config["name"], Exception):
source_is_enabled_or_raise(current_config)
current_bootstrapper = create_bootstrapper(current_config)
if current_bootstrapper.try_search_path(executables, abstract_spec):
# Additional environment variables needed
@@ -424,7 +414,6 @@ def ensure_executables_in_path_or_raise(
current_bootstrapper.last_search["spec"],
current_bootstrapper.last_search["command"],
)
assert cmd is not None, "expected an Executable"
cmd.add_default_envmod(
spack.user_environment.environment_modifications_for_specs(
concrete_spec, set_package_py_globals=False
@@ -432,17 +421,18 @@ def ensure_executables_in_path_or_raise(
)
return cmd
assert exception_handler, (
f"expected at least one exception to have been raised at this point: "
f"while bootstrapping {executables_str}"
)
msg = f"cannot bootstrap any of the {executables_str} executables "
if abstract_spec:
msg += f'from spec "{abstract_spec}" '
if not exception_handler:
msg += ": no bootstrapping sources are enabled"
elif spack.error.debug or spack.error.SHOW_BACKTRACE:
if tty.is_debug():
msg += exception_handler.grouped_message(with_tracebacks=True)
else:
msg += exception_handler.grouped_message(with_tracebacks=False)
msg += "\nRun `spack --backtrace ...` for more detailed errors"
msg += "\nRun `spack --debug ...` for more detailed errors"
raise RuntimeError(msg)

View File

@@ -63,6 +63,7 @@ def _missing(name: str, purpose: str, system_only: bool = True) -> str:
def _core_requirements() -> List[RequiredResponseType]:
_core_system_exes = {
"make": _missing("make", "required to build software from sources"),
"patch": _missing("patch", "required to patch source code before building"),
"tar": _missing("tar", "required to manage code archives"),
"gzip": _missing("gzip", "required to compress/decompress code archives"),

View File

@@ -44,19 +44,7 @@
from enum import Flag, auto
from itertools import chain
from multiprocessing.connection import Connection
from typing import (
Callable,
Dict,
List,
Optional,
Sequence,
Set,
TextIO,
Tuple,
Type,
Union,
overload,
)
from typing import Callable, Dict, List, Optional, Set, Tuple
import archspec.cpu
@@ -158,128 +146,48 @@ def get_effective_jobs(jobs, parallel=True, supports_jobserver=False):
class MakeExecutable(Executable):
"""Special callable executable object for make so the user can specify parallelism options
on a per-invocation basis.
"""Special callable executable object for make so the user can specify
parallelism options on a per-invocation basis. Specifying
'parallel' to the call will override whatever the package's
global setting is, so you can either default to true or false and
override particular calls. Specifying 'jobs_env' to a particular
call will name an environment variable which will be set to the
parallelism level (without affecting the normal invocation with
-j).
"""
def __init__(self, name: str, *, jobs: int, supports_jobserver: bool = True) -> None:
super().__init__(name)
def __init__(self, name, jobs, **kwargs):
supports_jobserver = kwargs.pop("supports_jobserver", True)
super().__init__(name, **kwargs)
self.supports_jobserver = supports_jobserver
self.jobs = jobs
@overload
def __call__(
self,
*args: str,
parallel: bool = ...,
jobs_env: Optional[str] = ...,
jobs_env_supports_jobserver: bool = ...,
fail_on_error: bool = ...,
ignore_errors: Union[int, Sequence[int]] = ...,
ignore_quotes: Optional[bool] = ...,
timeout: Optional[int] = ...,
env: Optional[Union[Dict[str, str], EnvironmentModifications]] = ...,
extra_env: Optional[Union[Dict[str, str], EnvironmentModifications]] = ...,
input: Optional[TextIO] = ...,
output: Union[Optional[TextIO], str] = ...,
error: Union[Optional[TextIO], str] = ...,
_dump_env: Optional[Dict[str, str]] = ...,
) -> None: ...
@overload
def __call__(
self,
*args: str,
parallel: bool = ...,
jobs_env: Optional[str] = ...,
jobs_env_supports_jobserver: bool = ...,
fail_on_error: bool = ...,
ignore_errors: Union[int, Sequence[int]] = ...,
ignore_quotes: Optional[bool] = ...,
timeout: Optional[int] = ...,
env: Optional[Union[Dict[str, str], EnvironmentModifications]] = ...,
extra_env: Optional[Union[Dict[str, str], EnvironmentModifications]] = ...,
input: Optional[TextIO] = ...,
output: Union[Type[str], Callable] = ...,
error: Union[Optional[TextIO], str, Type[str], Callable] = ...,
_dump_env: Optional[Dict[str, str]] = ...,
) -> str: ...
@overload
def __call__(
self,
*args: str,
parallel: bool = ...,
jobs_env: Optional[str] = ...,
jobs_env_supports_jobserver: bool = ...,
fail_on_error: bool = ...,
ignore_errors: Union[int, Sequence[int]] = ...,
ignore_quotes: Optional[bool] = ...,
timeout: Optional[int] = ...,
env: Optional[Union[Dict[str, str], EnvironmentModifications]] = ...,
extra_env: Optional[Union[Dict[str, str], EnvironmentModifications]] = ...,
input: Optional[TextIO] = ...,
output: Union[Optional[TextIO], str, Type[str], Callable] = ...,
error: Union[Type[str], Callable] = ...,
_dump_env: Optional[Dict[str, str]] = ...,
) -> str: ...
def __call__(
self,
*args: str,
parallel: bool = True,
jobs_env: Optional[str] = None,
jobs_env_supports_jobserver: bool = False,
**kwargs,
) -> Optional[str]:
"""Runs this "make" executable in a subprocess.
Args:
parallel: if False, parallelism is disabled
jobs_env: environment variable that will be set to the current level of parallelism
jobs_env_supports_jobserver: whether the jobs env supports a job server
For all the other **kwargs, refer to the base class.
def __call__(self, *args, **kwargs):
"""parallel, and jobs_env from kwargs are swallowed and used here;
remaining arguments are passed through to the superclass.
"""
parallel = kwargs.pop("parallel", True)
jobs_env = kwargs.pop("jobs_env", None)
jobs_env_supports_jobserver = kwargs.pop("jobs_env_supports_jobserver", False)
jobs = get_effective_jobs(
self.jobs, parallel=parallel, supports_jobserver=self.supports_jobserver
)
if jobs is not None:
args = (f"-j{jobs}",) + args
args = ("-j{0}".format(jobs),) + args
if jobs_env:
# Caller wants us to set an environment variable to control the parallelism
# Caller wants us to set an environment variable to
# control the parallelism.
jobs_env_jobs = get_effective_jobs(
self.jobs, parallel=parallel, supports_jobserver=jobs_env_supports_jobserver
)
if jobs_env_jobs is not None:
extra_env = kwargs.setdefault("extra_env", {})
extra_env.update({jobs_env: str(jobs_env_jobs)})
kwargs["extra_env"] = {jobs_env: str(jobs_env_jobs)}
return super().__call__(*args, **kwargs)
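To make the ``parallel`` and ``jobs_env`` semantics described above concrete, here is a toy re-creation of just the ``-j`` handling; it assumes no jobserver and is not the real ``get_effective_jobs``:

.. code-block:: python

    from typing import Optional

    def effective_jobs(jobs: int, parallel: bool = True) -> Optional[int]:
        return jobs if parallel and jobs > 1 else None

    def make_args(*targets, jobs=8, parallel=True, jobs_env=None):
        args = list(targets)
        n = effective_jobs(jobs, parallel)
        if n is not None:
            args.insert(0, f"-j{n}")
        # jobs_env names a variable to export with the parallelism level.
        env = {jobs_env: str(n)} if jobs_env and n is not None else {}
        return args, env

    print(make_args("install", parallel=False))    # (['install'], {})
    print(make_args("all", jobs_env="MAKE_JOBS"))  # (['-j8', 'all'], {'MAKE_JOBS': '8'})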
class UndeclaredDependencyError(spack.error.SpackError):
"""Raised if a dependency is invoking an executable through a module global, without
declaring a dependency on it.
"""
class DeprecatedExecutable:
def __init__(self, pkg: str, exe: str, exe_pkg: str) -> None:
self.pkg = pkg
self.exe = exe
self.exe_pkg = exe_pkg
def __call__(self, *args, **kwargs):
raise UndeclaredDependencyError(
f"{self.pkg} is using {self.exe} without declaring a dependency on {self.exe_pkg}"
)
def add_default_env(self, key: str, value: str):
self.__call__()
def clean_environment():
# Stuff in here sanitizes the build environment to eliminate
# anything the user has set that may interfere. We apply it immediately
@@ -301,13 +209,11 @@ def clean_environment():
env.unset("CPLUS_INCLUDE_PATH")
env.unset("OBJC_INCLUDE_PATH")
# prevent configure scripts from sourcing variables from config site file (AC_SITE_LOAD).
env.set("CONFIG_SITE", os.devnull)
env.unset("CMAKE_PREFIX_PATH")
env.unset("PYTHONPATH")
env.unset("R_HOME")
env.unset("R_ENVIRON")
env.unset("LUA_PATH")
env.unset("LUA_CPATH")
@@ -715,9 +621,10 @@ def set_package_py_globals(pkg, context: Context = Context.BUILD):
module.std_meson_args = spack.build_systems.meson.MesonBuilder.std_args(pkg)
module.std_pip_args = spack.build_systems.python.PythonPipBuilder.std_args(pkg)
module.make = DeprecatedExecutable(pkg.name, "make", "gmake")
module.gmake = DeprecatedExecutable(pkg.name, "gmake", "gmake")
module.ninja = DeprecatedExecutable(pkg.name, "ninja", "ninja")
# TODO: make these build deps that can be installed if not found.
module.make = MakeExecutable("make", jobs)
module.gmake = MakeExecutable("gmake", jobs)
module.ninja = MakeExecutable("ninja", jobs, supports_jobserver=False)
# TODO: johnwparent: add package or builder support to define these build tools
# for now there is no entrypoint for builders to define these on their
# own

View File

@@ -6,9 +6,7 @@
import llnl.util.filesystem as fs
import spack.directives
import spack.spec
import spack.util.executable
import spack.util.prefix
from .autotools import AutotoolsBuilder, AutotoolsPackage
@@ -19,18 +17,19 @@ class AspellBuilder(AutotoolsBuilder):
to the Aspell extensions.
"""
def configure(
self,
pkg: "AspellDictPackage", # type: ignore[override]
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
):
def configure(self, pkg, spec, prefix):
aspell = spec["aspell"].prefix.bin.aspell
prezip = spec["aspell"].prefix.bin.prezip
destdir = prefix
sh = spack.util.executable.Executable("/bin/sh")
sh("./configure", "--vars", f"ASPELL={aspell}", f"PREZIP={prezip}", f"DESTDIR={destdir}")
sh = spack.util.executable.which("sh")
sh(
"./configure",
"--vars",
"ASPELL={0}".format(aspell),
"PREZIP={0}".format(prezip),
"DESTDIR={0}".format(destdir),
)
# Aspell dictionaries install their bits into their prefix.lib

View File

@@ -2,6 +2,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import os.path
import stat
import subprocess
from typing import Callable, List, Optional, Set, Tuple, Union
@@ -355,13 +356,6 @@ def _do_patch_libtool_configure(self) -> None:
)
# Support Libtool 2.4.2 and older:
x.filter(regex=r'^(\s*test \$p = "-R")(; then\s*)$', repl=r'\1 || test x-l = x"$p"\2')
# Configure scripts generated with libtool < 2.5.4 have a faulty test for the
# -single_module linker flag. A deprecation warning makes it think the default is
# -multi_module, triggering it to use problematic linker flags (such as ld -r). The
# linker default is `-single_module` from (ancient) macOS 10.4, so override by setting
# `lt_cv_apple_cc_single_mod=yes`. See the fix in libtool commit
# 82f7f52123e4e7e50721049f7fa6f9b870e09c9d.
x.filter("lt_cv_apple_cc_single_mod=no", "lt_cv_apple_cc_single_mod=yes", string=True)
@spack.phase_callbacks.run_after("configure")
def _do_patch_libtool(self) -> None:
@@ -533,7 +527,7 @@ def build_directory(self) -> str:
return build_dir
@spack.phase_callbacks.run_before("autoreconf")
def _delete_configure_to_force_update(self) -> None:
def delete_configure_to_force_update(self) -> None:
if self.force_autoreconf:
fs.force_remove(self.configure_abs_path)
@@ -546,7 +540,7 @@ def autoreconf_search_path_args(self) -> List[str]:
return _autoreconf_search_path_args(self.spec)
@spack.phase_callbacks.run_after("autoreconf")
def _set_configure_or_die(self) -> None:
def set_configure_or_die(self) -> None:
"""Ensure the presence of a "configure" script, or raise. If the "configure"
is found, a module level attribute is set.
@@ -570,7 +564,10 @@ def configure_args(self) -> List[str]:
return []
def autoreconf(
self, pkg: AutotoolsPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None:
"""Not needed usually, configure should be already there"""
@@ -599,7 +596,10 @@ def autoreconf(
self.pkg.module.autoreconf(*autoreconf_args)
def configure(
self, pkg: AutotoolsPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None:
"""Run "configure", with the arguments specified by the builder and an
appropriately set prefix.
@@ -612,7 +612,10 @@ def configure(
pkg.module.configure(*options)
def build(
self, pkg: AutotoolsPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None:
"""Run "make" on the build targets specified by the builder."""
# See https://autotools.io/automake/silent.html
@@ -622,7 +625,10 @@ def build(
pkg.module.make(*params)
def install(
self, pkg: AutotoolsPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None:
"""Run "make" on the install targets specified by the builder."""
with fs.working_dir(self.build_directory):
@@ -819,7 +825,7 @@ def installcheck(self) -> None:
self.pkg._if_make_target_execute("installcheck")
@spack.phase_callbacks.run_after("install")
def _remove_libtool_archives(self) -> None:
def remove_libtool_archives(self) -> None:
"""Remove all .la files in prefix sub-folders if the package sets
``install_libtool_archives`` to be False.
"""

View File

@@ -10,8 +10,6 @@
import llnl.util.tty as tty
import spack.phase_callbacks
import spack.spec
import spack.util.prefix
from .cmake import CMakeBuilder, CMakePackage
@@ -295,13 +293,6 @@ def initconfig_hardware_entries(self):
entries.append(cmake_cache_string("AMDGPU_TARGETS", arch_str))
entries.append(cmake_cache_string("GPU_TARGETS", arch_str))
if spec.satisfies("%gcc"):
entries.append(
cmake_cache_string(
"CMAKE_HIP_FLAGS", f"--gcc-toolchain={self.pkg.compiler.prefix}"
)
)
return entries
def std_initconfig_entries(self):
@@ -332,9 +323,7 @@ def initconfig_package_entries(self):
"""This method is to be overwritten by the package"""
return []
def initconfig(
self, pkg: "CachedCMakePackage", spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def initconfig(self, pkg, spec, prefix):
cache_entries = (
self.std_initconfig_entries()
+ self.initconfig_compiler_entries()

View File

@@ -7,8 +7,6 @@
import spack.builder
import spack.package_base
import spack.phase_callbacks
import spack.spec
import spack.util.prefix
from spack.directives import build_system, depends_on
from spack.multimethod import when
@@ -83,16 +81,12 @@ def check_args(self):
def setup_build_environment(self, env):
env.set("CARGO_HOME", self.stage.path)
def build(
self, pkg: CargoPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def build(self, pkg, spec, prefix):
"""Runs ``cargo install`` in the source directory"""
with fs.working_dir(self.build_directory):
pkg.module.cargo("install", "--root", "out", "--path", ".", *self.build_args)
def install(
self, pkg: CargoPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def install(self, pkg, spec, prefix):
"""Copy build files into package prefix."""
with fs.working_dir(self.build_directory):
fs.install_tree("out", prefix)
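
For context, CargoBuilder's two phases amount to `cargo install --root out --path .` followed by copying `out` into the prefix, so a package usually carries nothing beyond metadata. A hypothetical sketch:

from spack.package import *


class MyRustTool(CargoPackage):
    """Hypothetical Cargo-based package; the build()/install() phases above need no overrides."""

    homepage = "https://example.com/my-rust-tool"
    url = "https://example.com/my-rust-tool-1.2.3.tar.gz"

    version("1.2.3", sha256="0" * 64)  # placeholder checksum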

View File

@@ -454,7 +454,10 @@ def cmake_args(self) -> List[str]:
return []
def cmake(
self, pkg: CMakePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None:
"""Runs ``cmake`` in the build directory"""
@@ -471,7 +474,10 @@ def cmake(
pkg.module.cmake(*options)
def build(
self, pkg: CMakePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None:
"""Make the build targets"""
with fs.working_dir(self.build_directory):
@@ -482,7 +488,10 @@ def build(
pkg.module.ninja(*self.build_targets)
def install(
self, pkg: CMakePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None:
"""Make the install targets"""
with fs.working_dir(self.build_directory):
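
The CMake hunks adjust the cmake/build/install hooks in the same way. On the package side, cmake_args() feeds the cmake() phase above; define() and define_from_variant() are the usual helpers. A sketch with hypothetical names:

from spack.package import *


class Mylib(CMakePackage):
    """Hypothetical CMake-based package illustrating cmake_args()."""

    homepage = "https://example.com/mylib"
    url = "https://example.com/mylib-2.1.0.tar.gz"

    version("2.1.0", sha256="0" * 64)  # placeholder checksum

    variant("shared", default=True, description="Build shared libraries")

    def cmake_args(self):
        # Turned into -D<var>=<value> flags by the cmake() phase above
        return [
            self.define_from_variant("BUILD_SHARED_LIBS", "shared"),
            self.define("CMAKE_POSITION_INDEPENDENT_CODE", True),
        ]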

View File

@@ -7,8 +7,6 @@
import spack.directives
import spack.package_base
import spack.phase_callbacks
import spack.spec
import spack.util.prefix
from ._checks import BuilderWithDefaults, apply_macos_rpath_fixups, execute_install_time_tests
@@ -50,8 +48,3 @@ class GenericBuilder(BuilderWithDefaults):
# unconditionally perform any post-install phase tests
spack.phase_callbacks.run_after("install")(execute_install_time_tests)
def install(
self, pkg: Package, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
raise NotImplementedError

View File

@@ -7,9 +7,7 @@
import spack.builder
import spack.package_base
import spack.phase_callbacks
import spack.spec
import spack.util.prefix
from spack.directives import build_system, depends_on
from spack.directives import build_system, extends
from spack.multimethod import when
from ._checks import BuilderWithDefaults, execute_install_time_tests
@@ -28,7 +26,9 @@ class GoPackage(spack.package_base.PackageBase):
build_system("go")
with when("build_system=go"):
depends_on("go", type="build")
# TODO: this seems like it should be depends_on, see
# setup_dependent_build_environment in go for why I kept it like this
extends("go@1.14:", type="build")
@spack.builder.builder("go")
@@ -71,7 +71,6 @@ class GoBuilder(BuilderWithDefaults):
def setup_build_environment(self, env):
env.set("GO111MODULE", "on")
env.set("GOTOOLCHAIN", "local")
env.set("GOPATH", fs.join_path(self.pkg.stage.path, "go"))
@property
def build_directory(self):
@@ -82,31 +81,19 @@ def build_directory(self):
def build_args(self):
"""Arguments for ``go build``."""
# Pass ldflags -s = --strip-all and -w = --no-warnings by default
return [
"-p",
str(self.pkg.module.make_jobs),
"-modcacherw",
"-ldflags",
"-s -w",
"-o",
f"{self.pkg.name}",
]
return ["-modcacherw", "-ldflags", "-s -w", "-o", f"{self.pkg.name}"]
@property
def check_args(self):
"""Argument for ``go test`` during check phase"""
return []
def build(
self, pkg: GoPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def build(self, pkg, spec, prefix):
"""Runs ``go build`` in the source directory"""
with fs.working_dir(self.build_directory):
pkg.module.go("build", *self.build_args)
def install(
self, pkg: GoPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def install(self, pkg, spec, prefix):
"""Install built binaries into prefix bin."""
with fs.working_dir(self.build_directory):
fs.mkdirp(prefix.bin)
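
Either way, GoBuilder drives a plain `go build` with stripped binaries (-ldflags "-s -w") and installs the single output binary into prefix.bin, so packages usually supply only metadata. A hypothetical sketch:

from spack.package import *


class MyGoTool(GoPackage):
    """Hypothetical Go module; the builder's build()/install() phases above do the work."""

    homepage = "https://example.com/my-go-tool"
    url = "https://example.com/my-go-tool-0.9.0.tar.gz"

    version("0.9.0", sha256="0" * 64)  # placeholder checksum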

View File

@@ -7,9 +7,7 @@
import spack.builder
import spack.package_base
import spack.spec
import spack.util.executable
import spack.util.prefix
from spack.directives import build_system, depends_on, extends
from spack.multimethod import when
@@ -57,9 +55,7 @@ class LuaBuilder(spack.builder.Builder):
#: Names associated with package attributes in the old build-system format
legacy_attributes = ()
def unpack(
self, pkg: LuaPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def unpack(self, pkg, spec, prefix):
if os.path.splitext(pkg.stage.archive_file)[1] == ".rock":
directory = pkg.luarocks("unpack", pkg.stage.archive_file, output=str)
dirlines = directory.split("\n")
@@ -70,16 +66,15 @@ def unpack(
def _generate_tree_line(name, prefix):
return """{{ name = "{name}", root = "{prefix}" }};""".format(name=name, prefix=prefix)
def generate_luarocks_config(
self, pkg: LuaPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def generate_luarocks_config(self, pkg, spec, prefix):
spec = self.pkg.spec
table_entries = []
for d in spec.traverse(deptype=("build", "run")):
if d.package.extends(self.pkg.extendee_spec):
table_entries.append(self._generate_tree_line(d.name, d.prefix))
with open(self._luarocks_config_path(), "w", encoding="utf-8") as config:
path = self._luarocks_config_path()
with open(path, "w", encoding="utf-8") as config:
config.write(
"""
deps_mode="all"
@@ -90,26 +85,23 @@ def generate_luarocks_config(
"\n".join(table_entries)
)
)
return path
def preprocess(
self, pkg: LuaPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def preprocess(self, pkg, spec, prefix):
"""Override this to preprocess source before building with luarocks"""
pass
def luarocks_args(self):
return []
def install(
self, pkg: LuaPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def install(self, pkg, spec, prefix):
rock = "."
specs = find(".", "*.rockspec", recursive=False)
if specs:
rock = specs[0]
rocks_args = self.luarocks_args()
rocks_args.append(rock)
pkg.luarocks("--tree=" + prefix, "make", *rocks_args)
self.pkg.luarocks("--tree=" + prefix, "make", *rocks_args)
def _luarocks_config_path(self):
return os.path.join(self.pkg.stage.source_path, "spack_luarocks.lua")

View File

@@ -98,20 +98,29 @@ def build_directory(self) -> str:
return self.pkg.stage.source_path
def edit(
self, pkg: MakefilePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None:
"""Edit the Makefile before calling make. The default is a no-op."""
pass
def build(
self, pkg: MakefilePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None:
"""Run "make" on the build targets specified by the builder."""
with fs.working_dir(self.build_directory):
pkg.module.make(*self.build_targets)
def install(
self, pkg: MakefilePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None:
"""Run "make" on the install targets specified by the builder."""
with fs.working_dir(self.build_directory):
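
MakefilePackage keeps the same three hooks, and edit() is the only one without a default action. A hypothetical sketch that patches the stock Makefile before make runs, using the filter_file helper seen elsewhere in this diff:

from spack.package import *


class MyCTool(MakefilePackage):
    """Hypothetical Makefile-based package illustrating the edit() hook."""

    homepage = "https://example.com/my-c-tool"
    url = "https://example.com/my-c-tool-1.3.tar.gz"

    version("1.3", sha256="0" * 64)  # placeholder checksum

    def edit(self, spec, prefix):
        # Point the stock Makefile at the Spack prefix before the build phase
        filter_file(r"^PREFIX\s*=.*", f"PREFIX = {prefix}", "Makefile")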

View File

@@ -5,8 +5,6 @@
import spack.builder
import spack.package_base
import spack.spec
import spack.util.prefix
from spack.directives import build_system, depends_on
from spack.multimethod import when
from spack.util.executable import which
@@ -60,20 +58,16 @@ def build_args(self):
"""List of args to pass to build phase."""
return []
def build(
self, pkg: MavenPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def build(self, pkg, spec, prefix):
"""Compile code and package into a JAR file."""
with fs.working_dir(self.build_directory):
mvn = which("mvn", required=True)
mvn = which("mvn")
if self.pkg.run_tests:
mvn("verify", *self.build_args())
else:
mvn("package", "-DskipTests", *self.build_args())
def install(
self, pkg: MavenPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def install(self, pkg, spec, prefix):
"""Copy to installation prefix."""
with fs.working_dir(self.build_directory):
fs.install_tree(".", prefix)

View File

@@ -188,7 +188,10 @@ def meson_args(self) -> List[str]:
return []
def meson(
self, pkg: MesonPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None:
"""Run ``meson`` in the build directory"""
options = []
@@ -201,7 +204,10 @@ def meson(
pkg.module.meson(*options)
def build(
self, pkg: MesonPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None:
"""Make the build targets"""
options = ["-v"]
@@ -210,7 +216,10 @@ def build(
pkg.module.ninja(*options)
def install(
self, pkg: MesonPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
self,
pkg: spack.package_base.PackageBase,
spec: spack.spec.Spec,
prefix: spack.util.prefix.Prefix,
) -> None:
"""Make the install targets"""
with fs.working_dir(self.build_directory):
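
MesonPackage follows the same meson/build/install split; meson_args() entries are appended after the standard -Dprefix/-Dbuildtype options. A sketch with hypothetical names:

from spack.package import *


class Mytool(MesonPackage):
    """Hypothetical Meson-based package illustrating meson_args()."""

    homepage = "https://example.com/mytool"
    url = "https://example.com/mytool-0.43.0.tar.gz"

    version("0.43.0", sha256="0" * 64)  # placeholder checksum

    def meson_args(self):
        # Appended to the builder's standard option list in meson() above
        return ["-Dtests=disabled"]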

View File

@@ -7,8 +7,6 @@
import spack.builder
import spack.package_base
import spack.spec
import spack.util.prefix
from spack.directives import build_system, conflicts
from ._checks import BuilderWithDefaults
@@ -101,9 +99,7 @@ def msbuild_install_args(self):
as `msbuild_args` by default."""
return self.msbuild_args()
def build(
self, pkg: MSBuildPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def build(self, pkg, spec, prefix):
"""Run "msbuild" on the build targets specified by the builder."""
with fs.working_dir(self.build_directory):
pkg.module.msbuild(
@@ -112,9 +108,7 @@ def build(
self.define_targets(*self.build_targets),
)
def install(
self, pkg: MSBuildPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def install(self, pkg, spec, prefix):
"""Run "msbuild" on the install targets specified by the builder.
This is INSTALL by default"""
with fs.working_dir(self.build_directory):

View File

@@ -7,8 +7,6 @@
import spack.builder
import spack.package_base
import spack.spec
import spack.util.prefix
from spack.directives import build_system, conflicts
from ._checks import BuilderWithDefaults
@@ -125,9 +123,7 @@ def nmake_install_args(self):
Individual packages should override this to specify NMake args for the command line"""
return []
def build(
self, pkg: NMakePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def build(self, pkg, spec, prefix):
"""Run "nmake" on the build targets specified by the builder."""
opts = self.std_nmake_args
opts += self.nmake_args()
@@ -136,9 +132,7 @@ def build(
with fs.working_dir(self.build_directory):
pkg.module.nmake(*opts, *self.build_targets, ignore_quotes=self.ignore_quotes)
def install(
self, pkg: NMakePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def install(self, pkg, spec, prefix):
"""Run "nmake" on the install targets specified by the builder.
This is INSTALL by default"""
opts = self.std_nmake_args

View File

@@ -3,8 +3,6 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.builder
import spack.package_base
import spack.spec
import spack.util.prefix
from spack.directives import build_system, extends
from spack.multimethod import when
@@ -44,9 +42,7 @@ class OctaveBuilder(BuilderWithDefaults):
#: Names associated with package attributes in the old build-system format
legacy_attributes = ()
def install(
self, pkg: OctavePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def install(self, pkg, spec, prefix):
"""Install the package from the archive file"""
pkg.module.octave(
"--quiet",

View File

@@ -10,8 +10,6 @@
import spack.builder
import spack.package_base
import spack.phase_callbacks
import spack.spec
import spack.util.prefix
from spack.directives import build_system, depends_on, extends
from spack.install_test import SkipTest, test_part
from spack.multimethod import when
@@ -151,9 +149,7 @@ def configure_args(self):
"""
return []
def configure(
self, pkg: PerlPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def configure(self, pkg, spec, prefix):
"""Run Makefile.PL or Build.PL with arguments consisting of
an appropriate installation base directory followed by the
list returned by :py:meth:`~.PerlBuilder.configure_args`.
@@ -177,9 +173,7 @@ def fix_shebang(self):
repl = "#!/usr/bin/env perl"
filter_file(pattern, repl, "Build", backup=False)
def build(
self, pkg: PerlPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def build(self, pkg, spec, prefix):
"""Builds a Perl package."""
self.build_executable()
@@ -190,8 +184,6 @@ def check(self):
"""Runs built-in tests of a Perl package."""
self.build_executable("test")
def install(
self, pkg: PerlPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def install(self, pkg, spec, prefix):
"""Installs a Perl package."""
self.build_executable("install")

View File

@@ -28,7 +28,6 @@
import spack.repo
import spack.spec
import spack.store
import spack.util.prefix
from spack.directives import build_system, depends_on, extends
from spack.error import NoHeadersError, NoLibrariesError
from spack.install_test import test_part
@@ -264,17 +263,16 @@ def update_external_dependencies(self, extendee_spec=None):
# Ensure architecture information is present
if not python.architecture:
host_platform = spack.platforms.host()
host_os = host_platform.default_operating_system()
host_target = host_platform.default_target()
host_os = host_platform.operating_system("default_os")
host_target = host_platform.target("default_target")
python.architecture = spack.spec.ArchSpec(
(str(host_platform), str(host_os), str(host_target))
)
else:
if not python.architecture.platform:
python.architecture.platform = spack.platforms.host()
platform = spack.platforms.by_name(python.architecture.platform)
if not python.architecture.os:
python.architecture.os = platform.default_operating_system()
python.architecture.os = "default_os"
if not python.architecture.target:
python.architecture.target = archspec.cpu.host().family.name

View File

@@ -6,8 +6,6 @@
import spack.builder
import spack.package_base
import spack.phase_callbacks
import spack.spec
import spack.util.prefix
from spack.directives import build_system, depends_on
from ._checks import BuilderWithDefaults, execute_build_time_tests
@@ -29,7 +27,6 @@ class QMakePackage(spack.package_base.PackageBase):
build_system("qmake")
depends_on("qmake", type="build", when="build_system=qmake")
depends_on("gmake", type="build")
@spack.builder.builder("qmake")
@@ -64,23 +61,17 @@ def qmake_args(self):
"""List of arguments passed to qmake."""
return []
def qmake(
self, pkg: QMakePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def qmake(self, pkg, spec, prefix):
"""Run ``qmake`` to configure the project and generate a Makefile."""
with working_dir(self.build_directory):
pkg.module.qmake(*self.qmake_args())
def build(
self, pkg: QMakePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def build(self, pkg, spec, prefix):
"""Make the build targets"""
with working_dir(self.build_directory):
pkg.module.make()
def install(
self, pkg: QMakePackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def install(self, pkg, spec, prefix):
"""Make the install targets"""
with working_dir(self.build_directory):
pkg.module.make("install")

View File

@@ -94,7 +94,7 @@ def list_url(cls):
if cls.cran:
return f"https://cloud.r-project.org/src/contrib/Archive/{cls.cran}/"
@lang.classproperty
def git(cls):
if cls.bioc:
return f"https://git.bioconductor.org/packages/{cls.bioc}"
@property
def git(self):
if self.bioc:
return f"https://git.bioconductor.org/packages/{self.bioc}"

View File

@@ -9,8 +9,6 @@
import llnl.util.tty as tty
import spack.builder
import spack.spec
import spack.util.prefix
from spack.build_environment import SPACK_NO_PARALLEL_MAKE
from spack.config import determine_number_of_jobs
from spack.directives import build_system, extends, maintainers
@@ -76,22 +74,18 @@ def build_directory(self):
ret = os.path.join(ret, self.subdirectory)
return ret
def install(
self, pkg: RacketPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def install(self, pkg, spec, prefix):
"""Install everything from build directory."""
raco = Executable("raco")
with fs.working_dir(self.build_directory):
parallel = pkg.parallel and (not env_flag(SPACK_NO_PARALLEL_MAKE))
name = pkg.racket_name
assert name is not None, "Racket package name is not set"
parallel = self.pkg.parallel and (not env_flag(SPACK_NO_PARALLEL_MAKE))
args = [
"pkg",
"install",
"-t",
"dir",
"-n",
name,
self.pkg.racket_name,
"--deps",
"fail",
"--ignore-implies",
@@ -107,7 +101,8 @@ def install(
except ProcessError:
args.insert(-2, "--skip-installed")
raco(*args)
tty.warn(
f"Racket package {name} was already installed, uninstalling via "
msg = (
"Racket package {0} was already installed, uninstalling via "
"Spack may make someone unhappy!"
)
tty.warn(msg.format(self.pkg.racket_name))

View File

@@ -140,7 +140,7 @@ class ROCmPackage(PackageBase):
when="+rocm",
)
depends_on("llvm-amdgpu", type="build", when="+rocm")
depends_on("llvm-amdgpu", when="+rocm")
depends_on("hsa-rocr-dev", when="+rocm")
depends_on("hip +rocm", when="+rocm")

View File

@@ -5,8 +5,6 @@
import spack.builder
import spack.package_base
import spack.spec
import spack.util.prefix
from spack.directives import build_system, extends, maintainers
from ._checks import BuilderWithDefaults
@@ -44,9 +42,7 @@ class RubyBuilder(BuilderWithDefaults):
#: Names associated with package attributes in the old build-system format
legacy_attributes = ()
def build(
self, pkg: RubyPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def build(self, pkg, spec, prefix):
"""Build a Ruby gem."""
# ruby-rake provides both rake.gemspec and Rakefile, but only
@@ -62,9 +58,7 @@ def build(
# Some Ruby packages only ship `*.gem` files, so nothing to build
pass
def install(
self, pkg: RubyPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def install(self, pkg, spec, prefix):
"""Install a Ruby gem.
The ruby package sets ``GEM_HOME`` to tell gem where to install."""

View File

@@ -4,8 +4,6 @@
import spack.builder
import spack.package_base
import spack.phase_callbacks
import spack.spec
import spack.util.prefix
from spack.directives import build_system, depends_on
from ._checks import BuilderWithDefaults, execute_build_time_tests
@@ -61,9 +59,7 @@ def build_args(self, spec, prefix):
"""Arguments to pass to build."""
return []
def build(
self, pkg: SConsPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def build(self, pkg, spec, prefix):
"""Build the package."""
pkg.module.scons(*self.build_args(spec, prefix))
@@ -71,9 +67,7 @@ def install_args(self, spec, prefix):
"""Arguments to pass to install."""
return []
def install(
self, pkg: SConsPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def install(self, pkg, spec, prefix):
"""Install the package."""
pkg.module.scons("install", *self.install_args(spec, prefix))

View File

@@ -11,8 +11,6 @@
import spack.install_test
import spack.package_base
import spack.phase_callbacks
import spack.spec
import spack.util.prefix
from spack.directives import build_system, depends_on, extends
from spack.multimethod import when
from spack.util.executable import Executable
@@ -43,7 +41,6 @@ class SIPPackage(spack.package_base.PackageBase):
with when("build_system=sip"):
extends("python", type=("build", "link", "run"))
depends_on("py-sip", type="build")
depends_on("gmake", type="build")
@property
def import_modules(self):
@@ -133,9 +130,7 @@ class SIPBuilder(BuilderWithDefaults):
build_directory = "build"
def configure(
self, pkg: SIPPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def configure(self, pkg, spec, prefix):
"""Configure the package."""
# https://www.riverbankcomputing.com/static/Docs/sip/command_line_tools.html
@@ -153,9 +148,7 @@ def configure_args(self):
"""Arguments to pass to configure."""
return []
def build(
self, pkg: SIPPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def build(self, pkg, spec, prefix):
"""Build the package."""
args = self.build_args()
@@ -166,9 +159,7 @@ def build_args(self):
"""Arguments to pass to build."""
return []
def install(
self, pkg: SIPPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def install(self, pkg, spec, prefix):
"""Install the package."""
args = self.install_args()

View File

@@ -6,8 +6,6 @@
import spack.builder
import spack.package_base
import spack.phase_callbacks
import spack.spec
import spack.util.prefix
from spack.directives import build_system, depends_on
from ._checks import BuilderWithDefaults, execute_build_time_tests, execute_install_time_tests
@@ -99,9 +97,7 @@ def waf(self, *args, **kwargs):
with working_dir(self.build_directory):
self.python("waf", "-j{0}".format(jobs), *args, **kwargs)
def configure(
self, pkg: WafPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def configure(self, pkg, spec, prefix):
"""Configures the project."""
args = ["--prefix={0}".format(self.pkg.prefix)]
args += self.configure_args()
@@ -112,9 +108,7 @@ def configure_args(self):
"""Arguments to pass to configure."""
return []
def build(
self, pkg: WafPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def build(self, pkg, spec, prefix):
"""Executes the build."""
args = self.build_args()
@@ -124,9 +118,7 @@ def build_args(self):
"""Arguments to pass to build."""
return []
def install(
self, pkg: WafPackage, spec: spack.spec.Spec, prefix: spack.util.prefix.Prefix
) -> None:
def install(self, pkg, spec, prefix):
"""Installs the targets on the system."""
args = self.install_args()

View File

@@ -14,6 +14,7 @@
import zipfile
from collections import namedtuple
from typing import Callable, Dict, List, Set
from urllib.error import HTTPError, URLError
from urllib.request import HTTPHandler, Request, build_opener
import llnl.util.filesystem as fs
@@ -471,9 +472,12 @@ def generate_pipeline(env: ev.Environment, args) -> None:
# Use all unpruned specs to populate the build group for this set
cdash_config = cfg.get("cdash")
if options.cdash_handler and options.cdash_handler.auth_token:
options.cdash_handler.populate_buildgroup(
[options.cdash_handler.build_name(s) for s in pipeline_specs]
)
try:
options.cdash_handler.populate_buildgroup(
[options.cdash_handler.build_name(s) for s in pipeline_specs]
)
except (SpackError, HTTPError, URLError, TimeoutError) as err:
tty.warn(f"Problem populating buildgroup: {err}")
elif cdash_config:
# warn only if there was actually a CDash configuration.
tty.warn("Unable to populate buildgroup without CDash credentials")

View File

@@ -1,21 +1,23 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import codecs
import copy
import json
import os
import re
import ssl
import sys
import time
from collections import deque
from enum import Enum
from typing import Dict, Generator, List, Optional, Set, Tuple
from urllib.parse import quote, urlencode, urlparse
from urllib.request import Request
from urllib.request import HTTPHandler, HTTPSHandler, Request, build_opener
import llnl.util.filesystem as fs
import llnl.util.tty as tty
from llnl.util.lang import memoized
from llnl.util.lang import Singleton, memoized
import spack.binary_distribution as bindist
import spack.config as cfg
@@ -33,11 +35,32 @@
from spack.reporters.cdash import SPACK_CDASH_TIMEOUT
from spack.reporters.cdash import build_stamp as cdash_build_stamp
def _urlopen():
error_handler = web_util.SpackHTTPDefaultErrorHandler()
# One opener with HTTPS ssl enabled
with_ssl = build_opener(
HTTPHandler(), HTTPSHandler(context=web_util.ssl_create_default_context()), error_handler
)
# One opener with HTTPS ssl disabled
without_ssl = build_opener(
HTTPHandler(), HTTPSHandler(context=ssl._create_unverified_context()), error_handler
)
# And dynamically dispatch based on the config:verify_ssl.
def dispatch_open(fullurl, data=None, timeout=None, verify_ssl=True):
opener = with_ssl if verify_ssl else without_ssl
timeout = timeout or cfg.get("config:connect_timeout", 1)
return opener.open(fullurl, data, timeout)
return dispatch_open
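
# In isolation, the dispatch above is plain urllib machinery: two openers built
# once, one selected per call by verify_ssl. A self-contained restatement using
# only the standard library (the timeout default here is arbitrary):

import ssl
from urllib.request import HTTPHandler, HTTPSHandler, build_opener


def make_dispatch_open():
    # One opener that verifies certificates, one that does not
    with_ssl = build_opener(HTTPHandler(), HTTPSHandler(context=ssl.create_default_context()))
    without_ssl = build_opener(
        HTTPHandler(), HTTPSHandler(context=ssl._create_unverified_context())
    )

    def dispatch_open(fullurl, data=None, timeout=10.0, verify_ssl=True):
        opener = with_ssl if verify_ssl else without_ssl
        return opener.open(fullurl, data, timeout)

    return dispatch_open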
IS_WINDOWS = sys.platform == "win32"
SPACK_RESERVED_TAGS = ["public", "protected", "notary"]
# this exists purely for testing purposes
_urlopen = web_util.urlopen
_dyn_mapping_urlopener = Singleton(_urlopen)
def copy_files_to_artifacts(src, artifacts_dir):
@@ -256,25 +279,26 @@ def copy_test_results(self, source, dest):
reports = fs.join_path(source, "*_Test*.xml")
copy_files_to_artifacts(reports, dest)
def create_buildgroup(self, headers, url, group_name, group_type):
def create_buildgroup(self, opener, headers, url, group_name, group_type):
data = {"newbuildgroup": group_name, "project": self.project, "type": group_type}
enc_data = json.dumps(data).encode("utf-8")
request = Request(url, data=enc_data, headers=headers)
try:
response_text = _urlopen(request, timeout=SPACK_CDASH_TIMEOUT).read()
except OSError as e:
tty.warn(f"Failed to create CDash buildgroup: {e}")
response = opener.open(request, timeout=SPACK_CDASH_TIMEOUT)
response_code = response.getcode()
if response_code not in [200, 201]:
msg = f"Creating buildgroup failed (response code = {response_code})"
tty.warn(msg)
return None
try:
response_json = json.loads(response_text)
return response_json["id"]
except (json.JSONDecodeError, KeyError) as e:
tty.warn(f"Failed to parse CDash response: {e}")
return None
response_text = response.read()
response_json = json.loads(response_text)
build_group_id = response_json["id"]
return build_group_id
def populate_buildgroup(self, job_names):
url = f"{self.url}/api/v1/buildgroup.php"
@@ -284,11 +308,16 @@ def populate_buildgroup(self, job_names):
"Content-Type": "application/json",
}
parent_group_id = self.create_buildgroup(headers, url, self.build_group, "Daily")
group_id = self.create_buildgroup(headers, url, f"Latest {self.build_group}", "Latest")
opener = build_opener(HTTPHandler)
parent_group_id = self.create_buildgroup(opener, headers, url, self.build_group, "Daily")
group_id = self.create_buildgroup(
opener, headers, url, f"Latest {self.build_group}", "Latest"
)
if not parent_group_id or not group_id:
tty.warn(f"Failed to create or retrieve buildgroups for {self.build_group}")
msg = f"Failed to create or retrieve buildgroups for {self.build_group}"
tty.warn(msg)
return
data = {
@@ -300,12 +329,15 @@ def populate_buildgroup(self, job_names):
enc_data = json.dumps(data).encode("utf-8")
request = Request(url, data=enc_data, headers=headers, method="PUT")
request = Request(url, data=enc_data, headers=headers)
request.get_method = lambda: "PUT"
try:
_urlopen(request, timeout=SPACK_CDASH_TIMEOUT)
except OSError as e:
tty.warn(f"Failed to populate CDash buildgroup: {e}")
response = opener.open(request, timeout=SPACK_CDASH_TIMEOUT)
response_code = response.getcode()
if response_code != 200:
msg = f"Error response code ({response_code}) in populate_buildgroup"
tty.warn(msg)
def report_skipped(self, spec: spack.spec.Spec, report_dir: str, reason: Optional[str]):
"""Explicitly report skipping testing of a spec (e.g., it's CI
@@ -703,6 +735,9 @@ def _apply_section(dest, src):
for value in header.values():
value = os.path.expandvars(value)
verify_ssl = mapping.get("verify_ssl", spack.config.get("config:verify_ssl", True))
timeout = mapping.get("timeout", spack.config.get("config:connect_timeout", 1))
required = mapping.get("require", [])
allowed = mapping.get("allow", [])
ignored = mapping.get("ignore", [])
@@ -736,15 +771,19 @@ def job_query(job):
endpoint_url._replace(query=query).geturl(), headers=header, method="GET"
)
try:
response = _urlopen(request)
config = json.load(response)
response = _dyn_mapping_urlopener(
request, verify_ssl=verify_ssl, timeout=timeout
)
except Exception as e:
# For now just ignore any errors from dynamic mapping and continue
# This is still experimental, and failures should not stop CI
# from running normally
tty.warn(f"Failed to fetch dynamic mapping for query:\n\t{query}: {e}")
tty.warn(f"Failed to fetch dynamic mapping for query:\n\t{query}")
tty.warn(f"{e}")
continue
config = json.load(codecs.getreader("utf-8")(response))
# Strip ignore keys
if ignored:
for key in ignored:

View File

@@ -202,7 +202,7 @@ def _concretize_spec_pairs(
# Special case for concretizing a single spec
if len(to_concretize) == 1:
abstract, concrete = to_concretize[0]
return [concrete or spack.concretize.concretize_one(abstract, tests=tests)]
return [concrete or abstract.concretized(tests=tests)]
# Special case if every spec is either concrete or has an abstract hash
if all(
@@ -254,9 +254,9 @@ def matching_spec_from_env(spec):
"""
env = ev.active_environment()
if env:
return env.matching_spec(spec) or spack.concretize.concretize_one(spec)
return env.matching_spec(spec) or spec.concretized()
else:
return spack.concretize.concretize_one(spec)
return spec.concretized()
def matching_specs_from_env(specs):
@@ -297,7 +297,7 @@ def disambiguate_spec(
def disambiguate_spec_from_hashes(
spec: spack.spec.Spec,
hashes: Optional[List[str]],
hashes: List[str],
local: bool = False,
installed: Union[bool, InstallRecordStatus] = True,
first: bool = False,

View File

@@ -3,7 +3,6 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import collections
import warnings
import archspec.cpu
@@ -52,10 +51,10 @@ def setup_parser(subparser):
"-t", "--target", action="store_true", default=False, help="print only the target"
)
parts2.add_argument(
"-f", "--frontend", action="store_true", default=False, help="print frontend (DEPRECATED)"
"-f", "--frontend", action="store_true", default=False, help="print frontend"
)
parts2.add_argument(
"-b", "--backend", action="store_true", default=False, help="print backend (DEPRECATED)"
"-b", "--backend", action="store_true", default=False, help="print backend"
)
@@ -99,14 +98,15 @@ def arch(parser, args):
display_targets(archspec.cpu.TARGETS)
return
os_args, target_args = "default_os", "default_target"
if args.frontend:
warnings.warn("the argument --frontend is deprecated, and will be removed in Spack v1.0")
os_args, target_args = "frontend", "frontend"
elif args.backend:
warnings.warn("the argument --backend is deprecated, and will be removed in Spack v1.0")
os_args, target_args = "backend", "backend"
host_platform = spack.platforms.host()
host_os = host_platform.default_operating_system()
host_target = host_platform.default_target()
host_os = host_platform.operating_system(os_args)
host_target = host_platform.target(target_args)
if args.family:
host_target = host_target.family
elif args.generic:

View File

@@ -1,7 +1,7 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import os.path
import shutil
import sys
import tempfile
@@ -14,9 +14,9 @@
import spack.bootstrap
import spack.bootstrap.config
import spack.bootstrap.core
import spack.concretize
import spack.config
import spack.mirrors.utils
import spack.spec
import spack.stage
import spack.util.path
import spack.util.spack_yaml
@@ -397,7 +397,7 @@ def _mirror(args):
llnl.util.tty.msg(msg.format(spec_str, mirror_dir))
# Suppress tty from the call below for terser messages
llnl.util.tty.set_msg_enabled(False)
spec = spack.concretize.concretize_one(spec_str)
spec = spack.spec.Spec(spec_str).concretized()
for node in spec.traverse():
spack.mirrors.utils.create(mirror_dir, [node])
llnl.util.tty.set_msg_enabled(True)
@@ -436,7 +436,6 @@ def write_metadata(subdir, metadata):
shutil.copy(spack.util.path.canonicalize_path(GNUPG_JSON), abs_directory)
shutil.copy(spack.util.path.canonicalize_path(PATCHELF_JSON), abs_directory)
instructions += cmd.format("local-binaries", rel_directory)
instructions += " % spack buildcache update-index <final-path>/bootstrap_cache\n"
print(instructions)

View File

@@ -16,7 +16,6 @@
import spack.binary_distribution as bindist
import spack.cmd
import spack.concretize
import spack.config
import spack.deptypes as dt
import spack.environment as ev
@@ -555,7 +554,8 @@ def check_fn(args: argparse.Namespace):
tty.msg("No specs provided, exiting.")
return
specs = [spack.concretize.concretize_one(s) for s in specs]
for spec in specs:
spec.concretize()
# Next see if there are any configured binary mirrors
configured_mirrors = spack.config.get("mirrors", scope=args.scope)
@@ -623,7 +623,7 @@ def save_specfile_fn(args):
root = specs[0]
if not root.concrete:
root = spack.concretize.concretize_one(root)
root.concretize()
save_dependency_specfiles(
root, args.specfile_dir, dependencies=spack.cmd.parse_specs(args.specs)

View File

@@ -4,7 +4,7 @@
import argparse
import os
import os.path
import textwrap
from llnl.util.lang import stable_partition

View File

@@ -2,6 +2,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import os.path
import llnl.util.tty

View File

@@ -86,8 +86,8 @@ def create_db_tarball(args):
def report(args):
host_platform = spack.platforms.host()
host_os = host_platform.default_operating_system()
host_target = host_platform.default_target()
host_os = host_platform.operating_system("frontend")
host_target = host_platform.target("frontend")
architecture = spack.spec.ArchSpec((str(host_platform), str(host_os), str(host_target)))
print("* **Spack:**", spack.get_version())
print("* **Python:**", platform.python_version())

View File

@@ -18,7 +18,6 @@
from llnl.util.symlink import symlink
import spack.cmd
import spack.concretize
import spack.environment as ev
import spack.installer
import spack.store
@@ -104,7 +103,7 @@ def deprecate(parser, args):
)
if args.install:
deprecator = spack.concretize.concretize_one(specs[1])
deprecator = specs[1].concretized()
else:
deprecator = spack.cmd.disambiguate_spec(specs[1], env, local=True)

View File

@@ -10,7 +10,6 @@
import spack.build_environment
import spack.cmd
import spack.cmd.common.arguments
import spack.concretize
import spack.config
import spack.repo
from spack.cmd.common import arguments
@@ -114,8 +113,8 @@ def dev_build(self, args):
source_path = os.path.abspath(source_path)
# Forces the build to run out of the source directory.
spec.constrain(f'dev_path="{source_path}"')
spec = spack.concretize.concretize_one(spec)
spec.constrain("dev_path=%s" % source_path)
spec.concretize()
if spec.installed:
tty.error("Already installed in %s" % spec.prefix)

View File

@@ -54,6 +54,10 @@
@m{target=target} specific <target> processor
@m{arch=platform-os-target} shortcut for all three above
cross-compiling:
@m{os=backend} or @m{os=be} build for compute node (backend)
@m{os=frontend} or @m{os=fe} build for login node (frontend)
dependencies:
^dependency [constraints] specify constraints on dependencies
^@K{/hash} build with a specific installed

View File

@@ -13,7 +13,6 @@
from llnl.util import lang, tty
import spack.cmd
import spack.concretize
import spack.config
import spack.environment as ev
import spack.paths
@@ -451,7 +450,7 @@ def concrete_specs_from_file(args):
else:
s = spack.spec.Spec.from_json(f)
concretized = spack.concretize.concretize_one(s)
concretized = s.concretized()
if concretized.dag_hash() != s.dag_hash():
msg = 'skipped invalid file "{0}". '
msg += "The file does not contain a concrete spec."

View File

@@ -7,9 +7,9 @@
from llnl.path import convert_to_posix_path
import spack.concretize
import spack.paths
import spack.util.executable
from spack.spec import Spec
description = "generate Windows installer"
section = "admin"
@@ -65,7 +65,8 @@ def make_installer(parser, args):
"""
if sys.platform == "win32":
output_dir = args.output_dir
cmake_spec = spack.concretize.concretize_one("cmake")
cmake_spec = Spec("cmake")
cmake_spec.concretize()
cmake_path = os.path.join(cmake_spec.prefix, "bin", "cmake.exe")
cpack_path = os.path.join(cmake_spec.prefix, "bin", "cpack.exe")
spack_source = args.spack_source

View File

@@ -492,7 +492,7 @@ def extend_with_additional_versions(specs, num_versions):
mirror_specs = spack.mirrors.utils.get_all_versions(specs)
else:
mirror_specs = spack.mirrors.utils.get_matching_versions(specs, num_versions=num_versions)
mirror_specs = [spack.concretize.concretize_one(x) for x in mirror_specs]
mirror_specs = [x.concretized() for x in mirror_specs]
return mirror_specs

View File

@@ -5,7 +5,7 @@
"""Implementation details of the ``spack module`` command."""
import collections
import os
import os.path
import shutil
import sys

View File

@@ -2,7 +2,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import os.path
import shutil
import llnl.util.tty as tty

View File

@@ -144,7 +144,7 @@ def is_installed(spec):
record = spack.store.STORE.db.query_local_by_spec_hash(spec.dag_hash())
return record and record.installed
all_specs = traverse.traverse_nodes(
specs = traverse.traverse_nodes(
specs,
root=False,
order="breadth",
@@ -155,7 +155,7 @@ def is_installed(spec):
)
with spack.store.STORE.db.read_transaction():
return [spec for spec in all_specs if is_installed(spec)]
return [spec for spec in specs if is_installed(spec)]
def dependent_environments(

View File

@@ -5,7 +5,7 @@
import argparse
import collections
import io
import os
import os.path
import re
import sys

View File

@@ -801,17 +801,17 @@ def _extract_compiler_paths(spec: "spack.spec.Spec") -> Optional[Dict[str, str]]
def _extract_os_and_target(spec: "spack.spec.Spec"):
if not spec.architecture:
host_platform = spack.platforms.host()
operating_system = host_platform.default_operating_system()
target = host_platform.default_target()
operating_system = host_platform.operating_system("default_os")
target = host_platform.target("default_target")
else:
target = spec.architecture.target
if not target:
target = spack.platforms.host().default_target()
target = spack.platforms.host().target("default_target")
operating_system = spec.os
if not operating_system:
host_platform = spack.platforms.host()
operating_system = host_platform.default_operating_system()
operating_system = host_platform.operating_system("default_os")
return operating_system, target
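
These hunks toggle between dedicated accessors (default_operating_system()/default_target()) and string-keyed lookups (operating_system("default_os")/target("default_target")); both resolve the host defaults. A usage sketch of the accessor form:

import spack.platforms

host_platform = spack.platforms.host()
host_os = host_platform.default_operating_system()
host_target = host_platform.default_target()
print(str(host_platform), str(host_os), str(host_target))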

View File

@@ -37,12 +37,13 @@ def enable_compiler_existence_check():
SpecPairInput = Tuple[Spec, Optional[Spec]]
SpecPair = Tuple[Spec, Spec]
SpecLike = Union[Spec, str]
TestsType = Union[bool, Iterable[str]]
def _concretize_specs_together(
abstract_specs: Sequence[Spec], tests: TestsType = False
) -> List[Spec]:
def concretize_specs_together(
abstract_specs: Sequence[SpecLike], tests: TestsType = False
) -> Sequence[Spec]:
"""Given a number of specs as input, tries to concretize them together.
Args:
@@ -50,10 +51,11 @@ def _concretize_specs_together(
tests: list of package names for which to consider tests dependencies. If True, all nodes
will have test dependencies. If False, test dependencies will be disregarded.
"""
from spack.solver.asp import Solver
import spack.solver.asp
allow_deprecated = spack.config.get("config:deprecated", False)
result = Solver().solve(abstract_specs, tests=tests, allow_deprecated=allow_deprecated)
solver = spack.solver.asp.Solver()
result = solver.solve(abstract_specs, tests=tests, allow_deprecated=allow_deprecated)
return [s.copy() for s in result.specs]
@@ -70,7 +72,7 @@ def concretize_together(
"""
to_concretize = [concrete if concrete else abstract for abstract, concrete in spec_list]
abstract_specs = [abstract for abstract, _ in spec_list]
concrete_specs = _concretize_specs_together(to_concretize, tests=tests)
concrete_specs = concretize_specs_together(to_concretize, tests=tests)
return list(zip(abstract_specs, concrete_specs))
@@ -88,7 +90,7 @@ def concretize_together_when_possible(
tests: list of package names for which to consider tests dependencies. If True, all nodes
will have test dependencies. If False, test dependencies will be disregarded.
"""
from spack.solver.asp import Solver
import spack.solver.asp
to_concretize = [concrete if concrete else abstract for abstract, concrete in spec_list]
old_concrete_to_abstract = {
@@ -96,8 +98,9 @@ def concretize_together_when_possible(
}
result_by_user_spec = {}
solver = spack.solver.asp.Solver()
allow_deprecated = spack.config.get("config:deprecated", False)
for result in Solver().solve_in_rounds(
for result in solver.solve_in_rounds(
to_concretize, tests=tests, allow_deprecated=allow_deprecated
):
result_by_user_spec.update(result.specs_by_input)
@@ -121,7 +124,7 @@ def concretize_separately(
tests: list of package names for which to consider tests dependencies. If True, all nodes
will have test dependencies. If False, test dependencies will be disregarded.
"""
from spack.bootstrap import ensure_bootstrap_configuration, ensure_clingo_importable_or_raise
import spack.bootstrap
to_concretize = [abstract for abstract, concrete in spec_list if not concrete]
args = [
@@ -131,8 +134,8 @@ def concretize_separately(
]
ret = [(i, abstract) for i, abstract in enumerate(to_concretize) if abstract.concrete]
# Ensure we don't try to bootstrap clingo in parallel
with ensure_bootstrap_configuration():
ensure_clingo_importable_or_raise()
with spack.bootstrap.ensure_bootstrap_configuration():
spack.bootstrap.ensure_clingo_importable_or_raise()
# Ensure all the indexes have been built or updated, since
# otherwise the processes in the pool may timeout on waiting
@@ -187,52 +190,10 @@ def _concretize_task(packed_arguments: Tuple[int, str, TestsType]) -> Tuple[int,
index, spec_str, tests = packed_arguments
with tty.SuppressOutput(msg_enabled=False):
start = time.time()
spec = concretize_one(Spec(spec_str), tests=tests)
spec = Spec(spec_str).concretized(tests=tests)
return index, spec, time.time() - start
def concretize_one(spec: Union[str, Spec], tests: TestsType = False) -> Spec:
"""Return a concretized copy of the given spec.
Args:
tests: if False disregard 'test' dependencies, if a list of names activate them for
the packages in the list, if True activate 'test' dependencies for all packages.
"""
from spack.solver.asp import Solver, SpecBuilder
if isinstance(spec, str):
spec = Spec(spec)
spec = spec.lookup_hash()
if spec.concrete:
return spec.copy()
for node in spec.traverse():
if not node.name:
raise spack.error.SpecError(
f"Spec {node} has no name; cannot concretize an anonymous spec"
)
allow_deprecated = spack.config.get("config:deprecated", False)
result = Solver().solve([spec], tests=tests, allow_deprecated=allow_deprecated)
# take the best answer
opt, i, answer = min(result.answers)
name = spec.name
# TODO: Consolidate this code with similar code in solve.py
if spec.virtual:
providers = [s.name for s in answer.values() if s.package.provides(name)]
name = providers[0]
node = SpecBuilder.make_node(pkg=name)
assert (
node in answer
), f"cannot find {name} in the list of specs {','.join([n.pkg for n in answer.keys()])}"
concretized = answer[node]
return concretized
class UnavailableCompilerVersionError(spack.error.SpackError):
"""Raised when there is no available compiler that satisfies a
compiler spec."""

View File

@@ -36,8 +36,6 @@
import sys
from typing import Any, Callable, Dict, Generator, List, Optional, Tuple, Union
import jsonschema
from llnl.util import filesystem, lang, tty
import spack.error
@@ -53,7 +51,6 @@
import spack.schema.definitions
import spack.schema.develop
import spack.schema.env
import spack.schema.env_vars
import spack.schema.mirrors
import spack.schema.modules
import spack.schema.packages
@@ -71,7 +68,6 @@
"compilers": spack.schema.compilers.schema,
"concretizer": spack.schema.concretizer.schema,
"definitions": spack.schema.definitions.schema,
"env_vars": spack.schema.env_vars.schema,
"view": spack.schema.view.schema,
"develop": spack.schema.develop.schema,
"mirrors": spack.schema.mirrors.schema,
@@ -1052,6 +1048,8 @@ def validate(
This leverages the line information (start_mark, end_mark) stored
on Spack YAML structures.
"""
import jsonschema
try:
spack.schema.Validator(schema).validate(data)
except jsonschema.ValidationError as e:

View File

@@ -6,8 +6,6 @@
"""
import warnings
import jsonschema
import spack.environment as ev
import spack.schema.env as env
import spack.util.spack_yaml as syaml
@@ -32,6 +30,8 @@ def validate(configuration_file):
Returns:
A sanitized copy of the configuration stored in the input file
"""
import jsonschema
with open(configuration_file, encoding="utf-8") as f:
config = syaml.load(f)

View File

@@ -3,7 +3,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Manages the details on the images used in the various stages."""
import json
import os
import os.path
import shlex
import sys

View File

@@ -9,8 +9,6 @@
from collections import namedtuple
from typing import Optional
import jsonschema
import spack.environment as ev
import spack.error
import spack.schema.env
@@ -190,6 +188,8 @@ def paths(self):
@tengine.context_property
def manifest(self):
"""The spack.yaml file that should be used in the image"""
import jsonschema
# Copy in the part of spack.yaml prescribed in the configuration file
manifest = copy.deepcopy(self.config)
manifest.pop("container")

View File

@@ -123,15 +123,6 @@
"deprecated_for",
)
#: File where the database is written
INDEX_JSON_FILE = "index.json"
# Verifier file to check last modification of the DB
_INDEX_VERIFIER_FILE = "index_verifier"
# Lockfile for the database
_LOCK_FILE = "lock"
@llnl.util.lang.memoized
def _getfqdn():
@@ -269,7 +260,7 @@ class ForbiddenLockError(SpackError):
class ForbiddenLock:
def __getattr__(self, name):
raise ForbiddenLockError(f"Cannot access attribute '{name}' of lock")
raise ForbiddenLockError("Cannot access attribute '{0}' of lock".format(name))
def __reduce__(self):
return ForbiddenLock, tuple()
@@ -428,25 +419,14 @@ class FailureTracker:
the likelihood of collision very low with no cleanup required.
"""
#: root directory of the failure tracker
dir: pathlib.Path
#: File for locking particular concrete spec hashes
locker: SpecLocker
def __init__(self, root_dir: Union[str, pathlib.Path], default_timeout: Optional[float]):
#: Ensure a persistent location for dealing with parallel installation
#: failures (e.g., across near-concurrent processes).
self.dir = pathlib.Path(root_dir) / _DB_DIRNAME / "failures"
self.locker = SpecLocker(failures_lock_path(root_dir), default_timeout=default_timeout)
def _ensure_parent_directories(self) -> None:
"""Ensure that parent directories of the FailureTracker exist.
Accesses the filesystem only once, the first time it's called on a given FailureTracker.
"""
self.dir.mkdir(parents=True, exist_ok=True)
self.locker = SpecLocker(failures_lock_path(root_dir), default_timeout=default_timeout)
def clear(self, spec: "spack.spec.Spec", force: bool = False) -> None:
"""Removes any persistent and cached failure tracking for the spec.
@@ -489,18 +469,13 @@ def clear_all(self) -> None:
tty.debug("Removing prefix failure tracking files")
try:
marks = os.listdir(str(self.dir))
except FileNotFoundError:
return # directory doesn't exist yet
for fail_mark in os.listdir(str(self.dir)):
try:
(self.dir / fail_mark).unlink()
except OSError as exc:
tty.warn(f"Unable to remove failure marking file {fail_mark}: {str(exc)}")
except OSError as exc:
tty.warn(f"Unable to remove failure marking files: {str(exc)}")
return
for fail_mark in marks:
try:
(self.dir / fail_mark).unlink()
except OSError as exc:
tty.warn(f"Unable to remove failure marking file {fail_mark}: {str(exc)}")
def mark(self, spec: "spack.spec.Spec") -> lk.Lock:
"""Marks a spec as failing to install.
@@ -508,8 +483,6 @@ def mark(self, spec: "spack.spec.Spec") -> lk.Lock:
Args:
spec: spec that failed to install
"""
self._ensure_parent_directories()
# Dump the spec to the failure file for (manual) debugging purposes
path = self._path(spec)
path.write_text(spec.to_json())
@@ -594,13 +567,17 @@ def __init__(
Relevant only if the repository is not an upstream.
"""
self.root = root
self.database_directory = pathlib.Path(self.root) / _DB_DIRNAME
self.database_directory = os.path.join(self.root, _DB_DIRNAME)
self.layout = layout
# Set up layout of database files within the db dir
self._index_path = self.database_directory / INDEX_JSON_FILE
self._verifier_path = self.database_directory / _INDEX_VERIFIER_FILE
self._lock_path = self.database_directory / _LOCK_FILE
self._index_path = os.path.join(self.database_directory, "index.json")
self._verifier_path = os.path.join(self.database_directory, "index_verifier")
self._lock_path = os.path.join(self.database_directory, "lock")
# Create needed directories and files
if not is_upstream and not os.path.exists(self.database_directory):
fs.mkdirp(self.database_directory)
self.is_upstream = is_upstream
self.last_seen_verifier = ""
@@ -615,14 +592,14 @@ def __init__(
# initialize rest of state.
self.db_lock_timeout = lock_cfg.database_timeout
tty.debug(f"DATABASE LOCK TIMEOUT: {str(self.db_lock_timeout)}s")
tty.debug("DATABASE LOCK TIMEOUT: {0}s".format(str(self.db_lock_timeout)))
self.lock: Union[ForbiddenLock, lk.Lock]
if self.is_upstream:
self.lock = ForbiddenLock()
else:
self.lock = lk.Lock(
str(self._lock_path),
self._lock_path,
default_timeout=self.db_lock_timeout,
desc="database",
enable=lock_cfg.enable,
@@ -639,11 +616,6 @@ def __init__(
self._write_transaction_impl = lk.WriteTransaction
self._read_transaction_impl = lk.ReadTransaction
def _ensure_parent_directories(self):
"""Create the parent directory for the DB, if necessary."""
if not self.is_upstream:
self.database_directory.mkdir(parents=True, exist_ok=True)
def write_transaction(self):
"""Get a write lock context manager for use in a `with` block."""
return self._write_transaction_impl(self.lock, acquire=self._read, release=self._write)
@@ -658,8 +630,6 @@ def _write_to_file(self, stream):
This function does not do any locking or transactions.
"""
self._ensure_parent_directories()
# map from per-spec hash code to installation record.
installs = dict(
(k, v.to_dict(include_fields=self.record_fields)) for k, v in self._data.items()
@@ -789,7 +759,7 @@ def _read_from_file(self, filename):
Does not do any locking.
"""
try:
with open(str(filename), "r", encoding="utf-8") as f:
with open(filename, "r", encoding="utf-8") as f:
# In the future we may use a stream of JSON objects, hence `raw_decode` for compat.
fdata, _ = JSONDecoder().raw_decode(f.read())
except Exception as e:
@@ -890,13 +860,11 @@ def reindex(self):
if self.is_upstream:
raise UpstreamDatabaseLockingError("Cannot reindex an upstream database")
self._ensure_parent_directories()
# Special transaction to avoid recursive reindex calls and to
# ignore errors if we need to rebuild a corrupt database.
def _read_suppress_error():
try:
if self._index_path.is_file():
if os.path.isfile(self._index_path):
self._read_from_file(self._index_path)
except CorruptDatabaseError as e:
tty.warn(f"Reindexing corrupt database, error was: {e}")
@@ -1039,7 +1007,7 @@ def _check_ref_counts(self):
% (key, found, expected, self._index_path)
)
def _write(self, type=None, value=None, traceback=None):
def _write(self, type, value, traceback):
"""Write the in-memory database index to its file path.
This is a helper function called by the WriteTransaction context
@@ -1050,8 +1018,6 @@ def _write(self, type=None, value=None, traceback=None):
This routine does no locking.
"""
self._ensure_parent_directories()
# Do not write if exceptions were raised
if type is not None:
# A failure interrupted a transaction, so we should record that
@@ -1060,16 +1026,16 @@ def _write(self, type=None, value=None, traceback=None):
self._state_is_inconsistent = True
return
temp_file = str(self._index_path) + (".%s.%s.temp" % (_getfqdn(), os.getpid()))
temp_file = self._index_path + (".%s.%s.temp" % (_getfqdn(), os.getpid()))
# Write a temporary database file, then move it into place
try:
with open(temp_file, "w", encoding="utf-8") as f:
self._write_to_file(f)
fs.rename(temp_file, str(self._index_path))
fs.rename(temp_file, self._index_path)
if _use_uuid:
with self._verifier_path.open("w", encoding="utf-8") as f:
with open(self._verifier_path, "w", encoding="utf-8") as f:
new_verifier = str(uuid.uuid4())
f.write(new_verifier)
self.last_seen_verifier = new_verifier
@@ -1082,11 +1048,11 @@ def _write(self, type=None, value=None, traceback=None):
def _read(self):
"""Re-read Database from the data in the set location. This does no locking."""
if self._index_path.is_file():
if os.path.isfile(self._index_path):
current_verifier = ""
if _use_uuid:
try:
with self._verifier_path.open("r", encoding="utf-8") as f:
with open(self._verifier_path, "r", encoding="utf-8") as f:
current_verifier = f.read()
except BaseException:
pass
@@ -1099,7 +1065,7 @@ def _read(self):
self._state_is_inconsistent = False
return
elif self.is_upstream:
tty.warn(f"upstream not found: {self._index_path}")
tty.warn("upstream not found: {0}".format(self._index_path))
def _add(
self,
@@ -1364,7 +1330,7 @@ def deprecate(self, spec: "spack.spec.Spec", deprecator: "spack.spec.Spec") -> N
def installed_relatives(
self,
spec: "spack.spec.Spec",
direction: tr.DirectionType = "children",
direction: str = "children",
transitive: bool = True,
deptype: Union[dt.DepFlag, dt.DepTypes] = dt.ALL,
) -> Set["spack.spec.Spec"]:
@@ -1715,7 +1681,7 @@ def query(
)
results = list(local_results) + list(x for x in upstream_results if x not in local_results)
results.sort() # type: ignore[call-overload]
results.sort()
return results
def query_one(
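
One side of the database hunks above defers directory creation to an _ensure_parent_directories() helper called before each write, instead of creating directories eagerly in __init__. The pattern in isolation, with hypothetical names:

import pathlib


class LazyStore:
    """Create the backing directory only when the first write happens."""

    def __init__(self, root: str) -> None:
        self.directory = pathlib.Path(root) / "store"

    def _ensure_parent_directories(self) -> None:
        # mkdir(parents=True, exist_ok=True) is idempotent, so every writer may call it
        self.directory.mkdir(parents=True, exist_ok=True)

    def write(self, name: str, text: str) -> None:
        self._ensure_parent_directories()
        (self.directory / name).write_text(text, encoding="utf-8")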

View File

@@ -15,6 +15,7 @@
import glob
import itertools
import os
import os.path
import pathlib
import re
import sys

View File

@@ -7,6 +7,7 @@
import collections
import concurrent.futures
import os
import os.path
import re
import sys
import traceback

View File

@@ -32,7 +32,7 @@ class OpenMpi(Package):
"""
import collections
import collections.abc
import os
import os.path
import re
from typing import Any, Callable, List, Optional, Tuple, Type, Union

View File

@@ -10,7 +10,6 @@
import spack.environment as ev
import spack.repo
import spack.schema.environment
import spack.store
from spack.util.environment import EnvironmentModifications
@@ -157,11 +156,6 @@ def activate(
# MANPATH, PYTHONPATH, etc. All variables that end in PATH (case-sensitive)
# become PATH variables.
#
env_vars_yaml = env.manifest.configuration.get("env_vars", None)
if env_vars_yaml:
env_mods.extend(spack.schema.environment.parse(env_vars_yaml))
try:
if view and env.has_view(view):
with spack.store.STORE.db.read_transaction():
@@ -195,10 +189,6 @@ def deactivate() -> EnvironmentModifications:
if active is None:
return env_mods
env_vars_yaml = active.manifest.configuration.get("env_vars", None)
if env_vars_yaml:
env_mods.extend(spack.schema.environment.parse(env_vars_yaml).reversed())
active_view = os.getenv(ev.spack_env_view_var)
if active_view and active.has_view(active_view):

View File

@@ -25,6 +25,7 @@
import functools
import http.client
import os
import os.path
import re
import shutil
import urllib.error
@@ -320,15 +321,9 @@ def _fetch_urllib(self, url):
request = urllib.request.Request(url, headers={"User-Agent": web_util.SPACK_USER_AGENT})
if os.path.lexists(save_file):
os.remove(save_file)
try:
response = web_util.urlopen(request)
tty.msg(f"Fetching {url}")
with open(save_file, "wb") as f:
shutil.copyfileobj(response, f)
except OSError as e:
except (TimeoutError, urllib.error.URLError) as e:
# clean up archive on failure.
if self.archive_file:
os.remove(self.archive_file)
@@ -336,6 +331,14 @@ def _fetch_urllib(self, url):
os.remove(save_file)
raise FailedDownloadError(e) from e
tty.msg(f"Fetching {url}")
if os.path.lexists(save_file):
os.remove(save_file)
with open(save_file, "wb") as f:
shutil.copyfileobj(response, f)
# Save the redirected URL for error messages. Sometimes we're redirected to an arbitrary
# mirror that is broken, leading to spurious download failures. In that case it's helpful
# for users to know which URL was actually fetched.
@@ -532,16 +535,11 @@ def __init__(self, *, url: str, checksum: Optional[str] = None, **kwargs):
@_needs_stage
def fetch(self):
file = self.stage.save_filename
if os.path.lexists(file):
os.remove(file)
tty.msg(f"Fetching {self.url}")
try:
response = self._urlopen(self.url)
tty.msg(f"Fetching {self.url}")
with open(file, "wb") as f:
shutil.copyfileobj(response, f)
except OSError as e:
except (TimeoutError, urllib.error.URLError) as e:
# clean up archive on failure.
if self.archive_file:
os.remove(self.archive_file)
@@ -549,6 +547,12 @@ def fetch(self):
os.remove(file)
raise FailedDownloadError(e) from e
if os.path.lexists(file):
os.remove(file)
with open(file, "wb") as f:
shutil.copyfileobj(response, f)
class VCSFetchStrategy(FetchStrategy):
"""Superclass for version control system fetch strategies.

View File

@@ -35,6 +35,7 @@
import spack.config
import spack.directory_layout
import spack.paths
import spack.projections
import spack.relocate
import spack.schema.projections
@@ -43,6 +44,7 @@
import spack.util.spack_json as s_json
import spack.util.spack_yaml as s_yaml
from spack.error import SpackError
from spack.hooks import sbang
__all__ = ["FilesystemView", "YamlFilesystemView"]
@@ -89,10 +91,16 @@ def view_copy(
if stat.S_ISLNK(src_stat.st_mode):
spack.relocate.relocate_links(links=[dst], prefix_to_prefix=prefix_to_projection)
elif spack.relocate.is_binary(dst):
spack.relocate.relocate_text_bin(binaries=[dst], prefix_to_prefix=prefix_to_projection)
spack.relocate.relocate_text_bin(binaries=[dst], prefixes=prefix_to_projection)
else:
prefix_to_projection[spack.store.STORE.layout.root] = view._root
spack.relocate.relocate_text(files=[dst], prefix_to_prefix=prefix_to_projection)
# This is vestigial code for the *old* location of sbang.
prefix_to_projection[f"#!/bin/bash {spack.paths.spack_root}/bin/sbang"] = (
sbang.sbang_shebang_line()
)
spack.relocate.relocate_text(files=[dst], prefixes=prefix_to_projection)
# The os module on Windows does not have a chown function.
if sys.platform != "win32":
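view_copy above dispatches on the kind of file that landed in the view. A hypothetical sketch of that three-way dispatch; the null-byte check is a crude stand-in for Spack's real binary detection:

```python
import os
import stat

def _looks_binary(path):
    # Heuristic for illustration only: real Spack inspects magic bytes.
    with open(path, "rb") as f:
        return b"\x00" in f.read(1024)

# Symlinks, binaries, and text files each get their own relocation strategy.
def relocation_strategy(dst):
    mode = os.lstat(dst).st_mode
    if stat.S_ISLNK(mode):
        return "relocate symlink target"
    elif _looks_binary(dst):
        return "rewrite prefixes in binary"
    else:
        return "rewrite prefixes in text"
```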

View File

@@ -275,7 +275,7 @@ def _do_fake_install(pkg: "spack.package_base.PackageBase") -> None:
fs.mkdirp(pkg.prefix.bin)
fs.touch(os.path.join(pkg.prefix.bin, command))
if sys.platform != "win32":
chmod = which("chmod", required=True)
chmod = which("chmod")
chmod("+x", os.path.join(pkg.prefix.bin, command))
# Install fake header file
@@ -539,7 +539,7 @@ def dump_packages(spec: "spack.spec.Spec", path: str) -> None:
# Note that we copy them in as they are in the *install* directory
# NOT as they are in the repository, because we want a snapshot of
# how *this* particular build was done.
for node in spec.traverse(deptype="all"):
for node in spec.traverse(deptype=all):
if node is not spec:
# Locate the dependency package in the install tree and find
# its provenance information.

View File

@@ -14,6 +14,7 @@
import io
import operator
import os
import os.path
import pstats
import re
import shlex
@@ -728,7 +729,7 @@ def _compatible_sys_types():
with the current host.
"""
host_platform = spack.platforms.host()
host_os = str(host_platform.default_operating_system())
host_os = str(host_platform.operating_system("default_os"))
host_target = archspec.cpu.host()
compatible_targets = [host_target] + host_target.ancestors
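The compatible targets come straight from archspec: the host microarchitecture plus all of its ancestors. A runnable illustration (archspec is a standalone PyPI package):

```python
import archspec.cpu

host_target = archspec.cpu.host()
compatible_targets = [host_target] + host_target.ancestors
print([str(t) for t in compatible_targets])  # e.g. ['zen2', 'zen', 'x86_64_v3', ...]
```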

View File

@@ -2,6 +2,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import os.path
from typing import Optional
import llnl.url

View File

@@ -2,6 +2,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import os.path
import traceback
import llnl.util.tty as tty

View File

@@ -31,7 +31,7 @@
import copy
import datetime
import inspect
import os
import os.path
import re
import string
from typing import List, Optional

View File

@@ -4,7 +4,7 @@
import collections
import itertools
import os
import os.path
from typing import Dict, List, Optional, Tuple
import llnl.util.filesystem as fs

View File

@@ -5,7 +5,7 @@
"""This module implements the classes necessary to generate Tcl
non-hierarchical modules.
"""
import os
import os.path
from typing import Dict, Optional, Tuple
import spack.config

View File

@@ -7,7 +7,6 @@
import base64
import json
import re
import socket
import time
import urllib.error
import urllib.parse
@@ -411,7 +410,7 @@ def wrapper(*args, **kwargs):
for i in range(retries):
try:
return f(*args, **kwargs)
except OSError as e:
except (urllib.error.URLError, TimeoutError) as e:
# Retry on internal server errors, and rate limit errors
# Potentially this could take into account the Retry-After header
# if registries support it
@@ -421,10 +420,9 @@ def wrapper(*args, **kwargs):
and (500 <= e.code < 600 or e.code == 429)
)
or (
isinstance(e, urllib.error.URLError)
and isinstance(e.reason, socket.timeout)
isinstance(e, urllib.error.URLError) and isinstance(e.reason, TimeoutError)
)
or isinstance(e, socket.timeout)
or isinstance(e, TimeoutError)
):
# Exponential backoff
sleep(2**i)
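The retry hunk above sleeps 2**i seconds between attempts. A generic sketch of that loop; which exceptions count as retryable is an assumption for illustration:

```python
import time
import urllib.error

def with_retries(f, retries=5, retryable=(TimeoutError, urllib.error.URLError)):
    for i in range(retries):
        try:
            return f()
        except retryable:
            if i == retries - 1:
                raise
            time.sleep(2**i)  # exponential backoff: 1s, 2s, 4s, ...
```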

View File

@@ -3,6 +3,8 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import llnl.util.lang
import spack.util.spack_yaml as syaml
@llnl.util.lang.lazy_lexicographic_ordering
class OperatingSystem:
@@ -40,4 +42,4 @@ def _cmp_iter(self):
yield self.version
def to_dict(self):
return {"name": self.name, "version": self.version}
return syaml.syaml_dict([("name", self.name), ("version", self.version)])
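Plain dicts preserve insertion order on Python >= 3.7, which is presumably why the syaml_dict wrapper could be dropped here:

```python
d = {"name": "ubuntu", "version": "22.04"}
assert list(d) == ["name", "version"]  # insertion order is guaranteed
```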

View File

@@ -3,52 +3,29 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
# flake8: noqa: F401
"""spack.package defines the public API for Spack packages, by re-exporting useful symbols from
other modules. Packages should import this module, instead of importing from spack.* directly
to ensure forward compatibility with future versions of Spack."""
"""spack.util.package is a set of useful build tools and directives for packages.
Everything in this module is automatically imported into Spack package files.
"""
from os import chdir, environ, getcwd, makedirs, mkdir, remove, removedirs
from shutil import move, rmtree
from spack.error import InstallError, NoHeadersError, NoLibrariesError
# Emulate some shell commands for convenience
env = environ
cd = chdir
pwd = getcwd
# import most common types used in packages
from typing import Dict, List, Optional
from llnl.util.filesystem import (
FileFilter,
FileList,
HeaderList,
LibraryList,
ancestor,
can_access,
change_sed_delimiter,
copy,
copy_tree,
filter_file,
find,
find_all_headers,
find_first,
find_headers,
find_libraries,
find_system_libraries,
force_remove,
force_symlink,
install,
install_tree,
is_exe,
join_path,
keep_modification_time,
library_extensions,
mkdirp,
remove_directory_contents,
remove_linked_tree,
rename,
set_executable,
set_install_permissions,
touch,
working_dir,
)
import llnl.util.filesystem
from llnl.util.filesystem import *
from llnl.util.symlink import symlink
import spack.util.executable
# These props will be overridden when the build env is set up.
from spack.build_environment import MakeExecutable
from spack.build_systems.aspell_dict import AspellDictPackage
@@ -99,24 +76,7 @@
from spack.builder import BaseBuilder
from spack.config import determine_number_of_jobs
from spack.deptypes import ALL_TYPES as all_deptypes
from spack.directives import (
build_system,
can_splice,
conditional,
conflicts,
depends_on,
extends,
license,
maintainers,
patch,
provides,
redistribute,
requires,
resource,
variant,
version,
)
from spack.error import InstallError, NoHeadersError, NoLibrariesError
from spack.directives import *
from spack.install_test import (
SkipTest,
cache_extra_test_sources,
@@ -126,36 +86,28 @@
install_test_root,
test_part,
)
from spack.installer import ExternalPackageError, InstallLockError, UpstreamPackageError
from spack.mixins import filter_compiler_wrappers
from spack.multimethod import default_args, when
from spack.package_base import build_system_flags, env_flags, inject_flags, on_package_attributes
from spack.package_completions import (
bash_completion_path,
fish_completion_path,
zsh_completion_path,
from spack.package_base import (
DependencyConflictError,
build_system_flags,
env_flags,
flatten_dependencies,
inject_flags,
install_dependency_symlinks,
on_package_attributes,
)
from spack.package_completions import *
from spack.phase_callbacks import run_after, run_before
from spack.spec import Spec
from spack.util.executable import Executable, ProcessError, which, which_string
from spack.spec import InvalidSpecDetected, Spec
from spack.util.executable import *
from spack.util.filesystem import fix_darwin_install_name
from spack.variant import any_combination_of, auto_or_any_combination_of, disjoint_sets
from spack.version import Version, ver
# Emulate some shell commands for convenience
env = environ
cd = chdir
pwd = getcwd
# These are just here for editor support; they may be set when the build env is set up.
configure: Executable
make_jobs: int
make: MakeExecutable
ninja: MakeExecutable
python_include: str
python_platlib: str
python_purelib: str
python: Executable
spack_cc: str
spack_cxx: str
spack_f77: str
spack_fc: str
# These are just here for editor support; they will be replaced when the build env
# is set up.
make = MakeExecutable("make", jobs=1)
ninja = MakeExecutable("ninja", jobs=1)
configure = Executable(join_path(".", "configure"))
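For context, a hypothetical minimal package file written against this re-exported API; the name, URL, and checksum are placeholders, not a real package:

```python
from spack.package import *  # noqa: F401,F403

class Example(Package):
    """Toy package illustrating the directive API re-exported by spack.package."""

    homepage = "https://example.com"
    url = "https://example.com/example-1.0.tar.gz"

    version("1.0", sha256="0" * 64)  # placeholder checksum
    depends_on("zlib")

    def install(self, spec, prefix):
        mkdirp(prefix.bin)  # mkdirp is re-exported from llnl.util.filesystem
```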

View File

@@ -30,6 +30,7 @@
import llnl.util.filesystem as fsys
import llnl.util.tty as tty
from llnl.util.lang import classproperty, memoized
from llnl.util.link_tree import LinkTree
import spack.compilers
import spack.config
@@ -766,9 +767,6 @@ def __init__(self, spec):
self.win_rpath = fsys.WindowsSimulatedRPath(self)
super().__init__()
def __getitem__(self, key: str) -> "PackageBase":
return self.spec[key].package
@classmethod
def dependency_names(cls):
return _subkeys(cls.dependencies)
@@ -1098,14 +1096,14 @@ def update_external_dependencies(self, extendee_spec=None):
"""
pass
def detect_dev_src_change(self) -> bool:
def detect_dev_src_change(self):
"""
Method for checking for source code changes to trigger rebuild/reinstall
"""
dev_path_var = self.spec.variants.get("dev_path", None)
_, record = spack.store.STORE.db.query_by_spec_hash(self.spec.dag_hash())
assert dev_path_var and record, "dev_path variant and record must be present"
return fsys.recursive_mtime_greater_than(dev_path_var.value, record.installation_time)
mtime = fsys.last_modification_time_recursive(dev_path_var.value)
return mtime > record.installation_time
def all_urls_for_version(self, version: StandardVersion) -> List[str]:
"""Return all URLs derived from version_urls(), url, urls, and
@@ -1818,6 +1816,12 @@ def _has_make_target(self, target):
Returns:
bool: True if 'target' is found, else False
"""
# Prevent altering LC_ALL for 'make' outside this function
make = copy.deepcopy(self.module.make)
# Use English locale for missing target message comparison
make.add_default_env("LC_ALL", "C")
# Check if we have a Makefile
for makefile in ["GNUmakefile", "Makefile", "makefile"]:
if os.path.exists(makefile):
@@ -1826,12 +1830,6 @@ def _has_make_target(self, target):
tty.debug("No Makefile found in the build directory")
return False
# Prevent altering LC_ALL for 'make' outside this function
make = copy.deepcopy(self.module.make)
# Use English locale for missing target message comparison
make.add_default_env("LC_ALL", "C")
# Check if 'target' is a valid target.
#
# `make -n target` performs a "dry run". It prints the commands that
@@ -2291,6 +2289,19 @@ def rpath_args(self):
build_system_flags = PackageBase.build_system_flags
def install_dependency_symlinks(pkg, spec, prefix):
"""
Execute a dummy install and flatten dependencies.
This routine can be used in a ``package.py`` definition by setting
``install = install_dependency_symlinks``.
This feature comes in handy for creating a common location for the
installation of third-party libraries.
"""
flatten_dependencies(spec, prefix)
def use_cray_compiler_names():
"""Compiler names for builds that rely on cray compiler names."""
os.environ["CC"] = "cc"
@@ -2299,6 +2310,23 @@ def use_cray_compiler_names():
os.environ["F77"] = "ftn"
def flatten_dependencies(spec, flat_dir):
"""Make each dependency of spec present in dir via symlink."""
for dep in spec.traverse(root=False):
name = dep.name
dep_path = spack.store.STORE.layout.path_for_spec(dep)
dep_files = LinkTree(dep_path)
os.mkdir(flat_dir + "/" + name)
conflict = dep_files.find_conflict(flat_dir + "/" + name)
if conflict:
raise DependencyConflictError(conflict)
dep_files.merge(flat_dir + "/" + name)
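A simplified sketch of flatten_dependencies above; real Spack merges link trees and raises on conflicts, while this version just symlinks each dependency prefix by name:

```python
import os

def flatten(dep_prefixes, flat_dir):
    # dep_prefixes: mapping of dependency name -> install prefix (assumption)
    for name, prefix in dep_prefixes.items():
        os.symlink(prefix, os.path.join(flat_dir, name))
```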
def possible_dependencies(
*pkg_or_spec: Union[str, spack.spec.Spec, typing.Type[PackageBase]],
transitive: bool = True,

View File

@@ -4,9 +4,10 @@
import hashlib
import os
import os.path
import pathlib
import sys
from typing import Any, Dict, Optional, Set, Tuple, Type, Union
from typing import Any, Dict, Optional, Tuple, Type, Union
import llnl.util.filesystem
from llnl.url import allowed_archive
@@ -503,38 +504,36 @@ def patch_for_package(self, sha256: str, pkg: "spack.package_base.PackageBase")
patch_dict["sha256"] = sha256
return from_dict(patch_dict, repository=self.repository)
def update_packages(self, pkgs_fullname: Set[str]) -> None:
def update_package(self, pkg_fullname: str) -> None:
"""Update the patch cache.
Args:
pkg_fullname: package to update.
"""
# remove this package from any patch entries that reference it.
if self.index:
empty = []
for sha256, package_to_patch in self.index.items():
remove = []
for fullname, patch_dict in package_to_patch.items():
if patch_dict["owner"] in pkgs_fullname:
remove.append(fullname)
empty = []
for sha256, package_to_patch in self.index.items():
remove = []
for fullname, patch_dict in package_to_patch.items():
if patch_dict["owner"] == pkg_fullname:
remove.append(fullname)
for fullname in remove:
package_to_patch.pop(fullname)
for fullname in remove:
package_to_patch.pop(fullname)
if not package_to_patch:
empty.append(sha256)
if not package_to_patch:
empty.append(sha256)
# remove any entries that are now empty
for sha256 in empty:
del self.index[sha256]
# remove any entries that are now empty
for sha256 in empty:
del self.index[sha256]
# update the index with per-package patch indexes
for pkg_fullname in pkgs_fullname:
pkg_cls = self.repository.get_pkg_class(pkg_fullname)
partial_index = self._index_patches(pkg_cls, self.repository)
for sha256, package_to_patch in partial_index.items():
p2p = self.index.setdefault(sha256, {})
p2p.update(package_to_patch)
pkg_cls = self.repository.get_pkg_class(pkg_fullname)
partial_index = self._index_patches(pkg_cls, self.repository)
for sha256, package_to_patch in partial_index.items():
p2p = self.index.setdefault(sha256, {})
p2p.update(package_to_patch)
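The update logic above boils down to an invalidate-then-merge pass over a two-level index. A distilled sketch with plain dicts:

```python
def update_index(index, owners_to_refresh, fresh_entries):
    # Drop entries owned by the packages being refreshed ...
    for sha256 in list(index):
        pkg_map = index[sha256]
        stale = [n for n, d in pkg_map.items() if d["owner"] in owners_to_refresh]
        for fullname in stale:
            del pkg_map[fullname]
        if not pkg_map:
            del index[sha256]  # ... prune buckets that became empty ...
    # ... then merge in the freshly computed per-package entries.
    for sha256, pkg_map in fresh_entries.items():
        index.setdefault(sha256, {}).update(pkg_map)
```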
def update(self, other: "PatchCache") -> None:
"""Update this cache with the contents of another.

View File

@@ -52,7 +52,8 @@ def use_platform(new_platform):
import spack.config
assert isinstance(new_platform, Platform), f'"{new_platform}" must be an instance of Platform'
msg = '"{0}" must be an instance of Platform'
assert isinstance(new_platform, Platform), msg.format(new_platform)
original_host_fn = host

View File

@@ -1,22 +1,42 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import warnings
from typing import Optional
import archspec.cpu
import llnl.util.lang
import spack.error
class NoPlatformError(spack.error.SpackError):
def __init__(self):
msg = "Could not determine a platform for this machine"
super().__init__(msg)
@llnl.util.lang.lazy_lexicographic_ordering
class Platform:
"""Platform is an abstract class extended by subclasses.
To add a new type of platform (such as cray_xe), create a subclass and set all the
class attributes such as priority, front_target, back_target, front_os, back_os.
Platform also contains a priority class attribute. A lower number signifies higher
priority. These numbers are arbitrarily set and can be changed, though there is
rarely a need to unless a new platform is added and the user wants it to be
detected first.
Targets are created inside the platform subclasses. Most architectures (like linux
and darwin) have only one target family (x86_64), but in the case of Cray
machines there is both a frontend and a backend processor. The user can specify
which targets are present on the front-end and back-end architectures.
Depending on the platform, operating systems are either autodetected or set
explicitly. The user can set the frontend and backend operating systems via the
class attributes front_os and back_os. The operating system is responsible for
compiler detection.
"""
# Subclass sets number. Controls detection order
@@ -25,72 +45,82 @@ class Platform:
#: binary formats used on this platform; used by relocation logic
binary_formats = ["elf"]
default: str
default_os: str
front_end: Optional[str] = None
back_end: Optional[str] = None
default: Optional[str] = None # The default back end target.
front_os: Optional[str] = None
back_os: Optional[str] = None
default_os: Optional[str] = None
reserved_targets = ["default_target", "frontend", "fe", "backend", "be"]
reserved_oss = ["default_os", "frontend", "fe", "backend", "be"]
deprecated_names = ["frontend", "fe", "backend", "be"]
def __init__(self, name):
self.targets = {}
self.operating_sys = {}
self.name = name
self._init_targets()
def add_target(self, name: str, target: archspec.cpu.Microarchitecture) -> None:
"""Used by the platform specific subclass to list available targets.
Raises an error if the platform specifies a name
that is reserved by spack as an alias.
"""
if name in Platform.reserved_targets:
msg = f"{name} is a spack reserved alias and cannot be the name of a target"
raise ValueError(msg)
msg = "{0} is a spack reserved alias and cannot be the name of a target"
raise ValueError(msg.format(name))
self.targets[name] = target
def _init_targets(self):
self.default = archspec.cpu.host().name
def _add_archspec_targets(self):
for name, microarchitecture in archspec.cpu.TARGETS.items():
self.add_target(name, microarchitecture)
def target(self, name):
"""This is a getter method for the target dictionary
that handles defaulting based on the values provided by default,
front-end, and back-end. This can be overwritten
by a subclass for which we want to provide further aliasing options.
"""
# TODO: Check if we can avoid using strings here
name = str(name)
if name in Platform.deprecated_names:
warnings.warn(f"target={name} is deprecated, use target={self.default} instead")
if name in Platform.reserved_targets:
if name == "default_target":
name = self.default
elif name == "frontend" or name == "fe":
name = self.front_end
elif name == "backend" or name == "be":
name = self.back_end
return self.targets.get(name, None)
def add_operating_system(self, name, os_class):
if name in Platform.reserved_oss + Platform.deprecated_names:
msg = f"{name} is a spack reserved alias and cannot be the name of an OS"
raise ValueError(msg)
"""Add the operating_system class object into the
platform.operating_sys dictionary.
"""
if name in Platform.reserved_oss:
msg = "{0} is a spack reserved alias and cannot be the name of an OS"
raise ValueError(msg.format(name))
self.operating_sys[name] = os_class
def default_target(self):
return self.target(self.default)
def default_operating_system(self):
return self.operating_system(self.default_os)
def operating_system(self, name):
if name in Platform.deprecated_names:
warnings.warn(f"os={name} is deprecated, use os={self.default_os} instead")
if name in Platform.reserved_oss:
if name == "default_os":
name = self.default_os
if name == "frontend" or name == "fe":
name = self.front_os
if name == "backend" or name == "be":
name = self.back_os
return self.operating_sys.get(name, None)
def setup_platform_environment(self, pkg, env):
"""Platform-specific build environment modifications.
This method is meant to be overridden by subclasses, when needed.
"""Subclass can override this method if it requires any
platform-specific build environment modifications.
"""
pass
@classmethod
def detect(cls):
"""Returns True if the host platform is detected to be the current Platform class,
False otherwise.
"""Return True if the the host platform is detected to be the current
Platform class, False otherwise.
Derived classes are responsible for implementing this method.
"""
@@ -105,7 +135,11 @@ def __str__(self):
def _cmp_iter(self):
yield self.name
yield self.default
yield self.front_end
yield self.back_end
yield self.default_os
yield self.front_os
yield self.back_os
def targets():
for t in sorted(self.targets.values()):
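Both target() and operating_system() above resolve reserved aliases to configured defaults before the dictionary lookup. A condensed toy version of that resolution:

```python
def resolve(name, aliases, table):
    # Reserved names map to the configured default before the lookup.
    return table.get(aliases.get(name, name))

targets = {"zen2": "<microarchitecture object>"}
aliases = {"default_target": "zen2", "frontend": "zen2", "fe": "zen2"}
print(resolve("default_target", aliases, targets))  # <microarchitecture object>
```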

View File

@@ -1,7 +1,7 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import os.path
def slingshot_network():

View File

@@ -4,6 +4,8 @@
import platform as py_platform
import archspec.cpu
from spack.operating_systems.mac_os import MacOs
from spack.version import Version
@@ -17,8 +19,18 @@ class Darwin(Platform):
def __init__(self):
super().__init__("darwin")
self._add_archspec_targets()
self.default = archspec.cpu.host().name
self.front_end = self.default
self.back_end = self.default
mac_os = MacOs()
self.default_os = str(mac_os)
self.front_os = str(mac_os)
self.back_os = str(mac_os)
self.add_operating_system(str(mac_os), mac_os)
@classmethod

View File

@@ -3,6 +3,8 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import platform
import archspec.cpu
from spack.operating_systems.freebsd import FreeBSDOs
from ._platform import Platform
@@ -13,8 +15,18 @@ class FreeBSD(Platform):
def __init__(self):
super().__init__("freebsd")
self._add_archspec_targets()
# Get specific default
self.default = archspec.cpu.host().name
self.front_end = self.default
self.back_end = self.default
os = FreeBSDOs()
self.default_os = str(os)
self.front_os = self.default_os
self.back_os = self.default_os
self.add_operating_system(str(os), os)
@classmethod

View File

@@ -3,6 +3,8 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import platform
import archspec.cpu
from spack.operating_systems.linux_distro import LinuxDistro
from ._platform import Platform
@@ -13,8 +15,18 @@ class Linux(Platform):
def __init__(self):
super().__init__("linux")
self._add_archspec_targets()
# Get specific default
self.default = archspec.cpu.host().name
self.front_end = self.default
self.back_end = self.default
linux_dist = LinuxDistro()
self.default_os = str(linux_dist)
self.front_os = self.default_os
self.back_os = self.default_os
self.add_operating_system(str(linux_dist), linux_dist)
@classmethod

View File

@@ -16,19 +16,31 @@ class Test(Platform):
if platform.system().lower() == "darwin":
binary_formats = ["macho"]
if platform.machine() == "arm64":
front_end = "aarch64"
back_end = "m1"
default = "m1"
else:
front_end = "x86_64"
back_end = "core2"
default = "core2"
front_os = "redhat6"
back_os = "debian6"
default_os = "debian6"
default = "m1" if platform.machine() == "arm64" else "core2"
def __init__(self, name=None):
name = name or "test"
super().__init__(name)
self.add_operating_system("debian6", spack.operating_systems.OperatingSystem("debian", 6))
self.add_operating_system("redhat6", spack.operating_systems.OperatingSystem("redhat", 6))
self.add_target(self.default, archspec.cpu.TARGETS[self.default])
self.add_target(self.front_end, archspec.cpu.TARGETS[self.front_end])
def _init_targets(self):
targets = ("aarch64", "m1") if platform.machine() == "arm64" else ("x86_64", "core2")
for t in targets:
self.add_target(t, archspec.cpu.TARGETS[t])
self.add_operating_system(
self.default_os, spack.operating_systems.OperatingSystem("debian", 6)
)
self.add_operating_system(
self.front_os, spack.operating_systems.OperatingSystem("redhat", 6)
)
@classmethod
def detect(cls):

View File

@@ -4,6 +4,8 @@
import platform
import archspec.cpu
from spack.operating_systems.windows_os import WindowsOs
from ._platform import Platform
@@ -14,8 +16,18 @@ class Windows(Platform):
def __init__(self):
super().__init__("windows")
self._add_archspec_targets()
self.default = archspec.cpu.host().name
self.front_end = self.default
self.back_end = self.default
windows_os = WindowsOs()
self.default_os = str(windows_os)
self.front_os = str(windows_os)
self.back_os = str(windows_os)
self.add_operating_system(str(windows_os), windows_os)
@classmethod

View File

@@ -2,7 +2,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Classes and functions to manage providers of virtual dependencies"""
from typing import Dict, Iterable, List, Optional, Set, Union
from typing import Dict, List, Optional, Set
import spack.error
import spack.spec
@@ -26,7 +26,7 @@ class _IndexBase:
#: Calling providers_for(spec) will find specs that provide a
#: matching implementation of MPI. Derived class need to construct
#: this attribute according to the semantics above.
providers: Dict[str, Dict["spack.spec.Spec", Set["spack.spec.Spec"]]]
providers: Dict[str, Dict[str, Set[str]]]
def providers_for(self, virtual_spec):
"""Return a list of specs of all packages that provide virtual
@@ -99,56 +99,66 @@ def __init__(
self.repository = repository
self.restrict = restrict
self.providers = {}
if specs:
self.update_packages(specs)
def update_packages(self, specs: Iterable[Union[str, "spack.spec.Spec"]]):
specs = specs or []
for spec in specs:
if not isinstance(spec, spack.spec.Spec):
spec = spack.spec.Spec(spec)
if self.repository.is_virtual_safe(spec.name):
continue
self.update(spec)
def update(self, spec):
"""Update the provider index with additional virtual specs.
Args:
spec: spec potentially providing additional virtual specs
"""
for spec in specs:
if not isinstance(spec, spack.spec.Spec):
spec = spack.spec.Spec(spec)
if not isinstance(spec, spack.spec.Spec):
spec = spack.spec.Spec(spec)
if not spec.name or self.repository.is_virtual_safe(spec.name):
# Only non-virtual packages with name can provide virtual specs.
continue
if not spec.name:
# Empty specs do not have a package
return
pkg_provided = self.repository.get_pkg_class(spec.name).provided
for provider_spec_readonly, provided_specs in pkg_provided.items():
for provided_spec in provided_specs:
# TODO: fix this comment.
# We want satisfaction other than flags
provider_spec = provider_spec_readonly.copy()
provider_spec.compiler_flags = spec.compiler_flags.copy()
msg = "cannot update an index passing the virtual spec '{}'".format(spec.name)
assert not self.repository.is_virtual_safe(spec.name), msg
if spec.intersects(provider_spec, deps=False):
provided_name = provided_spec.name
pkg_provided = self.repository.get_pkg_class(spec.name).provided
for provider_spec_readonly, provided_specs in pkg_provided.items():
for provided_spec in provided_specs:
# TODO: fix this comment.
# We want satisfaction other than flags
provider_spec = provider_spec_readonly.copy()
provider_spec.compiler_flags = spec.compiler_flags.copy()
provider_map = self.providers.setdefault(provided_name, {})
if provided_spec not in provider_map:
provider_map[provided_spec] = set()
if spec.intersects(provider_spec, deps=False):
provided_name = provided_spec.name
if self.restrict:
provider_set = provider_map[provided_spec]
provider_map = self.providers.setdefault(provided_name, {})
if provided_spec not in provider_map:
provider_map[provided_spec] = set()
# If this package existed in the index before,
# need to take the old versions out, as they're
# now more constrained.
old = {s for s in provider_set if s.name == spec.name}
provider_set.difference_update(old)
if self.restrict:
provider_set = provider_map[provided_spec]
# Now add the new version.
provider_set.add(spec)
# If this package existed in the index before,
# need to take the old versions out, as they're
# now more constrained.
old = set([s for s in provider_set if s.name == spec.name])
provider_set.difference_update(old)
else:
# Before putting the spec in the map, constrain
# it so that it provides what was asked for.
constrained = spec.copy()
constrained.constrain(provider_spec)
provider_map[provided_spec].add(constrained)
# Now add the new version.
provider_set.add(spec)
else:
# Before putting the spec in the map, constrain
# it so that it provides what was asked for.
constrained = spec.copy()
constrained.constrain(provider_spec)
provider_map[provided_spec].add(constrained)
def to_json(self, stream=None):
"""Dump a JSON representation of this object.
@@ -183,13 +193,14 @@ def merge(self, other):
spdict[provided_spec] = spdict[provided_spec].union(opdict[provided_spec])
def remove_providers(self, pkgs_fullname: Set[str]):
def remove_provider(self, pkg_name):
"""Remove a provider from the ProviderIndex."""
empty_pkg_dict = []
for pkg, pkg_dict in self.providers.items():
empty_pset = []
for provided, pset in pkg_dict.items():
pset.difference_update(pkgs_fullname)
same_name = set(p for p in pset if p.fullname == pkg_name)
pset.difference_update(same_name)
if not pset:
empty_pset.append(provided)
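A toy model of the providers mapping above (virtual name -> provided spec -> set of providers), showing how the restrict=True path evicts older entries for the same package; the spec strings are illustrative:

```python
providers = {}

def add_provider(virtual, provided_spec, provider):
    pset = providers.setdefault(virtual, {}).setdefault(provided_spec, set())
    pkg = provider.split("@")[0]
    # The restricted path: drop old versions of the same package first.
    pset.difference_update({p for p in pset if p.split("@")[0] == pkg})
    pset.add(provider)

add_provider("mpi", "mpi@3", "openmpi@4.1")
add_provider("mpi", "mpi@3", "openmpi@5.0")  # replaces the openmpi@4.1 entry
print(providers)  # {'mpi': {'mpi@3': {'openmpi@5.0'}}}
```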

View File

@@ -6,7 +6,8 @@
import os
import re
import sys
from typing import Dict, Iterable, List, Optional
from collections import OrderedDict
from typing import List, Optional
import macholib.mach_o
import macholib.MachO
@@ -17,11 +18,28 @@
from llnl.util.lang import memoized
from llnl.util.symlink import readlink, symlink
import spack.error
import spack.store
import spack.util.elf as elf
import spack.util.executable as executable
from .relocate_text import BinaryFilePrefixReplacer, PrefixToPrefix, TextFilePrefixReplacer
from .relocate_text import BinaryFilePrefixReplacer, TextFilePrefixReplacer
class InstallRootStringError(spack.error.SpackError):
def __init__(self, file_path, root_path):
"""Signal that the relocated binary still has the original
Spack's store root string
Args:
file_path (str): path of the binary
root_path (str): original Spack's store root string
"""
super().__init__(
"\n %s \ncontains string\n %s \n"
"after replacing it in rpaths.\n"
"Package should not be relocated.\n Use -a to override." % (file_path, root_path)
)
@memoized
@@ -36,11 +54,144 @@ def _patchelf() -> Optional[executable.Executable]:
return spack.bootstrap.ensure_patchelf_in_path_or_raise()
def _elf_rpaths_for(path):
"""Return the RPATHs for an executable or a library.
Args:
path (str): full path to the executable or library
Return:
RPATHs as a list of strings. Returns an empty array
on ELF parsing errors, or when the ELF file simply
has no rpaths.
"""
return elf.get_rpaths(path) or []
def _make_relative(reference_file, path_root, paths):
"""Return a list where any path in ``paths`` that starts with
``path_root`` is made relative to the directory in which the
reference file is stored.
After a path is made relative it is prefixed with the ``$ORIGIN``
string.
Args:
reference_file (str): file from which the reference directory
is computed
path_root (str): root of the relative paths
paths: (list) paths to be examined
Returns:
List of relative paths
"""
start_directory = os.path.dirname(reference_file)
pattern = re.compile(path_root)
relative_paths = []
for path in paths:
if pattern.match(path):
rel = os.path.relpath(path, start=start_directory)
path = os.path.join("$ORIGIN", rel)
relative_paths.append(path)
return relative_paths
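A runnable illustration of the $ORIGIN rewriting in _make_relative, using startswith in place of the regex match:

```python
import os

def make_relative(reference_file, path_root, paths):
    start = os.path.dirname(reference_file)
    return [
        os.path.join("$ORIGIN", os.path.relpath(p, start)) if p.startswith(path_root) else p
        for p in paths
    ]

print(make_relative("/store/pkg/bin/tool", "/store", ["/store/dep/lib", "/usr/lib"]))
# ['$ORIGIN/../../dep/lib', '/usr/lib']
```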
def _normalize_relative_paths(start_path, relative_paths):
"""Normalize the relative paths with respect to the original path name
of the file (``start_path``).
The paths that are passed to this function existed or were relevant
on another filesystem, so os.path.abspath cannot be used.
A relative path may contain the signifier $ORIGIN. Assuming that
``start_path`` is absolute, this implies that the relative path
(relative to start_path) should be replaced with an absolute path.
Args:
start_path (str): path from which the starting directory
is extracted
relative_paths (list): list of relative paths as obtained by a
call to :ref:`_make_relative`
Returns:
List of normalized paths
"""
normalized_paths = []
pattern = re.compile(re.escape("$ORIGIN"))
start_directory = os.path.dirname(start_path)
for path in relative_paths:
if path.startswith("$ORIGIN"):
sub = pattern.sub(start_directory, path)
path = os.path.normpath(sub)
normalized_paths.append(path)
return normalized_paths
def _decode_macho_data(bytestring):
return bytestring.rstrip(b"\x00").decode("ascii")
def _macho_find_paths(orig_rpaths, deps, idpath, prefix_to_prefix):
def macho_make_paths_relative(path_name, old_layout_root, rpaths, deps, idpath):
"""
Return a dictionary mapping the original rpaths to the relativized rpaths.
This dictionary is used to replace paths in mach-o binaries.
Replace old_dir with relative path from dirname of path name
in rpaths and deps; idpath is replaced with @rpath/libname.
"""
paths_to_paths = dict()
if idpath:
paths_to_paths[idpath] = os.path.join("@rpath", "%s" % os.path.basename(idpath))
for rpath in rpaths:
if re.match(old_layout_root, rpath):
rel = os.path.relpath(rpath, start=os.path.dirname(path_name))
paths_to_paths[rpath] = os.path.join("@loader_path", "%s" % rel)
else:
paths_to_paths[rpath] = rpath
for dep in deps:
if re.match(old_layout_root, dep):
rel = os.path.relpath(dep, start=os.path.dirname(path_name))
paths_to_paths[dep] = os.path.join("@loader_path", "%s" % rel)
else:
paths_to_paths[dep] = dep
return paths_to_paths
def macho_make_paths_normal(orig_path_name, rpaths, deps, idpath):
"""
Return a dictionary mapping the relativized rpaths to the original rpaths.
This dictionary is used to replace paths in mach-o binaries.
Replace '@loader_path' with the dirname of the origname path name
in rpaths and deps; idpath is replaced with the original path name
"""
rel_to_orig = dict()
if idpath:
rel_to_orig[idpath] = orig_path_name
for rpath in rpaths:
if re.match("@loader_path", rpath):
norm = os.path.normpath(
re.sub(re.escape("@loader_path"), os.path.dirname(orig_path_name), rpath)
)
rel_to_orig[rpath] = norm
else:
rel_to_orig[rpath] = rpath
for dep in deps:
if re.match("@loader_path", dep):
norm = os.path.normpath(
re.sub(re.escape("@loader_path"), os.path.dirname(orig_path_name), dep)
)
rel_to_orig[dep] = norm
else:
rel_to_orig[dep] = dep
return rel_to_orig
def macho_find_paths(orig_rpaths, deps, idpath, old_layout_root, prefix_to_prefix):
"""
Inputs
original rpaths from mach-o binaries
@@ -56,12 +207,13 @@ def _macho_find_paths(orig_rpaths, deps, idpath, prefix_to_prefix):
# Sort from longest path to shortest, to ensure we try /foo/bar/baz before /foo/bar
prefix_iteration_order = sorted(prefix_to_prefix, key=len, reverse=True)
for orig_rpath in orig_rpaths:
for old_prefix in prefix_iteration_order:
new_prefix = prefix_to_prefix[old_prefix]
if orig_rpath.startswith(old_prefix):
new_rpath = re.sub(re.escape(old_prefix), new_prefix, orig_rpath)
paths_to_paths[orig_rpath] = new_rpath
break
if orig_rpath.startswith(old_layout_root):
for old_prefix in prefix_iteration_order:
new_prefix = prefix_to_prefix[old_prefix]
if orig_rpath.startswith(old_prefix):
new_rpath = re.sub(re.escape(old_prefix), new_prefix, orig_rpath)
paths_to_paths[orig_rpath] = new_rpath
break
else:
paths_to_paths[orig_rpath] = orig_rpath
@@ -85,7 +237,7 @@ def _macho_find_paths(orig_rpaths, deps, idpath, prefix_to_prefix):
return paths_to_paths
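The sort by descending length above guarantees the most specific prefix wins. A self-contained sketch of that longest-prefix-first rule:

```python
def replace_prefix(path, prefix_to_prefix):
    # Try /foo/bar/baz before /foo/bar, so nested prefixes are not shadowed.
    for old in sorted(prefix_to_prefix, key=len, reverse=True):
        if path.startswith(old):
            return prefix_to_prefix[old] + path[len(old):]
    return path

print(replace_prefix("/foo/bar/baz/lib", {"/foo/bar": "/x", "/foo/bar/baz": "/y"}))  # /y/lib
```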
def _modify_macho_object(cur_path, rpaths, deps, idpath, paths_to_paths):
def modify_macho_object(cur_path, rpaths, deps, idpath, paths_to_paths):
"""
This function is used to make machO buildcaches on macOS by
replacing old paths with new paths using install_name_tool
@@ -128,7 +280,7 @@ def _modify_macho_object(cur_path, rpaths, deps, idpath, paths_to_paths):
install_name_tool(*args, temp_path)
def _macholib_get_paths(cur_path):
def macholib_get_paths(cur_path):
"""Get rpaths, dependent libraries, and library id of mach-o objects."""
headers = []
try:
@@ -196,7 +348,9 @@ def _set_elf_rpaths_and_interpreter(
return None
def relocate_macho_binaries(path_names, prefix_to_prefix):
def relocate_macho_binaries(
path_names, old_layout_root, new_layout_root, prefix_to_prefix, rel, old_prefix, new_prefix
):
"""
Use the macholib python package to get the rpaths, dependent libraries
and library identity for libraries from the MachO object. Modify them
@@ -209,26 +363,88 @@ def relocate_macho_binaries(path_names, prefix_to_prefix):
# Corner case where macho object file ended up in the path name list
if path_name.endswith(".o"):
continue
# get the paths in the old prefix
rpaths, deps, idpath = _macholib_get_paths(path_name)
# get the mapping of paths in the old prefix to the new prefix
paths_to_paths = _macho_find_paths(rpaths, deps, idpath, prefix_to_prefix)
# replace the old paths with new paths
_modify_macho_object(path_name, rpaths, deps, idpath, paths_to_paths)
if rel:
# get the relativized paths
rpaths, deps, idpath = macholib_get_paths(path_name)
# get the file path name in the original prefix
orig_path_name = re.sub(re.escape(new_prefix), old_prefix, path_name)
# get the mapping of the relativized paths to the original
# normalized paths
rel_to_orig = macho_make_paths_normal(orig_path_name, rpaths, deps, idpath)
# replace the relativized paths with normalized paths
modify_macho_object(path_name, rpaths, deps, idpath, rel_to_orig)
# get the normalized paths in the mach-o binary
rpaths, deps, idpath = macholib_get_paths(path_name)
# get the mapping of paths in old prefix to path in new prefix
paths_to_paths = macho_find_paths(
rpaths, deps, idpath, old_layout_root, prefix_to_prefix
)
# replace the old paths with new paths
modify_macho_object(path_name, rpaths, deps, idpath, paths_to_paths)
# get the new normalized path in the mach-o binary
rpaths, deps, idpath = macholib_get_paths(path_name)
# get the mapping of paths to relative paths in the new prefix
paths_to_paths = macho_make_paths_relative(
path_name, new_layout_root, rpaths, deps, idpath
)
# replace the new paths with relativized paths in the new prefix
modify_macho_object(path_name, rpaths, deps, idpath, paths_to_paths)
else:
# get the paths in the old prefix
rpaths, deps, idpath = macholib_get_paths(path_name)
# get the mapping of paths in the old prefix to the new prefix
paths_to_paths = macho_find_paths(
rpaths, deps, idpath, old_layout_root, prefix_to_prefix
)
# replace the old paths with new paths
modify_macho_object(path_name, rpaths, deps, idpath, paths_to_paths)
def relocate_elf_binaries(binaries: Iterable[str], prefix_to_prefix: Dict[str, str]) -> None:
"""Take a list of binaries, and an ordered prefix to prefix mapping, and update the rpaths
accordingly."""
def _transform_rpaths(orig_rpaths, orig_root, new_prefixes):
"""Return an updated list of RPATHs where each entry in the original list
starting with the old root is relocated to another place according to the
mapping passed as argument.
Args:
orig_rpaths (list): list of the original RPATHs
orig_root (str): original root to be substituted
new_prefixes (dict): dictionary that maps the original prefixes to
where they should be relocated
Returns:
List of paths
"""
new_rpaths = []
for orig_rpath in orig_rpaths:
# If the original RPATH doesn't start with the target root
# append it verbatim and proceed
if not orig_rpath.startswith(orig_root):
new_rpaths.append(orig_rpath)
continue
# Otherwise inspect the mapping and transform + append any prefix
# that starts with a registered key
# avoiding duplicates
for old_prefix, new_prefix in new_prefixes.items():
if orig_rpath.startswith(old_prefix):
new_rpath = re.sub(re.escape(old_prefix), new_prefix, orig_rpath)
if new_rpath not in new_rpaths:
new_rpaths.append(new_rpath)
return new_rpaths
def new_relocate_elf_binaries(binaries, prefix_to_prefix):
"""Take a list of binaries, and an ordered dictionary of
prefix to prefix mapping, and update the rpaths accordingly."""
# Transform to binary string
prefix_to_prefix_bin = {
k.encode("utf-8"): v.encode("utf-8") for k, v in prefix_to_prefix.items()
}
prefix_to_prefix = OrderedDict(
(k.encode("utf-8"), v.encode("utf-8")) for (k, v) in prefix_to_prefix.items()
)
for path in binaries:
try:
elf.substitute_rpath_and_pt_interp_in_place_or_raise(path, prefix_to_prefix_bin)
elf.substitute_rpath_and_pt_interp_in_place_or_raise(path, prefix_to_prefix)
except elf.ElfCStringUpdatesFailed as e:
# Fall back to `patchelf --set-rpath ... --set-interpreter ...`
rpaths = e.rpath.new_value.decode("utf-8").split(":") if e.rpath else []
@@ -236,13 +452,105 @@ def relocate_elf_binaries(binaries: Iterable[str], prefix_to_prefix: Dict[str, s
_set_elf_rpaths_and_interpreter(path, rpaths=rpaths, interpreter=interpreter)
def _warn_if_link_cant_be_relocated(link: str, target: str):
def relocate_elf_binaries(
binaries, orig_root, new_root, new_prefixes, rel, orig_prefix, new_prefix
):
"""Relocate the binaries passed as arguments by changing their RPATHs.
Use patchelf to get the original RPATHs and then replace them with
rpaths in the new directory layout.
New RPATHs are determined from a dictionary mapping the prefixes in the
old directory layout to the prefixes in the new directory layout if the
rpath was in the old layout root, i.e. system paths are not replaced.
Args:
binaries (list): list of binaries that might need relocation, located
in the new prefix
orig_root (str): original root to be substituted
new_root (str): new root to be used, only relevant for relative RPATHs
new_prefixes (dict): dictionary that maps the original prefixes to
where they should be relocated
rel (bool): True if the RPATHs are relative, False if they are absolute
orig_prefix (str): prefix where the executable was originally located
new_prefix (str): prefix where we want to relocate the executable
"""
for new_binary in binaries:
orig_rpaths = _elf_rpaths_for(new_binary)
# TODO: Can we deduce `rel` from the original RPATHs?
if rel:
# Get the file path in the original prefix
orig_binary = re.sub(re.escape(new_prefix), orig_prefix, new_binary)
# Get the normalized RPATHs in the old prefix using the file path
# in the orig prefix
orig_norm_rpaths = _normalize_relative_paths(orig_binary, orig_rpaths)
# Get the normalize RPATHs in the new prefix
new_norm_rpaths = _transform_rpaths(orig_norm_rpaths, orig_root, new_prefixes)
# Get the relative RPATHs in the new prefix
new_rpaths = _make_relative(new_binary, new_root, new_norm_rpaths)
# check to see if relative rpaths are changed before rewriting
if sorted(new_rpaths) != sorted(orig_rpaths):
_set_elf_rpaths_and_interpreter(new_binary, new_rpaths)
else:
new_rpaths = _transform_rpaths(orig_rpaths, orig_root, new_prefixes)
_set_elf_rpaths_and_interpreter(new_binary, new_rpaths)
def make_link_relative(new_links, orig_links):
"""Compute the relative target from the original link and
make the new link relative.
Args:
new_links (list): new links to be made relative
orig_links (list): original links
"""
for new_link, orig_link in zip(new_links, orig_links):
target = readlink(orig_link)
relative_target = os.path.relpath(target, os.path.dirname(orig_link))
os.unlink(new_link)
symlink(relative_target, new_link)
def make_macho_binaries_relative(cur_path_names, orig_path_names, old_layout_root):
"""
Replace old RPATHs with paths relative to old_dir in binary files
"""
if not sys.platform == "darwin":
return
for cur_path, orig_path in zip(cur_path_names, orig_path_names):
(rpaths, deps, idpath) = macholib_get_paths(cur_path)
paths_to_paths = macho_make_paths_relative(
orig_path, old_layout_root, rpaths, deps, idpath
)
modify_macho_object(cur_path, rpaths, deps, idpath, paths_to_paths)
def make_elf_binaries_relative(new_binaries, orig_binaries, orig_layout_root):
"""Replace the original RPATHs in the new binaries making them
relative to the original layout root.
Args:
new_binaries (list): new binaries whose RPATHs is to be made relative
orig_binaries (list): original binaries
orig_layout_root (str): path to be used as a base for making
RPATHs relative
"""
for new_binary, orig_binary in zip(new_binaries, orig_binaries):
orig_rpaths = _elf_rpaths_for(new_binary)
if orig_rpaths:
new_rpaths = _make_relative(orig_binary, orig_layout_root, orig_rpaths)
_set_elf_rpaths_and_interpreter(new_binary, new_rpaths)
def warn_if_link_cant_be_relocated(link, target):
if not os.path.isabs(target):
return
tty.warn(f'Symbolic link at "{link}" to "{target}" cannot be relocated')
tty.warn('Symbolic link at "{}" to "{}" cannot be relocated'.format(link, target))
def relocate_links(links: Iterable[str], prefix_to_prefix: Dict[str, str]) -> None:
def relocate_links(links, prefix_to_prefix):
"""Relocate links to a new install prefix."""
regex = re.compile("|".join(re.escape(p) for p in prefix_to_prefix.keys()))
for link in links:
@@ -251,7 +559,7 @@ def relocate_links(links: Iterable[str], prefix_to_prefix: Dict[str, str]) -> No
# No match.
if match is None:
_warn_if_link_cant_be_relocated(link, old_target)
warn_if_link_cant_be_relocated(link, old_target)
continue
new_target = prefix_to_prefix[match.group()] + old_target[match.end() :]
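A compact sketch of the relocate_links loop above: retarget a symlink whose target starts with a known old prefix, and skip (the real code warns about) unmatched absolute targets:

```python
import os
import re

def relocate_link(link, prefix_to_prefix):
    old_target = os.readlink(link)
    regex = re.compile("|".join(re.escape(p) for p in prefix_to_prefix))
    match = regex.match(old_target)
    if match is None:
        return  # the real code warns if the target is absolute
    new_target = prefix_to_prefix[match.group()] + old_target[match.end():]
    os.remove(link)
    os.symlink(new_target, link)
```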
@@ -259,32 +567,32 @@ def relocate_links(links: Iterable[str], prefix_to_prefix: Dict[str, str]) -> No
symlink(new_target, link)
def relocate_text(files: Iterable[str], prefix_to_prefix: PrefixToPrefix) -> None:
def relocate_text(files, prefixes):
"""Relocate text file from the original installation prefix to the
new prefix.
Relocation also affects the the path in Spack's sbang script.
Args:
files: Text files to be relocated
prefix_to_prefix: ordered prefix to prefix mapping
files (list): Text files to be relocated
prefixes (OrderedDict): String prefixes which need to be changed
"""
TextFilePrefixReplacer.from_strings_or_bytes(prefix_to_prefix).apply(files)
TextFilePrefixReplacer.from_strings_or_bytes(prefixes).apply(files)
def relocate_text_bin(binaries: Iterable[str], prefix_to_prefix: PrefixToPrefix) -> List[str]:
def relocate_text_bin(binaries, prefixes):
"""Replace null terminated path strings hard-coded into binaries.
The new install prefix must be shorter than the original one.
Args:
binaries: paths to binaries to be relocated
prefix_to_prefix: ordered prefix to prefix mapping
binaries (list): binaries to be relocated
prefixes (OrderedDict): String prefixes which need to be changed.
Raises:
spack.relocate_text.BinaryTextReplaceError: when the new path is longer than the old path
"""
return BinaryFilePrefixReplacer.from_strings_or_bytes(prefix_to_prefix).apply(binaries)
return BinaryFilePrefixReplacer.from_strings_or_bytes(prefixes).apply(binaries)
def is_macho_magic(magic: bytes) -> bool:
@@ -321,7 +629,7 @@ def _exists_dir(dirname):
return os.path.isdir(dirname)
def is_macho_binary(path: str) -> bool:
def is_macho_binary(path):
try:
with open(path, "rb") as f:
return is_macho_magic(f.read(4))
@@ -345,7 +653,7 @@ def fixup_macos_rpath(root, filename):
return False
# Get Mach-O header commands
(rpath_list, deps, id_dylib) = _macholib_get_paths(abspath)
(rpath_list, deps, id_dylib) = macholib_get_paths(abspath)
# Convert rpaths list to (name -> number of occurrences)
add_rpaths = set()

View File

@@ -6,61 +6,64 @@
paths inside text files and binaries."""
import re
from typing import IO, Dict, Iterable, List, Union
from llnl.util.lang import PatternBytes
from collections import OrderedDict
from typing import Dict, Union
import spack.error
Prefix = Union[str, bytes]
PrefixToPrefix = Union[Dict[str, str], Dict[bytes, bytes]]
def encode_path(p: Prefix) -> bytes:
return p if isinstance(p, bytes) else p.encode("utf-8")
def _prefix_to_prefix_as_bytes(prefix_to_prefix: PrefixToPrefix) -> Dict[bytes, bytes]:
return {encode_path(k): encode_path(v) for (k, v) in prefix_to_prefix.items()}
def _prefix_to_prefix_as_bytes(prefix_to_prefix) -> Dict[bytes, bytes]:
return OrderedDict((encode_path(k), encode_path(v)) for (k, v) in prefix_to_prefix.items())
def utf8_path_to_binary_regex(prefix: str) -> PatternBytes:
def utf8_path_to_binary_regex(prefix: str):
"""Create a binary regex that matches the input path in utf8"""
prefix_bytes = re.escape(prefix).encode("utf-8")
return re.compile(b"(?<![\\w\\-_/])([\\w\\-_]*?)%s([\\w\\-_/]*)" % prefix_bytes)
def _byte_strings_to_single_binary_regex(prefixes: Iterable[bytes]) -> PatternBytes:
def _byte_strings_to_single_binary_regex(prefixes):
all_prefixes = b"|".join(re.escape(p) for p in prefixes)
return re.compile(b"(?<![\\w\\-_/])([\\w\\-_]*?)(%s)([\\w\\-_/]*)" % all_prefixes)
def utf8_paths_to_single_binary_regex(prefixes: Iterable[str]) -> PatternBytes:
def utf8_paths_to_single_binary_regex(prefixes):
"""Create a (binary) regex that matches any input path in utf8"""
return _byte_strings_to_single_binary_regex(p.encode("utf-8") for p in prefixes)
def filter_identity_mappings(prefix_to_prefix: Dict[bytes, bytes]) -> Dict[bytes, bytes]:
def filter_identity_mappings(prefix_to_prefix):
"""Drop mappings that are not changed."""
# NOTE: we don't guard against the following case:
# [/abc/def -> /abc/def, /abc -> /x] *will* be simplified to
# [/abc -> /x], meaning that after this simplification /abc/def will be
# mapped to /x/def instead of /abc/def. This should not be a problem.
return {k: v for k, v in prefix_to_prefix.items() if k != v}
return OrderedDict((k, v) for (k, v) in prefix_to_prefix.items() if k != v)
class PrefixReplacer:
"""Base class for applying a prefix to prefix map to a list of binaries or text files. Derived
classes implement _apply_to_file to do the actual work, which is different when it comes to
"""Base class for applying a prefix to prefix map
to a list of binaries or text files.
Child classes implement _apply_to_file to do the
actual work, which is different when it comes to
binaries and text files."""
def __init__(self, prefix_to_prefix: Dict[bytes, bytes]) -> None:
def __init__(self, prefix_to_prefix: Dict[bytes, bytes]):
"""
Arguments:
prefix_to_prefix: An ordered mapping from prefix to prefix. The order is relevant to
support substring fallbacks, for example
``[("/first/sub", "/x"), ("/first", "/y")]`` will ensure /first/sub is matched and
replaced before /first.
prefix_to_prefix (OrderedDict):
An ordered mapping from prefix to prefix. The order is
relevant to support substring fallbacks, for example
[("/first/sub", "/x"), ("/first", "/y")] will ensure
/first/sub is matched and replaced before /first.
"""
self.prefix_to_prefix = filter_identity_mappings(prefix_to_prefix)
@@ -71,7 +74,7 @@ def is_noop(self) -> bool:
or there are no prefixes to replace."""
return not self.prefix_to_prefix
def apply(self, filenames: Iterable[str]) -> List[str]:
def apply(self, filenames: list):
"""Returns a list of files that were modified"""
changed_files = []
if self.is_noop:
@@ -81,20 +84,17 @@ def apply(self, filenames: Iterable[str]) -> List[str]:
changed_files.append(filename)
return changed_files
def apply_to_filename(self, filename: str) -> bool:
def apply_to_filename(self, filename):
if self.is_noop:
return False
with open(filename, "rb+") as f:
return self.apply_to_file(f)
def apply_to_file(self, f: IO[bytes]) -> bool:
def apply_to_file(self, f):
if self.is_noop:
return False
return self._apply_to_file(f)
def _apply_to_file(self, f: IO) -> bool:
raise NotImplementedError("Derived classes must implement this method")
class TextFilePrefixReplacer(PrefixReplacer):
"""This class applies prefix to prefix mappings for relocation
@@ -112,11 +112,13 @@ def __init__(self, prefix_to_prefix: Dict[bytes, bytes]):
self.regex = _byte_strings_to_single_binary_regex(self.prefix_to_prefix.keys())
@classmethod
def from_strings_or_bytes(cls, prefix_to_prefix: PrefixToPrefix) -> "TextFilePrefixReplacer":
def from_strings_or_bytes(
cls, prefix_to_prefix: Dict[Prefix, Prefix]
) -> "TextFilePrefixReplacer":
"""Create a TextFilePrefixReplacer from an ordered prefix to prefix map."""
return cls(_prefix_to_prefix_as_bytes(prefix_to_prefix))
def _apply_to_file(self, f: IO) -> bool:
def _apply_to_file(self, f):
"""Text replacement implementation simply reads the entire file
in memory and applies the combined regex."""
replacement = lambda m: m.group(1) + self.prefix_to_prefix[m.group(2)] + m.group(3)
@@ -131,12 +133,12 @@ def _apply_to_file(self, f: IO) -> bool:
class BinaryFilePrefixReplacer(PrefixReplacer):
def __init__(self, prefix_to_prefix: Dict[bytes, bytes], suffix_safety_size: int = 7) -> None:
def __init__(self, prefix_to_prefix, suffix_safety_size=7):
"""
prefix_to_prefix: Ordered dictionary where the keys are bytes representing the old prefixes
and the values are the new
suffix_safety_size: in case of null terminated strings, what size of the suffix should
remain to avoid aliasing issues?
prefix_to_prefix (OrderedDict): OrderedDictionary where the keys are
bytes representing the old prefixes and the values are the new
suffix_safety_size (int): in case of null terminated strings, what size
of the suffix should remain to avoid aliasing issues?
"""
assert suffix_safety_size >= 0
super().__init__(prefix_to_prefix)
@@ -144,18 +146,17 @@ def __init__(self, prefix_to_prefix: Dict[bytes, bytes], suffix_safety_size: int
self.regex = self.binary_text_regex(self.prefix_to_prefix.keys(), suffix_safety_size)
@classmethod
def binary_text_regex(
cls, binary_prefixes: Iterable[bytes], suffix_safety_size: int = 7
) -> PatternBytes:
"""Create a regex that looks for exact matches of prefixes, and also tries to match a
C-string type null terminator in a small lookahead window.
def binary_text_regex(cls, binary_prefixes, suffix_safety_size=7):
"""
Create a regex that looks for exact matches of prefixes, and also tries to
match a C-string type null terminator in a small lookahead window.
Arguments:
binary_prefixes: Iterable of byte strings of prefixes to match
suffix_safety_size: Size of the lookahead for null-terminated strings.
binary_prefixes (list): List of byte strings of prefixes to match
suffix_safety_size (int): Size of the lookahead for null-terminated strings.
Returns: compiled regex
"""
# Note: it's important not to use capture groups for the prefix, since it destroys
# performance due to common prefix optimization.
return re.compile(
b"("
+ b"|".join(re.escape(p) for p in binary_prefixes)
@@ -164,34 +165,36 @@ def binary_text_regex(
@classmethod
def from_strings_or_bytes(
cls, prefix_to_prefix: PrefixToPrefix, suffix_safety_size: int = 7
cls, prefix_to_prefix: Dict[Prefix, Prefix], suffix_safety_size: int = 7
) -> "BinaryFilePrefixReplacer":
"""Create a BinaryFilePrefixReplacer from an ordered prefix to prefix map.
Arguments:
prefix_to_prefix: Ordered mapping of prefix to prefix.
suffix_safety_size: Number of bytes to retain at the end of a C-string to avoid binary
string-aliasing issues.
prefix_to_prefix (OrderedDict): Ordered mapping of prefix to prefix.
suffix_safety_size (int): Number of bytes to retain at the end of a C-string
to avoid binary string-aliasing issues.
"""
return cls(_prefix_to_prefix_as_bytes(prefix_to_prefix), suffix_safety_size)
def _apply_to_file(self, f: IO[bytes]) -> bool:
def _apply_to_file(self, f):
"""
Given a file opened in rb+ mode, apply the string replacements as specified by an ordered
dictionary of prefix to prefix mappings. This method takes special care of null-terminated
C-strings. C-string constants are problematic because compilers and linkers optimize
readonly strings for space by aliasing those that share a common suffix (only suffix since
all of them are null terminated). See https://github.com/spack/spack/pull/31739 and
https://github.com/spack/spack/pull/32253 for details. Our logic matches the original
prefix with a ``suffix_safety_size + 1`` lookahead for null bytes. If no null terminator
is found, we simply pad with leading /, assuming that it's a long C-string; the full
C-string after replacement has a large suffix in common with its original value. If there
*is* a null terminator we can do the same as long as the replacement has a sufficiently
long common suffix with the original prefix. As a last resort when the replacement does
not have a long enough common suffix, we can try to shorten the string, but this only
works if the new length is sufficiently short (typically the case when going from large
padding -> normal path) If the replacement string is longer, or all of the above fails,
we error out.
Given a file opened in rb+ mode, apply the string replacements as
specified by an ordered dictionary of prefix to prefix mappings. This
method takes special care of null-terminated C-strings. C-string constants
are problematic because compilers and linkers optimize readonly strings for
space by aliasing those that share a common suffix (only suffix since all
of them are null terminated). See https://github.com/spack/spack/pull/31739
and https://github.com/spack/spack/pull/32253 for details. Our logic matches
the original prefix with a ``suffix_safety_size + 1`` lookahead for null bytes.
If no null terminator is found, we simply pad with leading /, assuming that
it's a long C-string; the full C-string after replacement has a large suffix
in common with its original value.
If there *is* a null terminator we can do the same as long as the replacement
has a sufficiently long common suffix with the original prefix.
As a last resort when the replacement does not have a long enough common suffix,
we can try to shorten the string, but this only works if the new length is
sufficiently short (typically the case when going from large padding -> normal path)
If the replacement string is longer, or all of the above fails, we error out.
Arguments:
f: file opened in rb+ mode
@@ -201,10 +204,11 @@ def _apply_to_file(self, f: IO[bytes]) -> bool:
"""
assert f.tell() == 0
# We *could* read binary data in chunks to avoid loading all in memory, but it's nasty to
# deal with matches across boundaries, so let's stick to something simple.
# We *could* read binary data in chunks to avoid loading all in memory,
# but it's nasty to deal with matches across boundaries, so let's stick to
# something simple.
modified = False
modified = True
for match in self.regex.finditer(f.read()):
# The matching prefix (old) and its replacement (new)
@@ -214,7 +218,8 @@ def _apply_to_file(self, f: IO[bytes]) -> bool:
# Did we find a trailing null within a N + 1 bytes window after the prefix?
null_terminated = match.end(0) > match.end(1)
# Suffix string length, excluding the null byte. Only makes sense if null_terminated
# Suffix string length, excluding the null byte
# Only makes sense if null_terminated
suffix_strlen = match.end(0) - match.end(1) - 1
# How many bytes are we shrinking our string?
@@ -224,9 +229,9 @@ def _apply_to_file(self, f: IO[bytes]) -> bool:
if bytes_shorter < 0:
raise CannotGrowString(old, new)
# If we don't know whether this is a null terminated C-string (we're looking only N + 1
# bytes ahead), or if it is and we have a common suffix, we can simply pad with leading
# dir separators.
# If we don't know whether this is a null terminated C-string (we're looking
# only N + 1 bytes ahead), or if it is and we have a common suffix, we can
# simply pad with leading dir separators.
elif (
not null_terminated
or suffix_strlen >= self.suffix_safety_size # == is enough, but let's be defensive
@@ -235,9 +240,9 @@ def _apply_to_file(self, f: IO[bytes]) -> bool:
):
replacement = b"/" * bytes_shorter + new
# If it *was* null terminated, all that matters is that we can leave N bytes of old
# suffix in place. Note that > is required since we also insert an additional null
# terminator.
# If it *was* null terminated, all that matters is that we can leave N bytes
# of old suffix in place. Note that > is required since we also insert an
# additional null terminator.
elif bytes_shorter > self.suffix_safety_size:
replacement = new + match.group(2) # includes the trailing null
@@ -252,6 +257,22 @@ def _apply_to_file(self, f: IO[bytes]) -> bool:
return modified
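Putting the branches above together, here is a self-contained sketch of the decision procedure. It is simplified on purpose: the real code keeps the matched old suffix (via match.group(2)) when shrinking, and the names SAFETY and choose_replacement are illustrative:

SAFETY = 7  # stand-in for suffix_safety_size

def choose_replacement(old: bytes, new: bytes, null_terminated: bool, suffix_strlen: int) -> bytes:
    shorter = len(old) - len(new)
    if shorter < 0:
        raise ValueError(f"cannot grow {old!r} into {new!r}")  # CannotGrowString
    if not null_terminated or suffix_strlen >= SAFETY:
        # Pad with leading dir separators; total length is preserved.
        return b"/" * shorter + new
    if shorter > SAFETY:
        # Shrink; the real code appends the matched old suffix and its null here.
        return new + b"\0"
    raise ValueError("cannot shrink C-string safely")  # CannotShrinkCString

print(choose_replacement(b"/tmp/pad/pad/pkg", b"/opt/pkg", False, 0))
# -> b'/////////opt/pkg' (same 16-byte length as the original)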
class BinaryStringReplacementError(spack.error.SpackError):
def __init__(self, file_path, old_len, new_len):
"""The size of the file changed after binary path substitution
Args:
file_path (str): file with changing size
old_len (int): original length of the file
new_len (int): length of the file after substitution
"""
super().__init__(
"Doing a binary string replacement in %s failed.\n"
"The size of the file changed from %s to %s\n"
"when it should have remanined the same." % (file_path, old_len, new_len)
)
class BinaryTextReplaceError(spack.error.SpackError):
def __init__(self, msg):
msg += (
@@ -263,16 +284,17 @@ def __init__(self, msg):
class CannotGrowString(BinaryTextReplaceError):
def __init__(self, old, new):
super().__init__(
f"Cannot replace {old!r} with {new!r} because the new prefix is longer."
)
msg = "Cannot replace {!r} with {!r} because the new prefix is longer.".format(old, new)
super().__init__(msg)
class CannotShrinkCString(BinaryTextReplaceError):
def __init__(self, old, new, full_old_string):
# Just interpolate binary string to not risk issues with invalid unicode, which would be
# really bad user experience: error in error. We have no clue if we actually deal with a
# real C-string nor what encoding it has.
super().__init__(
f"Cannot replace {old!r} with {new!r} in the C-string {full_old_string!r}."
# Just interpolate binary string to not risk issues with invalid
# unicode, which would be really bad user experience: error in error.
# We have no clue if we actually deal with a real C-string nor what
# encoding it has.
msg = "Cannot replace {!r} with {!r} in the C-string {!r}.".format(
old, new, full_old_string
)
super().__init__(msg)
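The repr-interpolation in these messages matters: formatting raw bytes with {!r} can never raise, whereas decoding them might. A tiny illustration with a hypothetical value:

blob = b"/old/prefix\xff\x00"
print("offending C-string: {!r}".format(blob))  # safe for any byte content
# blob.decode("utf-8") would raise UnicodeDecodeError on the 0xff byte,
# producing an error while reporting an error.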

View File

@@ -14,6 +14,7 @@
import inspect
import itertools
import os
import os.path
import random
import re
import shutil
@@ -465,7 +466,7 @@ def read(self, stream):
"""Read this index from a provided file object."""
@abc.abstractmethod
def update(self, pkgs_fullname: Set[str]):
def update(self, pkg_fullname):
"""Update the index in memory with information about a package."""
@abc.abstractmethod
@@ -482,8 +483,8 @@ def _create(self):
def read(self, stream):
self.index = spack.tag.TagIndex.from_json(stream, self.repository)
def update(self, pkgs_fullname: Set[str]):
self.index.update_packages({p.split(".")[-1] for p in pkgs_fullname})
def update(self, pkg_fullname):
self.index.update_package(pkg_fullname.split(".")[-1])
def write(self, stream):
self.index.to_json(stream)
@@ -498,14 +499,15 @@ def _create(self):
def read(self, stream):
self.index = spack.provider_index.ProviderIndex.from_json(stream, self.repository)
def update(self, pkgs_fullname: Set[str]):
def update(self, pkg_fullname):
name = pkg_fullname.split(".")[-1]
is_virtual = (
lambda name: not self.repository.exists(name)
or self.repository.get_pkg_class(name).virtual
not self.repository.exists(name) or self.repository.get_pkg_class(name).virtual
)
non_virtual_pkgs_fullname = {p for p in pkgs_fullname if not is_virtual(p.split(".")[-1])}
self.index.remove_providers(non_virtual_pkgs_fullname)
self.index.update_packages(non_virtual_pkgs_fullname)
if is_virtual:
return
self.index.remove_provider(pkg_fullname)
self.index.update(pkg_fullname)
def write(self, stream):
self.index.to_json(stream)
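The hunk above moves the provider index from per-package updates to a batch API that first filters out virtual packages. A rough sketch of that filtering step against a stub repository (class and function names here are illustrative, not Spack's):

class StubRepo:
    def __init__(self, virtuals):
        self.virtuals = virtuals

    def is_virtual(self, name):
        return name in self.virtuals

def non_virtual(pkgs_fullname, repo):
    # The unqualified name is the last dotted component, e.g. "builtin.zlib" -> "zlib".
    return {p for p in pkgs_fullname if not repo.is_virtual(p.split(".")[-1])}

repo = StubRepo({"mpi"})
print(non_virtual({"builtin.mpi", "builtin.openmpi"}, repo))  # -> {'builtin.openmpi'}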
@@ -530,8 +532,8 @@ def read(self, stream):
def write(self, stream):
self.index.to_json(stream)
def update(self, pkgs_fullname: Set[str]):
self.index.update_packages(pkgs_fullname)
def update(self, pkg_fullname):
self.index.update_package(pkg_fullname)
class RepoIndex:
@@ -621,7 +623,9 @@ def _build_index(self, name: str, indexer: Indexer):
if new_index_mtime != index_mtime:
needs_update = self.checker.modified_since(new_index_mtime)
indexer.update({f"{self.namespace}.{pkg_name}" for pkg_name in needs_update})
for pkg_name in needs_update:
indexer.update(f"{self.namespace}.{pkg_name}")
indexer.write(new)
return indexer.index
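The pattern in _build_index is incremental: when the cached index is older than the repo, only packages modified since the index mtime are re-fed to the indexer. A minimal sketch of that staleness check, assuming a hypothetical name-to-directory layout:

import os

def packages_needing_update(pkg_dirs, index_mtime):
    # pkg_dirs maps package name -> directory containing its package.py; a
    # package needs re-indexing if its package.py is newer than the index.
    return {
        name
        for name, path in pkg_dirs.items()
        if os.path.getmtime(os.path.join(path, "package.py")) > index_mtime
    }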

Some files were not shown because too many files have changed in this diff Show More