Compare commits
41 Commits
hs/revert/... packages/p
| Author | SHA1 | Date |
|---|---|---|
| | 559cda89b7 | |
| | ff74147107 | |
| | 7547d8a91e | |
| | d394e54925 | |
| | 5f467bb577 | |
| | 716639bbdd | |
| | 16d8a210b3 | |
| | b6783bbfa1 | |
| | 962829f72e | |
| | 31ff80ddca | |
| | 396785477c | |
| | 752b72c304 | |
| | 38d77570b4 | |
| | d8885b28fa | |
| | abd3487570 | |
| | 0d760a5fd8 | |
| | dde91ae181 | |
| | 590dbf67f3 | |
| | d199738f31 | |
| | f55f829437 | |
| | 295f3ff915 | |
| | a0ad02c247 | |
| | a21d314ba7 | |
| | a4ad8c8174 | |
| | aa3ee3fa2a | |
| | a8584d5eb4 | |
| | 26f7b2c066 | |
| | 3a715c3e07 | |
| | 963519d2b2 | |
| | 34efcb686c | |
| | 5016084213 | |
| | 5a04e84097 | |
| | ec34e88d79 | |
| | 31fa12ebd3 | |
| | ecf414ed07 | |
| | 119bec391e | |
| | d5c0ace993 | |
| | d6bbd8f758 | |
| | f74d51bf6e | |
| | 821ebee53c | |
| | 9dada76d34 | |
@@ -125,6 +125,8 @@ are stored in ``$spack/var/spack/cache``. These are stored indefinitely
by default. Can be purged with :ref:`spack clean --downloads
<cmd-spack-clean>`.

.. _Misc Cache:

--------------------
``misc_cache``
--------------------

@@ -334,3 +336,52 @@ create a new alias called ``inst`` that will always call ``install -v``:

   aliases:
     inst: install -v

-------------------------------
``concretization_cache:enable``
-------------------------------

When set to ``true``, Spack will use a cache of solver outputs from
successful concretization runs. When enabled, Spack will check the concretization
cache prior to running the solver. If a previous request to solve a given
problem is present in the cache, Spack will load the concrete specs and other
solver data from the cache rather than running the solver. Specs not previously
concretized will be added to the cache on a successful solve. The cache additionally
holds solver statistics, so commands like ``spack solve`` will still return information
about the run that produced a given solver result.

This cache is a subcache of the :ref:`Misc Cache` and as such will be cleaned when the Misc
Cache is cleaned.

When ``false`` or omitted, all concretization requests will be performed from scratch.

----------------------------
``concretization_cache:url``
----------------------------

Path to the location where Spack will root the concretization cache. Currently this only supports
paths on the local filesystem.

Default location is under the :ref:`Misc Cache` at: ``$misc_cache/concretization``

------------------------------------
``concretization_cache:entry_limit``
------------------------------------

Sets a limit on the number of concretization results that Spack will cache. The limit is evaluated
after each concretization run; if Spack has stored more results than the limit allows, the
oldest concretization results are pruned until 10% of the limit has been removed.

Setting this value to 0 disables the automatic pruning. It is expected that users will be
responsible for maintaining this cache.

-----------------------------------
``concretization_cache:size_limit``
-----------------------------------

Sets a limit on the size of the concretization cache in bytes. The limit is evaluated
after each concretization run; if the cache has grown larger than the limit allows, the
oldest concretization results are pruned until 10% of the limit has been freed.

Setting this value to 0 disables the automatic pruning. It is expected that users will be
responsible for maintaining this cache.
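Taken together, these options live under ``config`` in ``config.yaml``. A minimal
sketch (the values below are illustrative, not defaults):

.. code-block:: yaml

   config:
     concretization_cache:
       enable: true
       url: ~/.spack/concretization-cache   # any local filesystem path
       entry_limit: 500
       size_limit: 300000000                # bytes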
@@ -30,7 +30,7 @@ than always choosing the latest versions or default variants.

.. note::

   As a rule of thumb: requirements + constraints > reuse > preferences > defaults.
   As a rule of thumb: requirements + constraints > strong preferences > reuse > preferences > defaults.

The following set of criteria (from lowest to highest precedence) explain
common cases where concretization output may seem surprising at first.

@@ -56,7 +56,19 @@ common cases where concretization output may seem surprising at first.

      concretizer:
        reuse: dependencies  # other options are 'true' and 'false'

3. :ref:`Package requirements <package-requirements>` configured in ``packages.yaml``,
3. :ref:`Strong preferences <package-strong-preferences>` configured in ``packages.yaml``
   are higher priority than reuse, and can be used to strongly prefer a specific version
   or variant, without erroring out if it's not possible. Strong preferences are specified
   as follows:

   .. code-block:: yaml

      packages:
        foo:
          prefer:
          - "@1.1: ~mpi"

4. :ref:`Package requirements <package-requirements>` configured in ``packages.yaml``,
   and constraints from the command line as well as ``package.py`` files override all
   of the above. Requirements are specified as follows:

@@ -66,6 +78,8 @@ common cases where concretization output may seem surprising at first.

      packages:
        foo:
          require:
          - "@1.2: +mpi"
          conflicts:
          - "@1.4"

Requirements and constraints restrict the set of possible solutions, while reuse
behavior and preferences influence what an optimal solution looks like.
@@ -486,6 +486,8 @@ present. For instance with a configuration like:

you will use ``mvapich2~cuda %gcc`` as an ``mpi`` provider.

.. _package-strong-preferences:

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Conflicts and strong preferences
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -7,6 +7,7 @@
import fnmatch
import glob
import hashlib
import io
import itertools
import numbers
import os
@@ -20,6 +21,7 @@
from contextlib import contextmanager
from itertools import accumulate
from typing import (
    IO,
    Callable,
    Deque,
    Dict,
@@ -2881,6 +2883,20 @@ def keep_modification_time(*filenames):
        os.utime(f, (os.path.getatime(f), mtime))


@contextmanager
def temporary_file_position(stream):
    # Remember the position and restore it even if the body raises.
    orig_pos = stream.tell()
    try:
        yield
    finally:
        stream.seek(orig_pos)


@contextmanager
def current_file_position(stream: IO[str], loc: int, relative_to=io.SEEK_CUR):
    with temporary_file_position(stream):
        stream.seek(loc, relative_to)
        yield


@contextmanager
def temporary_dir(
    suffix: Optional[str] = None, prefix: Optional[str] = None, dir: Optional[str] = None
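A minimal usage sketch of the new helpers (assuming Spack's ``llnl.util.filesystem``
is importable; the stream contents are illustrative):

```python
import io

from llnl.util.filesystem import current_file_position

stream = io.StringIO("header\nbody\n")
stream.readline()  # cursor is now past "header\n"

# Peek at the start of the stream without losing our place.
with current_file_position(stream, 0, io.SEEK_SET):
    assert stream.readline() == "header\n"

# The original position was restored on exit.
assert stream.readline() == "body\n"
```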
@@ -278,17 +278,24 @@ def initconfig_hardware_entries(self):
            entries.append("# ROCm")
            entries.append("#------------------{0}\n".format("-" * 30))

            # Explicitly setting HIP_ROOT_DIR may be a patch that is no longer necessary
            entries.append(cmake_cache_path("HIP_ROOT_DIR", "{0}".format(spec["hip"].prefix)))
            llvm_bin = spec["llvm-amdgpu"].prefix.bin
            llvm_prefix = spec["llvm-amdgpu"].prefix
            # Some ROCm systems seem to point to /<path>/rocm-<ver>/ and
            # others point to /<path>/rocm-<ver>/llvm
            if os.path.basename(os.path.normpath(llvm_prefix)) != "llvm":
                llvm_bin = os.path.join(llvm_prefix, "llvm/bin/")
            entries.append(
                cmake_cache_filepath("CMAKE_HIP_COMPILER", os.path.join(llvm_bin, "clang++"))
            )
            if spec.satisfies("^blt@0.7:"):
                rocm_root = os.path.dirname(spec["llvm-amdgpu"].prefix)
                entries.append(cmake_cache_path("ROCM_PATH", rocm_root))
            else:
                # Explicitly setting HIP_ROOT_DIR may be a patch that is no longer necessary
                entries.append(cmake_cache_path("HIP_ROOT_DIR", "{0}".format(spec["hip"].prefix)))
                llvm_bin = spec["llvm-amdgpu"].prefix.bin
                llvm_prefix = spec["llvm-amdgpu"].prefix
                # Some ROCm systems seem to point to /<path>/rocm-<ver>/ and
                # others point to /<path>/rocm-<ver>/llvm
                if os.path.basename(os.path.normpath(llvm_prefix)) != "llvm":
                    llvm_bin = os.path.join(llvm_prefix, "llvm/bin/")
                entries.append(
                    cmake_cache_filepath(
                        "CMAKE_HIP_COMPILER", os.path.join(llvm_bin, "amdclang++")
                    )
                )

        archs = self.spec.variants["amdgpu_target"].value
        if archs[0] != "none":
            arch_str = ";".join(archs)
@@ -48,6 +48,7 @@
import spack.store
import spack.url
import spack.util.environment
import spack.util.executable
import spack.util.path
import spack.util.web
import spack.variant
@@ -1369,6 +1370,14 @@ def prefix(self):
    def home(self):
        return self.prefix

    @property
    def command(self) -> spack.util.executable.Executable:
        """Returns the main executable for this package."""
        path = os.path.join(self.home.bin, self.spec.name)
        if fsys.is_exe(path):
            return spack.util.executable.Executable(path)
        raise RuntimeError(f"Unable to locate {self.spec.name} command in {self.home.bin}")

    @property  # type: ignore[misc]
    @memoized
    def compiler(self):
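With ``command`` defined on the package base class, a query through a concrete
spec might look like this (a sketch; the package name is illustrative):

```python
# Assumes `spec` is a concrete spec with cmake somewhere in its graph.
cmake = spec["cmake"].command  # forwarded to PackageBase.command -> Executable
cmake("--version")             # run it like any Spack Executable
```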
@@ -108,6 +108,8 @@ def _get_user_cache_path():
#: transient caches for Spack data (virtual cache, patch sha256 lookup, etc.)
default_misc_cache_path = os.path.join(user_cache_path, "cache")

#: concretization cache for Spack concretizations
default_conc_cache_path = os.path.join(default_misc_cache_path, "concretization")

# Below paths pull configuration from the host environment.
#

@@ -58,6 +58,15 @@
            {"type": "string"},  # deprecated
        ]
    },
    "concretization_cache": {
        "type": "object",
        "properties": {
            "enable": {"type": "boolean"},
            "url": {"type": "string"},
            "entry_limit": {"type": "integer", "minimum": 0},
            "size_limit": {"type": "integer", "minimum": 0},
        },
    },
    "install_hash_length": {"type": "integer", "minimum": 1},
    "install_path_scheme": {"type": "string"},  # deprecated
    "build_stage": {
@@ -5,9 +5,12 @@
import collections.abc
import copy
import enum
import errno
import functools
import hashlib
import io
import itertools
import json
import os
import pathlib
import pprint
@@ -17,12 +20,25 @@
import typing
import warnings
from contextlib import contextmanager
from typing import Callable, Dict, Iterator, List, NamedTuple, Optional, Set, Tuple, Type, Union
from typing import (
    IO,
    Callable,
    Dict,
    Iterator,
    List,
    NamedTuple,
    Optional,
    Set,
    Tuple,
    Type,
    Union,
)

import archspec.cpu

import llnl.util.lang
import llnl.util.tty as tty
from llnl.util.filesystem import current_file_position
from llnl.util.lang import elide_list

import spack
@@ -37,12 +53,14 @@
import spack.package_base
import spack.package_prefs
import spack.patch
import spack.paths
import spack.platforms
import spack.repo
import spack.solver.splicing
import spack.spec
import spack.store
import spack.util.crypto
import spack.util.hash
import spack.util.libc
import spack.util.module_cmd as md
import spack.util.path
@@ -51,6 +69,7 @@
import spack.version as vn
import spack.version.git_ref_lookup
from spack import traverse
from spack.util.file_cache import FileCache

from .core import (
    AspFunction,
@@ -538,6 +557,363 @@ def format_unsolved(unsolved_specs):
            msg += "\n\t(No candidate specs from solver)"
        return msg
    def to_dict(self, test: bool = False) -> dict:
        """Produces dict representation of Result object

        Does not include anything related to unsatisfiability as we
        are only interested in storing satisfiable results
        """
        serial_node_arg = (
            lambda node_dict: f"""{{"id": "{node_dict.id}", "pkg": "{node_dict.pkg}"}}"""
        )
        ret = dict()
        ret["asp"] = self.asp
        ret["criteria"] = self.criteria
        ret["optimal"] = self.optimal
        ret["warnings"] = self.warnings
        ret["nmodels"] = self.nmodels
        ret["abstract_specs"] = [str(x) for x in self.abstract_specs]
        ret["satisfiable"] = self.satisfiable
        serial_answers = []
        for answer in self.answers:
            serial_answer = answer[:2]
            serial_answer_dict = {}
            for node, spec in answer[2].items():
                serial_answer_dict[serial_node_arg(node)] = spec.to_dict()
            serial_answer = serial_answer + (serial_answer_dict,)
            serial_answers.append(serial_answer)
        ret["answers"] = serial_answers
        ret["specs_by_input"] = {}
        input_specs = {} if not self.specs_by_input else self.specs_by_input
        for input, spec in input_specs.items():
            ret["specs_by_input"][str(input)] = spec.to_dict()
        return ret
    @staticmethod
    def from_dict(obj: dict):
        """Returns a Result object built from a compatible dictionary"""

        def _dict_to_node_argument(dict):
            id = dict["id"]
            pkg = dict["pkg"]
            return NodeArgument(id=id, pkg=pkg)

        def _str_to_spec(spec_str):
            return spack.spec.Spec(spec_str)

        def _dict_to_spec(spec_dict):
            loaded_spec = spack.spec.Spec.from_dict(spec_dict)
            _ensure_external_path_if_external(loaded_spec)
            spack.spec.Spec.ensure_no_deprecated(loaded_spec)
            return loaded_spec

        asp = obj.get("asp")
        spec_list = obj.get("abstract_specs")
        if not spec_list:
            raise RuntimeError("Invalid json for concretization Result object")
        spec_list = [_str_to_spec(x) for x in spec_list]
        result = Result(spec_list, asp)
        result.criteria = obj.get("criteria")
        result.optimal = obj.get("optimal")
        result.warnings = obj.get("warnings")
        result.nmodels = obj.get("nmodels")
        result.satisfiable = obj.get("satisfiable")
        result._unsolved_specs = []
        answers = []
        for answer in obj.get("answers", []):
            loaded_answer = answer[:2]
            answer_node_dict = {}
            for node, spec in answer[2].items():
                answer_node_dict[_dict_to_node_argument(json.loads(node))] = _dict_to_spec(spec)
            loaded_answer.append(answer_node_dict)
            answers.append(tuple(loaded_answer))
        result.answers = answers
        result._concrete_specs_by_input = {}
        result._concrete_specs = []
        for input, spec in obj.get("specs_by_input", {}).items():
            result._concrete_specs_by_input[_str_to_spec(input)] = _dict_to_spec(spec)
            result._concrete_specs.append(_dict_to_spec(spec))
        return result
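For orientation, a cache entry as written by ``store`` below is a single JSON file
with roughly this shape (a sketch; field values illustrative, spec dictionaries
elided):

```python
# Sketch of one serialized cache entry (illustrative, not verbatim):
{
    "results": {  # Result.to_dict() output
        "asp": None,
        "criteria": ["..."],
        "optimal": True,
        "warnings": None,
        "nmodels": 1,
        "abstract_specs": ["hdf5"],
        "satisfiable": True,
        # each answer: (cost, index, {serialized NodeArgument -> spec dict})
        "answers": [[[0, 0], 0, {'{"id": "0", "pkg": "hdf5"}': {"...": "..."}}]],
        "specs_by_input": {"hdf5": {"...": "..."}},
    },
    "statistics": ["..."],  # clingo solver statistics for the run
}
```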
class ConcretizationCache:
    """Store for Spack concretization results and statistics

    Serializes solver result objects and statistics to json and stores
    at a given endpoint in a cache associated by the sha256 of the
    asp problem and the involved control files.
    """

    def __init__(self, root: Union[str, None] = None):
        root = root or spack.config.get(
            "config:concretization_cache:url", spack.paths.default_conc_cache_path
        )
        self.root = pathlib.Path(spack.util.path.canonicalize_path(root))
        self._fc = FileCache(self.root)
        self._cache_manifest = ".cache_manifest"
        self._manifest_queue: List[Tuple[pathlib.Path, int]] = []
    def cleanup(self):
        """Prunes the concretization cache according to configured size and entry
        count limits. Cleanup is done in FIFO ordering."""
        # TODO: determine a better default
        entry_limit = spack.config.get("config:concretization_cache:entry_limit", 1000)
        bytes_limit = spack.config.get("config:concretization_cache:size_limit", 3e8)
        # lock the entire buildcache as we're removing a lot of data from the
        # manifest and cache itself
        with self._fc.read_transaction(self._cache_manifest) as f:
            count, cache_bytes = self._extract_cache_metadata(f)
            if not count or not cache_bytes:
                return
            entry_count = int(count)
            manifest_bytes = int(cache_bytes)
            # move beyond the metadata entry
            f.readline()
            if entry_count > entry_limit and entry_limit > 0:
                with self._fc.write_transaction(self._cache_manifest) as (old, new):
                    # prune the oldest 10% or until we have removed 10% of
                    # total bytes starting from the oldest entry
                    # TODO: make this configurable?
                    prune_count = entry_limit // 10
                    lines_to_prune = f.readlines(prune_count)
                    for i, line in enumerate(lines_to_prune):
                        sha, cache_entry_bytes = self._parse_manifest_entry(line)
                        if sha and cache_entry_bytes:
                            cache_path = self._cache_path_from_hash(sha)
                            if self._fc.remove(cache_path):
                                entry_count -= 1
                                manifest_bytes -= int(cache_entry_bytes)
                        else:
                            tty.warn(
                                f"Invalid concretization cache entry: '{line}' on line: {i+1}"
                            )
                    self._write_manifest(f, entry_count, manifest_bytes)

            elif manifest_bytes > bytes_limit and bytes_limit > 0:
                with self._fc.write_transaction(self._cache_manifest) as (old, new):
                    # take 10% of current size off
                    prune_amount = bytes_limit // 10
                    total_pruned = 0
                    i = 0
                    while total_pruned < prune_amount:
                        sha, manifest_cache_bytes = self._parse_manifest_entry(f.readline())
                        if sha and manifest_cache_bytes:
                            entry_bytes = int(manifest_cache_bytes)
                            cache_path = self.root / sha[:2] / sha
                            if self._safe_remove(cache_path):
                                entry_count -= 1
                                manifest_bytes -= entry_bytes
                                total_pruned += entry_bytes
                        else:
                            tty.warn(
                                "Invalid concretization cache entry "
                                f"'{sha} {manifest_cache_bytes}' on line: {i}"
                            )
                        i += 1
                    self._write_manifest(f, entry_count, manifest_bytes)
        for cache_dir in self.root.iterdir():
            if cache_dir.is_dir() and not any(cache_dir.iterdir()):
                self._safe_remove(cache_dir)
    def cache_entries(self):
        """Generator producing cache entries"""
        for cache_dir in self.root.iterdir():
            # ensure the component is a cache entry directory,
            # not the metadata file
            if cache_dir.is_dir():
                for cache_entry in cache_dir.iterdir():
                    if not cache_entry.is_dir():
                        yield cache_entry
                    else:
                        raise RuntimeError(
                            "Improperly formed concretization cache. "
                            f"Directory {cache_entry.name} is improperly located "
                            "within the concretization cache."
                        )
    def _parse_manifest_entry(self, line):
        """Returns parsed manifest entry lines
        with handling for invalid reads."""
        if line:
            cache_values = line.strip("\n").split(" ")
            if len(cache_values) < 2:
                tty.warn(f"Invalid cache entry at {line}")
                return None, None
            # well-formed entry: "<sha256> <bytes>"
            return cache_values[0], cache_values[1]
        return None, None
    def _write_manifest(self, manifest_file, entry_count, entry_bytes):
        """Writes a new concretization cache manifest file.

        Arguments:
            manifest_file: IO stream opened for reading and writing, wrapping the
                manifest file, with its cursor at calltime set to the location
                where the manifest should be truncated
            entry_count: new total entry count
            entry_bytes: new total entry bytes count
        """
        persisted_entries = manifest_file.readlines()
        manifest_file.truncate(0)
        manifest_file.write(f"{entry_count} {entry_bytes}\n")
        manifest_file.writelines(persisted_entries)
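The manifest these helpers maintain is a plain text file: a header line of totals,
followed by one line per cache entry, oldest first (a sketch; hashes abbreviated):

```
42 13631488            <- "<entry count> <total bytes>" header
d94ab37e...c1 19550    <- "<sha256 of problem> <entry bytes>", oldest entry
8f01c2aa...7b 20113
...
```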
    def _results_from_cache(self, cache_entry_buffer: IO[str]) -> Union[Result, None]:
        """Returns a Result object from the concretizer cache

        Reads the cache hit and uses `Result`'s own deserializer
        to produce a new Result object
        """
        with current_file_position(cache_entry_buffer, 0):
            cache_str = cache_entry_buffer.read()
            # TODO: Should this be an error if None?
            # Same for _stats_from_cache
            if cache_str:
                cache_entry = json.loads(cache_str)
                result_json = cache_entry["results"]
                return Result.from_dict(result_json)
            return None

    def _stats_from_cache(self, cache_entry_buffer: IO[str]) -> Union[List, None]:
        """Returns concretization statistics from the
        concretization associated with the cache entry.

        Deserializes the json representation of the
        statistics covering the cached concretization run
        and returns the Python data structures
        """
        with current_file_position(cache_entry_buffer, 0):
            cache_str = cache_entry_buffer.read()
            if cache_str:
                return json.loads(cache_str)["statistics"]
            return None

    def _extract_cache_metadata(self, cache_stream: IO[str]):
        """Extracts and returns cache entry count and bytes count from the head of the
        manifest file"""
        # make sure we're always reading from the beginning of the stream;
        # concretization cache manifest data lives at the top of the file
        with current_file_position(cache_stream, 0):
            return self._parse_manifest_entry(cache_stream.readline())
    def _prefix_digest(self, problem: str) -> Tuple[str, str]:
        """Return the first two characters of, and the full, sha256 of the given asp problem"""
        prob_digest = hashlib.sha256(problem.encode()).hexdigest()
        prefix = prob_digest[:2]
        return prefix, prob_digest

    def _cache_path_from_problem(self, problem: str) -> pathlib.Path:
        """Returns a Path object representing the path to the cache
        entry for the given problem"""
        prefix, digest = self._prefix_digest(problem)
        return pathlib.Path(prefix) / digest

    def _cache_path_from_hash(self, hash: str) -> pathlib.Path:
        """Returns a Path object representing the cache entry
        corresponding to the given sha256 hash"""
        return pathlib.Path(hash[:2]) / hash
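A quick sketch of the sharding scheme these helpers implement: each entry lives
under a two-character prefix directory derived from the problem's sha256 (the
problem text below is illustrative):

```python
import hashlib
import pathlib

problem = 'attr("node", ...).'  # illustrative ASP problem text
digest = hashlib.sha256(problem.encode()).hexdigest()

# e.g. "5b/5bd74e..." relative to the cache root
entry = pathlib.Path(digest[:2]) / digest
```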
    def _lock_prefix_from_cache_path(self, cache_path: str):
        """Returns the bit location corresponding to a given cache entry path
        for file locking"""
        return spack.util.hash.base32_prefix_bits(
            spack.util.hash.b32_hash(cache_path), spack.util.crypto.bit_length(sys.maxsize)
        )
    def flush_manifest(self):
        """Updates the concretization cache manifest file after a cache write operation.

        Updates the current byte count and entry counts and writes them to the head of the
        manifest file"""
        manifest_file = self.root / self._cache_manifest
        manifest_file.touch(exist_ok=True)
        with open(manifest_file, "r+", encoding="utf-8") as f:
            # check if manifest is empty
            count, cache_bytes = self._extract_cache_metadata(f)
            if not count or not cache_bytes:
                # cache is uninitialized
                count = 0
                cache_bytes = 0
            f.seek(0, io.SEEK_END)
            for manifest_update in self._manifest_queue:
                entry_path, entry_bytes = manifest_update
                count += 1
                cache_bytes += entry_bytes
                f.write(f"{entry_path.name} {entry_bytes}\n")
            f.seek(0, io.SEEK_SET)
            new_stats = f"{int(count)+1} {int(cache_bytes)}\n"
            f.write(new_stats)
    def _register_cache_update(self, cache_path: pathlib.Path, bytes_written: int):
        """Adds a manifest entry to the update queue for later updates to the manifest"""
        self._manifest_queue.append((cache_path, bytes_written))

    def _safe_remove(self, cache_dir: pathlib.Path):
        """Removes cache entries with handling for the case where the entry has been
        removed already or there are multiple cache entries in a directory"""
        try:
            if cache_dir.is_dir():
                cache_dir.rmdir()
            else:
                cache_dir.unlink()
            return True
        except FileNotFoundError:
            # This is acceptable, removal is idempotent
            pass
        except OSError as e:
            if e.errno == errno.ENOTEMPTY:
                # there exists another cache entry in this directory, don't clean yet
                pass
        return False
    def store(self, problem: str, result: Result, statistics: List, test: bool = False):
        """Creates an entry in the concretization cache for the problem if none exists,
        storing the concretization Result object and statistics in the cache
        as serialized json joined as a single file.

        Hash membership is computed based on the sha256 of the provided asp
        problem.
        """
        cache_path = self._cache_path_from_problem(problem)
        if self._fc.init_entry(cache_path):
            # an entry for this conc hash exists already; we don't want
            # to overwrite, just exit
            tty.debug(f"Cache entry {cache_path} exists, will not be overwritten")
            return
        with self._fc.write_transaction(cache_path) as (old, new):
            if old:
                # Entry for this conc hash exists already, do not overwrite
                tty.debug(f"Cache entry {cache_path} exists, will not be overwritten")
                return
            cache_dict = {"results": result.to_dict(test=test), "statistics": statistics}
            bytes_written = new.write(json.dumps(cache_dict))
            self._register_cache_update(cache_path, bytes_written)
    def fetch(self, problem: str) -> Union[Tuple[Result, List], Tuple[None, None]]:
        """Returns the concretization cache result for a lookup based on the given problem.

        Checks the concretization cache for the given problem, and either returns the
        Python objects cached on disk representing the concretization results and statistics
        or returns ``(None, None)`` if no cache entry was found.
        """
        cache_path = self._cache_path_from_problem(problem)
        result, statistics = None, None
        with self._fc.read_transaction(cache_path) as f:
            if f:
                result = self._results_from_cache(f)
                statistics = self._stats_from_cache(f)
        if result and statistics:
            tty.debug(f"Concretization cache hit at {str(cache_path)}")
            return result, statistics
        tty.debug(f"Concretization cache miss at {str(cache_path)}")
        return None, None
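In isolation, the cache's public surface is small. A usage sketch (the problem
string is illustrative; in practice it is the full ASP program plus control files,
and ``run_solver`` is a hypothetical stand-in for the clingo solve):

```python
cache = ConcretizationCache()  # roots at config:concretization_cache:url

problem = "...full ASP problem text..."
result, stats = cache.fetch(problem)     # (None, None) on a miss
if result is None:
    result, stats = run_solver(problem)  # hypothetical solver call
    cache.store(problem, result, stats)
    cache.flush_manifest()               # persist queued manifest updates
cache.cleanup()                          # enforce entry/size limits
```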
CONC_CACHE: ConcretizationCache = llnl.util.lang.Singleton(
    lambda: ConcretizationCache()
)  # type: ignore


def _normalize_packages_yaml(packages_yaml):
    normalized_yaml = copy.copy(packages_yaml)
@@ -806,6 +1182,15 @@ def solve(self, setup, specs, reuse=None, output=None, control=None, allow_depre
        if sys.platform == "win32":
            tty.debug("Ensuring basic dependencies {win-sdk, wgl} available")
            spack.bootstrap.core.ensure_winsdk_external_or_raise()
        control_files = ["concretize.lp", "heuristic.lp", "display.lp"]
        if not setup.concretize_everything:
            control_files.append("when_possible.lp")
        if using_libc_compatibility():
            control_files.append("libc_compatibility.lp")
        else:
            control_files.append("os_compatibility.lp")
        if setup.enable_splicing:
            control_files.append("splices.lp")

        timer.start("setup")
        asp_problem = setup.setup(specs, reuse=reuse, allow_deprecated=allow_deprecated)
@@ -815,123 +1200,133 @@ def solve(self, setup, specs, reuse=None, output=None, control=None, allow_depre
            return Result(specs), None, None
        timer.stop("setup")

        timer.start("load")
        # Add the problem instance
        self.control.add("base", [], asp_problem)
        # Load the file itself
        timer.start("cache-check")
        timer.start("ordering")
        # ensure deterministic output
        problem_repr = "\n".join(sorted(asp_problem.split("\n")))
        timer.stop("ordering")
        parent_dir = os.path.dirname(__file__)
        self.control.load(os.path.join(parent_dir, "concretize.lp"))
        self.control.load(os.path.join(parent_dir, "heuristic.lp"))
        self.control.load(os.path.join(parent_dir, "display.lp"))
        if not setup.concretize_everything:
            self.control.load(os.path.join(parent_dir, "when_possible.lp"))
        full_path = lambda x: os.path.join(parent_dir, x)
        abs_control_files = [full_path(x) for x in control_files]
        for ctrl_file in abs_control_files:
            with open(ctrl_file, "r", encoding="utf-8") as f:
                problem_repr += "\n" + f.read()

        # Binary compatibility is based on libc on Linux, and on the os tag elsewhere
        if using_libc_compatibility():
            self.control.load(os.path.join(parent_dir, "libc_compatibility.lp"))
        else:
            self.control.load(os.path.join(parent_dir, "os_compatibility.lp"))
        if setup.enable_splicing:
            self.control.load(os.path.join(parent_dir, "splices.lp"))
        result = None
        conc_cache_enabled = spack.config.get("config:concretization_cache:enable", True)
        if conc_cache_enabled:
            result, concretization_stats = CONC_CACHE.fetch(problem_repr)

        timer.stop("load")
        timer.stop("cache-check")
        if not result:
            timer.start("load")
            # Add the problem instance
            self.control.add("base", [], asp_problem)
            # Load the files
            [self.control.load(lp) for lp in abs_control_files]
            timer.stop("load")

        # Grounding is the first step in the solve -- it turns our facts
        # and first-order logic rules into propositional logic.
        timer.start("ground")
        self.control.ground([("base", [])])
        timer.stop("ground")
            # Grounding is the first step in the solve -- it turns our facts
            # and first-order logic rules into propositional logic.
            timer.start("ground")
            self.control.ground([("base", [])])
            timer.stop("ground")

        # With a grounded program, we can run the solve.
        models = []  # stable models if things go well
        cores = []  # unsatisfiable cores if they do not
            # With a grounded program, we can run the solve.
            models = []  # stable models if things go well
            cores = []  # unsatisfiable cores if they do not

        def on_model(model):
            models.append((model.cost, model.symbols(shown=True, terms=True)))
            def on_model(model):
                models.append((model.cost, model.symbols(shown=True, terms=True)))

        solve_kwargs = {
            "assumptions": setup.assumptions,
            "on_model": on_model,
            "on_core": cores.append,
        }
            solve_kwargs = {
                "assumptions": setup.assumptions,
                "on_model": on_model,
                "on_core": cores.append,
            }

        if clingo_cffi():
            solve_kwargs["on_unsat"] = cores.append
            if clingo_cffi():
                solve_kwargs["on_unsat"] = cores.append

        timer.start("solve")
        time_limit = spack.config.CONFIG.get("concretizer:timeout", -1)
        error_on_timeout = spack.config.CONFIG.get("concretizer:error_on_timeout", True)
        # Spack uses 0 to set no time limit, clingo API uses -1
        if time_limit == 0:
            time_limit = -1
        with self.control.solve(**solve_kwargs, async_=True) as handle:
            finished = handle.wait(time_limit)
            if not finished:
                specs_str = ", ".join(llnl.util.lang.elide_list([str(s) for s in specs], 4))
                header = f"Spack is taking more than {time_limit} seconds to solve for {specs_str}"
                if error_on_timeout:
                    raise UnsatisfiableSpecError(f"{header}, stopping concretization")
                warnings.warn(f"{header}, using the best configuration found so far")
                handle.cancel()
            timer.start("solve")
            time_limit = spack.config.CONFIG.get("concretizer:timeout", -1)
            error_on_timeout = spack.config.CONFIG.get("concretizer:error_on_timeout", True)
            # Spack uses 0 to set no time limit, clingo API uses -1
            if time_limit == 0:
                time_limit = -1
            with self.control.solve(**solve_kwargs, async_=True) as handle:
                finished = handle.wait(time_limit)
                if not finished:
                    specs_str = ", ".join(llnl.util.lang.elide_list([str(s) for s in specs], 4))
                    header = (
                        f"Spack is taking more than {time_limit} seconds to solve for {specs_str}"
                    )
                    if error_on_timeout:
                        raise UnsatisfiableSpecError(f"{header}, stopping concretization")
                    warnings.warn(f"{header}, using the best configuration found so far")
                    handle.cancel()

            solve_result = handle.get()
        timer.stop("solve")
                solve_result = handle.get()
            timer.stop("solve")

        # once done, construct the solve result
        result = Result(specs)
        result.satisfiable = solve_result.satisfiable
            # once done, construct the solve result
            result = Result(specs)
            result.satisfiable = solve_result.satisfiable

        if result.satisfiable:
            timer.start("construct_specs")
            # get the best model
            builder = SpecBuilder(specs, hash_lookup=setup.reusable_and_possible)
            min_cost, best_model = min(models)
            if result.satisfiable:
                timer.start("construct_specs")
                # get the best model
                builder = SpecBuilder(specs, hash_lookup=setup.reusable_and_possible)
                min_cost, best_model = min(models)

            # first check for errors
            error_handler = ErrorHandler(best_model, specs)
            error_handler.raise_if_errors()
                # first check for errors
                error_handler = ErrorHandler(best_model, specs)
                error_handler.raise_if_errors()

            # build specs from spec attributes in the model
            spec_attrs = [(name, tuple(rest)) for name, *rest in extract_args(best_model, "attr")]
            answers = builder.build_specs(spec_attrs)
                # build specs from spec attributes in the model
                spec_attrs = [
                    (name, tuple(rest)) for name, *rest in extract_args(best_model, "attr")
                ]
                answers = builder.build_specs(spec_attrs)

            # add best spec to the results
            result.answers.append((list(min_cost), 0, answers))
                # add best spec to the results
                result.answers.append((list(min_cost), 0, answers))

            # get optimization criteria
            criteria_args = extract_args(best_model, "opt_criterion")
            result.criteria = build_criteria_names(min_cost, criteria_args)
                # get optimization criteria
                criteria_args = extract_args(best_model, "opt_criterion")
                result.criteria = build_criteria_names(min_cost, criteria_args)

            # record the number of models the solver considered
            result.nmodels = len(models)
                # record the number of models the solver considered
                result.nmodels = len(models)

            # record the possible dependencies in the solve
            result.possible_dependencies = setup.pkgs
            timer.stop("construct_specs")
            timer.stop()
        elif cores:
            result.control = self.control
            result.cores.extend(cores)
                # record the possible dependencies in the solve
                result.possible_dependencies = setup.pkgs
                timer.stop("construct_specs")
                timer.stop()
            elif cores:
                result.control = self.control
                result.cores.extend(cores)

            result.raise_if_unsat()

            if result.satisfiable and result.unsolved_specs and setup.concretize_everything:
                unsolved_str = Result.format_unsolved(result.unsolved_specs)
                raise InternalConcretizerError(
                    "Internal Spack error: the solver completed but produced specs"
                    " that do not satisfy the request. Please report a bug at "
                    f"https://github.com/spack/spack/issues\n\t{unsolved_str}"
                )
            if conc_cache_enabled:
                CONC_CACHE.store(problem_repr, result, self.control.statistics, test=setup.tests)
            concretization_stats = self.control.statistics
        if output.timers:
            timer.write_tty()
            print()

        if output.stats:
            print("Statistics:")
            pprint.pprint(self.control.statistics)

        result.raise_if_unsat()

        if result.satisfiable and result.unsolved_specs and setup.concretize_everything:
            unsolved_str = Result.format_unsolved(result.unsolved_specs)
            raise InternalConcretizerError(
                "Internal Spack error: the solver completed but produced specs"
                " that do not satisfy the request. Please report a bug at "
                f"https://github.com/spack/spack/issues\n\t{unsolved_str}"
            )

        return result, timer, self.control.statistics
            pprint.pprint(concretization_stats)
        return result, timer, concretization_stats
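In outline, the cache-aware path this hunk threads through ``solve()`` looks like
the following (a condensed sketch using names from the diff above, not verbatim
code):

```python
# Condensed sketch of the cache-aware solve flow
problem_repr = "\n".join(sorted(asp_problem.split("\n")))  # deterministic key input
for ctrl_file in abs_control_files:
    with open(ctrl_file, "r", encoding="utf-8") as f:
        problem_repr += "\n" + f.read()  # the .lp control files are part of the key

result = None
if spack.config.get("config:concretization_cache:enable", True):
    result, concretization_stats = CONC_CACHE.fetch(problem_repr)  # cache hit?

if not result:
    # cache miss: ground and solve with clingo as before, build the Result
    # from the best model, then store it for next time
    ...
    CONC_CACHE.store(problem_repr, result, self.control.statistics, test=setup.tests)
    concretization_stats = self.control.statistics

return result, timer, concretization_stats
```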
class ConcreteSpecsByHash(collections.abc.Mapping):
@@ -1373,7 +1768,7 @@ def effect_rules(self):
            return

        self.gen.h2("Imposed requirements")
        for name in self._effect_cache:
        for name in sorted(self._effect_cache):
            cache = self._effect_cache[name]
            for (spec_str, _), (effect_id, requirements) in cache.items():
                self.gen.fact(fn.pkg_fact(name, fn.effect_id(effect_id)))
@@ -1426,8 +1821,8 @@ def define_variant(

        elif isinstance(values, vt.DisjointSetsOfValues):
            union = set()
            for sid, s in enumerate(values.sets):
                for value in s:
            for sid, s in enumerate(sorted(values.sets)):
                for value in sorted(s):
                    pkg_fact(fn.variant_value_from_disjoint_sets(vid, value, sid))
                union.update(s)
            values = union
@@ -1608,7 +2003,7 @@ def package_provider_rules(self, pkg):
            self.gen.fact(fn.pkg_fact(pkg.name, fn.possible_provider(vpkg_name)))

        for when, provided in pkg.provided.items():
            for vpkg in provided:
            for vpkg in sorted(provided):
                if vpkg.name not in self.possible_virtuals:
                    continue

@@ -1623,8 +2018,8 @@ def package_provider_rules(self, pkg):
            condition_id = self.condition(
                when, required_name=pkg.name, msg="Virtuals are provided together"
            )
            for set_id, virtuals_together in enumerate(sets_of_virtuals):
                for name in virtuals_together:
            for set_id, virtuals_together in enumerate(sorted(sets_of_virtuals)):
                for name in sorted(virtuals_together):
                    self.gen.fact(
                        fn.pkg_fact(pkg.name, fn.provided_together(condition_id, set_id, name))
                    )
@@ -1734,7 +2129,7 @@ def package_splice_rules(self, pkg):
            for map in pkg.variants.values():
                for k in map:
                    filt_match_variants.add(k)
            filt_match_variants = list(filt_match_variants)
            filt_match_variants = sorted(filt_match_variants)
            variant_constraints = self._gen_match_variant_splice_constraints(
                pkg, cond, spec_to_splice, hash_var, splice_node, filt_match_variants
            )
@@ -1839,8 +2234,8 @@ def emit_facts_from_requirement_rules(self, rules: List[RequirementRule]):
            spec.attach_git_version_lookup()

            when_spec = spec
            if virtual:
                when_spec = spack.spec.Spec(pkg_name)
            if virtual and spec.name != pkg_name:
                when_spec = spack.spec.Spec(f"^[virtuals={pkg_name}] {spec.name}")

            try:
                context = ConditionContext()
@@ -2264,7 +2659,7 @@ def define_package_versions_and_validate_preferences(
    ):
        """Declare any versions in specs not declared in packages."""
        packages_yaml = spack.config.get("packages")
        for pkg_name in possible_pkgs:
        for pkg_name in sorted(possible_pkgs):
            pkg_cls = self.pkg_class(pkg_name)

            # All the versions from the corresponding package.py file. Since concepts
@@ -2592,7 +2987,7 @@ def define_variant_values(self):
        """
        # Tell the concretizer about possible values from specs seen in spec_clauses().
        # We might want to order these facts by pkg and name if we are debugging.
        for pkg_name, variant_def_id, value in self.variant_values_from_specs:
        for pkg_name, variant_def_id, value in sorted(self.variant_values_from_specs):
            try:
                vid = self.variant_ids_by_def_id[variant_def_id]
            except KeyError:
@@ -2630,6 +3025,8 @@ def concrete_specs(self):
        # Declare as possible parts of specs that are not in package.py
        # - Add versions to possible versions
        # - Add OS to possible OS's

        # is traverse deterministic?
        for dep in spec.traverse():
            self.possible_versions[dep.name].add(dep.version)
            if isinstance(dep.version, vn.GitVersion):
@@ -2867,7 +3264,7 @@ def define_runtime_constraints(self):
        recorder.consume_facts()

    def literal_specs(self, specs):
        for spec in specs:
        for spec in sorted(specs):
            self.gen.h2("Spec: %s" % str(spec))
            condition_id = next(self._id_counter)
            trigger_id = next(self._id_counter)
@@ -3368,7 +3765,7 @@ def consume_facts(self):
        # on the available compilers)
        self._setup.pkg_version_rules(runtime_pkg)

        for imposed_spec, when_spec in self.runtime_conditions:
        for imposed_spec, when_spec in sorted(self.runtime_conditions):
            msg = f"{when_spec} requires {imposed_spec} at runtime"
            _ = self._setup.condition(when_spec, imposed_spec=imposed_spec, msg=msg)

@@ -4225,6 +4622,9 @@ def solve_with_stats(
        reusable_specs.extend(self.selector.reusable_specs(specs))
        setup = SpackSolverSetup(tests=tests)
        output = OutputConfiguration(timers=timers, stats=stats, out=out, setup_only=setup_only)

        CONC_CACHE.flush_manifest()
        CONC_CACHE.cleanup()
        return self.driver.solve(
            setup, specs, reuse=reusable_specs, output=output, allow_deprecated=allow_deprecated
        )
@@ -4294,6 +4694,9 @@ def solve_in_rounds(
            for spec in result.specs:
                reusable_specs.extend(spec.traverse())

        CONC_CACHE.flush_manifest()
        CONC_CACHE.cleanup()


class UnsatisfiableSpecError(spack.error.UnsatisfiableSpecError):
    """There was an issue with the spec that was requested (i.e. a user error)."""
@@ -597,6 +597,13 @@ attr("virtual_on_edge", PackageNode, ProviderNode, Virtual)
attr("virtual_on_incoming_edges", ProviderNode, Virtual)
  :- attr("virtual_on_edge", _, ProviderNode, Virtual).

% This is needed to allow requirements on virtuals,
% when a virtual root is requested
attr("virtual_on_incoming_edges", ProviderNode, Virtual)
  :- attr("virtual_root", node(min_dupe_id, Virtual)),
     attr("root", ProviderNode),
     provider(ProviderNode, node(min_dupe_id, Virtual)).

% dependencies on virtuals also imply that the virtual is a virtual node
1 { attr("virtual_node", node(0..X-1, Virtual)) : max_dupes(Virtual, X) }
  :- node_depends_on_virtual(PackageNode, Virtual).
@@ -953,12 +960,14 @@ error(100, "Cannot set variant '{0}' for package '{1}' because the variant condi
    build(node(ID, Package)).

% at most one variant value for single-valued variants.
error(100, "'{0}' required multiple values for single-valued variant '{1}'", Package, Variant)
error(100, "'{0}' requires conflicting variant values 'Spec({1}={2})' and 'Spec({1}={3})'", Package, Variant, Value1, Value2)
  :- attr("node", node(ID, Package)),
     node_has_variant(node(ID, Package), Variant, _),
     variant_single_value(node(ID, Package), Variant),
     build(node(ID, Package)),
     2 { attr("variant_value", node(ID, Package), Variant, Value) }.
     attr("variant_value", node(ID, Package), Variant, Value1),
     attr("variant_value", node(ID, Package), Variant, Value2),
     Value1 < Value2,
     build(node(ID, Package)).

error(100, "No valid value for variant '{1}' of package '{0}'", Package, Variant)
  :- attr("node", node(ID, Package)),

@@ -117,7 +117,7 @@ error(0, "Cannot find a valid provider for virtual {0}", Virtual, startcauses, C
    condition_holds(Cause, node(CID, TriggerPkg)).

% At most one variant value for single-valued variants
error(0, "'{0}' required multiple values for single-valued variant '{1}'\n    Requested 'Spec({1}={2})' and 'Spec({1}={3})'", Package, Variant, Value1, Value2, startcauses, Cause1, X, Cause2, X)
error(0, "'{0}' requires conflicting variant values 'Spec({1}={2})' and 'Spec({1}={3})'", Package, Variant, Value1, Value2, startcauses, Cause1, X, Cause2, X)
  :- attr("node", node(X, Package)),
     node_has_variant(node(X, Package), Variant, VariantID),
     variant_single_value(node(X, Package), Variant),
|
||||
return rules
|
||||
|
||||
def rules_from_virtual(self, virtual_str: str) -> List[RequirementRule]:
|
||||
requirements = self.config.get("packages", {}).get(virtual_str, {}).get("require", [])
|
||||
return self._rules_from_requirements(
|
||||
virtual_str, requirements, kind=RequirementKind.VIRTUAL
|
||||
)
|
||||
kind, requests = self._raw_yaml_data(virtual_str, section="require", virtual=True)
|
||||
result = self._rules_from_requirements(virtual_str, requests, kind=kind)
|
||||
|
||||
kind, requests = self._raw_yaml_data(virtual_str, section="prefer", virtual=True)
|
||||
result.extend(self._rules_from_preferences(virtual_str, preferences=requests, kind=kind))
|
||||
|
||||
kind, requests = self._raw_yaml_data(virtual_str, section="conflict", virtual=True)
|
||||
result.extend(self._rules_from_conflicts(virtual_str, conflicts=requests, kind=kind))
|
||||
|
||||
return result
|
||||
|
||||
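The practical effect of this rewrite is that ``prefer`` and ``conflict`` sections
in ``packages.yaml`` now apply to virtual packages as well as concrete ones, for
example (an illustrative configuration; requirements still override preferences):

```yaml
packages:
  mpi:
    require:       # hard constraint: the provider must satisfy this
    - mpich
    prefer:        # soft: chosen when possible, no error otherwise
    - zmpi
    conflict:      # the provider must not satisfy this
    - openmpi
```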
    def rules_from_require(self, pkg: spack.package_base.PackageBase) -> List[RequirementRule]:
        kind, requirements = self._raw_yaml_data(pkg, section="require")
        kind, requirements = self._raw_yaml_data(pkg.name, section="require")
        return self._rules_from_requirements(pkg.name, requirements, kind=kind)

    def rules_from_prefer(self, pkg: spack.package_base.PackageBase) -> List[RequirementRule]:
        kind, preferences = self._raw_yaml_data(pkg.name, section="prefer")
        return self._rules_from_preferences(pkg.name, preferences=preferences, kind=kind)

    def _rules_from_preferences(
        self, pkg_name: str, *, preferences, kind: RequirementKind
    ) -> List[RequirementRule]:
        result = []
        kind, preferences = self._raw_yaml_data(pkg, section="prefer")
        for item in preferences:
            spec, condition, message = self._parse_prefer_conflict_item(item)
            result.append(
@@ -86,7 +97,7 @@ def rules_from_prefer(self, pkg: spack.package_base.PackageBase) -> List[Require
                # require:
                # - any_of: [spec_str, "@:"]
                RequirementRule(
                    pkg_name=pkg.name,
                    pkg_name=pkg_name,
                    policy="any_of",
                    requirements=[spec, spack.spec.Spec("@:")],
                    kind=kind,
@@ -97,8 +108,13 @@ def rules_from_prefer(self, pkg: spack.package_base.PackageBase) -> List[Require
        return result

    def rules_from_conflict(self, pkg: spack.package_base.PackageBase) -> List[RequirementRule]:
        kind, conflicts = self._raw_yaml_data(pkg.name, section="conflict")
        return self._rules_from_conflicts(pkg.name, conflicts=conflicts, kind=kind)

    def _rules_from_conflicts(
        self, pkg_name: str, *, conflicts, kind: RequirementKind
    ) -> List[RequirementRule]:
        result = []
        kind, conflicts = self._raw_yaml_data(pkg, section="conflict")
        for item in conflicts:
            spec, condition, message = self._parse_prefer_conflict_item(item)
            result.append(
@@ -107,7 +123,7 @@ def rules_from_conflict(self, pkg: spack.package_base.PackageBase) -> List[Requi
                # require:
                # - one_of: [spec_str, "@:"]
                RequirementRule(
                    pkg_name=pkg.name,
                    pkg_name=pkg_name,
                    policy="one_of",
                    requirements=[spec, spack.spec.Spec("@:")],
                    kind=kind,
@@ -129,10 +145,14 @@ def _parse_prefer_conflict_item(self, item):
        message = item.get("message")
        return spec, condition, message

    def _raw_yaml_data(self, pkg: spack.package_base.PackageBase, *, section: str):
    def _raw_yaml_data(self, pkg_name: str, *, section: str, virtual: bool = False):
        config = self.config.get("packages")
        data = config.get(pkg.name, {}).get(section, [])
        data = config.get(pkg_name, {}).get(section, [])
        kind = RequirementKind.PACKAGE

        if virtual:
            return RequirementKind.VIRTUAL, data

        if not data:
            data = config.get("all", {}).get(section, [])
            kind = RequirementKind.DEFAULT
@@ -97,7 +97,6 @@
import spack.spec_parser
import spack.store
import spack.traverse
import spack.util.executable
import spack.util.hash
import spack.util.prefix
import spack.util.spack_json as sjson
@@ -1110,28 +1109,6 @@ def clear(self):
        self.edges.clear()


def _command_default_handler(spec: "Spec"):
    """Default handler when looking for the 'command' attribute.

    Tries to search for ``spec.name`` in the ``spec.home.bin`` directory.

    Parameters:
        spec: spec that is being queried

    Returns:
        Executable: An executable of the command

    Raises:
        RuntimeError: If the command is not found
    """
    home = getattr(spec.package, "home")
    path = os.path.join(home.bin, spec.name)

    if fs.is_exe(path):
        return spack.util.executable.Executable(path)
    raise RuntimeError(f"Unable to locate {spec.name} command in {home.bin}")


def _headers_default_handler(spec: "Spec"):
    """Default handler when looking for the 'headers' attribute.

@@ -1335,9 +1312,7 @@ class SpecBuildInterface(lang.ObjectWrapper):
    home = ForwardQueryToPackage("home", default_handler=None)
    headers = ForwardQueryToPackage("headers", default_handler=_headers_default_handler)
    libs = ForwardQueryToPackage("libs", default_handler=_libs_default_handler)
    command = ForwardQueryToPackage(
        "command", default_handler=_command_default_handler, _indirect=True
    )
    command = ForwardQueryToPackage("command", default_handler=None, _indirect=True)

    def __init__(
        self,
@@ -42,7 +42,7 @@ def mock_pkg_git_repo(git, tmp_path_factory):
    repo_dir = root_dir / "builtin.mock"
    shutil.copytree(spack.paths.mock_packages_path, str(repo_dir))

    repo_cache = spack.util.file_cache.FileCache(str(root_dir / "cache"))
    repo_cache = spack.util.file_cache.FileCache(root_dir / "cache")
    mock_repo = spack.repo.RepoPath(str(repo_dir), cache=repo_cache)
    mock_repo_packages = mock_repo.repos[0].packages_path

@@ -3254,3 +3254,54 @@ def test_spec_unification(unify, mutable_config, mock_packages):
    maybe_fails = pytest.raises if unify is True else llnl.util.lang.nullcontext
    with maybe_fails(spack.solver.asp.UnsatisfiableSpecError):
        _ = spack.cmd.parse_specs([a_restricted, b], concretize=True)
def test_concretization_cache_roundtrip(use_concretization_cache, monkeypatch, mutable_config):
    """Tests whether we can write the results of a clingo solve to the cache
    and load the same spec request from the cache to produce identical specs"""
    # Force determinism:
    # Solver setup is normally non-deterministic due to non-determinism in
    # asp solver setup logic generation. The only other inputs to the cache keys are
    # the .lp files, which are invariant over the course of this test.
    # This method forces the same setup to be produced for the same specs,
    # which gives us a guarantee of cache hits, as it removes the only
    # element of non-deterministic solver setup for the same spec.
    # Basically just a quick and dirty memoization.
    solver_setup = spack.solver.asp.SpackSolverSetup.setup

    def _setup(self, specs, *, reuse=None, allow_deprecated=False):
        if not getattr(_setup, "cache_setup", None):
            cache_setup = solver_setup(self, specs, reuse=reuse, allow_deprecated=allow_deprecated)
            setattr(_setup, "cache_setup", cache_setup)
        return getattr(_setup, "cache_setup")

    # monkeypatch our forced-determinism setup method into solver setup
    monkeypatch.setattr(spack.solver.asp.SpackSolverSetup, "setup", _setup)

    assert spack.config.get("config:concretization_cache:enable")

    # run one standard concretization to populate the cache and the setup method
    # memoization
    h = spack.concretize.concretize_one("hdf5")

    # due to our forced determinism above, we should not be observing
    # cache misses; assert that we're not storing any new cache entries
    def _ensure_no_store(self, problem: str, result, statistics, test=False):
        # always throw, we never want to reach this code path
        assert False, "Concretization cache hit expected"

    # Assert that we're actually hitting the cache
    cache_fetch = spack.solver.asp.ConcretizationCache.fetch

    def _ensure_cache_hits(self, problem: str):
        result, statistics = cache_fetch(self, problem)
        assert result, "Expected successful concretization cache hit"
        assert statistics, "Expected statistics to be non null on cache hit"
        return result, statistics

    monkeypatch.setattr(spack.solver.asp.ConcretizationCache, "store", _ensure_no_store)
    monkeypatch.setattr(spack.solver.asp.ConcretizationCache, "fetch", _ensure_cache_hits)
    # ensure subsequent concretizations of the same spec produce the same spec
    # object
    for _ in range(5):
        assert h == spack.concretize.concretize_one("hdf5")
@@ -29,8 +29,7 @@
]

variant_error_messages = [
    "'fftw' required multiple values for single-valued variant 'mpi'",
    "    Requested '~mpi' and '+mpi'",
    "'fftw' requires conflicting variant values '~mpi' and '+mpi'",
    "    required because quantum-espresso depends on fftw+mpi when +invino",
    "    required because quantum-espresso+invino ^fftw~mpi requested explicitly",
    "    required because quantum-espresso+invino ^fftw~mpi requested explicitly",
@@ -377,11 +377,14 @@ def test_require_cflags(concretize_scope, mock_packages):
    """
    update_packages_config(conf_str)

    spec_mpich2 = spack.concretize.concretize_one("mpich2")
    assert spec_mpich2.satisfies("cflags=-g")
    mpich2 = spack.concretize.concretize_one("mpich2")
    assert mpich2.satisfies("cflags=-g")

    spec_mpi = spack.concretize.concretize_one("mpi")
    assert spec_mpi.satisfies("mpich cflags=-O1")
    mpileaks = spack.concretize.concretize_one("mpileaks")
    assert mpileaks["mpi"].satisfies("mpich cflags=-O1")

    mpi = spack.concretize.concretize_one("mpi")
    assert mpi.satisfies("mpich cflags=-O1")

def test_requirements_for_package_that_is_not_needed(concretize_scope, test_repo):
@@ -982,6 +985,52 @@ def test_requiring_package_on_multiple_virtuals(concretize_scope, mock_packages)
            ["%clang"],
            ["%gcc"],
        ),
        # Test using preferences on virtuals
        (
            """
        packages:
          all:
            providers:
              mpi: [mpich]
          mpi:
            prefer:
            - zmpi
        """,
            "mpileaks",
            ["^[virtuals=mpi] zmpi"],
            ["^[virtuals=mpi] mpich"],
        ),
        (
            """
        packages:
          all:
            providers:
              mpi: [mpich]
          mpi:
            prefer:
            - zmpi
        """,
            "mpileaks ^[virtuals=mpi] mpich",
            ["^[virtuals=mpi] mpich"],
            ["^[virtuals=mpi] zmpi"],
        ),
        # Tests that strong preferences can be overridden by requirements
        (
            """
        packages:
          all:
            providers:
              mpi: [zmpi]
          mpi:
            require:
            - mpich
            prefer:
            - zmpi
        """,
            "mpileaks",
            ["^[virtuals=mpi] mpich"],
            ["^[virtuals=mpi] zmpi"],
        ),
    ],
)
def test_strong_preferences_packages_yaml(
@@ -1032,6 +1081,16 @@ def test_strong_preferences_packages_yaml(
        """,
            "multivalue-variant@=2.3 %clang",
        ),
        # Test using conflict on virtual
        (
            """
        packages:
          mpi:
            conflict:
            - mpich
        """,
            "mpileaks ^[virtuals=mpi] mpich",
        ),
    ],
)
def test_conflict_packages_yaml(packages_yaml, spec_str, concretize_scope, mock_packages):
@@ -1168,3 +1227,26 @@ def test_anonymous_spec_cannot_be_used_in_virtual_requirements(
    update_packages_config(packages_yaml)
    with pytest.raises(spack.error.SpackError, match=err_match):
        spack.concretize.concretize_one("mpileaks")


def test_virtual_requirement_respects_any_of(concretize_scope, mock_packages):
    """Tests that "any of" requirements can be used with virtuals"""
    conf_str = """\
packages:
  mpi:
    require:
    - any_of: ["mpich2", "mpich"]
"""
    update_packages_config(conf_str)

    s = spack.concretize.concretize_one("mpileaks")
    assert s.satisfies("^[virtuals=mpi] mpich2")

    s = spack.concretize.concretize_one("mpileaks ^mpich2")
    assert s.satisfies("^[virtuals=mpi] mpich2")

    s = spack.concretize.concretize_one("mpileaks ^mpich")
    assert s.satisfies("^[virtuals=mpi] mpich")

    with pytest.raises(spack.error.SpackError):
        spack.concretize.concretize_one("mpileaks ^[virtuals=mpi] zmpi")
@@ -350,6 +350,16 @@ def pytest_collection_modifyitems(config, items):
            item.add_marker(skip_as_slow)


@pytest.fixture(scope="function")
def use_concretization_cache(mutable_config, tmpdir):
    """Enables the use of the concretization cache"""
    spack.config.set("config:concretization_cache:enable", True)
    # ensure we have an isolated concretization cache
    new_conc_cache_loc = str(tmpdir.mkdir("concretization"))
    spack.config.set("config:concretization_cache:path", new_conc_cache_loc)
    yield


#
# These fixtures are applied to all tests
#
@@ -2133,7 +2143,7 @@ def _c_compiler_always_exists():
@pytest.fixture(scope="session")
def mock_test_cache(tmp_path_factory):
    cache_dir = tmp_path_factory.mktemp("cache")
    return spack.util.file_cache.FileCache(str(cache_dir))
    return spack.util.file_cache.FileCache(cache_dir)


class MockHTTPResponse(io.IOBase):

@@ -14,3 +14,5 @@ config:
  checksum: true
  dirty: false
  locks: {1}
  concretization_cache:
    enable: false
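For context, a test that wants to exercise this cache end to end can lean on the use_concretization_cache fixture above. A minimal sketch (the fixture and concretize_one are as shown in this diff; the test name itself is hypothetical):

import spack.concretize

def test_cache_roundtrip(use_concretization_cache):
    # the first solve populates the cache; the repeat should be served from it
    first = spack.concretize.concretize_one("hdf5")
    assert first == spack.concretize.concretize_one("hdf5")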
@@ -161,7 +161,7 @@ def test_handle_unknown_package(temporary_store, config, mock_packages, tmp_path
    """
    layout = temporary_store.layout

    repo_cache = spack.util.file_cache.FileCache(str(tmp_path / "cache"))
    repo_cache = spack.util.file_cache.FileCache(tmp_path / "cache")
    mock_db = spack.repo.RepoPath(spack.paths.mock_packages_path, cache=repo_cache)

    not_in_mock = set.difference(

@@ -34,7 +34,7 @@ def extra_repo(tmp_path_factory, request):
        subdirectory: '{request.param}'
    """
    )
    repo_cache = spack.util.file_cache.FileCache(str(cache_dir))
    repo_cache = spack.util.file_cache.FileCache(cache_dir)
    return spack.repo.Repo(str(repo_dir), cache=repo_cache), request.param


@@ -194,7 +194,7 @@ def _repo_paths(repos):

    repo_paths, namespaces = _repo_paths(repos)

    repo_cache = spack.util.file_cache.FileCache(str(tmp_path / "cache"))
    repo_cache = spack.util.file_cache.FileCache(tmp_path / "cache")
    repo_path = spack.repo.RepoPath(*repo_paths, cache=repo_cache)
    assert len(repo_path.repos) == len(namespaces)
    assert [x.namespace for x in repo_path.repos] == namespaces
@@ -362,5 +362,5 @@ def test_repo_package_api_version(tmp_path: pathlib.Path):
        namespace: example
    """
    )
    cache = spack.util.file_cache.FileCache(str(tmp_path / "cache"))
    cache = spack.util.file_cache.FileCache(tmp_path / "cache")
    assert spack.repo.Repo(str(tmp_path / "example"), cache=cache).package_api == (1, 0)
@@ -5,16 +5,17 @@
import errno
import math
import os
import pathlib
import shutil
from typing import IO, Optional, Tuple
from typing import IO, Dict, Optional, Tuple, Union

from llnl.util.filesystem import mkdirp, rename
from llnl.util.filesystem import rename

from spack.error import SpackError
from spack.util.lock import Lock, ReadTransaction, WriteTransaction


def _maybe_open(path: str) -> Optional[IO[str]]:
def _maybe_open(path: Union[str, pathlib.Path]) -> Optional[IO[str]]:
    try:
        return open(path, "r", encoding="utf-8")
    except OSError as e:
@@ -24,7 +25,7 @@ def _maybe_open(path: str) -> Optional[IO[str]]:


class ReadContextManager:
    def __init__(self, path: str) -> None:
    def __init__(self, path: Union[str, pathlib.Path]) -> None:
        self.path = path

    def __enter__(self) -> Optional[IO[str]]:
@@ -70,7 +71,7 @@ class FileCache:

    """

    def __init__(self, root, timeout=120):
    def __init__(self, root: Union[str, pathlib.Path], timeout=120):
        """Create a file cache object.

        This will create the cache directory if it does not exist yet.

@@ -82,58 +83,60 @@ def __init__(self, root, timeout=120):
                for cache files, this specifies how long Spack should wait
                before assuming that there is a deadlock.
        """
        self.root = root.rstrip(os.path.sep)
        if not os.path.exists(self.root):
            mkdirp(self.root)
        if isinstance(root, str):
            root = pathlib.Path(root)
        self.root = root
        self.root.mkdir(parents=True, exist_ok=True)

        self._locks = {}
        self._locks: Dict[Union[pathlib.Path, str], Lock] = {}
        self.lock_timeout = timeout

    def destroy(self):
        """Remove all files under the cache root."""
        for f in os.listdir(self.root):
            path = os.path.join(self.root, f)
            if os.path.isdir(path):
                shutil.rmtree(path, True)
        for f in self.root.iterdir():
            if f.is_dir():
                shutil.rmtree(f, True)
            else:
                os.remove(path)
                f.unlink()

    def cache_path(self, key):
    def cache_path(self, key: Union[str, pathlib.Path]):
        """Path to the file in the cache for a particular key."""
        return os.path.join(self.root, key)
        return self.root / key

    def _lock_path(self, key):
    def _lock_path(self, key: Union[str, pathlib.Path]):
        """Path to the file in the cache for a particular key."""
        keyfile = os.path.basename(key)
        keydir = os.path.dirname(key)

        return os.path.join(self.root, keydir, "." + keyfile + ".lock")
        return self.root / keydir / ("." + keyfile + ".lock")

    def _get_lock(self, key):
    def _get_lock(self, key: Union[str, pathlib.Path]):
        """Create a lock for a key, if necessary, and return a lock object."""
        if key not in self._locks:
            self._locks[key] = Lock(self._lock_path(key), default_timeout=self.lock_timeout)
            self._locks[key] = Lock(str(self._lock_path(key)), default_timeout=self.lock_timeout)
        return self._locks[key]

    def init_entry(self, key):
    def init_entry(self, key: Union[str, pathlib.Path]):
        """Ensure we can access a cache file. Create a lock for it if needed.

        Return whether the cache file exists yet or not.
        """
        cache_path = self.cache_path(key)

        # Avoid using pathlib here to allow the logic below to
        # function as is
        # TODO: Maybe refactor the following logic for pathlib
        exists = os.path.exists(cache_path)
        if exists:
            if not os.path.isfile(cache_path):
            if not cache_path.is_file():
                raise CacheError("Cache file is not a file: %s" % cache_path)

            if not os.access(cache_path, os.R_OK):
                raise CacheError("Cannot access cache file: %s" % cache_path)
        else:
            # if the file is hierarchical, make parent directories
            parent = os.path.dirname(cache_path)
            if parent.rstrip(os.path.sep) != self.root:
                mkdirp(parent)
            parent = cache_path.parent
            if parent != self.root:
                parent.mkdir(parents=True, exist_ok=True)

            if not os.access(parent, os.R_OK | os.W_OK):
                raise CacheError("Cannot access cache directory: %s" % parent)
@@ -142,7 +145,7 @@ def init_entry(self, key):
        self._get_lock(key)
        return exists

    def read_transaction(self, key):
    def read_transaction(self, key: Union[str, pathlib.Path]):
        """Get a read transaction on a file cache item.

        Returns a ReadTransaction context manager and opens the cache file for
@@ -153,9 +156,11 @@ def read_transaction(self, key):

        """
        path = self.cache_path(key)
        return ReadTransaction(self._get_lock(key), acquire=lambda: ReadContextManager(path))
        return ReadTransaction(
            self._get_lock(key), acquire=lambda: ReadContextManager(path)  # type: ignore
        )

    def write_transaction(self, key):
    def write_transaction(self, key: Union[str, pathlib.Path]):
        """Get a write transaction on a file cache item.

        Returns a WriteTransaction context manager that opens a temporary file
@@ -167,9 +172,11 @@ def write_transaction(self, key):
        if os.path.exists(path) and not os.access(path, os.W_OK):
            raise CacheError(f"Insufficient permissions to write to file cache at {path}")

        return WriteTransaction(self._get_lock(key), acquire=lambda: WriteContextManager(path))
        return WriteTransaction(
            self._get_lock(key), acquire=lambda: WriteContextManager(path)  # type: ignore
        )

    def mtime(self, key) -> float:
    def mtime(self, key: Union[str, pathlib.Path]) -> float:
        """Return modification time of cache file, or -inf if it does not exist.

        Time is in units returned by os.stat in the mtime field, which is
@@ -179,14 +186,14 @@ def mtime(self, key) -> float:
        if not self.init_entry(key):
            return -math.inf
        else:
            return os.stat(self.cache_path(key)).st_mtime
            return self.cache_path(key).stat().st_mtime

    def remove(self, key):
    def remove(self, key: Union[str, pathlib.Path]):
        file = self.cache_path(key)
        lock = self._get_lock(key)
        try:
            lock.acquire_write()
            os.unlink(file)
            file.unlink()
        except OSError as e:
            # File not found is OK, so remove is idempotent.
            if e.errno != errno.ENOENT:
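Taken together, the FileCache refactor above means callers can now hand the cache a pathlib.Path directly. A minimal usage sketch, assuming the (old, new) file pair yielded by WriteContextManager elsewhere in this module; the cache root and key names here are made up:

import pathlib
from spack.util.file_cache import FileCache

cache = FileCache(pathlib.Path("/tmp/demo-cache"))  # a plain str root still works

with cache.write_transaction("indexes/demo.json") as (old, new):
    new.write("{}")  # old is None on the first write

with cache.read_transaction("indexes/demo.json") as f:
    assert f.read() == "{}"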
@@ -36,7 +36,8 @@ spack:
  - paraview_specs:
    - matrix:
      - - paraview +raytracing +adios2 +fides
      - - +qt ^[virtuals=gl] glx  # GUI Support w/ GLX Rendering
      - - +qt ^[virtuals=gl] glx ^[virtuals=qmake] qt-base  # Qt6 GUI Support w/ GLX Rendering
        - +qt ^[virtuals=gl] glx ^[virtuals=qmake] qt  # Qt5 GUI Support w/ GLX Rendering
        - ~qt ^[virtuals=gl] glx  # GLX Rendering
        - ^[virtuals=gl] osmesa  # OSMesa Rendering
  - visit_specs:
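The ^[virtuals=qmake] tokens above are how this stack pins ParaView's Qt provider; hypothetical command-line equivalents would be specs such as paraview +qt ^[virtuals=qmake] qt-base (Qt6) or paraview +qt ^[virtuals=qmake] qt (Qt5).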
@@ -56,13 +56,11 @@ def url_for_version(self, version):

    # To enable this plug-in to work with NCCL add it to the LD_LIBRARY_PATH
    def setup_run_environment(self, env):
        aws_ofi_nccl_home = self.spec.prefix
        env.append_path("LD_LIBRARY_PATH", aws_ofi_nccl_home.lib)
        env.append_path("LD_LIBRARY_PATH", self.prefix.lib)

    # To enable this plug-in to work with NCCL add it to the LD_LIBRARY_PATH
    def setup_dependent_run_environment(self, env, dependent_spec):
        aws_ofi_nccl_home = self.spec["aws-ofi-nccl"].prefix
        env.append_path("LD_LIBRARY_PATH", aws_ofi_nccl_home.lib)
        env.append_path("LD_LIBRARY_PATH", self.prefix.lib)

    def configure_args(self):
        spec = self.spec

@@ -37,13 +37,11 @@ class AwsOfiRccl(AutotoolsPackage):

    # To enable this plug-in to work with RCCL add it to the LD_LIBRARY_PATH
    def setup_run_environment(self, env):
        aws_ofi_rccl_home = self.spec["aws-ofi-rccl"].prefix
        env.prepend_path("LD_LIBRARY_PATH", aws_ofi_rccl_home.lib)
        env.prepend_path("LD_LIBRARY_PATH", self.prefix.lib)

    # To enable this plug-in to work with RCCL add it to the LD_LIBRARY_PATH
    def setup_dependent_run_environment(self, env, dependent_spec):
        aws_ofi_rccl_home = self.spec["aws-ofi-rccl"].prefix
        env.prepend_path("LD_LIBRARY_PATH", aws_ofi_rccl_home.lib)
        env.prepend_path("LD_LIBRARY_PATH", self.prefix.lib)

    def configure_args(self):
        spec = self.spec
@@ -319,7 +319,7 @@ def install_test(self):
        )

        # Spack's logs don't handle colored output well
        bazel = Executable(self.spec["bazel"].command.path)
        bazel = Executable(self.command.path)
        bazel(
            "--output_user_root=/tmp/spack/bazel/spack-test",
            "build",
@@ -332,7 +332,7 @@ def install_test(self):
        assert exe(output=str) == "Hi!\n"

    def setup_dependent_package(self, module, dependent_spec):
        module.bazel = Executable(self.spec["bazel"].command.path)
        module.bazel = Executable(self.command.path)

    @property
    def parallel(self):
@@ -72,6 +72,7 @@ class Blt(Package):
    # if you export targets this could cause problems in downstream
    # projects if not handled properly. More info here:
    # https://llnl-blt.readthedocs.io/en/develop/tutorial/exporting_targets.html
    version("0.7.0", sha256="df8720a9cba1199d21f1d32649cebb9dddf95aa61bc3ac23f6c8a3c6b6083528")
    version("0.6.2", sha256="84b663162957c1fe0e896ac8e94cbf2b6def4a152ccfa12a293db14fb25191c8")
    version("0.6.1", sha256="205540b704b8da5a967475be9e8f2d1a5e77009b950e7fbf01c0edabc4315906")
    version("0.6.0", sha256="ede355e85f7b11d7c8442b51e4f7871c152093818606e00b1e1cf30f67ebdb23")
@@ -17,6 +17,7 @@ class Btop(MakefilePackage, CMakePackage):

    license("Apache-2.0")

    version("1.4.0", sha256="ac0d2371bf69d5136de7e9470c6fb286cbee2e16b4c7a6d2cd48a14796e86650")
    version("1.3.2", sha256="331d18488b1dc7f06cfa12cff909230816a24c57790ba3e8224b117e3f0ae03e")
    version("1.3.0", sha256="375e078ce2091969f0cd14030620bd1a94987451cf7a73859127a786006a32cf")
    version("1.2.13", sha256="668dc4782432564c35ad0d32748f972248cc5c5448c9009faeb3445282920e02")
@@ -170,7 +170,7 @@ def install(self, spec, prefix):
    @run_after("install")
    def install_pkgconfig(self):
        # Add pkgconfig file after installation
        libdir = self.spec["bzip2"].libs.directories[0]
        libdir = self.libs.directories[0]
        pkg_path = join_path(self.prefix.lib, "pkgconfig")
        mkdirp(pkg_path)

@@ -186,7 +186,7 @@ def initconfig_compiler_entries(self):
        compiler = self.compiler
        entries = super().initconfig_compiler_entries()

        if spec.satisfies("+rocm"):
        if spec.satisfies("+rocm ^blt@:0.6"):
            entries.insert(0, cmake_cache_path("CMAKE_CXX_COMPILER", spec["hip"].hipcc))

        llnl_link_helpers(entries, spec, compiler)

@@ -221,7 +221,7 @@ def initconfig_compiler_entries(self):
        # Default entries are already defined in CachedCMakePackage, inherit them:
        entries = super().initconfig_compiler_entries()

        if spec.satisfies("+rocm"):
        if spec.satisfies("+rocm ^blt@:0.6"):
            entries.insert(0, cmake_cache_path("CMAKE_CXX_COMPILER", spec["hip"].hipcc))

        llnl_link_helpers(entries, spec, compiler)
@@ -169,7 +169,7 @@ def check_install(self):
        os.environ.pop("CLIKPATH", "")
        os.environ.pop("PLANCKLIKE", "")

        exe = spec["cosmomc"].command.path
        exe = self.command.path
        args = []
        if spec.satisfies("+mpi"):
            # Add mpirun prefix
@@ -52,9 +52,9 @@ def test_smoke_test(self):
        ctest = Executable(spec["cmake"].prefix.bin.ctest)

        cmake(
            spec["diy"].prefix.share.DIY.examples.smoke_test,
            self.prefix.share.DIY.examples.smoke_test,
            f"-DMPI_HOME={spec['mpi'].prefix}",
            f"-DCMAKE_PREFIX_PATH={spec['diy'].prefix}",
            f"-DCMAKE_PREFIX_PATH={self.prefix}",
        )
        cmake("--build", ".")
        ctest("--verbose")
@@ -28,10 +28,8 @@ def install(self, spec, prefix):

    @run_after("install")
    def check_install(self):
        print("Attempt to call 'dust' with '--version'")
        dust = Executable(join_path(self.spec["dust"].prefix.bin, "dust"))
        dust = Executable(join_path(self.prefix.bin, "dust"))
        output = dust("--version", output=str.split)
        print("stdout received from dust is '{}'".format(output))
        assert "Dust " in output

    def test_run(self):
@@ -81,7 +81,7 @@ class Dyninst(CMakePackage):
        sha256="0064d8d51bd01bd0035e1ebc49276f627ce6366d4524c92cf47d3c09b0031f96",
    )

    requires("%gcc", when="@:13.0.0", msg="dyninst builds only with GCC")
    requires("%gcc", when="@:12", msg="dyninst builds only with GCC")

    # No Mac support (including apple-clang)
    conflicts("platform=darwin", msg="macOS is not supported")
@@ -40,20 +40,17 @@ def flag_handler(self, name, flags):
        return flags, None, None

    def install(self, spec, prefix):
        options = ["--prefix=%s" % prefix]
        oapp = options.append

        # Specify installation directory for Fortran module files
        # Default is [INCLUDEDIR/FC_TYPE]
        oapp("--with-moduledir=%s" % prefix.include)
        options = [f"--prefix={prefix}", f"--with-moduledir={prefix.include}"]

        # Netcdf4/HDF
        hdf_libs = "-L%s -lhdf5_hl -lhdf5" % spec["hdf5"].prefix.lib
        hdf_libs = f"-L{spec['hdf5'].prefix.lib} -lhdf5_hl -lhdf5"
        options.extend(
            [
                "--with-netcdf-incs=-I%s" % spec["netcdf-fortran"].prefix.include,
                "--with-netcdf-libs=-L%s -lnetcdff -lnetcdf %s"
                % (spec["netcdf-fortran"].prefix.lib, hdf_libs),
                f"--with-netcdf-incs=-I{spec['netcdf-fortran'].prefix.include}",
                f"--with-netcdf-libs=-L{spec['netcdf-fortran'].prefix.lib} "
                f"-lnetcdff -lnetcdf {hdf_libs}",
            ]
        )

@@ -66,7 +63,6 @@ def install(self, spec, prefix):
    def test_etsf_io_help(self):
        """check etsf_io can execute (--help)"""

        path = self.spec["etsf-io"].prefix.bin.etsf_io
        etsfio = which(path)
        etsfio = which(self.prefix.bin.etsf_io)
        out = etsfio("--help", output=str.split, error=str.split)
        assert "Usage: etsf_io" in out
@@ -24,6 +24,7 @@ class Fastjet(AutotoolsPackage):

    license("GPL-2.0-only")

    version("3.4.3", sha256="cc175471bfab8656b8c6183a8e5e9ad05d5f7506e46f3212a9a8230905b8f6a3")
    version("3.4.2", sha256="b3d33155b55ce43f420cd6d99b525acf7bdc2593a7bb7ea898a9ddb3d8ca38e3")
    version("3.4.1", sha256="05608c6ff213f06dd9de723813d6b4dccd51e661ac13098f74bfc9eeaf1cb5aa")
    version("3.4.0", sha256="ee07c8747c8ead86d88de4a9e4e8d1e9e7d7614973f5631ba8297f7a02478b91")
@@ -55,8 +55,8 @@ class Flecsi(CMakePackage, CudaPackage, ROCmPackage):
        description="Set Caliper Profiling Detail",
        multi=False,
    )
    variant("kokkos", default=False, description="Enable Kokkos Support", when="@:2.3.1")
    variant("openmp", default=False, description="Enable OpenMP Support", when="@:2.3.1")
    variant("kokkos", default=False, description="Enable Kokkos Support", when="@:2.3")
    variant("openmp", default=False, description="Enable OpenMP Support", when="@:2.3")

    depends_on("c", type="build")
    depends_on("cxx", type="build")
@@ -67,8 +67,8 @@ class Flecsi(CMakePackage, CudaPackage, ROCmPackage):

    depends_on("graphviz", when="+graphviz")
    depends_on("hdf5+hl+mpi", when="+hdf5")
    depends_on("metis@5.1.0:")
    depends_on("parmetis@4.0.3:")
    depends_on("metis@5.1.0:", when="@:2.3.1")
    depends_on("parmetis@4.0.3:", when="@:2.3.1")
    depends_on("boost@1.70.0: cxxstd=17 +program_options +stacktrace")

    depends_on("cmake@3.15:")
@@ -83,7 +83,7 @@ class Flecsi(CMakePackage, CudaPackage, ROCmPackage):
    requires("^kokkos +cuda_constexpr +cuda_lambda", when="^kokkos +cuda")
    depends_on("kokkos +rocm", when="+kokkos +rocm")
    depends_on("kokkos +openmp", when="+kokkos +openmp")
    requires("+openmp", when="@:2.3.1 ^kokkos +openmp")
    requires("+openmp", when="@:2.3 ^kokkos +openmp")
    depends_on("legion@cr-20210122", when="backend=legion @2.0:2.1.0")
    depends_on("legion@cr-20230307", when="backend=legion @2.2.0:2.2.1")
    depends_on("legion@24.03.0:", when="backend=legion @2.2.2:")
@@ -35,13 +35,8 @@ class Freeipmi(AutotoolsPackage):
    def configure_args(self):
        # FIXME: If root checking of root installation is added fix this:
        # Discussed in issue #4432
        tty.warn(
            "Requires 'root' for bmc-watchdog.service installation to" " /lib/systemd/system/ !"
        )

        args = [
            "--prefix={0}".format(prefix),
            "--with-systemdsystemunitdir=" + self.spec["freeipmi"].prefix.lib.systemd.system,
        tty.warn("Requires 'root' for bmc-watchdog.service installation to /lib/systemd/system/")
        return [
            f"--prefix={self.prefix}",
            f"--with-systemdsystemunitdir={self.prefix.lib.systemd.system}",
        ]

        return args
@@ -1005,7 +1005,7 @@ def write_specs_file(self):
        specs_file = join_path(self.spec_dir, "specs")
        with open(specs_file, "w") as f:
            # can't extend the builtins without dumping them first
            f.write(self.spec["gcc"].command("-dumpspecs", output=str, error=os.devnull).strip())
            f.write(self.command("-dumpspecs", output=str, error=os.devnull).strip())

            f.write("\n\n# Generated by Spack\n\n")

@@ -1179,7 +1179,7 @@ def _post_buildcache_install_hook(self):

        # Setting up the runtime environment shouldn't be necessary here.
        relocation_args = []
        gcc = self.spec["gcc"].command
        gcc = self.command
        specs_file = os.path.join(self.spec_dir, "specs")
        dryrun = gcc("test.c", "-###", output=os.devnull, error=str).strip()
        if not dryrun:

@@ -120,4 +120,4 @@ def setup_dependent_package(self, module, dependent_spec):
            install_tree('bin', prefix.bin)
    """
    # Add a go command/compiler for extensions
    module.go = self.spec["go"].command
    module.go = self.command
@@ -25,7 +25,7 @@ class Hiptt(MakefilePackage, ROCmPackage):

    # To enable this package add it to the LD_LIBRARY_PATH
    def setup_dependent_build_environment(self, env, dependent_spec):
        hiptt_home = self.spec["hiptt"].prefix
        hiptt_home = self.prefix
        env.prepend_path("cuTT_ROOT", hiptt_home)
        env.prepend_path("cuTT_LIBRARY", hiptt_home.lib)
        env.prepend_path("cuTT_INCLUDE_PATH", hiptt_home.include)
@@ -100,7 +100,7 @@ def check_install(self):
        prefixes = ";".join(
            [
                self.spec["libdrm"].prefix,
                self.spec["hsakmt-roct"].prefix,
                self.prefix,
                self.spec["numactl"].prefix,
                self.spec["pkgconfig"].prefix,
                self.spec["llvm-amdgpu"].prefix,
@@ -108,7 +108,7 @@ def check_install(self):
                self.spec["ncurses"].prefix,
            ]
        )
        hsakmt_path = ";".join([self.spec["hsakmt-roct"].prefix])
        hsakmt_path = ";".join([self.prefix])
        cc_options = [
            "-DCMAKE_PREFIX_PATH=" + prefixes,
            "-DLIBHSAKMT_PATH=" + hsakmt_path,
@@ -36,7 +36,7 @@ def configure_args(self):
    @run_after("install")
    @on_package_attributes(run_tests=True)
    def install_test(self):
        jq = self.spec["jq"].command
        jq = self.command
        f = os.path.join(os.path.dirname(__file__), "input.json")

        assert jq(".bar", input=f, output=str) == "2\n"
@@ -33,5 +33,4 @@ class Keepalived(AutotoolsPackage):
    depends_on("openssl")

    def configure_args(self):
        args = ["--with-systemdsystemunitdir=" + self.spec["keepalived"].prefix.lib.systemd.system]
        return args
        return [f"--with-systemdsystemunitdir={self.prefix.lib.systemd.system}"]

@@ -34,9 +34,5 @@ def autoreconf(self, spec, prefix):
        bash("autogen.sh")

    def configure_args(self):
        args = [
            "--disable-manpages",
            "--with-bashcompletiondir="
            + join_path(self.spec["kmod"].prefix, "share", "bash-completion", "completions"),
        ]
        return args
        completions = join_path(self.prefix, "share", "bash-completion", "completions")
        return ["--disable-manpages", f"--with-bashcompletiondir={completions}"]
@@ -9,7 +9,7 @@ class Lfortran(CMakePackage):
    """Modern interactive LLVM-based Fortran compiler"""

    homepage = "https://lfortran.org"
    url = "https://lfortran.github.io/tarballs/release/lfortran-0.19.0.tar.gz"
    url = "https://github.com/lfortran/lfortran/releases/download/v0.49.0/lfortran-0.49.0.tar.gz"
    git = "https://github.com/lfortran/lfortran.git"

    maintainers("certik")
@@ -17,8 +17,13 @@ class Lfortran(CMakePackage):

    # The build process uses 'git describe --tags' to get the package version
    version("main", branch="main", get_full_repo=True)
    version("0.49.0", sha256="a9225fd33d34ce786f72a964a1179579caff62dd176a6a1477d2594fecdc7cd6")
    version("0.30.0", sha256="aafdfbfe81d69ceb3650ae1cf9bcd8a1f1532d895bf88f3071fe9610859bcd6f")
    version("0.19.0", sha256="d496f61d7133b624deb3562677c0cbf98e747262babd4ac010dbd3ab4303d805")
    version(
        "0.19.0",
        sha256="d496f61d7133b624deb3562677c0cbf98e747262babd4ac010dbd3ab4303d805",
        url="https://lfortran.github.io/tarballs/release/lfortran-0.19.0.tar.gz",
    )

    depends_on("c", type="build")  # generated
    depends_on("cxx", type="build")  # generated
@@ -30,7 +35,8 @@ class Lfortran(CMakePackage):
    depends_on("python@3:", type="build", when="@main")
    depends_on("cmake", type="build")
    depends_on("llvm@11:15", type=("build", "run"), when="@0.19.0+llvm")
    depends_on("llvm@11:16", type=("build", "run"), when="@0.30.0:+llvm")
    depends_on("llvm@11:16", type=("build", "run"), when="@0.30.0+llvm")
    depends_on("llvm@11:", type=("build", "run"), when="+llvm")
    depends_on("zlib-api")
    depends_on("re2c", type="build", when="@main")
    depends_on("bison@:3.4", type="build", when="@main")
@@ -176,15 +176,13 @@ def setup_build_environment(self, env):

    # To enable this package add it to the LD_LIBRARY_PATH
    def setup_run_environment(self, env):
        libfabric_home = self.spec["libfabric"].prefix
        env.prepend_path("LD_LIBRARY_PATH", libfabric_home.lib)
        env.prepend_path("LD_LIBRARY_PATH", libfabric_home.lib64)
        env.prepend_path("LD_LIBRARY_PATH", self.prefix.lib)
        env.prepend_path("LD_LIBRARY_PATH", self.prefix.lib64)

    # To enable this package add it to the LD_LIBRARY_PATH
    def setup_dependent_run_environment(self, env, dependent_spec):
        libfabric_home = self.spec["libfabric"].prefix
        env.prepend_path("LD_LIBRARY_PATH", libfabric_home.lib)
        env.prepend_path("LD_LIBRARY_PATH", libfabric_home.lib64)
        env.prepend_path("LD_LIBRARY_PATH", self.prefix.lib)
        env.prepend_path("LD_LIBRARY_PATH", self.prefix.lib64)

    @when("@main")
    def autoreconf(self, spec, prefix):
@@ -0,0 +1,635 @@
diff --git a/include/fuse_kernel.h b/include/fuse_kernel.h
index c632b58..c0ef981 100644
--- a/include/fuse_kernel.h
+++ b/include/fuse_kernel.h
@@ -88,12 +88,11 @@
 #ifndef _LINUX_FUSE_H
 #define _LINUX_FUSE_H

-#include <sys/types.h>
-#define __u64 uint64_t
-#define __s64 int64_t
-#define __u32 uint32_t
-#define __s32 int32_t
-#define __u16 uint16_t
+#ifdef __KERNEL__
+#include <linux/types.h>
+#else
+#include <stdint.h>
+#endif

 /*
  * Version negotiation:
@@ -128,42 +127,42 @@
 	userspace works under 64bit kernels */

 struct fuse_attr {
-	__u64	ino;
-	__u64	size;
-	__u64	blocks;
-	__u64	atime;
-	__u64	mtime;
-	__u64	ctime;
-	__u32	atimensec;
-	__u32	mtimensec;
-	__u32	ctimensec;
-	__u32	mode;
-	__u32	nlink;
-	__u32	uid;
-	__u32	gid;
-	__u32	rdev;
-	__u32	blksize;
-	__u32	padding;
+	uint64_t	ino;
+	uint64_t	size;
+	uint64_t	blocks;
+	uint64_t	atime;
+	uint64_t	mtime;
+	uint64_t	ctime;
+	uint32_t	atimensec;
+	uint32_t	mtimensec;
+	uint32_t	ctimensec;
+	uint32_t	mode;
+	uint32_t	nlink;
+	uint32_t	uid;
+	uint32_t	gid;
+	uint32_t	rdev;
+	uint32_t	blksize;
+	uint32_t	padding;
 };

 struct fuse_kstatfs {
-	__u64	blocks;
-	__u64	bfree;
-	__u64	bavail;
-	__u64	files;
-	__u64	ffree;
-	__u32	bsize;
-	__u32	namelen;
-	__u32	frsize;
-	__u32	padding;
-	__u32	spare[6];
+	uint64_t	blocks;
+	uint64_t	bfree;
+	uint64_t	bavail;
+	uint64_t	files;
+	uint64_t	ffree;
+	uint32_t	bsize;
+	uint32_t	namelen;
+	uint32_t	frsize;
+	uint32_t	padding;
+	uint32_t	spare[6];
 };

 struct fuse_file_lock {
-	__u64	start;
-	__u64	end;
-	__u32	type;
-	__u32	pid; /* tgid */
+	uint64_t	start;
+	uint64_t	end;
+	uint32_t	type;
+	uint32_t	pid; /* tgid */
 };

 /**
@@ -334,143 +333,143 @@ enum fuse_notify_code {
 #define FUSE_COMPAT_ENTRY_OUT_SIZE 120

 struct fuse_entry_out {
-	__u64	nodeid;		/* Inode ID */
-	__u64	generation;	/* Inode generation: nodeid:gen must
+	uint64_t	nodeid;		/* Inode ID */
+	uint64_t	generation;	/* Inode generation: nodeid:gen must
 				   be unique for the fs's lifetime */
-	__u64	entry_valid;	/* Cache timeout for the name */
-	__u64	attr_valid;	/* Cache timeout for the attributes */
-	__u32	entry_valid_nsec;
-	__u32	attr_valid_nsec;
+	uint64_t	entry_valid;	/* Cache timeout for the name */
+	uint64_t	attr_valid;	/* Cache timeout for the attributes */
+	uint32_t	entry_valid_nsec;
+	uint32_t	attr_valid_nsec;
 	struct fuse_attr attr;
 };

 struct fuse_forget_in {
-	__u64	nlookup;
+	uint64_t	nlookup;
 };

 struct fuse_forget_one {
-	__u64	nodeid;
-	__u64	nlookup;
+	uint64_t	nodeid;
+	uint64_t	nlookup;
 };

 struct fuse_batch_forget_in {
-	__u32	count;
-	__u32	dummy;
+	uint32_t	count;
+	uint32_t	dummy;
 };

 struct fuse_getattr_in {
-	__u32	getattr_flags;
-	__u32	dummy;
-	__u64	fh;
+	uint32_t	getattr_flags;
+	uint32_t	dummy;
+	uint64_t	fh;
 };

 #define FUSE_COMPAT_ATTR_OUT_SIZE 96

 struct fuse_attr_out {
-	__u64	attr_valid;	/* Cache timeout for the attributes */
-	__u32	attr_valid_nsec;
-	__u32	dummy;
+	uint64_t	attr_valid;	/* Cache timeout for the attributes */
+	uint32_t	attr_valid_nsec;
+	uint32_t	dummy;
 	struct fuse_attr attr;
 };

 #define FUSE_COMPAT_MKNOD_IN_SIZE 8

 struct fuse_mknod_in {
-	__u32	mode;
-	__u32	rdev;
-	__u32	umask;
-	__u32	padding;
+	uint32_t	mode;
+	uint32_t	rdev;
+	uint32_t	umask;
+	uint32_t	padding;
 };

 struct fuse_mkdir_in {
-	__u32	mode;
-	__u32	umask;
+	uint32_t	mode;
+	uint32_t	umask;
 };

 struct fuse_rename_in {
-	__u64	newdir;
+	uint64_t	newdir;
 };

 struct fuse_link_in {
-	__u64	oldnodeid;
+	uint64_t	oldnodeid;
 };

 struct fuse_setattr_in {
-	__u32	valid;
-	__u32	padding;
-	__u64	fh;
-	__u64	size;
-	__u64	lock_owner;
-	__u64	atime;
-	__u64	mtime;
-	__u64	unused2;
-	__u32	atimensec;
-	__u32	mtimensec;
-	__u32	unused3;
-	__u32	mode;
-	__u32	unused4;
-	__u32	uid;
-	__u32	gid;
-	__u32	unused5;
+	uint32_t	valid;
+	uint32_t	padding;
+	uint64_t	fh;
+	uint64_t	size;
+	uint64_t	lock_owner;
+	uint64_t	atime;
+	uint64_t	mtime;
+	uint64_t	unused2;
+	uint32_t	atimensec;
+	uint32_t	mtimensec;
+	uint32_t	unused3;
+	uint32_t	mode;
+	uint32_t	unused4;
+	uint32_t	uid;
+	uint32_t	gid;
+	uint32_t	unused5;
 };

 struct fuse_open_in {
-	__u32	flags;
-	__u32	unused;
+	uint32_t	flags;
+	uint32_t	unused;
 };

 struct fuse_create_in {
-	__u32	flags;
-	__u32	mode;
-	__u32	umask;
-	__u32	padding;
+	uint32_t	flags;
+	uint32_t	mode;
+	uint32_t	umask;
+	uint32_t	padding;
 };

 struct fuse_open_out {
-	__u64	fh;
-	__u32	open_flags;
-	__u32	padding;
+	uint64_t	fh;
+	uint32_t	open_flags;
+	uint32_t	padding;
 };

 struct fuse_release_in {
-	__u64	fh;
-	__u32	flags;
-	__u32	release_flags;
-	__u64	lock_owner;
+	uint64_t	fh;
+	uint32_t	flags;
+	uint32_t	release_flags;
+	uint64_t	lock_owner;
 };

 struct fuse_flush_in {
-	__u64	fh;
-	__u32	unused;
-	__u32	padding;
-	__u64	lock_owner;
+	uint64_t	fh;
+	uint32_t	unused;
+	uint32_t	padding;
+	uint64_t	lock_owner;
 };

 struct fuse_read_in {
-	__u64	fh;
-	__u64	offset;
-	__u32	size;
-	__u32	read_flags;
-	__u64	lock_owner;
-	__u32	flags;
-	__u32	padding;
+	uint64_t	fh;
+	uint64_t	offset;
+	uint32_t	size;
+	uint32_t	read_flags;
+	uint64_t	lock_owner;
+	uint32_t	flags;
+	uint32_t	padding;
 };

 #define FUSE_COMPAT_WRITE_IN_SIZE 24

 struct fuse_write_in {
-	__u64	fh;
-	__u64	offset;
-	__u32	size;
-	__u32	write_flags;
-	__u64	lock_owner;
-	__u32	flags;
-	__u32	padding;
+	uint64_t	fh;
+	uint64_t	offset;
+	uint32_t	size;
+	uint32_t	write_flags;
+	uint64_t	lock_owner;
+	uint32_t	flags;
+	uint32_t	padding;
 };

 struct fuse_write_out {
-	__u32	size;
-	__u32	padding;
+	uint32_t	size;
+	uint32_t	padding;
 };

 #define FUSE_COMPAT_STATFS_SIZE 48
@@ -480,32 +479,32 @@ struct fuse_statfs_out {
 };

 struct fuse_fsync_in {
-	__u64	fh;
-	__u32	fsync_flags;
-	__u32	padding;
+	uint64_t	fh;
+	uint32_t	fsync_flags;
+	uint32_t	padding;
 };

 struct fuse_setxattr_in {
-	__u32	size;
-	__u32	flags;
+	uint32_t	size;
+	uint32_t	flags;
 };

 struct fuse_getxattr_in {
-	__u32	size;
-	__u32	padding;
+	uint32_t	size;
+	uint32_t	padding;
 };

 struct fuse_getxattr_out {
-	__u32	size;
-	__u32	padding;
+	uint32_t	size;
+	uint32_t	padding;
 };

 struct fuse_lk_in {
-	__u64	fh;
-	__u64	owner;
+	uint64_t	fh;
+	uint64_t	owner;
 	struct fuse_file_lock lk;
-	__u32	lk_flags;
-	__u32	padding;
+	uint32_t	lk_flags;
+	uint32_t	padding;
 };

 struct fuse_lk_out {
@@ -513,179 +512,179 @@ struct fuse_lk_out {
 };

 struct fuse_access_in {
-	__u32	mask;
-	__u32	padding;
+	uint32_t	mask;
+	uint32_t	padding;
 };

 struct fuse_init_in {
-	__u32	major;
-	__u32	minor;
-	__u32	max_readahead;
-	__u32	flags;
+	uint32_t	major;
+	uint32_t	minor;
+	uint32_t	max_readahead;
+	uint32_t	flags;
 };

 struct fuse_init_out {
-	__u32	major;
-	__u32	minor;
-	__u32	max_readahead;
-	__u32	flags;
-	__u16	max_background;
-	__u16	congestion_threshold;
-	__u32	max_write;
+	uint32_t	major;
+	uint32_t	minor;
+	uint32_t	max_readahead;
+	uint32_t	flags;
+	uint16_t	max_background;
+	uint16_t	congestion_threshold;
+	uint32_t	max_write;
 };

 #define CUSE_INIT_INFO_MAX 4096

 struct cuse_init_in {
-	__u32	major;
-	__u32	minor;
-	__u32	unused;
-	__u32	flags;
+	uint32_t	major;
+	uint32_t	minor;
+	uint32_t	unused;
+	uint32_t	flags;
 };

 struct cuse_init_out {
-	__u32	major;
-	__u32	minor;
-	__u32	unused;
-	__u32	flags;
-	__u32	max_read;
-	__u32	max_write;
-	__u32	dev_major;	/* chardev major */
-	__u32	dev_minor;	/* chardev minor */
-	__u32	spare[10];
+	uint32_t	major;
+	uint32_t	minor;
+	uint32_t	unused;
+	uint32_t	flags;
+	uint32_t	max_read;
+	uint32_t	max_write;
+	uint32_t	dev_major;	/* chardev major */
+	uint32_t	dev_minor;	/* chardev minor */
+	uint32_t	spare[10];
 };

 struct fuse_interrupt_in {
-	__u64	unique;
+	uint64_t	unique;
 };

 struct fuse_bmap_in {
-	__u64	block;
-	__u32	blocksize;
-	__u32	padding;
+	uint64_t	block;
+	uint32_t	blocksize;
+	uint32_t	padding;
 };

 struct fuse_bmap_out {
-	__u64	block;
+	uint64_t	block;
 };

 struct fuse_ioctl_in {
-	__u64	fh;
-	__u32	flags;
-	__u32	cmd;
-	__u64	arg;
-	__u32	in_size;
-	__u32	out_size;
+	uint64_t	fh;
+	uint32_t	flags;
+	uint32_t	cmd;
+	uint64_t	arg;
+	uint32_t	in_size;
+	uint32_t	out_size;
 };

 struct fuse_ioctl_iovec {
-	__u64	base;
-	__u64	len;
+	uint64_t	base;
+	uint64_t	len;
 };

 struct fuse_ioctl_out {
-	__s32	result;
-	__u32	flags;
-	__u32	in_iovs;
-	__u32	out_iovs;
+	int32_t	result;
+	uint32_t	flags;
+	uint32_t	in_iovs;
+	uint32_t	out_iovs;
 };

 struct fuse_poll_in {
-	__u64	fh;
-	__u64	kh;
-	__u32	flags;
-	__u32	padding;
+	uint64_t	fh;
+	uint64_t	kh;
+	uint32_t	flags;
+	uint32_t	padding;
 };

 struct fuse_poll_out {
-	__u32	revents;
-	__u32	padding;
+	uint32_t	revents;
+	uint32_t	padding;
 };

 struct fuse_notify_poll_wakeup_out {
-	__u64	kh;
+	uint64_t	kh;
 };

 struct fuse_fallocate_in {
-	__u64	fh;
-	__u64	offset;
-	__u64	length;
-	__u32	mode;
-	__u32	padding;
+	uint64_t	fh;
+	uint64_t	offset;
+	uint64_t	length;
+	uint32_t	mode;
+	uint32_t	padding;
 };

 struct fuse_in_header {
-	__u32	len;
-	__u32	opcode;
-	__u64	unique;
-	__u64	nodeid;
-	__u32	uid;
-	__u32	gid;
-	__u32	pid;
-	__u32	padding;
+	uint32_t	len;
+	uint32_t	opcode;
+	uint64_t	unique;
+	uint64_t	nodeid;
+	uint32_t	uid;
+	uint32_t	gid;
+	uint32_t	pid;
+	uint32_t	padding;
 };

 struct fuse_out_header {
-	__u32	len;
-	__s32	error;
-	__u64	unique;
+	uint32_t	len;
+	int32_t	error;
+	uint64_t	unique;
 };

 struct fuse_dirent {
-	__u64	ino;
-	__u64	off;
-	__u32	namelen;
-	__u32	type;
+	uint64_t	ino;
+	uint64_t	off;
+	uint32_t	namelen;
+	uint32_t	type;
 	char name[];
 };

 #define FUSE_NAME_OFFSET offsetof(struct fuse_dirent, name)
-#define FUSE_DIRENT_ALIGN(x) (((x) + sizeof(__u64) - 1) & ~(sizeof(__u64) - 1))
+#define FUSE_DIRENT_ALIGN(x) (((x) + sizeof(uint64_t) - 1) & ~(sizeof(uint64_t) - 1))
 #define FUSE_DIRENT_SIZE(d) \
 	FUSE_DIRENT_ALIGN(FUSE_NAME_OFFSET + (d)->namelen)

 struct fuse_notify_inval_inode_out {
-	__u64	ino;
-	__s64	off;
-	__s64	len;
+	uint64_t	ino;
+	int64_t	off;
+	int64_t	len;
 };

 struct fuse_notify_inval_entry_out {
-	__u64	parent;
-	__u32	namelen;
-	__u32	padding;
+	uint64_t	parent;
+	uint32_t	namelen;
+	uint32_t	padding;
 };

 struct fuse_notify_delete_out {
-	__u64	parent;
-	__u64	child;
-	__u32	namelen;
-	__u32	padding;
+	uint64_t	parent;
+	uint64_t	child;
+	uint32_t	namelen;
+	uint32_t	padding;
 };

 struct fuse_notify_store_out {
-	__u64	nodeid;
-	__u64	offset;
-	__u32	size;
-	__u32	padding;
+	uint64_t	nodeid;
+	uint64_t	offset;
+	uint32_t	size;
+	uint32_t	padding;
 };

 struct fuse_notify_retrieve_out {
-	__u64	notify_unique;
-	__u64	nodeid;
-	__u64	offset;
-	__u32	size;
-	__u32	padding;
+	uint64_t	notify_unique;
+	uint64_t	nodeid;
+	uint64_t	offset;
+	uint32_t	size;
+	uint32_t	padding;
 };

 /* Matches the size of fuse_write_in */
 struct fuse_notify_retrieve_in {
-	__u64	dummy1;
-	__u64	offset;
-	__u32	size;
-	__u32	dummy2;
-	__u64	dummy3;
-	__u64	dummy4;
+	uint64_t	dummy1;
+	uint64_t	offset;
+	uint32_t	size;
+	uint32_t	dummy2;
+	uint64_t	dummy3;
+	uint64_t	dummy4;
 };

 #endif /* _LINUX_FUSE_H */
@@ -94,6 +94,13 @@ def url_for_version(self, version):
        sha256="94d5c6d9785471147506851b023cb111ef2081d1c0e695728037bbf4f64ce30a",
        when="@:2",
    )
    # fixed in v3.x, but some packages still require v2.x
    # backport of https://github.com/libfuse/libfuse/commit/6b02a7082ae4c560427ff95b51aa8930bb4a6e1f
    patch(
        "fix_aarch64_compile.patch",
        sha256="6ced88c987543d8e62614fa9bd796e7ede7238d55cc50910ece4355c9c4e57d6",
        when="@:2 target=aarch64:",
    )

    executables = ["^fusermount3?$"]

@@ -308,15 +308,13 @@ def cmake_args(self):

    # Make sure that the compiler paths are in the LD_LIBRARY_PATH
    def setup_run_environment(self, env):
        llvm_amdgpu_home = self.spec["llvm-amdgpu"].prefix
        env.prepend_path("LD_LIBRARY_PATH", llvm_amdgpu_home + "/lib")
        env.prepend_path("LD_LIBRARY_PATH", self.prefix.lib)

    # Make sure that the compiler paths are in the LD_LIBRARY_PATH
    def setup_dependent_run_environment(self, env, dependent_spec):
        llvm_amdgpu_home = self.spec["llvm-amdgpu"].prefix
        env.prepend_path("LD_LIBRARY_PATH", llvm_amdgpu_home + "/lib")
        env.prepend_path("LD_LIBRARY_PATH", self.prefix.lib)
        # Required for enabling asan on dependent packages
        for root, _, files in os.walk(self.spec["llvm-amdgpu"].prefix):
        for root, _, files in os.walk(self.prefix):
            if "libclang_rt.asan-x86_64.so" in files:
                env.prepend_path("LD_LIBRARY_PATH", root)
        env.prune_duplicate_paths("LD_LIBRARY_PATH")
@@ -339,7 +337,7 @@ def post_install(self):

    # Required for enabling asan on dependent packages
    def setup_dependent_build_environment(self, env, dependent_spec):
        for root, _, files in os.walk(self.spec["llvm-amdgpu"].prefix):
        for root, _, files in os.walk(self.prefix):
            if "libclang_rt.asan-x86_64.so" in files:
                env.prepend_path("LD_LIBRARY_PATH", root)
        env.prune_duplicate_paths("LD_LIBRARY_PATH")
@@ -36,8 +36,4 @@ class Lxc(AutotoolsPackage):
    depends_on("m4", type="build")

    def configure_args(self):
        args = [
            "bashcompdir="
            + join_path(self.spec["lxc"].prefix, "share", "bash-completion", "completions")
        ]
        return args
        return [f"bashcompdir={join_path(self.prefix, 'share', 'bash-completion', 'completions')}"]

@@ -27,5 +27,4 @@ class Moosefs(AutotoolsPackage):
    depends_on("c", type="build")  # generated

    def configure_args(self):
        args = ["--with-systemdsystemunitdir=" + self.spec["moosefs"].prefix.lib.systemd.system]
        return args
        return [f"--with-systemdsystemunitdir={self.prefix.lib.systemd.system}"]
var/spack/repos/builtin/packages/nvpl-scalapack/package.py (new file, 69 lines)
@@ -0,0 +1,69 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

from spack.package import *


class NvplScalapack(Package):
    """NVPL ScaLAPACK (NVIDIA Performance Libraries ScaLAPACK)."""

    homepage = "https://docs.nvidia.com/nvpl/latest/scalapack/index.html"
    url = (
        "https://developer.download.nvidia.com/compute/nvpl/redist/nvpl_scalapack/"
        "linux-sbsa/nvpl_scalapack-linux-sbsa-0.2.1-archive.tar.xz"
    )

    maintainers("RMeli")

    version("0.2.1", sha256="dada4d1ecf044d90609b9e62750b383d11be9b22c87e109414bcc07dce3c83c9")

    provides("scalapack")

    variant("ilp64", default=False, description="Force 64-bit Fortran native integers")

    depends_on("nvpl-blas +ilp64", when="+ilp64")
    depends_on("nvpl-blas ~ilp64", when="~ilp64")
    depends_on("nvpl-lapack +ilp64", when="+ilp64")
    depends_on("nvpl-lapack ~ilp64", when="~ilp64")
    depends_on("mpi")

    requires("target=armv8.2a:", msg="Any CPU with Arm-v8.2a+ microarch")

    conflicts("%gcc@:7")
    conflicts("%clang@:13")

    def url_for_version(self, version):
        """Spack can't detect the version in the URL above"""
        url = "https://developer.download.nvidia.com/compute/nvpl/redist/nvpl_scalapack/linux-sbsa/nvpl_scalapack-linux-sbsa-{0}-archive.tar.xz"
        return url.format(version)

    @property
    def scalapack_headers(self):
        return find_all_headers(self.spec.prefix.include)

    @property
    def scalapack_libs(self):
        spec = self.spec

        int_type = "ilp64" if spec.satisfies("+ilp64") else "lp64"

        if any(
            spec.satisfies(f"^[virtuals=mpi] {mpi_library}")
            for mpi_library in ["mpich", "cray-mpich", "mvapich", "mvapich2"]
        ):
            mpi_type = "mpich"
        elif spec.satisfies("^[virtuals=mpi] openmpi"):
            mpi_type = "openmpi" + spec["openmpi"].version.up_to(1)
        else:
            raise InstallError(
                f"Unsupported MPI library {spec['mpi']}.\n"
                "Add support to the Spack package, if needed."
            )

        name = [f"libnvpl_blacs_{int_type}_{mpi_type}", f"libnvpl_scalapack_{int_type}"]

        return find_libraries(name, spec.prefix.lib, shared=True, recursive=True)

    def install(self, spec, prefix):
        install_tree(".", prefix)
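As a quick sanity sketch of the naming scheme that scalapack_libs implements above (plain Python; the concrete OpenMPI major version is an assumption for illustration):

# with +ilp64 and an OpenMPI 4.x provider, the property would search for:
int_type = "ilp64"          # from the +ilp64 variant
mpi_type = "openmpi" + "4"  # from ^[virtuals=mpi] openmpi@4
names = [f"libnvpl_blacs_{int_type}_{mpi_type}", f"libnvpl_scalapack_{int_type}"]
assert names == ["libnvpl_blacs_ilp64_openmpi4", "libnvpl_scalapack_ilp64"]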
@@ -291,10 +291,7 @@ def setup_run_environment(self, env):

        # Find openspeedshop library path
        oss_libdir = find_libraries(
            "libopenss-framework",
            root=self.spec["openspeedshop-utils"].prefix,
            shared=True,
            recursive=True,
            "libopenss-framework", root=self.prefix, shared=True, recursive=True
        )
        env.prepend_path("LD_LIBRARY_PATH", os.path.dirname(oss_libdir.joined()))

@@ -42,10 +42,8 @@ def autoreconf(self, spec, prefix):
    def configure_args(self):
        args = [
            "--enable-parallel-netcdf",
            "--with-web-server-path={0}/html".format(
                self.spec["ophidia-analytics-framework"].prefix
            ),
            f"--with-web-server-path={self.prefix}/html",
            "--with-web-server-url=http://127.0.0.1/ophidia",
            "--with-ophidiaio-server-path={0}".format(self.spec["ophidia-io-server"].prefix),
            f"--with-ophidiaio-server-path={self.spec['ophidia-io-server'].prefix}",
        ]
        return args
@@ -231,11 +231,21 @@ class Paraview(CMakePackage, CudaPackage, ROCmPackage):
    depends_on("mpi", when="+mpi")
    conflicts("mpi", when="~mpi")

    depends_on("qt@:4", when="@:5.2.0+qt")
    depends_on("qt+sql", when="+qt")
    with when("+qt"):
        depends_on("qt+opengl", when="@5.3.0:+opengl2")
        depends_on("qt~opengl", when="@5.3.0:~opengl2")
        depends_on("qmake", when="@5.12.0:")
        depends_on("qt", when="@5.3.0:5.11")
        depends_on("qt@:4", when="@:5.2.0")
        with when("^[virtuals=qmake] qt-base"):
            depends_on("qt-base+gui+network+widgets")
            depends_on("qt-base+opengl", when="+opengl2")
            depends_on("qt-base~opengl", when="~opengl2")
            depends_on("qt-tools +assistant")  # Qt::Help
            depends_on("qt-5compat")
            depends_on("qt-svg")
        with when("^[virtuals=qmake] qt"):
            depends_on("qt+sql")
            depends_on("qt+opengl", when="+opengl2")
            depends_on("qt~opengl", when="~opengl2")

    depends_on("gl@3.2:", when="+opengl2")
    depends_on("gl@1.2:", when="~opengl2")
@@ -384,6 +394,12 @@ class Paraview(CMakePackage, CudaPackage, ROCmPackage):

    patch("kits_with_catalyst_5_12.patch", when="@5.12.0")

    # https://github.com/Kitware/VTK-m/commit/c805a6039ea500cb96158cfc11271987c9f67aa4
    patch("vtkm-remove-unused-method-from-mir-tables.patch", when="@5.13.2 %oneapi@2025:")

    # https://github.com/Kitware/VTK-m/commit/48e385af319543800398656645327243a29babfb
    patch("vtkm-fix-problems-in-class-member-names.patch", when="@5.13.2 %oneapi@2025:")

    generator("ninja", "make", default="ninja")
    # https://gitlab.kitware.com/paraview/paraview/-/issues/21223
    conflicts("generator=ninja", when="%xl")
@@ -438,6 +454,10 @@ def flag_handler(self, name, flags):
        if self.spec["hdf5"].satisfies("@1.12:"):
            flags.append("-DH5_USE_110_API")

        if self.spec.satisfies("%oneapi@2025:"):
            flags.append("-Wno-error=missing-template-arg-list-after-template-kw")
            flags.append("-Wno-missing-template-arg-list-after-template-kw")

        return flags, None, None

    def setup_run_environment(self, env):
@@ -594,7 +614,7 @@ def use_x11():
        # The assumed qt version changed to QT5 (as of paraview 5.2.1),
        # so explicitly specify which QT major version is actually being used
        if spec.satisfies("+qt"):
            cmake_args.extend(["-DPARAVIEW_QT_VERSION=%s" % spec["qt"].version[0]])
            cmake_args.extend(["-DPARAVIEW_QT_VERSION=%s" % spec["qmake"].version[0]])
        if IS_WINDOWS:
            # Windows does not currently support Qt Quick
            cmake_args.append("-DVTK_MODULE_ENABLE_VTK_GUISupportQtQuick:STRING=NO")
@@ -748,19 +768,15 @@ def use_x11():

    def test_smoke_test(self):
        """Simple smoke test for ParaView"""
        spec = self.spec

        pvserver = Executable(spec["paraview"].prefix.bin.pvserver)
        pvserver = Executable(self.prefix.bin.pvserver)
        pvserver("--help")

    def test_pvpython(self):
        """Test pvpython"""
        spec = self.spec

        if "~python" in spec:
        if "~python" in self.spec:
            raise SkipTest("Package must be installed with +python")

        pvpython = Executable(spec["paraview"].prefix.bin.pvpython)
        pvpython = Executable(self.prefix.bin.pvpython)
        pvpython("-c", "import paraview")

    def test_mpi_ensemble(self):
@@ -771,8 +787,8 @@ def test_mpi_ensemble(self):
            raise SkipTest("Package must be installed with +mpi and +python")

        mpirun = spec["mpi"].prefix.bin.mpirun
        pvserver = spec["paraview"].prefix.bin.pvserver
        pvpython = Executable(spec["paraview"].prefix.bin.pvpython)
        pvserver = self.prefix.bin.pvserver
        pvpython = Executable(self.prefix.bin.pvpython)

        with working_dir("smoke_test_build", create=True):
            with Popen(
@@ -0,0 +1,22 @@
diff --git a/VTK/ThirdParty/vtkm/vtkvtkm/vtk-m/vtkm/filter/scalar_topology/worklet/contourtree_distributed/HierarchicalContourTree.h b/VTK/ThirdParty/vtkm/vtkvtkm/vtk-m/vtkm/filter/scalar_topology/worklet/contourtree_distributed/HierarchicalContourTree.h
index acd5eca2b..5a23705db 100644
--- a/VTK/ThirdParty/vtkm/vtkvtkm/vtk-m/vtkm/filter/scalar_topology/worklet/contourtree_distributed/HierarchicalContourTree.h
+++ b/VTK/ThirdParty/vtkm/vtkvtkm/vtk-m/vtkm/filter/scalar_topology/worklet/contourtree_distributed/HierarchicalContourTree.h
@@ -663,7 +663,7 @@ std::string HierarchicalContourTree<FieldType>::PrintDotSuperStructure(const cha
   auto hyperarcsPortal = this->Hyperarcs.ReadPortal();
   auto regularNodeGlobalIdsPortal = this->RegularNodeGlobalIds.ReadPortal();
   auto whichIterationPortal = this->WhichIteration.ReadPortal();
-  auto whichRoundPortal = this->whichRound.ReadPortal();
+  auto whichRoundPortal = this->WhichRound.ReadPortal();
   auto superarcsPortal = this->Superarcs.ReadPortal();
   auto superparentsPortal = this->Superparents.ReadPortal();
   for (vtkm::Id supernode = 0; supernode < this->Supernodes.GetNumberOfValues(); supernode++)
@@ -708,7 +708,7 @@ std::string HierarchicalContourTree<FieldType>::PrintDotSuperStructure(const cha
   if (contourtree_augmented::NoSuchElement(superarcTo))
   { // no superarc
     // if it occurred on the final round, it's the global root and is shown as the NULL node
-    if (whichRoundPortal.Get(superarcFrom) == this->NRounds)
+    if (whichRoundPortal.Get(superarcFrom) == this->NumRounds)
     { // root node
       outstream << "\tSN" << std::setw(1) << superarcFrom << " -> SA" << std::setw(1) << superarc
                 << " [label=\"S" << std::setw(1) << superarc << "\",style=dotted]\n";
@@ -0,0 +1,16 @@
diff --git a/VTK/ThirdParty/vtkm/vtkvtkm/vtk-m/vtkm/filter/contour/worklet/mir/MIRTables.h b/VTK/ThirdParty/vtkm/vtkvtkm/vtk-m/vtkm/filter/contour/worklet/mir/MIRTables.h
index 3dff3329e..a6f4d4f1f 100644
--- a/VTK/ThirdParty/vtkm/vtkvtkm/vtk-m/vtkm/filter/contour/worklet/mir/MIRTables.h
+++ b/VTK/ThirdParty/vtkm/vtkvtkm/vtk-m/vtkm/filter/contour/worklet/mir/MIRTables.h
@@ -11400,11 +11400,6 @@ public:
     return FacesLookup[shape];
   }

-  VTKM_EXEC vtkm::UInt8 GetPoint(vtkm::Id pointIndex) const
-  {
-    return this->CellFacePortal.Get(pointIndex);
-  }
-
 private:
   typename vtkm::cont::ArrayHandle<vtkm::UInt8>::ReadPortalType MIRTablesDataPortal;
   typename vtkm::cont::ArrayHandle<vtkm::UInt16>::ReadPortalType MIRTablesIndicesPortal;
@@ -458,17 +458,15 @@ def symlink_windows(self):

     @run_after("install")
     def install_cpanm(self):
-        spec = self.spec
         maker = make
         cpan_dir = join_path("cpanm", "cpanm")
         if sys.platform == "win32":
             maker = nmake
             cpan_dir = join_path(self.stage.source_path, cpan_dir)
             cpan_dir = windows_sfn(cpan_dir)
-        if "+cpanm" in spec:
+        if "+cpanm" in self.spec:
             with working_dir(cpan_dir):
-                perl = spec["perl"].command
-                perl("Makefile.PL")
+                self.command("Makefile.PL")
                 maker()
                 maker("install")

@@ -502,7 +500,7 @@ def setup_dependent_package(self, module, dependent_spec):
         if dependent_spec.package.is_extension:
             # perl extension builds can have a global perl
             # executable function
-            module.perl = self.spec["perl"].command
+            module.perl = self.command

             # Add variables for library directory
             module.perl_lib_dir = dependent_spec.prefix.lib.perl5
@@ -541,8 +539,7 @@ def filter_config_dot_pm(self):
         kwargs = {"ignore_absent": True, "backup": False, "string": False}

         # Find the actual path to the installed Config.pm file.
-        perl = self.spec["perl"].command
-        config_dot_pm = perl(
+        config_dot_pm = self.command(
             "-MModule::Loaded", "-MConfig", "-e", "print is_loaded(Config)", output=str
         )

@@ -606,17 +603,15 @@ def command(self):
         ext = ""
         if sys.platform == "win32":
             ext = ".exe"
-        path = os.path.join(self.prefix.bin, "{0}{1}{2}".format(self.spec.name, ver, ext))
+        path = os.path.join(self.prefix.bin, f"{self.spec.name}{ver}{ext}")
         if os.path.exists(path):
             return Executable(path)
         else:
-            msg = "Unable to locate {0} command in {1}"
-            raise RuntimeError(msg.format(self.spec.name, self.prefix.bin))
+            raise RuntimeError(f"Unable to locate {self.spec.name} command in {self.prefix.bin}")

     def test_version(self):
         """check version"""
-        perl = self.spec["perl"].command
-        out = perl("--version", output=str.split, error=str.split)
+        out = self.command("--version", output=str.split, error=str.split)
         expected = ["perl", str(self.spec.version)]
         for expect in expected:
             assert expect in out
@@ -626,6 +621,5 @@ def test_hello(self):
         msg = "Hello, World!"
         options = ["-e", "use warnings; use strict;\nprint('%s\n');" % msg]

-        perl = self.spec["perl"].command
-        out = perl(*options, output=str.split, error=str.split)
+        out = self.command(*options, output=str.split, error=str.split)
         assert msg in out
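The companion half of the refactor is self.command: the command property above builds an Executable from self.prefix.bin, and setup_dependent_package() republishes it as a module-level perl function for extension builds. A sketch of how a hypothetical dependent package would pick that up, assuming the injected global shown above:

    class PerlExampleModule(PerlPackage):
        def configure(self, spec, prefix):
            # `perl` is injected into the build module by the perl package's
            # setup_dependent_package(); it is the same Executable that
            # perl's command property returns.
            perl("Makefile.PL", f"INSTALL_BASE={prefix}")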
@@ -13,11 +13,14 @@ class Plink2(MakefilePackage):
     url = "https://github.com/chrchang/plink-ng/archive/refs/tags/v2.00a5.11.tar.gz"
     list_url = "https://github.com/chrchang/plink-ng/tags"

-    maintainers("teaguesterling")
-
     license("GPLv3", checked_by="teaguesterling")
     # See: https://github.com/chrchang/plink-ng/blob/master/2.0/COPYING

+    maintainers("teaguesterling")
+
+    version(
+        "2.0.0-a.6.9", sha256="492fc1e87b60b2209b7c3c1d616a01c1126978424cf795184d013ecf8a47e028"
+    )
     version("2.00a5.11", sha256="8b664baa0b603f374123c32818ea2f053272840ba60e998d06cb864f3a6f1c38")
     version("2.00a5.10", sha256="53d845c6a04f8fc701e6f58f6431654e36cbf6b79bff25099862d169a8199a45")
     version("2.00a4.3", sha256="3cd1d26ac6dd1c451b42440f479789aa19d2b57642c118aac530a5ff1b0b4ce6")
@@ -34,6 +37,9 @@ class Plink2(MakefilePackage):

     build_directory = "2.0/build_dynamic"

+    def url_for_version(self, version):
+        return f"https://github.com/chrchang/plink-ng/archive/refs/tags/v{version!s}.tar.gz"
+
     def edit(self, spec, prefix):
         with working_dir(self.build_directory):
             makefile = FileFilter("Makefile")
@@ -244,7 +244,7 @@ def apply_patch(self, other):

     def setup_dependent_package(self, module, dependent_spec):
         # Make plumed visible from dependent packages
-        module.plumed = dependent_spec["plumed"].command
+        module.plumed = self.command

     @property
     def plumed_inc(self):
@@ -121,8 +121,7 @@ def check_install(self):
         with working_dir(checkdir, create=True):
             source = join_path(os.path.dirname(self.module.__file__), "example1.c")
             cflags = spec["pocl"].headers.cpp_flags.split()
-            # ldflags = spec["pocl"].libs.ld_flags.split()
-            ldflags = ["-L%s" % spec["pocl"].prefix.lib, "-lOpenCL", "-lpoclu"]
+            ldflags = [f"-L{self.prefix.lib}", "-lOpenCL", "-lpoclu"]
             output = compile_c_and_execute(source, cflags, ldflags)
             compare_output_file(
                 output, join_path(os.path.dirname(self.module.__file__), "example1.out")
@@ -96,7 +96,7 @@ def filter_compilers(self):
                 "-I{0}".format(
                     " -I".join(
                         [
-                            os.path.join(spec["psi4"].prefix.include, "psi4"),
+                            os.path.join(self.prefix.include, "psi4"),
                             os.path.join(spec["boost"].prefix.include, "boost"),
                             os.path.join(spec["python"].headers.directories[0]),
                             spec["lapack"].prefix.include,
var/spack/repos/builtin/packages/py-asdf-astropy/package.py (new file, 30 lines)
@@ -0,0 +1,30 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

from spack.package import *


class PyAsdfAstropy(PythonPackage):
    """ASDF serialization support for astropy"""

    homepage = "https://asdf-astropy.readthedocs.io/"
    pypi = "asdf_astropy/asdf_astropy-0.7.1.tar.gz"

    license("BSD-3-Clause", checked_by="lgarrison")

    version("0.7.1", sha256="5aa5a448ee0945bd834a9ba8fb86cf43b39e85d24260e1339b734173ab6024c7")

    depends_on("python@3.10:", type=("build", "run"))

    depends_on("py-setuptools@60:", type="build")
    depends_on("py-setuptools-scm@3.4: +toml", type="build")

    depends_on("py-asdf@2.14.4:", type=("build", "run"))
    depends_on("py-asdf-coordinates-schemas@0.3:", type=("build", "run"))
    depends_on("py-asdf-transform-schemas@0.5:", type=("build", "run"))
    depends_on("py-asdf-standard@1.1.0:", type=("build", "run"))
    # depends_on("py-astropy@5.2.0:", type=("build", "run"))
    conflicts("py-astropy@:5.1")
    depends_on("py-numpy@1.24:", type=("build", "run"))
    depends_on("py-packaging@19:", type=("build", "run"))
@@ -0,0 +1,26 @@
# Copyright Spack Project Developers. See COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

from spack.package import *


class PyAsdfCoordinatesSchemas(PythonPackage):
    """ASDF schemas for coordinates"""

    homepage = "https://www.asdf-format.org/projects/asdf-coordinates-schemas/"
    pypi = "asdf_coordinates_schemas/asdf_coordinates_schemas-0.3.0.tar.gz"

    maintainers("lgarrison")

    license("BSD-3-Clause", checked_by="lgarrison")

    version("0.3.0", sha256="c98b6015dcec87a158fcde7798583f0615d08125fa6e1e9de16c4eb03fcd604e")

    depends_on("python@3.9:", type=("build", "run"))

    depends_on("py-setuptools@60:", type="build")
    depends_on("py-setuptools-scm@3.4: +toml", type="build")

    depends_on("py-asdf@2.12.1:", type=("build", "run"))
    depends_on("py-asdf-standard@1.1.0:", type=("build", "run"))
@@ -15,12 +15,15 @@ class PyAsdfTransformSchemas(PythonPackage):

     license("BSD-3-Clause")

+    version("0.5.0", sha256="82cf4c782575734a895327f25ff583ce9499d7e2b836fe8880b2d7961c6b462b")
     version("0.3.0", sha256="0cf2ff7b22ccb408fe58ddd9b2441a59ba73fe323e416d59b9e0a4728a7d2dd6")

+    depends_on("python@3.9:", when="@0.5.0:", type=("build", "run"))
     depends_on("python@3.8:", type=("build", "run"))

     depends_on("py-setuptools@42:", type="build")
     depends_on("py-setuptools-scm@3.4: +toml", type="build")

+    depends_on("py-asdf-standard@1.1.0:", when="@0.5.0:", type=("build", "run"))
     depends_on("py-asdf-standard@1.0.1:", type=("build", "run"))
-    depends_on("py-importlib-resources@3:", type=("build", "run"), when="^python@:3.8")
+    depends_on("py-importlib-resources@3:", type=("build", "run"), when="@:0.3.0 ^python@:3.8")
@@ -17,6 +17,7 @@ class PyAsdf(PythonPackage):

     license("BSD-3-Clause")

+    version("4.1.0", sha256="0ff44992c85fd768bd9a9512ab7f012afb52ddcee390e9caf67e30d404122da1")
     version("3.5.0", sha256="047ad7bdd8f40b04b8625abfd119a35d18b344301c60ea9ddf63964e7ce19669")
     version("2.15.0", sha256="686f1c91ebf987d41f915cfb6aa70940d7ad17f87ede0be70463147ad2314587")
     version("2.4.2", sha256="6ff3557190c6a33781dae3fd635a8edf0fa0c24c6aca27d8679af36408ea8ff2")
@@ -14,6 +14,8 @@ class PyCachecontrol(PythonPackage):

     license("Apache-2.0")

+    version("0.14.2", sha256="7d47d19f866409b98ff6025b6a0fca8e4c791fb31abbd95f622093894ce903a2")
+    version("0.14.0", sha256="7db1195b41c81f8274a7bbd97c956f44e8348265a1bc7641c37dfebc39f0c938")
     version("0.13.1", sha256="f012366b79d2243a6118309ce73151bf52a38d4a5dac8ea57f09bd29087e506b")
     version("0.13.0", sha256="fd3fd2cb0ca66b9a6c1d56cc9709e7e49c63dbd19b1b1bcbd8d3f94cedfe8ce5")
     version("0.12.11", sha256="a5b9fcc986b184db101aa280b42ecdcdfc524892596f606858e0b7a8b4d9e144")
@@ -22,12 +24,13 @@ class PyCachecontrol(PythonPackage):
     variant("filecache", default=False, description="Add lockfile dependency")
     variant("redis", default=False, description="Add redis dependency")

-    depends_on("py-flit-core@3.2:3", when="@0.13.1", type="build")
+    depends_on("py-flit-core@3.2:3", when="@0.13.1:", type="build")
     depends_on("py-setuptools", when="@:0.13.0", type="build")
-    depends_on("py-requests@2.16.0:", when="@0.13", type=("build", "run"))
+    depends_on("py-requests@2.16.0:", when="@0.13:", type=("build", "run"))
     depends_on("py-requests", type=("build", "run"))
     depends_on("py-msgpack@0.5.2:", type=("build", "run"))
-    depends_on("py-filelock@3.8.0:", when="@0.13+filecache", type=("build", "run"))
+    depends_on("py-msgpack@0.5.2:1", when="@0.14:", type=("build", "run"))
+    depends_on("py-filelock@3.8.0:", when="@0.13:+filecache", type=("build", "run"))
     depends_on("py-lockfile@0.9:", when="@0.12+filecache", type=("build", "run"))
     depends_on("py-redis@2.10.5:", when="+redis", type=("build", "run"))
@@ -16,9 +16,11 @@ class PyCwlUtils(PythonPackage):

     license("Apache-2.0")

+    version("0.37", sha256="7b69c948f8593fdf44b44852bd8ef94c666736ce0ac12cf6e66e2a72ad16a773")
     version("0.21", sha256="583f05010f7572f3a69310325472ccb6efc2db7f43dc6428d03552e0ffcbaaf9")

     depends_on("python@3.6:", type=("build", "run"))
+    depends_on("python@3.8:", when="@0.29:", type=("build", "run"))
     depends_on("py-setuptools", type="build")

     depends_on("py-cwl-upgrader@1.2.3:", type=("build", "run"))
@@ -26,4 +28,17 @@ class PyCwlUtils(PythonPackage):
     depends_on("py-rdflib", type=("build", "run"))
     depends_on("py-requests", type=("build", "run"))
     depends_on("py-cachecontrol", type=("build", "run"))
-    depends_on("py-schema-salad@8.3.20220825114525:8", type=("build", "run"))
+    depends_on("py-schema-salad@8.3.20220825114525:8", when="@:0.31", type=("build", "run"))
+    # intermediate versions 0.32:0.36 may not require 8.8, but should work with this stricter
+    # requirement
+    depends_on("py-schema-salad@8.8.20250205075315:8", when="@0.32:", type=("build", "run"))
+    depends_on("py-ruamel-yaml@0.17.6:0.18", when="@0.30:", type=("build", "run"))
+    depends_on("py-typing-extensions", when="@0.37 ^python@:3.9", type=("build", "run"))
+
+    def url_for_version(self, version):
+        url = "https://files.pythonhosted.org/packages/source/c/cwl-utils/cwl{}utils-{}.tar.gz"
+        if version >= Version("0.34"):
+            sep = "_"
+        else:
+            sep = "-"
+        return url.format(sep, version)
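The sep logic above tracks PyPI's switch to underscore-normalized sdist file names: releases from 0.34 on ship as cwl_utils-<version>.tar.gz, while older releases keep cwl-utils-<version>.tar.gz. A quick illustration of the string formatting (hypothetical snippet, not part of the diff):

    url = "https://files.pythonhosted.org/packages/source/c/cwl-utils/cwl{}utils-{}.tar.gz"
    url.format("_", "0.37")  # -> .../cwl-utils/cwl_utils-0.37.tar.gz
    url.format("-", "0.21")  # -> .../cwl-utils/cwl-utils-0.21.tar.gz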
@@ -16,6 +16,7 @@ class PyMypy(PythonPackage):

     license("MIT AND PSF-2.0", checked_by="tgamblin")

+    version("1.15.0", sha256="404534629d51d3efea5c800ee7c42b72a6554d6c400e6a79eafe15d11341fd43")
     version("1.14.1", sha256="7ec88144fe9b510e8475ec2f5f251992690fcf89ccb4500b214b4226abcd32d6")
     version("1.13.0", sha256="0291a61b6fbf3e6673e3405cfcc0e7650bebc7939659fdca2702958038bd835e")
     version("1.12.1", sha256="f5b3936f7a6d0e8280c9bdef94c7ce4847f5cdfc258fbb2c29a8c1711e8bb96d")
@@ -13,6 +13,10 @@ class PySchemaSalad(PythonPackage):
     pypi = "schema-salad/schema_salad-8.7.20241021092521.tar.gz"

     license("Apache-2.0")
+    version(
+        "8.8.20250205075315",
+        sha256="444a45509fb048347e0ec205b2af6390f0bb145f7183716ba6af2f75a22b8bdd",
+    )
     version(
         "8.7.20241021092521",
         sha256="287b27adff70e55dd715bfbea18bb1a58fd73de14b4273be4038559308089cdf",
@@ -33,18 +37,23 @@ class PySchemaSalad(PythonPackage):
     depends_on("py-ruamel-yaml@0.17.6:0.18", when="@8.4.20231113094720:", type=("build", "run"))
     depends_on("py-rdflib@4.2.2:6", type=("build", "run"))
     depends_on("py-mistune@2.0.3:2.0", type=("build", "run"))
-    depends_on("py-cachecontrol@0.11.7:0.12+filecache", type=("build", "run"))
+    depends_on(
+        "py-cachecontrol@0.11.7:0.12+filecache", when="@:8.7.20240718183047", type=("build", "run")
+    )
+    depends_on(
+        "py-cachecontrol@0.13.1:0.14+filecache",
+        when="@8.7.20240820070935:8.7.20241021092521",
+        type=("build", "run"),
+    )
+    depends_on(
+        "py-cachecontrol@0.14:0.14+filecache", when="@8.8.20241204110045:", type=("build", "run")
+    )
     depends_on("py-setuptools-scm@6.2:+toml", type="build")
     depends_on("py-setuptools-scm@8.0.4:8+toml", when="@8.4.20231024070348:", type="build")
     depends_on("py-mypy@0.961", when="@8.3.20220717184004:8.3.20221028160159", type="build")
     depends_on("py-mypy@0.991", when="@8.3.20221209165047:8.4.20230201194352", type="build")
+    depends_on("py-mypy@1.12.1", when="@8.7.20241021092521", type="build")
+    depends_on("py-mypy@1.15.0", when="@8.8.20250205075315", type="build")
     depends_on("py-black@19.10b0:", type="build")
+    depends_on("py-black@19.10b0:24.10", when="@8.7.20241021092521:", type="build")
     depends_on("py-types-pkg-resources", when="@:8.4.20231117150958", type="build")
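Taken together, the when= ranges above partition schema-salad releases by the toolchain they were published against, so concretizing the newest pinned release should select the matching pins. Illustrative expectation (not output from this PR):

    $ spack spec py-schema-salad@8.8.20250205075315
    # expected picks: py-cachecontrol@0.14+filecache, py-mypy@1.15.0,
    # and py-black within the 19.10b0:24.10 range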
@@ -54,7 +54,7 @@ def bindir(self):
     def command(self):
         """Returns a python Executable instance"""
         python_name = "python" if self.spec.satisfies("platform=windows") else "python3"
-        return which(python_name, path=self.bindir)
+        return which(python_name, path=self.bindir, required=True)

     def _get_path(self, name) -> str:
         return self.command(
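For context: Spack's which() ordinarily returns None when no matching executable is found, so the required=True added above turns a latent "NoneType is not callable" failure into an immediate, descriptive error. A sketch of the difference, assuming that behavior:

    exe = which("python3", path="/nonexistent/bin")                 # -> None
    exe = which("python3", path="/nonexistent/bin", required=True)  # raises instead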
@@ -19,16 +19,16 @@

 from spack.package import *


-def make_pyvenv_cfg(python_spec: Spec, venv_prefix: str) -> str:
+def make_pyvenv_cfg(python_pkg: Package, venv_prefix: str) -> str:
     """Make a pyvenv_cfg file for a given (real) python command and venv prefix."""
-    python_cmd = python_spec.command.path
+    python_cmd = python_pkg.command.path
     lines = [
         # directory containing python command
         f"home = {os.path.dirname(python_cmd)}",
         # venv should not allow site packages from the real python to be loaded
         "include-system-site-packages = false",
         # version of the python command
-        f"version = {python_spec.version}",
+        f"version = {python_pkg.spec.version}",
         # the path to the python command
         f"executable = {python_cmd}",
         # command "used" to create the pyvenv.cfg
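For orientation, the file this helper emits looks roughly like the following (paths invented for illustration; the trailing command key falls outside the lines shown here):

    home = /opt/spack/opt/python-3.11.7-abcdef/bin
    include-system-site-packages = false
    version = 3.11.7
    executable = /opt/spack/opt/python-3.11.7-abcdef/bin/python3.11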
@@ -1369,20 +1369,15 @@ def add_files_to_view(self, view, merge_map, skip_if_exists=True):
             return

         with open(pyvenv_cfg, "w") as cfg_file:
-            cfg_file.write(make_pyvenv_cfg(self.spec["python"], projection))
+            cfg_file.write(make_pyvenv_cfg(self, projection))

     def test_hello_world(self):
         """run simple hello world program"""
-        # do not use self.command because we are also testing the run env
-        python = self.spec["python"].command
-
-        msg = "hello world!"
-        out = python("-c", f'print("{msg}")', output=str.split, error=str.split)
-        assert msg in out
+        out = self.command("-c", 'print("hello world!")', output=str.split, error=str.split)
+        assert "hello world!" in out

     def test_import_executable(self):
         """ensure import of installed executable works"""
-        python = self.spec["python"].command
-
+        python = self.command
         out = python("-c", "import sys; print(sys.executable)", output=str.split, error=str.split)
         assert self.spec.prefix in out
@@ -36,6 +36,9 @@ class QtTools(QtPackage):
         description="Qt Widgets Designer for designing and building GUIs with Qt Widgets.",
     )

+    # use of relative path in https://github.com/qt/qttools/blob/6.8.2/.gitmodules
+    conflicts("+assistant", when="@6.8.2", msg="Incorrect git submodule prevents +assistant")
+
     depends_on("llvm +clang")

     depends_on("qt-base +network")
@@ -176,7 +176,7 @@ def initconfig_compiler_entries(self):
         # Default entries are already defined in CachedCMakePackage, inherit them:
         entries = super().initconfig_compiler_entries()

-        if spec.satisfies("+rocm"):
+        if spec.satisfies("+rocm ^blt@:0.6"):
             entries.insert(0, cmake_cache_path("CMAKE_CXX_COMPILER", spec["hip"].hipcc))

         # adrienbernede-23-01
@@ -295,7 +295,7 @@ def initconfig_compiler_entries(self):
         # Default entries are already defined in CachedCMakePackage, inherit them:
         entries = super().initconfig_compiler_entries()

-        if spec.satisfies("+rocm"):
+        if spec.satisfies("+rocm ^blt@:0.6"):
             entries.insert(0, cmake_cache_path("CMAKE_CXX_COMPILER", spec["hip"].hipcc))

         llnl_link_helpers(entries, spec, compiler)
@@ -89,6 +89,6 @@ def test_make(self):
         test_dir = join_path(self.test_suite.current_test_cache_dir, self.test_src_dir)
         with working_dir(test_dir):
             cmake = self.spec["cmake"].command
-            cmake("-DCMAKE_PREFIX_PATH=" + self.spec["rocm-clang-ocl"].prefix, ".")
+            cmake(f"-DCMAKE_PREFIX_PATH={self.prefix}", ".")
             make = which("make")
             make()
@@ -80,9 +80,8 @@ def test_cmake(self):
         """Test cmake"""
         test_dir = join_path(self.test_suite.current_test_cache_dir, self.test_src_dir)
         with working_dir(test_dir, create=True):
-            prefixes = ";".join([self.spec["rocm-cmake"].prefix])
-            cc_options = ["-DCMAKE_PREFIX_PATH=" + prefixes, "."]
-            cmake = which(self.spec["cmake"].prefix.bin.cmake)
+            cc_options = [f"-DCMAKE_PREFIX_PATH={self.prefix}", "."]
+            cmake = self.spec["cmake"].command
             cmake(*cc_options)
             make()
             make("clean")
@@ -345,23 +345,21 @@ class RocmOpenmpExtras(Package):

     def setup_run_environment(self, env):
         devlibs_prefix = self.spec["llvm-amdgpu"].prefix
-        openmp_extras_prefix = self.spec["rocm-openmp-extras"].prefix
         llvm_prefix = self.spec["llvm-amdgpu"].prefix
         hsa_prefix = self.spec["hsa-rocr-dev"].prefix
-        env.set("AOMP", "{0}".format(llvm_prefix))
-        env.set("HIP_DEVICE_LIB_PATH", "{0}/amdgcn/bitcode".format(devlibs_prefix))
-        env.prepend_path("CPATH", "{0}/include".format(openmp_extras_prefix))
-        env.prepend_path("LIBRARY_PATH", "{0}/lib".format(openmp_extras_prefix))
+        env.set("AOMP", f"{llvm_prefix}")
+        env.set("HIP_DEVICE_LIB_PATH", f"{devlibs_prefix}/amdgcn/bitcode")
+        env.prepend_path("CPATH", f"{self.prefix}/include")
+        env.prepend_path("LIBRARY_PATH", f"{self.prefix}/lib")
         if self.spec.satisfies("@5.3.0:"):
-            env.prepend_path("LD_LIBRARY_PATH", "{0}/lib".format(openmp_extras_prefix))
-            env.prepend_path("LD_LIBRARY_PATH", "{0}/lib".format(hsa_prefix))
-            env.set("AOMP_GPU", "`{0}/bin/mygpu`".format(openmp_extras_prefix))
+            env.prepend_path("LD_LIBRARY_PATH", f"{self.prefix}/lib")
+            env.prepend_path("LD_LIBRARY_PATH", f"{hsa_prefix}/lib")
+            env.set("AOMP_GPU", f"`{self.prefix}/bin/mygpu`")

     def setup_build_environment(self, env):
-        openmp_extras_prefix = self.spec["rocm-openmp-extras"].prefix
         llvm_prefix = self.spec["llvm-amdgpu"].prefix
-        env.set("AOMP", "{0}".format(llvm_prefix))
-        env.set("FC", "{0}/bin/flang".format(openmp_extras_prefix))
+        env.set("AOMP", f"{llvm_prefix}")
+        env.set("FC", f"{self.prefix}/bin/flang")
         if self.spec.satisfies("@6.1:"):
             env.prepend_path("LD_LIBRARY_PATH", self.spec["hsa-rocr-dev"].prefix.lib)
         gfx_list = "gfx700 gfx701 gfx801 gfx803 gfx900 gfx902 gfx906 gfx908"
@@ -492,12 +490,11 @@ def install(self, spec, prefix):
         src = self.stage.source_path
         gfx_list = os.environ["GFXLIST"]
         gfx_list = gfx_list.replace(" ", ";")
-        openmp_extras_prefix = self.spec["rocm-openmp-extras"].prefix
         devlibs_prefix = self.spec["llvm-amdgpu"].prefix
         if self.spec.satisfies("@6.1:"):
-            devlibs_src = "{0}/rocm-openmp-extras/llvm-project/amd/device-libs".format(src)
+            devlibs_src = f"{src}/rocm-openmp-extras/llvm-project/amd/device-libs"
         else:
-            devlibs_src = "{0}/rocm-openmp-extras/rocm-device-libs".format(src)
+            devlibs_src = f"{src}/rocm-openmp-extras/rocm-device-libs"
         hsa_prefix = self.spec["hsa-rocr-dev"].prefix
         if self.spec.satisfies("@:6.2"):
             hsakmt_prefix = self.spec["hsakmt-roct"].prefix
@@ -507,10 +504,10 @@ def install(self, spec, prefix):
         comgr_prefix = self.spec["comgr"].prefix
         llvm_inc = "/rocm-openmp-extras/llvm-project/llvm/include"
         llvm_prefix = self.spec["llvm-amdgpu"].prefix
-        omp_bin_dir = "{0}/bin".format(openmp_extras_prefix)
-        omp_lib_dir = "{0}/lib".format(openmp_extras_prefix)
-        bin_dir = "{0}/bin".format(llvm_prefix)
-        lib_dir = "{0}/lib".format(llvm_prefix)
+        omp_bin_dir = f"{self.prefix}/bin"
+        omp_lib_dir = f"{self.prefix}/lib"
+        bin_dir = f"{llvm_prefix}/bin"
+        lib_dir = f"{llvm_prefix}/lib"
         flang_warning = "-Wno-incompatible-pointer-types-discards-qualifiers"
         libpgmath = "/rocm-openmp-extras/flang/runtime/libpgmath/lib/common"
         elfutils_inc = spec["elfutils"].prefix.include
@@ -543,23 +540,21 @@ def install(self, spec, prefix):
             os.path.join(omp_bin_dir, "flang-legacy"), os.path.join(bin_dir, "flang-legacy")
         )
         os.symlink(os.path.join(omp_lib_dir, "libdevice"), os.path.join(lib_dir, "libdevice"))
-        os.symlink(
-            os.path.join(openmp_extras_prefix, "lib-debug"), os.path.join(llvm_prefix, "lib-debug")
-        )
+        os.symlink(os.path.join(self.prefix, "lib-debug"), os.path.join(llvm_prefix, "lib-debug"))

         # Set cmake args
         components = dict()

         components["aomp-extras"] = [
             "../rocm-openmp-extras/aomp-extras",
-            "-DLLVM_DIR={0}".format(llvm_prefix),
-            "-DDEVICE_LIBS_DIR={0}/amdgcn/bitcode".format(devlibs_prefix),
-            "-DCMAKE_C_COMPILER={0}/clang".format(bin_dir),
-            "-DCMAKE_CXX_COMPILER={0}/clang++".format(bin_dir),
+            f"-DLLVM_DIR={llvm_prefix}",
+            f"-DDEVICE_LIBS_DIR={devlibs_prefix}/amdgcn/bitcode",
+            f"-DCMAKE_C_COMPILER={bin_dir}/clang",
+            f"-DCMAKE_CXX_COMPILER={bin_dir}/clang++",
             "-DAOMP_STANDALONE_BUILD=0",
-            "-DDEVICELIBS_ROOT={0}".format(devlibs_src),
+            f"-DDEVICELIBS_ROOT={devlibs_src}",
             "-DNEW_BC_PATH=1",
-            "-DAOMP={0}".format(llvm_prefix),
+            f"-DAOMP={llvm_prefix}",
         ]

         # Shared cmake configuration for openmp, openmp-debug
@@ -569,39 +564,39 @@ def install(self, spec, prefix):
         # Passing the elfutils include path via cmake options is a
         # workaround until hsa-rocr-dev switches to elfutils.
         openmp_common_args = [
-            "-DROCM_DIR={0}".format(hsa_prefix),
-            "-DDEVICE_LIBS_DIR={0}/amdgcn/bitcode".format(devlibs_prefix),
+            f"-DROCM_DIR={hsa_prefix}",
+            f"-DDEVICE_LIBS_DIR={devlibs_prefix}/amdgcn/bitcode",
             "-DAOMP_STANDALONE_BUILD=0",
-            "-DDEVICELIBS_ROOT={0}".format(devlibs_src),
-            "-DOPENMP_TEST_C_COMPILER={0}/clang".format(bin_dir),
-            "-DOPENMP_TEST_CXX_COMPILER={0}/clang++".format(bin_dir),
-            "-DCMAKE_C_COMPILER={0}/clang".format(bin_dir),
-            "-DCMAKE_CXX_COMPILER={0}/clang++".format(bin_dir),
-            "-DLIBOMPTARGET_AMDGCN_GFXLIST={0}".format(gfx_list),
+            f"-DDEVICELIBS_ROOT={devlibs_src}",
+            f"-DOPENMP_TEST_C_COMPILER={bin_dir}/clang",
+            f"-DOPENMP_TEST_CXX_COMPILER={bin_dir}/clang++",
+            f"-DCMAKE_C_COMPILER={bin_dir}/clang",
+            f"-DCMAKE_CXX_COMPILER={bin_dir}/clang++",
+            f"-DLIBOMPTARGET_AMDGCN_GFXLIST={gfx_list}",
             "-DLIBOMP_COPY_EXPORTS=OFF",
-            "-DHSA_LIB={0}/lib".format(hsa_prefix),
-            "-DCOMGR_INCLUDE={0}/include".format(comgr_prefix),
-            "-DCOMGR_LIB={0}/lib".format(comgr_prefix),
+            f"-DHSA_LIB={hsa_prefix}/lib",
+            f"-DCOMGR_INCLUDE={comgr_prefix}/include",
+            f"-DCOMGR_LIB={comgr_prefix}/lib",
             "-DOPENMP_ENABLE_LIBOMPTARGET=1",
             "-DOPENMP_ENABLE_LIBOMPTARGET_HSA=1",
-            "-DLLVM_MAIN_INCLUDE_DIR={0}{1}".format(src, llvm_inc),
-            "-DLLVM_INSTALL_PREFIX={0}".format(llvm_prefix),
-            "-DCMAKE_C_FLAGS=-isystem{0} -I{1}".format(elfutils_inc, ffi_inc),
-            "-DCMAKE_CXX_FLAGS=-isystem{0} -I{1}".format(elfutils_inc, ffi_inc),
+            f"-DLLVM_MAIN_INCLUDE_DIR={src}{llvm_inc}",
+            f"-DLLVM_INSTALL_PREFIX={llvm_prefix}",
+            f"-DCMAKE_C_FLAGS=-isystem{elfutils_inc} -I{ffi_inc}",
+            f"-DCMAKE_CXX_FLAGS=-isystem{elfutils_inc} -I{ffi_inc}",
             "-DNEW_BC_PATH=1",
-            "-DHSA_INCLUDE={0}/include/hsa".format(hsa_prefix),
+            f"-DHSA_INCLUDE={hsa_prefix}/include/hsa",
             "-DLIBOMPTARGET_ENABLE_DEBUG=ON",
         ]
         if self.spec.satisfies("@5.7:6.1"):
             openmp_common_args += [
-                "-DLIBDRM_LIB={0}/lib".format(libdrm_prefix),
-                "-DHSAKMT_INC_PATH={0}/include".format(hsakmt_prefix),
-                "-DNUMACTL_DIR={0}".format(numactl_prefix),
+                f"-DLIBDRM_LIB={libdrm_prefix}/lib",
+                f"-DHSAKMT_INC_PATH={hsakmt_prefix}/include",
+                f"-DNUMACTL_DIR={numactl_prefix}",
             ]
         if self.spec.satisfies("@:6.2"):
             openmp_common_args += [
-                "-DHSAKMT_LIB={0}/lib".format(hsakmt_prefix),
-                "-DHSAKMT_LIB64={0}/lib64".format(hsakmt_prefix),
+                f"-DHSAKMT_LIB={hsakmt_prefix}/lib",
+                f"-DHSAKMT_LIB64={hsakmt_prefix}/lib64",
             ]
         if self.spec.satisfies("+asan"):
             openmp_common_args += [
@@ -626,15 +621,15 @@ def install(self, spec, prefix):
         # Shared cmake configuration for pgmath, flang, flang-runtime
         flang_common_args = [
             "-DLLVM_ENABLE_ASSERTIONS=ON",
-            "-DLLVM_CONFIG={0}/llvm-config".format(bin_dir),
-            "-DCMAKE_CXX_COMPILER={0}/clang++".format(bin_dir),
-            "-DCMAKE_C_COMPILER={0}/clang".format(bin_dir),
-            "-DCMAKE_Fortran_COMPILER={0}/flang".format(bin_dir),
+            f"-DLLVM_CONFIG={bin_dir}/llvm-config",
+            f"-DCMAKE_CXX_COMPILER={bin_dir}/clang++",
+            f"-DCMAKE_C_COMPILER={bin_dir}/clang",
+            f"-DCMAKE_Fortran_COMPILER={bin_dir}/flang",
             "-DLLVM_TARGETS_TO_BUILD=AMDGPU;x86",
             # Spack thinks some warnings from the flang build are errors.
             # Disable those warnings in C and CXX flags.
-            "-DCMAKE_CXX_FLAGS={0}".format(flang_warning) + " -I{0}{1}".format(src, libpgmath),
-            "-DCMAKE_C_FLAGS={0}".format(flang_warning) + " -I{0}{1}".format(src, libpgmath),
+            f"-DCMAKE_CXX_FLAGS={flang_warning} -I{src}{libpgmath}",
+            f"-DCMAKE_C_FLAGS={flang_warning} -I{src}{libpgmath}",
         ]

         components["pgmath"] = ["../rocm-openmp-extras/flang/runtime/libpgmath"]
@@ -662,9 +657,9 @@ def install(self, spec, prefix):
         ]

         components["flang-legacy"] = [
-            "-DCMAKE_C_COMPILER={0}/clang".format(bin_dir),
-            "-DCMAKE_CXX_COMPILER={0}/clang++".format(bin_dir),
-            "../rocm-openmp-extras/flang/flang-legacy/{0}".format(flang_legacy_version),
+            f"-DCMAKE_C_COMPILER={bin_dir}/clang",
+            f"-DCMAKE_CXX_COMPILER={bin_dir}/clang++",
+            f"../rocm-openmp-extras/flang/flang-legacy/{flang_legacy_version}",
         ]

         flang_legacy_flags = []
@@ -675,14 +670,10 @@ def install(self, spec, prefix):
         ):
             flang_legacy_flags.append("-D_GLIBCXX_USE_CXX11_ABI=0")
         if self.spec.satisfies("@6.2:"):
-            flang_legacy_flags.append("-L{0}".format(ncurses_lib_dir))
-            flang_legacy_flags.append("-L{0}".format(zlib_lib_dir))
-        components["flang-legacy-llvm"] += [
-            "-DCMAKE_CXX_FLAGS={0}".format(" ".join(flang_legacy_flags))
-        ]
-        components["flang-legacy"] += [
-            "-DCMAKE_CXX_FLAGS={0}".format(" ".join(flang_legacy_flags))
-        ]
+            flang_legacy_flags.append(f"-L{ncurses_lib_dir}")
+            flang_legacy_flags.append(f"-L{zlib_lib_dir}")
+        components["flang-legacy-llvm"] += [f"-DCMAKE_CXX_FLAGS={' '.join(flang_legacy_flags)}"]
+        components["flang-legacy"] += [f"-DCMAKE_CXX_FLAGS={' '.join(flang_legacy_flags)}"]

         components["flang"] = [
             "../rocm-openmp-extras/flang",
@@ -696,7 +687,7 @@ def install(self, spec, prefix):
             "../rocm-openmp-extras/flang",
             "-DLLVM_INSTALL_RUNTIME=ON",
             "-DFLANG_BUILD_RUNTIME=ON",
-            "-DOPENMP_BUILD_DIR={0}/spack-build-openmp/runtime/src".format(src),
+            f"-DOPENMP_BUILD_DIR={src}/spack-build-openmp/runtime/src",
         ]
         components["flang-runtime"] += flang_common_args
@@ -715,7 +706,7 @@ def install(self, spec, prefix):
             cmake_args = components[component]
             cmake_args.extend(std_cmake_args)
             if component == "flang-legacy-llvm":
-                with working_dir("spack-build-{0}/llvm-legacy".format(component), create=True):
+                with working_dir(f"spack-build-{component}/llvm-legacy", create=True):
                     cmake_args.append("-DCMAKE_BUILD_TYPE=Release")
                     cmake(*cmake_args)
                     make()
@@ -727,7 +718,7 @@ def install(self, spec, prefix):
                     make("install")
                     os.symlink(os.path.join(bin_dir, "clang"), os.path.join(omp_bin_dir, "clang"))
             else:
-                with working_dir("spack-build-{0}".format(component), create=True):
+                with working_dir(f"spack-build-{component}", create=True):
                     # OpenMP build needs to be run twice(Release, Debug)
                     if component == "openmp-debug":
                         cmake_args.append("-DCMAKE_BUILD_TYPE=Debug")
@@ -23,7 +23,7 @@ class Rrdtool(AutotoolsPackage):
     depends_on("perl-extutils-makemaker")

     def configure_args(self):
-        return ["--with-systemdsystemunitdir=" + self.spec["rrdtool"].prefix.lib.systemd.system]
+        return [f"--with-systemdsystemunitdir={self.prefix.lib.systemd.system}"]

     def flag_handler(self, name, flags):
         if name == "ldlibs" and "intl" in self.spec["gettext"].libs.names:
@@ -49,5 +49,4 @@ def autoreconf(self, spec, prefix):
         Executable("./autogen.sh")()

     def configure_args(self):
-        args = ["--with-systemdsystemunitdir=" + self.spec["rsyslog"].prefix.lib.systemd.system]
-        return args
+        return [f"--with-systemdsystemunitdir={self.prefix.lib.systemd.system}"]
@@ -25,15 +25,15 @@ class SentieonGenomics(Package):
     url = "https://s3.amazonaws.com/sentieon-release/software/sentieon-genomics-201808.01.tar.gz"
     maintainers("snehring")

-    version("202308.02", sha256="a04b98c1b7c4e8916fdc45f15685d5fd83db56386ec2478eb5ea594170405bd5")
-    version("202308", sha256="d663067f46e499c23819e344cf548fdc362abbf94d3ef086a2e655c072ebe0d6")
-    version("202112.07", sha256="7178769bb5a9619840996356bda4660410fb6f228b2c0b86611bcb1c6bcfc2e1")
-    version("202112.06", sha256="18306036f01c3d41dd7ae738b18ae76fd6b666f1172dd4696cd55b4a8465270d")
-    version("202112.05", sha256="c97b14b0484a0c0025115ad7b911453af7bdcd09874c26cbc39fd0bc5588a306")
-    version("202112.04", sha256="154732dc752476d984908e78b1fc5120d4f23028ee165cc4a451ecc1df0e0246")
-    version("202112.02", sha256="033943df7958550fd42b410d34ae91a8956a905fc90ca8baa93d2830f918872c")
-    version("201808.07", sha256="fb66b18a7b99e44968eb2c3a6a5b761d6b1e70fba9f7dfc4e5db3a74ab3d3dd9")
-    version("201808.01", sha256="6d77bcd5a35539549b28eccae07b19a3b353d027720536e68f46dcf4b980d5f7")
+    version("202308.02", sha256="adb553c72d5180f551aea77fb6626dea36f33f1968f3d0ab0bb00dc7af4f5b55")
+    version("202308", sha256="13dc8d50577fe4767142c50f1a95772db95cd4b173c2b281cdcdd68a5af47cb0")
+    version("202112.07", sha256="ea770483d3e70e9d157fe938096d5ea06e47166d57e0037cf66b6449c7fce2ab")
+    version("202112.06", sha256="c6deefda1da814af9fafdeafe5d3b5da3c8698fb9ec17bd03ea32dbabaaca3e5")
+    version("202112.05", sha256="77f2b7b727b68cfdb302faa914b202137dea87cff5e30ab121d3e42f55194dda")
+    version("202112.04", sha256="36f76ea061bf72c102601717537804101162fa5ebf215061917eeedd128c4d78")
+    version("202112.02", sha256="52ea6ab36d9836612eaa9657ddd6297aa43672eb6065534caba21f9a7845b67f")
+    version("201808.07", sha256="7c9c12dc52770a0fbdf094ce058f43b601bbbf311c13b5fb56a6088ec1680824")
+    version("201808.01", sha256="9f61aa600710d9110463430dcf49cbc03a14dcad5e5bac8717b7e41baaf86fff")

     # Licensing.
     license_require = True
@@ -20,6 +20,7 @@ class Simsipm(CMakePackage):

     license("MIT")

+    version("2.1.0", sha256="e99fcf81f88419c7d7ee6aecec0bbc7a0abc85c5430a4f910a8962d6f5c54e02")
     version("2.0.2", sha256="ba60ed88b54b1b29d089f583dbce93b3272b0b13d47772941339f1503ee3fa48")
     version("1.2.4", sha256="1c633bebb19c490b5e6dfa5ada4a6bc7ec36348237c2626d57843a25af923211")

@@ -28,12 +29,21 @@ class Simsipm(CMakePackage):
     variant("python", default=False, description="Build pybind11-based python bindings")
     variant("openmp", default=False, description="Use OpenMP", when="@:1")

+    variant(
+        "cxxstd",
+        default="17",
+        values=("11", "14", "17", "20"),
+        multi=False,
+        description="Use the specified C++ standard when building.",
+    )
+
     extends("python", when="+python")
     depends_on("python@3.6:", when="+python", type=("build", "run"))
     depends_on("py-pybind11", when="+python", type=("build", "link"))

     def cmake_args(self):
         args = [
+            self.define("CMAKE_CXX_STANDARD", self.spec.variants["cxxstd"].value),
             self.define_from_variant("SIPM_BUILD_PYTHON", "python"),
             self.define_from_variant("SIPM_ENABLE_OPENMP", "openmp"),
             self.define("SIPM_ENABLE_TEST", self.run_tests),
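With the new cxxstd variant wired into CMAKE_CXX_STANDARD above, selecting a standard at install time would look like this (illustrative command line):

    $ spack install simsipm@2.1.0 +python cxxstd=20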
@@ -67,7 +67,7 @@ def setup_run_environment(self, env):
             env.prepend_path("PYTHONPATH", self.spec.prefix.share.tixi.python)

         # allow ctypes to find the tixi library
-        libs = ":".join(self.spec["tixi"].libs.directories)
+        libs = ":".join(self.libs.directories)
         if sys.platform == "darwin":
             env.prepend_path("DYLD_FALLBACK_LIBRARY_PATH", libs)
         else:
@@ -80,9 +80,7 @@ def install(self, spec, prefix):

         # Replace stage dir -> installed src dir in tkConfig
         filter_file(
-            stage_src,
-            installed_src,
-            join_path(self.spec["tk"].libs.directories[0], "tkConfig.sh"),
+            stage_src, installed_src, join_path(self.libs.directories[0], "tkConfig.sh")
         )

     @run_after("install")
@@ -92,8 +90,7 @@ def symlink_wish(self):

     def test_tk_help(self):
         """run tk help"""
-        tk = self.spec["tk"].command
-        tk("-h")
+        self.command("-h")

     def test_tk_load(self):
         """check that tk can be loaded"""
@@ -112,15 +109,11 @@ def command(self):
         # Although we symlink wishX.Y to wish, we also need to support external
         # installations that may not have this symlink, or may have multiple versions
         # of Tk installed in the same directory.
-        return Executable(
-            os.path.realpath(self.prefix.bin.join("wish{0}".format(self.version.up_to(2))))
-        )
+        return Executable(os.path.realpath(self.prefix.bin.join(f"wish{self.version.up_to(2)}")))

     @property
     def libs(self):
-        return find_libraries(
-            ["libtk{0}".format(self.version.up_to(2))], root=self.prefix, recursive=True
-        )
+        return find_libraries([f"libtk{self.version.up_to(2)}"], root=self.prefix, recursive=True)

     def _find_script_dir(self):
         # Put more-specific prefixes first
@@ -11,18 +11,25 @@ class Topaz(PythonPackage):
     featuring micrograph and tomogram denoising with DNNs."""

     homepage = "https://topaz-em.readthedocs.io/"
-    pypi = "topaz-em/topaz-em-0.2.5.tar.gz"
+    pypi = "topaz-em/topaz_em-0.3.7.tar.gz"

     license("GPL-3.0-or-later")

-    version("0.2.5", sha256="002a6eb775598b6c4df0225f3a488bfe6a6da9246e8ca42eb4e7d58f694c25cc")
+    version("0.3.7", sha256="ae3c0d6ccb1e8ad2e4926421442b8cb33a4d01d1ee1dff83174949a9f91cc8a9")
+    version(
+        "0.2.5",
+        sha256="002a6eb775598b6c4df0225f3a488bfe6a6da9246e8ca42eb4e7d58f694c25cc",
+        url="https://files.pythonhosted.org/packages/source/t/topaz-em/topaz-em-0.2.5.tar.gz",
+    )

     depends_on("py-setuptools", type="build")
-    depends_on("py-torch@1:", type=("build", "run"))
+    depends_on("py-torch@1:2.3.1", type=("build", "run"))
     depends_on("py-torchvision", type=("build", "run"))
     depends_on("py-numpy@1.11:", type=("build", "run"))
-    depends_on("py-pandas", type=("build", "run"))
+    depends_on("py-pandas@0.20.3:", type=("build", "run"))
     depends_on("py-scikit-learn@0.19.0:", type=("build", "run"))
     depends_on("py-scipy@0.17.0:", type=("build", "run"))
     depends_on("py-pillow@6.2.0:", type=("build", "run"))
     depends_on("py-future", type=("build", "run"))
+    depends_on("py-tqdm@4.65.0:", type=("build", "run"), when="@0.3.7:")
+    depends_on("py-h5py@3.7.0:", type=("build", "run"), when="@0.3.7:")
@@ -324,7 +324,7 @@ def initconfig_compiler_entries(self):
         # Default entries are already defined in CachedCMakePackage, inherit them:
         entries = super().initconfig_compiler_entries()

-        if "+rocm" in spec:
+        if spec.satisfies("+rocm ^blt@:0.6"):
             entries.insert(0, cmake_cache_path("CMAKE_CXX_COMPILER", spec["hip"].hipcc))

         option_prefix = "UMPIRE_" if spec.satisfies("@2022.03.0:") else ""
@@ -2,8 +2,6 @@
 #
 # SPDX-License-Identifier: (Apache-2.0 OR MIT)

-import sys
-
 from spack.package import *

@@ -47,9 +45,6 @@ def headers(self):
         return HeaderList([])

     def install(self, spec, prefix):
-        if sys.platform != "darwin":
-            raise InstallError("vecLibFort can be installed on macOS only")
-
         filter_file(r"^PREFIX=.*", "", "Makefile")

         make_args = []
@@ -57,13 +52,13 @@ def install(self, spec, prefix):
         if spec.satisfies("%gcc@6:"):
             make_args += ["CFLAGS=-flax-vector-conversions"]

-        make_args += ["PREFIX=%s" % prefix, "install"]
+        make_args += [f"PREFIX={prefix}", "install"]

         make(*make_args)

         # test
         fc = which("fc")
         flags = ["-o", "tester", "-O", "tester.f90"]
-        flags.extend(spec["veclibfort"].libs.ld_flags.split())
+        flags.extend(self.libs.ld_flags.split())
         fc(*flags)
         Executable("./tester")()
@@ -16,7 +16,7 @@ class VepCache(Package):

     license("Apache-2.0", checked_by="teaguesterling")

-    vep_versions = ["112", "111", "110"]
+    vep_versions = ["113", "112", "111", "110"]
     depends_on("vep", type="build")
     for major in vep_versions:
         version(major)
@@ -276,20 +276,16 @@ def cmake_args(self):

     def test_smoke_test(self):
         """Build and run ctests"""
-        spec = self.spec
-
-        if "+examples" not in spec:
+        if "+examples" not in self.spec:
             raise SkipTest("Package must be installed with +examples")

         testdir = "smoke_test_build"
         with working_dir(testdir, create=True):
-            cmake = Executable(spec["cmake"].prefix.bin.cmake)
-            ctest = Executable(spec["cmake"].prefix.bin.ctest)
-            cmakeExampleDir = spec["vtk-m"].prefix.share.doc.VTKm.examples.smoke_test
-
-            cmake(*([cmakeExampleDir, "-DVTKm_ROOT=" + spec["vtk-m"].prefix]))
-            cmake(*(["--build", "."]))
-            ctest(*(["--verbose"]))
+            cmake = Executable(self.spec["cmake"].prefix.bin.cmake)
+            ctest = Executable(self.spec["cmake"].prefix.bin.ctest)
+            cmake(self.prefix.share.doc.VTKm.examples.smoke_test, f"-DVTKm_ROOT={self.prefix}")
+            cmake("--build", ".")
+            ctest("--verbose")

     @run_after("install")
     @on_package_attributes(run_tests=True)
@@ -56,9 +56,9 @@ def setup_build_environment(self, env):
         env.append_path("C_INCLUDE_PATH", self.spec["util-linux"].prefix.include.blkid)

     def configure_args(self):
-        args = ["--with-systemd-unit-dir=" + self.spec["xfsprogs"].prefix.lib.systemd.system]
+        args = [f"--with-systemd-unit-dir={self.prefix.lib.systemd.system}"]
         if self.spec.satisfies("@6.5.0:"):
-            args.append("--with-udev-rule-dir=" + self.spec["xfsprogs"].prefix)
+            args.append(f"--with-udev-rule-dir={self.prefix}")
         return args

     def install(self, spec, prefix):
@@ -185,9 +185,9 @@ def test_smoke_test(self, source_dir=None):
         cmake = Executable(spec["cmake"].prefix.bin.cmake)
         ctest = Executable(spec["cmake"].prefix.bin.ctest)

-        cmake(*([".", "-DZFP_ROOT=" + spec["zfp"].prefix]))
-        cmake(*(["--build", "."]))
-        ctest(*(["--verbose"]))
+        cmake(".", f"-DZFP_ROOT={self.prefix}")
+        cmake("--build", ".")
+        ctest("--verbose")

     @run_after("install")
     def copy_test_files(self):