Compare commits

..

53 Commits

Author SHA1 Message Date
Satish Balay
f2ddfe31ff fix 2024-09-25 10:23:49 -05:00
Satish Balay
9ea2612650 llvm@19: add a work-around so that llvm can detect/use both "-lcurses -lterminfo" when using "ncurses+termlib" 2024-09-24 16:54:11 -05:00
Satish Balay
4e033e4940 llvm@19 removed LLVM_ENABLE_TERMINFO - i.e. expects ncurses~termlib 2024-09-24 09:29:26 +02:00
Satish Balay
0b8a86d397 update provides(libllvm) 2024-09-24 09:29:26 +02:00
Satish Balay
aa33912c84 llvm: add v19.1.0 2024-09-24 09:29:26 +02:00
Todd Gamblin
c070ddac97 database: don't call socket.getfqdn() on every write (#46554)
We've seen `getfqdn()` cause slowdowns on macOS in CI when added elsewhere. It's also
called by database.py every time we write the DB file.

- [x] replace the call with a memoized version so that it is only called once per process.

Signed-off-by: Todd Gamblin <tgamblin@llnl.gov>
2024-09-23 23:59:07 -07:00
Massimiliano Culpo
679770b02c solver: use a new heuristic (#46548)
This PR introduces a new heuristic for the solver, which behaves better when
compilers are treated as nodes. It also performs better on `develop`, where
compilers are still node attributes.

The new heuristic:
- Sets an initial priority for guessing a few attributes. The order is "nodes" (300), 
  "dependencies" (150), "virtual dependencies" (60), "version" and "variants" (30), and
  "targets" and "compilers" (1). This initial priority decays over time during the solve, and
  falls back to the defaults.

- By default, it considers most guessed facts to be "false". For instance, by default a node
  doesn't exist in the optimal answer set, and a version is not picked as a node version.

- There are certain conditions that override the default heuristic using the _priority_ of
  a rule, which we previously didn't use. For instance, by default we guess that an
  `attr("variant", Node, Variant, Value)` is false, but if we know that the node is already
  in the answer set, and the value is the default one, then we guess it is true.

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-09-23 23:44:47 -07:00
Justin Cook
971577d853 spec: fix spelling (#46550)
Signed-off-by: Justin Cook <jscook@lbl.gov>
2024-09-24 07:06:54 +02:00
Kyoko Nagahashi
2c36a8aac3 new package: msr-safe (#46289)
This includes a test_linux699 variant which "activates" a version
that pulls from a repository other than the official repository.
This version is required to work with Linux kernel version
6.9.9 or later. Future official `msr-safe` versions are expected
to support later Linux kernel versions.
2024-09-23 16:04:33 -06:00
Kyoko Nagahashi
bda94e4067 linux-external-modules: update maintainers (#46474) 2024-09-23 15:35:41 -06:00
Adam J. Stewart
93ab70b07c py-dm-tree: support externally-installed pybind11 and abseil-cpp (#46431) 2024-09-23 12:11:38 -04:00
Stephen Nicholas Swatman
44215de24e acts dependencies: new versions as of 2024/09/23 (#46538)
This commit adds some new versions of acts, detray, and vecmem.
2024-09-23 07:16:55 -05:00
Chris Marsh
5b77ce15c7 py-cffi: Add macos patch from cffi-feedstock, add version 1.17.1, update depe (#46484)
* Add macos patch from cffi-feedstock, add version 1.17.1, update depends_on versions

* missing patch

* Use a url for the patch

* Remove 3.12 support
2024-09-23 11:15:55 +02:00
Wouter Deconinck
c118c7733b *: no loop over files with filter_file(*files) (#46420)
* *: no loop over files with filter_file(*files)
* scalpel: revert
2024-09-22 11:04:23 -06:00
Joseph Wang
f73f0f861d qd: add new versions and pull from main source tree (#46451)
* qd: add new versions and pull from main source tree

* add comment to that sha256 identical is intentional
2024-09-22 10:05:26 -06:00
Adam J. Stewart
2269d424f9 py-jaxlib: add GCC 13 support (#46433) 2024-09-22 09:55:06 -06:00
potter-s
d4ad167567 bcftools: Add runtime dependency gffutils (#46255)
Co-authored-by: Simon Potter <sp39@sanger.ac.uk>
2024-09-22 17:50:24 +02:00
Sajid Ali
5811d754d9 libwebsockets: add v4.3.3 (#46380)
Co-authored-by: Bernhard Kaindl <bernhard.kaindl@cloud.com>
2024-09-22 09:44:23 -06:00
Jen Herting
315f3a0b4d py-jiter: new package (#46308) 2024-09-22 17:42:12 +02:00
Jen Herting
3a353c2a04 py-striprtf: New package (#46304) 2024-09-22 17:33:55 +02:00
Jen Herting
29aefd8d86 py-dirtyjson: new package (#46305) 2024-09-22 17:33:00 +02:00
Jen Herting
ba978964e5 py-typing-extensions: added 4.12.2 (#46309) 2024-09-22 17:30:50 +02:00
Jen Herting
ac0a1ff3a2 py-beautifulsoup4: added 4.12.3 (#46310) 2024-09-22 17:29:53 +02:00
Christophe Prud'homme
f2474584bf gsmsh: add missing png, jpeg and zlib deps (#46395) 2024-09-22 17:23:19 +02:00
Adam J. Stewart
61c07becc5 py-tensorflow: add GCC 13 conflict (#46435) 2024-09-22 09:17:32 -06:00
Richard Berger
98b149d711 py-sphinx-fortran: new package (#46401) 2024-09-22 16:52:23 +02:00
Richard Berger
a608f83bfc py-pyenchant: new package (#46400) 2024-09-22 16:39:37 +02:00
Tuomas Koskela
f2c132af2d fftw: Apply fix for missing FFTW3LibraryDepends.cmake (#46477) 2024-09-22 16:35:17 +02:00
Adam J. Stewart
873cb5c1a0 py-horovod: support newer torch, gcc (#46432) 2024-09-22 08:26:25 -06:00
Richard Berger
c2eea41848 py-linkchecker: new package (#46403) 2024-09-22 15:59:02 +02:00
Adam J. Stewart
d62a03bbf8 py-fiona: add v1.10.1 (#46425) 2024-09-22 15:28:55 +02:00
Wouter Deconinck
4e48ed73c6 static-analysis-suite: delete: no longer available (#46519) 2024-09-22 14:56:02 +02:00
Thomas Bouvier
8328851391 py-nvidia-dali: update to v1.41.0 (#46369)
* py-nvidia-dali: update to v1.41.0

* py-nvidia-dali: drop unnecessary 'preferred' attribute
2024-09-22 06:51:09 -06:00
Derek Ryan Strong
73125df0ec fpart: Confirm license and c dependency (#46509) 2024-09-22 14:15:56 +02:00
Wouter Deconinck
639990c385 bird: change url and checksums, add v2.15.1 (#46513) 2024-09-22 14:10:26 +02:00
Wouter Deconinck
34525388fe codec2: fix url; add v1.2.0 (#46514) 2024-09-22 14:06:57 +02:00
Wouter Deconinck
2375f873bf grackle: fix url, checksums, deps and sbang (#46516) 2024-09-22 14:01:28 +02:00
Wouter Deconinck
3e0331b250 goblin-hmc-sim: fix url (#46515) 2024-09-22 13:58:07 +02:00
Wouter Deconinck
c302013c5b yajl: fix url (#46518) 2024-09-22 13:54:57 +02:00
Wouter Deconinck
87d389fe78 shc: fix url (#46517) 2024-09-22 13:54:10 +02:00
Wouter Deconinck
27c590d2dc testdfsio: fix url and switch to be deprecated (#46520) 2024-09-22 13:42:21 +02:00
Wouter Deconinck
960f206a68 evemu: fix url (#46521) 2024-09-22 13:35:31 +02:00
Wouter Deconinck
1ccfb1444a py-falcon: fix url (#46522) 2024-09-22 13:34:38 +02:00
Wouter Deconinck
17199e7fed py-cftime: fix url (#46523) 2024-09-22 13:32:14 +02:00
Wouter Deconinck
b88971e125 tinker: add v8.7.2 (#46527) 2024-09-22 13:28:52 +02:00
Juan Miguel Carceller
f4ddb54293 opendatadetector: Add an env variable pointing to the share directory (#46511)
* opendatadetector: Add an env variable pointing to the share directory

* Rename the new variable to OPENDATADETECTOR_DATA and use join_path

---------

Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-09-22 05:51:25 -05:00
Juan Miguel Carceller
478d1fd8ff geant4: Add a patch for twisted tubes (#45368)
Co-authored-by: jmcarcell <jmcarcell@users.noreply.github.com>
2024-09-22 12:10:37 +02:00
afzpatel
3c7357225a py-onnxruntime: add v1.18.0 -> v1.18.3 and add ROCm support (#46448)
* add ROCm support for py-onnxruntime

* add new versions of py-onnxruntime

* add review changes
2024-09-21 17:23:59 -05:00
Adam J. Stewart
8a3128eb70 Bazel: add GCC 13 support for v6 (#46430)
* Bazel: add GCC 13 support for v6
* Fix offline builds
2024-09-21 11:24:38 -06:00
Adam J. Stewart
096ab11961 py-onnx: link to external protobuf (#46434) 2024-09-21 10:59:20 -06:00
Adam J. Stewart
9577fd8b8a py-ruff: add v0.6.5 (#46459) 2024-09-21 10:39:32 -06:00
Adam J. Stewart
8088fb8ccc py-cmocean: add v4.0.3 (#46454) 2024-09-21 10:18:27 -06:00
Massimiliano Culpo
b93c57cab9 Remove spack.target from code (#46503)
The `spack.target.Target` class is a weird entity that is only needed to:

1. Sort microarchitectures in lists deterministically
2. Allow microarchitectures to be used in hashed containers

This PR removes it, and uses `archspec.cpu.Microarchitecture` directly. To sort lists, we use a proper `key=` where needed. Using `Microarchitecture` objects in sets is made possible by updating the external `archspec`.

Signed-off-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2024-09-21 14:05:41 +02:00
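The two responsibilities listed above can be covered without a wrapper class, which is what this commit does. A minimal sketch (this `Microarchitecture` class is purely illustrative; the real `archspec.cpu.Microarchitecture` gained its `__hash__` via the external `archspec` update mentioned in the message):

```python
class Microarchitecture:
    """Illustrative stand-in for archspec.cpu.Microarchitecture."""

    def __init__(self, name: str):
        self.name = name

    def __eq__(self, other):
        return isinstance(other, Microarchitecture) and self.name == other.name

    def __hash__(self):
        # Hashing by name is what allows these objects in sets and dict keys
        return hash(self.name)


targets = [Microarchitecture(n) for n in ("zen2", "icelake", "aarch64")]

# 1. Deterministic sorting: pass an explicit key= instead of wrapping in Target
ordered = sorted(targets, key=lambda t: t.name)

# 2. Hashed containers: __hash__/__eq__ make set membership work directly
assert Microarchitecture("zen2") in set(targets)
```

This mirrors the diff below, e.g. `sorted(self.target_constraints, key=lambda x: x.name)` in the concretizer.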
83 changed files with 1783 additions and 567 deletions

View File

@@ -219,6 +219,7 @@ def setup(sphinx):
("py:class", "spack.install_test.Pb"),
("py:class", "spack.filesystem_view.SimpleFilesystemView"),
("py:class", "spack.traverse.EdgeAndDepth"),
("py:class", "archspec.cpu.microarchitecture.Microarchitecture"),
]
# The reST default role (used for this markup: `text`) to use for all documents.

View File

@@ -18,7 +18,7 @@
* Homepage: https://pypi.python.org/pypi/archspec
* Usage: Labeling, comparison and detection of microarchitectures
* Version: 0.2.5-dev (commit cbb1fd5eb397a70d466e5160b393b87b0dbcc78f)
* Version: 0.2.5-dev (commit bceb39528ac49dd0c876b2e9bf3e7482e9c2be4a)
astunparse
----------------

View File

@@ -115,6 +115,9 @@ def __eq__(self, other):
and self.cpu_part == other.cpu_part
)
def __hash__(self):
return hash(self.name)
@coerce_target_names
def __ne__(self, other):
return not self == other

View File

@@ -45,6 +45,8 @@
from itertools import chain
from typing import Dict, List, Set, Tuple
import archspec.cpu
import llnl.util.tty as tty
from llnl.string import plural
from llnl.util.filesystem import join_path
@@ -358,7 +360,7 @@ def set_compiler_environment_variables(pkg, env):
_add_werror_handling(keep_werror, env)
# Set the target parameters that the compiler will add
isa_arg = spec.architecture.target.optimization_flags(compiler)
isa_arg = optimization_flags(compiler, spec.target)
env.set("SPACK_TARGET_ARGS", isa_arg)
# Trap spack-tracked compiler flags as appropriate.
@@ -403,6 +405,36 @@ def set_compiler_environment_variables(pkg, env):
return env
def optimization_flags(compiler, target):
if spack.compilers.is_mixed_toolchain(compiler):
msg = (
"microarchitecture specific optimizations are not "
"supported yet on mixed compiler toolchains [check"
f" {compiler.name}@{compiler.version} for further details]"
)
tty.debug(msg)
return ""
# Try to check if the current compiler comes with a version number or
# has an unexpected suffix. If so, treat it as a compiler with a
# custom spec.
compiler_version = compiler.version
version_number, suffix = archspec.cpu.version_components(compiler.version)
if not version_number or suffix:
try:
compiler_version = compiler.real_version
except spack.util.executable.ProcessError as e:
# log this and just return compiler.version instead
tty.debug(str(e))
try:
result = target.optimization_flags(compiler.name, compiler_version.dotted_numeric_string)
except (ValueError, archspec.cpu.UnsupportedMicroarchitecture):
result = ""
return result
def set_wrapper_variables(pkg, env):
"""Set environment variables used by the Spack compiler wrapper (which have the prefix
`SPACK_`) and also add the compiler wrappers to PATH.
@@ -783,7 +815,6 @@ def setup_package(pkg, dirty, context: Context = Context.BUILD):
# Platform specific setup goes before package specific setup. This is for setting
# defaults like MACOSX_DEPLOYMENT_TARGET on macOS.
platform = spack.platforms.by_name(pkg.spec.architecture.platform)
target = platform.target(pkg.spec.architecture.target)
platform.setup_platform_environment(pkg, env_mods)
tty.debug("setup_package: grabbing modifications from dependencies")
@@ -808,9 +839,6 @@ def setup_package(pkg, dirty, context: Context = Context.BUILD):
for mod in pkg.compiler.modules:
load_module(mod)
if target and target.module_name:
load_module(target.module_name)
load_external_modules(pkg)
implicit_rpaths = pkg.compiler.implicit_rpaths()

View File

@@ -558,7 +558,7 @@ def get_compilers(config, cspec=None, arch_spec=None):
except KeyError:
# TODO: Check if this exception handling makes sense, or if we
# TODO: need to change / refactor tests
family = arch_spec.target
family = str(arch_spec.target)
except AttributeError:
assert arch_spec is None
@@ -803,12 +803,11 @@ def _extract_os_and_target(spec: "spack.spec.Spec"):
if not spec.architecture:
host_platform = spack.platforms.host()
operating_system = host_platform.operating_system("default_os")
target = host_platform.target("default_target").microarchitecture
target = host_platform.target("default_target")
else:
target = spec.architecture.target
if not target:
target = spack.platforms.host().target("default_target")
target = target.microarchitecture
operating_system = spec.os
if not operating_system:

View File

@@ -50,6 +50,7 @@
pass
import llnl.util.filesystem as fs
import llnl.util.lang
import llnl.util.tty as tty
import spack.deptypes as dt
@@ -121,6 +122,17 @@
)
@llnl.util.lang.memoized
def _getfqdn():
"""Memoized version of `getfqdn()`.
If we call `getfqdn()` too many times, DNS can be very slow. We only need to call it
one time per process, so we cache it here.
"""
return socket.getfqdn()
def reader(version: vn.StandardVersion) -> Type["spack.spec.SpecfileReaderBase"]:
reader_cls = {
vn.Version("5"): spack.spec.SpecfileV1,
@@ -1084,7 +1096,7 @@ def _write(self, type, value, traceback):
self._state_is_inconsistent = True
return
temp_file = self._index_path + (".%s.%s.temp" % (socket.getfqdn(), os.getpid()))
temp_file = self._index_path + (".%s.%s.temp" % (_getfqdn(), os.getpid()))
# Write a temporary database file then move it into place
try:

View File

@@ -15,7 +15,7 @@
import spack
import spack.error
import spack.fetch_strategy as fs
import spack.fetch_strategy
import spack.mirror
import spack.repo
import spack.stage
@@ -314,11 +314,15 @@ def stage(self) -> "spack.stage.Stage":
# Two checksums, one for compressed file, one for its contents
if self.archive_sha256 and self.sha256:
fetcher: fs.FetchStrategy = fs.FetchAndVerifyExpandedFile(
self.url, archive_sha256=self.archive_sha256, expanded_sha256=self.sha256
fetcher: spack.fetch_strategy.FetchStrategy = (
spack.fetch_strategy.FetchAndVerifyExpandedFile(
self.url, archive_sha256=self.archive_sha256, expanded_sha256=self.sha256
)
)
else:
fetcher = fs.URLFetchStrategy(url=self.url, sha256=self.sha256, expand=False)
fetcher = spack.fetch_strategy.URLFetchStrategy(
url=self.url, sha256=self.sha256, expand=False
)
# The same package can have multiple patches with the same name but
# with different contents, therefore apply a subset of the hash.
@@ -397,7 +401,7 @@ def from_dict(
sha256 = dictionary["sha256"]
checker = Checker(sha256)
if patch.path and not checker.check(patch.path):
raise fs.ChecksumError(
raise spack.fetch_strategy.ChecksumError(
"sha256 checksum failed for %s" % patch.path,
"Expected %s but got %s " % (sha256, checker.sum)
+ "Patch may have changed since concretization.",

View File

@@ -4,6 +4,8 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from typing import Optional
import archspec.cpu
import llnl.util.lang
import spack.error
@@ -60,7 +62,7 @@ def __init__(self, name):
self.operating_sys = {}
self.name = name
def add_target(self, name, target):
def add_target(self, name: str, target: archspec.cpu.Microarchitecture) -> None:
"""Used by the platform specific subclass to list available targets.
Raises an error if the platform specifies a name
that is reserved by spack as an alias.
@@ -70,6 +72,10 @@ def add_target(self, name, target):
raise ValueError(msg.format(name))
self.targets[name] = target
def _add_archspec_targets(self):
for name, microarchitecture in archspec.cpu.TARGETS.items():
self.add_target(name, microarchitecture)
def target(self, name):
"""This is a getter method for the target dictionary
that handles defaulting based on the values provided by default,

View File

@@ -7,7 +7,6 @@
import archspec.cpu
import spack.target
from spack.operating_systems.mac_os import MacOs
from spack.version import Version
@@ -21,9 +20,7 @@ class Darwin(Platform):
def __init__(self):
super().__init__("darwin")
for name in archspec.cpu.TARGETS:
self.add_target(name, spack.target.Target(name))
self._add_archspec_targets()
self.default = archspec.cpu.host().name
self.front_end = self.default

View File

@@ -6,7 +6,6 @@
import archspec.cpu
import spack.target
from spack.operating_systems.freebsd import FreeBSDOs
from ._platform import Platform
@@ -18,8 +17,7 @@ class FreeBSD(Platform):
def __init__(self):
super().__init__("freebsd")
for name in archspec.cpu.TARGETS:
self.add_target(name, spack.target.Target(name))
self._add_archspec_targets()
# Get specific default
self.default = archspec.cpu.host().name

View File

@@ -6,7 +6,6 @@
import archspec.cpu
import spack.target
from spack.operating_systems.linux_distro import LinuxDistro
from ._platform import Platform
@@ -18,8 +17,7 @@ class Linux(Platform):
def __init__(self):
super().__init__("linux")
for name in archspec.cpu.TARGETS:
self.add_target(name, spack.target.Target(name))
self._add_archspec_targets()
# Get specific default
self.default = archspec.cpu.host().name

View File

@@ -4,8 +4,9 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import platform
import archspec.cpu
import spack.operating_systems
import spack.target
from ._platform import Platform
@@ -32,8 +33,8 @@ class Test(Platform):
def __init__(self, name=None):
name = name or "test"
super().__init__(name)
self.add_target(self.default, spack.target.Target(self.default))
self.add_target(self.front_end, spack.target.Target(self.front_end))
self.add_target(self.default, archspec.cpu.TARGETS[self.default])
self.add_target(self.front_end, archspec.cpu.TARGETS[self.front_end])
self.add_operating_system(
self.default_os, spack.operating_systems.OperatingSystem("debian", 6)

View File

@@ -7,7 +7,6 @@
import archspec.cpu
import spack.target
from spack.operating_systems.windows_os import WindowsOs
from ._platform import Platform
@@ -18,9 +17,7 @@ class Windows(Platform):
def __init__(self):
super().__init__("windows")
for name in archspec.cpu.TARGETS:
self.add_target(name, spack.target.Target(name))
self._add_archspec_targets()
self.default = archspec.cpu.host().name
self.front_end = self.default

View File

@@ -2479,7 +2479,7 @@ def _all_targets_satisfiying(single_constraint):
return allowed_targets
cache = {}
for target_constraint in sorted(self.target_constraints):
for target_constraint in sorted(self.target_constraints, key=lambda x: x.name):
# Construct the list of allowed targets for this constraint
allowed_targets = []
for single_constraint in str(target_constraint).split(","):
@@ -3237,7 +3237,7 @@ def add_compiler_from_concrete_spec(self, spec: "spack.spec.Spec") -> None:
candidate = KnownCompiler(
spec=spec.compiler,
os=str(spec.architecture.os),
target=str(spec.architecture.target.microarchitecture.family),
target=str(spec.architecture.target.family),
available=False,
compiler_obj=None,
)

View File

@@ -7,32 +7,37 @@
% Heuristic to speed-up solves
%=============================================================================
% No duplicates by default (most of them will be true)
#heuristic attr("node", node(PackageID, Package)). [100, init]
#heuristic attr("node", node(PackageID, Package)). [ 2, factor]
#heuristic attr("virtual_node", node(VirtualID, Virtual)). [100, init]
#heuristic attr("node", node(1..X-1, Package)) : max_dupes(Package, X), not virtual(Package), X > 1. [-1, sign]
#heuristic attr("virtual_node", node(1..X-1, Package)) : max_dupes(Package, X), virtual(Package) , X > 1. [-1, sign]
#heuristic attr("node", PackageNode). [300, init]
#heuristic attr("node", PackageNode). [ 2, factor]
#heuristic attr("node", PackageNode). [ -1, sign]
#heuristic attr("node", node(0, Dependency)) : attr("dependency_holds", ParentNode, Dependency, Type), not virtual(Dependency). [1@2, sign]
% Pick preferred version
#heuristic attr("version", node(PackageID, Package), Version) : pkg_fact(Package, version_declared(Version, Weight)), attr("node", node(PackageID, Package)). [40, init]
#heuristic version_weight(node(PackageID, Package), 0) : pkg_fact(Package, version_declared(Version, 0 )), attr("node", node(PackageID, Package)). [ 1, sign]
#heuristic attr("version", node(PackageID, Package), Version) : pkg_fact(Package, version_declared(Version, 0 )), attr("node", node(PackageID, Package)). [ 1, sign]
#heuristic attr("version", node(PackageID, Package), Version) : pkg_fact(Package, version_declared(Version, Weight)), attr("node", node(PackageID, Package)), Weight > 0. [-1, sign]
#heuristic attr("virtual_node", node(X, Virtual)). [60, init]
#heuristic attr("virtual_node", node(X, Virtual)). [-1, sign]
#heuristic attr("virtual_node", node(0, Virtual)) : node_depends_on_virtual(PackageNode, Virtual). [1@2, sign]
#heuristic attr("depends_on", ParentNode, ChildNode, Type). [150, init]
#heuristic attr("depends_on", ParentNode, ChildNode, Type). [4, factor]
#heuristic attr("depends_on", ParentNode, ChildNode, Type). [-1, sign]
#heuristic attr("depends_on", ParentNode, node(0, Dependency), Type) : attr("dependency_holds", ParentNode, Dependency, Type), not virtual(Dependency). [1@2, sign]
#heuristic attr("depends_on", ParentNode, ProviderNode , Type) : node_depends_on_virtual(ParentNode, Virtual, Type), provider(ProviderNode, node(VirtualID, Virtual)). [1@2, sign]
#heuristic attr("version", node(PackageID, Package), Version). [30, init]
#heuristic attr("version", node(PackageID, Package), Version). [-1, sign]
#heuristic attr("version", node(PackageID, Package), Version) : pkg_fact(Package, version_declared(Version, 0)), attr("node", node(PackageID, Package)). [ 1@2, sign]
#heuristic version_weight(node(PackageID, Package), Weight). [30, init]
#heuristic version_weight(node(PackageID, Package), Weight). [-1 , sign]
#heuristic version_weight(node(PackageID, Package), 0 ) : attr("node", node(PackageID, Package)). [ 1@2, sign]
% Use default variants
#heuristic attr("variant_value", node(PackageID, Package), Variant, Value) : variant_default_value(Package, Variant, Value), attr("node", node(PackageID, Package)). [40, true]
#heuristic attr("variant_value", node(PackageID, Package), Variant, Value) : not variant_default_value(Package, Variant, Value), attr("node", node(PackageID, Package)). [40, false]
% Use default operating system and platform
#heuristic attr("node_os", node(PackageID, Package), OS) : os(OS, 0), attr("root", node(PackageID, Package)). [40, true]
#heuristic attr("node_platform", node(PackageID, Package), Platform) : allowed_platform(Platform), attr("root", node(PackageID, Package)). [40, true]
#heuristic attr("variant_value", PackageNode, Variant, Value). [30, init]
#heuristic attr("variant_value", PackageNode, Variant, Value). [-1, sign]
#heuristic attr("variant_value", PackageNode, Variant, Value) : variant_default_value(PackageNode, Variant, Value), attr("node", PackageNode). [1@2, sign]
% Use default targets
#heuristic attr("node_target", node(PackageID, Package), Target) : target_weight(Target, Weight), attr("node", node(PackageID, Package)). [30, init]
#heuristic attr("node_target", node(PackageID, Package), Target) : target_weight(Target, Weight), attr("node", node(PackageID, Package)). [ 2, factor]
#heuristic attr("node_target", node(PackageID, Package), Target) : target_weight(Target, 0), attr("node", node(PackageID, Package)). [ 1, sign]
#heuristic attr("node_target", node(PackageID, Package), Target) : target_weight(Target, Weight), attr("node", node(PackageID, Package)), Weight > 0. [-1, sign]
#heuristic attr("node_target", node(PackageID, Package), Target). [-1, sign]
#heuristic attr("node_target", node(PackageID, Package), Target) : target_weight(Target, 0), attr("node", node(PackageID, Package)). [1@2, sign]
% Use the default compilers
#heuristic node_compiler(node(PackageID, Package), ID) : compiler_weight(ID, 0), compiler_id(ID), attr("node", node(PackageID, Package)). [30, init]

View File

@@ -26,7 +26,7 @@
version, like "1.2", or it can be a range of versions, e.g. "1.2:1.4".
If multiple specific versions or multiple ranges are acceptable, they
can be separated by commas, e.g. if a package will only build with
versions 1.0, 1.2-1.4, and 1.6-1.8 of mavpich, you could say:
versions 1.0, 1.2-1.4, and 1.6-1.8 of mvapich, you could say:
depends_on("mvapich@1.0,1.2:1.4,1.6:1.8")
@@ -61,6 +61,8 @@
import warnings
from typing import Any, Callable, Dict, List, Match, Optional, Set, Tuple, Union
import archspec.cpu
import llnl.path
import llnl.string
import llnl.util.filesystem as fs
@@ -82,7 +84,6 @@
import spack.repo
import spack.solver
import spack.store
import spack.target
import spack.traverse as traverse
import spack.util.executable
import spack.util.hash
@@ -213,6 +214,12 @@ def ensure_modern_format_string(fmt: str) -> None:
)
def _make_microarchitecture(name: str) -> archspec.cpu.Microarchitecture:
if isinstance(name, archspec.cpu.Microarchitecture):
return name
return archspec.cpu.TARGETS.get(name, archspec.cpu.generic_microarchitecture(name))
@lang.lazy_lexicographic_ordering
class ArchSpec:
"""Aggregate the target platform, the operating system and the target microarchitecture."""
@@ -301,7 +308,10 @@ def _autospec(self, spec_like):
def _cmp_iter(self):
yield self.platform
yield self.os
yield self.target
if self.target is None:
yield self.target
else:
yield self.target.name
@property
def platform(self):
@@ -360,10 +370,10 @@ def target(self, value):
# will assumed to be the host machine's platform.
def target_or_none(t):
if isinstance(t, spack.target.Target):
if isinstance(t, archspec.cpu.Microarchitecture):
return t
if t and t != "None":
return spack.target.Target(t)
return _make_microarchitecture(t)
return None
value = target_or_none(value)
@@ -452,10 +462,11 @@ def _target_constrain(self, other: "ArchSpec") -> bool:
results = self._target_intersection(other)
attribute_str = ",".join(results)
if self.target == attribute_str:
intersection_target = _make_microarchitecture(attribute_str)
if self.target == intersection_target:
return False
self.target = attribute_str
self.target = intersection_target
return True
def _target_intersection(self, other):
@@ -473,7 +484,7 @@ def _target_intersection(self, other):
# s_target_range is a concrete target
# get a microarchitecture reference for at least one side
# of each comparison so we can use archspec comparators
s_comp = spack.target.Target(s_min).microarchitecture
s_comp = _make_microarchitecture(s_min)
if not o_sep:
if s_min == o_min:
results.append(s_min)
@@ -481,21 +492,21 @@ def _target_intersection(self, other):
results.append(s_min)
elif not o_sep:
# "cast" to microarchitecture
o_comp = spack.target.Target(o_min).microarchitecture
o_comp = _make_microarchitecture(o_min)
if (not s_min or o_comp >= s_min) and (not s_max or o_comp <= s_max):
results.append(o_min)
else:
# Take intersection of two ranges
# Lots of comparisons needed
_s_min = spack.target.Target(s_min).microarchitecture
_s_max = spack.target.Target(s_max).microarchitecture
_o_min = spack.target.Target(o_min).microarchitecture
_o_max = spack.target.Target(o_max).microarchitecture
_s_min = _make_microarchitecture(s_min)
_s_max = _make_microarchitecture(s_max)
_o_min = _make_microarchitecture(o_min)
_o_max = _make_microarchitecture(o_max)
n_min = s_min if _s_min >= _o_min else o_min
n_max = s_max if _s_max <= _o_max else o_max
_n_min = spack.target.Target(n_min).microarchitecture
_n_max = spack.target.Target(n_max).microarchitecture
_n_min = _make_microarchitecture(n_min)
_n_max = _make_microarchitecture(n_max)
if _n_min == _n_max:
results.append(n_min)
elif not n_min or not n_max or _n_min < _n_max:
@@ -548,12 +559,18 @@ def target_concrete(self):
)
def to_dict(self):
# Generic targets represent either an architecture family (like x86_64)
# or a custom micro-architecture
if self.target.vendor == "generic":
target_data = str(self.target)
else:
# Get rid of compiler flag information before turning the uarch into a dict
uarch_dict = self.target.to_dict()
uarch_dict.pop("compilers", None)
target_data = syaml.syaml_dict(uarch_dict.items())
d = syaml.syaml_dict(
[
("platform", self.platform),
("platform_os", self.os),
("target", self.target.to_dict_or_value()),
]
[("platform", self.platform), ("platform_os", self.os), ("target", target_data)]
)
return syaml.syaml_dict([("arch", d)])
@@ -561,7 +578,10 @@ def to_dict(self):
def from_dict(d):
"""Import an ArchSpec from raw YAML/JSON data"""
arch = d["arch"]
target = spack.target.Target.from_dict_or_value(arch["target"])
target_name = arch["target"]
if not isinstance(target_name, str):
target_name = target_name["name"]
target = _make_microarchitecture(target_name)
return ArchSpec((arch["platform"], arch["platform_os"], target))
def __str__(self):
@@ -1135,7 +1155,7 @@ def _libs_default_handler(spec: "Spec"):
for shared in search_shared:
# Since we are searching for link libraries, on Windows search only for
# ".Lib" extensions by default as those represent import libraries for implict links.
# ".Lib" extensions by default as those represent import libraries for implicit links.
libs = fs.find_libraries(name, home, shared=shared, recursive=True, runtime=False)
if libs:
return libs
@@ -2477,7 +2497,7 @@ def spec_builder(d):
spec_like, dep_like = next(iter(d.items()))
# If the requirements was for unique nodes (default)
# then re-use keys from the local cache. Otherwise build
# then reuse keys from the local cache. Otherwise build
# a new node every time.
if not isinstance(spec_like, Spec):
spec = spec_cache[spec_like] if normal else Spec(spec_like)
@@ -4120,9 +4140,7 @@ def os(self):
@property
def target(self):
# This property returns the underlying microarchitecture object
# to give to the attribute the appropriate comparison semantic
return self.architecture.target.microarchitecture
return self.architecture.target
@property
def build_spec(self):
@@ -5021,7 +5039,7 @@ def __init__(self, provided, required):
class UnsatisfiableCompilerSpecError(spack.error.UnsatisfiableSpecError):
"""Raised when a spec comiler conflicts with package constraints."""
"""Raised when a spec compiler conflicts with package constraints."""
def __init__(self, provided, required):
super().__init__(provided, required, "compiler")

View File

@@ -33,7 +33,6 @@
import spack.caches
import spack.config
import spack.error
import spack.fetch_strategy as fs
import spack.mirror
import spack.resource
import spack.spec
@@ -43,6 +42,7 @@
import spack.util.path as sup
import spack.util.pattern as pattern
import spack.util.url as url_util
from spack import fetch_strategy as fs # breaks a cycle
from spack.util.crypto import bit_length, prefix_bits
from spack.util.editor import editor, executable
from spack.version import StandardVersion, VersionList
@@ -352,8 +352,8 @@ def __init__(
url_or_fetch_strategy,
*,
name=None,
mirror_paths: Optional[spack.mirror.MirrorLayout] = None,
mirrors: Optional[Iterable[spack.mirror.Mirror]] = None,
mirror_paths: Optional["spack.mirror.MirrorLayout"] = None,
mirrors: Optional[Iterable["spack.mirror.Mirror"]] = None,
keep=False,
path=None,
lock=True,
@@ -464,7 +464,7 @@ def source_path(self):
"""Returns the well-known source directory path."""
return os.path.join(self.path, _source_path_subdir)
def _generate_fetchers(self, mirror_only=False) -> Generator[fs.FetchStrategy, None, None]:
def _generate_fetchers(self, mirror_only=False) -> Generator["fs.FetchStrategy", None, None]:
fetchers: List[fs.FetchStrategy] = []
if not mirror_only:
fetchers.append(self.default_fetcher)
@@ -600,7 +600,7 @@ def cache_local(self):
spack.caches.FETCH_CACHE.store(self.fetcher, self.mirror_layout.path)
def cache_mirror(
self, mirror: spack.caches.MirrorCache, stats: spack.mirror.MirrorStats
self, mirror: "spack.caches.MirrorCache", stats: "spack.mirror.MirrorStats"
) -> None:
"""Perform a fetch if the resource is not already cached
@@ -668,7 +668,7 @@ def destroy(self):
class ResourceStage(Stage):
def __init__(
self,
- fetch_strategy: fs.FetchStrategy,
+ fetch_strategy: "fs.FetchStrategy",
root: Stage,
resource: spack.resource.Resource,
**kwargs,

View File

@@ -1,161 +0,0 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import functools
import archspec.cpu
import llnl.util.tty as tty
import spack.compiler
import spack.compilers
import spack.spec
import spack.util.executable
import spack.util.spack_yaml as syaml
def _ensure_other_is_target(method):
"""In a single argument method, ensure that the argument is an
instance of ``Target``.
"""
@functools.wraps(method)
def _impl(self, other):
if isinstance(other, str):
other = Target(other)
if not isinstance(other, Target):
return NotImplemented
return method(self, other)
return _impl
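The `_ensure_other_is_target` decorator above coerces a `str` operand to `Target` before any comparison runs, and returns `NotImplemented` for anything else so Python can try the reflected operation. A self-contained sketch of the same pattern, using a hypothetical `Point` class as a stand-in for `Target`:

```python
import functools


def ensure_other_is_point(method):
    """Coerce a str argument to Point before comparing; a sketch of the
    _ensure_other_is_target pattern (Point stands in for Target)."""

    @functools.wraps(method)
    def _impl(self, other):
        if isinstance(other, str):
            other = Point(other)
        if not isinstance(other, Point):
            # Lets Python fall back to the other operand's comparison.
            return NotImplemented
        return method(self, other)

    return _impl


class Point:
    def __init__(self, name):
        self.name = name

    @ensure_other_is_point
    def __eq__(self, other):
        return self.name == other.name

    def __hash__(self):
        # Keep hash consistent with __eq__ so instances stay usable
        # as dict keys and set members.
        return hash(self.name)


print(Point("haswell") == "haswell")  # True: the str was coerced first
```

Returning `NotImplemented` (rather than `False`) is what keeps mixed-type comparisons well behaved: `Point("x") == 42` falls back to the default identity check instead of raising.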
class Target:
def __init__(self, name, module_name=None):
"""Target models microarchitectures and their compatibility.
Args:
name (str or Microarchitecture): microarchitecture of the target
module_name (str): optional module name to get access to the
current target. This is typically used on machines
like Cray (e.g. craype-compiler)
"""
if not isinstance(name, archspec.cpu.Microarchitecture):
name = archspec.cpu.TARGETS.get(name, archspec.cpu.generic_microarchitecture(name))
self.microarchitecture = name
self.module_name = module_name
@property
def name(self):
return self.microarchitecture.name
@_ensure_other_is_target
def __eq__(self, other):
return (
self.microarchitecture == other.microarchitecture
and self.module_name == other.module_name
)
def __ne__(self, other):
# This method is necessary as long as we support Python 2. In Python 3
# __ne__ defaults to the implementation below
return not self == other
@_ensure_other_is_target
def __lt__(self, other):
# TODO: In the future it would be convenient to say
# TODO: `spec.architecture.target < other.architecture.target`
# TODO: and change the semantic of the comparison operators
# This is needed to sort deterministically specs in a list.
# It doesn't implement a total ordering semantic.
return self.microarchitecture.name < other.microarchitecture.name
def __hash__(self):
return hash((self.name, self.module_name))
@staticmethod
def from_dict_or_value(dict_or_value):
# A string here represents a generic target (like x86_64 or ppc64) or
# a custom micro-architecture
if isinstance(dict_or_value, str):
return Target(dict_or_value)
# TODO: From a dict we actually retrieve much more information than
# TODO: just the name. We can use that information to reconstruct an
# TODO: "old" micro-architecture or check the current definition.
target_info = dict_or_value
return Target(target_info["name"])
def to_dict_or_value(self):
"""Returns a dict or a value representing the current target.
String values are used to keep backward compatibility with generic
targets, like e.g. x86_64 or ppc64. More specific micro-architectures
will return a dictionary which contains information on the name,
features, vendor, generation and parents of the current target.
"""
# Generic targets represent either an architecture
# family (like x86_64) or a custom micro-architecture
if self.microarchitecture.vendor == "generic":
return str(self)
# Get rid of compiler flag information before turning the uarch into a dict
uarch_dict = self.microarchitecture.to_dict()
uarch_dict.pop("compilers", None)
return syaml.syaml_dict(uarch_dict.items())
def __repr__(self):
cls_name = self.__class__.__name__
fmt = cls_name + "({0}, {1})"
return fmt.format(repr(self.microarchitecture), repr(self.module_name))
def __str__(self):
return str(self.microarchitecture)
def __contains__(self, cpu_flag):
return cpu_flag in self.microarchitecture
def optimization_flags(self, compiler):
"""Returns the flags needed to optimize for this target using
the compiler passed as argument.
Args:
compiler (spack.spec.CompilerSpec or spack.compiler.Compiler): object that
contains both the name and the version of the compiler we want to use
"""
# Mixed toolchains are not supported yet
if isinstance(compiler, spack.compiler.Compiler):
if spack.compilers.is_mixed_toolchain(compiler):
msg = (
"microarchitecture specific optimizations are not "
"supported yet on mixed compiler toolchains [check"
" {0.name}@{0.version} for further details]"
)
tty.debug(msg.format(compiler))
return ""
# Try to check if the current compiler comes with a version number or
# has an unexpected suffix. If so, treat it as a compiler with a
# custom spec.
compiler_version = compiler.version
version_number, suffix = archspec.cpu.version_components(compiler.version)
if not version_number or suffix:
# Try to deduce the underlying version of the compiler, regardless
# of its name in compilers.yaml. Depending on where this function
# is called we might get either a CompilerSpec or a fully fledged
# compiler object.
if isinstance(compiler, spack.spec.CompilerSpec):
compiler = spack.compilers.compilers_for_spec(compiler).pop()
try:
compiler_version = compiler.real_version
except spack.util.executable.ProcessError as e:
# log this and just return compiler.version instead
tty.debug(str(e))
return self.microarchitecture.optimization_flags(
compiler.name, compiler_version.dotted_numeric_string
)
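`optimization_flags` above falls back to real-version detection whenever `archspec.cpu.version_components` reports a missing version number or a trailing suffix (as with `gcc@=10foo` or `gcc@=4.4.0-special`). A rough stdlib approximation of that split; this is an illustrative sketch, not archspec's exact behavior:

```python
import re


def version_components(version):
    """Split a version string into (dotted_number, suffix); a sketch of
    what archspec.cpu.version_components does, not its exact behavior."""
    match = re.match(r"^(\d[\d.]*)(.*)$", str(version))
    if not match:
        return "", str(version)
    return match.group(1), match.group(2)


print(version_components("9.2.0"))         # ('9.2.0', '')
print(version_components("4.4.0-special"))  # ('4.4.0', '-special')
print(version_components("10foo"))          # ('10', 'foo')
```

Any non-empty suffix (or empty number) is the signal that the version string is custom, so the compiler's real version must be probed instead of trusted.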

View File

@@ -12,7 +12,6 @@
import spack.concretize
import spack.operating_systems
import spack.platforms
import spack.target
from spack.spec import ArchSpec, Spec
@@ -83,25 +82,6 @@ def test_operating_system_conversion_to_dict():
assert operating_system.to_dict() == {"name": "os", "version": "1.0"}
@pytest.mark.parametrize(
"cpu_flag,target_name",
[
# Test that specific flags can be used in queries
("ssse3", "haswell"),
("popcnt", "nehalem"),
("avx512f", "skylake_avx512"),
("avx512ifma", "icelake"),
# Test that proxy flags can be used in queries too
("sse3", "nehalem"),
("avx512", "skylake_avx512"),
("avx512", "icelake"),
],
)
def test_target_container_semantic(cpu_flag, target_name):
target = spack.target.Target(target_name)
assert cpu_flag in target
@pytest.mark.parametrize(
"item,architecture_str",
[
@@ -118,68 +98,6 @@ def test_arch_spec_container_semantic(item, architecture_str):
assert item in architecture
@pytest.mark.parametrize(
"compiler_spec,target_name,expected_flags",
[
# Homogeneous compilers
("gcc@4.7.2", "ivybridge", "-march=core-avx-i -mtune=core-avx-i"),
("clang@3.5", "x86_64", "-march=x86-64 -mtune=generic"),
("apple-clang@9.1.0", "x86_64", "-march=x86-64"),
# Mixed toolchain
("clang@8.0.0", "broadwell", ""),
],
)
@pytest.mark.filterwarnings("ignore:microarchitecture specific")
@pytest.mark.not_on_windows("Windows doesn't support the compiler wrapper")
def test_optimization_flags(compiler_spec, target_name, expected_flags, compiler_factory):
target = spack.target.Target(target_name)
compiler_dict = compiler_factory(spec=compiler_spec, operating_system="")["compiler"]
if compiler_spec == "clang@8.0.0":
compiler_dict["paths"] = {
"cc": "/path/to/clang-8",
"cxx": "/path/to/clang++-8",
"f77": "/path/to/gfortran-9",
"fc": "/path/to/gfortran-9",
}
compiler = spack.compilers.compiler_from_dict(compiler_dict)
opt_flags = target.optimization_flags(compiler)
assert opt_flags == expected_flags
@pytest.mark.parametrize(
"compiler_str,real_version,target_str,expected_flags",
[
("gcc@=9.2.0", None, "haswell", "-march=haswell -mtune=haswell"),
# Check that custom string versions are accepted
("gcc@=10foo", "9.2.0", "icelake", "-march=icelake-client -mtune=icelake-client"),
# Check that we run version detection (4.4.0 doesn't support icelake)
("gcc@=4.4.0-special", "9.2.0", "icelake", "-march=icelake-client -mtune=icelake-client"),
# Check that the special case for Apple's clang is treated correctly
# i.e. it won't try to detect the version again
("apple-clang@=9.1.0", None, "x86_64", "-march=x86-64"),
],
)
def test_optimization_flags_with_custom_versions(
compiler_str,
real_version,
target_str,
expected_flags,
monkeypatch,
mutable_config,
compiler_factory,
):
target = spack.target.Target(target_str)
compiler_dict = compiler_factory(spec=compiler_str, operating_system="redhat6")
mutable_config.set("compilers", [compiler_dict])
if real_version:
monkeypatch.setattr(spack.compiler.Compiler, "get_real_version", lambda x: real_version)
compiler = spack.compilers.compiler_from_dict(compiler_dict["compiler"])
opt_flags = target.optimization_flags(compiler)
assert opt_flags == expected_flags
@pytest.mark.regression("15306")
@pytest.mark.parametrize(
"architecture_tuple,constraint_tuple",

View File

@@ -9,6 +9,8 @@
import pytest
import archspec.cpu
from llnl.path import Path, convert_to_platform_path
from llnl.util.filesystem import HeaderList, LibraryList
@@ -737,3 +739,64 @@ def test_rpath_with_duplicate_link_deps():
assert child in rpath_deps
assert runtime_2 in rpath_deps
assert runtime_1 not in rpath_deps
@pytest.mark.parametrize(
"compiler_spec,target_name,expected_flags",
[
# Homogeneous compilers
("gcc@4.7.2", "ivybridge", "-march=core-avx-i -mtune=core-avx-i"),
("clang@3.5", "x86_64", "-march=x86-64 -mtune=generic"),
("apple-clang@9.1.0", "x86_64", "-march=x86-64"),
# Mixed toolchain
("clang@8.0.0", "broadwell", ""),
],
)
@pytest.mark.filterwarnings("ignore:microarchitecture specific")
@pytest.mark.not_on_windows("Windows doesn't support the compiler wrapper")
def test_optimization_flags(compiler_spec, target_name, expected_flags, compiler_factory):
target = archspec.cpu.TARGETS[target_name]
compiler_dict = compiler_factory(spec=compiler_spec, operating_system="")["compiler"]
if compiler_spec == "clang@8.0.0":
compiler_dict["paths"] = {
"cc": "/path/to/clang-8",
"cxx": "/path/to/clang++-8",
"f77": "/path/to/gfortran-9",
"fc": "/path/to/gfortran-9",
}
compiler = spack.compilers.compiler_from_dict(compiler_dict)
opt_flags = spack.build_environment.optimization_flags(compiler, target)
assert opt_flags == expected_flags
@pytest.mark.parametrize(
"compiler_str,real_version,target_str,expected_flags",
[
("gcc@=9.2.0", None, "haswell", "-march=haswell -mtune=haswell"),
# Check that custom string versions are accepted
("gcc@=10foo", "9.2.0", "icelake", "-march=icelake-client -mtune=icelake-client"),
# Check that we run version detection (4.4.0 doesn't support icelake)
("gcc@=4.4.0-special", "9.2.0", "icelake", "-march=icelake-client -mtune=icelake-client"),
# Check that the special case for Apple's clang is treated correctly
# i.e. it won't try to detect the version again
("apple-clang@=9.1.0", None, "x86_64", "-march=x86-64"),
],
)
def test_optimization_flags_with_custom_versions(
compiler_str,
real_version,
target_str,
expected_flags,
monkeypatch,
mutable_config,
compiler_factory,
):
target = archspec.cpu.TARGETS[target_str]
compiler_dict = compiler_factory(spec=compiler_str, operating_system="redhat6")
mutable_config.set("compilers", [compiler_dict])
if real_version:
monkeypatch.setattr(spack.compiler.Compiler, "get_real_version", lambda x: real_version)
compiler = spack.compilers.compiler_from_dict(compiler_dict["compiler"])
opt_flags = spack.build_environment.optimization_flags(compiler, target)
assert opt_flags == expected_flags

View File

@@ -407,7 +407,7 @@ def test_substitute_config_variables(mock_low_high_config, monkeypatch):
) == os.path.abspath(os.path.join("foo", "test", "bar"))
host_target = spack.platforms.host().target("default_target")
- host_target_family = str(host_target.microarchitecture.family)
+ host_target_family = str(host_target.family)
assert spack_path.canonicalize_path(
os.path.join("foo", "$target_family", "bar")
) == os.path.abspath(os.path.join("foo", host_target_family, "bar"))

View File

@@ -71,7 +71,7 @@ def replacements():
"operating_system": lambda: arch.os,
"os": lambda: arch.os,
"target": lambda: arch.target,
"target_family": lambda: arch.target.microarchitecture.family,
"target_family": lambda: arch.target.family,
"date": lambda: date.today().strftime("%Y-%m-%d"),
"env": lambda: ev.active_environment().path if ev.active_environment() else NOMATCH,
}
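The `replacements()` table maps each token to a zero-argument lambda so its value is computed only if a path actually references it (e.g. `ev.active_environment()` is not queried unless `$env` appears). A minimal sketch of that lazy-substitution idea; the token values here are illustrative stand-ins, not Spack's real lookups:

```python
from datetime import date

# Each value is a thunk: nothing is computed until the token is seen.
replacements = {
    "target_family": lambda: "x86_64",  # stand-in for arch.target.family
    "date": lambda: date.today().strftime("%Y-%m-%d"),
}


def canonicalize(path):
    """Expand $token entries on demand; only referenced thunks run."""
    for token, thunk in replacements.items():
        marker = "$" + token
        if marker in path:
            path = path.replace(marker, thunk())
    return path


print(canonicalize("store/$target_family/pkg"))  # store/x86_64/pkg
```

Deferring evaluation this way also avoids paying for expensive lookups (hostname, environment state) on every call when most paths use no tokens at all.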

View File

@@ -41,6 +41,7 @@ class Acts(CMakePackage, CudaPackage):
# Supported Acts versions
version("main", branch="main")
version("master", branch="main", deprecated=True) # For compatibility
version("36.3.1", commit="b58e5b0c33fb8423ce60a6a45f333edd0d178acd", submodules=True)
version("36.3.0", commit="3b875cebabdd10462e224279558429f49ed75945", submodules=True)
version("36.2.0", commit="e2fb53da911dc481969e56d635898a46b8d78df9", submodules=True)
version("36.1.0", commit="3f19d1a0eec1d11937d66d0ef603f0b25b9b4e96", submodules=True)

View File

@@ -7,6 +7,7 @@
from llnl.util import tty
from spack.build_environment import optimization_flags
from spack.package import *
from spack.pkg.builtin.fftw import FftwBase
@@ -213,10 +214,7 @@ def configure(self, spec, prefix):
# variable to set AMD_ARCH configure option.
# Spack user can not directly use AMD_ARCH for this purpose but should
# use target variable to set appropriate -march option in AMD_ARCH.
arch = spec.architecture
options.append(
"AMD_ARCH={0}".format(arch.target.optimization_flags(spec.compiler).split("=")[-1])
)
options.append(f"AMD_ARCH={optimization_flags(self.compiler, spec.target)}")
# Specific SIMD support.
# float and double precisions are supported

View File

@@ -2,7 +2,7 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.build_environment import optimization_flags
from spack.package import *
@@ -110,21 +110,23 @@ def build_targets(self):
return ["all", "html"] if "+doc" in self.spec else ["all"]
def cmake_args(self):
spec = self.spec
args = [
self.define_from_variant("ARB_WITH_ASSERTIONS", "assertions"),
self.define_from_variant("ARB_WITH_MPI", "mpi"),
self.define_from_variant("ARB_WITH_PYTHON", "python"),
self.define_from_variant("ARB_VECTORIZE", "vectorize"),
self.define("ARB_ARCH", "none"),
self.define("ARB_CXX_FLAGS_TARGET", optimization_flags(self.compiler, spec.target)),
]
if self.spec.satisfies("+cuda"):
args.append("-DARB_GPU=cuda")
args.append(self.define_from_variant("ARB_USE_GPU_RNG", "gpu_rng"))
# query spack for the architecture-specific compiler flags set by its wrapper
args.append("-DARB_ARCH=none")
opt_flags = self.spec.architecture.target.optimization_flags(self.spec.compiler)
args.append("-DARB_CXX_FLAGS_TARGET=" + opt_flags)
args.extend(
[
self.define("ARB_GPU", "cuda"),
self.define_from_variant("ARB_USE_GPU_RNG", "gpu_rng"),
]
)
return args

View File

@@ -168,13 +168,11 @@ def filter_sbang(self):
pattern = "^#!.*"
repl = f"#!{self.spec['perl'].command.path}"
files = glob.glob("*.pl")
- for file in files:
-     filter_file(pattern, repl, *files, backup=False)
+ filter_file(pattern, repl, *files, backup=False)
repl = f"#!{self.spec['python'].command.path}"
files = glob.glob("*.py")
- for file in files:
-     filter_file(pattern, repl, *files, backup=False)
+ filter_file(pattern, repl, *files, backup=False)
def setup_build_environment(self, env):
htslib = self.spec["htslib"].prefix

View File

@@ -120,6 +120,14 @@ class Bazel(Package):
# https://blog.bazel.build/2021/05/21/bazel-4-1.html
conflicts("platform=darwin target=aarch64:", when="@:4.0")
# https://github.com/bazelbuild/bazel/issues/18642
patch(
"https://github.com/bazelbuild/bazel/pull/20785.patch?full_index=1",
sha256="85dde31d129bbd31e004c5c87f23cdda9295fbb22946dc6d362f23d83bae1fd8",
when="@6.0:6.4",
)
conflicts("%gcc@13:", when="@:5")
# Patches for compiling various older bazels which had ICWYU violations revealed by
# (but not unique to) GCC 11 header changes. These are derived from
# https://gitlab.alpinelinux.org/alpine/aports/-/merge_requests/29084/
@@ -131,8 +139,6 @@ class Bazel(Package):
# Bazel-4.0.0 does not compile with gcc-11
# Newer versions of grpc and abseil dependencies are needed but are not in bazel-4.0.0
conflicts("@4.0.0", when="%gcc@11:")
- # https://github.com/bazelbuild/bazel/issues/18642
- conflicts("@:6", when="%gcc@13:")
executables = ["^bazel$"]
@@ -144,6 +150,11 @@ class Bazel(Package):
"sha256": "f1c8360c01fcf276778d3519394805dc2a71a64274a3a0908bc9edff7b5aebc8",
"when": "@4:6",
}
resource_dictionary["com_google_absl"] = {
"url": "https://github.com/abseil/abseil-cpp/archive/refs/tags/20230802.0.tar.gz",
"sha256": "59d2976af9d6ecf001a81a35749a6e551a335b949d34918cfade07737b9d93c5",
"when": "@6.0:6.4",
}
resource_dictionary["zulu_11_56_19"] = {
"url": "https://mirror.bazel.build/cdn.azul.com/zulu/bin/zulu11.56.19-ca-jdk11.0.15-linux_x64.tar.gz",
"sha256": "e064b61d93304012351242bf0823c6a2e41d9e28add7ea7f05378b7243d34247",

View File

@@ -55,6 +55,7 @@ class Bcftools(AutotoolsPackage):
depends_on("gsl", when="+libgsl")
depends_on("py-matplotlib", when="@1.6:", type="run")
depends_on("py-gffutils", when="@1.9:", type="run")
depends_on("perl", when="@1.8:~perl-filters", type="run")
depends_on("perl", when="@1.8:+perl-filters", type=("build", "run"))

View File

@@ -14,13 +14,42 @@ class Bird(AutotoolsPackage):
systems and distributed under the GNU General Public License."""
homepage = "https://bird.network.cz/"
url = "https://github.com/BIRD/bird/archive/v2.0.2.tar.gz"
url = "https://gitlab.nic.cz/labs/bird/-/archive/v2.0.2/bird-v2.0.2.tar.gz"
license("GPL-2.0-or-later")
license("GPL-2.0-or-later", checked_by="wdconinc")
version("2.0.2", sha256="bd42d48fbcc2c0046d544f1183cd98193ff15b792d332ff45f386b0180b09335")
version("2.0.1", sha256="cd6ea4a39ca97ad16d364bf80f919f0e75eba02dd7fe46be40f55d78d022244a")
version("2.15.1", sha256="5a4cf55c4767192aa57880ac5f6763e5b8c26f688ab5934df96e3615c4b0a1e1")
version("2.15", sha256="485b731ed0668b0da4f5110ba8ea98d248e10b25421820feca5dcdd94ab98a29")
version("2.14", sha256="22823b20d31096fcfded6773ecc7d9ee6da0339ede805422647c04127c67472f")
version("2.13.1", sha256="4a55c469f5d2984b62eef929343815b75a7b19132b8c3f40b41f8f66e27d3078")
version("2.13", sha256="db3df5dd84de98c2a61f8415c9812876578d6ba159038d853b211700e43dbae1")
version("2.0.12", sha256="70ef51cbf2b7711db484225da5bdf0344ba31629a167148bfe294f61f07573f6")
version("2.0.11", sha256="a2a1163166def10e014c6f832d6552b00ab46714024613c76cd6ebc3cd3e51c4")
version("2.0.10", sha256="8e053a64ed3e2c681fcee33ee31e61c7a5df32f94644799f283d294108e83722")
version("2.0.9", sha256="912d5c1bbefffd6198b10688ef6e16d0b9dfb2886944f481fc38b4d869ffd2c4")
version("2.0.8", sha256="4d0eeea762dcd4422e1e276e2ed123cfed630cf1cce017b50463d79fcf2fff0c")
version("2.0.7", sha256="d0c6aeaaef3217d6210261a49751fc662838b55fec92f576e20938917dbf89ab")
version("2.0.6", sha256="61518120c76bbfe0b52eff614e7580a1d973e66907df5aeac83fe344aa30595a")
version("2.0.5", sha256="f20dc822fc95aa580759c9b83bfd6c7c2e8504d8d0602cee118db1447054f5d0")
version("2.0.4", sha256="8c191b87524db3ff587253f46f94524ad2a89efdec8a12c800544a5fb01a2861")
version("2.0.3", sha256="54ec151518564f87e81de4ac19376689e5ba8dd9129f1e9a79086db3df0931f8")
version("2.0.2", sha256="e1e9ac92faf5893890c478386fdbd3c391ec2e9b911b1dfccec7b7fa825e9820")
version("2.0.1", sha256="c222968bb017e6b77d14f4e778f437b84f4ccae686355a3ad8e88799285e7636")
# fix multiple definitions with extern rta_dest_names
patch(
"https://gitlab.nic.cz/labs/bird/-/commit/4bbc10614f3431c37e6352f5a6ea5c693c31021e.diff",
sha256="ab891b10dab2fa17a3047cd48e082cccc14f958f4255dcae771deab1330da7c8",
when="@:2.0.7",
)
# fix linker errors due to undefined behavior on signals
patch(
"https://gitlab.nic.cz/labs/bird/-/commit/24493e9169d3058958ab3ec4d2b02c5753954981.diff",
sha256="ea49dea1c503836feea127c605b99352b1e353df490d63873af09973cf2b3d14",
when="@:2.0.6",
)
depends_on("c", type="build")
depends_on("autoconf", type="build")
depends_on("automake", type="build")
depends_on("libtool", type="build")

View File

@@ -63,8 +63,7 @@ def filter_sbang(self):
pattern = "^#!.*/usr/bin/env perl"
repl = "#!{0}".format(self.spec["perl"].command.path)
files = glob.iglob("*.pl")
- for file in files:
-     filter_file(pattern, repl, *files, backup=False)
+ filter_file(pattern, repl, *files, backup=False)
def setup_run_environment(self, env):
env.prepend_path("PERL5LIB", self.prefix.lib)

View File

@@ -12,10 +12,20 @@ class Codec2(CMakePackage):
HF/VHF digital radio."""
homepage = "https://www.rowetel.com/?page_id=452"
url = "https://github.com/drowe67/codec2/archive/v0.9.2.tar.gz"
url = "https://github.com/drowe67/codec2/archive/refs/tags/1.2.0.tar.gz"
license("LGPL-2.1-or-later")
version("1.2.0", sha256="cbccae52b2c2ecc5d2757e407da567eb681241ff8dadce39d779a7219dbcf449")
version("1.1.0", sha256="d56ba661008a780b823d576a5a2742c94d0b0507574643a7d4f54c76134826a3")
version("1.0.5", sha256="cd9a065dd1c3477f6172a0156294f767688847e4d170103d1f08b3a075f82826")
version("0.9.2", sha256="19181a446f4df3e6d616b50cabdac4485abb9cd3242cf312a0785f892ed4c76c")
depends_on("c", type="build")
def url_for_version(self, version):
# Release 1.2.0 started with shallow git clone "to reduce repo size"
if version < Version("1.2.0"):
return f"https://github.com/drowe67/codec2-dev/archive/refs/tags/v{version}.tar.gz"
else:
return f"https://github.com/drowe67/codec2/archive/refs/tags/{version}.tar.gz"

View File

@@ -20,6 +20,7 @@ class Detray(CMakePackage):
license("MPL-2.0", checked_by="stephenswat")
version("0.75.2", sha256="249066c138eac4114032e8d558f3a05885140a809332a347c7667978dbff54ee")
version("0.74.2", sha256="9fd14cf1ec30477d33c530670e9fed86b07db083912fe51dac64bf2453b321e8")
version("0.73.0", sha256="f574016bc7515a34a675b577e93316e18cf753f1ab7581dcf1c8271a28cb7406")
version("0.72.1", sha256="6cc8d34bc0d801338e9ab142c4a9884d19d9c02555dbb56972fab86b98d0f75b")
@@ -92,6 +93,7 @@ def cmake_args(self):
self.define("DETRAY_SETUP_GOOGLETEST", False),
self.define("DETRAY_SETUP_BENCHMARK", False),
self.define("DETRAY_BUILD_TUTORIALS", False),
self.define("DETRAY_BUILD_TEST_UTILS", True),
]
return args

View File

@@ -10,17 +10,18 @@ class Evemu(AutotoolsPackage):
"""The evemu library and tools are used to describe devices, record data,
create devices and replay data from kernel evdev devices."""
homepage = "https://github.com/freedesktop/evemu"
url = "https://github.com/freedesktop/evemu/archive/v2.7.0.tar.gz"
homepage = "https://gitlab.freedesktop.org/libevdev/evemu"
url = "https://gitlab.freedesktop.org/libevdev/evemu/-/archive/v2.7.0/evemu-v2.7.0.tar.gz"
license("LGPL-3.0-only")
version("2.7.0", sha256="aee1ecc2b6761134470316d97208b173adb4686dc72548b82b2c2b5d1e5dc259")
version("2.6.0", sha256="dc2382bee4dcb6c413271d586dc11d9b4372a70fa2b66b1e53a7107f2f9f51f8")
version("2.5.0", sha256="ab7cce32800db84ab3504789583d1be0d9b0a5f2689389691367b18cf059b09f")
version("2.4.0", sha256="d346ec59289f588bd93fe3cfa40858c7e048660164338787da79b9ebe3256069")
version("2.3.1", sha256="f2dd97310520bc7824adc38b69ead22c53944a666810c60a3e49592914e14e8a")
version("2.7.0", sha256="b4ba7458ccb394e9afdb2562c9809e9e90fd1099e8a028d05de3f12349ab6afa")
version("2.6.0", sha256="2efa4abb51f9f35a48605db51ab835cf688f02f6041d48607e78e11ec3524ac8")
version("2.5.0", sha256="1d88b2a81db36b6018cdc3e8d57fbb95e3a5df9e6806cd7b3d29c579a7113d4f")
version("2.4.0", sha256="ea8e7147550432321418ae1161a909e054ff482c86a6a1631f727171791a501d")
version("2.3.1", sha256="fbe77a083ed4328e76e2882fb164efc925b308b83e879b518136ee54d74def46")
depends_on("c", type="build")
depends_on("autoconf", type="build")
depends_on("automake", type="build")
depends_on("libtool", type="build")

View File

@@ -247,6 +247,11 @@ class Fftw(FftwBase):
provides("fftw-api@3", when="@3:")
patch("pfft-3.3.9.patch", when="@3.3.9:+pfft_patches", level=0)
patch(
"https://github.com/FFTW/fftw3/commit/f69fef7aa546d4477a2a3fd7f13fa8b2f6c54af7.patch?full_index=1",
sha256="872cff9a7d346e91a108ffd3540bfcebeb8cf86c7f40f6b31fd07a80267cbf53",
when="@3.3.7:",
)
patch("pfft-3.3.5.patch", when="@3.3.5:3.3.8+pfft_patches", level=0)
patch("pfft-3.3.4.patch", when="@3.3.4+pfft_patches", level=0)
patch("pgi-3.3.6-pl2.patch", when="@3.3.6-pl2%pgi", level=0)

View File

@@ -17,14 +17,12 @@ class Fpart(AutotoolsPackage):
maintainers("drkrynstrng")
license("BSD-2-Clause")
license("BSD-2-Clause", checked_by="drkrynstrng")
version("master", branch="master")
version("1.6.0", sha256="ed1fac2853fc421071b72e4c5d8455a231bc30e50034db14af8b0485ece6e097")
version("1.5.1", sha256="c353a28f48e4c08f597304cb4ebb88b382f66b7fabfc8d0328ccbb0ceae9220c")
depends_on("c", type="build") # generated
variant("embfts", default=False, description="Build with embedded fts functions")
variant("static", default=False, description="Build static binary")
variant("debug", default=False, description="Build with debugging support")
@@ -37,6 +35,7 @@ class Fpart(AutotoolsPackage):
description="Tools used by fpsync to copy files",
)
depends_on("c", type="build")
depends_on("autoconf", type="build")
depends_on("automake", type="build")
depends_on("libtool", type="build")

View File

@@ -136,5 +136,4 @@ def perl_interpreter(self):
pattern = "^#!.*/usr/bin/perl"
repl = "#!{0}".format(self.spec["perl"].command.path)
files = ["fconv2", "fconvdens2", "fdowngrad.pl", "fout2in", "grBhfat", "grpop"]
- for file in files:
-     filter_file(pattern, repl, *files, backup=False)
+ filter_file(pattern, repl, *files, backup=False)

View File

@@ -195,6 +195,9 @@ def std_when(values):
# See https://bugzilla-geant4.kek.jp/show_bug.cgi?id=2556
patch("package-cache.patch", level=1, when="@10.7.0:11.1.2^cmake@3.17:")
# Issue with Twisted tubes, see https://bugzilla-geant4.kek.jp/show_bug.cgi?id=2619
patch("twisted-tubes.patch", level=1, when="@11.2.0:11.2.2")
# NVHPC: "thread-local declaration follows non-thread-local declaration"
conflicts("%nvhpc", when="+threads")

View File

@@ -0,0 +1,875 @@
diff --git a/source/geometry/solids/specific/include/G4TwistedTubs.hh b/source/geometry/solids/specific/include/G4TwistedTubs.hh
index b8be4e629da8edb87c8e7fdcb12ae243fbb910e4..e6ca127646f1aa1f60b04b5100123ccfff9b698c 100644
--- a/source/geometry/solids/specific/include/G4TwistedTubs.hh
+++ b/source/geometry/solids/specific/include/G4TwistedTubs.hh
@@ -226,109 +226,6 @@ class G4TwistedTubs : public G4VSolid
mutable G4bool fRebuildPolyhedron = false;
mutable G4Polyhedron* fpPolyhedron = nullptr; // polyhedron for vis
- class LastState // last Inside result
- {
- public:
- LastState()
- {
- p.set(kInfinity,kInfinity,kInfinity);
- inside = kOutside;
- }
- ~LastState()= default;
- LastState(const LastState& r) = default;
- LastState& operator=(const LastState& r)
- {
- if (this == &r) { return *this; }
- p = r.p; inside = r.inside;
- return *this;
- }
- public:
- G4ThreeVector p;
- EInside inside;
- };
-
- class LastVector // last SurfaceNormal result
- {
- public:
- LastVector()
- {
- p.set(kInfinity,kInfinity,kInfinity);
- vec.set(kInfinity,kInfinity,kInfinity);
- surface = new G4VTwistSurface*[1];
- }
- ~LastVector()
- {
- delete [] surface;
- }
- LastVector(const LastVector& r) : p(r.p), vec(r.vec)
- {
- surface = new G4VTwistSurface*[1];
- surface[0] = r.surface[0];
- }
- LastVector& operator=(const LastVector& r)
- {
- if (&r == this) { return *this; }
- p = r.p; vec = r.vec;
- delete [] surface; surface = new G4VTwistSurface*[1];
- surface[0] = r.surface[0];
- return *this;
- }
- public:
- G4ThreeVector p;
- G4ThreeVector vec;
- G4VTwistSurface **surface;
- };
-
- class LastValue // last G4double value
- {
- public:
- LastValue()
- {
- p.set(kInfinity,kInfinity,kInfinity);
- value = DBL_MAX;
- }
- ~LastValue()= default;
- LastValue(const LastValue& r) = default;
- LastValue& operator=(const LastValue& r)
- {
- if (this == &r) { return *this; }
- p = r.p; value = r.value;
- return *this;
- }
- public:
- G4ThreeVector p;
- G4double value;
- };
-
- class LastValueWithDoubleVector // last G4double value
- {
- public:
- LastValueWithDoubleVector()
- {
- p.set(kInfinity,kInfinity,kInfinity);
- vec.set(kInfinity,kInfinity,kInfinity);
- value = DBL_MAX;
- }
- ~LastValueWithDoubleVector()= default;
- LastValueWithDoubleVector(const LastValueWithDoubleVector& r) = default;
- LastValueWithDoubleVector& operator=(const LastValueWithDoubleVector& r)
- {
- if (this == &r) { return *this; }
- p = r.p; vec = r.vec; value = r.value;
- return *this;
- }
- public:
- G4ThreeVector p;
- G4ThreeVector vec;
- G4double value;
- };
-
- LastState fLastInside;
- LastVector fLastNormal;
- LastValue fLastDistanceToIn;
- LastValue fLastDistanceToOut;
- LastValueWithDoubleVector fLastDistanceToInWithV;
- LastValueWithDoubleVector fLastDistanceToOutWithV;
};
//=====================================================================
diff --git a/source/geometry/solids/specific/include/G4VTwistedFaceted.hh b/source/geometry/solids/specific/include/G4VTwistedFaceted.hh
index 3d58ba0b242bb4ddc900a3bf0dfd404252cc42e3..6c412c390d0bf780abfe68fdaa89ea76e3264f7c 100644
--- a/source/geometry/solids/specific/include/G4VTwistedFaceted.hh
+++ b/source/geometry/solids/specific/include/G4VTwistedFaceted.hh
@@ -190,110 +190,6 @@ class G4VTwistedFaceted: public G4VSolid
G4VTwistSurface* fSide180 ; // Twisted Side at phi = 180 deg
G4VTwistSurface* fSide270 ; // Twisted Side at phi = 270 deg
- private:
-
- class LastState // last Inside result
- {
- public:
- LastState()
- {
- p.set(kInfinity,kInfinity,kInfinity); inside = kOutside;
- }
- ~LastState()= default;
- LastState(const LastState& r) = default;
- LastState& operator=(const LastState& r)
- {
- if (this == &r) { return *this; }
- p = r.p; inside = r.inside;
- return *this;
- }
- public:
- G4ThreeVector p;
- EInside inside;
- };
-
- class LastVector // last SurfaceNormal result
- {
- public:
- LastVector()
- {
- p.set(kInfinity,kInfinity,kInfinity);
- vec.set(kInfinity,kInfinity,kInfinity);
- surface = new G4VTwistSurface*[1];
- }
- ~LastVector()
- {
- delete [] surface;
- }
- LastVector(const LastVector& r) : p(r.p), vec(r.vec)
- {
- surface = new G4VTwistSurface*[1];
- surface[0] = r.surface[0];
- }
- LastVector& operator=(const LastVector& r)
- {
- if (&r == this) { return *this; }
- p = r.p; vec = r.vec;
- delete [] surface; surface = new G4VTwistSurface*[1];
- surface[0] = r.surface[0];
- return *this;
- }
- public:
- G4ThreeVector p;
- G4ThreeVector vec;
- G4VTwistSurface **surface;
- };
-
- class LastValue // last G4double value
- {
- public:
- LastValue()
- {
- p.set(kInfinity,kInfinity,kInfinity);
- value = DBL_MAX;
- }
- ~LastValue()= default;
- LastValue(const LastValue& r) = default;
- LastValue& operator=(const LastValue& r)
- {
- if (this == &r) { return *this; }
- p = r.p; value = r.value;
- return *this;
- }
- public:
- G4ThreeVector p;
- G4double value;
- };
-
- class LastValueWithDoubleVector // last G4double value
- {
- public:
- LastValueWithDoubleVector()
- {
- p.set(kInfinity,kInfinity,kInfinity);
- vec.set(kInfinity,kInfinity,kInfinity);
- value = DBL_MAX;
- }
- ~LastValueWithDoubleVector()= default;
- LastValueWithDoubleVector(const LastValueWithDoubleVector& r) = default;
- LastValueWithDoubleVector& operator=(const LastValueWithDoubleVector& r)
- {
- if (this == &r) { return *this; }
- p = r.p; vec = r.vec; value = r.value;
- return *this;
- }
- public:
- G4ThreeVector p;
- G4ThreeVector vec;
- G4double value;
- };
-
- LastState fLastInside;
- LastVector fLastNormal;
- LastValue fLastDistanceToIn;
- LastValue fLastDistanceToOut;
- LastValueWithDoubleVector fLastDistanceToInWithV;
- LastValueWithDoubleVector fLastDistanceToOutWithV;
};
//=====================================================================
diff --git a/source/geometry/solids/specific/src/G4TwistedTubs.cc b/source/geometry/solids/specific/src/G4TwistedTubs.cc
index 60dea7239081e58af194ecbe6cdeb33781a069b3..e8e414fabd74ecd1e2ed83ee8c072b932e9ae6dd 100644
--- a/source/geometry/solids/specific/src/G4TwistedTubs.cc
+++ b/source/geometry/solids/specific/src/G4TwistedTubs.cc
@@ -56,6 +56,7 @@ namespace
G4Mutex polyhedronMutex = G4MUTEX_INITIALIZER;
}
+
//=====================================================================
//* constructors ------------------------------------------------------
@@ -223,12 +224,7 @@ G4TwistedTubs::G4TwistedTubs(const G4TwistedTubs& rhs)
fTanOuterStereo2(rhs.fTanOuterStereo2),
fLowerEndcap(nullptr), fUpperEndcap(nullptr), fLatterTwisted(nullptr), fFormerTwisted(nullptr),
fInnerHype(nullptr), fOuterHype(nullptr),
- fCubicVolume(rhs.fCubicVolume), fSurfaceArea(rhs.fSurfaceArea),
- fLastInside(rhs.fLastInside), fLastNormal(rhs.fLastNormal),
- fLastDistanceToIn(rhs.fLastDistanceToIn),
- fLastDistanceToOut(rhs.fLastDistanceToOut),
- fLastDistanceToInWithV(rhs.fLastDistanceToInWithV),
- fLastDistanceToOutWithV(rhs.fLastDistanceToOutWithV)
+ fCubicVolume(rhs.fCubicVolume), fSurfaceArea(rhs.fSurfaceArea)
{
for (auto i=0; i<2; ++i)
{
@@ -268,11 +264,6 @@ G4TwistedTubs& G4TwistedTubs::operator = (const G4TwistedTubs& rhs)
fLowerEndcap= fUpperEndcap= fLatterTwisted= fFormerTwisted= nullptr;
fInnerHype= fOuterHype= nullptr;
fCubicVolume= rhs.fCubicVolume; fSurfaceArea= rhs.fSurfaceArea;
- fLastInside= rhs.fLastInside; fLastNormal= rhs.fLastNormal;
- fLastDistanceToIn= rhs.fLastDistanceToIn;
- fLastDistanceToOut= rhs.fLastDistanceToOut;
- fLastDistanceToInWithV= rhs.fLastDistanceToInWithV;
- fLastDistanceToOutWithV= rhs.fLastDistanceToOutWithV;
for (auto i=0; i<2; ++i)
{
@@ -381,44 +372,32 @@ EInside G4TwistedTubs::Inside(const G4ThreeVector& p) const
// G4Timer timer(timerid, "G4TwistedTubs", "Inside");
// timer.Start();
- G4ThreeVector *tmpp;
- EInside *tmpinside;
- if (fLastInside.p == p)
- {
- return fLastInside.inside;
- }
- else
- {
- tmpp = const_cast<G4ThreeVector*>(&(fLastInside.p));
- tmpinside = const_cast<EInside*>(&(fLastInside.inside));
- tmpp->set(p.x(), p.y(), p.z());
- }
EInside outerhypearea = ((G4TwistTubsHypeSide *)fOuterHype)->Inside(p);
G4double innerhyperho = ((G4TwistTubsHypeSide *)fInnerHype)->GetRhoAtPZ(p);
G4double distanceToOut = p.getRho() - innerhyperho; // +ve: inside
-
+ EInside tmpinside;
if ((outerhypearea == kOutside) || (distanceToOut < -halftol))
{
- *tmpinside = kOutside;
+ tmpinside = kOutside;
}
else if (outerhypearea == kSurface)
{
- *tmpinside = kSurface;
+ tmpinside = kSurface;
}
else
{
if (distanceToOut <= halftol)
{
- *tmpinside = kSurface;
+ tmpinside = kSurface;
}
else
{
- *tmpinside = kInside;
+ tmpinside = kInside;
}
}
- return fLastInside.inside;
+ return tmpinside;
}
//=====================================================================
@@ -433,14 +412,6 @@ G4ThreeVector G4TwistedTubs::SurfaceNormal(const G4ThreeVector& p) const
// Which of the three or four surfaces are we closest to?
//
- if (fLastNormal.p == p)
- {
- return fLastNormal.vec;
- }
- auto tmpp = const_cast<G4ThreeVector*>(&(fLastNormal.p));
- auto tmpnormal = const_cast<G4ThreeVector*>(&(fLastNormal.vec));
- auto tmpsurface = const_cast<G4VTwistSurface**>(fLastNormal.surface);
- tmpp->set(p.x(), p.y(), p.z());
G4double distance = kInfinity;
@@ -466,10 +437,7 @@ G4ThreeVector G4TwistedTubs::SurfaceNormal(const G4ThreeVector& p) const
}
}
- tmpsurface[0] = surfaces[besti];
- *tmpnormal = tmpsurface[0]->GetNormal(bestxx, true);
-
- return fLastNormal.vec;
+ return surfaces[besti]->GetNormal(bestxx, true);
}
//=====================================================================
@@ -485,26 +453,6 @@ G4double G4TwistedTubs::DistanceToIn (const G4ThreeVector& p,
// The function returns kInfinity if no intersection or
// just grazing within tolerance.
- //
- // checking last value
- //
-
- G4ThreeVector* tmpp;
- G4ThreeVector* tmpv;
- G4double* tmpdist;
- if ((fLastDistanceToInWithV.p == p) && (fLastDistanceToInWithV.vec == v))
- {
- return fLastDistanceToIn.value;
- }
- else
- {
- tmpp = const_cast<G4ThreeVector*>(&(fLastDistanceToInWithV.p));
- tmpv = const_cast<G4ThreeVector*>(&(fLastDistanceToInWithV.vec));
- tmpdist = const_cast<G4double*>(&(fLastDistanceToInWithV.value));
- tmpp->set(p.x(), p.y(), p.z());
- tmpv->set(v.x(), v.y(), v.z());
- }
-
//
// Calculate DistanceToIn(p,v)
//
@@ -524,8 +472,7 @@ G4double G4TwistedTubs::DistanceToIn (const G4ThreeVector& p,
G4ThreeVector normal = SurfaceNormal(p);
if (normal*v < 0)
{
- *tmpdist = 0.;
- return fLastDistanceToInWithV.value;
+ return 0;
}
}
}
@@ -557,9 +504,7 @@ G4double G4TwistedTubs::DistanceToIn (const G4ThreeVector& p,
bestxx = xx;
}
}
- *tmpdist = distance;
-
- return fLastDistanceToInWithV.value;
+ return distance;
}
//=====================================================================
@@ -570,23 +515,6 @@ G4double G4TwistedTubs::DistanceToIn (const G4ThreeVector& p) const
// DistanceToIn(p):
// Calculate distance to surface of shape from `outside',
// allowing for tolerance
-
- //
- // checking last value
- //
-
- G4ThreeVector* tmpp;
- G4double* tmpdist;
- if (fLastDistanceToIn.p == p)
- {
- return fLastDistanceToIn.value;
- }
- else
- {
- tmpp = const_cast<G4ThreeVector*>(&(fLastDistanceToIn.p));
- tmpdist = const_cast<G4double*>(&(fLastDistanceToIn.value));
- tmpp->set(p.x(), p.y(), p.z());
- }
//
// Calculate DistanceToIn(p)
@@ -600,8 +528,7 @@ G4double G4TwistedTubs::DistanceToIn (const G4ThreeVector& p) const
{}
case (kSurface) :
{
- *tmpdist = 0.;
- return fLastDistanceToIn.value;
+ return 0;
}
case (kOutside) :
{
@@ -628,8 +555,7 @@ G4double G4TwistedTubs::DistanceToIn (const G4ThreeVector& p) const
bestxx = xx;
}
}
- *tmpdist = distance;
- return fLastDistanceToIn.value;
+ return distance;
}
default :
{
@@ -656,32 +582,11 @@ G4double G4TwistedTubs::DistanceToOut( const G4ThreeVector& p,
// The function returns kInfinity if no intersection or
// just grazing within tolerance.
- //
- // checking last value
- //
-
- G4ThreeVector* tmpp;
- G4ThreeVector* tmpv;
- G4double* tmpdist;
- if ((fLastDistanceToOutWithV.p == p) && (fLastDistanceToOutWithV.vec == v) )
- {
- return fLastDistanceToOutWithV.value;
- }
- else
- {
- tmpp = const_cast<G4ThreeVector*>(&(fLastDistanceToOutWithV.p));
- tmpv = const_cast<G4ThreeVector*>(&(fLastDistanceToOutWithV.vec));
- tmpdist = const_cast<G4double*>(&(fLastDistanceToOutWithV.value));
- tmpp->set(p.x(), p.y(), p.z());
- tmpv->set(v.x(), v.y(), v.z());
- }
-
//
// Calculate DistanceToOut(p,v)
//
EInside currentside = Inside(p);
-
if (currentside == kOutside)
{
}
@@ -693,16 +598,14 @@ G4double G4TwistedTubs::DistanceToOut( const G4ThreeVector& p,
// If the particle is exiting from the volume, return 0.
//
G4ThreeVector normal = SurfaceNormal(p);
- G4VTwistSurface *blockedsurface = fLastNormal.surface[0];
if (normal*v > 0)
{
if (calcNorm)
{
- *norm = (blockedsurface->GetNormal(p, true));
- *validNorm = blockedsurface->IsValidNorm();
+ *norm = normal;
+ *validNorm = true;
}
- *tmpdist = 0.;
- return fLastDistanceToOutWithV.value;
+ return 0;
}
}
}
@@ -746,9 +649,7 @@ G4double G4TwistedTubs::DistanceToOut( const G4ThreeVector& p,
}
}
- *tmpdist = distance;
-
- return fLastDistanceToOutWithV.value;
+ return distance;
}
@@ -761,23 +662,6 @@ G4double G4TwistedTubs::DistanceToOut( const G4ThreeVector& p ) const
// Calculate distance to surface of shape from `inside',
// allowing for tolerance
- //
- // checking last value
- //
-
- G4ThreeVector* tmpp;
- G4double* tmpdist;
- if (fLastDistanceToOut.p == p)
- {
- return fLastDistanceToOut.value;
- }
- else
- {
- tmpp = const_cast<G4ThreeVector*>(&(fLastDistanceToOut.p));
- tmpdist = const_cast<G4double*>(&(fLastDistanceToOut.value));
- tmpp->set(p.x(), p.y(), p.z());
- }
-
//
// Calculate DistanceToOut(p)
//
@@ -791,8 +675,7 @@ G4double G4TwistedTubs::DistanceToOut( const G4ThreeVector& p ) const
}
case (kSurface) :
{
- *tmpdist = 0.;
- return fLastDistanceToOut.value;
+ return 0;
}
case (kInside) :
{
@@ -819,9 +702,7 @@ G4double G4TwistedTubs::DistanceToOut( const G4ThreeVector& p ) const
bestxx = xx;
}
}
- *tmpdist = distance;
-
- return fLastDistanceToOut.value;
+ return distance;
}
default :
{
diff --git a/source/geometry/solids/specific/src/G4VTwistedFaceted.cc b/source/geometry/solids/specific/src/G4VTwistedFaceted.cc
index b8d5c74539453e7a5a5f99623c5e4c9477ff8014..5a524e3398509d340955028835cdf6d52b70b66b 100644
--- a/source/geometry/solids/specific/src/G4VTwistedFaceted.cc
+++ b/source/geometry/solids/specific/src/G4VTwistedFaceted.cc
@@ -54,6 +54,7 @@ namespace
G4Mutex polyhedronMutex = G4MUTEX_INITIALIZER;
}
+
//=====================================================================
//* constructors ------------------------------------------------------
@@ -222,12 +223,7 @@ G4VTwistedFaceted::G4VTwistedFaceted(const G4VTwistedFaceted& rhs)
fDx3(rhs.fDx3), fDx4(rhs.fDx4), fDz(rhs.fDz), fDx(rhs.fDx), fDy(rhs.fDy),
fAlph(rhs.fAlph), fTAlph(rhs.fTAlph), fdeltaX(rhs.fdeltaX),
fdeltaY(rhs.fdeltaY), fPhiTwist(rhs.fPhiTwist), fLowerEndcap(nullptr),
- fUpperEndcap(nullptr), fSide0(nullptr), fSide90(nullptr), fSide180(nullptr), fSide270(nullptr),
- fLastInside(rhs.fLastInside), fLastNormal(rhs.fLastNormal),
- fLastDistanceToIn(rhs.fLastDistanceToIn),
- fLastDistanceToOut(rhs.fLastDistanceToOut),
- fLastDistanceToInWithV(rhs.fLastDistanceToInWithV),
- fLastDistanceToOutWithV(rhs.fLastDistanceToOutWithV)
+ fUpperEndcap(nullptr), fSide0(nullptr), fSide90(nullptr), fSide180(nullptr), fSide270(nullptr)
{
CreateSurfaces();
}
@@ -257,11 +253,6 @@ G4VTwistedFaceted& G4VTwistedFaceted::operator = (const G4VTwistedFaceted& rhs)
fCubicVolume= rhs.fCubicVolume; fSurfaceArea= rhs.fSurfaceArea;
fRebuildPolyhedron = false;
delete fpPolyhedron; fpPolyhedron = nullptr;
- fLastInside= rhs.fLastInside; fLastNormal= rhs.fLastNormal;
- fLastDistanceToIn= rhs.fLastDistanceToIn;
- fLastDistanceToOut= rhs.fLastDistanceToOut;
- fLastDistanceToInWithV= rhs.fLastDistanceToInWithV;
- fLastDistanceToOutWithV= rhs.fLastDistanceToOutWithV;
CreateSurfaces();
@@ -347,20 +338,7 @@ G4VTwistedFaceted::CalculateExtent( const EAxis pAxis,
EInside G4VTwistedFaceted::Inside(const G4ThreeVector& p) const
{
- G4ThreeVector *tmpp;
- EInside *tmpin;
- if (fLastInside.p == p)
- {
- return fLastInside.inside;
- }
- else
- {
- tmpp = const_cast<G4ThreeVector*>(&(fLastInside.p));
- tmpin = const_cast<EInside*>(&(fLastInside.inside));
- tmpp->set(p.x(), p.y(), p.z());
- }
-
- *tmpin = kOutside ;
+ EInside tmpin = kOutside ;
G4double phi = p.z()/(2*fDz) * fPhiTwist ; // rotate the point to z=0
G4double cphi = std::cos(-phi) ;
@@ -414,13 +392,13 @@ EInside G4VTwistedFaceted::Inside(const G4ThreeVector& p) const
if ( posy <= yMax - kCarTolerance*0.5
&& posy >= yMin + kCarTolerance*0.5 )
{
- if (std::fabs(posz) <= fDz - kCarTolerance*0.5 ) *tmpin = kInside ;
- else if (std::fabs(posz) <= fDz + kCarTolerance*0.5 ) *tmpin = kSurface ;
+ if (std::fabs(posz) <= fDz - kCarTolerance*0.5 ) tmpin = kInside ;
+ else if (std::fabs(posz) <= fDz + kCarTolerance*0.5 ) tmpin = kSurface ;
}
else if ( posy <= yMax + kCarTolerance*0.5
&& posy >= yMin - kCarTolerance*0.5 )
{
- if (std::fabs(posz) <= fDz + kCarTolerance*0.5 ) *tmpin = kSurface ;
+ if (std::fabs(posz) <= fDz + kCarTolerance*0.5 ) tmpin = kSurface ;
}
}
else if ( posx <= xMax + kCarTolerance*0.5
@@ -429,15 +407,15 @@ EInside G4VTwistedFaceted::Inside(const G4ThreeVector& p) const
if ( posy <= yMax + kCarTolerance*0.5
&& posy >= yMin - kCarTolerance*0.5 )
{
- if (std::fabs(posz) <= fDz + kCarTolerance*0.5) *tmpin = kSurface ;
+ if (std::fabs(posz) <= fDz + kCarTolerance*0.5) tmpin = kSurface ;
}
}
#ifdef G4TWISTDEBUG
- G4cout << "inside = " << fLastInside.inside << G4endl ;
+ G4cout << "inside = " << tmpin << G4endl ;
#endif
- return fLastInside.inside;
+ return tmpin;
}
@@ -454,15 +432,6 @@ G4ThreeVector G4VTwistedFaceted::SurfaceNormal(const G4ThreeVector& p) const
// Which of the three or four surfaces are we closest to?
//
- if (fLastNormal.p == p)
- {
- return fLastNormal.vec;
- }
-
- auto tmpp = const_cast<G4ThreeVector*>(&(fLastNormal.p));
- auto tmpnormal = const_cast<G4ThreeVector*>(&(fLastNormal.vec));
- auto tmpsurface = const_cast<G4VTwistSurface**>(fLastNormal.surface);
- tmpp->set(p.x(), p.y(), p.z());
G4double distance = kInfinity;
@@ -490,10 +459,7 @@ G4ThreeVector G4VTwistedFaceted::SurfaceNormal(const G4ThreeVector& p) const
}
}
- tmpsurface[0] = surfaces[besti];
- *tmpnormal = tmpsurface[0]->GetNormal(bestxx, true);
-
- return fLastNormal.vec;
+ return surfaces[besti]->GetNormal(bestxx, true);
}
@@ -510,26 +476,6 @@ G4double G4VTwistedFaceted::DistanceToIn (const G4ThreeVector& p,
// The function returns kInfinity if no intersection or
// just grazing within tolerance.
- //
- // checking last value
- //
-
- G4ThreeVector* tmpp;
- G4ThreeVector* tmpv;
- G4double* tmpdist;
- if (fLastDistanceToInWithV.p == p && fLastDistanceToInWithV.vec == v)
- {
- return fLastDistanceToIn.value;
- }
- else
- {
- tmpp = const_cast<G4ThreeVector*>(&(fLastDistanceToInWithV.p));
- tmpv = const_cast<G4ThreeVector*>(&(fLastDistanceToInWithV.vec));
- tmpdist = const_cast<G4double*>(&(fLastDistanceToInWithV.value));
- tmpp->set(p.x(), p.y(), p.z());
- tmpv->set(v.x(), v.y(), v.z());
- }
-
//
// Calculate DistanceToIn(p,v)
//
@@ -547,8 +493,7 @@ G4double G4VTwistedFaceted::DistanceToIn (const G4ThreeVector& p,
G4ThreeVector normal = SurfaceNormal(p);
if (normal*v < 0)
{
- *tmpdist = 0.;
- return fLastDistanceToInWithV.value;
+ return 0;
}
}
@@ -574,7 +519,7 @@ G4double G4VTwistedFaceted::DistanceToIn (const G4ThreeVector& p,
for (const auto & surface : surfaces)
{
#ifdef G4TWISTDEBUG
- G4cout << G4endl << "surface " << i << ": " << G4endl << G4endl ;
+ G4cout << G4endl << "surface " << &surface - &*surfaces << ": " << G4endl << G4endl ;
#endif
G4double tmpdistance = surface->DistanceToIn(p, v, xx);
#ifdef G4TWISTDEBUG
@@ -592,9 +537,8 @@ G4double G4VTwistedFaceted::DistanceToIn (const G4ThreeVector& p,
G4cout << "best distance = " << distance << G4endl ;
#endif
- *tmpdist = distance;
// timer.Stop();
- return fLastDistanceToInWithV.value;
+ return distance;
}
@@ -608,23 +552,6 @@ G4double G4VTwistedFaceted::DistanceToIn (const G4ThreeVector& p) const
// allowing for tolerance
//
- //
- // checking last value
- //
-
- G4ThreeVector* tmpp;
- G4double* tmpdist;
- if (fLastDistanceToIn.p == p)
- {
- return fLastDistanceToIn.value;
- }
- else
- {
- tmpp = const_cast<G4ThreeVector*>(&(fLastDistanceToIn.p));
- tmpdist = const_cast<G4double*>(&(fLastDistanceToIn.value));
- tmpp->set(p.x(), p.y(), p.z());
- }
-
//
// Calculate DistanceToIn(p)
//
@@ -639,8 +566,7 @@ G4double G4VTwistedFaceted::DistanceToIn (const G4ThreeVector& p) const
case (kSurface) :
{
- *tmpdist = 0.;
- return fLastDistanceToIn.value;
+ return 0;
}
case (kOutside) :
@@ -671,8 +597,7 @@ G4double G4VTwistedFaceted::DistanceToIn (const G4ThreeVector& p) const
bestxx = xx;
}
}
- *tmpdist = distance;
- return fLastDistanceToIn.value;
+ return distance;
}
default:
@@ -702,26 +627,6 @@ G4VTwistedFaceted::DistanceToOut( const G4ThreeVector& p,
// The function returns kInfinity if no intersection or
// just grazing within tolerance.
- //
- // checking last value
- //
-
- G4ThreeVector* tmpp;
- G4ThreeVector* tmpv;
- G4double* tmpdist;
- if (fLastDistanceToOutWithV.p == p && fLastDistanceToOutWithV.vec == v )
- {
- return fLastDistanceToOutWithV.value;
- }
- else
- {
- tmpp = const_cast<G4ThreeVector*>(&(fLastDistanceToOutWithV.p));
- tmpv = const_cast<G4ThreeVector*>(&(fLastDistanceToOutWithV.vec));
- tmpdist = const_cast<G4double*>(&(fLastDistanceToOutWithV.value));
- tmpp->set(p.x(), p.y(), p.z());
- tmpv->set(v.x(), v.y(), v.z());
- }
-
//
// Calculate DistanceToOut(p,v)
//
@@ -737,17 +642,15 @@ G4VTwistedFaceted::DistanceToOut( const G4ThreeVector& p,
// if the particle is exiting from the volume, return 0
//
G4ThreeVector normal = SurfaceNormal(p);
- G4VTwistSurface *blockedsurface = fLastNormal.surface[0];
if (normal*v > 0)
{
if (calcNorm)
{
- *norm = (blockedsurface->GetNormal(p, true));
- *validNorm = blockedsurface->IsValidNorm();
+ *norm = normal;
+ *validNorm = true;
}
- *tmpdist = 0.;
// timer.Stop();
- return fLastDistanceToOutWithV.value;
+ return 0;
}
}
@@ -789,8 +692,7 @@ G4VTwistedFaceted::DistanceToOut( const G4ThreeVector& p,
}
}
- *tmpdist = distance;
- return fLastDistanceToOutWithV.value;
+ return distance;
}
@@ -802,24 +704,6 @@ G4double G4VTwistedFaceted::DistanceToOut( const G4ThreeVector& p ) const
// DistanceToOut(p):
// Calculate distance to surface of shape from `inside',
// allowing for tolerance
-
- //
- // checking last value
- //
-
- G4ThreeVector* tmpp;
- G4double* tmpdist;
-
- if (fLastDistanceToOut.p == p)
- {
- return fLastDistanceToOut.value;
- }
- else
- {
- tmpp = const_cast<G4ThreeVector*>(&(fLastDistanceToOut.p));
- tmpdist = const_cast<G4double*>(&(fLastDistanceToOut.value));
- tmpp->set(p.x(), p.y(), p.z());
- }
//
// Calculate DistanceToOut(p)
@@ -848,8 +732,7 @@ G4double G4VTwistedFaceted::DistanceToOut( const G4ThreeVector& p ) const
}
case (kSurface) :
{
- *tmpdist = 0.;
- retval = fLastDistanceToOut.value;
+ retval = 0;
break;
}
@@ -881,9 +764,7 @@ G4double G4VTwistedFaceted::DistanceToOut( const G4ThreeVector& p ) const
bestxx = xx;
}
}
- *tmpdist = distance;
-
- retval = fLastDistanceToOut.value;
+ retval = distance;
break;
}

View File

@@ -62,8 +62,7 @@ def filter_sbang(self):
pattern = "^#!.*/usr/bin/perl"
repl = "#!{0}".format(self.spec["perl"].command.path)
files = glob.iglob("*.pl")
-        for file in files:
-            filter_file(pattern, repl, *files, backup=False)
+        filter_file(pattern, repl, *files, backup=False)
def setup_run_environment(self, env):
env.prepend_path("PERL5LIB", self.prefix.lib)
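The hunk above removes a redundant loop: Spack's `filter_file` already accepts any number of files via `*files`, so the `for file in files` wrapper ran the same multi-file substitution once per iteration (and, with a `glob.iglob` generator, the expansion could exhaust the iterable on the first pass). A minimal sketch of the same shebang rewrite, using `re` as a stand-in for `filter_file` (the interpreter path is hypothetical):

```python
import re

PATTERN = r"^#!.*/usr/bin/perl"
REPL = "#!/opt/spack/perl/bin/perl"  # hypothetical self.spec["perl"].command.path

def filter_text(text):
    # stand-in for filter_file: rewrite the shebang line of one file's text
    return re.sub(PATTERN, REPL, text, count=1, flags=re.M)

script = '#!/usr/bin/perl\nprint "hello\\n";\n'
print(filter_text(script).splitlines()[0])  # #!/opt/spack/perl/bin/perl
```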

View File

@@ -80,6 +80,9 @@ class Gmsh(CMakePackage):
# https://gmsh.info/doc/texinfo/gmsh.html#Compiling-the-source-code
# We make changes to the GMSH default, such as external blas.
depends_on("libpng", when="+fltk")
depends_on("libjpeg-turbo", when="+fltk")
depends_on("zlib-api")
depends_on("blas", when="~eigen")
depends_on("lapack", when="~eigen")
depends_on("eigen@3:", when="+eigen+external")

View File

@@ -15,7 +15,7 @@ class GoblinHmcSim(MakefilePackage):
homepage = "https://github.com/tactcomplabs/gc64-hmcsim"
git = "https://github.com/tactcomplabs/gc64-hmcsim"
# The version numbers track the SST they were released with
-    url = "https://github.com/tactcomplabs/gc64-hmcsim/archive/sst-8.0.0-release.tar.gz"
+    url = "https://github.com/tactcomplabs/gc64-hmcsim/archive/refs/tags/sst-8.0.0-release.tar.gz"
# This works with parallel builds outside Spack
# For some reason .o files get thrashed inside Spack
parallel = False

View File

@@ -5,6 +5,7 @@
import os.path
+ from spack.hooks.sbang import filter_shebang
from spack.package import *
@@ -15,23 +16,39 @@ class Grackle(Package):
simulation code
"""
-    homepage = "http://grackle.readthedocs.io/en/grackle-3.1/"
-    url = "https://bitbucket.org/grackle/grackle/get/grackle-3.1.tar.bz2"
+    homepage = "http://grackle.readthedocs.io/en/latest/"
+    url = "https://github.com/grackle-project/grackle/archive/refs/tags/grackle-3.1.tar.gz"
-    version("3.1", sha256="504fb080c7f8578c92dcde76cf9e8b851331a38ac76fc4a784df4ecbe1ff2ae8")
-    version("3.0", sha256="9219033332188d615e49135a3b030963f076b3afee098592b0c3e9f8bafdf504")
-    version("2.2", sha256="b1d201313c924df38d1e677015f7c31dce42083ef6a0e0936bb9410ccd8a3655")
-    version("2.0.1", sha256="8f784aaf53d98ddb52b448dc51eb9ec452261a2dbb360170a798693b85165f7d")
+    version("3.1", sha256="5705985a70d65bc2478cc589ca26f631a8de90e3c8f129a6b2af69db17c01079")
+    version("3.0", sha256="41e9ba1fe18043a98db194a6f5b9c76a7f0296a95a457d2b7d73311195b7d781")
+    version("2.2", sha256="5855cb0f93736fd8dd47efeb0abdf36af9339ede86de7f895f527513566c0fae")
+    version("2.0.1", sha256="bcdf6b3ff7b7515ae5e9f1f3369b2690ed8b3c450040e92a03e40582f57a0864")
variant("float", default=False, description="Build with float")
-    depends_on("libtool", when="@2.2")
+    depends_on("libtool", when="@2.2:")
depends_on("c", type="build")
depends_on("fortran", type="build")
depends_on("tcsh", type="build")
depends_on("mpi")
depends_on("hdf5+mpi")
parallel = False
@run_before("install")
def filter_sbang(self):
"""Run before install so that the standard Spack sbang install hook
can fix up the path to the tcsh binary.
"""
tcsh = self.spec["tcsh"].command
with working_dir(self.stage.source_path):
match = "^#!/bin/csh.*"
substitute = f"#!{tcsh}"
filter_file(match, substitute, "configure")
# Since scripts are run during installation, we need to add sbang
filter_shebang("configure")
def install(self, spec, prefix):
template_name = "{0.architecture}-{0.compiler.name}"
grackle_architecture = template_name.format(spec)
@@ -59,7 +76,7 @@ def install(self, spec, prefix):
filter_file(key, value, makefile)
configure()
-        with working_dir("src/clib"):
+        with working_dir(join_path(self.stage.source_path, "src", "clib")):
make("clean")
make("machine-{0}".format(grackle_architecture))
make("opt-high")
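The `working_dir` change above swaps a path resolved against the current working directory for one anchored at the stage's source path, so the `make` steps no longer depend on where earlier build phases left the process. A small sketch of the difference (the stage path is hypothetical):

```python
import os.path

source_path = "/tmp/spack-stage/grackle/source"  # hypothetical self.stage.source_path

relative = os.path.join("src", "clib")               # old: resolved against the cwd
absolute = os.path.join(source_path, "src", "clib")  # new: unambiguous

print(os.path.isabs(relative), os.path.isabs(absolute))  # False True
```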

View File

@@ -89,24 +89,20 @@ def filter_sbang(self):
pattern = "^#!.*/usr/bin/env python"
repl = f"#!{self.spec['python'].command.path}"
files = ["hisat2-build", "hisat2-inspect"]
-        for file in files:
-            filter_file(pattern, repl, *files, backup=False)
+        filter_file(pattern, repl, *files, backup=False)
pattern = "^#!.*/usr/bin/env perl"
repl = f"#!{self.spec['perl'].command.path}"
files = ["hisat2"]
-        for file in files:
-            filter_file(pattern, repl, *files, backup=False)
+        filter_file(pattern, repl, *files, backup=False)
pattern = "^#!.*/usr/bin/env python3"
repl = f"#!{self.spec['python'].command.path}"
files = glob.glob("*.py")
-        for file in files:
-            filter_file(pattern, repl, *files, backup=False)
+        filter_file(pattern, repl, *files, backup=False)
with working_dir(self.prefix.scripts):
pattern = "^#!.*/usr/bin/perl"
repl = f"#!{self.spec['perl'].command.path}"
files = glob.glob("*.pl")
-            for file in files:
-                filter_file(pattern, repl, *files, backup=False)
+            filter_file(pattern, repl, *files, backup=False)

View File

@@ -7,6 +7,7 @@
import platform
import re
+ from spack.build_environment import optimization_flags
from spack.package import *
@@ -161,7 +162,7 @@ def edit(self, spec, prefix):
if spec.satisfies("%intel"):
# with intel-parallel-studio+mpi the '-march' arguments
# are not passed to icc
-            arch_opt = spec.architecture.target.optimization_flags(spec.compiler)
+            arch_opt = optimization_flags(self.compiler, spec.target)
self.config["@CCFLAGS@"] = f"-O3 -restrict -ansi-alias -ip {arch_opt}"
self.config["@CCNOOPT@"] = "-restrict"
self._write_make_arch(spec, prefix)
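Several packages in this set (ior, lammps, namd, neuron) make the same migration: the architecture-flag lookup moves from a method on the target object to the free function `spack.build_environment.optimization_flags(compiler, target)`. A stubbed sketch of the call-shape change (the lookup table and names below are made up, not Spack internals):

```python
def optimization_flags(compiler, target):
    # hypothetical stand-in for spack.build_environment.optimization_flags
    table = {
        ("gcc", "zen2"): "-march=znver2",
        ("intel", "skylake_avx512"): "-march=skylake-avx512",
    }
    return table.get((compiler, target), "")

# old spelling: spec.architecture.target.optimization_flags(spec.compiler)
# new spelling: optimization_flags(self.compiler, spec.target)
print(optimization_flags("gcc", "zen2"))  # -march=znver2
```

Note the argument order also flips: the compiler comes first in the new function.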

View File

@@ -5,6 +5,7 @@
import datetime as dt
import os
+ from spack.build_environment import optimization_flags
from spack.package import *
@@ -898,7 +899,7 @@ def cmake_args(self):
args.append(self.define("CMAKE_CXX_FLAGS_RELWITHDEBINFO", cxx_flags))
# Overwrite generic cpu tune option
-            cmake_tune_flags = spec.architecture.target.optimization_flags(spec.compiler)
+            cmake_tune_flags = optimization_flags(self.compiler, spec.target)
args.append(self.define("CMAKE_TUNE_FLAGS", cmake_tune_flags))
args.append(self.define_from_variant("LAMMPS_SIZES", "lammps_sizes"))

View File

@@ -15,6 +15,7 @@ class Libwebsockets(CMakePackage):
license("MIT")
+ version("4.3.3", sha256="6fd33527b410a37ebc91bb64ca51bdabab12b076bc99d153d7c5dd405e4bdf90")
version("2.2.1", sha256="e7f9eaef258e003c9ada0803a9a5636757a5bc0a58927858834fb38a87d18ad2")
version("2.1.1", sha256="96183cbdfcd6e6a3d9465e854a924b7bfde6c8c6d3384d6159ad797c2e823b4d")
version("2.1.0", sha256="bcc96aaa609daae4d3f7ab1ee480126709ef4f6a8bf9c85de40aae48e38cce66")
@@ -26,3 +27,6 @@ class Libwebsockets(CMakePackage):
depends_on("zlib-api")
depends_on("openssl")
+    def cmake_args(self):
+        return ["-DLWS_WITHOUT_TESTAPPS=ON"]

View File

@@ -24,9 +24,9 @@ class LinuxExternalModules(MakefilePackage):
# linux-external-modules.
how_to = "https://docs.kernel.org/kbuild/modules.html"
-    maintainers("fleshling", "rountree")
+    maintainers("kyotsukete", "rountree")
-    license("GPL-2.0-only", checked_by="fleshling")
+    license("GPL-2.0-only", checked_by="kyotsukete")
version("6.10.3", sha256="fa5f22fd67dd05812d39dca579320c493048e26c4a556048a12385e7ae6fc698")
version("6.10.2", sha256="73d8520dd9cba5acfc5e7208e76b35d9740b8aae38210a9224e32ec4c0d29b70")

View File

@@ -0,0 +1,52 @@
commit 5053d62162ad01d78da42a405f683aaf53c5724e
Author: Satish Balay <balay@mcs.anl.gov>
Date: Tue Sep 24 15:18:47 2024 -0500
llvm: patch to work when ncurses is built with --with-termlib [i.e. ncurses~termlib]
diff --git a/lldb/cmake/modules/FindCursesAndPanel.cmake b/lldb/cmake/modules/FindCursesAndPanel.cmake
index aaadf214b..98242cdf5 100644
--- a/lldb/cmake/modules/FindCursesAndPanel.cmake
+++ b/lldb/cmake/modules/FindCursesAndPanel.cmake
@@ -2,12 +2,13 @@
# FindCursesAndPanel
# -----------
#
-# Find the curses and panel library as a whole.
+# Find the curses tinfo and panel library as a whole.
-if(CURSES_INCLUDE_DIRS AND CURSES_LIBRARIES AND PANEL_LIBRARIES)
+if(CURSES_INCLUDE_DIRS AND CURSES_LIBRARIES AND TINFO_LIBRARIES AND PANEL_LIBRARIES)
set(CURSESANDPANEL_FOUND TRUE)
else()
find_package(Curses QUIET)
+ find_library(TINFO_LIBRARIES NAMES tinfo DOC "The curses tinfo library" QUIET)
find_library(PANEL_LIBRARIES NAMES panel DOC "The curses panel library" QUIET)
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(CursesAndPanel
@@ -16,9 +17,10 @@ else()
REQUIRED_VARS
CURSES_INCLUDE_DIRS
CURSES_LIBRARIES
+ TINFO_LIBRARIES
PANEL_LIBRARIES)
- if(CURSES_FOUND AND PANEL_LIBRARIES)
- mark_as_advanced(CURSES_INCLUDE_DIRS CURSES_LIBRARIES PANEL_LIBRARIES)
+ if(CURSES_FOUND AND TINFO_LIBRARIES AND PANEL_LIBRARIES)
+ mark_as_advanced(CURSES_INCLUDE_DIRS CURSES_LIBRARIES TINFO_LIBRARIES PANEL_LIBRARIES)
endif()
endif()
diff --git a/lldb/source/Core/CMakeLists.txt b/lldb/source/Core/CMakeLists.txt
index dbc620b91..83003818d 100644
--- a/lldb/source/Core/CMakeLists.txt
+++ b/lldb/source/Core/CMakeLists.txt
@@ -10,7 +10,7 @@ set(LLDB_CURSES_LIBS)
set(LLDB_LIBEDIT_LIBS)
if (LLDB_ENABLE_CURSES)
- list(APPEND LLDB_CURSES_LIBS ${PANEL_LIBRARIES} ${CURSES_LIBRARIES})
+ list(APPEND LLDB_CURSES_LIBS ${PANEL_LIBRARIES} ${CURSES_LIBRARIES} ${TINFO_LIBRARIES})
if (LLVM_BUILD_STATIC)
list(APPEND LLDB_CURSES_LIBS gpm)
endif()

View File

@@ -56,6 +56,7 @@ class Llvm(CMakePackage, CudaPackage, LlvmDetection, CompilerPackage):
license("Apache-2.0")
version("main", branch="main")
+ version("19.1.0", sha256="0a08341036ca99a106786f50f9c5cb3fbe458b3b74cab6089fd368d0edb2edfe")
version("18.1.8", sha256="09c08693a9afd6236f27a2ebae62cda656eba19021ef3f94d59e931d662d4856")
version("18.1.7", sha256="b60df7cbe02cef2523f7357120fb0d46cbb443791cde3a5fb36b82c335c0afc9")
version("18.1.6", sha256="01390edfae5b809e982b530ff9088e674c62b13aa92cb9dc1e067fa2cf501083")
@@ -285,6 +286,8 @@ class Llvm(CMakePackage, CudaPackage, LlvmDetection, CompilerPackage):
description="Enable zstd support for static analyzer / lld",
)
+ provides("libllvm@19", when="@19.0.0:19")
provides("libllvm@18", when="@18.0.0:18")
provides("libllvm@17", when="@17.0.0:17")
provides("libllvm@16", when="@16.0.0:16")
provides("libllvm@15", when="@15.0.0:15")
@@ -424,6 +427,12 @@ class Llvm(CMakePackage, CudaPackage, LlvmDetection, CompilerPackage):
# Fixed in upstream versions of both
conflicts("^cmake@3.19.0", when="@6:11.0.0")
+    # llvm-19.0.1
+    patch(
+        "llvm-19.0.1-ncurses-termlib.patch",
+        when="@19: ^ncurses+termlib"
+    )
# Fix lld templates: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230463
patch(
"https://raw.githubusercontent.com/freebsd/freebsd-ports/f8f9333d8e1e5a7a6b28c5ef0ca73785db06136e/devel/llvm50/files/lld/patch-tools_lld_ELF_Symbols.cpp",

View File

@@ -0,0 +1,61 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class MsrSafe(MakefilePackage):
"""msr_safe provides controlled userspace access to model-specific registers (MSRs).
It allows system administrators to give register-level read access and bit-level write
access to trusted users in production environments. This access is useful where kernel
drivers have not caught up with new processor features, or performance constraints
requires batch access across dozens or hundreds of registers."""
homepage = "https://github.com/LLNL/msr-safe"
url = "https://github.com/LLNL/msr-safe/archive/refs/tags/v1.7.0.tar.gz"
maintainers("kyotsukete", "rountree")
license("GPL-2.0-only", checked_by="kyotsukete")
variant(
"test_linux699",
default=False,
description="This variant is for testing against Linux kernel 6.9.9",
)
requires("@0.0.0_linux6.9.9", when="+test_linux699")
conflicts("@0.0.0_linux6.9.9", when="~test_linux699")
# Version 0.0.0_linux6.9.9 is based on msr-safe@1.7.0 and solves for conflicts between 1.7.0
# and the Linux kernel version 6.9.9.
version(
"0.0.0_linux6.9.9",
sha256="2b68670eda4467eaa9ddd7340522ab2000cf9d16d083607f9c481650ea1a2fc9",
url="https://github.com/rountree/msr-safe/archive/refs/heads/linux-6.9.9-cleanup.zip",
)
version("1.7.0", sha256="bdf4f96bde92a23dc3a98716611ebbe7d302005305adf6a368cb25da9c8a609a")
version("1.6.0", sha256="defe9d12e2cdbcb1a9aa29bb09376d4156c3dbbeb7afc33315ca4b0b6859f5bb")
version("1.5.0", sha256="e91bac281339bcb0d119a74d68a73eafb5944fd933a893e0e3209576b4c6f233")
version("1.4.0", sha256="3e5a913e73978c9ce15ec5d2bf1a4583e9e5c30e4e75da0f76d9a7a6153398c0")
version("1.3.0", sha256="718dcc78272b45ffddf520078e7e54b0b6ce272f1ef0376de009a133149982a0")
version("1.2.0", sha256="d3c2e5280f94d65866f82a36fea50562dc3eaccbcaa81438562caaf35989d8e8")
version("1.1.0", sha256="5b723e9d360e15f3ed854a84de7430b2b77be1eb1515db03c66456db43684a83")
version("1.0.2", sha256="9511d021ab6510195e8cc3b0353a0ac414ab6965a188f47fbb8581f9156a970e")
depends_on("linux-external-modules")
@property
def build_targets(self):
return [
"-C",
f"{self.spec['linux-external-modules'].prefix}",
f"M={self.build_directory}",
"modules",
]
@property
def install_targets(self):
return [f"DESTDIR={self.prefix}", "spack-install"]
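The two properties above expand into ordinary out-of-tree kernel-module `make` invocations: build against the kernel source tree provided by `linux-external-modules`, with `M=` pointing at the module directory. A sketch of the command lines they produce (all prefixes hypothetical):

```python
kernel_prefix = "/opt/spack/linux-external-modules"  # hypothetical dependency prefix
build_dir = "/tmp/spack-stage/msr-safe"              # hypothetical build directory
install_prefix = "/opt/spack/msr-safe"               # hypothetical install prefix

build_targets = ["-C", kernel_prefix, f"M={build_dir}", "modules"]
install_targets = [f"DESTDIR={install_prefix}", "spack-install"]

print("make " + " ".join(build_targets))
print("make " + " ".join(install_targets))
```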

View File

@@ -9,6 +9,7 @@
import llnl.util.tty as tty
+ from spack.build_environment import optimization_flags
from spack.package import *
@@ -175,7 +176,7 @@ def _edit_arch_generic(self, spec, prefix):
# this options are take from the default provided
# configuration files
# https://github.com/UIUC-PPL/charm/pull/2778
-        archopt = spec.architecture.target.optimization_flags(spec.compiler)
+        archopt = optimization_flags(self.compiler, spec.target)
if self.spec.satisfies("^charmpp@:6.10.1"):
optims_opts = {

View File

@@ -3,6 +3,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
+ from spack.build_environment import optimization_flags
from spack.package import *
@@ -149,7 +150,7 @@ def cmake_args(self):
# add cpu arch specific optimisation flags to CMake so that they are passed
# to embedded Makefile that neuron has for compiling MOD files
-            compilation_flags = self.spec.architecture.target.optimization_flags(self.spec.compiler)
+            compilation_flags = optimization_flags(self.compiler, self.spec.target)
args.append(self.define("CMAKE_CXX_FLAGS", compilation_flags))
return args

View File

@@ -32,10 +32,10 @@ class Opendatadetector(CMakePackage):
def cmake_args(self):
args = []
# C++ Standard
args.append("-DCMAKE_CXX_STANDARD=%s" % self.spec["root"].variants["cxxstd"].value)
return args
def setup_run_environment(self, env):
env.set("OPENDATADETECTOR_DATA", join_path(self.prefix.share, "OpenDataDetector"))
env.prepend_path("LD_LIBRARY_PATH", self.prefix.lib)
env.prepend_path("LD_LIBRARY_PATH", self.prefix.lib64)

View File

@@ -17,6 +17,7 @@ class PyBeautifulsoup4(PythonPackage):
# Requires pytest
skip_modules = ["bs4.tests"]
+ version("4.12.3", sha256="74e3d1928edc070d21748185c46e3fb33490f22f52a3addee9aee0f4f7781051")
version("4.12.2", sha256="492bbc69dca35d12daac71c4db1bfff0c876c00ef4a2ffacce226d4638eb72da")
version("4.11.1", sha256="ad9aa55b65ef2808eb405f46cf74df7fcb7044d5cbc26487f96eb2ef2e436693")
version("4.10.0", sha256="c23ad23c521d818955a4151a67d81580319d4bf548d3d49f4223ae041ff98891")

View File

@@ -16,6 +16,7 @@ class PyCffi(PythonPackage):
license("MIT")
+ version("1.17.1", sha256="1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824")
version("1.16.0", sha256="bcb3ef43e58665bbda2fb198698fcae6776483e0c4a631aa5647806c25e02cc0")
version("1.15.1", sha256="d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9")
version("1.15.0", sha256="920f0d66a896c2d99f0adbb391f990a84091179542c205fa53ce5787aff87954")
@@ -33,11 +34,29 @@ class PyCffi(PythonPackage):
# setuptools before distutils, but only on Windows. This could be made
# unconditional to support Python 3.12
depends_on("python@:3.11", type=("build", "run"))
# Python 3.12 support was released in @1.16:; however, the removal of
# distutils in Python 3.12 has resulted in an imperfect fix for prefix-based
# tools like Spack, see:
# https://github.com/spack/spack/pull/46224
# https://github.com/cython/cython/pull/5754#issuecomment-1752102480
# until this is correctly fixed, do not enable 3.12 support
# depends_on("python@:3.12", type=("build", "run"), when="@1.16:")
depends_on("pkgconfig", type="build")
depends_on("py-setuptools", type="build")
depends_on("py-setuptools@66.1:", type="build", when="@1.16:")
depends_on("py-pycparser", type=("build", "run"))
depends_on("libffi")
# This patch enables allocate write+execute memory for ffi.callback() on macos
# https://github.com/conda-forge/cffi-feedstock/pull/47/files
patch(
"https://raw.githubusercontent.com/conda-forge/cffi-feedstock/refs/heads/main/recipe/0003-apple-api.patch",
when="@1.16: platform=darwin",
sha256="db836e67e2973ba7d3f4185b385fda49e2398281fc10362e5e413b75fdf93bf0",
)
def flag_handler(self, name, flags):
if self.spec.satisfies("%clang@13:"):
if name in ["cflags", "cxxflags", "cppflags"]:


@@ -12,7 +12,7 @@ class PyCftime(PythonPackage):
netCDF conventions"""
homepage = "https://unidata.github.io/cftime/"
url = "https://github.com/Unidata/cftime/archive/v1.0.3.4rel.tar.gz"
url = "https://github.com/Unidata/cftime/archive/refs/tags/v1.0.3.4rel.tar.gz"
version("1.0.3.4", sha256="f261ff8c65ceef4799784cd999b256d608c177d4c90b083553aceec3b6c23fd3")


@@ -15,6 +15,7 @@ class PyCmocean(PythonPackage):
license("MIT")
version("4.0.3", sha256="37868399fb5f41b4eac596e69803f9bfaea49946514dfb2e7f48886854250d7c")
version("3.0.3", sha256="abaf99383c1a60f52970c86052ae6c14eafa84fc16984488040283c02db77c0b")
version("2.0", sha256="13eea3c8994d8e303e32a2db0b3e686f6edfb41cb21e7b0e663c2b17eea9b03a")
@@ -23,3 +24,6 @@ class PyCmocean(PythonPackage):
depends_on("py-matplotlib", type=("build", "run"))
depends_on("py-numpy", type=("build", "run"))
depends_on("py-packaging", when="@3:", type=("build", "run"))
# https://github.com/matplotlib/cmocean/pull/99
conflicts("^py-numpy@2:", when="@:3.0")


@@ -0,0 +1,19 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PyDirtyjson(PythonPackage):
"""JSON decoder for Python that can extract data from the muck"""
homepage = "https://github.com/codecobblers/dirtyjson"
pypi = "dirtyjson/dirtyjson-1.0.8.tar.gz"
license("MIT or AFL-2.1", checked_by="qwertos")
version("1.0.8", sha256="90ca4a18f3ff30ce849d100dcf4a003953c79d3a2348ef056f1d9c22231a25fd")
depends_on("py-setuptools", type="build")


@@ -23,13 +23,37 @@ class PyDmTree(PythonPackage):
version("0.1.8", sha256="0fcaabbb14e7980377439e7140bd05552739ca5e515ecb3119f234acee4b9430")
version("0.1.7", sha256="30fec8aca5b92823c0e796a2f33b875b4dccd470b57e91e6c542405c5f77fd2a")
version("0.1.6", sha256="6776404b23b4522c01012ffb314632aba092c9541577004ab153321e87da439a")
version("0.1.5", sha256="a951d2239111dfcc468071bc8ff792c7b1e3192cab5a3c94d33a8b2bda3127fa")
version(
"0.1.6",
sha256="6776404b23b4522c01012ffb314632aba092c9541577004ab153321e87da439a",
deprecated=True,
)
version(
"0.1.5",
sha256="a951d2239111dfcc468071bc8ff792c7b1e3192cab5a3c94d33a8b2bda3127fa",
deprecated=True,
)
depends_on("cxx", type="build") # generated
depends_on("cxx", type="build")
# Based on PyPI wheel availability
depends_on("python@:3.12", when="@0.1.8:", type=("build", "run"))
depends_on("python@:3.10", when="@0.1.6:0.1.7", type=("build", "run"))
depends_on("python@:3.8", when="@0.1.5", type=("build", "run"))
depends_on("py-setuptools", type="build")
depends_on("cmake", when="@0.1.7:", type="build")
depends_on("cmake@3.12:", when="@0.1.7:", type="build")
depends_on("py-pybind11@2.10.1:", when="@0.1.8:")
depends_on("abseil-cpp", when="@0.1.8:")
patch(
"https://github.com/google-deepmind/tree/pull/73.patch?full_index=1",
sha256="77dbd895611d412da99a5afbf312c3c49984ad02bd0e56ad342b2002a87d789c",
when="@0.1.8",
)
conflicts("%gcc@13:", when="@:0.1.7")
# Historical dependencies
depends_on("bazel@:5", when="@:0.1.6", type="build")
depends_on("py-six@1.12.0:", when="@:0.1.6", type=("build", "run"))


@@ -11,7 +11,7 @@ class PyFalcon(PythonPackage):
building large-scale app backends and microservices."""
homepage = "https://github.com/falconry/falcon"
url = "https://github.com/falconry/falcon/archive/3.0.0a2.tar.gz"
url = "https://github.com/falconry/falcon/archive/refs/tags/3.0.0a2.tar.gz"
license("Apache-2.0")


@@ -18,6 +18,7 @@ class PyFiona(PythonPackage):
license("BSD-3-Clause")
version("master", branch="master")
version("1.10.1", sha256="b00ae357669460c6491caba29c2022ff0acfcbde86a95361ea8ff5cd14a86b68")
version("1.10.0", sha256="3529fd46d269ff3f70aeb9316a93ae95cf2f87d7e148a8ff0d68532bf81ff7ae")
version("1.9.6", sha256="791b3494f8b218c06ea56f892bd6ba893dfa23525347761d066fb7738acda3b1")
version("1.9.5", sha256="99e2604332caa7692855c2ae6ed91e1fffdf9b59449aa8032dd18e070e59a2f7")


@@ -13,121 +13,51 @@ class PyHorovod(PythonPackage, CudaPackage):
homepage = "https://github.com/horovod"
git = "https://github.com/horovod/horovod.git"
maintainers("adamjstewart", "aweits", "tgaddair", "thomas-bouvier")
submodules = True
license("Apache-2.0")
maintainers("adamjstewart", "aweits", "tgaddair", "thomas-bouvier")
version("master", branch="master", submodules=True)
version(
"0.28.1", tag="v0.28.1", commit="1d217b59949986d025f6db93c49943fb6b6cc78f", submodules=True
)
version(
"0.28.0", tag="v0.28.0", commit="587d72004736209a93ebda8cec0acdb7870db583", submodules=True
)
version(
"0.27.0", tag="v0.27.0", commit="bfaca90d5cf66780a97d8799d4e1573855b64560", submodules=True
)
version(
"0.26.1", tag="v0.26.1", commit="34604870eabd9dc670c222deb1da9acc6b9d7c03", submodules=True
)
version(
"0.26.0", tag="v0.26.0", commit="c638dcec972750d4a75b229bc208cff9dc76b00a", submodules=True
)
version(
"0.25.0", tag="v0.25.0", commit="48e0affcba962831668cd1222866af2d632920c2", submodules=True
)
version(
"0.24.3", tag="v0.24.3", commit="a2d9e280c1210a8e364a7dc83ca6c2182fefa99d", submodules=True
)
version(
"0.24.2", tag="v0.24.2", commit="b4c191c8d05086842517b3836285a85c6f96ab22", submodules=True
)
version(
"0.24.1", tag="v0.24.1", commit="ebd135098571722469bb6290a6d098a9e1c96574", submodules=True
)
version(
"0.24.0", tag="v0.24.0", commit="b089df66a29d3ba6672073eef3d42714d9d3626b", submodules=True
)
version(
"0.23.0", tag="v0.23.0", commit="66ad6d5a3586decdac356e8ec95c204990bbc3d6", submodules=True
)
version(
"0.22.1", tag="v0.22.1", commit="93a2f2583ed63391a904aaeb03b602729be90f15", submodules=True
)
version(
"0.22.0", tag="v0.22.0", commit="3ff94801fbb4dbf6bc47c23888c93cad4887435f", submodules=True
)
version(
"0.21.3", tag="v0.21.3", commit="6916985c9df111f36864724e2611827f64de8e11", submodules=True
)
version(
"0.21.2", tag="v0.21.2", commit="c64b1d60c6bad7834f3315f12707f8ebf11c9c3d", submodules=True
)
version(
"0.21.1", tag="v0.21.1", commit="a9dea74abc1f0b8e81cd2b6dd9fe81e2c4244e39", submodules=True
)
version(
"0.21.0", tag="v0.21.0", commit="7d71874258fc8625ad8952defad0ea5b24531248", submodules=True
)
version(
"0.20.3", tag="v0.20.3", commit="b3c4d81327590c9064d544622b6250d9a19ce2c2", submodules=True
)
version(
"0.20.2", tag="v0.20.2", commit="cef4393eb980d4137bb91256da4dd847b7f44d1c", submodules=True
)
version(
"0.20.1", tag="v0.20.1", commit="4099c2b7f34f709f0db1c09f06b2594d7b4b9615", submodules=True
)
version(
"0.20.0", tag="v0.20.0", commit="396c1319876039ad8f5a56c007a020605ccb8277", submodules=True
)
version(
"0.19.5", tag="v0.19.5", commit="b52e4b3e6ce5b1b494b77052878a0aad05c2e3ce", submodules=True
)
version(
"0.19.4", tag="v0.19.4", commit="31f1f700b8fa6d3b6df284e291e302593fbb4fa3", submodules=True
)
version(
"0.19.3", tag="v0.19.3", commit="ad63bbe9da8b41d0940260a2dd6935fa0486505f", submodules=True
)
version(
"0.19.2", tag="v0.19.2", commit="f8fb21e0ceebbdc6ccc069c43239731223d2961d", submodules=True
)
version(
"0.19.1", tag="v0.19.1", commit="9ad69e78e83c34568743e8e97b1504c6c7af34c3", submodules=True
)
version(
"0.19.0", tag="v0.19.0", commit="1a805d9b20224069b294f361e47f5d9b55f426ff", submodules=True
)
version(
"0.18.2", tag="v0.18.2", commit="bb2134b427e0e0c5a83624d02fafa4f14de623d9", submodules=True
)
version(
"0.18.1", tag="v0.18.1", commit="0008191b3e61b5dfccddabe0129bbed7cd544c56", submodules=True
)
version(
"0.18.0", tag="v0.18.0", commit="a639de51e9a38d5c1f99f458c045aeaebe70351e", submodules=True
)
version(
"0.17.1", tag="v0.17.1", commit="399e70adc0f74184b5848d9a46b9b6ad67b5fe6d", submodules=True
)
version(
"0.17.0", tag="v0.17.0", commit="2fed0410774b480ad19057320be9027be06b309e", submodules=True
)
version(
"0.16.4", tag="v0.16.4", commit="2aac48c95c035bee7d68f9aff30e59319f46c21e", submodules=True
)
version(
"0.16.3", tag="v0.16.3", commit="30a2148784478415dc31d65a6aa08d237f364b42", submodules=True
)
version(
"0.16.2", tag="v0.16.2", commit="217774652eeccfcd60aa6e268dfd6b766d71b768", submodules=True
)
version("master", branch="master")
version("0.28.1", tag="v0.28.1", commit="1d217b59949986d025f6db93c49943fb6b6cc78f")
version("0.28.0", tag="v0.28.0", commit="587d72004736209a93ebda8cec0acdb7870db583")
version("0.27.0", tag="v0.27.0", commit="bfaca90d5cf66780a97d8799d4e1573855b64560")
version("0.26.1", tag="v0.26.1", commit="34604870eabd9dc670c222deb1da9acc6b9d7c03")
version("0.26.0", tag="v0.26.0", commit="c638dcec972750d4a75b229bc208cff9dc76b00a")
version("0.25.0", tag="v0.25.0", commit="48e0affcba962831668cd1222866af2d632920c2")
version("0.24.3", tag="v0.24.3", commit="a2d9e280c1210a8e364a7dc83ca6c2182fefa99d")
version("0.24.2", tag="v0.24.2", commit="b4c191c8d05086842517b3836285a85c6f96ab22")
version("0.24.1", tag="v0.24.1", commit="ebd135098571722469bb6290a6d098a9e1c96574")
version("0.24.0", tag="v0.24.0", commit="b089df66a29d3ba6672073eef3d42714d9d3626b")
version("0.23.0", tag="v0.23.0", commit="66ad6d5a3586decdac356e8ec95c204990bbc3d6")
version("0.22.1", tag="v0.22.1", commit="93a2f2583ed63391a904aaeb03b602729be90f15")
version("0.22.0", tag="v0.22.0", commit="3ff94801fbb4dbf6bc47c23888c93cad4887435f")
version("0.21.3", tag="v0.21.3", commit="6916985c9df111f36864724e2611827f64de8e11")
version("0.21.2", tag="v0.21.2", commit="c64b1d60c6bad7834f3315f12707f8ebf11c9c3d")
version("0.21.1", tag="v0.21.1", commit="a9dea74abc1f0b8e81cd2b6dd9fe81e2c4244e39")
version("0.21.0", tag="v0.21.0", commit="7d71874258fc8625ad8952defad0ea5b24531248")
version("0.20.3", tag="v0.20.3", commit="b3c4d81327590c9064d544622b6250d9a19ce2c2")
version("0.20.2", tag="v0.20.2", commit="cef4393eb980d4137bb91256da4dd847b7f44d1c")
version("0.20.1", tag="v0.20.1", commit="4099c2b7f34f709f0db1c09f06b2594d7b4b9615")
version("0.20.0", tag="v0.20.0", commit="396c1319876039ad8f5a56c007a020605ccb8277")
version("0.19.5", tag="v0.19.5", commit="b52e4b3e6ce5b1b494b77052878a0aad05c2e3ce")
version("0.19.4", tag="v0.19.4", commit="31f1f700b8fa6d3b6df284e291e302593fbb4fa3")
version("0.19.3", tag="v0.19.3", commit="ad63bbe9da8b41d0940260a2dd6935fa0486505f")
version("0.19.2", tag="v0.19.2", commit="f8fb21e0ceebbdc6ccc069c43239731223d2961d")
version("0.19.1", tag="v0.19.1", commit="9ad69e78e83c34568743e8e97b1504c6c7af34c3")
version("0.19.0", tag="v0.19.0", commit="1a805d9b20224069b294f361e47f5d9b55f426ff")
version("0.18.2", tag="v0.18.2", commit="bb2134b427e0e0c5a83624d02fafa4f14de623d9")
version("0.18.1", tag="v0.18.1", commit="0008191b3e61b5dfccddabe0129bbed7cd544c56")
version("0.18.0", tag="v0.18.0", commit="a639de51e9a38d5c1f99f458c045aeaebe70351e")
version("0.17.1", tag="v0.17.1", commit="399e70adc0f74184b5848d9a46b9b6ad67b5fe6d")
version("0.17.0", tag="v0.17.0", commit="2fed0410774b480ad19057320be9027be06b309e")
version("0.16.4", tag="v0.16.4", commit="2aac48c95c035bee7d68f9aff30e59319f46c21e")
version("0.16.3", tag="v0.16.3", commit="30a2148784478415dc31d65a6aa08d237f364b42")
version("0.16.2", tag="v0.16.2", commit="217774652eeccfcd60aa6e268dfd6b766d71b768")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated
depends_on("fortran", type="build") # generated
depends_on("c", type="build")
depends_on("cxx", type="build")
depends_on("fortran", type="build")
# https://github.com/horovod/horovod/blob/master/docs/install.rst
variant(
@@ -232,7 +162,20 @@ class PyHorovod(PythonPackage, CudaPackage):
"controllers=gloo", when="@:0.20.0 platform=darwin", msg="Gloo cannot be compiled on MacOS"
)
# https://github.com/horovod/horovod/issues/3996
conflicts("^py-torch@2.1:", when="@:0.28.1")
patch(
"https://github.com/horovod/horovod/pull/3998.patch?full_index=1",
sha256="9ecd4e8e315764afab20f2086e24baccf8178779a3c663196b24dc55a23a6aca",
when="@0.25:0.28.1",
)
conflicts("^py-torch@2.1:", when="@:0.24")
# https://github.com/horovod/horovod/pull/3957
patch(
"https://github.com/horovod/horovod/pull/3957.patch?full_index=1",
sha256="9e22e312c0cbf224b4135ba70bd4fd2e4170d8316c996643e360112abaac8f93",
when="@0.21:0.28.1",
)
conflicts("%gcc@13:", when="@:0.20")
# https://github.com/horovod/horovod/pull/1835
patch("fma.patch", when="@0.19.0:0.19.1")


@@ -99,6 +99,12 @@ class PyJaxlib(PythonPackage, CudaPackage):
depends_on("py-numpy@:1", when="@:0.4.25")
depends_on("py-ml-dtypes@0.4:", when="@0.4.29")
patch(
"https://github.com/google/jax/pull/20101.patch?full_index=1",
sha256="4dfb9f32d4eeb0a0fb3a6f4124c4170e3fe49511f1b768cd634c78d489962275",
when="@:0.4.25",
)
conflicts(
"cuda_arch=none",
when="+cuda",


@@ -0,0 +1,21 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PyJiter(PythonPackage):
"""Fast iterable JSON parser."""
homepage = "https://github.com/pydantic/jiter/"
pypi = "jiter/jiter-0.5.0.tar.gz"
license("MIT", checked_by="qwertos")
version("0.5.0", sha256="1d916ba875bcab5c5f7d927df998c4cb694d27dceddf3392e58beaf10563368a")
depends_on("python@3.8:", type=("build", "run"))
depends_on("py-maturin@1", type="build")
depends_on("rust@1.73:", type=("build", "run"))


@@ -0,0 +1,27 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PyLinkchecker(PythonPackage):
"""Check for broken links in web sites."""
homepage = "https://linkchecker.github.io/linkchecker/"
pypi = "LinkChecker/LinkChecker-10.5.0.tar.gz"
maintainers("rbberger")
license("GPL-2.0")
version("10.5.0", sha256="978b42b803e58b7a8f6ffae1ff88fa7fd1e87b944403b5dc82380dd59f516bb9")
depends_on("python@3.9:", type=("build", "run"))
depends_on("py-requests@2.20:", type=("build", "run"))
depends_on("py-dnspython@2:", type=("build", "run"))
depends_on("py-beautifulsoup4@4.8.1:", type=("build", "run"))
depends_on("py-hatchling@1.8.0:", type="build")
depends_on("py-hatch-vcs", type="build")
depends_on("py-setuptools-scm@7.1.0:", type="build")


@@ -23,10 +23,21 @@ class PyNvidiaDali(PythonPackage):
system = platform.system().lower()
arch = platform.machine()
if "linux" in system and arch == "x86_64":
version(
"1.41.0-cuda120",
sha256="240b4135e7c71c5f669d2f2970fa350f7ad1a0a4aab588a3ced578f9b6d7abd9",
url="https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.41.0-17427117-py3-none-manylinux2014_x86_64.whl",
expand=False,
)
version(
"1.41.0.cuda110",
sha256="6b12993384b694463c651a6c22621e6982b8834946eefcc864ab061b5c6e972e",
url="https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.41.0-17427118-py3-none-manylinux2014_x86_64.whl",
expand=False,
)
version(
"1.36.0-cuda120",
sha256="9a7754aacb245785462592aec89cbaec72e0a84d84399a061a563546bbf44805",
preferred=True,
url="https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.36.0-13435171-py3-none-manylinux2014_x86_64.whl",
expand=False,
)
@@ -109,10 +120,21 @@ class PyNvidiaDali(PythonPackage):
expand=False,
)
elif "linux" in system and arch == "aarch64":
version(
"1.41.0-cuda120",
sha256="5b9eddcd6433244a1c5bec44db71c5dccede7d81f929711c634c4d79f6ce5f81",
url="https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.41.0-17427117-py3-none-manylinux2014_aarch64.whl",
expand=False,
)
version(
"1.41.0-cuda110",
sha256="7ec004a65ea7c1bd1272f27b3a5aea9f0d74e95e5d54523db2fabbf8b6efedc9",
url="https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.41.0-17427118-py3-none-manylinux2014_aarch64.whl",
expand=False,
)
version(
"1.36.0-cuda120",
sha256="575ae1ff9b7633c847182163e2d339f2bdafe8dd0ca4ca6e3092a02890f803c2",
preferred=True,
url="https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.36.0-13435171-py3-none-manylinux2014_aarch64.whl",
expand=False,
)
@@ -196,6 +218,7 @@ class PyNvidiaDali(PythonPackage):
)
cuda120_versions = (
"@1.41.0-cuda120",
"@1.36.0-cuda120",
"@1.27.0-cuda120",
"@1.26.0-cuda120",
@@ -205,6 +228,7 @@ class PyNvidiaDali(PythonPackage):
"@1.22.0-cuda120",
)
cuda110_versions = (
"@1.41.0-cuda110",
"@1.36.0-cuda110",
"@1.27.0-cuda110",
"@1.26.0-cuda110",


@@ -45,6 +45,7 @@ class PyOnnx(PythonPackage):
# requirements.txt
depends_on("py-setuptools@64:", type="build")
depends_on("py-setuptools", type="build")
depends_on("protobuf")
depends_on("py-protobuf@3.20.2:", type=("build", "run"), when="@1.15:")
depends_on("py-protobuf@3.20.2:3", type=("build", "run"), when="@1.13")
depends_on("py-protobuf@3.12.2:3.20.1", type=("build", "run"), when="@1.12")
@@ -56,7 +57,6 @@ class PyOnnx(PythonPackage):
# https://github.com/protocolbuffers/protobuf/pull/8794, fixed in
# https://github.com/onnx/onnx/pull/3112
depends_on("py-protobuf@:3.17", type=("build", "run"), when="@:1.8")
depends_on("py-protobuf+cpp", type=("build", "run"))
depends_on("py-numpy", type=("build", "run"))
depends_on("py-numpy@1.16.6:", type=("build", "run"), when="@1.8.1:1.13")
depends_on("py-numpy@1.20:", type=("build", "run"), when="@1.16.0:")


@@ -0,0 +1,45 @@
From c4930c939cc1c8b4c6122b1e9530942ecd517fb2 Mon Sep 17 00:00:00 2001
From: Afzal Patel <Afzal.Patel@amd.com>
Date: Tue, 17 Sep 2024 19:33:51 +0000
Subject: [PATCH] Find individual ROCm dependencies
---
cmake/onnxruntime_providers_rocm.cmake | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/cmake/onnxruntime_providers_rocm.cmake b/cmake/onnxruntime_providers_rocm.cmake
index b662682915..2e9574c04d 100644
--- a/cmake/onnxruntime_providers_rocm.cmake
+++ b/cmake/onnxruntime_providers_rocm.cmake
@@ -11,6 +11,12 @@
find_package(rocblas REQUIRED)
find_package(MIOpen REQUIRED)
find_package(hipfft REQUIRED)
+ find_package(rocrand REQUIRED)
+ find_package(hipsparse REQUIRED)
+ find_package(hipcub REQUIRED)
+ find_package(rocprim REQUIRED)
+ find_package(rocthrust REQUIRED)
+ find_package(hipblas REQUIRED)
# MIOpen version
if(NOT DEFINED ENV{MIOPEN_PATH})
@@ -147,7 +153,14 @@
${eigen_INCLUDE_DIRS}
PUBLIC
${onnxruntime_ROCM_HOME}/include
- ${onnxruntime_ROCM_HOME}/include/roctracer)
+ ${onnxruntime_ROCM_HOME}/include/roctracer
+ ${HIPRAND_INCLUDE_DIR}
+ ${ROCRAND_INCLUDE_DIR}
+ ${HIPSPARSE_INCLUDE_DIR}
+ ${HIPCUB_INCLUDE_DIR}
+ ${ROCPRIM_INCLUDE_DIR}
+ ${ROCTHRUST_INCLUDE_DIR}
+ ${HIPBLAS_INCLUDE_DIR})
set_target_properties(onnxruntime_providers_rocm PROPERTIES LINKER_LANGUAGE CXX)
set_target_properties(onnxruntime_providers_rocm PROPERTIES FOLDER "ONNXRuntime")
--
2.43.5


@@ -6,7 +6,7 @@
from spack.package import *
class PyOnnxruntime(CMakePackage, PythonExtension):
class PyOnnxruntime(CMakePackage, PythonExtension, ROCmPackage):
"""ONNX Runtime is a performance-focused complete scoring
engine for Open Neural Network Exchange (ONNX) models, with
an open extensible architecture to continually address the
@@ -22,6 +22,9 @@ class PyOnnxruntime(CMakePackage, PythonExtension):
license("MIT")
version("1.18.2", tag="v1.18.2", commit="9691af1a2a17b12af04652f4d8d2a18ce9507025")
version("1.18.1", tag="v1.18.1", commit="387127404e6c1d84b3468c387d864877ed1c67fe")
version("1.18.0", tag="v1.18.0", commit="45737400a2f3015c11f005ed7603611eaed306a6")
version("1.17.3", tag="v1.17.3", commit="56b660f36940a919295e6f1e18ad3a9a93a10bf7")
version("1.17.1", tag="v1.17.1", commit="8f5c79cb63f09ef1302e85081093a3fe4da1bc7d")
version("1.10.0", tag="v1.10.0", commit="0d9030e79888d1d5828730b254fedc53c7b640c1")
@@ -50,6 +53,8 @@ class PyOnnxruntime(CMakePackage, PythonExtension):
depends_on("py-coloredlogs", when="@1.17:", type=("build", "run"))
depends_on("py-flatbuffers", type=("build", "run"))
depends_on("py-numpy@1.16.6:", type=("build", "run"))
depends_on("py-numpy@1.21.6:", when="@1.18:", type=("build", "run"))
depends_on("py-numpy@:1", when="@:1.18", type=("build", "run"))
depends_on("py-packaging", type=("build", "run"))
depends_on("py-protobuf", type=("build", "run"))
depends_on("py-sympy@1.1:", type=("build", "run"))
@@ -60,6 +65,7 @@ class PyOnnxruntime(CMakePackage, PythonExtension):
depends_on("py-cerberus", type=("build", "run"))
depends_on("py-onnx", type=("build", "run"))
depends_on("py-onnx@:1.15.0", type=("build", "run"), when="@:1.17")
depends_on("py-onnx@:1.16", type=("build", "run"), when="@:1.18")
depends_on("zlib-api")
depends_on("libpng")
depends_on("cuda", when="+cuda")
@@ -67,6 +73,35 @@ class PyOnnxruntime(CMakePackage, PythonExtension):
depends_on("iconv", type=("build", "link", "run"))
depends_on("re2+shared")
rocm_dependencies = [
"hsa-rocr-dev",
"hip",
"hiprand",
"hipsparse",
"hipfft",
"hipcub",
"hipblas",
"llvm-amdgpu",
"miopen-hip",
"migraphx",
"rocblas",
"rccl",
"rocprim",
"rocminfo",
"rocm-core",
"rocm-cmake",
"roctracer-dev",
"rocthrust",
"rocrand",
"rocsparse",
]
with when("+rocm"):
for pkg_dep in rocm_dependencies:
depends_on(f"{pkg_dep}@5.7:6.1", when="@1.17")
depends_on(f"{pkg_dep}@6.1:", when="@1.18:")
depends_on(pkg_dep)
# Adopted from CMS experiment's fork of onnxruntime
# https://github.com/cms-externals/onnxruntime/compare/5bc92df...d594f80
patch("cms.patch", level=1, when="@1.7.2")
@@ -85,6 +120,10 @@ class PyOnnxruntime(CMakePackage, PythonExtension):
when="@1.10:1.15",
)
# ORT assumes all ROCm components are installed in a single path;
# this patch finds the packages individually
patch("0001-Find-ROCm-Packages-Individually.patch", when="@1.17: +rocm")
dynamic_cpu_arch_values = ("NOAVX", "AVX", "AVX2", "AVX512")
variant(
@@ -99,10 +138,28 @@ class PyOnnxruntime(CMakePackage, PythonExtension):
root_cmakelists_dir = "cmake"
build_directory = "."
def patch(self):
if self.spec.satisfies("@1.17 +rocm"):
filter_file(
r"${onnxruntime_ROCM_HOME}/.info/version-dev",
"{0}/.info/version".format(self.spec["rocm-core"].prefix),
"cmake/CMakeLists.txt",
string=True,
)
if self.spec.satisfies("@1.18: +rocm"):
filter_file(
r"${onnxruntime_ROCM_HOME}/.info/version",
"{0}/.info/version".format(self.spec["rocm-core"].prefix),
"cmake/CMakeLists.txt",
string=True,
)
def setup_build_environment(self, env):
value = self.spec.variants["dynamic_cpu_arch"].value
value = self.dynamic_cpu_arch_values.index(value)
env.set("MLAS_DYNAMIC_CPU_ARCH", str(value))
if self.spec.satisfies("+rocm"):
env.set("MIOPEN_PATH", self.spec["miopen-hip"].prefix)
def setup_run_environment(self, env):
value = self.spec.variants["dynamic_cpu_arch"].value
@@ -137,6 +194,18 @@ def cmake_args(self):
)
)
if self.spec.satisfies("+rocm"):
args.extend(
(
define("CMAKE_HIP_COMPILER", f"{self.spec['llvm-amdgpu'].prefix}/bin/clang++"),
define("onnxruntime_USE_MIGRAPHX", "ON"),
define("onnxruntime_MIGRAPHX_HOME", self.spec["migraphx"].prefix),
define("onnxruntime_USE_ROCM", "ON"),
define("onnxruntime_ROCM_HOME", self.spec["hip"].prefix),
define("onnxruntime_ROCM_VERSION", self.spec["hip"].version),
define("onnxruntime_USE_COMPOSABLE_KERNEL", "OFF"),
)
)
return args
@run_after("install")
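The `setup_build_environment` hunk above exports the position of the chosen `dynamic_cpu_arch` value within `dynamic_cpu_arch_values` as `MLAS_DYNAMIC_CPU_ARCH`. The translation can be sketched in isolation:

```python
# Mirrors the index lookup in setup_build_environment: the variant value
# is converted to its tuple index and exported as a string.
dynamic_cpu_arch_values = ("NOAVX", "AVX", "AVX2", "AVX512")

def mlas_env_value(variant_value: str) -> str:
    return str(dynamic_cpu_arch_values.index(variant_value))

print(mlas_env_value("AVX2"))  # index 2 in the tuple
```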


@@ -0,0 +1,23 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PyPyenchant(PythonPackage):
"""Sphinx Documentation Generator."""
homepage = "https://pyenchant.github.io/pyenchant/"
pypi = "pyenchant/pyenchant-3.2.2.tar.gz"
git = "https://github.com/pyenchant/pyenchant.git"
license("LGPL-2.1")
version("3.2.2", sha256="1cf830c6614362a78aab78d50eaf7c6c93831369c52e1bb64ffae1df0341e637")
depends_on("enchant")
depends_on("python@3.5:")
depends_on("py-setuptools")


@@ -16,6 +16,7 @@ class PyRuff(PythonPackage):
license("MIT")
maintainers("adamjstewart")
version("0.6.5", sha256="4d32d87fab433c0cf285c3683dd4dae63be05fd7a1d65b3f5bf7cdd05a6b96fb")
version("0.5.7", sha256="8dfc0a458797f5d9fb622dd0efc52d796f23f0a1493a9527f4e49a550ae9a7e5")
version("0.4.5", sha256="286eabd47e7d4d521d199cab84deca135557e6d1e0f0d01c29e757c3cb151b54")
version("0.4.0", sha256="7457308d9ebf00d6a1c9a26aa755e477787a636c90b823f91cd7d4bea9e89260")


@@ -0,0 +1,26 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PySphinxFortran(PythonPackage):
"""Fortran domain and autodoc extensions to Sphinx"""
homepage = "https://sphinx-fortran.readthedocs.io"
pypi = "sphinx-fortran/sphinx-fortran-1.1.1.tar.gz"
git = "https://github.com/VACUMM/sphinx-fortran.git"
maintainers("rbberger")
license("CeCILL-2.1")
version("master", branch="master")
version("1.1.1", sha256="e912e6b292e80768ad3cf580a560a4752c2c077eda4a1bbfc3a4ca0f11fb8ee1")
depends_on("py-sphinx@1:")
depends_on("py-numpy@1:")
depends_on("py-six")
depends_on("py-future")


@@ -0,0 +1,19 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PyStriprtf(PythonPackage):
"""A simple library to convert rtf to text"""
homepage = "https://github.com/joshy/striprtf"
pypi = "striprtf/striprtf-0.0.26.tar.gz"
license("BSD-3-Clause", checked_by="qwertos")
version("0.0.26", sha256="fdb2bba7ac440072d1c41eab50d8d74ae88f60a8b6575c6e2c7805dc462093aa")
depends_on("py-setuptools", type="build")


@@ -8,6 +8,7 @@
import sys
import tempfile
from spack.build_environment import optimization_flags
from spack.package import *
rocm_dependencies = [
@@ -383,6 +384,7 @@ class PyTensorflow(Package, CudaPackage, ROCmPackage, PythonExtension):
# https://www.tensorflow.org/install/source#tested_build_configurations
# https://github.com/tensorflow/tensorflow/issues/70199
# (-mavx512fp16 exists in gcc@12:)
conflicts("%gcc@13:", when="@:2.14")
conflicts("%gcc@:11", when="@2.17:")
conflicts("%gcc@:9.3.0", when="@2.9:")
conflicts("%gcc@:7.3.0")
@@ -656,7 +658,7 @@ def setup_build_environment(self, env):
# Please specify optimization flags to use during compilation when
# bazel option '--config=opt' is specified
env.set("CC_OPT_FLAGS", spec.architecture.target.optimization_flags(spec.compiler))
env.set("CC_OPT_FLAGS", optimization_flags(self.compiler, spec.target))
# Would you like to interactively configure ./WORKSPACE for
# Android builds?


@@ -17,6 +17,7 @@ class PyTypingExtensions(PythonPackage):
license("0BSD")
version("4.12.2", sha256="1a7ead55c7e559dd4dee8856e3a88b41225abfe1ce8df57b7c13915fe121ffb8")
version("4.8.0", sha256="df8e4339e9cb77357558cbdbceca33c303714cf861d1eef15e1070055ae8b7ef")
version("4.6.3", sha256="d91d5919357fe7f681a9f2b5b4cb2a5f1ef0a1e9f59c4d8ff0d3491e05c0ffd5")
version("4.5.0", sha256="5cb5f4a79139d699607b3ef622a1dedafa84e115ab0024e0d9c044a9479ca7cb")


@@ -13,12 +13,15 @@ class Qd(AutotoolsPackage):
homepage = "https://bitbucket.org/njet/qd-library/src/master/"
git = "https://bitbucket.org/njet/qd-library.git"
url = "https://www.davidhbailey.com/dhbsoftware/qd-2.3.13.tar.gz"
tags = ["hep"]
license("BSD-3-Clause-LBNL")
version("2.3.13", commit="a57dde96b3255b80f7f39cd80217c213bf78d949")
version("2.3.24", sha256="a47b6c73f86e6421e86a883568dd08e299b20e36c11a99bdfbe50e01bde60e38")
version("2.3.23", sha256="b3eaf41ce413ec08f348ee73e606bd3ff9203e411c377c3c0467f89acf69ee26")
# The sha256 values for 2.3.23 and 2.3.13 are identical, as both tags point to the same content
version("2.3.13", sha256="b3eaf41ce413ec08f348ee73e606bd3ff9203e411c377c3c0467f89acf69ee26")
depends_on("c", type="build") # generated
depends_on("cxx", type="build") # generated


@@ -13,7 +13,7 @@ class Shc(AutotoolsPackage):
and linked to produce a stripped binary executable."""
homepage = "https://neurobin.org/projects/softwares/unix/shc/"
url = "https://github.com/neurobin/shc/archive/4.0.3.tar.gz"
url = "https://github.com/neurobin/shc/archive/refs/tags/4.0.3.tar.gz"
license("GPL-3.0-or-later")


@@ -1,38 +0,0 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class StaticAnalysisSuite(CMakePackage):
"""SAS (Static Analysis Suite) is a powerful tool for running static
analysis on C++ code."""
homepage = "https://github.com/dpiparo/SAS"
url = "https://github.com/dpiparo/SAS/archive/0.1.3.tar.gz"
version(
"0.2.0",
sha256="a369e56f8edc61dbf59ae09dbb11d98bc05fd337c5e47e13af9c913bf7bfc538",
deprecated=True,
)
version(
"0.1.4",
sha256="9b2a3436efe3c8060ee4882f3ed37d848ee79a63d6055a71a23fad6409559f40",
deprecated=True,
)
version(
"0.1.3",
sha256="93c3194bb7d518c215e79436bfb43304683832b3cc66bfc838f6195ce4574943",
deprecated=True,
)
depends_on("python@2.7:")
depends_on("llvm@3.5:")
depends_on("cmake@2.8:", type="build")
def cmake_args(self):
args = ["-DLLVM_DEV_DIR=%s" % self.spec["llvm"].prefix]
return args


@@ -10,9 +10,13 @@
class Testdfsio(MavenPackage):
"""A corrected and enhanced version of Apache Hadoop TestDFSIO"""
homepage = "https://github.com/tthx/testdfsio"
url = "https://github.com/tthx/testdfsio/archive/0.0.1.tar.gz"
homepage = "https://github.com/asotirov0/testdfsio"
url = "https://github.com/asotirov0/testdfsio/archive/0.0.1.tar.gz"
version("0.0.1", sha256="fe8cc47260ffb3e3ac90e0796ebfe73eb4dac64964ab77671e5d32435339dd09")
version(
"0.0.1",
sha256="fe8cc47260ffb3e3ac90e0796ebfe73eb4dac64964ab77671e5d32435339dd09",
deprecated=True,
)
depends_on("hadoop@3.2.1:", type="run")


@@ -16,7 +16,13 @@ class Tinker(CMakePackage):
homepage = "https://dasher.wustl.edu/tinker/"
url = "https://dasher.wustl.edu/tinker/downloads/tinker-8.7.1.tar.gz"
version("8.7.1", sha256="0d6eff8bbc9be0b37d62b6fd3da35bb5499958eafe67aa9c014c4648c8b46d0f")
version("8.7.2", sha256="f9e94ae0684d527cd2772a4a7a05c41864ce6246f1194f6c1c402a94598151c2")
version(
"8.7.1",
sha256="0d6eff8bbc9be0b37d62b6fd3da35bb5499958eafe67aa9c014c4648c8b46d0f",
deprecated=True,
)
patch("tinker-8.7.1-cmake.patch")
depends_on("fftw")


@@ -17,6 +17,7 @@ class Vecmem(CMakePackage, CudaPackage):
license("MPL-2.0-no-copyleft-exception")
version("1.8.0", sha256="d04f1bfcd08837f85c794a69da9f248e163985214a302c22381037feb5b3a7a9")
version("1.7.0", sha256="ff4bf8ea86a5edcb4a1e3d8dd0c42c73c60e998c6fb6512a40182c1f4620a73d")
version("1.6.0", sha256="797b016ac0b79bb39abad059ffa9f4817e519218429c9ab4c115f989616bd5d4")
version("1.5.0", sha256="5d7a2d2dd8eb961af12a1ed9e4e427b89881e843064ffa96ad0cf0934ba9b7ae")


@@ -83,7 +83,7 @@ def determine_variants(cls, libs, ver_str):
return variants
def _spec_arch_to_sdk_arch(self):
spec_arch = str(self.spec.architecture.target.microarchitecture.family).lower()
spec_arch = str(self.spec.architecture.target.family).lower()
_64bit = "64" in spec_arch
arm = "arm" in spec_arch
if arm:


@@ -10,7 +10,7 @@ class Yajl(CMakePackage):
"""Yet Another JSON Library (YAJL)"""
homepage = "https://lloyd.github.io/yajl/"
url = "https://github.com/lloyd/yajl/archive/2.1.0.zip"
url = "https://github.com/lloyd/yajl/archive/refs/tags/2.1.0.zip"
git = "https://github.com/lloyd/yajl.git"
license("MIT")