Compare commits

...

45 Commits

Author SHA1 Message Date
Wouter Deconinck
c034e1ba9f meshlab: allow downloads of vendored components 2024-06-28 11:01:09 -05:00
Wouter Deconinck
84b6266190 meshlab: put in builtin repo 2024-04-08 14:41:37 -05:00
Wouter Deconinck
94a42d786f meshlab: disable two failing plugins 2024-04-08 14:29:31 -05:00
Wouter Deconinck
0e71ea51ce meshlab: new package 2024-04-08 14:29:31 -05:00
Luc Berger
b3cef1072d Nalu: updating Trilinos recipe a bit (#43471)
* Nalu: updating Trilinos recipe a bit

Basic changes to build/install Nalu properly using Spack.
Some more changes would be nice, for instance adding an
option to build against Trilinos master or develop. Also adds
a dependency on googletest to avoid the annoying build
failures in the unit tests.

* Nalu: adding release 1.6.0

Nalu v1.6.0 builds cleanly against Trilinos 14.0.0 with the
proposed changes. The only other combo is master / master, but
that one is "floating" as these branches evolve over time. When a
new Nalu release comes out we might want to add another fixed
version to keep this recipe up to date!
2024-04-08 10:39:51 -06:00
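
A minimal sketch of the recipe constraints described above, assuming the usual Spack directives (illustrative only; the exact lines live in the Nalu package.py):

    # hypothetical excerpt from the nalu recipe
    version("1.6.0", tag="v1.6.0")
    depends_on("googletest", type="build")        # avoid unit-test build failures
    depends_on("trilinos@14.0.0", when="@1.6.0")  # fixed combo known to build cleanly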
Wouter Deconinck
e8ae9a403c acts: depends_on py-onnxruntime when +onnx for @23.3: (#43529) 2024-04-08 14:13:17 +02:00
Wouter Deconinck
1a8ef161c8 fastjet: new multi-valued variant plugins (#43523)
* fastjet: new multi-valued variant `plugins`

* rivet: depends_on fastjet plugins=cxx
2024-04-08 14:12:12 +02:00
Harmen Stoppels
d3913938bc py-tatsu: add upperbound on python (#43510) 2024-04-08 11:26:46 +02:00
Harmen Stoppels
4179880fe6 py-pymatgen: add forward compat bound for cython (#43511) 2024-04-08 11:26:09 +02:00
Harmen Stoppels
125dd0368e py-triton: add zlib (#43512) 2024-04-08 11:25:33 +02:00
Harmen Stoppels
fd68f8916c gperftools: add cmake build system (#43506)
The autotools build system does something funky that produces a link line
where gcc's default link dirs are explicitly added and end up before the
-L from Spack's libunwind, so that it ultimately links against the system
libunwind.

The cmake build system does better.
2024-04-08 10:00:05 +02:00
Jonas Eschle
93e6f5fa4e Update jax & jaxlib versions (#42863)
* upgrade to new versions

* style fix

* update jaxlib deps (not cuda and bazel yet)

* update jaxlib cuda versions

* update jaxlib cuda versions

* update jaxlib cuda versions

* chore: style fix

* Update package.py

* Update package.py

* fix:  typo

* docs: add source for cuda version

* py-jaxlib 0.4.14 also doesn't build on ppc64le

* Add 0.4.26

---------

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2024-04-07 12:04:23 +02:00
Robert Cohn
54acda3f11 oneapi licenses (#43451) 2024-04-06 08:04:04 -04:00
kwryankrattiger
663e20fcc4 ParaView: add v5.12.0 (#42943)
* ParaView: Update version 5.12.0

Add 5.12.0 release
Update default to 5.12.0

* Add patch for building ParaView 5.12 with kits

* Drop VTKm from neoverse
2024-04-06 04:12:48 +00:00
eugeneswalker
6428132ebb e4s ci: enable lammps variants from presets/most.cmake (#43522) 2024-04-05 20:56:18 -07:00
eugeneswalker
171958cf09 py-deephyper: add v0.6.0 (#43492)
* py-deephyper: add latest version: v0.6.0

* e4s: add py-deephyper

* v0.6.0: depend on python@3.7:3.11

* add py-packaging constraint so arm64 builds work

* [@spackbot] updating style on behalf of eugeneswalker
2024-04-06 00:28:37 +00:00
Frédéric Simonis
0d0f7ab030 Add release 3.1.0 (#43508) 2024-04-05 16:58:57 -07:00
eugeneswalker
35f8b43a54 e4s ci: add nekbone (#43515)
* e4s ci: add nekbone, nek5000

* remove nek5000
2024-04-05 16:36:13 -07:00
one
6f7eb3750c Add new versions of tinyxml2 (#43467)
* Add new versions of tinyxml2
  Added 7.0.0 to 10.0.0
* Add the variant "shared"
2024-04-05 13:36:45 -07:00
renjithravindrankannath
2121eb31ba Patch to set PARAMETERS_MIN_ALIGNMENT to the native alignment for rocm-opencl (#43444)
* For an avx build, the start address of the values_ buffer in KernelParameters
   is not correct, as it is computed based on 16-byte alignment.
* Style check error fix
2024-04-05 12:02:32 -07:00
kwryankrattiger
c68d739825 CI: Add debug to the log aggregation script (#42562)
* CI: Add debug to the log aggregation script
2024-04-05 14:00:27 -05:00
John W. Parent
c468697b35 Use correct method "append" instead of extend (#43514) 2024-04-05 18:46:47 +00:00
G-Ragghianti
c4094cf051 slate: Adding comgr as dependency (#43448)
* Adding comgr as dependency

* adding more smoke test deps
2024-04-05 11:32:16 -07:00
LinaMuryanto
9ff9ca61e6 py-amq, py-celery, py-kombu: New versions, fix build (#43295)
* updating package.py for py-celery, py-kombu, py-amq
* added more py-kombu package versions
* fix copyrights and style on py-kombu/package.py
* removed extra spaces
* added py-billiard 4.2.0 and added back the license('BSD-3-Clause')
* removed extra spaces in py-celery/package.py
* fixed py-amqp 2.4.0 sha; fixed py-celery's dependency on py-click (version restrictions in the when clause)
* more clean up on specifying version bounds
2024-04-05 11:14:18 -07:00
Massimiliano Culpo
826e0c0405 Improve hit-rate on buildcaches (#43272)
* Relax compiler and target mismatches

The mismatch occurs on an edge. Previously it was assigned
the parent priority; now it is assigned the child priority.

This should make reuse from build caches or the store more likely,
since most mismatches will be counted with "reused" priority.

* Optimize version badness for runtimes at very low priority

We don't want to e.g. switch other attributes because we
cannot reuse an old installed runtime.

* Optimize runtime attributes at very low priority

This is such that the version of the runtime would
not influence whether we should reuse a spec.

Compiler mismatches are considered for runtimes,
to avoid situations where compiling foo%gcc@9
brings in gcc-runtime%gcc@13 if gcc@13 is among
the available compilers.

* Exclude specs without runtimes from reuse

This should ensure that we do not reuse specs that
could be broken, as they expect the compiler to be
installed in a specific place.
2024-04-05 20:10:28 +02:00
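
Condensed from the asp.py hunk further below, the reuse exclusion boils down to a runtime check on non-external specs (copied here for readability):

    def _has_runtime_dependencies(spec: spack.spec.Spec) -> bool:
        if not WITH_RUNTIME:
            return True
        if spec.compiler.name == "gcc" and not spec.dependencies("gcc-runtime"):
            return False
        if spec.compiler.name == "oneapi" and not spec.dependencies("intel-oneapi-runtime"):
            return False
        return True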
Andrey Perestoronin
1b86a842ea add itac and inspector packages (#43507) 2024-04-05 09:30:36 -04:00
Harmen Stoppels
558a28bf52 bazel: conflict with gcc 13 (#43504) 2024-04-05 14:24:06 +02:00
Harmen Stoppels
411576e1fa Do not acquire a write lock on the env post install if no views (#43505) 2024-04-05 12:31:21 +02:00
eugeneswalker
cab4f92960 datatransferkit: needs trilinos@14.2: for @3.1.0: (#43496) 2024-04-05 03:03:15 -06:00
Adam J. Stewart
c6c13f6782 GDAL: add v3.8.5 (#43493) 2024-04-04 21:29:28 -06:00
Daniele Cesarini
cf11fab5ad Added Libfort library (#43490) 2024-04-04 21:14:27 -06:00
Harmen Stoppels
1d8b35c840 installer.py: compute package_id from spec (#43485)
The installer runs `get_dependent_ids`, which follows edges outside the
subdag that's being installed, so it returns a superset of the actual
dependents.

That's generally fine, except that it calls `s.package` on every
dependent, which triggers a package class to be instantiated, which is a
lot of work.

Instead, compute the package id from the spec, since that's all that's
used anyway and does not trigger *lots* of slow and redundant
instantiations of package objects.
2024-04-04 20:39:30 -06:00
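
The shape of the change, condensed from the installer.py hunk below: the id is derived from spec fields only, so callers pass specs instead of package objects:

    def package_id(spec: "spack.spec.Spec") -> str:
        if not spec.concrete:
            raise ValueError("Cannot provide a unique, readable id when the spec is not concretized.")
        return f"{spec.name}-{spec.version}-{spec.dag_hash()}"

    # e.g. get_dependent_ids no longer touches d.package:
    # return [package_id(d) for d in spec.dependents()]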
Weiqun Zhang
5dc46a976d amrex: add v24.04 (#43443) 2024-04-04 19:00:44 -07:00
Alex Richert
05f5596cdd Update grib-util recipe (#43484) 2024-04-04 15:07:05 -07:00
psakievich
6942c7f35b Update exawind packages (#40793) 2024-04-04 12:54:20 -06:00
Alex Richert
18f0ac0f94 Add g2@3.4.9 (#43481) 2024-04-04 11:50:08 -07:00
Vicente Bolea
d9196ee3f8 adios2: bump version 2.10.0 (#43479) 2024-04-04 13:46:40 -05:00
John W. Parent
ef0bb6fe6b Msvc: Determine OneAPI_ROOT from fc compiler path (#43131)
If ONEAPI_ROOT is not set as an environment variable, the current approach raises an error.
Instead, we can compute OneAPI_ROOT from the compiler paths, as we already do with vcvarsall.
2024-04-04 11:14:44 -07:00
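
Condensed from the msvc.py hunk further below: the root is inferred from the compiler's own location rather than read from the environment (path layout as in the diff):

    # previously: oneapi_root = os.getenv("ONEAPI_ROOT")  -> fails if unset
    oneapi_root = os.path.join(self.cc, "../../..")
    oneapi_root_setvars = os.path.join(oneapi_root, "setvars.bat")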
Alex Richert
3fed320013 Add MPI and arch bugfixes to SCOTCH (#39264)
* Add MPI and arch bugfixes to SCOTCH

* Update scotch/package.py
2024-04-04 12:48:39 -05:00
Chris Marsh
1aa77e695d Trilinos: add threadsafe variant (#43480)
* Fixes #43454 by exposing a threadsafe variant

* [@spackbot] updating style on behalf of Chrismarsh

* fix style

---------

Co-authored-by: Chrismarsh <Chrismarsh@users.noreply.github.com>
2024-04-04 10:00:53 -07:00
Alex Richert
3a0efeecf1 add g2c@1.9.0 (#43482) 2024-04-04 09:56:19 -07:00
Alex Richert
5ffb5657c9 update g2tmpl recipe (#43483) 2024-04-04 09:56:09 -07:00
Alex Richert
2b3e7fd10a Add shared variant for fftw to allow static-only builds (#37897)
Co-authored-by: alexrichert <alexrichert@gmail.com>
2024-04-04 03:47:46 -06:00
James Smillie
cb315e18f0 py-pip package: enable install on Windows (#43203)
The installation mechanism used on Linux to install py-pip (using pip
from the downloaded wheel to install that same wheel) does not work on Windows.

This updates the installation of py-pip on Windows to download a zipapp
of a specific pip version and use it to install the requested pip wheel.
2024-04-03 23:03:56 -06:00
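
A hedged sketch of the mechanism described above (helper name and arguments are illustrative, not the actual recipe code): pip's zipapp can install a wheel without a pre-existing pip on the system.

    import subprocess, sys

    def install_wheel_with_zipapp(pip_zipapp, wheel, prefix):
        # run pip from the downloaded zipapp to install the requested pip wheel
        subprocess.check_call([sys.executable, pip_zipapp, "install", wheel, "--prefix", prefix])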
Jonas Eschle
10c637aca0 py-zfit: add v0.18.* (#43200)
Co-authored-by: Valentin Volkl <valentin.volkl@cern.ch>
2024-04-04 02:55:00 +02:00
73 changed files with 983 additions and 206 deletions

View File

@@ -14,7 +14,7 @@
from llnl.util.link_tree import LinkTree
from spack.build_environment import dso_suffix
from spack.directives import conflicts, variant
from spack.directives import conflicts, license, variant
from spack.package_base import InstallError
from spack.util.environment import EnvironmentModifications
from spack.util.executable import Executable
@@ -26,6 +26,7 @@ class IntelOneApiPackage(Package):
"""Base class for Intel oneAPI packages."""
homepage = "https://software.intel.com/oneapi"
license("https://intel.ly/393CijO")
# oneAPI license does not allow mirroring outside of the
# organization (e.g. University/Company).

View File

@@ -1947,9 +1947,9 @@ def reproduce_ci_job(url, work_dir, autostart, gpg_url, runtime):
entrypoint_script.append(["echo", f"Re-run install script using:\n\t{install_mechanism}"])
# Allow interactive
if IS_WINDOWS:
entrypoint_script.extend(["&", "($args -Join ' ')", "-NoExit"])
entrypoint_script.append(["&", "($args -Join ' ')", "-NoExit"])
else:
entrypoint_script.extend(["exec", "$@"])
entrypoint_script.append(["exec", "$@"])
process_command(
"entrypoint", entrypoint_script, work_dir, run=False, exit_on_failure=False

View File

@@ -420,10 +420,9 @@ def install_with_active_env(env: ev.Environment, args, install_kwargs, reporter_
with reporter_factory(specs_to_install):
env.install_specs(specs_to_install, **install_kwargs)
finally:
# TODO: this is doing way too much to trigger
# views and modules to be generated.
with env.write_transaction():
env.write(regenerate=True)
if env.views:
with env.write_transaction():
env.write(regenerate=True)
def concrete_specs_from_cli(args, install_kwargs):

View File

@@ -199,7 +199,7 @@ def __init__(self, *args, **kwargs):
# for a fortran compiler
if paths[2]:
# If this found, it sets all the vars
oneapi_root = os.getenv("ONEAPI_ROOT")
oneapi_root = os.path.join(self.cc, "../../..")
oneapi_root_setvars = os.path.join(oneapi_root, "setvars.bat")
oneapi_version_setvars = os.path.join(
oneapi_root, "compiler", str(self.ifx_version), "env", "vars.bat"

View File

@@ -119,7 +119,7 @@ def __init__(self, pkg_count: int):
self.pkg_ids: Set[str] = set()
def next_pkg(self, pkg: "spack.package_base.PackageBase"):
pkg_id = package_id(pkg)
pkg_id = package_id(pkg.spec)
if pkg_id not in self.pkg_ids:
self.pkg_num += 1
@@ -221,12 +221,12 @@ def _handle_external_and_upstream(pkg: "spack.package_base.PackageBase", explici
# consists in module file generation and registration in the DB.
if pkg.spec.external:
_process_external_package(pkg, explicit)
_print_installed_pkg(f"{pkg.prefix} (external {package_id(pkg)})")
_print_installed_pkg(f"{pkg.prefix} (external {package_id(pkg.spec)})")
return True
if pkg.spec.installed_upstream:
tty.verbose(
f"{package_id(pkg)} is installed in an upstream Spack instance at "
f"{package_id(pkg.spec)} is installed in an upstream Spack instance at "
f"{pkg.spec.prefix}"
)
_print_installed_pkg(pkg.prefix)
@@ -403,7 +403,7 @@ def _install_from_cache(
return False
t.stop()
pkg_id = package_id(pkg)
pkg_id = package_id(pkg.spec)
tty.debug(f"Successfully extracted {pkg_id} from binary cache")
_write_timer_json(pkg, t, True)
@@ -484,7 +484,7 @@ def _process_binary_cache_tarball(
if download_result is None:
return False
tty.msg(f"Extracting {package_id(pkg)} from binary cache")
tty.msg(f"Extracting {package_id(pkg.spec)} from binary cache")
with timer.measure("install"), spack.util.path.filter_padding():
binary_distribution.extract_tarball(pkg.spec, download_result, force=False, timer=timer)
@@ -513,7 +513,7 @@ def _try_install_from_binary_cache(
if not spack.mirror.MirrorCollection(binary=True):
return False
tty.debug(f"Searching for binary cache of {package_id(pkg)}")
tty.debug(f"Searching for binary cache of {package_id(pkg.spec)}")
with timer.measure("search"):
matches = binary_distribution.get_mirrors_for_spec(pkg.spec, index_only=True)
@@ -610,7 +610,7 @@ def get_dependent_ids(spec: "spack.spec.Spec") -> List[str]:
Returns: list of package ids
"""
return [package_id(d.package) for d in spec.dependents()]
return [package_id(d) for d in spec.dependents()]
def install_msg(name: str, pid: int, install_status: InstallStatus) -> str:
@@ -720,7 +720,7 @@ def log(pkg: "spack.package_base.PackageBase") -> None:
dump_packages(pkg.spec, packages_dir)
def package_id(pkg: "spack.package_base.PackageBase") -> str:
def package_id(spec: "spack.spec.Spec") -> str:
"""A "unique" package identifier for installation purposes
The identifier is used to track build tasks, locks, install, and
@@ -732,10 +732,10 @@ def package_id(pkg: "spack.package_base.PackageBase") -> str:
Args:
pkg: the package from which the identifier is derived
"""
if not pkg.spec.concrete:
if not spec.concrete:
raise ValueError("Cannot provide a unique, readable id when the spec is not concretized.")
return f"{pkg.name}-{pkg.version}-{pkg.spec.dag_hash()}"
return f"{spec.name}-{spec.version}-{spec.dag_hash()}"
class BuildRequest:
@@ -765,7 +765,7 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
self.pkg.last_phase = install_args.pop("stop_at", None) # type: ignore[attr-defined]
# Cache the package id for convenience
self.pkg_id = package_id(pkg)
self.pkg_id = package_id(pkg.spec)
# Save off the original install arguments plus standard defaults
# since they apply to the requested package *and* dependencies.
@@ -780,9 +780,9 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
# are not able to return full dependents for all packages across
# environment specs.
self.dependencies = set(
package_id(d.package)
package_id(d)
for d in self.pkg.spec.dependencies(deptype=self.get_depflags(self.pkg))
if package_id(d.package) != self.pkg_id
if package_id(d) != self.pkg_id
)
def __repr__(self) -> str:
@@ -832,7 +832,7 @@ def get_depflags(self, pkg: "spack.package_base.PackageBase") -> int:
depflag = dt.LINK | dt.RUN
include_build_deps = self.install_args.get("include_build_deps")
if self.pkg_id == package_id(pkg):
if self.pkg_id == package_id(pkg.spec):
cache_only = self.install_args.get("package_cache_only")
else:
cache_only = self.install_args.get("dependencies_cache_only")
@@ -927,7 +927,7 @@ def __init__(
raise ValueError(f"{self.pkg.name} must have a concrete spec")
# The "unique" identifier for the task's package
self.pkg_id = package_id(self.pkg)
self.pkg_id = package_id(self.pkg.spec)
# The explicit build request associated with the package
if not isinstance(request, BuildRequest):
@@ -965,9 +965,9 @@ def __init__(
# if use traverse for transitive dependencies, then must remove
# transitive dependents on failure.
self.dependencies = set(
package_id(d.package)
package_id(d)
for d in self.pkg.spec.dependencies(deptype=self.request.get_depflags(self.pkg))
if package_id(d.package) != self.pkg_id
if package_id(d) != self.pkg_id
)
# Handle bootstrapped compiler
@@ -983,7 +983,7 @@ def __init__(
dep.constrain(f"os={str(arch_spec.os)}")
dep.constrain(f"target={arch_spec.target.microarchitecture.family.name}:")
dep.concretize()
dep_id = package_id(dep.package)
dep_id = package_id(dep)
self.dependencies.add(dep_id)
# List of uninstalled dependencies, which is used to establish
@@ -1194,7 +1194,7 @@ def _add_bootstrap_compilers(
"""
packages = _packages_needed_to_bootstrap_compiler(compiler, architecture, pkgs)
for comp_pkg, is_compiler in packages:
pkgid = package_id(comp_pkg)
pkgid = package_id(comp_pkg.spec)
if pkgid not in self.build_tasks:
self._add_init_task(comp_pkg, request, is_compiler, all_deps)
elif is_compiler:
@@ -1241,7 +1241,7 @@ def _add_init_task(
"""
task = BuildTask(pkg, request, is_compiler, 0, 0, STATUS_ADDED, self.installed)
for dep_id in task.dependencies:
all_deps[dep_id].add(package_id(pkg))
all_deps[dep_id].add(package_id(pkg.spec))
self._push_task(task)
@@ -1276,7 +1276,7 @@ def _check_deps_status(self, request: BuildRequest) -> None:
err = "Cannot proceed with {0}: {1}"
for dep in request.traverse_dependencies():
dep_pkg = dep.package
dep_id = package_id(dep_pkg)
dep_id = package_id(dep)
# Check for failure since a prefix lock is not required
if spack.store.STORE.failure_tracker.has_failed(dep):
@@ -1409,7 +1409,7 @@ def _cleanup_task(self, pkg: "spack.package_base.PackageBase") -> None:
Args:
pkg: the package being installed
"""
self._remove_task(package_id(pkg))
self._remove_task(package_id(pkg.spec))
# Ensure we have a read lock to prevent others from uninstalling the
# spec during our installation.
@@ -1423,7 +1423,7 @@ def _ensure_install_ready(self, pkg: "spack.package_base.PackageBase") -> None:
Args:
pkg: the package being locally installed
"""
pkg_id = package_id(pkg)
pkg_id = package_id(pkg.spec)
pre = f"{pkg_id} cannot be installed locally:"
# External packages cannot be installed locally.
@@ -1465,7 +1465,7 @@ def _ensure_locked(
"write",
], f'"{lock_type}" is not a supported package management lock type'
pkg_id = package_id(pkg)
pkg_id = package_id(pkg.spec)
ltype, lock = self.locks.get(pkg_id, (lock_type, None))
if lock and ltype == lock_type:
return ltype, lock
@@ -1601,7 +1601,7 @@ def _add_tasks(self, request: BuildRequest, all_deps):
for dep in request.traverse_dependencies():
dep_pkg = dep.package
dep_id = package_id(dep_pkg)
dep_id = package_id(dep)
if dep_id not in self.build_tasks:
self._add_init_task(dep_pkg, request, False, all_deps)
@@ -1913,7 +1913,7 @@ def _flag_installed(
dependent_ids: set of the package's dependent ids, or None if the dependent ids are
limited to those maintained in the package (dependency DAG)
"""
pkg_id = package_id(pkg)
pkg_id = package_id(pkg.spec)
if pkg_id in self.installed:
# Already determined the package has been installed
@@ -2309,7 +2309,7 @@ def __init__(self, pkg: "spack.package_base.PackageBase", install_args: dict):
# info/debug information
self.pre = _log_prefix(pkg.name)
self.pkg_id = package_id(pkg)
self.pkg_id = package_id(pkg.spec)
def run(self) -> bool:
"""Main entry point from ``build_process`` to kick off install in child."""

View File

@@ -3371,7 +3371,7 @@ def _is_reusable(spec: spack.spec.Spec, packages, local: bool) -> bool:
return False
if not spec.external:
return True
return _has_runtime_dependencies(spec)
# Cray external manifest externals are always reusable
if local:
@@ -3396,6 +3396,19 @@ def _is_reusable(spec: spack.spec.Spec, packages, local: bool) -> bool:
return False
def _has_runtime_dependencies(spec: spack.spec.Spec) -> bool:
if not WITH_RUNTIME:
return True
if spec.compiler.name == "gcc" and not spec.dependencies("gcc-runtime"):
return False
if spec.compiler.name == "oneapi" and not spec.dependencies("intel-oneapi-runtime"):
return False
return True
class Solver:
"""This is the main external interface class for solving.

View File

@@ -80,6 +80,7 @@ unification_set(SetID, VirtualNode)
#defined multiple_unification_sets/1.
#defined runtime/1.
%----
% Rules to break symmetry and speed-up searches
@@ -1494,18 +1495,20 @@ opt_criterion(40, "compiler mismatches that are not from CLI").
#minimize{ 0@240: #true }.
#minimize{ 0@40: #true }.
#minimize{
1@40+Priority,PackageNode,DependencyNode
: compiler_mismatch(PackageNode, DependencyNode),
build_priority(PackageNode, Priority)
1@40+Priority,PackageNode,node(ID, Dependency)
: compiler_mismatch(PackageNode, node(ID, Dependency)),
build_priority(node(ID, Dependency), Priority),
not runtime(Dependency)
}.
opt_criterion(39, "compiler mismatches that are not from CLI").
#minimize{ 0@239: #true }.
#minimize{ 0@39: #true }.
#minimize{
1@39+Priority,PackageNode,DependencyNode
: compiler_mismatch_required(PackageNode, DependencyNode),
build_priority(PackageNode, Priority)
1@39+Priority,PackageNode,node(ID, Dependency)
: compiler_mismatch_required(PackageNode, node(ID, Dependency)),
build_priority(node(ID, Dependency), Priority),
not runtime(Dependency)
}.
opt_criterion(30, "non-preferred OS's").
@@ -1522,9 +1525,10 @@ opt_criterion(25, "version badness").
#minimize{ 0@225: #true }.
#minimize{ 0@25: #true }.
#minimize{
Weight@25+Priority,PackageNode
: version_weight(PackageNode, Weight),
build_priority(PackageNode, Priority)
Weight@25+Priority,node(X, Package)
: version_weight(node(X, Package), Weight),
build_priority(node(X, Package), Priority),
not runtime(Package)
}.
% Try to use all the default values of variants
@@ -1543,9 +1547,10 @@ opt_criterion(15, "non-preferred compilers").
#minimize{ 0@215: #true }.
#minimize{ 0@15: #true }.
#minimize{
Weight@15+Priority,PackageNode
: node_compiler_weight(PackageNode, Weight),
build_priority(PackageNode, Priority)
Weight@15+Priority,node(X, Package)
: node_compiler_weight(node(X, Package), Weight),
build_priority(node(X, Package), Priority),
not runtime(Package)
}.
% Minimize the number of mismatches for targets in the DAG, try
@@ -1554,18 +1559,55 @@ opt_criterion(10, "target mismatches").
#minimize{ 0@210: #true }.
#minimize{ 0@10: #true }.
#minimize{
1@10+Priority,PackageNode,Dependency
: node_target_mismatch(PackageNode, Dependency),
build_priority(PackageNode, Priority)
1@10+Priority,PackageNode,node(ID, Dependency)
: node_target_mismatch(PackageNode, node(ID, Dependency)),
build_priority(node(ID, Dependency), Priority),
not runtime(Dependency)
}.
opt_criterion(5, "non-preferred targets").
#minimize{ 0@205: #true }.
#minimize{ 0@5: #true }.
#minimize{
Weight@5+Priority,PackageNode
: node_target_weight(PackageNode, Weight),
build_priority(PackageNode, Priority)
Weight@5+Priority,node(X, Package)
: node_target_weight(node(X, Package), Weight),
build_priority(node(X, Package), Priority),
not runtime(Package)
}.
% Minimize the number of compiler mismatches for runtimes
opt_criterion(4, "compiler mismatches (runtimes)").
#minimize{ 0@204: #true }.
#minimize{ 0@4: #true }.
#minimize{
1@4,PackageNode,node(ID, Dependency)
: compiler_mismatch(PackageNode, node(ID, Dependency)), runtime(Dependency)
}.
#minimize{
1@4,PackageNode,node(ID, Dependency)
: compiler_mismatch_required(PackageNode, node(ID, Dependency)), runtime(Dependency)
}.
% Choose more recent versions for runtimes
opt_criterion(3, "version badness (runtimes)").
#minimize{ 0@203: #true }.
#minimize{ 0@3: #true }.
#minimize{
Weight@3,node(X, Package)
: version_weight(node(X, Package), Weight),
runtime(Package)
}.
% Choose best target for runtimes
opt_criterion(2, "non-preferred targets (runtimes)").
#minimize{ 0@202: #true }.
#minimize{ 0@2: #true }.
#minimize{
Weight@2,node(X, Package)
: node_target_weight(node(X, Package), Weight),
runtime(Package)
}.
% Choose more recent versions for nodes

View File

@@ -142,7 +142,7 @@ def optimization_flags(self, compiler):
# custom spec.
compiler_version = compiler.version
version_number, suffix = archspec.cpu.version_components(compiler.version)
if not version_number or suffix not in ("", "apple"):
if not version_number or suffix:
# Try to deduce the underlying version of the compiler, regardless
# of its name in compilers.yaml. Depending on where this function
# is called we might get either a CompilerSpec or a fully fledged

View File

@@ -112,10 +112,10 @@ def test_compiler_find_no_apple_gcc(no_compilers_yaml, working_env, mock_executa
@pytest.mark.regression("37996")
def test_compiler_remove(mutable_config, mock_packages):
"""Tests that we can remove a compiler from configuration."""
assert spack.spec.CompilerSpec("gcc@=4.8.0") in spack.compilers.all_compiler_specs()
args = spack.util.pattern.Bunch(all=True, compiler_spec="gcc@4.8.0", add_paths=[], scope=None)
assert spack.spec.CompilerSpec("gcc@=9.4.0") in spack.compilers.all_compiler_specs()
args = spack.util.pattern.Bunch(all=True, compiler_spec="gcc@9.4.0", add_paths=[], scope=None)
spack.cmd.compiler.compiler_remove(args)
assert spack.spec.CompilerSpec("gcc@=4.8.0") not in spack.compilers.all_compiler_specs()
assert spack.spec.CompilerSpec("gcc@=9.4.0") not in spack.compilers.all_compiler_specs()
@pytest.mark.regression("37996")
@@ -124,10 +124,10 @@ def test_removing_compilers_from_multiple_scopes(mutable_config, mock_packages):
site_config = spack.config.get("compilers", scope="site")
spack.config.set("compilers", site_config, scope="user")
assert spack.spec.CompilerSpec("gcc@=4.8.0") in spack.compilers.all_compiler_specs()
args = spack.util.pattern.Bunch(all=True, compiler_spec="gcc@4.8.0", add_paths=[], scope=None)
assert spack.spec.CompilerSpec("gcc@=9.4.0") in spack.compilers.all_compiler_specs()
args = spack.util.pattern.Bunch(all=True, compiler_spec="gcc@9.4.0", add_paths=[], scope=None)
spack.cmd.compiler.compiler_remove(args)
assert spack.spec.CompilerSpec("gcc@=4.8.0") not in spack.compilers.all_compiler_specs()
assert spack.spec.CompilerSpec("gcc@=9.4.0") not in spack.compilers.all_compiler_specs()
@pytest.mark.not_on_windows("Cannot execute bash script on Windows")

View File

@@ -31,7 +31,7 @@ def test_spec():
@pytest.mark.only_clingo("Known failure of the original concretizer")
def test_spec_concretizer_args(mutable_config, mutable_database):
def test_spec_concretizer_args(mutable_config, mutable_database, do_not_check_runtimes_on_reuse):
"""End-to-end test of CLI concretizer prefs.
It's here to make sure that everything works from CLI

View File

@@ -254,7 +254,7 @@ def gcc11_with_flags(compiler_factory):
# This must use the mutable_config fixture because the test
# adjusting_default_target_based_on_compiler uses the current_host fixture,
# which changes the config.
@pytest.mark.usefixtures("mutable_config", "mock_packages")
@pytest.mark.usefixtures("mutable_config", "mock_packages", "do_not_check_runtimes_on_reuse")
class TestConcretize:
def test_concretize(self, spec):
check_concretize(spec)
@@ -614,7 +614,7 @@ def test_my_dep_depends_on_provider_of_my_virtual_dep(self):
spec.normalize()
spec.concretize()
@pytest.mark.parametrize("compiler_str", ["clang", "gcc", "gcc@10.2.1", "clang@:12.0.0"])
@pytest.mark.parametrize("compiler_str", ["clang", "gcc", "gcc@10.2.1", "clang@:15.0.0"])
def test_compiler_inheritance(self, compiler_str):
spec_str = "mpileaks %{0}".format(compiler_str)
spec = Spec(spec_str).concretized()
@@ -877,7 +877,7 @@ def test_concretize_anonymous_dep(self, spec_str):
# Unconstrained versions select default compiler (gcc@4.5.0)
("bowtie@1.4.0", "%gcc@10.2.1"),
# Version with conflicts and no valid gcc select another compiler
("bowtie@1.3.0", "%clang@12.0.0"),
("bowtie@1.3.0", "%clang@15.0.0"),
# If a higher gcc is available still prefer that
("bowtie@1.2.2 os=redhat6", "%gcc@11.1.0"),
],
@@ -1439,7 +1439,7 @@ def test_os_selection_when_multiple_choices_are_possible(
@pytest.mark.regression("22718")
@pytest.mark.parametrize(
"spec_str,expected_compiler",
[("mpileaks", "%gcc@10.2.1"), ("mpileaks ^mpich%clang@12.0.0", "%clang@12.0.0")],
[("mpileaks", "%gcc@10.2.1"), ("mpileaks ^mpich%clang@15.0.0", "%clang@15.0.0")],
)
def test_compiler_is_unique(self, spec_str, expected_compiler):
s = Spec(spec_str).concretized()
@@ -1727,7 +1727,7 @@ def test_reuse_with_unknown_package_dont_raise(self, tmpdir, temporary_store, mo
[
(["libelf", "libelf@0.8.10"], 1),
(["libdwarf%gcc", "libelf%clang"], 2),
(["libdwarf%gcc", "libdwarf%clang"], 4),
(["libdwarf%gcc", "libdwarf%clang"], 3),
(["libdwarf^libelf@0.8.12", "libdwarf^libelf@0.8.13"], 4),
(["hdf5", "zmpi"], 3),
(["hdf5", "mpich"], 2),

View File

@@ -7,6 +7,8 @@
import pytest
import archspec.cpu
import spack.paths
import spack.repo
import spack.solver.asp
@@ -24,9 +26,7 @@ def _concretize_with_reuse(*, root_str, reused_str):
reused_spec = spack.spec.Spec(reused_str).concretized()
setup = spack.solver.asp.SpackSolverSetup(tests=False)
driver = spack.solver.asp.PyclingoDriver()
result, _, _ = driver.solve(
setup, [spack.spec.Spec(f"{root_str} ^{reused_str}")], reuse=[reused_spec]
)
result, _, _ = driver.solve(setup, [spack.spec.Spec(f"{root_str}")], reuse=[reused_spec])
root = result.specs[0]
return root, reused_spec
@@ -47,7 +47,7 @@ def enable_runtimes():
def test_correct_gcc_runtime_is_injected_as_dependency(runtime_repo):
s = spack.spec.Spec("a%gcc@10.2.1 ^b%gcc@4.8.0").concretized()
s = spack.spec.Spec("a%gcc@10.2.1 ^b%gcc@9.4.0").concretized()
a, b = s["a"], s["b"]
# Both a and b should depend on the same gcc-runtime directly
@@ -78,9 +78,28 @@ def test_external_nodes_do_not_have_runtimes(runtime_repo, mutable_config, tmp_p
"root_str,reused_str,expected,nruntime",
[
# The reused runtime is older than we need, thus we'll add a more recent one for a
("a%gcc@10.2.1", "b%gcc@4.8.0", {"a": "gcc-runtime@10.2.1", "b": "gcc-runtime@4.8.0"}, 2),
("a%gcc@10.2.1", "b%gcc@9.4.0", {"a": "gcc-runtime@10.2.1", "b": "gcc-runtime@9.4.0"}, 2),
# The root is compiled with an older compiler, thus we'll reuse the runtime from b
("a%gcc@4.8.0", "b%gcc@10.2.1", {"a": "gcc-runtime@10.2.1", "b": "gcc-runtime@10.2.1"}, 1),
("a%gcc@9.4.0", "b%gcc@10.2.1", {"a": "gcc-runtime@10.2.1", "b": "gcc-runtime@10.2.1"}, 1),
# Same as before, but tests that we can reuse from a more generic target
pytest.param(
"a%gcc@9.4.0",
"b%gcc@10.2.1 target=x86_64",
{"a": "gcc-runtime@10.2.1 target=x86_64", "b": "gcc-runtime@10.2.1 target=x86_64"},
1,
marks=pytest.mark.skipif(
str(archspec.cpu.host().family) != "x86_64", reason="test data is x86_64 specific"
),
),
pytest.param(
"a%gcc@10.2.1",
"b%gcc@9.4.0 target=x86_64",
{"a": "gcc-runtime@10.2.1 target=x86_64", "b": "gcc-runtime@9.4.0 target=x86_64"},
2,
marks=pytest.mark.skipif(
str(archspec.cpu.host().family) != "x86_64", reason="test data is x86_64 specific"
),
),
],
)
def test_reusing_specs_with_gcc_runtime(root_str, reused_str, expected, nruntime, runtime_repo):
@@ -104,8 +123,8 @@ def test_reusing_specs_with_gcc_runtime(root_str, reused_str, expected, nruntime
[
# Ensure that, whether we have multiple runtimes in the DAG or not,
# we always link only the latest version
("a%gcc@10.2.1", "b%gcc@4.8.0", ["gcc-runtime@10.2.1"], ["gcc-runtime@4.8.0"]),
("a%gcc@4.8.0", "b%gcc@10.2.1", ["gcc-runtime@10.2.1"], ["gcc-runtime@4.8.0"]),
("a%gcc@10.2.1", "b%gcc@9.4.0", ["gcc-runtime@10.2.1"], ["gcc-runtime@9.4.0"]),
("a%gcc@9.4.0", "b%gcc@10.2.1", ["gcc-runtime@10.2.1"], ["gcc-runtime@9.4.0"]),
],
)
def test_views_can_handle_duplicate_runtime_nodes(

View File

@@ -105,7 +105,7 @@ def test_preferred_variants_from_wildcard(self):
@pytest.mark.parametrize(
"compiler_str,spec_str",
[("gcc@=4.8.0", "mpileaks"), ("clang@=12.0.0", "mpileaks"), ("gcc@=4.8.0", "openmpi")],
[("gcc@=9.4.0", "mpileaks"), ("clang@=15.0.0", "mpileaks"), ("gcc@=9.4.0", "openmpi")],
)
def test_preferred_compilers(self, compiler_str, spec_str):
"""Test preferred compilers are applied correctly"""

View File

@@ -2013,3 +2013,12 @@ def _factory(*, spec, operating_system):
def host_architecture_str():
"""Returns the broad architecture family (x86_64, aarch64, etc.)"""
return str(archspec.cpu.host().family)
def _true(x):
return True
@pytest.fixture()
def do_not_check_runtimes_on_reuse(monkeypatch):
monkeypatch.setattr(spack.solver.asp, "_has_runtime_dependencies", _true)

View File

@@ -1,6 +1,6 @@
compilers:
- compiler:
spec: gcc@=4.8.0
spec: gcc@=9.4.0
operating_system: {linux_os.name}{linux_os.version}
paths:
cc: /path/to/gcc
@@ -10,7 +10,7 @@ compilers:
modules: []
target: {target}
- compiler:
spec: gcc@=4.8.0
spec: gcc@=9.4.0
operating_system: redhat6
paths:
cc: /path/to/gcc
@@ -20,7 +20,7 @@ compilers:
modules: []
target: {target}
- compiler:
spec: clang@=12.0.0
spec: clang@=15.0.0
operating_system: {linux_os.name}{linux_os.version}
paths:
cc: /path/to/clang

View File

@@ -16,7 +16,7 @@ packages:
externalvirtual:
buildable: False
externals:
- spec: externalvirtual@2.0%clang@12.0.0
- spec: externalvirtual@2.0%clang@15.0.0
prefix: /path/to/external_virtual_clang
- spec: externalvirtual@1.0%gcc@10.2.1
prefix: /path/to/external_virtual_gcc

View File

@@ -4,7 +4,7 @@ lmod:
hash_length: 0
core_compilers:
- 'clang@12.0.0'
- 'clang@15.0.0'
core_specs:
- 'mpich@3.0.1'

View File

@@ -2,4 +2,4 @@ enable:
- lmod
lmod:
core_compilers:
- 'clang@12.0.0'
- 'clang@15.0.0'

View File

@@ -2,4 +2,4 @@ enable:
- lmod
lmod:
core_compilers:
- 'clang@=12.0.0'
- 'clang@=15.0.0'

View File

@@ -136,7 +136,7 @@ def test_get_dependent_ids(install_mockery, mock_packages):
spec.concretize()
assert spec.concrete
pkg_id = inst.package_id(spec.package)
pkg_id = inst.package_id(spec)
# Grab the sole dependency of 'a', which is 'b'
dep = spec.dependencies()[0]
@@ -385,7 +385,7 @@ def test_ensure_locked_have(install_mockery, tmpdir, capsys):
const_arg = installer_args(["trivial-install-test-package"], {})
installer = create_installer(const_arg)
spec = installer.build_requests[0].pkg.spec
pkg_id = inst.package_id(spec.package)
pkg_id = inst.package_id(spec)
with tmpdir.as_cwd():
# Test "downgrade" of a read lock (to a read lock)
@@ -456,17 +456,15 @@ def _pl(db, spec, timeout):
def test_package_id_err(install_mockery):
s = spack.spec.Spec("trivial-install-test-package")
pkg_cls = spack.repo.PATH.get_pkg_class(s.name)
with pytest.raises(ValueError, match="spec is not concretized"):
inst.package_id(pkg_cls(s))
inst.package_id(s)
def test_package_id_ok(install_mockery):
spec = spack.spec.Spec("trivial-install-test-package")
spec.concretize()
assert spec.concrete
pkg = spec.package
assert pkg.name in inst.package_id(pkg)
assert spec.name in inst.package_id(spec)
def test_fake_install(install_mockery):
@@ -726,7 +724,7 @@ def test_check_deps_status_external(install_mockery, monkeypatch):
installer._check_deps_status(request)
for dep in request.spec.traverse(root=False):
assert inst.package_id(dep.package) in installer.installed
assert inst.package_id(dep) in installer.installed
def test_check_deps_status_upstream(install_mockery, monkeypatch):
@@ -739,7 +737,7 @@ def test_check_deps_status_upstream(install_mockery, monkeypatch):
installer._check_deps_status(request)
for dep in request.spec.traverse(root=False):
assert inst.package_id(dep.package) in installer.installed
assert inst.package_id(dep) in installer.installed
def test_add_bootstrap_compilers(install_mockery, monkeypatch):
@@ -1111,7 +1109,7 @@ def test_install_fail_fast_on_detect(install_mockery, monkeypatch, capsys):
const_arg = installer_args(["b"], {"fail_fast": False})
const_arg.extend(installer_args(["c"], {"fail_fast": True}))
installer = create_installer(const_arg)
pkg_ids = [inst.package_id(spec.package) for spec, _ in const_arg]
pkg_ids = [inst.package_id(spec) for spec, _ in const_arg]
# Make sure all packages are identified as failed
#
@@ -1186,7 +1184,7 @@ def test_install_lock_installed_requeue(install_mockery, monkeypatch, capfd):
const_arg = installer_args(["b"], {})
b, _ = const_arg[0]
installer = create_installer(const_arg)
b_pkg_id = inst.package_id(b.package)
b_pkg_id = inst.package_id(b)
def _prep(installer, task):
installer.installed.add(b_pkg_id)
@@ -1196,7 +1194,7 @@ def _prep(installer, task):
monkeypatch.setattr(inst.PackageInstaller, "_ensure_locked", _not_locked)
def _requeued(installer, task, install_status):
tty.msg("requeued {0}".format(inst.package_id(task.pkg)))
tty.msg("requeued {0}".format(inst.package_id(task.pkg.spec)))
# Flag the package as installed
monkeypatch.setattr(inst.PackageInstaller, "_prepare_for_install", _prep)
@@ -1262,7 +1260,7 @@ def test_install_skip_patch(install_mockery, mock_fetch):
installer.install()
spec, install_args = const_arg[0]
assert inst.package_id(spec.package) in installer.installed
assert inst.package_id(spec) in installer.installed
def test_install_implicit(install_mockery, mock_fetch):

View File

@@ -29,7 +29,7 @@
]
@pytest.fixture(params=["clang@=12.0.0", "gcc@=10.2.1"])
@pytest.fixture(params=["clang@=15.0.0", "gcc@=10.2.1"])
def compiler(request):
return request.param
@@ -59,7 +59,7 @@ def test_layout_for_specs_compiled_with_core_compilers(
we can use both ``compiler@version`` and ``compiler@=version`` to specify a core compiler.
"""
module_configuration(modules_config)
module, spec = factory("libelf%clang@12.0.0")
module, spec = factory("libelf%clang@15.0.0")
assert "Core" in module.layout.available_path_parts
def test_file_layout(self, compiler, provider, factory, module_configuration):
@@ -78,7 +78,7 @@ def test_file_layout(self, compiler, provider, factory, module_configuration):
# is transformed to r"Core" if the compiler is listed among core
# compilers
# Check that specs listed as core_specs are transformed to "Core"
if compiler == "clang@=12.0.0" or spec_string == "mpich@3.0.1":
if compiler == "clang@=15.0.0" or spec_string == "mpich@3.0.1":
assert "Core" in layout.available_path_parts
else:
assert compiler.replace("@=", "/") in layout.available_path_parts

View File

@@ -27,12 +27,12 @@ ci:
- - spack config blame mirrors
- spack --color=always --backtrace ci rebuild --tests > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
after_script:
- - ./bin/spack python ${CI_PROJECT_DIR}/share/spack/gitlab/cloud_pipelines/scripts/common/aggregate_package_logs.spack.py
--prefix /home/software/spack:${CI_PROJECT_DIR}/opt/spack
--log install_times.json
${SPACK_ARTIFACTS_ROOT}/user_data/install_times.json
- - cat /proc/loadavg || true
- cat /proc/meminfo | grep 'MemTotal\|MemFree' || true
- - time python ${CI_PROJECT_DIR}/share/spack/gitlab/cloud_pipelines/scripts/common/aggregate_package_logs.spack.py
--prefix /home/software/spack:${CI_PROJECT_DIR}/opt/spack
--log install_times.json
${SPACK_ARTIFACTS_ROOT}/user_data/install_times.json || true
variables:
CI_JOB_SIZE: "default"
CI_GPG_KEY_ROOT: /mnt/key

View File

@@ -37,10 +37,14 @@ def find_logs(prefix, filename):
# Look in the list of prefixes for logs
for prefix in prefixes:
logs = find_logs(prefix, args.log)
print(f"Walking {prefix}")
logs = [log for log in find_logs(prefix, args.log)]
print(f" * found {len(logs)} logs")
for log in logs:
print(f" * appending data for {log}")
with open(log) as fd:
data.append(json.load(fd))
print(f"Writing {args.output_file}")
with open(args.output_file, "w") as fd:
json.dump(data, fd)

View File

@@ -240,7 +240,6 @@ spack:
- umpire ~shared +cuda cuda_arch=75
# INCLUDED IN ECP DAV CUDA
- adios2 +cuda cuda_arch=75
- paraview +cuda cuda_arch=75
- vtk-m +cuda cuda_arch=75
- zfp +cuda cuda_arch=75
# --
@@ -248,6 +247,7 @@ spack:
# - axom +cuda cuda_arch=75 # axom: https://github.com/spack/spack/issues/29520
# - cusz +cuda cuda_arch=75 # cusz: https://github.com/spack/spack/issues/38787
# - dealii +cuda cuda_arch=75 # slepc: make[1]: *** internal error: invalid --jobserver-auth string 'fifo:/tmp/GMfifo1313'.
# - paraview +cuda cuda_arch=75 # Error building some cuda componets in paraview
# - ecp-data-vis-sdk +adios2 +hdf5 +vtkm +zfp +paraview +cuda cuda_arch=75 # embree: https://github.com/spack/spack/issues/39534
# - lammps +cuda cuda_arch=75 # lammps: needs NVIDIA driver
# - lbann +cuda cuda_arch=75 # lbann: https://github.com/spack/spack/issues/38788
@@ -287,7 +287,6 @@ spack:
- umpire ~shared +cuda cuda_arch=80
# INCLUDED IN ECP DAV CUDA
- adios2 +cuda cuda_arch=80
- paraview +cuda cuda_arch=80
- vtk-m +cuda cuda_arch=80
- zfp +cuda cuda_arch=80
# --
@@ -295,6 +294,7 @@ spack:
# - axom +cuda cuda_arch=80 # axom: https://github.com/spack/spack/issues/29520
# - cusz +cuda cuda_arch=80 # cusz: https://github.com/spack/spack/issues/38787
# - dealii +cuda cuda_arch=80 # slepc: make[1]: *** internal error: invalid --jobserver-auth string 'fifo:/tmp/GMfifo1313'.
# - paraview +cuda cuda_arch=80 # Error building some cuda componets in paraview
# - ecp-data-vis-sdk +adios2 +hdf5 +vtkm +zfp +paraview +cuda cuda_arch=80 # embree: https://github.com/spack/spack/issues/39534
# - lammps +cuda cuda_arch=80 # lammps: needs NVIDIA driver
# - lbann +cuda cuda_arch=80 # lbann: https://github.com/spack/spack/issues/38788

View File

@@ -103,7 +103,7 @@ spack:
- kokkos +openmp
- kokkos-kernels +openmp
- laghos
- lammps
- lammps +amoeba +asphere +bocs +body +bpm +brownian +cg-dna +cg-spica +class2 +colloid +colvars +compress +coreshell +dielectric +diffraction +dipole +dpd-basic +dpd-meso +dpd-react +dpd-smooth +drude +eff +electrode +extra-compute +extra-dump +extra-fix +extra-molecule +extra-pair +fep +granular +interlayer +kspace +lepton +machdyn +manybody +mc +meam +mesont +misc +ml-iap +ml-pod +ml-snap +mofff +molecule +openmp-package +opt +orient +peri +phonon +plugin +poems +qeq +reaction +reaxff +replica +rigid +shock +sph +spin +srd +tally +uef +voronoi +yaff
- lbann
- legion
- libnrm
@@ -119,6 +119,7 @@ spack:
- mpifileutils ~xattr
- nccmp
- nco
- nekbone +mpi
- netlib-scalapack
- nrm
- nvhpc
@@ -186,6 +187,7 @@ spack:
# --
# - geopm # geopm: https://github.com/spack/spack/issues/38795
# - glvis # glvis: https://github.com/spack/spack/issues/42839
# - nek5000 +mpi +visit # nek5000: Error: AttributeError: 'str' object has no attribute 'propagate': 'VISIT_INSTALL="' + spec["visit"].prefix.bin + '"',
# PYTHON PACKAGES
- opencv +python3
@@ -207,6 +209,7 @@ spack:
- py-seaborn
- py-tensorflow
- py-torch
- py-deephyper
# # CUDA NOARCH
# - bricks +cuda

View File

@@ -331,10 +331,7 @@ class Acts(CMakePackage, CudaPackage):
depends_on("python@3.8:", when="+python @19.11:19")
depends_on("python@3.8:", when="+python @21:")
depends_on("py-onnxruntime@:1.12", when="+onnx @:23.2")
# FIXME py-onnxruntime@1.12: required but not yet available
# Ref: https://github.com/spack/spack/pull/37064
# depends_on("py-onnxruntime@1.12:", when="+onnx @23.3:")
conflicts("+onnx", when="@23.3:", msg="py-onnxruntime@1.12: required but not yet available")
depends_on("py-onnxruntime@1.12:", when="+onnx @23.3:")
depends_on("py-pybind11 @2.6.2:", when="+python @18:")
depends_on("py-pytest", when="+python +unit_tests")

View File

@@ -26,13 +26,11 @@ class Adios2(CMakePackage, CudaPackage, ROCmPackage):
version("master", branch="master")
version(
"2.10.0-rc1", sha256="8b72142bd5aabfb80c7963f524df11b8721c09ef20caea6df5fb00c31a7747c0"
)
version(
"2.9.2",
sha256="78309297c82a95ee38ed3224c98b93d330128c753a43893f63bbe969320e4979",
"2.10.0",
sha256="e5984de488bda546553dd2f46f047e539333891e63b9fe73944782ba6c2d95e4",
preferred=True,
)
version("2.9.2", sha256="78309297c82a95ee38ed3224c98b93d330128c753a43893f63bbe969320e4979")
version("2.9.1", sha256="ddfa32c14494250ee8a48ef1c97a1bf6442c15484bbbd4669228a0f90242f4f9")
version("2.9.0", sha256="69f98ef58c818bb5410133e1891ac192653b0ec96eb9468590140f2552b6e5d1")
version("2.8.3", sha256="4906ab1899721c41dd918dddb039ba2848a1fb0cf84f3a563a1179b9d6ee0d9f")
@@ -225,7 +223,7 @@ class Adios2(CMakePackage, CudaPackage, ROCmPackage):
# cmake: find threads package first
# https://github.com/ornladios/ADIOS2/pull/3893
patch("2.9.2-cmake-find-threads-package-first.patch", when="@2.9.2:")
patch("2.9.2-cmake-find-threads-package-first.patch", when="@2.9")
@when("%fj")
def patch(self):

View File

@@ -20,6 +20,7 @@ class AmrWind(CMakePackage, CudaPackage, ROCmPackage):
license("BSD-3-Clause")
version("main", branch="main", submodules=True)
version("0.9.0", tag="v0.9.0", submodules=True)
variant("hypre", default=True, description="Enable Hypre integration")
variant("ascent", default=False, description="Enable Ascent integration")
@@ -31,9 +32,22 @@ class AmrWind(CMakePackage, CudaPackage, ROCmPackage):
variant("shared", default=True, description="Build shared libraries")
variant("tests", default=True, description="Activate regression tests")
variant("tiny_profile", default=False, description="Activate tiny profile")
variant("hdf5", default=False, description="Enable HDF5 plots with ZFP compression")
variant("umpire", default=False, description="Enable Umpire")
variant("sycl", default=False, description="Enable SYCL backend")
variant("gpu-aware-mpi", default=False, description="gpu-aware-mpi")
depends_on("hypre~int64@2.20.0:", when="+hypre")
depends_on("hypre+mpi", when="+hypre+mpi")
depends_on("hdf5~mpi", when="+hdf5~mpi")
depends_on("hdf5+mpi", when="+hdf5+mpi")
depends_on("h5z-zfp", when="+hdf5")
depends_on("zfp", when="+hdf5")
depends_on("hypre+umpire", when="+umpire")
depends_on("hypre+sycl", when="+sycl")
depends_on("hypre+gpu-aware-mpi", when="+gpu-aware-mpi")
depends_on("hypre@2.29.0:", when="@0.9.0:+hypre")
for arch in CudaPackage.cuda_arch_values:
depends_on("hypre+cuda cuda_arch=%s" % arch, when="+cuda+hypre cuda_arch=%s" % arch)
for arch in ROCmPackage.amdgpu_targets:
@@ -88,6 +102,13 @@ def cmake_args(self):
if "+mpi" in self.spec:
args.append(define("MPI_HOME", self.spec["mpi"].prefix))
if "+hdf5" in self.spec:
cmake_options.append(self.define("AMR_WIND_ENABLE_HDF5", True))
cmake_options.append(self.define("AMR_WIND_ENABLE_HDF5_ZFP", True))
# Help AMReX understand if HDF5 is parallel or not.
# Building HDF5 with CMake as Spack does, causes this inspection to break.
cmake_options.append(self.define("HDF5_IS_PARALLEL", spec.satisfies("+mpi")))
if "+cuda" in self.spec:
amrex_arch = [
"{0:.1f}".format(float(i) / 10.0) for i in self.spec.variants["cuda_arch"].value
@@ -100,4 +121,16 @@ def cmake_args(self):
targets = self.spec.variants["amdgpu_target"].value
args.append("-DAMReX_AMD_ARCH=" + ";".join(str(x) for x in targets))
if "+sycl" in self.spec:
cmake_options.append(self.define("AMR_WIND_ENABLE_SYCL", True))
requires(
"%dpcpp",
"%oneapi",
policy="one_of",
msg=(
"AMReX's SYCL GPU Backend requires DPC++ (dpcpp) "
"or the oneAPI CXX (icpx) compiler."
),
)
return args

View File

@@ -26,6 +26,7 @@ class Amrex(CMakePackage, CudaPackage, ROCmPackage):
license("BSD-3-Clause")
version("develop", branch="development")
version("24.04", sha256="77a91e75ad0106324a44ca514e1e8abc54f2fc2d453406441c871075726a8167")
version("24.03", sha256="024876fe65838d1021fcbf8530b992bff8d9be1d3f08a1723c4e2e5f7c28b427")
version("24.02", sha256="286cc3ca29daa69c8eafc1cd7a572662dec9eb78631ac3d33a1260868fdc6996")
version("24.01", sha256="83dbd4dad6dc51fa4a80aad0347b15ee5a6d816cf4abcd87f7b0e2987d8131b7")

View File

@@ -131,6 +131,8 @@ class Bazel(Package):
# Bazel-4.0.0 does not compile with gcc-11
# Newer versions of grpc and abseil dependencies are needed but are not in bazel-4.0.0
conflicts("@4.0.0", when="%gcc@11:")
# https://github.com/bazelbuild/bazel/issues/18642
conflicts("@:6", when="%gcc@13:")
executables = ["^bazel$"]

View File

@@ -43,7 +43,7 @@ class Datatransferkit(CMakePackage):
depends_on("trilinos+openmp", when="+openmp")
depends_on("trilinos+stratimikos+belos", when="@master")
depends_on("trilinos@13:13.4.1", when="@3.1-rc2:3.1-rc3")
depends_on("trilinos@14:", when="@3.1.0:")
depends_on("trilinos@14.2:", when="@3.1.0:")
def cmake_args(self):
spec = self.spec

View File

@@ -6,37 +6,93 @@
from spack.package import *
class Exawind(CMakePackage):
class Exawind(CMakePackage, CudaPackage, ROCmPackage):
"""Multi-application driver for Exawind project."""
homepage = "https://github.com/Exawind/exawind-driver"
git = "https://github.com/Exawind/exawind-driver.git"
maintainers("jrood-nrel", "psakievich")
maintainers("jrood-nrel")
tags = ["ecp", "ecp-apps"]
# Testing is currently always enabled, but should be optional in the future
# to avoid cloning the mesh submodule
version("master", branch="main", submodules=True)
version("1.0.0", tag="v1.0.0", submodules=True)
license("Apache-2.0")
version("master", branch="main")
variant("openfast", default=False, description="Enable OpenFAST integration")
variant("hypre", default=True, description="Enable hypre solver")
variant("stk_simd", default=False, description="Enable SIMD in STK")
variant("umpire", default=False, description="Enable Umpire")
variant("tiny_profile", default=False, description="Turn on AMR-wind with tiny profile")
variant("sycl", default=False, description="Enable SYCL backend for AMR-Wind")
variant("gpu-aware-mpi", default=False, description="gpu-aware-mpi")
depends_on("trilinos+stk")
depends_on("tioga+shared~nodegid")
depends_on("nalu-wind+hypre+openfast+tioga+wind-utils")
depends_on("amr-wind+hypre+mpi+netcdf+openfast")
depends_on("openfast+cxx+shared@2.6.0:")
conflicts("amr-wind+hypre", when="+sycl")
for arch in CudaPackage.cuda_arch_values:
depends_on("amr-wind+cuda cuda_arch=%s" % arch, when="+cuda cuda_arch=%s" % arch)
depends_on("nalu-wind+cuda cuda_arch=%s" % arch, when="+cuda cuda_arch=%s" % arch)
for arch in ROCmPackage.amdgpu_targets:
depends_on("amr-wind+rocm amdgpu_target=%s" % arch, when="+rocm amdgpu_target=%s" % arch)
depends_on("nalu-wind+rocm amdgpu_target=%s" % arch, when="+rocm amdgpu_target=%s" % arch)
depends_on("nalu-wind+tioga")
depends_on("amr-wind+netcdf+mpi")
depends_on("tioga~nodegid")
depends_on("yaml-cpp@0.6:")
depends_on("nalu-wind+openfast", when="+openfast")
depends_on("amr-wind+hypre", when="+hypre~sycl")
depends_on("amr-wind~hypre", when="~hypre")
depends_on("nalu-wind+hypre", when="+hypre")
depends_on("nalu-wind~hypre", when="~hypre")
depends_on("amr-wind+sycl", when="+sycl")
depends_on("nalu-wind+umpire", when="+umpire")
depends_on("amr-wind+umpire", when="+umpire")
depends_on("amr-wind+tiny_profile", when="+tiny_profile")
depends_on("nalu-wind+gpu-aware-mpi", when="+gpu-aware-mpi")
depends_on("amr-wind+gpu-aware-mpi", when="+gpu-aware-mpi")
depends_on("nalu-wind@2.0.0:", when="@1.0.0:")
depends_on("amr-wind@0.9.0:", when="@1.0.0:")
depends_on("tioga@1.0.0:", when="@1.0.0:")
def cmake_args(self):
spec = self.spec
args = [
self.define("Trilinos_DIR", spec["trilinos"].prefix),
self.define("TIOGA_DIR", spec["tioga"].prefix),
self.define("Nalu-Wind_DIR", spec["nalu-wind"].prefix),
self.define("AMR-Wind_DIR", spec["amr-wind"].prefix),
self.define("OpenFAST_DIR", spec["openfast"].prefix),
self.define("YAML-CPP_DIR", spec["yaml-cpp"].prefix),
]
args = [self.define("MPI_HOME", spec["mpi"].prefix)]
if "+umpire" in self.spec:
args.append(self.define_from_variant("EXAWIND_ENABLE_UMPIRE", "umpire"))
args.append(self.define("UMPIRE_DIR", self.spec["umpire"].prefix))
if spec.satisfies("+cuda"):
args.append(self.define("EXAWIND_ENABLE_CUDA", True))
args.append(self.define("CUDAToolkit_ROOT", self.spec["cuda"].prefix))
args.append(self.define("EXAWIND_CUDA_ARCH", self.spec.variants["cuda_arch"].value))
if spec.satisfies("+rocm"):
targets = self.spec.variants["amdgpu_target"].value
args.append(self.define("EXAWIND_ENABLE_ROCM", True))
args.append(self.define("CMAKE_CXX_COMPILER", self.spec["hip"].hipcc))
args.append(self.define("CMAKE_HIP_ARCHITECTURES", ";".join(str(x) for x in targets)))
args.append(self.define("AMDGPU_TARGETS", ";".join(str(x) for x in targets)))
args.append(self.define("GPU_TARGETS", ";".join(str(x) for x in targets)))
if spec.satisfies("^amr-wind+hdf5"):
args.append(self.define("H5Z_ZFP_USE_STATIC_LIBS", True))
if spec.satisfies("^amr-wind+ascent"):
args.append(self.define("CMAKE_EXE_LINKER_FLAGS", self.compiler.openmp_flag))
return args
def setup_build_environment(self, env):
if "~stk_simd" in self.spec:
env.append_flags("CXXFLAGS", "-DUSE_STK_SIMD_NONE")
if "+rocm" in self.spec:
env.set("OMPI_CXX", self.spec["hip"].hipcc)
env.set("MPICH_CXX", self.spec["hip"].hipcc)
env.set("MPICXX_CXX", self.spec["hip"].hipcc)

View File

@@ -72,6 +72,28 @@ class Fastjet(AutotoolsPackage):
)
variant("atlas", default=False, description="Patch to make random generator thread_local")
available_plugins = (
conditional("atlascone", when="@2.4.0:"),
conditional("cdfcones", when="@2.1.0:"),
conditional("cmsiterativecone", when="@2.4.0:"),
conditional("d0runicone", when="@3.0.0:"),
conditional("d0runiicone", when="@2.4.0:"),
conditional("eecambridge", when="@2.4.0:"),
conditional("gridjet", when="@3.0.0:"),
conditional("jade", when="@2.4.0:"),
conditional("nesteddefs", when="@2.4.0:"),
conditional("pxcone", when="@2.1.0:"),
conditional("siscone", when="@2.1.0:"),
conditional("trackjet", when="@2.4.0:"),
)
variant(
"plugins",
multi=True,
values=("all", "cxx") + available_plugins,
default="all",
description="List of plugins to enable, or 'cxx' or 'all'",
)
patch("atlas.patch", when="@:3.3 +atlas", level=0)
patch(
"https://gitlab.cern.ch/sft/lcgcmake/-/raw/23c82f269b8e5df0190e20b7fbe06db16b24d667/externals/patches/fastjet-3.4.1.patch",
@@ -81,7 +103,21 @@ class Fastjet(AutotoolsPackage):
)
def configure_args(self):
extra_args = ["--enable-allplugins"]
extra_args = []
plugins = self.spec.variants["plugins"].value
if "all" in plugins:
extra_args += ["--enable-allplugins"]
elif "cxx" in plugins:
extra_args += ["--enable-allcxxplugins"]
else:
for plugin in self.available_plugins:
# conditional returns an iterable _ConditionalVariantValues
for v in plugin:
# this version does not support this plugin
if not self.spec.satisfies(v.when):
continue
enabled = v.value in plugins
extra_args += [f"--{'enable' if enabled else 'disable'}-{v.value}"]
extra_args += self.enable_or_disable("shared")
extra_args += self.enable_or_disable("auto-ptr")
if self.spec.variants["thread-safety"].value == "limited":

View File

@@ -25,6 +25,7 @@ class FftwBase(AutotoolsPackage):
)
variant("openmp", default=False, description="Enable OpenMP support.")
variant("mpi", default=True, description="Activate MPI support")
variant("shared", default=True, description="Build shared libraries")
depends_on("mpi", when="+mpi")
depends_on("llvm-openmp", when="%apple-clang +openmp")
@@ -104,7 +105,9 @@ def setup_build_environment(self, env):
def configure(self, spec, prefix):
# Base options
options = ["--prefix={0}".format(prefix), "--enable-shared", "--enable-threads"]
options = ["--prefix={0}".format(prefix), "--enable-threads"]
options.extend(self.enable_or_disable("shared"))
if not self.compiler.f77 or not self.compiler.fc:
options.append("--disable-fortran")
if spec.satisfies("@:2"):

View File

@@ -20,6 +20,7 @@ class G2(CMakePackage):
maintainers("AlexanderRichert-NOAA", "Hang-Lei-NOAA", "edwardhartnett")
version("develop", branch="develop")
version("3.4.9", sha256="6edc33091f6bd2acb191182831499c226a1c3992c3acc104d6363528b12dfbae")
version("3.4.8", sha256="071a6f799c4c4fdfd5d0478152a0cbb9d668d12d71c78d5bda71845fc5580a7f")
version("3.4.7", sha256="d6530611e3a515122f11ed4aeede7641f6f8932ef9ee0d4828786572767304dc")
version("3.4.6", sha256="c4b03946365ce0bacf1e10e8412a5debd72d8671d1696aa4fb3f3adb119175fe")

View File

@@ -18,6 +18,7 @@ class G2c(CMakePackage):
maintainers("AlexanderRichert-NOAA", "Hang-Lei-NOAA", "edwardhartnett")
version("develop", branch="develop")
version("1.9.0", sha256="5554276e18bdcddf387a08c2dd23f9da310c6598905df6a2a244516c22ded9aa")
version("1.8.0", sha256="4ce9f5a7cb0950699fe08ebc5a463ab4d09ef550c050391a319308a2494f971f")
version("1.7.0", sha256="73afba9da382fed73ed8692d77fa037bb313280879cd4012a5e5697dccf55175")
version("1.6.4", sha256="5129a772572a358296b05fbe846bd390c6a501254588c6a223623649aefacb9d")

View File

@@ -18,6 +18,20 @@ class G2tmpl(CMakePackage):
maintainers("edwardhartnett", "AlexanderRichert-NOAA", "Hang-Lei-NOAA")
version("develop", branch="develop")
version("1.11.0", sha256="00fde3b37c6b4d1f0eaf60f230159298ffcb47349a076c3bd6afa20c7ed791a9")
version("1.10.2", sha256="4063361369f3691f75288c801fa9d1a2414908b7d6c07bbf69d4165802e2a7fc")
version("1.10.1", sha256="0be425e5128fabb89915a92261aa75c27a46a3e115e00c686fc311321e5d1e2a")
version("1.10.0", sha256="dcc0e40b8952f91d518c59df7af64e099131c17d85d910075bfa474c8822649d")
variant("shared", default=False, description="Build shared library")
def cmake_args(self):
args = [
self.define_from_variant("BUILD_SHARED_LIBS", "shared"),
self.define("BUILD_TESTING", self.run_tests),
]
return args
def check(self):
with working_dir(self.builder.build_directory):
make("test")

View File

@@ -32,6 +32,7 @@ class Gdal(CMakePackage, AutotoolsPackage, PythonExtension):
license("MIT")
version("3.8.5", sha256="e8b4df2a8a7d25272f867455c0c230459545972f81f0eff2ddbf6a6f60dcb1e4")
version("3.8.4", sha256="0c53ced95d29474236487202709b49015854f8e02e35e44ed0f4f4e12a7966ce")
version("3.8.3", sha256="ae2d160f65016e208eca34ff14490ec4511f1fa03fd386ac130449d15e82929d")
version("3.8.2", sha256="dc2921ee1cf7a5c0498e94d15fb9ab9c9689c296363a1d021fc3293dd242b4db")


@@ -3,10 +3,11 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.build_systems import autotools, cmake
from spack.package import *
class Gperftools(AutotoolsPackage):
class Gperftools(AutotoolsPackage, CMakePackage):
"""Google's fast malloc/free implementation, especially for
multi-threaded applications. Contains tcmalloc, heap-checker,
heap-profiler, and cpu-profiler.
@@ -19,6 +20,8 @@ class Gperftools(AutotoolsPackage):
license("BSD-3-Clause")
build_system(conditional("cmake", when="@2.8.1:"), "autotools", default="cmake")
version("2.15", sha256="c69fef855628c81ef56f12e3c58f2b7ce1f326c0a1fe783e5cae0b88cbbe9a80")
version("2.14", sha256="6b561baf304b53d0a25311bd2e29bc993bed76b7c562380949e7cb5e3846b299")
version("2.13", sha256="4882c5ece69f8691e51ffd6486df7d79dbf43b0c909d84d3c0883e30d27323e7")
@@ -43,21 +46,38 @@ class Gperftools(AutotoolsPackage):
)
depends_on("unwind", when="+libunwind")
depends_on("cmake@3.12:", type="build", when="build_system=cmake")
# Linker error: src/base/dynamic_annotations.cc:46: undefined reference to
# `TCMallocGetenvSafe'
conflicts("target=ppc64:", when="@2.14")
conflicts("target=ppc64le:", when="@2.14")
def configure_args(self):
args = []
args += self.enable_or_disable("sized-delete", variant="sized_delete")
args += self.enable_or_disable(
"dynamic-sized-delete-support", variant="dynamic_sized_delete_support"
)
args += self.enable_or_disable("debugalloc")
args += self.enable_or_disable("libunwind")
if self.spec.satisfies("+libunwind"):
args += ["LDFLAGS=-lunwind"]
# the autotools build system creates an explicit list of -L <system dir> flags that end up
# before the -L <spack dir> flags, which causes the system libunwind to be linked instead of
# the spack libunwind. This is a workaround to fix that.
conflicts("+libunwind", when="build_system=autotools")
return args
class CMakeBuilder(cmake.CMakeBuilder):
def cmake_args(self):
return [
self.define_from_variant("gperftools_sized_delete", "sized_delete"),
self.define_from_variant(
"gperftools_dynamic_sized_delete_support", "dynamic_sized_delete_support"
),
self.define_from_variant("GPERFTOOLS_BUILD_DEBUGALLOC", "debugalloc"),
self.define_from_variant("gperftools_enable_libunwind", "libunwind"),
]
class AutotoolsBuilder(autotools.AutotoolsBuilder):
def configure_args(self):
return [
*self.enable_or_disable("sized-delete", variant="sized_delete"),
*self.enable_or_disable(
"dynamic-sized-delete-support", variant="dynamic_sized_delete_support"
),
*self.enable_or_disable("debugalloc"),
*self.enable_or_disable("libunwind"),
]
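
The workaround comment above describes a link-order problem: when the generated link line puts the compiler's default library directories ahead of Spack's `-L` entries, `-lunwind` resolves to the system copy instead of the Spack-built one. A toy first-match search illustrating why the ordering matters (the paths are made up for the example):

def first_match(lib, search_dirs, contents):
    """Return the first search dir providing lib, mimicking -L ordering."""
    for directory in search_dirs:
        if lib in contents.get(directory, ()):
            return directory
    return None

contents = {
    "/usr/lib64": {"libunwind.so"},                     # system copy
    "/spack/opt/libunwind-1.6/lib": {"libunwind.so"},   # Spack copy
}
# System dirs first (the autotools behaviour described above): wrong library wins.
print(first_match("libunwind.so", ["/usr/lib64", "/spack/opt/libunwind-1.6/lib"], contents))
# Spack's -L entries first (what the CMake build produces): intended library wins.
print(first_match("libunwind.so", ["/spack/opt/libunwind-1.6/lib", "/usr/lib64"], contents))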


@@ -22,6 +22,7 @@ class GribUtil(CMakePackage):
version("1.2.3", sha256="b17b08e12360bb8ad01298e615f1b4198e304b0443b6db35fe990a817e648ad5")
variant("openmp", default=False, description="Use OpenMP multithreading")
variant("tests", default=False, description="Enable this variant when installing with --test")
depends_on("jasper")
depends_on("libpng")
@@ -30,15 +31,19 @@ class GribUtil(CMakePackage):
requires("^w3emc precision=4,d", when="^w3emc@2.10:")
depends_on("w3nco", when="@:1.2.3")
depends_on("g2")
depends_on("g2c@1.8: +utils", when="+tests")
depends_on("bacio")
depends_on("ip")
depends_on("ip@:3.3.3", when="@:1.2.4")
depends_on("sp")
requires("^ip precision=d", when="^ip@4.1:")
depends_on("ip@:3.3.3", when="@:1.2")
depends_on("sp", when="^ip@:4")
requires("^sp precision=d", when="^ip@:4 ^sp@2.4:")
def cmake_args(self):
args = [
self.define_from_variant("OPENMP", "openmp"),
self.define("BUILD_TESTING", self.run_tests),
self.define("G2C_COMPARE", self.run_tests),
]
return args


@@ -72,6 +72,7 @@ class Hypre(AutotoolsPackage, CudaPackage, ROCmPackage):
variant("int64", default=False, description="Use 64bit integers")
variant("mixedint", default=False, description="Use 64bit integers while reducing memory use")
variant("complex", default=False, description="Use complex values")
variant("gpu-aware-mpi", default=False, description="Use gpu-aware mpi")
variant("mpi", default=True, description="Enable MPI support")
variant("openmp", default=False, description="Enable OpenMP support")
variant("debug", default=False, description="Build debug instead of optimized version")
@@ -300,6 +301,9 @@ def configure_args(self):
configure_args.append("--with-magma-lib=%s" % spec["magma"].libs)
configure_args.append("--with-magma")
if "+gpu-aware-mpi" in spec:
configure_args.append("--enable-gpu-aware-mpi")
configure_args.extend(self.enable_or_disable("fortran"))
return configure_args


@@ -24,6 +24,12 @@ class IntelOneapiInspector(IntelOneApiLibraryPackageWithSdk):
homepage = "https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/inspector.html"
version(
"2024.1.0",
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/891acaab-a5b4-4a3c-9f36-60dca629e410/l_inspector_oneapi_p_2024.1.0.158_offline.sh",
sha256="b4e4e2e395ad98ce025a6bf26950fc39c2f54c905a1a9c88cb129109a5dd0936",
expand=False,
)
version(
"2024.0.0",
url="https://registrationcenter-download.intel.com/akdlm//IRC_NAS/44ae6846-719c-49bd-b196-b16ce5835a1e/l_inspector_oneapi_p_2024.0.0.49433_offline.sh",


@@ -27,6 +27,12 @@ class IntelOneapiItac(IntelOneApiPackage):
maintainers("rscohn2")
version(
"2022.1.0",
url="https://registrationcenter-download.intel.com/akdlm/IRC_NAS/644eec67-83d9-4bdd-be0d-d90587ec72ed/l_itac_oneapi_p_2022.1.0.158_offline.sh",
sha256="2a1f4be6b349d1629006ee72087b361a5f3e714bddd1ef932045d0c03c0b20e8",
expand=False,
)
version(
"2022.0.0",
url="https://registrationcenter-download.intel.com/akdlm//IRC_NAS/e83526f5-7e0f-4708-9e0d-47f1e65f29aa/l_itac_oneapi_p_2022.0.0.49690_offline.sh",


@@ -15,6 +15,9 @@ class IntelOneapiRuntime(Package):
homepage = "https://software.intel.com/content/www/us/en/develop/tools/oneapi.html"
has_code = False
license("https://intel.ly/393CijO")
maintainers("rscohn2")
tags = ["runtime"]


@@ -0,0 +1,36 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class Libfort(CMakePackage):
"""libfort is a simple crossplatform library to create formatted text tables."""
homepage = "https://github.com/seleznevae/libfort"
url = "https://github.com/seleznevae/libfort/archive/refs/tags/v0.4.2.tar.gz"
license("MIT")
version("0.4.2", sha256="8f7b03f1aa526e50c9828f09490f3c844b73d5f9ca72493fe81931746f75e489")
variant("enable_astyle", default=False, description="Enable astyle")
variant("enable_wchar", default=True, description="Enable wchar support")
variant("enable_utf8", default=True, description="Enable utf8 support")
variant("enable_testing", default=True, description="Enables building tests and examples")
variant("shared", default=False, description="Build shared library")
depends_on("cmake@3.0.0:", type="build")
def cmake_args(self):
args = [
self.define_from_variant("FORT_ENABLE_ASTYLE", "enable_astyle"),
self.define_from_variant("FORT_ENABLE_WCHAR", "enable_wchar"),
self.define_from_variant("FORT_ENABLE_UTF8", "enable_utf8"),
self.define_from_variant("FORT_ENABLE_TESTING", "enable_testing"),
self.define_from_variant("BUILD_SHARED_LIBS", "shared"),
]
return args


@@ -0,0 +1,44 @@
# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class Meshlab(CMakePackage):
"""The open source mesh processing system."""
homepage = "https://www.meshlab.net"
url = "https://github.com/cnr-isti-vclab/meshlab/archive/refs/tags/MeshLab-2023.12.tar.gz"
git = "https://github.com/cnr-isti-vclab/meshlab.git"
maintainers("wdconinc")
license("GPL-3.0", checked_by="wdconinc")
version("main", branch="main", submodules=True)
version("2023.12", commit="2dbd2f4b12df3b47d8777b2b4a43cabd9e425735", submodules=True)
variant("double_scalar", default=False, description="Type to use for scalars")
depends_on("eigen")
depends_on("glew")
depends_on("mpfr")
depends_on("qt@5.15: +opengl")
def cmake_args(self):
args = [
self.define_from_variant("MESHLAB_BUILD_WITH_DOUBLE_SCALAR", "double_scalar"),
# E57 and Nexus plugins fail on gcc-13 due to missing include cstdint,
# but patching is cumbersome since the build process downloads their source.
# Ref: https://github.com/asmaloney/libE57Format/pull/176
self.define("MESHLAB_ALLOW_DOWNLOAD_SOURCE_LIBE57", False),
# Ref: https://github.com/cnr-isti-vclab/corto/pull/44
self.define("MESHLAB_ALLOW_DOWNLOAD_SOURCE_NEXUS", False),
]
for bundle in (
    "LIBIGL", "LEVMAR", "LIB3DS", "EMBREE", "NEXUS", "QHULL", "STRUCTURE_SYNTH",
    "TINYGLTF", "MUPARSER", "BOOST", "OPENCTM", "U3D", "LIBE57", "CGAL", "XERCES",
):
    args.append(self.define(f"MESHLAB_ALLOW_DOWNLOAD_SOURCE_{bundle}", False))
return args


@@ -15,7 +15,7 @@ def _parse_float(val):
return False
class NaluWind(CMakePackage, CudaPackage):
class NaluWind(CMakePackage, CudaPackage, ROCmPackage):
"""Nalu-Wind: Wind energy focused variant of Nalu."""
homepage = "https://nalu-wind.readthedocs.io"
@@ -26,6 +26,7 @@ class NaluWind(CMakePackage, CudaPackage):
tags = ["ecp", "ecp-apps"]
version("master", branch="master")
version("2.0.0", tag="v2.0.0")
variant("pic", default=True, description="Position independent code")
variant(
@@ -45,12 +46,31 @@ class NaluWind(CMakePackage, CudaPackage):
variant("hypre", default=True, description="Compile with Hypre support")
variant("trilinos-solvers", default=True, description="Compile with Trilinos Solvers support")
variant("catalyst", default=False, description="Compile with Catalyst support")
variant("shared", default=True, description="Build shared libraries")
variant("fftw", default=False, description="Compile with FFTW support")
variant("fsi", default=False, description="Enable fluid-structure-interaction models")
variant("boost", default=False, description="Enable Boost integration")
variant("gpu-aware-mpi", default=False, description="gpu-aware-mpi")
variant("wind-utils", default=False, description="Build wind-utils")
variant("umpire", default=False, description="Enable Umpire")
conflicts(
"+shared",
when="+cuda",
msg="invalid device functions are generated with shared libs and cuda",
)
conflicts(
"+shared",
when="+rocm",
msg="invalid device functions are generated with shared libs and rocm",
)
conflicts("+cuda", when="+rocm")
conflicts("+rocm", when="+cuda")
depends_on("mpi")
depends_on("yaml-cpp@0.5.3:")
depends_on("openfast@4.0.0:+cxx+netcdf", when="+fsi")
depends_on("trilinos@13.4.1+exodus+zoltan+stk", when="@=2.0.0")
depends_on("hypre@2.29.0:", when="@2.0.0:+hypre")
depends_on(
"trilinos@13:+exodus+tpetra+zoltan+stk~superlu-dist~superlu+hdf5+shards~hypre+gtest"
)
@@ -76,12 +96,42 @@ class NaluWind(CMakePackage, CudaPackage):
"hypre@develop +mpi+cuda~int64~superlu-dist cuda_arch={0}".format(_arch),
when="+hypre+cuda cuda_arch={0}".format(_arch),
)
for _arch in ROCmPackage.amdgpu_targets:
depends_on(
"trilinos@13.4.0.2022.10.27: "
"~shared+exodus+tpetra+zoltan+stk~superlu-dist~superlu"
"+hdf5+shards~hypre+gtest+rocm amdgpu_target={0}".format(_arch),
when="+rocm amdgpu_target={0}".format(_arch),
)
depends_on(
"hypre+rocm amdgpu_target={0}".format(_arch),
when="+hypre+rocm amdgpu_target={0}".format(_arch),
)
depends_on("trilinos-catalyst-ioss-adapter", when="+catalyst")
depends_on("fftw+mpi", when="+fftw")
depends_on("nccmp")
# indirect dependency needed to make original concretizer work
depends_on("netcdf-c+parallel-netcdf")
depends_on("boost +filesystem +iostreams cxxstd=14", when="+boost")
supported_cxxstd = ["17"]
variant(
"cxxstd", default="17", values=supported_cxxstd, multi=False, description="cxx standard"
)
for std in supported_cxxstd:
depends_on("trilinos cxxstd=%s" % std, when="cxxstd=%s" % std)
def setup_build_environment(self, env):
if "~stk_simd" in self.spec:
env.append_flags("CXXFLAGS", "-DUSE_STK_SIMD_NONE")
if "+cuda" in self.spec:
env.set("CUDA_LAUNCH_BLOCKING", "1")
env.set("CUDA_MANAGED_FORCE_DEVICE_ALLOC", "1")
env.set("OMPI_CXX", self.spec["kokkos-nvcc-wrapper"].kokkos_cxx)
env.set("MPICH_CXX", self.spec["kokkos-nvcc-wrapper"].kokkos_cxx)
env.set("MPICXX_CXX", self.spec["kokkos-nvcc-wrapper"].kokkos_cxx)
if "+rocm" in self.spec:
env.append_flags("CXXFLAGS", "-fgpu-rdc")
def cmake_args(self):
spec = self.spec
@@ -95,22 +145,26 @@ def cmake_args(self):
self.define_from_variant("ENABLE_CUDA", "cuda"),
self.define_from_variant("ENABLE_WIND_UTILS", "wind-utils"),
self.define_from_variant("ENABLE_BOOST", "boost"),
self.define_from_variant("CMAKE_CXX_STANDARD", "cxxstd"),
self.define_from_variant("BUILD_SHARED_LIBS", "shared"),
self.define_from_variant("ENABLE_OPENFAST", "openfast"),
self.define_from_variant("ENABLE_TIOGA", "tioga"),
self.define_from_variant("ENABLE_HYPRE", "hypre"),
self.define_from_variant("ENABLE_TRILINOS_SOLVERS", "trilinos-solvers"),
self.define_from_variant("ENABLE_PARAVIEW_CATALYST", "catalyst"),
self.define_from_variant("ENABLE_FFTW", "fftw"),
self.define_from_variant("ENABLE_UMPIRE", "umpire"),
]
args.append(self.define_from_variant("ENABLE_OPENFAST", "openfast"))
if "+openfast" in spec:
args.append(self.define("OpenFAST_DIR", spec["openfast"].prefix))
args.append(self.define_from_variant("ENABLE_TIOGA", "tioga"))
if "+tioga" in spec:
args.append(self.define("TIOGA_DIR", spec["tioga"].prefix))
args.append(self.define_from_variant("ENABLE_HYPRE", "hypre"))
if "+hypre" in spec:
args.append(self.define("HYPRE_DIR", spec["hypre"].prefix))
args.append(self.define_from_variant("ENABLE_TRILINOS_SOLVERS", "trilinos-solvers"))
args.append(self.define_from_variant("ENABLE_PARAVIEW_CATALYST", "catalyst"))
if "+catalyst" in spec:
args.append(
self.define(
@@ -118,7 +172,6 @@ def cmake_args(self):
)
)
args.append(self.define_from_variant("ENABLE_FFTW", "fftw"))
if "+fftw" in spec:
args.append(self.define("FFTW_DIR", spec["fftw"].prefix))
@@ -131,6 +184,15 @@ def cmake_args(self):
]
)
if "+umpire" in spec:
args.append(self.define("UMPIRE_DIR", spec["umpire"].prefix))
if "+rocm" in spec:
args.append(self.define("CMAKE_CXX_COMPILER", spec["hip"].hipcc))
args.append(self.define("ENABLE_ROCM", True))
targets = spec.variants["amdgpu_target"].value
args.append(self.define("GPU_TARGETS", ";".join(str(x) for x in targets)))
if "darwin" in spec.architecture:
args.append(self.define("CMAKE_MACOSX_RPATH", "ON"))


@@ -15,9 +15,14 @@ class Nalu(CMakePackage):
"""
homepage = "https://github.com/NaluCFD/Nalu"
url = "https://github.com/NaluCFD/Nalu/archive/refs/tags/v1.6.0.tar.gz"
git = "https://github.com/NaluCFD/Nalu.git"
version("master", branch="master")
version("1.6.0", sha256="2eafafe25ed44a7bc1429881f8f944b9794ca51b1e1b29c28a45b91520c7cf97")
depends_on("trilinos@master", when="@master")
depends_on("trilinos@=14.0.0", when="@1.6.0")
# Options
variant(
@@ -31,16 +36,15 @@ class Nalu(CMakePackage):
# Required dependencies
depends_on("mpi")
depends_on("yaml-cpp@0.5.3:", when="+shared")
depends_on("yaml-cpp~shared@0.5.3:", when="~shared")
depends_on("yaml-cpp@0.5.3:0.6.2", when="+shared")
depends_on("yaml-cpp~shared@0.5.3:0.6.2", when="~shared")
# Cannot build Trilinos as a shared library with STK on Darwin
# which is why we have a 'shared' variant for Nalu
# https://github.com/trilinos/Trilinos/issues/2994
depends_on(
"trilinos"
"+mpi+exodus+tpetra+muelu+belos+ifpack2+amesos2+zoltan+stk+boost"
"~superlu-dist+superlu+hdf5+shards~hypre"
"@master"
"+mpi+exodus+tpetra+muelu+belos+ifpack2+amesos2+zoltan+stk+boost+gtest"
"~superlu-dist+superlu+hdf5+shards~hypre gotype=long"
)
depends_on("trilinos~shared", when="~shared")
# Optional dependencies


@@ -0,0 +1,17 @@
diff --git a/modules/openfast-library/src/FAST_Solver.f90 b/modules/openfast-library/src/FAST_Solver.f90
index 364d0b78..10056965 100644
--- a/modules/openfast-library/src/FAST_Solver.f90
+++ b/modules/openfast-library/src/FAST_Solver.f90
@@ -607,9 +607,9 @@ SUBROUTINE AD_InputSolve_IfW( p_FAST, u_AD, y_IfW, y_OpFM, ErrStat, ErrMsg )
end if
if (u_AD%rotors(1)%NacelleMotion%NNodes > 0) then
- u_AD%rotors(1)%InflowOnNacelle(1) = y_OpFM%u(node)
- u_AD%rotors(1)%InflowOnNacelle(2) = y_OpFM%v(node)
- u_AD%rotors(1)%InflowOnNacelle(3) = y_OpFM%w(node)
+ u_AD%rotors(1)%InflowOnNacelle(1) = y_OpFM%u(1)
+ u_AD%rotors(1)%InflowOnNacelle(2) = y_OpFM%v(1)
+ u_AD%rotors(1)%InflowOnNacelle(3) = y_OpFM%w(1)
node = node + 1
else
u_AD%rotors(1)%InflowOnNacelle = 0.0_ReKi


@@ -13,6 +13,7 @@ class Openfast(CMakePackage):
git = "https://github.com/OpenFAST/openfast.git"
maintainers("jrood-nrel")
patch("hub_seg_fault.patch", when="@2.7:3.2")
license("Apache-2.0")


@@ -0,0 +1,21 @@
From 65ec5b0604576474141def0ba7f0c7b94f6b32ee Mon Sep 17 00:00:00 2001
From: Ryan Krattiger <ryan.krattiger@kitware.com>
Date: Fri, 8 Mar 2024 09:17:03 -0600
---
VTK/IO/CatalystConduit/vtk.module | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/VTK/IO/CatalystConduit/vtk.module b/VTK/IO/CatalystConduit/vtk.module
index c67f5a099d5..18e706e8c9f 100644
--- a/VTK/IO/CatalystConduit/vtk.module
+++ b/VTK/IO/CatalystConduit/vtk.module
@@ -5,7 +5,7 @@ LIBRARY_NAME
DESCRIPTION
Catalyst implementation for VTK, including Conduit to/from VTK conversion.
KIT
- VTK::IO
+ VTK::Parallel
SPDX_LICENSE_IDENTIFIER
BSD-3-Clause
SPDX_COPYRIGHT_TEXT


@@ -31,13 +31,11 @@ class Paraview(CMakePackage, CudaPackage, ROCmPackage):
version("master", branch="master", submodules=True)
version(
"5.12.0-RC3", sha256="6aaa46ff295126707294482e6ba24bd0ec0d68cf6bb5f56f145f8bcc53fc3f70"
)
version(
"5.11.2",
sha256="5c5d2f922f30d91feefc43b4a729015dbb1459f54c938896c123d2ac289c7a1e",
"5.12.0",
sha256="d289afe7b48533e2ca4a39a3b48d3874bfe67cf7f37fdd2131271c57e64de20d",
preferred=True,
)
version("5.11.2", sha256="5c5d2f922f30d91feefc43b4a729015dbb1459f54c938896c123d2ac289c7a1e")
version("5.11.1", sha256="5cc2209f7fa37cd3155d199ff6c3590620c12ca4da732ef7698dec37fa8dbb34")
version("5.11.0", sha256="9a0b8fe8b1a2cdfd0ace9a87fa87e0ec21ee0f6f0bcb1fdde050f4f585a25165")
version("5.10.1", sha256="520e3cdfba4f8592be477314c2f6c37ec73fb1d5b25ac30bdbd1c5214758b9c2")
@@ -319,6 +317,8 @@ class Paraview(CMakePackage, CudaPackage, ROCmPackage):
patch("exodusII-netcdf4.9.0.patch", when="@5.10.0:5.10.2")
patch("kits_with_catalyst_5_12.patch", when="@5.12.0")
generator("ninja", "make", default="ninja")
# https://gitlab.kitware.com/paraview/paraview/-/issues/21223
conflicts("generator=ninja", when="%xl")


@@ -23,6 +23,7 @@ class Precice(CMakePackage):
license("LGPL-3.0-or-later")
version("develop", branch="develop")
version("3.1.0", sha256="11e7d3d4055ee30852c0e83692ca7563acaa095bd223ebdbd5c8c851b3646d37")
version("3.0.0", sha256="efe6cf505d9305af89c6da1fdba246199a75a1c63a6a22103773ed95341879ba")
version("2.5.1", sha256="a5a37d3430eac395e885eb9cbbed9d0980a15e96c3e44763a3769fa7301e3b3a")
version("2.5.0", sha256="76ec6ee0d1a66f6f3d3d2d11f03cfc5aa7ef4d9e5deb9b7a4b4455ec7f796c00")
@@ -71,7 +72,8 @@ class Precice(CMakePackage):
depends_on("pkgconfig", type="build", when="@2.2:")
# Boost components
depends_on("boost+filesystem+log+program_options+system+test+thread")
depends_on("boost+log+program_options+system+test+thread")
depends_on("boost+filesystem", when="@:3.0.0")
depends_on("boost+signals", when="@:2.3")
# Baseline versions


@@ -13,12 +13,20 @@ class PyAmqp(PythonPackage):
license("BSD-3-Clause")
version("5.2.0", sha256="a1ecff425ad063ad42a486c902807d1482311481c8ad95a72694b2975e75f7fd")
version("5.1.1", sha256="2c1b13fecc0893e946c65cbd5f36427861cffa4ea2201d8f6fca22e2a373b5e2")
version("5.0.9", sha256="1e5f707424e544078ca196e72ae6a14887ce74e02bd126be54b7c03c971bef18")
version("5.0.6", sha256="03e16e94f2b34c31f8bf1206d8ddd3ccaa4c315f7f6a1879b7b1210d229568c2")
version("5.0.1", sha256="9881f8e6fe23e3db9faa6cfd8c05390213e1d1b95c0162bc50552cad75bffa5f")
version("5.0.0", sha256="1183b66e54a5c533b679d9f557b31c5b31d26701761f2bbd144054cce58f3588")
version("2.6.1", sha256="70cdb10628468ff14e57ec2f751c7aa9e48e7e3651cfd62d431213c0c4e58f21")
version("2.6.0", sha256="24dbaff8ce4f30566bb88976b398e8c4e77637171af3af6f1b9650f48890e60b")
version("2.5.2", sha256="77f1aef9410698d20eaeac5b73a87817365f457a507d82edf292e12cbb83b08d")
version("2.5.1", sha256="19a917e260178b8d410122712bac69cb3e6db010d68f6101e7307508aded5e68")
version("2.5.0", sha256="cbb6f87d53cac612a594f982b717cc1c54c6a1e17943a0a0d32dc6cc9e2120c8")
version("2.4.2", sha256="043beb485774ca69718a35602089e524f87168268f0d1ae115f28b88d27f92d7")
version("2.4.1", sha256="6816eed27521293ee03aa9ace300a07215b11fee4e845588a9b863a7ba30addb")
version("2.4.0", sha256="9f181e4aef6562e6f9f45660578fc1556150ca06e836ecb9e733e6ea10b48464")
depends_on("python@2.7:2.8,3.5:", type=("build", "run"))
depends_on("python@3.6:", type=("build", "run"), when="@5.0.9:")


@@ -13,6 +13,7 @@ class PyBilliard(PythonPackage):
license("BSD-3-Clause")
version("4.2.0", sha256="9a3c3184cb275aa17a732f93f65b20c525d3d9f253722d26a82194803ade5a2c")
version("3.6.4.0", sha256="299de5a8da28a783d51b197d496bef4f1595dd023a93a4f59dde1886ae905547")
version("3.6.3.0", sha256="d91725ce6425f33a97dfa72fb6bfef0e47d4652acd98a032bd1a7fbf06d5fa6a")
version("3.6.1.0", sha256="b8809c74f648dfe69b973c8e660bcec00603758c9db8ba89d7719f88d5f01f26")


@@ -9,10 +9,11 @@
class PyCelery(PythonPackage):
"""Celery - Distributed Task Queue."""
pypi = "celery/celery-4.2.1.tar.gz"
pypi = "celery/celery-5.3.6.tar.gz"
license("BSD-3-Clause")
version("5.3.6", sha256="870cc71d737c0200c397290d730344cc991d13a057534353d124c9380267aab9")
version("5.2.3", sha256="e2cd41667ad97d4f6a2f4672d1c6a6ebada194c619253058b5f23704aaadaa82")
version("5.0.0", sha256="313930fddde703d8e37029a304bf91429cd11aeef63c57de6daca9d958e1f255")
version("4.4.7", sha256="d220b13a8ed57c78149acf82c006785356071844afe0b27012a4991d44026f9f")
@@ -53,24 +54,34 @@ class PyCelery(PythonPackage):
depends_on("python@2.7:2.8,3.4:", type=("build", "run"))
depends_on("py-setuptools", type=("build", "run"))
depends_on("py-setuptools@59.1.1:59.6", type=("build", "run"), when="@5.2.3:")
depends_on("py-setuptools@59.1.1:59.6", type=("build", "run"), when="@5.2.3:5.2.4")
depends_on("py-redis@3.2.0:", when="+redis", type=("build", "run"))
depends_on("py-redis@3.4.1:3,4.0.2:", when="@5.2.3:+redis", type=("build", "run"))
depends_on("py-sqlalchemy", when="+sqlalchemy", type=("build", "run"))
depends_on("py-click@7.0:7", when="@5.0.0:5.0", type=("build", "run"))
depends_on("py-click@8.0.3:8", when="@5.2.0:", type=("build", "run"))
depends_on("py-click@8.0.3:8", when="@5.2", type=("build", "run"))
depends_on("py-click@8.1.2:", when="@5.3:", type=("build", "run"))
depends_on("py-click-didyoumean@0.0.3:", when="@5.0.0:5", type=("build", "run"))
depends_on("py-click-plugins@1.1.1:", when="@5.0.3:", type=("build", "run"))
depends_on("py-click-repl@:0.1.6", when="@5.0.0:5.0", type=("build", "run"))
depends_on("py-click-repl@0.2.0:", when="@5.2.0:", type=("build", "run"))
depends_on("py-pytz@2019.3:", type=("build", "run"))
depends_on("py-pytz@2021.3:", type=("build", "run"), when="@5.2.3")
depends_on("py-billiard@3.6.3.0:3", type=("build", "run"))
depends_on("py-billiard@3.6.4.0:3", type=("build", "run"), when="@5.2.3")
depends_on("py-pytz@2019.3:", type=("build", "run"), when="@:5.2")
depends_on("py-pytz@2021.3:", type=("build", "run"), when="@5.2.3:5.2.99")
depends_on("py-billiard@3.6.3", type=("build", "run"), when="@:5.0.99")
depends_on("py-billiard@3.6.4.0:3.99", type=("build", "run"), when="@5.1.0:5.2")
depends_on("py-billiard@4.2.0:5.0", type=("build", "run"), when="@5.3.0:")
depends_on("py-kombu@4.6.11", when="@4.3.0:4", type=("build", "run"))
depends_on("py-kombu@5.0.0:", when="@5.0.0:5.0", type=("build", "run"))
depends_on("py-kombu@5.2.3:5", when="@5.2.0:5.2", type=("build", "run"))
depends_on("py-kombu@5.0.0:", when="@5.0", type=("build", "run"))
depends_on("py-kombu@5.1.0:", when="@5.1", type=("build", "run"))
depends_on("py-kombu@5.2.1", when="@5.2.0", type=("build", "run"))
depends_on("py-kombu@5.2.2", when="@5.2.1:5.2.2", type=("build", "run"))
depends_on("py-kombu@5.2.3", when="@5.2.3:5.2.99", type=("build", "run"))
depends_on("py-kombu@5.3.0", when="@5.3.0", type=("build", "run"))
depends_on("py-kombu@5.3.1", when="@5.3.1", type=("build", "run"))
depends_on("py-kombu@5.3.2", when="@5.3.4", type=("build", "run"))
depends_on("py-kombu@5.3.3", when="@5.3.5", type=("build", "run"))
depends_on("py-kombu@5.3.4:5.3.5", when="@5.3.6", type=("build", "run"))
depends_on("py-vine@1.3.0", when="@4.3.0:4", type=("build", "run"))
depends_on("py-vine@5.0.0:5", when="@5.0.0:5", type=("build", "run"))


@@ -20,22 +20,32 @@ class PyDeephyper(PythonPackage):
license("BSD-3-Clause")
version("master", branch="master")
version("0.6.0", sha256="cda2dd7c74bdca4203d9cd637c4f441595f77bae6d77ef8e4a056b005357de34")
version("0.4.2", sha256="ee1811a22b08eff3c9098f63fbbb37f7c8703e2f878f2bdf2ec35a978512867f")
depends_on("python@3.7:3.9", type=("build", "run"))
depends_on("python@3.7:3.11", type=("build", "run"), when="@0.6.0")
depends_on("py-setuptools@42:", type="build", when="@0.6.0")
depends_on("py-setuptools@40:49.1", type="build")
depends_on("py-wheel@0.36.2", type="build")
depends_on("py-cython@0.29.24:2", type="build")
depends_on("py-cython@0.29.24:", type="build", when="@0.6.0")
depends_on("py-cython@0.29.24:2", type="build", when="@0.4.2")
depends_on("py-configspace@0.4.20:", type=("build", "run"))
depends_on("py-dm-tree", type=("build", "run"))
depends_on("py-jinja2@:3.0", type=("build", "run"))
depends_on("py-jinja2@:3.1", type=("build", "run"), when="@0.6.0")
depends_on("py-jinja2@:3.0", type=("build", "run"), when="@0.4.2")
depends_on("py-numpy@1.20:", type=("build", "run"), when="@0.6.0")
depends_on("py-numpy", type=("build", "run"))
depends_on("py-pandas@0.24.2:", type=("build", "run"))
depends_on("py-packaging", type=("build", "run"))
depends_on(
"py-packaging@20.5:", type=("build", "run"), when="@0.6.0 target=aarch64: platform=darwin"
)
depends_on("py-scikit-learn@0.23.1:", type=("build", "run"))
depends_on("py-scipy@1.7:", type=("build", "run"), when="@0.6.0")
depends_on("py-scipy@0.19.1:", type=("build", "run"))
depends_on("py-tqdm@4.64.0:", type=("build", "run"))
depends_on("py-pyyaml", type=("build", "run"))
depends_on("py-tinydb", type=("build", "run"))
depends_on("py-tinydb", type=("build", "run"), when="@0.4.2")


@@ -22,11 +22,31 @@ class PyJax(PythonPackage):
pypi = "jax/jax-0.2.25.tar.gz"
license("Apache-2.0")
maintainers("adamjstewart")
maintainers("adamjstewart", "jonas-eschle")
version("0.4.26", sha256="2cce025d0a279ec630d550524749bc8efe25d2ff47240d2a7d4cfbc5090c5383")
version("0.4.25", sha256="a8ee189c782de2b7b2ffb64a8916da380b882a617e2769aa429b71d79747b982")
version("0.4.24", sha256="4a6b6fd026ddd22653c7fa2fac1904c3de2dbe845b61ede08af9a5cc709662ae")
version("0.4.23", sha256="2a229a5a758d1b803891b2eaed329723f6b15b4258b14dc0ccb1498c84963685")
version("0.4.22", sha256="801434dda6e14f82a45fff753969a33281ab22fb2a50fe801b651390321057ba")
version("0.4.21", sha256="c97fd0d2751d6e1eb15aa2052ff7cfdc129f8fafc2c14cd779720658926a587b")
version("0.4.20", sha256="ea96a763a8b1a9374639d1159ab4de163461d01cd022f67c34c09581b71ed2ac")
version("0.4.19", sha256="29f87f9a50964d3ca5eeb2973de3462f0e8b4eca6d46027894a0e9a903420601")
version("0.4.18", sha256="776cf33890100803e98f45f9af10aa727271c6993d4e766c069118733c928132")
version("0.4.17", sha256="d7508a69e87835f534cb07a2f21d79cc1cb8c4cfdcf7fb010927267ef7355f1d")
version("0.4.16", sha256="e2ca82c9bf973c2c1c01f5340a583692b31f277aa3abd0544229c1fe5fa44b02")
version("0.4.15", sha256="2aa123ccef591e355dea94a6e714b6559f8e1d6368a576a223f97d031ece0d15")
version("0.4.14", sha256="18fed3881f26e8b13c8cb46eeeea3dba9eb4d48e3714d8e8f2304dd6e237083d")
version("0.4.13", sha256="03bfe6749dfe647f16f15f6616638adae6c4a7ca7167c75c21961ecfd3a3baaa")
version("0.4.12", sha256="d2de9a2388ffe002f16506d3ad1cc6e34d7536b98948e49c7e05bbcfe8e57998")
version("0.4.11", sha256="8b1cd443b698339df8d8807578ee141e5b67e36125b3945b146f600177d60d79")
version("0.4.10", sha256="1bf0f2720f778f2937301a16a4d5cd3497f13a4d6c970c24a88918a81816a888")
version("0.4.9", sha256="1ed135cd08f48e4baf10f6eafdb4a4cdae781f9052b5838c09c91a9f4fa75f09")
version("0.4.8", sha256="08116481f7336db16c24812bfb5e6f9786915f4c2f6ff4028331fa69e7535202")
version("0.4.7", sha256="5e7002d74db25f97c99b979d4ba1233b1ef26e1597e5fc468ad11d1c8a9dc4f8")
version("0.4.6", sha256="d06ea8fba4ed315ec55110396058cb48c8edb2ab0b412f28c8a123beee9e58ab")
version("0.4.5", sha256="1633e56d34b18ddfa7d2a216ce214fa6fa712d36552532aaa71da416aede7268")
version("0.4.4", sha256="39b07e07343ed7c74492ee5e75db77456d3afdd038a322671f09fc748f6392cb")
version("0.4.3", sha256="d43f08f940aa30eb339965cfb3d6bee2296537b0dc2f0c65ccae3009279529ae")
version(
"0.3.23",
@@ -58,7 +78,33 @@ class PyJax(PythonPackage):
# See jax/_src/lib/__init__.py
# https://github.com/google/jax/commit/8be057de1f50756fe7522f7e98b2f30fad56f7e4
for v in ["0.4.25", "0.4.23", "0.4.16", "0.4.3", "0.3.23"]:
for v in [
"0.4.26",
"0.4.25",
"0.4.24",
"0.4.23",
"0.4.22",
"0.4.21",
"0.4.20",
"0.4.19",
"0.4.18",
"0.4.17",
"0.4.16",
"0.4.15",
"0.4.14",
"0.4.13",
"0.4.12",
"0.4.11",
"0.4.10",
"0.4.9",
"0.4.8",
"0.4.7",
"0.4.6",
"0.4.5",
"0.4.4",
"0.4.3",
"0.3.23",
]:
depends_on(f"py-jaxlib@:{v}", when=f"@{v}", type=("build", "run"))
# See _minimum_jaxlib_version in jax/version.py


@@ -20,9 +20,19 @@ class PyJaxlib(PythonPackage, CudaPackage):
license("Apache-2.0")
maintainers("adamjstewart")
version("0.4.26", sha256="ddc14da1eaa34f23430d40ad9b9585088575cac439a2fa1c6833a247e1b221fd")
version("0.4.25", sha256="fc1197c401924942eb14185a61688d0c476e3e81ff71f9dc95e620b57c06eec8")
version("0.4.24", sha256="c4e6963c2c36f634a9a1765e476a1ed4e6c4a7954465ebf72e29f344c28ddc28")
version("0.4.23", sha256="e4c06d62ba54becffd91abc862627b8b11b79c5a77366af8843b819665b6d568")
version("0.4.21", sha256="8d57f66d00b9c0b824b1eff84adda5b765a412b3f316ef7c773632d1edbf9477")
version("0.4.20", sha256="058410d2bc12f7562c7b01e0c8cd587cb68059c12f78bc945055e5ddc445f5fd")
version("0.4.19", sha256="51242b217a1f82474e42d24f09ed5dedff951eeb4579c6e49e706d1adfd6949d")
version("0.4.16", sha256="85c8bc050abe0a2cf62e8cfc7edb4904dd3807924b5714ec6277f291c576b5ca")
version("0.4.14", sha256="9f309476a8f6337717b059b8d10b5859b4134c30cf8f1220bb70379b5e2744a4")
version("0.4.11", sha256="bdfc45f33970beba5caf28d061668a4863f05994deea26791db50ea605fc2e36")
version("0.4.7", sha256="0578d5dd5035b5225cadb6a62ca5f93dd76b70292268502fc01a0fd9ca7001d0")
version("0.4.6", sha256="2c9bf8962815bc54ef524e33dc8eda9d165d379fe87e0df210f316adead27787")
version("0.4.4", sha256="881f402c7983b56b185e182d5315dd64c9f5320be96213d0415996ece1826806")
version("0.4.3", sha256="2104735dc22be2b105e5517bd5bc6ae97f40e8e9e54928cac1585c6112a3d910")
version(
"0.3.22",
@@ -40,9 +50,12 @@ class PyJaxlib(PythonPackage, CudaPackage):
# build/build.py
depends_on("py-build", when="@0.4.14:", type="build")
# Based on PyPI wheels
depends_on("python@3.9:3.12", when="@0.4.17:", type=("build", "run"))
depends_on("python@3.9:3.11", when="@0.4.14:0.4.16", type=("build", "run"))
depends_on("python@3.8:3.11", when="@0.4.6:0.4.13", type=("build", "run"))
# jaxlib/setup.py
depends_on("python@3.9:", when="@0.4.14:", type=("build", "run"))
depends_on("python@3.8:", when="@0.4:", type=("build", "run"))
depends_on("py-setuptools", type="build")
depends_on("py-scipy@1.9:", when="@0.4.19:", type=("build", "run"))
depends_on("py-scipy@1.7:", when="@0.4.7:", type=("build", "run"))
@@ -63,12 +76,16 @@ class PyJaxlib(PythonPackage, CudaPackage):
depends_on("bazel@4.2.1", when="@0.1.75:0.1.76", type="build")
depends_on("bazel@4.1.0", when="@0.1.70:0.1.74", type="build")
# README.md
depends_on("cuda@11.4:", when="@0.4:+cuda")
# jaxlib/setup.py
depends_on("cuda@12.1.105:", when="@0.4.26:+cuda")
depends_on("cuda@11.8:", when="@0.4.11:+cuda")
depends_on("cuda@11.4:", when="@0.4.0:0.4.7+cuda")
depends_on("cuda@11.1:", when="@0.3+cuda")
# https://github.com/google/jax/issues/12614
depends_on("cuda@11.1:11.7.0", when="@0.1+cuda")
depends_on("cudnn@8.2:", when="@0.4:+cuda")
depends_on("cudnn@8.8:", when="@0.4.11:+cuda")
depends_on("cudnn@8.2:", when="@0.4:0.4.7+cuda")
depends_on("cudnn@8.0.5:", when="+cuda")
# Historical dependencies
@@ -83,7 +100,7 @@ class PyJaxlib(PythonPackage, CudaPackage):
)
# https://github.com/google/jax/issues/19992
conflicts("@0.4.16:0.4.25", when="target=ppc64le:")
conflicts("@0.4.4:", when="target=ppc64le:")
def patch(self):
self.tmp_path = tempfile.mkdtemp(prefix="spack")


@@ -13,7 +13,17 @@ class PyKombu(PythonPackage):
license("BSD-3-Clause")
version("5.3.5", sha256="30e470f1a6b49c70dc6f6d13c3e4cc4e178aa6c469ceb6bcd55645385fc84b93")
version("5.3.4", sha256="0bb2e278644d11dea6272c17974a3dbb9688a949f3bb60aeb5b791329c44fadc")
version("5.3.3", sha256="1491df826cfc5178c80f3e89dd6dfba68e484ef334db81070eb5cb8094b31167")
version("5.3.2", sha256="0ba213f630a2cb2772728aef56ac6883dc3a2f13435e10048f6e97d48506dbbd")
version("5.3.1", sha256="fbd7572d92c0bf71c112a6b45163153dea5a7b6a701ec16b568c27d0fd2370f2")
version("5.3.0", sha256="d084ec1f96f7a7c37ba9e816823bdbc08f0fc7ddb3a5be555805e692102297d8")
version("5.2.4", sha256="37cee3ee725f94ea8bb173eaab7c1760203ea53bbebae226328600f9d2799610")
version("5.2.3", sha256="81a90c1de97e08d3db37dbf163eaaf667445e1068c98bfd89f051a40e9f6dbbd")
version("5.2.2", sha256="0f5d0763fb916808f617b886697b2be28e6bc35026f08e679697fc814b48a608")
version("5.2.1", sha256="f262a2adc71b53e5b7dad4933bbdee65d8766ca4df6a9043b13edaad2144aaec")
version("5.1.0", sha256="01481d99f4606f6939cdc9b637264ed353ee9e3e4f62cfb582324142c41a572d")
version("5.0.2", sha256="f4965fba0a4718d47d470beeb5d6446e3357a62402b16c510b6a2f251e05ac3c")
version("4.6.11", sha256="ca1b45faac8c0b18493d02a8571792f3c40291cf2bcf1f55afed3d8f3aa7ba74")
version("4.6.6", sha256="1760b54b1d15a547c9a26d3598a1c8cdaf2436386ac1f5561934bc8a3cbbbd86")
@@ -26,15 +36,19 @@ class PyKombu(PythonPackage):
variant("redis", default=False, description="Use redis transport")
depends_on("py-setuptools", type="build")
# "pytz>dev" in tests_require: setuptools parser changed in v60 and errors.
depends_on("py-setuptools@:59", when="@4.6:5.2", type="build")
depends_on("py-amqp@2.4", when="@4.3.0:4.5.0", type=("build", "run"))
depends_on("py-amqp@2.5.0", when="@4.6.0:4.6.3", type=("build", "run"))
depends_on("py-amqp@2.5.1", when="@4.6.4:4.6.5", type=("build", "run"))
depends_on("py-amqp@2.5.2:2.5.99", when="@4.6.6:4.6.8", type=("build", "run"))
depends_on("py-amqp@2.6.0:2.99", when="@4.6.9:5.0.1", type=("build", "run"))
depends_on("py-amqp@5.0.0:5.0.5", when="@5.0.2:5.0.99", type=("build", "run"))
depends_on("py-amqp@5.0.6:5.0.8", when="@5.1.0:5.2.2", type=("build", "run"))
depends_on("py-amqp@5.0.9:5.1.0", when="@5.2.3:5.2.4", type=("build", "run"))
depends_on("py-amqp@5.1.1:5.1.99", when="@5.3.0:5.3.5", type=("build", "run"))
depends_on("py-amqp@2.5.2:2.5", when="@:4.6.6", type=("build", "run"))
depends_on("py-amqp@2.6.0:2.6", when="@4.6.7:4", type=("build", "run"))
depends_on("py-amqp@5.0.0:5", when="@5.0.0:5.0.2", type=("build", "run"))
depends_on("py-amqp@5.0.9:5.0", when="@5.2.3", type=("build", "run"))
depends_on("py-vine", when="@5.1.0:", type=("build", "run"))
depends_on("py-importlib-metadata@0.18:", type=("build", "run"), when="^python@:3.7")
depends_on("py-cached-property", type=("build", "run"), when="^python@:3.7")
depends_on("py-redis@3.4.1:3,4.0.2:", when="+redis", type=("build", "run"))
depends_on("py-backports-zoneinfo@0.2.1:", when="^python@:3.8", type=("build", "run"))


@@ -2,8 +2,8 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import sys
from spack.package import *
@@ -44,6 +44,15 @@ class PyPip(Package, PythonExtension):
# Uses collections.MutableMapping
depends_on("python@:3.9", when="@:19.1", type=("build", "run"))
resource(
name="pip-bootstrap",
url="https://bootstrap.pypa.io/pip/zipapp/pip-22.3.1.pyz",
checksum="c9363c70ad91d463f9492a8a2c89f60068f86b0239bd2a6aa77367aab5fefb3e",
when="platform=windows",
placement={"pip-22.3.1.pyz": "pip.pyz"},
expand=False,
)
def url_for_version(self, version):
url = "https://files.pythonhosted.org/packages/{0}/p/pip/pip-{1}-{0}-none-any.whl"
if version >= Version("21"):
@@ -58,7 +67,14 @@ def install(self, spec, prefix):
# itself, see:
# https://discuss.python.org/t/bootstrapping-a-specific-version-of-pip/12306
whl = self.stage.archive_file
args = [os.path.join(whl, "pip")] + std_pip_args + ["--prefix=" + prefix, whl]
args = std_pip_args + ["--prefix=" + prefix, whl]
if sys.platform == "win32":
# On Windows for newer versions of pip, you must bootstrap pip first.
# In order to achieve this, use the pip.pyz zipapp version of pip to
# bootstrap the pip wheel install.
args.insert(0, os.path.join(self.stage.source_path, "pip.pyz"))
else:
args.insert(0, os.path.join(whl, "pip"))
python(*args)
def setup_dependent_package(self, module, dependent_spec):
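
The branch added above changes only the first element of the `python` invocation: on Windows the downloaded `pip.pyz` zipapp drives the install, since newer pip releases cannot install their own wheel there, while on other platforms the wheel's bundled `pip` package is used directly. A self-contained sketch of that argument assembly, with placeholder paths and a simplified stand-in for `std_pip_args`:

import os
import sys

def pip_install_args(wheel_path, pyz_dir, prefix, std_pip_args=("install", "--no-deps")):
    # Common tail: standard pip arguments, the install prefix, and the wheel itself.
    args = list(std_pip_args) + ["--prefix=" + prefix, wheel_path]
    if sys.platform == "win32":
        # Bootstrap with the pip.pyz zipapp fetched as a Spack resource.
        args.insert(0, os.path.join(pyz_dir, "pip.pyz"))
    else:
        # Use the pip package shipped inside the wheel being installed.
        args.insert(0, os.path.join(wheel_path, "pip"))
    return args

print(pip_install_args("/tmp/pip-23.0-py3-none-any.whl", "/tmp/stage", "/opt/prefix"))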


@@ -52,3 +52,8 @@ class PyPymatgen(PythonPackage):
depends_on("py-uncertainties@3.1.4:", when="@2021.1.1:", type=("build", "run"))
depends_on("py-pybtex", when="@2022.1.9:", type=("build", "run"))
depends_on("py-tqdm", when="@2022.1.9:", type=("build", "run"))
# <<< manual changes
# https://github.com/materialsproject/pymatgen/commit/29b5b909e109cb04d4b118d0de5b3929819b9378
depends_on("py-cython@:2", when="@:2023.7.16", type="build")
# manual changes >>>


@@ -24,3 +24,8 @@ class PyTatsu(PythonPackage):
depends_on("py-setuptools", type="build")
# optional dependency, otherwise falls back to standard implementation
depends_on("py-regex@2018.8:", type=("build", "run"), when="+future_regex")
# <<< manual changes
# https://github.com/neogeny/TatSu/commit/a4fd84a2785fb0820ed65fe80ebd768458643b66
depends_on("python@:3.9", type=("build", "run"), when="@:4")
# manual changes >>>


@@ -19,5 +19,6 @@ class PyTriton(PythonPackage):
depends_on("py-setuptools@40.8:", type="build")
depends_on("cmake@3.18:", type="build")
depends_on("py-filelock", type=("build", "run"))
depends_on("zlib-api", type="link")
build_directory = "python"


@@ -18,15 +18,17 @@ class PyZfit(PythonPackage):
maintainers("jonas-eschle")
license("BSD-3-Clause", checked_by="jonas-eschle")
# TODO: reactivate once TF 2.15 is ready https://github.com/spack/spack/pull/41069
# version("0.18.1", sha256="fbc6b3a636d8dc74fb2e69dfec5855f534c4583ec18efac9e9107ad45b18eb43")
# version("0.18.0", sha256="21d9479480f74945c67707b715780693bd4e94062c551bf41fe04a2eddb47fab")
tags = ["likelihood", "statistics", "inference", "fitting", "hep"]
version("0.18.2", sha256="099b111e135937966b4c6342c7738731f112aea33e1b9f4a9785d2eac9e530f1")
version("0.18.1", sha256="fbc6b3a636d8dc74fb2e69dfec5855f534c4583ec18efac9e9107ad45b18eb43")
version("0.18.0", sha256="21d9479480f74945c67707b715780693bd4e94062c551bf41fe04a2eddb47fab")
version("0.17.0", sha256="cd60dfc360c82666af4e8dddd78edb0ab95a095b9dd0868457f0981dc03afa5a")
version("0.16.0", sha256="b3b170af23b61d7e265d6fb1bab1d052003f3fb41b3c537527cc1e5a1066dc10")
version("0.15.5", sha256="00a1138429e8a7f830c9e229b9c0bcd6071b95dadd8c87eb81191079fb679225")
version("0.14.1", sha256="66d1e349403f1d6c6350138d0f2b422046bcbdfb34fd95453dadae29a8b0c98a")
depends_on("python@3.8:3.11", type=("build", "run"))
depends_on("python@3.9:3.11", type=("build", "run"))
depends_on("py-setuptools@42:", type="build")
depends_on("py-setuptools-scm-git-archive", type="build")
depends_on("py-setuptools-scm@3.4:+toml", type="build")
@@ -37,9 +39,8 @@ class PyZfit(PythonPackage):
# TODO: remove "build" once fixed in spack that tests need "run", not "build"
with default_args(type=("build", "run")):
# TODO: reactivate once TF 2.15 is ready https://github.com/spack/spack/pull/41069
# depends_on("py-tensorflow@2.15", type=("run"), when="@0.18")
# depends_on("py-tensorflow-probability@0.23", type=("run"), when="@0.18")
depends_on("py-tensorflow@2.15", type=("run"), when="@0.18")
depends_on("py-tensorflow-probability@0.23", type=("run"), when="@0.18")
depends_on("py-tensorflow@2.13", when="@0.15:0.17")
depends_on("py-tensorflow-probability@0.21", when="@0.16:0.17")
@@ -54,7 +55,7 @@ class PyZfit(PythonPackage):
with when("+hs3"):
depends_on("py-asdf")
depends_on("py-attrs", when="@0.15:18.0")
depends_on("py-attrs", when="@0.15:")
depends_on("py-typing-extensions", when="^python@:3.8")
depends_on("py-boost-histogram")
depends_on("py-colorama")


@@ -54,7 +54,7 @@ class Rivet(AutotoolsPackage):
depends_on("hepmc", when="hepmc=2")
depends_on("hepmc3", when="hepmc=3")
depends_on("fastjet")
depends_on("fastjet plugins=cxx")
depends_on("fastjet@3.4.0:", when="@3.1.7:")
depends_on("fjcontrib")
depends_on("python", type=("build", "run"))


@@ -82,6 +82,19 @@ def url_for_version(self, version):
placement="rocclr",
when=f"@{d_version}",
)
# For avx build, the start address of values_ buffer in KernelParameters is not
# correct as it is computed based on 16-byte alignment.
patch(
"https://github.com/ROCm/clr/commit/c4f773db0b4ccbbeed4e3d6c0f6bff299c2aa3f0.patch?full_index=1",
sha256="5bb9b0e08888830ccf3a0a658529fe25f4ee62b5b8890f349bf2cc914236eb2f",
when="@5.7:",
)
patch(
"https://github.com/ROCm/clr/commit/7868876db742fb4d44483892856a66d2993add03.patch?full_index=1",
sha256="7668b2a710baf4cb063e6b00280fb75c4c3e0511575e8298a9c7ae5143f60b33",
when="@5.7:",
)
# Patch to set package installation path for OpenCL.
patch("0001-fix-build-error-rocm-opencl-5.1.0.patch", when="@5.1")


@@ -39,7 +39,11 @@ class Scotch(CMakePackage, MakefilePackage):
build_system(conditional("cmake", when="@7:"), "makefile", default="cmake")
variant("threads", default=True, description="use POSIX Pthreads within Scotch and PT-Scotch")
variant("mpi_thread", default=False, description="use multi-threaded algorithms")
variant(
"mpi_thread",
default=False,
description="use multi-threaded algorithms in conjunction with MPI",
)
variant("mpi", default=True, description="Compile parallel libraries")
variant("compression", default=True, description="May use compressed files")
variant("esmumps", default=False, description="Compile esmumps (needed by mumps)")
@@ -48,6 +52,7 @@ class Scotch(CMakePackage, MakefilePackage):
"metis", default=False, description="Expose vendored METIS/ParMETIS libraries and wrappers"
)
variant("int64", default=False, description="Use int64_t for SCOTCH_Num typedef")
variant("noarch", default=False, description="Unset SPACK_TARGET_ARGS")
variant(
"link_error_lib",
default=False,
@@ -69,6 +74,9 @@ class Scotch(CMakePackage, MakefilePackage):
patch("libscotchmetis-return-6.0.5a.patch", when="@6.0.5a")
patch("libscotch-scotcherr-link-7.0.1.patch", when="@7.0.1 +link_error_lib")
# Avoid OpenMPI segfaults by using MPI_Comm_f2c for the parmetis communicator
patch("parmetis-mpi.patch", when="@6.1.1:7.0.3 +metis ^openmpi")
# Vendored dependency of METIS/ParMETIS conflicts with standard
# installations
conflicts("metis", when="+metis")
@@ -127,6 +135,10 @@ def cmake_args(self):
return args
@when("+noarch")
def setup_build_environment(self, env):
env.unset("SPACK_TARGET_ARGS")
class MakefileBuilder(spack.build_systems.makefile.MakefileBuilder):
build_directory = "src"


@@ -0,0 +1,127 @@
diff --git a/src/libscotchmetis/parmetis_dgraph_order_f.c b/src/libscotchmetis/parmetis_dgraph_order_f.c
index 44c2ede..549a834 100644
--- a/src/libscotchmetis/parmetis_dgraph_order_f.c
+++ b/src/libscotchmetis/parmetis_dgraph_order_f.c
@@ -46,7 +46,7 @@
/** # Version 6.0 : from : 13 sep 2012 **/
/** to : 18 may 2019 **/
/** # Version 7.0 : from : 21 jan 2023 **/
-/** to : 21 jan 2023 **/
+/** to : 30 jun 2023 **/
/** **/
/************************************************************/
@@ -76,11 +76,14 @@ const SCOTCH_Num * const numflag, \
const SCOTCH_Num * const options, \
SCOTCH_Num * const order, \
SCOTCH_Num * const sizes, \
-MPI_Comm * const commptr, \
+const MPI_Fint * const commptr, \
int * const revaptr), \
(vtxdist, xadj, adjncy, numflag, options, order, sizes, commptr, revaptr))
{
- *revaptr = SCOTCH_ParMETIS_V3_NodeND (vtxdist, xadj, adjncy, numflag, options, order, sizes, commptr);
+ MPI_Comm commdat;
+
+ commdat = MPI_Comm_f2c (*commptr);
+ *revaptr = SCOTCH_ParMETIS_V3_NodeND (vtxdist, xadj, adjncy, numflag, options, order, sizes, &commdat);
}
/*******************/
@@ -101,10 +104,13 @@ const SCOTCH_Num * const numflag, \
const SCOTCH_Num * const options, \
SCOTCH_Num * const order, \
SCOTCH_Num * const sizes, \
-MPI_Comm * const commptr), \
+const MPI_Fint * const commptr), \
(vtxdist, xadj, adjncy, numflag, options, order, sizes, commptr))
{
- METISNAMEU (ParMETIS_V3_NodeND) (vtxdist, xadj, adjncy, numflag, options, order, sizes, commptr);
+ MPI_Comm commdat;
+
+ commdat = MPI_Comm_f2c (*commptr);
+ METISNAMEU (ParMETIS_V3_NodeND) (vtxdist, xadj, adjncy, numflag, options, order, sizes, &commdat);
}
#endif /* SCOTCH_METIS_PREFIX */
diff --git a/src/libscotchmetis/parmetis_dgraph_part_f.c b/src/libscotchmetis/parmetis_dgraph_part_f.c
index 2b76818..3bf66af 100644
--- a/src/libscotchmetis/parmetis_dgraph_part_f.c
+++ b/src/libscotchmetis/parmetis_dgraph_part_f.c
@@ -44,7 +44,7 @@
/** # Version 6.0 : from : 13 sep 2012 **/
/** to : 18 may 2019 **/
/** # Version 7.0 : from : 21 jan 2023 **/
-/** to : 21 jan 2023 **/
+/** to : 30 jun 2023 **/
/** **/
/************************************************************/
@@ -81,12 +81,15 @@ const float * const ubvec, \
const SCOTCH_Num * const options, \
SCOTCH_Num * const edgecut, \
SCOTCH_Num * const part, \
-MPI_Comm * const commptr, \
+const MPI_Fint * const commptr, \
int * const revaptr), \
(vtxdist, xadj, adjncy, vwgt, adjwgt, wgtflag, numflag, ncon, nparts, tpwgts, ubvec, options, edgecut, part, commptr, revaptr))
{
+ MPI_Comm commdat;
+
+ commdat = MPI_Comm_f2c (*commptr);
*revaptr = SCOTCH_ParMETIS_V3_PartKway (vtxdist, xadj, adjncy, vwgt, adjwgt, wgtflag, numflag,
- ncon, nparts, tpwgts, ubvec, options, edgecut, part, commptr);
+ ncon, nparts, tpwgts, ubvec, options, edgecut, part, &commdat);
}
/*
@@ -111,12 +114,15 @@ const float * const ubvec, \
const SCOTCH_Num * const options, \
SCOTCH_Num * const edgecut, \
SCOTCH_Num * const part, \
-MPI_Comm * const commptr, \
+const MPI_Fint * const commptr, \
int * const revaptr), \
(vtxdist, xadj, adjncy, vwgt, adjwgt, wgtflag, numflag, ndims, xyz, ncon, nparts, tpwgts, ubvec, options, edgecut, part, commptr, revaptr))
{
+ MPI_Comm commdat;
+
+ commdat = MPI_Comm_f2c (*commptr);
*revaptr = SCOTCH_ParMETIS_V3_PartGeomKway (vtxdist, xadj, adjncy, vwgt, adjwgt, wgtflag, numflag,
- ndims, xyz, ncon, nparts, tpwgts, ubvec, options, edgecut, part, commptr);
+ ndims, xyz, ncon, nparts, tpwgts, ubvec, options, edgecut, part, &commdat);
}
/*******************/
@@ -144,10 +150,13 @@ const float * const ubvec, \
const SCOTCH_Num * const options, \
SCOTCH_Num * const edgecut, \
SCOTCH_Num * const part, \
-MPI_Comm * const commptr), \
+const MPI_Fint * const commptr), \
(vtxdist, xadj, adjncy, vwgt, adjwgt, wgtflag, numflag, ncon, nparts, tpwgts, ubvec, options, edgecut, part, commptr))
{
- METISNAMEU (ParMETIS_V3_PartKway) (vtxdist, xadj, adjncy, vwgt, adjwgt, wgtflag, numflag, ncon, nparts, tpwgts, ubvec, options, edgecut, part, commptr);
+ MPI_Comm commdat;
+
+ commdat = MPI_Comm_f2c (*commptr);
+ METISNAMEU (ParMETIS_V3_PartKway) (vtxdist, xadj, adjncy, vwgt, adjwgt, wgtflag, numflag, ncon, nparts, tpwgts, ubvec, options, edgecut, part, &commdat);
}
/*
@@ -172,10 +181,13 @@ const float * const ubvec,
const SCOTCH_Num * const options, \
SCOTCH_Num * const edgecut, \
SCOTCH_Num * const part, \
-MPI_Comm * const commptr), \
+const MPI_Fint * const commptr), \
(vtxdist, xadj, adjncy, vwgt, adjwgt, wgtflag, numflag, ndims, xyz, ncon, nparts, tpwgts, ubvec, options, edgecut, part, commptr))
{
- METISNAMEU (ParMETIS_V3_PartGeomKway) (vtxdist, xadj, adjncy, vwgt, adjwgt, wgtflag, numflag, ndims, xyz, ncon, nparts, tpwgts, ubvec, options, edgecut, part, commptr);
+ MPI_Comm commdat;
+
+ commdat = MPI_Comm_f2c (*commptr);
+ METISNAMEU (ParMETIS_V3_PartGeomKway) (vtxdist, xadj, adjncy, vwgt, adjwgt, wgtflag, numflag, ndims, xyz, ncon, nparts, tpwgts, ubvec, options, edgecut, part, &commdat);
}
#endif /* SCOTCH_METIS_PREFIX */


@@ -92,6 +92,9 @@ class Slate(CMakePackage, CudaPackage, ROCmPackage):
depends_on("scalapack", when="@:2022.07.00", type="test")
depends_on("python", type="test")
depends_on("hipify-clang", when="@:2021.05.02 +rocm ^hip@5:")
depends_on("comgr", when="+rocm")
depends_on("rocblas", when="+rocm")
depends_on("rocsolver", when="+rocm")
requires("%oneapi", when="+sycl", msg="slate+sycl must be compiled with %oneapi")
@@ -166,9 +169,11 @@ def test(self):
test_dir = join_path(self.test_suite.current_test_cache_dir, "examples", "build")
with working_dir(test_dir, create=True):
cmake_bin = join_path(self.spec["cmake"].prefix.bin, "cmake")
# This package must directly depend on all packages listed here.
# Otherwise, it will not work when some packages are external to spack.
deps = "slate blaspp lapackpp mpi"
if self.spec.satisfies("+rocm"):
deps += " rocblas hip llvm-amdgpu comgr hsa-rocr-dev rocsolver"
deps += " rocblas hip llvm-amdgpu comgr hsa-rocr-dev rocsolver "
prefixes = ";".join([self.spec[x].prefix for x in deps.split()])
self.run_test(cmake_bin, ["-DCMAKE_PREFIX_PATH=" + prefixes, ".."])
make()
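
The comment above explains why the test lists every direct dependency explicitly: their prefixes are joined into a single `CMAKE_PREFIX_PATH` so the example build can locate them even when some packages are external to Spack. A minimal sketch of that join, with made-up prefixes standing in for `self.spec[x].prefix`:

deps = "slate blaspp lapackpp mpi"
prefix_of = {
    "slate": "/opt/spack/slate",
    "blaspp": "/opt/spack/blaspp",
    "lapackpp": "/opt/spack/lapackpp",
    "mpi": "/usr",  # e.g. an external MPI installation
}
cmake_prefix_path = ";".join(prefix_of[name] for name in deps.split())
print("-DCMAKE_PREFIX_PATH=" + cmake_prefix_path)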


@@ -14,6 +14,10 @@ class Tinyxml2(CMakePackage):
license("Zlib")
version("10.0.0", sha256="3bdf15128ba16686e69bce256cc468e76c7b94ff2c7f391cc5ec09e40bff3839")
version("9.0.0", sha256="cc2f1417c308b1f6acc54f88eb70771a0bf65f76282ce5c40e54cfe52952702c")
version("8.0.0", sha256="6ce574fbb46751842d23089485ae73d3db12c1b6639cda7721bf3a7ee862012c")
version("7.0.0", sha256="fa0d1c745d65d4d833e62cb183e23c2034dc7a35ec1a4977e808bdebb9b4fe60")
version("6.2.0", sha256="cdf0c2179ae7a7931dba52463741cf59024198bbf9673bf08415bcb46344110f")
version("4.0.1", sha256="14b38ef25cc136d71339ceeafb4856bb638d486614103453eccd323849267f20")
version("4.0.0", sha256="90add44f06de081047d431c08d7269c25b4030e5fe19c3bc8381c001ce8f258c")
@@ -21,3 +25,13 @@ class Tinyxml2(CMakePackage):
version("2.2.0", sha256="f891224f32e7a06bf279290619cec80cc8ddc335c13696872195ffb87f5bce67")
version("2.1.0", sha256="4bdd6569fdce00460bf9cda0ff5dcff46d342b4595900d849cc46a277a74cce6")
version("2.0.2", sha256="3cc3aa09cd1ce77736f23488c7cb24e65e11daed4e870ddc8d352aa4070c7c74")
variant("shared", default=False, description="Build shared library")
def cmake_args(self):
args = []
if self.spec.satisfies("+shared"):
args.append("-DBUILD_SHARED_LIBS=ON")
return args


@@ -20,6 +20,7 @@ class Tioga(CMakePackage):
# The original TIOGA repo has possibly been abandoned,
# so work on TIOGA has continued in the Exawind project
version("develop", git="https://github.com/Exawind/tioga.git", branch="exawind")
version("1.0.0", git="https://github.com/Exawind/tioga.git", tag="v1.0.0")
version("master", branch="master")
variant("shared", default=sys.platform != "darwin", description="Build shared libraries")


@@ -103,6 +103,13 @@ class Trilinos(CMakePackage, CudaPackage, ROCmPackage):
variant("uvm", default=False, when="@13.2: +cuda", description="Turn on UVM for CUDA build")
variant("wrapper", default=False, description="Use nvcc-wrapper for CUDA build")
# Makes the Teuchos Memory Management classes (Teuchos::RCP, Teuchos::Ptr, Teuchos::Array,
# Teuchos::ArrayView, and Teuchos::ArrayRCP) thread-safe. Requires at least the OMP kokkos
# backend to be enabled. Without it, this option seemingly does nothing
variant(
"threadsafe", default=False, when="+openmp", description="Enable threadsafe in Teuchos"
)
# TPLs (alphabet order)
variant("adios2", default=False, description="Enable ADIOS2")
variant("boost", default=False, description="Compile with Boost")
@@ -633,6 +640,7 @@ def define_enable(suffix, value=None):
define_trilinos_enable(
"EXPLICIT_INSTANTIATION", "explicit_template_instantiation"
),
define_from_variant("Trilinos_ENABLE_THREAD_SAFE", "threadsafe"),
]
)