Compare commits

...

32 Commits

Author SHA1 Message Date
Cameron Smith
44309e3b0b simmodsuite: depends on rpc
Sun RPC support was optional starting with glibc version 2.26 and was removed in version 2.32. On newer operating systems (e.g., RHEL 8), libtirpc is required to provide the XDR symbols.
2022-07-20 09:38:41 -04:00
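The glibc history above boils down to a simple version check. A minimal sketch (the function name and return values are illustrative, not Spack's actual API):

```python
def needs_libtirpc(glibc_version):
    """Decide whether libtirpc is needed for the XDR/Sun RPC symbols.

    Sun RPC was optional starting with glibc 2.26 and removed in 2.32,
    so glibc >= 2.32 always needs libtirpc, while 2.26-2.31 may need it
    depending on how glibc was configured.
    """
    version = tuple(int(part) for part in glibc_version.split("."))
    if version >= (2, 32):
        return "required"
    if version >= (2, 26):
        return "maybe"  # Sun RPC was optional at glibc build time
    return "not-needed"
```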
Harmen Stoppels
e7be8dbbcf spack stage: add missing --fresh and --reuse (#31626) 2022-07-20 13:50:56 +02:00
Harmen Stoppels
e3aca44601 installer.py: require "explicit: True" in the install arguments to mark a package "explicit" (#31646) 2022-07-20 12:09:45 +02:00
Harmen Stoppels
43673fee80 py-pip: 22.x (#31621) 2022-07-18 19:13:42 -06:00
Servesh Muralidharan
918637b943 Update py-xdot to use python_purelib #31616 (#31628)
Issue discussed in https://github.com/spack/spack/issues/31616
2022-07-18 12:56:19 -07:00
Cory Bloor
5c59e9746a rocblas: tighten tensile dependencies (#31414)
* rocblas: make tensile dependencies conditional

* Remove rocm-smi from the rocblas dependency list

rocm-smi was added to the rocblas dependency list because Tensile was a
dependency of rocBLAS and rocm-smi was a dependency of Tensile. However,
that reasoning was not correct.

Tensile is composed of three components:

  1. A command-line tool for generating kernels, benchmarking them, and
     saving the parameters used for generating the best kernels
     (a.k.a. "Solutions") in YAML files.
  2. A build system component that reads YAML solution files, generates
     kernel source files, and invokes the compiler to compile them into
     code object files (*.co, *.hsco). An index of the kernels and their
     associated parameters is also generated and stored in either YAML
     or MessagePack format (TensileLibrary.yaml or TensileLibrary.dat).
  3. A runtime library that will load the TensileLibrary and code object
     files when asked to execute a GEMM and choose the ideal kernel for
     your specific input parameters.

rocBLAS developers use (1) during rocBLAS development. This is when
Tensile depends on rocm-smi. The GPU clock speed and temperature must be
controlled to ensure consistency when doing the solution benchmarking.
That control is provided by rocm-smi. When building rocBLAS, Tensile is
used for (2). However, there is no need for control of the GPU at that
point and rocm-smi is not a dependency. At runtime, the rocBLAS library
uses Tensile for (3), but there is again no dependency on rocm-smi.

tl;dr: rocm-smi is a dependency of the tensile benchmarking tool,
which is not a build dependency or runtime dependency of rocblas.
2022-07-18 12:21:40 -07:00
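The dependency reasoning in this commit message can be summarized as a small table (component names and dependency strings here are illustrative, not Spack specs):

```python
# rocm-smi is only needed by Tensile's benchmarking tool (1), which
# rocBLAS uses neither at build time (2) nor at run time (3).
TENSILE_COMPONENTS = {
    "benchmark-cli": {"rocm-smi"},  # (1) controls GPU clocks/temperature
    "build-system": set(),          # (2) generates and compiles kernels
    "runtime-library": set(),       # (3) loads TensileLibrary at runtime
}

def rocblas_extra_deps(component):
    """Extra dependencies rocBLAS would pick up from a Tensile component."""
    return TENSILE_COMPONENTS[component]
```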
eugeneswalker
b065d69136 e4s ci: add ginkgo +rocm (#31603) 2022-07-18 14:19:31 +02:00
Luke Diorio-Toth
710ec58df0 kraken2: add v2.1.2 (#31613) 2022-07-18 02:45:48 -06:00
Massimiliano Culpo
70be612f43 containerize: fix missing environment activation (#31596) 2022-07-18 10:31:16 +02:00
haralmha
c589f97cf0 lhapdf: add gettext libs explicitly (#31555)
Co-authored-by: Valentin Volkl <valentin.volkl@cern.ch>
2022-07-18 10:25:08 +02:00
Adam J. Stewart
ad4f551c22 py-pyarrow: aarch64 patch no longer applies/required (#31609) 2022-07-18 09:44:58 +02:00
Adam J. Stewart
37134aa331 py-kornia: add v0.6.6 (#31611) 2022-07-18 09:43:41 +02:00
Glenn Johnson
bb1632c99d Fixes for kallisto package (#31617)
This PR contains several fixes for the kallisto package.

- create hdf5 variant as hdf5 is optional beginning with 0.46.2
- provide patch for 0.43 to link against libz
- provide patch for older versions to build against gcc-11 and up
- patch and code to use autoconf-2.70 and up
2022-07-18 09:37:21 +02:00
Jonathon Anderson
117c7cc3db Only hack botocore when needed (#31610)
Newer versions of botocore (>=1.23.47) support the full IOBase
interface, so the hacks added to supplement the missing attributes are
no longer needed. Conditionally disable the hacks if they appear to be
unnecessary based on the class hierarchy found at runtime.
2022-07-18 08:40:13 +02:00
Luke Diorio-Toth
7b95e2f050 py-phydms, py-pypdf2, and py-pyvolve: new packages (#31540)
* py-panaroo: new package

* moving panaroo to branch

* py-phydms: new package

* added dependencies and new packages

* fixed py-pypdf2 dependencies
2022-07-16 17:24:29 -07:00
Luke Diorio-Toth
03c1962252 mizani, plotnine, and pystan: added more versions and variable dependencies (#31541)
* py-panaroo: new package

* moving panaroo to branch

* updated mizani, plotnine, and pystan versions and requirements

* made suggested fixes

* adding more requested fixes

* added new versions of statsmodels and httpstan
2022-07-16 17:23:29 -07:00
FJ-NaokiMatsumura
268ec998c0 py-horovod: add versions 0.22+, fix aarch64 build (#29310)
* py-torch: add version 0.23.0 and fix build on aarch64

* Add newer versions, fix build issues

* Fix tests

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2022-07-16 01:41:42 -06:00
Adam J. Stewart
caee7341dd Qhull: fix RPATH on macOS (#31586) 2022-07-15 16:14:19 -07:00
Adam J. Stewart
ac15ebec76 py-geocube: add v0.3.2 (#31588) 2022-07-15 16:03:11 -07:00
Lucas Frérot
914ce8508a tamaas: new version 2.5.0 (#31501)
* tamaas: new version 2.5.0

* tamaas: do not build/install documentation

* tamaas: hash correction for v2.5.0.post1

Co-authored-by: Tamara Dahlgren <35777542+tldahlgren@users.noreply.github.com>

2022-07-15 10:49:11 -07:00
haralmha
17b6bb9d06 gperftools: Add version 2.10 and add unwind to LDFLAGS (#31574)
* gperftools: Add version 2.10 and add unwind to LDFLAGS

* Make unwind flag conditional on libunwind variant
2022-07-15 10:34:47 -07:00
Harmen Stoppels
6f8d7390f1 Use lexists instead of exists during fetch (#31509) 2022-07-15 18:43:14 +02:00
牟展佑
123c405e11 openssl: add patch for @1.1.1q. fixed issues #31545 (#31575) 2022-07-14 20:29:45 -06:00
Elizabeth Sherrock
6f46345b60 blight, fparser, typing_extensions (#31566)
* Added fparser 0.0.16

* Created package

* Change from GitHub download to pypi

* py-typing-extensions: 4.3.0

* Added py-pdoc3

* Finished adding py-blight

* Fixed typos

* Fixed dependency specs

* Fixed some things after PR review

* Added @woodruffw as blight maintainer

* Removed maintainer comment block from pdoc3
2022-07-14 17:13:47 -06:00
Erik Schnetter
49ae847c6c snappy: Update to version 1.1.9 (#31578) 2022-07-14 16:53:46 -06:00
eugeneswalker
3cb6fd140c update e4s to reflect june status (#31032) 2022-07-14 22:05:57 +00:00
Joseph Snyder
64b41b012c Bug/fix credentials s3 buildcache update (#31391)
* Add connection information to buildcache update command

Ensure that the S3 connection made when updating the contents
of a buildcache uses the extra connection information provided
at mirror creation.

* Add unique help for endpoint URL argument

Fix copy/paste error for endpoint URL help which was the same as
the access token

* Re-work URL checking for S3 mirrors

Because nested bucket URLs would never match the string used
for checking that the mirror is the same, switch the check used.

Sort all mirror URLs by length to have the most specific cases first
and see if the desired URL "starts with" the mirror URL.

* Long line style fixes

Add exceptions for long lines and fix other style errors

* Use format() function to rebuild URL

Use the format() function to rebuild the URL instead of crafting a
formatted string out of known values

* Add early exit for URL checking

When a valid mirror is found, break from the loop
2022-07-14 20:49:46 +00:00
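The "sort by length, then check prefixes" matching described above can be sketched as follows (function name and mirror URLs are illustrative):

```python
def find_matching_mirror(url, mirror_urls):
    """Return the most specific configured mirror URL that `url` falls
    under, or None.

    Sorting longest-first ensures a nested bucket URL such as
    's3://bucket/nested' wins over the shorter 's3://bucket'.
    """
    for mirror in sorted(mirror_urls, key=len, reverse=True):
        if url.startswith(mirror):
            return mirror  # early exit: first (most specific) match wins
    return None
```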
Todd Gamblin
3d0347ddd3 Deprecate blacklist/whitelist in favor of include/exclude (#31569)
For a long time the module configuration has had a few settings that use
`blacklist`/`whitelist` terminology. We've been asked by some of our users to replace
this with more inclusive language. In addition to being non-inclusive, `blacklist` and
`whitelist` are inconsistent with the rest of Spack, which uses `include` and `exclude`
for the same concepts.

- [x] Deprecate `blacklist`, `whitelist`, `blacklist_implicits` and `environment_blacklist`
      in favor of `exclude`, `include`, `exclude_implicits` and `exclude_env_vars` in module
      configuration, to be removed in Spack v0.20.
- [x] Print deprecation warnings if any of the deprecated names are in module config.
- [x] Update tests to test old and new names.
- [x] Update docs.
- [x] Update `spack config update` to fix this automatically, and include a note in the error
      that you can use this command.
2022-07-14 20:42:33 +00:00
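Concretely, a module configuration written with the new names might look like the sketch below (the old spellings are noted in comments; per the commit, `spack config update` performs this translation automatically):

```yaml
modules:
  default:
    tcl:
      include: ['gcc', 'llvm']          # was: whitelist
      exclude: ['%gcc@4.4.7']           # was: blacklist
      exclude_implicits: true           # was: blacklist_implicits
      all:
        filter:
          exclude_env_vars: ['CPATH']   # was: environment_blacklist
```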
Jen Herting
875b032151 [py-tensorflow-hub] full_index=1 caused checksum change (#31585) 2022-07-14 11:47:30 -07:00
snehring
c7e49ea5a0 interproscan: new version 5.56-89.0 (#31565)
* interproscan: new version 5.56-89.0

* interproscan: add maintainer
2022-07-14 09:53:12 -07:00
Adam J. Stewart
a1f07b68a8 py-statsmodels: add v0.13.2 (#31571) 2022-07-14 09:30:43 -07:00
Axel Huebl
c4efeab976 ADIOS2: 2.8.2 (#31564)
Add the latest ADIOS2 release.
2022-07-14 03:17:38 -06:00
73 changed files with 1740 additions and 428 deletions

View File

@@ -308,7 +308,7 @@ the variable ``FOOBAR`` will be unset.
spec constraints are instead evaluated top to bottom.
""""""""""""""""""""""""""""""""""""""""""""
Blacklist or whitelist specific module files
Exclude or include specific module files
""""""""""""""""""""""""""""""""""""""""""""
You can use anonymous specs also to prevent module files from being written or
@@ -322,8 +322,8 @@ your system. If you write a configuration file like:
modules:
default:
tcl:
whitelist: ['gcc', 'llvm'] # Whitelist will have precedence over blacklist
blacklist: ['%gcc@4.4.7'] # Assuming gcc@4.4.7 is the system compiler
include: ['gcc', 'llvm'] # include will have precedence over exclude
exclude: ['%gcc@4.4.7'] # Assuming gcc@4.4.7 is the system compiler
you will prevent the generation of module files for any package that
is compiled with ``gcc@4.4.7``, with the only exception of any ``gcc``
@@ -490,7 +490,7 @@ satisfies a default, Spack will generate the module file in the
appropriate path, and will generate a default symlink to the module
file as well.
.. warning::
.. warning::
If Spack is configured to generate multiple default packages in the
same directory, the last modulefile to be generated will be the
default module.
@@ -589,7 +589,7 @@ Filter out environment modifications
Modifications to certain environment variables in module files are there by
default, for instance because they are generated by prefix inspections.
If you want to prevent modifications to some environment variables, you can
do so by using the environment blacklist:
do so by using the ``exclude_env_vars``:
.. code-block:: yaml
@@ -599,7 +599,7 @@ do so by using the environment blacklist:
all:
filter:
# Exclude changes to any of these variables
environment_blacklist: ['CPATH', 'LIBRARY_PATH']
exclude_env_vars: ['CPATH', 'LIBRARY_PATH']
The configuration above will generate module files that will not contain
modifications to either ``CPATH`` or ``LIBRARY_PATH``.

View File

@@ -618,7 +618,7 @@ def get_buildfile_manifest(spec):
Return a data structure with information about a build, including
text_to_relocate, binary_to_relocate, binary_to_relocate_fullpath
link_to_relocate, and other, which means it doesn't fit any of the previous
checks (and should not be relocated). We blacklist docs (man) and
checks (and should not be relocated). We exclude docs (man) and
metadata (.spack). This can be used to find a particular kind of file
in spack, or to generate the build metadata.
"""
@@ -626,12 +626,12 @@ def get_buildfile_manifest(spec):
"link_to_relocate": [], "other": [],
"binary_to_relocate_fullpath": []}
blacklist = (".spack", "man")
exclude_list = (".spack", "man")
# Do this during tarball creation to save time when tarball unpacked.
# Used by make_package_relative to determine binaries to change.
for root, dirs, files in os.walk(spec.prefix, topdown=True):
dirs[:] = [d for d in dirs if d not in blacklist]
dirs[:] = [d for d in dirs if d not in exclude_list]
# Directories may need to be relocated too.
for directory in dirs:

View File

@@ -403,4 +403,4 @@ def add_s3_connection_args(subparser, add_help):
default=None)
subparser.add_argument(
'--s3-endpoint-url',
help="Access Token to use to connect to this S3 mirror")
help="Endpoint URL to use to connect to this S3 mirror")

View File

@@ -104,9 +104,9 @@ def edit(parser, args):
path = os.path.join(path, name)
if not os.path.exists(path):
files = glob.glob(path + '*')
blacklist = ['.pyc', '~'] # blacklist binaries and backups
exclude_list = ['.pyc', '~'] # exclude binaries and backups
files = list(filter(
lambda x: all(s not in x for s in blacklist), files))
lambda x: all(s not in x for s in exclude_list), files))
if len(files) > 1:
m = 'Multiple files exist with the name {0}.'.format(name)
m += ' Please specify a suffix. Files are:\n\n'

View File

@@ -131,7 +131,7 @@ def check_module_set_name(name):
_missing_modules_warning = (
"Modules have been omitted for one or more specs, either"
" because they were blacklisted or because the spec is"
" because they were excluded or because the spec is"
" associated with a package that is installed upstream and"
" that installation has not generated a module file. Rerun"
" this command with debug output enabled for more details.")
@@ -180,7 +180,7 @@ def loads(module_type, specs, args, out=None):
for spec, mod in modules:
if not mod:
module_output_for_spec = (
'## blacklisted or missing from upstream: {0}'.format(
'## excluded or missing from upstream: {0}'.format(
spec.format()))
else:
d['exclude'] = '## ' if spec.name in exclude_set else ''
@@ -293,8 +293,8 @@ def refresh(module_type, specs, args):
cls(spec, args.module_set_name) for spec in specs
if spack.repo.path.exists(spec.name)]
# Filter blacklisted packages early
writers = [x for x in writers if not x.conf.blacklisted]
# Filter excluded packages early
writers = [x for x in writers if not x.conf.excluded]
# Detect name clashes in module files
file2writer = collections.defaultdict(list)

View File

@@ -24,6 +24,7 @@ def setup_parser(subparser):
subparser.add_argument(
'-p', '--path', dest='path',
help="path to stage package, does not add to spack tree")
arguments.add_concretizer_args(subparser)
def stage(parser, args):

View File

@@ -337,9 +337,7 @@ def fetch(self):
continue
try:
partial_file, save_file = self._fetch_from_url(url)
if save_file and (partial_file is not None):
llnl.util.filesystem.rename(partial_file, save_file)
self._fetch_from_url(url)
break
except FailedDownloadError as e:
errors.append(str(e))
@@ -389,9 +387,7 @@ def _check_headers(self, headers):
@_needs_stage
def _fetch_urllib(self, url):
save_file = None
if self.stage.save_filename:
save_file = self.stage.save_filename
save_file = self.stage.save_filename
tty.msg('Fetching {0}'.format(url))
# Run urllib but grab the mime type from the http headers
@@ -401,16 +397,18 @@ def _fetch_urllib(self, url):
# clean up archive on failure.
if self.archive_file:
os.remove(self.archive_file)
if save_file and os.path.exists(save_file):
if os.path.lexists(save_file):
os.remove(save_file)
msg = 'urllib failed to fetch with error {0}'.format(e)
raise FailedDownloadError(url, msg)
if os.path.lexists(save_file):
os.remove(save_file)
with open(save_file, 'wb') as _open_file:
shutil.copyfileobj(response, _open_file)
self._check_headers(str(headers))
return None, save_file
@_needs_stage
def _fetch_curl(self, url):
@@ -471,7 +469,7 @@ def _fetch_curl(self, url):
if self.archive_file:
os.remove(self.archive_file)
if partial_file and os.path.exists(partial_file):
if partial_file and os.path.lexists(partial_file):
os.remove(partial_file)
if curl.returncode == 22:
@@ -498,7 +496,9 @@ def _fetch_curl(self, url):
"Curl failed with error %d" % curl.returncode)
self._check_headers(headers)
return partial_file, save_file
if save_file and (partial_file is not None):
fs.rename(partial_file, save_file)
@property # type: ignore # decorated properties unsupported in mypy
@_needs_stage
@@ -613,7 +613,7 @@ def fetch(self):
# remove old symlink if one is there.
filename = self.stage.save_filename
if os.path.exists(filename):
if os.path.lexists(filename):
os.remove(filename)
# Symlink to local cached archive.
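The reason `lexists` replaces `exists` throughout this diff: for a dangling symlink, `os.path.exists` follows the link and reports False, so a stale save-file symlink would never be removed. A small self-contained demonstration (assumes a platform with symlink support):

```python
import os
import tempfile

def dangling_symlink_checks():
    """Create a dangling symlink and report (exists, lexists) for it."""
    with tempfile.TemporaryDirectory() as tmp:
        link = os.path.join(tmp, "save-file")
        # Point the link at a file that was never created: dangling.
        os.symlink(os.path.join(tmp, "archive.tar.gz"), link)
        # exists() follows the link; lexists() checks the link itself.
        return os.path.exists(link), os.path.lexists(link)
```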

View File

@@ -2212,7 +2212,8 @@ def flag_installed(self, installed):
@property
def explicit(self):
"""The package was explicitly requested by the user."""
return self.pkg == self.request.pkg
return self.pkg == self.request.pkg and \
self.request.install_args.get('explicit', True)
@property
def key(self):

View File

@@ -54,6 +54,34 @@
import spack.util.spack_yaml as syaml
def get_deprecated(dictionary, name, old_name, default):
"""Get a deprecated property from a ``dict``.
Arguments:
dictionary (dict): dictionary to get a value from.
name (str): New name for the property. If present, supersedes ``old_name``.
old_name (str): Deprecated name for the property. If present, a warning
is printed.
default (object): value to return if neither name is found.
"""
value = default
# always warn if old name is present
if old_name in dictionary:
value = dictionary.get(old_name, value)
main_msg = "`{}:` is deprecated in module config and will be removed in v0.20."
details = (
"Use `{}:` instead. You can run `spack config update` to translate your "
"configuration files automatically."
)
tty.warn(main_msg.format(old_name), details.format(name))
# name overrides old name if present
value = dictionary.get(name, value)
return value
#: config section for this file
def configuration(module_set_name):
config_path = 'modules:%s' % module_set_name
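The lookup order of `get_deprecated` above (warn on the old name, let the new name win) can be exercised standalone; this sketch replaces the Spack-internal `tty.warn` with a plain callback:

```python
def get_deprecated(dictionary, name, old_name, default, warn=lambda msg: None):
    """Fetch `name` from `dictionary`, falling back to the deprecated
    `old_name` (warning if it is present); `name` wins if both exist."""
    value = default
    if old_name in dictionary:
        warn("`{}:` is deprecated, use `{}:` instead".format(old_name, name))
        value = dictionary[old_name]
    return dictionary.get(name, value)
```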
@@ -351,14 +379,14 @@ def get_module(
Retrieve the module file for the given spec if it is available. If the
module is not available, this will raise an exception unless the module
is blacklisted or if the spec is installed upstream.
is excluded or if the spec is installed upstream.
Args:
module_type: the type of module we want to retrieve (e.g. lmod)
spec: refers to the installed package that we want to retrieve a module
for
required: if the module is required but blacklisted, this function will
print a debug message. If a module is missing but not blacklisted,
required: if the module is required but excluded, this function will
print a debug message. If a module is missing but not excluded,
then an exception is raised (regardless of whether it is required)
get_full_path: if ``True``, this returns the full path to the module.
Otherwise, this returns the module name.
@@ -386,13 +414,13 @@ def get_module(
else:
writer = spack.modules.module_types[module_type](spec, module_set_name)
if not os.path.isfile(writer.layout.filename):
if not writer.conf.blacklisted:
if not writer.conf.excluded:
err_msg = "No module available for package {0} at {1}".format(
spec, writer.layout.filename
)
raise ModuleNotFoundError(err_msg)
elif required:
tty.debug("The module configuration has blacklisted {0}: "
tty.debug("The module configuration has excluded {0}: "
"omitting it".format(spec))
else:
return None
@@ -483,26 +511,30 @@ def hash(self):
return None
@property
def blacklisted(self):
"""Returns True if the module has been blacklisted,
False otherwise.
"""
def excluded(self):
"""Returns True if the module has been excluded, False otherwise."""
# A few variables for convenience of writing the method
spec = self.spec
conf = self.module.configuration(self.name)
# Compute the list of whitelist rules that match
wlrules = conf.get('whitelist', [])
whitelist_matches = [x for x in wlrules if spec.satisfies(x)]
# Compute the list of include rules that match
# DEPRECATED: remove 'whitelist' in v0.20
include_rules = get_deprecated(conf, "include", "whitelist", [])
include_matches = [x for x in include_rules if spec.satisfies(x)]
# Compute the list of blacklist rules that match
blrules = conf.get('blacklist', [])
blacklist_matches = [x for x in blrules if spec.satisfies(x)]
# Compute the list of exclude rules that match
# DEPRECATED: remove 'blacklist' in v0.20
exclude_rules = get_deprecated(conf, "exclude", "blacklist", [])
exclude_matches = [x for x in exclude_rules if spec.satisfies(x)]
# Should I blacklist the module because it's implicit?
blacklist_implicits = conf.get('blacklist_implicits')
# Should I exclude the module because it's implicit?
# DEPRECATED: remove 'blacklist_implicits' in v0.20
exclude_implicits = get_deprecated(
conf, "exclude_implicits", "blacklist_implicits", None
)
installed_implicitly = not spec._installed_explicitly()
blacklisted_as_implicit = blacklist_implicits and installed_implicitly
excluded_as_implicit = exclude_implicits and installed_implicitly
def debug_info(line_header, match_list):
if match_list:
@@ -511,15 +543,15 @@ def debug_info(line_header, match_list):
for rule in match_list:
tty.debug('\t\tmatches rule: {0}'.format(rule))
debug_info('WHITELIST', whitelist_matches)
debug_info('BLACKLIST', blacklist_matches)
debug_info('INCLUDE', include_matches)
debug_info('EXCLUDE', exclude_matches)
if blacklisted_as_implicit:
msg = '\tBLACKLISTED_AS_IMPLICIT : {0}'.format(spec.cshort_spec)
if excluded_as_implicit:
msg = '\tEXCLUDED_AS_IMPLICIT : {0}'.format(spec.cshort_spec)
tty.debug(msg)
is_blacklisted = blacklist_matches or blacklisted_as_implicit
if not whitelist_matches and is_blacklisted:
is_excluded = exclude_matches or excluded_as_implicit
if not include_matches and is_excluded:
return True
return False
@@ -544,17 +576,22 @@ def specs_to_prereq(self):
return self._create_list_for('prerequisites')
@property
def environment_blacklist(self):
def exclude_env_vars(self):
"""List of variables that should be left unmodified."""
return self.conf.get('filter', {}).get('environment_blacklist', {})
filter = self.conf.get('filter', {})
# DEPRECATED: remove in v0.20
return get_deprecated(
filter, "exclude_env_vars", "environment_blacklist", {}
)
def _create_list_for(self, what):
whitelist = []
include = []
for item in self.conf[what]:
conf = type(self)(item, self.name)
if not conf.blacklisted:
whitelist.append(item)
return whitelist
if not conf.excluded:
include.append(item)
return include
@property
def verbose(self):
@@ -733,8 +770,8 @@ def environment_modifications(self):
# Modifications required from modules.yaml
env.extend(self.conf.env)
# List of variables that are blacklisted in modules.yaml
blacklist = self.conf.environment_blacklist
# List of variables that are excluded in modules.yaml
exclude = self.conf.exclude_env_vars
# We may have tokens to substitute in environment commands
@@ -758,7 +795,7 @@ def environment_modifications(self):
pass
x.name = str(x.name).replace('-', '_')
return [(type(x).__name__, x) for x in env if x.name not in blacklist]
return [(type(x).__name__, x) for x in env if x.name not in exclude]
@tengine.context_property
def autoload(self):
@@ -831,9 +868,9 @@ def write(self, overwrite=False):
existing file. If False the operation is skipped and we print
a warning to the user.
"""
# Return immediately if the module is blacklisted
if self.conf.blacklisted:
msg = '\tNOT WRITING: {0} [BLACKLISTED]'
# Return immediately if the module is excluded
if self.conf.excluded:
msg = '\tNOT WRITING: {0} [EXCLUDED]'
tty.debug(msg.format(self.spec.cshort_spec))
return

View File

@@ -3,7 +3,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from io import BufferedReader
from io import BufferedReader, IOBase
import six
import six.moves.urllib.error as urllib_error
@@ -23,11 +23,15 @@
# https://github.com/python/cpython/pull/3249
class WrapStream(BufferedReader):
def __init__(self, raw):
raw.readable = lambda: True
raw.writable = lambda: False
raw.seekable = lambda: False
raw.closed = False
raw.flush = lambda: None
# In botocore >=1.23.47, StreamingBody inherits from IOBase, so we
# only add missing attributes in older versions.
# https://github.com/boto/botocore/commit/a624815eabac50442ed7404f3c4f2664cd0aa784
if not isinstance(raw, IOBase):
raw.readable = lambda: True
raw.writable = lambda: False
raw.seekable = lambda: False
raw.closed = False
raw.flush = lambda: None
super(WrapStream, self).__init__(raw)
def detach(self):
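The pattern here — patch the missing file-object attributes only when the wrapped stream does not already inherit them from `IOBase` — can be shown with a minimal stand-in for botocore's `StreamingBody` (the class names below are illustrative):

```python
from io import BytesIO, IOBase

class FakeOldStreamingBody:
    """Stand-in for a pre-1.23.47 botocore StreamingBody: it has read()
    but is not an IOBase subclass."""
    def __init__(self, data):
        self._data = BytesIO(data)
    def read(self, size=-1):
        return self._data.read(size)

def add_missing_io_attrs(raw):
    """Patch IOBase-style attributes onto `raw` only when needed."""
    if not isinstance(raw, IOBase):
        raw.readable = lambda: True
        raw.writable = lambda: False
        raw.seekable = lambda: False
        raw.closed = False
        raw.flush = lambda: None
    return raw
```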

View File

@@ -18,9 +18,13 @@
#:
#: THIS NEEDS TO BE UPDATED FOR EVERY NEW KEYWORD THAT
#: IS ADDED IMMEDIATELY BELOW THE MODULE TYPE ATTRIBUTE
spec_regex = r'(?!hierarchy|core_specs|verbose|hash_length|whitelist|' \
r'blacklist|projections|naming_scheme|core_compilers|all|' \
r'defaults)(^\w[\w-]*)'
spec_regex = (
r'(?!hierarchy|core_specs|verbose|hash_length|defaults|'
r'whitelist|blacklist|' # DEPRECATED: remove in 0.20.
r'include|exclude|' # use these more inclusive/consistent options
r'projections|naming_scheme|core_compilers|all)(^\w[\w-]*)'
)
#: Matches a valid name for a module set
valid_module_set_name = r'^(?!arch_folder$|lmod$|roots$|enable$|prefix_inspections$|'\
@@ -50,12 +54,21 @@
'default': {},
'additionalProperties': False,
'properties': {
# DEPRECATED: remove in 0.20.
'environment_blacklist': {
'type': 'array',
'default': [],
'items': {
'type': 'string'
}
},
# use exclude_env_vars instead
'exclude_env_vars': {
'type': 'array',
'default': [],
'items': {
'type': 'string'
}
}
}
},
@@ -95,12 +108,20 @@
'minimum': 0,
'default': 7
},
# DEPRECATED: remove in 0.20.
'whitelist': array_of_strings,
'blacklist': array_of_strings,
'blacklist_implicits': {
'type': 'boolean',
'default': False
},
# whitelist/blacklist have been replaced with include/exclude
'include': array_of_strings,
'exclude': array_of_strings,
'exclude_implicits': {
'type': 'boolean',
'default': False
},
'defaults': array_of_strings,
'naming_scheme': {
'type': 'string' # Can we be more specific here?
@@ -224,14 +245,51 @@ def deprecation_msg_default_module_set(instance, props):
}
def update(data):
"""Update the data in place to remove deprecated properties.
# deprecated keys and their replacements
exclude_include_translations = {
"whitelist": "include",
"blacklist": "exclude",
"blacklist_implicits": "exclude_implicits",
"environment_blacklist": "exclude_env_vars",
}
Args:
data (dict): dictionary to be updated
Returns:
True if data was changed, False otherwise
def update_keys(data, key_translations):
"""Change blacklist/whitelist to exclude/include.
Arguments:
data (dict): data from a valid modules configuration.
key_translations (dict): A dictionary of keys to translate to
their respective values.
Return:
(bool) whether anything was changed in data
"""
changed = False
if isinstance(data, dict):
keys = list(data.keys())
for key in keys:
value = data[key]
translation = key_translations.get(key)
if translation:
data[translation] = data.pop(key)
changed = True
changed |= update_keys(value, key_translations)
elif isinstance(data, list):
for elt in data:
changed |= update_keys(elt, key_translations)
return changed
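The recursive key translation above can be exercised on its own (the function is reproduced here minus Spack's surrounding module):

```python
def update_keys(data, key_translations):
    """Recursively rename dict keys per key_translations; return True
    if anything changed."""
    changed = False
    if isinstance(data, dict):
        for key in list(data.keys()):
            value = data[key]
            translation = key_translations.get(key)
            if translation:
                data[translation] = data.pop(key)
                changed = True
            changed |= update_keys(value, key_translations)
    elif isinstance(data, list):
        for elt in data:
            changed |= update_keys(elt, key_translations)
    return changed
```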
def update_default_module_set(data):
"""Update module configuration to move top-level keys inside default module set.
This change was introduced in v0.18 (see 99083f1706 or #28659).
"""
changed = False
@@ -258,3 +316,21 @@ def update(data):
data['default'] = default
return changed
def update(data):
"""Update the data in place to remove deprecated properties.
Args:
data (dict): dictionary to be updated
Returns:
True if data was changed, False otherwise
"""
# deprecated top-level module config (everything in default module set)
changed = update_default_module_set(data)
# translate blacklist/whitelist to exclude/include
changed |= update_keys(data, exclude_include_translations)
return changed

View File

@@ -149,16 +149,20 @@ def test_find_recursive():
@pytest.mark.db
def test_find_recursive_blacklisted(database, module_configuration):
module_configuration('blacklist')
# DEPRECATED: remove blacklist in v0.20
@pytest.mark.parametrize("config_name", ["exclude", "blacklist"])
def test_find_recursive_excluded(database, module_configuration, config_name):
module_configuration(config_name)
module('lmod', 'refresh', '-y', '--delete-tree')
module('lmod', 'find', '-r', 'mpileaks ^mpich')
@pytest.mark.db
def test_loads_recursive_blacklisted(database, module_configuration):
module_configuration('blacklist')
# DEPRECATED: remove blacklist in v0.20
@pytest.mark.parametrize("config_name", ["exclude", "blacklist"])
def test_loads_recursive_excluded(database, module_configuration, config_name):
module_configuration(config_name)
module('lmod', 'refresh', '-y', '--delete-tree')
output = module('lmod', 'loads', '-r', 'mpileaks ^mpich')
@@ -166,7 +170,7 @@ def test_loads_recursive_blacklisted(database, module_configuration):
assert any(re.match(r'[^#]*module load.*mpileaks', ln) for ln in lines)
assert not any(re.match(r'[^#]module load.*callpath', ln) for ln in lines)
assert any(re.match(r'## blacklisted or missing.*callpath', ln)
assert any(re.match(r'## excluded or missing.*callpath', ln)
for ln in lines)
# TODO: currently there is no way to separate stdout and stderr when

View File

@@ -8,6 +8,7 @@
import pytest
import spack.config
import spack.environment as ev
import spack.repo
from spack.main import SpackCommand
@@ -122,3 +123,13 @@ def fake_stage(pkg, mirror_only=False):
# assert that all were staged
assert len(expected) == 0
@pytest.mark.disable_clean_stage_check
def test_concretizer_arguments(mock_packages, mock_fetch):
"""Make sure stage also has --reuse and --fresh flags."""
stage("--reuse", "trivial-install-test-package")
assert spack.config.get("concretizer:reuse", None) is True
stage("--fresh", "trivial-install-test-package")
assert spack.config.get("concretizer:reuse", None) is False

View File

@@ -1003,7 +1003,7 @@ def __call__(self, filename):
@pytest.fixture()
def module_configuration(monkeypatch, request):
def module_configuration(monkeypatch, request, mutable_config):
"""Reads the module configuration file from the mock ones prepared
for tests and monkeypatches the right classes to hook it in.
"""
@@ -1018,6 +1018,8 @@ def module_configuration(monkeypatch, request):
spack.paths.test_path, 'data', 'modules', writer_key
)
# ConfigUpdate, when called, will modify configuration, so we need to use
# the mutable_config fixture
return ConfigUpdate(root_for_conf, writer_mod, writer_key, monkeypatch)

View File

@@ -10,7 +10,7 @@ lmod:
all:
autoload: none
filter:
environment_blacklist:
exclude_env_vars:
- CMAKE_PREFIX_PATH
environment:
set:

View File

@@ -1,3 +1,5 @@
# DEPRECATED: remove this in v0.20
# See `exclude.yaml` for the new syntax
enable:
- lmod
lmod:

View File

@@ -0,0 +1,30 @@
# DEPRECATED: remove this in v0.20
# See `alter_environment.yaml` for the new syntax
enable:
- lmod
lmod:
core_compilers:
- 'clang@3.3'
hierarchy:
- mpi
all:
autoload: none
filter:
environment_blacklist:
- CMAKE_PREFIX_PATH
environment:
set:
'{name}_ROOT': '{prefix}'
'platform=test target=x86_64':
environment:
set:
FOO: 'foo'
unset:
- BAR
'platform=test target=core2':
load:
- 'foo/bar'

View File

@@ -0,0 +1,12 @@
enable:
- lmod
lmod:
core_compilers:
- 'clang@3.3'
hierarchy:
- mpi
exclude:
- callpath
all:
autoload: direct

View File

@@ -4,7 +4,7 @@ tcl:
all:
autoload: none
filter:
environment_blacklist:
exclude_env_vars:
- CMAKE_PREFIX_PATH
environment:
set:

View File

@@ -1,3 +1,5 @@
# DEPRECATED: remove this in v0.20
# See `exclude.yaml` for the new syntax
enable:
- tcl
tcl:

View File

@@ -0,0 +1,25 @@
# DEPRECATED: remove this in v0.20
# See `alter_environment.yaml` for the new syntax
enable:
- tcl
tcl:
all:
autoload: none
filter:
environment_blacklist:
- CMAKE_PREFIX_PATH
environment:
set:
'{name}_ROOT': '{prefix}'
'platform=test target=x86_64':
environment:
set:
FOO: 'foo'
OMPI_MCA_mpi_leave_pinned: '1'
unset:
- BAR
'platform=test target=core2':
load:
- 'foo/bar'

View File

@@ -1,3 +1,5 @@
# DEPRECATED: remove this in v0.20
# See `exclude_implicits.yaml` for the new syntax
enable:
- tcl
tcl:

View File

@@ -0,0 +1,10 @@
enable:
- tcl
tcl:
include:
- zmpi
exclude:
- callpath
- mpi
all:
autoload: direct

View File

@@ -0,0 +1,6 @@
enable:
- tcl
tcl:
exclude_implicits: true
all:
autoload: direct

View File

@@ -379,40 +379,40 @@ def test_clear(env):
assert len(env) == 0
-@pytest.mark.parametrize('env,blacklist,whitelist', [
-# Check we can blacklist a literal
+@pytest.mark.parametrize('env,exclude,include', [
+# Check we can exclude a literal
({'SHLVL': '1'}, ['SHLVL'], []),
-# Check whitelist takes precedence
+# Check include takes precedence
({'SHLVL': '1'}, ['SHLVL'], ['SHLVL']),
])
-def test_sanitize_literals(env, blacklist, whitelist):
+def test_sanitize_literals(env, exclude, include):
-after = environment.sanitize(env, blacklist, whitelist)
+after = environment.sanitize(env, exclude, include)
-# Check that all the whitelisted variables are there
-assert all(x in after for x in whitelist)
+# Check that all the included variables are there
+assert all(x in after for x in include)
-# Check that the blacklisted variables that are not
-# whitelisted are there
-blacklist = list(set(blacklist) - set(whitelist))
-assert all(x not in after for x in blacklist)
+# Check that the excluded variables that are not
+# included are there
+exclude = list(set(exclude) - set(include))
+assert all(x not in after for x in exclude)
-@pytest.mark.parametrize('env,blacklist,whitelist,expected,deleted', [
-# Check we can blacklist using a regex
+@pytest.mark.parametrize('env,exclude,include,expected,deleted', [
+# Check we can exclude using a regex
({'SHLVL': '1'}, ['SH.*'], [], [], ['SHLVL']),
-# Check we can whitelist using a regex
+# Check we can include using a regex
({'SHLVL': '1'}, ['SH.*'], ['SH.*'], ['SHLVL'], []),
-# Check regex to blacklist Modules v4 related vars
+# Check regex to exclude Modules v4 related vars
({'MODULES_LMALTNAME': '1', 'MODULES_LMCONFLICT': '2'},
['MODULES_(.*)'], [], [], ['MODULES_LMALTNAME', 'MODULES_LMCONFLICT']),
({'A_modquar': '1', 'b_modquar': '2', 'C_modshare': '3'},
[r'(\w*)_mod(quar|share)'], [], [],
['A_modquar', 'b_modquar', 'C_modshare']),
])
-def test_sanitize_regex(env, blacklist, whitelist, expected, deleted):
+def test_sanitize_regex(env, exclude, include, expected, deleted):
-after = environment.sanitize(env, blacklist, whitelist)
+after = environment.sanitize(env, exclude, include)
assert all(x in after for x in expected)
assert all(x not in after for x in deleted)
@@ -460,7 +460,7 @@ def test_from_environment_diff(before, after, search_list):
@pytest.mark.skipif(sys.platform == 'win32',
reason="LMod not supported on Windows")
@pytest.mark.regression('15775')
-def test_blacklist_lmod_variables():
+def test_exclude_lmod_variables():
# Construct the list of environment modifications
file = os.path.join(datadir, 'sourceme_lmod.sh')
env = EnvironmentModifications.from_sourcing_file(file)

View File

@@ -1176,6 +1176,18 @@ def test_install_skip_patch(install_mockery, mock_fetch):
assert inst.package_id(spec.package) in installer.installed
def test_install_implicit(install_mockery, mock_fetch):
"""Test the path skip_patch install path."""
spec_name = 'trivial-install-test-package'
const_arg = installer_args([spec_name],
{'fake': False})
installer = create_installer(const_arg)
pkg = installer.build_requests[0].pkg
assert not create_build_task(pkg, {'explicit': False}).explicit
assert create_build_task(pkg, {'explicit': True}).explicit
assert create_build_task(pkg).explicit
def test_overwrite_install_backup_success(temporary_store, config, mock_packages,
tmpdir):
"""

View File

@@ -11,7 +11,9 @@
import spack.error
import spack.modules.tcl
import spack.package_base
import spack.schema.modules
import spack.spec
import spack.util.spack_yaml as syaml
from spack.modules.common import UpstreamModuleIndex
from spack.spec import Spec
@@ -226,3 +228,34 @@ def find_nothing(*args):
assert module_path
spack.package_base.PackageBase.uninstall_by_spec(spec)
# DEPRECATED: remove blacklist in v0.20
@pytest.mark.parametrize("module_type, old_config,new_config", [
("tcl", "blacklist.yaml", "exclude.yaml"),
("tcl", "blacklist_implicits.yaml", "exclude_implicits.yaml"),
("tcl", "blacklist_environment.yaml", "alter_environment.yaml"),
("lmod", "blacklist.yaml", "exclude.yaml"),
("lmod", "blacklist_environment.yaml", "alter_environment.yaml"),
])
def test_exclude_include_update(module_type, old_config, new_config):
module_test_data_root = os.path.join(
spack.paths.test_path, 'data', 'modules', module_type
)
with open(os.path.join(module_test_data_root, old_config)) as f:
old_yaml = syaml.load(f)
with open(os.path.join(module_test_data_root, new_config)) as f:
new_yaml = syaml.load(f)
# ensure file that needs updating is translated to the right thing.
assert spack.schema.modules.update_keys(
old_yaml, spack.schema.modules.exclude_include_translations
)
assert new_yaml == old_yaml
# ensure a file that doesn't need updates doesn't get updated
original_new_yaml = new_yaml.copy()
assert not spack.schema.modules.update_keys(
new_yaml, spack.schema.modules.exclude_include_translations
)
assert original_new_yaml == new_yaml
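The migration test above relies on a helper that walks a nested config and renames deprecated keys in place, returning whether anything changed. A minimal self-contained sketch of that idea (the function body and the exact translation table here are assumptions modeled on the test, not Spack's actual implementation):

```python
def update_keys(data, key_translations):
    """Recursively rename dict keys in place.

    Returns True if any key was renamed, mirroring how the test asserts
    on the return value to detect config files that need updating.
    """
    changed = False
    if isinstance(data, dict):
        for old_key in list(data.keys()):
            new_key = key_translations.get(old_key, old_key)
            if new_key != old_key:
                data[new_key] = data.pop(old_key)
                changed = True
            # Recurse into the value regardless of whether the key moved
            changed = update_keys(data[new_key], key_translations) or changed
    elif isinstance(data, list):
        for item in data:
            changed = update_keys(item, key_translations) or changed
    return changed


# Hypothetical translation table, inferred from the renamed test fixtures
exclude_include_translations = {
    'blacklist': 'exclude',
    'whitelist': 'include',
    'blacklist_implicits': 'exclude_implicits',
    'environment_blacklist': 'exclude_env_vars',
}
```

Running it twice shows the idempotence the test checks: the first pass reports a change, the second pass reports none.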

View File

@@ -110,10 +110,16 @@ def test_autoload_all(self, modulefile_content, module_configuration):
assert len([x for x in content if 'depends_on(' in x]) == 5
-def test_alter_environment(self, modulefile_content, module_configuration):
+# DEPRECATED: remove blacklist in v0.20
+@pytest.mark.parametrize(
+"config_name", ["alter_environment", "blacklist_environment"]
+)
+def test_alter_environment(
+self, modulefile_content, module_configuration, config_name
+):
"""Tests modifications to run-time environment."""
-module_configuration('alter_environment')
+module_configuration(config_name)
content = modulefile_content('mpileaks platform=test target=x86_64')
assert len(
@@ -145,10 +151,11 @@ def test_prepend_path_separator(self, modulefile_content,
elif re.match(r'[a-z]+_path\("SEMICOLON"', line):
assert line.endswith('"bar", ";")')
-def test_blacklist(self, modulefile_content, module_configuration):
-"""Tests blacklisting the generation of selected modules."""
+@pytest.mark.parametrize("config_name", ["exclude", "blacklist"])
+def test_exclude(self, modulefile_content, module_configuration, config_name):
+"""Tests excluding the generation of selected modules."""
-module_configuration('blacklist')
+module_configuration(config_name)
content = modulefile_content(mpileaks_spec_string)
assert len([x for x in content if 'depends_on(' in x]) == 1

View File

@@ -97,10 +97,16 @@ def test_prerequisites_all(self, modulefile_content, module_configuration):
assert len([x for x in content if 'prereq' in x]) == 5
-def test_alter_environment(self, modulefile_content, module_configuration):
+# DEPRECATED: remove blacklist in v0.20
+@pytest.mark.parametrize(
+"config_name", ["alter_environment", "blacklist_environment"]
+)
+def test_alter_environment(
+self, modulefile_content, module_configuration, config_name
+):
"""Tests modifications to run-time environment."""
-module_configuration('alter_environment')
+module_configuration(config_name)
content = modulefile_content('mpileaks platform=test target=x86_64')
assert len([x for x in content
@@ -129,10 +135,11 @@ def test_alter_environment(self, modulefile_content, module_configuration):
assert len([x for x in content if 'module load foo/bar' in x]) == 1
assert len([x for x in content if 'setenv LIBDWARF_ROOT' in x]) == 1
-def test_blacklist(self, modulefile_content, module_configuration):
-"""Tests blacklisting the generation of selected modules."""
+@pytest.mark.parametrize("config_name", ["exclude", "blacklist"])
+def test_exclude(self, modulefile_content, module_configuration, config_name):
+"""Tests excluding the generation of selected modules."""
-module_configuration('blacklist')
+module_configuration(config_name)
content = modulefile_content('mpileaks ^zmpi')
assert len([x for x in content if 'is-loaded' in x]) == 1
@@ -359,24 +366,27 @@ def test_extend_context(
@pytest.mark.regression('4400')
@pytest.mark.db
-def test_blacklist_implicits(
-self, modulefile_content, module_configuration, database
+@pytest.mark.parametrize(
+"config_name", ["exclude_implicits", "blacklist_implicits"]
+)
+def test_exclude_implicits(
+self, modulefile_content, module_configuration, database, config_name
):
-module_configuration('blacklist_implicits')
+module_configuration(config_name)
# mpileaks has been installed explicitly when setting up
# the tests database
mpileaks_specs = database.query('mpileaks')
for item in mpileaks_specs:
writer = writer_cls(item, 'default')
-assert not writer.conf.blacklisted
+assert not writer.conf.excluded
# callpath is a dependency of mpileaks, and has been pulled
# in implicitly
callpath_specs = database.query('callpath')
for item in callpath_specs:
writer = writer_cls(item, 'default')
-assert writer.conf.blacklisted
+assert writer.conf.excluded
@pytest.mark.regression('9624')
@pytest.mark.db

View File

@@ -673,10 +673,10 @@ def from_sourcing_file(filename, *arguments, **kwargs):
(default: ``&> /dev/null``)
concatenate_on_success (str): operator used to execute a command
only when the previous command succeeds (default: ``&&``)
-blacklist ([str or re]): ignore any modifications of these
+exclude ([str or re]): ignore any modifications of these
variables (default: [])
-whitelist ([str or re]): always respect modifications of these
-variables (default: []). has precedence over blacklist.
+include ([str or re]): always respect modifications of these
+variables (default: []). Supersedes any excluded variables.
clean (bool): in addition to removing empty entries,
also remove duplicate entries (default: False).
"""
@@ -687,13 +687,13 @@ def from_sourcing_file(filename, *arguments, **kwargs):
msg = 'Trying to source non-existing file: {0}'.format(filename)
raise RuntimeError(msg)
-# Prepare a whitelist and a blacklist of environment variable names
-blacklist = kwargs.get('blacklist', [])
-whitelist = kwargs.get('whitelist', [])
+# Prepare include and exclude lists of environment variable names
+exclude = kwargs.get('exclude', [])
+include = kwargs.get('include', [])
clean = kwargs.get('clean', False)
# Other variables unrelated to sourcing a file
-blacklist.extend([
+exclude.extend([
# Bash internals
'SHLVL', '_', 'PWD', 'OLDPWD', 'PS1', 'PS2', 'ENV',
# Environment modules v4
@@ -706,12 +706,12 @@ def from_sourcing_file(filename, *arguments, **kwargs):
# Compute the environments before and after sourcing
before = sanitize(
environment_after_sourcing_files(os.devnull, **kwargs),
-blacklist=blacklist, whitelist=whitelist
+exclude=exclude, include=include
)
file_and_args = (filename,) + arguments
after = sanitize(
environment_after_sourcing_files(file_and_args, **kwargs),
-blacklist=blacklist, whitelist=whitelist
+exclude=exclude, include=include
)
# Delegate to the other factory
@@ -881,22 +881,6 @@ def validate(env, errstream):
set_or_unset_not_first(variable, list_of_changes, errstream)
def filter_environment_blacklist(env, variables):
"""Generator that filters out any change to environment variables present in
the input list.
Args:
env: list of environment modifications
variables: list of variable names to be filtered
Returns:
items in env if they are not in variables
"""
for item in env:
if item.name not in variables:
yield item
def inspect_path(root, inspections, exclude=None):
"""Inspects ``root`` to search for the subdirectories in ``inspections``.
Adds every path found to a list of prepend-path commands and returns it.
@@ -1060,17 +1044,15 @@ def _source_single_file(file_and_args, environment):
return current_environment
-def sanitize(environment, blacklist, whitelist):
+def sanitize(environment, exclude, include):
"""Returns a copy of the input dictionary where all the keys that
-match a blacklist pattern and don't match a whitelist pattern are
+match an excluded pattern and don't match an included pattern are
removed.
Args:
environment (dict): input dictionary
-blacklist (list): literals or regex patterns to be
-blacklisted
-whitelist (list): literals or regex patterns to be
-whitelisted
+exclude (list): literals or regex patterns to be excluded
+include (list): literals or regex patterns to be included
"""
def set_intersection(fullset, *args):
@@ -1088,9 +1070,9 @@ def set_intersection(fullset, *args):
# Don't modify input, make a copy instead
environment = sjson.decode_json_dict(dict(environment))
-# Retain (whitelist) has priority over prune (blacklist)
-prune = set_intersection(set(environment), *blacklist)
-prune -= set_intersection(prune, *whitelist)
+# include supersedes any excluded items
+prune = set_intersection(set(environment), *exclude)
+prune -= set_intersection(prune, *include)
for k in prune:
environment.pop(k, None)
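The include-over-exclude precedence implemented in the hunk above can be sketched in isolation. This is a simplified model (pattern matching via `re.match`, as the test patterns like `SH.*` suggest), not the real `sanitize`:

```python
import re


def sanitize(environment, exclude, include):
    """Return a copy of ``environment`` without the variables matching an
    exclude pattern, unless they also match an include pattern
    (include supersedes exclude)."""
    def matching(names, patterns):
        return {n for n in names if any(re.match(p, n) for p in patterns)}

    prune = matching(environment, exclude)
    prune -= matching(prune, include)  # include wins over exclude
    return {k: v for k, v in environment.items() if k not in prune}
```

With `exclude=['SH.*']` and `include=['SHLVL']`, `SHLVL` survives even though it matches the exclusion pattern, which is exactly what `test_sanitize_literals` checks.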

View File

@@ -14,9 +14,18 @@
def get_mirror_connection(url, url_type="push"):
connection = {}
# Try to find a mirror for potential connection information
-for mirror in spack.mirror.MirrorCollection().values():
-if "%s://%s" % (url.scheme, url.netloc) == mirror.push_url:
-connection = mirror.to_dict()[url_type]
+# Check whether the desired file starts with any of the mirror URLs
+rebuilt_path = url_util.format(url)
+# Map each mirror's push URL to the mirror itself
+mirror_dict = {x.push_url: x for x in spack.mirror.MirrorCollection().values()}  # noqa: E501
+# Ensure the most specific URLs (longest) are tried first
+mirror_url_keys = mirror_dict.keys()
+mirror_url_keys = sorted(mirror_url_keys, key=len, reverse=True)
+for mURL in mirror_url_keys:
+# See if the desired URL starts with the mirror's push URL
+if rebuilt_path.startswith(mURL):
+connection = mirror_dict[mURL].to_dict()[url_type]
+break
return connection
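Sorting candidate push URLs longest-first is what makes the most specific mirror win when one mirror's URL is a prefix of another's. A standalone sketch of the lookup (the plain dict shape here is an assumption, not the real `MirrorCollection` API):

```python
def find_connection(url, mirrors):
    """Return connection info for the most specific mirror whose push URL
    is a prefix of ``url``.  ``mirrors`` maps push URL -> connection info."""
    # Longest URLs first, so 's3://m/develop/e4s' beats 's3://m'
    for push_url in sorted(mirrors, key=len, reverse=True):
        if url.startswith(push_url):
            return mirrors[push_url]
    return {}
```

Without the sort, iteration order would decide which of two overlapping mirrors matched first.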

View File

@@ -391,7 +391,7 @@ def list_url(url, recursive=False):
if os.path.isfile(os.path.join(local_path, subpath))]
if url.scheme == 's3':
-s3 = s3_util.create_s3_session(url)
+s3 = s3_util.create_s3_session(url, connection=s3_util.get_mirror_connection(url))  # noqa: E501
if recursive:
return list(_iter_s3_prefix(s3, url))

View File

@@ -264,9 +264,11 @@ e4s-mac-protected-build:
e4s-pr-generate:
extends: [ ".e4s", ".pr-generate"]
image: ecpe4s/ubuntu22.04-runner-x86_64:2022-07-01
e4s-protected-generate:
extends: [ ".e4s", ".protected-generate"]
image: ecpe4s/ubuntu22.04-runner-x86_64:2022-07-01
e4s-pr-build:
extends: [ ".e4s", ".pr-build" ]

View File

@@ -15,23 +15,16 @@ spack:
packages:
all:
-compiler:
-- gcc@7.5.0
+compiler: [gcc@11.2.0]
providers:
-blas:
-- openblas
-mpi:
-- mpich
-target:
-- x86_64
+blas: [openblas]
+mpi: [mpich]
+target: [x86_64]
variants: +mpi
binutils:
variants: +ld +gold +headers +libiberty ~nls
-version:
-- 2.36.1
-doxygen:
-version:
-- 1.8.20
+cuda:
+version: [11.7.0]
elfutils:
variants: +bzip2 ~nls +xz
hdf5:
@@ -40,187 +33,197 @@ spack:
variants: fabrics=sockets,tcp,udp,rxm
libunwind:
variants: +pic +xz
mesa:
variants: ~llvm
mesa18:
variants: ~llvm
mpich:
variants: ~wrapperrpath
ncurses:
variants: +termlib
openblas:
variants: threads=openmp
openturns:
version: [1.18]
python:
version: [3.8.13]
trilinos:
variants: +amesos +amesos2 +anasazi +aztec +belos +boost +epetra +epetraext +ifpack +ifpack2 +intrepid +intrepid2 +isorropia +kokkos +ml +minitensor +muelu +nox +piro +phalanx +rol +rythmos +sacado +stk +shards +shylu +stokhos +stratimikos +teko +tempus +tpetra +trilinoscouplings +zoltan +zoltan2 +superlu-dist gotype=long_long
variants: +amesos +amesos2 +anasazi +aztec +belos +boost +epetra +epetraext
+ifpack +ifpack2 +intrepid +intrepid2 +isorropia +kokkos +ml +minitensor +muelu
+nox +piro +phalanx +rol +rythmos +sacado +stk +shards +shylu +stokhos +stratimikos
+teko +tempus +tpetra +trilinoscouplings +zoltan +zoltan2 +superlu-dist gotype=long_long
xz:
variants: +pic
definitions:
- cuda_specs:
- amrex +cuda cuda_arch=70
- caliper +cuda cuda_arch=70
- chai ~benchmarks ~tests +cuda cuda_arch=70 ^umpire ~shared
- ginkgo +cuda cuda_arch=70
- heffte +cuda cuda_arch=70
- hpx +cuda cuda_arch=70
- hypre +cuda cuda_arch=70
- kokkos +wrapper +cuda cuda_arch=70
- kokkos-kernels +cuda cuda_arch=70 ^kokkos +wrapper +cuda cuda_arch=70
- magma +cuda cuda_arch=70
- mfem +cuda cuda_arch=70
- parsec +cuda cuda_arch=70
- petsc +cuda cuda_arch=70
- raja +cuda cuda_arch=70
- slate +cuda cuda_arch=70
- slepc +cuda cuda_arch=70
- strumpack ~slate +cuda cuda_arch=70
- sundials +cuda cuda_arch=70
- superlu-dist +cuda cuda_arch=70
- tasmanian +cuda cuda_arch=70
# Trilinos: enable CUDA, Kokkos, and important Tpetra-era solver packages;
# disable Epetra; disable ETI to speed up CI; disable all other TPLs
- trilinos@13.2.0 +cuda cuda_arch=70 +wrapper +amesos2 +belos +ifpack2 +kokkos +muelu +nox +stratimikos +tpetra ~amesos ~anasazi ~aztec ~epetraext ~ifpack ~isorropia ~ml ~teko ~tempus ~zoltan ~zoltan2 ~explicit_template_instantiation ~adios2~basker~boost~chaco~complex~debug~dtk~epetraextbtf~epetraextexperimental~epetraextgraphreorderings~exodus~float~fortran~gtest~hypre~intrepid~intrepid2~ipo~mesquite~minitensor~mumps~openmp~phalanx~piro~rocm~rol~rythmos~sacado~scorec~shards~shared~shylu~stk~stokhos~strumpack~suite-sparse~superlu~superlu-dist~trilinoscouplings~x11
- umpire ~shared +cuda cuda_arch=70
- vtk-m +cuda cuda_arch=70
- zfp +cuda cuda_arch=70
#- ascent ~shared +cuda cuda_arch=70
#- axom +cuda cuda_arch=70 ^umpire ~shared
#- dealii +cuda cuda_arch=70 # gmsh
#- flecsi +cuda cuda_arch=70
#- paraview +cuda cuda_arch=70
- rocm_specs:
- kokkos +rocm amdgpu_target=gfx906
#- amrex +rocm amdgpu_target=gfx906
#- chai +rocm ~benchmarks amdgpu_target=gfx906
#- ginkgo +rocm amdgpu_target=gfx906 # needs hip<4.1
#- raja +rocm ~openmp amdgpu_target=gfx906 # blt 0.3.6 issue with rocm
#- slate +rocm amdgpu_target=gfx906
#- strumpack +rocm ~slate amdgpu_target=gfx906
#- sundials +rocm amdgpu_target=gfx906
#- tasmanian +rocm amdgpu_target=gfx906
#- umpire+rocm amdgpu_target=gfx906 # blt 0.3.6 issue with rocm
- default_specs:
- adios
- adios2
- aml
- amrex
- arborx
- archer
- argobots
- ascent
- axom
- bolt
- cabana
- caliper
- chai ~benchmarks ~tests
- conduit
- darshan-runtime
- darshan-util
- datatransferkit
- dyninst
- faodel
- flecsi@1.4.2 +external_cinch
- flit
- flux-core
- fortrilinos
- gasnet
- ginkgo
- globalarrays
- gmp
- gotcha
- gptune
- h5bench
- hdf5
- heffte +fftw
- hpctoolkit
- hpx
- hypre
- kokkos +openmp
- kokkos-kernels +openmp
- lammps
- legion
- libnrm
- libquo
- libunwind
- llvm targets=amdgpu,nvptx +clang +compiler-rt +libcxx +lld +lldb +llvm_dylib +flang ~cuda
- loki
- mercury
- metall
- mfem
- mpark-variant
- mpifileutils ~xattr
- nccmp
- nco
- ninja
- nrm
- omega-h
- openmpi
- openpmd-api
- papi
- papyrus@1.0.1
- parallel-netcdf
- parsec ~cuda
- pdt
- petsc
- phist
- plasma
- precice
- pumi
- py-jupyterhub
- py-libensemble
- py-petsc4py
- py-warpx ^warpx dims=2
- py-warpx ^warpx dims=3
- py-warpx ^warpx dims=rz
- qthreads scheduler=distrib
- raja
- rempi
- scr
- slate ~cuda
- slepc
- stc
- strumpack ~slate
- sundials
- superlu
- superlu-dist
- swig
- swig@4.0.2-fortran
- sz
- tasmanian
- tau +mpi +python
- trilinos +amesos +amesos2 +anasazi +aztec +belos +boost +epetra +epetraext +ifpack +ifpack2 +intrepid +intrepid2 +isorropia +kokkos +ml +minitensor +muelu +nox +piro +phalanx +rol +rythmos +sacado +stk +shards +shylu +stokhos +stratimikos +teko +tempus +tpetra +trilinoscouplings +zoltan +zoltan2 +superlu-dist gotype=long_long
- turbine
- umap
- umpire
- unifyfs@0.9.1
- upcxx
- variorum
- veloc
- wannier90
- zfp
#- dealii
#- geopm
#- qt
#- qwt
- arch:
- '%gcc target=x86_64'
mesa:
version: [21.3.8]
specs:
# CPU
- adios
- adios2
- alquimia
- aml
- amrex
- arborx
- archer
- argobots
- ascent
- axom
- bolt
- bricks
- butterflypack
- cabana
- chai ~benchmarks ~tests
- conduit
- darshan-runtime
- darshan-util
- datatransferkit
- dyninst
- exaworks
- faodel
- flecsi
- flit
- flux-core
- fortrilinos
- gasnet
- ginkgo
- globalarrays
- gmp
- gptune
- hdf5 +fortran +hl +shared
- heffte +fftw
- hpctoolkit
- hpx networking=mpi
- hypre
- kokkos +openmp
- kokkos-kernels +openmp
- lammps
- legion
- libnrm
- libquo
- libunwind
- mercury
- metall
- mfem
- mpark-variant
- mpifileutils ~xattr
- nccmp
- nco
- netlib-scalapack
- nrm
- nvhpc
- omega-h
- openmpi
- openpmd-api
- papi
- papyrus
- parallel-netcdf
- parsec ~cuda
- pdt
- petsc
- phist
- plasma
- plumed
- precice
- pumi
- py-cinemasci
- py-jupyterhub
- py-libensemble
- py-petsc4py
- py-warpx ^warpx dims=2
- py-warpx ^warpx dims=3
- py-warpx ^warpx dims=rz
- qthreads scheduler=distrib
- raja
- scr
- slate ~cuda
- slepc
- stc
- strumpack ~slate
- sundials
- superlu
- superlu-dist
- swig
- swig@4.0.2-fortran
- sz
- tasmanian
- tau +mpi +python
- trilinos@13.0.1 +amesos +amesos2 +anasazi +aztec +belos +boost +epetra +epetraext +ifpack
+ifpack2 +intrepid +intrepid2 +isorropia +kokkos +ml +minitensor +muelu +nox +piro
+phalanx +rol +rythmos +sacado +stk +shards +shylu +stokhos +stratimikos +teko
+tempus +tpetra +trilinoscouplings +zoltan +zoltan2 +superlu-dist gotype=long_long
- turbine
- umap
- umpire
- upcxx
- veloc
- vtk-m
- wannier90
- zfp
- matrix:
- - $default_specs
- - $arch
# CUDA
- adios2 +cuda cuda_arch=80
- arborx +cuda cuda_arch=80 ^kokkos@3.6.00 +wrapper
- bricks +cuda
- cabana +cuda ^kokkos@3.6.00 +wrapper +cuda_lambda +cuda cuda_arch=80
- chai ~benchmarks ~tests +cuda cuda_arch=80 ^umpire@6.0.0 ~shared
- flux-core +cuda
- ginkgo +cuda cuda_arch=80
- heffte +cuda cuda_arch=80
- hpctoolkit +cuda
- hpx +cuda cuda_arch=80
- hypre +cuda cuda_arch=80
- kokkos-kernels +cuda cuda_arch=80 ^kokkos +wrapper +cuda cuda_arch=80
- kokkos +wrapper +cuda cuda_arch=80
- magma +cuda cuda_arch=80
- mfem +cuda cuda_arch=80
- papi +cuda
- petsc +cuda cuda_arch=80
- raja +cuda cuda_arch=80
- slate +cuda cuda_arch=80
- slepc +cuda cuda_arch=80
- strumpack ~slate +cuda cuda_arch=80
- sundials +cuda cuda_arch=80
- superlu-dist +cuda cuda_arch=80
- tasmanian +cuda cuda_arch=80
- tau +mpi +cuda
- umpire ~shared +cuda cuda_arch=80
- vtk-m +cuda cuda_arch=80
- zfp +cuda cuda_arch=80
- matrix:
- - $cuda_specs
- - $arch
# ROCm
- amrex +rocm amdgpu_target=gfx90a
- arborx +rocm amdgpu_target=gfx90a
- gasnet +rocm amdgpu_target=gfx90a
- ginkgo +rocm amdgpu_target=gfx90a
- heffte +rocm amdgpu_target=gfx90a
- hpx +rocm amdgpu_target=gfx90a
- kokkos +rocm amdgpu_target=gfx90a
- magma ~cuda +rocm amdgpu_target=gfx90a
- petsc +rocm amdgpu_target=gfx90a
- slepc +rocm amdgpu_target=gfx90a ^petsc +rocm amdgpu_target=gfx90a
- strumpack ~slate +rocm amdgpu_target=gfx90a
- superlu-dist +rocm amdgpu_target=gfx90a
- tau +mpi +rocm
- upcxx +rocm amdgpu_target=gfx90a
# CPU failures
#- caliper # /usr/bin/ld: ../../libcaliper.so.2.7.0: undefined reference to `_dl_sym'
#- charliecloud # autogen.sh: 6: [[: not found
#- geopm # /usr/include/x86_64-linux-gnu/bits/string_fortified.h:95:10: error:'__builtin_strncpy' specified bound 512 equals destination size [-Werror=stringop-truncation]
#- gotcha # /usr/bin/ld: ../../libgotcha.so.2.0.2: undefined reference to `_dl_sym'
#- h5bench # commons/h5bench_util.h:196: multiple definition of `has_vol_async';
#- loki # ../include/loki/Singleton.h:158:14: warning: 'template<class> class std::auto_ptr' is deprecated: use 'std::unique_ptr' instead [-Wdeprecated-declarations]
#- paraview +qt # llvm@14
#- pruners-ninja # test/ninja_test_util.c:34: multiple definition of `a';
#- rempi # rempi_message_manager.h:53:3: error: 'string' does not name a type
#- unifyfs # gotcha: /usr/bin/ld: ../../libgotcha.so.2.0.2: undefined reference to `_dl_sym'
#- variorum # /usr/bin/ld: Intel/CMakeFiles/variorum_intel.dir/Broadwell_4F.c.o:(.bss+0x0): multiple definition of `g_platform';
# CUDA failures
#- caliper +cuda cuda_arch=80 # /usr/bin/ld: ../../libcaliper.so.2.7.0: undefined reference to `_dl_sym'
#- parsec +cuda cuda_arch=80 # parsec/mca/device/cuda/transfer.c:168: multiple definition of `parsec_CUDA_d2h_max_flows';
#- trilinos@13.2.0 +cuda cuda_arch=80 # /usr/include/c++/11/bits/std_function.h:435:145: error: parameter packs not expanded with '...':
# ROCm failures
#- chai ~benchmarks +rocm amdgpu_target=gfx90a # umpire: Target "blt_hip" INTERFACE_INCLUDE_DIRECTORIES property contains path: "/tmp/root/spack-stage/spack-stage-umpire-2022.03.1-by6rldnpdowaaoqgxkeqejwyx5uxo2sv/spack-src/HIP_CLANG_INCLUDE_PATH-NOTFOUND/.." which is prefixed in the source directory.
#- hpctoolkit +rocm # roctracer-dev: core/memory_pool.h:155:64: error: 'int pthread_yield()' is deprecated: pthread_yield is deprecated, use sched_yield instead [-Werror=deprecated-declarations]
#- raja ~openmp +rocm amdgpu_target=gfx90a # cmake: Could NOT find ROCPRIM (missing: ROCPRIM_INCLUDE_DIRS)
#- umpire +rocm amdgpu_target=gfx90a # Target "blt_hip" INTERFACE_INCLUDE_DIRECTORIES property contains path: "/tmp/root/spack-stage/spack-stage-umpire-2022.03.1-by6rldnpdowaaoqgxkeqejwyx5uxo2sv/spack-src/HIP_CLANG_INCLUDE_PATH-NOTFOUND/.." which is prefixed in the source directory.
mirrors: { "mirror": "s3://spack-binaries/develop/e4s" }
@@ -237,10 +240,14 @@ spack:
- if [[ -r /mnt/key/spack_public_key.gpg ]]; then spack gpg trust /mnt/key/spack_public_key.gpg; fi
- spack -d ci rebuild > >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_out.txt) 2> >(tee ${SPACK_ARTIFACTS_ROOT}/user_data/pipeline_err.txt >&2)
image: { "name": "ghcr.io/spack/e4s-ubuntu-18.04:v2021-10-18", "entrypoint": [""] }
image: ecpe4s/ubuntu22.04-runner-x86_64:2022-07-01
mappings:
- match:
- hipblas
- llvm
- llvm-amdgpu
- rocblas
runner-attributes:
tags: [ "spack", "huge", "x86_64" ]
variables:
@@ -402,7 +409,7 @@ spack:
KUBERNETES_CPU_REQUEST: "500m"
KUBERNETES_MEMORY_REQUEST: "500M"
-- match: ['os=ubuntu18.04']
+- match: ['os=ubuntu22.04']
runner-attributes:
tags: ["spack", "x86_64"]
variables:
@@ -414,7 +421,7 @@ spack:
before_script:
- . "./share/spack/setup-env.sh"
- spack --version
image: { "name": "ghcr.io/spack/e4s-ubuntu-18.04:v2021-10-18", "entrypoint": [""] }
image: ecpe4s/ubuntu22.04-runner-x86_64:2022-07-01
tags: ["spack", "public", "x86_64"]
signing-job-attributes:

View File

@@ -1688,7 +1688,7 @@ _spack_spec() {
_spack_stage() {
if $list_options
then
SPACK_COMPREPLY="-h --help -n --no-checksum --deprecated -p --path"
SPACK_COMPREPLY="-h --help -n --no-checksum --deprecated -p --path -U --fresh --reuse"
else
_all_packages
fi

View File

@@ -19,8 +19,7 @@ RUN mkdir {{ paths.environment }} \
{{ manifest }} > {{ paths.environment }}/spack.yaml
# Install the software, remove unnecessary deps
-RUN spack install --fail-fast && \
-spack gc -y
+RUN cd {{ paths.environment }} && spack env activate . && spack install --fail-fast && spack gc -y
{% if strip %}
# Strip all the binaries

View File

@@ -21,6 +21,7 @@ class Adios2(CMakePackage, CudaPackage):
tags = ['e4s']
version('master', branch='master')
version('2.8.2', sha256='9909f6409dc44b2c28c1fda0042dab4b711f25ec3277ef0cb6ffc40f5483910d')
version('2.8.1', sha256='3f515b442bbd52e3189866b121613fe3b59edb8845692ea86fad83d1eba35d93')
version('2.8.0', sha256='5af3d950e616989133955c2430bd09bcf6bad3a04cf62317b401eaf6e7c2d479')
version('2.7.1', sha256='c8e237fd51f49d8a62a0660db12b72ea5067512aa7970f3fcf80b70e3f87ca3e')

View File

@@ -16,6 +16,7 @@ class Gperftools(AutotoolsPackage):
url = "https://github.com/gperftools/gperftools/releases/download/gperftools-2.7/gperftools-2.7.tar.gz"
maintainers = ['albestro', 'eschnett', 'msimberg', 'teonnik']
version('2.10', sha256='83e3bfdd28b8bcf53222c3798d4d395d52dadbbae59e8730c4a6d31a9c3732d8')
version('2.9.1', sha256='ea566e528605befb830671e359118c2da718f721c27225cbbc93858c7520fee3')
version('2.8.1', sha256='12f07a8ba447f12a3ae15e6e3a6ad74de35163b787c0c7b76288d7395f2f74e0')
version('2.7', sha256='1ee8c8699a0eff6b6a203e59b43330536b22bbcbe6448f54c7091e5efb0763c9')
@@ -36,5 +37,9 @@ def configure_args(self):
variant='dynamic_sized_delete_support')
args += self.enable_or_disable("debugalloc")
args += self.enable_or_disable("libunwind")
if self.spec.satisfies('+libunwind'):
args += [
"LDFLAGS=-lunwind"
]
return args

View File

@@ -14,7 +14,9 @@ class Interproscan(Package):
homepage = "https://www.ebi.ac.uk/interpro/interproscan.html"
url = "https://github.com/ebi-pf-team/interproscan/archive/5.36-75.0.tar.gz"
maintainers = ['snehring']
version('5.56-89.0', sha256='75e6a8f86ca17356a2f77f75b07d6d8fb7b397c9575f6e9716b64983e490b230')
version('5.38-76.0', sha256='cb191ff8eee275689b789167a57b368ea5c06bbcd36b4de23e8bbbbdc0fc7434')
version('5.36-75.0', sha256='383d7431e47c985056c856ceb6d4dcf7ed2559a4a3d5c210c01ce3975875addb')
version('4.8',

View File

@@ -0,0 +1,12 @@
--- a/ext/htslib/configure.ac 2022-01-16 23:02:18.000000000 -0600
+++ b/ext/htslib/configure.ac 2022-07-16 18:45:33.283586367 -0500
@@ -46,7 +46,9 @@
the PACKAGE_* defines are unused and are overridden by the more
accurate PACKAGE_VERSION as computed by the Makefile. */])
+AC_CANONICAL_HOST
AC_PROG_CC
+AC_PROG_INSTALL
AC_PROG_RANLIB
dnl Avoid chicken-and-egg problem where pkg-config supplies the

View File

@@ -0,0 +1,9 @@
--- a/src/MinCollector.cpp 2020-02-16 16:27:33.000000000 -0600
+++ b/src/MinCollector.cpp 2022-07-17 13:09:47.698229720 -0500
@@ -1,5 +1,6 @@
#include "MinCollector.h"
#include <algorithm>
+#include <limits>
// utility functions

View File

@@ -0,0 +1,10 @@
--- a/src/CMakeLists.txt 2017-03-20 05:38:35.000000000 -0500
+++ b/src/CMakeLists.txt 2022-07-17 12:56:04.456804708 -0500
@@ -32,6 +32,7 @@
if ( ZLIB_FOUND )
include_directories( ${ZLIB_INCLUDE_DIRS} )
+ target_link_libraries(kallisto kallisto_core ${ZLIB_LIBRARIES})
else()
message(FATAL_ERROR "zlib not found. Required for to output files" )
endif( ZLIB_FOUND )

View File

@@ -17,8 +17,15 @@ class Kallisto(CMakePackage):
version('0.46.2', sha256='c447ca8ddc40fcbd7d877d7c868bc8b72807aa8823a8a8d659e19bdd515baaf2')
version('0.43.1', sha256='7baef1b3b67bcf81dc7c604db2ef30f5520b48d532bf28ec26331cb60ce69400')
# HDF5 support is optional beginning with version 0.46.2.
variant('hdf5',
when='@0.46.2:',
default=False,
description='Build with HDF5 support')
depends_on('zlib')
-depends_on('hdf5')
+depends_on('hdf5', when='@:0.43')
+depends_on('hdf5', when='+hdf5')
# htslib isn't built in time to be used....
parallel = False
@@ -30,6 +37,19 @@ class Kallisto(CMakePackage):
depends_on('libtool', type='build', when='@0.44.0:')
depends_on('m4', type='build', when='@0.44.0:')
patch('link_zlib.patch', when='@:0.43')
patch('limits.patch', when='@:0.46')
patch('htslib_configure.patch', when='@0.44.0:^autoconf@2.70:')
@run_before('cmake')
def autoreconf(self):
# Versions of autoconf greater than 2.69 need config.guess and
# config.sub in the tree.
if self.spec.satisfies('@0.44.0:^autoconf@2.70:'):
with working_dir(join_path(self.stage.source_path, 'ext', 'htslib')):
autoreconf = which('autoreconf')
autoreconf('--install')
# Including '-DCMAKE_VERBOSE_MAKEFILE:BOOL=ON' in the cmake args
# causes bits of cmake's output to end up in the autoconf-generated
# configure script.
@@ -41,8 +61,10 @@ def std_cmake_args(self):
setting.
"""
a = super(Kallisto, self).std_cmake_args
-if (self.spec.version >= Version('0.44.0')):
+if self.spec.satisfies('@0.44.0:'):
args = [i for i in a if i != '-DCMAKE_VERBOSE_MAKEFILE:BOOL=ON']
if self.spec.satisfies('@0.46.2:'):
args.append(self.define_from_variant('USE_HDF5', 'hdf5'))
else:
args = a

View File

@@ -18,6 +18,7 @@ class Kraken2(Package):
maintainers = ['rberg2']
version('2.1.2', sha256='e5f431e8bc3d5493a79e1d8125f4aacbad24f9ea2cc9657b66da06a32bef6ff3')
version('2.1.1', sha256='8f3e928cdb32b9e8e6f55b44703d1557b2a5fc3f30f63e8d16e465e19a81dee4')
version('2.0.8-beta', sha256='f2a91fc57a40b3e87df8ac2ea7c0ff1060cc9295c95de417ee53249ee3f7ad8e')
version('2.0.7-beta', sha256='baa160f5aef73327e1a79e6d1c54b64b2fcdaee0be31b456f7bc411d1897a744')

View File

@@ -34,9 +34,11 @@ class Lhapdf(AutotoolsPackage):
extends('python', when='+python')
depends_on('py-cython', type='build', when='+python')
depends_on('py-setuptools', type='build', when='+python')
depends_on('gettext', type='build', when='+python')
def configure_args(self):
args = ['FCFLAGS=-O3', 'CFLAGS=-O3', 'CXXFLAGS=-O3',
-'LIBS=-L' + self.spec['python'].prefix.lib]
+'LIBS=-L' + self.spec['python'].prefix.lib +
+' -L' + self.spec['gettext'].prefix.lib]
args.extend(self.enable_or_disable('python'))
return args

View File

@@ -110,6 +110,10 @@ class Openssl(Package):  # Uses Fake Autotools, should subclass Package
     depends_on('ca-certificates-mozilla', type=('build', 'run'), when='certs=mozilla')
     depends_on('nasm', when='platform=windows')
 
+    patch('https://github.com/openssl/openssl/commit/f9e578e720bb35228948564192adbe3bc503d5fb.patch?full_index=1',
+          sha256='3fdcf2d1e47c34f3a012f23306322c5a35cad55b180c9b6fb34537b55884645c',
+          when='@1.1.1q')
+
     @classmethod
     def determine_version(cls, exe):
         output = Executable(exe)('version', output=str, error=str)
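`determine_version` captures the output of `openssl version` and extracts a version string from it. A minimal sketch of such parsing, assuming a typical output format; the regex is an illustration, not Spack's actual implementation:

```python
import re

# Pull a version token like '1.1.1q' out of `openssl version` output.
# (Hypothetical parser; the pattern is an assumption about the format.)
def parse_openssl_version(output):
    match = re.search(r'OpenSSL\s+([0-9][0-9a-z.]*)', output)
    return match.group(1) if match else None

print(parse_openssl_version('OpenSSL 1.1.1q  5 Jul 2022'))
```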


@@ -0,0 +1,40 @@
+# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
+# Spack Project Developers. See the top-level COPYRIGHT file for details.
+#
+# SPDX-License-Identifier: (Apache-2.0 OR MIT)
+
+from spack.package import *
+
+
+class PyBlight(PythonPackage):
+    """A catch-all compile-tool wrapper."""
+
+    homepage = "https://github.com/trailofbits/blight"
+    pypi = "blight/blight-0.0.47.tar.gz"
+
+    maintainers = ['woodruffw']
+
+    version('0.0.47', sha256='eb4a881adb98e03a0a855b95bfcddb0f4b3ca568b00cb45b571f047ae75c5667')
+
+    variant('dev', default=False, description='Install dependencies to help with development')
+
+    depends_on('python@3.7:', type=('build', 'run'))
+    # In process of changing build backend after 0.0.47 release.
+    depends_on('py-setuptools', type='build')
+    depends_on('py-click@7.1:8', type=('build', 'run'))
+    depends_on('py-typing-extensions', type=('build', 'run'))
+    depends_on('py-pydantic@1.7:1', type=('build', 'run'))
+    depends_on('py-flake8', type=('build', 'run'), when='+dev')
+    depends_on('py-black', type=('build', 'run'), when='+dev')
+    # blight uses pyproject.toml to configure isort. isort added
+    # support in 5.0.0
+    depends_on('py-isort@5.0.0:', type=('build', 'run'), when='+dev')
+    depends_on('py-pytest', type=('build', 'run'), when='+dev')
+    depends_on('py-pytest-cov', type=('build', 'run'), when='+dev')
+    depends_on('py-coverage+toml', type=('build', 'run'), when='+dev')
+    depends_on('py-twine', type=('build', 'run'), when='+dev')
+    depends_on('py-pdoc3', type=('build', 'run'), when='+dev')
+    depends_on('py-mypy', type=('build', 'run'), when='@0.0.5:+dev')


@@ -10,18 +10,20 @@ class PyFparser(PythonPackage):
     """Parser for Fortran 77..2003 code."""
 
     homepage = "https://github.com/stfc/fparser"
-    url = "https://github.com/stfc/fparser/archive/0.0.5.tar.gz"
     git = "https://github.com/stfc/fparser.git"
+    pypi = "fparser/fparser-0.0.16.tar.gz"
 
     version('develop', branch='master')
-    version('0.0.6', sha256='6ced61573257d11037d25c02d5f0ea92ca9bf1783018bf5f0de30d245ae631ac')
-    version('0.0.5', sha256='7668b331b9423d15353d502ab26d1d561acd5247882dab672f1e45565dabaf08')
+    version('0.0.16', sha256='a06389b95a1b9ed12f8141b69c67343da5ba0a29277b2997b02573a93af14e13')
+    version('0.0.6', sha256='bf8a419cb528df1bfc24ddd26d63f2ebea6f1e103f1a259d8d3a6c9b1cd53012')
+    version('0.0.5', sha256='f3b5b0ac56fd22abed558c0fb0ba4f28edb8de7ef24cfda8ca8996562215822f')
 
     depends_on('py-setuptools', type='build')
     depends_on('py-numpy', type=('build', 'run'), when='@:0.0.5')
-    depends_on('py-nose', type='build')
-    depends_on('py-six', type='build', when='@0.0.6:')
+    depends_on('py-nose', type=('build', 'run'), when='@:0.0.7')
+    # six is unused as of 0.0.15, but still listed in setup.py
+    depends_on('py-six', type=('build', 'run'), when='@0.0.6:')
     depends_on('py-pytest', type='test')
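The `when='@:0.0.7'` constraint above is an open-ended version range: it matches any version up to and including 0.0.7, so py-nose is required for 0.0.5 and 0.0.6 but not for 0.0.16. A hedged sketch of that semantics; tuple comparison here is a deliberate simplification of Spack's real version model:

```python
# Simplified model of an open-ended upper-bounded range like '@:0.0.7'.
# (Illustrative only; Spack versions also handle letters, betas, etc.)
def in_range_upto(version, upper):
    as_tuple = lambda v: tuple(int(p) for p in v.split('.'))
    return as_tuple(version) <= as_tuple(upper)

print(in_range_upto('0.0.6', '0.0.7'))   # matches the range
print(in_range_upto('0.0.16', '0.0.7'))  # outside the range
```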


@@ -14,6 +14,7 @@ class PyGeocube(PythonPackage):
     maintainers = ['adamjstewart']
 
+    version('0.3.2', sha256='71ff0228f1ef44e3a649d29a045ff7e2a2094a5cfca30fadab8f88f4ec23a41d')
     version('0.3.1', sha256='5c97131010cd8d556a5fad2a3824452120640ac33a6a45b6ca9ee3c28f2e266f')
     version('0.0.17', sha256='bf8da0fa96d772ebaea0b98bafa0ba5b8639669d5feb07465d4255af177bddc0')


@@ -0,0 +1,711 @@
--- spack-src/third_party/eigen/Eigen/src/Core/arch/NEON/PacketMath.h.orig 2022-03-02 16:22:53.000000000 +0900
+++ spack-src/third_party/eigen/Eigen/src/Core/arch/NEON/PacketMath.h 2022-03-03 14:11:43.000000000 +0900
@@ -1922,13 +1922,13 @@
template<> EIGEN_STRONG_INLINE void pstoreu<uint64_t>(uint64_t* to, const Packet2ul& from)
{ EIGEN_DEBUG_UNALIGNED_STORE vst1q_u64(to,from); }
-template<> EIGEN_DEVICE_FUNC inline Packet2f pgather<float, Packet2f>(const float* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet2f pgather<float, Packet2f>(const float* from, Index stride)
{
Packet2f res = vld1_dup_f32(from);
res = vld1_lane_f32(from + 1*stride, res, 1);
return res;
}
-template<> EIGEN_DEVICE_FUNC inline Packet4f pgather<float, Packet4f>(const float* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4f pgather<float, Packet4f>(const float* from, Index stride)
{
Packet4f res = vld1q_dup_f32(from);
res = vld1q_lane_f32(from + 1*stride, res, 1);
@@ -1936,14 +1936,14 @@
res = vld1q_lane_f32(from + 3*stride, res, 3);
return res;
}
-template<> EIGEN_DEVICE_FUNC inline Packet4c pgather<int8_t, Packet4c>(const int8_t* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4c pgather<int8_t, Packet4c>(const int8_t* from, Index stride)
{
Packet4c res;
for (int i = 0; i != 4; i++)
reinterpret_cast<int8_t*>(&res)[i] = *(from + i * stride);
return res;
}
-template<> EIGEN_DEVICE_FUNC inline Packet8c pgather<int8_t, Packet8c>(const int8_t* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet8c pgather<int8_t, Packet8c>(const int8_t* from, Index stride)
{
Packet8c res = vld1_dup_s8(from);
res = vld1_lane_s8(from + 1*stride, res, 1);
@@ -1955,7 +1955,7 @@
res = vld1_lane_s8(from + 7*stride, res, 7);
return res;
}
-template<> EIGEN_DEVICE_FUNC inline Packet16c pgather<int8_t, Packet16c>(const int8_t* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet16c pgather<int8_t, Packet16c>(const int8_t* from, Index stride)
{
Packet16c res = vld1q_dup_s8(from);
res = vld1q_lane_s8(from + 1*stride, res, 1);
@@ -1975,14 +1975,14 @@
res = vld1q_lane_s8(from + 15*stride, res, 15);
return res;
}
-template<> EIGEN_DEVICE_FUNC inline Packet4uc pgather<uint8_t, Packet4uc>(const uint8_t* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4uc pgather<uint8_t, Packet4uc>(const uint8_t* from, Index stride)
{
Packet4uc res;
for (int i = 0; i != 4; i++)
reinterpret_cast<uint8_t*>(&res)[i] = *(from + i * stride);
return res;
}
-template<> EIGEN_DEVICE_FUNC inline Packet8uc pgather<uint8_t, Packet8uc>(const uint8_t* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet8uc pgather<uint8_t, Packet8uc>(const uint8_t* from, Index stride)
{
Packet8uc res = vld1_dup_u8(from);
res = vld1_lane_u8(from + 1*stride, res, 1);
@@ -1994,7 +1994,7 @@
res = vld1_lane_u8(from + 7*stride, res, 7);
return res;
}
-template<> EIGEN_DEVICE_FUNC inline Packet16uc pgather<uint8_t, Packet16uc>(const uint8_t* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet16uc pgather<uint8_t, Packet16uc>(const uint8_t* from, Index stride)
{
Packet16uc res = vld1q_dup_u8(from);
res = vld1q_lane_u8(from + 1*stride, res, 1);
@@ -2014,7 +2014,7 @@
res = vld1q_lane_u8(from + 15*stride, res, 15);
return res;
}
-template<> EIGEN_DEVICE_FUNC inline Packet4s pgather<int16_t, Packet4s>(const int16_t* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4s pgather<int16_t, Packet4s>(const int16_t* from, Index stride)
{
Packet4s res = vld1_dup_s16(from);
res = vld1_lane_s16(from + 1*stride, res, 1);
@@ -2022,7 +2022,7 @@
res = vld1_lane_s16(from + 3*stride, res, 3);
return res;
}
-template<> EIGEN_DEVICE_FUNC inline Packet8s pgather<int16_t, Packet8s>(const int16_t* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet8s pgather<int16_t, Packet8s>(const int16_t* from, Index stride)
{
Packet8s res = vld1q_dup_s16(from);
res = vld1q_lane_s16(from + 1*stride, res, 1);
@@ -2034,7 +2034,7 @@
res = vld1q_lane_s16(from + 7*stride, res, 7);
return res;
}
-template<> EIGEN_DEVICE_FUNC inline Packet4us pgather<uint16_t, Packet4us>(const uint16_t* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4us pgather<uint16_t, Packet4us>(const uint16_t* from, Index stride)
{
Packet4us res = vld1_dup_u16(from);
res = vld1_lane_u16(from + 1*stride, res, 1);
@@ -2042,7 +2042,7 @@
res = vld1_lane_u16(from + 3*stride, res, 3);
return res;
}
-template<> EIGEN_DEVICE_FUNC inline Packet8us pgather<uint16_t, Packet8us>(const uint16_t* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet8us pgather<uint16_t, Packet8us>(const uint16_t* from, Index stride)
{
Packet8us res = vld1q_dup_u16(from);
res = vld1q_lane_u16(from + 1*stride, res, 1);
@@ -2054,13 +2054,13 @@
res = vld1q_lane_u16(from + 7*stride, res, 7);
return res;
}
-template<> EIGEN_DEVICE_FUNC inline Packet2i pgather<int32_t, Packet2i>(const int32_t* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet2i pgather<int32_t, Packet2i>(const int32_t* from, Index stride)
{
Packet2i res = vld1_dup_s32(from);
res = vld1_lane_s32(from + 1*stride, res, 1);
return res;
}
-template<> EIGEN_DEVICE_FUNC inline Packet4i pgather<int32_t, Packet4i>(const int32_t* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4i pgather<int32_t, Packet4i>(const int32_t* from, Index stride)
{
Packet4i res = vld1q_dup_s32(from);
res = vld1q_lane_s32(from + 1*stride, res, 1);
@@ -2068,13 +2068,13 @@
res = vld1q_lane_s32(from + 3*stride, res, 3);
return res;
}
-template<> EIGEN_DEVICE_FUNC inline Packet2ui pgather<uint32_t, Packet2ui>(const uint32_t* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet2ui pgather<uint32_t, Packet2ui>(const uint32_t* from, Index stride)
{
Packet2ui res = vld1_dup_u32(from);
res = vld1_lane_u32(from + 1*stride, res, 1);
return res;
}
-template<> EIGEN_DEVICE_FUNC inline Packet4ui pgather<uint32_t, Packet4ui>(const uint32_t* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4ui pgather<uint32_t, Packet4ui>(const uint32_t* from, Index stride)
{
Packet4ui res = vld1q_dup_u32(from);
res = vld1q_lane_u32(from + 1*stride, res, 1);
@@ -2082,37 +2082,37 @@
res = vld1q_lane_u32(from + 3*stride, res, 3);
return res;
}
-template<> EIGEN_DEVICE_FUNC inline Packet2l pgather<int64_t, Packet2l>(const int64_t* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet2l pgather<int64_t, Packet2l>(const int64_t* from, Index stride)
{
Packet2l res = vld1q_dup_s64(from);
res = vld1q_lane_s64(from + 1*stride, res, 1);
return res;
}
-template<> EIGEN_DEVICE_FUNC inline Packet2ul pgather<uint64_t, Packet2ul>(const uint64_t* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet2ul pgather<uint64_t, Packet2ul>(const uint64_t* from, Index stride)
{
Packet2ul res = vld1q_dup_u64(from);
res = vld1q_lane_u64(from + 1*stride, res, 1);
return res;
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<float, Packet2f>(float* to, const Packet2f& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<float, Packet2f>(float* to, const Packet2f& from, Index stride)
{
vst1_lane_f32(to + stride*0, from, 0);
vst1_lane_f32(to + stride*1, from, 1);
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<float, Packet4f>(float* to, const Packet4f& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<float, Packet4f>(float* to, const Packet4f& from, Index stride)
{
vst1q_lane_f32(to + stride*0, from, 0);
vst1q_lane_f32(to + stride*1, from, 1);
vst1q_lane_f32(to + stride*2, from, 2);
vst1q_lane_f32(to + stride*3, from, 3);
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<int8_t, Packet4c>(int8_t* to, const Packet4c& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<int8_t, Packet4c>(int8_t* to, const Packet4c& from, Index stride)
{
for (int i = 0; i != 4; i++)
*(to + i * stride) = reinterpret_cast<const int8_t*>(&from)[i];
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<int8_t, Packet8c>(int8_t* to, const Packet8c& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<int8_t, Packet8c>(int8_t* to, const Packet8c& from, Index stride)
{
vst1_lane_s8(to + stride*0, from, 0);
vst1_lane_s8(to + stride*1, from, 1);
@@ -2123,7 +2123,7 @@
vst1_lane_s8(to + stride*6, from, 6);
vst1_lane_s8(to + stride*7, from, 7);
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<int8_t, Packet16c>(int8_t* to, const Packet16c& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<int8_t, Packet16c>(int8_t* to, const Packet16c& from, Index stride)
{
vst1q_lane_s8(to + stride*0, from, 0);
vst1q_lane_s8(to + stride*1, from, 1);
@@ -2142,12 +2142,12 @@
vst1q_lane_s8(to + stride*14, from, 14);
vst1q_lane_s8(to + stride*15, from, 15);
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<uint8_t, Packet4uc>(uint8_t* to, const Packet4uc& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<uint8_t, Packet4uc>(uint8_t* to, const Packet4uc& from, Index stride)
{
for (int i = 0; i != 4; i++)
*(to + i * stride) = reinterpret_cast<const uint8_t*>(&from)[i];
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<uint8_t, Packet8uc>(uint8_t* to, const Packet8uc& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<uint8_t, Packet8uc>(uint8_t* to, const Packet8uc& from, Index stride)
{
vst1_lane_u8(to + stride*0, from, 0);
vst1_lane_u8(to + stride*1, from, 1);
@@ -2158,7 +2158,7 @@
vst1_lane_u8(to + stride*6, from, 6);
vst1_lane_u8(to + stride*7, from, 7);
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<uint8_t, Packet16uc>(uint8_t* to, const Packet16uc& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<uint8_t, Packet16uc>(uint8_t* to, const Packet16uc& from, Index stride)
{
vst1q_lane_u8(to + stride*0, from, 0);
vst1q_lane_u8(to + stride*1, from, 1);
@@ -2177,14 +2177,14 @@
vst1q_lane_u8(to + stride*14, from, 14);
vst1q_lane_u8(to + stride*15, from, 15);
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<int16_t, Packet4s>(int16_t* to, const Packet4s& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<int16_t, Packet4s>(int16_t* to, const Packet4s& from, Index stride)
{
vst1_lane_s16(to + stride*0, from, 0);
vst1_lane_s16(to + stride*1, from, 1);
vst1_lane_s16(to + stride*2, from, 2);
vst1_lane_s16(to + stride*3, from, 3);
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<int16_t, Packet8s>(int16_t* to, const Packet8s& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<int16_t, Packet8s>(int16_t* to, const Packet8s& from, Index stride)
{
vst1q_lane_s16(to + stride*0, from, 0);
vst1q_lane_s16(to + stride*1, from, 1);
@@ -2195,14 +2195,14 @@
vst1q_lane_s16(to + stride*6, from, 6);
vst1q_lane_s16(to + stride*7, from, 7);
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<uint16_t, Packet4us>(uint16_t* to, const Packet4us& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<uint16_t, Packet4us>(uint16_t* to, const Packet4us& from, Index stride)
{
vst1_lane_u16(to + stride*0, from, 0);
vst1_lane_u16(to + stride*1, from, 1);
vst1_lane_u16(to + stride*2, from, 2);
vst1_lane_u16(to + stride*3, from, 3);
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<uint16_t, Packet8us>(uint16_t* to, const Packet8us& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<uint16_t, Packet8us>(uint16_t* to, const Packet8us& from, Index stride)
{
vst1q_lane_u16(to + stride*0, from, 0);
vst1q_lane_u16(to + stride*1, from, 1);
@@ -2213,36 +2213,36 @@
vst1q_lane_u16(to + stride*6, from, 6);
vst1q_lane_u16(to + stride*7, from, 7);
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<int32_t, Packet2i>(int32_t* to, const Packet2i& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<int32_t, Packet2i>(int32_t* to, const Packet2i& from, Index stride)
{
vst1_lane_s32(to + stride*0, from, 0);
vst1_lane_s32(to + stride*1, from, 1);
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<int32_t, Packet4i>(int32_t* to, const Packet4i& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<int32_t, Packet4i>(int32_t* to, const Packet4i& from, Index stride)
{
vst1q_lane_s32(to + stride*0, from, 0);
vst1q_lane_s32(to + stride*1, from, 1);
vst1q_lane_s32(to + stride*2, from, 2);
vst1q_lane_s32(to + stride*3, from, 3);
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<uint32_t, Packet2ui>(uint32_t* to, const Packet2ui& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<uint32_t, Packet2ui>(uint32_t* to, const Packet2ui& from, Index stride)
{
vst1_lane_u32(to + stride*0, from, 0);
vst1_lane_u32(to + stride*1, from, 1);
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<uint32_t, Packet4ui>(uint32_t* to, const Packet4ui& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<uint32_t, Packet4ui>(uint32_t* to, const Packet4ui& from, Index stride)
{
vst1q_lane_u32(to + stride*0, from, 0);
vst1q_lane_u32(to + stride*1, from, 1);
vst1q_lane_u32(to + stride*2, from, 2);
vst1q_lane_u32(to + stride*3, from, 3);
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<int64_t, Packet2l>(int64_t* to, const Packet2l& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<int64_t, Packet2l>(int64_t* to, const Packet2l& from, Index stride)
{
vst1q_lane_s64(to + stride*0, from, 0);
vst1q_lane_s64(to + stride*1, from, 1);
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<uint64_t, Packet2ul>(uint64_t* to, const Packet2ul& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<uint64_t, Packet2ul>(uint64_t* to, const Packet2ul& from, Index stride)
{
vst1q_lane_u64(to + stride*0, from, 0);
vst1q_lane_u64(to + stride*1, from, 1);
@@ -2457,23 +2457,23 @@
template<> EIGEN_STRONG_INLINE uint64_t predux<Packet2ul>(const Packet2ul& a)
{ return vgetq_lane_u64(a, 0) + vgetq_lane_u64(a, 1); }
-template<> EIGEN_DEVICE_FUNC inline Packet4c predux_half_dowto4(const Packet8c& a)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4c predux_half_dowto4(const Packet8c& a)
{
return vget_lane_s32(vreinterpret_s32_s8(vadd_s8(a,
vreinterpret_s8_s32(vrev64_s32(vreinterpret_s32_s8(a))))), 0);
}
-template<> EIGEN_DEVICE_FUNC inline Packet8c predux_half_dowto4(const Packet16c& a)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet8c predux_half_dowto4(const Packet16c& a)
{ return vadd_s8(vget_high_s8(a), vget_low_s8(a)); }
-template<> EIGEN_DEVICE_FUNC inline Packet4uc predux_half_dowto4(const Packet8uc& a)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4uc predux_half_dowto4(const Packet8uc& a)
{
return vget_lane_u32(vreinterpret_u32_u8(vadd_u8(a,
vreinterpret_u8_u32(vrev64_u32(vreinterpret_u32_u8(a))))), 0);
}
-template<> EIGEN_DEVICE_FUNC inline Packet8uc predux_half_dowto4(const Packet16uc& a)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet8uc predux_half_dowto4(const Packet16uc& a)
{ return vadd_u8(vget_high_u8(a), vget_low_u8(a)); }
-template<> EIGEN_DEVICE_FUNC inline Packet4s predux_half_dowto4(const Packet8s& a)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4s predux_half_dowto4(const Packet8s& a)
{ return vadd_s16(vget_high_s16(a), vget_low_s16(a)); }
-template<> EIGEN_DEVICE_FUNC inline Packet4us predux_half_dowto4(const Packet8us& a)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4us predux_half_dowto4(const Packet8us& a)
{ return vadd_u16(vget_high_u16(a), vget_low_u16(a)); }
// Other reduction functions:
@@ -2752,13 +2752,13 @@
return vget_lane_u32(vpmax_u32(tmp, tmp), 0);
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet2f, 2>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet2f, 2>& kernel)
{
const float32x2x2_t z = vzip_f32(kernel.packet[0], kernel.packet[1]);
kernel.packet[0] = z.val[0];
kernel.packet[1] = z.val[1];
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet4f, 4>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet4f, 4>& kernel)
{
const float32x4x2_t tmp1 = vzipq_f32(kernel.packet[0], kernel.packet[1]);
const float32x4x2_t tmp2 = vzipq_f32(kernel.packet[2], kernel.packet[3]);
@@ -2768,7 +2768,7 @@
kernel.packet[2] = vcombine_f32(vget_low_f32(tmp1.val[1]), vget_low_f32(tmp2.val[1]));
kernel.packet[3] = vcombine_f32(vget_high_f32(tmp1.val[1]), vget_high_f32(tmp2.val[1]));
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet4c, 4>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet4c, 4>& kernel)
{
const int8x8_t a = vreinterpret_s8_s32(vset_lane_s32(kernel.packet[2], vdup_n_s32(kernel.packet[0]), 1));
const int8x8_t b = vreinterpret_s8_s32(vset_lane_s32(kernel.packet[3], vdup_n_s32(kernel.packet[1]), 1));
@@ -2781,7 +2781,7 @@
kernel.packet[2] = vget_lane_s32(vreinterpret_s32_s16(zip16.val[1]), 0);
kernel.packet[3] = vget_lane_s32(vreinterpret_s32_s16(zip16.val[1]), 1);
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet8c, 8>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet8c, 8>& kernel)
{
int8x8x2_t zip8[4];
uint16x4x2_t zip16[4];
@@ -2811,7 +2811,7 @@
}
}
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet16c, 16>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet16c, 16>& kernel)
{
int8x16x2_t zip8[8];
uint16x8x2_t zip16[8];
@@ -2858,7 +2858,7 @@
}
}
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet4uc, 4>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet4uc, 4>& kernel)
{
const uint8x8_t a = vreinterpret_u8_u32(vset_lane_u32(kernel.packet[2], vdup_n_u32(kernel.packet[0]), 1));
const uint8x8_t b = vreinterpret_u8_u32(vset_lane_u32(kernel.packet[3], vdup_n_u32(kernel.packet[1]), 1));
@@ -2871,7 +2871,7 @@
kernel.packet[2] = vget_lane_u32(vreinterpret_u32_u16(zip16.val[1]), 0);
kernel.packet[3] = vget_lane_u32(vreinterpret_u32_u16(zip16.val[1]), 1);
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet8uc, 8>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet8uc, 8>& kernel)
{
uint8x8x2_t zip8[4];
uint16x4x2_t zip16[4];
@@ -2901,7 +2901,7 @@
}
}
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet16uc, 16>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet16uc, 16>& kernel)
{
uint8x16x2_t zip8[8];
uint16x8x2_t zip16[8];
@@ -2946,7 +2946,7 @@
}
}
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet4s, 4>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet4s, 4>& kernel)
{
const int16x4x2_t zip16_1 = vzip_s16(kernel.packet[0], kernel.packet[1]);
const int16x4x2_t zip16_2 = vzip_s16(kernel.packet[2], kernel.packet[3]);
@@ -2960,7 +2960,7 @@
kernel.packet[3] = vreinterpret_s16_u32(zip32_2.val[1]);
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet8s, 4>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet8s, 4>& kernel)
{
const int16x8x2_t zip16_1 = vzipq_s16(kernel.packet[0], kernel.packet[1]);
const int16x8x2_t zip16_2 = vzipq_s16(kernel.packet[2], kernel.packet[3]);
@@ -2974,7 +2974,7 @@
kernel.packet[3] = vreinterpretq_s16_u32(zip32_2.val[1]);
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet16uc, 4>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet16uc, 4>& kernel)
{
const uint8x16x2_t zip8_1 = vzipq_u8(kernel.packet[0], kernel.packet[1]);
const uint8x16x2_t zip8_2 = vzipq_u8(kernel.packet[2], kernel.packet[3]);
@@ -2988,7 +2988,7 @@
kernel.packet[3] = vreinterpretq_u8_u16(zip16_2.val[1]);
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet8s, 8>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet8s, 8>& kernel)
{
const int16x8x2_t zip16_1 = vzipq_s16(kernel.packet[0], kernel.packet[1]);
const int16x8x2_t zip16_2 = vzipq_s16(kernel.packet[2], kernel.packet[3]);
@@ -3009,7 +3009,7 @@
kernel.packet[6] = vreinterpretq_s16_u32(vcombine_u32(vget_low_u32(zip32_2.val[1]), vget_low_u32(zip32_4.val[1])));
kernel.packet[7] = vreinterpretq_s16_u32(vcombine_u32(vget_high_u32(zip32_2.val[1]), vget_high_u32(zip32_4.val[1])));
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet4us, 4>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet4us, 4>& kernel)
{
const uint16x4x2_t zip16_1 = vzip_u16(kernel.packet[0], kernel.packet[1]);
const uint16x4x2_t zip16_2 = vzip_u16(kernel.packet[2], kernel.packet[3]);
@@ -3022,7 +3022,7 @@
kernel.packet[2] = vreinterpret_u16_u32(zip32_2.val[0]);
kernel.packet[3] = vreinterpret_u16_u32(zip32_2.val[1]);
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet8us, 8>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet8us, 8>& kernel)
{
const uint16x8x2_t zip16_1 = vzipq_u16(kernel.packet[0], kernel.packet[1]);
const uint16x8x2_t zip16_2 = vzipq_u16(kernel.packet[2], kernel.packet[3]);
@@ -3043,13 +3043,13 @@
kernel.packet[6] = vreinterpretq_u16_u32(vcombine_u32(vget_low_u32(zip32_2.val[1]), vget_low_u32(zip32_4.val[1])));
kernel.packet[7] = vreinterpretq_u16_u32(vcombine_u32(vget_high_u32(zip32_2.val[1]), vget_high_u32(zip32_4.val[1])));
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet2i, 2>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet2i, 2>& kernel)
{
const int32x2x2_t z = vzip_s32(kernel.packet[0], kernel.packet[1]);
kernel.packet[0] = z.val[0];
kernel.packet[1] = z.val[1];
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet4i, 4>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet4i, 4>& kernel)
{
const int32x4x2_t tmp1 = vzipq_s32(kernel.packet[0], kernel.packet[1]);
const int32x4x2_t tmp2 = vzipq_s32(kernel.packet[2], kernel.packet[3]);
@@ -3059,13 +3059,13 @@
kernel.packet[2] = vcombine_s32(vget_low_s32(tmp1.val[1]), vget_low_s32(tmp2.val[1]));
kernel.packet[3] = vcombine_s32(vget_high_s32(tmp1.val[1]), vget_high_s32(tmp2.val[1]));
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet2ui, 2>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet2ui, 2>& kernel)
{
const uint32x2x2_t z = vzip_u32(kernel.packet[0], kernel.packet[1]);
kernel.packet[0] = z.val[0];
kernel.packet[1] = z.val[1];
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet4ui, 4>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet4ui, 4>& kernel)
{
const uint32x4x2_t tmp1 = vzipq_u32(kernel.packet[0], kernel.packet[1]);
const uint32x4x2_t tmp2 = vzipq_u32(kernel.packet[2], kernel.packet[3]);
@@ -3075,7 +3075,7 @@
kernel.packet[2] = vcombine_u32(vget_low_u32(tmp1.val[1]), vget_low_u32(tmp2.val[1]));
kernel.packet[3] = vcombine_u32(vget_high_u32(tmp1.val[1]), vget_high_u32(tmp2.val[1]));
}
-EIGEN_DEVICE_FUNC inline void
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void
ptranspose(PacketBlock<Packet2l, 2>& kernel)
{
#if EIGEN_ARCH_ARM64
@@ -3094,7 +3094,7 @@
kernel.packet[1] = vcombine_s64(tmp[0][1], tmp[1][1]);
#endif
}
-EIGEN_DEVICE_FUNC inline void
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void
ptranspose(PacketBlock<Packet2ul, 2>& kernel)
{
#if EIGEN_ARCH_ARM64
@@ -3114,37 +3114,37 @@
#endif
}
-template<> EIGEN_DEVICE_FUNC inline Packet2f pselect( const Packet2f& mask, const Packet2f& a, const Packet2f& b)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet2f pselect( const Packet2f& mask, const Packet2f& a, const Packet2f& b)
{ return vbsl_f32(vreinterpret_u32_f32(mask), a, b); }
-template<> EIGEN_DEVICE_FUNC inline Packet4f pselect(const Packet4f& mask, const Packet4f& a, const Packet4f& b)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4f pselect(const Packet4f& mask, const Packet4f& a, const Packet4f& b)
{ return vbslq_f32(vreinterpretq_u32_f32(mask), a, b); }
-template<> EIGEN_DEVICE_FUNC inline Packet8c pselect(const Packet8c& mask, const Packet8c& a, const Packet8c& b)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet8c pselect(const Packet8c& mask, const Packet8c& a, const Packet8c& b)
{ return vbsl_s8(vreinterpret_u8_s8(mask), a, b); }
-template<> EIGEN_DEVICE_FUNC inline Packet16c pselect(const Packet16c& mask, const Packet16c& a, const Packet16c& b)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet16c pselect(const Packet16c& mask, const Packet16c& a, const Packet16c& b)
{ return vbslq_s8(vreinterpretq_u8_s8(mask), a, b); }
-template<> EIGEN_DEVICE_FUNC inline Packet8uc pselect(const Packet8uc& mask, const Packet8uc& a, const Packet8uc& b)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet8uc pselect(const Packet8uc& mask, const Packet8uc& a, const Packet8uc& b)
{ return vbsl_u8(mask, a, b); }
-template<> EIGEN_DEVICE_FUNC inline Packet16uc pselect(const Packet16uc& mask, const Packet16uc& a, const Packet16uc& b)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet16uc pselect(const Packet16uc& mask, const Packet16uc& a, const Packet16uc& b)
{ return vbslq_u8(mask, a, b); }
-template<> EIGEN_DEVICE_FUNC inline Packet4s pselect(const Packet4s& mask, const Packet4s& a, const Packet4s& b)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4s pselect(const Packet4s& mask, const Packet4s& a, const Packet4s& b)
{ return vbsl_s16(vreinterpret_u16_s16(mask), a, b); }
-template<> EIGEN_DEVICE_FUNC inline Packet8s pselect(const Packet8s& mask, const Packet8s& a, const Packet8s& b)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet8s pselect(const Packet8s& mask, const Packet8s& a, const Packet8s& b)
{ return vbslq_s16(vreinterpretq_u16_s16(mask), a, b); }
-template<> EIGEN_DEVICE_FUNC inline Packet4us pselect(const Packet4us& mask, const Packet4us& a, const Packet4us& b)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4us pselect(const Packet4us& mask, const Packet4us& a, const Packet4us& b)
{ return vbsl_u16(mask, a, b); }
-template<> EIGEN_DEVICE_FUNC inline Packet8us pselect(const Packet8us& mask, const Packet8us& a, const Packet8us& b)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet8us pselect(const Packet8us& mask, const Packet8us& a, const Packet8us& b)
{ return vbslq_u16(mask, a, b); }
-template<> EIGEN_DEVICE_FUNC inline Packet2i pselect(const Packet2i& mask, const Packet2i& a, const Packet2i& b)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet2i pselect(const Packet2i& mask, const Packet2i& a, const Packet2i& b)
{ return vbsl_s32(vreinterpret_u32_s32(mask), a, b); }
-template<> EIGEN_DEVICE_FUNC inline Packet4i pselect(const Packet4i& mask, const Packet4i& a, const Packet4i& b)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4i pselect(const Packet4i& mask, const Packet4i& a, const Packet4i& b)
{ return vbslq_s32(vreinterpretq_u32_s32(mask), a, b); }
-template<> EIGEN_DEVICE_FUNC inline Packet2ui pselect(const Packet2ui& mask, const Packet2ui& a, const Packet2ui& b)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet2ui pselect(const Packet2ui& mask, const Packet2ui& a, const Packet2ui& b)
{ return vbsl_u32(mask, a, b); }
-template<> EIGEN_DEVICE_FUNC inline Packet4ui pselect(const Packet4ui& mask, const Packet4ui& a, const Packet4ui& b)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4ui pselect(const Packet4ui& mask, const Packet4ui& a, const Packet4ui& b)
{ return vbslq_u32(mask, a, b); }
-template<> EIGEN_DEVICE_FUNC inline Packet2l pselect(const Packet2l& mask, const Packet2l& a, const Packet2l& b)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet2l pselect(const Packet2l& mask, const Packet2l& a, const Packet2l& b)
{ return vbslq_s64(vreinterpretq_u64_s64(mask), a, b); }
-template<> EIGEN_DEVICE_FUNC inline Packet2ul pselect(const Packet2ul& mask, const Packet2ul& a, const Packet2ul& b)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet2ul pselect(const Packet2ul& mask, const Packet2ul& a, const Packet2ul& b)
{ return vbslq_u64(mask, a, b); }
/**
@@ -3441,7 +3441,7 @@
return pandnot<Packet4us>(a, b);
}
-template<> EIGEN_DEVICE_FUNC inline Packet4bf pselect(const Packet4bf& mask, const Packet4bf& a,
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4bf pselect(const Packet4bf& mask, const Packet4bf& a,
const Packet4bf& b)
{
return pselect<Packet4us>(mask, a, b);
@@ -3507,7 +3507,7 @@
return preverse<Packet4us>(a);
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet4bf, 4>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet4bf, 4>& kernel)
{
PacketBlock<Packet4us, 4> k;
k.packet[0] = kernel.packet[0];
@@ -3739,7 +3739,7 @@
template<> EIGEN_STRONG_INLINE void pstoreu<double>(double* to, const Packet2d& from)
{ EIGEN_DEBUG_UNALIGNED_STORE vst1q_f64(to,from); }
-template<> EIGEN_DEVICE_FUNC inline Packet2d pgather<double, Packet2d>(const double* from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet2d pgather<double, Packet2d>(const double* from, Index stride)
{
Packet2d res = pset1<Packet2d>(0.0);
res = vld1q_lane_f64(from + 0*stride, res, 0);
@@ -3747,7 +3747,7 @@
return res;
}
-template<> EIGEN_DEVICE_FUNC inline void pscatter<double, Packet2d>(double* to, const Packet2d& from, Index stride)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<double, Packet2d>(double* to, const Packet2d& from, Index stride)
{
vst1q_lane_f64(to + stride*0, from, 0);
vst1q_lane_f64(to + stride*1, from, 1);
@@ -3791,7 +3791,7 @@
{ return vgetq_lane_f64(vpmaxq_f64(a,a), 0); }
-EIGEN_DEVICE_FUNC inline void
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void
ptranspose(PacketBlock<Packet2d, 2>& kernel)
{
const float64x2_t tmp1 = vzip1q_f64(kernel.packet[0], kernel.packet[1]);
@@ -3801,7 +3801,7 @@
kernel.packet[1] = tmp2;
}
-template<> EIGEN_DEVICE_FUNC inline Packet2d pselect( const Packet2d& mask, const Packet2d& a, const Packet2d& b)
+template<> EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet2d pselect( const Packet2d& mask, const Packet2d& a, const Packet2d& b)
{ return vbslq_f64(vreinterpretq_u64_f64(mask), a, b); }
template<> EIGEN_STRONG_INLINE Packet2d pldexp<Packet2d>(const Packet2d& a, const Packet2d& exponent)
@@ -3914,7 +3914,7 @@
};
template<>
-EIGEN_DEVICE_FUNC Packet4hf predux_half_dowto4<Packet8hf>(const Packet8hf& a) {
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4hf predux_half_dowto4<Packet8hf>(const Packet8hf& a) {
return vadd_f16(vget_low_f16(a), vget_high_f16(a));
}
@@ -4193,23 +4193,23 @@
return vcombine_f16(lo, hi);
}
-EIGEN_DEVICE_FUNC inline Packet8hf pinsertfirst(const Packet8hf& a, Eigen::half b) { return vsetq_lane_f16(b.x, a, 0); }
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet8hf pinsertfirst(const Packet8hf& a, Eigen::half b) { return vsetq_lane_f16(b.x, a, 0); }
-EIGEN_DEVICE_FUNC inline Packet4hf pinsertfirst(const Packet4hf& a, Eigen::half b) { return vset_lane_f16(b.x, a, 0); }
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4hf pinsertfirst(const Packet4hf& a, Eigen::half b) { return vset_lane_f16(b.x, a, 0); }
template <>
-EIGEN_DEVICE_FUNC inline Packet8hf pselect(const Packet8hf& mask, const Packet8hf& a, const Packet8hf& b) {
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet8hf pselect(const Packet8hf& mask, const Packet8hf& a, const Packet8hf& b) {
return vbslq_f16(vreinterpretq_u16_f16(mask), a, b);
}
template <>
-EIGEN_DEVICE_FUNC inline Packet4hf pselect(const Packet4hf& mask, const Packet4hf& a, const Packet4hf& b) {
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4hf pselect(const Packet4hf& mask, const Packet4hf& a, const Packet4hf& b) {
return vbsl_f16(vreinterpret_u16_f16(mask), a, b);
}
-EIGEN_DEVICE_FUNC inline Packet8hf pinsertlast(const Packet8hf& a, Eigen::half b) { return vsetq_lane_f16(b.x, a, 7); }
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet8hf pinsertlast(const Packet8hf& a, Eigen::half b) { return vsetq_lane_f16(b.x, a, 7); }
-EIGEN_DEVICE_FUNC inline Packet4hf pinsertlast(const Packet4hf& a, Eigen::half b) { return vset_lane_f16(b.x, a, 3); }
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4hf pinsertlast(const Packet4hf& a, Eigen::half b) { return vset_lane_f16(b.x, a, 3); }
template <>
EIGEN_STRONG_INLINE void pstore<Eigen::half>(Eigen::half* to, const Packet8hf& from) {
@@ -4232,7 +4232,7 @@
}
template <>
-EIGEN_DEVICE_FUNC inline Packet8hf pgather<Eigen::half, Packet8hf>(const Eigen::half* from, Index stride) {
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet8hf pgather<Eigen::half, Packet8hf>(const Eigen::half* from, Index stride) {
Packet8hf res = pset1<Packet8hf>(Eigen::half(0.f));
res = vsetq_lane_f16(from[0 * stride].x, res, 0);
res = vsetq_lane_f16(from[1 * stride].x, res, 1);
@@ -4246,7 +4246,7 @@
}
template <>
-EIGEN_DEVICE_FUNC inline Packet4hf pgather<Eigen::half, Packet4hf>(const Eigen::half* from, Index stride) {
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE Packet4hf pgather<Eigen::half, Packet4hf>(const Eigen::half* from, Index stride) {
Packet4hf res = pset1<Packet4hf>(Eigen::half(0.f));
res = vset_lane_f16(from[0 * stride].x, res, 0);
res = vset_lane_f16(from[1 * stride].x, res, 1);
@@ -4256,7 +4256,7 @@
}
template <>
-EIGEN_DEVICE_FUNC inline void pscatter<Eigen::half, Packet8hf>(Eigen::half* to, const Packet8hf& from, Index stride) {
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<Eigen::half, Packet8hf>(Eigen::half* to, const Packet8hf& from, Index stride) {
to[stride * 0].x = vgetq_lane_f16(from, 0);
to[stride * 1].x = vgetq_lane_f16(from, 1);
to[stride * 2].x = vgetq_lane_f16(from, 2);
@@ -4268,7 +4268,7 @@
}
template <>
-EIGEN_DEVICE_FUNC inline void pscatter<Eigen::half, Packet4hf>(Eigen::half* to, const Packet4hf& from, Index stride) {
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void pscatter<Eigen::half, Packet4hf>(Eigen::half* to, const Packet4hf& from, Index stride) {
to[stride * 0].x = vget_lane_f16(from, 0);
to[stride * 1].x = vget_lane_f16(from, 1);
to[stride * 2].x = vget_lane_f16(from, 2);
@@ -4422,7 +4422,7 @@
return h;
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet8hf, 4>& kernel)
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet8hf, 4>& kernel)
{
EIGEN_ALIGN16 Eigen::half in[4][8];
@@ -4451,7 +4451,7 @@
kernel.packet[3] = pload<Packet8hf>(out[3]);
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet4hf, 4>& kernel) {
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet4hf, 4>& kernel) {
EIGEN_ALIGN16 float16x4x4_t tmp_x4;
float16_t* tmp = (float16_t*)&kernel;
tmp_x4 = vld4_f16(tmp);
@@ -4462,7 +4462,7 @@
kernel.packet[3] = tmp_x4.val[3];
}
-EIGEN_DEVICE_FUNC inline void ptranspose(PacketBlock<Packet8hf, 8>& kernel) {
+EIGEN_DEVICE_FUNC EIGEN_STRONG_INLINE void ptranspose(PacketBlock<Packet8hf, 8>& kernel) {
float16x8x2_t T_1[4];
T_1[0] = vuzpq_f16(kernel.packet[0], kernel.packet[1]);
@@ -4498,3 +4498,4 @@
} // end namespace Eigen
#endif // EIGEN_PACKET_MATH_NEON_H
+


@@ -17,6 +17,14 @@ class PyHorovod(PythonPackage, CudaPackage):
maintainers = ['adamjstewart', 'aweits', 'tgaddair']
version('master', branch='master', submodules=True)
version('0.25.0', tag='v0.25.0', submodules=True)
version('0.24.3', tag='v0.24.3', submodules=True)
version('0.24.2', tag='v0.24.2', submodules=True)
version('0.24.1', tag='v0.24.1', submodules=True)
version('0.24.0', tag='v0.24.0', submodules=True)
version('0.23.0', tag='v0.23.0', submodules=True)
version('0.22.1', tag='v0.22.1', submodules=True)
version('0.22.0', tag='v0.22.0', submodules=True)
version('0.21.3', tag='v0.21.3', submodules=True)
version('0.21.2', tag='v0.21.2', submodules=True)
version('0.21.1', tag='v0.21.1', submodules=True)
@@ -43,7 +51,7 @@ class PyHorovod(PythonPackage, CudaPackage):
# https://github.com/horovod/horovod/blob/master/docs/install.rst
variant('frameworks', default='pytorch',
description='Deep learning frameworks to build support for',
values=('tensorflow', 'pytorch', 'mxnet', 'keras', 'spark', 'ray'),
values=('tensorflow', 'keras', 'pytorch', 'mxnet', 'spark', 'ray'),
multi=True)
variant('controllers', default='mpi',
description='Controllers to coordinate work between processes',
@@ -54,6 +62,11 @@ class PyHorovod(PythonPackage, CudaPackage):
variant('cuda', default=True, description='Build with CUDA')
variant('rocm', default=False, description='Build with ROCm')
# Build dependencies
depends_on('cmake@3.13:', type='build', when='@0.24:')
depends_on('cmake@2.8.12:', type='build', when='@0.20:')
depends_on('pkgconfig', type='build')
# Required dependencies
depends_on('python@3.6:', type=('build', 'run'), when='@0.20:')
depends_on('py-setuptools', type='build')
@@ -66,38 +79,42 @@ class PyHorovod(PythonPackage, CudaPackage):
# Framework dependencies
depends_on('py-tensorflow@1.1.0:', type=('build', 'link', 'run'), when='frameworks=tensorflow')
depends_on('py-tensorflow@1.15:', type=('build', 'link', 'run'), when='frameworks=tensorflow @0.20:')
depends_on('py-tensorflow-estimator', type=('build', 'run'), when='frameworks=tensorflow')
depends_on('py-tensorflow-estimator', type=('build', 'run'), when='frameworks=tensorflow')
depends_on('py-keras@2.0.8,2.1.2:', type=('build', 'run'), when='frameworks=keras')
depends_on('py-torch@0.4.0:', type=('build', 'link', 'run'), when='frameworks=pytorch')
depends_on('py-torch@1.2:', type=('build', 'link', 'run'), when='frameworks=pytorch @0.20:')
depends_on('py-torch@1.5:', type=('build', 'link', 'run'), when='frameworks=pytorch @0.25:')
depends_on('py-torchvision', type=('build', 'run'), when='frameworks=pytorch @:0.19.1')
depends_on('py-cffi@1.4.0:', type=('build', 'run'), when='frameworks=pytorch')
depends_on('py-pytorch-lightning', type=('build', 'run'), when='frameworks=pytorch @0.22:0.23')
depends_on('py-pytorch-lightning@1.3.8', type=('build', 'run'), when='frameworks=pytorch @0.24')
depends_on('py-pytorch-lightning@1.3.8:1.5.9', type=('build', 'run'), when='frameworks=pytorch @0.25:')
depends_on('mxnet@1.4.1:+python', type=('build', 'link', 'run'), when='frameworks=mxnet')
depends_on('py-keras@2.0.8,2.1.2:', type=('build', 'run'), when='frameworks=keras')
depends_on('py-h5py@:2', type=('build', 'run'), when='frameworks=spark')
depends_on('py-h5py@:2', type=('build', 'run'), when='frameworks=spark @:0.23')
depends_on('py-numpy', type=('build', 'run'), when='frameworks=spark')
depends_on('py-petastorm@0.8.2', type=('build', 'run'), when='frameworks=spark @:0.19.1')
depends_on('py-petastorm@0.9.0:', type=('build', 'run'), when='frameworks=spark @0.19.2:0.21.0')
depends_on('py-petastorm@0.9.8:', type=('build', 'run'), when='frameworks=spark @0.21.1:')
depends_on('py-petastorm@0.11:', type=('build', 'run'), when='frameworks=spark @0.22:')
depends_on('py-pyarrow@0.15.0:', type=('build', 'run'), when='frameworks=spark')
depends_on('py-pyspark@2.3.2:', type=('build', 'run'), when='frameworks=spark ^python@:3.7')
depends_on('py-pyspark@3.0.0:', type=('build', 'run'), when='frameworks=spark ^python@3.8:')
depends_on('py-fsspec', type=('build', 'run'), when='frameworks=spark @0.22.1:0.24.1')
depends_on('py-fsspec@2021.07:', type=('build', 'run'), when='frameworks=spark @0.24.2:')
depends_on('py-ray', type=('build', 'run'), when='frameworks=ray')
# Build dependencies
depends_on('cmake@2.8.12:', type='build', when='@0.20:')
depends_on('pkgconfig', type='build')
depends_on('py-aioredis@:1', type=('build', 'run'), when='frameworks=ray @0.23:')
# Controller dependencies
depends_on('mpi', when='controllers=mpi')
# There does not appear to be a way to use an external Gloo installation
depends_on('cmake', type='build', when='controllers=gloo')
depends_on('libuv@1.26:', when='controllers=gloo platform=darwin')
# Tensor Operations dependencies
depends_on('nccl@2:', when='tensor_ops=nccl')
depends_on('mpi', when='tensor_ops=mpi')
# There does not appear to be a way to use an external Gloo installation
depends_on('cmake', type='build', when='tensor_ops=gloo')
depends_on('libuv@1.26:', when='tensor_ops=gloo platform=darwin')
depends_on('intel-oneapi-ccl', when='tensor_ops=ccl')
conflicts('cuda_arch=none', when='+cuda',
msg='Must specify CUDA compute capabilities of your GPU, see '
@@ -109,6 +126,11 @@ class PyHorovod(PythonPackage, CudaPackage):
# https://github.com/horovod/horovod/pull/1835
patch('fma.patch', when='@0.19.0:0.19.1')
# Patch vendored copy of eigen to fix build on aarch64
# https://github.com/horovod/horovod/issues/3605
# https://gitlab.com/libeigen/eigen/-/commit/fd1dcb6b45a2c797ad4c4d6cc7678ee70763b4ed
patch('eigen.patch', when='@0.21: target=aarch64:')
@property
def import_modules(self):
modules = [
@@ -124,8 +146,7 @@ def import_modules(self):
if 'frameworks=pytorch' in self.spec:
modules.extend([
'horovod.torch', 'horovod.torch.mpi_lib',
'horovod.torch.elastic', 'horovod.torch.mpi_lib_impl'
'horovod.torch', 'horovod.torch.elastic'
])
if 'frameworks=mxnet' in self.spec:
@@ -160,7 +181,7 @@ def setup_build_environment(self, env):
# Build system
env.set('PKG_CONFIG_EXECUTABLE',
self.spec['pkgconfig'].prefix.bin.join('pkg-config'))
if '^cmake' in self.spec:
if 'cmake' in self.spec:
env.set('HOROVOD_CMAKE', self.spec['cmake'].command.path)
env.set('MAKEFLAGS', '-j{0}'.format(make_jobs))
@@ -181,11 +202,11 @@ def setup_build_environment(self, env):
env.set('HOROVOD_WITHOUT_MXNET', 1)
# Controllers
if 'controllers=mpi' in self.spec:
if 'controllers=mpi' in self.spec or 'tensor_ops=mpi' in self.spec:
env.set('HOROVOD_WITH_MPI', 1)
else:
env.set('HOROVOD_WITHOUT_MPI', 1)
if 'controllers=gloo' in self.spec:
if 'controllers=gloo' in self.spec or 'tensor_ops=gloo' in self.spec:
env.set('HOROVOD_WITH_GLOO', 1)
else:
env.set('HOROVOD_WITHOUT_GLOO', 1)
@@ -220,4 +241,4 @@ def setup_build_environment(self, env):
def test(self):
super(PyHorovod, self).test()
run_test(self.prefix.bin.horovodrun, '--check-build')
self.run_test(self.prefix.bin.horovodrun, '--check-build')


@@ -14,6 +14,7 @@ class PyHttpstan(PythonPackage):
maintainers = ['haralmha']
version('4.8.0', sha256='cadfce05d24ec2af50d71c5212c648cbee5684a7f98fedd3838e124e5a9b4962')
version('4.7.2', sha256='94f6631d969cbd91d136194b074d02642d8c9e2a05674877a39059be87c5bf7b')
version('4.6.1', sha256='703e5e04e60651e0004574bb9695827d759fd13eb0d6bd67f827c1bfa0a1fd31')


@@ -12,6 +12,7 @@ class PyKornia(PythonPackage):
homepage = "https://www.kornia.org/"
pypi = "kornia/kornia-0.5.10.tar.gz"
version('0.6.6', sha256='e29f0f994e3bafec016b101a9a3e89c3751b4fe99ada3ac21d3febb47904faa4')
version('0.6.5', sha256='14cbd8b4064b3d0fb5a8198d1b5fd9231bcd62b9039351641fca6b294b5069f0')
version('0.6.4', sha256='ff60307a7244b315db43bfc4d4d6769094cf7d7494cf367c1d71a56343e2c50f')
version('0.6.3', sha256='0b689b5a47f55f2b08f61e6731760542cc3e3c09c3f0498164b934a3aef0bab3')


@@ -2,6 +2,8 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
@@ -9,16 +11,32 @@ class PyMizani(PythonPackage):
"""Mizani is a scales package for graphics. It is based on Hadley Wickham's
Scales package."""
pypi = "mizani/mizani-0.7.3.tar.gz"
homepage = "https://mizani.readthedocs.io/en/latest"
pypi = "mizani/mizani-0.7.4.tar.gz"
version(
"0.7.3",
sha256="f521300bd29ca918fcd629bc8ab50fa04e41bdbe00f6bcf74055d3c6273770a4",
)
version('0.7.4', sha256='b84b923cd3b8b4c0421a750672e5a85ed2aa05e632bd37af8419d5bbf65c397c')
version('0.7.3', sha256='f521300bd29ca918fcd629bc8ab50fa04e41bdbe00f6bcf74055d3c6273770a4')
version('0.6.0', sha256='2cdba487ee54faf3e5bfe0903155a13ff13d27a2dae709f9432194915b4fb1cd')
depends_on('python@3.6:', type=('build', 'run'))
depends_on("py-matplotlib@3.1.1:", type=("build", "run"))
depends_on("py-numpy", type=("build", "run"))
# common requirements
depends_on("py-palettable", type=("build", "run"))
depends_on("py-pandas@1.1.0:", type=("build", "run"))
depends_on("py-setuptools", type="build")
# variable requirements
depends_on('python@3.8:', type=('build', 'run'), when='@0.7.4:')
depends_on('python@3.6:', type=('build', 'run'), when='@0.6.0:')
depends_on("py-setuptools@42:", type="build", when='@0.7.4:')
depends_on("py-setuptools", type="build", when='@0.6.0:')
depends_on("py-matplotlib@3.5.0:", type=("build", "run"), when='@0.7.4:')
depends_on("py-matplotlib@3.1.1:", type=("build", "run"), when='@0.6.0:')
depends_on("py-numpy@1.19.0:", type=("build", "run"), when='@0.7.4:')
depends_on("py-numpy", type=("build", "run"), when='@0.6.0:')
depends_on("py-scipy@1.5.0:", type=("build", "run"), when='@0.7.4:')
depends_on("py-pandas@1.3.5:", type=("build", "run"), when='@0.7.4:')
depends_on("py-pandas@1.1.0:", type=("build", "run"), when='@0.7.2:')
depends_on("py-pandas@1.0.0:", type=("build", "run"), when='@0.7.0:')
depends_on("py-pandas@0.25.0:", type=("build", "run"), when='@0.6.0:')


@@ -0,0 +1,24 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PyPdoc3(PythonPackage):
"""Auto-generate API documentation for Python projects."""
homepage = "https://pdoc3.github.io/pdoc/"
pypi = "pdoc3/pdoc3-0.10.0.tar.gz"
version('0.10.0', sha256='5f22e7bcb969006738e1aa4219c75a32f34c2d62d46dc9d2fb2d3e0b0287e4b7')
depends_on('python@3.6:', type=('build', 'run'))
depends_on('py-setuptools', type='build')
depends_on('py-setuptools-git', type='build')
depends_on('py-setuptools-scm', type='build')
depends_on('py-mako', type=('build', 'run'))
depends_on('py-markdown@3.0:', type=('build', 'run'))


@@ -12,11 +12,13 @@ class PyPetastorm(PythonPackage):
Tensorflow, Pytorch, and other Python-based ML training frameworks."""
homepage = "https://github.com/uber/petastorm"
pypi = "petastorm/petastorm-0.8.2.tar.gz"
url = "https://github.com/uber/petastorm/archive/refs/tags/v0.11.4.tar.gz"
maintainers = ['adamjstewart']
version('0.9.8', sha256='66009b7ad3f08b0485a748f12b2095a0d2470e04f0c63de43cd5b099f270c268')
version('0.8.2', sha256='7782c315e1ee8d15c7741e3eea41e77b9efce661cf58aa0220a801db64f52f91')
version('0.11.4', sha256='7090dfc86f110e641d95798bcc75f8b1ca14cd56ed3feef491baaa6849629e51')
version('0.9.8', sha256='571855224411b88b759ba5d48b288ad2ba09997ebd259292f72b9246144b8101')
version('0.8.2', sha256='1bf4f26ce0b14f7334c0c29868154f1e600021a044f7565a5ad766b5ecdde911')
depends_on('python@3:', when='@0.9.8:', type=('build', 'run'))
depends_on('py-setuptools', type='build')
@@ -33,3 +35,4 @@ class PyPetastorm(PythonPackage):
depends_on('py-pyarrow@0.12.0:', type=('build', 'run'), when='@:0.8.2')
depends_on('py-pyarrow@0.17.1:', type=('build', 'run'), when='@0.9.8:')
depends_on('py-six@1.5.0:', type=('build', 'run'))
depends_on('py-fsspec', type=('build', 'run'), when='@0.11.4:')

View File

@@ -0,0 +1,36 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PyPhydms(PythonPackage):
"""phydms enables phylogenetic analyses using deep mutational scanning data
to inform the substitution models. It implements Experimentally informed
codon models (ExpCM) for phylogenetic inference and the detection of
biologically interesting selection."""
homepage = "http://jbloomlab.github.io/phydms"
pypi = "phydms/phydms-2.4.1.tar.gz"
version('2.4.1', sha256='04eb50bdb07907214050d19214d9bc8cf2002e24ca30fbe6e0f23f013d584d5c')
depends_on('python@3.5:', type=('build', 'run'))
depends_on('py-setuptools', type='build')
depends_on('py-biopython@1.67:', type=('build', 'run'))
depends_on('py-cython@0.28:', type=('build', 'run'))
depends_on('py-numpy@1.16.5:', type=('build', 'run'))
depends_on('py-scipy@0.18:', type=('build', 'run'))
depends_on('py-matplotlib@2.0.2:', type=('build', 'run'))
depends_on('py-natsort@5.0.1:', type=('build', 'run'))
depends_on('py-sympy@1.0:', type=('build', 'run'))
depends_on('py-six@1.10:', type=('build', 'run'))
depends_on('py-pandas@0.20.2:', type=('build', 'run'))
depends_on('py-pyvolve@1.0.3:', type=('build', 'run'))
depends_on('py-statsmodels@0.8:', type=('build', 'run'))
depends_on('py-weblogo@3.4:3.5', type=('build', 'run'))
depends_on('py-pypdf2@1.26:', type=('build', 'run'))


@@ -17,6 +17,7 @@ class PyPip(Package):
maintainers = ['adamjstewart']
version('22.1.2', sha256='a3edacb89022ef5258bf61852728bf866632a394da837ca49eb4303635835f17', expand=False)
version('21.3.1', sha256='deaf32dcd9ab821e359cd8330786bcd077604b5c5730c0b096eda46f95c24a2d', expand=False)
version('21.1.2', sha256='f8ea1baa693b61c8ad1c1d8715e59ab2b93cd3c4769bacab84afcc4279e7a70e', expand=False)
version('20.2', sha256='d75f1fc98262dabf74656245c509213a5d0f52137e40e8f8ed5cc256ddd02923', expand=False)
@@ -28,6 +29,7 @@ class PyPip(Package):
version('9.0.1', sha256='690b762c0a8460c303c089d5d0be034fb15a5ea2b75bdf565f40421f542fefb0', expand=False)
extends('python')
depends_on('python@3.7:', when='@22:', type=('build', 'run'))
depends_on('python@3.6:', when='@21:', type=('build', 'run'))
depends_on('python@2.7:2.8,3.5:', when='@19.2:', type=('build', 'run'))
depends_on('python@2.7:2.8,3.4:', when='@18:', type=('build', 'run'))


@@ -2,6 +2,8 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
@@ -10,20 +12,46 @@ class PyPlotnine(PythonPackage):
based on ggplot2. The grammar allows users to compose plots by explicitly
mapping data to the visual objects that make up the plot."""
pypi = "plotnine/plotnine-0.8.0.tar.gz"
homepage = "https://plotnine.readthedocs.io/en/stable"
pypi = "plotnine/plotnine-0.8.0.tar.gz"
version(
"0.8.0",
sha256="39de59edcc28106761b65238647d0b1f6212ea7f3a78f8be0b846616db969276",
)
version('0.9.0', sha256='0e89a93015f3c71d6844ac7aa9fb0da09b908f5f7dfa7dd5d68a5ca32b2ebcea')
version('0.8.0', sha256='39de59edcc28106761b65238647d0b1f6212ea7f3a78f8be0b846616db969276')
version('0.7.1', sha256='02f2b0435dae2e917198c5367fd97b010445d64d9888c6b7e755d3cdfe7ad057')
version('0.7.0', sha256='8ee67cbf010ccea32670760e930b7b02177030a89ccdf85e35d156a96ce36cd3')
version('0.6.0', sha256='aae2c8164abb209ef4f28cab01132d23f6879fcf8d492657487359e1241459e5')
depends_on('python@3.6:', type=('build', 'run'))
depends_on("py-descartes@1.1.0:", type=("build", "run"))
depends_on("py-matplotlib@3.1.1:", type=("build", "run"))
depends_on("py-mizani@0.7.3:", type=("build", "run"))
depends_on("py-numpy@1.19.0:", type=("build", "run"))
depends_on("py-pandas@1.1.0:", type=("build", "run"))
depends_on("py-patsy@0.5.1:", type=("build", "run"))
depends_on("py-scipy@1.5.0:", type=("build", "run"))
depends_on("py-setuptools", type="build")
depends_on("py-statsmodels@0.12.1:", type=("build", "run"))
depends_on('python@3.8:', type=('build', 'run'), when='@0.9.0:')
depends_on('python@3.6:', type=('build', 'run'), when='@0.6.0:')
depends_on("py-setuptools@59:", type="build", when='@0.9.0:')
depends_on("py-setuptools", type="build", when='@0.6.0:')
depends_on("py-setuptools-scm@6.4:+toml", type="build", when='@0.9.0:')
depends_on("py-descartes@1.1.0:", type=("build", "run"), when='@:0.8.0')
depends_on("py-matplotlib@3.5.0:", type=("build", "run"), when='@0.9.0:')
depends_on("py-matplotlib@3.1.1:", type=("build", "run"), when='@0.6.0:')
depends_on("py-mizani@0.7.3:", type=("build", "run"), when='@0.8.0:')
depends_on("py-mizani@0.6.0:", type=("build", "run"), when='@0.6.0:')
depends_on("py-numpy@1.19.0:", type=("build", "run"), when='@0.8.0:')
depends_on("py-numpy@1.16.0:", type=("build", "run"), when='@0.6.0:')
depends_on("py-pandas@1.3.5:", type=("build", "run"), when='@0.9.0:')
depends_on("py-pandas@1.1.0:", type=("build", "run"), when='@0.7.1:')
depends_on("py-pandas@1.0.3:", type=("build", "run"), when='@0.7.0:')
depends_on("py-pandas@0.25.0:", type=("build", "run"), when='@0.6.0:')
depends_on("py-patsy@0.5.1:", type=("build", "run"), when='@0.7.0:')
depends_on("py-patsy@0.4.1:", type=("build", "run"), when='@0.6.0:')
depends_on("py-scipy@1.5.0:", type=("build", "run"), when='@0.8.0:')
depends_on("py-scipy@1.2.0:", type=("build", "run"), when='@0.6.0:')
depends_on("py-statsmodels@0.13.2:", type=("build", "run"), when='@0.9.0:')
depends_on("py-statsmodels@0.12.1:", type=("build", "run"), when='@0.8.0:')
depends_on("py-statsmodels@0.11.1:", type=("build", "run"), when='@0.7.0:')
depends_on("py-statsmodels@0.9.0:", type=("build", "run"), when='@0.6.0:')


@@ -54,7 +54,7 @@ class PyPyarrow(PythonPackage, CudaPackage):
depends_on('arrow+cuda' + v, when='+cuda' + v)
depends_on('arrow+orc' + v, when='+orc' + v)
patch('for_aarch64.patch', when='target=aarch64:')
patch('for_aarch64.patch', when='@0 target=aarch64:')
def install_options(self, spec, prefix):
args = []


@@ -0,0 +1,24 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PyPypdf2(PythonPackage):
"""PyPDF2 is a free and open source pure-python PDF library capable of
splitting, merging, cropping, and transforming the pages of PDF files.
It can also add custom data, viewing options, and passwords to PDF files.
PyPDF2 can retrieve text and metadata from PDFs as well."""
homepage = "https://pypdf2.readthedocs.io/en/latest/"
pypi = "PyPDF2/PyPDF2-2.5.0.tar.gz"
version('2.5.0', sha256='5802b1f40fa79be1b5ab9edc95a4e7f7e73399589db4f0e66ca831f449e7a2cd')
version('1.26.0', sha256='e28f902f2f0a1603ea95ebe21dff311ef09be3d0f0ef29a3e44a932729564385')
depends_on('python@3.6:', type=('build', 'run'), when='@2.0.0:')
depends_on('py-setuptools', type='build')
depends_on('py-typing-extensions', type=('build', 'run'), when='@2.0.0:^python@:3.9')


@@ -3,6 +3,7 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
@@ -14,13 +15,22 @@ class PyPystan(PythonPackage):
maintainers = ['haralmha']
version('3.4.0', sha256='325e2fb0ab804555c05a603e0c9152ab11fcc3af01f3e9a9ff9fe9954b93184f')
version('3.5.0', sha256='078571d071a5b7c0af59206d4994a0979f4ac4b61f4a720b640c44fe35514929')
version('3.4.0', sha256='325e2fb0ab804555c05a603e0c9152ab11fcc3af01f3e9a9ff9fe9954b93184f')
version('2.19.1.1', sha256='fa8bad8dbc0da22bbe6f36af56c9abbfcf10f92df8ce627d59a36bd8d25eb038')
version('2.19.0.0', sha256='b85301b960d5991918b40bd64a4e9321813657a9fc028e0f39edce7220a309eb')
depends_on('python@3.8:3', type=('build', 'run'))
depends_on('py-setuptools', type=('build', 'run'))
depends_on('py-poetry-core@1.0.0:', type='build')
depends_on('py-aiohttp@3.6:3', type=('build', 'run'))
depends_on('py-httpstan@4.7', type=('build', 'run'))
depends_on('py-pysimdjson@3.2:3', type=('build', 'run'))
depends_on('py-numpy@1.19:1', type=('build', 'run'))
depends_on('py-clikit@0.6', type=('build', 'run'))
# common requirements
depends_on('py-setuptools', type=('build', 'run'))
depends_on('py-poetry-core@1.0.0:', type=('build', 'run'))
# variable requirements
depends_on('python@3.8:3', type=('build', 'run'), when='@3.4.0:')
depends_on('py-aiohttp@3.6:3', type=('build', 'run'), when='@3.4.0:')
depends_on('py-httpstan@4.8', type=('build', 'run'), when='@3.5.0:')
depends_on('py-httpstan@4.7', type=('build', 'run'), when='@3.4')
depends_on('py-pysimdjson@3.2:3', type=('build', 'run'), when='@3.4.0:')
depends_on('py-numpy@1.19:1', type=('build', 'run'), when='@3.4.0:')
depends_on('py-numpy@1.7:', type=('build', 'run'), when='@2.19.0.0:')
depends_on('py-clikit@0.6', type=('build', 'run'), when='@3.4.0:')
depends_on('py-cython@0.22:0.23.2,0.25.2:', type=('build', 'run'), when='@:2.19.1.1')


@@ -0,0 +1,25 @@
# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.package import *
class PyPyvolve(PythonPackage):
"""Pyvolve is an open-source Python module for simulating sequences
along a phylogenetic tree according to continuous-time Markov models
of sequence evolution"""
homepage = "https://github.com/sjspielman/pyvolve"
pypi = "Pyvolve/Pyvolve-1.1.0.tar.gz"
version('1.1.0', sha256='850aae6213a95c3f8c438ef7cdab33f4dafe8ef305b6fa85bbea1a9e7484c787')
version('1.0.3', sha256='725d5851f24b3b4564970a999bad8e2e90782cf81a07c3a3370c492a956d9d51')
depends_on('py-setuptools', type='build')
depends_on('py-biopython', type=('build', 'run'))
depends_on('py-numpy@1.20.0:', type=('build', 'run'), when='@1.1.0:')
depends_on('py-numpy@1.7:', type=('build', 'run'), when='@1.0.3:')
depends_on('py-scipy', type=('build', 'run'))


@@ -12,6 +12,7 @@ class PyStatsmodels(PythonPackage):
homepage = "https://www.statsmodels.org"
pypi = "statsmodels/statsmodels-0.8.0.tar.gz"
version('0.13.2', sha256='77dc292c9939c036a476f1770f9d08976b05437daa229928da73231147cde7d4')
version('0.13.1', sha256='006ec8d896d238873af8178d5475203844f2c391194ed8d42ddac37f5ff77a69')
version('0.13.0', sha256='f2efc02011b7240a9e851acd76ab81150a07d35c97021cb0517887539a328f8a')
version('0.12.2', sha256='8ad7a7ae7cdd929095684118e3b05836c0ccb08b6a01fe984159475d174a1b10')
@@ -53,6 +54,7 @@ class PyStatsmodels(PythonPackage):
depends_on('py-scipy@0.18:', type=('build', 'run'), when='@0.10.1:')
depends_on('py-scipy@1.2:', type=('build', 'run'), when='@0.12.0:')
depends_on('py-scipy@1.3:', type=('build', 'run'), when='@0.13.0:')
depends_on('py-packaging@21.3:', type=('build', 'run'), when='@0.13.2:')
depends_on('py-matplotlib@1.3:', type=('build', 'run'), when='@0.8.0 +plotting')
depends_on('py-pytest', type='test')


@@ -31,7 +31,7 @@ class PyTensorflowHub(Package):
depends_on('py-protobuf@3.8.0:', type=('build', 'run'))
patch("https://github.com/tensorflow/hub/commit/049192a7edd3e80eebf1735b93f57c7965381bdb.patch?full_index=1",
sha256="a825b2dd96d8f1ff1aaf2e4c9e2cbb52d3d75609909fce960e1cfa681040c4c3",
sha256="c8b59d17511a8ebd2a58717723b9b77514a12b43bb2e6acec6d0c1062df6e457",
when="@:0.12")
def install(self, spec, prefix):


@@ -15,6 +15,7 @@ class PyTypingExtensions(PythonPackage):
homepage = "https://github.com/python/typing/tree/master/typing_extensions"
pypi = "typing_extensions/typing_extensions-3.7.4.tar.gz"
version('4.3.0', sha256='e6d2677a32f47fc7eb2795db1dd15c1f34eff616bcaf2cfb5e997f854fa1c4a6')
version('4.2.0', sha256='f1c24655a0da0d1b67f07e17a5e6b2a105894e6824b92096378bb3668ef02376')
version('4.1.1', sha256='1a9462dcc3347a79b1f1c0271fbe79e844580bb598bafa1ed208b94da3cdcd42')
version('3.10.0.2', sha256='49f75d16ff11f1cd258e1b988ccff82a3ca5570217d7ad8c5f48205dd99a677e')


@@ -11,9 +11,11 @@ class PyWeblogo(PythonPackage):
sequence logos as easy and painless as possible."""
homepage = "http://weblogo.threeplusone.com"
pypi = "weblogo/weblogo-3.6.0.tar.gz"
pypi = "weblogo/weblogo-3.6.0.tar.gz"
version('3.6.0', sha256='af5a9f065581f18d71bd7c22b160c1e443932f22cab992d439d3dc8757c80a85')
version('3.6.0', sha256='af5a9f065581f18d71bd7c22b160c1e443932f22cab992d439d3dc8757c80a85')
version('3.5.0', sha256='84e39ee7c4f70efea55d6a92b3efdc4d2602b3d32a793f98865bca35e6bd1133')
version('3.4', sha256='1fb661df47252064dd6d59d3c340b24d87bebe9048ca9ada904ac1e95669e08f')
depends_on('py-setuptools', type='build')
depends_on('ghostscript', type=('build', 'run'))


@@ -42,7 +42,7 @@ def post_install(self):
join_path(spec['gdk-pixbuf'].prefix.lib, 'girepository-1.0'),
join_path(spec['gtkplus'].prefix.lib, 'girepository-1.0'),
join_path(spec['harfbuzz'].prefix.lib, 'girepository-1.0'))
dst = join_path(python_platlib, 'xdot', '__init__.py')
dst = join_path(python_purelib, 'xdot', '__init__.py')
filter_file("import sys",
"import sys\nimport os\nos.environ['GI_TYPELIB_PATH']" +
" = '%s'" % repo_paths, dst)


@@ -45,3 +45,9 @@ def libs(self):
else:
return find_libraries('libqhull', self.prefix,
shared=True, recursive=True)
@run_after('install')
def darwin_fix(self):
# The shared library is not installed correctly on Darwin; fix this
if self.spec.satisfies('platform=darwin'):
fix_darwin_install_name(self.prefix.lib)


@@ -85,23 +85,18 @@ def check(self):
depends_on('llvm-amdgpu@' + ver, type='build', when='@' + ver)
depends_on('rocminfo@' + ver, type='build', when='@' + ver)
for ver in ['3.5.0', '3.7.0', '3.8.0', '3.9.0']:
depends_on('rocm-smi@' + ver, type='build', when='@' + ver)
for ver in ['4.0.0', '4.1.0', '4.2.0', '4.3.0', '4.3.1', '4.5.0', '4.5.2',
'5.0.0', '5.0.2', '5.1.0', '5.1.3']:
depends_on('rocm-smi-lib@' + ver, type='build', when='@' + ver)
# This is the default library format since 3.7.0
depends_on('msgpack-c@3:', when='@3.7:')
depends_on('python@3.6:', type='build')
depends_on('py-virtualenv', type='build')
depends_on('perl-file-which', type='build')
depends_on('py-pyyaml', type='build')
depends_on('py-wheel', type='build')
depends_on('py-msgpack', type='build')
depends_on('py-pip', type='build')
with when('+tensile'):
# default library format since 3.7.0
depends_on('msgpack-c@3:', when='@3.7:')
depends_on('py-virtualenv', type='build')
depends_on('perl-file-which', type='build')
depends_on('py-pyyaml', type='build')
depends_on('py-wheel', type='build')
depends_on('py-msgpack', type='build')
depends_on('py-pip', type='build')
for t_version, t_commit in [
('@3.5.0', 'f842a1a4427624eff6cbddb2405c36dec9a210cd'),

@@ -235,6 +235,7 @@ class SimmetrixSimmodsuite(Package):
variant('paralleladapt', default=False, description='enable parallel adaptation')
depends_on('mpi')
depends_on('rpc')
oslib = 'x64_rhel7_gcc48'

@@ -12,6 +12,7 @@ class Snappy(CMakePackage):
homepage = "https://github.com/google/snappy"
url = "https://github.com/google/snappy/archive/1.1.8.tar.gz"
version('1.1.9', sha256='75c1fbb3d618dd3a0483bff0e26d0a92b495bbe5059c8b4f1c962b478b6e06e7')
version('1.1.8', sha256='16b677f07832a612b0836178db7f374e414f94657c138e6993cbfc5dcc58651f')
version('1.1.7', sha256='3dfa02e873ff51a11ee02b9ca391807f0c8ea0529a4924afa645fbf97163f9d4')
@@ -20,13 +21,14 @@ class Snappy(CMakePackage):
depends_on('googletest', type='test')
patch('link_gtest.patch')
patch('link_gtest.patch', when='@:1.1.8')
def cmake_args(self):
return [
self.define('CMAKE_INSTALL_LIBDIR', self.prefix.lib),
self.define_from_variant('BUILD_SHARED_LIBS', 'shared'),
self.define('SNAPPY_BUILD_TESTS', self.run_tests),
self.define('SNAPPY_BUILD_BENCHMARKS', 'OFF'),
]
def flag_handler(self, name, flags):
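The `self.define(...)` and `self.define_from_variant(...)` helpers in `cmake_args` translate Python values into CMake `-D` cache flags. A simplified sketch of that translation (assumed behavior, reduced from Spack's helpers):

```python
def define(name, value):
    """Translate a Python value into a CMake -D cache flag
    (simplified sketch of a CMakePackage.define-style helper)."""
    if isinstance(value, bool):
        # CMake booleans are conventionally spelled ON/OFF.
        value = 'ON' if value else 'OFF'
    return '-D{}={}'.format(name, value)

print(define('SNAPPY_BUILD_TESTS', False))          # -DSNAPPY_BUILD_TESTS=OFF
print(define('CMAKE_INSTALL_LIBDIR', '/opt/snappy/lib'))
```

`define_from_variant` builds on the same idea, looking up the variant's current value on the spec before formatting the flag.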

@@ -17,6 +17,7 @@ class Tamaas(SConsPackage):
maintainers = ["prs513rosewood"]
version("master", branch="master")
version("2.5.0.post1", sha256="28e52dc5b8a5f77588c73a6ef396c44c6a8e9d77e3e4929a4ab07232dc9bc565")
version("2.4.0", sha256="38edba588ff3a6643523c28fb391e001dbafa9d0e58053b9e080eda70f8c71c9")
version("2.3.1", sha256="7d63e374cbc7b5b93578ece7be5c084d1c2f0dbe1d57c4f0c8abd5ff5fff9ab0")
version("2.3.0", sha256="0529e015c6cb5bbabaea5dce6efc5ec0f2aa76c00541f0d90ad0e2e3060a4520")
@@ -61,6 +62,7 @@ def build_args(self, spec, prefix):
"real_type=double",
"integer_type=int",
"build_tests=False",
"doc_builders=none",
"prefix={}".format(prefix),
"BOOST_ROOT={}".format(spec['boost'].prefix),
"THRUST_ROOT={}".format(spec['thrust'].prefix),