Compare commits

...

22 Commits

Author SHA1 Message Date
Gregory Becker
a8d6533b09 Update changelog for v0.14.3 2020-07-10 17:15:40 -07:00
Gregory Becker
93bd27dc19 bump version number to v0.14.3 2020-07-10 17:04:38 -07:00
Michael Kuhn
ddc79ce4df autotools bugfix: handle missing config.guess (#17356)
Spack was attempting to calculate abspath on the located config.guess
path even when it was not found (None); this commit skips the abspath
calculation when config.guess is not found.
2020-07-10 16:59:31 -07:00
Michael Kuhn
e9e2a84be1 autotools: Fix config.guess detection, take two (#17333)
The previous fix from #17149 contained a thinko that produced errors for
packages that override `configure_directory`.
2020-07-10 16:58:01 -07:00
Michael Kuhn
eb3792ec65 autotools: Fix config.guess detection (#17149)
The config.guess detection used a relative path that did not work in
combination with `check_call`. Use an absolute path instead.
2020-07-10 16:57:31 -07:00
Joseph Ciurej
ef1b8c1916 revise 'autotools' automated 'config.*' update mechanism to support 'config.sub' 2020-07-10 16:56:46 -07:00
Peter Scheibel
5a6f8cf671 Fix global activation check for upstream extendees (#17231)
* short-circuit is_activated check when the extendee is installed upstream

* add test for checking activation status of packages with an extendee installed upstream
2020-07-10 16:49:47 -07:00
cedricchevalier19
750ca36a8d Fix gcc + binutils compilation. (#9024)
* fix binutils deptype for gcc

binutils needs to be a run dependency of gcc

* Fix gcc+binutils build on RHEL7+

static-libstdc++ is not available with the system gcc.
In any case, since it is only used for bootstrapping, we do not mind
depending on a shared libstdc++.

Co-authored-by: Michael Kuhn <michael@ikkoku.de>
2020-07-10 16:49:05 -07:00
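Dependency types are how a Spack package declares when a dependency must be available: at build time, at link time, or at run time. A hypothetical sketch of how a run-type dependency like the one described above is declared in a package recipe; the class, variant, and condition here are illustrative and not taken from the real gcc package:

    from spack import *  # Spack package DSL: package base classes, depends_on, variant

    class Gcc(AutotoolsPackage):
        """Illustrative excerpt only, not the real gcc recipe."""

        variant('binutils', default=False,
                description='Build with a Spack-provided binutils')

        # type=('build', 'run') makes binutils available both while gcc is
        # being built and whenever the installed gcc is later used.
        depends_on('binutils', type=('build', 'run'), when='+binutils')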
Greg Becker
476961782d make gcc build on aarch64 (#17280) 2020-07-10 16:48:41 -07:00
Todd Gamblin
ac51bfb530 bugfix: no infinite recursion in setup-env.sh on Cray
On Cray platforms, we rely heavily on the module system to figure out
what targets, compilers, etc. are available. This unfortunately means
that we shell out to the `module` command as part of platform
initialization.

Because we run subcommands in a shell, we can get infinite recursion if
`setup-env.sh` and friends are in some init script like `.bashrc`.

This fixes the infinite loop by adding guards around `setup-env.sh`,
`setup-env.csh`, and `setup-env.fish`, to prevent recursive
initializations of Spack. This is safe because Spack never shells out to
itself, so we do not need it to be initialized in subshells.

- [x] add recursion guard around `setup-env.sh`
- [x] add recursion guard around `setup-env.csh`
2020-07-10 16:47:53 -07:00
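The guard itself lives in the shell setup scripts, but the pattern is simple: set a sentinel variable before doing any work that may spawn subshells, and bail out early if the sentinel is already present. A rough Python analogue of that idea (the variable name is made up for illustration):

    import os
    import subprocess

    GUARD = 'MY_TOOL_INITIALIZING'  # hypothetical sentinel variable

    def initialize():
        # If an outer invocation is already initializing, return immediately.
        # This is what breaks the init -> subshell -> init recursion.
        if os.environ.get(GUARD) == '1':
            return
        os.environ[GUARD] = '1'
        try:
            # Platform setup may shell out (e.g. to `module`), and that shell
            # may source the user's rc files, re-entering initialize().
            subprocess.check_call(['true'], env=os.environ)
        finally:
            del os.environ[GUARD]

    initialize()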
Peter Scheibel
a57689084d add public spack mirror (#17077) 2020-07-10 16:46:08 -07:00
Greg Becker
9c62115101 installation: skip repository metadata for externals (#16954)
When Spack installs a package, it stores repository package.py files
for it and all of its dependencies - any package with a Spack metadata
directory in its installation prefix.

It turns out this was too broad: this ends up including external
packages installed by Spack (e.g. installed by another Spack instance).
Currently Spack doesn't store the namespace properly for such packages,
so even though the package file could be fetched from the external,
Spack is unable to locate it.

This commit avoids the issue by skipping any attempt to locate and copy
from the package repository of externals, regardless of whether they
have a Spack repo directory.
2020-07-10 16:43:48 -07:00
Massimiliano Culpo
328d512341 commands: use a single ThreadPool for spack versions (#16749)
This fixes a fork bomb in `spack versions`. Recursive generation of pools
to scrape URLs in `_spider` was creating large numbers of processes.
Instead of recursively creating process pools, we now use a single
`ThreadPool` with a concurrency limit.

More on the issue: about 10 users running `spack versions` at the same
time on front-end nodes caused a kernel lockup due to the high number
of sockets opened (the sysadmins reported ~210k distributed over 3 nodes).
The users were internal, so they had `ulimit -n` set to ~70k.

The forking behavior could be observed by just running:

    $ spack versions boost

and checking the number of processes spawned. The number of processes
per se was not the issue; rather, each of them opens a socket,
which can stress `iptables`.

In the original issue the kernel watchdog was reporting:

    Message from syslogd@login03 at May 19 12:01:30 ...
    kernel:Watchdog CPU:110 Hard LOCKUP
    Message from syslogd@login03 at May 19 12:01:31 ...
    kernel:watchdog: BUG: soft lockup - CPU#110 stuck for 23s! [python3:2756]
    Message from syslogd@login03 at May 19 12:01:31 ...
    kernel:watchdog: BUG: soft lockup - CPU#94 stuck for 22s! [iptables:5603]
2020-07-10 16:43:12 -07:00
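The essence of the fix is to stop creating a new process pool per page and instead push all URLs through one thread pool whose size caps the number of in-flight requests (and therefore open sockets). A minimal, self-contained sketch of that pattern; the fetch function and URLs are placeholders, not Spack code:

    import multiprocessing.pool

    def fetch(url):
        # Placeholder for the real page fetch; just echoes the URL back.
        return url

    urls = ['https://example.com/page/{0}'.format(i) for i in range(100)]

    # One ThreadPool with a fixed worker count bounds concurrency globally,
    # instead of recursively spawning a pool for every page that is visited.
    pool = multiprocessing.pool.ThreadPool(processes=32)
    try:
        results = pool.map(fetch, urls)
    finally:
        pool.terminate()
        pool.join()

    print(len(results))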
Massimiliano Culpo
0762d8356d spack uninstall: improve help message (#16886)
fixes #12527

Mention in the help message that specs can also be uninstalled
by hash. Reference `spack gc` in case people are looking for ways
to clean the store of build-time dependencies.

Use "spec" instead of "package" to avoid ambiguity in
the error message.
2020-07-10 16:40:28 -07:00
Marc Allen
322a12e801 npm: use mkdirp instead of mkdir (#16835)
fixes #16833

Co-authored-by: Marc Allen <mrcall@amazon.com>
2020-07-10 16:37:25 -07:00
Massimiliano Culpo
a69674c73e containerize: allow 0.14.1-0.14.3 versions (#16824) 2020-07-10 16:35:39 -07:00
Sergey Kosukhin
e2f5f668a9 concretize: fix UnboundLocalError due to import within a function (#16809) 2020-07-10 16:32:31 -07:00
Greg Becker
dc59fc7ab8 bugfix: reorder variants in Spec strings (#16462)
* change print order for variants to avoid zsh parsing bugs

* change tests for new variant parse order
2020-07-10 16:28:59 -07:00
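The zsh problem comes from how boolean variants (`+foo`/`~foo`) interact with `key=value` pairs when a printed spec is pasted back onto a command line; printing the boolean variants first sidesteps it. A small sketch of that ordering idea, with a made-up variant representation rather than Spack's own classes:

    variants = {'debug': '2', 'qt_4': False, 'shared': True}

    def render(name, value):
        # Boolean variants render as +name / ~name; everything else as name=value.
        if isinstance(value, bool):
            return ('+' if value else '~') + name
        return '{0}={1}'.format(name, value)

    # Booleans first, then key=value pairs, each group sorted by name.
    ordered = sorted(variants.items(),
                     key=lambda kv: (not isinstance(kv[1], bool), kv[0]))
    print(' '.join(render(n, v) for n, v in ordered))  # ~qt_4 +shared debug=2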
Todd Gamblin
473424ad60 bugfix: spack shouldn't fail in an incomplete environment (#16473)
Fixes #15884.

Spack asks every package linked into an environment to tell us how
environment variables should be modified when a spack environment is
activated. As part of this, specs in an environment are symlinked into
the environment's view (see #13249), and the package calculates
environment modifications with *the default view as the prefix*.

All of this works nicely for pointing the user's environment at the view
*if* every package is successfully linked. Unfortunately, right now we
only track what specs "should" be in a view, not which specs actually
are. So we end up calculating environment modifications on things that
aren't linked into the view, and the exception isn't caught, so lots of
spack commands end up failing.

This fixes the issue by ignoring and warning about specs where
calculating environment modifications fails. So we can still keep using
Spack even if the current environment is incomplete.

We should probably also just avoid computing env modifications *entirely*
for unlinked packages, but right now that is a slow operation (requires a
lot of YAML parsing). We should revisit that when we have some better
state management for views, but the fix adopted here will still be
necessary, as we want spack commands to be resilient to other types of
bugs in `setup_run_environment()` and friends. That code lives in packages,
and we have to assume it could be buggy when we call it outside of builds
(where a failure would break more than just the build).
2020-07-10 16:27:58 -07:00
Greg Becker
3c1379c985 cmake build system: filter system paths from rpaths (#16612) 2020-07-10 16:27:26 -07:00
Massimiliano Culpo
b0dc57a939 spack info: replace "True, False" with "on, off" (#16235)
fixes #16184
2020-07-10 16:26:54 -07:00
Todd Gamblin
b99102f68c remove files accidentally committed with 0.14.0 (#16138) 2020-07-10 16:25:34 -07:00
30 changed files with 500 additions and 12072 deletions

View File

@@ -1,4 +1,18 @@
-# v0.14.2 (2019-04-15)
+# v0.14.3 (2020-07-10)
+
+This is a minor release on the `0.14` series. The latest release of
+Spack is `0.15.1`. This release includes bugfixes backported to the
+`0.14` series from `0.15.0` and `0.15.1`. These include
+
+* Spack has a public mirror for source files to prevent downtimes when sites go down (#17077)
+* Spack setup scripts no longer hang when sourced in .*rc files on Cray (#17386)
+* Spack commands no longer fail in incomplete spack environment (#16473)
+* Improved detection of config.guess and config.sub files (#16854, #17149, #17333, #17356)
+* GCC builds on aarch64 architectures and at spec `%gcc +binutils` (#17280, #9024)
+* Better cleaning of the build environment (#8623)
+* `spack versions` command no longer has potential to cause fork bomb (#16749)
+
+# v0.14.2 (2020-04-15)

 This is a minor release on the `0.14` series. It includes performance
 improvements and bug fixes:

@@ -13,7 +27,7 @@ improvements and bug fixes:

 * Avoid adding spurious `LMOD` env vars to Intel modules (#15778)
 * Don't output [+] for mock installs run during tests (#15609)

-# v0.14.1 (2019-03-20)
+# v0.14.1 (2020-03-20)

 This is a bugfix release on top of `v0.14.0`. Specific fixes include:

View File

@@ -0,0 +1,2 @@
+mirrors:
+  spack-public: https://spack-llnl-mirror.s3-us-west-2.amazonaws.com/

View File

@@ -5,7 +5,7 @@

 #: major, minor, patch version for Spack, in a tuple
-spack_version_info = (0, 14, 2)
+spack_version_info = (0, 14, 3)

 #: String containing Spack version joined with .'s
 spack_version = '.'.join(str(v) for v in spack_version_info)

View File

@@ -608,7 +608,7 @@ def get_rpaths(pkg):
     # module show output.
     if pkg.compiler.modules and len(pkg.compiler.modules) > 1:
         rpaths.append(get_path_from_module(pkg.compiler.modules[1]))
-    return rpaths
+    return list(dedupe(filter_system_paths(rpaths)))


 def get_std_cmake_args(pkg):
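Filtering keeps compiler- and package-specific directories in the RPATH while dropping directories the dynamic linker already searches by default, then dedupes the result. A minimal sketch of that behavior with stand-in helpers (the real Spack utilities live elsewhere in the codebase):

    import os

    SYSTEM_DIRS = {'/lib', '/lib64', '/usr/lib', '/usr/lib64', '/usr/local/lib'}

    def filter_system_paths(paths):
        # Drop directories the dynamic linker searches by default anyway.
        return [p for p in paths if os.path.normpath(p) not in SYSTEM_DIRS]

    def dedupe(items):
        # Keep the first occurrence of each entry, preserving order.
        seen = set()
        for item in items:
            if item not in seen:
                seen.add(item)
                yield item

    rpaths = ['/opt/spack/gcc-9.3.0/lib', '/usr/lib64', '/opt/spack/gcc-9.3.0/lib']
    print(list(dedupe(filter_system_paths(rpaths))))  # ['/opt/spack/gcc-9.3.0/lib']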

View File

@@ -56,8 +56,9 @@ class AutotoolsPackage(PackageBase):
     #: This attribute is used in UI queries that need to know the build
     #: system base class
     build_system_class = 'AutotoolsPackage'
-    #: Whether or not to update ``config.guess`` on old architectures
-    patch_config_guess = True
+    #: Whether or not to update ``config.guess`` and ``config.sub`` on old
+    #: architectures
+    patch_config_files = True
     #: Whether or not to update ``libtool``
     #: (currently only for Arm/Clang/Fujitsu compilers)
     patch_libtool = True
@@ -86,72 +87,92 @@ def archive_files(self):
         return [os.path.join(self.build_directory, 'config.log')]

     @run_after('autoreconf')
-    def _do_patch_config_guess(self):
-        """Some packages ship with an older config.guess and need to have
-        this updated when installed on a newer architecture. In particular,
-        config.guess fails for PPC64LE for version prior to a 2013-06-10
-        build date (automake 1.13.4) and for ARM (aarch64)."""
-
-        if not self.patch_config_guess or (
+    def _do_patch_config_files(self):
+        """Some packages ship with older config.guess/config.sub files and
+        need to have these updated when installed on a newer architecture.
+        In particular, config.guess fails for PPC64LE for version prior
+        to a 2013-06-10 build date (automake 1.13.4) and for ARM (aarch64)."""
+        if not self.patch_config_files or (
                 not self.spec.satisfies('target=ppc64le:') and
                 not self.spec.satisfies('target=aarch64:')
         ):
             return
-
-        my_config_guess = None
-        config_guess = None
-        if os.path.exists('config.guess'):
-            # First search the top-level source directory
-            my_config_guess = 'config.guess'
-        else:
-            # Then search in all sub directories.
-            # We would like to use AC_CONFIG_AUX_DIR, but not all packages
-            # ship with their configure.in or configure.ac.
-            d = '.'
-            dirs = [os.path.join(d, o) for o in os.listdir(d)
-                    if os.path.isdir(os.path.join(d, o))]
-            for dirname in dirs:
-                path = os.path.join(dirname, 'config.guess')
-                if os.path.exists(path):
-                    my_config_guess = path
-
-        if my_config_guess is not None:
-            try:
-                check_call([my_config_guess], stdout=PIPE, stderr=PIPE)
-                # The package's config.guess already runs OK, so just use it
-                return
-            except Exception as e:
-                tty.debug(e)
-        else:
-            return
-
-        # Look for a spack-installed automake package
-        if 'automake' in self.spec:
-            automake_path = os.path.join(self.spec['automake'].prefix, 'share',
-                                         'automake-' +
-                                         str(self.spec['automake'].version))
-            path = os.path.join(automake_path, 'config.guess')
-            if os.path.exists(path):
-                config_guess = path
-
-        # Look for the system's config.guess
-        if config_guess is None and os.path.exists('/usr/share'):
-            automake_dir = [s for s in os.listdir('/usr/share') if
-                            "automake" in s]
-            if automake_dir:
-                automake_path = os.path.join('/usr/share', automake_dir[0])
-                path = os.path.join(automake_path, 'config.guess')
-                if os.path.exists(path):
-                    config_guess = path
-
-        if config_guess is not None:
-            try:
-                check_call([config_guess], stdout=PIPE, stderr=PIPE)
-                mod = os.stat(my_config_guess).st_mode & 0o777 | stat.S_IWUSR
-                os.chmod(my_config_guess, mod)
-                shutil.copyfile(config_guess, my_config_guess)
-                return
-            except Exception as e:
-                tty.debug(e)
-
-        raise RuntimeError('Failed to find suitable config.guess')
+
+        # TODO: Expand this to select the 'config.sub'-compatible architecture
+        # for each platform (e.g. 'config.sub' doesn't accept 'power9le', but
+        # does accept 'ppc64le').
+        if self.spec.satisfies('target=ppc64le:'):
+            config_arch = 'ppc64le'
+        elif self.spec.satisfies('target=aarch64:'):
+            config_arch = 'aarch64'
+        else:
+            config_arch = 'local'
+
+        my_config_files = {'guess': None, 'sub': None}
+        config_files = {'guess': None, 'sub': None}
+        config_args = {'guess': [], 'sub': [config_arch]}
+
+        for config_name in config_files.keys():
+            config_file = 'config.{0}'.format(config_name)
+            if os.path.exists(config_file):
+                # First search the top-level source directory
+                my_config_files[config_name] = os.path.abspath(config_file)
+            else:
+                # Then search in all sub directories recursively.
+                # We would like to use AC_CONFIG_AUX_DIR, but not all packages
+                # ship with their configure.in or configure.ac.
+                config_path = next((os.path.abspath(os.path.join(r, f))
+                                    for r, ds, fs in os.walk('.') for f in fs
+                                    if f == config_file), None)
+                my_config_files[config_name] = config_path
+
+            if my_config_files[config_name] is not None:
+                try:
+                    config_path = my_config_files[config_name]
+                    check_call([config_path] + config_args[config_name],
+                               stdout=PIPE, stderr=PIPE)
+                    # The package's config file already runs OK, so just use it
+                    continue
+                except Exception as e:
+                    tty.debug(e)
+            else:
+                continue
+
+            # Look for a spack-installed automake package
+            if 'automake' in self.spec:
+                automake_dir = 'automake-' + str(self.spec['automake'].version)
+                automake_path = os.path.join(self.spec['automake'].prefix,
+                                             'share', automake_dir)
+                path = os.path.join(automake_path, config_file)
+                if os.path.exists(path):
+                    config_files[config_name] = path
+
+            # Look for the system's config.guess
+            if (config_files[config_name] is None and
+                    os.path.exists('/usr/share')):
+                automake_dir = [s for s in os.listdir('/usr/share') if
+                                "automake" in s]
+                if automake_dir:
+                    automake_path = os.path.join('/usr/share', automake_dir[0])
+                    path = os.path.join(automake_path, config_file)
+                    if os.path.exists(path):
+                        config_files[config_name] = path
+
+            if config_files[config_name] is not None:
+                try:
+                    config_path = config_files[config_name]
+                    my_config_path = my_config_files[config_name]
+
+                    check_call([config_path] + config_args[config_name],
+                               stdout=PIPE, stderr=PIPE)
+
+                    m = os.stat(my_config_path).st_mode & 0o777 | stat.S_IWUSR
+                    os.chmod(my_config_path, m)
+                    shutil.copyfile(config_path, my_config_path)
+                    continue
+                except Exception as e:
+                    tty.debug(e)
+
+            raise RuntimeError('Failed to find suitable ' + config_file)

     @run_after('configure')
     def _do_patch_libtool(self):

View File

@@ -114,10 +114,8 @@ def lines(self):
                 '{0} [{1}]'.format(k, self.default(v)),
                 width=self.column_widths[0]
             )
-            allowed = textwrap.wrap(
-                v.allowed_values,
-                width=self.column_widths[1]
-            )
+            allowed = v.allowed_values.replace('True, False', 'on, off')
+            allowed = textwrap.wrap(allowed, width=self.column_widths[1])
             description = textwrap.wrap(
                 v.description,
                 width=self.column_widths[2]

View File

@@ -26,7 +26,8 @@

 error_message = """You can either:
     a) use a more specific spec, or
-    b) use `spack uninstall --all` to uninstall ALL matching specs.
+    b) specify the spec by its hash (e.g. `spack uninstall /hash`), or
+    c) use `spack uninstall --all` to uninstall ALL matching specs.
 """

 # Arguments for display_specs when we find ambiguity

@@ -39,6 +40,18 @@


 def setup_parser(subparser):
+    epilog_msg = ("Specs to be uninstalled are specified using the spec syntax"
+                  " (`spack help --spec`) and can be identified by their "
+                  "hashes. To remove packages that are needed only at build "
+                  "time and were not explicitly installed see `spack gc -h`."
+                  "\n\nWhen using the --all option ALL packages matching the "
+                  "supplied specs will be uninstalled. For instance, "
+                  "`spack uninstall --all libelf` uninstalls all the versions "
+                  "of `libelf` currently present in Spack's store. If no spec "
+                  "is supplied, all installed packages will be uninstalled. "
+                  "If used in an environment, all packages in the environment "
+                  "will be uninstalled.")
+    subparser.epilog = epilog_msg
     subparser.add_argument(
         '-f', '--force', action='store_true', dest='force',
         help="remove regardless of whether other packages or environments "

@@ -47,12 +60,8 @@ def setup_parser(subparser):
         subparser, ['recurse_dependents', 'yes_to_all', 'installed_specs'])
     subparser.add_argument(
         '-a', '--all', action='store_true', dest='all',
-        help="USE CAREFULLY. Remove ALL installed packages that match each "
-             "supplied spec. i.e., if you `uninstall --all libelf`,"
-             " ALL versions of `libelf` are uninstalled. If no spec is "
-             "supplied, all installed packages will be uninstalled. "
-             "If used in an environment, all packages in the environment "
-             "will be uninstalled.")
+        help="remove ALL installed packages that match each supplied spec"
+    )


 def find_matching_specs(env, specs, allow_multiple_matches=False, force=False):

View File

@@ -21,6 +21,10 @@
 def setup_parser(subparser):
     subparser.add_argument('-s', '--safe-only', action='store_true',
                            help='only list safe versions of the package')
+    subparser.add_argument(
+        '-c', '--concurrency', default=32, type=int,
+        help='number of concurrent requests'
+    )
     arguments.add_common_arguments(subparser, ['package'])

@@ -45,7 +49,7 @@ def versions(parser, args):
     if sys.stdout.isatty():
         tty.msg('Remote versions (not yet checksummed):')

-    fetched_versions = pkg.fetch_remote_versions()
+    fetched_versions = pkg.fetch_remote_versions(args.concurrency)
     remote_versions = set(fetched_versions).difference(safe_versions)

     if not remote_versions:
View File

@@ -8,7 +8,10 @@
         "build_tags": {
             "develop": "latest",
             "0.14": "0.14",
-            "0.14.0": "0.14.0"
+            "0.14.0": "0.14.0",
+            "0.14.1": "0.14.1",
+            "0.14.2": "0.14.2",
+            "0.14.3": "0.14.3"
         }
     },
     "ubuntu:16.04": {

@@ -20,7 +23,10 @@
         "build_tags": {
             "develop": "latest",
             "0.14": "0.14",
-            "0.14.0": "0.14.0"
+            "0.14.0": "0.14.0",
+            "0.14.1": "0.14.1",
+            "0.14.2": "0.14.2",
+            "0.14.3": "0.14.3"
         }
     },
     "centos:7": {

@@ -32,7 +38,10 @@
         "build_tags": {
             "develop": "latest",
             "0.14": "0.14",
-            "0.14.0": "0.14.0"
+            "0.14.0": "0.14.0",
+            "0.14.1": "0.14.1",
+            "0.14.2": "0.14.2",
+            "0.14.3": "0.14.3"
         }
     },
     "centos:6": {

@@ -44,7 +53,10 @@
         "build_tags": {
             "develop": "latest",
             "0.14": "0.14",
-            "0.14.0": "0.14.0"
+            "0.14.0": "0.14.0",
+            "0.14.1": "0.14.1",
+            "0.14.2": "0.14.2",
+            "0.14.3": "0.14.3"
         }
     }
 }

View File

@@ -1091,6 +1091,25 @@ def regenerate_views(self):
         for view in self.views.values():
             view.regenerate(specs, self.roots())

+    def _env_modifications_for_default_view(self, reverse=False):
+        all_mods = spack.util.environment.EnvironmentModifications()
+
+        errors = []
+        for _, spec in self.concretized_specs():
+            if spec in self.default_view and spec.package.installed:
+                try:
+                    mods = uenv.environment_modifications_for_spec(
+                        spec, self.default_view)
+                except Exception as e:
+                    msg = ("couldn't get environment settings for %s"
+                           % spec.format("{name}@{version} /{hash:7}"))
+                    errors.append((msg, str(e)))
+                    continue
+
+                all_mods.extend(mods.reversed() if reverse else mods)
+
+        return all_mods, errors
+
     def add_default_view_to_shell(self, shell):
         env_mod = spack.util.environment.EnvironmentModifications()

@@ -1101,10 +1120,11 @@ def add_default_view_to_shell(self, shell):
         env_mod.extend(uenv.unconditional_environment_modifications(
             self.default_view))

-        for _, spec in self.concretized_specs():
-            if spec in self.default_view and spec.package.installed:
-                env_mod.extend(uenv.environment_modifications_for_spec(
-                    spec, self.default_view))
+        mods, errors = self._env_modifications_for_default_view()
+        env_mod.extend(mods)
+        if errors:
+            for err in errors:
+                tty.warn(*err)

         # deduplicate paths from specs mapped to the same location
         for env_var in env_mod.group_by_name():

@@ -1122,11 +1142,9 @@ def rm_default_view_from_shell(self, shell):
         env_mod.extend(uenv.unconditional_environment_modifications(
             self.default_view).reversed())

-        for _, spec in self.concretized_specs():
-            if spec in self.default_view and spec.package.installed:
-                env_mod.extend(
-                    uenv.environment_modifications_for_spec(
-                        spec, self.default_view).reversed())
+        mods, _ = self._env_modifications_for_default_view(reverse=True)
+        env_mod.extend(mods)

         return env_mod.shell_modifications(shell)

     def _add_concrete_spec(self, spec, concrete, new=True):

View File

@@ -398,9 +398,14 @@ def dump_packages(spec, path):
             source = spack.store.layout.build_packages_path(node)
             source_repo_root = os.path.join(source, node.namespace)

-            # There's no provenance installed for the source package. Skip it.
-            # User can always get something current from the builtin repo.
-            if not os.path.isdir(source_repo_root):
+            # If there's no provenance installed for the package, skip it.
+            # If it's external, skip it because it either:
+            # 1) it wasn't built with Spack, so it has no Spack metadata
+            # 2) it was built by another Spack instance, and we do not
+            #    (currently) use Spack metadata to associate repos with
+            #    externals built by other Spack instances.
+            # Spack can always get something current from the builtin repo.
+            if node.external or not os.path.isdir(source_repo_root):
                 continue

             # Create a source repo and get the pkg directory out of it.
View File

@@ -1019,6 +1019,11 @@ def is_activated(self, view):
         if not self.is_extension:
             raise ValueError(
                 "is_activated called on package that is not an extension.")
+        if self.extendee_spec.package.installed_upstream:
+            # If this extends an upstream package, it cannot be activated for
+            # it. This bypasses construction of the extension map, which can
+            # fail when run in the context of a downstream Spack instance
+            return False
         extensions_layout = view.extensions_layout
         exts = extensions_layout.extension_map(self.extendee_spec)
         return (self.name in exts) and (exts[self.name] == self.spec)

@@ -2001,7 +2006,7 @@ def all_urls(self):
                 urls.append(args['url'])
         return urls

-    def fetch_remote_versions(self):
+    def fetch_remote_versions(self, concurrency=128):
         """Find remote versions of this package.

         Uses ``list_url`` and any other URLs listed in the package file.

@@ -2014,7 +2019,8 @@ def fetch_remote_versions(self):
         try:
             return spack.util.web.find_versions_of_archive(
-                self.all_urls, self.list_url, self.list_depth)
+                self.all_urls, self.list_url, self.list_depth, concurrency
+            )
         except spack.util.web.NoNetworkConnectionError as e:
             tty.die("Package.fetch_versions couldn't connect to:", e.url,
                     e.message)

View File

@@ -29,7 +29,10 @@
                 },
                 'spack': {
                     'type': 'string',
-                    'enum': ['develop', '0.14', '0.14.0']
+                    'enum': [
+                        'develop',
+                        '0.14', '0.14.0', '0.14.1', '0.14.2', '0.14.3'
+                    ]
                 }
             },
             'required': ['image', 'spack']

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,282 +0,0 @@
%=============================================================================
% Generate
%=============================================================================
%-----------------------------------------------------------------------------
% Version semantics
%-----------------------------------------------------------------------------
% versions are declared w/priority -- declared with priority implies declared
version_declared(P, V) :- version_declared(P, V, _).
% If something is a package, it has only one version and that must be a
% possible version.
1 { version(P, V) : version_possible(P, V) } 1 :- node(P).
% If a version is declared but conflicted, it's not possible.
version_possible(P, V) :- version_declared(P, V), not version_conflict(P, V).
version_weight(P, V, N) :- version(P, V), version_declared(P, V, N).
#defined version_conflict/2.
%-----------------------------------------------------------------------------
% Dependency semantics
%-----------------------------------------------------------------------------
% Dependencies of any type imply that one package "depends on" another
depends_on(P, D) :- depends_on(P, D, _).
% declared dependencies are real if they're not virtual
depends_on(P, D, T) :- declared_dependency(P, D, T), not virtual(D), node(P).
% if you declare a dependency on a virtual, you depend on one of its providers
1 { depends_on(P, Q, T) : provides_virtual(Q, V) } 1
:- declared_dependency(P, V, T), virtual(V), node(P).
% if a virtual was required by some root spec, one provider is in the DAG
1 { node(P) : provides_virtual(P, V) } 1 :- virtual_node(V).
% for any virtual, there can be at most one provider in the DAG
provider(P, V) :- node(P), provides_virtual(P, V).
0 { provider(P, V) : node(P) } 1 :- virtual(V).
% give dependents the virtuals they want
provider_weight(D, N)
:- virtual(V), depends_on(P, D), provider(D, V),
pkg_provider_preference(P, V, D, N).
provider_weight(D, N)
:- virtual(V), depends_on(P, D), provider(D, V),
not pkg_provider_preference(P, V, D, _),
default_provider_preference(V, D, N).
% if there's no preference for something, it costs 100 to discourage its
% use with minimization
provider_weight(D, 100)
:- virtual(V), depends_on(P, D), provider(D, V),
not pkg_provider_preference(P, V, D, _),
not default_provider_preference(V, D, _).
% all nodes must be reachable from some root
needed(D) :- root(D), node(D).
needed(D) :- root(P), depends_on(P, D).
needed(D) :- needed(P), depends_on(P, D), node(P).
:- node(P), not needed(P).
% real dependencies imply new nodes.
node(D) :- node(P), depends_on(P, D).
% do not warn if generated program contains none of these.
#defined depends_on/3.
#defined declared_dependency/3.
#defined virtual/1.
#defined virtual_node/1.
#defined provides_virtual/2.
#defined pkg_provider_preference/4.
#defined default_provider_preference/3.
#defined root/1.
%-----------------------------------------------------------------------------
% Variant semantics
%-----------------------------------------------------------------------------
% one variant value for single-valued variants.
1 { variant_value(P, V, X) : variant_possible_value(P, V, X) } 1
:- node(P), variant(P, V), variant_single_value(P, V).
% at least one variant value for multi-valued variants.
1 { variant_value(P, V, X) : variant_possible_value(P, V, X) }
:- node(P), variant(P, V), not variant_single_value(P, V).
% if a variant is set to anything, it is considered 'set'.
variant_set(P, V) :- variant_set(P, V, _).
% variant_set is an explicitly set variant value. If it's not 'set',
% we revert to the default value. If it is set, we force the set value
variant_value(P, V, X) :- node(P), variant(P, V), variant_set(P, V, X).
% prefer default values.
variant_not_default(P, V, X, 1)
:- variant_value(P, V, X),
not variant_default_value(P, V, X),
node(P).
variant_not_default(P, V, X, 0)
:- variant_value(P, V, X),
variant_default_value(P, V, X),
node(P).
% suppress wranings about this atom being unset. It's only set if some
% spec or some package sets it, and without this, clingo will give
% warnings like 'info: atom does not occur in any rule head'.
#defined variant/2.
#defined variant_set/3.
#defined variant_single_value/2.
#defined variant_default_value/3.
#defined variant_possible_value/3.
%-----------------------------------------------------------------------------
% Platform/OS semantics
%-----------------------------------------------------------------------------
% one platform, os per node
% TODO: convert these to use optimization, like targets.
1 { node_platform(P, A) : node_platform(P, A) } 1 :- node(P).
1 { node_os(P, A) : node_os(P, A) } 1 :- node(P).
% arch fields for pkg P are set if set to anything
node_platform_set(P) :- node_platform_set(P, _).
node_os_set(P) :- node_os_set(P, _).
% if no platform/os is set, fall back to the defaults
node_platform(P, A)
:- node(P), not node_platform_set(P), node_platform_default(A).
node_os(P, A) :- node(P), not node_os_set(P), node_os_default(A).
% setting os/platform on a node is a hard constraint
node_platform(P, A) :- node(P), node_platform_set(P, A).
node_os(P, A) :- node(P), node_os_set(P, A).
% avoid info warnings (see variants)
#defined node_platform_set/2.
#defined node_os_set/2.
%-----------------------------------------------------------------------------
% Target semantics
%-----------------------------------------------------------------------------
% one target per node -- optimization will pick the "best" one
1 { node_target(P, T) : target(T) } 1 :- node(P).
% can't use targets on node if the compiler for the node doesn't support them
:- node_target(P, T), not compiler_supports_target(C, V, T),
node_compiler(P, C), node_compiler_version(P, C, V).
% if a target is set explicitly, respect it
node_target(P, T) :- node(P), node_target_set(P, T).
% each node has the weight of its assigned target
node_target_weight(P, N) :- node(P), node_target(P, T), target_weight(T, N).
#defined node_target_set/2.
%-----------------------------------------------------------------------------
% Compiler semantics
%-----------------------------------------------------------------------------
% one compiler per node
1 { node_compiler(P, C) : compiler(C) } 1 :- node(P).
1 { node_compiler_version(P, C, V) : compiler_version(C, V) } 1 :- node(P).
1 { compiler_weight(P, N) : compiler_weight(P, N) } 1 :- node(P).
% dependencies imply we should try to match hard compiler constraints
% todo: look at what to do about intersecting constraints here. we'd
% ideally go with the "lowest" pref in the DAG
node_compiler_match_pref(P, C) :- node_compiler_hard(P, C).
node_compiler_match_pref(D, C)
:- depends_on(P, D), node_compiler_match_pref(P, C),
not node_compiler_hard(D, _).
compiler_match(P, 1) :- node_compiler(P, C), node_compiler_match_pref(P, C).
node_compiler_version_match_pref(P, C, V)
:- node_compiler_version_hard(P, C, V).
node_compiler_version_match_pref(D, C, V)
:- depends_on(P, D), node_compiler_version_match_pref(P, C, V),
not node_compiler_version_hard(D, C, _).
compiler_version_match(P, 1)
:- node_compiler_version(P, C, V),
node_compiler_version_match_pref(P, C, V).
#defined node_compiler_hard/2.
#defined node_compiler_version_hard/3.
% compilers weighted by preference acccording to packages.yaml
compiler_weight(P, N)
:- node_compiler(P, C), node_compiler_version(P, C, V),
node_compiler_preference(P, C, V, N).
compiler_weight(P, N)
:- node_compiler(P, C), node_compiler_version(P, C, V),
not node_compiler_preference(P, C, _, _),
default_compiler_preference(C, V, N).
compiler_weight(P, 100)
:- node_compiler(P, C), node_compiler_version(P, C, V),
not node_compiler_preference(P, C, _, _),
not default_compiler_preference(C, _, _).
#defined node_compiler_preference/4.
#defined default_compiler_preference/3.
%-----------------------------------------------------------------------------
% Compiler flags
%-----------------------------------------------------------------------------
% propagate flags when compilers match
inherit_flags(P, D)
:- depends_on(P, D), node_compiler(P, C), node_compiler(D, C),
compiler(C), flag_type(T).
node_flag_inherited(D, T, F) :- node_flag_set(P, T, F), inherit_flags(P, D).
node_flag_inherited(D, T, F)
:- node_flag_inherited(P, T, F), inherit_flags(P, D).
% node with flags set to anythingg is "set"
node_flag_set(P) :- node_flag_set(P, _, _).
% remember where flags came from
node_flag_source(P, P) :- node_flag_set(P).
node_flag_source(D, Q) :- node_flag_source(P, Q), inherit_flags(P, D).
% compiler flags from compilers.yaml are put on nodes if compiler matches
node_flag(P, T, F),
node_flag_compiler_default(P)
:- not node_flag_set(P), compiler_version_flag(C, V, T, F),
node_compiler(P, C), node_compiler_version(P, C, V),
flag_type(T), compiler(C), compiler_version(C, V).
% if a flag is set to something or inherited, it's included
node_flag(P, T, F) :- node_flag_set(P, T, F).
node_flag(P, T, F) :- node_flag_inherited(P, T, F).
% if no node flags are set for a type, there are no flags.
no_flags(P, T) :- not node_flag(P, T, _), node(P), flag_type(T).
#defined compiler_version_flag/4.
#defined node_flag/3.
#defined node_flag_set/3.
%-----------------------------------------------------------------------------
% How to optimize the spec (high to low priority)
%-----------------------------------------------------------------------------
% weight root preferences higher
%
% TODO: how best to deal with this issue? It's not clear how best to
% weight all the constraints. Without this root preference, `spack solve
% hdf5` will pick mpich instead of openmpi, even if openmpi is the
% preferred provider, because openmpi has a version constraint on hwloc.
% It ends up choosing between settling for an old version of hwloc, or
% picking the second-best provider. This workaround weights root
% preferences higher so that hdf5's prefs are more important, but it's
% not clear this is a general solution. It would be nice to weight by
% distance to root, but that seems to slow down the solve a lot.
%
% One option is to make preferences hard constraints. Or maybe we need
% to look more closely at where a constraint came from and factor that
% into our weights. e.g., a non-default variant resulting from a version
% constraint counts like a version constraint. Needs more thought later.
%
root(D, 2) :- root(D), node(D).
root(D, 1) :- not root(D), node(D).
% prefer default variants
#minimize { N*R@10,P,V,X : variant_not_default(P, V, X, N), root(P, R) }.
% pick most preferred virtual providers
#minimize{ N*R@9,D : provider_weight(D, N), root(P, R) }.
% prefer more recent versions.
#minimize{ N@8,P,V : version_weight(P, V, N) }.
% compiler preferences
#maximize{ N@7,P : compiler_match(P, N) }.
#minimize{ N@6,P : compiler_weight(P, N) }.
% fastest target for node
% TODO: if these are slightly different by compiler (e.g., skylake is
% best, gcc supports skylake and broadweell, clang's best is haswell)
% things seem to get really slow.
#minimize{ N@5,P : node_target_weight(P, N) }.

View File

@@ -2134,6 +2134,8 @@ def concretize(self, tests=False):
            consistent with requirements of its packages. See flatten() and
            normalize() for more details on this.
         """
+        import spack.concretize
+
         if not self.name:
             raise spack.error.SpecError(
                 "Attempting to concretize anonymous spec")

@@ -2145,7 +2147,6 @@ def concretize(self, tests=False):
         force = False

         user_spec_deps = self.flat_dependencies(copy=False)
-        import spack.concretize
         concretizer = spack.concretize.Concretizer(self.copy())
         while changed:
             changes = (self.normalize(force, tests=tests,
View File

@@ -163,6 +163,28 @@ def test_env_install_single_spec(install_mockery, mock_fetch):
     assert e.specs_by_hash[e.concretized_order[0]].name == 'cmake-client'


+def test_env_modifications_error_on_activate(
+        install_mockery, mock_fetch, monkeypatch, capfd):
+    env('create', 'test')
+    install = SpackCommand('install')
+
+    e = ev.read('test')
+    with e:
+        install('cmake-client')
+
+    def setup_error(pkg, env):
+        raise RuntimeError("cmake-client had issues!")
+
+    pkg = spack.repo.path.get_pkg_class("cmake-client")
+    monkeypatch.setattr(pkg, "setup_run_environment", setup_error)
+
+    with e:
+        pass
+
+    _, err = capfd.readouterr()
+    assert "cmake-client had issues!" in err
+    assert "Warning: couldn't get environment settings" in err
+
+
 def test_env_install_same_spec_twice(install_mockery, mock_fetch, capfd):
     env('create', 'test')

View File

@@ -633,3 +633,8 @@ def test_compiler_version_matches_any_entry_in_compilers_yaml(self):
         s = Spec('mpileaks %gcc@4.5:')
         s.concretize()
         assert str(s.compiler.version) == '4.5.0'
+
+    def test_concretize_anonymous(self):
+        with pytest.raises(spack.error.SpecError):
+            s = Spec('+variant')
+            s.concretize()

View File

@@ -177,7 +177,7 @@ def test_full_specs(self):
             " ^stackwalker@8.1_1e")
         self.check_parse(
             "mvapich_foo"
-            " ^_openmpi@1.2:1.4,1.6%intel@12.1 debug=2 ~qt_4"
+            " ^_openmpi@1.2:1.4,1.6%intel@12.1~qt_4 debug=2"
             " ^stackwalker@8.1_1e")
         self.check_parse(
             'mvapich_foo'

@@ -185,7 +185,7 @@ def test_full_specs(self):
             ' ^stackwalker@8.1_1e')
         self.check_parse(
             "mvapich_foo"
-            " ^_openmpi@1.2:1.4,1.6%intel@12.1 debug=2 ~qt_4"
+            " ^_openmpi@1.2:1.4,1.6%intel@12.1~qt_4 debug=2"
            " ^stackwalker@8.1_1e arch=test-redhat6-x86")

     def test_canonicalize(self):

View File

@@ -408,3 +408,32 @@ def test_perl_activation_view(tmpdir, perl_and_extension_dirs,
     assert not os.path.exists(os.path.join(perl_prefix, 'bin/perl-ext-tool'))

     assert os.path.exists(os.path.join(view_dir, 'bin/perl-ext-tool'))
+
+
+def test_is_activated_upstream_extendee(tmpdir, builtin_and_mock_packages,
+                                        monkeypatch):
+    """When an extendee is installed upstream, make sure that the extension
+    spec is never considered to be globally activated for it.
+    """
+    extendee_spec = spack.spec.Spec('python')
+    extendee_spec._concrete = True
+
+    python_name = 'python'
+    tmpdir.ensure(python_name, dir=True)
+
+    python_prefix = str(tmpdir.join(python_name))
+    # Set the prefix on the package's spec reference because that is a copy of
+    # the original spec
+    extendee_spec.package.spec.prefix = python_prefix
+    monkeypatch.setattr(extendee_spec.package.__class__,
+                        'installed_upstream', True)
+
+    ext_name = 'py-extension1'
+    tmpdir.ensure(ext_name, dir=True)
+    ext_pkg = create_ext_pkg(
+        ext_name, str(tmpdir.join(ext_name)), extendee_spec, monkeypatch)
+
+    # The view should not be checked at all if the extendee is installed
+    # upstream, so use 'None' here
+    mock_view = None
+    assert not ext_pkg.is_activated(mock_view)

View File

@@ -694,7 +694,7 @@ def test_str(self):
         c['foobar'] = SingleValuedVariant('foobar', 'fee')
         c['feebar'] = SingleValuedVariant('feebar', 'foo')
         c['shared'] = BoolValuedVariant('shared', True)
-        assert str(c) == ' feebar=foo foo=bar,baz foobar=fee +shared'
+        assert str(c) == '+shared feebar=foo foo=bar,baz foobar=fee'


 def test_disjoint_set_initialization_errors():

View File

@ -2,125 +2,101 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details. # Spack Project Developers. See the top-level COPYRIGHT file for details.
# #
# SPDX-License-Identifier: (Apache-2.0 OR MIT) # SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""Tests for web.py."""
import os import os
import ordereddict_backport
import pytest import pytest
from ordereddict_backport import OrderedDict
import spack.paths import spack.paths
import spack.util.web as web_util import spack.util.web
from spack.version import ver from spack.version import ver
web_data_path = os.path.join(spack.paths.test_path, 'data', 'web') def _create_url(relative_url):
web_data_path = os.path.join(spack.paths.test_path, 'data', 'web')
root = 'file://' + web_data_path + '/index.html' return 'file://' + os.path.join(web_data_path, relative_url)
root_tarball = 'file://' + web_data_path + '/foo-0.0.0.tar.gz'
page_1 = 'file://' + os.path.join(web_data_path, '1.html')
page_2 = 'file://' + os.path.join(web_data_path, '2.html')
page_3 = 'file://' + os.path.join(web_data_path, '3.html')
page_4 = 'file://' + os.path.join(web_data_path, '4.html')
def test_spider_0(): root = _create_url('index.html')
pages, links = web_util.spider(root, depth=0) root_tarball = _create_url('foo-0.0.0.tar.gz')
page_1 = _create_url('1.html')
assert root in pages page_2 = _create_url('2.html')
assert page_1 not in pages page_3 = _create_url('3.html')
assert page_2 not in pages page_4 = _create_url('4.html')
assert page_3 not in pages
assert page_4 not in pages
assert "This is the root page." in pages[root]
assert root not in links
assert page_1 in links
assert page_2 not in links
assert page_3 not in links
assert page_4 not in links
def test_spider_1(): @pytest.mark.parametrize(
pages, links = web_util.spider(root, depth=1) 'depth,expected_found,expected_not_found,expected_text', [
(0,
{'pages': [root], 'links': [page_1]},
{'pages': [page_1, page_2, page_3, page_4],
'links': [root, page_2, page_3, page_4]},
{root: "This is the root page."}),
(1,
{'pages': [root, page_1], 'links': [page_1, page_2]},
{'pages': [page_2, page_3, page_4],
'links': [root, page_3, page_4]},
{root: "This is the root page.",
page_1: "This is page 1."}),
(2,
{'pages': [root, page_1, page_2],
'links': [page_1, page_2, page_3, page_4]},
{'pages': [page_3, page_4], 'links': [root]},
{root: "This is the root page.",
page_1: "This is page 1.",
page_2: "This is page 2."}),
(3,
{'pages': [root, page_1, page_2, page_3, page_4],
'links': [root, page_1, page_2, page_3, page_4]},
{'pages': [], 'links': []},
{root: "This is the root page.",
page_1: "This is page 1.",
page_2: "This is page 2.",
page_3: "This is page 3.",
page_4: "This is page 4."}),
])
def test_spider(depth, expected_found, expected_not_found, expected_text):
pages, links = spack.util.web.spider(root, depth=depth)
assert root in pages for page in expected_found['pages']:
assert page_1 in pages assert page in pages
assert page_2 not in pages
assert page_3 not in pages
assert page_4 not in pages
assert "This is the root page." in pages[root] for page in expected_not_found['pages']:
assert "This is page 1." in pages[page_1] assert page not in pages
assert root not in links for link in expected_found['links']:
assert page_1 in links assert link in links
assert page_2 in links
assert page_3 not in links for link in expected_not_found['links']:
assert page_4 not in links assert link not in links
for page, text in expected_text.items():
assert text in pages[page]
def test_spider_2(): def test_spider_no_response(monkeypatch):
pages, links = web_util.spider(root, depth=2) # Mock the absence of a response
monkeypatch.setattr(
assert root in pages spack.util.web, 'read_from_url', lambda x, y: (None, None, None)
assert page_1 in pages )
assert page_2 in pages pages, links = spack.util.web.spider(root, depth=0)
assert page_3 not in pages assert not pages and not links
assert page_4 not in pages
assert "This is the root page." in pages[root]
assert "This is page 1." in pages[page_1]
assert "This is page 2." in pages[page_2]
assert root not in links
assert page_1 in links
assert page_1 in links
assert page_2 in links
assert page_3 in links
assert page_4 in links
def test_spider_3():
pages, links = web_util.spider(root, depth=3)
assert root in pages
assert page_1 in pages
assert page_2 in pages
assert page_3 in pages
assert page_4 in pages
assert "This is the root page." in pages[root]
assert "This is page 1." in pages[page_1]
assert "This is page 2." in pages[page_2]
assert "This is page 3." in pages[page_3]
assert "This is page 4." in pages[page_4]
assert root in links # circular link on page 3
assert page_1 in links
assert page_1 in links
assert page_2 in links
assert page_3 in links
assert page_4 in links
def test_find_versions_of_archive_0(): def test_find_versions_of_archive_0():
versions = web_util.find_versions_of_archive( versions = spack.util.web.find_versions_of_archive(
root_tarball, root, list_depth=0) root_tarball, root, list_depth=0)
assert ver('0.0.0') in versions assert ver('0.0.0') in versions
def test_find_versions_of_archive_1(): def test_find_versions_of_archive_1():
versions = web_util.find_versions_of_archive( versions = spack.util.web.find_versions_of_archive(
root_tarball, root, list_depth=1) root_tarball, root, list_depth=1)
assert ver('0.0.0') in versions assert ver('0.0.0') in versions
assert ver('1.0.0') in versions assert ver('1.0.0') in versions
def test_find_versions_of_archive_2(): def test_find_versions_of_archive_2():
versions = web_util.find_versions_of_archive( versions = spack.util.web.find_versions_of_archive(
root_tarball, root, list_depth=2) root_tarball, root, list_depth=2)
assert ver('0.0.0') in versions assert ver('0.0.0') in versions
assert ver('1.0.0') in versions assert ver('1.0.0') in versions
@ -128,14 +104,14 @@ def test_find_versions_of_archive_2():
def test_find_exotic_versions_of_archive_2(): def test_find_exotic_versions_of_archive_2():
versions = web_util.find_versions_of_archive( versions = spack.util.web.find_versions_of_archive(
root_tarball, root, list_depth=2) root_tarball, root, list_depth=2)
# up for grabs to make this better. # up for grabs to make this better.
assert ver('2.0.0b2') in versions assert ver('2.0.0b2') in versions
def test_find_versions_of_archive_3(): def test_find_versions_of_archive_3():
versions = web_util.find_versions_of_archive( versions = spack.util.web.find_versions_of_archive(
root_tarball, root, list_depth=3) root_tarball, root, list_depth=3)
assert ver('0.0.0') in versions assert ver('0.0.0') in versions
assert ver('1.0.0') in versions assert ver('1.0.0') in versions
@ -145,7 +121,7 @@ def test_find_versions_of_archive_3():
def test_find_exotic_versions_of_archive_3(): def test_find_exotic_versions_of_archive_3():
versions = web_util.find_versions_of_archive( versions = spack.util.web.find_versions_of_archive(
root_tarball, root, list_depth=3) root_tarball, root, list_depth=3)
assert ver('2.0.0b2') in versions assert ver('2.0.0b2') in versions
assert ver('3.0a1') in versions assert ver('3.0a1') in versions
@ -159,35 +135,35 @@ def test_get_header():
# looking up headers should just work like a plain dict # looking up headers should just work like a plain dict
# lookup when there is an entry with the right key # lookup when there is an entry with the right key
assert(web_util.get_header(headers, 'Content-type') == 'text/plain') assert(spack.util.web.get_header(headers, 'Content-type') == 'text/plain')
# looking up headers should still work if there is a fuzzy match # looking up headers should still work if there is a fuzzy match
assert(web_util.get_header(headers, 'contentType') == 'text/plain') assert(spack.util.web.get_header(headers, 'contentType') == 'text/plain')
# ...unless there is an exact match for the "fuzzy" spelling. # ...unless there is an exact match for the "fuzzy" spelling.
headers['contentType'] = 'text/html' headers['contentType'] = 'text/html'
assert(web_util.get_header(headers, 'contentType') == 'text/html') assert(spack.util.web.get_header(headers, 'contentType') == 'text/html')
# If lookup has to fallback to fuzzy matching and there are more than one # If lookup has to fallback to fuzzy matching and there are more than one
# fuzzy match, the result depends on the internal ordering of the given # fuzzy match, the result depends on the internal ordering of the given
# mapping # mapping
headers = OrderedDict() headers = ordereddict_backport.OrderedDict()
headers['Content-type'] = 'text/plain' headers['Content-type'] = 'text/plain'
headers['contentType'] = 'text/html' headers['contentType'] = 'text/html'
assert(web_util.get_header(headers, 'CONTENT_TYPE') == 'text/plain') assert(spack.util.web.get_header(headers, 'CONTENT_TYPE') == 'text/plain')
del headers['Content-type'] del headers['Content-type']
assert(web_util.get_header(headers, 'CONTENT_TYPE') == 'text/html') assert(spack.util.web.get_header(headers, 'CONTENT_TYPE') == 'text/html')
# Same as above, but different ordering # Same as above, but different ordering
headers = OrderedDict() headers = ordereddict_backport.OrderedDict()
headers['contentType'] = 'text/html' headers['contentType'] = 'text/html'
headers['Content-type'] = 'text/plain' headers['Content-type'] = 'text/plain'
assert(web_util.get_header(headers, 'CONTENT_TYPE') == 'text/html') assert(spack.util.web.get_header(headers, 'CONTENT_TYPE') == 'text/html')
del headers['contentType'] del headers['contentType']
assert(web_util.get_header(headers, 'CONTENT_TYPE') == 'text/plain') assert(spack.util.web.get_header(headers, 'CONTENT_TYPE') == 'text/plain')
# If there isn't even a fuzzy match, raise KeyError # If there isn't even a fuzzy match, raise KeyError
with pytest.raises(KeyError): with pytest.raises(KeyError):
web_util.get_header(headers, 'ContentLength') spack.util.web.get_header(headers, 'ContentLength')

View File

@ -7,17 +7,18 @@
import codecs import codecs
import errno import errno
import re import multiprocessing.pool
import os import os
import os.path import os.path
import re
import shutil import shutil
import ssl import ssl
import sys import sys
import traceback import traceback
from six.moves.urllib.request import urlopen, Request import six
from six.moves.urllib.error import URLError from six.moves.urllib.error import URLError
import multiprocessing.pool from six.moves.urllib.request import urlopen, Request
try: try:
# Python 2 had these in the HTMLParser package. # Python 2 had these in the HTMLParser package.
@ -63,34 +64,6 @@ def handle_starttag(self, tag, attrs):
self.links.append(val) self.links.append(val)
class NonDaemonProcess(multiprocessing.Process):
"""Process that allows sub-processes, so pools can have sub-pools."""
@property
def daemon(self):
return False
@daemon.setter
def daemon(self, value):
pass
if sys.version_info[0] < 3:
class NonDaemonPool(multiprocessing.pool.Pool):
"""Pool that uses non-daemon processes"""
Process = NonDaemonProcess
else:
class NonDaemonContext(type(multiprocessing.get_context())): # novm
Process = NonDaemonProcess
class NonDaemonPool(multiprocessing.pool.Pool):
"""Pool that uses non-daemon processes"""
def __init__(self, *args, **kwargs):
kwargs['context'] = NonDaemonContext()
super(NonDaemonPool, self).__init__(*args, **kwargs)
def uses_ssl(parsed_url): def uses_ssl(parsed_url):
if parsed_url.scheme == 'https': if parsed_url.scheme == 'https':
return True return True
@@ -334,109 +307,152 @@ def list_url(url):
                 for key in _iter_s3_prefix(s3, url)))


-def _spider(url, visited, root, depth, max_depth, raise_on_error):
-    """Fetches URL and any pages it links to up to max_depth.
-
-    depth should initially be zero, and max_depth is the max depth of
-    links to follow from the root.
-
-    Prints out a warning only if the root can't be fetched; it ignores
-    errors with pages that the root links to.
-
-    Returns a tuple of:
-    - pages: dict of pages visited (URL) mapped to their full text.
-    - links: set of links encountered while visiting the pages.
-    """
-    pages = {}     # dict from page URL -> text content.
-    links = set()  # set of all links seen on visited pages.
-
-    try:
-        response_url, _, response = read_from_url(url, 'text/html')
-        if not response_url or not response:
-            return pages, links
-
-        page = codecs.getreader('utf-8')(response).read()
-        pages[response_url] = page
-
-        # Parse out the links in the page
-        link_parser = LinkParser()
-        subcalls = []
-        link_parser.feed(page)
-
-        while link_parser.links:
-            raw_link = link_parser.links.pop()
-            abs_link = url_util.join(
-                response_url,
-                raw_link.strip(),
-                resolve_href=True)
-            links.add(abs_link)
-
-            # Skip stuff that looks like an archive
-            if any(raw_link.endswith(suf) for suf in ALLOWED_ARCHIVE_TYPES):
-                continue
-
-            # Skip things outside the root directory
-            if not abs_link.startswith(root):
-                continue
-
-            # Skip already-visited links
-            if abs_link in visited:
-                continue
-
-            # If we're not at max depth, follow links.
-            if depth < max_depth:
-                subcalls.append((abs_link, visited, root,
-                                 depth + 1, max_depth, raise_on_error))
-                visited.add(abs_link)
-
-        if subcalls:
-            pool = NonDaemonPool(processes=len(subcalls))
-            try:
-                results = pool.map(_spider_wrapper, subcalls)
-
-                for sub_pages, sub_links in results:
-                    pages.update(sub_pages)
-                    links.update(sub_links)
-
-            finally:
-                pool.terminate()
-                pool.join()
-
-    except URLError as e:
-        tty.debug(e)
-
-        if hasattr(e, 'reason') and isinstance(e.reason, ssl.SSLError):
-            tty.warn("Spack was unable to fetch url list due to a certificate "
-                     "verification problem. You can try running spack -k, "
-                     "which will not check SSL certificates. Use this at your "
-                     "own risk.")
-
-        if raise_on_error:
-            raise NoNetworkConnectionError(str(e), url)
-
-    except HTMLParseError as e:
-        # This error indicates that Python's HTML parser sucks.
-        msg = "Got an error parsing HTML."
-
-        # Pre-2.7.3 Pythons in particular have rather prickly HTML parsing.
-        if sys.version_info[:3] < (2, 7, 3):
-            msg += " Use Python 2.7.3 or newer for better HTML parsing."
-
-        tty.warn(msg, url, "HTMLParseError: " + str(e))
-
-    except Exception as e:
-        # Other types of errors are completely ignored, except in debug mode.
-        tty.debug("Error in _spider: %s:%s" % (type(e), e),
-                  traceback.format_exc())
-
-    return pages, links
-
-
-def _spider_wrapper(args):
-    """Wrapper for using spider with multiprocessing."""
-    return _spider(*args)
+def spider(root_urls, depth=0, concurrency=32):
+    """Get web pages from root URLs.
+
+    If depth is specified (e.g., depth=2), then this will also follow
+    up to <depth> levels of links from each root.
+
+    Args:
+        root_urls (str or list of str): root urls used as a starting point
+            for spidering
+        depth (int): level of recursion into links
+        concurrency (int): number of simultaneous requests that can be sent
+
+    Returns:
+        A dict of pages visited (URL) mapped to their full text and the
+        set of visited links.
+    """
+    # Cache of visited links, meant to be captured by the closure below
+    _visited = set()
+
+    def _spider(url, collect_nested):
+        """Fetches URL and any pages it links to.
+
+        Prints out a warning only if the root can't be fetched; it ignores
+        errors with pages that the root links to.
+
+        Args:
+            url (str): url being fetched and searched for links
+            collect_nested (bool): whether we want to collect arguments
+                for nested spidering on the links found in this url
+
+        Returns:
+            A tuple of:
+            - pages: dict of pages visited (URL) mapped to their full text.
+            - links: set of links encountered while visiting the pages.
+            - spider_args: argument for subsequent call to spider
+        """
+        pages = {}     # dict from page URL -> text content.
+        links = set()  # set of all links seen on visited pages.
+        subcalls = []
+
+        try:
+            response_url, _, response = read_from_url(url, 'text/html')
+            if not response_url or not response:
+                return pages, links, subcalls
+
+            page = codecs.getreader('utf-8')(response).read()
+            pages[response_url] = page
+
+            # Parse out the links in the page
+            link_parser = LinkParser()
+            link_parser.feed(page)
+
+            while link_parser.links:
+                raw_link = link_parser.links.pop()
+                abs_link = url_util.join(
+                    response_url,
+                    raw_link.strip(),
+                    resolve_href=True)
+                links.add(abs_link)
+
+                # Skip stuff that looks like an archive
+                if any(raw_link.endswith(s) for s in ALLOWED_ARCHIVE_TYPES):
+                    continue
+
+                # Skip already-visited links
+                if abs_link in _visited:
+                    continue
+
+                # If we're not at max depth, follow links.
+                if collect_nested:
+                    subcalls.append((abs_link,))
+                    _visited.add(abs_link)
+
+        except URLError as e:
+            tty.debug(str(e))
+
+            if hasattr(e, 'reason') and isinstance(e.reason, ssl.SSLError):
+                tty.warn("Spack was unable to fetch url list due to a "
+                         "certificate verification problem. You can try "
+                         "running spack -k, which will not check SSL "
+                         "certificates. Use this at your own risk.")
+
+        except HTMLParseError as e:
+            # This error indicates that Python's HTML parser sucks.
+            msg = "Got an error parsing HTML."
+
+            # Pre-2.7.3 Pythons in particular have rather prickly HTML parsing.
+            if sys.version_info[:3] < (2, 7, 3):
+                msg += " Use Python 2.7.3 or newer for better HTML parsing."
+
+            tty.warn(msg, url, "HTMLParseError: " + str(e))
+
+        except Exception as e:
+            # Other types of errors are completely ignored,
+            # except in debug mode
+            tty.debug("Error in _spider: %s:%s" % (type(e), str(e)),
+                      traceback.format_exc())
+
+        finally:
+            tty.debug("SPIDER: [url={0}]".format(url))
+
+        return pages, links, subcalls
+
+    # TODO: Needed until we drop support for Python 2.X
+    def star(func):
+        def _wrapper(args):
+            return func(*args)
+        return _wrapper
+
+    if isinstance(root_urls, six.string_types):
+        root_urls = [root_urls]
+
+    # Clear the local cache of visited pages before starting the search
+    _visited.clear()
+
+    current_depth = 0
+    pages, links, spider_args = {}, set(), []
+
+    collect = current_depth < depth
+    for root in root_urls:
+        root = url_util.parse(root)
+        spider_args.append((root, collect))
+
+    tp = multiprocessing.pool.ThreadPool(processes=concurrency)
+    try:
+        while current_depth <= depth:
+            tty.debug("SPIDER: [depth={0}, max_depth={1}, urls={2}]".format(
+                current_depth, depth, len(spider_args))
+            )
+            results = tp.map(star(_spider), spider_args)
+            spider_args = []
+            collect = current_depth < depth
+            for sub_pages, sub_links, sub_spider_args in results:
+                sub_spider_args = [x + (collect,) for x in sub_spider_args]
+                pages.update(sub_pages)
+                links.update(sub_links)
+                spider_args.extend(sub_spider_args)
+            current_depth += 1
+    finally:
+        tp.terminate()
+        tp.join()
+
+    return pages, links


 def _urlopen(req, *args, **kwargs):
     """Wrapper for compatibility with old versions of Python."""
     url = req
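Aside on the `star` helper introduced above: `Pool.map` hands each worker a single argument, so tuples of arguments have to be unpacked by a small wrapper (on Python 3 alone, `starmap` would serve the same purpose). A minimal, self-contained sketch with an invented worker function and data:

# Standalone sketch of the star() pattern used by spider(); the worker
# function and the argument tuples below are illustrative only.
from multiprocessing.pool import ThreadPool


def star(func):
    """Turn f(a, b) into a callable that accepts a single (a, b) tuple."""
    def _wrapper(args):
        return func(*args)
    return _wrapper


def fetch(url, collect_nested):
    # stand-in for the real page-fetching worker
    return (url, collect_nested)


tp = ThreadPool(processes=4)
try:
    results = tp.map(star(fetch), [("page-a", True), ("page-b", False)])
finally:
    tp.terminate()
    tp.join()

print(results)  # [('page-a', True), ('page-b', False)]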
@@ -458,37 +474,22 @@ def _urlopen(req, *args, **kwargs):
     return opener(req, *args, **kwargs)


-def spider(root, depth=0):
-    """Gets web pages from a root URL.
-
-    If depth is specified (e.g., depth=2), then this will also follow
-    up to <depth> levels of links from the root.
-
-    This will spawn processes to fetch the children, for much improved
-    performance over a sequential fetch.
-
-    """
-    root = url_util.parse(root)
-    pages, links = _spider(root, set(), root, 0, depth, False)
-    return pages, links
-
-
-def find_versions_of_archive(archive_urls, list_url=None, list_depth=0):
+def find_versions_of_archive(
+        archive_urls, list_url=None, list_depth=0, concurrency=32
+):
     """Scrape web pages for new versions of a tarball.

-    Arguments:
+    Args:
         archive_urls (str or list or tuple): URL or sequence of URLs for
             different versions of a package. Typically these are just the
             tarballs from the package file itself. By default, this searches
             the parent directories of archives.
-
-    Keyword Arguments:
         list_url (str or None): URL for a listing of archives.
             Spack will scrape these pages for download links that look
             like the archive URL.
-
-        list_depth (int): Max depth to follow links on list_url pages.
+        list_depth (int): max depth to follow links on list_url pages.
             Defaults to 0.
+        concurrency (int): maximum number of concurrent requests
     """
     if not isinstance(archive_urls, (list, tuple)):
         archive_urls = [archive_urls]
@@ -509,12 +510,7 @@ def find_versions_of_archive(archive_urls, list_url=None, list_depth=0):
     list_urls |= additional_list_urls

     # Grab some web pages to scrape.
-    pages = {}
-    links = set()
-    for lurl in list_urls:
-        pg, lnk = spider(lurl, depth=list_depth)
-        pages.update(pg)
-        links.update(lnk)
+    pages, links = spider(list_urls, depth=list_depth, concurrency=concurrency)

     # Scrape them for archive URLs
     regexes = []
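For orientation, a minimal usage sketch of the reworked API (the URL, depth, and concurrency values below are made up): `spider` now takes one or more root URLs plus keyword arguments and returns the fetched pages together with the set of links found on them, which is how `find_versions_of_archive` calls it above.

# Hypothetical call to the refactored spider(); the URL and parameter
# values are illustrative, not taken from the change itself.
from spack.util import web

pages, links = web.spider(
    "https://ftp.gnu.org/gnu/hello/",  # a single root URL or a list of them
    depth=1,                           # also follow links one level down
    concurrency=16,                    # cap on simultaneous requests
)

for url in sorted(links):
    print(url)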


@@ -567,25 +567,24 @@ def __str__(self):
         # print keys in order
         sorted_keys = sorted(self.keys())

+        # Separate boolean variants from key-value pairs as they print
+        # differently. All booleans go first to avoid ' ~foo' strings that
+        # break spec reuse in zsh.
+        bool_keys = []
+        kv_keys = []
+        for key in sorted_keys:
+            bool_keys.append(key) if isinstance(self[key].value, bool) \
+                else kv_keys.append(key)
+
         # add spaces before and after key/value variants.
         string = StringIO()
-        kv = False
-        for key in sorted_keys:
-            vspec = self[key]

-            if not isinstance(vspec.value, bool):
-                # add space before all kv pairs.
-                string.write(' ')
-                kv = True
-            else:
-                # not a kv pair this time
-                if kv:
-                    # if it was LAST time, then pad after.
-                    string.write(' ')
-                kv = False

-            string.write(str(vspec))
+        for key in bool_keys:
+            string.write(str(self[key]))
+
+        for key in kv_keys:
+            string.write(' ')
+            string.write(str(self[key]))

         return string.getvalue()
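To make the new ordering concrete, here is a small self-contained sketch (not Spack's VariantMap; the variant names are invented) of how booleans and key-value pairs are written out: booleans first with no separating space, then each key=value pair preceded by a space.

# Minimal sketch of the rendering order introduced above; it mimics the
# logic with a plain dict instead of Spack's VariantMap.
from io import StringIO


def render_variants(variants):
    sorted_keys = sorted(variants)
    bool_keys = [k for k in sorted_keys if isinstance(variants[k], bool)]
    kv_keys = [k for k in sorted_keys if not isinstance(variants[k], bool)]

    string = StringIO()
    for key in bool_keys:
        # boolean variants render as +name or ~name, packed together
        string.write(('+' if variants[key] else '~') + key)
    for key in kv_keys:
        # key-value variants are each preceded by a space
        string.write(' {0}={1}'.format(key, variants[key]))
    return string.getvalue()


print(render_variants({'mpi': True, 'shared': False, 'api': 'v110'}))
# -> '+mpi~shared api=v110', with no leading ' ~shared' for zsh to mangle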


@@ -12,6 +13,13 @@
 #     setenv SPACK_ROOT /path/to/spack
 #     source $SPACK_ROOT/share/spack/setup-env.csh
 #
+
+# prevent infinite recursion when spack shells out (e.g., on cray for modules)
+if ($?_sp_initializing) then
+    exit 0
+endif
+setenv _sp_initializing true
+
 if ($?SPACK_ROOT) then
     set _spack_source_file = $SPACK_ROOT/share/spack/setup-env.csh
     set _spack_share_dir = $SPACK_ROOT/share/spack
@@ -37,3 +44,6 @@ else
     echo "ERROR: Sourcing spack setup-env.csh requires setting SPACK_ROOT to "
     echo "       the root of your spack installation."
 endif
+
+# done: unset sentinel variable as we're no longer initializing
+unsetenv _sp_initializing


@@ -39,6 +39,12 @@
 # spack module files.
 ########################################################################

+# prevent infinite recursion when spack shells out (e.g., on cray for modules)
+if [ -n "${_sp_initializing:-}" ]; then
+    exit 0
+fi
+export _sp_initializing=true
+
 spack() {
     # Store LD_LIBRARY_PATH variables from spack shell function
     # This is necessary because MacOS System Integrity Protection clears
@@ -358,3 +364,7 @@ _sp_multi_pathadd MODULEPATH "$_sp_tcl_roots"
 if [ "$_sp_shell" = bash ]; then
     source $_sp_share_dir/spack-completion.bash
 fi
+
+# done: unset sentinel variable as we're no longer initializing
+unset _sp_initializing
+export _sp_initializing


@@ -1493,7 +1493,7 @@ _spack_verify() {
 _spack_versions() {
     if $list_options
     then
-        SPACK_COMPREPLY="-h --help -s --safe-only"
+        SPACK_COMPREPLY="-h --help -s --safe-only -c --concurrency"
     else
         _all_packages
     fi


@@ -103,7 +103,7 @@ class Gcc(AutotoolsPackage, GNUMirrorPackage):
     depends_on('zlib', when='@6:')
     depends_on('libiconv', when='platform=darwin')
     depends_on('gnat', when='languages=ada')
-    depends_on('binutils~libiberty', when='+binutils')
+    depends_on('binutils~libiberty', when='+binutils', type=('build', 'link', 'run'))
     depends_on('zip', type='build', when='languages=java')
     depends_on('cuda', when='+nvptx')

@@ -303,15 +303,9 @@ def configure_args(self):
         # Binutils
         if spec.satisfies('+binutils'):
-            stage1_ldflags = str(self.rpath_args)
-            boot_ldflags = stage1_ldflags + ' -static-libstdc++ -static-libgcc'
-            if '%gcc' in spec:
-                stage1_ldflags = boot_ldflags
-
             binutils = spec['binutils'].prefix.bin
             options.extend([
                 '--with-sysroot=/',
-                '--with-stage1-ldflags=' + stage1_ldflags,
-                '--with-boot-ldflags=' + boot_ldflags,
                 '--with-gnu-ld',
                 '--with-ld=' + binutils.ld,
                 '--with-gnu-as',
@@ -344,6 +338,12 @@ def configure_args(self):
                 '--with-libiconv-prefix={0}'.format(spec['libiconv'].prefix)
             ])

+        # enable appropriate bootstrapping flags
+        stage1_ldflags = str(self.rpath_args)
+        boot_ldflags = stage1_ldflags + ' -static-libstdc++ -static-libgcc'
+        options.append('--with-stage1-ldflags=' + stage1_ldflags)
+        options.append('--with-boot-ldflags=' + boot_ldflags)
+
         return options

     # run configure/make/make(install) for the nvptx-none target
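A rough illustration of the reordered logic above, assuming a '+binutils' build (every path and rpath value below is invented): the binutils-specific options stay inside the '+binutils' branch, while the stage1/boot linker flags are now appended for every build, after the branch.

# Hypothetical resulting configure options; paths are made up.
options = [
    # from the '+binutils' branch
    '--with-sysroot=/',
    '--with-gnu-ld',
    '--with-ld=/example/spack/opt/binutils/bin/ld',
    '--with-gnu-as',
    # bootstrapping flags, now appended unconditionally at the end
    '--with-stage1-ldflags=-Wl,-rpath,/example/spack/opt/gcc/lib',
    '--with-boot-ldflags=-Wl,-rpath,/example/spack/opt/gcc/lib'
    ' -static-libstdc++ -static-libgcc',
]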


@@ -54,11 +54,11 @@ def install(self, spec, prefix):
     def setup_dependent_build_environment(self, env, dependent_spec):
         npm_config_cache_dir = "%s/npm-cache" % dependent_spec.prefix
         if not os.path.isdir(npm_config_cache_dir):
-            mkdir(npm_config_cache_dir)
+            mkdirp(npm_config_cache_dir)
         env.set('npm_config_cache', npm_config_cache_dir)

     def setup_dependent_run_environment(self, env, dependent_spec):
         npm_config_cache_dir = "%s/npm-cache" % dependent_spec.prefix
         if not os.path.isdir(npm_config_cache_dir):
-            mkdir(npm_config_cache_dir)
+            mkdirp(npm_config_cache_dir)
         env.set('npm_config_cache', npm_config_cache_dir)
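A short note on the fix, with an invented prefix path: `mkdirp` comes from `llnl.util.filesystem` and behaves like `mkdir -p`, creating any missing intermediate directories, so the cache directory can be created even when the dependent prefix does not contain it yet.

# Minimal sketch of the behavior relied on above; the path is made up.
import os
from llnl.util.filesystem import mkdirp

cache_dir = "/tmp/example-prefix/npm-cache"  # hypothetical dependent prefix

if not os.path.isdir(cache_dir):
    # a plain single-level mkdir fails when /tmp/example-prefix does not
    # exist yet; mkdirp works like `mkdir -p` and creates missing parents.
    mkdirp(cache_dir)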