Merge branch 'develop' into features/shared

This commit is contained in:
Carson Woods
2020-04-20 11:02:54 -05:00
521 changed files with 6655 additions and 13288 deletions


@@ -1,89 +1,43 @@
---
name: "\U0001F4A5 Build error"
about: Some package in Spack didn't build correctly
title: "Installation issue: "
labels: "build-error"
---
<!-- Thanks for taking the time to report this build failure. To proceed with the report please:
1. Title the issue "Installation issue: <name-of-the-package>".
2. Provide the information required below.
We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively! -->
### Steps to reproduce the issue
<!-- Fill in the exact spec you are trying to build and the relevant part of the error message -->
```console
$ spack install <spec>
...
```
### Information on your system
<!-- Please include the output of `spack debug report` -->
<!-- If you have any relevant configuration detail (custom `packages.yaml` or `modules.yaml`, etc.) you can add that here as well. -->
### Additional information
<!--Sometimes the issue benefits from additional details. In these cases there are
a few things we can suggest doing. First of all, you can post the full output of:
```console
$ spack spec --install-status <spec>
...
```
to show people whether Spack installed faulty software or whether it was not able to
build it at all.
<!-- Please upload the following files. They should be present in the stage directory of the failing build. Also upload any config.log or similar file if one exists. -->
* [spack-build-out.txt]()
* [spack-build-env.txt]()
If your build didn't make it past the configure stage, Spack also has commands to parse
logs and report error and warning messages:
```console
$ spack log-parse --show=errors,warnings <file-to-parse>
```
You might want to run this command on the `config.log` or any other similar file
found in the stage directory:
```console
$ spack location -s <spec>
```
If `config.log` contains other settings that you think might be the cause
of the build failure, consider attaching the file to this issue.
Rebuilding the package with the following options:
```console
$ spack -d install -j 1 <spec>
...
```
will provide additional debug information. After the failure you will find two files in the current directory:
1. `spack-cc-<spec>.in`, which contains details on the command given as input
to Spack's compiler wrapper
1. `spack-cc-<spec>.out`, which contains the command used to compile / link the
failed object after Spack's compiler wrapper did its processing
You can post or attach those files to provide maintainers with more information on what
is causing the failure.-->
<!-- Some packages have maintainers who have volunteered to debug build failures. Run `spack maintainers <name-of-the-package>` and @mention them here if they exist. -->
### General information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [ ] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [ ] I have run `spack maintainers <name-of-the-package>` and @mentioned any maintainers
- [ ] I have uploaded the build log and environment files
- [ ] I have searched the issues of this repo and believe this is not a duplicate


@@ -18,6 +18,8 @@ on:
- '!var/spack/repos/builtin/packages/py-setuptools/**'
- '!var/spack/repos/builtin/packages/openjpeg/**'
- '!var/spack/repos/builtin/packages/r-rcpp/**'
# Don't run if we only modified documentation
- 'lib/spack/docs/**'
jobs:
build:


@@ -1,3 +1,18 @@
# v0.14.2 (2020-04-15)
This is a minor release on the `0.14` series. It includes performance
improvements and bug fixes:
* Improvements to how `spack install` handles foreground/background (#15723)
* Major performance improvements for reading the package DB (#14693, #15777)
* No longer check for the old `index.yaml` database file (#15298)
* Properly activate environments with '-h' in the name (#15429)
* External packages have correct `.prefix` in environments/views (#15475)
* Improvements to computing env modifications from sourcing files (#15791)
* Bugfix on Cray machines when getting `TERM` env variable (#15630)
* Avoid adding spurious `LMOD` env vars to Intel modules (#15778)
* Don't output [+] for mock installs run during tests (#15609)
# v0.14.1 (2020-03-20)
This is a bugfix release on top of `v0.14.0`. Specific fixes include:


@@ -16,8 +16,6 @@
modules:
prefix_inspections:
lib:
- DYLD_LIBRARY_PATH
- DYLD_FALLBACK_LIBRARY_PATH
lib64:
- DYLD_LIBRARY_PATH
- DYLD_FALLBACK_LIBRARY_PATH


@@ -1234,6 +1234,8 @@ add a version specifier to the spec:
Notice that the package versions that provide insufficient MPI
versions are now filtered out.
.. _extensions:
---------------------------
Extensions & Python support
---------------------------
@@ -1241,8 +1243,7 @@ Extensions & Python support
Spack's installation model assumes that each package will live in its
own install prefix. However, certain packages are typically installed
*within* the directory hierarchy of other packages. For example,
`Python <https://www.python.org>`_ packages are typically installed in the
``$prefix/lib/python-2.7/site-packages`` directory.
Spack has support for this type of installation as well. In Spack,


@@ -130,7 +130,7 @@ To activate an environment, use the following command:
By default, the ``spack env activate`` will load the view associated
with the Environment into the user environment. The ``-v,
--with-view`` argument ensures this behavior, and the ``-V,
--without-view`` argument activates the environment without changing
the user environment variables.
The ``-p`` option to the ``spack env activate`` command modifies the


@@ -165,8 +165,6 @@ used ``gcc``. You could therefore just type:
To identify just the one built with the Intel compiler.
.. _extensions:
.. _cmd-spack-module-loads:
^^^^^^^^^^^^^^^^^^^^^^^^^^


@@ -2197,7 +2197,7 @@ property to ``True``, e.g.:
extendable = True
...
To make a package into an extension, simply add an
``extends`` call in the package definition, and pass it the name of an
extendable package:
@@ -2212,6 +2212,10 @@ Now, the ``py-numpy`` package can be used as an argument to ``spack
activate``. When it is activated, all the files in its prefix will be
symbolically linked into the prefix of the python package.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Adding additional constraints
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Some packages produce a Python extension, but are only compatible with
Python 3, or with Python 2. In those cases, a ``depends_on()``
declaration should be made in addition to the ``extends()``
@@ -2231,8 +2235,7 @@ variant(s) are selected. This may be accomplished with conditional
.. code-block:: python
class FooLib(Package):
variant('python', default=True, description='Build the Python extension Module')
extends('python', when='+python')
...


@@ -117,6 +117,27 @@ created has the same name as the current branch being tested, but has ``multi-ci
prepended to the branch name. Once Gitlab CI has full support for dynamically
defined workloads, this command will be deprecated.
Until this command is no longer needed and can be deprecated, there are
a few gotchas to note. While you can embed your username and password in the
`DOWNSTREAM_CI_REPO` url, you may not be able to have Gitlab mask the value, as
it will likely contain characters that Gitlab cannot currently mask. Another
option is to set up an SSH token, but for this to work, the associated SSH
key must be passphrase-less so that it can be provided in an automated manner.
If you attempt to set up an SSH token that does require a passphrase, you may
see a log message similar to:
```
fatal: https://<instance-url>/<org>/<project>:<port>/info/refs not valid: is this a git repository?
```
In this case, you can try a passphrase-less SSH key, or else embed your gitlab
username and password in the `DOWNSTREAM_CI_REPO` as in the following example:
```
https://<username>:<password>@<instance-url>/<org>/<project>.git
```
.. _cmd_spack_ci_rebuild:
^^^^^^^^^^^^^^^^^^^^
@@ -436,4 +457,5 @@ DOWNSTREAM_CI_REPO
^^^^^^^^^^^^^^^^^^
Needed until Gitlab CI supports dynamic job generation. Can contain connection
credentials embedded in the url, and could be the same repository or a different
one.


@@ -1432,10 +1432,6 @@ The following functionality is prepared:
#. Base image: the example starts from a minimal ubuntu.
#. Installing as root: docker images are usually set up as root.
Since some autotools scripts might complain about this being unsafe, we set
``FORCE_UNSAFE_CONFIGURE=1`` to avoid configure errors.
#. Pre-install the spack dependencies, including modules from the packages.
This avoids needing to build those from scratch via ``spack bootstrap``.
Package installs are followed by a clean-up of the system package index,
@@ -1466,8 +1462,7 @@ In order to build and run the image, execute:
# general environment for docker
ENV DEBIAN_FRONTEND=noninteractive \
SPACK_ROOT=/usr/local
# install minimal spack dependencies
RUN apt-get update \


@@ -624,9 +624,9 @@ def replace_directory_transaction(directory_name, tmp_root=None):
# Check the input is indeed a directory with absolute path.
# Raise before anything is done to avoid moving the wrong directory
assert os.path.isdir(directory_name), \
'Invalid directory: ' + directory_name
assert os.path.isabs(directory_name), \
'"directory_name" must contain an absolute path: ' + directory_name
directory_basename = os.path.basename(directory_name)


@@ -619,3 +619,28 @@ def load_module_from_file(module_name, module_path):
import imp
module = imp.load_source(module_name, module_path)
return module
def uniq(sequence):
"""Remove runs of duplicate elements from a list.
This works like the command-line ``uniq`` tool: adjacent matching
elements are collapsed into a single occurrence.
For example::
uniq([1, 1, 1, 1, 2, 2, 2, 3, 3]) == [1, 2, 3]
uniq([1, 1, 1, 1, 2, 2, 2, 1, 1]) == [1, 2, 1]
"""
if not sequence:
return []
uniq_list = [sequence[0]]
last = sequence[0]
for element in sequence[1:]:
if element != last:
uniq_list.append(element)
last = element
return uniq_list
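As a sanity check, here is a self-contained copy of the function above with the docstring's examples exercised; note that only *adjacent* runs are collapsed, so separated duplicates survive:

```python
def uniq(sequence):
    """Collapse adjacent duplicate elements, like the uniq(1) tool."""
    if not sequence:
        return []
    uniq_list = [sequence[0]]
    last = sequence[0]
    for element in sequence[1:]:
        if element != last:
            uniq_list.append(element)
            last = element
    return uniq_list

print(uniq([1, 1, 1, 1, 2, 2, 2, 3, 3]))  # [1, 2, 3]
print(uniq([1, 1, 1, 1, 2, 2, 2, 1, 1]))  # [1, 2, 1]
```

Unlike `set()`-based deduplication, this preserves order and keeps non-adjacent repeats, which is why the second call returns `[1, 2, 1]`.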


@@ -7,6 +7,8 @@
"""
from __future__ import unicode_literals
import atexit
import errno
import multiprocessing
import os
import re
@@ -25,6 +27,7 @@
except ImportError:
termios = None
# Use this to strip escape sequences
_escape = re.compile(r'\x1b[^m]*m|\x1b\[?1034h')
@@ -38,17 +41,22 @@
@contextmanager
def ignore_signal(signum):
"""Context manager to temporarily ignore a signal."""
old_handler = signal.signal(signum, signal.SIG_IGN)
try:
yield
finally:
signal.signal(signum, old_handler)
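A minimal sketch of why temporarily ignoring a signal matters. This reimplements the context manager for illustration and uses `SIGUSR1` as a stand-in for `SIGTTOU` (which is only raised by terminal writes from a background process, so it is hard to demonstrate in isolation):

```python
import os
import signal
from contextlib import contextmanager


@contextmanager
def ignore_signal(signum):
    """Temporarily ignore a signal, restoring the prior handler after."""
    old_handler = signal.signal(signum, signal.SIG_IGN)
    try:
        yield
    finally:
        signal.signal(signum, old_handler)


# SIGUSR1 normally terminates the process; inside the context it is ignored.
with ignore_signal(signal.SIGUSR1):
    os.kill(os.getpid(), signal.SIGUSR1)  # harmless here
print("survived")
```

The try/finally is the important part: the previous handler is restored even if the body raises, which is exactly what `tcsetattr` calls under `SIGTTOU` need.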
def _is_background_tty(stream):
"""True if the stream is a tty and calling process is in the background.
"""
return (
stream.isatty() and
os.getpgrp() != os.tcgetpgrp(stream.fileno())
)
def _strip(line):
@@ -56,27 +64,80 @@ def _strip(line):
return _escape.sub('', line)
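The `_escape` pattern defined earlier can be exercised on its own; a small sketch with the same regex, stripping an ANSI color sequence:

```python
import re

# Same pattern as _escape above: ESC..m color sequences, plus the
# xterm "meta sends escape" report (ESC[?1034h).
escape = re.compile(r'\x1b[^m]*m|\x1b\[?1034h')


def strip(line):
    """Strip ANSI escape sequences from a line of output."""
    return escape.sub('', line)


print(strip('\x1b[01;31mred text\x1b[0m'))  # red text
```

This is what keeps the log file free of color codes while the terminal still sees them.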
class keyboard_input(object):
"""Context manager to disable line editing and echoing.
Use this with ``sys.stdin`` for keyboard input, e.g.::
with keyboard_input(sys.stdin) as kb:
while True:
kb.check_fg_bg()
r, w, x = select.select([sys.stdin], [], [])
# ... do something with keypresses ...
The ``keyboard_input`` context manager disables canonical
(line-based) input and echoing, so that keypresses are available on
the stream immediately, and they are not printed to the
terminal. Typically, standard input is line-buffered, which means
keypresses won't be sent until the user hits return. In this mode, a
user can hit, e.g., 'v', and it will be read on the other end of the
pipe immediately but not printed.
The handler takes care to ensure that terminal changes only take
effect when the calling process is in the foreground. If the process
is backgrounded, canonical mode and echo are re-enabled. They are
disabled again when the calling process comes back to the foreground.
When the with block completes, prior TTY settings are restored.
This context manager works through a single signal handler for
``SIGTSTP``, along with a polling routine called ``check_fg_bg()``.
Here are the relevant states, transitions, and POSIX signals::
[Running] -------- Ctrl-Z sends SIGTSTP ------------.
[ in FG ] <------- fg sends SIGCONT --------------. |
^ | |
| fg (no signal) | |
| | v
[Running] <------- bg sends SIGCONT ---------- [Stopped]
[ in BG ] [ in BG ]
We handle all transitions except for ``SIGTSTP`` generated by Ctrl-Z
by periodically calling ``check_fg_bg()``. This routine notices if
we are in the foreground with canonical mode or echo still enabled,
or in the background with either of them disabled, and it fixes the
terminal settings in response.
``check_fg_bg()`` works *except* for when the process is stopped with
``SIGTSTP``. We cannot rely on a periodic timer in this case, as it
may not run before the process stops. We therefore restore terminal
settings in the ``SIGTSTP`` handler.
Additional notes:
* We mostly use polling here instead of a SIGALRM timer or a
thread. This is to avoid the complexities of many interrupts, which
seem to make system calls (like I/O) unreliable in older Python
versions (2.6 and 2.7). See these issues for details:
1. https://www.python.org/dev/peps/pep-0475/
2. https://bugs.python.org/issue8354
There are essentially too many ways for asynchronous signals to go
wrong if we also have to support older Python versions, so we opt
not to use them.
* ``SIGSTOP`` can stop a process (in the foreground or background),
but it can't be caught. Because of this, we can't fix any terminal
settings on ``SIGSTOP``, and the terminal will be left with
``ICANON`` and ``ECHO`` disabled until it resumes execution.
* Technically, a process *could* be sent ``SIGTSTP`` while running in
the foreground, without the shell backgrounding that process. This
doesn't happen in practice, and we assume that ``SIGTSTP`` always
means that defaults should be restored.
* We rely on ``termios`` support. Without it, or if the stream isn't
a TTY, ``keyboard_input`` has no effect.
"""
def __init__(self, stream):
"""Create a context manager that will enable keyboard input on stream.
@@ -89,42 +150,97 @@ def __init__(self, stream):
"""
self.stream = stream
def _is_background(self):
"""True iff calling process is in the background."""
return _is_background_tty(self.stream)
def _get_canon_echo_flags(self):
"""Get current termios canonical and echo settings."""
cfg = termios.tcgetattr(self.stream)
return (
bool(cfg[3] & termios.ICANON),
bool(cfg[3] & termios.ECHO),
)
def _enable_keyboard_input(self):
"""Disable canonical input and echoing on ``self.stream``."""
# "enable" input by disabling canonical mode and echo
new_cfg = termios.tcgetattr(self.stream)
new_cfg[3] &= ~termios.ICANON
new_cfg[3] &= ~termios.ECHO
# Apply new settings for terminal
with ignore_signal(signal.SIGTTOU):
termios.tcsetattr(self.stream, termios.TCSANOW, new_cfg)
def _restore_default_terminal_settings(self):
"""Restore the original input configuration on ``self.stream``."""
# _restore_default_terminal_settings can be called in the foreground
# or background. When called in the background, tcsetattr triggers
# SIGTTOU, which we must ignore, or the process will be stopped.
with ignore_signal(signal.SIGTTOU):
termios.tcsetattr(self.stream, termios.TCSANOW, self.old_cfg)
def _tstp_handler(self, signum, frame):
self._restore_default_terminal_settings()
os.kill(os.getpid(), signal.SIGSTOP)
def check_fg_bg(self):
# old_cfg is set up in __enter__ and indicates that we have
# termios and a valid stream.
if not self.old_cfg:
return
# query terminal flags and fg/bg status
flags = self._get_canon_echo_flags()
bg = self._is_background()
# restore sanity if flags are amiss -- see diagram in class docs
if not bg and any(flags): # fg, but input not enabled
self._enable_keyboard_input()
elif bg and not all(flags): # bg, but input enabled
self._restore_default_terminal_settings()
def __enter__(self):
"""Enable immediate keypress input, while this process is foreground.
If the stream is not a TTY or the system doesn't support termios,
do nothing.
"""
self.old_cfg = None
self.old_handlers = {}
# Ignore all this if the input stream is not a tty.
if not self.stream or not self.stream.isatty():
return self
if termios:
# save old termios settings to restore later
self.old_cfg = termios.tcgetattr(self.stream)
try:
# Install a signal handler to disable/enable keyboard input
# when the process moves between foreground and background.
self.old_handlers[signal.SIGTSTP] = signal.signal(
signal.SIGTSTP, self._tstp_handler)
# add an atexit handler to ensure the terminal is restored
atexit.register(self._restore_default_terminal_settings)
except Exception:
pass # some OS's do not support termios, so ignore
# enable keyboard input initially (if foreground)
if not self._is_background():
self._enable_keyboard_input()
return self
def __exit__(self, exc_type, exception, traceback):
"""If termios was available, restore old settings."""
if self.old_cfg:
self._restore_default_terminal_settings()
# restore SIGTSTP and SIGCONT handlers
if self.old_handlers:
for signum, old_handler in self.old_handlers.items():
signal.signal(signum, old_handler)
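The canonical-mode and echo toggling above comes down to clearing two bits in the termios local-flags word (index 3 of the attribute list). A sketch that operates on a plain list rather than a real TTY, since `tcgetattr` requires one:

```python
import termios

# A fake termios attribute list: [iflag, oflag, cflag, lflag, ...].
# Index 3 is the local-flags word holding ICANON and ECHO.
cfg = [0, 0, 0, termios.ICANON | termios.ECHO]

# "Enable keyboard input": clear canonical mode and echo,
# as _enable_keyboard_input does before calling tcsetattr.
cfg[3] &= ~termios.ICANON
cfg[3] &= ~termios.ECHO

canon = bool(cfg[3] & termios.ICANON)
echo = bool(cfg[3] & termios.ECHO)
print(canon, echo)  # False False
```

This is also exactly the check `_get_canon_echo_flags()` performs: it reads the flags word back and reports which of the two bits are set.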
class Unbuffered(object):
@@ -300,11 +416,11 @@ def __enter__(self):
self._saved_debug = tty._debug
# OS-level pipe for redirecting output to logger
read_fd, write_fd = os.pipe()
# Multiprocessing pipe for communication back from the daemon
# Currently only used to save echo value between uses
self.parent_pipe, child_pipe = multiprocessing.Pipe()
# Sets a daemon that writes to file what it reads from a pipe
try:
@@ -315,10 +431,15 @@ def __enter__(self):
input_stream = None # just don't forward input if this fails
self.process = multiprocessing.Process(
target=_writer_daemon,
args=(
input_stream, read_fd, write_fd, self.echo, self.log_file,
child_pipe
)
)
self.process.daemon = True # must set before start()
self.process.start()
os.close(read_fd) # close in the parent process
finally:
if input_stream:
@@ -340,9 +461,9 @@ def __enter__(self):
self._saved_stderr = os.dup(sys.stderr.fileno())
# redirect to the pipe we created above
os.dup2(write_fd, sys.stdout.fileno())
os.dup2(write_fd, sys.stderr.fileno())
os.close(write_fd)
else:
# Handle I/O the Python way. This won't redirect lower-level
@@ -355,7 +476,7 @@ def __enter__(self):
self._saved_stderr = sys.stderr
# create a file object for the pipe; redirect to it.
pipe_fd_out = os.fdopen(write_fd, 'w')
sys.stdout = pipe_fd_out
sys.stderr = pipe_fd_out
@@ -394,14 +515,14 @@ def __exit__(self, exc_type, exc_val, exc_tb):
# print log contents in parent if needed.
if self.write_log_in_parent:
string = self.parent_pipe.recv()
self.file_like.write(string)
if self.close_log_in_parent:
self.log_file.close()
# recover and store echo settings from the child before it dies
self.echo = self.parent_pipe.recv()
# join the daemon process. The daemon will quit automatically
# when the write pipe is closed; we just wait for it here.
@@ -426,90 +547,166 @@ def force_echo(self):
# exactly before and after the text we want to echo.
sys.stdout.write(xon)
sys.stdout.flush()
yield
sys.stdout.write(xoff)
sys.stdout.flush()
def _writer_daemon(self, stdin):
"""Daemon that writes output to the log file and stdout."""
# Use line buffering (3rd param = 1) since Python 3 has a bug
# that prevents unbuffered text I/O.
in_pipe = os.fdopen(self.read_fd, 'r', 1)
os.close(self.write_fd)
echo = self.echo # initial echo setting, user-controllable
force_echo = False # parent can force echo for certain output
# list of streams to select from
istreams = [in_pipe, stdin] if stdin else [in_pipe]
log_file = self.log_file
def handle_write(force_echo):
# Handle output from the with block process.
# If we arrive here, it means that in_pipe was
# ready for reading: it should never happen that
# line is false-ish
line = in_pipe.readline()
if not line:
return (True, force_echo) # break while loop
# find control characters and strip them.
controls = control.findall(line)
line = re.sub(control, '', line)
# Echo to stdout if requested or forced
if echo or force_echo:
try:
if termios:
conf = termios.tcgetattr(sys.stdout)
tostop = conf[3] & termios.TOSTOP
else:
tostop = True
except Exception:
tostop = True
if not (tostop and _is_background_tty()):
sys.stdout.write(line)
sys.stdout.flush()
# Stripped output to log file.
log_file.write(_strip(line))
log_file.flush()
if xon in controls:
force_echo = True
if xoff in controls:
force_echo = False
return (False, force_echo)
try:
with _keyboard_input(stdin):
while True:
# No need to set any timeout for select.select
# Wait until a key press or an event on in_pipe.
rlist, _, _ = select.select(istreams, [], [])
# Allow user to toggle echo with 'v' key.
# Currently ignores other chars.
# only read stdin if we're in the foreground
if stdin in rlist and not _is_background_tty():
if stdin.read(1) == 'v':
echo = not echo
if in_pipe in rlist:
br, fe = handle_write(force_echo)
force_echo = fe
if br:
break
except BaseException:
tty.error("Exception occurred in writer daemon!")
traceback.print_exc()
yield
finally:
# send written data back to parent if we used a StringIO
if self.write_log_in_parent:
self.child.send(log_file.getvalue())
log_file.close()
sys.stdout.write(xoff)
sys.stdout.flush()
# send echo value back to the parent so it can be preserved.
self.child.send(echo)
def _writer_daemon(stdin, read_fd, write_fd, echo, log_file, control_pipe):
"""Daemon used by ``log_output`` to write to a log file and to ``stdout``.
The daemon receives output from the parent process and writes it both
to a log and, optionally, to ``stdout``. The relationship looks like
this::
Terminal
|
| +-------------------------+
| | Parent Process |
+--------> | with log_output(): |
| stdin | ... |
| +-------------------------+
| ^ | write_fd (parent's redirected stdout)
| | control |
| | pipe |
| | v read_fd
| +-------------------------+ stdout
| | Writer daemon |------------>
+--------> | read from read_fd | log_file
stdin | write to out and log |------------>
+-------------------------+
Within the ``log_output`` handler, the parent's output is redirected
to a pipe from which the daemon reads. The daemon writes each line
from the pipe to a log file and (optionally) to ``stdout``. The user
can hit ``v`` to toggle output on ``stdout``.
In addition to the input and output file descriptors, the daemon
interacts with the parent via ``control_pipe``. It reports whether
``stdout`` was enabled or disabled when it finished and, if the
``log_file`` is a ``StringIO`` object, then the daemon also sends the
logged output back to the parent as a string, to be written to the
``StringIO`` in the parent. This is mainly for testing.
Arguments:
stdin (stream): input from the terminal
read_fd (int): pipe for reading from parent's redirected stdout
write_fd (int): parent's end of the pipe will write to (will be
immediately closed by the writer daemon)
echo (bool): initial echo setting -- controlled by user and
preserved across multiple writer daemons
log_file (file-like): file to log all output
control_pipe (Pipe): multiprocessing pipe on which to send control
information to the parent
"""
# Use line buffering (3rd param = 1) since Python 3 has a bug
# that prevents unbuffered text I/O.
in_pipe = os.fdopen(read_fd, 'r', 1)
os.close(write_fd)
# list of streams to select from
istreams = [in_pipe, stdin] if stdin else [in_pipe]
force_echo = False # parent can force echo for certain output
try:
with keyboard_input(stdin) as kb:
while True:
# fix the terminal settings if we recently came to
# the foreground
kb.check_fg_bg()
# wait for input from any stream. use a coarse timeout to
# allow other checks while we wait for input
rlist, _, _ = _retry(select.select)(istreams, [], [], 1e-1)
# Allow user to toggle echo with 'v' key.
# Currently ignores other chars.
# only read stdin if we're in the foreground
if stdin in rlist and not _is_background_tty(stdin):
# it's possible to be backgrounded between the above
# check and the read, so we ignore SIGTTIN here.
with ignore_signal(signal.SIGTTIN):
try:
if stdin.read(1) == 'v':
echo = not echo
except IOError as e:
# If SIGTTIN is ignored, the system gives EIO
# to let the caller know the read failed b/c it
# was in the bg. Ignore that too.
if e.errno != errno.EIO:
raise
if in_pipe in rlist:
# Handle output from the calling process.
line = _retry(in_pipe.readline)()
if not line:
break
# find control characters and strip them.
controls = control.findall(line)
line = control.sub('', line)
# Echo to stdout if requested or forced.
if echo or force_echo:
sys.stdout.write(line)
sys.stdout.flush()
# Stripped output to log file.
log_file.write(_strip(line))
log_file.flush()
if xon in controls:
force_echo = True
if xoff in controls:
force_echo = False
except BaseException:
tty.error("Exception occurred in writer daemon!")
traceback.print_exc()
finally:
# send written data back to parent if we used a StringIO
if isinstance(log_file, StringIO):
control_pipe.send(log_file.getvalue())
log_file.close()
# send echo value back to the parent so it can be preserved.
control_pipe.send(echo)
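The daemon's coarse-timeout `select` polling can be sketched on a plain OS pipe; the 0.1-second timeout mirrors the `1e-1` used above, and the file descriptors are illustrative:

```python
import os
import select

read_fd, write_fd = os.pipe()

# Nothing written yet: select times out and returns no readable fds,
# which gives the loop a chance to run checks like check_fg_bg().
rlist, _, _ = select.select([read_fd], [], [], 0.1)
print(rlist)  # []

# Once the parent writes, the daemon's fd becomes readable immediately.
os.write(write_fd, b'a line of output\n')
rlist, _, _ = select.select([read_fd], [], [], 0.1)
print(read_fd in rlist)  # True
```

The timeout is the design point: with a blocking `select`, the daemon could sit forever without noticing it moved between foreground and background.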
def _retry(function):
"""Retry a call if errors indicating an interrupted system call occur.
Interrupted system calls return -1 and set ``errno`` to ``EINTR`` if
certain flags are not set. Newer Pythons automatically retry them,
but older Pythons do not, so we need to retry the calls.
This function converts a call like this:
syscall(args)
and makes it retry by wrapping the function like this:
_retry(syscall)(args)
This is a private function because EINTR is unfortunately raised in
different ways from different functions, and we only handle the ones
relevant for this file.
"""
def wrapped(*args, **kwargs):
while True:
try:
return function(*args, **kwargs)
except IOError as e:
if e.errno == errno.EINTR:
continue
raise
except select.error as e:
if e.args[0] == errno.EINTR:
continue
raise
return wrapped
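A sketch of the retry wrapper in action, using a hypothetical flaky function that fails once with `EINTR` before succeeding (this mirrors the `IOError` branch of `_retry`; the `select.error` branch works the same way):

```python
import errno


def retry(function):
    """Retry a call when it raises IOError with errno EINTR."""
    def wrapped(*args, **kwargs):
        while True:
            try:
                return function(*args, **kwargs)
            except IOError as e:
                if e.errno == errno.EINTR:
                    continue
                raise
    return wrapped


calls = {'n': 0}


def flaky_read():
    """Hypothetical syscall wrapper: interrupted once, then succeeds."""
    calls['n'] += 1
    if calls['n'] == 1:
        raise IOError(errno.EINTR, 'Interrupted system call')
    return 'data'


result = retry(flaky_read)()
print(result, calls['n'])  # data 2
```

Any error other than `EINTR` still propagates, so the wrapper only masks interrupted system calls, not real failures.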


@@ -0,0 +1,344 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""The pty module handles pseudo-terminals.
Currently, the infrastructure here is only used to test llnl.util.tty.log.
If this is used outside a testing environment, we will want to reconsider
things like timeouts in ``ProcessController.wait()``, which are set to
get tests done quickly, not to avoid high CPU usage.
"""
from __future__ import print_function
import os
import signal
import multiprocessing
import re
import sys
import termios
import time
import traceback
import llnl.util.tty.log as log
from spack.util.executable import which
class ProcessController(object):
"""Wrapper around some fundamental process control operations.
This allows one process to drive another similar to the way a shell
would, by sending signals and I/O.
"""
def __init__(self, pid, master_fd,
timeout=1, sleep_time=1e-1, debug=False):
"""Create a controller to manipulate the process with id ``pid``
Args:
pid (int): id of process to control
master_fd (int): master file descriptor attached to pid's stdin
timeout (int): time in seconds for wait operations to time out
(default 1 second)
sleep_time (float): time to sleep after signals, to control the
signal rate of the controller (default 1e-1)
debug (bool): whether ``horizontal_line()`` and ``status()`` should
produce output when called (default False)
``sleep_time`` allows the caller to insert delays after calls
that signal or modify the controlled process. Python behaves very
poorly if signals arrive too fast, and flooding a Python process
that has Python signal handlers installed can kill the process or
hang our tests, so we throttle signals to a closer-to-interactive rate.
"""
self.pid = pid
self.pgid = os.getpgid(pid)
self.master_fd = master_fd
self.timeout = timeout
self.sleep_time = sleep_time
self.debug = debug
# we need the ps command to wait for process statuses
self.ps = which("ps", required=True)
def get_canon_echo_attrs(self):
"""Get echo and canon attributes of the terminal of master_fd."""
cfg = termios.tcgetattr(self.master_fd)
return (
bool(cfg[3] & termios.ICANON),
bool(cfg[3] & termios.ECHO),
)
def horizontal_line(self, name):
"""Labeled horizontal line for debugging."""
if self.debug:
sys.stderr.write(
"------------------------------------------- %s\n" % name
)
def status(self):
"""Print debug message with status info for the child."""
if self.debug:
canon, echo = self.get_canon_echo_attrs()
sys.stderr.write("canon: %s, echo: %s\n" % (
"on" if canon else "off",
"on" if echo else "off",
))
sys.stderr.write("input: %s\n" % self.input_on())
sys.stderr.write("bg: %s\n" % self.background())
sys.stderr.write("\n")
def input_on(self):
"""True if keyboard input is enabled on the master_fd pty."""
return self.get_canon_echo_attrs() == (False, False)
def background(self):
"""True if pgid is in a background pgroup of master_fd's terminal."""
return self.pgid != os.tcgetpgrp(self.master_fd)
def tstp(self):
"""Send SIGTSTP to the controlled process."""
self.horizontal_line("tstp")
os.killpg(self.pgid, signal.SIGTSTP)
time.sleep(self.sleep_time)
def cont(self):
self.horizontal_line("cont")
os.killpg(self.pgid, signal.SIGCONT)
time.sleep(self.sleep_time)
def fg(self):
self.horizontal_line("fg")
with log.ignore_signal(signal.SIGTTOU):
os.tcsetpgrp(self.master_fd, os.getpgid(self.pid))
time.sleep(self.sleep_time)
def bg(self):
self.horizontal_line("bg")
with log.ignore_signal(signal.SIGTTOU):
os.tcsetpgrp(self.master_fd, os.getpgrp())
time.sleep(self.sleep_time)
def write(self, byte_string):
self.horizontal_line("write '%s'" % byte_string.decode("utf-8"))
os.write(self.master_fd, byte_string)
def wait(self, condition):
start = time.time()
while (time.time() - start) < self.timeout and not condition():
time.sleep(1e-2)
assert condition()
def wait_enabled(self):
self.wait(lambda: self.input_on() and not self.background())
def wait_disabled(self):
self.wait(lambda: not self.input_on() and self.background())
def wait_disabled_fg(self):
self.wait(lambda: not self.input_on() and not self.background())
def proc_status(self):
status = self.ps("-p", str(self.pid), "-o", "stat", output=str)
status = re.split(r"\s+", status.strip())
return status[1]
def wait_stopped(self):
self.wait(lambda: "T" in self.proc_status())
def wait_running(self):
self.wait(lambda: "T" not in self.proc_status())
class PseudoShell(object):
"""Sets up master and child processes with a PTY.
You can create a ``PseudoShell`` if you want to test how some
function responds to terminal input. This is a pseudo-shell from a
job control perspective; ``master_function`` and ``child_function``
are set up with a pseudoterminal (pty) so that the master can drive
the child through process control signals and I/O.
The two functions should have signatures like this::
def master_function(proc, ctl, **kwargs)
def child_function(**kwargs)
``master_function`` is spawned in its own process and passed three
arguments:
proc
the ``multiprocessing.Process`` object representing the child
ctl
a ``ProcessController`` object tied to the child
kwargs
keyword arguments passed from ``PseudoShell.start()``.
``child_function`` is only passed ``kwargs`` delegated from
``PseudoShell.start()``.
``ctl.master_fd`` is the master end of a pseudoterminal whose slave
end is connected to ``sys.stdin`` in the child process. Both
processes share the same ``sys.stdout`` and ``sys.stderr`` as the
process instantiating ``PseudoShell``.
Here are the relationships between processes created::
._________________________________________________________.
| Child Process | pid 2
| - runs child_function | pgroup 2
|_________________________________________________________| session 1
^
| create process with master_fd connected to stdin
| stdout, stderr are the same as caller
._________________________________________________________.
| Master Process | pid 1
| - runs master_function | pgroup 1
| - uses ProcessController and master_fd to control child | session 1
|_________________________________________________________|
^
| create process
| stdin, stdout, stderr are the same as caller
._________________________________________________________.
| Caller | pid 0
| - Constructs, starts, joins PseudoShell | pgroup 0
| - provides master_function, child_function | session 0
|_________________________________________________________|
"""
def __init__(self, master_function, child_function):
self.proc = None
self.master_function = master_function
self.child_function = child_function
# these can be optionally set to change defaults
self.controller_timeout = 1
self.sleep_time = 0
def start(self, **kwargs):
"""Start the master and child processes.
Arguments:
kwargs (dict): arbitrary keyword arguments that will be
passed to master and child functions
The master process will create the child, then call
``master_function``. The child process will call
``child_function``.
"""
self.proc = multiprocessing.Process(
target=PseudoShell._set_up_and_run_master_function,
args=(self.master_function, self.child_function,
self.controller_timeout, self.sleep_time),
kwargs=kwargs,
)
self.proc.start()
def join(self):
"""Wait for the child process to finish, and return its exit code."""
self.proc.join()
return self.proc.exitcode
@staticmethod
def _set_up_and_run_child_function(
tty_name, stdout_fd, stderr_fd, ready, child_function, **kwargs):
"""Child process wrapper for PseudoShell.
Handles the mechanics of setting up a PTY, then calls
``child_function``.
"""
# new process group, like a command or pipeline launched by a shell
os.setpgrp()
# take controlling terminal and set up pty IO
stdin_fd = os.open(tty_name, os.O_RDWR)
os.dup2(stdin_fd, sys.stdin.fileno())
os.dup2(stdout_fd, sys.stdout.fileno())
os.dup2(stderr_fd, sys.stderr.fileno())
os.close(stdin_fd)
if kwargs.get("debug"):
sys.stderr.write(
"child: stdin.isatty(): %s\n" % sys.stdin.isatty())
# tell the parent that we're really running
if kwargs.get("debug"):
sys.stderr.write("child: ready!\n")
ready.value = True
try:
child_function(**kwargs)
except BaseException:
traceback.print_exc()
@staticmethod
def _set_up_and_run_master_function(
master_function, child_function, controller_timeout, sleep_time,
**kwargs):
"""Set up a pty, spawn a child process, and execute master_function.
Handles the mechanics of setting up a PTY, then calls
``master_function``.
"""
os.setsid() # new session; this process is the controller
master_fd, child_fd = os.openpty()
pty_name = os.ttyname(child_fd)
# take controlling terminal
pty_fd = os.open(pty_name, os.O_RDWR)
os.close(pty_fd)
ready = multiprocessing.Value('i', False)
child_process = multiprocessing.Process(
target=PseudoShell._set_up_and_run_child_function,
args=(pty_name, sys.stdout.fileno(), sys.stderr.fileno(),
ready, child_function),
kwargs=kwargs,
)
child_process.start()
# wait for subprocess to be running and connected.
while not ready.value:
time.sleep(1e-5)
if kwargs.get("debug"):
sys.stderr.write("pid: %d\n" % os.getpid())
sys.stderr.write("pgid: %d\n" % os.getpgrp())
sys.stderr.write("sid: %d\n" % os.getsid(0))
sys.stderr.write("tcgetpgrp: %d\n" % os.tcgetpgrp(master_fd))
sys.stderr.write("\n")
child_pgid = os.getpgid(child_process.pid)
sys.stderr.write("child pid: %d\n" % child_process.pid)
sys.stderr.write("child pgid: %d\n" % child_pgid)
sys.stderr.write("child sid: %d\n" % os.getsid(child_process.pid))
sys.stderr.write("\n")
sys.stderr.flush()
# set up master to ignore SIGTSTP, like a shell
signal.signal(signal.SIGTSTP, signal.SIG_IGN)
# call the master function once the child is ready
try:
controller = ProcessController(
child_process.pid, master_fd, debug=kwargs.get("debug"))
controller.timeout = controller_timeout
controller.sleep_time = sleep_time
error = master_function(child_process, controller, **kwargs)
except BaseException:
error = 1
traceback.print_exc()
child_process.join()
# return whether either the parent or child failed
return error or child_process.exitcode
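The pty wiring that ``PseudoShell`` automates can be sketched with only the standard library. This is an illustrative reduction, not Spack code: the message strings and variable names are invented, and a raw ``os.fork`` stands in for ``multiprocessing.Process``.

```python
import os

# A parent process writes to the master end of a pty; the child reads
# the slave end as if it were keyboard input, then answers on the same
# terminal. This is the same master/child relationship PseudoShell
# establishes between master_function and child_function.
master_fd, slave_fd = os.openpty()

pid = os.fork()
if pid == 0:
    # child: read one line from the slave end (its "terminal") and echo
    # a tagged reply back; _exit so no parent-side code runs here
    os.close(master_fd)
    data = os.read(slave_fd, 1024)
    os.write(slave_fd, b"child saw: " + data)
    os._exit(0)

# parent: drive the child through the master end
os.close(slave_fd)
os.write(master_fd, b"hello\n")
reply = b""
while b"child saw:" not in reply:
    reply += os.read(master_fd, 1024)
os.waitpid(pid, 0)
print(reply.decode(errors="replace"))
```

Because the slave is a real terminal with echo enabled, the parent reads back both the echoed "hello" and the child's reply, which is why the loop waits for the tagged message rather than the first read.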

@@ -5,7 +5,7 @@
#: major, minor, patch version for Spack, in a tuple
spack_version_info = (0, 14, 1)
spack_version_info = (0, 14, 2)
#: String containing Spack version joined with .'s
spack_version = '.'.join(str(v) for v in spack_version_info)

@@ -145,6 +145,7 @@ def clean_environment():
env.unset('CPATH')
env.unset('LD_RUN_PATH')
env.unset('DYLD_LIBRARY_PATH')
env.unset('DYLD_FALLBACK_LIBRARY_PATH')
build_lang = spack.config.get('config:build_language')
if build_lang:

@@ -153,6 +153,21 @@ def _do_patch_config_guess(self):
raise RuntimeError('Failed to find suitable config.guess')
@run_before('configure')
def _set_autotools_environment_variables(self):
"""Many autotools builds use a version of mknod.m4 that fails when
running as root unless FORCE_UNSAFE_CONFIGURE is set to 1.
We set this to 1 and expect the user to take responsibility if
they are running as root. They have to anyway, as this variable
doesn't actually prevent configure from doing bad things as root.
Without it, configure just fails halfway through, but it can
still run things *before* this check. Forcing this just removes a
nuisance -- this is not circumventing any real protection.
"""
os.environ["FORCE_UNSAFE_CONFIGURE"] = "1"
@run_after('configure')
def _do_patch_libtool(self):
"""If configure generates a "libtool" script that does not correctly
@@ -169,7 +184,7 @@ def _do_patch_libtool(self):
line = 'wl="-Wl,"\n'
if line == 'pic_flag=""\n':
line = 'pic_flag="{0}"\n'\
.format(self.compiler.pic_flag)
.format(self.compiler.cc_pic_flag)
sys.stdout.write(line)
@property
@@ -219,11 +234,11 @@ def autoreconf(self, spec, prefix):
# This line is what is needed most of the time
# --install, --verbose, --force
autoreconf_args = ['-ivf']
if 'pkgconfig' in spec:
autoreconf_args += [
'-I',
os.path.join(spec['pkgconfig'].prefix, 'share', 'aclocal'),
]
for dep in spec.dependencies(deptype='build'):
if os.path.exists(dep.prefix.share.aclocal):
autoreconf_args.extend([
'-I', dep.prefix.share.aclocal
])
autoreconf_args += self.autoreconf_extra_args
m.autoreconf(*autoreconf_args)

@@ -0,0 +1,40 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.util.url
import spack.package
class SourceforgePackage(spack.package.PackageBase):
"""Mixin that takes care of setting url and mirrors for Sourceforge
packages."""
#: Path of the package in a Sourceforge mirror
sourceforge_mirror_path = None
#: List of Sourceforge mirrors used by Spack
base_mirrors = [
'https://prdownloads.sourceforge.net/',
'https://freefr.dl.sourceforge.net/',
'https://netcologne.dl.sourceforge.net/',
'https://pilotfiber.dl.sourceforge.net/',
'https://downloads.sourceforge.net/',
'http://kent.dl.sourceforge.net/sourceforge/'
]
@property
def urls(self):
self._ensure_sourceforge_mirror_path_is_set_or_raise()
return [
spack.util.url.join(m, self.sourceforge_mirror_path,
resolve_href=True)
for m in self.base_mirrors
]
def _ensure_sourceforge_mirror_path_is_set_or_raise(self):
if self.sourceforge_mirror_path is None:
cls_name = type(self).__name__
msg = ('{0} must define a `sourceforge_mirror_path` attribute'
' [none defined]')
raise AttributeError(msg.format(cls_name))
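The mixin pattern used by ``SourceforgePackage`` (and the ``SourcewarePackage`` and ``XorgPackage`` classes below) can be sketched outside Spack. In this standalone illustration the class names are invented and simple string concatenation stands in for ``spack.util.url.join``:

```python
class SourceforgeMirrorMixin(object):
    """Stand-in for Spack's SourceforgePackage mixin: a subclass sets
    sourceforge_mirror_path and gets a full URL per known mirror."""
    sourceforge_mirror_path = None
    base_mirrors = [
        "https://prdownloads.sourceforge.net/",
        "https://downloads.sourceforge.net/",
    ]

    @property
    def urls(self):
        # fail loudly if the subclass forgot to set the path
        if self.sourceforge_mirror_path is None:
            raise AttributeError(
                "%s must define a `sourceforge_mirror_path` attribute"
                % type(self).__name__)
        return [m + self.sourceforge_mirror_path for m in self.base_mirrors]


class Tcl(SourceforgeMirrorMixin):
    # path of the tarball relative to any Sourceforge mirror root
    sourceforge_mirror_path = "tcl/tcl8.6.5-src.tar.gz"


print(Tcl().urls[0])
```

Keeping only the relative path in the package and the mirror roots in the mixin is what lets Spack retry the same tarball across mirrors without per-package URL lists.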

@@ -0,0 +1,37 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.util.url
import spack.package
class SourcewarePackage(spack.package.PackageBase):
"""Mixin that takes care of setting url and mirrors for Sourceware.org
packages."""
#: Path of the package in a Sourceware mirror
sourceware_mirror_path = None
#: List of Sourceware mirrors used by Spack
base_mirrors = [
'https://sourceware.org/pub/',
'https://mirrors.kernel.org/sourceware/',
'https://ftp.gwdg.de/pub/linux/sources.redhat.com/'
]
@property
def urls(self):
self._ensure_sourceware_mirror_path_is_set_or_raise()
return [
spack.util.url.join(m, self.sourceware_mirror_path,
resolve_href=True)
for m in self.base_mirrors
]
def _ensure_sourceware_mirror_path_is_set_or_raise(self):
if self.sourceware_mirror_path is None:
cls_name = type(self).__name__
msg = ('{0} must define a `sourceware_mirror_path` attribute'
' [none defined]')
raise AttributeError(msg.format(cls_name))

@@ -0,0 +1,37 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.util.url
import spack.package
class XorgPackage(spack.package.PackageBase):
"""Mixin that takes care of setting url and mirrors for x.org
packages."""
#: Path of the package in a x.org mirror
xorg_mirror_path = None
#: List of x.org mirrors used by Spack
base_mirrors = [
'https://www.x.org/archive/individual/',
'https://mirrors.ircam.fr/pub/x.org/individual/',
'http://xorg.mirrors.pair.com/individual/'
]
@property
def urls(self):
self._ensure_xorg_mirror_path_is_set_or_raise()
return [
spack.util.url.join(m, self.xorg_mirror_path,
resolve_href=True)
for m in self.base_mirrors
]
def _ensure_xorg_mirror_path_is_set_or_raise(self):
if self.xorg_mirror_path is None:
cls_name = type(self).__name__
msg = ('{0} must define a `xorg_mirror_path` attribute'
' [none defined]')
raise AttributeError(msg.format(cls_name))

@@ -609,7 +609,7 @@ def get_concrete_spec(args):
if spec_str:
try:
spec = Spec(spec_str)
spec = find_matching_specs(spec_str)[0]
spec.concretize()
except SpecError as spec_error:
tty.error('Unable to concretize spec {0}'.format(args.spec))

@@ -283,8 +283,6 @@ def ci_rebuild(args):
spack_cmd = exe.which('spack')
os.environ['FORCE_UNSAFE_CONFIGURE'] = '1'
cdash_report_dir = os.path.join(ci_artifact_dir, 'cdash_report')
temp_dir = os.path.join(ci_artifact_dir, 'jobs_scratch_dir')
job_log_dir = os.path.join(temp_dir, 'logs')

@@ -250,13 +250,13 @@ def enable_new_dtags(self):
PrgEnv_compiler = None
def __init__(self, cspec, operating_system, target,
paths, modules=[], alias=None, environment=None,
paths, modules=None, alias=None, environment=None,
extra_rpaths=None, enable_implicit_rpaths=None,
**kwargs):
self.spec = cspec
self.operating_system = str(operating_system)
self.target = target
self.modules = modules
self.modules = modules or []
self.alias = alias
self.extra_rpaths = extra_rpaths
self.enable_implicit_rpaths = enable_implicit_rpaths
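The change from ``modules=[]`` to ``modules=None`` above avoids Python's mutable-default-argument pitfall: a default list is created once, at function definition time, and shared by every call. A minimal standalone illustration (the function names are invented for this sketch):

```python
def broken(modules=[]):
    # the SAME list object is reused across calls, so state leaks
    modules.append("gcc")
    return modules


def fixed(modules=None):
    # the None sentinel plus `or []` gives each call a fresh list,
    # exactly as the Compiler.__init__ fix does
    modules = modules or []
    modules.append("gcc")
    return modules


broken()
broken()
print(len(broken()))  # 3: three calls all appended to one shared list
print(len(fixed()))   # 1: each call starts from an empty list
```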
@@ -317,6 +317,10 @@ def _get_compiler_link_paths(cls, paths):
first_compiler = next((c for c in paths if c), None)
if not first_compiler:
return []
if not cls.verbose_flag():
# In this case there is no mechanism to learn what link directories
# are used by the compiler
return []
try:
tmpdir = tempfile.mkdtemp(prefix='spack-implicit-link-info')
@@ -406,6 +410,30 @@ def c11_flag(self):
"the C11 standard",
"c11_flag")
@property
def cc_pic_flag(self):
"""Returns the flag used by the C compiler to produce
Position Independent Code (PIC)."""
return '-fPIC'
@property
def cxx_pic_flag(self):
"""Returns the flag used by the C++ compiler to produce
Position Independent Code (PIC)."""
return '-fPIC'
@property
def f77_pic_flag(self):
"""Returns the flag used by the F77 compiler to produce
Position Independent Code (PIC)."""
return '-fPIC'
@property
def fc_pic_flag(self):
"""Returns the flag used by the FC compiler to produce
Position Independent Code (PIC)."""
return '-fPIC'
#
# Compiler classes have methods for querying the version of
# specific compiler executables. This is used when discovering compilers.

@@ -4,6 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.compiler
import re
class Arm(spack.compiler.Compiler):
@@ -35,7 +36,20 @@ class Arm(spack.compiler.Compiler):
# InstalledDir:
# /opt/arm/arm-hpc-compiler-19.0_Generic-AArch64_RHEL-7_aarch64-linux/bin
version_argument = '--version'
version_regex = r'Arm C\/C\+\+\/Fortran Compiler version ([^ )]+)'
version_regex = r'Arm C\/C\+\+\/Fortran Compiler version ([\d\.]+) '\
r'\(build number (\d+)\) '
@classmethod
def extract_version_from_output(cls, output):
"""Extracts the version from compiler's output."""
match = re.search(cls.version_regex, output)
temp = 'unknown'
if match:
if match.group(1).count('.') == 1:
temp = match.group(1) + ".0." + match.group(2)
else:
temp = match.group(1) + "." + match.group(2)
return temp
@classmethod
def verbose_flag(cls):
@@ -66,7 +80,19 @@ def c11_flag(self):
return "-std=c11"
@property
def pic_flag(self):
def cc_pic_flag(self):
return "-fPIC"
@property
def cxx_pic_flag(self):
return "-fPIC"
@property
def f77_pic_flag(self):
return "-fPIC"
@property
def fc_pic_flag(self):
return "-fPIC"
required_libs = ['libclang', 'libflang']
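The new Arm version handling stitches the reported ``major.minor`` and build number into a longer version string. A self-contained sketch of the same regex and composition, using an input shaped like the header comment in the diff (the sample output string is illustrative):

```python
import re

# same pattern as the new Arm.version_regex: capture "<digits and dots>"
# and the "(build number N)" that follows it
version_regex = (r'Arm C\/C\+\+\/Fortran Compiler version ([\d\.]+) '
                 r'\(build number (\d+)\) ')


def extract(output):
    match = re.search(version_regex, output)
    if not match:
        return "unknown"
    if match.group(1).count('.') == 1:
        # "19.0" + build 73 -> "19.0.0.73" (pad the missing patch level)
        return match.group(1) + ".0." + match.group(2)
    return match.group(1) + "." + match.group(2)


print(extract("Arm C/C++/Fortran Compiler version 19.0 (build number 73) "))
```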

@@ -68,5 +68,17 @@ def c11_flag(self):
'< 8.5')
@property
def pic_flag(self):
def cc_pic_flag(self):
return "-h PIC"
@property
def cxx_pic_flag(self):
return "-h PIC"
@property
def f77_pic_flag(self):
return "-h PIC"
@property
def fc_pic_flag(self):
return "-h PIC"

@@ -174,7 +174,19 @@ def c11_flag(self):
return "-std=c11"
@property
def pic_flag(self):
def cc_pic_flag(self):
return "-fPIC"
@property
def cxx_pic_flag(self):
return "-fPIC"
@property
def f77_pic_flag(self):
return "-fPIC"
@property
def fc_pic_flag(self):
return "-fPIC"
required_libs = ['libclang']

@@ -59,9 +59,17 @@ def c11_flag(self):
return "-std=c11"
@property
def pic_flag(self):
def cc_pic_flag(self):
return "-KPIC"
def setup_custom_environment(self, pkg, env):
env.append_flags('fcc_ENV', '-Nclang')
env.append_flags('FCC_ENV', '-Nclang')
@property
def cxx_pic_flag(self):
return "-KPIC"
@property
def f77_pic_flag(self):
return "-KPIC"
@property
def fc_pic_flag(self):
return "-KPIC"

@@ -110,7 +110,19 @@ def c11_flag(self):
return "-std=c11"
@property
def pic_flag(self):
def cc_pic_flag(self):
return "-fPIC"
@property
def cxx_pic_flag(self):
return "-fPIC"
@property
def f77_pic_flag(self):
return "-fPIC"
@property
def fc_pic_flag(self):
return "-fPIC"
required_libs = ['libgcc', 'libgfortran']

@@ -92,7 +92,19 @@ def c11_flag(self):
return "-std=c1x"
@property
def pic_flag(self):
def cc_pic_flag(self):
return "-fPIC"
@property
def cxx_pic_flag(self):
return "-fPIC"
@property
def f77_pic_flag(self):
return "-fPIC"
@property
def fc_pic_flag(self):
return "-fPIC"
@property

@@ -41,7 +41,11 @@ def cxx11_flag(self):
return "-std=c++11"
@property
def pic_flag(self):
def f77_pic_flag(self):
return "-PIC"
@property
def fc_pic_flag(self):
return "-PIC"
# Unlike other compilers, the NAG compiler passes options to GCC, which

@@ -46,7 +46,19 @@ def cxx11_flag(self):
return "-std=c++11"
@property
def pic_flag(self):
def cc_pic_flag(self):
return "-fpic"
@property
def cxx_pic_flag(self):
return "-fpic"
@property
def f77_pic_flag(self):
return "-fpic"
@property
def fc_pic_flag(self):
return "-fpic"
required_libs = ['libpgc', 'libpgf90']

@@ -70,7 +70,19 @@ def c11_flag(self):
'< 12.1')
@property
def pic_flag(self):
def cc_pic_flag(self):
return "-qpic"
@property
def cxx_pic_flag(self):
return "-qpic"
@property
def f77_pic_flag(self):
return "-qpic"
@property
def fc_pic_flag(self):
return "-qpic"
@property

@@ -1150,6 +1150,12 @@ def _remove(self, spec):
del self._data[key]
for dep in rec.spec.dependencies(_tracked_deps):
# FIXME: the two lines below needs to be updated once #11983 is
# FIXME: fixed. The "if" statement should be deleted and specs are
# FIXME: to be removed from dependents by hash and not by name.
# FIXME: See https://github.com/spack/spack/pull/15777#issuecomment-607818955
if dep._dependents.get(spec.name):
del dep._dependents[spec.name]
self._decrement_ref_count(dep)
if rec.deprecated_for:

@@ -693,7 +693,10 @@ def main(argv=None):
# Spack clears these variables before building and installing packages,
# but needs to know the prior state for commands like `spack load` and
# `spack env activate` that modify the user environment.
for var in ('LD_LIBRARY_PATH', 'DYLD_LIBRARY_PATH'):
recovered_vars = (
'LD_LIBRARY_PATH', 'DYLD_LIBRARY_PATH', 'DYLD_FALLBACK_LIBRARY_PATH'
)
for var in recovered_vars:
stored_var_name = 'SPACK_%s' % var
if stored_var_name in os.environ:
os.environ[var] = os.environ[stored_var_name]

@@ -31,6 +31,9 @@
from spack.build_systems.meson import MesonPackage
from spack.build_systems.sip import SIPPackage
from spack.build_systems.gnu import GNUMirrorPackage
from spack.build_systems.sourceforge import SourceforgePackage
from spack.build_systems.sourceware import SourcewarePackage
from spack.build_systems.xorg import XorgPackage
from spack.mixins import filter_compiler_wrappers

@@ -240,6 +240,7 @@ def update_package(self, pkg_name):
# Add it again under the appropriate tags
for tag in getattr(package, 'tags', []):
tag = tag.lower()
self._tag_dict[tag].append(package.name)
@@ -1002,6 +1003,7 @@ def packages_with_tags(self, *tags):
index = self.tag_index
for t in tags:
t = t.lower()
v &= set(index[t])
return sorted(v)
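The two one-line additions above lowercase tags both when a package is indexed and when tags are queried, making the tag index case-insensitive. A standalone sketch of that behavior (the function and variable names are invented for illustration):

```python
from collections import defaultdict

# tag -> list of package names, with tags normalized to lowercase on
# insertion, mirroring update_package in the diff
tag_index = defaultdict(list)


def update_package(name, tags):
    for tag in tags:
        tag_index[tag.lower()].append(name)


def packages_with_tags(*tags):
    # intersect the package sets for each tag, lowercasing queries too,
    # mirroring packages_with_tags in the diff
    result = None
    for t in tags:
        pkgs = set(tag_index[t.lower()])
        result = pkgs if result is None else result & pkgs
    return sorted(result)


update_package("hdf5", ["HPC", "io"])
update_package("openmpi", ["hpc", "mpi"])
print(packages_with_tags("HPC"))  # ['hdf5', 'openmpi']
```

Normalizing on both sides is what matters: lowercasing only at query time would still miss packages that declared mixed-case tags.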

File diff suppressed because it is too large.

File diff suppressed because it is too large.

@@ -1,282 +0,0 @@
%=============================================================================
% Generate
%=============================================================================
%-----------------------------------------------------------------------------
% Version semantics
%-----------------------------------------------------------------------------
% versions are declared w/priority -- declared with priority implies declared
version_declared(P, V) :- version_declared(P, V, _).
% If something is a package, it has only one version and that must be a
% possible version.
1 { version(P, V) : version_possible(P, V) } 1 :- node(P).
% If a version is declared but conflicted, it's not possible.
version_possible(P, V) :- version_declared(P, V), not version_conflict(P, V).
version_weight(P, V, N) :- version(P, V), version_declared(P, V, N).
#defined version_conflict/2.
%-----------------------------------------------------------------------------
% Dependency semantics
%-----------------------------------------------------------------------------
% Dependencies of any type imply that one package "depends on" another
depends_on(P, D) :- depends_on(P, D, _).
% declared dependencies are real if they're not virtual
depends_on(P, D, T) :- declared_dependency(P, D, T), not virtual(D), node(P).
% if you declare a dependency on a virtual, you depend on one of its providers
1 { depends_on(P, Q, T) : provides_virtual(Q, V) } 1
:- declared_dependency(P, V, T), virtual(V), node(P).
% if a virtual was required by some root spec, one provider is in the DAG
1 { node(P) : provides_virtual(P, V) } 1 :- virtual_node(V).
% for any virtual, there can be at most one provider in the DAG
provider(P, V) :- node(P), provides_virtual(P, V).
0 { provider(P, V) : node(P) } 1 :- virtual(V).
% give dependents the virtuals they want
provider_weight(D, N)
:- virtual(V), depends_on(P, D), provider(D, V),
pkg_provider_preference(P, V, D, N).
provider_weight(D, N)
:- virtual(V), depends_on(P, D), provider(D, V),
not pkg_provider_preference(P, V, D, _),
default_provider_preference(V, D, N).
% if there's no preference for something, it costs 100 to discourage its
% use with minimization
provider_weight(D, 100)
:- virtual(V), depends_on(P, D), provider(D, V),
not pkg_provider_preference(P, V, D, _),
not default_provider_preference(V, D, _).
% all nodes must be reachable from some root
needed(D) :- root(D), node(D).
needed(D) :- root(P), depends_on(P, D).
needed(D) :- needed(P), depends_on(P, D), node(P).
:- node(P), not needed(P).
% real dependencies imply new nodes.
node(D) :- node(P), depends_on(P, D).
% do not warn if generated program contains none of these.
#defined depends_on/3.
#defined declared_dependency/3.
#defined virtual/1.
#defined virtual_node/1.
#defined provides_virtual/2.
#defined pkg_provider_preference/4.
#defined default_provider_preference/3.
#defined root/1.
%-----------------------------------------------------------------------------
% Variant semantics
%-----------------------------------------------------------------------------
% one variant value for single-valued variants.
1 { variant_value(P, V, X) : variant_possible_value(P, V, X) } 1
:- node(P), variant(P, V), variant_single_value(P, V).
% at least one variant value for multi-valued variants.
1 { variant_value(P, V, X) : variant_possible_value(P, V, X) }
:- node(P), variant(P, V), not variant_single_value(P, V).
% if a variant is set to anything, it is considered 'set'.
variant_set(P, V) :- variant_set(P, V, _).
% variant_set is an explicitly set variant value. If it's not 'set',
% we revert to the default value. If it is set, we force the set value
variant_value(P, V, X) :- node(P), variant(P, V), variant_set(P, V, X).
% prefer default values.
variant_not_default(P, V, X, 1)
:- variant_value(P, V, X),
not variant_default_value(P, V, X),
node(P).
variant_not_default(P, V, X, 0)
:- variant_value(P, V, X),
variant_default_value(P, V, X),
node(P).
% suppress warnings about this atom being unset. It's only set if some
% spec or some package sets it, and without this, clingo will give
% warnings like 'info: atom does not occur in any rule head'.
#defined variant/2.
#defined variant_set/3.
#defined variant_single_value/2.
#defined variant_default_value/3.
#defined variant_possible_value/3.
%-----------------------------------------------------------------------------
% Platform/OS semantics
%-----------------------------------------------------------------------------
% one platform, os per node
% TODO: convert these to use optimization, like targets.
1 { node_platform(P, A) : node_platform(P, A) } 1 :- node(P).
1 { node_os(P, A) : node_os(P, A) } 1 :- node(P).
% arch fields for pkg P are set if set to anything
node_platform_set(P) :- node_platform_set(P, _).
node_os_set(P) :- node_os_set(P, _).
% if no platform/os is set, fall back to the defaults
node_platform(P, A)
:- node(P), not node_platform_set(P), node_platform_default(A).
node_os(P, A) :- node(P), not node_os_set(P), node_os_default(A).
% setting os/platform on a node is a hard constraint
node_platform(P, A) :- node(P), node_platform_set(P, A).
node_os(P, A) :- node(P), node_os_set(P, A).
% avoid info warnings (see variants)
#defined node_platform_set/2.
#defined node_os_set/2.
%-----------------------------------------------------------------------------
% Target semantics
%-----------------------------------------------------------------------------
% one target per node -- optimization will pick the "best" one
1 { node_target(P, T) : target(T) } 1 :- node(P).
% can't use targets on node if the compiler for the node doesn't support them
:- node_target(P, T), not compiler_supports_target(C, V, T),
node_compiler(P, C), node_compiler_version(P, C, V).
% if a target is set explicitly, respect it
node_target(P, T) :- node(P), node_target_set(P, T).
% each node has the weight of its assigned target
node_target_weight(P, N) :- node(P), node_target(P, T), target_weight(T, N).
#defined node_target_set/2.
%-----------------------------------------------------------------------------
% Compiler semantics
%-----------------------------------------------------------------------------
% one compiler per node
1 { node_compiler(P, C) : compiler(C) } 1 :- node(P).
1 { node_compiler_version(P, C, V) : compiler_version(C, V) } 1 :- node(P).
1 { compiler_weight(P, N) : compiler_weight(P, N) } 1 :- node(P).
% dependencies imply we should try to match hard compiler constraints
% todo: look at what to do about intersecting constraints here. we'd
% ideally go with the "lowest" pref in the DAG
node_compiler_match_pref(P, C) :- node_compiler_hard(P, C).
node_compiler_match_pref(D, C)
:- depends_on(P, D), node_compiler_match_pref(P, C),
not node_compiler_hard(D, _).
compiler_match(P, 1) :- node_compiler(P, C), node_compiler_match_pref(P, C).
node_compiler_version_match_pref(P, C, V)
:- node_compiler_version_hard(P, C, V).
node_compiler_version_match_pref(D, C, V)
:- depends_on(P, D), node_compiler_version_match_pref(P, C, V),
not node_compiler_version_hard(D, C, _).
compiler_version_match(P, 1)
:- node_compiler_version(P, C, V),
node_compiler_version_match_pref(P, C, V).
#defined node_compiler_hard/2.
#defined node_compiler_version_hard/3.
% compilers weighted by preference according to packages.yaml
compiler_weight(P, N)
:- node_compiler(P, C), node_compiler_version(P, C, V),
node_compiler_preference(P, C, V, N).
compiler_weight(P, N)
:- node_compiler(P, C), node_compiler_version(P, C, V),
not node_compiler_preference(P, C, _, _),
default_compiler_preference(C, V, N).
compiler_weight(P, 100)
:- node_compiler(P, C), node_compiler_version(P, C, V),
not node_compiler_preference(P, C, _, _),
not default_compiler_preference(C, _, _).
#defined node_compiler_preference/4.
#defined default_compiler_preference/3.
%-----------------------------------------------------------------------------
% Compiler flags
%-----------------------------------------------------------------------------
% propagate flags when compilers match
inherit_flags(P, D)
:- depends_on(P, D), node_compiler(P, C), node_compiler(D, C),
compiler(C), flag_type(T).
node_flag_inherited(D, T, F) :- node_flag_set(P, T, F), inherit_flags(P, D).
node_flag_inherited(D, T, F)
:- node_flag_inherited(P, T, F), inherit_flags(P, D).
% node with flags set to anything is "set"
node_flag_set(P) :- node_flag_set(P, _, _).
% remember where flags came from
node_flag_source(P, P) :- node_flag_set(P).
node_flag_source(D, Q) :- node_flag_source(P, Q), inherit_flags(P, D).
% compiler flags from compilers.yaml are put on nodes if compiler matches
node_flag(P, T, F),
node_flag_compiler_default(P)
:- not node_flag_set(P), compiler_version_flag(C, V, T, F),
node_compiler(P, C), node_compiler_version(P, C, V),
flag_type(T), compiler(C), compiler_version(C, V).
% if a flag is set to something or inherited, it's included
node_flag(P, T, F) :- node_flag_set(P, T, F).
node_flag(P, T, F) :- node_flag_inherited(P, T, F).
% if no node flags are set for a type, there are no flags.
no_flags(P, T) :- not node_flag(P, T, _), node(P), flag_type(T).
#defined compiler_version_flag/4.
#defined node_flag/3.
#defined node_flag_set/3.
%-----------------------------------------------------------------------------
% How to optimize the spec (high to low priority)
%-----------------------------------------------------------------------------
% weight root preferences higher
%
% TODO: how best to deal with this issue? It's not clear how best to
% weight all the constraints. Without this root preference, `spack solve
% hdf5` will pick mpich instead of openmpi, even if openmpi is the
% preferred provider, because openmpi has a version constraint on hwloc.
% It ends up choosing between settling for an old version of hwloc, or
% picking the second-best provider. This workaround weights root
% preferences higher so that hdf5's prefs are more important, but it's
% not clear this is a general solution. It would be nice to weight by
% distance to root, but that seems to slow down the solve a lot.
%
% One option is to make preferences hard constraints. Or maybe we need
% to look more closely at where a constraint came from and factor that
% into our weights. e.g., a non-default variant resulting from a version
% constraint counts like a version constraint. Needs more thought later.
%
root(D, 2) :- root(D), node(D).
root(D, 1) :- not root(D), node(D).
% prefer default variants
#minimize { N*R@10,P,V,X : variant_not_default(P, V, X, N), root(P, R) }.
% pick most preferred virtual providers
#minimize{ N*R@9,D : provider_weight(D, N), root(P, R) }.
% prefer more recent versions.
#minimize{ N@8,P,V : version_weight(P, V, N) }.
% compiler preferences
#maximize{ N@7,P : compiler_match(P, N) }.
#minimize{ N@6,P : compiler_weight(P, N) }.
% fastest target for node
% TODO: if these are slightly different by compiler (e.g., skylake is
% best, gcc supports skylake and broadwell, clang's best is haswell)
% things seem to get really slow.
#minimize{ N@5,P : node_target_weight(P, N) }.
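The `#minimize`/`#maximize` directives above form a lexicographic objective: all weights at priority `@10` are compared first, and only ties fall through to `@9`, `@8`, and so on. The `root(D, 2)`/`root(D, 1)` facts supply the multiplier `R`, so a non-default variant on the root costs twice what the same variant costs on a dependency. A rough Python sketch of this comparison (hypothetical illustration, not Spack or clingo code; maximize terms can be modeled by negating their weights):

```python
# Priority levels from the ASP program, highest (most important) first.
PRIORITIES = (10, 9, 8, 7, 6, 5)

def objective(terms):
    """Sum (priority, weight) terms from one candidate model into a tuple.

    Comparing the resulting tuples mimics clingo's lexicographic
    optimization: the total at @10 dominates, then @9, and so on.
    """
    totals = dict.fromkeys(PRIORITIES, 0)
    for priority, weight in terms:
        totals[priority] += weight
    return tuple(totals[p] for p in PRIORITIES)

# One non-default variant on the root (weight 1 * R=2 at @10) costs the
# same as two non-default variants on a dependency (2 * R=1 at @10).
assert objective([(10, 1 * 2)]) == objective([(10, 2 * 1)])

# A model with arbitrarily bad version weights at @8 still beats a model
# with any nonzero variant penalty at @10 -- higher priorities dominate.
assert objective([(8, 100)]) < objective([(10, 1)])
```

This is why the workaround above matters: multiplying by `R` shifts weight *within* a priority level, whereas moving a constraint to a different `@p` changes which models are comparable at all.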


@@ -219,3 +219,95 @@ def test_define_from_variant(self):
with pytest.raises(KeyError, match="not a variant"):
pkg.define_from_variant('NONEXISTENT')
@pytest.mark.usefixtures('config', 'mock_packages')
class TestGNUMirrorPackage(object):
def test_define(self):
s = Spec('mirror-gnu')
s.concretize()
pkg = spack.repo.get(s)
s = Spec('mirror-gnu-broken')
s.concretize()
pkg_broken = spack.repo.get(s)
cls_name = type(pkg_broken).__name__
with pytest.raises(AttributeError,
match=r'{0} must define a `gnu_mirror_path` '
r'attribute \[none defined\]'
.format(cls_name)):
pkg_broken.urls
assert pkg.urls[0] == 'https://ftpmirror.gnu.org/' \
'make/make-4.2.1.tar.gz'
@pytest.mark.usefixtures('config', 'mock_packages')
class TestSourceforgePackage(object):
def test_define(self):
s = Spec('mirror-sourceforge')
s.concretize()
pkg = spack.repo.get(s)
s = Spec('mirror-sourceforge-broken')
s.concretize()
pkg_broken = spack.repo.get(s)
cls_name = type(pkg_broken).__name__
with pytest.raises(AttributeError,
match=r'{0} must define a `sourceforge_mirror_path`'
r' attribute \[none defined\]'
.format(cls_name)):
pkg_broken.urls
assert pkg.urls[0] == 'https://prdownloads.sourceforge.net/' \
'tcl/tcl8.6.5-src.tar.gz'
@pytest.mark.usefixtures('config', 'mock_packages')
class TestSourcewarePackage(object):
def test_define(self):
s = Spec('mirror-sourceware')
s.concretize()
pkg = spack.repo.get(s)
s = Spec('mirror-sourceware-broken')
s.concretize()
pkg_broken = spack.repo.get(s)
cls_name = type(pkg_broken).__name__
with pytest.raises(AttributeError,
match=r'{0} must define a `sourceware_mirror_path` '
r'attribute \[none defined\]'
.format(cls_name)):
pkg_broken.urls
assert pkg.urls[0] == 'https://sourceware.org/pub/' \
'bzip2/bzip2-1.0.8.tar.gz'
@pytest.mark.usefixtures('config', 'mock_packages')
class TestXorgPackage(object):
def test_define(self):
s = Spec('mirror-xorg')
s.concretize()
pkg = spack.repo.get(s)
s = Spec('mirror-xorg-broken')
s.concretize()
pkg_broken = spack.repo.get(s)
cls_name = type(pkg_broken).__name__
with pytest.raises(AttributeError,
match=r'{0} must define a `xorg_mirror_path` '
r'attribute \[none defined\]'
.format(cls_name)):
pkg_broken.urls
assert pkg.urls[0] == 'https://www.x.org/archive/individual/' \
'util/util-macros-1.19.1.tar.bz2'


@@ -37,6 +37,14 @@ def test_list_tags():
assert 'cloverleaf3d' in output
assert 'hdf5' not in output
output = list('--tags', 'hpc')
assert 'nek5000' in output
assert 'mfem' in output
output = list('--tags', 'HPC')
assert 'nek5000' in output
assert 'mfem' in output
def test_list_format_name_only():
output = list('--format', 'name_only')


@@ -4,6 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import pytest
import llnl.util.tty as tty
import spack.store
from spack.main import SpackCommand, SpackCommandError
@@ -30,7 +31,7 @@ def test_multiple_matches(mutable_database):
@pytest.mark.db
def test_installed_dependents(mutable_database):
"""Test can't uninstall when ther are installed dependents."""
"""Test can't uninstall when there are installed dependents."""
with pytest.raises(SpackCommandError):
uninstall('-y', 'libelf')
@@ -190,3 +191,16 @@ def db_specs():
assert len(mpileaks_specs) == 3
assert len(callpath_specs) == 3 # back to 3
assert len(mpi_specs) == 3
@pytest.mark.db
@pytest.mark.regression('15773')
def test_in_memory_consistency_when_uninstalling(
mutable_database, monkeypatch
):
"""Test that uninstalling doesn't raise warnings"""
def _warn(*args, **kwargs):
raise RuntimeError('a warning was triggered!')
monkeypatch.setattr(tty, 'warn', _warn)
# Now try to uninstall and check this doesn't trigger warnings
uninstall('-y', '-a')


@@ -17,13 +17,13 @@
import spack.compilers.arm
import spack.compilers.cce
import spack.compilers.clang
import spack.compilers.fj
import spack.compilers.gcc
import spack.compilers.intel
import spack.compilers.nag
import spack.compilers.pgi
import spack.compilers.xl
import spack.compilers.xl_r
import spack.compilers.fj
from spack.compiler import Compiler
@@ -222,18 +222,53 @@ def supported_flag_test(flag, flag_value_ref, spec=None):
# Tests for UnsupportedCompilerFlag exceptions from default
# implementations of flags.
def test_default_flags():
supported_flag_test("cc_rpath_arg", "-Wl,-rpath,")
supported_flag_test("cxx_rpath_arg", "-Wl,-rpath,")
supported_flag_test("f77_rpath_arg", "-Wl,-rpath,")
supported_flag_test("fc_rpath_arg", "-Wl,-rpath,")
supported_flag_test("linker_arg", "-Wl,")
unsupported_flag_test("openmp_flag")
unsupported_flag_test("cxx11_flag")
unsupported_flag_test("cxx14_flag")
unsupported_flag_test("cxx17_flag")
supported_flag_test("cxx98_flag", "")
unsupported_flag_test("c99_flag")
unsupported_flag_test("c11_flag")
supported_flag_test("cc_pic_flag", "-fPIC")
supported_flag_test("cxx_pic_flag", "-fPIC")
supported_flag_test("f77_pic_flag", "-fPIC")
supported_flag_test("fc_pic_flag", "-fPIC")
# Verify behavior of particular compiler definitions.
def test_clang_flags():
# Common
supported_flag_test("pic_flag", "-fPIC", "gcc@4.0")
def test_arm_flags():
supported_flag_test("openmp_flag", "-fopenmp", "arm@1.0")
supported_flag_test("cxx11_flag", "-std=c++11", "arm@1.0")
supported_flag_test("cxx14_flag", "-std=c++14", "arm@1.0")
supported_flag_test("cxx17_flag", "-std=c++1z", "arm@1.0")
supported_flag_test("c99_flag", "-std=c99", "arm@1.0")
supported_flag_test("c11_flag", "-std=c11", "arm@1.0")
supported_flag_test("cc_pic_flag", "-fPIC", "arm@1.0")
supported_flag_test("cxx_pic_flag", "-fPIC", "arm@1.0")
supported_flag_test("f77_pic_flag", "-fPIC", "arm@1.0")
supported_flag_test("fc_pic_flag", "-fPIC", "arm@1.0")
def test_cce_flags():
supported_flag_test("openmp_flag", "-h omp", "cce@1.0")
supported_flag_test("cxx11_flag", "-h std=c++11", "cce@1.0")
unsupported_flag_test("c99_flag", "cce@8.0")
supported_flag_test("c99_flag", "-h c99,noconform,gnu", "cce@8.1")
supported_flag_test("c99_flag", "-h stc=c99,noconform,gnu", "cce@8.4")
unsupported_flag_test("c11_flag", "cce@8.4")
supported_flag_test("c11_flag", "-h std=c11,noconform,gnu", "cce@8.5")
supported_flag_test("cc_pic_flag", "-h PIC", "cce@1.0")
supported_flag_test("cxx_pic_flag", "-h PIC", "cce@1.0")
supported_flag_test("f77_pic_flag", "-h PIC", "cce@1.0")
supported_flag_test("fc_pic_flag", "-h PIC", "cce@1.0")
def test_clang_flags():
# Apple Clang.
supported_flag_test(
"openmp_flag", "-Xpreprocessor -fopenmp", "clang@2.0.0-apple")
@@ -244,6 +279,13 @@ def test_clang_flags():
supported_flag_test("cxx14_flag", "-std=c++14", "clang@6.1.0-apple")
unsupported_flag_test("cxx17_flag", "clang@6.0.0-apple")
supported_flag_test("cxx17_flag", "-std=c++1z", "clang@6.1.0-apple")
supported_flag_test("c99_flag", "-std=c99", "clang@6.1.0-apple")
unsupported_flag_test("c11_flag", "clang@6.0.0-apple")
supported_flag_test("c11_flag", "-std=c11", "clang@6.1.0-apple")
supported_flag_test("cc_pic_flag", "-fPIC", "clang@2.0.0-apple")
supported_flag_test("cxx_pic_flag", "-fPIC", "clang@2.0.0-apple")
supported_flag_test("f77_pic_flag", "-fPIC", "clang@2.0.0-apple")
supported_flag_test("fc_pic_flag", "-fPIC", "clang@2.0.0-apple")
# non-Apple Clang.
supported_flag_test("openmp_flag", "-fopenmp", "clang@3.3")
@@ -255,12 +297,26 @@ def test_clang_flags():
unsupported_flag_test("cxx17_flag", "clang@3.4")
supported_flag_test("cxx17_flag", "-std=c++1z", "clang@3.5")
supported_flag_test("cxx17_flag", "-std=c++17", "clang@5.0")
supported_flag_test("c99_flag", "-std=c99", "clang@3.3")
unsupported_flag_test("c11_flag", "clang@6.0.0")
supported_flag_test("c11_flag", "-std=c11", "clang@6.1.0")
supported_flag_test("cc_pic_flag", "-fPIC", "clang@3.3")
supported_flag_test("cxx_pic_flag", "-fPIC", "clang@3.3")
supported_flag_test("f77_pic_flag", "-fPIC", "clang@3.3")
supported_flag_test("fc_pic_flag", "-fPIC", "clang@3.3")
def test_cce_flags():
supported_flag_test("openmp_flag", "-h omp", "cce@1.0")
supported_flag_test("cxx11_flag", "-h std=c++11", "cce@1.0")
supported_flag_test("pic_flag", "-h PIC", "cce@1.0")
def test_fj_flags():
supported_flag_test("openmp_flag", "-Kopenmp", "fj@4.0.0")
supported_flag_test("cxx98_flag", "-std=c++98", "fj@4.0.0")
supported_flag_test("cxx11_flag", "-std=c++11", "fj@4.0.0")
supported_flag_test("cxx14_flag", "-std=c++14", "fj@4.0.0")
supported_flag_test("c99_flag", "-std=c99", "fj@4.0.0")
supported_flag_test("c11_flag", "-std=c11", "fj@4.0.0")
supported_flag_test("cc_pic_flag", "-KPIC", "fj@4.0.0")
supported_flag_test("cxx_pic_flag", "-KPIC", "fj@4.0.0")
supported_flag_test("f77_pic_flag", "-KPIC", "fj@4.0.0")
supported_flag_test("fc_pic_flag", "-KPIC", "fj@4.0.0")
def test_gcc_flags():
@@ -275,7 +331,17 @@ def test_gcc_flags():
supported_flag_test("cxx14_flag", "-std=c++14", "gcc@4.9")
supported_flag_test("cxx14_flag", "", "gcc@6.0")
unsupported_flag_test("cxx17_flag", "gcc@4.9")
supported_flag_test("pic_flag", "-fPIC", "gcc@4.0")
supported_flag_test("cxx17_flag", "-std=c++1z", "gcc@5.0")
supported_flag_test("cxx17_flag", "-std=c++17", "gcc@6.0")
unsupported_flag_test("c99_flag", "gcc@4.4")
supported_flag_test("c99_flag", "-std=c99", "gcc@4.5")
unsupported_flag_test("c11_flag", "gcc@4.6")
supported_flag_test("c11_flag", "-std=c11", "gcc@4.7")
supported_flag_test("cc_pic_flag", "-fPIC", "gcc@4.0")
supported_flag_test("cxx_pic_flag", "-fPIC", "gcc@4.0")
supported_flag_test("f77_pic_flag", "-fPIC", "gcc@4.0")
supported_flag_test("fc_pic_flag", "-fPIC", "gcc@4.0")
supported_flag_test("stdcxx_libs", ("-lstdc++",), "gcc@4.1")
def test_intel_flags():
@@ -287,43 +353,105 @@ def test_intel_flags():
unsupported_flag_test("cxx14_flag", "intel@14.0")
supported_flag_test("cxx14_flag", "-std=c++1y", "intel@15.0")
supported_flag_test("cxx14_flag", "-std=c++14", "intel@15.0.2")
supported_flag_test("pic_flag", "-fPIC", "intel@1.0")
unsupported_flag_test("c99_flag", "intel@11.0")
supported_flag_test("c99_flag", "-std=c99", "intel@12.0")
unsupported_flag_test("c11_flag", "intel@15.0")
supported_flag_test("c11_flag", "-std=c1x", "intel@16.0")
supported_flag_test("cc_pic_flag", "-fPIC", "intel@1.0")
supported_flag_test("cxx_pic_flag", "-fPIC", "intel@1.0")
supported_flag_test("f77_pic_flag", "-fPIC", "intel@1.0")
supported_flag_test("fc_pic_flag", "-fPIC", "intel@1.0")
supported_flag_test("stdcxx_libs", ("-cxxlib",), "intel@1.0")
def test_nag_flags():
supported_flag_test("openmp_flag", "-openmp", "nag@1.0")
supported_flag_test("cxx11_flag", "-std=c++11", "nag@1.0")
supported_flag_test("pic_flag", "-PIC", "nag@1.0")
supported_flag_test("cc_pic_flag", "-fPIC", "nag@1.0")
supported_flag_test("cxx_pic_flag", "-fPIC", "nag@1.0")
supported_flag_test("f77_pic_flag", "-PIC", "nag@1.0")
supported_flag_test("fc_pic_flag", "-PIC", "nag@1.0")
supported_flag_test("cc_rpath_arg", "-Wl,-rpath,", "nag@1.0")
supported_flag_test("cxx_rpath_arg", "-Wl,-rpath,", "nag@1.0")
supported_flag_test("f77_rpath_arg", "-Wl,-Wl,,-rpath,,", "nag@1.0")
supported_flag_test("fc_rpath_arg", "-Wl,-Wl,,-rpath,,", "nag@1.0")
supported_flag_test("linker_arg", "-Wl,-Wl,,", "nag@1.0")
def test_pgi_flags():
supported_flag_test("openmp_flag", "-mp", "pgi@1.0")
supported_flag_test("cxx11_flag", "-std=c++11", "pgi@1.0")
supported_flag_test("pic_flag", "-fpic", "pgi@1.0")
unsupported_flag_test("c99_flag", "pgi@12.9")
supported_flag_test("c99_flag", "-c99", "pgi@12.10")
unsupported_flag_test("c11_flag", "pgi@15.2")
supported_flag_test("c11_flag", "-c11", "pgi@15.3")
supported_flag_test("cc_pic_flag", "-fpic", "pgi@1.0")
supported_flag_test("cxx_pic_flag", "-fpic", "pgi@1.0")
supported_flag_test("f77_pic_flag", "-fpic", "pgi@1.0")
supported_flag_test("fc_pic_flag", "-fpic", "pgi@1.0")
def test_xl_flags():
supported_flag_test("openmp_flag", "-qsmp=omp", "xl@1.0")
unsupported_flag_test("cxx11_flag", "xl@13.0")
supported_flag_test("cxx11_flag", "-qlanglvl=extended0x", "xl@13.1")
supported_flag_test("pic_flag", "-qpic", "xl@1.0")
unsupported_flag_test("c99_flag", "xl@10.0")
supported_flag_test("c99_flag", "-qlanglvl=extc99", "xl@10.1")
supported_flag_test("c99_flag", "-std=gnu99", "xl@13.1.1")
unsupported_flag_test("c11_flag", "xl@12.0")
supported_flag_test("c11_flag", "-qlanglvl=extc1x", "xl@12.1")
supported_flag_test("c11_flag", "-std=gnu11", "xl@13.1.2")
supported_flag_test("cc_pic_flag", "-qpic", "xl@1.0")
supported_flag_test("cxx_pic_flag", "-qpic", "xl@1.0")
supported_flag_test("f77_pic_flag", "-qpic", "xl@1.0")
supported_flag_test("fc_pic_flag", "-qpic", "xl@1.0")
supported_flag_test("fflags", "-qzerosize", "xl@1.0")
def test_xl_r_flags():
supported_flag_test("openmp_flag", "-qsmp=omp", "xl_r@1.0")
unsupported_flag_test("cxx11_flag", "xl_r@13.0")
supported_flag_test("cxx11_flag", "-qlanglvl=extended0x", "xl_r@13.1")
supported_flag_test("pic_flag", "-qpic", "xl_r@1.0")
unsupported_flag_test("c99_flag", "xl_r@10.0")
supported_flag_test("c99_flag", "-qlanglvl=extc99", "xl_r@10.1")
supported_flag_test("c99_flag", "-std=gnu99", "xl_r@13.1.1")
unsupported_flag_test("c11_flag", "xl_r@12.0")
supported_flag_test("c11_flag", "-qlanglvl=extc1x", "xl_r@12.1")
supported_flag_test("c11_flag", "-std=gnu11", "xl_r@13.1.2")
supported_flag_test("cc_pic_flag", "-qpic", "xl_r@1.0")
supported_flag_test("cxx_pic_flag", "-qpic", "xl_r@1.0")
supported_flag_test("f77_pic_flag", "-qpic", "xl_r@1.0")
supported_flag_test("fc_pic_flag", "-qpic", "xl_r@1.0")
supported_flag_test("fflags", "-qzerosize", "xl_r@1.0")
def test_fj_flags():
supported_flag_test("openmp_flag", "-Kopenmp", "fj@4.0.0")
supported_flag_test("cxx98_flag", "-std=c++98", "fj@4.0.0")
supported_flag_test("cxx11_flag", "-std=c++11", "fj@4.0.0")
supported_flag_test("cxx14_flag", "-std=c++14", "fj@4.0.0")
supported_flag_test("c99_flag", "-std=c99", "fj@4.0.0")
supported_flag_test("c11_flag", "-std=c11", "fj@4.0.0")
supported_flag_test("pic_flag", "-KPIC", "fj@4.0.0")
@pytest.mark.parametrize('version_str,expected_version', [
('Arm C/C++/Fortran Compiler version 19.0 (build number 73) (based on LLVM 7.0.2)\n' # NOQA
'Target: aarch64--linux-gnu\n'
'Thread model: posix\n'
'InstalledDir:\n'
'/opt/arm/arm-hpc-compiler-19.0_Generic-AArch64_RHEL-7_aarch64-linux/bin\n', # NOQA
'19.0.0.73'),
('Arm C/C++/Fortran Compiler version 19.3.1 (build number 75) (based on LLVM 7.0.2)\n' # NOQA
'Target: aarch64--linux-gnu\n'
'Thread model: posix\n'
'InstalledDir:\n'
'/opt/arm/arm-hpc-compiler-19.0_Generic-AArch64_RHEL-7_aarch64-linux/bin\n', # NOQA
'19.3.1.75')
])
def test_arm_version_detection(version_str, expected_version):
version = spack.compilers.arm.Arm.extract_version_from_output(version_str)
assert version == expected_version
@pytest.mark.parametrize('version_str,expected_version', [
('Cray C : Version 8.4.6 Mon Apr 15, 2019 12:13:39\n', '8.4.6'),
('Cray C++ : Version 8.4.6 Mon Apr 15, 2019 12:13:45\n', '8.4.6'),
('Cray Fortran : Version 8.4.6 Mon Apr 15, 2019 12:13:55\n', '8.4.6')
])
def test_cce_version_detection(version_str, expected_version):
version = spack.compilers.cce.Cce.extract_version_from_output(version_str)
assert version == expected_version
@pytest.mark.regression('10191')
@@ -364,15 +492,23 @@ def test_clang_version_detection(version_str, expected_version):
@pytest.mark.parametrize('version_str,expected_version', [
('Arm C/C++/Fortran Compiler version 19.0 (build number 73) (based on LLVM 7.0.2)\n' # NOQA
'Target: aarch64--linux-gnu\n'
'Thread model: posix\n'
'InstalledDir:\n'
'/opt/arm/arm-hpc-compiler-19.0_Generic-AArch64_RHEL-7_aarch64-linux/bin\n', # NOQA
'19.0')
# C compiler
('fcc (FCC) 4.0.0 20190314\n'
'simulating gcc version 6.1\n'
'Copyright FUJITSU LIMITED 2019',
'4.0.0'),
# C++ compiler
('FCC (FCC) 4.0.0 20190314\n'
'simulating gcc version 6.1\n'
'Copyright FUJITSU LIMITED 2019',
'4.0.0'),
# Fortran compiler
('frt (FRT) 4.0.0 20190314\n'
'Copyright FUJITSU LIMITED 2019',
'4.0.0')
])
def test_arm_version_detection(version_str, expected_version):
version = spack.compilers.arm.Arm.extract_version_from_output(version_str)
def test_fj_version_detection(version_str, expected_version):
version = spack.compilers.fj.Fj.extract_version_from_output(version_str)
assert version == expected_version
@@ -448,37 +584,6 @@ def test_xl_version_detection(version_str, expected_version):
assert version == expected_version
@pytest.mark.parametrize('version_str,expected_version', [
('Cray C : Version 8.4.6 Mon Apr 15, 2019 12:13:39\n', '8.4.6'),
('Cray C++ : Version 8.4.6 Mon Apr 15, 2019 12:13:45\n', '8.4.6'),
('Cray Fortran : Version 8.4.6 Mon Apr 15, 2019 12:13:55\n', '8.4.6')
])
def test_cce_version_detection(version_str, expected_version):
version = spack.compilers.cce.Cce.extract_version_from_output(version_str)
assert version == expected_version
@pytest.mark.parametrize('version_str,expected_version', [
# C compiler
('fcc (FCC) 4.0.0 20190314\n'
'simulating gcc version 6.1\n'
'Copyright FUJITSU LIMITED 2019',
'4.0.0'),
# C++ compiler
('FCC (FCC) 4.0.0 20190314\n'
'simulating gcc version 6.1\n'
'Copyright FUJITSU LIMITED 2019',
'4.0.0'),
# Fortran compiler
('frt (FRT) 4.0.0 20190314\n'
'Copyright FUJITSU LIMITED 2019',
'4.0.0')
])
def test_fj_version_detection(version_str, expected_version):
version = spack.compilers.fj.Fj.extract_version_from_output(version_str)
assert version == expected_version
@pytest.mark.parametrize('compiler_spec,expected_result', [
('gcc@4.7.2', False), ('clang@3.3', False), ('clang@8.0.0', True)
])


@@ -130,3 +130,10 @@ def test_load_modules_from_file(module_path):
foo = llnl.util.lang.load_module_from_file('foo', module_path)
assert foo.value == 1
assert foo.path == os.path.join('/usr', 'bin')
def test_uniq():
assert [1, 2, 3] == llnl.util.lang.uniq([1, 2, 3])
assert [1, 2, 3] == llnl.util.lang.uniq([1, 1, 1, 1, 2, 2, 2, 3, 3])
assert [1, 2, 1] == llnl.util.lang.uniq([1, 1, 1, 1, 2, 2, 2, 1, 1])
assert [] == llnl.util.lang.uniq([])
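The new assertions pin down the semantics of `uniq`: it collapses only *adjacent* duplicates while preserving order, so `[1, 1, 2, 2, 1, 1]`-style input keeps the trailing `1` (unlike `set`-based deduplication). A minimal sketch of a function with these semantics (hypothetical reimplementation for illustration; the real one lives in `llnl.util.lang`):

```python
def uniq(sequence):
    """Collapse runs of adjacent equal elements, keeping order.

    A value may reappear later in the result if a different value
    occurs in between, e.g. [1, 1, 2, 1] -> [1, 2, 1].
    """
    result = []
    for item in sequence:
        # only compare against the last emitted element
        if not result or item != result[-1]:
            result.append(item)
    return result

assert uniq([1, 1, 1, 1, 2, 2, 2, 1, 1]) == [1, 2, 1]
assert uniq([]) == []
```

This run-collapsing behavior is what the pty log test below relies on when it reduces the child's "on"/"off" output to a small state sequence.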


@@ -1,84 +0,0 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from __future__ import print_function
import pytest
from llnl.util.tty.log import log_output
from spack.util.executable import which
def test_log_python_output_with_python_stream(capsys, tmpdir):
# pytest's DontReadFromInput object does not like what we do here, so
# disable capsys or things hang.
with tmpdir.as_cwd():
with capsys.disabled():
with log_output('foo.txt'):
print('logged')
with open('foo.txt') as f:
assert f.read() == 'logged\n'
assert capsys.readouterr() == ('', '')
def test_log_python_output_with_fd_stream(capfd, tmpdir):
with tmpdir.as_cwd():
with log_output('foo.txt'):
print('logged')
with open('foo.txt') as f:
assert f.read() == 'logged\n'
# Coverage is cluttering stderr during tests
assert capfd.readouterr()[0] == ''
def test_log_python_output_and_echo_output(capfd, tmpdir):
with tmpdir.as_cwd():
with log_output('foo.txt') as logger:
with logger.force_echo():
print('echo')
print('logged')
# Coverage is cluttering stderr during tests
assert capfd.readouterr()[0] == 'echo\n'
with open('foo.txt') as f:
assert f.read() == 'echo\nlogged\n'
@pytest.mark.skipif(not which('echo'), reason="needs echo command")
def test_log_subproc_output(capsys, tmpdir):
echo = which('echo')
# pytest seems to interfere here, so we need to use capsys.disabled()
# TODO: figure out why this is and whether it means we're doing
# something wrong with OUR redirects. Seems like it should work even
# with capsys enabled.
with tmpdir.as_cwd():
with capsys.disabled():
with log_output('foo.txt'):
echo('logged')
with open('foo.txt') as f:
assert f.read() == 'logged\n'
@pytest.mark.skipif(not which('echo'), reason="needs echo command")
def test_log_subproc_and_echo_output(capfd, tmpdir):
echo = which('echo')
with tmpdir.as_cwd():
with log_output('foo.txt') as logger:
with logger.force_echo():
echo('echo')
print('logged')
# Coverage is cluttering stderr during tests
assert capfd.readouterr()[0] == 'echo\n'
with open('foo.txt') as f:
assert f.read() == 'logged\n'


@@ -0,0 +1,442 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from __future__ import print_function
import contextlib
import multiprocessing
import os
import signal
import sys
import time
try:
import termios
except ImportError:
termios = None
import pytest
import llnl.util.tty.log
from llnl.util.lang import uniq
from llnl.util.tty.log import log_output
from llnl.util.tty.pty import PseudoShell
from spack.util.executable import which
@contextlib.contextmanager
def nullcontext():
yield
def test_log_python_output_with_echo(capfd, tmpdir):
with tmpdir.as_cwd():
with log_output('foo.txt', echo=True):
print('logged')
# foo.txt has output
with open('foo.txt') as f:
assert f.read() == 'logged\n'
# output is also echoed.
assert capfd.readouterr()[0] == 'logged\n'
def test_log_python_output_without_echo(capfd, tmpdir):
with tmpdir.as_cwd():
with log_output('foo.txt'):
print('logged')
# foo.txt has output
with open('foo.txt') as f:
assert f.read() == 'logged\n'
# nothing on stdout or stderr
assert capfd.readouterr()[0] == ''
def test_log_python_output_and_echo_output(capfd, tmpdir):
with tmpdir.as_cwd():
# echo two lines
with log_output('foo.txt') as logger:
with logger.force_echo():
print('force echo')
print('logged')
# log file contains everything
with open('foo.txt') as f:
assert f.read() == 'force echo\nlogged\n'
# only force-echo'd stuff is in output
assert capfd.readouterr()[0] == 'force echo\n'
@pytest.mark.skipif(not which('echo'), reason="needs echo command")
def test_log_subproc_and_echo_output_no_capfd(capfd, tmpdir):
echo = which('echo')
# this is split into two tests because capfd interferes with the
# output logged to file when using a subprocess. We test the file
# here, and echoing in test_log_subproc_and_echo_output_capfd below.
with capfd.disabled():
with tmpdir.as_cwd():
with log_output('foo.txt') as logger:
with logger.force_echo():
echo('echo')
print('logged')
with open('foo.txt') as f:
assert f.read() == 'echo\nlogged\n'
@pytest.mark.skipif(not which('echo'), reason="needs echo command")
def test_log_subproc_and_echo_output_capfd(capfd, tmpdir):
echo = which('echo')
# This tests *only* what is echoed when using a subprocess, as capfd
# interferes with the logged data. See
# test_log_subproc_and_echo_output_no_capfd for tests on the logfile.
with tmpdir.as_cwd():
with log_output('foo.txt') as logger:
with logger.force_echo():
echo('echo')
print('logged')
assert capfd.readouterr()[0] == "echo\n"
#
# Tests below use a pseudoterminal to test llnl.util.tty.log
#
def simple_logger(**kwargs):
"""Mock logger (child) process for testing log.keyboard_input."""
def handler(signum, frame):
running[0] = False
signal.signal(signal.SIGUSR1, handler)
log_path = kwargs["log_path"]
running = [True]
with log_output(log_path):
while running[0]:
print("line")
time.sleep(1e-3)
def mock_shell_fg(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.fg()
ctl.status()
ctl.wait_enabled()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_fg_no_termios(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.fg()
ctl.status()
ctl.wait_disabled_fg()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_bg(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.bg()
ctl.status()
ctl.wait_disabled()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_tstp_cont(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.tstp()
ctl.wait_stopped()
ctl.cont()
ctl.wait_running()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_tstp_tstp_cont(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.tstp()
ctl.wait_stopped()
ctl.tstp()
ctl.wait_stopped()
ctl.cont()
ctl.wait_running()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_tstp_tstp_cont_cont(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.tstp()
ctl.wait_stopped()
ctl.tstp()
ctl.wait_stopped()
ctl.cont()
ctl.wait_running()
ctl.cont()
ctl.wait_running()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_bg_fg(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.bg()
ctl.status()
ctl.wait_disabled()
ctl.fg()
ctl.status()
ctl.wait_enabled()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_bg_fg_no_termios(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.bg()
ctl.status()
ctl.wait_disabled()
ctl.fg()
ctl.status()
ctl.wait_disabled_fg()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_fg_bg(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.fg()
ctl.status()
ctl.wait_enabled()
ctl.bg()
ctl.status()
ctl.wait_disabled()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_fg_bg_no_termios(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.fg()
ctl.status()
ctl.wait_disabled_fg()
ctl.bg()
ctl.status()
ctl.wait_disabled()
os.kill(proc.pid, signal.SIGUSR1)
@contextlib.contextmanager
def no_termios():
saved = llnl.util.tty.log.termios
llnl.util.tty.log.termios = None
try:
yield
finally:
llnl.util.tty.log.termios = saved
@pytest.mark.skipif(not which("ps"), reason="requires ps utility")
@pytest.mark.skipif(not termios, reason="requires termios support")
@pytest.mark.parametrize('test_fn,termios_on_or_off', [
# tests with termios
(mock_shell_fg, nullcontext),
(mock_shell_bg, nullcontext),
(mock_shell_bg_fg, nullcontext),
(mock_shell_fg_bg, nullcontext),
(mock_shell_tstp_cont, nullcontext),
(mock_shell_tstp_tstp_cont, nullcontext),
(mock_shell_tstp_tstp_cont_cont, nullcontext),
# tests without termios
(mock_shell_fg_no_termios, no_termios),
(mock_shell_bg, no_termios),
(mock_shell_bg_fg_no_termios, no_termios),
(mock_shell_fg_bg_no_termios, no_termios),
(mock_shell_tstp_cont, no_termios),
(mock_shell_tstp_tstp_cont, no_termios),
(mock_shell_tstp_tstp_cont_cont, no_termios),
])
def test_foreground_background(test_fn, termios_on_or_off, tmpdir):
"""Functional tests for foregrounding and backgrounding a logged process.
This ensures that things like SIGTTOU are not raised and that
terminal settings are corrected on foreground/background and on
process stop and start.
"""
shell = PseudoShell(test_fn, simple_logger)
log_path = str(tmpdir.join("log.txt"))
# run the shell test
with termios_on_or_off():
shell.start(log_path=log_path, debug=True)
exitcode = shell.join()
# processes completed successfully
assert exitcode == 0
# assert log was created
assert os.path.exists(log_path)
def synchronized_logger(**kwargs):
"""Mock logger (child) process for testing log.keyboard_input.
This logger synchronizes with the parent process to test that 'v' can
toggle output. It is used in ``test_foreground_background_output`` below.
"""
def handler(signum, frame):
running[0] = False
signal.signal(signal.SIGUSR1, handler)
log_path = kwargs["log_path"]
write_lock = kwargs["write_lock"]
v_lock = kwargs["v_lock"]
running = [True]
sys.stderr.write(os.getcwd() + "\n")
with log_output(log_path) as logger:
with logger.force_echo():
print("forced output")
while running[0]:
with write_lock:
if v_lock.acquire(False): # non-blocking acquire
print("off")
v_lock.release()
else:
print("on") # lock held; v is toggled on
time.sleep(1e-2)
def mock_shell_v_v(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background_output."""
write_lock = kwargs["write_lock"]
v_lock = kwargs["v_lock"]
ctl.fg()
ctl.wait_enabled()
time.sleep(.1)
write_lock.acquire() # suspend writing
v_lock.acquire() # enable v lock
ctl.write(b'v') # toggle v on stdin
time.sleep(.1)
write_lock.release() # resume writing
time.sleep(.1)
write_lock.acquire() # suspend writing
ctl.write(b'v') # toggle v on stdin
time.sleep(.1)
v_lock.release() # disable v lock
write_lock.release() # resume writing
time.sleep(.1)
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_v_v_no_termios(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background_output."""
write_lock = kwargs["write_lock"]
v_lock = kwargs["v_lock"]
ctl.fg()
ctl.wait_disabled_fg()
time.sleep(.1)
write_lock.acquire() # suspend writing
v_lock.acquire() # enable v lock
ctl.write(b'v\n') # toggle v on stdin
time.sleep(.1)
write_lock.release() # resume writing
time.sleep(.1)
write_lock.acquire() # suspend writing
ctl.write(b'v\n') # toggle v on stdin
time.sleep(.1)
v_lock.release() # disable v lock
write_lock.release() # resume writing
time.sleep(.1)
os.kill(proc.pid, signal.SIGUSR1)
@pytest.mark.skipif(not which("ps"), reason="requires ps utility")
@pytest.mark.skipif(not termios, reason="requires termios support")
@pytest.mark.parametrize('test_fn,termios_on_or_off', [
(mock_shell_v_v, nullcontext),
(mock_shell_v_v_no_termios, no_termios),
])
def test_foreground_background_output(
test_fn, capfd, termios_on_or_off, tmpdir):
"""Tests hitting 'v' toggles output, and that force_echo works."""
shell = PseudoShell(test_fn, synchronized_logger)
log_path = str(tmpdir.join("log.txt"))
# Locks for synchronizing with child
write_lock = multiprocessing.Lock() # must be held by child to write
v_lock = multiprocessing.Lock() # held while master is in v mode
with termios_on_or_off():
shell.start(
write_lock=write_lock,
v_lock=v_lock,
debug=True,
log_path=log_path
)
exitcode = shell.join()
out, err = capfd.readouterr()
print(err) # will be shown if something goes wrong
print(out)
# processes completed successfully
assert exitcode == 0
# split output into lines
output = out.strip().split("\n")
# also get lines of log file
assert os.path.exists(log_path)
with open(log_path) as log:
log = log.read().strip().split("\n")
# Master and child process coordinate with locks such that the child
# writes "off" when echo is off, and "on" when echo is on. The
# output should contain mostly "on" lines, but may contain an "off"
# or two. This is because the master toggles echo by sending "v" on
# stdin to the child, but this is not synchronized with our locks.
# It's good enough for a test, though. We allow at most 2 "off"'s in
# the output to account for the race.
assert (
['forced output', 'on'] == uniq(output) or
output.count("off") <= 2 # if master_fd is a bit slow
)
# log should be off for a while, then on, then off
assert (
['forced output', 'off', 'on', 'off'] == uniq(log) and
log.count("off") > 2 # ensure some "off" lines were omitted
)


@@ -40,7 +40,7 @@ def prefix_inspections(platform):
if platform == 'darwin':
for subdir in ('lib', 'lib64'):
inspections[subdir].append('DYLD_LIBRARY_PATH')
inspections[subdir].append('DYLD_FALLBACK_LIBRARY_PATH')
return inspections


@@ -29,13 +29,16 @@
########################################################################
# Store LD_LIBRARY_PATH variables from spack shell function
# This is necessary because MacOS System Integrity Protection clears
# (DY?)LD_LIBRARY_PATH variables on process start.
# variables that affect dyld on process start.
if ( ${?LD_LIBRARY_PATH} ) then
setenv SPACK_LD_LIBRARY_PATH $LD_LIBRARY_PATH
endif
if ( ${?DYLD_LIBRARY_PATH} ) then
setenv SPACK_DYLD_LIBRARY_PATH $DYLD_LIBRARY_PATH
endif
if ( ${?DYLD_FALLBACK_LIBRARY_PATH} ) then
setenv SPACK_DYLD_FALLBACK_LIBRARY_PATH $DYLD_FALLBACK_LIBRARY_PATH
endif
# accumulate initial flags for main spack command
set _sp_flags = ""


@@ -5,7 +5,6 @@ ENV DOCKERFILE_BASE=centos \
DOCKERFILE_DISTRO=centos \
DOCKERFILE_DISTRO_VERSION=6 \
SPACK_ROOT=/opt/spack \
FORCE_UNSAFE_CONFIGURE=1 \
DEBIAN_FRONTEND=noninteractive \
CURRENTLY_BUILDING_DOCKER_IMAGE=1 \
container=docker


@@ -5,7 +5,6 @@ ENV DOCKERFILE_BASE=centos \
DOCKERFILE_DISTRO=centos \
DOCKERFILE_DISTRO_VERSION=7 \
SPACK_ROOT=/opt/spack \
FORCE_UNSAFE_CONFIGURE=1 \
DEBIAN_FRONTEND=noninteractive \
CURRENTLY_BUILDING_DOCKER_IMAGE=1 \
container=docker


@@ -5,7 +5,6 @@ ENV DOCKERFILE_BASE=ubuntu:16.04 \
DOCKERFILE_DISTRO=ubuntu \
DOCKERFILE_DISTRO_VERSION=16.04 \
SPACK_ROOT=/opt/spack \
FORCE_UNSAFE_CONFIGURE=1 \
DEBIAN_FRONTEND=noninteractive \
CURRENTLY_BUILDING_DOCKER_IMAGE=1 \
container=docker
@@ -90,4 +89,3 @@ RUN spack spec hdf5+mpi
ENTRYPOINT ["/bin/bash", "/opt/spack/share/spack/docker/entrypoint.bash"]
CMD ["docker-shell"]


@@ -5,7 +5,6 @@ ENV DOCKERFILE_BASE=ubuntu \
DOCKERFILE_DISTRO=ubuntu \
DOCKERFILE_DISTRO_VERSION=18.04 \
SPACK_ROOT=/opt/spack \
FORCE_UNSAFE_CONFIGURE=1 \
DEBIAN_FRONTEND=noninteractive \
CURRENTLY_BUILDING_DOCKER_IMAGE=1 \
container=docker


@@ -2,8 +2,7 @@ FROM ubuntu:16.04
# General environment for docker
ENV DEBIAN_FRONTEND=noninteractive \
SPACK_ROOT=/usr/local \
FORCE_UNSAFE_CONFIGURE=1
SPACK_ROOT=/usr/local
# Install system packages
RUN apt-get update \
@@ -48,4 +47,3 @@ RUN spack install netlib-scalapack ^openmpi ^openblas %gcc@7.2.0 \
# image run hook: the -l will make sure /etc/profile environments are loaded
CMD /bin/bash -l


@@ -42,13 +42,10 @@
spack() {
# Store LD_LIBRARY_PATH variables from spack shell function
# This is necessary because MacOS System Integrity Protection clears
# (DY?)LD_LIBRARY_PATH variables on process start.
if [ -n "${LD_LIBRARY_PATH-}" ]; then
export SPACK_LD_LIBRARY_PATH=$LD_LIBRARY_PATH
fi
if [ -n "${DYLD_LIBRARY_PATH-}" ]; then
export SPACK_DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH
fi
# variables that affect dyld on process start.
for var in LD_LIBRARY_PATH DYLD_LIBRARY_PATH DYLD_FALLBACK_LIBRARY_PATH; do
eval "if [ -n \"\${${var}-}\" ]; then export SPACK_$var=\${${var}}; fi"
done
# Zsh does not do word splitting by default, this enables it for this
# function only
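The `for`/`eval` loop that replaces the repeated `if` blocks can be exercised in isolation; a self-contained sketch (example values, not Spack's actual environment):

```shell
#!/bin/sh
# Sketch of the eval pattern above: for each variable that SIP/dyld may
# strip, export a SPACK_-prefixed copy, but only when the original is set.
LD_LIBRARY_PATH=/opt/lib   # example value; the DYLD_* variables stay unset
for var in LD_LIBRARY_PATH DYLD_LIBRARY_PATH DYLD_FALLBACK_LIBRARY_PATH; do
    eval "if [ -n \"\${${var}-}\" ]; then export SPACK_$var=\${${var}}; fi"
done
echo "${SPACK_LD_LIBRARY_PATH-unset}"        # -> /opt/lib
echo "${SPACK_DYLD_LIBRARY_PATH-unset}"      # -> unset
```

The `${var-}` expansion keeps the test safe under `set -u`, and `eval` is what lets the loop build the variable names dynamically in plain POSIX sh.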


@@ -0,0 +1,15 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class MirrorGnuBroken(AutotoolsPackage, GNUMirrorPackage):
"""Simple GNU package"""
homepage = "https://www.gnu.org/software/make/"
url = "https://ftpmirror.gnu.org/make/make-4.2.1.tar.gz"
version('4.2.1', sha256='e40b8f018c1da64edd1cc9a6fce5fa63b2e707e404e20cad91fbae337c98a5b7')


@@ -0,0 +1,15 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class MirrorGnu(AutotoolsPackage, GNUMirrorPackage):
"""Simple GNU package"""
homepage = "https://www.gnu.org/software/make/"
gnu_mirror_path = "make/make-4.2.1.tar.gz"
version('4.2.1', sha256='e40b8f018c1da64edd1cc9a6fce5fa63b2e707e404e20cad91fbae337c98a5b7')
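The contrast between the broken package (hard-coded `url`) and this one (`gnu_mirror_path`) is the mirror-mixin idea under test: the package declares only a mirror-relative path and the mixin derives the fetch URL. A plain-Python sketch of that idea (class and attribute names here are illustrative; Spack's real `GNUMirrorPackage` differs):

```python
class GNUMirrorMixin:
    """Derive fetch URLs from a path relative to the GNU mirror root."""
    base_mirror = "https://ftpmirror.gnu.org"
    gnu_mirror_path = None

    @property
    def urls(self):
        # The package never spells out the full URL itself.
        return ["{0}/{1}".format(self.base_mirror, self.gnu_mirror_path)]


class MakePackage(GNUMirrorMixin):
    gnu_mirror_path = "make/make-4.2.1.tar.gz"


print(MakePackage().urls[0])
# -> https://ftpmirror.gnu.org/make/make-4.2.1.tar.gz
```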


@@ -0,0 +1,15 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class MirrorSourceforgeBroken(AutotoolsPackage, SourceforgePackage):
"""Simple sourceforge.net package"""
homepage = "http://www.tcl.tk"
url = "http://prdownloads.sourceforge.net/tcl/tcl8.6.5-src.tar.gz"
version('8.6.8', sha256='c43cb0c1518ce42b00e7c8f6eaddd5195c53a98f94adc717234a65cbcfd3f96a')


@@ -0,0 +1,15 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class MirrorSourceforge(AutotoolsPackage, SourceforgePackage):
"""Simple sourceforge.net package"""
homepage = "http://www.tcl.tk"
sourceforge_mirror_path = "tcl/tcl8.6.5-src.tar.gz"
version('8.6.8', sha256='c43cb0c1518ce42b00e7c8f6eaddd5195c53a98f94adc717234a65cbcfd3f96a')


@@ -0,0 +1,15 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class MirrorGnuBroken(AutotoolsPackage, GNUMirrorPackage):
"""Simple GNU package"""
homepage = "https://www.gnu.org/software/make/"
url = "https://ftpmirror.gnu.org/make/make-4.2.1.tar.gz"
version('4.2.1', sha256='e40b8f018c1da64edd1cc9a6fce5fa63b2e707e404e20cad91fbae337c98a5b7')


@@ -0,0 +1,15 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class MirrorSourcewareBroken(AutotoolsPackage, SourcewarePackage):
"""Simple sourceware.org package"""
homepage = "https://sourceware.org/bzip2/"
url = "https://sourceware.org/pub/bzip2/bzip2-1.0.8.tar.gz"
version('1.0.8', sha256='ab5a03176ee106d3f0fa90e381da478ddae405918153cca248e682cd0c4a2269')


@@ -0,0 +1,15 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class MirrorSourceware(AutotoolsPackage, SourcewarePackage):
"""Simple sourceware.org package"""
homepage = "https://sourceware.org/bzip2/"
sourceware_mirror_path = "bzip2/bzip2-1.0.8.tar.gz"
version('1.0.8', sha256='ab5a03176ee106d3f0fa90e381da478ddae405918153cca248e682cd0c4a2269')


@@ -0,0 +1,15 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class MirrorXorgBroken(AutotoolsPackage, XorgPackage):
"""Simple x.org package"""
homepage = "http://cgit.freedesktop.org/xorg/util/macros/"
url = "https://www.x.org/archive/individual/util/util-macros-1.19.1.tar.bz2"
version('1.19.1', sha256='18d459400558f4ea99527bc9786c033965a3db45bf4c6a32eefdc07aa9e306a6')


@@ -0,0 +1,15 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class MirrorXorg(AutotoolsPackage, XorgPackage):
"""Simple x.org package"""
homepage = "http://cgit.freedesktop.org/xorg/util/macros/"
xorg_mirror_path = "util/util-macros-1.19.1.tar.bz2"
version('1.19.1', sha256='18d459400558f4ea99527bc9786c033965a3db45bf4c6a32eefdc07aa9e306a6')


@@ -33,7 +33,8 @@ class ActsCore(CMakePackage):
git = "https://github.com/acts-project/acts.git"
maintainers = ['HadrienG2']
version('develop', branch='master')
version('master', branch='master')
version('0.21.0', commit='10b719e68ddaca15b28ac25b3daddce8c0d3368d')
version('0.20.0', commit='1d37a849a9c318e8ca4fa541ef8433c1f004637b')
version('0.19.0', commit='408335636486c421c6222a64372250ef12544df6')
version('0.18.0', commit='d58a68cf75b52a5e0f563bc237f09250aa9da80c')


@@ -133,7 +133,7 @@ def configure_args(self):
extra_args = [
# required, otherwise building its python bindings will fail
'CFLAGS={0}'.format(self.compiler.pic_flag)
'CFLAGS={0}'.format(self.compiler.cc_pic_flag)
]
extra_args += self.enable_or_disable('shared')


@@ -0,0 +1,22 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Aespipe(AutotoolsPackage):
"""aespipe program is AES encrypting or decrypting pipe. It reads from
standard input and writes to standard output."""
homepage = "http://loop-aes.sourceforge.net/"
url = "https://sourceforge.net/projects/loop-aes/files/aespipe/v2.4f/aespipe-v2.4f.tar.bz2"
version('2.4f', sha256='b135e1659f58dc9be5e3c88923cd03d2a936096ab8cd7f2b3af4cb7a844cef96')
version('2.4e', sha256='bad5abb8678c2a6062d22b893171623e0c8e6163b5c1e6e5086e2140e606b93a')
version('2.4d', sha256='c5ce656e0ade49b93e1163ec7b35450721d5743d8d804ad3a9e39add0389e50f')
version('2.4c', sha256='260190beea911190a839e711f610ec3454a9b13985d35479775b7e26ad4c845e')
version('2.4b', sha256='4f08611966998f66266f03d40d0597f94096164393c8f303b2dfd565e9d9b59d')
version('2.3e', sha256='4e63a5709fdd0bffdb555582f9fd7a0bd1842e429420159accaf7f60c5d3c70f')
version('2.3d', sha256='70330cd0710446c9ddf8148a7713fd73f1dc5e0b13fc4d3c75590305b2e3f008')


@@ -13,7 +13,7 @@ class AperturePhotometry(Package):
homepage = "http://www.aperturephotometry.org/aptool/"
url = "http://www.aperturephotometry.org/aptool/wp-content/plugins/download-monitor/download.php?id=1"
version('2.7.2', '2beca6aac14c5e0a94d115f81edf0caa9ec83dc9d32893ea00ee376c9360deb0', extension='tar.gz')
version('2.8.2', 'cb29eb39a630dc5d17c02fb824c69571fe1870a910a6acf9115c5f76fd89dd7e', extension='tar.gz')
depends_on('java')


@@ -6,7 +6,7 @@
from spack import *
class Applewmproto(AutotoolsPackage):
class Applewmproto(AutotoolsPackage, XorgPackage):
"""Apple Rootless Window Management Extension.
This extension defines a protocol that allows X window managers
@@ -14,7 +14,7 @@ class Applewmproto(AutotoolsPackage):
running X11 in a rootless mode."""
homepage = "http://cgit.freedesktop.org/xorg/proto/applewmproto"
url = "https://www.x.org/archive/individual/proto/applewmproto-1.4.2.tar.gz"
xorg_mirror_path = "proto/applewmproto-1.4.2.tar.gz"
version('1.4.2', sha256='ff8ac07d263a23357af2d6ff0cca3c1d56b043ddf7797a5a92ec624f4704df2e')


@@ -6,14 +6,14 @@
from spack import *
class Appres(AutotoolsPackage):
class Appres(AutotoolsPackage, XorgPackage):
"""The appres program prints the resources seen by an application (or
subhierarchy of an application) with the specified class and instance
names. It can be used to determine which resources a particular
program will load."""
homepage = "http://cgit.freedesktop.org/xorg/app/appres"
url = "https://www.x.org/archive/individual/app/appres-1.0.4.tar.gz"
xorg_mirror_path = "app/appres-1.0.4.tar.gz"
version('1.0.4', sha256='22cb6f639c891ffdbb5371bc50a88278185789eae6907d05e9e0bd1086a80803')


@@ -0,0 +1,23 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Asdcplib(AutotoolsPackage):
"""AS-DCP and AS-02 File Access Library."""
homepage = "https://github.com/cinecert/asdcplib"
url = "https://github.com/cinecert/asdcplib/archive/rel_2_10_35.tar.gz"
version('2_10_35', sha256='a68eec9ae0cc363f75331dc279c6dd6d3a9999a9e5f0a4405fd9afa8a29ca27b')
version('2_10_34', sha256='faa54ee407c1afceb141e08dae9ebf83b3f839e9c49a1793ac741ec6cdee5c3c')
version('2_10_33', sha256='16fafb5da3d46b0f44570ef9780c85dd82cca60106a9e005e538809ea1a95373')
version('2_10_32', sha256='fe5123c49980ee3fa25dea876286f2ac974d203bfcc6c77fc288a59025dee3ee')
depends_on('m4', type='build')
depends_on('autoconf', type='build')
depends_on('automake', type='build')
depends_on('libtool', type='build')


@@ -12,7 +12,7 @@ class AwsParallelcluster(PythonPackage):
tool to deploy and manage HPC clusters in the AWS cloud."""
homepage = "https://github.com/aws/aws-parallelcluster"
url = "https://pypi.io/packages/source/a/aws-parallelcluster/aws-parallelcluster-2.6.0.tar.gz"
url = "https://pypi.io/packages/source/a/aws-parallelcluster/aws-parallelcluster-2.6.1.tar.gz"
maintainers = [
'sean-smith', 'demartinofra', 'enrico-usai', 'lukeseawalker', 'rexcsn',
@@ -23,6 +23,7 @@ class AwsParallelcluster(PythonPackage):
'pcluster.config', 'pcluster.networking'
]
version('2.6.1', sha256='2ce9015d90b5d4dc88b46a44cb8a82e8fb0bb2b4cca30335fc5759202ec1b343')
version('2.6.0', sha256='aaed6962cf5027206834ac24b3d312da91e0f96ae8607f555e12cb124b869f0c')
version('2.5.1', sha256='4fd6e14583f8cf81f9e4aa1d6188e3708d3d14e6ae252de0a94caaf58be76303')
version('2.5.0', sha256='3b0209342ea0d9d8cc95505456103ad87c2d4e35771aa838765918194efd0ad3')


@@ -0,0 +1,30 @@
--- a/Makefile 2020-04-08 17:21:01.982819829 -0500
+++ b/Makefile 2020-04-08 17:21:42.982804931 -0500
@@ -289,7 +289,7 @@
ifeq ($(BUILDTYPE), MacOSX)
CUDA_L := -L$(CUDA_BASE)/lib -lcufft -lcudart -lcublas -m64 -lstdc++
else
-CUDA_L := -L$(CUDA_BASE)/lib64 -lcufft -lcudart -lcublas -lstdc++ -Wl,-rpath $(CUDA_BASE)/lib64
+CUDA_L := -L$(CUDA_BASE)/lib64 -lcufft -lcudart -lcublas -lstdc++
endif
else
CUDA_H :=
@@ -327,14 +327,13 @@
CPPFLAGS += -DUSE_ACML
else
BLAS_H := -I$(BLAS_BASE)/include
-ifeq ($(BUILDTYPE), MacOSX)
-BLAS_L := -L$(BLAS_BASE)/lib -lopenblas
+ifeq ($(OPENBLAS),1)
+BLAS_L := -lopenblas
else
ifeq ($(NOLAPACKE),1)
-BLAS_L := -L$(BLAS_BASE)/lib -llapack -lblas
-CPPFLAGS += -Isrc/lapacke
+BLAS_L := -llapack -lcblas
else
-BLAS_L := -L$(BLAS_BASE)/lib -llapacke -lblas
+BLAS_L := -llapacke -lcblas
endif
endif
endif


@@ -0,0 +1,107 @@
diff -ru a/matlab/bart.m b/matlab/bart.m
--- a/matlab/bart.m 2020-04-10 18:50:50.056248692 -0500
+++ b/matlab/bart.m 2020-04-10 18:52:20.541178180 -0500
@@ -11,7 +11,7 @@
return
end
- bart_path = getenv('TOOLBOX_PATH');
+ bart_path = [getenv('TOOLBOX_PATH') '/bin'];
isWSL = false;
if isempty(bart_path)
diff -ru a/python/bart.py b/python/bart.py
--- a/python/bart.py 2020-04-10 18:50:50.056248692 -0500
+++ b/python/bart.py 2020-04-10 19:18:09.481950358 -0500
@@ -19,7 +19,7 @@
return None
try:
- bart_path = os.environ['TOOLBOX_PATH'] + '/bart '
+ bart_path = os.environ['TOOLBOX_PATH'] + '/bin '
except:
bart_path = None
isWSL = False
diff -ru a/scripts/espirit_econ.sh b/scripts/espirit_econ.sh
--- a/scripts/espirit_econ.sh 2020-04-10 18:50:50.055248693 -0500
+++ b/scripts/espirit_econ.sh 2020-04-10 19:13:06.463193324 -0500
@@ -56,8 +56,6 @@
fi
-export PATH=$TOOLBOX_PATH:$PATH
-
input=$(readlink -f "$1")
output=$(readlink -f "$2")
@@ -67,7 +65,7 @@
exit 1
fi
-if [ ! -e $TOOLBOX_PATH/bart ] ; then
+if [ ! -e $TOOLBOX_PATH/bin/bart ] ; then
echo "\$TOOLBOX_PATH is not set correctly!" >&2
exit 1
fi
diff -ru a/scripts/grasp.sh b/scripts/grasp.sh
--- a/scripts/grasp.sh 2020-04-10 18:50:50.055248693 -0500
+++ b/scripts/grasp.sh 2020-04-10 19:13:31.461173327 -0500
@@ -90,8 +90,6 @@
fi
-export PATH=$TOOLBOX_PATH:$PATH
-
input=$(readlink -f "$1")
output=$(readlink -f "$2")
@@ -101,7 +99,7 @@
exit 1
fi
-if [ ! -e $TOOLBOX_PATH/bart ] ; then
+if [ ! -e $TOOLBOX_PATH/bin/bart ] ; then
echo "\$TOOLBOX_PATH is not set correctly!" >&2
exit 1
fi
diff -ru a/scripts/octview.m b/scripts/octview.m
--- a/scripts/octview.m 2020-04-10 18:50:50.055248693 -0500
+++ b/scripts/octview.m 2020-04-10 19:14:33.386123750 -0500
@@ -1,6 +1,6 @@
#! /usr/bin/octave -qf
-addpath(strcat(getenv("TOOLBOX_PATH"), "/matlab"));
+addpath(strcat(getenv("TOOLBOX_PATH"), "/bin", "/matlab"));
arg_list = argv();
diff -ru a/scripts/profile.sh b/scripts/profile.sh
--- a/scripts/profile.sh 2020-04-10 18:50:50.055248693 -0500
+++ b/scripts/profile.sh 2020-04-10 19:15:00.723101850 -0500
@@ -45,7 +45,7 @@
exit 1
fi
-if [ ! -e $TOOLBOX_PATH/bart ] ; then
+if [ ! -e $TOOLBOX_PATH/bin/bart ] ; then
echo "\$TOOLBOX_PATH is not set correctly!" >&2
exit 1
fi
@@ -57,7 +57,7 @@
cd $WORKDIR
-nm --defined-only $TOOLBOX_PATH/bart | cut -c11-16,19- | sort > bart.syms
+nm --defined-only $TOOLBOX_PATH/bin/bart | cut -c11-16,19- | sort > bart.syms
cat $in | grep "^TRACE" \
diff -ru a/startup.m b/startup.m
--- a/startup.m 2020-04-10 18:50:50.048248699 -0500
+++ b/startup.m 2020-04-10 18:51:40.390209486 -0500
@@ -1,4 +1,3 @@
% set Matlab path and TOOLBOX_PATH environment variable
-addpath(fullfile(pwd, 'matlab'));
-setenv('TOOLBOX_PATH', pwd);
+addpath(fullfile(getenv('TOOLBOX_PATH'), 'matlab'));


@@ -0,0 +1,84 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Bart(MakefilePackage, CudaPackage):
"""BART: Toolbox for Computational Magnetic Resonance Imaging"""
homepage = "https://mrirecon.github.io/bart/"
url = "https://github.com/mrirecon/bart/archive/v0.5.00.tar.gz"
version('0.5.00', sha256='30eedcda0f0ef3808157542e0d67df5be49ee41e4f41487af5c850632788f643')
# patch to fix build with MKL
patch('https://github.com/mrirecon/bart/commit/b62ca4972d5ac41a44217a5c27123c15daae74db.patch',
sha256='8fd1be181da928448da750b32d45ee6dce7ba6af0424617c4f8d653cf3f05445',
when='@0.5.00')
# patch to fix Makefile for openblas and cuda
patch('Makefile.patch')
# patch to set path to bart
patch('bart_path.patch')
depends_on('libpng')
depends_on('fftw')
depends_on('blas')
depends_on('lapack')
depends_on('py-numpy', type='run')
depends_on('py-matplotlib', type='run')
extends('python')
conflicts('^atlas', msg='BART does not currently support atlas')
def edit(self, spec, prefix):
env['PREFIX'] = prefix
env['FFTW_BASE'] = spec['fftw'].prefix
if spec['blas'].name == 'openblas':
env['OPENBLAS'] = '1'
if spec['blas'].name in ['intel-mkl', 'intel-parallel-studio']:
env['MKL'] = '1'
env['MKL_BASE'] = env['MKLROOT']
else:
env['BLAS_BASE'] = spec['blas'].prefix
if '^netlib-lapack+lapacke' not in spec:
env['NOLAPACKE'] = '1'
if '+cuda' in spec:
cuda_arch = self.spec.variants['cuda_arch'].value
env['CUDA'] = '1'
env['CUDA_BASE'] = spec['cuda'].prefix
env['GPUARCH_FLAGS'] = ' '.join(self.cuda_flags(cuda_arch))
def install(self, spec, prefix):
python_dir = join_path(prefix,
spec['python'].package.site_packages_dir)
make('install')
install_tree('scripts', prefix.scripts)
install_tree('matlab', prefix.matlab)
install('startup.m', prefix)
install('python/bart.py', python_dir)
install('python/cfl.py', python_dir)
install('python/wslsupport.py', python_dir)
if '^python@3:' in spec:
install('python/bartview3.py', join_path(prefix.bin, 'bartview'))
filter_file(r'#!/usr/bin/python3', '#!/usr/bin/env python',
prefix.bin.bartview)
else:
install('python/bartview.py', join_path(prefix.bin, 'bartview'))
filter_file(r'#!/usr/bin/python', '#!/usr/bin/env python',
prefix.bin.bartview)
def setup_run_environment(self, env):
env.set('TOOLBOX_PATH', self.prefix)


@@ -129,7 +129,17 @@ def url_for_version(self, version):
def setup_build_environment(self, env):
env.set('EXTRA_BAZEL_ARGS',
# Spack's logs don't handle colored output well
'--color=no --host_javabase=@local_jdk//:jdk')
'--color=no --host_javabase=@local_jdk//:jdk'
# Enable verbose output for failures
' --verbose_failures'
# Ask bazel to explain what it's up to
# Needs a filename as argument
' --explain=explainlogfile.txt'
# Increase verbosity of explanations
' --verbose_explanations'
# Show (formatted) subcommands being executed
' --subcommands=pretty_print'
' --jobs={0}'.format(make_jobs))
def bootstrap(self, spec, prefix):
bash = which('bash')


@@ -6,11 +6,11 @@
from spack import *
class Bbmap(Package):
class Bbmap(Package, SourceforgePackage):
"""Short read aligner for DNA and RNA-seq data."""
homepage = "http://sourceforge.net/projects/bbmap/"
url = "https://downloads.sourceforge.net/project/bbmap/BBMap_38.63.tar.gz"
sourceforge_mirror_path = "bbmap/BBMap_38.63.tar.gz"
version('38.63', sha256='089064104526c8d696164aefa067f935b888bc71ef95527c72a98c17ee90a01f')
version('37.36', sha256='befe76d7d6f3d0f0cd79b8a01004a2283bdc0b5ab21b0743e9dbde7c7d79e8a9')


@@ -6,7 +6,7 @@
from spack import *
class Bdftopcf(AutotoolsPackage):
class Bdftopcf(AutotoolsPackage, XorgPackage):
"""bdftopcf is a font compiler for the X server and font server. Fonts
in Portable Compiled Format can be read by any architecture, although
the file is structured to allow one particular architecture to read
@@ -15,7 +15,7 @@ class Bdftopcf(AutotoolsPackage):
slowly) on other machines."""
homepage = "http://cgit.freedesktop.org/xorg/app/bdftopcf"
url = "https://www.x.org/archive/individual/app/bdftopcf-1.0.5.tar.gz"
xorg_mirror_path = "app/bdftopcf-1.0.5.tar.gz"
version('1.0.5', sha256='78a5ec945de1d33e6812167b1383554fda36e38576849e74a9039dc7364ff2c3')


@@ -6,14 +6,14 @@
from spack import *
class Beforelight(AutotoolsPackage):
class Beforelight(AutotoolsPackage, XorgPackage):
"""The beforelight program is a sample implementation of a screen saver
for X servers supporting the MIT-SCREEN-SAVER extension. It is only
recommended for use as a code sample, as it does not include features
such as screen locking or configurability."""
homepage = "http://cgit.freedesktop.org/xorg/app/beforelight"
url = "https://www.x.org/archive/individual/app/beforelight-1.0.5.tar.gz"
xorg_mirror_path = "app/beforelight-1.0.5.tar.gz"
version('1.0.5', sha256='93bb3c457d6d5e8def3180fdee07bc84d1b7f0e5378a95812e2193cd51455cdc')


@@ -6,14 +6,14 @@
from spack import *
class Bigreqsproto(AutotoolsPackage):
class Bigreqsproto(AutotoolsPackage, XorgPackage):
"""Big Requests Extension.
This extension defines a protocol to enable the use of requests
that exceed 262140 bytes in length."""
homepage = "http://cgit.freedesktop.org/xorg/proto/bigreqsproto"
url = "https://www.x.org/archive/individual/proto/bigreqsproto-1.1.2.tar.gz"
xorg_mirror_path = "proto/bigreqsproto-1.1.2.tar.gz"
version('1.1.2', sha256='de68a1a9dd1a1219ad73531bff9f662bc62fcd777387549c43cd282399f4a6ea')


@@ -6,11 +6,11 @@
from spack import *
class Bitmap(AutotoolsPackage):
class Bitmap(AutotoolsPackage, XorgPackage):
"""bitmap, bmtoa, atobm - X bitmap (XBM) editor and converter utilities."""
homepage = "http://cgit.freedesktop.org/xorg/app/bitmap"
url = "https://www.x.org/archive/individual/app/bitmap-1.0.8.tar.gz"
xorg_mirror_path = "app/bitmap-1.0.8.tar.gz"
version('1.0.8', sha256='1a2fbd10a2ca5cd93f7b77bbb0555b86d8b35e0fc18d036b1607c761755006fc')


@@ -0,0 +1,63 @@
# Copyright 2013-2019 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class BoincClient(AutotoolsPackage):
"""BOINC is a platform for high-throughput computing on a
large scale (thousands or millions of computers). It can be
used for volunteer computing (using consumer devices) or
grid computing (using organizational resources). It
supports virtualized, parallel, and GPU-based
applications."""
homepage = "https://boinc.berkeley.edu/"
url = "https://github.com/BOINC/boinc/archive/client_release/7.16/7.16.5.tar.gz"
version('7.16.5', sha256='33db60991b253e717c6124cce4750ae7729eaab4e54ec718b9e37f87012d668a')
variant('manager', default=False, description='Builds the client manager')
variant('graphics', default=False, description='Graphic apps support')
# Dependency documentation:
# https://boinc.berkeley.edu/trac/wiki/SoftwarePrereqsUnix
conflicts('%gcc@:3.0.4')
depends_on('autoconf@2.58:', type='build')
depends_on('automake@1.8:', type='build')
depends_on('libtool@1.5:', type='build')
depends_on('m4@1.4:', type='build')
depends_on('curl@7.17.1:')
depends_on('openssl@0.9.8:')
depends_on('freeglut@3:', when='+graphics')
depends_on('libsm', when='+graphics')
depends_on('libice', when='+graphics')
depends_on('libxmu', when='+graphics')
depends_on('libxi', when='+graphics')
depends_on('libx11', when='+graphics')
depends_on('libjpeg', when='+graphics')
depends_on('wxwidgets@3.0.0:', when='+manager')
depends_on('libnotify', when='+manager')
depends_on('sqlite@3.1:', when='+manager')
patch('systemd-fix.patch')
def configure_args(self):
spec = self.spec
args = []
args.append("--disable-server")
args.append("--enable-client")
if '+manager' in spec:
args.append('--enable-manager')
else:
args.append('--disable-manager')
return args


@@ -0,0 +1,13 @@
--- a/client/scripts/Makefile.am 2020-02-23 22:22:11.000000000 -0500
+++ b/client/scripts/Makefile.am 2020-03-27 18:40:28.881826512 -0400
@@ -7,8 +7,8 @@
$(INSTALL) -b boinc-client $(DESTDIR)$(sysconfdir)/init.d/boinc-client ; \
fi
if [ -d /usr/lib/systemd/system ] ; then \
- $(INSTALL) -d $(DESTDIR)/usr/lib/systemd/system/ ; \
- $(INSTALL_DATA) boinc-client.service $(DESTDIR)/usr/lib/systemd/system/boinc-client.service ; \
+ $(INSTALL) -d $(DESTDIR)$(prefix)/lib/systemd/system/ ; \
+ $(INSTALL_DATA) boinc-client.service $(DESTDIR)$(prefix)/lib/systemd/system/boinc-client.service ; \
elif [ -d /lib/systemd/system ] ; then \
$(INSTALL) -d $(DESTDIR)/lib/systemd/system/ ; \
$(INSTALL_DATA) boinc-client.service $(DESTDIR)/lib/systemd/system/boinc-client.service ; \


@@ -0,0 +1,21 @@
# Copyright 2013-2019 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Bonniepp(AutotoolsPackage):
"""Bonnie++ is a benchmark suite that is aimed at performing a number of
simple tests of hard drive and file system performance."""
homepage = "https://doc.coker.com.au/projects/bonnie"
url = "https://www.coker.com.au/bonnie++/bonnie++-1.98.tgz"
version('1.98', sha256='6e0bcbc08b78856fd998dd7bcb352d4615a99c26c2dc83d5b8345b102bad0b04')
def configure_args(self):
configure_args = []
configure_args.append('--enable-debug')
return configure_args


@@ -0,0 +1,31 @@
From 40960b23338da0a359d6aa83585ace09ad8804d2 Mon Sep 17 00:00:00 2001
From: Bo Anderson <mail@boanderson.me>
Date: Sun, 29 Mar 2020 14:55:08 +0100
Subject: [PATCH] Fix compiler version check on macOS
Fixes #440.
---
src/tools/darwin.jam | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/src/tools/darwin.jam b/src/tools/darwin.jam
index 8d477410b0..97e7ecb851 100644
--- tools/build/src/tools/darwin.jam
+++ tools/build/src/tools/darwin.jam
@@ -137,13 +137,14 @@ rule init ( version ? : command * : options * : requirement * )
# - Set the toolset generic common options.
common.handle-options darwin : $(condition) : $(command) : $(options) ;
+ real-version = [ regex.split $(real-version) \\. ] ;
# - GCC 4.0 and higher in Darwin does not have -fcoalesce-templates.
- if $(real-version) < "4.0.0"
+ if [ version.version-less $(real-version) : 4 0 ]
{
flags darwin.compile.c++ OPTIONS $(condition) : -fcoalesce-templates ;
}
# - GCC 4.2 and higher in Darwin does not have -Wno-long-double.
- if $(real-version) < "4.2.0"
+ if [ version.version-less $(real-version) : 4 2 ]
{
flags darwin.compile OPTIONS $(condition) : -Wno-long-double ;
}
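The core of the fix above is replacing a lexical comparison of dotted version strings with a component-wise one; the failure mode is easy to reproduce (illustrative Python, not the Jam code):

```python
def version_tuple(v):
    # Split "10.2.1" into (10, 2, 1) so each component compares numerically.
    return tuple(int(part) for part in v.split("."))


# As strings, "10..." sorts before "4..." because '1' < '4' lexically,
# so a compiler at version 10+ would wrongly pass a "< 4.0.0" check:
assert "10.0.0" < "4.0.0"
assert "4.10.0" < "4.2.0"

# Component-wise, the ordering is correct:
assert version_tuple("10.0.0") > version_tuple("4.0.0")
assert version_tuple("4.10.0") > version_tuple("4.2.0")
print("lexical comparison misorders multi-digit version components")
```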


@@ -203,6 +203,12 @@ def libs(self):
patch('boost_1.63.0_pgi.patch', when='@1.63.0%pgi')
patch('boost_1.63.0_pgi_17.4_workaround.patch', when='@1.63.0%pgi@17.4')
# Fix for version comparison on newer Clang on darwin
# See: https://github.com/boostorg/build/issues/440
# See: https://github.com/macports/macports-ports/pull/6726
patch('darwin_clang_version.patch', level=0,
when='@1.56.0:1.72.0 platform=darwin')
# Fix the bootstrap/bjam build for Cray
patch('bootstrap-path.patch', when='@1.39.0: platform=cray')
@@ -365,7 +371,7 @@ def determine_b2_options(self, spec, options):
cxxflags.append(flag)
if '+pic' in self.spec:
cxxflags.append(self.compiler.pic_flag)
cxxflags.append(self.compiler.cxx_pic_flag)
# clang is not officially supported for pre-compiled headers
# and at least in clang 3.9 still fails to build


@@ -7,12 +7,12 @@
from os import symlink
class Bridger(MakefilePackage):
class Bridger(MakefilePackage, SourceforgePackage):
"""Bridger : An Efficient De novo Transcriptome Assembler For
RNA-Seq Data"""
homepage = "https://sourceforge.net/projects/rnaseqassembly/"
url = "https://downloads.sourceforge.net/project/rnaseqassembly/Bridger_r2014-12-01.tar.gz"
sourceforge_mirror_path = "rnaseqassembly/Bridger_r2014-12-01.tar.gz"
version('2014-12-01', sha256='8fbec8603ea8ad2162cbd0c658e4e0a4af6453bdb53310b4b7e0d112e40b5737')


@@ -6,7 +6,7 @@
from spack import *
class Bzip2(Package):
class Bzip2(Package, SourcewarePackage):
"""bzip2 is a freely available, patent free high-quality data
compressor. It typically compresses files to within 10% to 15%
of the best available techniques (the PPM family of statistical
@@ -14,10 +14,7 @@ class Bzip2(Package):
and six times faster at decompression."""
homepage = "https://sourceware.org/bzip2/"
url = "https://sourceware.org/pub/bzip2/bzip2-1.0.8.tar.gz"
# The server is sometimes a bit slow to respond
fetch_options = {'timeout': 60}
sourceware_mirror_path = "bzip2/bzip2-1.0.8.tar.gz"
version('1.0.8', sha256='ab5a03176ee106d3f0fa90e381da478ddae405918153cca248e682cd0c4a2269')
version('1.0.7', sha256='e768a87c5b1a79511499beb41500bcc4caf203726fff46a6f5f9ad27fe08ab2b')


@@ -59,7 +59,7 @@ def build_args(self, spec, prefix):
'CC={0}'.format(spack_cc),
'CXX={0}'.format(spack_cxx),
'FORTRAN={0}'.format(spack_fc),
'cc_flags={0}'.format(self.compiler.pic_flag),
'cc_flags={0}'.format(self.compiler.cc_pic_flag),
# Allow Spack environment variables to propagate through to SCons
'env_vars=all'
]


@@ -13,8 +13,9 @@ class Cctools(AutotoolsPackage):
"""
homepage = "https://github.com/cooperative-computing-lab/cctools"
url = "https://github.com/cooperative-computing-lab/cctools/archive/release/7.1.0.tar.gz"
url = "https://github.com/cooperative-computing-lab/cctools/archive/release/7.1.2.tar.gz"
version('7.1.2', sha256='ca871e9fe245d047d4c701271cf2b868e6e3a170e8834c1887157ed855985131')
version('7.1.0', sha256='84748245db10ff26c0c0a7b9fd3ec20fbbb849dd4aadc5e8531fd1671abe7a81')
version('7.0.18', sha256='5b6f3c87ae68dd247534a5c073eb68cb1a60176a7f04d82699fbc05e649a91c2')
version('6.1.1', sha256='97f073350c970d6157f80891b3bf6d4f3eedb5f031fea386dc33e22f22b8af9d')


@@ -16,15 +16,19 @@ class Ceed(BundlePackage):
homepage = "https://ceed.exascaleproject.org"
version('2.0')
maintainers = ['jedbrown', 'v-dobrev', 'tzanio']
version('3.0.0')
version('2.0.0')
version('1.0.0')
variant('cuda', default=False,
description='Build MAGMA; enable CUDA support in libCEED and OCCA')
variant('mfem', default=True, description='Build MFEM and Laghos')
description='Enable CUDA support')
variant('mfem', default=True, description='Build MFEM, Laghos and Remhos')
variant('nek', default=True,
description='Build Nek5000, GSLIB, Nekbone, and NekCEM')
variant('occa', default=True,
description='Build OCCA; enable OCCA support in libCEED')
description='Enable OCCA support')
variant('petsc', default=True,
description='Build PETSc and HPGMG')
variant('pumi', default=True,
@@ -34,6 +38,11 @@ class Ceed(BundlePackage):
# TODO: Add 'int64' variant?
# LibCEED
# ceed-3.0
depends_on('libceed@0.6~cuda', when='@3.0.0~cuda')
depends_on('libceed@0.6+cuda+magma', when='@3.0.0+cuda')
depends_on('libceed@0.6+occa', when='@3.0.0+occa')
depends_on('libceed@0.6~occa', when='@3.0.0~occa')
# ceed-2.0
depends_on('libceed@0.4~cuda', when='@2.0.0~cuda')
depends_on('libceed@0.4+cuda', when='@2.0.0+cuda')
@@ -46,6 +55,9 @@ class Ceed(BundlePackage):
depends_on('libceed@0.2~occa', when='@1.0.0~occa')
# OCCA
# ceed-3.0
depends_on('occa@1.0.9~cuda', when='@3.0.0+occa~cuda')
depends_on('occa@1.0.9+cuda', when='@3.0.0+occa+cuda')
# ceed-2.0
depends_on('occa@1.0.8~cuda', when='@2.0.0+occa~cuda')
depends_on('occa@1.0.8+cuda', when='@2.0.0+occa+cuda')
@@ -54,6 +66,12 @@ class Ceed(BundlePackage):
depends_on('occa@1.0.0-alpha.5+cuda', when='@1.0.0+occa+cuda')
# Nek5000, GSLIB, Nekbone, and NekCEM
# ceed-3.0
depends_on('nek5000@19.0', when='@3.0.0+nek')
depends_on('nektools@19.0%gcc', when='@3.0.0+nek')
depends_on('gslib@1.0.6', when='@3.0.0+nek')
depends_on('nekbone@17.0', when='@3.0.0+nek')
depends_on('nekcem@c8db04b', when='@3.0.0+nek')
# ceed-2.0
depends_on('nek5000@17.0', when='@2.0.0+nek')
depends_on('nektools@17.0%gcc', when='@2.0.0+nek')
@@ -67,7 +85,17 @@ class Ceed(BundlePackage):
depends_on('nekbone@17.0', when='@1.0.0+nek')
depends_on('nekcem@0b8bedd', when='@1.0.0+nek')
# PETSc, HPGMG
# PETSc
# ceed-3.0
depends_on('petsc+cuda', when='@3.0.0+petsc+cuda')
# For a +quickbuild we disable hdf5 and superlu-dist in PETSc.
depends_on('petsc@3.13.0:3.13.99~hdf5~superlu-dist',
when='@3.0.0+petsc+quickbuild')
depends_on('petsc@3.13.0:3.13.99+mpi+double~int64', when='@3.0.0+petsc~mfem')
# The mfem petsc examples need the petsc variants +hypre, +suite-sparse,
# and +mumps:
depends_on('petsc@3.13.0:3.13.99+mpi+hypre+suite-sparse+mumps+double~int64',
when='@3.0.0+petsc+mfem')
# ceed-2.0
# For a +quickbuild we disable hdf5 and superlu-dist in PETSc.
# Ideally, these can be turned into recommendations to Spack for
@@ -94,18 +122,37 @@ class Ceed(BundlePackage):
depends_on('hpgmg@a0a5510df23b+fe', when='@1.0.0+petsc')
# MAGMA
# ceed-3.0
depends_on('magma@2.5.3', when='@3.0.0+cuda')
# ceed-2.0
depends_on('magma@2.5.0', when='@2.0.0+cuda')
# ceed-1.0
depends_on('magma@2.3.0', when='@1.0.0+cuda')
# PUMI
# ceed-3.0
depends_on('pumi@2.2.2', when='@3.0.0+pumi')
# ceed-2.0
depends_on('pumi@2.2.0', when='@2.0.0+pumi')
# ceed-1.0
depends_on('pumi@2.1.0', when='@1.0.0+pumi')
# MFEM, Laghos
# MFEM, Laghos, Remhos
# ceed-3.0
depends_on('mfem@4.1.0+mpi+examples+miniapps', when='@3.0.0+mfem~petsc')
depends_on('mfem@4.1.0+mpi+petsc+examples+miniapps',
when='@3.0.0+mfem+petsc')
depends_on('mfem@4.1.0+pumi', when='@3.0.0+mfem+pumi')
depends_on('mfem@4.1.0+gslib', when='@3.0.0+mfem+nek')
depends_on('mfem@4.1.0+libceed', when='@3.0.0+mfem')
depends_on('mfem@4.1.0+cuda', when='@3.0.0+mfem+cuda')
depends_on('mfem@4.1.0+occa', when='@3.0.0+mfem+occa')
depends_on('laghos@3.0', when='@3.0.0+mfem')
depends_on('remhos@1.0', when='@3.0.0+mfem')
# If using gcc version <= 4.8 build suite-sparse version <= 5.1.0
depends_on('suite-sparse@:5.1.0', when='@3.0.0%gcc@:4.8+mfem+petsc')
# ceed-2.0
depends_on('mfem@3.4.0+mpi+examples+miniapps', when='@2.0.0+mfem~petsc')
depends_on('mfem@3.4.0+mpi+petsc+examples+miniapps',


@@ -11,10 +11,11 @@ class Charliecloud(AutotoolsPackage):
maintainers = ['j-ogas']
homepage = "https://hpc.github.io/charliecloud"
url = "https://github.com/hpc/charliecloud/releases/download/v0.14/charliecloud-0.9.10.tar.gz"
url = "https://github.com/hpc/charliecloud/releases/download/v0.14/charliecloud-0.14.tar.gz"
git = "https://github.com/hpc/charliecloud.git"
version('master', branch='master')
version('0.15', sha256='2163420d43c934151c4f44a188313bdb7f79e576d5a86ba64b9ea45f784b9921')
version('0.14', sha256='4ae23c2d6442949e16902f9d5604dbd1d6059aeb5dd461b11fc5c74d49dcb194')
depends_on('m4', type='build')
@@ -32,6 +33,8 @@ class Charliecloud(AutotoolsPackage):
depends_on('py-sphinx', type='build', when='+docs')
depends_on('py-sphinx-rtd-theme', type='build', when='+docs')
conflicts('platform=darwin', msg='This package does not build on macOS')
# bash automated testing harness (bats)
depends_on('bats@0.4.0', type='test')


@@ -6,7 +6,7 @@
from spack import *
class Compiz(AutotoolsPackage):
class Compiz(AutotoolsPackage, XorgPackage):
"""compiz - OpenGL window and compositing manager.
Compiz is an OpenGL compositing manager that uses
@@ -15,7 +15,7 @@ class Compiz(AutotoolsPackage):
and it is designed to run well on most graphics hardware."""
homepage = "http://www.compiz.org/"
url = "https://www.x.org/archive/individual/app/compiz-0.7.8.tar.gz"
xorg_mirror_path = "app/compiz-0.7.8.tar.gz"
version('0.7.8', sha256='b46f52b776cc78e85357a07688d04b36ec19c65eadeaf6f6cfcca7b8515e6503')


@@ -6,14 +6,14 @@
from spack import *
class Compositeproto(AutotoolsPackage):
class Compositeproto(AutotoolsPackage, XorgPackage):
"""Composite Extension.
This package contains header files and documentation for the composite
extension. Library and server implementations are separate."""
homepage = "http://cgit.freedesktop.org/xorg/proto/compositeproto"
url = "https://www.x.org/archive/individual/proto/compositeproto-0.4.2.tar.gz"
xorg_mirror_path = "proto/compositeproto-0.4.2.tar.gz"
version('0.4.2', sha256='22195b7e50036440b1c6b3b2d63eb03dfa6e71c8a1263ed1f07b0f31ae7dad50')


@@ -107,10 +107,10 @@ class Conduit(Package):
#
# Use HDF5 1.8 for wider output compatibility;
# the variants reflect that we are not using hdf5's mpi or fortran features.
depends_on("hdf5@1.8.19:1.8.999~cxx~mpi~fortran", when="+hdf5+hdf5_compat+shared")
depends_on("hdf5@1.8.19:1.8.999~shared~cxx~mpi~fortran", when="+hdf5+hdf5_compat~shared")
depends_on("hdf5~cxx~mpi~fortran", when="+hdf5~hdf5_compat+shared")
depends_on("hdf5~shared~cxx~mpi~fortran", when="+hdf5~hdf5_compat~shared")
depends_on("hdf5@1.8.19:1.8.999~cxx", when="+hdf5+hdf5_compat+shared")
depends_on("hdf5@1.8.19:1.8.999~shared~cxx", when="+hdf5+hdf5_compat~shared")
depends_on("hdf5~cxx", when="+hdf5~hdf5_compat+shared")
depends_on("hdf5~shared~cxx", when="+hdf5~hdf5_compat~shared")
###############
# Silo


@@ -6,7 +6,7 @@
from spack import *
class Constype(AutotoolsPackage):
class Constype(AutotoolsPackage, XorgPackage):
constype prints to the standard output the Sun code for the type of
display attached to the specified device.
@@ -14,7 +14,7 @@ class Constype(AutotoolsPackage):
SPARC OS'es and to Solaris on both SPARC & x86."""
homepage = "http://cgit.freedesktop.org/xorg/app/constype"
url = "https://www.x.org/archive/individual/app/constype-1.0.4.tar.gz"
xorg_mirror_path = "app/constype-1.0.4.tar.gz"
version('1.0.4', sha256='ec09aff369cf1d527fd5b8075fb4dd0ecf89d905190cf1a0a0145d5e523f913d')


@@ -12,8 +12,9 @@ class Coreutils(AutotoolsPackage, GNUMirrorPackage):
the core utilities which are expected to exist on every
operating system.
"""
homepage = "http://www.gnu.org/software/coreutils/"
gnu_mirror_path = "coreutils/coreutils-8.26.tar.xz"
homepage = 'http://www.gnu.org/software/coreutils/'
gnu_mirror_path = 'coreutils/coreutils-8.26.tar.xz'
version('8.31', sha256='ff7a9c918edce6b4f4b2725e3f9b37b0c4d193531cac49a48b56c4d0d3a9e9fd')
version('8.30', sha256='e831b3a86091496cdba720411f9748de81507798f6130adeaef872d206e1b057')
@@ -21,4 +22,17 @@ class Coreutils(AutotoolsPackage, GNUMirrorPackage):
version('8.26', sha256='155e94d748f8e2bc327c66e0cbebdb8d6ab265d2f37c3c928f7bf6c3beba9a8e')
version('8.23', sha256='ec43ca5bcfc62242accb46b7f121f6b684ee21ecd7d075059bf650ff9e37b82d')
variant("gprefix", default=False, description="prefix commands with 'g', to avoid conflicts with OS utilities")
build_directory = 'spack-build'
def configure_args(self):
    spec = self.spec
    configure_args = []
    if spec.satisfies('platform=darwin'):
        if "+gprefix" in self.spec:
            configure_args.append('--program-prefix=g')
        configure_args.append('--without-gmp')
        configure_args.append('gl_cv_func_ftello_works=yes')
    return configure_args
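As a standalone illustration of the conditional-flag pattern above, here is a minimal plain-Python sketch. The `platform` and `variants` parameters are hypothetical stand-ins for Spack's spec objects, not Spack API; the nesting assumes `--without-gmp` and the ftello cache variable apply on Darwin regardless of `+gprefix`.

```python
# Sketch of the conditional configure-flag pattern used by the
# coreutils recipe: flags are appended only when the platform and
# variant conditions hold.
def configure_args(platform, variants):
    args = []
    if platform == "darwin":
        if "gprefix" in variants:
            # prefix commands with 'g' to avoid clobbering macOS tools
            args.append("--program-prefix=g")
        args.append("--without-gmp")
        args.append("gl_cv_func_ftello_works=yes")
    return args
```

On any other platform the list stays empty, so `./configure` runs with its defaults.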


@@ -20,6 +20,7 @@ class Cp2k(MakefilePackage, CudaPackage):
git = 'https://github.com/cp2k/cp2k.git'
list_url = 'https://github.com/cp2k/cp2k/releases'
version('7.1', sha256='ccd711a09a426145440e666310dd01cc5772ab103493c4ae6a3470898cd0addb')
version('6.1', sha256='af803558e0a6b9e9d9ce8a3ab955ba32bacd179922455424e061c82c9fefa34b')
version('5.1', sha256='e23613b593354fa82e0b8410e17d94c607a0b8c6d9b5d843528403ab09904412')
version('4.1', sha256='4a3e4a101d8a35ebd80a9e9ecb02697fb8256364f1eccdbe4e5a85d31fe21343')
@@ -43,6 +44,7 @@ class Cp2k(MakefilePackage, CudaPackage):
variant('sirius', default=False,
description=('Enable planewave electronic structure'
' calculations via SIRIUS'))
variant('cosma', default=False, description='Use COSMA for p?gemm')
# override cuda_arch from CudaPackage since we only support one arch
# at a time and only specific ones for which we have parameter files
@@ -104,10 +106,13 @@ class Cp2k(MakefilePackage, CudaPackage):
depends_on('mpi@2:', when='+mpi')
depends_on('scalapack', when='+mpi')
depends_on('cosma+scalapack', when='+cosma')
depends_on('cosma+cuda+scalapack', when='+cosma+cuda')
depends_on('elpa@2011.12:2016.13+openmp', when='+openmp+elpa@:5.999')
depends_on('elpa@2011.12:2017.11+openmp', when='+openmp+elpa@6.0:')
depends_on('elpa@2011.12:2016.13~openmp', when='~openmp+elpa@:5.999')
depends_on('elpa@2011.12:2017.11~openmp', when='~openmp+elpa@6.0:')
depends_on('elpa@2018.05:~openmp', when='~openmp+elpa@7.0:')
depends_on('plumed+shared+mpi', when='+plumed+mpi')
depends_on('plumed+shared~mpi', when='+plumed~mpi')
@@ -123,13 +128,17 @@ class Cp2k(MakefilePackage, CudaPackage):
depends_on('sirius+fortran+vdwxc+shared~openmp', when='+sirius~openmp')
# the bundled libcusmm uses numpy in the parameter prediction (v7+)
# which is written using Python 3
depends_on('py-numpy', when='@7:+cuda', type='build')
depends_on('python@3.6:', when='@7:+cuda', type='build')
# PEXSI, ELPA and SIRIUS need MPI in CP2K
# PEXSI, ELPA, COSMA and SIRIUS depend on MPI
conflicts('~mpi', '+pexsi')
conflicts('~mpi', '+elpa')
conflicts('~mpi', '+sirius')
conflicts('~mpi', '+cosma')
conflicts('+sirius', '@:6.999') # sirius support was introduced in 7+
conflicts('+cosma', '@:7.999') # COSMA support was introduced in 8+
conflicts('~cuda', '+cuda_fft')
conflicts('~cuda', '+cuda_blas')
@@ -286,6 +295,12 @@ def edit(self, spec, prefix):
elif self.spec.variants['blas'].value == 'accelerate':
    cppflags += ['-D__ACCELERATE']

if '+cosma' in spec:
    # add before ScaLAPACK to override the p?gemm symbols
    cosma = spec['cosma'].libs
    ldflags.append(cosma.search_flags)
    libs.extend(cosma)

# MPI
if '+mpi' in self.spec:
    cppflags.extend([

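The "add before ScaLAPACK" comment in the CP2K recipe rests on link-line ordering: the linker resolves a symbol such as `pdgemm` from the first library that provides it, so COSMA's override libraries must appear earlier than `-lscalapack`. A minimal sketch of that ordering, with plain lists standing in for Spack's `LibraryList` and illustrative (assumed) library names:

```python
# Build the library portion of a link line so that COSMA's p?gemm
# symbols shadow ScaLAPACK's; the -lcosma* names are illustrative.
def build_link_libs(with_cosma):
    libs = []
    if with_cosma:
        # must precede -lscalapack so symbol resolution picks COSMA
        libs.extend(["-lcosma_pxgemm", "-lcosma"])
    libs.append("-lscalapack")
    return libs
```

Reversing the order would silently fall back to ScaLAPACK's own `p?gemm` routines.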

@@ -34,6 +34,8 @@ class Cryptsetup(AutotoolsPackage):
depends_on('libtool', type='build')
depends_on('m4', type='build')
depends_on('automake@:1.16.1', when='@2.2.1', type='build')
# Upstream includes support for discovering the location of the libintl
# library but is missing the bit in the Makefile.ac that includes it in
# the LDFLAGS. See https://gitlab.com/cryptsetup/cryptsetup/issues/479


@@ -0,0 +1,32 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class Cubist(MakefilePackage):
"""Cubist is a powerful tool for generating rule-based models that
balance the need for accurate prediction against the requirements of
intelligibility.
Cubist models generally give better results than those
produced by simple techniques such as multivariate linear regression,
while also being easier to understand than neural networks."""
homepage = "https://www.rulequest.com"
url = "https://www.rulequest.com/GPL/Cubist.tgz"
version('2.07', sha256='f2b20807cd3275e775c42263a4efd3f50df6e495a8b6dc8989ea2d41b973ac1a')
def edit(self, spec, prefix):
    makefile = FileFilter('Makefile')
    makefile.filter("SHELL .*", "SHELL = /bin/bash")

def install(self, spec, prefix):
    mkdirp(self.prefix.bin)
    install('cubist', prefix.bin)
    install('summary', prefix.bin)
    install('xval', prefix.bin)


@@ -6,14 +6,14 @@
from spack import *
class Damageproto(AutotoolsPackage):
class Damageproto(AutotoolsPackage, XorgPackage):
"""X Damage Extension.
This package contains header files and documentation for the X Damage
extension. Library and server implementations are separate."""
homepage = "https://cgit.freedesktop.org/xorg/proto/damageproto"
url = "https://www.x.org/releases/individual/proto/damageproto-1.2.1.tar.gz"
xorg_mirror_path = "proto/damageproto-1.2.1.tar.gz"
version('1.2.1', sha256='f65ccbf1de9750a527ea6e85694085b179f2d06495cbdb742b3edb2149fef303')

Some files were not shown because too many files have changed in this diff Show More