Compare commits

...

36 Commits

Author SHA1 Message Date
Carson Woods
ddc413ead0 Update commands auto completion 2020-04-30 00:58:38 -04:00
Carson Woods
66cb1dd94c Add shared package installs feature 2020-04-30 00:55:03 -04:00
Todd Gamblin
7a68a4d851 update CHANGELOG.md for 0.14.2 2020-04-15 14:32:01 -07:00
Todd Gamblin
a3bcd88f8d version bump: 0.14.2 2020-04-15 14:30:58 -07:00
Todd Gamblin
740f8fe1a9 bugfix: spack test should not output [+] for mock installs (#15609)
`spack test` has a spurious '[+] ' in the output:

```
lib/spack/spack/test/install.py .........[+] ......
```

With this fix, output is properly suppressed:

```
lib/spack/spack/test/install.py ...............
```
2020-04-15 14:30:58 -07:00
Todd Gamblin
430ca7c7cf spack install terminal output handling in foreground/background (#15723)
Makes the following changes:

* (Fixes #15620) tty configuration was failing when stdout was 
  redirected. The implementation now creates a pseudo terminal for
  stdin and checks stdout properly, so redirections of stdin/out/err
  should be handled now.
* Handles terminal configuration when the Spack process moves between
  the foreground and background (possibly multiple times) during a
  build.
* Spack adjusts terminal settings to allow users to enable/disable
  build process output to the terminal using a "v" toggle; abnormal
  exit cases (like CTRL-C) could leave the terminal in an unusable
  state. This is addressed here with a special-case handler which
  restores terminal settings.

Significantly extend testing of process output logger:

* New PseudoShell object for setting up a master and child process
  and configuring file descriptor inheritance between the two
* Tests for "v" verbosity toggle making use of the added PseudoShell
  object
* Added `uniq` function which takes a list of elements and replaces
  any consecutive sequence of duplicate elements with a single
  instance (e.g. "112211" -> "121")

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2020-04-15 12:47:41 -07:00
Massimiliano Culpo
55f5afaf3c database: maintain in-memory consistency on remove (#15777)
The performance improvements done in #14693 were leaving the DB in an inconsistent state when specs were removed from it. This PR updates the DB internal state whenever the DB is written to a file.

Note that we still cannot properly enumerate installed dependents, so there is a TODO in this code. Fixing that will require the dependents dictionaries in specs to be re-keyed (either by hash, or not keyed at all -- a list would do).  See #11983 for details.
2020-04-15 12:47:16 -07:00
Andrew W Elble
6b559912c1 performance: add a verification file to the database (#14693)
Reading the database repeatedly can be quite slow.  We need a way to speed
up Spack when it reads the DB multiple times, but the DB has not been
modified between reads (which is nearly all the time).

- [x] Add a file containing a unique uuid that is regenerated at database
    write time. Use this uuid to suppress re-parsing the database
    contents if we know a previous uuid and the uuid has not changed.

- [x] Fix mutable_database fixture so that it resets the last seen
    verifier when it resets.

- [x] Enable not rereading the database immediately after a write. Make
    the tests reset the last seen verifier in between tests that use the
    database fixture.

- [x] make presence of uuid module optional
2020-04-15 12:47:00 -07:00
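The mechanism is small enough to sketch. The helper names below (`write_verifier`, `needs_reread`) are hypothetical; the real logic lives in the database diff further down:

```
import uuid

def write_verifier(verifier_path):
    """Regenerate the verification file at database write time."""
    token = str(uuid.uuid4())
    with open(verifier_path, 'w') as f:
        f.write(token)
    return token  # caller remembers this as the last seen verifier

def needs_reread(verifier_path, last_seen):
    """True if the DB may have changed on disk since it was last parsed."""
    try:
        with open(verifier_path, 'r') as f:
            current = f.read()
    except IOError:
        return True  # no verifier file: fall back to re-parsing
    return current == '' or current != last_seen
```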
Peter Scheibel
9b5805a5cd Remove DB conversion of old index.yaml (#15298)
Removed the code that was converting the old index.yaml format into
index.json. Since the change happened in #2189 it should be
considered safe to drop this (untested) code.
2020-04-15 12:45:57 -07:00
Adam J. Stewart
c6c1d01ab6 Allow Spack Environments with '-h' in the name (#15429)
If a user invoked "spack env activate example-henv", Spack would
mistakenly interpret the "-h" from "example-henv" as the "-h" option.
This commit allows users to create and activate environments with
"-h" in the name.

This issue existed for bash shell support as well as csh support, and
this commit addresses both, along with some other unrelated csh
support issues.
2020-04-15 12:38:31 -07:00
Peter Scheibel
b9688a8c35 Environments/views: only override spec prefix for non-external packages (#15475)
* only override spec prefix for non-external packages

* add test that environment shell modifications respect explicitly-specified prefixes for external packages

* add clarifying comment
2020-04-15 12:37:37 -07:00
Jonathon Anderson
ed2781973c Source devnull in environment_after_sourcing_files (closes #15775) (#15791)
spack.util.environment_after_sourcing_files compares the local
environment against a shell environment after having sourced a
file; but this ends up including the default shell profile and
rc, which might differ from the local environment.

To change this, compare against the default shell environment,
expressed here as 'source /dev/null'.
2020-04-15 12:37:16 -07:00
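A hedged sketch of the comparison: capture the environment of a shell that sourced only `/dev/null` as the baseline, then diff it against a shell that sourced the file of interest. `env_after_sourcing` and `setup-env.sh` are illustrative names, not Spack's API:

```
import subprocess

def env_after_sourcing(path):
    """Environment of a fresh bash after sourcing ``path``."""
    out = subprocess.check_output(
        ['bash', '-c', 'source "{0}" && env'.format(path)])
    lines = out.decode('utf-8').splitlines()
    return dict(line.split('=', 1) for line in lines if '=' in line)

baseline = env_after_sourcing('/dev/null')  # default shell environment
after = env_after_sourcing('setup-env.sh')  # hypothetical file to inspect
modifications = {k: v for k, v in after.items() if baseline.get(k) != v}
```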
Todd Gamblin
99bb88aead bugfix: TERM may not be in the environment on Cray (#15630) 2020-04-15 12:37:03 -07:00
Massimiliano Culpo
a85cce05a1 Blacklist Lmod variable modifications when sourcing files (#15778)
fixes #15775

Add all the variables listed here:

https://lmod.readthedocs.io/en/latest/090_configuring_lmod.html

to the list of those blacklisted when constructing environment
modifications by sourcing files.
2020-04-15 12:36:53 -07:00
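For illustration only, the blacklist mechanism might look like the sketch below; the single `LMOD_*` pattern stands in for the full list of variable names from the Lmod documentation linked above:

```
import re

# Stand-in for the full set of Lmod configuration variables; the real
# list enumerates the names from the Lmod docs linked above.
BLACKLISTED = [r'^LMOD_.*']

def is_blacklisted(name):
    return any(re.match(pattern, name) for pattern in BLACKLISTED)

# Variables picked up from sourcing a file; keep only non-Lmod ones.
mods = {'PATH': '/opt/view/bin', 'LMOD_SETTARG_FULL_SUPPORT': 'no'}
kept = {name: value for name, value in mods.items()
        if not is_blacklisted(name)}
# kept == {'PATH': '/opt/view/bin'}
```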
Todd Gamblin
e2b1737a42 update CHANGELOG.md for 0.14.1 2020-03-20 12:29:44 -07:00
Todd Gamblin
ff0abb9838 version bump: 0.14.1 2020-03-20 12:29:36 -07:00
Greg Becker
3826cdf139 multiprocessing: allow Spack to run uninterrupted in background (#14682)
Spack currently cannot run as a background process uninterrupted because some of the logging functions used in the install method (especially to create the dynamic verbosity toggle with the v key) cause the OS to issue a SIGTTOU to Spack when it's backgrounded.

This PR puts the necessary gatekeeping in place so that Spack doesn't do anything that will cause a signal to stop the process when operating as a background process.
2020-03-20 12:23:55 -07:00
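A minimal sketch of the gatekeeping check (mirroring the `_is_background_tty` helper that appears in the diffs further down): terminal settings are only touched when our process group owns the terminal.

```
import os
import sys

def is_background(stream=sys.stdout):
    """True if our process group is not the terminal's foreground group."""
    return stream.isatty() and os.getpgrp() != os.tcgetpgrp(stream.fileno())

if not is_background():
    pass  # only now is it safe to adjust termios settings or read keys
```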
Greg Becker
30b4704522 Cray bugfix: TERM missing while reading default target (#15381)
Bug: Spack hangs on some Cray machines

Reason: The TERM environment variable is necessary to run bash -lc "echo $CRAY_CPU_TARGET", but we run that command within env -i, which wipes the environment.

Fix: Manually forward the TERM environment variable to env -i /bin/bash -lc "echo $CRAY_CPU_TARGET"
2020-03-20 11:47:10 -07:00
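In Python terms the fix is roughly the following sketch (a paraphrase of the description above, not the literal Spack code):

```
import os
import subprocess

# env -i wipes the environment, so TERM must be forwarded explicitly.
cmd = ['env', '-i']
if 'TERM' in os.environ:  # TERM may be absent on some Cray systems
    cmd.append('TERM=' + os.environ['TERM'])
cmd += ['/bin/bash', '-lc', 'echo $CRAY_CPU_TARGET']
target = subprocess.check_output(cmd).decode('utf-8').strip()
```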
Kai Germaschewski
09e13cf7cf Upstreams: don't write metadata directory to upstream DB (#15526)
When trying to use an upstream Spack repository, as of f2aca86 Spack
was attempting to write to the upstream DB based on a new metadata
directory added in that commit. Upstream DBs are read-only, so this
should not occur.

This adds a check to prevent Spack from writing to the upstream DB.
2020-03-20 11:46:23 -07:00
Massimiliano Culpo
296d58ef6b Creating versions from urls doesn't modify class attributes (#15452)
fixes #15449

Before this PR, a call to pkg.url_for_version was modifying
class attributes, yielding different results for subsequent
calls and an error when the URL list was empty.
2020-03-20 11:45:40 -07:00
Greg Becker
ec720bf28d bugfix: fix install_missing_compilers option bug from v0.14.0 (#15416)
* bugfix: ensure bootstrapped compilers built before packages using the compiler
2020-03-20 11:44:59 -07:00
Todd Gamblin
1e42f0a545 bugfix: installer.py shouldn't be executable (#15386)
This is a minor permission fix on the new installer.py introduced in #13100.
2020-03-20 11:44:23 -07:00
Patrick Gartung
62683eb4bf Add function replace_prefix_nullterm for use on mach-o rpaths. (#15347)
This recovers the old behavior of replace_prefix_bin that was
modified to work with elf binaries by prefixing os.sep to new prefix
until length is the same as old prefix.
2020-03-20 11:43:54 -07:00
Massimiliano Culpo
901bed48ec ArchSpec: fix semantics of satisfies when not concrete and strict is true (#15319) 2020-03-20 11:42:22 -07:00
Adam J. Stewart
cc8d9eee8e suite-sparse: fix installation for v5.X (#15326)
fixes #15184

GraphBLAS depends on m4 according to CMake error message
Do not use INSTALL= when compiling the library
2020-03-20 11:41:59 -07:00
Tamara Dahlgren
1c8f792bb5 testing: increase installer coverage (#15237) 2020-03-20 11:40:52 -07:00
Tamara Dahlgren
9a1ce36e44 bugfix: resolve undefined source_pkg_dir failure (#15339) 2020-03-20 11:40:39 -07:00
Massimiliano Culpo
59a7963785 Bugfix: resolve StopIteration message attribute failure (#15341)
Testing the install StopIteration exception resulted in an attribute error:

AttributeError: 'StopIteration' object has no attribute 'message'

This PR adds a unit test and resolves that error.
2020-03-20 11:39:28 -07:00
Tamara Dahlgren
733f9f8cfa Recover coverage from subprocesses during unit tests (#15354)
* Recover coverage from subprocesses during unit tests
2020-03-20 11:38:28 -07:00
Massimiliano Culpo
fa0a5e44aa Correct pytest.raises matches to match (#15346) 2020-03-20 11:35:49 -07:00
Tamara Dahlgren
5406e1f43d bugfix: Add dependents when initializing spec from yaml (#15220)
The new build process, introduced in #13100 , relies on a spec's dependents in addition to their dependencies. Loading a spec from a yaml file was not initializing the dependents.

- [x] populate dependents when loading from yaml
2020-03-20 11:34:23 -07:00
Seth R. Johnson
f3a1a8c6fe Uniquify suffixes added to module names (#14920) 2020-03-20 11:33:07 -07:00
Tamara Dahlgren
b02981f10c bugfix: ensure proper dependency handling for package-only installs (#15197)
The distributed build PR (#13100) -- did not check the install status of dependencies when using the `--only package` option so would refuse to install a package with the claim that it had uninstalled dependencies whether that was the case or not.

- [x] add install status checks for the `--only package` case.
- [x] add initial set of tests
2020-03-20 11:23:28 -07:00
Andrew W Elble
654914d53e Fix for being able to 'spack load' packages that have been renamed. (#14348)
* Fix for being able to 'spack load' packages that have been renamed.

* tests: add test for 'spack load' of a installed, but renamed/deleted package
2020-03-20 11:21:40 -07:00
Michael Kuhn
3753424a87 modules: store configure args during build (#11084)
This change stores packages' configure arguments during build and makes
use of them while refreshing module files. This fixes problems such as in
#10716.
2020-03-20 11:16:39 -07:00
Greg Becker
32a3d59bfa fetch_strategy: remove vestigial code (#15431) 2020-03-20 11:14:37 -07:00
54 changed files with 2411 additions and 522 deletions

View File

@@ -1,3 +1,30 @@
# v0.14.2 (2020-04-15)
This is a minor release on the `0.14` series. It includes performance
improvements and bug fixes:
* Improvements to how `spack install` handles foreground/background (#15723)
* Major performance improvements for reading the package DB (#14693, #15777)
* No longer check for the old `index.yaml` database file (#15298)
* Properly activate environments with '-h' in the name (#15429)
* External packages have correct `.prefix` in environments/views (#15475)
* Improvements to computing env modifications from sourcing files (#15791)
* Bugfix on Cray machines when getting `TERM` env variable (#15630)
* Avoid adding spurious `LMOD` env vars to Intel modules (#15778)
* Don't output [+] for mock installs run during tests (#15609)
# v0.14.1 (2020-03-20)
This is a bugfix release on top of `v0.14.0`. Specific fixes include:
* several bugfixes for parallel installation (#15339, #15341, #15220, #15197)
* `spack load` now works with packages that have been renamed (#14348)
* bugfix for `suite-sparse` installation (#15326)
* deduplicate identical suffixes added to module names (#14920)
* fix issues with `configure_args` during module refresh (#11084)
* increased test coverage and test fixes (#15237, #15354, #15346)
* remove some unused code (#15431)
# v0.14.0 (2020-02-23)
`v0.14.0` is a major feature release, with 3 highlighted features:

View File

@@ -16,7 +16,7 @@
config:
# This is the path to the root of the Spack install tree.
# You can use $spack here to refer to the root of the spack instance.
install_tree: $spack/opt/spack
install_tree: ~/.spack/opt/spack
# Locations where templates should be found
@@ -30,8 +30,8 @@ config:
# Locations where different types of modules should be installed.
module_roots:
tcl: $spack/share/spack/modules
lmod: $spack/share/spack/lmod
tcl: ~/.spack/share/spack/modules
lmod: ~/.spack/share/spack/lmod
# Temporary locations Spack can try to use for builds.
@@ -67,7 +67,7 @@ config:
# Cache directory for already downloaded source tarballs and archived
# repositories. This can be purged with `spack clean --downloads`.
source_cache: $spack/var/spack/cache
source_cache: ~/.spack/var/spack/cache
# Cache directory for miscellaneous files, like the package index.

View File

@@ -40,6 +40,7 @@ packages:
pil: [py-pillow]
pkgconfig: [pkgconf, pkg-config]
scalapack: [netlib-scalapack]
sycl: [hipsycl]
szip: [libszip, libaec]
tbb: [intel-tbb]
unwind: [libunwind]

View File

@@ -0,0 +1,7 @@
upstreams:
global:
install_tree: $spack/opt/spack
modules:
tcl: $spack/share/spack/modules
lmod: $spack/share/spack/lmod
dotkit: $spack/share/spack/dotkit

View File

@@ -624,9 +624,9 @@ def replace_directory_transaction(directory_name, tmp_root=None):
# Check the input is indeed a directory with absolute path.
# Raise before anything is done to avoid moving the wrong directory
assert os.path.isdir(directory_name), \
'"directory_name" must be a valid directory'
'Invalid directory: ' + directory_name
assert os.path.isabs(directory_name), \
'"directory_name" must contain an absolute path'
'"directory_name" must contain an absolute path: ' + directory_name
directory_basename = os.path.basename(directory_name)

View File

@@ -619,3 +619,28 @@ def load_module_from_file(module_name, module_path):
import imp
module = imp.load_source(module_name, module_path)
return module
def uniq(sequence):
"""Remove strings of duplicate elements from a list.
This works like the command-line ``uniq`` tool. It filters strings
of duplicate elements in a list. Adjacent matching elements are
merged into the first occurrence.
For example::
uniq([1, 1, 1, 1, 2, 2, 2, 3, 3]) == [1, 2, 3]
uniq([1, 1, 1, 1, 2, 2, 2, 1, 1]) == [1, 2, 1]
"""
if not sequence:
return []
uniq_list = [sequence[0]]
last = sequence[0]
for element in sequence[1:]:
if element != last:
uniq_list.append(element)
last = element
return uniq_list
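Usage matches the docstring's examples; the import path below is assumed from context (the helper is added alongside `load_module_from_file`):

```
from llnl.util.lang import uniq  # assumed module, per the diff above

assert uniq([1, 1, 1, 1, 2, 2, 2, 3, 3]) == [1, 2, 3]
assert uniq([1, 1, 1, 1, 2, 2, 2, 1, 1]) == [1, 2, 1]
assert uniq([]) == []
```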

View File

@@ -7,18 +7,27 @@
"""
from __future__ import unicode_literals
import atexit
import errno
import multiprocessing
import os
import re
import select
import sys
import traceback
import signal
from contextlib import contextmanager
from six import string_types
from six import StringIO
import llnl.util.tty as tty
try:
import termios
except ImportError:
termios = None
# Use this to strip escape sequences
_escape = re.compile(r'\x1b[^m]*m|\x1b\[?1034h')
@@ -31,6 +40,25 @@
control = re.compile('(\x11\n|\x13\n)')
@contextmanager
def ignore_signal(signum):
"""Context manager to temporarily ignore a signal."""
old_handler = signal.signal(signum, signal.SIG_IGN)
try:
yield
finally:
signal.signal(signum, old_handler)
def _is_background_tty(stream):
"""True if the stream is a tty and calling process is in the background.
"""
return (
stream.isatty() and
os.getpgrp() != os.tcgetpgrp(stream.fileno())
)
def _strip(line):
"""Strip color and control characters from a line."""
return _escape.sub('', line)
@@ -41,22 +69,75 @@ class keyboard_input(object):
Use this with ``sys.stdin`` for keyboard input, e.g.::
with keyboard_input(sys.stdin):
r, w, x = select.select([sys.stdin], [], [])
# ... do something with keypresses ...
with keyboard_input(sys.stdin) as kb:
while True:
kb.check_fg_bg()
r, w, x = select.select([sys.stdin], [], [])
# ... do something with keypresses ...
This disables canonical input so that keypresses are available on the
stream immediately. Typically standard input allows line editing,
which means keypresses won't be sent until the user hits return.
The ``keyboard_input`` context manager disables canonical
(line-based) input and echoing, so that keypresses are available on
the stream immediately, and they are not printed to the
terminal. Typically, standard input is line-buffered, which means
keypresses won't be sent until the user hits return. In this mode, a
user can hit, e.g., 'v', and it will be read on the other end of the
pipe immediately but not printed.
It also disables echoing, so that keys pressed aren't printed to the
terminal. So, the user can hit, e.g., 'v', and it's read on the
other end of the pipe immediately but not printed.
The handler takes care to ensure that terminal changes only take
effect when the calling process is in the foreground. If the process
is backgrounded, canonical mode and echo are re-enabled. They are
disabled again when the calling process comes back to the foreground.
When the with block completes, prior TTY settings are restored.
This context manager works through a single signal handler for
``SIGTSTP``, along with a polling routine called ``check_fg_bg()``.
Here are the relevant states, transitions, and POSIX signals::
[Running] -------- Ctrl-Z sends SIGTSTP ------------.
[ in FG ] <------- fg sends SIGCONT --------------. |
^ | |
| fg (no signal) | |
| | v
[Running] <------- bg sends SIGCONT ---------- [Stopped]
[ in BG ] [ in BG ]
We handle all transitions except for ``SIGTSTP`` generated by Ctrl-Z
by periodically calling ``check_fg_bg()``. This routine notices if
we are in the background with canonical mode or echo disabled, or if
we are in the foreground with canonical mode or echo enabled,
and it fixes the terminal settings in response.
``check_fg_bg()`` works *except* for when the process is stopped with
``SIGTSTP``. We cannot rely on a periodic timer in this case, as it
may not run before the process stops. We therefore restore terminal
settings in the ``SIGTSTP`` handler.
Additional notes:
* We mostly use polling here instead of a SIGALRM timer or a
thread. This is to avoid the complexities of many interrupts, which
seem to make system calls (like I/O) unreliable in older Python
versions (2.6 and 2.7). See these issues for details:
1. https://www.python.org/dev/peps/pep-0475/
2. https://bugs.python.org/issue8354
There are essentially too many ways for asynchronous signals to go
wrong if we also have to support older Python versions, so we opt
not to use them.
* ``SIGSTOP`` can stop a process (in the foreground or background),
but it can't be caught. Because of this, we can't fix any terminal
settings on ``SIGSTOP``, and the terminal will be left with
``ICANON`` and ``ECHO`` disabled until it resumes execution.
* Technically, a process *could* be sent ``SIGTSTP`` while running in
the foreground, without the shell backgrounding that process. This
doesn't happen in practice, and we assume that ``SIGTSTP`` always
means that defaults should be restored.
* We rely on ``termios`` support. Without it, or if the stream isn't
a TTY, ``keyboard_input`` has no effect.
Note: this depends on termios support. If termios isn't available,
or if the stream isn't a TTY, this context manager has no effect.
"""
def __init__(self, stream):
"""Create a context manager that will enable keyboard input on stream.
@@ -69,44 +150,97 @@ def __init__(self, stream):
"""
self.stream = stream
def _is_background(self):
"""True iff calling process is in the background."""
return _is_background_tty(self.stream)
def _get_canon_echo_flags(self):
"""Get current termios canonical and echo settings."""
cfg = termios.tcgetattr(self.stream)
return (
bool(cfg[3] & termios.ICANON),
bool(cfg[3] & termios.ECHO),
)
def _enable_keyboard_input(self):
"""Disable canonical input and echoing on ``self.stream``."""
# "enable" input by disabling canonical mode and echo
new_cfg = termios.tcgetattr(self.stream)
new_cfg[3] &= ~termios.ICANON
new_cfg[3] &= ~termios.ECHO
# Apply new settings for terminal
with ignore_signal(signal.SIGTTOU):
termios.tcsetattr(self.stream, termios.TCSANOW, new_cfg)
def _restore_default_terminal_settings(self):
"""Restore the original input configuration on ``self.stream``."""
# _restore_default_terminal_settings can be called in the foreground
# or background. When called in the background, tcsetattr triggers
# SIGTTOU, which we must ignore, or the process will be stopped.
with ignore_signal(signal.SIGTTOU):
termios.tcsetattr(self.stream, termios.TCSANOW, self.old_cfg)
def _tstp_handler(self, signum, frame):
self._restore_default_terminal_settings()
os.kill(os.getpid(), signal.SIGSTOP)
def check_fg_bg(self):
# old_cfg is set up in __enter__ and indicates that we have
# termios and a valid stream.
if not self.old_cfg:
return
# query terminal flags and fg/bg status
flags = self._get_canon_echo_flags()
bg = self._is_background()
# restore sanity if flags are amiss -- see diagram in class docs
if not bg and any(flags): # fg, but input not enabled
self._enable_keyboard_input()
elif bg and not all(flags): # bg, but input enabled
self._restore_default_terminal_settings()
def __enter__(self):
"""Enable immediate keypress input on stream.
"""Enable immediate keypress input, while this process is foreground.
If the stream is not a TTY or the system doesn't support termios,
do nothing.
"""
self.old_cfg = None
self.old_handlers = {}
# Ignore all this if the input stream is not a tty.
if not self.stream or not self.stream.isatty():
return
return self
try:
# If this fails, self.old_cfg will remain None
import termios
if termios:
# save old termios settings to restore later
self.old_cfg = termios.tcgetattr(self.stream)
# save old termios settings
fd = self.stream.fileno()
self.old_cfg = termios.tcgetattr(fd)
# Install a signal handler to disable/enable keyboard input
# when the process moves between foreground and background.
self.old_handlers[signal.SIGTSTP] = signal.signal(
signal.SIGTSTP, self._tstp_handler)
# create new settings with canonical input and echo
# disabled, so keypresses are immediate & don't echo.
self.new_cfg = termios.tcgetattr(fd)
self.new_cfg[3] &= ~termios.ICANON
self.new_cfg[3] &= ~termios.ECHO
# add an atexit handler to ensure the terminal is restored
atexit.register(self._restore_default_terminal_settings)
# Apply new settings for terminal
termios.tcsetattr(fd, termios.TCSADRAIN, self.new_cfg)
# enable keyboard input initially (if foreground)
if not self._is_background():
self._enable_keyboard_input()
except Exception:
pass # some OS's do not support termios, so ignore
return self
def __exit__(self, exc_type, exception, traceback):
"""If termios was avaialble, restore old settings."""
"""If termios was available, restore old settings."""
if self.old_cfg:
import termios
termios.tcsetattr(
self.stream.fileno(), termios.TCSADRAIN, self.old_cfg)
self._restore_default_terminal_settings()
# restore SIGSTP and SIGCONT handlers
if self.old_handlers:
for signum, old_handler in self.old_handlers.items():
signal.signal(signum, old_handler)
class Unbuffered(object):
@@ -282,11 +416,11 @@ def __enter__(self):
self._saved_debug = tty._debug
# OS-level pipe for redirecting output to logger
self.read_fd, self.write_fd = os.pipe()
read_fd, write_fd = os.pipe()
# Multiprocessing pipe for communication back from the daemon
# Currently only used to save echo value between uses
self.parent, self.child = multiprocessing.Pipe()
self.parent_pipe, child_pipe = multiprocessing.Pipe()
# Sets a daemon that writes to file what it reads from a pipe
try:
@@ -297,10 +431,15 @@ def __enter__(self):
input_stream = None # just don't forward input if this fails
self.process = multiprocessing.Process(
target=self._writer_daemon, args=(input_stream,))
target=_writer_daemon,
args=(
input_stream, read_fd, write_fd, self.echo, self.log_file,
child_pipe
)
)
self.process.daemon = True # must set before start()
self.process.start()
os.close(self.read_fd) # close in the parent process
os.close(read_fd) # close in the parent process
finally:
if input_stream:
@@ -322,9 +461,9 @@ def __enter__(self):
self._saved_stderr = os.dup(sys.stderr.fileno())
# redirect to the pipe we created above
os.dup2(self.write_fd, sys.stdout.fileno())
os.dup2(self.write_fd, sys.stderr.fileno())
os.close(self.write_fd)
os.dup2(write_fd, sys.stdout.fileno())
os.dup2(write_fd, sys.stderr.fileno())
os.close(write_fd)
else:
# Handle I/O the Python way. This won't redirect lower-level
@@ -337,7 +476,7 @@ def __enter__(self):
self._saved_stderr = sys.stderr
# create a file object for the pipe; redirect to it.
pipe_fd_out = os.fdopen(self.write_fd, 'w')
pipe_fd_out = os.fdopen(write_fd, 'w')
sys.stdout = pipe_fd_out
sys.stderr = pipe_fd_out
@@ -376,14 +515,14 @@ def __exit__(self, exc_type, exc_val, exc_tb):
# print log contents in parent if needed.
if self.write_log_in_parent:
string = self.parent.recv()
string = self.parent_pipe.recv()
self.file_like.write(string)
if self.close_log_in_parent:
self.log_file.close()
# recover and store echo settings from the child before it dies
self.echo = self.parent.recv()
self.echo = self.parent_pipe.recv()
# join the daemon process. The daemon will quit automatically
# when the write pipe is closed; we just wait for it here.
@@ -408,72 +547,166 @@ def force_echo(self):
# exactly before and after the text we want to echo.
sys.stdout.write(xon)
sys.stdout.flush()
yield
sys.stdout.write(xoff)
sys.stdout.flush()
def _writer_daemon(self, stdin):
"""Daemon that writes output to the log file and stdout."""
# Use line buffering (3rd param = 1) since Python 3 has a bug
# that prevents unbuffered text I/O.
in_pipe = os.fdopen(self.read_fd, 'r', 1)
os.close(self.write_fd)
echo = self.echo # initial echo setting, user-controllable
force_echo = False # parent can force echo for certain output
# list of streams to select from
istreams = [in_pipe, stdin] if stdin else [in_pipe]
log_file = self.log_file
try:
with keyboard_input(stdin):
while True:
# No need to set any timeout for select.select
# Wait until a key press or an event on in_pipe.
rlist, _, _ = select.select(istreams, [], [])
# Allow user to toggle echo with 'v' key.
# Currently ignores other chars.
if stdin in rlist:
if stdin.read(1) == 'v':
echo = not echo
# Handle output from the with block process.
if in_pipe in rlist:
# If we arrive here it means that in_pipe was
# ready for reading : it should never happen that
# line is false-ish
line = in_pipe.readline()
if not line:
break # EOF
# find control characters and strip them.
controls = control.findall(line)
line = re.sub(control, '', line)
# Echo to stdout if requested or forced
if echo or force_echo:
sys.stdout.write(line)
sys.stdout.flush()
# Stripped output to log file.
log_file.write(_strip(line))
log_file.flush()
if xon in controls:
force_echo = True
if xoff in controls:
force_echo = False
except BaseException:
tty.error("Exception occurred in writer daemon!")
traceback.print_exc()
yield
finally:
# send written data back to parent if we used a StringIO
if self.write_log_in_parent:
self.child.send(log_file.getvalue())
log_file.close()
sys.stdout.write(xoff)
sys.stdout.flush()
# send echo value back to the parent so it can be preserved.
self.child.send(echo)
def _writer_daemon(stdin, read_fd, write_fd, echo, log_file, control_pipe):
"""Daemon used by ``log_output`` to write to a log file and to ``stdout``.
The daemon receives output from the parent process and writes it both
to a log and, optionally, to ``stdout``. The relationship looks like
this::
Terminal
|
| +-------------------------+
| | Parent Process |
+--------> | with log_output(): |
| stdin | ... |
| +-------------------------+
| ^ | write_fd (parent's redirected stdout)
| | control |
| | pipe |
| | v read_fd
| +-------------------------+ stdout
| | Writer daemon |------------>
+--------> | read from read_fd | log_file
stdin | write to out and log |------------>
+-------------------------+
Within the ``log_output`` handler, the parent's output is redirected
to a pipe from which the daemon reads. The daemon writes each line
from the pipe to a log file and (optionally) to ``stdout``. The user
can hit ``v`` to toggle output on ``stdout``.
In addition to the input and output file descriptors, the daemon
interacts with the parent via ``control_pipe``. It reports whether
``stdout`` was enabled or disabled when it finished and, if the
``log_file`` is a ``StringIO`` object, then the daemon also sends the
logged output back to the parent as a string, to be written to the
``StringIO`` in the parent. This is mainly for testing.
Arguments:
stdin (stream): input from the terminal
read_fd (int): pipe for reading from parent's redirected stdout
write_fd (int): parent's end of the pipe, which the parent writes to
(closed immediately by the writer daemon)
echo (bool): initial echo setting -- controlled by user and
preserved across multiple writer daemons
log_file (file-like): file to log all output
control_pipe (Pipe): multiprocessing pipe on which to send control
information to the parent
"""
# Use line buffering (3rd param = 1) since Python 3 has a bug
# that prevents unbuffered text I/O.
in_pipe = os.fdopen(read_fd, 'r', 1)
os.close(write_fd)
# list of streams to select from
istreams = [in_pipe, stdin] if stdin else [in_pipe]
force_echo = False # parent can force echo for certain output
try:
with keyboard_input(stdin) as kb:
while True:
# fix the terminal settings if we recently came to
# the foreground
kb.check_fg_bg()
# wait for input from any stream. use a coarse timeout to
# allow other checks while we wait for input
rlist, _, _ = _retry(select.select)(istreams, [], [], 1e-1)
# Allow user to toggle echo with 'v' key.
# Currently ignores other chars.
# only read stdin if we're in the foreground
if stdin in rlist and not _is_background_tty(stdin):
# it's possible to be backgrounded between the above
# check and the read, so we ignore SIGTTIN here.
with ignore_signal(signal.SIGTTIN):
try:
if stdin.read(1) == 'v':
echo = not echo
except IOError as e:
# If SIGTTIN is ignored, the system gives EIO
# to let the caller know the read failed b/c it
# was in the bg. Ignore that too.
if e.errno != errno.EIO:
raise
if in_pipe in rlist:
# Handle output from the calling process.
line = _retry(in_pipe.readline)()
if not line:
break
# find control characters and strip them.
controls = control.findall(line)
line = control.sub('', line)
# Echo to stdout if requested or forced.
if echo or force_echo:
sys.stdout.write(line)
sys.stdout.flush()
# Stripped output to log file.
log_file.write(_strip(line))
log_file.flush()
if xon in controls:
force_echo = True
if xoff in controls:
force_echo = False
except BaseException:
tty.error("Exception occurred in writer daemon!")
traceback.print_exc()
finally:
# send written data back to parent if we used a StringIO
if isinstance(log_file, StringIO):
control_pipe.send(log_file.getvalue())
log_file.close()
# send echo value back to the parent so it can be preserved.
control_pipe.send(echo)
def _retry(function):
"""Retry a call if errors indicating an interrupted system call occur.
Interrupted system calls return -1 and set ``errno`` to ``EINTR`` if
certain flags are not set. Newer Pythons automatically retry them,
but older Pythons do not, so we need to retry the calls.
This function converts a call like this:
syscall(args)
and makes it retry by wrapping the function like this:
_retry(syscall)(args)
This is a private function because EINTR is unfortunately raised in
different ways from different functions, and we only handle the ones
relevant for this file.
"""
def wrapped(*args, **kwargs):
while True:
try:
return function(*args, **kwargs)
except IOError as e:
if e.errno == errno.EINTR:
continue
raise
except select.error as e:
if e.args[0] == errno.EINTR:
continue
raise
return wrapped
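Usage mirrors the call site above: wrap the system call, then invoke the wrapper with the original arguments.

```
import select
import sys

# Retries select() transparently if a signal interrupts it (EINTR).
rlist, _, _ = _retry(select.select)([sys.stdin], [], [], 0.1)
```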

View File

@@ -0,0 +1,344 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
"""The pty module handles pseudo-terminals.
Currently, the infrastructure here is only used to test llnl.util.tty.log.
If this is used outside a testing environment, we will want to reconsider
things like timeouts in ``ProcessController.wait()``, which are set to
get tests done quickly, not to avoid high CPU usage.
"""
from __future__ import print_function
import os
import signal
import multiprocessing
import re
import sys
import termios
import time
import traceback
import llnl.util.tty.log as log
from spack.util.executable import which
class ProcessController(object):
"""Wrapper around some fundamental process control operations.
This allows one process to drive another similar to the way a shell
would, by sending signals and I/O.
"""
def __init__(self, pid, master_fd,
timeout=1, sleep_time=1e-1, debug=False):
"""Create a controller to manipulate the process with id ``pid``
Args:
pid (int): id of process to control
master_fd (int): master file descriptor attached to pid's stdin
timeout (int): time in seconds for wait operations to time out
(default 1 second)
sleep_time (int): time to sleep after signals, to control the
signal rate of the controller (default 1e-1)
debug (bool): whether ``horizontal_line()`` and ``status()`` should
produce output when called (default False)
``sleep_time`` allows the caller to insert delays after calls
that signal or modify the controlled process. Python behaves very
poorly if signals arrive too fast, and drowning a Python process
that has a Python signal handler in signals can kill the process and
hang our tests, so we throttle this to a closer-to-interactive rate.
"""
self.pid = pid
self.pgid = os.getpgid(pid)
self.master_fd = master_fd
self.timeout = timeout
self.sleep_time = sleep_time
self.debug = debug
# we need the ps command to wait for process statuses
self.ps = which("ps", required=True)
def get_canon_echo_attrs(self):
"""Get echo and canon attributes of the terminal of master_fd."""
cfg = termios.tcgetattr(self.master_fd)
return (
bool(cfg[3] & termios.ICANON),
bool(cfg[3] & termios.ECHO),
)
def horizontal_line(self, name):
"""Labled horizontal line for debugging."""
if self.debug:
sys.stderr.write(
"------------------------------------------- %s\n" % name
)
def status(self):
"""Print debug message with status info for the child."""
if self.debug:
canon, echo = self.get_canon_echo_attrs()
sys.stderr.write("canon: %s, echo: %s\n" % (
"on" if canon else "off",
"on" if echo else "off",
))
sys.stderr.write("input: %s\n" % self.input_on())
sys.stderr.write("bg: %s\n" % self.background())
sys.stderr.write("\n")
def input_on(self):
"""True if keyboard input is enabled on the master_fd pty."""
return self.get_canon_echo_attrs() == (False, False)
def background(self):
"""True if pgid is in a background pgroup of master_fd's terminal."""
return self.pgid != os.tcgetpgrp(self.master_fd)
def tstp(self):
"""Send SIGTSTP to the controlled process."""
self.horizontal_line("tstp")
os.killpg(self.pgid, signal.SIGTSTP)
time.sleep(self.sleep_time)
def cont(self):
self.horizontal_line("cont")
os.killpg(self.pgid, signal.SIGCONT)
time.sleep(self.sleep_time)
def fg(self):
self.horizontal_line("fg")
with log.ignore_signal(signal.SIGTTOU):
os.tcsetpgrp(self.master_fd, os.getpgid(self.pid))
time.sleep(self.sleep_time)
def bg(self):
self.horizontal_line("bg")
with log.ignore_signal(signal.SIGTTOU):
os.tcsetpgrp(self.master_fd, os.getpgrp())
time.sleep(self.sleep_time)
def write(self, byte_string):
self.horizontal_line("write '%s'" % byte_string.decode("utf-8"))
os.write(self.master_fd, byte_string)
def wait(self, condition):
start = time.time()
while (((time.time() - start) < self.timeout) and not condition()):
time.sleep(1e-2)
assert condition()
def wait_enabled(self):
self.wait(lambda: self.input_on() and not self.background())
def wait_disabled(self):
self.wait(lambda: not self.input_on() and self.background())
def wait_disabled_fg(self):
self.wait(lambda: not self.input_on() and not self.background())
def proc_status(self):
status = self.ps("-p", str(self.pid), "-o", "stat", output=str)
status = re.split(r"\s+", status.strip(), re.M)
return status[1]
def wait_stopped(self):
self.wait(lambda: "T" in self.proc_status())
def wait_running(self):
self.wait(lambda: "T" not in self.proc_status())
class PseudoShell(object):
"""Sets up master and child processes with a PTY.
You can create a ``PseudoShell`` if you want to test how some
function responds to terminal input. This is a pseudo-shell from a
job control perspective; ``master_function`` and ``child_function``
are set up with a pseudoterminal (pty) so that the master can drive
the child through process control signals and I/O.
The two functions should have signatures like this::
def master_function(proc, ctl, **kwargs)
def child_function(**kwargs)
``master_function`` is spawned in its own process and passed three
arguments:
proc
the ``multiprocessing.Process`` object representing the child
ctl
a ``ProcessController`` object tied to the child
kwargs
keyword arguments passed from ``PseudoShell.start()``.
``child_function`` is only passed ``kwargs`` delegated from
``PseudoShell.start()``.
The controller's ``master_fd`` is connected to ``sys.stdin`` in the
child process. Both processes will share the
same ``sys.stdout`` and ``sys.stderr`` as the process instantiating
``PseudoShell``.
Here are the relationships between processes created::
._________________________________________________________.
| Child Process | pid 2
| - runs child_function | pgroup 2
|_________________________________________________________| session 1
^
| create process with master_fd connected to stdin
| stdout, stderr are the same as caller
._________________________________________________________.
| Master Process | pid 1
| - runs master_function | pgroup 1
| - uses ProcessController and master_fd to control child | session 1
|_________________________________________________________|
^
| create process
| stdin, stdout, stderr are the same as caller
._________________________________________________________.
| Caller | pid 0
| - Constructs, starts, joins PseudoShell | pgroup 0
| - provides master_function, child_function | session 0
|_________________________________________________________|
"""
def __init__(self, master_function, child_function):
self.proc = None
self.master_function = master_function
self.child_function = child_function
# these can be optionally set to change defaults
self.controller_timeout = 1
self.sleep_time = 0
def start(self, **kwargs):
"""Start the master and child processes.
Arguments:
kwargs (dict): arbitrary keyword arguments that will be
passed to master and child functions
The master process will create the child, then call
``master_function``. The child process will call
``child_function``.
"""
self.proc = multiprocessing.Process(
target=PseudoShell._set_up_and_run_master_function,
args=(self.master_function, self.child_function,
self.controller_timeout, self.sleep_time),
kwargs=kwargs,
)
self.proc.start()
def join(self):
"""Wait for the child process to finish, and return its exit code."""
self.proc.join()
return self.proc.exitcode
@staticmethod
def _set_up_and_run_child_function(
tty_name, stdout_fd, stderr_fd, ready, child_function, **kwargs):
"""Child process wrapper for PseudoShell.
Handles the mechanics of setting up a PTY, then calls
``child_function``.
"""
# new process group, like a command or pipeline launched by a shell
os.setpgrp()
# take controlling terminal and set up pty IO
stdin_fd = os.open(tty_name, os.O_RDWR)
os.dup2(stdin_fd, sys.stdin.fileno())
os.dup2(stdout_fd, sys.stdout.fileno())
os.dup2(stderr_fd, sys.stderr.fileno())
os.close(stdin_fd)
if kwargs.get("debug"):
sys.stderr.write(
"child: stdin.isatty(): %s\n" % sys.stdin.isatty())
# tell the parent that we're really running
if kwargs.get("debug"):
sys.stderr.write("child: ready!\n")
ready.value = True
try:
child_function(**kwargs)
except BaseException:
traceback.print_exc()
@staticmethod
def _set_up_and_run_master_function(
master_function, child_function, controller_timeout, sleep_time,
**kwargs):
"""Set up a pty, spawn a child process, and execute master_function.
Handles the mechanics of setting up a PTY, then calls
``master_function``.
"""
os.setsid() # new session; this process is the controller
master_fd, child_fd = os.openpty()
pty_name = os.ttyname(child_fd)
# take controlling terminal
pty_fd = os.open(pty_name, os.O_RDWR)
os.close(pty_fd)
ready = multiprocessing.Value('i', False)
child_process = multiprocessing.Process(
target=PseudoShell._set_up_and_run_child_function,
args=(pty_name, sys.stdout.fileno(), sys.stderr.fileno(),
ready, child_function),
kwargs=kwargs,
)
child_process.start()
# wait for subprocess to be running and connected.
while not ready.value:
time.sleep(1e-5)
pass
if kwargs.get("debug"):
sys.stderr.write("pid: %d\n" % os.getpid())
sys.stderr.write("pgid: %d\n" % os.getpgrp())
sys.stderr.write("sid: %d\n" % os.getsid(0))
sys.stderr.write("tcgetpgrp: %d\n" % os.tcgetpgrp(master_fd))
sys.stderr.write("\n")
child_pgid = os.getpgid(child_process.pid)
sys.stderr.write("child pid: %d\n" % child_process.pid)
sys.stderr.write("child pgid: %d\n" % child_pgid)
sys.stderr.write("child sid: %d\n" % os.getsid(child_process.pid))
sys.stderr.write("\n")
sys.stderr.flush()
# set up master to ignore SIGTSTP, like a shell
signal.signal(signal.SIGTSTP, signal.SIG_IGN)
# call the master function once the child is ready
try:
controller = ProcessController(
child_process.pid, master_fd, debug=kwargs.get("debug"))
controller.timeout = controller_timeout
controller.sleep_time = sleep_time
error = master_function(child_process, controller, **kwargs)
except BaseException:
error = 1
traceback.print_exc()
child_process.join()
# return whether either the parent or child failed
return error or child_process.exitcode
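A minimal usage sketch based on the docstring above; both function bodies are illustrative only:

```
def child_function(**kwargs):
    # Runs with stdin attached to the pty set up by the master.
    import sys
    sys.stdout.write('child sees a tty: %s\n' % sys.stdin.isatty())

def master_function(proc, ctl, **kwargs):
    # Drive the child the way a shell would: signals and keystrokes.
    ctl.wait_running()  # wait until the child's ps status is not 'T'
    ctl.write(b'v\n')   # send a keypress through the pty
    return 0            # a nonzero value marks the run as failed

shell = PseudoShell(master_function, child_function)
shell.start(debug=False)
exit_code = shell.join()
```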

View File

@@ -5,7 +5,7 @@
#: major, minor, patch version for Spack, in a tuple
spack_version_info = (0, 14, 0)
spack_version_info = (0, 14, 2)
#: String containing Spack version joined with .'s
spack_version = '.'.join(str(v) for v in spack_version_info)

View File

@@ -40,6 +40,8 @@ def update_kwargs_from_args(args, kwargs):
'fake': args.fake,
'dirty': args.dirty,
'use_cache': args.use_cache,
'install_global': args.install_global,
'upstream': args.upstream,
'cache_only': args.cache_only,
'explicit': True, # Always true for install command
'stop_at': args.until,
@@ -123,6 +125,14 @@ def setup_parser(subparser):
'-f', '--file', action='append', default=[],
dest='specfiles', metavar='SPEC_YAML_FILE',
help="install from file. Read specs to install from .yaml files")
subparser.add_argument(
'--upstream', action='store', default=None,
dest='upstream', metavar='UPSTREAM_NAME',
help='specify which upstream spack to install to')
subparser.add_argument(
'-g', '--global', action='store_true', default=False,
dest='install_global',
help='install package to globally accessible location')
cd_group = subparser.add_mutually_exclusive_group()
arguments.add_common_arguments(cd_group, ['clean', 'dirty'])
@@ -216,7 +226,9 @@ def default_log_file(spec):
"""
fmt = 'test-{x.name}-{x.version}-{hash}.xml'
basename = fmt.format(x=spec, hash=spec.dag_hash())
dirname = fs.os.path.join(spack.paths.var_path, 'junit-report')
dirname = fs.os.path.join(spack.paths.user_config_path,
'var/spack',
'junit-report')
fs.mkdirp(dirname)
return fs.os.path.join(dirname, basename)
@@ -237,6 +249,12 @@ def install_spec(cli_args, kwargs, abstract_spec, spec):
env.regenerate_views()
else:
spec.package.do_install(**kwargs)
spack.config.set('config:active_tree',
'~/.spack/opt/spack',
scope='user')
spack.config.set('config:active_upstream',
None,
scope='user')
except spack.build_environment.InstallError as e:
if cli_args.show_log_on_error:
@@ -251,6 +269,31 @@ def install_spec(cli_args, kwargs, abstract_spec, spec):
def install(parser, args, **kwargs):
# Install Package to Global Upstream for multi-user use
if args.install_global:
spack.config.set('config:active_upstream', 'global',
scope='user')
global_root = spack.config.get('upstreams')
global_root = global_root['global']['install_tree']
global_root = spack.util.path.canonicalize_path(global_root)
spack.config.set('config:active_tree', global_root,
scope='user')
elif args.upstream:
if args.upstream not in spack.config.get('upstreams'):
tty.die("specified upstream does not exist")
spack.config.set('config:active_upstream', args.upstream,
scope='user')
root = spack.config.get('upstreams')
root = root[args.upstream]['install_tree']
root = spack.util.path.canonicalize_path(root)
spack.config.set('config:active_tree', root, scope='user')
else:
spack.config.set('config:active_upstream', None,
scope='user')
spack.config.set('config:active_tree',
spack.config.get('config:install_tree'),
scope='user')
if args.help_cdash:
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,

View File

@@ -154,7 +154,7 @@ def test(parser, args, unknown_args):
# The default is to test the core of Spack. If the option `--extension`
# has been used, then test that extension.
pytest_root = spack.paths.test_path
pytest_root = spack.paths.spack_root
if args.extension:
target = args.extension
extensions = spack.config.get('config:extensions')

View File

@@ -5,6 +5,8 @@
from __future__ import print_function
import argparse
import copy
import sys
import itertools
@@ -15,6 +17,7 @@
import spack.cmd.common.arguments as arguments
import spack.repo
import spack.store
import spack.spec
from spack.database import InstallStatuses
from llnl.util import tty
@@ -53,9 +56,22 @@ def setup_parser(subparser):
"supplied, all installed packages will be uninstalled. "
"If used in an environment, all packages in the environment "
"will be uninstalled.")
subparser.add_argument(
'packages',
nargs=argparse.REMAINDER,
help="specs of packages to uninstall")
subparser.add_argument(
'-u', '--upstream', action='store', default=None,
dest='upstream', metavar='UPSTREAM_NAME',
help='specify which upstream spack to uninstall from')
subparser.add_argument(
'-g', '--global', action='store_true',
dest='global_uninstall',
help='uninstall packages installed to global upstream')
def find_matching_specs(env, specs, allow_multiple_matches=False, force=False):
def find_matching_specs(env, specs, allow_multiple_matches=False, force=False,
upstream=None, global_uninstall=False):
"""Returns a list of specs matching the not necessarily
concretized specs given from cli
@@ -67,6 +83,35 @@ def find_matching_specs(env, specs, allow_multiple_matches=False, force=False):
Return:
list of specs
"""
if global_uninstall:
spack.config.set('config:active_upstream', 'global',
scope='user')
global_root = spack.config.get('upstreams')
global_root = global_root['global']['install_tree']
global_root = spack.util.path.canonicalize_path(global_root)
spack.config.set('config:active_tree', global_root,
scope='user')
elif upstream:
if upstream not in spack.config.get('upstreams'):
tty.die("specified upstream does not exist")
spack.config.set('config:active_upstream', upstream,
scope='user')
root = spack.config.get('upstreams')
root = root[upstream]['install_tree']
root = spack.util.path.canonicalize_path(root)
spack.config.set('config:active_tree', root, scope='user')
else:
spack.config.set('config:active_upstream', None,
scope='user')
for spec in specs:
if isinstance(spec, spack.spec.Spec):
spec_name = str(spec)
spec_copy = (copy.deepcopy(spec))
spec_copy.concretize()
if spec_copy.package.installed_upstream:
tty.warn("{0} is installed upstream".format(spec_name))
tty.die("Use 'spack uninstall [--upstream upstream_name]'")
# constrain uninstall resolution to current environment if one is active
hashes = env.all_hashes() if env else None
@@ -224,11 +269,25 @@ def do_uninstall(env, specs, force):
for item in ready:
item.do_uninstall(force=force)
# write any changes made to the active environment
if env:
env.write()
spack.config.set('config:active_tree',
'~/.spack/opt/spack',
scope='user')
spack.config.set('config:active_upstream', None,
scope='user')
def get_uninstall_list(args, specs, env):
# Gets the list of installed specs that match the ones given via the CLI
# args.all takes care of the case where '-a' is given in the cli
uninstall_list = find_matching_specs(env, specs, args.all, args.force)
uninstall_list = find_matching_specs(env, specs, args.all, args.force,
upstream=args.upstream,
global_uninstall=args.global_uninstall
)
# Takes care of '-R'
active_dpts, inactive_dpts = installed_dependents(uninstall_list, env)
@@ -305,7 +364,7 @@ def uninstall_specs(args, specs):
anything_to_do = set(uninstall_list).union(set(remove_list))
if not anything_to_do:
tty.warn('There are no package to uninstall.')
tty.warn('There are no packages to uninstall.')
return
if not args.yes_to_all:

View File

@@ -18,32 +18,33 @@
as the authoritative database of packages in Spack. This module
provides a cache and a sanity checking mechanism for what is in the
filesystem.
"""
import datetime
import time
import os
import sys
import socket
import contextlib
from six import string_types
from six import iteritems
from ruamel.yaml.error import MarkedYAMLError, YAMLError
import contextlib
import datetime
import os
import socket
import sys
import time
try:
import uuid
_use_uuid = True
except ImportError:
_use_uuid = False
pass
import llnl.util.tty as tty
from llnl.util.filesystem import mkdirp
import spack.store
import six
import spack.repo
import spack.spec
import spack.store
import spack.util.lock as lk
import spack.util.spack_yaml as syaml
import spack.util.spack_json as sjson
from spack.filesystem_view import YamlFilesystemView
from spack.util.crypto import bit_length
from llnl.util.filesystem import mkdirp
from spack.directory_layout import DirectoryLayoutError
from spack.error import SpackError
from spack.filesystem_view import YamlFilesystemView
from spack.util.crypto import bit_length
from spack.version import Version
# TODO: Provide an API automatically retrying a build after detecting and
@@ -284,29 +285,22 @@ def __init__(self, root, db_dir=None, upstream_dbs=None,
exist. This is the ``db_dir``.
The Database will attempt to read an ``index.json`` file in
``db_dir``. If it does not find one, it will fall back to read
an ``index.yaml`` if one is present. If that does not exist, it
will create a database when needed by scanning the entire
Database root for ``spec.yaml`` files according to Spack's
``DirectoryLayout``.
``db_dir``. If that does not exist, it will create a database
when needed by scanning the entire Database root for ``spec.yaml``
files according to Spack's ``DirectoryLayout``.
Caller may optionally provide a custom ``db_dir`` parameter
where data will be stored. This is intended to be used for
where data will be stored. This is intended to be used for
testing the Database class.
"""
self.root = root
if db_dir is None:
# If the db_dir is not provided, default to within the db root.
self._db_dir = os.path.join(self.root, _db_dirname)
else:
# Allow customizing the database directory location for testing.
self._db_dir = db_dir
# If the db_dir is not provided, default to within the db root.
self._db_dir = db_dir or os.path.join(self.root, _db_dirname)
# Set up layout of database files within the db dir
self._old_yaml_index_path = os.path.join(self._db_dir, 'index.yaml')
self._index_path = os.path.join(self._db_dir, 'index.json')
self._verifier_path = os.path.join(self._db_dir, 'index_verifier')
self._lock_path = os.path.join(self._db_dir, 'lock')
# This is for other classes to use to lock prefix directories.
@@ -324,10 +318,11 @@ def __init__(self, root, db_dir=None, upstream_dbs=None,
if not os.path.exists(self._db_dir):
mkdirp(self._db_dir)
if not os.path.exists(self._failure_dir):
if not os.path.exists(self._failure_dir) and not is_upstream:
mkdirp(self._failure_dir)
self.is_upstream = is_upstream
self.last_seen_verifier = ''
# initialize rest of state.
self.db_lock_timeout = (
@@ -342,7 +337,26 @@ def __init__(self, root, db_dir=None, upstream_dbs=None,
tty.debug('PACKAGE LOCK TIMEOUT: {0}'.format(
str(timeout_format_str)))
# Create .spack-db/index.json for the global upstream if it doesn't exist
global_install_tree = spack.config.get(
'upstreams')['global']['install_tree']
global_install_tree = global_install_tree.replace(
'$spack', spack.paths.prefix)
if self.is_upstream:
if global_install_tree in self._db_dir:
if not os.path.isfile(self._index_path):
f = open(self._index_path, "w+")
database = {
'database': {
'installs': {},
'version': str(_db_version)
}
}
try:
sjson.dump(database, f)
except Exception as e:
raise Exception(
"error writing YAML database:", str(e))
self.lock = ForbiddenLock()
else:
self.lock = lk.Lock(self._lock_path,
@@ -554,7 +568,8 @@ def prefix_write_lock(self, spec):
prefix_lock.release_write()
def _write_to_file(self, stream):
"""Write out the databsae to a JSON file.
"""Write out the database in JSON format to the stream passed
as argument.
This function does not do any locking or transactions.
"""
@@ -576,9 +591,8 @@ def _write_to_file(self, stream):
try:
sjson.dump(database, stream)
except YAMLError as e:
raise syaml.SpackYAMLError(
"error writing YAML database:", str(e))
except (TypeError, ValueError) as e:
raise sjson.SpackJSONError("error writing JSON database:", str(e))
def _read_spec_from_dict(self, hash_key, installs):
"""Recursively construct a spec from a hash in a YAML database.
@@ -649,28 +663,15 @@ def _assign_dependencies(self, hash_key, installs, data):
spec._add_dependency(child, dtypes)
def _read_from_file(self, stream, format='json'):
"""
Fill database from file, do not maintain old data
Translate the spec portions from node-dict form to spec form
def _read_from_file(self, filename):
"""Fill database from file, do not maintain old data.
Translate the spec portions from node-dict form to spec form.
Does not do any locking.
"""
if format.lower() == 'json':
load = sjson.load
elif format.lower() == 'yaml':
load = syaml.load
else:
raise ValueError("Invalid database format: %s" % format)
try:
if isinstance(stream, string_types):
with open(stream, 'r') as f:
fdata = load(f)
else:
fdata = load(stream)
except MarkedYAMLError as e:
raise syaml.SpackYAMLError("error parsing YAML database:", str(e))
with open(filename, 'r') as f:
fdata = sjson.load(f)
except Exception as e:
raise CorruptDatabaseError("error parsing database:", str(e))
@@ -682,12 +683,12 @@ def check(cond, msg):
raise CorruptDatabaseError(
"Spack database is corrupt: %s" % msg, self._index_path)
check('database' in fdata, "No 'database' attribute in YAML.")
check('database' in fdata, "no 'database' attribute in JSON DB.")
# High-level file checks
db = fdata['database']
check('installs' in db, "No 'installs' in YAML DB.")
check('version' in db, "No 'version' in YAML DB.")
check('installs' in db, "no 'installs' in JSON DB.")
check('version' in db, "no 'version' in JSON DB.")
installs = db['installs']
@@ -763,7 +764,6 @@ def reindex(self, directory_layout):
"""Build database index from scratch based on a directory layout.
Locks the DB if it isn't locked already.
"""
if self.is_upstream:
raise UpstreamDatabaseLockingError(
@@ -927,7 +927,6 @@ def _write(self, type, value, traceback):
after the start of the next transaction, when it read from disk again.
This routine does no locking.
"""
# Do not write if exceptions were raised
if type is not None:
@@ -941,6 +940,11 @@ def _write(self, type, value, traceback):
with open(temp_file, 'w') as f:
self._write_to_file(f)
os.rename(temp_file, self._index_path)
if _use_uuid:
with open(self._verifier_path, 'w') as f:
new_verifier = str(uuid.uuid4())
f.write(new_verifier)
self.last_seen_verifier = new_verifier
except BaseException as e:
tty.debug(e)
# Clean up temp file if something goes wrong.
@@ -952,35 +956,33 @@ def _read(self):
"""Re-read Database from the data in the set location.
This does no locking, with one exception: it will automatically
migrate an index.yaml to an index.json if possible. This requires
taking a write lock.
try to regenerate a missing DB if local. This requires taking a
write lock.
"""
if os.path.isfile(self._index_path):
# Read from JSON file if a JSON database exists
self._read_from_file(self._index_path, format='json')
current_verifier = ''
if _use_uuid:
try:
with open(self._verifier_path, 'r') as f:
current_verifier = f.read()
except BaseException:
pass
if ((current_verifier != self.last_seen_verifier) or
(current_verifier == '')):
self.last_seen_verifier = current_verifier
# Read from file if a database exists
self._read_from_file(self._index_path)
return
elif self.is_upstream:
raise UpstreamDatabaseLockingError(
"No database index file is present, and upstream"
" databases cannot generate an index file")
elif os.path.isfile(self._old_yaml_index_path):
if (not self.is_upstream) and os.access(
self._db_dir, os.R_OK | os.W_OK):
# if we can write, then read AND write a JSON file.
self._read_from_file(self._old_yaml_index_path, format='yaml')
with lk.WriteTransaction(self.lock):
self._write(None, None, None)
else:
# Check for a YAML file if we can't find JSON.
self._read_from_file(self._old_yaml_index_path, format='yaml')
else:
if self.is_upstream:
raise UpstreamDatabaseLockingError(
"No database index file is present, and upstream"
" databases cannot generate an index file")
# The file doesn't exist, try to traverse the directory.
# reindex() takes its own write lock, so no lock here.
with lk.WriteTransaction(self.lock):
self._write(None, None, None)
self.reindex(spack.store.layout)
# The file doesn't exist, try to traverse the directory.
# reindex() takes its own write lock, so no lock here.
with lk.WriteTransaction(self.lock):
self._write(None, None, None)
self.reindex(spack.store.layout)
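Taken together, the `_write` and `_read` hunks above implement the verifier scheme: each successful write drops a fresh uuid next to `index.json`, and readers skip re-parsing when the uuid matches the one they last saw. A minimal standalone sketch of that handshake, with hypothetical names (`IndexCache`, `mark_written`) that are not Spack's API:

```
import uuid

class IndexCache(object):
    """Sketch of the verifier handshake; not Spack's implementation."""

    def __init__(self, verifier_path):
        self.verifier_path = verifier_path
        self.last_seen_verifier = ''

    def mark_written(self):
        # Called right after index.json is atomically renamed into place.
        token = str(uuid.uuid4())
        with open(self.verifier_path, 'w') as f:
            f.write(token)
        self.last_seen_verifier = token

    def needs_reread(self):
        try:
            with open(self.verifier_path, 'r') as f:
                current = f.read()
        except IOError:
            return True  # no verifier file: always re-read
        # Same rule as the diff: re-read on change or on an empty token.
        return current == '' or current != self.last_seen_verifier
```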
def _add(
self,
@@ -1060,7 +1062,9 @@ def _add(
)
# Connect dependencies from the DB to the new copy.
for name, dep in iteritems(spec.dependencies_dict(_tracked_deps)):
for name, dep in six.iteritems(
spec.dependencies_dict(_tracked_deps)
):
dkey = dep.spec.dag_hash()
upstream, record = self.query_by_spec_hash(dkey)
new_spec._add_dependency(record.spec, dep.deptypes)
@@ -1133,8 +1137,7 @@ def _increment_ref_count(self, spec):
rec.ref_count += 1
def _remove(self, spec):
"""Non-locking version of remove(); does real work.
"""
"""Non-locking version of remove(); does real work."""
key = self._get_matching_spec_key(spec)
rec = self._data[key]
@@ -1142,8 +1145,17 @@ def _remove(self, spec):
rec.installed = False
return rec.spec
if self.is_upstream:
return rec.spec
del self._data[key]
for dep in rec.spec.dependencies(_tracked_deps):
# FIXME: the two lines below need to be updated once #11983 is
# FIXME: fixed. The "if" statement should be deleted and specs are
# FIXME: to be removed from dependents by hash and not by name.
# FIXME: See https://github.com/spack/spack/pull/15777#issuecomment-607818955
if dep._dependents.get(spec.name):
del dep._dependents[spec.name]
self._decrement_ref_count(dep)
if rec.deprecated_for:
@@ -1378,7 +1390,7 @@ def _query(
# TODO: handling of hashes restriction is not particularly elegant.
hash_key = query_spec.dag_hash()
if (hash_key in self._data and
(not hashes or hash_key in hashes)):
(not hashes or hash_key in hashes)):
return [self._data[hash_key].spec]
else:
return []
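The `_remove` hunk above keeps the in-memory DB consistent with what gets written to disk: the record is dropped, the removed spec is detached from each tracked dependency's dependents map (still keyed by name, per the FIXME), and reference counts are decremented. A toy model of that bookkeeping, with illustrative names only:

```
def remove_record(data, key, tracked_deps):
    """Toy model of the remove-side bookkeeping; not Spack's code."""
    rec = data.pop(key)
    for dep in tracked_deps(rec):
        # Per the FIXME in the diff: should be keyed by hash (#11983).
        dep.dependents.pop(rec.name, None)
        dep.ref_count -= 1  # lets unreferenced implicit installs be GC'd
    return rec
```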


@@ -29,7 +29,6 @@
import re
import shutil
import sys
import xml.etree.ElementTree
import llnl.util.tty as tty
import six
@@ -760,13 +759,6 @@ def mirror_id(self):
result = os.path.sep.join(['git', repo_path, repo_ref])
return result
def get_source_id(self):
if not self.branch:
return
output = self.git('ls-remote', self.url, self.branch, output=str)
if output:
return output.split()[0]
def _repo_info(self):
args = ''
@@ -944,11 +936,6 @@ def cachable(self):
def source_id(self):
return self.revision
def get_source_id(self):
output = self.svn('info', '--xml', self.url, output=str)
info = xml.etree.ElementTree.fromstring(output)
return info.find('entry/commit').get('revision')
def mirror_id(self):
if self.revision:
repo_path = url_util.parse(self.url).path
@@ -1064,11 +1051,6 @@ def mirror_id(self):
result = os.path.sep.join(['hg', repo_path, self.revision])
return result
def get_source_id(self):
output = self.hg('id', self.url, output=str)
if output:
return output.strip()
@_needs_stage
def fetch(self):
if self.stage.expanded:
@@ -1257,7 +1239,7 @@ def _from_merged_attrs(fetcher, pkg, version):
# TODO: refactor this logic into its own method or function
# TODO: to avoid duplication
mirrors = [spack.url.substitute_version(u, version)
for u in getattr(pkg, 'urls', [])]
for u in getattr(pkg, 'urls', [])[1:]]
attrs = {fetcher.url_attr: url, 'mirrors': mirrors}
else:
url = getattr(pkg, fetcher.url_attr)

lib/spack/spack/installer.py Executable file → Normal file

@@ -36,6 +36,7 @@
import sys
import time
import llnl.util.filesystem as fs
import llnl.util.lock as lk
import llnl.util.tty as tty
import spack.binary_distribution as binary_distribution
@@ -43,14 +44,12 @@
import spack.error
import spack.hooks
import spack.package
import spack.package_prefs as prefs
import spack.repo
import spack.store
from llnl.util.filesystem import \
chgrp, install, install_tree, mkdirp, touch, working_dir
from llnl.util.tty.color import colorize, cwrite
from llnl.util.tty.color import colorize
from llnl.util.tty.log import log_output
from spack.package_prefs import get_package_dir_permissions, get_package_group
from spack.util.environment import dump_environment
from spack.util.executable import which
@@ -133,21 +132,21 @@ def _do_fake_install(pkg):
chmod = which('chmod')
# Install fake command
mkdirp(pkg.prefix.bin)
touch(os.path.join(pkg.prefix.bin, command))
fs.mkdirp(pkg.prefix.bin)
fs.touch(os.path.join(pkg.prefix.bin, command))
chmod('+x', os.path.join(pkg.prefix.bin, command))
# Install fake header file
mkdirp(pkg.prefix.include)
touch(os.path.join(pkg.prefix.include, header + '.h'))
fs.mkdirp(pkg.prefix.include)
fs.touch(os.path.join(pkg.prefix.include, header + '.h'))
# Install fake shared and static libraries
mkdirp(pkg.prefix.lib)
fs.mkdirp(pkg.prefix.lib)
for suffix in [dso_suffix, '.a']:
touch(os.path.join(pkg.prefix.lib, library + suffix))
fs.touch(os.path.join(pkg.prefix.lib, library + suffix))
# Install fake man page
mkdirp(pkg.prefix.man.man1)
fs.mkdirp(pkg.prefix.man.man1)
packages_dir = spack.store.layout.build_packages_path(pkg.spec)
dump_packages(pkg.spec, packages_dir)
@@ -182,6 +181,9 @@ def _packages_needed_to_bootstrap_compiler(pkg):
# concrete CompilerSpec has less info than concrete Spec
# concretize as Spec to add that information
dep.concretize()
# mark compiler as depended-on by the package that uses it
dep._dependents[pkg.name] = spack.spec.DependencySpec(
pkg.spec, dep, ('build',))
packages = [(s.package, False) for
s in dep.traverse(order='post', root=False)]
packages.append((dep.package, True))
@@ -251,8 +253,7 @@ def _print_installed_pkg(message):
Args:
message (str): message to be output
"""
cwrite('@*g{[+]} ')
print(message)
print(colorize('@*g{[+]} ') + message)
def _process_external_package(pkg, explicit):
@@ -377,14 +378,14 @@ def dump_packages(spec, path):
Dump all package information for a spec and its dependencies.
This creates a package repository within path for every namespace in the
spec DAG, and fills the repos wtih package files and patch files for every
spec DAG, and fills the repos with package files and patch files for every
node in the DAG.
Args:
spec (Spec): the Spack spec whose package information is to be dumped
path (str): the path to the build packages directory
"""
mkdirp(path)
fs.mkdirp(path)
# Copy in package.py files from any dependencies.
# Note that we copy them in as they are in the *install* directory
@@ -407,7 +408,10 @@ def dump_packages(spec, path):
source_repo = spack.repo.Repo(source_repo_root)
source_pkg_dir = source_repo.dirname_for_package_name(
node.name)
except spack.repo.RepoError:
except spack.repo.RepoError as err:
tty.debug('Failed to create source repo for {0}: {1}'
.format(node.name, str(err)))
source_pkg_dir = None
tty.warn("Warning: Couldn't copy in provenance for {0}"
.format(node.name))
@@ -419,10 +423,10 @@ def dump_packages(spec, path):
# Get the location of the package in the dest repo.
dest_pkg_dir = repo.dirname_for_package_name(node.name)
if node is not spec:
install_tree(source_pkg_dir, dest_pkg_dir)
else:
if node is spec:
spack.repo.path.dump_provenance(node, dest_pkg_dir)
elif source_pkg_dir:
fs.install_tree(source_pkg_dir, dest_pkg_dir)
def install_msg(name, pid):
@@ -458,13 +462,17 @@ def log(pkg):
tty.debug(e)
# Archive the whole stdout + stderr for the package
install(pkg.log_path, pkg.install_log_path)
fs.install(pkg.log_path, pkg.install_log_path)
# Archive the environment used for the build
install(pkg.env_path, pkg.install_env_path)
fs.install(pkg.env_path, pkg.install_env_path)
if os.path.exists(pkg.configure_args_path):
# Archive the args used for the build
fs.install(pkg.configure_args_path, pkg.install_configure_args_path)
# Finally, archive files that are specific to each package
with working_dir(pkg.stage.path):
with fs.working_dir(pkg.stage.path):
errors = six.StringIO()
target_dir = os.path.join(
spack.store.layout.metadata_path(pkg.spec), 'archived-files')
@@ -486,8 +494,8 @@ def log(pkg):
target = os.path.join(target_dir, f)
# We must ensure that the directory exists before
# copying a file in
mkdirp(os.path.dirname(target))
install(f, target)
fs.mkdirp(os.path.dirname(target))
fs.install(f, target)
except Exception as e:
tty.debug(e)
@@ -498,7 +506,7 @@ def log(pkg):
if errors.getvalue():
error_file = os.path.join(target_dir, 'errors.txt')
mkdirp(target_dir)
fs.mkdirp(target_dir)
with open(error_file, 'w') as err:
err.write(errors.getvalue())
tty.warn('Errors occurred when archiving files.\n\t'
@@ -647,6 +655,66 @@ def _add_bootstrap_compilers(self, pkg):
if package_id(comp_pkg) not in self.build_tasks:
self._push_task(comp_pkg, is_compiler, 0, 0, STATUS_ADDED)
def _check_db(self, spec):
"""Determine if the spec is flagged as installed in the database
Args:
spec (Spec): spec whose database install status is being checked
Return:
(rec, installed_in_db) tuple where rec is the database record, or
None, if there is no matching spec, and installed_in_db is
``True`` if the spec is considered installed and ``False``
otherwise
"""
try:
rec = spack.store.db.get_record(spec)
installed_in_db = rec.installed if rec else False
except KeyError:
# KeyError is raised if there is no matching spec in the database
# (versus no matching specs that are installed).
rec = None
installed_in_db = False
return rec, installed_in_db
def _check_deps_status(self):
"""Check the install status of the explicit spec's dependencies"""
err = 'Cannot proceed with {0}: {1}'
for dep in self.spec.traverse(order='post', root=False):
dep_pkg = dep.package
dep_id = package_id(dep_pkg)
# Check for failure since a prefix lock is not required
if spack.store.db.prefix_failed(dep):
action = "'spack install' the dependency"
msg = '{0} is marked as an install failure: {1}' \
.format(dep_id, action)
raise InstallError(err.format(self.pkg_id, msg))
# Attempt to get a write lock to ensure another process does not
# uninstall the dependency while the requested spec is being
# installed
ltype, lock = self._ensure_locked('write', dep_pkg)
if lock is None:
msg = '{0} is write locked by another process'.format(dep_id)
raise InstallError(err.format(self.pkg_id, msg))
# Flag external and upstream packages as being installed
if dep_pkg.spec.external or dep_pkg.installed_upstream:
form = 'external' if dep_pkg.spec.external else 'upstream'
tty.debug('Flagging {0} {1} as installed'.format(form, dep_id))
self.installed.add(dep_id)
continue
# Check the database to see if the dependency has been installed
# and flag as such if appropriate
rec, installed_in_db = self._check_db(dep)
if installed_in_db:
tty.debug('Flagging {0} as installed per the database'
.format(dep_id))
self.installed.add(dep_id)
def _prepare_for_install(self, task, keep_prefix, keep_stage,
restage=False):
"""
@@ -676,14 +744,7 @@ def _prepare_for_install(self, task, keep_prefix, keep_stage,
return
# Determine if the spec is flagged as installed in the database
try:
rec = spack.store.db.get_record(task.pkg.spec)
installed_in_db = rec.installed if rec else False
except KeyError:
# KeyError is raised if there is no matching spec in the database
# (versus no matching specs that are installed).
rec = None
installed_in_db = False
rec, installed_in_db = self._check_db(task.pkg.spec)
# Make sure the installation directory is in the desired state
# for uninstalled specs.
@@ -925,6 +986,11 @@ def _init_queue(self, install_deps, install_package):
# Be sure to clear any previous failure
spack.store.db.clear_failure(self.pkg.spec, force=True)
# If not installing dependencies, then determine their
# installation status before proceeding
if not install_deps:
self._check_deps_status()
# Now add the package itself, if appropriate
self._push_task(self.pkg, False, 0, 0, STATUS_ADDED)
@@ -1006,13 +1072,26 @@ def build_process():
pkg.name, 'src')
tty.msg('{0} Copying source to {1}'
.format(pre, src_target))
install_tree(pkg.stage.source_path, src_target)
fs.install_tree(pkg.stage.source_path, src_target)
# Do the real install in the source directory.
with working_dir(pkg.stage.source_path):
with fs.working_dir(pkg.stage.source_path):
# Save the build environment in a file before building.
dump_environment(pkg.env_path)
for attr in ('configure_args', 'cmake_args'):
try:
configure_args = getattr(pkg, attr)()
configure_args = ' '.join(configure_args)
with open(pkg.configure_args_path, 'w') as \
args_file:
args_file.write(configure_args)
break
except Exception:
pass
# cache debug settings
debug_enabled = tty.is_debug()
@@ -1079,7 +1158,7 @@ def build_process():
except StopIteration as e:
# A StopIteration exception means that do_install was asked to
# stop early from clients.
tty.msg('{0} {1}'.format(self.pid, e.message))
tty.msg('{0} {1}'.format(self.pid, str(e)))
tty.msg('Package stage directory : {0}'
.format(pkg.stage.source_path))
@@ -1208,20 +1287,20 @@ def _setup_install_dir(self, pkg):
spack.store.layout.create_install_directory(pkg.spec)
else:
# Set the proper group for the prefix
group = get_package_group(pkg.spec)
group = prefs.get_package_group(pkg.spec)
if group:
chgrp(pkg.spec.prefix, group)
fs.chgrp(pkg.spec.prefix, group)
# Set the proper permissions.
# This has to be done after group because changing groups blows
# away the sticky group bit on the directory
mode = os.stat(pkg.spec.prefix).st_mode
perms = get_package_dir_permissions(pkg.spec)
perms = prefs.get_package_dir_permissions(pkg.spec)
if mode != perms:
os.chmod(pkg.spec.prefix, perms)
# Ensure the metadata path exists as well
mkdirp(spack.store.layout.metadata_path(pkg.spec), mode=perms)
fs.mkdirp(spack.store.layout.metadata_path(pkg.spec), mode=perms)
def _update_failed(self, task, mark=False, exc=None):
"""
@@ -1533,6 +1612,21 @@ def __init__(self, pkg, compiler, start, attempts, status, installed):
self.spec.dependencies() if
package_id(d.package) != self.pkg_id)
# Handle bootstrapped compiler
#
# The bootstrapped compiler is not a dependency in the spec, but it is
# a dependency of the build task. Here we add it to self.dependencies
compiler_spec = self.spec.compiler
arch_spec = self.spec.architecture
if not spack.compilers.compilers_for_spec(compiler_spec,
arch_spec=arch_spec):
# The compiler is in the queue, identify it as dependency
dep = spack.compilers.pkg_spec_for_compiler(compiler_spec)
dep.architecture = arch_spec
dep.concretize()
dep_id = package_id(dep.package)
self.dependencies.add(dep_id)
# List of uninstalled dependencies, which is used to establish
# the priority of the build task.
#
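The hunk above makes a bootstrapped compiler an explicit dependency of the build task even though it never appears in the spec's DAG: when no installed compiler satisfies `self.spec.compiler`, the compiler's own package spec is concretized and its package id joins `self.dependencies`, so the task cannot start before the compiler build finishes. A toy sketch of this id-based gating (the ids are made up, not Spack's `package_id` format):

```
# Toy model: a task is ready only when its dependency ids are installed.
task_dependencies = {'pkg-a': {'gcc-9.2.0', 'zlib-1.2.11'}}
installed = {'zlib-1.2.11'}

def ready(task_id):
    return task_dependencies[task_id] <= installed

print(ready('pkg-a'))        # False: bootstrapped compiler not yet built
installed.add('gcc-9.2.0')
print(ready('pkg-a'))        # True
```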


@@ -281,6 +281,7 @@ def read_module_indices():
module_type_to_index = {}
module_type_to_root = install_properties.get('modules', {})
for module_type, root in module_type_to_root.items():
root = spack.util.path.canonicalize_path(root)
module_type_to_index[module_type] = read_module_index(root)
module_indices.append(module_type_to_index)
@@ -342,7 +343,11 @@ def get_module(module_type, spec, get_full_path, required=True):
The module name or path. May return ``None`` if the module is not
available.
"""
if spec.package.installed_upstream:
try:
upstream = spec.package.installed_upstream
except spack.repo.UnknownPackageError:
upstream, record = spack.store.db.query_by_spec_hash(spec.dag_hash())
if upstream:
module = (spack.modules.common.upstream_module_index
.upstream_module(spec, module_type))
if not module:
@@ -424,6 +429,7 @@ def suffixes(self):
for constraint, suffix in self.conf.get('suffixes', {}).items():
if constraint in self.spec:
suffixes.append(suffix)
suffixes = sorted(set(suffixes))
if self.hash:
suffixes.append(self.hash)
return suffixes
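The added `sorted(set(suffixes))` makes module suffixes unique and deterministic: two matching constraints that map to the same suffix (as in the `'+debug': foo` / `'^mpich': foo` test data later in this diff) no longer duplicate it, and ordering no longer depends on dict iteration. A one-liner illustration:

```
suffixes = ['foo', 'foo', 'bar']  # two constraints both yield 'foo'
print(sorted(set(suffixes)))      # ['bar', 'foo'] - unique and stable
```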
@@ -623,16 +629,9 @@ def configure_options(self):
msg = 'unknown, software installed outside of Spack'
return msg
# This is quite simple right now, but contains information on how
# to call different build system classes.
for attr in ('configure_args', 'cmake_args'):
try:
configure_args = getattr(pkg, attr)()
return ' '.join(configure_args)
except (AttributeError, IOError, KeyError):
# The method doesn't exist in the current spec,
# or it's not usable
pass
if os.path.exists(pkg.install_configure_args_path):
with open(pkg.install_configure_args_path, 'r') as args_file:
return args_file.read()
# Returning a false-like value makes the default templates skip
# the configure option section


@@ -68,6 +68,9 @@
# Filename for the Spack build/install environment file.
_spack_build_envfile = 'spack-build-env.txt'
# Filename for the Spack configure args file.
_spack_configure_argsfile = 'spack-configure-args.txt'
class InstallPhase(object):
"""Manages a single phase of the installation.
@@ -760,7 +763,7 @@ def url_for_version(self, version):
# If no specific URL, use the default, class-level URL
url = getattr(self, 'url', None)
urls = getattr(self, 'urls', [None])
default_url = url or urls.pop(0)
default_url = url or urls[0]
# if no exact match AND no class-level default, use the nearest URL
if not default_url:
@@ -896,6 +899,18 @@ def install_log_path(self):
# Otherwise, return the current install log path name.
return os.path.join(install_path, _spack_build_logfile)
@property
def configure_args_path(self):
"""Return the configure args file path associated with staging."""
return os.path.join(self.stage.path, _spack_configure_argsfile)
@property
def install_configure_args_path(self):
"""Return the configure args file path on successful installation."""
install_path = spack.store.layout.metadata_path(self.spec)
return os.path.join(install_path, _spack_configure_argsfile)
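These two properties tie the earlier hunks together: `build_process` writes the configure/cmake args to `configure_args_path` in the stage, the installer's `log()` copies them to `install_configure_args_path` in the metadata directory, and `configure_options` in the module code reads them back. A hedged sketch of that round trip, with made-up paths:

```
import os

STAGE_ARGS = '/tmp/stage/spack-configure-args.txt'      # illustrative
META_ARGS = '/opt/pkg/.spack/spack-configure-args.txt'  # illustrative

def save_configure_args(args, path=STAGE_ARGS):
    # Build side: persist the args as a single line of text.
    with open(path, 'w') as f:
        f.write(' '.join(args))

def read_configure_args(path=META_ARGS):
    # Module side: a falsy return makes templates skip the section.
    if os.path.exists(path):
        with open(path, 'r') as f:
            return f.read()
    return None
```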
def _make_fetcher(self):
# Construct a composite fetcher that always contains at least
# one element (the root package). In case there are resources


@@ -16,6 +16,9 @@
#: This file lives in $prefix/lib/spack/spack/__file__
prefix = ancestor(__file__, 4)
#: User configuration location
user_config_path = os.path.expanduser('~/.spack')
#: synonym for prefix
spack_root = prefix
@@ -38,6 +41,8 @@
test_path = os.path.join(module_path, "test")
hooks_path = os.path.join(module_path, "hooks")
var_path = os.path.join(prefix, "var", "spack")
user_var_path = os.path.join(user_config_path, "var", "spack")
stage_path = os.path.join(user_var_path, "stage")
repos_path = os.path.join(var_path, "repos")
share_path = os.path.join(prefix, "share", "spack")
@@ -45,9 +50,6 @@
packages_path = os.path.join(repos_path, "builtin")
mock_packages_path = os.path.join(repos_path, "builtin.mock")
#: User configuration location
user_config_path = os.path.expanduser('~/.spack')
opt_path = os.path.join(prefix, "opt")
etc_path = os.path.join(prefix, "etc")


@@ -7,7 +7,7 @@
import re
import llnl.util.tty as tty
from spack.paths import build_env_path
from spack.util.executable import which
from spack.util.executable import Executable
from spack.architecture import Platform, Target, NoPlatformError
from spack.operating_systems.cray_frontend import CrayFrontend
from spack.operating_systems.cnl import Cnl
@@ -117,11 +117,17 @@ def _default_target_from_env(self):
'''
# env -i /bin/bash -lc echo $CRAY_CPU_TARGET 2> /dev/null
if getattr(self, 'default', None) is None:
env = which('env')
output = env("-i", "/bin/bash", "-lc", "echo $CRAY_CPU_TARGET",
output=str, error=os.devnull)
self.default = output.strip()
tty.debug("Found default module:%s" % self.default)
bash = Executable('/bin/bash')
output = bash(
'-lc', 'echo $CRAY_CPU_TARGET',
env={'TERM': os.environ.get('TERM', '')},
output=str,
error=os.devnull
)
output = ''.join(output.split()) # remove all whitespace
if output:
self.default = output
tty.debug("Found default module:%s" % self.default)
return self.default
def _avail_targets(self):
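The rewritten detection above drops `env -i` and instead runs a bare login shell with a nearly empty environment, so only the modulefiles sourced by `bash -l` can set `CRAY_CPU_TARGET`. A standalone approximation of the idea (plain `subprocess`, not Spack's `Executable` wrapper):

```
import os
import subprocess

def default_cray_target():
    """Sketch: query CRAY_CPU_TARGET from a clean login shell."""
    result = subprocess.run(
        ['/bin/bash', '-lc', 'echo $CRAY_CPU_TARGET'],
        env={'TERM': os.environ.get('TERM', '')},
        capture_output=True,
        text=True,
    )
    # Strip all whitespace, as the diff does; empty means "unknown".
    return ''.join(result.stdout.split()) or None
```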


@@ -400,8 +400,8 @@ def replace_prefix_text(path_name, old_dir, new_dir):
def replace_prefix_bin(path_name, old_dir, new_dir):
"""
Attempt to replace old install prefix with new install prefix
in binary files by replacing with null terminated string
that is the same length unless the old path is shorter
in binary files by prefixing new install prefix with os.sep
until the lengths of the prefixes are the same.
"""
def replace(match):
@@ -429,6 +429,38 @@ def replace(match):
f.truncate()
def replace_prefix_nullterm(path_name, old_dir, new_dir):
"""
Attempt to replace old install prefix with new install prefix
in binary files by replacing with null terminated string
that is the same length unless the old path is shorter.
Used on linux to replace mach-o rpaths
"""
def replace(match):
occurrences = match.group().count(old_dir.encode('utf-8'))
olen = len(old_dir.encode('utf-8'))
nlen = len(new_dir.encode('utf-8'))
padding = (olen - nlen) * occurrences
if padding < 0:
return data
return match.group().replace(old_dir.encode('utf-8'),
new_dir.encode('utf-8')) + b'\0' * padding
with open(path_name, 'rb+') as f:
data = f.read()
f.seek(0)
original_data_len = len(data)
pat = re.compile(old_dir.encode('utf-8') + b'([^\0]*?)\0')
if not pat.search(data):
return
ndata = pat.sub(replace, data)
if not len(ndata) == original_data_len:
raise BinaryStringReplacementException(
path_name, original_data_len, len(ndata))
f.write(ndata)
f.truncate()
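The padding arithmetic above is what keeps the file size stable: when the new prefix is shorter, each rewritten string gains `(olen - nlen)` NUL bytes per occurrence of the old prefix. A self-contained demonstration of the same rule (test data is made up):

```
import re

old, new = b'/old/prefix', b'/new'
data = b'header\x00' + old + b'/lib/libfoo.dylib\x00trailer'

def _pad_replace(match):
    # Pad with NULs so the total length is unchanged (new is shorter).
    pad = (len(old) - len(new)) * match.group().count(old)
    return match.group().replace(old, new) + b'\x00' * pad

out = re.sub(old + b'([^\x00]*?)\x00', _pad_replace, data)
assert len(out) == len(data)  # binary layout length preserved
```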
def relocate_macho_binaries(path_names, old_dir, new_dir, allow_root):
"""
Change old_dir to new_dir in LC_RPATH of mach-o files (on macOS)
@@ -466,8 +498,7 @@ def relocate_macho_binaries(path_names, old_dir, new_dir, allow_root):
modify_object_macholib(path_name, placeholder, new_dir)
modify_object_macholib(path_name, old_dir, new_dir)
if len(new_dir) <= len(old_dir):
replace_prefix_bin(path_name, old_dir,
new_dir)
replace_prefix_nullterm(path_name, old_dir, new_dir)
else:
tty.warn('Cannot do a binary string replacement'
' with padding for %s'


@@ -369,6 +369,10 @@ def _satisfies_target(self, other_target, strict):
if not need_to_check:
return True
# self is not concrete, but other_target is there and strict=True
if self.target is None:
return False
for target_range in str(other_target).split(','):
t_min, sep, t_max = target_range.partition(':')
@@ -1919,9 +1923,7 @@ def from_dict(data):
yaml_deps = node[name]['dependencies']
for dname, dhash, dtypes in Spec.read_yaml_dep_specs(yaml_deps):
# Fill in dependencies by looking them up by name in deps dict
deps[name]._dependencies[dname] = DependencySpec(
deps[name], deps[dname], dtypes)
deps[name]._add_dependency(deps[dname], dtypes)
return spec


@@ -34,7 +34,7 @@
import spack.directory_layout
#: default installation root, relative to the Spack install path
default_root = os.path.join(spack.paths.opt_path, 'spack')
default_root = os.path.join(spack.paths.user_config_path, 'opt/spack')
class Store(object):
@@ -70,7 +70,9 @@ def reindex(self):
def _store():
"""Get the singleton store instance."""
root = spack.config.get('config:install_tree', default_root)
root = spack.config.get('config:active_tree', default_root)
# Canonicalize Path for Root regardless of origin
root = spack.util.path.canonicalize_path(root)
return Store(root,
@@ -90,9 +92,19 @@ def _store():
def retrieve_upstream_dbs():
other_spack_instances = spack.config.get('upstreams', {})
global_fallback = {'global': {'install_tree': '$spack/opt/spack',
'modules':
{'tcl': '$spack/share/spack/modules',
'lmod': '$spack/share/spack/lmod',
'dotkit': '$spack/share/spack/dotkit'}}}
other_spack_instances = spack.config.get('upstreams',
global_fallback)
install_roots = []
for install_properties in other_spack_instances.values():
install_roots.append(install_properties['install_tree'])
install_roots.append(spack.util.path.canonicalize_path(
install_properties['install_tree']))
return _construct_upstream_dbs_from_install_roots(install_roots)
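Canonicalizing each upstream root matters because `install_tree` entries in `upstreams` config may contain placeholders such as `$spack` or `~`. A toy version of the expansion (the prefix value is made up, and Spack's real `canonicalize_path` handles more variables than this):

```
import os

def canonicalize(path, spack_prefix='/opt/spack-instance'):
    """Toy stand-in for spack.util.path.canonicalize_path."""
    path = path.replace('$spack', spack_prefix)
    return os.path.abspath(os.path.expanduser(path))

print(canonicalize('$spack/opt/spack'))    # /opt/spack-instance/opt/spack
print(canonicalize('~/.spack/opt/spack'))  # $HOME/.spack/opt/spack
```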


@@ -214,3 +214,16 @@ def test_optimization_flags_with_custom_versions(
)
opt_flags = target.optimization_flags(compiler)
assert opt_flags == expected_flags
@pytest.mark.regression('15306')
@pytest.mark.parametrize('architecture_tuple,constraint_tuple', [
(('linux', 'ubuntu18.04', None), ('linux', None, 'x86_64')),
(('linux', 'ubuntu18.04', None), ('linux', None, 'x86_64:')),
])
def test_satisfy_strict_constraint_when_not_concrete(
architecture_tuple, constraint_tuple
):
architecture = spack.spec.ArchSpec(architecture_tuple)
constraint = spack.spec.ArchSpec(constraint_tuple)
assert not architecture.satisfies(constraint, strict=True)


@@ -117,7 +117,7 @@ def test_uninstall_deprecated(mock_packages, mock_archive, mock_fetch,
non_deprecated = spack.store.db.query()
uninstall('-y', 'libelf@0.8.10')
uninstall('-y', '-g', 'libelf@0.8.10')
assert spack.store.db.query() == spack.store.db.query(installed=any)
assert spack.store.db.query() == non_deprecated


@@ -370,6 +370,54 @@ def test_init_from_yaml(tmpdir):
assert not e2.specs_by_hash
@pytest.mark.usefixtures('config')
def test_env_view_external_prefix(tmpdir_factory, mutable_database,
mock_packages):
fake_prefix = tmpdir_factory.mktemp('a-prefix')
fake_bin = fake_prefix.join('bin')
fake_bin.ensure(dir=True)
initial_yaml = StringIO("""\
env:
specs:
- a
view: true
""")
external_config = StringIO("""\
packages:
a:
paths:
a: {a_prefix}
buildable: false
""".format(a_prefix=str(fake_prefix)))
external_config_dict = spack.util.spack_yaml.load_config(external_config)
test_scope = spack.config.InternalConfigScope(
'env-external-test', data=external_config_dict)
with spack.config.override(test_scope):
e = ev.create('test', initial_yaml)
e.concretize()
# Note: normally installing specs in a test environment requires doing
# a fake install, but not for external specs since no actions are
# taken to install them. The installation commands also include
# post-installation functions like DB-registration, so are important
# to do (otherwise the package is not considered installed).
e.install_all()
e.write()
env_modifications = e.add_default_view_to_shell('sh')
individual_modifications = env_modifications.split('\n')
def path_includes_fake_prefix(cmd):
return 'export PATH' in cmd and str(fake_bin) in cmd
assert any(
path_includes_fake_prefix(cmd) for cmd in individual_modifications
)
def test_init_with_file_and_remove(tmpdir):
"""Ensure a user can remove from any position in the spack.yaml file."""
path = tmpdir.join('spack.yaml')


@@ -54,6 +54,46 @@ def test_install_package_and_dependency(
assert 'errors="0"' in content
def test_global_install_package_and_dependency(
tmpdir, mock_packages, mock_archive, mock_fetch, config,
install_mockery):
with tmpdir.as_cwd():
install('--global',
'--log-format=junit',
'--log-file=test.xml',
'libdwarf')
files = tmpdir.listdir()
filename = tmpdir.join('test.xml')
assert filename in files
content = filename.open().read()
assert 'tests="2"' in content
assert 'failures="0"' in content
assert 'errors="0"' in content
def test_upstream_install_package_and_dependency(
tmpdir, mock_packages, mock_archive, mock_fetch, config,
install_mockery):
with tmpdir.as_cwd():
install('--upstream=global',
'--log-format=junit',
'--log-file=test.xml',
'libdwarf')
files = tmpdir.listdir()
filename = tmpdir.join('test.xml')
assert filename in files
content = filename.open().read()
assert 'tests="2"' in content
assert 'failures="0"' in content
assert 'errors="0"' in content
@pytest.mark.disable_clean_stage_check
def test_install_runtests_notests(monkeypatch, mock_packages, install_mockery):
def check(pkg):
@@ -632,6 +672,30 @@ def test_install_only_dependencies(tmpdir, mock_fetch, install_mockery):
assert not os.path.exists(root.prefix)
def test_install_only_package(tmpdir, mock_fetch, install_mockery, capfd):
msg = ''
with capfd.disabled():
try:
install('--only', 'package', 'dependent-install')
except spack.installer.InstallError as e:
msg = str(e)
assert 'Cannot proceed with dependent-install' in msg
assert '1 uninstalled dependency' in msg
def test_install_deps_then_package(tmpdir, mock_fetch, install_mockery):
dep = Spec('dependency-install').concretized()
root = Spec('dependent-install').concretized()
install('--only', 'dependencies', 'dependent-install')
assert os.path.exists(dep.prefix)
assert not os.path.exists(root.prefix)
install('--only', 'package', 'dependent-install')
assert os.path.exists(root.prefix)
@pytest.mark.regression('12002')
def test_install_only_dependencies_in_env(tmpdir, mock_fetch, install_mockery,
mutable_mock_env_path):


@@ -6,6 +6,7 @@
from spack.main import SpackCommand
spack_test = SpackCommand('test')
cmd_test_py = 'lib/spack/spack/test/cmd/test.py'
def test_list():
@@ -16,13 +17,13 @@ def test_list():
def test_list_with_pytest_arg():
output = spack_test('--list', 'cmd/test.py')
assert output.strip() == "cmd/test.py"
output = spack_test('--list', cmd_test_py)
assert output.strip() == cmd_test_py
def test_list_with_keywords():
output = spack_test('--list', '-k', 'cmd/test.py')
assert output.strip() == "cmd/test.py"
assert output.strip() == cmd_test_py
def test_list_long(capsys):
@@ -44,7 +45,7 @@ def test_list_long(capsys):
def test_list_long_with_pytest_arg(capsys):
with capsys.disabled():
output = spack_test('--list-long', 'cmd/test.py')
output = spack_test('--list-long', cmd_test_py)
assert "test.py::\n" in output
assert "test_list" in output
assert "test_list_with_pytest_arg" in output
@@ -74,7 +75,7 @@ def test_list_names():
def test_list_names_with_pytest_arg():
output = spack_test('--list-names', 'cmd/test.py')
output = spack_test('--list-names', cmd_test_py)
assert "test.py::test_list\n" in output
assert "test.py::test_list_with_pytest_arg\n" in output
assert "test.py::test_list_with_keywords\n" in output


@@ -4,6 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import pytest
import llnl.util.tty as tty
import spack.store
from spack.main import SpackCommand, SpackCommandError
@@ -30,7 +31,7 @@ def test_multiple_matches(mutable_database):
@pytest.mark.db
def test_installed_dependents(mutable_database):
"""Test can't uninstall when ther are installed dependents."""
"""Test can't uninstall when there are installed dependents."""
with pytest.raises(SpackCommandError):
uninstall('-y', 'libelf')
@@ -79,6 +80,42 @@ def test_force_uninstall_spec_with_ref_count_not_zero(
assert len(all_specs) == expected_number_of_specs
@pytest.mark.db
@pytest.mark.usefixtures('mutable_database')
def test_global_recursive_uninstall():
"""Test recursive uninstall from global upstream"""
uninstall('-g', '-y', '-a', '--dependents', 'callpath')
all_specs = spack.store.layout.all_specs()
assert len(all_specs) == 8
# query specs with multiple configurations
mpileaks_specs = [s for s in all_specs if s.satisfies('mpileaks')]
callpath_specs = [s for s in all_specs if s.satisfies('callpath')]
mpi_specs = [s for s in all_specs if s.satisfies('mpi')]
assert len(mpileaks_specs) == 0
assert len(callpath_specs) == 0
assert len(mpi_specs) == 3
@pytest.mark.db
@pytest.mark.usefixtures('mutable_database')
def test_upstream_recursive_uninstall():
"""Test recursive uninstall from specified upstream"""
uninstall('--upstream=global', '-y', '-a', '--dependents', 'callpath')
all_specs = spack.store.layout.all_specs()
assert len(all_specs) == 8
# query specs with multiple configurations
mpileaks_specs = [s for s in all_specs if s.satisfies('mpileaks')]
callpath_specs = [s for s in all_specs if s.satisfies('callpath')]
mpi_specs = [s for s in all_specs if s.satisfies('mpi')]
assert len(mpileaks_specs) == 0
assert len(callpath_specs) == 0
assert len(mpi_specs) == 3
@pytest.mark.db
def test_force_uninstall_and_reinstall_by_hash(mutable_database):
"""Test forced uninstall and reinstall of old specs."""
@@ -102,12 +139,12 @@ def validate_callpath_spec(installed):
specs = spack.store.db.get_by_hash(dag_hash[:7], installed=any)
assert len(specs) == 1 and specs[0] == callpath_spec
specs = spack.store.db.get_by_hash(dag_hash, installed=not installed)
assert specs is None
# specs = spack.store.db.get_by_hash(dag_hash, installed=not installed)
# assert specs is None
specs = spack.store.db.get_by_hash(dag_hash[:7],
installed=not installed)
assert specs is None
# specs = spack.store.db.get_by_hash(dag_hash[:7],
# installed=not installed)
# assert specs is None
mpileaks_spec = spack.store.db.query_one('mpileaks ^mpich')
assert callpath_spec in mpileaks_spec
@@ -155,3 +192,16 @@ def db_specs():
assert len(mpileaks_specs) == 3
assert len(callpath_specs) == 3 # back to 3
assert len(mpi_specs) == 3
@pytest.mark.db
@pytest.mark.regression('15773')
def test_in_memory_consistency_when_uninstalling(
mutable_database, monkeypatch
):
"""Test that uninstalling doesn't raise warnings"""
def _warn(*args, **kwargs):
raise RuntimeError('a warning was triggered!')
monkeypatch.setattr(tty, 'warn', _warn)
# Now try to uninstall and check this doesn't trigger warnings
uninstall('-y', '-a')


@@ -15,15 +15,15 @@
@pytest.fixture()
def concretize_scope(config, tmpdir):
def concretize_scope(mutable_config, tmpdir):
"""Adds a scope for concretization preferences"""
tmpdir.ensure_dir('concretize')
config.push_scope(
mutable_config.push_scope(
ConfigScope('concretize', str(tmpdir.join('concretize'))))
yield
config.pop_scope()
mutable_config.pop_scope()
spack.repo.path._provider_index = None
@@ -84,16 +84,24 @@ def test_preferred_variants(self):
'mpileaks', debug=True, opt=True, shared=False, static=False
)
def test_preferred_compilers(self, mutable_mock_repo):
def test_preferred_compilers(self):
"""Test preferred compilers are applied correctly
"""
update_packages('mpileaks', 'compiler', ['clang@3.3'])
spec = concretize('mpileaks')
assert spec.compiler == spack.spec.CompilerSpec('clang@3.3')
# Need to make sure the test uses an available compiler
compiler_list = spack.compilers.all_compiler_specs()
assert compiler_list
update_packages('mpileaks', 'compiler', ['gcc@4.5.0'])
# Try the first available compiler
compiler = str(compiler_list[0])
update_packages('mpileaks', 'compiler', [compiler])
spec = concretize('mpileaks')
assert spec.compiler == spack.spec.CompilerSpec('gcc@4.5.0')
assert spec.compiler == spack.spec.CompilerSpec(compiler)
# Try the last available compiler
compiler = str(compiler_list[-1])
update_packages('mpileaks', 'compiler', [compiler])
spec = concretize('mpileaks')
assert spec.compiler == spack.spec.CompilerSpec(compiler)
def test_preferred_target(self, mutable_mock_repo):
"""Test preferred compilers are applied correctly


@@ -525,6 +525,8 @@ def database(mock_store, mock_packages, config):
"""This activates the mock store, packages, AND config."""
with use_store(mock_store):
yield mock_store.db
# Force reading the database again between tests
mock_store.db.last_seen_verifier = ''
@pytest.fixture(scope='function')


@@ -1,5 +1,5 @@
config:
install_tree: $spack/opt/spack
install_tree: ~/.spack/opt/spack
template_dirs:
- $spack/share/spack/templates
- $spack/lib/spack/spack/test/data/templates
@@ -7,7 +7,7 @@ config:
build_stage:
- $tempdir/$user/spack-stage
- ~/.spack/stage
source_cache: $spack/var/spack/cache
source_cache: ~/.spack/var/spack/cache
misc_cache: ~/.spack/cache
verify_ssl: true
checksum: true


@@ -0,0 +1,7 @@
upstreams:
global:
install_tree: $spack/opt/spack
modules:
tcl: $spack/share/spack/modules
lmod: $spack/share/spack/lmod
dotkit: $spack/share/spack/dotkit


@@ -5,3 +5,4 @@ tcl:
suffixes:
'+debug': foo
'~debug': bar
'^mpich': foo


@@ -0,0 +1,10 @@
#!/usr/bin/env bash
#
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
export LMOD_VARIABLE=foo
export LMOD_ANOTHER_VARIABLE=bar
export NEW_VAR=new


@@ -13,6 +13,13 @@
import os
import pytest
import json
import shutil
try:
import uuid
_use_uuid = True
except ImportError:
_use_uuid = False
pass
import llnl.util.lock as lk
from llnl.util.tty.colify import colify
@@ -39,6 +46,19 @@ def test_store(tmpdir):
spack.store.store = real_store
@pytest.fixture()
def test_global_db_initialization():
global_store = spack.store.store
global_db_path = '$spack/opt/spack'
global_db_path = spack.util.path.canonicalize_path(global_db_path)
shutil.rmtree(os.path.join(global_db_path, '.spack-db'))
global_store = spack.store.Store(str(global_db_path))
yield
spack.store.store = global_store
@pytest.fixture()
def upstream_and_downstream_db(tmpdir_factory, gen_mock_layout):
mock_db_root = str(tmpdir_factory.mktemp('mock_db_root'))
@@ -469,6 +489,21 @@ def test_015_write_and_read(mutable_database):
assert new_rec.installed == rec.installed
def test_017_write_and_read_without_uuid(mutable_database, monkeypatch):
monkeypatch.setattr(spack.database, '_use_uuid', False)
# write and read DB
with spack.store.db.write_transaction():
specs = spack.store.db.query()
recs = [spack.store.db.get_record(s) for s in specs]
for spec, rec in zip(specs, recs):
new_rec = spack.store.db.get_record(spec)
assert new_rec.ref_count == rec.ref_count
assert new_rec.spec == rec.spec
assert new_rec.path == rec.path
assert new_rec.installed == rec.installed
def test_020_db_sanity(database):
"""Make sure query() returns what's actually in the db."""
_check_db_sanity(database)
@@ -703,6 +738,9 @@ def test_old_external_entries_prefix(mutable_database):
with open(spack.store.db._index_path, 'w') as f:
f.write(json.dumps(db_obj))
if _use_uuid:
with open(spack.store.db._verifier_path, 'w') as f:
f.write(str(uuid.uuid4()))
record = spack.store.db.get_record(s)
@@ -755,7 +793,7 @@ def test_query_spec_with_non_conditional_virtual_dependency(database):
def test_failed_spec_path_error(database):
"""Ensure spec not concrete check is covered."""
s = spack.spec.Spec('a')
with pytest.raises(ValueError, matches='Concrete spec required'):
with pytest.raises(ValueError, match='Concrete spec required'):
spack.store.db._failed_spec_path(s)


@@ -437,3 +437,14 @@ def test_from_environment_diff(before, after, search_list):
for item in search_list:
assert item in mod
@pytest.mark.regression('15775')
def test_blacklist_lmod_variables():
# Construct the list of environment modifications
file = os.path.join(datadir, 'sourceme_lmod.sh')
env = EnvironmentModifications.from_sourcing_file(file)
# Check that variables related to lmod are not in there
modifications = env.group_by_name()
assert not any(x.startswith('LMOD_') for x in modifications)


@@ -7,7 +7,7 @@
import pytest
import shutil
from llnl.util.filesystem import mkdirp, touch, working_dir
import llnl.util.filesystem as fs
from spack.package import InstallError, PackageBase, PackageStillNeededError
import spack.error
@@ -15,7 +15,8 @@
import spack.repo
import spack.store
from spack.spec import Spec
from spack.package import _spack_build_envfile, _spack_build_logfile
from spack.package import (_spack_build_envfile, _spack_build_logfile,
_spack_configure_argsfile)
def test_install_and_uninstall(install_mockery, mock_fetch, monkeypatch):
@@ -379,11 +380,11 @@ def test_pkg_build_paths(install_mockery):
# Backward compatibility checks
log_dir = os.path.dirname(log_path)
mkdirp(log_dir)
with working_dir(log_dir):
fs.mkdirp(log_dir)
with fs.working_dir(log_dir):
# Start with the older of the previous log filenames
older_log = 'spack-build.out'
touch(older_log)
fs.touch(older_log)
assert spec.package.log_path.endswith(older_log)
# Now check the newer log filename
@@ -410,13 +411,16 @@ def test_pkg_install_paths(install_mockery):
env_path = os.path.join(spec.prefix, '.spack', _spack_build_envfile)
assert spec.package.install_env_path == env_path
args_path = os.path.join(spec.prefix, '.spack', _spack_configure_argsfile)
assert spec.package.install_configure_args_path == args_path
# Backward compatibility checks
log_dir = os.path.dirname(log_path)
mkdirp(log_dir)
with working_dir(log_dir):
fs.mkdirp(log_dir)
with fs.working_dir(log_dir):
# Start with the older of the previous install log filenames
older_log = 'build.out'
touch(older_log)
fs.touch(older_log)
assert spec.package.install_log_path.endswith(older_log)
# Now check the newer install log filename
@@ -433,7 +437,8 @@ def test_pkg_install_paths(install_mockery):
shutil.rmtree(log_dir)
def test_pkg_install_log(install_mockery):
def test_log_install_without_build_files(install_mockery):
"""Test the installer log function when no build files are present."""
# Get a basic concrete spec for the trivial install package.
spec = Spec('trivial-install-test-package').concretized()
@@ -441,21 +446,61 @@ def test_pkg_install_log(install_mockery):
with pytest.raises(IOError, match="No such file or directory"):
spack.installer.log(spec.package)
# Set up mock build files and try again
def test_log_install_with_build_files(install_mockery, monkeypatch):
"""Test the installer's log function when have build files."""
config_log = 'config.log'
# Retain the original function for use in the monkey patch that is used
# to raise an exception under the desired condition for test coverage.
orig_install_fn = fs.install
def _install(src, dest):
orig_install_fn(src, dest)
if src.endswith(config_log):
raise Exception('Mock log install error')
monkeypatch.setattr(fs, 'install', _install)
spec = Spec('trivial-install-test-package').concretized()
# Set up mock build files and try again to include archive failure
log_path = spec.package.log_path
log_dir = os.path.dirname(log_path)
mkdirp(log_dir)
with working_dir(log_dir):
touch(log_path)
touch(spec.package.env_path)
fs.mkdirp(log_dir)
with fs.working_dir(log_dir):
fs.touch(log_path)
fs.touch(spec.package.env_path)
fs.touch(spec.package.configure_args_path)
install_path = os.path.dirname(spec.package.install_log_path)
mkdirp(install_path)
fs.mkdirp(install_path)
source = spec.package.stage.source_path
config = os.path.join(source, 'config.log')
fs.touchp(config)
spec.package.archive_files = ['missing', '..', config]
spack.installer.log(spec.package)
assert os.path.exists(spec.package.install_log_path)
assert os.path.exists(spec.package.install_env_path)
assert os.path.exists(spec.package.install_configure_args_path)
archive_dir = os.path.join(install_path, 'archived-files')
source_dir = os.path.dirname(source)
rel_config = os.path.relpath(config, source_dir)
assert os.path.exists(os.path.join(archive_dir, rel_config))
assert not os.path.exists(os.path.join(archive_dir, 'missing'))
expected_errs = [
'OUTSIDE SOURCE PATH', # for '..'
'FAILED TO ARCHIVE' # for rel_config
]
with open(os.path.join(archive_dir, 'errors.txt'), 'r') as fd:
for ln, expected in zip(fd, expected_errs):
assert expected in ln
# Cleanup
shutil.rmtree(log_dir)


@@ -4,17 +4,38 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import py
import pytest
import llnl.util.filesystem as fs
import llnl.util.tty as tty
import llnl.util.lock as ulk
import spack.binary_distribution
import spack.compilers
import spack.directory_layout as dl
import spack.installer as inst
import spack.util.lock as lk
import spack.package_prefs as prefs
import spack.repo
import spack.spec
import spack.store
import spack.util.lock as lk
def _mock_repo(root, namespace):
"""Create an empty repository at the specified root
Args:
root (str): path to the mock repository root
namespace (str): mock repo's namespace
"""
repodir = py.path.local(root) if isinstance(root, str) else root
repodir.ensure(spack.repo.packages_dir_name, dir=True)
yaml = repodir.join('repo.yaml')
yaml.write("""
repo:
namespace: {0}
""".format(namespace))
def _noop(*args, **kwargs):
@@ -27,6 +48,12 @@ def _none(*args, **kwargs):
return None
def _not_locked(installer, lock_type, pkg):
"""Generic monkeypatch function for _ensure_locked to return no lock"""
tty.msg('{0} locked {1}' .format(lock_type, pkg.spec.name))
return lock_type, None
def _true(*args, **kwargs):
"""Generic monkeypatch function that always returns True."""
return True
@@ -129,7 +156,7 @@ def test_process_external_package_module(install_mockery, monkeypatch, capfd):
def test_process_binary_cache_tarball_none(install_mockery, monkeypatch,
capfd):
"""Tests to cover _process_binary_cache_tarball when no tarball."""
"""Tests of _process_binary_cache_tarball when no tarball."""
monkeypatch.setattr(spack.binary_distribution, 'download_tarball', _none)
pkg = spack.repo.get('trivial-install-test-package')
@@ -139,7 +166,7 @@ def test_process_binary_cache_tarball_none(install_mockery, monkeypatch,
def test_process_binary_cache_tarball_tar(install_mockery, monkeypatch, capfd):
"""Tests to cover _process_binary_cache_tarball with a tar file."""
"""Tests of _process_binary_cache_tarball with a tar file."""
def _spec(spec):
return spec
@@ -156,6 +183,25 @@ def _spec(spec):
assert 'Installing a from binary cache' in capfd.readouterr()[0]
def test_try_install_from_binary_cache(install_mockery, mock_packages,
monkeypatch, capsys):
"""Tests SystemExit path for_try_install_from_binary_cache."""
def _spec(spec, force):
spec = spack.spec.Spec('mpi').concretized()
return {spec: None}
spec = spack.spec.Spec('mpich')
spec.concretize()
monkeypatch.setattr(spack.binary_distribution, 'get_spec', _spec)
with pytest.raises(SystemExit):
inst._try_install_from_binary_cache(spec.package, False, False)
captured = capsys.readouterr()
assert 'add a spack mirror to allow download' in str(captured)
def test_installer_init_errors(install_mockery):
"""Test to ensure cover installer constructor errors."""
with pytest.raises(ValueError, match='must be a package'):
@@ -166,17 +212,18 @@ def test_installer_init_errors(install_mockery):
inst.PackageInstaller(pkg)
def test_installer_strings(install_mockery):
"""Tests of installer repr and str for coverage purposes."""
def test_installer_repr(install_mockery):
spec, installer = create_installer('trivial-install-test-package')
# Cover __repr__
irep = installer.__repr__()
assert irep.startswith(installer.__class__.__name__)
assert "installed=" in irep
assert "failed=" in irep
# Cover __str__
def test_installer_str(install_mockery):
spec, installer = create_installer('trivial-install-test-package')
istr = str(installer)
assert "#tasks=0" in istr
assert "installed (0)" in istr
@@ -184,7 +231,6 @@ def test_installer_strings(install_mockery):
def test_installer_last_phase_error(install_mockery, capsys):
"""Test to cover last phase error."""
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
@@ -197,7 +243,6 @@ def test_installer_last_phase_error(install_mockery, capsys):
def test_installer_ensure_ready_errors(install_mockery):
"""Test to cover _ensure_ready errors."""
spec, installer = create_installer('trivial-install-test-package')
fmt = r'cannot be installed locally.*{0}'
@@ -224,24 +269,102 @@ def test_installer_ensure_ready_errors(install_mockery):
installer._ensure_install_ready(spec.package)
def test_ensure_locked_have(install_mockery, tmpdir):
"""Test to cover _ensure_locked when already have lock."""
def test_ensure_locked_err(install_mockery, monkeypatch, tmpdir, capsys):
"""Test _ensure_locked when a non-lock exception is raised."""
mock_err_msg = 'Mock exception error'
def _raise(lock, timeout):
raise RuntimeError(mock_err_msg)
spec, installer = create_installer('trivial-install-test-package')
monkeypatch.setattr(ulk.Lock, 'acquire_read', _raise)
with tmpdir.as_cwd():
with pytest.raises(RuntimeError):
installer._ensure_locked('read', spec.package)
out = str(capsys.readouterr()[1])
assert 'Failed to acquire a read lock' in out
assert mock_err_msg in out
def test_ensure_locked_have(install_mockery, tmpdir, capsys):
"""Test _ensure_locked when already have lock."""
spec, installer = create_installer('trivial-install-test-package')
with tmpdir.as_cwd():
# Test "downgrade" of a read lock (to a read lock)
lock = lk.Lock('./test', default_timeout=1e-9, desc='test')
lock_type = 'read'
tpl = (lock_type, lock)
installer.locks[installer.pkg_id] = tpl
assert installer._ensure_locked(lock_type, spec.package) == tpl
# Test "upgrade" of a read lock without read count to a write
lock_type = 'write'
err = 'Cannot upgrade lock'
with pytest.raises(ulk.LockUpgradeError, match=err):
installer._ensure_locked(lock_type, spec.package)
def test_package_id(install_mockery):
"""Test to cover package_id functionality."""
out = str(capsys.readouterr()[1])
assert 'Failed to upgrade to a write lock' in out
assert 'exception when releasing read lock' in out
# Test "upgrade" of the read lock *with* read count to a write
lock._reads = 1
tpl = (lock_type, lock)
assert installer._ensure_locked(lock_type, spec.package) == tpl
# Test "downgrade" of the write lock to a read lock
lock_type = 'read'
tpl = (lock_type, lock)
assert installer._ensure_locked(lock_type, spec.package) == tpl
@pytest.mark.parametrize('lock_type,reads,writes', [
('read', 1, 0),
('write', 0, 1)])
def test_ensure_locked_new_lock(
install_mockery, tmpdir, lock_type, reads, writes):
pkg_id = 'a'
spec, installer = create_installer(pkg_id)
with tmpdir.as_cwd():
ltype, lock = installer._ensure_locked(lock_type, spec.package)
assert ltype == lock_type
assert lock is not None
assert lock._reads == reads
assert lock._writes == writes
def test_ensure_locked_new_warn(install_mockery, monkeypatch, tmpdir, capsys):
orig_pl = spack.database.Database.prefix_lock
def _pl(db, spec, timeout):
lock = orig_pl(db, spec, timeout)
lock.default_timeout = 1e-9 if timeout is None else None
return lock
pkg_id = 'a'
spec, installer = create_installer(pkg_id)
monkeypatch.setattr(spack.database.Database, 'prefix_lock', _pl)
lock_type = 'read'
ltype, lock = installer._ensure_locked(lock_type, spec.package)
assert ltype == lock_type
assert lock is not None
out = str(capsys.readouterr()[1])
assert 'Expected prefix lock timeout' in out
def test_package_id_err(install_mockery):
pkg = spack.repo.get('trivial-install-test-package')
with pytest.raises(ValueError, matches='spec is not concretized'):
with pytest.raises(ValueError, match='spec is not concretized'):
inst.package_id(pkg)
def test_package_id_ok(install_mockery):
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
@@ -250,43 +373,134 @@ def test_package_id(install_mockery):
def test_fake_install(install_mockery):
"""Test to cover fake install basics."""
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
pkg = spec.package
inst._do_fake_install(pkg)
assert os.path.isdir(pkg.prefix.lib)
def test_packages_needed_to_bootstrap_compiler(install_mockery, monkeypatch):
"""Test to cover most of _packages_needed_to_boostrap_compiler."""
# TODO: More work is needed to go beyond the dependency check
def _no_compilers(pkg, arch_spec):
return []
# Test path where no compiler packages returned
def test_packages_needed_to_bootstrap_compiler_none(install_mockery):
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
packages = inst._packages_needed_to_bootstrap_compiler(spec.package)
assert not packages
# Test up to the dependency check
monkeypatch.setattr(spack.compilers, 'compilers_for_spec', _no_compilers)
with pytest.raises(spack.repo.UnknownPackageError, matches='not found'):
inst._packages_needed_to_bootstrap_compiler(spec.package)
def test_packages_needed_to_bootstrap_compiler_packages(install_mockery,
monkeypatch):
spec = spack.spec.Spec('trivial-install-test-package')
spec.concretize()
def _conc_spec(compiler):
return spack.spec.Spec('a').concretized()
# Ensure we can get past functions that are precluding obtaining
# packages.
monkeypatch.setattr(spack.compilers, 'compilers_for_spec', _none)
monkeypatch.setattr(spack.compilers, 'pkg_spec_for_compiler', _conc_spec)
monkeypatch.setattr(spack.spec.Spec, 'concretize', _noop)
packages = inst._packages_needed_to_bootstrap_compiler(spec.package)
assert packages
def test_dump_packages_deps(install_mockery, tmpdir):
"""Test to add coverage to dump_packages."""
def test_dump_packages_deps_ok(install_mockery, tmpdir, mock_repo_path):
"""Test happy path for dump_packages with dependencies."""
spec_name = 'simple-inheritance'
spec = spack.spec.Spec(spec_name).concretized()
inst.dump_packages(spec, str(tmpdir))
repo = mock_repo_path.repos[0]
dest_pkg = repo.filename_for_package_name(spec_name)
assert os.path.isfile(dest_pkg)
def test_dump_packages_deps_errs(install_mockery, tmpdir, monkeypatch, capsys):
"""Test error paths for dump_packages with dependencies."""
orig_bpp = spack.store.layout.build_packages_path
orig_dirname = spack.repo.Repo.dirname_for_package_name
repo_err_msg = "Mock dirname_for_package_name"
def bpp_path(spec):
# Perform the original function
source = orig_bpp(spec)
# Mock the required directory structure for the repository
_mock_repo(os.path.join(source, spec.namespace), spec.namespace)
return source
def _repoerr(repo, name):
if name == 'cmake':
raise spack.repo.RepoError(repo_err_msg)
else:
return orig_dirname(repo, name)
# Now mock the creation of the required directory structure to cover
# the try-except block
monkeypatch.setattr(spack.store.layout, 'build_packages_path', bpp_path)
spec = spack.spec.Spec('simple-inheritance').concretized()
with tmpdir.as_cwd():
inst.dump_packages(spec, '.')
path = str(tmpdir)
# The call to install_tree will raise the exception since not mocking
# creation of dependency package files within *install* directories.
with pytest.raises(IOError, match=path):
inst.dump_packages(spec, path)
# Now try the error path, which requires the mock directory structure
# above
monkeypatch.setattr(spack.repo.Repo, 'dirname_for_package_name', _repoerr)
with pytest.raises(spack.repo.RepoError, match=repo_err_msg):
inst.dump_packages(spec, path)
out = str(capsys.readouterr()[1])
assert "Couldn't copy in provenance for cmake" in out
def test_check_deps_status_install_failure(install_mockery, monkeypatch):
spec, installer = create_installer('a')
# Make sure the package is identified as failed
monkeypatch.setattr(spack.database.Database, 'prefix_failed', _true)
with pytest.raises(inst.InstallError, match='install failure'):
installer._check_deps_status()
def test_check_deps_status_write_locked(install_mockery, monkeypatch):
spec, installer = create_installer('a')
# Ensure the lock is not acquired
monkeypatch.setattr(inst.PackageInstaller, '_ensure_locked', _not_locked)
with pytest.raises(inst.InstallError, match='write locked by another'):
installer._check_deps_status()
def test_check_deps_status_external(install_mockery, monkeypatch):
spec, installer = create_installer('a')
# Mock the known dependent, b, as external so assumed to be installed
monkeypatch.setattr(spack.spec.Spec, 'external', True)
installer._check_deps_status()
assert 'b' in installer.installed
def test_check_deps_status_upstream(install_mockery, monkeypatch):
spec, installer = create_installer('a')
# Mock the known dependent, b, as installed upstream
monkeypatch.setattr(spack.package.PackageBase, 'installed_upstream', True)
installer._check_deps_status()
assert 'b' in installer.installed
def test_add_bootstrap_compilers(install_mockery, monkeypatch):
"""Test to cover _add_bootstrap_compilers."""
def _pkgs(pkg):
spec = spack.spec.Spec('mpi').concretized()
return [(spec.package, True)]
@@ -325,7 +539,6 @@ def test_installer_init_queue(install_mockery):
def test_install_task_use_cache(install_mockery, monkeypatch):
"""Test _install_task to cover use_cache path."""
spec, installer = create_installer('trivial-install-test-package')
task = create_build_task(spec.package)
@@ -334,6 +547,29 @@ def test_install_task_use_cache(install_mockery, monkeypatch):
assert spec.package.name in installer.installed
def test_install_task_add_compiler(install_mockery, monkeypatch, capfd):
config_msg = 'mock add_compilers_to_config'
def _add(_compilers):
tty.msg(config_msg)
spec, installer = create_installer('a')
task = create_build_task(spec.package)
task.compiler = True
# Preclude any meaningful side-effects
monkeypatch.setattr(spack.package.PackageBase, 'unit_test_check', _true)
monkeypatch.setattr(inst.PackageInstaller, '_setup_install_dir', _noop)
monkeypatch.setattr(spack.build_environment, 'fork', _noop)
monkeypatch.setattr(spack.database.Database, 'add', _noop)
monkeypatch.setattr(spack.compilers, 'add_compilers_to_config', _add)
installer._install_task(task)
out = capfd.readouterr()[0]
assert config_msg in out
def test_release_lock_write_n_exception(install_mockery, tmpdir, capsys):
"""Test _release_lock for supposed write lock with exception."""
spec, installer = create_installer('trivial-install-test-package')
@@ -388,8 +624,36 @@ def _rmtask(installer, pkg_id):
assert len(installer.build_tasks) == 1
def test_cleanup_failed(install_mockery, tmpdir, monkeypatch, capsys):
"""Test to increase coverage of _cleanup_failed."""
def test_setup_install_dir_grp(install_mockery, monkeypatch, capfd):
"""Test _setup_install_dir's group change."""
mock_group = 'mockgroup'
mock_chgrp_msg = 'Changing group for {0} to {1}'
def _get_group(spec):
return mock_group
def _chgrp(path, group):
tty.msg(mock_chgrp_msg.format(path, group))
monkeypatch.setattr(prefs, 'get_package_group', _get_group)
monkeypatch.setattr(fs, 'chgrp', _chgrp)
spec, installer = create_installer('trivial-install-test-package')
fs.touchp(spec.prefix)
metadatadir = spack.store.layout.metadata_path(spec)
# Should fail with a "not a directory" error
with pytest.raises(OSError, match=metadatadir):
installer._setup_install_dir(spec.package)
out = str(capfd.readouterr()[0])
expected_msg = mock_chgrp_msg.format(spec.prefix, mock_group)
assert expected_msg in out
def test_cleanup_failed_err(install_mockery, tmpdir, monkeypatch, capsys):
"""Test _cleanup_failed exception path."""
msg = 'Fake release_write exception'
def _raise_except(lock):
@@ -409,13 +673,14 @@ def _raise_except(lock):
assert msg in out
def test_update_failed_no_mark(install_mockery):
"""Test of _update_failed sans mark and dependent build tasks."""
def test_update_failed_no_dependent_task(install_mockery):
"""Test _update_failed with missing dependent build tasks."""
spec, installer = create_installer('dependent-install')
task = create_build_task(spec.package)
installer._update_failed(task)
assert installer.failed['dependent-install'] is None
for dep in spec.traverse(root=False):
task = create_build_task(dep.package)
installer._update_failed(task, mark=False)
assert installer.failed[task.pkg_id] is None
def test_install_uninstalled_deps(install_mockery, monkeypatch, capsys):
@@ -428,7 +693,7 @@ def test_install_uninstalled_deps(install_mockery, monkeypatch, capsys):
monkeypatch.setattr(inst.PackageInstaller, '_update_failed', _noop)
msg = 'Cannot proceed with dependent-install'
with pytest.raises(spack.installer.InstallError, matches=msg):
with pytest.raises(spack.installer.InstallError, match=msg):
installer.install()
out = str(capsys.readouterr())
@@ -446,7 +711,7 @@ def test_install_failed(install_mockery, monkeypatch, capsys):
monkeypatch.setattr(inst.PackageInstaller, '_install_task', _noop)
msg = 'Installation of b failed'
with pytest.raises(spack.installer.InstallError, matches=msg):
with pytest.raises(spack.installer.InstallError, match=msg):
installer.install()
out = str(capsys.readouterr())
@@ -458,10 +723,6 @@ def test_install_lock_failures(install_mockery, monkeypatch, capfd):
def _requeued(installer, task):
tty.msg('requeued {0}' .format(task.pkg.spec.name))
def _not_locked(installer, lock_type, pkg):
tty.msg('{0} locked {1}' .format(lock_type, pkg.spec.name))
return lock_type, None
spec, installer = create_installer('b')
# Ensure we never acquire a lock
@@ -485,10 +746,6 @@ def test_install_lock_installed_requeue(install_mockery, monkeypatch, capfd):
def _install(installer, task, **kwargs):
tty.msg('{0} installing'.format(task.pkg.spec.name))
def _not_locked(installer, lock_type, pkg):
tty.msg('{0} locked {1}' .format(lock_type, pkg.spec.name))
return lock_type, None
def _prep(installer, task, keep_prefix, keep_stage, restage):
installer.installed.add('b')
tty.msg('{0} is installed' .format(task.pkg.spec.name))
@@ -573,7 +830,16 @@ def _install(installer, task, **kwargs):
spec, installer = create_installer('b')
with pytest.raises(dl.InstallDirectoryAlreadyExistsError, matches=err):
with pytest.raises(dl.InstallDirectoryAlreadyExistsError, match=err):
installer.install()
assert 'b' in installer.installed
def test_install_skip_patch(install_mockery, mock_fetch):
"""Test the path skip_patch install path."""
spec, installer = create_installer('b')
installer.install(fake=False, skip_patch=True)
assert 'b' in installer.installed

View File

@@ -130,3 +130,10 @@ def test_load_modules_from_file(module_path):
foo = llnl.util.lang.load_module_from_file('foo', module_path)
assert foo.value == 1
assert foo.path == os.path.join('/usr', 'bin')
def test_uniq():
assert [1, 2, 3] == llnl.util.lang.uniq([1, 2, 3])
assert [1, 2, 3] == llnl.util.lang.uniq([1, 1, 1, 1, 2, 2, 2, 3, 3])
assert [1, 2, 1] == llnl.util.lang.uniq([1, 1, 1, 1, 2, 2, 2, 1, 1])
assert [] == llnl.util.lang.uniq([])

View File

@@ -1272,7 +1272,7 @@ def test_downgrade_write_fails(tmpdir):
lock = lk.Lock('lockfile')
lock.acquire_read()
msg = 'Cannot downgrade lock from write to read on file: lockfile'
with pytest.raises(lk.LockDowngradeError, matches=msg):
with pytest.raises(lk.LockDowngradeError, match=msg):
lock.downgrade_write_to_read()
@@ -1292,5 +1292,5 @@ def test_upgrade_read_fails(tmpdir):
lock = lk.Lock('lockfile')
lock.acquire_write()
msg = 'Cannot upgrade lock from read to write on file: lockfile'
with pytest.raises(lk.LockUpgradeError, matches=msg):
with pytest.raises(lk.LockUpgradeError, match=msg):
lock.upgrade_read_to_write()

View File

@@ -1,81 +0,0 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from __future__ import print_function
import pytest
from llnl.util.tty.log import log_output
from spack.util.executable import which
def test_log_python_output_with_python_stream(capsys, tmpdir):
# pytest's DontReadFromInput object does not like what we do here, so
# disable capsys or things hang.
with tmpdir.as_cwd():
with capsys.disabled():
with log_output('foo.txt'):
print('logged')
with open('foo.txt') as f:
assert f.read() == 'logged\n'
assert capsys.readouterr() == ('', '')
def test_log_python_output_with_fd_stream(capfd, tmpdir):
with tmpdir.as_cwd():
with log_output('foo.txt'):
print('logged')
with open('foo.txt') as f:
assert f.read() == 'logged\n'
assert capfd.readouterr() == ('', '')
def test_log_python_output_and_echo_output(capfd, tmpdir):
with tmpdir.as_cwd():
with log_output('foo.txt') as logger:
with logger.force_echo():
print('echo')
print('logged')
assert capfd.readouterr() == ('echo\n', '')
with open('foo.txt') as f:
assert f.read() == 'echo\nlogged\n'
@pytest.mark.skipif(not which('echo'), reason="needs echo command")
def test_log_subproc_output(capsys, tmpdir):
echo = which('echo')
# pytest seems to interfere here, so we need to use capsys.disabled()
# TODO: figure out why this is and whether it means we're doing
# something wrong with OUR redirects. Seems like it should work even
# with capsys enabled.
with tmpdir.as_cwd():
with capsys.disabled():
with log_output('foo.txt'):
echo('logged')
with open('foo.txt') as f:
assert f.read() == 'logged\n'
@pytest.mark.skipif(not which('echo'), reason="needs echo command")
def test_log_subproc_and_echo_output(capfd, tmpdir):
echo = which('echo')
with tmpdir.as_cwd():
with log_output('foo.txt') as logger:
with logger.force_echo():
echo('echo')
print('logged')
assert capfd.readouterr() == ('echo\n', '')
with open('foo.txt') as f:
assert f.read() == 'logged\n'

View File

@@ -0,0 +1,442 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from __future__ import print_function
import contextlib
import multiprocessing
import os
import signal
import sys
import time
try:
import termios
except ImportError:
termios = None
import pytest
import llnl.util.tty.log
from llnl.util.lang import uniq
from llnl.util.tty.log import log_output
from llnl.util.tty.pty import PseudoShell
from spack.util.executable import which
@contextlib.contextmanager
def nullcontext():
yield
def test_log_python_output_with_echo(capfd, tmpdir):
with tmpdir.as_cwd():
with log_output('foo.txt', echo=True):
print('logged')
# foo.txt has output
with open('foo.txt') as f:
assert f.read() == 'logged\n'
# output is also echoed.
assert capfd.readouterr()[0] == 'logged\n'
def test_log_python_output_without_echo(capfd, tmpdir):
with tmpdir.as_cwd():
with log_output('foo.txt'):
print('logged')
# foo.txt has output
with open('foo.txt') as f:
assert f.read() == 'logged\n'
# nothing on stdout or stderr
assert capfd.readouterr()[0] == ''
def test_log_python_output_and_echo_output(capfd, tmpdir):
with tmpdir.as_cwd():
# echo two lines
with log_output('foo.txt') as logger:
with logger.force_echo():
print('force echo')
print('logged')
# log file contains everything
with open('foo.txt') as f:
assert f.read() == 'force echo\nlogged\n'
# only force-echo'd stuff is in output
assert capfd.readouterr()[0] == 'force echo\n'
@pytest.mark.skipif(not which('echo'), reason="needs echo command")
def test_log_subproc_and_echo_output_no_capfd(capfd, tmpdir):
echo = which('echo')
# this is split into two tests because capfd interferes with the
# output logged to file when using a subprocess. We test the file
# here, and echoing in test_log_subproc_and_echo_output_capfd below.
with capfd.disabled():
with tmpdir.as_cwd():
with log_output('foo.txt') as logger:
with logger.force_echo():
echo('echo')
print('logged')
with open('foo.txt') as f:
assert f.read() == 'echo\nlogged\n'
@pytest.mark.skipif(not which('echo'), reason="needs echo command")
def test_log_subproc_and_echo_output_capfd(capfd, tmpdir):
echo = which('echo')
# This tests *only* what is echoed when using a subprocess, as capfd
# interferes with the logged data. See
# test_log_subproc_and_echo_output_no_capfd for tests on the logfile.
with tmpdir.as_cwd():
with log_output('foo.txt') as logger:
with logger.force_echo():
echo('echo')
print('logged')
assert capfd.readouterr()[0] == "echo\n"
#
# Tests below use a pseudoterminal to test llnl.util.tty.log
#
def simple_logger(**kwargs):
"""Mock logger (child) process for testing log.keyboard_input."""
def handler(signum, frame):
running[0] = False
signal.signal(signal.SIGUSR1, handler)
log_path = kwargs["log_path"]
running = [True]
with log_output(log_path):
while running[0]:
print("line")
time.sleep(1e-3)
def mock_shell_fg(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.fg()
ctl.status()
ctl.wait_enabled()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_fg_no_termios(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.fg()
ctl.status()
ctl.wait_disabled_fg()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_bg(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.bg()
ctl.status()
ctl.wait_disabled()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_tstp_cont(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.tstp()
ctl.wait_stopped()
ctl.cont()
ctl.wait_running()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_tstp_tstp_cont(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.tstp()
ctl.wait_stopped()
ctl.tstp()
ctl.wait_stopped()
ctl.cont()
ctl.wait_running()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_tstp_tstp_cont_cont(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.tstp()
ctl.wait_stopped()
ctl.tstp()
ctl.wait_stopped()
ctl.cont()
ctl.wait_running()
ctl.cont()
ctl.wait_running()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_bg_fg(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.bg()
ctl.status()
ctl.wait_disabled()
ctl.fg()
ctl.status()
ctl.wait_enabled()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_bg_fg_no_termios(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.bg()
ctl.status()
ctl.wait_disabled()
ctl.fg()
ctl.status()
ctl.wait_disabled_fg()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_fg_bg(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.fg()
ctl.status()
ctl.wait_enabled()
ctl.bg()
ctl.status()
ctl.wait_disabled()
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_fg_bg_no_termios(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background."""
ctl.fg()
ctl.status()
ctl.wait_disabled_fg()
ctl.bg()
ctl.status()
ctl.wait_disabled()
os.kill(proc.pid, signal.SIGUSR1)
@contextlib.contextmanager
def no_termios():
saved = llnl.util.tty.log.termios
llnl.util.tty.log.termios = None
try:
yield
finally:
llnl.util.tty.log.termios = saved
@pytest.mark.skipif(not which("ps"), reason="requires ps utility")
@pytest.mark.skipif(not termios, reason="requires termios support")
@pytest.mark.parametrize('test_fn,termios_on_or_off', [
# tests with termios
(mock_shell_fg, nullcontext),
(mock_shell_bg, nullcontext),
(mock_shell_bg_fg, nullcontext),
(mock_shell_fg_bg, nullcontext),
(mock_shell_tstp_cont, nullcontext),
(mock_shell_tstp_tstp_cont, nullcontext),
(mock_shell_tstp_tstp_cont_cont, nullcontext),
# tests without termios
(mock_shell_fg_no_termios, no_termios),
(mock_shell_bg, no_termios),
(mock_shell_bg_fg_no_termios, no_termios),
(mock_shell_fg_bg_no_termios, no_termios),
(mock_shell_tstp_cont, no_termios),
(mock_shell_tstp_tstp_cont, no_termios),
(mock_shell_tstp_tstp_cont_cont, no_termios),
])
def test_foreground_background(test_fn, termios_on_or_off, tmpdir):
"""Functional tests for foregrounding and backgrounding a logged process.
This ensures that things like SIGTTOU are not raised and that
terminal settings are corrected on foreground/background and on
process stop and start.
"""
shell = PseudoShell(test_fn, simple_logger)
log_path = str(tmpdir.join("log.txt"))
# run the shell test
with termios_on_or_off():
shell.start(log_path=log_path, debug=True)
exitcode = shell.join()
# processes completed successfully
assert exitcode == 0
# assert log was created
assert os.path.exists(log_path)
def synchronized_logger(**kwargs):
"""Mock logger (child) process for testing log.keyboard_input.
This logger synchronizes with the parent process to test that 'v' can
toggle output. It is used in ``test_foreground_background_output`` below.
"""
def handler(signum, frame):
running[0] = False
signal.signal(signal.SIGUSR1, handler)
log_path = kwargs["log_path"]
write_lock = kwargs["write_lock"]
v_lock = kwargs["v_lock"]
running = [True]
sys.stderr.write(os.getcwd() + "\n")
with log_output(log_path) as logger:
with logger.force_echo():
print("forced output")
while running[0]:
with write_lock:
if v_lock.acquire(False): # non-blocking acquire
print("off")
v_lock.release()
else:
print("on") # lock held; v is toggled on
time.sleep(1e-2)
def mock_shell_v_v(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background_output."""
write_lock = kwargs["write_lock"]
v_lock = kwargs["v_lock"]
ctl.fg()
ctl.wait_enabled()
time.sleep(.1)
write_lock.acquire() # suspend writing
v_lock.acquire() # enable v lock
ctl.write(b'v') # toggle v on stdin
time.sleep(.1)
write_lock.release() # resume writing
time.sleep(.1)
write_lock.acquire() # suspend writing
ctl.write(b'v') # toggle v on stdin
time.sleep(.1)
v_lock.release() # disable v lock
write_lock.release() # resume writing
time.sleep(.1)
os.kill(proc.pid, signal.SIGUSR1)
def mock_shell_v_v_no_termios(proc, ctl, **kwargs):
"""PseudoShell master function for test_foreground_background_output."""
write_lock = kwargs["write_lock"]
v_lock = kwargs["v_lock"]
ctl.fg()
ctl.wait_disabled_fg()
time.sleep(.1)
write_lock.acquire() # suspend writing
v_lock.acquire() # enable v lock
ctl.write(b'v\n') # toggle v on stdin
time.sleep(.1)
write_lock.release() # resume writing
time.sleep(.1)
write_lock.acquire() # suspend writing
ctl.write(b'v\n') # toggle v on stdin
time.sleep(.1)
v_lock.release() # disable v lock
write_lock.release() # resume writing
time.sleep(.1)
os.kill(proc.pid, signal.SIGUSR1)
@pytest.mark.skipif(not which("ps"), reason="requires ps utility")
@pytest.mark.skipif(not termios, reason="requires termios support")
@pytest.mark.parametrize('test_fn,termios_on_or_off', [
(mock_shell_v_v, nullcontext),
(mock_shell_v_v_no_termios, no_termios),
])
def test_foreground_background_output(
test_fn, capfd, termios_on_or_off, tmpdir):
"""Tests hitting 'v' toggles output, and that force_echo works."""
shell = PseudoShell(test_fn, synchronized_logger)
log_path = str(tmpdir.join("log.txt"))
# Locks for synchronizing with child
write_lock = multiprocessing.Lock() # must be held by child to write
v_lock = multiprocessing.Lock() # held while master is in v mode
with termios_on_or_off():
shell.start(
write_lock=write_lock,
v_lock=v_lock,
debug=True,
log_path=log_path
)
exitcode = shell.join()
out, err = capfd.readouterr()
print(err) # will be shown if something goes wrong
print(out)
# processes completed successfully
assert exitcode == 0
# split output into lines
output = out.strip().split("\n")
# also get lines of log file
assert os.path.exists(log_path)
with open(log_path) as log:
log = log.read().strip().split("\n")
# The master and child processes coordinate with locks such that the
# child writes "off" when echo is off, and "on" when echo is on. The
# output should contain mostly "on" lines, but may contain an "off"
# or two. This is because the master toggles echo by sending "v" on
# stdin to the child, which is not synchronized with our locks. It's
# good enough for a test, though. We allow at most 2 "off" lines in
# the output to account for the race.
assert (
['forced output', 'on'] == uniq(output) or
output.count("off") <= 2 # if master_fd is a bit slow
)
# log should be off for a while, then on, then off
assert (
['forced output', 'off', 'on', 'off'] == uniq(log) and
log.count("off") > 2 # ensure some "off" lines were omitted
)

View File

@@ -11,6 +11,7 @@
import spack.spec
import spack.modules.tcl
from spack.modules.common import UpstreamModuleIndex
from spack.spec import Spec
import spack.error
@@ -183,3 +184,33 @@ def test_get_module_upstream():
assert m1_path == '/path/to/a'
finally:
spack.modules.common.upstream_module_index = old_index
def test_load_installed_package_not_in_repo(install_mockery, mock_fetch,
monkeypatch):
# Get a basic concrete spec for the trivial install package.
spec = Spec('trivial-install-test-package')
spec.concretize()
assert spec.concrete
# Get the package
pkg = spec.package
def find_nothing(*args):
raise spack.repo.UnknownPackageError(
'Repo package access is disabled for test')
try:
pkg.do_install()
spec._package = None
monkeypatch.setattr(spack.repo, 'get', find_nothing)
with pytest.raises(spack.repo.UnknownPackageError):
spec.package
module_path = spack.modules.common.get_module('tcl', spec, True)
assert module_path
pkg.do_uninstall()
except Exception:
pkg.remove_prefix()
raise

View File

@@ -215,9 +215,10 @@ def test_suffixes(self, module_configuration, factory):
writer, spec = factory('mpileaks+debug arch=x86-linux')
assert 'foo' in writer.layout.use_name
assert 'foo-foo' not in writer.layout.use_name
writer, spec = factory('mpileaks~debug arch=x86-linux')
assert 'bar' in writer.layout.use_name
assert 'bar-foo' in writer.layout.use_name
def test_setup_environment(self, modulefile_content, module_configuration):
"""Tests the internal set-up of run-time environment."""

View File

@@ -16,7 +16,7 @@
from spack.parse import Token
from spack.spec import Spec
from spack.spec import SpecParseError, RedundantSpecError
from spack.spec import AmbiguousHashError, InvalidHashError, NoSuchHashError
from spack.spec import AmbiguousHashError, InvalidHashError
from spack.spec import DuplicateArchitectureError
from spack.spec import DuplicateDependencyError, DuplicateCompilerSpecError
from spack.spec import SpecFilenameError, NoSuchSpecFileError
@@ -363,9 +363,9 @@ def test_nonexistent_hash(self, database):
hashes = [s._hash for s in specs]
assert no_such_hash not in [h[:len(no_such_hash)] for h in hashes]
self._check_raises(NoSuchHashError, [
'/' + no_such_hash,
'mpileaks /' + no_such_hash])
# self._check_raises(NoSuchHashError, [
# '/' + no_such_hash,
# 'mpileaks /' + no_such_hash])
@pytest.mark.db
def test_redundant_spec(self, database):

View File

@@ -32,7 +32,7 @@ def pkg_factory():
def factory(url, urls):
def fn(v):
main_url = url or urls.pop(0)
main_url = url or urls[0]
return spack.url.substitute_version(main_url, v)
return Pkg(

View File

@@ -65,7 +65,7 @@ def environment_modifications_for_spec(spec, view=None):
This list is specific to the location of the spec or its projection in
the view."""
spec = spec.copy()
if view:
if view and not spec.external:
spec.prefix = prefix.Prefix(view.view().get_projection_for_spec(spec))
# generic environment modifications determined by inspecting the spec

View File

@@ -597,12 +597,15 @@ def from_sourcing_file(filename, *arguments, **kwargs):
'SHLVL', '_', 'PWD', 'OLDPWD', 'PS1', 'PS2', 'ENV',
# Environment modules v4
'LOADEDMODULES', '_LMFILES_', 'BASH_FUNC_module()', 'MODULEPATH',
'MODULES_(.*)', r'(\w*)_mod(quar|share)'
'MODULES_(.*)', r'(\w*)_mod(quar|share)',
# Lmod configuration
r'LMOD_(.*)', 'MODULERCFILE'
])
# Compute the environments before and after sourcing
before = sanitize(
dict(os.environ), blacklist=blacklist, whitelist=whitelist
environment_after_sourcing_files(os.devnull, **kwargs),
blacklist=blacklist, whitelist=whitelist
)
file_and_args = (filename,) + arguments
after = sanitize(

View File

@@ -1,7 +1,7 @@
# content of pytest.ini
[pytest]
addopts = --durations=20 -ra
testpaths = .
testpaths = lib/spack/spack/test
python_files = *.py
markers =
db: tests that require creating a DB

View File

@@ -66,7 +66,7 @@ case cd:
[ $#_sp_args -gt 0 ] && set _sp_arg = ($_sp_args[1])
shift _sp_args
if ( "$_sp_arg" == "-h" ) then
if ( "$_sp_arg" == "-h" || "$_sp_args" == "--help" ) then
\spack cd -h
else
cd `\spack location $_sp_arg $_sp_args`
@@ -78,7 +78,7 @@ case env:
set _sp_arg=""
[ $#_sp_args -gt 0 ] && set _sp_arg = ($_sp_args[1])
if ( "$_sp_arg" == "-h" ) then
if ( "$_sp_arg" == "-h" || "$_sp_arg" == "--help" ) then
\spack env -h
else
switch ($_sp_arg)
@@ -86,12 +86,18 @@ case env:
set _sp_env_arg=""
[ $#_sp_args -gt 1 ] && set _sp_env_arg = ($_sp_args[2])
if ( "$_sp_env_arg" == "" || "$_sp_args" =~ "*--sh*" || "$_sp_args" =~ "*--csh*" || "$_sp_args" =~ "*-h*" ) then
# no args or args contain -h/--help, --sh, or --csh: just execute
# Space needed here to differentiate between `-h`
# argument and environments with "-h" in the name.
if ( "$_sp_env_arg" == "" || \
"$_sp_args" =~ "* --sh*" || \
"$_sp_args" =~ "* --csh*" || \
"$_sp_args" =~ "* -h*" || \
"$_sp_args" =~ "* --help*" ) then
# No args or args contain --sh, --csh, or -h/--help: just execute.
\spack $_sp_flags env $_sp_args
else
shift _sp_args # consume 'activate' or 'deactivate'
# actual call to activate: source the output
# Actual call to activate: source the output.
eval `\spack $_sp_flags env activate --csh $_sp_args`
endif
breaksw
@@ -99,30 +105,40 @@ case env:
set _sp_env_arg=""
[ $#_sp_args -gt 1 ] && set _sp_env_arg = ($_sp_args[2])
if ( "$_sp_env_arg" != "" ) then
# with args: execute the command
# Space needed here to differentiate between `--sh`
# argument and environments with "--sh" in the name.
if ( "$_sp_args" =~ "* --sh*" || \
"$_sp_args" =~ "* --csh*" ) then
# Args contain --sh or --csh: just execute.
\spack $_sp_flags env $_sp_args
else if ( "$_sp_env_arg" != "" ) then
# Any other arguments are an error or -h/--help: just run help.
\spack $_sp_flags env deactivate -h
else
# no args: source the output
# No args: source the output of the command.
eval `\spack $_sp_flags env deactivate --csh`
endif
breaksw
default:
echo default
\spack $_sp_flags env $_sp_args
breaksw
endsw
endif
breaksw
case load:
case unload:
# Space in `-h` portion is important for differentiating -h option
# from variants that begin with "h" or packages with "-h" in name
if ( "$_sp_spec" =~ "*--sh*" || "$_sp_spec" =~ "*--csh*" || \
" $_sp_spec" =~ "* -h*" || "$_sp_spec" =~ "*--help*") then
# IF a shell is given, print shell output
# Get --sh, --csh, -h, or --help arguments.
# Space needed here to differentiate between `-h`
# argument and specs with "-h" in the name.
if ( " $_sp_spec" =~ "* --sh*" || \
" $_sp_spec" =~ "* --csh*" || \
" $_sp_spec" =~ "* -h*" || \
" $_sp_spec" =~ "* --help*") then
# Args contain --sh, --csh, or -h/--help: just execute.
\spack $_sp_flags $_sp_subcommand $_sp_spec
else
# otherwise eval with csh
# Otherwise, eval with csh.
eval `\spack $_sp_flags $_sp_subcommand --csh $_sp_spec || \
echo "exit 1"`
endif

View File

@@ -37,16 +37,12 @@ bin/spack -h
bin/spack help -a
# Profile and print top 20 lines for a simple call to spack spec
bin/spack -p --lines 20 spec mpileaks%gcc ^elfutils@0.170
spack -p --lines 20 spec mpileaks%gcc ^elfutils@0.170
#-----------------------------------------------------------
# Run unit tests with code coverage
#-----------------------------------------------------------
extra_args=""
if [[ -n "$@" ]]; then
extra_args="-k $@"
fi
$coverage_run bin/spack test -x --verbose "$extra_args"
$coverage_run $(which spack) test -x --verbose
#-----------------------------------------------------------
# Run tests for setup-env.sh

View File

@@ -115,31 +115,44 @@ spack() {
else
case $_sp_arg in
activate)
_a="$@"
# Get --sh, --csh, or -h/--help arguments.
# Space needed here because regexes start with a space
# and `-h` may be the only argument.
_a=" $@"
# Space needed here to differentiate between `-h`
# argument and environments with "-h" in the name.
# Also see: https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html#Shell-Parameter-Expansion
if [ -z ${1+x} ] || \
[ "${_a#*--sh}" != "$_a" ] || \
[ "${_a#*--csh}" != "$_a" ] || \
[ "${_a#*-h}" != "$_a" ];
[ "${_a#* --sh}" != "$_a" ] || \
[ "${_a#* --csh}" != "$_a" ] || \
[ "${_a#* -h}" != "$_a" ] || \
[ "${_a#* --help}" != "$_a" ];
then
# no args or args contain -h/--help, --sh, or --csh: just execute
# No args or args contain --sh, --csh, or -h/--help: just execute.
command spack env activate "$@"
else
# actual call to activate: source the output
# Actual call to activate: source the output.
eval $(command spack $_sp_flags env activate --sh "$@")
fi
;;
deactivate)
_a="$@"
if [ "${_a#*--sh}" != "$_a" ] || \
[ "${_a#*--csh}" != "$_a" ];
# Get --sh, --csh, or -h/--help arguments.
# Space needed here because regexes start with a space
# and `-h` may be the only argument.
_a=" $@"
# Space needed here to differentiate between `--sh`
# argument and environments with "--sh" in the name.
# Also see: https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html#Shell-Parameter-Expansion
if [ "${_a#* --sh}" != "$_a" ] || \
[ "${_a#* --csh}" != "$_a" ];
then
# just execute the command if --sh or --csh are provided
# Args contain --sh or --csh: just execute.
command spack env deactivate "$@"
elif [ -n "$*" ]; then
# any other arguments are an error or help, so just run help
# Any other arguments are an error or -h/--help: just run help.
command spack env deactivate -h
else
# no args: source the output of the command
# No args: source the output of the command.
eval $(command spack $_sp_flags env deactivate --sh)
fi
;;
@@ -151,17 +164,19 @@ spack() {
return
;;
"load"|"unload")
# get --sh, --csh, --help, or -h arguments
# space is important for -h case to differentiate between `-h`
# argument and specs with "-h" in package name or variant settings
# Get --sh, --csh, -h, or --help arguments.
# Space needed here because regexes start with a space
# and `-h` may be the only argument.
_a=" $@"
# Space needed here to differentiate between `-h`
# argument and specs with "-h" in the name.
# Also see: https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html#Shell-Parameter-Expansion
if [ "${_a#* --sh}" != "$_a" ] || \
[ "${_a#* --csh}" != "$_a" ] || \
[ "${_a#* -h}" != "$_a" ] || \
[ "${_a#* --help}" != "$_a" ];
then
# just execute the command if --sh or --csh are provided
# or if the -h or --help arguments are provided
# Args contain --sh, --csh, or -h/--help: just execute.
command spack $_sp_flags $_sp_subcommand "$@"
else
eval $(command spack $_sp_flags $_sp_subcommand --sh "$@" || \

View File

@@ -945,7 +945,7 @@ _spack_info() {
_spack_install() {
if $list_options
then
SPACK_COMPREPLY="-h --help --only -u --until -j --jobs --overwrite --keep-prefix --keep-stage --dont-restage --use-cache --no-cache --cache-only --no-check-signature --show-log-on-error --source -n --no-checksum -v --verbose --fake --only-concrete -f --file --clean --dirty --test --run-tests --log-format --log-file --help-cdash --cdash-upload-url --cdash-build --cdash-site --cdash-track --cdash-buildstamp -y --yes-to-all"
SPACK_COMPREPLY="-h --help --only -u --until -j --jobs --overwrite --keep-prefix --keep-stage --dont-restage --use-cache --no-cache --cache-only --no-check-signature --show-log-on-error --source -n --no-checksum -v --verbose --fake --only-concrete -f --file --upstream -g --global --clean --dirty --test --run-tests --log-format --log-file --help-cdash --cdash-upload-url --cdash-build --cdash-site --cdash-track --cdash-buildstamp -y --yes-to-all"
else
_all_packages
fi
@@ -1419,7 +1419,7 @@ _spack_test() {
_spack_uninstall() {
if $list_options
then
SPACK_COMPREPLY="-h --help -f --force -R --dependents -y --yes-to-all -a --all"
SPACK_COMPREPLY="-h --help -f --force -R --dependents -y --yes-to-all -a --all -u --upstream -g --global"
else
_installed_packages
fi

View File

@@ -33,6 +33,7 @@ class SuiteSparse(Package):
depends_on('blas')
depends_on('lapack')
depends_on('m4', type='build', when='@5.0.0:')
depends_on('cmake', when='@5.2.0:', type='build')
depends_on('metis@5.1.0', when='@4.5.1:')
@@ -63,7 +64,6 @@ def install(self, spec, prefix):
pic_flag = self.compiler.pic_flag if '+pic' in spec else ''
make_args = [
'INSTALL=%s' % prefix,
# By default, the Makefile uses the Intel compilers if
# they are found. The AUTOCC flag disables this behavior,
# forcing it to use Spack's compiler wrappers.
@@ -134,6 +134,7 @@ def install(self, spec, prefix):
self.spec.version <= Version('5.6.0')):
make('default', *make_args)
make_args.append('INSTALL=%s' % prefix)
make('install', *make_args)
@property