- Clean up error messages for when a lock can't be created, or when an
exclusive (write) lock can't be taken on a file.
- Add a number of subclasses of LockError to distinguish timeouts from
permission issues (see the sketch after this list).
- Add an explicit check to prevent the user from taking a write lock on a
read-only file.
- There was already a check for this when we try to *upgrade* a lock on an RO
file, but not for an initial write lock attempt.
- Add more tests for different lock permission scenarios.
- write locks previously wrote information about the lock holder (host
and pid), and read locks would read this in.
- This is really only useful for debugging, so it is now only enabled in debug mode
- add some tests that target debug info, and improve multiproc lock test
output
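A minimal sketch of the kind of error hierarchy this adds (class names and messages here are illustrative, not necessarily the exact ones):
```python
class LockError(Exception):
    """Base class for all lock-related failures."""


class LockTimeoutError(LockError):
    """Raised when a lock cannot be acquired within the timeout."""


class LockPermissionError(LockError):
    """Base class for permission-related lock failures."""


class LockROFileError(LockPermissionError):
    """Raised when a write lock is requested on a read-only file."""
    def __init__(self, path):
        msg = "Can't take write lock on read-only file: %s" % path
        super(LockROFileError, self).__init__(msg)
```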
Functional updates:
- `python` now creates a copy of the `python` binaries when it is added
to a view
- Python extensions (packages which subclass `PythonPackage`) rewrite
their shebang lines to refer to python in the view
- Python packages in the same namespace will not generate conflicts if
both have `...lib/site-packages/namespace-example/__init__.py`
- These `__init__` files will also remain when removing any package in
the namespace until the last package in the namespace is removed
Generally (Updated 2/16):
- Any package can define `add_files_to_view` to customize how it is added
to a view (and at the moment custom definitions are included for
`python` and `PythonPackage`); see the sketch after this list
- Likewise any package can define `remove_files_from_view` to customize
which files are removed (e.g. you don't always want to remove the
namespace `__init__`)
- Any package can define `view_file_conflicts` to customize what it
considers a merge conflict
- Global activations are handled like views (where the view root is the
spec prefix of the extendee)
- Benefit: filesystem-management aspects of activating extensions are
now placed in views (e.g. now one can hardlink a global activation)
- Benefit: overriding `Package.activate` is more straightforward (see
`Python.activate`)
- Complication: extension packages which have special-purpose logic
*only* when activated outside of the extendee prefix must check for
this in their `add_files_to_view` method (see `PythonPackage`)
- `LinkTree` is refactored to have separate methods for copying a
directory structure and for copying files (since it was found that
packages generally may want to alter how files are copied but still
want to copy directories in the same way)
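To make the hooks above concrete, here is a rough sketch of how a namespaced Python package might override them. The hook signatures (the view plus a source-to-destination merge map) and the `view.remove_file` call reflect my reading of the new interface; the `_namespace_still_in_use` helper is purely hypothetical.
```python
import os
import shutil

from spack import *  # provides PythonPackage in a package recipe


class PyNamespaceExample(PythonPackage):
    """Hypothetical package living in a shared Python namespace."""

    def add_files_to_view(self, view, merge_map):
        for src, dst in merge_map.items():
            # Don't treat a namespace __init__.py that another package
            # already provided as a conflict; keep the existing one.
            if os.path.exists(dst) and dst.endswith('__init__.py'):
                continue
            shutil.copy2(src, dst)

    def remove_files_from_view(self, view, merge_map):
        for src, dst in merge_map.items():
            # Keep the shared namespace __init__.py around until the last
            # package in the namespace is removed.
            if dst.endswith('__init__.py') and self._namespace_still_in_use(view):
                continue
            view.remove_file(src, dst)
```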
TODOs (updated 2/20):
- [x] additional testing (there is some unit testing added at this point
but more would be useful)
- [x] refactor or reorganize `LinkTree` methods: currently there is a
separate set of methods for replicating just the directory structure
without the files, and a set for replicating everything
- [x] Right now external views (i.e. those not used for global
activations) call `view.add_extension`, but global activations do not,
to avoid some extra work that goes into maintaining external views. I'm
not sure whether addressing that needs to be done here, but I'd like to
clarify it in the comments (UPDATE: for now I have added a TODO and in
my opinion this can be merged now and the refactor handled later)
- [x] Several method descriptions (e.g. for `Package.activate`) are out
of date and reference a distinction between global activations and
views; they need to be updated
- [x] Update aspell package activations
- Spack was assuming that a group with gid == current uid would always exist.
- This was breaking the Travis build for macOS.
- also fix an issue with the DB tarball test finding coverage files
- spack.util.lock behaves the same as llnl.util.lock, but Lock._lock and
Lock._unlock do nothing (see the sketch after this list).
- can be disabled with a control variable.
- configuration options can enable/disable locking:
- `locks` option in spack configuration controls whether Spack will use filesystem locks or not.
- `-l` and `-L` command-line options can force-disable or force-enable locking.
- Spack will check for group- and world-writability before disabling
locks, and it will not allow a group- or world-writable instance to
have locks disabled.
- update documentation
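A minimal sketch of the `spack.util.lock` idea (the `_enable_locks` flag stands in for whatever control variable the configuration and command-line flags actually drive):
```python
import llnl.util.lock

# Stand-in for the control variable set from the `locks` config option
# and the -l / -L command-line flags.
_enable_locks = True


class Lock(llnl.util.lock.Lock):
    """Behaves like llnl.util.lock.Lock, but does nothing when locking is disabled."""

    def _lock(self, op, *args, **kwargs):
        if _enable_locks:
            return super(Lock, self)._lock(op, *args, **kwargs)

    def _unlock(self, *args, **kwargs):
        if _enable_locks:
            return super(Lock, self)._unlock(*args, **kwargs)
```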
- simplify the singleton pattern across the codebase (see the sketch after this list)
- reduce lines of code needed for crufty initialization
- reduce functions that need to mess with a global
- Singletons whose semantics changed:
- spack.store.store() -> spack.store
- spack.repo.path() -> spack.repo.path
- spack.config.config() -> spack.config.config
- spack.caches.fetch_cache() -> spack.caches.fetch_cache
- spack.caches.misc_cache() -> spack.caches.misc_cache
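The pattern behind these is a lazy proxy; a simplified sketch (the real helper is richer than this):
```python
class Singleton(object):
    """Lazily build a wrapped object and forward attribute access to it."""

    def __init__(self, factory):
        self.factory = factory
        self._instance = None

    @property
    def instance(self):
        if self._instance is None:
            self._instance = self.factory()
        return self._instance

    def __getattr__(self, name):
        # Called only for attributes not found on the proxy itself.
        return getattr(self.instance, name)


# A module can then expose, e.g. (factory name illustrative):
#     config = Singleton(_make_default_configuration)
# and callers write `spack.config.config` instead of calling `spack.config.config()`.
```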
* Added installation date and time to the database
Information on the date and time of installation of a spec is recorded
into the database. The information is retained on reindexing.
* Expose the possibility to query for installation date
The DB can now be queried for specs that have been installed in a given
time window. This query possibility is exposed to command line via two
new options of the `find` command.
* Extended docstring for Database._add
* Use timestamps since the epoch instead of formatted date in the DB
* Allow 'pretty date' formats from command line
* Substituted kwargs with explicit arguments
* Simplified regex for pretty date strings. Added unit tests.
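A rough sketch of the pretty-date handling (the accepted formats, function name, and regex here are illustrative rather than the exact implementation):
```python
import re
import time
from datetime import datetime, timedelta


def pretty_date_to_timestamp(text, now=None):
    """Turn '3 days ago' or '2017-02-20' into seconds since the epoch."""
    now = now or datetime.now()
    match = re.match(r'^\s*(\d+)\s+(second|minute|hour|day|week)s?\s+ago\s*$', text)
    if match:
        value, unit = int(match.group(1)), match.group(2)
        then = now - timedelta(**{unit + 's': value})
    else:
        then = datetime.strptime(text, '%Y-%m-%d')
    return time.mktime(then.timetuple())


# `spack find` can then compare this against the installation timestamps
# stored (as seconds since the epoch) in the database.
```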
This updates the fix_darwin_install_name function to use the Spack
Executable object to run install_name_tool, which ensures that
process output is formatted as a 'str' for python2 and python3.
Originally fix_darwin_install_name was invoking subprocess.Popen
directly.
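Roughly, the new shape of the function is the following (simplified; the otool output parsing and the `-change` calls are elided):
```python
import glob
import os

from spack.util.executable import Executable


def fix_darwin_install_name(path):
    # Executable runs the tool and returns decoded 'str' output on both
    # python2 and python3, unlike a raw subprocess.Popen call.
    install_name_tool = Executable('install_name_tool')
    otool = Executable('otool')
    for lib in glob.glob(os.path.join(path, '*.dylib')):
        install_name_tool('-id', lib, lib)
        deps = otool('-L', lib, output=str).split('\n')
        # ... rewrite dependent install names with install_name_tool('-change', ...)
```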
Following the discussion with Todd and Adam, find has been modified to
accept glob expressions. This should not affect performance as every
glob implementation I inspected has 3 cases (no wildcard, wildcard but
no directories involved, wildcard and directories involved) and uses
fnmatch underneath.
Mixins have been changed to perform a non-recursive search by default (a
recursive search can still be triggered using the recursive keyword).
Following comments from Todd:
- the call to tty.debug has been moved deeper, to log the filtering of each file
- the shadowing on the name "kwargs" is avoided
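For reference, the matching itself boils down to fnmatch; a simplified sketch of a glob-aware find (names here are illustrative, not the actual implementation):
```python
import fnmatch
import os


def find(root, pattern, recursive=True):
    """Return paths under ``root`` whose basename matches the glob ``pattern``."""
    matches = []
    if recursive:
        for dirpath, _, filenames in os.walk(root):
            matches.extend(os.path.join(dirpath, f)
                           for f in filenames if fnmatch.fnmatch(f, pattern))
    else:
        for f in os.listdir(root):
            if fnmatch.fnmatch(f, pattern):
                matches.append(os.path.join(root, f))
    return matches


# e.g. find(prefix.lib, 'libpython*.so', recursive=False)
```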
- command reference now includes usage for all Spack commands as output
by `spack help`. Each command usage links to any related section in
the docs.
- added `spack commands` command which can list command names,
subcommands, and generate RST docs for commands.
- added `llnl.util.argparsewriter`, which analyzes an argparse parser and
calls hooks for description, usage, options, and subcommands
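A rough sketch of how such a writer can walk an argparse parser and call hooks (this leans on argparse internals and is simplified relative to `llnl.util.argparsewriter`):
```python
import argparse


class ArgparseWriter(object):
    """Recursively visit a parser and its subcommands, calling hooks for each."""

    def write(self, parser, prog='spack'):
        self.begin_command(prog, parser.format_usage(), parser.description)
        for action in parser._actions:
            if isinstance(action, argparse._SubParsersAction):
                for name, subparser in sorted(action.choices.items()):
                    self.write(subparser, '%s %s' % (prog, name))

    def begin_command(self, prog, usage, description):
        # Subclasses override hooks like this one to emit RST, usage text, etc.
        print(usage)
```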
'spack install' can now reinstall a spec even if it has dependents, via
the --overwrite option. This option moves the current installation to a
temporary directory. If the reinstallation is successful, the temporary
copy is removed; otherwise a rollback is performed.
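The rollback logic is roughly of this shape (a simplified sketch, not the actual implementation):
```python
import os
import shutil
import tempfile


def overwrite_install(spec, do_install):
    """Reinstall ``spec`` in place, restoring the old prefix on failure."""
    prefix = spec.prefix
    backup = os.path.join(tempfile.mkdtemp(), os.path.basename(prefix))
    shutil.move(prefix, backup)
    try:
        do_install(spec)
    except Exception:
        # Roll back: discard whatever was partially installed and restore.
        if os.path.exists(prefix):
            shutil.rmtree(prefix)
        shutil.move(backup, prefix)
        raise
    else:
        shutil.rmtree(os.path.dirname(backup))
```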
- When you don't use wildcard imports, flake8 will find places where you used an
undefined name.
- This commit has all the bugfixes resulting from this static check.
I'm tracking down a problem with the perl package that's been
generating this error:
```
OSError: OSError: [Errno 2] No such file or directory: '/blah/blah/blah/lib/5.24.1/x86_64-linux/Config.pm~'
```
The real problem is upstream, but it's being masked by an exception
raised in `filter_file`'s `finally` block.
In my case, `backup` is `False`.
The backup is created around line 127, the `re.sub()` call
fails (working on that), the `except` block fires and moves the backup
file back, and then the `finally` block tries to remove the now-nonexistent
backup file.
This change just avoids trying to remove the non-existent file.
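Concretely, the guard looks something like this (a simplified sketch of `filter_file`'s error handling, not the real code):
```python
import os
import re
import shutil


def filter_file(regex, repl, filename, backup=False):
    backup_filename = filename + '~'
    shutil.copy(filename, backup_filename)
    try:
        with open(backup_filename) as src, open(filename, 'w') as dst:
            for line in src:
                dst.write(re.sub(regex, repl, line))
    except BaseException:
        # Restore the original; after this the backup no longer exists.
        shutil.move(backup_filename, filename)
        raise
    finally:
        # The fix: only try to remove the backup if it is still there.
        if not backup and os.path.exists(backup_filename):
            os.remove(backup_filename)
```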
- Python I/O would not properly interleave (or appear) with output from
subcommands.
- Add a flushing wrapper around sys.stdout and sys.stderr when
redirecting, so that Python output is synchronous with that of
subcommands.
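A minimal sketch of the flushing wrapper idea:
```python
import sys


class FlushingWriter(object):
    """File-like wrapper that flushes the underlying stream after every write."""

    def __init__(self, stream):
        self.stream = stream

    def write(self, data):
        self.stream.write(data)
        self.stream.flush()

    def __getattr__(self, name):
        # Delegate everything else (fileno, isatty, ...) to the wrapped stream.
        return getattr(self.stream, name)


# While output is being redirected:
#     sys.stdout = FlushingWriter(sys.stdout)
#     sys.stderr = FlushingWriter(sys.stderr)
```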
- 'v' toggle was previously only good for the current install.
- subsequent installs needed user to press 'v' again.
- 'v' state is now preserved across dependency installs.
- Simplify interface to log_output. New interface requires only one
context handler instead of two. Before:
    with log_output('logfile.txt') as log_redirection:
        with log_redirection:
            # do things ... output will be logged
After:
    with log_output('logfile.txt'):
        # do things ... output will be logged
If you also want the output to be echoed to ``stdout``, use the
`echo` parameter::
    with log_output('logfile.txt', echo=True):
        # do things ... output will be logged and printed out
And, if you just want to echo *some* stuff from the parent, use
``force_echo``:
    with log_output('logfile.txt', echo=False) as logger:
        # do things ... output will be logged
        with logger.force_echo():
            # things here will be echoed *and* logged
A key difference between this and the previous implementation is that
*everything* in the context handler is logged. Previously, things like
`Executing phase 'configure'` would not be logged, only output to the
screen, so understanding phases in the build log was difficult.
- The implementation of `log_output()` is different in two major ways:
1. This implementation avoids race conditions by using only one pipe (before
we had a multiprocessing pipe and a unix pipe). The logger daemon
stops naturally when the input stream is closed, which avoids a race
in the previous implementation where we'd miss some lines of output
because the parent would shut the daemon down before it was done
with all output.
2. Instead of turning output redirection on and off, which prevented
some things from being logged, this version uses control characters
in the output stream to enable/disable forced echoing. We're using
the time-honored xon and xoff codes, which tell the daemon to echo
anything between them AND write it to the log. This is how
`logger.force_echo()` works (see the sketch after this list).
- Fix places where output could get stuck in buffers by flushing more
aggressively. This makes the output printed to the terminal the same
as that which would be printed through a pipe to `cat` or to a file.
Previously these could be weirdly different, and some output would be
missing when redirecting Spack to a file or pipe.
- Simplify input and color handling in both `build_environment.fork()`
and `llnl.util.tty.log.log_output()`. Neither requires an input_stream
parameter anymore; we assume stdin will be forwarded if possible.
- remove `llnl.util.lang.duplicate_stream()` and the associated
monkey-patching in tests, as these aren't needed if you just check
whether stdin is a tty and has a fileno attribute.
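A rough sketch of the xon/xoff mechanism from point 2 above (writer side only; the daemon side simply toggles echoing when it sees these bytes in the stream):
```python
import sys
from contextlib import contextmanager

xon, xoff = '\x11', '\x13'  # ASCII DC1/DC3, the traditional flow-control codes


@contextmanager
def force_echo():
    """Everything written inside this block is echoed *and* logged."""
    sys.stdout.write(xon)
    sys.stdout.flush()
    try:
        yield
    finally:
        sys.stdout.write(xoff)
        sys.stdout.flush()
```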
* Disable spec colorization when redirecting stdout and add command line flag to re-enable
* Add command line `--color` flag to control output colorization
* Add options to `llnl.util.tty.color` to allow color to be auto/always/never
* Add `Spec.cformat()` function to be used when `format()` should have auto-coloring
- Lock test can be run either as a node-local test or as an MPI test.
- Lock test is now parametrized by filesystem, so you can test the
locking capabilities of your NFS, Lustre, or GPFS filesystem. See docs
for details.
PR #3367 inadvertently changed the semantics of _find_recursive and
_find_non_recursive so that the returned lists were no longer ordered the
same as the input search list. This commit restores the original semantics
and adds tests to verify it.
Modifications:
- added support for multi-valued variants (see the sketch after this list)
- refactored code related to variants into variant.py
- added new generic features to AutotoolsPackage that leverage multi-valued variants
- modified openmpi to use new features
- added unit tests for the new semantics
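As a sketch, a multi-valued variant in a package recipe looks roughly like this (the variant name, values, and configure flag are illustrative rather than the exact openmpi changes):
```python
from spack import *  # provides AutotoolsPackage and the variant directive


class Openmpi(AutotoolsPackage):
    # Multiple values can be active at once,
    # e.g. `spack install openmpi schedulers=slurm,tm`.
    variant(
        'schedulers',
        values=('slurm', 'sge', 'tm'),
        default='slurm',
        multi=True,
        description='Schedulers for which to enable support'
    )

    def configure_args(self):
        args = []
        if 'schedulers=slurm' in self.spec:
            args.append('--with-slurm')
        return args
```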
## Motivation
Python installations are both important and unfortunately inconsistent. Depending on the Python version, OS, and the strength of the Earth's magnetic field when it was installed, the name of the Python executable, directory containing its libraries, library names, and the directory containing its headers can vary drastically.
I originally got into this mess with #3274, where I discovered that Boost could not be built with Python 3 because the executable is called `python3` and we were telling it to use `python`. I got deeper into this mess when I started hacking on #3140, where I discovered just how difficult it is to find the location and name of the Python libraries and headers.
Currently, half of the packages that depend on Python and need to know this information jump through hoops to determine the correct information. The other half are hard-coded to use `python`, `spec['python'].prefix.lib`, and `spec['python'].prefix.include`. Obviously, none of these packages would work for Python 3, and there's no reason to duplicate the effort. The Python package itself should contain all of the information necessary to use it properly. This is in line with the recent work by @alalazo and @davydden with respect to `spec['blas'].libs` and friends.
## Prefix
For most packages in Spack, we assume that the installation directory is `spec['python'].prefix`. This generally works for anything installed with Spack, but gets complicated when we include external packages. Python is a commonly used external package (it needs to be installed just to run Spack). If it was installed with Homebrew, `which python` would return `/usr/local/bin/python`, and most users would erroneously assume that `/usr/local` is the installation directory. If you peruse through #2173, you'll immediately see why this is not the case. Homebrew actually installs Python in `/usr/local/Cellar/python/2.7.12_2` and symlinks the executable to `/usr/local/bin/python`. `PYTHONHOME` (and presumably most things that need to know where Python is installed) needs to be set to the actual installation directory, not `/usr/local`.
Normally I would say, "sounds like user error, make sure to use the real installation directory in your `packages.yaml`". But I think we can make a special case for Python. That's what we decided in #2173 anyway. If we change our minds, I would be more than happy to simplify things.
To solve this problem, I created a `spec['python'].home` attribute that works the same way as `spec['python'].prefix` but queries Python to figure out where it was actually installed. @tgamblin Is there any way to overwrite `spec['python'].prefix`? I think it's currently immutable.
## Command
In general, Python 2 comes with both `python` and `python2` commands, while Python 3 only comes with a `python3` command. But this is up to the OS developers. For example, `/usr/bin/python` on Gentoo is actually Python 3. Worse yet, if someone is using an externally installed Python, all 3 commands may exist in the same directory! Here's what I'm thinking:
1. If the spec is for Python 3, try searching for the `python3` command.
2. If the spec is for Python 2, try searching for the `python2` command.
3. If neither is found, try searching for the `python` command.
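As a sketch, that lookup could live on the Python package itself (the `command` property name and the exact fallback behavior are what I have in mind, not a settled API):
```python
import os

from spack import *  # provides AutotoolsPackage in a package recipe
from spack.util.executable import Executable


class Python(AutotoolsPackage):
    @property
    def command(self):
        # Prefer the versioned name (python3 or python2), then fall back
        # to the plain `python` command.
        for name in ('python{0}'.format(self.version.up_to(1)), 'python'):
            path = os.path.join(self.prefix.bin, name)
            if os.path.exists(path):
                return Executable(path)
        raise RuntimeError('cannot locate a python command in %s' % self.prefix.bin)
```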
## Libraries
Spack installs Python libraries in `spec['python'].prefix.lib`. Except on openSUSE 13, where it installs to `spec['python'].prefix.lib64` (see #2295 and #2253). On my CentOS 6 machine, the Python libraries are installed in `/usr/lib64`. Both need to work.
The libraries themselves change name depending on OS and Python version. For Python 2.7 on macOS, I'm seeing:
```
lib/libpython2.7.dylib
```
For Python 3.6 on CentOS 6, I'm seeing:
```
lib/libpython3.so
lib/libpython3.6m.so.1.0
lib/libpython3.6m.so -> lib/libpython3.6m.so.1.0
```
Notice the `m` after the version number. Yeah, that's a thing.
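With `find_libraries`, a `libs` property can hide all of these differences; roughly (the wildcard pattern and search order are assumptions, and `self.home` refers to the real installation directory discussed in the Prefix section):
```python
from llnl.util.filesystem import find_libraries
from spack import *


class Python(AutotoolsPackage):
    @property
    def libs(self):
        # Names vary by OS and version (libpython2.7.dylib, libpython3.6m.so,
        # ...), and they may live in lib or lib64, so search recursively from
        # the real installation directory and prefer shared libraries.
        for shared in (True, False):
            libs = find_libraries('libpython*', root=self.home,
                                  shared=shared, recursive=True)
            if libs:
                return libs
        raise RuntimeError('Unable to locate the python libraries')
```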
## Headers
In Python 2.7, I'm seeing:
```
include/python2.7/pyconfig.h
```
In Python 3.6, I'm seeing:
```
include/python3.6m/pyconfig.h
```
It looks like all Python 3 installations have this `m`. Tested with Python 3.2 and 3.6 on macOS and CentOS 6.
Spack has really nice support for libraries (`find_libraries` and `LibraryList`), but nothing for headers. Fixed.
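So headers get the same treatment as libraries; a sketch of both the provider side and how a dependent package would consume it (assuming the new `find_headers` helper and resulting header list mirror their library counterparts):
```python
from llnl.util.filesystem import find_headers
from spack import *


class Python(AutotoolsPackage):
    @property
    def headers(self):
        # pyconfig.h lives in include/python2.7, include/python3.6m, ...,
        # so search recursively instead of hard-coding the directory name.
        headers = find_headers('pyconfig', self.prefix.include, recursive=True)
        if headers:
            return headers
        raise RuntimeError('Unable to locate the python headers')


# A dependent package could then use, e.g.:
#     spec['python'].headers.directories   # include directories
#     spec['python'].headers.cpp_flags     # ready-made -I flags
```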