Commit Graph

390 Commits

Author SHA1 Message Date
Massimiliano Culpo
ff04d1bfc1
Use the non-deprecated MetaPathFinder interface (#29745)
* Extract the MetaPathFinder and Loaders for packages in their own classes

https://peps.python.org/pep-0451/

Currently, RepoPath and Repo implement the (deprecated) interface of
MetaPathFinder (find_module) and of Loader (load_module). This commit
extracts both of them and places the code in their own classes.

The MetaPathFinder interface is updated to provide both the deprecated
"find_module" (for Python 2.7 support) and the recommended "find_spec"
(see the sketch after this list). Updating the Loader interface is
deferred to a subsequent commit.

* Move the lines to be prepended inside "RepoLoader"

Also adjust the naming of a few variables

* Remove spack.util.imp, since code is only used in spack.repo

* Remove support for loading Python modules on Python >= 3 but < 3.5

* Remove `Repo._create_namespace`

This function was interacting badly with the MetaPathFinder
and causing issues with "normal" imports. Removing the
function allows doing things like:
```python
import spack.pkg.builtin.mpich
cls = spack.pkg.builtin.mpich.Mpich
```

* Remove code needed to trigger the Singleton evaluation

The finder is coded in a way that triggers the Singleton,
so we don't need external code now that we register it
at module level in `sys.meta_path`.

* Add unit tests
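
For illustration, a minimal PEP 451-style finder registered in `sys.meta_path` could look like the sketch below. This is not Spack's actual finder/loader code; the `toy.pkg` namespace and class names are made up for the example.

```python
import importlib.util
import sys
from importlib.abc import Loader, MetaPathFinder


class EmptyModuleLoader(Loader):
    """Toy loader that produces empty modules for names under our prefix."""

    def create_module(self, spec):
        return None  # fall back to the default module creation

    def exec_module(self, module):
        pass  # nothing to execute in this toy example


class ToyRepoFinder(MetaPathFinder):
    """Serve any module whose name lives under the made-up 'toy' namespace."""

    def find_spec(self, fullname, path=None, target=None):
        if fullname == "toy" or fullname.startswith("toy."):
            # is_package=True so submodules like 'toy.pkg.example' can be imported
            return importlib.util.spec_from_loader(
                fullname, EmptyModuleLoader(), is_package=True
            )
        return None  # let other finders handle everything else


# Registered once at module level, as the commit describes.
sys.meta_path.append(ToyRepoFinder())

import toy.pkg.example  # resolved by ToyRepoFinder, not the filesystem
```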
2022-04-07 15:58:20 -07:00
Harmen Stoppels
558f7f007e
link_tree.py: format conflict error message (#29870)
Show each `[src a] and [src b] both project to [dst]` on a separate
line.
2022-04-06 07:27:02 +02:00
Massimiliano Culpo
f2fc4ee9af
Allow conditional possible values in variants (#29530)
Allow declaring possible values for variants with an associated condition. If the variant takes one of those values, the condition is imposed as a further constraint.

The idea of this PR is to implement part of the mechanisms needed for modeling [packages with multiple build-systems]( https://github.com/spack/seps/pull/3). After this PR the build-system directive can be implemented as:
```python
variant(
    'build-system',
    default='cmake',
    values=(
        'autotools',
        conditional('cmake', when='@X.Y:')
    ), 
    description='...',
)
```

Modifications:
- [x] Allow conditional possible values in variants
- [x] Add a unit-test for the feature
- [x] Add documentation
2022-04-04 17:37:57 -07:00
Brian Van Essen
29da99427e
"spack external find": also find library-only packages (#28005)
Update "spack external find --all" to also find library-only packages.
A Package can add a ".libraries" attribute, which is a list of regular
expressions to use to find libraries associated with the Package.
"spack external find --all" will search LD_LIBRARY_PATH for potential
libraries.

This PR adds examples for NCCL, RCCL, and hipblas packages. These
examples specify the suffix ".so" for the regular expressions used
to find libraries, so generally are only useful for detecting library
packages on Linux.
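
As a rough sketch of the attribute shape described above (not one of the actual NCCL/RCCL/hipblas changes; the class name, import path, and regex are hypothetical):

```python
# Hypothetical package: only the shape of the "libraries" attribute follows the text.
from spack.package import Package  # import path assumed for the sketch


class ExampleGpuLib(Package):
    """Library-only package that 'spack external find --all' could detect."""

    homepage = "https://example.com/gpu-lib"

    # Regular expressions matched against library names found on
    # LD_LIBRARY_PATH; the ".so" suffix limits detection to Linux.
    libraries = [r"libexample-gpu\.so"]
```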
2022-04-01 13:30:10 -07:00
John W. Parent
88b1bf751d
Windows Support: Fixup Perl build (#29711)
* Add pl2bat to PATH: Perl on Windows requires the script pl2bat.bat
  and Perl itself to be available to the installer via the PATH. The build
  and dependent environments of Perl on Windows therefore have the install
  prefix's bin directory added to the PATH.
* symlink with win32file module instead of using Executable to
  call mklink (mklink is a shell function and so is not accessible
  in this manner).
2022-03-31 11:47:11 -07:00
Massimiliano Culpo
048a0de35b
Use the appropriate function to remove files in the stage directory (#29690)
We shouldn't be using "remove_linked_tree" to remove the lock file,
since that function expects to receive a directory path as an
argument.

Also, as a further measure to avoid regression, this commit restores
the "ignore_errors=True" argument on linux and adds a unit test
checking that "remove_linked_tree" doesn't change file permissions
as a side effect of a failure to remove.
2022-03-25 08:39:09 +01:00
Harmen Stoppels
59e522e815
environment views: single pass view generation (#29443)
Reduces the number of stat calls to a bare minimum:
- Single pass over src prefixes
- Handle projection clashes in memory

Symlinked directories in the src prefixes are now conditionally
transformed into directories with symlinks in the dst dir. Notably
`intel-mkl`, `cuda` and `qt` have top-level symlinked directories that
previously resulted in empty directories in the view. We now avoid
cycles and possible exponential blowup by only expanding symlinks that:
- point to dirs deeper in the folder structure;
- are a fixed depth of 2.
2022-03-24 03:54:33 -06:00
Harmen Stoppels
773da7ceba
python: drop dependency on file for script check (#29513)
`file` was used to detect Python scripts with shebangs, so that the interpreter could be changed from <python prefix> to <view path>. With this change, we detect shebangs using Python instead, so that `file` is no longer required.
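A minimal sketch of shebang detection in pure Python (not the actual helper used here; the function name is made up):

```python
def has_shebang(path):
    # A script's first two bytes are "#!"; reading them needs no external tool.
    with open(path, "rb") as f:
        return f.read(2) == b"#!"
```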
2022-03-23 10:31:23 +01:00
John Parent
4aee27816e Windows Support: Testing Suite integration
Broaden support for execution of the test suite
on Windows.
General bug and review fixups
2022-03-17 09:01:01 -07:00
John Parent
cf1349ba35 "spack commands --update-completion" 2022-03-17 09:01:01 -07:00
John W. Parent
e4d4a5193f Path handling (#28402)
Consolidate Spack's internal filepath logic into a select
few places and refactor for consistent internal usage of
os.path utilities. Creates a prefix and a series of utilities
in the path utility module that facilitate handling paths
in a platform-agnostic manner.

Convert Windows paths to posix paths internally

Prefer posixpath.join instead of os.path.join

Updated util/ directory to account for Windows integration

Co-authored-by: Stephen Crowell <stephen.crowell@khq.kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>

Module template format for windows (#23041)
2022-03-17 09:01:01 -07:00
John Parent
df4129d395 Expand external find for Windows (#27588)
* Incorporate new search location

* Add external user option

* proper doc string

* Explicit commands in getting started

* raise during chgrp on Win

recover installer changes

Notate admin privileges

Windows phase install hooks

Find external python and install ninja (#23496)

Allow external find python to find windows python and spack install ninja

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
2022-03-17 09:01:01 -07:00
John Parent
3a994032f8 Spack on Windows package ports
CMake - Windows Bootstrap (#25825)

Remove hardcoded cmake compiler (#26410)

Revert breaking cmake changes
Ensure no autotools on Windows

Perl on Windows (#26612)

Python source build windows (#26313)

Reconfigure sysconf for Windows

Python2.6 compatibility

Fixup new sbang tests for Windows

Ruby support (#28287)

Add NASM support (#28319)

Add mock Ninja package for testing
2022-03-17 09:01:01 -07:00
loulawrence
f587a9ce68 use pytest stdout/err capture (#22584)
* On windows, write to StringIO without redirection in test cases to avoid conflicting with logger
2022-03-17 09:01:01 -07:00
Betsy McPhail
d4d101f57e Allow 'spack external find' to find executables on the system path (#22091)
Co-authored-by: Lou Lawrence <lou.lawrence@kitware.com>
2022-03-17 09:01:01 -07:00
Betsy McPhail
fb0e91c534 Windows: Symlink support
To provide Windows-compatible functionality, spack code should use
llnl.util.symlink instead of os.symlink. On non-Windows platforms
and on Windows where supported, os.symlink will still be used.

Use junctions when symlinks aren't supported on Windows (#22583)

Support islink for junctions (#24182)

Windows: Update llnl/util/filesystem

* Use '/' as path separator on Windows.
* Recognizing that Windows paths start with '<Letter>:/' instead of '/'

Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
2022-03-17 09:01:01 -07:00
Betsy McPhail
a7de2fa380 Create rename utility function
os.rename() fails on Windows if file already exists.

Create getuid utility function (#21736)

On Windows, replace os.getuid with ctypes.windll.shell32.IsUserAnAdmin().

Tests: Use getuid util function

Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
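
A hedged sketch of what such utility wrappers can look like (signatures and details are illustrative, not the exact functions added here):

```python
import ctypes
import os
import sys


def rename(src, dst):
    # os.rename() raises on Windows if dst already exists, so remove it first.
    if sys.platform == "win32" and os.path.exists(dst):
        os.remove(dst)
    os.rename(src, dst)


def getuid():
    # Windows has no uid; fall back to the "is admin" check mentioned above.
    if sys.platform == "win32":
        return ctypes.windll.shell32.IsUserAnAdmin()
    return os.getuid()
```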
2022-03-17 09:01:01 -07:00
Betsy McPhail
b60a0eea01 Workarounds for install errors on Windows (#21890)
1. Forwarding sys.stdin, e.g. use input_multiprocess_fd,
gives an error on Windows. Skipping for now

3.  subprocess_context needs to serialize for Windows, like it does
for Mac.

Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
2022-03-17 09:01:01 -07:00
Ben Cowan
81bc00d61f Adding basic Windows features (#21259)
* Snapshot of some MSVC infrastructure added during experiments a while ago.  Rebasing from spack/develop.

* Added platform and OS definitions for Windows.

* Updated Windows platform file to conform to new archspec use.

* Added Windows as a platform; introduced some debugging code.

* Added type annotations.

* Fixed copyright.

* Removed print statements.

* Ensure `spack arch` returns correctly on Windows (#21428)

* Correctly identify windows as 'windows-Windows10-AMD64'
2022-03-17 09:01:01 -07:00
Massimiliano Culpo
2cd5c00923
Allow for multiple dependencies/dependents from the same package (#28673)
Change the internal representation of `Spec` to allow for multiple dependencies or
dependents stemming from the same package. This change permits representing cases
that are frequent in cross-compiled environments or when bootstrapping compilers.

Modifications:
- [x] Substitute `DependencyMap` with `_EdgeMap`. The main differences are that the
      latter does not support direct item assignment and can be modified only through its
      API. It also provides a `select_by` method to query items.
- [x] Reworked a few public APIs of `Spec` to get list of dependencies or related edges.
- [x] Added unit tests to prevent regression on #11983 and prove the synthetic construction
      of specs with multiple deps from the same package. 

Since #22845 went in first, this PR reuses that format and thus it should not change hashes.
The same package may be present multiple times in the list of dependencies with different
associated specs (each with its own hash).
2022-03-10 11:53:45 -08:00
Harmen Stoppels
dc78f4c58a
environment.py: allow link:run (#29336)
* environment.py: allow link:run

Some users want minimal views, excluding run-type dependencies, since
those types of dependencies are covered by rpaths and the symlinked
libraries in the view aren't used anyway.

With this change, an environment like this:

```
spack:
  specs: ['py-flake8']
  view:
    default:
      root: view
      link: run
```

includes python packages and python, but no link type deps of python.
2022-03-09 12:35:26 -08:00
Danny McClanahan
2c331a1d7f
make @llnl.util.lang.memoized support kwargs (#21722)
* make memoized() support kwargs

* add testing for @memoized
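
A minimal sketch of a kwargs-aware memoization decorator (not the exact `llnl.util.lang.memoized` implementation):

```python
import functools


def memoized(func):
    """Cache results keyed on both positional and keyword arguments."""
    cache = {}

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # A dict is unhashable, so freeze kwargs into a sorted tuple of items.
        key = (args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]

    return wrapper
```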
2022-03-02 11:12:15 -08:00
Todd Gamblin
36b0730fac
Add spack --bootstrap option for accessing bootstrap store (#25601)
We can see what is in the bootstrap store with `spack find -b`, and you can clean it with `spack
clean -b`, but we can't do much else with it, and if there are bootstrap issues they can be hard to
debug.

We already have `spack --mock`, which allows you to swap in the mock packages from the command
line. This PR introduces `spack -b` / `spack --bootstrap`, which runs all of spack with
`ensure_bootstrap_configuration()` set. This means that you can run `spack -b find`, `spack -b
install`, `spack -b spec`, etc. to see what *would* happen with bootstrap configuration, to remove
specific bootstrap packages, etc. This will hopefully make developers' lives easier as they deal
with bootstrap packages.

This PR also uses a `nullcontext` context manager. `nullcontext` has been implemented in several
other places in Spack, and this PR consolidates them to `llnl.util.lang`, with a note that we can
delete the function if we ever require a new enough Python (a sketch follows the task list).

- [x] introduce `spack --bootstrap` option
- [x] consolidated all `nullcontext` usages to `llnl.util.lang`
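
For reference, a `nullcontext` equivalent is only a few lines; the standard library provides `contextlib.nullcontext` from Python 3.7 onward. This sketch shows the likely shape of such a helper, not its exact code:

```python
import contextlib


@contextlib.contextmanager
def nullcontext(enter_result=None):
    # Do nothing: useful where a block of code sometimes needs a real
    # context manager and sometimes needs a no-op stand-in.
    yield enter_result
```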
2022-02-22 12:35:34 -07:00
Adam J. Stewart
3c1b2c0fc9
find_libraries: search for both .so and .dylib on macOS (#28924) 2022-02-16 14:07:44 +01:00
Danny McClanahan
0c2de252f1
introduce llnl.util.compat to remove sys.version_info checks (#21720)
- also split typing.py into typing_extensions and add py2 shims
2022-01-21 12:32:52 -08:00
Todd Gamblin
93377942d1 Update copyright year to 2022 2022-01-14 22:50:21 -08:00
Harmen Stoppels
fb93979b94
Set backup=False by default in filter_file (#28036) 2021-12-16 13:50:20 +01:00
Paul Ferrell
c0edb17b93
Handle byte sequences which are not encoded as UTF8 while logging. (#21447)
Fix builds which produce lines with non-UTF8 output while logging.
The alternative is to read in binary mode and then decode while
ignoring errors.
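
One way to implement that binary-read-then-decode alternative, as a sketch (illustrative, not the logger's actual code):

```python
def decode_log_line(raw):
    # 'raw' is a bytes object read from the build output in binary mode;
    # undecodable byte sequences are dropped instead of raising.
    return raw.decode("utf-8", errors="ignore")
```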
2021-11-29 13:27:02 +01:00
Massimiliano Culpo
fa7189b480
Remove support for Python 2.6 (#27256)
Modifications:
- [x] Removed `centos:6` unit test, adjusted vermin checks
- [x] Removed backport of `collections.OrderedDict`
- [x] Removed backport of `functools.total_ordering`
- [x] Removed Python 2.6 specific skip markers in unit tests
- [x] Fixed a few minor Python 2.6 related TODOs in code

Updating the vendored dependencies will be done in separate PRs
2021-11-23 09:06:17 -08:00
Maxim Belkin
0ab5d42bd5
Fix typos (ouptut) (#27317) 2021-11-09 16:11:34 -06:00
Massimiliano Culpo
6063600a7b
containerize: pin the Spack version used in a container (#21910)
This PR permits to specify the `url` and `ref` of the Spack instance used in a container recipe simply by expanding the YAML schema as outlined in #20442:
```yaml
container:
  images:
    os: amazonlinux:2
    spack:
      ref: develop
      resolve_sha: true
```
The `resolve_sha` option, if true, verifies the `ref` by cloning the Spack repository in a temporary directory and transforming any tag or branch name to a commit sha. When this new ability is leveraged, an additional "bootstrap" stage is added, which builds an image with Spack set up and ready to install software. The Spack repository to be used can be customized with the `url` keyword under `spack`.

Modifications:
- [x] Permit to pin the version of Spack, either by branch or tag or sha
- [x] Added a few new OSes (centos:8, amazonlinux:2, ubuntu:20.04, alpine:3, cuda:11.2.1)
- [x] Permit to print the bootstrap image as a standalone
- [x] Add documentation on the new part of the schema
- [x] Add unit tests for different use cases
2021-10-25 13:09:27 -07:00
Morten Kristensen
9e11e62bca
py-vermin: add latest version 1.3.1 (#26920)
* py-vermin: add latest version 1.3.1

* Exclude line from Vermin since version is already being checked for

Vermin 1.3.1 finds that `encoding` kwarg of builtin `open()` requires Python 3+.
2021-10-24 16:49:05 +00:00
Michael Kuhn
d1f3279607
installer: Support showing status information in terminal title (#16259)
When installing packages with a lot of dependencies there is no easy way
of judging the current progress (apart from running `spack spec -I pkg`
in another terminal). This change allows Spack to update the terminal's
title with status information, including its current progress as well as
information about the current and total number of packages.
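
As a rough sketch, most terminal emulators let you set the window title with an OSC escape sequence (the function name and message below are illustrative, not Spack's exact code):

```python
import sys


def set_terminal_title(text):
    # OSC 0 sequence: ESC ] 0 ; <text> BEL sets the terminal window title.
    sys.stdout.write("\033]0;{0}\007".format(text))
    sys.stdout.flush()


set_terminal_title("spack: installing zlib [3/12]")
```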
2021-10-11 17:54:59 +02:00
Harmen Stoppels
18760de972
Spack install: handle failed restore of backup (#25647)
Spack has logic to preserve an installation prefix when it is being
overwritten: if the new install fails, the old files are restored.
This PR adds error handling for when this backup restoration fails
(i.e. the new install fails, and then some unexpected error prevents
restoration from the backup).
2021-10-01 11:40:48 -07:00
Harmen Stoppels
f5ab3ad82a
Fix: --overwrite backs up old install dir, but it gets destroyed anyways (#25583)
* Make sure PackageInstaller does not remove the just-restored
  install dir after failure in spack install --overwrite
* Remove cryptic error message and rethrow actual error
2021-08-27 12:41:24 -07:00
Massimiliano Culpo
1ab6f30fdd
Remove fork_context from llnl.util.lang (#25620)
This object was introduced in #18124 and was later superseded by
#18205, which removed any use of the object.
2021-08-26 09:39:59 -07:00
Todd Gamblin
1374fea5d9
locks: only open lockfiles once instead of for every lock held (#24794)
This adds lockfile tracking to Spack's lock mechanism, so that we ensure that there
is only one open file descriptor per inode.

The `fcntl` locks that Spack uses are associated with an inode and a process.
This is convenient, because if a process exits, it releases its locks.
Unfortunately, this also means that if you close a file, *all* locks associated
with that file's inode are released, regardless of whether the process has any
other open file descriptors on it.

Because of this, we need to track open lock files so that we only close them when
a process no longer needs them.  We do this by tracking each lockfile by its
inode and process id.  This has several nice properties:

1. Tracking by pid ensures that, if we fork, we don't inadvertently track the parent
   process's lockfiles. `fcntl` locks are not inherited across forks, so we'll
   just track new lockfiles in the child.
2. Tracking by inode ensures that references are counted per inode, and that we don't
   inadvertently close a file whose inode still has open locks.
3. Tracking by both pid and inode ensures that we only open lockfiles the minimum
   number of times necessary for the locks we have.

Note: as mentioned elsewhere, these locks aren't thread safe -- they're designed to
work in Python and assume the GIL.

Tasks:
- [x] Introduce an `OpenFileTracker` class to track open file descriptors by inode.
- [x] Reference-count open file descriptors and only close them if they're no longer
      needed (this avoids inadvertently releasing locks that should not be released).
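
A simplified sketch of reference counting by `(inode, pid)` follows; the real `OpenFileTracker` handles locking details omitted here, and the method names are illustrative:

```python
import os


class OpenFileTracker:
    """Keep at most one open file object per (inode, pid), with a refcount."""

    def __init__(self):
        self._entries = {}  # (inode, pid) -> [file object, refcount]

    def get_fh(self, path):
        fh = open(path, "a+")  # ensure the lock file exists and is open
        key = (os.fstat(fh.fileno()).st_ino, os.getpid())
        if key in self._entries:
            fh.close()  # reuse the descriptor already tracked for this inode
            self._entries[key][1] += 1
            return self._entries[key][0]
        self._entries[key] = [fh, 1]
        return fh

    def release_fh(self, fh):
        for key, (tracked, count) in list(self._entries.items()):
            if tracked is fh:
                if count == 1:
                    tracked.close()  # last reference: safe to close the inode
                    del self._entries[key]
                else:
                    self._entries[key][1] = count - 1
                return
```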
2021-08-24 14:08:34 -07:00
Harmen Stoppels
220a87812c
New spack.environment.active_environment api, and make spack.environment not depend on spack.cmd. (#25439)
* Refactor active environment getters

- Make `spack.environment.active_environment` a trivial getter for the active
environment, replacing `spack.environment.get_env` when the arguments are
not needed
- New method `spack.cmd.require_active_environment(cmd_name)` for 
commands that require an environment (rather than abusing 
get_env/active_environment)
- Clean up calling code to call spack.environment.active_environment or
spack.cmd.require_active_environment as appropriate
- Remove the `-e` parsing from `active_environment`, because `main.py` is
responsible for processing `-e` and already activates the environment.
- Move `spack.environment.find_environment` to
`spack.cmd.find_environment`, to avoid having spack.environment aware
of argparse.
- Refactor `spack install` command so argument parsing is all handled in the
command, no argparse in spack.environment or spack.installer
- Update documentation

* Python 2: toplevel import errors only with 'as ev'

In two files, `import spack.environment as ev` leads to errors
These errors are not well understood ("'module' object has no attribute
'environment'"). All other files standardize on the above syntax.
2021-08-19 19:01:37 -07:00
Axel Huebl
41ae83463e
doc: def llnl.util.filesystem.find fix rst (#25350)
The star was not rendered in the docs.
2021-08-11 09:14:04 +02:00
Todd Gamblin
7ddd6ad461 installation: filter padding from all tty output
This is both a bugfix and a generalization of #25168. In #25168, we attempted to filter padding
*just* from the debug output of `spack.util.executable.Executable` objects. It turns out we got it
wrong -- filtering the command line string instead of the arg list resulted in output like this:

```
==> [2021-08-05-21:34:19.918576] ["'", '/', 'b', 'i', 'n', '/', 't', 'a', 'r', "'", ' ', "'", '-', 'o', 'x', 'f', "'", ' ', "'", '/', 't', 'm', 'p', '/', 'r', 'o', 'o', 't', '/', 's', 'p', 'a', 'c', 'k', '-', 's', 't', 'a', 'g', 'e', '/', 's', 'p', 'a', 'c', 'k', '-', 's', 't', 'a', 'g', 'e', '-', 'p', 'a', 't', 'c', 'h', 'e', 'l', 'f', '-', '0', '.', '1', '3', '-', 'w', 'p', 'h', 'p', 't', 'l', 'h', 'w', 'u', 's', 'e', 'i', 'a', '4', 'k', 'p', 'g', 'y', 'd', 'q', 'l', 'l', 'i', '2', '4', 'q', 'b', '5', '5', 'q', 'u', '4', '/', 'p', 'a', 't', 'c', 'h', 'e', 'l', 'f', '-', '0', '.', '1', '3', '.', 't', 'a', 'r', '.', 'b', 'z', '2', "'"]
```

Additionally, plenty of builds output padded paths in other places -- e.g., not just in command
arguments, but also in `tty` messages via `llnl.util.filesystem` and elsewhere. `Executable`
isn't really the right place for this.

This PR reverts the changes to `Executable` and moves the filtering into `llnl.util.tty`. There is
now a context manager there that you can use to install a filter for all output.
`spack.installer.build_process()` now uses this context manager to make `tty` do path filtering
when padding is enabled.

- [x] revert filtering in `Executable`
- [x] add ability for `tty` to filter output
- [x] install output filter in `build_process()`
- [x] tests
2021-08-09 01:42:07 -07:00
Dylan Simon
507d3c841c
don't spin writer daemon when < /dev/null (#25170) 2021-08-02 21:39:38 -07:00
Todd Gamblin
ab5954520f
spack diff: make output order deterministic (#25169)
The output order for `spack diff` is nondeterministic for larger diffs -- if you
ran it several times it will not put the fields in the spec in the same order on
successive invocations.

This makes a few fixes to `spack diff`:

- [x] Implement the change discussed in https://github.com/spack/spack/pull/22283#discussion_r598337448
      to make `AspFunction` comparable in and of itself and to eliminate the need for `to_tuple()`

- [x] Sort the lists of diff properties so that the output is always in the same order.

- [x] Make the output for different fields the same as what we use in the solver. Previously, we
      would use `Type(value)` for non-string values and `value` for strings.  Now we just use
      the value.  So the output looks a little cleaner:

      ```
      == Old ==========================        == New ====================
      @@ node_target @@                        @@ node_target @@
      -  gdbm Target(x86_64)                   -  gdbm x86_64
      +  zlib Target(skylake)                  +  zlib skylake
      @@ variant_value @@                      @@ variant_value @@
      -  ncurses symlinks bool(False)          -  ncurses symlinks False
      +  zlib optimize bool(True)              +  zlib optimize True
      @@ version @@                            @@ version @@
      -  gdbm Version(1.18.1)                  -  gdbm 1.18.1
      +  zlib Version(1.2.11)                  +  zlib 1.2.11
      @@ node_os @@                            @@ node_os @@
      -  gdbm catalina                         -  gdbm catalina
      +  zlib catalina                         +  zlib catalina
      ```

I suppose if we want to use `repr()` in the output we could do that and could be
consistent but we don't do that elsewhere -- the types of things in Specs are
all stringifiable so the string and the name of the attribute (`version`, `node_os`,
etc.) are sufficient to know what they are.
2021-08-01 05:15:33 +00:00
Adam J. Stewart
b8afc0fd29 API Docs: fix broken reference targets 2021-07-16 08:30:56 -07:00
Todd Gamblin
f58b2e03ca
build output: filter padding out of console output when padded_length is used (#24514)
Spack allows users to set `padded_length` to pad out the installation path in
build farms so that any binaries created are more easily relocatable. The issue
with this is that the padding dominates installation output and makes it
difficult to see what is going on. The padding also causes logs to easily
exceed size limits for things like GitLab artifacts.

This PR fixes this by adding a filter in the logger daemon. If you use a
setting like this:

config:
    install_tree:
        padded_length: 512

Then lines like this in the output:

==> [2021-06-23-15:59:05.020387] './configure' '--prefix=/Users/gamblin2/padding-log-test/opt/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_pla/darwin-bigsur-skylake/apple-clang-12.0.5/zlib-1.2.11-74mwnxgn6nujehpyyalhwizwojwn5zga

will be replaced with the much more readable:

==> [2021-06-23-15:59:05.020387] './configure' '--prefix=/Users/gamblin2/padding-log-test/opt/[padded-to-512-chars]/darwin-bigsur-skylake/apple-clang-12.0.5/zlib-1.2.11-74mwnxgn6nujehpyyalhwizwojwn5zga

You can see that the padding has been replaced with `[padded-to-512-chars]` to
indicate the total number of characters in the padded prefix. Over a long log
file, this should save a lot of space and allow us to see error messages in
GitHub/GitLab log output.

The *actual* build logs still have full paths in them. Also lines that are
output by Spack and not by a package build are not filtered and will still
display the fully padded path. There aren't that many of these, so the change
should still help reduce file size and readability quite a bit.
2021-07-12 21:48:52 +00:00
Todd Gamblin
24c01d57cf
imports: sort imports everywhere in Spack (#24695)
* fix remaining flake8 errors

* imports: sort imports everywhere in Spack

We enabled import order checking in #23947, but fixing things manually drives
people crazy. This used `spack style --fix --all` from #24071 to automatically
sort everything in Spack so PR submitters won't have to deal with it.

This should go in after #24071, as it assumes we're using `isort`, not
`flake8-import-order` to order things. `isort` seems to be more flexible and
allows `llnl` imports to be in their own group before `spack` ones, so this
seems like a good switch.
2021-07-08 22:12:30 +00:00
Tom Scogland
4a7b0afde2
Log performance improvement (#23925)
* util.tty.log: read up to 100 lines if ready

Rework to read up to 100 lines from the captured stdin as long as data
is ready to be read immediately.  Adds a helper function to poll with
`select` for ready data.  This showed a roughly 5-10x perf improvement
for high-rate writes through the logger with relatively short lines.

* util.tty.log: Defer flushes to end of ready reads

Rather than flush per line, flush per set of reads.  Since this is a
non-blocking loop, the total perceived wait is short.

* util.tty.log: only scan each line once, usually

Rather than always find all control characters then substitute them all,
use `subn` to count the number of control characters replaced.  Only if
control characters exist find out what they are.  This could be made
truly single pass with sub with a function, but it's a more intrusive
change and this got 99%ish of the performance improvement (roughly
another 2x in some cases).

* util.tty.log: remove check for `readable`

Python < 3 does not support a readable check on streams, should not be
necessary here since we control the only use and it's explicitly a
stream to be read.
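
A hedged sketch of the "read while ready" loop (helper names are made up; not the actual `log.py` code):

```python
import select


def ready_to_read(stream, timeout=0):
    # select() tells us whether a read would return immediately.
    readable, _, _ = select.select([stream], [], [], timeout)
    return bool(readable)


def read_available_lines(stream, limit=100):
    """Read up to `limit` lines, but only while data is already waiting."""
    lines = []
    while len(lines) < limit and ready_to_read(stream):
        line = stream.readline()
        if not line:  # EOF
            break
        lines.append(line)
    return lines
```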
2021-05-31 20:33:14 -07:00
Massimiliano Culpo
219eb09e59
Put a module object in sys.modules before executing module code (#23269)
The loading protocol mandates that the module we are going
to import must already be in sys.modules before its code is
executed, to prevent unbounded recursion and multiple loading.

Loading a module from file exits early if the module is already
in sys.modules.
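
A minimal sketch of that protocol using `importlib` (not Spack's exact loading code):

```python
import importlib.util
import sys


def load_module_from_file(name, path):
    # Exit early if the module has already been imported.
    if name in sys.modules:
        return sys.modules[name]

    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)

    # The module must be in sys.modules *before* its code runs, so that
    # recursive or circular imports resolve to this same object.
    sys.modules[name] = module
    try:
        spec.loader.exec_module(module)
    except BaseException:
        del sys.modules[name]  # don't leave a half-initialized module behind
        raise
    return module
```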
2021-05-06 11:53:40 +02:00
vsoch
613348ec90 Use gethostname() instead of getfqdn() for lock debug mode
In debug mode, processes taking an exclusive lock write out their node name to
the lock file. We were using `getfqdn()` for this, but it seems to produce
inconsistent results when used from within some github actions containers.

We get this error because getfqdn() seems to return a short name in one place
and a fully qualified name in another:

```
  File "/home/runner/work/spack/spack/lib/spack/spack/test/llnl/util/lock.py", line 1211, in p1
    assert lock.host == self.host
AssertionError: assert 'fv-az290-764....cloudapp.net' == 'fv-az290-764'
  - fv-az290-764.internal.cloudapp.net
  + fv-az290-764
!!!!!!!!!!!!!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!!!!!!!!!!!
== 1 failed, 2547 passed, 7 skipped, 22 xfailed, 2 xpassed in 1238.67 seconds ==
```

This seems to stem from https://bugs.python.org/issue5004.

We don't really need to get a fully qualified hostname for debugging, so use
`gethostname()` because its results are more consistent. This seems to fix the
issue.

Signed-off-by: vsoch <vsoch@users.noreply.github.com>
2021-04-15 00:01:41 -07:00
Peter Scheibel
f624ce0834
Build process output: handle UTF-8 for python 3.x to 3.7 (#22888)
We set LC_ALL=C to encourage a build process to generate ASCII
output (so our logger daemon can decode it). Most packages
respect this but it appears that intel-oneapi-compilers does
not in some cases (see #22813). This reads the output of the build
process as UTF-8, which still works if the build process respects
LC_ALL=C but also works if the process generates UTF-8 output.

For Python >= 3.7 all files are opened with UTF-8 encoding by
default. Python 2 does not support the encoding argument on
'open', so to support Python 2 the files would have to be
opened in byte mode and explicitly decoded (as a side note,
this would be the only way to handle other encodings without
being informed of them in advance).
2021-04-09 18:10:01 -07:00
Todd Gamblin
01a6adb5f7 specs: use lazy lexicographic comparison instead of key_ordering
We have been using the `@llnl.util.lang.key_ordering` decorator for specs
and most of their components. This leverages the fact that in Python,
tuple comparison is lexicographic. It allows you to implement a
`_cmp_key` method on your class, and have `__eq__`, `__lt__`, etc.
implemented automatically using that key. For example, you might use
tuple keys to implement comparison, e.g.:

```python
class Widget:
    # author implements this
    def _cmp_key(self):
        return (
            self.a,
            self.b,
            (self.c, self.d),
            self.e
        )

    # operators are generated by @key_ordering
    def __eq__(self, other):
        return self._cmp_key() == other._cmp_key()

    def __lt__(self):
        return self._cmp_key() < other._cmp_key()

    # etc.
```

The issue there for simple comparators is that we have to build the
tuples *and* we have to generate all the values in them up front. When
implementing comparisons for large data structures, this can be costly.

This PR replaces `@key_ordering` with a new decorator,
`@lazy_lexicographic_ordering`. Lazy lexicographic comparison maps the
tuple comparison shown above to generator functions. Instead of comparing
based on pre-constructed tuple keys, users of this decorator can compare
using elements from a generator. So, you'd write:

```python
@lazy_lexicographic_ordering
class Widget:
    def _cmp_iter(self):
        yield a
        yield b
        def cd_fun():
            yield c
            yield d
        yield cd_fun
        yield e

    # operators are added by decorator (but are a bit more complex)
```

There are no tuples that have to be pre-constructed, and the generator
does not have to complete. Instead of tuples, we simply make functions
that lazily yield what would've been in the tuple. If a yielded value is
a `callable`, the comparison functions will call it and recursively
compare it. The comparator just walks the data structure like you'd expect
it to.

The ``@lazy_lexicographic_ordering`` decorator handles the details of
implementing comparison operators, and the ``Widget`` implementor only
has to worry about writing ``_cmp_iter``, and making sure the elements in
it are also comparable.

Using this PR shaves another 1.5 sec off the runtime of `spack buildcache
list`, and it also speeds up Spec comparison by about 30%. The runtime
improvement comes mostly from *not* calling `hash()` on `_cmp_iter()` results.
2021-03-31 14:39:23 -07:00
Adam J. Stewart
40a40e0265
Python 3.10 support: collections.abc (#20441) 2021-02-01 11:30:25 -06:00
Sergey Kosukhin
4d7a9df810
Add a context wrapper for mtime preservation (#21258)
Sometimes we need to patch a file that is a dependency for some other
automatically generated file that comes in a release tarball. As a
result, make tries to regenerate the dependent file using additional
tools (e.g. help2man), which would not be needed otherwise.

In some cases, it's preferable to avoid that (e.g. see #21255). A way
to do that is to save the modification timestamps before patching and
restoring them afterwards. This PR introduces a context wrapper that
does that.
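
A minimal sketch of such a context wrapper (the name and details are illustrative):

```python
import contextlib
import os


@contextlib.contextmanager
def preserve_mtimes(*filenames):
    # Record (atime, mtime) before the body runs and restore them afterwards.
    saved = {f: os.stat(f) for f in filenames if os.path.exists(f)}
    try:
        yield
    finally:
        for f, st in saved.items():
            if os.path.exists(f):
                os.utime(f, (st.st_atime, st.st_mtime))
```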
2021-01-27 11:41:07 -08:00
Nathan Hanford
ebc871abbf
[WIP] relocate.py: parallelize test replacement logic (#19690)
* sbang pushed back to callers;
star moved to util.lang

* updated unit test

* sbang test moved; local tests pass

Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
2021-01-20 09:17:47 -08:00
Todd Gamblin
a8ccb8e116 copyrights: update all files with license headers for 2021
- [x] add `concretize.lp`, `spack.yaml`, etc. to licensed files
- [x] update all licensed files to say 2013-2021 using
      `spack license update-copyright-year`
- [x] appease mypy with some additions to package.py that were needed
      for oneapi.py
2021-01-02 12:12:00 -08:00
Tom Scogland
857749a9ba
add mypy to style checks; rename spack flake8 to spack style (#20384)
I lost my mind a bit after getting the completion stuff working and
decided to get Mypy working for spack as well. This adds a 
`.mypy.ini` that checks all of the spack and llnl modules, though
not yet packages, and fixes all of the identified missing types and
type issues for the spack library.

In addition to these changes, this includes:

* rename `spack flake8` to `spack style`

Aliases flake8 to style, and just runs flake8 as before, but with
a warning.  The style command runs both `flake8` and `mypy`,
in sequence. Added --no-<tool> options to turn off one or the
other, they are on by default.  Fixed two issues caught by the tools.

* stub typing module for python2.x

We don't support typing in Spack for python 2.x. To allow 2.x to
support `import typing` and `from typing import ...` without a
try/except dance to support old versions, this adds a stub module
*just* for python 2.x.  Doing it this way means we can only reliably
use all type hints in python3.7+, and mypy.ini has been updated to
reflect that.

* add non-default black check to spack style

This is a first step to requiring black.  It doesn't enforce it by
default, but it will check it if requested.  Currently enforcing the
line length of 79 since that's what flake8 requires, but it's a bit odd
for a black formatted project to be quite that narrow.  All settings are
in the style command since spack has no pyproject.toml and I don't
want to add one until more discussion happens. Also re-format
`style.py` since it no longer passed the black style check
with the new length.

* use style check in github action

Update the style and docs action to use `spack style`, adding in mypy
and black to the action even if it isn't running black right now.
2020-12-22 21:39:10 -08:00
Greg Becker
77b2e578ec
spack test (#15702)
Users can add test() methods to their packages to run smoke tests on
installations with the new `spack test` command (the old `spack test` is
now `spack unit-test`). spack test is environment-aware, so you can
`spack install` an environment and then run `spack test run` to run smoke
tests on all of its packages. Historical test logs can be perused with
`spack test results`. Generic smoke tests for MPI implementations, C,
C++, and Fortran compilers as well as specific smoke tests for 18
packages.

Inside the test method, individual tests can be run separately (and
continue to run best-effort after a test failure) using the `run_test`
method. The `run_test` method encapsulates finding test executables,
running and checking return codes, checking output, and error handling.

This handles the following trickier aspects of testing with direct
support in Spack's package API:

- [x] Caching source or intermediate build files at build time for
      use at test time.
- [x] Test dependencies,
- [x] packages that require a compiler for testing (such as library only
      packages).

See the packaging guide for more details on using Spack testing support.
Included is support for package.py files for virtual packages. This does
not change the Spack interface, but is a major change in internals.

Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Co-authored-by: wspear <wjspear@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2020-11-18 02:39:02 -08:00
Todd Gamblin
0ed019d4ef concretizer: first working version with pyclingo interface
- [x] Solver now uses the Python interface to clingo
- [x] can extract unsatisfiable cores from problems when things go wrong
- [x] use Python callbacks for versions instead of choice rules (this may
      ultimately hurt performance)
2020-11-17 10:04:13 -08:00
Peter Scheibel
bb42470211
macos: update build process to use spawn instead of fork (#18205)
Spack creates a separate process to do package installation. Different
operating systems and Python versions use different methods to create
it but up until Python 3.8 both Linux and Mac OS used "fork" (which
duplicates process memory, file descriptor table, etc.).

Python >= 3.8 on Mac OS prefers creating an entirely new process
(referred to as the "spawn" start method) because "fork" was found to
cause issues (in other words "spawn" is the default start method used
by multiprocessing.Process). Spack was dependent on the particular
behavior of fork to replicate process memory and transmit file
descriptors.

This PR refactors the Spack internals to support starting a child
process with the "spawn" method. To achieve this, it makes the
following changes:

- ensure that the package repository and other global state are
  transmitted to the child process
- ensure that file descriptors are transmitted to the child process in
  a way that works with multiprocessing and spawn
- make all the state needed for the build process and tests picklable
  (package, stage, etc.)
- move a number of locally-defined functions into global scope so that
  they can be pickled
- rework tests where needed to avoid using local functions

This PR also reworks sbang tests to work on macOS, where temporary
directories are deeper than the Linux sbang limit. We make the limit
platform-dependent (macOS supports 512-character shebangs)

See: #14102
2020-11-12 12:26:23 -08:00
Massimiliano Culpo
458d88eaad
Make archspec a vendored dependency (#19600)
- Added archspec to the list of vendored dependencies
- Removed every reference to llnl.util.cpu
- Removed tests from Spack code base
2020-10-30 13:02:14 -07:00
Greg Becker
7a6268593c
Environments: specify packages for developer builds (#15256)
* allow environments to specify dev-build packages

* spack develop and spack undevelop commands

* never pull dev-build packages from bincache

* reinstall dev_specs when code has changed; reinstall dependents too

* preserve dev info paths and versions in concretization as special variant

* move install overwrite transaction into installer

* move dev-build argument handling to package.do_install

now that specs are dev-aware, package.do_install can add
necessary args (keep_stage=True, use_cache=False) to dev
builds. This simplifies driving logic in cmd and env._install

* allow 'any' as wildcard for variants

* spec: allow anonymous dependencies

raise an error when constraining by or normalizing an anonymous dep
refactor concretize_develop to remove dev_build variant
refactor tests to check for ^dev_path=any instead of +dev_build

* fix variant class hierarchy
2020-10-15 17:23:16 -07:00
Greg Becker
2e4892c111
env view failures: print underlying error message (#18713) 2020-09-18 10:21:14 -07:00
Massimiliano Culpo
8ad2cc2acf
Environments: Avoid inconsistent state on failed write (#18538)
Fixes #18441 

When writing an environment, there are cases where the lock file for
the environment may be removed. In this case there was a period 
between removing the lock file and writing the new manifest file
where an exception could leave the manifest in its old state (in
which case the lock and manifest would be out of sync).

This adds a context manager which is used to restore the prior lock
file state in cases where the manifest file cannot be written.
2020-09-11 10:57:29 -07:00
Adam J. Stewart
741bb9bafe
install/install_tree: glob support (#18376)
* install/install_tree: glob support

* Add unit tests

* Update existing packages

* Raise error if glob finds no files, document function raises
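
A rough sketch of the glob-expansion behavior (a standalone illustration under assumed semantics, not the actual `llnl.util.filesystem` functions):

```python
import glob
import os
import shutil


def install(src_pattern, dest):
    # Expand the glob and fail loudly when nothing matches.
    matches = glob.glob(src_pattern)
    if not matches:
        raise IOError("No files match pattern: {0}".format(src_pattern))
    for src in matches:
        shutil.copy(src, os.path.join(dest, os.path.basename(src)))
```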
2020-09-03 10:47:19 -07:00
Rui Xue
d9b945f663
Mac OS: support Python >= 3.8 by using fork-based multiprocessing (#18124)
As detailed in https://bugs.python.org/issue33725, starting new
processes with 'fork' on Mac OS is not guaranteed to work in general.
As of Python 3.8 the default process spawning mechanism was changed
to avoid this issue.

Spack depends on the fork-based method to preserve file descriptors
transparently, to preserve global state, and to avoid pickling some
objects. An effort is underway to remove dependence on fork-based
process spawning (see #18205). In the meantime, this allows Spack to
run with Python 3.8 on Mac OS by explicitly choosing to use 'fork'.

Co-authored-by: Peter Josef Scheibel <scheibel1@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2020-09-02 00:15:39 -07:00
Michael Kuhn
b6321cdfa9
microarchitectures: Fix icelake (#18151)
Some of the feature flags are named differently and clwb is missing on
my i7-1065G7. cascadelake and cannonlake might have similar problems but
I do not have access to those architectures to test.
2020-08-19 11:49:43 +02:00
Massimiliano Culpo
1dba0ce81b Removed references to BlueGene/Q in docs and comments 2020-08-10 17:09:09 -07:00
Massimiliano Culpo
193e8333fa Update packages.yaml format and support configuration updates
The YAML config for paths and modules of external packages has
changed: the new format allows a single spec to load multiple
modules. Spack will automatically convert from the old format
when reading the configs (the updates do not add new essential
properties, so this change in Spack is backwards-compatible).

With this update, Spack cannot modify existing configs/environments
without updating them (e.g. “spack config add” will fail if the
configuration is in a format that predates this PR). The user is
prompted to do this explicitly and commands are provided. All
config scopes can be updated at once. Each environment must be
updated one at a time.
2020-08-10 11:59:05 -07:00
Tamara Dahlgren
605c1a76e0
Reduce output verbosity with debug levels (#17546)
* switch from bool to int debug levels

* Added debug options and changed lock logging to use more detailed values

* Limit installer and timestamp PIDs to standard debug output

* Reduced verbosity of fetch/stage/install output, changing most to debug level 1

* Combine lock log methods; change build process install to debug

* Changed binary cache install messages to extraction messages
2020-07-23 00:49:57 -07:00
Todd Gamblin
4ea76dc95c change master/child to controller/minion in pty docstrings
PTY support used the concept of 'master' and 'child' processes. 'master'
has been renamed to 'controller' and 'child' to 'minion'.
2020-07-06 11:39:19 -07:00
Massimiliano Culpo
14599f09be
Separate Apple Clang from LLVM Clang (#17110)
* Separate Apple Clang from LLVM Clang

Apple Clang is a compiler of its own. All places
referring to "-apple" suffix have been updated.

* Hack to use a dash in 'apple-clang'

To be able to use autodoc from Sphinx we need
a valid Python name for the module that contains
Apple's Clang code.

* Updated packages to account for the existence of apple-clang

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Added unit test for XCode related functions

Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2020-06-25 11:18:48 -05:00
Todd Gamblin
7b8e5c8999 bugfix: don't use sys.stdout as a default arg value (#16541)
After migrating to `travis-ci.com`, we saw I/O issues in our tests --
tests that relied on `capfd` and `capsys` were failing. We've also seen
this in GitHub actions, and it's kept us from switching to them so far.

Turns out that the issue is that using streams like `sys.stdout` as
default arguments doesn't play well with `pytest` and output redirection,
as `pytest` changes the values of `sys.stdout` and `sys.stderr`. if these
values are evaluated before output redirection (as they are when used as
default arg values), output won't be captured properly later.

- [x] replace all stream default arg values with `None`, and only assign stream
      values inside functions.
- [x] fix tests we didn't notice were relying on this erroneous behavior
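
The fix pattern, in a nutshell (a sketch, not one of the actual changed functions):

```python
import sys


def log(message, stream=None):
    # Resolve the stream at call time, not at function-definition time, so
    # pytest's redirection of sys.stdout is picked up correctly.
    stream = sys.stdout if stream is None else stream
    stream.write(message + "\n")
```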
2020-05-09 00:56:18 -07:00
Todd Gamblin
a563884af3
spack install terminal output handling in foreground/background (#15723)
Makes the following changes:

* (Fixes #15620) tty configuration was failing when stdout was 
  redirected. The implementation now creates a pseudo terminal for
  stdin and checks stdout properly, so redirections of stdin/out/err
  should be handled now.
* Handles terminal configuration when the Spack process moves between
  the foreground and background (possibly multiple times) during a
  build.
* Spack adjusts terminal settings to allow users to enable/disable
  build process output to the terminal using a "v" toggle; abnormal
  exit cases (like CTRL-C) could leave the terminal in an unusable
  state. This is addressed here with a special-case handler which
  restores terminal settings.

Significantly extend testing of process output logger:

* New PseudoShell object for setting up a master and child process
  and configuring file descriptor inheritance between the two
* Tests for "v" verbosity toggle making use of the added PseudoShell
  object
* Added `uniq` function which takes a list of elements and replaces
  any consecutive sequence of duplicate elements with a single
  instance (e.g. "112211" -> "121")

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2020-04-15 11:05:41 -07:00
Greg Becker
75fafcece9
multiprocessing: allow Spack to run uninterrupted in background (#14682)
Spack currently cannot run as a background process uninterrupted because some of the logging functions used in the install method (especially to create the dynamic verbosity toggle with the v key) cause the OS to issue a SIGTTOU to Spack when it's backgrounded.

This PR puts the necessary gatekeeping in place so that Spack doesn't do anything that will cause a signal to stop the process when operating as a background process.
2020-03-20 12:22:32 -07:00
Dr. Christian Tacke
22a56a89c7
Use shutil.copy2 in install_tree (#15058)
Sometimes one needs to preserve the (relative) order of
mtimes on installed files.  So it's better to just copy
over all the metadata from the source tree to the install
tree. If permissions need fixing, that will be done anyway
afterwards.

One major use case is resource()s:
They're unpacked in one place and then copied to their
final place using install_tree(). If the resource is a
source tree using autoconf/automake, resetting mtimes
incorrectly might force unwanted autoconf/etc. calls.
2020-02-19 23:09:26 -06:00
Tamara Dahlgren
f2aca86502
Distributed builds (#13100)
Fixes #9394
Closes #13217.

## Background
Spack provides the ability to enable/disable parallel builds through two options: package `parallel` and configuration `build_jobs`.  This PR changes the algorithm to allow multiple, simultaneous processes to coordinate the installation of the same spec (and specs with overlapping dependencies).

The `parallel` (boolean) property sets the default for its package though the value can be overridden in the `install` method.

Spack's current parallel builds are limited to build tools supporting `jobs` arguments (e.g., `Makefiles`).  The number of jobs actually used is calculated as `min(config:build_jobs, # cores, 16)`, which can be overridden in the package or on the command line (i.e., `spack install -j <# jobs>`).

This PR adds support for distributed (single- and multi-node) parallel builds.  The goals of this work include improving the efficiency of installing packages with many dependencies and reducing the repetition associated with concurrent installations of (dependency) packages.

## Approach
### File System Locks
Coordination between concurrent installs of overlapping packages to a Spack instance is accomplished through bottom-up dependency DAG processing and file system locks.  The runs can be a combination of interactive and batch processes affecting the same file system.  Exclusive prefix locks are required to install a package while shared prefix locks are required to check if the package is installed.

Failures are communicated through a separate exclusive prefix failure lock, for concurrent processes, combined with a persistent store, for separate, related build processes.  The resulting file contains the failing spec to facilitate manual debugging.

### Priority Queue
Management of dependency builds changed from reliance on recursion to use of a priority queue where the priority of a spec is based on the number of its remaining uninstalled dependencies.  

Using a queue required a change to dependency build exception handling with the most visible issue being that the `install` method *must* install something in the prefix.  Consequently, packages can no longer get away with an install method consisting of `pass`, for example.

## Caveats
- This still only parallelizes a single-rooted build.  Multi-rooted installs (e.g., for environments) are TBD in a future PR.

Tasks:
- [x] Adjust package lock timeout to correspond to value used in the demo
- [x] Adjust database lock timeout to reduce contention on startup of concurrent
    `spack install <spec>` calls
- [x] Replace (test) package's `install: pass` methods with file creation since post-install 
    `sanity_check_prefix` will otherwise error out with `Install failed .. Nothing was installed!`
- [x] Resolve remaining existing test failures
- [x] Respond to alalazo's initial feedback
- [x] Remove `bin/demo-locks.py`
- [x] Add new tests to address new coverage issues
- [x] Replace built-in package's `def install(..): pass` to "install" something
    (i.e., only `apple-libunwind`)
- [x] Increase code coverage
2020-02-19 00:04:22 -08:00
Matt Belhorn
b43f658c39
Adds fma and vsx features to entire power arch family. (#14759)
VSX AltiVec extensions are supported by PowerISA from v2.06 (Power7+), but might
not be listed in features.

FMA has been supported by PowerISA since Power1, but might not be listed in
features.

This commit adds these features to all the power ISA family sets.
2020-02-06 16:42:05 +01:00
Greg Becker
52ab2421bb
Fix handling of filter_file exceptions (#14651) 2020-01-28 12:49:26 -08:00
Adam J. Stewart
11f2b61261 Use spack commands --format=bash to generate shell completion (#14393)
This PR adds a `--format=bash` option to `spack commands` to
auto-generate the Bash programmable tab completion script. It can be
extended to work for other shells.

Progress:

- [x] Fix bug in superclass initialization in `ArgparseWriter`
- [x] Refactor `ArgparseWriter` (see below)
- [x] Ensure that output of old `--format` options remains the same
- [x] Add `ArgparseCompletionWriter` and `BashCompletionWriter`
- [x] Add `--aliases` option to add command aliases
- [x] Standardize positional argument names
- [x] Tests for `spack commands --format=bash` coverage
- [x] Tests to make sure `spack-completion.bash` stays up-to-date
- [x] Tests for `spack-completion.bash` coverage
- [x] Speed up `spack-completion.bash` by caching subroutine calls

This PR also necessitates a significant refactoring of
`ArgparseWriter`. Previously, `ArgparseWriter` was mostly a single
`_write` method which handled everything from extracting the information
we care about from the parser to formatting the output. Now, `_write`
only handles recursion, while the information extraction is split into a
separate `parse` method, and the formatting is handled by `format`. This
allows subclasses to completely redefine how the format will appear
without overriding all of `_write`.

Co-Authored-by: Todd Gamblin <tgamblin@llnl.gov>
2020-01-22 21:31:12 -08:00
Todd Gamblin
4af6303086
copyright: update copyright dates for 2020 (#14328) 2019-12-30 22:36:56 -08:00
Todd Gamblin
8e8235043d package_prefs: move class-level cache to PackagePref instance
`PackagePrefs` has had a class-level cache of data from `packages.yaml` for
a long time, but it complicates testing and leads to subtle errors,
especially now that we frequently manipulate custom config scopes and
environments.

Moving the cache to instance-level doesn't slow down concretization or
the test suite, and it just caches for the life of a `PackagePrefs`
instance (i.e., for a single concretization), so we don't need to worry
about global state anymore.

- [x] Remove class-level caches from `PackagePrefs`
- [x] Add a cached _spec_order object on each `PackagePrefs` instance
- [x] Remove all calls to `PackagePrefs.clear_caches()`
2019-12-30 13:01:31 -08:00
Todd Gamblin
2dafeaf819
bugfix: colify_table should not revert to 1 column for non-tty (#14307)
Commands like `spack blame` were printing poorly when redirected to files,
as colify reverts to a single column when redirected.  This works for
list data but not tables.

- [x] Force a table by always passing `tty=True` from `colify_table()`
2019-12-28 11:26:31 -08:00
Massimiliano Culpo
d333e14721 tests: check min required python version with vermin (#14289)
This commit removes the `python_version.py` unit test module
and the vendored dependencies `pyqver2.py` and `pyqver3.py`.
It substitutes them with an equivalent check done using
`vermin` that is run as a separate workflow via Github Actions.

This allows us to delete 2 vendored dependencies that are unmaintained
and substitutes them with a maintained tool.

Also, updates the list of vendored dependencies.
2019-12-24 09:28:33 -08:00
t-karatsu
1e2c9d960c a64fx: fix typo in GCC flags (#14286) 2019-12-24 17:45:03 +01:00
Todd Gamblin
9b90d7e801 performance: reduce system calls required for remove_dead_links
`os.path.exists()` will report False if the target of a symlink doesn't
exist, so we can avoid a costly call to realpath here.
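
In other words (a sketch of the check, not the function itself):

```python
import os


def is_dead_link(path):
    # islink() looks at the link itself; exists() follows it and returns
    # False when the target is missing -- no realpath() call required.
    return os.path.islink(path) and not os.path.exists(path)
```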
2019-12-23 18:36:56 -08:00
Todd Gamblin
6c9467e8c6 lock transactions: avoid redundant reading in write transactions
Our `LockTransaction` class was reading overly aggressively.  In cases
like this:

```
1  with spack.store.db.read_transaction():
2    with spack.store.db.write_transaction():
3      ...
```

The `ReadTransaction` on line 1 would read in the DB, but the
WriteTransaction on line 2 would read in the DB *again*, even though we
had a read lock the whole time.  `WriteTransaction`s were only
considering nested writes to decide when to read, but they didn't know
when we already had a read lock.

- [x] Make `Lock.acquire_write()` return `False` in cases where we already had
       a read lock.
2019-12-23 18:36:56 -08:00
Todd Gamblin
bb517fdb84 lock transactions: ensure that nested write transactions write
If a write transaction was nested inside a read transaction, it would not
write properly on release, e.g., in a sequence like this, inside our
`LockTransaction` class:

```
1  with spack.store.db.read_transaction():
2    with spack.store.db.write_transaction():
3      ...
4  with spack.store.db.read_transaction():
   ...
```

The WriteTransaction on line 2 had no way of knowing that its
`__exit__()` call was the last *write* in the nesting, and it would skip
calling its write function.

The `__exit__()` call of the `ReadTransaction` on line 1 wouldn't know
how to write, and the file would never be written.

The DB would be correct in memory, but the `ReadTransaction` on line 4
would re-read the whole DB assuming that other processes may have
modified it.  Since the DB was never written, we got stale data.

- [x] Make `Lock.release_write()` return `True` whenever we release the
      *last write* in a nest.
2019-12-23 18:36:56 -08:00
Todd Gamblin
eb8fc4f3be lock transactions: fix non-transactional writes
Lock transactions were actually writing *after* the lock was
released. The code was looking at the result of `release_write()` before
writing, then writing based on whether the lock was released.  This is
pretty obviously wrong.

- [x] Refactor `Lock` so that a release function can be passed to the
      `Lock` and called *only* when a lock is really released.

- [x] Refactor `LockTransaction` classes to use the release function
  instead of checking the return value of `release_read()` / `release_write()`
2019-12-23 18:36:56 -08:00
Massimiliano Culpo
80495d83ed microarchitectures: fix ppc flags for clang (#14196) 2019-12-20 14:40:54 -08:00
Adam J. Stewart
18c2029fef Fix argparse rST parsing of help messages (#14014)
Thanks!
2019-12-17 10:23:22 -08:00
Massimiliano Culpo
5eca4f1470 microarchitectures: readable names for AArch64 vendors (#13825)
Vendors for ARM come out of `/proc/cpuinfo` as hex numbers instead of readable strings.

- Add support for associating vendor names with the hex numbers.
- Also move these mappings from Python code to `microarchitectures.json`
- Move darwin feature name mappings to `microarchitectures.json` as well
2019-12-17 00:47:50 -08:00
Axel Huebl
d705e96a63
Spec Header Dirs: Only first include/ (#13991)
* CUDA HeaderList: Unit Test

* Spec Header Dirs: Only first include/

Avoid matching recurringly nested include paths that usually
refer to internally shipped libraries in packages.
Example in CUDA Toolkit, shipping a libc++ fork internally
with libcu++ since 10.2.89:
`<prefix>/include/cuda/some/more/details/include/` or
`<prefix>/include/cuda/std/detail/libcxx/include`

regex: non-greedy first match of include

Co-Authored-By: Massimiliano Culpo <massimiliano.culpo@gmail.com>

* CUDA: Re-Enable 10.2.89 as Default
2019-12-06 23:47:03 -08:00
Massimiliano Culpo
e9f027210f Fixed x86-64 optimization flags for clang (#13913)
* Fixed x86-64 optimization flags for clang
* Fixed expected results in unit tests

Before, the flags used were the ones for llc, the underlying compiler from LLVM IR to machine assembly. It turns out that the semantics of `-march`, `-mtune` and `-mcpu` change from the clang front-end to llc.

I found no definitive reference for the flags submitted in this PR, but I checked the assembly on a vectorizable function using Godbolt's web-site.
2019-12-04 09:11:34 -08:00
Massimiliano Culpo
0684a58d16 Fixed detection for cascadelake microarchitecture (#13820)
fixes #13803
2019-11-21 13:09:48 -07:00
t-karatsu
513fe55fc3 Features/expand microarch for aarch64 (#13780)
* Add process to determine aarch64 microarchitecture

* add microarchitectures for thunderx2 and a64fx

* Add optimize flags for gcc on aarch64 family processors thunderx2 and a64fx.

* Add optimize flags for clang on aarch64 family processors thunderx2 and a64fx

* Add testing for thunderx2 and a64fx microarchitectures
2019-11-20 00:01:12 -07:00
Greg Becker
230c6aa326
cpu: fix clang flags for generic x86_64 (#13491)
* cpu: differentiate flags used for pristine LLVM vs. Apple's version
2019-10-30 17:16:13 -05:00
Chris Green
77af4684aa Improvements to detection of AMD architectures. (#13407)
New entry for K10 microarchitecture.

Reorder Zen* microarchitectures to avoid triggering as k10.

Remove some desktop-specific flags that were preventing Opteron Bulldozer/Piledriver/Steamroller/Excavator CPUs from being recognized as such.

Remove one or two flags which weren't produced in /proc/cpuinfo on older OS (RHEL6 and friends).
2019-10-24 15:48:54 -07:00
Chris Green
0913328812 Correctly identify Skylake CPUs on Darwin. (#13377)
* Correctly identify Skylake CPUs on Darwin.

* Add a test for haswell on Mojave.
2019-10-24 12:44:58 -05:00
Massimiliano Culpo
b14f18acda microarchitectures: look in /sbin and /usr/sbin for sysctl (#13365)
This PR ensures that on Darwin we always append /sbin and /usr/sbin to PATH, if they are not already present, when looking for sysctl.

* Make sure we look into /sbin and /usr/sbin for sysctl
* Refactor sysctl for better readability
* Remove marker to make test pass
2019-10-22 21:42:38 -07:00
Massimiliano Culpo
8808207ddf Fixed optimization flags support for old GCC versions (#13362)
These changes update our gcc microarchitecture descriptions based on manuals found here https://gcc.gnu.org/onlinedocs/ and assuming that new architectures are not added during patch releases.
2019-10-22 21:40:45 -07:00
Massimiliano Culpo
cfbdd2179e microarchitectures: add optimization flags for Intel compilers (#13345)
* Added optimization flags for Intel compilers with Intel CPUs
* Added optimization flags for Intel compilers with AMD CPUs
2019-10-22 00:33:59 -07:00