Commit Graph

462 Commits

Author SHA1 Message Date
Harmen Stoppels
d67b12eb79
locks: improved errors (#33477)
Instead of showing

```
==> Error: Timed out waiting for a write lock.
```

show

```
==> Error: Timed out waiting for a write lock after 1.200ms and 4 attempts on file: /some/file
```

so that we can actually see where acquiring a lock failed, even when not
running in debug mode.

And use pretty time units everywhere, so we don't get 1.45e-9 seconds
but 1.450ns etc.
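
As an editorial illustration, a minimal duration formatter in this spirit (a sketch only, not Spack's actual helper) might look like:

```python
# Minimal sketch of a human-friendly duration formatter (illustrative only,
# not Spack's implementation).
def pretty_seconds(seconds):
    for multiplier, unit in ((1, "s"), (1e3, "ms"), (1e6, "us"), (1e9, "ns")):
        if seconds * multiplier >= 1 or unit == "ns":
            return "%.3f%s" % (seconds * multiplier, unit)

print(pretty_seconds(1.45e-9))  # 1.450ns
print(pretty_seconds(1.2e-3))   # 1.200ms
```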
2022-10-24 11:54:49 +02:00
Brian Vanderwende
27cf8dddec
shell prompt: enclose control sequence in brackets (#33079)
When setting `PS1` in Bash, it's required to enclose non-printable characters in square brackets, so that the width of the terminal is handled correctly.

See https://www.gnu.org/software/bash/manual/bash.html#Controlling-the-Prompt
2022-10-10 07:29:58 -06:00
Sergey Kosukhin
4bc8f66388
autotools: extend patching of the libtool script (#30768)
* filter_file: introduce argument 'start_at'

* autotools: extend patching of the libtool script

* autotools: refactor _patch_usr_bin_file

* autotools: improve readability of the filtering

* autotools: keep the modification time of the configure scripts

* autotools: do not try to patch directories

* autotools: explain libtool patching for posterity

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
Co-authored-by: Harmen Stoppels <harmenstoppels@gmail.com>
2022-10-07 22:04:44 +00:00
John W. Parent
30f6fd8dc0
Fetching/decompressing: use magic numbers (#31589)
Spack currently depends on parsing filenames of downloaded files to
determine what type of archive they are and how to decompress them.
This commit adds a preliminary check based on magic numbers to
determine archive type (but falls back on name parsing if the
extension type cannot be determined).

As part of this work, this commit also enables decompression of
.tar.xz-compressed archives on Windows.
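
A hedged sketch of the general idea (the table and names are illustrative, not Spack's code):

```python
# Illustrative sketch: guess an archive type from its magic number, falling
# back on the file extension when the header is not recognized.
MAGIC_NUMBERS = {
    b"\x1f\x8b": "gz",          # gzip
    b"BZh": "bz2",              # bzip2
    b"\xfd7zXZ\x00": "xz",      # xz (covers .tar.xz)
    b"PK\x03\x04": "zip",       # zip
}

def guess_archive_type(path):
    with open(path, "rb") as f:
        header = f.read(8)
    for magic, kind in MAGIC_NUMBERS.items():
        if header.startswith(magic):
            return kind
    # Fall back on the filename if the header is unknown.
    for ext in ("tar.gz", "tgz", "tar.bz2", "tar.xz", "zip", "tar"):
        if path.endswith("." + ext):
            return ext
    return None
```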
2022-09-26 00:01:42 -07:00
John W. Parent
deca34676f
Bugfix: find_libraries (#32653)
53a7b49 created a reference error which broke `.libs` (and
`find_libraries`) for many packages. This fixes the reference
error and improves the testing for `find_libraries` by actually
checking the extension types of libraries that are retrieved by
the function.
2022-09-14 14:45:42 -06:00
John W. Parent
53a7b49619
Windows rpath support (#31930)
Add a post-install step which runs (only) on Windows to modify an
install prefix, adding symlinks to all dependency libraries.

Windows does not have the same concept of RPATHs as Linux, but when
resolving symbols will check the local directory for dependency
libraries; by placing a symlink to each dependency library in the
directory with the library that needs it, the package can then
use all Spack-built dependencies.

Note:

* This collects dependency libraries based on Package.rpath, which
  includes only direct link dependencies
* There is no examination of libraries to check what dependencies
  they require, so all libraries of dependencies are symlinked
  into any directory of the package which contains libraries
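
A rough sketch of the post-install step's idea (simplified; not the actual Spack hook):

```python
# Simplified sketch: for every direct link dependency, symlink its libraries
# into the directory that holds this package's own libraries.
import os

def link_dependency_libraries(pkg_lib_dir, dep_lib_dirs):
    for dep_dir in dep_lib_dirs:
        for name in os.listdir(dep_dir):
            if not name.lower().endswith((".dll", ".lib")):
                continue
            link = os.path.join(pkg_lib_dir, name)
            if not os.path.exists(link):
                os.symlink(os.path.join(dep_dir, name), link)
```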
2022-09-13 10:28:29 -07:00
Tom Scogland
762ba27036
Make GHA tests parallel by using xdist (#32361)
* Add two no-op jobs named "all-prechecks" and "all"

These are a suggestion from @tgamblin: they are stable, named markers we
can use from GitLab and possibly for required checks, to make CI more
resilient to refactors changing the names of specific checks.

* Enable parallel testing using xdist for unit testing in CI

* Normalize tmp paths to deal with macos

* add -u flag compatibility to spack python

As of now, it is accepted and ignored.  The entire reason for doing this
is xdist, which invokes the interpreter as `python -u spack python` and
then passes `-u` along.  It should never be used without explicitly
passing -u to the executing python interpreter.

* use spack python in xdist to support python 2

When running on python2, spack has many import cycles unless started
through main.  To allow that, this uses `spack python` as the
interpreter, leveraging the `-u` support so xdist doesn't error out when
it unconditionally requests unbuffered binary IO.

* Use shutil.move to account for tmpdir being in a separate filesystem sometimes
2022-09-07 20:12:57 +02:00
Seth R. Johnson
c7292aa4b6
Fix spack locking on some NFS systems (#32426)
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2022-09-06 09:50:59 -07:00
Wouter Deconinck
8f34a3ac5c
filesystem: in recursive mtime, check only files that exist (#32175)
* filesystem: use lstat in recursive mtime

When a `develop` path contains a dead symlink, the `os.stat` in the recursive `mtime` determination trips up over it.

Closes #32165.
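
For illustration, a recursive mtime in this spirit can skip dead symlinks like so (a sketch, not the Spack function):

```python
# Sketch: newest modification time under a path, using lstat on each entry
# so that a dead symlink does not raise.
import os

def newest_mtime(path):
    newest = os.lstat(path).st_mtime
    for root, dirs, files in os.walk(path):
        for name in dirs + files:
            entry = os.path.join(root, name)
            newest = max(newest, os.lstat(entry).st_mtime)
    return newest
```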
2022-08-16 21:06:31 +00:00
Massimiliano Culpo
1913dc2da3
Fix performance regression with spack mirror create --all (#32005)
This PR fixes the performance regression reported in #31985 and a few
other issues found while refactoring the spack mirror create command.

Modifications:

* (Primary) Do not require concretization for
  `spack mirror create --all`
* Forbid using --versions-per-spec together with --all
* Fixed a few issues when reading specs from input file (specs were
  not concretized, command would fail when trying to mirror
  dependencies)
* Fix issue with default directory for spack mirror create not being
  canonicalized
* Add more unit tests to poke spack mirror create
* Skip externals also when mirroring environments
* Changed slightly the wording for reporting (it was mentioning
  "Successfully created" even in presence of errors)
* Fix issue with colify (was not called properly during error
  reporting)
2022-08-11 16:51:01 -07:00
Harmen Stoppels
f53b522572
Add base class for directory visitor (#32008) 2022-08-09 15:43:30 +02:00
Massimiliano Culpo
aeac72e1e3 Style fixes 2022-08-02 10:52:52 -07:00
Massimiliano Culpo
2b3f350071 Use __slots__ for fast attribute access 2022-08-02 10:52:52 -07:00
Todd Gamblin
f52f6e99db black: reformat entire repository with black 2022-07-31 13:29:20 -07:00
Massimiliano Culpo
7f2b5e8e57
spack.repo.get() can only be called on concrete specs (#31411)
The goal of this PR is to make clearer where we need a package object in Spack as opposed to a package class.

We currently instantiate a lot of package objects when we could make do with a class.  We should use the class
when we only need metadata, and we should only instantiate and use an instance of `PackageBase` at build time.

Modifications:
- [x] Remove the `spack.repo.get` convenience function (which was used in many places, and not really needed)
- [x] Use `spack.repo.path.get_pkg_class` wherever possible
- [x] Try to route most of the need for `spack.repo.path.get` through `Spec.package`
- [x] Introduce a non-data descriptor that can be used as a decorator, to have "class level properties" (see the sketch after this list)
- [x] Refactor unit tests that had to be modified to reduce code duplication
- [x] `Spec.package` and `Repo.get` now require a concrete spec as input
- [x] Remove `RepoPath.all_packages` and `Repo.all_packages`
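
The non-data descriptor mentioned in the checklist above can be sketched as follows (illustrative names, not necessarily Spack's):

```python
# Minimal non-data descriptor usable as a decorator for "class level
# properties" (no __set__ is defined, which is what makes it non-data).
class classproperty(object):
    def __init__(self, callback):
        self.callback = callback

    def __get__(self, instance, owner):
        return self.callback(owner)


class MyPackage(object):
    name = "mypackage"

    @classproperty
    def fullname(cls):
        return "builtin." + cls.name


print(MyPackage.fullname)  # accessible on the class, no instance required
```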
2022-07-12 19:45:24 -04:00
John W. Parent
5b45df5269
Update decompression support on Windows (#25185)
Most package installations include compressed source files. This
adds support for common archive types on Windows:

* Add support for using system 7zip functionality to decompress .Z
  files when available (and on Windows, use 7zip for .xz archives)
* Default to using built-in Python support for tar/bz2 decompression
  (note that Python tar documentation mentions preservation of file
  permissions)
* Add tests for decompression support
* Extract logic for handling exploding archives (i.e. compressed
  archives that expand to more than one base file) into an
  exploding_archive_catch context manager in the filesystem module
2022-06-06 18:14:43 -07:00
Jordan Galby
c2fd98ccd2
Fix spack install chgrp on symlinks (#30743)
Fixes missing chgrp on symlinks in package installations, and errors on
symlinks referencing non-existent or non-writable locations.

Note: `os.chown(.., follow_symlinks=False)` is python3 only, but
`os.lchown` exists in both versions.
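
The pattern the fix relies on can be illustrated as (a sketch, not the actual helper):

```python
# Sketch: change the group of a symlink itself (not its target) using
# os.lchown, which exists on both Python 2 and 3.
import grp
import os

def chgrp_link(path, group_name):
    gid = grp.getgrnam(group_name).gr_gid
    os.lchown(path, -1, gid)  # -1 leaves the owner unchanged
```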
2022-05-19 08:50:24 -07:00
Danny McClanahan
a681fd7b42
Introduce GroupedExceptionHandler and use it to simplify bootstrap error handling (#30192) 2022-05-15 10:59:11 +00:00
John W. Parent
9bcf496f21
Windows permissions: uninstalling and cleaning stages (#29714)
When running on Windows, Spack may generate files in the stage/install
prefixes that do not have write permissions, which prevents the
removal of those directories (e.g. when cleaning stages or uninstalling).
There should be a refactoring to avoid this in the first place, but that
is assumed to be longer term, so the temporary fix is to make such files
writable if they are not. This PR:

* Automatically handles these permissions errors when uninstalling
  packages from the Spack root (makes them writable)
* Updates similar already-existing logic when removing Spack-managed
  stage directories (the error-handling was assuming all errors were
  permissions errors and was therefore handling other errors
  inappropriately)

Note: these permissions issues only appear on Windows so this logic is
only applied there (permissions are not modified for this purpose on
Linux etc.).

This also adds special handling for a case where calling `isdir`
on an `os.DirEntry` object would fail for improperly-created symlinks
(e.g. on Windows, using `os.symlink` without `target_is_directory=True`).
Note this specific issue only came up when enabling link_tree tests
(specifically `source_merge_visitor_cant_be_cyclical`).
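
The removal workaround follows a common Windows pattern, sketched here (not Spack's exact code):

```python
# Sketch of the workaround: if removing a tree fails because a file is
# read-only, make the file writable and retry the failed operation.
import os
import shutil
import stat

def _chmod_and_retry(func, path, exc_info):
    os.chmod(path, stat.S_IWRITE)
    func(path)

def remove_tree(path):
    shutil.rmtree(path, onerror=_chmod_and_retry)
```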
2022-05-09 10:28:14 -07:00
Brian Van Essen
4576fbe648
OpenCV and OpenBLAS: add external find support (#30240)
Added support for finding the OpenCV package via the find external
command. Included support for identifying variants based on available
shared libraries.

Added support to finding the OpenBLAS package via the find external
command.

Enabled packages to show that they can be discovered via the find
external command in the info message.

Updated the OpenCV and OpenBLAS packages to use the extensible search
mechanism for library extensions on multiple OS platforms.

Corrected how find externals works on Darwin for OpenCV and OpenBLAS
to accommodate that the version numbers are placed before the file
extension instead of after it, as on Linux.

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2022-05-03 09:04:50 +02:00
Harmen Stoppels
e7a0b952ab
install --overwrite: use rename instead of tmpdir (#29746)
I tried to use --overwrite on nvhpc, but nvhpc's install size is 16GB. It seems
better to do os.rename within the same directory than to move the directory to
`/tmp`.

- [x] install --overwrite: use rename instead of tmpdir
- [x] use tempfile
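
The idea can be sketched as follows (hypothetical helper, not the PR's code):

```python
# Sketch: back up an existing install prefix by renaming it into a temporary
# directory created next to it, so no data ever leaves the filesystem.
import os
import tempfile

def backup_prefix(prefix):
    parent = os.path.dirname(prefix)
    backup_root = tempfile.mkdtemp(dir=parent)
    backup = os.path.join(backup_root, os.path.basename(prefix))
    os.rename(prefix, backup)  # cheap: same filesystem, no copy
    return backup
```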
2022-04-28 01:42:42 -06:00
John W. Parent
83b91246b1
Windows: fix termination of process output redirection (#29923)
The parent thread in the process stdout redirection logic on Windows
was closing a file that was being read in a child thread, which led to
error-based termination of the reader thread. This updates the
interaction to avoid the error.
2022-04-26 12:56:13 -07:00
Massimiliano Culpo
ff04d1bfc1
Use the non-deprecated MetaPathFinder interface (#29745)
* Extract the MetaPathFinder and Loaders for packages in their own classes

https://peps.python.org/pep-0451/

Currently, RepoPath and Repo implement the (deprecated) interface of
MetaPathFinder (find_module) and of Loader (load_module). This commit
extracts both of them and places the code in their own classes.

The MetaPathFinder interface is updated to contain both the deprecated
"find_module" (for Python 2.7 support) and the recommended "find_spec".
Update of the Loader interface is deferred at a subsequent commit.

* Move the lines to be prepended inside "RepoLoader"

Also adjust the naming of a few variables too

* Remove spack.util.imp, since code is only used in spack.repo

* Remove support for loading Python modules on Python > 3 but < 3.5

* Remove `Repo._create_namespace`

This function was interacting badly with the MetaPathFinder
and causing issues with "normal" imports. Removing the
function allows us to do things like:
```python
import spack.pkg.builtin.mpich
cls = spack.pkg.builtin.mpich.Mpich
```

* Remove code needed to trigger the Singleton evaluation

The finder is coded in a way to trigger the Singleton,
so we don't need external code now that we register it
at module level into `sys.meta_path`.

* Add unit tests
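
For reference, the modern protocol from PEP 451 that this commit moves toward looks roughly like this (a generic sketch, not Spack's finder; the module prefix and file path are hypothetical):

```python
# Generic PEP 451-style finder: implement find_spec instead of the deprecated
# find_module, and register the finder on sys.meta_path.
import importlib.util
import sys

class PackageFinder(object):
    def find_spec(self, fullname, path=None, target=None):
        if not fullname.startswith("spack.pkg."):
            return None  # let other finders handle unrelated modules
        # Hypothetical location of the package module's source file.
        filename = "/path/to/repo/" + fullname.rsplit(".", 1)[-1] + "/package.py"
        return importlib.util.spec_from_file_location(fullname, filename)

sys.meta_path.append(PackageFinder())
```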
2022-04-07 15:58:20 -07:00
Harmen Stoppels
558f7f007e
link_tree.py: format conflict error message (#29870)
Show each `[src a] and [src b] both project to [dst]` on a separate
line.
2022-04-06 07:27:02 +02:00
Massimiliano Culpo
f2fc4ee9af
Allow conditional possible values in variants (#29530)
Allow declaring possible values for variants with an associated condition. If the variant takes one of those values, the condition is imposed as a further constraint.

The idea of this PR is to implement part of the mechanisms needed for modeling [packages with multiple build-systems]( https://github.com/spack/seps/pull/3). After this PR the build-system directive can be implemented as:
```python
variant(
    'build-system',
    default='cmake',
    values=(
        'autotools',
        conditional('cmake', when='@X.Y:')
    ), 
    description='...',
)
```

Modifications:
- [x] Allow conditional possible values in variants
- [x] Add a unit-test for the feature
- [x] Add documentation
2022-04-04 17:37:57 -07:00
Brian Van Essen
29da99427e
"spack external find": also find library-only packages (#28005)
Update "spack external find --all" to also find library-only packages.
A Package can add a ".libraries" attribute, which is a list of regular
expressions to use to find libraries associated with the Package.
"spack external find --all" will search LD_LIBRARY_PATH for potential
libraries.

This PR adds examples for NCCL, RCCL, and hipblas packages. These
examples specify the suffix ".so" for the regular expressions used
to find libraries, so generally are only useful for detecting library
packages on Linux.
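
A hedged sketch of the detection idea described above (the pattern and names are illustrative):

```python
# Sketch: scan directories from LD_LIBRARY_PATH for filenames matching a
# package's library regular expressions.
import os
import re

def find_libraries_on_ld_path(patterns):
    regexes = [re.compile(p) for p in patterns]
    found = []
    for directory in os.environ.get("LD_LIBRARY_PATH", "").split(os.pathsep):
        if not os.path.isdir(directory):
            continue
        for name in os.listdir(directory):
            if any(r.search(name) for r in regexes):
                found.append(os.path.join(directory, name))
    return found

# e.g. an NCCL-like pattern with the ".so" suffix mentioned above
print(find_libraries_on_ld_path([r"libnccl\.so"]))
```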
2022-04-01 13:30:10 -07:00
John W. Parent
88b1bf751d
Windows Support: Fixup Perl build (#29711)
* Add pl2bat to PATH: Windows on Perl requires the script pl2bat.bat
  and Perl to be available to the installer via the PATH. The build
  and dependent environments of Perl on Windows have the install
  prefix bin added to the PATH.
* symlink with win32file module instead of using Executable to
  call mklink (mklink is a shell function and so is not accessible
  in this manner).
2022-03-31 11:47:11 -07:00
Massimiliano Culpo
048a0de35b
Use the appropriate function to remove files in the stage directory (#29690)
We shouldn't be using "remove_linked_tree" to remove the lock file,
since that function expects to receive a directory path as an
argument.

Also, as a further measure to avoid regression, this commit restores
the "ignore_errors=True" argument on linux and adds a unit test
checking that "remove_linked_tree" doesn't change file permissions
as a side effect of a failure to remove.
2022-03-25 08:39:09 +01:00
Harmen Stoppels
59e522e815
environment views: single pass view generation (#29443)
Reduces the number of stat calls to a bare minimum:
- Single pass over src prefixes
- Handle projection clashes in memory

Symlinked directories in the src prefixes are now conditionally
transformed into directories with symlinks in the dst dir. Notably
`intel-mkl`, `cuda` and `qt` have top-level symlinked directories that
previously resulted in empty directories in the view. We now avoid
cycles and possible exponential blowup by only expanding symlinks that:
- point to dirs deeper in the folder structure;
- are at a fixed depth of 2.
2022-03-24 03:54:33 -06:00
Harmen Stoppels
773da7ceba
python: drop dependency on file for script check (#29513)
`file` was used to detect Python scripts with shebangs, so that the interpreter could be changed from <python prefix> to <view path>. With this change, we detect shebangs using Python instead, so that `file` is no longer required.
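
The replacement check can be sketched in a few lines (illustrative, not the exact Spack code):

```python
# Sketch: detect a Python shebang by reading the first line of the file,
# with no external `file` utility involved.
def has_python_shebang(path):
    with open(path, "rb") as f:
        first_line = f.readline(4096)
    return first_line.startswith(b"#!") and b"python" in first_line
```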
2022-03-23 10:31:23 +01:00
John Parent
4aee27816e Windows Support: Testing Suite integration
Broaden support for execution of the test suite
on Windows.
General bug and review fixups
2022-03-17 09:01:01 -07:00
John Parent
cf1349ba35 "spack commands --update-completion" 2022-03-17 09:01:01 -07:00
John W. Parent
e4d4a5193f Path handling (#28402)
Consolidate Spack's internal filepath logic to a select
few places and refactor to consistent internal usage of
os.path utilities. Creates a prefix, and a series of utilities
in the path utility module that facilitate handling paths
in a platform-agnostic manner.

Convert Windows paths to posix paths internally

Prefer posixpath.join instead of os.path.join

Updated util/ directory to account for Windows integration

Co-authored-by: Stephen Crowell <stephen.crowell@khq.kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>

Module template format for windows (#23041)
2022-03-17 09:01:01 -07:00
John Parent
df4129d395 Expand external find for Windows (#27588)
* Incorporate new search location

* Add external user option

* proper doc string

* Explicit commands in getting started

* raise during chgrp on Win

recover installer changes

Notate admin privileges

Windows phase install hooks

Find external python and install ninja (#23496)

Allow external find python to find windows python and spack install ninja

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
2022-03-17 09:01:01 -07:00
John Parent
3a994032f8 Spack on Windows package ports
CMake - Windows Bootstrap (#25825)

Remove hardcoded cmake compiler (#26410)

Revert breaking cmake changes
Ensure no autotools on Windows

Perl on Windows (#26612)

Python source build windows (#26313)

Reconfigure sysconf for Windows

Python2.6 compatibility

Fixup new sbang tests for Windows

Ruby support (#28287)

Add NASM support (#28319)

Add mock Ninja package for testing
2022-03-17 09:01:01 -07:00
loulawrence
f587a9ce68 use pytest stdout/err capture (#22584)
* On windows, write to StringIO without redirection in test cases to avoid conflicting with logger
2022-03-17 09:01:01 -07:00
Betsy McPhail
d4d101f57e Allow 'spack external find' to find executables on the system path (#22091)
Co-authored-by: Lou Lawrence <lou.lawrence@kitware.com>
2022-03-17 09:01:01 -07:00
Betsy McPhail
fb0e91c534 Windows: Symlink support
To provide Windows-compatible functionality, spack code should use
llnl.util.symlink instead of os.symlink. On non-Windows platforms
and on Windows where supported, os.symlink will still be used.

Use junctions when symlinks aren't supported on Windows (#22583)

Support islink for junctions (#24182)

Windows: Update llnl/util/filesystem

* Use '/' as path separator on Windows.
* Recognize that Windows paths start with '<Letter>:/' instead of '/'

Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
2022-03-17 09:01:01 -07:00
Betsy McPhail
a7de2fa380 Create rename utility function
os.rename() fails on Windows if file already exists.
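
A minimal sketch of such a utility (assuming the semantics described here, not the exact implementation):

```python
# Sketch: a rename that also works on Windows, where os.rename refuses to
# overwrite an existing destination.
import os
import sys

def rename(src, dst):
    if sys.platform == "win32" and os.path.exists(dst):
        os.remove(dst)
    os.rename(src, dst)
```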

Create getuid utility function (#21736)

On Windows, replace os.getuid with ctypes.windll.shell32.IsUserAnAdmin().

Tests: Use getuid util function

Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: Betsy McPhail <betsy.mcphail@kitware.com>
2022-03-17 09:01:01 -07:00
Betsy McPhail
b60a0eea01 Workarounds for install errors on Windows (#21890)
1. Forwarding sys.stdin, e.g. use input_multiprocess_fd,
gives an error on Windows. Skipping for now.

3.  subprocess_context needs to serialize for Windows, like it does
for Mac.

Co-authored-by: lou.lawrence@kitware.com <lou.lawrence@kitware.com>
Co-authored-by: John Parent <john.parent@kitware.com>
2022-03-17 09:01:01 -07:00
Ben Cowan
81bc00d61f Adding basic Windows features (#21259)
* Snapshot of some MSVC infrastructure added during experiments a while ago.  Rebasing from spack/develop.

* Added platform and OS definitions for Windows.

* Updated Windows platform file to conform to new archspec use.

* Added Windows as a platform; introduced some debugging code.

* Added type annotations.

* Fixed copyright.

* Removed print statements.

* Ensure `spack arch` returns correctly on Windows (#21428)

* Correctly identify windows as 'windows-Windows10-AMD64'
2022-03-17 09:01:01 -07:00
Massimiliano Culpo
2cd5c00923
Allow for multiple dependencies/dependents from the same package (#28673)
Change the internal representation of `Spec` to allow for multiple dependencies or
dependents stemming from the same package. This change permits to represent cases
which are frequent in cross compiled environments or to bootstrap compilers.

Modifications:
- [x] Substitute `DependencyMap` with `_EdgeMap`. The main differences are that the
      latter does not support direct item assignment and can be modified only through its
      API. It also provides a `select_by` method to query items.
- [x] Reworked a few public APIs of `Spec` to get list of dependencies or related edges.
- [x] Added unit tests to prevent regression on #11983 and prove the synthetic construction
      of specs with multiple deps from the same package. 

Since #22845 went in first, this PR reuses that format and thus it should not change hashes.
The same package may be present multiple times in the list of dependencies with different
associated specs (each with its own hash).
2022-03-10 11:53:45 -08:00
Harmen Stoppels
dc78f4c58a
environment.py: allow link:run (#29336)
* environment.py: allow link:run

Some users want minimal views, excluding run-type dependencies, since
those type of dependencies are covered by rpaths and the symlinked
libraries in the view aren't used anyways.

With this change, an environment like this:

```
spack:
  specs: ['py-flake8']
  view:
    default:
      root: view
      link: run
```

includes python packages and python, but no link type deps of python.
2022-03-09 12:35:26 -08:00
Danny McClanahan
2c331a1d7f
make @llnl.util.lang.memoized support kwargs (#21722)
* make memoized() support kwargs

* add testing for @memoized
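
A minimal kwargs-aware memoization decorator in this spirit (a sketch, not the llnl.util.lang code):

```python
# Sketch: cache results keyed on both positional and keyword arguments.
import functools

def memoized(func):
    cache = {}

    @functools.wraps(func)
    def _memoized(*args, **kwargs):
        key = (args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]

    return _memoized
```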
2022-03-02 11:12:15 -08:00
Todd Gamblin
36b0730fac
Add spack --bootstrap option for accessing bootstrap store (#25601)
We can see what is in the bootstrap store with `spack find -b`, and you can clean it with `spack
clean -b`, but we can't do much else with it, and if there are bootstrap issues they can be hard to
debug.

We already have `spack --mock`, which allows you to swap in the mock packages from the command
line. This PR introduces `spack -b` / `spack --bootstrap`, which runs all of spack with
`ensure_bootstrap_configuration()` set. This means that you can run `spack -b find`, `spack -b
install`, `spack -b spec`, etc. to see what *would* happen with bootstrap configuration, to remove
specific bootstrap packages, etc. This will hopefully make developers' lives easier as they deal
with bootstrap packages.

This PR also uses a `nullcontext` context manager. `nullcontext` has been implemented in several
other places in Spack, and this PR consolidates them to `llnl.util.lang`, with a note that we can
delete the function if we ever require a new enough Python.

- [x] introduce `spack --bootstrap` option
- [x] consolidated all `nullcontext` usages to `llnl.util.lang`
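
The consolidated `nullcontext` is simple enough to sketch (contextlib has shipped an equivalent since Python 3.7):

```python
# Sketch: a no-op context manager for Pythons without contextlib.nullcontext.
from contextlib import contextmanager

@contextmanager
def nullcontext(*args, **kwargs):
    yield
```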
2022-02-22 12:35:34 -07:00
Adam J. Stewart
3c1b2c0fc9
find_libraries: search for both .so and .dylib on macOS (#28924) 2022-02-16 14:07:44 +01:00
Danny McClanahan
0c2de252f1
introduce llnl.util.compat to remove sys.version_info checks (#21720)
- also split typing.py into typing_extensions and add py2 shims
2022-01-21 12:32:52 -08:00
Todd Gamblin
93377942d1 Update copyright year to 2022 2022-01-14 22:50:21 -08:00
Harmen Stoppels
fb93979b94
Set backup=False by default in filter_file (#28036) 2021-12-16 13:50:20 +01:00
Paul Ferrell
c0edb17b93
Handle byte sequences which are not encoded as UTF8 while logging. (#21447)
Fix builds which produce lines with non-UTF8 output while logging.
The alternative is to read in binary mode, and then decode while
ignoring errors.
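
The decode-while-ignoring-errors alternative amounts to something like (a sketch):

```python
# Sketch: read raw bytes and decode them without failing on invalid UTF-8.
def decode_log_line(raw):
    return raw.decode("utf-8", errors="ignore")
```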
2021-11-29 13:27:02 +01:00
Massimiliano Culpo
fa7189b480
Remove support for Python 2.6 (#27256)
Modifications:
- [x] Removed `centos:6` unit test, adjusted vermin checks
- [x] Removed backport of `collections.OrderedDict`
- [x] Removed backport of `functools.total_ordering`
- [x] Removed Python 2.6 specific skip markers in unit tests
- [x] Fixed a few minor Python 2.6 related TODOs in code

Updating the vendored dependencies will be done in separate PRs
2021-11-23 09:06:17 -08:00
Maxim Belkin
0ab5d42bd5
Fix typos (ouptut) (#27317) 2021-11-09 16:11:34 -06:00
Massimiliano Culpo
6063600a7b
containerize: pin the Spack version used in a container (#21910)
This PR permits to specify the `url` and `ref` of the Spack instance used in a container recipe simply by expanding the YAML schema as outlined in #20442:
```yaml
container:
  images:
    os: amazonlinux:2
    spack:
      ref: develop
      resolve_sha: true
```
The `resolve_sha` option, if true, verifies the `ref` by cloning the Spack repository in a temporary directory and transforming any tag or branch name to a commit sha. When this new ability is leveraged an additional "bootstrap" stage is added, which builds an image with Spack setup and ready to install software. The Spack repository to be used can be customized with the `url` keyword under `spack`.

Modifications:
- [x] Permit to pin the version of Spack, either by branch or tag or sha
- [x] Added a few new OSes (centos:8, amazonlinux:2, ubuntu:20.04, alpine:3, cuda:11.2.1)
- [x] Permit to print the bootstrap image as a standalone
- [x] Add documentation on the new part of the schema
- [x] Add unit tests for different use cases
2021-10-25 13:09:27 -07:00
Morten Kristensen
9e11e62bca
py-vermin: add latest version 1.3.1 (#26920)
* py-vermin: add latest version 1.3.1

* Exclude line from Vermin since version is already being checked for

Vermin 1.3.1 finds that `encoding` kwarg of builtin `open()` requires Python 3+.
2021-10-24 16:49:05 +00:00
Michael Kuhn
d1f3279607
installer: Support showing status information in terminal title (#16259)
When installing packages with many dependencies there is no easy way
to judge the current progress (apart from running `spack spec -I pkg`
in another terminal). This change allows Spack to update the terminal's
title with status information, including its current progress as well as
information about the current and total number of packages.
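
Status can be shown in the title with a standard escape sequence, roughly like this (a sketch, not the installer's code):

```python
# Sketch: set the terminal title to reflect install progress using the
# common xterm title escape sequence.
import sys

def set_terminal_title(done, total, pkg):
    sys.stdout.write("\033]0;Installing {0} [{1}/{2}]\007".format(pkg, done, total))
    sys.stdout.flush()
```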
2021-10-11 17:54:59 +02:00
Harmen Stoppels
18760de972
Spack install: handle failed restore of backup (#25647)
Spack has logic to preserve an installation prefix when it is being
overwritten: if the new install fails, the old files are restored.
This PR adds error handling for when this backup restoration fails
(i.e. the new install fails, and then some unexpected error prevents
restoration from the backup).
2021-10-01 11:40:48 -07:00
Harmen Stoppels
f5ab3ad82a
Fix: --overwrite backs up old install dir, but it gets destroyed anyways (#25583)
* Make sure PackageInstaller does not remove the just-restored
  install dir after failure in spack install --overwrite
* Remove cryptic error message and rethrow actual error
2021-08-27 12:41:24 -07:00
Massimiliano Culpo
1ab6f30fdd
Remove fork_context from llnl.util.lang (#25620)
This object was introduced in #18124 and was later superseded by
#18205, which removed any use of the object.
2021-08-26 09:39:59 -07:00
Todd Gamblin
1374fea5d9
locks: only open lockfiles once instead of for every lock held (#24794)
This adds lockfile tracking to Spack's lock mechanism, so that we ensure that there
is only one open file descriptor per inode.

The `fcntl` locks that Spack uses are associated with an inode and a process.
This is convenient, because if a process exits, it releases its locks.
Unfortunately, this also means that if you close a file, *all* locks associated
with that file's inode are released, regardless of whether the process has any
other open file descriptors on it.

Because of this, we need to track open lock files so that we only close them when
a process no longer needs them.  We do this by tracking each lockfile by its
inode and process id.  This has several nice properties:

1. Tracking by pid ensures that, if we fork, we don't inadvertently track the parent
   process's lockfiles. `fcntl` locks are not inherited across forks, so we'll
   just track new lockfiles in the child.
2. Tracking by inode ensures that references are counted per inode, and that we don't
   inadvertently close a file whose inode still has open locks.
3. Tracking by both pid and inode ensures that we only open lockfiles the minimum
   number of times necessary for the locks we have.

Note: as mentioned elsewhere, these locks aren't thread safe -- they're designed to
work in Python and assume the GIL.

Tasks:
- [x] Introduce an `OpenFileTracker` class to track open file descriptors by inode.
- [x] Reference-count open file descriptors and only close them if they're no longer
      needed (this avoids inadvertently releasing locks that should not be released).
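
A rough sketch of the reference-counting idea (not Spack's `OpenFileTracker`):

```python
# Sketch: reference-count open lock files by (pid, inode) so that each inode
# is opened at most once per process and only closed when no lock needs it.
import os

class OpenFileTracker(object):
    def __init__(self):
        self._files = {}  # (pid, inode) -> [file object, reference count]

    def get(self, path):
        key = (os.getpid(), os.stat(path).st_ino)
        entry = self._files.setdefault(key, [None, 0])
        if entry[0] is None:
            entry[0] = open(path, "r+")
        entry[1] += 1
        return entry[0]

    def release(self, path):
        key = (os.getpid(), os.stat(path).st_ino)
        entry = self._files[key]
        entry[1] -= 1
        if entry[1] == 0:
            entry[0].close()
            del self._files[key]
```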
2021-08-24 14:08:34 -07:00
Harmen Stoppels
220a87812c
New spack.environment.active_environment api, and make spack.environment not depend on spack.cmd. (#25439)
* Refactor active environment getters

- Make `spack.environment.active_environment` a trivial getter for the active
environment, replacing `spack.environment.get_env` when the arguments are
not needed
- New method `spack.cmd.require_active_environment(cmd_name)` for 
commands that require an environment (rather than abusing 
get_env/active_environment)
- Clean up calling code to call spack.environment.active_environment or
spack.cmd.require_active_environment as appropriate
- Remove the `-e` parsing from `active_environment`, because `main.py` is
responsible for processing `-e` and already activates the environment.
- Move `spack.environment.find_environment` to
`spack.cmd.find_environment`, to avoid having spack.environment aware
of argparse.
- Refactor `spack install` command so argument parsing is all handled in the
command, no argparse in spack.environment or spack.installer
- Update documentation

* Python 2: toplevel import errors only with 'as ev'

In two files, `import spack.environment as ev` leads to errors.
These errors are not well understood ("'module' object has no attribute
'environment'"). All other files standardize on the above syntax.
2021-08-19 19:01:37 -07:00
Axel Huebl
41ae83463e
doc: def llnl.util.filesystem.find fix rst (#25350)
The star was not rendered in the docs.
2021-08-11 09:14:04 +02:00
Todd Gamblin
7ddd6ad461 installation: filter padding from all tty output
This is both a bugfix and a generalization of #25168. In #25168, we attempted to filter padding
*just* from the debug output of `spack.util.executable.Executable` objects. It turns out we got it
wrong -- filtering the command line string instead of the arg list resulted in output like this:

```
==> [2021-08-05-21:34:19.918576] ["'", '/', 'b', 'i', 'n', '/', 't', 'a', 'r', "'", ' ', "'", '-', 'o', 'x', 'f', "'", ' ', "'", '/', 't', 'm', 'p', '/', 'r', 'o', 'o', 't', '/', 's', 'p', 'a', 'c', 'k', '-', 's', 't', 'a', 'g', 'e', '/', 's', 'p', 'a', 'c', 'k', '-', 's', 't', 'a', 'g', 'e', '-', 'p', 'a', 't', 'c', 'h', 'e', 'l', 'f', '-', '0', '.', '1', '3', '-', 'w', 'p', 'h', 'p', 't', 'l', 'h', 'w', 'u', 's', 'e', 'i', 'a', '4', 'k', 'p', 'g', 'y', 'd', 'q', 'l', 'l', 'i', '2', '4', 'q', 'b', '5', '5', 'q', 'u', '4', '/', 'p', 'a', 't', 'c', 'h', 'e', 'l', 'f', '-', '0', '.', '1', '3', '.', 't', 'a', 'r', '.', 'b', 'z', '2', "'"]
```

Additionally, plenty of builds output padded paths in other places -- e.g., not just command
arguments, but in other `tty` messages via `llnl.util.filesystem` and other places. `Executable`
isn't really the right place for this.

This PR reverts the changes to `Executable` and moves the filtering into `llnl.util.tty`. There is
now a context manager there that you can use to install a filter for all output.
`spack.installer.build_process()` now uses this context manager to make `tty` do path filtering
when padding is enabled.

- [x] revert filtering in `Executable`
- [x] add ability for `tty` to filter output
- [x] install output filter in `build_process()`
- [x] tests
2021-08-09 01:42:07 -07:00
Dylan Simon
507d3c841c
don't spin writer daemon when < /dev/null (#25170) 2021-08-02 21:39:38 -07:00
Todd Gamblin
ab5954520f
spack diff: make output order deterministic (#25169)
The output order for `spack diff` is nondeterministic for larger diffs -- if you
ran it several times it will not put the fields in the spec in the same order on
successive invocations.

This makes a few fixes to `spack diff`:

- [x] Implement the change discussed in https://github.com/spack/spack/pull/22283#discussion_r598337448
      to make `AspFunction` comparable in and of itself and to eliminate the need for `to_tuple()`

- [x] Sort the lists of diff properties so that the output is always in the same order.

- [x] Make the output for different fields the same as what we use in the solver. Previously, we
      would use `Type(value)` for non-string values and `value` for strings.  Now we just use
      the value.  So the output looks a little cleaner:

      ```
      == Old ==========================        == New ====================
      @@ node_target @@                        @@ node_target @@
      -  gdbm Target(x86_64)                   -  gdbm x86_64
      +  zlib Target(skylake)                  +  zlib skylake
      @@ variant_value @@                      @@ variant_value @@
      -  ncurses symlinks bool(False)          -  ncurses symlinks False
      +  zlib optimize bool(True)              +  zlib optimize True
      @@ version @@                            @@ version @@
      -  gdbm Version(1.18.1)                  -  gdbm 1.18.1
      +  zlib Version(1.2.11)                  +  zlib 1.2.11
      @@ node_os @@                            @@ node_os @@
      -  gdbm catalina                         -  gdbm catalina
      +  zlib catalina                         +  zlib catalina
      ```

I suppose if we want to use `repr()` in the output we could do that and could be
consistent but we don't do that elsewhere -- the types of things in Specs are
all stringifiable so the string and the name of the attribute (`version`, `node_os`,
etc.) are sufficient to know what they are.
2021-08-01 05:15:33 +00:00
Adam J. Stewart
b8afc0fd29 API Docs: fix broken reference targets 2021-07-16 08:30:56 -07:00
Todd Gamblin
f58b2e03ca
build output: filter padding out of console output when padded_length is used (#24514)
Spack allows users to set `padded_length` to pad out the installation path in
build farms so that any binaries created are more easily relocatable. The issue
with this is that the padding dominates installation output and makes it
difficult to see what is going on. The padding also causes logs to easily
exceed size limits for things like GitLab artifacts.

This PR fixes this by adding a filter in the logger daemon. If you use a
setting like this:

config:
    install_tree:
        padded_length: 512

Then lines like this in the output:

==> [2021-06-23-15:59:05.020387] './configure' '--prefix=/Users/gamblin2/padding-log-test/opt/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_placeholder__/__spack_path_pla/darwin-bigsur-skylake/apple-clang-12.0.5/zlib-1.2.11-74mwnxgn6nujehpyyalhwizwojwn5zga

will be replaced with the much more readable:

==> [2021-06-23-15:59:05.020387] './configure' '--prefix=/Users/gamblin2/padding-log-test/opt/[padded-to-512-chars]/darwin-bigsur-skylake/apple-clang-12.0.5/zlib-1.2.11-74mwnxgn6nujehpyyalhwizwojwn5zga

You can see that the padding has been replaced with `[padded-to-512-chars]` to
indicate the total number of characters in the padded prefix. Over a long log
file, this should save a lot of space and allow us to see error messages in
GitHub/GitLab log output.

The *actual* build logs still have full paths in them. Also lines that are
output by Spack and not by a package build are not filtered and will still
display the fully padded path. There aren't that many of these, so the change
should still reduce file size and improve readability quite a bit.
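
A simplified sketch of the filtering idea (it only collapses complete placeholder components; judging from the example above, the real filter also handles a truncated final component):

```python
# Sketch: collapse runs of the padding placeholder in a log line into a
# short marker noting the configured padded length.
import re

PLACEHOLDER = "__spack_path_placeholder__"
PADDING_RUN = re.compile("(/%s)+" % PLACEHOLDER)

def filter_padding(line, padded_length=512):
    return PADDING_RUN.sub("/[padded-to-%d-chars]" % padded_length, line)
```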
2021-07-12 21:48:52 +00:00
Todd Gamblin
24c01d57cf
imports: sort imports everywhere in Spack (#24695)
* fix remaining flake8 errors

* imports: sort imports everywhere in Spack

We enabled import order checking in #23947, but fixing things manually drives
people crazy. This used `spack style --fix --all` from #24071 to automatically
sort everything in Spack so PR submitters won't have to deal with it.

This should go in after #24071, as it assumes we're using `isort`, not
`flake8-import-order` to order things. `isort` seems to be more flexible and
allows `llnl` imports to be in their own group before `spack` ones, so this
seems like a good switch.
2021-07-08 22:12:30 +00:00
Tom Scogland
4a7b0afde2
Log performance improvement (#23925)
* util.tty.log: read up to 100 lines if ready

Rework to read up to 100 lines from the captured stdin as long as data
is ready to be read immediately.  Adds a helper function to poll with
`select` for ready data.  This showed a roughly 5-10x perf improvement
for high-rate writes through the logger with relatively short lines.

* util.tty.log: Defer flushes to end of ready reads

Rather than flush per line, flush per set of reads.  Since this is a
non-blocking loop, the total perceived wait is short.

* util.tty.log: only scan each line once, usually

Rather than always find all control characters then substitute them all,
use `subn` to count the number of control characters replaced.  Only if
control characters exist find out what they are.  This could be made
truly single pass with sub with a function, but it's a more intrusive
change and this got 99%ish of the performance improvement (roughly
another 2x in some cases).

* util.tty.log: remove check for `readable`

Python < 3 does not support a readable check on streams, should not be
necessary here since we control the only use and it's explicitly a
stream to be read.
2021-05-31 20:33:14 -07:00
Massimiliano Culpo
219eb09e59
Put a module object in sys.modules before executing module code (#23269)
The loading protocol mandates that the module we are going
to import needs to be already in sys.modules before its code is
executed, so as to prevent unbounded recursions and multiple loading.

Loading a module from file exits early if the module is already
in sys.modules
2021-05-06 11:53:40 +02:00
vsoch
613348ec90 Use gethostname() instead of getfqdn() for lock debug mode
In debug mode, processes taking an exclusive lock write out their node name to
the lock file. We were using `getfqdn()` for this, but it seems to produce
inconsistent results when used from within some github actions containers.

We get this error because getfqdn() seems to return a short name in one place
and a fully qualified name in another:

```
  File "/home/runner/work/spack/spack/lib/spack/spack/test/llnl/util/lock.py", line 1211, in p1
    assert lock.host == self.host
AssertionError: assert 'fv-az290-764....cloudapp.net' == 'fv-az290-764'
  - fv-az290-764.internal.cloudapp.net
  + fv-az290-764
!!!!!!!!!!!!!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!!!!!!!!!!!
== 1 failed, 2547 passed, 7 skipped, 22 xfailed, 2 xpassed in 1238.67 seconds ==
```

This seems to stem from https://bugs.python.org/issue5004.

We don't really need to get a fully qualified hostname for debugging, so use
`gethostname()` because its results are more consistent. This seems to fix the
issue.

Signed-off-by: vsoch <vsoch@users.noreply.github.com>
2021-04-15 00:01:41 -07:00
Peter Scheibel
f624ce0834
Build process output: handle UTF-8 for python 3.x to 3.7 (#22888)
We set LC_ALL=C to encourage a build process to generate ASCII
output (so our logger daemon can decode it). Most packages
respect this but it appears that intel-oneapi-compilers does
not in some cases (see #22813). This reads the output of the build
process as UTF-8, which still works if the build process respects
LC_ALL=C but also works if the process generates UTF-8 output.

For Python >= 3.7 all files are opened with UTF-8 encoding by
default. Python 2 does not support the encoding argument on
'open', so to support Python 2 the files would have to be
opened in byte mode and explicitly decoded (as a side note,
this would be the only way to handle other encodings without
being informed of them in advance).
2021-04-09 18:10:01 -07:00
Todd Gamblin
01a6adb5f7 specs: use lazy lexicographic comparison instead of key_ordering
We have been using the `@llnl.util.lang.key_ordering` decorator for specs
and most of their components. This leverages the fact that in Python,
tuple comparison is lexicographic. It allows you to implement a
`_cmp_key` method on your class, and have `__eq__`, `__lt__`, etc.
implemented automatically using that key. For example, you might use
tuple keys to implement comparison, e.g.:

```python
class Widget:
    # author implements this
    def _cmp_key(self):
        return (
            self.a,
            self.b,
            (self.c, self.d),
            self.e
        )

    # operators are generated by @key_ordering
    def __eq__(self, other):
        return self._cmp_key() == other._cmp_key()

    def __lt__(self):
        return self._cmp_key() < other._cmp_key()

    # etc.
```

The issue there for simple comparators is that we have to build the
tuples *and* we have to generate all the values in them up front. When
implementing comparisons for large data structures, this can be costly.

This PR replaces `@key_ordering` with a new decorator,
`@lazy_lexicographic_ordering`. Lazy lexicographic comparison maps the
tuple comparison shown above to generator functions. Instead of comparing
based on pre-constructed tuple keys, users of this decorator can compare
using elements from a generator. So, you'd write:

```python
@lazy_lexicographic_ordering
class Widget:
    def _cmp_iter(self):
        yield a
        yield b
        def cd_fun():
            yield c
            yield d
        yield cd_fun
        yield e

    # operators are added by decorator (but are a bit more complex)
```

There are no tuples that have to be pre-constructed, and the generator
does not have to complete. Instead of tuples, we simply make functions
that lazily yield what would've been in the tuple. If a yielded value is
a `callable`, the comparison functions will call it and recursively
compare it. The comparator just walks the data structure like you'd expect
it to.

The ``@lazy_lexicographic_ordering`` decorator handles the details of
implementing comparison operators, and the ``Widget`` implementor only
has to worry about writing ``_cmp_iter``, and making sure the elements in
it are also comparable.

Using this PR shaves another 1.5 sec off the runtime of `spack buildcache
list`, and it also speeds up Spec comparison by about 30%. The runtime
improvement comes mostly from *not* calling `hash()` on `_cmp_iter()` values.
2021-03-31 14:39:23 -07:00
Adam J. Stewart
40a40e0265
Python 3.10 support: collections.abc (#20441) 2021-02-01 11:30:25 -06:00
Sergey Kosukhin
4d7a9df810
Add a context wrapper for mtime preservation (#21258)
Sometimes we need to patch a file that is a dependency for some other
automatically generated file that comes in a release tarball. As a
result, make tries to regenerate the dependent file using additional
tools (e.g. help2man), which would not be needed otherwise.

In some cases, it's preferable to avoid that (e.g. see #21255). A way
to do that is to save the modification timestamps before patching and
restoring them afterwards. This PR introduces a context wrapper that
does that.
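
Such a wrapper can be sketched as follows (assuming the behavior described here, not the exact implementation):

```python
# Sketch: remember the mtime of the given files before the body runs and
# restore the modification times afterwards.
import contextlib
import os

@contextlib.contextmanager
def keep_modification_time(*filenames):
    saved = {}
    for f in filenames:
        if os.path.exists(f):
            saved[f] = os.stat(f).st_mtime
    try:
        yield
    finally:
        for f, mtime in saved.items():
            if os.path.exists(f):
                os.utime(f, (os.stat(f).st_atime, mtime))
```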
2021-01-27 11:41:07 -08:00
Nathan Hanford
ebc871abbf
[WIP] relocate.py: parallelize test replacement logic (#19690)
* sbang pushed back to callers;
star moved to util.lang

* updated unit test

* sbang test moved; local tests pass

Co-authored-by: Nathan Hanford <hanford1@llnl.gov>
2021-01-20 09:17:47 -08:00
Todd Gamblin
a8ccb8e116 copyrights: update all files with license headers for 2021
- [x] add `concretize.lp`, `spack.yaml`, etc. to licensed files
- [x] update all licensed files to say 2013-2021 using
      `spack license update-copyright-year`
- [x] appease mypy with some additions to package.py that needed
      for oneapi.py
2021-01-02 12:12:00 -08:00
Tom Scogland
857749a9ba
add mypy to style checks; rename spack flake8 to spack style (#20384)
I lost my mind a bit after getting the completion stuff working and
decided to get Mypy working for spack as well. This adds a 
`.mypy.ini` that checks all of the spack and llnl modules, though
not yet packages, and fixes all of the identified missing types and
type issues for the spack library.

In addition to these changes, this includes:

* rename `spack flake8` to `spack style`

Aliases flake8 to style, and just runs flake8 as before, but with
a warning.  The style command runs both `flake8` and `mypy`,
in sequence. Added --no-<tool> options to turn off one or the
other, they are on by default.  Fixed two issues caught by the tools.

* stub typing module for python2.x

We don't support typing in Spack for python 2.x. To allow 2.x to
support `import typing` and `from typing import ...` without a
try/except dance to support old versions, this adds a stub module
*just* for python 2.x.  Doing it this way means we can only reliably
use all type hints in python3.7+, and .mypy.ini has been updated to
reflect that.

* add non-default black check to spack style

This is a first step to requiring black.  It doesn't enforce it by
default, but it will check it if requested.  Currently enforcing the
line length of 79 since that's what flake8 requires, but it's a bit odd
for a black formatted project to be quite that narrow.  All settings are
in the style command since spack has no pyproject.toml and I don't
want to add one until more discussion happens. Also re-format
`style.py` since it no longer passed the black style check
with the new length.

* use style check in github action

Update the style and docs action to use `spack style`, adding in mypy
and black to the action even if it isn't running black right now.
2020-12-22 21:39:10 -08:00
Greg Becker
77b2e578ec
spack test (#15702)
Users can add test() methods to their packages to run smoke tests on
installations with the new `spack test` command (the old `spack test` is
now `spack unit-test`). spack test is environment-aware, so you can
`spack install` an environment and then run `spack test run` to run smoke
tests on all of its packages. Historical test logs can be perused with
`spack test results`. It includes generic smoke tests for MPI implementations, C,
C++, and Fortran compilers, as well as specific smoke tests for 18
packages.

Inside the test method, individual tests can be run separately (and
continue to run best-effort after a test failure) using the `run_test`
method. The `run_test` method encapsulates finding test executables,
running and checking return codes, checking output, and error handling.

This handles the following trickier aspects of testing with direct
support in Spack's package API:

- [x] Caching source or intermediate build files at build time for
      use at test time.
- [x] Test dependencies,
- [x] packages that require a compiler for testing (such as library only
      packages).

See the packaging guide for more details on using Spack testing support.
Included is support for package.py files for virtual packages. This does
not change the Spack interface, but is a major change in internals.

Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
Co-authored-by: wspear <wjspear@gmail.com>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2020-11-18 02:39:02 -08:00
Todd Gamblin
0ed019d4ef concretizer: first working version with pyclingo interface
- [x] Solver now uses the Python interface to clingo
- [x] can extract unsatisfiable cores from problems when things go wrong
- [x] use Python callbacks for versions instead of choice rules (this may
      ultimately hurt performance)
2020-11-17 10:04:13 -08:00
Peter Scheibel
bb42470211
macos: update build process to use spawn instead of fork (#18205)
Spack creates a separate process to do package installation. Different
operating systems and Python versions use different methods to create
it but up until Python 3.8 both Linux and Mac OS used "fork" (which
duplicates process memory, file descriptor table, etc.).

Python >= 3.8 on Mac OS prefers creating an entirely new process
(referred to as the "spawn" start method) because "fork" was found to
cause issues (in other words "spawn" is the default start method used
by multiprocessing.Process). Spack was dependent on the particular
behavior of fork to replicate process memory and transmit file
descriptors.

This PR refactors the Spack internals to support starting a child
process with the "spawn" method. To achieve this, it makes the
following changes:

- ensure that the package repository and other global state are
  transmitted to the child process
- ensure that file descriptors are transmitted to the child process in
  a way that works with multiprocessing and spawn
- make all the state needed for the build process and tests picklable
  (package, stage, etc.)
- move a number of locally-defined functions into global scope so that
  they can be pickled
- rework tests where needed to avoid using local functions

This PR also reworks sbang tests to work on macOS, where temporary
directories are deeper than the Linux sbang limit. We make the limit
platform-dependent (macOS supports 512-character shebangs)

See: #14102
2020-11-12 12:26:23 -08:00
Massimiliano Culpo
458d88eaad
Make archspec a vendored dependency (#19600)
- Added archspec to the list of vendored dependencies
- Removed every reference to llnl.util.cpu
- Removed tests from Spack code base
2020-10-30 13:02:14 -07:00
Greg Becker
7a6268593c
Environments: specify packages for developer builds (#15256)
* allow environments to specify dev-build packages

* spack develop and spack undevelop commands

* never pull dev-build packages from bincache

* reinstall dev_specs when code has changed; reinstall dependents too

* preserve dev info paths and versions in concretization as special variant

* move install overwrite transaction into installer

* move dev-build argument handling to package.do_install

now that specs are dev-aware, package.do_install can add
necessary args (keep_stage=True, use_cache=False) to dev
builds. This simplifies driving logic in cmd and env._install

* allow 'any' as wildcard for variants

* spec: allow anonymous dependencies

raise an error when constraining by or normalizing an anonymous dep
refactor concretize_develop to remove dev_build variant
refactor tests to check for ^dev_path=any instead of +dev_build

* fix variant class hierarchy
2020-10-15 17:23:16 -07:00
Greg Becker
2e4892c111
env view failures: print underlying error message (#18713) 2020-09-18 10:21:14 -07:00
Massimiliano Culpo
8ad2cc2acf
Environments: Avoid inconsistent state on failed write (#18538)
Fixes #18441 

When writing an environment, there are cases where the lock file for
the environment may be removed. In this case there was a period 
between removing the lock file and writing the new manifest file
where an exception could leave the manifest in its old state (in
which case the lock and manifest would be out of sync).

This adds a context manager which is used to restore the prior lock
file state in cases where the manifest file cannot be written.
2020-09-11 10:57:29 -07:00
Adam J. Stewart
741bb9bafe
install/install_tree: glob support (#18376)
* install/install_tree: glob support

* Add unit tests

* Update existing packages

* Raise error if glob finds no files, document function raises
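
The glob behavior, including the raise-on-no-match point in the list above, can be sketched like this (not the actual install/install_tree code):

```python
# Sketch: expand a glob pattern for install-style copies and raise if the
# pattern matches nothing.
import glob
import os
import shutil

def install_glob(pattern, dest):
    matches = glob.glob(pattern)
    if not matches:
        raise IOError("No files match glob pattern: %s" % pattern)
    for src in matches:
        shutil.copy(src, os.path.join(dest, os.path.basename(src)))
```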
2020-09-03 10:47:19 -07:00
Rui Xue
d9b945f663
Mac OS: support Python >= 3.8 by using fork-based multiprocessing (#18124)
As detailed in https://bugs.python.org/issue33725, starting new
processes with 'fork' on Mac OS is not guaranteed to work in general.
As of Python 3.8 the default process spawning mechanism was changed
to avoid this issue.

Spack depends on the fork-based method to preserve file descriptors
transparently, to preserve global state, and to avoid pickling some
objects. An effort is underway to remove dependence on fork-based
process spawning (see #18205). In the meantime, this allows Spack to
run with Python 3.8 on Mac OS by explicitly choosing to use 'fork'.
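
Explicitly choosing 'fork' boils down to the standard multiprocessing API (a sketch):

```python
# Sketch: request the 'fork' start method explicitly instead of relying on
# the platform default (which is 'spawn' on macOS for Python >= 3.8).
import multiprocessing

def start_child(target, args=()):
    ctx = multiprocessing.get_context("fork")
    process = ctx.Process(target=target, args=args)
    process.start()
    return process
```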

Co-authored-by: Peter Josef Scheibel <scheibel1@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
Co-authored-by: Todd Gamblin <tgamblin@llnl.gov>
2020-09-02 00:15:39 -07:00
Michael Kuhn
b6321cdfa9
microarchitectures: Fix icelake (#18151)
Some of the feature flags are named differently and clwb is missing on
my i7-1065G7. cascadelake and cannonlake might have similar problems but
I do not have access to those architectures to test.
2020-08-19 11:49:43 +02:00
Massimiliano Culpo
1dba0ce81b Removed references to BlueGene/Q in docs and comments 2020-08-10 17:09:09 -07:00
Massimiliano Culpo
193e8333fa Update packages.yaml format and support configuration updates
The YAML config for paths and modules of external packages has
changed: the new format allows a single spec to load multiple
modules. Spack will automatically convert from the old format
when reading the configs (the updates do not add new essential
properties, so this change in Spack is backwards-compatible).

With this update, Spack cannot modify existing configs/environments
without updating them (e.g. “spack config add” will fail if the
configuration is in a format that predates this PR). The user is
prompted to do this explicitly and commands are provided. All
config scopes can be updated at once. Each environment must be
updated one at a time.
2020-08-10 11:59:05 -07:00
Tamara Dahlgren
605c1a76e0
Reduce output verbosity with debug levels (#17546)
* switch from bool to int debug levels

* Added debug options and changed lock logging to use more detailed values

* Limit installer and timestamp PIDs to standard debug output

* Reduced verbosity of fetch/stage/install output, changing most to debug level 1

* Combine lock log methods; change build process install to debug

* Changed binary cache install messages to extraction messages
2020-07-23 00:49:57 -07:00
Todd Gamblin
4ea76dc95c change master/child to controller/minion in pty docstrings
PTY support used the concept of 'master' and 'child' processes. 'master'
has been renamed to 'controller' and 'child' to 'minion'.
2020-07-06 11:39:19 -07:00
Massimiliano Culpo
14599f09be
Separate Apple Clang from LLVM Clang (#17110)
* Separate Apple Clang from LLVM Clang

Apple Clang is a compiler of its own. All places
referring to "-apple" suffix have been updated.

* Hack to use a dash in 'apple-clang'

To be able to use autodoc from Sphinx we need
a valid Python name for the module that contains
Apple's Clang code.

* Updated packages to account for the existence of apple-clang

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>

* Added unit test for XCode related functions

Co-authored-by: Gregory Becker <becker33@llnl.gov>
Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
2020-06-25 11:18:48 -05:00
Todd Gamblin
7b8e5c8999 bugfix: don't use sys.stdout as a default arg value (#16541)
After migrating to `travis-ci.com`, we saw I/O issues in our tests --
tests that relied on `capfd` and `capsys` were failing. We've also seen
this in GitHub actions, and it's kept us from switching to them so far.

Turns out that the issue is that using streams like `sys.stdout` as
default arguments doesn't play well with `pytest` and output redirection,
as `pytest` changes the values of `sys.stdout` and `sys.stderr`. if these
values are evaluated before output redirection (as they are when used as
default arg values), output won't be captured properly later.

- [x] replace all stream default arg values with `None`, and only assign stream
      values inside functions.
- [x] fix tests we didn't notice were relying on this erroneous behavior
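
The fixed pattern looks like this (illustrative):

```python
# Sketch: resolve sys.stdout at call time rather than at function definition
# time, so pytest's output redirection is respected.
import sys

def msg(text, stream=None):
    stream = stream if stream is not None else sys.stdout
    stream.write(text + "\n")
```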
2020-05-09 00:56:18 -07:00
Todd Gamblin
a563884af3
spack install terminal output handling in foreground/background (#15723)
Makes the following changes:

* (Fixes #15620) tty configuration was failing when stdout was 
  redirected. The implementation now creates a pseudo terminal for
  stdin and checks stdout properly, so redirections of stdin/out/err
  should now be handled correctly.
* Handles terminal configuration when the Spack process moves between
  the foreground and background (possibly multiple times) during a
  build.
* Because Spack adjusts terminal settings to let users enable/disable
  build process output to the terminal with a "v" toggle, abnormal
  exit cases (like CTRL-C) could leave the terminal in an unusable
  state. This is addressed here with a special-case handler which
  restores terminal settings.

Significantly extend testing of process output logger:

* New PseudoShell object for setting up a master and child process
  and configuring file descriptor inheritance between the two
* Tests for "v" verbosity toggle making use of the added PseudoShell
  object
* Added `uniq` function which takes a list of elements and replaces
  any consecutive sequence of duplicate elements with a single
  instance (e.g. "112211" -> "121"); a short sketch appears below

Co-authored-by: Adam J. Stewart <ajstewart426@gmail.com>
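
A sketch of what a `uniq` helper like the one described above might look like (an illustration, not necessarily the exact implementation):

```
def uniq(sequence):
    """Collapse runs of consecutive duplicates into a single element,
    e.g. [1, 1, 2, 2, 1, 1] -> [1, 2, 1]."""
    result = []
    for item in sequence:
        if not result or result[-1] != item:
            result.append(item)
    return result
```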
2020-04-15 11:05:41 -07:00
Greg Becker
75fafcece9
multiprocessing: allow Spack to run uninterrupted in background (#14682)
Spack currently cannot run as a background process uninterrupted because some of the logging functions used in the install method (especially to create the dynamic verbosity toggle with the v key) cause the OS to issue a SIGTTOU to Spack when it's backgrounded.

This PR puts the necessary gatekeeping in place so that Spack doesn't do anything that will cause a signal to stop the process when operating as a background process.
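
As an illustration of the kind of gatekeeping involved, a process can first check whether it owns the controlling terminal before touching terminal settings (a hedged sketch; the helper name is made up and not Spack's actual API):

```
import os
import sys

def is_foreground_process():
    """Return True if this process group owns the controlling terminal.

    Touching terminal settings while backgrounded triggers SIGTTOU, so
    callers should skip those operations when this returns False.
    """
    try:
        return os.tcgetpgrp(sys.stdout.fileno()) == os.getpgrp()
    except (OSError, ValueError):
        # No controlling terminal, e.g. output is redirected.
        return False
```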
2020-03-20 12:22:32 -07:00
Dr. Christian Tacke
22a56a89c7
Use shutil.copy2 in install_tree (#15058)
Sometimes one needs to preserve the (relative order) of
mtimes on installed files.  So it's better to just copy
over all the metadata from the source tree to the install
tree. If permissions need fixing, that will be done anyway
afterwards.

One major use case are resource()s:
They're unpacked in one place and then copied to their
final place using install_tree(). If the resource is a
source tree using autoconf/automake, resetting mtimes
incorrectly might force unwanted autoconf/etc calls.
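
A hedged sketch of the difference (the function name and paths are illustrative): `shutil.copy2` copies file metadata, including mtimes, along with the contents, while plain `shutil.copy` does not.

```
import shutil

def install_tree_sketch(src, dest):
    # copy_function=shutil.copy2 copies metadata (including mtimes) along
    # with contents, so the relative timestamp order of configure scripts in
    # the source tree survives the copy; plain shutil.copy would reset mtimes
    # and could trigger unwanted autoconf/automake re-runs.
    shutil.copytree(src, dest, copy_function=shutil.copy2)
```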
2020-02-19 23:09:26 -06:00
Tamara Dahlgren
f2aca86502
Distributed builds (#13100)
Fixes #9394
Closes #13217.

## Background
Spack provides the ability to enable/disable parallel builds through two options: package `parallel` and configuration `build_jobs`.  This PR changes the algorithm to allow multiple, simultaneous processes to coordinate the installation of the same spec (and specs with overlapping dependencies).

The `parallel` (boolean) property sets the default for its package though the value can be overridden in the `install` method.

Spack's current parallel builds are limited to build tools supporting `jobs` arguments (e.g., `Makefiles`).  The number of jobs actually used is calculated as `min(config:build_jobs, # cores, 16)`, which can be overridden in the package or on the command line (i.e., `spack install -j <# jobs>`).

This PR adds support for distributed (single- and multi-node) parallel builds.  The goals of this work include improving the efficiency of installing packages with many dependencies and reducing the repetition associated with concurrent installations of (dependency) packages.

## Approach
### File System Locks
Coordination between concurrent installs of overlapping packages to a Spack instance is accomplished through bottom-up dependency DAG processing and file system locks.  The runs can be a combination of interactive and batch processes affecting the same file system.  Exclusive prefix locks are required to install a package while shared prefix locks are required to check if the package is installed.

Failures are communicated through a separate exclusive prefix failure lock, for concurrent processes, combined with a persistent store, for separate, related build processes.  The resulting file contains the failing spec to facilitate manual debugging.

### Priority Queue
Management of dependency builds changed from reliance on recursion to use of a priority queue where the priority of a spec is based on the number of its remaining uninstalled dependencies.  

Using a queue required a change to dependency build exception handling with the most visible issue being that the `install` method *must* install something in the prefix.  Consequently, packages can no longer get away with an install method consisting of `pass`, for example.
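
A hypothetical sketch of the queue idea (not Spack's actual classes): a heap keyed by the number of uninstalled dependencies, so specs whose dependencies are already in place are processed first.

```
import heapq

def build_order(remaining_deps):
    """remaining_deps maps each spec name to its number of uninstalled deps.

    Simplified: the real installer updates the counts as dependencies finish.
    """
    queue = [(count, name) for name, count in remaining_deps.items()]
    heapq.heapify(queue)
    while queue:
        count, name = heapq.heappop(queue)
        yield name  # specs with no remaining dependencies come out first

# list(build_order({"cmake": 2, "zlib": 0, "bzip2": 0})) -> ["bzip2", "zlib", "cmake"]
```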

## Caveats
- This still only parallelizes a single-rooted build.  Multi-rooted installs (e.g., for environments) are TBD in a future PR.

Tasks:
- [x] Adjust package lock timeout to correspond to value used in the demo
- [x] Adjust database lock timeout to reduce contention on startup of concurrent
    `spack install <spec>` calls
- [x] Replace (test) package's `install: pass` methods with file creation since post-install 
    `sanity_check_prefix` will otherwise error out with `Install failed .. Nothing was installed!`
- [x] Resolve remaining existing test failures
- [x] Respond to alalazo's initial feedback
- [x] Remove `bin/demo-locks.py`
- [x] Add new tests to address new coverage issues
- [x] Replace built-in package's `def install(..): pass` to "install" something
    (i.e., only `apple-libunwind`)
- [x] Increase code coverage
2020-02-19 00:04:22 -08:00
Matt Belhorn
b43f658c39
Adds fma and vsx features to entire power arch family. (#14759)
VSX alitvec extensions are supported by PowerISA from v2.06 (Power7+), but might
not be listed in features.

FMA has been supported by PowerISA since Power1, but might not be listed in
features.

This commit adds these features to all the power ISA family sets.
2020-02-06 16:42:05 +01:00
Greg Becker
52ab2421bb
Fix handling of filter_file exceptions (#14651) 2020-01-28 12:49:26 -08:00
Adam J. Stewart
11f2b61261 Use spack commands --format=bash to generate shell completion (#14393)
This PR adds a `--format=bash` option to `spack commands` to
auto-generate the Bash programmable tab completion script. It can be
extended to work for other shells.

Progress:

- [x] Fix bug in superclass initialization in `ArgparseWriter`
- [x] Refactor `ArgparseWriter` (see below)
- [x] Ensure that output of old `--format` options remains the same
- [x] Add `ArgparseCompletionWriter` and `BashCompletionWriter`
- [x] Add `--aliases` option to add command aliases
- [x] Standardize positional argument names
- [x] Tests for `spack commands --format=bash` coverage
- [x] Tests to make sure `spack-completion.bash` stays up-to-date
- [x] Tests for `spack-completion.bash` coverage
- [x] Speed up `spack-completion.bash` by caching subroutine calls

This PR also necessitates a significant refactoring of
`ArgparseWriter`. Previously, `ArgparseWriter` was mostly a single
`_write` method which handled everything from extracting the information
we care about from the parser to formatting the output. Now, `_write`
only handles recursion, while the information extraction is split into a
separate `parse` method, and the formatting is handled by `format`. This
allows subclasses to completely redefine how the format will appear
without overriding all of `_write`.
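
A hypothetical skeleton of that split (the method names match the description above, but the attributes such as `self.out`, `cmd.subcommands` and `cmd.prog` are placeholders, not the real API):

```
class ArgparseWriter:
    def __init__(self, out):
        self.out = out

    def parse(self, parser, prog):
        """Extract the information we care about from an argparse parser."""
        raise NotImplementedError

    def format(self, cmd):
        """Turn one parsed command into output text."""
        raise NotImplementedError

    def _write(self, parser, prog):
        """Handle recursion only: parse, format, then descend into subcommands."""
        cmd = self.parse(parser, prog)
        self.out.write(self.format(cmd))
        for sub_prog, sub_parser in cmd.subcommands:
            self._write(sub_parser, sub_prog)

class BashCompletionWriter(ArgparseWriter):
    def format(self, cmd):
        # Subclasses redefine the appearance without touching the traversal.
        return "complete -F _%s %s\n" % (cmd.prog, cmd.prog)
```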

Co-Authored-by: Todd Gamblin <tgamblin@llnl.gov>
2020-01-22 21:31:12 -08:00
Todd Gamblin
4af6303086
copyright: update copyright dates for 2020 (#14328) 2019-12-30 22:36:56 -08:00
Todd Gamblin
8e8235043d package_prefs: move class-level cache to PackagePref instance
`PackagePrefs` has had a class-level cache of data from `packages.yaml` for
a long time, but it complicates testing and leads to subtle errors,
especially now that we frequently manipulate custom config scopes and
environments.

Moving the cache to instance-level doesn't slow down concretization or
the test suite, and it just caches for the life of a `PackagePrefs`
instance (i.e., for a single concretization) so we don't need to worry
about global state anymore.

- [x] Remove class-level caches from `PackagePrefs`
- [x] Add a cached _spec_order object on each `PackagePrefs` instance
- [x] Remove all calls to `PackagePrefs.clear_caches()`
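
An illustrative sketch of the instance-level caching strategy (simplified, not the actual PackagePrefs code):

```
class PackagePrefsSketch:
    """Simplified: the cache lives on the instance, not on the class."""

    def __init__(self, packages_yaml):
        self._packages_yaml = packages_yaml
        self._spec_order = None   # discarded with the instance (one concretization)

    def spec_order(self):
        if self._spec_order is None:
            # Computed at most once per PackagePrefs instance; a class-level
            # cache would instead leak state across config scopes and tests.
            self._spec_order = sorted(self._packages_yaml)
        return self._spec_order
```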
2019-12-30 13:01:31 -08:00
Todd Gamblin
2dafeaf819
bugfix: colify_table should not revert to 1 column for non-tty (#14307)
Commands like `spack blame` were printing poorly when redirected to files,
as colify reverts to a single column when redirected.  This works for
list data but not tables.

- [x] Force a table by always passing `tty=True` from `colify_table()`
2019-12-28 11:26:31 -08:00
Massimiliano Culpo
d333e14721 tests: check min required python version with vermin (#14289)
This commit removes the `python_version.py` unit test module
and the vendored dependencies `pyqver2.py` and `pyqver3.py`.
It substitutes them with an equivalent check done using
`vermin` that is run as a separate workflow via Github Actions.

This allows us to delete 2 vendored dependencies that are unmaintained
and substitutes them with a maintained tool.

Also, updates the list of vendored dependencies.
2019-12-24 09:28:33 -08:00
t-karatsu
1e2c9d960c a64fx: fix typo in GCC flags (#14286) 2019-12-24 17:45:03 +01:00
Todd Gamblin
9b90d7e801 performance: reduce system calls required for remove_dead_links
`os.path.exists()` will report False if the target of a symlink doesn't
exist, so we can avoid a costly call to realpath here.
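
A minimal sketch of the trick (illustrative, not the exact Spack helper): `os.path.exists` already follows symlinks, so a link whose target is gone can be detected without an extra `realpath` call.

```
import os

def remove_dead_links(root):
    for name in os.listdir(root):
        path = os.path.join(root, name)
        # os.path.exists() follows the link and returns False when the
        # target is missing, so no realpath() call is needed.
        if os.path.islink(path) and not os.path.exists(path):
            os.unlink(path)
```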
2019-12-23 18:36:56 -08:00
Todd Gamblin
6c9467e8c6 lock transactions: avoid redundant reading in write transactions
Our `LockTransaction` class was reading overly aggressively.  In cases
like this:

```
1  with spack.store.db.read_transaction():
2    with spack.store.db.write_transaction():
3      ...
```

The `ReadTransaction` on line 1 would read in the DB, but the
WriteTransaction on line 2 would read in the DB *again*, even though we
had a read lock the whole time.  `WriteTransaction`s were only
considering nested writes to decide when to read, but they didn't know
when we already had a read lock.

- [x] `Lock.acquire_write()` return `False` in cases where we already had
       a read lock.
2019-12-23 18:36:56 -08:00
Todd Gamblin
bb517fdb84 lock transactions: ensure that nested write transactions write
If a write transaction was nested inside a read transaction, it would not
write properly on release, e.g., in a sequence like this, inside our
`LockTransaction` class:

```
1  with spack.store.db.read_transaction():
2    with spack.store.db.write_transaction():
3      ...
4  with spack.store.db.read_transaction():
   ...
```

The WriteTransaction on line 2 had no way of knowing that its
`__exit__()` call was the last *write* in the nesting, and it would skip
calling its write function.

The `__exit__()` call of the `ReadTransaction` on line 1 wouldn't know
how to write, and the file would never be written.

The DB would be correct in memory, but the `ReadTransaction` on line 4
would re-read the whole DB assuming that other processes may have
modified it.  Since the DB was never written, we got stale data.

- [x] Make `Lock.release_write()` return `True` whenever we release the
      *last write* in a nest.
2019-12-23 18:36:56 -08:00
Todd Gamblin
eb8fc4f3be lock transactions: fix non-transactional writes
Lock transactions were actually writing *after* the lock was
released. The code was looking at the result of `release_write()` before
writing, then writing based on whether the lock was released.  This is
pretty obviously wrong.

- [x] Refactor `Lock` so that a release function can be passed to the
      `Lock` and called *only* when a lock is really released.

- [x] Refactor `LockTransaction` classes to use the release function
  instead of checking the return value of `release_read()` / `release_write()`
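
A hedged sketch of the shape of that refactor (heavily simplified; the real Lock class does much more):

```
class Lock:
    """Sketch: the write is performed by the lock itself, while still held."""

    def __init__(self, path):
        self.path = path
        self._writes = 0

    def acquire_write(self):
        self._writes += 1
        # ... acquire the underlying file lock on the first write ...

    def release_write(self, release_fn=None):
        """Call release_fn() only when the last write lock is being released,
        and before the underlying lock is dropped, so the write happens while
        the lock is still held."""
        self._writes -= 1
        if self._writes == 0:
            if release_fn is not None:
                release_fn()          # e.g. dump the database to disk
            # ... release the underlying file lock here ...
            return True
        return False
```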
2019-12-23 18:36:56 -08:00
Massimiliano Culpo
80495d83ed microarchitectures: fix ppc flags for clang (#14196) 2019-12-20 14:40:54 -08:00
Adam J. Stewart
18c2029fef Fix argparse rST parsing of help messages (#14014)
Thanks!
2019-12-17 10:23:22 -08:00
Massimiliano Culpo
5eca4f1470 microarchitectures: readable names for AArch64 vendors (#13825)
Vendors for ARM come out of `/proc/cpuinfo` as hex numbers instead of readable strings.

- Add support for associating vendor names with the hex numbers.
- Also move these mappings from Python code to `microarchitectures.json`
- Move darwin feature name mappings to `microarchitectures.json` as well
2019-12-17 00:47:50 -08:00
Axel Huebl
d705e96a63
Spec Header Dirs: Only first include/ (#13991)
* CUDA HeaderList: Unit Test

* Spec Header Dirs: Only first include/

Avoid matching repeatedly nested include paths that usually
refer to libraries shipped internally by packages.
Example in CUDA Toolkit, shipping a libc++ fork internally
with libcu++ since 10.2.89:
`<prefix>/include/cuda/some/more/details/include/` or
`<prefix>/include/cuda/std/detail/libcxx/include`

regex: non-greedy first match of include
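
A hypothetical illustration of a non-greedy match that keeps only the first `include/` component of a header path (the example path is made up):

```
import re

header = "/prefix/include/cuda/std/detail/libcxx/include/type_traits"

# The non-greedy ".*?" stops at the first "include" component rather than the
# last one, so nested, internally-shipped include directories are ignored.
match = re.match(r"(.*?/include)/", header)
print(match.group(1))   # -> /prefix/include
```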

Co-Authored-By: Massimiliano Culpo <massimiliano.culpo@gmail.com>

* CUDA: Re-Enable 10.2.89 as Default
2019-12-06 23:47:03 -08:00
Massimiliano Culpo
e9f027210f Fixed x86-64 optimization flags for clang (#13913)
* Fixed x86-64 optimization flags for clang
* Fixed expected results in unit tests

Before, the flags used were the ones for llc, the underlying compiler that translates LLVM IR to machine assembly. It turns out that the semantics of `-march`, `-mtune` and `-mcpu` differ between the clang front-end and llc.

I found no definitive reference for the flags submitted in this PR, but I checked the assembly of a vectorizable function using Godbolt's website.
2019-12-04 09:11:34 -08:00
Massimiliano Culpo
0684a58d16 Fixed detection for cascadelake microarchitecture (#13820)
fixes #13803
2019-11-21 13:09:48 -07:00
t-karatsu
513fe55fc3 Features/expand microarch for aarch64 (#13780)
* Add process to determine aarch64 microarchitecture

* add microarchitectures for thunderx2 and a64fx

* Add optimize flags for gcc on aarch64 family processors thunderx2 and a64fx.

* Add optimize flags for clang on aarch64 family processors thunderx2 and a64fx

* Add testing for thunderx2 and a64fx microarchitectures
2019-11-20 00:01:12 -07:00
Greg Becker
230c6aa326
cpu: fix clang flags for generic x86_64 (#13491)
* cpu: differentiate flags used for pristine LLVM vs. Apple's version
2019-10-30 17:16:13 -05:00
Chris Green
77af4684aa Improvements to detection of AMD architectures. (#13407)
New entry for K10 microarchitecture.

Reorder Zen* microarchitectures to avoid triggering as k10.

Remove some desktop-specific flags that were preventing Opteron Bulldozer/Piledriver/Steamroller/Excavator CPUs from being recognized as such.

Remove one or two flags which weren't produced in /proc/cpuinfo on older OS (RHEL6 and friends).
2019-10-24 15:48:54 -07:00
Chris Green
0913328812 Correctly identify Skylake CPUs on Darwin. (#13377)
* Correctly identify Skylake CPUs on Darwin.

* Add a test for haswell on Mojave.
2019-10-24 12:44:58 -05:00
Massimiliano Culpo
b14f18acda microarchitectures: look in /sbin and /usr/sbin for sysctl (#13365)
This PR ensures that on Darwin we always append /sbin and /usr/sbin to PATH, if they are not already present, when looking for sysctl.

* Make sure we look into /sbin and /usr/sbin for sysctl
* Refactor sysctl for better readability
* Remove marker to make test pass
2019-10-22 21:42:38 -07:00
Massimiliano Culpo
8808207ddf Fixed optimization flags support for old GCC versions (#13362)
These changes update our gcc microarchitecture descriptions based on manuals found here https://gcc.gnu.org/onlinedocs/ and assuming that new architectures are not added during patch releases.
2019-10-22 21:40:45 -07:00
Massimiliano Culpo
cfbdd2179e microarchitectures: add optimization flags for Intel compilers (#13345)
* Added optimization flags for Intel compilers with Intel CPUs
* Added optimization flags for Intel compilers with AMD CPUs
2019-10-22 00:33:59 -07:00
Massimiliano Culpo
498f448ef3 microarchitectures: fix custom compiler versions (#13222)
Custom string versions for compilers were raising a ValueError on
conversion to int. This commit fixes the behavior by trying to detect
the underlying compiler version when a custom string version is present.

* Refactor code that deals with custom versions for better readability
* Partition version components with a regex
* Fix semantic of custom compiler versions with a suffix
* clang@x.y-apple has been special-cased
* Add unit tests
2019-10-21 10:24:57 -07:00
Massimiliano Culpo
41fb0395a6 Microarchitecture specific optimizations for LLVM (#13250)
* Added architecture specific optimization flags for Clang / LLVM
* Disallow compiler optimizations for mixed toolchains
    * We emit a warning when building for a mixed toolchain
* Fixed issues with suffixed versions of compilers; Apple's Clang will, 
    for the time being, fall back on x86-64 for every compilation.
2019-10-19 13:19:29 -07:00
Michael Kuhn
ffe87ed49f filter_file: fix multiple invocations on the same file (#13234)
Since the backup file is only created on the first invocation, it will
contain the original file without any modifications. Further invocations
will then read the backup file, effectively reverting prior invocations.

This can be reproduced easily by trying to install likwid, which will
try to install into /usr/local. Work around this by creating a temporary
file to read from.
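
A sketch of the workaround (simplified relative to the real `filter_file`): read from a temporary snapshot of the current contents rather than from a backup taken only on the first invocation.

```
import os
import re
import shutil
import tempfile

def filter_file_sketch(pattern, repl, filename):
    # Snapshot the *current* contents; re-running the filter then builds on
    # previous edits instead of silently reverting them.
    fd, tmp_path = tempfile.mkstemp()
    os.close(fd)
    shutil.copyfile(filename, tmp_path)
    with open(tmp_path) as src, open(filename, "w") as dst:
        for line in src:
            dst.write(re.sub(pattern, repl, line))
    os.remove(tmp_path)
```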
2019-10-16 15:15:24 -07:00
Tamara Dahlgren
1ef71376f2 Bugfix: stage directory permissions and cleaning (#12733)
* This updates stage names to use "spack-stage-" as a prefix.
  This avoids removing non-Spack directories in "spack clean" as
  c141e99 did (in this case so long as they don't contain the
  prefix "spack-stage-"), and also addresses a follow-up issue
  where Spack stage directories were not removed.
* Spack now does more-stringent checking of expected permissions for
  staging directories. For a given stage root that includes a user
  component, all directories before the user component that are
  created by Spack are expected to match the permissions of their
  parent; the user component and all deeper directories are expected
  to be accessible to the user (read/write/execute).
2019-10-16 14:55:37 -07:00
Greg Becker
94e80933f0 Feature: installed file verification (#12841)
This feature generates a verification manifest for each installed
package and provides a command, "spack verify", which can be used to
compare the current file checksums/permissions with those calculated
at installed time.

Verification includes

* Checksums of files
* File permissions
* Modification time
* File size

Packages installed before this PR will be skipped during verification.
To verify such a package you must reinstall it.

The spack verify command has three modes.

* With the -a,--all option it will check every installed package.
* With the -f,--files option, it will check some specific files,
  determine which package they belong to, and confirm that they have
  not been changed.
* With the -s,--specs option, or by default, it will check specific
  packages to verify that no files have changed.
2019-10-15 14:24:52 -07:00
Massimiliano Culpo
5cd28847e8 filter_file uses "surrogateescape" error handling (#12765)
From Python docs:
--
'surrogateescape' will represent any incorrect bytes as code points in
the Unicode Private Use Area ranging from U+DC80 to U+DCFF. These
private code points will then be turned back into the same bytes when
the surrogateescape error handler is used when writing data. This is
useful for processing files in an unknown encoding.
--

This will allow us to process files with unknown encodings.

To accommodate the case of self-extracting bash scripts, filter_file
can now stop filtering text input if a certain marker is found. The
marker must be passed at call time via the "stop_at" function argument.
At that point the file will be reopened in binary mode and copied
verbatim.

* use "surrogateescape" error handling to ignore unknown chars
* permit to stop filtering if a marker is found
* add unit tests for non-ASCII and mixed text/binary files
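
A hedged sketch of both behaviors (simplified; the real `filter_file` also handles backups and copies the remainder in binary mode):

```
import re

def filter_text(pattern, repl, src_path, dst_path, stop_at=None):
    # errors="surrogateescape" maps unknown bytes to private-use code points,
    # so files with unknown encodings round-trip through str unchanged.
    with open(src_path, errors="surrogateescape") as src, \
         open(dst_path, "w", errors="surrogateescape") as dst:
        for line in src:
            if stop_at is not None and stop_at in line:
                dst.write(line)
                break   # the real filter_file then reopens the file in binary
                        # mode and copies the remainder verbatim
            dst.write(re.sub(pattern, repl, line))
```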
2019-10-14 20:35:14 -07:00
Massimiliano Culpo
8dd95c1705 Fixed options to compile generic code on ppc64 and ppc64le 2019-10-11 21:20:28 -07:00
Massimiliano Culpo
b07460ab5f Added NEON to the list of features required for the aarch64 family
Both floating-point and NEON are required in all standard ARMv8
implementations. In theory, though, specialized markets can omit NEON or
floating-point support entirely. Source:

https://developer.arm.com/docs/den0024/latest/aarch64-floating-point-and-neon

On the other hand the base procedure call standard for Aarch64
"assumes the availability of the vector registers for passing
floating-point and SIMD arguments". Further "the Arm 64-bit
architecture defines two mandatory register banks: a general-purpose
register bank which can be used for scalar integer processing and
pointer arithmetic; and a SIMD and Floating-Point register bank".
Source:

https://developer.arm.com/docs/ihi0055/latest/procedure-call-standard-for-the-arm-64-bit-architecture

This makes an Aarch64 implementation without the NEON instruction set
so unlikely that we can consider NEON (and floating-point) features of
the generic family.
2019-10-10 16:24:36 -07:00
Massimiliano Culpo
78577c0a90
Generic x86_64 code compiled with GCC uses non deprecated mtune flags (#13022)
fixes #12928
2019-10-03 10:31:03 +02:00
Massimiliano Culpo
9117dfd118 Add all the 'generic' architectures that are mentioned in recipes (#12958)
LLVM, mesa and other packages check for these generic
microarchitectures. One solution is to let Spack know they exist.
2019-09-28 21:47:05 -07:00
Adam J. Stewart
065cbe89fe Fix "specific target" detection in Python 3 (#12906)
The output of subprocess.check_output is a byte string in Python 3. This causes dictionary lookup to fail later on.

A try-except around this function prevented this error from being noticed. Removed this so that more errors can propagate out.
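
An illustrative example of the Python 3 pitfall (the sysctl key is just an example, not necessarily the one used here):

```
import subprocess

out = subprocess.check_output(["sysctl", "-n", "machdep.cpu.brand_string"])
print(type(out))                      # <class 'bytes'> on Python 3

brand = out.decode("utf-8").strip()   # decode before using it as a dict key
```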
2019-09-24 09:47:54 -07:00
Massimiliano Culpo
2468ccee58 AMD: fix architecture hierarchy (zen) (#12913)
* microarchitectures: zen starts from x86_64, not from excavator
* Unit tests: fixed a test that is wrong with the new modeling
* microarchitectures: fixed features and inheritance for 15h family

bulldozer doesn't inherit from barcelona (10h); added the xop, lwp and tbm
instruction sets to the 15h family (they distinguish this family from 17h)
2019-09-23 21:54:13 -07:00
Gregory Becker
c43f105359 targets: add mic_knl target to microarchitectures.json
- This is needed to support Cray machines -- we need an architecture
  mic_knl > x86_64

- We used Cray's naming scheme for this target to make it work seamlessly
  with the module-based detection scheme on Cray.  mic_knl is pretty
  much dead, so this will be the last such target.  We will need to work
  with Cray and other vendors in the future.
2019-09-20 00:51:37 -07:00
Massimiliano Culpo
3c4322bf1a targets: Spack targets can now be fine-grained microarchitectures
Spack can now:

- label ppc64, ppc64le, x86_64, etc. builds with specific
  microarchitecture-specific names, like 'haswell', 'skylake' or
  'icelake'.

- detect the host architecture of a machine from /proc/cpuinfo or similar
  tools.

- Understand which microarchitectures are compatible with which (for
  binary reuse)

- Understand which compiler flags are needed (for GCC, so far) to build
  binaries for particular microarchitectures.

All of this is managed through a JSON file (microarchitectures.json) that
contains detailed auto-detection, compiler flag, and compatibility
information for specific microarchitecture targets.  The `llnl.util.cpu`
module implements a library that allows detection and comparison of
microarchitectures based on the data in this file.

The `target` part of Spack specs is now essentially a Microarchitecture
object, and Specs' targets can be compared for compatibility as well.
This allows us to label optimized binary packages at a granularity that
enables them to be reused on compatible machines.  Previously, we only
knew that a package was built for x86_64, NOT which x86_64 machines it
was usable on.

Currently this feature supports Intel, Power, and AMD chips. Support for
ARM is forthcoming.

Specifics:

- Add microarchitectures.json with descriptions of architectures

- Relaxed the semantics of the compiler's "target" attribute.  Before this
  change, a compiler was considered viable for a given target only on an
  exact match. This made sense when the finest granularity of targets was
  architecture families.  Now that we can target micro-architectures, this
  commit interprets what is stored in the compiler's "target" attribute as
  the architecture family. A compiler is then a viable choice if the target
  being concretized belongs to the same family. Similarly, when a new
  compiler is detected the architecture family is stored in the "target"
  attribute.

- Make Spack's `cc` compiler wrapper inject target-specific flags on the
  command line

- Architecture concretization updated to use the same algorithm as
  compiler concretization

- Micro-architecture features, vendor, generation etc. are included in
  the package hash.  Generic architectures, such as x86_64 or ppc64, are
  still dumped using the name only.

- If the compiler for a target is not supported exit with an intelligible
  error message. If the compiler support is unknown don't try to use
  optimization flags.

- Support and define feature aliases (e.g., sse3 -> ssse3) in
  microarchitectures.json and on Microarchitecture objects. Feature
  aliases are defined in targets.json and map a name (the "alias") to a
  list of rules that must be met for the test to be successful. The rules
  that are available can be extended later using a decorator.

- Implement subset semantics for comparing microarchitectures (treat
  microarchitectures as a partial order, i.e. (a < b), (a == b) and (b <
  a) can all be false).

- Implement logic to automatically demote the default target if the
  compiler being used is too old to optimize for it. Updated docs to make
  this behavior explicit.  This avoids surprising the user if the default
  compiler is older than the host architecture.

This commit adds unit tests to verify the semantics of target ranges and
target lists in constraints. The implementation to allow target ranges
and lists is minimal and doesn't add any new type.  A more careful
refactor that takes into account the type system might be due later.

Co-authored-by: Gregory Becker <becker33.llnl.gov>
2019-09-20 00:51:37 -07:00
Gregory Becker
dfabf5d6b1 targets: first pass at target detection for linux
Add llnl.util.cpu_name, with initial support for detecting different
microarchitectures on Linux.  This also adds preliminary changes for
compiler support and variants to control the optimization levels by
target.

This does not yet include translations of targets to particular
compilers; that is left to another PR.

Co-authored-by: Massimiliano Culpo <massimiliano.culpo@gmail.com>
2019-09-20 00:51:37 -07:00
Peter Scheibel
141a1648e6 implicit rpaths filtering (#12789)
* implicit_rpaths are now removed from the compilers.yaml config and are always instantiated dynamically; this occurs once, in the build_environment module

* per-compiler lists of required libraries (e.g. libstdc++, libgfortran) are used to whitelist the rpath directories that contain those libraries. Non-whitelisted implicit rpaths are removed. Some libraries are included by default for all compilers.

* reintroduce 'implicit_rpaths' as a config variable that can be used to disable Spack insertion of compiler RPATHs generated at build time.
2019-09-17 17:45:21 -05:00
Greg Becker
3b115fffb1 permissions: fix file permissions on intermediate install directories (#12399)
- mkdirp now takes arguments to allow it to properly set permissions on created directories.
- Two arguments (group and mode) set permissions for the leaf directory.
- Intermediate directories can inherit permissions from either the topmost existing directory (the parent) or the leaf.
2019-08-20 23:08:02 -07:00
Tamara Dahlgren
0ea6e0f817
features: Remove stage symlinks (#12072)
Fixes #11163

The goal of this work is to simplify stage directory structures by eliminating use of symbolic links. This means, among other things, that `$spack/var/spack/stage` will no longer be the core staging directory. Instead, the first accessible `config:build_stage` path will be used.

Spack will no longer automatically append `spack-stage` (or the like) to configured build stage directories, so the onus of distinguishing the stage directory from other work -- ensuring that other work is not automatically removed by a `spack clean` operation -- falls on the user.
2019-08-19 10:31:24 -07:00
albestro
3a026f1412 Fix #11240 (#11995)
* extends mkdirs with permissions for intermediate folders

Does not use the os.makedirs mode parameter because its behavior changed
in Python 3.7 (the mode is ignored for intermediate dirs), and moreover it
was not possible to set different modes for newly-created intermediate
folders and the leaf folder.

reference:
- https://bugs.python.org/issue19930
- https://docs.python.org/3.7/library/os.html#os.makedirs

* comment mkdirp step easing code understanding

* revert mkdir to default for package metapath

since metapath is nested in package folder, there is no need
to specify permissions for intermediate folders because the prefix
already exists.

* comment create_install_directory package modes
2019-07-19 10:13:29 -05:00
Tim Fuller
5bc15b2d9a find_libraries searches lib and lib64 before prefix (#11958)
The default library search for a package checks the lib/ and lib64/
directories for libraries before the root prefix, in order to save
time when searching for libraries provided by externals (which e.g.
may have '/usr/' as their root).

This moves that logic into the "find_libraries" utility method so
packages implementing their own custom library search logic can
benefit from it.

This also updates packages which appear to be replicating this logic
exactly, replacing it with a single call to "find_libraries".
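
For example, a package's `libs` property might rely on this shared search logic (a hedged sketch; the package, library name and exact keyword arguments are illustrative):

```
from spack import *


class Libfoo(Package):
    """Hypothetical package relying on the shared library-search logic."""

    @property
    def libs(self):
        # find_libraries checks <root>/lib and <root>/lib64 before falling
        # back to a recursive search of the whole prefix.
        return find_libraries(
            ["libfoo"], root=self.prefix, shared=True, recursive=True
        )
```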
2019-07-12 17:46:47 -07:00
Carson Woods
76f1ee5f32 'spack compiler add' resolves relative path to absolute path (#11792)
Fixes #11782

Spack was not properly resolving relative paths to absolute paths
when a relative path was passed to "spack compiler add [PATH]".
Now, if provided a relative path, the absolute path is written to
compilers.yaml rather than the relative path.
2019-07-12 16:06:26 -07:00
Peter Scheibel
284ae9d1cc
Resources: use expanded archive name by default (#11688)
For resources, it is desirable to use the expanded archive name of
the resource as the name of the directory when adding it to the root
staging area.

#11528 established 'spack-src' as the universal directory where
source files are placed, which also affected the behavior of
resources managed with Stages.

This adds a new property ('srcdir') to Stage to remember the name of
the expanded source directory, and uses this as the default name when
placing a resource directory in the root staging area.

This also:

* Ensures that downloaded sources are archived using the expanded
  archive name (otherwise Spack will not be able to determine the
  original directory name when using a cached archive).
* Updates working_dir context manager to guarantee restoration of
  original working directory when an exception occurs
* Adds a "temp_cwd" context manager which creates a temporary
  directory and sets it as the working directory
2019-06-20 11:09:31 -07:00
Adam J. Stewart
4e812090c0
Add additional common C++ and Fortran header file extensions (#11600)
* Add additional common C++ and Fortran header file extensions

* Add .hxx extension

* Add .txx and .tcc extensions

* Add .icc extension
2019-06-11 20:13:55 -04:00
Massimiliano Culpo
6d56d45454 Compiler search uses a pool of workers (#10190)
- spack.compilers.find_compilers now uses a multiprocessing.pool.ThreadPool to execute
  system commands for the detection of compiler versions.

- A few memoized functions have been introduced to avoid poking the filesystem multiple
  times for the same results.

- Performance is much improved, and Spack no longer fork-bombs the system when doing a `compiler find`
2019-06-07 09:57:26 -07:00
Tamara Dahlgren
8e3fd3f7c2 tty: make tty.* print exception types
- make tty.msg, tty.info, etc. print the exception type and stringified
  message if the message argument is an exception.

- simplify parts of the code that call tty.debug(str(e))

- add extra tty.debug statements in places where exceptions were
  previously ignored
2019-06-05 22:41:28 -07:00
Todd Gamblin
d6f2ff1426 link_tree: add option to merge link trees with relative targets
- previous version of link trees would only do absolute symlinks

- this version can do relative links using merge(relative=True)
2019-05-26 18:23:44 -07:00
Todd Gamblin
6380f1917a commands: Add --header and --update options to spack commands
The Spack documentation currently hard-codes some functionality in
`conf.py`, which makes the doc build less "pluggable" for things like
localized doc builds.

In particular, we unconditionally generate an index of commands and a
package list as part of the docs, but those should really only be done if
things are not up to date.

This commit does the following:

- Add `--header` option to `spack commands` so that it can do the work of
  prepending text to its output.

- Add `--update FILE` option to `spack commands` that makes it generate a
  new command index *only* if FILE is out of date w.r.t. commands in the
  Spack source.

- Simplify code in `conf.py` to use these options and only update the
  command index when needed.
2019-05-26 18:23:44 -07:00
Peter Scheibel
ea1de6b941 Maintain a view for an environment (#10017)
Environments are now, by default, created with views.  When activated, if an environment includes a view, this view will be added to `PATH`, `CPATH`, and other shell variables to expose the Spack environment in the user's shell.

Example:

```
spack env create e1 #by default this will maintain a view in the directory Spack maintains for the env
spack env create e1 --with-view=/abs/path/to/anywhere
spack env create e1 --without-view
```

The `spack.yaml` manifest file now looks like this:

```
spack:
  specs:
  - python
  view: true #or false, or a string
```

These commands can be used to control the view configuration for the active environment, without hand-editing the `spack.yaml` file:

```
spack env view enable
spack env view enable /abs/path/to/anywhere
spack env view disable
```

Views are automatically updated when specs are installed to an environment. A view only maintains one copy of any package. An environment may refer to a package multiple times, in particular if it appears as a dependency. This PR establishes a prioritization for which environment specs are added to views: a spec has higher priority if it was concretized first. This does not necessarily exactly match the order in which specs were added, for example, given `X->Z` and `Y->Z'`:

```
spack env activate e1
spack add X
spack install Y # immediately concretizes and installs Y and Z'
spack install # concretizes X and Z
```

In this case `Z'` will be favored over `Z`. 

Specs in the environment must be concrete and installed to be added to the view, so there is another minor ordering effect: by default the view maintained for the environment ignores file conflicts between packages. If packages are not installed in order, and there are file conflicts, then the version chosen depends on the order.

Both ordering issues are avoided if `spack install`/`spack add` and `spack install <spec>` are not mixed.
2019-04-10 16:00:12 -07:00