Compare commits

...

101 Commits

Author SHA1 Message Date
Gregory Becker
afe1fd89b9 WIP -- wait for 18205 to continue 2020-10-21 18:37:21 -07:00
Tamara Dahlgren
1452020f22 Added raja smoke tests; updated build dir for cmake-based package tests 2020-10-14 19:35:45 -07:00
Tamara Dahlgren
9afff8eb60 Resolved eleven unit test failures (#18979) 2020-10-06 10:54:29 -07:00
Tamara Dahlgren
c6cd52f616 Remove unused test_compiler in intel.py (#18950) 2020-09-25 14:04:26 -07:00
Tamara Dahlgren
8bbfbc741d Added __init__.py to address test collection on the tty.py test (#18903) 2020-09-25 14:04:24 -07:00
Tamara Dahlgren
3f31fffe65 Resolved all basic flake8 errors 2020-09-25 14:04:23 -07:00
Tamara Dahlgren
fa023354c6 Restore test subcommand list limited to the first line though (#18723) 2020-09-23 12:44:27 -07:00
Tamara Dahlgren
02bd3d55a0 Bugfix: correct test find stage directory; fix flake8 errors (#18704) 2020-09-23 12:44:22 -07:00
Tamara Dahlgren
e58d4f8cb7 Fix test subcommand help/description (#18721) 2020-09-23 12:44:19 -07:00
Tamara Dahlgren
37a77e0d12 Preliminary binutils install tests (#18645) 2020-09-23 12:44:17 -07:00
Tamara Dahlgren
f10864b96e openmpi: Remove unneeded references to test part status values (#18644) 2020-09-23 12:44:14 -07:00
Greg Becker
19e226259c Features/spack test refactor cmds (#18518)
* no clean -t option, use 'spack test remove'

* refactor commands to make better use of TestSuite objects
2020-09-23 12:44:12 -07:00
Tamara Dahlgren
102b91203a Rename and make escaped text file utility readily available to packages (#18339) 2020-09-23 12:44:09 -07:00
Tamara Dahlgren
04ca718051 Updated hdf smoke test (#18337) 2020-09-23 12:44:06 -07:00
Tamara Dahlgren
61abc75bc6 Add remaining bugfixes and consistency changes from #18210 (#18334) 2020-09-23 12:44:03 -07:00
Greg Becker
86eececc5c Features/spack test refactor (#18277)
* refactor test code into a TestSuite object and install_test module

* update mpi tests

* refactor tests suites to use content hash for name and record reproducibility info

* update unit tests and fix bugs

* Fix tests using data dir for new format
Use new `self.test_stage` object to access current data dir

Co-authored-by: Tamara Dahlgren <dahlgren1@llnl.gov>
2020-09-23 12:44:00 -07:00
Tamara Dahlgren
7551b66cd9 Smoke tests: handle package syntax errors (#17919) 2020-09-23 12:43:58 -07:00
Tamara Dahlgren
c9a00562c4 Preliminary HDF smoke tests (#18076) 2020-09-23 12:43:55 -07:00
Gregory Becker
dafda1ab1a update tests to use status=0 over status=None 2020-09-23 12:43:53 -07:00
Greg Becker
d6b9871169 Features/compiler tests (#17353)
* fix setup of run environment for tests

* remove unnecessary 'None' option from run_tests status arg

* allow package files for virtuals

* run tests for all virtuals provided by each package

* add tests for mpi

* add compiler tests for virtual packages

* run compiler tests automatically like virtuals

* use working_dir instead of os.chdir

* Move knowledge of virtual-ness from spec to repo

* refactor test/cmd/clean

* update cmd/pkg tests for correctness
2020-09-23 12:43:50 -07:00
Tamara Dahlgren
b6d1704729 Smoke tests: Preliminary berkeley-db tests (#17899) 2020-09-23 12:43:46 -07:00
Tamara Dahlgren
f49fa74bf7 Smoke test: Add test of a sequence of hdf5 commands (#17686) 2020-09-23 12:43:44 -07:00
Tamara Dahlgren
0b8bc43fb0 smoke test: preliminary sqlite tests 2020-09-23 12:43:40 -07:00
Tamara Dahlgren
70eb1960fb smoke test: Ensure expected test results and options are lists 2020-09-23 12:43:37 -07:00
Tamara Dahlgren
a7c109c3aa Smoke tests: cmake version checks (#17359)
* Smoke tests: cmake version checks

* Simplified cmake install checks: dict-to-list

Co-authored-by: Greg Becker <becker33@llnl.gov>
2020-09-23 12:43:34 -07:00
Tamara Dahlgren
da62f89f4a Smoke tests: hdf5 version checks and check_install (#17360) 2020-09-23 12:43:31 -07:00
Tamara Dahlgren
d9f0170024 Smoke tests: switched warn to debug plus bugfix (#17576) 2020-09-23 12:43:28 -07:00
Tamara Dahlgren
171ebd8189 Features/spack test emacs (#17363)
* Smoke tests: emacs version checks
2020-09-23 12:43:24 -07:00
Tamara Dahlgren
6d986b4478 Smoke tests: Preliminary Umpire install tests (#17178)
* Preliminary install tests for the Umpire package
2020-09-23 12:43:20 -07:00
Tamara Dahlgren
03569dee8d Add install tests for libsigsegv (#17064) 2020-09-23 12:43:18 -07:00
Tamara Dahlgren
e2ddd7846c bugfix: fix cache_extra_test_sources' file copy; add unit tests (#17057) 2020-09-23 12:43:16 -07:00
Gregory Becker
32693fa573 fixup bugs after rebase 2020-09-23 12:43:14 -07:00
Gregory Becker
e221ba6ba7 update macos test for new unit-test command 2020-09-23 12:43:12 -07:00
Gregory Becker
4146c3d135 flake 2020-09-23 12:43:10 -07:00
Gregory Becker
f5b165a76f flake 2020-09-23 12:43:08 -07:00
Tamara Dahlgren
3508362dde smoke tests: grab and run build examples (openmpi) (#16365)
* Snapshot smoke tests that grab and run examples

* Resolved openmpi example test issues for 2.0.0-4.0.3

* Use spec.satisfies; copy extra packages after install (vs. prior to install tests)

* Added smoke tests for selected openmpi installed binaries

* Use which() to determine if install exe exists

* Switched onus for installer test source grab from installer to package

* Resolved (local) flake8 issues with package.py

* Use runner.name; use string format for *run_test* messages

* Renamed copy_src_to_install to cache_extra_test_source and added comments

* Metadata path cleanup: added metadata_dir property and its use in package

* Support list of source paths to cache for install testing (with unit test)

* Added test subdir to install_test_root; changed skip_file to lambda
2020-09-23 12:43:05 -07:00
Tamara Dahlgren
fb145df4f4 bugfix: Resolve perl install test bug (#16501) 2020-09-23 12:43:02 -07:00
Tamara Dahlgren
fd46f67d63 smoke tests: Refined openmpi version checks (#16337)
* Refined openmpi version checks to pass for 2.1.0 through 4.0.3

* Allow skipping install tests with exe not in bin dir and revised openmpi version tests
2020-09-23 12:43:00 -07:00
Greg Becker
edf3e91a12 Add --fail-first and --fail-fast options to spack test run (#16277)
`spack test run --fail-first` exits after the first failed package.

`spack test run --fail-fast` stops each package test after the first
failure.
2020-09-23 12:42:58 -07:00
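The two flags differ in how far a failure propagates. A minimal sketch of the semantics in plain Python (hypothetical helper names, not Spack's actual implementation):

```python
def run_suite(packages, fail_first=False, fail_fast=False):
    """Sketch of the two failure modes described above.

    fail_first: abort the whole suite after the first package with a failure.
    fail_fast:  abort only the remaining tests of the package that failed.
    """
    results = {}
    for pkg, tests in packages:
        failures = []
        for name, fn in tests:
            try:
                fn()
            except Exception as exc:
                failures.append((name, exc))
                if fail_fast:        # skip the rest of this package's tests
                    break
        results[pkg] = failures
        if fail_first and failures:  # skip all remaining packages
            break
    return results
```

With `fail_first`, a failing first package means later packages never run at all; with `fail_fast`, the failing package stops early but the suite continues.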
Gregory Becker
bca630a22a make output comparisons regex 2020-09-23 12:42:56 -07:00
Gregory Becker
5ff9ba320d remove debug log parser from ctest 2020-09-23 12:42:54 -07:00
Gregory Becker
50640d4924 update from Error to FAILED 2020-09-23 12:42:51 -07:00
Gregory Becker
f5cfcadfc5 change test headings from {name}-{hash} to {name}-{version}-{hash} 2020-09-23 12:42:49 -07:00
Gregory Becker
a5c534b86d update bash completion 2020-09-23 12:42:46 -07:00
Gregory Becker
99364b9c3f refactor 2020-09-23 12:42:43 -07:00
Gregory Becker
749ab2e79d Make Spack tests record their errors and continue
previously, tests would fail on the first error
now, we wrap them in a TestFailure object that records all failures
2020-09-23 12:42:40 -07:00
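The record-and-continue pattern this commit describes can be sketched as follows (a simplified, hypothetical stand-in for Spack's actual `TestFailure` class):

```python
class TestFailure(Exception):
    """Aggregates individual test failures instead of stopping at the first."""
    def __init__(self, failures):
        self.failures = failures  # list of (test_name, exception) pairs
        super().__init__("%d test(s) failed" % len(failures))


def run_all(tests):
    """Run every test, recording errors; raise one TestFailure at the end."""
    failures = []
    for name, fn in tests:
        try:
            fn()
        except Exception as exc:
            failures.append((name, exc))  # record and keep going
    if failures:
        raise TestFailure(failures)
```

The key change from the previous behavior: every test runs to completion, and the caller sees one exception summarizing all failures rather than only the first.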
Greg Becker
1b3e1897ca Features/spack test subcommands (#16054)
* spack test: subcommands for asynchronous tests

* commands are `run`, `list`, `status`, `results`, `remove`.
2020-09-23 12:42:36 -07:00
Tamara Dahlgren
3976b2a083 tests: Preliminary libsigsegv smoke tests (updated) (#15981)
* tests: Preliminary libsigsegv smoke tests (updated)

* Cleaned up and added doc to libsigsegv smoke test
2020-09-23 12:42:33 -07:00
Tamara Dahlgren
54603cb91f tests: Update openmpi smoke tests to new run_test api (#15982)
* tests: Update openmpi smoke tests to new run_test api

* Removed version check try-except tracking per discussion

* Changed openmpi orted command status values to list
2020-09-23 12:42:29 -07:00
Tamara Dahlgren
aa630b8d71 install tests: added support for multiple test command status values (#15979)
* install tests: added support for multiple test command status values
2020-09-23 12:42:25 -07:00
Gregory Becker
77acf8ddc2 spack unit-test: fix pytest help command 2020-09-23 12:42:22 -07:00
Gregory Becker
dfb02e6d45 test runner: add options to check installation dir and print purpose 2020-09-23 12:42:18 -07:00
Gregory Becker
cf4a0cbc01 python: use self.command to get exe name in test 2020-09-23 12:42:16 -07:00
Gregory Becker
b0eb02a86f cmd/test.py: fix typo in spdx license header 2020-09-23 12:42:14 -07:00
Gregory Becker
73f76bc1b5 update bash completions 2020-09-23 12:42:12 -07:00
Gregory Becker
6f39d8011e spack test: factor out common args 2020-09-23 12:42:10 -07:00
Gregory Becker
97dc74c727 python: fix tests, remove intentional debug failures 2020-09-23 12:42:08 -07:00
Gregory Becker
d53eefa69f fix docs 2020-09-23 12:42:05 -07:00
Gregory Becker
bae57f2ae8 spack test: update existing docs for moved unit-test cmd 2020-09-23 12:41:23 -07:00
Gregory Becker
ba58ae9118 simplify error handling using language features 2020-09-23 12:36:23 -07:00
Gregory Becker
fdb8a59bae fix get_package_context check whether in a package file 2020-09-23 12:36:22 -07:00
Gregory Becker
d92f52ae02 fix handling of asserts for python3 2020-09-23 12:36:22 -07:00
Gregory Becker
3229bf04f5 fix 'belt and suspenders' for config values 2020-09-23 12:36:21 -07:00
Gregory Becker
ccf519daa5 update travis 2020-09-23 12:36:20 -07:00
Gregory Becker
c5ae92bf3f flake 2020-09-23 12:36:19 -07:00
Gregory Becker
f83280cb58 standardize names for configure_test, build_test, install_test 2020-09-23 12:36:18 -07:00
Gregory Becker
6e80de652c unbreak zlib 2020-09-23 12:36:17 -07:00
Gregory Becker
0dc212e67d tests and bugfixes 2020-09-23 12:36:16 -07:00
Gregory Becker
3ce2efe32a update bash completions 2020-09-23 12:36:14 -07:00
Gregory Becker
76ce5d90ec fixup unit-test from develop 2020-09-23 12:36:14 -07:00
Gregory Becker
e5a9a376bf fix cmd/clean tests 2020-09-23 12:36:13 -07:00
Gregory Becker
d6a497540d fixup reporter work 2020-09-23 12:36:12 -07:00
Gregory Becker
b996d65a96 bugfix 2020-09-23 12:36:11 -07:00
Gregory Becker
991a2aae37 test name message 2020-09-23 12:36:10 -07:00
Tamara Dahlgren
8ba45e358b Initial OpenMPI smoke tests: version checks 2020-09-23 12:36:09 -07:00
Gregory Becker
28e76be185 spack clean: option to clean test stage (-t) 2020-09-23 12:36:09 -07:00
Gregory Becker
70e91cc1e0 spack test: add dirty/clean flags to command 2020-09-23 12:36:08 -07:00
Gregory Becker
b52113aca9 move test dir to config option 2020-09-23 12:36:07 -07:00
Gregory Becker
ce06e24a2e refactor run_test to Package level 2020-09-23 12:36:06 -07:00
Gregory Becker
dd0fbe670c continue testing after error 2020-09-23 12:36:05 -07:00
Tamara Dahlgren
6ad70b5f5d Preliminary libxml2 tests (#15092)
* Initial libxml2 tests (using executables)

* Expanded libxml2 tests using installed bins

* Refactored/generalized _run_tests
2020-09-23 12:36:04 -07:00
wspear
dadf4d1ed9 Fixed import string (#15094) 2020-09-23 12:36:03 -07:00
Gregory Becker
64bac977f1 add spack test-env command, refactor to combine with build-env 2020-09-23 12:36:02 -07:00
Gregory Becker
2f1d26fa87 allow tests to require compiler 2020-09-23 12:36:02 -07:00
Gregory Becker
cf713c5320 Modify existing test methods to naming scheme <phase_name>test
Existing test methods run via callbacks at install time when run with `spack install --run-tests`
These methods are tied into the package build system, and cannot be run arbitrarily
New naming scheme for these tests based on the build system phase after which they should be run
The method name `test` is now reserved for methods run via the `spack test` command
2020-09-23 12:36:01 -07:00
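The split described above can be illustrated with a skeletal package (the decorator below is a minimal stand-in for Spack's `run_after`, not the real API surface):

```python
# Minimal stand-in for the Spack decorator referenced above, so the
# naming convention can be demonstrated without Spack itself.
def run_after(phase):
    def decorator(fn):
        fn._run_after = phase  # tag the method with its build phase
        return fn
    return decorator


class Perl:
    """Sketch of the convention: phase-named checks run at install time
    (with `spack install --run-tests`), while a method literally named
    `test` is reserved for `spack test` on installed packages."""

    @run_after('build')
    def build_test(self):
        # runs right after the 'build' phase of an install
        return "make test"

    def test(self):
        # runs against an *installed* package via the `spack test` command
        return "perl --version"
```

Renaming the old install-time `test` methods after their build phase frees the name `test` for the new command without changing when the existing checks run.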
Tamara Dahlgren
035e7b3743 tests: Added preliminary smoke test for perl (#14592)
* Added install test for perl, including use statements
2020-09-23 12:35:59 -07:00
Tamara Dahlgren
473457f2ba tests: Preliminary m4 smoke tests (#14553)
* Preliminary m4 smoke tests
2020-09-23 12:35:59 -07:00
Tamara Dahlgren
490bca73d1 Change variable name to 'standard' file to avoid confusion with function (#14589) 2020-09-23 12:35:58 -07:00
Tamara Dahlgren
59e885bd4f tests: Preliminary patchelf smoke tests (#14551)
* Initial patchelf smoke tests
2020-09-23 12:35:57 -07:00
Gregory Becker
966fc427a9 copy test data into './data' in test environment 2020-09-23 12:35:56 -07:00
Gregory Becker
8a34511789 improved error printing 2020-09-23 12:35:55 -07:00
Gregory Becker
8f255f9e6a fix reporter call for install command 2020-09-23 12:35:54 -07:00
Gregory Becker
4d282ad4d9 Changes in cmd/test.py in develop mirrored to cmd/unit-test.py 2020-09-23 12:35:53 -07:00
Gregory Becker
7216451ba7 tests occur in temporary directory, can be kept for debugging 2020-09-23 12:35:52 -07:00
Gregory Becker
e614cdf007 improve error catching/handling/re-raising 2020-09-23 12:35:51 -07:00
Gregory Becker
bc486a961c make test fail 2020-09-23 12:35:50 -07:00
Gregory Becker
a13eab94ce improve logging and add junit basics 2020-09-23 12:35:49 -07:00
Gregory Becker
6574c6779b python3 syntax for re-raising an error with the old traceback 2020-09-23 12:35:48 -07:00
Gregory Becker
d2cfbf177d make cdash test reporter work for testing 2020-09-23 12:35:46 -07:00
Gregory Becker
bfb97e4d57 add reporting format options to spack test 2020-09-23 12:35:14 -07:00
Gregory Becker
4151224ef2 WIP infrastructure for Spack test command to test existing installations 2020-09-23 12:22:26 -07:00
134 changed files with 3567 additions and 728 deletions

View File

@@ -32,7 +32,7 @@ jobs:
 git --version
 . .github/workflows/setup_git.sh
 . share/spack/setup-env.sh
-coverage run $(which spack) test
+coverage run $(which spack) unit-test
 coverage combine
 coverage xml
 - uses: codecov/codecov-action@v1

View File

@@ -64,6 +64,10 @@ config:
 - ~/.spack/stage
 # - $spack/var/spack/stage
+# Directory in which to run tests and store test results.
+# Tests will be stored in directories named by date/time and package
+# name/hash.
+test_stage: ~/.spack/test
 # Cache directory for already downloaded source tarballs and archived
 # repositories. This can be purged with `spack clean --downloads`.

View File

@@ -175,7 +175,7 @@ In the ``perl`` package, we can see:
 @run_after('build')
 @on_package_attributes(run_tests=True)
-def test(self):
+def build_test(self):
 make('test')
 As you can guess, this runs ``make test`` *after* building the package,

View File

@@ -56,7 +56,7 @@ overridden like so:
 .. code-block:: python
-def test(self):
+def build_test(self):
 scons('check')

View File

@@ -74,7 +74,7 @@ locally to speed up the review process.
 We currently test against Python 2.6, 2.7, and 3.5-3.7 on both macOS and Linux and
 perform 3 types of tests:
-.. _cmd-spack-test:
+.. _cmd-spack-unit-test:
 ^^^^^^^^^^
 Unit Tests
@@ -96,7 +96,7 @@ To run *all* of the unit tests, use:
 .. code-block:: console
-$ spack test
+$ spack unit-test
 These tests may take several minutes to complete. If you know you are
 only modifying a single Spack feature, you can run subsets of tests at a
@@ -105,13 +105,13 @@ time. For example, this would run all the tests in
 .. code-block:: console
-$ spack test lib/spack/spack/test/architecture.py
+$ spack unit-test lib/spack/spack/test/architecture.py
 And this would run the ``test_platform`` test from that file:
 .. code-block:: console
-$ spack test lib/spack/spack/test/architecture.py::test_platform
+$ spack unit-test lib/spack/spack/test/architecture.py::test_platform
 This allows you to develop iteratively: make a change, test that change,
 make another change, test that change, etc. We use `pytest
@@ -121,29 +121,29 @@ pytest docs
 <http://doc.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests>`_
 for more details on test selection syntax.
-``spack test`` has a few special options that can help you understand
-what tests are available. To get a list of all available unit test
-files, run:
+``spack unit-test`` has a few special options that can help you
+understand what tests are available. To get a list of all available
+unit test files, run:
-.. command-output:: spack test --list
+.. command-output:: spack unit-test --list
 :ellipsis: 5
-To see a more detailed list of available unit tests, use ``spack test
---list-long``:
+To see a more detailed list of available unit tests, use ``spack
+unit-test --list-long``:
-.. command-output:: spack test --list-long
+.. command-output:: spack unit-test --list-long
 :ellipsis: 10
 And to see the fully qualified names of all tests, use ``--list-names``:
-.. command-output:: spack test --list-names
+.. command-output:: spack unit-test --list-names
 :ellipsis: 5
 You can combine these with ``pytest`` arguments to restrict which tests
 you want to know about. For example, to see just the tests in
 ``architecture.py``:
-.. command-output:: spack test --list-long lib/spack/spack/test/architecture.py
+.. command-output:: spack unit-test --list-long lib/spack/spack/test/architecture.py
 You can also combine any of these options with a ``pytest`` keyword
 search. See the `pytest usage docs
@@ -151,7 +151,7 @@ search. See the `pytest usage docs
 for more details on test selection syntax. For example, to see the names of all tests that have "spec"
 or "concretize" somewhere in their names:
-.. command-output:: spack test --list-names -k "spec and concretize"
+.. command-output:: spack unit-test --list-names -k "spec and concretize"
 By default, ``pytest`` captures the output of all unit tests, and it will
 print any captured output for failed tests. Sometimes it's helpful to see
@@ -161,7 +161,7 @@ argument to ``pytest``:
 .. code-block:: console
-$ spack test -s --list-long lib/spack/spack/test/architecture.py::test_platform
+$ spack unit-test -s --list-long lib/spack/spack/test/architecture.py::test_platform
 Unit tests are crucial to making sure bugs aren't introduced into
 Spack. If you are modifying core Spack libraries or adding new
@@ -176,7 +176,7 @@ how to write tests!
 You may notice the ``share/spack/qa/run-unit-tests`` script in the
 repository. This script is designed for CI. It runs the unit
 tests and reports coverage statistics back to Codecov. If you want to
-run the unit tests yourself, we suggest you use ``spack test``.
+run the unit tests yourself, we suggest you use ``spack unit-test``.
 ^^^^^^^^^^^^
 Flake8 Tests

View File

@@ -363,11 +363,12 @@ Developer commands
 ``spack doc``
 ^^^^^^^^^^^^^
-^^^^^^^^^^^^^^
-``spack test``
-^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^
+``spack unit-test``
+^^^^^^^^^^^^^^^^^^^
-See the :ref:`contributor guide section <cmd-spack-test>` on ``spack test``.
+See the :ref:`contributor guide section <cmd-spack-unit-test>` on
+``spack unit-test``.
 .. _cmd-spack-python:

View File

@@ -87,11 +87,12 @@ will be available from the command line:
 --implicit select specs that are not installed or were installed implicitly
 --output OUTPUT where to dump the result
-The corresponding unit tests can be run giving the appropriate options to ``spack test``:
+The corresponding unit tests can be run giving the appropriate options
+to ``spack unit-test``:
 .. code-block:: console
-$ spack test --extension=scripting
+$ spack unit-test --extension=scripting
 ============================================================== test session starts ===============================================================
 platform linux2 -- Python 2.7.15rc1, pytest-3.2.5, py-1.4.34, pluggy-0.4.0

View File

@@ -118,6 +118,7 @@ def match(self, text):
 "([^:]+): (Error:|error|undefined reference|multiply defined)",
 "([^ :]+) ?: (error|fatal error|catastrophic error)",
 "([^:]+)\\(([^\\)]+)\\) ?: (error|fatal error|catastrophic error)"),
+"^FAILED",
 "^[Bb]us [Ee]rror",
 "^[Ss]egmentation [Vv]iolation",
 "^[Ss]egmentation [Ff]ault",

View File

@@ -41,6 +41,8 @@
 'fix_darwin_install_name',
 'force_remove',
 'force_symlink',
+'chgrp',
+'chmod_x',
 'copy',
 'install',
 'copy_tree',
@@ -51,6 +53,7 @@
 'partition_path',
 'prefixes',
 'remove_dead_links',
+'remove_directory_contents',
 'remove_if_dead_link',
 'remove_linked_tree',
 'set_executable',
@@ -1796,3 +1799,13 @@ def md5sum(file):
 with open(file, "rb") as f:
 md5.update(f.read())
 return md5.digest()
+def remove_directory_contents(dir):
+"""Remove all contents of a directory."""
+if os.path.exists(dir):
+for entry in [os.path.join(dir, entry) for entry in os.listdir(dir)]:
+if os.path.isfile(entry) or os.path.islink(entry):
+os.unlink(entry)
+else:
+shutil.rmtree(entry)

View File

@@ -324,7 +324,8 @@ class log_output(object):
 work within test frameworks like nose and pytest.
 """
-def __init__(self, file_like=None, echo=False, debug=0, buffer=False):
+def __init__(self, file_like=None, output=None, error=None,
+echo=False, debug=0, buffer=False):
 """Create a new output log context manager.
 Args:
@@ -349,13 +350,15 @@ def __init__(self, file_like=None, echo=False, debug=0, buffer=False):
 """
 self.file_like = file_like
+self.output = output or sys.stdout
+self.error = error or sys.stderr
 self.echo = echo
 self.debug = debug
 self.buffer = buffer
 self._active = False  # used to prevent re-entry
-def __call__(self, file_like=None, echo=None, debug=None, buffer=None):
+def __call__(self, file_like=None, output=None, error=None,
+echo=None, debug=None, buffer=None):
 """This behaves the same as init. It allows a logger to be reused.
 Arguments are the same as for ``__init__()``. Args here take
@@ -376,6 +379,10 @@ def __call__(self, file_like=None, echo=None, debug=None, buffer=None):
 """
 if file_like is not None:
 self.file_like = file_like
+if output is not None:
+self.output = output
+if error is not None:
+self.error = error
 if echo is not None:
 self.echo = echo
 if debug is not None:
@@ -434,8 +441,8 @@ def __enter__(self):
 self.process = fork_context.Process(
 target=_writer_daemon,
 args=(
-input_stream, read_fd, write_fd, self.echo, self.log_file,
-child_pipe
+input_stream, read_fd, write_fd, self.echo, self.output,
+self.log_file, child_pipe
 )
 )
 self.process.daemon = True  # must set before start()
@@ -448,43 +455,54 @@ def __enter__(self):
 # Flush immediately before redirecting so that anything buffered
 # goes to the original stream
-sys.stdout.flush()
-sys.stderr.flush()
+self.output.flush()
+self.error.flush()
+# sys.stdout.flush()
+# sys.stderr.flush()
 # Now do the actual output rediction.
-self.use_fds = _file_descriptors_work(sys.stdout, sys.stderr)
+self.use_fds = _file_descriptors_work(self.output, self.error)#sys.stdout, sys.stderr)
 if self.use_fds:
 # We try first to use OS-level file descriptors, as this
 # redirects output for subprocesses and system calls.
 # Save old stdout and stderr file descriptors
-self._saved_stdout = os.dup(sys.stdout.fileno())
-self._saved_stderr = os.dup(sys.stderr.fileno())
+self._saved_output = os.dup(self.output.fileno())
+self._saved_error = os.dup(self.error.fileno())
+# self._saved_stdout = os.dup(sys.stdout.fileno())
+# self._saved_stderr = os.dup(sys.stderr.fileno())
 # redirect to the pipe we created above
-os.dup2(write_fd, sys.stdout.fileno())
-os.dup2(write_fd, sys.stderr.fileno())
+os.dup2(write_fd, self.output.fileno())
+os.dup2(write_fd, self.error.fileno())
+# os.dup2(write_fd, sys.stdout.fileno())
+# os.dup2(write_fd, sys.stderr.fileno())
 os.close(write_fd)
 else:
 # Handle I/O the Python way. This won't redirect lower-level
 # output, but it's the best we can do, and the caller
 # shouldn't expect any better, since *they* have apparently
 # redirected I/O the Python way.
 # Save old stdout and stderr file objects
-self._saved_stdout = sys.stdout
-self._saved_stderr = sys.stderr
+self._saved_output = self.output
+self._saved_error = self.error
+# self._saved_stdout = sys.stdout
+# self._saved_stderr = sys.stderr
 # create a file object for the pipe; redirect to it.
 pipe_fd_out = os.fdopen(write_fd, 'w')
-sys.stdout = pipe_fd_out
-sys.stderr = pipe_fd_out
+self.output = pipe_fd_out
+self.error = pipe_fd_out
+# sys.stdout = pipe_fd_out
+# sys.stderr = pipe_fd_out
 # Unbuffer stdout and stderr at the Python level
 if not self.buffer:
-sys.stdout = Unbuffered(sys.stdout)
-sys.stderr = Unbuffered(sys.stderr)
+self.output = Unbuffered(self.output)
+self.error = Unbuffered(self.error)
+# sys.stdout = Unbuffered(sys.stdout)
+# sys.stderr = Unbuffered(sys.stderr)
 # Force color and debug settings now that we have redirected.
 tty.color.set_color_when(forced_color)
@@ -499,20 +517,29 @@ def __enter__(self):
 def __exit__(self, exc_type, exc_val, exc_tb):
 # Flush any buffered output to the logger daemon.
-sys.stdout.flush()
-sys.stderr.flush()
+self.output.flush()
+self.error.flush()
+# sys.stdout.flush()
+# sys.stderr.flush()
 # restore previous output settings, either the low-level way or
 # the python way
 if self.use_fds:
-os.dup2(self._saved_stdout, sys.stdout.fileno())
-os.close(self._saved_stdout)
-os.dup2(self._saved_stderr, sys.stderr.fileno())
-os.close(self._saved_stderr)
+os.dup2(self._saved_output, self.output.fileno())
+os.close(self._saved_output)
+os.dup2(self._saved_error, self.error.fileno())
+os.close(self._saved_error)
+# os.dup2(self._saved_stdout, sys.stdout.fileno())
+# os.close(self._saved_stdout)
+# os.dup2(self._saved_stderr, sys.stderr.fileno())
+# os.close(self._saved_stderr)
 else:
-sys.stdout = self._saved_stdout
-sys.stderr = self._saved_stderr
+self.output = self._saved_output
+self.error = self._saved_error
+# sys.stdout = self._saved_stdout
+# sys.stderr = self._saved_stderr
 # print log contents in parent if needed.
 if self.write_log_in_parent:
@@ -546,16 +573,17 @@ def force_echo(self):
 # output. We us these control characters rather than, say, a
 # separate pipe, because they're in-band and assured to appear
 # exactly before and after the text we want to echo.
-sys.stdout.write(xon)
-sys.stdout.flush()
+self.output.write(xon)
+self.output.flush()
 try:
 yield
 finally:
-sys.stdout.write(xoff)
-sys.stdout.flush()
+self.output.write(xoff)
+self.output.flush()
-def _writer_daemon(stdin, read_fd, write_fd, echo, log_file, control_pipe):
+def _writer_daemon(stdin, read_fd, write_fd, echo, echo_stream, log_file,
+control_pipe):
 """Daemon used by ``log_output`` to write to a log file and to ``stdout``.
 The daemon receives output from the parent process and writes it both
@@ -598,6 +626,7 @@ def _writer_daemon(stdin, read_fd, write_fd, echo, log_file, control_pipe):
 immediately closed by the writer daemon)
 echo (bool): initial echo setting -- controlled by user and
 preserved across multiple writer daemons
+echo_stream (stream): output to echo to when echoing
 log_file (file-like): file to log all output
 control_pipe (Pipe): multiprocessing pipe on which to send control
 information to the parent
@@ -652,8 +681,8 @@ def _writer_daemon(stdin, read_fd, write_fd, echo, log_file, control_pipe):
 # Echo to stdout if requested or forced.
 if echo or force_echo:
-sys.stdout.write(line)
-sys.stdout.flush()
+echo_stream.write(line)
+echo_stream.flush()
 # Stripped output to log file.
 log_file.write(_strip(line))

View File

@@ -33,7 +33,6 @@
 calls you can make from within the install() function.
 """
 import re
-import inspect
 import multiprocessing
 import os
 import shutil
@@ -52,9 +51,12 @@
 import spack.config
 import spack.main
 import spack.paths
+import spack.package
 import spack.schema.environment
 import spack.store
+import spack.install_test
 import spack.architecture as arch
 import spack.util.path
+from spack.util.string import plural
 from spack.util.environment import (
 env_flag, filter_system_paths, get_path, is_system_path,
@@ -451,7 +453,6 @@ def _set_variables_for_single_module(pkg, module):
 jobs = spack.config.get('config:build_jobs', 16) if pkg.parallel else 1
 jobs = min(jobs, multiprocessing.cpu_count())
-assert jobs is not None, "no default set for config:build_jobs"
 m = module
 m.make_jobs = jobs
@@ -711,28 +712,43 @@ def load_external_modules(pkg):
 load_module(external_module)
-def setup_package(pkg, dirty):
+def setup_package(pkg, dirty, context='build'):
 """Execute all environment setup routines."""
-build_env = EnvironmentModifications()
+env = EnvironmentModifications()
 # clean environment
 if not dirty:
 clean_environment()
-set_compiler_environment_variables(pkg, build_env)
-set_build_environment_variables(pkg, build_env, dirty)
-pkg.architecture.platform.setup_platform_environment(pkg, build_env)
+# setup compilers and build tools for build contexts
+need_compiler = context == 'build' or (context == 'test' and
+pkg.test_requires_compiler)
+if need_compiler:
+set_compiler_environment_variables(pkg, env)
+set_build_environment_variables(pkg, env, dirty)
-build_env.extend(
-modifications_from_dependencies(pkg.spec, context='build')
-)
+# architecture specific setup
+pkg.architecture.platform.setup_platform_environment(pkg, env)
-if (not dirty) and (not build_env.is_unset('CPATH')):
-tty.debug("A dependency has updated CPATH, this may lead pkg-config"
-" to assume that the package is part of the system"
-" includes and omit it when invoked with '--cflags'.")
+if context == 'build':
+# recursive post-order dependency information
+env.extend(
+modifications_from_dependencies(pkg.spec, context=context)
+)
-set_module_variables_for_package(pkg)
-pkg.setup_build_environment(build_env)
+if (not dirty) and (not env.is_unset('CPATH')):
+tty.debug("A dependency has updated CPATH, this may lead pkg-"
+"config to assume that the package is part of the system"
+" includes and omit it when invoked with '--cflags'.")
+# setup package itself
+set_module_variables_for_package(pkg)
+pkg.setup_build_environment(env)
+elif context == 'test':
+import spack.user_environment as uenv  # avoid circular import
+env.extend(uenv.environment_modifications_for_spec(pkg.spec))
+set_module_variables_for_package(pkg)
+env.prepend_path('PATH', '.')
 # Loading modules, in particular if they are meant to be used outside
 # of Spack, can change environment variables that are relevant to the
@@ -742,15 +758,16 @@ def setup_package(pkg, dirty):
 # unnecessary. Modules affecting these variables will be overwritten anyway
 with preserve_environment('CC', 'CXX', 'FC', 'F77'):
 # All module loads that otherwise would belong in previous
-# functions have to occur after the build_env object has its
+# functions have to occur after the env object has its
 # modifications applied. Otherwise the environment modifications
 # could undo module changes, such as unsetting LD_LIBRARY_PATH
 # after a module changes it.
-for mod in pkg.compiler.modules:
-# Fixes issue https://github.com/spack/spack/issues/3153
-if os.environ.get("CRAY_CPU_TARGET") == "mic-knl":
-load_module("cce")
-load_module(mod)
+if need_compiler:
+for mod in pkg.compiler.modules:
+# Fixes issue https://github.com/spack/spack/issues/3153
+if os.environ.get("CRAY_CPU_TARGET") == "mic-knl":
+load_module("cce")
+load_module(mod)
 # kludge to handle cray libsci being automatically loaded by PrgEnv
 # modules on cray platform. Module unload does no damage when
@@ -764,12 +781,12 @@ def setup_package(pkg, dirty):
 implicit_rpaths = pkg.compiler.implicit_rpaths()
 if implicit_rpaths:
-build_env.set('SPACK_COMPILER_IMPLICIT_RPATHS',
-':'.join(implicit_rpaths))
+env.set('SPACK_COMPILER_IMPLICIT_RPATHS',
+':'.join(implicit_rpaths))
 # Make sure nothing's strange about the Spack environment.
-validate(build_env, tty.warn)
-build_env.apply_modifications()
+validate(env, tty.warn)
+env.apply_modifications()
 def modifications_from_dependencies(spec, context):
@@ -789,7 +806,8 @@ def modifications_from_dependencies(spec, context):
 deptype_and_method = {
 'build': (('build', 'link', 'test'),
 'setup_dependent_build_environment'),
-'run': (('link', 'run'), 'setup_dependent_run_environment')
+'run': (('link', 'run'), 'setup_dependent_run_environment'),
+'test': (('link', 'run', 'test'), 'setup_dependent_run_environment')
 }
 deptype, method = deptype_and_method[context]
@@ -803,7 +821,7 @@ def modifications_from_dependencies(spec, context):
 return env
-def fork(pkg, function, dirty, fake):
+def fork(pkg, function, dirty, fake, context='build', **kwargs):
 """Fork a child process to do part of a spack build.
 Args:
@@ -815,6 +833,8 @@ def fork(pkg, function, dirty, fake):
 dirty (bool): If True, do NOT clean the environment before
 building.
 fake (bool): If True, skip package setup b/c it's not a real build
+context (string): If 'build', setup build environment. If 'test', setup
+test environment.
 Usage::
@@ -843,7 +863,7 @@ def child_process(child_pipe, input_stream):
 try:
 if not fake:
-setup_package(pkg, dirty=dirty)
+setup_package(pkg, dirty=dirty, context=context)
 return_value = function()
 child_pipe.send(return_value)
@@ -861,19 +881,29 @@ def child_process(child_pipe, input_stream):
 # build up some context from the offending package so we can
 # show that, too.
-package_context = get_package_context(tb)
+if exc_type is not spack.install_test.TestFailure:
+package_context = get_package_context(traceback.extract_tb(tb))
+else:
+package_context = []
 build_log = None
-if hasattr(pkg, 'log_path'):
+if context == 'build' and hasattr(pkg, 'log_path'):
 build_log = pkg.log_path
+test_log = None
+if context == 'test':
+test_log = os.path.join(
+pkg.test_suite.stage,
+spack.install_test.TestSuite.test_log_name(pkg.spec))
 # make a pickleable exception to send to parent.
 msg = "%s: %s" % (exc_type.__name__, str(exc))
 ce = ChildError(msg,
 exc_type.__module__,
 exc_type.__name__,
-tb_string, build_log, package_context)
+tb_string, package_context,
+build_log, test_log)
 child_pipe.send(ce)
 finally:
@@ -926,8 +956,8 @@ def get_package_context(traceback, context=3):
 """Return some context for an error message when the build fails.
 Args:
-traceback (traceback): A traceback from some exception raised during
-install
+traceback (list of tuples): output from traceback.extract_tb() or
+traceback.extract_stack()
 context (int): Lines of context to show before and after the line
 where the error happened
@@ -936,51 +966,44 @@ def get_package_context(traceback, context=3):
 from there.
 """
-def make_stack(tb, stack=None):
-"""Tracebacks come out of the system in caller -> callee order. Return
-an array in callee -> caller order so we can traverse it."""
-if stack is None:
-stack = []
-if tb is not None:
-make_stack(tb.tb_next, stack)
-stack.append(tb)
-return stack
-stack = make_stack(traceback)
-for tb in stack:
-frame = tb.tb_frame
-if 'self' in frame.f_locals:
-# Find the first proper subclass of PackageBase.
-obj = frame.f_locals['self']
-if isinstance(obj, spack.package.PackageBase):
+for filename, lineno, function, text in reversed(traceback):
if 'package.py' in filename or 'spack/build_systems' in filename:
if function not in ('run_test', '_run_test_helper'):
# We are in a package and not one of the listed methods
# We exclude these methods because we expect errors in them to
# be the result of user tests failing, and we show the tests
# instead.
break
# Package files have a line added at import time, so we adjust the lineno
# when we are getting context from a package file instead of a base class
adjust = 1 if spack.paths.is_package_file(filename) else 0
lineno = lineno - adjust
# We found obj, the Package implementation we care about.
# Point out the location in the install method where we failed.
lines = [
'{0}:{1:d}, in {2}:'.format(
inspect.getfile(frame.f_code),
frame.f_lineno - 1, # subtract 1 because f_lineno is 0-indexed
frame.f_code.co_name
filename,
lineno,
function
)
]
# Build a message showing context in the install method.
sourcelines, start = inspect.getsourcelines(frame)
# Calculate lineno of the error relative to the start of the function.
# Subtract 1 because f_lineno is 0-indexed.
fun_lineno = frame.f_lineno - start - 1
start_ctx = max(0, fun_lineno - context)
sourcelines = sourcelines[start_ctx:fun_lineno + context + 1]
# Adjust for import mangling of package files.
with open(filename, 'r') as f:
sourcelines = f.readlines()
start = max(0, lineno - context - 1)
sourcelines = sourcelines[start:lineno + context + 1]
for i, line in enumerate(sourcelines):
is_error = start_ctx + i == fun_lineno
i = i + adjust # adjusting for import munging again
is_error = start + i == lineno
mark = '>> ' if is_error else ' '
# Add start to get lineno relative to start of file, not function.
marked = ' {0}{1:-6d}{2}'.format(
mark, start + start_ctx + i, line.rstrip())
mark, start + i, line.rstrip())
if is_error:
marked = colorize('@R{%s}' % cescape(marked))
lines.append(marked)
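The marking loop above prints a window of source lines around the failing line, prefixing the offender with `>>`. A self-contained sketch of that windowing logic (simplified: no import-mangling adjustment, no colorization):

```python
def mark_context(sourcelines, lineno, context=3):
    """Hedged sketch of the error-marking loop above: show `context` lines
    around 1-based line `lineno`, prefixing the error line with '>>'."""
    start = max(0, lineno - context - 1)
    window = sourcelines[start:lineno + context]
    out = []
    for i, line in enumerate(window):
        is_error = start + i == lineno - 1
        mark = '>> ' if is_error else '   '
        # start + i + 1 converts back to a 1-based file line number
        out.append(' {0}{1:-6d}{2}'.format(mark, start + i + 1, line.rstrip()))
    return out
```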
@@ -1034,14 +1057,15 @@ class ChildError(InstallError):
# context instead of Python context.
build_errors = [('spack.util.executable', 'ProcessError')]
def __init__(self, msg, module, classname, traceback_string, build_log,
context):
def __init__(self, msg, module, classname, traceback_string, context,
build_log, test_log):
super(ChildError, self).__init__(msg)
self.module = module
self.name = classname
self.traceback = traceback_string
self.build_log = build_log
self.context = context
self.build_log = build_log
self.test_log = test_log
@property
def long_message(self):
@@ -1050,21 +1074,12 @@ def long_message(self):
if (self.module, self.name) in ChildError.build_errors:
# The error happened in some external executed process. Show
# the build log with errors or warnings highlighted.
# the log with errors or warnings highlighted.
if self.build_log and os.path.exists(self.build_log):
errors, warnings = parse_log_events(self.build_log)
nerr = len(errors)
nwar = len(warnings)
if nerr > 0:
# If errors are found, only display errors
out.write(
"\n%s found in build log:\n" % plural(nerr, 'error'))
out.write(make_log_context(errors))
elif nwar > 0:
# If no errors are found but warnings are, display warnings
out.write(
"\n%s found in build log:\n" % plural(nwar, 'warning'))
out.write(make_log_context(warnings))
write_log_summary(out, 'build', self.build_log)
if self.test_log and os.path.exists(self.test_log):
write_log_summary(out, 'test', self.test_log)
else:
# The error happened in the Python code, so try to show
@@ -1081,6 +1096,10 @@ def long_message(self):
out.write('See build log for details:\n')
out.write(' %s\n' % self.build_log)
if self.test_log and os.path.exists(self.test_log):
out.write('See test log for details:\n')
out.write(' %s\n' % self.test_log)
return out.getvalue()
def __str__(self):
@@ -1097,13 +1116,16 @@ def __reduce__(self):
self.module,
self.name,
self.traceback,
self.context,
self.build_log,
self.context)
self.test_log)
def _make_child_error(msg, module, name, traceback, build_log, context):
def _make_child_error(msg, module, name, traceback, context,
build_log, test_log):
"""Used by __reduce__ in ChildError to reconstruct pickled errors."""
return ChildError(msg, module, name, traceback, build_log, context)
return ChildError(msg, module, name, traceback, context,
build_log, test_log)
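`ChildError.__reduce__` and `_make_child_error` exist so the exception can survive the pickle round-trip through the fork's pipe, and the diff's argument reordering must stay consistent on both sides. A minimal sketch of the same pattern with illustrative names:

```python
import pickle


class PicklableError(Exception):
    """Hedged sketch of the ChildError pickling pattern above; fields and
    names are illustrative, not Spack's actual class."""

    def __init__(self, msg, context, build_log, test_log):
        super(PicklableError, self).__init__(msg)
        self.context = context
        self.build_log = build_log
        self.test_log = test_log

    def __reduce__(self):
        # Rebuild via a module-level factory so the argument order is
        # explicit and matches __init__.
        return _make_error, (str(self), self.context, self.build_log,
                             self.test_log)


def _make_error(msg, context, build_log, test_log):
    """Used by __reduce__ to reconstruct pickled errors."""
    return PicklableError(msg, context, build_log, test_log)
```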
class StopPhase(spack.error.SpackError):
@@ -1114,3 +1136,30 @@ def __reduce__(self):
def _make_stop_phase(msg, long_msg):
return StopPhase(msg, long_msg)
def write_log_summary(out, log_type, log, last=None):
errors, warnings = parse_log_events(log)
nerr = len(errors)
nwar = len(warnings)
if nerr > 0:
if last and nerr > last:
errors = errors[-last:]
nerr = last
# If errors are found, only display errors
out.write(
"\n%s found in %s log:\n" %
(plural(nerr, 'error'), log_type))
out.write(make_log_context(errors))
elif nwar > 0:
if last and nwar > last:
warnings = warnings[-last:]
nwar = last
# If no errors are found but warnings are, display warnings
out.write(
"\n%s found in %s log:\n" %
(plural(nwar, 'warning'), log_type))
out.write(make_log_context(warnings))
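`write_log_summary` reports errors if any exist (optionally truncated to the last few), and falls back to warnings only when there are no errors. That selection rule, extracted into a standalone sketch (the real function also formats the log context; names here are ours):

```python
def summarize(errors, warnings, last=None):
    """Hedged sketch of write_log_summary's selection rule: prefer errors
    over warnings, and keep only the last `last` events if requested."""
    if errors:
        picked = errors[-last:] if last else errors
        return ('error', picked)
    elif warnings:
        picked = warnings[-last:] if last else warnings
        return ('warning', picked)
    return (None, [])
```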


@@ -308,14 +308,21 @@ def flags_to_build_system_args(self, flags):
self.cmake_flag_args.append(libs_string.format(lang,
libs_flags))
@property
def build_dirname(self):
"""Returns the directory name to use when building the package
:return: name of the subdirectory for building the package
"""
return 'spack-build-%s' % self.spec.dag_hash(7)
@property
def build_directory(self):
"""Returns the directory to use when building the package
:return: directory where to build the package
"""
dirname = 'spack-build-%s' % self.spec.dag_hash(7)
return os.path.join(self.stage.path, dirname)
return os.path.join(self.stage.path, self.build_dirname)
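The refactor above splits the hash-suffixed directory name (`build_dirname`) out of the full path (`build_directory`) so package tests can reuse the name alone. A sketch of the naming scheme, substituting a plain sha1 prefix for `spec.dag_hash(7)` (which is Spack-internal):

```python
import hashlib
import os


def build_dirname(spec_name, length=7):
    """Hedged sketch of the 'spack-build-<hash>' naming above; sha1 stands
    in for Spack's spec.dag_hash(7)."""
    digest = hashlib.sha1(spec_name.encode('utf-8')).hexdigest()
    return 'spack-build-%s' % digest[:length]


def build_directory(stage_path, spec_name):
    """Join the stage path with the hash-suffixed build directory name."""
    return os.path.join(stage_path, build_dirname(spec_name))
```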
def cmake_args(self):
"""Produces a list containing all the arguments that must be passed to


@@ -1017,6 +1017,15 @@ def setup_run_environment(self, env):
env.extend(EnvironmentModifications.from_sourcing_file(f, *args))
if self.spec.name in ('intel', 'intel-parallel-studio'):
# this package provides compilers
# TODO: fix check above when compilers are dependencies
env.set('CC', self.prefix.bin.icc)
env.set('CXX', self.prefix.bin.icpc)
env.set('FC', self.prefix.bin.ifort)
env.set('F77', self.prefix.bin.ifort)
env.set('F90', self.prefix.bin.ifort)
def setup_dependent_build_environment(self, env, dependent_spec):
# NB: This function is overwritten by 'mpi' provider packages:
#


@@ -91,7 +91,7 @@ def configure(self, spec, prefix):
build_system_class = 'PythonPackage'
#: Callback names for build-time test
build_time_test_callbacks = ['test']
build_time_test_callbacks = ['build_test']
#: Callback names for install-time test
install_time_test_callbacks = ['import_module_test']
@@ -361,7 +361,7 @@ def check_args(self, spec, prefix):
# Testing
def test(self):
def build_test(self):
"""Run unit tests after in-place build.
These tests are only run if the package actually has a 'test' command.


@@ -33,7 +33,7 @@ class SConsPackage(PackageBase):
build_system_class = 'SConsPackage'
#: Callback names for build-time test
build_time_test_callbacks = ['test']
build_time_test_callbacks = ['build_test']
depends_on('scons', type='build')
@@ -59,7 +59,7 @@ def install(self, spec, prefix):
# Testing
def test(self):
def build_test(self):
"""Run unit tests after build.
By default, does nothing. Override this if you want to


@@ -47,10 +47,10 @@ class WafPackage(PackageBase):
build_system_class = 'WafPackage'
# Callback names for build-time test
build_time_test_callbacks = ['test']
build_time_test_callbacks = ['build_test']
# Callback names for install-time test
install_time_test_callbacks = ['installtest']
install_time_test_callbacks = ['install_test']
# Much like AutotoolsPackage does not require automake and autoconf
# to build, WafPackage does not require waf to build. It only requires
@@ -106,7 +106,7 @@ def install_args(self):
# Testing
def test(self):
def build_test(self):
"""Run unit tests after build.
By default, does nothing. Override this if you want to
@@ -116,7 +116,7 @@ def test(self):
run_after('build')(PackageBase._run_default_build_time_test_callbacks)
def installtest(self):
def install_test(self):
"""Run unit tests after install.
By default, does nothing. Override this if you want to


@@ -2,86 +2,15 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from __future__ import print_function
import argparse
import os
import llnl.util.tty as tty
import spack.build_environment as build_environment
import spack.cmd
import spack.cmd.common.arguments as arguments
from spack.util.environment import dump_environment, pickle_environment
import spack.cmd.common.env_utility as env_utility
description = "run a command in a spec's install environment, " \
"or dump its environment to screen or file"
section = "build"
level = "long"
def setup_parser(subparser):
arguments.add_common_arguments(subparser, ['clean', 'dirty'])
subparser.add_argument(
'--dump', metavar="FILE",
help="dump a source-able environment to FILE"
)
subparser.add_argument(
'--pickle', metavar="FILE",
help="dump a pickled source-able environment to FILE"
)
subparser.add_argument(
'spec', nargs=argparse.REMAINDER,
metavar='spec [--] [cmd]...',
help="spec of package environment to emulate")
subparser.epilog\
= 'If a command is not specified, the environment will be printed ' \
'to standard output (cf /usr/bin/env) unless --dump and/or --pickle ' \
'are specified.\n\nIf a command is specified and spec is ' \
'multi-word, then the -- separator is obligatory.'
setup_parser = env_utility.setup_parser
def build_env(parser, args):
if not args.spec:
tty.die("spack build-env requires a spec.")
# Specs may have spaces in them, so if they do, require that the
# caller put a '--' between the spec and the command to be
# executed. If there is no '--', assume that the spec is the
# first argument.
sep = '--'
if sep in args.spec:
s = args.spec.index(sep)
spec = args.spec[:s]
cmd = args.spec[s + 1:]
else:
spec = args.spec[0]
cmd = args.spec[1:]
specs = spack.cmd.parse_specs(spec, concretize=True)
if len(specs) > 1:
tty.die("spack build-env only takes one spec.")
spec = specs[0]
build_environment.setup_package(spec.package, args.dirty)
if args.dump:
# Dump a source-able environment to a text file.
tty.msg("Dumping a source-able environment to {0}".format(args.dump))
dump_environment(args.dump)
if args.pickle:
# Dump a source-able environment to a pickle file.
tty.msg(
"Pickling a source-able environment to {0}".format(args.pickle))
pickle_environment(args.pickle)
if cmd:
# Execute the command with the new environment
os.execvp(cmd[0], cmd)
elif not bool(args.pickle or args.dump):
# If no command or dump/pickle option act like the "env" command
# and print out env vars.
for key, val in os.environ.items():
print("%s=%s" % (key, val))
env_utility.emulate_env_utility('build-env', 'build', args)


@@ -10,10 +10,11 @@
import llnl.util.tty as tty
import spack.caches
import spack.cmd
import spack.cmd.test
import spack.cmd.common.arguments as arguments
import spack.repo
import spack.stage
import spack.config
from spack.paths import lib_path, var_path


@@ -275,3 +275,53 @@ def no_checksum():
return Args(
'-n', '--no-checksum', action='store_true', default=False,
help="do not use checksums to verify downloaded files (unsafe)")
def add_cdash_args(subparser, add_help):
cdash_help = {}
if add_help:
cdash_help['upload-url'] = "CDash URL where reports will be uploaded"
cdash_help['build'] = """The name of the build that will be reported to CDash.
Defaults to spec of the package to operate on."""
cdash_help['site'] = """The site name that will be reported to CDash.
Defaults to current system hostname."""
cdash_help['track'] = """Results will be reported to this group on CDash.
Defaults to Experimental."""
cdash_help['buildstamp'] = """Instead of letting the CDash reporter prepare the
buildstamp which, when combined with build name, site and project,
uniquely identifies the build, provide this argument to identify
the build yourself. Format: %%Y%%m%%d-%%H%%M-[cdash-track]"""
else:
cdash_help['upload-url'] = argparse.SUPPRESS
cdash_help['build'] = argparse.SUPPRESS
cdash_help['site'] = argparse.SUPPRESS
cdash_help['track'] = argparse.SUPPRESS
cdash_help['buildstamp'] = argparse.SUPPRESS
subparser.add_argument(
'--cdash-upload-url',
default=None,
help=cdash_help['upload-url']
)
subparser.add_argument(
'--cdash-build',
default=None,
help=cdash_help['build']
)
subparser.add_argument(
'--cdash-site',
default=None,
help=cdash_help['site']
)
cdash_subgroup = subparser.add_mutually_exclusive_group()
cdash_subgroup.add_argument(
'--cdash-track',
default='Experimental',
help=cdash_help['track']
)
cdash_subgroup.add_argument(
'--cdash-buildstamp',
default=None,
help=cdash_help['buildstamp']
)
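The CDash wiring above puts `--cdash-track` and `--cdash-buildstamp` in a mutually exclusive group, since a buildstamp already encodes the track. A runnable sketch of that argparse pattern (argument values are made up for illustration):

```python
import argparse

# Hedged sketch of the mutually exclusive CDash options above: passing
# both --cdash-track and --cdash-buildstamp would be an argparse error.
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument('--cdash-track', default='Experimental')
group.add_argument('--cdash-buildstamp', default=None)

# Only the buildstamp is given; the track keeps its default.
args = parser.parse_args(['--cdash-buildstamp', '20201021-1800-Nightly'])
```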


@@ -0,0 +1,82 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from __future__ import print_function
import argparse
import os
import llnl.util.tty as tty
import spack.build_environment as build_environment
import spack.paths
import spack.cmd
import spack.cmd.common.arguments as arguments
from spack.util.environment import dump_environment, pickle_environment
def setup_parser(subparser):
arguments.add_common_arguments(subparser, ['clean', 'dirty'])
subparser.add_argument(
'--dump', metavar="FILE",
help="dump a source-able environment to FILE"
)
subparser.add_argument(
'--pickle', metavar="FILE",
help="dump a pickled source-able environment to FILE"
)
subparser.add_argument(
'spec', nargs=argparse.REMAINDER,
metavar='spec [--] [cmd]...',
help="specs of package environment to emulate")
subparser.epilog\
= 'If a command is not specified, the environment will be printed ' \
'to standard output (cf /usr/bin/env) unless --dump and/or --pickle ' \
'are specified.\n\nIf a command is specified and spec is ' \
'multi-word, then the -- separator is obligatory.'
def emulate_env_utility(cmd_name, context, args):
if not args.spec:
tty.die("spack %s requires a spec." % cmd_name)
# Specs may have spaces in them, so if they do, require that the
# caller put a '--' between the spec and the command to be
# executed. If there is no '--', assume that the spec is the
# first argument.
sep = '--'
if sep in args.spec:
s = args.spec.index(sep)
spec = args.spec[:s]
cmd = args.spec[s + 1:]
else:
spec = args.spec[0]
cmd = args.spec[1:]
specs = spack.cmd.parse_specs(spec, concretize=True)
if len(specs) > 1:
tty.die("spack %s only takes one spec." % cmd_name)
spec = specs[0]
build_environment.setup_package(spec.package, args.dirty, context)
if args.dump:
# Dump a source-able environment to a text file.
tty.msg("Dumping a source-able environment to {0}".format(args.dump))
dump_environment(args.dump)
if args.pickle:
# Dump a source-able environment to a pickle file.
tty.msg(
"Pickling a source-able environment to {0}".format(args.pickle))
pickle_environment(args.pickle)
if cmd:
# Execute the command with the new environment
os.execvp(cmd[0], cmd)
elif not bool(args.pickle or args.dump):
# If no command or dump/pickle option act like the "env" command
# and print out env vars.
for key, val in os.environ.items():
print("%s=%s" % (key, val))


@@ -160,65 +160,8 @@ def setup_parser(subparser):
action='store_true',
help="Show usage instructions for CDash reporting"
)
subparser.add_argument(
'-y', '--yes-to-all',
action='store_true',
dest='yes_to_all',
help="""assume "yes" is the answer to every confirmation request.
To run completely non-interactively, also specify '--no-checksum'."""
)
add_cdash_args(subparser, False)
arguments.add_common_arguments(subparser, ['spec'])
def add_cdash_args(subparser, add_help):
cdash_help = {}
if add_help:
cdash_help['upload-url'] = "CDash URL where reports will be uploaded"
cdash_help['build'] = """The name of the build that will be reported to CDash.
Defaults to spec of the package to install."""
cdash_help['site'] = """The site name that will be reported to CDash.
Defaults to current system hostname."""
cdash_help['track'] = """Results will be reported to this group on CDash.
Defaults to Experimental."""
cdash_help['buildstamp'] = """Instead of letting the CDash reporter prepare the
buildstamp which, when combined with build name, site and project,
uniquely identifies the build, provide this argument to identify
the build yourself. Format: %%Y%%m%%d-%%H%%M-[cdash-track]"""
else:
cdash_help['upload-url'] = argparse.SUPPRESS
cdash_help['build'] = argparse.SUPPRESS
cdash_help['site'] = argparse.SUPPRESS
cdash_help['track'] = argparse.SUPPRESS
cdash_help['buildstamp'] = argparse.SUPPRESS
subparser.add_argument(
'--cdash-upload-url',
default=None,
help=cdash_help['upload-url']
)
subparser.add_argument(
'--cdash-build',
default=None,
help=cdash_help['build']
)
subparser.add_argument(
'--cdash-site',
default=None,
help=cdash_help['site']
)
cdash_subgroup = subparser.add_mutually_exclusive_group()
cdash_subgroup.add_argument(
'--cdash-track',
default='Experimental',
help=cdash_help['track']
)
cdash_subgroup.add_argument(
'--cdash-buildstamp',
default=None,
help=cdash_help['buildstamp']
)
arguments.add_cdash_args(subparser, False)
arguments.add_common_arguments(subparser, ['yes_to_all', 'spec'])
def default_log_file(spec):
@@ -270,7 +213,7 @@ def install(parser, args, **kwargs):
SPACK_CDASH_AUTH_TOKEN
authentication token to present to CDash
'''))
add_cdash_args(parser, True)
arguments.add_cdash_args(parser, True)
parser.print_help()
return
@@ -320,7 +263,8 @@ def install(parser, args, **kwargs):
tty.warn("Deprecated option: --run-tests: use --test=all instead")
# 1. Abstract specs from cli
reporter = spack.report.collect_info(args.log_format, args)
reporter = spack.report.collect_info(
spack.package.PackageInstaller, '_install_task', args.log_format, args)
if args.log_file:
reporter.filename = args.log_file
@@ -360,7 +304,7 @@ def install(parser, args, **kwargs):
if not args.log_file and not reporter.filename:
reporter.filename = default_log_file(specs[0])
reporter.specs = specs
with reporter:
with reporter('build'):
if args.overwrite:
installed = list(filter(lambda x: x,


@@ -54,6 +54,9 @@ def setup_parser(subparser):
subparser.add_argument(
'--update', metavar='FILE', default=None, action='store',
help='write output to the specified file, if any package is newer')
subparser.add_argument(
'-v', '--virtuals', action='store_true', default=False,
help='include virtual packages in list')
arguments.add_common_arguments(subparser, ['tags'])
@@ -267,7 +270,7 @@ def list(parser, args):
formatter = formatters[args.format]
# Retrieve the names of all the packages
pkgs = set(spack.repo.all_package_names())
pkgs = set(spack.repo.all_package_names(args.virtuals))
# Filter the set appropriately
sorted_packages = filter_by_name(pkgs, args)


@@ -4,166 +4,319 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from __future__ import print_function
from __future__ import division
import collections
import sys
import re
import os
import argparse
import pytest
from six import StringIO
import textwrap
import fnmatch
import re
import shutil
import llnl.util.tty.color as color
from llnl.util.filesystem import working_dir
from llnl.util.tty.colify import colify
import llnl.util.tty as tty
import spack.paths
import spack.install_test
import spack.environment as ev
import spack.cmd
import spack.cmd.common.arguments as arguments
import spack.report
import spack.package
description = "run spack's unit tests (wrapper around pytest)"
section = "developer"
description = "run spack's tests for an install"
section = "administrator"
level = "long"
def first_line(docstring):
"""Return the first line of the docstring."""
return docstring.split('\n')[0]
def setup_parser(subparser):
subparser.add_argument(
'-H', '--pytest-help', action='store_true', default=False,
help="show full pytest help, with advanced options")
sp = subparser.add_subparsers(metavar='SUBCOMMAND', dest='test_command')
# extra spack arguments to list tests
list_group = subparser.add_argument_group("listing tests")
list_mutex = list_group.add_mutually_exclusive_group()
list_mutex.add_argument(
'-l', '--list', action='store_const', default=None,
dest='list', const='list', help="list test filenames")
list_mutex.add_argument(
'-L', '--list-long', action='store_const', default=None,
dest='list', const='long', help="list all test functions")
list_mutex.add_argument(
'-N', '--list-names', action='store_const', default=None,
dest='list', const='names', help="list full names of all tests")
# Run
run_parser = sp.add_parser('run', description=test_run.__doc__,
help=first_line(test_run.__doc__))
# use tests for extension
subparser.add_argument(
'--extension', default=None,
help="run test for a given spack extension")
alias_help_msg = "Provide an alias for this test-suite"
alias_help_msg += " for subsequent access."
run_parser.add_argument('--alias', help=alias_help_msg)
# spell out some common pytest arguments, so they'll show up in help
pytest_group = subparser.add_argument_group(
"common pytest arguments (spack test --pytest-help for more details)")
pytest_group.add_argument(
"-s", action='append_const', dest='parsed_args', const='-s',
help="print output while tests run (disable capture)")
pytest_group.add_argument(
"-k", action='store', metavar="EXPRESSION", dest='expression',
help="filter tests by keyword (can also use w/list options)")
pytest_group.add_argument(
"--showlocals", action='append_const', dest='parsed_args',
const='--showlocals', help="show local variable values in tracebacks")
run_parser.add_argument(
'--fail-fast', action='store_true',
help="Stop tests for each package after the first failure."
)
run_parser.add_argument(
'--fail-first', action='store_true',
help="Stop after the first failed package."
)
run_parser.add_argument(
'--keep-stage',
action='store_true',
help='Keep testing directory for debugging'
)
run_parser.add_argument(
'--log-format',
default=None,
choices=spack.report.valid_formats,
help="format to be used for log files"
)
run_parser.add_argument(
'--log-file',
default=None,
help="filename for the log file. if not passed a default will be used"
)
arguments.add_cdash_args(run_parser, False)
run_parser.add_argument(
'--help-cdash',
action='store_true',
help="Show usage instructions for CDash reporting"
)
# remainder is just passed to pytest
subparser.add_argument(
'pytest_args', nargs=argparse.REMAINDER, help="arguments for pytest")
length_group = run_parser.add_mutually_exclusive_group()
length_group.add_argument(
'--smoke', action='store_true', dest='smoke_test', default=True,
help='run smoke tests (default)')
length_group.add_argument(
'--capability', action='store_false', dest='smoke_test', default=True,
help='run full capability tests using pavilion')
cd_group = run_parser.add_mutually_exclusive_group()
arguments.add_common_arguments(cd_group, ['clean', 'dirty'])
arguments.add_common_arguments(run_parser, ['installed_specs'])
# List
list_parser = sp.add_parser('list', description=test_list.__doc__,
help=first_line(test_list.__doc__))
list_parser.add_argument(
'filter', nargs=argparse.REMAINDER,
help='optional case-insensitive glob patterns to filter results.')
# Find
find_parser = sp.add_parser('find', description=test_find.__doc__,
help=first_line(test_find.__doc__))
find_parser.add_argument(
'filter', nargs=argparse.REMAINDER,
help='optional case-insensitive glob patterns to filter results.')
# Status
status_parser = sp.add_parser('status', description=test_status.__doc__,
help=first_line(test_status.__doc__))
status_parser.add_argument(
'names', nargs=argparse.REMAINDER,
help="Test suites for which to print status")
# Results
results_parser = sp.add_parser('results', description=test_results.__doc__,
help=first_line(test_results.__doc__))
results_parser.add_argument(
'names', nargs=argparse.REMAINDER,
help="Test suites for which to print results")
# Remove
remove_parser = sp.add_parser('remove', description=test_remove.__doc__,
help=first_line(test_remove.__doc__))
arguments.add_common_arguments(remove_parser, ['yes_to_all'])
remove_parser.add_argument(
'names', nargs=argparse.REMAINDER,
help="Test suites to remove from test stage")
def do_list(args, extra_args):
"""Print a lists of tests than what pytest offers."""
# Run test collection and get the tree out.
old_output = sys.stdout
try:
sys.stdout = output = StringIO()
pytest.main(['--collect-only'] + extra_args)
finally:
sys.stdout = old_output
def test_run(args):
"""Run tests for the specified installed packages.
lines = output.getvalue().split('\n')
tests = collections.defaultdict(lambda: set())
prefix = []
# collect tests into sections
for line in lines:
match = re.match(r"(\s*)<([^ ]*) '([^']*)'", line)
if not match:
continue
indent, nodetype, name = match.groups()
# strip parametrized tests
if "[" in name:
name = name[:name.index("[")]
depth = len(indent) // 2
if nodetype.endswith("Function"):
key = tuple(prefix)
tests[key].add(name)
else:
prefix = prefix[:depth]
prefix.append(name)
def colorize(c, prefix):
if isinstance(prefix, tuple):
return "::".join(
color.colorize("@%s{%s}" % (c, p))
for p in prefix if p != "()"
)
return color.colorize("@%s{%s}" % (c, prefix))
if args.list == "list":
files = set(prefix[0] for prefix in tests)
color_files = [colorize("B", file) for file in sorted(files)]
colify(color_files)
elif args.list == "long":
for prefix, functions in sorted(tests.items()):
path = colorize("*B", prefix) + "::"
functions = [colorize("c", f) for f in sorted(functions)]
color.cprint(path)
colify(functions, indent=4)
print()
else: # args.list == "names"
all_functions = [
colorize("*B", prefix) + "::" + colorize("c", f)
for prefix, functions in sorted(tests.items())
for f in sorted(functions)
]
colify(all_functions)
def add_back_pytest_args(args, unknown_args):
"""Add parsed pytest args, unknown args, and remainder together.
We add some basic pytest arguments to the Spack parser to ensure that
they show up in the short help, so we have to reassemble things here.
If no specs are listed, run tests for all packages in the current
environment or all installed packages if there is no active environment.
"""
result = args.parsed_args or []
result += unknown_args or []
result += args.pytest_args or []
if args.expression:
result += ["-k", args.expression]
return result
# cdash help option
if args.help_cdash:
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog=textwrap.dedent('''\
environment variables:
SPACK_CDASH_AUTH_TOKEN
authentication token to present to CDash
'''))
arguments.add_cdash_args(parser, True)
parser.print_help()
return
# set config option for fail-fast
if args.fail_fast:
spack.config.set('config:fail_fast', True, scope='command_line')
# Get specs to test
env = ev.get_env(args, 'test')
hashes = env.all_hashes() if env else None
specs = spack.cmd.parse_specs(args.specs) if args.specs else [None]
specs_to_test = []
for spec in specs:
matching = spack.store.db.query_local(spec, hashes=hashes)
if spec and not matching:
tty.warn("No installed packages match spec %s" % spec)
specs_to_test.extend(matching)
# test_stage_dir
test_suite = spack.install_test.TestSuite(specs_to_test, args.alias)
test_suite.ensure_stage()
tty.msg("Spack test %s" % test_suite.name)
# Set up reporter
setattr(args, 'package', [s.format() for s in test_suite.specs])
reporter = spack.report.collect_info(
spack.package.PackageBase, 'do_test', args.log_format, args)
if not reporter.filename:
if args.log_file:
if os.path.isabs(args.log_file):
log_file = args.log_file
else:
log_dir = os.getcwd()
log_file = os.path.join(log_dir, args.log_file)
else:
log_file = os.path.join(
os.getcwd(),
'test-%s' % test_suite.name)
reporter.filename = log_file
reporter.specs = specs_to_test
with reporter('test', test_suite.stage):
if args.smoke_test:
test_suite(remove_directory=not args.keep_stage,
dirty=args.dirty,
fail_first=args.fail_first)
else:
raise NotImplementedError
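The reporter-filename logic in `test_run` above resolves the log file in three steps: an absolute `--log-file` is used as-is, a relative one is joined to the current directory, and a `test-<suite name>` default is used when nothing was passed. A sketch of that resolution (function and parameter names are ours):

```python
import os


def resolve_log_file(log_file, suite_name, cwd='/tmp'):
    """Hedged sketch of the log-file resolution in test_run above."""
    if log_file:
        if os.path.isabs(log_file):
            return log_file
        return os.path.join(cwd, log_file)
    return os.path.join(cwd, 'test-%s' % suite_name)
```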
def test(parser, args, unknown_args):
if args.pytest_help:
# make the pytest.main help output more accurate
sys.argv[0] = 'spack test'
return pytest.main(['-h'])
def test_list(args):
"""List all installed packages with available tests."""
raise NotImplementedError
# add back any parsed pytest args we need to pass to pytest
pytest_args = add_back_pytest_args(args, unknown_args)
# The default is to test the core of Spack. If the option `--extension`
# has been used, then test that extension.
pytest_root = spack.paths.spack_root
if args.extension:
target = args.extension
extensions = spack.config.get('config:extensions')
pytest_root = spack.extensions.path_for_extension(target, *extensions)
def test_find(args): # TODO: merge with status (noargs)
"""Find tests that are running or have available results.
# pytest.ini lives in the root of the spack repository.
with working_dir(pytest_root):
if args.list:
do_list(args, pytest_args)
Displays aliases for tests that have them, otherwise test suite content
hashes."""
test_suites = spack.install_test.get_all_test_suites()
# Filter tests by filter argument
if args.filter:
def create_filter(f):
raw = fnmatch.translate(f if '*' in f or '?' in f
else '*' + f + '*')
return re.compile(raw, flags=re.IGNORECASE)
filters = [create_filter(f) for f in args.filter]
def match(t, f):
return f.match(t)
test_suites = [t for t in test_suites
if any(match(t.alias, f) for f in filters) and
os.path.isdir(t.stage)]
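The filter construction in `test_find` above passes glob patterns through `fnmatch.translate` unchanged, and wraps plain substrings in `*...*` so they match anywhere, case-insensitively. A self-contained sketch of that helper (assuming the intended behavior that the raw glob, not the literal string `'f'`, is translated):

```python
import fnmatch
import re


def create_filter(f):
    """Hedged sketch of the test-find filter above: globs pass through,
    plain substrings become case-insensitive '*substring*' matches."""
    raw = fnmatch.translate(f if '*' in f or '?' in f
                            else '*' + f + '*')
    return re.compile(raw, flags=re.IGNORECASE)
```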
names = [t.name for t in test_suites]
if names:
# TODO: Make these specify results vs active
msg = "Spack test results available for the following tests:\n"
msg += " %s\n" % ' '.join(names)
msg += " Run `spack test remove` to remove all tests"
tty.msg(msg)
else:
msg = "No test results match the query\n"
msg += " Tests may have been removed using `spack test remove`"
tty.msg(msg)
def test_status(args):
"""Get the current status for the specified Spack test suite(s)."""
if args.names:
test_suites = []
for name in args.names:
test_suite = spack.install_test.get_test_suite(name)
if test_suite:
test_suites.append(test_suite)
else:
tty.msg("No test suite %s found in test stage" % name)
else:
test_suites = spack.install_test.get_all_test_suites()
if not test_suites:
tty.msg("No test suites with status to report")
for test_suite in test_suites:
# TODO: Make this handle capability tests too
# TODO: Make this handle tests running in another process
tty.msg("Test suite %s completed" % test_suite.name)
def test_results(args):
"""Get the results from Spack test suite(s) (default all)."""
if args.names:
test_suites = []
for name in args.names:
test_suite = spack.install_test.get_test_suite(name)
if test_suite:
test_suites.append(test_suite)
else:
tty.msg("No test suite %s found in test stage" % name)
else:
test_suites = spack.install_test.get_all_test_suites()
if not test_suites:
tty.msg("No test suites with results to report")
# TODO: Make this handle capability tests too
# The results file may turn out to be a placeholder for future work
for test_suite in test_suites:
results_file = test_suite.results_file
if os.path.exists(results_file):
msg = "Results for test suite %s: \n" % test_suite.name
with open(results_file, 'r') as f:
lines = f.readlines()
for line in lines:
msg += " %s" % line
tty.msg(msg)
else:
msg = "Test %s has no results.\n" % test_suite.name
msg += " Check if it is running with "
msg += "`spack test status %s`" % test_suite.name
tty.msg(msg)
def test_remove(args):
"""Remove results from Spack test suite(s) (default all).
If no test suite is listed, remove results for all suites.
Removed tests can no longer be accessed for results or status, and will not
appear in `spack test list` results."""
if args.names:
test_suites = []
for name in args.names:
test_suite = spack.install_test.get_test_suite(name)
if test_suite:
test_suites.append(test_suite)
else:
tty.msg("No test suite %s found in test stage" % name)
else:
test_suites = spack.install_test.get_all_test_suites()
if not test_suites:
tty.msg("No test suites to remove")
return
if not args.yes_to_all:
msg = 'The following test suites will be removed:\n\n'
msg += ' ' + ' '.join(test.name for test in test_suites) + '\n'
tty.msg(msg)
answer = tty.get_yes_or_no('Do you want to proceed?', default=False)
if not answer:
tty.msg('Aborting removal of test suites')
return
for test_suite in test_suites:
shutil.rmtree(test_suite.stage)
def test(parser, args):
globals()['test_%s' % args.test_command](args)


@@ -0,0 +1,16 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import spack.cmd.common.env_utility as env_utility
description = "run a command in a spec's test environment, " \
"or dump its environment to screen or file"
section = "administration"
level = "long"
setup_parser = env_utility.setup_parser
def test_env(parser, args):
env_utility.emulate_env_utility('test-env', 'test', args)


@@ -0,0 +1,169 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from __future__ import print_function
from __future__ import division
import collections
import sys
import re
import argparse
import pytest
from six import StringIO
import llnl.util.tty.color as color
from llnl.util.filesystem import working_dir
from llnl.util.tty.colify import colify
import spack.paths
description = "run spack's unit tests (wrapper around pytest)"
section = "developer"
level = "long"
def setup_parser(subparser):
subparser.add_argument(
'-H', '--pytest-help', action='store_true', default=False,
help="show full pytest help, with advanced options")
# extra spack arguments to list tests
list_group = subparser.add_argument_group("listing tests")
list_mutex = list_group.add_mutually_exclusive_group()
list_mutex.add_argument(
'-l', '--list', action='store_const', default=None,
dest='list', const='list', help="list test filenames")
list_mutex.add_argument(
'-L', '--list-long', action='store_const', default=None,
dest='list', const='long', help="list all test functions")
list_mutex.add_argument(
'-N', '--list-names', action='store_const', default=None,
dest='list', const='names', help="list full names of all tests")
# use tests for extension
subparser.add_argument(
'--extension', default=None,
help="run test for a given spack extension")
# spell out some common pytest arguments, so they'll show up in help
pytest_group = subparser.add_argument_group(
"common pytest arguments (spack unit-test --pytest-help for more)")
pytest_group.add_argument(
"-s", action='append_const', dest='parsed_args', const='-s',
help="print output while tests run (disable capture)")
pytest_group.add_argument(
"-k", action='store', metavar="EXPRESSION", dest='expression',
help="filter tests by keyword (can also use w/list options)")
pytest_group.add_argument(
"--showlocals", action='append_const', dest='parsed_args',
const='--showlocals', help="show local variable values in tracebacks")
# remainder is just passed to pytest
subparser.add_argument(
'pytest_args', nargs=argparse.REMAINDER, help="arguments for pytest")
def do_list(args, extra_args):
"""Print a lists of tests than what pytest offers."""
# Run test collection and get the tree out.
old_output = sys.stdout
try:
sys.stdout = output = StringIO()
pytest.main(['--collect-only'] + extra_args)
finally:
sys.stdout = old_output
lines = output.getvalue().split('\n')
tests = collections.defaultdict(lambda: set())
prefix = []
# collect tests into sections
for line in lines:
match = re.match(r"(\s*)<([^ ]*) '([^']*)'", line)
if not match:
continue
indent, nodetype, name = match.groups()
# strip parametrized tests
if "[" in name:
name = name[:name.index("[")]
depth = len(indent) // 2
if nodetype.endswith("Function"):
key = tuple(prefix)
tests[key].add(name)
else:
prefix = prefix[:depth]
prefix.append(name)
def colorize(c, prefix):
if isinstance(prefix, tuple):
return "::".join(
color.colorize("@%s{%s}" % (c, p))
for p in prefix if p != "()"
)
return color.colorize("@%s{%s}" % (c, prefix))
if args.list == "list":
files = set(prefix[0] for prefix in tests)
color_files = [colorize("B", file) for file in sorted(files)]
colify(color_files)
elif args.list == "long":
for prefix, functions in sorted(tests.items()):
path = colorize("*B", prefix) + "::"
functions = [colorize("c", f) for f in sorted(functions)]
color.cprint(path)
colify(functions, indent=4)
print()
else: # args.list == "names"
all_functions = [
colorize("*B", prefix) + "::" + colorize("c", f)
for prefix, functions in sorted(tests.items())
for f in sorted(functions)
]
colify(all_functions)
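The collection loop above walks pytest's indented `--collect-only` tree, maintaining a prefix stack keyed on indent depth and bucketing test functions under their module path. A self-contained sketch on a fabricated two-module listing (the node lines are invented samples of the collector's output format):

```python
import collections
import re

lines = [
    "<Module 'test/architecture.py'>",
    "  <Function 'test_dict_round_trip'>",
    "<Module 'test/cmd/env.py'>",
    "  <Function 'test_add[mpileaks]'>",
]

tests = collections.defaultdict(set)
prefix = []
for line in lines:
    match = re.match(r"(\s*)<([^ ]*) '([^']*)'", line)
    if not match:
        continue
    indent, nodetype, name = match.groups()
    # strip parametrized test suffixes like '[mpileaks]'
    if '[' in name:
        name = name[:name.index('[')]
    depth = len(indent) // 2
    if nodetype.endswith('Function'):
        tests[tuple(prefix)].add(name)
    else:
        # non-function nodes reset the prefix stack to their depth
        prefix = prefix[:depth]
        prefix.append(name)
```

Parametrized variants collapse to one name, so each logical test function is listed once.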
def add_back_pytest_args(args, unknown_args):
"""Add parsed pytest args, unknown args, and remainder together.
We add some basic pytest arguments to the Spack parser to ensure that
they show up in the short help, so we have to reassemble things here.
"""
result = args.parsed_args or []
result += unknown_args or []
result += args.pytest_args or []
if args.expression:
result += ["-k", args.expression]
return result
def unit_test(parser, args, unknown_args):
if args.pytest_help:
# make the pytest.main help output more accurate
sys.argv[0] = 'spack test'
return pytest.main(['-h'])
# add back any parsed pytest args we need to pass to pytest
pytest_args = add_back_pytest_args(args, unknown_args)
# The default is to test the core of Spack. If the option `--extension`
# has been used, then test that extension.
pytest_root = spack.paths.spack_root
if args.extension:
target = args.extension
extensions = spack.config.get('config:extensions')
pytest_root = spack.extensions.path_for_extension(target, *extensions)
# pytest.ini lives in the root of the spack repository.
with working_dir(pytest_root):
if args.list:
do_list(args, pytest_args)
return
return pytest.main(pytest_args)


@@ -365,6 +365,7 @@ def _proper_compiler_style(cspec, aspec):
compilers = spack.compilers.compilers_for_spec(
cspec, arch_spec=aspec
)
# If the spec passed as argument is concrete we want to check
# the versions match exactly
if (cspec.concrete and compilers and
@@ -454,7 +455,7 @@ def concretize_compiler_flags(self, spec):
# continue. `return True` here to force concretization to keep
# running.
return True
compiler_match = lambda other: (
spec.compiler == other.compiler and
spec.architecture == other.architecture)


@@ -0,0 +1,264 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import base64
import hashlib
import os
import re
import shutil
import sys
import llnl.util.tty as tty
import llnl.util.filesystem as fs
import spack.config
import spack.error
import spack.repo
import spack.util.path
import spack.util.prefix
import spack.util.spack_json as sjson
from spack.spec import Spec
test_suite_filename = 'test_suite.lock'
results_filename = 'results.txt'
def get_escaped_text_output(filename):
"""Retrieve and escape the expected text output from the file
Args:
filename (str): path to the file
Returns:
(list of str): escaped text lines read from the file
"""
with open(filename, 'r') as f:
# Ensure special characters are escaped as needed
expected = f.read()
# Split the lines to make it easier to debug failures when there is
# a lot of output
return [re.escape(ln) for ln in expected.split('\n')]
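Escaping each line with `re.escape` lets expected output that contains regex metacharacters (dots in version numbers, brackets in usage strings) be matched literally later. A small sketch of the same idea, taking the text directly instead of a filename (sample strings invented):

```python
import re

def get_escaped_text_output(text):
    # Escape each line so literal '.', '[', '(' etc. in the expected
    # output do not act as regex operators when matched later.
    return [re.escape(ln) for ln in text.split('\n')]

expected = get_escaped_text_output('HDF5 Version 1.10.7\nusage: h5dump [OPTIONS] files')
```

Each entry can then be fed to `re.search` against the actual command output.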
def get_test_stage_dir():
return spack.util.path.canonicalize_path(
spack.config.get('config:test_stage', '~/.spack/test'))
def get_all_test_suites():
stage_root = get_test_stage_dir()
if not os.path.isdir(stage_root):
return []
def valid_stage(d):
dirpath = os.path.join(stage_root, d)
return (os.path.isdir(dirpath) and
test_suite_filename in os.listdir(dirpath))
candidates = [
os.path.join(stage_root, d, test_suite_filename)
for d in os.listdir(stage_root)
if valid_stage(d)
]
test_suites = [TestSuite.from_file(c) for c in candidates]
return test_suites
def get_test_suite(name):
assert name, "Cannot search for empty test name or 'None'"
test_suites = get_all_test_suites()
names = [ts for ts in test_suites
if ts.name == name]
assert len(names) < 2, "alias shadows test suite hash"
if not names:
return None
return names[0]
class TestSuite(object):
def __init__(self, specs, alias=None):
# copy so that different test suites have different package objects
# even if they contain the same spec
self.specs = [spec.copy() for spec in specs]
self.current_test_spec = None # spec currently tested, can be virtual
self.current_base_spec = None # spec currently running do_test
self.alias = alias
self._hash = None
@property
def name(self):
return self.alias if self.alias else self.content_hash
@property
def content_hash(self):
if not self._hash:
json_text = sjson.dump(self.to_dict())
sha = hashlib.sha1(json_text.encode('utf-8'))
b32_hash = base64.b32encode(sha.digest()).lower()
if sys.version_info[0] >= 3:
b32_hash = b32_hash.decode('utf-8')
self._hash = b32_hash
return self._hash
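The suite name is derived from a SHA-1 of the suite's JSON representation, base32-encoded and lowercased so it is filesystem-safe. A standalone sketch of that scheme, using the stdlib `json` module in place of Spack's `sjson`:

```python
import base64
import hashlib
import json

def content_hash(d):
    # Deterministic serialization -> sha1 -> lowercase base32 gives a
    # stable, directory-safe 32-character name for the suite.
    text = json.dumps(d, sort_keys=True, separators=(',', ':'))
    sha = hashlib.sha1(text.encode('utf-8'))
    return base64.b32encode(sha.digest()).lower().decode('utf-8')

h = content_hash({'specs': [{'name': 'zlib'}]})
```

Because SHA-1 digests are 20 bytes, the base32 encoding is exactly 32 characters with no padding.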
def __call__(self, *args, **kwargs):
self.write_reproducibility_data()
remove_directory = kwargs.get('remove_directory', True)
dirty = kwargs.get('dirty', False)
fail_first = kwargs.get('fail_first', False)
for spec in self.specs:
try:
msg = "A package object cannot run in two test suites at once"
assert not spec.package.test_suite, msg
# Set up the test suite to know which test is running
spec.package.test_suite = self
self.current_base_spec = spec
self.current_test_spec = spec
# setup per-test directory in the stage dir
test_dir = self.test_dir_for_spec(spec)
if os.path.exists(test_dir):
shutil.rmtree(test_dir)
fs.mkdirp(test_dir)
# run the package tests
spec.package.do_test(
dirty=dirty
)
# Clean up on success and log passed test
if remove_directory:
shutil.rmtree(test_dir)
self.write_test_result(spec, 'PASSED')
except BaseException as exc:
if isinstance(exc, SyntaxError):
# Create the test log file and report the error.
self.ensure_stage()
msg = 'Testing package {0}\n{1}'\
.format(self.test_pkg_id(spec), str(exc))
_add_msg_to_file(self.log_file_for_spec(spec), msg)
self.write_test_result(spec, 'FAILED')
if fail_first:
break
finally:
spec.package.test_suite = None
self.current_test_spec = None
self.current_base_spec = None
def ensure_stage(self):
if not os.path.exists(self.stage):
fs.mkdirp(self.stage)
@property
def stage(self):
return spack.util.prefix.Prefix(
os.path.join(get_test_stage_dir(), self.content_hash))
@property
def results_file(self):
return self.stage.join(results_filename)
@classmethod
def test_pkg_id(cls, spec):
"""Build the standard install test package identifier
Args:
spec (Spec): instance of the spec under test
Returns:
(str): the install test package identifier
"""
return spec.format('{name}-{version}-{hash:7}')
@classmethod
def test_log_name(cls, spec):
return '%s-test-out.txt' % cls.test_pkg_id(spec)
def log_file_for_spec(self, spec):
return self.stage.join(self.test_log_name(spec))
def test_dir_for_spec(self, spec):
return self.stage.join(self.test_pkg_id(spec))
@property
def current_test_data_dir(self):
assert self.current_test_spec and self.current_base_spec
test_spec = self.current_test_spec
base_spec = self.current_base_spec
return self.test_dir_for_spec(base_spec).data.join(test_spec.name)
def add_failure(self, exc, msg):
current_hash = self.current_base_spec.dag_hash()
current_failures = self.failures.get(current_hash, [])
current_failures.append((exc, msg))
self.failures[current_hash] = current_failures
def write_test_result(self, spec, result):
msg = "{0} {1}".format(self.test_pkg_id(spec), result)
_add_msg_to_file(self.results_file, msg)
def write_reproducibility_data(self):
for spec in self.specs:
repo_cache_path = self.stage.repo.join(spec.name)
spack.repo.path.dump_provenance(spec, repo_cache_path)
for vspec in spec.package.virtuals_provided:
repo_cache_path = self.stage.repo.join(vspec.name)
if not os.path.exists(repo_cache_path):
try:
spack.repo.path.dump_provenance(vspec, repo_cache_path)
except spack.repo.UnknownPackageError:
pass # not all virtuals have package files
with open(self.stage.join(test_suite_filename), 'w') as f:
sjson.dump(self.to_dict(), stream=f)
def to_dict(self):
specs = [s.to_dict() for s in self.specs]
d = {'specs': specs}
if self.alias:
d['alias'] = self.alias
return d
@staticmethod
def from_dict(d):
specs = [Spec.from_dict(spec_dict) for spec_dict in d['specs']]
alias = d.get('alias', None)
return TestSuite(specs, alias)
@staticmethod
def from_file(filename):
try:
with open(filename, 'r') as f:
data = sjson.load(f)
return TestSuite.from_dict(data)
except Exception as e:
tty.debug(e)
raise sjson.SpackJSONError("error parsing JSON TestSuite:", str(e))
def _add_msg_to_file(filename, msg):
"""Add the message to the specified file
Args:
filename (str): path to the file
msg (str): message to be appended to the file
"""
with open(filename, 'a+') as f:
f.write('{0}\n'.format(msg))
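`_add_msg_to_file` gives each suite an append-only results log that `test_results` later reads back line by line. A quick sketch of that round trip in a temporary directory (the result strings are invented examples):

```python
import os
import tempfile

def add_msg_to_file(filename, msg):
    # 'a+' appends, creating the file on first use.
    with open(filename, 'a+') as f:
        f.write('{0}\n'.format(msg))

results = os.path.join(tempfile.mkdtemp(), 'results.txt')
add_msg_to_file(results, 'zlib-1.2.11-abcdefg PASSED')
add_msg_to_file(results, 'hdf5-1.10.7-1234567 FAILED')
with open(results) as f:
    lines = f.read().splitlines()
```

One line per tested spec keeps the file trivially parseable by downstream reporters.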
class TestFailure(spack.error.SpackError):
"""Raised when package tests have failed for an installation."""
def __init__(self, failures):
# Failures are all exceptions
msg = "%d tests failed.\n" % len(failures)
for failure, message in failures:
msg += '\n\n%s\n' % str(failure)
msg += '\n%s\n' % message
super(TestFailure, self).__init__(msg)


@@ -1108,10 +1108,10 @@ def build_process():
pkg.name, 'src')
tty.debug('{0} Copying source to {1}'
.format(pre, src_target))
fs.install_tree(source_path, src_target)
# Do the real install in the source directory.
with fs.working_dir(source_path):
# Save the build environment in a file before building.
dump_environment(pkg.env_path)
@@ -1133,7 +1133,7 @@ def build_process():
# Spawn a daemon that reads from a pipe and redirects
# everything to log_path
with log_output(pkg.log_path, echo=echo, debug=True) as logger:
for phase_name, phase_attr in zip(
pkg.phases, pkg._InstallPhase_phases):


@@ -12,6 +12,7 @@
import spack.config
import spack.compilers
import spack.spec
import spack.repo
import spack.error
import spack.tengine as tengine
@@ -125,7 +126,9 @@ def hierarchy_tokens(self):
# Check if all the tokens in the hierarchy are virtual specs.
# If not warn the user and raise an error.
not_virtual = [t for t in tokens
if t != 'compiler' and
not spack.repo.path.is_virtual(t)]
if not_virtual:
msg = "Non-virtual specs in 'hierarchy' list for lmod: {0}\n"
msg += "Please check the 'modules.yaml' configuration files"


@@ -24,10 +24,12 @@
import textwrap
import time
import traceback
import six
import types
import llnl.util.filesystem as fsys
import llnl.util.tty as tty
import spack.compilers
import spack.config
import spack.dependency
@@ -45,16 +47,15 @@
import spack.url
import spack.util.environment
import spack.util.web
from llnl.util.lang import memoized
from llnl.util.link_tree import LinkTree
from ordereddict_backport import OrderedDict
from spack.filesystem_view import YamlFilesystemView
from spack.installer import \
install_args_docstring, PackageInstaller, InstallError
from spack.install_test import TestFailure, TestSuite
from spack.util.executable import which, ProcessError
from spack.util.prefix import Prefix
from spack.stage import stage_prefix, Stage, ResourceStage, StageComposite
from spack.util.package_hash import package_hash
from spack.version import Version
@@ -444,7 +445,36 @@ def remove_files_from_view(self, view, merge_map):
view.remove_file(src, dst)
def test_log_pathname(test_stage, spec):
"""Build the pathname of the test log file
Args:
test_stage (str): path to the test stage directory
spec (Spec): instance of the spec under test
Returns:
(str): the pathname of the test log file
"""
return os.path.join(test_stage,
'test-{0}-out.txt'.format(TestSuite.test_pkg_id(spec)))
def setup_test_stage(test_name):
"""Set up the test stage directory.
Args:
test_name (str): the name of the test
Returns:
(str): the path to the test stage directory
"""
test_stage = Prefix(spack.cmd.test.get_stage(test_name))
if not os.path.exists(test_stage):
fsys.mkdirp(test_stage)
return test_stage
class PackageBase(six.with_metaclass(PackageMeta, PackageViewMixin, object)):
"""This is the superclass for all spack packages.
***The Package class***
@@ -534,6 +564,10 @@ class PackageBase(with_metaclass(PackageMeta, PackageViewMixin, object)):
#: are executed or 'None' if there are no such test functions.
build_time_test_callbacks = None
#: By default, packages are not virtual
#: Virtual packages override this attribute
virtual = False
#: Most Spack packages are used to install source or binary code while
#: those that do not can be used to install a set of other Spack packages.
has_code = True
@@ -965,20 +999,23 @@ def env_path(self):
else:
return os.path.join(self.stage.path, _spack_build_envfile)
@property
def metadata_dir(self):
"""Return the install metadata directory."""
return spack.store.layout.metadata_path(self.spec)
@property
def install_env_path(self):
"""
Return the build environment file path on successful installation.
"""
# Backward compatibility: Return the name of an existing log path;
# otherwise, return the current install env path name.
old_filename = os.path.join(self.metadata_dir, 'build.env')
if os.path.exists(old_filename):
return old_filename
else:
return os.path.join(self.metadata_dir, _spack_build_envfile)
@property
def log_path(self):
@@ -995,16 +1032,14 @@ def log_path(self):
@property
def install_log_path(self):
"""Return the build log file path on successful installation."""
# Backward compatibility: Return the name of an existing install log.
for filename in ['build.out', 'build.txt']:
old_log = os.path.join(self.metadata_dir, filename)
if os.path.exists(old_log):
return old_log
# Otherwise, return the current install log path name.
return os.path.join(self.metadata_dir, _spack_build_logfile)
@property
def configure_args_path(self):
@@ -1014,9 +1049,12 @@ def configure_args_path(self):
@property
def install_configure_args_path(self):
"""Return the configure args file path on successful installation."""
return os.path.join(self.metadata_dir, _spack_configure_argsfile)
@property
def install_test_root(self):
"""Return the install test root directory."""
return os.path.join(self.metadata_dir, 'test')
def _make_fetcher(self):
# Construct a composite fetcher that always contains at least
@@ -1291,7 +1329,7 @@ def do_stage(self, mirror_only=False):
raise FetchError("Archive was empty for %s" % self.name)
else:
# Support for post-install hooks requires a stage.source_path
fsys.mkdirp(self.stage.source_path)
def do_patch(self):
"""Applies patches if they haven't been applied already."""
@@ -1337,7 +1375,7 @@ def do_patch(self):
patched = False
for patch in patches:
try:
with fsys.working_dir(self.stage.source_path):
patch.apply(self.stage)
tty.debug('Applied patch {0}'.format(patch.path_or_url))
patched = True
@@ -1346,12 +1384,12 @@ def do_patch(self):
# Touch bad file if anything goes wrong.
tty.msg('Patch %s failed.' % patch.path_or_url)
fsys.touch(bad_file)
raise
if has_patch_fun:
try:
with fsys.working_dir(self.stage.source_path):
self.patch()
tty.debug('Ran patch() for {0}'.format(self.name))
patched = True
@@ -1369,7 +1407,7 @@ def do_patch(self):
# Touch bad file if anything goes wrong.
tty.msg('patch() function failed for {0}'.format(self.name))
fsys.touch(bad_file)
raise
# Get rid of any old failed file -- patches have either succeeded
@@ -1380,9 +1418,9 @@ def do_patch(self):
# touch good or no patches file so that we skip next time.
if patched:
fsys.touch(good_file)
else:
fsys.touch(no_patches_file)
@classmethod
def all_patches(cls):
@@ -1592,6 +1630,296 @@ def do_install(self, **kwargs):
do_install.__doc__ += install_args_docstring
def cache_extra_test_sources(self, srcs):
"""Copy relative source paths to the corresponding install test subdir
This method is intended as an optional install test setup helper for
grabbing source files/directories during the installation process and
copying them to the installation test subdirectory for subsequent use
during install testing.
Args:
srcs (str or list of str): relative path for files and or
subdirectories located in the staged source path that are to
be copied to the corresponding location(s) under the install
testing directory.
"""
paths = [srcs] if isinstance(srcs, six.string_types) else srcs
for path in paths:
src_path = os.path.join(self.stage.source_path, path)
dest_path = os.path.join(self.install_test_root, path)
if os.path.isdir(src_path):
fsys.install_tree(src_path, dest_path)
else:
fsys.mkdirp(os.path.dirname(dest_path))
fsys.copy(src_path, dest_path)
test_requires_compiler = False
test_failures = None
test_suite = None
def do_test(self, dirty=False):
if self.test_requires_compiler:
compilers = spack.compilers.compilers_for_spec(
self.spec.compiler, arch_spec=self.spec.architecture)
if not compilers:
tty.error('Skipping tests for package %s\n' %
self.spec.format('{name}-{version}-{hash:7}') +
'Package test requires missing compiler %s' %
self.spec.compiler)
return
# Clear test failures
self.test_failures = []
self.test_log_file = self.test_suite.log_file_for_spec(self.spec)
def test_process():
with tty.log.log_output(self.test_log_file) as logger:
with logger.force_echo():
tty.msg('Testing package {0}'
.format(self.test_suite.test_pkg_id(self.spec)))
# use debug print levels for log file to record commands
old_debug = tty.is_debug()
tty.set_debug(True)
# run test methods from the package and all virtuals it
# provides virtuals have to be deduped by name
v_names = list(set([vspec.name
for vspec in self.virtuals_provided]))
# hack for compilers that are not dependencies (yet)
# TODO: this all eventually goes away
c_names = ('gcc', 'intel', 'intel-parallel-studio', 'pgi')
if self.name in c_names:
v_names.extend(['c', 'cxx', 'fortran'])
if self.spec.satisfies('llvm+clang'):
v_names.extend(['c', 'cxx'])
test_specs = [self.spec] + [spack.spec.Spec(v_name)
for v_name in sorted(v_names)]
try:
with fsys.working_dir(
self.test_suite.test_dir_for_spec(self.spec)):
for spec in test_specs:
self.test_suite.current_test_spec = spec
# Fail gracefully if a virtual has no package/tests
try:
spec_pkg = spec.package
except spack.repo.UnknownPackageError:
continue
# copy test data into test data dir
data_source = Prefix(spec_pkg.package_dir).test
data_dir = self.test_suite.current_test_data_dir
if (os.path.isdir(data_source) and
not os.path.exists(data_dir)):
# We assume data dir is used read-only
# maybe enforce this later
shutil.copytree(data_source, data_dir)
# Get all methods named test_* from package
d = spec_pkg.__class__.__dict__
test_fns = [
fn for name, fn in d.items()
if (name == 'test' or name.startswith('test_'))
and hasattr(fn, '__call__')
]
# grab the function for each method so we can call
# it with this package in place of its `self`
# object
test_fns = list(map(
lambda x: x.__func__ if not isinstance(
x, types.FunctionType) else x,
test_fns))
for fn in test_fns:
# Run the tests
print('TEST: %s' %
(fn.__doc__ or fn.__name__))
try:
fn(self)
print('PASSED')
except BaseException as e:
# print a summary of the error to the log file
# so that cdash and junit reporters know about it
exc_type, _, tb = sys.exc_info()
print('FAILED: {0}'.format(e))
# construct combined stacktrace of processes
import traceback
stack = traceback.extract_stack()[:-1]
stack += traceback.extract_tb(tb)
# Package files have a line added at import time,
# so they are effectively one-indexed. Other files
# we subtract 1 from the lineno for zero-indexing.
for i, entry in enumerate(stack):
filename, lineno, function, text = entry
if not spack.paths.is_package_file(filename):
lineno = lineno - 1
stack[i] = (filename, lineno, function, text)
# Format the stack to print and print it
out = traceback.format_list(stack)
for line in out:
print(line.rstrip('\n'))
if exc_type is spack.util.executable.ProcessError:
out = six.StringIO()
spack.build_environment.write_log_summary(
out, 'test', self.test_log_file, last=1)
m = out.getvalue()
else:
# Get context from combined stack
m = '\n'.join(
spack.build_environment.get_package_context(
stack)
)
exc = e # e is deleted after this block
# If we fail fast, raise another error
if spack.config.get('config:fail_fast', False):
raise TestFailure([(exc, m)])
else:
self.test_failures.append((exc, m))
# If fail-fast was on, we error out above
# If we collect errors, raise them in batch here
if self.test_failures:
raise TestFailure(self.test_failures)
finally:
# reset debug level
tty.set_debug(old_debug)
spack.build_environment.fork(
self, test_process, dirty=dirty, fake=False, context='test')
def run_test(self, exe, options=[], expected=[], status=0,
installed=False, purpose='', skip_missing=False,
work_dir=None):
"""Run the test and confirm the expected results are obtained
Log any failures and continue, they will be re-raised later
Args:
exe (str): the name of the executable
options (str or list of str): list of options to pass to the runner
expected (str or list of str): list of expected output strings.
Each string is a regex expected to match part of the output.
status (int or list of int): possible passing status values
with 0 meaning the test is expected to succeed
installed (bool): the executable must be in the install prefix
purpose (str): message to display before running test
skip_missing (bool): skip the test if the executable is not
in the install prefix bin directory or the provided work_dir
work_dir (str or None): path to the smoke test directory
"""
wdir = '.' if work_dir is None else work_dir
with fsys.working_dir(wdir):
try:
runner = which(exe)
if runner is None and skip_missing:
return
assert runner is not None, \
"Failed to find executable '{0}'".format(exe)
self._run_test_helper(
runner, options, expected, status, installed, purpose)
print("PASSED")
return True
except BaseException as e:
# print a summary of the error to the log file
# so that cdash and junit reporters know about it
exc_type, _, tb = sys.exc_info()
print('FAILED: {0}'.format(e))
import traceback
# remove the current call frame to get back to
stack = traceback.extract_stack()[:-1]
# Package files have a line added at import time, so we re-read
# the file to make the displayed text match. We subtract two from
# the line number: one because line numbers are one-indexed, and
# one for the extra line the import statement adds.
for i, entry in enumerate(stack):
filename, lineno, function, text = entry
if spack.paths.is_package_file(filename):
with open(filename, 'r') as f:
lines = f.readlines()
text = lines[lineno - 2]
stack[i] = (filename, lineno, function, text)
# Format the stack to print and print it
out = traceback.format_list(stack)
for line in out:
print(line.rstrip('\n'))
if exc_type is spack.util.executable.ProcessError:
out = six.StringIO()
spack.build_environment.write_log_summary(
out, 'test', self.test_log_file, last=1)
m = out.getvalue()
else:
# We're below the package context, so get context from
# stack instead of from traceback.
# The traceback is truncated here, so we can't use it to
# traverse the stack.
m = '\n'.join(
spack.build_environment.get_package_context(
traceback.extract_stack())
)
exc = e # e is deleted after this block
# If we fail fast, raise another error
if spack.config.get('config:fail_fast', False):
raise TestFailure([(exc, m)])
else:
self.test_failures.append((exc, m))
return False
def _run_test_helper(self, runner, options, expected, status, installed,
purpose):
status = [status] if isinstance(status, six.integer_types) else status
expected = [expected] if isinstance(expected, six.string_types) else \
expected
options = [options] if isinstance(options, six.string_types) else \
options
if purpose:
tty.msg(purpose)
else:
tty.debug('test: {0}: expect command status in {1}'
.format(runner.name, status))
if installed:
msg = "Executable '{0}' expected in prefix".format(runner.name)
msg += ", found in {0} instead".format(runner.path)
assert self.spec.prefix in runner.path, msg
try:
output = runner(*options, output=str.split, error=str.split)
can_pass = not status or 0 in status
assert can_pass, \
'Expected {0} execution to fail'.format(runner.name)
except ProcessError as err:
output = str(err)
match = re.search(r'exited with status ([0-9]+)', output)
if not (match and int(match.group(1)) in status):
raise
for check in expected:
cmd = ' '.join([runner.name] + options)
msg = "Expected '{0}' to match output of `{1}`".format(check, cmd)
msg += '\n\nOutput: {0}'.format(output)
assert re.search(check, output), msg
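The `except ProcessError` branch above treats a nonzero exit as passing only when the status reported in the error text is in the allowed list. That check can be sketched in isolation (the `'exited with status N'` message format is an assumption carried over from the error text parsed above):

```python
import re

def status_allowed(err_text, allowed):
    # Extract 'exited with status N' from the error message and accept
    # the failure only when N is one of the expected passing statuses.
    match = re.search(r'exited with status ([0-9]+)', err_text)
    return bool(match) and int(match.group(1)) in allowed
```

Anything that does not match the pattern at all (e.g. a crash with no status line) is treated as a real failure and re-raised.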
def unit_test_check(self):
"""Hook for unit tests to assert things about package internals.
@@ -1613,7 +1941,7 @@ def sanity_check_prefix(self):
"""This function checks whether install succeeded."""
def check_paths(path_list, filetype, predicate):
if isinstance(path_list, six.string_types):
path_list = [path_list]
for path in path_list:
@@ -1966,7 +2294,7 @@ def do_deprecate(self, deprecator, link_fn):
# copy spec metadata to "deprecated" dir of deprecator
depr_yaml = spack.store.layout.deprecated_file_path(spec,
deprecator)
- fs.mkdirp(os.path.dirname(depr_yaml))
+ fsys.mkdirp(os.path.dirname(depr_yaml))
shutil.copy2(self_yaml, depr_yaml)
# Any specs deprecated in favor of this spec are re-deprecated in
@@ -2145,7 +2473,7 @@ def format_doc(self, **kwargs):
doc = re.sub(r'\s+', ' ', self.__doc__)
lines = textwrap.wrap(doc, 72)
- results = StringIO()
+ results = six.StringIO()
for line in lines:
results.write((" " * indent) + line + "\n")
return results.getvalue()
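The `_run_test_helper` method above normalizes scalar arguments to lists, treats a non-zero exit status as acceptable only when it appears in the expected status set, and checks each expected string as a regex against the command output. That flow can be sketched standalone (the `check_command` helper and its parameter names are illustrative, not Spack API):

```python
import re
import subprocess


def check_command(argv, expected=(), statuses=(0,)):
    """Run a command; verify exit status and regex-match its output."""
    # Normalize scalars to lists, as _run_test_helper does.
    expected = [expected] if isinstance(expected, str) else list(expected)
    statuses = [statuses] if isinstance(statuses, int) else list(statuses)

    proc = subprocess.run(argv, stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT, text=True)
    output = proc.stdout
    # A non-zero exit is acceptable only when it was declared expected.
    if proc.returncode not in statuses:
        raise RuntimeError('%s exited with status %d'
                           % (argv[0], proc.returncode))
    # Each expected string is a regular expression matched against output.
    for pattern in expected:
        if not re.search(pattern, output):
            raise AssertionError("Expected '%s' to match output of `%s`"
                                 % (pattern, ' '.join(argv)))
    return output
```

The helper mirrors the shape of the diff's logic only; the real method also handles `ProcessError` raised by Spack's `Executable` runner.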

View File

@@ -10,9 +10,9 @@
dependencies.
"""
import os
import inspect
from llnl.util.filesystem import ancestor
#: This file lives in $prefix/lib/spack/spack/__file__
prefix = ancestor(__file__, 4)
@@ -39,6 +39,7 @@
hooks_path = os.path.join(module_path, "hooks")
var_path = os.path.join(prefix, "var", "spack")
repos_path = os.path.join(var_path, "repos")
tests_path = os.path.join(var_path, "tests")
share_path = os.path.join(prefix, "share", "spack")
# Paths to built-in Spack repositories.
@@ -58,3 +59,16 @@
mock_gpg_data_path = os.path.join(var_path, "gpg.mock", "data")
mock_gpg_keys_path = os.path.join(var_path, "gpg.mock", "keys")
gpg_path = os.path.join(opt_path, "spack", "gpg")
def is_package_file(filename):
"""Determine whether we are in a package file from a repo."""
# Package files are named `package.py` and are not in lib/spack/spack
# We have to strip the file extension because it may be .py or .pyc
# depending on context, and can differ between the two files compared
import spack.package # break cycle
filename_noext = os.path.splitext(filename)[0]
packagebase_filename_noext = os.path.splitext(
inspect.getfile(spack.package.PackageBase))[0]
return (filename_noext != packagebase_filename_noext and
os.path.basename(filename_noext) == 'package')
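The extension-insensitive comparison in `is_package_file` can be illustrated in isolation; here `base_file` is a hypothetical parameter standing in for the resolved `PackageBase` source path, which the real function looks up via `inspect.getfile` to avoid the import cycle:

```python
import os


def is_package_file(filename, base_file):
    """Is ``filename`` a repo's package.py (and not PackageBase itself)?"""
    # Strip extensions so the .py and .pyc forms of the same module
    # compare equal, then require the basename to be 'package'.
    noext = os.path.splitext(filename)[0]
    base_noext = os.path.splitext(base_file)[0]
    return (noext != base_noext and
            os.path.basename(noext) == 'package')
```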

View File

@@ -59,6 +59,7 @@
from spack.installer import \
ExternalPackageError, InstallError, InstallLockError, UpstreamPackageError
from spack.install_test import get_escaped_text_output
from spack.variant import any_combination_of, auto_or_any_combination_of
from spack.variant import disjoint_sets

View File

@@ -131,6 +131,11 @@ def __init__(self, packages_path):
#: Reference to the appropriate entry in the global cache
self._packages_to_stats = self._paths_cache[packages_path]
def invalidate(self):
"""Regenerate cache for this checker."""
self._paths_cache[self.packages_path] = self._create_new_cache()
self._packages_to_stats = self._paths_cache[self.packages_path]
def _create_new_cache(self):
"""Create a new cache for packages in a repo.
@@ -308,6 +313,9 @@ def read(self, stream):
self.index = spack.provider_index.ProviderIndex.from_json(stream)
def update(self, pkg_fullname):
name = pkg_fullname.split('.')[-1]
if spack.repo.path.is_virtual(name, use_index=False):
return
self.index.remove_provider(pkg_fullname)
self.index.update(pkg_fullname)
@@ -517,12 +525,12 @@ def first_repo(self):
"""Get the first repo in precedence order."""
return self.repos[0] if self.repos else None
- def all_package_names(self):
+ def all_package_names(self, include_virtuals=False):
"""Return all unique package names in all repositories."""
if self._all_package_names is None:
all_pkgs = set()
for repo in self.repos:
- for name in repo.all_package_names():
+ for name in repo.all_package_names(include_virtuals):
all_pkgs.add(name)
self._all_package_names = sorted(all_pkgs, key=lambda n: n.lower())
return self._all_package_names
@@ -675,9 +683,17 @@ def exists(self, pkg_name):
"""
return any(repo.exists(pkg_name) for repo in self.repos)
- def is_virtual(self, pkg_name):
- """True if the package with this name is virtual, False otherwise."""
- return pkg_name in self.provider_index
+ def is_virtual(self, pkg_name, use_index=True):
+ """True if the package with this name is virtual, False otherwise.
+ Set `use_index` False when calling from a code block that could
+ be run during the computation of the provider index."""
+ have_name = pkg_name is not None
+ if use_index:
+ return have_name and pkg_name in self.provider_index
+ else:
+ return have_name and (not self.exists(pkg_name) or
+ self.get_pkg_class(pkg_name).virtual)
def __contains__(self, pkg_name):
return self.exists(pkg_name)
@@ -906,10 +922,6 @@ def dump_provenance(self, spec, path):
This dumps the package file and any associated patch files.
Raises UnknownPackageError if not found.
"""
# Some preliminary checks.
if spec.virtual:
raise UnknownPackageError(spec.name)
if spec.namespace and spec.namespace != self.namespace:
raise UnknownPackageError(
"Repository %s does not contain package %s."
@@ -992,9 +1004,12 @@ def _pkg_checker(self):
self._fast_package_checker = FastPackageChecker(self.packages_path)
return self._fast_package_checker
- def all_package_names(self):
+ def all_package_names(self, include_virtuals=False):
"""Returns a sorted list of all package names in the Repo."""
- return sorted(self._pkg_checker.keys())
+ names = sorted(self._pkg_checker.keys())
+ if include_virtuals:
+ return names
+ return [x for x in names if not self.is_virtual(x)]
def packages_with_tags(self, *tags):
v = set(self.all_package_names())
@@ -1025,7 +1040,7 @@ def last_mtime(self):
def is_virtual(self, pkg_name):
"""True if the package with this name is virtual, False otherwise."""
- return self.provider_index.contains(pkg_name)
+ return pkg_name in self.provider_index
def _get_pkg_module(self, pkg_name):
"""Create a module for a particular package.
@@ -1059,7 +1074,8 @@ def _get_pkg_module(self, pkg_name):
# manually construct the error message in order to give the
# user the correct package.py where the syntax error is located
raise SyntaxError('invalid syntax in {0:}, line {1:}'
- ''.format(file_path, e.lineno))
+ .format(file_path, e.lineno))
module.__package__ = self.full_namespace
module.__loader__ = self
self._modules[pkg_name] = module
@@ -1190,9 +1206,9 @@ def get(spec):
return path.get(spec)
- def all_package_names():
+ def all_package_names(include_virtuals=False):
"""Convenience wrapper around ``spack.repo.all_package_names()``."""
- return path.all_package_names()
+ return path.all_package_names(include_virtuals)
def set_path(repo):

View File

@@ -9,11 +9,13 @@
import functools
import time
import traceback
import os
import llnl.util.lang
import spack.build_environment
import spack.fetch_strategy
import spack.package
from spack.install_test import TestSuite
from spack.reporter import Reporter
from spack.reporters.cdash import CDash
from spack.reporters.junit import JUnit
@@ -33,12 +35,16 @@
]
- def fetch_package_log(pkg):
+ def fetch_log(pkg, do_fn, dir):
log_files = {
'_install_task': pkg.build_log_path,
'do_test': os.path.join(dir, TestSuite.test_log_name(pkg.spec)),
}
try:
- with codecs.open(pkg.build_log_path, 'r', 'utf-8') as f:
+ with codecs.open(log_files[do_fn.__name__], 'r', 'utf-8') as f:
return ''.join(f.readlines())
except Exception:
- return 'Cannot open build log for {0}'.format(
+ return 'Cannot open log for {0}'.format(
pkg.spec.cshort_spec
)
@@ -58,15 +64,20 @@ class InfoCollector(object):
specs (list of Spec): specs whose install information will
be recorded
"""
- #: Backup of PackageInstaller._install_task
- _backup__install_task = spack.package.PackageInstaller._install_task
- def __init__(self, specs):
- #: Specs that will be installed
+ def __init__(self, wrap_class, do_fn, specs, dir):
+ #: Class for which to wrap a function
+ self.wrap_class = wrap_class
+ #: Action to be reported on
+ self.do_fn = do_fn
+ #: Backup of PackageBase function
+ self._backup_do_fn = getattr(self.wrap_class, do_fn)
+ #: Specs that will be acted on
self.input_specs = specs
#: This is where we record the data that will be included
#: in our report.
self.specs = []
#: Record directory for test log paths
self.dir = dir
def __enter__(self):
# Initialize the spec report with the data that is available upfront.
@@ -98,30 +109,37 @@ def __enter__(self):
Property('compiler', input_spec.compiler))
# Check which specs are already installed and mark them as skipped
- for dep in filter(lambda x: x.package.installed,
- input_spec.traverse()):
- package = {
- 'name': dep.name,
- 'id': dep.dag_hash(),
- 'elapsed_time': '0.0',
- 'result': 'skipped',
- 'message': 'Spec already installed'
- }
- spec['packages'].append(package)
+ # only for install_task
+ if self.do_fn == '_install_task':
+ for dep in filter(lambda x: x.package.installed,
+ input_spec.traverse()):
+ package = {
+ 'name': dep.name,
+ 'id': dep.dag_hash(),
+ 'elapsed_time': '0.0',
+ 'result': 'skipped',
+ 'message': 'Spec already installed'
+ }
+ spec['packages'].append(package)
- def gather_info(_install_task):
- """Decorates PackageInstaller._install_task to gather useful
- information on PackageBase.do_install for a CI report.
+ def gather_info(do_fn):
+ """Decorates do_fn to gather useful information for
+ a CI report.
It's defined here to capture the environment and build
this context as the installations proceed.
"""
- @functools.wraps(_install_task)
- def wrapper(installer, task, *args, **kwargs):
- pkg = task.pkg
+ @functools.wraps(do_fn)
+ def wrapper(instance, *args, **kwargs):
+ if isinstance(instance, spack.package.PackageBase):
+ pkg = instance
+ elif hasattr(args[0], 'pkg'):
+ pkg = args[0].pkg
+ else:
+ raise Exception
# We accounted before for what is already installed
- installed_on_entry = pkg.installed
+ installed_already = pkg.installed
package = {
'name': pkg.name,
@@ -135,13 +153,12 @@ def wrapper(installer, task, *args, **kwargs):
start_time = time.time()
value = None
try:
- value = _install_task(installer, task, *args, **kwargs)
+ value = do_fn(instance, *args, **kwargs)
package['result'] = 'success'
- package['stdout'] = fetch_package_log(pkg)
+ package['stdout'] = fetch_log(pkg, do_fn, self.dir)
package['installed_from_binary_cache'] = \
pkg.installed_from_binary_cache
- if installed_on_entry:
+ if do_fn.__name__ == '_install_task' and installed_already:
return
except spack.build_environment.InstallError as e:
@@ -149,7 +166,7 @@ def wrapper(installer, task, *args, **kwargs):
# didn't work correctly)
package['result'] = 'failure'
package['message'] = e.message or 'Installation failure'
- package['stdout'] = fetch_package_log(pkg)
+ package['stdout'] = fetch_log(pkg, do_fn, self.dir)
package['stdout'] += package['message']
package['exception'] = e.traceback
@@ -157,7 +174,7 @@ def wrapper(installer, task, *args, **kwargs):
# Everything else is an error (the installation
# failed outside of the child process)
package['result'] = 'error'
- package['stdout'] = fetch_package_log(pkg)
+ package['stdout'] = fetch_log(pkg, do_fn, self.dir)
package['message'] = str(e) or 'Unknown error'
package['exception'] = traceback.format_exc()
@@ -184,15 +201,14 @@ def wrapper(installer, task, *args, **kwargs):
return wrapper
- spack.package.PackageInstaller._install_task = gather_info(
- spack.package.PackageInstaller._install_task
- )
+ setattr(self.wrap_class, self.do_fn, gather_info(
+ getattr(self.wrap_class, self.do_fn)
+ ))
def __exit__(self, exc_type, exc_val, exc_tb):
- # Restore the original method in PackageInstaller
- spack.package.PackageInstaller._install_task = \
- InfoCollector._backup__install_task
+ # Restore the original method in PackageBase
+ setattr(self.wrap_class, self.do_fn, self._backup_do_fn)
for spec in self.specs:
spec['npackages'] = len(spec['packages'])
@@ -225,22 +241,26 @@ class collect_info(object):
# The file 'junit.xml' is written when exiting
# the context
- specs = [Spec('hdf5').concretized()]
- with collect_info(specs, 'junit', 'junit.xml'):
+ s = [Spec('hdf5').concretized()]
+ with collect_info(PackageBase, do_install, s, 'junit', 'a.xml'):
# A report will be generated for these specs...
- for spec in specs:
- spec.do_install()
+ for spec in s:
+ getattr(class, function)(spec)
# ...but not for this one
Spec('zlib').concretized().do_install()
Args:
class: class on which to wrap a function
function: function to wrap
format_name (str or None): one of the supported formats
- args (dict): args passed to spack install
+ args (dict): args passed to function
Raises:
ValueError: when ``format_name`` is not in ``valid_formats``
"""
- def __init__(self, format_name, args):
+ def __init__(self, cls, function, format_name, args):
self.cls = cls
self.function = function
self.filename = None
if args.cdash_upload_url:
self.format_name = 'cdash'
@@ -253,13 +273,19 @@ def __init__(self, format_name, args):
.format(self.format_name))
self.report_writer = report_writers[self.format_name](args)
def __call__(self, type, dir=os.getcwd()):
self.type = type
self.dir = dir
return self
def concretization_report(self, msg):
self.report_writer.concretization_report(self.filename, msg)
def __enter__(self):
if self.format_name:
- # Start the collector and patch PackageInstaller._install_task
- self.collector = InfoCollector(self.specs)
+ # Start the collector and patch self.function on appropriate class
+ self.collector = InfoCollector(
+ self.cls, self.function, self.specs, self.dir)
self.collector.__enter__()
def __exit__(self, exc_type, exc_val, exc_tb):
@@ -269,4 +295,5 @@ def __exit__(self, exc_type, exc_val, exc_tb):
self.collector.__exit__(exc_type, exc_val, exc_tb)
report_data = {'specs': self.collector.specs}
- self.report_writer.build_report(self.filename, report_data)
+ report_fn = getattr(self.report_writer, '%s_report' % self.type)
+ report_fn(self.filename, report_data)
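The patch-and-restore shape of `InfoCollector` above, backing up a method with `getattr`, installing a `functools.wraps` wrapper via `setattr`, and restoring the original on exit, can be sketched as a generic context manager (`MethodRecorder` is a hypothetical name for illustration, not Spack API):

```python
import functools


class MethodRecorder(object):
    """Wrap a method on a class to record calls; restore it on exit."""

    def __init__(self, cls, name):
        self.cls, self.name = cls, name
        self.calls = []

    def __enter__(self):
        original = getattr(self.cls, self.name)
        self._backup = original  # back up, as InfoCollector does

        @functools.wraps(original)
        def wrapper(instance, *args, **kwargs):
            value = original(instance, *args, **kwargs)
            self.calls.append((args, kwargs, value))
            return value

        setattr(self.cls, self.name, wrapper)
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Restore the original method so the patch does not leak.
        setattr(self.cls, self.name, self._backup)
```

Restoring in `__exit__` rather than at the end of the wrapped call is what makes the patch safe even when the wrapped function raises.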

View File

@@ -16,5 +16,8 @@ def __init__(self, args):
def build_report(self, filename, report_data):
pass
def test_report(self, filename, report_data):
pass
def concretization_report(self, filename, msg):
pass

View File

@@ -72,10 +72,12 @@ def __init__(self, args):
tty.verbose("Using CDash auth token from environment")
self.authtoken = os.environ.get('SPACK_CDASH_AUTH_TOKEN')
- if args.spec:
+ packages = []
+ if getattr(args, 'spec', ''):
packages = args.spec
- else:
- packages = []
+ elif getattr(args, 'specs', ''):
+ packages = args.specs
+ elif getattr(args, 'specfiles', ''):
for file in args.specfiles:
with open(file, 'r') as f:
s = spack.spec.Spec.from_yaml(f)
@@ -98,7 +100,7 @@ def __init__(self, args):
self.revision = git('rev-parse', 'HEAD', output=str).strip()
self.multiple_packages = False
- def report_for_package(self, directory_name, package, duration):
+ def build_report_for_package(self, directory_name, package, duration):
if 'stdout' not in package:
# Skip reporting on packages that did not generate any output.
return
@@ -250,7 +252,114 @@ def build_report(self, directory_name, input_data):
if 'time' in spec:
duration = int(spec['time'])
for package in spec['packages']:
- self.report_for_package(directory_name, package, duration)
+ self.build_report_for_package(
+ directory_name, package, duration)
self.print_cdash_link()
def test_report_for_package(self, directory_name, package, duration):
if 'stdout' not in package:
# Skip reporting on packages that did not generate any output.
return
self.current_package_name = package['name']
self.buildname = "{0} - {1}".format(
self.base_buildname, package['name'])
report_data = self.initialize_report(directory_name)
for phase in ('test', 'update'):
report_data[phase] = {}
report_data[phase]['loglines'] = []
report_data[phase]['status'] = 0
report_data[phase]['endtime'] = self.endtime
# Track the phases we perform so we know what reports to create.
# We always report the update step because this is how we tell CDash
# what revision of Spack we are using.
phases_encountered = ['test', 'update']
# Generate a report for this package.
# The first line just says "Testing package name-hash"
report_data['test']['loglines'].append(
text_type("{0} output for {1}:".format(
'test', package['name'])))
for line in package['stdout'].splitlines()[1:]:
report_data['test']['loglines'].append(
xml.sax.saxutils.escape(line))
self.starttime = self.endtime - duration
for phase in phases_encountered:
report_data[phase]['starttime'] = self.starttime
report_data[phase]['log'] = \
'\n'.join(report_data[phase]['loglines'])
errors, warnings = parse_log_events(report_data[phase]['loglines'])
# Cap the number of errors and warnings at 50 each.
errors = errors[0:49]
warnings = warnings[0:49]
if phase == 'test':
# Convert log output from ASCII to Unicode and escape for XML.
def clean_log_event(event):
event = vars(event)
event['text'] = xml.sax.saxutils.escape(event['text'])
event['pre_context'] = xml.sax.saxutils.escape(
'\n'.join(event['pre_context']))
event['post_context'] = xml.sax.saxutils.escape(
'\n'.join(event['post_context']))
# source_file and source_line_no are either strings or
# the tuple (None,). Distinguish between these two cases.
if event['source_file'][0] is None:
event['source_file'] = ''
event['source_line_no'] = ''
else:
event['source_file'] = xml.sax.saxutils.escape(
event['source_file'])
return event
# Convert errors to warnings if the package reported success.
if package['result'] == 'success':
warnings = errors + warnings
errors = []
report_data[phase]['errors'] = []
report_data[phase]['warnings'] = []
for error in errors:
report_data[phase]['errors'].append(clean_log_event(error))
for warning in warnings:
report_data[phase]['warnings'].append(
clean_log_event(warning))
if phase == 'update':
report_data[phase]['revision'] = self.revision
# Write the report.
report_name = phase.capitalize() + ".xml"
report_file_name = package['name'] + "_" + report_name
phase_report = os.path.join(directory_name, report_file_name)
with codecs.open(phase_report, 'w', 'utf-8') as f:
env = spack.tengine.make_environment()
if phase != 'update':
# Update.xml stores site information differently
# than the rest of the CTest XML files.
site_template = os.path.join(self.template_dir, 'Site.xml')
t = env.get_template(site_template)
f.write(t.render(report_data))
phase_template = os.path.join(self.template_dir, report_name)
t = env.get_template(phase_template)
f.write(t.render(report_data))
self.upload(phase_report)
def test_report(self, directory_name, input_data):
# Generate reports for each package in each spec.
for spec in input_data['specs']:
duration = 0
if 'time' in spec:
duration = int(spec['time'])
for package in spec['packages']:
self.test_report_for_package(
directory_name, package, duration)
self.print_cdash_link()
def concretization_report(self, directory_name, msg):

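The log handling in `test_report_for_package` above uses only the standard library: each raw log line is escaped for embedding in CTest-style XML, and errors/warnings are capped (note the diff's `errors[0:49]` slice actually keeps 49 entries, despite the comment saying 50). A minimal sketch of that escape-and-cap step, with a hypothetical helper name:

```python
import xml.sax.saxutils


def prepare_loglines(lines, cap=50):
    """Escape log lines for XML embedding and cap how many are kept."""
    # xml.sax.saxutils.escape handles <, > and & so raw build output
    # cannot break the surrounding CTest XML document.
    return [xml.sax.saxutils.escape(line) for line in lines[:cap]]
```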
View File

@@ -27,3 +27,6 @@ def build_report(self, filename, report_data):
env = spack.tengine.make_environment()
t = env.get_template(self.template_file)
f.write(t.render(report_data))
def test_report(self, filename, report_data):
self.build_report(filename, report_data)

View File

@@ -29,6 +29,7 @@
{'type': 'array',
'items': {'type': 'string'}}],
},
'test_stage': {'type': 'string'},
'extensions': {
'type': 'array',
'items': {'type': 'string'}

View File

@@ -943,7 +943,7 @@ def __init__(self, spec, name, query_parameters):
'QueryState', ['name', 'extra_parameters', 'isvirtual']
)
- is_virtual = Spec.is_virtual(name)
+ is_virtual = spack.repo.path.is_virtual(name)
self.last_query = QueryState(
name=name,
extra_parameters=query_parameters,
@@ -1211,12 +1211,9 @@ def virtual(self):
Possible idea: just use convention and make virtual deps all
caps, e.g., MPI vs mpi.
"""
- return Spec.is_virtual(self.name)
- @staticmethod
- def is_virtual(name):
- """Test if a name is virtual without requiring a Spec."""
- return (name is not None) and (not spack.repo.path.exists(name))
+ # This method can be called while regenerating the provider index
+ # So we turn off using the index to detect virtuals
+ return spack.repo.path.is_virtual(self.name, use_index=False)
@property
def concrete(self):

View File

@@ -68,7 +68,8 @@ def make_environment(dirs=None):
"""Returns an configured environment for template rendering."""
if dirs is None:
# Default directories where to search for templates
- builtins = spack.config.get('config:template_dirs')
+ builtins = spack.config.get('config:template_dirs',
+ ['$spack/share/spack/templates'])
extensions = spack.extensions.get_template_dirs()
dirs = [canonicalize_path(d)
for d in itertools.chain(builtins, extensions)]

View File

@@ -15,44 +15,51 @@
@pytest.fixture()
def mock_calls_for_clean(monkeypatch):
counts = {}
class Counter(object):
- def __init__(self):
- self.call_count = 0
+ def __init__(self, name):
+ self.name = name
+ counts[name] = 0
def __call__(self, *args, **kwargs):
- self.call_count += 1
+ counts[self.name] += 1
- monkeypatch.setattr(spack.package.PackageBase, 'do_clean', Counter())
- monkeypatch.setattr(spack.stage, 'purge', Counter())
+ monkeypatch.setattr(spack.package.PackageBase, 'do_clean',
+ Counter('package'))
+ monkeypatch.setattr(spack.stage, 'purge', Counter('stages'))
monkeypatch.setattr(
- spack.caches.fetch_cache, 'destroy', Counter(), raising=False)
+ spack.caches.fetch_cache, 'destroy', Counter('downloads'),
+ raising=False)
monkeypatch.setattr(
- spack.caches.misc_cache, 'destroy', Counter())
+ spack.caches.misc_cache, 'destroy', Counter('caches'))
monkeypatch.setattr(
- spack.installer, 'clear_failures', Counter())
+ spack.installer, 'clear_failures', Counter('failures'))
yield counts
all_effects = ['stages', 'downloads', 'caches', 'failures']
@pytest.mark.usefixtures(
- 'mock_packages', 'config', 'mock_calls_for_clean'
+ 'mock_packages', 'config'
)
- @pytest.mark.parametrize('command_line,counters', [
- ('mpileaks', [1, 0, 0, 0, 0]),
- ('-s', [0, 1, 0, 0, 0]),
- ('-sd', [0, 1, 1, 0, 0]),
- ('-m', [0, 0, 0, 1, 0]),
- ('-f', [0, 0, 0, 0, 1]),
- ('-a', [0, 1, 1, 1, 1]),
- ('', [0, 0, 0, 0, 0]),
+ @pytest.mark.parametrize('command_line,effects', [
+ ('mpileaks', ['package']),
+ ('-s', ['stages']),
+ ('-sd', ['stages', 'downloads']),
+ ('-m', ['caches']),
+ ('-f', ['failures']),
+ ('-a', all_effects),
+ ('', []),
])
- def test_function_calls(command_line, counters):
+ def test_function_calls(command_line, effects, mock_calls_for_clean):
# Call the command with the supplied command line
clean(command_line)
# Assert that we called the expected functions the correct
# number of times
- assert spack.package.PackageBase.do_clean.call_count == counters[0]
- assert spack.stage.purge.call_count == counters[1]
- assert spack.caches.fetch_cache.destroy.call_count == counters[2]
- assert spack.caches.misc_cache.destroy.call_count == counters[3]
- assert spack.installer.clear_failures.call_count == counters[4]
+ for name in ['package'] + all_effects:
+ assert mock_calls_for_clean[name] == (1 if name in effects else 0)
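The rewritten fixture's pattern, callable stand-ins that record call counts in a shared dict keyed by name so a single parametrized loop can replace five separate asserts, works in isolation too. A minimal sketch of the `Counter` idea (here taking the shared dict as a parameter rather than closing over it, as the fixture does):

```python
class Counter(object):
    """Callable stand-in that counts invocations in a shared dict."""

    def __init__(self, name, counts):
        self.name = name
        self.counts = counts
        counts[name] = 0  # register so uncalled stand-ins read as 0

    def __call__(self, *args, **kwargs):
        # Accept any signature; only the fact of the call is recorded.
        self.counts[self.name] += 1
```

Keying by name in one dict is what lets the test assert over all effects uniformly instead of reaching into each patched attribute.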

View File

@@ -118,7 +118,7 @@ def test_install_dirty_flag(arguments, expected):
assert args.dirty == expected
- def test_package_output(tmpdir, capsys, install_mockery, mock_fetch):
+ def test_package_output(tmpdir, install_mockery, mock_fetch):
"""Ensure output printed from pkgs is captured by output redirection."""
# we can't use output capture here because it interferes with Spack's
# logging. TODO: see whether we can get multiple log_outputs to work
@@ -697,18 +697,16 @@ def test_install_only_dependencies_of_all_in_env(
assert os.path.exists(dep.prefix)
- def test_install_help_does_not_show_cdash_options(capsys):
+ def test_install_help_does_not_show_cdash_options():
"""Make sure `spack install --help` does not describe CDash arguments"""
with pytest.raises(SystemExit):
install('--help')
- captured = capsys.readouterr()
- assert 'CDash URL' not in captured.out
+ output = install('--help')
+ assert 'CDash URL' not in output
- def test_install_help_cdash(capsys):
+ def test_install_help_cdash():
"""Make sure `spack install --help-cdash` describes CDash arguments"""
- install_cmd = SpackCommand('install')
- out = install_cmd('--help-cdash')
+ out = install('--help-cdash')
assert 'CDash URL' in out

View File

@@ -102,7 +102,7 @@ def __init__(self, specs=None, all=False, file=None,
self.exclude_specs = exclude_specs
- def test_exclude_specs(mock_packages):
+ def test_exclude_specs(mock_packages, config):
args = MockMirrorArgs(
specs=['mpich'],
versions_per_spec='all',
@@ -117,7 +117,7 @@ def test_exclude_specs(mock_packages):
assert (not expected_exclude & set(mirror_specs))
- def test_exclude_file(mock_packages, tmpdir):
+ def test_exclude_file(mock_packages, tmpdir, config):
exclude_path = os.path.join(str(tmpdir), 'test-exclude.txt')
with open(exclude_path, 'w') as exclude_file:
exclude_file.write("""\

View File

@@ -62,9 +62,9 @@ def mock_pkg_git_repo(tmpdir_factory):
mkdirp('pkg-a', 'pkg-b', 'pkg-c')
with open('pkg-a/package.py', 'w') as f:
f.write(pkg_template.format(name='PkgA'))
- with open('pkg-c/package.py', 'w') as f:
- f.write(pkg_template.format(name='PkgB'))
+ with open('pkg-b/package.py', 'w') as f:
+ f.write(pkg_template.format(name='PkgB'))
+ with open('pkg-c/package.py', 'w') as f:
+ f.write(pkg_template.format(name='PkgC'))
git('add', 'pkg-a', 'pkg-b', 'pkg-c')
git('-c', 'commit.gpgsign=false', 'commit',
@@ -128,6 +128,8 @@ def test_pkg_add(mock_pkg_git_repo):
git('status', '--short', output=str))
finally:
shutil.rmtree('pkg-e')
# Removing a package mid-run disrupts Spack's caching
spack.repo.path.repos[0]._fast_package_checker.invalidate()
with pytest.raises(spack.main.SpackCommandError):
pkg('add', 'does-not-exist')

View File

@@ -3,93 +3,180 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import argparse
import os
import pytest
import spack.config
import spack.package
import spack.cmd.install
from spack.main import SpackCommand
install = SpackCommand('install')
spack_test = SpackCommand('test')
cmd_test_py = 'lib/spack/spack/test/cmd/test.py'
def test_list():
output = spack_test('--list')
assert "test.py" in output
assert "spec_semantics.py" in output
assert "test_list" not in output
def test_test_package_not_installed(
tmpdir, mock_packages, mock_archive, mock_fetch, config,
install_mockery_mutable_config, mock_test_stage):
output = spack_test('run', 'libdwarf')
assert "No installed packages match spec libdwarf" in output
def test_list_with_pytest_arg():
output = spack_test('--list', cmd_test_py)
assert output.strip() == cmd_test_py
@pytest.mark.parametrize('arguments,expected', [
(['run'], spack.config.get('config:dirty')), # default from config file
(['run', '--clean'], False),
(['run', '--dirty'], True),
])
def test_test_dirty_flag(arguments, expected):
parser = argparse.ArgumentParser()
spack.cmd.test.setup_parser(parser)
args = parser.parse_args(arguments)
assert args.dirty == expected
def test_list_with_keywords():
output = spack_test('--list', '-k', 'cmd/test.py')
assert output.strip() == cmd_test_py
def test_test_output(mock_test_stage, mock_packages, mock_archive, mock_fetch,
install_mockery_mutable_config):
"""Ensure output printed from pkgs is captured by output redirection."""
install('printing-package')
spack_test('run', 'printing-package')
stage_files = os.listdir(mock_test_stage)
assert len(stage_files) == 1
# Grab test stage directory contents
testdir = os.path.join(mock_test_stage, stage_files[0])
testdir_files = os.listdir(testdir)
# Grab the output from the test log
testlog = list(filter(lambda x: x.endswith('out.txt') and
x != 'results.txt', testdir_files))
outfile = os.path.join(testdir, testlog[0])
with open(outfile, 'r') as f:
output = f.read()
assert "BEFORE TEST" in output
assert "RUNNING TEST" in output
assert "AFTER TEST" in output
assert "FAILED" not in output
def test_list_long(capsys):
with capsys.disabled():
output = spack_test('--list-long')
assert "test.py::\n" in output
assert "test_list" in output
assert "test_list_with_pytest_arg" in output
assert "test_list_with_keywords" in output
assert "test_list_long" in output
assert "test_list_long_with_pytest_arg" in output
assert "test_list_names" in output
assert "test_list_names_with_pytest_arg" in output
def test_test_output_on_error(
mock_packages, mock_archive, mock_fetch, install_mockery_mutable_config,
capfd, mock_test_stage
):
install('test-error')
# capfd interferes with Spack's capturing
with capfd.disabled():
out = spack_test('run', 'test-error', fail_on_error=False)
assert "spec_dag.py::\n" in output
assert 'test_installed_deps' in output
assert 'test_test_deptype' in output
assert "TestFailure" in out
assert "Command exited with status 1" in out
def test_list_long_with_pytest_arg(capsys):
with capsys.disabled():
output = spack_test('--list-long', cmd_test_py)
assert "test.py::\n" in output
assert "test_list" in output
assert "test_list_with_pytest_arg" in output
assert "test_list_with_keywords" in output
assert "test_list_long" in output
assert "test_list_long_with_pytest_arg" in output
assert "test_list_names" in output
assert "test_list_names_with_pytest_arg" in output
def test_test_output_on_failure(
mock_packages, mock_archive, mock_fetch, install_mockery_mutable_config,
capfd, mock_test_stage
):
install('test-fail')
with capfd.disabled():
out = spack_test('run', 'test-fail', fail_on_error=False)
assert "spec_dag.py::\n" not in output
assert 'test_installed_deps' not in output
assert 'test_test_deptype' not in output
assert "assert False" in out
assert "TestFailure" in out
def test_list_names():
output = spack_test('--list-names')
assert "test.py::test_list\n" in output
assert "test.py::test_list_with_pytest_arg\n" in output
assert "test.py::test_list_with_keywords\n" in output
assert "test.py::test_list_long\n" in output
assert "test.py::test_list_long_with_pytest_arg\n" in output
assert "test.py::test_list_names\n" in output
assert "test.py::test_list_names_with_pytest_arg\n" in output
def test_show_log_on_error(
mock_packages, mock_archive, mock_fetch,
install_mockery_mutable_config, capfd, mock_test_stage
):
"""Make sure spack prints location of test log on failure."""
install('test-error')
with capfd.disabled():
out = spack_test('run', 'test-error', fail_on_error=False)
assert "spec_dag.py::test_installed_deps\n" in output
assert 'spec_dag.py::test_test_deptype\n' in output
assert 'See test log' in out
assert mock_test_stage in out
def test_list_names_with_pytest_arg():
output = spack_test('--list-names', cmd_test_py)
assert "test.py::test_list\n" in output
assert "test.py::test_list_with_pytest_arg\n" in output
assert "test.py::test_list_with_keywords\n" in output
assert "test.py::test_list_long\n" in output
assert "test.py::test_list_long_with_pytest_arg\n" in output
assert "test.py::test_list_names\n" in output
assert "test.py::test_list_names_with_pytest_arg\n" in output
@pytest.mark.usefixtures(
'mock_packages', 'mock_archive', 'mock_fetch',
'install_mockery_mutable_config'
)
@pytest.mark.parametrize('pkg_name,msgs', [
('test-error', ['FAILED:', 'Command exited', 'TestFailure']),
('test-fail', ['FAILED:', 'assert False', 'TestFailure'])
])
def test_junit_output_with_failures(tmpdir, mock_test_stage, pkg_name, msgs):
install(pkg_name)
with tmpdir.as_cwd():
spack_test('run',
'--log-format=junit', '--log-file=test.xml',
pkg_name)
assert "spec_dag.py::test_installed_deps\n" not in output
assert 'spec_dag.py::test_test_deptype\n' not in output
files = tmpdir.listdir()
filename = tmpdir.join('test.xml')
assert filename in files
content = filename.open().read()
# Count failures and errors correctly
assert 'tests="1"' in content
assert 'failures="1"' in content
assert 'errors="0"' in content
# We want to have both stdout and stderr
assert '<system-out>' in content
for msg in msgs:
assert msg in content
def test_pytest_help():
output = spack_test('--pytest-help')
assert "-k EXPRESSION" in output
assert "pytest-warnings:" in output
assert "--collect-only" in output
def test_cdash_output_test_error(
tmpdir, mock_fetch, install_mockery_mutable_config, mock_packages,
mock_archive, mock_test_stage, capfd):
install('test-error')
with tmpdir.as_cwd():
spack_test('run',
'--log-format=cdash',
'--log-file=cdash_reports',
'test-error')
report_dir = tmpdir.join('cdash_reports')
assert report_dir in tmpdir.listdir()
report_file = report_dir.join('test-error_Test.xml')
assert report_file in report_dir.listdir()
content = report_file.open().read()
assert 'FAILED: Command exited with status 1' in content
def test_cdash_upload_clean_test(
tmpdir, mock_fetch, install_mockery_mutable_config, mock_packages,
mock_archive, mock_test_stage):
install('printing-package')
with tmpdir.as_cwd():
spack_test('run',
'--log-file=cdash_reports',
'--log-format=cdash',
'printing-package')
report_dir = tmpdir.join('cdash_reports')
assert report_dir in tmpdir.listdir()
report_file = report_dir.join('printing-package_Test.xml')
assert report_file in report_dir.listdir()
content = report_file.open().read()
assert '</Test>' in content
assert '<Text>' not in content
def test_test_help_does_not_show_cdash_options(mock_test_stage, capsys):
"""Make sure `spack test --help` does not describe CDash arguments"""
with pytest.raises(SystemExit):
spack_test('run', '--help')
captured = capsys.readouterr()
assert 'CDash URL' not in captured.out
def test_test_help_cdash(mock_test_stage):
"""Make sure `spack test --help-cdash` describes CDash arguments"""
out = spack_test('run', '--help-cdash')
assert 'CDash URL' in out


@@ -0,0 +1,96 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.main import SpackCommand
spack_test = SpackCommand('unit-test')
cmd_test_py = 'lib/spack/spack/test/cmd/unit_test.py'
def test_list():
output = spack_test('--list')
assert "unit_test.py" in output
assert "spec_semantics.py" in output
assert "test_list" not in output
def test_list_with_pytest_arg():
output = spack_test('--list', cmd_test_py)
assert output.strip() == cmd_test_py
def test_list_with_keywords():
output = spack_test('--list', '-k', 'cmd/unit_test.py')
assert output.strip() == cmd_test_py
def test_list_long(capsys):
with capsys.disabled():
output = spack_test('--list-long')
assert "unit_test.py::\n" in output
assert "test_list" in output
assert "test_list_with_pytest_arg" in output
assert "test_list_with_keywords" in output
assert "test_list_long" in output
assert "test_list_long_with_pytest_arg" in output
assert "test_list_names" in output
assert "test_list_names_with_pytest_arg" in output
assert "spec_dag.py::\n" in output
assert 'test_installed_deps' in output
assert 'test_test_deptype' in output
def test_list_long_with_pytest_arg(capsys):
with capsys.disabled():
output = spack_test('--list-long', cmd_test_py)
print(output)
assert "unit_test.py::\n" in output
assert "test_list" in output
assert "test_list_with_pytest_arg" in output
assert "test_list_with_keywords" in output
assert "test_list_long" in output
assert "test_list_long_with_pytest_arg" in output
assert "test_list_names" in output
assert "test_list_names_with_pytest_arg" in output
assert "spec_dag.py::\n" not in output
assert 'test_installed_deps' not in output
assert 'test_test_deptype' not in output
def test_list_names():
output = spack_test('--list-names')
assert "unit_test.py::test_list\n" in output
assert "unit_test.py::test_list_with_pytest_arg\n" in output
assert "unit_test.py::test_list_with_keywords\n" in output
assert "unit_test.py::test_list_long\n" in output
assert "unit_test.py::test_list_long_with_pytest_arg\n" in output
assert "unit_test.py::test_list_names\n" in output
assert "unit_test.py::test_list_names_with_pytest_arg\n" in output
assert "spec_dag.py::test_installed_deps\n" in output
assert 'spec_dag.py::test_test_deptype\n' in output
def test_list_names_with_pytest_arg():
output = spack_test('--list-names', cmd_test_py)
assert "unit_test.py::test_list\n" in output
assert "unit_test.py::test_list_with_pytest_arg\n" in output
assert "unit_test.py::test_list_with_keywords\n" in output
assert "unit_test.py::test_list_long\n" in output
assert "unit_test.py::test_list_long_with_pytest_arg\n" in output
assert "unit_test.py::test_list_names\n" in output
assert "unit_test.py::test_list_names_with_pytest_arg\n" in output
assert "spec_dag.py::test_installed_deps\n" not in output
assert 'spec_dag.py::test_test_deptype\n' not in output
def test_pytest_help():
output = spack_test('--pytest-help')
assert "-k EXPRESSION" in output
assert "pytest-warnings:" in output
assert "--collect-only" in output


@@ -263,12 +263,12 @@ def concretize_multi_provider(self):
('dealii', 'develop'),
('xsdk', '0.4.0'),
])
def concretize_difficult_packages(self, a, b):
def concretize_difficult_packages(self, spec, version):
"""Test a couple of large packages that are often broken due
to current limitations in the concretizer"""
s = Spec(a + '@' + b)
s = Spec(spec + '@' + version)
s.concretize()
assert s[a].version == ver(b)
assert s[spec].version == ver(version)
def test_concretize_two_virtuals(self):


@@ -1184,3 +1184,14 @@ def _factory(name, output, subdir=('bin',)):
return str(f)
return _factory
@pytest.fixture()
def mock_test_stage(mutable_config, tmpdir):
# NOTE: This fixture MUST be applied after any fixture that uses
# the config fixture under the hood
# No need to unset because we use mutable_config
tmp_stage = str(tmpdir.join('test_stage'))
mutable_config.set('config:test_stage', tmp_stage)
yield tmp_stage
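
The fixture pattern above (set a value on a mutable config, yield it, skip explicit teardown) can be sketched standalone. This is a minimal illustration, not Spack's API: the dict-based "config" and the path are hypothetical stand-ins for `mutable_config` and pytest's `tmpdir`.

```python
# Sketch of the mock_test_stage fixture pattern: set the stage path on a
# mutable configuration copy, yield it, and rely on the copy being thrown
# away instead of unsetting the key afterwards.
import contextlib
import os

@contextlib.contextmanager
def mock_test_stage(mutable_config, tmpdir):
    tmp_stage = os.path.join(tmpdir, 'test_stage')
    mutable_config['config:test_stage'] = tmp_stage  # set on the mutable copy
    yield tmp_stage                                  # no unset needed

config = {}
with mock_test_stage(config, '/tmp/pytest-0') as stage:
    print(stage)
```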


@@ -0,0 +1,4 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)


@@ -16,7 +16,7 @@
from llnl.util.filesystem import resolve_link_target_relative_to_the_link
pytestmark = pytest.mark.usefixtures('config', 'mutable_mock_repo')
pytestmark = pytest.mark.usefixtures('mutable_config', 'mutable_mock_repo')
# paths in repos that shouldn't be in the mirror tarballs.
exclude = ['.hg', '.git', '.svn']
@@ -97,7 +97,7 @@ def check_mirror():
# tarball
assert not dcmp.right_only
# and that all original files are present.
assert all(l in exclude for l in dcmp.left_only)
assert all(left in exclude for left in dcmp.left_only)
def test_url_mirror(mock_archive):


@@ -10,7 +10,12 @@
static DSL metadata for packages.
"""
import os
import pytest
import shutil
import llnl.util.filesystem as fs
import spack.package
import spack.repo
@@ -119,3 +124,72 @@ def test_possible_dependencies_with_multiple_classes(
})
assert expected == spack.package.possible_dependencies(*pkgs)
def setup_install_test(source_paths, install_test_root):
"""
Set up the install test by creating sources and install test roots.
The convention used here is to create an empty file if the path name
ends with an extension; otherwise, a directory is created.
"""
fs.mkdirp(install_test_root)
for path in source_paths:
if os.path.splitext(path)[1]:
fs.touchp(path)
else:
fs.mkdirp(path)
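
The extension-based convention that `setup_install_test` relies on can be shown standalone. This sketch uses only the standard library (`os.makedirs` and `open` in place of Spack's `fs.touchp`/`fs.mkdirp` helpers):

```python
# A path whose basename has an extension becomes an empty file; anything
# else becomes a directory -- the same convention as setup_install_test.
import os
import tempfile

def create_paths(root, rel_paths):
    for rel in rel_paths:
        full = os.path.join(root, rel)
        if os.path.splitext(rel)[1]:  # has an extension -> empty file
            os.makedirs(os.path.dirname(full), exist_ok=True)
            open(full, 'w').close()
        else:                         # no extension -> directory
            os.makedirs(full, exist_ok=True)

root = tempfile.mkdtemp()
create_paths(root, ['example/a.c', 'tests'])
print(os.path.isfile(os.path.join(root, 'example', 'a.c')))
print(os.path.isdir(os.path.join(root, 'tests')))
```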
@pytest.mark.parametrize('spec,sources,extras,expect', [
('a',
['example/a.c'], # Source(s)
['example/a.c'], # Extra test source
['example/a.c']), # Test install dir source(s)
('b',
['test/b.cpp', 'test/b.hpp', 'example/b.txt'], # Source(s)
['test'], # Extra test source
['test/b.cpp', 'test/b.hpp']), # Test install dir source
('c',
['examples/a.py', 'examples/b.py', 'examples/c.py', 'tests/d.py'],
['examples/b.py', 'tests'],
['examples/b.py', 'tests/d.py']),
])
def test_cache_extra_sources(install_mockery, spec, sources, extras, expect):
"""Test the package's cache extra test sources helper function."""
pkg = spack.repo.get(spec)
pkg.spec.concretize()
source_path = pkg.stage.source_path
srcs = [fs.join_path(source_path, s) for s in sources]
setup_install_test(srcs, pkg.install_test_root)
emsg_dir = 'Expected {0} to be a directory'
emsg_file = 'Expected {0} to be a file'
for s in srcs:
assert os.path.exists(s), 'Expected {0} to exist'.format(s)
if os.path.splitext(s)[1]:
assert os.path.isfile(s), emsg_file.format(s)
else:
assert os.path.isdir(s), emsg_dir.format(s)
pkg.cache_extra_test_sources(extras)
src_dests = [fs.join_path(pkg.install_test_root, s) for s in sources]
exp_dests = [fs.join_path(pkg.install_test_root, e) for e in expect]
poss_dests = set(src_dests) | set(exp_dests)
msg = 'Expected {0} to{1} exist'
for pd in poss_dests:
if pd in exp_dests:
assert os.path.exists(pd), msg.format(pd, '')
if os.path.splitext(pd)[1]:
assert os.path.isfile(pd), emsg_file.format(pd)
else:
assert os.path.isdir(pd), emsg_dir.format(pd)
else:
assert not os.path.exists(pd), msg.format(pd, ' not')
# Perform a little cleanup
shutil.rmtree(os.path.dirname(source_path))


@@ -0,0 +1,53 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import spack.install_test
import spack.spec
def test_test_log_pathname(mock_packages, config):
"""Ensure test log path is reasonable."""
spec = spack.spec.Spec('libdwarf').concretized()
test_name = 'test_name'
test_suite = spack.install_test.TestSuite([spec], test_name)
logfile = test_suite.log_file_for_spec(spec)
assert test_suite.stage in logfile
assert test_suite.test_log_name(spec) in logfile
def test_test_ensure_stage(mock_test_stage):
"""Make sure test stage directory is properly set up."""
spec = spack.spec.Spec('libdwarf').concretized()
test_name = 'test_name'
test_suite = spack.install_test.TestSuite([spec], test_name)
test_suite.ensure_stage()
assert os.path.isdir(test_suite.stage)
assert mock_test_stage in test_suite.stage
def test_write_test_result(mock_packages, mock_test_stage):
"""Ensure test results written to a results file."""
spec = spack.spec.Spec('libdwarf').concretized()
result = 'TEST'
test_name = 'write-test'
test_suite = spack.install_test.TestSuite([spec], test_name)
test_suite.ensure_stage()
results_file = test_suite.results_file
test_suite.write_test_result(spec, result)
with open(results_file, 'r') as f:
lines = f.readlines()
assert len(lines) == 1
msg = lines[0]
assert result in msg
assert spec.name in msg


@@ -2,12 +2,12 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import sys
import os
import re
import shlex
import subprocess
from six import string_types, text_type
from six import string_types, text_type, StringIO
import llnl.util.tty as tty
@@ -93,14 +93,11 @@ def __call__(self, *args, **kwargs):
* python streams, e.g. open Python file objects, or ``os.devnull``
* filenames, which will be automatically opened for writing
* ``str``, as in the Python string type. If you set these to ``str``,
output and error will be written to pipes and returned as a string.
If both ``output`` and ``error`` are set to ``str``, then one string
is returned containing output concatenated with error. Not valid
for ``input``
By default, the subprocess inherits the parent's file descriptors.
Returns:
(str) The interleaved output and error
"""
# Environment
env_arg = kwargs.get('env', None)
@@ -123,17 +120,20 @@ def __call__(self, *args, **kwargs):
ignore_errors = (ignore_errors, )
input = kwargs.pop('input', None)
output = kwargs.pop('output', None)
error = kwargs.pop('error', None)
output = kwargs.pop('output', sys.stdout)
error = kwargs.pop('error', sys.stderr)
if input is str:
raise ValueError('Cannot use `str` as input stream.')
if output is str:
output = os.devnull
if error is str:
error = os.devnull
def streamify(arg, mode):
if isinstance(arg, string_types):
return open(arg, mode), True
elif arg is str:
return subprocess.PIPE, False
else:
return arg, False
@@ -158,31 +158,45 @@ def streamify(arg, mode):
tty.debug(cmd_line)
try:
proc = subprocess.Popen(
cmd,
stdin=istream,
stderr=estream,
stdout=ostream,
env=env)
out, err = proc.communicate()
output_string = StringIO()
# Determine whether any of our streams are StringIO
# We cannot call `Popen` directly with a StringIO object
output_use_stringIO = False
if not hasattr(ostream, 'fileno'):
output_use_stringIO = True
ostream_stringIO = ostream
ostream = subprocess.PIPE
error_use_stringIO = False
if not hasattr(estream, 'fileno'):
error_use_stringIO = True
estream_stringIO = estream
estream = subprocess.PIPE
result = None
if output is str or error is str:
result = ''
if output is str:
result += text_type(out.decode('utf-8'))
if error is str:
result += text_type(err.decode('utf-8'))
try:
with tty.log.log_output(
output_string, output=ostream, error=estream, echo=True):
proc = subprocess.Popen(
cmd,
stdin=istream,
stderr=estream,
stdout=ostream,
env=env)
out, err = proc.communicate()
if output_use_stringIO:
ostream_stringIO.write(out)
if error_use_stringIO:
estream_stringIO.write(err)
result = output_string.getvalue()
rc = self.returncode = proc.returncode
if fail_on_error and rc != 0 and (rc not in ignore_errors):
long_msg = cmd_line
if result:
# If the output is not captured in the result, it will have
# been stored either in the specified files (e.g. if
# 'output' specifies a file) or written to the parent's
# stdout/stderr (e.g. if 'output' is not specified)
if output == os.devnull or error == os.devnull:
# If the output is not being printed anywhere, include it
# in the error message. Otherwise, don't pollute the error
# message.
long_msg += '\n' + result
raise ProcessError('Command exited with status %d:' %


@@ -130,7 +130,7 @@ def sign(cls, key, file, output, clearsign=False):
@classmethod
def verify(cls, signature, file, suppress_warnings=False):
if suppress_warnings:
cls.gpg()('--verify', signature, file, error=str)
cls.gpg()('--verify', signature, file, error=os.devnull)
else:
cls.gpg()('--verify', signature, file)


@@ -21,6 +21,8 @@ class MockPackageBase(object):
Use ``MockPackageMultiRepo.add_package()`` to create new instances.
"""
virtual = False
def __init__(self, dependencies, dependency_types,
conditions=None, versions=None):
"""Instantiate a new MockPackageBase.
@@ -87,7 +89,7 @@ def get_pkg_class(self, name):
def exists(self, name):
return name in self.spec_to_pkg
def is_virtual(self, name):
def is_virtual(self, name, use_index=True):
return False
def repo_for_pkg(self, name):


@@ -56,7 +56,7 @@ contains 'hdf5' _spack_completions spack -d install --jobs 8 ''
contains 'hdf5' _spack_completions spack install -v ''
# XFAIL: Fails for Python 2.6 because pkg_resources not found?
#contains 'compilers.py' _spack_completions spack test ''
#contains 'compilers.py' _spack_completions spack unit-test ''
title 'Testing debugging functions'


@@ -42,4 +42,4 @@ spack -p --lines 20 spec mpileaks%gcc ^elfutils@0.170
#-----------------------------------------------------------
# Run unit tests with code coverage
#-----------------------------------------------------------
$coverage_run $(which spack) test -x --verbose
$coverage_run $(which spack) unit-test -x --verbose


@@ -320,7 +320,7 @@ _spack() {
then
SPACK_COMPREPLY="-h --help -H --all-help --color -C --config-scope -d --debug --timestamp --pdb -e --env -D --env-dir -E --no-env --use-env-repo -k --insecure -l --enable-locks -L --disable-locks -m --mock -p --profile --sorted-profile --lines -v --verbose --stacktrace -V --version --print-shell-vars"
else
SPACK_COMPREPLY="activate add arch blame build-env buildcache cd checksum ci clean clone commands compiler compilers concretize config containerize create deactivate debug dependencies dependents deprecate dev-build docs edit env extensions external fetch find flake8 gc gpg graph help info install license list load location log-parse maintainers mirror module patch pkg providers pydoc python reindex remove rm repo resource restage setup spec stage test uninstall unload url verify versions view"
SPACK_COMPREPLY="activate add arch blame build-env buildcache cd checksum ci clean clone commands compiler compilers concretize config containerize create deactivate debug dependencies dependents deprecate dev-build docs edit env extensions external fetch find flake8 gc gpg graph help info install license list load location log-parse maintainers mirror module patch pkg providers pydoc python reindex remove rm repo resource restage setup spec stage test test-env uninstall unit-test unload url verify versions view"
fi
}
@@ -1002,7 +1002,7 @@ _spack_info() {
_spack_install() {
if $list_options
then
SPACK_COMPREPLY="-h --help --only -u --until -j --jobs --overwrite --fail-fast --keep-prefix --keep-stage --dont-restage --use-cache --no-cache --cache-only --no-check-signature --show-log-on-error --source -n --no-checksum -v --verbose --fake --only-concrete -f --file --clean --dirty --test --run-tests --log-format --log-file --help-cdash -y --yes-to-all --cdash-upload-url --cdash-build --cdash-site --cdash-track --cdash-buildstamp"
SPACK_COMPREPLY="-h --help --only -u --until -j --jobs --overwrite --fail-fast --keep-prefix --keep-stage --dont-restage --use-cache --no-cache --cache-only --no-check-signature --show-log-on-error --source -n --no-checksum -v --verbose --fake --only-concrete -f --file --clean --dirty --test --run-tests --log-format --log-file --help-cdash --cdash-upload-url --cdash-build --cdash-site --cdash-track --cdash-buildstamp -y --yes-to-all"
else
_all_packages
fi
@@ -1028,7 +1028,7 @@ _spack_license_verify() {
_spack_list() {
if $list_options
then
SPACK_COMPREPLY="-h --help -d --search-description --format --update -t --tags"
SPACK_COMPREPLY="-h --help -d --search-description --format --update -v --virtuals -t --tags"
else
_all_packages
fi
@@ -1467,9 +1467,72 @@ _spack_stage() {
_spack_test() {
if $list_options
then
SPACK_COMPREPLY="-h --help -H --pytest-help -l --list -L --list-long -N --list-names --extension -s -k --showlocals"
SPACK_COMPREPLY="-h --help"
else
_tests
SPACK_COMPREPLY="run list find status results remove"
fi
}
_spack_test_run() {
if $list_options
then
SPACK_COMPREPLY="-h --help --alias --fail-fast --fail-first --keep-stage --log-format --log-file --cdash-upload-url --cdash-build --cdash-site --cdash-track --cdash-buildstamp --help-cdash --smoke --capability --clean --dirty"
else
_installed_packages
fi
}
_spack_test_list() {
if $list_options
then
SPACK_COMPREPLY="-h --help"
else
_all_packages
fi
}
_spack_test_find() {
if $list_options
then
SPACK_COMPREPLY="-h --help"
else
_all_packages
fi
}
_spack_test_status() {
if $list_options
then
SPACK_COMPREPLY="-h --help"
else
SPACK_COMPREPLY=""
fi
}
_spack_test_results() {
if $list_options
then
SPACK_COMPREPLY="-h --help"
else
SPACK_COMPREPLY=""
fi
}
_spack_test_remove() {
if $list_options
then
SPACK_COMPREPLY="-h --help -y --yes-to-all"
else
SPACK_COMPREPLY=""
fi
}
_spack_test_env() {
if $list_options
then
SPACK_COMPREPLY="-h --help --clean --dirty --dump --pickle"
else
_all_packages
fi
}
@@ -1482,6 +1545,15 @@ _spack_uninstall() {
fi
}
_spack_unit_test() {
if $list_options
then
SPACK_COMPREPLY="-h --help -H --pytest-help -l --list -L --list-long -N --list-names --extension -s -k --showlocals"
else
_tests
fi
}
_spack_unload() {
if $list_options
then


@@ -0,0 +1,27 @@
<Test>
<StartTestTime>{{ test.starttime }}</StartTestTime>
<TestCommand>{{ install_command }}</TestCommand>
{% for warning in test.warnings %}
<Warning>
<TestLogLine>{{ warning.line_no }}</TestLogLine>
<Text>{{ warning.text }}</Text>
<SourceFile>{{ warning.source_file }}</SourceFile>
<SourceLineNumber>{{ warning.source_line_no }}</SourceLineNumber>
<PreContext>{{ warning.pre_context }}</PreContext>
<PostContext>{{ warning.post_context }}</PostContext>
</Warning>
{% endfor %}
{% for error in test.errors %}
<Error>
<TestLogLine>{{ error.line_no }}</TestLogLine>
<Text>{{ error.text }}</Text>
<SourceFile>{{ error.source_file }}</SourceFile>
<SourceLineNumber>{{ error.source_line_no }}</SourceLineNumber>
<PreContext>{{ error.pre_context }}</PreContext>
<PostContext>{{ error.post_context }}</PostContext>
</Error>
{% endfor %}
<EndTestTime>{{ test.endtime }}</EndTestTime>
<ElapsedMinutes>0</ElapsedMinutes>
</Test>
</Site>


@@ -24,3 +24,8 @@ def install(self, spec, prefix):
make('install')
print("AFTER INSTALL")
def test_true(self):
print("BEFORE TEST")
which('echo')('RUNNING TEST') # run an executable
print("AFTER TEST")


@@ -0,0 +1,21 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class TestError(Package):
"""This package has a test method that fails in a subprocess."""
homepage = "http://www.example.com/test-failure"
url = "http://www.test-failure.test/test-failure-1.0.tar.gz"
version('1.0', 'foobarbaz')
def install(self, spec, prefix):
mkdirp(prefix.bin)
def test(self):
which('false')()


@@ -0,0 +1,21 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class TestFail(Package):
"""This package has a test method that fails a test assertion."""
homepage = "http://www.example.com/test-failure"
url = "http://www.test-failure.test/test-failure-1.0.tar.gz"
version('1.0', 'foobarbaz')
def install(self, spec, prefix):
mkdirp(prefix.bin)
def test(self):
assert False


@@ -178,7 +178,7 @@ def install(self, spec, prefix):
@run_after('install')
@on_package_attributes(run_tests=True)
def test(self):
def install_test(self):
# https://github.com/Homebrew/homebrew-core/blob/master/Formula/bazel.rb
# Bazel does not work properly on NFS, switch to /tmp


@@ -40,3 +40,19 @@ def configure_args(self):
# depends on Berkeley DB, creating a circular dependency
'--with-repmgr-ssl=no',
]
def test_version_arg(self):
"""Test executables run and respond to -V argument"""
cmds = [
'db_checkpoint', 'db_deadlock', 'db_dump', 'db_load',
'db_printlog', 'db_stat', 'db_upgrade', 'db_verify'
]
for cmd in cmds:
exe = which(cmd)
if not exe:
# not guaranteeing all executables for all versions
continue
assert self.prefix in exe.path
output = exe('-V')
assert self.spec.version.string in output


@@ -128,3 +128,14 @@ def flag_handler(self, name, flags):
if self.spec.satisfies('@:2.34 %gcc@10:'):
flags.append('-fcommon')
return (flags, None, None)
def test_check_versions(self):
"""Check that executables run and respond to "--version" argument."""
cmds = ['ar', 'c++filt', 'coffdump', 'dlltool', 'elfedit', 'gprof',
'ld', 'nm', 'objdump', 'ranlib', 'readelf', 'size', 'strings']
for cmd in cmds:
exe = which(cmd, required=True)
assert self.prefix in exe.path
output = exe('--version')
assert str(self.spec.version) in output
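
The version-check pattern used by the berkeley-db and binutils tests above can be sketched standalone, substituting the standard library's `shutil.which` for Spack's `which` helper and plain `subprocess` for its `Executable` wrapper:

```python
# Find a binary on PATH, run it with --version, and check the expected
# version string appears; a missing binary is skipped (returns None),
# mirroring the skip_missing / "not guaranteeing all executables" idea.
import shutil
import subprocess

def check_version(cmd, expected):
    exe = shutil.which(cmd)
    if exe is None:
        return None  # binary not present; skip rather than fail
    out = subprocess.run([exe, '--version'],
                         stdout=subprocess.PIPE, text=True).stdout
    return expected in out

# A deliberately nonexistent binary is skipped:
print(check_version('no-such-binary-xyz', '1.0'))
```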


@@ -0,0 +1,27 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
class C(Package):
"""Virtual package for C compilers."""
homepage = 'http://open-std.org/JTC1/SC22/WG14/www/standards'
virtual = True
def test(self):
test_source = self.test_suite.current_test_data_dir
for test in os.listdir(test_source):
filepath = os.path.join(test_source, test)
exe_name = '%s.exe' % test
cc_exe = os.environ['CC']
cc_opts = ['-o', exe_name, filepath]
compiled = self.run_test(cc_exe, options=cc_opts, installed=True)
if compiled:
expected = ['Hello world', 'YES!']
self.run_test(exe_name, expected=expected)


@@ -0,0 +1,7 @@
#include <stdio.h>
int main()
{
printf ("Hello world from C!\n");
printf ("YES!");
return 0;
}


@@ -146,7 +146,7 @@ def build_args(self, spec, prefix):
return args
def test(self):
def build_test(self):
if '+python' in self.spec:
# Tests will always fail if Python dependencies aren't built
# In addition, 3 of the tests fail when run in parallel


@@ -2,6 +2,7 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import re
@@ -241,7 +242,7 @@ def build(self, spec, prefix):
@run_after('build')
@on_package_attributes(run_tests=True)
def test(self):
def build_test(self):
# Some tests fail, takes forever
make('test')
@@ -253,3 +254,16 @@ def install(self, spec, prefix):
filter_file('mpcc_r)', 'mpcc_r mpifcc)', f, string=True)
filter_file('mpc++_r)', 'mpc++_r mpiFCC)', f, string=True)
filter_file('mpifc)', 'mpifc mpifrt)', f, string=True)
def _test_check_versions(self):
"""Perform version checks on installed package binaries."""
spec_vers_str = 'version {0}'.format(self.spec.version)
for exe in ['ccmake', 'cmake', 'cpack', 'ctest']:
reason = 'test version of {0} is {1}'.format(exe, spec_vers_str)
self.run_test(exe, ['--version'], [spec_vers_str],
installed=True, purpose=reason, skip_missing=True)
def test(self):
"""Perform smoke tests on the installed package."""
self._test_check_versions()


@@ -217,7 +217,7 @@ def build(self, spec, prefix):
@run_after('build')
@on_package_attributes(run_tests=True)
def test(self):
def build_test(self):
with working_dir('spack-build'):
print("Running Conduit Unit Tests...")
make("test")


@@ -0,0 +1,38 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
import spack.compilers
import spack.spec
class Cxx(Package):
"""Virtual package for the C++ language."""
homepage = 'https://isocpp.org/std/the-standard'
virtual = True
def test(self):
test_source = self.test_suite.current_test_data_dir
for test in os.listdir(test_source):
filepath = os.path.join(test_source, test)
exe_name = '%s.exe' % test
cxx_exe = os.environ['CXX']
# standard options
# Hack to get compiler attributes
# TODO: remove this when compilers are dependencies
c_name = 'clang' if self.spec.satisfies('llvm+clang') else self.name
c_spec = spack.spec.CompilerSpec(c_name, self.spec.version)
c_cls = spack.compilers.class_for_compiler_name(c_name)
compiler = c_cls(c_spec, None, None, ['fakecc', 'fakecxx'])
cxx_opts = [compiler.cxx11_flag] if 'c++11' in test else []
cxx_opts += ['-o', exe_name, filepath]
compiled = self.run_test(cxx_exe, options=cxx_opts, installed=True)
if compiled:
expected = ['Hello world', 'YES!']
self.run_test(exe_name, expected=expected)


@@ -0,0 +1,9 @@
#include <stdio.h>
int main()
{
printf ("Hello world from C++\n");
printf ("YES!");
return 0;
}


@@ -0,0 +1,9 @@
#include <iostream>
using namespace std;
int main()
{
cout << "Hello world from C++!" << endl;
cout << "YES!" << endl;
return (0);
}


@@ -0,0 +1,9 @@
#include <iostream>
using namespace std;
int main()
{
cout << "Hello world from C++!" << endl;
cout << "YES!" << endl;
return (0);
}


@@ -0,0 +1,17 @@
#include <iostream>
#include <regex>
using namespace std;
int main()
{
auto func = [] () { cout << "Hello world from C++11" << endl; };
func(); // now call the function
std::regex r("st|mt|tr");
std::cout << "std::regex r(\"st|mt|tr\")" << " match tr? ";
if (std::regex_match("tr", r) == 0)
std::cout << "NO!\n ==> Using pre g++ 4.9.2 libstdc++ which doesn't implement regex properly" << std::endl;
else
std::cout << "YES!\n ==> Correct libstdc++11 implementation of regex (4.9.2 or later)" << std::endl;
}


@@ -80,3 +80,18 @@ def configure_args(self):
args.append('--without-gnutls')
return args
def _test_check_versions(self):
"""Perform version checks on installed package binaries."""
checks = ['ctags', 'ebrowse', 'emacs', 'emacsclient', 'etags']
for exe in checks:
expected = str(self.spec.version)
reason = 'test version of {0} is {1}'.format(exe, expected)
self.run_test(exe, ['--version'], expected, installed=True,
purpose=reason, skip_missing=True)
def test(self):
"""Perform smoke tests on the installed package."""
# Simple version check tests on known binaries
self._test_check_versions()


@@ -0,0 +1,28 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import os
class Fortran(Package):
"""Virtual package for the Fortran language."""
homepage = 'https://wg5-fortran.org/'
virtual = True
def test(self):
test_source = self.test_suite.current_test_data_dir
for test in os.listdir(test_source):
filepath = os.path.join(test_source, test)
exe_name = '%s.exe' % test
fc_exe = os.environ['FC']
fc_opts = ['-o', exe_name, filepath]
compiled = self.run_test(fc_exe, options=fc_opts, installed=True)
if compiled:
expected = ['Hello world', 'YES!']
self.run_test(exe_name, expected=expected)


@@ -0,0 +1,6 @@
program line
write (*,*) "Hello world from FORTRAN"
write (*,*) "YES!"
end


@@ -0,0 +1,6 @@
program line
write (*,*) "Hello world from FORTRAN"
write (*,*) "YES!"
end program line


@@ -4,6 +4,7 @@
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
import sys
import os
class Hdf(AutotoolsPackage):
@@ -151,3 +152,67 @@ def configure_args(self):
def check(self):
with working_dir(self.build_directory):
make('check', parallel=False)
extra_install_tests = 'hdf/util/testfiles'
@run_after('install')
def setup_build_tests(self):
"""Copy the build test files after the package is installed to an
install test subdirectory for use during `spack test run`."""
self.cache_extra_test_sources(self.extra_install_tests)
def _test_check_versions(self):
"""Perform version checks on selected installed package binaries."""
spec_vers_str = 'Version {0}'.format(self.spec.version.up_to(2))
exes = ['hdfimport', 'hrepack', 'ncdump', 'ncgen']
for exe in exes:
reason = 'test: ensuring version of {0} is {1}' \
.format(exe, spec_vers_str)
self.run_test(exe, ['-V'], spec_vers_str, installed=True,
purpose=reason, skip_missing=True)
def _test_gif_converters(self):
"""This test performs an image conversion sequence and diff."""
work_dir = '.'
storm_fn = os.path.join(self.install_test_root,
self.extra_install_tests, 'storm110.hdf')
gif_fn = 'storm110.gif'
new_hdf_fn = 'storm110gif.hdf'
# Convert a test HDF file to a gif
self.run_test('hdf2gif', [storm_fn, gif_fn], '', installed=True,
purpose="test: hdf-to-gif", work_dir=work_dir)
# Convert the gif to an HDF file
self.run_test('gif2hdf', [gif_fn, new_hdf_fn], '', installed=True,
purpose="test: gif-to-hdf", work_dir=work_dir)
# Compare the original and new HDF files
self.run_test('hdiff', [new_hdf_fn, storm_fn], '', installed=True,
purpose="test: compare orig to new hdf",
work_dir=work_dir)
def _test_list(self):
"""This test compares low-level HDF file information to expected."""
storm_fn = os.path.join(self.install_test_root,
self.extra_install_tests, 'storm110.hdf')
test_data_dir = self.test_suite.current_test_data_dir
work_dir = '.'
reason = 'test: checking hdfls output'
details_file = os.path.join(test_data_dir, 'storm110.out')
expected = get_escaped_text_output(details_file)
self.run_test('hdfls', [storm_fn], expected, installed=True,
purpose=reason, skip_missing=True, work_dir=work_dir)
def test(self):
"""Perform smoke tests on the installed package."""
# Simple version check tests on subset of known binaries that respond
self._test_check_versions()
# Run gif converter sequence test
self._test_gif_converters()
# Run hdfls output
self._test_list()


@@ -0,0 +1,17 @@
File library version: Major= 0, Minor=0, Release=0
String=
Number type : (tag 106)
Ref nos: 110
Machine type : (tag 107)
Ref nos: 4369
Image Dimensions-8 : (tag 200)
Ref nos: 110
Raster Image-8 : (tag 202)
Ref nos: 110
Image Dimensions : (tag 300)
Ref nos: 110
Raster Image Data : (tag 302)
Ref nos: 110
Raster Image Group : (tag 306)
Ref nos: 110


@@ -6,8 +6,6 @@
import shutil
import sys

from spack import *


class Hdf5(AutotoolsPackage):
    """HDF5 is a data model, library, and file format for storing and managing
@@ -375,3 +373,54 @@ def check_install(self):
            print('-' * 80)
            raise RuntimeError("HDF5 install check failed")
        shutil.rmtree(checkdir)

    def _test_check_versions(self):
        """Perform version checks on selected installed package binaries."""
        spec_vers_str = 'Version {0}'.format(self.spec.version)

        exes = [
            'h5copy', 'h5diff', 'h5dump', 'h5format_convert', 'h5ls',
            'h5mkgrp', 'h5repack', 'h5stat', 'h5unjam',
        ]
        use_short_opt = ['h52gif', 'h5repart', 'h5unjam']
        for exe in exes:
            reason = 'test: ensuring version of {0} is {1}' \
                .format(exe, spec_vers_str)
            option = '-V' if exe in use_short_opt else '--version'
            self.run_test(exe, option, spec_vers_str, installed=True,
                          purpose=reason, skip_missing=True)

    def _test_example(self):
        """This test performs copy, dump, and diff on an example hdf5 file."""
        test_data_dir = self.test_suite.current_test_data_dir
        filename = 'spack.h5'
        h5_file = test_data_dir.join(filename)

        reason = 'test: ensuring h5dump produces expected output'
        expected = get_escaped_text_output(test_data_dir.join('dump.out'))
        self.run_test('h5dump', filename, expected, installed=True,
                      purpose=reason, skip_missing=True,
                      work_dir=test_data_dir)

        reason = 'test: ensuring h5copy runs'
        options = ['-i', h5_file, '-s', 'Spack', '-o', 'test.h5', '-d',
                   'Spack']
        self.run_test('h5copy', options, [], installed=True,
                      purpose=reason, skip_missing=True, work_dir='.')

        reason = ('test: ensuring h5diff shows no differences between orig and'
                  ' copy')
        self.run_test('h5diff', [h5_file, 'test.h5'], [], installed=True,
                      purpose=reason, skip_missing=True, work_dir='.')

    def test(self):
        """Perform smoke tests on the installed package."""
        # Simple version check tests on known binaries
        self._test_check_versions()

        # Run sequence of commands on an hdf5 file
        self._test_example()

        # Run existing install check
        self.check_install()


@@ -0,0 +1,45 @@
HDF5 "spack.h5" {
GROUP "/" {
   GROUP "Spack" {
      GROUP "Software" {
         ATTRIBUTE "Distribution" {
            DATATYPE H5T_STRING {
               STRSIZE H5T_VARIABLE;
               STRPAD H5T_STR_NULLTERM;
               CSET H5T_CSET_UTF8;
               CTYPE H5T_C_S1;
            }
            DATASPACE SCALAR
            DATA {
            (0): "Open Source"
            }
         }
         DATASET "data" {
            DATATYPE H5T_IEEE_F64LE
            DATASPACE SIMPLE { ( 7, 11 ) / ( 7, 11 ) }
            DATA {
            (0,0): 0.371141, 0.508482, 0.585975, 0.0944911, 0.684849,
            (0,5): 0.580396, 0.720271, 0.693561, 0.340432, 0.217145,
            (0,10): 0.636083,
            (1,0): 0.686996, 0.773501, 0.656767, 0.617543, 0.226132,
            (1,5): 0.768632, 0.0548711, 0.54572, 0.355544, 0.591548,
            (1,10): 0.233007,
            (2,0): 0.230032, 0.192087, 0.293845, 0.0369338, 0.038727,
            (2,5): 0.0977931, 0.966522, 0.0821391, 0.857921, 0.495703,
            (2,10): 0.746006,
            (3,0): 0.598494, 0.990266, 0.993009, 0.187481, 0.746391,
            (3,5): 0.140095, 0.122661, 0.929242, 0.542415, 0.802758,
            (3,10): 0.757941,
            (4,0): 0.372124, 0.411982, 0.270479, 0.950033, 0.329948,
            (4,5): 0.936704, 0.105097, 0.742285, 0.556565, 0.18988, 0.72797,
            (5,0): 0.801669, 0.271807, 0.910649, 0.186251, 0.868865,
            (5,5): 0.191484, 0.788371, 0.920173, 0.582249, 0.682022,
            (5,10): 0.146883,
            (6,0): 0.826824, 0.0886705, 0.402606, 0.0532444, 0.72509,
            (6,5): 0.964683, 0.330362, 0.833284, 0.630456, 0.411489, 0.247806
            }
         }
      }
   }
}
}

Binary file not shown.


@@ -21,7 +21,7 @@ class Jq(AutotoolsPackage):
     @run_after('install')
     @on_package_attributes(run_tests=True)
-    def installtest(self):
+    def install_test(self):
         jq = self.spec['jq'].command
         f = os.path.join(os.path.dirname(__file__), 'input.json')


@@ -27,7 +27,7 @@ def cmake_args(self):
     @run_after('install')
     @on_package_attributes(run_tests=True)
-    def test(self):
+    def test_install(self):
         # The help message exits with an exit code of 1
         kcov = Executable(self.prefix.bin.kcov)
         kcov('-h', ignore_errors=1)


@@ -3,8 +3,6 @@
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

from spack import *


class Libsigsegv(AutotoolsPackage, GNUMirrorPackage):
    """GNU libsigsegv is a library for handling page faults in user mode."""
@@ -18,5 +16,58 @@ class Libsigsegv(AutotoolsPackage, GNUMirrorPackage):
    patch('patch.new_config_guess', when='@2.10')

    test_requires_compiler = True

    def configure_args(self):
        return ['--enable-shared']

    extra_install_tests = 'tests/.libs'

    @run_after('install')
    def setup_build_tests(self):
        """Copy the build test files after the package is installed to an
        install test subdirectory for use during `spack test run`."""
        self.cache_extra_test_sources(self.extra_install_tests)

    def test_link_and_run(self):
        """Check ability to link and run with libsigsegv."""
        data_dir = self.test_suite.current_test_data_dir
        prog = 'smoke_test'
        src = data_dir.join('{0}.c'.format(prog))
        compiler_options = [
            '-I{0}'.format(self.prefix.include),
            src,
            '-o',
            prog,
            '-L{0}'.format(self.prefix.lib),
            '-lsigsegv',
            '{0}{1}'.format(self.compiler.cc_rpath_arg, self.prefix.lib)]
        which('cc', required=True)(*compiler_options)

        # Now run the program and confirm the output matches expectations
        with open(data_dir.join('smoke_test.out'), 'r') as f:
            expected = f.read()

        output = which(prog)(output=str, error=str)
        assert expected in output

    def test_libsigsegv_unit_tests(self):
        """Run selected sigsegv tests from package unit tests"""
        passed = 'Test passed'
        checks = {
            'sigsegv1': [passed],
            'sigsegv2': [passed],
            'sigsegv3': ['caught', passed],
            'stackoverflow1': ['recursion', 'Stack overflow', passed],
            'stackoverflow2': ['recursion', 'overflow', 'violation', passed],
        }
        for cmd, expected in checks.items():
            exe = which(cmd)
            if not exe:
                # It could have been installed before we knew to capture this
                # file from the build system
                continue

            output = exe(output=str, error=str)
            for e in expected:
                assert e in output
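
The unit-test loop above is a simple substring check: run each binary, then require every expected fragment to appear in its output. A minimal standalone sketch of that pattern, using hypothetical canned outputs in place of the real sigsegv test binaries:

```python
# Sketch of the substring-check pattern used by test_libsigsegv_unit_tests.
# The canned outputs below are assumptions standing in for real binaries.
passed = 'Test passed'
checks = {
    'sigsegv1': [passed],
    'sigsegv3': ['caught', passed],
}

# Stand-ins for `which(cmd)(output=str, error=str)`.
canned_output = {
    'sigsegv1': 'Test passed.\n',
    'sigsegv3': 'Signal caught by handler.\nTest passed.\n',
}

def check(cmd, expected):
    """Return True when every expected fragment appears in the output."""
    output = canned_output[cmd]
    return all(e in output for e in expected)

results = {cmd: check(cmd, expected) for cmd, expected in checks.items()}
print(results)
```

Because the check is substring containment rather than exact comparison, it tolerates extra diagnostic lines the test binaries may emit.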


@@ -0,0 +1,70 @@
/* Simple "Hello World" test set up to handle a single page fault
 *
 * Inspired by libsigsegv's test cases with argument names for handlers
 * taken from the header files.
 */
#include "sigsegv.h"
#include <stdio.h>
#include <stdlib.h>  /* for exit */
#include <stddef.h>  /* for NULL on SunOS4 (per libsigsegv examples) */
#include <setjmp.h>  /* for controlling handler-related flow */

/* Calling environment */
jmp_buf calling_env;

char *message = "Hello, World!";

/* Track the number of times the handler is called */
volatile int times_called = 0;

/* Continuation function, which relies on the latest libsigsegv API */
static void
resume(void *cont_arg1, void *cont_arg2, void *cont_arg3)
{
    /* Go to calling environment and restore state. */
    longjmp(calling_env, times_called);
}

/* sigsegv handler */
int
handle_sigsegv(void *fault_address, int serious)
{
    times_called++;

    /* Generate handler output for the test. */
    printf("Caught sigsegv #%d\n", times_called);

    return sigsegv_leave_handler(resume, NULL, NULL, NULL);
}

/* "Buggy" function used to demonstrate non-local goto */
void printit(char *m)
{
    if (times_called < 1) {
        /* Force SIGSEGV only on the first call. */
        volatile int *fail_ptr = 0;
        int failure = *fail_ptr;
        printf("%s\n", m);
    } else {
        /* Print it correctly. */
        printf("%s\n", m);
    }
}

int
main(void)
{
    /* Install the global SIGSEGV handler */
    sigsegv_install_handler(&handle_sigsegv);

    char *msg = "Hello World!";

    int calls = setjmp(calling_env);  /* Resume here after detecting sigsegv */

    /* Call the function that will trigger the page fault. */
    printit(msg);

    return 0;
}


@@ -0,0 +1,2 @@
Caught sigsegv #1
Hello World!


@@ -2,6 +2,8 @@
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

import llnl.util.filesystem as fs
import llnl.util.tty as tty

from spack import *
@@ -68,3 +70,34 @@ def import_module_test(self):
        if '+python' in self.spec:
            with working_dir('spack-test', create=True):
                python('-c', 'import libxml2')

    def test(self):
        """Perform smoke tests on the installed package"""
        # Start with what we already have post-install
        tty.msg('test: Performing simple import test')
        self.import_module_test()

        data_dir = self.test_suite.current_test_data_dir

        # Now run defined tests based on expected executables
        dtd_path = data_dir.join('info.dtd')
        test_filename = 'test.xml'
        exec_checks = {
            'xml2-config': [
                ('--version', [str(self.spec.version)], 0)],
            'xmllint': [
                (['--auto', '-o', test_filename], [], 0),
                (['--postvalid', test_filename],
                 ['validity error', 'no DTD found', 'does not validate'], 3),
                (['--dtdvalid', dtd_path, test_filename],
                 ['validity error', 'does not follow the DTD'], 3),
                (['--dtdvalid', dtd_path, data_dir.join('info.xml')], [], 0)],
            'xmlcatalog': [
                ('--create', ['<catalog xmlns', 'catalog"/>'], 0)],
        }
        for exe in exec_checks:
            for options, expected, status in exec_checks[exe]:
                self.run_test(exe, options, expected, status)

        # Perform some cleanup
        fs.force_remove(test_filename)


@@ -0,0 +1,2 @@
<!ELEMENT info (data)>
<!ELEMENT data (#PCDATA)>


@@ -0,0 +1,4 @@
<?xml version="1.0"?>
<info>
<data>abc</data>
</info>


@@ -76,3 +76,16 @@ def configure_args(self):
            args.append('ac_cv_type_struct_sched_param=yes')
        return args

    def test(self):
        spec_vers = str(self.spec.version)
        reason = 'test: ensuring m4 version is {0}'.format(spec_vers)
        self.run_test('m4', '--version', spec_vers, installed=True,
                      purpose=reason, skip_missing=False)

        reason = 'test: ensuring m4 example succeeds'
        test_data_dir = self.test_suite.current_test_data_dir
        hello_file = test_data_dir.join('hello.m4')
        expected = get_escaped_text_output(test_data_dir.join('hello.out'))
        self.run_test('m4', hello_file, expected, installed=True,
                      purpose=reason, skip_missing=False)


@@ -0,0 +1,4 @@
define(NAME, World)
dnl This line should not show up
// macro is ifdef(`NAME', , not)defined
Hello, NAME!


@@ -0,0 +1,3 @@
// macro is defined
Hello, World!
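
The m4 test above compares the tool's output against `hello.out` after passing the file through `get_escaped_text_output`, i.e. each expected line is treated as a regex pattern with its literal characters escaped. A minimal sketch of that matching step, with the expected lines inlined from `hello.out` and an assumed stand-in for what `m4 hello.m4` prints:

```python
import re

# Expected lines from hello.out above; re.escape protects literal
# characters such as '!' and '/' so they match verbatim.
expected_lines = ['// macro is defined', 'Hello, World!']

# Assumed stand-in for the output of `m4 hello.m4`: define() expands to
# nothing, dnl deletes its line, ifdef takes the "defined" branch, and
# NAME expands to World.
m4_output = '\n// macro is defined\nHello, World!\n'

matched = all(re.search(re.escape(line), m4_output)
              for line in expected_lines)
print(matched)
```

Escaping before matching is what lets plain text files serve as expected output even when they contain regex metacharacters.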


@@ -0,0 +1,31 @@
# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

import os

from spack import *


class Mpi(Package):
    """Virtual package for the Message Passing Interface."""

    homepage = 'https://www.mpi-forum.org/'
    virtual = True

    def test(self):
        for lang in ('c', 'f'):
            filename = self.test_suite.current_test_data_dir.join(
                'mpi_hello.' + lang)

            compiler_var = 'MPICC' if lang == 'c' else 'MPIF90'
            compiler = os.environ[compiler_var]

            exe_name = 'mpi_hello_%s' % lang
            mpirun = join_path(self.prefix.bin, 'mpirun')

            compiled = self.run_test(compiler,
                                     options=['-o', exe_name, filename])
            if compiled:
                self.run_test(mpirun,
                              options=['-np', '1', exe_name],
                              expected=[r'Hello world! From rank \s*0 of \s*1'])
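
The `expected` pattern above is a regex rather than a literal string so that one check covers both language runtimes: C's `printf` emits single spaces, while Fortran's `print*` pads integers with extra whitespace. A quick sketch of the match against representative outputs (the sample output strings are assumptions, not captured runs):

```python
import re

# The pattern used by the mpirun check above.
pattern = r'Hello world! From rank \s*0 of \s*1'

# Representative single-rank outputs (assumed): printf uses single spaces,
# Fortran's list-directed print pads the integers.
c_output = 'Hello world! From rank 0 of 1\n'
fortran_output = ' Hello world! From rank           0 of             1\n'

c_ok = bool(re.search(pattern, c_output))
fortran_ok = bool(re.search(pattern, fortran_output))
print(c_ok, fortran_ok)
```

The `\s*` after each literal space absorbs any extra padding, so the same expected list works for every MPI provider that implements this virtual package.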


@@ -0,0 +1,16 @@
#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank;
    int num_ranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &num_ranks);

    printf("Hello world! From rank %d of %d\n", rank, num_ranks);

    MPI_Finalize();
    return 0;
}


@@ -0,0 +1,11 @@
c Fortran example
      program hello
      include 'mpif.h'
      integer rank, num_ranks, err_flag

      call MPI_INIT(err_flag)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, num_ranks, err_flag)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, err_flag)

      print*, 'Hello world! From rank', rank, 'of ', num_ranks

      call MPI_FINALIZE(err_flag)
      end


@@ -51,7 +51,7 @@ def configure(self, spec, prefix):
     @run_after('configure')
     @on_package_attributes(run_tests=True)
-    def test(self):
+    def configure_test(self):
         ninja = Executable('./ninja')
         ninja('-j{0}'.format(make_jobs), 'ninja_test')
         ninja_test = Executable('./ninja_test')


@@ -34,7 +34,7 @@ def configure(self, spec, prefix):
     @run_after('configure')
     @on_package_attributes(run_tests=True)
-    def test(self):
+    def configure_test(self):
         ninja = Executable('./ninja')
         ninja('-j{0}'.format(make_jobs), 'ninja_test')
         ninja_test = Executable('./ninja_test')

Some files were not shown because too many files have changed in this diff.